Yes indeed quite so.
We agree that 1.7 Angstrom is low for this type of refinement.
But I concede there may be more to try in this case, as you summarised.
And as you remark in your recent talk slides (thank you for the web link, by the way), avoiding 'fitting into noise', or as I would put it 'false precision', is also important. (I was impressed by your very recent FEM maps paper in Acta D, by the way, and put out a tweet from my Twitter account @HelliwellJohn highlighting it.)
Greetings,
John
Emeritus Prof John R Helliwell DSc
On 1 Aug 2015, at 20:23, Pavel Afonine wrote:
Dear John,
And what would you use, first, to monitor success (an Rfree drop presumably, but how much of a drop is significant?) and, second, to guard against 'over-fitting', e.g. by restraining the aniso Bij?
PDB-REDO uses a Hamilton R-factor ratio test to judge whether aniso can sensibly be applied. That seems to me the ideal check (Hamilton, 1965).
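For concreteness, a minimal sketch of Hamilton's (1965) R-factor ratio test is given below, in Python and assuming scipy is available. The atom and reflection counts are purely illustrative, and this is a generic version of the test, not PDB-REDO's actual implementation (strictly, Hamilton's test is formulated for weighted R factors).

import math
from scipy.stats import f as f_dist

def hamilton_significance_point(b, n, m, alpha=0.05):
    """Critical R-ratio R_{b, n-m, alpha} of Hamilton's ratio test.

    b     -- number of extra parameters in the larger (aniso) model
    n     -- number of independent reflections
    m     -- number of parameters in the larger model
    alpha -- significance level
    """
    dof = n - m
    return math.sqrt(b / dof * f_dist.ppf(1.0 - alpha, b, dof) + 1.0)

# Illustrative example: 2000 atoms, 60000 reflections.
# The isotropic model refines x, y, z, B per atom; going anisotropic
# replaces B with six U_ij, i.e. five extra parameters per atom.
n_atoms, n_refl = 2000, 60000
m_aniso = 9 * n_atoms          # x, y, z + 6 U_ij per atom
b_extra = 5 * n_atoms          # parameters added by going anisotropic
critical = hamilton_significance_point(b_extra, n_refl, m_aniso)

r_iso, r_aniso = 0.195, 0.180  # hypothetical R values for the two models
ratio = r_iso / r_aniso
print(f"ratio = {ratio:.4f}  critical = {critical:.4f}")
# The extra anisotropic parameters are statistically supported only if
# the observed ratio exceeds the critical value.

So the question 'how much of a drop is significant?' becomes a comparison of the observed R-ratio against the critical value for the given number of extra parameters and degrees of freedom.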
Rfree and other single-number quality metrics are global figures; they will not tell you whether one or a few refined B-factors are bad, atoms are misfitted, or local geometry is distorted.
This is why, when it comes to validation, I like to think of it as being applied to the data, to the model, and to the model-to-data fit, looking at both local and global metrics for each: http://phenix-online.org/presentations/latest/pavel_validation.pdf
Specifically for B-factors: I think Ethan Merritt's tools are the most comprehensive, such as http://skuld.bmsc.washington.edu/parvati/ and others.
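As a small illustration of what a local, per-atom ADP check can catch (in the spirit of the PARVATI-style reports mentioned above, though not Ethan Merritt's code), the Python sketch below reads ANISOU records from a PDB file and flags atoms whose U tensor is non-positive-definite or extremely anisotropic; the file name and threshold are only placeholders.

import numpy as np

def anisou_records(path):
    # Yield (atom label, 3x3 U matrix in A^2) for each ANISOU record.
    with open(path) as fh:
        for line in fh:
            if not line.startswith("ANISOU"):
                continue
            label = line[12:27].strip()  # atom name, residue, chain, seq
            u11, u22, u33, u12, u13, u23 = (
                int(line[28 + 7 * i:35 + 7 * i]) * 1e-4 for i in range(6))
            yield label, np.array([[u11, u12, u13],
                                   [u12, u22, u23],
                                   [u13, u23, u33]])

def flag_suspect_adps(path, min_anisotropy=0.1):
    # Report per-atom problems that a global metric such as Rfree
    # would never reveal.
    for label, u in anisou_records(path):
        eig = np.linalg.eigvalsh(u)          # ascending eigenvalues
        if eig[0] <= 0:
            print(f"{label}: non-positive-definite U, eigenvalues {eig}")
        elif eig[0] / eig[2] < min_anisotropy:
            print(f"{label}: anisotropy {eig[0] / eig[2]:.2f}")

# flag_suspect_adps("refined.pdb")  # file name is a placeholder

Each atom is judged on its own, which is exactly the local view that a single global number cannot give.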
All the best, Pavel