[phenixbb] Geometry Restraints - Anisotropic truncation
Frank von Delft
frank.vondelft at sgc.ox.ac.uk
Thu May 3 00:15:40 PDT 2012
Is the explanation not simpler? The volumes of reciprocal space that
were left out did not in fact contain signal, and it is by removing
those noise non-reflections that the actual R is revealed. As James
Holton posted a while ago, R-factors calculated for noise give randomly
large values.

So it seems less misleading to refer to it as "anisotropy-TRUNCATED".
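[For readers unfamiliar with how such a truncation works: below is a minimal sketch of an ellipsoidal resolution cutoff, the idea behind the UCLA anisotropy server. It assumes an orthorhombic cell, and the cell edges and per-axis resolution limits are made-up illustrative values, not numbers from this thread or from the server's actual code.]

```python
def keep_reflection(hkl, cell_abc, limits_abc):
    """Return True if a reflection lies inside the ellipsoidal cutoff.

    hkl        -- Miller indices (h, k, l)
    cell_abc   -- orthorhombic cell edges (a, b, c) in Angstrom
    limits_abc -- resolution limits along a*, b*, c* in Angstrom

    A spherical cutoff uses one d_min for all directions; here each
    reciprocal axis gets its own limit, and a reflection is kept only
    if it falls inside the resulting ellipsoid.
    """
    total = 0.0
    for idx, edge, d_min in zip(hkl, cell_abc, limits_abc):
        s = idx / edge              # reciprocal-space component along this axis
        total += (s * d_min) ** 2   # fraction of the per-axis cutoff, squared
    return total <= 1.0

# Illustrative use: a reflection far out along c* is dropped when the
# c* limit (3.0 A) is weaker than the a*/b* limits (2.0 A).
refls = [(10, 0, 0), (0, 0, 30), (8, 8, 8)]
kept = [h for h in refls
        if keep_reflection(h, (50.0, 60.0, 70.0), (2.0, 2.0, 3.0))]
```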
On 03/05/2012 07:40, Pavel Afonine wrote:
> Hi Kendall,
> I just did this quick test: calculated R-factors using original and
> anisotropy-corrected Mike Sawaya's data (*)
>
> Original data:
> r_work : 0.3026
> r_free : 0.3591
> number of reflections: 26944
>
> Anisotropy-corrected data:
> r_work : 0.2640
> r_free : 0.3178
> number of reflections: 18176
> The difference in R-factors is not too surprising given how many
> reflections were removed (about 33%).
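[The R-factors being compared above follow the standard crystallographic definition, R = sum|Fobs - Fcalc| / sum(Fobs), computed over the working set (Rwork) or the held-out test set (Rfree). A minimal sketch with made-up amplitudes, purely to show the arithmetic:]

```python
def r_factor(f_obs, f_calc):
    """Standard crystallographic R = sum|Fobs - Fcalc| / sum(Fobs)."""
    num = sum(abs(fo - fc) for fo, fc in zip(f_obs, f_calc))
    den = sum(f_obs)
    return num / den

# Illustrative amplitudes (not from the data set discussed above):
f_obs  = [100.0, 50.0, 25.0]
f_calc = [ 90.0, 55.0, 20.0]
r = r_factor(f_obs, f_calc)   # (10 + 5 + 5) / 175 = 0.114...
```

Dropping weak, noise-dominated reflections shrinks the numerator much faster than the denominator, which is why R falls when such reflections are removed even though the model is unchanged.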
> (*) Note, the data available in PDB is anisotropy corrected. The
> original data set was kindly provided to me by the author.
> On 5/2/12 5:25 AM, Kendall Nettles wrote:
>> I didn't think the structure was publishable with an Rfree of 33%,
>> because I was expecting the reviewers to complain.
>> We have tested a number of data sets on the UCLA server and it
>> usually doesn't make much difference. I wouldn't expect truncation
>> alone to change Rfree by 5%, and it usually doesn't. The two times I
>> have seen dramatic impacts on the maps (and Rfree), the highly
>> anisotropic sets showed strong waves of difference density as well,
>> which was fixed by throwing out the noise. We have moved to using
>> loose data cutoffs for most structures, but I do think anisotropic
>> truncation can be helpful in rare cases.
>> On May 1, 2012, at 3:07 PM, "Dale
>> Tronrud"<det102 at uoxray.uoregon.edu> wrote:
>>> While philosophically I see no difference between a spherical
>>> cutoff and an elliptical one, a drop in the free R can't be the
>>> justification for the switch. A model cannot be made more
>>> "publishable" simply by throwing out data.
>>> We have a whole bunch of empirical guides for judging the quality
>>> of this and that in our field. We determine the resolution limit of
>>> a data set (and imposing a "limit" is itself an empirical choice)
>>> based on Rmerge, or Rmeas, or Rpim getting too big, or I/sigI
>>> getting too small, and there is no agreement on how "too big/small"
>>> is too "too big/small".
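[The merging statistics named above have standard definitions: Rmerge = sum|I - &lt;I&gt;| / sum(I); Rmeas weights each unique reflection's deviations by sqrt(n/(n-1)) to remove the multiplicity bias; Rpim weights by sqrt(1/(n-1)). A minimal sketch with made-up intensities, just to make the formulas concrete:]

```python
def merging_stats(groups):
    """Compute (Rmerge, Rmeas, Rpim) from repeated intensity measurements.

    groups -- list of lists; each inner list holds the n >= 2 observations
              of one unique reflection.
    """
    num_merge = num_meas = num_pim = den = 0.0
    for obs in groups:
        n = len(obs)
        mean_i = sum(obs) / n
        dev = sum(abs(i - mean_i) for i in obs)   # sum of deviations from <I>
        num_merge += dev                           # Rmerge: unweighted
        num_meas += (n / (n - 1)) ** 0.5 * dev     # Rmeas: multiplicity-corrected
        num_pim += (1 / (n - 1)) ** 0.5 * dev      # Rpim: precision-indicating
        den += sum(obs)
    return num_merge / den, num_meas / den, num_pim / den

# Two unique reflections with two observations each (illustrative values):
rmerge, rmeas, rpim = merging_stats([[10.0, 12.0], [5.0, 5.0]])
```

Note that for weak shells the numerators are dominated by noise while &lt;I&gt; approaches zero, which is exactly why these statistics blow up at the resolution edge and why "too big" has no agreed threshold.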
>>> We then have other empirical guides for judging the quality of the
>>> models we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most
>>> people seem to recognize that these criteria need to be applied
>>> differently at different resolutions. A lower-resolution model is
>>> allowed a higher Rfree, for example.
>>> Isn't it also true that a model refined against data with a cutoff
>>> of I/sigI of 1 would be expected to have a higher free R than a
>>> model refined against data with a cutoff of 2? Surely we cannot say
>>> that the decrease in free R that results from changing the cutoff
>>> criterion from 1 to 2 reflects an improved model. It is the same
>>> model, after all.
>>> Sometimes this shifting application of empirical criteria enhances
>>> the adoption of new technology. Certainly the TLS parametrization
>>> of motion has been widely accepted because it results in lower
>>> working and free Rs. I've seen it knock 3 to 5 percent off, and
>>> while that certainly means that the model fits the data better, I'm
>>> not sure that the quality of the hydrogen bond distances, van der
>>> Waals distances, or maps are any better. The latter details are
>>> what I really look for in a model.
>>> On the other hand, there has been good evidence through the years
>>> that there is useful information in the data beyond an I/sigI of 2
>>> or an Rmeas > 100%, but getting people to use these data has been a
>>> hard slog. The reason for this reluctance is that the R values of
>>> the resulting models are higher. Of course they are higher! That
>>> does not mean the models are of poorer quality, only that data with
>>> lower signal/noise have been used that were discarded in the models
>>> you used to develop your "gut feeling" for the meaning of R.
>>> When you change your criteria for selecting data you have to
>>> revisit your old notions about the acceptable values of empirical
>>> quality measures. You either have to normalize your measure, as
>>> Phil Jeffrey recommends, by ensuring that you calculate your R's
>>> with the same reflections, or by making objective measures of map
>>> quality.
>>> Dale Tronrud
>>> P.S. It is entirely possible that refining a model to a very optimistic
>>> resolution cutoff and calculating the map to a lower resolution
>>> might be
>>> better than throwing out the data altogether.
>>> On 5/1/2012 10:34 AM, Kendall Nettles wrote:
>>>> I have seen dramatic improvements in maps and behavior during
>>>> refinement following use of the UCLA anisotropy server in two
>>>> different cases. For one of them the Rfree went from 33% to 28%. I
>>>> don't think it would have been publishable otherwise.
>>>> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>>>>> On Mon, Apr 30, 2012 at 4:22 AM, Phil
>>>>> Evans<pre at mrc-lmb.cam.ac.uk> wrote:
>>>>>> Are anisotropic cutoffs desirable?
>>>>> is there a peer-reviewed publication - perhaps from Acta
>>>>> Crystallographica - which describes precisely why scaling or
>>>>> refinement programs are inadequate to ameliorate the problem of
>>>>> anisotropy, and argues why the method applied in Strong et al.
>>>>> (2006) satisfies this need?