[phenixbb] Geometry Restraints - Anisotropic truncation

Pavel Afonine pafonine at lbl.gov
Tue May 1 17:32:38 PDT 2012


Hi Frank,

>> we discussed this with Randy the other day. A couple of copy-pastes from
>> that discussion:
>>
>> In general, given a highly anisotropic data set:
>>
>> 1) maps calculated using all (unmodified) data by phenix.refine,
>> phenix.maps and similar tools are better than maps calculated using
>> anisotropy-truncated data. So, yes, for the purpose of map calculation
>> there is no need to do anything: Phenix map calculation tools deal with
>> anisotropy very well.
> How did you define "better"?

Oh, totally ad hoc: the map looks better, so one can build more model
into it. Continuous blobs get deconvoluted such that you can see main
and side chains, and so on. I looked quite a bit at such maps and
that's the impression I got.
No, I meant just comparing maps with and without anisotropy truncation,
without any filling of missing Fobs.

> If there are a lot of reflections without signal, that makes them 
> essentially missing, so by including them, you're effectively filling 
> in for those reflections with only DFc.  If anisotropy is very strong 
> (i.e. many missing reflections), does that not introduce very 
> significant model bias?  The maps would look cleaner, though.

That's a different story. If you do anisotropy truncation, then in the
case of severe anisotropy lots of weak Fobs will be removed and
subsequently filled in with DFc, and such maps stand a better chance of
being model-biased. However, phenix.refine always creates two 2mFo-DFc
maps, with and without filling of missing Fobs, so you can quickly
compare them and get an idea.
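To make the filling concrete, here is a minimal sketch (plain numpy,
not phenix.refine internals; the function and array names are my own)
of how 2mFo-DFc coefficients pick up DFc for missing reflections:

import numpy as np

def two_mfo_dfc_amplitudes(f_obs, m, d, f_calc_abs, fill_missing=True):
    # f_obs: observed amplitudes, np.nan where a reflection is missing
    # or was rejected; m, d: the usual ML weights; f_calc_abs: |Fc|.
    # The model phase is attached to these amplitudes afterwards.
    amp = 2.0 * m * f_obs - d * f_calc_abs
    missing = np.isnan(f_obs)
    if fill_missing:
        # Missing terms become pure DFc, i.e. entirely model-derived;
        # this is where the extra model bias can come from.
        amp[missing] = (d * f_calc_abs)[missing]
    else:
        amp[missing] = 0.0  # simply leave the missing reflections out
    return amp

Running this twice, with fill_missing=True and fill_missing=False,
corresponds to the two maps phenix.refine writes out.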

>> 2) phenix.refine refinement may fail if one uses the original
>> anisotropic data set. This is probably because the ML target does not
>> use experimental sigmas (and the anisotropy correction by the UCLA
>> server is nothing but Miller-index-dependent removal of data by a
>> sigma criterion - yeah, that old, well-criticized practice of throwing
>> away data that you worked hard to measure!). Maybe using sigmas in the
>> ML calculation could solve the problem, but that has to be proved.
> If there are large swathes of reciprocal space without signal - I
> don't see why that shouldn't be excluded?  Tossing out individual
> reflections is of course silly, but what's wrong with trimming down
> the edges of the ellipsoid?

Well, I guess the thing is that it's not entirely "no signal", but
rather a little weak signal buried in noise, which may be better than
nothing if treated properly. The last bit, "treated properly", is the
important one and probably should be addressed as part of methodology
improvements.
But maybe you are just right - maybe including that signal would not
change anything visibly, or maybe (more likely) it's case-dependent.
I'm not aware of anyone having done this kind of study systematically.
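As for using sigmas in the ML calculation: the generic recipe (just the
idea, not something implemented anywhere in Phenix as far as I can say
here) would be to inflate the variance of the likelihood with the
measurement error, e.g. for acentrics replacing beta with
beta + sigma(Fobs)^2, so that very noisy reflections get automatically
down-weighted instead of being thrown away.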

Pavel


