Hi Pavel
In general, given a highly anisotropic data set:
1) maps calculated using all (unmodified) data by phenix.refine, phenix.maps and similar tools are better than maps calculated using anisotropy-truncated data. So, yes, for the purpose of map calculation there is no need to do anything: the Phenix map calculation tools deal with anisotropy very well.
If there are a lot of reflections without signal, they are essentially missing, so by including them you are effectively filling in those reflections with nothing but DFc. If the anisotropy is very strong (i.e. many such effectively missing reflections), does that not introduce very significant model bias? The maps would look cleaner, though.
That's a different story. If you do anisotropy truncation, then in the case of severe anisotropy there will be lots of removed weak Fobs, which will subsequently be filled in with DFc, and such maps are more likely to be model-biased. However, phenix.refine always creates two 2mFo-DFc maps, with and without filling of missing Fobs, so you can quickly compare them and get an idea.
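If you want a quick number for that comparison rather than just looking at the two maps, something along these lines would work (a sketch using gemmi; the MTZ file name and the column labels are my assumptions -- check them against the map-coefficients file phenix.refine actually wrote):

    import numpy as np
    import gemmi

    # Read the map-coefficients MTZ written by phenix.refine and turn the
    # filled and no-fill 2mFo-DFc coefficients into real-space maps.
    mtz = gemmi.read_mtz_file("refine_001_map_coefficients.mtz")  # assumed name
    filled  = mtz.transform_f_phi_to_map("2FOFCWT", "PH2FOFCWT",
                                         sample_rate=3)
    no_fill = mtz.transform_f_phi_to_map("2FOFCWT_no_fill",
                                         "PH2FOFCWT_no_fill", sample_rate=3)

    # Overall real-space correlation between the two maps; a large drop
    # suggests the filled-in DFc terms contribute a lot to the filled map.
    a = np.asarray(filled.array,  dtype=float).ravel()
    b = np.asarray(no_fill.array, dtype=float).ravel()
    print("map CC (filled vs no-fill): %.3f" % np.corrcoef(a, b)[0, 1])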
No, the comparison I mean is:
no anisotropy cut-off --vs-- anisotropy cut-off,
WITHOUT filling in missing reflections in either case.
I'm wondering what happens when you do NOT do anisotropy truncation: that leaves large volumes of reciprocal space where Fobs is approximately zero, and there the map coefficients (2mFo-DFc) reduce to a model-only term (essentially -DFc) -- i.e. much the same situation as filling in missing Fobs for very incomplete data.
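To spell out the limiting behaviour with toy numbers (nothing here comes from real data):

    # For a kept reflection with Fobs ~ 0 the 2mFo-DFc coefficient tends
    # to -D*Fc; for a removed reflection that gets filled in it is +D*Fc.
    # Either way the map sees only model-derived information there.
    m, D, f_calc = 0.2, 0.9, 50.0   # made-up weighting values
    f_obs_weak = 0.5                # "measured", but essentially no signal

    kept   = 2 * m * f_obs_weak - D * f_calc   # about -44.8, i.e. ~ -D*Fc
    filled = D * f_calc                        # +45.0, i.e. +D*Fc
    print(kept, filled)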
(Of course, it presumably matters how effectively D down-weights those reflections; but how is the calculation of D affected when a resolution bin is dominated by near-zero Fobs?)
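I don't know how the maximum-likelihood estimation actually behaves in that regime, but even a crude per-bin least-squares estimate (the sketch below is exactly that, and is not what Phenix does) shows the direction of the effect: a bin dominated by near-zero Fobs drags D towards zero, which at least down-weights the DFc term for those reflections:

    import numpy as np

    rng = np.random.default_rng(0)

    # Over-simplified per-bin estimate D = <Fo*Fc> / <Fc^2>; purely a toy.
    def d_estimate(f_obs, f_calc):
        return float(np.sum(f_obs * f_calc) / np.sum(f_calc * f_calc))

    f_calc = rng.uniform(20.0, 100.0, size=500)

    # Bin where Fobs tracks Fcalc reasonably well:
    f_obs_good = 0.9 * f_calc + rng.normal(0.0, 10.0, size=500)

    # Bin where ~80% of the Fobs are essentially zero (strong anisotropy):
    f_obs_aniso = np.where(rng.random(500) < 0.8,
                           rng.normal(0.0, 2.0, size=500),
                           0.9 * f_calc)

    print("D, well-measured bin :", round(d_estimate(f_obs_good,  f_calc), 2))
    print("D, near-zero-Fobs bin:", round(d_estimate(f_obs_aniso, f_calc), 2))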
phx.