Hi Pavel,

> We discussed this with Randy the other day. A couple of copy-pastes from that discussion:
> In general, given a highly anisotropic data set:
> 1) Maps calculated using all (unmodified) data by phenix.refine, phenix.maps and similar tools are better than maps calculated using anisotropy-truncated data. So, yes, for the purpose of map calculation there is no need to do anything: Phenix map calculation tools deal with anisotropy very well.

How did you define "better"? If there are a lot of reflections without signal, that makes them essentially missing, so by including them you are effectively filling them in with DFc alone. If the anisotropy is very strong (i.e. many such reflections), does that not introduce very significant model bias? The maps would look cleaner, though. (I haven't tested this myself, by the way; I'm genuinely wondering.)
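To make concrete what I mean by the fill-in, here is a toy numpy sketch (all numbers made up, and this is not the actual Phenix code path): in the usual 2mFo-DFc coefficients, a reflection with no usable Fobs ends up carrying only DFc, i.e. pure model information.

import numpy as np

# Per-reflection quantities: observed amplitude (NaN = no usable signal),
# calculated amplitude, figure of merit m, and a Sigma-A style scale D.
f_obs  = np.array([120.0, 85.0, np.nan, np.nan])   # last two: no signal
f_calc = np.array([110.0, 90.0, 40.0,   55.0])
m      = np.array([0.9,   0.8,  0.0,    0.0])      # fom ~ 0 where data are absent
D      = 0.85                                       # made-up overall scale

# 2mFo - DFc amplitudes; where Fobs is missing, the common practice is to
# "fill" the coefficient with DFc, so it carries only model information.
coeffs = np.where(np.isnan(f_obs), D * f_calc, 2.0 * m * f_obs - D * f_calc)
print(coeffs)   # the last two entries are pure D*Fc -> potential model bias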
> 2) phenix.refine refinement may fail if one uses the original anisotropic data set. This is probably because the ML target does not use experimental sigmas (and the anisotropy correction by the UCLA server is nothing but Miller-index-dependent removal of data by a sigma criterion - yeah, that old, well-criticized practice of throwing away data that you worked hard to measure!). Maybe using sigmas in the ML calculation could solve the problem, but that has to be proved.

If there are large swathes of reciprocal space without signal, I don't see why they shouldn't be excluded. Tossing out individual reflections is of course silly, but what's wrong with trimming down the edges of the ellipsoid?
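For what it's worth, by "trimming the edges of the ellipsoid" I mean something like the toy numpy sketch below: keep only reflections that fall inside an ellipsoid in reciprocal space defined by three direction-dependent resolution limits, rather than cutting individual weak reflections on sigma. The limits, cell and indices are invented for illustration; a real server would fit the limits from the F/sigma fall-off along each direction.

import numpy as np

# Direction-dependent resolution limits (A) along a*, b*, c* and an
# orthorhombic toy cell (A); both are made up for the example.
d_limits = np.array([2.0, 2.4, 3.2])      # weakest direction cut at 3.2 A
cell     = np.array([50.0, 60.0, 70.0])

hkl = np.array([[10, 0,  0],
                [ 0, 0, 25],
                [ 8, 8,  8]])

# Reciprocal-space coordinates (1/A) for an orthogonal cell: h/a, k/b, l/c.
s = hkl / cell

# Keep a reflection if it lies inside the ellipsoid whose semi-axes are
# 1/d_limit along each direction: sum_i (s_i * d_i)^2 <= 1.
inside = np.sum((s * d_limits) ** 2, axis=1) <= 1.0
print(inside)   # [ True False  True ] with these made-up numbers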
Cheers
phx