Re: [phenixbb] Geometry Restraints - Anisotropic truncation
Hi Pavel,

we discussed this with Randy the other day.. A couple of copy-pastes from that discussion: In general, given a highly anisotropic data set: 1) maps calculated using all (unmodified) data by phenix.refine, phenix.maps and similar tools are better than maps calculated using anisotropy-truncated data. So, yes, for the purpose of map calculation there is no need to do anything: Phenix map calculation tools deal with anisotropy very well.

How did you define "better"? If there are a lot of reflections without signal, that makes them essentially missing, so by including them, you're effectively filling in for those reflections with only DFc. If anisotropy is very strong (i.e. many missing reflections), does that not introduce very significant model bias? The maps would look cleaner, though. (I haven't tested myself, btw; I'm genuinely wondering.)

2) phenix.refine refinement may fail if one uses the original anisotropic data set. This is probably because the ML target does not use experimental sigmas (and the anisotropy correction by the UCLA server is nothing but Miller-index-dependent removal of data by a sigma criterion - yeah, that old, well-criticized practice of throwing away data you worked hard to measure!). Maybe using sigmas in the ML calculation could solve the problem, but that has to be proved.

If there are large swathes of reciprocal space without signal, I don't see why that shouldn't be excluded. Tossing out individual reflections is of course silly, but what's wrong with trimming down the edges of the ellipsoid?

Cheers
phx
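[Editor's note: to make the "trimming down the edges of the ellipsoid" idea concrete, here is a minimal NumPy sketch of an ellipsoidal, direction-dependent resolution cutoff. The cell dimensions and per-axis resolution limits are invented, and this illustrates only the geometry of such a cutoff, not the actual algorithm of the UCLA anisotropy server, which estimates the limits from the data (e.g. F/sigma along each reciprocal axis).]

```python
import numpy as np

# Hypothetical orthorhombic cell (Angstrom) and per-axis resolution limits;
# a real anisotropy correction would estimate these limits from the data.
a, b, c = 60.0, 75.0, 120.0
d_limit = np.array([2.0, 2.3, 3.2])   # resolution limit along a*, b*, c*

hkl = np.array([[10,  0,  0],
                [ 0, 20,  0],
                [ 0,  0, 50],
                [15, 18, 20]])        # example Miller indices

# Reciprocal-space coordinates for an orthorhombic cell: s = (h/a, k/b, l/c).
s = hkl / np.array([a, b, c])

# Keep a reflection if it lies inside the ellipsoid whose semi-axes along
# a*, b*, c* are 1/d_limit:  sum_i (s_i * d_limit_i)^2 <= 1.
inside = np.sum((s * d_limit) ** 2, axis=1) <= 1.0

print(hkl[inside])    # reflections kept by the ellipsoidal cutoff
print(hkl[~inside])   # reflections that would be trimmed away
```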
Hi Frank,
we discussed this with Randy the other day.. A couple of copy-pastes from that discussion:
In general, given a highly anisotropic data set:
1) maps calculated using all (unmodified) data by phenix.refine, phenix.maps and similar tools are better than maps calculated using anisotropy-truncated data. So, yes, for the purpose of map calculation there is no need to do anything: Phenix map calculation tools deal with anisotropy very well. How did you define "better"?
oh, totally ad hoc: looks better so one can build more model into it, for example. Continuous blobs get deconvoluted such that you can see main and side chains, etc. I looked quite a bit at such maps and that's the impression I got. No, I meant just comparing with and without anisotropy truncation, without any filling of missing Fobs.
If there are a lot of reflections without signal, that makes them essentially missing, so by including them, you're effectively filling in for those reflections with only DFc. If anisotropy is very strong (i.e. many missing reflections), does that not introduce very significant model bias? The maps would look cleaner, though.
That's a different story. If you do anisotropy truncation then, in case of severe anisotropy, there will be lots of removed weak Fobs, which will subsequently be filled in with DFc, and such maps will have a better chance of being more model-biased. However, phenix.refine always creates two 2mFo-DFc maps: with and without filling missing Fobs, so you can quickly compare them and get an idea.
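[Editor's note: purely to illustrate the "with and without filling" distinction Pavel describes (this is not the phenix.maps implementation, and the m and D values below are fixed placeholders rather than the likelihood-derived estimates Phenix actually uses), here is a toy sketch of 2mFo-DFc coefficients built both ways.]

```python
import numpy as np

# Toy reflection list: observed amplitudes (NaN = unmeasured/rejected),
# calculated amplitudes and model phases in degrees.  All numbers are invented.
f_obs  = np.array([120.0, 85.0, np.nan, 40.0, np.nan])
f_calc = np.array([110.0, 90.0,   60.0, 35.0,   20.0])
phase  = np.deg2rad([10.0, 250.0, 120.0, 45.0, 300.0])

# Placeholder figure of merit m and Luzzati D; phenix.refine estimates these
# from the maximum-likelihood machinery instead of fixing them like this.
m, D = 0.85, 0.90

missing   = np.isnan(f_obs)
obs_amp   = np.nan_to_num(f_obs)              # 0.0 where Fobs is missing
phase_fac = np.exp(1j * phase)

# No filling: missing reflections simply contribute nothing to the map.
coeff_nofill = np.where(missing, 0.0, 2.0 * m * obs_amp - D * f_calc) * phase_fac

# Filling: missing reflections are replaced by D*Fc -- the source of the
# model-bias concern discussed in this thread.
coeff_fill = np.where(missing, D * f_calc, 2.0 * m * obs_amp - D * f_calc) * phase_fac

print(np.round(coeff_nofill, 1))
print(np.round(coeff_fill, 1))
```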
2) phenix.refine refinement may fail if one uses the original anisotropic data set. This is probably because the ML target does not use experimental sigmas (and the anisotropy correction by the UCLA server is nothing but Miller-index-dependent removal of data by a sigma criterion - yeah, that old, well-criticized practice of throwing away data you worked hard to measure!). Maybe using sigmas in the ML calculation could solve the problem, but that has to be proved. If there are large swathes of reciprocal space without signal, I don't see why that shouldn't be excluded. Tossing out individual reflections is of course silly, but what's wrong with trimming down the edges of the ellipsoid?
Well, I guess the thing is that it's not entirely "no signal", but rather a little weak signal buried in noise, which may be better than nothing if treated properly. The last bit here, "treated properly", is the important one and probably should be addressed as part of methodology improvements. But maybe you are just right - maybe including that signal would not change anything visibly, or maybe (likely) it's case-dependent. I'm not aware of anyone having done this kind of study systematically.

Pavel
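[Editor's note: on the point that the ML target does not use experimental sigmas, here is a rough illustration of what sigma weighting does. It is a least-squares analogue, not the actual ML target in phenix.refine, and all amplitudes and sigmas are invented.]

```python
import numpy as np

# Two well-measured reflections and two noise-level ones with large sigmas.
f_obs  = np.array([100.0, 80.0,  2.0,  3.0])
sigma  = np.array([  2.0,  2.0, 10.0, 12.0])
f_calc = np.array([ 98.0, 83.0, 40.0, 35.0])

# Unweighted residual: the noisy reflections dominate the target because the
# model disagrees with them strongly, even though they carry little information.
unweighted = np.sum((f_obs - f_calc) ** 2)

# Sigma-weighted residual: the same reflections are down-weighted by their
# large uncertainties, so they no longer dominate.
weighted = np.sum(((f_obs - f_calc) / sigma) ** 2)

print(unweighted, round(weighted, 2))   # 2481.0 vs ~24.8
```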
Dear All,

Using Coot we can build a peptide according to the 2Fo-Fc and Fo-Fc maps. For most of the peptide, especially in the middle, the reliability of the constructed model should be very high. But at each end of the peptide there are 1-2 residues built into 2Fo-Fc density contoured at a lower sigma level (even 0.8). Will you please explain how to assess the reliability of the residues constructed at each end of the peptide? I am looking forward to getting a reply from you.

Cheers,
Fenghui
On Tue, May 1, 2012 at 5:44 PM, Dialing Pretty wrote:
Using Coot we can build a peptide according to the 2Fo-Fc and Fo-Fc maps. For most of the peptide, especially in the middle, the reliability of the constructed model should be very high. But at each end of the peptide there are 1-2 residues built into 2Fo-Fc density contoured at a lower sigma level (even 0.8).
Will you please explain how to assess the reliability of the residues constructed at each end of the peptide?
In theory you can use the correlation coefficient of these residues to density, along with the 2Fo-Fc level and the presence or absence of Fo-Fc density. Personally, I would be very suspicious of single-conformation residues which fall entirely outside of 2Fo-Fc density contoured at 1 sigma, or which have a CC below 0.8. Certainly it would be wise to avoid making any biological interpretations of residues like this without additional supporting evidence. But this is one of those questions that's very difficult to answer non-interactively. Please, please ask a friendly local crystallographer (or even better, more than one) to inspect your maps with you and confirm the reliability of the model. There are plenty of good crystallographers where you work (I can suggest a couple of names if you don't know any), and you are certain to get a clearer answer faster by talking to them than by asking the phenixbb, especially when we don't have your maps in front of us.

-Nat
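[Editor's note: as a concrete, simplified version of the per-residue correlation coefficient Nat mentions (Phenix computes this for you, e.g. via phenix.real_space_correlation), here is a small sketch of a Pearson correlation between observed-map and model-map values sampled at the same grid points around one residue; the density values are invented.]

```python
import numpy as np

def residue_map_cc(obs_density, calc_density):
    """Pearson correlation between observed (e.g. 2mFo-DFc) and model-calculated
    density sampled at the same grid points around a single residue."""
    obs = np.asarray(obs_density, dtype=float)
    calc = np.asarray(calc_density, dtype=float)
    obs = obs - obs.mean()
    calc = calc - calc.mean()
    return float(np.sum(obs * calc) / np.sqrt(np.sum(obs**2) * np.sum(calc**2)))

# Invented samples: a well-ordered residue versus a doubtful terminal residue.
well_ordered = residue_map_cc([0.8, 1.2, 1.0, 0.9, 1.4], [0.9, 1.1, 1.0, 1.0, 1.3])
doubtful     = residue_map_cc([0.1, 0.4, -0.2, 0.3, 0.0], [0.9, 1.1, 1.0, 1.0, 1.3])
print(round(well_ordered, 2), round(doubtful, 2))   # ~0.97 vs ~0.0
```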
Hi Pavel
In general, given a highly anisotropic data set:
1) maps calculated using all (unmodified) data by phenix.refine, phenix.maps and similar tools are better than maps calculated using anisotropy-truncated data. So, yes, for the purpose of map calculation there is no need to do anything: Phenix map calculation tools deal with anisotropy very well.
If there are a lot of reflections without signal, that makes them essentially missing, so by including them, you're effectively filling in for those reflections with only DFc. If anisotropy is very strong (i.e. many missing reflections), does that not introduce very significant model bias? The maps would look cleaner, though.
That's a different story. If you do anisotropy truncation then, in case of severe anisotropy, there will be lots of removed weak Fobs, which will subsequently be filled in with DFc, and such maps will have a better chance of being more model-biased. However, phenix.refine always creates two 2mFo-DFc maps: with and without filling missing Fobs, so you can quickly compare them and get an idea.

No, the comparison I mean is: I'm wondering about what happens when you do NOT do anisotropy truncation: that generates large volumes of reciprocal space where Fobs is approximately zero, and therefore the map coefficients (2mFo-DFc) become DFc -- i.e. the equivalent of filling in missing Fobs for very incomplete data. The maps to compare would be: no anisotropy cut-off --vs-- anisotropy cut-off WITHOUT filling in missing reflections.

(Of course, it presumably matters how effectively D down-weights those reflections; but how is calculation of D affected by a resolution bin being dominated by near-zero Fobs?)

phx.
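[Editor's note: Frank's parenthetical question about D can be illustrated crudely. The sketch below uses a simple least-squares scale of Fobs against Fcalc per resolution bin as a stand-in for D (it is not the sigma_A / maximum-likelihood estimation phenix.refine actually performs, and the amplitudes are invented); it only shows that a bin dominated by near-zero Fobs drives such an estimate toward zero, which in turn shrinks the DFc contribution from that bin.]

```python
import numpy as np

def crude_D(f_obs, f_calc):
    """Least-squares scale of |Fobs| against |Fcalc| within one resolution bin.
    Only a qualitative stand-in for the ML-based D / sigma_A estimation."""
    f_obs = np.asarray(f_obs, dtype=float)
    f_calc = np.asarray(f_calc, dtype=float)
    return float(np.sum(f_obs * f_calc) / np.sum(f_calc ** 2))

# Invented amplitudes: a bin with real signal, and a bin where anisotropy has
# pushed nearly all Fobs down to noise level while Fcalc stays finite.
strong_bin = crude_D([100, 80, 120, 90], [95, 85, 110, 100])
weak_bin   = crude_D([  3,  1,   2,  4], [60, 70,  50,  80])
print(round(strong_bin, 2), round(weak_bin, 2))   # ~1.0 vs ~0.04
```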
participants (4)
- Dialing Pretty
- Frank von Delft
- Nathaniel Echols
- Pavel Afonine