Cross Validation Reflection Set question
Thanks for the replies.
I just tried doing this, and it looks like the R-free flags will cover the full resolution range, at least with the current code.
By current code, you mean 1.7.2-869 and newer, right?
Is this really necessary in this case (refine against lower res data, then continue with the whole data set)? I think I have a backbone shift in this structure (relative to the native protein used for MR), so I wanted to do it this way before trying to go all the way down to 2 Å.
One last question: does phenix do NCS averaging to improve maps? -- Yuri Pompeu
Open question,
Is this 'step wise resolution increase' still necessary in the days of maximum likelihood refinement?
On Nov 6, 2011, at 7:14 PM, Yuri wrote:
Is this really necessary in this case (refine against lower res data, then continue with the whole data set)?
I think I have a backbone shift in this structure (relative to the native protein used for MR), so I wanted to do it this way before trying to go all the way down to 2 Å.
--------------------------------------------- Francis E. Reyes M.Sc. 215 UCB University of Colorado at Boulder
Hi,
Open question,
Is this 'step wise resolution increase' still necessary in the days of maximum likelihood refinement?
it's not really an open question. If the ML target were that powerful, you could do rigid body refinement using all reflections without the need to cut off the high resolution. It's not the case, however, as we show in
Automatic multiple-zone rigid-body refinement with a large convergence radius. P. V. Afonine, R. W. Grosse-Kunstleve, A. Urzhumtsev and P. D. Adams. J. Appl. Cryst. 42, 607-615 (2009).
STIR (stepwise increase of resolution) may be necessary, for example, in the case of a "very poor" starting model and "very high" resolution data, although the whole procedure should probably be more complex: for example, the model parametrization should change as more higher-resolution data are added.
Pavel
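To make the STIR idea above concrete, here is a minimal control-flow sketch in Python. The callable refine_step is a hypothetical stand-in for one round of refinement against data truncated at d_min (it is not a phenix.refine API), and the toy usage at the end only exercises the loop:

    # Schematic stepwise-increase-of-resolution (STIR) loop.  refine_step(model, d_min)
    # is a caller-supplied placeholder for one round of refinement against data
    # truncated at d_min; it is NOT a real phenix.refine call.
    def stir_refinement(model, refine_step, d_min_start=4.0, d_min_final=2.0, step=0.5):
        d_min = d_min_start
        while d_min >= d_min_final - 1e-9:
            # As higher-resolution data come in, the model parametrization could
            # also change here (rigid body -> individual xyz/B, etc.).
            model = refine_step(model, d_min)
            d_min -= step
        return model

    # Toy usage: the "model" is just a number and each step nudges it,
    # purely to show the control flow.
    print(stir_refinement(model=0.0, refine_step=lambda m, d: m + 1.0 / d))

In a real protocol each pass would also re-examine the free-R statistics and possibly switch the parametrization, as noted above.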
Another benefit is reduced memory usage and increased speed: if sigmaA is close to zero/very low for d-spacings from (say) 2 Å down to the high-resolution limit of (say) 1 Å, you could shave off execution time during the starting cycles of refinement.
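A back-of-the-envelope sketch of how much work those high-resolution shells represent; the cell below is made up, and space-group symmetry and data completeness are ignored:

    import math

    def approx_unique_reflections(cell_volume_A3, d_min):
        # Reciprocal-lattice points inside a sphere of radius 1/d_min:
        # roughly V_cell * (4/3) * pi / d_min**3, halved for Friedel pairs.
        return cell_volume_A3 * (4.0 / 3.0) * math.pi / (2.0 * d_min ** 3)

    volume = 50.0 * 60.0 * 70.0  # hypothetical 50 x 60 x 70 A cell, P1
    for d in (2.0, 1.5, 1.0):
        print("d_min = %.1f A: ~%d reflections" % (d, approx_unique_reflections(volume, d)))

Going from a 2 Å to a 1 Å cutoff multiplies the reflection count by roughly eight, so when sigmaA is essentially zero in those shells the early cycles spend most of their structure-factor effort on terms that contribute almost nothing.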
I guess patience is a virtue though....
Peter
Sent from my iPhone
On Nov 6, 2011, at 6:59 PM, Pavel Afonine wrote:
Hi,
Open question,
Is this 'step wise resolution increase' still necessary in the days of maximum likelihood refinement?
it's not really an open question. If the ML target were that powerful, you could do rigid body refinement using all reflections without the need to cut off the high resolution. It's not the case, however, as we show in
Automatic multiple-zone rigid-body refinement with a large convergence radius. P. V. Afonine, R. W. Grosse-Kunstleve, A. Urzhumtsev and P. D. Adams J. Appl. Cryst. 42, 607-615 (2009).
STIR may be necessary for example in case of "very poor" starting model and "very high" resolution data, although the whole procedure should probably be more complex: for example, model parametrization should change as more higher resolution data is added.
Pavel
This answer confuses two independent properties of a refinement - the target function and the optimization method.

Rigid body refinement is required because the target function (whatever it is) is being optimized with a method that assumes that off-diagonal elements of the second derivative matrix are small. When correction of the errors in the model requires the concerted motion of groups of atoms, the off-diagonal elements of this matrix are very large, and the assumption that they are not causes the error to be uncorrected. Forcing rigid body refinement reparameterizes the model so that the new second derivative matrix really has small off-diagonal elements and the optimization can correct the problem.

Low resolution data are sufficient to define the optimal rigid body parameters. With the least-squares target, the presence of the high resolution data reduced the radius of convergence of the optimization, making the reduction of the resolution limit mandatory. A good ML target should set all the high resolution gradients to zero, making them irrelevant. As has been mentioned elsewhere, since it is just a very computationally expensive way to calculate zero, one can save time by reducing the resolution limit anyway.

I should emphasize "good" in good ML target. The calculation of sigma A, itself, assumes that the atomic positional errors are uncorrelated, so the currently used ML target is not a "good" ML target for models with this type of error. This is what, I believe, is the cause of the resolution limit effects reported in Afonine's 2009 paper.

Dale Tronrud

On 11/06/11 18:59, Pavel Afonine wrote:
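For readers who prefer symbols, the generic Gauss-Newton picture being described here (nothing program-specific) can be sketched as follows. For a least-squares target

    T(p) = \sum_{h} w_h \left( |F_{obs}(h)| - |F_{calc}(h; p)| \right)^2

the normal (second derivative) matrix is approximately

    N_{jk} \approx \sum_{h} w_h \, \frac{\partial |F_{calc}(h; p)|}{\partial p_j} \, \frac{\partial |F_{calc}(h; p)|}{\partial p_k}.

A diagonal approximation keeps only the j = k terms of N. When the required correction is a concerted shift of a group of atoms, the neglected j != k terms are large, so the diagonal update cannot find it; reparametrizing that group with a few rigid-body rotations and translations folds the correlation into a small parameter set whose normal matrix is much closer to diagonal.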
Hi,
Open question,
Is this 'step wise resolution increase' still necessary in the days of maximum likelihood refinement?
it's not really an open question. If the ML target were that powerful, you could do rigid body refinement using all reflections without the need to cut off the high resolution. It's not the case, however, as we show in
Automatic multiple-zone rigid-body refinement with a large convergence radius. P. V. Afonine, R. W. Grosse-Kunstleve, A. Urzhumtsev and P. D. Adams J. Appl. Cryst. 42, 607-615 (2009).
STIR may be necessary for example in case of "very poor" starting model and "very high" resolution data, although the whole procedure should probably be more complex: for example, model parametrization should change as more higher resolution data is added.
Pavel
Hi Dale,
This answer confuses two independent properties of a refinement - the target function and the optimization method.
to be accurate, the answer confused model parameterization (rigid body) and refinement target (LS vs ML, computed using different subsets of reflections). The answer did not involve the optimization method (e.g. minimization in this case). Too much jargon is never good.
Low resolution data are sufficient to define the optimal rigid body parameters. With the least-squares target the presence of the high resolution data reduced the radius of convergence of the optimization making the reduction of the resolution limit mandatory. A good ML target should set all the high resolution gradients to zero making them irrelevant. As has been mentioned elsewhere, since it is just a very computationally expensive way to calculate zero one can save time by reducing the resolution limit anyway.
That is what we hoped. It turned out it's not - ML does not set all the high-resolution terms to zero (probably for the reason you mentioned below), or does so insufficiently compared with manually cutting off the high-resolution data. That's what we observed when preparing that "Afonine's 2009 paper".
I should emphasize "good" in good ML target. The calculation of sigma A, itself, assumes that the atomic positional errors are uncorrelated, so the currently used ML target is not a "good" ML target for models with this type of error. This is what, I believe, is the cause of the resolution limit effects reported in Afonine's 2009 paper.
Pavel
On Sun, Nov 6, 2011 at 6:14 PM, Yuri wrote:
I just tried doing this, and it looks like the R-free flags will cover the full resolution range, at least with the current code.
By current code, you mean 1.7.2-869 and newer, right?
"Current" in this case means "as of 10AM this morning", but I would be surprised if 1.7.2 was any different. It should be easy to confirm, anyway.
One last question: does phenix do NCS averaging to improve maps?
Not phenix.refine, but AutoBuild will certainly do this. -Nat
participants (6)
- Dale Tronrud
- Francis E Reyes
- Nathaniel Echols
- Pavel Afonine
- Phzwart
- Yuri