[phenixbb] problems running refinement of shaken model against half map

L. Hielkema l.hielkema at rug.nl
Fri Dec 6 10:30:20 PST 2019


Dear Pavel,

Thanks again for your answer. I understand that you have some concerns about the validation we use, which makes me really curious how the rest of the EM community feels about this.

> "widely used" does not always mean "makes sense". As it often happens, some may just blindly follow the crowd. And how "widely" it is used, in fact? I stated my arguments as to why this does not make sense (to me at least). I'm yet to hear any strong counter-arguments.
> 
So you stated the following before: "First off, I think this way of validation is quite faulty conceptually... How do you know 0.5 Å is the best number and not, say, 0.7 or 0.3? By shaking the model, you validate the shaken (perturbed) model, which is not the model you are publishing and reporting statistics for. If you shake the model twice with the same shake dose, you will get two different models with the same amount of perturbation, which means the numbers you derive from such models can (and likely will) be different."
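
To make that last point concrete, here is a minimal sketch in plain NumPy (this is not Phenix code; the shake function and its rms parameter are illustrative names, loosely modelled on what a coordinate shake does). It shows that two perturbations of the same magnitude still produce two different models:

# Minimal illustration (not Phenix code): shaking a model twice with the
# same RMS displacement gives two different perturbed models, because only
# the magnitude of the random displacements is controlled, not their
# directions.
import numpy as np

def shake(coords, rms=0.5, seed=None):
    """Perturb coordinates with random displacements of a given RMS (in Angstrom)."""
    rng = np.random.default_rng(seed)
    disp = rng.normal(size=coords.shape)
    # Rescale so the per-atom displacement has exactly the requested RMS.
    disp *= rms / np.sqrt((disp ** 2).sum(axis=1).mean())
    return coords + disp

coords = np.random.rand(100, 3) * 50.0  # stand-in for atomic coordinates
m1 = shake(coords, rms=0.5)
m2 = shake(coords, rms=0.5)
rmsd = np.sqrt(((m1 - m2) ** 2).sum(axis=1).mean())
print(f"RMSD between the two equally shaken models: {rmsd:.2f} A")  # non-zero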

Does anyone have strong counter-arguments, or think along the same lines?
We would really like to know whether we should change anything in our approach, or whether the approach Pavel suggests, using the Phenix tool Comprehensive validation, would be a good solution to this issue.

> The paper you refer to indeed describes the protocol they followed, but it does not answer the concerns I raised before.
> 
> History knows plenty of examples of something being almost a standard for a while and then discarded later: cutting off low-resolution diffraction data (because at the time no good bulk-solvent models had been invented), cutting data by 2 or 3 "sigma" criteria (because maximum-likelihood target functions were not yet in use), not adding H atoms to atomic models, and so on.
> 
> Cryo-EM isn't as mature as protein crystallography, where most procedures and protocols are pretty much standard. Cross-validation for cryo-EM has yet to settle on a more meaningful method. A number of attempts have indeed been made so far, but none of them seems to be as good as Brunger's free-R approach!

Sure, many things still have to be established, but it would be nice to know what the community currently considers the generally accepted method.

> 
>> As far as I could see, the Phenix tool Comprehensive validation does not provide the validation against the half maps that one needs to obtain FSCwork and FSCfree, as described in the attached papers.
> 
> This is correct; it doesn't, for the reasons I mentioned before.
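
For readers unfamiliar with the protocol under discussion, below is a minimal NumPy sketch of the FSCwork/FSCfree idea (this is not the Phenix implementation; map I/O, computing density from a model, and the fsc function name are all illustrative). The model is refined against half map 1, and its map-model FSC is then computed against both the half map it was refined against (FSCwork) and the half map it never saw (FSCfree):

# Minimal sketch (not Phenix code) of the half-map cross-validation idea:
# shell-wise Fourier shell correlation between a model-derived map and each
# half map. Reading the maps and computing density from the refined model
# are assumed to be done elsewhere.
import numpy as np

def fsc(map_a, map_b, n_shells=20):
    """Fourier shell correlation between two equally sized 3D maps."""
    fa, fb = np.fft.fftn(map_a), np.fft.fftn(map_b)
    # Radial spatial frequency of every voxel.
    freqs = np.meshgrid(*[np.fft.fftfreq(n) for n in map_a.shape], indexing="ij")
    r = np.sqrt(sum(f ** 2 for f in freqs)).ravel()
    edges = np.linspace(0.0, r.max(), n_shells + 1)
    shell = np.clip(np.digitize(r, edges) - 1, 0, n_shells - 1)
    num = np.bincount(shell, (fa * np.conj(fb)).real.ravel(), n_shells)
    den_a = np.bincount(shell, (np.abs(fa) ** 2).ravel(), n_shells)
    den_b = np.bincount(shell, (np.abs(fb) ** 2).ravel(), n_shells)
    return num / np.sqrt(den_a * den_b)

# model_map: density computed from the model refined against half1.
# fsc_work = fsc(model_map, half1)   # the map the model was refined against
# fsc_free = fsc(model_map, half2)   # the map the model never saw
# A large gap between the two curves would suggest overfitting to half map 1.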

Best,
Lisa





> On 5 Dec 2019, at 21:02, Pavel Afonine <pafonine at lbl.gov> wrote:
> 
> Hi Lisa,
> 
>> Thanks for your answer. Though I am a bit confused: as far as my group and I understand, this method has been widely used for several years (see the attached papers for its first introduction; many have since performed this type of validation).
> 
> "widely used" does not always mean "makes sense". As it often happens, some may just blindly follow the crowd. And how "widely" it is used, in fact? I stated my arguments as to why this does not make sense (to me at least). I'm yet to hear any strong counter-arguments.
> 
> The paper you refer to indeed describes the protocol they followed, but it does not answer the concerns I raised before.
> 
> History knows plenty of examples of something being almost a standard for a while and then discarded later: cutting off low-resolution diffraction data (because at the time no good bulk-solvent models had been invented), cutting data by 2 or 3 "sigma" criteria (because maximum-likelihood target functions were not yet in use), not adding H atoms to atomic models, and so on.
> 
> Cryo-EM isn't as mature as protein crystallography, where most procedures and protocols are pretty much standard. Cross-validation for cryo-EM has yet to settle on a more meaningful method. A number of attempts have indeed been made so far, but none of them seems to be as good as Brunger's free-R approach!
> 
>> As far as I could see, the Phenix tool Comprehensive validation does not provide the validation against the half maps that one needs to obtain FSCwork and FSCfree, as described in the attached papers.
> 
> This is correct; it doesn't, for the reasons I mentioned before.
> 
> All the best,
> Pavel
> 
