Hi Ed,

I agree my statement was too sweeping, and you promptly caught it, thanks!

phenix.refine does catch and deal with clearly nonsensical situations, such as having Fobs<=0 in refinement. So saying "phenix.refine does not use any data cutoff for refinement" was indeed not precise. In addition, phenix.refine automatically removes Fobs outliers following R. Read's paper.
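
Schematically, this amounts to selecting a subset of reflections before the target is ever computed. Here is a toy numpy sketch of that idea (the function name and the simple sigma-based filter are illustrative only, not the actual cctbx/phenix code, which uses the likelihood-based outlier test):

    import numpy as np

    def select_work_reflections(f_obs, f_calc, n_dev=4.0):
        # Drop non-physical amplitudes (Fobs <= 0).
        keep = f_obs > 0.0
        # Crude stand-in for the likelihood-based outlier rejection
        # (Read-style) used in practice: reject huge |Fobs - Fcalc|.
        resid = np.abs(f_obs - f_calc)
        keep &= resid < n_dev * resid[keep].std()
        return keep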

I don't see much sense in having a (0 - Fcalc)**2 term in the least-squares target, or an equivalent one in the ML target. Implementing an intensity-based ML target function (or the corresponding LS one) would allow using Iobs<=0, but this is not done yet, and it is a different story anyway: your original question below was about Fo (Fobs).
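
To make concrete what that term does: in a plain least-squares target, zero-filling the unmeasured reflections just adds a sum of Fcalc**2 over that set, which pushes the calculated amplitudes of unobserved reflections toward zero regardless of what the model says. A toy sketch (plain numpy, nothing phenix-specific; 'measured' is a boolean mask over the reflection list):

    import numpy as np

    def ls_targets(f_obs, f_calc, measured):
        # Unweighted LS target over the measured reflections only.
        t_measured = np.sum((f_obs[measured] - f_calc[measured]) ** 2)
        # Zero-filling missing Fobs would simply add this extra term,
        # i.e. sum of (0 - Fcalc)**2 over the unmeasured set.
        t_zero_filled = t_measured + np.sum(f_calc[~measured] ** 2)
        return t_measured, t_zero_filled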

Do you have rock-solid evidence that substituting missing (unmeasured) Fobs with 0 would be better than just using the actual set (Fobs>0) in refinement? Or did I miss a relevant paper on this matter? If so, I would appreciate a pointer to it. Unless I see clear evidence that this would improve things, I wouldn't want to spend time implementing it, and unfortunately I don't have time right now to experiment with it myself.

Thanks!
Pavel.


On 5/17/10 6:52 AM, Ed Pozharski wrote:
On Fri, 2010-05-14 at 15:35 -0700, Pavel Afonine wrote:
phenix.refine does not use any data cutoff for refinement.

So was the Fo>0 hard-wired cutoff removed?  I don't have the latest
version so I can't check myself.