[phenixbb] WAS: changing TLS groups mid refinement

Phil Jeffrey pjeffrey at princeton.edu
Mon May 17 09:08:44 PDT 2010

This has been discussed before.  For a start, look at French and Wilson 
(French, G.S. & Wilson, K.S., Acta Cryst. (1978), A34, 517.)

Fobs < 0 is not possible, but Fobs = 0 clearly conveys some information 
(i.e. the reflection is weak). Simply deleting the data is the worst-case 
scenario: you remove all information content for that reflection during 
refinement.  I'm surprised that this would even need further exposition, 
especially in light of the community tendency to accept higher outer-shell 
Rsymms (50-60%), where a significant proportion of the data would be 
expected to be weak and therefore at risk of arbitrary cutoff by 
phenix.refine. If I/sigI = 2 (a not uncommon outer-shell criterion), then 
a decent proportion of the data may have I < 0, and those data are really 
there, really weak, and not a figment of the processing program's 
imagination.
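To put a number on that last point, here is a small simulation (my own 
illustration, not from the original post; the noise level and intensity 
range are assumed values): if true intensities in a weak outer shell are 
non-negative but comparable to the measurement error, a large fraction of 
the *measured* intensities will come out negative purely from counting 
noise.

```python
import random

random.seed(42)

# Hypothetical weak outer-shell reflections: true intensities are small
# but non-negative; Gaussian measurement noise is added on top.
sigma = 1.0                                             # assumed sigma(I)
true_I = [0.5 * random.random() for _ in range(10000)]  # I_true >= 0, weak
meas_I = [I + random.gauss(0.0, sigma) for I in true_I]

# Fraction of measured intensities that end up below zero.
negative_fraction = sum(1 for I in meas_I if I < 0) / len(meas_I)
print(f"fraction of measured I < 0: {negative_fraction:.2f}")
```

With these assumed numbers, roughly 40% of the measured intensities are 
negative even though every underlying true intensity is positive; 
discarding them throws away real measurements.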

Does phenix.refine enforce an I <= 0 cutoff too?  It certainly behaves as 
if it does.

As it stands, the best way to get around this undesirable feature of 
phenix.refine is to use CCP4's TRUNCATE program (with "TRUNCATE YES"), 
which makes much of the weak data small but positive.  You still have to 
tell phenix.refine not to use Imean and SigImean (it will use those if 
it can find them) and to use F instead.
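The idea behind that conversion can be sketched numerically (a rough 
illustration of the French & Wilson (1978) approach, not TRUNCATE's actual 
implementation; the shell mean intensity Sigma and the integration grid 
are assumptions here): take the posterior mean of F given the measured 
intensity, combining a Gaussian likelihood with the acentric Wilson prior. 
Even a negative measured intensity then maps to a small positive amplitude.

```python
import math

def fw_amplitude(I_obs, sigma_I, Sigma=1.0, n=2000):
    """Posterior-mean amplitude <F | I_obs> for an acentric reflection.

    Likelihood:  I_obs ~ N(F**2, sigma_I**2)
    Prior:       p(F) proportional to F * exp(-F**2 / Sigma)  (Wilson, acentric)

    Sigma (the mean intensity of the resolution shell) is an assumed
    input; real implementations estimate it from the data.
    """
    F_max = 4.0 * math.sqrt(Sigma + max(I_obs, 0.0) + sigma_I)
    dF = F_max / n
    num = den = 0.0
    for i in range(1, n + 1):
        F = i * dF
        # Unnormalized posterior density at F.
        post = math.exp(-(I_obs - F * F) ** 2 / (2.0 * sigma_I ** 2)) \
               * F * math.exp(-F * F / Sigma)
        num += F * post
        den += post
    return num / den

# Negative or zero measured intensities still give small positive amplitudes:
for I_obs in (-0.5, 0.0, 1.0):
    print(f"I_obs = {I_obs:5.2f}  ->  <F> = {fw_amplitude(I_obs, 0.5):.3f}")
```

This is why a TRUNCATE-style treatment preserves the information in weak 
reflections instead of rejecting them: the posterior mean shrinks smoothly 
toward zero as I_obs becomes more negative, but never reaches it.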

Again, include F = 0 data.  F < 0 should probably terminate program 
execution immediately, since it implies a data-content error.

Phil Jeffrey

Pavel Afonine wrote:
> Hi Ed,
> I agree I was too generous in my statement and you promptly caught it, 
> thanks!
> phenix.refine does catch and deal with clearly nonsensical situations, 
> like having Fobs<=0 in refinement. So, saying "phenix.refine does not 
> use any data cutoff for refinement" was indeed not precise. In 
> addition, phenix.refine automatically removes Fobs outliers based on 
> R. Read's paper.
> I don't see much sense in having a term (0-Fcalc)**2 in the 
> least-squares target, or an equivalent one in the ML target. 
> Implementing an intensity-based ML target function (or the 
> corresponding LS one) would allow using Iobs<=0, but this is not done 
> yet, and it is a different story: your original question below was 
> about Fo (Fobs).
> Do you have rock-solid evidence that substituting zeros for missing 
> (unmeasured) Fobs would be better than just using the actual set 
> (Fobs>0) in refinement? Or did I miss a relevant paper on this matter? 
> I would appreciate it if you could point me to one. Unless I see clear 
> evidence that this would improve things, I wouldn't waste time 
> implementing it. Unfortunately, I don't have time right now to 
> experiment with this myself.
> Thanks!
> Pavel.
> On 5/17/10 6:52 AM, Ed Pozharski wrote:
>> On Fri, 2010-05-14 at 15:35 -0700, Pavel Afonine wrote:
>>> phenix.refine does not use any data cutoff for refinement.
>> So was the Fo>0 hard-wired cutoff removed?  I don't have the latest
>> version so I can't check myself.
