[phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"

Michael Thompson miket at chem.ucla.edu
Fri May 31 14:50:51 PDT 2013


As another follow-up to this point, I recently ran into an example where Xtriage reported that my intensity statistics indicated twinning (<|L|> = 0.399 and <L^2> = 0.227, estimated twin fraction alpha = 0.4). After refining the structure with the suggested twin law, and making composite omit maps along the way (using AutoBuild with no twin law), I ran model_vs_data at the end of refinement and noticed that it reported R/Rfree about 10% higher than the R/Rfree from refinement, similar to what the original post described. I checked the model_vs_data log file and found that the L-test routine used by model_vs_data was failing, so the subsequent analysis was carried out as if there were no twinning.

So in my case the crystal was twinned, but model_vs_data was not able to perform the L-test, so it was reporting incorrect (untwinned) R-factors.
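
For reference: the L test of Padilla & Yeates compares the intensities of pairs of reflections that lie close together in reciprocal space, L = (I1 - I2)/(I1 + I2). Untwinned data should give <|L|> = 0.5 and <L^2> = 1/3, while a perfect twin gives 0.375 and 0.2, so values like the ones above fall partway toward the twinned limit. A minimal sketch of how the two moments are computed (not the Xtriage code, just an illustration, and assuming the local reflection pairs have already been selected):

    import numpy as np

    def l_statistics(intensity_pairs):
        # intensity_pairs: (N, 2) array of intensities for pairs of
        # reflections that are close together in reciprocal space.
        i1, i2 = np.asarray(intensity_pairs, dtype=float).T
        keep = (i1 + i2) > 0
        l_vals = (i1[keep] - i2[keep]) / (i1[keep] + i2[keep])
        return np.mean(np.abs(l_vals)), np.mean(l_vals ** 2)  # <|L|>, <L^2>

Xtriage also compares the cumulative |L| distribution against both limits; the sketch above only shows where the two quoted numbers come from.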

The error message at the top of the model_vs_data log file is:

Twin analysis failed: Python argument types in
    l_test.__init__(l_test, miller_index, double, space_group, bool, float, float, float, int)
did not match C++ signature:
    __init__(_object*, scitbx::af::const_ref<cctbx::miller::index<int>, scitbx::af::trivial_accessor> miller_indices, scitbx::af::const_ref<double, scitbx::af::trivial_accessor> intensity, cctbx::sgtbx::space_group space_group, bool anomalous_flag, long parity_h, long parity_k, long parity_l, unsigned long max_delta_h)
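
The mismatch is between the argument types the Python caller supplies and the types the Boost.Python binding expects: the traceback shows parity_h, parity_k and parity_l arriving as Python floats where the C++ signature wants longs, and Boost.Python does not silently convert a float to a long, so the l_test constructor is rejected and the L test never runs. Schematically (an illustration of the failure mode, not the actual phenix source):

    # l_test(miller_indices, intensities, space_group, anomalous_flag,
    #        parity_h, parity_k, parity_l, max_delta_h)
    #
    # l_test(idx, i_obs, sg, False, 2.0, 2.0, 2.0, 8)  # floats for parity -> TypeError
    # l_test(idx, i_obs, sg, False, 2,   2,   2,   8)  # integers match the C++ signature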


Perhaps other users are seeing this behavior because the L-test fails within model_vs_data.

Mike

----- Original Message -----
From: "Nathaniel Echols" <nechols at lbl.gov>
To: "PHENIX user mailing list" <phenixbb at phenix-online.org>
Sent: Wednesday, May 29, 2013 5:18:57 PM GMT -08:00 US/Canada Pacific
Subject: Re: [phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"

On Wed, May 29, 2013 at 4:13 PM, Sam Stampfer <Samuel.Stampfer at tufts.edu> wrote:
> When I refined my model in phenix, I used the twin law h,-h-k,-l. I read in
> the documentation that twinning can account for some of this discrepancy,
> but that the program is supposed to take twinning into account if it will
> lower the calculated R-work by more than 2%, which it doesn't seem to have
> done (or there is some other problem with my data).

Okay, the problem is that your data don't actually appear to be
twinned.  The automatic method used by phenix.model_vs_data (which is
used internally for Table 1 and the validation GUI) only tries
possible twin laws if the results of the "L test" show a suspicious
distribution of intensities.  Your data look fine, so it doesn't
bother trying the twin laws.  That the R-factors are lower when you
refine with a twin law isn't necessarily indicative of the data
actually being twinned - Garib Murshudov has looked into this in
detail but I confess to being ignorant of the math (but I can probably
dig up his paper on the subject if anyone is interested).  However,
I'm pretty sure the data are actually in a higher-symmetry space
group.  Will send details and new files off-list (probably tomorrow at
this rate).
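
For what it's worth, the gate is essentially a comparison of the L-test moments against their untwinned expectations before any twin law is tried. A rough sketch of that decision, not the actual phenix.model_vs_data source and with an assumed cutoff value:

    # Rough sketch of the gating described above; the 0.48 threshold is an
    # assumed illustrative value, not the one used internally.
    def twin_laws_to_test(mean_abs_l, candidate_twin_laws):
        # Untwinned data give <|L|> ~ 0.5; a perfect twin gives ~ 0.375.
        if mean_abs_l < 0.48:
            return candidate_twin_laws   # suspicious distribution: try each law
        return []                        # looks untwinned: skip twinned refinement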

I should probably change some of the programs and/or documentation to
make it more clear what is being done internally, since it took me a
bit of digging to realize what was going on.  In general, though,
always be very careful before running twinned refinement!  I have seen
several users do this by mistake when they really had higher symmetry.
 The maps will also be more model-biased when using twinned
refinement, so it's good to avoid doing this unless absolutely
necessary.

-Nat
_______________________________________________
phenixbb mailing list
phenixbb at phenix-online.org
http://phenix-online.org/mailman/listinfo/phenixbb

-- 
Michael C. Thompson
Graduate Student
Biochemistry & Molecular Biology Division
Department of Chemistry & Biochemistry
University of California, Los Angeles
miket at chem.ucla.edu

