[phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"

Randy Read rjr27 at cam.ac.uk
Sat Jun 1 07:36:12 PDT 2013


And, in the case of translational non-crystallographic symmetry or tNCS (of which pseudo-centering is the most serious special case), Phaser can now correct for the statistical effects.  This works pretty well when there's only a single tNCS translation, but the corrections aren't (yet) as robust when there's more than one.

Best wishes,

Randy Read

-----
Randy J. Read
Department of Haematology, University of Cambridge
Cambridge Institute for Medical Research    Tel: +44 1223 336500
Wellcome Trust/MRC Building                 Fax: +44 1223 336827
Hills Road                                  E-mail: rjr27 at cam.ac.uk
Cambridge CB2 0XY, U.K.                     www-structmed.cimr.cam.ac.uk

On 1 Jun 2013, at 02:15, "Terwilliger, Thomas C" <terwilliger at lanl.gov> wrote:

> I think it is important to remember that perhaps the best statistic we have right now for identifying whether a structure is twinned is the Wilson ratio <I^2>/<I>^2. For acentric reflections this is 2.0 for untwinned data and 1.5 for a perfect twin. This can of course be complicated by pseudo-centering, but aside from that issue it is a really good indicator.
> 
> It can be better than the L test or other tests that compare pairs of reflections, because it does not require that you have the correct symmetry.
> 
> So in xtriage:
> 
> TWINNED STRUCTURE:
> 
> Wilson ratio and moments  
> 
> Acentric reflections
>   <I^2>/<I>^2    :1.667   (untwinned: 2.000; perfect twin 1.500)
>   <F>^2/<F^2>    :0.857   (untwinned: 0.785; perfect twin 0.885)
>   <|E^2 - 1|>    :0.609   (untwinned: 0.736; perfect twin 0.541)
> 
> 
> NOT TWINNED STRUCTURE:
> 
> Wilson ratio and moments
> 
> Acentric reflections
>   <I^2>/<I>^2    :1.995   (untwinned: 2.000; perfect twin 1.500)
>   <F>^2/<F^2>    :0.788   (untwinned: 0.785; perfect twin 0.885)
>   <|E^2 - 1|>    :0.732   (untwinned: 0.736; perfect twin 0.541)
> 
> 
> Centric reflections
>   <I^2>/<I>^2    :2.939   (untwinned: 3.000; perfect twin 2.000)
>   <F>^2/<F^2>    :0.648   (untwinned: 0.637; perfect twin 0.785)
>   <|E^2 - 1|>    :0.990   (untwinned: 0.968; perfect twin 0.736)
> 
> 
> If this test says your crystal is not twinned, it probably is not twinned.
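
For illustration, the three quantities xtriage prints above can be computed directly from a set of observed intensities. The following is a minimal numpy sketch, not the actual xtriage code; in particular it normalizes E^2 against a single overall <I>, whereas a real implementation would work in resolution bins:

    import numpy as np

    def acentric_moments(intensities):
        """Intensity moments for acentric reflections.

        Expected values: <I^2>/<I>^2 = 2.000 untwinned, 1.500 perfect twin;
        <F>^2/<F^2>  = 0.785 untwinned, 0.885 perfect twin;
        <|E^2 - 1|>  = 0.736 untwinned, 0.541 perfect twin.
        """
        I = np.asarray(intensities, dtype=float)
        I = I[I > 0]                  # drop non-positive measurements
        F = np.sqrt(I)                # amplitudes, so <F^2> = <I> here
        E2 = I / I.mean()             # crude normalization, no resolution binning
        return {
            "<I^2>/<I>^2": (I ** 2).mean() / I.mean() ** 2,
            "<F>^2/<F^2>": F.mean() ** 2 / I.mean(),
            "<|E^2 - 1|>": np.abs(E2 - 1.0).mean(),
        }

Values near 1.667 / 0.857 / 0.609, as in the first xtriage output above, point toward partial twinning; values near 2.000 / 0.785 / 0.736 look untwinned.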
> 
> All the best
> Tom T
> 
> 
> 
> ________________________________________
> From: phenixbb-bounces at phenix-online.org [phenixbb-bounces at phenix-online.org] on behalf of Michael Thompson [miket at chem.ucla.edu]
> Sent: Friday, May 31, 2013 3:50 PM
> To: PHENIX user mailing list
> Subject: Re: [phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"
> 
> As another follow-up to this point, I recently had an example where Xtriage reported that my intensity statistics indicated twinning (<|L|> = 0.399 and <L^2> = 0.227, alpha = 0.4). After refining the structure with the suggested twin law, and making composite omit maps along the way (using AutoBuild with no twin law), I ran phenix.model_vs_data at the end of refinement and noticed that it reported R/Rfree about 10% higher than the R/Rfree from refinement, similar to what the original post described. I checked the model_vs_data logfile and found that the L-test routine used by model_vs_data was failing, so the subsequent analysis was carried out as if there were no twinning.
> 
> So in my case the crystal was twinned, but model_vs_data was not able to perform the L test, and so it reported R-factors computed without the twin law, which came out too high.
> 
> The error message at the top of the model_vs_data log file is:
> 
> Twin analysis failed: Python argument types in
>    l_test.__init__(l_test, miller_index, double, space_group, bool, float, float, float, int)
> did not match C++ signature:
>    __init__(_object*, scitbx::af::const_ref<cctbx::miller::index<int>, scitbx::af::trivial_accessor> miller_indices, scitbx::af::const_ref<double, scitbx::af::trivial_accessor> intensity, cctbx::sgtbx::space_group space_group, bool anomalous_flag, long parity_h, long parity_k, long parity_l, unsigned long max_delta_h)
> Twin analysis failed: Python argument types in
>    l_test.__init__(l_test, miller_index, double, space_group, bool, float, float, float, int)
> did not match C++ signature:
>    __init__(_object*, scitbx::af::const_ref<cctbx::miller::index<int>, scitbx::af::trivial_accessor> miller_indices, scitbx::af::const_ref<double, scitbx::af::trivial_accessor> intensity, cctbx::sgtbx::space_group space_group, bool anomalous_flag, long parity_h, long parity_k, long parity_l, unsigned long max_delta_h)
> 
> 
> Perhaps there are some other users who are seeing this behavior because the L-test fails within model vs. data.
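
For anyone reading along: the L test (Padilla & Yeates, 2003) compares pairs of acentric reflections that are close together in reciprocal space but not related by symmetry, using L = (I1 - I2)/(I1 + I2). The expected values are <|L|> = 0.500 and <L^2> = 0.333 for untwinned data, versus 0.375 and 0.200 for a perfect twin, which is why the 0.399 / 0.227 quoted above suggested partial twinning. Below is a rough, self-contained sketch of the statistic; it is not the cctbx routine (which, as the traceback shows, takes parity and max_delta_h arguments to choose the neighbouring reflections) and simply pairs each reflection with one shifted by a fixed index offset:

    import numpy as np

    def l_statistics(intensity_by_hkl, offset=(2, 0, 0)):
        """Crude local L test: pair (h, k, l) with (h, k, l) + offset.

        Expected values for acentric reflections:
          <|L|> = 0.500, <L^2> = 0.333   untwinned
          <|L|> = 0.375, <L^2> = 0.200   perfect twin

        intensity_by_hkl maps (h, k, l) tuples to measured intensities.
        """
        dh, dk, dl = offset
        l_values = []
        for (h, k, l), i1 in intensity_by_hkl.items():
            mate = (h + dh, k + dk, l + dl)
            i2 = intensity_by_hkl.get(mate)
            if i2 is not None and (i1 + i2) > 0:
                l_values.append((i1 - i2) / (i1 + i2))
        l_values = np.asarray(l_values)
        return np.abs(l_values).mean(), (l_values ** 2).mean()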
> 
> Mike
> 
> 
> 
> 
> 
> 
> 
> ----- Original Message -----
> From: "Nathaniel Echols" <nechols at lbl.gov>
> To: "PHENIX user mailing list" <phenixbb at phenix-online.org>
> Sent: Wednesday, May 29, 2013 5:18:57 PM GMT -08:00 US/Canada Pacific
> Subject: Re: [phenixbb] Discrepancy between R-factors from phenix.refine vs phenix Generate "Table 1"
> 
> On Wed, May 29, 2013 at 4:13 PM, Sam Stampfer <Samuel.Stampfer at tufts.edu> wrote:
>> When I refined my model in phenix, I used the twin law h,-h-k,-l. I read in
>> the documentation that twinning can account for some of this discrepancy,
>> but that the program is supposed to take twinning into account if it will
>> lower the calculated R-work by more than 2%, which it doesn't seem to have
>> done (or there is some other problem with my data).
> 
> Okay, the problem is that your data don't actually appear to be
> twinned.  The automatic method used by phenix.model_vs_data (which is
> used internally for Table 1 and the validation GUI) only tries
> possible twin laws if the results of the "L test" show a suspicious
> distribution of intensities.  Your data look fine, so it doesn't
> bother trying the twin laws.  That the R-factors are lower when you
> refine with a twin law isn't necessarily indicative of the data
> actually being twinned - Garib Murshudov has looked into this in
> detail but I confess to being ignorant of the math (but I can probably
> dig up his paper on the subject if anyone is interested).  However,
> I'm pretty sure the data are actually in a higher-symmetry space
> group.  Will send details and new files off-list (probably tomorrow at
> this rate).
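
In other words, the decision to try twin laws is gated on the L-test result. Just to make the logic concrete, a toy version of that gate might look like the sketch below; the 0.44 cutoff is purely illustrative and is not the threshold phenix.model_vs_data actually uses:

    def should_try_twin_laws(mean_abs_l, cutoff=0.44):
        """Hypothetical gate on the L-test result: the acentric expectation
        is <|L|> = 0.500 for untwinned data and 0.375 for a perfect twin,
        so twin laws are only worth testing when <|L|> falls well below 0.5.
        The cutoff used here is illustrative only."""
        return mean_abs_l < cutoff

    print(should_try_twin_laws(0.399))  # True: suspicious, try twin laws
    print(should_try_twin_laws(0.495))  # False: treated as untwinned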
> 
> I should probably change some of the programs and/or documentation to
> make it more clear what is being done internally, since it took me a
> bit of digging to realize what was going on.  In general, though,
> always be very careful before running twinned refinement!  I have seen
> several users do this by mistake when they really had higher symmetry.
> The maps will also be more model-biased when using twinned
> refinement, so it's good to avoid doing this unless absolutely
> necessary.
> 
> -Nat
> 
> --
> Michael C. Thompson
> Graduate Student
> Biochemistry & Molecular Biology Division
> Department of Chemistry & Biochemistry
> University of California, Los Angeles
> miket at chem.ucla.edu
> _______________________________________________
> phenixbb mailing list
> phenixbb at phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb


