completeness in table1
Dear developers,

What could be the reason for the discrepancy between the completeness reported in XSCALE.LP (94.5% - 100.0%) and in Table 1: 89.57% (37.49%)?

Best wishes,
Alex

--
Alex Batyuk
The Plueckthun Lab
www.bioc.uzh.ch/plueckthun
On Mon, Jun 24, 2013 at 11:25 AM, Alexander Batyuk wrote:
What could be the reason for the discrepancy between the completeness reported in XSCALE.LP (94.5% - 100.0%) and in Table 1: 89.57% (37.49%)?
I don't know the explanation offhand, but I would check whether XSCALE is reporting the completeness relative to *merged* Friedel pairs. If you have anomalous data, Phenix will always report the completeness with F+ and F- counted separately. But if you can send me the files, I will take a look. Of course, since I have absolutely no clue what XSCALE is doing internally and no way of finding out, it will still be somewhat of a guessing game.

PS. I should mention that, after playing around with the Table 1 code quite a bit, I do not trust log files for anything at this point - with the possible exception of SCALA's logs, which seem to be reasonably sensible and consistent with how we report statistics. I am strongly tempted to remove the log-file harvesting feature entirely and force users to enter unmerged data instead if they want merging statistics.

-Nat
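[As a rough illustration of the Friedel-pair point above - a minimal sketch, assuming cctbx is available and using a hypothetical anomalous data file named data.mtz - the two ways of counting completeness can be compared directly:]

# Minimal sketch: completeness with F+/F- counted separately vs. with
# Friedel mates merged. "data.mtz" is a hypothetical anomalous data set.
from iotbx import reflection_file_reader

arrays = reflection_file_reader.any_reflection_file(
    file_name="data.mtz").as_miller_arrays()
data = arrays[0]  # pick the intensity (or amplitude) array of interest

print("anomalous flag: %s" % data.anomalous_flag())
# completeness with F+ and F- treated as separate reflections
print("anomalous completeness: %.4f" % data.completeness())
# completeness after averaging Bijvoet (Friedel) mates
merged = data.average_bijvoet_mates()
print("merged completeness: %.4f" % merged.completeness())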
On Mon, Jun 24, 2013 at 12:10 PM, Nathaniel Echols wrote:
On Mon, Jun 24, 2013 at 11:25 AM, Alexander Batyuk wrote:
What could be the reason for the discrepancy between the completeness reported in XSCALE.LP (94.5% - 100.0%) and in Table 1: 89.57% (37.49%)?
I don't know the explanation offhand, but I would check whether XSCALE is reporting the completeness relative to *merged* Friedel pairs. If you have anomalous data, Phenix will always report the completeness with F+ and F- counted separately.
A correction: this statement is true for most CCTBX-based programs, but I actually implemented Table 1 (and the standalone merging statistics program) so that it always merges Friedel mates when calculating these statistics, since that seems to be more in line with standard practice (?).

Regardless, the explanation in this case is that the completeness is calculated from the data you use in refinement, not the original intensities, and French-Wilson treatment throws out an unusually large number of reflections - both Jeff's implementation in Phenix and ctruncate in CCP4 - almost all of them in the highest-resolution shell.

I think we probably need to add a loud warning if a) our French-Wilson routine discards more than X% of intensities, and b) the completeness calculated from unmerged intensities deviates significantly from the completeness of the amplitudes in Table 1.

-Nat
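[To make the French-Wilson effect concrete, here is a rough sketch - assuming cctbx behaves as described and that "intensities" is a merged intensity Miller array, e.g. read as in the earlier example - that counts how many reflections the conversion discards and shows the per-shell completeness of the amplitudes that actually end up in Table 1:]

# Rough sketch: quantify how many intensities French-Wilson treatment
# rejects and how that shifts the completeness of the refined amplitudes.
# "intensities" is assumed to be a merged intensity Miller array.
from cctbx import french_wilson

amplitudes = french_wilson.french_wilson_scale(miller_array=intensities, log=None)
n_lost = intensities.size() - amplitudes.size()
print("reflections discarded by French-Wilson: %d of %d (%.1f%%)" %
      (n_lost, intensities.size(), 100.0 * n_lost / intensities.size()))

# per-shell completeness of the amplitudes actually used in refinement
amplitudes.setup_binner(n_bins=10)
amplitudes.completeness(use_binning=True).show()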
participants (2)
- Alexander Batyuk
- Nathaniel Echols