Hi Nat and Bjørn,

I apologize if this is way off base, but perhaps the difference in completeness is due to the (potential) difference between I-obs / F-obs (the original reflections after data processing and reduction) and I-obs-filtered / F-obs-filtered (the reflections actually used in refinement). Specifically, I am referring to data refined in older versions of Phenix (before French & Wilson treatment was the default), and to datasets that have some weak reflections or negative intensities. Perhaps that is the case here?

An easy way to tell if there is a difference is to open an output mtz from refinement in ViewHKL (which comes with CCP4 6.3) -- it nicely lists the original data vs. the data actually used for refinement, with the corresponding completeness for each.

Hope that helps,
Kip

On Oct 25, 2012, at 1:57 PM, Nathaniel Echols wrote:
On Thu, Oct 25, 2012 at 9:59 AM, Bjørn Pedersen wrote:
I was under the impression that the differences arise because XDS uses SIGNAL/NOISE >= -3.0 as the cutoff, thus allowing some negative reflections, while Phenix uses a strict cutoff of 0? Note that the completeness in the high-resolution shell is lower in Phenix than in XDS (90.6 vs. 99.7), a behavior I have seen several times while solving different structures. XDS will show high completeness at high resolution, and then in phenix.refine it always drops.
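The effect Bjørn describes can be sketched with a toy example. This is not Phenix or XDS code, just made-up reflection data showing how an I/sigma >= -3.0 cutoff (XDS-style) keeps weakly negative intensities that a strict I > 0 cutoff discards, lowering the apparent completeness in a weak high-resolution shell:

```python
# Hypothetical reflections from a weak high-resolution shell: (I, sigma)
# pairs with several negative measured intensities. Numbers are invented
# purely to illustrate the two cutoff conventions discussed in the thread.
reflections = [
    (12.0, 2.0), (0.5, 1.0), (-0.8, 1.0), (-2.5, 1.0),
    (3.0, 1.5), (-0.2, 0.9), (1.1, 1.0), (-4.0, 1.0),
]
total = len(reflections)

# XDS-style cutoff: keep everything with I/sigma >= -3.0
kept_xds = [(i, s) for i, s in reflections if i / s >= -3.0]

# Strict positivity cutoff (older Phenix behavior): keep only I > 0
kept_strict = [(i, s) for i, s in reflections if i > 0]

print("completeness with I/sigma >= -3 cutoff: %.1f%%"
      % (100.0 * len(kept_xds) / total))
print("completeness with strict I > 0 cutoff:  %.1f%%"
      % (100.0 * len(kept_strict) / total))
```

With these invented numbers the strict cutoff drops half the shell while the XDS-style cutoff keeps all but one reflection, mimicking the 99.7% vs. 90.6% discrepancy (the real gap depends, of course, on how many weak reflections the dataset has).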
Phenix used to have a strict cutoff, but it now runs French & Wilson treatment by default, so negative intensities are allowed. However: Kay pointed out that in another piece of code I am calculating I/sigma starting from amplitudes, which are always positive. For Table 1 this won't matter if you use intensities as input, but if you use amplitudes and many of the original intensities were negative, it may be overestimating I/sigma. Fixing this now.
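The overestimation Nat mentions can be demonstrated numerically. This sketch uses invented intensities, and it stands in for French & Wilson with a crude truncation (F = sqrt(max(I, 0))) just to show the direction of the bias: once you go through amplitudes, every back-computed intensity is non-negative, so the negative measurements no longer pull the mean I/sigma down:

```python
import math

# Invented (I, sigma) pairs, several with negative measured intensity.
intensities = [(-3.0, 1.0), (-1.5, 1.0), (0.5, 1.0), (2.0, 1.0), (8.0, 1.0)]
n = len(intensities)

# Mean I/sigma computed directly from intensities: negatives contribute
# negative terms and lower the average, as they should.
mean_from_I = sum(i / s for i, s in intensities) / n

# Mean I/sigma recomputed after a round trip through amplitudes.
# F = sqrt(max(I, 0)) is a crude stand-in for French & Wilson here;
# squaring F discards the sign, so every term is >= 0.
mean_from_F = sum(math.sqrt(max(i, 0.0)) ** 2 / s for i, s in intensities) / n

print("mean I/sigma from intensities: %.2f" % mean_from_I)
print("mean I/sigma via amplitudes:   %.2f" % mean_from_F)
```

With these numbers the amplitude route reports a mean I/sigma of 2.10 against the true 1.20, and the gap grows with the fraction of negative intensities, which matches the Table 1 behavior described above.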
It still doesn't explain why the completeness would be so far off, however.
-Nat _______________________________________________ phenixbb mailing list [email protected] http://phenix-online.org/mailman/listinfo/phenixbb