Thanks, Pavel. We'll put zero for the cutoff on Fobs and "NULL" for the cutoff on Iobs, and use the number ADIT already gleaned from the REMARK records in the PDB for the total number of reflections (measured and observed) used in refinement.

Actually, the original input data were I's from scalepack, but it is clear that negative intensities were not rejected at that stage (or there would have been more rejections), and in any case I'm used to thinking of the I->F conversion as part of data reduction, not refinement. So taking the F's as input to the actual refinement process, some were rejected as outliers, but none by the sigma cutoff of zero.

I find it odd that min(Fobs/sigma) is greater than 1 when the I's go clear through zero to negatives. Maybe part of the magic of French and Wilson? I'll check tomorrow whether, as Engin reports, there are individual reflections with F/sigma lower than reported, and if so let you know.

eab

Pavel Afonine wrote:
Hi Ed,
We are depositing a structure refined by phenix, and the final PDB file for deposition has the line "min Fobs/sigma=1.34".
This value is the result of taking all input Fobs, dividing them by their corresponding sigmas, and taking the minimum. It is for reporting purposes only; no data were rejected using this criterion.
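For concreteness, the reported number is just that minimum over all input reflections. A minimal sketch (the Fobs and sigma values below are made up, chosen so the minimum lands near the 1.34 in the deposited file):

```python
import numpy as np

# Made-up stand-ins for the Fobs and sigma(Fobs) columns of the input
# reflection file; real values would come from an MTZ/CNS/Scalepack reader.
f_obs = np.array([120.5, 8.3, 450.0, 3.1, 77.2])
sig_f = np.array([5.0, 6.2, 12.0, 2.0, 4.5])

# The reported statistic: the minimum of Fobs/sigma over all reflections.
min_f_over_sig = float((f_obs / sig_f).min())
print(f"min Fobs/sigma = {min_f_over_sig:.2f}")  # → min Fobs/sigma = 1.34
```

Nothing is filtered here; the value is computed and reported, exactly as described above.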
The .eff file confirms that the rejection criterion on Fobs (and on Iobs) is 0.0.
We need to make this clearer. When the input data are Fobs, this criterion is actually used; that is, all reflections with Fobs < 0 are rejected. If Iobs are the input data, then phenix.refine uses the French & Wilson method to convert the Iobs into Fobs.
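To illustrate why a French & Wilson treatment yields positive Fobs even for negative measured intensities, here is a rough numerical sketch of the acentric case: a Wilson prior on F combined with a Gaussian likelihood for Iobs, integrated on a grid. Sigma (the Wilson-distribution parameter) and the data values are made up, and this shows only the idea, not phenix.refine's actual implementation:

```python
import numpy as np

def fw_acentric_mean_f(i_obs, sig_i, big_sigma):
    """Posterior mean of F given a measured intensity (acentric case).

    Prior (Wilson, acentric): P(F)      = (2F/Sigma) * exp(-F^2/Sigma)
    Likelihood (Gaussian):    P(Iobs|F) ~ exp(-(Iobs - F^2)^2 / (2 sig_I^2))
    Integrals are approximated as sums on a grid; the grid spacing
    cancels in the ratio.
    """
    f = np.linspace(0.0, 6.0 * np.sqrt(big_sigma), 20000)
    post = (2.0 * f / big_sigma) * np.exp(-f**2 / big_sigma) \
           * np.exp(-(i_obs - f**2) ** 2 / (2.0 * sig_i**2))
    return float(np.sum(f * post) / np.sum(post))

# A negative measured intensity still gives a small positive F estimate,
# which is why min(Fobs/sigma) can exceed 1 even when the I's go negative.
print(fw_acentric_mean_f(i_obs=-5.0, sig_i=10.0, big_sigma=100.0))
```

The prior assigns zero probability to F < 0, so the posterior mean is always positive; weak or negative intensities are pulled toward small positive amplitudes rather than rejected.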
Jeff: could you clarify how the sigma rejection criterion is used in this case?
Also, phenix.refine uses dynamic outlier rejection, so the number of Fobs actually used in refinement may be slightly different. This is why phenix.refine always outputs an MTZ file that contains four sets of data:
1) the original data (just a copy of the input data);
2) the Fobs actually used in refinement;
3) Fmodel, the total model structure factor, which includes the bulk-solvent contribution and all scales, so you can take Fobs from 2) and Fmodel from 3) and reproduce the reported R-factors;
4) Fourier map coefficients.
For details see page 57 here: http://www.phenix-online.org/presentations/latest/pavel_phenix_refine.pdf
or simply run
phenix.mtz.dump file.mtz
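As a sketch of reproducing the R-factor from that output MTZ, here is a minimal example; the arrays are made-up stand-ins for the "Fobs used in refinement" and Fmodel amplitude columns (real values would come from an MTZ reader), so the printed number is illustrative only:

```python
import numpy as np

# Made-up stand-ins for the Fobs-used and |Fmodel| columns of the
# phenix.refine output MTZ (column names and values are illustrative).
f_obs = np.array([100.0, 250.0, 80.0, 310.0])
f_model = np.array([95.0, 260.0, 85.0, 300.0])

# Crystallographic R-factor: sum(||Fobs| - |Fmodel||) / sum(|Fobs|).
# Because Fmodel already carries bulk solvent and all scales, no extra
# scale factor is needed to reproduce the reported value.
r_factor = np.sum(np.abs(f_obs - np.abs(f_model))) / np.sum(f_obs)
print(f"R = {r_factor:.4f}")  # → R = 0.0405
```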
2) Also, this is a structure containing heavy atoms (the derivative was better than the native), and so it was refined against anomalous data. The final PDB file that was uploaded listed unique reflections with Bijvoet mates kept separate. Is that what we want to report, or the number after merging Bijvoet mates?
It reports unique reflections with Bijvoet mates separate because that is what was actually used in refinement. I will add another line showing the number of merged reflections (I thought I did that a while ago).
=================== more
The data.refine.mtz has both I's and F's. The I's go from small negative to large positive, while the F's go from 0.2 or 0.4 (F+ and F-) to around 5000. The French and Wilson truncation procedure is being used, so only about 0.2% of reflections get rejected. However, many more have negative I's. The .eff file indicates a cutoff of Iobs/sigma = 0. If this were applied to the I's in data.refine.mtz it would reject a lot of weak reflections and make min(F/sigma) larger, but clearly that is not happening; besides, I saw min Fobs/sigma > 1 for another project in which phenix was given only F's, all positive of course.
I hope Jeff will comment on this.
Pavel
_______________________________________________
phenixbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/phenixbb