If by "scaling" you mean scaling unmerged observations together, I don't think it is right that it assumes a unimodal intensity distribution. In refinement I believe the underlying distributions are assumed to be unimodal, but in data reduction scaling there is no such assumption. That scaling just assumes that the experiment (mainly the diffraction geometry) can be parameterised in such a way that reflections close together in reciprocal space and in measurement time have similar and parameterisable scales. This is independent of the intensity distribution. Phil On 19 May 2010, at 08:44, [email protected] wrote:
Frank,
Having had a case of such hectic pseudosymmetry, I found that the problem originates already in the scaling, which assumes a unimodal intensity distribution. The weak reflections then have their sigmas heavily overestimated and the strong ones underestimated. I had to scale the weak and strong reflections separately and rigid-body refine against the weak ones... Of course this type of problem has little to do with the issue of treating negative intensities.
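As a minimal sketch of the partition step (the helper name, the pseudotranslation vector t = (1/2, 1/2, 0) and the test numbers are all invented for illustration; in a real case t comes from the large non-origin peak in the native Patterson):

import numpy as np

# Sketch only: split reflections into the strong and weak classes implied
# by a pseudotranslation vector t, so each class can be scaled on its own.
def split_by_pseudotranslation(hkl, intensities, t=(0.5, 0.5, 0.0)):
    hkl = np.asarray(hkl, dtype=float)
    phase = hkl @ np.asarray(t)               # h.t for every reflection
    strong = np.isclose(phase % 1.0, 0.0)     # h.t integral -> strong class
    return intensities[strong], intensities[~strong]

hkl = np.array([[1, 1, 0], [1, 0, 0], [2, 2, 3], [1, 2, 1]])
I = np.array([900.0, 12.0, 850.0, -3.0])
I_strong, I_weak = split_by_pseudotranslation(hkl, I)
print("strong class:", I_strong, "weak class:", I_weak)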
Esko
Quoting "Frank von Delft"
: The philosophical arguments are fine, but is Pavel not justified in asking for real cases where it matters? It's not like he hasn't spent time thinking about it, and it's not like the whole phenix community isn't sitting in his ear about their own favourite missing fix.
I suppose what comes to mind is a case with hectic pseudotranslation causing half the reflections to be systematically almost but not quite zero. But then again, I understand that even if you don't toss the weak ones out, current algorithms don't deal with this well anyway, so it needs special treatment (for now: refine in the smaller cell, then rigid-body refine in the super-cell).
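A toy numerical check of the "almost but not quite zero" effect (the coordinates and the translation are made up; the shift is deliberately set just off half a cell):

import numpy as np

# Two copies of the same scatterers related by a translation t close to
# (1/2, 0, 0).  For t exactly (1/2, 0, 0) the structure factor carries a
# factor (1 + exp(pi*i*h)), so odd-h reflections vanish; just off 1/2 they
# are weak but not quite zero.
rng = np.random.default_rng(1)
xyz = rng.random((20, 3))                  # fractional coordinates, copy 1
t = np.array([0.48, 0.0, 0.0])             # the "not quite" half-cell shift

def intensity(h, k, l):
    hkl = np.array([h, k, l], dtype=float)
    f = (np.exp(2j * np.pi * xyz @ hkl).sum()
         + np.exp(2j * np.pi * (xyz + t) @ hkl).sum())
    return abs(f) ** 2

print("mean I, even h:", np.mean([intensity(h, 1, 1) for h in (2, 4, 6, 8)]))
print("mean I, odd h: ", np.mean([intensity(h, 1, 1) for h in (1, 3, 5, 7)]))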
phx.
On 18/05/2010 18:33, Ed Pozharski wrote:
On Mon, 2010-05-17 at 16:20 -0700, Peter Zwart wrote:
You make it sound like it is a bad thing. The effect of restraint weights (ADP, geometry) most likely has a much bigger impact on the final structure than a small fraction of smallish intensities (*)
Peter,
I think discarding data has to be justified. Two points:
1. The fraction of negative intensities is not necessarily small. It depends on the resolution cutoff (and you make an excellent point about PTS), but looking at scalepack log-files I can tell that in my hands the fraction is often 10% or more (DISCLAIMER: I belong to the I/sigma=1 resolution cutoff cult). A rough illustration follows after point 2.
2. Just because these reflections are weak does not mean that they are insignificant. Their contribution to the maps may be small (now I fear another round of the "fill in missing Fobs with Fc for map calculation" discussion), but keeping Fc close to zero for these reflections during refinement seems to be just as important as keeping Fc close to whatever values the strong reflections have.
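On the fraction in point 1, a rough simulation of a single weak shell (every number here is invented) shows how the negatives pile up once the noise is comparable to the signal; in the weakest shells the fraction is well above 10%, and averaged over a data set cut at I/sigma=1 it easily reaches the figure above:

import numpy as np

# Illustration only: acentric Wilson (exponential) true intensities in a
# shell where the measurement error is comparable to the mean signal.
rng = np.random.default_rng(2)
n = 200_000
mean_I = 10.0                              # mean true intensity in the shell
sigma = 10.0                               # noise comparable to the signal
I_true = rng.exponential(mean_I, size=n)
I_obs = I_true + rng.normal(0.0, sigma, size=n)
print("fraction negative: %.1f%%" % (100.0 * np.mean(I_obs < 0.0)))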
The weak reflections are not fundamentally worse than strong(er) reflections in the same resolution range. They are measured with roughly the same precision. Moreover, the practice of setting negative intensities to zero and then ignoring them in refinement discards those that are barely negative and leaves in those that were (quite randomly) barely positive.
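A quick check of that asymmetry with invented numbers: take reflections whose true intensity is exactly zero, measured with sigma = 1. Keeping everything averages to zero; discarding the negatives, or resetting them to zero, biases the retained data upward.

import numpy as np

# True I = 0 for every reflection; only the treatment of negatives differs.
rng = np.random.default_rng(3)
I_obs = rng.normal(0.0, 1.0, size=100_000)
print("keep all:          %+.3f" % I_obs.mean())
print("discard negatives: %+.3f" % I_obs[I_obs > 0].mean())
print("reset to zero:     %+.3f" % np.clip(I_obs, 0.0, None).mean())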
You are absolutely right that other factors will have an impact on the model. But that does not mean that discarding weak data is justified. Crystallographic refinement is a Rube Goldberg machine, and all the components should be as good as we can make them. Perhaps there could be something better than French & Wilson, but discarding negative-intensity reflections is hardly the solution.
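For reference, the French & Wilson idea in one numeric sketch: a Wilson prior times a Gaussian likelihood for the measurement, truncated at J >= 0, with the posterior mean as the intensity estimate. This is only an illustration of the principle for an acentric reflection (the function name and numbers are mine); the real programs use analytic expressions and treat centrics separately.

import numpy as np

def french_wilson_acentric(i_obs, sigma, mean_i_shell, n_grid=20_000):
    # Posterior over the true intensity J >= 0 on a grid: Gaussian
    # likelihood exp(-(Iobs-J)^2 / 2 sigma^2) times Wilson prior exp(-J/Sigma).
    j = np.linspace(0.0, abs(i_obs) + 10.0 * sigma + 10.0 * mean_i_shell, n_grid)
    log_post = -0.5 * ((i_obs - j) / sigma) ** 2 - j / mean_i_shell
    post = np.exp(log_post - log_post.max())    # guard against underflow
    dj = j[1] - j[0]
    post /= post.sum() * dj                     # normalise numerically
    return (j * post * dj).sum()                # posterior mean, always > 0

# Even a slightly negative measurement yields a small positive estimate:
print(french_wilson_acentric(i_obs=-2.0, sigma=3.0, mean_i_shell=50.0))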
Cheers,
Ed.
PS. Personally I have no stake in this, since I always use truncate.
_______________________________________________ phenixbb mailing list [email protected] http://phenix-online.org/mailman/listinfo/phenixbb