I have learned just a short time ago from Pavel that the MEM program writes an MTZ file with both MEM and ORIG (original) coefficients. I read both sequentially using one of the suitable options in Coot.
It works; you see them both in the same way. Easy to compare.
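If you prefer to script it, a minimal sketch from Coot's Python console could look like the following. The file name and column labels are placeholders, not the guaranteed phenix.maximum_entropy_map output names, so check what your MTZ actually contains and substitute the real labels.

# Coot Python console: load both sets of map coefficients from the single
# MTZ written by phenix.maximum_entropy_map.
# NOTE: the file name and column labels below are placeholders -- check the
# labels actually present in your MTZ and substitute them.
mtz = "model_mem_coeffs.mtz"

# conventional (ORIG) 2mFo-DFc coefficients (labels assumed)
imol_orig = make_and_draw_map(mtz, "2FOFCWT", "PH2FOFCWT", "", 0, 0)

# maximum-entropy (MEM) coefficients (labels assumed)
imol_mem = make_and_draw_map(mtz, "2FOFCWT_mem", "PH2FOFCWT_mem", "", 0, 0)

# contour both at the same sigma level so the comparison is fair
for imol in (imol_orig, imol_mem):
    set_contour_level_in_sigma(imol, 1.5)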
FF
Dr Felix Frolow   
Professor of Structural Biology and Biotechnology, Department of Molecular Microbiology and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica F, co-editor

e-mail: [email protected]
Tel:  ++972-3640-8723
Fax: ++972-3640-9407
Cellular: 0547 459 608

On Feb 22, 2013, at 12:18 , Morten Groftehauge <[email protected]> wrote:

Hi Pavel,

The maximum entropy maps look wonderful, and they might be useful in doubtful cases. It is, however, hard to compare them to the standard 2Fo-Fc map when the grid sampling isn't the same.
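One way around that is to recompute both maps on a single shared grid before comparing them. A rough cctbx sketch (the file name and column labels are assumptions, not the actual output names):

# Rough cctbx sketch: put the conventional and MEM map coefficients onto one
# shared grid so the two maps can be compared point by point.
# The file name and column labels are placeholders -- substitute whatever
# your MTZ actually contains.
import iotbx.mtz
from cctbx import maptbx
from scitbx.array_family import flex

arrays = {a.info().label_string(): a
          for a in iotbx.mtz.object(file_name="model_mem_coeffs.mtz").as_miller_arrays()}
orig = arrays["2FOFCWT,PH2FOFCWT"]          # conventional 2mFo-DFc (assumed label)
mem  = arrays["2FOFCWT_mem,PH2FOFCWT_mem"]  # MEM coefficients (assumed label)

# One gridding, derived from the higher-resolution array, reused for both maps.
cg = maptbx.crystal_gridding(
    unit_cell=orig.unit_cell(),
    space_group_info=orig.space_group_info(),
    d_min=min(orig.d_min(), mem.d_min()),
    resolution_factor=1/4.)

map_orig = orig.fft_map(crystal_gridding=cg).apply_sigma_scaling().real_map_unpadded()
map_mem  = mem.fft_map(crystal_gridding=cg).apply_sigma_scaling().real_map_unpadded()

# With identical sampling, a direct real-space correlation is meaningful.
cc = flex.linear_correlation(map_orig.as_1d(), map_mem.as_1d()).coefficient()
print("map-map CC on common grid: %.3f" % cc)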

Cheers,
Morten


On 14 February 2013 19:46, Nathaniel Echols <[email protected]> wrote:
On Thu, Feb 14, 2013 at 7:50 AM, Pavel Afonine <[email protected]> wrote:
> The algorithm implemented in Phenix is fast: it should take from a few
> seconds for small structures to a few minutes for large ones. I do not
> understand why it should take long time to run (as pointed out in that Acta
> D paper).

I suspect that's because they're running a much different algorithm.
The Phenix implementation doesn't reproduce the difference densities
they display, for what it's worth, but since neither the code nor even
the binaries for the ENIGMA program are available (!), it's hard to
know exactly what they're doing differently.

>> I see that phenix.maximum_entropy_map is now a command in Phenix.
>> Some quick questions: Where is this likely to be must useful and does it
>> take ridiculously long to run? From the Nishibori 2008 paper in Acta D it
>> seems like this would mainly be useful for very high resolution structures
>> that you would normally call complete - and that it would take a very long
>> time to compute.

I must say, I find that paper very misleading - the conventional maps
from Phenix are sufficient to identify the alternate conformation of
Tyr33 in Figure 2, for instance.  The published structure doesn't have
*any* alternate conformations, which at 1.3 Å resolution is absurd, so
it's very easy to produce an improved model without doing anything
fancy.  In Figure 5 they compare a conventional omit map with the MEM
version, but they're using much different grid spacings, so of course
they look different!

Maximum entropy tends to be used most frequently by small-molecule
crystallographers looking at charge densities, which is partly what
the Nishibori paper is doing.  For proteins, this is what Nicholas
Glykos (author of GraphEnt) told me about its use:

"For well behaved and complete data the maps look very similar. But in
other cases the ability of the maxent map to alleviate the problems
arising from series termination errors made a difference. We had one case
of [redacted] that diffracted to ~0.8 Å. Four passes were made to measure
both high and low resolution data. Unfortunately, for one of the passes the
time-per-frame was completely wrong and we ended up with a data set missing
all terms between ~4 and ~3 Angstrom. The conventional FFT had numerous
peaks arising from the series termination errors, the maxent map was
significantly better. In other cases, we use maxent to artificially sharpen
maps (by reducing the esd's) while avoiding the noise introduced by normal
(E-value-based) sharpening."
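For comparison, the "normal" sharpening contrasted with maxent above boils down to rescaling the amplitudes before the FFT. A minimal cctbx sketch of a simple negative-B variant (not the E-value scheme Glykos mentions; the -30 value and file name are arbitrary placeholders):

# Illustrative only: conventional sharpening of map coefficients by applying
# a negative overall B-factor before the FFT.  This is a simpler stand-in for
# the E-value-based sharpening mentioned above; it assumes the first array in
# the placeholder MTZ is the (complex) map coefficients.
import iotbx.mtz

coeffs = iotbx.mtz.object(file_name="map_coeffs.mtz").as_miller_arrays()[0]
sharpened = coeffs.apply_debye_waller_factors(b_iso=-30.0)  # negative B boosts high-resolution terms
sharp_map = sharpened.fft_map(resolution_factor=1/4.).apply_sigma_scaling()
sharp_map.as_ccp4_map(file_name="sharpened.ccp4")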

-Nat
_______________________________________________
phenixbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/phenixbb



--
Morten K Grøftehauge, PhD
Pohl Group
Durham University