Hi guys,

I see that phenix.maximum_entropy_map is now a command in Phenix. Some quick questions: where is this likely to be most useful, and does it take ridiculously long to run? From the Nishibori 2008 paper in Acta D it seems like this would mainly be useful for very high resolution structures that you would normally call complete, and that it would take a very long time to compute. I love trying out new methods but I need a bit more to go on with this one.

Cheers,
Morten

--
Morten K Grøftehauge, PhD
Pohl Group
Durham University
Hi Morten,

The plan was to describe it in the January issue of the CCN newsletter, but I did not make the deadline. The algorithm implemented in Phenix is fast: it should take from a few seconds for small structures to a few minutes for large ones. I do not understand why it should take a long time to run (as suggested in that Acta D paper). I would be interested to hear about your experience with this tool.

Pavel
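For anyone who wants to try it, a minimal run looks something like the following. The file names are placeholders, and I am assuming the tool accepts a model and a reflection file like most Phenix map utilities; check the options the program itself prints, since the exact parameters may differ between versions:

    phenix.maximum_entropy_map model.pdb data.mtz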
On Thu, Feb 14, 2013 at 7:50 AM, Pavel Afonine wrote:
> The algorithm implemented in Phenix is fast: it should take from a few seconds for small structures to a few minutes for large ones. I do not understand why it should take a long time to run (as suggested in that Acta D paper).
I suspect that's because they're running a very different algorithm. The Phenix implementation doesn't reproduce the difference densities they display, for what it's worth, but since neither the code nor the binaries for the ENIGMA program are available (!), it's hard to know exactly what they're doing differently.
> I see that phenix.maximum_entropy_map is now a command in Phenix. Some quick questions: Where is this likely to be most useful and does it take ridiculously long to run? From the Nishibori 2008 paper in Acta D it seems like this would mainly be useful for very high resolution structures that you would normally call complete, and that it would take a very long time to compute.
I must say, I find that paper very misleading: the conventional maps from Phenix are sufficient to identify the alternate conformation of Tyr33 in Figure 2, for instance. The published structure doesn't have *any* alternate conformations, which at 1.3Å resolution is absurd, so it's very easy to produce an improved model without doing anything fancy. In Figure 5 they compare a conventional omit map with the MEM version, but they're using very different grid spacings, so of course they look different!

Maximum entropy tends to be used most frequently by small-molecule crystallographers looking at charge densities, which is partly what the Nishibori paper is doing. For proteins, this is what Nicholas Glykos (author of GraphEnt) told me about its use:

"For well-behaved and complete data the maps look very similar. But in other cases the ability of the maxent map to alleviate the problems arising from series termination errors made a difference. We had one case of [redacted] that diffracted to ~0.8Å. Four passes were made to measure both high and low resolution data. Unfortunately, for one of the passes the time per frame was completely wrong and we ended up with a data set missing all terms between ~4 and ~3 Ångström. The conventional FFT map had numerous peaks arising from the series termination errors; the maxent map was significantly better. In other cases, we use maxent to artificially sharpen maps (by reducing the esd's) while avoiding the noise introduced by normal (E-value-based) sharpening."

-Nat
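As an aside, the series-termination effect Nicholas describes is easy to reproduce in a toy 1D calculation. This is plain numpy, nothing to do with the Phenix or GraphEnt implementations; it just shows that zeroing a resolution shell puts ripples into the back-transformed map:

    import numpy as np

    # 1D periodic "density": two narrow Gaussian peaks on a unit cell.
    n = 512
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    rho = (np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) +
           np.exp(-0.5 * ((x - 0.7) / 0.01) ** 2))

    # Fourier coefficients ("structure factors") of the density.
    F = np.fft.fft(rho)
    h = np.fft.fftfreq(n, d=1.0 / n)  # integer Fourier indices

    # Simulate a missing resolution shell: zero all terms with 30 <= |h| <= 45.
    F_gap = F.copy()
    F_gap[(np.abs(h) >= 30) & (np.abs(h) <= 45)] = 0.0

    # Back-transform: the map now shows ripples around the peaks,
    # i.e. the series-termination artifacts the maxent map alleviated.
    rho_gap = np.fft.ifft(F_gap).real
    print("largest spurious feature:", np.max(np.abs(rho_gap - rho)))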
Hi Pavel,

The maximum entropy maps look wonderful, and it looks like they might be useful in the doubtful cases. It is, however, hard to compare them to the standard 2Fo-Fc when the grid sampling isn't the same.

Cheers,
Morten
I learned just a short time ago from Pavel that the MEM program writes an mtz file with both MEM and ORIG (original) coefficients. I read both in sequentially using one of the suitable options in Coot. It works: you see them both in the same way. Easy to compare.
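In Coot's Python console that would be something along these lines. The file name is a placeholder and the column labels are guesses based on the MEM/ORIG names above, so check the actual labels in your mtz header:

    # Load both coefficient sets from the mtz written by phenix.maximum_entropy_map.
    # Arguments: mtz file, F column, phase column, weight column, use weights?, difference map?
    make_and_draw_map("map_coeffs_mem.mtz", "MEM", "PHIMEM", "", 0, 0)
    make_and_draw_map("map_coeffs_mem.mtz", "ORIG", "PHIORIG", "", 0, 0)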
FF
Dr Felix Frolow
Professor of Structural Biology and Biotechnology, Department of Molecular Microbiology and Biotechnology
Tel Aviv University 69978, Israel
Acta Crystallographica F, co-editor
e-mail: [email protected]
Tel: +972-3640-8723
Fax: +972-3640-9407
Cellular: 0547 459 608
Hi Morten,

Thanks for the feedback. This is exactly why the program outputs two maps in one file: the original map that you supplied to the program and the MEM map. When you open them in Coot they will be calculated and displayed using the same gridding. Also, the maps are rescaled such that if you contour them at the same level they will show identical volumes, which means you don't need to play with the sigma levels to make the two maps comparable.

Pavel
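For the curious, the volume-matching rescaling can be sketched roughly like this. This is my own illustration of the idea in plain numpy, not the actual Phenix code, and the linear scaling only guarantees equal volumes at the one chosen contour level:

    import numpy as np

    def rescale_to_match_volume(map_ref, map_other, level=1.0):
        """Scale map_other so that contouring both maps at `level`
        encloses the same fraction of grid points (the same volume)."""
        frac = np.mean(map_ref >= level)          # volume fraction above the contour
        cut = np.quantile(map_other, 1.0 - frac)  # value enclosing that fraction in map_other
        return map_other * (level / cut)          # after scaling, `cut` maps onto `level`

    # Example with random "maps" on a 64^3 grid:
    a = np.random.rand(64, 64, 64)
    b = 5.0 * np.random.rand(64, 64, 64)
    b_scaled = rescale_to_match_volume(a, b, level=0.9)
    print(np.mean(a >= 0.9), np.mean(b_scaled >= 0.9))  # ~equal fractions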
Hi,

I was attempting simultaneous X-ray and neutron refinement (5PTI from the PDB; the neutron data are flagged there as FOBS-1). Both X-ray and neutron data were deposited; however, only the X-ray data were recognized by phenix.refine. I worked around the problem by extracting the neutron data into a separate .mtz file. New R-free flags had to be generated for the neutron data.

After the first refinement I ended up with xxx_data.mtz (attached), in which both Fobs and R-free flags for both X-ray and neutron data are specified. However, the phenix.refine GUI still does not recognize the 'neutron' part of the input. In order to proceed with the newly generated R-free flags for neutrons I was forced to extract the neutron data into a separate file and the neutron R-free flags into yet another separate file, read both additional files into the GUI, and then specify from a drop-down menu what each of the files contained.

Is there any way for phenix to read in all four arrays (Fobs, neutron Fobs, and the associated R-free flags) from a single mtz file?

Regards,
Anna Makal

PS: I was using PHENIX dev-1280.
On Thu, Feb 14, 2013 at 11:29 AM, amakal@chem.uw.edu.pl wrote:
> Is there any way for phenix to read in all four arrays (Fobs, neutron Fobs, and the associated R-free flags) from a single mtz file?
Yes, you should be able to do this. The only problem that I see is that it doesn't automatically pull out the neutron data, and even after you specify that the file contains neutron data, the default label is still F-obs-xray. I think I can fix this, although it's only going to work for obvious labels (e.g. the *_data.mtz file from phenix.refine). In these cases I can also make it use the neutron data automatically, but I don't want to do this in other circumstances, because it's quite common to have multiple data arrays present in an MTZ file, and they'll usually just be different forms of the same thing (e.g. merged amplitudes and anomalous intensities).

Anyway, you simply add the file and it gets recognized as containing X-ray data and flags; then you either right-click the "Data type" field, or select the file and click "Modify file type", and select "Neutron data" and "Neutron R-free" in the menu. The data and flags labels will be automatically populated, and then you can change them to the appropriate arrays.

I'm using the most recent code (equivalent to 1.8.2-1296, which you are encouraged to install), but it should work in build 1280 as well.

-Nat
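For completeness, the command-line equivalent should look roughly like this. The parameter names are from memory of the phenix.refine PHIL scope, so double-check them (e.g. with phenix.refine --show-defaults), and the label strings must match what is actually in your mtz; the neutron labels below are guesses modeled on the F-obs-xray convention mentioned above:

    phenix.refine model.pdb xxx_data.mtz \
      xray_data.labels="F-obs-xray" \
      xray_data.r_free_flags.label="R-free-flags-xray" \
      neutron_data.labels="F-obs-neutron" \
      neutron_data.r_free_flags.label="R-free-flags-neutron"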
participants (5)
- amakal@chem.uw.edu.pl
- Felix Frolow
- Morten Groftehauge
- Nathaniel Echols
- Pavel Afonine