Hi David,

it's hard to say without looking at a concrete example. phenix.refine creates maps as good as it can - I mean you don't need to "toggle something on" to get an even better map. phenix.refine can output maps either as Fourier map coefficients, which you can then view as an actual map in Coot (for example), or as X-plor formatted maps (which you can view in PyMol, for example). I'm not sure whether the additional conversion step (into dsn6) may alter something.

Do you have an example in which:
- you take a model and data and compute an X-plor formatted map in CNS,
- then you take the exact same data and model and compute the same map in phenix.refine,
- you look at both maps in PyMol and see the "weak features" in the CNS map but do not see these "weak features" in the phenix.refine map?

If you have such an example and can send it to me, then I would have something to investigate. Otherwise I can only shake the air with my guesses/speculations.

Also, you may consider looking at both kinds of maps, the missing-Fobs-filled and the regular ones; phenix.refine creates both types:
http://www.phenix-online.org/pipermail/phenixbb/2009-August/002315.html

Pavel.
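If it helps, here is a rough Python sketch (using the cctbx libraries that ship with PHENIX) for putting a number on how similar the two X-plor formatted maps are, rather than judging only by eye in PyMol. This is only a sketch: the file names are placeholders, and it assumes both CNS and phenix.refine wrote their maps on the same grid.

    # Rough comparison of two X-plor formatted maps (placeholder file names).
    # Assumes both maps were computed on the same grid; uses the cctbx/iotbx
    # libraries distributed with PHENIX.
    import iotbx.xplor.map
    from scitbx.array_family import flex

    cns_map    = iotbx.xplor.map.reader(file_name="cns_2fofc.xplor")
    phenix_map = iotbx.xplor.map.reader(file_name="phenix_2fofc.xplor")

    cns_data    = cns_map.data.as_1d()
    phenix_data = phenix_map.data.as_1d()

    # The grids have to match for a point-by-point comparison.
    assert cns_data.size() == phenix_data.size()

    cc = flex.linear_correlation(cns_data, phenix_data).coefficient()
    print("map correlation (CNS vs phenix.refine): %.4f" % cc)

On 2/26/10 2:29 PM, David Garboczi wrote: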
We sometimes build model into density at ~0.8 sigma, refine, and the Rfree becomes lower, so we believe that we have put the model where it should be.
Running the same data and the same model in phenix under default conditions at the command line, we obtain maps that do not show the weaker features that we believe in the CNS maps.
We are wondering whether we can believe the weaker density, or whether we are simply not toggling something on in phenix.
We routinely view the phenix-generated maps after transforming them to dsn6 files in mapman.
thanks, Dave