<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<meta content="text/html; charset=ISO-8859-1"
http-equiv="Content-Type">
</head>
<body bgcolor="#ffffff" text="#000000">
Hi Sabine,<br>
<br>
<blockquote cite="mid:4E451354.5060603@mytum.de" type="cite">For
instance after MR I did a bit of shaking the coordinates with
pdbset (noise 0.1), </blockquote>
<br>
there are many ways of doing it using PHENIX:<br>
<br>
- using the tool that is specifically designed for this and many
similar model manipulations:<br>
phenix.pdbtools:<br>
<a class="moz-txt-link-freetext" href="http://phenix-online.org/documentation/pdbtools.htm">http://phenix-online.org/documentation/pdbtools.htm</a><br>
Example:<br>
phenix.pdbtools model.pdb sites.shake=0.1<br>
<br>
- using phenix.refine: it can modify your model before refinement
starts (so you don't have to run phenix.pdbtools):<br>
<br>
phenix.refine model.pdb data.mtz modify_start_model.sites.shake=0.1<br>
<br>
- by the way, 0.1 is ridiculously small. 2A or even larger is within
the convergence radius of phenix.refine most of the time. It depends
on resolution, of course.<br>
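Conceptually, "shaking" just adds random displacements to the atomic
coordinates so that the mean shift matches the requested value. A
minimal Python sketch of the idea (the function name and the
per-component Gaussian model are my illustration, not the actual
pdbset or phenix.pdbtools implementation):<br>

```python
import math
import random

def shake(sites, rms):
    """Add random noise to each (x, y, z) site so the typical
    displacement is about `rms` Angstrom. Illustrative only."""
    shaken = []
    for (x, y, z) in sites:
        # Gaussian noise per component; sd chosen so the 3D
        # displacement magnitude is on the order of `rms`.
        dx, dy, dz = (random.gauss(0.0, rms / math.sqrt(3.0))
                      for _ in range(3))
        shaken.append((x + dx, y + dy, z + dz))
    return shaken

random.seed(0)  # reproducible for the demo
sites = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
print(shake(sites, 0.1))
```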
<br>
<blockquote cite="mid:4E451354.5060603@mytum.de" type="cite">followed
by simulated annealing in Phenix.
<br>
</blockquote>
<br>
Cartesian or torsion?<br>
<br>
<blockquote cite="mid:4E451354.5060603@mytum.de" type="cite">Phenix
states after SA:
<br>
<br>
Start R-work = 0.2671, R-free = 0.2992
<br>
Final R-work = 0.2312, R-free = 0.2666
<br>
<br>
When I use the output pdb of phenix directly in Refmac (with same
mtz as input for Phenix)
<br>
Refmac tells me:
<br>
Initial R factor 0.2392 R free 0.2887
<br>
<br>
So I am quite puzzled about the discrepancy. Or can someone tell
me if I made an error in reasoning somewhere?
<br>
</blockquote>
<br>
There may be 10+ reasons for this; I'm sure I listed them before
(about a year ago or more), so you can find them in the phenixbb
archives. Also, see Nat's reply. Some of them:<br>
<br>
- Fobs outliers removal in phenix.refine;<br>
- efficient bulk-solvent and anisotropic scaling (P.V. Afonine, R.W.
Grosse-Kunstleve &amp; P.D. Adams. Acta Cryst. (2005). D61, 850-855.
"A robust bulk-solvent correction and anisotropic scaling
procedure") and mask parameters optimization;<br>
- if your input data file contains Iobs (not Fobs), the different
algorithms used to convert Iobs to Fobs (phenix.refine uses the
French &amp; Wilson method);<br>
... and many, many more.<br>
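As a toy illustration of why the same model can give different
R-factors under different conventions, here is the textbook
R = sum|Fo - Fc| / sum(Fo) on made-up numbers, with and without one
outlier reflection (the data are invented; this is not how either
program computes its reported R):<br>

```python
def r_factor(fobs, fcalc):
    """Conventional crystallographic R = sum|Fo - Fc| / sum(Fo)."""
    return sum(abs(o - c) for o, c in zip(fobs, fcalc)) / sum(fobs)

# Toy data: the last reflection is a strong Fobs outlier.
fobs  = [100.0, 50.0, 80.0, 400.0]
fcalc = [ 95.0, 55.0, 78.0, 100.0]

print(r_factor(fobs, fcalc))          # with the outlier included
print(r_factor(fobs[:3], fcalc[:3]))  # after dropping the outlier
```

Dropping a single bad reflection changes R substantially, so two
programs that filter Fobs differently will report different numbers
for the identical model.<br>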
<br>
In practice, I was only able to obtain IDENTICAL R-factors between
phenix.refine and SHELXL in a very artificial test (where
bulk-solvent was turned off, I was using identical scattering
factors, etc.).<br>
<br>
Pavel.<br>
<br>
</body>
</html>