Hi Gerwald,

- I'm not aware of any systematic study on this matter (and even if one exists, I'm sure it would be specific to particular software); such studies are planned as part of further automation of phenix.refine, so that the appropriate refinement strategy is chosen automatically based on the inputs.

- At your resolution I would try a range of possible refinement strategies. In the end it's not that bad to run an array of 5-10 refinement jobs and see which one works best in your particular case. In the absence of systematic study results, to me this strategy is better than shaking the air with speculations -:)

  1: individual coordinates + individual ADP + TLS;
  2: torsion-angle refinement of individual coordinates + individual ADP + TLS;
  3: individual coordinates + group ADP with one or two refinable B per residue + TLS;
  4: torsion-angle refinement of individual coordinates + group ADP with one or two refinable B per residue + TLS.

Of course, you may want to run these with target-weight optimization (at least at the final refinement stage). Check for NCS and use it if available (main.ncs=true).

Make sure you define the TLS groups correctly (optimally). Use TLSMD for this, and beforehand make sure the B-factors are sensible (to check, run a cycle of group B-factor refinement only).

Pavel.

On 8/14/09 2:18 PM, gerwald jogl wrote:
Hi All,
I am struggling with low resolution data (3.6 A) and I am concerned about overfitting. I have a ratio of about 3 reflections per atom.
Possible refinement scenarios would be a) coordinates + TLS, b) coordinates + group_adp + TLS, or c) coordinates + individual_adp + TLS.
The last scenario seems to be pushing the limits. I wonder if there are studies, thoughts or opinions out there on which refinement scenario to use.
Thanks for any comment on this, Gerwald
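As an illustration, two of the strategies Pavel lists above (1 and 3, the Cartesian-coordinate variants) could be written roughly as phenix.refine command lines. This is only a sketch: model.pdb, data.mtz, and the TLS selections are placeholders (the selections should come from TLSMD), and the keyword names (strategy, group_adp_refinement_mode, optimize_wxc/optimize_wxu) are assumptions that should be checked against `phenix.refine --show-defaults` for your installed version; the torsion-angle variants (2 and 4) additionally need the torsion-angle refinement option appropriate to your version.

```shell
# Strategy 1: individual coordinates + individual ADP + TLS
# (file names and TLS selections are placeholders)
phenix.refine model.pdb data.mtz \
  strategy=individual_sites+individual_adp+tls \
  adp.tls="chain A" adp.tls="chain B" \
  main.ncs=true \
  optimize_wxc=true optimize_wxu=true

# Strategy 3: individual coordinates + group ADP (one or two refinable
# B per residue) + TLS
phenix.refine model.pdb data.mtz \
  strategy=individual_sites+group_adp+tls \
  group_adp_refinement_mode=two_adp_groups_per_residue \
  adp.tls="chain A" adp.tls="chain B" \
  main.ncs=true
```

At 3 reflections per atom, the weight-optimization flags in the first run matter most at the final stage, as Pavel notes; for a quick survey of 5-10 jobs you could drop them and add them back for the winning strategy.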