Hello Xiao,

Xtriage estimates the twin fraction, while phenix.refine refines it, so the estimated value may differ from the refined one. By how much? That I would not dare to guess!

Given that twinning is present and you picked the correct twin law, the twin fraction from phenix.refine is going to be more accurate than the estimate from Xtriage.

Pavel

On 12/10/14 12:37 PM, Xiao Lei wrote:
Dear All,

I have an X-ray dataset of a protein-DNA complex at 2.8 A resolution in space group P312, which I checked for twinning with phenix.xtriage. The estimated twin fractions in the xtriage output are 0.115 (Britton analysis), 0.119 (H test), and 0.022 (maximum likelihood method). However, the L-test graph in xtriage shows my observed data almost perfectly overlaying the theoretical perfect-twin curve. In addition, when I ran refinement in Phenix with the twin law -h,-k,l, the log file reports a twin fraction of 0.49, which is very high and much larger than the Britton analysis and H-test estimates.
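For reference, the L-test (Padilla and Yeates) compares |L| = |I1 - I2| / (I1 + I2) over pairs of unrelated acentric reflections against theoretical expectations: <|L|> = 0.500 for untwinned data and 0.375 for a perfect twin. The sketch below is an illustrative simulation of those two limits, not Xtriage's actual code; the exponential intensity model and the reflection pairing are simplifying assumptions.

```python
import numpy as np

def l_test_mean(i1, i2):
    """Mean |L| for paired intensity arrays, L = (I1 - I2)/(I1 + I2)."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    return np.abs((i1 - i2) / (i1 + i2)).mean()

rng = np.random.default_rng(0)
n = 200_000

# Untwinned acentric intensities are modeled as exponential (Wilson statistics);
# the expected <|L|> is 0.500.
i1 = rng.exponential(size=n)
i2 = rng.exponential(size=n)
print(l_test_mean(i1, i2))  # close to 0.500

# A perfect twin records the average of two independent true intensities,
# which narrows the intensity distribution and pulls <|L|> down to 0.375.
j1 = 0.5 * (rng.exponential(size=n) + rng.exponential(size=n))
j2 = 0.5 * (rng.exponential(size=n) + rng.exponential(size=n))
print(l_test_mean(j1, j2))  # close to 0.375
```

An observed L-curve hugging the perfect-twin line, as in your plot, is what drives the 0.49 refined value.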

My understanding is that if the twin fraction is lower than 15%, I still have hope of solving the structure (by molecular replacement in this case) with a reasonable R value, but a twin fraction of 0.49 is almost a perfect twin, which makes detwinning impossible, and refinement will stall at high R values (in my case: R-free start 0.4199, R-work start 0.4121; R-free final 0.4038, R-work final 0.3640 after running Phenix refinement with the twin law -h,-k,l).
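The reason detwinning breaks down near a twin fraction of 0.5 can be seen directly from the classical detwinning equations. This is a generic illustration of that algebra, not anything from the Phenix code:

```python
# Classical detwinning assumes the observed twin-related pair is
#   J1 = (1-a)*I1 + a*I2
#   J2 = a*I1 + (1-a)*I2
# Inverting gives I1 = ((1-a)*J1 - a*J2) / (1 - 2a), and likewise for I2.
# The 1/(1 - 2a) factor amplifies measurement noise without bound as a -> 0.5,
# which is why data with a twin fraction near 0.49 cannot be usefully detwinned.

def detwin(j1, j2, alpha):
    """Recover the true intensity pair from twin-related observations."""
    d = 1.0 - 2.0 * alpha
    i1 = ((1.0 - alpha) * j1 - alpha * j2) / d
    i2 = ((1.0 - alpha) * j2 - alpha * j1) / d
    return i1, i2

# Noise amplification factor 1/(1 - 2a) for increasing twin fractions.
for a in (0.10, 0.30, 0.45, 0.49):
    print(a, 1.0 / (1.0 - 2.0 * a))
```

This is also why refinement against twinned data (as phenix.refine does with a twin law) is preferred over detwinning followed by ordinary refinement when the twin fraction is high.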

My questions: which twin fraction estimate is more reliable? Is my data almost perfectly twinned?

I have attached the L-test, Britton analysis, and twin-fraction estimation graphs from phenix.xtriage, along with part of the log file from phenix.refine.

Many thanks in advance.

Xiao


_______________________________________________
phenixbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/phenixbb