As far as I remember from my early encounters with Denzo and Scalepack during my visit to Yale University at the beginning of the '90s (Alan Friedman, if you are listening, thank you for that), Scalepack already does a sort of "tinkering", so the French & Wilson statistics, known in other places that claim priority as Haiker statistics, are not necessary. However, maybe I am wrong, and input from Otwinowski and/or Minor may clarify the situation.
FF


Dr. Felix Frolow
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology and Biotechnology
Tel Aviv University 69978, Israel

Acta Crystallographica D, co-editor

e-mail: [email protected]
Tel:      ++972 3640 8723
Fax:      ++972 3640 9407
Cellular: ++972 547 459 608

On May 18, 2010, at 01:35 , Nathaniel Echols wrote:

On Mon, May 17, 2010 at 3:18 PM, Ed Pozharski <[email protected]> wrote:
I have always used denzo/scalepack, and then scalepack2mtz to convert
the .sca file to an .mtz file, so my data is always processed
according to French & Wilson.

Now, from what you are saying, I understand that there is some possibility
of ending up with non-truncated data in phenix? And not only that, it
seems to be the default?

FYI, AutoSol, AutoMR, and Phaser all accept scalepack files as input (and, I think, d*TREK or XDS files as well), and generate MTZ files as output, so if a user jumps directly from HKL2000 to Phenix, it would be very easy to skip the French & Wilson step.  The need to run an extra conversion step in a different suite is not going to be obvious to grad students (or to many, if not most, postdocs).
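As a rough illustration (the file name data.sca is hypothetical, and this assumes the stock iotbx reader behavior), a few lines of cctbx show that such a file arrives as raw intensities, i.e. with no French & Wilson treatment applied yet:

    from iotbx.reflection_file_reader import any_reflection_file

    # Read a scalepack file (hypothetical name) and list its contents.
    hkl_file = any_reflection_file("data.sca")
    for array in hkl_file.as_miller_arrays(merge_equivalents=True):
        print(array.info().label_string())
        # Scalepack data are intensities; True here means no
        # intensity-to-amplitude (French & Wilson) conversion yet.
        print("intensities:", array.is_xray_intensity_array())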

We've discussed implementing the French & Wilson protocol in CCTBX, but I don't know how much work that would be (since I still don't know what it actually does, even after reading this entire discussion).

-Nat
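For reference, the heart of French & Wilson (1978) is a Bayesian estimate: treat the true intensity J >= 0 as having a Wilson prior (exponential for acentric reflections, with mean set by the average intensity in the resolution shell), model the measurement as J plus Gaussian noise of width sigma(I), and report the posterior means <J> and <F> = <sqrt(J)>, so weak and negative measured intensities become small positive amplitudes rather than being rejected. Below is a minimal numerical sketch of the acentric case only (plain numpy, hypothetical function name, brute-force quadrature where the published procedure uses precomputed tables; centric reflections need a different prior):

    import numpy as np

    def french_wilson_acentric(i_obs, sig_i, sigma_wilson):
        # Candidate true intensities J >= 0 on a uniform grid.
        j = np.linspace(0.0, max(i_obs, 0.0) + 10.0 * sig_i, 20001)
        # log posterior = Gaussian log-likelihood + exponential (Wilson)
        # log-prior: -(i_obs - J)^2 / (2 sigI^2) - J / <I>.
        log_post = -0.5 * ((i_obs - j) / sig_i) ** 2 - j / sigma_wilson
        w = np.exp(log_post - log_post.max())   # rescale for stability
        w /= w.sum()                            # normalized posterior weights
        mean_j = float((j * w).sum())           # posterior <J>
        mean_f = float((np.sqrt(j) * w).sum())  # posterior <F> = <sqrt(J)>
        return mean_j, mean_f

    # Even a negative measured intensity gives a small positive amplitude:
    print(french_wilson_acentric(i_obs=-5.0, sig_i=10.0, sigma_wilson=50.0))

Production implementations additionally estimate sigma_wilson from resolution-shell averages and apply outlier rejection, but the posterior-mean idea above is the core of the conversion.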
_______________________________________________
phenixbb mailing list
[email protected]
http://phenix-online.org/mailman/listinfo/phenixbb