Search results for query "look through"
- 520 messages

Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
by Billy Poon
Hi Keitaro,
In Phenix, we set use_internal_variance to false whenever possible. We had
users ask about the difference in merging statistics, which led to the
discussion about the parameter with Richard. We picked the default that is
mostly consistent with previous versions and added an option to change it,
but we must have missed certain cases where the parameter was not set.
Thanks for finding it!
phenix.merging_statistics is just a different way for calling the same code
as iotbx.merging_statistics, so the default value for use_internal_variance
is true on the command-line. However, I explicitly set the default to false
in the GUI for phenix.merging_statistics. phenix.table_one also sets
use_internal_variance to false. However, looking through other code in the
Phenix tree, there are some instances where use_internal_variance is set to
false with no option to change it, so we will double check if that is the
behavior that we want.
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
On Tue, Nov 1, 2016 at 7:06 AM, Keitaro Yamashita <k.yamashita(a)spring8.or.jp
> wrote:
> Dear Richard,
>
> Thanks a lot. I hope some Phenix developer will make a comment.
>
> Best regards,
> Keitaro
>
> 2016-11-01 20:19 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> > Dear Keitaro,
> >
> > I've made the change you suggested in merging_statistics.py - it looks
> like an oversight, which didn't affect xia2 since we are always calculating
> merging statistics given an scaled but unmerged mtz file, never an XDS or
> scalepack-format file.
> >
> > As to what defaults Phenix uses, that is better left to one of the
> Phenix developers to comment on.
> >
> > Cheers,
> >
> > Richard
> >
> > Dr Richard Gildea
> > Data Analysis Scientist
> > Tel: +441235 77 8078
> >
> > Diamond Light Source Ltd.
> > Diamond House
> > Harwell Science & Innovation Campus
> > Didcot
> > Oxfordshire
> > OX11 0DE
> >
> > ________________________________________
> > From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces@phenix-
> online.org] on behalf of Keitaro Yamashita [k.yamashita(a)spring8.or.jp]
> > Sent: 01 November 2016 10:41
> > To: cctbx mailing list
> > Subject: Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
> >
> > Dear Richard and everyone,
> >
> > Thanks for your reply. What kind of input do you give to
> > iotbx.merging_statistics in xia2? For example, when XDS file is given,
> > use_internal_variance=False is not passed to merge_equivalents()
> > function. Please look at the lines of
> > filter_intensities_by_sigma.__init__() in iotbx/merging_statistics.py.
> > When sigma_filtering == "xds" or sigma_filtering == "scalepack",
> > array_merged is recalculated using merge_equivalents() with default
> > arguments.
> >
> > If nobody disagrees, I would like to commit the fix so that
> > use_internal_variance variable is passed to all merge_equivalents()
> > function calls.
> >
> >
> > I am afraid that the behavior in the phenix-1.11 would be confusing.
> > In phenix.table_one (mmtbx/command_line/table_one.py),
> > use_internal_variance=False is default. This will be OK with the fix
> > I suggested above.
> >
> > Can it also be default in phenix.merging_statistics, not to change the
> > program behavior through phenix versions?
> >
> >
> > Best regards,
> > Keitaro
> >
> > 2016-11-01 18:21 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> >> Dear Keitaro,
> >>
> >> iotbx.merging_statistics does have the option to change the parameter
> use_internal_variance. In xia2 we use the defaults
> use_internal_variance=False, eliminate_sys_absent=False, n_bins=20, when
> calculating merging statistics which give comparable results to those
> calculate by Aimless:
> >>
> >> $ iotbx.merging_statistics
> >> Usage:
> >> phenix.merging_statistics [data_file] [options...]
> >>
> >> Calculate merging statistics for non-unique data, including R-merge,
> R-meas,
> >> R-pim, and redundancy. Any format supported by Phenix is allowed,
> including
> >> MTZ, unmerged Scalepack, or XDS/XSCALE (and possibly others). Data
> should
> >> already be on a common scale, but with individual observations unmerged.
> >> Diederichs K & Karplus PA (1997) Nature Structural Biology 4:269-275
> >> (with erratum in: Nat Struct Biol 1997 Jul;4(7):592)
> >> Weiss MS (2001) J Appl Cryst 34:130-135.
> >> Karplus PA & Diederichs K (2012) Science 336:1030-3.
> >>
> >>
> >> Full parameters:
> >>
> >> file_name = None
> >> labels = None
> >> space_group = None
> >> unit_cell = None
> >> symmetry_file = None
> >> high_resolution = None
> >> low_resolution = None
> >> n_bins = 10
> >> extend_d_max_min = False
> >> anomalous = False
> >> sigma_filtering = *auto xds scala scalepack
> >> .help = "Determines how data are filtered by SigmaI and I/SigmaI.
> XDS"
> >> "discards reflections whose intensity after merging is less
> than"
> >> "-3*sigma, Scalepack uses the same cutoff before merging,
> and"
> >> "SCALA does not do any filtering. Reflections with negative
> SigmaI"
> >> "will always be discarded."
> >> use_internal_variance = True
> >> eliminate_sys_absent = True
> >> debug = False
> >> loggraph = False
> >> estimate_cutoffs = False
> >> job_title = None
> >> .help = "Job title in PHENIX GUI, not used on command line"
> >>
> >>
> >> Below is my email to Pavel and Billy when we discussed this issue by
> email a while back:
> >>
> >> The difference between use_internal_variance=True/False is explained
> in Luc's document here:
> >>
> >> libtbx.pdflatex $(libtbx.find_in_repositories cctbx/miller)/equivalent_
> reflection_merging.tex
> >>
> >> Essentially use_internal_variance=False uses only the unmerged sigmas
> to compute the merged sigmas, whereas use_internal_variance=True uses
> instead the spread of the unmerged intensities to compute the merged
> sigmas. Furthermore, use_internal_variance=True uses the largest of the
> variance coming from the spread of the intensities and that computed from
> the unmerged sigmas. As a result, use_internal_variance=True can only ever
> give lower I/sigI than use_internal_variance=False. The relevant code in
> the cctbx is here:
> >>
> >> https://sourceforge.net/p/cctbx/code/HEAD/tree/trunk/
> cctbx/miller/merge_equivalents.h#l379
> >>
> >> Aimless has a similar option for the SDCORRECTION keyword, if you set
> the option SAMPLESD, which I think is equivalent to
> use_internal_variance=True. The default behaviour of Aimless is equivalent
> to use_internal_variance=False:
> >>
> >> http://www.mrc-lmb.cam.ac.uk/harry/pre/aimless.html#sdcorrection
> >>
> >> "SAMPLESD is intended for very high multiplicity data such as XFEL
> serial data. The final SDs are estimated from the weighted population
> variance, assuming that the input sigma(I)^2 values are proportional to the
> true errors. This probably gives a more realistic estimate of the error in
> <I>. In this case refinement of the corrections is switched off unless
> explicitly requested."
> >>
> >> I think that the "external" variance is probably better if the sigmas
> from the scaling program are reliable, or for low multiplicity data. For
> high multiplicity data or if the sigmas from the scaling program are not
> reliable, then "internal" variance is probably better.
> >>
> >> Cheers,
> >>
> >> Richard
> >>
> >> Dr Richard Gildea
> >> Data Analysis Scientist
> >> Tel: +441235 77 8078
> >>
> >> Diamond Light Source Ltd.
> >> Diamond House
> >> Harwell Science & Innovation Campus
> >> Didcot
> >> Oxfordshire
> >> OX11 0DE
> >>
> >> ________________________________________
> >> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces@phenix-
> online.org] on behalf of Keitaro Yamashita [k.yamashita(a)spring8.or.jp]
> >> Sent: 01 November 2016 07:23
> >> To: cctbx mailing list
> >> Subject: [cctbxbb] use_internal_variance in iotbx.merging_statistics
> >>
> >> Dear Phenix/CCTBX developers,
> >>
> >> iotbx/merging_statistics.py is used by phenix.merging_statistics,
> >> phenix.table_one, and so on. By upgrading phenix from 1.10.1 to 1.11,
> >> merging statistics-related codes were significantly changed.
> >>
> >> Previously, miller.array.merge_equivalents() was always called with
> >> argument use_internal_variance=False, which is consistent with XDS,
> >> Aimless and so on. Currently, use_internal_variance=True is default,
> >> and cannot be changed by users (see below).
> >>
> >> These changes were made by @afonine and @rjgildea in rev. 22973 (Sep
> >> 26, 2015) and 23961 (Mar 8, 2016). Could anyone explain why these
> >> changes were introduced?
> >>
> >> https://sourceforge.net/p/cctbx/code/22973
> >> https://sourceforge.net/p/cctbx/code/23961
> >>
> >>
> >> My points are:
> >>
> >> - We actually cannot control use_internal_variance= parameter because
> >> it is not passed to merge_equivalents() in class
> >> filter_intensities_by_sigma.
> >>
> >> - In previous versions, if I gave XDS output to
> >> phenix.merging_statistics, <I/sigma> values calculated in the same way
> >> (as XDS does) were shown; but not in the current version.
> >>
> >> - For (for example) phenix.table_one users who expect this behavior,
> >> it can give inconsistency. The statistics would not be consistent with
> >> the data used in refinement.
> >>
> >>
> >> cf. the related discussion in cctbxbb:
> >> http://phenix-online.org/pipermail/cctbxbb/2012-October/000611.html
> >>
> >>
> >> Best regards,
> >> Keitaro
> >> _______________________________________________
> >> cctbxbb mailing list
> >> cctbxbb(a)phenix-online.org
> >> http://phenix-online.org/mailman/listinfo/cctbxbb
> >>
> >> --
> >> This e-mail and any attachments may contain confidential, copyright and
> or privileged material, and are for the use of the intended addressee only.
> If you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> >> Any opinions expressed within this e-mail are those of the individual
> and not necessarily of Diamond Light Source Ltd.
> >> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> >> Diamond Light Source Limited (company no. 4375679). Registered in
> England and Wales with its registered office at Diamond House, Harwell
> Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
> >>
> >>
> >> _______________________________________________
> >> cctbxbb mailing list
> >> cctbxbb(a)phenix-online.org
> >> http://phenix-online.org/mailman/listinfo/cctbxbb
> >
> > _______________________________________________
> > cctbxbb mailing list
> > cctbxbb(a)phenix-online.org
> > http://phenix-online.org/mailman/listinfo/cctbxbb
> >
> > _______________________________________________
> > cctbxbb mailing list
> > cctbxbb(a)phenix-online.org
> > http://phenix-online.org/mailman/listinfo/cctbxbb
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
8 years, 8 months

Re: [phenixbb] Unable to install phenix 1.8.1-1168 on Scientific Linux 6.3 64 bits
by Kay Diederichs
Davi,
This looks to me like you tried installation using e.g. the fc14
installer (that has a glibc "too new" for SL6), and installed with the
fc3 on top of that (SL6 works well with the fc13 or lower installer).
Since the installers write to the same directory, the fc3 installer
finds that the files are already there and does basically nothing that
would fix the problem, i.e. replace the binaries and libs.
So just remove the /usr/local/phenix-1.8.1-1168 directory (or its
contents) before you try a different installer.
(Side note: maybe the installer should do that automatically, but there
are probably reasons why it doesn't)
HTH,
Kay
On 01/29/2013 12:31 AM, phenixbb-request(a)phenix-online.org wrote:
> Message: 5
> Date: Fri, 25 Jan 2013 16:15:01 -0800
> From: Davi de Miranda Fonseca<davi.fonseca(a)ntnu.no>
> To: phenixbb(a)phenix-online.org
> Subject: [phenixbb] Unable to install phenix 1.8.1-1168 on Scientific
> Linux 6.3 64 bits
> Message-ID:<51032005.9040800(a)ntnu.no>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Dear all,
>
> I am unable to install phenix 1.8.1-1168 on Scientific Linux 6.3 64
> bits. Hence, I would greatly appreciate any 2 cents (or diamonds).
>
> I installed Scientific Linux 6.3 64 bits, updated everything and
> installed a couple of things that I think would be necessary. Then I
> uncompressed phenix-installer-1.8.1-1168-intel-linux-2.6-x86_64-fc3.tar
> to /tmp and tried to install, however I got some errors. Hence, I
> installed a couple more things and tried installing again.
> This time it went through most of the installation but failed during the
> configuration of phenix packages.
>
> **** Here is my shell output:
>
> [root@scientifix phenix-installer-1.8.1-1168]# ./install
>
> ==========================================================================
> PHENIX Installation
>
> version: 1.8.1
> release tag: 1168
> machine type: intel-linux-2.6-x86_64
> source machine type: intel-linux-2.6-x86_64
> OS version: linux
> user shell: /bin/bash
> destination: /usr/local
> =========================================================================
>
> ==========================================================================
> Installing from binary bundle
> ==========================================================================
>
> bundle file:
> bundle-1.8.1-1168-intel-linux-2.6-x86_64.tar.gz
> PHENIX installation source directory set to:
> /tmp/phenix-installer-1.8.1-1168
> PHENIX installation target directory<PHENIX_LOC> set to:
> /usr/local/phenix-1.8.1-1168
> PHENIX installation build directory set to:
> <PHENIX_LOC> /build/intel-linux-2.6-x86_64
> PHENIX temporary build directory set to:
> build-binary/intel-linux-2.6-x86_64/scientifix/tmp
> PHENIX installation log directory set to:
> build-binary/intel-linux-2.6-x86_64/scientifix/log
>
> ==== warning: "/usr/local/phenix-1.8.1-1168" already exists ====
> ==== warning: cannot determine installation time-stamp, proceeding ====
>
>
> installing binary components
>
> removing existing files in
> build-binary/intel-linux-2.6-x86_64/scientifix/tmp/binary
> finding installed versions
> finding binary versions
> binary components up-to-date
> the binary package will not be installed
>
> setting environment
>
> configuring PHENIX components
>
> Error configuring: see
> /tmp/phenix-installer-1.8.1-1168/build-binary/intel-linux-2.6-x86_64/scientifix/log/binary.log
> for details
>
>
>
>
> ***** And here are the last lines of
> /tmp/phenix-installer-1.8.1-1168/build-binary/intel-linux-2.6-x86_64/scientifix/log/binary.log:
>
> ./build/intel-linux-2.6-x86_64/include/
> ./build/intel-linux-2.6-x86_64/include/phaser_defaults.params
> ./build/intel-linux-2.6-x86_64/include/phaser_nma_defaults.phil
> /usr/local/phenix-1.8.1-1168/build/intel-linux-2.6-x86_64/base/bin/python:
> /lib64/libc.so.6: version `GLIBC_2.14' not found (required by
> /usr/local/phenix-1.8.1-1168/build/intel-linux-2.6-x86_64/base/bin/python)
>
> By the way, Scientific linux 6.3 comes with glibc 2.12.
>
> It might be too naive from me, but would it be possible to copy
> libc.so.6 from glibc-2.14 somewhere and use an option during the
> installation to point to it? Any better ideas? (Like a step-wise
> description used in successful installation of phenix in Scientific
> Linux 6.3 64)
>
> Thank you for your time and help.
>
> Cheers,
> Davi
>
--
Kay Diederichs http://strucbio.biologie.uni-konstanz.de
email: Kay.Diederichs(a)uni-konstanz.de Tel +49 7531 88 4049 Fax 3183
Fachbereich Biologie, Universität Konstanz, Box M647, D-78457 Konstanz
This e-mail is digitally signed. If your e-mail client does not have the
necessary capabilities, just ignore the attached signature "smime.p7s".
12 years, 5 months

Re: [phenixbb] Error determining reference matches
by Jeff Headd
Hi Dan,
I agree with Pavel, we'll really need to see your files to be able to
figure out what is going on. Something is going wrong with the alignment
routine that is used to decide residue matching for reference restraints,
but without seeing the files it's hard to speculate as to why.
If you could send your files to me or Pavel (directly, not to the list)
we'll be able to sort out what is going wrong.
Thanks,
Jeff
On Fri, Aug 30, 2013 at 11:46 PM, Pavel Afonine <pafonine(a)lbl.gov> wrote:
> Hi Dan,
>
> the only way we can help (= debug the problem, fix it or/and suggest a
> work-around) is if we can reproduce it here locally. To do this we need all
> inputs and commands necessary to reproduce the refinement run that crashed.
> Please send files to me (not to whole mailing list) and I will look myself
> or redirect to a respective developer.
>
> Thanks,
> Pavel
>
>
> On 8/30/13 4:32 PM, Dan McNamara wrote:
>
> Hi all,
>
> I am encountering a strange crash when running phenix.refine through the
> command line with reference model restraints turned on. The references
> point to two different PDB files, one labeled as chain A and one labeled as
> chain B.
>
> This error occurs whether I have my refinement model in P1 with 192 chains
> or P2(1) with 96 chains. The chains are A-EZ (192) or A-BH (96). It appears
> to fail to align the reference models to the sequences in the refinement
> model. This error does not occur with other smaller target structures under
> 25 chains and reference models used with the same installation of phenix.
>
> I am hoping for any advice on why this might be happening or how I might
> get around it.
>
> ================== Extract refinement strategy and selections
> =================
>
> Refinement flags and selection counts:
> individual_sites = True (401472 atoms)
> torsion_angles = False (0 atoms)
> rigid_body = True (401472 atoms in 192 groups)
> individual_adp = False (iso = 0 aniso = 0)
> group_adp = True (201984 atoms in 25824 groups)
> tls = False (0 atoms in 0 groups)
> occupancies = False (0 atoms)
> group_anomalous = False
>
> n_use = 401472
> n_use_u_iso = 401472
> n_use_u_aniso = 0
> n_grad_site = 0
> n_grad_u_iso = 0
> n_grad_u_aniso = 0
> n_grad_occupancy = 0
> n_grad_fp = 0
> n_grad_fdp = 0
> total number of scatterers = 401472
> *** Adding Reference Model Restraints ***
> determining reference matches automatically...
> Traceback (most recent call last):
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/build/intel-linux-2.6-x86_64/../../phenix/phenix/command_line/refine.py",
> line 11, in <module>
> command_line.run(command_name="phenix.refine", args=sys.argv[1:])
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/phenix/phenix/refinement/command_line.py",
> line 92, in run
> master_params=customized_master_params)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/phenix/phenix/refinement/driver.py",
> line 501, in __init__
> log=self.log)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 147, in __init__
> self.get_reference_dihedral_proxies()
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 460, in get_reference_dihedral_proxies
> log=self.log)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 323, in process_reference_groups
> moving_chain = mod_h)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/utils.py",
> line 426, in _ssm_align
> ssm_alignment = ccp4io_adaptbx.SSMAlignment.residue_groups(match=ssm)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 215, in residue_groups
> return cls( match = match, indexer = indexer )
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 176, in __init__
> self.pairs.append( ( get( f, indexer1 ), get( s, indexer2 ) ) )
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 173, in get
> assert identifier in indexer, "Id %s missing" % str( identifier )
> AssertionError: Id ('A', 1, ' ') missing
>
> Best,
> Dan
>
>
> _______________________________________________
> phenixbb mailing [email protected]://phenix-online.org/mailman/listinfo/phenixbb
>
>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
>
11 years, 10 months

Re: [cctbxbb] constrained hydrogen geometry
by Ralf W. Grosse-Kunstleve
Hi Luc,
> I was trying to figure out whether this interface would be sufficient
> and/or necessary for the Hydrogen geometry constraints. I am not
> quite sure I understand what is gradient_sum_matrix exactly in fact.
gradient_sum_matrix is not directly relevant to the hydrogen problem.
The only commonality is the chain rule. Roughly, if you have to
variables x, y which are "normally" independent and have gradients
df/dx and df/dy, and if under "special circumstances" x becomes
a function of y (or vice versa), you have to add up the gradients
according to the chain rule. If you work this out for the linear
relations given by the symmetry operations, you can cast the result
in the form of the "gradient sum matrices".
Below is a simple example, by Peter Zwart, constraining the sum of
occupancies to one.
In the hydrogen case, I'd think you just write down the equations
constraining the coordinates to each other. Then study them to
apply the chain rule. Done! :)
Regarding the interfaces: I usually think about interfaces after I've
worked out the math and/or algorithms. It is good to make interfaces
as similar as possible, but ultimately "form follows function",
as a famous architect taught us.
Most importantly, find class, function and variable names that convey
meaning even to the "reader" no immersed in the problem. That's more
than half the battle.
> It is the philosophy of the cctbx, isn't it? You have constructed
> the cctbx so as to use it as Python library, relegating C++ to some
> sort of assembly language for efficiency.
Yes, that's a good viewpoint.
> The lack of abstraction in the C++ code of the cctbx
I don't know what you mean. :)
> (hardly any inheritance,
Because it has a large potential for obfuscating what's going on.
Inheritance is like salt in the soup.
> no advanced genericity)
You must have overlooked Boost.Python. :)
If that doesn't change your mind, look closer at the implementation
of the array algebra header files in scitbx/array_family, including
the auto-generated header files in cctbx_build/include.
> would require wrapping a lot of the cctbx classes behind
> proxies inheriting from abstract types.
I find this approach cumbersome and fruitless. I am a big believer
in form follows function. Function follows form (design interfaces
first) doesn't work for me. I always try to achieve first what
I actually want to do, then find a good form for it.
> A typical example for me are those classes dealing
> with position constraints and ADP constraints. They
> are unrelated in C++, cctbx::sgtbx::site_constraints and
> cctbx::sgtbx::tensor_rank_2::constraints, although they have both have
> the same member functions listed above.
Functionally the member functions are unrelated, unless you can come
up with a general engine that applies the chain rule automatically to
a given function. (Such tools exist, e.g. adolc, but are heavy-duty.)
I think it would be counter-productive to tie adp/site constraints
code together just because the names of the methods are similar. I
consider modularity a more important value.
> Of course, from Python, it does not matter, thanks to duck typing:
> if two objects answer the same method calls, then they are by all
> practical means of the same type.
The adp/site constraints are not a good example. We don't want to
use them interchangeably.
I've tried to use the "functor" idea (unifies interfaces without
inheritance) in some places, but even that often turns out to be a
struggle. This is especially the case when another person adds on
(e.g. cctbx.xray.target_functors).
What counts the most in my opinion is to avoid redundancy. That's my
most important goal when writing source code, because redundancy
hampers progress and multiplies bugs, among other bad things.
I'm trying to use all tools available to cut back redundancy.
I find both templates and inheritance invaluable and use them where
they help reducing redundancy, but I do not see "use templates" or
"use inheritance" as goals in their own right.
> Our group would be more than happy to contribute them to the cctbx
> since we absolutely need them for our project.
That's great! I think you can achieve a lot quickly if you limit
yourself initially to Python. If you then check in what you have, I'd
be happy to walk you through the first few times moving time-critical
functionality to C++.
Cheers,
Ralf
By Peter Zwart:
Some ideas on how to restrain sums of variables to unity.
say we have
f(a,b,c)
with
a(x1) = x1
b(x2) = x2
c(x1,x2) = 1 - x1 - x2
Say we have from the refinement engine the following numerical values
(right hand side) for our partial derivatives:
df/da = Ga
df/db = Gb
df/dc = Gc
The chain rule says:
dq/dx = (dq/du)(du/dx) + (dq/dv)(dv/dx)
Thus
df/dx1 = (df/da)(da/dx1) + (df/dc)(dc/dx1) = Ga - Gc
df/dx2 = (df/db)(db/dx2) + (df/dc)(dc/dx2) = Gb - Gc
This gives you the rule to go from the full set of derivatives to the
reduced set of derivatives (for the general case):
#--------------------------------------------------
def pack_derivatives( g_array ):
result = g_array[0:g_array.size()-1 ]
result = result - g_array[ g_array.size()-1 ]
return result
#--------------------------------------------------
You also need to go from a reduced set of parameters to the full set of
parameters (for the general case):
#--------------------------------------------------
def expand_parameters( x ):
result = x.deep_copy()
result = result.append( 1 - flex.sum(x) )
return result
#--------------------------------------------------
This of course assumes that the 'last' variable is the non unique none
during refinement.
____________________________________________________________________________________
Any questions? Get answers on any topic at www.Answers.yahoo.com. Try it now.
18 years, 6 months

[cctbxbb] Overloading Python builtin names with local variables
by Graeme.Winter@Diamond.ac.uk
Morning all,
As a side-effect of looking at some of the Python3 refactoring I stumbled across a few places where e.g. range is used as a variable name. While this is fine and legal, when moving the code to Python3
xrange => range
and
from builtins import range
is a back-port of Python3 range to Python2 which allows the existing behaviour of xrange generator to be maintained
This is fine except for where range is a variable name and we then get a “variable referenced before assignment” type error
As a general statement I would say overloading Python reserved names with local variables is untidy at best, and can easily cause confusion (as well as real problems as identified above) - I have therefore taken the liberty of adding a tool to libtbx which can be used to find such variables to give people a chance to avoid them:
Grey-Area dxtbx :) [master] $ libtbx.find_reserved_names format/
Checking format/
format/FormatSMVRigakuSaturn.py:239 "format"
format = self._scan_factory.format("SMV")
format/FormatTIFFRayonixESRF.py:36 "format"
format = {LITTLE_ENDIAN: "<", BIG_ENDIAN: ">"}[order]
format/FormatTIFFRayonixESRF.py:29 "bytes"
width, height, depth, order, bytes = FormatTIFFRayonix.get_tiff_header(
format/FormatCBFMini.py:231 "format"
format = self._scan_factory.format("CBF")
format/FormatTIFFRayonix.py:176 "format"
format = self._scan_factory.format("TIFF")
format/FormatTIFFRayonix.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatTIFFRayonix.py:74 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatSMVRigakuEiger.py:243 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVADSC.py:212 "format"
format = self._scan_factory.format("SMV")
format/Registry.py:88 "format"
for format in sorted(self._formats, key=lambda x: x.__name__):
format/FormatSMVJHSim.py:141 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVRigakuPilatus.py:188 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVADSCSN928.py:47 "format"
format = self._scan_factory.format("SMV")
format/FormatRAXISIVSpring8.py:134 "format"
format = self._scan_factory.format("RAXIS")
format/FormatTIFFRayonixSPring8.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFFRayonix.get_tiff_header(
format/FormatSMVADSCmlfsom.py:37 "format"
format = self._scan_factory.format("SMV")
format/FormatTIFFBruker.py:174 "format"
format = self._scan_factory.format("TIFF")
format/FormatTIFFBruker.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatTIFFBruker.py:71 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatCBFMiniADSCHF4M.py:26 "format"
for format in ["%a_%b_%d_%H:%M:%S_%Y"]:
format/FormatCBFMiniADSCHF4M.py:169 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVADSCNoDateStamp.py:54 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVRigakuSaturnNoTS.py:228 "format"
format = self._scan_factory.format("SMV")
format/FormatPYunspecified.py:152 "format"
format = ""
format/FormatSMVRigakuA200.py:234 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniEigerPhotonFactory.py:131 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVTimePix_SU.py:97 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniPilatus.py:83 "format"
format = self._scan_factory.format("CBF")
format/FormatCBFMiniPilatusHelpers.py:22 "format"
for format in ["%Y-%b-%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S", "%Y/%b/%d %H:%M:%S"]:
format/FormatSMVADSCSNAPSID19.py:91 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniEiger.py:216 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVNOIR.py:218 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVCMOS1.py:197 "format"
format = self._scan_factory.format("SMV")
format/FormatRAXIS.py:238 "format"
format = self._scan_factory.format("RAXIS")
is an example where “format” is scattered all over the place but is also a useful function - replacing with fmt will mean the same and avoid this overloading, as an example.
This uses AST parsing to work through the code and find variable assignments - doing this for dials found lots of examples of “file” and “type” being used.
How much people care about this is a local issue, but I thought the tool would be useful
All the best Graeme
--
This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
6 years, 2 months

Re: [cctbxbb] Scons for python3 released
by markus.gerstel@diamond.ac.uk
Hi,
Just to add to this. I think Graeme's find_clutter idea has merit, which could certainly check for
from __future__ import absolute_import, print_statement
which would cover a lot of ground.
I found this also a worthwhile to read through: http://python-future.org/compatible_idioms.html
Especially handling things like xrange vs range should be done with a bit of thinking and when ancient code needs to be touched then it also presents an opportunity to make it clearer what it actually does. For example, the very first commit on the Python3 branch changed xrange->range here, and I wondered... https://github.com/cctbx/cctbx_project/commit/f10fd505841de372098bca83c40fc… (untested)
Finally, I think doing all 2-3-compatible conversions in a branch, for example print -> print() as it is happening now, will be a nightmare to merge later because you will be touching large portions of a large numbers of files, but other development does not stop. And, let's be honest, nobody will review a 100k LoC change set anyway.
I would suggest we do those refactoring changes directly on master. A single type of change (ie. print->print()) on a per-file basis along with "from __future__ import print_statement" in say 30 files within one directory tree per commit? Much more manageable.
Oh, and we need the future module installed from bootstrap.
-Markus
-----Original Message-----
From: cctbxbb-bounces(a)phenix-online.org [mailto:[email protected]] On Behalf Of Graeme.Winter(a)diamond.ac.uk
Sent: 17 October 2017 15:20
To: cctbxbb(a)phenix-online.org
Subject: Re: [cctbxbb] Scons for python3 released
Hi Robert
I think having more than one person independently look at the p3 problem is no bad thing - also with the geography it would seem perfectly possible for you / Nick to meet up and compare notes on this - it’s certainly something I would support.
Clearly there are a lot of things which could get caught up in the net with the p3 update - for example build system discussions, cleaning out cruft that is in there to support python 2.3 etc… however I did not read that Nick thought SCons3 was a waste of time - I think he was getting at the point that this is part of the move, and that there is also a lot of related work. Also that having p2 / p3 support for at least a transition rather than the “full Brexit” of no longer supporting p2 beyond the first date where p3 works would be good. I could imagine this transition period being O(1000 days) even.
I think the migration process is going to be a complex one, but doable. One thing I think we do need is to make sure that the code base as pushed by developers remains compatible with p2 and p3 - so perhaps extending find_clutter to check for things which only work in one or the other? Then developers would learn the tricks themselves and (ideally) not push p2-only or p3-only code, at least until post-transition. This I would compare with the svn to git move, which caused some grumbling and a little confusion but was ultimately successful…
Hope this is constructive, cheerio
Graeme
On 17 Oct 2017, at 13:50, R.D. Oeffner <rdo20(a)cam.ac.uk<mailto:[email protected]>> wrote:
Hi Nick and others,
That sounds like a great effort. A shame I didn't know about this. I have not had time to look in detail into your work but will nevertheless summarize my thoughts and work I have been doing lately in an effort to move CCTBX to python3.
I am not sure why it would be a waste of time to use SCons3.0 with python3 as I think you are suggesting. To me it seems as a necessary step in creating a codebase that runs both on python2 and python3. Do I understand correctly that as long as CCTBX code is changed to comply with python3 and remain python2 compliant then such a codebase can be used in place of the current python2 only codebase for derived projects such as Dials and Phenix? Assuming this is the case I think it is worth focusing just on CCTBX only for now.
My own attempt in porting CCTBX to python3 constitutes of the following steps:
* Replace Scons2 with Scons3
* Update the subset of Boost sources to version 1.63
* Run futurize stage1 and stage2 on the CCTBX
* Build base components like libtiff, hdf5, python3.6 + add-on modules)
* Run bootstrap.py build with Python3.6 repeatedly and provide mock-up fixes to allow the build to continue.
This work is almost near completion in the sense that the sources now can build but are unlikely to pass test due to the mock-up fixes which often constitutes of replacement of PyStringXXX functions with equivalent PyUnicodeXXX, PyBytestringXXX functions ignoring whether that is appropriate or not. These token fixes would also need to be guarded by #if PY_MAJOR_VERSION == 3 ... macros.
The sources are available on https://github.com/cctbx/cctbx_project/tree/Python3
The next steps are less well defined. One approach would be to set up a build system that migrates python2 code to python3 using the futurize script, then builds CCTBX and runs test and presents build log files online as in http://cci-vm-6.lbl.gov:8010/one_line_per_build. With a hook to GitHub this could also be done on the fly as people commit code to CCTBX. This should encourage people to write code that runs on python2 as well as python3. Eventually once all tests for CCTBX pass we are done and can merge this codebase into the master branch.
Robert
On 17/10/2017 11:56, Nicholas Devenish wrote:
Hi All,
I spent a little bit of time looking at python3/libtbx so have some input on this.
On Tue, Oct 10, 2017 at 6:16 PM, Billy Poon <bkpoon(a)lbl.gov<mailto:[email protected]>> wrote:
1) Use Python 2 to build Python 2 version of CCTBX (no work) This might not be as simple as "No Work" - cctbx is a few years behind on SCons versions (libtbx.scons --version suggests 2.2.0, from 2012) so there *might* be other issues upgrading the SCons version to 3.0, before trying python3.
I also feel that SCons-Python3 is something of a red herring - the only thing that non-python3-SCons prevents is an 100% python3-only codebase, and unless the plan is to migrate the entire codebase, including all downstream dependencies (like dials) to python3-only in one massive step (probably impossible), everything would need to be dual 2/3 first, and only then a decision taken on deprecating 2.7 support.
More usefully, outside of a small core of libtbx code, not much of the buildsystem files are bound to the main project so this shouldn't be too difficult. In fact, I've experimented with converting to CMake, and as one of the approaches I explored, I wrote a SCons-emulator that read and parsed the build *without* any scons/cctbx dependencies. To parse the entire "builder=dials" SCons-tree only required this subset of libtbx:
https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py…
[1]
(Note: my general CMake-work works but isn't complete/ready/documented for general viewing, and still much resembles a hacky project, but I thought that this was sufficient to decouple the buildsystem is usefully illustrative of how simple the task might be) Regarding general Python3 conversion, it's definitely not "Just changing the print statements". I undertook a study in august to convert libtbx (being the core that *everything* depends on) to dual
python2/3 and IIRC got most of the tests working in python3. It's a couple of months out-of-date, but is probably useful as a benchmark of the effort required. The repository links are:
https://github.com/ndevenish/cctbx_project/tree/py3k-modernize [2]
https://github.com/ndevenish/cctbx_project/tree/py3k [3] Probably best looked at with a graphical viewer to get a top-down view of the history. My approach was to separate manual/automatic changes as follows:
1. Remove legacy code/modules - e.g. old compatibility. The Optik removal came from this. We don't want to spend mental effort converting absorbed external libraries from a decade ago (see also e.g. pexpect, subprocess_with_fixes) 2. Make some manual fixes [Expanded as we go on] 3. Use futurize and modernize to update idioms ONLY e.g. remove
pre-2.7 deprecated ways of working. Each operation was done is a separate commit (so that changes are more visible and I thought people would have less objection than to a massive code-change dump), and each commit ran the test suite for libtbx. Some of the 'fixers' in each tool are complementary. If there are any problems with tests or automatic conversion, then fix the problem, put the fix into step 2, then start again. This step should be entirely scriptable. I had 17 commits for separate fixes in this chain.
This is the where the py3k-modernize branch stops, and should in principle be kept entirely safe to push back onto the python2-only repository. The next steps form the `py3k` branch (not being intended for direct pushing, is a little less organised - some of my changes could definitely be moved to step 2):
4. Run 'modernize' to convert the codebase to as much python2/3 as possible. This introduces the dependency on 'six'
5. Run tests, implement various fixes, repeat. This work was ongoing when I stopped working on the study.
Various (non-exhaustive) problems found:
- cStringIO isn't handled automatically, so these need to be fixed manually ( e.g.
https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbb…
[4] )
- Iterators needed to be fixed in cases where they were missed (next vs __next__)
- Rounding. Python3 uses 'Bankers Rounding' and there are formatting tests where this changes the output. I didn't know enough about the exact desired result to know the best way to fix this
- libtbx uses compiler.misc.mangle and I don't know why - this was always a private interface and was removed in 3.
- Moving print statements to functions - there was several failed tests relating to the old python2-print-soft-spacing behaviour, which was removed. Not too difficult, but definitely causes
- A couple of text/binary mode file issues, which seemed to be simple but may be more complicated than the test cases covered. I'd expect more issues with this in the format readers though.
I evaluated both the futurize (using future library) and modernize (using the well known six library) tools, both being different approaches to 2to3, but for dual 2/3 codebases. I liked the approach of futurize to attempt to make code look as python3-idiomatic as-possible, but some of the performance implications were slightly
opaque: e.g. libtbx makes heavy use of cStringIO (presumably for a good reason), and futurize converted all of these back to using StringIO in the Python2 case, so settled on modernize as I felt two different compatibility libraries would be messy. In either case, using the library means that you can identify exactly everywhere that needs to be removed when moving to python3 only.
My conclusions:
- Automatic tools are useful for the bulk of changes, but there are still lots of edge cases
- The complexity means that a phased approach is *absolutely* necessary - starting by converting the core to 2/3 and only moving to
3 once everything downstream is converted.Trying to convert everything at once would likely mean months of feature-freeze.
- A separate "Remove legacy" cleaning phase might be very useful, though obviously the domain of this could be endless
- SCons is probably the least important of the conversion worries Nick
Links:
------
[1]
https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py…
[2] https://github.com/ndevenish/cctbx_project/tree/py3k-modernize
[3] https://github.com/ndevenish/cctbx_project/tree/py3k
[4]
https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbb…
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org<mailto:[email protected]>
http://phenix-online.org/mailman/listinfo/cctbxbb
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org<mailto:[email protected]>
http://phenix-online.org/mailman/listinfo/cctbxbb
--
This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/cctbxbb
7 years, 8 months

Re: [phenixbb] staraniso/phenix.refine
by Gerard Bricogne
Dear Andrea,
As Clemens Vonrhein is not not subscribed to the phenixbb, I am sending
the message below on his behalf.
With best wishes,
Gerard.
----------------------------------------------------------------------------
Dear Andrea,
Besides the paragraph in the Release Notes of our latest BUSTER release that
Luca directed you to, you may find it useful to consult the following Wiki
page:
https://www.globalphasing.com/buster/wiki/index.cgi?DepositionMmCif
Although it has not yet been updated so as to mention explicitly the cases
where refinement was done with REFMAC or phenix-refine, the procedure and
commands given there will work in these two cases as well. Our intention was
to get feedback from our users first, then to announce the extended
capability of aB_deposition_combine more broadly - but your question clearly
shows that we should make that announcement sooner rather than later.
Getting back to your message: although uploading either the phenix-generated
mtz file or the mmCIF file generated by aB_deposition_combine would be
acceptable to the OneDep system, we would highly recommend using those from
aB_deposition_combine for several reasons. First of all, these files should
already contain the data from Phenix but also provide additional data blocks
with richer reflection data (and metadata!), going right back to the scaled
and unmerged data without any cut-off applied. Furthermore, it should ensure
that the correct data quality metrics (i.e. merging statistics) are included
into the mmCIF files uploaded during deposition. Of course, you don't need
to use aB_deposition_combine to generate such a set of mmCIF files (model
and reflection data) - these are after all just text files - but the devil
is often in the details.
It may be useful to add some fairly general background information in order
to avoid misunderstandings and false impressions - so here are some relevant
points you may wish to consider.
(a) The deposition of model and associated data into the PDB should allow
for two things: (1) the validation of the current model and of its
parametrisation on their own, as well as a check against the data used
during refinement, and (2) the provision of additional data to allow
further analysis of, and improvements to, the data handling as well as
the model itself.
For the first task one needs to deposit the data exactly as used by the
depositor to arrive at the current model, i.e. the input reflection
data (as used as input to the refinement program) plus all available
Fourier coefficients (as output by that refinement program) needed to
compute the maps used in modeling.
The second task requires deposition of less and less "processed"
versions of the experimental data - ultimately going back to the raw
diffraction images. This might involve several sets of reflection data,
e.g. (i) scaled, merged and corrected reflection data, (ii) scaled and
merged data before correction and/or cutoff, and (iii) data scaled and
unmerged before outlier rejection - plus additional variants.
(b) When going from the raw diffraction images towards the final model, a
lot of selection and modification of the initial integrated intensities
takes place - from profile fitting, partiality scaling, polarization
correction, outlier rejection, empirical corrections or analytical
scale determination and error model adjustments, all the way the
application of truncation thresholds (isotropically, elliptically or
anisotropically), conversion to amplitudes (and special treatment of
weak intensities) and anisotropy correction.
There are often good reasons for doing all or some of these (with sound
scientific reasons underpinning them - not just waving some "magic
stick"), even if under special circumstances a deviation from standard
protocols is advisable. What is important from a developer's viewpoint
is to provide users with choices to influence these various steps and
to ensure that as many intermediate versions of reflection data as
possible are available for downstream processes and deposition.
(c) At deposition time, the use of a single reflection datablock is often
not adequate to provide all that information (e.g. refinement programs
might output not the original Fobs going in, but those anisotropically
rescaled against the current model - so a second block might be needed
to hold the original Fobs data, free from that rescaling). If different
types of map coefficients are to be provided, they too need to come in
different mmCIF datablocks (2mFo-DFc and mFo-DFc for observed data
only; 2mFo-DFc filled in with DFc in a sphere going out to the highest
diffraction limit; 2mFo-DFc filled in with DFc for only the reflections
deemed "observable" by STARANISO; F(early)-F(late) map coefficients for
radiation damage analysis; coefficients for anomalous Fourier maps etc).
So ultimately we need to combine (going backwards): the refinement
output data, the refinement input data and all intermediate versions of
reflection data (see above) ... ideally right back to the
scaled+unmerged intensities with a full description of context (image
numbers, detector positions etc). This is what the "Data Processing
Subgroup" of the PDBx/mmCIF Working Group has been looking at
extensively over the last months, and about which a paper has just been
submitted.
(d) autoPROC (running e.g. XDS, AIMLESS and STARANISO) and the STARANISO
server provide multi-datablock mmCIF files to simplify the submission
of a rich set of reflection data. autoPROC provides two versions here:
one with traditional isotropic analysis, and the another for the
anisotropic analysis done in STARANISO. It is up to the user to decide
which one to use for which downstream steps.
To help in combining the reflection data from data processing with that
from refinement, and in transferring all relevant meta data (data
quality metrics) into the model mmCIF file, we provide a tool called
aB_deposition_combine: it should hopefully work for autoPROC (with or
without STARANISO) data in conjunction with either BUSTER, REFMAC or
Phenix refinement results. At the end the user is provided with two
mmCIF files for deposition: (1) a model file with adequate data quality
metrics, and (2) a reflection mmCIF file with multiple datablocks all
the way back to the scaled+unmerged reflection data.
(e) It is important at the time of deposition to not just select whatever
reflection data file happens to be the first to make it through the
OneDep system, as this can often lead to picking up the simplest
version of an MTZ or mmCIF file, but to choose, if at all possible, the
most complete reflection data file containing the richest metadata.
Otherwise we simply increase the number of those rather unsatisfying
PDB entries whose reflection data files contain only the very minimum
of information about what they actually represent and what data quality
metrics (such as multiplicity, internal consistency criteria etc) would
have been attached to them in a richer deposition.
*** Please note that the current OneDep system seems to complain about the
fact that unmerged data blocks come without a test-set flag column:
this looks to us like an oversight, since test-set flags are attributes
belonging to the refinement process, so that it makes no logical sense
to require them for unmerged data. This will probably requires some
rethinking/clarification/changes on the OneDep side.
A final remark: Instead of trying to give the impression that there is only
one right way of doing things, and therefore only a single set of reflection
data that should (or needs to) be deposited, it would seem more helpful and
constructive to try and provide a clear description of the various "views"
of the same raw diffraction data that can be provided by the various
approaches and data analysis methods we have at our disposal. Together with
more developments regarding the PDBx/mmCIF dictionary, and coordinated
developments in the OneDep deposition infrastructure, this will enable
better and richer depositions to be made, helping future (re)users as well
as software developers.
Cheers,
Clemens
----------------------------------------------------------------------------
--
On Mon, Dec 19, 2022 at 05:42:24PM -0500, Andrea Piserchio wrote:
> So,
>
> Both the phenix-generated mtz file (silly me for not checking this first)
> and the cif file generated by aB_deposition_combine can be uploaded on the
> PDB server.
>
> Thank you all for your help!!
>
> Andrea
>
> On Sat, Dec 17, 2022 at 5:06 PM Pavel Afonine <pafonine(a)lbl.gov> wrote:
>
> > Hi,
> >
> > two hopefully relevant points:
> >
> > - phenix.refine always produces an MTZ file that contains the copy of
> > all inputs plus all is needed to run refinement (free-r flags, for
> > example). So if you use that file for deposition you have all you need.
> >
> > - Unless there are strongly advocated reasons to do otherwise in your
> > particular case, you better use in refinement and deposit the original
> > data and NOT the one touched by any of available these days magic sticks
> > (that "correct" for anisotropy, sharpen or else!).
> >
> > Other comments:
> >
> > > - However, CCP41/Refmac5 does not (yet) read .cif reflection files. As
> > > far as I know, Phenix Refine does not (yet) neither.
> >
> > Phenix supports complete input / outputs of mmcif/cif format. For
> > example, phenix.refine can read/write model and reflection data in cif
> > format. It's been this way for a long time now.
> >
> > Pavel
> >
> >
> > On 12/16/22 17:32, Andrea Piserchio wrote:
> > > Dear all,
> > >
> > >
> > > I am trying to validate and then (hopefully) deposit a structure
> > > generated using the autoproc/staraniso staraniso_alldata-unique.mtz
> > > file as input for phenix.refine.
> > >
> > > Autoproc also produces a cif file ( Data_1_autoPROC_STARANISO_all.cif)
> > > specifically for deposition.
> > >
> > > Long story short, the PDB validation server complains about the lack
> > > of a freeR set for both files. I realized that, at least for the cif
> > > file, the r_free_flag is missing (but why does the .mtz for the
> > > isotropic dataset work??),so I then tried to use for validation the
> > > *.reflections.cif file that can be generated by phenix.refine. This
> > > file can actually produce a validation report, but I still have some
> > > questions:
> > >
> > > 1) Is it proper to use the .reflections.cif file for this purpose?
> > > During the upload I do see some errors (see pics); also, the final
> > > results show various RSRZ outliers in regions of the structure that
> > > look reasonably good by looking at the maps on coot, which seems odd ...
> > >
> > > 2) In case the *.reflections.cif is not adequate/sufficient for
> > > deposition (I sent an inquiry to the PDB, but they have not responded
> > > yet), can I just add a _refln.status column to the autoPROC CIF file
> > > (within the loop containing the r_free_flag) where I insert “f” for
> > > r_free_flag = 0 and “o” everywhere else?
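> > >
> > > (In code form, the mapping I have in mind -- assuming autoPROC's flag
> > > uses 0 for the test set, which should be double-checked -- is simply:
> > >
> > > def status_from_flag(r_free_flag, free_value=0):
> > >     # 'f' = free (test-set) reflection, 'o' = observed working one
> > >     return "f" if r_free_flag == free_value else "o"
> > >
> > > applied to every row of the _refln loop.)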
> > >
> > >
> > > Thank you in advance,
> > >
> > >
> > > Andrea
> > >
> > > _______________________________________________
> > > phenixbb mailing list
> > > phenixbb(a)phenix-online.org
> > > http://phenix-online.org/mailman/listinfo/phenixbb
> > > Unsubscribe: phenixbb-leave(a)phenix-online.org
> >
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
> Unsubscribe: phenixbb-leave(a)phenix-online.org
2 years, 6 months

Re: [cctbxbb] Niggli-reduced cell C++ implementation
by Martin Uhrin
Dear Ralf,
you're quite right! I was looking at my basis along with the lattice, which
led me to think that the two systems should be equivalent, when in fact the
lattices are not!
Thanks for pointing this out.
I'll get back to you once I'm ready with a version for the repository.
Best,
-Martin
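
For anyone wanting to reproduce the comparison below, the check takes only
a few lines against the cctbx API (uctbx.unit_cell.niggli_cell is the real
entry point; the numbers are the ones from the thread):

from cctbx import uctbx

cell_1 = uctbx.unit_cell((4.630811, 4.630811, 4.630811, 90, 90, 90))
cell_2 = uctbx.unit_cell((3.27448, 5.67156, 5.67156, 99.5941, 106.779, 90))
# the two settings turn out to describe different lattices,
# so their Niggli-reduced cells differ as well
print(cell_1.niggli_cell())
print(cell_2.niggli_cell())
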
On 24 April 2012 20:42, Ralf Grosse-Kunstleve <rwgrosse-kunstleve(a)lbl.gov> wrote:
> Hi Martin,
>
> Based on
>
> iotbx.lattice_symmetry --unit_cell="4.630811 4.630811 4.630811 90 90 90"
>
> and
>
> iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941
> 106.779 90"
>
> the first unit cell is (obviously) cubic, the second is only monoclinic.
> Even with
>
> iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941
> 106.779 90" --delta=20
>
> it only comes back as orthorhombic.
>
> Is this what you expect?
>
> Ralf
>
>
>
> On Tue, Apr 24, 2012 at 10:57 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk> wrote:
>
>> Dear cctbxers,
>>
>> I've finally found the time to play around with a C++ version of the KG
>> algorithm and I've come across a result I don't understand. I've tried
>> both David's C++ and the cctbx Python niggli_cell() implementations, and
>> they both give roughly the same answer.
>>
>> I'm reducing the following cell given in two equivalent representations (a,
>> b, c, alpha, beta, gamma):
>>
>> Before:
>>
>> 1: 4.630811 4.630811 4.630811 90 90 90
>> 2: 3.27448 5.67156 5.67156 99.5941 106.779 90
>>
>> After:
>>
>> 1: 4.63081 4.63081 4.63081 90 90 90
>> 2: 3.27448 5.67154 5.67156 99.5941 90 106.778
>>
>> Looking at the trace, cell 1 undergoes step 3 and finishes while cell 2
>> undergoes steps 2, 3, 7 and 4.
>>
>> Does anyone know why these haven't converged to the same cell?
>>
>> Many thanks,
>> -Martin
>>
>> On 23 March 2012 17:12, Ralf Grosse-Kunstleve <rwgrosse-kunstleve(a)lbl.gov
>> > wrote:
>>
>>> Hi Martin,
>>> Let me know if you need svn write access to check in your changes. All I
>>> need is your sourceforge user id.
>>> Ralf
>>>
>>>
>>> On Fri, Mar 23, 2012 at 3:35 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk
>>> > wrote:
>>>
>>>> Dear David and Ralf,
>>>>
>>>> thank you for your encouragement.
>>>>
>>>> David: I'm more than happy to port your implementation to cctbx if
>>>> you're happy with this. Of course I don't want to step on your toes so if
>>>> you'd rather do it yourself (or not at all) that's cool.
>>>>
>>>> There may be some licensing issues to sort out, as it looks like cctbx
>>>> has a custom (non-viral) license, but the BSD license is likely compatible.
>>>>
>>>> On first impression I think a new class would be the way to go but I'd
>>>> have to look at the two algorithms in greater detail to be sure.
>>>>
>>>> All the best,
>>>> -Martin
>>>>
>>>>
>>>> On 22 March 2012 22:00, Ralf Grosse-Kunstleve <
>>>> rwgrosse-kunstleve(a)lbl.gov> wrote:
>>>>
>>>>> Hi Martin,
>>>>> You're very welcome to add a C++ version of the Krivy-Gruber algorithm
>>>>> to cctbx if that's what you had in mind.
>>>>> I'm not sure what's better, generalizing the fast-minimum-reduction
>>>>> code, or just having an independent implementation.
>>>>> Ralf
>>>>>
>>>>> On Thu, Mar 22, 2012 at 2:24 PM, Martin Uhrin <
>>>>> martin.uhrin.10(a)ucl.ac.uk> wrote:
>>>>>
>>>>>> Dear Cctbx community,
>>>>>>
>>>>>> Firstly I'd like to say thank you to Ralf, Nicholas and Paul for
>>>>>> their expertly thought-through implementation of the reduced cell
>>>>>> algorithm. I've found it to be extremely useful for my work.
>>>>>>
>>>>>> My code is all in C++ and I'd like to be able to use the Krivy-Gruber
>>>>>> algorithm. My understanding is that only the reduced (Buerger) unit cell
>>>>>> algorithm is implemented in C++ [1], which guarantees shortest lengths but
>>>>>> not unique angles. From my understanding, the Krivy-Gruber algorithm would
>>>>>> also guarantee uniqueness of the unit cell angles; however, this is only
>>>>>> implemented in Python [2]. Sorry to be so verbose, I just wanted to check
>>>>>> that I was on the right page.
>>>>>>
>>>>>> Would it be possible for me to implement the Krivy-Gruber in C++ by
>>>>>> adding the epsilon_relative parameter and following the procedure
>>>>>> found in the Python version?
>>>>>>
>>>>>> Many thanks,
>>>>>> -Martin
>>>>>>
>>>>>> [1]
>>>>>> http://cctbx.sourceforge.net/current/c_plus_plus/classcctbx_1_1uctbx_1_1fas…
>>>>>> [2]
>>>>>> http://cctbx.sourceforge.net/current/python/cctbx.uctbx.krivy_gruber_1976.h…
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Martin Uhrin                          Tel: +44 207 679 3466
>>>>>> Department of Physics & Astronomy     Fax: +44 207 679 0595
>>>>>> University College London             martin.uhrin.10(a)ucl.ac.uk
>>>>>> Gower St, London, WC1E 6BT, U.K.      http://www.cmmp.ucl.ac.uk
>>>>>>
>>>>>> _______________________________________________
>>>>>> cctbxbb mailing list
>>>>>> cctbxbb(a)phenix-online.org
>>>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>>>
>>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> cctbxbb mailing list
>>>>> cctbxbb(a)phenix-online.org
>>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Martin Uhrin                          Tel: +44 207 679 3466
>>>> Department of Physics & Astronomy     Fax: +44 207 679 0595
>>>> University College London             martin.uhrin.10(a)ucl.ac.uk
>>>> Gower St, London, WC1E 6BT, U.K.      http://www.cmmp.ucl.ac.uk
>>>>
>>>> _______________________________________________
>>>> cctbxbb mailing list
>>>> cctbxbb(a)phenix-online.org
>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>
>>>>
>>>
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>
>>>
>>
>>
>> --
>> Martin Uhrin                          Tel: +44 207 679 3466
>> Department of Physics & Astronomy     Fax: +44 207 679 0595
>> University College London             martin.uhrin.10(a)ucl.ac.uk
>> Gower St, London, WC1E 6BT, U.K.      http://www.cmmp.ucl.ac.uk
>>
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>
>>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
--
Martin Uhrin                          Tel: +44 207 679 3466
Department of Physics & Astronomy     Fax: +44 207 679 0595
University College London             martin.uhrin.10(a)ucl.ac.uk
Gower St, London, WC1E 6BT, U.K.      http://www.cmmp.ucl.ac.uk
13 years, 2 months

Re: [phenixbb] Using LigandFit to identify unknown density
by Pavel Afonine
Hi Maia,
phenix.refine refines occupancies during occupancy refinement, it
refines B-factors during B-factor refinement, and it refines coordinates
during coordinate refinement. The B-factor restraints are applied at the
B-factor refinement step. phenix.refine iterates these steps as many
times as the main.number_of_macro_cycles parameter specifies (3 by
default). Obviously, no B-factor restraints are applied if you refine
occupancies only.
Yes, what Peter mentioned actually happens during refinement (if
B-factor refinement is enabled). That's what the B-factor restraints do
in general.
Pavel.
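
For reference, the behavior described above corresponds to a command line
along these lines (file names are hypothetical; strategy and
main.number_of_macro_cycles are real phenix.refine parameters):

phenix.refine model.pdb data.mtz \
  strategy=individual_sites+individual_adp+occupancies \
  main.number_of_macro_cycles=3

Each macro-cycle then runs the coordinate, B-factor and occupancy steps in
turn, with the B-factor restraints active during the B-factor step.
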
On 1/27/10 3:28 PM, Maia Cherney wrote:
> Hi Pavel, Peter,
>
> Thank you for your reply. My question is whether phenix.refine actually
> uses the B-factor restraints in the occupancy refinement. I did not
> give any restraints, so it should happen automatically? I like the
> idea that Peter mentioned, that the restraints should make B-factors
> similar to those of surrounding molecules. Again, my question is: does
> phenix.refine actually use this approach?
>
> Maia
>
>
>
> Pavel Afonine wrote:
>> Hi Maia,
>>
>> first, I agree with Peter - the B-factor restraints should help, indeed.
>>
>> Second, I think we discussed this subject already on November 25, 2009:
>>
>> Subject: Re: [phenixbb] occupancy refinement
>> Date: 11/25/09 7:38 AM
>>
>> and I believe I haven't changed my mind about it since then. I'm
>> appending that email conversation to the bottom of this email.
>>
>> Overall, if you get a good 2mFo-DFc map and a clean residual mFo-DFc map,
>> and the ligand's B-factors are similar to or slightly larger than those of
>> the surrounding atoms, and the refined occupancy looks reasonable, then I
>> think you are fine.
>>
>> Pavel.
>>
>>
>> On 1/27/10 2:05 PM, Maia Cherney wrote:
>>> Hi Pavel,
>>>
>>> I have six ligands at partial occupacies in my structure.
>>> Simultaneous refinement of occupancy and B factors in phenix gives a
>>> value of 0.7 for the ligand occupancy that looks reasonable.
> >>> How can phenix perform such a refinement, given that the occupancies
> >>> and B-factors are highly correlated? Indeed, you can
> >>> increase/decrease the ligand occupancies while simultaneously
> >>> increasing/decreasing their B-factors without changing the R-factor
> >>> value. What criteria does phenix use in such a refinement if the
> >>> R-factor does not tell much?
>>>
>>> Maia
>>
>> ******* COPY (11/25/09)************
>>
>>
>>
>> On 11/25/09 7:38 AM, Maia Cherney wrote:
>>> Hi Pavel,
>>>
> >>> It looks like all the refined occupancies, starting from
> >>> different initial occupancies, converged to the same number after
> >>> going through very many cycles of refinement.
>>>
>>> Maia
>>>
>>>
>>> Pavel Afonine wrote:
>>>
>>>> Hi Maia,
>>>>
> >>>> the atom parameters, such as occupancy, B-factor and even position,
> >>>> are interdependent in some sense. That is, if you have a somewhat
> >>>> incorrect occupancy, then B-factor refinement may compensate for
> >>>> it; if you misplaced an atom, the refinement of its occupancy and/or
> >>>> B-factor will compensate for this. Note that in all the above cases
> >>>> the 2mFo-DFc and mFo-DFc maps will appear almost identical, as will
> >>>> the R-factors.
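> >>>>
> >>>> (One formula makes the correlation obvious: an atom contributes
> >>>> roughly occ * f0(s) * exp(-B*s^2/4) to each structure factor, with
> >>>> s = 1/d; over a limited resolution range, a change in occupancy can
> >>>> therefore be offset almost exactly by a compensating change in B.)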
>>>>
>>>> So, I think your goal of finding a "true" occupancy is hardly
>>>> achievable.
>>>>
> >>>> Although, I think you can approach it by doing very many
> >>>> refinements (say, several hundred), where you refine occupancies,
> >>>> B-factors and coordinates, each refinement starting from different
> >>>> occupancy and B-factor values, and making sure that each refinement
> >>>> converges. Then select a subset of refined structures with similar
> >>>> and low R-factors (discard those cases where refinement got stuck
> >>>> for whatever reason and R-factors are higher, and probably also
> >>>> require similar-looking 2mFo-DFc and mFo-DFc maps in the region of
> >>>> interest). Then see where the refined occupancies and B-factors are
> >>>> clustering; the averaged values will probably give you approximate
> >>>> values for occupancy and B. I did not try this myself but always
> >>>> wanted to.
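> >>>>
> >>>> A skeletal version of that protocol, in Python (run_refinement is a
> >>>> hypothetical stand-in for launching one refinement job to
> >>>> convergence and parsing out R-free and the refined values):
> >>>>
> >>>> import random
> >>>>
> >>>> results = []
> >>>> for trial in range(300):
> >>>>     start_occ = random.uniform(0.1, 1.0)
> >>>>     start_b = random.uniform(10.0, 80.0)
> >>>>     # hypothetical helper: returns (r_free, refined_occ, refined_b)
> >>>>     results.append(run_refinement(start_occ, start_b))
> >>>> best = min(r for r, occ, b in results)
> >>>> # keep only runs that converged to (nearly) the best R-free;
> >>>> # the 0.005 margin is an illustrative choice, not a recommendation
> >>>> kept = [(occ, b) for r, occ, b in results if r < best + 0.005]
> >>>> print(sum(occ for occ, b in kept) / len(kept))  # clustered occupancy
> >>>> print(sum(b for occ, b in kept) / len(kept))    # clustered B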
>>>>
> >>>> If you have a structure consisting of 9 carbons and one gold atom,
> >>>> then I would expect that the "second digit" in gold's occupancy
> >>>> would matter. However, if we speak about a dozen ligand atoms
> >>>> (which are probably a combination of C, N, O) out of a few thousand
> >>>> atoms in the whole structure, then I would not expect the
> >>>> "second digit" to be visibly important.
>>>>
>>>> Pavel.
>>>>
>>>>
>>>> On 11/24/09 8:08 PM, chern wrote:
>>>>
> >>>>> Thank you Kendall and Pavel for your responses.
> >>>>> I really want to determine the occupancy of my ligand. I saw one
> >>>>> suggestion to try different refinements with different occupancies
> >>>>> and compare the B-factors:
> >>>>> the occupancy whose B-factor is at the level of the
> >>>>> average protein B-factors is the "true" occupancy.
> >>>>> I also noticed the dependence of the ligand occupancy on the
> >>>>> initial occupancy. I saw a difference of 10 to 15%, which is why
> >>>>> I am wondering whether the second digit after the decimal point
> >>>>> makes any sense.
> >>>>> Maia
>>>>>
>>>>> ----- Original Message -----
>>>>> *From:* Kendall Nettles <mailto:[email protected]>
>>>>> *To:* PHENIX user mailing list
>>>>> <mailto:[email protected]>
>>>>> *Sent:* Tuesday, November 24, 2009 8:22 PM
>>>>> *Subject:* Re: [phenixbb] occupancy refinement
>>>>>
>>>>> Hi Maia,
> >>>>> I think the criteria for occupancy refinement of ligands are
> >>>>> similar to those for deciding to add an alt conformation for an amino
> >>>>> acid. I don’t refine occupancy of a ligand unless the difference
> >>>>> map indicates that we have to. Sometimes part of the ligand may be
> >>>>> conformationally mobile and show poor density, but I personally
> >>>>> don’t think this justifies occupancy refinement without evidence
> >>>>> from the difference map. I agree with Pavel that you shouldn’t
> >>>>> expect much change in overall statistics, unless the ligand has
> >>>>> very low occupancy, or you have a very small protein. We
> >>>>> typically see a 0.5-1% difference in R-factors from refining with
> >>>>> the ligand versus without for nuclear receptor ligand binding domains
> >>>>> of about 250 amino acids, and we see very small differences from
> >>>>> occupancy refinement of the ligands.
>>>>>
> >>>>> Regarding the error, I have noticed differences of 10 percent
> >>>>> occupancy depending on what you set the starting occupancy to before
> >>>>> refinement. That is, if the starting occupancy starts at 1, you
> >>>>> might end up with 50%, but if you start it at 0.01, you might get
> >>>>> 40%. I don’t have the expertise to explain why this is, but I
> >>>>> also don’t think it is necessarily important. I think it is more
> >>>>> important to convince yourself that the ligand binds how you
> >>>>> think it does. With steroid receptors, the ligand is usually
> >>>>> planar, and tethered by hydrogen bonds on two ends. That leaves
> >>>>> us with four possible poses, so if in doubt, we will dock
> >>>>> the ligand in all four orientations and refine. So far, we
> >>>>> have had only one of several dozen structures where the ligand
> >>>>> orientation was not obvious after this procedure. I worry about a
> >>>>> letter to the editor suggesting that the electron density for the
> >>>>> ligand doesn’t support the conclusions of the paper, not whether
> >>>>> the occupancy is 40% versus 50%.
>>>>>
> >>>>> You might also want to consider looking at several maps, such as
> >>>>> the simple or simulated-annealing composite omit maps. These can
> >>>>> be noisy, so also try the kicked maps
> >>>>> (http://www.phenix-online.org/pipermail/phenixbb/2009-September/002573.html),
> >>>>> which I have become a big fan of.
>>>>>
>>>>> Regards,
>>>>> Kendall Nettles
>>>>>
>>>>> On 11/24/09 3:07 PM, "chern(a)ualberta.ca" <chern(a)ualberta.ca>
>>>>> wrote:
>>>>>
>>>>> Hi,
> >>>>> I am wondering what are the criteria for occupancy refinement of
> >>>>> ligands. I noticed that R-factors change very little, but the ligand
> >>>>> B-factors change significantly. On the other hand, the occupancy is
> >>>>> refined to the second digit after the decimal point. How can I find
> >>>>> out the error for the refined occupancy of ligands?
>>>>>
>>>>> Maia
>>>>>
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>>
>>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
15 years, 5 months

Re: [phenixbb] Using the Same Test Set in AutoBuild and Phenix.Refine
by Dale Tronrud
Thomas C. Terwilliger wrote:
> Hi Dale,
>
> Can you try something else:
>
> phenix.refine AutoBuild_run_12_/overall_best.pdb \
> refinement.input.xray_data.file_name=\
> AutoBuild_run_12/exptl_fobs_freeR_flags.mtz \
> refinement.main.high_resolution=2.2 refinement.main.low_resolution=20 \
> /usr/users/dale/geometry/chromophores/bcl_tnt.cif
>
>
> This differs from your run only by substituting
>
> AutoBuild_run_12/exptl_fobs_freeR_flags.mtz
>
> for your 2 refinement data files. This is the exact file that is used in
> refinement by AutoBuild.
>
I tried this command, very similar to yours:
phenix.refine AutoBuild_run_12_/overall_best.pdb \
refinement.input.xray_data.file_name=AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz \
refinement.main.high_resolution=2.2 refinement.main.low_resolution=20 \
/usr/users/dale/geometry/chromophores/bcl_tnt.cif output.prefix=junk2
The final output was:
F-obs:
AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz:FP,SIGFP
If previously used R-free flags are available run this command again
with the name of the file containing the original flags as an
additional input. If the structure was never refined before, or if the
original R-free flags are unrecoverable, run this command again with
the additional definition:
refinement.main.generate_r_free_flags=True
If the structure was refined previously using different R-free flags,
the values for R-free will become meaningful only after many cycles of
refinement.
Sorry: Please try again.
The output from mtz.dump for your .mtz is
Processing: AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz
Title: Resolve mtz file.
Space group symbol from file: P
Space group number from file: 212
Space group from matrices: P 43 3 2 (No. 212)
Point group symbol from file: 43
Number of crystals: 2
Number of Miller indices: 38159
Resolution range: 75.6238 2.14896
History:
From resolve_huge, 27/12/07 15:07:56
Crystal 1:
Name: HKL_base
Project: HKL_base
Id: 0
Unit cell: (169.1, 169.1, 169.1, 90, 90, 90)
Number of datasets: 1
Dataset 1:
Name: HKL_base
Id: 0
Wavelength: 0
Number of columns: 0
Crystal 2:
Name: allen-
Project: FMO-ct
Id: 2
Unit cell: (169.1, 169.1, 169.1, 90, 90, 90)
Number of datasets: 1
Dataset 1:
Name: 1
Id: 1
Wavelength: 0
Number of columns: 9
label #valid %valid min max type
H 38159 100.00% 0.00 38.00 H: index h,k,l
K 38159 100.00% 2.00 78.00 H: index h,k,l
L 38159 100.00% 0.00 55.00 H: index h,k,l
FP 38159 100.00% 32.00 15171.00 F: amplitude
SIGFP 38159 100.00% 23.00 1716.00 Q: standard deviation
PHIM 38159 100.00% -90.00 45.00 P: phase angle in degrees
FOMM 38159 100.00% 0.00 0.00 W: weight (of some sort)
FreeR_flag 38159 100.00% 0.00 19.00 I: integer
FC 38159 100.00% 0.00 0.00 F: amplitude
Dale Tronrud
> I agree that you should be able to use your original data file instead. A
> possible reason why this has failed is that the original data file has a
> couple of reflections for which there is no data... and which were tossed by
> AutoBuild before creating exptl_fobs_freeR_flags.mtz. Two files that
> differ only in reflections with no data will still give different
> checksums, I think.
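>
> To illustrate that point (this is only a sketch, not the actual
> checksum code used by phenix.refine):
>
> import hashlib
>
> def flag_checksum(flags):
>     # flags: iterable of (h, k, l, free_flag) tuples
>     m = hashlib.md5()
>     for h, k, l, f in sorted(flags):
>         m.update(("%d,%d,%d,%d;" % (h, k, l, f)).encode())
>     return m.hexdigest()
>
> full = [(1, 0, 0, 1), (2, 0, 0, 0), (3, 0, 0, 0)]
> # dropping one reflection (e.g. one with no data) changes the digest
> # even though every remaining flag is identical:
> print(flag_checksum(full) == flag_checksum(full[:-1]))  # False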
>
> All the best,
> Tom T
>
>> Hi Dale,
>>
>>>> 1) Why you specify reflection MTZ file twice in phenix.refine script?
>>>>
>>>>
>>> I put the mtz in twice because if I put it in once phenix.refine
>>> complains that I have no free R flags. It seems to want one file with
>>> the amplitudes and another with the flags. Since I have both in the
>>> same file I put that file on the line twice and phenix.refine finds
>>> everything it needs.
>>>
>> phenix.refine looks for free-R flags in your main data file
>> (1M50-2.mtz). Optionally you can provide a separate file containing
>> free-R flags (I have to write about this in the manual). However, if
>> your 1M50-2.mtz contains free-R flags then you don't need to give it
>> twice. So clearly something is wrong at this step and we need to find
>> out what is wrong before doing anything else. Could you send the result
>> of the command "phenix.mtz.dump 1M50-2.mtz" to see what's inside of your
>> data file? Or I can debug it myself if you send me the data and model.
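>>
>> For completeness, a separate flag file can be given explicitly, e.g.
>> (the file and label names here are hypothetical):
>>
>> phenix.refine model.pdb 1M50-2.mtz \
>>   refinement.input.xray_data.r_free_flags.file_name=flags.mtz \
>>   refinement.input.xray_data.r_free_flags.label=FreeR_flag
>>
>> but since your 1M50-2.mtz already contains the flags, that should not
>> be necessary.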
>>
>>> If the MD5 hash of the test set depends on the resolution then
>>> certainly
>>> I could be in trouble.
>> No. It must always use the original files before any processing.
>>
>>> Does the resolution limit affect the MD5 hash of the test set?
>>>
>> No. If it does then it is a very bad bug. I will play with this myself
>> later tonight.
>>
>>>> 3) Does this work:
>>>>
>>>> (...)
>>> I'll try these but it will take a bit of time.
>>>
>> Don't run it until completion. Just make sure it passed through the
>> processing step.
>>
>> Pavel.
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
17 years, 6 months