Search results for query "look through"
- 527 messages
Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Kendall Nettles
I didn't think the structure was publishable with an Rfree of 33% because I was expecting the reviewers to complain.
We have tested a number of data sets on the UCLA server and it usually doesn't make much difference. I wouldn't expect truncation alone to change Rfree by 5%, and it usually doesn't. The two times I have seen dramatic impacts on the maps (and Rfree), the highly anisotropic data sets showed strong waves of difference density as well, which was fixed by throwing out the noise. We have moved to using loose data cutoffs for most structures, but I do think anisotropic truncation can be helpful in rare cases.
Kendall
On May 1, 2012, at 3:07 PM, "Dale Tronrud" <det102(a)uoxray.uoregon.edu> wrote:
>
> While philosophically I see no difference between a spherical resolution
> cutoff and an elliptical one, a drop in the free R can't be the justification
> for the switch. A model cannot be made more "publishable" simply by discarding
> data.
>
> We have a whole bunch of empirical guides for judging the quality of this
> and that in our field. We determine the resolution limit of a data set (and
> imposing a "limit" is itself another empirical choice) based on Rmerge, Rmeas,
> or Rpim getting too big or I/sigI getting too small, and there is no agreement
> on how big or small counts as "too big" or "too small".
>
> We then have other empirical guides for judging the quality of the models
> we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to
> recognize that these criteria need to be applied differently for different
> resolutions. A lower resolution model is allowed a higher Rfree, for example.
>
> Isn't it also true that a model refined to data with a cutoff of I/sigI of
> 1 would be expected to have a free R higher than a model refined to data with
> a cutoff of 2? Surely we cannot say that the decrease in free R that results
> from changing the cutoff criteria from 1 to 2 reflects an improved model. It
> is the same model after all.
>
> Sometimes this shifting application of empirical criteria enhances the
> adoption of new technology. Certainly the TLS parametrization of atomic
> motion has been widely accepted because it results in lower working and free
> Rs. I've seen it knock 3 to 5 percent off, and while that certainly means
> that the model fits the data better, I'm not sure that the quality of the
> hydrogen bond distances, van der Waals distances, or maps are any better.
> The latter details are what I really look for in a model.
>
> On the other hand, there has been good evidence through the years that
> there is useful information in the data beyond an I/sigI of 2 or an
> Rmeas > 100% but getting people to use this data has been a hard slog. The
> reason for this reluctance is that the R values of the resulting models
> are higher. Of course they are higher! That does not mean the models
> are of poorer quality, only that data with lower signal/noise has been
> used that was discarded in the models you used to develop your "gut feeling"
> for the meaning of R.
>
> When you change your criteria for selecting data you have to discard
> your old notions about the acceptable values of empirical quality measures.
> You either have to normalize your measure, as Phil Jeffrey recommends, by
> ensuring that you calculate your R's with the same reflections, or you have
> to make objective measures of map quality.
>
> Dale Tronrud
>
> P.S. It is entirely possible that refining a model to a very optimistic
> resolution cutoff and calculating the map to a lower resolution might be
> better than throwing out the data altogether.
>
> On 5/1/2012 10:34 AM, Kendall Nettles wrote:
>> I have seen dramatic improvements in maps and behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think it would have been publishable otherwise.
>> Kendall
>>
>> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>>
>>> On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans<pre(a)mrc-lmb.cam.ac.uk> wrote:
>>>> Are anisotropic cutoffs desirable?
>>>
>>> is there a peer-reviewed publication - perhaps from Acta
>>> Crystallographica - which describes precisely why scaling or
>>> refinement programs are inadequate to ameliorate the problem of
>>> anisotropy, and argues why the method applied in Strong et al. 2006
>>> satisfies this need?
>>>
>>> -Bryan
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
13 years, 9 months
Re: [cctbxbb] some thoughts on cctbx and pip
by Luc Bourhis
Hi,
Even if we managed to ship the boost dynamic libraries with pip, it would still not be pip-like, as we would still need our Python wrappers to set LIBTBX_BUILD and LD_LIBRARY_PATH. Normal pip packages work with the standard python exe. For LD_LIBRARY_PATH, we could get around that by changing the way we compile, using -Wl,-R, which is the runtime equivalent of the build-time -L. That's a significant change that would need to be tested. But there is no way around setting LIBTBX_BUILD right now. Leaving that to the user is horrible. Perhaps there is a way to hack libtbx/env_config.py so that we can hardwire LIBTBX_BUILD in there when pip installs?
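A minimal sketch of the kind of hard-wiring this would involve, assuming a hypothetical post-install hook into which pip substitutes the real install location (the path below is a placeholder, and env_config.py may well need more than this in practice):

import os

# Hypothetical post-install hook: set LIBTBX_BUILD before any libtbx
# import, so the standard python executable can locate the build
# directory without a wrapper script.
os.environ.setdefault("LIBTBX_BUILD", "/path/to/site-packages/cctbx_build")

import libtbx.load_env  # reads LIBTBX_BUILD to configure the environment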
Best wishes,
Luc
> On 16 Aug 2019, at 22:47, Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
>
> Hi,
>
> I did look into that many years ago, and even toyed with building a pip installer. What stopped me is the exact conclusion you reached too: the user would not have the pip experience he expects. You are right that it is a lot of effort, but is it worth it? Considering that remark, I don't think so. Now, Conda was created specifically to go beyond pip's pure-Python-only support. Since cctbx has garnered support for Conda, the best avenue imho is to go the extra length to have a package on Anaconda.org, and then to advertise it hard to every potential user out there.
>
> Best wishes,
>
> Luc
>
>
>> On 16 Aug 2019, at 21:45, Aaron Brewster <asbrewster(a)lbl.gov> wrote:
>>
>> Hi, to avoid clouding Dorothee's documentation email thread, which I think is a highly useful enterprise, here's some thoughts about putting cctbx into pip. Pip doesn't install non-Python dependencies well. I don't think boost is available as a package on pip (at least not the package version we use). wxPython4 isn't portable through pip (https://wiki.wxpython.org/How%20to%20install%20wxPython#Installing_wxPython…). MPI libraries are system dependent. If cctbx were a pure Python package, pip would be fine, but cctbx is not.
>>
>> All that said, we could build a manylinux1 version of cctbx and upload it to PyPI (I'm just learning about this). For a pip package to be portable (which is a requirement for cctbx), it needs to conform to PEP 513, the manylinux1 standard (https://www.python.org/dev/peps/pep-0513/). For example, numpy is built according to this standard (see https://pypi.org/project/numpy/#files, where you'll see the manylinux1 wheel). Note, the manylinux1 standard is built with CentOS 5.11, which we no longer support.
>>
>> There is also a manylinux2010 standard, which is based on CentOS 6 (https://www.python.org/dev/peps/pep-0571/). This is likely a more attainable target (note though that by default C++11 is not supported on CentOS 6).
>>
>> If we built a manylinuxX version of cctbx and uploaded it to PyPi, the user would need all the non-python dependencies. There's no way to specify these in pip. For example, cctbx requires boost 1.63 or better. The user will need to have it in a place their python can find it, or we could package it ourselves and supply it, similar to how the pip h5py package now comes with an hd5f library, or how the pip numpy package includes an openblas library. We'd have to do the same for any packages we depend on that aren't on pip using the manylinux standards, such as wxPython4.
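A sketch of how such bundling is typically done for manylinux wheels, assuming the auditwheel tool (the package and platform names are illustrative only, not a tested cctbx recipe):

# Build a wheel, then let auditwheel copy external shared libraries
# (e.g. libboost_python) into the wheel and retag it as manylinux.
pip wheel . -w dist/
auditwheel repair dist/cctbx-*.whl --plat manylinux2010_x86_64 -w wheelhouse/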
>>
>> Further, we need to think about how dials and other cctbx-based packages interact. If pip install cctbx is set up, how does pip install dials work, such that any dials shared libraries can find the cctbx libraries? Can shared libraries from one pip package link against libraries in another pip package? Would each package need to supply its own boost? Possibly this is well understood in the pip field, but not by me :)
>>
>> Finally, there's the option of providing a source pip package. This would require the full compiler toolchain for any given platform (macOS, linux, windows). These are likely available for developers, but not for general users.
>>
>> Anyway, these are some of the obstacles. Not saying it isn't possible, it's just a lot of effort.
>>
>> Thanks,
>> -Aaron
>>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
6 years, 5 months
Re: [phenixbb] questions / TLS+NCS bug
by Pavel Afonine
Hi Jianghai,
Thanks for the .log file! I looked at it and now I think I know what's
the problem: this is a bug in phenix.refine that we will fix for the
next release. The bug arises when you try to refine using TLS and NCS at
once. Sorry about this.
The possible solution is to split your refinement into 2 parts:
1) First, refine coordinates and isotropic B-factors. Use NCS
restraints. DO NOT refine TLS.
2) At the end, as a final tune-up, refine ONLY isotropic B-factors +
TLS and do not refine coordinates. For this run, please REMOVE all NCS
information (NCS selections) from the command file. This refinement run
should be the final one. (A command-line sketch of the two runs follows below.)
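A hedged sketch of the two runs; the parameter names below are from memory of later phenix.refine versions and are illustrative only, so check phenix.refine --show-defaults for your installation (file names and the TLS selection are placeholders):

# Run 1: coordinates + isotropic B-factors, NCS restraints on, no TLS.
phenix.refine model.pdb data.mtz strategy=individual_sites+individual_adp \
  main.ncs=True
# Run 2: final tune-up; isotropic B-factors + TLS only, NCS selections removed.
phenix.refine model_run1.pdb data.mtz strategy=individual_adp+tls \
  adp.tls="chain A"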
Once again, sorry for the inconvenience. This will be fixed in the next
release of CCI APPS and PHENIX. Once this is fixed, you will be able to run
everything in "one go" and in any combination.
Please let me know if what I suggested helped you. Any further questions
are welcome!
Cheers,
Pavel.
Jianghai Zhu wrote:
> Here is the log file. Thanks.
>
> Jianghai
>
>
>
>
> On Dec 14, 2006, at 12:50 PM, Pavel Afonine wrote:
>
>> Hi Jianghai,
>>
>> could you please send us .log file from your refinement run, so we
>> can analyze what's going on.
>>
>> In general, "bad" B-factors can be:
>> - misplaced model;
>> - inadequate TLS model (= domains chosen for TLS do not correspond to
>> the reality).
>>
>> If you are using like 2 months old or older version of phenix.refine,
>> you may want to get the latest CCI APPS since we made lots of
>> improvements. Just goto http://www.phenix-online.org/download/cci_apps/
>>
>> Pavel.
>>
>>
>> Jianghai Zhu wrote:
>>> The resolution is 2.5 A. The Wilson B is about 50. I know the B factor
>>> of the backbone is lower than that of the sidechain. But a B factor
>>> like 4 is definitely wrong.
>>>
>>> Jianghai
>>>
>>>
>>> On Dec 14, 2006, at 12:11 PM, Peter Zwart wrote:
>>>
>>>>
>>>>>> 4) The refinement (TLS + ML + B individual) went through, I got
>>>>>> reasonable R, Rfree, rmsdBOND, rmsdANGLE. But the B factors are
>>>>>> pretty low. The B factor of the backbone is much lower than the
>>>>>> side
>>>>>> chain, some have numbers like 4. Some metal atoms also have B
>>>>>> factors around 4. What did I do wrong?
>>>>
>>>> What is the resolution of your data?
>>>>
>>>> Backbone B's are usually lower than those of the side chains.
>>>>
>>>> What is the Wilson B value reported by phenix.refine?
>>>> You could re-refine and randomize all B-values and see what happens (I
>>>> have to get back to you to get the exact command for this). Maybe it
>>>> is useful to obtain a copy of the latest version of phenix.refine by
>>>> downloading cci_apps from our server http://www.phenix-online.org.
>>>>
>>>>
>>>> If your B-values still come out lowish, try growing crystals that
>>>> do not
>>>> diffract very well, that usually does the trick.
>>>>
>>>> HTH
>>>>
>>>> Peter
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>> _______________________________________________
>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
19 years, 1 month
Re: [phenixbb] calculate Fc for the whole unit cell from the Fc of a single asymmetric unit.
by Edward A. Berry
It seems to me there are two things that could be meant by "expand to P1"
One is when data has been reduced to the Reciprocal Space
asymmetric unit (or otherwise one asymmetric unit of a
symmetric dataset has been obtained) and you want to expand
it to P1 by using symmetry to generate all the
symmetry -equivalent reflections.
The other is where a full P1 dataset has been calculated from just
one asymmetric unit of the crystal (and hence does not exhibit the
crystallographic symmetry) and you want to generate the transform
of the entire crystal. (I think this is how all the space-group-specific
FFT programs used to work, before computers got so fast that it
was less bother to just do everything in P1 with the whole cell.)
Presumably this involves applying the real space symmetry
operators to get n rotated (or phase-shifted for translational
symmetry) P1 datasets and adding them vectorially.
It would be important to decide which of these is required, and which
each of the suggested methods provides
eab
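For the first meaning, cctbx can do the reciprocal-space expansion directly on merged data; a minimal sketch, assuming a reflection file fc_asu.mtz holding the Fc array (the second meaning, the vector summation of symmetry-rotated copies, is not what this call performs):

from iotbx import reflection_file_reader

# Read any supported reflection file and take the first Miller array.
fc = reflection_file_reader.any_reflection_file(
  "fc_asu.mtz").as_miller_arrays()[0]

# Generate all symmetry-equivalent reflections, reindexed in P1.
fc_p1 = fc.expand_to_p1()
print(fc_p1.space_group_info(), fc_p1.size())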
Ralf Grosse-Kunstleve wrote:
> We can expand reciprocal-space arrays, too, with the
> cctbx.miller.array.expand_to_p1() method. You can use it from the
> command line via
>
> phenix.reflection_file_converter --expand-to-p1 ...
>
> See also:
> http://www.phenix-online.org/documentation/reflection_file_tools.htm
>
> Ralf
>
>
> On Mon, Jul 11, 2011 at 10:56 AM, <zhangh1(a)umbc.edu> wrote:
>
> Sorry, I haven't had a chance to check my email recently.
>
> Yes, I meant expansion to P1. The thing is, cctbx relies on the atomic
> model I think, but I only have the model Fc available.
>
> Hailiang
>
> > I suspect what Hailiang means is expansion into P1.
> >
> > I am sure this can be accomplished through some either existing or
> > easily coded cctbx tool. However, when I looked into a different
> task
> > recently that included P1 expansion as a step, I learned that SFTOOLS
> > can do this, albeit there was a bug there which caused trouble in
> > certain space groups (may be fixed by now so check if there is an
> > update).
> >
> > Hailiang - if P1 expansion is what you need, I could share my own
> code as
> > well, let me know if that is something you want to try.
> >
> > Cheers,
> >
> > Ed.
> >
> > On Fri, 2011-07-08 at 14:44 -0700, Ralf Grosse-Kunstleve wrote:
> >> Did you get responses already?
> >> If not, could you explain your situation some more?
> >> We have algorithms that do the symmetry summation in reciprocal
> space.
> >> The input is a list of Fc in P1, based on the unit cell of the
> >> crystal. Is that what you have?
> >> Ralf
> >>
> >> On Wed, Jul 6, 2011 at 1:38 PM, <zhangh1(a)umbc.edu> wrote:
> >> Hi,
> >>
> >> I am wondering if I only have structure factors calculated
> >> from a single
> >> symmetric unit, is there any phenix utility which can
> >> calculate the
> >> structure factor for the whole unit cell given the symmetric
> >> operation or
> >> space group and crystal parameters? Note I don't have an
> >> atomic model and
> >> only have Fc.
> >>
> >> Thanks!
> >>
> >> Hailiang
> >>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
14 years, 6 months
Re: [phenixbb] Table 1 successor in 3D?
by John R Helliwell
Dear Gerard,
Thank you for your detailed and informative reply to our phenixbb message,
which was itself a reply to yours.
We agree that the inadequacies of the detector setting will not be remedied
by saving unmerged intensities. Nevertheless, the raw diffraction images
contain more information, even at an imperfect detector setting. The role
of the crystallographic associations and the facilities in professional
training through their courses is important.
There is much enthusiasm in the earlier literature about preserving raw
diffraction images, as Bernhard has also referred to (1913). We appreciate
the usefulness of Staraniso and the fact that it is more informative than
Table 1. Not least we greatly appreciate your work for the IUCr on these
matters as consultant to dddwg and now CommDat.
All best wishes,
Loes and John
On Tue, Jun 12, 2018 at 5:24 PM, Gerard Bricogne <gb10(a)globalphasing.com>
wrote:
> Dear John and Loes,
>
> Thank you for reiterating on this BB your point about depositing
> raw diffraction images. I will never disagree with any promotion of
> that deposition, or praise of its benefits, given that I was one of
> the earliest proponents and most persistently vociferous defenders of
> the idea, long before it gained general acceptance (see Acta D65, 2009
> 176-185). There has never been any statement on our part that the
> analysis done by STARANISO disposes of the need to store the original
> images and to revisit them regularly with improved processing and/or
> troubleshooting software. At any given stage in this evolution,
> however, (re)processing results will need to be displayed, and it is
> with the matter of what information about data quality is conveyed (or
> not) by various modes of presentation of such results that Bernhard's
> argument and (part of) our work on STARANISO are concerned.
>
> Furthermore we have made available the PDBpeep server at
>
> http://staraniso.globalphasing.org/cgi-bin/PDBpeep.cgi
>
> that takes as input a 4-character PDB entry code and generates figures
> from the deposited *merged amplitudes* associated with that entry. The
> numbers coming out of a PDBpeep run may well have questionable
> quantitative value (this is pointed out in the home page for that
> server) but the 3D WebGL picture it produces has informative value
> independently from that. Take a look, for instance, at 4zc9, 5f6m or
> 6c79: it is quite plain that these high-resolution datasets have
> significant systematic incompleteness issues, a conclusion that would
> not necessarily jump out of a Table 1 page, even after reprocessing
> the original raw images, without that 3D display.
>
> The truly pertinent point about this work in relation to keeping
> raw images is that the STARANISO display very often suggests that the
> merged data have been subjected to too drastic a resolution cut-off,
> and that it would therefore be worth going back to the raw images and
> to let autoPROC+STARANISO apply a more judicious cut-off. Sometimes,
> however, as in the example given in Bernhard's paper, significant data
> fail to be recorded because the detector was positioned too far from
> the crystal, in which case the raw images would only confirm that
> infelicity and would provide no means of remediating it.
>
>
> With best wishes,
>
> Gerard.
>
> --
> On Wed, Jun 06, 2018 at 09:35:38AM +0100, John R Helliwell wrote:
> > Dear Colleagues
> > Given that this message is now also placed on Phenixbb, we reiterate our
> > key point that deposition of raw diffraction images offers flexibility to
> > readers of our science results for their reuse and at no cost to the
> user.
> > As with all fields our underpinning data should be FAIR (Findable,
> > Accessible, Interoperable and Reusable). Possibilities for free storage
> of
> > data are Zenodo, SBGrid and proteindiffraction.org (IRRMC).
> > With respect to graphic displays of anisotropy of data Gerard's three
> > figures are very informative, we agree.
> > Best wishes
> >
> > Loes and John
> >
> > Kroon-Batenburg et al (2017) IUCrJ and Helliwell et al (2017) IUCrJ
> >
> > On Tue, Jun 5, 2018 at 4:49 PM, Gerard Bricogne <gb10(a)globalphasing.com>
> > wrote:
> >
> > > Dear phenixbb subscribers,
> > >
> > > I sent the message below to the CCP4BB and phenixbb at the same
> > > time last Friday. It went straight through to the CCP4BB subscribers
> > > but was caught by the phenixbb Mailman because its size exceeded 40K.
> > >
> > > Nigel, as moderator of this list, did his best to rescue it, but
> > > all his attempts failed. He therefore asked me to resubmit it, now
> > > that he has increased the upper size limit.
> > >
> > > Apologies to those of you who are also CCP4BB subscribers, who
> > > will already have received this message and the follow-up discussion
> > > it has given rise to.
> > >
> > >
> > > With best wishes,
> > >
> > > Gerard.
> > >
> > > ----- Forwarded message from Gerard Bricogne <gb10> -----
> > >
> > > Date: Fri, 1 Jun 2018 17:30:48 +0100
> > > From: Gerard Bricogne <gb10>
> > > Subject: Table 1 successor in 3D?
> > > To: CCP4BB(a)JISCMAIL.AC.UK, phenixbb(a)phenix-online.org
> > >
> > > Dear all,
> > >
> > > Bernhard Rupp has just published a "Perspective" article in
> > > Structure, accessible in electronic form at
> > >
> > > https://www.cell.com/structure/fulltext/S0969-2126(18)30138-2
> > >
> > > in which part of his general argument revolves around an example
> > > (given as Figure 1) that he produced by means of the STARANISO server
> > > at
> > > http://staraniso.globalphasing.org/ .
> > >
> > > The complete results of his submission to the server have been saved
> > > and may be accessed at
> > >
> > > http://staraniso.globalphasing.org/Gallery/Perspective01.html
> > >
> > > and it is to these results that I would like to add some annotations
> > > and comments. To help with this, I invite the reader to connect to
> > > this URL, type "+" a couple of times to make the dots bigger, and
> > > press/toggle "h" whenever detailed information on the display, or
> > > selection of some elements, or the thresholds used for colour coding
> > > the displays, needs to be consulted.
> > >
> > > The main comment is that the WebGL interactive 3D display does
> > > give information that makes visible characteristics that could hardly
> > > be inferred from the very condensed information given in Table 1, and
> > > the annotations will be in the form of a walk through the main
> > > elements of this display.
> > >
> > > For instance the left-most graphical object (a static view of
> > > which is attached as "Redundancy.png") shows the 3D distribution of
> > > the redundancy (or multiplicity) of measurements. The view chosen for
> > > the attached picture shows a strong non-uniformity in this redundancy,
> > > with the region dominated by cyan/magenta/white having about twice the
> > > redundancy (in the 6/7/8 range) of that which prevails in the region
> > > dominated by green/yellow (in the 3/5 range). Clear concentric gashes
> > > in both regions, with decreased redundancy, show the effects of the
> > > inter-module gaps on the Pilatus 2M detector of the MASSIF-1 beamline.
> > > The blue spherical cap along the a* axis corresponds to HKLs for which
> > > no measurement is available: it is clearly created by the detector
> > > being too far from the crystal.
> > >
> > > The second (central) graphical object, of which a view is given
> > > in Figure 1 of Bernhard's article and another in the attached picture
> > > "Local_I_over_sigI.png") shows vividly the blue cap of measurements
> > > that were missed but would probably have been significant (had they
> > > been measured) cutting into the green region, where the local average
> > > of I/sig(I) ranges between 16 and 29! If the detector had been placed
> > > closer, significant data extending to perhaps 3.0A resolution would
> > > arguably have been measured from this sample.
> > >
> > > The right-most graphical object (of which a static view is
> > > attached as "Debye-Waller.png") depicts the distribution of the
> > > anisotropic Debye-Waller factor (an anisotropic generalisation of the
> > > Wilson B) of the dataset, giving yet another visual hint that good
> > > data were truncated by the edges of a detector placed too far.
> > >
> > > Apologies for such a long "STARANISO 101" tutorial but Bernhard's
> > > invitation to lift our eyes from the terse numbers in Table 1 towards
> > > 3D illustrations of data quality criteria was irresistible ;-) . His
> > > viewpoint also agrees with one of the main purposes of our STARANISO
> > > developments (beyond the analysis and remediation of anisotropy, about
> > > which one can - and probably will - argue endlessly) namely contribute
> > > to facilitating a more direct and vivid perception by users of the
> > > quality of their data (or lack of it) and to nurturing evidence-based
> > > motivation to make whatever extra effort it takes to improve that
> > > quality. In this case, the undeniable evidence of non-uniformity of
> > > redundancy and of a detector placed too far would give immediate
> > > practical guidance towards doing a better experiment, while statistics
> > > in Table 1 for the same dataset would probably not ... .
> > >
> > > Thank you Bernhard!
> > >
> > >
> > > With best wishes,
> > >
> > > Gerard,
> > > for and on behalf of the STARANISO developers
> > >
> > >
> > > ----- End forwarded message -----
> > >
> > > _______________________________________________
> > > phenixbb mailing list
> > > phenixbb(a)phenix-online.org
> > > http://phenix-online.org/mailman/listinfo/phenixbb
> > > Unsubscribe: phenixbb-leave(a)phenix-online.org
> > >
> >
> >
> >
> > --
> > Professor John R Helliwell DSc
>
--
Professor John R Helliwell DSc
7 years, 7 months
Re: [phenixbb] reflection file utility and use of modified phases in refinement
by Thomas C. Terwilliger
Hi Engin,
Thanks, yes, I see now what you are looking at:
--- Data for refinement FP SIGFP PHIM FOMM HLAM HLBM HLCM HLDM
FreeR_flag ---
hklout_ref: AutoBuild_run_1_/exptl_fobs_phases_freeR_flags.mtz
The file exptl_fobs_phases_freeR_flags.mtz has a copy of the
(experimental) HL coefficients that were input to autobuild. The labels
HLAM HLBM etc are indeed confusing...they have the ending "M" because they
were copied by resolve and it outputs HLAM etc...but in fact they are not
density modified, just copied straight from the input data file.
Thank you for pointing that out.
All the best,
Tom T
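For anyone wanting to check what a given file actually carries, the MTZ column labels can be listed directly; a minimal sketch using iotbx, with the file name taken from the AutoBuild output above:

import iotbx.mtz

# List the column labels; the HLAM/HLBM/HLCM/HLDM columns here are the
# copied experimental Hendrickson-Lattman coefficients, despite the "M".
mtz = iotbx.mtz.object(file_name="exptl_fobs_phases_freeR_flags.mtz")
print(list(mtz.column_labels()))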
>> Hi Tom,
>>
>> My second question was about autobuild recommending modified phases to
>> be used in further refinement.
>> This is the end of the log file printed by autobuild. See the line for
>> "Data for refinement":
>>
>> Summary of output files for Solution 3 from rebuild cycle 4
>>
>> --- Model (PDB file) ---
>> pdb_file: AutoBuild_run_1_/cycle_best_4.pdb
>>
>> --- Refinement log file ---
>> log_refine: AutoBuild_run_1_/cycle_best_4.log_refine
>>
>> --- Model-building log file ---
>> log: AutoBuild_run_1_/cycle_best_4.log
>>
>> --- Model-map correlation log file ---
>> log_eval: AutoBuild_run_1_/cycle_best_4.log_eval
>>
>> --- 2FoFc and FoFc map coefficients from refinement 2FOFCWT PH2FOFCWT
>> FOFCWT PH
>> FOFCWT ---
>> refine_map_coeffs: AutoBuild_run_1_/cycle_best_refine_map_coeffs_4.mtz
>>
>> --- Data for refinement FP SIGFP PHIM FOMM HLAM HLBM HLCM HLDM
>> FreeR_flag ---
>> hklout_ref: AutoBuild_run_1_/exptl_fobs_phases_freeR_flags.mtz
>>
>> --- Density-modification log file ---
>> log_denmod: AutoBuild_run_1_/cycle_best_4.log_denmod
>>
>> --- Density-modified map coefficients FP PHIM FOM ---
>> hklout_denmod: AutoBuild_run_1_/cycle_best_4.mtz
>>
>> If HLAM, HLBM, HLCM and HLDM are density-modified phases, it looks like
>> that's what autobuild suggests.
>>
>> Thanks again,
>>
>> Engin
>>
>> On 8/28/09 7:07 AM, Thomas C. Terwilliger wrote:
>>> Hi Engin,
>>>
>>> I'm not sure about your main question...I hope that Nat or Pavel will
>>> answer you on that.
>>>
>>> On the use of density-modified phases in refinement: AutoBuild expects
>>> experimental phases in the data file, with experimental HL coefficients,
>>> and by default it will use those HL coefficients in refinement with an
>>> MLHL target.
>>>
>>> The phase probabilities from resolve statistical density modification
>>> are
>>> pretty accurate, and not inflated, so you could use them in refinement
>>> if
>>> you wanted to. I don't suggest it, however, because the
>>> density-modification phase information is not fully independent of the
>>> other information used in refinement (e.g., a flat solvent is implicit
>>> in
>>> your refinement already, so including that through density modification
>>> is
>>> partially redundant).
>>>
>>> ps: I hope AutoBuild doesn't recommend using density-modified phases in
>>> refinement, so if you could send me the text where it says that, I will
>>> check that out!
>>>
>>> All the best,
>>> Tom T
>>>
>>>
>>>>> Hi everybody,
>>>>>
>>>>> I had some trouble with the reflection file utility today. I've been
>>>>> trying to import the R-free flag column from one of the mtz's to my
>>>>> combined mtz, and it never works. The R-free flag is always left out
>>>>> of the output even when I have it selected. Have you guys seen this
>>>>> (I'm using 147)?
>>>>>
>>>>> Another question I have is about the output of phenix.autobuild.
>>>>> Phenix.autobuild tells me to use modified phase probabilities (HLAM,
>>>>> etc.) in refinement. I am assuming these are density-modified phases.
>>>>> But I've always thought that this would be bad practice (possibly
>>>>> because of unrealistically high FOMs and possible flattening of loops,
>>>>> etc., but maybe resolve does a better job than, say, DM). Any ideas on
>>>>> that one?
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Engin
>>>>>
>>>>> --
>>>>> Engin Özkan
>>>>> Post-doctoral Scholar
>>>>> Dept of Molecular and Cellular Physiology
>>>>> Howard Hughes Medical Institute
>>>>> Stanford University School of Medicine
>>>>> ph: (650)-498-7111
>>>>>
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>>
>>
>>
>> --
>> Engin Özkan
>> Post-doctoral Scholar
>> Laboratory of K. Christopher Garcia
>> Howard Hughes Medical Institute
>> Dept of Molecular and Cellular Physiology
>> 279 Campus Drive, Beckman Center B173
>> Stanford School of Medicine
>> Stanford, CA 94305
>> ph: (650)-498-7111
>>
>>
16 years, 5 months
Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
by Billy Poon
Hi Keitaro,
In Phenix, we set use_internal_variance to false whenever possible. We had
users ask about the difference in merging statistics, which led to the
discussion about the parameter with Richard. We picked the default that is
mostly consistent with previous versions and added an option to change it,
but we must have missed certain cases where the parameter was not set.
Thanks for finding it!
phenix.merging_statistics is just a different way for calling the same code
as iotbx.merging_statistics, so the default value for use_internal_variance
is true on the command-line. However, I explicitly set the default to false
in the GUI for phenix.merging_statistics. phenix.table_one also sets
use_internal_variance to false. However, looking through other code in the
Phenix tree, there are some instances where use_internal_variance is set to
false with no option to change it, so we will double check if that is the
behavior that we want.
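As a rough illustration of the distinction (explained in detail by Richard further down this thread): with use_internal_variance=False the merged sigma is propagated from the unmerged sigmas only, while with True it is the larger of that and the weighted spread of the observations about the mean. A minimal pure-Python sketch using standard weighted-mean estimators; the authoritative formula is in cctbx/miller/merge_equivalents.h, linked below:

import math

def merged_i_sigi(intensities, sigmas, use_internal_variance=True):
  # Weighted mean with weights 1/sigma^2.
  weights = [1.0 / (s * s) for s in sigmas]
  sum_w = sum(weights)
  i_mean = sum(w * i for w, i in zip(weights, intensities)) / sum_w
  # "External" variance: propagate the unmerged sigmas only.
  var_ext = 1.0 / sum_w
  if use_internal_variance and len(intensities) > 1:
    # "Internal" variance: weighted spread of the observations about the
    # mean (an estimate of the variance of the weighted mean); the merged
    # sigma is the larger of the two.
    n = len(intensities)
    var_int = sum(w * (i - i_mean) ** 2
                  for w, i in zip(weights, intensities)) / ((n - 1) * sum_w)
    return i_mean, math.sqrt(max(var_ext, var_int))
  return i_mean, math.sqrt(var_ext)

# A reflection measured three times: I/sigI differs between the two modes.
print(merged_i_sigi([100.0, 110.0, 90.0], [10.0, 12.0, 11.0], True))
print(merged_i_sigi([100.0, 110.0, 90.0], [10.0, 12.0, 11.0], False))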
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
On Tue, Nov 1, 2016 at 7:06 AM, Keitaro Yamashita <k.yamashita(a)spring8.or.jp
> wrote:
> Dear Richard,
>
> Thanks a lot. I hope some Phenix developer will make a comment.
>
> Best regards,
> Keitaro
>
> 2016-11-01 20:19 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> > Dear Keitaro,
> >
> > I've made the change you suggested in merging_statistics.py - it looks
> like an oversight, which didn't affect xia2 since we are always calculating
> merging statistics given a scaled but unmerged mtz file, never an XDS or
> scalepack-format file.
> >
> > As to what defaults Phenix uses, that is better left to one of the
> Phenix developers to comment on.
> >
> > Cheers,
> >
> > Richard
> >
> > Dr Richard Gildea
> > Data Analysis Scientist
> > Tel: +441235 77 8078
> >
> > Diamond Light Source Ltd.
> > Diamond House
> > Harwell Science & Innovation Campus
> > Didcot
> > Oxfordshire
> > OX11 0DE
> >
> > ________________________________________
> > From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces@phenix-
> online.org] on behalf of Keitaro Yamashita [k.yamashita(a)spring8.or.jp]
> > Sent: 01 November 2016 10:41
> > To: cctbx mailing list
> > Subject: Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
> >
> > Dear Richard and everyone,
> >
> > Thanks for your reply. What kind of input do you give to
> > iotbx.merging_statistics in xia2? For example, when XDS file is given,
> > use_internal_variance=False is not passed to merge_equivalents()
> > function. Please look at the lines of
> > filter_intensities_by_sigma.__init__() in iotbx/merging_statistics.py.
> > When sigma_filtering == "xds" or sigma_filtering == "scalepack",
> > array_merged is recalculated using merge_equivalents() with default
> > arguments.
> >
> > If nobody disagrees, I would like to commit the fix so that
> > use_internal_variance variable is passed to all merge_equivalents()
> > function calls.
> >
> >
> > I am afraid that the behavior in phenix-1.11 would be confusing.
> > In phenix.table_one (mmtbx/command_line/table_one.py),
> > use_internal_variance=False is default. This will be OK with the fix
> > I suggested above.
> >
> > Can it also be default in phenix.merging_statistics, not to change the
> > program behavior through phenix versions?
> >
> >
> > Best regards,
> > Keitaro
> >
> > 2016-11-01 18:21 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> >> Dear Keitaro,
> >>
> >> iotbx.merging_statistics does have the option to change the parameter
> use_internal_variance. In xia2 we use the defaults
> use_internal_variance=False, eliminate_sys_absent=False, n_bins=20, when
> calculating merging statistics which give comparable results to those
> calculate by Aimless:
> >>
> >> $ iotbx.merging_statistics
> >> Usage:
> >> phenix.merging_statistics [data_file] [options...]
> >>
> >> Calculate merging statistics for non-unique data, including R-merge,
> R-meas,
> >> R-pim, and redundancy. Any format supported by Phenix is allowed,
> including
> >> MTZ, unmerged Scalepack, or XDS/XSCALE (and possibly others). Data
> should
> >> already be on a common scale, but with individual observations unmerged.
> >> Diederichs K & Karplus PA (1997) Nature Structural Biology 4:269-275
> >> (with erratum in: Nat Struct Biol 1997 Jul;4(7):592)
> >> Weiss MS (2001) J Appl Cryst 34:130-135.
> >> Karplus PA & Diederichs K (2012) Science 336:1030-3.
> >>
> >>
> >> Full parameters:
> >>
> >> file_name = None
> >> labels = None
> >> space_group = None
> >> unit_cell = None
> >> symmetry_file = None
> >> high_resolution = None
> >> low_resolution = None
> >> n_bins = 10
> >> extend_d_max_min = False
> >> anomalous = False
> >> sigma_filtering = *auto xds scala scalepack
> >> .help = "Determines how data are filtered by SigmaI and I/SigmaI.
> XDS"
> >> "discards reflections whose intensity after merging is less
> than"
> >> "-3*sigma, Scalepack uses the same cutoff before merging,
> and"
> >> "SCALA does not do any filtering. Reflections with negative
> SigmaI"
> >> "will always be discarded."
> >> use_internal_variance = True
> >> eliminate_sys_absent = True
> >> debug = False
> >> loggraph = False
> >> estimate_cutoffs = False
> >> job_title = None
> >> .help = "Job title in PHENIX GUI, not used on command line"
> >>
> >>
> >> Below is my email to Pavel and Billy when we discussed this issue by
> email a while back:
> >>
> >> The difference between use_internal_variance=True/False is explained
> in Luc's document here:
> >>
> >> libtbx.pdflatex $(libtbx.find_in_repositories cctbx/miller)/equivalent_
> reflection_merging.tex
> >>
> >> Essentially use_internal_variance=False uses only the unmerged sigmas
> to compute the merged sigmas, whereas use_internal_variance=True uses
> instead the spread of the unmerged intensities to compute the merged
> sigmas. Furthermore, use_internal_variance=True uses the largest of the
> variance coming from the spread of the intensities and that computed from
> the unmerged sigmas. As a result, use_internal_variance=True can only ever
> give lower I/sigI than use_internal_variance=False. The relevant code in
> the cctbx is here:
> >>
> >> https://sourceforge.net/p/cctbx/code/HEAD/tree/trunk/
> cctbx/miller/merge_equivalents.h#l379
> >>
> >> Aimless has a similar option for the SDCORRECTION keyword, if you set
> the option SAMPLESD, which I think is equivalent to
> use_internal_variance=True. The default behaviour of Aimless is equivalent
> to use_internal_variance=False:
> >>
> >> http://www.mrc-lmb.cam.ac.uk/harry/pre/aimless.html#sdcorrection
> >>
> >> "SAMPLESD is intended for very high multiplicity data such as XFEL
> serial data. The final SDs are estimated from the weighted population
> variance, assuming that the input sigma(I)^2 values are proportional to the
> true errors. This probably gives a more realistic estimate of the error in
> <I>. In this case refinement of the corrections is switched off unless
> explicitly requested."
> >>
> >> I think that the "external" variance is probably better if the sigmas
> from the scaling program are reliable, or for low multiplicity data. For
> high multiplicity data or if the sigmas from the scaling program are not
> reliable, then "internal" variance is probably better.
> >>
> >> Cheers,
> >>
> >> Richard
> >>
> >> Dr Richard Gildea
> >> Data Analysis Scientist
> >> Tel: +441235 77 8078
> >>
> >> Diamond Light Source Ltd.
> >> Diamond House
> >> Harwell Science & Innovation Campus
> >> Didcot
> >> Oxfordshire
> >> OX11 0DE
> >>
> >> ________________________________________
> >> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces@phenix-
> online.org] on behalf of Keitaro Yamashita [k.yamashita(a)spring8.or.jp]
> >> Sent: 01 November 2016 07:23
> >> To: cctbx mailing list
> >> Subject: [cctbxbb] use_internal_variance in iotbx.merging_statistics
> >>
> >> Dear Phenix/CCTBX developers,
> >>
> >> iotbx/merging_statistics.py is used by phenix.merging_statistics,
> >> phenix.table_one, and so on. By upgrading phenix from 1.10.1 to 1.11,
> >> merging statistics-related codes were significantly changed.
> >>
> >> Previously, miller.array.merge_equivalents() was always called with
> >> argument use_internal_variance=False, which is consistent with XDS,
> >> Aimless and so on. Currently, use_internal_variance=True is default,
> >> and cannot be changed by users (see below).
> >>
> >> These changes were made by @afonine and @rjgildea in rev. 22973 (Sep
> >> 26, 2015) and 23961 (Mar 8, 2016). Could anyone explain why these
> >> changes were introduced?
> >>
> >> https://sourceforge.net/p/cctbx/code/22973
> >> https://sourceforge.net/p/cctbx/code/23961
> >>
> >>
> >> My points are:
> >>
> >> - We actually cannot control use_internal_variance= parameter because
> >> it is not passed to merge_equivalents() in class
> >> filter_intensities_by_sigma.
> >>
> >> - In previous versions, if I gave XDS output to
> >> phenix.merging_statistics, <I/sigma> values calculated in the same way
> >> (as XDS does) were shown; but not in the current version.
> >>
> >> - For (for example) phenix.table_one users who expect this behavior,
> >> it can give inconsistency. The statistics would not be consistent with
> >> the data used in refinement.
> >>
> >>
> >> cf. the related discussion in cctbxbb:
> >> http://phenix-online.org/pipermail/cctbxbb/2012-October/000611.html
> >>
> >>
> >> Best regards,
> >> Keitaro
> >> _______________________________________________
> >> cctbxbb mailing list
> >> cctbxbb(a)phenix-online.org
> >> http://phenix-online.org/mailman/listinfo/cctbxbb
> >>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
9 years, 3 months
Re: [phenixbb] Unable to install phenix 1.8.1-1168 on Scientific Linux 6.3 64 bits
by Kay Diederichs
Davi,
This looks to me like you tried installation using e.g. the fc14
installer (which has a glibc "too new" for SL6), and then installed with
the fc3 one on top of that (SL6 works well with the fc13 or lower installers).
Since the installers write to the same directory, the fc3 installer
finds that the files are already there and does basically nothing that
would fix the problem, i.e. replace the binaries and libs.
So just remove the /usr/local/phenix-1.8.1-1168 directory (or its
contents) before you try a different installer.
(Side note: maybe the installer should do that automatically, but there
are probably reasons why it doesn't)
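For example:

rm -rf /usr/local/phenix-1.8.1-1168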
HTH,
Kay
On 01/29/2013 12:31 AM, phenixbb-request(a)phenix-online.org wrote:
> Message: 5
> Date: Fri, 25 Jan 2013 16:15:01 -0800
> From: Davi de Miranda Fonseca<davi.fonseca(a)ntnu.no>
> To: phenixbb(a)phenix-online.org
> Subject: [phenixbb] Unable to install phenix 1.8.1-1168 on Scientific
> Linux 6.3 64 bits
> Message-ID:<51032005.9040800(a)ntnu.no>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Dear all,
>
> I am unable to install phenix 1.8.1-1168 on Scientific Linux 6.3 64
> bits. Hence, I would greatly appreciate any 2 cents (or diamonds).
>
> I installed Scientific Linux 6.3 64 bits, updated everything and
> installed a couple of things that I think would be necessary. Then I
> uncompressed phenix-installer-1.8.1-1168-intel-linux-2.6-x86_64-fc3.tar
> to /tmp and tried to install, however I got some errors. Hence, I
> installed a couple more things and tried installing again.
> This time it went through most of the installation but failed during the
> configuration of phenix packages.
>
> **** Here is my shell output:
>
> [root@scientifix phenix-installer-1.8.1-1168]# ./install
>
> ==========================================================================
> PHENIX Installation
>
> version: 1.8.1
> release tag: 1168
> machine type: intel-linux-2.6-x86_64
> source machine type: intel-linux-2.6-x86_64
> OS version: linux
> user shell: /bin/bash
> destination: /usr/local
> =========================================================================
>
> ==========================================================================
> Installing from binary bundle
> ==========================================================================
>
> bundle file:
> bundle-1.8.1-1168-intel-linux-2.6-x86_64.tar.gz
> PHENIX installation source directory set to:
> /tmp/phenix-installer-1.8.1-1168
> PHENIX installation target directory<PHENIX_LOC> set to:
> /usr/local/phenix-1.8.1-1168
> PHENIX installation build directory set to:
> <PHENIX_LOC> /build/intel-linux-2.6-x86_64
> PHENIX temporary build directory set to:
> build-binary/intel-linux-2.6-x86_64/scientifix/tmp
> PHENIX installation log directory set to:
> build-binary/intel-linux-2.6-x86_64/scientifix/log
>
> ==== warning: "/usr/local/phenix-1.8.1-1168" already exists ====
> ==== warning: cannot determine installation time-stamp, proceeding ====
>
>
> installing binary components
>
> removing existing files in
> build-binary/intel-linux-2.6-x86_64/scientifix/tmp/binary
> finding installed versions
> finding binary versions
> binary components up-to-date
> the binary package will not be installed
>
> setting environment
>
> configuring PHENIX components
>
> Error configuring: see
> /tmp/phenix-installer-1.8.1-1168/build-binary/intel-linux-2.6-x86_64/scientifix/log/binary.log
> for details
>
>
>
>
> ***** And here are the last lines of
> /tmp/phenix-installer-1.8.1-1168/build-binary/intel-linux-2.6-x86_64/scientifix/log/binary.log:
>
> ./build/intel-linux-2.6-x86_64/include/
> ./build/intel-linux-2.6-x86_64/include/phaser_defaults.params
> ./build/intel-linux-2.6-x86_64/include/phaser_nma_defaults.phil
> /usr/local/phenix-1.8.1-1168/build/intel-linux-2.6-x86_64/base/bin/python:
> /lib64/libc.so.6: version `GLIBC_2.14' not found (required by
> /usr/local/phenix-1.8.1-1168/build/intel-linux-2.6-x86_64/base/bin/python)
>
> By the way, Scientific linux 6.3 comes with glibc 2.12.
>
> It might be too naive of me, but would it be possible to copy
> libc.so.6 from glibc-2.14 somewhere and use an option during the
> installation to point to it? Any better ideas? (Like a step-wise
> description used in successful installation of phenix in Scientific
> Linux 6.3 64)
>
> Thank you for your time and help.
>
> Cheers,
> Davi
>
--
Kay Diederichs http://strucbio.biologie.uni-konstanz.de
email: Kay.Diederichs(a)uni-konstanz.de Tel +49 7531 88 4049 Fax 3183
Fachbereich Biologie, Universität Konstanz, Box M647, D-78457 Konstanz
This e-mail is digitally signed. If your e-mail client does not have the
necessary capabilities, just ignore the attached signature "smime.p7s".
13 years
Re: [phenixbb] Error determining reference matches
by Jeff Headd
Hi Dan,
I agree with Pavel, we'll really need to see your files to be able to
figure out what is going on. Something is going wrong with the alignment
routine that is used to decide residue matching for reference restraints,
but without seeing the files it's hard to speculate as to why.
If you could send your files to me or Pavel (directly, not to the list)
we'll be able to sort out what is going wrong.
Thanks,
Jeff
On Fri, Aug 30, 2013 at 11:46 PM, Pavel Afonine <pafonine(a)lbl.gov> wrote:
> Hi Dan,
>
> the only way we can help (= debug the problem, fix it or/and suggest a
> work-around) is if we can reproduce it here locally. To do this we need all
> inputs and commands necessary to reproduce the refinement run that crashed.
> Please send files to me (not to whole mailing list) and I will look myself
> or redirect to a respective developer.
>
> Thanks,
> Pavel
>
>
> On 8/30/13 4:32 PM, Dan McNamara wrote:
>
> Hi all,
>
> I am encountering a strange crash when running phenix.refine through the
> command line with reference model restraints turned on. The references
> point to two different PDB files, one labeled as chain A and one labeled as
> chain B.
>
> This error occurs whether I have my refinement model in P1 with 192 chains
> or P2(1) with 96 chains. The chains are A-EZ (192) or A-BH (96). It appears
> to fail to align the reference models to the sequences in the refinement
> model. This error does not occur with other smaller target structures under
> 25 chains and reference models used with the same installation of phenix.
>
> I am hoping for any advice on why this might be happening or how I might
> get around it.
>
> ================== Extract refinement strategy and selections
> =================
>
> Refinement flags and selection counts:
> individual_sites = True (401472 atoms)
> torsion_angles = False (0 atoms)
> rigid_body = True (401472 atoms in 192 groups)
> individual_adp = False (iso = 0 aniso = 0)
> group_adp = True (201984 atoms in 25824 groups)
> tls = False (0 atoms in 0 groups)
> occupancies = False (0 atoms)
> group_anomalous = False
>
> n_use = 401472
> n_use_u_iso = 401472
> n_use_u_aniso = 0
> n_grad_site = 0
> n_grad_u_iso = 0
> n_grad_u_aniso = 0
> n_grad_occupancy = 0
> n_grad_fp = 0
> n_grad_fdp = 0
> total number of scatterers = 401472
> *** Adding Reference Model Restraints ***
> determining reference matches automatically...
> Traceback (most recent call last):
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/build/intel-linux-2.6-x86_64/../../phenix/phenix/command_line/refine.py",
> line 11, in <module>
> command_line.run(command_name="phenix.refine", args=sys.argv[1:])
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/phenix/phenix/refinement/command_line.py",
> line 92, in run
> master_params=customized_master_params)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/phenix/phenix/refinement/driver.py",
> line 501, in __init__
> log=self.log)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 147, in __init__
> self.get_reference_dihedral_proxies()
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 460, in get_reference_dihedral_proxies
> log=self.log)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/reference_model.py",
> line 323, in process_reference_groups
> moving_chain = mod_h)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/cctbx_project/mmtbx/torsion_restraints/utils.py",
> line 426, in _ssm_align
> ssm_alignment = ccp4io_adaptbx.SSMAlignment.residue_groups(match=ssm)
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 215, in residue_groups
> return cls( match = match, indexer = indexer )
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 176, in __init__
> self.pairs.append( ( get( f, indexer1 ), get( s, indexer2 ) ) )
> File
> "/auto_nfs/joule2/programs/phenix/phenix-installer-dev-1457/phenix-dev-1457/ccp4io_adaptbx/__init__.py",
> line 173, in get
> assert identifier in indexer, "Id %s missing" % str( identifier )
> AssertionError: Id ('A', 1, ' ') missing
>
> Best,
> Dan
>
>
> _______________________________________________
>
>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
>
12 years, 5 months
Re: [cctbxbb] constrained hydrogen geometry
by Ralf W. Grosse-Kunstleve
Hi Luc,
> I was trying to figure out whether this interface would be sufficient
> and/or necessary for the Hydrogen geometry constraints. I am not
> quite sure I understand what is gradient_sum_matrix exactly in fact.
gradient_sum_matrix is not directly relevant to the hydrogen problem.
The only commonality is the chain rule. Roughly, if you have two
variables x, y which are "normally" independent and have gradients
df/dx and df/dy, and if under "special circumstances" x becomes
a function of y (or vice versa), you have to add up the gradients
according to the chain rule. If you work this out for the linear
relations given by the symmetry operations, you can cast the result
in the form of the "gradient sum matrices".
Below is a simple example, by Peter Zwart, constraining the sum of
occupancies to one.
In the hydrogen case, I'd think you just write down the equations
constraining the coordinates to each other. Then study them to
apply the chain rule. Done! :)
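For the simplest riding model, where x_H = x_parent + d with the offset d held fixed, that chain rule reduces to adding the hydrogen gradient onto its parent's; a minimal sketch (the function name is illustrative, not an existing cctbx interface):

def reduce_riding_h_gradients(g_parent, g_hydrogen):
  # x_H = x_parent + d with d fixed, so df/dx_parent picks up df/dx_H.
  return [gp + gh for gp, gh in zip(g_parent, g_hydrogen)]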
Regarding the interfaces: I usually think about interfaces after I've
worked out the math and/or algorithms. It is good to make interfaces
as similar as possible, but ultimately "form follows function",
as a famous architect taught us.
Most importantly, find class, function and variable names that convey
meaning even to the "reader" no immersed in the problem. That's more
than half the battle.
> It is the philosophy of the cctbx, isn't it? You have constructed
> the cctbx so as to use it as a Python library, relegating C++ to some
> sort of assembly language for efficiency.
Yes, that's a good viewpoint.
> The lack of abstraction in the C++ code of the cctbx
I don't know what you mean. :)
> (hardly any inheritance,
Because it has a large potential for obfuscating what's going on.
Inheritance is like salt in the soup.
> no advanced genericity)
You must have overlooked Boost.Python. :)
If that doesn't change your mind, look closer at the implementation
of the array algebra header files in scitbx/array_family, including
the auto-generated header files in cctbx_build/include.
> would require wrapping a lot of the cctbx classes behind
> proxies inheriting from abstract types.
I find this approach cumbersome and fruitless. I am a big believer
in form follows function. Function follows form (design interfaces
first) doesn't work for me. I always try to achieve first what
I actually want to do, then find a good form for it.
> A typical example for me are those classes dealing
> with position constraints and ADP constraints. They
> are unrelated in C++, cctbx::sgtbx::site_constraints and
> cctbx::sgtbx::tensor_rank_2::constraints, although they have both have
> the same member functions listed above.
Functionally the member functions are unrelated, unless you can come
up with a general engine that applies the chain rule automatically to
a given function. (Such tools exist, e.g. adolc, but are heavy-duty.)
I think it would be counter-productive to tie adp/site constraints
code together just because the names of the methods are similar. I
consider modularity a more important value.
> Of course, from Python, it does not matter, thanks to duck typing:
> if two objects answer the same method calls, then they are by all
> practical means of the same type.
The adp/site constraints are not a good example. We don't want to
use them interchangeably.
I've tried to use the "functor" idea (unifies interfaces without
inheritance) in some places, but even that often turns out to be a
struggle. This is especially the case when another person adds on
(e.g. cctbx.xray.target_functors).
What counts the most in my opinion is to avoid redundancy. That's my
most important goal when writing source code, because redundancy
hampers progress and multiplies bugs, among other bad things.
I'm trying to use all tools available to cut back redundancy.
I find both templates and inheritance invaluable and use them where
they help reducing redundancy, but I do not see "use templates" or
"use inheritance" as goals in their own right.
> Our group would be more than happy to contribute them to the cctbx
> since we absolutely need them for our project.
That's great! I think you can achieve a lot quickly if you limit
yourself initially to Python. If you then check in what you have, I'd
be happy to walk you through the first few times moving time-critical
functionality to C++.
Cheers,
Ralf
By Peter Zwart:
Some ideas on how to restrain sums of variables to unity.
say we have
f(a,b,c)
with
a(x1) = x1
b(x2) = x2
c(x1,x2) = 1 - x1 - x2
Say we have from the refinement engine the following numerical values
(right hand side) for our partial derivatives:
df/da = Ga
df/db = Gb
df/dc = Gc
The chain rule says:
dq/dx = (dq/du)(du/dx) + (dq/dv)(dv/dx)
Thus
df/dx1 = (df/da)(da/dx1) + (df/dc)(dc/dx1) = Ga - Gc
df/dx2 = (df/db)(db/dx2) + (df/dc)(dc/dx2) = Gb - Gc
This gives you the rule to go from the full set of derivatives to the
reduced set of derivatives (for the general case):
#--------------------------------------------------
from scitbx.array_family import flex

def pack_derivatives(g_array):
  # Apply the chain rule: drop the gradient of the dependent (last)
  # variable and subtract it from each remaining gradient,
  # df/dx_i = G_i - G_last.
  result = g_array[0:g_array.size()-1]
  result = result - g_array[g_array.size()-1]
  return result
#--------------------------------------------------
You also need to go from a reduced set of parameters to the full set of
parameters (for the general case):
#--------------------------------------------------
def expand_parameters(x):
  # Recover the full parameter set; note that flex append modifies the
  # array in place (and returns None), so do not re-assign its result.
  result = x.deep_copy()
  result.append(1 - flex.sum(x))
  return result
#--------------------------------------------------
This of course assumes that the 'last' variable is the dependent (non-refined)
one during refinement.
19 years, 1 month