Search results for query "look through"
- 527 messages
Re: [cctbxbb] Boost Python 1.56
by Pavel Afonine
Hi Graeme,
hm.. this is a good question. We've been through back-and-forth
iterations of editing this file and I think the latest I have is from
Paul. But I can't find a non-PDF version of it. Paul: do you have an
editable version of this file?
Thanks,
Pavel
On 4/6/17 13:45, [email protected] wrote:
> Hi Pavel
>
> These all seem sensible
>
> If you have the original non-PDF document it may be easier to transcribe this over.
>
> I also note that it lacks the actual detail on how to run tests! However, I would be happy to add this once it is on the wiki.
>
> Best wishes Graeme
>
> On 6 Apr 2017, at 04:00, Pavel Afonine <[email protected]> wrote:
>
> Not sure if that answers your questions, but once upon a time we here at Berkeley tried to write some sort of document that was supposed to answer questions like this. Attached. By no means is it complete, up-to-date, etc., but it might be worth reading for anyone who contributes to cctbx. (I'm not even sure I'm sending the latest version.)
> Unfortunately, nobody bothered to put it in some central place.
>
> Pavel
>
> On 4/6/17 10:51, James Holton wrote:
> Hey Billy,
>
> On a related note, how do I run these regression tests before committing something into Git? Is there a document on dials regression testing that I can't find?
>
> -James
>
> On Apr 5, 2017, at 3:38 PM, Billy Poon <[email protected]> wrote:
>
> Hi all,
>
> I tested Boost 1.56 on our buildbot servers and got some new test failures with
>
> cctbx_project/scitbx/array_family/boost_python/tst_flex.py
> cctbx_project/scitbx/random/tests/tst_random.py
>
> The full log for CentOS 6 can be found at
>
> http://cci-vm-6.lbl.gov:8010/builders/phenix-nightly-intel-linux-2.6-x86_64…
>
> It looks like the errors are related to random number generation. For a given seed, would the sequence of numbers change when Boost is changed? I did a diff between Boost 1.56 and the current Boost and could not see any changes that immediately stood out as being related to random numbers.
>
> Are these tests failing for others as well?
>
> --
> Billy K. Poon
> Research Scientist, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Road, M/S 33R0345
> Berkeley, CA 94720
> Tel: (510) 486-5709
> Fax: (510) 486-5909
> Web: https://phenix-online.org
>
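A minimal way to check Billy's seed question empirically, sketched under the assumption of a working scitbx build (run with libtbx.python; seed and count are arbitrary): print the first few values for a fixed seed and diff the output between builds linked against the two Boost versions.

from scitbx.array_family import flex

flex.set_random_seed(0)             # fix the seed of scitbx's Boost-backed generator
print(list(flex.random_double(5)))  # diff this output across Boost versions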
> On Wed, Apr 5, 2017 at 8:12 AM, Charles Ballard <[email protected]> wrote:
> FYI, we (CCP4) have been using 1.56 for building cctbx/phaser/dials for the last while with no issues. Don't know about 1.60, but 1.59 causes issues with the boost python make_getter and make_setter (initialisation of a non-const reference if the passed type is a temporary).
>
> Charles
>
> On 3 Apr 2017, at 14:31, Luc Bourhis wrote:
>
> Hi all,
>
> Everybody seemed to agree, but then it was proposed to move straight to Boost 1.60, and this caused trouble. Could we consider again moving to at least 1.56? As far as I can tell, this does not cause any issues and, as stated one year ago, it would help me and Olex 2.
>
> Thanks,
>
> Luc
>
> On 10 Feb 2016, at 15:17, Nicholas Sauter <[email protected]> wrote:
>
> Nigel, Billy & Aaron,
>
> I completely endorse this move to Boost 1.56. Can we update our build?
>
> Nick
>
> Nicholas K. Sauter, Ph. D.
> Computer Staff Scientist, Molecular Biophysics and Integrated Bioimaging Division
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Rd., Bldg. 33R0345
> Berkeley, CA 94720
> (510) 486-5713
>
> On Wed, Feb 10, 2016 at 2:41 PM, Luc Bourhis <[email protected]> wrote:
> Hi,
>
> I have improvements to the smtbx on their way to be committed which require Boost version 1.56. This is related to Boost.Threads, whose support I re-activated a few months ago on Nick’s request. I need the function boost::thread::physical_concurrency which returns the number of physical cores on the machine, as opposed to virtual cores when hyperthreading is enabled (which it is by default on any Intel machine). That function is not available in Boost 1.55 which is the version currently used in the nightly tests: it appeared in 1.56.
>
> So, would it be possible to move to Boost 1.56? Otherwise, I will need to backport that function. Not too difficult but not thrilling.
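As an aside, the logical/physical distinction Luc describes can also be seen from Python with psutil (an assumption here, not a cctbx dependency):

import psutil

print(psutil.cpu_count())               # logical cores, hyperthreads included
print(psutil.cpu_count(logical=False))  # physical cores, the analogue of
                                        # boost::thread::physical_concurrency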
>
> Best wishes,
>
> Luc
>
>
> <cctbx-developer-guidance-08-2015.pdf>
8 years, 10 months
Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Kendall Nettles
Hi Pavel,
What happens if you throw out that many reflections that have signal? Can you take out a random set of the same size?
Best,
Kendall
On May 3, 2012, at 2:41 AM, "Pavel Afonine" <[email protected]> wrote:
> Hi Kendall,
>
> I just did this quick test: calculated R-factors using original and
> anisotropy-corrected Mike Sawaya's data (*)
>
> Original:
> r_work : 0.3026
> r_free : 0.3591
> number of reflections: 26944
>
> Truncated:
> r_work : 0.2640
> r_free : 0.3178
> number of reflections: 18176
>
> The difference in R-factors is not too surprising given how many
> reflections were removed (about 33%).
>
> Pavel
>
> (*) Note, the data available in PDB is anisotropy corrected. The
> original data set was kindly provided to me by the author.
>
>
> On 5/2/12 5:25 AM, Kendall Nettles wrote:
>> I didn't think the structure was publishable with an Rfree of 33% because I was expecting the reviewers to complain.
>>
>> We have tested a number of data sets on the UCLA server and it usually doesn't make much difference. I wouldn't expect truncation alone to change Rfree by 5%, and it usually doesn't. The two times I have seen dramatic impacts on the maps (and Rfree), the highly anisotropic sets showed strong waves of difference density as well, which was fixed by throwing out the noise. We have moved to using loose data cutoffs for most structures, but I do think anisotropic truncation can be helpful in rare cases.
>>
>> Kendall
>>
>> On May 1, 2012, at 3:07 PM, "Dale Tronrud" <[email protected]> wrote:
>>
>>> While philosophically I see no difference between a spherical resolution
>>> cutoff and an elliptical one, a drop in the free R can't be the justification
>>> for the switch. A model cannot be made more "publishable" simply by discarding
>>> data.
>>>
>>> We have a whole bunch of empirical guides for judging the quality of this
>>> and that in our field. We determine the resolution limit of a data set (and
>>> imposing a "limit" is another empirical choice made) based on Rmerge, Rmeas,
>>> or Rpim getting too big, or I/sigI getting too small, and there is no agreement
>>> on what counts as "too big" or "too small".
>>>
>>> We then have other empirical guides for judging the quality of the models
>>> we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to
>>> recognize that these criteria need to be applied differently for different
>>> resolutions. A lower resolution model is allowed a higher Rfree, for example.
>>>
>>> Isn't it also true that a model refined to data with a cutoff of I/sigI of
>>> 1 would be expected to have a free R higher than a model refined to data with
>>> a cutoff of 2? Surely we cannot say that the decrease in free R that results
>>> from changing the cutoff criteria from 1 to 2 reflects an improved model. It
>>> is the same model after all.
>>>
>>> Sometimes this shifting application of empirical criteria enhances the
>>> adoption of new technology. Certainly the TLS parametrization of atomic
>>> motion has been widely accepted because it results in lower working and free
>>> Rs. I've seen it knock 3 to 5 percent off, and while that certainly means
>>> that the model fits the data better, I'm not sure that the quality of the
>>> hydrogen bond distances, van der Waals distances, or maps are any better.
>>> The latter details are what I really look for in a model.
>>>
>>> On the other hand, there has been good evidence through the years that
>>> there is useful information in the data beyond an I/sigI of 2 or an
>>> Rmeas > 100% but getting people to use this data has been a hard slog. The
>>> reason for this reluctance is that the R values of the resulting models
>>> are higher. Of course they are higher! That does not mean the models
>>> are of poorer quality, only that data with lower signal/noise has been
>>> used that was discarded in the models you used to develop your "gut feeling"
>>> for the meaning of R.
>>>
>>> When you change your criteria for selecting data you have to discard
>>> your old notions about the acceptable values of empirical quality measures.
>>> You either have to normalize your measure, as Phil Jeffrey recommends, by
>>> ensuring that you calculate your R's with the same reflections, or by
>>> making objective measures of map quality.
>>>
>>> Dale Tronrud
>>>
>>> P.S. It is entirely possible that refining a model to a very optimistic
>>> resolution cutoff and calculating the map to a lower resolution might be
>>> better than throwing out the data altogether.
>>>
>>> On 5/1/2012 10:34 AM, Kendall Nettles wrote:
>>>> I have seen dramatic improvements in maps and behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think it would have been publishable otherwise.
>>>> Kendall
>>>>
>>>> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>>>>
>>>>> On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans <[email protected]> wrote:
>>>>>> Are anisotropic cutoffs desirable?
>>>>> is there a peer-reviewed publication - perhaps from Acta
>>>>> Crystallographica - which describes precisely why scaling or
>>>>> refinement programs are inadequate to ameliorate the problem of
>>>>> anisotropy, and argues why the method applied in Strong et al. 2006
>>>>> satisfies this need?
>>>>>
>>>>> -Bryan
>
13 years, 9 months
Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
by Randy John Read
Fantastic! Sorry it took so long to get my hands on an ARM-based Mac since the first reports of problems!
----
Randy J. Read
> On 4 Jul 2023, at 21:02, Luca Jovine <[email protected]> wrote:
>
> Thank you for the info Randy,
>
> I confirm that in the last available nightly build (1.21rc1-5015) the issue is clearly fixed, resulting in a >70-fold speed increase compared to Phaser-MR from 1.21rc1-5008. For two sample jobs using intensity data on an M2 MacBook Pro:
>
> 1.21rc1-5008:
> ------------------
> Job 1: CPU Time: 0 days 0 hrs 37 mins 23.01 secs ( 2243.01 secs)
> Job 2: CPU Time: 0 days 0 hrs 31 mins 10.79 secs ( 1870.79 secs)
>
> 1.21rc1-5015:
> -------------------
> Job 1: CPU Time: 0 days 0 hrs 0 mins 31.13 secs ( 31.13 secs)
> Job 2: CPU Time: 0 days 0 hrs 0 mins 25.44 secs ( 25.44 secs)
>
> ...thanks for the fix!!
>
> Luca
>
> -----Original Message-----
> From: Randy John Read <[email protected]>
> Date: Tuesday, 4 July 2023 at 17:37
> To: Xavier Brazzolotto <[email protected]>
> Cc: PHENIX user mailing list <[email protected]>, Luca Jovine <[email protected]>
> Subject: Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
>
>
> Hi,
>
>
> Thanks for sending the log files!
>
>
> The jobs turn out not actually to be identical. The GUI automatically chose to use the intensity data (which is what we prefer for use in Phaser) whereas your job run from a script is using amplitude data. The issue I alluded to earlier occurs only for intensity data, because the analysis of those data involves applying different equations, which use a special function (tgamma from the Boost library). For some reason I don't understand, when the Intel version of the tgamma algorithm is run on an ARM processor via Rosetta, it's much, much slower than other calculations.
>
>
> Just last week (right after I finally got an M2 MacBook Pro), we tracked this down and replaced the calls to Boost tgamma with alternative code, and that problem should not exist any more. You can use it already in Phenix by getting a recent nightly build, and I've asked the CCP4 people to compile a new version of Phaser and release it as an update to CCP4 as well.
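Since the slowdown affected intensity data only, one practical check is whether a given reflection file supplies intensities or amplitudes. A minimal sketch, assuming a cctbx environment (the file name is hypothetical):

from iotbx.reflection_file_reader import any_reflection_file

for ma in any_reflection_file("data.mtz").as_miller_arrays():
    kind = ("intensity" if ma.is_xray_intensity_array()
            else "amplitude" if ma.is_xray_amplitude_array()
            else "other")
    print(ma.info().label_string(), kind)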
>
>
> Best wishes,
>
>
> Randy
>
>
>> On 4 Jul 2023, at 12:05, Xavier Brazzolotto <[email protected]> wrote:
>>
>> For information
>>
>> Apple M2 running Ventura 13.4.1 with 64 GB memory
>> Phenix 1.20.1-4487 (Intel one).
>>
>> I’ve run MR of the same dataset (2.15A - I422) with the same model both with the command line and through the GUI.
>>
>> Command line (phenix.phaser) : 48 secs.
>> GUI (Phaser-MR simple one component interface): 18 mins !
>>
>> I attach the two log files in case they help
>>
>>
>>
>>>> On 4 Jul 2023, at 12:54, Luca Jovine <[email protected]> wrote:
>>>
>>> Hi Xavier and Randy, I'm also experiencing the same on a M2 Mac!
>>> -Luca
>>>
>>> -----Original Message-----
>>> From: <[email protected]> on behalf of Xavier Brazzolotto <[email protected]>
>>> Date: Tuesday, 4 July 2023 at 12:38
>>> To: Randy John Read <[email protected]>
>>> Cc: PHENIX user mailing list <[email protected]>
>>> Subject: Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
>>>
>>>
>>> Hi Randy,
>>>
>>>
>>> Indeed I’m running Phenix on a brand new M2 Mac.
>>> I will benchmark the two processes (GUI vs command line) and post them here.
>>>
>>>
>>>> On 4 Jul 2023, at 12:32, Randy John Read <[email protected]> wrote:
>>>>
>>>> Hi Xavier,
>>>>
>>>> We haven’t noticed that, or at least any effect is small enough not to stand out. There shouldn’t be a lot of overhead in communicating with the GUI (i.e. updating the terse log output and the graphs) but if there is we should look into it and see if we can do something about it.
>>>>
>>>> Could you tell me how much longer (say, in percentage terms) a job takes when you run it through the GUI compared to running the same job outside the GUI on the same computer? Also, it's possible the architecture matters so could you say which type of computer and operating system you're using? If it's a Mac, is it one with an Intel processor or an ARM (M1 or M2) processor? (By the way, we finally managed to track down and fix an issue that caused Phaser to run really slowly on an M1 or M2 Mac when using the version compiled for Intel, once I got my hands on a new Mac.)
>>>>
>>>> Best wishes,
>>>>
>>>> Randy
>>>>
>>>>> On 4 Jul 2023, at 10:44, Xavier Brazzolotto <[email protected]> wrote:
>>>>>
>>>>> Dear Phenix users
>>>>>
>>>>> I’ve noticed that molecular replacement was clearly slower while running from the GUI compared to using the command line (phenix.phaser).
>>>>>
>>>>> Did you also observe such behavior?
>>>>>
>>>>> Best
>>>>> Xavier
>>>>
>>>>
>>>> -----
>>>> Randy J. Read
>>>> Department of Haematology, University of Cambridge
>>>> Cambridge Institute for Medical Research Tel: +44 1223 336500
>>>> The Keith Peters Building
>>>> Hills Road E-mail: [email protected]
>>>> Cambridge CB2 0XY, U.K. www-structmed.cimr.cam.ac.uk
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> <command_line_PHASER.log><GUI_phaser.log>
>
>
> -----
> Randy J. Read
> Department of Haematology, University of Cambridge
> Cambridge Institute for Medical Research Tel: +44 1223 336500
> The Keith Peters Building
> Hills Road E-mail: [email protected]
> Cambridge CB2 0XY, U.K. www-structmed.cimr.cam.ac.uk
>
>
>
>
>
2 years, 7 months
Re: [cctbxbb] Scons for python3 released
by Billy Poon
Hi all,
I agree with everything and think that the Python 2/3 approach discussed
during the conference call and this message thread is the least disruptive
approach. Some additional points about testing, in no particular order:
1) Updating SCons - I also tried building with SCons 3.0 and I just needed
to update some SConscript files. I only did this on CentOS 6, but I'll test
other operating systems before checking it in. I think anything affecting
building should be tested across multiple operating systems before check-in.
2) I think updating Python files to make them 2-3 compatible can be done
module by module. In theory, they're self-contained, so I agree that fixing things
in parts is more manageable. Since these fixes only affect Python files,
there should be fewer OS-specific issues, so we can catch those in the
nightly builds.
3) C++ changes - Folks in Berkeley can coordinate on this since we have
several people who know C++. We can also collaborate with the Diamond folks
on a separate branch. These fixes would need to be tested across multiple
OSes and compilers. Also, I would like these changes to be C++11 compliant.
Currently, CCTBX and Phenix (including DIALS) can be built with the C++11
standard, so let's keep it that way.
4) Boost - I also tested Boost 1.63 and there seem to be several
Boost-related test errors. These would need to be sorted out before
upgrading. Is there a particular feature in 1.65 that is very useful? Since
Boost is so fundamental, I would be more conservative on making changes to
it. Again, this would need to be tested on multiple OSes and compilers.
5) Reorganizing code - I support this as well, but we have to be careful
that we do not remove functionality. Also, if code is reorganized,
documentation should be added so that the Sphinx documentation can be more
complete.
Since we plan on a Phenix release in December, let's work on the Boost and
C++11 changes in January, but Python changes can go ahead as long as
nothing new breaks.
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
On Wed, Oct 18, 2017 at 4:12 AM, <[email protected]> wrote:
> Hi,
>
> Just to add to this. I think Graeme's find_clutter idea has merit, which
> could certainly check for
> from __future__ import absolute_import, print_function
> which would cover a lot of ground.
>
> I found this also worthwhile to read through: http://python-future.org/compatible_idioms.html
> Especially handling things like xrange vs range should be done with a bit
> of thinking, and when ancient code needs to be touched it also presents
> an opportunity to make it clearer what it actually does. For example, the
> very first commit on the Python3 branch changed xrange->range here, and I
> wondered... https://github.com/cctbx/cctbx_project/commit/f10fd505841de372098bca83c40fc62211439f86 (untested)
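For reference, one 2/3-safe spelling of the xrange idiom, as a minimal sketch (six.moves is the compatibility shim that Nick settles on later in this thread; python-future's builtins.range would work similarly):

from __future__ import print_function
from six.moves import range  # lazy xrange on Python 2, built-in range on Python 3

for i in range(3):
    print(i)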
>
> Finally, I think doing all 2-3-compatible conversions in a branch, for
> example print -> print() as it is happening now, will be a nightmare to
> merge later because you will be touching large portions of a large numbers
> of files, but other development does not stop. And, let's be honest, nobody
> will review a 100k LoC change set anyway.
> I would suggest we do those refactoring changes directly on master. A
> single type of change (ie. print->print()) on a per-file basis along with
> "from __future__ import print_statement" in say 30 files within one
> directory tree per commit? Much more manageable.
>
> Oh, and we need the future module installed from bootstrap.
>
> -Markus
>
>
> -----Original Message-----
> From: [email protected] [mailto:[email protected]] On Behalf Of [email protected]
> Sent: 17 October 2017 15:20
> To: [email protected]
> Subject: Re: [cctbxbb] Scons for python3 released
>
> Hi Robert
>
> I think having more than one person independently look at the p3 problem
> is no bad thing - also with the geography it would seem perfectly possible
> for you / Nick to meet up and compare notes on this - it’s certainly
> something I would support.
>
> Clearly there are a lot of things which could get caught up in the net
> with the p3 update - for example build system discussions, cleaning out
> cruft that is in there to support python 2.3 etc… however I did not read
> that Nick thought SCons3 was a waste of time - I think he was getting at
> the point that this is part of the move, and that there is also a lot of
> related work. Also that having p2 / p3 support for at least a transition,
> rather than the “full Brexit” of no longer supporting p2 beyond the first
> date where p3 works, would be good. I could imagine this transition period
> being O(1000 days) even.
>
> I think the migration process is going to be a complex one, but doable.
> One thing I think we do need is to make sure that the code base as pushed
> by developers remains compatible with p2 and p3 - so perhaps extending
> find_clutter to check for things which only work in one or the other? Then
> developers would learn the tricks themselves and (ideally) not push p2-only
> or p3-only code, at least until post-transition. This I would compare with
> the svn to git move, which caused some grumbling and a little confusion but
> was ultimately successful…
>
> Hope this is constructive, cheerio
>
> Graeme
>
>
>
> On 17 Oct 2017, at 13:50, R.D. Oeffner <[email protected]> wrote:
>
> Hi Nick and others,
>
> That sounds like a great effort. A shame I didn't know about this. I have
> not had time to look in detail into your work but will nevertheless
> summarize my thoughts and work I have been doing lately in an effort to
> move CCTBX to python3.
>
> I am not sure why it would be a waste of time to use SCons3.0 with python3
> as I think you are suggesting. To me it seems as a necessary step in
> creating a codebase that runs both on python2 and python3. Do I understand
> correctly that as long as CCTBX code is changed to comply with python3 and
> remain python2 compliant then such a codebase can be used in place of the
> current python2 only codebase for derived projects such as Dials and
> Phenix? Assuming this is the case I think it is worth focusing just on
> CCTBX only for now.
>
> My own attempt at porting CCTBX to python3 consists of the following
> steps:
> * Replace Scons2 with Scons3
> * Update the subset of Boost sources to version 1.63
> * Run futurize stage1 and stage2 on the CCTBX
> * Build base components (libtiff, hdf5, python3.6 + add-on modules)
> * Run bootstrap.py build with Python3.6 repeatedly and provide mock-up
> fixes to allow the build to continue.
>
> This work is nearly complete in the sense that the sources now build
> but are unlikely to pass tests due to the mock-up fixes, which often
> consist of replacing PyStringXXX functions with equivalent
> PyUnicodeXXX or PyBytesXXX functions, ignoring whether that is
> appropriate or not. These token fixes would also need to be guarded by #if
> PY_MAJOR_VERSION == 3 ... macros.
>
> The sources are available on https://github.com/cctbx/cctbx_project/tree/Python3
>
> The next steps are less well defined. One approach would be to set up a
> build system that migrates python2 code to python3 using the futurize
> script, then builds CCTBX, runs tests, and presents build log files online
> as in http://cci-vm-6.lbl.gov:8010/one_line_per_build. With a hook to
> GitHub this could also be done on the fly as people commit code to CCTBX.
> This should encourage people to write code that runs on python2 as well as
> python3. Eventually once all tests for CCTBX pass we are done and can merge
> this codebase into the master branch.
>
>
> Robert
>
>
>
> On 17/10/2017 11:56, Nicholas Devenish wrote:
> Hi All,
> I spent a little bit of time looking at python3/libtbx so have some input
> on this.
> On Tue, Oct 10, 2017 at 6:16 PM, Billy Poon <[email protected]> wrote:
> 1) Use Python 2 to build Python 2 version of CCTBX (no work)
> This might not be as simple as "No Work" - cctbx is a few years behind on SCons
> versions (libtbx.scons --version suggests 2.2.0, from 2012) so there
> *might* be other issues upgrading the SCons version to 3.0, before trying
> python3.
> I also feel that SCons-Python3 is something of a red herring - the only
> thing that non-python3-SCons prevents is an 100% python3-only codebase, and
> unless the plan is to migrate the entire codebase, including all downstream
> dependencies (like dials) to python3-only in one massive step (probably
> impossible), everything would need to be dual 2/3 first, and only then a
> decision taken on deprecating 2.7 support.
> More usefully, outside of a small core of libtbx code, not many of the
> buildsystem files are bound to the main project so this shouldn't be too
> difficult. In fact, I've experimented with converting to CMake, and as one
> of the approaches I explored, I wrote a SCons-emulator that read and parsed
> the build *without* any scons/cctbx dependencies. To parse the entire
> "builder=dials" SCons-tree only required this subset of libtbx:
> https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py#L202-L235 [1]
> (Note: my general CMake-work works but isn't complete/ready/documented for
> general viewing, and still much resembles a hacky project, but I thought
> that the fact it was sufficient to decouple the buildsystem is usefully
> illustrative of how simple the task might be.)
> Regarding general Python3
> conversion, it's definitely not "Just changing the print statements". I
> undertook a study in August to convert libtbx (being the core that
> *everything* depends on) to dual
> python2/3 and IIRC got most of the tests working in python3. It's a couple
> of months out-of-date, but is probably useful as a benchmark of the effort
> required. The repository links are:
> https://github.com/ndevenish/cctbx_project/tree/py3k-modernize [2]
> https://github.com/ndevenish/cctbx_project/tree/py3k [3]
> Probably best looked at with a graphical viewer to get a top-down view of the history. My
> approach was to separate manual/automatic changes as follows:
> 1. Remove legacy code/modules - e.g. old compatibility. The Optik removal
> came from this. We don't want to spend mental effort converting absorbed
> external libraries from a decade ago (see also e.g. pexpect,
> subprocess_with_fixes)
> 2. Make some manual fixes [Expanded as we go on]
> 3. Use futurize and modernize to update idioms ONLY e.g. remove
> pre-2.7 deprecated ways of working.
> Each operation was done in a separate
> commit (so that changes are more visible and I thought people would have
> less objection than to a massive code-change dump), and each commit ran the
> test suite for libtbx. Some of the 'fixers' in each tool are complementary.
> If there are any problems with tests or automatic conversion, then fix the
> problem, put the fix into step 2, then start again. This step should be
> entirely scriptable. I had 17 commits for separate fixes in this chain.
> This is where the py3k-modernize branch stops, and should in principle
> be kept entirely safe to push back onto the python2-only repository. The
> next steps form the `py3k` branch (not being intended for direct pushing,
> it is a little less organised - some of my changes could definitely be moved
> to step 2):
> 4. Run 'modernize' to convert the codebase to as much python2/3 as
> possible. This introduces the dependency on 'six'
> 5. Run tests, implement various fixes, repeat. This work was ongoing when
> I stopped working on the study.
> Various (non-exhaustive) problems found:
> - cStringIO isn't handled automatically, so these need to be fixed
> manually (e.g.
> https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbbdd5205504ec3f1a [4])
> - Iterators needed to be fixed in cases where they were missed (next vs
> __next__)
> - Rounding. Python3 uses 'Bankers Rounding' and there are formatting tests
> where this changes the output. I didn't know enough about the exact desired
> result to know the best way to fix this
> - libtbx uses compiler.misc.mangle and I don't know why - this was always
> a private interface and was removed in 3.
> - Moving print statements to functions - there were several failed tests
> relating to the old python2-print-soft-spacing behaviour, which was
> removed. Not too difficult, but definitely causes churn.
> - A couple of text/binary mode file issues, which seemed to be simple but
> may be more complicated than the test cases covered. I'd expect more issues
> with this in the format readers though.
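Two of the items above can be made concrete; a minimal sketch, assuming six is available:

from __future__ import print_function
from six.moves import cStringIO as StringIO  # cStringIO on Py2, io.StringIO on Py3

buf = StringIO()
buf.write(u"hello")
print(buf.getvalue())

# The rounding item: Python 3 rounds halves to the nearest even digit
# ("banker's rounding") while Python 2 rounds them away from zero.
print(round(2.5))  # 3.0 on Python 2, 2 on Python 3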
> I evaluated both the futurize (using the future library) and modernize (using
> the well-known six library) tools, both being different approaches to 2to3,
> but for dual 2/3 codebases. I liked the approach of futurize to attempt to
> make code look as python3-idiomatic as possible, but some of the
> performance implications were slightly
> opaque: e.g. libtbx makes heavy use of cStringIO (presumably for a good
> reason), and futurize converted all of these back to using StringIO in the
> Python2 case, so I settled on modernize as I felt two different compatibility
> libraries would be messy. In either case, using the library means that you
> can identify exactly everywhere that needs to be removed when moving to
> python3 only.
> My conclusions:
> - Automatic tools are useful for the bulk of changes, but there are still
> lots of edge cases
> - The complexity means that a phased approach is *absolutely* necessary -
> starting by converting the core to 2/3 and only moving to
> 3 once everything downstream is converted. Trying to convert everything at
> once would likely mean months of feature-freeze.
> - A separate "Remove legacy" cleaning phase might be very useful, though
> obviously the domain of this could be endless
> - SCons is probably the least important of the conversion worries
>
> Nick
> Links:
> ------
> [1] https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py#L202-L235
> [2] https://github.com/ndevenish/cctbx_project/tree/py3k-modernize
> [3] https://github.com/ndevenish/cctbx_project/tree/py3k
> [4] https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbbdd5205504ec3f1a
>
>
>
>
8 years, 3 months
Re: [cctbxbb] bootstrap.py build on Ubuntu
by David Waterman
Hi Billy
Thanks for the advice. I have come across this before on Ubuntu, where
older libraries don't work with the Debian multi-arch conventions. I think
generally this is not a problem with newer packages, so for example I doubt
there would be an issue with a reasonably recent Pillow in place of PIL.
Cheers
David
On Tue, 14 Jun 2016, 21:44 Billy Poon, <[email protected]> wrote:
> Hi David,
>
> Scratch the lib64z1 and lib64z1-dev packages. Apparently, those are for
> i386.
>
> For 12.04, 14.04, and 16.04, libz.so is placed in
> /usr/lib/x86_64-linux-gnu by the zlib1g-dev package. It looks like Ubuntu
> libraries are now placed in a directory named by architecture to better
> support multiple architectures on the same machine.
>
> I updated install_base_packages to supply that directory for building PIL.
> This is specific to x86_64, so this won't work on 32-bit Ubuntu. But if you
> want a 32-bit Ubuntu build, installing lib64z1-dev should work.
>
> Let me know if you have any more issues. Thanks!
>
> --
> Billy K. Poon
> Research Scientist, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Road, M/S 33R0345
> Berkeley, CA 94720
> Tel: (510) 486-5709
> Fax: (510) 486-5909
> Web: https://phenix-online.org
>
> On Tue, Jun 14, 2016 at 11:29 AM, David Waterman <[email protected]> wrote:
>
>> Hey Billy,
>>
>> Thanks. I'm travelling at the moment, but once I'm back I'll give that a
>> go.
>>
>> Cheers
>> David
>>
>> On Tue, 14 Jun 2016, 17:34 Billy Poon, <[email protected]> wrote:
>>
>>> Hi David,
>>>
>>> Actually, it looks like the lib64z1-dev package provides libz.so in
>>> /usr/lib64, so installing that package should fix your issue. It's a bit
>>> odd that the lib64z1 package does not provide that file.
>>>
>>> --
>>> Billy K. Poon
>>> Research Scientist, Molecular Biophysics and Integrated Bioimaging
>>> Lawrence Berkeley National Laboratory
>>> 1 Cyclotron Road, M/S 33R0345
>>> Berkeley, CA 94720
>>> Tel: (510) 486-5709
>>> Fax: (510) 486-5909
>>> Web: https://phenix-online.org
>>>
>>> On Mon, Jun 13, 2016 at 1:53 PM, Billy Poon <[email protected]> wrote:
>>>
>>>> Hi David,
>>>>
>>>> I don't have a fix yet, but here is a workaround. It seems like
>>>> setup.py is looking for libz.so instead of libz.so.1, so you can fix the
>>>> issue by making a symbolic link for libz.so in /usr/lib64.
>>>>
>>>> sudo ln -s /usr/lib64/libz.so.1 /usr/lib64/libz.so
>>>>
>>>> This requires root access, so that's why it's just a workaround.
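A quick diagnostic before patching build scripts is to ask which zlib the dynamic linker can see; a minimal standard-library sketch (this finds the runtime libz.so.1 and does not check for the libz.so dev symlink that PIL's setup.py wants):

import ctypes
import ctypes.util

name = ctypes.util.find_library("z")  # e.g. 'libz.so.1' when zlib is visible
print("zlib found:", name)
if name:
    ctypes.CDLL(name)  # raises OSError if it cannot actually be loaded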
>>>>
>>>> --
>>>> Billy K. Poon
>>>> Research Scientist, Molecular Biophysics and Integrated Bioimaging
>>>> Lawrence Berkeley National Laboratory
>>>> 1 Cyclotron Road, M/S 33R0345
>>>> Berkeley, CA 94720
>>>> Tel: (510) 486-5709
>>>> Fax: (510) 486-5909
>>>> Web: https://phenix-online.org
>>>>
>>>> On Sat, Jun 11, 2016 at 5:05 PM, Billy Poon <[email protected]> wrote:
>>>>
>>>>> Hi David,
>>>>>
>>>>> Sorry it took so long! Setting up all the virtual machines was a time
>>>>> sink and getting things to work on 32-bit CentOS 5 and Ubuntu 12.04 was a
>>>>> little tricky.
>>>>>
>>>>> It looks like Ubuntu 16.04 moved its libraries around. I used apt-get
>>>>> to install libz-dev and lib64z1 (the 64-bit library). There is a libz.so.1
>>>>> file in /lib/x86_64-linux-gnu and in /usr/lib64.
>>>>>
>>>>> I have not gotten it to work yet, but I'm pretty sure this is the
>>>>> issue. I'll have to double-check 12.04 and 14.04.
>>>>>
>>>>> As for Pillow, I did test it a few months ago, but I remember there
>>>>> being API changes that will need to fixed.
>>>>>
>>>>> --
>>>>> Billy K. Poon
>>>>> Research Scientist, Molecular Biophysics and Integrated Bioimaging
>>>>> Lawrence Berkeley National Laboratory
>>>>> 1 Cyclotron Road, M/S 33R0345
>>>>> Berkeley, CA 94720
>>>>> Tel: (510) 486-5709
>>>>> Fax: (510) 486-5909
>>>>> Web: https://phenix-online.org
>>>>>
>>>>> On Sat, Jun 11, 2016 at 2:04 AM, David Waterman <[email protected]> wrote:
>>>>>
>>>>>> Hi Billy,
>>>>>>
>>>>>> I'm replying on this old thread because I have finally got round to
>>>>>> trying a bootstrap build for DIALS again on Ubuntu, having waited for
>>>>>> updates to the dependencies and updating the OS to 16.04.
>>>>>>
>>>>>> The good news is, the build ran through fine. This is the first time
>>>>>> I've had a bootstrap build complete without error on Ubuntu, so thanks to
>>>>>> you and the others who have worked on improving the build in the last few
>>>>>> months!
>>>>>>
>>>>>> The bad news is I'm getting two failures in the DIALS tests:
>>>>>>
>>>>>> dials/test/command_line/tst_export_bitmaps.py
>>>>>> dials_regression/test.py
>>>>>>
>>>>>> Both are from PIL
>>>>>>
>>>>>> File
>>>>>> "/home/fcx32934/dials_test_build/base/lib/python2.7/site-packages/PIL/Image.py",
>>>>>> line 401, in _getencoder
>>>>>> raise IOError("encoder %s not available" % encoder_name)
>>>>>> IOError: encoder zip not available
>>>>>>
>>>>>> Indeed, from base_tmp/imaging_install_log it looks like PIL is not
>>>>>> configured properly
>>>>>>
>>>>>> --------------------------------------------------------------------
>>>>>> PIL 1.1.7 SETUP SUMMARY
>>>>>> --------------------------------------------------------------------
>>>>>> version 1.1.7
>>>>>> platform linux2 2.7.8 (default_cci, Jun 10 2016, 16:04:32)
>>>>>> [GCC 5.3.1 20160413]
>>>>>> --------------------------------------------------------------------
>>>>>> *** TKINTER support not available
>>>>>> *** JPEG support not available
>>>>>> *** ZLIB (PNG/ZIP) support not available
>>>>>> *** FREETYPE2 support not available
>>>>>> *** LITTLECMS support not available
>>>>>> --------------------------------------------------------------------
>>>>>>
>>>>>> Any ideas? I have zlib headers but perhaps PIL can't find them.
>>>>>>
>>>>>> On a related note, the free version of PIL has not been updated for
>>>>>> years. The replacement Pillow has started to diverge. I first noticed this
>>>>>> when Ubuntu 16.04 gave me Pillow 3.1.2 and my cctbx build with the system
>>>>>> python produced failures because it no longer supports certain deprecated
>>>>>> methods from PIL. I worked around that in r24587, but these things are a
>>>>>> losing battle. Is it time to switch cctbx over to Pillow instead of PIL?
>>>>>>
>>>>>> Cheers
>>>>>>
>>>>>> -- David
>>>>>>
>>>>>> On 7 January 2016 at 18:12, Billy Poon <[email protected]> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> Since wxPython was updated to 3.0.2, I have been thinking about
>>>>>>> updating the other GUI-related packages to more recent versions. I would
>>>>>>> probably update to the latest, stable version that does not involve major
>>>>>>> changes to the API so that backwards compatibility is preserved. Let me
>>>>>>> know if that would be helpful and I can prioritize the migration and
>>>>>>> testing.
>>>>>>>
>>>>>>> --
>>>>>>> Billy K. Poon
>>>>>>> Research Scientist, Molecular Biophysics and Integrated Bioimaging
>>>>>>> Lawrence Berkeley National Laboratory
>>>>>>> 1 Cyclotron Road, M/S 33R0345
>>>>>>> Berkeley, CA 94720
>>>>>>> Tel: (510) 486-5709
>>>>>>> Fax: (510) 486-5909
>>>>>>> Web: https://phenix-online.org
>>>>>>>
>>>>>>> On Thu, Jan 7, 2016 at 9:30 AM, Nicholas Sauter <[email protected]> wrote:
>>>>>>>
>>>>>>>> David,
>>>>>>>>
>>>>>>>> I notice that the Pango version, 1.16.1, was released in 2007, so
>>>>>>>> perhaps it is no surprise that the latest Ubuntu does not support it.
>>>>>>>> Maybe this calls for stepping forward the Pango version until you find one
>>>>>>>> that works. I see that the latest stable release is 1.39.
>>>>>>>>
>>>>>>>> This would be valuable information for us. Billy Poon in the Phenix
>>>>>>>> group is supporting the Phenix GUI, so it might be advisable for him to
>>>>>>>> update the Pango version in the base installer.
>>>>>>>>
>>>>>>>> Nick
>>>>>>>>
>>>>>>>> Nicholas K. Sauter, Ph. D.
>>>>>>>> Computer Staff Scientist, Molecular Biophysics and Integrated
>>>>>>>> Bioimaging Division
>>>>>>>> Lawrence Berkeley National Laboratory
>>>>>>>> 1 Cyclotron Rd., Bldg. 33R0345
>>>>>>>> Berkeley, CA 94720
>>>>>>>> (510) 486-5713
>>>>>>>>
>>>>>>>> On Thu, Jan 7, 2016 at 8:54 AM, David Waterman <[email protected]> wrote:
>>>>>>>>
>>>>>>>>> Hi again
>>>>>>>>>
>>>>>>>>> Another data point: I just tried this on a different Ubuntu
>>>>>>>>> machine, this time running 14.04. In this case pango installed just fine.
>>>>>>>>> In fact all other packages installed too and the machine is now compiling
>>>>>>>>> cctbx.
>>>>>>>>>
>>>>>>>>> I might have enough for comparison between the potentially working
>>>>>>>>> 14.04 and failed 15.04 builds to figure out what is wrong in the second
>>>>>>>>> case.
>>>>>>>>>
>>>>>>>>> Cheers
>>>>>>>>>
>>>>>>>>> -- David
>>>>>>>>>
>>>>>>>>> On 7 January 2016 at 09:56, David Waterman <[email protected]> wrote:
>>>>>>>>>
>>>>>>>>>> Hi folks
>>>>>>>>>>
>>>>>>>>>> I recently tried building cctbx+dials on Ubuntu 15.04 following
>>>>>>>>>> the instructions here:
>>>>>>>>>> http://dials.github.io/documentation/installation_developer.html
>>>>>>>>>>
>>>>>>>>>> This failed during installation of pango-1.16.1. Looking
>>>>>>>>>> at pango_install_log, I see the command that failed was as follows:
>>>>>>>>>>
>>>>>>>>>> gcc -DHAVE_CONFIG_H -I. -I. -I../..
>>>>>>>>>> -DSYSCONFDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/etc\"
>>>>>>>>>> -DLIBDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/lib\"
>>>>>>>>>> -DG_DISABLE_CAST_CHECKS -I../.. -DG_DISABLE_DEPRECATED
>>>>>>>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include
>>>>>>>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include/freetype2 -g -O2
>>>>>>>>>> -Wall -MT fribidi.lo -MD -MP -MF .deps/fribidi.Tpo -c fribidi.c -fPIC
>>>>>>>>>> -DPIC -o .libs/fribidi.o
>>>>>>>>>> In file included from fribidi.h:31:0,
>>>>>>>>>> from fribidi.c:28:
>>>>>>>>>> fribidi_config.h:1:18: fatal error: glib.h: No such file or
>>>>>>>>>> directory
>>>>>>>>>>
>>>>>>>>>> The file glib.h appears to be in base/include/glib-2.0/, however
>>>>>>>>>> this directory was not explicitly included in the command above, only its
>>>>>>>>>> parent. This suggests a configuration failure in pango to me. Taking a look
>>>>>>>>>> at base_tmp/pango-1.16.1/config.log, I see what look like the relevant
>>>>>>>>>> lines:
>>>>>>>>>>
>>>>>>>>>> configure:22227: checking for GLIB
>>>>>>>>>> configure:22235: $PKG_CONFIG --exists --print-errors
>>>>>>>>>> "$GLIB_MODULES"
>>>>>>>>>> configure:22238: $? = 0
>>>>>>>>>> configure:22253: $PKG_CONFIG --exists --print-errors
>>>>>>>>>> "$GLIB_MODULES"
>>>>>>>>>> configure:22256: $? = 0
>>>>>>>>>> configure:22304: result: yes
>>>>>>>>>>
>>>>>>>>>> but this doesn't tell me very much. Does anyone have any
>>>>>>>>>> suggestions as to how I might proceed?
>>>>>>>>>>
>>>>>>>>>> Many thanks
>>>>>>>>>>
>>>>>>>>>> -- David
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>>
>>
>
9 years, 7 months
Re: [cctbxbb] Boost broken on mac?
by richard.gildea@diamond.ac.uk
Hi Luc,
Markus and I were looking into the cctbx Makefile, and wondered if it would make sense to add the line
./bin/libtbx.configure . --clear-scons-memory
to the "make clean" command. I guess this would solve the issue of SCons thinking it was using a much older version of the compiler than it actually was?
$ cat Makefile
# DO NOT EDIT THIS FILE!
# This file will be overwritten by the next libtbx/configure.py,
# libtbx.configure, or libtbx.refresh.
default:
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`"
nostop:
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`" -k
bp:
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`" -k boost_python_tests=1
reconf:
./bin/libtbx.configure .
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`"
redo:
./bin/libtbx.configure . --clear-scons-memory
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`"
clean:
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`" -c
# example
selfx:
rm -rf selfx_tmp
mkdir selfx_tmp
cd selfx_tmp ; \
libtbx.start_binary_bundle example boost ; \
libtbx.bundle_as_selfx example build_id ; \
mv example_build_id.selfx .. ; \
cd .. ; \
ls -l example_build_id.selfx
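The suggested change, sketched by combining two recipes that already appear in the Makefile above (whether "clean" is the right home for the extra configure step is the open question):

clean:
./bin/libtbx.configure . --clear-scons-memory
./bin/libtbx.scons -j "`./bin/libtbx.show_number_of_processors`" -c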
Cheers,
Richard
Dr Richard Gildea
Data Analysis Scientist
Tel: +441235 77 8078
Diamond Light Source Ltd.
Diamond House
Harwell Science & Innovation Campus
Didcot
Oxfordshire
OX11 0DE
________________________________________
From: [email protected] [[email protected]] on behalf of Luc Bourhis [[email protected]]
Sent: 07 April 2016 11:05
To: cctbx mailing list
Subject: Re: [cctbxbb] Boost broken on mac?
SCons caches the result of TryRun in .sconf_temp and it gets stale sometimes. In your case, SCons did not track the fact that clang had changed I guess.
I did commit my fix to libtbx/SConscript. Note that it does not fix the error in xray/conversions.h, weirdly enough. So I left my hack in there.
So now, I've got some work cut out for myself for the next rainy day: coming up with a reduced test case that I can submit to the clang bugzilla. The problem in xray/conversions.h is clearly the most promising to start with. But still, a lot of work.
> On 7 Apr 2016, at 11:43, [email protected] wrote:
>
> Strangely for my development build, the build system appears convinced that it is using clang 6.1.0...
>
> Editing around line 784 of libtbx/SConscript, where clang_version is determined, to read:
>
> from subprocess import call
> print call(["which", "clang"])
> print call(["clang", "--version"])
> flag, output = conf.TryRun("\n".join(test_code), extension='.cpp')
> conf.Finish()
> assert flag
> output = output.replace(":,", ":None,")
> print output
>
> I get the following output:
>
>
> /usr/bin/clang
>
> 0
>
> Apple LLVM version 7.3.0 (clang-703.0.29)
>
> Target: x86_64-apple-darwin15.3.0
>
> Thread model: posix
>
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>
> 0
>
> {"llvm":1, "clang":1, "clang_major":6, "clang_minor":1, "clang_patchlevel":0, "GNUC":4, "GNUC_MINOR":2, "GNUC_PATCHLEVEL":1, "clang_version": "6.1.0 (clang-602.0.53)", "VERSION": "4.2.1 Compatible Apple LLVM 6.1.0 (clang-602.0.53)", }
>
> On MacOS, using clang 6.1.0
>
>
> If I wipe the build directory (or rather, rename it so it is not wiped completely) then it is now happy that it is using clang 7.3.0:
>
>
> /usr/bin/clang
>
> 0
>
> Apple LLVM version 7.3.0 (clang-703.0.29)
>
> Target: x86_64-apple-darwin15.3.0
>
> Thread model: posix
>
> InstalledDir: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin
>
> 0
>
> {"llvm":1, "clang":1, "clang_major":7, "clang_minor":3, "clang_patchlevel":0, "GNUC":4, "GNUC_MINOR":2, "GNUC_PATCHLEVEL":1, "clang_version": "7.3.0 (clang-703.0.29)", "VERSION": "4.2.1 Compatible Apple LLVM 7.3.0 (clang-703.0.29)", }
>
> On MacOS, using clang 7.3.0
>
>
> Any idea what is going on here? Why does it think it is somehow using an older version of clang than what is actually installed?!
>
>
> Cheers,
>
>
> Richard
>
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
> ________________________________
> From: [email protected] [[email protected]] on behalf of Luc Bourhis [[email protected]]
> Sent: 07 April 2016 10:23
> To: cctbx mailing list
> Subject: Re: [cctbxbb] Boost broken on mac?
>
> Could you confirm whether the following two tests fail for you after your latest change?
>
> libtbx.python "/Users/rjgildea/tmp/tst_bootstrap/tmp/modules/cctbx_project/cctbx/maptbx/boost_python/tst_maptbx.py" [FAIL]
> […]
>
> libtbx.python "/Users/rjgildea/tmp/tst_bootstrap/tmp/modules/cctbx_project/scitbx/lbfgs/tst_lbfgs_fem.py" [FAIL]
> […]
>
> I see those too. Tracking them is too much effort indeed. I followed the route already suggested by Marcin: break -ffast-math into its finer-grained options and find the one breaking our code. After a round-trip through Clang source code (I wish they had proper documentation of *all* their command-line options as exists for gcc), I came up with a fix which keeps all useful optimisations enabled while the 3 failing tests now pass.
>
> Gonna commit shortly.
>
> Best wishes,
>
> Luc
>
>
>
>
>
>
9 years, 9 months
Re: [phenixbb] calculate Fc for the whole unit cell from a Fc of a single symmetric unit.
by Dale Tronrud
On 07/11/11 15:35, Edward A. Berry wrote:
> It seems to me there are two things that could be meant by "expand to P1"
>
> One is when data has been reduced to the Reciprocal Space
> asymmetric unit (or otherwise one asymmetric unit of a
> symmetric dataset has been obtained) and you want to expand
> it to P1 by using symmetry to generate all the
> symmetry-equivalent reflections.
>
> The other is where a full P1 dataset has been calculated from just
> one asymmetric unit of the crystal (and hence does not exhibit the
> crystallographic symmetry) and you want to generate the transform
> of the entire crystal. (I think this is how all the space-group-
> specific FFT programs used to work before computers got so fast it
> was less bother to just do everything in P1 with the whole cell)
> Presumably this involves applying the real space symmetry
> operators to get n rotated (or phase-shifted for translational
> symmetry) P1 datasets and adding them vectorially.
If I recall, that is how XPLOR calculated structure factors but
that is not how a Ten Eyck space group specific FFT works. First
it expands the atoms into the largest subgroup composed only of
symmetry elements that can be optimized (e.g. three-folds are
out). Then the electron density of the asymmetric unit of that
space group is calculated. Finally the structure factors are
calculated by a combination of line group optimized 1-D FFTs and
clever rearrangements that move data from useless positions in
memory to places where the final 1-D FFTs can use them.
Both the calculation of the map (the slowest step) and the
FFT itself are sped up by a factor of the number of usable
symmetry operators. In addition the amount of RAM required was
greatly reduced. Back in the day the latter was often more
important because memory was so limited.
Dale Tronrud
>
> It would be important to decide which of these is required, and which
> each of the suggested methods provides
>
> eab
>
> Ralf Grosse-Kunstleve wrote:
>> We can expand reciprocal-space arrays, too, with the
>> cctbx.miller.array.expand_to_p1() method. You can use it from the
>> command line via
>>
>> phenix.reflection_file_converter --expand-to-p1 ...
>>
>> See also:
>> http://www.phenix-online.org/documentation/reflection_file_tools.htm
>>
>> Ralf
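
A self-contained sketch of the reciprocal-space expansion Ralf describes, with a made-up cell and two made-up reflections:

    from cctbx import crystal, miller
    from cctbx.array_family import flex

    xs = crystal.symmetry(unit_cell=(50, 60, 70, 90, 90, 90),
                          space_group_symbol="P212121")
    ms = miller.set(crystal_symmetry=xs,
                    indices=flex.miller_index([(1, 2, 3), (2, 0, 1)]),
                    anomalous_flag=False)
    ma = miller.array(miller_set=ms,
                      data=flex.complex_double([1+2j, 3-1j]))
    ma_p1 = ma.expand_to_p1()  # all symmetry-equivalent reflections, in P1
    print(ma_p1.size(), ma_p1.space_group_info())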
>>
>>
>> On Mon, Jul 11, 2011 at 10:56 AM, <zhangh1(a)umbc.edu
>> <mailto:[email protected]>> wrote:
>>
>> Sorry I haven't got a chance to check my email recently.
>>
>> Yes, I meant expansion to P1. The thing is that cctbx relies on the
>> atomic model, I think, but I only have model Fc available.
>>
>> Hailiang
>>
>> > I suspect what Hailiang means is expansion into P1.
>> >
>> > I am sure this can be accomplished through some either existing or
>> > easily coded cctbx tool. However, when I looked into a different task
>> > recently that included P1 expansion as a step, I learned that SFTOOLS
>> > can do this, though there was a bug there which caused trouble in
>> > certain space groups (may be fixed by now so check if there is an
>> > update).
>> >
>> > Hailiang - if P1 expansion is what you need, I could share my own
>> > code as well, let me know if that is something you want to try.
>> >
>> > Cheers,
>> >
>> > Ed.
>> >
>> > On Fri, 2011-07-08 at 14:44 -0700, Ralf Grosse-Kunstleve wrote:
>> >> Did you get responses already?
>> >> If not, could you explain your situation some more?
>> >> We have algorithms that do the symmetry summation in reciprocal space.
>> >> The input is a list of Fc in P1, based on the unit cell of the
>> >> crystal. Is that what you have?
>> >> Ralf
>> >>
>> >> On Wed, Jul 6, 2011 at 1:38 PM, <zhangh1(a)umbc.edu
>> <mailto:[email protected]>> wrote:
>> >> Hi,
>> >>
>> >> I am wondering, if I only have structure factors calculated from a
>> >> single asymmetric unit, is there any phenix utility which can
>> >> calculate the structure factor for the whole unit cell given the
>> >> symmetry operations or space group and crystal parameters? Note I
>> >> don't have an atomic model and only have Fc.
>> >>
>> >> Thanks!
>> >>
>> >> Hailiang
>> >>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
14 years, 6 months
Re: [phenixbb] Validation of structure with modified residue
by Xavier Brazzolotto
After some careful inspection: the geometry on the C atom of the ligand is
weird; I don’t get something close to tetrahedral (or similar).
Probably some angles are missing or I did something wrong with the ligand
cif file.
Not fixed yet.
> On 21 Apr 2022, at 13:39, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> wrote:
>
> I’ve re-processed the structure separating the SER residue from the ligand part. Now I have independent ligand.
> In the « Custom Geometry Restraints » I’ve defined the bond between OG and the carbon atom of the ligand, and I’ve defined the angles (using the values from the eLBOW run on the previously determined SER-bound ligand complex), saved the restraints and launched the refinement. At first look it was processed correctly, and the final cif file now has the whole protein in Chain A.
>
> I’ve used Prepare PDB Deposition with the FASTA sequence of the protein (I wonder if I have to provide the ligand CIF file and add more options) and ran phenix.get_pdb_validation: the report looks OK for the protein and some other basic ligands (sugars, buffer, glycerol, etc.) but the ligand of interest was not processed (EDS FAILED...). In the PDB file, all these extra ligands are also in Chain A, with water in chain B.
>
> If I try the validation through the website (PDBe@EBI) with both cif files, from the Refine or the Prepare_PDB_Deposition process, both seem to crash the server as it takes forever without finalizing...
>
> I wonder if I am missing something… Maybe a declaration of the removal of atoms: the HG bound to OG in SER and/or the removal of one H from the carbon of the ligand involved in the bond?
>
> Xavier
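
For readers hitting the same situation: the custom bond and angle edits Xavier describes are supplied to phenix.refine through a parameter (.eff) file. A hypothetical sketch follows; the residue numbers, atom names and target values are placeholders and would have to match the actual model and the eLBOW output.

    # Sketch of a phenix.refine custom-geometry parameter file;
    # all selections and target values below are made up.
    edits = """
    refinement.geometry_restraints.edits {
      bond {
        action = *add
        atom_selection_1 = chain A and resseq 198 and name OG
        atom_selection_2 = chain A and resseq 401 and name C1
        distance_ideal = 1.42
        sigma = 0.02
      }
      angle {
        action = *add
        atom_selection_1 = chain A and resseq 198 and name CB
        atom_selection_2 = chain A and resseq 198 and name OG
        atom_selection_3 = chain A and resseq 401 and name C1
        angle_ideal = 111.5
        sigma = 2.0
      }
    }
    """
    with open("link_edits.eff", "w") as f:
        f.write(edits)
    # then, for example:
    #   phenix.refine model.pdb data.mtz ligand.cif link_edits.eff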
>
>> On 21 Apr 2022, at 08:06, Xavier Brazzolotto <xbrazzolotto(a)gmail.com <mailto:[email protected]>> wrote:
>>
>> Thank you for your feedback.
>>
>> @Paul, I’ve run the « Prepare model for deposition » with the option modified residue (SLG). Not sure it will change if I change the name, as it is already in the PDB database, but I will give it another try.
>>
>> I think that I will have to describe only the ligand and add some parameters restricting the distance and angles between the OG of SER and the ligand; I think this is the right way.
>> @ Nigel, is that what you mean with « details » ? If you have any other « tips/tricks » they are welcome.
>>
>> Best
>> Xavier
>>
>>> On 21 Apr 2022, at 02:47, Nigel Moriarty <nwmoriarty(a)lbl.gov <mailto:[email protected]>> wrote:
>>>
>>> Xavier
>>>
>>> Paul's point is very valid because the "Prepare for Deposition" step is what generates the sequence (which is the crucial point here) for deposition. However, because you have "created" a new amino acid, there will still be issues as highlighted by Pavel. It is a corner case.
>>>
>>> One small additional point is that SLG is already taken in the PDB Ligand list. There are tools in Phenix to find an unused code.
>>>
>>> Can you re-engineer it with SER+ligand? This will solve the problem using the current Phenix version. I can help with the details if needed.
>>>
>>> Cheers
>>>
>>> Nigel
>>>
>>> ---
>>> Nigel W. Moriarty
>>> Building 33R0349, Molecular Biophysics and Integrated Bioimaging
>>> Lawrence Berkeley National Laboratory
>>> Berkeley, CA 94720-8235
>>> Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov <mailto:[email protected]>
>>> Fax : 510-486-5909 Web : CCI.LBL.gov <http://cci.lbl.gov/>
>>> ORCID : orcid.org/0000-0001-8857-9464 <https://orcid.org/0000-0001-8857-9464>
>>>
>>>
>>> On Wed, Apr 20, 2022 at 5:02 PM Paul Adams <pdadams(a)lbl.gov <mailto:[email protected]>> wrote:
>>>
>>> Please also remember that you need to run “Prepare model for PDB deposition” (in the GUI under "PDB Deposition") on the mmCIF file you get from phenix.refine. This provides important information that is required for the deposition at the PDB.
>>>
>>>> On Apr 20, 2022, at 1:58 PM, Xavier Brazzolotto <xbrazzolotto(a)gmail.com <mailto:[email protected]>> wrote:
>>>>
>>>> Dear Phenix users,
>>>>
>>>> I don’t know if my problem is related to Phenix but for information I’m running Phenix 1.20.1-4487 under MacOS 12.3.1.
>>>>
>>>> I’ve finalized a structure where a ligand covalently modified the protein.
>>>>
>>>> I’ve generated the modified residue (named SLG for serine modified by ligand). For this I’ve generated the molecule as SMILES and used eLBOW to generate the restraints. Then I’ve modified the cif file, defining the molecule as an L-peptide and replacing the atom names of the serine part (CA, CB, OG, C, O, N, and OXT).
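
The eLBOW step just described can be run from the command line along these lines (the SMILES string is a placeholder and the flags are from memory, so check phenix.elbow --help):

    phenix.elbow --smiles="CC(=O)OC" --id=SLG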
>>>> In coot (from CCP4: 0.9.6 EL), I’ve used the modified cif file and it allowed merging of the modified residue into the polypeptide chain as expected, and further refinements went without any issue in Phenix (providing the modified cif file of course). Everything seems well interpreted. So far so good.
>>>>
>>>> However, now I would like to validate the structure and both Phenix validation tool and the PDB web server do not accept the final cif file.
>>>>
>>>> Checking this file I’ve noticed that the protein seems split into 3 pieces (chain A, the first residue up to the one before the modified residue; chain B, the modified residue by itself described as HETATM; and chain C, the rest of the polypeptide up to the C-ter).
>>>> The PDB file presents only one chain A for the whole protein with the modified residue...
>>>>
>>>> I don’t know if this is an issue with Phenix generating this final cif file in this specific case, or if I need to modify this final file by hand?
>>>>
>>>> Any help is welcome.
>>>> Thanks
>>>>
>>>> Xavier
>>>>
>>>>
>>>>
>>> --
>>> Paul Adams (he/him/his)
>>> Associate Laboratory Director for Biosciences, LBL (https://biosciences.lbl.gov/leadership/ <https://biosciences.lbl.gov/leadership/>)
>>> Principal Investigator, Computational Crystallography Initiative, LBL (https://cci.lbl.gov <https://cci.lbl.gov/>)
>>> Vice President for Technology, the Joint BioEnergy Institute (http://www.jbei.org <http://www.jbei.org/>)
>>> Principal Investigator, ALS-ENABLE, Advanced Light Source (https://als-enable.lbl.gov <https://als-enable.lbl.gov/>)
>>> Division Deputy for Biosciences, Advanced Light Source (https://als.lbl.gov <https://als.lbl.gov/>)
>>> Laboratory Research Manager, ENIGMA Science Focus Area (http://enigma.lbl.gov <http://enigma.lbl.gov/>)
>>> Adjunct Professor, Department of Bioengineering, UC Berkeley (http://bioeng.berkeley.edu <http://bioeng.berkeley.edu/>)
>>> Member of the Graduate Group in Comparative Biochemistry, UC Berkeley (http://compbiochem.berkeley.edu <http://compbiochem.berkeley.edu/>)
>>>
>>> Building 33, Room 250
>>> Building 978, Room 4126
>>> Building 977, Room 268
>>> Tel: 1-510-486-4225
>>> http://cci.lbl.gov/paul <http://cci.lbl.gov/paul>
>>> ORCID: 0000-0001-9333-8219
>>>
>>> Lawrence Berkeley Laboratory
>>> 1 Cyclotron Road
>>> BLDG 33R0345
>>> Berkeley, CA 94720, USA.
>>>
>>> Executive Assistant: Michael Espinosa [ MEEspinosa(a)lbl.gov <mailto:[email protected]> ] [ 1-510-333-6788 ]
>>> Phenix Consortium: Ashley Dawn [ AshleyDawn(a)lbl.gov <mailto:[email protected]> ][ 1-510-486-5455 ]
>>>
>
3 years, 9 months
Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
by Keitaro Yamashita
Dear Richard and everyone,
Thanks for your reply. What kind of input do you give to
iotbx.merging_statistics in xia2? For example, when XDS file is given,
use_internal_variance=False is not passed to merge_equivalents()
function. Please look at the lines of
filter_intensities_by_sigma.__init__() in iotbx/merging_statistics.py.
When sigma_filtering == "xds" or sigma_filtering == "scalepack",
array_merged is recalculated using merge_equivalents() with default
arguments.
If nobody disagrees, I would like to commit the fix so that
use_internal_variance variable is passed to all merge_equivalents()
function calls.
I am afraid that the behavior in phenix-1.11 would be confusing.
In phenix.table_one (mmtbx/command_line/table_one.py),
use_internal_variance=False is the default. This will be OK with the fix
I suggested above.
Can it also be made the default in phenix.merging_statistics, so as not to
change the program behavior between phenix versions?
Best regards,
Keitaro
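
In code terms the proposed fix is just to thread the flag through every call, along these lines (a sketch; the surrounding class and the source of use_internal_variance are assumed):

    def merge(miller_array, use_internal_variance):
        # pass the caller's choice through instead of relying on
        # merge_equivalents()'s default
        return miller_array.merge_equivalents(
            use_internal_variance=use_internal_variance).array()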
2016-11-01 18:21 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> Dear Keitaro,
>
> iotbx.merging_statistics does have the option to change the parameter use_internal_variance. In xia2 we use the defaults use_internal_variance=False, eliminate_sys_absent=False, n_bins=20, when calculating merging statistics, which give comparable results to those calculated by Aimless:
>
> $ iotbx.merging_statistics
> Usage:
> phenix.merging_statistics [data_file] [options...]
>
> Calculate merging statistics for non-unique data, including R-merge, R-meas,
> R-pim, and redundancy. Any format supported by Phenix is allowed, including
> MTZ, unmerged Scalepack, or XDS/XSCALE (and possibly others). Data should
> already be on a common scale, but with individual observations unmerged.
> Diederichs K & Karplus PA (1997) Nature Structural Biology 4:269-275
> (with erratum in: Nat Struct Biol 1997 Jul;4(7):592)
> Weiss MS (2001) J Appl Cryst 34:130-135.
> Karplus PA & Diederichs K (2012) Science 336:1030-3.
>
>
> Full parameters:
>
> file_name = None
> labels = None
> space_group = None
> unit_cell = None
> symmetry_file = None
> high_resolution = None
> low_resolution = None
> n_bins = 10
> extend_d_max_min = False
> anomalous = False
> sigma_filtering = *auto xds scala scalepack
> .help = "Determines how data are filtered by SigmaI and I/SigmaI. XDS"
> "discards reflections whose intensity after merging is less than"
> "-3*sigma, Scalepack uses the same cutoff before merging, and"
> "SCALA does not do any filtering. Reflections with negative SigmaI"
> "will always be discarded."
> use_internal_variance = True
> eliminate_sys_absent = True
> debug = False
> loggraph = False
> estimate_cutoffs = False
> job_title = None
> .help = "Job title in PHENIX GUI, not used on command line"
>
>
> Below is my email to Pavel and Billy when we discussed this issue by email a while back:
>
> The difference between use_internal_variance=True/False is explained in Luc's document here:
>
> libtbx.pdflatex $(libtbx.find_in_repositories cctbx/miller)/equivalent_reflection_merging.tex
>
> Essentially use_internal_variance=False uses only the unmerged sigmas to compute the merged sigmas, whereas use_internal_variance=True uses instead the spread of the unmerged intensities to compute the merged sigmas. Furthermore, use_internal_variance=True uses the largest of the variance coming from the spread of the intensities and that computed from the unmerged sigmas. As a result, use_internal_variance=True can only ever give lower I/sigI than use_internal_variance=False. The relevant code in the cctbx is here:
>
> https://sourceforge.net/p/cctbx/code/HEAD/tree/trunk/cctbx/miller/merge_equ…
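
The distinction can be illustrated numerically. This is a sketch of the idea only, not the actual cctbx code (see Luc's document above for the exact formulas):

    import math

    def merged_sigma(intensities, sigmas, use_internal_variance):
        # inverse-variance weighted merge of one group of
        # symmetry-equivalent observations
        w = [1.0 / s**2 for s in sigmas]
        sw = sum(w)
        i_mean = sum(wi * ii for wi, ii in zip(w, intensities)) / sw
        var_ext = 1.0 / sw  # from the unmerged sigmas only
        if use_internal_variance and len(intensities) > 1:
            # weighted spread of the observations about the mean; the
            # larger of the two estimates is kept
            var_int = sum(wi * (ii - i_mean)**2
                          for wi, ii in zip(w, intensities)) / (sw * (len(w) - 1))
            return i_mean, math.sqrt(max(var_ext, var_int))
        return i_mean, math.sqrt(var_ext)

    # identical sigmas but scattered intensities: the internal estimate
    # dominates, so I/sigI comes out lower with use_internal_variance=True
    print(merged_sigma([100.0, 120.0, 80.0], [5.0, 5.0, 5.0], True))
    print(merged_sigma([100.0, 120.0, 80.0], [5.0, 5.0, 5.0], False))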
>
> Aimless has a similar option for the SDCORRECTION keyword, if you set the option SAMPLESD, which I think is equivalent to use_internal_variance=True. The default behaviour of Aimless is equivalent to use_internal_variance=False:
>
> http://www.mrc-lmb.cam.ac.uk/harry/pre/aimless.html#sdcorrection
>
> "SAMPLESD is intended for very high multiplicity data such as XFEL serial data. The final SDs are estimated from the weighted population variance, assuming that the input sigma(I)^2 values are proportional to the true errors. This probably gives a more realistic estimate of the error in <I>. In this case refinement of the corrections is switched off unless explicitly requested."
>
> I think that the "external" variance is probably better if the sigmas from the scaling program are reliable, or for low multiplicity data. For high multiplicity data or if the sigmas from the scaling program are not reliable, then "internal" variance is probably better.
>
> Cheers,
>
> Richard
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
>
> ________________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Keitaro Yamashita [k.yamashita(a)spring8.or.jp]
> Sent: 01 November 2016 07:23
> To: cctbx mailing list
> Subject: [cctbxbb] use_internal_variance in iotbx.merging_statistics
>
> Dear Phenix/CCTBX developers,
>
> iotbx/merging_statistics.py is used by phenix.merging_statistics,
> phenix.table_one, and so on. In the upgrade of phenix from 1.10.1 to 1.11,
> the merging-statistics-related code was changed significantly.
>
> Previously, miller.array.merge_equivalents() was always called with
> argument use_internal_variance=False, which is consistent with XDS,
> Aimless and so on. Currently, use_internal_variance=True is default,
> and cannot be changed by users (see below).
>
> These changes were made by @afonine and @rjgildea in rev. 22973 (Sep
> 26, 2015) and 23961 (Mar 8, 2016). Could anyone explain why these
> changes were introduced?
>
> https://sourceforge.net/p/cctbx/code/22973
> https://sourceforge.net/p/cctbx/code/23961
>
>
> My points are:
>
> - We actually cannot control the use_internal_variance= parameter because
> it is not passed to merge_equivalents() in class
> filter_intensities_by_sigma.
>
> - In previous versions, if I gave XDS output to
> phenix.merging_statistics, <I/sigma> values calculated in the same way
> (as XDS does) were shown; but not in the current version.
>
> - For users of (for example) phenix.table_one who expect this behavior,
> it can introduce an inconsistency: the statistics would not be consistent
> with the data used in refinement.
>
>
> cf. the related discussion in cctbxbb:
> http://phenix-online.org/pipermail/cctbxbb/2012-October/000611.html
>
>
> Best regards,
> Keitaro
>
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
9 years, 3 months
Re: [cctbxbb] use_internal_variance in iotbx.merging_statistics
by Keitaro Yamashita
Dear Richard,
Thanks a lot. I hope some Phenix developer will make a comment.
Best regards,
Keitaro
2016-11-01 20:19 GMT+09:00 <richard.gildea(a)diamond.ac.uk>:
> Dear Keitaro,
>
> I've made the change you suggested in merging_statistics.py - it looks like an oversight, which didn't affect xia2 since we are always calculating merging statistics given a scaled but unmerged mtz file, never an XDS or scalepack-format file.
>
> As to what defaults Phenix uses, that is better left to one of the Phenix developers to comment on.
>
> Cheers,
>
> Richard
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
9 years, 3 months