Re: [cctbxbb] New error appeared recently, do not know who it belongs to...
by Nicholas Sauter
I'll look into the simage test later today.
Nick
Nicholas K. Sauter, Ph. D.
Senior Scientist, Molecular Biophysics & Integrated Bioimaging Division
Lawrence Berkeley National Laboratory
1 Cyclotron Rd., Bldg. 33R0345
Berkeley, CA 94720
(510) 486-5713
On Sun, Jul 16, 2017 at 11:47 PM, <Graeme.Winter(a)diamond.ac.uk> wrote:
> Good detective work Markus, probably also explains
>
> https://github.com/xia2/xia2/issues/158
>
> (I'll need you to run through this process with me real slow like, but
> good result)
>
> Cheers Graeme
>
> ________________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org]
> on behalf of markus.gerstel(a)diamond.ac.uk [markus.gerstel(a)diamond.ac.uk]
> Sent: 17 July 2017 07:41
> To: cctbxbb(a)phenix-online.org
> Subject: Re: [cctbxbb] New error appeared recently, do not know who it
> belongs to...
>
> It worked on my checked out revision (4eeed...), so I:
>
> $ git checkout master; git pull --rebase
> $ git bisect start
> $ git bisect bad
> $ git bisect good 4eeed
> Bisecting: 7 revisions left to test after this (roughly 3 steps)
> [d80a6e66937818a4da8d65e5571c402868fd5c34] catch case with zero target
> length
> (make reconf, run test, test fails)
>
> $ git bisect bad
> Bisecting: 3 revisions left to test after this (roughly 2 steps)
> [166904e0b9678f9c092464bddb7d3d2fc12f6085] Longstanding formula for
> areaupper implements a very complicated heuristic that is not consistent
> with the simple explanation given on the web page. Change to the simple
> semantics: area limit = minimum_spot_area * spot_area_maximum_factor.
> (make reconf, run test, test fails)
>
> $ git bisect bad
> Bisecting: 1 revision left to test after this (roughly 1 step)
> [050b520791887430d5ac3116040ea4bc3877039e] remove faulty .map_data(); add
> .crystal_symmetry()
> (make reconf, run test, test succeeds)
>
> $ git bisect good
> Bisecting: 0 revisions left to test after this (roughly 0 steps)
> [20ece6eb8f0ef6dc3298d90cf6463159159874bc] Store all H atoms in
> connectivity list (also those without bond proxies).
> (make reconf, run test, test succeeds)
>
> $ git bisect good
> 166904e0b9678f9c092464bddb7d3d2fc12f6085 is the first bad commit
> commit 166904e0b9678f9c092464bddb7d3d2fc12f6085
> Author: Nicholas Sauter <nksauter(a)lbl.gov>
> Date: Wed Jul 12 20:39:15 2017 -0700
>
> Longstanding formula for areaupper implements a very complicated
> heuristic that is not
> consistent with the simple explanation given on the web page. Change
> to the simple semantics:
> area limit = minimum_spot_area * spot_area_maximum_factor.
>
> :040000 040000 bf112507a50e171d9d29a4d181414c77d8ab564c
> ed60de2719af6f6f75f4b07b592db12cac0207ae M spotfinder
>
>
> Thus https://github.com/cctbx/cctbx_project/commit/166904e to blame
>
> -Markus
>
>
> -----Original Message-----
> From: cctbxbb-bounces(a)phenix-online.org [mailto:cctbxbb-bounces@
> phenix-online.org] On Behalf Of Graeme.Winter(a)diamond.ac.uk
> Sent: 17 July 2017 07:13
> To: cctbxbb(a)phenix-online.org
> Subject: Re: [cctbxbb] New error appeared recently, do not know who it
> belongs to...
>
> Nick,
>
> I share your confusion. If I could have tracked it down to a revision I
> would have done…
>
> Cheers Graeme
>
>
> On 17 Jul 2017, at 07:07, Nicholas Sauter <nksauter(a)lbl.gov> wrote:
>
> that code hasn't changed in 5 years, so how can an error suddenly appear?
> Nick
>
> Nicholas K. Sauter, Ph. D.
> Senior Scientist, Molecular Biophysics & Integrated Bioimaging Division
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Rd., Bldg. 33R0345
> Berkeley, CA 94720
> (510) 486-5713
>
> On Sun, Jul 16, 2017 at 10:57 PM, <Graeme.Winter(a)diamond.ac.uk> wrote:
> Exhibit A:
>
> libtbx.python "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/
> simage/tst.py"
> rstbx.simage.explore_completeness d_min=10 rstbx.simage.explore_completeness
> d_min=10 intensity_symmetry=P4 use_symmetry=True multiprocessing=True
> rstbx.simage.solver d_min=5 rstbx.simage.solver d_min=5
> lattice_symmetry=R32:R intensity_symmetry=R3:R rstbx.simage.solver d_min=5
> lattice_symmetry=R32:R intensity_symmetry=P1 rstbx.simage.solver d_min=5
> lattice_symmetry=P422 intensity_symmetry=P4 index_and_integrate=True
> multiprocessing=True Traceback (most recent call last):
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/tst.py",
> line 273, in <module>
> run(args=sys.argv[1:])
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/tst.py",
> line 269, in run
> exercise_solver()
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/tst.py",
> line 255, in exercise_solver
> "index_and_integrate=True", "multiprocessing=True"])
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/tst.py",
> line 213, in run
> command=cmd, stdout_splitlines=False).raise_if_errors().stdout_buffer
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/libtbx/easy_run.py",
> line 39, in raise_if_errors
> raise Error(msg)
> RuntimeError: child process stderr output:
> command: 'rstbx.simage.solver d_min=5 lattice_symmetry=P422
> intensity_symmetry=P4 index_and_integrate=True multiprocessing=True'
> Traceback (most recent call last):
> File "/Users/graeme/svn/cctbx/build/../modules/cctbx_project/rstbx/command_line/simage.solver.py",
> line 5, in <module>
> run(args=sys.argv[1:])
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/solver.py",
> line 1161, in run
> return run_fresh(args)
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/solver.py",
> line 1145, in run_fresh
> process(work_params, i_calc)
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/solver.py",
> line 1088, in process
> process_core(work_params, i_calc.p1_anom, reindexing_assistant,
> image_mdls)
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/solver.py",
> line 985, in process_core
> work_params, reindexing_assistant, image_mdls, usables)
> File "/Users/graeme/svn/cctbx/modules/cctbx_project/rstbx/simage/solver.py",
> line 883, in build_image_cluster
> raise RuntimeError("Insufficient connectivity between images.")
> RuntimeError: Insufficient connectivity between images.
> usr+sys time: 0.78 seconds, ticks: 777467, micro-seconds/tick: 1.003
> wall clock time: 6.99 seconds
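The `raise_if_errors()` frame in the traceback shows the pattern libtbx uses here: run a child process and treat any stderr output as fatal. A minimal stand-alone analogue (a sketch for illustration, not the actual `libtbx.easy_run` API):

```python
import subprocess

def run_checked(command):
    # Run a child command; raise if it wrote anything to stderr,
    # mirroring the raise_if_errors() behaviour seen in the traceback.
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    if proc.stderr:
        raise RuntimeError(
            "child process stderr output:\n"
            "command: %r\n%s" % (command, proc.stderr)
        )
    return proc.stdout

print(run_checked("echo hello"))  # prints "hello"
```

Note this is stricter than a plain return-code check: a command that exits 0 but warns on stderr still raises, which is why the test above fails loudly on the "Insufficient connectivity" message.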
>
> OS X & RHEL6
>
> Any volunteers?
>
> Cheers Graeme
>
> --
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If
> you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> Any opinions expressed within this e-mail are those of the individual and
> not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England
> and Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
Re: [phenixbb] AUTOSOL for MAD data with error message-'Need a symfile for solve'
by Graeme Winter
Hi Mohan,
P22121 is not the standard setting for this space group - in
International Tables the convention is to have the space group's unique
axis along c, i.e. P21212 in this case. If you run this reflection file
through reindex as follows:
reindex hklin in.mtz hklout out.mtz << eof
reindex k,l,h
eof
Then it should work just fine. An alternative would be to compose a
solve "symm" file and put this in the right place. This would look
something like:
4 Equiv positions, P22121 SG # 3018
X,Y,Z
X,-Y,-Z
-X,1/2+Y,1/2-Z
-X,1/2-Y,1/2+Z
(copied from spacegroup #3018 in the CCP4 symop.lib) which would need to go in
phenix-1.6-289/solve_resolve/ext_ref_files
On balance, reindexing is probably easiest!
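The `reindex k,l,h` operation above is just a cyclic permutation of the Miller indices (and, correspondingly, of the unit cell axes) that moves the unique axis into the conventional position. A plain-Python sketch of what it does to a reflection and a cell (illustrative only; real reindexing tools also carry the data columns along):

```python
def reindex_klh(hkl):
    # h' = k, k' = l, l' = h, as applied by "reindex k,l,h"
    h, k, l = hkl
    return (k, l, h)

def reindex_cell_klh(cell):
    # The cell axes permute the same way: a' = b, b' = c, c' = a,
    # and the angles follow their axis pairs.
    a, b, c, alpha, beta, gamma = cell
    return (b, c, a, beta, gamma, alpha)

print(reindex_klh((1, 2, 3)))  # (2, 3, 1)
print(reindex_cell_klh((34.79, 108.92, 134.88, 90, 90, 90)))
# (108.92, 134.88, 34.79, 90, 90, 90)
```

Applied to the cell in the log below, this turns (34.79, 108.92, 134.88) in P22121 into the standard-setting equivalent for P21212.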
Best wishes,
Graeme
On 2 February 2010 13:38, <m.b.rajasekaran(a)reading.ac.uk> wrote:
>
>
> Dear all,
> I have a query regarding the Autosol option in the
> PHENIX-1.6-289 version. We are trying to process MAD data for a zinc-bound
> protein crystal with AutoSol. We uploaded the scaled files, sequence and all
> other details and started running AutoSol, but the program stopped with the
> error message pasted below, mentioning a missing symfile. We
> were not able to locate any parameters file in the installation directory.
> It would be helpful if we could get some suggestions on this.
>
> Thanks in advance,
> Mohan
> ***********************************************************
>
> PHENIX AutoSol Tue Feb 2 12:37:41 2010
>
> ************************************************************
>
> Working directory: /home/sar06mbr/30JanDiamondI02/107/107_php22121
> PHENIX VERSION: 1.6 of 29-01-2010 PHENIX RELEASE_TAG : 289 PHENIX MTYPE :
> intel-linux-2.6 PHENIX MVERSION : suse AutoSol_start AutoSol Run 2 Tue Feb 2
> 12:37:41 2010 INPUT FILE LIST:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_4_P22121_scala2.mtz']
>
> Copied /home/sar06mbr/30JanDiamondI02/107/107_php22121/result.seq to
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/AutoSol_run_2_/sequence_autosol.dat
>
> Setting chain_type to PROTEIN Setting defaults for data_quality moderate
> Setting thorough_denmod to Yes Settin fix_xyz_after_denmod to False Setting
> max_ha_iterations to 2 Setting fixscattfactors to No Settin defaults for
> thoroughness quick Setting best_of_n_hyss_always to 1 Setting
> max_extra_unique_solutions to 0 Settin ha_iteration to No Setting
> max_choices to 1 Setting number_of_models to 1 Setting number_of_builds to 1
> Setting test_mask_type to No Setting n_cycle_build to 0 Setting
> thorough_loop_fit to Setting create_scoring_table to No Not truncating ha
> sites at start of resolve as this is not PHASER SAD phasing
>
> Trying to guess refinement file for later use from
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_4_P22121_scala2.mtz']
>
> Choosing guess of refinement file for later use from all files:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz',
> '/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_4_P22121_scala2.mtz']
>
> FILE TYPE ccp4_mtz
>
> COLUMN LABELS: ['H', 'K', 'L', 'FreeR_flag', 'F_107_2re', 'SIGF_107_2re',
> 'F_107_2re(+)', 'SIGF_107_2re(+)', 'F_107_2re(-)', 'SIGF_107_2re(-)',
> 'DANO_107_2re', 'SIGDANO_107_2re', 'IMEAN_107_2re', 'SIGIMEAN_107_2re',
> 'I_107_2re(+)', 'SIGI_107_2re(+)', 'I_107_2re(-)', 'SIGI_107_2re(-)']
>
> GUESS FILE TYPE MERGE TYPE mtz premerged TARGET LABELS ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1'] Unit cell: (34.79, 108.92, 134.88, 90, 90, 90) Space
> group: P 2 21 21 (No. 18) CONTENTS:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz',
> 'mtz', 'premerged', 'P 2 21 21', [34.790000915527344, 108.92009735107422,
> 134.8800048828125, 90.0, 90.0, 90.0], 2.4890566908055836, ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1']]
>
> File
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz
> RES: 2.5
>
> FILE TYPE ccp4_mtz COLUMN LABELS: ['H', 'K', 'L', 'FreeR_flag', 'F_107_3re',
> 'SIGF_107_3re', 'F_107_3re(+)', 'SIGF_107_3re(+)', 'F_107_3re(-)',
> 'SIGF_107_3re(-)', 'DANO_107_3re', 'SIGDANO_107_3re', 'IMEAN_107_3re',
> 'SIGIMEAN_107_3re', 'I_107_3re(+)', 'SIGI_107_3re(+)', 'I_107_3re(-)',
> 'SIGI_107_3re(-)']
>
> GUESS FILE TYPE MERGE TYPE mtz premerged TARGET LABELS ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1'] Unit cell: (34.85, 109.18, 135.27, 90, 90, 90) Space
> group: P 2 21 21 (No. 18) CONTENTS:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz',
> 'mtz', 'premerged', 'P 2 21 21', [34.849998474121094, 109.18000030517578,
> 135.26980590820312, 90.0, 90.0, 90.0], 2.4889952912293629, ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1']]
>
> File
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz
> RES: 2.5
>
> FILE TYPE ccp4_mtz COLUMN LABELS: ['H', 'K', 'L', 'FreeR_flag', 'F_107_4re',
> 'SIGF_107_4re', 'F_107_4re(+)', 'SIGF_107_4re(+)', 'F_107_4re(-)',
> 'SIGF_107_4re(-)', 'DANO_107_4re', 'SIGDANO_107_4re', 'IMEAN_107_4re',
> 'SIGIMEAN_107_4re', 'I_107_4re(+)', 'SIGI_107_4re(+)', 'I_107_4re(-)',
> 'SIGI_107_4re(-)']
>
> GUESS FIL TARGET LABELS ['F1', 'SIGF1', 'DANO1', 'SIGDANO1'] Unit cell:
> (34.87, 109.2, 135.33, 90, 90, 90) Space group: P 2 21 21 (No. 18) CONTENTS:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_4_P22121_scala2.mtz',
> 'mtz', 'premerged', 'P 2 21 21', [34.869998931884766, 109.19999694824219,
> 135.32989501953125, 90.0, 90.0, 90.0], 2.4890134311310943, ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1']]
>
> File
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_4_P22121_scala2.mtz
> RES: 2.5
>
> Guess of datafile for refinement:
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz
>
> Using
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz
> for refinement
>
> Specify
> input_refinement_file=/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_3_P22121_scala2.mtz
> to change this
>
> HKLIN ENTRY:
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz
>
> FILE TYPE ccp4_mtz
>
> COLUMN LABELS: ['H', 'K', 'L', 'FreeR_flag', 'F_107_2re', 'SIGF_107_2re',
> 'F_107_2re(+)', 'SIGF_107_2re(+)', 'F_107_2re(-)', 'SIGF_107_2re(-)',
> 'DANO_107_2re', 'SIGDANO_107_2re', 'IMEAN_107_2re', 'SIGIMEAN_107_2re',
> 'I_107_2re(+)', 'SIGI_107_2re(+)', 'I_107_2re(-)', 'SIGI_107_2re(-)']
>
> GUESS FILE TYPE MERGE TYPE mtz premerged TARGET LABELS ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1'] Unit cell: (34.79, 108.92, 134.88, 90, 90, 90) Space
> group: P 2 21 21 (No. 18) CONTENTS:
> ['/home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz',
> 'mtz', 'premerged', 'P 2 21 21', [34.790000915527344, 108.92009735107422,
> 134.8800048828125, 90.0, 90.0, 90.0], 2.4890566908055836, ['F1', 'SIGF1',
> 'DANO1', 'SIGDANO1']]
>
> Inverse hand of space group: P 2 21 21 Creating sg entry from
> /home/sar06mbr/30JanDiamondI02/107/107_php22121/M75_Zn_107_2_P22121_scala2.mtz
> Unit cell: (34.79, 108.92, 134.88, 90, 90, 90) Space group: P 2 21 21 (No.
> 18) Space group name is: P 2 21 21 symbol is: p22121
>
> ****************************************
>
> AutoSol Input failed
>
> Need a symfile for solve
>
> ****************************************
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
Re: [cctbxbb] Git status update
by Aaron Brewster
Hi folks, we are good to go on our side. The final dates are Nov 8th for
this first stage of the transition (updating bootstrap.py, buildbot and
Jenkins to all point at git) and Nov 22nd (locking the SVN tree on
sourceforge). Markus, now is a great time to send your helpful tip emails
that you did for DIALS :)
Thanks all!
-Aaron
On Wed, Nov 2, 2016 at 1:18 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
> Hi Aaron,
>
> Yes. Please give a shout when ready.
>
> -Markus
>
> (Side note: If it’s the same to you I would prefer a hangout call because
> I can’t remember my Skype account details. Otherwise I’ll dig them up or
> get a new account.)
>
> Dr Markus Gerstel MBCS
> Postdoctoral Research Associate
> Tel: +44 1235 778698
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
>
> *From:* cctbxbb-bounces(a)phenix-online.org [mailto:cctbxbb-bounces@
> phenix-online.org] *On Behalf Of *Aaron Brewster
> *Sent:* 01 November 2016 17:51
>
> *To:* cctbx mailing list
> *Subject:* Re: [cctbxbb] Git status update
>
>
>
> Hi Markus,
>
> Indeed, dials.lbl.gov is built off of the same sources. I was more
> concerned about cctbx.sourceforge.net. We can look into that on our end.
>
> As for the repositories, there are licensing issues with the way you have
> things set up and we would like to examine if the code duplication that is
> occurring can be avoided. We (Billy and I) will make up a list of
> dependencies and their licenses and how we think they should be hosted.
> Should we do a brief skype call over the next couple days?
>
> Thanks,
>
> -Aaron
>
> On Tue, Nov 1, 2016 at 2:04 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
>
> Hi Aaron,
>
> If you could send me your gmail addresses and I’ll share the doc.
>
> Regarding your questions:
>
> 1. Syncing the repositories is done using subgit and I run this
> currently 4 times a day.
> 2. Yes, the repositories are copies of what we are distributing.
> No, I do not think we should remove them, rather we should if at all
> possible extend them by adding their development history and actually point
> to them. Currently no changes can be made to any of those packages, and -
> from a distribution point of view, worse - nobody knows if and when those
> packages do change. What I therefore do for dials releases is a) download
> all the packages, b) unpack them into the respective repositories and c)
> see if any changes were made before I d) tag the repository for release. As
> you can see this has a massive time impact and is part of the reason why
> releases are currently more painful than they have to be. Therefore if the
> packages were recently updated this is not a problem since nobody currently
> refers to them outside the stable releases.
> 3. Isn’t dials.lbl.gov already running off the
> https://github.com/dials/dials.github.io repository? I am sure we can
> have something similar for the cctbx websites. We already generate
> http://cctbx.github.io/ but I don’t atm know where cctbx.sourceforge.net
> comes from.
>
> -Markus
>
> Dr Markus Gerstel MBCS
> Postdoctoral Research Associate
> Tel: +44 1235 778698
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
>
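The per-release check Markus describes in point 2 (download the archives, unpack them into the repositories, commit any drift, then tag) can be scripted. A toy sketch, with a local tarball standing in for the downloaded package and made-up file and tag names:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
# a repository mirroring a bundled dependency (toy stand-in)
git init -q pkg
cd pkg
git config user.email "[email protected]"
git config user.name "Release Bot"
echo "version 1" > lib.c
git add lib.c
git commit -qm "import v1"
# pretend this tarball was just downloaded from the distribution site
mkdir ../upstream
echo "version 2" > ../upstream/lib.c
tar -C ../upstream -cf ../pkg.tar lib.c
# unpack the archive over the working tree; commit any drift before tagging
tar -xf ../pkg.tar
if [ -n "$(git status --porcelain)" ]; then
  git add -A
  git commit -qm "sync with distributed archive"
fi
git tag dials-release
git tag --points-at HEAD
```

This automates the manual a)-d) steps; in the clean case the `git status` check finds nothing and the tag simply lands on the existing import commit.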
>
> *From:* cctbxbb-bounces(a)phenix-online.org [mailto:cctbxbb-bounces@
> phenix-online.org] *On Behalf Of *Aaron Brewster
> *Sent:* 28 October 2016 23:48
>
>
> *To:* cctbx mailing list
> *Subject:* Re: [cctbxbb] Git status update
>
>
>
> Hi Markus,
>
> Billy, Nigel and I worked through the plan you originally sent out (linked
> at the top of this email thread). We've added a couple questions as
> comments in the document (expounded on below). Also, could you make the
> document writeable by Billy, Nigel and I? We want to assign some of the
> tasks listed.
>
> Here are the questions:
>
> - How is the repository at https://github.com/cctbx/cctbx being synced
> with the SVN version? (This is just my own curiosity :)
> - We think the non-cctbx repositories there that are copies (clipper,
> boost, annlib, etc.) of the bundles distributed alongside cctbx need to be
> removed from the git cctbx organization as they are duplications of what we
> are currently distributing. You say they are there so you can do tagging
> with the dials releases, but if Billy has updated the versions at LBL, then
> wouldn't these versions hosted by git already be out of date? Which would
> affect the dials stable releases?
> - We need to think about how websites such as dials.lbl.gov or
> cctbx.sourceforge.net will be affected by moving to git. Probably
> it's just updating links or moving html files?
>
> Thanks,
>
> -Aaron
>
> On Tue, Aug 23, 2016 at 6:32 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
>
> Hi Marcin.
>
> Currently I would say there is no plan. I do not have the history of the
> projects, so I could not do anything about that. I also don't use any of
> those repositories, so can't really comment on their status or future. I
> created the repositories only so I can tag versions for the DIALS releases,
> otherwise someone changing the downloadable archive would affect stable
> releases.
>
> -Markus
>
> Dr Markus Gerstel MBCS
> Postdoctoral Research Associate
> Tel: +44 1235 778698
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
>
>
> -----Original Message-----
> From: cctbxbb-bounces(a)phenix-online.org [mailto:cctbxbb-bounces@
> phenix-online.org] On Behalf Of Marcin Wojdyr
> Sent: 23 August 2016 14:26
> To: cctbx mailing list
> Subject: Re: [cctbxbb] Git status update
>
> Hi Markus,
> what's the plan for the repositories that are currently imported as "copy
> of LBL archive" (without history).
> I guess the plan is to preserve the history - but will they be merged or
> added as submodule/subtree of the main cctbx repository?
>
> Marcin
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
Re: [cctbxbb] some thoughts on cctbx and pip
by Billy Poon
Hi Gergely,
It's still a work in progress. I'm sorting out some Windows issues right
now. I can probably build a test package on a separate channel for people
that want to test it (let's say next week?). I'll provide instructions, but
basically, the test conda package will be in its own separate channel and
the dependencies will be pulled from the conda-forge channel. I want most
things to be working correctly on Python 2.7, 3.6, 3.7, and 3.8 on all 3
platforms.
Thanks!
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
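Once such a test package exists, trying it out would presumably look something like the following (the channel name `cctbx-test-channel` is a placeholder for whatever Billy announces, and `cctbx-base` is an assumed package name, not a confirmed one):

```shell
# hypothetical channel; dependencies resolved from conda-forge
conda create -n cctbx-test -c cctbx-test-channel -c conda-forge cctbx-base python=3.7
conda activate cctbx-test
python -c "import cctbx, iotbx; print('ok')"
```

The final import check is the quickest way to tell whether the package landed in the conda python correctly, which is what Gergely's attempt below was missing.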
On Fri, Dec 13, 2019 at 5:44 AM Gergely Katona <gkatona(a)gmail.com> wrote:
> Dear Billy,
>
> This sounds very promising and exciting. I am not sure if cctbx is
> already functional as a conda package in anaconda3 (Linux) or if this is
> still a work in progress. My technical expertise does not allow me to
> tell the difference. What I tried:
>
> Fresh install of anaconda3. Adding - cctbx and - conda-forge to
> .condarc . Installing pyside2 with conda. Running conda install
> conda_dependencies . I get a lot package version conflict, and I
> cannot import cctbx or iotbx to anaconda python. Am I following the
> right instructions? Or it is too early to expect that cctbx works when
> installed through conda?
>
> Best wishes,
>
> Gergely
>
> On Wed, Nov 27, 2019 at 3:56 PM Billy Poon <BKPoon(a)lbl.gov> wrote:
> >
> > Hi all,
> >
> > For a brief update, I have submitted a recipe for cctbxlite to
> conda-forge (https://github.com/conda-forge/staged-recipes/pull/10021)
> and support for Python 3.7 and 3.8 is being added (
> https://github.com/cctbx/cctbx_project/pull/409). With some fixes for
> Windows (https://github.com/cctbx/cctbx_project/pull/416), all platforms
> (macOS, linux, and Windows) can build for Python 2.7, 3.6, 3.7, and 3.8.
> Some additional changes will be needed to get Windows to work with Python 3
> and for tests to pass with Boost 1.70.0. That will enable the conda-forge
> recipe to build for all platforms and for all supported versions of Python.
> >
> > Currently, the conda-forge recipe will install into the conda python and
> cctbx imports can be done without sourcing the environment scripts. It
> looks like a lot of the environment variables being set in the dispatchers
> can be removed since the Python files and C++ extensions are in the right
> places. I'll update the libtbx_env file so that commands that load the
> environment can work correctly.
> >
> > --
> > Billy K. Poon
> > Research Scientist, Molecular Biophysics and Integrated Bioimaging
> > Lawrence Berkeley National Laboratory
> > 1 Cyclotron Road, M/S 33R0345
> > Berkeley, CA 94720
> > Tel: (510) 486-5709
> > Fax: (510) 486-5909
> > Web: https://phenix-online.org
> >
> >
> > On Sun, Aug 25, 2019 at 2:33 PM Tristan Croll <tic20(a)cam.ac.uk> wrote:
> >>
> >> Hi Luc,
> >>
> >> That sounds promising. From there, I’d need to work out how to make a
> fully-packaged installer (basically a modified wheel file) for the ChimeraX
> ToolShed - the aim is for the end user to not have to worry about any of
> this. That adds a couple of complications - e.g. $LIBTBX_BUILD would need
> to be set dynamically before first import - but doesn’t seem insurmountable.
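One way to set $LIBTBX_BUILD dynamically before first import, as Tristan describes, is a small bootstrap helper called at the top of the bundle's `__init__.py`. A sketch only; the helper name and the `build/` layout are hypothetical, not the real libtbx mechanism:

```python
import os
from pathlib import Path

def ensure_libtbx_build(package_file):
    # Point LIBTBX_BUILD at a build directory assumed to ship inside
    # the installed package, but only if the user has not set it.
    if "LIBTBX_BUILD" not in os.environ:
        build_dir = Path(package_file).resolve().parent / "build"
        os.environ["LIBTBX_BUILD"] = str(build_dir)
    return os.environ["LIBTBX_BUILD"]

# e.g. as the first lines of the bundled package's __init__.py:
# ensure_libtbx_build(__file__)
# import cctbx  # safe now that the variable is set
```

Because the helper runs before any cctbx import, the end user never sees the environment variable at all, which is the "not have to worry about any of this" goal above.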
> >>
> >> Thanks,
> >>
> >> Tristan
> >>
> >>
> >>
> >> Tristan Croll
> >> Research Fellow
> >> Cambridge Institute for Medical Research
> >> University of Cambridge CB2 0XY
> >>
> >>
> >>
> >>
> >> > On 25 Aug 2019, at 18:31, Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
> >> >
> >> > Hi Tristan,
> >> >
> >> > cctbx could be built to use your ChimeraX python, now that cctbx is
> moving to Python 3. The option —with-python is there for that with the
> bootstrap script. The specific environment setup boil down to setting two
> environment variable LIBTBX_BUILD and either LD_LIBRARY_PATH on Linux, PATH
> on Win32, or DYLIB_LIBRARY_PATH on MacOS. If you work within a framework
> such as ChimeraX, that should not be difficult to ensure those two
> variables are set.
> >> >
> >> > Best wishes,
> >> >
> >> > Luc
> >> >
> >> >
> >> >> On 23 Aug 2019, at 19:02, Tristan Croll <tic20(a)cam.ac.uk> wrote:
> >> >>
> >> >> To add my two cents on this: probably the second-most common
> question I've had about ISOLDE's implementation is, "why didn't you use
> CCTBX?". The honest answer to that is, "I didn't know how."
> >> >>
> >> >> Still don't, really - although the current developments are rather
> promising. The problem I've faced is that CCTBX was designed as its own
> self-contained Python (2.7, until very recently) environment, with its own
> interpreter and a lot of very specific environment setup. Meanwhile I'm
> developing ISOLDE in ChimeraX, which is *also* its own self-contained
> Python (3.7) environment. To plug one into the other in that form... well,
> I don't think I'm a good enough programmer to really know where to start.
> >> >>
> >> >> The move to Conda and a more modular CCTBX architecture should make
> a lot more possible in that direction. Pip would be even better for me
> personally (ChimeraX can install directly from the PyPI, but doesn't
> interact with Conda) - but I understand pretty well the substantial
> challenge that would amount to (not least being that the PyPI imposes a
> limit - around 100MB from memory? - on the size of an individual package).
> >> >>
> >> >> Best regards,
> >> >>
> >> >> Tristan
> >> >>
> >> >>> On 2019-08-23 09:28, Luc Bourhis wrote:
> >> >>> Hi Graeme,
> >> >>> Yes, I know. But “black" is a program doing a very particular task
> >> >>> (code formatting from the top of my head). Requiring to use a
> wrapper
> >> >>> for python itself is another level. But ok, I think I am mellowing
> to
> >> >>> the idea after all! Talking with people around me, and
> extrapolating,
> >> >>> I would bet that, right now, a great majority of people interested
> by
> >> >>> cctbx in pip have already used the cctbx, so they know about the
> >> >>> Python wrapper, and they would not be too sanguine about that. My
> >> >>> concern is for the future, when pip will be the first time some
> people
> >> >>> use cctbx. Big fat warning notices on PyPI page and a better error
> >> >>> message when cctbx fails because LIBTBX_BUILD is not set would be
> >> >>> needed but that could be all right.
> >> >>> If we do a pip installer, we should aim at a minimal install: cctbx,
> >> >>> iotbx and their dependencies, and that’s it.
> >> >>> Best wishes,
> >> >>> Luc
> >> >>>> On 23 Aug 2019, at 07:17, Graeme.Winter(a)Diamond.ac.uk <Graeme.Winter(a)diamond.ac.uk> wrote:
> >> >>>> Without discussing the merits of this or whether we _choose_ to
> make the move to supporting PIP, I am certain it would be _possible_ - many
> other packages make dispatcher scripts when you pip install them e.g.
> >> >>>> Silver-Surfer rescale_f2 :) $ which black; cat $(which black)
> >> >>>> /Library/Frameworks/Python.framework/Versions/3.6/bin/black
> >> >>>> #!/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6
> >> >>>> # -*- coding: utf-8 -*-
> >> >>>> import re
> >> >>>> import sys
> >> >>>> from black import main
> >> >>>> if __name__ == '__main__':
> >> >>>>     sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
> >> >>>>     sys.exit(main())
> >> >>>> So we _could_ work around the absence of LIBTBX_BUILD etc. in the
> system. Whether or not we elect to do the work is a different question, and
> it seems clear that there are very mixed opinions on this.
> >> >>>> Best wishes Graeme
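
[Editorial note: the dispatcher Graeme shows is what pip generates from a setuptools `console_scripts` entry point; the `re.sub` line only normalizes `sys.argv[0]` by stripping the `-script.pyw`/`.exe` suffix that Windows launchers append. A standalone sketch of that normalization (not cctbx code; the function name is invented):]

```python
import re

def clean_argv0(argv0):
    # Strip the "-script.pyw" / ".exe" suffix that setuptools-generated
    # Windows launchers leave on sys.argv[0]; on Unix the path is unchanged.
    return re.sub(r'(-script\.pyw?|\.exe)?$', '', argv0)
```

A cctbx dispatcher installed the same way could perform its environment setup in such a wrapper before handing off to the real entry point.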
> >> >>>> On 23 Aug 2019, at 01:21, Luc Bourhis <luc_j_bourhis(a)mac.com
> <mailto:[email protected]>> wrote:
> >> >>>> Hi,
> >> >>>> Even if we managed to ship the boost dynamic libraries with
> pip, it would still not be pip-like, as we would still need our python
> wrappers to set LIBTBX_BUILD and LD_LIBRARY_PATH. Normal pip packages work
> with the standard python exe. As for LD_LIBRARY_PATH, we could get around
> it by changing the way we compile, using -Wl,-R, which is the runtime
> equivalent of the build-time -L. That's a significant change that would
> need to be tested. But there is no way around setting LIBTBX_BUILD right
> now. Leaving that to the user is horrible. Perhaps there is a way to hack
> libtbx/env_config.py so that we can hardwire LIBTBX_BUILD in there when
> pip installs?
> >> >>>> Best wishes,
> >> >>>> Luc
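
[Editorial note: one way Luc's "hardwire LIBTBX_BUILD" idea could look, sketched as a hypothetical pip entry point. The directory name and package layout are invented for illustration; this is not existing cctbx code.]

```python
import os

def main():
    # Hypothetical sketch: a pip-installed cctbx could ship its build
    # metadata inside the installed package and point LIBTBX_BUILD at it
    # before any libtbx-dependent import happens, so the standard python
    # interpreter can be used without a wrapper.
    if "__file__" in globals():
        base = os.path.dirname(os.path.abspath(__file__))
    else:
        base = os.getcwd()
    build_dir = os.path.join(base, "libtbx_build")  # invented location
    os.environ.setdefault("LIBTBX_BUILD", build_dir)
    # ... only now would libtbx/cctbx modules be imported and dispatched ...
    return 0
```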
> >> >>>> On 16 Aug 2019, at 22:47, Luc Bourhis <luc_j_bourhis(a)mac.com
> <mailto:[email protected]>> wrote:
> >> >>>> Hi,
> >> >>>> I did look into that many years ago, and even toyed with building
> a pip installer. What stopped me is the exact conclusion you reached too:
> the user would not have the pip experience he expects. You are right that
> it is a lot of effort but is it worth it? Considering that remark, I don’t
> think so. Now, Conda was created specifically to go beyond pip
> pure-python-only support. Since cctbx has garnered support for Conda, the
> best avenue imho is to go the extra length to have a package on
> Anaconda.org<http://anaconda.org/>, and then to advertise it hard to
> every potential user out there.
> >> >>>> Best wishes,
> >> >>>> Luc
> >> >>>> On 16 Aug 2019, at 21:45, Aaron Brewster <asbrewster(a)lbl.gov
> <mailto:[email protected]>> wrote:
> >> >>>> Hi, to avoid clouding Dorothee's documentation email thread, which
> I think is a highly useful enterprise, here are some thoughts about putting
> cctbx into pip. Pip doesn't install non-python dependencies well. I don't
> think boost is available as a package on pip (at least the package version
> we use). wxPython4 isn't portable through pip (
> https://wiki.wxpython.org/How%20to%20install%20wxPython#Installing_wxPython…).
> MPI libraries are system dependent. If cctbx were a pure python package,
> pip would be fine, but cctbx is not.
> >> >>>> All that said, we could build a manylinux1 version of cctbx and
> upload it to PyPI (I'm just learning about this). For a pip package to be
> portable (which is a requirement for cctbx), it needs to conform to PEP 513,
> the manylinux1 standard (https://www.python.org/dev/peps/pep-0513/). For
> example, numpy is built according to this standard (see
> https://pypi.org/project/numpy/#files, where you'll see the manylinux1
> wheel). Note, the manylinux1 standard is built with CentOS 5.11, which we
> no longer support.
> >> >>>> There is also a manylinux2010 standard, which is based on CentOS 6
> (https://www.python.org/dev/peps/pep-0571/). This is likely a more
> attainable target (note though that by default C++11 is not supported on
> CentOS 6).
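
[Editorial note: the portability Aaron describes is encoded in the platform tag of the wheel filename. A small hypothetical helper shows where the `manylinux1` tag lives; the function name is invented:]

```python
def wheel_tags(filename):
    # Split a wheel filename into its tag fields, e.g.
    #   numpy-1.16.4-cp27-cp27mu-manylinux1_x86_64.whl
    #   -> ('numpy', '1.16.4', 'cp27', 'cp27mu', 'manylinux1_x86_64')
    stem = filename[:-len(".whl")]
    parts = stem.split("-")
    # the optional build tag sits in the middle, so read the three
    # compatibility tags (python, abi, platform) from the end
    return (parts[0], parts[1], parts[-3], parts[-2], parts[-1])
```

pip only offers a wheel to a user whose interpreter and platform match these tags, which is why a `manylinux*` platform tag is what makes a binary wheel broadly installable on Linux.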
> >> >>>> If we built a manylinuxX version of cctbx and uploaded it to PyPi,
> the user would need all the non-python dependencies. There's no way to
> specify these in pip. For example, cctbx requires boost 1.63 or better.
> The user will need to have it in a place their python can find it, or we
> could package it ourselves and supply it, similar to how the pip h5py
> package now comes with an hd5f library, or how the pip numpy package
> includes an openblas library. We'd have to do the same for any packages we
> depend on that aren't on pip using the manylinux standards, such as
> wxPython4.
> >> >>>> Further, we need to think about how dials and other cctbx-based
> packages interact. If pip install cctbx is set up, how does pip install
> dials work, such that any dials shared libraries can find the cctbx
> libraries? Can shared libraries from one pip package link against
> libraries in another pip package? Would each package need to supply its
> own boost? Possibly this is well understood in the pip field, but not by
> me :)
> >> >>>> Finally, there's the option of providing a source pip package.
> This would require the full compiler toolchain for any given platform
> (macOS, linux, windows). These are likely available for developers, but
> not for general users.
> >> >>>> Anyway, these are some of the obstacles. Not saying it isn't
> possible, it's just a lot of effort.
> >> >>>> Thanks,
> >> >>>> -Aaron
> >> >>>> _______________________________________________
> >> >>>> cctbxbb mailing list
> >> >>>> cctbxbb(a)phenix-online.org<mailto:[email protected]>
> >> >>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
>
> --
> Gergely Katona, PhD
> Associate Professor
> Department of Chemistry and Molecular Biology, University of Gothenburg
> Box 462, 40530 Göteborg, Sweden
> Tel: +46-31-786-3959 / M: +46-70-912-3309 / Fax: +46-31-786-3910
> Web: http://katonalab.eu, Email: gergely.katona(a)gu.se
>
>
6 years, 1 month
Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Sebastiano Pasqualato
Sorry guys, I got a bit lost in this long thread.
This test means that by truncating the data with the anisotropy server we get better R/Rfree statistics.
We are throwing away "bad" data: when throwing away the same number of randomly chosen reflections, the statistics don't drop.
Hence, the server is indeed helping to lower the statistics, and if I got it right, it is also providing better maps.
Is my understanding correct?
Thanks,
ciao,
s
On May 3, 2012, at 1:45 PM, Pavel Afonine wrote:
> Hi Kendall,
>
> removing same amount of data randomly gives Rwork/Rfree ~ 30/35%.
>
> Pavel
>
> On 5/3/12 4:13 AM, Kendall Nettles wrote:
>> Hi Pavel,
>> What happens if you throw out that many reflections that have signal? Can you take out a random set of the same size?
>> Best,
>> Kendall
>>
>> On May 3, 2012, at 2:41 AM, "Pavel Afonine"<pafonine(a)lbl.gov> wrote:
>>
>>> Hi Kendall,
>>>
>>> I just did this quick test: calculated R-factors using original and
>>> anisotropy-corrected Mike Sawaya's data (*)
>>>
>>> Original:
>>> r_work : 0.3026
>>> r_free : 0.3591
>>> number of reflections: 26944
>>>
>>> Truncated:
>>> r_work : 0.2640
>>> r_free : 0.3178
>>> number of reflections: 18176
>>>
>>> The difference in R-factors is not too surprising given how many
>>> reflections were removed (about 33%).
>>>
>>> Pavel
>>>
>>> (*) Note, the data available in PDB is anisotropy corrected. The
>>> original data set was kindly provided to me by the author.
>>>
>>>
>>> On 5/2/12 5:25 AM, Kendall Nettles wrote:
>>>> I didn't think the structure was publishable with an Rfree of 33% because I was expecting the reviewers to complain.
>>>>
>>>> We have tested a number of data sets on the UCLA server and it usually doesn't make much difference. I wouldn't expect truncation alone to change Rfree by 5%, and it usually doesn't. The two times I have seen dramatic impacts on the maps (and Rfree), the highly anisotropic sets showed strong waves of difference density as well, which was fixed by throwing out the noise. We have moved to using loose data cutoffs for most structures, but I do think anisotropic truncation can be helpful in rare cases.
>>>>
>>>> Kendall
>>>>
>>>> On May 1, 2012, at 3:07 PM, "Dale Tronrud"<det102(a)uoxray.uoregon.edu> wrote:
>>>>
>>>>> While philosophically I see no difference between a spherical resolution
>>>>> cutoff and an elliptical one, a drop in the free R can't be the justification
>>>>> for the switch. A model cannot be made more "publishable" simply by discarding
>>>>> data.
>>>>>
>>>>> We have a whole bunch of empirical guides for judging the quality of this
>>>>> and that in our field. We determine the resolution limit of a data set (and
>>>>> imposing a "limit" is another empirical choice made) based on Rmerge, or Rmeas,
>>>>> or Rpim getting too big, or I/sigI getting too small, and there is no agreement
>>>>> on how "too big/small" is too "too big/small".
>>>>>
>>>>> We then have other empirical guides for judging the quality of the models
>>>>> we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to
>>>>> recognize that the these criteria need to be applied differently for different
>>>>> resolutions. A lower resolution model is allowed a higher Rfree, for example.
>>>>>
>>>>> Isn't it also true that a model refined to data with a cutoff of I/sigI of
>>>>> 1 would be expected to have a free R higher than a model refined to data with
>>>>> a cutoff of 2? Surely we cannot say that the decrease in free R that results
>>>>> from changing the cutoff criteria from 1 to 2 reflects an improved model. It
>>>>> is the same model after all.
>>>>>
>>>>> Sometimes this shifting application of empirical criteria enhances the
>>>>> adoption of new technology. Certainly the TLS parametrization of atomic
>>>>> motion has been widely accepted because it results in lower working and free
>>>>> Rs. I've seen it knock 3 to 5 percent off, and while that certainly means
>>>>> that the model fits the data better, I'm not sure that the quality of the
>>>>> hydrogen bond distances, van der Waals distances, or maps are any better.
>>>>> The latter details are what I really look for in a model.
>>>>>
>>>>> On the other hand, there has been good evidence through the years that
>>>>> there is useful information in the data beyond an I/sigI of 2 or an
>>>>> Rmeas > 100%, but getting people to use this data has been a hard slog. The
>>>>> reason for this reluctance is that the R values of the resulting models
>>>>> are higher. Of course they are higher! That does not mean the models
>>>>> are of poorer quality, only that data with lower signal/noise has been
>>>>> used that was discarded in the models you used to develop your "gut feeling"
>>>>> for the meaning of R.
>>>>>
>>>>> When you change your criteria for selecting data you have to discard
>>>>> your old notions about the acceptable values of empirical quality measures.
>>>>> You either have to normalize your measure, as Phil Jeffrey recommends, by
>>>>> ensuring that you calculate your R's with the same reflections, or by
>>>>> making objective measures of map quality.
>>>>>
>>>>> Dale Tronrud
>>>>>
>>>>> P.S. It is entirely possible that refining a model to a very optimistic
>>>>> resolution cutoff and calculating the map to a lower resolution might be
>>>>> better than throwing out the data altogether.
>>>>>
>>>>> On 5/1/2012 10:34 AM, Kendall Nettles wrote:
>>>>>> I have seen dramatic improvements in maps and behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think it would have been publishable otherwise.
>>>>>> Kendall
>>>>>>
>>>>>> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>>>>>>
>>>>>>> On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans<pre(a)mrc-lmb.cam.ac.uk> wrote:
>>>>>>>> Are anisotropic cutoffs desirable?
>>>>>>> is there a peer-reviewed publication - perhaps from Acta
>>>>>>> Crystallographica - which describes precisely why scaling or
>>>>>>> refinement programs are inadequate to ameliorate the problem of
>>>>>>> anisotropy, and argues why the method applied in Strong et al. 2006
>>>>>>> satisfies this need?
>>>>>>>
>>>>>>> -Bryan
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/phenixbb
--
Sebastiano Pasqualato, PhD
Crystallography Unit
Department of Experimental Oncology
European Institute of Oncology
IFOM-IEO Campus
via Adamello, 16
20139 - Milano
Italy
tel +39 02 9437 5167
fax +39 02 9437 5990
13 years, 9 months
Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Kendall Nettles
Hi Pavel,
Could you use a similar approach to figuring out where to cut your data in general? Could you compare the effects of throwing out reflections in different bins, based on I/sigma, for example, and use this to determine what is truly noise? I might predict that as you throw out "noise" reflections you will see a larger drop in Rfree than from throwing out "signal" reflections, which should converge as you approach the "true" resolution. While we don't use I/sigma exclusively, we do tend toward cutting most of our data sets at the same I/sigma, around 1.5. It would be great if there were a more scientific approach.
Best,
Kendall
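
[Editorial note: the bin-by-bin experiment Kendall describes could be organized along these lines. This is a plain-Python sketch, not an existing Phenix feature, and the function name is invented.]

```python
def i_over_sigma_bins(intensities, sigmas, edges):
    # Group reflection indices into I/sigma bins delimited by 'edges'
    # (ascending). One refinement per removed bin would then show whether
    # discarding that bin behaves like removing noise (Rfree drops) or
    # removing signal (Rfree does not improve).
    bins = [[] for _ in range(len(edges) + 1)]
    for idx in range(len(intensities)):
        ratio = intensities[idx] / sigmas[idx]
        b = sum(1 for e in edges if ratio >= e)
        bins[b].append(idx)
    return bins
```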
On May 3, 2012, at 7:45 AM, Pavel Afonine wrote:
> Hi Kendall,
>
> removing same amount of data randomly gives Rwork/Rfree ~ 30/35%.
>
> Pavel
13 years, 9 months
[phenixbb] Re: adding together
by Billy Poon
Hi Pavel,
Yes, we generally filter out attachments, but if you use a .txt extension,
it should go through.
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Fax: (510) 486-5909
Web: https://phenix-online.org
On Fri, Sep 12, 2025 at 1:23 PM Pavel Afonine <pafonine(a)lbl.gov> wrote:
> Hi James,
>
> script pasted below should do it. This time it is 54 lines mostly
> because I was assuming arrays you are summing do not have matching indices.
>
> ****
> from iotbx import reflection_file_reader
> import iotbx.pdb
> import os
> from scitbx.array_family import flex
>
> def run():
>   # Collect all FMODEL,PHIFMODEL arrays from the input files
>   arrays = []
>   for f in os.listdir("."):
>     if not (f.startswith("fmodel") and f.endswith(".mtz")): continue
>     miller_arrays = reflection_file_reader.any_reflection_file(
>       file_name = f).as_miller_arrays()
>     for ma in miller_arrays:
>       if(ma.info().label_string() == "FMODEL,PHIFMODEL"):
>         arrays.append(ma)
>         break
>   # The arrays may not have matching indices (e.g., different resolution),
>   # so reduce them all to the common set of indices
>   common_ref_array = arrays[0]
>   aligned_arrays = [common_ref_array]
>   for i in range(1, len(arrays)):
>     next_array = arrays[i]
>     common_ref_array, next_array_aligned = \
>       common_ref_array.common_sets(next_array)
>     aligned_arrays[0] = common_ref_array
>     aligned_arrays.append(next_array_aligned)
>   new_arrays = aligned_arrays
>   # Accumulate the complex sum and the sum of squared amplitudes
>   sum_cmpl = None
>   sum_sq = None
>   for ma in new_arrays:
>     data = ma.data()
>     data_abs = flex.abs(data)
>     data_abs_sq = data_abs * data_abs
>     if sum_cmpl is None:
>       sum_cmpl = data
>       sum_sq = data_abs_sq
>     else:
>       sum_cmpl = sum_cmpl + data
>       sum_sq = sum_sq + data_abs_sq
>   f_sum = ma.array(data = sum_cmpl)
>   f_sq = ma.array(data = sum_sq)
>   # Write into MTZ
>   mtz_dataset = f_sum.as_mtz_dataset(column_root_label = "FSUM")
>   mtz_dataset.add_miller_array(
>     miller_array = f_sq,
>     column_root_label = "FSQ")
>   mtz_object = mtz_dataset.mtz_object()
>   mtz_object.write(file_name = "result.mtz")
>
> if __name__ == '__main__':
>   run()
>
> ****
>
> Pavel
>
> On 9/12/25 12:36, James Holton wrote:
> > Thank you for this Pavel,
> >
> > Funny, I got the attachment just fine? Maybe its on P's email client
> > side. I'm using thunderbird.
> >
> > However, the code you shared seems to be taking an amplitude from one
> > place and attaching it to a phase from another. What I want to do is
> > the complex sum of two phased structure factors.
> >
> > Specifically, what I have is a stack of 1000 mtz files with FMODEL PHIMODEL
> > in each. What I want is the phased sum of all of them, as well as the
> > unphased sum of their squares. I've been doing the additions two at a
> > time because that is parallelizable. My script is here:
> >
> https://github.com/jmholton/altloc_diffuse/blob/main/addup_mtzs_diffuse.com
> > It is currently using sftools for these operations:
> > calc ( COL Fsum PHIsum ) = ( COL F1 P1 ) ( COL F2 P2 ) +
> > calc COL I1 = COL F1 2 **
> > calc COL I2 = COL F2 2 **
> > calc J COL Isum = COL I1 COL I2 +
> >
> > There does not seem to be a general-purpose math toolkit like sftools
> > in phenix?
> >
> > Much appreciate all the quick responses!
> >
> > -James
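
[Editorial note: the sftools operations James lists amount to complex addition of (F, phi) pairs plus a plain intensity sum. A numpy sketch of the same arithmetic, not the cctbx API the thread ends up using; function names are invented:]

```python
import numpy as np

def add_phased(f1, p1, f2, p2):
    # Complex sum of two phased structure factors, phases in degrees,
    # mirroring: calc ( COL Fsum PHIsum ) = ( COL F1 P1 ) ( COL F2 P2 ) +
    c = (np.asarray(f1) * np.exp(1j * np.radians(p1)) +
         np.asarray(f2) * np.exp(1j * np.radians(p2)))
    return np.abs(c), np.degrees(np.angle(c))

def add_intensities(f1, f2):
    # Unphased sum of squares, mirroring: calc J COL Isum = COL I1 COL I2 +
    return np.asarray(f1) ** 2 + np.asarray(f2) ** 2
```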
> >
> >
> > On 9/12/2025 12:12 PM, Pavel Afonine wrote:
> >>
> >> Oh, does phenixbb really strip off attachments under 1 KB in size?!
> >> Terrible. Anyway, I’m inlining it below (make sure your email client
> >> doesn’t mess up the indentation):
> >>
> >> ********
> >> from iotbx import reflection_file_reader
> >> import iotbx.pdb
> >>
> >> def run():
> >>   # Read your reflection data
> >>   miller_arrays = reflection_file_reader.any_reflection_file(
> >>     file_name = "1yjp.mtz").as_miller_arrays()
> >>   for ma in miller_arrays:
> >>     print(ma.info().label_string())
> >>     if(ma.info().label_string() == "FOBS,SIGFOBS"):
> >>       f_obs = ma
> >>   # Get phases by computing Fcalc (for example)
> >>   pdb_inp = iotbx.pdb.input(file_name = "1yjp.pdb")
> >>   xrs = pdb_inp.xray_structure_simple()
> >>   f_calc = f_obs.structure_factors_from_scatterers(
> >>     xray_structure = xrs).f_calc()
> >>   # Transfer phases from f_calc to f_obs
> >>   f_obs_cmpl = f_obs.phase_transfer(phase_source = f_calc.phases())
> >>   print(f_obs_cmpl.data())  # just to see the data are now a complex array
> >>   # Write into MTZ
> >>   mtz_dataset = f_obs_cmpl.as_mtz_dataset(column_root_label = "Fcmpl")
> >>   mtz_object = mtz_dataset.mtz_object()
> >>   mtz_object.write(file_name = "data.mtz")
> >>
> >> if __name__ == '__main__':
> >>   run()
> >>
> >> ********
> >>
> >> Pavel
> >>
> >> On 9/12/25 12:04, Petrus Zwart wrote:
> >>> Hi Pavel,
> >>>
> >>> You say 25 lines, but there are zero - can you attach the code?
> >>>
> >>> P
> >>>
> >>> On Fri, Sep 12, 2025 at 11:57 AM Pavel Afonine <pafonine(a)lbl.gov>
> wrote:
> >>>
> >>> Hi James,
> >>>
> >>> attached script shows how to do that in cctbx. (And it is just
> >>> 25 lines including comments, compared to Claude's 173.)
> >>>
> >>> It uses 1yjp as a source of sample files to run the example, you
> >>> can get
> >>> it using
> >>>
> >>> phenix.fetch_pdb 1yjp action=all convert_to_mtz=true
> >>>
> >>> Pavel
> >>>
> >>> On 9/12/25 11:28, James Holton wrote:
> >>> > Greetings Phenix developers!
> >>> >
> >>> > I have what I hope is a quick question. I'm trying to add a
> >>> stack of
> >>> > phased structure factors together. Specifically from separate
> >>> runs of
> >>> > phenix.fmodel. Normally, I'd use sftools for this, but the
> >>> > distributed version has a size limitation. According to Tom's
> >>> phenix
> >>> > chatbot, I can use phenix.maps for adding two sets of structure
> >>> > factors together, but it seems to only want to combine an mtz
> >>> with a
> >>> > pdb. There is iotbx.reflection_file_editor, but it only works
> >>> from the
> >>> > GUI (and I want to add 1000 mtz files together). phenix.xmanip
> >>> looks
> >>> > promising too, but again, seems to want a pdb and an mtz. Is
> >>> there a
> >>> > recommended way to do this?
> >>> >
> >>> > I've already asked this on CCP4BB, but I wanted to get the
> >>> Phenix take
> >>> > on these kinds of manipulations.
> >>> >
> >>> > Thank you,
> >>> >
> >>> > -James Holton
> >>> > MAD Scientist
> >>> >
> >>> _______________________________________________
> >>> phenixbb mailing list -- phenixbb(a)phenix-online.org
> >>> To unsubscribe send an email to phenixbb-leave(a)phenix-online.org
> >>>
> >>>
> >>>
> >>> --
> >>>
> ------------------------------------------------------------------------------------------
> >>> Peter Zwart
> >>> Staff Scientist, Molecular Biophysics and Integrated Bioimaging
> >>> Berkeley Synchrotron Infrared Structural Biology
> >>> Biosciences Lead, Center for Advanced Mathematics for Energy
> >>> Research Applications
> >>> Lawrence Berkeley National Laboratories
> >>> 1 Cyclotron Road, Berkeley, CA-94703, USA
> >>> Cell: 510 289 9246
> >>>
> ------------------------------------------------------------------------------------------
> >>>
> >
4 months, 4 weeks
Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Terwilliger, Thomas C
Hi Kendall,
This could work. You could define a fixed set of test reflections, and never touch these, and never include them in refinement, and always use this fixed set to calculate a free R. Then you could do whatever you want, throw away some work reflections, etc, refine, and evaluate how things are working with the fixed free R set.
All the best,
Tom T
________________________________________
From: phenixbb-bounces(a)phenix-online.org [phenixbb-bounces(a)phenix-online.org] on behalf of Kendall Nettles [knettles(a)scripps.edu]
Sent: Thursday, May 03, 2012 7:05 AM
To: PHENIX user mailing list
Subject: Re: [phenixbb] Geometry Restraints - Anisotropic truncation
Hi Pavel,
Could you use a similar approach to figuring out where to cut your data in general? Could you compare the effects of throwing out reflections in different bins, based on I/sigma, for example, and use this to determine what is truly noise? I might predict that as you throw out "noise" reflections you will see a larger drop in Rfree than from throwing out "signal" reflections, which should converge as you approach the "true" resolution. While we don't use I/sigma exclusively, we do tend toward cutting most of our data sets at the same I/sigma, around 1.5. It would be great if there were a more scientific approach.
Best,
Kendall
On May 3, 2012, at 7:45 AM, Pavel Afonine wrote:
> Hi Kendall,
>
> removing same amount of data randomly gives Rwork/Rfree ~ 30/35%.
>
> Pavel
>
> On 5/3/12 4:13 AM, Kendall Nettles wrote:
>> Hi Pavel,
>> What happens if you throw out that many reflections that have signal? Can you take out a random set of the same size?
>> Best,
>> Kendall
>>
>> On May 3, 2012, at 2:41 AM, "Pavel Afonine"<pafonine(a)lbl.gov> wrote:
>>
>>> Hi Kendall,
>>>
>>> I just did this quick test: calculated R-factors using original and
>>> anisotropy-corrected Mike Sawaya's data (*)
>>>
>>> Original:
>>> r_work : 0.3026
>>> r_free : 0.3591
>>> number of reflections: 26944
>>>
>>> Truncated:
>>> r_work : 0.2640
>>> r_free : 0.3178
>>> number of reflections: 18176
>>>
>>> The difference in R-factors is not too surprising given how many
>>> reflections were removed (about 33%).
>>>
>>> Pavel
>>>
>>> (*) Note, the data available in PDB is anisotropy corrected. The
>>> original data set was kindly provided to me by the author.
>>>
>>>
>>> On 5/2/12 5:25 AM, Kendall Nettles wrote:
>>>> I didn't think the structure was publishable with an Rfree of 33% because I was expecting the reviewers to complain.
>>>>
>>>> We have tested a number of data sets on the UCLA server and it usually doesn't make much difference. I wouldn't expect truncation alone to change Rfree by 5%, and it usually doesn't. The two times I have seen dramatic impacts on the maps (and Rfree), the highly anisotropic sets showed strong waves of difference density as well, which were fixed by throwing out the noise. We have moved to using loose data cutoffs for most structures, but I do think anisotropic truncation can be helpful in rare cases.
>>>>
>>>> Kendall
>>>>
>>>> On May 1, 2012, at 3:07 PM, "Dale Tronrud"<det102(a)uoxray.uoregon.edu> wrote:
>>>>
>>>>> While philosophically I see no difference between a spherical resolution
>>>>> cutoff and an elliptical one, a drop in the free R can't be the justification
>>>>> for the switch. A model cannot be made more "publishable" simply by discarding
>>>>> data.
>>>>>
>>>>> We have a whole bunch of empirical guides for judging the quality of this
>>>>> and that in our field. We determine the resolution limit of a data set (and
>>>>> imposing a "limit" is itself an empirical choice) based on Rmerge, or Rmeas,
>>>>> or Rpim getting too big or I/sigI getting too small, and there is no agreement
>>>>> on how big or small counts as "too big" or "too small".
>>>>>
>>>>> We then have other empirical guides for judging the quality of the models
>>>>> we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to
>>>>> recognize that these criteria need to be applied differently for different
>>>>> resolutions. A lower resolution model is allowed a higher Rfree, for example.
>>>>>
>>>>> Isn't it also true that a model refined to data with a cutoff of I/sigI of
>>>>> 1 would be expected to have a free R higher than a model refined to data with
>>>>> a cutoff of 2? Surely we cannot say that the decrease in free R that results
>>>>> from changing the cutoff criteria from 1 to 2 reflects an improved model. It
>>>>> is the same model after all.
>>>>>
>>>>> Sometimes this shifting application of empirical criteria enhances the
>>>>> adoption of new technology. Certainly the TLS parametrization of atomic
>>>>> motion has been widely accepted because it results in lower working and free
>>>>> Rs. I've seen it knock 3 to 5 percent off, and while that certainly means
>>>>> that the model fits the data better, I'm not sure that the quality of the
>>>>> hydrogen bond distances, van der Waals distances, or maps are any better.
>>>>> The latter details are what I really look for in a model.
>>>>>
>>>>> On the other hand, there has been good evidence through the years that
>>>>> there is useful information in the data beyond an I/sigI of 2 or an
>>>>> Rmeas > 100%, but getting people to use this data has been a hard slog. The
>>>>> reason for this reluctance is that the R values of the resulting models
>>>>> are higher. Of course they are higher! That does not mean the models
>>>>> are of poorer quality, only that data with lower signal/noise has been
>>>>> used that was discarded in the models you used to develop your "gut feeling"
>>>>> for the meaning of R.
>>>>>
>>>>> When you change your criteria for selecting data you have to discard
>>>>> your old notions about the acceptable values of empirical quality measures.
>>>>> You have to either normalize your measure, as Phil Jeffrey recommends, by
>>>>> ensuring that you calculate your R's with the same reflections, or make
>>>>> objective measures of map quality.
>>>>>
>>>>> Dale Tronrud
>>>>>
>>>>> P.S. It is entirely possible that refining a model to a very optimistic
>>>>> resolution cutoff and calculating the map to a lower resolution might be
>>>>> better than throwing out the data altogether.
>>>>>
>>>>> On 5/1/2012 10:34 AM, Kendall Nettles wrote:
>>>>>> I have seen dramatic improvements in maps and behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think it would have been publishable otherwise.
>>>>>> Kendall
>>>>>>
>>>>>> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>>>>>>
>>>>>>> On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans<pre(a)mrc-lmb.cam.ac.uk> wrote:
>>>>>>>> Are anisotropic cutoff desirable?
>>>>>>> is there a peer-reviewed publication - perhaps from Acta
>>>>>>> Crystallographica - which describes precisely why scaling or
>>>>>>> refinement programs are inadequate to ameliorate the problem of
>>>>>>> anisotropy, and argues why the method applied in Strong et al. (2006)
>>>>>>> satisfies this need?
>>>>>>>
>>>>>>> -Bryan
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/phenixbb
_______________________________________________
phenixbb mailing list
phenixbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/phenixbb
Re: [cctbxbb] Boost Python 1.56
by Billy Poon
Hi James,
I agree with Graeme's tests and I would add
cctbx_regression.test_nightly
That command is a shortcut for running the test modules for libtbx,
boost_adaptbx, scitbx, cctbx, iotbx, dxtbx, and smtbx. The mmtbx module is
also tested if chem_data is available. Should we add rstbx to the shortcut
(cctbx_project/cctbx/command_line/cctbx_test_nightly.py)?
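As a sketch of what such a shortcut amounts to (a hypothetical function, not the actual cctbx_test_nightly.py), the module list with its optional additions might look like:

```python
def modules_to_test(chem_data_available, include_rstbx=False):
    """Collect the test modules the nightly shortcut would run.

    mmtbx is only tested when chem_data is available; rstbx is the
    proposed optional addition from this thread.
    """
    modules = ["libtbx", "boost_adaptbx", "scitbx", "cctbx",
               "iotbx", "dxtbx", "smtbx"]
    if chem_data_available:
        modules.append("mmtbx")
    if include_rstbx:
        modules.append("rstbx")
    return modules
```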
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
On Thu, Apr 6, 2017 at 1:37 AM, <richard.gildea(a)diamond.ac.uk> wrote:
> I've made a start on transcribing this document here:
>
> https://github.com/cctbx/cctbx_project/wiki/cctbx-Developer-Guidance
>
> It probably still needs cleaning up a bit (e.g. I couldn't figure out
> quickly how to do 3-level list nesting, 1.a.i etc.) and updating to reflect
> current practice (e.g. git not svn).
>
> Cheers,
>
> Richard
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
>
> ________________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org]
> on behalf of Pavel Afonine [pafonine(a)lbl.gov]
> Sent: 06 April 2017 09:07
> To: cctbx mailing list
> Subject: Re: [cctbxbb] Boost Python 1.56
>
> Hi Graeme,
>
> hm.. this is a good question. We've been through back-and-forth
> iterations of editing this file and I think the latest I have is from
> Paul. But I can't find a non-PDF version of it. Paul: do you have an
> editable version of this file?
>
> Thanks,
> Pavel
>
> On 4/6/17 13:45, Graeme.Winter(a)diamond.ac.uk wrote:
> > Hi Pavel
> >
> > These all seem sensible
> >
> > If you have the original non pdf document it may be easier to transcribe
> this over..
> >
> > I also note that it lacks the actual detail on how to run tests! However
> would be happy to add this once on wiki
> >
> > Best wishes Graeme
> >
> > On 6 Apr 2017, at 04:00, Pavel Afonine <pafonine(a)lbl.gov> wrote:
> >
> > Not sure if that answers your questions, but once upon a time we here at
> > Berkeley tried to write some sort of document that was supposed to answer
> > questions like this. Attached. By no means is it complete, up-to-date, etc.,
> > but it might be worth reading for anyone who contributes to cctbx. (I'm not
> > even sure I'm sending the latest version.)
> > Unfortunately, nobody bothered to put it in some central place.
> >
> > Pavel
> >
> > On 4/6/17 10:51, James Holton wrote:
> > Hey Billy,
> >
> > On a related note. How do I run these regression tests before committing
> something into Git? Is there a document on dials regression testing I can't
> find?
> >
> > -James
> >
> > On Apr 5, 2017, at 3:38 PM, Billy Poon <bkpoon(a)lbl.gov> wrote:
> >
> > Hi all,
> >
> > I tested Boost 1.56 on our buildbot servers and got some new test
> failures with
> >
> > cctbx_project/scitbx/array_family/boost_python/tst_flex.py
> > cctbx_project/scitbx/random/tests/tst_random.py
> >
> > The full log for CentOS 6 can be found at
> >
> > http://cci-vm-6.lbl.gov:8010/builders/phenix-nightly-intel-linux-2.6-x86_64-centos6/builds/601/steps/test%20cctbx_regression.test_nightly/logs/stdio
> >
> > It looks like the errors are related to random number generation. For a
> given seed, would the sequence of numbers change when Boost is changed? I
> did a diff between Boost 1.56 and the current Boost and could not see any
> changes that immediately stood out as being related to random numbers.
> >
> > Are these tests failing for others as well?
> >
> > --
> > Billy K. Poon
> > Research Scientist, Molecular Biophysics and Integrated Bioimaging
> > Lawrence Berkeley National Laboratory
> > 1 Cyclotron Road, M/S 33R0345
> > Berkeley, CA 94720
> > Tel: (510) 486-5709
> > Fax: (510) 486-5909
> > Web: https://phenix-online.org
> >
> > On Wed, Apr 5, 2017 at 8:12 AM, Charles Ballard <charles.ballard(a)stfc.ac.uk> wrote:
> > FYI, we (CCP4) have been using 1.56 for building cctbx/phaser/dials for
> the last while with no issues. Don't know about 1.60, but 1.59 causes
> issues with the boost python make_getter and make_setter (initialisation of
> none const reference if the passed type is a temporary).
> >
> > Charles
> >
> > On 3 Apr 2017, at 14:31, Luc Bourhis wrote:
> >
> > Hi all,
> >
> > everybody seemed to agree but then it was proposed to move straight to
> Boost 1.60, and this caused troubles. Could we consider again to move to at
> least 1.56? As far as I can tell, this does not cause any issue and as
> stated one year ago, it would help me and Olex 2.
> >
> > Thanks,
> >
> > Luc
> >
> > On 10 Feb 2016, at 15:17, Nicholas Sauter <nksauter(a)lbl.gov> wrote:
> >
> > Nigel, Billy & Aaron,
> >
> > I completely endorse this move to Boost 1.56. Can we update our build?
> >
> > Nick
> >
> > Nicholas K. Sauter, Ph. D.
> > Computer Staff Scientist, Molecular Biophysics and Integrated Bioimaging
> Division
> > Lawrence Berkeley National Laboratory
> > 1 Cyclotron Rd., Bldg. 33R0345
> > Berkeley, CA 94720
> > (510) 486-5713
> >
> > On Wed, Feb 10, 2016 at 2:41 PM, Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
> > Hi,
> >
> > I have improvements to the smtbx on their way to be committed which
> require Boost version 1.56. This is related to Boost.Threads, whose support
> I re-activated a few months ago on Nick’s request. I need the function
> boost::thread::physical_concurrency which returns the number of physical
> cores on the machine, as opposed to virtual cores when hyperthreading is
> enabled (which it is by default on any Intel machine). That function is not
> available in Boost 1.55 which is the version currently used in the nightly
> tests: it appeared in 1.56.
> >
> > So, would it be possible to move to Boost 1.56? Otherwise, I will need
> to backport that function. Not too difficult but not thrilling.
> >
> > Best wishes,
> >
> > Luc
> >
> >
> > _______________________________________________
> > cctbxbb mailing list
> > cctbxbb(a)phenix-online.org
> > http://phenix-online.org/mailman/listinfo/cctbxbb
> >
> > <cctbx-developer-guidance-08-2015.pdf>
> >
> >
> >
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
> --
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If
> you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> Any opinions expressed within this e-mail are those of the individual and
> not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England
> and Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
Re: [cctbxbb] some thoughts on cctbx and pip
by Luc Bourhis
Hi Tristan,
cctbx could be built to use your ChimeraX python, now that cctbx is moving to Python 3. The option --with-python of the bootstrap script is there for that. The specific environment setup boils down to setting two environment variables: LIBTBX_BUILD, and either LD_LIBRARY_PATH on Linux, PATH on Win32, or DYLD_LIBRARY_PATH on macOS. If you work within a framework such as ChimeraX, it should not be difficult to ensure those two variables are set.
Best wishes,
Luc
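Concretely, the recipe Luc sketches might look like the following; the paths are illustrative and the snippet is environment configuration, not a tested install:

```shell
# Build cctbx against an external interpreter (e.g. the one shipped with
# ChimeraX), then export the two variables the dispatchers normally set.
python bootstrap.py --with-python=/opt/chimerax/bin/python3.7
export LIBTBX_BUILD=/path/to/cctbx/build
# PATH on Win32, DYLD_LIBRARY_PATH on macOS:
export LD_LIBRARY_PATH="$LIBTBX_BUILD/lib:$LD_LIBRARY_PATH"
```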
> On 23 Aug 2019, at 19:02, Tristan Croll <tic20(a)cam.ac.uk> wrote:
>
> To add my two cents on this: probably the second-most common question I've had about ISOLDE's implementation is, "why didn't you use CCTBX?". The honest answer to that is, "I didn't know how."
>
> Still don't, really - although the current developments are rather promising. The problem I've faced is that CCTBX was designed as its own self-contained Python (2.7, until very recently) environment, with its own interpreter and a lot of very specific environment setup. Meanwhile I'm developing ISOLDE in ChimeraX, which is *also* its own self-contained Python (3.7) environment. To plug one into the other in that form... well, I don't think I'm a good enough programmer to really know where to start.
>
> The move to Conda and a more modular CCTBX architecture should make a lot more possible in that direction. Pip would be even better for me personally (ChimeraX can install directly from the PyPI, but doesn't interact with Conda) - but I understand pretty well the substantial challenge that would amount to (not least being that the PyPI imposes a limit - around 100MB from memory? - on the size of an individual package).
>
> Best regards,
>
> Tristan
>
> On 2019-08-23 09:28, Luc Bourhis wrote:
>> Hi Graeme,
>> Yes, I know. But "black" is a program doing a very particular task
>> (code formatting, from the top of my head). Requiring a wrapper
>> for python itself is another level. But ok, I think I am mellowing to
>> the idea after all! Talking with people around me, and extrapolating,
>> I would bet that, right now, a great majority of people interested by
>> cctbx in pip have already used the cctbx, so they know about the
>> Python wrapper, and they would not be too sanguine about that. My
>> concern is for the future, when pip will be the first time some people
>> use cctbx. Big fat warning notices on PyPI page and a better error
>> message when cctbx fails because LIBTBX_BUILD is not set would be
>> needed but that could be all right.
>> If we do a pip installer, we should aim at a minimal install: cctbx,
>> iotbx and their dependencies, and that’s it.
>> Best wishes,
>> Luc
>>> On 23 Aug 2019, at 07:17, Graeme.Winter(a)Diamond.ac.uk <Graeme.Winter(a)diamond.ac.uk> wrote:
>>> Without discussing the merits of this or whether we _choose_ to make the move to supporting PIP, I am certain it would be _possible_ - many other packages make dispatcher scripts when you pip install them e.g.
>>> Silver-Surfer rescale_f2 :) $ which black; cat $(which black)
>>> /Library/Frameworks/Python.framework/Versions/3.6/bin/black
>>> #!/Library/Frameworks/Python.framework/Versions/3.6/bin/python3.6
>>> # -*- coding: utf-8 -*-
>>> import re
>>> import sys
>>> from black import main
>>> if __name__ == '__main__':
>>>     sys.argv[0] = re.sub(r'(-script\.pyw?|\.exe)?$', '', sys.argv[0])
>>>     sys.exit(main())
>>> So we _could_ work around the absence of LIBTBX_BUILD etc. in the system. Whether or not we elect to do the work is a different question, and it seems clear that here are very mixed opinions on this.
>>> Best wishes Graeme
>>> On 23 Aug 2019, at 01:21, Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
>>> Hi,
>>> Even if we managed to ship the boost dynamic libraries with pip, it would still not be pip-like, as we would still need our python wrappers to set LIBTBX_BUILD and LD_LIBRARY_PATH. Normal pip packages work with the standard python exe. LD_LIBRARY_PATH, we could get around that by changing the way we compile, using -Wl,-R, which is the runtime equivalent of build time -L. That's a significant change that would need to be tested. But there is no way around setting LIBTBX_BUILD right now. Leaving that to the user is horrible. Perhaps there is a way to hack libtbx/env_config.py so that we can hardwire LIBTBX_BUILD in there when pip installs?
>>> Best wishes,
>>> Luc
>>> On 16 Aug 2019, at 22:47, Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
>>> Hi,
>>> I did look into that many years ago, and even toyed with building a pip installer. What stopped me is the exact conclusion you reached too: the user would not have the pip experience he expects. You are right that it is a lot of effort but is it worth it? Considering that remark, I don’t think so. Now, Conda was created specifically to go beyond pip pure-python-only support. Since cctbx has garnered support for Conda, the best avenue imho is to go the extra length to have a package on Anaconda.org, and then to advertise it hard to every potential user out there.
>>> Best wishes,
>>> Luc
>>> On 16 Aug 2019, at 21:45, Aaron Brewster <asbrewster(a)lbl.gov> wrote:
>>> Hi, to avoid clouding Dorothee's documentation email thread, which I think is a highly useful enterprise, here's some thoughts about putting cctbx into pip. Pip doesn't install non-python dependencies well. I don't think boost is available as a package on pip (at least the package version we use). wxPython4 isn't portable through pip (https://wiki.wxpython.org/How%20to%20install%20wxPython#Installing_wxPython…). MPI libraries are system dependent. If cctbx were a pure python package, pip would be fine, but cctbx is not.
>>> All that said, we could build a manylinux1 version of cctbx and upload it to PyPi (I'm just learning about this). For a pip package to be portable (which is a requirement for cctbx), it needs to conform to PEP513, the manylinux1 standard (https://www.python.org/dev/peps/pep-0513/). For example, numpy is built according to this standard (see https://pypi.org/project/numpy/#files, where you'll see the manylinux1 wheel). Note, the manylinux1 standard is built with Centos 5.11 which we no longer support.
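The portability Aaron describes is carried in the wheel filename itself via PEP 427 tags; a small illustrative parser (not part of pip or cctbx, and ignoring hyphenated distribution names for simplicity) shows where the manylinux tag lives:

```python
def parse_wheel_filename(name):
    """Split a PEP 427 wheel filename into its compatibility tags.

    Format: {dist}-{version}(-{build})?-{python}-{abi}-{platform}.whl
    (Assumes the distribution name itself contains no hyphens.)
    """
    stem = name[: -len(".whl")]
    parts = stem.split("-")
    dist, version = parts[0], parts[1]
    python_tag, abi_tag, platform_tag = parts[-3], parts[-2], parts[-1]
    return dist, version, python_tag, abi_tag, platform_tag
```

For a manylinux1 numpy wheel, the platform tag comes back as `manylinux1_x86_64`, which is exactly what pip matches against the running system.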
>>> There is also a manylinux2010 standard, which is based on Centos 6 (https://www.python.org/dev/peps/pep-0571/). This is likely a more attainable target (note though by default C++11 is not supported on Centos 6).
>>> If we built a manylinuxX version of cctbx and uploaded it to PyPi, the user would need all the non-python dependencies. There's no way to specify these in pip. For example, cctbx requires boost 1.63 or better. The user will need to have it in a place their python can find it, or we could package it ourselves and supply it, similar to how the pip h5py package now comes with an hd5f library, or how the pip numpy package includes an openblas library. We'd have to do the same for any packages we depend on that aren't on pip using the manylinux standards, such as wxPython4.
>>> Further, we need to think about how dials and other cctbx-based packages interact. If pip install cctbx is set up, how does pip install dials work, such that any dials shared libraries can find the cctbx libraries? Can shared libraries from one pip package link against libraries in another pip package? Would each package need to supply its own boost? Possibly this is well understood in the pip field, but not by me :)
>>> Finally, there's the option of providing a source pip package. This would require the full compiler toolchain for any given platform (macOS, linux, windows). These are likely available for developers, but not for general users.
>>> Anyway, these are some of the obstacles. Not saying it isn't possible, it's just a lot of effort.
>>> Thanks,
>>> -Aaron
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb