Search results for query "look through"
- 520 messages

Re: [phenixbb] conversion from h3 to r3 with phenix apps
by Green, Todd
Thanks for taking the time to look at this, Peter and Vaheh, and others for that matter. I have processed multiple datasets in P1, P222, R3, and P23. I'll try to compile something from the log files to submit with further questions that I have. I think it is possible that some of my datasets could be twinned; I'm certainly looking into it. xtriage doesn't seem to think so for the couple of datasets that I have run through it so far, although in some of the lower-symmetry space groups there are a few twin laws which indicate so; xtriage seems to attribute it to the orientation of NCS. I'm also doing molecular replacement in each of the space groups.
One of the first red flags for me in regard to it not being cubic (at least for one of the datasets) was that if I processed two separate wedges of data from the same crystal not using the same indexing, then tried to merge the datasets together, the R-merge would be 0.4+. If I used xtriage to determine an appropriate reindexing operator and then merged the data, the R-merge would be 0.08. I didn't think alternate indexing was possible for the cubic space group (other than for the assignment of +/- reflections). So even though in my case it is Se-Met data, I didn't think that a difference of 30+ percent between alternative indexings could be possible. I thought this might indicate that I didn't have a true P23 space group. Is my thinking correct here?
Also, in some of the datasets I have alternating lattice lines of heavy and light reflections. Thus, without carefully selecting spots, the light reflections will not be chosen and the wrong lattice will be assigned (i.e. the lower-density spots get skipped). It goes without saying that some of the datasets are not optimal, but it's what I have at the moment. Are the rows of light reflections typically indicative of pseudo-symmetry? I believe that I will have 20 copies of a hetero-complex in the cubic setting and 120 in orthorhombic, which should, I believe, pack similarly to the cubic crystal.
thanks in advance-
Todd
-----Original Message-----
From: phenixbb-bounces(a)phenix-online.org on behalf of Peter Zwart
Sent: Fri 8/10/2007 5:24 PM
To: PHENIX user mailing list
Subject: Re: [phenixbb] conversion from h3 to r3 with phenix apps
Hi Todd,
This table tells me that the data in P1 doesn't merge very well and that
the limits for what is reasonable are maybe set too tight for your case.
Try using the keyword tanh_location=0.11 and see what happens.
Just looking at the table, though, there seem to be a number of options
to consider:
P1: one unused symmetry operator present with an R value equal to 10%. Not a
good sign; the sg is too low.
P432: one operator has an R value of 47%. Not a good sign; the sg is too high.
A couple of flavours of R3.
| R 3 :h (x+z,y-z,-x+y+z)  | 0.148 | 0.148 | 0.256 | 0.103 |
| R 3 :h (-x+y+z,x+z,y-z)  | 0.153 | 0.153 | 0.256 | 0.103 |
| R 3 :h (y-z,-x+y+z,x+z)  | 0.164 | 0.164 | 0.256 | 0.103 |
| R 3 :R                   | 0.169 | 0.169 | 0.256 | 0.103 |
From the last column it is clearly seen that an operator is present
with an R value equal to 10% but is not used in the point group. The
point group might be too low.
The flavours of R32 don't work, as an operator is used that has a high R
value:
| R 3 2 :h (-x+y+z,x+z,y-z) | 0.484 | 0.489 | 0.109 | 0.103 |
| R 3 2 :h (y-z,-x+y+z,x+z) | 0.486 | 0.492 | 0.109 | 0.103 |
These options are nice(ish):
| P 2 3            | 0.134 | 0.169 | 0.478 | 0.478 | |
| P 2 2 2          | 0.109 | 0.110 | 0.481 | 0.478 | |
| P 4 2 2 (b,c,a)  | 0.181 | 0.486 | 0.480 | 0.478 | |
Do you have any indication that your data is twinned?
Peter
Green, Todd wrote:
>
> I'm actually not sure that it's not orthorhombic. I know looking at
> the cell lengths and angles makes the cubic cell very inviting, and
> I have been working with that as a possibility. I do have a real
> good suspicion that I have a pseudo-cubic cell. I can do molecular
> replacement in the cubic space group but am missing some density that
> I think should be there. I wanted to do molecular replacement in the
> rhombohedral setting, and ultimately the lower-symmetry space group too,
> and see if I pick that density up. This is why I wanted to know how to
> convert the h3 to r3 setting. The h3 cell is a nightmare for my computer!
>
> As a side note, not all of the crystals process in the primitive
> lattice, but all process as rhombohedral. That probably means that I
> have two different types of crystals more than anything else, etc.
> etc. Cross-crystal averaging is another reason I wanted to figure
> out the setting difference.
>
> I have previously run one of my datasets processed in p1 thru Xtriage,
> it said:
>
> Exploring higher metric symmetry
>
> The point group of data as dictated by the space group is P 1
> the point group in the niggli setting is P 1
> The point group of the lattice is P 4 3 2
> A summary of R values for various possible point groups follow.
>
> ------------------------------------------------------------------------------------------------
> | Point group                | mean R_used | max R_used | mean R_unused | min R_unused | choice |
> ------------------------------------------------------------------------------------------------
> | P 1                        | None  | None  | 0.362 | 0.103 | <--- |
> | P 2 3                      | 0.134 | 0.169 | 0.478 | 0.478 |      |
> | P 4                        | 0.152 | 0.488 | 0.377 | 0.164 |      |
> | C 1 2 1 (z,x+y,-x+y)       | 0.489 | 0.489 | 0.337 | 0.103 |      |
> | P 4 2 2 (c,a,b)            | 0.266 | 0.481 | 0.481 | 0.478 |      |
> | P 1 1 2                    | 0.110 | 0.110 | 0.221 | 0.103 |      |
> | P 2 1 1                    | 0.107 | 0.107 | 0.290 | 0.110 |      |
> | R 3 2 :R                   | 0.481 | 0.483 | 0.109 | 0.103 |      |
> | C 1 2 1 (x+y,-x+y,z)       | 0.492 | 0.492 | 0.269 | 0.103 |      |
> | P 2 2 2                    | 0.109 | 0.110 | 0.481 | 0.478 |      |
> | C 2 2 2 (x-y,x+y,z)        | 0.190 | 0.478 | 0.339 | 0.103 |      |
> | P 4 (c,a,b)                | 0.454 | 0.476 | 0.272 | 0.110 |      |
> | C 1 2 1 (x+y,z,x-y)        | 0.483 | 0.483 | 0.351 | 0.103 |      |
> | C 1 2 1 (-x+y,z,x+y)       | 0.486 | 0.486 | 0.351 | 0.103 |      |
> | P 1 2 1                    | 0.103 | 0.103 | 0.343 | 0.110 |      |
> | R 3 2 :h (x+z,y-z,-x+y+z)  | 0.488 | 0.492 | 0.109 | 0.103 |      |
> | P 4 2 2 (b,c,a)            | 0.181 | 0.486 | 0.480 | 0.478 |      |
> | R 3 :h (x+z,y-z,-x+y+z)    | 0.148 | 0.148 | 0.256 | 0.103 |      |
> | C 2 2 2 (z,x-y,x+y)        | 0.426 | 0.489 | 0.309 | 0.110 |      |
> | R 3 2 :h (-x+y+z,x+z,y-z)  | 0.484 | 0.489 | 0.109 | 0.103 |      |
> | C 2 2 2 (-x+y,z,x+y)       | 0.386 | 0.486 | 0.327 | 0.110 |      |
> | P 4 2 2                    | 0.243 | 0.492 | 0.386 | 0.169 |      |
> | R 3 :h (-x+y+z,x+z,y-z)    | 0.153 | 0.153 | 0.256 | 0.103 |      |
> | R 3 :h (y-z,-x+y+z,x+z)    | 0.164 | 0.164 | 0.256 | 0.103 |      |
> | P 4 3 2                    | 0.362 | 0.492 | None  | None  |      |
> | C 1 2 1 (x-y,x+y,z)        | 0.478 | 0.478 | 0.269 | 0.103 |      |
> | R 3 2 :h (y-z,-x+y+z,x+z)  | 0.486 | 0.492 | 0.109 | 0.103 |      |
> | R 3 :R                     | 0.169 | 0.169 | 0.256 | 0.103 |      |
> | P 4 (b,c,a)                | 0.419 | 0.475 | 0.290 | 0.110 |      |
> | C 1 2 1 (z,x-y,x+y)        | 0.481 | 0.481 | 0.337 | 0.103 |      |
> ------------------------------------------------------------------------------------------------
>
> R_used: mean and maximum R value for symmetry operators *used* in this
> point group
> R_unused: mean and minimum R value for symmetry operators *not used*
> in this point group
> The likely point group of the data is: P 1
>
> Possible space groups in this point group are:
> Unit cell: (375.144, 375.711, 377.723, 90.002, 90.034, 90.094)
> Space group: P 1 (No. 1)
>
> what does this say to you?
>
> Thanks in advance-
> Todd
>
>
>
>
> -----Original Message-----
> From: phenixbb-bounces(a)phenix-online.org on behalf of Peter Zwart
> Sent: Fri 8/10/2007 4:35 PM
> To: PHENIX user mailing list
> Subject: Re: [phenixbb] conversion from h3 to r3 with phenix apps
>
> Hi Todd,
>
> Are you sure this is not cubic? You could run xtriage and find out
> relatively easily.
>
> I suggest you give phenix.xmanip a try for reindexing or try to use the
> following one-liner:
>
> iotbx.reflection_file_converter data.sca --change-of-basis=to_niggli_cell --sca=niggli.sca
>
> Cheers
>
> Peter
>
>
>
> Green, Todd wrote:
> >
> > Hello all,
> >
> > I have what I believe to be a rhombohedral crystal that has an
> > insanely large cell in the hexagonal setting:
> >
> > 533.026 533.026 652.887 90.000 90.000 120.000
> >
> > and a comparatively modest cell in the rhombohedral setting:
> >
> > 377 377 377 90 90 90
> >
> > I should be able to easily reindex to the smaller cell in scalepack,
> > but for some reason I'm not getting it to work correctly. Rather than
> > struggle further on a Friday afternoon, I figured that I'd give a
> > phenix app a try. I assume xtriage can do this; can someone point me
> > in the correct direction?
> >
> > thanks in advance-
> > Todd
> >
> > ------------------------------------------------------------------------
> >
> > _______________________________________________
> > phenixbb mailing list
> > phenixbb(a)phenix-online.org
> > http://www.phenix-online.org/mailman/listinfo/phenixbb
> >
>
>
> This email was scanned with Mcafee's Anti-Virus appliance, but this
> is no guarantee that no virus exists. You are asked to make sure you
> have virus protection and that it is up to date.
>
>
>
> ------------------------------------------------------------------------
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
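Peter's reasoning in this thread (a used operator with a high R value means the symmetry is too high; an unused operator with a low R value means it is too low) can be sketched in a few lines. This is an illustrative toy, not xtriage's actual scoring, and the thresholds are invented for the example:

```python
def judge_point_group(r_used, r_unused, r_good=0.15, r_bad=0.40):
    """Toy version of the used/unused R-value test from this thread.

    r_used   -- R values of symmetry operators used by the point group
    r_unused -- R values of operators present in the lattice but unused
    Thresholds r_good/r_bad are illustrative, not xtriage defaults.
    """
    too_high = bool(r_used) and max(r_used) > r_bad
    too_low = bool(r_unused) and min(r_unused) < r_good
    if too_high and too_low:
        return "inconsistent: symmetry both too high and too low"
    if too_high:
        return "symmetry too high"
    if too_low:
        return "symmetry too low"
    return "consistent"

# Values taken from the xtriage table quoted above:
print(judge_point_group([], [0.362, 0.103]))       # P 1     -> symmetry too low
print(judge_point_group([0.362, 0.492], []))       # P 4 3 2 -> symmetry too high
print(judge_point_group([0.134, 0.169], [0.478]))  # P 2 3   -> consistent
```

With these toy thresholds, P23 is the only one of the three examples that is internally consistent, which matches Peter's "nice(ish)" verdict.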
17 years, 10 months

Re: [cctbxbb] bootstrap.py build on Ubuntu
by Billy Poon
Hi David,
I don't have a fix yet, but here is a workaround. It seems like setup.py is
looking for libz.so instead of libz.so.1, so you can fix the issue by
making a symbolic link for libz.so in /usr/lib64.
sudo ln -s /usr/lib64/libz.so.1 /usr/lib64/libz.so
This requires root access, so that's why it's just a workaround.
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Tel: (510) 486-5709
Fax: (510) 486-5909
Web: https://phenix-online.org
On Sat, Jun 11, 2016 at 5:05 PM, Billy Poon <bkpoon(a)lbl.gov> wrote:
> Hi David,
>
> Sorry it took so long! Setting up all the virtual machines was a time sink
> and getting things to work on 32-bit CentOS 5 and Ubuntu 12.04 was a little
> tricky.
>
> It looks like Ubuntu 16.04 moved its libraries around. I used apt-get to
> install libz-dev and lib64z1 (the 64-bit library). There is a libz.so.1
> file in /lib/x86_64-linux-gnu and in /usr/lib64.
>
> I have not gotten it to work yet, but I'm pretty sure this is the issue.
> I'll have to double-check 12.04 and 14.04.
>
> As for Pillow, I did test it a few months ago, but I remember there being
> API changes that will need to be fixed.
>
> --
> Billy K. Poon
> Research Scientist, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Road, M/S 33R0345
> Berkeley, CA 94720
> Tel: (510) 486-5709
> Fax: (510) 486-5909
> Web: https://phenix-online.org
>
> On Sat, Jun 11, 2016 at 2:04 AM, David Waterman <dgwaterman(a)gmail.com>
> wrote:
>
>> Hi Billy,
>>
>> I'm replying on this old thread because I have finally got round to
>> trying a bootstrap build for DIALS out again on Ubuntu, having waited for
>> updates to the dependencies and updating the OS to 16.04.
>>
>> The good news is, the build ran through fine. This is the first time I've
>> had a bootstrap build complete without error on Ubuntu, so thanks to you
>> and the others who have worked on improving the build in the last few
>> months!
>>
>> The bad news is I'm getting two failures in the DIALS tests:
>>
>> dials/test/command_line/tst_export_bitmaps.py
>> dials_regression/test.py
>>
>> Both are from PIL
>>
>> File
>> "/home/fcx32934/dials_test_build/base/lib/python2.7/site-packages/PIL/Image.py",
>> line 401, in _getencoder
>> raise IOError("encoder %s not available" % encoder_name)
>> IOError: encoder zip not available
>>
>> Indeed, from base_tmp/imaging_install_log it looks like PIL is not
>> configured properly
>>
>> --------------------------------------------------------------------
>> PIL 1.1.7 SETUP SUMMARY
>> --------------------------------------------------------------------
>> version 1.1.7
>> platform linux2 2.7.8 (default_cci, Jun 10 2016, 16:04:32)
>> [GCC 5.3.1 20160413]
>> --------------------------------------------------------------------
>> *** TKINTER support not available
>> *** JPEG support not available
>> *** ZLIB (PNG/ZIP) support not available
>> *** FREETYPE2 support not available
>> *** LITTLECMS support not available
>> --------------------------------------------------------------------
>>
>> Any ideas? I have zlib headers but perhaps PIL can't find them.
>>
>> On a related note, the free version of PIL has not been updated for
>> years. The replacement Pillow has started to diverge. I first noticed this
>> when Ubuntu 16.04 gave me Pillow 3.1.2 and my cctbx build with the system
>> python produced failures because it no longer supports certain deprecated
>> methods from PIL. I worked around that in r24587, but these things are a
>> losing battle. Is it time to switch cctbx over to Pillow instead of PIL?
>>
>> Cheers
>>
>> -- David
>>
>> On 7 January 2016 at 18:12, Billy Poon <bkpoon(a)lbl.gov> wrote:
>>
>>> Hi all,
>>>
>>> Since wxPython was updated to 3.0.2, I have been thinking about updating
>>> the other GUI-related packages to more recent versions. I would probably
>>> update to the latest, stable version that does not involve major changes to
>>> the API so that backwards compatibility is preserved. Let me know if that
>>> would be helpful and I can prioritize the migration and testing.
>>>
>>> --
>>> Billy K. Poon
>>> Research Scientist, Molecular Biophysics and Integrated Bioimaging
>>> Lawrence Berkeley National Laboratory
>>> 1 Cyclotron Road, M/S 33R0345
>>> Berkeley, CA 94720
>>> Tel: (510) 486-5709
>>> Fax: (510) 486-5909
>>> Web: https://phenix-online.org
>>>
>>> On Thu, Jan 7, 2016 at 9:30 AM, Nicholas Sauter <nksauter(a)lbl.gov>
>>> wrote:
>>>
>>>> David,
>>>>
>>>> I notice that the Pango version, 1.16.1, was released in 2007, so
>>>> perhaps it is no surprise that the latest Ubuntu does not support it.
>>>> Maybe this calls for stepping forward the Pango version until you find one
>>>> that works. I see that the latest stable release is 1.39.
>>>>
>>>> This would be valuable information for us. Billy Poon in the Phenix
>>>> group is supporting the Phenix GUI, so it might be advisable for him to
>>>> update the Pango version in the base installer.
>>>>
>>>> Nick
>>>>
>>>> Nicholas K. Sauter, Ph. D.
>>>> Computer Staff Scientist, Molecular Biophysics and Integrated
>>>> Bioimaging Division
>>>> Lawrence Berkeley National Laboratory
>>>> 1 Cyclotron Rd., Bldg. 33R0345
>>>> Berkeley, CA 94720
>>>> (510) 486-5713
>>>>
>>>> On Thu, Jan 7, 2016 at 8:54 AM, David Waterman <dgwaterman(a)gmail.com>
>>>> wrote:
>>>>
>>>>> Hi again
>>>>>
>>>>> Another data point: I just tried this on a different Ubuntu machine,
>>>>> this time running 14.04. In this case pango installed just fine. In fact
>>>>> all other packages installed too and the machine is now compiling cctbx.
>>>>>
>>>>> I might have enough for comparison between the potentially working
>>>>> 14.04 and failed 15.04 builds to figure out what is wrong in the second
>>>>> case.
>>>>>
>>>>> Cheers
>>>>>
>>>>> -- David
>>>>>
>>>>> On 7 January 2016 at 09:56, David Waterman <dgwaterman(a)gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi folks
>>>>>>
>>>>>> I recently tried building cctbx+dials on Ubuntu 15.04 following the
>>>>>> instructions here:
>>>>>> http://dials.github.io/documentation/installation_developer.html
>>>>>>
>>>>>> This failed during installation of pango-1.16.1. Looking
>>>>>> at pango_install_log, I see the command that failed was as follows:
>>>>>>
>>>>>> gcc -DHAVE_CONFIG_H -I. -I. -I../..
>>>>>> -DSYSCONFDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/etc\"
>>>>>> -DLIBDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/lib\"
>>>>>> -DG_DISABLE_CAST_CHECKS -I../.. -DG_DISABLE_DEPRECATED
>>>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include
>>>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include/freetype2 -g -O2
>>>>>> -Wall -MT fribidi.lo -MD -MP -MF .deps/fribidi.Tpo -c fribidi.c -fPIC
>>>>>> -DPIC -o .libs/fribidi.o
>>>>>> In file included from fribidi.h:31:0,
>>>>>> from fribidi.c:28:
>>>>>> fribidi_config.h:1:18: fatal error: glib.h: No such file or directory
>>>>>>
>>>>>> The file glib.h appears to be in base/include/glib-2.0/, however this
>>>>>> directory was not explicitly included in the command above, only its
>>>>>> parent. This suggests a configuration failure in pango to me. Taking a look
>>>>>> at base_tmp/pango-1.16.1/config.log, I see what look like the relevant
>>>>>> lines:
>>>>>>
>>>>>> configure:22227: checking for GLIB
>>>>>> configure:22235: $PKG_CONFIG --exists --print-errors "$GLIB_MODULES"
>>>>>> configure:22238: $? = 0
>>>>>> configure:22253: $PKG_CONFIG --exists --print-errors "$GLIB_MODULES"
>>>>>> configure:22256: $? = 0
>>>>>> configure:22304: result: yes
>>>>>>
>>>>>> but this doesn't tell me very much. Does anyone have any suggestions
>>>>>> as to how I might proceed?
>>>>>>
>>>>>> Many thanks
>>>>>>
>>>>>> -- David
>>>>>>
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> cctbxbb mailing list
>>>>> cctbxbb(a)phenix-online.org
>>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> cctbxbb mailing list
>>>> cctbxbb(a)phenix-online.org
>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>
>>>>
>>>
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>
>>>
>>
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>
>>
>
9 years

Re: [phenixbb] measuring the angle between two DNA duplexes
by Phil Evans
One way that I've used for alpha-helices is to start with an ideal model with its axis along, say, z, then get the rotation required to fit the ideal helix to the model. This works even for short helices.
Phil
On 21 Jan 2014, at 09:25, Tim Gruene <tg(a)shelx.uni-ac.gwdg.de> wrote:
> Hi Pavel,
>
> that's the method described in
> http://journals.iucr.org/a/issues/2011/01/00/sc5036/index.html ;-) based
> on the moments of inertia (a computer scientist might name it
> differently). I am not sure, though, that you would get the desired result
> for short helices. E.g. for a helix defined by three atoms, the eigenvector
> would point roughly in the direction of the external phosphates, which
> is far from parallel with the helix axis.
>
> Best,
> Tim
>
> On 01/21/2014 04:20 AM, Pavel Afonine wrote:
>> Hi Ed,
>>
>> interesting idea! Although I was thinking of a tool that is a
>> little more general and a little less context-dependent. Say you have
>> two clouds of points that are (thinking in terms of macromolecules) two
>> alpha helices (for instance), and you want to know the angle between the
>> axes of the two helices. How would I approach this?..
>>
>> First, for each helix I would compute a symmetric 3x3 matrix like this:
>>
>> sum(xn-xc)**2       sum(xn-xc)*(yn-yc)  sum(xn-xc)*(zn-zc)
>> sum(xn-xc)*(yn-yc)  sum(yn-yc)**2       sum(yn-yc)*(zn-zc)
>> sum(xn-xc)*(zn-zc)  sum(yn-yc)*(zn-zc)  sum(zn-zc)**2
>>
>> where (xn,yn,zn) is the coordinate of the nth atom, the sum is taken over
>> all atoms, and (xc,yc,zc) is the coordinate of the center of mass.
>>
>> Second, for each of the two matrices I would find its eigenvalues and
>> eigenvectors, and select the eigenvector corresponding to the largest
>> eigenvalue.
>>
>> Finally, the desired angle is the angle between the two eigenvectors
>> found above, which is computed trivially.
>> I think this is a little simpler than finding the best fit for a 3D line.
>>
>> What do you think?
>>
>> Pavel
>>
>>
>> On 1/20/14, 2:14 PM, Edward A. Berry wrote:
>>>
>>>
>>> Pavel Afonine wrote:
>>> . .
>>>
>>>> The underlying procedure would do the following:
>>>> - extract two sets of coordinates of atoms corresponding to two
>>>> provided atom selections;
>>>> - draw two optimal lines (LS fit) passing through the above sets
>>>> of coordinates;
>>>> - compute and report angle between those two lines?
>>>>
>>>
>>> This could be inaccurate for very short helices (admittedly not the
>>> case one usually would be looking for angles), or for determining the axis
>>> of a short portion of a curved helix. A more accurate way to
>>> determine the axis: have a long canonical duplex constructed with its
>>> axis along Z (0,0,1). Superimpose as many residues of that as required
>>> onto the duplex being tested, using only backbone atoms or even only
>>> phosphates. Operate on (0,0,1) with the resulting operator (i.e. take
>>> the third column of the rotation matrix) and use that as a vector
>>> parallel to the axis of the duplex being tested.
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/phenixbb
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>>
>
> --
> Dr Tim Gruene
> Institut fuer anorganische Chemie
> Tammannstr. 4
> D-37077 Goettingen
>
> GPG Key ID = A46BEE1A
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
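Pavel's recipe from this thread (build the matrix of summed coordinate products, take the eigenvector of the largest eigenvalue for each point cloud, then measure the angle between the two eigenvectors) is easy to sketch with numpy. The function names below are illustrative, not a phenix API:

```python
import numpy as np

def principal_axis(coords):
    """Unit eigenvector for the largest eigenvalue of the 3x3 matrix
    Pavel describes above (sums of products of centered coordinates)."""
    xyz = np.asarray(coords, dtype=float)
    centered = xyz - xyz.mean(axis=0)      # subtract the center of mass
    m = centered.T @ centered              # symmetric 3x3 matrix
    eigvals, eigvecs = np.linalg.eigh(m)   # eigenvalues in ascending order
    return eigvecs[:, -1]                  # direction of largest spread

def interaxis_angle_deg(coords1, coords2):
    a1 = principal_axis(coords1)
    a2 = principal_axis(coords2)
    cosang = abs(float(np.dot(a1, a2)))    # an axis has no sign
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Two synthetic "helices": tight coils running along x and along y,
# so the angle between their axes should come out close to 90 degrees.
t = np.linspace(0.0, 10.0, 20)
helix1 = np.column_stack([t, 0.1 * np.sin(t), 0.1 * np.cos(t)])
helix2 = np.column_stack([0.1 * np.sin(t), t, 0.1 * np.cos(t)])
angle = interaxis_angle_deg(helix1, helix2)
print(round(angle, 1))  # close to 90
```

As Tim and Ed note above, this estimate degrades for very short or strongly curved helices, where fitting against a canonical reference is more robust.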
11 years, 5 months

Re: [cctbxbb] Module versioning (was Re: Debian Package)
by Picca Frédéric-Emmanuel
On Sat, 1 Sep 2012 11:35:04 +0200
Radostan Riedel <raybuntu(a)googlemail.com> wrote:
Hello, I came back from a longer holiday than I expected :). It is always good ;)
> > 1. Each and every cctbx developer becomes aware of so-versioning. Let's say a module is at 2.3.4, so we have
> > env.SetVersionInfo(vers="2.3.4")
> > and then a change is made to the C++ code. The developer responsible for that change should then figure out the new c.r.a and then add a comment
> > env.SetVersionInfo(vers="2.3.4", #next release# vers="c.r.a")
> > It would then be easy to automate an edit en-masse before release that would change that line and all its sisters to
> > env.SetVersionInfo(vers="c.r.a")
> > That's basically a formalised version of your proposition in your answer to Johan earlier.
> >
> > 2. Cctbx developers would not care about so-versioning and before a Debian release, the cctbx Debian maintainers would go through each call to SetVersionInfo to set the right c.r.a, based on a careful examination of the commits, complemented by asking questions to the core cctbx developers (or simply open questions on this forum).
> I'd of course like the first extreme better. For starters it would be great if
> everyone feels responsible for the "current" number. I'm not a C++ developer and
> I don't know if that's easy.
In my opinion the first solution would be the best one, also for other
distributions. Indeed this is a "social" problem: developers of the C++
library should learn about ABI/API compatibility, as expected by most
distributions when it comes to doing "long term" support in an
integrated environment. The person who made the change is the only one
who knows the implications of the changes.
Now, during the build of a Debian package we have some tools that can
detect ABI modifications, and the build can fail if a library removes a
symbol without an so-number bump.
So it is a round-trip collaboration: we can check whether an so bump is
required by a build of the future next stable release, identify the
problem, and explain to the person in charge of the C++ library how to
modify the corresponding so number. After a few round trips (releases) I
think people would understand how to deal with those numbers.
>
> > There are relatively few shlibs compared to Python modules. Keeping track of the version of all of the latter by hand would be an enormous amount of work. I don't think we can get it done cheaply with some automatic keyword expansion if we want proper major.minor.patch, or worse, something as involved as so-versioning.
> Maybe it would be good to look at how other projects do this. As a
> Python developer I'm always expecting different APIs when it comes to new
> upstream versions. I checked the policy for Debian and there seems to be nothing
> special in versioning extensions and modules. Maybe Justin can tell us something
> about Gentoo. I'd say we don't need to worry about it for now.
Yes, except if you distribute a Python extension used by other third
parties to build other extensions. A good example is the python-numpy
package. It was a nightmare for distributions before they introduced an
API and ABI number.
The public C API/ABI of this extension now follows a sort of so
number, and when someone builds a package relying on the python-numpy
extension, the dh_numpy helper generates the right binary dependencies
based on the ABI number maintained by the numpy upstream.
There is a dependency on python-numpy-abiX. Example for the scipy package:
Package: python-scipy
Version: 0.10.1+dfsg1-4
Installed-Size: 34405
Maintainer: Debian Python Modules Team <python-modules-team(a)lists.alioth.debian.org>
Architecture: i386
Provides: python2.6-scipy, python2.7-scipy
Depends: python-numpy (>= 1:1.6.1), python-numpy-abi9, python (>= 2.6.6-7~), python (<< 2.8), libamd2.2.0 (>= 1:3.4.0), libblas3 | libblas.so.3 | libatlas3-base, libc6 (>= 2.3.6-6~), libgcc1 (>= 1:4.1.1), libgfortran3 (>= 4.6), liblapack3 | liblapack.so.3 | libatlas3-base, libquadmath0 (>= 4.6), libstdc++6 (>= 4.1.1), libumfpack5.4.0 (>= 1:3.4.0)
Recommends: g++ | c++-compiler, python-dev, python-imaging
So the same thing should be proposed if third-party packages rely on the C
Python extensions of cctbx. But indeed it means you need to care about
the versioning of your public C API/ABI.
Cheers
Frederic
--
GPG public key 4096R/4696E015 2011-02-14
fingerprint = E92E 7E6E 9E9D A6B1 AA31 39DC 5632 906F 4696 E015
uid Picca Frédéric-Emmanuel <picca(a)synchrotron-soleil.fr>
GPG public key 1024D/A59B1171 2009-08-11
fingerprint = 1688 A3D6 F0BD E4DF 2E6B 06AA B6A9 BA6A A59B 1171
uid Picca Frédéric-Emmanuel <picca(a)debian.org>
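The c.r.a bookkeeping discussed above follows the usual libtool current.revision.age rules. A minimal sketch of the bump logic (the function name is hypothetical, not part of the proposed SetVersionInfo API):

```python
def bump_cra(vers, change):
    """Apply libtool-style current.revision.age rules for a new release.

    change: 'impl'    -- source changed, interfaces untouched
            'added'   -- backward-compatible interfaces added
            'removed' -- interfaces removed or changed (ABI break)
    """
    c, r, a = (int(x) for x in vers.split("."))
    if change == "impl":
        return f"{c}.{r + 1}.{a}"     # revision only
    if change == "added":
        return f"{c + 1}.0.{a + 1}"   # new interfaces, old ones still work
    if change == "removed":
        return f"{c + 1}.0.0"         # incompatible: age resets
    raise ValueError(change)

print(bump_cra("2.3.4", "impl"))     # -> 2.4.4
print(bump_cra("2.3.4", "added"))    # -> 3.0.5
print(bump_cra("2.3.4", "removed"))  # -> 3.0.0
```

This is exactly the decision the developer making a C++ change would have to record in the proposed `#next release#` comment.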
12 years, 10 months

Re: [phenixbb] Selecting ellipsoid of data
by Morten Grøftehauge
Kay's suggestion is better than mine, which would have been
http://www.doe-mbi.ucla.edu/~sawaya/anisoscale/
On 16 July 2010 03:16, Frank von Delft <frank.vondelft(a)sgc.ox.ac.uk> wrote:
> Ah.... I was wondering about that: thanks for the pointer!!
>
>
>
> On 15/07/2010 17:38, Kay Diederichs wrote:
>
> Hi Frank,
>
> "such a tool" is at
> http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Aniso_cutoff
>
> where it's meant to be applied to INTEGRATE.HKL which comes out of XDS.
> Doing it this way has the benefit that the statistics printed out by XDS'
> CORRECT (or SCALA/TRUNCATE; there are people who prefer that route) match
> the data you refine against.
>
> HTH,
>
> Kay
>
> Message: 1
> Date: Thu, 15 Jul 2010 06:32:40 +0100
> From: Frank von Delft<frank.vondelft(a)sgc.ox.ac.uk><frank.vondelft(a)sgc.ox.ac.uk>
> To: PHENIX user mailing list<phenixbb(a)phenix-online.org><phenixbb(a)phenix-online.org>
> Subject: [phenixbb] Selecting ellipsoid of data
> Message-ID:<4C3E9D78.5030203(a)sgc.ox.ac.uk> <4C3E9D78.5030203(a)sgc.ox.ac.uk>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi, is there a tool in phenix that allows me to select an ellipsoid of
> data -- specified e.g. by the highest resolutions in three reciprocal
> lattice directions. (Yes, I'm playing with anisotropy, "playing" being
> the operative word.)
>
> phx
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 14 Jul 2010 22:47:30 -0700
> From: "Ralf W. Grosse-Kunstleve"<rwgk(a)cci.lbl.gov> <rwgk(a)cci.lbl.gov>
> To: phenixbb(a)phenix-online.org
> Subject: Re: [phenixbb] Selecting ellipsoid of data
> Message-ID:<201007150547.o6F5lUoL018490(a)cci.lbl.gov><201007150547.o6F5lUoL018490(a)cci.lbl.gov>
> Content-Type: text/plain; charset=us-ascii
>
> Hi Frank,
>
> Hi, is there a tool in phenix that allows me to select an ellipsoid of
> data -- specified e.g. by the highest resolutions in three reciprocal
> lattice directions. (Yes, I'm playing with anisotropy, "playing" being
> the operative word.)
>
>
> I'm not aware of such a tool.
>
> Ralf
>
>
> ------------------------------
>
> Message: 3
> Date: Thu, 15 Jul 2010 07:15:58 +0100
> From: Frank von Delft<frank.vondelft(a)sgc.ox.ac.uk><frank.vondelft(a)sgc.ox.ac.uk>
> To: "Ralf W. Grosse-Kunstleve"<rwgk(a)cci.lbl.gov> <rwgk(a)cci.lbl.gov>,
> PHENIX user mailing
> list<phenixbb(a)phenix-online.org> <phenixbb(a)phenix-online.org>
> Subject: Re: [phenixbb] Selecting ellipsoid of data
> Message-ID:<4C3EA79E.2030503(a)sgc.ox.ac.uk> <4C3EA79E.2030503(a)sgc.ox.ac.uk>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> Hi Ralf
>
> Yeah, I figured. So if I want to use cctbx, where do I start? Just a
> pointer to a) package and b) function where I'll see the syntax.
>
> So the equation for an ellipsoid is x^2/a^2 + y^2/b^2 + z^2/c^2 = 1; so I
> imagine I take each reflection, convert each of h,k,l to 1/reso, and,
> with a = 1/res(a*), I just check whether the above is < 1.
>
> The main thing I still need is to convert h,k,l into orthogonal
> coordinates.... or do I? I suppose I don't, as what I care about is not
> whether it's "really" an ellipsoid, only whether it cuts through Miller
> index space anisotropically.
>
> Hmmmm... I may be able to do it in sftools; but if you can in <1 minute
> give me a link to where to look to get started in cctbx, that would be
> awesome.
>
> (Thanks for listening :)
>
>
>
>
> Hi Frank,
>
>
> Hi, is there a tool in phenix that allows me to select an ellipsoid of
> data -- specified e.g. by the highest resolutions in three reciprocal
> lattice directions. (Yes, I'm playing with anisotropy, "playing" being
> the operative word.)
>
> I'm not aware of such a tool.
>
> Ralf
>
>
>
> _______________________________________________
> phenixbb mailing listphenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
>
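Frank's back-of-envelope test above can be written down directly. Here is a minimal sketch for the simple orthorhombic case, where the reciprocal-space vector of (h,k,l) is just (h/a, k/b, l/c); the function name and cell values are illustrative, not a phenix/cctbx API:

```python
def inside_ellipsoid(hkl, cell, dmin_abc):
    """Ellipsoidal resolution cutoff for an orthorhombic cell.

    cell     -- (a, b, c) direct-cell lengths in Angstrom
    dmin_abc -- resolution limits (Angstrom) along a*, b*, c*
    Implements Frank's test x^2/a^2 + y^2/b^2 + z^2/c^2 < 1 with the
    ellipsoid semi-axes equal to 1/dmin in each direction.
    """
    h, k, l = hkl
    a, b, c = cell
    da, db, dc = dmin_abc
    s = (h / a, k / b, l / c)  # components of 1/d along the axes
    return (s[0] * da) ** 2 + (s[1] * db) ** 2 + (s[2] * dc) ** 2 < 1.0

cell = (50.0, 60.0, 70.0)
# Keep data to 2 A along a* and b*, but only to 3 A along c*:
print(inside_ellipsoid((20, 0, 0), cell, (2.0, 2.0, 3.0)))  # d = 2.5 A  -> True
print(inside_ellipsoid((0, 0, 30), cell, (2.0, 2.0, 3.0)))  # d = 2.33 A -> False
```

For a general (non-orthogonal) cell the 1/d components would have to come from the full reciprocal metric rather than the simple division above, which is what a cctbx unit_cell object would normally provide.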
14 years, 11 months

Re: [cctbxbb] [git/cctbx] master: rename test files, remove them from run_tests (23a4a6fe4)
by Nicholas Devenish
Hi all,
Plethora of additional points in support of pytest:
- Classification: bootstrap dials, then run the tests. All will say "OK"
but in reality only about 1/3 of them ran for various skip reasons. This is
*horribly* misleading and has led to broken builds in the past. I proposed
a patch a while ago to add a skip classification but this was rejected
because some tests (two, I think?) in phenix classified as 'SKIP' instead
of 'OK'. There is a system to classify as 'WARN', but *only* if
phenix_regression is installed, it misclassifies several items in
dials/cctbx (apparently OK showing as WARN is acceptable where OK as SKIP
wasn't) and in any case relies on a horrible whitelist of "text that looks
like warnings but shouldn't classify the test as a warning". In pytest,
things like skip/warn/expected failure are easy (e.g. you can mark tests as
"this is currently expected to fail" rather than leaving your CI^H^Hdaily
build failing permanently)
- Technical debt/NIH: It's one framework for you to learn, sure. But it's N
'frameworks' to learn for the next N developers - and since pytest is very
common, there is a good chance that they'll either know it already, or the
knowledge will be usefully applicable to other projects in the future.
Every single new developer has to go through the WTF-laden process of
learning the 'historical' idiosyncrasies.
- Categorisation: A dumb 'list of tests' is okay until you have e.g. very
long regression tests that don't need to be run as frequently as unit tests
- Documentation: How does something work/how do you do something
unfamiliar? I'll let you compare the documentation for libtbx approx_equal
https://github.com/cctbx/cctbx_project/blob/master/libtbx/test_utils/__init…
and
pytest's approx https://docs.pytest.org/en/latest/builtin.html#pytest.approx
- Maintenance/bit rot: Somebody else writes this documentation, somebody
else finds and fixes the bugs, and somebody else is responsible for making
things better. Something untouched for a decade that people are afraid to
or prevented from improving is not a good thing.
Additionally, I'd want to argue great improvements and benefits in things
like locality, parametrization, granularity (separation of tests),
formality, discoverability, as well as lots of other things, but I think
you get my point. Basically, every possible improvement a formal testing
suite could have over what is effectively a dumb list of files to run.
Python's unittest would have most of these benefits, but with the
disadvantage of a little more boilerplate. Pytest has less boilerplate and
syntax, and for almost equivalent total functionality you only have to
remember a) "call the files test_something.py and the functions
test_something" b) use assert (you can even use libtbx.approx_equal if you
want!).
Nick
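Nick's point about skip/xfail handling refers to standard pytest markers; a minimal sketch (the reasons and test bodies here are invented for illustration):

```python
import pytest

# A test that cannot run in this environment is reported as 's' (skipped),
# not silently counted as OK:
@pytest.mark.skip(reason="requires phenix_regression data")
def test_needs_external_data():
    assert False  # never executed

# A known-broken test is reported as an expected failure ('x') instead of
# leaving the daily build permanently red:
@pytest.mark.xfail(reason="known bug, tracked in the issue tracker")
def test_known_broken():
    assert 2 * 2 == 5
```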
On Sat, Mar 3, 2018 at 11:03 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
> Hi Pavel,
>
> pytest was suggested* by a couple of people in 2015 when I asked about
> unittest. Your recommendation at that time was to go ahead, so that's what I
> did - with the result that in April 2016 the pytest compatibility layer was
> introduced to cctbx and pytest included in the base build, so we can use
> pytests alongside libtbx tests. In our packages we have been using pytests
> for well over 2 years now, and just one month ago converted xia2 to pytest
> completely.
>
> As to "why pytest" I would like to point you to our wiki page "Why
> pytest?" at https://github.com/dials/dials/wiki/Why-pytest%3F which does
> make the case in more detail and includes instructions to enable pytest for
> cctbx modules. However I'm more than happy to demonstrate part of it here
> using your example.
>
> First of all, the example you gave is not quite correct. You are missing
> the import for approx_equal and missed the crucial line in run_tests.py
> that ensures the test is actually run. This is one of the drawbacks of
> libtbx testing: there is no automatic test discovery. And this tends to
> happen quite a lot, cf. https://github.com/dials/dials/issues/506
>
> In pytest your example would look like this:
>
> import pytest
>
> def test_that_2_times_2_is_4():
>     x = 2
>     result = x*x
>     assert result == pytest.approx(4)
>
> which I would argue has the exact same documentation/example value.
>
>
> Let's change the expectation value from 4 to 5, forget everything we know
> about the code, run it and compare the output:
>
>
> $ python tst_multiply.py
> approx_equal eps: 1e-06
> approx_equal multiplier: 10000000000.0
> 4.0 approx_equal ERROR
> 5.0 approx_equal ERROR
>
> Traceback (most recent call last):
>   File "tst_multiply.py", line 10, in <module>
>     exercise()
>   File "tst_multiply.py", line 7, in exercise
>     assert approx_equal(result, 5., 1.e-6)
> AssertionError
>
> *Something* went wrong, it has to do with 1e-06, a very large number (I
> still don't understand what that means), two errors for 4.0 and 5.0, and a
> function called exercise(), which tells me exactly nothing. I have to look
> into the code to understand what went wrong and why. From the output alone
> I don't know what the code is supposed to do.
>
>
> $ pytest
> =================================== test session starts ===================================
> platform linux2 -- Python 2.7.13, pytest-3.1.3, py-1.4.34, pluggy-0.4.0
> rootdir: /dls/tmp/wra62962/directories/lh0UFeF6, inifile:
> plugins: xdist-1.20.1, timeout-1.2.0, forked-0.2, cov-2.5.1
> collected 2 items
>
> test_multiplication.py .F
>
> ======================================== FAILURES =========================================
> ___________________________ test_that_two_times_two_equals_five ___________________________
>
>     def test_that_two_times_two_equals_five():
>         x = 2
>         result = x*x
> >       assert result == pytest.approx(5)
> E       AssertionError: assert 4 == 5 +- 5.0e-06
> E         + where 5 +- 5.0e-06 = <class '_pytest.python.approx'>(5)
> E         + where <class '_pytest.python.approx'> = pytest.approx
>
> test_multiplication.py:11: AssertionError
> =========================== 1 failed, 1 passed in 0.04 seconds ============================
>
>
> Oh look, a test to ensure two times two equals five failed. Apparently it
> compared 4 to 5 +- 5e-06.
> You also see that the other test in the file that I left in (which
> compares 2*2 to 4) worked. You didn't know that from the libtbx-style test
> output.
>
> Now I can fix the test, and pytest allows me to just rerun the failed
> tests, not everything.
> If you want to know how, have a look at
> https://github.com/dials/dials/wiki/pytest which contains a lot more information about running
> pytest and converting tests.
>
> If you don't want to use it that is fine, too. Thanks to the compatibility
> layer you can still use libtbx.run_tests_parallel, and you will still get
> the more useful assertion messages. I would encourage you to try it though.
> You might find it useful.
>
> -Markus
>
>
> * And rightly so; pytest requires much less boilerplate, produces cleaner
> code, and is overall just better - Thank you, Luc. Thank you, Jamasi.
> PS: On fable specifically: thanks to the conversion I already found and
> fixed one broken test which didn't fail, and another test with race
> conditions.
>
> ________________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org]
> on behalf of Pavel Afonine [pafonine(a)lbl.gov]
> Sent: Saturday, March 03, 2018 07:13
> To: cctbx mailing list; Winter, Graeme (DLSLtd,RAL,LSCI)
> Subject: Re: [cctbxbb] [git/cctbx] master: rename test files, remove them
> from run_tests (23a4a6fe4)
>
> I'd say at least because:
>
> - the first 10+ years of CCTBX did not use pytest. AFAIK, the first
> attempt was by our postdoc Youval Dar back in 2015 (correct me if I'm
> wrong). I feel adding different testing styles only makes the
> code-base inconsistent (very much like mixing flex and np arrays isn't
> cool, in my opinion!).
>
> - originally tests were considered as simple usage examples for
> functionalities they are testing; this is because writing and (most
> importantly!) maintaining the proper documentation was not provisioned.
> A simple test like
>
> def exercise():
>   """ Make sure 2*2 is 4. """
>   x = 2.
>   result = x*x
>   assert approx_equal(result, 4., 1.e-6)
>
> if(__name__ == "__main__"):
>   exercise()
>   print "OK"
>
> is much easier to grasp than the same test cluttered with framework
> machinery (which, to add to the trouble, one needs to learn in the first
> place!).
>
> All the best,
> Pavel
>
> On 3/3/18 14:36, Graeme.Winter(a)diamond.ac.uk wrote:
> > What’s bad about pytest?
> >
> >
> >
> >> On 3 Mar 2018, at 02:26, Pavel Afonine <pafonine(a)lbl.gov> wrote:
> >>
> >> Just to make sure: you are converting to use pytest this particular
> codes (fable), correct?
> >> Pavel
> >> P.S.: I'm allergic to pytest.
> >>
> >>
> >> On 3/3/18 07:46, CCTBX commit wrote:
> >>> This in preparation for pytestification.
> >> _______________________________________________
> >> cctbxbb mailing list
> >> cctbxbb(a)phenix-online.org
> >> http://phenix-online.org/mailman/listinfo/cctbxbb
> >
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
> --
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If
> you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> Any opinions expressed within this e-mail are those of the individual and
> not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England
> and Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
7 years, 4 months

Re: [cctbxbb] Niggli-reduced cell C++ implementation
by Ralf Grosse-Kunstleve
Hi Martin,
Based on
iotbx.lattice_symmetry --unit_cell="4.630811 4.630811 4.630811 90 90 90"
and
iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941 106.779 90"
the first unit cell is (obviously) cubic, the second is only monoclinic.
Even with
iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941 106.779 90" --delta=20
it only comes back as orthorhombic.
Is this what you expect?
Ralf
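As a quick cross-check that the two representations Martin lists really describe the same lattice, one can compare their volumes via the standard triclinic formula (a plain-Python sketch, not a cctbx call; a full check would compare the complete metric tensors):

```python
import math

def unit_cell_volume(a, b, c, alpha, beta, gamma):
    """Volume of a triclinic cell; angles in degrees.
    V = abc * sqrt(1 - cos^2(alpha) - cos^2(beta) - cos^2(gamma)
                   + 2*cos(alpha)*cos(beta)*cos(gamma))
    """
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(
        1.0 - ca * ca - cb * cb - cg * cg + 2.0 * ca * cb * cg)

v1 = unit_cell_volume(4.630811, 4.630811, 4.630811, 90, 90, 90)
v2 = unit_cell_volume(3.27448, 5.67156, 5.67156, 99.5941, 106.779, 90)
# The two volumes agree to better than 0.01 A^3, consistent with the two
# cells being alternative settings of the same lattice.
```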
On Tue, Apr 24, 2012 at 10:57 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk>wrote:
> Dear cctbxers,
>
> I've finally found the time to play around with a C++ version of the KG
> algorithm and I've come across a result I don't understand. I've tried
> both David's C++ and the cctbx python niggli_cell() implementations and
> they both give roughly the same answer.
>
> I'm reducing the following cell with two, equivalent, representations (a,
> b, c, alpha, beta, gamma):
>
> Before:
>
> 1: 4.630811 4.630811 4.630811 90 90 90
> 2: 3.27448 5.67156 5.67156 99.5941 106.779 90
>
> After:
>
> 1: 4.63081 4.63081 4.63081 90 90 90
> 2: 3.27448 5.67154 5.67156 99.5941 90 106.778
>
> Looking at the trace, cell 1 undergoes step 3 and finishes while cell 2
> undergoes steps 2, 3, 7 and 4.
>
> Does anyone know why these haven't converged to the same cell?
>
> Many thanks,
> -Martin
>
> On 23 March 2012 17:12, Ralf Grosse-Kunstleve <rwgrosse-kunstleve(a)lbl.gov>wrote:
>
>> Hi Martin,
>> Let me know if you need svn write access to check in your changes. All I
>> need is your sourceforge user id.
>> Ralf
>>
>>
>> On Fri, Mar 23, 2012 at 3:35 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk>wrote:
>>
>>> Dear David and Ralf,
>>>
>>> thank you for your encouragement.
>>>
>>> David: I'm more than happy to port your implementation to cctbx if
>>> you're happy with this. Of course I don't want to step on your toes so if
>>> you'd rather do it yourself (or not at all) that's cool.
>>>
>>> There may be some licensing issues to sort out as it looks like cctbx
>>> has a custom (non viral) license but the BSD license is likely compatible.
>>>
>>> On first impression I think a new class would be the way to go but I'd
>>> have to look at the two algorithms in greater detail to be sure.
>>>
>>> All the best,
>>> -Martin
>>>
>>>
>>> On 22 March 2012 22:00, Ralf Grosse-Kunstleve <
>>> rwgrosse-kunstleve(a)lbl.gov> wrote:
>>>
>>>> Hi Martin,
>>>> You're very welcome to add a C++ version of the Krivy-Gruber algorithm
>>>> to cctbx if that's what you had in mind.
>>>> I'm not sure what's better, generalizing the fast-minimum-reduction
>>>> code, or just having an independent implementation.
>>>> Ralf
>>>>
>>>> On Thu, Mar 22, 2012 at 2:24 PM, Martin Uhrin <
>>>> martin.uhrin.10(a)ucl.ac.uk> wrote:
>>>>
>>>>> Dear Cctbx community,
>>>>>
>>>>> Firstly I'd like to say thank you to Ralf, Nicholas and Paul for their
>>>>> expertly thought through implementation of the reduced cell algorithm.
>>>>> I've found it to be extremely useful for my work.
>>>>>
>>>>> My code is all in C++ and I'd like to be able to use the Krivy-Gruber
>>>>> algorithm. My understanding is that only the reduced (Buerger) unit cell
>>>>> algorithm is implemented in C++ [1] which guarantees shortest lengths but
>>>>> not unique angles. From my understanding the Krivy-Gruber would also
>>>>> guarantee me uniqueness of unit cell angles, however this is only
>>>>> implemented in Python [2]. Sorry to be so verbose, I just wanted to check
>>>>> that I was on the right page.
>>>>>
>>>>> Would it be possible for me to implement the Krivy-Gruber in C++ by
>>>>> adding the epsilon_relative parameter and following the procedure
>>>>> found in the python version?
>>>>>
>>>>> Many thanks,
>>>>> -Martin
>>>>>
>>>>> [1]
>>>>> http://cctbx.sourceforge.net/current/c_plus_plus/classcctbx_1_1uctbx_1_1fas…
>>>>> [2]
>>>>> http://cctbx.sourceforge.net/current/python/cctbx.uctbx.krivy_gruber_1976.h…
>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Martin Uhrin                          Tel: +44 207 679 3466
>>>>> Department of Physics & Astronomy     Fax: +44 207 679 0595
>>>>> University College London
>>>>> martin.uhrin.10(a)ucl.ac.uk
>>>>> Gower St, London, WC1E 6BT, U.K.      http://www.cmmp.ucl.ac.uk
>>>>>
>>>>> _______________________________________________
>>>>> cctbxbb mailing list
>>>>> cctbxbb(a)phenix-online.org
>>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>>
>>>>>
>>>>
>>>> _______________________________________________
>>>> cctbxbb mailing list
>>>> cctbxbb(a)phenix-online.org
>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>
>>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>
>>>
>>
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>
>>
>
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
13 years, 2 months

Re: [cctbxbb] Making branches by accident
by markus.gerstel@diamond.ac.uk
Hi Nigel,
The log (and the blame log) should show the times of the commit, regardless of whether you merge the branch into master or rebase your branch onto master at the end.
When you rebase your commits you can also get the rebase time, e.g. for your last rebased commit:
$ git show 4fa192b0 --format=fuller
The author+date pair is the person who originally wrote the code (rather: who made the original commit), the commit+date pair is set by whoever rebased/cherry-picked/etc. it last.
-Markus
PS: Just to clarify - if you deliberately use branches to do isolated feature development, merging is absolutely fine.
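Markus's `--format=fuller` suggestion can be tried in a throwaway repository; a sketch (the identity settings and repo name are placeholders):

```shell
# Create a throwaway repo, commit, then amend the commit.  Amending (like a
# rebase or cherry-pick) rewrites the commit: AuthorDate is preserved while
# CommitDate is updated to the time of the rewrite.
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "original"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --amend --allow-empty --no-edit
# Show both date pairs for the current commit:
git show HEAD --no-patch --format=fuller
```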
________________________________
From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Nigel Moriarty [nwmoriarty(a)lbl.gov]
Sent: Thursday, December 08, 2016 17:41
To: cctbx mailing list
Subject: Re: [cctbxbb] Making branches by accident
Markus
I was a non-rebaser but I have set it to true on my second machine. So my question is regarding branches. I have made a branch, made some changes and merged the master into the branch. I will make some more changes during testing. I assume that I merge into the master at some point. Will the commits appear in the log based on the time of the merge or the time I committed in the branch?
Cheers
Nigel
---
Nigel W. Moriarty
Building 33R0349, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
Berkeley, CA 94720-8235
Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov
Fax : 510-486-5909 Web : CCI.LBL.gov
On Thu, Dec 8, 2016 at 12:14 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
I use a custom prompt so I can see what is going on when I am in a git repository folder.
This is the code one could add to their ~/.bashrc:
https://gist.github.com/Anthchirp/dfc9a4382f8dfc9a97fe1039c9e6789a
This is what it looks like:
https://postimg.org/image/8c9h72qwd/
This is what happens in the image:
* yellow brackets indicate you are in git territory, and contain the current branch name
* red branch name = uncommitted changes in repository
* positive number: number of commits the local repository is ahead of the remote repository
* the 'git pull' command causes an implicit merge commit, which I undo with the next command
* negative number: number of commits the local repository is behind the remote repository
* both negative and positive number: branches have diverged
Maybe someone finds it useful.
-Markus
________________________________
From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Pavel Afonine [pafonine(a)lbl.gov]
Sent: Wednesday, December 07, 2016 18:24
To: cctbxbb(a)phenix-online.org
Subject: Re: [cctbxbb] Making branches by accident
This happened to me a few times now, and just double-checked that my .gitconfig contains "rebase = true". Let's see if it happens again..
Pavel
On 12/7/16 00:02, Graeme.Winter(a)diamond.ac.uk wrote:
Morning all
I am seeing a certain amount of “Merge branch 'master' of github.com:cctbx/cctbx_project” coming through on the commits; this usually means you did not do a git pull --rebase before the git push. This can be set to the default by using the spell Markus sent out
git config --global pull.rebase true
This will need to be done on each machine you push from, else getting into the habit of doing a git pull --rebase before push is a good one.
We have had this on and off with DIALS but it tends to pass easily enough.
What bad happens? Nothing really but the history becomes confusing…
So: may be worth checking that you have the pull.rebase thing set?
Cheerio Graeme
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org<mailto:[email protected]><mailto:[email protected]<mailto:[email protected]>>
http://phenix-online.org/mailman/listinfo/cctbxbb
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org<mailto:[email protected]>
http://phenix-online.org/mailman/listinfo/cctbxbb
8 years, 6 months

Re: [phenixbb] Selecting ellipsoid of data
by Frank von Delft
Ah.... I was wondering about that: thanks for the pointer!!
On 15/07/2010 17:38, Kay Diederichs wrote:
> Hi Frank,
>
> "such a tool" is at
> http://strucbio.biologie.uni-konstanz.de/xdswiki/index.php/Aniso_cutoff
>
> where it's meant to be applied to INTEGRATE.HKL which comes out of XDS.
> Doing it this way has the benefit that the statistics printed out by
> XDS' CORRECT (or SCALA/TRUNCATE; there are people who prefer that
> route) match the data you refine against.
>
> HTH,
>
> Kay
>
>> Message: 1
>> Date: Thu, 15 Jul 2010 06:32:40 +0100
>> From: Frank von Delft <frank.vondelft(a)sgc.ox.ac.uk>
>> To: PHENIX user mailing list <phenixbb(a)phenix-online.org>
>> Subject: [phenixbb] Selecting ellipsoid of data
>> Message-ID: <4C3E9D78.5030203(a)sgc.ox.ac.uk>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> Hi, is there a tool in phenix that allows me to select an ellipsoid of
>> data -- specified e.g. by the highest resolutions in three reciprocal
>> lattice directions. (Yes, I'm playing with anisotropy, "playing" being
>> the operative word.)
>>
>> phx
>>
>>
>> ------------------------------
>>
>> Message: 2
>> Date: Wed, 14 Jul 2010 22:47:30 -0700
>> From: "Ralf W. Grosse-Kunstleve" <rwgk(a)cci.lbl.gov>
>> To: phenixbb(a)phenix-online.org
>> Subject: Re: [phenixbb] Selecting ellipsoid of data
>> Message-ID: <201007150547.o6F5lUoL018490(a)cci.lbl.gov>
>> Content-Type: text/plain; charset=us-ascii
>>
>> Hi Frank,
>>
>>> Hi, is there a tool in phenix that allows me to select an ellipsoid of
>>> data -- specified e.g. by the highest resolutions in three reciprocal
>>> lattice directions. (Yes, I'm playing with anisotropy, "playing" being
>>> the operative word.)
>>
>> I'm not aware of such a tool.
>>
>> Ralf
>>
>>
>> ------------------------------
>>
>> Message: 3
>> Date: Thu, 15 Jul 2010 07:15:58 +0100
>> From: Frank von Delft <frank.vondelft(a)sgc.ox.ac.uk>
>> To: "Ralf W. Grosse-Kunstleve" <rwgk(a)cci.lbl.gov>, PHENIX user mailing
>> list <phenixbb(a)phenix-online.org>
>> Subject: Re: [phenixbb] Selecting ellipsoid of data
>> Message-ID: <4C3EA79E.2030503(a)sgc.ox.ac.uk>
>> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>>
>> Hi Ralf
>>
>> Yeah, I figured. So if I want to use cctbx, where do I start? Just a
>> pointer to a) package and b) function where I'll see the syntax.
>>
>> So equation for ellipsoid is x^2/a^2 + y^2/b^2 + z^2/c^2 = 1; so I
>> imagine I take each reflection, convert each of h,k,l to 1/reso, and
>> with a = 1/res(a*), I just check whether the above is < 1.
>>
>> The main thing I still need is to convert h,k,l into orthogonal
>> coordinates.... or do I? I suppose I don't, as what I care for is not
>> whether it's "really" an ellipsoid, only whether it cuts through miller
>> index space anisotropically.
>>
>> Hmmmm... I may be able to do it in sftools; but if you can in <1 minute
>> give me a link to where to look to get started in cctbx, that would be
>> awesome.
>>
>> (Thanks for listening :)
>>
>>
>>
>>
>>> Hi Frank,
>>>
>>>
>>>> Hi, is there a tool in phenix that allows me to select an ellipsoid of
>>>> data -- specified e.g. by the highest resolutions in three reciprocal
>>>> lattice directions. (Yes, I'm playing with anisotropy, "playing"
>>>> being
>>>> the operative word.)
>>>>
>>> I'm not aware of such a tool.
>>>
>>> Ralf
>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
>
14 years, 11 months

Re: [phenixbb] phenix refinement question
by Pavel Afonine
Hi Martyn,
thanks for your feedback - it is very much appreciated!
> It is not strictly the case that TLS is neglected during pdb deposition.
This is in-sync with my understanding of the current situation. It is
really great!
(Although, I should re-run my tools through the whole PDB to quickly see
the current state of the art.)
> The requirement for deposition now is that full ANISOU values have to be present if TLS has been used.
This is really great, too!
> In which case the TLS definitions are redundant as the full description of the ADP model is provided by the ATOM and ANISOU records.
No, this is not true. The TLS definitions define the model partitions
into TLS groups that with the current tools cannot be recovered from
just ANISOU records.
> There is therefore no absolute requirement for the TLS definitions in the header to be correctly read in order for the validation to proceed.
This is true in a sense that you can re-calculate the R-factor (since
the complete ANISOU records corresponding to Utotal (see reference
below) are present and therefore you can compute correct Fcalcs), but
inability to read this information correctly should be a BIG warning
sign for everyone involved.
Also, leaving out TLS records will result in obvious loss of information
about TLS groups (atom selection defining TLS groups). I don't see a
reason why one would want to give up this information.
> This aids accurate validation of the model against the provided SF data using EDS for example.
Absolutely true: the ability to reproduce the reported R-factors is in
close and direct relation with the ability to accurately validate the
model and data.
phenix.model_vs_data would do it almost unconditionally:
J. Appl. Cryst. (2010). 43, 669-676
phenix.model_vs_data: a high-level tool for the calculation of
crystallographic model and data statistics
P. V. Afonine, R. W. Grosse-Kunstleve, V. B. Chen, J. J. Headd, N. W.
Moriarty, J. S. Richardson, D. C. Richardson, A. Urzhumtsev, P. H. Zwart
and P. D. Adams
> The output with and without TLS was used historically to check whether the TLS definitions had been read correctly.
I see, but there is more to it than just a TLS hint in "REMARK 3"... The
TLS records contain the information about TLS groups (atom selections, at
least) that, if removed, cannot be easily guessed.
> Having said that the presence of TLS definitions is still informative for users of the coordinates to check that for example a full anisotropic refinement has not been carried out.
Well, "TLS refinement" = "Constrained anisotropic refinement", so I
don't really understand what "full anisotropic refinement" means.
Also, what about performing TLS refinement on top of treating each atom
moving anisotropically:
see p. 24-31: http://www.phenix-online.org/newsletter/CCN_2010_07.pdf
for some overview.
> PDB curation involves checking the description of the TLS groups that have been chosen.
Great! Did you see my report that I sent to those who might be
interested a few months ago? If not, I can re-send you the old one and
meanwhile I can re-compute the most current.
> So, for example, it is useful that the selection expressions do not refer to ranges of residues that do not exist (for example "RESID -99:9999" for a 1-100 residue protein),
Absolutely true. This is what I pointed out in my report a few months ago.
> or to overlapping ranges, for example: a chain with its TLS group 1 defined as "RESID 45:90" and its TLS group 2 is defined as "RESID 75:150".
True.
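The overlapping-range check described above is simple to state precisely. A hypothetical helper, not a wwPDB or PHENIX routine; the (start, end) tuple encoding of a RESID selection is an assumption for illustration:

```python
def ranges_overlap(r1, r2):
    """True if two inclusive residue ranges, e.g. RESID 45:90 and
    RESID 75:150 given as (45, 90) and (75, 150), share any residue."""
    return r1[0] <= r2[1] and r2[0] <= r1[1]

def overlapping_tls_pairs(groups):
    """Return index pairs of TLS group ranges that overlap."""
    return [(i, j)
            for i in range(len(groups))
            for j in range(i + 1, len(groups))
            if ranges_overlap(groups[i], groups[j])]
```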
> Depending on the wwPDB deposition site, the validation programs may differ.
Sure, the tools may vary under the requirement that the outcome must be
the same.
> PDBe uses an in-house version of the EDS server which uses REFMAC with TLS taken into account. RCSB and PDBj run the particular program that was used in determining the structure for validation, in addition to a validation check using SFCHECK.
Can you reproduce the reported R-factors of this entry using the above
described tools: 2WYX or 2R24?
Let me know if not.
> It is worth saying that the PDB sites are not attempting to completely reproduce the authors' Rfactors,
This is very unfortunate, since there is no reason for the R-factors not
to be reproducible to within 0.01%. IF you can't reproduce them, then there is
THE problem either with the structure/data or with the software you use.
Period.
See:
J. Appl. Cryst. (2010). 43, 669-676
phenix.model_vs_data: a high-level tool for the calculation of
crystallographic model and data statistics
P. V. Afonine, R. W. Grosse-Kunstleve, V. B. Chen, J. J. Headd, N. W.
Moriarty, J. S. Richardson, D. C. Richardson, A. Urzhumtsev, P. H. Zwart
and P. D. Adams
> but instead to check for errors in the deposition process.
Well, reproducing the R-factors is the very first sanity check to do
BEFORE wasting any time on checking the other lower level details.
Indeed, if such gross thing as the R-factor doesn't reasonably match
there is no point to validate fine details.
> You can check the details of the PHENIX header format for TLS at
> http://www.wwpdb.org/documentation/format32/remark3.html#Refinement%20using…
Thanks! This looks great.
Just a minor question:
What if I specify a TLS group as "chain A or chain a and resseq 123:456
and element N" (that I potentially can do no problem in PHENIX)?
All the best!
Pavel.
14 years, 7 months