[cctbxbb] Overloading Python builtin names with local variables
by Graeme.Winter@Diamond.ac.uk
Morning all,
As a side-effect of looking at some of the Python3 refactoring I stumbled across a few places where e.g. range is used as a variable name. While this is fine and legal, it causes trouble when moving the code to Python3, where
xrange => range
and
from builtins import range
is a back-port of the Python3 range to Python2 which allows the existing generator behaviour of xrange to be maintained.
This is fine except where range is also a variable name, in which case we get a "variable referenced before assignment" type error.
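The failure mode is easy to reproduce in a few lines (a minimal illustration, not taken from the dxtbx code itself): as soon as a function assigns to range anywhere in its body, Python treats the name as local throughout that function, so an earlier call to it fails.

```python
def bad():
    # Python decides at compile time that 'range' is local to bad(),
    # because of the assignment two lines down -- so this call raises
    # UnboundLocalError ("referenced before assignment").
    values = list(range(3))
    range = values  # shadowing the builtin makes 'range' a local name
    return range

try:
    bad()
except UnboundLocalError as e:
    print("caught:", e)
```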
As a general statement, I would say that overloading Python reserved names with local variables is untidy at best and can easily cause confusion (as well as real problems, as identified above). I have therefore taken the liberty of adding a tool to libtbx which can be used to find such variables, to give people a chance to avoid them:
Grey-Area dxtbx :) [master] $ libtbx.find_reserved_names format/
Checking format/
format/FormatSMVRigakuSaturn.py:239 "format"
format = self._scan_factory.format("SMV")
format/FormatTIFFRayonixESRF.py:36 "format"
format = {LITTLE_ENDIAN: "<", BIG_ENDIAN: ">"}[order]
format/FormatTIFFRayonixESRF.py:29 "bytes"
width, height, depth, order, bytes = FormatTIFFRayonix.get_tiff_header(
format/FormatCBFMini.py:231 "format"
format = self._scan_factory.format("CBF")
format/FormatTIFFRayonix.py:176 "format"
format = self._scan_factory.format("TIFF")
format/FormatTIFFRayonix.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatTIFFRayonix.py:74 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatSMVRigakuEiger.py:243 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVADSC.py:212 "format"
format = self._scan_factory.format("SMV")
format/Registry.py:88 "format"
for format in sorted(self._formats, key=lambda x: x.__name__):
format/FormatSMVJHSim.py:141 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVRigakuPilatus.py:188 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVADSCSN928.py:47 "format"
format = self._scan_factory.format("SMV")
format/FormatRAXISIVSpring8.py:134 "format"
format = self._scan_factory.format("RAXIS")
format/FormatTIFFRayonixSPring8.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFFRayonix.get_tiff_header(
format/FormatSMVADSCmlfsom.py:37 "format"
format = self._scan_factory.format("SMV")
format/FormatTIFFBruker.py:174 "format"
format = self._scan_factory.format("TIFF")
format/FormatTIFFBruker.py:31 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatTIFFBruker.py:71 "bytes"
width, height, depth, order, bytes = FormatTIFF.get_tiff_header(image_file)
format/FormatCBFMiniADSCHF4M.py:26 "format"
for format in ["%a_%b_%d_%H:%M:%S_%Y"]:
format/FormatCBFMiniADSCHF4M.py:169 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVADSCNoDateStamp.py:54 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVRigakuSaturnNoTS.py:228 "format"
format = self._scan_factory.format("SMV")
format/FormatPYunspecified.py:152 "format"
format = ""
format/FormatSMVRigakuA200.py:234 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniEigerPhotonFactory.py:131 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVTimePix_SU.py:97 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniPilatus.py:83 "format"
format = self._scan_factory.format("CBF")
format/FormatCBFMiniPilatusHelpers.py:22 "format"
for format in ["%Y-%b-%dT%H:%M:%S", "%Y-%m-%dT%H:%M:%S", "%Y/%b/%d %H:%M:%S"]:
format/FormatSMVADSCSNAPSID19.py:91 "format"
format = self._scan_factory.format("SMV")
format/FormatCBFMiniEiger.py:216 "format"
format = self._scan_factory.format("CBF")
format/FormatSMVNOIR.py:218 "format"
format = self._scan_factory.format("SMV")
format/FormatSMVCMOS1.py:197 "format"
format = self._scan_factory.format("SMV")
format/FormatRAXIS.py:238 "format"
format = self._scan_factory.format("RAXIS")
This is an example where "format" is scattered all over the place but is also a useful method; renaming the variable to fmt, for example, would mean the same and avoid the overloading.
The tool uses AST parsing to work through the code and find variable assignments; running it over dials found lots of examples of "file" and "type" being used.
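For the curious, the core of such a checker is small. The sketch below is my own illustration of the general AST approach described above, not the actual libtbx.find_reserved_names source; the real tool's coverage and output format will differ.

```python
import ast
import builtins  # on Python 2 the equivalent module is __builtin__


def find_shadowed_builtins(source):
    """Return (lineno, name) pairs for assignments that shadow a builtin."""
    reserved = set(dir(builtins))
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            targets = node.targets
        elif isinstance(node, ast.For):
            targets = [node.target]
        else:
            continue
        for target in targets:
            # walking each target handles tuple unpacking such as:
            #   width, height, depth, order, bytes = ...
            for leaf in ast.walk(target):
                if isinstance(leaf, ast.Name) and leaf.id in reserved:
                    hits.append((leaf.lineno, leaf.id))
    return hits


code = 'format = scan.format("SMV")\nwidth, bytes = 100, 2\n'
print(find_shadowed_builtins(code))  # -> [(1, 'format'), (2, 'bytes')]
```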
How much people care about this is a local issue, but I thought the tool would be useful
All the best Graeme
--
This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
Re: [cctbxbb] Scons for python3 released
by markus.gerstel@diamond.ac.uk
Hi,
Just to add to this. I think Graeme's find_clutter idea has merit, which could certainly check for
from __future__ import absolute_import, print_function
which would cover a lot of ground.
I also found this worthwhile to read through: http://python-future.org/compatible_idioms.html
Things like xrange vs range especially should be handled with a bit of thought, and when ancient code needs to be touched it also presents an opportunity to make it clearer what it actually does. For example, the very first commit on the Python3 branch changed xrange->range here, and I wondered... https://github.com/cctbx/cctbx_project/commit/f10fd505841de372098bca83c40fc… (untested)
Finally, I think doing all 2-3-compatible conversions in a branch, for example print -> print() as is happening now, will be a nightmare to merge later, because you will be touching large portions of a large number of files while other development does not stop. And, let's be honest, nobody will review a 100k LoC change set anyway.
I would suggest we do those refactoring changes directly on master. A single type of change (i.e. print->print()) on a per-file basis, along with "from __future__ import print_function", in say 30 files within one directory tree per commit? Much more manageable.
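As a concrete illustration of the per-file change being proposed (a minimal sketch using only standard-library behaviour): with the __future__ import at the top of a file, print is a function on both interpreters, so the converted code runs identically under 2 and 3.

```python
from __future__ import print_function  # a harmless no-op on Python 3

import io

# print is now a function, so Python-3-style keyword arguments such as
# sep= and file= work on Python 2 as well:
buf = io.StringIO()
print("x", "y", sep=", ", file=buf)
print(buf.getvalue())
```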
Oh, and we need the future module installed from bootstrap.
-Markus
-----Original Message-----
From: cctbxbb-bounces(a)phenix-online.org [mailto:[email protected]] On Behalf Of Graeme.Winter(a)diamond.ac.uk
Sent: 17 October 2017 15:20
To: cctbxbb(a)phenix-online.org
Subject: Re: [cctbxbb] Scons for python3 released
Hi Robert
I think having more than one person independently look at the p3 problem is no bad thing - also with the geography it would seem perfectly possible for you / Nick to meet up and compare notes on this - it’s certainly something I would support.
Clearly there are a lot of things which could get caught up in the net with the p3 update - for example build system discussions, cleaning out cruft that is in there to support python 2.3 etc… however I did not read that Nick thought SCons3 was a waste of time - I think he was getting at the point that this is part of the move, and that there is also a lot of related work. Also that having p2 / p3 support for at least a transition rather than the “full Brexit” of no longer supporting p2 beyond the first date where p3 works would be good. I could imagine this transition period being O(1000 days) even.
I think the migration process is going to be a complex one, but doable. One thing I think we do need is to make sure that the code base as pushed by developers remains compatible with p2 and p3 - so perhaps extending find_clutter to check for things which only work in one or the other? Then developers would learn the tricks themselves and (ideally) not push p2-only or p3-only code, at least until post-transition. This I would compare with the svn to git move, which caused some grumbling and a little confusion but was ultimately successful…
Hope this is constructive, cheerio
Graeme
On 17 Oct 2017, at 13:50, R.D. Oeffner <rdo20(a)cam.ac.uk> wrote:
Hi Nick and others,
That sounds like a great effort. A shame I didn't know about this. I have not had time to look in detail into your work but will nevertheless summarize my thoughts and work I have been doing lately in an effort to move CCTBX to python3.
I am not sure why it would be a waste of time to use SCons3.0 with python3, as I think you are suggesting. To me it seems a necessary step in creating a codebase that runs on both python2 and python3. Do I understand correctly that as long as CCTBX code is changed to comply with python3 and remains python2 compliant, then such a codebase can be used in place of the current python2-only codebase for derived projects such as Dials and Phenix? Assuming this is the case, I think it is worth focusing on CCTBX only for now.
My own attempt at porting CCTBX to python3 consists of the following steps:
* Replace Scons2 with Scons3
* Update the subset of Boost sources to version 1.63
* Run futurize stage1 and stage2 on the CCTBX
* Build base components (libtiff, hdf5, python3.6 plus add-on modules)
* Run bootstrap.py build with Python3.6 repeatedly and provide mock-up fixes to allow the build to continue.
This work is nearing completion in the sense that the sources can now build, but they are unlikely to pass tests due to the mock-up fixes, which often consist of replacing PyStringXXX functions with the equivalent PyUnicodeXXX or PyBytesXXX functions regardless of whether that is appropriate. These token fixes would also need to be guarded by #if PY_MAJOR_VERSION == 3 ... macros.
The sources are available on https://github.com/cctbx/cctbx_project/tree/Python3
The next steps are less well defined. One approach would be to set up a build system that migrates python2 code to python3 using the futurize script, then builds CCTBX, runs the tests and presents build log files online as in http://cci-vm-6.lbl.gov:8010/one_line_per_build. With a hook to GitHub this could also be done on the fly as people commit code to CCTBX. This should encourage people to write code that runs on python2 as well as python3. Eventually, once all tests for CCTBX pass, we are done and can merge this codebase into the master branch.
Robert
On 17/10/2017 11:56, Nicholas Devenish wrote:
Hi All,
I spent a little bit of time looking at python3/libtbx so have some input on this.
On Tue, Oct 10, 2017 at 6:16 PM, Billy Poon <bkpoon(a)lbl.gov> wrote:
> 1) Use Python 2 to build Python 2 version of CCTBX (no work)
This might not be as simple as "no work" - cctbx is a few years behind on SCons versions (libtbx.scons --version suggests 2.2.0, from 2012), so there *might* be other issues upgrading the SCons version to 3.0 before trying python3.
I also feel that SCons-Python3 is something of a red herring - the only thing that a non-python3 SCons prevents is a 100% python3-only codebase, and unless the plan is to migrate the entire codebase, including all downstream dependencies (like dials), to python3-only in one massive step (probably impossible), everything would need to be dual 2/3 first, and only then a decision taken on deprecating 2.7 support.
More usefully, outside of a small core of libtbx code, few of the buildsystem files are bound to the main project, so this shouldn't be too difficult. In fact, I've experimented with converting to CMake, and as one of the approaches I explored, I wrote a SCons emulator that read and parsed the build *without* any scons/cctbx dependencies. Parsing the entire "builder=dials" SCons tree only required this subset of libtbx:
https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py…
[1]
(Note: my general CMake work runs but isn't complete/ready/documented for general viewing, and still much resembles a hacky project - but I thought that being able to decouple the buildsystem this way is usefully illustrative of how simple the task might be.)
Regarding general Python3 conversion, it's definitely not "just changing the print statements". I undertook a study in August to convert libtbx (being the core that *everything* depends on) to dual python2/3 and IIRC got most of the tests working in python3. It's a couple of months out-of-date, but is probably useful as a benchmark of the effort required. The repository links are:
https://github.com/ndevenish/cctbx_project/tree/py3k-modernize [2]
https://github.com/ndevenish/cctbx_project/tree/py3k [3]
Probably best looked at with a graphical viewer to get a top-down view of the history. My approach was to separate manual/automatic changes as follows:
1. Remove legacy code/modules - e.g. old compatibility code. The Optik removal came from this. We don't want to spend mental effort converting absorbed external libraries from a decade ago (see also e.g. pexpect, subprocess_with_fixes).
2. Make some manual fixes [expanded as we go on].
3. Use futurize and modernize to update idioms ONLY, e.g. remove pre-2.7 deprecated ways of working.
Each operation was done in a separate commit (so that changes are more visible, and I thought people would have less objection than to a massive code-change dump), and each commit ran the test suite for libtbx. Some of the 'fixers' in each tool are complementary. If there are any problems with tests or automatic conversion, then fix the problem, put the fix into step 2, and start again. This step should be entirely scriptable. I had 17 commits for separate fixes in this chain.
This is where the py3k-modernize branch stops, and it should in principle be entirely safe to push back onto the python2-only repository. The next steps form the `py3k` branch (which, not being intended for direct pushing, is a little less organised - some of my changes could definitely be moved to step 2):
4. Run 'modernize' to convert the codebase to as much python2/3 as possible. This introduces the dependency on 'six'
5. Run tests, implement various fixes, repeat. This work was ongoing when I stopped working on the study.
Various (non-exhaustive) problems found:
- cStringIO isn't handled automatically, so these need to be fixed manually ( e.g.
https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbb…
[4] )
- Iterators needed to be fixed in cases where they were missed (next vs __next__)
- Rounding: Python3 uses banker's rounding (round-half-to-even), and there are formatting tests where this changes the output. I didn't know enough about the exact desired result to know the best way to fix this
- libtbx uses compiler.misc.mangle and I don't know why - this was always a private interface and was removed in 3.
- Moving print statements to functions - there were several failed tests relating to the old python2 print soft-spacing behaviour, which was removed. Not too difficult, but it definitely causes churn
- A couple of text/binary mode file issues, which seemed to be simple but may be more complicated than the test cases covered. I'd expect more issues with this in the format readers though.
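A few of these edge cases in miniature (my own minimal illustrations, not code from the branches above): the cStringIO import fallback, an iterator that satisfies both protocols, and Python 3's round-half-to-even behaviour.

```python
# cStringIO fallback: six.moves offers a mapping for this, but a plain
# try/except import also works for dual 2/3 code.
try:
    from cStringIO import StringIO   # Python 2, C-accelerated
except ImportError:
    from io import StringIO          # Python 3

buf = StringIO()
buf.write("hello")

# Iterator compatible with both protocols: Python 3 calls __next__,
# Python 2 calls next, so alias one to the other.
class Countdown(object):
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1
    next = __next__  # Python 2 spelling

print(list(Countdown(3)))  # [3, 2, 1]

# Banker's rounding: Python 3 rounds halves to the nearest even value,
# whereas Python 2's round() rounded halves away from zero.
print([round(x) for x in (0.5, 1.5, 2.5)])  # [0, 2, 2]
```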
I evaluated both the futurize (using the future library) and modernize (using the well-known six library) tools, both being different approaches to 2to3 but aimed at dual 2/3 codebases. I liked the way futurize attempts to make code look as python3-idiomatic as possible, but some of the performance implications were slightly opaque: e.g. libtbx makes heavy use of cStringIO (presumably for a good reason), and futurize converted all of these back to using StringIO in the Python2 case. I settled on modernize, as I felt two different compatibility libraries would be messy. In either case, using the library means that you can identify exactly everywhere that needs to be removed when moving to python3 only.
My conclusions:
- Automatic tools are useful for the bulk of changes, but there are still lots of edge cases
- The complexity means that a phased approach is *absolutely* necessary - starting by converting the core to 2/3 and only moving to 3 once everything downstream is converted. Trying to convert everything at once would likely mean months of feature-freeze.
- A separate "Remove legacy" cleaning phase might be very useful, though obviously the domain of this could be endless
- SCons is probably the least important of the conversion worries
Nick
Links:
------
[1]
https://github.com/ndevenish/read_scons/blob/master/tbx2cmake/import_env.py…
[2] https://github.com/ndevenish/cctbx_project/tree/py3k-modernize
[3] https://github.com/ndevenish/cctbx_project/tree/py3k
[4]
https://github.com/ndevenish/cctbx_project/commit/c793eb58acc37c60360dccbbb…
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/cctbxbb
--
Re: [phenixbb] staraniso/phenix.refine
by Gerard Bricogne
Dear Andrea,
As Clemens Vonrhein is not subscribed to the phenixbb, I am sending
the message below on his behalf.
With best wishes,
Gerard.
----------------------------------------------------------------------------
Dear Andrea,
Besides the paragraph in the Release Notes of our latest BUSTER release that
Luca directed you to, you may find it useful to consult the following Wiki
page:
https://www.globalphasing.com/buster/wiki/index.cgi?DepositionMmCif
Although it has not yet been updated so as to mention explicitly the cases
where refinement was done with REFMAC or phenix-refine, the procedure and
commands given there will work in these two cases as well. Our intention was
to get feedback from our users first, then to announce the extended
capability of aB_deposition_combine more broadly - but your question clearly
shows that we should make that announcement sooner rather than later.
Getting back to your message: although uploading either the phenix-generated
mtz file or the mmCIF file generated by aB_deposition_combine would be
acceptable to the OneDep system, we would highly recommend using those from
aB_deposition_combine for several reasons. First of all, these files should
already contain the data from Phenix but also provide additional data blocks
with richer reflection data (and metadata!), going right back to the scaled
and unmerged data without any cut-off applied. Furthermore, it should ensure
that the correct data quality metrics (i.e. merging statistics) are included
into the mmCIF files uploaded during deposition. Of course, you don't need
to use aB_deposition_combine to generate such a set of mmCIF files (model
and reflection data) - these are after all just text files - but the devil
is often in the details.
It may be useful to add some fairly general background information in order
to avoid misunderstandings and false impressions - so here are some relevant
points you may wish to consider.
(a) The deposition of model and associated data into the PDB should allow
for two things: (1) the validation of the current model and of its
parametrisation on their own, as well as a check against the data used
during refinement, and (2) the provision of additional data to allow
further analysis of, and improvements to, the data handling as well as
the model itself.
For the first task one needs to deposit the data exactly as used by the
depositor to arrive at the current model, i.e. the input reflection
data (as used as input to the refinement program) plus all available
Fourier coefficients (as output by that refinement program) needed to
compute the maps used in modeling.
The second task requires deposition of less and less "processed"
versions of the experimental data - ultimately going back to the raw
diffraction images. This might involve several sets of reflection data,
e.g. (i) scaled, merged and corrected reflection data, (ii) scaled and
merged data before correction and/or cutoff, and (iii) data scaled and
unmerged before outlier rejection - plus additional variants.
(b) When going from the raw diffraction images towards the final model, a
lot of selection and modification of the initial integrated intensities
takes place - from profile fitting, partiality scaling, polarization
correction, outlier rejection, empirical corrections or analytical
scale determination and error model adjustments, all the way to the
application of truncation thresholds (isotropically, elliptically or
anisotropically), conversion to amplitudes (and special treatment of
weak intensities) and anisotropy correction.
There are often good reasons for doing all or some of these (with sound
scientific reasons underpinning them - not just waving some "magic
stick"), even if under special circumstances a deviation from standard
protocols is advisable. What is important from a developer's viewpoint
is to provide users with choices to influence these various steps and
to ensure that as many intermediate versions of reflection data as
possible are available for downstream processes and deposition.
(c) At deposition time, the use of a single reflection datablock is often
not adequate to provide all that information (e.g. refinement programs
might output not the original Fobs going in, but those anisotropically
rescaled against the current model - so a second block might be needed
to hold the original Fobs data, free from that rescaling). If different
types of map coefficients are to be provided, they too need to come in
different mmCIF datablocks (2mFo-DFc and mFo-DFc for observed data
only; 2mFo-DFc filled in with DFc in a sphere going out to the highest
diffraction limit; 2mFo-DFc filled in with DFc for only the reflections
deemed "observable" by STARANISO; F(early)-F(late) map coefficients for
radiation damage analysis; coefficients for anomalous Fourier maps etc).
So ultimately we need to combine (going backwards): the refinement
output data, the refinement input data and all intermediate versions of
reflection data (see above) ... ideally right back to the
scaled+unmerged intensities with a full description of context (image
numbers, detector positions etc). This is what the "Data Processing
Subgroup" of the PDBx/mmCIF Working Group has been looking at
extensively over the last months, and about which a paper has just been
submitted.
(d) autoPROC (running e.g. XDS, AIMLESS and STARANISO) and the STARANISO
server provide multi-datablock mmCIF files to simplify the submission
of a rich set of reflection data. autoPROC provides two versions here:
one with the traditional isotropic analysis, and another for the
anisotropic analysis done in STARANISO. It is up to the user to decide
which one to use for which downstream steps.
To help in combining the reflection data from data processing with that
from refinement, and in transferring all relevant meta data (data
quality metrics) into the model mmCIF file, we provide a tool called
aB_deposition_combine: it should hopefully work for autoPROC (with or
without STARANISO) data in conjunction with either BUSTER, REFMAC or
Phenix refinement results. At the end the user is provided with two
mmCIF files for deposition: (1) a model file with adequate data quality
metrics, and (2) a reflection mmCIF file with multiple datablocks all
the way back to the scaled+unmerged reflection data.
(e) It is important at the time of deposition to not just select whatever
reflection data file happens to be the first to make it through the
OneDep system, as this can often lead to picking up the simplest
version of an MTZ or mmCIF file, but to choose, if at all possible, the
most complete reflection data file containing the richest metadata.
Otherwise we simply increase the number of those rather unsatisfying
PDB entries whose reflection data files contain only the very minimum
of information about what they actually represent and what data quality
metrics (such as multiplicity, internal consistency criteria etc) would
have been attached to them in a richer deposition.
*** Please note that the current OneDep system seems to complain about the
fact that unmerged data blocks come without a test-set flag column:
this looks to us like an oversight, since test-set flags are attributes
belonging to the refinement process, so that it makes no logical sense
to require them for unmerged data. This will probably require some
rethinking/clarification/changes on the OneDep side.
A final remark: Instead of trying to give the impression that there is only
one right way of doing things, and therefore only a single set of reflection
data that should (or needs to) be deposited, it would seem more helpful and
constructive to try and provide a clear description of the various "views"
of the same raw diffraction data that can be provided by the various
approaches and data analysis methods we have at our disposal. Together with
more developments regarding the PDBx/mmCIF dictionary, and coordinated
developments in the OneDep deposition infrastructure, this will enable
better and richer depositions to be made, helping future (re)users as well
as software developers.
Cheers,
Clemens
----------------------------------------------------------------------------
--
On Mon, Dec 19, 2022 at 05:42:24PM -0500, Andrea Piserchio wrote:
> So,
>
> Both the phenix-generated mtz file (silly me for not checking this first)
> and the cif file generated by aB_deposition_combine can be uploaded on the
> PDB server.
>
> Thank you all for your help!!
>
> Andrea
>
> On Sat, Dec 17, 2022 at 5:06 PM Pavel Afonine <pafonine(a)lbl.gov> wrote:
>
> > Hi,
> >
> > two hopefully relevant points:
> >
> > - phenix.refine always produces an MTZ file that contains the copy of
> > all inputs plus all is needed to run refinement (free-r flags, for
> > example). So if you use that file for deposition you have all you need.
> >
> > - Unless there are strongly advocated reasons to do otherwise in your
> > particular case, you better use in refinement and deposit the original
> > data and NOT data touched by any of the magic sticks available these days
> > (those that "correct" for anisotropy, sharpen, or else!).
> >
> > Other comments:
> >
> > > - However, CCP41/Refmac5 does not (yet) read .cif reflection files. As
> > > far as I know, neither does Phenix Refine (yet).
> >
> > Phenix supports complete input / outputs of mmcif/cif format. For
> > example, phenix.refine can read/write model and reflection data in cif
> > format. It's been this way for a long time now.
> >
> > Pavel
> >
> >
> > On 12/16/22 17:32, Andrea Piserchio wrote:
> > > Dear all,
> > >
> > >
> > > I am trying to validate and then (hopefully) deposit a structure
> > > generated using the autoproc/staraniso staraniso_alldata-unique.mtz
> > > file as input for phenix.refine.
> > >
> > > Autoproc also produces a cif file ( Data_1_autoPROC_STARANISO_all.cif)
> > > specifically for deposition.
> > >
> > > Long story short, the PDB validation server complains about the lack
> > > of a freeR set for both files. I realized that, at least for the cif
> > > file, the r_free_flag is missing (but why does the .mtz for the
> > > isotropic dataset work??),so I then tried to use for validation the
> > > *.reflections.cif file that can be generated by phenix.refine. This
> > > file can actually produce a validation report, but I still have some
> > > questions:
> > >
> > > 1) Is it proper to use the .reflections.cif file for this purpose?
> > > During the upload I do see some errors (see pics); also, the final
> > > results show various RSRZ outliers in regions of the structure that
> > > look reasonably good by looking at the maps on coot, which seems odd ...
> > >
> > > 2) In case the *.reflections.cif is not adequate/sufficient for
> > > deposition (I sent an inquiry to the PDB, but they did not respond
> > > yet), can I just add a _refln.status column to the autoproc cif file
> > > (within the loop containing the r_free_flag) where I insert “f” for
> > > r_free_flag = 0 and “o” everywhere else?
> > >
> > >
> > > Thank you in advance,
> > >
> > >
> > > Andrea
> > >
> > > _______________________________________________
> > > phenixbb mailing list
> > > phenixbb(a)phenix-online.org
> > > http://phenix-online.org/mailman/listinfo/phenixbb
> > > Unsubscribe: phenixbb-leave(a)phenix-online.org
> >
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
> Unsubscribe: phenixbb-leave(a)phenix-online.org
Re: [cctbxbb] Niggli-reduced cell C++ implementation
by Martin Uhrin
Dear Ralf,
you're quite right! I was looking at my basis along with the lattice, which
led me to think that the two systems should be equivalent, when indeed the
lattices are not!
Thanks for pointing this out.
I'll get back to you once I'm ready with a version for the repository.
Best,
-Martin
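For what it's worth, the metric volumes of the two cell settings quoted further down this thread can be compared with a few lines of plain Python (a standalone sketch, not using cctbx). Equal volumes are necessary but not sufficient for two settings to describe the same lattice - here the volumes agree, and it is the iotbx.lattice_symmetry analysis that actually reveals the inequivalence.

```python
import math


def cell_volume(a, b, c, alpha, beta, gamma):
    """Triclinic unit-cell volume from lengths (Angstrom) and angles (degrees)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca * ca - cb * cb - cg * cg + 2 * ca * cb * cg)


# The two settings from the thread below:
v1 = cell_volume(4.630811, 4.630811, 4.630811, 90, 90, 90)
v2 = cell_volume(3.27448, 5.67156, 5.67156, 99.5941, 106.779, 90)
print(v1, v2)  # the volumes agree to well under 0.1 A^3
```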
On 24 April 2012 20:42, Ralf Grosse-Kunstleve <rwgrosse-kunstleve(a)lbl.gov> wrote:
> Hi Martin,
>
> Based on
>
> iotbx.lattice_symmetry --unit_cell="4.630811 4.630811 4.630811 90 90 90"
>
> and
>
> iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941
> 106.779 90"
>
> the first unit cell is (obviously) cubic, the second is only monoclinic.
> Even with
>
> iotbx.lattice_symmetry --unit_cell="3.27448 5.67156 5.67156 99.5941
> 106.779 90" --delta=20
>
> it only comes back as orthorhombic.
>
> Is this what you expect?
>
> Ralf
>
>
>
> On Tue, Apr 24, 2012 at 10:57 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk> wrote:
>
>> Dear cctbxers,
>>
>> I've finally found the time to play around with a C++ version of the KG
>> algorithm and I've come across a result I don't understand. I've tried
>> both David's C++ and the cctbx python niggli_cell() implementations and
>> they both give roughly the same answer.
>>
>> I'm reducing the following cell with two equivalent representations (a,
>> b, c, alpha, beta, gamma):
>>
>> Before:
>>
>> 1: 4.630811 4.630811 4.630811 90 90 90
>> 2: 3.27448 5.67156 5.67156 99.5941 106.779 90
>>
>> After:
>>
>> 1: 4.63081 4.63081 4.63081 90 90 90
>> 2: 3.27448 5.67154 5.67156 99.5941 90 106.778
>>
>> Looking at the trace, cell 1 undergoes step 3 and finishes while cell 2
>> undergoes steps 2, 3, 7 and 4.
>>
>> Does anyone know why these haven't converged to the same cell?
>>
>> Many thanks,
>> -Martin
>>
>> On 23 March 2012 17:12, Ralf Grosse-Kunstleve <rwgrosse-kunstleve(a)lbl.gov
>> > wrote:
>>
>>> Hi Martin,
>>> Let me know if you need svn write access to check in your changes. All I
>>> need is your sourceforge user id.
>>> Ralf
>>>
>>>
>>> On Fri, Mar 23, 2012 at 3:35 AM, Martin Uhrin <martin.uhrin.10(a)ucl.ac.uk
>>> > wrote:
>>>
>>>> Dear David and Ralf,
>>>>
>>>> thank you for your encouragement.
>>>>
>>>> David: I'm more than happy to port your implementation to cctbx if
>>>> you're happy with this. Of course I don't want to step on your toes so if
>>>> you'd rather do it yourself (or not at all) that's cool.
>>>>
>>>> There may be some licensing issues to sort out as it looks like cctbx
>>>> has a custom (non-viral) license but the BSD license is likely compatible.
>>>>
>>>> On first impression I think a new class would be the way to go but I'd
>>>> have to look at the two algorithms in greater detail to be sure.
>>>>
>>>> All the best,
>>>> -Martin
>>>>
>>>>
>>>> On 22 March 2012 22:00, Ralf Grosse-Kunstleve <
>>>> rwgrosse-kunstleve(a)lbl.gov> wrote:
>>>>
>>>>> Hi Martin,
>>>>> You're very welcome to add a C++ version of the Krivy-Gruber algorithm
>>>>> to cctbx if that's what you had in mind.
>>>>> I'm not sure what's better, generalizing the fast-minimum-reduction
>>>>> code, or just having an independent implementation.
>>>>> Ralf
>>>>>
>>>>> On Thu, Mar 22, 2012 at 2:24 PM, Martin Uhrin <
>>>>> martin.uhrin.10(a)ucl.ac.uk> wrote:
>>>>>
>>>>>> Dear Cctbx community,
>>>>>>
>>>>>> Firstly I'd like to say thank you to Ralf, Nicholas and Paul for
>>>>>> their expertly thought-through implementation of the reduced cell
>>>>>> algorithm. I've found it to be extremely useful for my work.
>>>>>>
>>>>>> My code is all in C++ and I'd like to be able to use the Krivy-Gruber
>>>>>> algorithm. My understanding is that only the reduced (Buerger) unit cell
>>>>>> algorithm is implemented in C++ [1], which guarantees shortest lengths but
>>>>>> not unique angles. From my understanding, the Krivy-Gruber algorithm would
>>>>>> also guarantee unique unit cell angles; however, it is only
>>>>>> implemented in Python [2]. Sorry to be so verbose, I just wanted to check
>>>>>> that I was on the right page.
>>>>>>
>>>>>> Would it be possible for me to implement the Krivy-Gruber in C++ by
>>>>>> adding the epsilon_relative parameter and following the procedure
>>>>>> found in the Python version?
>>>>>>
>>>>>> Many thanks,
>>>>>> -Martin
>>>>>>
>>>>>> [1]
>>>>>> http://cctbx.sourceforge.net/current/c_plus_plus/classcctbx_1_1uctbx_1_1fas…
>>>>>> [2]
>>>>>> http://cctbx.sourceforge.net/current/python/cctbx.uctbx.krivy_gruber_1976.h…
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Martin Uhrin Tel: +44
>>>>>> 207 679 3466
>>>>>> Department of Physics & Astronomy Fax:+44 207 679 0595
>>>>>> University College London
>>>>>> martin.uhrin.10(a)ucl.ac.uk
>>>>>> Gower St, London, WC1E 6BT, U.K. http://www.cmmp.ucl.ac.uk
>>>>>>
>>>>>> _______________________________________________
>>>>>> cctbxbb mailing list
>>>>>> cctbxbb(a)phenix-online.org
>>>>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>
>>
>>
>>
>
>
Re: [phenixbb] Using LigandFit to identify unknown density
by Pavel Afonine
Hi Maia,
phenix.refine refines occupancies during occupancy refinement, it
refines B-factors during B-factor refinement and it refines coordinates
during coordinate refinement. The B-factor restraints are applied at
the B-factor refinement step. phenix.refine iterates these steps as many
times as the main.number_of_macro_cycles parameter specifies (3 by
default). Obviously, no B-factor restraints are applied if you refine
occupancies only.
Yes, what Peter mentioned actually happens during refinement (if
B-factor refinement is enabled). That's what the B-factor restraints do
in general.
Pavel.
On 1/27/10 3:28 PM, Maia Cherney wrote:
> Hi Pavel, Peter,
>
> Thank you for your reply. My question is whether phenix.refine actually
> uses the B-factor restraints in the occupancy refinement. I did not
> give any restraints, so does it happen automatically? I like the
> idea that Peter mentioned that the restraints should make B-factors
> similar to those of surrounding molecules. Again, my question is: does
> phenix.refine actually use this approach?
>
> Maia
>
>
>
> Pavel Afonine wrote:
>> Hi Maia,
>>
>> first, I agree with Peter - the B-factor restraints should help, indeed.
>>
>> Second, I think we discussed this subject already on November 25, 2009:
>>
>> Subject: Re: [phenixbb] occupancy refinement
>> Date: 11/25/09 7:38 AM
>>
>> and I believe I didn't change my mind about it since then. I'm
>> appending that email conversation to the bottom of this email.
>>
>> Overall, if you get good 2mFo-DFc map and clear residual mFo-DFc map,
>> and ligand's B-factors are similar or slightly larger than those of
>> surrounding atoms, and refined occupancy looks reasonable, then I
>> think you are fine.
>>
>> Pavel.
>>
>>
>> On 1/27/10 2:05 PM, Maia Cherney wrote:
>>> Hi Pavel,
>>>
>>> I have six ligands at partial occupancies in my structure.
>>> Simultaneous refinement of occupancy and B factors in phenix gives a
>>> value of 0.7 for the ligand occupancy, which looks reasonable.
>>> How can phenix perform such a refinement, given that the occupancies
>>> and B factors are highly correlated? Indeed, you can
>>> increase/decrease the ligand occupancies while simultaneously
>>> increasing/decreasing their B factors without changing the R factor
>>> value. What criteria does phenix use in such a refinement if the R
>>> factor does not tell much?
>>>
>>> Maia
>>
>> ******* COPY (11/25/09)************
>>
>>
>>
>> On 11/25/09 7:38 AM, Maia Cherney wrote:
>>> Hi Pavel,
>>>
>>> It looks like all different refined occupancies starting from
>>> different initial occupancies converged to the same number upon
>>> going through very many cycles of refinement.
>>>
>>> Maia
>>>
>>>
>>> Pavel Afonine wrote:
>>>
>>>> Hi Maia,
>>>>
>>>> the atom parameters, such as occupancy, B-factor and even position,
>>>> are interdependent in some sense. That is, if you have a somewhat
>>>> incorrect occupancy, then B-factor refinement may compensate for
>>>> it; if you misplaced an atom, the refinement of its occupancy and/or
>>>> B-factor will compensate for this. Note that in all the above cases
>>>> the 2mFo-DFc and mFo-DFc maps will appear almost identical, as will
>>>> the R-factors.
>>>>
>>>> So, I think your goal of finding a "true" occupancy is hardly
>>>> achievable.
>>>>
>>>> Although, I think you can approach it by doing very many
>>>> refinements (say, several hundreds) (where you refine occupancies,
>>>> B-factors and coordinates) each refinement starting with different
>>>> occupancy and B-factor values, and make sure that each refinement
>>>> converges. Then select a subset of refined structures with similar
>>>> and low R-factors (discard those cases where refinement got stuck
>>>> for whatever reason and R-factors are higher) (and probably similar
>>>> looking 2mFo-DFc and mFo-DFc maps in the region of interest). Then
>>>> see where the refined occupancies and B-factors are clustering, and
>>>> the averaged values will probably give you approximate values
>>>> for occupancy and B. I did not try this myself but always wanted to.
>>>>
>>>> If you have a structure consisting of 9 carbons and one gold atom,
>>>> then I would expect that the "second digit" in gold's occupancy
>>>> would matter. However, if we speak about a dozen ligand atoms
>>>> (which are probably a combination of C, N, O) out of a few thousand
>>>> atoms in the whole structure, then I would not expect the
>>>> "second digit" to be visibly important.
>>>>
>>>> Pavel.
>>>>
>>>>
>>>> On 11/24/09 8:08 PM, chern wrote:
>>>>
>>>>> Thank you Kendall and Pavel for your responses.
>>>>> I really want to determine the occupancy of my ligand. I saw one
>>>>> suggestion to try different refinements with different occupancies
>>>>> and compare the B-factors.
>>>>> The occupancy with a B-factor that is at the level of the
>>>>> average protein B-factors is a "true" occupancy.
>>>>> I also noticed the dependence of the ligand occupancy on the
>>>>> initial occupancy. I saw differences of 10 to 15%, which is why
>>>>> I am wondering whether the second digit after the decimal point
>>>>> makes any sense.
>>>>> Maia
>>>>>
>>>>> ----- Original Message -----
>>>>> *From:* Kendall Nettles
>>>>> *To:* PHENIX user mailing list
>>>>> *Sent:* Tuesday, November 24, 2009 8:22 PM
>>>>> *Subject:* Re: [phenixbb] occupancy refinement
>>>>>
>>>>> Hi Maia,
>>>>> I think the criteria for occupancy refinement of ligands are
>>>>> similar to those for deciding to add an alt conformation for an amino
>>>>> acid. I don’t refine occupancy of a ligand unless the difference
>>>>> map indicates that we have to. Sometimes part of the ligand may be
>>>>> conformationally mobile and show poor density, but I personally
>>>>> don’t think this justifies occupancy refinement without evidence
>>>>> from the difference map. I agree with Pavel that you shouldn’t
>>>>> expect much change in overall statistics, unless the ligand has
>>>>> very low occupancy, or you have a very small protein. We
>>>>> typically see a 0.5-1% difference in R factors from refining with
>>>>> ligand versus without for nuclear receptor ligand binding domains
>>>>> of about 250 amino acids, and we see very small differences from
>>>>> occupancy refinement of the ligands.
>>>>>
>>>>> Regarding the error, I have noticed differences of 10 percent
>>>>> occupancy depending on what you set the starting occupancy to before
>>>>> refinement. That is, if the occupancy starts at 1, you
>>>>> might end up with 50%, but if you start it at 0.01, you might get
>>>>> 40%. I don’t have the expertise to explain why this is, but I
>>>>> also don’t think it is necessarily important. I think it is more
>>>>> important to convince yourself that the ligand binds how you
>>>>> think it does. With steroid receptors, the ligand is usually
>>>>> planar, and tethered by hydrogen bonds on two ends. That leaves
>>>>> us with four possible poses, so if in doubt, we will dock
>>>>> the ligand in all four orientations and refine. So far, we
>>>>> have had only one of several dozen structures where the ligand
>>>>> orientation was not obvious after this procedure. I worry about a
>>>>> letter to the editor suggesting that the electron density for the
>>>>> ligand doesn’t support the conclusions of the paper, not whether
>>>>> the occupancy is 40% versus 50%.
>>>>>
>>>>> You might also want to consider looking at several maps, such as
>>>>> the simple or simulated annealing composite omit maps. These can
>>>>> be noisy, so also try the kicked maps
>>>>> (http://www.phenix-online.org/pipermail/phenixbb/2009-September/002573.html),
>>>>> which I have become a big fan of.
>>>>>
>>>>> Regards,
>>>>> Kendall Nettles
>>>>>
>>>>> On 11/24/09 3:07 PM, "chern(a)ualberta.ca" <chern(a)ualberta.ca>
>>>>> wrote:
>>>>>
>>>>> Hi,
>>>>> I am wondering what the criteria are for occupancy refinement of
>>>>> ligands. I noticed that R factors change very little, but the ligand
>>>>> B-factors change significantly. On the other hand, the occupancy is
>>>>> refined to the second digit after the decimal point. How can I find
>>>>> out the error for the refined occupancy of ligands?
>>>>>
>>>>> Maia
>>>>>
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>>
>>
>
Re: [phenixbb] Using the Same Test Set in AutoBuild and Phenix.Refine
by Dale Tronrud
Thomas C. Terwilliger wrote:
> Hi Dale,
>
> Can you try something else:
>
> phenix.refine AutoBuild_run_12_/overall_best.pdb \
> refinement.input.xray_data.file_name=\
> AutoBuild_run_12/exptl_fobs_freeR_flags.mtz \
> refinement.main.high_resolution=2.2 refinement.main.low_resolution=20 \
> /usr/users/dale/geometry/chromophores/bcl_tnt.cif
>
>
> This differs from your run only by substituting
>
> AutoBuild_run_12/exptl_fobs_freeR_flags.mtz
>
> for your 2 refinement data files. This is the exact file that is used in
> refinement by AutoBuild.
>
I tried this command, very similar to yours:
phenix.refine AutoBuild_run_12_/overall_best.pdb \
refinement.input.xray_data.file_name=AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz \
refinement.main.high_resolution=2.2 refinement.main.low_resolution=20 \
/usr/users/dale/geometry/chromophores/bcl_tnt.cif output.prefix=junk2
The final output was:
F-obs:
AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz:FP,SIGFP
If previously used R-free flags are available run this command again
with the name of the file containing the original flags as an
additional input. If the structure was never refined before, or if the
original R-free flags are unrecoverable, run this command again with
the additional definition:
refinement.main.generate_r_free_flags=True
If the structure was refined previously using different R-free flags,
the values for R-free will become meaningful only after many cycles of
refinement.
Sorry: Please try again.
The output from mtz.dump for your .mtz is
Processing: AutoBuild_run_12_/exptl_fobs_phases_freeR_flags.mtz
Title: Resolve mtz file.
Space group symbol from file: P
Space group number from file: 212
Space group from matrices: P 43 3 2 (No. 212)
Point group symbol from file: 43
Number of crystals: 2
Number of Miller indices: 38159
Resolution range: 75.6238 2.14896
History:
From resolve_huge, 27/12/07 15:07:56
Crystal 1:
Name: HKL_base
Project: HKL_base
Id: 0
Unit cell: (169.1, 169.1, 169.1, 90, 90, 90)
Number of datasets: 1
Dataset 1:
Name: HKL_base
Id: 0
Wavelength: 0
Number of columns: 0
Crystal 2:
Name: allen-
Project: FMO-ct
Id: 2
Unit cell: (169.1, 169.1, 169.1, 90, 90, 90)
Number of datasets: 1
Dataset 1:
Name: 1
Id: 1
Wavelength: 0
Number of columns: 9
label #valid %valid min max type
H 38159 100.00% 0.00 38.00 H: index h,k,l
K 38159 100.00% 2.00 78.00 H: index h,k,l
L 38159 100.00% 0.00 55.00 H: index h,k,l
FP 38159 100.00% 32.00 15171.00 F: amplitude
SIGFP 38159 100.00% 23.00 1716.00 Q: standard deviation
PHIM 38159 100.00% -90.00 45.00 P: phase angle in degrees
FOMM 38159 100.00% 0.00 0.00 W: weight (of some sort)
FreeR_flag 38159 100.00% 0.00 19.00 I: integer
FC 38159 100.00% 0.00 0.00 F: amplitude
Dale Tronrud
> I agree that you should be able to use your original data file instead. A
> possible reason why this has failed is that the original data file has a
> couple of reflections for which there is no data... and which were tossed by
> AutoBuild before creating exptl_fobs_freeR_flags.mtz. Two files that
> differ only in reflections with no data will still give different
> checksums, I think.
>
> All the best,
> Tom T
>
>> Hi Dale,
>>
>>>> 1) Why you specify reflection MTZ file twice in phenix.refine script?
>>>>
>>>>
>>> I put the mtz in twice because if I put it in once phenix.refine
>>> complains that I have no free R flags. It seems to want one file with
>>> the amplitudes and another with the flags. Since I have both in the
>>> same file I put that file on the line twice and phenix.refine finds
>>> everything it needs.
>>>
>> phenix.refine looks for free-R flags in your main data file
>> (1M50-2.mtz). Optionally you can provide a separate file containing
>> free-R flags (I have to write about this in the manual). However, if
>> your 1M50-2.mtz contains free-R flags then you don't need to give it
>> twice. So clearly something is wrong at this step and we need to find
>> out what is wrong before doing anything else. Could you send the result
>> of the command "phenix.mtz.dump 1M50-2.mtz" to see what's inside of your
>> data file? Or I can debug it myself if you send me the data and model.
>>
>>> If the MD5 hash of the test set depends on the resolution then
>>> certainly
>>> I could be in trouble.
>> No. It must always use the original files before any processing.
>>
>>> Does the resolution limit affect the MD5 hash of the test set?
>>>
>> No. If it does then it is a very bad bug. I will play with this myself
>> later tonight.
>>
>>>> 3) Does this work:
>>>>
>>>> (...)
>>> I'll try these but it will take a bit of time.
>>>
>> Don't run it until completion. Just make sure it passed through the
>> processing step.
>>
>> Pavel.
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>
Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
by Xavier Brazzolotto
Hi Randy
Many thanks for your feedback and explanations.
I’ll check the nightly or wait until the next stable. I’ve seen an RC1 recently, meaning the next stable should come soon.
Best,
Xavier
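[Editor's note: for background on the tgamma detail in Randy's reply quoted below: a common portable workaround when a platform's tgamma is slow or missing is to route through the log-gamma function instead. This is only a generic sketch using Python's math module, not the actual replacement code in Phaser.]

```python
import math

def tgamma_via_lgamma(x):
    """Gamma(x) for x > 0, computed as exp(lgamma(x)).
    lgamma avoids intermediate overflow and is widely available."""
    return math.exp(math.lgamma(x))

# Compare against the library gamma for a few arguments.
for x in (0.5, 1.0, 3.5, 10.0):
    print(x, tgamma_via_lgamma(x), math.gamma(x))
```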
> On 4 July 2023 at 17:37, Randy John Read <rjr27(a)cam.ac.uk> wrote:
>
> Hi,
>
> Thanks for sending the log files!
>
> The jobs turn out not actually to be identical. The GUI automatically chose to use the intensity data (which is what we prefer for use in Phaser) whereas your job run from a script is using amplitude data. The issue I alluded to earlier occurs only for intensity data, because the analysis of those data involves applying different equations, which use a special function (tgamma from the Boost library). For some reason I don’t understand, when the Intel version of the tgamma algorithm is computed using the Rosetta functionality to run it on an ARM processor, it’s much much slower than other calculations.
>
> Just last week (right after I finally got an M2 MacBook Pro), we tracked this down and replaced the calls to Boost tgamma with alternative code, and that problem should not exist any more. You can use it already in Phenix by getting a recent nightly build, and I’ve asked the CCP4 people to compile a new version of Phaser and release it as an update to CCP4 as well.
>
> Best wishes,
>
> Randy
>
>> On 4 Jul 2023, at 12:05, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> wrote:
>>
>> For information
>>
>> Apple M2 running Ventura 13.4.1 with 64 GB memory
>> Phenix 1.20.1-4487 (the Intel build).
>>
>> I’ve run MR of the same dataset (2.15A - I422) with the same model both with the command line and through the GUI.
>>
>> Command line (phenix.phaser): 48 secs.
>> GUI (Phaser-MR simple one-component interface): 18 mins!
>>
>> The two log files are attached, in case they help.
>>
>>
>>
>>> On 4 July 2023 at 12:54, Luca Jovine <luca.jovine(a)ki.se> wrote:
>>>
>>> Hi Xavier and Randy, I'm also experiencing the same on a M2 Mac!
>>> -Luca
>>>
>>> -----Original Message-----
>>> From: <phenixbb-bounces(a)phenix-online.org> on behalf of Xavier Brazzolotto <xbrazzolotto(a)gmail.com>
>>> Date: Tuesday, 4 July 2023 at 12:38
>>> To: Randy John Read <rjr27(a)cam.ac.uk>
>>> Cc: PHENIX user mailing list <phenixbb(a)phenix-online.org>
>>> Subject: Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
>>>
>>>
>>> Hi Randy,
>>>
>>>
>>> Indeed I’m running Phenix on a brand new M2 Mac.
>>> I will benchmark the two processes (GUI vs command line) and post them here.
>>>
>>>
>>>> On 4 July 2023 at 12:32, Randy John Read <rjr27(a)cam.ac.uk> wrote:
>>>>
>>>> Hi Xavier,
>>>>
>>>> We haven’t noticed that, or at least any effect is small enough not to stand out. There shouldn’t be a lot of overhead in communicating with the GUI (i.e. updating the terse log output and the graphs) but if there is we should look into it and see if we can do something about it.
>>>>
>>>> Could you tell me how much longer (say, in percentage terms) a job takes when you run it through the GUI compared to running the same job outside the GUI on the same computer? Also, it’s possible the architecture matters, so could you say which type of computer and operating system you’re using? If it’s a Mac, is it one with an Intel processor or an ARM (M1 or M2) processor? (By the way, we finally managed to track down and fix an issue that caused Phaser to run really slowly on an M1 or M2 Mac when using the version compiled for Intel, once I got my hands on a new Mac.)
>>>>
>>>> Best wishes,
>>>>
>>>> Randy
>>>>
>>>>> On 4 Jul 2023, at 10:44, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> wrote:
>>>>>
>>>>> Dear Phenix users
>>>>>
>>>>> I’ve noticed that molecular replacement was clearly slower while running from the GUI compared to using the command line (phenix.phaser).
>>>>>
>>>>> Did you also observe such behavior?
>>>>>
>>>>> Best
>>>>> Xavier
>>>>
>>>>
>>>> -----
>>>> Randy J. Read
>>>> Department of Haematology, University of Cambridge
>>>> Cambridge Institute for Medical Research Tel: +44 1223 336500
>>>> The Keith Peters Building
>>>> Hills Road E-mail: rjr27(a)cam.ac.uk
>>>> Cambridge CB2 0XY, U.K. www-structmed.cimr.cam.ac.uk
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>> <command_line_PHASER.log><GUI_phaser.log>
>
Re: [phenixbb] Table 1 successor in 3D?
by Gerard Bricogne
Dear John and Loes,
Thank you for reiterating on this BB your point about depositing
raw diffraction images. I will never disagree with any promotion of
that deposition, or praise of its benefits, given that I was one of
the earliest proponents and most persistently vociferous defenders of
the idea, long before it gained general acceptance (see Acta D65, 2009
176-185). There has never been any statement on our part that the
analysis done by STARANISO disposes of the need to store the original
images and to revisit them regularly with improved processing and/or
troubleshooting software. At any given stage in this evolution,
however, (re)processing results will need to be displayed, and it is
with the matter of what information about data quality is conveyed (or
not) by various modes of presentation of such results that Bernhard's
argument and (part of) our work on STARANISO are concerned.
Furthermore we have made available the PDBpeep server at
http://staraniso.globalphasing.org/cgi-bin/PDBpeep.cgi
that takes as input a 4-character PDB entry code and generates figures
from the deposited *merged amplitudes* associated with that entry. The
numbers coming out of a PDBpeep run may well have questionable
quantitative value (this is pointed out in the home page for that
server) but the 3D WebGL picture it produces has informative value
independently from that. Take a look, for instance, at 4zc9, 5f6m or
6c79: it is quite plain that these high-resolution datasets have
significant systematic incompleteness issues, a conclusion that would
not necessarily jump out of a Table 1 page, even after reprocessing
the original raw images, without that 3D display.
The truly pertinent point about this work in relation to keeping
raw images is that the STARANISO display very often suggests that the
merged data have been subjected to too drastic a resolution cut-off,
and that it would therefore be worth going back to the raw images and
to let autoPROC+STARANISO apply a more judicious cut-off. Sometimes,
however, as in the example given in Bernhard's paper, significant data
fail to be recorded because the detector was positioned too far from
the crystal, in which case the raw images would only confirm that
infelicity and would provide no means of remediating it.
With best wishes,
Gerard.
--
On Wed, Jun 06, 2018 at 09:35:38AM +0100, John R Helliwell wrote:
> Dear Colleagues
> Given that this message is now also placed on Phenixbb, we reiterate our
> key point that deposition of raw diffraction images offers flexibility to
> readers of our science results for their reuse, at no cost to the user.
> As with all fields our underpinning data should be FAIR (Findable,
> Accessible, Interoperable and Reusable). Possibilities for free storage of
> data are Zenodo, SBGrid and proteindiffraction.org (IRRMC).
> With respect to graphic displays of anisotropy of data Gerard's three
> figures are very informative, we agree.
> Best wishes
>
> Loes and John
>
> Kroon-Batenburg et al (2017) IUCrJ and Helliwell et al (2017) IUCrJ
>
> On Tue, Jun 5, 2018 at 4:49 PM, Gerard Bricogne <gb10(a)globalphasing.com>
> wrote:
>
> > Dear phenixbb subscribers,
> >
> > I sent the message below to the CCP4BB and phenixbb at the same
> > time last Friday. It went straight through to the CCP4BB subscribers
> > but was caught by the phenixbb Mailman because its size exceeded 40K.
> >
> > Nigel, as moderator of this list, did his best to rescue it, but
> > all his attempts failed. He therefore asked me to resubmit it, now
> > that he has increased the upper size limit.
> >
> > Apologies to those of you who are also CCP4BB subscribers, who
> > will already have received this message and the follow-up discussion
> > it has given rise to.
> >
> >
> > With best wishes,
> >
> > Gerard.
> >
> > ----- Forwarded message from Gerard Bricogne <gb10> -----
> >
> > Date: Fri, 1 Jun 2018 17:30:48 +0100
> > From: Gerard Bricogne <gb10>
> > Subject: Table 1 successor in 3D?
> > To: CCP4BB(a)JISCMAIL.AC.UK, phenixbb(a)phenix-online.org
> >
> > Dear all,
> >
> > Bernhard Rupp has just published a "Perspective" article in
> > Structure, accessible in electronic form at
> >
> > https://www.cell.com/structure/fulltext/S0969-2126(18)30138-2
> >
> > in which part of his general argument revolves around an example
> > (given as Figure 1) that he produced by means of the STARANISO server
> > at
> > http://staraniso.globalphasing.org/ .
> >
> > The complete results of his submission to the server have been saved
> > and may be accessed at
> >
> > http://staraniso.globalphasing.org/Gallery/Perspective01.html
> >
> > and it is to these results that I would like to add some annotations
> > and comments. To help with this, I invite the reader to connect to
> > this URL, type "+" a couple of times to make the dots bigger, and
> > press/toggle "h" whenever detailed information on the display, or
> > selection of some elements, or the thresholds used for colour coding
> > the displays, needs to be consulted.
> >
> > The main comment is that the WebGL interactive 3D display does
> > give information that makes visible characteristics that could hardly
> > be inferred from the very condensed information given in Table 1, and
> > the annotations will be in the form of a walk through the main
> > elements of this display.
> >
> > For instance the left-most graphical object (a static view of
> > which is attached as "Redundancy.png") shows the 3D distribution of
> > the redundancy (or multiplicity) of measurements. The view chosen for
> > the attached picture shows a strong non-uniformity in this redundancy,
> > with the region dominated by cyan/magenta/white having about twice the
> > redundancy (in the 6/7/8 range) of that which prevails in the region
> > dominated by green/yellow (in the 3/5 range). Clear concentric gashes
> > in both regions, with decreased redundancy, show the effects of the
> > inter-module gaps on the Pilatus 2M detector of the MASSIF-1 beamline.
> > The blue spherical cap along the a* axis corresponds to HKLs for which
> > no measurement is available: it is clearly created by the detector
> > being too far from the crystal.
> >
> > The second (central) graphical object, of which a view is given
> > in Figure 1 of Bernhard's article and another in the attached picture
> > "Local_I_over_sigI.png") shows vividly the blue cap of measurements
> > that were missed but would probably have been significant (had they
> > been measured) cutting into the green region, where the local average
> > of I/sig(I) ranges between 16 and 29! If the detector had been placed
> > closer, significant data extending to perhaps 3.0A resolution would
> > arguably have been measured from this sample.
> >
> > The right-most graphical object (of which a static view is
> > attached as "Debye-Waller.png") depicts the distribution of the
> > anisotropic Debye-Waller factor (an anisotropic generalisation of the
> > Wilson B) of the dataset, giving yet another visual hint that good
> > data were truncated by the edges of a detector placed too far.
> >
> > Apologies for such a long "STARANISO 101" tutorial but Bernhard's
> > invitation to lift our eyes from the terse numbers in Table 1 towards
> > 3D illustrations of data quality criteria was irresistible ;-) . His
> > viewpoint also agrees with one of the main purposes of our STARANISO
> > developments (beyond the analysis and remediation of anisotropy, about
> > which one can - and probably will - argue endlessly), namely to contribute
> > to facilitating a more direct and vivid perception by users of the
> > quality of their data (or lack of it) and to nurturing evidence-based
> > motivation to make whatever extra effort it takes to improve that
> > quality. In this case, the undeniable evidence of non-uniformity of
> > redundancy and of a detector placed too far would give immediate
> > practical guidance towards doing a better experiment, while statistics
> > in Table 1 for the same dataset would probably not ... .
> >
> > Thank you Bernhard!
> >
> >
> > With best wishes,
> >
> > Gerard,
> > for and on behalf of the STARANISO developers
> >
> >
> > ----- End forwarded message -----
> >
> > _______________________________________________
> > phenixbb mailing list
> > phenixbb(a)phenix-online.org
> > http://phenix-online.org/mailman/listinfo/phenixbb
> > Unsubscribe: phenixbb-leave(a)phenix-online.org
> >
>
>
>
> --
> Professor John R Helliwell DSc
7 years, 7 months
Re: [phenixbb] Validation of structure with modified residue
by Nigel Moriarty
Xavier
I'm happy to take a closer look. Send me the two entities (SER and ligand)
along with the restraints and I will try to help.
Cheers
Nigel
---
Nigel W. Moriarty
Building 33R0349, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
Berkeley, CA 94720-8235
Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov
Fax : 510-486-5909 Web : CCI.LBL.gov
ORCID : orcid.org/0000-0001-8857-9464
On Thu, Apr 21, 2022 at 8:32 AM Xavier Brazzolotto <xbrazzolotto(a)gmail.com>
wrote:
> I recall now how I used to add the glycans before the Carbohydrate Module
> in Coot.
> I have to make the cif file with the putative O atom of SER OG and remove
> it before making the bond.
> I will try that; it will certainly correct my geometry issue, but I am
> still not sure about the final file and validation.
>
> Fingers crossed...
>
> Le 21 avr. 2022 à 17:09, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> a
> écrit :
>
> After some careful inspection.
> The geometry on the C atom of the ligand is weird; I don’t get something
> close to tetrahedral (or similar).
> Probably some angles are missing or I did something wrong with the ligand
> cif file.
> Not fixed yet
>
> Le 21 avr. 2022 à 13:39, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> a
> écrit :
>
> I’ve re-processed the structure separating the SER residue from the ligand
> part. Now I have an independent ligand.
> In the « Custom Geometry Restraints » I’ve defined the bond between OG
> and the carbon atom of the ligand and I’ve defined the angles (I’ve used
> the values from the previously determined eLBOW run off the SER-bound
> ligand complex), saved the restraints and launched the refinement. At a
> first look it was processed correctly and the final cif file now has the
> whole protein in Chain A.
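For reference, the « Custom Geometry Restraints » box in the GUI corresponds to the geometry_restraints.edits scope of phenix.refine; a covalent SER-ligand link can be sketched along these lines (the atom selections, residue numbers and target values below are hypothetical placeholders, not the ones from this structure):

```
refinement.geometry_restraints.edits {
  bond {
    action = *add
    atom_selection_1 = chain A and resseq 203 and name OG
    atom_selection_2 = chain A and resseq 401 and name C1
    distance_ideal = 1.40
    sigma = 0.02
  }
  angle {
    action = *add
    atom_selection_1 = chain A and resseq 203 and name CB
    atom_selection_2 = chain A and resseq 203 and name OG
    atom_selection_3 = chain A and resseq 401 and name C1
    angle_ideal = 113.0
    sigma = 3.0
  }
}
```

When saved from the GUI, these edits end up in the parameter (.eff) file that is passed to phenix.refine along with the ligand cif.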
>
> I’ve used prepare PDB deposition using the FASTA sequence of the protein
> (wonder if I have to provide the ligand CIF file and add more options) and
> ran phenix.get_pdb_validation: the report looks ok for the protein and some
> other basic ligands (sugars, buffer, glycerol, etc...) but the ligand of
> interest was not processed (EDS FAILED...). In the PDB file, all these
> extra ligands are also in Chain A, with water in chain B.
>
> If I try the validation through the website (PDBe@EBI) with both cif
> files from the Refine or the Prepare_PDB_Deposition process, both seem to
> crash the server as it takes forever without Finalizing...
>
> I wonder if I am missing something… Maybe a declaration of the removal of
> atoms: HG bound to OG in SER and/or removal of one H from the carbon of the
> ligand involved in the bond?
>
> Xavier
>
> Le 21 avr. 2022 à 08:06, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> a
> écrit :
>
> Thank you for your feedback.
>
> @Paul, I’ve run the « Prepare model for deposition » with the option
> modified residue (SLG). Not sure it will change if I change the name as it
> is already in the PDB database, but I will give it another try.
>
> I think that I will have to describe only the ligand and add some
> parameters restricting distances and angles between the OG of SER and the
> ligand; I think this is the right way.
> @ Nigel, is that what you mean with « details » ? If you have any other «
> tips/tricks » they are welcome.
>
> Best
> Xavier
>
> Le 21 avr. 2022 à 02:47, Nigel Moriarty <nwmoriarty(a)lbl.gov> a écrit :
>
> Xavier
>
> Paul's point is very valid because the "Prepare for Deposition" step is
> what generates the sequence (which is the crucial point here) for
> deposition. However, because you have "created" a new amino acid, there
> will still be issues as highlighted by Pavel. It is a corner case.
>
> One small additional point is that SLG is already taken in the PDB Ligand
> list. There are tools in Phenix to find an unused code.
>
> Can you re-engineer it with SER+ligand? This will solve the problem using
> the current Phenix version. I can help with the details if needed.
>
> Cheers
>
> Nigel
>
> ---
> Nigel W. Moriarty
> Building 33R0349, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> Berkeley, CA 94720-8235
> Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov <NWMoriarty(a)LBL.gov>
> Fax : 510-486-5909 Web : CCI.LBL.gov <http://cci.lbl.gov/>
> ORCID : orcid.org/0000-0001-8857-9464
>
>
> On Wed, Apr 20, 2022 at 5:02 PM Paul Adams <pdadams(a)lbl.gov> wrote:
>
>>
>> Please also remember that you need to run “Prepare model for PDB
>> deposition” (in the GUI under "PDB Deposition") on the mmCIF file you get
>> from phenix.refine. This provides important information that is required
>> for the deposition at the PDB.
>>
>> On Apr 20, 2022, at 1:58 PM, Xavier Brazzolotto <xbrazzolotto(a)gmail.com>
>> wrote:
>>
>> Dear Phenix users,
>>
>> I don’t know if my problem is related to Phenix but for information I’m
>> running Phenix 1.20.1-4487 under MacOS 12.3.1.
>>
>> I’ve finalized a structure where a ligand covalently modified the protein.
>>
>> I’ve generated the modified residue (named SLG for serine modified by
>> ligand). For this I’ve generated the molecules in SMILES and used eLBOW to
>> generate the restraints. Then I’ve modified the cif file defining the
>> molecule as a L-peptide and replacing the atom names of the Serine part
>> (CA, CB, OG, C, O, N, and OXT)
>> In coot (from CCP4 : 0.9.6 EL), I’ve used the modified cif file and it
>> allowed merging of the modified residue into the polypeptide chain as
>> expected and further refinements went without any issue in Phenix
>> (providing the modified cif file of course). Everything seems well
>> interpreted. So far so good.
>>
>> However, now I would like to validate the structure and both Phenix
>> validation tool and the PDB web server do not accept the final cif file.
>>
>> Checking this file I’ve noticed that the protein seems split into 3
>> pieces (chain A, first residue up to the one before the modified residue;
>> chain B the modified residue by itself described as HETATM and chain C the
>> rest of the polypeptide up to the C-ter).
>> The PDB file presents only one chain A for the whole protein with the
>> modified residue...
>>
>> I don’t know if this is an issue with Phenix generating this final cif
>> file in this specific case or if I need to modify this final file by hand ?
>>
>> Any help is welcome.
>> Thanks
>>
>> Xavier
>>
>>
>>
>>
>> --
>> Paul Adams (he/him/his)
>> Associate Laboratory Director for Biosciences, LBL (
>> https://biosciences.lbl.gov/leadership/)
>> Principal Investigator, Computational Crystallography Initiative, LBL (
>> https://cci.lbl.gov)
>> Vice President for Technology, the Joint BioEnergy Institute (
>> http://www.jbei.org)
>> Principal Investigator, ALS-ENABLE, Advanced Light Source (
>> https://als-enable.lbl.gov)
>> Division Deputy for Biosciences, Advanced Light Source (
>> https://als.lbl.gov)
>> Laboratory Research Manager, ENIGMA Science Focus Area (
>> http://enigma.lbl.gov)
>> Adjunct Professor, Department of Bioengineering, UC Berkeley (
>> http://bioeng.berkeley.edu)
>> Member of the Graduate Group in Comparative Biochemistry, UC Berkeley (
>> http://compbiochem.berkeley.edu)
>>
>> Building 33, Room 250
>> Building 978, Room 4126
>> Building 977, Room 268
>> Tel: 1-510-486-4225
>> http://cci.lbl.gov/paul
>> ORCID: 0000-0001-9333-8219
>>
>> Lawrence Berkeley Laboratory
>> 1 Cyclotron Road
>> BLDG 33R0345
>> Berkeley, CA 94720, USA.
>>
>> Executive Assistant: Michael Espinosa [ MEEspinosa(a)lbl.gov ] [
>> 1-510-333-6788 ]
>> Phenix Consortium: Ashley Dawn [ AshleyDawn(a)lbl.gov ][ 1-510-486-5455 ]
>>
>> --
>>
>
>
>
>
>
>
>
3 years, 9 months
Re: [phenixbb] Autosol: MIRAS
by Gabor Bunkoczi
Dear AK,
if you just need the Se positions, you can run Phaser MR-SAD (available
from the GUI) using your partial model as a starting point, together with
the Se-Met data. Depending on the model quality (and the model does not
have to be very good), it may be able to locate the Se-sites.
BW, Gabor
On Sep 6 2012, ash.k(a)aol.com wrote:
> Satisfactory map means: I am expecting a coiled coil or helix bundle type
> of assembly and I could see some densities appropriate for helices. There
> are proper solvent channels and continuous stretches of densities.
>
> More to add about the data: the data is anisotropic, and the longer 'c'
> axis and the alignment of helical density along the c axis support this.
> This also makes me think that perhaps the map is sensible.
>
> A few rounds of model building, refinement and DM have been successful in
> assigning around 10 poly-Ala helices, and their distribution looks sensible
> from a packing point of view. Now the problem is to assign the side chains.
> I was hoping to make use of Se locations for this. R and Rfree are still
> random, in the range of 50%. I am not sure, but it could be perhaps
> because most of the scattering material is still unassigned.
>
>
>
>
>
>
>-----Original Message-----
>From: Francis E Reyes <Francis.Reyes(a)colorado.edu>
>To: PHENIX user mailing list <phenixbb(a)phenix-online.org>
>Cc: phenixbb <phenixbb(a)phenix-online.org>
>Sent: Thu, Sep 6, 2012 8:07 am
>Subject: Re: [phenixbb] Autosol: MIRAS
>
>
> This one is solvable, but with extreme difficulty. I recently completed a
> structure solution with experimental phases starting at 5.0 A using phase
> information from multiple derivatives.
>
>
>How would you describe a somewhat satisfactory map?
>
>
>F
>
>On Sep 5, 2012, at 7:08 PM, ash.k(a)aol.com wrote:
>
>
>
>
>
>
>
>Hi Shya,
>
> I did a wavelength scan, got a good signal for Se and used appropriate
> wavelengths for data collection and also used experimental f' and f''
> values for phasing. I think the reasons SAD or MAD with the SeMet data did
> not work are (i) low resolution: 3.7 A for SeMet (anomalous data is only up
> to 4.8 A), and (ii) I should have mentioned this earlier: 3 out of 6 Se are
> very close to the N-terminus, possibly they are disordered. The unit cell is
> also somewhat big: 100, 120 and 320 A; F222 space group.
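As a rough sanity check of how much anomalous signal 6 Se in ~200 residues could give, the classic Crick/Magdoff-style Bijvoet-ratio estimate can be sketched (the atom count per residue, f'' value and Z_eff are textbook rule-of-thumb assumptions, not values from this dataset):

```python
import math

def bijvoet_ratio(n_anom, f_double_prime, n_atoms, z_eff=6.7):
    """Rule-of-thumb expected anomalous signal for a protein crystal:

        <|dF|>/<F> ~ sqrt(n_anom / (2 * n_atoms)) * (2 * f'') / Z_eff

    n_anom:  number of anomalous scatterers in the asymmetric unit
    n_atoms: total number of non-H protein atoms
    z_eff:   effective normal-scattering power per atom (~6.7 e for protein)
    """
    return math.sqrt(n_anom / (2.0 * n_atoms)) * (2.0 * f_double_prime) / z_eff

# 6 Se, f'' ~ 3.8 e at the Se K peak, ~8 non-H atoms x 200 residues
print(bijvoet_ratio(6, 3.8, 200 * 8))
```

With these assumptions the expected signal comes out at a few per cent, which is measurable in principle but easily swamped by weak, anisotropic low-resolution data.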
>
>AK
>
>
>
>-----Original Message-----
>From: Shya Biswas <shyabiswas(a)gmail.com>
>To: PHENIX user mailing list <phenixbb(a)phenix-online.org>
>Sent: Thu, Sep 6, 2012 7:29 am
>Subject: Re: [phenixbb] Autosol: MIRAS
>
>
>Hi AK,
>Did you do a wavelength scan when you collected the Se dataset? You
>need to put the values of f' and f'' from your wavelength scan in
>order to locate the heavy atom sites; 6 methionines should be enough to
>phase your molecule.
>Shya
>
>On Wed, Sep 5, 2012 at 9:25 PM, <ash.k(a)aol.com> wrote:
>> Hi all,
>>
>> I am trying to solve a structure through experimental phasing using
>> AUTOSOL. I have a couple of heavy atom derivative datasets (Hg, La, Eu,
>> Cd) and also a SeMet data. Unfortunately all the datasets are of low
>> resolution (3.7-4.2A) and there are possibly 4-8 molecules in the asu.
>> MIR, SAD and MAD alone did not give any convincing solution.
>>
>> However, MIRAS, with a combination of few heavy atom datasets and the
>> anomalous data from SeMet crystals, gave a somewhat satisfactory map.
>> But the heavy atom site picked by AUTOSOL list only one of the heavy
>> atoms i.e. Lanthanum. In another set of run, the solution of which was
>> not convincing, the heavy atom substructure had only Hg. There are 6 Met
>> out of 200 residues in one molecule and mass spec results show that Se
>> incorporation is 100%.
>>
>> Now, my doubt is: why does the heavy atom substructure contain only
>> La and how can I get the substructure involving Se from this solution
>> (or the datasets used)? Se location is going to help me a lot for
>> finding a starting point to assign side chains.
>>
>> Any suggestion would be greatly appreciated.
>>
>> Thanks
>> AK
>>
--
##################################################
Dr Gabor Bunkoczi
Cambridge Institute for Medical Research
Wellcome Trust/MRC Building
Addenbrooke's Hospital
Hills Road
Cambridge CB2 0XY
##################################################
13 years, 5 months