Search results for query "look through"
- 527 messages
Re: [phenixbb] Third issue (...) - P.S.
by Frank von Delft
Thanks for the elaboration. All I meant was that figure 2 did not show
improvements (points pretty much on the y=x line), while figure 1 showed
huge improvements (points below the y=x line). I should have added that
in the past I've had puzzled questions around here about phenix results
at lower resolutions, and I always wondered whether weight optimization
may be an issue.
So maybe what I was looking for was a figure like figure 2, but of the
old optimization.
Cheers
Frank
On 29/07/2011 17:41, Pavel Afonine wrote:
> Hi Frank,
>
> thanks for prompting me to have another look at this! Yes, a quick
> section reiterating the results shown in both figures would be
> definitely helpful. Ok, I'm doing it now (better late than never!):
>
> Figure 1 (old version vs new version), going from left to right, top to
> bottom:
>
> - The new procedure always produces a smaller Rfree-Rwork gap (except for two
> outliers out of ~110 cases). In a few dozen cases this means going from an
> Rfree-Rwork gap of ~10-14% down to 8% or less, which I believe is an improvement.
>
> - The MolProbity clashscore gets much lower in all cases, and in many cases
> it goes down from clashscores of >150-200 to less than 50. Again, I take this
> as an improvement.
>
> - Ramachandran plot: always fewer outliers, and in a few dozen cases the
> fraction goes down from 10-25% to less than 5%. That's good too!
>
> - Always fewer rotamer outliers. Not bad.
>
> - <Bij> for bonded atoms always gets smaller. In many cases it goes down
> from a ridiculously high ~30-50 A**2 to less than 20 or so. There are
> outliers which, as pointed out in the text, need investigation.
>
> - Regarding the Rfree differences (the last graphic at the bottom), almost all
> of them stay within the "equally good range of values" defined in the text.
> Some bad outliers are due to 1) suboptimal NCS group selection (which was done
> automatically using the old procedure, not the new one that does it in torsion
> angle space), 2) not using TLS even though it was used originally, and other
> issues that need investigation. There are improvements too.
>
> Figure 2 (with weight optimization vs without weight optimization):
> going from left to right, top to bottom:
>
> - Rfree-Rfree gets much better in a few dozen cases, or stays in the
> "equally good range of values".
>
> - The clashscore either stays the same or gets smaller, which is good. The same
> holds for Ramachandran and rotamer outliers.
>
> - Except for two outliers (out of ~110 cases in total), Rfree(opt)-Rfree(no
> opt) is consistently smaller, with a number of cases showing significant
> improvement (more than ~2% or so).
>
> Thanks again for your comments, and prompting me to spell out a proper
> analysis of the presented pictures.
>
> Pavel.
>
>
> On 7/29/11 1:21 AM, Frank von Delft wrote:
>> Hi, just scanned through the article on automatic weight adjustment.
>>
>> If I had to summarise figures 1 and 2, I'd have to conclude that the
>> latest weight optimization only occasionally produces better results
>> than not doing it (fig 2); and that therefore the old optimization
>> was considerably worse than doing nothing at all.
>>
>> Or have I misinterpreted it?
>> Cheers
>> phx
>>
>>
>>
>> On 28/07/2011 19:59, Nigel Moriarty wrote:
>>> Dear Colleagues,
>>>
>>> I am pleased to announce the publication of the third issue of the
>>> Computational Crystallography Newsletter:
>>>
>>> http://www.phenix-online.org/newsletter/
>>>
>>> A listing of the articles and short communications is given below.
>>> Please note that the newsletter accepts articles of a general nature
>>> of interest to all crystallographers. Please send any articles to me
>>> at
>>> NWMoriarty(a)lbl.gov noting that there is a Word Template on the website
>>> to streamline production.
>>>
>>> Articles
>>> --------
>>>
>>> Improved target weight optimization in phenix.refine
>>> Mite-y lysozyme crystal and structure
>>>
>>>
>>> Short communications
>>> --------------------
>>>
>>> A lightweight, versatile framework for visualizing reciprocal-space data
>>> An extremely fast spotfinder for real-time beamline applications
>>> Hints for running phenix.mr_rosetta
>>>
>>>
>>> Cheers
>>> Nigel
>>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
14 years, 6 months
Re: [phenixbb] phenix.automr high zscore but too many clashes
by Georg Zocher
Dear Zach,
are you sure about your space group? And what about the Matthews
coefficient (http://www.ruppweb.org/Mattprob/)?
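For reference, the quantity that server works from is the Matthews coefficient; the relationship below is the standard one, and any numbers you plug into it are of course your own:
$V_M = V_{cell} / (Z \cdot M_W)$ in Å^3/Da, with solvent fraction $\approx 1 - 1.23 / V_M$.
Typical protein crystals fall around $V_M$ = 1.7-3.5 Å^3/Da, so if squeezing a third copy into the cell pushes $V_M$ well below ~1.7, packing clashes like the ones in the table below would not be surprising.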
Best Regards,
Georg
Andreas Förster wrote:
> Hey Zach,
>
> remove the loops from your search model and rerun the MR.
> Alternatively, increase the maximum number of clashes allowed until you
> get a solution written out. See where your solutions clash. Again, if
> it's in a loop region, cut the loop and rerun. Your z-scores look
> great. (Notice that you have zed-scores when you run phaser through
> ccp4i and zee-scores when you run phaser through phenix.)
>
>
> Andreas
>
>
>
> zach powers wrote:
>
>> Hi,
>>
>>
>> I have a problem and I wonder if someone has had a similar experience. I
>> have been using the phenix.automr program to find an MR solution for a
>> protein. When I ask Phaser to use copies=2 I find a number of solutions
>> with poor z-scores (3-4). When I ask it to use copies=3, I get great
>> Z-scores (see below) but no solution due to the high number of clashes
>> (>100!).
>>
>> As an x-ray newbie I am a bit perplexed about what to make of this: the
>> solution is not good because there is not enough space in the unit cell
>> to comfortably accommodate all three molecules, yet the Z-score indicates
>> the structure is good.
>>
>> My structure does have several loops and these may be contributing to
>> the clashing residues. As a newbie I have a newbie question - what does
>> this mean (great z-score but no solutions)? Is this non-solution a
>> possible solution if I play with it, or is the packing simply too tight
>> and the Z-scores are not valid?
>>
>> In the meantime I am chopping my protein into sub-domains as recommended
>> in the Phaser documentation, but if anyone has seen something like this
>> before, any suggestions are welcome.
>>
>> thanks
>> zach charlop-powers
>>
>>
>>
>>
>> Packing Table: Space Group P 3
>> ------------------------------
>> Solutions accepted if number of clashes = 135 (lowest number of clashes in list)
>> provided this number of clashes <= 10 (maximum number of allowed clashes)
>> # #Clashes # Accepted Annotation
>> 1 188 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=10.4
>> 2 156 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=10.2
>> 3 151 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=9.8
>> 4 146 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=9.7
>> 5 187 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=16.3 TFZ=10.5
>> 6 202 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=9.2
>> 7 170 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=9.1
>> 8 144 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=16.3 TFZ=10.2
>> 9 177 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=9.1
>> 10 153 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.9
>> 11 150 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.8
>> 12 217 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.6
>> 13 182 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.6
>> 14 135 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.5
>> 15 175 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.3
>> 16 159 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.3
>> 17 192 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.3
>> 18 165 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=16.3 TFZ=9.2
>> 19 169 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=16.3 TFZ=9.1
>> 20 158 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.0
>> 21 173 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=8.0
>> 22 154 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=16.3 TFZ=8.9
>> 23 206 NO RFZ=20.3 TFZ=18.2 PAK=10 LLG=431 RFZ=18.5 TFZ=7.8
>>
>> 0 accepted of 23 solutions
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>>
>
>
--
Universität Tübingen
Interfakultäres Institut für Biochemie
Dr. Georg Zocher
Hoppe-Seyler-Str. 4
72076 Tuebingen
Germany
Fon: +49(0)-7071-2973374
Mail: georg.zocher(a)uni-tuebingen.de
http://www.ifib.uni-tuebingen.de
16 years, 9 months
Re: [cctbxbb] Git
by Graeme.Winter@diamond.ac.uk
Nick
I offer an opinion here - git is like C++, in the same way that SVN is like Fortran 77. You can write Fortran-style code in C++ just fine; you need g++ to compile it, but that's it - you can do it without using any C++ idioms.
Or you can go the full monty and code with STL metaprogramming and such - much more complex and more powerful, but not mandatory.
Likewise, you can use git like svn, which works a little better and is perfectly OK. It does not stop other people from using branches and such - you just work off the trunk as you do with SVN and let them get on with it. Every so often you will see a massive commit come in from someone who did use git more powerfully, but that is no different from someone landing a big change over svn.
What it does mean, though, is that if you have a bunch of changes and one of them is a bug fix, you can easily cherry-pick just the bug fix to push back, leaving the rest of the work in progress … in progress.
I was that old-fashioned developer Luis alluded to: I did not like it much when I started, and now I find it *awesome* - you can commit as you go and only push when you are ready, which makes it much easier to keep your head clear on big pieces of work.
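To make that concrete, a minimal sketch of that workflow (the branch name and commit messages are made up):
# commit locally as you go; nothing reaches the server until you push
git checkout -b wip-refactor
git commit -am "WIP: refactor reflection handling"
git commit -am "Fix off-by-one in scaling loop"
# push only the bug fix back, leaving the rest of the work in progress
git checkout master
git cherry-pick wip-refactor      # picks the tip commit, i.e. the bug fix
git push origin master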
Cheers Graeme
On 12 Jan 2016, at 17:52, Nicholas Sauter <nksauter(a)lbl.gov> wrote:
From the DIALS-West perspective, the switch to git has been a stumbling block to participation. I would have to agree with Ralf that git seems to be a tool for very smart people but not for folks who just want a simple tool for managing code.
I can't agree with Markus that a linear history is dispensable. In fact, this is one feature in svn that I've found very helpful over the years. If a feature is broken today, but I know for a fact that it worked sometime in the past, I simply do a svn update -r "[datestamp]" to narrow down the exact date when the feature became broken, then I isolate the exact commit and look at the code. Can this be done with git or does it even make sense if there is no concept of linear change?
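For what it's worth, both pieces of that workflow have git counterparts; a rough sketch (the dates and tag names are placeholders):
# check out the tree as it was at a given date on master
git checkout $(git rev-list -n 1 --before="2015-12-01" master)
# or let git search for the first broken commit automatically
git bisect start
git bisect bad                 # the current revision is broken
git bisect good v1.0           # a revision known to be good
# ... test each revision git checks out, marking it good or bad ...
git bisect reset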
The git environment seems a bit chaotic, and not conducive to close cooperation, from what I can see now. More experience with it may convince me otherwise.
Nick
Nicholas K. Sauter, Ph. D.
Computer Staff Scientist, Molecular Biophysics and Integrated Bioimaging Division
Lawrence Berkeley National Laboratory
1 Cyclotron Rd., Bldg. 33R0345
Berkeley, CA 94720
(510) 486-5713
On Tue, Jan 12, 2016 at 2:22 AM, <luis.fuentes-montero(a)diamond.ac.uk> wrote:
Hi fellows,
I believe it is not a bad idea to move to https://github.com (the servers) first and decide later whether to move to git (the CLI tool). This can be an incremental way to do the transition, and it will allow really old-fashioned developers (those who refuse to learn git) or conservative policies to survive and still move to a new, more reliable code repository.
Just my thoughts,
Greetings,
Luiso
From: cctbxbb-bounces(a)phenix-online.org On Behalf Of markus.gerstel(a)diamond.ac.uk
Sent: 12 January 2016 09:56
To: cctbxbb(a)phenix-online.org
Subject: Re: [cctbxbb] Git
Dear Luc,
We actually had a look at the quality of the Github svn interface when we prepared for the DIALS move. My verdict was that it is surprisingly useful.
All the possible git operations are mapped rather well onto a linear SVN history. All the branches and tags are accessible either by checking out the root of the repository, ie.
svn co https://github.com/dials/dials.git (not recommended, as, say, any new branch/tag will probably result in a huge update operation for you)
or by checking them out directly, ie.
svn co https://github.com/dials/dials.git/branches/fix_export
or svn co https://github.com/dials/dials.git/tags/1.0
Forks by other people are, as is the case when using git, simply not visible. I agree that looking at the project history through SVN may not be as clear as the git history. I wonder how relevant this is though, as you can explore the history, without running any commands, on the Github website, eg. https://github.com/dials/dials/network, https://github.com/dials/dials/commits/master, etc.
Creating new branches and tags and merging stuff via SVN would be a major operation. However, that has always been the case with SVN, and for that reason one generally just does not do these things in SVN.
But for an SVN user group those operations would not be important – you only ever need to merge stuff if you create branches – so I don’t really see the problem. If you want to tag releases you can do that on the Github website as well as with a git command.
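For instance, tagging a release from the command line is just (the tag name is a placeholder):
git tag -a 1.1 -m "release 1.1"
git push origin 1.1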
In summary, I do recognize that SVN users will have difficulties participating in branched development, and in particular will not be able to quickly switch between branches.
But I don’t think that this will be a problem, or that there is a need for a policy to keep the history linear.
Best wishes
-Markus
From: cctbxbb-bounces(a)phenix-online.org On Behalf Of Luc Bourhis
Sent: 11 January 2016 16:45
To: cctbx mailing list
Subject: Re: [cctbxbb] Git
Hi Graeme,
Can we revisit the idea of moving to git for cctbx?
This brings to mind a question I have been asking myself since the subject has been brought forth. The idea Paul wrote about on this list was a move to Github. I guess some, perhaps many, developers will keep interacting with the repository using subversion. I am worried this would clash with the workflow of those of us who would go the native git way. By that I mean creating many branches and merge points, which one would merge into the official repository when ready. I am worried the history would look very opaque for the subversion users. I would even probably create a fork on Github, making it even more opaque for a tool like subversion. Has anybody thought about such issues? My preferred solution would be for everybody to move to git but I don’t think that’s realistic. At the other end of the spectrum, there is putting in place a policy to keep the history linear.
Best wishes,
Luc
--
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/cctbxbb
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/cctbxbb
10 years
Re: [cctbxbb] bootstrap.py build on Ubuntu
by David Waterman
Hi Billy,
I'm replying on this old thread because I have finally got round to trying
out a bootstrap build for DIALS again on Ubuntu, having waited for updates
to the dependencies and updated the OS to 16.04.
The good news is, the build ran through fine. This is the first time I've
had a bootstrap build complete without error on Ubuntu, so thanks to you
and the others who have worked on improving the build in the last few
months!
The bad news is I'm getting two failures in the DIALS tests:
dials/test/command_line/tst_export_bitmaps.py
dials_regression/test.py
Both are from PIL
File
"/home/fcx32934/dials_test_build/base/lib/python2.7/site-packages/PIL/Image.py",
line 401, in _getencoder
raise IOError("encoder %s not available" % encoder_name)
IOError: encoder zip not available
Indeed, from base_tmp/imaging_install_log it looks like PIL is not
configured properly
--------------------------------------------------------------------
PIL 1.1.7 SETUP SUMMARY
--------------------------------------------------------------------
version 1.1.7
platform linux2 2.7.8 (default_cci, Jun 10 2016, 16:04:32)
[GCC 5.3.1 20160413]
--------------------------------------------------------------------
*** TKINTER support not available
*** JPEG support not available
*** ZLIB (PNG/ZIP) support not available
*** FREETYPE2 support not available
*** LITTLECMS support not available
--------------------------------------------------------------------
Any ideas? I have zlib headers but perhaps PIL can't find them.
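In case it is useful, a few quick checks one could run on the failing machine (the package name is the usual one on Ubuntu, and the path under base_tmp is an assumption):
dpkg -l zlib1g-dev                 # is the zlib development package installed?
ls /usr/include/zlib.h             # and is the header where the build would look?
grep -i zlib ~/dials_test_build/base_tmp/imaging_install_log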
On a related note, the free version of PIL has not been updated for years.
The replacement Pillow has started to diverge. I first noticed this when
Ubuntu 16.04 gave me Pillow 3.1.2 and my cctbx build with the system python
produced failures because it no longer supports certain deprecated methods
from PIL. I worked around that in r24587, but these things are a losing
battle. Is it time to switch cctbx over to Pillow instead of PIL?
Cheers
-- David
On 7 January 2016 at 18:12, Billy Poon <bkpoon(a)lbl.gov> wrote:
> Hi all,
>
> Since wxPython was updated to 3.0.2, I have been thinking about updating
> the other GUI-related packages to more recent versions. I would probably
> update to the latest, stable version that does not involve major changes to
> the API so that backwards compatibility is preserved. Let me know if that
> would be helpful and I can prioritize the migration and testing.
>
> --
> Billy K. Poon
> Research Scientist, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Road, M/S 33R0345
> Berkeley, CA 94720
> Tel: (510) 486-5709
> Fax: (510) 486-5909
> Web: https://phenix-online.org
>
> On Thu, Jan 7, 2016 at 9:30 AM, Nicholas Sauter <nksauter(a)lbl.gov> wrote:
>
>> David,
>>
>> I notice that the Pango version, 1.16.1, was released in 2007, so perhaps
>> it is no surprise that the latest Ubuntu does not support it. Maybe this
>> calls for stepping forward the Pango version until you find one that works.
>> I see that the latest stable release is 1.39.
>>
>> This would be valuable information for us. Billy Poon in the Phenix group
>> is supporting the Phenix GUI, so it might be advisable for him to update
>> the Pango version in the base installer.
>>
>> Nick
>>
>> Nicholas K. Sauter, Ph. D.
>> Computer Staff Scientist, Molecular Biophysics and Integrated Bioimaging
>> Division
>> Lawrence Berkeley National Laboratory
>> 1 Cyclotron Rd., Bldg. 33R0345
>> Berkeley, CA 94720
>> (510) 486-5713
>>
>> On Thu, Jan 7, 2016 at 8:54 AM, David Waterman <dgwaterman(a)gmail.com>
>> wrote:
>>
>>> Hi again
>>>
>>> Another data point: I just tried this on a different Ubuntu machine,
>>> this time running 14.04. In this case pango installed just fine. In fact
>>> all other packages installed too and the machine is now compiling cctbx.
>>>
>>> I might have enough for comparison between the potentially working 14.04
>>> and failed 15.04 builds to figure out what is wrong in the second case.
>>>
>>> Cheers
>>>
>>> -- David
>>>
>>> On 7 January 2016 at 09:56, David Waterman <dgwaterman(a)gmail.com> wrote:
>>>
>>>> Hi folks
>>>>
>>>> I recently tried building cctbx+dials on Ubuntu 15.04 following the
>>>> instructions here:
>>>> http://dials.github.io/documentation/installation_developer.html
>>>>
>>>> This failed during installation of pango-1.16.1. Looking
>>>> at pango_install_log, I see the command that failed was as follows:
>>>>
>>>> gcc -DHAVE_CONFIG_H -I. -I. -I../..
>>>> -DSYSCONFDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/etc\"
>>>> -DLIBDIR=\"/home/fcx32934/sw/dials_bootstrap_test/base/lib\"
>>>> -DG_DISABLE_CAST_CHECKS -I../.. -DG_DISABLE_DEPRECATED
>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include
>>>> -I/home/fcx32934/sw/dials_bootstrap_test/base/include/freetype2 -g -O2
>>>> -Wall -MT fribidi.lo -MD -MP -MF .deps/fribidi.Tpo -c fribidi.c -fPIC
>>>> -DPIC -o .libs/fribidi.o
>>>> In file included from fribidi.h:31:0,
>>>> from fribidi.c:28:
>>>> fribidi_config.h:1:18: fatal error: glib.h: No such file or directory
>>>>
>>>> The file glib.h appears to be in base/include/glib-2.0/, however this
>>>> directory was not explicitly included in the command above, only its
>>>> parent. This suggests a configuration failure in pango to me. Taking a look
>>>> at base_tmp/pango-1.16.1/config.log, I see what look like the relevant
>>>> lines:
>>>>
>>>> configure:22227: checking for GLIB
>>>> configure:22235: $PKG_CONFIG --exists --print-errors "$GLIB_MODULES"
>>>> configure:22238: $? = 0
>>>> configure:22253: $PKG_CONFIG --exists --print-errors "$GLIB_MODULES"
>>>> configure:22256: $? = 0
>>>> configure:22304: result: yes
>>>>
>>>> but this doesn't tell me very much. Does anyone have any suggestions as
>>>> to how I might proceed?
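If it helps narrow things down, it may be worth checking what pkg-config reports for glib in that environment (the PKG_CONFIG_PATH below is a guess based on the base/ layout in the log above):
export PKG_CONFIG_PATH=$HOME/sw/dials_bootstrap_test/base/lib/pkgconfig
pkg-config --modversion glib-2.0
pkg-config --cflags glib-2.0
# the cflags output should include .../include/glib-2.0 and
# .../lib/glib-2.0/include; if it does not, pango's generated Makefiles
# will be missing those -I flags too, which matches the failing gcc line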
>>>>
>>>> Many thanks
>>>>
>>>> -- David
>>>>
>>>
>>>
>>> _______________________________________________
>>> cctbxbb mailing list
>>> cctbxbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>>
>>>
>>
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>
>>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
9 years, 8 months
Re: [phenixbb] occupancy refinement
by Maia Cherney
Hi Pavel,
It looks like all different refined occupancies starting from different
initial occupancies converged to the same number upon going through very
many cycles of refinement.
Maia
Pavel Afonine wrote:
> Hi Maia,
>
> the atom parameters, such as occupancy, B-factor and even position, are
> interdependent in some sense. That is, if you have a somewhat incorrect
> occupancy, then B-factor refinement may compensate for it; if you
> misplaced an atom, the refinement of its occupancy and/or B-factor will
> compensate for this. Note that in all the above cases the 2mFo-DFc and
> mFo-DFc maps will appear almost identical, as will the R-factors.
>
> So, I think your goal of finding a "true" occupancy is hardly achievable.
>
> However, I think you can approach it by doing very many refinements
> (say, several hundred), refining occupancies, B-factors and
> coordinates, with each refinement starting from different occupancy and
> B-factor values, and making sure that each refinement converges. Then
> select a subset of refined structures with similar and low R-factors
> (discard those cases where refinement got stuck for whatever reason
> and R-factors are higher), and probably with similar-looking 2mFo-DFc and
> mFo-DFc maps in the region of interest. Then see where the refined
> occupancies and B-factors cluster; the averaged values will
> probably give you approximate values for the occupancy and B. I have not
> tried this myself but have always wanted to.
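A minimal sketch of such a multi-start protocol (file names are placeholders, set_ligand_occ.py is a hypothetical helper that writes the chosen starting occupancy into the ligand atoms, and the exact parameter spellings should be checked against the phenix.refine documentation):
for occ in 0.2 0.4 0.6 0.8 1.0; do
  python set_ligand_occ.py model.pdb $occ > model_occ${occ}.pdb
  phenix.refine model_occ${occ}.pdb data.mtz ligand.cif \
      strategy=individual_sites+individual_adp+occupancies \
      output.prefix=run_occ${occ} --overwrite
done
# then compare R-factors and the refined occupancies/B-factors across runs
# and look at where the converged values cluster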
>
> If you have a structure consisting of 9 carbons and one gold atom,
> then I would expect that the "second digit" in gold's occupancy would
> matter. However, if we are speaking about a dozen ligand atoms (which are
> probably a combination of C, N and O) out of a few thousand atoms in
> the whole structure, then I would not expect the "second digit" to be
> visibly important.
>
> Pavel.
>
>
> On 11/24/09 8:08 PM, chern wrote:
>> Thank you Kendall and Pavel for your responses.
>> I really want to determine the occupancy of my ligand. I saw one
>> suggestion to try different refinements with different occupancies
>> and compare the B-factors:
>> the occupancy whose B-factor is at the level of the average
>> protein B-factor is the "true" occupancy.
>> I also noticed the dependence of the ligand occupancy on the initial
>> occupancy. I saw differences of 10 to 15%, which is why I am
>> wondering if the second digit after the decimal point makes any sense.
>> Maia
>>
>> ----- Original Message -----
>> *From:* Kendall Nettles <mailto:[email protected]>
>> *To:* PHENIX user mailing list <mailto:[email protected]>
>> *Sent:* Tuesday, November 24, 2009 8:22 PM
>> *Subject:* Re: [phenixbb] occupancy refinement
>>
>> Hi Maia,
>> I think the criterion for occupancy refinement of ligands is
>> similar to the decision to add an alt conformation for an amino
>> acid. I don’t refine the occupancy of a ligand unless the difference
>> map indicates that we have to. Sometimes part of the ligand may be
>> conformationally mobile and show poor density, but I personally
>> don’t think this justifies occupancy refinement without evidence
>> from the difference map. I agree with Pavel that you shouldn’t
>> expect much change in overall statistics, unless the ligand has
>> very low occupancy or you have a very small protein. We
>> typically see a 0.5-1% difference in R factors from refining with
>> the ligand versus without for nuclear receptor ligand-binding domains
>> of about 250 amino acids, and we see very small differences from
>> occupancy refinement of the ligands.
>>
>> Regarding the error, I have noticed differences of 10 percent
>> occupancy depending on what you set the starting occupancy to before
>> refinement. That is, if the occupancy starts at 1, you
>> might end up with 50%, but if you start it at 0.01, you might get
>> 40%. I don’t have the expertise to explain why this is, but I
>> also don’t think it is necessarily important. I think it is more
>> important to convince yourself that the ligand binds the way you
>> think it does. With steroid receptors, the ligand is usually
>> planar, and tethered by hydrogen bonds at two ends. That leaves
>> us with four possible poses, so if in doubt, we will dock
>> the ligand in all four orientations and refine. So far, we
>> have had only one of several dozen structures where the ligand
>> orientation was not obvious after this procedure. I worry about a
>> letter to the editor suggesting that the electron density for the
>> ligand doesn’t support the conclusions of the paper, not whether
>> the occupancy is 40% versus 50%.
>>
>> You might also want to consider looking at several maps, such as
>> the simple or simulated annealing composite omit maps. These can
>> be noisy, so also try the kicked maps
>> (http://www.phenix-online.org/pipermail/phenixbb/2009-September/002573.html),
>> which I have become a big fan of.
>>
>> Regards,
>> Kendall Nettles
>>
>> On 11/24/09 3:07 PM, "chern(a)ualberta.ca" <chern(a)ualberta.ca> wrote:
>>
>> Hi,
>> I am wondering what the criterion is for occupancy refinement of
>> ligands. I noticed that R factors change very little, but the ligand
>> B-factors change significantly. On the other hand, the occupancy is
>> refined to the second digit after the decimal point. How can I find
>> out the error for the refined occupancy of ligands?
>>
>> Maia
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>>
>> ------------------------------------------------------------------------
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>> ------------------------------------------------------------------------
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
> ------------------------------------------------------------------------
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
16 years, 2 months
Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
by Luca Jovine
Hi Xavier and Randy, I'm also experiencing the same on an M2 Mac!
-Luca
-----Original Message-----
From: <phenixbb-bounces(a)phenix-online.org> on behalf of Xavier Brazzolotto <xbrazzolotto(a)gmail.com>
Date: Tuesday, 4 July 2023 at 12:38
To: Randy John Read <rjr27(a)cam.ac.uk>
Cc: PHENIX user mailing list <phenixbb(a)phenix-online.org>
Subject: Re: [phenixbb] Discrepancy between Phenix GUI and command line for MR
Hi Randy,
Indeed I’m running Phenix on a brand new M2 Mac.
I will benchmark the two processes (GUI vs command line) and post the results here.
> On 4 July 2023, at 12:32, Randy John Read <rjr27(a)cam.ac.uk> wrote:
>
> Hi Xavier,
>
> We haven’t noticed that, or at least any effect is small enough not to stand out. There shouldn’t be a lot of overhead in communicating with the GUI (i.e. updating the terse log output and the graphs) but if there is we should look into it and see if we can do something about it.
>
> Could you tell me how much longer (say, in percentage terms) a job takes when you run it through the GUI compared to running the same job outside the GUI on the same computer? Also, it’s possible the architecture matters, so could you say which type of computer and operating system you’re using? If it’s a Mac, is it one with an Intel processor or an ARM (M1 or M2) processor? (By the way, we finally managed to track down and fix an issue that caused Phaser to run really slowly on an M1 or M2 Mac when using the version compiled for Intel, once I got my hands on a new Mac.)
>
> Best wishes,
>
> Randy
>
>> On 4 Jul 2023, at 10:44, Xavier Brazzolotto <xbrazzolotto(a)gmail.com> wrote:
>>
>> Dear Phenix users
>>
>> I’ve noticed that molecular replacement was clearly slower while running from the GUI compared to using the command line (phenix.phaser).
>>
>> Did you also observe such behavior?
>>
>> Best
>> Xavier
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>> Unsubscribe: phenixbb-leave(a)phenix-online.org
>
>
> -----
> Randy J. Read
> Department of Haematology, University of Cambridge
> Cambridge Institute for Medical Research Tel: +44 1223 336500
> The Keith Peters Building
> Hills Road E-mail: rjr27(a)cam.ac.uk
> Cambridge CB2 0XY, U.K. www-structmed.cimr.cam.ac.uk
>
_______________________________________________
phenixbb mailing list
phenixbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/phenixbb
Unsubscribe: phenixbb-leave(a)phenix-online.org
2 years, 7 months
Re: [phenixbb] changing the project directory
by Rajagopalan, Senapathy
Hi,
I recently moved my project directory from a Linux machine to my MacBook
Pro using the "Import project" menu. So now, on my MacBook Pro, I can view
all the job history, but when I try to restore a job, phenix tries to
find the input files at the old path and gives an error. It is, however,
able to load the results correctly and remember the refinement settings. I
have sort of fixed it by editing the '*_jobid.eff' files found under
'projectdirectory/phenix/project_data/', but then I will have to do it for
every job that I have submitted, which is a pain.
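For the record, that manual step can be scripted; a hedged sketch, where both paths are placeholders and the .bak backups it leaves behind are worth keeping:
cd projectdirectory/phenix/project_data/
for f in *.eff; do
  sed -i.bak 's|/old/linux/project/path|/Users/sena/new/project/path|g' "$f"
done
This only patches the recorded paths, of course, not anything else the GUI tracks.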
So is there a simpler way to fix this when you want to move project
directories across platforms? I use the linux cluster to run autobuild
jobs (since my macbook pro can't handle it) but once I get the initial
model, I like to switch to working on my laptop.
Thanks
Sena
On 3/16/11 3:13 PM, "Nathaniel Echols" <nechols(a)lbl.gov> wrote:
>On Wed, Mar 16, 2011 at 12:15 PM, Charisse Crenshaw <ccrenshaw(a)salk.edu>
>wrote:
>> I have recently started a project whose folder was located on my
>> desktop. I would like to move that folder off of my desktop as I've
>> heard that having a messy desktop slows down one's computer. I moved
>> the project directory somewhere else, and this resulted in Phenix not
>> being able to find the project (understandably). So I moved it back
>> to its original location and Phenix could then open the project. So,
>> in an X11 shell, I copied the project folder to the new desired
>> location, then changed the output directory in the Phenix main window
>> to that new path. But when I clicked on the project again, the output
>> directory that is indicated changes back to the original path. Now I
>> am looking through the preferences etc for a way to change the path of
>> the project directory within Phenix and I cannot. Can anyone tell me
>> if this is possible and if so, how do I do it?
>
>It's possible, but not (yet) straightforward. These steps should do it:
>
>1. Make a copy of the entire project folder in the new location (I
>guess you've already done this)
>2. Open the main GUI and delete the old project. This will not wipe
>out any data in the old location, but it will delete all of the
>project tracking files that Phenix creates.
>3. Select "Import project" from the File menu, and select the new copy
>- this should restore the project, but with all directory paths
>altered.
>
>For now, the parameter files used for past jobs will contain the
>original paths, so restoring old jobs will not bring up the full
>parameters. I hope to fix this at some point. Obviously I need to
>add a "relocate project" function too.
>
>I suspect the warning about a messy desktop came from a Windows user -
>I've never heard about other OSes having this problem.
>
>-Nat
>_______________________________________________
>phenixbb mailing list
>phenixbb(a)phenix-online.org
>http://phenix-online.org/mailman/listinfo/phenixbb
14 years, 10 months
Re: [cctbxbb] some thoughts on cctbx and pip
by Nigel Moriarty
-1
Cheers
Nigel
---
Nigel W. Moriarty
Building 33R0349, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
Berkeley, CA 94720-8235
Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov
Fax : 510-486-5909 Web : CCI.LBL.gov
On Thu, Aug 22, 2019 at 1:48 AM Dr Robert Oeffner <rdo20(a)cam.ac.uk> wrote:
> Agree. I talked with a small molecules software developer here at the
> ECM32 yesterday. He would love to be able to do “pip install cctbx” when he
> develops his program, provided it doesn’t take up much file space.
>
>
>
> Rob
>
>
>
> Sent from my Windows 10 phone
> --
> Robert Oeffner, Ph.D.
> Research Associate, The Read Group
> Department of Haematology,
> Cambridge Institute for Medical Research
> University of Cambridge
> Cambridge Biomedical Campus
> Wellcome Trust/MRC Building
> Hills Road
> Cambridge CB2 0XY
>
> www.cimr.cam.ac.uk/investigators/read/index.html
> tel: +44(0)1223 763234
>
>
>
> From: Derek Mendez <dermen(a)lbl.gov>
> Sent: 20 August 2019 18:06
> To: cctbx mailing list <cctbxbb(a)phenix-online.org>
> Subject: Re: [cctbxbb] some thoughts on cctbx and pip
>
>
>
> I think it's worth getting a cctbx-light pip build. I think modules like
> cctbx.miller and sgtbx are extremely useful. Also simtbx.nanoBragg.
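As a small illustration of the kind of dependency-light use this would enable (assuming a working cctbx Python on the path):
python -c "from cctbx import sgtbx; print(sgtbx.space_group_info('P212121'))"
python -c "from cctbx import uctbx; print(uctbx.unit_cell((78,78,37,90,90,90)).volume())"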
>
>
>
> -Derek
>
>
>
> On Sun, Aug 18, 2019 at 10:48 PM Graeme.Winter(a)Diamond.ac.uk <
> Graeme.Winter(a)diamond.ac.uk> wrote:
>
> Hi Aaron
>
> Re: talk about if interest
>
> I think it would be very useful to have a roadmap of where this is going,
> the intentions and how we expect it to play with other cctbx-dependent
> projects - have I missed this? Having it somewhere e.g. on the wiki would
> be a big help
>
> Thanks Graeme
>
> On 17 Aug 2019, at 00:36, Aaron Brewster <asbrewster(a)lbl.gov> wrote:
>
> Hi Luc, thanks. I did recall someone working on this a while back.
>
> For conda, there are a couple more things to finish and then we hope to
> have cctbx available through conda.
>
> 1) There is work being done in a branch to make cctbx use boost in
> standard locations (e.g. the system boost, or a conda boost). That will
> allow us to use install boost by conda and not build it ourselves. (
> https://github.com/cctbx/cctbx_project/tree/conda_boost). Also, newer
> versions of boost are being tested (up through 1.70). (This will also
> enable python 3.7 support.)
> 2) We need to work on getting a make install step in place so that we can
> build the conda package and upload it.
> 3) We want to split the dependencies up by builder (cctbx, cctbx-lite,
> dials, phenix, etc.) into meta packages and their associated manifests. I
> can talk more about this if there is interest.
>
> Thanks,
> -Aaron
>
>
>
> On Fri, Aug 16, 2019 at 1:48 PM Luc Bourhis <luc_j_bourhis(a)mac.com> wrote:
> Hi,
>
> I did look into that many years ago, and even toyed with building a pip
> installer. What stopped me is the exact conclusion you reached too: the
> user would not have the pip experience he expects. You are right that it is
> a lot of effort but is it worth it? Considering that remark, I don’t think
> so. Now, Conda was created specifically to go beyond pip pure-python-only
> support. Since cctbx has garnered support for Conda, the best avenue imho
> is to go the extra length to have a package on Anaconda.org<
> http://anaconda.org/>, and then to advertise it hard to every potential
> user out there.
>
> Best wishes,
>
> Luc
>
>
> On 16 Aug 2019, at 21:45, Aaron Brewster <asbrewster(a)lbl.gov> wrote:
>
> Hi, to avoid clouding Dorothee's documentation email thread, which I think
> is a highly useful enterprise, here's some thoughts about putting cctbx
> into pip. Pip doesn't install non-python dependencies well. I don't think
> boost is available as a package on pip (at least the package version we
> use). wxPython4 isn't portable through pip (
> https://wiki.wxpython.org/How%20to%20install%20wxPython#Installing_wxPython…).
> MPI libraries are system dependent. If cctbx were a pure python package,
> pip would be fine, but cctbx is not.
>
> All that said, we could build a manylinux1 version of cctbx and upload it
> to PyPi (I'm just learning about this). For a pip package to be portable
> (which is a requirement for cctbx), it needs to conform to PEP513, the
> manylinux1 standard (https://www.python.org/dev/peps/pep-0513/). For
> example, numpy is built according to this standard (see
> https://pypi.org/project/numpy/#files, where you'll see the manylinux1
> wheel). Note, the manylinux1 standard is built with Centos 5.11 which we
> no longer support.
>
> There is also a manylinux2010 standard, which is based on Centos 6 (
> https://www.python.org/dev/peps/pep-0571/). This is likely a more
> attainable target (note though by default C++11 is not supported on Centos
> 6).
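For reference, the usual route to such a wheel is the PyPA manylinux docker images plus auditwheel; the sketch below is only that - a sketch (the python version, paths, and the handling of cctbx's non-python dependencies are all open questions):
# build a linux wheel inside the manylinux2010 image
docker run --rm -v $(pwd):/io quay.io/pypa/manylinux2010_x86_64 \
    /opt/python/cp37-cp37m/bin/pip wheel /io -w /io/dist/
# vendor external shared libraries into the wheel and set the platform tag
docker run --rm -v $(pwd):/io quay.io/pypa/manylinux2010_x86_64 \
    sh -c 'auditwheel repair /io/dist/*.whl -w /io/wheelhouse/'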
>
> If we built a manylinuxX version of cctbx and uploaded it to PyPi, the
> user would need all the non-python dependencies. There's no way to specify
> these in pip. For example, cctbx requires boost 1.63 or better. The user
> will need to have it in a place their python can find it, or we could
> package it ourselves and supply it, similar to how the pip h5py package now
> comes with an hd5f library, or how the pip numpy package includes an
> openblas library. We'd have to do the same for any packages we depend on
> that aren't on pip using the manylinux standards, such as wxPython4.
>
> Further, we need to think about how dials and other cctbx-based packages
> interact. If pip install cctbx is set up, how does pip install dials work,
> such that any dials shared libraries can find the cctbx libraries? Can
> shared libraries from one pip package link against libraries in another pip
> package? Would each package need to supply its own boost? Possibly this
> is well understood in the pip field, but not by me :)
>
> Finally, there's the option of providing a source pip package. This would
> require the full compiler toolchain for any given platform (macOS, linux,
> windows). These are likely available for developers, but not for general
> users.
>
> Anyway, these are some of the obstacles. Not saying it isn't possible,
> it's just a lot of effort.
>
> Thanks,
> -Aaron
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
> --
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
6 years, 5 months
Re: [phenixbb] PHENIX refine: detached sidechain atoms - SOLUTION
by Schubert, Carsten [PRDUS]
Good to know; that is a nasty side-effect of a complex procedure. We had
a similar case here, which, if I remember correctly, was also phenix
version dependent, and we could not make sense of it. The ligand in our
case was actually generated in elbow, so that alone would not shield one from
problems. It always helps to give the cif files a second look.
> -----Original Message-----
> From: phenixbb-bounces(a)phenix-online.org [mailto:phenixbb-
> bounces(a)phenix-online.org] On Behalf Of Pavel Afonine
> Sent: Friday, June 25, 2010 1:22 AM
> To: oliver.king(a)chem.ox.ac.uk; PHENIX user mailing list
> Subject: Re: [phenixbb] PHENIX refine: detached sidechain atoms -
> SOLUTION
>
> Hi Oliver,
>
> thanks for sending the inputs so we could reproduce this issue and
> figure out what is wrong.
>
> The source of the problem is the CIF file for your ligand: it specifies
> unrealistic (too small) esds for some torsion angles, which in turn
> creates huge terms in the geometry restraints target and finally resulted
> in a huge X-ray weight. Using a huge X-ray weight obviously means that
> you are doing nearly unrestrained refinement, and therefore it is no
> surprise that the geometry was systematically distorted. This is something
> we will be catching automatically in future, but meanwhile it is a good
> idea to use correct ligand CIF files.
>
> The solution: create a CIF file using either the phenix.elbow or, better,
> the phenix.ready_set command. For example,
>
> phenix.ready_set model.pdb
>
> will create a CIF file for all ligands in your model.pdb file. Using
> this file in refinement eliminates the problem that you reported
> this morning. This command also adds H atoms to the model.pdb file, which
> you can then use in subsequent refinements.
>
> I went ahead and did some more refinement using the strategy that I
> think is best for your case.
>
> Your starting R-factors (corresponding to the model you sent me):
> r_work = 0.1853 r_free = 0.2523 bonds = 0.069 angles = 4.460
>
> And after some refinement that I've done:
> r_work = 0.1779 r_free = 0.2202 bonds = 0.015 angles = 1.600
>
> In refinement I used:
> - automatic water update;
> - TLS;
> - riding hydrogen atoms.
>
> You can further improve your model by:
> - running automatic rotamer fixing (fix_rotamers=true);
> - weights optimization (optimize_wxc=true optimize_wxu=true);
> - more thoughtful selection of TLS groups.
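Putting those options together, a sketch of the corresponding command line might look like this (fix_rotamers, optimize_wxc and optimize_wxu are the flags named above; the strategy and ordered_solvent spellings are additions that should be checked against the phenix.refine documentation):
phenix.refine model_with_H.pdb data.mtz ligand.cif \
    strategy=individual_sites+individual_adp+tls \
    ordered_solvent=true \
    fix_rotamers=true optimize_wxc=true optimize_wxu=true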
>
> I'm sending to you all the files OFFLIST (in the next email).
>
> Please let me know if you have any questions.
>
> Pavel.
>
>
>
>
> On 6/24/10 3:42 AM, Oliver King wrote:
> > Hi All,
> >
> > I've noticed that when refining a model in phenix.refine using a PDB
> > file from Refmac, the sidechain atoms of certain residues become
> > detached from the rest of the molecule and appear to float on their
> > own, at least when viewing in Coot. I think this is down to the format
> > of the PDB file. For example the atoms of a Leu residue from a
> > phenix.refine file which displays well is of the form:
> >
> > ATOM 1 N LEU A 8 50.022 -34.247 -5.817 1.00 58.12 N
> > ATOM 2 CA LEU A 8 49.788 -34.905 -4.539 1.00 55.84 C
> > ATOM 3 C LEU A 8 48.339 -35.358 -4.348 1.00 60.39 C
> > ATOM 4 O LEU A 8 48.008 -36.016 -3.360 1.00 68.40 O
> > ATOM 5 CB LEU A 8 50.219 -34.011 -3.373 1.00 57.98 C
> > ATOM 6 CG LEU A 8 51.690 -34.126 -2.975 1.00 65.66 C
> > ATOM 7 CD1 LEU A 8 52.014 -33.170 -1.836 1.00 59.99 C
> > ATOM 8 CD2 LEU A 8 52.020 -35.566 -2.594 1.00 68.57 C
> >
> > whereas from a Refmac PDB, which also behaves well, it is of the form:
> >
> > ATOM 1 N LEU A 8 50.453 -35.722 -5.617 1.00 20.00 N
> > ATOM 2 CA LEU A 8 49.649 -35.131 -4.482 1.00 20.00 C
> > ATOM 3 CB LEU A 8 50.190 -33.735 -4.147 1.00 20.00 C
> > ATOM 4 CG LEU A 8 51.461 -33.755 -3.275 1.00 20.00 C
> > ATOM 5 CD1 LEU A 8 52.556 -32.768 -3.741 1.00 20.00 C
> > ATOM 6 CD2 LEU A 8 51.082 -33.546 -1.799 1.00 20.00 C
> > ATOM 7 C LEU A 8 48.166 -35.063 -4.824 1.00 20.00 C
> > ATOM 8 O LEU A 8 47.822 -34.257 -5.568 1.00 20.00 O
> >
> > but after refinement in phenix it becomes
> >
> > ATOM 1 N LEU A 8 50.734 -35.936 -5.935 1.00 54.45 N
> > ATOM 2 CA LEU A 8 49.846 -35.185 -4.892 1.00 51.89 C
> > ATOM 3 CB LEU A 8 50.377 -33.761 -4.516 1.00 51.22 C
> > ATOM 4 CG LEU A 8 51.550 -33.730 -3.431 1.00 59.27 C
> > ATOM 5 CD1 LEU A 8 52.457 -32.407 -3.635 1.00 60.88 C
> > ATOM 6 CD2 LEU A 8 51.120 -34.090 -1.879 1.00 55.50 C
> > ATOM 7 C LEU A 8 48.268 -35.042 -5.221 1.00 47.92 C
> > ATOM 8 O LEU A 8 47.989 -34.282 -6.195 1.00 47.32 O
> >
> > and the CD2 atom is now too far away to be part of the residue in Coot
> >
> > Is there an easy way to convert a PDB from Refmac into one which will
> > behave itself when put through phenix.refine? I've tried using
> > phenix.pdb_tools and also putting the model through Molprobity and
> > hoping that the output would be corrected.
> >
> > Thanks,
> >
> > Olly King
> >
> > _______________________________________________
> > phenixbb mailing list
> > phenixbb(a)phenix-online.org
> > http://phenix-online.org/mailman/listinfo/phenixbb
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
15 years, 7 months