Search results for query "look through"
- 520 messages

Re: [phenixbb] Unmasking cavities
by Morten Grøftehauge
Hi guys,
I'm a bit confused by this answer.
I get the "add dummy atoms and calculate map" to check whether it is Fourier
truncation ripples (which I don't think it will turn out to be).
But I wouldn't feel comfortable depositing a structure with dummy atoms even
if they do have zero occupancy. Are you really suggesting that people do
that?
Secondly, when I look in the .def for my refinements I find two entries for
mask calculation:
Under the fake_f_obs heading
mask {
solvent_radius = 1.11
shrink_truncation_radius = 0.9
grid_step_factor = 4
verbose = 1
mean_shift_for_mask_update = 0.1
ignore_zero_occupancy_atoms = True
ignore_hydrogens = True
}
And again under its own heading towards the end
mask {
solvent_radius = 1.11
shrink_truncation_radius = 0.9
grid_step_factor = 4
verbose = 1
mean_shift_for_mask_update = 0.1
ignore_zero_occupancy_atoms = True
ignore_hydrogens = True
}
Which one is relevant? Also why didn't any of you suggest the
optimize_mask=true parameter? Shouldn't that automatically find the best
solvent_radius and shrink_truncation_radius values?
Sorry if these are dumb questions (and sorry that there are so many) but I
was just really confused by these answers.
Sincerely,
Morten Grøftehauge
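[Editor's note: the optimize_mask question above can be tried directly from the command line. A hedged sketch — file names are placeholders, and the mask parameter names are taken from the .def excerpt above (Phenix matches PHIL parameters by unique substring):]

```shell
# try automatic optimization of the bulk-solvent mask radii
phenix.refine model.pdb data.mtz optimize_mask=true

# or set the mask parameters explicitly
phenix.refine model.pdb data.mtz \
  mask.solvent_radius=1.11 \
  mask.shrink_truncation_radius=0.9
```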
2008/10/4 Pavel Afonine <PAfonine(a)lbl.gov>
> Hi Frank,
>
> I just want to add to Ralf's very comprehensive reply... The parameters
> solvent_radius, shrink_truncation_radius and grid_step_factor are
> explained in the original paper:
>
> Jiang, J.-S. & Brünger, A. T. (1994). J. Mol. Biol. 243, 100-115.
> "Protein hydration observed by X-ray diffraction. Solvation properties
> of penicillopepsin and neuraminidase crystal structures."
>
> The details of PHENIX implementation of this are described here:
>
> P.V. Afonine, R.W. Grosse-Kunstleve & P.D. Adams. Acta Cryst. (2005).
> D61, 850-855. "A robust bulk-solvent correction and anisotropic scaling
> procedure"
>
> Also, the negative peaks you observe can easily be Fourier series
> truncation ripples. I think Ralf's suggestion to place some dummy atoms
> there with zero occupancy is a good idea. I wouldn't even do any
> refinement (since moving atoms may cancel these artifacts), but just
> compute two maps - with and w/o the dummy atoms and see what happens to
> these negative peaks.
>
> Cheers,
> Pavel.
>
>
> On 9/28/2008 3:25 PM, Frank von Delft wrote:
> > Hi
> >
> > After being through phenix.refine, I see in my hydrophobic core a big
> > space (a few atoms wide) that is filled with strong negative difference
> > density. I suspect the culprit is the bulk solvent mask, which is
> > defined too tightly.
> >
> > The online manual mentions three parameters, but not what they do.
> > solvent_radius,
> > shrink_truncation_radius,
> > grid_step_factor
> >
> > What *exactly* do they do?
> >
> > (I thought I'd elicit a contribution for the online docs this way :)
> > Cheers
> > phx
> > _______________________________________________
> > phenixbb mailing list
> > phenixbb(a)phenix-online.org
> > http://www.phenix-online.org/mailman/listinfo/phenixbb
> >
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
--
Morten K Grøftehauge
PhD student
Department of Molecular Biology
Gustav Wieds Vej 10 C
8000 Aarhus C - Denmark
Phone: +45 89 42 52 61
Fax: +45 86 12 31 78
www.bioxray.dk
16 years, 5 months

Re: [phenixbb] matplotlib
by Francois Berenger
On 08/19/2011 03:29 AM, Francis E Reyes wrote:
> Since we're talking about phenix distributions....
>
>
> <flamebait>
> So when are we going to see phenix on the App Store?
> I hear the next version of OS X will only run binaries signed by Apple.
Well, let's not accept this business model.
> </flamebait>
>
> F
>
> On Aug 18, 2011, at 12:06 PM, Nathaniel Echols wrote:
>
>> On Thu, Aug 18, 2011 at 10:39 AM, Ed Pozharski<epozh001(a)umaryland.edu> wrote:
>> In a nutshell, phenix gui may in some circumstances screw up other
>> programs that use matplotlib.
>>
>> It's not the fault of Phenix; matplotlib is unusually inflexible in how it deals with these cache files, and it is nearly unique among Python modules in its dependence on writing to the user's home directory. This isn't the only problem; another issue is that matplotlib creates these caches, and the maintainers appear to never have considered what would happen if the cached directory were removed. So when you run the GUI in version X of Phenix, then remove version X and install version Y, it will still look for the fonts installed with version X. I complained about this last December, and it remains unsolved in any of the official releases (one of which was this year).
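[Editor's note: the stale-cache problem Nat describes has a simple manual workaround — remove the cache so it is rebuilt against the currently running matplotlib on the next launch. A sketch; the path is the one described above, and MPLCONFIGDIR is matplotlib's standard override for the config/cache location:]

```shell
# Remove the stale matplotlib cache directory. This is harmless: it
# contains only regenerable font caches, and matplotlib recreates it
# the next time it is imported.
CACHE_DIR="${MPLCONFIGDIR:-$HOME/.matplotlib}"
rm -rf "$CACHE_DIR"
```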
>>
>> The likely cause is that the phenix gui calls on the bundled matplotlib which is
>> different from the one I have installed (not to mention that I am using
>> Lucid (because it's LTS) which has python 2.6 and not the 2.7 that is
>> bundled with phenix). However, it still writes into the same
>> ~/.matplotlib folder, thus I end up with incompatible data. Certainly,
>> the problem will be gone when matplotlib gets bumped up to 1.0.1 in next
>> Ubuntu release.
>>
>> The issue with removing installations will remain, however. You could avoid the incompatibility problem by running "phenix.wxpython" if you need to use matplotlib. (We're using Python 2.7.2 right now, and generally update to the latest release in the 2.x series shortly after it comes out.)
>>
>> This is yet another example of why the standalone installation approach
>> is ideologically objectionable on modern Linux. But of course, the
>> practical advantage gained by not having to package the software for any
>> possible OS flavor/version users may choose outweighs the lower risks of
>> package incompatibility and the reduced size of the packaged product.
>>
>> We don't have the resources to support a more ideologically pure distribution mechanism - the installers are maintained by me and Ralf in between other projects. Also, we often depend on new features in the various dependencies that would not be immediately available through the package managers (for instance, we switched to Python 2.6 almost immediately because I needed the multiprocessing module). There are many things in the current installers that I'm unhappy with, but they don't take very much time to maintain, which is essential.
>>
>> -Nat
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>
>
>
> ---------------------------------------------
> Francis E. Reyes M.Sc.
> 215 UCB
> University of Colorado at Boulder
>
>
>
>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
13 years, 10 months

Re: [phenixbb] Using LigandFit to identify unknown density
by Pavel Afonine
Hi Maia,
first, I agree with Peter - the B-factor restraints should help, indeed.
Second, I think we discussed this subject already on November 25, 2009:
Subject: Re: [phenixbb] occupancy refinement
Date: 11/25/09 7:38 AM
and I believe I haven't changed my mind about it since then. I'm appending
that email conversation to the bottom of this email.
Overall, if you get a good 2mFo-DFc map and a clear residual mFo-DFc map,
and the ligand's B-factors are similar to or slightly larger than those of
the surrounding atoms, and the refined occupancy looks reasonable, then I
think you are fine.
Pavel.
On 1/27/10 2:05 PM, Maia Cherney wrote:
> Hi Pavel,
>
> I have six ligands at partial occupancies in my structure. Simultaneous
> refinement of occupancy and B factors in phenix gives a value of 0.7
> for the ligand occupancy that looks reasonable.
> How can phenix perform such a refinement given that the occupancies
> and B factors are highly correlated? Indeed, you can increase/decrease
> the ligand occupancies while simultaneously increasing/decreasing
> their B factors without changing the R factor. What criteria
> does phenix use in such a refinement if the R factor does not tell much?
>
> Maia
******* COPY (11/25/09)************
On 11/25/09 7:38 AM, Maia Cherney wrote:
> Hi Pavel,
>
> It looks like all different refined occupancies starting from different
> initial occupancies converged to the same number upon going through very
> many cycles of refinement.
>
> Maia
>
>
> Pavel Afonine wrote:
>
>> Hi Maia,
>>
>> the atom parameters, such as occupancy, B-factor and even position are
>> interdependent in some sense. That is, if you have somewhat incorrect
>> occupancy, then B-factor refinement may compensate for it; if you
>> misplaced an atom, the refinement of its occupancy and/or B-factor will
>> compensate for this. Note that in all the above cases the 2mFo-DFc and
>> mFo-DFc maps will appear almost identical, as well as R-factors.
>>
>> So, I think your goal of finding a "true" occupancy is hardly achievable.
>>
>> Although, I think you can approach it by doing very many refinements
>> (say, several hundred, where you refine occupancies, B-factors and
>> coordinates), each refinement starting with different occupancy and
>> B-factor values, and make sure that each refinement converges. Then
>> select a subset of refined structures with similar and low R-factors
>> (discard those cases where refinement got stuck for whatever reason
>> and R-factors are higher) (and probably similar looking 2mFo-DFc and
>> mFo-DFc maps in the region of interest). Then see where the refined
>> occupancies and B-factors are clustering, and the averaged values will
>> probably give you approximate values for occupancy and B. I did not
>> try this myself but always wanted to.
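[Editor's note: the multi-start protocol described above can be sketched as a toy simulation. Everything here is illustrative — the refine() function is a stand-in for a full phenix.refine run, not any Phenix API, and the numbers are invented:]

```python
import random
import statistics

def refine(start_occ, true_occ=0.7, noise=0.03):
    """Toy stand-in for one refinement run: most runs converge near the
    true occupancy; some get stuck near their start (and show a high R)."""
    stuck = random.random() < 0.2
    if stuck:
        return start_occ, 0.30                      # refinement got stuck
    return true_occ + random.gauss(0, noise), 0.22  # converged, low R-factor

random.seed(0)
starts = [i / 100 for i in range(10, 100, 5)]       # many starting occupancies
runs = [refine(s) for s in starts]

# keep the subset with similar, low R-factors; discard the stuck runs
good = [occ for occ, r_free in runs if r_free < 0.25]
estimate = statistics.mean(good)                    # where the values cluster
```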
>>
>> If you have a structure consisting of 9 carbons and one gold atom,
>> then I would expect that the "second digit" in gold's occupancy would
>> matter. However, if we speak about a dozen ligand atoms (which are
>> probably a combination of C,N,O) out of a few thousands of atoms of
>> the whole structure, then I would not expect the "second digit" to be
>> visibly important.
>>
>> Pavel.
>>
>>
>> On 11/24/09 8:08 PM, chern wrote:
>>
>>> Thank you Kendall and Pavel for your responses.
>>> I really want to determine the occupancy of my ligand. I saw one
>>> suggestion to try different refinements with different occupancies
>>> and compare the B-factors.
>>> The occupancy with a B-factor that is at the level of the average
>>> protein B-factors is the "true" occupancy.
>>> I also noticed the dependence of the ligand occupancy on the initial
>>> occupancy. I saw a difference of 10 to 15%, which is why I am
>>> wondering if the second digit after the decimal point makes any sense.
>>> Maia
>>>
>>> ----- Original Message -----
>>> *From:* Kendall Nettles <mailto:[email protected]>
>>> *To:* PHENIX user mailing list <mailto:[email protected]>
>>> *Sent:* Tuesday, November 24, 2009 8:22 PM
>>> *Subject:* Re: [phenixbb] occupancy refinement
>>>
>>> Hi Maia,
>>> I think the criteria for occupancy refinement of ligands are
>>> similar to a decision to add an alt conformation for an amino
>>> acid. I don’t refine occupancy of a ligand unless the difference
>>> map indicates that we have to. Sometimes part of the ligand may be
>>> conformationally mobile and show poor density, but I personally
>>> don’t think this justifies occupancy refinement without evidence
>>> from the difference map. I agree with Pavel that you shouldn’t
>>> expect much change in overall statistics, unless the ligand has
>>> very low occupancy, or you have a very small protein. We
>>> typically see 0.5-1% difference in R factors from refining with
>>> ligand versus without for nuclear receptor ligand binding domains
>>> of about 250 amino acids, and we see very small differences from
>>> occupancy refinement of the ligands.
>>>
>>> Regarding the error, I have noticed differences of 10 percent
>>> occupancy depending on what you set the starting occupancy to before
>>> refinement. That is, if the occupancy starts at 1, you
>>> might end up with 50%, but if you start it at 0.01, you might get
>>> 40%. I don’t have the expertise to explain why this is, but I
>>> also don’t think it is necessarily important. I think it is more
>>> important to convince yourself that the ligand binds how you
>>> think it does. With steroid receptors, the ligand is usually
>>> planar, and tethered by hydrogen bonds on two ends. That leaves
>>> us with four possible poses, so if in doubt, we will dock
>>> the ligand in all four orientations and refine. So far, we
>>> have had only one of several dozen structures where the ligand
>>> orientation was not obvious after this procedure. I worry about a
>>> letter to the editor suggesting that the electron density for the
>>> ligand doesn’t support the conclusions of the paper, not whether
>>> the occupancy is 40% versus 50%.
>>>
>>> You might also want to consider looking at several maps, such as
>>> the simple or simulated annealing composite omit maps. These can
>>> be noisy, so also try the kicked maps (
>>> http://www.phenix-online.org/pipermail/phenixbb/2009-September/002573.html),
>>> which I have become a big fan of.
>>>
>>> Regards,
>>> Kendall Nettles
>>>
>>> On 11/24/09 3:07 PM, "chern(a)ualberta.ca" <chern(a)ualberta.ca> wrote:
>>>
>>> Hi,
>>> I am wondering what the criteria are for occupancy refinement of
>>> ligands. I noticed that R factors change very little, but the
>>> ligand
>>> B-factors change significantly. On the other hand, the
>>> occupancy is
>>> refined to the second digit after the decimal point. How can
>>> I find
>>> out the error for the refined occupancy of ligands?
>>>
>>> Maia
>>>
15 years, 5 months

Re: [phenixbb] phenix autobuild question
by Peter Zwart
Hi,
Try this (substring matching in action):
phenix.xtriage ab_xds_pointless_scala5.mtz obs_labels=+
or
phenix.xtriage ab_xds_pointless_scala5.mtz obs=M
HTH
P
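[Editor's note: the substring forms above work because xtriage matches PHIL parameter values by unique substring. The fully qualified equivalent — parameter path and label names taken from the error message quoted below — would be:]

```shell
phenix.xtriage ab_xds_pointless_scala5.mtz \
  scaling.input.xray_data.obs_labels=IMEAN_XDSdataset
```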
2008/9/15 <aberndt(a)mrc-lmb.cam.ac.uk>:
> Hi Tom,
>
> thanks for your answer. I just checked the phenix version I used and it is
> version-1.3-final. Please find attached the .log file you asked for. I
> just realised that resolve apparently had a problem with the .mtz label
> assignment. I never had this problem before. Now that I mention that I
> have to confess that this is not entirely true. All the phenix tutorials I
> have looked through so far state that you can use CCP4's mtz files the way they
> are. Yet, this is not entirely true. Say I want to have an analysis from
> phenix.xtriage by typing 'phenix.xtriage latest.mtz'. So phenix will give
> me the error message:
> Multiple equally suitable arrays of observed xray data found.
>
> Possible choices:
> ab_xds_pointless_scala5.mtz:IMEAN_XDSdataset,SIGIMEAN_XDSdataset
> ab_xds_pointless_scala5.mtz:I_XDSdataset(+),SIGI_XDSdataset(+),I_XDSdataset(-),SIGI_XDSdataset(-),merged
>
> Please use scaling.input.xray_data.obs_labels
> to specify an unambiguous substring of the target label.
>
> Sorry: Multiple equally suitable arrays of observed xray data found.
>
> Is there a way to avoid going back to CCP4 and sorting the labels
> accordingly? What would be the 'scaling.input.xray_data.obs_labels'
> command in the xtriage command line?
>
> Anyway, Tom, many thanks for your answer in advance! I really like working
> with phenix. I just need to learn loads more ...
>
> Best,
> Alex
>
>
>
>> Hi Alex,
>>
>> I'm sorry for both the failure and the lack of a clear message here! I
>> will try to fix both.
>>
>> Is this running phenix-1.3-final? (If not, could you possibly run with
>> that version so we are on the same version.)
>>
>> Can you possibly send me the end of the output in
>>
>> /someplace/home/someuser/
>> work/someproject/phenix2/AutoBuild_run_1_/TEMP0/AutoBuild_run_1_/AutoBuild_run_1_1.log
>>
>> which I hope will have an actual error message to look at?
>>
>> The reason for the subdirectories is this: phenix.autobuild runs several
>> jobs in parallel (if you have set nproc ) or one after the other (if you
>> have not set nproc). These subdirectories contain those jobs...which are
>> then combined together to give the final results in your overall
>> AutoBuild_run_1_/ directory.
>>
>> I don't know the answer to the phenix.refine question...perhaps Pawel or
>> Ralf can answer that one.
>>
>> All the best,
>> Tom T
>>
>>> Dear phenix community,
>>>
>>> I started an AutoBuild run using the phenix GUI (ver 1.3 rc4) on a
>>> linux cluster. It worked okay and all the refinement looked pretty
>>> decent. However, after quite a while I obtained the following error
>>> message (see below). Would anyone please tell me what to do to prevent
>>> this error and let phenix finish its job?
>>>
>>> Also, I don't understand why phenix AutoBuild creates 5 (or whatever
>>> number) AutoBuild_run_x in the AutoBuild-run_1/TEMP0 directory and not
>>> two dirs up in the tree. In other words in my /someplace/home/someuser/
>>> work/someproject/phenix2/AutoBuild_run_1_/TEMP0 directory are more
>>> AutoBuild_run_x_ directories (with x=1 to 5). It is a bit confusing to
>>> me.
>>>
>>> A final question: I realised that the phenix.refine drastically
>>> increases the number of outliers. I know that there is a weighting
>>> term someplace ... but what was it again?
>>>
>>> Many thanks in advance,
>>> Alex
>>>
>>>
>>> warnings
>>> Failed to carry out AutoBuild_multiple_models:
>>> Sorry, subprocess failed...message is:
>>> ********************************************************************************
>>> phenix.autobuild \
>>>
>>> write_run_directory_to_file=/someplace/home/someuser/work/someproject/
>>> phenix2/AutoBuild_run_1_/TEMP0/INFO_FILE_1
>>>
>>> Reading effective parameters from /someplace/home/someuser/work/
>>> someproject/phenix2/AutoBuild_run_1_/TEMP0/PARAMS_1.eff
>>>
>>> Sending output to AutoBuild_run_1_/AutoBuild_run_1_1.log
>>>
>>> ********************************************************************************
>>>
>>> Failed to carry out AutoBuild_build_cycle:
>>>
>>> failure
>>>
>>> ********************************************************************************
>>>
>>> ********************************************************************************
>>>
>>> work_pdb_file "AutoBuild_run_1_/edited_pdb.pdb"
>>> working_directory "/someplace/home/someuser/work/someproject/phenix2"
>>> worst_percent_res_rebuild 2.0
>>>
>>> ---------------------------------------------------
>>> Alex Berndt
>>> MRC-Laboratory of Molecular Biology
>>> Hills Road
>>> Cambridge, CB2 0QH
>>> U.K.
>>>
>>> phone: +44 (0)1223 402113
>>> ---------------------------------------------------
>>>
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>>
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
>
--
-----------------------------------------------------------------
P.H. Zwart
Beamline Scientist
Berkeley Center for Structural Biology
Lawrence Berkeley National Laboratories
1 Cyclotron Road, Berkeley, CA-94703, USA
Cell: 510 289 9246
BCSB: http://bcsb.als.lbl.gov
PHENIX: http://www.phenix-online.org
CCTBX: http://cctbx.sf.net
-----------------------------------------------------------------
16 years, 9 months

Re: [phenixbb] Coot mutation of Asp residue to isoAsp residue
by Zhijie Li
Hi Xiao,
The previous IAS.cif I sent had the OXT atom missing. Please find the attached IAS_mon_lib.cif, which has this mistake fixed.
Zhijie
From: Xiao Lei
Sent: Tuesday, August 18, 2015 1:56 PM
To: Zhijie Li
Cc: PHENIX user mailing list
Subject: Re: [phenixbb] Coot mutation of Asp residue to isoAsp residue
Hi Zhijie,
For the 1AT6 structure, I downloaded its density in coot using "fetch density from EDS", but when I found the IAS at position 101 and tried to do real space refinement, it gave an error: "Refinement setup failure. Failed to find restraints for IAS."
I do not know how to fix this, but it seems to me it's caused by an incomplete restraints dictionary or monomer library in CCP4?
Thanks.
Xiao
On Tue, Aug 18, 2015 at 10:42 AM, Xiao Lei <xiaoleiusc(a)gmail.com> wrote:
Hi Zhijie,
Thank you very much for the information. For step 1 you mentioned, I can get a monomer with L-Asp, but it seems I cannot drag it (or I do not know how to), and I cannot delete or modify it to become isoAsp. I will try to play around more, though.
Xiao
On Mon, Aug 17, 2015 at 6:33 PM, Zhijie Li <zhijie.li(a)utoronto.ca> wrote:
Hi Xiao,
IsoAsp is essentially an L-Asp linked to the next amino acid through its side-chain (beta) carboxyl. So the mutation button won’t help you. You need to build in a new L-Asp, which is treated as a covalently linked ligand (HETATM records) instead of a standard residue (ATOM records) of the protein chain.
A practical method might be: 1) delete the original Asp, 2) import a free L-Asp using “get monomer”, delete its hydrogen atoms and drag it into the density, delete one oxygen atom on the beta-carboxyl and change the residue’s numbering and chain id to fit it into the sequence, 3) edit the PDB, if necessary, to turn the ASP into a ligand (a HETATM record inside the chain).
For step 3, you may need to rename the ASP to something else (IAS was used for isoASP in older pdb, so I would go with IAS ) so that coot won’t try to make a regular peptide bond using its main chain carboxyl during real space refinement. Of course you will need to make a cif file for the “new” compound too. I guess you can make a copy of ASP.cif from the monomer library and change everything in it to IAS. I think if you have placed the IAS to the right location and its ends are in bonding distance with the neighbouring aa residues you may not need to do anything for refmac. For phenix.refine you will need to add a bond description to the .edit file for each linkage the IAS makes to the neighboring aas.
You may take a look at the structure 1AT6 and its PDB file. The residue IAS 101 is an example of isoASP. Note that the IAS atoms are HETATM in the chain and there are two LINK records in the header to indicate its linkage to neighbouring aas (LINK records are normally not generated or needed during refinement using refmac or phenix.refine).
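[Editor's note: a hedged sketch of the per-linkage bond description mentioned above for phenix.refine. The chain, residue numbers, atom names, and ideal values here are assumptions for illustration — adapt them to your own numbering; the isopeptide bond shown runs from the IAS side-chain carboxyl carbon to the next residue's amide nitrogen:]

```
refinement.geometry_restraints.edits {
  bond {
    action = *add
    atom_selection_1 = chain A and resseq 101 and resname IAS and name CG
    atom_selection_2 = chain A and resseq 102 and name N
    distance_ideal = 1.33
    sigma = 0.02
  }
}
```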
Zhijie
From: Xiao Lei
Sent: Monday, August 17, 2015 6:50 PM
To: PHENIX user mailing list
Subject: [phenixbb] Coot mutation of Asp residue to isoAsp residue
Dear Phenixbb members,
I suspect one Asp residue in my model may be an isoAsp (isomerization of Asp). I am asking if there is way to mutate Asp residue to isoAsp(isoaspartic acid) residue in coot GUI (I'm using coot 0.8.1 EL in Mac OS X10.10.5)?
I know there is a mutation button on coot, but the mutated aa lists are all natural amino acids. If I have to delete the Asp residue first and then build isoAsp into the density map, is there a way in coot to build an isoAsp residue in map?
Thanks ahead.
Xiao
----------------------------------------------------------------------------
_______________________________________________
phenixbb mailing list
phenixbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/phenixbb
Unsubscribe: phenixbb-leave(a)phenix-online.org
9 years, 10 months

Re: [phenixbb] Geometry Restraints - Anisotropic truncation
by Dale Tronrud
While philosophically I see no difference between a spherical resolution
cutoff and an elliptical one, a drop in the free R can't be the justification
for the switch. A model cannot be made more "publishable" simply by discarding
data.
We have a whole bunch of empirical guides for judging the quality of this
and that in our field. We determine the resolution limit of a data set (and
imposing a "limit" is itself an empirical choice) based on Rmerge, Rmeas,
or Rpim getting too big or I/sigI getting too small, and there is no agreement
on how big or small counts as "too big" or "too small".
We then have other empirical guides for judging the quality of the models
we produce (e.g. Rwork, Rfree, rmsds of various sorts). Most people seem to
recognize that these criteria need to be applied differently for different
resolutions. A lower resolution model is allowed a higher Rfree, for example.
Isn't it also true that a model refined to data with a cutoff of I/sigI of
1 would be expected to have a free R higher than a model refined to data with
a cutoff of 2? Surely we cannot say that the decrease in free R that results
from changing the cutoff criteria from 1 to 2 reflects an improved model. It
is the same model after all.
Sometimes this shifting application of empirical criteria enhances the
adoption of new technology. Certainly the TLS parametrization of atomic
motion has been widely accepted because it results in lower working and free
Rs. I've seen it knock 3 to 5 percent off, and while that certainly means
that the model fits the data better, I'm not sure that the quality of the
hydrogen bond distances, van der Waals distances, or maps are any better.
The latter details are what I really look for in a model.
On the other hand, there has been good evidence through the years that
there is useful information in the data beyond an I/sigI of 2 or an
Rmeas > 100% but getting people to use this data has been a hard slog. The
reason for this reluctance is that the R values of the resulting models
are higher. Of course they are higher! That does not mean the models
are of poorer quality, only that data with lower signal/noise has been
used that was discarded in the models you used to develop your "gut feeling"
for the meaning of R.
When you change your criteria for selecting data you have to discard
your old notions about the acceptable values of empirical quality measures.
You either have to normalize your measure, as Phil Jeffrey recommends, by
ensuring that you calculate your R's with the same reflections, or make
objective measures of map quality.
Dale Tronrud
P.S. It is entirely possible that refining a model to a very optimistic
resolution cutoff and calculating the map to a lower resolution might be
better than throwing out the data altogether.
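[Editor's note: the point above — that R values from the same model depend on which reflections are included — can be illustrated with a toy calculation. The amplitudes are synthetic, not real data; R = sum|Fo - Fc| / sum Fo for the same "model" under two cutoffs:]

```python
import random

random.seed(1)

# Synthetic observed/calculated amplitude pairs: strong low-resolution
# reflections first, a weak (low signal-to-noise) high-resolution shell last.
strong = [(f, f * (1 + random.gauss(0, 0.05))) for f in range(100, 200, 5)]
weak = [(f, f * (1 + random.gauss(0, 0.4))) for f in range(5, 25, 2)]

def r_factor(pairs):
    """R = sum|Fo - Fc| / sum Fo over the given reflections."""
    return sum(abs(fo - fc) for fo, fc in pairs) / sum(fo for fo, _ in pairs)

r_conservative = r_factor(strong)        # data cut before the weak shell
r_optimistic = r_factor(strong + weak)   # same "model", weak data included
```

The second R is higher even though the model is identical — exactly why R values computed under different cutoffs cannot be compared directly.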
On 5/1/2012 10:34 AM, Kendall Nettles wrote:
> I have seen dramatic improvements in maps and behavior during refinement following use of the UCLA anisotropy server in two different cases. For one of them the Rfree went from 33% to 28%. I don't think it would have been publishable otherwise.
> Kendall
>
> On May 1, 2012, at 11:10 AM, Bryan Lepore wrote:
>
>> On Mon, Apr 30, 2012 at 4:22 AM, Phil Evans<pre(a)mrc-lmb.cam.ac.uk> wrote:
>>> Are anisotropic cutoff desirable?
>>
>> is there a peer-reviewed publication - perhaps from Acta
>> Crystallographica - which describes precisely why scaling or
>> refinement programs are inadequate to ameliorate the problem of
>> anisotropy, and argues why the method applied in Strong, et. al. 2006
>> satisfies this need?
>>
>> -Bryan
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/phenixbb
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
13 years, 2 months

Re: [cctbxbb] flex index
by Ralf Grosse-Kunstleve
Yes, that's true, the padded arrays are a permanent struggle. A few days ago
Nat changed the fft_map.real_map_unpadded() method to do the unpadding in
place by default, to avoid memory overhead. That's probably the way to go in
most situations.
Ralf
On Tue, Oct 18, 2011 at 4:20 AM, Monarin Uervirojnangkoorn <
monarin(a)biochem.uni-luebeck.de> wrote:
> Thanks!. very useful. although doesn't really work with padded array (as in
> the case of real_map object). but found my way round.
>
> best
> mona
>
> On Oct 18, 2011, at 11:17 AM, Ralf Grosse-Kunstleve wrote:
>
> Hi Mona,
>
> Try this:
>
> from scitbx.array_family import flex
> a=flex.double([1,4,5,7,5,6,6,1])
> a.set_selected(a > 5, 5)
> print list(a)
>
> Or in smaller steps:
>
> bool_selection = a > 5
> assert bool_selection.size() == a.size()
> print list(bool_selection)
> a.set_selected(bool_selection, 5)
> print list(a)
>
> You can also call a.set_selected(bool_selection, b)
> where b is an array with as many elements as you have True in the
> bool_selection.
>
> This kind of working with selections is very typical within cctbx/phenix.
> If you look through our sources you'll find a lot of examples; in passing,
> we have two types of selections, bool selections as above, and integer
> selections. The latter are useful for permutations, e.g. to sort array
> elements, or if you know you select only a small number of elements from a
> large array. Let me know if you have more questions about selections and
> I'll be happy to explain. (I guess one day I should write a "Working with
> selections" tutorial.)
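[Editor's note: the two selection flavours described above have direct analogues in plain Python. This sketch mimics — but does not use — the flex API, for readers without cctbx at hand:]

```python
a = [1, 4, 5, 7, 5, 6, 6, 1]

# bool-selection analogue of a.set_selected(a > 5, 5): clamp values above 5
clamped = [min(x, 5) for x in a]

# integer-selection analogue: a permutation that sorts the array
perm = sorted(range(len(a)), key=a.__getitem__)
sorted_a = [a[i] for i in perm]
```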
>
> Ralf
>
> On Tue, Oct 18, 2011 at 2:54 AM, Monarin Uervirojnangkoorn <
> monarin(a)biochem.uni-luebeck.de> wrote:
>
>> Hi,
>>
>> I'm using flex and I'd like to get access to elements that fall beyond/below
>> a certain value, e.g.
>> a=[1,4,5,7,5,6,6,1]
>> anything beyond 5 should be set to 5 (in MATLAB: a(a>5)=5;).
>> if you could tell me how to do this quickly for flex, that would be great.
>> for now i use a loop and that is... pretty slow.
>>
>> many thanks.
>> mona
>>
>>
>>
>> ^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^
>> Institute of Biochemistry,
>> Institut für Neuro- und Bioinformatik,
>> Graduate School for Computing in Medicine and Life Sciences
>> University of Lübeck,
>> Ratzeburger Allee 160
>> Lübeck 23538
>> Germany
>> Tel: +49451-5004072
>> Fax: +49451-5004068
>>
>>
>>
>>
>>
>> _______________________________________________
>> cctbxbb mailing list
>> cctbxbb(a)phenix-online.org
>> http://phenix-online.org/mailman/listinfo/cctbxbb
>>
>>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
> ^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^-^
> Institute of Biochemistry,
> Institut für Neuro- und Bioinformatik,
> Graduate School for Computing in Medicine and Life Sciences
> University of Lübeck,
> Ratzeburger Allee 160
> Lübeck 23538
> Germany
> Tel: +49451-5004072
> Fax: +49451-5004068
>
>
>
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
13 years, 8 months

Re: [phenixbb] phenix.refine problems with my pdb
by Nicholas Noinaj
matt,
>>> Number of atoms with unknown nonbonded energy type symbols: 308
just curious, did you have 308 waters? as peter said, hard to determine exact problem without some files or examples to look at. also phenix does a good job with water picking, in my little experience.
just one more question, when you do get the refinement to work, is your ligand(s) included with the protein model? just no waters, right? it wasn't completely clear from your message. if the ligands aren't included, i recommend adding it(them) back and see if refinement works. then if that works, try water picking within phenix.
again, best of luck.
cheers,
nick
-----Original Message-----
From: Matthew Bowler <mwb(a)mrc-dunn.cam.ac.uk>
To: noinaj(a)uky.edu, PHENIX user mailing list <phenixbb(a)phenix-online.org>
Date: Thu, 15 Mar 2007 14:15:45 +0000
Subject: Re: [phenixbb] phenix.refine problems with my pdb
Thanks for replies so far....
I removed all ligands and then waters - the waters seem to be the
problem, as the protein model is now refining. Do they need to be
treated/marked in a different way for phenix?
Thanks again, Matt.
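One common source of the "unknown nonbonded energy type symbols" error for waters (a plausible cause here, though not confirmed in this thread) is ATOM/HETATM records with a blank element field in columns 77-78, since the nonbonded type is derived from the element symbol. A well-formed water record carries it explicitly (serial number and coordinates are illustrative):

```
HETATM 2001  O   HOH A 301      12.345  23.456  34.567  1.00 30.00           O
```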
On 15 Mar 2007, at 13:32, Nicholas Noinaj wrote:
> matt,
>
> can't offer much help, but have you tried to remove the ligand and
> try refinement? it would let you know if the pdb file or the
> presence of the ligand is the problem. at least you would know
> where you should focus your attention.
>
> if all else fails, have you tried getting parameters for your
> ligand from the PRODRG Server and using that for inputting the
> ligand parameters into phenix.refine directly when you setup the
> refinement job? (I think i have done this before???)
>
> here is the website in case you aren't familiar...
> http://davapc1.bioch.dundee.ac.uk/programs/prodrg/
>
> Again, Best of luck!
>
>
>
> cheers,
> nick
>
>
> -----Original Message-----
> From: Matthew Bowler <mwb(a)mrc-dunn.cam.ac.uk>
> To: phenixbb(a)phenix-online.org
> Date: Thu, 15 Mar 2007 13:00:42 +0000
> Subject: [phenixbb] phenix.refine problems with my pdb
>
> Dear All,
> I am trying to run phenix.refine but I keep getting the message:
>
>
> Sorry: Fatal problems interpreting PDB file:
> Number of atoms with unknown nonbonded energy type symbols: 308
> Please edit the PDB file to resolve the problems and/or supply a
> CIF file with matching restraint definitions. Note that
> elbow.builder is available to create restraint definitions.
>
> I have run my ligands through elbow and used the output .cif but I
> still get the same message. I can find no info on this problem in
> the manual or the BB, can anyone help? Thanks in advance, yours,
> Matt.
>
>
>
>
>
> Matthew Bowler
> MRC Dunn Human Nutrition Unit
> Wellcome Trust / MRC Building
> Hills Road
> Cambridge CB2 2XY
> Tel: 0044 (0) 1223 252826
> Fax: 0044 (0) 1223 252825
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
Matthew Bowler
MRC Dunn Human Nutrition Unit
Wellcome Trust / MRC Building
Hills Road
Cambridge CB2 2XY
Tel: 0044 (0) 1223 252826
Fax: 0044 (0) 1223 252825
18 years, 3 months

Re: [phenixbb] Joint X-ray / neutron refinement
by Billy Poon
Hi Kristoffer,
Are you running from the command-line? The current GUI in 1.21 does not
support joint refinement. We will release 1.21.1 to add support.
If you use the command-line, the presence of the xray and neutron scopes
will switch the parser to process the parameter file for joint xn
refinement. You should see a log that looks like
Starting phenix.refine
on Sat Feb 24 05:50:17 2024 by bkpoon
===============================================================================
Processing files:
-------------------------------------------------------------------------------
Found phil, params.eff
Processing PHIL parameters:
-------------------------------------------------------------------------------
Adding PHIL files:
------------------
params.eff
Switching to joint x-ray/neutron refinement mode
-------------------------------------------------------------------------------
The 1.21.1 release will probably be next week. I'm currently testing the
GUI. Thanks!
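For reference, a minimal sketch of what such a params.eff could look like. The xray/neutron scope paths are taken from the parameter names quoted later in this thread; the values themselves are illustrative only, not a recommended setup:

```
xray {
  refinement {
    refine {
      strategy = *individual_sites *individual_adp
    }
    main {
      simulated_annealing = False
      ordered_solvent = False
      number_of_macro_cycles = 3
    }
  }
}
neutron {
  refinement {
    refine {
      strategy = *individual_sites *individual_adp
    }
    main {
      simulated_annealing = False
      ordered_solvent = False
      number_of_macro_cycles = 3
    }
  }
}
```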
--
Billy K. Poon
Research Scientist, Molecular Biophysics and Integrated Bioimaging
Lawrence Berkeley National Laboratory
1 Cyclotron Road, M/S 33R0345
Berkeley, CA 94720
Fax: (510) 486-5909
Web: https://phenix-online.org
On Fri, Feb 23, 2024 at 11:58 AM Tim Gruene <tim.gruene(a)univie.ac.at> wrote:
> Hello Kristoffer,
>
> maybe this is related to the first paragraph at the link that you
> provide ("The paradigm and implementation of the joint X-ray and
> neutron refinement were changed in the fall of 2023, as documented
> here: https://pubmed.ncbi.nlm.nih.gov/37942718/"):
>
> It is better to use the information from an X-ray dataset via
> geometric restraints and refine against neutron data only (cf. DOI
> 10.1107/S1600576713027659), so that you focus on the information from
> the neutron data.
>
> Cheers,
> TIm
>
> On Fri, 23 Feb 2024 10:58:17 +0000 Kristoffer Lundgren
> <kristoffer.lundgren(a)compchem.lu.se> wrote:
>
> > Hello phenixbb,
> >
> > When trying to do joint refinement in phenix.refine 1.21-5207 using
> > the suggested joint_xn.eff file found at
> >
> https://phenix-online.org/documentation/reference/refinement.html#neutron-a…
> > as a template I get some error messages (running phenix.refine from
> > the command line):
> >
> > Unrecognized PHIL parameters:
> > -----------------------------
> > xray.refinement.refine.strategy (file "joint_xn.eff", line 33)
> > xray.refinement.main.simulated_annealing (file "joint_xn.eff",
> > line 36) xray.refinement.main.ordered_solvent (file "joint_xn.eff",
> > line 37) xray.refinement.main.number_of_macro_cycles (file
> > "joint_xn.eff", line 38) neutron.refinement.refine.strategy (file
> > "joint_xn.eff", line 45) neutron.refinement.main.simulated_annealing
> > (file "joint_xn.eff", line 48)
> > neutron.refinement.main.ordered_solvent (file "joint_xn.eff", line
> > 49) neutron.refinement.main.number_of_macro_cycles (file
> > "joint_xn.eff", line 50)
> >
> > Can you please advise on how to proceed? It looks like the options
> > supplied are not recognized at all by phenix.refine.
> >
> > Best regards
> > Kristoffer
> >
> >
> > From: <phenixbb-bounces(a)phenix-online.org> on behalf of Derek Logan
> > <derek.logan(a)biochemistry.lu.se> Date: Monday, 22 January 2024 at
> > 09:39 To: "phenixbb(a)phenix-online.org" <phenixbb(a)phenix-online.org>
> > Cc: Ulf Ryde <ulf.ryde(a)compchem.lu.se>, "esko.oksanen_ess.eu"
> > <esko.oksanen(a)ess.eu> Subject: ***SPAM*** [phenixbb] Joint X-ray /
> > neutron refinement
> >
> > Hi phenixbb,
> >
> > I'm trying to understand the current status of joint X-ray/neutron
> > refinement in phenix.refine. The announcement of the latest release
> > mentions a change in the algorithm as described in the recent
> > publication. In the documentation:
> >
> > Structure refinement in PHENIX
> > <https://phenix-online.org/documentation/reference/refinement.html#neutron-a…>
> >
> > it's described how you can run joint refinement using a parameter
> > file. Does this mean that it is *only* possible in this way and not
> > through the GUI? The GUI was very useful in the past as it
> > automatically opened Coot with the maps for both X-rays and neutrons,
> > and the refinement statistics for both were displayed in the GUI. As
> > far as I can see joint refinement via the GUI last worked in version
> > 1.19.2-4148 from 2021.
> >
> > Best regards
> > Derek
>
>
>
> --
> --
> Tim Gruene
> Head of the Centre for X-ray Structure Analysis
> Faculty of Chemistry
> University of Vienna
>
> Phone: +43-1-4277-70202
>
> GPG Key ID = A46BEE1A
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
> Unsubscribe: phenixbb-leave(a)phenix-online.org
1 year, 4 months

Re: [cctbxbb] Git
by Nicholas Sauter
From the DIALS-West perspective, the switch to git has been a stumbling
block to participation. I would have to agree with Ralf that git seems to
be a tool for very smart people but not for folks who just want a simple
tool for managing code.
I can't agree with Markus that a linear history is dispensable. In fact,
this is one feature in svn that I've found very helpful over the years. If
a feature is broken today, but I know for a fact that it worked sometime in
the past, I simply do an svn update -r '{datestamp}' to narrow down the
exact date when the feature became broken, then I isolate the exact commit
and look at the code. Can this be done with git or does it even make sense
if there is no concept of linear change?
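For comparison, the date-based narrowing that an svn datestamp update gives can be reproduced in git with `git rev-list --before`, whether or not the history is linear. A sketch using a throwaway repository (assumes git is installed; dates, file names and contents are illustrative):

```shell
# Reproducing svn-style "update to a date" in git.
# Build a throwaway repo with three dated commits, then check out
# the last commit on or before a chosen date.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
for i in 1 2 3; do
    echo "version $i" > feature.txt
    git add feature.txt
    GIT_COMMITTER_DATE="2016-01-0${i}T12:00:00" \
        git commit -q -m "commit $i" --date="2016-01-0${i}T12:00:00"
done
# Last commit whose committer date is on or before 2016-01-02:
rev=$(git rev-list -n 1 --before="2016-01-02T23:59:59" HEAD)
git checkout -q "$rev"
cat feature.txt    # prints "version 2"
```

`git bisect` automates the follow-up step, binary-searching between a known-good and a known-bad commit for the exact breaking one, and it copes with non-linear history.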
The git environment seems a bit chaotic, and not conducive to close
cooperation, from what I can see now. More experience with it may convince
me otherwise.
Nick
Nicholas K. Sauter, Ph. D.
Computer Staff Scientist, Molecular Biophysics and Integrated Bioimaging
Division
Lawrence Berkeley National Laboratory
1 Cyclotron Rd., Bldg. 33R0345
Berkeley, CA 94720
(510) 486-5713
On Tue, Jan 12, 2016 at 2:22 AM, <luis.fuentes-montero(a)diamond.ac.uk> wrote:
> Hi fellows,
>
>
>
> I believe it is not a bad idea to move to https://github.com (the
> servers) and later on decide whether to move to using *git* (the CLI
> tool). This can be an incremental way to do the transition, and will allow
> really old-fashioned developers (those who refuse to learn *git*) or
> conservative policies to survive and still move to a new, more reliable
> code repository.
>
>
>
> Just my thoughts,
>
>
>
> Greetings,
>
>
>
> Luiso
>
>
>
>
>
>
>
> *From:* cctbxbb-bounces(a)phenix-online.org [mailto:
> cctbxbb-bounces(a)phenix-online.org] *On Behalf Of *
> markus.gerstel(a)diamond.ac.uk
> *Sent:* 12 January 2016 09:56
> *To:* cctbxbb(a)phenix-online.org
> *Subject:* Re: [cctbxbb] Git
>
>
>
> Dear Luc,
>
>
>
> We actually had a look at the quality of the Github svn interface when we
> prepared for the DIALS move. My verdict was that it is surprisingly useful.
>
> All the possible git operations are mapped rather well onto a linear SVN
> history. All the branches and tags are accessible either by checking out
> the root of the repository, ie.
>
> svn co https://github.com/dials/dials.git (not recommended, as,
> say, any new branch/tag will probably result in a huge update operation for
> you)
>
> or by checking them out directly, ie.
>
> svn co https://github.com/dials/dials.git/branches/fix_export
>
> or svn co https://github.com/dials/dials.git/tags/1.0
>
>
>
> Forks by other people are, as is the case when using git, simply not
> visible. I agree that looking at the project history through SVN may not be
> as clear as the git history. I wonder how relevant this is though, as you
> can explore the history, without running any commands, on the Github
> website, eg. https://github.com/dials/dials/network,
> https://github.com/dials/dials/commits/master, etc.
>
>
>
> Creating new branches and tags and merging stuff via SVN would be a major
> operation. However, that has always been the case with SVN, and for that
> reason one generally just does not do these things in SVN.
>
> But for an SVN user group those operations would not be important – you
> only ever need to merge stuff if you create branches – so I don’t really
> see the problem. If you want to tag releases you can do that on the Github
> website as well as with a git command.
>
>
>
> In summary, I do recognize that SVN users will have difficulties
> participating in branched development, and in particular will not be able
> to quickly switch between branches.
>
> But I don’t think that this will be a problem, or that there is a need for
> a policy to keep the history linear.
>
>
>
> Best wishes
>
> -Markus
>
>
>
>
>
> *From:* cctbxbb-bounces(a)phenix-online.org [mailto:
> cctbxbb-bounces(a)phenix-online.org] *On Behalf Of *Luc Bourhis
> *Sent:* 11 January 2016 16:45
> *To:* cctbx mailing list
> *Subject:* Re: [cctbxbb] Git
>
>
>
> Hi Graeme,
>
>
>
> Can we revisit the idea of moving to git for cctbx?
>
>
>
> This brings to mind a question I have been asking myself since the subject
> has been brought forth. The idea Paul wrote about on this list was a move
> to Github. I guess some, perhaps many, developers will keep interacting
> with the repository using subversion. I am worried this would clash with
> the workflow of those of us who would go the native git way. By that I mean
> creating many branches and merge points, which one would merge into the
> official repository when ready. I am worried the history would look very
> opaque for the subversion users. I would even probably create a fork on
> Github, making it even more opaque for a tool like subversion. Has anybody
> thought about such issues? My preferred solution would be for everybody to
> move to git but I don’t think that’s realistic. At the other end of the
> spectrum, there is putting in place a policy to keep the history linear.
>
>
>
> Best wishes,
>
>
>
> Luc
>
>
>
>
>
> --
>
> This e-mail and any attachments may contain confidential, copyright and or
> privileged material, and are for the use of the intended addressee only. If
> you are not the intended addressee or an authorised recipient of the
> addressee please notify us of receipt by returning the e-mail and do not
> use, copy, retain, distribute or disclose the information in or attached to
> the e-mail.
> Any opinions expressed within this e-mail are those of the individual and
> not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any
> attachments are free from viruses and we cannot accept liability for any
> damage which you may sustain as a result of software viruses which may be
> transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England
> and Wales with its registered office at Diamond House, Harwell Science and
> Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
>
>
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
9 years, 5 months