Search results for query "look through"
- 520 messages

Re: [phenixbb] Using the Same Test Set in AutoBuild and Phenix.Refine
by Thomas C. Terwilliger
Hi Dale,
Can you try something else:
phenix.refine AutoBuild_run_12_/overall_best.pdb \
refinement.input.xray_data.file_name=\
AutoBuild_run_12/exptl_fobs_freeR_flags.mtz \
refinement.main.high_resolution=2.2 refinement.main.low_resolution=20 \
/usr/users/dale/geometry/chromophores/bcl_tnt.cif
This differs from your run only by substituting
AutoBuild_run_12/exptl_fobs_freeR_flags.mtz
for your 2 refinement data files. This is the exact file that is used in
refinement by AutoBuild.
I agree that you should be able to use your original data file instead. A
possible reason why this has failed is that the original data file has a
couple of reflections for which there is no data, and which were tossed by
AutoBuild before creating exptl_fobs_freeR_flags.mtz. Two files that
differ only in reflections with no data will still give different
checksums, I think.
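To see why trimming empty reflections changes a checksum even though no usable data changed, here is a minimal sketch (pure Python; the record layout and the "no data" convention are hypothetical, not AutoBuild's actual format):

```python
import hashlib

def reflection_checksum(reflections):
    """MD5 over a canonical text serialization of (h, k, l, F, sigF) records."""
    lines = ["%d %d %d %.3f %.3f" % r for r in reflections]
    return hashlib.md5("\n".join(lines).encode("ascii")).hexdigest()

# Hypothetical data: the second reflection carries no measurement (sigF = 0),
# which a program like AutoBuild might toss before writing its own mtz.
full   = [(1, 0, 0, 123.4, 5.6), (2, 0, 0, 0.0, 0.0), (3, 1, 1, 87.2, 4.1)]
pruned = [r for r in full if r[4] > 0]

# The two "files" hold the same usable data but hash differently.
```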
All the best,
Tom T
> Hi Dale,
>
>>> 1) Why you specify reflection MTZ file twice in phenix.refine script?
>>>
>>>
>> I put the mtz in twice because if I put it in once phenix.refine
>> complains that I have no free R flags. It seems to want one file with
>> the amplitudes and another with the flags. Since I have both in the
>> same file I put that file on the line twice and phenix.refine finds
>> everything it needs.
>>
>
> phenix.refine looks for free-R flags in your main data file
> (1M50-2.mtz). Optionally you can provide a separate file containing
> free-R flags (I have to write about this in the manual). However, if
> your 1M50-2.mtz contains free-R flags then you don't need to give it
> twice. So clearly something is wrong at this step and we need to find
> out what is wrong before doing anything else. Could you send the result
> of the command "phenix.mtz.dump 1M50-2.mtz" to see what's inside of your
> data file? Or I can debug it myself if you send me the data and model.
>
>> If the MD5 hash of the test set depends on the resolution then
>> certainly
>> I could be in trouble.
>
> No. It must always use the original files before any processing.
>
>> Does the resolution limit affect the MD5 hash of the test set?
>>
>
> No. If it does then it is a very bad bug. I will play with this myself
> later tonight.
>
>>
>>> 3) Does this work:
>>>
>>> (...)
>>
>> I'll try these but it will take a bit of time.
>>
>
> No need to run it to completion. Just make sure it passes through the
> processing step.
>
> Pavel.
>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://www.phenix-online.org/mailman/listinfo/phenixbb
>
17 years, 6 months

Re: [phenixbb] ordered solvent - parameters?
by Pavel Afonine
Hi Mark,
phenix.refine finds potential water peaks by looking at the mFo-DFc map
(the primary map). Then it sorts these peaks by distances (water-water,
water-other). Then it checks for hydrogen bonding. Finally, it verifies
that the peaks are present in the 2mFo-DFc map (the secondary map) at the
given sigma. Finally finally, the water B-factors (and occupancies, if
requested) are refined separately from the refinement of other parameters in
order to get a reasonable starting point for the next refinement
macro-cycle. The procedure is smart enough about cases where your
structure has explicit hydrogen atoms.
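In outline, the cascade of checks described above behaves like this toy filter (a simplified sketch, not the actual phenix.refine code; the peak fields and default cutoffs are made up for illustration):

```python
def pick_waters(peaks, primary_sigma=3.0, secondary_sigma=1.0,
                dist_min=1.8, dist_max=3.2):
    """Keep only peaks that pass every stage of the cascade.

    Each peak is a dict with 'height_primary' (sigma in the mFo-DFc map),
    'height_secondary' (sigma in the 2mFo-DFc map) and 'dist_to_model'
    (A to the nearest potential H-bond partner). The real procedure also
    sorts by water-water distances and checks H-bond geometry.
    """
    kept = []
    for p in peaks:
        if p["height_primary"] < primary_sigma:             # peak in primary map?
            continue
        if not dist_min <= p["dist_to_model"] <= dist_max:  # plausible H-bond distance?
            continue
        if p["height_secondary"] < secondary_sigma:         # confirmed in secondary map?
            continue
        kept.append(p)
    return kept
```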
What is crucial is that this procedure is built into refinement, so the
water molecules are updated (added / removed / refined) during
refinement and NOT as a separate step.
There are various options to refine water with isotropic or anisotropic
B-factors, refine water occupancies, etc.
We still need to do a better job of NOT picking waters into ligand
density. This is on our to-do list and will be done at some point.
There are a few slides in one of the most recent phenix.refine
presentation that outline most of what I wrote above:
http://cci.lbl.gov/~afonine/aca2008_knoxville_neutron/phenix_refine_2008_ma…
Please let us know if you have any other questions,
Pavel.
On 7/31/2008 2:03 PM, Mark Collins wrote:
> Hi All
>
> I was just going through the .help section and website documentation
> for ordered_solvent. And wonder if somebody could enlighten me as too
> the meanings of;
>
> ordered_solvent {
> primary_map_type = mFobs-DFmodel
> secondary_map_type = 2mFobs-DFmodel
> h_bond_min_mac = 1.8
> h_bond_min_sol = 1.8
> h_bond_max = 3.2
> }
> peak_search {
> map_next_to_model {
> use_hydrogens = False
> }
> max_number_of_peaks = None
> peak_search {
> peak_search_level = 1
> max_peaks = 0
> interpolate = True
> min_distance_sym_equiv = None
> general_positions_only = False
> min_cross_distance = 1.8
> }
>
> What is the difference between the primary and secondary map? Is there
> a way to pick sites in one and remove them in another? E.g. pick sites at 3 sigma
> in FoFc and remove if below 1.2 sigma in 2FoFc.
>
> I thought h-bonding was generously about 2.4-3.6A so why the 1.8 and 3.2
> defaults? Also if use_hydrogens = True, does the refinement use h_bond
> parameters instead of model_peak_dist and peak_peak_dist parameters, or
> both? Also what do all of the peak_search parameters do?
>
> And finally, does anyone have any recommendations for progressively changing
> the ordered_solvent parameters? In CNS we used to gradually lower the
> 2FoFc peak-picking (and deleting) cutoff and increase the bfactor_max for
> deleting.
>
> Thanks Mark
16 years, 11 months

Re: [phenixbb] TRIED resolve ...NEED A BIGGER VERSION... message
by Felix Frolow
Tom, does it mean that for a structure of ANY size the right version of
RESOLVE will now be activated automatically?
If it does, it is a very important improvement.
FF
Dr Felix Frolow
Professor of Structural Biology and Biotechnology
Department of Molecular Microbiology
and Biotechnology
Tel Aviv University 69978, Israel
Acta Crystallographica D, co-editor
e-mail: mbfrolow(a)post.tau.ac.il
Tel: ++972 3640 8723
Fax: ++972 3640 9407
Cellular: ++972 547 459 608
On Jan 5, 2009, at 5:25 PM, Tom Terwilliger wrote:
> Hi Ben,
> Yes, that is just FYI. There are a few sizes of resolve and this
> means the smallest wasn't big enough. I'll change the message so
> that it is more informative...
> All the best,
> Tom T
>
>
>
> Thomas C. Terwilliger
> Mail Stop M888
> Los Alamos National Laboratory
> Los Alamos, NM 87545
>
> Tel: 505-667-0072 email: terwilliger(a)LANL.gov
> Fax: 505-665-3024
> SOLVE web site: http://solve.lanl.gov
> PHENIX web site: http://www.phenix-online.org
> ISFI Integrated Center for Structure and Function Innovation web site: http://techcenter.mbi.ucla.edu
> TB Structural Genomics Consortium web site: http://www.doe-mbi.ucla.edu/TB
> CBSS Center for Bio-Security Science web site: http://www.lanl.gov/cbss
>
>
>
>
> On Jan 5, 2009, at 8:09 AM, Ben Eisenbraun wrote:
>
>>
>> Howdy Phenixians,
>>
>> One of my users is running phenix.autobuild, and while it seems to
>> run
>> okay, the output contains this warning:
>>
>> TRIED resolve ...NEED A BIGGER VERSION...
>>
>> Looking through phenix/phenix/autosol/run_resolve.py, it seems like
>> this
>> is an informational message, but I'm not sure, so I figured I'd ask.
>>
>> Is this actually an error?
>>
>> This is phenix-1.3-final on OS X 10.5.
>>
>> Thanks.
>>
>> -ben
>>
>> --
>> Ben Eisenbraun
>> Structural Biology Grid, Harvard Medical School
>> http://sbgrid.org | http://hms.harvard.edu
16 years, 5 months

Re: [phenixbb] rotamers
by Kendall Nettles
We have been doing a lot of parallel refinements to check out the new options in PHENIX refinement, and one of the things we have observed is that side chains with no clear electron density sometimes end up in the main-chain density and distort the model. There is no clear pattern as to which options lead to this phenotype, as different combinations give different results. Until we can sort out what is causing it, it seems clear to me that it is better to delete such side chains. If you want to leave a side chain with no clear electron density in place, you have to make sure each one is not distorting the model. So leaving these side chains requires a lot of work, with a benefit that is not clear to me.
Kendall Nettles
On Mar 28, 2011, at 1:04 PM, Ed Pozharski wrote:
> Pavel,
>
>> - what you mean by "no density",
>
> Lack of confidence in placement of the side chain. Everyone would have
> somewhat different take on it, but the question is more about what to
> do, not how to decide if the side chain is disordered.
>
>> Therefore this raises another item for your questionnaire:
>
> There is "other" option, feel free to use it
>
>> refine group
>> occupancy for these atoms (one occupancy per all atoms in question - the
>> occupancy typically will refine to something less than 0.5 or so).
>
> This raises an entirely different question regarding reliability of
> occupancy refinement in general due to its correlation with the
> B-factors. Another can of worms.
>
>> This trick with smearing out an atom by B-factor may only work for
>> isolated (single) atoms such as waters because they are not bonded to
>> anything through restraints.
>
> Certainly, presence of restraints makes the B-factor increase less
> steep. I just looked at an instance of a disordered arginine (no
> density above 1 sigma for any side chain atoms), and B-factors jump from
> 30 at the backbone to 90 at the tip of the side chain. This would
> reduce the density level ~5x, which is probably quite sufficient for
> blending it into the solvent. There could be a bit of a problem in the
> middle, where B-factors are inflated/deflated, but it does take care of
> density reduction.
>
> Things like atom-specific restraints and modified restraint target may
> be of some help, but the effect on the final model may be too small to
> validate the effort.
>
> --
> "I'd jump in myself, if I weren't so good at whistling."
> Julian, King of Lemurs
>
14 years, 3 months

Re: [phenixbb] High B-factors after Phenix restrained refinement
by Pavel Afonine
I agree, it is a matter of convention. The total ADP is
Utotal = Ucryst + Ugroup + Ulocal. You can either output the total Utotal into
ATOM/ANISOU records, or keep Ucryst+Ugroup in REMARKs and output Ulocal
in the ATOM records. Both ways are valid as long as they yield identical Utotal.
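A tiny numerical illustration of the two equivalent conventions (the contribution values here are hypothetical):

```python
B_cryst, B_group = 12.0, 8.0       # hypothetical overall and TLS contributions
B_local = [15.2, 22.7, 31.4]       # hypothetical per-atom "residual" Bs

# Convention 1: ATOM records carry the total B (what phenix.refine writes).
atom_b_total = [B_cryst + B_group + b for b in B_local]

# Convention 2: ATOM records carry only Ulocal; Ucryst+Ugroup stay in REMARKs.
atom_b_residual = list(B_local)
remark_b = B_cryst + B_group

# Both are valid because they reconstruct the same total per atom.
```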
For more relevant information, summary and some review, see:
- pages 23-29 here:
http://phenix-online.org/presentations/latest/pavel_refinement_general.pdf
- article "On atomic displacement parameters (ADP) and their
parameterization in PHENIX" here:
http://phenix-online.org/newsletter/
Pavel
On 12/16/11 8:11 AM, Steiner, Roberto wrote:
> Hi Da
>
> Even if you deposit a structure refined with Refmac the PDB now
> expects the total B values being present. Have a look at
> http://deposit.rcsb.org/adit/REFMAC.html
>
> What you call "more" correct does not really make much sense to me if
> I understand you properly. If you follow the link given above (or use
> TLSANL directly from the CCP4) and get 'total Bs' from Refmac I am
> sure they will be more or less the same and the Bs from phenix.refine.
>
> R
>
>
> On 16 Dec 2011, at 15:54, Da Duan wrote:
>
>> Hi Nat
>>
>> I was just looking at the average B in the refinement log files from
>> Refmac and Phenix Refine. Thanks for the clarification on how Refmac
>> and Phenix calculate the average B. My next question is when
>> depositing the structure, is it more common to deposit structures
>> with the "residual" B-factors or B-factors generated by Phenix that
>> includes the TLS and Ucryst contribution? I also performed sfcheck
>> and the average B generated by the Wilson plot is ~100 which seems to
>> suggest that the Phenix average B is probably "more" correct?
>>
>> Thanks again
>>
>> Da
>>
>>
>>
>> On Fri, Dec 16, 2011 at 12:31 AM, Nathaniel Echols <nechols(a)lbl.gov> wrote:
>>
>> On Thu, Dec 15, 2011 at 2:20 PM, Da Duan <2dd13(a)queensu.ca> wrote:
>> > I used Phenix AutoMR to solved a structure to 3.3A and after 1
>> round of
>> > rigidbody refinement with Phenix Refine I proceeded to restrained
>> > refinement. The R/Rfree from the refinement decreased nicely as
>> expected but
>> > the B average is at ~100 (using Group B factor refinement
>> option). I took
>> > the same model and mtz through Refmac and the B average is
>> about ~40. Has
>> > anyone experienced this before? I am almost positive it maybe a
>> setting
>> > issue in Phenix Refine that i should be looking at to get the B
>> factors to
>> > refine correctly.
>>
>> How are you calculating the average B? Refmac prints "residual"
>> B-factors in the B column of ATOM records - these do not include the
>> contribution from TLS and Ucryst (an overall B-factor for the entire
>> crystal). In Phenix, the ATOM records always have the total
>> isotropic
>> B-factor, and this will always be higher than the equivalent in
>> Refmac. So it's quite likely that both programs are correct, they're
>> just reporting very different things. (And for what it's worth, a
>> mean B-factor of 100 is totally normal at 3.3A resolution.)
>>
>> -Nat
>
> Roberto Steiner, PhD
> Randall Division of Cell and Molecular Biophysics Group Leader
> King's College London
>
> Room 3.10A
> New Hunt's House
> Guy's Campus
> SE1 1UL, London, UK
> Tel 0044-20-78488216
> Fax 0044-20-78486435
> roberto.steiner(a)kcl.ac.uk
>
>
>
>
>
13 years, 6 months

Re: [phenixbb] phenix autobuild question
by aberndt@mrc-lmb.cam.ac.uk
Hi Tom,
thanks for your answer. I just checked the phenix version I used, and it is
version-1.3-final. Please find attached the .log file you asked for. I
just realised that resolve apparently had a problem with the .mtz label
assignment. I never had this problem before. Actually, now that I mention it,
I have to confess that this is not entirely true. All the phenix tutorials I
have looked through so far state that you can use CCP4's mtz files the way
they are, yet this is not entirely true. Say I want an analysis from
phenix.xtriage by typing 'phenix.xtriage latest.mtz'. Phenix will give
me the error message:
Multiple equally suitable arrays of observed xray data found.
Possible choices:
ab_xds_pointless_scala5.mtz:IMEAN_XDSdataset,SIGIMEAN_XDSdataset
ab_xds_pointless_scala5.mtz:I_XDSdataset(+),SIGI_XDSdataset(+),I_XDSdataset(-),SIGI_XDSdataset(-),merged
Please use scaling.input.xray_data.obs_labels
to specify an unambiguous substring of the target label.
Sorry: Multiple equally suitable arrays of observed xray data found.
Is there a way to avoid going back to CCP4 and sorting the labels
accordingly? What would the 'scaling.input.xray_data.obs_labels'
command be on the xtriage command line?
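For what it is worth, and purely as an untested guess following the hint in the error message itself, the label substring can probably be given directly on the command line:

```shell
# Hypothetical invocation: the substring must uniquely match one label set
# in the file (here the merged IMEAN array rather than the anomalous pairs).
phenix.xtriage latest.mtz scaling.input.xray_data.obs_labels=IMEAN
```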
Anyway, Tom, many thanks in advance for your answer! I really like working
with phenix. I just need to learn loads more ...
Best,
Alex
> Hi Alex,
>
> I'm sorry for both the failure and the lack of a clear message here! I
> will try to fix both.
>
> Is this running phenix-1.3-final? (If not, could you possibly run with
> that version so we are on the same version.)
>
> Can you possibly send me the end of the output in
>
> /someplace/home/someuser/
> work/someproject/phenix2/AutoBuild_run_1_/TEMP0/AutoBuild_run_1_/AutoBuild_run_1_1.log
>
> which I hope will have an actual error message to look at?
>
> The reason for the subdirectories is this: phenix.autobuild runs several
> jobs in parallel (if you have set nproc ) or one after the other (if you
> have not set nproc). These subdirectories contain those jobs...which are
> then combined together to give the final results in your overall
> AutoBuild_run_1_/ directory.
>
> I don't know the answer to the phenix.refine question...perhaps Pavel or
> Ralf can answer that one.
>
> All the best,
> Tom T
>
>> Dear phenix community,
>>
>> I started an AutoBuild run using the phenix GUI (ver 1.3 rc4) on a
>> linux cluster. It worked okay and all the refinement looked pretty
>> decent. However, after quite a while I obtained the following error
>> message (see below). Would anyone please tell me what to do to prevent
>> this error and let phenix get its job finished?
>>
>> Also, I don't understand why phenix AutoBuild creates 5 (or whatever
>> number) AutoBuild_run_x in the AutoBuild-run_1/TEMP0 directory and not
>> two dirs up in the tree. In other words in my /someplace/home/someuser/
>> work/someproject/phenix2/AutoBuild_run_1_/TEMP0 directory are more
>> AutoBuild_run_x_ directories (with x=1 to 5). It is a bit confusing to
>> me.
>>
>> A final question: I realised that phenix.refine drastically
>> increases the number of outliers. I know that there is a weighting
>> term someplace ... but what was it again?
>>
>> Many thanks in advance,
>> Alex
>>
>>
>> warnings
>> Failed to carry out AutoBuild_multiple_models:
>> Sorry, subprocess failed...message is:
>> ********************************************************************************
>> phenix.autobuild \
>>
>> write_run_directory_to_file=/someplace/home/someuser/work/someproject/
>> phenix2/AutoBuild_run_1_/TEMP0/INFO_FILE_1
>>
>> Reading effective parameters from /someplace/home/someuser/work/
>> someproject/phenix2/AutoBuild_run_1_/TEMP0/PARAMS_1.eff
>>
>> Sending output to AutoBuild_run_1_/AutoBuild_run_1_1.log
>>
>> ********************************************************************************
>>
>> Failed to carry out AutoBuild_build_cycle:
>>
>> failure
>>
>> ********************************************************************************
>>
>> ********************************************************************************
>>
>> work_pdb_file "AutoBuild_run_1_/edited_pdb.pdb"
>> working_directory "/someplace/home/someuser/work/someproject/phenix2"
>> worst_percent_res_rebuild 2.0
>>
>> ---------------------------------------------------
>> Alex Berndt
>> MRC-Laboratory of Molecular Biology
>> Hills Road
>> Cambridge, CB2 0QH
>> U.K.
>>
>> phone: +44 (0)1223 402113
>> ---------------------------------------------------
>>
>
16 years, 9 months

Re: [phenixbb] high average b-factor vs. Wilson B - EXPLANATION
by Sue Roberts
Hello,
I'm a little confused about Pavel's response. Perhaps I'm not understanding it correctly.
I don't understand why you say this is nothing to worry about. It seems to me that it means that the refinement strategy isn't stable or optimal: the contributions from the overall Baniso and the individual B factors are not being (and possibly cannot be) separated properly. Maybe they can't be because the overall anisotropy is so large that it swamps the anisotropic B, or perhaps the resolution is not sufficient to refine anisotropic Bs for individual atoms. In these cases shouldn't either (1) the anisotropic B be refined and then fixed rather than be allowed to creep up, or (2) anisotropic Bs for individual atoms not be refined?
Can this problem be debugged by looking at the anisotropic Bs in coot? If the anisotropic scale is being incorporated improperly into the atomic anisotropic Bs then would all the thermal ellipsoids have almost the same shape and orientation and only differ in magnitude?
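For reference, the trace shuffling that Pavel describes in the quoted message below can be mimicked in a few lines (an illustrative sketch: the individual Bs are hypothetical, the Bcryst diagonal is taken from his case "2)", and off-diagonal terms are ignored):

```python
b_cryst_diag = [4.42, 3.35, 36.58]   # diagonal of Bcryst from Pavel's case "2)"
adps = [20.0, 25.0, 30.0]            # hypothetical individual isotropic Bs

iso = sum(b_cryst_diag) / 3.0        # trace/3, reported as 14.78 below
b_cryst_shifted = [b - iso for b in b_cryst_diag]   # now traceless
adps_shifted = [b + iso for b in adps]

# The per-atom total (isotropic part) is invariant under the shift,
# which is why both refinement scenarios give "identical" R-factors.
before = [sum(b_cryst_diag) / 3.0 + b for b in adps]
after = [sum(b_cryst_shifted) / 3.0 + b for b in adps_shifted]
```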
Sue
On Aug 6, 2010, at 9:15 AM, Pavel Afonine wrote:
> Hi Geoffrey,
>
> thanks for sending me the data and model - this helped me to find out what happens in your and other similar repeatedly reported cases.
>
> Have a quick look at the total model structure factor formula that is used in all phenix.refine, phenix.model_vs_data, phenix.maps and many other similar tools:
>
> see page 6 here:
>
> http://www.phenix-online.org/presentations/latest/pavel_refinement_general.…
>
> and glance through the page 29, PHENIX Newsletter:
>
> http://www.phenix-online.org/newsletter/CCN_2010_07.pdf
>
> As you see there is overall anisotropic scale matrix Ucryst (see Acta Cryst. (2005). D61, 850-855 and references therein for deeper level of details). In refinement, the trace of this matrix is subtracted from it and added to individual ADPs and Bsol, making Fmodel invariant under this manipulation. Most of the time, this is a small value, but sometimes it is relatively large.
>
> Now, let's see what happens in your particular case.
> The Wilson B-factor is 27.
> If we reset all B-factors to 27 and repeat the refinement until convergence using two scenarios:
>
> 1) we add the trace of Ucryst to individual ADPs,
> 2) we do not add the trace of Ucryst to individual ADPs,
>
> we will get the following:
>
> 1) R-work = 0.1728, R-free = 0.2177
> Bcryst (Ucryst reported as B) = (-10.42,-11.48,21.90,-0.00,0.00,0.00); trace/3= 0.00
> ksol= 0.33 Bsol= 54.75
> Average ADP = 43.56
>
> 2) R-work = 0.1744, R-free = 0.2171
> Bcryst (Ucryst reported as B) = (4.42,3.35,36.58,0.00,0.00,0.00); trace/3= 14.78
> ksol= 0.32 Bsol= 40.00
> Average ADP = 29.03
>
> Clearly, in case "2)" you get almost exact match of Wilson B and Average ADP (27 and 29), while in case "1)" the B-factors are higher. Note, the R-factors in both cases are "identical" (negligibly different given the resolution and the R-factor value).
>
> So I guess everything is more or less consistent and clear. It depends where and how you keep different contributions to the total ADP, and which values you use to compute mean ADP.
>
> Answering your very original question "Should I be concerned about this?": the answer is no. But I had a quick look at your maps: and this is what I would spend some more time: you have a lot of positive and negative peaks, some of them are very strong: larger than 6sigma! I guess you are missing some ions, and alternative conformations may need some more attention. This is normal, it just needs some care and time to be spent before your structure is ready to go (to PDB).
>
> Pavel.
>
> PS> I'm sending you the results of these two runs off list.
>
>
> On 8/3/10 10:20 PM, Geoffrey Feld wrote:
>> Dear PhenixBBers,
>>
>> I'm working on a 1.45 A structure I solved using MR (phaser) and I'm pretty close to finishing, just plopping in waters and fixing rotamers. Rw = 19.8 Rfree= 22.8. I am a little concerned because my Wilson B is 27.00 while my average B for macromolecule is more like 43, and for solvent is 48. I have enough data to use anisotropic ADP refinement, which was a big help in bringing down the Rfree, but the average B hasn't really moved much. Should I be concerned about this? Should I try adjusting the wxu, or some other parameter?
>>
>> Thanks!
>
Dr. Sue A. Roberts
Dept. of Chemistry and Biochemistry
University of Arizona
1041 E. Lowell St., Tucson, AZ 85721
Phone: 520 621 8171
suer(a)email.arizona.edu
http://www.biochem.arizona.edu/xray
14 years, 10 months

Re: [phenixbb] phaser MR
by sbiswas2@ncsu.edu
Hi,
Thanks for your response. So I did one cycle of refinement, and indeed the
Rwork goes down by 5 points when I apply the twin law in space group P4222,
and when I look at the map the clashes that were present before are no
longer there. I will try to scale it in P4 or P422 and see how it looks. I
used HKL2000 to scale the data and have no idea whether it would make a
difference to use SCALA.
This is what I got from phenix xtriage:
These values look inbetween untwinned and perfect twin
Acentric reflections
<I^2>/<I>^2 :1.980 (untwinned: 2.000; perfect twin 1.500)
<F>^2/<F^2> :0.797 (untwinned: 0.785; perfect twin 0.885)
<|E^2 - 1|> :0.727 (untwinned: 0.736; perfect twin 0.541)
Centric reflections
<I^2>/<I>^2 :2.863 (untwinned: 3.000; perfect twin 2.000)
<F>^2/<F^2> :0.673 (untwinned: 0.637; perfect twin 0.785)
<|E^2 - 1|> :0.925 (untwinned: 0.968; perfect twin 0.736)
-----------------------------------------------
| Z | Nac_obs | Nac_theo | Nc_obs | Nc_theo |
-----------------------------------------------
| 0.0 | 0.000 | 0.000 | 0.000 | 0.000 |
| 0.1 | 0.074 | 0.095 | 0.208 | 0.248 |
| 0.2 | 0.164 | 0.181 | 0.319 | 0.345 |
| 0.3 | 0.246 | 0.259 | 0.391 | 0.419 |
| 0.4 | 0.322 | 0.330 | 0.452 | 0.474 |
| 0.5 | 0.388 | 0.394 | 0.505 | 0.520 |
| 0.6 | 0.445 | 0.451 | 0.552 | 0.561 |
| 0.7 | 0.497 | 0.503 | 0.592 | 0.597 |
| 0.8 | 0.541 | 0.551 | 0.630 | 0.629 |
| 0.9 | 0.587 | 0.593 | 0.659 | 0.657 |
| 1.0 | 0.631 | 0.632 | 0.679 | 0.683 |
-----------------------------------------------
| Maximum deviation acentric : 0.021 |
| Maximum deviation centric : 0.040 |
| |
| <NZ(obs)-NZ(twinned)>_acentric : -0.009 |
| <NZ(obs)-NZ(twinned)>_centric : -0.014 |
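The untwinned and perfect-twin reference values in the statistics above follow from the intensity distribution of acentric reflections, and can be reproduced with a quick Monte Carlo sketch (illustrative only):

```python
import random

random.seed(0)
n = 200_000

# Acentric untwinned intensities follow Wilson (exponential) statistics;
# a perfect twin averages two independent intensities.
untwinned = [random.expovariate(1.0) for _ in range(n)]
twinned = [(random.expovariate(1.0) + random.expovariate(1.0)) / 2.0
           for _ in range(n)]

def second_moment(intensities):
    """<I^2>/<I>^2 for a list of intensities."""
    mean = sum(intensities) / len(intensities)
    return sum(x * x for x in intensities) / len(intensities) / (mean * mean)

# Expect ~2.0 for untwinned and ~1.5 for a perfect twin, matching the
# reference values quoted by xtriage.
```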
Thanks again for the valuable input,
Shya
> Shya,
>
> Did phaser complain that the asymmetric unit was too full? How do the
> self rotation maps look? Are the crystallographic peaks exact or off
> by a few degrees (your resolution data may make it difficult to see
> this)? How do the N(z) cumulative intensity distributions look (make
> sure to calculate this with thin resolution bins, i.e. increase BINS
> in Scala I think)? Does your data look sigmoidal on this plot?
>
> Perfect twinning or an NCS that's close to a crystallographic axis is
> difficult to diagnose from merged intensity statistics and even more
> difficult with resolution worse than 2.5. I recommend Dauter Acta
> Cryst (2003) D59 2004-2016 for a good discussion of this.
>
> Your space group might be too high. See the subgroups of P422 at
> http://cci.lbl.gov/~phzwart/p422_2.png
> . Reintegrate and merge the data in each space group, MR a single copy
> of your model (let phaser complete the ASU) and compare the Rpim's
> (from scaling/merging) and the Rwork/Rfree from a rigid body refine
> without NCS, with NCS, with appropriate twin laws, and with twin laws
> + NCS. No need to do a full refinement just yet. Allow phenix.refine
> to create the Rfree flags. Choose the space group which gives the best
> statistics.
>
> I recently had a case (Hardin, Reyes, Batey J. Biol. Chem., Vol. 284,
> Issue 22, 15317-15324, May 29, 2009) of a protein that merged into
> P422 but was difficult to refine in that space group. I brought it
> back to P4 and refined with NCS+twin to give more reasonable Rwork/
> Rfree (5-7% difference from the P422 to P4).
>
> HTH,
>
> FR
>
>
>
> On Jul 23, 2009, at 3:54 PM, sbiswas2(a)ncsu.edu wrote:
>
>> Hi Francis,
>> Thanks for your response. The matthews coefficient suggests two
>> molecules
>> in the AU. Phaser also finds two molecules. I ran the dataset through
>> phenix xtriage it did not indicate twinning though. The molecule also
>> exists in nature as a monomer.
>> Shya
>>
>>
>>> Twinning? What's your matthews coefficient say? Do you know if your
>>> structure is a multimer (biochemistry, etc)? Does it agree with the
>>> matthews coefficient?
>>>
>>> If the unit cell is not big enough to hold all of the contents,then
>>> this is an indicator for twinning .
>>>
>>> FR
>>>
>>> On Jul 23, 2009, at 3:09 PM, sbiswas2(a)ncsu.edu wrote:
>>>
>>>> Hi all,
>>>>
>>>> I was trying to solve a structure by molecular replacement. I scaled
>>>> the
>>>> data in P4222 space group (resolution 2.7A) with two molecules in
>>>> the
>>>> asymmetric unit (molecule A and B). I ran phaser with my model and
>>>> got a
>>>> Zscore of 5.1. When I look at the map that I got from phaser I could
>>>> easily see good electron density for both molecules. However, upon
>>>> inspection of the electron density map there were considerable
>>>> interactions or clashes between molecule B and a symmetry mate.
>>>> Molecule A, however, had no clashes with the symmetry atoms. I was
>>>> wondering if anyone knows how
>>>> to
>>>> resolve this. Could it be a problem with the space group? The statistics
>>>> are good for space group P4222 and the I/sigI was good to 2.7A.
>>>> Any advice is appreciated,
>>>> Shya
>>>>
>>>
>>> ---------------------------------------------
>>> Francis Reyes M.Sc.
>>> 215 UCB
>>> University of Colorado at Boulder
>>>
>>> gpg --keyserver pgp.mit.edu --recv-keys 67BA8D5D
>>>
>>> 8AE2 F2F4 90F7 9640 28BC 686F 78FD 6669 67BA 8D5D
>>>
>>>
>>
>
> ---------------------------------------------
> Francis Reyes M.Sc.
> 215 UCB
> University of Colorado at Boulder
>
> gpg --keyserver pgp.mit.edu --recv-keys 67BA8D5D
>
> 8AE2 F2F4 90F7 9640 28BC 686F 78FD 6669 67BA 8D5D
>
>
>
>
15 years, 11 months

Re: [phenixbb] Adding multiple conformers beyond 4
by Pavel Afonine
Hi George, hi Ben,
thanks a lot for explaining! Yes, I'm well aware of multi-model
refinement/building work and was even involved in some myself:
Acta Cryst. (2007). D63, 597-610. "Interpretation of ensembles created
by multiple iterative rebuilding of macromolecular models".
The "proper" way would be to enable the use of multiple models in
phenix.refine (those defined with MODEL-ENDMDL cards). Currently this is
not possible, but allowing it is on our to-do list. There are a number of
technical issues that we need to address first (and some of them are not
entirely up to me to resolve, e.g. Ralf's PDB interpretation procedure).
Yes, using altLoc identifiers does solve the problem, although the
under-the-hood calculations become extremely inefficient; but again, you
are right, it works (see the remark below for potential issues!).
I can go ahead and remove "max=4" limitation.
One remark. Recently I went through the whole PDB and tried to
re-compute the reported statistics (R-factors, for example) for all
entries containing multiple models. You can do it using the
phenix.model_vs_data tool:
phenix.model_vs_data model.pdb data.hkl
So, the observation is: the more models your PDB file contains, the less
reproducible the R-factors. The obvious reason for this is the limited
precision of the occupancy field in the PDB format (two decimal places).
This means that if you have 16 models, you report the occupancy as 0.06
and NOT 1./16 = 0.0625. So rounding errors are the issue here. What if
you have 200 models?
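The rounding problem is easy to demonstrate (plain Python, assuming the two-decimal occupancy field of the PDB format):

```python
n_models = 16
occ_exact = 1.0 / n_models          # 0.0625
occ_written = round(occ_exact, 2)   # the PDB field only holds two decimals: 0.06

# Summed over all 16 models, the total occupancy is 0.96 instead of 1.0,
# a 4% error that grows worse as the number of models increases.
total = n_models * occ_written
```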
Pavel.
On 9/15/09 10:58 AM, George Phillips wrote:
> Pavel,
>
> Ben and I are here now, so here is your response from both of us.
>
> We are trying to do complete ensemble refinements like we used to do
> with CNS. 8 or even 16 (or more) copies of the whole protein tend to
> give the best R-free. See the article below.
>
> We can recompile if you tell us what needs to be changed (or give us
> some clues where to look), but we need a lot more than four. Clearly
> this is not the normal use of this feature, but it works. We are
> getting drops in Rfree from your test examples in phenix even with four.
>
> Levin et al. Structure, 15: 1040 (2007).
>
> George N. Phillips, Jr., Ph.D.
> Professor of Biochemistry and of
> Computer Sciences
> University of Wisconsin-Madison
> 433 Babcock Dr. Madison, Wi 53706
> Phone/FAX (608) 263-6142
>
>
>
> On Sep 15, 2009, at 12:22 PM, Pavel Afonine wrote:
>
>> Hi Ben,
>>
>> To allow this I will have to slightly change the code... I went through
>> the whole PDB and did not find any entry with more than 3 or 4
>> conformers (at the moment of coding this). That motivated my choice of
>> a temporary limit of max=4 conformers (putting aside a number of cases
>> of altLocs being abused to mimic multiple MODEL-ENDMDL models).
>> Unfortunately, nothing is as permanent as the temporary, so we have
>> had 4 ever since -:)
>>
>> May I ask: why do you need more than 4 conformers? If it is really a
>> bottleneck and stops you from doing something important right now, I
>> can go ahead and fix it.
>>
>> Pavel.
>>
>>
>> On 9/15/09 9:20 AM, Ben Mueller wrote:
>>> I am a relatively new Phenix user and I am trying to see if it is
>>> possible to push the number of conformers beyond 4. I tried to do
>>> so, and I received the error message:
>>>
>>> RuntimeError: Exceed maximum allowable number of conformers (=4).
>>>
>>> Is there an easy (or difficult) way around this?
>>>
>>> Thanks for your time,
>>>
>>> Ben Mueller
>>>
>>> Phillips Lab
>>> Department of Biochemistry
>>> University of Wisconsin - Madison
>

Re: [cctbxbb] Making branches by accident
by markus.gerstel@diamond.ac.uk
I use a custom prompt so I can see what is going on when I am in a git repository folder.
This is the code one could add to their ~/.bashrc:
https://gist.github.com/Anthchirp/dfc9a4382f8dfc9a97fe1039c9e6789a
This is what it looks like:
https://postimg.org/image/8c9h72qwd/
This is what happens in the image:
* yellow brackets indicate you are in git territory, and contain the current branch name
* red branch name = uncommitted changes in repository
* positive number: number of commits the local repository is ahead of the remote repository
* the 'git pull' command causes an implicit merge commit, which I undo with the next command
* negative number: number of commits the local repository is behind the remote repository
* both negative and positive number: branches have diverged
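The branch/dirty/ahead/behind bookkeeping in the bullets above can be sketched as a small bash function (the function name and output format here are my own, not taken from the gist; a real prompt would derive the arguments from `git status --porcelain=v2 --branch`, while the formatter is kept pure so it is easy to test):

```shell
# Format a git prompt fragment: format_git_prompt BRANCH DIRTY AHEAD BEHIND
format_git_prompt() {
  local branch=$1 dirty=$2 ahead=$3 behind=$4
  local out="[$branch"
  [ "$dirty" -eq 1 ] && out="$out*"          # uncommitted changes
  [ "$ahead" -gt 0 ] && out="$out +$ahead"   # commits ahead of the remote
  [ "$behind" -gt 0 ] && out="$out -$behind" # commits behind the remote
  printf '%s]\n' "$out"
}

# Hooked into PS1 it might look like (illustrative only):
# PS1='\w $(format_git_prompt "$(git rev-parse --abbrev-ref HEAD 2>/dev/null)" 0 0 0)\$ '
```

Keeping the formatting separate from the `git` queries also means the prompt stays fast to tweak and the git calls run only once per prompt.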
Maybe someone finds it useful.
-Markus
________________________________
From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Pavel Afonine [pafonine(a)lbl.gov]
Sent: Wednesday, December 07, 2016 18:24
To: cctbxbb(a)phenix-online.org
Subject: Re: [cctbxbb] Making branches by accident
This happened to me a few times now, and just double-checked that my .gitconfig contains "rebase = true". Let's see if it happens again..
Pavel
On 12/7/16 00:02, Graeme.Winter(a)diamond.ac.uk<mailto:[email protected]> wrote:
Morning all
I am seeing a certain amount of "Merge branch 'master' of github.com:cctbx/cctbx_project" coming through on the commits - this usually means you did not do a git pull --rebase before the git push. This can be made the default by using the spell Markus sent out:
git config --global pull.rebase true
This will need to be done on each machine you push from; otherwise, getting into the habit of doing a git pull --rebase before you push is a good alternative.
We have had this on and off with DIALS but it tends to pass easily enough.
What bad thing happens? Nothing really, but the history becomes confusing…
So: may be worth checking that you have the pull.rebase thing set?
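For reference, checking and setting it from the command line looks like this (standard git commands, nothing project-specific):

```shell
# Show the current setting (prints nothing if unset):
git config --global --get pull.rebase

# Make `git pull` rebase by default, once per machine:
git config --global pull.rebase true

# One-off equivalent if you prefer not to change the default:
git pull --rebase
```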
Cheerio Graeme
--
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org<mailto:[email protected]>
http://phenix-online.org/mailman/listinfo/cctbxbb