Search results for query "look through"
- 527 messages
Re: [phenixbb] Help with custom bond restraints syntax
by Andy Torelli
Thanks Pavel, that works well.
Best,
-Andy
On 9/29/2009 12:21 PM, Pavel Afonine wrote:
> Alternatively, you can do simply this:
>
> refinement.geometry_restraints.edits {
>   bond {
>     action = *add
>     atom_selection_1 = chain F and resid 1 and name FE1
>     atom_selection_2 = chain A and resid 69 and name SG
>     distance_ideal = 2.35
>     sigma = 0.1
>     slack = None
>   }
> }
>
>
> Pavel.
>
>
> On 9/29/09 8:54 AM, Andrew T. Torelli wrote:
>> Hi all,
>>
>> I'm having what I believe is a simple problem defining custom bond
>> restraints between a side chain in my protein model and a ligand
>> (non-covalent bond). Here is a minimal form of my custom bond
>> definition file that suffers from the error:
>>
>> refinement.geometry_restraints.edits {
>>   Atom1 = chain F and resid 1 and name FE1
>>   Atom2 = chain A and resid 69 and name SG
>>   bond {
>>     action = *add
>>     atom_selection_1 = Atom1
>>     atom_selection_2 = Atom2
>>     distance_ideal = 2.35
>>     sigma = 0.1
>>     slack = None
>>   }
>> }
>>
>> I'm using phenix 1.4-153 and I get the following error:
>> ERROR: Unused parameter definitions:
>> refinement.geometry_restraints.edits.Atom1 (file
>> "/HOME/andrew/PHENIX_paramfiles/test.edits", line 5)
>> refinement.geometry_restraints.edits.Atom2 (file
>> "/HOME/andrew/PHENIX_paramfiles/test.edits", line 6)
>>
>> I've checked through the online manual and I believe my syntax is
>> correct, but I'm not sure why my custom bond restraint is being
>> ignored.
>>
>> A second question is: once I get the custom bond/angle restraints
>> correctly implemented, is there a better or more convenient place to
>> look and confirm that the restraints have been imposed other than the
>> .geo file phenix.refine outputs?
>>
>> Thank you very much for your help,
>> -Andy
>>
>>
>>
>>
>> _______________________________________________
>> phenixbb mailing list
>> phenixbb(a)phenix-online.org
>> http://www.phenix-online.org/mailman/listinfo/phenixbb
>>
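For the second question in this thread (confirming that custom restraints were actually imposed), the .geo file phenix.refine writes lists every restraint it used, so a quick grep narrows the search. The .geo excerpt below is a hand-made mock for illustration only; the exact record layout in real .geo files is an assumption here, not copied from a run:

```shell
# Mock .geo excerpt (real files are generated by phenix.refine; the
# formatting below is an assumption, not taken from an actual run).
cat > refine_001.geo <<'EOF'
bond pdb="FE1   FE F   1 "
     pdb=" SG  CYS A  69 "
  ideal  model  delta    sigma   weight residual
  2.350  2.412 -0.062 1.00e-01 1.00e+02 3.84e-01
EOF

# Grep for one atom of the custom bond; if the edit was picked up, its
# restraint record (ideal distance 2.35, sigma 0.1) appears here.
grep -A 3 'FE1' refine_001.geo
```

The same search works on the .geo from any refinement run, which makes it a reasonable answer to "is there a more convenient place to look" — the .geo file is that place, grep just makes it fast.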
16 years, 4 months
Re: [phenixbb] temp files
by James Holton
Thank you Tom!
Ok. So, in order to change the default I need to go through the code
looking for "temp_dir" and change things?
On 4/24/2021 10:16 AM, Tom Terwilliger wrote:
> Hi James,
>
> There is no overall Phenix temp directory specification, but most of
> the temp_dir usage is from autosol/autobuild/ligandfit/map_to_model.
> Each of these has the keyword "temp_dir=xxxx" which you should be
> able to set to any directory you want (and local is better as you
> note). Most programs using a temp_dir also have a keyword
> clean_up=True as well.
>
> All the best,
> Tom T
>
> On Sat, Apr 24, 2021 at 10:51 AM James Holton <jmholton(a)lbl.gov> wrote:
>
> Thank you Li-Wei
>
> Definitely not placing blame on one program. Phenix.autobuild is
> another
> big temp file producer. So is XDS. Clearly this ligand run was a
> case
> of a misconfigured, runaway task that never finished. However, the
> files
> lingered on disk, eating up inodes for 3 years!
>
> The reason I'm asking is I think there are significant performance
> increases to be gained by using fast, local storage for scratch
> files.
> This is not just in speed but storage and overall system/cluster
> performance. Very few things are more expensive than an NFS write!
>
> Does anyone know how to change the default temp file location across
> phenix ? Is this a cctbx thing?
>
> Thanks
>
> -James
>
>
> On 4/23/2021 9:38 PM, Li-Wei Hung wrote:
> > Hi James,
> >
> > I'll leave the global Phenix temp aspect to Billy.
> > For ligand identification specifically, the working directory is
> where
> > all the files are located. The program will purge most of the
> > intermediate files upon completion. If the user interrupted the
> runs
> > or if the program crashed at certain spots, the purge mechanism
> might
> > not kick in. Even so, it'd take many runs to accumulate 20e6 (2e7?)
> > files. In any case, you've got a point and I'll look into salvaging
> > intermediate files of ligand identification as soon as they are not
> > needed in the process.
> >
> > Thanks,
> >
> > Li-Wei
> >
> > On 4/23/2021 7:03 PM, James Holton wrote:
> >> Hello all,
> >>
> >> Is there a way to configure phenix at install time (or perhaps
> >> post-install) to put temporary files under /tmp ? I just had to
> >> delete 20e6 temp files over NFS from a single user's phenix ligand
> >> identification run. The delete took almost a month.
> >>
> >> Apologies if I am neglecting to look somewhere obvious in the
> >> documentation,
> >>
> >> Happy Weekend!
> >>
> >> -James Holton
> >> MAD Scientist
> >>
> >
>
>
> --
> Thomas C Terwilliger
> Laboratory Fellow, Los Alamos National Laboratory
> Senior Scientist, New Mexico Consortium
> 100 Entrada Dr, Los Alamos, NM 87544
> Email: tterwilliger(a)newmexicoconsortium.org
> Tel: 505-431-0010
>
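As a footnote to the cleanup problem James describes, bulk-deleting leftover scratch files with find on local disk is far cheaper than removing them one at a time over NFS. A minimal sketch on a mocked-up scratch directory (the TEMP_* naming and directory are invented for the demo, not what phenix actually writes):

```shell
# Build a tiny mock scratch directory; real runaway runs leave far more.
mkdir -p scratch_demo
touch scratch_demo/TEMP_0001.dat scratch_demo/TEMP_0002.dat scratch_demo/model.pdb

# Count the lingering temp files, then bulk-delete them in one pass.
find scratch_demo -name 'TEMP_*' | wc -l
find scratch_demo -name 'TEMP_*' -delete

# The non-temp file survives.
ls scratch_demo
```

Pointing temp_dir at fast local storage in the first place, as Tom suggests, avoids the problem entirely; the find pass is just damage control for runs that already littered shared storage.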
4 years, 9 months
Re: [cctbxbb] rstbx test is failing without indication in test_all_parallel
by James Holton
I'm no expert, but it looks to me like the test is getting something
from the phenix regression repository. Maybe it is this dependency
that's making it bonk?
-James
On 9/22/2017 8:58 AM, richard.gildea(a)diamond.ac.uk wrote:
> I was unaware that the code was actually used by anyone.
>
> So does anyone know enough about the code to fix the failing test that Oleg and Graeme both complained about?
>
> Cheers,
>
> Richard
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
> ________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Aaron Brewster [asbrewster(a)lbl.gov]
> Sent: 22 September 2017 15:56
> To: cctbx mailing list
> Subject: Re: [cctbxbb] rstbx test is failing without indication in test_all_parallel
>
> FWIW I have used it on several occasions to understand how diffraction looks for different settings. For example, when I was working on amyloids on the cspad detector I would run it like this:
>
> rstbx.simage.wx_display unit_cell=24.1460,4.8614,22.2291,90.000,107.319,90.000 wavelength=1.452514 detector.distance=111 detector.size=194.15,194.15 detector.pixels=1765,1765 ewald_proximity=0.042500 point_spread=20
>
> Twiddle the rotx, roty and rotz. It's nifty.
>
> I don't see any reason to kill this code.
>
> -Aaron
>
>
> On Fri, Sep 22, 2017 at 4:22 AM, <markus.gerstel(a)diamond.ac.uk> wrote:
> This appears to be true: looking through cctbx_project, dials, labelit, phenix, phenix_regression I found no references apart from the tests that sparked this thread.
> I would just remove it for retirement - it will be preserved in the history.
>
> -Markus
> ________________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of richard.gildea(a)diamond.ac.uk [richard.gildea(a)diamond.ac.uk]
> Sent: Friday, September 22, 2017 12:08
> To: cctbxbb(a)phenix-online.org
> Subject: Re: [cctbxbb] rstbx test is failing without indication in test_all_parallel
>
> As far as I recall rstbx/simage was a research project by Ralf that hasn't been touched for over 5 years, and isn't used by any code outside of rstbx/simage. Could this code be potentially "retired" somewhere outside of cctbx?
>
> Dr Richard Gildea
> Data Analysis Scientist
> Tel: +441235 77 8078
>
> Diamond Light Source Ltd.
> Diamond House
> Harwell Science & Innovation Campus
> Didcot
> Oxfordshire
> OX11 0DE
> ________________________________
> From: cctbxbb-bounces(a)phenix-online.org [cctbxbb-bounces(a)phenix-online.org] on behalf of Oleg Sobolev [osobolev(a)lbl.gov]
> Sent: 20 September 2017 19:33
> To: cctbx mailing list
> Subject: [cctbxbb] rstbx test is failing without indication in test_all_parallel
>
> Dear colleagues,
>
> Accidentally I found out that this command:
>
> rstbx.simage.solver d_min=5 lattice_symmetry=P422 intensity_symmetry=P4 index_and_integrate=True multiprocessing=True
finishes with a traceback.
>
> It is run as part of
> phenix_regression/misc/tst_rstbx.csh
>
The second problem is that the test_all_parallel script fails to detect an actual failure.
>
> It would be great if somebody who is in charge of rstbx and testing could look at this.
>
> Best regards,
> Oleg Sobolev.
>
> --
> This e-mail and any attachments may contain confidential, copyright and or privileged material, and are for the use of the intended addressee only. If you are not the intended addressee or an authorised recipient of the addressee please notify us of receipt by returning the e-mail and do not use, copy, retain, distribute or disclose the information in or attached to the e-mail.
> Any opinions expressed within this e-mail are those of the individual and not necessarily of Diamond Light Source Ltd.
> Diamond Light Source Ltd. cannot guarantee that this e-mail or any attachments are free from viruses and we cannot accept liability for any damage which you may sustain as a result of software viruses which may be transmitted in or with the message.
> Diamond Light Source Limited (company no. 4375679). Registered in England and Wales with its registered office at Diamond House, Harwell Science and Innovation Campus, Didcot, Oxfordshire, OX11 0DE, United Kingdom
>
>
> _______________________________________________
> cctbxbb mailing list
> cctbxbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/cctbxbb
>
>
8 years, 4 months
Re: [phenixbb] SAD related query
by Thomas C. Terwilliger
Hi Tara,
It sounds like autosol was not able to find a very good solution to your
structure. I would not be too optimistic from what you have said so far,
but here is a list of things to check over (from the about-to-be-released
phenix manual...!)
all the best,
Tom T
Autosol SAD tutorial: What to do if I do not get a good solution:
If you do not obtain a good solution, then it's not time to give up yet.
There are a number of standard things to try that may improve the
structure determination. Here are a few that you should always try:
* Have a careful look at all the output files. Work your way through
the main log file (e.g., AutoSol_run_1_1.log) and all the other
principal log files in order beginning with scaling
(dataset_1_scale.log), then looking at heavy-atom searching
(p9_se_w2_PHX.sca_ano_1.sca_hyss.log), phasing (e.g., phaser_1.log or
phaser_xx.log depending on which solution xx was the top solution) and
density modification (e.g., resolve_xx.log). Is there anything strange
or unusual in any of them that may give you a clue as to what to try
next? For example did the phasing work well (high figure of merit) yet
the density modification failed? (Perhaps the hand is incorrect). Was
the solvent content estimated correctly? (You can specify it yourself
if you want). What does the xtriage output say? Is there twinning or
strong translational symmetry? Are there problems with reflections
near ice rings? Are there many outlier reflections?
* Try a different resolution cutoff. For example 0.5 A lower
resolution than you tried before. Often the highest-resolution shells
have little useful information for structure solution (though the data
may be useful in refinement and density modification).
* Try a different rejection criterion for outliers. The default is
ratio_out=3.0 (toss reflections with delta F more than 3 times the rms
delta F of all reflections in the shell). Try instead ratio_out=5.0 to
keep almost everything.
* If the heavy-atom substructure search did not yield plausible
solutions, try searching with HYSS using the command-line interface,
and vary the resolution and number of sites you look for. Can you find
a solution that has a higher CC than the one found in AutoSol? If so,
you can read your solution in to AutoSol with sites_file=my_sites.pdb.
* Was an anisotropy correction applied in AutoSol? If there is some
anisotropy but no correction was applied, you can force AutoSol to
apply the correction with correct_aniso=True.
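The first item in the checklist above (working through the log files) can be partly mechanized: grep the phasing logs for the figure of merit rather than reading them end to end. The log snippet below is invented for illustration; real phaser/resolve logs are much larger and their exact wording may differ:

```shell
# Invented one-line stand-in for a phasing log; the real phaser_1.log is
# produced by AutoSol and its wording may differ from this mock.
cat > phaser_1.log <<'EOF'
   Figure of Merit of phasing = 0.42
EOF

# A case-insensitive search pulls the FOM lines out of any of the logs.
grep -i 'figure of merit' phaser_1.log
```

Run the same grep over all the principal logs at once (e.g. `grep -i 'figure of merit' *.log`) to spot a phasing run that looked good before density modification failed.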
> Dear all,
> I am a novice user of phenix. I am trying to obtain phases for a SAD
> dataset collected at 2.5 angs using autosol. But when I give the phases
> to autobuild, the R-factor is not decreasing below 49%. Also, in warp,
> it is unable to build, with the message "encountered an unknown
> element". Can someone suggest what the problem could be?
>
> Thanks for any suggestion in advance.
> Tara Kashav
>
> On 9/3/07, phenixbb-request(a)phenix-online.org wrote:
>>
>> Message: 1
>> Date: Sun, 2 Sep 2007 00:07:51 -0500 (CDT)
>> From: "Bryan W. Lepore" <bryanlepore(a)mail.utexas.edu>
>> Subject: [phenixbb] command-line Patterson maps
>> To: phenixbb(a)phenix-online.org
>>
>> will phenix calculate Pattersons from trial sites or a reflection file
>> on
>> the command line? i.e. something besides hacking what is already there.
>>
>> e.g. cns has predict_patterson.inp. i can't seem to find a way to do it
>> from the command line - such as `phenix.patterson --sg=94
>> reflections.mtz`. i saw phenix.maps but that looks like electron
>> density
>> only.
>>
>> -bryan
>>
>>
>
>
>
> --
> Tara Kashav,
> Dr. S. Gourinath's Lab,
> Lab No 430,
> School of Life Sciences,
> Jawaharlal Nehru University,
> New Delhi - 110067
18 years, 5 months
Re: [phenixbb] Dummy atoms
by Edward A. Berry
I think it is very important to be able to include unknown atoms
in a deposited pdb file (with echoing the caveat about flooding
the structure with UNK's to lower the R-factor).
For one thing, these structures are produced not just for structure-factor
calculation and validation. Many of the end users will never even
bother to do a structure factor calculation. It is important for the
depositor to be able to refer to an unknown but likely significant
ligand and for the reader to be able to go and look at that position
(ideally surrounded by electron density).
For another thing, the structure factor calculation will give exactly
the same result whether the dummy atoms are omitted or are flagged
with zero occupancy or atom-type X to be ignored in sf calculation.
In the first case the person calculating structure factors can feel
good because the results are exactly right for that model. In the
second case he feels bad because he wasn't able to correctly
account for those atoms. But the first case is actually a better model.
Better to get a slightly wrong value for the better model than the
correct result for the less good model, especially when the two
results are exactly the same.
Essentially we are faced with an insurmountable problem: we cannot
do a proper job of calculating sf's because of the unk atoms.
Better to include but ignore them in sf calc, I think, than to
eliminate them and kid ourselves that now we have the right answer.
However if the depositor has refined them (suggested by the B-factors
present in some of the files), and perhaps chosen an atom-type which
results in B-factors compatible with surrounding, it should be
possible to include the atom type so his R-factor can be reproduced.
This runs the risk of someone over-interpreting the PDB ("I thought
I knew what the UNK residue is, but my candidate has 3 C and one N
where the UNK has 4 C").
my 2 cents,
Ed
Pavel Afonine wrote:
> Hi Frank,
>
> thanks a lot for your feedback - as always very useful and critical
> which is great!
>
>> the 2nd-last one of the validation pack, where you recommend against
>> the use of UNK atoms, but don't say why:
>>
>> <snip>
>> Some programs and people tend to interpret unknown density using “dummy
>> atoms”. In PDB files it typically looks like this:
>> ATOM 10 O UNK 2 6.348 -11.323 10.667 1.00 8.06 X
>> ATOM 11 O UNK 2 6.994 -12.600 10.740 1.00 7.16 X
>> ATOM 12 O UNK 2 6.028 -13.737 10.607 1.00 6.58 X
>> ATOM 13 DUM UNK 2 6.796 -15.043 10.583 1.00 8.28
>> ATOM 14 DUM UNK 2 5.099 -13.727 11.792 1.00 7.15
>> - *Do not deposit this in PDB*, especially if chemical element type is
>> undefined
>> (rightmost column)
>> </snip>
>
> Sorry for not saying "why". If it ever happens for me to show these
> slides again in whatever School I promise to improve the slides to be as
> clear as possible.
>
> The problems with records like:
>
> ATOM 10 O UNK 2 6.348 -11.323 10.667 1.00 8.06 X
> ATOM 10 O UNK 2 6.348 -11.323 10.667 1.00 8.06
> ATOM 13 DUM UNK 2 6.796 -15.043 10.583 1.00 8.28
> ATOM 13 DUM UNK 2 6.796 -15.043 10.583 1.00 8.28 X
>
> are:
>
> - the chemical element type (columns 77-78) (the one used in Fcalc
> calculation and also may provide the charge) is undefined (simply blank
> or "X"), so there is no way to include these dummy atoms into structure
> factor calculations;
>
> - even if you have "O" like in the first example this often contradicts
> with "X" in rightmost column, so you have to use guesswork, which is not
> good for interpreting well defined formatted data files. Plus, of
> course, not way to tell the charge;
>
> - even if you have "O" like in the second example, the element type in
> the rightmost column is missing. Therefore it is weak information to rely on:
> we cannot reliably extract scattering type from atom label - classical
> example CA (Calcium) and CA (C-alpha);
>
> - of course, we can make the program simply ignore these atoms (hm...
> sounds like a bad practice: don't read it if you can't read it - this
> way we may end up being ignorant -:) ). But are we sure that the
> original program that put these dummies was also not using them in Fcalc
> calculation? Or maybe it was using some default scattering factor for
> them? Which one: H or O or N (N better approximates than O)?
>
> - furthermore, since we are lacking such a fundamental property of these
> dummy atoms as scattering type, it is laughable to assign B-factors
> to these atoms! Look through the PDB: you will find some smart-looking
> B-factors, such as 8.06 A**2 for a non-existing element X -:)
>
> In summary:
>
> - do not put anything there hoping that a future generation of smarter
> software will find out what it is;
> - if you do want to put something there (for which there are actually
> valid reasons - it will improve the overall map quality, which is good),
> then please properly define it.
>
> All the best!
> Pavel.
>
>
>
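Pavel's point about columns 77-78 is easy to check mechanically, since the element field of a PDB ATOM record sits at fixed columns. A sketch on a hand-padded mock record (the spacing is reconstructed with printf for the demo, not taken from a real deposited file):

```shell
# Pad a mock ATOM record so that " X" lands exactly at columns 77-78,
# and a second record so that the element field is blank.
printf '%-76s%2s\n' 'ATOM     10  O   UNK     2       6.348 -11.323  10.667  1.00  8.06' ' X' > dummy.pdb
printf '%-78s\n'    'ATOM     13 DUM  UNK     2       6.796 -15.043  10.583  1.00  8.28' >> dummy.pdb

# Cut out columns 77-78: "[ X]" is the dummy element, "[  ]" is blank.
awk '{ printf "element field: [%s]\n", substr($0, 77, 2) }' dummy.pdb
```

Either result ("X" or blank) is exactly the case Pavel describes: no usable scattering type, so no way to include the atom in an Fcalc.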
15 years, 6 months
Re: [phenixbb] Autobuild
by Terwilliger, Thomas C
Hi Heather,
Hmm... a little hard to tell what the problem is.
So first: autobuild should have refined this model as one of the first things it did. Did it get about the same R that you got when refining it directly with phenix.refine? If not, have a look at the refinement .eff files (the autobuild one is in AutoBuild_run_1_/TEMP0 unless that was cleaned up) and see what is different.
Next: what happens if you just run phenix.refine to get a map? Also take the map from this autobuild run, and look at both along with your model. Is one a lot better than the other?
If you take your best working map and run phenix.fit_loops with this model...how does that do?
All the best,
Tom T
________________________________
From: phenixbb-bounces(a)phenix-online.org [phenixbb-bounces(a)phenix-online.org] on behalf of Heather L Condurso [condurso(a)bc.edu]
Sent: Friday, September 06, 2013 3:54 PM
To: PHENIX user mailing list
Subject: [phenixbb] Autobuild
I'm a bit confused about what is going on with my autobuild run. I solved a structure from anomalous data to 2.7A but many loops are missing. I ran phaser with the current model and native data that extends to 2.4A and got a solution and started autobuild from there. I used mostly the default settings and rebuild_in_place=False. After a pretty short time I realized the run was over and at the bottom of the log I see this:
Cycle 3 R/Rfree=0.28/0.35 Built=488 Placed=467
AutoBuild_build_cycle AutoBuild Run 162 Fri Sep 6 14:33:39 2013
Deciding if we should continue...
Recent R change per cycle: 0.0
Ending these build cycles as the recent change in R per cycle has been
0.0 and the required value of change is -0.005
and the R is somewhat acceptable at 0.276670644792
All done with build cycles in this region
AutoBuild_set_up_build AutoBuild Run 162 Fri Sep 6 14:33:40 2013
All omit/non-omit regions completed
finished AutoBuild Run 162 Fri Sep 6 14:33:40 2013
Finishing AutoBuild Run 162
Facts written to AutoBuild_run_162_/AutoBuild_Facts.dat
AutoBuild Run 162
Summary of model-building for run 162 Fri Sep 6 14:33:40 2013
Files are in the directory: /Users/heather/Desktop/M/AutoBuild_run_162_/
Starting mtz file: /Users/heather/Desktop/M/ImportRawData_run_4_/A-3_xx_PHX.mtz
Sequence file: /Users/heather/Desktop/M/m.pir
Best solution on cycle: 2 R/Rfree=0.26/0.33
Summary of output files for Solution 1 from rebuild cycle 2
--- Model (PDB file) ---
pdb_file: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.pdb
--- Model (CIF file) ---
cif_pdb_file: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.cif
--- Refinement log file ---
log_refine: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.log_refine
--- Model-building log file ---
log: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.log
--- Model-map correlation log file ---
log_eval: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.log_eval
--- 2FoFc and FoFc map coefficients from refinement 2FOFCWT PH2FOFCWT FOFCWT PHFOFCWT ---
refine_map_coeffs: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best_refine_map_coeffs.mtz
--- Data for refinement FP SIGFP PHIM FOMM HLAM HLBM HLCM HLDM FreeR_flag ---
refine_data: /Users/heather/Desktop/M/AutoBuild_run_162_/overall_best_refine_data.mtz
--- Density-modified map coefficients FWT PHWT ---
denmod_map_coeffs: /Users/heather/Desktop//AutoBuild_run_162_/overall_best_denmod_map_coeffs.mtz
You might consider making one very good model now with:
phenix.autobuild \
data=/Users/heather/Desktop/M/AutoBuild_run_162_/overall_best_refine_data.mtz \
model=/Users/heather/Desktop/M/AutoBuild_run_162_/overall_best.pdb \
rebuild_in_place=True \
seq_file=/Users/heather/Desktop/M/m.pir
My full sequence is 325 amino acids, so only about 75% is built in. I am hoping autobuild can help find at least some of these missing residues. Why would the change in R be zero? What is meant by "You might consider making one very good model now with:"? I tried running autobuild with these new files and rebuild_in_place=True and get the same type of message. Is this the best that autobuild can do? I also ran the phaser solution through refinement and can significantly lower the Rs from 0.26/0.33 to 0.22/0.30. I appreciate any advice on how best to proceed.
I am using 1.8.2-1472 if that matters.
Thanks,
Heather
12 years, 5 months
Re: [phenixbb] Rosetta refinement fail on Mac
by Andy Watkins
I'm happy to try to debug this but since I don't ordinarily go through the
Rosetta refinement protocol -- that's the protein side, which Frank is
expert in -- I will probably need to know what Rosetta command line is
generated by the above.
On Wed, Aug 22, 2018 at 5:51 PM Billy Poon <BKPoon(a)lbl.gov> wrote:
> Hi Aaron,
>
> This looks like a High Sierra issue since this excessive amount of memory
> usage is not seen on 10.12. We are tracking this down and will be
> contacting the Rosetta developers. It looks like any version of Rosetta
> will cause this problem (I tested back to Rosetta 3.7).
>
> --
> Billy K. Poon
> Research Scientist, Molecular Biophysics and Integrated Bioimaging
> Lawrence Berkeley National Laboratory
> 1 Cyclotron Road, M/S 33R0345
> Berkeley, CA 94720
> Tel: (510) 486-5709
> Fax: (510) 486-5909
> Web: https://phenix-online.org
>
>
> On Wed, Aug 22, 2018 at 4:53 PM Aaron Oakley <aarono(a)uow.edu.au> wrote:
>
>> System: macOS 10.13.6
>> Phenix version: phenix-1.14rc1-3177
>> Rosetta version: 3.9 (rosetta_src_2018.09.60072_bundle)
>>
>> So I rebuilt rosetta 3.9 as root and can now run "Rosetta refinement".
>> However...Rosetta refinement ran for 2 hours, slowly gobbling memory up to
>> a maximum of 25GB before failing (error below).
>> Also gobbled up disk space, creating a 78 GB file “outtmp.map”.
>>
>> A memory leak issue?
>>
>> Thanks,
>>
>> a++
>>
>>
>>
>> RuntimeError : File 'None' not found.
>> Traceback:
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/cctbx_project/libtbx/runtime_utils.py",
>> line 179, in run
>> return_value = self.target()
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/cctbx_project/libtbx/runtime_utils.py",
>> line 74, in __call__
>> result = self.run()
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/phenix/phenix/command_line/rosetta_refine.py",
>> line 304, in run
>> return run(args=list(self.args))
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/phenix/phenix/command_line/rosetta_refine.py",
>> line 191, in run
>> debug=params.output.debug)
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/phenix/phenix/rosetta/refine.py",
>> line 313, in __init__
>> self.run_jobs()
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/phenix/phenix/rosetta/refine.py",
>> line 388, in run_jobs
>> check_result(result)
>>
>> File
>> "/Applications/phenix-1.14rc1-3177/modules/phenix/phenix/rosetta/refine.py",
>> line 383, in check_result
>> raise RuntimeError("File '%s' not found." % result.file_name)
>>
>
7 years, 5 months
Re: [cctbxbb] after SourceForge awakens
by Graeme.Winter@diamond.ac.uk
Hi Aaron
I guess like our other projects, cctbx developers have a bunch of stuff to commit before migrating, and as Nigel says there’s a phenix release RSN anyhow.
However, I for one am worried that this message says they have found half of the svn repo data but cannot yet say when service will be restored. Given that the frequency of updates seems to be every two days, it would suggest we will know when svn is likely to be back on Friday at the earliest.
We’re part way through moving xia2 to git on github before even being able to merge latest change sets. It’s looking more and more like that was a good choice.
Best wishes Graeme
On 22 Jul 2015, at 21:51, Aaron Brewster <asbrewster(a)lbl.gov> wrote:
FYI:
* SourceForge Allura Subversion (SVN) service – offline, filesystem checks complete, data restoration at 50%. Restoration priority after Git and Hg services. ETA TBD, Future update will provide ETA.
From this update that just went live:
http://sourceforge.net/blog/sourceforge-infrastructure-and-service-restorat…
Looks like we'll be waiting longer for SVN access to be restored. Seems like migrating to github makes more and more sense, indeed...
-Aaron
On Mon, Jul 20, 2015 at 1:20 PM, <Graeme.Winter(a)diamond.ac.uk> wrote:
Hi Nigel
We're thinking along the same lines moving xia2 across first then maybe dials, depending on how people feel. Moving all of it across in the longer term would work well.
Thanks Graeme
On 20 Jul 2015 20:42, Nigel Moriarty <nwmoriarty(a)lbl.gov> wrote:
Folks
I have not read the emails regarding the current SourceForge situation. However, we had been planning to move to github in the near future. SF has been very reliable so I don't think it was too much of pain given the long use we've had from them.
It does, however, come at a time when we are trying to get out a major release of Phenix. To that end, it would be helpful if any commits were bug fixes only to minimise the risk of breaking to the codebase.
We will be actively evaluating the options regarding to github after Phenix 1.10 is safely away...
Cheers
Nigel
---
Nigel W. Moriarty
Building 33R0349, Physical Biosciences Division
Lawrence Berkeley National Laboratory
Berkeley, CA 94720-8235
Phone : 510-486-5709 Email : NWMoriarty(a)LBL.gov
Fax : 510-486-5909 Web : CCI.LBL.gov
--
_______________________________________________
cctbxbb mailing list
cctbxbb(a)phenix-online.org
http://phenix-online.org/mailman/listinfo/cctbxbb
Re: [phenixbb] question on bad geometry
by Pavel Afonine
Hi,
I agree with Tom. Here is an example.
The structure 3ifk deposited in the PDB (resolution 2.03 A) has the following
statistics:
- reported Rwork/Rfree = 0.237/0.293
- recomputed Rwork/Rfree (using phenix.model_vs_data) = 0.2306/0.2835
- Molprobity scores (computed using phenix.model_vs_data):
Ramachandran plot, number of:
outliers : 4 (2.35 %)
allowed : 1 (0.59 %)
favored : 165 (97.06 %)
Rotamer outliers : 28 (18.67 %)
Cbeta deviations >0.25A : 10
All-atom clashscore : 61.61 (steric overlaps >0.4A per 1000 atoms)
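The percentages in the Ramachandran summary above follow directly from the per-category counts. A quick sanity check (this is an illustrative script, not phenix output; it assumes the three categories partition all 170 scored residues, as the counts 4 + 1 + 165 suggest):

```python
# Recompute the Ramachandran percentages quoted above from the raw counts.
counts = {"outliers": 4, "allowed": 1, "favored": 165}
total = sum(counts.values())  # 170 residues in the plot

for name, n in counts.items():
    # Each percentage is simply count / total, to two decimal places.
    print(f"{name:9s}: {n} ({100.0 * n / total:.2f} %)")
```

Running this reproduces the 2.35 % / 0.59 % / 97.06 % figures shown in the statistics block.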
After running this model through PHENIX AutoBuild and then refining it
with phenix.refine using the fix_rotamers option and with H atoms added, I
get the following numbers:
- Rwork/Rfree = 0.2097/0.2402
- Molprobity scores:
Ramachandran plot, number of:
outliers : 2 (1.18 %)
allowed : 2 (1.18 %)
favored : 166 (97.65 %)
Rotamer outliers : 5 (3.57 %)
Cbeta deviations >0.25A : 1
All-atom clashscore : 37.14 (steric overlaps >0.4A per 1000 atoms)
As you can see, the improvement is quite significant, and no manual work
at all was needed to achieve it!
Good luck!
Pavel.
P.S.: All files are here:
http://cci.lbl.gov/~afonine/example_3ifk/
3ifk.mtz - original data file
pdb3ifk.ent - model from PDB
pdb3ifk.mvd - result of the command "phenix.model_vs_data 3ifk.mtz pdb3ifk.ent > pdb3ifk.mvd"
autobuild.mvd - result of the command "phenix.model_vs_data autobuild.pdb 3ifk.mtz > autobuild.mvd"
autobuild.pdb - model after AutoBuild and phenix.refine
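When comparing several such before/after runs, the R factors can be pulled out of log text programmatically. A minimal sketch, assuming the plain "Rwork/Rfree = x/y" line format shown in the message above (the actual .mvd file layout may differ; the regex and function name here are illustrative, not part of phenix):

```python
import re

def parse_r_factors(text):
    """Return (Rwork, Rfree) from text containing a line like
    'Rwork/Rfree = 0.2306/0.2835' -- an assumed, simplified format."""
    m = re.search(r"Rwork/Rfree\s*=\s*([\d.]+)\s*/\s*([\d.]+)", text)
    if m is None:
        raise ValueError("no Rwork/Rfree line found")
    return float(m.group(1)), float(m.group(2))

before = parse_r_factors("recomputed Rwork/Rfree = 0.2306/0.2835")
after = parse_r_factors("Rwork/Rfree = 0.2097/0.2402")

# Both Rfree and the Rfree - Rwork gap shrink after rebuilding, which
# points to real improvement rather than overfitting.
print(f"Rfree drop : {before[1] - after[1]:.4f}")
print(f"gap before : {before[1] - before[0]:.4f}, after: {after[1] - after[0]:.4f}")
```

On the numbers above, Rfree drops by about 0.043 while the Rfree-Rwork gap narrows from roughly 0.053 to 0.031.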
On 8/31/10 7:08 PM, Thomas C. Terwilliger wrote:
> Hi Fengyun,
>
> That does look like a lot of outliers to me. You could try improving the
> model by "rebuilding in place" with autobuild with the keyword
> "rebuild_in_place=True" which will try to rebuild your model without
> changing the sequence alignment. This procedure takes a while, so it is
> useful to run it on a multiprocessor machine with nproc=5.
>
> You should also have a careful look at your model, as some manual
> rebuilding might improve it a lot.
>
> All the best,
> Tom T
>
>>> Hi everyone,
>>>
>>> I have a structure at 2.3 A refined in phenix to R/Rfree=0.23/0.28.
>>> However, the geometry is not that good; the following is the output
>>> from phenix.model_vs_data:
>>>
>>> Stereochemistry statistics (mean, max, count):
>>> bonds : 0.0080 0.0547 3244
>>> angles : 1.1669 12.1085 4374
>>> dihedrals : 15.7241 85.6280 1202
>>> chirality : 0.0706 0.2831 536
>>> planarity : 0.0036 0.0246 573
>>> non-bonded (min) : 2.1217
>>> Ramachandran plot, number of:
>>> outliers : 17 (4.02 %)
>>> allowed : 37 (8.75 %)
>>> favored : 369 (87.23 %)
>>> Rotamer outliers : 30 (8.62 %) goal: < 1%
>>> Cbeta deviations>0.25A : 0
>>> All-atom clashscore : 40.51 (steric overlaps>0.4A per 1000
>>> atoms)
>>>
>>> I tried to use fix_rotamer=True, but it doesn't help at all. As to the
>>> rotamer outliers, the value is far higher than the ideal one. Does
>>> that mean I need to manually adjust the model? Or will autobuild help
>>> improve the geometry?
>>>
>>> Thanks!
>>> Fengyun
>>>
>>> _______________________________________________
>>> phenixbb mailing list
>>> phenixbb(a)phenix-online.org
>>> http://phenix-online.org/mailman/listinfo/phenixbb
>>>
> _______________________________________________
> phenixbb mailing list
> phenixbb(a)phenix-online.org
> http://phenix-online.org/mailman/listinfo/phenixbb
[phenixbb] postdoctoral position at the Institute for Protein Innovation in Boston, USA
by Christopher Bahl
The Bahl Lab <http://function-structure.org/> at the Institute for Protein
Innovation (IPI) in Boston is recruiting Postdoctoral Fellows. We use
protein design to accelerate drug discovery and to engineer new lead
therapeutic molecules. In addition to traditional academic collaborations,
we work closely with pharmaceutical and biotech companies, as well as
venture capital firms.
We are looking for translationally-minded scientists who want their work to
have a direct impact on the world. We will provide training in
computational protein design using Rosetta, including de novo protein
design and the design of protein function and biophysical properties.
Postdoctoral Fellows at IPI leverage our high-throughput automated
laboratory infrastructure.
We are especially excited to expand our work in the following research
areas:
- prediction and design of protein developability for therapeutic applications
- engineering of de novo miniprotein/peptide lead therapeutics for oncology, infectious disease, hypertension and diabetes, etc.
- antibody design, including de novo paratope design
- enzyme design
Strong candidates will have demonstrated expertise in one or more of the
following:
- protein biochemistry: protein production and biophysical characterization, enzymology
- structural biology: cryo-EM, X-ray crystallography, NMR
- immuno-oncology: cell culture models, fluorescence activated cell sorting
- traditional protein engineering: directed evolution, library construction, yeast and/or phage display
- computational biology: machine learning, data science, molecular dynamics and/or modeling
If you are excited to join a team of protein designers working to tackle
audacious projects aimed directly at improving human health, please send
your CV to: chris.bahl(a)proteininnovation.org
About IPI
The Institute for Protein Innovation is a non-profit organization that is
advancing protein science to accelerate research and improve human health. The
Institute is located at Harvard Medical School. IPI uniquely combines the
freedom of academic exploration with the high-throughput scaling of
industry to take on transformative protein-based initiatives that no other
laboratory or organization can or will pursue. IPI is deploying
state-of-the-art technologies to build a repository of powerful protein
tools and reagents and share them through licensing and collaboration.
These will enable new biomedical and therapeutic discovery for researchers
in academia and industry alike. This constellation of vanguard technology
and expertise positions IPI to train the next generation of entrepreneurs
and protein scientists to lead the discovery of new medicines.
The IPI is an equal opportunity employer and prohibits discrimination based
on ethnicity, religion, sexual orientation, gender identity/expression,
national origin/ancestry, age, disability, socioeconomic status, marital or
veteran status, etc. Candidates from underrepresented backgrounds are
especially encouraged to apply.