Macromolecular Crystallography School announcement: 2013 in Uruguay
Dear Colleagues,

We are pleased to announce the second Macromolecular Crystallography School 2013 at the Institut Pasteur de Montevideo (Uruguay). All details can be found at http://www.pasteur.edu.uy/mx2013

Title: "Macromolecular Crystallography School 2013: From data processing to structure refinement and beyond"
Dates: April 9th-17th, 2013
Site: Institut Pasteur de Montevideo, Montevideo, Uruguay

Workshop content: The school is conceived to provide theoretical background and hands-on experience with computational tools for exploiting X-ray diffraction data. Through lectures, tutorials and hands-on troubleshooting, students will be trained in state-of-the-art macromolecular crystallography, with particular emphasis on data processing, phasing/structure determination and model refinement/validation. The workshop will feature authors of, and experts in, many modern crystallographic software packages. It continues the series on macromolecular crystallography started in 2010 (http://www.pasteur.edu.uy/mxcourse) and is co-organized by the Center for Structural Biology of the Mercosur, CeBEM (http://www.cebem.org.ar), jointly with the Collaborative Computational Project Number 4 (CCP4, UK; http://www.ccp4.ac.uk). Support from the Institut Pasteur International Network, the International Union of Crystallography and the Institut Pasteur de Montevideo is gratefully acknowledged.

Applicants: Graduate students, postdoctoral researchers and young scientists are encouraged to apply. Only 20 applicants will be selected for participation. Participants are strongly encouraged to bring their own problem data sets for use during the hands-on sessions. There is no registration fee. Support for accommodation, per diem and local transportation will be provided to all participants from abroad. Support to cover travel expenses will be considered on a case-by-case basis; specific requests should be well grounded, as we will only be able to select a limited number.

Application: The application deadline is February 10, 2013. The application form, program, contact information and other details can be found at http://www.pasteur.edu.uy/mx2013. Please address further inquiries to [email protected]

Ronan Keegan and Alejandro Buschiazzo

--
Alejandro Buschiazzo, PhD
Research Scientist
Unit of Protein Crystallography
Institut Pasteur de Montevideo
Mataojo 2020, Montevideo 11400, URUGUAY
Phone: +598 25220910 int. 120 | Fax: +598 25224185
http://www.pasteur.edu.uy/pxf
Hi all,

Many have argued that we should include weak data in refinement --- e.g., reflections much weaker than I/sigI=2 --- in order to take advantage of the useful information found in large numbers of uncertain data points (as argued in the recent Karplus and Diederichs Science paper on CC1/2). This makes sense to me as long as the uncertainty attached to each HKL is properly accounted for. However, I was surprised to hear rumors that with phenix "the data are not properly weighted in refinement by incorporating observed sigmas" and such. I was wondering if the phenix developers could comment on the sanity of including weak data in phenix refinement, and on how phenix handles it.

Douglas

--
Douglas L. Theobald
Assistant Professor
Department of Biochemistry
Brandeis University
Waltham, MA 02454-9110
[email protected]
http://theobald.brandeis.edu/
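For readers who have not seen the Karplus and Diederichs (Science, 2012) criterion mentioned above, the sketch below shows how the CC1/2 statistic is computed from two randomly split half-datasets. It is a minimal illustration in plain numpy with invented array names (half1, half2), not code taken from any data-processing package:

import numpy as np

def cc_half(half1, half2):
    """Pearson correlation between the merged intensities of two random
    half-datasets, matched on the same unique reflections (CC1/2)."""
    half1 = np.asarray(half1, dtype=float)
    half2 = np.asarray(half2, dtype=float)
    return np.corrcoef(half1, half2)[0, 1]

# Toy example: CC1/2 can remain clearly positive even when <I/sigI> is low.
rng = np.random.default_rng(0)
true_i = rng.gamma(shape=2.0, scale=50.0, size=5000)       # "true" intensities
sigma = 80.0                                                # heavy noise -> weak data
half1 = true_i + rng.normal(0.0, sigma, size=true_i.size)
half2 = true_i + rng.normal(0.0, sigma, size=true_i.size)
print("mean I/sigI ~ %.2f, CC1/2 = %.3f" % (true_i.mean() / sigma, cc_half(half1, half2)))

The point of the toy numbers is only that correlation between half-datasets keeps measuring signal in a regime where a per-reflection I/sigI threshold would discard it.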
Hi Douglas,

That's the point: there are many hand-waving arguments supported by no data or by weak, inconclusive data, and little rock-solid evidence. I'm not aware of a paper that *clearly* demonstrates what kind of improvement using weak data in refinement brings. I mean not an R-factor improvement by a fraction of a percent or "cosmetics" like that, but a case where it is demonstrated that using it allowed more model to be built, or two maps shown side by side, obtained without and with the weak data, where the latter is significantly more useful (not just more pleasant-looking after tweaking contouring thresholds to favour the case).

Regarding refinement itself: consider rigid-body refinement. One might think that with today's technology you could just dump all the data into the refinement machinery and the maximum-likelihood target would do the trick (weight the high-resolution data properly). No. For rigid-body refinement to actually work you still need to cut the high-resolution end. See the discussion in: Afonine, P.V., Grosse-Kunstleve, R.W., Urzhumtsev, A. & Adams, P.D. (2009). "Automatic multiple-zone rigid-body refinement with a large convergence radius." J. Appl. Cryst. 42, 607-615.

The same logic might apply to weak data. Its amount and weakness may be just sufficient to make the refinement target profile complex enough to stall refinement or impede its convergence. On the other hand, it may be just good enough to make refinement behave better and yield a better model. Using it may harm refinement at the beginning but help towards the end (remember the arguments behind the STIR option in SHELX?!), so the question may not be just "whether or not?", but also "when?".

The maps that are mostly used (2mFo-DFc and mFo-DFc) are calculated without using experimental sigmas, unless you modify them with techniques such as maximum-entropy methods, where sigmas may (but do not have to) be used. So even if one weights weak data smartly in refinement and uses it at the right moment, one still needs to think about how to use it in map calculation so that it brings signal rather than noise into the maps.

All in all, yes, *conceptually* it is good to use weak data in refinement and map calculation, but two questions - 1) how and when? and 2) is it going to change anything significantly? - are yet to be answered. Answering them is on the to-do list.

Finally, FYI: the refinement targets that phenix.refine uses are described here (they are coded exactly as discussed in these papers):

MLHL: Pannu, N.S., Murshudov, G.N., Dodson, E.J. & Read, R.J. (1998). "Incorporation of Prior Phase Information Strengthens Maximum-Likelihood Structure Refinement." Acta Cryst. D54, 1285-1294.

ML: Lunin, V.Y., Afonine, P.V. & Urzhumtsev, A.G. (2002). "Likelihood-based refinement. I. Irremovable model errors." Acta Cryst. A58, 270-282.

Flavor of LS and accounting for scales in the ML and MLHL functions: Afonine, P.V., Grosse-Kunstleve, R.W. & Adams, P.D. (2005). "A robust bulk-solvent correction and anisotropic scaling procedure." Acta Cryst. D61, 850-855.

They have worked fine in phenix.refine since 2004.

All the best,
Pavel
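For reference, the map coefficients Pavel mentions are, in the usual sigma-A-weighted notation (m is the figure of merit and D the Luzzati-type factor, both estimated from model-versus-data agreement rather than from measurement errors), as written out below; this is the standard textbook form, reproduced here only to make the point that the experimental sigma of each reflection does not appear in it:

F_{2mFo-DFc}(\mathbf{h}) = \bigl(2\,m_{\mathbf{h}}\,|F_{\mathrm{obs},\mathbf{h}}| - D_{\mathbf{h}}\,|F_{\mathrm{calc},\mathbf{h}}|\bigr)\, e^{i\varphi_{\mathrm{calc},\mathbf{h}}}
\qquad
F_{mFo-DFc}(\mathbf{h}) = \bigl(m_{\mathbf{h}}\,|F_{\mathrm{obs},\mathbf{h}}| - D_{\mathbf{h}}\,|F_{\mathrm{calc},\mathbf{h}}|\bigr)\, e^{i\varphi_{\mathrm{calc},\mathbf{h}}}

No sigma_obs enters these coefficients, which is why weighting the data for refinement and weighting it for map calculation are separate questions.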
Hi Pavel,
Thanks for the clarification, comments, and esp. the refs. I was aware of your Acta Cryst 2002 paper. Am I correct in thinking that in phenix you are basically maximizing eqn 4 (a type of Rice distribution)? I had always assumed that experimental sigmas were somehow lumped into the alpha and beta parameters (esp. given your discussion in section 2.3). In principle they could be, right?
In any case, I wonder if your example of rigid-body refinement actually argues for incorporating experimental sigmas --- since high-res data is on average the most uncertain, incorporating sigmas would downweight high-res data most, and cutting high res data is really just a crude way of downweighting it.
Anyway, thanks for your work on phenix, it's my current favorite refinement software.
Cheers,
Douglas
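A schematic way to put Douglas's last point in symbols: a hard resolution cutoff gives each reflection h a binary weight, whereas a sigma-based scheme would downweight it continuously. The comparison below is purely illustrative (the symbols k and beta_h are placeholders, not the weighting used by any particular program):

w_h^{\mathrm{cutoff}} =
\begin{cases}
1, & d_h \ge d_{\min} \\
0, & d_h < d_{\min}
\end{cases}
\qquad \text{versus} \qquad
w_h^{\sigma} \propto \frac{1}{k\,\sigma_h^{2} + \beta_h}

where beta_h stands for the model-error contribution; as sigma_h grows (typically towards high resolution) the reflection is smoothly downweighted rather than discarded outright.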
Hi Douglas,
Am I correct in thinking that in phenix you are basically maximizing eqn 4 (a type of Rice distribution)?
Yes, correct: formula (4) is what is called the ML target in phenix.refine.
I had always assumed that experimental sigmas were somehow lumped into the alpha and beta parameters (esp. given your discussion in section 2.3). In principle they could be, right?
Yes, they could be.
In any case, I wonder if your example of rigid-body refinement actually argues for incorporating experimental sigmas --- since high-res data is on average the most uncertain, incorporating sigmas would downweight high-res data most, and cutting high res data is really just a crude way of downweighting it.
Right. Making use of experimental sigmas, one way or another, is on the to-do list. All the best, Pavel
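For concreteness, the likelihood behind the ML target discussed above has, for an acentric reflection, the familiar Rice form; the notation here follows common usage and may differ cosmetically from formula (4) of Lunin et al. (2002), and the symmetry factor epsilon is omitted for clarity. The last line shows the kind of modification under discussion - folding a per-reflection experimental variance into beta - which is a possibility in principle, not what phenix.refine currently does:

P(F_o; F_c) = \frac{2 F_o}{\beta}\,
\exp\!\left(-\frac{F_o^{2} + \alpha^{2} F_c^{2}}{\beta}\right)
I_0\!\left(\frac{2\,\alpha\, F_o F_c}{\beta}\right),
\qquad
T_{\mathrm{ML}} = -\sum_{hkl} \ln P(F_o; F_c)

\text{hypothetical per-reflection variant:}\quad
\beta_h = \beta_{\mathrm{shell}} + k\,\sigma_{\mathrm{obs},h}^{2}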
On Fri, 2012-12-07 at 13:57 -0800, Pavel Afonine wrote:
I had always assumed that experimental sigmas were somehow lumped into the alpha and beta parameters (esp. given your discussion in section 2.3). In principle they could be, right?
Yes, they could be.
I would like clarification on this. I was under the impression (formed in part by reading the papers) that it is not that experimental errors are "lumped" into beta; rather, it is assumed that they are much smaller than the "model error" and are therefore simply ignored as negligible. Alpha/beta are calculated in individual resolution shells. How can you "lump" an experimental variance that varies by two orders of magnitude between individual reflections into a single parameter?

I also wonder whether introducing experimental uncertainty into Lunin's ML target is even possible without fundamentally altering it. As far as I can see, it would remove the ability to obtain analytical expressions for alpha/beta (also known as D and sigma_wc**2), which is the hallmark of this version of the maximum-likelihood target.

Cheers,

Ed.
Hi Ed,
I had always assumed that experimental sigmas were somehow lumped into the alpha and beta parameters (esp. given your discussion in section 2.3). In principle they could be, right?
Yes, they could be. I would like clarification on this. (...)
Alpha/beta are calculated in individual resolution shells. How can you "lump" experimental variance that varies by two orders of magnitude for individual reflections into a single parameter?
I'm sorry for being unclear; email is often a poor way to express what you actually mean, unless you have heaps of time to write it lawyer-style, and there is always room for the reader to fill in what was left unsaid. Sometimes it works, sometimes it doesn't. Anyway, extending my phrase above to better reflect what I actually meant: "Yes, they could be, to some extent (as an approximation). Certainly, the proper way is to have the sigmas in the formula explicitly, as I stated in a previous email in this thread. Whether that would make any practically useful difference is debatable and yet to be answered."
I also wonder if introducing experimental uncertainty into Lunin's ML target is even possible without fundamentally altering it.
It is possible; it depends on how fundamentally you want to alter it. All the best, Pavel
Pavel: You are answering a different question, one I did not ask. So let me try again.

If you compare equation (4) in Lunin et al. (2002) (or equation (14) in Lunin & Skovoroda (1995)) with equation (14) in Murshudov et al. (1997) (or equation (1) in Cowtan (2005)), you will see that beta replaces the combination of the model error and the experimental error. The assumption that beta is identical for every reflection in a given resolution shell is absolutely necessary for the minimization with respect to alpha/beta outlined in Appendix A of Lunin & Skovoroda. The derivation of the analytical expression breaks down if you introduce experimental errors for individual reflections.

So as far as I am concerned, it appears that experimental errors are simply neglected. There must be a justification for this. My understanding is that it rests on the assumption that model errors overwhelm experimental errors, which is supposedly confirmed by showing that sqrt(beta) >> sigf (a different story with its own problems).

So, here is my point. The ML formalism as implemented in phenix does not "lump" experimental errors into some combined value. It simply ignores them, because it assumes they are much smaller than the model errors. It is an important distinction.

Cheers, Ed.
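In symbols, and glossing over the exact factors used in each paper, Ed's distinction is roughly the following: the Lunin-style target shares one beta across all reflections of a resolution shell, which is exactly what makes the closed-form estimation of alpha and beta in Appendix A of Lunin & Skovoroda possible, while a Murshudov/Cowtan-style variance is per reflection and already contains the measurement error. The notation below is schematic, not copied from either paper:

\text{shell-wise:}\quad \beta_{\mathrm{shell}} \approx \sigma^{2}_{\mathrm{model}}
\quad (\text{one value per shell; } \sigma_{\mathrm{obs},h} \text{ neglected, justified by } \sqrt{\beta} \gg \sigma_{F})
\qquad
\text{per-reflection:}\quad \Sigma_h \approx \sigma^{2}_{\mathrm{model},h} + \sigma^{2}_{\mathrm{obs},h}

A per-reflection Sigma_h of the second kind cannot be estimated by the same shell-wise closed-form argument, which is Ed's point about the analytical expressions breaking down.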
Pavel,
I mean not an R-factor improvement by a fraction of a percent or "cosmetics" like that, but a case where it is demonstrated that using it allowed more model to be built, or two maps shown side by side, obtained without and with the weak data, where the latter is significantly more useful (not just more pleasant-looking after tweaking contouring thresholds to favour the case).
If the standard is that improvements in crystallographic refinement processes/algorithms are only justified when "more model can be built", then many algorithmic improvements should be abolished. TLS refinement is a widely accepted strategy, although it has never (to my knowledge) been demonstrated that it lets you magically see something new in the density that was not there before. You are right that a lot of the things we do to optimize crystallographic models do not change the electron density in a dramatic way. You are absolutely wrong in your assertion that improvements in the accuracy of model parameters are pointless.

Cheers,

Ed.
Ed,
The same remark about the efficiency of email communication applies here (see my previous email). The point was that one always needs to draw a line between pedantry and practicality. Otherwise we would all be going back from FFT to direct summation for Fcalc and gradient calculations, since that would surely improve R-factors by 0.5-1% in some cases by improving the accuracy of the refined model parameters.

All the best,
Pavel
Pavel: On Mon, 2012-12-10 at 13:17 -0800, Pavel Afonine wrote:
The point was that one always needs to draw a line between pedantry and practicality. Otherwise we would all be going back from FFT to direct summation for Fcalc and gradient calculations, since that would surely improve R-factors by 0.5-1% in some cases by improving the accuracy of the refined model parameters.
Let's not change the subject to impractical examples. The optimal resolution cutoff is a very practical question that has nothing to do with FFT versus direct summation. In practice, the resolution cutoff has always been selected according to the particular set of prejudices an individual crystallographer subscribes to. Karplus & Diederichs came up with a very reasonable approach for selecting such a cutoff objectively. It is simple and very practical, and it leads to (admittedly minor) model improvement. What's not to like?

My objection is to your moving the bar to the level of "significant changes in map quality". That is unreasonable (very few things achieve that, and the criterion itself is too vague). My deeply seated mistrust of R-values notwithstanding, reasonable changes that can be shown to result in better agreement between experimental data and model predictions are to be welcomed.

Ed.
On Thu, Dec 6, 2012 at 2:35 PM, Douglas Theobald wrote:
Many have argued that we should include weak data in refinement --- e.g., reflections much weaker than I/sigI=2 --- in order to take advantage of the useful information found in large numbers of uncertain data points (as argued in the recent Karplus and Diederichs Science paper on CC1/2). This makes sense to me as long as the uncertainty attached to each HKL is properly accounted for. However, I was surprised to hear rumors that with phenix "the data are not properly weighted in refinement by incorporating observed sigmas" and such. I was wondering if the phenix developers could comment on the sanity of including weak data in phenix refinement, and on how phenix handles it.
As a supplement to what Pavel said: yes, phenix.refine does not use experimental sigmas in the refinement target. I am very hazy on the details, but it is not at all clear that this actually matters in refinement when maximum likelihood weighting is used; if there is a reference that argues otherwise I would be interested in seeing it. (I don't know what the least-squares target does - these are problematic even with experimental sigmas.)

In practice I do not think it will be a problem to use data out to whatever resolution you feel appropriate - and it may indeed help - but of course your overall R-factors will be slightly higher, and the refinement will take much longer. Having experimented with this recently, my instinct would be to refine at a "traditional" resolution cutoff for as long as possible, and add the extra, weak data near the end. If I were working at a resolution that was borderline for automated building, I might be more aggressive.

-Nat
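One concrete way to act on Nat's suggestion is to pick a conservative cutoff first and hold the weaker high-resolution shells back until the final rounds. The sketch below is a plain-numpy illustration (the array names d_spacings, i_obs, sig_i and the I/sigI threshold of 2.0 are assumptions for the example; this is not a phenix API):

import numpy as np

def conservative_dmin(d_spacings, i_obs, sig_i, n_shells=20, min_i_over_sig=2.0):
    """Return a 'traditional' high-resolution cutoff: the outer edge of the
    last resolution shell whose mean I/sigI is still above the threshold.
    Shells hold roughly equal numbers of reflections."""
    d = np.asarray(d_spacings, dtype=float)
    ratio = np.asarray(i_obs, dtype=float) / np.asarray(sig_i, dtype=float)
    order = np.argsort(d)[::-1]                  # low resolution (large d) first
    d, ratio = d[order], ratio[order]
    d_cut = d[0]                                 # fallback: lowest-resolution limit
    for shell in np.array_split(np.arange(d.size), n_shells):
        if ratio[shell].mean() < min_i_over_sig:
            break
        d_cut = d[shell[-1]]                     # outer (high-resolution) edge of this shell
    return d_cut

# Usage sketch: refine the early rounds against d >= d_cut only,
# then include everything (the weak, high-resolution data) at the end.
# d_cut = conservative_dmin(d_spacings, i_obs, sig_i)
# strong = np.asarray(d_spacings) >= d_cut       # boolean mask for the early rounds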
Hi Nat,
On Dec 6, 2012, at 9:36 PM, Nathaniel Echols wrote:
As a supplement to what Pavel said: yes, phenix.refine does not use experimental sigmas in the refinement target. I am very hazy on the details, but it is not at all clear that this actually matters in refinement when maximum likelihood weighting is used;
Thanks for the clarification. I guess this was part of my question --- even if phenix does not explicitly use exptl sigmas, perhaps the ML weighting is doing more-or-less the same thing by fitting the weights?
if there is a reference that argues otherwise I would be interested in seeing it. (I don't know what the least-squares target does - these are problematic even with experimental sigmas.)
In practice I do not think it will be a problem to use data out to whatever resolution you feel appropriate - and it may indeed help - but of course your overall R-factors will be slightly higher, and the refinement will take much longer.
Maybe I've misunderstood, but Karplus and Diederichs used phenix in their paper arguing for including weak data. Whether the benefit they describe is considered cosmetic or non-trivial, it at least appears that including very weak data doesn't hurt. At least with phenix.
Having experimented with this recently, my instinct would be to refine at a "traditional" resolution cutoff for as long as possible, and add the extra, weak data near the end. If I were working at a resolution that was borderline for automated building, I might be more aggressive.
On Fri, Dec 7, 2012 at 1:54 PM, Douglas Theobald wrote:
Maybe I've misunderstood, but Karplus and Diederichs used phenix in their paper arguing for including weak data. Whether the benefit they describe is considered cosmetic or non-trivial, it at least appears that including very weak data doesn't hurt. At least with phenix.
Yes, I forgot to mention this. I suspect that most other modern refinement programs, which all use maximum likelihood targets (but with various differences), would behave similarly. -Nat
participants (5)
- Alejandro Buschiazzo
- Douglas Theobald
- Ed Pozharski
- Nathaniel Echols
- Pavel Afonine