scaling in Fo-Fo maps
Hello everyone,

I have a question about how the scaling is done when generating Fo-Fo maps in Phenix. I want to compare three datasets (of the same protein, of course): A, B, C. I generated three maps: A-B, A-C, B-C, and I can see the differences. But I also want to compare the peaks across the three maps, so I want to make sure the maps are generated using the same scaling.

Thanks,
Xun

P.S. Any other ideas for comparing three datasets?
Dear Xun,
this kind of question (if I understood you correctly) has been addressed recently in our article in Acta Cryst. (2014), D70, 2593-2606, with the tools available through Phenix (otherwise, I have a stand-alone Python program, in a testing stage; let me know if you would like it).
Best regards,
Sacha Urzhumtsev
Hi Xun,
> I have a question on how the scaling is done when generating Fo-Fo maps in Phenix.
I take it you are asking about scaling in the Isomorphous Difference Map tool (Phenix GUI -> Maps) or its command-line equivalent, phenix.fobs_minus_fobs_map.

When implementing the Fo-Fo map calculation in Phenix, I could not find literature that described the protocol in enough detail to write code from it (which, unfortunately, is very typical for crystallographic protocols and algorithms!). So I had to "re-invent the wheel" myself from scratch. Below is the protocol I came up with, looking at the code in /cctbx_project/mmtbx/maps/fobs_minus_fobs_map.py:

You input two sets of Fobs, Fo1 and Fo2, and a PDB file with a model that serves as the source of phases. It is your responsibility to prepare this model appropriately.

Then common sets are obtained: Fo1, Fo2 = Fo1.common_sets(Fo2). This is because the Fo1 and Fo2 arrays may have different, not necessarily matching, Miller indices hkl.

The following is then done: Fmodel1 = ktotal1 * (Fcalc + Fbulk1) and Fmodel2 = ktotal2 * (Fcalc + Fbulk2) are obtained by fitting to Fo1 and Fo2, respectively.

Then normalized Fobs are obtained (on an "absolute" scale): Fo1n = Fo1/ktotal1 and Fo2n = Fo2/ktotal2.

Then they are scaled to each other using one scalar scale factor: k = SUM(Fo1n*Fo2n) / SUM(Fo2n**2). Optionally, they can instead be scaled to each other using a "multiscale" protocol, very similar to what used to be in CNS, if you know what I mean.

Finally, the synthesis is computed as {Fo1n - k*Fo2n, phases from one of the Fmodels above}. I can't see in the code which of the two Fmodels is chosen; I think it's arbitrary.
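To make the scaling step concrete, here is a minimal numpy sketch of the scalar scale factor and the resulting difference coefficients (a toy illustration only, not the actual cctbx code; fo1n and fo2n are made-up arrays standing in for the normalized amplitudes above, already reduced to a common hkl set):

    import numpy as np

    def scalar_scale(fo1n, fo2n):
        # Least-squares k minimizing SUM((Fo1n - k*Fo2n)**2).
        return np.sum(fo1n * fo2n) / np.sum(fo2n ** 2)

    # Toy amplitudes on an "absolute" scale (ktotal1, ktotal2 divided out).
    fo1n = np.array([120.0, 85.0, 43.0, 61.0])
    fo2n = np.array([118.0, 90.0, 40.0, 65.0])

    k = scalar_scale(fo1n, fo2n)
    diff_amplitudes = fo1n - k * fo2n  # amplitudes of the Fo1-Fo2 synthesis
    print(k, diff_amplitudes)

These difference amplitudes, combined with phases from one of the Fmodels, give the Fourier coefficients of the map.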
> But I also want to compare the peaks in the three maps, so I want to make sure the maps are generated using the same scaling.
See the other email posted to phenixbb by Sacha Urzhumtsev; this is the way to do it! It is implemented in the Phenix GUI: Maps -> Map Comparison. More intuitively, the procedure is described in Section "2.9. Note about the comparison of maps" here:

FEM: feature-enhanced map. P.V. Afonine, N.W. Moriarty, M. Mustyakimov, O.V. Sobolev, T.C. Terwilliger, D. Turk, A. Urzhumtsev, and P.D. Adams. Acta Cryst. D71, 646-666 (2015).

Hope this answers your questions!

Pavel
Hi Pavel,
In line with Xun's question, could you please describe a way to scale up cryo-EM intensities to the level of X-ray crystallography intensities? I read a paper in which the authors did this scaling-up in order to use Phenix reciprocal-space refinement for cryo-EM data, but the paper's method was very difficult to understand. Even with the availability of phenix.real_space_refine, in a lot of situations reciprocal-space refinement of cryo-EM data is still necessary.
Smith
Hi Smith,
> In line with Xun's question, could you please describe a way to scale up cryo-EM intensities to the level of X-ray crystallography intensities?
Map comparison does not care where the maps come from. You give it two maps on the same gridding (same gridding is essential, obviously), and it tells you pairs of contouring levels that will show equivalent representations of your maps.
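As a toy illustration of the idea (not the actual Phenix code; see Section 2.9 of the FEM paper cited above for the real procedure), one way to pair contour levels is to require that the two contours enclose the same fraction of grid points:

    import numpy as np

    def matched_level(map_a, map_b, level_a):
        # Fraction of grid points at or above level_a in map_a...
        frac = (map_a >= level_a).mean()
        # ...and the level of map_b enclosing the same fraction.
        return np.quantile(map_b, 1.0 - frac)

    # Two toy maps on the same gridding, differing by an overall scale.
    rng = np.random.default_rng(0)
    map_a = rng.normal(size=(48, 48, 48))
    map_b = 3.0 * map_a
    print(matched_level(map_a, map_b, 1.5))  # ~4.5, i.e. 3 x 1.5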
> I read a paper in which the authors did this scaling-up in order to use Phenix reciprocal-space refinement for cryo-EM data,
Normally, you should not use reciprocal-space refinement when working with cryo-EM maps, as your data, the cryo-EM map, is in real space. And if you use real-space refinement, it does not matter what scale the input map is on, as the map is rescaled internally anyway.
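For example, one common map normalization is "sigma scaling" (zero mean, unit variance); whether the internal rescaling uses exactly this convention is beside the point here, which is only that any overall input scale drops out:

    import numpy as np

    def sigma_scale(m):
        # Zero mean, unit standard deviation.
        return (m - m.mean()) / m.std()

    m = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=(32, 32, 32))
    # An overall scale factor on the input map has no effect.
    assert np.allclose(sigma_scale(m), sigma_scale(10.0 * m))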
> Even with the availability of phenix.real_space_refine, in a lot of situations reciprocal-space refinement of cryo-EM data is still necessary.
It would be interesting to hear about those cases!

Pavel
participants (4)
- Alexandre OURJOUMTSEV
- Pavel Afonine
- Smith Liu
- Xun Lu