on the criteria for outliers, phenix.molprobity and wwPDB validation
Hi all,

I have noticed that the outliers spotted by Phenix with its validation tool (which has the nice feature of taking you straight to the "suspicious" residue in a Coot session for visual inspection) differ somewhat from those reported by wwPDB validation; generally the latter flags a few more residues. For one current structure, which is at low resolution, the difference in the number of flagged residues is larger. From the wwPDB validation report I see that MolProbity is also used to spot outliers for many of the analyses, currently version 4.02b-467, but I have not yet been able to determine the MolProbity version used by Phenix. There is also, of course, the question of which "library" each one uses to spot the outliers. If anyone has listed what is considered in each case and their differences, or has a link to such a list, I would welcome it.

What the best criteria for spotting outliers would be is of course a good discussion in itself. But a more superficial question: is there a way to make the phenix.molprobity output resemble that of wwPDB validation, at least for the checks performed by MolProbity?

Jorge
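[Editor's note: as a starting point for the version question, Phenix bundles its own copy of the MolProbity code, and the installed Phenix version (from which the bundled validation code follows) can be printed from the command line. A minimal invocation, assuming a standard Phenix installation; the output file name reflects the default output prefix:

    phenix.version                        # prints the installed Phenix version and release tag
    phenix.molprobity model.pdb data.mtz  # writes the validation summary (molprobity.out by default)

Comparing the versions and the summary against the wwPDB report is a first step before digging into library differences.]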
Hi Jorge,

Possible reasons for discrepancies include:

- Different versions of the libraries used (Phenix always uses the latest provided by the Richardsons' lab).
- Different libraries altogether (e.g., Phenix uses the CDL natively).
- Explicit presence or absence of H atoms, and how they are treated by the software.

Clearly, we won't be very enthusiastic about using outdated libraries just to match other programs.

Most of the time, the differences stem from 'borderline' cases: the residues contributing to the differences lie at or very close to the contour lines that separate outliers from allowed, or allowed from favored (a rough way to inspect such residues is sketched below).

In the case of the CDL, the idea of an 'ideal' value is something of a moving target, since it depends on the local conformation. In such cases you won't be able to match other software unless it also uses the CDL.

All in all, with this knowledge at hand, if you manage to convince yourself (and, if required, others) that the observed differences are insignificant, then all is just fine!

Pavel
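[Editor's note: for readers who want to see how close their residues sit to those contour lines, here is a minimal sketch using the cctbx/mmtbx code bundled with Phenix. It is a sketch under assumptions, not a definitive recipe: the attribute names (results, score, id_str) reflect one reading of mmtbx.validation.ramalyze and may differ between versions, and the 0.05% general-case outlier cutoff and the 0.02 margin are assumed values to adjust as needed:

    # Sketch: list residues whose Ramachandran score sits near the
    # outlier/allowed boundary, where two programs can easily disagree.
    import iotbx.pdb
    from mmtbx.validation.ramalyze import ramalyze

    hierarchy = iotbx.pdb.input(file_name="model.pdb").construct_hierarchy()
    rama = ramalyze(pdb_hierarchy=hierarchy, outliers_only=False)

    OUTLIER_CUTOFF = 0.05  # percent; assumed general-case outlier threshold
    MARGIN = 0.02          # assumed "borderline" band around the cutoff
    for r in rama.results:
        # flag residues within a small margin of the cutoff
        if abs(r.score - OUTLIER_CUTOFF) < MARGIN:
            print(r.id_str(), "score = %.3f%%" % r.score)

Residues printed by such a script are the ones most likely to flip between "outlier" and "allowed" across software versions and libraries.]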
Thanks, Pavel, for your remarks.

J.