21 Nov 2011, 4:34 a.m.
The likelihood function can then be plugged into Bayes' law - if the model and data error terms are all accounted for, no other weighting should be necessary.
If I arbitrarily multiply the ML function (or any other - it doesn't matter) by 100, the weight will have to account for this. A weight based on the ratio of gradient norms (or any other of a similar kind) will do. Given the number and variety of targets (both data and restraints) we need to deal with (because each combination of model and data quality requires its own proper parametrization), this flexibility is essential. Postulating a fixed weight introduces rigidity, and that doesn't help to sample the space when doing optimization.

Pavel
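For illustration, here is a minimal sketch of the gradient-norm-ratio idea, assuming a combined target of the form w * T_data + T_restraints; the function and variable names are hypothetical and not any particular refinement program's API:

    import numpy as np

    def gradient_norm_weight(grad_data, grad_restraints):
        # Choose w so the data-term gradient and the restraints-term
        # gradient have comparable magnitudes in the combined target.
        gd = np.linalg.norm(grad_data)
        gr = np.linalg.norm(grad_restraints)
        if gd == 0.0:
            return 1.0  # degenerate case: no data gradient, weight is arbitrary
        return gr / gd

    # If the data target is rescaled by 100, grad_data scales by 100
    # and the weight shrinks by the same factor, so the weighted sum
    # w * grad_data + grad_restraints is unchanged.

This is why an arbitrary overall scale on the target is harmless here: the ratio absorbs it automatically, with no assumption that the target is a properly normalized likelihood.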