I am a new PyMC user and I have written MCMC code that is quite slow, so I would like to modify it to speed it up. Is it possible to use multiprocessing to improve the performance of PyMC? For instance, suppose I have a make_model
function that consists of a bunch of Deterministic, Stochastic, and Potential objects, and I am interested in finding the posterior for a couple of parameters. The structure of my MCMC code is as follows:
import pymc as pm

def make_model(X):
    .
    .
    return locals()

if __name__ == '__main__':
    M = pm.MCMC(make_model(X), db='pickle', dbname='NFWTracer.pickle')
    M.use_step_method(pm.AdaptiveMetropolis, M.model_pars, verbose=1)
    M.isample(40000, 8000, 50)
How could I use multiprocessing when all the steps of a chain are related, since each step depends on the previous one to decide the next move in parameter space? If it is possible, how should it be done?
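One common workaround (a sketch, not PyMC-specific): a single adaptive chain is inherently sequential, but you can run several independent chains in separate processes, each with its own seed, and pool their samples afterwards. Everything below is hypothetical for illustration: the toy standard-normal target, and the names `log_post`, `metropolis_chain`, and `run_parallel_chains`, stand in for the real model.

```python
import math
import multiprocessing as mp
import random

def log_post(x):
    # Stand-in log-posterior (standard normal); the real model's
    # likelihood would go here.
    return -0.5 * x * x

def metropolis_chain(seed, n_samples, queue):
    # One independent Metropolis chain with its own RNG seed.
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.gauss(0.0, 1.0)
        logdiff = log_post(prop) - log_post(x)
        if logdiff >= 0 or rng.random() < math.exp(logdiff):
            x = prop
        samples.append(x)
    queue.put(samples)

def run_parallel_chains(n_chains=4, n_samples=2000):
    # The 'fork' start method is Unix-only but avoids pickling the target.
    ctx = mp.get_context('fork')
    queue = ctx.Queue()
    procs = [ctx.Process(target=metropolis_chain, args=(seed, n_samples, queue))
             for seed in range(n_chains)]
    for p in procs:
        p.start()
    # Drain the queue before joining to avoid deadlock on large payloads.
    chains = [queue.get() for _ in procs]
    for p in procs:
        p.join()
    return chains

if __name__ == '__main__':
    chains = run_parallel_chains()
```

With PyMC 2 the analogous pattern would presumably be to give each process its own database file (`dbname`) and combine the traces at the end.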
Would I need to modify the make_model function? The bottleneck in particular is the likelihood: I have already written the whole likelihood computation in Cython, so the only remaining option for improving speed is multiprocessing or otherwise parallelizing the code. – Dalek Jul 24 '14 at 5:12
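Since the chain itself must stay sequential, another option is to parallelize *inside* the likelihood: if the log-likelihood is a sum over independent data points, the data can be split into chunks evaluated across worker processes at each proposal. The sketch below uses a Gaussian log-density as a stand-in for the Cython routine; `chunk_loglike` and `parallel_loglike` are hypothetical names, not part of any library.

```python
import math
import multiprocessing as mp

def chunk_loglike(args):
    # Hypothetical per-chunk log-likelihood; in the real model this would
    # call the Cython routine on a slice of the data.
    data_chunk, mu, sigma = args
    return sum(-0.5 * ((d - mu) / sigma) ** 2
               - math.log(sigma) - 0.5 * math.log(2.0 * math.pi)
               for d in data_chunk)

def parallel_loglike(pool, data_chunks, mu, sigma):
    # The chain stays sequential; each proposal's likelihood is the sum of
    # per-chunk terms evaluated in parallel across worker processes.
    return sum(pool.map(chunk_loglike, [(c, mu, sigma) for c in data_chunks]))

if __name__ == '__main__':
    data = [0.5, -0.3, 1.2, 0.1, -0.7, 0.9]
    chunks = [data[:3], data[3:]]
    # Create the pool once and reuse it across MCMC iterations so the
    # process start-up cost is paid only once per run, not per proposal.
    with mp.get_context('fork').Pool(2) as pool:
        total = parallel_loglike(pool, chunks, 0.0, 1.0)
```

Whether this pays off depends on the per-chunk cost: interprocess overhead at every proposal can swamp the gain unless each likelihood evaluation is genuinely expensive.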