
I am a newbie pymc user and I have written an MCMC code which is quite slow, so I would like to modify it to speed it up. Is it possible to use multiprocessing to speed up pymc? For instance, suppose I have a make_model function consisting of a bunch of Deterministic, Stochastic, and Potential objects, and I am interested in finding the posterior for a couple of parameters. The structure of my MCMC code is as follows:

def make_model(X):
    .
    .
    return locals()

if __name__ == '__main__':

    M = pm.MCMC(make_model(X), db='pickle', dbname='NFWTracer.pickle')
    M.use_step_method(pm.AdaptiveMetropolis, M.model_pars ,verbose=1)
    M.isample(40000,8000,50)

How could I use multiprocessing when successive steps of a chain depend on each other to decide the next move in parameter space? If it is feasible, how should it be done?

do you have example code that can be run? – johntellsall Jul 23 '14 at 18:31
    
@shavenwarthog Well, I have an MCMC code which is slow, and I am wondering whether it makes sense to use multiprocessing and where I should use it — for instance, inside the make_model function? The bottleneck is the likelihood, and I have already written the whole likelihood computation in Cython, so the only remaining option to improve the speed of the code is multiprocessing or otherwise parallelizing it. – Dalek Jul 24 '14 at 5:12
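For reference, one thing a single Metropolis chain cannot do is split one chain across processes, since each proposal depends on the previous accepted state. What can be parallelized is running several independent chains and pooling their samples after burn-in. Below is a minimal, self-contained sketch of that pattern using the standard library only — the `log_post` target and the hand-rolled Metropolis loop are toy stand-ins for illustration, not PyMC API; in practice each worker would build its own model and call `pm.MCMC(...).sample(...)` instead:

```python
import math
import random
from multiprocessing import Pool

def log_post(x):
    # Hypothetical target: a standard normal log-density (stand-in for
    # the real, e.g. Cython-compiled, log-likelihood).
    return -0.5 * x * x

def run_chain(args):
    """Run one independent random-walk Metropolis chain.

    `args` = (seed, n_steps); each chain gets its own seed so the
    chains explore the space independently.
    """
    seed, n_steps = args
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, 1.0)  # random-walk proposal
        # Metropolis acceptance: accept with prob min(1, p(prop)/p(x))
        if math.log(rng.random()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(x)
    return samples

def run_chains_parallel(n_chains, n_steps):
    """Run n_chains independent chains in separate processes."""
    with Pool(processes=n_chains) as pool:
        return pool.map(run_chain,
                        [(seed, n_steps) for seed in range(n_chains)])

if __name__ == '__main__':
    chains = run_chains_parallel(4, 2000)
    # Drop burn-in from each chain, then pool the remaining draws.
    pooled = [s for chain in chains for s in chain[500:]]
    print(len(pooled))
```

Running independent chains also lets you check convergence across chains (e.g. Gelman–Rubin diagnostics), which a single long chain cannot provide.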
