I'm trying to write a set of tests for a program that supports dynamically loading an arbitrary backend implementation, using py.test parametrized tests, and it turned out to be more complicated than I expected. What I'm intending to do (and not completely achieving) is to use the list_backends
function below to parametrize some tests based on whichever backend implementations exist at runtime.
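For context, here is a minimal sketch of what list_backends could look like; the actual discovery mechanism and backend names below are just illustrative stand-ins:

```python
def list_backends():
    """Return the names of all backend implementations available at runtime.

    Hypothetical sketch: the real implementation discovers backends
    dynamically; this one just returns a fixed list for illustration.
    """
    return ['sqlite', 'postgres', 'memory']
```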
My current implementation looks something like the code below: I copy a set of config files, one per backend, to tmpdir,
and then return the section of the config file named after the backend:
import py
import pytest


@pytest.fixture(scope='function')
def backend(request):
    """Return the backend implementation class."""
    return get_backend(request.param)


@pytest.fixture(scope='function')
def configdir(tmpdir, request):
    """Copy all test config files to a temporary directory."""
    testdir = py.path.local(request.module.__file__).dirpath()
    src = testdir.join("configs")
    dest = tmpdir.join("configs").mkdir()
    for f in src.visit():
        f.copy(dest)
    return dest


@pytest.fixture(scope='function')
def config(configdir, request):
    """Return the appropriate config section for this test."""
    cfgfile = configdir.join(request.param)
    cfg = read_config(cfgfile)
    return cfg[request.param]


@pytest.mark.parametrize(
    'backend,config', zip(*[list_backends()]*2), indirect=['backend', 'config'])
def test_backend_with_config(backend, config):
    """Run some tests with a particular backend and config."""
    instance = backend(config)
    # test stuff
    ...
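To be clear about the parametrize expression: zip(*[list_backends()]*2) just pairs each backend name with itself, so the backend and config fixtures both receive the same name for a given test. A small worked example (with a stand-in list, since list_backends is my own function):

```python
backends = ['sqlite', 'postgres']  # stand-in for list_backends()

# [backends] * 2 is [backends, backends]; zip(*...) then pairs the two
# copies element-wise, so each name is paired with itself.
pairs = list(zip(*[backends] * 2))
print(pairs)  # [('sqlite', 'sqlite'), ('postgres', 'postgres')]
```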
First, am I overdoing this? Second, is there a simpler way to do this kind of parametrization with py.test?