For up_down, the algorithm is good, but can be implemented more simply:
Use the cmp built-in (thanks to @Winston).
Use itertools built-ins to eliminate redundant index math,
indexed access ("[]"), and list copying ("[:]").
from itertools import imap, islice

# Compare each element with its nth successor.
def up_down(data, n=1):
    # Iterate in parallel over data and its n-shifted tail.
    return imap(cmp, data, islice(data, n, None))
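Since cmp and itertools.imap were removed in Python 3, here is a sketch of an equivalent for anyone running it there (the (a > b) - (a < b) idiom stands in for cmp):

```python
import itertools

def up_down(data, n=1):
    # (a > b) - (a < b) reproduces Python 2's cmp() in Python 3,
    # where cmp and itertools.imap no longer exist.
    shifted = itertools.islice(data, n, None)
    return [(a > b) - (a < b) for a, b in zip(data, shifted)]

print(up_down([1, 3, 2, 2]))   # → [-1, 1, 0]
```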
For possible_lists, you could do much better than generating the entire
permutation set. Instead, consider an approach that generates an initial
list that provably matches the up_down data and then applies a series
of mutations to that list, generating other lists that preserve its up_down
profile until all such lists have been provably generated.
In an optimal solution:

- the initial list would be cheap to generate
- the initial list would be "minimal" according to some metric
- the mutator function would generate the next minimal list
  (i.e. the "next most minimal", or "next least", however that
  ordering is best phrased)
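To illustrate how cheap the initial list can be, here is a sketch under the assumption that values are unconstrained integers (if the OP must permute a fixed multiset, this construction would need adapting). Since each profile entry p satisfies p == cmp(out[i], out[i + 1]), the next element is just out[i] - p; initial_list is my name, not the OP's:

```python
def initial_list(profile):
    # Each profile entry p satisfies p == cmp(out[i], out[i + 1]),
    # so out[i + 1] = out[i] - p; the starting value 0 is arbitrary.
    out = [0]
    for p in profile:
        out.append(out[-1] - p)
    return out

print(initial_list([1, -1, 0]))   # → [0, -1, 0, 0]
```

This runs in O(len(profile)) time, and by construction its up_down profile matches the input exactly.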
In a less-than-optimal solution:

- the initial list would at least be cheaper to generate than it would
  be to discover by the OP's "permute and test" method
- the "minimality" idea could be abandoned
- the mutator function could generate a set of successor lists, test
  them against the set of lists already seen, and mutate any one
  that isn't in the set of those already mutated
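A hedged sketch of that less-than-optimal variant, assuming the task is to find permutations of a given value list matching a profile (possible_lists here is my illustrative version, not the OP's code; the seed is found naively, whereas the whole point above is that a cheaper construction exists; and single transpositions are not guaranteed to connect every matching permutation, so this shows the shape of the search rather than a proven-complete solution):

```python
import itertools

def up_down(data, n=1):
    # Python 3 stand-in for imap(cmp, ...).
    return [(a > b) - (a < b) for a, b in zip(data, data[n:])]

def possible_lists(values, profile):
    # Seed with one matching arrangement (found naively here).
    start = next(p for p in itertools.permutations(values)
                 if up_down(list(p)) == profile)
    seen = {start}
    frontier = [start]
    while frontier:
        current = frontier.pop()
        yield list(current)
        # Mutate: try every transposition, keep profile-preserving ones.
        for i, j in itertools.combinations(range(len(current)), 2):
            cand = list(current)
            cand[i], cand[j] = cand[j], cand[i]
            t = tuple(cand)
            if t not in seen and up_down(cand) == profile:
                seen.add(t)
                frontier.append(t)
```

Note the generator interface: each matching list is yielded as soon as it is discovered, although the seen set still grows with the number of results.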
I believe that an optimal solution is worth pursuing, especially since
it might allow possible_lists to export a "generator" interface that
produces each matching list sequentially with a minimal memory footprint.
I also believe that a good solution for n > 1 is likely to be found by
post-processing combinations of the n independent solutions for the
subsequences formed by every nth element of the original.
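That decomposition can be sanity-checked numerically: up_down(data, n) is just an interleaving of the n independent n=1 profiles of the subsequences data[k::n] (a small check of the claim, not the full recombination algorithm):

```python
def up_down(data, n=1):
    # Python 3 stand-in for the cmp/imap version above.
    return [(a > b) - (a < b) for a, b in zip(data, data[n:])]

data = [5, 1, 4, 2, 3, 6]
n = 2
whole = up_down(data, n)                        # compares data[i] with data[i + n]
subs = [up_down(data[k::n]) for k in range(n)]  # n independent n=1 profiles
# Re-interleave the per-subsequence profiles:
recombined = [subs[i % n][i // n] for i in range(len(whole))]
print(whole == recombined)   # → True
```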