My first instinct was the same as Joe Kington's when it comes to lists, but when I checked, on my machine at least, islice was consistently slower:
>>> timeit.timeit("sum(l[50:950])", "l = range(1000)", number=10000)
1.0398731231689453
>>> timeit.timeit("sum(islice(l, 50, 950))", "from itertools import islice; l = range(1000)", number=10000)
1.2317550182342529
>>> timeit.timeit("sum(l[50:950000])", "l = range(1000000)", number=10)
7.9020509719848633
>>> timeit.timeit("sum(islice(l, 50, 950000))", "from itertools import islice; l = range(1000000)", number=10)
8.4522969722747803
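(Those timings are from Python 2, where range returns a list. A rough sketch of the same comparison on Python 3, assuming list(range(...)) to materialize the list first, since range is lazy there:)

```python
import timeit

# Python 3: range() is lazy, so build a concrete list in the setup.
setup = "from itertools import islice; l = list(range(1000))"

slice_t = timeit.timeit("sum(l[50:950])", setup, number=10000)
islice_t = timeit.timeit("sum(islice(l, 50, 950))", setup, number=10000)

print("slice: ", slice_t)
print("islice:", islice_t)
```

The relative ordering may of course differ across machines and interpreter versions.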
I also tried a custom_sum and found it was faster, but not by much:
>>> setup = """
... def custom_sum(list, start, stop):
... s = 0
... for i in xrange(start, stop):
... s += list[i]
... return s
...
... l = range(1000)
... """
>>> timeit.timeit("custom_sum(l, 50, 950)", setup, number=1000)
0.66767406463623047
At larger sizes, though, it was far slower:
>>> setup = setup.replace("range(1000)", "range(1000000)")
>>> timeit.timeit("custom_sum(l, 50, 950000)", setup, number=10)
14.185815095901489
I couldn't think of anything else to test. (Thoughts, anyone?)
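(One more variant that might be worth timing: a generator expression with explicit indexing, which is essentially custom_sum without the function wrapper. A sketch, written for Python 3, where range plays xrange's role:)

```python
import timeit

setup = "l = list(range(1000))"

# Per-element indexing through the interpreter, like custom_sum...
gen_t = timeit.timeit("sum(l[i] for i in range(50, 950))", setup, number=1000)
# ...versus the C-level copy that slicing does before sum() iterates it.
slice_t = timeit.timeit("sum(l[50:950])", setup, number=1000)

print("genexpr:", gen_t)
print("slice:  ", slice_t)
```

If the generator version tracks custom_sum's behavior, that would suggest the per-element indexing in the interpreter loop, not the function-call overhead, is what dominates at large sizes.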