How do I split a list of arbitrary length into equal sized chunks?
Related question: How to iterate over a list in chunks
Here's a generator that yields evenly-sized chunks:
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in range(0, len(lst), n):
        yield lst[i:i + n]
import pprint
pprint.pprint(list(chunks(list(range(10, 75)), 10)))
[[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
For Python 2, use xrange instead of range:
def chunks(lst, n):
    """Yield successive n-sized chunks from lst."""
    for i in xrange(0, len(lst), n):
        yield lst[i:i + n]
Below is a list comprehension one-liner. The method above is preferable, though, since using named functions makes code easier to understand. For Python 3:
[lst[i:i + n] for i in range(0, len(lst), n)]
For Python 2:
[lst[i:i + n] for i in xrange(0, len(lst), n)]
Something super simple:
def chunks(xs, n):
    n = max(1, n)
    return (xs[i:i+n] for i in range(0, len(xs), n))
For Python 2, use xrange() instead of range(). Use len(xs) or 1 in place of len(xs) to deal with empty lists.
I know this is kind of old, but nobody has yet mentioned numpy.array_split:
import numpy as np
lst = range(50)
np.array_split(lst, 5)
Result:
[array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
array([10, 11, 12, 13, 14, 15, 16, 17, 18, 19]),
array([20, 21, 22, 23, 24, 25, 26, 27, 28, 29]),
array([30, 31, 32, 33, 34, 35, 36, 37, 38, 39]),
array([40, 41, 42, 43, 44, 45, 46, 47, 48, 49])]
You can also call np.array_split(lst, int(len(lst)/5)) to get sublists of length close to 5, but note that int() rounds the number of chunks downwards, so each chunk will contain 5 or more items. For example, when lst = range(49), you get four chunks of 6 items and five chunks of 5 items.
Directly from the (old) Python documentation (recipes for itertools):
from itertools import izip, chain, repeat

def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return izip(*[chain(iterable, repeat(padvalue, n-1))]*n)
The current version, as suggested by J.F.Sebastian:
#from itertools import izip_longest as zip_longest  # for Python 2.x
from itertools import zip_longest  # for Python 3.x
#from six.moves import zip_longest  # for both (uses the six compat library)

def grouper(n, iterable, padvalue=None):
    "grouper(3, 'abcdefg', 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)
I guess Guido's time machine works—worked—will work—will have worked—was working again.
These solutions work because [iter(iterable)]*n (or the equivalent in the earlier version) creates one iterator, repeated n times in the list. izip_longest then effectively performs a round-robin of "each" iterator; because this is the same iterator, it is advanced by each such call, resulting in each such zip round-robin generating one tuple of n items.
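A minimal sketch of that mechanism (plain zip here, just to show the shared iterator advancing):

it = iter('abcdefg')
columns = [it] * 3                # three references to the *same* iterator
print(columns[0] is columns[2])   # True

# each tuple pulls the next three items from that single iterator;
# plain zip drops the incomplete final group ('g')
print(list(zip(*columns)))        # [('a', 'b', 'c'), ('d', 'e', 'f')]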
(Though some consider this a case of the itertools functional approach turning out some unreadable sludge, compared to a simple and naive pure-Python implementation.)
I'm surprised nobody has thought of using iter's two-argument form:
from itertools import islice

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
Demo:
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
This works with any iterable and produces output lazily. It returns tuples rather than iterators, but I think it has a certain elegance nonetheless. It also doesn't pad; if you want padding, a simple variation on the above will suffice:
from itertools import islice, chain, repeat

def chunk_pad(it, size, padval=None):
    it = chain(iter(it), repeat(padval))
    return iter(lambda: tuple(islice(it, size)), (padval,) * size)
Demo:
>>> list(chunk_pad(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk_pad(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
Like the izip_longest-based solutions, the above always pads. As far as I know, there's no one- or two-line itertools recipe for a function that optionally pads. By combining the above two approaches, this one comes pretty close:
_no_padding = object()

def chunk(it, size, padval=_no_padding):
    if padval == _no_padding:
        it = iter(it)
        sentinel = ()
    else:
        it = chain(iter(it), repeat(padval))
        sentinel = (padval,) * size
    return iter(lambda: tuple(islice(it, size)), sentinel)
Demo:
>>> list(chunk(range(14), 3))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13)]
>>> list(chunk(range(14), 3, None))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, None)]
>>> list(chunk(range(14), 3, 'a'))
[(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11), (12, 13, 'a')]
I believe this is the shortest chunker proposed that offers optional padding.
As Tomasz Gandor observed, the two padding chunkers will stop unexpectedly if they encounter a long sequence of pad values. Here's a final variation that works around that problem in a reasonable way:
_no_padding = object()

def chunk(it, size, padval=_no_padding):
    it = iter(it)
    chunker = iter(lambda: tuple(islice(it, size)), ())
    if padval == _no_padding:
        yield from chunker
    else:
        for ch in chunker:
            yield ch if len(ch) == size else ch + (padval,) * (size - len(ch))
Demo:
>>> list(chunk([1, 2, (), (), 5], 2))
[(1, 2), ((), ()), (5,)]
>>> list(chunk([1, 2, None, None, 5], 2, None))
[(1, 2), (None, None), (5, None)]
From the comments: many of us came up with the islice(it, size) expression and embedded it (as I had done) in a loop construct. Only you thought of the two-argument version of iter() (I was completely unaware of it), which makes it super-elegant (and probably most performance-effective). I had no idea that the first argument to iter changes to a 0-argument function when given the sentinel. You return a (potentially infinite) iterator of chunks, can use a (potentially infinite) iterator as input, have no len() and no array slices. Awesome!
Here is a generator that works on arbitrary iterables:
import itertools

def split_seq(iterable, size):
    it = iter(iterable)
    item = list(itertools.islice(it, size))
    while item:
        yield item
        item = list(itertools.islice(it, size))
Example:
>>> import pprint
>>> pprint.pprint(list(split_seq(xrange(75), 10)))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
Simple yet elegant
L = range(1, 1000)
print [L[x:x+10] for x in xrange(0, len(L), 10)]
or if you prefer:
def chunks(L, n): return [L[x: x+n] for x in xrange(0, len(L), n)]
chunks(L, 10)
(A note on single-character names: 1 and l are indistinguishable. As are 0 and O. And sometimes even I and 1.)
def chunk(input, size):
    return map(None, *([iter(input)] * size))
The map(None, ...) form works only in Python 2. A Python 3-compatible variant is return map(lambda *x: x, *([iter(input)] * size)); note, however, that it drops the tail of the list if it cannot be divided into equal chunks.
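For example (Python 3), the zip-style variant silently loses the leftover element:

data = range(7)
size = 3
print(list(map(lambda *x: x, *([iter(data)] * size))))
# [(0, 1, 2), (3, 4, 5)]  -- the trailing 6 is dropped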
How do you split a list into evenly sized chunks?
"Evenly sized chunks", to me, implies that they are all the same length, or barring that option, at minimal variance in length. E.g. 5 baskets for 21 items could have the following results:
>>> import statistics
>>> statistics.variance([5,5,5,5,1])
3.2
>>> statistics.variance([5,4,4,4,4])
0.19999999999999998
A practical reason to prefer the latter result: if you were using these functions to distribute work, you've built in the prospect of one worker likely finishing well before the others, leaving it sitting idle while the others continue working hard.
Critique of other answers here
When I originally wrote this answer, none of the other answers produced evenly sized chunks - they all leave a runt chunk at the end, so they're not well balanced and have a higher-than-necessary variance of lengths.
For example, the current top answer ends with:
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
Others, like list(grouper(3, range(7))) and chunk(range(7), 3), both return [(0, 1, 2), (3, 4, 5), (6, None, None)]. The None's are just padding, and rather inelegant in my opinion. They are NOT evenly chunking the iterables.
Why can't we divide these better?
Cycle Solution
A high-level balanced solution uses itertools.cycle, which is the way I might do it today. Here's the setup:
from itertools import cycle
items = range(10, 75)
number_of_baskets = 10
Now we need our lists into which to populate the elements:
baskets = [[] for _ in range(number_of_baskets)]
Finally, we zip the elements we're going to allocate together with a cycle of the baskets until we run out of elements, which, semantically, is exactly what we want:
for element, basket in zip(items, cycle(baskets)):
    basket.append(element)
Here's the result:
>>> from pprint import pprint
>>> pprint(baskets)
[[10, 20, 30, 40, 50, 60, 70],
[11, 21, 31, 41, 51, 61, 71],
[12, 22, 32, 42, 52, 62, 72],
[13, 23, 33, 43, 53, 63, 73],
[14, 24, 34, 44, 54, 64, 74],
[15, 25, 35, 45, 55, 65],
[16, 26, 36, 46, 56, 66],
[17, 27, 37, 47, 57, 67],
[18, 28, 38, 48, 58, 68],
[19, 29, 39, 49, 59, 69]]
To productionize this solution, we write a function, and provide the type annotations:
from itertools import cycle
from typing import List, Any
def cycle_baskets(items: List[Any], maxbaskets: int) -> List[List[Any]]:
    baskets = [[] for _ in range(min(maxbaskets, len(items)))]
    for item, basket in zip(items, cycle(baskets)):
        basket.append(item)
    return baskets
In the above, we take our list of items, and the max number of baskets. We create a list of empty lists, in which to append each element, in a round-robin style.
Slices
Another elegant solution is to use slices - specifically the less-commonly used step argument to slices. i.e.:
start = 0
stop = None
step = number_of_baskets
first_basket = items[start:stop:step]
This is especially elegant in that slices don't care how long the data are - the result, our first basket, is only as long as it needs to be. We'll only need to increment the starting point for each basket.
In fact this could be a one-liner, but we'll go multiline for readability and to avoid an overlong line of code:
from typing import List, Any
def slice_baskets(items: List[Any], maxbaskets: int) -> List[List[Any]]:
    n_baskets = min(maxbaskets, len(items))
    return [items[i::n_baskets] for i in range(n_baskets)]
And islice from the itertools module will provide a lazily iterating approach, like what was originally asked for in the question. I don't expect most use-cases to benefit very much, as the original data is already fully materialized in a list, but for large datasets it could save nearly half the memory usage.
from itertools import islice
from typing import List, Any, Generator
def yield_islice_baskets(items: List[Any], maxbaskets: int) -> Generator[List[Any], None, None]:
    n_baskets = min(maxbaskets, len(items))
    for i in range(n_baskets):
        yield islice(items, i, None, n_baskets)
View results with:
from pprint import pprint
items = list(range(10, 75))
pprint(cycle_baskets(items, 10))
pprint(slice_baskets(items, 10))
pprint([list(s) for s in yield_islice_baskets(items, 10)])
Updated prior solutions
Here's another balanced solution, adapted from a function I've used in production in the past, that uses the modulo operator:
def baskets_from(items, maxbaskets=25):
    baskets = [[] for _ in range(maxbaskets)]
    for i, item in enumerate(items):
        baskets[i % maxbaskets].append(item)
    # list() so the result prints as a list under Python 3, where filter returns an iterator
    return list(filter(None, baskets))
And I created a generator that does the same if you put it into a list:
def iter_baskets_from(items, maxbaskets=3):
    '''generates evenly balanced baskets from indexable iterable'''
    item_count = len(items)
    baskets = min(item_count, maxbaskets)
    for x_i in range(baskets):
        yield [items[y_i] for y_i in range(x_i, item_count, baskets)]
And finally, since I see that all of the above functions return elements in a contiguous order (as they were given):
def iter_baskets_contiguous(items, maxbaskets=3, item_count=None):
    '''
    generates balanced baskets from iterable, contiguous contents
    provide item_count if providing an iterator that doesn't support len()
    '''
    item_count = item_count or len(items)
    baskets = min(item_count, maxbaskets)
    items = iter(items)
    floor = item_count // baskets
    ceiling = floor + 1
    stepdown = item_count % baskets
    for x_i in range(baskets):
        length = ceiling if x_i < stepdown else floor
        yield [next(items) for _ in range(length)]
Output
To test them out:
print(baskets_from(range(6), 8))
print(list(iter_baskets_from(range(6), 8)))
print(list(iter_baskets_contiguous(range(6), 8)))
print(baskets_from(range(22), 8))
print(list(iter_baskets_from(range(22), 8)))
print(list(iter_baskets_contiguous(range(22), 8)))
print(baskets_from('ABCDEFG', 3))
print(list(iter_baskets_from('ABCDEFG', 3)))
print(list(iter_baskets_contiguous('ABCDEFG', 3)))
print(baskets_from(range(26), 5))
print(list(iter_baskets_from(range(26), 5)))
print(list(iter_baskets_contiguous(range(26), 5)))
Which prints out:
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0], [1], [2], [3], [4], [5]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 8, 16], [1, 9, 17], [2, 10, 18], [3, 11, 19], [4, 12, 20], [5, 13, 21], [6, 14], [7, 15]]
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10, 11], [12, 13, 14], [15, 16, 17], [18, 19], [20, 21]]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'D', 'G'], ['B', 'E'], ['C', 'F']]
[['A', 'B', 'C'], ['D', 'E'], ['F', 'G']]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 5, 10, 15, 20, 25], [1, 6, 11, 16, 21], [2, 7, 12, 17, 22], [3, 8, 13, 18, 23], [4, 9, 14, 19, 24]]
[[0, 1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20], [21, 22, 23, 24, 25]]
Notice that the contiguous generator provides chunks in the same length patterns as the other two, but the items are all in order, and they are as evenly divided as one may divide a list of discrete elements.
import pandas as pd; [pd.DataFrame(np.arange(7))[i::3] for i in xrange(3)]
If you know the list size:
def SplitList(mylist, chunk_size):
    return [mylist[offs:offs+chunk_size] for offs in range(0, len(mylist), chunk_size)]
If you don't (an iterator):
def IterChunks(sequence, chunk_size):
    res = []
    for item in sequence:
        res.append(item)
        if len(res) >= chunk_size:
            yield res
            res = []
    if res:
        yield res  # yield the last, incomplete, portion
In the latter case, it can be rephrased in a more beautiful way if you can be sure that the sequence always contains a whole number of chunks of given size (i.e. there is no incomplete last chunk).
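One way that rephrasing might look under that assumption (a sketch, not necessarily what the author had in mind), using the zip-of-one-iterator trick so no tail handling is needed:

def iter_chunks_exact(sequence, chunk_size):
    # assumes len(sequence) is an exact multiple of chunk_size
    return zip(*[iter(sequence)] * chunk_size)

print(list(iter_chunks_exact(range(12), 4)))
# [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 10, 11)]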
I saw the most awesome Python-ish answer in a duplicate of this question:
from itertools import zip_longest
a = range(1, 16)
i = iter(a)
r = list(zip_longest(i, i, i))
>>> print(r)
[(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)]
You can create an n-tuple for any n. If a = range(1, 15), then the result will be:
[(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, None)]
If the list is divided evenly, then you can replace zip_longest with zip; otherwise the triplet (13, 14, None) would be lost. Python 3 is used above. For Python 2, use izip_longest.
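For example, with plain zip the incomplete final group is dropped entirely:

a = range(1, 15)        # 14 elements
i = iter(a)
print(list(zip(i, i, i)))
# [(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12)]  -- 13 and 14 are lost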
zip(i, i, i, ... i) with chunk_size arguments to zip() can be written as zip(*[i]*chunk_size). Whether that's a good idea or not is debatable, of course.
zip_longest should be used, as done in stackoverflow.com/a/434411/1959808. Also note that range(1, 15) is already missing an element, because there are 14 elements in range(1, 15), not 15.
Don't reinvent the wheel.
Given
import itertools as it
import collections as ct
import more_itertools as mit
iterable = range(11)
n = 3
Code
list(mit.chunked(iterable, n))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
list(mit.sliced(iterable, n))
# [range(0, 3), range(3, 6), range(6, 9), range(9, 11)]
list(mit.grouper(n, iterable))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]
list(mit.windowed(iterable, len(iterable)//n, step=n))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]
list(mit.chunked_even(iterable, n))
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
(or DIY, if you want)
The Standard Library
list(it.zip_longest(*[iter(iterable)] * n))
# [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, None)]
d = {}
for i, x in enumerate(iterable):
    d.setdefault(i//n, []).append(x)

list(d.values())
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]

dd = ct.defaultdict(list)
for i, x in enumerate(iterable):
    dd[i//n].append(x)

list(dd.values())
# [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10]]
References
more_itertools.chunked (related post)
more_itertools.sliced
more_itertools.grouper (related post)
more_itertools.windowed (see also stagger, zip_offset)
more_itertools.chunked_even
zip_longest (related post, related post)
setdefault (ordered results requires Python 3.6+)
collections.defaultdict (ordered results requires Python 3.6+)
+ A third-party library that implements itertools recipes and more. > pip install more_itertools
[AA[i:i+SS] for i in range(len(AA))[::SS]]
where AA is the array and SS is the chunk size. For example:
>>> AA=range(10,21);SS=3
>>> [AA[i:i+SS] for i in range(len(AA))[::SS]]
[[10, 11, 12], [13, 14, 15], [16, 17, 18], [19, 20]]
# or [range(10, 13), range(13, 16), range(16, 19), range(19, 21)] in py3
To expand the ranges in py3 do
(py3) >>> [list(AA[i:i+SS]) for i in range(len(AA))[::SS]]
[[10, 11, 12], [13, 14, 15], [16, 17, 18], [19, 20]]
If you had a chunk size of 3 for example, you could do:
zip(*[iterable[i::3] for i in range(3)])
source: http://code.activestate.com/recipes/303060-group-a-list-into-sequential-n-tuples/
I would use this when my chunk size is a fixed number I can type, e.g. '3', and would never change.
See also zip_longest from itertools, which keeps the leftover items from an uneven split (padded): docs.python.org/3/library/itertools.html#itertools.zip_longest
The toolz library has the partition function for this:
from toolz.itertoolz import partition
list(partition(2, [1, 2, 3, 4]))
[(1, 2), (3, 4)]
With Assignment Expressions in Python 3.8 it becomes quite nice:
import itertools

def batch(iterable, size):
    it = iter(iterable)
    while item := list(itertools.islice(it, size)):
        yield item
This works on an arbitrary iterable, not just a list.
>>> import pprint
>>> pprint.pprint(list(batch(range(75), 10)))
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
[30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
[40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
[50, 51, 52, 53, 54, 55, 56, 57, 58, 59],
[60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
[70, 71, 72, 73, 74]]
I was curious about the performance of different approaches and here it is:
Tested on Python 3.5.1
import time
batch_size = 7
arr_len = 298937
#---------slice-------------
print("\r\nslice")
start = time.time()
arr = [i for i in range(0, arr_len)]
while True:
    if not arr:
        break
    tmp = arr[0:batch_size]
    arr = arr[batch_size:-1]
print(time.time() - start)
#-----------index-----------
print("\r\nindex")
arr = [i for i in range(0, arr_len)]
start = time.time()
for i in range(0, round(len(arr) / batch_size + 1)):
    tmp = arr[batch_size * i : batch_size * (i + 1)]
print(time.time() - start)
#----------batches 1------------
def batch(iterable, n=1):
    l = len(iterable)
    for ndx in range(0, l, n):
        yield iterable[ndx:min(ndx + n, l)]
print("\r\nbatches 1")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in batch(arr, batch_size):
    tmp = x
print(time.time() - start)
#----------batches 2------------
from itertools import islice, chain

def batch(iterable, size):
    sourceiter = iter(iterable)
    while True:
        batchiter = islice(sourceiter, size)
        yield chain([next(batchiter)], batchiter)
print("\r\nbatches 2")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in batch(arr, batch_size):
    tmp = x
print(time.time() - start)
#---------chunks-------------
def chunks(l, n):
    """Yield successive n-sized chunks from l."""
    for i in range(0, len(l), n):
        yield l[i:i + n]
print("\r\nchunks")
arr = [i for i in range(0, arr_len)]
start = time.time()
for x in chunks(arr, batch_size):
    tmp = x
print(time.time() - start)
#-----------grouper-----------
from itertools import zip_longest # for Python 3.x
#from six.moves import zip_longest # for both (uses the six compat library)
def grouper(iterable, n, padvalue=None):
    "grouper('abcdefg', 3, 'x') --> ('a','b','c'), ('d','e','f'), ('g','x','x')"
    return zip_longest(*[iter(iterable)]*n, fillvalue=padvalue)
arr = [i for i in range(0, arr_len)]
print("\r\ngrouper")
start = time.time()
for x in grouper(arr, batch_size):
    tmp = x
print(time.time() - start)
Results:
slice
31.18285083770752
index
0.02184295654296875
batches 1
0.03503894805908203
batches 2
0.22681021690368652
chunks
0.019841909408569336
grouper
0.006506919860839844
I like the Python doc's version proposed by tzot and J.F.Sebastian a lot, but it has two shortcomings:
it is not very explicit
I usually don't want a fill value in the last chunk
I'm using this one a lot in my code:
from itertools import islice

def chunks(n, iterable):
    iterable = iter(iterable)
    while True:
        yield tuple(islice(iterable, n)) or iterable.next()
UPDATE: A lazy chunks version:
from itertools import chain, islice

def chunks(n, iterable):
    iterable = iter(iterable)
    while True:
        yield chain([next(iterable)], islice(iterable, n-1))
You may also use the get_chunks function of the utilspie library as:
>>> from utilspie import iterutils
>>> a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> list(iterutils.get_chunks(a, 5))
[[1, 2, 3, 4, 5], [6, 7, 8, 9]]
You can install utilspie via pip: sudo pip install utilspie
Disclaimer: I am the creator of utilspie library.
code:
def split_list(the_list, chunk_size):
    result_list = []
    while the_list:
        result_list.append(the_list[:chunk_size])
        the_list = the_list[chunk_size:]
    return result_list
a_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(split_list(a_list, 3))
result:
[[1, 2, 3], [4, 5, 6], [7, 8, 9], [10]]
At this point, I think we need a recursive generator, just in case...
In python 2:
def chunks(li, n):
    if li == []:
        return
    yield li[:n]
    for e in chunks(li[n:], n):
        yield e
In python 3:
def chunks(li, n):
    if li == []:
        return
    yield li[:n]
    yield from chunks(li[n:], n)
Also, in case of massive Alien invasion, a decorated recursive generator might become handy:
def dec(gen):
    def new_gen(li, n):
        for e in gen(li, n):
            if e == []:
                return
            yield e
    return new_gen

@dec
def chunks(li, n):
    yield li[:n]
    for e in chunks(li[n:], n):
        yield e
heh, one line version
In [48]: chunk = lambda ulist, step: map(lambda i: ulist[i:i+step], xrange(0, len(ulist), step))
In [49]: chunk(range(1,100), 10)
Out[49]:
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20],
[21, 22, 23, 24, 25, 26, 27, 28, 29, 30],
[31, 32, 33, 34, 35, 36, 37, 38, 39, 40],
[41, 42, 43, 44, 45, 46, 47, 48, 49, 50],
[51, 52, 53, 54, 55, 56, 57, 58, 59, 60],
[61, 62, 63, 64, 65, 66, 67, 68, 69, 70],
[71, 72, 73, 74, 75, 76, 77, 78, 79, 80],
[81, 82, 83, 84, 85, 86, 87, 88, 89, 90],
[91, 92, 93, 94, 95, 96, 97, 98, 99]]
Using def chunk instead of chunk = lambda gives the function a .__name__ attribute of 'chunk' instead of '<lambda>'; the specific name is more useful in tracebacks.
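A quick illustration of the difference (the names here are arbitrary, just for the demo):

chunk_lambda = lambda ulist, step: [ulist[i:i + step] for i in range(0, len(ulist), step)]

def chunk_def(ulist, step):
    return [ulist[i:i + step] for i in range(0, len(ulist), step)]

print(chunk_lambda.__name__)   # '<lambda>'
print(chunk_def.__name__)      # 'chunk_def'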
def split_seq(seq, num_pieces):
    start = 0
    for i in xrange(num_pieces):
        stop = start + len(seq[i::num_pieces])
        yield seq[start:stop]
        start = stop
usage:
seq = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for seq in split_seq(seq, 3):
print seq
Another more explicit version.
def chunkList(initialList, chunkSize):
    """
    This function chunks a list into sub lists
    that have a length equal to chunkSize.

    Example:
    lst = [3, 4, 9, 7, 1, 1, 2, 3]
    print(chunkList(lst, 3))
    returns
    [[3, 4, 9], [7, 1, 1], [2, 3]]
    """
    finalList = []
    for i in range(0, len(initialList), chunkSize):
        finalList.append(initialList[i:i+chunkSize])
    return finalList
Without calling len(), which is good for large lists:
def splitter(l, n):
    i = 0
    chunk = l[:n]
    while chunk:
        yield chunk
        i += n
        chunk = l[i:i+n]
And this is for iterables:
from itertools import islice

def isplitter(l, n):
    l = iter(l)
    chunk = list(islice(l, n))
    while chunk:
        yield chunk
        chunk = list(islice(l, n))
The functional flavour of the above:
from itertools import islice, repeat, takewhile

def isplitter2(l, n):
    return takewhile(bool,
                     (tuple(islice(start, n))
                      for start in repeat(iter(l))))
OR:
def chunks_gen_sentinel(n, seq):
    continuous_slices = imap(islice, repeat(iter(seq)), repeat(0), repeat(n))
    return iter(imap(tuple, continuous_slices).next, ())
OR:
def chunks_gen_filter(n, seq):
    continuous_slices = imap(islice, repeat(iter(seq)), repeat(0), repeat(n))
    return takewhile(bool, imap(tuple, continuous_slices))
Note that avoiding len() isn't actually necessary for performance on large lists; it's a constant-time operation. See this reference.
>>> orange = range(1, 1001)
>>> otuples = list( zip(*[iter(orange)]*10))
>>> print(otuples)
[(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), ... (991, 992, 993, 994, 995, 996, 997, 998, 999, 1000)]
>>> olist = [list(i) for i in otuples]
>>> print(olist)
[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], ..., [991, 992, 993, 994, 995, 996, 997, 998, 999, 1000]]
(Python 3.) Note that zip(*[iter(range(7))]*3) only returns [(0, 1, 2), (3, 4, 5)] and forgets the 6 from the input.
def chunks(iterable, n):
    """assumes n is an integer > 0"""
    iterable = iter(iterable)
    while True:
        result = []
        for i in range(n):
            try:
                a = next(iterable)
            except StopIteration:
                break
            else:
                result.append(a)
        if result:
            yield result
        else:
            break
g1 = (i*i for i in range(10))
g2 = chunks(g1, 3)
print(g2)
# <generator object chunks at 0x0337B9B8>
print(list(g2))
# [[0, 1, 4], [9, 16, 25], [36, 49, 64], [81]]
Since everybody here is talking about iterators: the boltons library has a perfect method for that, called iterutils.chunked_iter.
from boltons import iterutils
list(iterutils.chunked_iter(list(range(50)), 11))
Output:
[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
[22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32],
[33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43],
[44, 45, 46, 47, 48, 49]]
But if you don't need to be frugal with memory, you can go the old way and store the full list in the first place with iterutils.chunked.
Consider using matplotlib.cbook pieces. For example:
import matplotlib.cbook as cbook
import numpy as np

segments = cbook.pieces(np.arange(20), 3)
for s in segments:
    print(s)
a = [1, 2, 3, 4, 5, 6, 7, 8, 9]
CHUNK = 4
[a[i*CHUNK:(i+1)*CHUNK] for i in xrange((len(a) + CHUNK - 1) / CHUNK )]