July 25, 2008

Python - Counting Items In A List

Counting items in a list...

Let's say you have a list like this:
foo = [1, 2, 1, 2, 1, 2, 1, 1]
... and you want to count how many times each item appears in the list, ending up with a dictionary of items (dict keys) and counts (dict values), like this:
{1: 5, 2: 3}
Here is an easy way to do it:
d = {}
for i in set(foo):
    d[i] = foo.count(i)
Got anything more Pythonic?

22 comments:

  1. Anonymous, July 25, 2008

    maybe not super pythonic, but FP-ythonic:

    bar = dict(zip(set(foo), map(foo.count, set(foo))))

    ah... this one is more pythonic:

    bar = dict((i, foo.count(i)) for i in set(foo))

    cheers,
    sebastien

  2. This comment has been removed by the author.

  3. Anonymous, July 25, 2008

    The count method is O(n), so you end up with O(n*m) performance, where m is the size of the alphabet. In other words, your solution can get really slow really quickly.

    A more robust solution is to use a dictionary containing counters (e.g. "defaultdict(int)") and then do a single pass over the data. Since dictionary lookups are O(1) and additions are amortized O(1), the resulting algorithm is O(n).

    (If you *know* that you have a list of small integers in a limited range, like in your example, you can of course use a plain list instead of a dictionary).
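
    A minimal sketch of that plain-list idea, assuming the values are known to be small non-negative integers:

    foo = [1, 2, 1, 2, 1, 2, 1, 1]
    counts = [0] * 10      # index i holds the count of value i; assumes every value is in range(10)
    for v in foo:
        counts[v] += 1
    # counts[1] == 5, counts[2] == 3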

  4. To demonstrate Fredrik's proposed solution (this assumes from collections import defaultdict has been run first):


    In [6]: x = [1,1,2,2,2,2,2,1,1,2,2,1,1,1,2]

    In [7]: d = defaultdict(lambda: 0)

    In [8]: for y in x: d[y] += 1

    In [9]: d

    Out[9]: defaultdict(<function <lambda> at 0x1123e30>, {1: 7, 2: 8})

  5. d = defaultdict(int) is loads better than d = defaultdict(lambda: 0). I knew there must have been a shortcut for that!
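
    For reference, a minimal sketch of the same single-pass loop with that shortcut, using the foo list from the post:

    from collections import defaultdict

    foo = [1, 2, 1, 2, 1, 2, 1, 1]
    d = defaultdict(int)   # missing keys start at 0
    for v in foo:
        d[v] += 1
    # dict(d) == {1: 5, 2: 3}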

  6. What about itertools?

    from itertools import groupby
    from operator import itemgetter
    foo = [1, 2, 1, 2, 1, 2, 1, 1]
    x = dict([(a,len(list(b))) for a,b in groupby(sorted(foo))])
    print(x)

  7. Anonymous, July 25, 2008

    you wanted O(n)...

    foo = [1, 2, 1, 2, 1, 2, 1, 1]
    d = {}
    [d.__setitem__(f,1+d.get(f,0)) for f in foo]

  8. Thanks for the great tips.

    I will update the post with some of these better approaches.

  9. Anonymous, July 25, 2008

    (where "additions" in my comment meant "additions to the dictionary", not "additions to the counter")

    As for Walter's "groupby" solution, it combines an O(n log n) sort with at least three O(n) passes over the data (copying the list, grouping, and getting the length of each group), but these operations are all written in C, so I wouldn't be surprised if the approach could beat a single-pass O(n) Python solution even for relatively large n. Worth benchmarking, at least.

  10. Anonymous, July 25, 2008

    Dude, your original snippet is by far the best solution here. Easy to read, works. I'll take that over hard to read, runs faster.

  11. Anonymous, July 25, 2008

    On what machines can the original possibly run faster?

  12. Anonymous, July 25, 2008

    I'll go with your original or the defaultdict solution. Why? Because they run fast (see Fredrik) and they are the most readable solutions.

    Using map, zip and set in combination is nice for academic questions, but in cases where you have to share the code with others, it most often leads to misunderstandings and bugs in the code.

  13. Anonymous, July 25, 2008

    There is something that annoys me: the set(foo) is a really complex operation.

    It needs O(n) to iterate over the list and (if the set is a binary tree or something like that) O(log(m)) to check whether the value is already in the tree and insert it (less when the tree is not full and more when the tree is not balanced).

    So the first set(foo) is O(n log(m)). The iteration is O(m), the count is O(n), and the insertion into the dict is O(1).

    Then you have something like O(mn + n log(m)). Like Fredrik said, it's ~O(mn). (But with small values of m and n, it's not really the same thing: 2*infinity == infinity, but 2 hours is far more than one. ;)

    But I do not agree with Fredrik that the dictionary lookup is O(1). I don't know how it's implemented in Python, but I don't think they solve this in less than O(log(n)).

    For me Fredrik's solution is the better one, and it's O(n log(m)) in the worst case (O(n) for the iteration and O(log(m)) for the lookup and increment).

    The problem with all that mathematical stuff is that it only becomes true for large values of n/m and does not take into account the difference between C builtins and Python code, which can be enormous for small values of n/m.

    ;)

  14. Anonymous, July 26, 2008

    "But I do not agree with frederik, the dictionary lookup is not O(1). I don't know how it's implemented in python, but I don't think they solve this in less than O(log(n))"

    Python dictionaries and sets are hash tables, with open addressing and overallocation to keep the load factor below 2/3. Hash and perturbation algorithms are carefully chosen to minimize the number of collisions for a large number of real-life scenarios. In other words, O(1) lookup and amortized O(1) growth in practice. No trees anywhere in sight.

  15. I'll benchmark some of these different solutions when I get a chance.

    Interesting stuff.

  16. Anonymous, July 27, 2008

    >>> d = {}
    >>> for v in foo:
    ...     d[v] = d.get(v, 0) + 1


    No conversions required, a single pass through the iterable, no special foo required. And, while I haven't benched it, I would be surprised if any of the other variants are faster.

  17. Anonymous, July 28, 2008

    I prefer the defaultdict(int); d[k] += 1 accumulation algorithm. I find it the easiest to follow because it doesn't create additional temporary data structures.

  18. Hi Corey,

    Interesting exercise.

    Could be done this way too:

    foo = [1,2,1,2,1,2,1,1]
    d = {}
    for v in foo: d[v] = 0
    for v in foo: d[v] += 1

    Can also be achieved by simulating the ternary operator:

    foo = [1,2,1,2,1,2,1,1]
    d = {}
    for v in foo:
        d[v] = d.has_key(v) and (d[v] + 1) or 1

    Regards,
    Rahul Verma

  19. The test results I got were pretty interesting, and kind of all over the map, with the exception that set-based methods were uniformly bad:

    test        narrow     wide     vwide
    dictget      0.231    0.278     0.283
    defaultd     0.130    0.326     0.419
    itertool     0.145    1.257     1.967
    makeset      0.056   11.474    44.859
    ternary      0.152    0.220     0.148
    fp           0.073   11.524    44.994
    generator    0.053   11.705    45.133

    I put up the script I used to get these results at: http://billmill.org/static/files/uniq_count_bench.py.
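
    For reference, a rough sketch of how a benchmark like this can be set up with timeit. This is just an illustration, not the linked script; the function names, list sizes, and the narrow/wide value ranges are arbitrary choices:

    # Illustrative benchmark sketch: compares two of the approaches on narrow vs. wide data.
    import random
    import timeit
    from collections import defaultdict

    def count_set(foo):
        # the set/count approach from the post
        return dict((i, foo.count(i)) for i in set(foo))

    def count_defaultdict(foo):
        # the single-pass defaultdict approach
        d = defaultdict(int)
        for v in foo:
            d[v] += 1
        return d

    narrow = [random.randint(0, 1) for _ in range(10000)]     # few distinct values
    wide = [random.randint(0, 10000) for _ in range(10000)]   # many distinct values

    for name, data in [("narrow", narrow), ("wide", wide)]:
        for fn in (count_set, count_defaultdict):
            t = timeit.timeit(lambda: fn(data), number=10)
            print("%s %s %.3f" % (name, fn.__name__, t))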

    > with the exception that set-based methods were uniformly bad

    Actually the set method worked relatively well when there are only a couple of possible values. The Python-level loop only runs twice (once per distinct value), and the set and count calls run at C speed.

    For a group of very sparse values the ternary is fastest. I guess because in most cases it does a plain d[v] = 1 instead of the d[v] = 0; d[v] += 1 dance.

    The lesson here is that you can make any version of the algorithm look good or bad by choosing the data to feed them.

  21. Anonymous, July 01, 2010

    Very interesting,

    So is there a consensus for a faster method assuming large datasets?

    Thanks

  22. Anonymous, May 21, 2011

    d = {}
    for item in foo:
        try:
            d[item] += 1
        except KeyError:
            d[item] = 1
