Chapter 13  Case study: data structure selection

13.1  DSU

DSU stands for “decorate, sort, undecorate” and refers to a pattern that is often useful for sorting lists according to some attribute of the elements.

For example, if you have a dictionary that maps from mothers to lists of their children, you might want to sort the mothers by their number of children. Here is a function that does that:

def sort_by_children(mothers):
    t = []
    for mother, children in mothers.items():
       t.append((len(children), mother))

    t.sort()

    res = []
    for number, mother in t:
        res.append(mother)
    return res

The first loop assigns each mother to mother and each list of children to children. It builds a list of tuples, where each tuple is the number of children and a mother.

sort compares the first element of each tuple, the number of children, and considers the second element only to break ties. The result is a list of tuples sorted in increasing order by number of children.

The second loop traverses the list of tuples and builds a list of mothers, sorted by parity (which in this context means number of children).

This pattern is called “decorate, sort, undecorate” because the first loop “decorates” the list of mothers by pairing each mother with her parity, and the last loop “undecorates” the sorted list by removing the parity information.
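
As a quick check, here is the function applied to a small made-up dictionary (the names are hypothetical):

mothers = {'Ann': ['Bea', 'Cal'], 'Dee': ['Eli'], 'Fay': ['Gus', 'Hal', 'Ivy']}
print sort_by_children(mothers)    # prints ['Dee', 'Ann', 'Fay']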

13.2  Word frequency analysis

As usual, you should at least attempt the following exercises before you read my solutions.

Exercise 1   Write a program that reads a file, breaks each line into words, strips whitespace and punctuation from the words, and converts them to lowercase.
Exercise 2   Go to Project Gutenberg (gutenberg.net) and download your favorite out-of-copyright book in plain text format.

Modify your program from the previous exercise to read the book you downloaded, skip over the header information at the beginning of the file, and process the rest of the words as before.

Then modify the program to count the total number of words in the book, and the number of times each word is used.

Print the number of different words used in the book. Compare different books by different authors, written in different eras. Which author uses the most extensive vocabulary?

Exercise 3   Modify the program from the previous exercise to print the 20 most frequently-used words in the book.
Exercise 4   Modify the previous program to read a word list (see Section 9.1) and then print all the words in the book that are not in the word list. How many of them are typos? How many of them are common words that should be in the word list, and how many of them are really obscure?

13.3  Random numbers

Most computer programs do the same thing every time they execute, given the same inputs, so they are said to be deterministic. Determinism is usually a good thing, since we expect the same calculation to yield the same result. For some applications, though, we want the computer to be unpredictable. Games are an obvious example, but there are more.

Making a program truly nondeterministic turns out to be not so easy, but there are ways to make it at least seem nondeterministic. One of them is to use algorithms that generate pseudorandom numbers. Pseudorandom numbers are not truly random because they are generated by a deterministic computation, but just by looking at the numbers it is all but impossible to distinguish them from random.

The random module provides functions that generate pseudorandom numbers (which I will simply call “random” from here on).

The function random returns a random float between 0.0 and 1.0 (including 0.0 but not 1.0). Each time you call random, you get the next number in a long series. To see a sample, run this loop:

import random

for i in range(10):
    x = random.random()
    print x
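
Because the sequence is generated by a deterministic computation, you can reproduce it by seeding the generator with the same starting value:

random.seed(17)
a = [random.random() for i in range(3)]
random.seed(17)
b = [random.random() for i in range(3)]
print a == b    # True: same seed, same sequence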

The function randint takes parameters low and high and returns an integer between low and high (including both).

>>> random.randint(5, 10)
5
>>> random.randint(5, 10)
9

To choose an element from a sequence at random, you can use choice:

>>> t = [1, 2, 3]
>>> random.choice(t)
2
>>> random.choice(t)
3

The random module also provides functions to generate random values from continuous distributions including Gaussian, exponential, gamma, and a few more.
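
For example, gauss takes a mean and a standard deviation, expovariate takes a rate, and gammavariate takes shape parameters; here is a small sample of calls (the parameter values are arbitrary):

print random.gauss(0, 1)           # Gaussian with mean 0, standard deviation 1
print random.expovariate(1.0)      # exponential with rate 1.0
print random.gammavariate(2, 1)    # gamma with alpha 2, beta 1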

Exercise 5   Write a function named choose_from_hist that takes a histogram as defined in Section 11.1 and returns a random value from the histogram, chosen with probability in proportion to frequency. For example, for this histogram:
>>> t = ['a', 'a', 'b']
>>> h = histogram(t)
>>> print h
{'a': 2, 'b': 1}

your function should return 'a' with probability 2/3 and 'b' with probability 1/3.

13.4  Word histogram

Here is a program that reads a file and builds a histogram of the words in the file:

import string

def process_file(filename):
    h = {}
    fp = open(filename)
    for line in fp:
        process_line(line, h)
    return h

def process_line(line, h):
    line = line.replace('-', ' ')
    
    for word in line.split():
        word = word.strip(string.punctuation + string.whitespace)
        word = word.lower()

        h[word] = h.get(word, 0) + 1

hist = process_file('emma.txt')

This program reads emma.txt, which contains the text of Emma by Jane Austen.

process_file loops through the lines of the file, passing them one at a time to process_line. The histogram h is being used as an accumulator.

process_line uses the string method replace to replace hyphens with spaces before using split to break the line into a list of strings. It traverses the list of words and uses strip and lower to remove punctuation and convert to lower case. (It is a shorthand to say that strings are “converted;” remember that strings are immutable, so methods like strip and lower return new strings.)

Finally, process_line updates the histogram by creating a new item or incrementing an existing one.

To count the total number of words in the file, we can add up the frequencies in the histogram:

def total_words(h):
    return sum(h.values())

The number of different words is just the number of items in the dictionary:

def different_words(h):
    return len(h)

Here is some code to print the results:

print 'Total number of words:', total_words(hist)
print 'Number of different words:', different_words(hist)

And the results:

Total number of words: 161073
Number of different words: 7212

13.5  Most common words

To find the most common words, we can apply the DSU pattern; most_common takes a histogram and returns a list of frequency-word tuples, sorted in reverse order by frequency:

def most_common(h):
    t = []
    for key, value in h.items():
        t.append((value, key))

    t.sort()
    t.reverse()
    return t

Here is a loop that prints the ten most common words:

t = most_common(hist)
print 'The most common words are:'
for freq, word in t[0:10]:
    print word, '\t', freq

And here are the results from Emma:

The most common words are:
to      5242
the     5204
and     4897
of      4293
i       3191
a       3130
it      2529
her     2483
was     2400
she     2364

13.6  Optional arguments

We have seen built-in functions that take a variable number of arguments. For example, range can take one, two, or three arguments.

It is possible to write user-defined functions with optional arguments, too. For example, here is a function that prints the most common words in a histogram:

def print_most_common(hist, num=10):
    t = most_common(hist)
    print 'The most common words are:'
    for freq, word in t[0:num]:
        print word, '\t', freq

The first parameter is required; the second is optional. The default value of num is 10.

If you only provide one argument:

print_most_common(hist)

num gets the default value. If you provide two arguments:

print_most_common(hist, 20)

num gets the value of the argument instead. In other words, the optional argument overrides the default value.

If a function has both required and optional parameters, all the required parameters have to come first, followed by the optional ones.

13.7  Dictionary subtraction

Finding the words from the book that are not in the word list from words.txt is a problem you might recognize as set subtraction; that is, we want to find all the words from one set (the words in the book) that are not in another set (the words in the list).

subtract takes dictionaries d1 and d2 and returns a new dictionary that contains all the keys from d1 that are not in d2. Since we don't really care about the values, we set them all to None.

def subtract(d1, d2):
    res = {}
    for key in d1:
        if key not in d2:
            res[key] = None
    return res

To find the words in the book that are not in words.txt, we can use process_file to build a histogram for words.txt, and then subtract:

words = process_file('words.txt')
diff = subtract(hist, words)

print "The words in the book that aren't in the word list are:"
for word in diff.keys():
    print word,

Here are some of the results from Emma:

The words in the book that aren't in the word list are:
 rencontre jane's blanche woodhouses disingenuousness 
friend's venice apartment ...

Some of these words are names and possessives. Others, like “rencontre,” are no longer in common use. But a few are common words that should really be on the list!
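
Since this really is set subtraction, another option (not used in this chapter) is Python's built-in set type, which supports subtraction directly; a minimal sketch:

diff = set(hist) - set(words)    # the keys of hist that are not keys of words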

13.8  Random words

To choose a random word from the histogram, the simplest algorithm is to build a list with multiple copies of each word, according to the observed frequency, and then choose from the list:

def random_word(h):
    t = []
    for word, freq in h.items():
        t.extend([word] * freq)

    return random.choice(t)

The expression [word] * freq creates a list with freq copies of the string word (actually, to be more precise, the elements are references to the same string). The extend method is similar to append except that the argument is a sequence.

This algorithm works, but it is wildly inefficient; each time you choose a random word, it rebuilds the list, which is as big as the original book.
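
One way to do better (a sketch, not developed further in this chapter) is to compute the cumulative sums of the frequencies once, then use the standard bisect module to find where a random target falls:

import random
import bisect

def make_cumulative(h):
    # build parallel lists of words and cumulative frequencies
    words = []
    totals = []
    total = 0
    for word, freq in h.items():
        total += freq
        words.append(word)
        totals.append(total)
    return words, totals

def random_word_fast(words, totals):
    # pick a target in [1, total]; word i covers the range (totals[i-1], totals[i]]
    target = random.randint(1, totals[-1])
    return words[bisect.bisect_left(totals, target)]

make_cumulative runs once; after that, each choice takes time proportional to the logarithm of the number of different words, rather than rebuilding a list as big as the book.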

If you generate a series of words from the book, you can get a sense of the vocabulary, but it probably won't make much sense:

this the small regard harriet which knightley's it most things

The next section is about generating random text that makes more sense.

13.9  Markov analysis

A series of random words seldom makes a sentence because there is no correlation between successive words. For example, in a real sentence you would expect an article like “the” to be followed by an adjective or a noun, and probably not a verb or adverb.

One way to measure these kinds of relationships is Markov analysis, which characterizes, for a given sequence of words, the probability of the word that comes next. For example, the song Eric, the Half a Bee begins:

Half a bee, philosophically, Must, ipso facto, half not be. But half the bee has got to be Vis a vis, its entity. D'you see?

But can a bee be said to be Or not to be an entire bee When half the bee is not a bee Due to some ancient injury?

In this text, the phrase “half the” is always followed by the word “bee,” but the phrase “the bee” might be followed by either “has” or “is”.

The result of Markov analysis is a mapping from each prefix (like “half the” and “the bee”) to all possible suffixes (like “has” and “is”).
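
For example, if we downcase the words, part of the mapping for this text might look like this (using tuples as prefixes and lists as suffix collections, one of several possible representations):

{('half', 'the'): ['bee', 'bee'],
 ('the', 'bee'): ['has', 'is']}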

Given this mapping, you can generate a random text by starting with any prefix and choosing at random from the possible suffixes. Next, you can combine the end of the prefix and the new suffix to form the next prefix, and repeat.

For example, if you start with the prefix “Half a,” then the next word has to be “bee,” because the prefix only appears once in the text. The next prefix is “a bee,” so the next suffix might be “philosophically,” “be” or “due.”

In this example the length of the prefix is always two, but you can do Markov analysis with any prefix length. The length of the prefix is called the “order” of the analysis.

Exercise 6   Write a program to read a text from a file and perform Markov analysis. The result should be a dictionary that maps from prefixes to a collection of possible suffixes. The collection might be a list, tuple, or dictionary; it is up to you to make an appropriate choice. You can test your program with prefix length two, but you should write the program in a way that makes it easy to try other lengths.
Exercise 7   Add a function to the previous program to generate random text based on the Markov analysis. Here is an example from Emma with prefix length 2:
He was very clever, be it sweetness or be angry, ashamed or only amused, at such a stroke. She had never thought of Hannah till you were never meant for me?" "I cannot make speeches, Emma:" he soon cut it all himself.

For this example, I left the punctuation attached to the words. The result is almost syntactically correct, but not quite. Semantically, it almost makes sense, in places, but not quite.

What happens if you increase the prefix length? Does the random text make more sense?

Exercise 8   Once your program is working, you might want to try a mash-up: if you analyze text from two or more books, the random text you generate will blend the vocabulary and phrases from the sources in interesting ways.

13.10  Data structures

Using Markov analysis to generate random text is fun, but there is also a point to this exercise: data structure selection.

Lists, dictionaries and tuples, and their combinations, like lists of tuples, are known as data structures. When you design programs, one of the decisions you have to make is which data structures to use.

For example, in your solution to the previous exercises, you had to choose a data structure for the prefixes, a data structure for the collection of possible suffixes, and a data structure for the mapping from each prefix to the collection of possible suffixes.

The last one is the easiest; the only mapping type we have seen is a dictionary, so it is the natural choice.

For the prefixes, the most obvious options are string, list of strings, or tuple of strings. For the suffixes, one option is a list; another is a histogram (dictionary that maps strings to integers).

How should you choose? The first step is to think about the operations you will need to implement for each data structure. For the prefixes, we need to be able to remove words from the beginning and add to the end. For example, if the current prefix is “Half a,” and the next word is “bee,” you need to be able to form the next prefix, “a bee.”

Your first choice might be a list, since it is easy to add and remove elements, but we also need to be able to use the prefixes as keys in a dictionary, so that rules out lists. With tuples, you can't append or remove, but you can use the addition operator to form a new tuple:

def shift(prefix, word):
    return prefix[1:] + (word,)

shift takes a tuple of words, prefix, and a string, word, and forms a new tuple that has all the words in prefix except the first, and word added to the end.
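
For example:

>>> shift(('Half', 'a'), 'bee')
('a', 'bee')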

For the collection of suffixes, the operations we need to perform include adding a new suffix (or increasing the frequency of an existing one), and choosing a random suffix.

Adding a new suffix is equally easy for the list implementation or the histogram. Choosing a random element from a list is easy; choosing from a histogram is harder and not very efficient (see Section 13.8).

So far we have been talking mostly about ease of implementation, but there are other factors to consider in choosing data structures. One is run time. Sometimes there is a theoretical reason to expect one data structure to be faster than another; for example, I mentioned that the in operator is faster for dictionaries than for lists, at least when the number of elements is large.

But often you don't know ahead of time which implementation will be faster. One option is to implement both of them and see which is better. This approach is called benchmarking. A practical alternative is to choose the data structure that is easiest to implement, and then see if it is fast enough for the intended application. If so, there is no need to go on. If not, there are tools, like the profile module, that can identify the places in a program that take the most time.
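
For example, cProfile (a faster version of profile with the same interface) can report how much time each function takes; a minimal sketch:

import cProfile
cProfile.run("process_file('emma.txt')")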

The other factor to consider is storage space. For example, using a histogram for the collection of suffixes might take less space because you only have to store each word once, no matter how many times it appears in the text. In some cases, saving space can also make your program run faster, and in the extreme, your program might not run at all if you run out of memory. But for many applications, space is a secondary consideration after run time.

One final thought: in this discussion, I have implied that we would use one data structure for both analysis and generation. But since these are separate phases, it would also be possible to use one structure for analysis and then convert to another structure for generation. This would be a net win if the time saved during generation exceeded the time spent in conversion.
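
For example, if the analysis phase stored each suffix collection as a histogram, you could flatten the histograms into lists once, before generation, so that random.choice is fast during generation; a sketch, assuming suffix_map maps prefixes to histograms (the name is hypothetical):

def flatten(suffix_map):
    # convert {prefix: {suffix: freq}} to {prefix: [suffix, suffix, ...]}
    res = {}
    for prefix, h in suffix_map.items():
        t = []
        for suffix, freq in h.items():
            t.extend([suffix] * freq)
        res[prefix] = t
    return res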

13.11  Debugging

Scaling up and scaling down: if a program that runs on a large dataset misbehaves, shrink the input until you can check the results by hand, and scale back up only once the program works.

If you are looking for a needle in a haystack, choose a small haystack.

13.12  Glossary

DSU:
Abbreviation of “decorate-sort-undecorate,” a processing pattern that involves building a list of tuples, sorting, and (often) extracting part of the result.
deterministic:
Pertaining to a program that does the same thing each time it runs, given the same inputs.
pseudorandom:
Pertaining to a sequence of numbers that appear to be random, but are generated by a deterministic program.
default value:
The value given to an optional parameter if no argument is provided.
override:
To replace a default value with an argument.
data structure:
Any collection of values, including sequences and dictionaries.
benchmarking:
The process of choosing between data structures by implementing alternatives and testing them on a sample of the possible inputs.
