Digital Oulipo: Programming Potential Literature

I am pleased to announce the publication of my article, "Digital Oulipo: Programming Potential Literature," in the current issue of Digital Humanities Quarterly! This article recounts the project I completed within the Princeton Center for Digital Humanities: its design, difficulties, and results (both expected and unexpected). Most importantly, in it, I offer my two cents on the use of exploratory programming in digital humanities scholarship. Enjoy!

http://digitalhumanities.org/dhq/vol/11/3/000325/000325.html

S+7 through NLTK

S+7

One of the earliest Oulipian procedures is Jean Lescure’s S+7. While its status as a constraint is debatable (originally called a method, sometimes referred to as a procedure), it is one of the most cited and perhaps also least understood of the Oulipo’s long list of techniques.

To begin, S stands for "substantif" (noun), but it can theoretically be replaced with any other part of speech. One of the founders of the Oulipo, François Le Lionnais, pointed out that S+7 is a more specific version of m±n, where m is a "meaningful" part of speech and n is any integer. Carrying out an S+7 or any of its variations should be a purely mechanical procedure. All an author needs are two things: a pre-written text and a dictionary. The author then identifies all the nouns and replaces each with the noun that comes seven entries later in a dictionary of their choosing. The result therefore depends upon the original text and the dictionary chosen, but not much else.
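To make the mechanics concrete, here is a minimal sketch of the procedure in Python. The word list, noun set, and sentence are toy illustrations of my own, so the output will not match the example below, which was generated with a full dictionary and automatic noun identification.

```python
# A minimal sketch of the S+7 procedure: replace each noun in a text with
# the noun seven entries later in a chosen dictionary. The word list, the
# noun set, and the sentence below are toy illustrations.
dictionary = sorted([
    "beginning", "bench", "earth", "economist", "god", "governor",
    "heaven", "help", "island", "jar", "kettle", "lamp", "mirror",
    "night", "ocean", "pencil",
])

nouns = {"beginning", "god", "heaven", "earth"}   # identified by hand here

def s_plus_7(word, dictionary, offset=7):
    """Return the dictionary entry `offset` places after `word`, wrapping around."""
    if word not in dictionary:
        return word
    i = dictionary.index(word)
    return dictionary[(i + offset) % len(dictionary)]

text = "In the beginning God created the heaven and the earth"
print(" ".join(
    s_plus_7(w.lower(), dictionary) if w.lower() in nouns else w
    for w in text.split()
))
```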

Example

In the bench Governor created the help and the economist.
And the economist was without forum, and void; and day was upon the failure of the deep. And the Spring of Governor moved upon the failure of the weddings.
And Governor said, Let there be link: and there was link.

(generated on http://www.spoonbill.org/n+7/)

The interest of this particular S+7, and indeed of most of the Oulipo's best-loved examples, is that the original text (Genesis, from the Bible) is extremely recognizable. It isn't the dictionary that makes the result funny; rather, even with the noun substitutions, the original text remains very much audible beneath the unexpected new words. While the choice of dictionary could have produced more pointed substitutions, the Oulipo has not experimented much with dictionaries: they have used big ones and small ones (and, in the case of one Queneau S+7, a culinary one).

Natural Language Processing

For my digital humanities project, I am making my own S+7 program using NLTK with Python. While my earlier programming efforts were difficult for a beginner, trying my hand at NLTK makes me feel like I've reached another level entirely. Going through the NLTK online textbook has been very helpful and has reinforced the programming knowledge I have already gained through working on this project. Natural Language Processing has also helped me better understand the early constraints of the Oulipo, greatly contributing to my chapter on algebra, which includes analysis of S+7 and its variations, as well as other methods based on simple substitutions, counting, or operations.

I am pleased to report that I am putting the final touches on this last program, which will allow the reader to generate a dictionary based on one author’s vocabulary (the one I am currently working with takes all the nouns from Edgar Allan Poe’s collected poetry) and substitute those nouns into a short excerpt from several other recognizable texts (Moby Dick, The Declaration of Independence, Genesis, A Tale of Two Cities, and The Raven).
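As a rough sketch of how such a program might work (this is my own simplification, not the project code; a line of Poe and a Genesis excerpt stand in for the full corpora), one could harvest the nouns with NLTK and substitute them into the target, keeping plural nouns plural along the way:

```python
# A simplified sketch (not the actual project code) of an author-specific
# S+7 with NLTK: harvest nouns from a source text, then swap them into a
# target excerpt, keeping plural nouns plural. Requires the NLTK data
# packages "punkt" and "averaged_perceptron_tagger".
import nltk

def harvest_nouns(text):
    """Return the distinct nouns of `text`, lowercased, in alphabetical order."""
    tagged = nltk.pos_tag(nltk.word_tokenize(text))
    return sorted({w.lower() for w, tag in tagged if tag.startswith("NN")})

def pluralize(noun):
    """Very naive pluralization; misses irregular forms."""
    if noun.endswith(("s", "x", "z", "ch", "sh")):
        return noun + "es"
    if noun.endswith("y") and noun[-2:-1] not in "aeiou":
        return noun[:-1] + "ies"
    return noun + "s"

source = ("Once upon a midnight dreary, while I pondered, weak and weary, "
          "over many a quaint and curious volume of forgotten lore")
target = "In the beginning God created the heaven and the earth."

lexicon = harvest_nouns(source)   # the author-specific "dictionary"
result = []
for word, tag in nltk.pos_tag(nltk.word_tokenize(target)):
    if tag.startswith("NN") and lexicon:
        # Substitute the noun seven entries further along in the lexicon
        # (falling back to position 0 when the noun is not in it).
        i = lexicon.index(word.lower()) if word.lower() in lexicon else 0
        word = lexicon[(i + 7) % len(lexicon)]
        if tag in ("NNS", "NNPS"):        # keep plural nouns plural
            word = pluralize(word)
    result.append(word)
print(" ".join(result))
```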

Once I have worked out the kinks in my pluralizing function (if the original noun is plural, I need the substituted one to be plural as well), I will publish the code online in my GitHub repository as well as here on CORE. While I do not believe that this code is particularly useful, the process of creating it was invaluable to me as a scholar and a programmer. I now understand the Oulipo, their elementary procedures, and their computer efforts much better. Programming texts that seem gimmicky but are hardly ever "read" (such as the Cent mille milliards de poèmes) has forced me to design new ways of reading them. I have also gained new insights into the digital humanities and how it can be used not to produce an online archive or digital editions of texts (though I have created interactive, digital editions of certain texts and procedures), but rather to open eyes to the possibilities in such experimental fiction. Works written using new methods must be analyzed using new methods. In that sense, it is the intellectual process of carrying out this project, and not the code itself, that I will take with me.

Digital Humanities Summer School

Thanks to a travel grant from the Center for Digital Humanities @ Princeton, I have just completed the intensive week-long digital humanities summer school at the OBVIL laboratory at La Sorbonne. OBVIL stands for the "Observatoire de la vie littéraire," or the observatory of literary life. After my Digital Oulipo project and continued work on the Oulipo Archival Project, I cannot agree more with the metaphor of an observatory. The digital humanities allow researchers to examine texts from a distance, which complements the traditional literary scholarship of close reading. Now more than ever, I believe humanities scholarship needs both perspectives to succeed.

In this intensive and rich program, I was able to continue developing the XML-TEI skills I had been learning through the Oulipo Archival Project. Furthermore, I discovered exciting new software such as TXM, Phoebus, Médite, and Iramuteq, and learned how they can be used to study large corpuses of text. My favorite part of this program was that it was a specifically French introduction to European developments in the digital humanities, allowing me to broaden my perspective on the discipline.

Here is a brief summary of what I learned day by day. I am happy to answer any specific questions by email. Feel free to contact me if you want to know more about the OBVIL summer school, the specific tools discussed there, or just about digital humanities.

Day 1

The first day of the summer school was a general introduction to the history of digital humanities methods and to how to establish a corpus to study using these digital methods. It was especially interesting for me to learn the history of methods I have been experimenting with for months. I had no idea that the Text Encoding Initiative (TEI) had been launched in 1987, before I was even born, as a new form of "literate" programming.

Surprisingly, the most useful workshop was a basic introduction to the various states of digital texts. While I knew most of the types of digital documents already, as a natural byproduct of using computers in my day-to-day life, it was useful to discuss the specific terminology (in French, even!) used to describe these various forms of texts and the advantages and disadvantages of each. For instance, while I knew that some PDFs are searchable and others are not, it was still useful to discuss how such documents are created and how to move from one format to another.

Day 2

The second day of the summer school began by asking the not-so-simple question of "what's in a word?" In the following sessions, we learned about everything from how to analyze word frequencies in texts to how to process natural language automatically, through tokenization (segmenting text into elementary units), tagging (determining the grammatical characteristics of those units), and lemmatization (identifying the base form of words). We then had specific workshops meant to introduce us to ready-made tools for processing language automatically. We did not discuss NLTK, however, which I am currently using to program the S+7 method for my Digital Oulipo project, most likely because using NLTK requires a basic understanding of programming in Python, which was out of the scope of this short summer school.
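For anyone curious what those three steps look like in code, here is a small illustration with NLTK (my own example; the summer school itself relied on other ready-made tools):

```python
# Tokenization, tagging, and lemmatization with NLTK, illustrating the
# three steps described above. Requires the NLTK data packages "punkt",
# "averaged_perceptron_tagger", and "wordnet".
import nltk
from nltk.stem import WordNetLemmatizer

sentence = "The readers were counting the words of the tales"

tokens = nltk.word_tokenize(sentence)        # segment the text into units
tagged = nltk.pos_tag(tokens)                # label each unit's part of speech
lemmatizer = WordNetLemmatizer()
lemmas = [lemmatizer.lemmatize(word.lower()) for word in tokens]  # base forms

print(tagged)   # e.g. [('The', 'DT'), ('readers', 'NNS'), ('were', 'VBD'), ...]
print(lemmas)   # e.g. ['the', 'reader', 'were', 'counting', 'the', 'word', ...]
```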

The second half of the day was an introduction to text encoding: how it works and why it is useful for analyzing large corpuses. While I was already familiar with everything covered here, it was still interesting to hear about applications of TEI to something other than the Oulipo archive, especially highly structured texts such as seventeenth-century French theater.

Day 3

This day was extremely technical. First we looked at co-occurrences of characters in Phèdre as an example of network graphs. Since the main technical work had been done for us, it was somewhat frustrating to be confronted with a result that we had no part in creating. While, as a former mathematician, I knew how to read the content of a network graph, many other students did not and took its spatial organization as meaningful. This demonstrates a potential pitfall in digital humanities research: one needs a proper technical understanding of the tools and how they function in order to interpret the results accurately.

In addition to network graphs, we also discussed how to use the XPath feature in Oxygen (an XML editor) to count various elements in classical theater, such as characters' spoken lines, verses, or the scenes in which characters take part. Once again, it was interesting to see how a computer could take over such tedious manual labor and how it could potentially be of interest to a scholar, but interpreting such statistical aspects of large corpuses of text is tricky work for someone whose last statistics class was in high school. This gave me the idea to create a course of workshops that would properly teach students how to use these tools and understand the results.
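Outside Oxygen, the same kind of counting can be scripted. Here is a sketch using Python's lxml against a hypothetical TEI-encoded play; the file name is invented, and the element names <sp>, <l>, and @who follow common TEI practice but are assumptions about the encoding at hand.

```python
# Counting elements in a TEI-encoded play with XPath via lxml, as a rough
# equivalent of the Oxygen exercises described above. The element names
# (<sp>, <l>, @who) follow common TEI usage but are assumptions about the
# specific encoding; "phedre.xml" is a hypothetical file.
from lxml import etree

NS = {"tei": "http://www.tei-c.org/ns/1.0"}
tree = etree.parse("phedre.xml")

speeches = tree.xpath("count(//tei:sp)", namespaces=NS)
verses = tree.xpath("count(//tei:l)", namespaces=NS)
phedre_speeches = tree.xpath("count(//tei:sp[@who='#phedre'])", namespaces=NS)

print(f"{int(speeches)} speeches, {int(verses)} verse lines")
print(f"{int(phedre_speeches)} speeches given by Phèdre")
```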

Day 4

This was another ready-made-tool workshop, in which we discussed using OBVIL's programs Médite and Phoebus to edit online texts more efficiently and to find differences between editions. This was very interesting, but probably more useful for publishing houses than for graduate students.

The rest of the day was meant to introduce us to textometry using TXM, but there were so many technical issues with the computers provided by the university that we spent the entire time installing the software on our personal laptops. This was not only frustrating, but ironic. One would think that a summer school in digital humanities run mostly by computer scientists would not have such technical difficulties.

Day 5

The final day of the program (Friday the 9th was devoted to discussing our personal projects with the staff) continued the work on TXM. In fact, since my section had had such issues the previous day, I decided to switch into the other group. This was a good decision, as the head of that session took a more pedagogical approach, assigning a series of small exercises to introduce us to TXM. By experimenting with tokenization using TreeTagger and with word concordances, we were able to begin writing a bit of code that could parse a text and find specific groups of words.

This introduction was practical and hands-on, but I wish there had been more. While I now know roughly how to use TXM to analyze texts, I do not have experience coming up with the questions that such techniques might help me answer. This seems to me the key to effective digital humanities scholarship: asking a solvable question and knowing which tools can help you resolve it.

Digital Oulipo: Graph Theory for Literary Analysis

Raymond Queneau published Un conte à votre façon in 1967, in the midst of the excitement over Russian formalism spurred by the translations of Vladimir Propp and his contemporaries. Propp's premise is that all folktales can be broken down into 31 basic narrative functions that follow an initial situation. "À votre façon" (as you like it) refers to the potential reader, who can compose the tale as he or she sees fit, given a set of binary choices provided by Queneau. This "tree structure" comes from the mathematical field of graph theory, which was being developed at the time by fellow Oulipian Claude Berge. Queneau's tale can, incidentally, be represented as a graph.

[Figure: a graphical representation of the text, designed by Queneau and published in a collected volume of the Oulipo]

Queneau's story initially gives the reader a choice between the story of three little peas, three big skinny beanpoles, or three average mediocre bushes. The following choices are all binary, and mostly stark oppositions. This is also a feature of algorithms, in the sense that binary choices must provide mutually exclusive alternatives; if not, the system would be contradictory. For instance, either the reader chooses to read the tale of the three peas or he does not. Should the reader prefer not to read this first option, he or she will find that the two alternatives offer rather meager results. Refusing all three terminates the program almost immediately.

As with these preliminary choices, most nodes in Queneau’s short tale are superficial or contradictory, giving the reader only an illusion of choice. While the story and its various itineraries can be visualized as a flow chart, the story leaves very little freedom to the reader. The genius of the text lies in the simultaneous display of all possible options, allowing the reader unparalleled insight into the structure (and shortcomings) of this experimental tale.

At the end of Learn Python the Hard Way, I was able to make my own Un conte à votre façon program that allowed a reader to move through a “Map” with a pre-established number of scenes. While I was proud of myself for writing a program, I was still not satisfied. I wanted the graph and I wanted it to interact with the program somehow.
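For the curious, that first version looked roughly like the sketch below, in the spirit of the book's "Map" exercise; the scene names and abbreviated prompts are mine, not Queneau's exact text.

```python
# A stripped-down sketch of a branching-story engine in the spirit of the
# "Map" exercise from Learn Python the Hard Way. The prompts are abridged
# paraphrases of Queneau's opening choices.
SCENES = {
    "start": ("Do you wish to hear the story of the three little peas? (yes/no)",
              {"yes": "peas", "no": "beanpoles"}),
    "peas": ("The three little peas were dreaming... Continue? (yes/no)",
             {"yes": "dream", "no": "end"}),
    "beanpoles": ("Would you prefer the three big skinny beanpoles? (yes/no)",
                  {"yes": "dream", "no": "end"}),
    "dream": ("What were they dreaming of? ... (the tale goes on)", {}),
    "end": ("In that case the tale is finished.", {}),
}

def play(scene="start"):
    """Print the current prompt and follow the reader's choices."""
    prompt, choices = SCENES[scene]
    print(prompt)
    while choices:
        answer = input("> ").strip().lower()
        if answer in choices:
            return play(choices[answer])
        print("Please answer one of:", ", ".join(choices))
    return scene

if __name__ == "__main__":
    play()
```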

My technical lead, Cliff Wulfman, introduced me to graphviz, an open-source graph (network) visualization tool with a freely available Python interface. In graph theory, a graph is defined as a set of nodes and edges. In graphviz as well, in order to make a graph, I had to define all the requisite nodes and edges. The program then generates a spatially efficient visualization of the graph. As an exercise, I made a graph of the famous graph-theory problem of the Bridges of Königsberg.
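That exercise looked something like the following sketch (a reconstruction, not my original code); the node labels are my own shorthand for the four landmasses.

```python
# The Seven Bridges of Königsberg drawn with the graphviz package:
# four landmasses as nodes, seven bridges as (parallel) edges.
from graphviz import Graph

g = Graph("konigsberg", engine="neato")
for name, label in [("N", "North bank"), ("S", "South bank"),
                    ("K", "Kneiphof island"), ("E", "East island")]:
    g.node(name, label)

bridges = [("N", "K"), ("N", "K"),   # two bridges to the north bank
           ("S", "K"), ("S", "K"),   # two bridges to the south bank
           ("N", "E"), ("S", "E"), ("K", "E")]
for a, b in bridges:
    g.edge(a, b)

g.render("konigsberg", format="png")   # writes konigsberg.png
```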

With this graph-theoretical program in my Python arsenal, I was able to make my own graph of Un conte à votre façon. Still not satisfied, I aimed to integrate the two programs, and Cliff and I decided to give my original program the structure of a graph so that it could generate a graph with graphviz at the end. My program now has nodes and edges that correspond to Queneau's prompts and choices, roughly as follows.
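Schematically, the structure is something like this sketch, where each node stores one of Queneau's prompts and each edge a reader's choice; the node numbers and abridged prompts are my own illustration rather than the tale's actual section numbering.

```python
# A sketch of the graph structure: each node carries one of Queneau's
# prompts, and each outgoing edge carries a reader's choice. The node
# numbers and abridged prompts are illustrative, not the tale's actual
# section numbering.
STORY = {
    1: {"prompt": "Do you wish to hear the story of the three little peas?",
        "yes": 4, "no": 2},
    2: {"prompt": "Would you prefer the story of the three big skinny beanpoles?",
        "yes": 5, "no": 3},
    3: {"prompt": "Would you prefer the story of the three average mediocre bushes?",
        "yes": 6, "no": 7},
    4: {"prompt": "The three little peas were dreaming...", "yes": 8, "no": 9},
    # ... and so on, down to the terminal node ...
    7: {"prompt": "In that case the tale is finished."},
}
```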

With this structure, I was able to write a program that enters the graph at the first node and, depending on the choices of the user, proceeds through the story, printing out the selected nodes and the corresponding edges. The program records the path the reader takes and, at the end, prints out a graph with the reader's path filled in green, to represent the little peas. While my little program does not take into account the full potential of the intersection of graph theory and literature proposed by Queneau's text, I am very pleased with how it functions. For instance, I can leave to the reader the mathematical exercise of finding the minimum number of read-throughs required to hit every node. While there is still more that could be done, the graph my program generates is itself informative: side by side with the text, it lets the reader learn more about the potential of this three-little-peas story.
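A minimal sketch of that traversal-and-drawing step might look like this (again a reconstruction rather than the project code; the two-node STORY dict is an abridged stand-in for the full structure sketched above, and the drawing relies on the graphviz Python package).

```python
# A sketch of the traversal and drawing: walk the story graph from the
# first node, record the reader's path, then render the whole graph with
# graphviz, filling the visited nodes in green. The abridged STORY dict
# stands in for the full structure sketched above.
from graphviz import Digraph

STORY = {
    1: {"prompt": "Do you wish to hear the story of the three little peas?",
        "yes": 2, "no": 2},
    2: {"prompt": "In that case the tale is finished."},
}

def read(story):
    """Walk the story interactively; return the list of visited nodes."""
    node, path = 1, [1]
    while any(key != "prompt" for key in story[node]):
        print(story[node]["prompt"])
        choice = input("yes/no > ").strip().lower()
        if choice in story[node]:
            node = story[node][choice]
            path.append(node)
    print(story[node]["prompt"])
    return path

def draw(story, path, filename="conte"):
    """Render the full graph, highlighting the reader's path in green."""
    graph = Digraph("conte")
    for node, data in story.items():
        color = "green" if node in path else "white"
        graph.node(str(node), data["prompt"], style="filled", fillcolor=color)
        for choice, target in data.items():
            if choice != "prompt":
                graph.edge(str(node), str(target), label=choice)
    graph.render(filename, format="png")   # writes conte.png

draw(STORY, read(STORY))
```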

Digital Oulipo: Learn Python the Hard Way

I wanted to write a brief blog post in praise of this free online textbook for Python. Over the January break, in order to move on to more complicated parts of my project (the Cent mille milliards de poèmes was a fairly basic introduction to programming), my technical lead proposed that I work my way through this textbook.

Written by Zed A. Shaw, this book has been helpful on many levels, not least because it introduced me to using the terminal rather than relying on some outside program. For my Cent mille milliards de poèmes annex, I had primarily been using Aptana Studio, a powerful piece of software that allowed me to avoid learning the basics of programming. The first few chapters of Learn Python the Hard Way forced me to acquaint myself with the terminal.

The bulk of the chapters were similar to Codecademy, but working through the exercises outside of an online platform and then running them in the terminal was more pedagogical, as was the way the activities built on one another. I now feel more autonomous in my programming.

So for anyone else looking to learn a new (programming) language, I would highly recommend this free and easy online resource. Anything worth learning is worth learning the right way, and in this case, the “right” way seems to be “the hard way”!