Portfolio for English 606: Topics in Humanities Computing


February 2016

Find the pattern in this response. Is there one or is it apophenia? (Week 7)

Dan Dixon’s “Analysis Tool or Research Methodology” chapter in UDH introduced the psychological phenomenon of pattern recognition in the context of DH. He explains that the DH field has an affinity for finding patterns, but that the field (and most others) has ignored “the nature of what patterns are and their statuses as an epistemological object” (192).

After briefly explaining the psychology of pattern recognition and the systems view of the world that all pattern-based approaches take, and after validating patterns as an epistemic construct, he discusses abduction and apophenia, the section I found most interesting. As I read about apophenia, I thought about my studies and what I’ve learned so far about the DH field, and I wondered: doesn’t this happen a lot?

So when I read Dixon’s conclusion, I really took note of one of the questions he posed: “Are we designing patterns where none existed in the first place and is there an unavoidable tendency towards apophenia instead of pattern recognition?” (206).

I think this is a valid and important question that might not have a straightforward answer. Yes, I think the field does tend towards apophenia, but I also think it can be avoided, or, alternatively, it may even be okay. I can easily see how one could drift into apophenia: it’s natural to predict an answer to a research question before the research begins.

I also think there’s pressure on professionals to validate their research within their field, and this may cause apophenia, or simply slight manipulation to reach the desired outcome, such as removing certain words from a word count.
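As a toy illustration (entirely my own, not from Dixon), here is how quietly dropping a single word can reshape what a word count “finds”:

```python
# An invented mini-example showing how removing one common word
# changes the headline result of a simple word count.
from collections import Counter

text = ("the pattern the archive the pattern keeps "
        "is the pattern we put there")
words = text.split()

full_count = Counter(words)
# Quietly drop the most frequent function word...
trimmed_count = Counter(w for w in words if w != "the")

print(full_count.most_common(1))     # "the" dominates the raw count
print(trimmed_count.most_common(1))  # suddenly "pattern" is the story
```

Neither count is wrong, exactly, but the choice of what to exclude decides which pattern gets reported.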

My problem with apophenia, or at least Dixon’s definition of apophenia, is with the idea of “ascribing excessive meaning” (202). How do we know when someone has ascribed excessive meaning to a perceived pattern?

Dixon does get at how we determine whether a pattern is really there. He suggests that pattern recognition, by itself, is not a valid method of enquiry, and then proposes using inductive and deductive reasoning to test what abduction proposes; induction and especially deduction can invalidate a pattern. I agree with this, and I think it is the researcher’s responsibility to fully account for the valid patterns that appear and to recognize when apophenia has occurred.

However, I also think that even loosely developed patterns formed through apophenia can be important (as long as they are acknowledged as such). If the researcher can create a unique and productive discussion from a barely formed pattern, it shouldn’t be cast aside.

Furthermore, a single pattern can have several different meanings, depending on the researcher, the research question, the field of study, the context, etc. What may be unimportant in one field may be important to another.

Because the main topic of this week’s readings is digital archives, I want to quickly connect the Dixon reading to the Parikka and Rice & Rice readings. Patterns play a significant role in archives. They help archivists group and organize items. They influence the way items are tagged in an archive. They influence the software and interface of the archival system. And the way that items are grouped, organized, tagged, and retrieved can force patterns that may not emerge otherwise.


Teaching XML in the Digital Humanities (Week 6)

Birnbaum, in “What is XML and why should humanities scholars care?,” addresses how we should teach XML. He suggests that the Text Encoding Initiative’s (TEI’s) “Gentle Introduction to XML” is not gentle enough and that XML syntax should be taught after the introduction (although, by that standard, character entities could have been removed from his own introduction).

Birnbaum’s gentle introduction was written for an undergraduate course called “Computational methods in the humanities.” The course was “designed specifically to address the knowledge and skills involved in quantitative and formal reasoning within the context of the interests and needs of students in the humanities” (taken from the class syllabus).

In his gentle introduction, Birnbaum takes the stand that digital humanities scholars will need to learn XML at some point, and this stand is even clearer in the syllabus. How should we teach XML?

To help me explore that question, I try to relate it to how I’ve learned programming languages. How did I learn HTML? Mostly by reading online references and practicing in Notepad. Every new element I read about, I tried to recreate on my local server. It was very skill-based.

Yes, I wanted to be able to create a website, but I mostly wanted a skill to put on my resume. I didn’t think about design and functionality (other than, does the code do what it’s supposed to do). I didn’t think about why I, as an English student, should care or how HTML could be used in a context other than putting content online.

I’m currently learning JavaScript through an introductory web development course on Udemy, and so far, I (and the instructor) have been focused on building a skill. I partitioned my screen to display the online reference on the left and Notepad++ on the right. After I enter new code, I save and refresh my browser window to see if it worked.

The instructor likes to let the code’s output explain itself. He repeatedly says “this will make more sense later in the course.” Sometimes after successfully writing a section of code, I try to think of how it will be useful, and sometimes I can’t answer that.

The instructor essentially throws us in there with very little introduction, but I like that full immersion. HTML and Javascript are languages, and if immersion is an effective technique for learning French or German, why can’t it be an effective technique for learning programming languages?

It was hard for me to learn about XML from this introduction. It was especially hard to learn the terminology without seeing the terms in action. I actually felt that McDonough’s “XML, Interoperability and the Social Construction of Markup Languages: The Library Example” did a much better job of contextualizing the use of XML in the digital humanities, even though it was specific to digital libraries.

Whether a digital humanist slowly learns XML or is thrown into the deep end probably depends on the person and the context. Regardless, I think it’s extremely beneficial to have XML (and other computer-based) classes specifically designed for digital humanists.

Those classes could fill in the gaps that, for example, occurred in my skill-based learning. The classes could include discussions of XML problems in the digital humanities, such as interoperability, a problem that would not be as urgent for a web developer creating a website for a business.

Computational Humanists (Week 5)

For this week’s reading response, I’m going to hyper-focus on Jeannette Wing’s “Computational Thinking.”

This article was published in March 2006 in Communications of the ACM, a journal that focuses on the computing and information technology fields by covering “emerging areas of computer science, new trends in information technology, and practical applications.”

In this short three-page article, Wing explains the extensive benefits of computational thinking, in what I assume is an attempt to garner student (and parent) interest in the computer science degree program (at the time, Wing was the President’s Professor of Computer Science and head of the Computer Science Department at Carnegie Mellon).

Under the title of “Computational Thinking,” and in blue font to stand out, she writes “It represents a universally applicable attitude and skill set everyone, not just computer scientists, would be eager to learn and use.”

As I read that sentence with only a best guess of what computational thinking is, I nodded in agreement.

Although she says repeatedly that computational thinking is for everyone, she still seems to focus on computer science. It ends up sounding more like computational thinking is mostly another, perhaps more managerial, layer for those in computer science and similar fields.

Furthermore, when she lists the post-college careers for computer scientists—“medicine, law, business, politics, any type of science or engineering, and even the arts”—the use of “even” makes it seem like she anticipated it would be a surprise, or that it might be considered a stretch.

This isn’t surprising. Based on our readings so far, it sounds like some in the DH field and many in the traditional humanities fields may consider it a stretch to go into the computer science field after receiving, for example, an English or history degree.

Despite her representation of “the arts” as a stretch, some of Wing’s characteristics of computational thinking resonate with some of the discussions our class has had from earlier readings, especially her claim that computational thinking is “a way that humans, not computers, think.”

This goes back to the idea that computers may be able to find answers, but they don’t know the right questions to ask. That’s on us.

Additionally, her claim that computational thinking focuses on “ideas, not artifacts” may tie to the discussion of whether digital humanists need to know how to code.

As Hayles suggests in the UDH reading this week, “not every scholar in the Digital Humanities needs to be an expert programmer, but to produce high quality work, they certainly need to know how to talk to those who are programmers” (58). This suggests that the computational concepts used to solve problems are as important (probably more so in the DH field) as the actual code.

What was most surprising in this reading was her claim that “some parents only see a narrow range of job opportunities for their children who major in computer science.”

Was the field hurting that badly in 2006? I know a lot has changed in the past 10 years, but it was still surprising to read this.

Today it seems like computer science is one of the best degrees for job prospects. After all, as exemplified in the Manovich reading this week, software is deeply integrated into our culture: “adding software to culture changes the identity of everything that a culture is made from.” This results in a lot of jobs.

Hayles asks how engagements with digital technologies change the ways humanities scholars think. To follow, are humanities scholars implementing Wing’s description of computational thinking? Is “computational thinking” even in the field’s vocabulary?

Bogost’s Reading Machine (Week 4)

This week’s readings raised a lot of questions about the connections between the humanities (especially literary and rhetoric studies), the sciences, and computer technology.

Ramsay’s Reading Machines explores the use of computer programs for text analysis in the humanities. He supports the increased use of computer technology in the humanities, but he expresses concern that the field is trying to mimic the sciences by treating computer technology as a means to objective analysis. Humanists conducting text analyses must find a balance between the machine’s objectivity and the researcher’s subjectivity.

Thinking about the title as I read, I couldn’t help wondering who the machine is: the computer, the researcher, or both combined? By the end, I would say it’s both combined.

A topic in this book that particularly caught my interest was Mathews’s algorithm, a procedure designed to generate poems by “remap[ping] the data structure of a set of linguistic units (letters of words, lines of poems, paragraphs of novels) into a two-dimensional tabular array” (29).

The author shifts the characters in each row to form new words down the columns, then combines the new words to create an unpredictable poem or story.
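To make the mechanics concrete, here is a rough Python sketch of the row-shifting idea. The sample words, the shift amounts, and the reading order are my own illustrative choices, not Mathews’s exact procedure:

```python
# A rough sketch of the row-shifting idea behind Mathews's algorithm:
# lay linguistic units in a table, shift each row, then read columns.

def shift_and_read(rows):
    """Shift row i left by i positions, then read down the columns."""
    shifted = [row[i:] + row[:i] for i, row in enumerate(rows)]
    return [[row[j] for row in shifted] for j in range(len(rows[0]))]

lines = [
    ["the", "sea", "was", "calm"],
    ["a", "bird", "flew", "past"],
    ["we", "sat", "in", "silence"],
    ["night", "fell", "over", "us"],
]

# Each printed line is a new "poem" line assembled down a column.
for column in shift_and_read(lines):
    print(" ".join(column))
```

The juxtapositions the table produces are mechanical, but, as with the algorithm Ramsay describes, the reader supplies the meaning.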

While reading about Mathews’s algorithm, I was reminded of Bogost’s Latour Litanizer, as described in his book Alien Phenomenology, and so I wanted to put Reading Machines in conversation with object-oriented ontology (OOO).

The Latour Litanizer creates a list of things (objects, people, events) by utilizing Wikipedia’s random page API.

For example, right now I’m generating a list through the Latour Litanizer (by simply clicking a button) and the product is

“The Sea Urchins, Cults: Faith, Healing and Coercion, Subhash, Roman Catholic Diocese of Limburg, Barber-Mulligan Farm, Charles Teversham, 2010-11 Belgian First Division (women’s football), Knox Presbyterian Church (Toronto), George Davidsohn.”

The list is designed to be random (at least within the confines of the algorithm, which may exclude repeats and more). Despite the randomness, I still form connections between the words. For example, Roman Catholic and Presbyterian Church (and, some may argue, cults) relate to religion, and Limburg and Belgium are connected geographically.
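For anyone curious about the mechanics, here is a minimal sketch of a Litanizer-style list builder. The MediaWiki random-page endpoint is real, but the function names and comma-separated formatting are my own guesses at the approach, not Bogost’s actual code:

```python
# A minimal sketch of a Latour-Litanizer-style list builder using
# Wikipedia's MediaWiki API (list=random returns random article titles).
import json
import urllib.request

API_URL = ("https://en.wikipedia.org/w/api.php"
           "?action=query&list=random&rnnamespace=0&rnlimit=8&format=json")

def fetch_random_titles():
    """Ask Wikipedia for a handful of random article titles."""
    with urllib.request.urlopen(API_URL) as response:
        data = json.load(response)
    return [page["title"] for page in data["query"]["random"]]

def format_litany(titles):
    """Join the titles into a single comma-separated litany."""
    return ", ".join(titles)

# With a live connection: print(format_litany(fetch_random_titles()))
# Offline, with titles from the litany above:
print(format_litany(["The Sea Urchins", "Subhash", "George Davidsohn"]))
```

The program itself does nothing but fetch and join; every connection we see in the resulting list is our own contribution.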

On the blog post introducing the Latour Litanizer, Bogost explains that he created it out of curiosity about combining ontography and carpentry.

He describes ontography as “the techniques that reveal objects’ existence and relation” and carpentry as “the construction of artifacts that illustrate the perspectives of objects.”

The list puts things together that otherwise may never be linked, and we create relations from our knowledge and experiences. Therefore, the list may mean more to one person than to another. Not only do we form (or fail to form) connections with the objects, they may form (or fail to form) connections with each other, although these connections are much harder to understand.

Although the Latour Litanizer seems more random than Mathews’s algorithm, both reveal new ways of reading a text. They reveal connections (for example, Mathews’s algorithm revealed a prominent connection in form, and the Latour Litany revealed the diversity of things humans deem worthy of having a Wikipedia page).

Whereas Mathews’s algorithm may focus on a novel or a poem, the Latour Litanizer is constantly demonstrating new ways to read Wikipedia as a large body of text that represents society to some degree.

The Latour Litany is a unique example of a program that performs a text analysis of an entire website. It might not be the most productive exercise for researchers, but perhaps for distant reading, it could be useful for getting the bigger picture.

Research Methods in the Social Sciences (Week 3)

In the “Tactical and Strategic: Qualitative Approaches to the Digital Humanities” chapter of Rhetoric and the Digital Humanities, McNely and Teston discuss the importance of carefully choosing strategies, as different strategies afford or limit certain tactics. As examples, they describe a writing, activity, and genre research (WAGR) approach to exploring transmedia storytelling and a grounded theory (GT) approach to collecting and analyzing data.

I had trouble with this reading, because my comfort level with methods and methodologies is poor, and adding the concept of strategies and tactics on top of that left me feeling like I did not fully understand the two approaches.

Despite my difficulty with this reading, research methods in the social sciences was a topic that really stood out to me this week. I was even a little surprised by Smagorinsky’s “The Method Section as Conceptual Epicenter in Constructing Social Science Research Reports.”

Clearly it’s important for people to understand how you conduct your research (validity!), but I guess I must have assumed that the humanities would kind of gloss over that section, whereas in the hard sciences, it sometimes seems like the method section is more important than the research question or findings.

Maybe I just had a brain lapse. Or maybe it’s because it’s not something that we normally talk about in our English classes.

It’s surprising that we do not talk much about methods, since we do apply research methods in most of the papers we write. We are often required to choose a sampling of readings from our field that will fit our research topic (annotated bibliography). We sometimes (carefully) use empirical evidence to back up our arguments. We use research methods, but the word method usually doesn’t come up.

The projects discussed or implied in the DH readings are often fairly different from the standard conference paper for a PWE grad class, but we still use research methods, and we don’t really refer to them as methods.

The only conference paper I wrote that required me to really think about my research methods as methods was for Catherine’s class last semester.

For my research, I collected a sample of images and text related to a popular meme of a fictional character on a TV show. I used Evernote to tag and organize my data and then I looked for patterns. I used Laurie Gries’ iconographic tracking method as a basis for my research, and although I employed a few methods, what would be my “methods section” was really weak, because I just don’t know the vocabulary.

Last semester in the grants class I took, Hawley asked us to pair up each week with another classmate to peer review sections of our grants. The week that I had drafted my project evaluation section, I was paired with a graduate student in a Sport Sciences program (I don’t remember which one specifically).

In the grant, I stated that after the project ended, I would write a paper to be published in an academic journal. My project evaluation section would influence my methods section of the paper more than any other part, because that section explains how I plan to collect data.

The initial draft had very simple statements. With the help of my very knowledgeable classmate, I was able to provide more specificity.

Research methods are an area I get a little lost in, but I’m sure that will change over the course of this class.

What’s in a name? (Week 2)

A recurring theme that I’ve noticed in many professional writing fields is that people get really hung up on definitions. “How do we define ourselves?” is a very prevalent debate in the tech comm field, and from the readings this week, it is clearly a prevalent debate in the Digital Humanities (DH) field as well.

These debates are slightly annoying. Imagine what these scholars could accomplish with all the time they’ve spent writing and presenting their disagreements with others’ definitions! But when it comes to research, and especially when it comes to funding, the distinctions between the various similar fields become very important. From the definitions put forth in the readings, I was able to sense why it’s so difficult to define the DH field.

This may be completely incorrect, and is certainly an oversimplification, but I imagine a graph with humanities on the X axis and digital technology on the Y axis. A few people in the DH field are at (0.1, 10), a few others are at (10, 0.1), and everyone else is somewhere in between. To simplify, some people seem to lean more towards the “digital” side of DH and others lean more towards the “humanities” side of DH. Therefore, everyone is bringing a variety of skills and ideas to the DH field.

Although Gold did not express his preferred definition of DH in the introduction to The Digital Humanities Moment, he did note the tension after Ramsay’s “Who’s In and Who’s Out” talk. Ramsay describes DH as a field that builds/makes things (a sentiment that aligns with the STEM fields).

This idea of building/creating is echoed in the introduction to Rhetoric and the Digital Humanities (RDH). Ridolfo and Hart-Davidson describe DH as a term that largely functions tactically (to get things done). They suggest two political moves for the scholars in rhet studies, TPW, and tech comm: selectively redefining their digital projects under the DH umbrella and studying the DH job market. When reading their introduction, the word “practical” kept coming to mind.

In the first chapter of RDH, Reid expresses that there is a problem with defining the DH field and tries to work out a definition by examining the fields doing DH work. As Reid points out, the obvious DH fields are those that employ computers to study traditional objects of humanities study (what used to be called humanities computing). Other fields he includes are media study and rhet and comp.

When it comes to the challenge of defining the field as a whole, Reid seems to partially blame the troubled relationship between rhetoric and the humanities and the “correlationist view” both fields tend to take. Pulling in Latour, his suggestion is to approach rhetorical relations as relations with nonhumans: he calls on us to recognize how nonhuman technology shapes those relations. Rather than building, he focuses on theorizing.

Going back to the graph, I guess the DH field needs to find a balance between shortening the range while not excluding the field out of existence. It’s too early for me to settle on a definition, although I am certain that I lean a little more towards the “digital” side of DH.
