alexries606

Portfolio for English 606: Topics in Humanities Computing

Video Games: More Than Just Play (Week 13)

The readings this week were woven together by a major emphasis on processes, procedures, and expression. All of them advocate for more scholarly research of games, specifically of the input (the code and procedures), since most existing research focuses on the output (the screen and gameplay).

In Procedural Rhetoric, Ian Bogost acknowledges that video games are not taken seriously in academia: “videogames are considered inconsequential because they are perceived to serve no cultural or social function save distraction at best, moral baseness at worst” (viii). Analyzing the procedural aspect of video games through a rhetorical lens, he proposes the concept of procedural rhetoric, which he defines as “the art of persuasion through rule-based representations and interactions rather than the spoken word, writing, images, or moving pictures” (viii).

Bogost claims that, because video games are expressive, they are well suited for rhetorical persuasion. Video games rely on the enthymeme: “the player performs a great deal of mental synthesis, filling the gap between subjectivity and game processes,” and “a procedural model like a videogame could be seen as a system of nested enthymemes, individual procedural claims that the player literally completes through interaction” (43). The player is willing to accept the claims put forth in video games, often unknowingly.

I agree that video games deserve more attention as rhetorical objects and more “acceptance as a cultural form” (vii). The readings this week made me think critically about the different types of video games I’ve played, such as The Sims, Dance Dance Revolution (DDR), and RuneScape.

Relating these ideas to some games, like The Sims, is easy. It is more difficult with others, like DDR. The procedures involved in DDR seem so simplistic that it is initially difficult to see the value of studying the game’s procedural representations.

The player must step on the correct arrows as they line up with a marker on the screen. The accuracy of each step determines the number of points awarded and the judgment that appears on the screen, such as “Perfect” or “Boo.” There are no lively characters, there is no realistic setting, and there certainly is no story.
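Even that simple rule, though, is a procedure that can be written down. Here is a toy sketch in Python; the timing windows and point values are invented for illustration (actual DDR thresholds vary by version and difficulty):

```python
# Hypothetical timing-window judgment in the style of a dance game.
# The thresholds below are made up; real DDR windows differ by version.

def judge(offset_ms):
    """Map the gap between a step and the beat to a judgment and score."""
    gap = abs(offset_ms)
    if gap <= 30:
        return "Perfect", 100
    elif gap <= 80:
        return "Great", 50
    elif gap <= 150:
        return "Good", 10
    else:
        return "Boo", 0

print(judge(-12))   # ('Perfect', 100)
print(judge(140))   # ('Good', 10)
```

Even a rule this small makes a procedural claim about what it means to dance well: precision, not style, is what the game rewards.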

Noah Wardrip-Fruin, in Expressive Processing, uses the term expressive processing to “talk about what processes express in their design–which may not be visible to audiences,” such as their “histories, economies, and schools of thought” (4). Considering these hidden processes, such as the culture embedded in the game design, validates DDR as something that has significant cultural and social functions.

On another note, in the Introduction to Technical Communication for Games, Jennifer deWinter and Ryan Moeller discuss the technical communicator’s potential role in the production and dissemination of video games.

Although the reading did not touch on this at all, I thought about my technical writing work at ITS and, from there, about the terminology used for the audience. In technical writing for WVU’s IT services, the audience is the user. With video games, the audience is the player.

There may be nothing behind this distinction, but it could add to the perception of video games as unworthy of serious consideration from scholars and technical communicators.

Bogost’s Reading Machine (Week 4)

This week’s readings raised a lot of questions about the connections between the humanities (especially literary and rhetoric studies), the sciences, and computer technology.

Stephen Ramsay’s Reading Machines explores the use of computer programs for text analysis in the humanities. He supports the increased use of computer technology in the humanities, but he expresses concern that the field, in mimicking the sciences, treats the computer as a means to objective analysis. Humanists conducting text analyses must find a balance between the machine’s objectivity and the researcher’s subjectivity.

Thinking about the title as I read, I couldn’t help wondering who the machine is: the computer, the researcher, or both combined? By the end, I would say it is both combined.

A topic in this book that particularly caught my interest was Mathews’s algorithm, a procedure designed to generate poems by “remap[ping] the data structure of a set of linguistic units (letters of words, lines of poems, paragraphs of novels) into a two-dimensional tabular array” (29).

Shifting the characters in each row and then reading down the columns forms new words, and combining those words creates an unpredictable poem or story.
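To make the remapping concrete, here is a toy Python sketch of the idea. The word list is my own, and Mathews’s actual algorithm is more elaborate than this simple rotation:

```python
# A toy version of the table shuffle: stack equal-length words into a
# 2-D array of characters, rotate row i left by i positions, and read
# new "words" down the columns. Only a sketch of the remapping idea.

def mathews_shuffle(words):
    """Rotate each row of the character table, then read the columns."""
    table = [list(w) for w in words]                      # words as rows
    shifted = [row[i:] + row[:i] for i, row in enumerate(table)]
    return ["".join(col) for col in zip(*shifted)]        # columns as words

print(mathews_shuffle(["tine", "sale", "male", "vine"]))
# ['tale', 'ilev', 'nemi', 'esan'] -- some outputs are words, most are not
```

The shuffle itself is mechanical; deciding which recombinations count as poetry is left to the reader, which is exactly the balance of machine and researcher that Ramsay describes.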

While reading about Mathews’s algorithm, I was reminded of Bogost’s Latour Litanizer, as described in his book Alien Phenomenology, and so I wanted to put Reading Machines in conversation with Object-Oriented Ontology (OOO).

The Latour Litanizer creates a list of things (objects, people, events) by utilizing Wikipedia’s random page API.

For example, right now I’m generating a list through the Latour Litanizer (by simply clicking a button) and the product is

“The Sea Urchins, Cults: Faith, Healing and Coercion, Subhash, Roman Catholic Diocese of Limburg, Barber-Mulligan Farm, Charles Teversham, 2010-11 Belgian First Division (women’s football), Knox Presbyterian Church (Toronto), George Davidsohn.”
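For the curious, here is a minimal sketch of how such a litanizer might work, assuming Wikipedia’s public MediaWiki API for random pages (Bogost’s own implementation may differ):

```python
# Minimal litany generator using Wikipedia's random-page API endpoint.
import requests

def latour_litany(n=8):
    """Return a comma-separated litany of n random Wikipedia article titles."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "list": "random",
            "rnnamespace": 0,   # article pages only, no talk/user pages
            "rnlimit": n,
            "format": "json",
        },
        headers={"User-Agent": "litany-sketch/0.1"},
    )
    resp.raise_for_status()
    return ", ".join(page["title"] for page in resp.json()["query"]["random"])

print(latour_litany())
```

Each click (or call) produces a new litany, so the randomness belongs to Wikipedia, not to the program.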

The list is designed to be random (at least within the confines of the algorithm, which may exclude repeats, among other constraints). Despite the randomness, I still form connections between the items. For example, Roman Catholic and Presbyterian Church (and, some may argue, cults) relate to religion, and Limburg and Belgium are connected geographically.

In the blog post introducing the Latour Litanizer, Bogost explains that he created it out of curiosity about combining ontography and carpentry.

He describes ontography as “the techniques that reveal objects’ existence and relation” and carpentry as “the construction of artifacts that illustrate the perspectives of objects.”

The list puts together things that otherwise may never be linked, and we create relations from our knowledge and experiences. Therefore, the list may mean more to one person than to another. Not only do we form (or fail to form) connections with the objects; they may also form connections with each other, although those connections are much harder to understand.

Although the Latour Litanizer seems more random than Mathews’s algorithm, both reveal new ways of reading a text. They reveal connections (for example, Mathews’s algorithm revealed a prominent connection in form, and the Latour Litany revealed the diversity of things humans deem worthy of having a Wikipedia page).

Whereas Mathews’s algorithm may focus on a novel or a poem, the Latour Litanizer constantly demonstrates new ways to read Wikipedia as a large body of text that represents society to some degree.

The Latour Litanizer is a unique example of a program that performs a text analysis of an entire website. It might not be the most productive exercise for researchers, but for distant reading it could be useful for getting the bigger picture.
