Prior to this week’s readings, as many of my past blog posts have shown, I was quite cynical about how useful digital humanities might be to me as a graduate student. Broadly speaking, I’ve gone through a handful of phases. At first, I thought digital humanities was something I wouldn’t be able to tackle. Then I acknowledged its importance, but ultimately decided that if one were to follow digital humanities projects to their logical conclusions, they would be the preserve of faculty rather than of graduate students. After all, history is an incredibly writing-centric discipline (and, ironically, is becoming more so).
This week’s readings, however, changed a lot of my thinking. While I apologize for the overly literary title, I believe it fits how I feel right now. Whereas before I thought digital humanities was useful for some people but not for me, several of the projects we looked at this week show quite resolutely how useful digital humanities tools can be. To that end, I’ll start with the first reading and work through the rest in order.
The Walsh and Horowitz reading was a good introductory piece. As with many of these tools, however, I’ll most likely understand them best when I get hands-on with them in class today. Nevertheless, I found the section on close reading quite revelatory. Having just read about the various compromises that come with reading on a computer, I came to this section wondering how closely we can actually read with one. As it turns out, using a computer to analyze a text allows us to detect certain patterns that emerge; the example given in the book is an analysis of “The Raven.” The book also raised a lot of questions for me. I found the section on text encoding particularly interesting, and I am now wondering whether scholars use this kind of tool to code oral histories. Any feedback on that matter would be very helpful.
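To make the close-reading point concrete, here is a minimal sketch of the kind of pattern-counting a computer does instantly. The excerpt is a few public-domain lines from “The Raven”; the crude tokenizing choices are my own assumptions, not the method from the reading:

```python
import re
from collections import Counter

# A few lines from "The Raven" (public domain) as a toy corpus.
excerpt = """
Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door
""".lower()

# Split crudely on non-letters and count word frequencies --
# the kind of surface pattern a computer surfaces in an instant.
words = re.findall(r"[a-z]+", excerpt)
freq = Counter(words)

print(freq.most_common(3))
```

Even this toy count surfaces the excerpt’s repetitions (“rapping,” “while”) faster than a hand tally would, which is the modest version of what the tools in the reading do across an entire corpus.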
The topic modeling article was also stimulating. The method’s reliance on a large corpus, however, poses difficulties for scholars in certain fields, particularly historians who work on countries whose archives have not been digitized; in my own area of research, this would be a real obstacle. One application I do see for topic modeling is political speeches, especially in places where a leader held power for a long time and across very different events.
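Real topic modeling (e.g., LDA) requires a dedicated library and a large digitized corpus, but the underlying intuition, that different periods of a leader’s speeches are dominated by different vocabularies, can be sketched with a crude word-frequency comparison. The snippets below are invented placeholders, not real quotations:

```python
import re
from collections import Counter

# Invented placeholder snippets standing in for speeches from two
# periods of a long tenure (not real quotations).
speeches = {
    "early": "order order economy the nation needs order and discipline",
    "late": "growth markets the economy opens markets and growth continues",
}

STOPWORDS = {"the", "and", "needs"}

def top_words(text, n=2):
    """Return the n most frequent non-stopword tokens in a text."""
    tokens = [w for w in re.findall(r"[a-z]+", text.lower())
              if w not in STOPWORDS]
    return [w for w, _ in Counter(tokens).most_common(n)]

for period, text in speeches.items():
    print(period, top_words(text))
```

A real workflow would swap this for a library such as Gensim or MALLET and thousands of documents; the point here is only the shape of the question, which words dominate which era.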
That’s a nice segue into one of the projects we looked at this week: Quantifying Kissinger. The project is incredibly in-depth and thorough. Whereas most projects (including, to an extent, the other one we examined this week, Mining the Dispatch) churn out statistics, Kaufman’s project gives us insight into Kissinger’s motives, anxieties, and opinions over time. Mining the Dispatch does raise certain questions, however; for example, why does the data look the way it does? Kaufman’s project, by contrast, is a stunning example of what is possible with these tools.
I do not think I would be capable of such a project right now. However, more than at any other time in this course, I feel as though a new vista has opened up. One of my research interests is the rise of Evangelical Protestantism in post-coup Chile, and one potential avenue for these tools would be to use them to analyze preachers’ sermons. The response of the Catholic Church could also be interesting. As part of the cultural shifts that are part and parcel of the neoliberal turn, I could use these tools to examine the speeches of Augusto Pinochet, talks by the Chicago Boys, and perhaps much more. This is the first time I have felt that I could incorporate digital humanities work into my dissertation.