By Alec
After spending many hours collecting the 169 American love letters that comprise the corpus of my final project, I now get to spend many hours analyzing them. The first, and probably most tedious step of this process was to codify each letter and compile the relevant information into a big spreadsheet. This meant going into each and every document and quickly scanning the text for things like date written, author, recipient, and location. Upon spotting a relevant bit of information, I “tagged” it by surrounding it with the corresponding code – for example: {author}Henry Knox{/author}.
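The curly-brace tagging scheme described above lends itself to simple automated extraction once the tags are in place. As a rough sketch (not the actual tooling used for this project), a few lines of Python with a regular expression could pull each tagged field out of a letter and into a dictionary, ready for a spreadsheet row:

```python
import re

# Matches tags of the form {name}value{/name}; \1 requires the closing
# tag to use the same name as the opening tag.
TAG_PATTERN = re.compile(r"\{(\w+)\}(.*?)\{/\1\}", re.DOTALL)

def extract_tags(text):
    """Return a dict mapping each tag name to a list of tagged values."""
    fields = {}
    for name, value in TAG_PATTERN.findall(text):
        fields.setdefault(name, []).append(value.strip())
    return fields

# Hypothetical example letter fragment using the scheme described above.
letter = "{date}1777{/date} {author}Henry Knox{/author} {recipient}Lucy Knox{/recipient}"
print(extract_tags(letter))
```

Each resulting dictionary could then be written out as one row of the spreadsheet, with one column per tag name.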
This is, as you might guess (or even know firsthand, if your project involves the same process), a pretty mindless activity, which means that I had plenty of time to really think about what I was doing. And what I wished I was doing, i.e., anything but individually tagging 169 love letters.
Jokes aside, it occurred to me that rapidly skimming texts to extract small bits of data is very different from actually reading them start to finish, in both ends and means. Whereas the goal of performing a close reading of a text is to acquire a comprehensive understanding of the document, its author, and its arguments, “marking up” or tagging a text effectively forfeits this intention, often with the hope that a computer program or algorithm can handle it for us. In an ideal world, there would be time for both micro- and macro-levels of analysis, and I imagine that many historians do indeed have this opportunity. With a rapidly approaching due date, however, I don’t exactly have this luxury. For every letter in my collection that I have read word-for-word, there are probably ten that I’ve only glanced at, and I likely won’t become any more intimate with them.
As Kurt noted in his post last week, digitization and visualization services like Neatline do allow us to see the ‘big picture’ and to tease out trends from large collections of sources. In this sense, performing digital history can encourage a deeper reading of historical texts, since we can back up conclusions about sources with percentages and graphs, not just subjective analysis. Yet it’s also a much shallower reading – and perhaps not a reading at all, at least on our part, considering how much is handed off to the computer.
One conclusion about this sort of ‘reading’ is something I think we’ve discussed in class before – that mechanically analyzing a text carries the danger of distancing it too much from its author and origins. However, in giving up close reading to computers, we also run the risk of losing sight of our own roles as voyeurs of history. Anyway, back to tagging.