The goal of this project was to assess how far digital textual analysis tools can be used to reveal the attitudes and beliefs that underlie public health messaging within the National Library of Medicine’s “The Public Health Film Goes to War” archive, and to showcase the extent to which these films were a product of their time. Using digital methods allowed me to analyze these videos more quickly and systematically, find points of comparison across the corpus as a whole, and dissect linguistic nuances at a finer scale than traditional methods would have allowed.
Using textual analysis raised questions of reliability and forced me to redefine my scope several times in accordance with my own technological capacity and my limited timeframe. Throughout this process, I found myself thinking about how metrics for what constitutes “proof” differ between traditional historical research and digital humanities projects. Essentially, this often boils down to combining the world of means and z-scores with one that attempts to explain qualitative, sometimes intangible phenomena. What does a p-value mean to a cogent argument? I came in expecting to find concrete quantitative evidence of the themes I hypothesized would be prevalent in the video collection, and to emerge from this project with a series of impressive graphs and low p-values. While this proved beyond my scope, I feel that simply tacking statistical analyses onto my data would have risked flattening my content, a tension prevalent in the digital humanities world. Another question that came to mind was what exactly a project of this nature could contribute: it’s no secret that the 1940s were rampant with racism, misogyny, and the like, but I found that applying digital tools could be useful in assessing the methods by which these social attitudes show through, as some were more subtle than others.
I approached the project in search of particular social sentiments that I knew were prevalent at this moment in history: WWII saw expanded global borders, racism and xenophobia, nationalism, misogyny, fear-mongering, you name it. However, early on I encountered another way in which DH projects differ from traditional historical research, or at least from how I approach research. While simultaneously discovering new analysis tools throughout the course and attempting to “prove” the presence of the themes I set out to find, I was forced to keep my argument malleable depending on what I stumbled upon or found myself capable (or incapable) of showcasing in a convincing manner. The project did not turn out as expected in the sense that I had to change course in accordance with the tools I used; discarding useless tools and finding new ones prompted me to reassess what I wanted to look for and change my themes accordingly. This was the case when I discovered a named entity recognition tool while combing through textual analysis resources: it provided an effective method for quickly finding unique locations within the texts, which revealed some attitudes concerning origin and race that I well expected to find. If I could start this project over, I would perhaps have chosen a larger database, or one catered towards a broader audience, whereas this one was aimed mainly at members of the armed forces. This would have enhanced the generalizability of my project, and the themes I was looking for may have been more prevalent in the analysis.
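The location-finding step this workflow relies on can be sketched in a few lines. This is a minimal illustration, not the actual tool used in the project: it assumes an NER library (spaCy, for example) has already tagged a transcript and produced (text, label) entity pairs, and the sample entities below are invented.

```python
from collections import Counter

def unique_locations(entities):
    """Tally unique place names from NER output.

    `entities` is a list of (text, label) pairs, the general shape of
    output an NER library produces; GPE (geopolitical entity) and LOC
    are the labels spaCy uses for places.
    """
    return Counter(text for text, label in entities
                   if label in {"GPE", "LOC"})

# Hypothetical NER output for one film transcript (invented example):
ner_output = [
    ("malaria", "DISEASE"),
    ("the Pacific", "LOC"),
    ("Japan", "GPE"),
    ("Japan", "GPE"),
]
print(unique_locations(ner_output))
# Counter({'Japan': 2, 'the Pacific': 1})
```

Counting mentions rather than merely listing them is a deliberate choice: repetition of a place name across a corpus is itself evidence of emphasis.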
Given additional time to develop this project further, I would focus on expanding my database to include more videos from this era (not only those with military themes) and to cover a range of decades. I believe this would provide more concrete evidence to support my insights and conclusions and potentially expand the number of themes I am able to explore. It would also provide the opportunity to trace trends in social attitudes and messaging techniques across time and note their evolution.