This blog discusses the applicability of services such as automatic metadata generation and semantic annotation for extracting person names and locations from large datasets. This is demonstrated using oral history transcripts provided by the United States Holocaust Memorial Museum (USHMM).
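As a minimal sketch of what such an extraction service does, the snippet below matches transcript text against small hand-made lists of person and place names. All names and lists here are hypothetical illustrations; a real annotation service would rely on trained named-entity recognition models rather than fixed gazetteers.

```python
import re

# Tiny illustrative gazetteers; the names are hypothetical examples.
# A production service would use trained NER models, not fixed lists.
PERSONS = ["Anna Kovacs", "Jozef Novak"]
PLACES = ["Bratislava", "Kosice", "Vienna"]

def annotate(text):
    """Return the person and place mentions found in a transcript."""
    found_persons = [p for p in PERSONS
                     if re.search(r"\b" + re.escape(p) + r"\b", text)]
    found_places = [p for p in PLACES
                    if re.search(r"\b" + re.escape(p) + r"\b", text)]
    return found_persons, found_places

transcript = "Anna Kovacs recalled leaving Bratislava for Vienna in 1938."
people, places = annotate(transcript)
print(people)  # ['Anna Kovacs']
print(places)  # ['Bratislava', 'Vienna']
```

The word-boundary regex avoids partial matches (e.g. "Vienna" inside a longer word), which is one of the simplest disambiguation problems that full NER pipelines handle much more thoroughly.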
In the late 1930s, just before war broke out in Europe, a series of chaotic deportations expelled thousands of Jews from what is now Slovakia. As part of his research, Michel Frankl investigates the backgrounds of the deported people and the trajectory of the journey they were taken on. This practical blog describes the tools and processes of analysis, and shows how a spatially enabled database can be made useful for answering similar questions in the humanities, and in Holocaust Studies in particular.
Geographical Text Analysis (GTA) is a relatively recent approach to studying, analysing, and extracting the content of textual sources, combining techniques from Natural Language Processing (NLP), Corpus Linguistics, and Geographic Information Systems (GIS) for Humanities research. This module offers a step-by-step guide with real data, focused on querying the geographic content of textual sources and analysing spatial information at scale.
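To give a flavour of how the NLP and GIS sides of GTA meet, here is a small sketch that counts place-name mentions across documents and measures the distance between two mentioned places. The gazetteer, coordinates, and documents are hypothetical stand-ins; a real workflow would resolve names against a service such as GeoNames and work inside a proper GIS.

```python
from collections import Counter
from math import radians, sin, cos, asin, sqrt

# Hypothetical gazetteer mapping place names to (lat, lon) coordinates.
GAZETTEER = {
    "Bratislava": (48.15, 17.11),
    "Kosice": (48.72, 21.26),
    "Vienna": (48.21, 16.37),
}

def place_mentions(texts):
    """Count how often each gazetteer place is mentioned across texts."""
    counts = Counter()
    for text in texts:
        for place in GAZETTEER:
            counts[place] += text.count(place)
    return counts

def haversine_km(a, b):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(h))

docs = [
    "Deported from Bratislava toward Kosice.",
    "The transport left Vienna and passed through Bratislava.",
]
counts = place_mentions(docs)
print(counts.most_common(1))  # [('Bratislava', 2)]
dist = haversine_km(GAZETTEER["Bratislava"], GAZETTEER["Vienna"])
print(round(dist))  # roughly 55 km
```

Linking mention counts to coordinates in this way is what lets a corpus be mapped, filtered spatially, or joined to other geographic datasets, which is the core move the module walks through with real data.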
Do you work with digital images in a humanities discipline? Are you interested in exploring the spatial properties of your dataset but don't know how? Or are you simply curious about the topic? This workshop introduces participants to the technologies and technical skills required for the spatial exploration of image datasets, and is of interest to a variety of digital humanities students, scholars, and professionals.