In this presentation, part of the Friday Frontiers series, British Library Digital Curator Stella Wisdom discusses the challenges and surprises encountered while curating the 'Digital Storytelling' exhibition: a physical exhibition built entirely from digital resources.
Since their beginnings in the 17th century, newspapers have recorded billions of events, stories and personal names daily, in almost every language and country. This course from DariahTeach provides an introduction to the analysis of digitised historical newspapers, incorporating methods from Natural Language Processing for discovering, exploiting and visualising newspaper content.
A partnership between Kazerne Dossin and EHRI was established to share metadata with a broader audience. This partnership changed how archival materials are catalogued within Kazerne Dossin. Using the example of the Lewkowicz family collection, this article focuses on the transformation Kazerne Dossin went through while standardising its descriptions, and on the tools EHRI provided to optimise the workflow for collection-holding institutions.
Tableau is a powerful digital tool for analysing, mapping and interrogating data. In this short guide we focus on map-based data analysis, which has particular application in Holocaust and refugee studies.
EHRI (European Holocaust Research Infrastructure) supports the use of digital tools that can assist research into Holocaust and refugee-related topics. In a continued effort to make these tools accessible, so that researchers with no experience of digital methods will consider new ways of working with their data, this GitHub-based lesson showcases the use of entity matching tools for geographic data.
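To give a flavour of what entity matching involves, here is a toy Python sketch that reconciles variant place-name spellings against a small reference list. The names and the gazetteer are invented for illustration; the EHRI lesson itself works with dedicated entity match tools rather than this kind of hand-rolled fuzzy matching.

```python
# Toy entity matching: reconcile messy place-name spellings against a
# reference gazetteer. All names here are invented for illustration.
from difflib import get_close_matches

gazetteer = ["Bratislava", "Košice", "Prešov", "Žilina"]
raw_names = ["Bratislva", "Kosice", "Presov"]  # misspelled / unaccented variants

for name in raw_names:
    match = get_close_matches(name, gazetteer, n=1, cutoff=0.6)
    print(name, "->", match[0] if match else "no match")
```

Note that string similarity alone cannot resolve historical exonyms (e.g. 'Pressburg' for Bratislava); dedicated entity match tools combine fuzzy matching with curated lookup tables and gazetteer services.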
Many Galleries, Libraries, Archives, and Museums (GLAMs) face difficulties sharing their collections metadata in standardised and sustainable ways, so staff often rely on more familiar general-purpose office programs such as spreadsheets. However, while these tools offer a simple approach to data registration and digitisation, they don't allow for more advanced uses. This blogpost from EHRI explains a procedure for producing EAD (Encoded Archival Description) files from an Excel spreadsheet using OpenRefine.
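As a rough illustration of the underlying idea (not the OpenRefine templating workflow the post describes), the following Python sketch turns spreadsheet rows into minimal EAD-like XML files. The file name, column names and the stripped-down element set are assumptions for illustration.

```python
# Sketch: generate one minimal EAD-like XML file per spreadsheet row.
# "collections.csv" and its columns (title, identifier, date) are placeholders.
import csv
import xml.etree.ElementTree as ET

with open("collections.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        ead = ET.Element("ead")
        archdesc = ET.SubElement(ead, "archdesc", level="collection")
        did = ET.SubElement(archdesc, "did")  # descriptive identification block
        ET.SubElement(did, "unittitle").text = row["title"]
        ET.SubElement(did, "unitid").text = row["identifier"]
        ET.SubElement(did, "unitdate").text = row["date"]
        ET.ElementTree(ead).write(f"{row['identifier']}.xml",
                                  encoding="utf-8", xml_declaration=True)
```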
The Fortunoff Visual Search is a tool for both data visualisation and collection discovery in the Fortunoff Video Archive for Holocaust Testimonies. This blogpost demonstrates the Visual Search tool, including its search and filtering interface, and shows how to interpret the resulting visualisations.
This blog discusses the applicability of services such as automatic metadata generation and semantic annotation for extracting person names and locations from large datasets. This is demonstrated using oral history transcripts provided by the United States Holocaust Memorial Museum (USHMM).
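The kind of extraction described here is commonly done with named entity recognition. Below is a hedged sketch assuming spaCy and its small English model are installed; the services discussed in the blog may use different pipelines, and the sentence is an invented stand-in for a transcript.

```python
# Sketch: extract person and location names with spaCy's off-the-shelf model.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

text = "In 1943 Anna Kowalska was deported from Warsaw to Auschwitz."
doc = nlp(text)

# Keep only the entity types of interest: people and locations
for ent in doc.ents:
    if ent.label_ in {"PERSON", "GPE", "LOC"}:
        print(ent.text, ent.label_)
```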
In the late 1930s, just before war broke out in Europe, a series of chaotic deportations expelled thousands of Jews from what is now Slovakia. As part of his research, Michal Frankl investigates the backgrounds of the deported people and the trajectories of the journeys they were taken on. This practical blog describes the tools and processes of analysis, and shows how a spatially enabled database can be made useful for answering similar questions in the humanities, and in Holocaust studies in particular.
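As a toy illustration of what 'spatially enabled' means in practice, the sketch below stores records with coordinates in SQLite and retrieves those falling inside a bounding box. The records and coordinates are invented, and a real project would more likely use PostGIS or SpatiaLite for proper spatial types and indexing.

```python
# Sketch: a minimal "spatial" query over records with lat/lon columns.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE deportees (name TEXT, lat REAL, lon REAL)")
db.executemany("INSERT INTO deportees VALUES (?, ?, ?)", [
    ("Person A", 48.15, 17.11),  # invented coordinates near Bratislava
    ("Person B", 48.72, 21.26),  # invented coordinates near Košice
])

# Which records fall inside a rough bounding box around western Slovakia?
rows = db.execute(
    "SELECT name FROM deportees WHERE lat BETWEEN 48.0 AND 48.5 "
    "AND lon BETWEEN 16.8 AND 17.5"
).fetchall()
print(rows)
```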
OpenCV is a popular, free and open-source library used for a wide variety of computer vision applications. This article is intended to help you get started experimenting with OpenCV, using face detection in images as a case study.
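For orientation, here is a minimal face-detection sketch using OpenCV's Python bindings and the Haar cascade that ships with the library; the image path is a placeholder, and the article's own example may differ.

```python
# Sketch: Haar-cascade face detection with OpenCV. "photo.jpg" is a placeholder.
import cv2

# Load the pre-trained frontal-face cascade bundled with the opencv-python package
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                  # read the image from disk
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # detection runs on greyscale

# scaleFactor and minNeighbors trade recall against false positives
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                       # draw a box around each face
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", image)
print(f"Found {len(faces)} face(s)")
```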
This blog post from EHRI introduces 'quod' (querying OCRed documents), a prototype Python-based command line tool for OCRing and querying digitised historical documents, which can be used to organise large collections and improve provenance information. To demonstrate its use in context, the post walks the reader through a case study of the International Tracing Service, showing the workflow step by step from start to finish.
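This is not the quod tool itself, but a minimal sketch of the underlying OCR-then-search idea, assuming pytesseract and Pillow are installed and a Tesseract binary is on the PATH; the file names are placeholders.

```python
# Sketch: OCR scanned pages, then find which ones mention a search term.
import pytesseract
from PIL import Image

def ocr_page(path):
    """OCR a scanned page image into plain text."""
    return pytesseract.image_to_string(Image.open(path))

def query(pages, term):
    """Return the page paths whose OCRed text contains the search term."""
    return [p for p in pages if term.lower() in ocr_page(p).lower()]

# Example usage with placeholder file names
print(query(["scan_001.png", "scan_002.png"], "Arolsen"))
```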
Topic modelling is a technique by which documents within a corpus are clustered based on how certain groups of terms are used together within the text. The commonalities between such term groupings tend to form what we would normally call “topics”, providing a way to categorise documents automatically by their content, rather than through metadata-based classification. Using resources held within EHRI's collections, this notebook offers learners a step-by-step introduction to LDA (Latent Dirichlet Allocation) topic modelling using Python.
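The notebook itself should be followed for the full workflow; as a self-contained taste of LDA in Python, here is a minimal sketch using scikit-learn (the toy documents are invented stand-ins, not EHRI texts).

```python
# Sketch: LDA topic modelling over a tiny toy corpus with scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "testimony archive survivor interview memory",
    "deportation transport camp records list",
    "letters family correspondence photographs",
]

# Bag-of-words counts are the usual input to LDA
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Print the highest-weighted terms for each inferred topic
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:5]]
    print(f"Topic {i}: {', '.join(top)}")
```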