The foundational skills at the intersection of digitization, bibliography, and the Digital Humanities are crucial for many scholars, yet instruction frequently covers only one, or at most two, of these intersecting areas. For example, use of the Text Encoding Initiative (TEI) XML standard is increasingly the norm in digital scholarly editing, but many individuals working with textual materials lack access to relevant scholarly training in DH. Conversely, many DH departments lack rare book specialists.
The goal of this video class is to teach the necessary skills for understanding how the materiality of pre-modern books can be digitized and provide a foundation for putting those skills into practice. After completing this course, students will understand the fundamentals of digitization and how books and manuscripts are described in the TEI, including the msdescription and transcription modules.
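As a concrete illustration of what the msdescription module covers, a manuscript description might look like the following minimal sketch (the shelfmark, repository, and physical details here are hypothetical placeholders, not from the course):

```xml
<!-- Minimal TEI manuscript description (msdescription module).
     All identifiers and measurements below are invented examples. -->
<msDesc xmlns="http://www.tei-c.org/ns/1.0">
  <msIdentifier>
    <settlement>Vienna</settlement>
    <repository>Österreichische Nationalbibliothek</repository>
    <idno>Cod. 1234</idno>
  </msIdentifier>
  <physDesc>
    <objectDesc form="codex">
      <supportDesc material="perg">
        <support><p>Parchment.</p></support>
        <extent>120 leaves</extent>
      </supportDesc>
    </objectDesc>
  </physDesc>
</msDesc>
```

The `<msIdentifier>` locates the physical object, while `<physDesc>` records its materiality — exactly the pairing of identification and material description the course addresses.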
This class is part of the project Digitizing the Materiality of the Premodern Book and is licensed under Creative Commons BY-NC-SA. The project (2022–2023) is funded by CLARIAH-AT with the support of the BMBWF. The videos were produced by Moving Stills.
This event, organised and provided by the CLS INFRA project, offers an introductory course on textual data annotation. The workshop introduces learners to editing, annotating, and querying a text corpus without writing a single line of code, structuring texts with XML-TEI, and running an NLP tool to add linguistic information.
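Structuring a text with XML-TEI, as taught in the workshop, typically means wrapping it in a header plus text body. A minimal sketch (the title and content are invented for illustration):

```xml
<!-- Minimal well-formed TEI document: metadata in teiHeader,
     the text itself in text/body. Content is a made-up sample. -->
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>Sample corpus text</title></titleStmt>
      <publicationStmt><p>Unpublished teaching sample.</p></publicationStmt>
      <sourceDesc><p>Born-digital example.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <div type="chapter">
        <head>Chapter 1</head>
        <p>A paragraph, with a <persName>name</persName> tagged as a
           first step toward linguistic or named-entity annotation.</p>
      </div>
    </body>
  </text>
</TEI>
```

Once a corpus is structured this way, elements such as `<persName>` can carry the linguistic information that NLP tools add, and XPath or XQuery can query it.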
In this lecture from the Austrian Centre for Digital Humanities and Cultural Heritage (ACDH-CH), Laurent Romary outlines the main issues related to open science in the current scholarly landscape while showing how the Text Encoding Initiative (TEI) has been seminal in setting up an open agenda for managing, documenting, and disseminating scholarly sources and methods.
This course will introduce you to the creation of digital scholarly editions, for manuscripts or printed texts, with the help of the TEI and other related technologies.
This course is an introduction to the theories, practices, and methods of digitizing legacy dictionaries for research, preservation, and online distribution. It focuses on a particular technique of modeling and describing lexical data using the eXtensible Markup Language (XML) in accordance with the Guidelines of the Text Encoding Initiative, a de facto standard for text encoding among humanities researchers.
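To give a sense of the modeling technique involved, a single dictionary entry encoded with the TEI dictionaries elements might look like this minimal sketch (the headword and definition are invented examples, not course material):

```xml
<!-- One legacy-dictionary entry modeled in TEI: form, grammar,
     and sense are separated into explicit elements.
     The lexical content is a made-up illustration. -->
<entry xmlns="http://www.tei-c.org/ns/1.0" xml:id="cat">
  <form type="lemma">
    <orth>cat</orth>
    <pron>/kæt/</pron>
  </form>
  <gramGrp>
    <pos>noun</pos>
  </gramGrp>
  <sense n="1">
    <def>A small domesticated carnivorous mammal.</def>
  </sense>
</entry>
```

Splitting the printed entry into `<form>`, `<gramGrp>`, and `<sense>` is what turns a page image into queryable lexical data suitable for preservation and online distribution.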