Word Embeddings
Natural language processing is one of the most powerful concepts in modern linguistics and computer science. It bridges the gap between human language and machine understanding, and in turn allows machines to be programmed to perform complex linguistic tasks on their own. As one of the core concepts in machine learning, it opens up new ways of researching language and greatly expands the scale at which such research can be done. Before any of that can happen, however, linguistic information has to be converted into a format that machines can understand.
How do you feed information to artificial intelligence, and what form does it take? One of the many ways to do this is through word embeddings. Joseph Flanagan from the University of Helsinki introduces us to the world of word embeddings by explaining their history, how they can be understood, and the issues surrounding them, as well as giving practical examples of their use in the field of digital humanities.
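To make the idea concrete, here is a minimal sketch (not taken from the video) of what a word embedding is in practice: each word is mapped to a vector of numbers, and words with related meanings end up with vectors that point in similar directions. The toy three-dimensional vectors and the `cosine_similarity` helper below are purely illustrative assumptions; real embeddings are learned from large text corpora and typically have hundreds of dimensions.

```python
import numpy as np

# Toy, hand-picked 3-dimensional vectors purely for illustration --
# real embeddings are learned from large corpora, not written by hand.
embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.05, 0.10, 0.90]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: values near 1 mean similar direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Semantically related words receive similar vectors,
# so their cosine similarity is high.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```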
Joseph Flanagan is a professor of English Philology at the University of Helsinki. His research interests focus primarily on English phonetics and phonology, reproducible research, and the digital humanities.
Learning outcomes
After watching this short video, learners should be able to:
- understand what word embeddings are and how they are created
- recognise how word embeddings have been used in digital humanities over the years
- appreciate the problems surrounding the use of word embeddings and how to proceed with care when working with them