In the Deep Learning & Word Embeddings research area, we design deep learning algorithms for solving various NLP tasks.

Deep Learning is a branch of Machine Learning that uses Deep Artificial Neural Networks for modeling problems. While the field is as old as Machine Learning itself, it has experienced a tremendous revival in recent years, with a large portion of top publications devoted to the different facets of Neural Networks and their applications to NLP tasks. Particular interest has been devoted to so-called Word Embeddings: vector representations of words that encode their semantic and syntactic regularities and that can be learned efficiently and very successfully with Deep Learning architectures. Such embeddings may capture different facets of words, for instance syntactic, semantic, or even multilingual information.
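As a minimal illustration of how such vector representations encode semantic regularities, word similarity is commonly measured as the cosine of the angle between word vectors. The sketch below uses made-up 4-dimensional toy vectors, not real trained embeddings (which typically have 50-300 dimensions and are learned from large corpora):

```python
import numpy as np

# Toy 4-dimensional word vectors for illustration only;
# real embeddings are learned from large corpora.
embeddings = {
    "king":  np.array([0.8, 0.65, 0.1, 0.05]),
    "queen": np.array([0.75, 0.7, 0.12, 0.08]),
    "apple": np.array([0.05, 0.1, 0.9, 0.7]),
}

def cosine_similarity(u, v):
    # Cosine of the angle between two vectors; 1.0 means same direction.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

sim_royal = cosine_similarity(embeddings["king"], embeddings["queen"])
sim_fruit = cosine_similarity(embeddings["king"], embeddings["apple"])
# Semantically related words should receive a higher cosine similarity.
```

With well-trained embeddings, the same computation places "king" far closer to "queen" than to "apple", which is the property that makes embeddings useful as features.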



Deep Learning benefits from the fact that the representation of the linguistic input (e.g. words or sentences) is learned fully automatically. In a growing number of applications, NLP methods based on Embeddings and Deep Learning outperform approaches based on machine learning with manually constructed feature representations.

At UKP, we use Deep Learning for NLP problems ranging from sequence labeling tasks such as Named Entity Recognition, Event Detection, and Metaphor Detection to Text Classification and Information Retrieval problems. Recently, we have been applying Deep Learning in the form of word embedding features to the very active field of Argumentation Mining.
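A common way to feed word embeddings into a sequence labeling model is to represent each token by the concatenated embeddings of a small context window around it. The sketch below uses toy 3-dimensional vectors and a hypothetical lookup table; it only shows the feature construction step, on top of which a tagger (e.g. for Named Entity Recognition) would be trained:

```python
import numpy as np

DIM = 3                 # toy embedding dimensionality
PAD = np.zeros(DIM)     # zero vector used at sentence boundaries

# Hypothetical pre-trained lookup table; unknown words fall back to PAD.
lookup = {
    "Berlin": np.array([0.9, 0.1, 0.2]),
    "is":     np.array([0.1, 0.8, 0.3]),
    "nice":   np.array([0.2, 0.3, 0.9]),
}

def window_features(tokens, window=1):
    """Represent each token by the concatenated embeddings of itself
    and its neighbors, padding with zero vectors at the boundaries."""
    vecs = [lookup.get(t, PAD) for t in tokens]
    padded = [PAD] * window + vecs + [PAD] * window
    return [np.concatenate(padded[i:i + 2 * window + 1])
            for i in range(len(tokens))]

feats = window_features(["Berlin", "is", "nice"])
# Each token yields a (2*window+1)*DIM = 9-dimensional feature vector.
```

Because the embeddings themselves are learned from raw text, no manual feature engineering is needed for this representation.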

Current Projects

  • Automatically structuring story chains: As thousands of news articles are published daily, it is challenging to stay up to date on every topic. The goal of this project is to help readers tackle this information overload by extracting and analyzing the causal connections between articles. The results are useful for various tasks, e.g. getting a quicker overview of a certain topic.

  • Metaphor recognition: This project focuses on the detection and exploration of metaphors in English and German texts. Its aim is to facilitate corpus analysis of metaphors for researchers from the Digital Humanities, amongst others with methods from Deep Learning. As manual metaphor detection is costly and tedious, reliable automatic detection can ease further analysis, e.g. into whether certain metaphors are often used to convey opinions, or whether metaphors from specific sources tend to occur more often in some topics than in others.

  • C3 Aiphes: This project is about connecting information from unstructured text to information stored in Knowledge Bases. The goal is to develop methods for embedding information from these different sources in a way that benefits adaptive language processing. The learned embeddings will be applied to the task of Semantic Role Labeling, which in turn contributes to higher-level tasks such as text summarization.



Project Publications



End-to-end Representation Learning for Question Answering with Weak Supervision

Daniil Sorokin, Iryna Gurevych
In: Semantic Web Challenges: 4th SemWebEval Challenge at ESWC 2017, Vol. 769, p. 70-83, October 2017
Springer, Cham
[Online-Edition: https://doi.org/10.1007/978-3-319-69146-6_7]

Context-Aware Representations for Knowledge Base Relation Extraction

Daniil Sorokin, Iryna Gurevych
In: Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP), p. 1785-1790, September 2017
Association for Computational Linguistics
[Online-Edition: https://github.com/UKPLab/emnlp2017-relation-extraction]

UKP TU-DA at GermEval 2017: Deep Learning for Aspect Based Sentiment Detection

Ji-Ung Lee, Steffen Eger, Johannes Daxenberger, Iryna Gurevych
In: Proceedings of the GermEval 2017 – Shared Task on Aspect-based Sentiment in Social Media Customer Feedback, p. 22-29, September 2017

EELECTION at SemEval-2017 Task 10: Ensemble of nEural Learners for kEyphrase ClassificaTION

Steffen Eger, Erik-Lân Do Dinh, Ilia Kuznetsov, Masoud Kiaeeha, Iryna Gurevych
In: Proceedings of the International Workshop on Semantic Evaluation, p. 482-486, August 2017

End-to-End Non-Factoid Question Answering with an Interactive Visualization of Neural Attention Weights

Andreas Rücklé, Iryna Gurevych
In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics-System Demonstrations (ACL 2017), Vol. 4: System Demonstrations, p. 19-24, August 2017
Association for Computational Linguistics
[Online-Edition: https://github.com/UKPLab/acl2017-non-factoid-qa]

Neural End-to-End Learning for Computational Argumentation Mining

Steffen Eger, Johannes Daxenberger, Iryna Gurevych
In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), Vol. 1: Long Papers, p. 11-22, July 2017
Association for Computational Linguistics

LSDSem 2017: Exploring Data Generation Methods for the Story Cloze Test

Michael Bugert, Yevgeniy Puzikov, Andreas Rücklé, Judith Eckle-Kohler, Teresa Martin, Eugenio Martínez Cámara, Daniil Sorokin, Maxime Peyrard, Iryna Gurevych
In: Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics (LSDSem, held in conjunction with EACL2017), p. 56-61, April 2017
Association for Computational Linguistics
[Online-Edition: https://github.com/UKPLab/lsdsem2017-story-cloze]