Experimental software from UKP research projects can be found on GitHub: https://github.com/ukplab

 

German Research Foundation (DFG)

ArguAna

Argumentation mining deals with the automatic identification of arguments and their relations in natural language text. This research project addresses the specific challenges of argumentation mining for the web. We seek to establish the foundations for algorithms that apply argument mining to various forms of web argumentation, efficiently leverage the scale of the web, and complement argumentation mining with an argumentation analysis that effectively assesses important quality dimensions.

INCEpTION: Towards an Infrastructure for the Distributed Exploration and Annotation of Large Corpora and Knowledge Bases

The annotation of specific semantic phenomena often requires compiling task-specific corpora and creating or extending task-specific knowledge bases. At present, researchers need a broad range of skills and tools to address such semantic annotation tasks. INCEpTION aims to build a platform that integrates corpus extraction, annotation, and knowledge management.

Information Consolidation: A New Paradigm in Knowledge Search (DIP project)

The DIP project - an international cooperation with Bar-Ilan University and the Israel Institute of Technology - aims at the next big step in information access technology. The goal is to support users in identifying and assimilating the large set of relevant statements found within the multitude of documents typically retrieved by current search technologies. Novel methods for statement extraction, information consolidation, and relation inference represent the core research areas of this project.

QA-EduInf: Community-based Question Answering for Educational Information

The project aims at using natural language processing techniques to analyze educational information and answer user questions on various educational topics. Since a large portion of users' questions has already been asked by other people in community question answering forums and answered by educational experts or the crowd, we use the available question and answer archives to answer these questions and minimize the human effort of searching through educational information. The project consists of several components, including question classification, question and answer retrieval, answer quality assessment, and summarization.
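
As a rough illustration of the question retrieval step (a generic sketch, not the project's actual model), the following self-contained Java example matches a new question against a small archive of already answered questions using bag-of-words cosine similarity and returns the stored answer of the closest match. The class name and the toy archive entries are made up for illustration.

import java.util.*;

public class QuestionArchiveSketch {
    // toy archive: already answered question -> expert answer (illustrative data only)
    private final Map<String, String> archive = new LinkedHashMap<>();

    public void add(String question, String answer) {
        archive.put(question, answer);
    }

    // bag-of-words vector of lower-cased tokens
    private static Map<String, Integer> bow(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                counts.merge(token, 1, Integer::sum);
            }
        }
        return counts;
    }

    // cosine similarity between two bag-of-words vectors
    private static double cosine(Map<String, Integer> a, Map<String, Integer> b) {
        double dot = 0, normA = 0, normB = 0;
        for (Map.Entry<String, Integer> e : a.entrySet()) {
            dot += e.getValue() * b.getOrDefault(e.getKey(), 0);
            normA += e.getValue() * e.getValue();
        }
        for (int v : b.values()) {
            normB += v * v;
        }
        return (normA == 0 || normB == 0) ? 0 : dot / Math.sqrt(normA * normB);
    }

    // return the archived answer whose question is most similar to the new one
    public String answer(String newQuestion) {
        Map<String, Integer> query = bow(newQuestion);
        String best = null;
        double bestScore = -1;
        for (Map.Entry<String, String> e : archive.entrySet()) {
            double score = cosine(query, bow(e.getKey()));
            if (score > bestScore) {
                bestScore = score;
                best = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        QuestionArchiveSketch qa = new QuestionArchiveSketch();
        qa.add("Which school types exist in Germany?",
               "Common secondary school types are Gymnasium, Realschule and Hauptschule.");
        qa.add("How do I apply for student financial aid?",
               "Applications are handled by the local student services office.");
        System.out.println(qa.answer("What types of schools are there in Germany?"));
    }
}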

Research Training Group AIPHES ("Adaptive Information Preparation from Heterogeneous Sources"), DFG GRK 1994

AIPHES develops new methods to deal with information overload by condensing multiple documents into a single summary. We develop adaptive methods to create summaries of any type from multiple sources and across different genres. To do so, we combine different methodological backgrounds – computational linguistics, computer science, machine learning – to approach the task of extracting, summarizing and evaluating textual information from different sources.

Federal Ministry of Education and Research (BMBF)

Argument Mining from Textual Data

In order to make informed decisions, appropriate arguments are needed. However, the sheer amount of information and the complexity of many questions frequently prevent us from finding all arguments that are relevant for a reasonable decision. Within the "Decision support by means of automatically extracting natural language arguments from big data" (ArgumenText) project, the UKP Lab develops novel argument mining methods for extracting arguments from large and heterogeneous text sources in order to facilitate decision-making processes. In response to a user-defined search query, neural networks determine relevant arguments in real time and summarize them in a comprehensive way. In contrast to conventional systems, an argumentative information system can show the reasons for or against a decision.

Centre for the Digital Foundation of Research in the Humanities, Social, and Educational Sciences (CEDIFOR)

CEDIFOR intends to contribute to bridging the gap between research in the Humanities and computer-based methods, and to help researchers master the characteristic problems in this process. It is a Digital Humanities Centre that provides methodological expertise for advising researchers from the Humanities, Social, and Educational Sciences on adopting computer-based methods in their research. This concerns the planning and operational stages of projects as well as the long-term provision of result data.

European Commission (EU)

OpenMinTeD

OpenMinTeD aims to create an infrastructure that fosters and facilitates the discovery and use of text mining technologies and interoperable services. It examines several use cases identified by experts from different scientific areas, ranging from generic scholarly communication to literature on life sciences, food and agriculture, and the social sciences and humanities.

Long-term UKP team projects

Darmstadt Knowledge Processing (DKPro) Repository

The DKPro Repository consists of a growing number of scalable, robust and flexible UIMA components for various NLP tasks such as tokenization, sentence splitting, PoS tagging, negation detection, lexical chaining, and word pair extraction.
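
For illustration, a minimal uimaFIT pipeline sketch in the style of DKPro Core is shown below: it reads plain-text documents, segments them into sentences and tokens, and adds part-of-speech tags. The component and parameter names follow DKPro Core conventions but may differ between releases; the class name and the input path are placeholders.

// Minimal DKPro Core / uimaFIT pipeline sketch (component names may vary between releases)
import static org.apache.uima.fit.factory.AnalysisEngineFactory.createEngineDescription;
import static org.apache.uima.fit.factory.CollectionReaderFactory.createReaderDescription;

import org.apache.uima.fit.pipeline.SimplePipeline;

import de.tudarmstadt.ukp.dkpro.core.io.text.TextReader;
import de.tudarmstadt.ukp.dkpro.core.opennlp.OpenNlpPosTagger;
import de.tudarmstadt.ukp.dkpro.core.opennlp.OpenNlpSegmenter;

public class DkproPipelineSketch {
    public static void main(String[] args) throws Exception {
        SimplePipeline.runPipeline(
            // read plain-text documents from a directory (path is a placeholder)
            createReaderDescription(TextReader.class,
                TextReader.PARAM_SOURCE_LOCATION, "input/*.txt",
                TextReader.PARAM_LANGUAGE, "en"),
            // sentence splitting and tokenization
            createEngineDescription(OpenNlpSegmenter.class),
            // part-of-speech tagging
            createEngineDescription(OpenNlpPosTagger.class));
    }
}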

Feel free to download our DKPro Flyer.

UBY – Large-scale Sense-linked Lexical-semantic Resource

UBY is a large-scale lexical-semantic resource for natural language processing (NLP) based on the ISO standard Lexical Markup Framework (LMF). UBY combines a wide range of information from expert-constructed and collaboratively constructed resources for English and German. Most UBY-related software is developed open source on Google Code.

Other

KDSL

The main topic of this PhD program is knowledge discovery in the vast amount of scientific literature ubiquitously available on the Web and in untapped historical sources. The research employs methods for intelligently identifying and analyzing structures in scientific texts on all scales, enabling entirely new forms of access to scientific information.

Processing of Audiovisual Content: Integration of Automatic and Manual Analysis

A large quantity of modern educational content is audiovisual, and the amount of such content is increasing rapidly due to the use of consumer electronics for producing it. However, one issue arises when audiovisual content has to be manually analysed by humanities researchers: manual analysis is a hard and tedious task. The goals of this project are to create frameworks that facilitate the integration of manual and automatic analysis of audiovisual content and to investigate which machine learning methods can automatically classify educational audiovisual content.
