Darmstadt Knowledge Processing Repository

At the UKP Lab, we place a strong focus on developing the software underlying our experiments in a reusable manner. We call this body of software the Darmstadt Knowledge Processing Software Repository (DKPro).


Several products have grown out of the DKPro philosophy and have been released to the public under open source licenses:

  • CSniper is a search-based annotation tool that helps distributed annotation teams find infrequent linguistic phenomena in large corpora.
  • DKPro Core provides a set of ready-to-use software components for natural language processing, based on the Apache UIMA framework.
  • DKPro Lab is a lightweight framework for parameter sweeping experiments. It allows you to set up experiments consisting of multiple interdependent tasks in a declarative manner with minimal overhead.
  • DKPro LSR (Lexical Semantic Resources) is a unified API for several lexical-semantic resources.
  • DKPro Similarity is an open source software package for developing text similarity algorithms.
  • DKPro Spelling includes components for real-word spelling error correction and experimental frameworks for mining such errors from the Wikipedia revision history as well as for the "Helping Our Own" shared tasks 2011 and 2012.
  • DKPro Statistics is a collection of open-licensed statistical tools, currently including correlation and inter-rater agreement methods.
  • DKPro TC (Text Classification) is a UIMA-based text classification framework built on top of DKPro Core, DKPro Lab, and the Weka Machine Learning Toolkit. It is intended to facilitate supervised machine learning experiments with any kind of textual data.
  • DKPro Uby is a Java framework for creating and accessing sense-linked lexical resources in accordance with the UBY-LMF lexicon model, an instantiation of the ISO standard Lexicon Markup Framework (LMF).
  • DKPro WSD is a modular, extensible Java framework for word sense disambiguation.
  • JOWKL (Java OmegaWiki Library) is an open-source, Java-based application programming interface that provides access to all information contained in OmegaWiki, such as glosses, usage examples, translations, and much more.
  • JWKTL (Java Wiktionary Library) is a free, Java-based application programming interface that provides access to the information contained in Wiktionary.
  • JWPL (Java Wikipedia Library) is a free, Java-based application programming interface that provides access to all information contained in Wikipedia.
  • WebAnno is a general purpose web-based annotation tool for a wide range of linguistic annotations.


The principal investigator is Prof. Dr. Iryna Gurevych.

Richard Eckart de Castilho is currently the technical lead.

DKPro is a shared project of the entire UKP Lab, to which all group members contribute.


We use DKPro products in our courses.


The UKP group received two of IBM's 2008 Unstructured Information Analytics (UIA) Awards for its DKPro proposals! The award was covered in the 30 June 2008 issue of the Darmstädter Echo.


Automatic Analysis of Flaws in Pre-Trained NLP Models

Author Richard Eckart de Castilho
Date December 2016
Kind Inproceedings
Book title Proceedings of the Third International Workshop on Worldwide Language Service Infrastructure and Second Workshop on Open Infrastructures and Analysis Frameworks for Human Language Technologies (WLSI3nOIAF2) at COLING 2016
Location Osaka, Japan
Research Areas Ubiquitous Knowledge Processing, CEDIFOR, UKP_s_DKPro_Core, UKP_p_DKPro, UKP_reviewed, UKP_p_OpenMinTeD
Abstract Most tools for natural language processing (NLP) today are based on machine learning and come with pre-trained models. In addition, third parties provide pre-trained models for popular NLP tools. The predictive power and accuracy of these tools depend on the quality of these models. Downstream researchers often base their results on pre-trained models instead of training their own. Consequently, pre-trained models are an essential resource to our community. However, to the best of our knowledge, no systematic study of pre-trained models has been conducted so far. This paper reports on the analysis of 274 pre-trained models for six NLP tools and four potential causes of problems: encoding, tokenization, normalization, and change over time. The analysis is implemented in the open source tool Model Investigator. Our work 1) allows model consumers to better assess whether a model is suitable for their task, 2) enables tool and model creators to sanity-check their models before distributing them, and 3) enables improvements in tool interoperability by performing automatic adjustments of normalization or other pre-processing based on the models used.
Website https://github.com/UKPLab/coling2016-modelinspector
Full paper (pdf)
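The abstract above names encoding as one of the four potential causes of problems in pre-trained models. As a purely illustrative sketch in Python (not code from the paper's Model Investigator tool, and using a hypothetical `find_mojibake` helper and made-up vocabulary), one way such encoding damage can surface is as mojibake, i.e. UTF-8 byte sequences that were mis-decoded under a single-byte encoding such as Windows-1252 when the model was built:

```python
def find_mojibake(vocab):
    """Flag vocabulary entries that look like UTF-8 bytes
    mis-decoded under Windows-1252 (classic mojibake)."""
    suspicious = []
    for token in vocab:
        try:
            # Re-encode as cp1252 and re-decode as UTF-8; if that
            # round trip yields a different, valid string, the token
            # was likely mangled during model training.
            repaired = token.encode("cp1252").decode("utf-8")
        except (UnicodeEncodeError, UnicodeDecodeError):
            continue  # round trip impossible: token is probably fine
        if repaired != token:
            suspicious.append((token, repaired))
    return suspicious

# Hypothetical vocabulary: "Ã¼ber" and "StraÃŸe" are the mojibake
# forms of the German words "über" and "Straße".
vocab = ["Haus", "Ã¼ber", "StraÃŸe"]
print(find_mojibake(vocab))  # [('Ã¼ber', 'über'), ('StraÃŸe', 'Straße')]
```

A check along these lines would let a model consumer spot a damaged vocabulary before running a full evaluation, which is the kind of sanity-checking the paper advocates.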

Important Copyright Notice:

The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.