Text Analysis

LCCore

Our solution for collecting linguistic data:

Vocabulary lists
Document properties
Linguistic statistical data
Register information

Data Cleanup

LCTidy

Our solution for cleansing language data:

Orthographic corrections
Quality metrics
Variant detection

Terminology Compilation

LCTerm

Our solution for terminology compilation:

Term extraction
Terminology evaluation
Terminology consolidation

Content Processing

LCIndex

Our solution for content processing:

Keyword extraction
Statistical classification
Document indexation
Document clustering

 

We Have Moved

Since 1 January 2019, IAI Linguistic Content AG has been located right in the city centre of Saarbrücken, at Rathausplatz 8.


IAILC AG at tekom Fair and tcworld Conference 2018

The focus of our presentations at the tekom annual fair and tcworld conference in Stuttgart was on metadata extraction and legacy data recovery, without neglecting linguistic services such as terminology extraction, terminology evaluation, and language quality checking.

In his tool presentation, Paul Schmidt focused on Automatic Metadata Extraction for Content Delivery based on linguistic analysis routines. All presentations attracted much interest.

 


Cleaning Up Legacy Product Data Is Feasible

This was the conclusion of the talk Verjüngungskur für sprachliche Altdaten (Rejuvenating Treatment for Legacy Data) presented by Axel Theofilidis at the tekom/tcworld conference 2017. Legacy product data may exhibit specific properties due to the limitations of early IT systems, such as texts written in capital letters only, without umlauts, and with many orthographic errors. Such data can be cleaned and recovered to a degree of 95% or more using high-quality linguistic processing tools. The small remainder can easily be corrected and adapted manually.

Learn more about how legacy product data can be cleaned using high-quality linguistic processing tools. The results fit perfectly into modern language technology applications such as authoring memories or translation memories.
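To illustrate the kind of recovery involved (a minimal toy sketch, not the actual LCTidy implementation), two typical steps for all-caps legacy data are restoring case and re-inserting umlauts from transliterated digraphs via a vocabulary lookup; the vocabulary entries below are hypothetical placeholders:

```python
# Toy sketch of legacy product data cleanup: restore case and umlauts in
# all-caps German text. Hypothetical illustration, not IAI's actual tool.

# Tiny illustrative vocabulary mapping normalised forms to correct spellings.
# A real system would use a full lexicon plus linguistic analysis.
VOCAB = {
    "schluessel": "Schlüssel",   # transliterated "ue" restored to "ü"
    "gehaeuse": "Gehäuse",       # "ae" restored to "ä"
    "tuer": "Tür",
    "aus": "aus",
    "edelstahl": "Edelstahl",
}

def clean_token(token: str) -> str:
    """Look up an all-caps legacy token and return the corrected form."""
    return VOCAB.get(token.lower(), token.capitalize())

def clean_line(line: str) -> str:
    """Correct every token of a legacy product description line."""
    return " ".join(clean_token(t) for t in line.split())

print(clean_line("TUER AUS EDELSTAHL"))  # -> Tür aus Edelstahl
```

Tokens missing from the vocabulary fall back to a simple capitalisation heuristic here; in practice these would be flagged for the manual correction step mentioned above.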

 

 

Lemmatisation of the Luther Bible 2017 supports lemma-based search ...

Deutsche Bibelgesellschaft is publishing a new version of the Luther Bible with the support of IAI Linguistic Content AG.
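The idea behind lemma-based search can be sketched in a few lines (a hypothetical toy example, not the lemmatiser actually used for the Luther Bible 2017): each word form in the text is mapped to its lemma, so a query for one lemma retrieves passages containing any of its inflected forms:

```python
# Toy sketch of lemma-based search: one query lemma matches all inflected
# forms of that word. Lemma table and verses are invented examples; this is
# not the actual lemmatisation used for the Luther Bible 2017.

LEMMAS = {
    "ging": "gehen", "gehst": "gehen", "gegangen": "gehen",
    "worte": "wort", "wortes": "wort", "wort": "wort",
}

VERSES = [
    "er ging hinauf",
    "dein wort ist meines fußes leuchte",
    "die worte des predigers",
]

def lemmatise(token: str) -> str:
    """Map a word form to its lemma; unknown forms are their own lemma."""
    return LEMMAS.get(token, token)

def search(lemma: str, verses: list[str]) -> list[str]:
    """Return all verses containing any word form of the given lemma."""
    return [v for v in verses if any(lemmatise(t) == lemma for t in v.split())]

print(search("wort", VERSES))  # matches both "wort" and "worte"
```

A search for the lemma "wort" thus finds verses containing "worte" and "wortes" as well, which a plain string match would miss.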
