Text Analysis

LCCore

Our solution for collecting linguistic data (see the sketch below the list):

Vocabulary lists
Document properties
Linguistic statistical data
Register information
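
To give a flavour of the kind of data this covers, here is a minimal Python sketch (not LCCore itself) that derives a frequency-sorted vocabulary list and a few basic statistics from a text; the sample sentence and function name are invented for the illustration.

# Illustrative only: a toy vocabulary list and basic corpus statistics,
# not the LCCore implementation.
import re
from collections import Counter

def vocabulary_and_stats(text: str):
    """Return a frequency-sorted vocabulary list and a few basic statistics."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    vocab = Counter(tokens)
    stats = {
        "tokens": len(tokens),
        "types": len(vocab),
        "type_token_ratio": round(len(vocab) / len(tokens), 3) if tokens else 0.0,
    }
    return vocab.most_common(), stats

if __name__ == "__main__":
    sample = "Die Pumpe fördert Wasser. Die Pumpe ist wartungsfrei."
    vocabulary, stats = vocabulary_and_stats(sample)
    print(vocabulary)
    print(stats)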

Data Cleanup

LCTidy

Our solution for cleansing language data (illustrated below):

Orthographic corrections
Quality metrics
Variant detection
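
As a rough illustration of variant detection, the toy Python snippet below pairs up likely spelling variants by plain string similarity; the term list, similarity threshold, and function name are invented for the example, and the actual service relies on linguistic analysis rather than string comparison alone.

# Illustrative only: pairing likely orthographic variants by string similarity,
# not the LCTidy implementation.
from difflib import SequenceMatcher

def find_variant_pairs(terms, threshold=0.85):
    """Return pairs of terms similar enough to be suspected spelling variants."""
    pairs = []
    for i, a in enumerate(terms):
        for b in terms[i + 1:]:
            ratio = SequenceMatcher(None, a.lower(), b.lower()).ratio()
            if ratio >= threshold:
                pairs.append((a, b, round(ratio, 2)))
    return pairs

if __name__ == "__main__":
    terms = ["Schraubendreher", "Schraubenzieher", "SCHRAUBENDREHER", "Dichtung"]
    for a, b, score in find_variant_pairs(terms):
        print(f"{a} ~ {b} ({score})")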

Terminology Compilation

LCTerm

Our solution for terminology compilation (see the example after the list):

Term extraction
Terminology evaluation
Terminology consolidation
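
To illustrate the basic idea behind term extraction, the following toy Python sketch collects frequent stopword-free word n-grams as term candidates; the stopword list, sample text, and function name are invented for the example and do not reflect how LCTerm works internally.

# Illustrative only: frequency-based term candidate extraction,
# not the LCTerm implementation.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "are", "in", "on", "with", "for"}

def term_candidates(text: str, max_len: int = 3):
    """Collect uni- to trigram candidates without stopwords, ranked by frequency."""
    tokens = [t.lower() for t in re.findall(r"[A-Za-z-]+", text)]
    counts = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            gram = tokens[i:i + n]
            if not any(t in STOPWORDS for t in gram):
                counts[" ".join(gram)] += 1
    return counts.most_common()

if __name__ == "__main__":
    text = ("Tighten the locking screw. The locking screw secures the pump housing. "
            "Check the pump housing for leaks.")
    for term, freq in term_candidates(text)[:5]:
        print(freq, term)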

Content Processing

LCIndex

Our solution for content processing (sketched below):

Keyword extraction
Statistical classification
Document indexation
Document clustering
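
As a simple illustration of keyword extraction, the sketch below scores words with TF-IDF over a tiny document collection and keeps the top-scoring ones per document; the sample documents and function name are made up, and the snippet is not the LCIndex implementation.

# Illustrative only: TF-IDF keyword extraction over a small document collection,
# not the LCIndex implementation.
import math
import re
from collections import Counter

def tfidf_keywords(docs, top_n=3):
    """Return the top_n highest-scoring keywords for each document."""
    tokenised = [re.findall(r"[a-zäöüß-]+", d.lower()) for d in docs]
    df = Counter()
    for tokens in tokenised:
        df.update(set(tokens))          # document frequency per word
    n_docs = len(docs)
    results = []
    for tokens in tokenised:
        tf = Counter(tokens)
        scores = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
        results.append(sorted(scores, key=scores.get, reverse=True)[:top_n])
    return results

if __name__ == "__main__":
    docs = [
        "Replace the filter cartridge and clean the filter housing.",
        "Connect the pump to the power supply before starting the pump.",
        "Clean the housing and check the power supply cable.",
    ]
    for doc, keywords in zip(docs, tfidf_keywords(docs)):
        print(keywords, "<-", doc)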


IAILC AG at tekom Fair and tcworld Conference 2019

IAILC AG will be at the next annual tekom Fair and tcworld conference 2019 in Stuttgart. We will be represented in the conference programme with several talks on metadata extraction, in particular iiRDS metadata extraction, and on terminology evaluation.

IAILC AG will also be present at the fair with a booth (2/C04), presenting a rich selection of unique services such as terminology extraction, terminology evaluation, and language quality checking. A focus will be on legacy data recovery.

Contact us in advance to arrange an individual meeting at the fair.


We Have Moved

Since 1 January 2019, IAI Linguistic Content AG has been located right in the city centre of Saarbrücken, at Rathausplatz 8.


IAILC AG at tekom Fair and tcworld Conference 2018

The focus of our presentations at the tekom annual fair and tcworld conference 2018 in Stuttgart was on metadata extraction and legacy data recovery, without neglecting linguistic services such as terminology extraction, terminology evaluation, and language quality checking.

In his tool presentation, Paul Schmidt focused on Automatic Metadata Extraction for Content Delivery based on linguistic analysis routines. All presentations attracted much interest.


Cleaning Up Legacy Product Data Is Feasible

This was the conclusion of the talk Verjüngungskur für sprachliche Altdaten (Rejuvenating Treatment for Legacy Data), presented by Axel Theofilidis at the tekom/tcworld conference 2017. Legacy product data may exhibit peculiarities caused by the limitations of early IT systems, such as texts written in capital letters only, without umlauts, and with many orthographic errors. Such data can be cleaned and recovered to a degree of 95% or more using high-quality linguistic processing tools. The small remainder can easily be corrected and adapted manually.

Learn more about how legacy product data can be cleaned using high-quality linguistic processing tools. The results fit seamlessly into modern language technology applications such as authoring memories or translation memories.
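
As a heavily simplified illustration of a single recovery step, the Python sketch below restores umlauts and letter case in an all-caps legacy string with the help of a small reference lexicon; the lexicon entries and the sample string are invented, and the productive recovery relies on full linguistic analysis rather than a lookup table.

# Heavily simplified sketch of one recovery step: restoring umlauts and case
# in all-caps legacy strings with the help of a reference lexicon.
LEXICON = {"gehaeuse": "Gehäuse", "schluessel": "Schlüssel", "oelfilter": "Ölfilter",
           "fuer": "für", "pumpe": "Pumpe"}

def recover(token: str) -> str:
    """Map an all-caps legacy token to its lexicon form, if known."""
    return LEXICON.get(token.lower(), token.capitalize())

if __name__ == "__main__":
    legacy = "OELFILTER FUER PUMPE MIT GEHAEUSE"
    print(" ".join(recover(t) for t in legacy.split()))
    # -> Ölfilter für Pumpe Mit Gehäuse  (unknown tokens are only case-normalised)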


Lemmatisation of the Luther Bible 2017 supports lemma-based search ...

Deutsche Bibelgesellschaft is publishing a new version of the Luther Bible with the support of IAI Linguistic Content AG.
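
Lemma-based search means that a query finds every inflected form of a word because all word forms are indexed under their lemma. The toy Python sketch below shows the principle; the lemma table and verse snippets are made-up stand-ins, not the actual lemmatisation data produced for the Luther Bible 2017.

# Illustrative only: how a lemma-based search works in principle.
from collections import defaultdict

# word form -> lemma (toy examples)
LEMMAS = {"ging": "gehen", "gehet": "gehen", "gehe": "gehen", "geht": "gehen"}

def build_index(verses):
    """Map each lemma to the verses in which one of its word forms occurs."""
    index = defaultdict(set)
    for verse_id, text in verses.items():
        for token in text.lower().split():
            index[LEMMAS.get(token, token)].add(verse_id)
    return index

def search(index, query):
    """Look the query up under its lemma, so inflected forms match too."""
    return sorted(index.get(LEMMAS.get(query.lower(), query.lower()), set()))

if __name__ == "__main__":
    verses = {"Gen 12:4": "Da ging Abram hinaus", "Mt 28:19": "Darum gehet hin"}
    index = build_index(verses)
    print(search(index, "gehen"))   # finds both verses despite different word forms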

Our Mission ...

… is to provide linguistic intelligence to help with all kinds of document processing. Linguistic service providers, translators, and technical writers, as well as publishers and providers of high-quality information, will benefit from our tools and services in terms of both efficiency and quality.
