
Quick detection of duplicated data

Data deduplication avoids the unnecessary financial costs and reputational damage caused by repeatedly contacting the same customer about the same issue.

Applying associative memory to deduplicate address data (records referring to the same companies or individuals) increases the number of duplicates detected, including cases that are very difficult to find with traditional data-mining methods based, for example, on rules defined in SQL.

Deduplication with associative processing does not require prior normalization of the database. Associative algorithms detect similarities across the data and can "intuitively" relate records to the same person or company despite significant differences in record format, without the need to define complex analytical rules.
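As a minimal illustration of this idea, the sketch below compares raw, un-normalized records with a generic string-similarity score. It is an assumption-laden stand-in, not the associative algorithm itself: Python's difflib and the 0.85 threshold are illustrative choices made for the example.

# Minimal sketch: flag likely duplicates among raw, un-normalized
# records. difflib and the 0.85 threshold are illustrative stand-ins
# for the associative matching described above.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Case-insensitive similarity score in [0, 1].
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records = [
    "ACME Ltd., 12 Main Street, Springfield",
    "Acme Limited, 12 Main St, Springfield",
    "Globex Corp., 5 Oak Avenue, Shelbyville",
]

THRESHOLD = 0.85  # tuned per dataset; 0.85 is only an example
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        score = similarity(records[i], records[j])
        if score >= THRESHOLD:
            print(f"possible duplicate ({score:.2f}): "
                  f"{records[i]!r} ~ {records[j]!r}")

Despite the different abbreviations ("Ltd." vs "Limited", "Street" vs "St"), the first two records score above the threshold and are flagged, with no prior normalization of either record.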

The deduplication architecture (covering both the matching and the normalization steps) allows data processing to be divided into multiple parallel tasks that run at the same time on different processor cores or different machines. As a result, a database of any size can be deduplicated in a few hours rather than weeks or months.
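A simplified sketch of the parallelization idea, assuming records are first partitioned into independent blocks (here by the city field, an illustrative blocking key) so that each block can be matched on a separate core:

# Sketch of parallel deduplication: partition records into independent
# blocks, then pairwise-match each block on its own core. The blocking
# key (last comma-separated field, e.g. a city) is an assumption made
# for this example.
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations
from multiprocessing import Pool

def find_duplicates(block):
    # Pairwise-compare one block; return likely duplicate pairs.
    pairs = []
    for a, b in combinations(block, 2):
        if SequenceMatcher(None, a.lower(), b.lower()).ratio() >= 0.85:
            pairs.append((a, b))
    return pairs

def dedupe_parallel(records, workers=4):
    # Group records by a cheap blocking key so blocks are independent
    # and can be processed in parallel without coordination.
    blocks = defaultdict(list)
    for r in records:
        blocks[r.split(",")[-1].strip().lower()].append(r)
    with Pool(workers) as pool:
        results = pool.map(find_duplicates, list(blocks.values()))
    return [pair for block_pairs in results for pair in block_pairs]

if __name__ == "__main__":
    data = [
        "ACME Ltd., 12 Main Street, Springfield",
        "Acme Limited, 12 Main St, Springfield",
        "Globex Corp., 5 Oak Avenue, Shelbyville",
    ]
    for a, b in dedupe_parallel(data):
        print(f"duplicate: {a!r} ~ {b!r}")

Because the blocks never interact, adding more cores (or more machines, with a distributed task queue in place of Pool) scales throughput roughly linearly, which is what turns a weeks-long job into a matter of hours.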

Tell us about your business’ needs, and we’ll find a solution