Abstract
Data integration is one of the trending topics in modern computer science. It is a common task to deliver a unified perspective on a set of heterogeneous data that serves as a consensus of the participating elements. Many computationally expensive solutions can be found in the literature; moreover, it is difficult to determine how potential changes applied to the inputs of these methods impact their results. In this paper we present a framework for managing evolving data and handling the consequences of unforeseen alterations of inputs while performing sound data integration in acceptable time. We base our work on consensus theory and provide theoretical foundations, an experimental evaluation, and a statistical analysis of the obtained results.