Abstract
Timely and continuous access to worldwide clinical trial data for safety reporting and analysis is becoming increasingly important as the number of clinical trials increases and the trend toward large-scale and multinational studies continues. As a consequence, new clinical data processing systems and changes in work practices are rapidly being introduced to ensure that limited resources are used effectively to track and correct errors and accelerate the time to a clean file. Traditional data processing methods are gradually being replaced as new technology becomes available. Several companies have experimented with distributed processing systems, usually at investigator sites and occasionally at subsidiary offices, in order to improve study monitoring, reduce data errors, and improve dataflow. These systems must be easy to implement and maintain while providing timely access to data at multiple remote locations.
This paper will review the rationale for developing distributed database systems, comment on some of the different technologies that have been used, and present some of the functional requirements and organizational changes that characterize this approach.
