Abstract
Public organizations release their data to the public in pursuit of open government. Various benchmarks for evaluating the progress of open data adoption have emerged in recent years. To foster a better understanding of the common and differentiating elements of open data benchmarks and to identify the methodologies and metrics that drive their variation, this article compares open data benchmarks and describes the lessons learned from their analysis. Using an interpretive meta-analysis approach, five benchmarks were compared with regard to meta-data (key concepts, themes, and metaphors), meta-methods (the methodologies underlying the benchmarks), and meta-theories (the theoretical assumptions at the foundation of the benchmarks). It was found that each benchmark has its own strengths and weaknesses and is applicable in specific situations. Because the open data benchmarks differ in scope and focus and use different methodologies, they produce different results in terms of country rankings. There is a clear gap in both the literature and the benchmarks regarding the evolution of end-user practices and the individual adoption of open data. Finally, lessons are drawn for the development of more comprehensive open data benchmarks and for open government evaluation in general.