Summary:

Background: Many hospital rankings rely on the frequency of adverse outcomes and are based on administrative data. In the study presented here, we examined to what extent the available administrative data of German sickness funds allow for an adequate hospital ranking, and compared this with rankings based on additional information derived from a patient survey. Total hip replacement was chosen as an example procedure. In part I of the publication, we present the results of the approach based on administrative data.

Methods: We used administrative data from the AOK-Lower Saxony for the years 2000, 2001 and 2002. The study population comprised all beneficiaries who received a total hip replacement in 2000 or 2001. The performance indicators used were "critical incident (mortality or revision)" and "number of revisions" within the first year. Hospitals were ranked if they performed at least 20 procedures on AOK beneficiaries in each of the two years. Multivariate modelling (logistic and Poisson regression) was used to estimate the performance indicators from case-mix variables (age, sex, co-diagnoses) and hospital characteristics (hospital size, surgical volume). The actual ranking was based on these multivariate models, excluding the hospital variables and adding dummy variables for each hospital. Hospitals were ranked by their case-mix adjusted odds ratio or SMR, respectively, with respect to a pre-selected reference hospital (a minimal sketch of this ranking step is given below). The resulting rankings were compared with each other with regard to temporal stability and the impact of case-mix variables.

Results: About 4500 beneficiaries received total hip replacement in each year (n2000: 4482; n2001: 4579). The ranking included 65 hospitals. Comparing the years 2000 and 2001, the temporal stability of the rankings based on a single performance indicator was low (Spearman rank correlation coefficients 0.158 and 0.191). The agreement of rankings based on different performance indicators in the same year was high (Spearman: 0.80 and 0.85). Including case-mix variables improved the model fit remarkably. Odds ratios for hospitals varied from 0.0 to 10.0 (critical incident) and SMRs from 0.0 to 6.1 (number of revisions).

Conclusions: Using the data of two adjacent years together improves the reliability of hospital rankings. Adding the patient variables derived from the administrative data improves the explanation of the performance indicators. Whether this is sufficient to account for case-mix cannot be determined at this point. If case-mix was addressed properly, the rankings showed large differences in the quality of care, indicating a need for action. In the second part of the publication, we will discuss whether administrative data are good enough to provide information on relevant health outcomes and case-mix, or whether hospital rankings should be based on additional information from patient surveys.
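To make the ranking step concrete, the following is a minimal sketch of a case-mix adjusted hospital ranking of the kind described in the Methods, written in Python with statsmodels. It is not the authors' implementation: the column names (critical_incident, age, sex, n_codiagnoses, hospital), the reference hospital label "H01", and the choice of library are all illustrative assumptions. The logistic model with hospital dummies yields, for each hospital, an adjusted odds ratio relative to the reference hospital; an analogous Poisson model would give SMR-like rate ratios for the "number of revisions" indicator.

```python
# Sketch only: hypothetical patient-level DataFrame with columns
# critical_incident (0/1), age, sex, n_codiagnoses, hospital (ID).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def rank_hospitals(df: pd.DataFrame, reference: str = "H01") -> pd.DataFrame:
    # The pre-selected reference hospital is the baseline category, so each
    # hospital dummy coefficient is a case-mix adjusted log odds ratio
    # relative to that hospital.
    formula = (
        "critical_incident ~ age + C(sex) + n_codiagnoses "
        f"+ C(hospital, Treatment(reference='{reference}'))"
    )
    model = smf.logit(formula, data=df).fit(disp=False)

    # Keep only the hospital dummy coefficients and convert them to odds ratios.
    params = model.params
    hosp = params[params.index.str.startswith("C(hospital")]
    ranking = (
        pd.DataFrame({"odds_ratio": np.exp(hosp)})
        .sort_values("odds_ratio")  # lower adjusted OR = fewer adverse outcomes
        .rename_axis("hospital_dummy")
        .reset_index()
    )
    return ranking
```

A Poisson version for the revision counts could be obtained with smf.poisson and the same dummy coding, using the number of procedures as exposure; the exponentiated hospital coefficients would then play the role of the SMRs mentioned in the Results.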