Summary:
Background: Antimicrobial IgG avidity is measured in the diagnosis of infectious diseases to date primary infection or immunization. It is generally determined by one of two approaches, termed here the avidity index (AI) and the end-point ratio (EPR), which differ in complexity and workload. Although several variants of these approaches have been introduced, little comparative information exists on their clinical utility.
Methods: This study was performed to systematically compare the performance of these approaches and to design a new, sensitive and specific calculation method suitable for easy implementation in the laboratory. The avidities obtained by AI, EPR, and the newly developed approach were compared across parvovirus B19, cytomegalovirus, Toxoplasma gondii, rubella virus, and Epstein–Barr virus panels comprising 460 sera from individuals with a recent primary infection or long-term immunity.
Results: With optimal IgG concentrations, all approaches performed equally well, appropriately discriminating primary infections from past immunity (area under the receiver operating characteristic curve (AUC) 0.93–0.94). At lower IgG concentrations, however, the avidity status (low, borderline, high) changed in 17% of samples using AI (AUC 0.88), compared with 4% using EPR (AUC 0.91) and 6% using the new method (AUC 0.93).
Conclusions: The new method measures IgG avidity accurately over a broad range of IgG levels, whereas the popular AI approach requires a sufficiently high antibody concentration.
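As an informal illustration only (the abstract does not define the calculations), the sketch below shows how a conventional urea-wash avidity index is commonly computed from paired ELISA absorbances and mapped to a low/borderline/high status. The function names and cut-off values are hypothetical examples, and neither the EPR calculation nor the study's newly developed method is reproduced here.

```python
# Minimal sketch of the conventional avidity index (AI), assuming the
# common definition: ratio of antigen-bound IgG signal measured with vs.
# without a chaotropic (urea) wash. Cut-offs below are hypothetical and
# assay-specific; they are not taken from the study.

def avidity_index(od_urea_washed: float, od_untreated: float) -> float:
    """Return the avidity index (%) from paired absorbance readings."""
    if od_untreated <= 0:
        raise ValueError("Untreated-well absorbance must be positive")
    return 100.0 * od_urea_washed / od_untreated


def classify_avidity(ai_percent: float,
                     low_cutoff: float = 40.0,
                     high_cutoff: float = 60.0) -> str:
    """Map an avidity index to a low / borderline / high status."""
    if ai_percent < low_cutoff:
        return "low"         # pattern suggestive of recent primary infection
    if ai_percent > high_cutoff:
        return "high"        # pattern suggestive of past immunity
    return "borderline"


# Example: a serum retaining 30% of its untreated signal after the
# urea wash would be reported as low avidity.
print(classify_avidity(avidity_index(0.30, 1.00)))  # -> "low"
```

Because the AI depends on a single absorbance ratio, its reading shifts when the IgG concentration falls outside the linear range of the assay, which is consistent with the abstract's observation that AI results changed most often at lower IgG concentrations.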