Uncertainty quantification in ToxCast high throughput screening.

Bibliographic Details
Main Authors: Eric D Watt, Richard S Judson
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2018-01-01
Series: PLoS ONE
Online Access: http://europepmc.org/articles/PMC6059398?pdf=render
id doaj-c9e466ef712141b4b12a6b168b23ef71
record_format Article
spelling doaj-c9e466ef712141b4b12a6b168b23ef71 2020-11-25T02:47:07Z eng | Public Library of Science (PLoS) | PLoS ONE | 1932-6203 | 2018-01-01 | 13(7): e0196963 | doi:10.1371/journal.pone.0196963 | Uncertainty quantification in ToxCast high throughput screening. | Eric D Watt; Richard S Judson | http://europepmc.org/articles/PMC6059398?pdf=render
collection DOAJ
language English
format Article
sources DOAJ
author Eric D Watt
Richard S Judson
spellingShingle Eric D Watt
Richard S Judson
Uncertainty quantification in ToxCast high throughput screening.
PLoS ONE
author_facet Eric D Watt
Richard S Judson
author_sort Eric D Watt
title Uncertainty quantification in ToxCast high throughput screening.
title_short Uncertainty quantification in ToxCast high throughput screening.
title_full Uncertainty quantification in ToxCast high throughput screening.
title_fullStr Uncertainty quantification in ToxCast high throughput screening.
title_full_unstemmed Uncertainty quantification in ToxCast high throughput screening.
title_sort uncertainty quantification in toxcast high throughput screening.
publisher Public Library of Science (PLoS)
series PLoS ONE
issn 1932-6203
publishDate 2018-01-01
description High throughput screening (HTS) projects like the U.S. Environmental Protection Agency's ToxCast program are required to address the large and rapidly increasing number of chemicals for which we have little to no toxicity measurements. Concentration-response parameters such as potency and efficacy are extracted from HTS data using nonlinear regression, and models and analyses built from these parameters are used to predict in vivo and in vitro toxicity of thousands of chemicals. How these predictions are impacted by uncertainties that stem from parameter estimation and propagate through the models and analyses has not been well explored. While data size and complexity make uncertainty quantification computationally expensive for HTS datasets, continued advancements in computational resources have allowed these computational challenges to be met. This study uses nonparametric bootstrap resampling to calculate uncertainties in concentration-response parameters from a variety of HTS assays. Using the ToxCast estrogen receptor model for bioactivity as a case study, we highlight how these uncertainties can be propagated through models to quantify the uncertainty in model outputs. Uncertainty quantification in model outputs is used to identify potential false positives and false negatives and to determine the distribution of model values around semi-arbitrary activity cutoffs, increasing confidence in model predictions. At the individual chemical-assay level, curves with high variability are flagged for manual inspection or retesting, focusing subject-matter-expert time on results that need further input. This work improves the confidence of predictions made using HTS data, increasing the ability to use these data in risk assessment.
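
The sketch below illustrates the nonparametric bootstrap idea named in the description: resample the concentration-response observations with replacement, refit a curve to each resample, and use the resulting distributions of potency (AC50) and efficacy to judge how often a chemical falls on either side of an activity cutoff. The Hill model, sample data, and cutoff value here are illustrative assumptions for this sketch, not the actual ToxCast/tcpl pipeline or its parameters.

# Minimal sketch of nonparametric bootstrap uncertainty for a
# concentration-response fit (illustrative data, not ToxCast output).
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, top, ac50):
    """Two-parameter Hill model (slope fixed at 1 for simplicity)."""
    return top * conc / (ac50 + conc)

rng = np.random.default_rng(0)

# Hypothetical concentration-response data (concentration in uM, normalized response).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = np.array([2.0, 5.0, 15.0, 35.0, 55.0, 70.0, 78.0])

n_boot = 1000
tops, ac50s = [], []
for _ in range(n_boot):
    # Resample observation indices with replacement (nonparametric bootstrap).
    idx = rng.integers(0, len(conc), size=len(conc))
    try:
        popt, _ = curve_fit(hill, conc[idx], resp[idx], p0=[80.0, 5.0], maxfev=5000)
    except RuntimeError:
        continue  # skip resamples where the fit fails to converge
    tops.append(popt[0])
    ac50s.append(popt[1])

tops, ac50s = np.array(tops), np.array(ac50s)
cutoff = 20.0  # illustrative activity cutoff on the efficacy scale

print(f"AC50: median {np.median(ac50s):.2f}, 95% interval "
      f"[{np.percentile(ac50s, 2.5):.2f}, {np.percentile(ac50s, 97.5):.2f}]")
print(f"Efficacy (top): median {np.median(tops):.1f}")
# Fraction of bootstrap fits below the cutoff: a large value for a chemical
# called "active" flags a potential false positive for expert review.
print(f"P(efficacy < cutoff) = {np.mean(tops < cutoff):.3f}")

The same bootstrap distributions can be pushed through a downstream model (such as the estrogen receptor bioactivity model described above) to obtain a distribution of model scores per chemical rather than a single point value, which is how borderline calls near the cutoff are identified.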
url http://europepmc.org/articles/PMC6059398?pdf=render
work_keys_str_mv AT ericdwatt uncertaintyquantificationintoxcasthighthroughputscreening
AT richardsjudson uncertaintyquantificationintoxcasthighthroughputscreening
_version_ 1724754445161463808