Oversmoothing regularization with $\ell^1$-penalty term

In Tikhonov-type regularization for ill-posed problems with noisy data, the penalty functional is typically interpreted to carry a-priori information about the unknown true solution. We consider in this paper the case that the corresponding a-priori information is too strong, such that the penalty functional is oversmoothing, which means that its value is infinite for the true solution. In the case of oversmoothing penalties, convergence and convergence rate assertions for the regularized solutions are difficult to derive; only for the Hilbert scale setting have convincing results been published. We attempt to extend this setting to $\ell^1$-regularization when the solutions are only in $\ell^2$. Unfortunately, we have to restrict our studies to the case of bounded linear operators with diagonal structure, mapping between $\ell^2$ and a separable Hilbert space. For this subcase, however, we are able to formulate and prove a convergence theorem, which we support with numerical examples.
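
To make the setting of the abstract concrete, the following sketch (in Python, not taken from the paper) illustrates $\ell^1$-penalized Tikhonov regularization for a diagonal operator $(Ax)_k = a_k x_k$ on a truncated sequence space. Because the operator is diagonal, the Tikhonov functional $\|Ax - y^\delta\|^2 + \alpha \|x\|_{\ell^1}$ decouples componentwise and the minimizer is obtained by soft-thresholding; a true solution with coefficients $1/k$ belongs to $\ell^2$ but not to $\ell^1$, so the penalty is oversmoothing in the sense described above. The decay rates, noise level, and parameter choice below are illustrative assumptions, not the paper's numerical examples.

```python
# Minimal illustrative sketch (not the authors' algorithm or code):
# l1-penalized Tikhonov regularization for a diagonal operator
# (Ax)_k = a_k * x_k, truncated to n components. The true solution
# x_dag with coefficients 1/k lies in l2 but not in l1, so the
# l1-penalty is oversmoothing. All decay rates, the noise level,
# and alpha are assumptions chosen for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 2000                               # truncation level of the sequence space
k = np.arange(1, n + 1, dtype=float)

a = k**-2.0                            # diagonal of A (ill-posed decay)
x_dag = 1.0 / k                        # true solution: in l2, not in l1

y = a * x_dag                          # exact data
delta = 1e-6                           # noise level
y_delta = y + delta * rng.standard_normal(n) / np.sqrt(n)   # noisy data


def tikhonov_l1_diag(a, y_delta, alpha):
    """Componentwise minimizer of sum_k (a_k x_k - y_k)^2 + alpha sum_k |x_k|.
    For a diagonal operator the problem decouples, and each component is the
    soft-thresholded least-squares fit y_k / a_k with threshold alpha / (2 a_k^2)."""
    ls = y_delta / a
    thresh = alpha / (2.0 * a**2)
    return np.sign(ls) * np.maximum(np.abs(ls) - thresh, 0.0)


alpha = 1e-8                           # regularization parameter (illustrative choice)
x_alpha = tikhonov_l1_diag(a, y_delta, alpha)

print("l2 error :", np.linalg.norm(x_alpha - x_dag))
print("nonzeros :", np.count_nonzero(x_alpha))
```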

Bibliographic Details
Main Authors: Daniel Gerth, Bernd Hofmann (Faculty for Mathematics, Chemnitz University of Technology, 09107 Chemnitz, Germany)
Format: Article
Language: English
Published: AIMS Press, 2019-08-01
Series: AIMS Mathematics
ISSN: 2473-6988
Subjects: regularization; inverse problems; linear ill-posed operator equations; sparsity; $\ell^1$-regularization; Tikhonov functional; oversmoothing penalty; convergence rate
Online Access: https://www.aimspress.com/article/10.3934/math.2019.4.1223/fulltext.html
Record ID: doaj-26f21c30ba03411ea3ff856ab28c91f6