An Experience-Based Direct Generation Approach to Automatic Image Cropping

Automatic Image Cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems: generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method to crop images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor that influences aesthetics. Prior approaches for automatic image cropping did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We therefore benchmark our method on public datasets for two related tasks: first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation, which requires fixed aspect ratio outputs but where aesthetics are not crucial. We show that our strategy is competitive with or performs better than existing methods in both of these tasks. Furthermore, our one-stage model is easier to train and significantly faster at inference than existing two-stage or end-to-end methods. We present a qualitative evaluation study and find that our model generalizes to diverse images from unseen datasets and often retains compositional properties of the original images after cropping. We also find that the model can generate crops with better aesthetics than the ground truth in the MIRThumb dataset for image thumbnail generation, with no fine-tuning. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image cropping algorithm.
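The abstract describes a one-stage CNN that regresses crop bounding boxes for several fixed aspect ratios directly from the input image, without candidate generation, saliency detection, or aesthetics scoring. The sketch below is a minimal illustration of that formulation only, not the authors' architecture: the backbone, layer sizes, aspect-ratio list, and suggested loss are assumptions made for demonstration.

```python
# Illustrative sketch only (not the authors' code): a one-stage regressor that
# maps an image to one normalized crop box per fixed aspect ratio.
# Backbone depth, head sizes, and the aspect-ratio list below are assumptions.
import torch
import torch.nn as nn

ASPECT_RATIOS = [(16, 9), (4, 3), (1, 1)]  # hypothetical target ratios


class DirectCropRegressor(nn.Module):
    """Predicts (x, y, w, h) in [0, 1] for each fixed aspect ratio in a single
    forward pass -- no crop candidates, saliency maps, or aesthetics scoring."""

    def __init__(self, num_ratios: int = len(ASPECT_RATIOS)):
        super().__init__()
        self.num_ratios = num_ratios
        # Toy CNN feature extractor; a full implementation would likely use a
        # pretrained backbone, but any convolutional encoder fits this
        # one-stage formulation.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: 4 box coordinates per aspect ratio.
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 256), nn.ReLU(inplace=True),
            nn.Linear(256, num_ratios * 4),
        )

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.features(images)
        boxes = torch.sigmoid(self.head(feats))    # keep coordinates normalized
        return boxes.view(-1, self.num_ratios, 4)  # (batch, ratio, [x, y, w, h])


if __name__ == "__main__":
    model = DirectCropRegressor()
    crops = model(torch.randn(2, 3, 224, 224))
    print(crops.shape)  # torch.Size([2, 3, 4]); train against editor crops, e.g. with an L1 loss
```

Because all aspect ratios share one backbone and are predicted in a single forward pass, inference cost does not grow with the number of output ratios, which is consistent with the speed advantage the abstract claims for the one-stage design over two-stage or candidate-ranking pipelines.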

Bibliographic Details
Main Authors: Casper L. Christensen (ORCID: 0000-0003-2560-0259), Aneesh Vartakavi (ORCID: 0000-0001-8088-5782)
Author Affiliation: Gracenote, Emeryville, CA, USA
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access, Vol. 9, pp. 107600-107610
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3100816
Subjects: Automatic image cropping; convolutional neural networks; image enhancement; image processing
Online Access: https://ieeexplore.ieee.org/document/9500226/