Scene Text Deblurring Using Text-Specific Multiscale Dictionaries
Keywords
multiscale dictionaries; nonuniform deblurring; scene text; text localization
Abstract
Text in natural scenes carries critical semantic clues for understanding images. When capturing natural scene images, especially with handheld cameras, a common artifact, namely blur, frequently occurs. To improve the visual quality of such images, deblurring techniques are desired, which also play an important role in character recognition and image understanding. In this paper, we study the problem of recovering clear scene text by exploiting the characteristics of text fields. A series of text-specific multiscale dictionaries (TMD) and a natural scene dictionary are learned to separately model the priors on the text and nontext fields. The TMD-based text field reconstruction effectively handles the different scales of text strings in a blurry image. Furthermore, an adaptive nonuniform deblurring method is proposed to efficiently handle real-world spatially varying blur. Dictionary learning allows more flexible modeling of the text field properties, and the combination with the nonuniform method is more appropriate in real situations where blur kernel sizes are depth dependent. Experimental results show that the proposed method achieves deblurring results with better visual quality than state-of-the-art methods.
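To illustrate the dictionary-based prior described in the abstract, the sketch below learns one dictionary per patch scale from clean text-region images and sparse-codes a patch against the dictionary of the matching scale. This is only a minimal illustration under stated assumptions, not the paper's implementation: the functions learn_multiscale_dictionaries and reconstruct_patch, the patch scales, the sparsity weight, and the use of scikit-learn's MiniBatchDictionaryLearning are all illustrative choices.

```python
# Illustrative sketch only: per-scale dictionary learning and sparse coding
# for text patches. NOT the TMD method from the paper; all parameter choices
# (patch scales, number of atoms, sparsity weight alpha) are assumptions.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

def learn_multiscale_dictionaries(text_images, scales=(8, 12, 16), n_atoms=256):
    """Learn one dictionary per patch scale from clean text-region images."""
    dictionaries = {}
    for s in scales:
        # Sample patches of size s x s from every training image.
        patches = np.vstack([
            extract_patches_2d(img, (s, s), max_patches=2000, random_state=0)
            .reshape(-1, s * s)
            for img in text_images
        ]).astype(np.float64)
        patches -= patches.mean(axis=1, keepdims=True)  # remove DC component
        learner = MiniBatchDictionaryLearning(
            n_components=n_atoms, alpha=1.0, batch_size=64, random_state=0
        )
        learner.fit(patches)
        dictionaries[s] = learner
    return dictionaries

def reconstruct_patch(dictionaries, patch):
    """Sparse-code a patch with the dictionary of its scale and rebuild it."""
    s = patch.shape[0]
    learner = dictionaries[s]
    dc = patch.mean()
    codes = learner.transform(patch.reshape(1, -1).astype(np.float64) - dc)
    return (codes @ learner.components_).reshape(s, s) + dc
```

A separate dictionary learned from generic natural-scene patches could be used in the same way for nontext regions, mirroring the text/nontext split described in the abstract.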
Publication Date
4-1-2015
Publication Title
IEEE Transactions on Image Processing
Volume
24
Issue
4
Number of Pages
1302-1314
Document Type
Article
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/TIP.2015.2400217
Copyright Status
Unknown
Socpus ID
84923567963 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84923567963
STARS Citation
Cao, Xiaochun; Ren, Wenqi; Zuo, Wangmeng; Guo, Xiaojie; and Foroosh, Hassan, "Scene Text Deblurring Using Text-Specific Multiscale Dictionaries" (2015). Scopus Export 2015-2019. 295.
https://stars.library.ucf.edu/scopus2015/295