Adaptive Boundary Proposal Network for Arbitrary Shape Text Detection
Scene text detection methods have achieved impressive performance in many applications. They still struggle, however, when text characteristics are challenging, for example when shape, texture, or scale varies widely.
A recent paper on arXiv.org proposes a novel adaptive boundary proposal network for arbitrary shape text detection. The boundary proposal model is composed of multi-layer dilated convolutions.
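Multi-layer dilated convolutions enlarge the receptive field without extra parameters by inserting gaps between kernel taps. As a minimal illustration (not the paper's implementation), here is a pure-Python 1D dilated convolution; the function name and toy inputs are hypothetical:

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution with a dilated kernel.

    The kernel's effective span grows to (len(kernel) - 1) * dilation + 1
    input samples, which is why stacking layers with increasing dilation
    covers long text regions at low cost.
    """
    span = (len(kernel) - 1) * dilation + 1
    out = []
    for start in range(len(signal) - span + 1):
        out.append(sum(kernel[k] * signal[start + k * dilation]
                       for k in range(len(kernel))))
    return out

# A 3-tap kernel with dilation 2 spans 5 input samples per output,
# summing x[i], x[i+2], and x[i+4] at each position.
signal = [1, 2, 3, 4, 5, 6]
result = dilated_conv1d(signal, [1, 1, 1], dilation=2)
```

In a real detector the same idea is applied with 2D convolutions over feature maps, with dilation rates chosen per layer.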
The coarse boundary proposals roughly locate text regions and cleanly separate adjacent text instances. An adaptive boundary deformation model, built on an encoder-decoder structure, then performs iterative boundary deformation to generate accurate text instance shapes under the guidance of prior information. Experiments demonstrate that the proposed framework achieves state-of-the-art performance on several datasets.
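The iterative refinement loop can be sketched as follows: at each step a model predicts a per-vertex offset and the boundary vertices move accordingly. This is a minimal sketch, not the paper's network; the `toy_offsets` stand-in (pulling each vertex halfway toward a fixed target contour) merely plays the role of the learned offset predictor:

```python
def deform_boundary(points, predict_offsets, iterations=3):
    """Iteratively refine a coarse boundary: each step applies
    model-predicted (dx, dy) offsets to every boundary vertex."""
    for _ in range(iterations):
        offsets = predict_offsets(points)
        points = [(x + dx, y + dy)
                  for (x, y), (dx, dy) in zip(points, offsets)]
    return points

# Hypothetical stand-in for the learned model: pull each vertex
# halfway toward a fixed target contour (here, the unit square).
target = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]

def toy_offsets(points):
    return [((tx - x) * 0.5, (ty - y) * 0.5)
            for (x, y), (tx, ty) in zip(points, target)]

coarse = [(0.4, 0.4), (0.6, 0.4), (0.6, 0.6), (0.4, 0.6)]
refined = deform_boundary(coarse, toy_offsets, iterations=3)
```

Each iteration halves the remaining gap to the target, so after three iterations the vertices sit within 1/8 of the original distance; a trained deformation model plays the same role with offsets conditioned on image features.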
Arbitrary shape text detection is a challenging task due to the high variety and complexity of scene texts. In this paper, we propose a novel unified relational reasoning graph network for arbitrary shape text detection. In our method, an innovative local graph bridges a text proposal model based on a Convolutional Neural Network (CNN) and a deep relational reasoning network based on a Graph Convolutional Network (GCN), making our network end-to-end trainable. Concretely, every text instance is divided into a series of small rectangular components, and the geometry attributes (e.g., height, width, and orientation) of the small components are estimated by our text proposal model. Given these geometry attributes, the local graph construction model roughly establishes linkages between different text components. To further reason about and deduce the likelihood of linkages between a component and its neighbors, we adopt a graph-based network to perform deep relational reasoning on local graphs. Experiments on publicly available datasets demonstrate the state-of-the-art performance of our method.
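The pipeline above has two graph-related steps: building local graphs that tentatively link nearby components, and a GCN that reasons over those links. The following sketch, under simplified assumptions (components linked purely by center distance, scalar node features, mean aggregation in place of a learned GCN layer; all names are hypothetical), shows the shape of both steps:

```python
import math

def build_local_graph(components, radius):
    """Tentatively link text components whose centers lie within
    `radius`. Each component is (cx, cy, height, width, orientation);
    only the center is used in this simplified sketch."""
    edges = set()
    for i, a in enumerate(components):
        for j, b in enumerate(components):
            if i < j and math.dist(a[:2], b[:2]) <= radius:
                edges.add((i, j))
    return edges

def gcn_step(features, edges):
    """One mean-aggregation message-passing step: each node's feature
    becomes the average over itself and its linked neighbors, the
    basic operation a GCN layer performs (here without learned weights)."""
    n = len(features)
    neigh = {i: [i] for i in range(n)}
    for i, j in edges:
        neigh[i].append(j)
        neigh[j].append(i)
    return [sum(features[k] for k in neigh[i]) / len(neigh[i])
            for i in range(n)]

# Three components: two adjacent (same text line), one far away.
comps = [(0, 0, 10, 5, 0.0), (8, 0, 10, 5, 0.0), (100, 0, 10, 5, 0.0)]
edges = build_local_graph(comps, radius=15)   # only the near pair links
mixed = gcn_step([1.0, 3.0, 5.0], edges)      # linked nodes share features
```

In the actual method, the linkage likelihoods produced by the reasoning network decide which components are grouped into the same text instance, and boundaries are then recovered from the grouped components.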