Please use this identifier to cite or link to this item: https://repository.cihe.edu.hk/jspui/handle/cihe/1255
Title: Vision‐based place recognition using ConvNet features and temporal correlation between consecutive frames
Author(s): Siu, Wan Chi
Li, C.-T.
Lun, D. P. K.
Issue Date: 2019
Publisher: IEEE
Related Publication(s): 2019 IEEE Intelligent Transportation Systems Conference (ITSC) Proceedings
Start page: 3062
End page: 3067
Abstract: 
The most challenging part of vision-based place recognition is the wide variety in the appearance of places. However, temporal information between consecutive frames can be used to infer a vehicle's next locations and to obtain information about its ego-motion. Effective use of temporal information narrows the search range for the next locations, so an efficient place recognition system can be achieved. This paper presents a robust vision-based place recognition method using recent discriminative ConvNet features and proposes a flexible tubing strategy that groups consecutive frames based on their similarities. With the tubing strategy, effective pair searching can be achieved. We also suggest adding further variations in the appearance of places to enhance the variety of the training data, and we fine-tune an off-the-shelf network model, CALC, so that its extracted features generalize better. Experimental results show that our proposed temporal-correlation-based recognition strategy with the fine-tuned model achieves the best F1 score (0.572), an improvement over the original CALC model. The proposed place recognition method is also faster than the linear full search method by a factor of 2.15.
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/1255
DOI: 10.1109/ITSC.2019.8917364
CIHE Affiliated Publication: No
Appears in Collections:CIS Publication
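
The abstract only outlines the tubing strategy, so the following Python sketch is an illustration of the general idea rather than the authors' algorithm: consecutive frames whose ConvNet descriptors remain mutually similar are grouped into a "tube", and a query is first matched against one representative per tube before a frame-level search inside the best tube, which is how temporal grouping can narrow the search range compared with a linear full search. The function names, the cosine-similarity measure, and the 0.9 threshold are illustrative assumptions, not taken from the paper.

import numpy as np

def build_tubes(descriptors, sim_threshold=0.9):
    """Group consecutive frame descriptors into tubes by similarity.

    descriptors: (N, D) array of L2-normalised ConvNet features,
                 one row per video frame in temporal order.
    Returns a list of (start_index, end_index, representative) tuples.
    NOTE: the first frame of each tube is used as its representative
    and 0.9 is an assumed threshold; both are illustrative choices.
    """
    tubes = []
    start = 0
    rep = descriptors[0]
    for i in range(1, len(descriptors)):
        # Cosine similarity (dot product of unit vectors) between the
        # current frame and the tube's representative descriptor.
        if float(rep @ descriptors[i]) < sim_threshold:
            tubes.append((start, i - 1, rep))   # close the current tube
            start, rep = i, descriptors[i]      # start a new tube
    tubes.append((start, len(descriptors) - 1, rep))
    return tubes

def query_place(query, descriptors, tubes):
    """Two-stage search: pick the best-matching tube by its
    representative, then the best frame inside that tube, instead of
    scanning every frame as a linear full search would."""
    best_tube = max(tubes, key=lambda t: float(query @ t[2]))
    s, e, _ = best_tube
    sims = descriptors[s:e + 1] @ query
    return s + int(np.argmax(sims)), float(sims.max())

In this sketch the per-query cost drops from one comparison per frame to one per tube plus one per frame in the chosen tube, which is the kind of speed-up (the paper reports a factor of 2.15 over linear full search) that motivates the tubing strategy.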
