Please use this identifier to cite or link to this item:
https://repository.cihe.edu.hk/jspui/handle/cihe/1261
Title: Fast monocular vision-based railway localization for situations with varying speeds
Author(s): Siu, Wan Chi; Li, C.-T.
Issue Date: 2018
Publisher: IEEE
Related Publication(s): 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC) Proceedings
Start page: 2006
End page: 2013
Abstract: This paper presents a railway localization algorithm based on a novel tube-of-frames concept with key-frame-based rectification. We extract effective feature patches from key frames using an offline feature-shifts approach to enable real-time train localization. Localization results are obtained both from actual frame-matching computation and from estimation based on previous matches, i.e. temporal information about the train's motion. We focus on practical situations in which the train travels at inconstant speeds across different journeys and stops at different locations. Experimental results show that our algorithm achieves 88.8% precision with 100% recall within an acceptable range of deviation from the ground truth, outperforming SeqSLAM, a benchmark localization and mapping algorithm. Moreover, our algorithm is robust to illumination changes and less sensitive to sequence length than the benchmark. We also compare against a modern CNN-feature-based approach and show that blurring and heavy time cost are two limitations of the CNN. Our algorithm requires only 13.4 ms on average to process a frame on a regular desktop, which is 10 times faster than the CNN approach and also faster than the benchmark, with a best case of 2.1 times faster on the sample dataset.
URI: https://repository.cihe.edu.hk/jspui/handle/cihe/1261
DOI: 10.23919/APSIPA.2018.8659660
CIHE Affiliated Publication: No
Appears in Collections: CIS Publication
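The abstract above describes fusing an actual frame-matching result with an estimate extrapolated from previous matches (temporal information about the train's motion). The snippet below is a minimal illustrative sketch of that general idea only, not the authors' implementation: the function name `localize_frame`, the constant-speed prediction, the cosine-similarity matching, and the weighting parameter `alpha` are all hypothetical assumptions.

```python
import numpy as np

def localize_frame(query_desc, keyframe_descs, prev_index, prev_speed,
                   match_threshold=0.6, alpha=0.7):
    """Estimate the key-frame index of the current camera frame.

    Fuses (1) an actual matching score against stored key-frame descriptors
    with (2) a prediction extrapolated from previous matches via a simple
    constant-speed motion model. Illustrative sketch only; not the paper's
    method or parameter values.
    """
    # 1) Prediction from temporal information about the train motion.
    predicted_index = prev_index + prev_speed

    # 2) Actual matching: cosine similarity between the query descriptor
    #    and each key-frame descriptor.
    sims = np.array([
        np.dot(query_desc, kf)
        / (np.linalg.norm(query_desc) * np.linalg.norm(kf) + 1e-9)
        for kf in keyframe_descs
    ])
    best_index = int(np.argmax(sims))

    if sims[best_index] >= match_threshold:
        # Strong match: blend the measured position with the prediction.
        return alpha * best_index + (1.0 - alpha) * predicted_index
    # Weak match: fall back to the motion-based prediction alone.
    return predicted_index
```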