With the transformation and development of the automotive industry, low-cost and seamless indoor and outdoor positioning has become a research hotspot for modern vehicles equipped with in-vehicle infotainment systems, Internet of Vehicles, or other intelligent systems (such as Telematics Box, Autopilot, etc.). This paper analyzes modern vehicles in different configurations and proposes a versatile indoor non-visual semantic mapping and localization solution based on low-cost sensors. Firstly, a sliding-window-based semantic landmark detection method is designed to identify non-visual semantic landmarks (e.g., entrance/exit, ramp entrance/exit, road node). Then, we construct an indoor non-visual semantic map that includes the vehicle trajectory waypoints, the non-visual semantic landmarks, and Wi-Fi received signal strength (RSS) fingerprints. Furthermore, to estimate the position of modern vehicles in the constructed semantic maps, we propose a graph-optimized localization method based on landmark matching that exploits the correlation between non-visual semantic landmarks. Finally, field experiments are conducted in two shopping mall scenes with different underground parking layouts to verify the proposed non-visual semantic mapping and localization method. The results show that the proposed method achieves a high accuracy of 98.1% in non-visual semantic landmark detection and a low localization error of 1.31 m.
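The abstract only outlines the detection step. As a rough illustration of the sliding-window landmark detection idea, the Python sketch below flags ramp and road-node candidates from windows of pitch and yaw-rate readings taken from low-cost in-vehicle sensors. All names, fields, and thresholds (`Reading`, `window_size`, `ramp_pitch_deg`, `turn_rate_dps`) are illustrative assumptions, not values or APIs from the paper.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean


# Hypothetical low-cost sensor reading; fields are illustrative, not from the paper.
@dataclass
class Reading:
    timestamp: float      # seconds
    pitch_deg: float      # vehicle pitch from IMU
    yaw_rate_dps: float   # heading change rate, degrees per second


def detect_landmarks(readings, window_size=50,
                     ramp_pitch_deg=4.0, turn_rate_dps=15.0):
    """Slide a fixed-size window over the sensor stream and emit coarse
    non-visual semantic landmark candidates ('ramp' or 'road_node').
    Thresholds are placeholders, not tuned values from the paper."""
    window = deque(maxlen=window_size)
    landmarks = []
    prev_label = None
    for r in readings:
        window.append(r)
        if len(window) < window_size:
            continue
        avg_pitch = mean(x.pitch_deg for x in window)
        avg_turn = mean(abs(x.yaw_rate_dps) for x in window)
        label = None
        if abs(avg_pitch) > ramp_pitch_deg:
            label = "ramp"          # sustained slope suggests a ramp entrance/exit
        elif avg_turn > turn_rate_dps:
            label = "road_node"     # sustained turning suggests a road node
        if label and label != prev_label:
            landmarks.append((label, window[-1].timestamp))
        prev_label = label
    return landmarks


# Usage example: a synthetic stream with a simulated ramp segment.
stream = [Reading(t * 0.1, 6.0 if 30 <= t < 90 else 0.0, 0.0) for t in range(200)]
print(detect_landmarks(stream))  # -> one ('ramp', ...) candidate
```

In the paper, such landmark candidates are then attached to trajectory waypoints and Wi-Fi RSS fingerprints in the semantic map; the graph-optimized localization step is not sketched here.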
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11243959 | PMC |
| http://dx.doi.org/10.3390/s24134263 | DOI Listing |
Acta Psychol (Amst)
March 2025
Faculty of Psychology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, Vienna, Austria.
Colour plays an important role in the sighted world, not only by guiding and warning, but also by helping to make decisions, form opinions, and influence the emotional landscape. While not everyone has direct access to this information, even people without colour vision (i.e.
Cognition
October 2024
Otto von Guericke University, Medical Faculty, Magdeburg, Germany; Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany; Center for Intervention and Research on Adaptive and Maladaptive Brain Circuits Underlying Mental Health (C-I-R-C), Jen
Visual working memory content is commonly thought to be composed of a precise visual representation of stimulus information (e.g., color, shape).
Memory
September 2024
University of Illinois at Urbana-Champaign, Champaign, IL, USA.
A small wearable camera, SenseCam, passively captured pictures from everyday experience that were later used to evaluate the accuracy and completeness of autobiographical memory. Nine undergraduates wore SenseCams that took pictures every 10 s for two days. After one week and one month, participants first recalled their experiences from specific time periods (timeslices), then reviewed the corresponding pictures to make corrections and report information omitted from initial recall.
Sensors (Basel)
June 2024
College of Sino-German Institute Science and Technology, Qingdao University of Science and Technology, Qingdao 266061, China.
With the transformation and development of the automotive industry, low-cost and seamless indoor and outdoor positioning has become a research hotspot for modern vehicles equipped with in-vehicle infotainment systems, Internet of Vehicles, or other intelligent systems (such as Telematics Box, Autopilot, etc.). This paper analyzes modern vehicles in different configurations and proposes a low-cost, versatile indoor non-visual semantic mapping and localization solution based on low-cost sensors.
Brain Struct Funct
July 2024
Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK.
Connectivity maps are now available for the 360 cortical regions in the Human Connectome Project Multimodal Parcellation atlas. Here we add function to these maps by measuring selective fMRI activations and functional connectivity increases to stationary visual stimuli of faces, scenes, body parts and tools from 956 HCP participants. Faces activate regions in the ventrolateral visual cortical stream (FFC) and in the superior temporal sulcus (STS) visual stream for face and head motion, as well as inferior parietal visual (PGi) and somatosensory (PF) regions.