Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Smartphones with integrated sensors play an important role in people's everyday lives, and in advanced multi-sensor fusion navigation systems the use of each individual sensor's information is crucial. Sensor weights differ across environments, which in turn affects both the method and the results of multi-source fusion positioning. Based on multi-source data from smartphone sensors, this study examines five types of information: Global Navigation Satellite System (GNSS), Inertial Measurement Units (IMUs), cellular networks, optical sensors, and Wi-Fi sensors. It characterizes the temporal, spatial, and mathematical-statistical features of the data and constructs a multi-scale, multi-window, context-connected scene sensing model that accurately detects indoor, semi-indoor, outdoor, and semi-outdoor environments, thereby providing a sound basis for positioning in a multi-sensor navigation system. Detecting the environmental scene supplies an environmental prior for multi-sensor fusion localization. The model comprises four main parts: multi-sensor-based data mining, a multi-scale convolutional neural network (CNN), a bidirectional long short-term memory (BiLSTM) network combined with contextual information, and a meta-heuristic optimization algorithm.
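The "multi-scale" idea in the abstract can be illustrated with a minimal sketch: extract features from one sensor stream at several window widths, then feed them to a classifier over the four scene classes. This is not the paper's architecture (the kernel widths, the averaging filters, and the toy linear head standing in for the CNN+BiLSTM are all invented for illustration):

```python
import numpy as np

# Illustrative scene labels taken from the abstract.
SCENES = ["indoor", "semi-indoor", "outdoor", "semi-outdoor"]

def multi_scale_features(signal, scales=(3, 7, 15)):
    """Filter one sensor stream (e.g. a GNSS signal-strength series) at
    several window widths and max-pool each scale to a single feature."""
    feats = []
    for w in scales:
        kernel = np.ones(w) / w                       # simple averaging filter
        smoothed = np.convolve(signal, kernel, mode="valid")
        feats.append(smoothed.max())                  # global max-pool per scale
    return np.array(feats)

def classify(features, weights, bias):
    """Toy linear + softmax head standing in for the CNN+BiLSTM model."""
    logits = weights @ features + bias
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    return SCENES[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
signal = rng.normal(size=200)           # fake 200-sample sensor window
feats = multi_scale_features(signal)    # one pooled feature per scale
W = rng.normal(size=(len(SCENES), len(feats)))
scene, probs = classify(feats, W, np.zeros(len(SCENES)))
```

In the paper, each scale would instead be a learned convolutional branch, and the per-window features would pass through the BiLSTM so that context from neighboring time windows informs the scene decision.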

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11510913
DOI: http://dx.doi.org/10.3390/s24206669

Publication Analysis

Top Keywords

scene sensing (8)
sensing model (8)
based multi-source (8)
multi-source data (8)
multi-sensor fusion (8)
basis multi-sensor (8)
model based (4)
data (4)
data smartphones (4)
smartphones smartphones (4)

Similar Publications

Navigating image space.

Neuropsychologia

August 2025

School of Psychology and Clinical Language Sciences, University of Reading, Reading RG6 6AL, UK. Electronic address:

Navigation means getting from here to there. Unfortunately, for biological navigation, there is no agreed definition of what we might mean by 'here' or 'there'. Computer vision ('Simultaneous Localisation and Mapping', SLAM) uses a 3D world-based coordinate frame but that is a poor model for biological spatial representation.

Humans perceive a vividly colored world coherently across the visual field, even though our peripheral vision has limited color sensitivity compared to central vision. How is this sense of color uniformity achieved? This question can be explored through a phenomenon called the pan-field color illusion, in which observers perceive scene images achromatized in the peripheral region (chimera images) as full-color images. Our previous work demonstrated that inattention to the peripheral visual field contributed to this illusion.

In recent years, indoor user identification via Wi-Fi signals has emerged as a vibrant research area in smart homes and the Internet of Things, thanks to its privacy preservation, immunity to lighting conditions, and ease of large-scale deployment. Conventional deep-learning classifiers, however, suffer from poor generalization and demand extensive pre-collected data for every new scenario. To overcome these limitations, we introduce SimID, a few-shot Wi-Fi user recognition framework based on identity-similarity learning rather than conventional classification.
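The identity-similarity idea described in the SimID snippet can be sketched in a few lines: instead of training a classifier per deployment, match a query embedding against per-user prototypes built from a handful of support samples. Everything here (the embedding dimension, the user names, the Gaussian stand-in embeddings) is invented for illustration and is not SimID's actual method:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(query, support):
    """Match a query embedding against per-user prototypes, where each
    prototype is the mean of that user's few-shot support embeddings."""
    protos = {user: np.mean(samples, axis=0) for user, samples in support.items()}
    return max(protos, key=lambda u: cosine(query, protos[u]))

rng = np.random.default_rng(1)
support = {
    "alice": rng.normal(loc=1.0, size=(3, 8)),    # 3 shots, 8-dim embeddings
    "bob":   rng.normal(loc=-1.0, size=(3, 8)),
}
query = rng.normal(loc=1.0, size=8)               # drawn near alice's cluster
match = recognize(query, support)
```

Because recognition reduces to nearest-prototype matching in embedding space, a new user needs only a few enrollment samples rather than retraining, which is the generalization advantage the snippet claims over conventional deep-learning classifiers.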

Crop classification plays a vital role in acquiring the spatial distribution of agricultural crops, enhancing agricultural management efficiency, and ensuring food security. With the continuous advancement of remote sensing technologies, achieving efficient and accurate crop classification using remote sensing imagery has become a prominent research focus. Conventional approaches largely rely on empirical rules or single-feature selection (e.

Fusion of Near-Infrared and UV Light Image via Artificial Visual Neurons Based on Mott Memristor.

Small

August 2025

State Key Laboratory of Wide Band Gap Semiconductor Devices and Integrated Technology, School of Microelectronics, Xidian University, Xi'an, 710071, China.

Multi-band image fusion in biological systems aims to integrate image data from various spectral bands to obtain more comprehensive, accurate, and effective image information. However, developing efficient and low-power artificial vision multi-band image fusion systems inspired by biological vision systems remains a challenge. Here, an artificial visual neuron based on the integration of InO/PY-IT phototransistor and NbO Mott memristor is proposed, which can simultaneously sense optical signals in the UV and near-infrared bands and achieve pulse encoding of different frequencies through light stimulation of different intensities.
