Hyperspectral data have been overshadowed by multispectral data for studying algal blooms for decades. However, newer hyperspectral missions, including the recent Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) Ocean Color Instrument (OCI), are opening the door to accessible hyperspectral data at spatial and temporal resolutions comparable to ocean color and multispectral missions. Simulation studies can help to understand the potential of these hyperspectral sensors prior to launch and without extensive field data collection. This study introduces a toolkit, HySIMU (HYperspectral SIMUlator), capable of simulating at-sensor data for hyperspectral sensors from simulated (or real) ground truth images. Six ground truth models are generated by populating distribution models, ranging from simulated to semi-realistic patterns of algal bloom targets, with various sets of spectral records. These ground truth models are run through HySIMU to simulate at-sensor PACE OCI and PRISMA radiance images using a radiative transfer model. The utility of these simulated images is demonstrated by estimating chlorophyll-a concentration from the simulated HySIMU models using the red-near-infrared (NIR) 2-band ratio and fluorescence line height (FLH) algorithms. Overall R and RMSE, excluding poorly performing outliers, range from ~0.4 to 0.9 and from 2.4 to 41.8 μg/L, respectively. While the simulated PRISMA models can resolve fine-scale features better than the simulated PACE OCI models, their performance metrics are not consistently better. This may be evidence of the impacts of spectral mixing and averaging at the scale of each sensor's spatial resolution.
DOI: http://dx.doi.org/10.1016/j.scitotenv.2025.180313
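For orientation, the two retrieval approaches named in the abstract reduce to simple band arithmetic. The sketch below assumes MERIS/OLCI-style band centres near 665, 681, and 709 nm; the band choices, and any conversion of the ratio to μg/L, are illustrative assumptions rather than the study's calibrated algorithms.

```python
def band_ratio_chl(R_red, R_nir):
    """Red/NIR 2-band ratio index for chlorophyll-a.

    R_red, R_nir: reflectance (or radiance) in a red band (~665 nm) and a
    red-edge/NIR band (~709 nm). Returns the raw NIR/red ratio; converting
    it to ug/L requires an empirical calibration specific to each dataset.
    """
    return R_nir / R_red

def fluorescence_line_height(L665, L681, L709):
    """Fluorescence line height: radiance at the ~681 nm fluorescence peak
    minus a linear baseline interpolated between ~665 nm and ~709 nm."""
    baseline = L665 + (L709 - L665) * (681.0 - 665.0) / (709.0 - 665.0)
    return L681 - baseline

# Toy example on a synthetic spectrum (values are illustrative only)
L665, L681, L709 = 1.20, 1.45, 1.10   # at-sensor radiance, arbitrary units
print(band_ratio_chl(L665, L709))      # NIR/red ratio
print(fluorescence_line_height(L665, L681, L709))
```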
Proc IEEE Int Conf Big Data
December 2024
Dept. of Computer Science and Engineering, Mississippi State University; Potentia Analytics Inc.; Dave C. Swalm School of Chemical Engineering, Mississippi State University.
This paper presents ClinicSum, a novel framework designed to automatically generate clinical summaries from patient-doctor conversations. It uses a two-module architecture: a retrieval-based filtering module that extracts Subjective, Objective, Assessment, and Plan (SOAP) information from conversation transcripts, and an inference module powered by fine-tuned Pre-trained Language Models (PLMs), which leverages the extracted SOAP data to generate abstractive clinical summaries. To fine-tune the PLMs, we created a training dataset consisting of 1,473 conversation-summary pairs by consolidating two publicly available datasets, FigShare and MTS-Dialog, with ground truth summaries validated by Subject Matter Experts (SMEs).
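As a rough illustration of the two-module design described above, the following sketch pairs a keyword-based SOAP filter with a generic summarization model; the cue phrases, function names, and model checkpoint are placeholders, not the ClinicSum implementation.

```python
# Hedged sketch of a "filter then summarize" pipeline. The sentence-scoring
# rule and the t5-small checkpoint are illustrative assumptions only.
from transformers import pipeline

SOAP_CUES = {
    "subjective": ["complains of", "reports", "feels"],
    "objective": ["blood pressure", "exam", "temperature"],
    "assessment": ["diagnosis", "likely", "consistent with"],
    "plan": ["prescribe", "follow up", "schedule"],
}

def filter_soap(transcript: str) -> str:
    """Keep only transcript sentences matching simple SOAP cue phrases
    (a stand-in for a retrieval-based filtering module)."""
    kept = []
    for sentence in transcript.split("."):
        text = sentence.strip().lower()
        if any(cue in text for cues in SOAP_CUES.values() for cue in cues):
            kept.append(sentence.strip())
    return ". ".join(kept)

def summarize(transcript: str) -> str:
    """Generate an abstractive summary from the filtered SOAP content
    with a generic seq2seq PLM (checkpoint is a placeholder)."""
    summarizer = pipeline("summarization", model="t5-small")
    soap_text = filter_soap(transcript)
    return summarizer(soap_text, max_length=128, min_length=16)[0]["summary_text"]
```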
JAMIA Open
October 2025
Division of Pulmonary and Critical Care, Brigham and Women's Hospital, Boston, MA, United States.
Objectives: Unstructured data, such as procedure notes, contain valuable medical information that is frequently underutilized due to the labor-intensive nature of data extraction. This study aims to develop a generative artificial intelligence (GenAI) pipeline using an open-source Large Language Model (LLM) with built-in guardrails and a retry mechanism to extract data from unstructured right heart catheterization (RHC) notes while minimizing errors, including hallucinations.
Materials And Methods: A total of 220 RHC notes were randomly selected for pipeline development and 200 for validation from the Pulmonary Vascular Disease Registry.
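A minimal sketch of the kind of extract-validate-retry loop such a pipeline implies is shown below; the field schema, validation rules, and call_llm helper are assumptions for illustration and do not reproduce the study's prompts, model, or guardrails.

```python
# Hedged sketch of an extract-validate-retry loop for structured data
# extraction from clinical notes. All field names are hypothetical.
import json

REQUIRED_FIELDS = {"mean_pa_pressure_mmHg", "wedge_pressure_mmHg", "cardiac_output_L_min"}

def call_llm(prompt: str) -> str:
    """Placeholder for a call to an open-source LLM serving endpoint."""
    raise NotImplementedError

def extract_rhc_fields(note: str, max_retries: int = 3) -> dict:
    prompt = (
        "Return ONLY a JSON object with keys "
        f"{sorted(REQUIRED_FIELDS)} extracted from this RHC note:\n{note}"
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed output: retry
        # Guardrails: required keys present and values numeric,
        # rejecting outputs that are likely hallucinated or incomplete
        if REQUIRED_FIELDS <= data.keys() and all(
            isinstance(data[k], (int, float)) for k in REQUIRED_FIELDS
        ):
            return data
    return {}  # flag for manual review after exhausting retries
```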
J Med Imaging (Bellingham)
September 2025
Otto von Guericke University, Institute for Medical Engineering and Research Campus STIMULATE, Magdeburg, Germany.
Purpose: The combination of multi-layer flat panel detector (FPDT) X-ray imaging and physics-based material decomposition algorithms allows for the removal of anatomical structures. However, the reliability of these algorithms may be compromised by unaccounted materials or scattered radiation.
Approach: We investigated the two-material decomposition performance of a multi-layer FPDT in the context of 2D chest radiography, both without and with a 13:1 anti-scatter grid.
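For context, physics-based two-material decomposition from two spectral measurements can be viewed as solving a small linear system in the log-attenuation domain. The sketch below uses placeholder attenuation coefficients and ignores scatter; it illustrates the general principle only, not the calibrated algorithm evaluated in the paper.

```python
# Hedged sketch of two-material decomposition with a dual-layer detector:
# solve a 2x2 system for basis-material path lengths. Coefficients are
# illustrative placeholders, not calibrated values from the paper.
import numpy as np

# Effective linear attenuation coefficients [1/cm];
# rows = detector layer (top/bottom spectrum), cols = basis material.
MU = np.array([[0.25, 0.55],    # top-layer spectrum: soft tissue, bone
               [0.20, 0.35]])   # bottom-layer spectrum: soft tissue, bone

def decompose(log_att_top: float, log_att_bottom: float) -> np.ndarray:
    """Solve MU @ [t_soft, t_bone] = [-ln(I/I0)_top, -ln(I/I0)_bottom]
    for the basis-material path lengths t (in cm)."""
    rhs = np.array([log_att_top, log_att_bottom])
    return np.linalg.solve(MU, rhs)

t_soft, t_bone = decompose(2.1, 1.5)
print(f"soft tissue ~{t_soft:.1f} cm, bone ~{t_bone:.1f} cm")
```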
Mach Learn Health
December 2025
Medical Artificial Intelligence and Automation Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America.
Online adaptive radiation therapy (ART) personalizes treatment plans by accounting for daily anatomical changes, requiring workflows distinct from conventional radiotherapy. Deep learning-based dose prediction models can enhance treatment planning efficiency by rapidly generating accurate dose distributions, reducing manual trial-and-error and accelerating the overall workflow; however, most existing approaches overlook critical pre-treatment plan information, specifically the physician-defined clinical objectives tailored to individual patients. To address this limitation, we introduce the multi-headed U-Net (MHU-Net), a novel architecture that explicitly incorporates physician intent from pre-treatment plans to improve dose prediction accuracy in adaptive head and neck cancer treatments.
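As a rough sketch of the general idea, the following PyTorch fragment conditions a small encoder-decoder on a per-patient objective vector and attaches multiple output heads; the channel sizes, conditioning scheme, and heads are assumptions for illustration, not the published MHU-Net.

```python
# Hedged sketch: multi-headed encoder-decoder dose predictor conditioned
# on physician-defined plan objectives. Architecture details are assumed.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv3d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(c_out, c_out, 3, padding=1), nn.ReLU())

class MultiHeadDoseNet(nn.Module):
    def __init__(self, img_channels=2, objective_dim=8):
        super().__init__()
        self.enc1 = conv_block(img_channels, 16)   # image branch: CT + masks
        self.down = nn.MaxPool3d(2)
        self.enc2 = conv_block(16, 32)
        # physician intent: per-patient clinical objective vector
        self.obj_mlp = nn.Sequential(nn.Linear(objective_dim, 32), nn.ReLU())
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)
        # separate heads, e.g. one per prediction target
        self.dose_head = nn.Conv3d(16, 1, 1)
        self.aux_head = nn.Conv3d(16, 1, 1)

    def forward(self, image, objectives):
        s1 = self.enc1(image)
        bott = self.enc2(self.down(s1))
        # broadcast the objective embedding over the bottleneck volume
        obj = self.obj_mlp(objectives)[:, :, None, None, None]
        bott = bott + obj
        x = self.dec(torch.cat([self.up(bott), s1], dim=1))
        return self.dose_head(x), self.aux_head(x)

# usage with toy shapes
net = MultiHeadDoseNet()
dose, aux = net(torch.randn(1, 2, 32, 32, 32), torch.randn(1, 8))
```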
Sci Total Environ
September 2025
Department of Geological Sciences and Geological Engineering, Queen's University, 99 University Ave, K7L 3N6 Kingston, Ontario, Canada.