Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Deep brain stimulation (DBS) is an established intervention for Parkinson's disease (PD), but conventional open-loop systems lack adaptability, are energy-inefficient due to continuous stimulation, and provide limited personalization to individual neural dynamics. Adaptive DBS (aDBS) offers a closed-loop alternative, using biomarkers such as beta-band oscillations to dynamically modulate stimulation. While reinforcement learning (RL) holds promise for personalized aDBS control, existing methods suffer from high sample complexity, unstable exploration in binary action spaces, and limited deployability on resource-constrained hardware. We propose SEA-DBS, a sample-efficient actor-critic framework that addresses the core challenges of RL-based adaptive neurostimulation. SEA-DBS integrates a predictive reward model to reduce reliance on real-time feedback and employs Gumbel-Softmax-based exploration for stable, differentiable policy updates in binary action spaces. Together, these components improve sample efficiency, exploration robustness, and compatibility with resource-constrained neuromodulatory hardware. We evaluate SEA-DBS on a biologically realistic simulation of Parkinsonian basal ganglia activity, demonstrating faster convergence, stronger suppression of pathological beta-band power, and resilience to post-training FP16 quantization. Our results show that SEA-DBS offers a practical and effective RL-based aDBS framework for real-time, resource-constrained neuromodulation.
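As a rough illustration of the exploration mechanism named in the abstract, the sketch below samples a binary stimulation action (stim off/on) through a Gumbel-Softmax relaxation in PyTorch. The network shape, state features, and temperature are illustrative assumptions, not the authors' actor-critic implementation.

```python
# Minimal sketch of Gumbel-Softmax exploration over a binary
# stimulation action, assuming a small feed-forward policy over
# beta-band features. Architecture and temperature are hypothetical.
import torch
import torch.nn.functional as F

class BinaryStimPolicy(torch.nn.Module):
    def __init__(self, state_dim: int = 16, hidden: int = 32):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(state_dim, hidden),
            torch.nn.ReLU(),
            torch.nn.Linear(hidden, 2),  # logits for {off, on}
        )

    def forward(self, state: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        logits = self.net(state)
        # hard=True yields a one-hot sample in the forward pass while
        # keeping a differentiable soft sample in the backward pass,
        # so the binary action still admits gradient-based updates.
        return F.gumbel_softmax(logits, tau=tau, hard=True)

policy = BinaryStimPolicy()
state = torch.randn(1, 16)   # stand-in for a beta-band feature vector
action = policy(state)       # e.g. tensor([[0., 1.]]) -> stimulate
```

The straight-through sample is what makes exploration in a binary action space differentiable, which is the stability property the abstract attributes to this component.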


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12265577 (PMC)

Publication Analysis

Top Keywords

reinforcement learning (8)
deep brain (8)
brain stimulation (8)
Parkinson's disease (8)
binary action (8)
action spaces (8)
sample-efficient reinforcement (4)
learning controller (4)
controller deep (4)
stimulation (4)

Similar Publications

Reward delays are often associated with reduced probability of reward, although standard assessments of delay discounting do not specify the degree of reward certainty. Thus, the extent to which estimates of delay discounting are influenced by uncontrolled variance in perceived reward certainty remains unclear. Here we examine 370 participants who were randomly assigned to complete a delay discounting task in which reward certainty was either unspecified (n = 184) or specified as 100% (n = 186) in the task trials and instructions.
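For context, delay discounting is conventionally estimated by fitting the hyperbolic model V = A / (1 + kD) to indifference points, where the fitted rate k quantifies how steeply a reward of amount A loses subjective value V over delay D. The sketch below illustrates that fit with made-up data; it is not the study's analysis.

```python
# Minimal sketch of fitting the standard hyperbolic discounting
# model V = A / (1 + k*D). Delays and indifference points below
# are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(delay_days, k):
    # subjective value of a fixed $100 reward after each delay
    return 100.0 / (1.0 + k * delay_days)

delays = np.array([1, 7, 30, 90, 365])    # days (hypothetical)
values = np.array([95, 80, 55, 35, 15])   # indifference points (hypothetical)

(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])
print(f"estimated discounting rate k = {k_hat:.4f}")
```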


The increasing dependence on cloud computing as a cornerstone of modern technological infrastructure has introduced significant challenges in resource management. Traditional load-balancing techniques often prove inadequate in addressing the dynamic and complex nature of cloud environments, resulting in suboptimal resource utilization and heightened operational costs. This paper presents a novel smart load-balancing strategy that incorporates advanced techniques to mitigate these limitations.


Rule following as choice: The role of reinforcement rate and rule accuracy on rule-following behavior.

J Exp Anal Behav

September 2025

Laboratorio de Análisis de la Conducta, Universidad Nacional Autónoma de México, Facultad de Estudios Superiores Iztacala.

Rules can control the listener's behavior, yet few studies have examined variables that quantitatively determine the extent of this control relative to other rules and contingencies. To explore these variables, we employed a novel procedure that required a choice between rules. Participants clicked two buttons on a computer screen to earn points exchangeable for money.
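The standard quantitative framework for choice of this kind is the generalized matching law, log(B1/B2) = a·log(R1/R2) + log(b), relating response ratios B to reinforcement-rate ratios R via sensitivity a and bias b. The sketch below fits it to made-up data; it is not the study's actual analysis.

```python
# Minimal sketch of the generalized matching law fit with
# hypothetical response (B) and reinforcement (R) rates.
import numpy as np

B = np.array([[80, 20], [60, 40], [30, 70]], dtype=float)
R = np.array([[75, 25], [55, 45], [25, 75]], dtype=float)

x = np.log(R[:, 0] / R[:, 1])   # log reinforcement ratios
y = np.log(B[:, 0] / B[:, 1])   # log response ratios

a, log_b = np.polyfit(x, y, 1)  # slope = sensitivity, intercept = log bias
print(f"sensitivity a = {a:.2f}, bias b = {np.exp(log_b):.2f}")
```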


Multiagent Inductive Policy Optimization.

IEEE Trans Neural Netw Learn Syst

September 2025

Policy optimization methods are promising for tackling high-complexity reinforcement learning (RL) tasks with multiple agents. In this article, we derive a general trust region for policy optimization methods by considering the effect of subpolicy combinations among agents in multiagent environments. Based on this trust region, we propose an inductive objective for training the policy function, which ensures that agents learn monotonically improving policies.
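To make the trust-region idea concrete, the sketch below shows the PPO clipped surrogate, a familiar single-agent instance of constraining policy updates to a region around the old policy. It is not the article's multiagent inductive objective, only an illustration of the general mechanism.

```python
# Minimal sketch of a trust-region-style policy objective: the PPO
# clipped surrogate (single-agent, illustrative only).
import torch

def clipped_surrogate(logp_new, logp_old, advantages, eps=0.2):
    # ratio of new to old action probabilities
    ratio = torch.exp(logp_new - logp_old)
    # clipping keeps each update inside an approximate trust region
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return torch.min(ratio * advantages, clipped * advantages).mean()
```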


In essence, reinforcement learning (RL) solves the optimal control problem (OCP) by employing a neural network (NN) to fit the optimal policy from state to action. The accuracy of this policy approximation is often very low in complex control tasks, leading to unsatisfactory control performance compared with online optimal controllers. A primary reason is that the landscape of the value function is typically not only rugged in most regions but also flat near its bottom, which hampers convergence to the minimum point.
