Publications by authors named "Deming Yuan"

We focus on the problem of distributed online constrained convex optimization with statistical privacy in multiagent systems. The participating agents aim to collaboratively minimize the cumulative system-wide cost while a passive adversary corrupts some of them. The passive adversary collects information from corrupted agents and attempts to estimate the private information of the uncorrupted ones.
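
One common way to obtain this kind of statistical privacy is to perturb whatever an agent transmits with zero-mean noise, so that corrupted neighbors only ever observe a noisy view of its state. The sketch below illustrates that generic idea with a noise-perturbed distributed online projected-gradient loop; the Laplace noise, mixing matrix W, quadratic local costs, and box constraint set are all illustrative assumptions, not the mechanism analyzed in this work.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 4, 2, 50                      # agents, dimension, horizon
W = np.full((n, n), 1.0 / n)            # doubly stochastic mixing matrix (assumed)
x = np.zeros((n, d))                    # each agent's decision
targets = rng.normal(size=(n, d))       # private data defining the local costs

def grad(i, xi):
    # illustrative local cost: f_i(x) = ||x - a_i||^2 / 2, gradient x - a_i
    return xi - targets[i]

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)              # diminishing step size
    # each agent broadcasts a noise-perturbed state (privacy mechanism, assumed Laplace)
    noisy = x + rng.laplace(scale=0.1, size=x.shape)
    mixed = W @ noisy                   # consensus step on the perturbed states
    for i in range(n):
        y = mixed[i] - eta * grad(i, mixed[i])
        x[i] = np.clip(y, -1.0, 1.0)    # projection onto the box constraint set
print("final decisions:\n", x)
```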

This paper considers a distributed constrained optimization problem over a multi-agent network in the non-Euclidean sense. A gossip protocol is adopted to relieve the communication burden; it also adapts to the constantly changing topology of the network. Based on this idea, a gossip-based distributed stochastic mirror descent (GB-DSMD) algorithm is proposed to handle the problem under consideration.
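
A minimal sketch of a gossip-based stochastic mirror descent loop, under illustrative assumptions: at each tick one randomly chosen pair of agents averages states (the gossip step), then each of the two performs a mirror descent update with the negative-entropy Bregman divergence over the probability simplex. The linear local losses, noise level, and step-size schedule are placeholders, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, T = 5, 3, 200
x = np.full((n, d), 1.0 / d)            # iterates on the probability simplex
A = rng.normal(size=(n, d))             # data for illustrative local losses

def stoch_grad(i, xi):
    # noisy gradient of f_i(x) = <a_i, x>; noise models the stochastic oracle
    return A[i] + 0.1 * rng.normal(size=d)

for t in range(1, T + 1):
    i, j = rng.choice(n, size=2, replace=False)
    x[i] = x[j] = 0.5 * (x[i] + x[j])   # gossip: the chosen pair averages states
    eta = 1.0 / np.sqrt(t)
    for k in (i, j):
        # mirror descent with negative-entropy Bregman divergence:
        # multiplicative update followed by renormalization onto the simplex
        z = x[k] * np.exp(-eta * stoch_grad(k, x[k]))
        x[k] = z / z.sum()
print("consensus estimate:", x.mean(axis=0))
```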

This article studies a distributed online stochastic convex optimization problem with time-varying constraints over a multiagent system composed of multiple agents. The sequences of cost functions and constraint functions, both of which have dynamic parameters following time-varying distributions, are unknown to the agents ahead of time. Agents in the network can interact with their neighbors through a sequence of strongly connected, time-varying graphs.

This article is concerned with distributed convex constrained optimization over a time-varying multiagent network in the non-Euclidean sense, where the bandwidth limitation of the network is considered. To save network resources and thereby reduce communication costs, we apply an event-triggered strategy (ETS) to the information exchange among the agents over the network. Then, an event-triggered distributed stochastic mirror descent (ET-DSMD) algorithm, which uses a Bregman divergence as its distance-measuring function, is presented to investigate the multiagent optimization problem subject to a convex constraint set.
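
A sketch of the event-triggering idea under illustrative assumptions: each agent rebroadcasts its state only when it has drifted from the last broadcast value by more than a decaying threshold, and neighbors mix the broadcast copies rather than the true states. The static mixing matrix, entropic mirror map, threshold schedule, and linear losses below are placeholders rather than the exact ET-DSMD recursion.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, T = 4, 3, 100
x = np.full((n, d), 1.0 / d)            # iterates on the probability simplex
x_hat = x.copy()                        # last broadcast value of each agent
W = np.full((n, n), 1.0 / n)            # mixing matrix (assumed static here)
A = rng.normal(size=(n, d))             # data for illustrative local losses
sent = 0

for t in range(1, T + 1):
    thresh = 1.0 / t                    # decaying trigger threshold (assumed)
    for i in range(n):
        # event trigger: broadcast only if the state drifted far enough
        if np.linalg.norm(x[i] - x_hat[i]) > thresh:
            x_hat[i] = x[i].copy()
            sent += 1
    mixed = W @ x_hat                   # neighbors mix the *broadcast* states
    eta = 1.0 / np.sqrt(t)
    g = A + 0.1 * rng.normal(size=(n, d))    # stochastic gradients of <a_i, x>
    z = mixed * np.exp(-eta * g)        # entropic mirror descent step
    x = z / z.sum(axis=1, keepdims=True)
print(f"broadcasts: {sent} of {n * T} possible")
```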

This article is concerned with a distributed stochastic constrained multiagent optimization problem over a time-varying network with a class of communication noise. The problem is considered in the composite optimization setting, which is more general than the settings commonly studied in the literature on noisy network optimization. It is noteworthy that the mainstream existing methods for noisy network optimization are based on Euclidean projection.
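
The composite structure means each local objective splits into a stochastic smooth part plus a known regularizer that the mirror-descent prox step can handle exactly rather than by linearization. Below is a minimal sketch under illustrative assumptions: an entropy Bregman divergence on the simplex, a linear regularizer so the prox has a multiplicative closed form, and additive noise injected into the states that agents receive from their neighbors.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, T = 4, 3, 150
x = np.full((n, d), 1.0 / d)            # iterates on the probability simplex
W = np.full((n, n), 1.0 / n)            # mixing matrix (assumed)
A = rng.normal(size=(n, d))             # data for the smooth parts f_i(x) = <a_i, x>
c = np.abs(rng.normal(size=d))          # known composite term h(x) = <c, x> (assumed)

for t in range(1, T + 1):
    # communication noise: agents receive corrupted copies of neighbor states
    received = x + 0.05 / t * rng.normal(size=x.shape)
    mixed = np.clip(W @ received, 1e-12, None)
    mixed /= mixed.sum(axis=1, keepdims=True)    # re-normalize after the noise
    eta = 1.0 / np.sqrt(t)
    g = A + 0.1 * rng.normal(size=(n, d))        # stochastic gradients of f_i
    # composite mirror step: h enters the prox exactly (no linearization);
    # with an entropy Bregman divergence and linear h the minimizer is multiplicative
    z = mixed * np.exp(-eta * (g + c))
    x = z / z.sum(axis=1, keepdims=True)
print("consensus estimate:", x.mean(axis=0))
```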

This article considers the problem of stochastic strongly convex optimization over a network of multiple interacting nodes. The optimization is subject to a global inequality constraint and to the restriction that nodes have access only to stochastic gradients of their objective functions. We propose an efficient distributed non-primal-dual algorithm that incorporates the inequality constraint into the objective via a smoothing technique.
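
A sketch of the smoothing idea under illustrative assumptions: the hard penalty max(0, g(x)) for the inequality constraint g(x) <= 0 is replaced by the softplus surrogate mu*log(1 + exp(g(x)/mu)), which is differentiable, so each node can run plain distributed stochastic gradient descent on the penalized objective without dual variables. The quadratic costs, ball constraint, and schedules are placeholders, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, T = 4, 2, 300
W = np.full((n, n), 1.0 / n)            # mixing matrix (assumed)
x = np.zeros((n, d))
targets = rng.normal(size=(n, d))       # data for strongly convex local costs
mu = 0.05                               # smoothing parameter (assumed)

def g(v):                               # global inequality constraint g(x) <= 0
    return np.sum(v ** 2) - 1.0

def smooth_penalty_grad(v):
    # gradient of mu*log(1 + exp(g(v)/mu)), a softplus smoothing of max(0, g(v))
    s = 1.0 / (1.0 + np.exp(-g(v) / mu))    # sigmoid weight in [0, 1]
    return s * 2.0 * v                      # chain rule through g

for t in range(1, T + 1):
    x = W @ x                               # consensus averaging
    eta = 1.0 / np.sqrt(t)
    for i in range(n):
        # stochastic gradient of f_i(x) = ||x - a_i||^2 / 2
        sg = (x[i] - targets[i]) + 0.1 * rng.normal(size=d)
        x[i] -= eta * (sg + smooth_penalty_grad(x[i]))
print("constraint violation of the average:", max(g(x.mean(axis=0)), 0.0))
```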

In this article, we concentrate on distributed online convex optimization problems over multiagent systems, where the communication between nodes is represented by a class of time-varying, uniformly strongly connected directed graphs. The problem involves bandit feedback, in the sense that at each time step only the value of the cost function at the committed point is revealed to each node. Nodes then update their decisions by exchanging information only with their neighbors.
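
A sketch of the classic one-point bandit gradient estimator that makes such updates possible: each node evaluates its cost at a single randomly perturbed point and scales the observed value into a gradient surrogate. The static mixing matrix (standing in for the time-varying digraphs), quadratic costs, and exploration and step-size schedules below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, T = 4, 2, 500
W = np.full((n, n), 1.0 / n)            # stand-in for the time-varying mixing step
x = np.zeros((n, d))
targets = rng.normal(size=(n, d))

def cost(i, v):
    # illustrative local cost, revealed only pointwise (bandit feedback)
    return 0.5 * np.sum((v - targets[i]) ** 2)

for t in range(1, T + 1):
    delta = t ** -0.25                  # exploration radius (assumed schedule)
    eta = t ** -0.75                    # step size (assumed schedule)
    x = W @ x                           # mix with neighbors' decisions
    for i in range(n):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)          # uniform random direction on the sphere
        # one-point bandit gradient estimate from a single function evaluation
        ghat = (d / delta) * cost(i, x[i] + delta * u) * u
        x[i] = np.clip(x[i] - eta * ghat, -2.0, 2.0)    # projected update
print("average decision:", x.mean(axis=0))
```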

In this paper, we consider the problem of solving distributed constrained optimization over a multiagent network that consists of multiple interacting nodes in an online setting, where the objective functions of the nodes are time-varying and the constraint set is characterized by an inequality. By introducing a regularized convex-concave function, we present a consensus-based adaptive primal-dual subgradient algorithm that removes the need to know the total number of iterations in advance. We show that the proposed algorithm attains sublinear bounds on both the regret and the violation of constraints; in addition, we show an improved regret bound when the objective functions are strongly convex.
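
A minimal sketch of a regularized primal-dual recursion of this flavor, under illustrative assumptions: each node descends in the primal variable and ascends in a local multiplier on the regularized Lagrangian L_t(x, l) = f_{i,t}(x) + l*g(x) - (theta_t/2)*l^2, with step sizes that depend only on the current iteration index t, so the horizon T is never needed. The specific costs, constraint, and schedules are placeholders, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, T = 4, 2, 400
W = np.full((n, n), 1.0 / n)            # mixing matrix (assumed)
x = np.zeros((n, d))
lam = np.zeros(n)                       # local dual variables
targets = rng.normal(size=(n, d))

def g(v):                               # inequality constraint g(x) <= 0
    return np.sum(v ** 2) - 1.0

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)              # step size independent of the horizon T
    theta = 1.0 / np.sqrt(t)            # regularization weight on the dual
    x = W @ x                           # consensus on primal variables
    lam = W @ lam                       # consensus on dual variables
    for i in range(n):
        # subgradients of the regularized convex-concave Lagrangian
        # L_t(x, l) = f_{i,t}(x) + l * g(x) - (theta/2) * l^2
        gx = (x[i] - targets[i]) + lam[i] * 2.0 * x[i]
        gl = g(x[i]) - theta * lam[i]
        x[i] = np.clip(x[i] - eta * gx, -2.0, 2.0)   # primal descent + projection
        lam[i] = max(lam[i] + eta * gl, 0.0)         # dual ascent, kept nonnegative
print("violation at the average:", max(g(x.mean(axis=0)), 0.0))
```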

This paper studies the problem of minimizing a sum of (possibly nonsmooth) convex functions, each corresponding to one of multiple interacting nodes, subject to a convex state constraint set. A time-varying directed network is considered. Two types of computational constraints are investigated in this paper: one where gradient information is unavailable, and the other where the projection steps can only be computed approximately.
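
For the second type of constraint, here is a sketch of what an approximate projection can look like, under illustrative assumptions: the exact projection onto an intersection of simple sets is replaced by a fixed, small budget of alternating projections. The network mixing step is omitted to isolate that single ingredient; the l1 cost and the particular sets are placeholders.

```python
import numpy as np

rng = np.random.default_rng(7)
d, T = 3, 300
x = np.zeros(d)
a = rng.normal(size=d)                  # data for the nonsmooth local cost

def approx_project(v, inner=2):
    # inexact projection onto {||v|| <= 1} ∩ {v[0] >= 0}: a fixed small budget
    # of alternating projections stands in for the approximate oracle
    for _ in range(inner):
        nrm = np.linalg.norm(v)
        if nrm > 1.0:
            v /= nrm
        v[0] = max(v[0], 0.0)
    return v

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)
    sg = np.sign(x - a)                 # subgradient of the cost ||v - a||_1
    x = approx_project(x - eta * sg)    # descent step, then inexact projection
print("final point:", x)
```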

In this paper, we study the distributed constrained optimization problem in which the objective function is the sum of local convex cost functions of distributed nodes in a network, subject to a global inequality constraint. To solve this problem, we propose a consensus-based distributed regularized primal-dual subgradient method. In contrast to existing methods, most of which require projecting the estimates onto the constraint set at every iteration, our proposed method needs only one projection, performed at the last iteration.
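
A sketch of the one-projection structure under illustrative assumptions: the primal-dual iterates run freely during the loop (a regularized dual update keeps the multipliers bounded), and the feasibility repair happens once, on the final averaged iterate. The l1 costs, ball constraint, and schedules are placeholders, not the proposed method itself.

```python
import numpy as np

rng = np.random.default_rng(8)
n, d, T = 4, 2, 300
W = np.full((n, n), 1.0 / n)            # mixing matrix (assumed)
x = np.zeros((n, d))
lam = np.zeros(n)                       # local dual variables
targets = rng.normal(size=(n, d))

def g(v):                               # global inequality constraint g(x) <= 0
    return np.sum(v ** 2) - 1.0

for t in range(1, T + 1):
    eta = 1.0 / np.sqrt(t)
    x, lam = W @ x, W @ lam             # consensus steps
    for i in range(n):
        # subgradient of f_i(x) + lam * g(x) with f_i(x) = ||x - a_i||_1
        gx = np.sign(x[i] - targets[i]) + lam[i] * 2.0 * x[i]
        x[i] -= eta * gx                # note: no projection inside the loop
        # regularized dual ascent keeps the multipliers from blowing up
        lam[i] = max(lam[i] + eta * (g(x[i]) - eta * lam[i]), 0.0)

# the single projection, performed once on the final average
avg = x.mean(axis=0)
if g(avg) > 0:
    avg = avg / np.linalg.norm(avg)     # projection onto the unit ball
print("projected final average:", avg)
```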

In this brief, we consider multiagent optimization over a network in which multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time-varying. We propose a randomized derivative-free method in which each update uses random gradient-free oracles in place of subgradients (SGs).
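
A sketch of a random gradient-free oracle of the kind referred to, under illustrative assumptions: the two-point estimator (f(x + delta*u) - f(x))/delta * u with Gaussian u is an unbiased estimate of the gradient of the smoothed surrogate f_delta(x) = E_u[f(x + delta*u)], which lets nonsmooth Lipschitz costs be handled without subgradients. The static mixing matrix (standing in for the time-varying topology), l1 costs, and schedules are placeholders.

```python
import numpy as np

rng = np.random.default_rng(9)
n, d, T = 5, 2, 400
W = np.full((n, n), 1.0 / n)            # stand-in for the time-varying mixing
x = np.zeros((n, d))
A = rng.normal(size=(n, d))

def f(i, v):
    return np.abs(v - A[i]).sum()       # nonsmooth, Lipschitz local objective

for t in range(1, T + 1):
    delta = 1.0 / t                     # smoothing parameter (assumed schedule)
    eta = 1.0 / np.sqrt(t)
    x = W @ x                           # consensus with neighbors
    for i in range(n):
        u = rng.normal(size=d)          # random Gaussian direction
        # random gradient-free oracle: unbiased for the gradient of the
        # smoothed surrogate f_delta(v) = E_u[ f(v + delta * u) ]
        ghat = (f(i, x[i] + delta * u) - f(i, x[i])) / delta * u
        x[i] = np.clip(x[i] - eta * ghat, -2.0, 2.0)   # projection onto a box set
print("average iterate:", x.mean(axis=0))
```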

This paper studies the problem of optimizing the sum of multiple agents' local convex objective functions, subject to global convex inequality constraints and a convex state constraint set over a network. By characterizing the primal and dual optimal solutions as the saddle points of the Lagrangian function associated with the problem, we propose a distributed algorithm, named the distributed primal-dual subgradient method, that provides approximate saddle points of the Lagrangian function based on distributed average consensus algorithms. Under Slater's condition, we obtain bounds on the convergence properties of the proposed method for a constant step size.
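
A minimal saddle-point sketch under illustrative assumptions: each node takes subgradient steps that descend in the primal variable and ascend in the dual variable of the Lagrangian, interleaved with average-consensus mixing, using a constant step size and a running average of the primal iterates. The cap on the dual variables (which Slater's condition justifies in principle, since it bounds the optimal multipliers) and all problem data are placeholders.

```python
import numpy as np

rng = np.random.default_rng(10)
n, d, T = 4, 2, 500
W = np.full((n, n), 1.0 / n)            # average-consensus mixing matrix (assumed)
x = np.zeros((n, d))
lam = np.zeros(n)                       # local dual variables
targets = rng.normal(size=(n, d))
alpha = 0.02                            # constant step size (assumed)
x_avg = np.zeros((n, d))                # running average of primal iterates

def g(v):                               # global convex inequality constraint
    return np.sum(v ** 2) - 1.0

for t in range(1, T + 1):
    x, lam = W @ x, W @ lam             # consensus on primal and dual variables
    for i in range(n):
        # subgradient steps on the Lagrangian L(x, l) = f_i(x) + l * g(x):
        # descend in x, ascend in l, moving toward a saddle point
        gx = (x[i] - targets[i]) + lam[i] * 2.0 * x[i]
        x[i] = np.clip(x[i] - alpha * gx, -2.0, 2.0)
        lam[i] = min(max(lam[i] + alpha * g(x[i]), 0.0), 10.0)  # bounded duals
    x_avg += (x - x_avg) / t            # running primal average
print("violation at averaged iterate:", max(g(x_avg.mean(axis=0)), 0.0))
```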
