The dynamic resource allocation problem (DRAP) with unknown cost functions and unknown resource transition functions is studied in this article. The goal of the agents is to minimize the sum of the cost functions over given time periods in a distributed way, that is, by exchanging information only with their neighboring agents. First, we propose a distributed Q-learning algorithm for the DRAP with unknown cost functions and unknown resource transition functions under discrete local feasibility constraints (DLFCs). It is theoretically proved that the joint policy of the agents produced by the distributed Q-learning algorithm always yields a feasible allocation (FA), that is, one satisfying the constraints at each time period. Then, we study the DRAP with unknown cost functions and unknown resource transition functions under continuous local feasibility constraints (CLFCs), for which a novel distributed Q-learning algorithm is proposed based on function approximation and distributed optimization. Notably, the update rule of each agent's local policy also ensures that the joint policy of the agents is an FA at each time period. This property is of vital importance for executing the ϵ-greedy policy throughout the training process. Finally, simulations are presented to demonstrate the effectiveness of the proposed algorithms.

This article investigates the cooperative output regulation problem for heterogeneous nonlinear multiagent systems subject to disturbances and quantization. The agent dynamics are modeled by the well-known Takagi-Sugeno (T-S) fuzzy systems. Distributed reference generators are first devised to estimate the state of the exosystem under directed fixed and switching communication graphs, respectively. Then, distributed fuzzy cooperative controllers are designed for the individual agents. Via the Lyapunov technique, sufficient conditions are obtained to guarantee output synchronization of the resulting closed-loop systems.
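For the DLFC setting in the first abstract, the feasibility guarantee rests on restricting both the greedy and the exploratory branch of the ϵ-greedy policy to each agent's local feasible action set, so every executed joint action is an FA. The following is a minimal single-agent sketch of that idea; the cost-minimizing Q-update, the `feasible_actions` argument, and all hyperparameters are assumptions for illustration, not the article's exact algorithm.

```python
import random
from collections import defaultdict

class FeasibleQAgent:
    """Tabular Q-learning agent whose eps-greedy policy is restricted
    to its local feasible action set, so every executed action
    satisfies the DLFCs (illustrative sketch, not the article's
    exact algorithm)."""

    def __init__(self, alpha=0.1, gamma=0.95, eps=0.1):
        self.Q = defaultdict(float)          # Q[(state, action)] -> value
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state, feasible_actions):
        # Both branches of eps-greedy draw only from the feasible set,
        # which is what keeps the allocation feasible at every period.
        if random.random() < self.eps:
            return random.choice(feasible_actions)
        return min(feasible_actions, key=lambda a: self.Q[(state, a)])

    def update(self, state, action, cost, next_state, next_feasible):
        # Cost-minimizing Q-learning:
        # Q <- Q + alpha * (cost + gamma * min_{a'} Q(s', a') - Q).
        best_next = min(self.Q[(next_state, a)] for a in next_feasible)
        td_error = cost + self.gamma * best_next - self.Q[(state, action)]
        self.Q[(state, action)] += self.alpha * td_error
```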
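For the CLFC setting, one common way to combine Q-learning with function approximation over a continuous feasible set is to parameterize Q(s, a) with a feature map and minimize it only over feasible actions. The sketch below uses a linear-in-features approximator and a box-shaped feasibility set, both of which are assumptions; the article's approximator, constraint sets, and distributed-optimization step are not reproduced here.

```python
import numpy as np

def project_box(a, lo, hi):
    """Euclidean projection onto a box feasibility set [lo, hi]
    (an assumed special case of the CLFCs)."""
    return np.clip(a, lo, hi)

class LinearQ:
    """Q(s, a) = w . phi(s, a) with a fixed quadratic feature map;
    the feature choice and hyperparameters are illustrative only."""

    def __init__(self, alpha=0.01, gamma=0.95):
        self.w = np.zeros(6)
        self.alpha, self.gamma = alpha, gamma

    def phi(self, s, a):
        return np.array([1.0, s, a, s * a, s ** 2, a ** 2])

    def q(self, s, a):
        return self.w @ self.phi(s, a)

    def greedy_action(self, s, lo, hi, n_grid=50):
        # Approximate argmin of Q over the feasible interval by grid
        # search (a crude stand-in for a distributed-optimization step).
        grid = np.linspace(lo, hi, n_grid)
        return grid[np.argmin([self.q(s, a) for a in grid])]

    def act(self, s, lo, hi, eps=0.1):
        # eps-greedy over a continuous action set: perturb the greedy
        # action, then project back so the executed action is always
        # feasible, mirroring the FA property noted in the abstract.
        a = self.greedy_action(s, lo, hi)
        if np.random.random() < eps:
            a = project_box(a + np.random.normal(scale=0.1 * (hi - lo)), lo, hi)
        return float(a)

    def update(self, s, a, cost, s_next, lo, hi):
        # Semi-gradient TD update toward the cost-minimizing target
        # cost + gamma * min_{a'} Q(s', a').
        target = cost + self.gamma * self.q(s_next, self.greedy_action(s_next, lo, hi))
        td = target - self.q(s, a)
        self.w += self.alpha * td * self.phi(s, a)
```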
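For the second abstract, a distributed reference generator is typically a consensus-based observer of the exosystem, η̇_i = Sη_i + μ[Σ_j a_ij(η_j − η_i) + g_i(w − η_i)], in which only pinned agents (g_i = 1) measure the exosystem state w directly. The simulation below illustrates this standard form under an assumed fixed directed graph; the graph, gains, and exosystem matrix S are illustrative, and the article's quantization and switching-graph handling are omitted.

```python
import numpy as np

# Exosystem w' = S w; a harmonic oscillator is chosen purely for
# illustration, as are the graph, the pinning gains, and mu.
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

A = np.array([[0, 0, 0],       # directed chain 0 -> 1 -> 2
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
g = np.array([1.0, 0.0, 0.0])  # only agent 0 measures the exosystem
mu, dt, T = 5.0, 1e-3, 10.0

w = np.array([1.0, 0.0])       # exosystem state
eta = np.zeros((3, 2))         # local estimates eta_i

for _ in range(int(T / dt)):
    # Forward-Euler step of the consensus-based observer
    # eta_i' = S eta_i + mu * [sum_j a_ij (eta_j - eta_i)
    #                          + g_i (w - eta_i)].
    deta = np.zeros_like(eta)
    for i in range(3):
        corr = sum(A[i, j] * (eta[j] - eta[i]) for j in range(3))
        corr += g[i] * (w - eta[i])
        deta[i] = S @ eta[i] + mu * corr
    eta += dt * deta
    w = w + dt * (S @ w)

# Each agent's estimation error should be near zero after T seconds.
print(np.linalg.norm(eta - w, axis=1))
```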