In-hand manipulation and grasp adjustment with dexterous robotic hands is a complex problem that not only requires highly coordinated finger movements but must also cope with interaction variability. The control problem becomes even more complex when tactile information is introduced into the feedback loop. Traditional approaches do not consider tactile feedback and attempt to solve the problem either by relying on complex models that are not always readily available or by constraining the problem to make it more tractable. In this paper, we propose a hierarchical control approach in which a higher-level policy is learned through reinforcement learning, while low-level controllers ensure grip stability throughout the manipulation action. The low-level controllers are independent grip stabilization controllers based on tactile feedback. These independent controllers allow reinforcement learning approaches to explore the manipulation task's state-action space in a more structured manner. We show that this structure allows learning the unconstrained task with RL methods that cannot learn it in a non-hierarchical setting. The low-level controllers also provide an abstraction of the tactile sensor input, allowing transfer to real robot platforms. We show preliminary results of the transfer of policies trained in simulation to the real robot hand.

This paper tackles the problem of formation reconstruction for a team of vehicles based on knowledge of the ranges between agents for a subset of the participants. One main peculiarity of the proposed approach is that the relative velocity between agents, which is a fundamental quantity for solving the problem, is neither assumed to be known in advance nor directly communicated. To estimate this quantity, a collaborative control protocol is designed that embeds the velocity data in the motion of each vehicle as a parameter through a dedicated control pro
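The hierarchical scheme in the first abstract above can be pictured as two loops: a slow, learned policy that commands finger motions, and fast, independent per-finger stabilizers that adjust grip force from tactile readings. The following is a minimal sketch of that structure, assuming a simple slip-based force adjustment; all names (Policy callable, grip_stabilize, SLIP_GAIN, the tactile dictionary keys) are illustrative assumptions, not the paper's actual controllers or API.

```python
import numpy as np

SLIP_GAIN = 5.0             # assumed proportional gain on the detected slip signal
FORCE_LIMITS = (0.5, 8.0)   # assumed allowable normal-force range [N]

def grip_stabilize(tactile, force_cmd):
    """Low-level stabilizer: raise the commanded normal force when slip is detected."""
    # Crude slip proxy: tangential load exceeding a fraction of the normal load.
    slip = max(0.0, tactile["tangential"] - 0.8 * tactile["normal"])
    adjusted = force_cmd + SLIP_GAIN * slip
    return float(np.clip(adjusted, *FORCE_LIMITS))

def hierarchical_step(policy, obs, tactile_per_finger, force_cmds):
    """One outer step: the RL policy sets finger targets while the per-finger
    stabilizers maintain the grasp; the policy never consumes raw tactile data."""
    finger_targets = policy(obs)                       # high-level action
    stabilized = [grip_stabilize(t, f)                 # per-finger inner loop
                  for t, f in zip(tactile_per_finger, force_cmds)]
    return finger_targets, stabilized
```

Keeping the tactile processing inside the low-level loop is also what makes sim-to-real transfer plausible: the learned policy only sees the abstracted state, not sensor-specific signals.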
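For the formation-reconstruction abstract, a generic point of reference is range-only relative localization: fitting a neighbor's relative position and velocity to a window of range measurements. The sketch below does exactly that under the simplifying assumption of constant relative velocity over the window; it is a standard least-squares estimator, not the collaborative control protocol proposed in the paper, and all names are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_relative_state(times, ranges, guess=None):
    """Fit (p0, v) in 2-D such that ||p0 + v * t_k|| matches the measured ranges r_k."""
    times, ranges = np.asarray(times, dtype=float), np.asarray(ranges, dtype=float)
    x0 = np.zeros(4) if guess is None else np.asarray(guess, dtype=float)
    x0[x0 == 0] = 0.1  # avoid the degenerate all-zero starting point

    def residuals(x):
        p0, v = x[:2], x[2:]
        predicted = np.linalg.norm(p0[None, :] + times[:, None] * v[None, :], axis=1)
        return predicted - ranges

    sol = least_squares(residuals, x0)
    return sol.x[:2], sol.x[2:]  # estimated relative position and relative velocity
```

Note that such a range-only fit is ambiguous up to a rotation of the whole relative trajectory, which is precisely why additional information (here, velocity data embedded in each vehicle's motion through the control protocol) is needed to reconstruct the formation.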