In the cadaver study, the needle was successfully placed in the shoulder joint space in all targeting attempts, with translational and rotational accuracies of 2.07 ± 1.22 mm and 1.46 ± 1.06 degrees, respectively. The total procedure time in the cadaver study was 94 min, with an average of 20 min per targeting attempt, while the average time for the entire workflow in the volunteer studies was 36 min. No image quality degradation due to the presence of the robot was detected. This Thiel-embalmed cadaver study, together with the clinical workflow studies on human volunteers, demonstrates the feasibility of using an MR-conditional, patient-mounted robotic system for MRI-guided shoulder arthrography procedures. Future work will focus on moving the technology into clinical practice.

Soft pneumatic actuators have been explored for endoscopic applications, but fabricating complex geometries with the required dimensions and compliance remains challenging. Adding an endoscopic camera or tool channel is generally not possible without a significant change in the actuator's diameter, and radial expansion and ballooning of the actuator walls during bending are undesirable in endoscopic applications. Strain-limiting methods such as wound fibre, mesh, or multi-material molding have been explored, but integrating these design approaches with endoscopic requirements drastically increases fabrication complexity, precluding reliable translation into functional endoscopes. For the first time in soft robotics, we present a multi-channel, single-material elastomeric actuator with a fully corrugated, origami-inspired design, offering specific functionality for endoscopic applications. The features introduced in this design include i) fabrication of a multi-channel monolithic structure … angle of 200° when integrated with a manually driven endoscope. The simple three-step fabrication technique produces a complex origami pattern in a soft robotic structure, which promotes low-pressure bending through the opening of the corrugations while retaining the small diameter and central lumen required for successful endoscope integration.

Recognizing the actions, plans, and goals of a person in an unconstrained environment is a key capability that future robotic systems will need in order to achieve natural human-machine interaction. Indeed, we humans are constantly understanding and predicting the actions and goals of others, which allows us to interact in intuitive and safe ways. While action and plan recognition are tasks that humans perform naturally and with little effort, they remain unresolved problems from the point of view of artificial intelligence. The immense variety of possible actions and plans that may be encountered in an unconstrained environment leaves current approaches far from human-like performance. In addition, while very different types of algorithms have been proposed to tackle the problem of activity, plan, and goal (intention) recognition, these tend to focus on only one part of the problem (e.g., action recognition), and techniques that address the problem as a whole have not been thoroughly explored. This review is meant to provide a general view of the problem of activity, plan, and goal recognition as a whole. It describes the problem from both the human and the computational perspective, proposes a classification of the main types of approaches that have been put forward to address it (logic-based, classical machine learning, deep learning, and brain-inspired), and describes and compares these classes. This general view of the problem can help identify research gaps and may also provide inspiration for the development of new approaches that address the problem in a unified way.
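To make concrete what a goal-recognition algorithm computes, the following is a minimal sketch of Bayesian goal inference from observed movement. The grid world, the candidate goals, and the noisy-rational (Boltzmann) step model are illustrative assumptions, not taken from the review.

```python
# Minimal sketch: infer which candidate goal an observed agent is pursuing
# from its movement on a grid, under a noisy-rational agent assumption.
import math

GOALS = {"door": (9, 0), "desk": (0, 9), "shelf": (9, 9)}   # hypothetical goals
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]                  # 4-connected grid

def dist(p, g):
    """Manhattan distance between a position and a goal."""
    return abs(p[0] - g[0]) + abs(p[1] - g[1])

def step_likelihood(prev, curr, goal, beta=2.0):
    """P(step | goal): steps that reduce the distance to the goal are
    exponentially more likely (Boltzmann / noisy-rational assumption)."""
    def score(nxt):
        return math.exp(beta * (dist(prev, goal) - dist(nxt, goal)))
    z = sum(score((prev[0] + dx, prev[1] + dy)) for dx, dy in MOVES)
    return score(curr) / z

def goal_posterior(trajectory, goals=GOALS):
    """Bayesian update of a uniform prior over goals from an observed path."""
    log_post = {name: 0.0 for name in goals}
    for prev, curr in zip(trajectory, trajectory[1:]):
        for name, loc in goals.items():
            log_post[name] += math.log(step_likelihood(prev, curr, loc))
    z = sum(math.exp(v) for v in log_post.values())
    return {name: math.exp(v) / z for name, v in log_post.items()}

# An agent stepping right and down is most plausibly heading for the "shelf".
path = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
print(goal_posterior(path))
```

The approach families surveyed in the review replace this hand-written likelihood with plan libraries, learned classifiers, or neural models, but the underlying inference question, which goal best explains the observations, is the same.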
New technology is of little use if it is not adopted, and surveys show that fewer than 10% of firms use artificial intelligence. This paper studies the uptake of AI-driven automation and its impact on employment, using a dynamic agent-based model (ABM). It simulates the adoption of automation software as well as the job destruction and job creation that follow. There are two types of agents: manufacturing firms and engineering services firms. The agents choose between two business models: consulting or automated software. From the engineering firms' point of view, the model exhibits static economies of scale in the software model and dynamic (learning-by-doing) economies of scale in the consultancy model. From the manufacturing firms' point of view, switching to the software model requires restructuring production, and there are network effects in switching. The ABM matches engineering and manufacturing agents and derives the employment of engineers and the tasks they perform, i.e., consultancy, software development, software maintenance, or employment in manufacturing. We find that the uptake of software is gradual: slow in the first few years, then accelerating. Software is fully adopted after about 18 years in the baseline run. Employment of engineers shifts from consultancy to software development and to new jobs in manufacturing. Spells of unemployment may occur if skilled-job creation in manufacturing is slow. Finally, the model generates boom-and-bust cycles in the software sector. (A minimal sketch of this adoption dynamic appears at the end of this section.)

Frames, i.e., discursive structures that make dimensions of a situation more or less salient, are understood to influence how people understand novel technologies. As technological agents are increasingly integrated into society, it becomes important to discover how native understandings (i.e., individual frames) of social robots are associated with how these robots are characterized by media, technology developers, and even the agents themselves (i.e., produced frames). Moreover, these individual and produced frames may influence the ways in which people see social robots as legitimate and trustworthy agents, especially in the face of (im)moral behavior. This three-study investigation begins to address this knowledge gap by 1) identifying individually held frames for explaining an android's (im)moral behavior, and experimentally testing how produced frames prime judgments about an android's morally ambiguous behavior in 2) mediated representations and 3) face-to-face exposures. Results indicate that people rely on discernible ground rules to explain social robot behaviors; these frames induced only limited effects on responsibility judgments of that robot's morally ambiguous behavior.
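Returning to the agent-based model of automation uptake described above, the following minimal sketch illustrates how heterogeneous switching costs, scale economies, and network effects can jointly produce a gradual-then-accelerating adoption curve. Every decision rule and parameter value here is an illustrative assumption, not the authors' calibrated model.

```python
# Minimal sketch: manufacturing firms weigh consultancy (which gets cheaper
# through learning by doing) against automation software (which gets cheaper
# with scale), with network effects easing the switch.
import random

random.seed(7)
N = 200                                                        # manufacturing firms
switch_cost = {i: random.uniform(0.5, 6.0) for i in range(N)}  # restructuring cost
adopters, experience, history = set(), 0, []

def software_price(installed_base):
    """Static economies of scale: price falls with the installed base."""
    return 1.0 / (1 + 0.02 * installed_base)

def consultancy_price(jobs_done):
    """Dynamic economies of scale: learning by doing lowers consultancy cost."""
    return 3.0 / (1 + 0.0002 * jobs_done)

for year in range(25):
    network = len(adopters) / N            # share of peers already switched
    gain = consultancy_price(experience) - software_price(len(adopters))
    for firm in set(range(N)) - adopters:
        # Network effects shrink the effective restructuring burden and
        # speed up the decision to switch.
        if gain > switch_cost[firm] * (1 - network):
            if random.random() < 0.1 + 0.6 * network:
                adopters.add(firm)
    experience += N - len(adopters)        # non-adopters keep buying consultancy
    history.append(round(len(adopters) / N, 2))

print(history)  # expected shape: slow start, acceleration, near-full adoption
```

The employment flows and the boom-and-bust cycle reported in the paper would sit on top of this adoption curve; the sketch reproduces only the S-shaped uptake.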