Creating Interactive Social Robots and Multimodal Control of Robotic Hands with Artificial Muscles






Social robots are essential for healthcare applications as assistive devices or behavior-based intervention systems. Social interaction, robotic hands, and multimodal control are three core aspects of social robots, and all three are investigated in this dissertation.

First, we present a wearable sensor vest and an open-source software architecture with Internet of Things (IoT) support for social robots. The IoT feature allows the robot to interact both with people nearby and with remote users over the Internet. The architecture is demonstrated on a humanoid robot, and it works for any social robot that has a general-purpose graphics processing unit (GPGPU), I2C/SPI buses, an Internet connection, and the Robot Operating System (ROS). Its modular design lets developers easily add, remove, or update complex behaviors, and the architecture provides IoT connectivity, GPGPU nodes, I2C and SPI bus managers, audio-visual interaction nodes, and isolation between behavior nodes and other nodes.

Second, our humanoid robot moves its fingers with novel actuators called twisted and coiled polymer (TCP) actuators, or artificial muscles. Classical and fuzzy controllers are examined for force control of these actuators. Disturbance and noise proved to be major challenges in system identification and control of TCPs. Over short time scales, the muscles behave like a first-order, linear time-invariant system when the input is the square of the applied voltage and the output is either force or displacement; over longer periods, however, the behavior and parameters of the polymer muscles slowly drift. An on-policy adaptive controller, optimized by stochastic hill-climbing and a novel associative search element, is designed to regulate the force of the muscles.

The third part addresses multimodal control of robotic hands. Recent advances in GPGPUs enable intelligent devices to run deep neural networks in real time. As a result, state-of-the-art intelligent systems have rapidly shifted from the paradigm of composite-subsystem optimization to the paradigm of end-to-end optimization. Taking advantage of GPGPUs, we show how to control robotic hands from raw electromyography (EMG) signals and 2D speech features using deep convolutional neural networks. The proposed networks are lightweight enough to run in real time on an embedded GPGPU.
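The short-term actuator model described above can be illustrated with a minimal sketch. This is not the dissertation's code; the gain `K`, time constant `tau`, and step size `dt` are illustrative values, and the input is taken to be the square of the applied voltage (proportional to heating power), as stated in the abstract.

```python
def simulate_tcp_force(voltages, K=0.5, tau=2.0, dt=0.01, f0=0.0):
    """Discretized first-order LTI model: tau * df/dt + f = K * v**2."""
    f = f0
    forces = []
    for v in voltages:
        u = v ** 2                    # input is the squared voltage
        f += dt / tau * (K * u - f)   # forward-Euler update
        forces.append(f)
    return forces

# Step response to a constant 3 V input settles toward K * 3**2 = 4.5.
forces = simulate_tcp_force([3.0] * 5000)
```

A first-order fit like this holds only over short horizons; the slow parameter drift noted above is what motivates the adaptive controller.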
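Stochastic hill-climbing, one ingredient of the adaptive controller, can be sketched as follows. This is a hypothetical illustration, not the dissertation's implementation: a proportional gain is randomly perturbed and the perturbation is kept only when the tracking cost improves; the plant reuses the first-order model, and all numbers are illustrative.

```python
import random

def tracking_cost(kp, setpoint=4.0, steps=500, K=0.5, tau=2.0, dt=0.01):
    """Sum of squared force errors under proportional control."""
    f, cost = 0.0, 0.0
    for _ in range(steps):
        u = max(kp * (setpoint - f), 0.0)  # control effort (acts as v**2)
        f += dt / tau * (K * u - f)        # first-order plant update
        cost += (setpoint - f) ** 2
    return cost

def hill_climb(kp=1.0, iters=200, step=0.2, seed=0):
    """Keep a random gain perturbation only if it lowers the cost."""
    rng = random.Random(seed)
    best = tracking_cost(kp)
    for _ in range(iters):
        cand = kp + rng.uniform(-step, step)
        c = tracking_cost(cand)
        if c < best:
            kp, best = cand, c
    return kp, best
```

In the dissertation this search is combined with an associative search element; the sketch shows only the hill-climbing step.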
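The idea behind a lightweight convolutional layer over raw EMG can be sketched in a few lines. This is not the dissertation's network; the window length, filter count, and kernel length are assumptions for illustration. A small bank of 1D filters followed by ReLU and global average pooling yields a compact feature vector that is cheap enough for an embedded GPGPU.

```python
import numpy as np

def conv1d_features(signal, filters):
    """Valid-mode 1D convolution, ReLU, then global average pooling."""
    feats = []
    for w in filters:
        y = np.convolve(signal, w, mode="valid")  # one feature map
        y = np.maximum(y, 0.0)                    # ReLU nonlinearity
        feats.append(y.mean())                    # global average pool
    return np.array(feats)

rng = np.random.default_rng(0)
emg = rng.standard_normal(1024)           # stand-in for one raw EMG window
filters = rng.standard_normal((8, 16))    # 8 kernels of length 16
features = conv1d_features(emg, filters)  # compact 8-dimensional feature vector
```

A real end-to-end network would learn the filter weights and stack several such layers before a classifier; the sketch shows only a single forward pass.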



Internet of things, Androids, Machine learning, Graphics processing units, Adaptive control systems, Artificial intelligence