Browsing by Author "Jafarzadeh, Mohsen"
Now showing 1 - 2 of 2
Item: Creating Interactive Social Robots and Multimodal Control of Robotic Hands with Artificial Muscles (2019-11-22)
Jafarzadeh, Mohsen; Tadesse, Yonas; Gans, Nicholas

Social robots are essential for healthcare applications, serving as assistive devices or behavior-based intervention systems. Social interaction, robotic hands, and multimodal control are three core aspects of social robots investigated in this dissertation. First, we present a wearable sensor vest and an open-source software architecture with the Internet of Things (IoT) for social robots. The IoT feature allows the robot to interact with humans locally and with other humans over the Internet. The designed architecture is demonstrated on a humanoid robot, and it works for any social robot that has a general-purpose graphics processing unit (GPGPU), I2C/SPI buses, an Internet connection, and the Robot Operating System. The modular design of this architecture enables developers to easily add, remove, or update complex behaviors. The proposed software architecture provides IoT technology, GPGPU nodes, I2C and SPI bus managers, audio-visual interaction nodes, and isolation between behavior nodes and other nodes. Second, our humanoid robot uses novel actuators, called twisted and coiled polymer (TCP) actuators or artificial muscles, to move its fingers. Classical controllers and fuzzy-based controllers are examined for force control of these actuators. Disturbance and noise proved to be major challenges in system identification and control of TCPs. In the short term, the muscles behave like a first-order, linear time-invariant system when the input is the voltage squared and the output is either force or displacement. Over longer periods, however, the behavior and parameters of the polymer muscles slowly drift. An on-policy adaptive controller, optimized by stochastic hill-climbing and a novel associated search element, is designed to regulate the force of the muscles. The third part is multimodal control of robotic hands.
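The short-term actuator model and hill-climbing tuning described in this abstract can be sketched in a few lines. This is a minimal illustration, not the dissertation's identified model or its associated-search-element design: the gain, time constant, and simple proportional control law below are assumed values chosen only to show a first-order response to a squared-voltage input and a stochastic hill-climbing loop over a controller parameter.

```python
import math
import random

# Assumed (illustrative) plant parameters; the dissertation identifies
# the real values from experimental TCP muscle data.
K, TAU, DT = 0.5, 2.0, 0.01   # static gain (N/V^2), time constant (s), step (s)
A = math.exp(-DT / TAU)        # discrete-time pole of the first-order model
B = K * (1.0 - A)              # discrete-time input gain

def simulate_force(kp, target=1.0, steps=500):
    """Closed-loop run: proportional force controller on a first-order
    LTI muscle model whose effective input is the voltage squared."""
    f, sse = 0.0, 0.0
    for _ in range(steps):
        err = target - f
        v = max(0.0, kp * err)         # voltage command, clipped at zero
        f = A * f + B * v * v          # first-order response to v^2
        sse += err * err * DT          # integrated squared tracking error
    return sse

def hill_climb(kp=1.0, iters=200, step=0.2, seed=0):
    """Stochastic hill-climbing: keep a random perturbation of the gain
    whenever it lowers the closed-loop tracking cost."""
    rng = random.Random(seed)
    best = simulate_force(kp)
    for _ in range(iters):
        cand = kp + rng.gauss(0.0, step)
        if cand <= 0.0:
            continue                   # keep the gain physically meaningful
        cost = simulate_force(cand)
        if cost < best:
            kp, best = cand, cost
    return kp, best
```

Because the closed loop is deterministic here, the hill climber can never return a cost worse than its starting point; the on-policy adaptive scheme in the dissertation additionally tracks the slow drift in muscle parameters, which this fixed-parameter sketch omits.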
Recent advancements in GPGPUs enable intelligent devices to run deep neural networks in real time. Thus, state-of-the-art intelligent systems have rapidly shifted from the paradigm of optimizing composite subsystems to the paradigm of end-to-end optimization. Taking advantage of GPGPUs, we showed how to control robotic hands from raw electromyography signals and 2D speech features using deep learning and convolutional neural networks. The proposed convolutional neural networks are lightweight, such that they run in real time and locally on an embedded GPGPU.

Item: Robot Motion Planning in an Unknown Environment with Danger Space (MDPI, 2019-02-10)
Jahanshahi, Hadi; Jafarzadeh, Mohsen; Sari, Naeimeh Najafizadeh; Viet-Thanh Pham; Van Van Huynh; Xuan Quynh Nguyen
ORCID: 0000-0001-7719-7081 (Jafarzadeh, Mohsen)

This paper discusses the real-time optimal path planning of autonomous humanoid robots in unknown environments, both in the absence and in the presence of a danger space. The danger space is defined as a region that is neither an obstacle nor free space, which the robot is permitted to cross only when no free-space option is available. In other words, the danger space comprises the potentially risky areas of the map; for example, mud pits in a wooded area or a greasy floor in a factory. The synthetic potential field, the linguistic method, and Markov decision processes are reviewed as path-planning methods for a danger-free unknown environment. A modified Markov decision process based on the Takagi-Sugeno fuzzy inference system is implemented to reach the target in both the presence and absence of the danger space. In the proposed method, the reward function is calculated without an exact estimate of the distance and shape of the obstacles. Unlike other existing path-planning algorithms, the proposed methods can work with noisy data. Additionally, the entire motion planning procedure is fully autonomous.
This feature enables the robot to operate in real-world situations. The discussed methods ensure collision avoidance and convergence to the target along an optimal and safe path. An Aldebaran NAO H25 humanoid robot was selected to verify the presented methods. The proposed methods require only vision data, which can be obtained with a single camera. The experimental results demonstrate the efficiency of the proposed methods.
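The danger-space idea from this abstract can be illustrated with a small grid-world MDP solved by value iteration. This is a sketch under simplifying assumptions: the constant cell rewards and deterministic moves below stand in for the paper's Takagi-Sugeno fuzzy reward computed from noisy vision data, and all numeric values are illustrative.

```python
# Cell classes: danger is neither an obstacle nor free space; it is
# crossable but penalized, so the planner uses it only when needed.
FREE, DANGER, OBSTACLE, GOAL = 0, 1, 2, 3
REWARD = {FREE: -1.0, DANGER: -10.0, GOAL: 100.0}  # assumed values
GAMMA = 0.95                                       # discount factor

def value_iteration(grid, tol=1e-6):
    """Deterministic 4-connected value iteration over a grid map."""
    rows, cols = len(grid), len(grid[0])
    V = [[0.0] * cols for _ in range(rows)]
    while True:
        delta = 0.0
        for r in range(rows):
            for c in range(cols):
                cell = grid[r][c]
                if cell == OBSTACLE:
                    continue                       # not enterable
                if cell == GOAL:
                    V[r][c] = REWARD[GOAL]         # absorbing target
                    continue
                best = float("-inf")
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols \
                            and grid[nr][nc] != OBSTACLE:
                        best = max(best, REWARD[cell] + GAMMA * V[nr][nc])
                if best == float("-inf"):
                    continue                       # cell is walled in
                delta = max(delta, abs(best - V[r][c]))
                V[r][c] = best
        if delta < tol:
            return V
```

With the reward scale above, a free cell next to the goal scores higher than an adjacent danger cell, so the resulting policy detours through free space when one exists and crosses danger only when it is the sole route, matching the behavior the abstract describes.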