
Walking and running are complex biological movements that have proved difficult to replicate in robots, largely because humans exploit inherent redundancies to adjust to their environment and alter their speed. Traditional AI models struggle to adapt to unknown or challenging environments because they are designed to produce a single correct solution, whereas living organisms draw on a range of possible movements with no clear best or most efficient option. Deep reinforcement learning (DRL) has been proposed as a solution, using deep neural networks to learn directly from sensory inputs, but it incurs a high computational cost, especially for systems with many degrees of freedom.

Imitation learning is another approach, in which a robot learns by imitating motion data from a human, but it struggles in situations or environments not encountered during training, limiting its adaptability and effectiveness. Researchers have therefore combined imitation learning with central pattern generators (CPGs) and deep reinforcement learning to create a method that overcomes the limitations of both approaches. In this method, imitation learning is used to train a CPG-like controller, while deep reinforcement learning is applied to the reflex neural networks that support the CPGs, allowing for more adaptable and stable motion generation.
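To give a sense of what the rhythmic core of such a controller looks like, here is a minimal sketch of a CPG as a ring of coupled phase oscillators driving joint-angle targets. This is an illustration only, not the authors' implementation: the function names, oscillator model (Kuramoto-style phase coupling), and parameter values are all assumptions, and the learned imitation and reflex components are omitted.

```python
import math

def cpg_step(phases, freq_hz, coupling, dt=0.01):
    """Advance a ring of coupled phase oscillators by one time step.

    Kuramoto-style coupling keeps the oscillators phase-locked; this is
    one simple way to model the rhythmic core of a CPG.
    """
    n = len(phases)
    out = []
    for i in range(n):
        # Intrinsic frequency plus coupling to the two ring neighbours.
        interact = sum(math.sin(phases[j] - phases[i])
                       for j in ((i - 1) % n, (i + 1) % n))
        out.append((phases[i] + dt * (2 * math.pi * freq_hz + coupling * interact))
                   % (2 * math.pi))
    return out

def joint_targets(phases, amplitudes, offsets):
    """Map oscillator phases to joint-angle targets (radians)."""
    return [o + a * math.sin(p) for p, a, o in zip(phases, amplitudes, offsets)]

# Example: four oscillators driving four leg joints at a 1 Hz stride.
phases = [0.0, math.pi, 0.0, math.pi]   # left/right legs in antiphase
amps = [0.4, 0.4, 0.6, 0.6]             # per-joint amplitudes (rad)
offs = [0.0, 0.0, 0.0, 0.0]             # per-joint offsets (rad)
for _ in range(100):                     # one second of simulation at dt = 0.01 s
    phases = cpg_step(phases, freq_hz=1.0, coupling=2.0)
targets = joint_targets(phases, amps, offs)
```

In a learned version, imitation learning would fit the amplitudes, offsets, and coupling to human motion data rather than setting them by hand.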

The resulting adaptive imitated CPG (AI-CPG) method uses the structure of CPGs and reflex circuits to generate human-like movements in robots with a high degree of adaptability and stability, setting a new benchmark for environmental adaptation in human-like robot motion. The method, developed by an international research group from Tohoku University and the Swiss Federal Institute of Technology in Lausanne, marks significant progress in the development of generative AI technologies for robot control, with potential applications across various industries.

The combination of imitation learning, CPGs, and deep reinforcement learning enables the robot to imitate walking and running motions, generate movements for frequencies where motion data is lacking, transition smoothly from walking to running, and adapt to non-stable surfaces. This approach addresses the challenges faced by traditional models in accommodating unknown or challenging environments, making robots more efficient and effective in various scenarios. By incorporating biological principles of human movement into robot control, researchers have made a significant advancement in the field of generative AI technologies.
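Two of these capabilities can be sketched in a few lines: generating gaits at stride frequencies absent from the motion data (here, by simple interpolation between a walking and a running gait) and a reflex-like correction for disturbances. Both functions, their names, and every parameter value are hypothetical stand-ins; in the AI-CPG architecture the reflex role is played by a learned neural network, not a fixed rule.

```python
def blended_frequency(speed, walk_hz=1.0, run_hz=2.5, v_walk=1.0, v_run=3.0):
    """Interpolate stride frequency between a walking and a running gait.

    This lets a controller cover speeds (and hence frequencies) that were
    never present in the recorded motion data. All values are illustrative.
    """
    w = min(max((speed - v_walk) / (v_run - v_walk), 0.0), 1.0)
    return (1.0 - w) * walk_hz + w * run_hz

def reflex_correction(tilt_rad, gain=1.5, limit=0.3):
    """Toy 'reflex': map sensed trunk tilt to a bounded hip-angle offset.

    A clipped proportional rule stands in here for the learned reflex
    network described in the article.
    """
    return min(max(-gain * tilt_rad, -limit), limit)
```

For example, `blended_frequency(2.0)` returns 1.75 Hz, midway between the two gaits, and `reflex_correction` saturates at ±0.3 rad so a large disturbance cannot command an extreme joint offset.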

The research, published in IEEE Robotics and Automation Letters, showcases the successful integration of different learning approaches to create a more flexible and powerful method for generating human-like movement in robots. The innovative combination of CPGs, reflex circuits, and deep reinforcement learning demonstrates the potential for significant advancements in the field of robotics and AI, opening up new possibilities for robots to adapt and perform more human-like movements in various environments. The interdisciplinary collaboration between researchers from different institutions highlights the importance of bringing together diverse expertise to push the boundaries of AI technologies for robot control.
