A Life of Their Own: Autonomous Robotics
Linus Nwankwo: Finding your own Path
Linus, a Ph.D. student from Nigeria at the Chair of Cyber Physical Systems, is interested in improving robot navigation intelligence. “My aim is to understand and simplify the localisation and navigation abilities of the robot in an environment of increasing complexity”, he says.
Intelligent autonomous robots have become an integral part of our daily lives. Their applications in industrial logistics, warehousing, and household tasks not only provide a cleaner and safer work environment, but also help to reduce production costs. However, several challenges remain, such as detecting and avoiding dynamic obstacles and navigating constrained environments, for example cluttered and crowded spaces. Different approaches have been proposed in recent years to achieve this kind of navigation intelligence. One of them combines path planning with simultaneous localisation and mapping (SLAM) algorithms.
“The objective of SLAM is to construct and update the map of an unknown environment with the help of high-resolution sensors attached to the robot, and then use this map to localise the robot in that environment, perceive and interpret its surroundings without any human interaction, interact with other robots by exchanging relevant information, and make safe decisions to achieve its goals. This approach is computationally expensive, and errors accumulate over time as the robot moves, due to uncertainties in the robot's observations or measurements”, Linus explains.
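The drift-and-correct cycle Linus describes can be illustrated with a toy one-dimensional example. The sketch below is not his actual system; it is a minimal Kalman-style filter with invented noise values, showing how uncertainty accumulates under odometry alone and shrinks when a landmark observation is fused in.

```python
# 1-D illustration of why odometry-only localisation drifts, and how a
# SLAM-style measurement update bounds the error. All numbers are toy values.

MOTION_NOISE = 0.1   # variance added per motion step (wheel slip etc.)
SENSOR_NOISE = 0.05  # variance of a range measurement to a known landmark

def predict(mean, var, control):
    """Motion update: uncertainty grows every time the robot moves."""
    return mean + control, var + MOTION_NOISE

def correct(mean, var, measurement):
    """Measurement update: fusing a landmark observation shrinks it again."""
    k = var / (var + SENSOR_NOISE)  # Kalman gain
    return mean + k * (measurement - mean), (1 - k) * var

mean, var = 0.0, 0.0
for _ in range(10):                  # dead reckoning only
    mean, var = predict(mean, var, 1.0)
print(f"after 10 odometry steps: variance = {var:.2f}")  # error has accumulated

mean, var = correct(mean, var, 10.2)  # one landmark observation
print(f"after one landmark fix:  variance = {var:.3f}")  # error collapses
```

The same trade-off Linus mentions is visible even here: the correction step costs extra computation per observation, and between observations the uncertainty grows without bound.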
Vedant: How do you feel?
Humans perform complex motor skills (grasping, manipulating, throwing, etc.) with relative ease. We start acquiring these skills from the day we are born, and so they feel natural to us. For robotic systems, however, they are very hard to learn. Vedant, also a Ph.D. student at the Chair of Cyber Physical Systems, focuses on learning such complex motor skills and implementing them for robotic tasks.
“My first project was based on learning complex manipulation skills from tactile information. Tactile information refers to the sensations that we humans feel when we touch an object. Take the example of grasping a cup. If you want to grasp the object from the top, you will probably approach the cup from the top. If you want to grasp it from its handle, you will approach it sideways. In both approaches, the sensations felt by your fingers are different. Also, it is possible that you don't use all your fingers, but just two fingers and a thumb, to grasp it. In that case, only those fingers will feel the touch. The question is: Can we infer motion in robotic systems just from the feel of the final touch? Is it possible to make the robot predict its full arm motion just by imagining the sensations? These are some of the questions that my previous research answered”.
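The core idea, inferring the approach from the final touch, can be sketched in miniature. The snippet below is a hypothetical illustration, not Vedant's method: the tactile patterns, pressure values, and the nearest-neighbour lookup are all invented to show how a distinctive contact pattern can point back to the motion that produced it.

```python
# Toy illustration of "infer the arm motion from the final touch": map a
# tactile pattern (contact pressure per finger) back to the approach
# direction that would have produced it. All data here is invented.

# Each key: pressure felt by (thumb, index, middle, ring, little) at grasp time.
TACTILE_TO_APPROACH = {
    (0.9, 0.8, 0.8, 0.7, 0.6): "top",   # all fingers wrap the rim
    (0.9, 0.7, 0.6, 0.0, 0.0): "side",  # thumb + two fingers on the handle
}

def infer_approach(touch):
    """Return the stored tactile pattern closest (Euclidean distance) to the
    observed one -- a 1-nearest-neighbour sketch of the inference step."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(TACTILE_TO_APPROACH, key=lambda k: dist(k, touch))

observed = (0.85, 0.75, 0.55, 0.05, 0.0)  # only thumb + two fingers touch
print(TACTILE_TO_APPROACH[infer_approach(observed)])  # -> side
```

In the research Vedant describes, the mapping goes much further, from imagined sensations to a full arm trajectory, but the direction of inference is the same: from touch back to motion.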
Currently, Vedant is working on the dynamical part of motor skills. Humans don't only learn and rely on kinematic motion; they also learn dynamic motion, motion that involves forces, such as tossing a ball, very fast. “In the quest for learning, we curiously explore unknown regions of the environment that nobody has pointed out to us. For example, babies start jumping while walking. Who tells them to do so?”, Vedant asks. His current work therefore focuses on modelling this curiosity and using it so that a robot can effectively explore its own dynamics in an unsupervised fashion and learn new skills it was never explicitly taught.
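One common way to model such curiosity, used here purely as an illustrative stand-in for Vedant's work, is to reward the robot for trying actions its own forward model predicts poorly. In the toy sketch below, the action names, the "true" outcomes, and the learning rate are all invented; the point is only that exploration concentrates on the least-understood action and fades as the model improves.

```python
# Minimal sketch of curiosity-driven exploration: the intrinsic reward for an
# action is the error of the robot's own forward model, so poorly-understood
# actions are tried more often. The "dynamics" here are a toy stand-in.

ACTIONS = ["push", "toss", "roll"]
true_dynamics = {"push": 1.0, "toss": 3.0, "roll": 2.0}  # true outcome per action
model = {a: 0.0 for a in ACTIONS}                        # learned forward model
LEARNING_RATE = 0.5

def curiosity(action):
    """Intrinsic reward: how wrong is my prediction for this action?"""
    return abs(true_dynamics[action] - model[action])

for step in range(20):
    action = max(ACTIONS, key=curiosity)  # explore the least-understood action
    outcome = true_dynamics[action]
    model[action] += LEARNING_RATE * (outcome - model[action])  # refine model

# Once every action is well predicted, curiosity (and exploration) fades.
print({a: round(curiosity(a), 3) for a in ACTIONS})
```

No teacher ever tells this toy agent which action to try, which mirrors the unsupervised setting Vedant describes: the robot's own prediction error decides where to look next.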