In a factory setting, a robot moves freely and accurately as every inch of the space is controlled, well known and predictable. But in the real world — the human world — where noise, disruption, and obstacles abound, every street, house, store or city park comes with its own set of unforeseeable challenges. These uncharted landscapes are what Joydeep Biswas, Associate Professor in the Department of Computer Science, has set his sights on. As outlined in UT’s Good Systems Living and Working With Robots project, service robots have immense potential in the real world, working alongside professionals in healthcare and eldercare settings, or as personalized robots that assist humans in their daily lives. At Texas Robotics, Biswas is helping to create a new model for human-machine partnerships and building self-sufficient mobile service robots that can navigate the complex and unpredictable spaces where real people live.
“The challenges we look at are how do you operate in an open world context?” Biswas explains. “You're not prescribing exactly what your environment is going to look like or who's going to be there.”
Navigating Space and Society
Boundaries in the real world are not always physical, and they are often nuanced. Biswas and his research team are working to close what he calls the perception-action loop, ensuring robots can use sensory data to guide actions and use actions to improve perception. For a robot to be truly useful in a home over the long term, it needs to do more than just navigate physical obstacles; it must also understand social context and environmental nuances. A delivery robot, for example, shouldn't just avoid a flowerbed; it should know that rolling over it is a social error.
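One way to picture the perception-action loop described above is as a cycle in which sensing guides the next action and acting, in turn, produces new information that sharpens perception. The toy sketch below is purely illustrative (the `ToyRobot` class and its methods are hypothetical stand-ins, not the lab's actual interfaces): a 1-D agent with a noisy distance sensor repeatedly senses, acts, and senses again until it converges on its target.

```python
import random

class ToyRobot:
    """A hypothetical 1-D agent used only to illustrate the loop."""
    def __init__(self, target=10.0):
        self.position = 0.0
        self._target = target

    def sense(self):
        # Perception: a noisy reading of how far away the target is.
        return self._target - self.position + random.uniform(-0.5, 0.5)

    def act(self, step):
        # Action: moving changes what the next sensor reading will be.
        self.position += step

def perception_action_loop(robot, steps=50):
    for _ in range(steps):
        distance = robot.sense()   # perception guides the action...
        robot.act(0.5 * distance)  # ...and acting updates perception
    return robot.position

random.seed(0)
robot = ToyRobot(target=10.0)
final = perception_action_loop(robot)
print(round(final, 1))  # converges near the target at 10.0
```

The point of the loop is that neither half works alone: without fresh sensing the agent acts blindly, and without acting it never gets closer to what it is trying to perceive.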
Making Memories
One of the research team’s most compelling breakthroughs involves memory and spatial reasoning. Standard robots usually require pre-programmed data to recognize objects, but Biswas and his team are developing systems that allow a robot to remember. Take the request “Bring me the blue mug”: the robot must recall a mug it saw on a table the previous day without having been told to track the mug in advance. It has to navigate both space (the environment) and time (its memories). By maintaining a stream of consciousness, recorded as natural-language captions and images, the robot can search its own history to locate the item and fulfill the request.
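The “stream of consciousness” idea can be pictured as an append-only log of timestamped, captioned observations that the robot searches later. The sketch below is a minimal illustration under that assumption; the class and method names (`MemoryStream`, `record`, `recall`) are hypothetical, not the lab's actual system, and a real implementation would likely search with learned embeddings rather than keyword matching.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """One log entry: a caption plus where and when it was made."""
    timestamp: datetime
    location: str
    caption: str

@dataclass
class MemoryStream:
    """Append-only log of captioned observations the robot can search later."""
    log: list = field(default_factory=list)

    def record(self, location: str, caption: str) -> None:
        self.log.append(Observation(datetime.now(), location, caption))

    def recall(self, query: str) -> list:
        """Return past observations whose captions mention every query word."""
        words = query.lower().split()
        return [o for o in self.log if all(w in o.caption.lower() for w in words)]

# The robot captions what it sees as it moves through the house...
memory = MemoryStream()
memory.record("kitchen", "a blue mug on the counter next to the coffee maker")
memory.record("office", "a laptop and a red notebook on the desk")

# ...and later searches its own history to ground "bring me the blue mug".
hits = memory.recall("blue mug")
print(hits[0].location)  # → kitchen
```

Because the log is plain language, the same history can answer questions it was never designed for, which is what distinguishes this from pre-programming the robot to track specific objects.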
Learning by Demonstration
As robots become more self-reliant, ensuring they do exactly what a human intends—and nothing more—becomes a complex problem. Biswas highlights the challenge of specifications. In the lab, one of his students asked whether a robot could be told to prevent anyone from taking the elevator. The team tried it and found a surprising result. Lacking the physical strength to block the door, the robot—powered by a large language model—found a creative, albeit slightly deceptive, solution: it stood by the elevator and told people it was broken.
“It did work,” Biswas notes, but the result also raises questions about the robot’s ability to, essentially, lie. To address this, the lab collaborates with linguistics and programming experts to ensure correctness through Learning by Demonstration, or imitation learning.
Ultimately, the goal isn't just to build smarter machines, but to help people. By building robots that can learn, remember, and adapt, Texas Robotics is ensuring that the robots of tomorrow won't just follow a script—they’ll be ready for whatever the open world throws at them.