Mark Zolotas

Researcher on Human-Robot Interaction

Ergonomic Human-Robot Collaboration

Human-robot collaboration has immense potential to improve workplace ergonomics, i.e., a human worker’s productivity and well-being. This research addresses a pressing need: conventional machinery is a major cause of injury to workers in industrial settings and a leading cause of absence from work.

To advance this research direction, my colleagues and I presented a framework to quantify ergonomics in human-robot collaboration based on body posture [1]. We developed a robot-to-human handover architecture in which 15 lb packages were exchanged between a robot and a human, mimicking typical industrial tasks where workers repetitively carry loads. Our proposed system used an external camera to estimate an ergonomic score for the receiver’s posture during each handover. The camera tracked the receiver’s joints and applied a standardized assessment scale to produce a measure of their ergonomic state, e.g., based on the angles between these joints. In response to this ergonomic score, the robot then chose different locations in 3D space at which to transfer packages, with the aim of “stimulating” the receiver into varying their posture. This “stimulating” handover policy improved ergonomic scores over standard approaches to robot-to-human handover.
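
To make the score-and-respond loop concrete, here is a minimal sketch of how a posture score could drive the choice of handover location. The joint names, angle thresholds, and the `ergonomic_score` and `choose_handover_location` functions are illustrative assumptions for this page, not the standardized scale or the handover policy published in the paper.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (in degrees) at joint b formed by the 3D keypoints a-b-c."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def ergonomic_score(keypoints):
    """Toy posture score: higher means worse posture (more strain).

    `keypoints` maps joint names to 3D positions from the external camera.
    The thresholds below are illustrative, not a published assessment scale.
    """
    elbow = joint_angle(keypoints["shoulder"], keypoints["elbow"], keypoints["wrist"])
    trunk = joint_angle(keypoints["neck"], keypoints["hip"], keypoints["knee"])
    score = 1
    if not 60 <= elbow <= 100:   # arm outside a neutral working range
        score += 1
    if trunk < 160:              # trunk bent forward
        score += 2
    return score

def choose_handover_location(score, candidates, last_location):
    """Pick the next transfer point: if posture is degrading, 'stimulate'
    movement by selecting the candidate farthest from the previous one."""
    if score <= 2:
        return last_location
    return max(candidates,
               key=lambda p: np.linalg.norm(np.asarray(p) - np.asarray(last_location)))
```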

References

[1] M. Zolotas, R. Luo, S. Bazzi, D. Saha, K. Mabulu, K. Kloeckl, and T. Padır, “Productive Inconvenience: Facilitating Posture Variability by Stimulating Robot-to-Human Handovers”, IEEE International Conference on Robot and Human Interactive Communication, 2022.

Teleoperation

Teleoperation of robots plays a vital role in complex and unpredictable settings where human supervision is necessary or where the co-presence of a human operator poses unwanted risk. However, direct teleoperation of robots is challenging and can expose human operators to adverse levels of workload. This is particularly true when the robot is more complex than the device the operator is using as a control interface, e.g., when using a handheld controller to operate a robot arm with multiple joints. Shared control offers a way to reduce this burden on operators, as discussed in the Shared Control section below.

Here I will briefly demonstrate an example from my prior work on how a virtual reality (VR) headset interface can be used to expose the inner workings of a shared control system [1]. This example also falls under my research thread on “explainable” human-robot collaboration using extended reality.

In the video below, a VR interface provides visual feedback to operators as they teleoperate a robot arm during a screwdriver task. The underlying shared controller regulated operator motion to either guide the teleoperated robot along a task-specific path or restrict it to a “safe” region. A compass visualization is shown above the robot’s model in VR, with its centered arrow pointing in the direction of the task path and its color indicating safety from obstacles. Please refer to the paper or contact me if you are interested in more details.
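
As a rough illustration of how such a cue could be computed, the sketch below derives an arrow direction and a safety color from the end-effector position, a reference path, and a set of obstacle positions. The geometry here is a simplifying assumption for clarity; the actual system derives its feedback from constrained motion polytopes, as described in the paper.

```python
import numpy as np

def compass_cue(ee_pos, path_points, obstacles, safe_dist=0.3):
    """Compute a VR compass cue: a unit arrow toward the task path and a
    color reflecting clearance from obstacles (illustrative geometry only)."""
    ee = np.asarray(ee_pos, dtype=float)
    path = np.asarray(path_points, dtype=float)

    # Arrow: point from the end-effector toward the closest point on the path.
    nearest = path[np.argmin(np.linalg.norm(path - ee, axis=1))]
    direction = nearest - ee
    arrow = direction / (np.linalg.norm(direction) + 1e-9)

    # Color: green when far from every obstacle, shifting to red as clearance shrinks.
    clearance = min(np.linalg.norm(ee - np.asarray(o)) for o in obstacles)
    danger = np.clip(1.0 - clearance / safe_dist, 0.0, 1.0)
    color = (danger, 1.0 - danger, 0.0)  # RGB

    return arrow, color
```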

References

[1] M. Zolotas, M. Wonsick, P. Long, and T. Padır, “Motion Polytopes in Virtual Reality for Shared Control in Remote Manipulation Applications”, Frontiers in Robotics and AI, 2021.

Self-Organizing Multi-agent Systems

Prior to my Ph.D., I implemented self-organizing multi-agent systems and conducted socioeconomic experiments in simulation as part of a research program known as computational justice. By simulating artificial agent communities that sustained themselves from a common-pool resource, I was able to study the effects of enforcing “retributive justice” on non-compliant agent behavior [1].
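
For a flavor of the setup, here is a minimal sketch of a common-pool resource round with a retributive sanction on over-appropriating agents. The resource dynamics, appropriation rule, monitoring probability, and sanction are simplified assumptions for illustration, not the mechanisms studied in the paper.

```python
import random

def run_round(pool, agents, fair_share, sanction=2):
    """One round: each agent appropriates from the shared pool; agents that
    take more than their fair share are detected with some probability and
    sanctioned by sitting out subsequent rounds (a toy 'retributive' response)."""
    for agent in agents:
        if agent.get("excluded", 0) > 0:
            agent["excluded"] -= 1            # serving a sanction this round
            continue
        greedy = not agent["compliant"]
        take = min(pool, fair_share * (2 if greedy else 1))
        pool -= take
        agent["wealth"] = agent.get("wealth", 0.0) + take
        if greedy and random.random() < 0.8:  # imperfect monitoring
            agent["excluded"] = sanction
    return pool * 1.1                          # resource slowly regenerates

agents = [{"compliant": True} for _ in range(8)] + [{"compliant": False} for _ in range(2)]
pool = 100.0
for _ in range(20):
    pool = run_round(pool, agents, fair_share=1.0)
```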

I won’t go into more detail about this work as I haven’t continued down that path of research, but the prospect of exploring justice and ethics within artificial intelligence remains a relevant endeavor of mine.

References

[1] M. Zolotas and J. Pitt, “Self-Organising Error Detection and Correction in Open Multi-agent Systems”, IEEE International Workshops on Foundations and Applications of Self* Systems, 2016.

Extended Reality in Human-Robot Interaction

Extended reality headsets are an emerging technology with many applications in human-robot interaction. For instance, augmented reality (AR) headsets could visually display a robot’s affordable behaviors through a graphical interface and thus show what the robot is capable of doing. Virtual reality (VR), meanwhile, is particularly advantageous when remotely controlling robots, as in teleoperation.

The image on the right (taken from our paper [1]) illustrates how a dual-arm collaborative robot’s affordable behaviors can be visualized by someone wearing an AR headset. You can see: 1) the overlaid robot model; 2) objects the robot can manipulate; and 3) selectable actions for the wearer, e.g., “clean” up the teddy bear. Click here for the corresponding video!

In line with my ambition to advance the capabilities of assistive robots, my colleagues and I developed a novel system that combined an AR headset with a robotic wheelchair [2]. The resulting platform was the first of its kind, and I hope to see many more examples of similar assistive robots in the near future. Check out the video below for details.

VR headsets also fall within the extended reality spectrum and are useful in gaming, remote teleoperation, and training scenarios. Shown on the right is a physical replica of the buzz wire game that my colleagues and I designed to explore how people perform the task when teleoperating a robot in VR.

References

[1] M. Zolotas and Y. Demiris, “Transparent Intent for Explainable Shared Control in Assistive Robotics”, International Joint Conference on Artificial Intelligence, 2020.

[2] M. Zolotas, J. Elsdon and Y. Demiris, “Head-Mounted Augmented Reality for Explainable Robotic Wheelchair Assistance”, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2018.

Shared Control

One of the most promising areas of research in human-robot collaboration is shared control. In this paradigm, a human and an intelligent autonomy “share” control of a robot by simultaneously issuing commands towards a common goal. Fusing inputs in this manner allows the human-robot team to harness the complementary advantages of both agents. Typical applications of shared control include robotic wheelchairs, surgical aids, space exploration rovers, vehicle driving assistance, and so forth. I’m passionate about this subject because of its capacity to ease the control of robots, which can be challenging and impose excessive workload when operated manually. This is particularly beneficial for individuals with disabilities.
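
As a concrete, deliberately simplified example of what “sharing” control can mean, the sketch below blends a human command with an autonomous command through linear arbitration. This is one common fusion scheme assumed here for illustration; it is not drawn from any particular paper of mine.

```python
import numpy as np

def blend_commands(human_cmd, robot_cmd, alpha):
    """Linear arbitration: a common (though not the only) way to fuse inputs.
    alpha in [0, 1] is the autonomy's level of authority, often scaled by its
    confidence in the inferred goal."""
    human_cmd, robot_cmd = np.asarray(human_cmd, float), np.asarray(robot_cmd, float)
    return (1.0 - alpha) * human_cmd + alpha * robot_cmd

# e.g. a wheelchair velocity command (linear, angular):
blended = blend_commands([0.6, 0.1], [0.4, -0.2], alpha=0.5)
```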

But how do we develop shared control to best assist someone during a task? A popular answer to this question is to have the robot learn human “intentions”. By understanding what a person intends to accomplish in a given task, the robot can then effectively assist them, be it getting a robotic wheelchair from point ‘A’ to ‘B’, helping a surgeon guide a surgical instrument, or even steering a vehicle to avoid accidents. My prior research has explored this idea of making robots understand human intent [1].
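
To illustrate one standard way of inferring intent, the sketch below maintains a belief over candidate goals and updates it from the user’s commands, assuming the user acts approximately rationally toward their intended goal. This Boltzmann-style observation model is a common baseline in the literature, not the specific method developed in [1].

```python
import numpy as np

def update_goal_belief(belief, user_cmd, position, goals, beta=2.0):
    """One Bayesian update of the belief over candidate goals, assuming the
    user's command is noisily directed at their intended goal."""
    position = np.asarray(position, float)
    cmd = np.asarray(user_cmd, float)
    cmd = cmd / (np.linalg.norm(cmd) + 1e-9)
    likelihoods = []
    for g in goals:
        to_goal = np.asarray(g, float) - position
        to_goal = to_goal / (np.linalg.norm(to_goal) + 1e-9)
        likelihoods.append(np.exp(beta * np.dot(cmd, to_goal)))
    posterior = np.asarray(belief, float) * np.array(likelihoods)
    return posterior / posterior.sum()

# e.g. two candidate goals, uniform prior, user pushes the joystick toward goal 0:
belief = update_goal_belief([0.5, 0.5], user_cmd=[1.0, 0.0],
                            position=[0.0, 0.0], goals=[[2.0, 0.0], [0.0, 2.0]])
```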

That being said, even when a robot can infer the best strategy for assistance, it still may not behave as you would expect. This is what we term a “model misalignment”, i.e., the person’s mental model does not quite align with how the robot works internally.


Left: Why is my robotic wheelchair behaving weirdly?
Right: Augmented reality headset explaining why!

To help resolve this misalignment, I introduced the notion of Explainable Shared Control [2]. In this framework, an augmented reality headset was used as a way of visually exposing a robot’s inner workings. Check out the video below for a demonstration on a robotic wheelchair!

References

[1] M. Zolotas and Y. Demiris, “Disentangled Sequence Clustering for Human Intention Inference”, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2022.

[2] M. Zolotas and Y. Demiris, “Towards Explainable Shared Control using Augmented Reality”, IEEE/RSJ International Conference on Intelligent Robots and Systems, 2019.