Research

Robot Proficiency Self-Assessment

SUCCESS: Self-Assessment and Understanding of Competence and Conditions to Ensure System Success

The core focus of the SUCCESS Multidisciplinary University Research Initiative (MURI) project is to develop new knowledge and techniques for system proficiency self-assessment. The work is framed as a taxonomy of two intersecting dimensions: Time and Levels of Self-Assessment. For the former, these approaches need to work a priori, in situ, and post hoc in order to support effective autonomy and utilization by human partners and supervisors. For the latter, self-assessment can range from simple detection of proficiency up through evaluation, explanation, and prediction. While this project is scoped to robot autonomy, these techniques have utility in other forms of artificial intelligence (AI).
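
To make the taxonomy concrete, its two dimensions can be written down as a pair of enumerations. The sketch below is purely illustrative; the type and member names are ours, not the project's software:

```python
from enum import Enum

class AssessmentTime(Enum):
    """When self-assessment happens (the taxonomy's Time dimension)."""
    A_PRIORI = "before task execution"
    IN_SITU = "during task execution"
    POST_HOC = "after task execution"

class AssessmentLevel(Enum):
    """How deep self-assessment goes (the taxonomy's Levels dimension)."""
    DETECTION = 1    # detect whether proficiency is adequate
    EVALUATION = 2   # quantify how proficient the system is
    EXPLANATION = 3  # explain why proficiency is at that level
    PREDICTION = 4   # predict proficiency on future or hypothetical tasks

# Example: a supervisor asks for a proficiency prediction before deployment.
request = (AssessmentTime.A_PRIORI, AssessmentLevel.PREDICTION)
print(f"Requested: {request[1].name} ({request[0].value})")
```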

The project team comprises Carnegie Mellon University, Brigham Young University, Tufts University, and the University of Massachusetts Lowell. This project is supported by the Office of Naval Research (N00014-18-1-2503). For more information on the SUCCESS project: https://successmuri.org/

Selected Publications:

Zhao Han, Elizabeth Phillips, and Holly A. Yanco. The Need for Verbal Robot Explanations and How People Would Like a Robot to Explain Itself. ACM Transactions on Human-Robot Interaction (THRI), Vol. 10, No. 4, pp. 1-42, December 2021.

Zhao Han, Daniel Giger, Jordan Allspaw, Michael S. Lee, Henny Admoni, and Holly A. Yanco. Building The Foundation of Robot Explanation Generation Using Behavior Trees. ACM Transactions on Human-Robot Interaction, Vol. 10, No. 3, pp. 1-31, July 2021.

Assistive Technology

AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING)

The NSF AI Institute for Collaborative Assistance and Responsive Interaction for Networked Groups (AI-CARING) will develop a discipline focused on personalized, longitudinal, collaborative AI, enabling the development of AI systems that learn personalized models of user behavior, understand how people's behavior changes over time, and integrate that knowledge to support people and AIs working together. These networked human-AI teams will work with older adults and their caregivers to provide sustainable long-term care solutions.

AI-CARING is a collaboration between Georgia Tech, Carnegie Mellon University, Oregon State University, the University of Massachusetts Lowell, and Oregon Health & Science University. The project is funded by the National Science Foundation (2112633). For more information on AI-CARING: http://ai-caring.org/

Robotic Manipulation Assistance for Activities of Daily Living

The goal of this project is to develop a robot system, composed of a robotic arm and a mobility scooter, that provides both pick-and-drop and pick-and-place functionality in open-world environments without modeling the objects or the environment. The system uses a laser pointer to directly select an object in the world, with feedback given to the user by projecting an interface into the world. Evaluations of this system suggest a significant improvement in both runtime and grasp success rate relative to a baseline from the literature, and demonstrate accurate pick-and-place capabilities in tabletop scenarios.
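
The following is a simplified, self-contained sketch of the laser-pointer selection cycle described above. Every class and method here is a hypothetical placeholder standing in for the system's real perception, projection, and arm-control components:

```python
from dataclasses import dataclass

@dataclass
class Point3D:
    x: float
    y: float
    z: float

class StubPerception:
    """Placeholder for laser-dot detection from camera images."""
    def detect_laser_dot(self) -> Point3D:
        return Point3D(0.4, 0.1, 0.02)

class StubArm:
    """Placeholder for model-free grasp planning and execution."""
    def pick(self, target: Point3D) -> bool:
        print(f"Grasping object near {target}")
        return True

    def place(self, dest: Point3D) -> bool:
        print(f"Placing object at {dest}")
        return True

def pick_and_place(perception: StubPerception, arm: StubArm) -> bool:
    """One cycle: the user laser-selects an object, then a destination."""
    target = perception.detect_laser_dot()  # user points at the object
    print("Projecting highlight onto selection for user feedback")
    if not arm.pick(target):
        return False
    dest = perception.detect_laser_dot()    # user points at the drop location
    return arm.place(dest)

if __name__ == "__main__":
    pick_and_place(StubPerception(), StubArm())
```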

This project is a collaboration between Northeastern University and the University of Massachusetts Lowell. It is supported in part by the National Science Foundation (IIS-1763469 and IIS-1426968).

Selected Publications:

Gregory LeMasurier, Gal Bejerano, Victoria Albanese, Jenna Parrillo, Holly A. Yanco, Nicholas Amerson, Rebecca Hetrick, and Elizabeth Phillips. Methods for Expressing Robot Intent for Human–Robot Collaboration in Shared Workspaces. ACM Transactions on Human-Robot Interaction (THRI), Vol. 10, No. 4, pp. 1-27, December 2021.

Alexander Wilkinson, Michael Gonzales, Patrick Hoey, David Kontak, Dian Wang, Noah Torname, Amelia Sinclaire, Zhao Han, Jordan Allspaw, Robert Platt, and Holly A. Yanco. Design Guidelines for Human-Robot Interaction with Assistive Robot Manipulation Systems. Paladyn, Journal of Behavioral Robotics, Vol. 12, No. 1, pp. 392-401, September 2021.

Legged and Humanoid Robots

Bipedal and Quadrupedal Robot Autonomy in Complex Unstructured Environments

The aim of this project is to increase the impact and use of robotics by the US Navy and DoD by creating a new integrated framework for perception, planning, navigation, and manipulation under uncertainty for humanoid and quadrupedal robots. The objective is to significantly advance the state of the art in autonomous legged robots operating in highly complex, unstructured environments in collaboration with humans, creating a complete system for assisting Marines on patrol or sailors aboard a vessel, whether for routine maintenance or for emergency situations.

This project is a collaboration between Brown University, the University of Massachusetts Lowell, the Office of Naval Research, and the Naval Undersea Warfare Center. It is supported by the Office of Naval Research (N00014-21-1-2582).

Virtual Reality Control of Humanoid Robots

Virtual reality (VR) interfaces for robots provide a three-dimensional (3D) view of the robot in its environment, allowing people to better plan complex robot movements in tight or cluttered spaces. In this project, we have developed VR interfaces for the teleoperation of humanoid robots and mobile manipulators. We have built a human-in-the-loop planner in which the operator sends higher-level manipulation and navigation goals in VR through functional waypoints, visualizes the results of a robot planner in the 3D virtual space, and then denies, alters, or confirms the plan to send to the robot.
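
A minimal sketch of that plan-review cycle is shown below, assuming hypothetical `planner` and `vr` interfaces; the project's actual planner and VR code differ:

```python
# `planner` and `vr` are assumed duck-typed interfaces, not real project APIs.

def review_goal_in_vr(goal, planner, vr):
    """Operator sends a goal, previews the plan in VR, then decides its fate."""
    plan = planner.plan(goal)           # robot-side planner proposes a trajectory
    vr.visualize(plan)                  # render the result in the 3D virtual space
    decision = vr.ask_operator()        # one of: "confirm", "alter", "deny"
    while decision == "alter":
        goal = vr.edit_waypoints(goal)  # operator adjusts functional waypoints
        plan = planner.plan(goal)
        vr.visualize(plan)
        decision = vr.ask_operator()
    return plan if decision == "confirm" else None  # None: nothing is executed
```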

This work is supported in part by the National Science Foundation (IIS-1944584) and the Department of Energy (DE-EM0004482).

Selected Publications:

Gregory LeMasurier, Jordan Allspaw, and Holly A. Yanco. Semi-Autonomous Planning and Visualization in Virtual Reality. ACM/IEEE HRI 2021 Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI), virtual, March 2021.

Jordan Allspaw, Gregory LeMasurier, and Holly Yanco. Implementing Virtual Reality for Teleoperation of a Humanoid Robot. ACM/IEEE HRI 2020 Workshop on Virtual, Augmented, and Mixed Reality for Human-Robot Interactions (VAM-HRI), March 2020.

Metrics and Evaluation Methods for Robots

REMOTE: Remote Experimentation of Manipulation for Online Test and Evaluation

REMOTE is developing a remotely accessible testbed for conducting repeatable robotic grasping and manipulation tests, built around a set of standard robotic hardware, benchmarking tasks (e.g., object grasping, shelf picking, door opening, part assembly), a common software framework, test metrics, and external sensors for recording ground truth and rich data collection.

This project is a collaboration between the University of Massachusetts Lowell and Oregon State University, and is supported in part by the National Science Foundation (CNS-1925715 and CNS-1925604). For more information on the REMOTE project: https://sites.google.com/view/remotetestbed/

DECISIVE: Development and Execution of Comprehensive and Integrated Subterranean Intelligent Vehicle Evaluations

Small unmanned aerial systems (sUAS) may prove to be an advantageous asset for reconnaissance when operating in subterranean or other constrained, indoor environments. This effort will produce test methods for evaluating sUAS across several capability areas, including communications, navigation, obstacle avoidance, mapping, field readiness, interface heuristics, trust, and situation awareness. The objectives of this project are to (1) design test methods for evaluating subterranean and indoor sUAS, including hardware characterization, physics-based analyses, functional capabilities, and HRI assessments; (2) conduct evaluations of commercial sUAS platforms to benchmark their performance using the developed test methods and identify capability gaps; and (3) identify and evaluate technologies to fill those gaps.

This project is sponsored by the Department of the Army, U.S. Army Combat Capabilities Development Command Soldier Center (W911QY-18-2-0006).

Selected Publications:

Adam Norton, Reza Ahmadzadeh, Kshitij Jerath, Paul Robinette, Jay Weitzen, Thanuka Wickramarathne, Holly Yanco, Minseop Choi, Ryan Donald, Brendan Donoghue, Christian Dumas, Peter Gavriel, Alden Giedraitis, Brendan Hertel, Jack Houle, Nathan Letteri, Edwin Meriaux, Zahra Rezaei, Rakshith Singh, Gregg Willcox, and Naye Yoni. DECISIVE Test Methods Handbook: Test Methods for Evaluating sUAS in Subterranean and Constrained Indoor Environments, Version 1.1. arXiv preprint arXiv:2211.01801, November 2022.

Adam Norton, Peter Gavriel, Brendan Donoghue, and Holly Yanco. Test Methods to Evaluate Mapping Capabilities of Small Unmanned Aerial Systems in Constrained Indoor and Subterranean Environments. In Proceedings of the IEEE International Symposium on Technologies for Homeland Security (HST) 2021, November 2021.

ARM Institute Metrics and Evaluation Working Group (MEWG)

The ARM Institute Metrics and Evaluation Working Group (MEWG) aims to improve the measurement and characterization of technology generated by ARM-funded projects by defining a common lexicon, metrics scheme, and evaluation framework. The MEWG developed a unified evaluation framework and resources that the ARM Institute uses to quantify its impact in advancing manufacturing with robotics. Using this framework, core metrics are identified and developed to demonstrate improvements over current state-of-the-art manufacturing technologies, as enabled by robotics, across six cardinal metric categories: performance, efficiency, productivity, acquisition cost, sustaining cost, and investment prudency. All metrics are evaluated against a baseline to calculate a percentage improvement.
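
Concretely, baseline-relative scoring can be expressed as a one-line calculation. The sketch below is our illustrative reading of "percentage improvement over a baseline," not the MEWG's official formula:

```python
def percent_improvement(measured: float, baseline: float,
                        lower_is_better: bool = False) -> float:
    """Percentage improvement of a metric relative to its baseline."""
    if lower_is_better:
        # Cost- or time-like metrics (e.g., cycle time) improve by decreasing.
        return 100.0 * (baseline - measured) / baseline
    return 100.0 * (measured - baseline) / baseline

# Example: a cycle time (lower is better) of 15 s against a 20 s baseline.
print(percent_improvement(15.0, 20.0, lower_is_better=True))  # 25.0
```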

This work is supported by the Advanced Robotics for Manufacturing (ARM) Institute (28158).

Selected Publications:

Adam Norton, Elena Messina, and Holly A. Yanco. Advancing Capabilities of Industrial Robots Through Evaluation, Benchmarking, and Characterization. Manufacturing in the Era of 4th Industrial Revolution: A World Scientific Reference Volume 2: Recent Advances in Industrial Robotics, pp. 337-371, March 2021.

Adam Norton, Amy Saretsky, and Holly Yanco. Developing Metrics and Evaluation Methods for Assessing AI-Enabled Robots in Manufacturing. In Proceedings of the AAAI Spring Symposium on Artificial Intelligence and Manufacturing, March 2020.

Test Methods for Industrial Robots

Robot systems in manufacturing, unlike traditional automation, can perceive elements in the environment and adapt to the changes they detect. Both capabilities are important for demonstrating that a robot system is flexible and agile in a manufacturing setting. Test methods for the perception and adaptation capabilities of industrial robots will enable small and medium manufacturers to make more informed procurement decisions and allow the research community to benchmark systems as the field progresses. To fill this gap, this project develops test methods for industrial robot mobility and manipulation, focusing on perception and adaptation challenges relevant to industrial manufacturing settings. Outputs of this project are used to develop standards through the ASTM F45 Committee on Robotics, Automation, and Autonomous Systems.

This work is supported in part by the National Institute of Standards and Technology (70NANB14H235, 70NANB20H199, 70NANB19H101, 70NANB17H256).

Selected Publications:

Joe Falco, Daniel Hemphill, Kenneth Kimble, Elena Messina, Adam Norton, Rafael Ropelato, and Holly Yanco. Benchmarking Protocols for Evaluating Grasp Strength, Grasp Cycle Time, Finger Strength, and Finger Repeatability of Robot End-effectors. IEEE Robotics and Automation Letters, Special Issue on Benchmarking Protocols for Robotic Manipulation, Vol. 5, No. 2, pp. 644-651, April 2020.

Adam Norton, Peter Gavriel, and Holly Yanco. A Standard Test Method for Evaluating Navigation and Obstacle Avoidance Capabilities of AGVs and AMRs. ASTM Journal of Smart and Sustainable Manufacturing Systems, Vol. 3, No. 2, pp. 106-126, November 2019.

Test Methods for Response Robots

Ground, aerial, and aquatic response robots are utilized for intelligence, surveillance, and reconnaissance (ISR); explosive ordnance disposal (EOD); and urban search and rescue (USAR) operations. Standard test methods to evaluate the capabilities of these systems are specified through NIST and ASTM, covering robotic functionality including mobility, maneuvering, dexterity, sensors, and human-robot interaction. UMass Lowell contributes to the development of these standards by prototyping new test methods, revising existing standards, and supporting the execution of test method exercise events, including the RoboCup Rescue competitions. Outputs of this project are used to develop standards through ASTM Committee E54 on Homeland Security Applications, Subcommittee E54.09 on Response Robots.

This work is supported in part by the National Institute of Standards and Technology (70NANB20H021, 60NANB14D286).

Selected Publication:

Adam Norton, Brian Flynn, and Holly Yanco. Implementing Human-Robot Interaction Evaluation Using Standard Test Methods for Response Robots. Homeland Security and Public Safety: Research, Applications and Standards, ASTM STP1614, ASTM International, pp. 63-90, November 2019.

Exoskeletons and Wearable Robots

Performance Evaluation of Exoskeleton Capabilities and Training

Wearable technologies such as exoskeletons have great potential to enhance soldiers' performance by augmenting their physical capabilities, allowing soldiers to maintain their performance for longer periods of time and perform more effectively while under heavy personal protective equipment (PPE), such as an EOD suit. However, it is critical to evaluate the effects of wearing exoskeleton systems on human performance prior to their adoption for military use, to ensure that exoskeletons meet their goals without impeding human performance or imposing strain or stress that could lead to injury. Additionally, effective training protocols and methodologies can enhance human adaptation to exoskeleton assistance, increasing soldier-exoskeleton performance and enabling rapid deployment. This project evaluates several candidate exoskeletons on soldier-relevant tasks and develops training protocols with a focus on greater user-exo fluency and performance in real-world settings. The results will be used to produce research-based guidelines on training methodologies that maximize user-exo adaptation and performance. Outputs of this project are used to develop standards through the ASTM F48 Committee on Exoskeletons and Exosuits.

This project is sponsored by the Department of the Army, U.S. Army Combat Capabilities Development Command Soldier Center (W911QY-18-2-0006, W911QY-20-2-0005).

Selected Publications:

Yi-Ning Wu, Adam Norton, Michael R. Zielinski, Pei-Chun Kao, Andrew Stanwicks, Patrick Pang, Charles H. Cring, Brian Flynn, and Holly A. Yanco. Characterizing the Effects of Explosive Ordnance Disposal Operations on the Human Body While Wearing Heavy Personal Protective Equipment. Human Factors: The Journal of the Human Factors and Ergonomics Society, February 2021.

Blake Bequette, Adam Norton, Eric Jones, and Leia Stirling. Physical and Cognitive Load Effects Due to a Powered Lower-Body Exoskeleton. Human Factors: The Journal of the Human Factors and Ergonomics Society, March 2020.

Fabric-Embedded Dynamic Sensing for Adaptive Exoskeleton Assistance

Exoskeletons can provide people with movement assistance when they become fatigued during long periods of exertion. This project will develop new human-robot interaction methods for adaptive exoskeleton control, using novel fabric-embedded sensors to measure how a person is moving and a model that infers from those movements when the person is becoming tired. Combining comfortable, breathable fabric-embedded sensors with an adaptive exoskeleton controller that measures a person's fatigue in real time will enable endurance enhancement: the controller will adapt its assistance to delay the onset of the wearer's fatigue and make better use of exoskeleton power.
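
As a rough illustration of the control idea, the loop below estimates a fatigue level from a window of fabric-sensor readings and scales assistance accordingly. The fatigue proxy here is a toy heuristic; the project's actual model is learned from movement data:

```python
from statistics import mean

def estimate_fatigue(strain_window):
    """Toy fatigue proxy: drift in mean fabric-sensor strain over a window.
    Purely illustrative; a real model would be learned from movement data."""
    half = len(strain_window) // 2
    baseline = mean(strain_window[:half])
    recent = mean(strain_window[half:])
    return max(0.0, min(1.0, (recent - baseline) / max(baseline, 1e-6)))

def control_step(strain_window, set_assistance, max_assist=1.0):
    """One cycle: estimate fatigue, then scale exoskeleton assistance."""
    fatigue = estimate_fatigue(strain_window)
    set_assistance(fatigue * max_assist)  # more fatigue -> more assistance

# Example with fake sensor samples and a stub actuator command.
control_step([0.10, 0.11, 0.10, 0.14, 0.15, 0.16],
             lambda a: print(f"assist={a:.2f}"))
```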

This is a collaborative project between the University of Massachusetts Lowell and Yale University. This work is supported in part by the National Science Foundation (IIS-1955979).

Command and Control Interfaces

Multi-touch Interfaces

In emergency response, gathering intelligence is still largely a manual process despite advances in mobile computing and multi-touch interaction. Our goal is to integrate multiple digital sources into a common computing platform that provides two-way communication, tracking, and mission status to personnel in the field and to the incident command hierarchy that supports them. Our research in human-computer interaction incorporates these capabilities through several different types of multi-touch displays. A single-robot operator control unit and a multi-robot command and control interface have been created. Users tap and drag commands for individual or multiple robots through a gesture set designed to maximize ease of learning; a trail of waypoints can mark specific areas of interest, or a specific path can be drawn for the robots to follow. Manual robot control is achieved with the DREAM (Dynamically Resizing Ergonomic and Multi-touch) Controller, which is virtually painted beneath the user's hands, changing its size and orientation according to our algorithm for fast hand detection, finger registration, and handedness registration.
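
As an illustration of the waypoint-trail commands, a minimal data structure might look like the sketch below; the types and field names are hypothetical, not the interface's actual message format:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Waypoint:
    x: float
    y: float

@dataclass
class TrailCommand:
    robot_ids: List[str]                          # one or many robots per command
    waypoints: List[Waypoint] = field(default_factory=list)

def on_drag(trail: TrailCommand, touch_x: float, touch_y: float) -> None:
    """Append a waypoint as the user's finger drags across the map display."""
    trail.waypoints.append(Waypoint(touch_x, touch_y))

cmd = TrailCommand(robot_ids=["robot_1", "robot_2"])
for x, y in [(1.0, 2.0), (1.5, 2.5), (2.0, 2.5)]:
    on_drag(cmd, x, y)
print(f"{len(cmd.waypoints)} waypoints for {cmd.robot_ids}")
```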

Selected Publications:

Mark Micire, Eric McCann, Munjal Desai, Katherine M. Tsui, Adam Norton, and Holly A. Yanco. Hand and Finger Registration for Multi-Touch Joysticks on Software-Based Operator Control Units. Proceedings of the IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, April 2011.

Mark Micire, Munjal Desai, Jill L. Drury, Eric McCann, Adam Norton, Katherine M. Tsui, and Holly A. Yanco. Design and Validation of Two-Handed Multi-Touch Tabletop Controllers for Robot Teleoperation. Proceedings of the International Conference on Intelligent User Interfaces, Palo Alto, CA, February 13-16, 2011.

ROS.NET

Development of ROS.NET started in 2011 as a way to avoid writing one-off TCP protocols to control ROS-based robots from Windows user interfaces. It has proven to be a critical component of many projects, including our rover, the RoverHawk, winner of the 2013 NASA RASC-AL Robo-Ops competition: a ROS.NET user interface running in Lowell, MA controlled the ROS-based rover at the Johnson Space Center Rock Yard in Houston, TX.

More information on ROS.NET can be found in its GitHub repository: https://github.com/uml-robotics/ROS.NET

RoverHawk interface running on ROS.NET