Submit your article with the keyword “Benchmarking”


Fabio Bonsignorio

The Covid-19 pandemic, like earlier crises such as the Fukushima accident in 2011, has shown the crucial importance of workable robotics solutions in disaster mitigation. Unfortunately, it has also exposed some widely known pitfalls of robotics research. While a flurry of robotic systems and appliances have been proposed to substitute for humans in dangerous tasks and environments, their actual deployment and contribution have been limited, largely because there are no shared operational and quantitative methods to assess whether a given robot will be able to perform a given set of tasks in a given environment.

So far this critical scientific, technological and practical gap in Robotics and AI research has not been satisfactorily addressed.

In robotics research, the replicability and reproducibility of results, and their objective evaluation and comparison, are seldom put into practice. Since September 2017, the IEEE Robotics & Automation Magazine has been soliciting R-Articles (reproducible articles), and a few are already in the pipeline. However, reproducibility is still in its infancy in Robotics and AI.

This prevents serious progress on benchmarking and on the ex-ante estimation of the performance and safety of intelligent autonomous systems, as the underlying scientific ground is unstable. A lot of work still has to be done.

This workshop aims to gather researchers active in academia, industry, and emergency management to share the ideas developed so far and to discuss the challenges that still prevent more effective applications of intelligent robots for disaster mitigation.

Topics of interest
  • Discussion of successful applications in Disaster Robotics
  • Discussion of unsuccessful applications in Disaster Robotics
  • Metrics for robot effectiveness and efficiency in Disaster Robotics
  • Replication of experiments in Disaster Robotics
  • Metrics of dexterity, adaptivity, flexibility, robustness
  • Benchmarking autonomy and robustness to changes in the environment/tasks
  • Forecasting autonomy and robustness to changes in the environment/tasks
  • Middleware
  • Environments to support reproducibility, performance evaluation and forecasting in Disaster Robotics
  • Examples of good practice
  • Evaluation of experimental intelligent robotics work in Disaster Robotics
  • Machine Learning and Deep Learning applications for the estimation and forecasting of robot performance

Note: To submit to this special session, use the keyword “Benchmarking”.