Planning Collision-Free Robot Motions in a Human–Robot Shared Workspace via Mixed Reality and Sensor-Fusion Skeleton Tracking

Farsoni S.; Rizzi J.; Ufondu G. N.; Bonfe' M.
2022

Abstract

The paper describes a method for planning collision-free motions of an industrial manipulator that shares its workspace with human operators in a human–robot collaborative application with strict safety requirements. The proposed workflow exploits mixed reality to insert real entities into a virtual scene, in which the robot control command is computed and validated by simulating robot motions without risk to the human. The proposed motion planner relies on a sensor-fusion algorithm that improves the 3D perception of the humans inside the robot workspace. The algorithm merges the pose estimates of the human bones, reconstructed by a pointcloud-based skeleton-tracking algorithm, with orientation data acquired from wearable inertial measurement units (IMUs) assumed to be rigidly attached to the corresponding body segments. It outputs a final reconstruction of the position and orientation of the human bones, which is used to include the human in the virtual simulation of the robotic workcell. A dynamic motion-planning algorithm then runs within this mixed-reality environment, computing a collision-free joint velocity command for the real robot.
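The core of the method is the per-bone fusion of a pointcloud-based position estimate with an IMU orientation estimate. As a rough illustration of this kind of fusion (not the paper's actual algorithm), the following Python sketch keeps the tracker's bone position and blends its noisier orientation toward the IMU reading via quaternion SLERP; the function names, the example data, and the 0.98 blending weight are assumptions introduced here.

```python
import numpy as np

def quat_slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                       # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                    # nearly parallel: linear blend is safe
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

def fuse_bone(p_skel, q_skel, q_imu, imu_weight=0.98):
    """Fuse one bone: keep the tracker's position estimate and blend its
    noisy orientation toward the IMU reading (assumed in the same frame)."""
    return p_skel, quat_slerp(q_skel, q_imu, imu_weight)

# Hypothetical single frame for one forearm bone.
p, q = fuse_bone(
    p_skel=np.array([0.42, -0.10, 1.05]),         # metres, camera frame
    q_skel=np.array([0.0, 0.0, 0.3827, 0.9239]),  # noisy tracker quaternion
    q_imu=np.array([0.0, 0.0, 0.4226, 0.9063]),   # wearable IMU quaternion
)
print(p, q)
```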
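The final step, a dynamic planner that turns the tracked human into a collision-free velocity command, can be illustrated with a classical artificial-potential-field scheme. This is a generic stand-in, not the paper's planner: it computes a Cartesian velocity (the paper commands joint velocities, which would additionally require mapping through the manipulator Jacobian), and all gains and thresholds below are assumptions.

```python
import numpy as np

def velocity_command(ee_pos, goal_pos, human_pts,
                     k_att=1.0, k_rep=0.05, influence=0.5, v_max=0.25):
    """Attractive pull toward the goal plus a repulsive push away from
    the closest tracked human point, saturated to a safe speed."""
    v = k_att * (goal_pos - ee_pos)          # attractive term
    away = ee_pos - human_pts                # vectors pointing away from human
    dists = np.linalg.norm(away, axis=1)
    i = np.argmin(dists)
    d = dists[i]
    if d < influence:                        # repulsive term (Khatib-style)
        v += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (away[i] / d)
    speed = np.linalg.norm(v)
    if speed > v_max:                        # saturate for safety
        v *= v_max / speed
    return v

# Hypothetical frame: end-effector, goal, and three tracked human points.
v = velocity_command(
    ee_pos=np.array([0.5, 0.0, 0.8]),
    goal_pos=np.array([0.9, 0.2, 0.6]),
    human_pts=np.array([[0.6, 0.1, 0.8],
                        [0.7, 0.3, 1.2],
                        [0.4, -0.2, 1.0]]),
)
print(v)
```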
Files in this record:
File: 08_MotionPlanSkelTrack_EL.pdf (open access)
Description: Publisher's full text
Type: Full text (publisher's version)
License: Creative Commons
Size: 11.53 MB (Adobe PDF)

Documents in SFERA are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11392/2494493
Citations
  • PMC: not available
  • Scopus: 5
  • Web of Science: 3