VisGraB: A Benchmark for Vision-Based Grasping
We present a database and a software tool, VisGraB, for benchmarking methods for vision-based grasping of unknown objects. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different configurations are included in the database. The user provides a method for grasp generation based on the real visual input; the grasps are then planned, executed, and evaluated by the provided grasp simulator, where several grasp-quality measures are used for evaluation. This setup has the advantage that a large number of grasps can be executed and evaluated while dealing with dynamics and with the noise and uncertainty present in real-world images. VisGraB enables a fair comparison among different grasping methods, and the user does not need to deal with robot hardware, focusing on the vision methods instead. As a baseline, benchmark results of our grasp strategy are included.
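The evaluation protocol described in the abstract (user-supplied grasp generation from real stereo images, followed by planning, execution, and scoring in the provided simulator) can be sketched roughly as follows. All names here (`Grasp`, `generate_grasps`, `evaluate_in_simulator`) are hypothetical illustrations of the workflow, not the actual VisGraB API:

```python
# Hypothetical sketch of the VisGraB evaluation loop; none of these
# names come from the actual VisGraB software.
from dataclasses import dataclass
from typing import List


@dataclass
class Grasp:
    """A grasp hypothesis: gripper pose and initial opening."""
    position: tuple      # (x, y, z) in the scene frame
    orientation: tuple   # e.g. a quaternion (qx, qy, qz, qw)
    opening: float       # initial gripper opening in metres


def generate_grasps(left_img, right_img) -> List[Grasp]:
    """The user-supplied part: propose grasps from a real stereo pair."""
    raise NotImplementedError("plug in your vision-based grasp generator")


def evaluate_in_simulator(scene_id: str, grasps: List[Grasp]) -> List[dict]:
    """Stand-in for the provided simulator: plan and execute each grasp,
    then report grasp-quality measures per attempt."""
    results = []
    for g in grasps:
        # A real simulator would run dynamics here; this only
        # illustrates the shape of the per-grasp result record.
        results.append({"scene": scene_id, "grasp": g,
                        "success": False, "quality": 0.0})
    return results
```

In this sketch, only `generate_grasps` needs to be implemented by the user; the simulator side is fixed by the benchmark, which is what makes results comparable across methods.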
Main Authors: | Kootstra Gert, Popović Mila, Jørgensen Jimmy Alison, Kragic Danica, Petersen Henrik Gordon, Krüger Norbert |
Format: | Article |
Language: | English |
Published: | De Gruyter, 2012-06-01 |
Series: | Paladyn |
Subjects: | grasping of unknown objects; vision-based grasping; benchmark |
Online Access: | https://doi.org/10.2478/s13230-012-0020-5 |
_version_ | 1827613597176430592 |
author | Kootstra Gert; Popović Mila; Jørgensen Jimmy Alison; Kragic Danica; Petersen Henrik Gordon; Krüger Norbert |
author_facet | Kootstra Gert; Popović Mila; Jørgensen Jimmy Alison; Kragic Danica; Petersen Henrik Gordon; Krüger Norbert |
author_sort | Kootstra Gert |
collection | DOAJ |
description | We present a database and a software tool, VisGraB, for benchmarking methods for vision-based grasping of unknown objects. The benchmark is a combined real-world and simulated experimental setup. Stereo images of real scenes containing several objects in different configurations are included in the database. The user needs to provide a method for grasp generation based on the real visual input. The grasps are then planned, executed, and evaluated by the provided grasp simulator, where several grasp-quality measures are used for evaluation. This setup has the advantage that a large number of grasps can be executed and evaluated while dealing with dynamics and with the noise and uncertainty present in real-world images. VisGraB enables a fair comparison among different grasping methods. The user furthermore does not need to deal with robot hardware, focusing on the vision methods instead. As a baseline, benchmark results of our grasp strategy are included. |
first_indexed | 2024-03-09T08:42:03Z |
format | Article |
id | doaj.art-94e587abecfc441cb1532b29c737edf4 |
institution | Directory Open Access Journal |
issn | 2081-4836 |
language | English |
last_indexed | 2024-03-09T08:42:03Z |
publishDate | 2012-06-01 |
publisher | De Gruyter |
record_format | Article |
series | Paladyn |
spelling | doaj.art-94e587abecfc441cb1532b29c737edf4 | 2023-12-02T16:42:20Z | eng | De Gruyter | Paladyn | 2081-4836 | 2012-06-01 | Vol. 3, No. 2, pp. 54-62 | 10.2478/s13230-012-0020-5 | VisGraB: A Benchmark for Vision-Based Grasping | Kootstra Gert: Computer Vision and Active Perception Lab, CSC, Royal Institute of Technology (KTH), Stockholm, Sweden; Popović Mila: Cognitive Vision Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark; Jørgensen Jimmy Alison: Robotics Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark; Kragic Danica: Computer Vision and Active Perception Lab, CSC, Royal Institute of Technology (KTH), Stockholm, Sweden; Petersen Henrik Gordon: Robotics Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark; Krüger Norbert: Cognitive Vision Lab, The Mærsk Mc-Kinney Møller Institute, University of Southern Denmark, Campusvej 55, DK-5230 Odense, Denmark | https://doi.org/10.2478/s13230-012-0020-5 | grasping of unknown objects; vision-based grasping; benchmark |
spellingShingle | Kootstra Gert; Popović Mila; Jørgensen Jimmy Alison; Kragic Danica; Petersen Henrik Gordon; Krüger Norbert | VisGraB: A Benchmark for Vision-Based Grasping | Paladyn | grasping of unknown objects; vision-based grasping; benchmark |
title | VisGraB: A Benchmark for Vision-Based Grasping |
title_full | VisGraB: A Benchmark for Vision-Based Grasping |
title_fullStr | VisGraB: A Benchmark for Vision-Based Grasping |
title_full_unstemmed | VisGraB: A Benchmark for Vision-Based Grasping |
title_short | VisGraB: A Benchmark for Vision-Based Grasping |
title_sort | visgrab a benchmark for vision based grasping |
topic | grasping of unknown objects; vision-based grasping; benchmark |
url | https://doi.org/10.2478/s13230-012-0020-5 |
work_keys_str_mv | AT kootstragert visgrababenchmarkforvisionbasedgrasping AT popovicmila visgrababenchmarkforvisionbasedgrasping AT jørgensenjimmyalison visgrababenchmarkforvisionbasedgrasping AT kragicdanica visgrababenchmarkforvisionbasedgrasping AT petersenhenrikgordon visgrababenchmarkforvisionbasedgrasping AT krugernorbert visgrababenchmarkforvisionbasedgrasping |