Here we show functional grasps generated by our proposed grasp synthesis algorithm for three intentions (Tool-use, Robot-to-Human Handover, and Pickup) across a range of
object categories, using five kinematically diverse robot hands.
Due to space limitations, only 28 categories are shown here. Note that no further refinement in the simulator is applied
to these grasps, so some failure cases are also shown. (NOTE: We will publish the dataset in early August 2024.
We are currently cleaning up the code, visualisation, and related comments.)
Overview of the framework: DexFG
Functional Grasp Synthesis
First, given a human grasp demonstration, we obtain the affordance map on the object surface. We then
transfer this affordance map to similar objects of the same category through dense correspondence. Finally,
we optimize the articulated hand model to minimize a cost function defined on the diffused affordance (contact)
map.
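Concretely, the transfer step can be pictured as a nearest-neighbour lookup in a learned descriptor space. The snippet below is a minimal sketch of that idea, not the released implementation: transfer_contact_map and embed_fn are hypothetical names, and embed_fn stands in for the pretrained category-level correspondence network.

# Minimal sketch (assumptions noted inline): propagate a contact/affordance
# map from a source object to a target object of the same category via
# dense correspondence in a learned descriptor space.

import numpy as np

def transfer_contact_map(src_points, src_contact, tgt_points, embed_fn):
    """Propagate per-point contact values from a source to a target object.

    src_points:  (N, 3) source object surface points
    src_contact: (N,)   contact/affordance value per source point
    tgt_points:  (M, 3) target object surface points
    embed_fn:    callable mapping (K, 3) points -> (K, D) descriptors
    """
    src_feat = embed_fn(src_points)            # (N, D)
    tgt_feat = embed_fn(tgt_points)            # (M, D)

    # For each target point, find its nearest source point in descriptor space.
    # (Brute-force for clarity; use a KD-tree or FAISS for large point clouds.)
    d2 = ((tgt_feat[:, None, :] - src_feat[None, :, :]) ** 2).sum(-1)  # (M, N)
    nn_idx = d2.argmin(axis=1)                                         # (M,)

    # Each target point inherits the contact value of its corresponding source point.
    return src_contact[nn_idx]                                         # (M,)

# Toy usage with a dummy identity embedding (real use: a category-level
# correspondence network trained on shapes of the same category).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.normal(size=(512, 3))
    tgt = src + 0.01 * rng.normal(size=(512, 3))   # a slightly deformed copy
    contact = (src[:, 2] > 0).astype(np.float32)   # fake contact on the top half
    transferred = transfer_contact_map(src, contact, tgt, embed_fn=lambda p: p)
    print(transferred.shape)                       # (512,)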
Pipeline: Human Grasp Demonstration → Knuckle-level Contact Map → Contact Diffusion → Shadow Hand Grasp
The six-step synthesis algorithm proceeds in the following order:
1. Establish knuckle-level hand-object contact to associate fine-grained contact between hand segments and
object surfaces, using both the object and hand meshes.
2. Define auxiliary anchor points on the hand to align precise finger contacts with target objects.
3. Obtain the initial grasp configuration for grasp optimization through human-to-robot hand grasp
mapping.
4. Obtain dense shape correspondences of category-level objects using a pretrained neural network, enabling
contact diffusion between objects of the same category.
5. Apply a gradient descent-based algorithm to optimize the initial grasp configuration based on
fine-grained hand-object contact objective functions (a minimal sketch follows this list).
6. Refine the grasps in the simulator to avoid both inter-penetration and self-penetration.
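To illustrate step 5, the snippet below is a rough sketch of gradient-descent grasp optimization under a contact objective; it is not the paper's exact cost function. optimize_grasp and forward_kinematics are hypothetical placeholders, and the penetration terms handled in steps 5-6 are omitted for brevity.

# Minimal sketch (assumptions noted inline) of gradient-based grasp
# optimization. forward_kinematics stands in for a differentiable hand model
# mapping joint angles to the 3D positions of the auxiliary anchor points.

import torch

def optimize_grasp(q_init, target_points, forward_kinematics,
                   steps=200, lr=1e-2):
    """Refine an initial joint configuration so hand anchors reach their contacts.

    q_init:             (J,) initial joint angles from human-to-robot mapping
    target_points:      (A, 3) desired contact locations on the object, one per anchor
    forward_kinematics: callable, (J,) joint angles -> (A, 3) anchor positions
    """
    q = q_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([q], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        anchors = forward_kinematics(q)                        # (A, 3)
        # Contact objective: pull each anchor toward its assigned contact point.
        # (Penetration and self-collision penalties are omitted in this sketch.)
        contact_loss = ((anchors - target_points) ** 2).sum(-1).mean()
        contact_loss.backward()
        opt.step()
    return q.detach()

# Toy usage with a fake 2-link planar "hand" so the script runs stand-alone.
if __name__ == "__main__":
    def fk(q):  # hypothetical differentiable forward kinematics
        x = torch.cos(q[0]) + torch.cos(q[0] + q[1])
        y = torch.sin(q[0]) + torch.sin(q[0] + q[1])
        return torch.stack([x, y, torch.zeros(())]).unsqueeze(0)  # (1, 3)

    q0 = torch.tensor([0.1, 0.1])
    target = torch.tensor([[1.0, 1.0, 0.0]])
    q_star = optimize_grasp(q0, target, fk)
    print(q_star)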
Grasp Generation
Object Reconstruction
Intention-conditioned Grasp Sampler
Data Augmentation
Iterative Grasp RefineNet
Results
Grasp Synthesis for Kinematically Diverse Robot Hands
Comparison with Baselines
Real Robot Grasp Execution
BibTeX
If you use our object dataset in your research, please cite the following papers:
@article{chang2015shapenet,
  title={{ShapeNet}: An Information-Rich {3D} Model Repository},
  author={Chang, Angel X and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and
          Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and
          Song, Shuran and Su, Hao and others},
  journal={arXiv preprint arXiv:1512.03012},
  year={2015}
}
@inproceedings{ycb,
  title={The {YCB} Object and Model Set: Towards Common Benchmarks for Manipulation Research},
  author={Calli, Berk and Walsman, Aaron and Singh, Arjun and Srinivasa, Siddhartha and
          Abbeel, Pieter and Dollar, Aaron M},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2015}
}
@inproceedings{yang2022oakink,
title={OakInk: A Large-scale Knowledge Repository for Understanding Hand-Object Interaction},
author={Yang, Lixin and Li, Kailin and Zhan, Xinyu and Wu, Fei and Xu, Anran and
Liu, Liu and Lu, Cewu},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20953--20962},
year={2022}
}