InteractVLM: 3D Interaction Reasoning from 2D Foundational Models

CVPR 2025

¹Max Planck Institute for Intelligent Systems, Tübingen, Germany, ²University of Amsterdam, the Netherlands, ³Inria, École normale supérieure, France

InteractVLM estimates 3D contact points on both human bodies and objects from single in-the-wild images, enabling accurate human-object joint reconstruction in 3D. We introduce a novel task, Semantic Human Contact, which goes beyond the traditional Binary Human Contact to infer object-specific contacts on bodies. By leveraging the rich visual knowledge of large Vision-Language Models, we address the limited availability of ground-truth 3D interaction data for training, resulting in better generalization to diverse real-world interactions.
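
To make the distinction between the two tasks concrete, the toy Python snippet below contrasts them. This is only an illustration of the idea, not the paper's data format; the vertex count matches the SMPL body mesh purely for concreteness, and the object labels are invented examples.

# Binary contact: one boolean per body vertex ("touching anything?").
# Semantic contact: each contacted vertex is tagged with *what* it touches.
num_vertices = 6890                              # SMPL body mesh resolution

binary_contact = [False] * num_vertices
binary_contact[3000] = True                      # some vertex is in contact

semantic_contact = [None] * num_vertices
semantic_contact[3000] = "guitar"                # e.g. fingertips on a guitar
semantic_contact[5500] = "chair"                 # e.g. thighs on a chair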



Joint Human-Object Reconstruction Video



Joint 3D reconstruction of human and object from a single image using InteractVLM's inferred contact.
Comparison of joint 3D reconstruction between InteractVLM and PHOSA.



Contact Estimation - Human and Object Video



Semantic human contact estimation from a single image, conditioned on the object label.
Object affordance prediction from a single in-the-wild image.




Abstract

We introduce InteractVLM, a novel method to estimate 3D contact points on human bodies and objects from single in-the-wild images, enabling accurate human-object joint reconstruction in 3D. This is challenging due to occlusions, depth ambiguities, and widely varying object shapes. Existing methods rely on 3D contact annotations collected via expensive motion-capture systems or tedious manual labeling, limiting scalability and generalization. To overcome this, InteractVLM harnesses the broad visual knowledge of large Vision-Language Models (VLMs), fine-tuned with limited 3D contact data. However, directly applying these models is non-trivial, as they reason only in 2D, while human-object contact is inherently 3D. Thus, we introduce a novel Render-Localize-Lift module that: (1) embeds 3D body and object surfaces in 2D space via multi-view rendering, (2) trains a novel multi-view localization model (MV-Loc) to infer contacts in 2D, and (3) lifts these to 3D. Additionally, we propose a new task called Semantic Human Contact estimation, where human contact predictions are conditioned explicitly on object semantics, enabling richer interaction modeling. InteractVLM outperforms existing work on contact estimation and also facilitates 3D reconstruction from an in-the-wild image.
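
As a rough illustration of the Render-Localize-Lift idea, the Python sketch below lifts per-view 2D contact masks to per-vertex 3D scores. It is not the paper's implementation: render_view and predict_contact_mask are hypothetical stand-ins for the multi-view renderer (assumed to also return a per-pixel map of the visible vertex index, which can be derived from a rasterizer's pixel-to-face output, e.g. in PyTorch3D) and for the 2D localization model.

import numpy as np

def render_localize_lift(vertices, cameras, render_view, predict_contact_mask):
    """Lift 2D contact predictions from multiple views to 3D vertex scores."""
    votes = np.zeros(len(vertices))   # accumulated contact evidence per vertex
    seen = np.zeros(len(vertices))    # how often each vertex was visible

    for cam in cameras:
        # (1) Render: embed the 3D surface in 2D, keeping pixel->vertex links.
        image, pix_to_vertex = render_view(vertices, cam)  # pix_to_vertex: HxW ints, -1 = background

        # (2) Localize: a 2D model highlights contact pixels in this view.
        mask = predict_contact_mask(image)                 # HxW floats in [0, 1]

        # (3) Lift: contact pixels vote for the vertices they project from.
        visible = pix_to_vertex >= 0
        np.add.at(votes, pix_to_vertex[visible], mask[visible])
        np.add.at(seen, pix_to_vertex[visible], 1.0)

    # Average votes over the views in which each vertex was actually visible.
    scores = np.where(seen > 0, votes / np.maximum(seen, 1.0), 0.0)
    return scores > 0.5   # boolean per-vertex contact labels

Averaging over only the views where a vertex is visible is one simple aggregation choice; any view-weighting scheme could be substituted in step (3).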



Summary Video



Method Overview

Given an image, $I$, and a text prompt, $T_{inp}$, our VLM, $\Psi$, produces contact tokens for humans and objects, <HCON> and <OCON>, which are projected ($\Gamma$) into feature embeddings, $E^{H}$ and $E^{O}$. These guide our "Multi-View Localization" (MV-Loc) model, which renders the 3D human and object geometry via cameras, $K$, into multi-view 2D renders and passes these to an encoder, $\Theta$, while decoders, $\Omega^H$ and $\Omega^O$, estimate and highlight 2D contacts in these renders.
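
To make the token-to-embedding path concrete, here is a minimal, hypothetical PyTorch sketch of the projection $\Gamma$: the hidden states of the <HCON> and <OCON> tokens are mapped to embeddings $E^{H}$ and $E^{O}$ that guide the two decoders. Module names and dimensions are illustrative assumptions, not the authors' code.

import torch
import torch.nn as nn

class ContactTokenProjector(nn.Module):
    """Projects <HCON>/<OCON> token hidden states into decoder guidance."""
    def __init__(self, vlm_dim=4096, feat_dim=256):
        super().__init__()
        self.gamma = nn.Linear(vlm_dim, feat_dim)  # the projection Gamma

    def forward(self, vlm_hidden, hcon_idx, ocon_idx):
        # Pull the hidden states of the two contact tokens from the VLM output
        # and project them into the feature space of the MV-Loc decoders.
        e_h = self.gamma(vlm_hidden[:, hcon_idx])  # E^H, guides human decoder
        e_o = self.gamma(vlm_hidden[:, ocon_idx])  # E^O, guides object decoder
        return e_h, e_o

# Usage with dummy tensors (batch of 1, sequence of 32 tokens):
proj = ContactTokenProjector()
hidden = torch.randn(1, 32, 4096)           # last-layer VLM hidden states
e_h, e_o = proj(hidden, hcon_idx=10, ocon_idx=11)
print(e_h.shape, e_o.shape)                 # torch.Size([1, 256]) each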


Acknowledgments & Disclosure

We thank Alpár Cseke for his assistance with evaluating joint human-object reconstruction. We also thank Tsvetelina Alexiadis and Taylor Obersat for the MTurk evaluation, Yao Feng, Peter Kulits, and Markos Diomataris for their valuable feedback, and Benjamin Pellkofer for IT support. SKD is supported by the International Max Planck Research School for Intelligent Systems (IMPRS-IS). The UvA part of the team is supported by an ERC Starting Grant (STRIPES, 101165317, PI: D. Tzionas).

DT has received a research gift fund from Google. While MJB is a co-founder and Chief Scientist at Meshcapade, his research in this project was performed solely at, and funded solely by, the Max Planck Society.

Contact

For technical questions, please contact sai.dwivedi@tue.mpg.de
For commercial licensing, please contact ps-licensing@tue.mpg.de

BibTeX

@inproceedings{dwivedi_interactvlm_2025,
  title     = {{InteractVLM}: {3D} Interaction Reasoning from {2D} Foundational Models},
  author    = {Dwivedi, Sai Kumar and Antić, Dimitrije and Tripathi, Shashank and Taheri, Omid and Schmid, Cordelia and Black, Michael J. and Tzionas, Dimitrios},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2025},
}