Welcome to my homepage.

I am interested in making robots more intelligent by enabling them to better understand what they see.


I am a PhD student in Robotics at Georgia Tech, advised by James Hays. I also work closely with Charlie Kemp.

My PhD thesis is on observing and predicting hand-object interaction during human grasping, with a focus on contact. I have also worked on robotic grasping, learning to navigate and localize agents in large environments, object detection under occlusion, and object pose estimation. I was previously advised by Henrik Christensen.

I have a Master's degree in Robotics from the University of Pennsylvania, where I had the pleasure of working with Dr. Kostas Daniilidis. Before that, I received a bachelor's degree in electronics and communication engineering from Nirma University, India.


Towards Markerless Grasp Capture CVPR '19 Workshop on CV for AR/VR

Preliminary results on a completely markerless grasp (hand pose + object pose) capture algorithm.

paper | website


ContactGrasp: Functional Multi-finger Grasp Synthesis from Contact IROS 2019 | Summer Internship 2018 (NVIDIA Research, Seattle)

Functional grasp synthesis for kinematically diverse end-effectors, using human demonstrations of grasping from ContactDB.

paper | website | bib


ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging CVPR 2019 (Oral, Best Paper Finalist) | Fall 2018

First-ever dataset of high-resolution contact maps recorded during human grasping.

paper | website | bib | poster | slides | data, code and models


Geometry-Aware Learning of Maps for Camera Localization CVPR 2018 (Spotlight) | Summer Internship 2017 (NVIDIA Research, Santa Clara)

Geometric constraints and semi-supervised learning for better image-based camera localization.

arXiv | website | bib | poster | slides | code and models


DeepNav: Learning to Navigate Large Cities CVPR 2017

Learning to navigate large cities by training convolutional neural networks to make navigation decisions from the current street-view image.

paper | arXiv | bib | poster | code and models


StuffNet: Using 'Stuff' to Improve Object Detection IEEE WACV 2017

Improving deep-learning object detection by looking at 'stuff' surrounding objects.

paper | arXiv | bib | code and models | slides | poster


Occlusion-Aware Object Localization, Segmentation and Pose Estimation BMVC 2015

Detection, segmentation, and 3D pose estimation of partially occluded objects.

arXiv | paper | bib


Master's Thesis Fall 2014

Detection and segmentation of partially occluded objects.

Learn more


3D Pose Estimation Summer 2013 / ICRA 2014

We made GRASPY, Penn's PR2 robot, detect and estimate the 6-DOF pose of household objects, all from a single 2D image.

Learn more


RoboCup Kid Size League Summer 2013

Our team, Team DARwIN, won the Humanoid Kid Size League world championship.

Learn more


Robotic Arm Control Using Kinect Summer 2011

We used a Kinect to segment spherical and cylindrical objects lying on a table and guided a robotic arm to their 3D positions.

Learn more