I am happy to share my first test with AR2, ROS, Kinect, and TensorFlow. I use a pretrained Faster R-CNN (https://github.com/tensorflow/models/tree/master/research/object_detection) together with the depth map from the Kinect to find the XYZ coordinates of an apple. It works, but I suspect a simple OpenCV pipeline would be more efficient than a deep-learning vision algorithm. For the next step I will finish the ROS integration (ros_control), try an OpenCV pipeline, and test the DOPE algorithm from NVIDIA. It runs in real time with a single RGB camera and returns a 6D pose estimate of an object (https://github.com/NVlabs/Deep_Object_Pose)!!! Thanks for making this project available to the open source community! AR2 is really awesome!!
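
In case it helps anyone trying something similar, here is a minimal sketch of the detection-box-to-XYZ step: take the bounding box from the detector, read the median depth inside it from the Kinect depth image, and back-project the box center through the pinhole camera model. The intrinsics below (fx, fy, cx, cy) are typical default Kinect v1 values, not calibrated ones — replace them with your own calibration, and note the function names are just my own illustration, not from any of the linked repos.

```python
import numpy as np

# Assumed (uncalibrated) Kinect v1 intrinsics -- swap in your own calibration.
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def pixel_to_xyz(u, v, depth_m, fx=FX, fy=FY, cx=CX, cy=CY):
    """Back-project a pixel (u, v) with depth in meters to camera-frame XYZ."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

def box_center_xyz(box, depth_image, fx=FX, fy=FY, cx=CX, cy=CY):
    """Given a detector box (ymin, xmin, ymax, xmax) in pixels and a depth
    image in meters, return the XYZ of the box center in the camera frame.
    Uses the median depth in the box to be robust to edge pixels."""
    ymin, xmin, ymax, xmax = [int(round(c)) for c in box]
    patch = depth_image[ymin:ymax, xmin:xmax]
    valid = patch[patch > 0]  # Kinect reports 0 where depth is unknown
    if valid.size == 0:
        return None  # no usable depth inside the box
    z = float(np.median(valid))
    u = (xmin + xmax) / 2.0
    v = (ymin + ymax) / 2.0
    return pixel_to_xyz(u, v, z, fx, fy, cx, cy)
```

The XYZ this returns is in the camera's optical frame; in ROS you would then transform it into the arm's base frame with tf before sending a goal to the AR2.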