Vision Log

This is a log of the vision team's progress.

12 November 2011

We got ROS working across multiple machines. This requires one master machine and any number of others.
On the master, set ROS_IP to its IP address (since other computers probably can't resolve its name on the network) and run roscore:

export ROS_IP=<ip address>
roscore

roscore will print out ROS_MASTER_URI, which needs to be set on all other machines. It will probably look something like:
export ROS_MASTER_URI=http://<ip address>:11311/

If the other machines publish anything, they will probably need ROS_IP set to their own IP addresses as well. In your ros_workspace/setup.bash, you can set it automatically from the computer's current IP address every time you open a new shell:
export ROS_IP=`ifconfig wlan0 | grep 'inet addr:' | sed -r 's/^.*inet addr:(\S*).*$/\1/'`
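
To verify the setup, publish on one machine and echo on another (a quick sketch using a hypothetical /chatter topic):

On the master:
rostopic pub -r 1 /chatter std_msgs/String "data: hello"

On another machine:
rostopic echo /chatter

If echo prints nothing, double-check ROS_IP on both ends; topic data flows directly between the machines, not through the master.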

For more info see:
http://www.ros.org/wiki/ROS/Tutorials/MultipleMachines
http://www.ros.org/wiki/ROS/NetworkSetup

21 October 2011

Our immediate goal is to log all of the data RGBDSLAM requires, so that we can offload 3D map creation to a faster computer. We'll need to log the following ROS topics:

/tf
/camera/rgb/image_mono
/camera/rgb/points
/camera/depth/image

To record these, use:
rosbag record -O rgbdslam.bag /tf /camera/rgb/image_mono /camera/rgb/points /camera/depth/image
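
To process the bag later on the faster machine, play it back with simulated time so that /tf lookups line up with the recorded timestamps (a sketch; use_sim_time must be set before RGBDSLAM is launched):

rosparam set use_sim_time true
rosbag play --clock rgbdslam.bag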

So far we've gotten ROS up and running, and we've downloaded RGBDSLAM semi-successfully (there are still issues on some of our computers). We've tested it with the store_pointclouds variable turned off, which reduces the memory it needs. However, the program still can't handle transforms well at any reasonable speed. As a first step, we're considering recording the data into a bag file and running RGBDSLAM from that, since post-processing is considerably easier than real-time. Possible next steps are running this on the PandaBoard and, eventually, in real time. -Nick

We were able to record a bag file and run the point cloud through RGBDSLAM by increasing the record buffer size to 1024. We tried 256 (the default), 512, and 1024; next we'll test values between 512 and 1024. If that fails and we still can't run it, another option is to drop alternating messages before they go into the bag, as sketched below. -Nick
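
Assuming the buffer above refers to rosbag record's -b/--buffsize option (in MB, default 256), the working command would look like:

rosbag record -b 1024 -O rgbdslam.bag /tf /camera/rgb/image_mono /camera/rgb/points /camera/depth/image

For the fallback, topic_tools can drop messages without a custom node. For example, to drop every other point cloud (drop republishes to <intopic>_drop by default) and record the thinned topic instead:

rosrun topic_tools drop /camera/rgb/points 1 2
rosbag record -O rgbdslam.bag /tf /camera/rgb/image_mono /camera/rgb/points_drop /camera/depth/image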

Target location

From http://www.ros.org/wiki/kinect_calibration/technical:

     focal length (pixels)  FOV (degrees)
IR   580                    57.8
RGB  525                    62.7

Assuming an origin in the middle of the image, with known focal length f, distance d, and image coordinates (x, y), the position is approximately (x*d/f, y*d/f, d). With f and (x, y) in pixels, the resulting units match d.
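
For example (a rough check using the RGB camera's f = 525): a target at image coordinates (100, -50) pixels from the center, at depth d = 2 m, sits at approximately (100*2/525, -50*2/525, 2) = (0.38, -0.19, 2) m.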