driver_common 0.2.1 has been released. This is a patch release to fix a bug in dynamic_reconfigure introduced by changes in method names in 0.2.0.
Download
camera_drivers 0.3.0 has been released.
Changes
Download
We have released several more stacks into our stable release process: driver_common 0.2.0, imu_drivers 0.2.0, camera_drivers 0.2.0, and laser_drivers 0.2.0. We have also released image_pipeline 0.1.1, which will soon be in stable release as well. These stacks contain drivers for publicly available sensors, such as Hokuyo and SICK laser range finders, FireWire cameras, and the MicroStrain 3DM-GX2 IMU.
With these releases, all driver-related components for our Milestone 3 effort are now stable. This means that feature development for these components is largely concluded for this release cycle and that every effort will be made to ensure that future modifications to the public APIs in these stacks will be done in a backwards-compatible manner.
Download
crossposted from willowgarage.com
Peter Pastor, a PhD student at USC, spent the past three months developing software that allows the PR2 to learn new motor skills from human demonstration. In particular, the robot learned how to grasp, pour, and place beverage containers after just a single demonstration. Peter focused on tasks like turning a door handle or grasping a cup -- tasks that personal robots like PR2 will perform over and over again. Instead of requiring new trajectory planning each time a common task is encountered, the presented approach enables the robot to build up a library of movements that can be used to execute these common goals. For this library to be useful, learned movements must be generalizable to new goal poses. In real life, the robot will never face the exact same situation twice. Therefore, the learned movements must be encoded in such a way that they can be adapted to different start and goal positions.
Peter used Dynamic Movement Primitives (DMPs), which allow the robot to encode movement plans. The parameters of these DMPs can be learned efficiently from a single demonstration, allowing a user to teach the PR2 new movements within seconds. The presented imitation learning setup thus allows a user to teach discrete movements, such as grasping, placing, and releasing, and then apply those motions to manipulate several objects on a table. This obviates the need to plan a new trajectory every time a motion is reused. Furthermore, the DMPs allow the robot to complete its task even when the goal is changed on the fly.
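For context, here is a rough sketch of the DMP equations in the style of the ICRA 2009 paper linked below (the exact formulation used on the PR2 may differ in details):

\tau \dot{v} = K(g - x) - Dv - K(g - x_0)\,s + K\,f(s)
\tau \dot{x} = v
\tau \dot{s} = -\alpha s
f(s) = \frac{\sum_i \psi_i(s)\, w_i}{\sum_i \psi_i(s)}\, s, \qquad \psi_i(s) = \exp\!\big(-h_i (s - c_i)^2\big)

Here x and v are position and velocity, g is the goal, x_0 the start, and s a phase variable that decays from 1 to 0 so the nonlinear forcing term f(s) vanishes as the movement converges on the goal. Learning from a single demonstration then reduces to fitting the basis-function weights w_i (e.g., by regression) so that f reproduces the demonstrated trajectory; reusing the movement with a new start or goal requires only changing x_0 or g, not replanning.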
You can find out more about the open source code for Peter's work here, and check out his presentation slides below (download PDF). For more about Peter's research with DMPs and learning from demonstration, see "Learning and Generalization of Motor Skills by Learning from Demonstration", ICRA 2009.
We have released several more stacks into our stable release process: pr2_common 0.2.0, pr2_gui 0.2.0, pr2_ethercat_drivers 0.2.0, pr2_power_drivers 0.2.0, and pr2_mechanism 0.5.0. These releases mainly affect users of the PR2 robot, including the PR2 simulator. Each of these stacks now has a stable API, which means that every effort will be made to provide backwards compatibility for any changes.
Download
navigation 0.6.3 has been released. This release contains the standard bug fixes and patches, and also adds some features to the costmap_2d package that begin to make it more compatible with SLAM systems. There is still a bit of work and testing to be done to allow easy integration of the navigation stack with a SLAM system, but look for the next few releases of the navigation stack to add additional capabilities, culminating with the 0.7.0 release. I'm planning on getting the 0.7.0 release out sometime in mid-to-late January. In the meantime, please treat the additions to the costmap_2d package as experimental, as they are fairly untested.
Changes
fake_localization
robot_pose_ekf
costmap_2d
navfn
Download
Update: Documentation is available on ros.org at http://www.ros.org/wiki/alufr-ros-pkg
I'm happy to announce v0.1, the first proper release of Freiburg's Nao stack, located at:
http://code.google.com/p/
You can check out the trunk (svn) from:
http://alufr-ros-pkg.
or download the stack package from:
http://code.google.com/p/
Changes are:
Some basic instructions are available at http://code.google.com/p/
Best regards, Armin
image_common 0.6.1 has been released. This is a patch release to fix compilation under gcc 4.4.
Download
laser_pipeline 0.6.0 has been released. There are only a few small changes in this release.
Changes
laser_geometry
laser_assembler
Download
simulator_gazebo 0.6.4 has been released. This is a patch release fixing a camera plugin bug where distortion parameters (k1, k2, k3, t1, t2) were not passed from the Gazebo URDF extension XML to the subsequently published CameraInfo message (the D matrix remained all zeros).
Download
image_common 0.6.0 has been released.
There is one deprecation: creating an image_transport::SubscriberFilter now requires an image_transport::ImageTransport argument in place of a ros::NodeHandle. The constructor and subscribe() overloads taking a ros::NodeHandle will be removed in a later release.
image_transport::Subscriber now attempts to detect a common user error, passing in a transport-specific topic name (e.g. "/camera/image/compressed") in place of the base topic ("/camera/image"), and prints a useful warning.
Other changes are mostly bugfixes and making image_transport's publisher/subscriber classes more closely mimic the behavior of ros::Publisher and ros::Subscriber in edge cases.
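A minimal sketch of the new SubscriberFilter construction style (the topic name and queue size here are illustrative, not from the release notes):

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "subscriber_filter_example");
  ros::NodeHandle nh;

  // New style: wrap the NodeHandle in an ImageTransport first...
  image_transport::ImageTransport it(nh);

  // ...and pass the ImageTransport (not the NodeHandle) to the filter.
  // Note the base topic ("camera/image"), not a transport-specific one
  // such as "camera/image/compressed".
  image_transport::SubscriberFilter sub(it, "camera/image", 1);

  // In real code, sub would typically be chained into a message_filters
  // synchronizer; a bare spin suffices for this sketch.
  ros::spin();
  return 0;
}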
Changes
image_transport:
camera_calibration_parsers:
Download
crossposted from willowgarage.com
Ethan Dreyfuss, who recently received a master's degree from Stanford University, is continuing his work here on autonomous person-following and dataset collection and annotation. The former project provides a useful building block for a wide variety of tasks. Consider a robot that helps you carry groceries. This robot is vastly more useful if it can carry your bags to the house without requiring teleoperation; the robot can simply track you and follow behind. At a high level, person-following comprises two principal tasks: person tracking and navigation.
The approach developed by Ethan and Caroline Pantofaru fuses a face detector with two weak person trackers: one for legs, and one for 3D blobs at person height. None of these approaches is individually effective enough to provide robust tracking, but their strengths are complementary. The face detector is effective when the person is close to, and directly facing, the robot. While the leg tracker provides high accuracy when multiple people are present, it is often confused by non-human obstacles and therefore cannot work reliably from afar. Conversely, the height-based blob tracker can track effectively from further away, yet it is easily confused by groups of people. By combining techniques, Ethan and Caroline were able to develop a more robust person-tracking tool.
Once the robot can track a designated person, the information is passed on to the navigation stack. This same navigation software was used to complete Milestone 2, with some improvements made to help deal more quickly and robustly with dynamically moving obstacles such as people.
In addition to the person-following project, Ethan is contributing to the collection and labeling of a large dataset of people in an indoor office environment. One of the major drivers of computer vision research is the availability of high-quality labeled data. The bulk of existing person datasets exclude indoor environments, and instead focus on outdoor pedestrians. Indoor environments present numerous challenges for person detection, including poor lighting and environmental clutter. By automating, as much as possible, the process of both collecting (using the robot) and labeling (using Amazon's Mechanical Turk and Alex Sorokin's CV Web Annotation Toolkit), Ethan's team will be able to provide a large, compelling dataset to encourage other researchers to tackle these challenging problems.
Ethan also picked up a number of side projects, including rapid neighborhood computation on point clouds and implementing a package that uses the open-source video codec Theora to allow low-bandwidth video streaming within ROS.
geometry 0.4.5 has been released. This is a patch release. The changes are listed below.
Changes
tf
angles
kdl
Download
simulator_gazebo 0.6.3 has been released. This release adds Bayer image generation in simulation for modeling the wide stereo camera pair on the PR2.
Download
pr2_simulator 0.1.1 has been released and contains a minor patch to start up a set of default controllers when starting the PR2 in simulation. This is going to be the default behavior for the actual PR2 as well. Please see the ChangeList for more details.
Download
pr2_simulator 0.1 has been released. This initial release represents a transition into a stable release cycle. Future updates to this stack will be made backwards compatible when possible. The pr2_simulator stack contains the necessary components for working with a simulated PR2.
Download
simulator_gazebo 0.6.2 has been released. This release contains a parallel-make bug fix and (re)enables the erratic_gazebo 2dnav-stack demo.
Change List
Download
crossposted from willowgarage.com
Along with our Texas Robot experimentation, we continue our endeavors to improve the usability of our software and hardware components. By focusing outwards on the ROS and PR2 communities, we've made significant progress towards improving the user experience, and we continue to iterate on these changes with the generous help of numerous ROS community members. We've been running software and hardware component tutorials through rigorous user testing and, along the way, discovering opportunities to develop new teaching tools.
In an effort to simplify ROS adoption for new users, we developed turtlesim, a LOGO-inspired tool that provides a hands-on approach to learning ROS basics. This tool functions as an entry-level tutorial that takes in velocity commands and "drives" a turtle according to the input. Turtlesim offers a simple simulator that allows new users to more readily visualize their commands and work with a simulated "robot." Once comfortable with some of the more basic ROS commands, users can try simulators, like Stage and Gazebo, for more advanced experimentation and tutorials.
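As a minimal sketch of what "driving" the turtle looks like from code (assuming the Twist-based interface on turtle1/cmd_vel; early turtlesim releases instead used turtle1/command_velocity with a turtlesim/Velocity message):

#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "turtle_driver");
  ros::NodeHandle nh;

  // Publish velocity commands that turtlesim consumes.
  ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("turtle1/cmd_vel", 1);

  ros::Rate rate(10);  // send commands at 10 Hz
  while (ros::ok())
  {
    geometry_msgs::Twist cmd;
    cmd.linear.x = 1.0;   // forward speed
    cmd.angular.z = 0.5;  // turn rate; together these trace a circle
    pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}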
These tutorials, along with many others, can be found at ros.org. There are currently 19 tutorials available for the core ROS system, and 186 tutorials (and counting) in total, covering much of the functionality available in ROS. To learn how to document a package, check here, and to learn how to write a tutorial for a package, click here. The ros.org site has seen great expansion and improved organization. We strongly encourage you to upload your work and share your documentation and tutorials with the ROS community!
Find this blog and more at planet.ros.org.