Luxonis announced the release of its newest DepthAI ROS driver for its stereo depth OAK cameras. The driver aims to make the development of ROS-based software easier.
When using the DepthAI ROS driver, almost everything is parameterized with ROS2 parameters/dynamic reconfigure, which aims to provide more flexibility to help users customize OAK to their unique use cases.
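As a rough illustration of that parameterization, a ROS2 parameters file for the driver might look like the sketch below. The node name and every parameter name here are examples only, not confirmed by the announcement; the actual names exposed depend on the driver version, so consult its documentation before use.

```yaml
# Illustrative parameters file for a DepthAI ROS driver node (ROS2).
# All names below are assumptions for illustration, not verified values.
/oak:
  ros__parameters:
    camera:
      i_pipeline_type: RGBD    # hypothetical: which mode/pipeline to run
      i_nn_type: spatial       # hypothetical: 2D, spatial, or none
    rgb:
      i_fps: 30.0              # hypothetical: sensor frame rate
```

Because these are ordinary ROS2 parameters, they can also be overridden at launch time or adjusted through dynamic reconfigure rather than edited in a file.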
The DepthAI ROS driver is being developed for ROS2 Humble and ROS1 Noetic, which allows users to take advantage of the ROS Composition and Nodelet mechanisms. The driver supports 2D and spatial detection as well as semantic segmentation networks.
The driver offers several different modes that users can run their camera in depending on their use case. For example, users can have the camera publish spatial neural network detections alongside an RGBD point cloud. Alternatively, the DepthAI ROS driver lets users stream data straight from the sensors for host-side processing, calibration or a modular camera setup.
With the driver, users can set parameters at runtime, such as exposure and focus for individual cameras, as well as IR LED power for better depth accuracy and night vision. Users can also experiment with the onboard depth filter parameters.
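In a ROS2 setup, runtime adjustments like these would typically go through the standard `ros2 param` CLI against the running driver node. The node name (`/oak`) and parameter names below are illustrative assumptions, not names confirmed by the article:

```shell
# Hedged sketch: tune a running driver node at runtime.
# /oak and all parameter names are assumptions; check your driver's docs.
ros2 param set /oak rgb.r_exposure 12000                  # manual exposure (hypothetical name)
ros2 param set /oak rgb.r_focus 130                       # manual focus (hypothetical name)
ros2 param set /oak camera.i_laser_dot_brightness 800     # IR dot projector power (hypothetical name)
```

Because these are plain ROS2 parameter calls, the same adjustments can be made programmatically from a node via a parameter client.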
The driver enables encoding to save bandwidth with compressed image streams and provides an easy way to integrate a multi-camera setup. It also offers Docker support for easy integration: users can build an image themselves or pull one from Luxonis' DockerHub repository.
Users can also reconfigure their cameras quickly and easily using 'stop' and 'start' services. The driver additionally allows users to run low-quality streams and switch to higher quality when needed, or to switch between different neural networks to give their robot the data it needs.
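A reconfiguration cycle using those services might look like the following sketch. The service names, their type (`std_srvs/srv/Trigger`) and the node name `/oak` are assumptions for illustration; the actual names depend on the driver release:

```shell
# Hedged sketch: stop the camera, change a setting, then restart it.
# Service and parameter names are assumptions, not verified against the driver.
ros2 service call /oak/stop_camera std_srvs/srv/Trigger
ros2 param set /oak camera.i_nn_type none     # e.g. swap out the neural network (hypothetical name)
ros2 service call /oak/start_camera std_srvs/srv/Trigger
```

This pattern avoids relaunching the whole node just to change the pipeline, which is useful when the camera is composed with other nodes in one process.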
Earlier this month, Luxonis announced a partnership with ams OSRAM. As part of the partnership, Luxonis will use OSRAM’s Belago 1.1 Dot Projector in its 3D vision solutions for automatic guided vehicles (AGVs), robots, drones and more.