SDK for 3D environment perception

Vision-based real-time 3D environment perception SDK

Revolutionizing Real-Time Positioning and Mapping with Advanced Technology

Our cutting-edge technology seamlessly integrates state-of-the-art visual SLAM (Simultaneous Localization and Mapping), sensor fusion, and advanced AI to deliver precise, real-time positioning, perception, and mapping solutions. By combining visual input with data from multiple sensors, our system ensures unmatched accuracy and reliability in dynamic environments. Whether for autonomous navigation, robotics, or augmented reality, our platform empowers next-generation applications with the intelligence to understand and interact with the world around them.

The DC Vision SDK enables robust, real-time 3D vision for embedded systems. It works with mono-, stereo-, and multi-camera setups.

Main features 

  • Visual SLAM for precise estimation of position and orientation
  • Recognition and tracking of objects with 3D bounding boxes and 6D poses (see the pose sketch below this list)
  • 3D scene reconstruction from image sequences
  • 3D model generation
  • Centimetre-accurate image-based (re-)localisation relative to a 3D model
  • Digital twin for decision making and path planning
  • Real-time visualisation of the digital twin, including a dashboard
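
To make the pose outputs above concrete: a 6D pose combines a 3D position with a 3D orientation, commonly stored as a translation vector plus a unit quaternion. The C++ sketch below shows such a representation and how it transforms a point into world coordinates. The struct and function names are illustrative only and are not taken from the DC Vision SDK.

```cpp
#include <cstdio>

// A 6D pose: translation in metres plus orientation as a unit quaternion.
// This mirrors the "position and orientation" / "6D pose" outputs listed
// above; the struct itself is illustrative, not the SDK's own type.
struct Pose6D {
    double tx, ty, tz;      // translation
    double qw, qx, qy, qz;  // unit quaternion (w, x, y, z)
};

// Rotate point p by quaternion q and add the translation:
// p' = q * p * q^-1 + t, computed via the standard expanded form.
void transformPoint(const Pose6D& P, const double p[3], double out[3]) {
    const double w = P.qw, x = P.qx, y = P.qy, z = P.qz;
    // t = 2 * cross(q_vec, p)
    const double tX = 2.0 * (y * p[2] - z * p[1]);
    const double tY = 2.0 * (z * p[0] - x * p[2]);
    const double tZ = 2.0 * (x * p[1] - y * p[0]);
    // p' = p + w*t + cross(q_vec, t) + translation
    out[0] = p[0] + w * tX + (y * tZ - z * tY) + P.tx;
    out[1] = p[1] + w * tY + (z * tX - x * tZ) + P.ty;
    out[2] = p[2] + w * tZ + (x * tY - y * tX) + P.tz;
}

int main() {
    // 90-degree rotation about Z (qw = cos 45deg, qz = sin 45deg),
    // followed by a 1 m shift along X.
    Pose6D P{1.0, 0.0, 0.0, 0.7071067811865476, 0.0, 0.0, 0.7071067811865476};
    double p[3] = {1.0, 0.0, 0.0}, q[3];
    transformPoint(P, p, q);
    std::printf("(%.2f, %.2f, %.2f)\n", q[0], q[1], q[2]);  // ~(1.00, 1.00, 0.00)
}
```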

Applications

  • Mobile robotics
  • Autonomous vehicles
  • Logistics
  • Safety, collision detection and avoidance
  • Traffic monitoring and surveillance
  • Autonomous systems for agriculture
  • Augmented reality

Software architecture

The SDK is built upon several ground-breaking algorithms we have developed over recent years. The structure of the SDK is flexible and adaptable to different use cases. It runs on several different hardware architectures and is especially suited for real-time embedded processing on resource-limited systems.
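
The internal structure is not spelled out here, but one common way to realise a flexible, adaptable SDK is a pipeline of interchangeable processing modules that each use case composes differently. The following sketch illustrates that pattern under this assumption; it is not the SDK's actual design, and all names in it are placeholders.

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// Minimal frame type standing in for camera input.
struct Frame { int id = 0; };

// A processing stage; concrete modules (SLAM, detection, ...) implement this.
struct Module {
    virtual ~Module() = default;
    virtual void process(const Frame& f) = 0;
};

struct SlamModule : Module {
    void process(const Frame& f) override { std::printf("SLAM on frame %d\n", f.id); }
};

struct DetectionModule : Module {
    void process(const Frame& f) override { std::printf("detect on frame %d\n", f.id); }
};

// The pipeline runs whichever modules the use case requires.
class Pipeline {
public:
    void add(std::unique_ptr<Module> m) { modules_.push_back(std::move(m)); }
    void run(const Frame& f) { for (auto& m : modules_) m->process(f); }
private:
    std::vector<std::unique_ptr<Module>> modules_;
};

int main() {
    Pipeline p;
    p.add(std::make_unique<SlamModule>());       // localisation use case
    p.add(std::make_unique<DetectionModule>());  // add tracking only if needed
    p.run(Frame{1});
}
```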

Example results

Point cloud and camera trajectory
Reconstructed mesh
Point cloud and camera trajectory for a large scene
Indoor scene view with image, depth map and voxel-based 3D model. With voxels, it is easy to estimate potential collisions with the scene. The voxels are stored in a sparse data structure and are made available as part of the digital twin.
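
As a rough illustration of how a sparse voxel structure can support collision estimation, the sketch below stores only occupied voxels in a hash map keyed by packed integer cell coordinates and tests an axis-aligned box against them. The layout and API are assumptions for illustration; the SDK's actual digital-twin storage is not documented here.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <unordered_map>

// Sparse voxel grid: only occupied cells are stored, keyed by their integer
// grid coordinates packed into a single 64-bit value.
struct VoxelGrid {
    double voxel_size;  // edge length in metres
    std::unordered_map<std::uint64_t, bool> cells;

    static std::uint64_t key(int x, int y, int z) {
        // Pack three 21-bit signed coordinates (offset to be non-negative).
        const std::uint64_t B = 1u << 20;
        return ((x + B) << 42) | ((y + B) << 21) | (z + B);
    }

    int toCell(double v) const { return (int)std::floor(v / voxel_size); }

    void markOccupied(double px, double py, double pz) {
        cells[key(toCell(px), toCell(py), toCell(pz))] = true;
    }

    // Collision test: does any occupied voxel overlap the axis-aligned box?
    bool collides(double minX, double minY, double minZ,
                  double maxX, double maxY, double maxZ) const {
        for (int x = toCell(minX); x <= toCell(maxX); ++x)
            for (int y = toCell(minY); y <= toCell(maxY); ++y)
                for (int z = toCell(minZ); z <= toCell(maxZ); ++z)
                    if (cells.count(key(x, y, z))) return true;
        return false;
    }
};

int main() {
    VoxelGrid grid{0.1};               // 10 cm voxels
    grid.markOccupied(1.0, 0.5, 0.0);  // e.g. a point from the depth map
    // Check a robot-sized box against the map before moving into it.
    std::printf("collision: %s\n",
                grid.collides(0.9, 0.4, -0.1, 1.1, 0.6, 0.1) ? "yes" : "no");
}
```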
2D object detection and 3D object tracking: Each detected object can be localised in 3D in the digital twin. In addition to the position, the orientation and a 3D bounding box are determined. The visualisation also shows the trajectories and the IDs of the objects.
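
Below is a minimal sketch of what such per-object tracking state might look like, together with a greedy nearest-neighbour association step that keeps IDs stable across frames. The record fields and the association logic are illustrative assumptions, not the SDK's actual tracker.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One tracked object in the digital twin: a persistent ID, a 3D position,
// an orientation (yaw only, for brevity), a 3D bounding box, and the
// trajectory so far. Field choices are illustrative, not the SDK's schema.
struct TrackedObject {
    int id;
    double x, y, z;             // position in world coordinates (m)
    double yaw;                 // orientation about the vertical axis (rad)
    double dx, dy, dz;          // 3D bounding box extents (m)
    std::vector<double> trail;  // flattened trajectory: x0,y0,z0, x1,...
};

// Greedy nearest-neighbour association: assign each new 3D detection to the
// closest existing track within a distance gate, else start a new track.
void associate(std::vector<TrackedObject>& tracks,
               double px, double py, double pz,
               double gate_m, int& next_id) {
    TrackedObject* best = nullptr;
    double bestD = gate_m;
    for (auto& t : tracks) {
        const double d = std::sqrt((t.x - px) * (t.x - px) +
                                   (t.y - py) * (t.y - py) +
                                   (t.z - pz) * (t.z - pz));
        if (d < bestD) { bestD = d; best = &t; }
    }
    if (best) {
        best->x = px; best->y = py; best->z = pz;
        best->trail.insert(best->trail.end(), {px, py, pz});
    } else {
        tracks.push_back({next_id++, px, py, pz, 0.0, 1.0, 1.0, 1.0, {px, py, pz}});
    }
}

int main() {
    std::vector<TrackedObject> tracks;
    int next_id = 0;
    associate(tracks, 0.0, 0.0, 0.0, 0.5, next_id);  // new track 0
    associate(tracks, 0.1, 0.0, 0.0, 0.5, next_id);  // matches track 0
    associate(tracks, 5.0, 0.0, 0.0, 0.5, next_id);  // new track 1
    std::printf("tracks: %zu\n", tracks.size());     // prints 2
}
```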
Augmented reality: In the bottom-right camera view, a 3D path is overlaid in yellow/red. In the overall image, the 3D model of the digital twin is rendered from the same perspective as the camera view.
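
Overlaying a 3D path in a camera view boils down to projecting the path's 3D points through the camera model. The sketch below uses the standard pinhole projection with example intrinsics (all values made up); in a real system, world-space path points would first be transformed into the camera frame using the pose estimated by SLAM.

```cpp
#include <cstdio>

// Pinhole camera intrinsics: focal lengths and principal point in pixels.
struct Intrinsics { double fx, fy, cx, cy; };

// Project a 3D point in camera coordinates to pixel coordinates:
// u = fx * X/Z + cx, v = fy * Y/Z + cy.
bool project(const Intrinsics& K, double X, double Y, double Z,
             double& u, double& v) {
    if (Z <= 0.0) return false;  // behind the camera, cannot be drawn
    u = K.fx * X / Z + K.cx;
    v = K.fy * Y / Z + K.cy;
    return true;
}

int main() {
    Intrinsics K{600.0, 600.0, 320.0, 240.0};  // example 640x480 camera
    // A short path in front of the camera, 2 m to 4 m away.
    const double path[][3] = {{0.0, 0.5, 2.0}, {0.5, 0.5, 3.0}, {1.0, 0.5, 4.0}};
    for (const auto& p : path) {
        double u, v;
        if (project(K, p[0], p[1], p[2], u, v))
            std::printf("pixel (%.1f, %.1f)\n", u, v);  // where to draw the overlay
    }
}
```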