With our uncompromising speed, quality, and performance, the applications for our camera are vast. Our vision as a company is to improve and revolutionize existing pipelines that require high-speed, high-performance stereoscopic capture, with applications ranging from AR and VR to ADAS and machine learning.



Through our unique stitching and low-latency pipeline, we have enabled ultra-fast encoding, re-projection, and streaming in less than the blink of an eye. When we say real-time, we mean real-time. Built on WebVR, our beta platform conVRsation™ enables virtual reality chatrooms broadcast over a standard internet connection.

The platform is available for demo; we believe VR is the next major social platform.




Using advanced stereoscopic capture for depth measurement, object recognition, and planar tracking, we deliver virtual environments that can easily be optimized for Augmented Reality (AR) object insertion with advanced occlusion.
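Depth-aware occlusion of this kind typically comes down to a per-pixel z-test: a virtual pixel is drawn only where it is closer to the camera than the captured real-world surface. A minimal illustrative sketch (all arrays and depth values below are toy data, not output from our pipeline):

```python
# Sketch: depth-based occlusion for AR object insertion. A captured
# depth map lets the compositor hide virtual pixels that fall behind
# real-world surfaces. All values here are illustrative toy data.

def composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """Per-pixel z-test: keep the virtual pixel only where it is
    closer to the camera than the real scene at that pixel."""
    out = []
    for rr, rd, vr, vd in zip(real_rgb, real_depth, virt_rgb, virt_depth):
        out.append(vr if vd < rd else rr)  # nearer surface wins
    return out

# 1-D toy "scanline": a virtual cube at 2 m crossing a real wall at
# 3 m and a real pillar at 1 m -- the pillar should occlude the cube.
real  = ["wall", "pillar", "wall"]
rdep  = [3.0, 1.0, 3.0]
virt  = ["cube", "cube", "cube"]
vdep  = [2.0, 2.0, 2.0]
print(composite(real, rdep, virt, vdep))  # ['cube', 'pillar', 'cube']
```

The same comparison runs per pixel in a real renderer; accurate captured depth is what makes the occlusion boundary look physically correct.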

True Virtual Reality (VR) capture is the combination of 360° video and depth perception; using our proprietary cameras and stitching software, we deliver zero-latency solutions for fully immersive, high-quality virtual reality capture. Register for a demo to see our camera in action.



ADAS (Advanced Driver Assistance Systems)

The Suometry™ omnistereo™ camera can materially improve the capability and quality of ADAS vision systems, and potentially reduce the cost of introducing advanced imaging in vehicles.

With its real-time performance and 360° field of view, the camera can address everything from long-range distance measurement and vision, to short-range exterior monitoring, to interior cabin monitoring.

The baseline between cameras can run from 2.5″ out to an extreme hyper-stereo configuration. The hyper-stereo baseline enables the camera to calculate depth with far greater accuracy at far greater distances.
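The accuracy gain follows directly from stereo geometry: for a pinhole stereo pair, depth is Z = f·B/d, so a fixed disparity error Δd produces a depth error of roughly ΔZ ≈ Z²·Δd/(f·B), which grows with the square of distance but shrinks linearly as the baseline B widens. A small sketch of that relationship (the focal length, disparity error, and hyper-stereo baseline below are illustrative assumptions, not Suometry camera specifications):

```python
# Sketch: how stereo baseline affects depth accuracy. Illustrative
# numbers only -- focal_px, disp_err_px, and the hyper-stereo
# baseline are assumptions, not actual camera specifications.

def depth_error_m(distance_m, baseline_m, focal_px=1000.0, disp_err_px=0.25):
    """Approximate depth uncertainty for a pinhole stereo pair.

    Depth Z = f * B / d, so a small disparity error dd gives
    dZ ~= Z^2 * dd / (f * B): error grows with distance squared
    and shrinks linearly as the baseline widens.
    """
    return (distance_m ** 2) * disp_err_px / (focal_px * baseline_m)

short = 0.0635   # 2.5 in baseline, in metres
hyper = 0.50     # hypothetical hyper-stereo baseline, in metres

for z in (10.0, 50.0):
    print(f"{z:>4.0f} m: +/-{depth_error_m(z, short):5.2f} m (2.5 in)  "
          f"vs  +/-{depth_error_m(z, hyper):5.2f} m (hyper-stereo)")
```

With these assumed numbers, widening the baseline from 2.5″ to 0.5 m cuts the depth error at 50 m by roughly a factor of eight, which is why hyper-stereo configurations pay off for long-range ADAS ranging.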




The Suometry™ omnistereo™ camera also has the potential to materially improve machine learning and training. First, because its image processing is so fast and requires so few computational resources, substantially more stereoscopic data can be generated for machine training.

Second, when used in a hyper-stereo configuration, it can provide additional depth information to correlate with LIDAR data and to create, or validate against, ground truth.

Third, the addition of full 360° stereoscopic data could improve the overall quality of object identification, especially for objects that are in motion or partially occluded at different angles.