Just as humans can grasp unknown objects they have never seen before, CynLr’s technology enables robots to grasp any object without any pre-training. We only learn objects after we pick them up.
We identify an object by its Shape & Colour. But when the object's orientation changes, the same object presents different shapes and colours. Our technology allows robots to instinctively piece together an object from all its complex shapes and orientations, and to know how to re-orient it.
One can’t build a car by throwing parts at each other. We enable robots not only to grasp objects in unstructured scenarios, but also to learn oriented placements that achieve desired outcomes.
The Human eye doesn't use different sensors for Motion, Depth and Colour.
Making the cacophony of RADAR, LIDAR & Vision Fusion redundant. One Vision Platform To Rule Them All. CynLr's Vision system sees Motion, Depth and Colour all at once, at the same resolution, in sync, through the same pair of eyes.
Ever wondered why all animals "Converge" their eyes? Depth is perceived through Convergence.
Convergence gives Depth 10x faster and at 3x the resolution of traditional stereoscopy. No more wasting compute power on calibrating images and extracting features to construct depth. A LIDAR and a camera in one.
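The geometry behind depth-from-convergence can be sketched with simple triangulation: when two verged cameras fixate the same point, the target sits at the apex of a triangle whose base is the camera baseline, so the vergence angle alone yields depth. The sketch below is purely illustrative of that geometry and is not CynLr's implementation; the baseline and vergence values are assumed for the example.

```python
import math

def depth_from_vergence(baseline_m: float, vergence_deg: float) -> float:
    """Estimate distance to a fixated point from the vergence angle
    between two cameras separated by a known baseline.

    Assuming symmetric fixation, the target is the apex of an
    isosceles triangle: depth = (baseline / 2) / tan(vergence / 2).
    """
    half_angle = math.radians(vergence_deg) / 2.0
    return (baseline_m / 2.0) / math.tan(half_angle)

# Hypothetical example: eyes ~65 mm apart, verged at 3.7 degrees
# puts the fixation point at roughly 1 metre.
print(round(depth_from_vergence(0.065, 3.7), 3))
```

Note how the depth falls directly out of the vergence angle, with no per-pixel feature matching between the two images, which is where classical stereoscopy spends its compute.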
"Sight" is not "Vision". Vision occurs when sight overlaps with all the other senses, giving meaning to the colours we see: a mango, or a spoon.
Human Vision understands Objects through 7 different dimensions of information, not just colour and depth. We create rich visual physics models of objects through a combination of Liquid Lens Optics, Optical Convergence, Temporal Imaging, Hierarchical Depth Mapping, Force-Correlated Visual Mapping and many such technologies.