Object Processors That Learn to See and Manipulate Any Object

Grasp any object without pre-training

Just as humans can grasp objects they have never seen before, CynLr’s technology enables robots to grasp any object without pre-training. We learn objects only after we pick them up.

Learn to manipulate and re-orient

An object is characterized by its shape, and its shape as perceived by the observer varies with every small change in orientation. Our technology allows robots to learn to manipulate objects in any orientation.
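A toy illustration of this point, using nothing from CynLr's stack: even a plain square presents a different silhouette to the camera as it rotates. The function below (illustrative names only) projects a square's corners onto the image x-axis and measures its apparent width at a given rotation.

```python
import math

def apparent_width(half_size, angle_rad):
    """Width of the x-axis projection (silhouette) of a square with
    side 2 * half_size, rotated by angle_rad about its centre."""
    corners = [(sx * half_size, sy * half_size) for sx in (-1, 1) for sy in (-1, 1)]
    # Rotate each corner, keep only the x-coordinate (the image-plane projection).
    xs = [x * math.cos(angle_rad) - y * math.sin(angle_rad) for x, y in corners]
    return max(xs) - min(xs)

# A unit square appears 2 units wide head-on, but 2 * sqrt(2) wide at 45 degrees:
# the same object, a different perceived shape at every orientation.
```

The same geometry applies to any rigid object: the observer never sees "the" shape, only an orientation-dependent projection of it.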

Make oriented placements

One can’t build a car by throwing parts at each other. We enable robots not only to grasp objects from unstructured scenes but also to make oriented placements that achieve the desired outcome.

Create rich visual physics models of objects

Seeing is not “Vision”. Vision occurs when we augment what we “see” with what we “feel” to build sophisticated mental models that help us manipulate objects.

CynLr’s visual intelligence platform integrates a novel combination of Auto-Focus Liquid Lens Optics, Optical Convergence, Temporal Imaging, Hierarchical Depth Mapping and Force-Correlated Visual Mapping, allowing us to create rich visual physics models of objects and enabling robots to manipulate objects even in highly stochastic scenarios.
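The broad idea of pairing depth with touch can be sketched in a few lines. This is a minimal toy, not CynLr's implementation: all names and structures here are assumptions for illustration. It back-projects a depth map into 3D points through a standard pinhole camera model, then tags the point under a grasp pixel with the contact force measured there.

```python
# Illustrative sketch only: backproject() and correlate_force() are hypothetical
# names, and the pinhole model is a textbook stand-in for depth mapping.

def backproject(depth, fx, fy, cx, cy):
    """Pinhole back-projection: pixel (u, v) with depth z -> 3D point (x, y, z)."""
    points = {}
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip pixels with no depth reading
                points[(u, v)] = ((u - cx) * z / fx, (v - cy) * z / fy, z)
    return points

def correlate_force(points, grasp_pixel, force_n):
    """Attach a measured contact force (in newtons) to the grasped 3D point."""
    return {"point": points[grasp_pixel], "force_n": force_n}

depth = [[0.0, 0.5],          # 2x2 depth map in metres; one pixel has no reading
         [0.5, 0.5]]
pts = backproject(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
model = correlate_force(pts, grasp_pixel=(1, 1), force_n=2.5)
```

The point of the pairing is that vision alone gives geometry, while force gives the physical response of the object; a model that holds both can inform how to grip, not just where.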

“Seeing like Humans”

Human-like Robot Versatility Requires a Human-like Robotic Vision System


More than 50% of the human brain’s cortex is used for visual processing alone


There are at least 18 known depth cues humans use to perceive depth for manipulation, not just stereopsis
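Stereopsis, the best-known of those cues, reduces to a standard textbook relation (not CynLr-specific): depth Z = f·B/d, where f is the focal length in pixels, B the baseline between the two views, and d the disparity of the same feature between them.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic stereo depth: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: separation between the two
    cameras in metres; disparity_px: horizontal pixel shift of the same
    feature between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, B = 0.06 m, d = 14 px gives Z = 3.0 m
```

Its limits are exactly why the other cues matter: disparity shrinks with distance, and a single cue degrades quickly under occlusion or texture-poor surfaces.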


Humans see in more than 7 constructed dimensions of colour, contrast, contours & depth – not just as RGBD images

“Team” is THE “Tech”

We are looking for researchers, algorithm developers, machine learning specialists and scientists to join our quest towards building Object Intelligence.

Together, let’s Expand AI from Images & Video to Objects & Tasks