CynLr Robotic Arms Collage

“Robots that can see, understand, and learn to grasp and manipulate any object, even from clutter!”

CynLr full setup
Multi-dimensional dynamic visual acquisition

Continuous visual feedback while manipulating objects makes accurate placement possible.

Adaptive & force-sensitive gripping.

Picks objects of any geometry. Learns mass distribution for effective grips. Reacts to and avoids slippage.
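Slip-reactive gripping can be sketched as a friction-cone check: the gripper tightens whenever the tangential (shear) force at the fingertips approaches the friction limit of the current grip force. This is an illustrative sketch only; the `GripState` fields, friction coefficient, and safety margin are hypothetical, not CynLr's actual control interface.

```python
# Illustrative slip-reactive grip loop; the sensor fields, friction
# coefficient, and margin below are hypothetical assumptions.
from dataclasses import dataclass


@dataclass
class GripState:
    force: float       # current normal grip force (N)
    tangential: float  # tangential (shear) force at the fingertips (N)


def adjust_grip(state: GripState, mu: float = 0.4, margin: float = 1.2) -> float:
    """Return a new grip force that keeps the grasp inside the friction cone.

    Slip is imminent when tangential force approaches mu * normal force;
    the margin keeps the grip safely away from that limit.
    """
    required = state.tangential / mu  # minimum normal force to avoid slip
    target = margin * required        # add a safety margin
    return max(state.force, target)   # only tighten here, never loosen
```

A controller would call `adjust_grip` each cycle and command the returned force; loosening (for delicate parts) would need a symmetric check on the other side of the cone.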

Portable, grouting-free deployment, even at high speeds

No civil planning & engineering required. Place your robot, train, and go.

Interested in a pilot for your Pick -> Orient -> Place application? Contact us

Picking objects from clutter and accurately placing them is intuitive for a human, but very challenging for a robot.

Connecting rod

Objects with complex geometries, such as this connecting rod in a bin, result in an infinite number of unpredictable orientations for picking.

Because of asymmetry and uneven mass distribution, objects easily become entangled, leaving only a partial face of the most suitable part visible for picking.

This requires the system to understand and predict all possible gripping points on the revealed side, account for the hidden portions, and choose the best part to pick.

The part must then be selected based on the picking orientation that can deliver the most accurate placement. This requires more visual data than just colour and a 3D depth map.

Trajectories for gripping, oriented picking, and oriented placement must then be computed and fed to the robot, so that it can pick a connecting rod lying in an untrained, random orientation in a bin.
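As a rough illustration of the transform chain involved (all frames and numbers below are hypothetical, not CynLr's implementation), the gripper pose for an oriented pick can be composed from the camera's pose in the robot base frame, the estimated object pose in the camera frame, and a grasp pose defined in the object frame:

```python
# Illustrative rigid-body transform chain for an oriented pick.
import numpy as np


def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


def gripper_target(T_base_cam, T_cam_obj, T_obj_grasp):
    """Gripper pose in the robot base frame for a grasp defined in the object frame."""
    return T_base_cam @ T_cam_obj @ T_obj_grasp


# Hypothetical example: camera mounted 0.8 m above the base, object detected
# 0.5 m ahead of the camera, grasp point 0.1 m short of the part along the
# approach axis, all with identity rotations for simplicity.
T_base_cam  = pose(np.eye(3), [0.2, 0.0, 0.8])
T_cam_obj   = pose(np.eye(3), [0.0, 0.0, 0.5])
T_obj_grasp = pose(np.eye(3), [0.0, 0.0, -0.1])
T_base_grip = gripper_target(T_base_cam, T_cam_obj, T_obj_grasp)
```

The same composition, run in reverse from the trained placement pose, yields the oriented-placement target; a trajectory planner then connects the two poses.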

"Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated."

- Elon Musk, April 2018, on Tesla not meeting Model 3 production targets.

Watch Video

“Human-like Robot Versatility Requires a Human-like Robotic Vision System”

More than 50% of the human brain’s cortex is used for visual processing alone


Humans use at least 18 known depth cues to perceive depth for manipulation, not just stereopsis

Humans see in more than 7 constructed dimensions of colour, contrast, contours and depth – not just as RGBD images

Adaptive Acquisition illustration

Adaptive & Coordinated Acquisition

See to manipulate, manipulate to see

Vision for object manipulation requires dynamic image-acquisition hardware with the ability to focus, refocus, pan, zoom, and move around in real time, and to acquire images adaptively at high speeds.

Start Discovering

Multi-Dimensional Perception

Not 2D, not 3D, we see a lot more!

Static 2D colour images and 3D depth maps alone are insufficient inputs for neural networks to model the complexities of objects. Our imaging pathway acquires and constructs more than 7 fundamental dimensions of information, dramatically reducing the amount of data needed for learning.
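As a toy illustration of building image channels beyond colour and depth (the specific channel choices here are illustrative assumptions, not CynLr's actual pathway), contrast and contour channels can be derived from an RGB-D input and stacked into one multi-dimensional tensor:

```python
# Toy construction of extra image "dimensions" on top of RGB-D.
# The channel definitions are illustrative, not CynLr's pipeline.
import numpy as np


def extra_channels(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack RGB, depth, a contrast channel, and a contour-strength channel."""
    lum = rgb.mean(axis=-1)          # luminance from the colour channels
    gy, gx = np.gradient(lum)        # spatial gradients of luminance
    contour = np.hypot(gx, gy)       # edge/contour strength
    contrast = lum - lum.mean()      # deviation from mean brightness
    return np.dstack([rgb, depth, contrast, contour])  # H x W x 6


# Random stand-in data; a real system would use calibrated camera frames.
img = np.random.rand(8, 8, 3)
dep = np.random.rand(8, 8)
feats = extra_channels(img, dep)
```

Feeding a richer, lower-redundancy representation like this is the general idea behind needing far fewer training samples than raw RGB-D pixels would require.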

Start Discovering

Multi-dimensional Perception photo

Frequently Asked Questions

3D bin-picking solutions currently available in the market run basic pattern-matching algorithms on sparse 3D depth-map data. They therefore work only when object geometries are simple, with no occlusion or entanglement, and with at least one part fully visible from the trained viewing angle, and they can only approximately pick and drop objects, not accurately place them.

The purpose of picking an object during a manual task is almost always to place it in a desired orientation, so approximate pick-and-drop finds very limited use-cases. The CynLr solution is a full-scale manual-task automation platform capable of learning to manipulate – pick, orient, and accurately place – objects of any geometry, even from occluded conditions.

Deep neural networks for image analysis have evolved for object-identification use-cases and follow a typical Classification -> Localization -> Detection -> Segmentation approach, which is not suitable for object-manipulation applications.

Automation of manual tasks on shop floors demands accuracy and repeatability of at least 99.999%, which traditional deep-learning models cannot achieve even with billions of training images. It is practically infeasible to obtain training data for all possible scenarios of even a single object.
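A back-of-the-envelope comparison makes that gap concrete (the per-shift pick count below is a hypothetical assumption): at 99.9% per-pick accuracy a cell fails about ten times every 10,000 picks, while the 99.999% shop-floor target allows roughly one failure per 100,000 picks.

```python
# Illustrative reliability arithmetic; the throughput figure is hypothetical.
def expected_failures(accuracy: float, picks: int) -> float:
    """Expected number of failed picks at a given per-pick success rate."""
    return (1.0 - accuracy) * picks


picks_per_shift = 10_000  # hypothetical picks per shift
dl_failures = expected_failures(0.999, picks_per_shift)        # strong DL benchmark
target_failures = expected_failures(0.99999, picks_per_shift)  # shop-floor target
```

A model that looks excellent on vision benchmarks would still stop the line roughly ten times per shift at this throughput, which is why benchmark accuracy alone is not the relevant metric.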

Instead of high volumes of low-dimensionality information (millions of XY-RGB (2D colour) or XYZ (3D) images) for training, our visual processing system constructs more than 7 dimensions of information, closely mimicking the human visual cortex. This enables cognition of objects from a manipulation point of view (PoV) rather than just an identification PoV, letting us achieve sub-millimetre pose-estimation accuracy in real time so the robot can grip and place the object precisely.

Still have questions? Get in touch