Continuous visual feedback while manipulating objects makes accurate placement possible.
Picks objects of any geometry. Learns mass distribution for effective grips. Reacts to and avoids slippage.
No requirement for civil planning & engineering. Place your robot, train, and go.
Objects with complex geometries, such as this connecting rod in a bin, present a practically infinite number of unpredictable orientations for picking.
Because of asymmetry and uneven mass distribution, objects easily get entangled, leaving only a partial face of the most suitable part visible for picking.
The system therefore needs to understand and predict all possible gripping points on the revealed side, reason about the hidden portions, and choose the best part to pick.
The part must then be selected for the picking orientation that can deliver the most accurate placement. This requires more visual information than just colour and a 3D depth map.
Trajectories for gripping, oriented picking, and oriented placement must then be computed and fed to the robot, suited to picking a connecting rod lying in an untrained, random orientation in the bin (the selection step is sketched below).
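A minimal sketch of such a selection step in Python, assuming a hypothetical GraspCandidate record and illustrative weights; none of these names or numbers come from the actual system.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    position: np.ndarray       # 3D grip point on the revealed face (metres)
    approach: np.ndarray       # unit approach vector for the gripper
    visibility: float          # fraction of the grip face that is unoccluded, 0..1
    placement_error_mm: float  # predicted placement error if picked here

def select_grasp(candidates, w_vis=0.4, w_place=0.6):
    """Pick the candidate that best trades off visibility of the grip
    face against the placement accuracy its pick orientation delivers."""
    def score(c):
        placement_score = 1.0 / (1.0 + c.placement_error_mm)
        return w_vis * c.visibility + w_place * placement_score
    return max(candidates, key=score)

# A fully visible but imprecise grasp loses to a partially visible
# grasp whose orientation allows far more accurate placement.
a = GraspCandidate(np.zeros(3), np.array([0.0, 0.0, -1.0]), 0.95, 4.0)
b = GraspCandidate(np.ones(3), np.array([0.0, 0.0, -1.0]), 0.60, 0.3)
assert select_grasp([a, b]) is b
```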
More than 50% of the human brain’s cortex is used for visual processing alone
Humans use at least 18 known cues, not just stereopsis, to perceive depth for manipulation
Humans see in more than 7 constructed dimensions of colour, contrast, contours and depth – not just as RGBD images
Vision for object manipulation requires dynamic image-acquisition hardware with the ability to focus, refocus, pan, zoom, and move around in real time, and to adaptively acquire images at high speed.
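As one illustration of what adaptive, real-time acquisition can mean in software, the loop below hill-climbs lens focus against a standard sharpness metric (variance of a Laplacian response); the camera object, with its capture() and move_focus() methods, is a hypothetical stand-in, not a real device API.

```python
import numpy as np

def sharpness(image: np.ndarray) -> float:
    """Variance of a 5-point Laplacian response: a common focus metric."""
    lap = (-4.0 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    return float(lap.var())

def autofocus(camera, step: float = 1.0, iters: int = 20) -> float:
    """Hill-climb the focus position until sharpness stops improving."""
    best = sharpness(camera.capture())
    direction = 1.0
    for _ in range(iters):
        camera.move_focus(direction * step)
        s = sharpness(camera.capture())
        if s > best:
            best = s                              # keep moving the same way
        else:
            camera.move_focus(-direction * step)  # undo the unhelpful move
            direction = -direction                # try the other direction
    return best
```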
Static 2D colour images and 3D depth maps alone are insufficient inputs for neural networks to model the complexities of objects. Our imaging pathway acquires and constructs more than 7 fundamental dimensions of information, dramatically reducing the volume of data needed for learning.
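A simplified NumPy sketch of the idea: constructing seven channels (three colour, local contrast, contour strength, depth, and depth discontinuities) from a single RGB-D pair. Which dimensions are actually constructed is not public, so the channels below are assumptions chosen for illustration.

```python
import numpy as np

def construct_channels(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack colour, contrast, contours and depth cues into one
    H x W x 7 observation instead of a bare H x W x 4 RGBD image."""
    luma = rgb @ np.array([0.299, 0.587, 0.114])   # luminance
    gy, gx = np.gradient(luma)
    contours = np.hypot(gx, gy)                    # edge strength
    h, w = luma.shape
    pad = np.pad(luma, 1, mode="edge")             # 3x3 box mean
    box = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    contrast = np.abs(luma - box)                  # local contrast
    depth_edges = np.hypot(*np.gradient(depth))    # depth discontinuities
    return np.dstack([rgb, contrast, contours, depth, depth_edges])
```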
3D bin-picking solutions currently available in the market run basic pattern-matching algorithms on sparse 3D depth-map data. They therefore work only when object geometries are simple, with no occlusion or entanglement, and with at least one part fully visible from the viewing angle they were trained on; they can only approximately pick and drop objects, not accurately place them.
The purpose of picking an object during a manual task is almost always to place it in a desired orientation, so approximate picks and drops find very limited use-cases. The CynLr solution is a full-scale manual-task automation platform capable of learning to manipulate – pick, orient, and accurately place – objects of any geometry, even from occluded conditions.
Deep Neural Networks for image analysis have evolved around object-identification use-cases and follow a typical Classification -> Localization -> Detection -> Segmentation pipeline, which is not suited to object-manipulation applications.
Automation of manual tasks on shop floors demands at least 99.999% accuracy and repeatability, which traditional deep-learning models cannot achieve even with billions of training images. It is practically infeasible to obtain training data for all possible scenarios of even a single object.
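A back-of-the-envelope illustration (all numbers assumed for the example): even a model that is 99.9% accurate per step falls far short once the steps of a pick-orient-place cycle compound over a production day.

```python
per_step = 0.999        # assumed per-step accuracy of a trained model
steps = 4               # detect -> grasp -> orient -> place
cycle_success = per_step ** steps          # ~0.99601 per cycle
cycles_per_day = 20_000                    # assumed shop-floor throughput
failures_per_day = cycles_per_day * (1 - cycle_success)
print(f"{cycle_success:.5f} -> ~{failures_per_day:.0f} stoppages/day")  # ~80
```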
Instead of high volumes of low-dimensionality information (millions of XY-RGB (2D colour) or XYZ (3D) images) for training, our visual processing system constructs more than 7 dimensions of information, closely mimicking the human visual cortex. This enables cognition of objects from a manipulation point of view (PoV) rather than just an identification PoV, and lets us achieve sub-millimeter pose-estimation accuracy in real time, so the robot can grip and place the object precisely.
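For a flavour of the geometry behind precise pose estimation (a textbook building block, not the proprietary method described above), the classic Kabsch algorithm recovers a rigid 6-DoF pose from matched 3D points:

```python
import numpy as np

def estimate_pose(model_pts: np.ndarray, observed_pts: np.ndarray):
    """Least-squares rigid transform (R, t) mapping Nx3 model points
    onto their observed counterparts (Kabsch algorithm)."""
    mu_m, mu_o = model_pts.mean(axis=0), observed_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (observed_pts - mu_o)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_o - R @ mu_m
    return R, t                                # observed ~= model @ R.T + t
```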