Continuous visual feedback while manipulating objects makes accurate placement possible.
Picks objects of any geometry. Learns mass distribution for effective grips. Reacts to and avoids slippage.
No requirement for civil planning & engineering. Place your robot, train and go.
Objects with complex geometries, such as this connecting rod in a bin, result in an infinite number of unpredictable orientations for picking.
Because of asymmetry and uneven mass distribution, objects easily get entangled, leaving only a partial face of the most suitable part visible for picking.
This requires the system to understand and predict all possible gripping points on the revealed side, account for the hidden portions, and choose the best part for picking.
The part must then be selected based on the picking orientation that can deliver the most accurate placement. This requires more visual data than just colour and a 3D depth map.
Trajectories for gripping, oriented picking, and oriented placing must be computed and fed to the robot, suited to picking the connecting rod from an untrained, random orientation in the bin.
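To make the flow above concrete, here is a minimal, hypothetical Python sketch of grip-candidate scoring followed by a pick-to-place trajectory. Every name, data shape, and the scoring formula are invented for illustration; this is not CynLr's actual implementation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class GripCandidate:
    position: np.ndarray    # XYZ of the gripping point in the bin frame (m)
    approach: np.ndarray    # unit vector the gripper approaches along
    visibility: float       # fraction of the part face visible (0..1)
    placement_error: float  # predicted placement error (mm) if picked here


def select_grip(candidates):
    # Trade off visibility against the placement accuracy a grip can deliver --
    # a crude stand-in for the richer, multi-dimensional scoring described above.
    return min(candidates, key=lambda c: c.placement_error / max(c.visibility, 1e-3))


def pick_place_trajectory(grip, place_pose, steps=20):
    # Linearly interpolated waypoints: approach -> grip -> oriented place.
    # A real system would plan collision-free, dynamically feasible paths.
    approach_point = grip.position + 0.10 * grip.approach  # 10 cm standoff
    waypoints = [approach_point, grip.position, place_pose]
    path = []
    for a, b in zip(waypoints, waypoints[1:]):
        path.extend(a + t * (b - a) for t in np.linspace(0.0, 1.0, steps))
    return path


# Two candidate grips on a partially occluded connecting rod: the second is
# more visible and predicts a smaller placement error, so it wins.
candidates = [
    GripCandidate(np.array([0.10, 0.02, 0.30]), np.array([0.0, 0.0, -1.0]), 0.4, 1.2),
    GripCandidate(np.array([0.12, 0.05, 0.28]), np.array([0.0, 0.0, -1.0]), 0.9, 0.6),
]
best = select_grip(candidates)
trajectory = pick_place_trajectory(best, place_pose=np.array([0.50, 0.00, 0.10]))
```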
“Yes, excessive automation at Tesla was a mistake. To be precise, my mistake. Humans are underrated.”
- Elon Musk, April 2018, on Tesla not meeting Model 3 production targets.
More than 50% of the human brain’s cortex is used for visual processing alone
There are at least 18 known depth cues humans use to perceive depth for manipulation, not just stereopsis
Humans see in more than 7 constructed dimensions of colour, contrast, contours and depth – not just as RGBD images
3D bin-picking solutions currently available in the market perform basic pattern-matching algorithms on sparse 3D depth-map data. They therefore work only when object geometries are simple, with no occlusion or entanglement, and with at least one part fully visible in the viewing angle they were trained for; they can only approximately pick and drop objects, not accurately place them.
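For contrast, the rigid depth-map matching described above can be sketched in a few lines. This is a deliberately naive illustration (not any vendor's code), and it shows why such approaches need the trained face to be fully visible: any occlusion inflates the error at the true location and the match fails.

```python
import numpy as np


def match_template(depth_map, template):
    # Slide a rigid depth template over the scene and return the offset with
    # the lowest mean squared error. This only works when the trained face is
    # fully visible at roughly the trained viewing angle; occlusion or
    # entanglement corrupts the error surface.
    h, w = template.shape
    best_err, best_pos = np.inf, (0, 0)
    for y in range(depth_map.shape[0] - h + 1):
        for x in range(depth_map.shape[1] - w + 1):
            err = np.mean((depth_map[y:y + h, x:x + w] - template) ** 2)
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos


scene = np.random.rand(64, 64)      # toy depth map
part = scene[20:28, 30:38].copy()   # "trained" template cut from the scene
print(match_template(scene, part))  # -> (20, 30) while the part is unoccluded
```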
The purpose of picking an object during a manual task is almost always to place it in a desired orientation, so approximate picks and drops find very limited use-cases. The CynLr solution is a full-scale manual-task automation platform capable of learning to manipulate – pick, orient, and accurately place – objects of any geometry, even from occluded conditions.
Deep Neural Networks for image analysis have evolved for object-identification use-cases and follow a typical Classification → Localization → Detection → Segmentation approach that is not suitable for object-manipulation applications.
Automation of manual tasks on shop floors demands at least 99.999% accuracy and repeatability, which traditional deep learning models cannot achieve even with billions of training images. It is practically infeasible to obtain training data for all possible scenarios of even a single object.
Instead of training on high volumes of low-dimensionality information (millions of XY-RGB (2D colour) or XYZ (3D) images), our visual processing system constructs more than 7 dimensions of information, closely mimicking the human visual cortex. This enables cognition of objects from a manipulation point of view (PoV) rather than just an identification PoV, and thus lets us achieve sub-millimeter pose-estimation accuracy in real time for the robot to grip and place the object precisely. This is achieved using a customized DNN that operates in the order Segmentation, Association through Gripping Reinforcement, Classification, and Identification, instead of the conventional CNN order of Classification, Localization, Identification, and Segmentation. The trained model can run on the edge.
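As a rough, toy illustration of that reordered flow (Segmentation first, then Association through gripping, then Classification, then Identification), the sketch below chains stub stages over an invented scene structure. The stage names come from the text; everything else (data shapes, scores, return values) is assumed purely for illustration.

```python
def segment(scene):
    # 1. Segmentation first: split the scene into candidate object regions.
    return [region for region in scene["regions"] if region["visible"] > 0.2]


def associate_grips(regions):
    # 2. Association through gripping reinforcement: attach a grip score to
    #    each region (here a trivial stand-in for a learned score).
    return [dict(r, grip_score=r["visible"]) for r in regions]


def classify(gripped):
    # 3. Classification of the graspable regions.
    return [dict(g, cls="connecting_rod") for g in gripped]


def identify(classified):
    # 4. Identification: choose the specific instance to pick.
    return max(classified, key=lambda g: g["grip_score"])


def run_pipeline(scene):
    out = scene
    for stage in (segment, associate_grips, classify, identify):
        out = stage(out)
    return out


toy_scene = {"regions": [{"visible": 0.3}, {"visible": 0.8}, {"visible": 0.1}]}
print(run_pipeline(toy_scene))  # -> the most graspable, identified region
```

Note how the conventional identification-first ordering (classify → localize → identify → segment) is inverted: segmentation and grip association run before classification, so the system reasons about graspability before it reasons about identity.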