Strategy
02/04/2019

How robots learn to see

The benefit of AI-based image processing
Marco Braun, Strategic Technology Expert, Corporate Technology Development
Bin Picking

Practice makes perfect. Recent developments in the field of robotics combine learning phases for the detection and localisation of objects with phases for grasping these objects. The resulting continuous learning process allows the robot, much like a child, to gain experience with objects while learning geometric and kinematic properties intrinsically and autonomously. This capability could allow robots to adapt more flexibly to new tasks in the future.

Recognising and interacting with objects is an essential part of our everyday lives, and we humans learn it in early childhood through countless training cycles. Even just recognising and localising objects visually requires taking situation-dependent variables such as viewing angle or lighting conditions into account. For a machine, the problem begins with the very first question: which pixel belongs to which object, and where is that object located in space?

Classical image processing answers this question with, for example, methods for detecting edges or for searching for defined shapes or partial images within the camera image. These work very well as long as basic conditions such as constant ambient lighting and identical component shape, colour and position are met. In classical image processing, these restrictions have to be assumed in order to explicitly determine suitable procedures and parameterisations for a specific application. Since developing such solutions involves a certain effort depending on the task, the specific scenario should be fixed from the outset and the economic benefit should be assured.
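As a rough illustration (not a description of any particular product), the following sketch shows what such a classical approach can look like with OpenCV: Canny edge detection plus template matching against a reference patch. The file names and thresholds are placeholder assumptions and would have to be tuned for each specific application.

```python
# Sketch of a classical image processing pipeline with OpenCV:
# edge detection plus template matching. File names and thresholds
# are placeholders and must be tuned for a concrete application.
import cv2

# Camera image and a reference patch of the component, both greyscale.
image = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("component_template.png", cv2.IMREAD_GRAYSCALE)

# Edge detection with hand-tuned thresholds (assumes constant lighting).
edges = cv2.Canny(image, threshold1=50, threshold2=150)

# Template matching: search for the partial image within the camera image.
result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)

if max_val > 0.8:  # manually chosen confidence threshold
    x, y = max_loc
    h, w = template.shape
    print(f"Component found at ({x}, {y}), size {w}x{h}")
else:
    print("Component not found: lighting or pose may deviate from the template")
```

The hand-tuned thresholds in this sketch are exactly the kind of explicit parameterisation that only holds as long as the basic conditions stay the same.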

Flexible and robust image recognition can be trained

Thanks to Artificial Intelligence, new opportunities have emerged in recent years to make image processing more flexible. But intelligent software for image recognition must also be trained. Training with large amounts of data gives the image recognition system more robust object recognition than the classical approach, because the object properties are learned implicitly across a wide range of conditions. The learning algorithms divide the images into small components and look for patterns in the data. Once programs for image recognition have been trained, they often do their job faster, and sometimes more accurately, than humans.

Convolutional Neural Networks (CNNs), among other approaches, are used for object classification, object detection and segmentation. Trained on ordinary photos of dogs and cats, for example, these deep artificial neural networks can reliably distinguish the animals regardless of shooting conditions (such as lighting or surroundings) and can even classify them by breed. CNNs can therefore be trained to recognise objects with non-uniform shapes and colours, even in widely varying environments. In industrial robotics, there are already many applications of artificial intelligence for handling components.
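To illustrate the principle, the following is a minimal CNN classifier sketched in PyTorch; the architecture, the input size and the two-class setup (e.g. dog vs. cat) are assumptions chosen only to keep the example short.

```python
# Minimal CNN classifier sketch in PyTorch. The architecture, the 64x64
# input size and the two classes (e.g. dog vs. cat) are illustrative only.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learns local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)       # convolution layers extract patterns
        x = torch.flatten(x, 1)    # flatten for the fully connected layer
        return self.classifier(x)  # raw class scores

model = SmallCNN()
dummy_batch = torch.randn(4, 3, 64, 64)  # four RGB images, 64x64 pixels
print(model(dummy_batch).shape)          # torch.Size([4, 2])
```

In practice such a network would first have to be trained on many labelled example images before it can distinguish the classes reliably.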

Solutions for robust gripping

At the HARTING Technology Group, solutions are being developed for, among other things, the robust robotic gripping of chaotically scattered components lying in a container. A mix is used here: algorithms from classical image processing for pre- and post-processing of the data, combined with artificial intelligence to deal with fluctuating conditions such as ambient lighting, shading and the chaotic arrangement of the components. The applications centre on gripping different components that are provided as bulk material for further processing or packaging. This is where AI comes into play for identifying the component position: CNNs were trained to extract distinct contours even under fluctuating ambient lighting. These contours are then processed further with classical image processing algorithms in order to determine the position of the component relative to the robot very precisely. Using Artificial Intelligence in this way provides robust detection and saves development time: instead of manually fine-tuning algorithms, the parameters are learned by a CNN.
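The following sketch outlines such a mixed pipeline in simplified form; the segmentation model, its predict() interface and the hand-eye calibration step are assumptions for illustration, not HARTING's actual implementation. A CNN supplies a binary mask of the component, and classical OpenCV post-processing derives position and orientation from it.

```python
# Sketch of a mixed pipeline: a CNN produces a binary segmentation mask,
# and classical OpenCV post-processing extracts the contour and estimates
# the component pose. `segmentation_model` and its predict() interface
# are assumptions for illustration.
import cv2
import numpy as np

def locate_component(image, segmentation_model):
    """Return (centre_x, centre_y, angle_deg) of the most prominent component."""
    # 1) AI step: the CNN copes with fluctuating lighting and shading and
    #    returns a binary mask (0 or 255) marking the component.
    mask = segmentation_model.predict(image).astype(np.uint8)

    # 2) Classical post-processing: extract contours from the mask.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # Take the largest contour and fit a rotated rectangle to it.
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)

    # 3) In a real cell, (cx, cy, angle) would still have to be transformed
    #    into robot coordinates via the hand-eye calibration before grasping.
    return cx, cy, angle
```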

Convolutional artificial neural network

Convolutional Neural Networks (CNNs) are used successfully in the processing of image data. Their practical relevance can be illustrated with a simple example: CNNs can detect irregularities in the powder coating of connector metal housings. These defects can look different every time and occur in different places. If a CNN is trained to recognise such defective components, it can subsequently sort them out as rejects.
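A possible inference step for such a sorting application could look like the following sketch, assuming a CNN has already been trained and saved; the model file, the preprocessing and the class labels are illustrative assumptions.

```python
# Inference sketch for the coating-defect example, assuming a CNN has
# already been trained and serialised. Model file, preprocessing and
# class labels are illustrative assumptions.
import torch
from PIL import Image
from torchvision import transforms

LABELS = ["ok", "reject"]  # assumed class order used during training

preprocess = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

model = torch.load("coating_defect_cnn.pt")  # hypothetical trained model
model.eval()

def classify_housing(image_path):
    """Classify one housing image as 'ok' or 'reject'."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        scores = model(batch)
    return LABELS[scores.argmax(dim=1).item()]

# Housings classified as 'reject' would then be sorted out automatically.
print(classify_housing("housing_0001.png"))  # hypothetical image file
```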

This is just one of many possible applications of Convolutional Neural Networks.

