Is a robot vision system right for your industrial application?

As the name implies, robot vision systems essentially enable robots to “see” objects within their work envelope and execute their operations accordingly. A combination of components added to the work cell enables the robot to locate, identify and determine the orientation of objects for selection, retrieval and processing.

Robot vision systems add new levels of consistency and reliability to the process, while doing away with many of the complex mechanisms for identifying and orienting objects that are the staple of “blind” robot systems. The increased flexibility of the basic robotic unit saves both time and expense.

This ability can work in either two or three dimensions; in three dimensions, the robot can identify not only the position of an object but also its height.

This technology allows robots to extend their functionality far beyond the norm of repetitive and highly structured activities. A “blind” robot often requires a multitude of sometimes complex and expensive add-ons to ensure the correct selection, orientation and positioning of objects before the robot can execute its required function. Such add-ons often require routine adjustment and skilled maintenance to ensure smooth and continuous operation.

Advanced software even makes it possible to identify and locate objects while they are in motion, for example on a conveyor belt.
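The core of tracking a moving object is simple dead reckoning: the vision system detects the part at one position, and the controller projects where it will be once the total system latency has elapsed. Below is a minimal sketch of that idea, assuming a constant, known belt speed; the function name and all values are illustrative, not taken from any vendor's API.

```python
# Hypothetical sketch: predicting where a part on a moving conveyor will be
# by the time the robot is ready to pick it. Assumes a constant, known belt
# speed; names and numbers are illustrative only.

def predict_pick_position(detected_x_mm, belt_speed_mm_s, latency_s):
    """Position of the part along the belt after the system's total latency
    (image capture + processing + robot motion), in millimetres."""
    return detected_x_mm + belt_speed_mm_s * latency_s

# A part detected at 120 mm on a belt moving 250 mm/s, with 0.4 s latency:
print(predict_pick_position(120.0, 250.0, 0.4))  # 220.0 mm downstream
```

Real conveyor-tracking packages refine this with encoder feedback rather than a fixed latency estimate, but the underlying projection is the same.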

Robot vision is achieved by integrating a lighting and camera system with the robot controller, along with advanced software. In a single-camera system, the camera captures a digital image of the object, which the software, programmed to recognize and match particular shapes, then interprets. The object's position within the image locates it in the two-dimensional plane, and its apparent size in the image can be used to estimate its distance from the camera.
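The size-to-distance estimate described above follows from the pinhole camera model: an object of known real-world width appears smaller in the image the farther away it is. Here is a short sketch of that relationship; the focal length and part dimensions are made-up example values.

```python
# Illustrative pinhole-camera sketch: estimating distance from how large a
# known-size object appears in the image. All values are example figures.

def distance_from_apparent_size(focal_length_px, real_width_mm, width_in_image_px):
    """Estimate camera-to-object distance from apparent size:
    distance = focal_length * real_width / width_in_image."""
    return focal_length_px * real_width_mm / width_in_image_px

# A 50 mm part imaged 100 px wide through a lens with f = 800 px:
print(distance_from_apparent_size(800, 50.0, 100))  # 400.0 mm away
```

This is why single-camera systems need consistent, known part geometry: an unknown or variable part size breaks the size-to-distance relationship.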

Depending on the application, this two-dimensional position, together with the object's orientation, may give the robot enough information to select and manipulate the object. The software can determine the object's precise orientation, facilitating pick-up and manipulation.

Two-dimensional visioning is particularly useful in picking, sorting, feeder and assembly operations. 

However, robots are at their most productive in a three-dimensional environment, where multiple functions can be performed by a single machine, or a complex string of operations performed by robots working together in three-dimensional space.

Three-dimensional vision comes into its own when the height of an object matters. An example is de-palletizing, where the robot must identify the height of a stack on a pallet as well as its exact location, and lift accordingly. Other examples include picking objects from a bin or rack, and selecting, loading or unloading dies.
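For the de-palletizing case, the height calculation itself is straightforward once a 3D sensor reports the distance down to the top of the stack. This sketch shows the arithmetic under assumed example dimensions; the function names and the clearance value are hypothetical, not from a real cell.

```python
# Illustrative de-palletizing sketch: a downward-facing depth camera reads
# the distance to the top layer of a stack, and the controller converts that
# into a gripper approach height. All dimensions are assumed example values.

def stack_top_height_mm(camera_height_mm, depth_reading_mm):
    """Height of the stack's top surface above the floor, given the camera's
    mounting height and the measured distance straight down to the stack."""
    return camera_height_mm - depth_reading_mm

def gripper_target_mm(camera_height_mm, depth_reading_mm, clearance_mm=10.0):
    """Z target for the gripper: just above the top of the stack."""
    return stack_top_height_mm(camera_height_mm, depth_reading_mm) + clearance_mm

# A camera mounted 2000 mm up reads 1100 mm to the topmost carton:
print(gripper_target_mm(2000.0, 1100.0))  # 910.0 mm
```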

The final word in deciding whether robot vision is the solution for your application is a consideration of these systems' limitations. What the human eye can identify still far exceeds what robot vision systems are capable of. As a result, robot vision relies on consistency in the appearance of the objects it is specifically programmed to identify and process. Ill-defined, nebulous or randomly shaped objects are generally not suited to processing by robot vision; an example would be biomorphic objects that vary in shape and size.

It is also important to understand that robot vision systems are not designed to process visual information beyond what they are programmed for. For example, in a line of robots working through a linear process, the next robot in line cannot determine whether the previous robot has correctly completed its step. If an error enters the process, such as from a broken tool, the vision system will not discern it as a human operator would.