Of all sensing technologies for motion designs, none collect more information than machine-vision systems. Visual feedback tracks environment and part features with high-bandwidth data to boost application safety, speed, and quality. What’s more, vision technologies grow more advanced all the time.
Artificial vision has evolved from research-level computer vision to working machine vision in manufacturing plants to the embedded vision in today’s defense, consumer, automotive, and medical products — with plenty of technology cross-pollination between subtypes. Consider as proof such emerging machine-vision technologies as unified software environments, GigE cameras and networks, multiple PLC vision triggers, and distributed multicore processors for vision.
Machine vision improves robotic applications
Machine vision is common on robots — especially those that tend conveyors — and vision-guided robots (VGRs) take many forms. Just consider the Matrox Iris GTR smart camera from Matrox Imaging for integrators, machine builders, and OEMs of factory-automation solutions. The 75 x 75 x 54-mm Iris GTR uses ON Semiconductor PYTHON CMOS image sensors for high readout rates. An Intel Celeron dual-core embedded processor lets it inspect on faster-moving lines or perform more inspections in a given time. An IP67-rated housing and M12 connectors also make the camera dustproof and immersion resistant.

Matrox Imaging Iris GTR cameras come with monochrome and color image sensors with resolutions from VGA to 5 Mpixels. Dedicated interfaces for LED intensity and Varioptic Caspian auto-focus lens control facilitate setup. Eight real-time GPIOs with rotary-encoder support let the camera follow and interact with fast-moving equipment; a Gigabit-Ethernet interface lets the smart camera efficiently output data (including images) over factory networks.
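To make the encoder-following idea concrete, here is a generic sketch of encoder-gated triggering (illustrative logic only, not Matrox’s actual API), where the camera fires each time the conveyor advances a fixed distance. The counts-per-millimeter and trigger-pitch values are hypothetical:

    # Generic sketch of encoder-gated triggering (not a real camera API):
    # fire an acquisition each time the conveyor advances a fixed distance.
    COUNTS_PER_MM = 40        # hypothetical encoder resolution
    TRIGGER_PITCH_MM = 50     # desired spacing between captured frames

    def should_trigger(count: int, last_trigger_count: int) -> bool:
        """True once the encoder has advanced one full trigger pitch."""
        return (count - last_trigger_count) >= TRIGGER_PITCH_MM * COUNTS_PER_MM

    last = 0
    for count in range(0, 10_001, 500):   # simulated encoder counts
        if should_trigger(count, last):
            print(f"trigger frame at {count / COUNTS_PER_MM:.0f} mm")
            last = count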
Robotiq’s new machine-vision camera for Universal Robots is another VGR system. It’s for pick-and-place tasks executed by Universal Robots and can be set up on a shop floor in five minutes. The system needs no external PC for setup or camera programming; everything is done right from the robot’s user interface. Early adopters say Robotiq’s camera opens new automation possibilities.
“The Robotiq Camera breaks integration barriers of vision systems in manufacturing automation,” said Victor Canton, manufacturing engineer at Continental Automotive. His team saw many pick-and-place robot tasks that could benefit from quick programming. “We’ll need this for upcoming projects incorporating Universal Robots cells,” added Canton.
The camera fits on the wrists of Universal Robots and integrates with the robot controller. The camera software embeds into Universal Robots’ graphical user interface.
“We kept seeing projects that would work much better with machine vision, but manufacturers stayed away because of the complexity and cost of existing solutions,” said Samuel Bouchard, CEO of Robotiq. “So we created a camera and vision software that anyone can set up and use.”

Robotiq cameras mount on Universal Robots wrists and connect directly for vision connectivity.
A key area of VGR advancement is machine vision that goes beyond scanning parts lying flat in a 2D plane to handle 3D applications. Chief among these jobs is bin picking. Here, it’s common for 80% of a bin clearing to go smoothly and for the last 20% or so to exceed the tending robot’s capabilities. Parts left in bin corners or at odd angles prove too hard for traditional machine vision to detect. But now, manufacturers are starting to use smarter machine-vision setups to get closer-to-perfect performance.
Better bin picking sometimes necessitates changes in upstream processes — with a switch to pre-arranging parts or using concave bins, for example. VGRs for bin picking must also have grippers that can fit into corners and get parts without hitting walls, with successful grasps even if workpieces have curved surfaces. That’s why end-of-arm tooling models must be part of upfront bin-picking simulations.
In fact, robotic and machine-vision manufacturers have aimed to address many of the motion task’s unique challenges with programming and 3D modeling tools. Dedicated VGRs set up with application-customized algorithms improve bin picking most.
Case in point: Drop-in robotic workcells from Universal Robotics now blend the simplicity of collaborative systems with the speed and capacity of industrial robots. They work for order fulfillment, machine tending, line loading, or even bin picking. The Neocortex Goods to Robot Cell is built on Universal’s Neocortex and Spatial Vision 3D software platform for collaborative interaction at industrial speeds — handling 800 to 1,600 mixed objects (from machined parts to consumer products) per hour.

One machine-vision setup from Skye Automation monitors passing stamped parts for defects. Machine-vision setups work best when they’re customized to the application at hand. Here, diffused white LED light combined with a light-red bandpass filter delivers the highest contrast regardless of metal-color variations.
Or reconsider the Matrox Iris GTR smart camera. It comes preinstalled with Microsoft Windows Embedded Standard 7 or Linux, so developers can pick between two common environments for running vision-application software.
Machine vision checks stamped parts
Stamping presses continue to be an integral part of manufacturing. Although associated processes have undergone continuous improvement, the basic nature of press operations has remained unchanged for decades … despite rising expectations for quality components from stamping machines.
In fact, penalties imposed for sending bad parts can easily reach $50,000, not to mention wasted man-hours to sort through parts returned to the manufacturer.
But defects in stamped metal components are often difficult to detect. Consider slug marks, which often appear on only a few parts, or even just one, and then disappear after the slug embeds itself in another part and gets carried away. Broken die pins are another culprit, creating an abundance of wasted material, lost time, and missed defects during sorting.
In these and similar situations, machine vision can automatically inspect metal components as they come off the press, which in turn can save money and reduce scrap — ensuring a 100% quality-component delivery rate for the end user. But there are challenges to consider when applying machine vision to this task, because a single press can run many different parts with frequent changeovers, varying sizes, inconsistent environments, and varying speeds.
It’s important to understand what vision can and cannot do for a given process within the confines of a design budget — before starting a project. Challenges are met through careful selection of camera, lighting, lens, and filter. Defect size and the precision with which the machine-vision system must detect defects dictate what camera resolution is suitable.
For example, in some stamping applications, a two-megapixel monochrome camera can work well. However, other applications may require higher resolution to accommodate the field of view (FOV) or to adequately resolve the inspection features within the region of interest (ROI).
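As a rough illustration of that sizing exercise, consider the back-of-the-envelope calculation below. The field-of-view, defect-size, and pixels-per-defect numbers are hypothetical placeholders, not values from any application described here:

    # Rough sizing of camera resolution from FOV and minimum defect size.
    # All numbers are hypothetical placeholders; substitute measured values.

    def required_pixels(fov_mm: float, defect_mm: float, px_per_defect: int = 3) -> int:
        """Pixels needed along one axis so the smallest defect spans
        px_per_defect pixels (a common rule of thumb is 3 or more)."""
        return round(fov_mm / defect_mm * px_per_defect)

    h = required_pixels(fov_mm=200.0, defect_mm=0.5)    # 1,200 px across a 200-mm FOV
    v = required_pixels(fov_mm=150.0, defect_mm=0.5)    # 900 px across a 150-mm FOV
    print(f"{h} x {v} px = {h * v / 1e6:.1f} Mpixels")  # about 1.1 Mpixels

By this rough measure, a two-megapixel camera covers the hypothetical 200 x 150-mm field with headroom; a larger field or smaller defects would push the requirement higher.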
Most setups using a monochrome camera pair it with a red LED light with a wavelength of 625 nm or so. However, in machine-vision applications, exceptions and situational anomalies are often the norm. In one real-world application, testing showed that a white LED light with a monochrome camera made for better contrast.
In fact, due to the complex nature of imaging, only testing can confirm that a particular setup is optimized — even with nontraditional lighting and camera arrangements — for the environment and parts being inspected.
Reconsider the nontraditional setup for stamped-part inspection. Here, a diffused white LED light combined with a light-red bandpass filter maximizes transmission in the part of the spectrum that best distinguishes good parts from bad ones. High-intensity light also lets the setup inspect parts in motion, because the camera shutter speed can be set fast enough to eliminate motion blur.
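For a sense of the shutter speeds involved, here is a hypothetical back-of-the-envelope check; the line speed, field of view, and sensor values below are assumptions for illustration, not figures from the application above:

    # Maximum exposure time that keeps motion blur under one pixel.
    # All values are hypothetical; a real setup uses measured line speed and optics.

    line_speed_mm_s = 500.0   # part velocity on the conveyor
    fov_mm = 200.0            # field of view along the direction of travel
    sensor_px = 1200          # pixels along that same axis
    max_blur_px = 1.0         # allowable smear, in pixels

    mm_per_px = fov_mm / sensor_px                      # about 0.167 mm per pixel
    t_max = max_blur_px * mm_per_px / line_speed_mm_s   # exposure limit in seconds
    print(f"Max exposure: {t_max * 1e6:.0f} us (about 1/{1 / t_max:.0f} s)")

At these assumed numbers the exposure must stay near 333 µs (roughly 1/3,000 s), which is why high-intensity lighting matters: shorter exposures need proportionally more light.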
Assume the vision software orients parts to within 0.001° of a reference point as they land randomly on the conveyor. Once a part is oriented, the controls trigger the rest of the quality checks to determine whether there are any problems with it. If defects are found, the system can immediately stop the press or reject the parts. Alternatively, the integrator can program the machine-vision system to log details of each inspection and provide trending data.
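A minimal sketch of that locate-orient-check-decide flow appears below. The OpenCV calls are real, but the thresholds, image source, and pass criteria are placeholders; production systems reach the angular precision cited above with subpixel pattern-matching tools rather than the simple contour fit shown here:

    import cv2
    import numpy as np

    def inspect(frame: np.ndarray) -> dict:
        """Locate the part, estimate its orientation, and run one simple check.
        Thresholds and pass criteria are illustrative placeholders."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return {"found": False, "ok": False}
        part = max(contours, key=cv2.contourArea)          # largest blob = the part
        (_, _), (_, _), angle = cv2.minAreaRect(part)      # coarse orientation estimate
        area_ok = 9_000 < cv2.contourArea(part) < 11_000   # placeholder tolerance band
        return {"found": True, "angle": angle, "ok": area_ok}

    result = inspect(cv2.imread("stamped_part.png"))       # placeholder image path
    if result["found"] and not result["ok"]:
        print("reject part or stop press")                 # hook a PLC output here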
Note that as quality requirements become more stringent, so do the requirements for automatic visual inspection — but advances in hardware and software are taking vision-system capabilities to new heights.
Thank you to Skye Gorter, President of Skye Automation, for information on machine vision in stamped-part inspection.