
3D / Robotics

Machine vision for 3D Inline inspection

JLI 3D/Robotics systems are solutions for three-dimensional measurement, position estimation of objects, bin picking and sampling. The systems cover a wide range of production processes. All equipment is designed for the relevant environment.

Inline 3D inspection of bricks

General features/benefits of our 3D/Robotics systems include:

  • Custom made
  • Adapted to existing production lines or robot manufacturer
  • Internet connection for support

What is 2D Inspection?

In order to understand 3D machine vision technology, we first need the basics of 2D imaging.

In a pure 2D machine vision system, the goal is to acquire an image for processing. The image is effectively a flat, two-dimensional plane view. The 2D image does not provide any height information at all – there is X and Y data, but no Z-axis depth data. What we see is effectively the contour/profile of a 3D object viewed from a specific viewpoint. Different viewpoints and different objects create entirely different contours, making 2D challenging or of limited use in applications where depth information is critical to performing a task.

A range of different cameras is available for obtaining the 2D image:

  • Matrix
  • Linescan
  • Spectral (UV-IR)

Illumination plays an important role in maximizing the contrast in a 2D image, and mastering it is an integral part of designing a system. Numerous different types are available:

  • Backlight
  • Dome
  • Dark field
  • Collimated
  • Strobed
  • Line

Together with the appropriate software, typical inspections such as the following can be carried out (an example sketch is shown after the list):

  • Presence check
  • Dimensional gauging
  • Print OCR/OCV
  • Barcodes
  • Surface quality
  • Color
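
As a minimal sketch of how a presence check combined with dimensional gauging could look in software, here is an example using OpenCV on a backlit image; the file name and the pixel-to-millimetre scale are assumed placeholders, not values from a real system.

```python
import cv2

# Assumed calibration: millimetres per pixel, obtained from a reference target.
MM_PER_PIXEL = 0.05

# Load a backlit image where the part appears dark on a bright background.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Threshold to separate the part from the background.
_, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Find the outer contour of the part.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

if not contours:
    print("Presence check failed: no part found")
else:
    part = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(part)
    print(f"Presence check passed, width = {w * MM_PER_PIXEL:.2f} mm, "
          f"height = {h * MM_PER_PIXEL:.2f} mm")
```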

What is 3D Inspection?

In applications where a profile/contour of the object does not provide the right kind of information, 3D technology can be helpful. With 3D you capture the shape of an object and gain an extra dimension - depth information. With 3D, you can measure geometric features on an object's surface regardless of material or color.

One of the advantages of 3D in some setups is that it is contrast invariant and therefore ideal for inspecting low-contrast objects. It is also more robust against minor lighting variation and ambient light, and measurements are highly repeatable because the components often come in one package with camera, optics, lighting, and pre-calibration.

3D offers the possibility of:

  • Detecting surface height defects
  • Volumetric measurement (X, Y, and Z-axis), providing shape- and position-related parameters (see the sketch after this list)
  • Helping robots pick up objects randomly located in a bin – automating factory processes
  • Guiding and moving robots around in the world autonomously
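
As a minimal illustration of a volumetric measurement, the sketch below sums a height map into a volume; the pixel size and the synthetic dome-shaped object are assumptions for the example only.

```python
import numpy as np

# Assumed calibration: lateral size of one pixel in millimetres (X and Y).
PIXEL_SIZE_MM = 0.5

# Height map from a 3D sensor: one Z value (mm above the conveyor) per pixel.
# A synthetic dome-shaped object is used here as a stand-in for real data.
y, x = np.mgrid[-50:50, -50:50]
height_map = np.clip(20.0 - 0.01 * (x**2 + y**2), 0.0, None)

# Volume = sum of column heights times the footprint of one pixel.
volume_mm3 = height_map.sum() * PIXEL_SIZE_MM**2
print(f"Estimated volume: {volume_mm3 / 1000:.1f} cm^3")
```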

How do I pick the right 3D Inspection solution?

Obtaining 3D data can be somewhat of a jungle. There are numerous techniques available, each with its advantages and disadvantages.

The most prominent within machine vision are currently:

  • Laser triangulation
  • Stereo vision
  • Fringe projection / light stripe / structured light
  • Time of flight

In the following we will try to provide a brief overview of these.

What is Laserline Triangulation?

Laser triangulation is a machine vision technique used to capture 3-dimensional measurements by pairing a laser illumination source with a camera. The laser can be a point or, more commonly, a line.

A laserline triangulation system works by measuring the position of the reflected laserline in the camera's field of view at a preset angle. When an object moves up or down, the position changes, and this displacement can be correlated to a height. Scanning the laserline continuously during motion can provide a height map of an object.
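
A minimal sketch of the height calculation, assuming a simplified geometry where the camera looks straight down and the laser line is projected at a known angle from the vertical; the angle, calibration and pixel values below are illustrative only.

```python
import numpy as np

# Assumed geometry: the camera looks straight down at the conveyor and the
# laser line is projected at LASER_ANGLE_DEG from the vertical. When the
# surface rises by h, the line shifts sideways by h * tan(angle), so
# h = shift / tan(angle).
LASER_ANGLE_DEG = 30.0
MM_PER_PIXEL = 0.1          # assumed lateral calibration of the camera
BASELINE_COLUMN = 1200.0    # line position (pixels) on the empty conveyor

def height_from_line_position(line_column_px: float) -> float:
    """Convert the detected laser-line position (pixels) to a height in mm."""
    shift_mm = (line_column_px - BASELINE_COLUMN) * MM_PER_PIXEL
    return shift_mm / np.tan(np.radians(LASER_ANGLE_DEG))

# Example: the line is detected 250 pixels away from its baseline position.
print(f"Height: {height_from_line_position(1450.0):.2f} mm")
```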

Laserline triangulation is currently one of the most common 3D technologies and is often used for:

  • Scanning the volume of products, often used in the food industry
  • Quality control, measuring dimensions
  • Assembly pick-and-place, making a robot grab an object with the correct orientation

Pros:

  • Works on moving objects
  • It is very accurate (typically microns or mm)
  • The technology is rather material invariant
  • Works at high speed

Cons:

  • The FoV is dedicated to a small range
  • Requires an encoder to work inline

What is Stereo Vision?

Stereo vision works by imitating human/animal vision using binocular disparity. Basically, our eyes see things from two slightly different viewpoints through our left and right eye.

In the simplified illustration, two points C and D are seen from two cameras placed a small distance apart (typically 6-10 cm). In camera A the first point is C followed by D, and vice versa for camera B. Notice that the points in the right image (camera B) are shifted to the left and that the shift/parallax is smaller for distant points. By simple math/triangulation, the distance to an object can be calculated when the distance between the cameras is known along with the focal length and the disparity/parallax.
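
A minimal sketch of that triangulation relation; the focal length and baseline below are illustrative numbers, not a real calibration.

```python
# Stereo triangulation: depth = f * B / disparity, where f is the focal length
# expressed in pixels, B the baseline between the cameras, and disparity the
# horizontal shift of the same point between the two images.
FOCAL_LENGTH_PX = 1400.0   # assumed focal length in pixels
BASELINE_M = 0.08          # assumed 8 cm between the two cameras

def depth_from_disparity(disparity_px: float) -> float:
    """Distance (metres) to a point with the given disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("Disparity must be positive for a point in front of the cameras")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# Smaller disparity means a more distant point, as described above.
print(depth_from_disparity(56.0))   # 2.0 m
print(depth_from_disparity(112.0))  # 1.0 m
```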

One of the challenges in stereo vision is matching points in both cameras in order to calculate the disparity/parallax. This, together with calibration errors, often leads to less accurate measurements.

Pros:

  • Rather cheap components
  • Small form factor
  • The FoV can be anything from small to very large
  • The technology is somewhat material invariant

Cons:

  • It is not very accurate (typically 1mm-1cm)

What is Fringe Projection?

Fringe projection is based on triangulation in a similar way to laser profiling; however, the whole surface of the sample is acquired at once by projecting a stripe pattern onto the surface and recording the resulting image with a camera perpendicular to the surface. Using at least three different stripe patterns (often many more) recorded in a fast sequence, the exact lateral stripe displacements can be determined for all points on the surface. It is an advanced version of laser triangulation.
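
A minimal sketch of the phase-shifting analysis behind fringe projection, assuming the common sinusoidal fringe model with N equally shifted patterns; converting the recovered phase to height additionally requires unwrapping and a triangulation calibration, which are omitted here.

```python
import numpy as np

# Assumed model per pixel: I_k = A + B * cos(phase + 2*pi*k/N).
# The wrapped phase of the projected pattern is recovered from N recordings.
def wrapped_phase(images: np.ndarray) -> np.ndarray:
    """images: array of shape (N, H, W) with the N phase-shifted recordings."""
    n = images.shape[0]
    shifts = 2 * np.pi * np.arange(n) / n
    num = -(images * np.sin(shifts)[:, None, None]).sum(axis=0)
    den = (images * np.cos(shifts)[:, None, None]).sum(axis=0)
    return np.arctan2(num, den)   # wrapped to (-pi, pi]; unwrapping comes next

# Synthetic check with a known phase ramp across a 1 x 200 "image".
true_phase = np.linspace(-1.0, 1.0, 200).reshape(1, 200)
frames = np.stack([5.0 + 2.0 * np.cos(true_phase + 2 * np.pi * k / 4) for k in range(4)])
print(np.allclose(wrapped_phase(frames), true_phase, atol=1e-6))  # True
```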

One of the advantages is that different wavelengths can be applied to suit the surface material of the objects.

Some disadvantages are that it only works on stationary objects and may require a processing time of several seconds depending on the number of patterns projected. The number of patterns is also directly related to the accuracy. More cameras can be added to deal with occlusions and external light noise/stray light.

With fringe projection, the measurement area can be scaled over a wide range, from less than a millimetre up to more than one metre, whereby the resolution/accuracy scales accordingly. The method suits small samples, thanks to its high resolution and accuracy for precise detail measurements, as well as large areas.

Pros:

  • It is very accurate (typically microns on a small FoV of a few cm)

Cons:

  • Slow scanning speed: just below 1s to several seconds
  • The FoV is dedicated to a small range
  • Does not work on moving objects

What is Time of Flight (ToF)?

The Time-of-Flight principle (ToF) is a method for measuring the distance between a sensor and an object, based on the time difference between the emission of a signal and its return to the sensor, after being reflected by an object. Various types of signals (also called carriers) can be used with the Time-of-Flight principle, the most common being sound and light. ToF sensors use light as their carrier because it is uniquely able to combine higher speed, longer range, lower weight, and eye-safety. By using infrared light we can ensure less signal disturbance and easier distinction from natural ambient light, resulting in the highest performing distance sensors for their given size and weight.
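
A minimal sketch of the direct time-of-flight relation, where the measured round-trip time of the light pulse is converted to a distance:

```python
# The emitted light pulse travels to the object and back, so
# distance = speed_of_light * round_trip_time / 2.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def distance_from_round_trip(seconds: float) -> float:
    """Distance (metres) for a measured round-trip time of the light pulse."""
    return SPEED_OF_LIGHT * seconds / 2

# A target about 5 m away gives a round trip of roughly 33 nanoseconds.
print(f"{distance_from_round_trip(33.36e-9):.2f} m")
```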

Pros:

  • Rather cheap components
  • Small form factor
  • The FoV can be anything from small to very large (typically 0-13m)
  • The technology is material invariant

Cons:

  • It is not very accurate (typically +/-1 cm)

Which requirements should you consider?

Let's conclude the 3D overview with a technology comparison. Here are some of the most important requirements that you as a user should consider:

Accuracy, speed, FoV, material invariance, moving parts, and form factor. You could also consider robustness against ambient light.

To put it short (these rules of thumb are sketched in code after the list):

  1. If the part is moving -> use laser triangulation.
  2. If the part is stationary and you require very accurate measurements -> use fringe projection.
  3. If the part is moving and you need a large field of view -> use either stereo or ToF.
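
A small helper like the sketch below could capture these rules of thumb; a real selection would of course also weigh accuracy, FoV, material and ambient light.

```python
# Rough decision helper mirroring the three rules above; illustrative only.
def suggest_3d_technology(moving: bool, needs_high_accuracy: bool, large_fov: bool) -> str:
    if moving and large_fov:
        return "stereo vision or time of flight"
    if moving:
        return "laser triangulation"
    if needs_high_accuracy:
        return "fringe projection"
    return "any of the above, depending on FoV and budget"

print(suggest_3d_technology(moving=True, needs_high_accuracy=False, large_fov=False))
```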

What are the Challenges?

As the comparison above shows, there are no options for measuring objects in motion from multiple angles. Most of the technologies only provide depth information based on camera views from one angle.

If we take the laser triangulation principle, which is the most suitable for moving objects, we have a few different setup options. Some are more prone to occlusions. Specifically, whenever the camera views the object from anything other than an angle perpendicular to the inspection surface, there will be some parts of the line which are blocked or occluded from the camera's view, since no object is completely flat. This creates an inherent design tradeoff because, while measurement height resolution increases for the standard geometry as the camera angle increases, so does the potential for occlusion.
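
A rough way to see this tradeoff, assuming the standard geometry where the laser sheet is vertical and the camera is tilted by an angle from the vertical; the pixel size is an assumed example value.

```python
import numpy as np

# In this geometry a height change dh moves the line by roughly dh * sin(theta)
# in the image, so the height resolved by one pixel is pixel_size / sin(theta).
# Larger angles resolve height better, but they also view the surface more
# obliquely, so shadowing/occlusion behind steep features increases as well.
PIXEL_SIZE_MM = 0.05  # assumed object-space size of one camera pixel

for angle_deg in (15, 30, 45, 60):
    res = PIXEL_SIZE_MM / np.sin(np.radians(angle_deg))
    print(f"camera angle {angle_deg:>2} deg -> ~{res:.3f} mm height per pixel")
```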

What we can do is arrange more sensors in a network. As the illustration shows, if we wanted to inspect a cylindrical object like a log, we could position four sensors evenly distributed around the object, each covering approximately ¼ of the log. The sensors' positions need to be calibrated together in order to present a unified view of the log and make it possible to measure e.g. a diameter.
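
A minimal sketch of how scans from calibrated sensors could be merged into one point cloud; the rigid transforms and points below are placeholders, not a real calibration.

```python
import numpy as np

# Each sensor delivers points in its own coordinate frame; the calibration
# provides a rigid transform (rotation R, translation t) into a common frame.
def to_common_frame(points: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points: (N, 3) array in the sensor frame -> (N, 3) in the common frame."""
    return points @ R.T + t

# Two sensors looking at the same log from opposite sides (illustrative data).
sensor_a_points = np.array([[0.0, 0.0, 0.10], [0.0, 0.1, 0.12]])
sensor_b_points = np.array([[0.0, 0.0, 0.11], [0.0, 0.1, 0.09]])

R_a, t_a = np.eye(3), np.array([0.0, 0.0, 0.0])
R_b = np.array([[-1.0, 0.0, 0.0],            # sensor B rotated 180 deg about Y
                [0.0, 1.0, 0.0],
                [0.0, 0.0, -1.0]])
t_b = np.array([0.0, 0.0, 0.30])

merged = np.vstack([
    to_common_frame(sensor_a_points, R_a, t_a),
    to_common_frame(sensor_b_points, R_b, t_b),
])
print(merged.shape)  # (4, 3) points, now in one coordinate system
```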

What we do not have an inline solution for is obtaining a full view of e.g. a five-sided object like a brick. The front and back suddenly become rather challenging. This can be done offline, or on a static object, using fringe projection mounted on a robot.

JLI is now introducing a method that can perform inline 3D on the fly and makes it possible to measure e.g. height, width, length, angles, patterns and deviations.

So how is this done?

With the assumption that the objects pass on a conveyor belt and that we typically need to cover 5 sides, we have found a two-sensor solution in a setup as shown in the illustration.

This is an alternative to mounting many scanners and simplifies configurations. The accuracy is related to the configuration:

  • At least 2 angles are used, often in a diagonal
  • Enables inspection of 5 sides of objects moving on a conveyor belt at speeds of up to 25 m/min (see the calculation example after this list)
  • The achievable accuracy depends on the accuracy of the encoder on the conveyor belt; as a rule of thumb, approximately 1 mm on objects up to 500 mm
  • The footprint of the portal for an inspection FoV of 500 x 500 x 500 mm is 1.25 x 1.25 x 1.25 m
  • A smaller FoV gives a smaller portal footprint
  • The portal can be combined with standard 2D inspections
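
As a back-of-the-envelope check of the along-travel sampling at 25 m/min, the sketch below assumes an example scanner profile rate; the value is not a JLI specification.

```python
# At a given conveyor speed and scanner profile rate, consecutive laser-line
# profiles are spaced speed / rate apart along the direction of travel.
CONVEYOR_SPEED_M_PER_MIN = 25.0
PROFILE_RATE_HZ = 1000.0  # assumed scanner profile rate

speed_mm_per_s = CONVEYOR_SPEED_M_PER_MIN * 1000.0 / 60.0
profile_spacing_mm = speed_mm_per_s / PROFILE_RATE_HZ
print(f"Profile spacing along travel: {profile_spacing_mm:.3f} mm")  # ~0.417 mm
```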

What to be aware of in an inline solution?

  • Vibrations
  • Temperature changes
  • Conveyor speed
  • Movement on the conveyor
  • Dirt
  • Occlusions

This solution is ideal for:

  • 100% inline measurements – the only realistic alternative is sampling

What does the future bring?

3D imaging is a steadily growing technology. The availability of sensors has exploded over the past few years, and components are becoming more sophisticated and targeted at specific applications such as robot guidance, high-speed scanning of volumes and precise 3D modelling. Most applications require accurate and reliable 3D images. There has been a proliferation of 3D imaging technologies, like triangulation, stereo vision, time of flight and structured light. Structured light with stereo vision has begun to penetrate the 3D segment and shows great promise. At JLI we focus on providing advanced systems for inline 3D gauging of moving objects. In 2020 we expect to release the first version of our inline 3D portal, capable of comparing recorded point clouds with their original CAD model. It may look similar to the illustration.
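
A minimal sketch of how a recorded point cloud could be compared with a reference model, assuming the CAD surface has been sampled into reference points and the scan is already registered to it; the data below is synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

# Deviation per scanned point = distance to the nearest reference point.
reference_points = np.random.default_rng(0).uniform(0, 100, size=(5000, 3))  # stand-in for a sampled CAD surface
scanned_points = reference_points[:200] + np.random.default_rng(1).normal(0, 0.2, size=(200, 3))

tree = cKDTree(reference_points)
deviations, _ = tree.query(scanned_points)

print(f"mean deviation {deviations.mean():.3f} mm, max {deviations.max():.3f} mm")
```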

One of the promising new technologies is 3D line confocal sensors. The key advantage of line confocal sensors is their ability to generate 3D tomography (multi-layer 3D geometry and 2D intensity) and 2D intensity data simultaneously using an ultra-fast camera. Confocal scanning avoids unwanted reflections from shiny metal surfaces, and the off-axis arrangement is what permits multi-layer (tomography) scanning.

  • Ability to scan and measure materials with any color combination
  • Ability to measure all surface types including mirror-like, glossy, transparent, translucent, curved, convex, concave, soft, fragile, or porous
  • Metrology-grade resolution and accuracy at high speed

Line confocal sensors are ideal for transparent material inspection and quality control. One of the key applications is capturing the surface of mobile device displays and detecting layers inside and under the screen. The combination of 3D tomography (multi-layer) and 2D intensity imaging can be used to identify defects such as delamination, scratches, or dust on the surface or inside of laminated glass, mobile phone displays, or any other type of transparent multilayered material like sealed medical packages. And, contrary to other types of imaging systems, line confocal sensors detect not only the location of the defects, they also identify in which layer the defect exists. The sensor even measures the dimensions of the defect down to the sub-micron level.
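
As an illustrative sketch of the layer-detection idea, the example below locates intensity peaks in a synthetic confocal depth profile; real sensor data and processing will differ.

```python
import numpy as np
from scipy.signal import find_peaks

# Each interface in a transparent stack returns an intensity peak at its depth.
# The synthetic profile below stands in for a real single-point depth profile.
depth_um = np.linspace(0, 500, 2000)
profile = (np.exp(-((depth_um - 50) / 3) ** 2)            # top surface
           + 0.6 * np.exp(-((depth_um - 250) / 3) ** 2)   # internal lamination
           + 0.4 * np.exp(-((depth_um - 400) / 3) ** 2))  # bottom surface

peaks, _ = find_peaks(profile, height=0.2)
print("Layer interfaces at depths (um):", np.round(depth_um[peaks], 1))
```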