Integration of a vision camera with SD 35



Integration of SICK Inspector with SD35




Vision and machine learning

Vision Systems: Think of vision systems as the eyes of a robot. They rely on cameras and sensors to capture and interpret visual data from the surroundings. This enables robots to "see" and understand objects, shapes, and spatial relationships. Vision systems play a crucial role in tasks such as object detection, navigation, and quality inspection.

Machine Learning: Now, imagine if these robots could not only see but also learn from their visual data. That's where machine learning comes in. Machine learning algorithms enable robots to analyze large amounts of data, identify patterns, and make intelligent decisions based on their observations. This allows robots to adapt to changing environments, optimize performance, and even learn new tasks over time.



Preface

Here you will see an integration of a vision camera with the SD 35, which sits on a UR robot. A SICK Inspector PIM 60 has been used for this, with the aim of detecting a screw with the camera and having the screw tool run down and either unscrew it or screw it in, depending on the desired purpose.


In this description, you will see an example of how to use the camera to detect a screw and then unscrew it.


The software used for the camera is the SOPAS Engineering Tool (SOPAS ET), which can be downloaded from SICK's website.


There is an associated URCap, which can also be downloaded from SICK's website. The URCap is needed, for example, to calibrate the camera and to receive the positions of the detected objects.



Setup


Figure 1: Setting up the camera on the SD35


The camera is set up as shown in the picture.


The lower screws on the screw tool are unscrewed, and the camera can then be placed parallel to the screw tool.


The screws that were already in place were replaced with longer screws of the same type, so that they reach the thread of the screw tool and the camera can be firmly fixed.


Network

A connection must be established between all the devices listed below. The network ID of each device's IP address must therefore be the same, while the host ID must be unique to each device.

All devices must also use the same subnet mask.


  • SpinBridge

  • Robot

  • Inspector PIM 60

  • PC


Subnet Mask:   255.255.255.0

SpinBridge:    192.168.37.1

Robot:         192.168.37.2

Inspector:     192.168.37.3

PC:            192.168.37.4
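
As a quick check of this network plan, you can verify from the PC that each device answers. The following is a minimal Python sketch (run from the PC; it simply pings the addresses listed above):

    import platform
    import subprocess

    # IP addresses from the network plan above
    devices = {
        "SpinBridge": "192.168.37.1",
        "Robot": "192.168.37.2",
        "Inspector PIM 60": "192.168.37.3",
    }

    # 'ping' takes '-n' on Windows and '-c' on Linux/macOS for the packet count
    count_flag = "-n" if platform.system() == "Windows" else "-c"

    for name, ip in devices.items():
        result = subprocess.run(["ping", count_flag, "1", ip], capture_output=True)
        status = "reachable" if result.returncode == 0 else "NOT reachable"
        print(f"{name} ({ip}): {status}")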


To change the IP address of the Inspector, press the menu with the three dots and then press 'Change IP-address…', as shown in Figure 2.

If the Inspector does not appear, press 'Search devices' and make sure that the Inspector is connected to a power source and that the Ethernet cable is connected to the computer.

Figure 2: Change IP address in SOPAS ET



It is important that the network ID of the Inspector and that of the PC are the same in order to find the Inspector in SOPAS ET. Figure 3 shows the IPv4 address of the PC.


Figure 3: IPv4 address of the PC



Figure 4: IP address on the UR robot


Figure 5: IP address of the SpinBridge



When there is a connection between the PC and the Inspector, it will look as shown in Figure 6.

Then double-click on the Inspector icon to access the functions shown in Figure 7.


Figure 6: Connection between PC and Inspector created


Calibration

In order to detect objects reliably, the camera must be calibrated using a checkerboard. The SOPAS ET software is used for the calibration.

SOPAS ET must be in edit mode to access the functions in the software.


It is recommended to place the checkerboard on top of the object to be detected.

 

The SICK URCap also has a calibration function, which you find in PolyScope on the UR robot:

Installation → URCaps → SICK Inspector


In SOPAS ET you must enter the size of the checkerboard squares, and the same applies in the SICK URCap. In the URCap, however, you must afterwards place the tool center point (TCP) on the points A, B, C and D on the checkerboard.
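
Touching A, B, C and D with the TCP gives the URCap corresponding points in the camera plane and in the robot's coordinate plane, from which the transformation between the two can be computed. The sketch below illustrates that kind of calculation in Python; it is not the URCap's actual implementation, and the point coordinates are made-up example values for a purely planar fit:

    import numpy as np

    # Corresponding checkerboard points A, B, C, D: as reported by the camera
    # (camera plane, mm) and as touched with the robot TCP (robot plane, mm).
    # The numbers are illustrative only.
    cam = np.array([[0.0, 0.0], [60.0, 0.0], [60.0, 40.0], [0.0, 40.0]])
    rob = np.array([[412.0, -105.0], [412.0, -45.0], [372.0, -45.0], [372.0, -105.0]])

    # Rigid 2D fit (rotation + translation) using the Kabsch method
    cam_c, rob_c = cam - cam.mean(axis=0), rob - rob.mean(axis=0)
    U, _, Vt = np.linalg.svd(cam_c.T @ rob_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = rob.mean(axis=0) - R @ cam.mean(axis=0)

    # A point detected by the camera can now be mapped to robot coordinates:
    screw_cam = np.array([30.0, 20.0])
    print("Screw in robot coordinates:", R @ screw_cam + t)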


Figure 7: Checkerboard for calibration



Object detection

Make sure that the camera is kept at the same height as it was calibrated at, and use the Object Locator to draw a contour around the entire object.


The contour around the object will follow the movement of the object as long as the entire area is within the camera's field of view.


The Pattern function is then used to locate the screw, and the coordinates of the position above the screw can be sent to the robot via the Ethernet result output.


Figure 8: Screw area located by the 'Object Locator', and the screw located with the 'Pattern' function

In the top left corner, you can then press 'Inspector PIM 60 ("NoName")' → Ethernet result output. An XML code is then inserted, which sends the coordinates to the robot:

Insert tags → Pattern → Decision, XYZ, 0.

The arrangement must be the same as in Figure 9.

Figure 9: Ethernet result output. Here, the coordinates of the screw are sent to the robot
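
The result string produced by this configuration can also be read on the PC, which is useful for troubleshooting. Below is a minimal Python sketch; the port number and the field order are assumptions and must match what you have configured under Ethernet result output in SOPAS ET:

    import socket

    INSPECTOR_IP = "192.168.37.3"   # IP address of the Inspector from the network plan
    RESULT_PORT = 2114              # assumption: use the port set under Ethernet result output

    # Connect to the Inspector's result output and read one result string
    with socket.create_connection((INSPECTOR_IP, RESULT_PORT), timeout=5) as sock:
        data = sock.recv(1024).decode("ascii", errors="replace").strip()
        print("Raw result:", data)

        # Assumption: the formatting string was set up as 'Decision,X,Y,Rotation'
        decision, x, y, rot = data.split(",")[:4]
        print(f"Decision={decision}  X={x}  Y={y}  Rotation={rot}")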


UR-Robot

On the robot's PolyScope, press the Program tab, select URCaps and then press SICK Inspector to use the node 'MoveToCamPos'. This node contains what you need to detect the object and make the robot drive down to the detected position, as long as the Ethernet result output is in the correct format. See Figure 9.



Figure 10: The MoveToCamPos node used to control the robot


In Figure 8 you can see the object positioned correctly and within the camera's field of view. If this is the case, you can receive the position above the object by pressing 'Get pos'.

Then place the screw tool's TCP on the screw and press 'Set pose'; this only needs to be done once. See Figure 11.



Figure 11: The contents of MoveToCamPos; the object position is displayed when the object is correctly within the camera's range.


The program will then calculate a pick point offset and rotation offset.
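
The principle behind these offsets can be illustrated as follows. This is only a sketch, not the URCap's implementation; the poses are made-up 2D example values (x and y in mm, rotation in degrees):

    import math

    # Reference detection stored with 'Get pos' and 'Set pose' (illustrative values only)
    ref_cam = (392.0, -75.0, 0.0)    # object pose reported by the camera: x, y, rotation
    ref_tcp = (395.0, -78.0, 90.0)   # TCP pose taught on the screw with 'Set pose'

    # Pick point offset and rotation offset between camera pose and taught TCP pose
    off_x = ref_tcp[0] - ref_cam[0]
    off_y = ref_tcp[1] - ref_cam[1]
    off_rot = ref_tcp[2] - ref_cam[2]

    # A new detection of the same object at another position and orientation
    new_cam = (410.0, -40.0, 30.0)

    # Rotate the pick point offset by how much the object has turned, then apply it
    delta = math.radians(new_cam[2] - ref_cam[2])
    target_x = new_cam[0] + off_x * math.cos(delta) - off_y * math.sin(delta)
    target_y = new_cam[1] + off_x * math.sin(delta) + off_y * math.cos(delta)
    target_rot = new_cam[2] + off_rot

    print(f"New screw target: x={target_x:.1f} mm, y={target_y:.1f} mm, rot={target_rot:.1f} deg")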


Then start your program and let the robot drive down to the position it receives from the camera.


To test whether the detection works correctly, it is a good idea to avoid running the program in a loop; instead, move the object to another position within the camera's range and run the program again.
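
While testing, it can also help to watch the detections on the PC as you move the object around. The sketch below extends the earlier result-output example and makes the same assumptions about the port and the string format:

    import socket

    INSPECTOR_IP = "192.168.37.3"
    RESULT_PORT = 2114   # assumption: the port set under Ethernet result output

    # Print each new result string while the object is moved within the camera's range
    with socket.create_connection((INSPECTOR_IP, RESULT_PORT)) as sock:
        while True:
            data = sock.recv(1024)
            if not data:
                break    # connection closed by the Inspector
            print(data.decode("ascii", errors="replace").strip())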



Robot Program

The robot program used to unscrew the screw in the screw area in Figure 8 is built as shown in Figure 12.


Nodes 7 to 9 are used to make the bit on the screw tool turn a little, so that it can seat properly on the screw and obtain the correct orientation.


Drive Screw is then used in node 11 in order to run a screw program that is suitable for the screw to be unscrewed.


However, a spin target variable must also be used in the Drive Screw node and must be set to get_actual_tcp. This way, when the camera detects the screw and the robot drives down to it, the robot will subsequently also execute the program set on the Drive Screw at that position. This means that teaching will not be necessary, as vision is now used to detect the screw.
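
The overall logic of this vision-guided program can be summed up as in the sketch below. It is written in plain Python with hypothetical placeholder functions (get_camera_target, move_to, get_actual_tcp_pose and drive_screw are not the URCap's API); the point is only to show why no taught screw position is needed:

    def vision_guided_unscrew(get_camera_target, move_to, get_actual_tcp_pose, drive_screw):
        """Sketch of the logic in Figure 12, not the actual URCap API."""
        target = get_camera_target()         # MoveToCamPos: pose reported by the Inspector
        move_to(target)                      # robot drives down to the detected screw
        spin_target = get_actual_tcp_pose()  # 'get_actual_tcp': the pose the robot is at now
        drive_screw(spin_target)             # Drive Screw runs its screw program at that pose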


Figure 12: Robot Program to control unscrewing

