How to Program an Omron TM Robot to Do a Camera Assisted Pick and Place

Hello and welcome. Today we're going to expand on a video that we did earlier for a simple pick and place, and we're going to add the camera to do a camera-assisted pick and place. My name is Ray Marquiss, senior application engineer at Valin. This is the program we made in the last video, and it just picks up the part, but it has to know where the part is. In other words, the part has to be in a tray or some other known position. Then the robot is going to go to that position and just go down and pick it up, whether there's a part there or not.

It can't see the part, and it's not looking for it. Then it's going to place it. And if we just let it go, it's going to go back there — you won't see it in this video, but it would just go down and try to pick up a part that's not there. So this is the program that we ended up with in the last video. This is the one we're going to modify; we'll just go over it again. You have the start block, which doesn't really do anything. Then you have an initial position to move to, just to get the robot out of the way. We'll initialize the gripper, which just opens and closes it. Once that's done, it will go on to this node, which moves the robot to a position above the part.
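The blind cycle described above — move above the part, drop down, close the gripper, lift, move to the drop-off, open, and loop — can be sketched in code. The `Robot` and `Gripper` classes and the taught positions below are hypothetical stand-ins for TM Flow nodes, not a real Omron API:

```python
# Hypothetical sketch of the blind pick-and-place loop built in TM Flow.
# Robot/Gripper stand in for TM Flow motion and gripper nodes; the taught
# positions are example values, not real coordinates.

ABOVE_PART = (300, 0, 150)   # (x, y, z in mm), taught by jogging the robot
AT_PART    = (300, 0, 20)
DROP_OFF   = (100, 200, 150)

class Robot:
    def __init__(self):
        self.log = []
    def move_to(self, pose):
        self.log.append(("move", pose))

class Gripper:
    def __init__(self):
        self.closed = False
    def open(self):
        self.closed = False
    def close(self):
        self.closed = True

def pick_and_place_cycle(robot, gripper):
    """One pass of the loop: above part -> down -> grip -> up -> drop off -> release."""
    robot.move_to(ABOVE_PART)   # "above part" node
    robot.move_to(AT_PART)      # drop straight down
    gripper.close()             # grab — whether a part is there or not
    robot.move_to(ABOVE_PART)   # back up, same position as before
    robot.move_to(DROP_OFF)     # move over to the drop-off
    gripper.open()              # release

robot, gripper = Robot(), Gripper()
gripper.open()                  # "initialize gripper" step
pick_and_place_cycle(robot, gripper)
```

The key limitation is visible in the code: `AT_PART` is a fixed taught position, so the cycle grips blindly at the same spot every time — which is exactly what the vision node will fix.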

And then we'll just drop straight down to pick up the part, or to the position where we can pick up the part. Then we'll actually close the gripper to grab it. We'll move up to be above the part again, same position as before, and then we'll move over to a drop-off and open the gripper. This line on the bottom here loops back up to the point where we go above the part, so we'll just keep cycling through that as long as there is something to pick up.

Now we're going to see how we modify the program to do this vision-assisted pickup. We're going to click on the node that says Above Part; that's where the robot is going to be above the part. We're going to click on the X to delete the connection to the next node so that there's no connection between them, then go over to the toolbox and grab the Vision node and drop it there. It gets connected to the node above automatically, and we'll finish by taking the pass output to the node below so the flow continues. We'll click on the pencil to go in and edit, then click on Vision Job — that little arrow there. When you get here there will be nothing in here if you're doing this from scratch, but I had been playing around, so I'm going to delete that job just to start over. Then I'm going to add a vision job and give it a name, whatever you like with no spaces, and say OK. That brings up the vision tools. We're not going to start from the very beginning: we've already done a calibration in another video, so you should have that calibration saved, or you should know how to do that already. We're just going to start from the task designer. When I click on that, I'm going to select Fixed Point, pull that down, and here are the calibrations, or workspaces, that we did earlier in our previous video. I'm going to use one that I've already finished.
I know it works and it's that one there, so I'm just going to say load it. That loads the calibration and some other factors. Now the robot wants to be moved into the position for doing the inspection, which is loaded in that workspace, so I'll hold the plus key on the pendant for the robot to move. When it's done, that image will go away and we'll just be left with the live video here. You can tell it's very dark; I can't see anything in there. So what we're going to do is go to this Initiate block, double click on that, and the first thing I want to do is scroll down.

I want to drag that bar down until I see Lighting, because I want to enable the built-in light on the camera. So just click on that, and it looks a lot different now because the light came on. It looks pretty good, but we can fool around with some settings to make it a little better. I'll move it just a little bit, because right out from under the center of the light is not the perfect position. Then click on the Adjust Parameters button.

Here you can adjust the light. If I drag that slider to the right, it's going to get brighter — that's way too bright. If I move it all the way to the left, it's going to get quite dark. You can drag it like that, or you can click on the Auto button, and the Auto button will try to automatically figure out what the light level should be. It takes a few seconds, so we'll wait for it to complete here. You can talk amongst yourselves. And I don't like the way that looks. It came up with that number there, 21,000, and I don't like the way it looks because that tip is really bright, so I'm just going to put in about what I think it should be. Maybe leave it there. That's probably good enough for this demo, but you would adjust your lighting to get the best performance.

Sometimes the background there, the white, isn't quite white, and you can adjust the white balance by moving these three sliders — like if it's greenish or reddish — but you can push Auto once on that too. I like to do that with nothing in the field of view and a white background. It didn't change anything in this case, but I have seen it change things significantly. Put the part back; it doesn't really matter where, as long as it's in the field of view. And then we're going to update. Yours might say Save here; mine says Update because I'm fooling around with an existing one, so we'll click Update or Save. That works out OK. Then we'll go back and go to the next step.
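What the Auto button is doing can be sketched as a search for an exposure value that brings the image to a target mean brightness — which is also why dragging the slider left or right makes things darker or brighter by hand. A minimal illustrative sketch, not the TM vision implementation; the `measure_brightness` callback stands in for grabbing a camera frame:

```python
# Minimal sketch of an auto-exposure search: adjust exposure until the
# image's mean brightness lands near a target. Illustrative only.

def auto_expose(measure_brightness, target=128, lo=100, hi=100_000, tol=2):
    """Binary-search an exposure value whose mean brightness is near target.

    Assumes brightness increases monotonically with exposure, which is why
    the manual slider behaves predictably: left = darker, right = brighter.
    """
    for _ in range(30):
        mid = (lo + hi) // 2
        b = measure_brightness(mid)
        if abs(b - target) <= tol:
            return mid
        if b < target:
            lo = mid + 1   # too dark: raise exposure
        else:
            hi = mid - 1   # too bright: lower exposure
    return (lo + hi) // 2

# A toy camera model: brightness grows with exposure and saturates at 255.
fake_camera = lambda exposure: min(255, exposure // 100)
exposure = auto_expose(fake_camera)
```

As in the video, an automatic result (like the 21,000 the Auto button chose) optimizes average brightness, so a small shiny feature such as the USB tip can still blow out — which is why overriding it by eye is sometimes better.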
The next step is to select this icon, which is the vision tools. Click on that.

And we have these five: pattern match (shape), pattern match (image), blob, anchor, and this last one. We're going to use the pattern match shape, so we'll click on that and give it a name. I'll call it the USB part — but just to make it a little more fun and sound more exciting, I'm going to add a "Y" so it becomes a USB party. Helps the vision system work better. Just kidding.

Next we select the pattern that we want to look for, and we're just going to drag a box around the part of the USB that we want to find, the part that's unique, so we'll just catch that tip and a little bit of the body. Go to Next; that records and saves it. And this part here — it's kind of hard to read, but it says 100, or approximately 100 — means the part matches what you trained it 100%, which is good, because it's looking right at the part we just trained.

The next thing we're going to do is edit the pattern so we can fix some of this stuff here. Those little dots are going to mess things up as the part moves around; that detail will get lost, and we don't really need to see it in order to know that that's the part. So I'm going to zoom in, then pick an eraser size. 40 by 40 is pretty good, and I can either click on something and it'll go away, or I can click and drag over stuff I don't want — so click, click, click, just to make it go away. And then there's some other stuff here, some noise. I don't want this to be something we consider when it's finding the pattern; I don't need any of this. It's getting tight in there, so I'll change the eraser to something smaller. You want to clean up your image and get rid of stuff that might not always be there, or might be too much detail. We'll just do that for now; this part looks OK. Then we'll go back, and it asks if you want to save the updated pattern. I'll say yes, that's what I want. The next thing is to select the range that you want to look for that part in, so we'll click on the search range.
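Why does erasing pixels from the trained pattern help? Because matching is only scored over the pixels that remain — erased areas are simply ignored, so noise or fine detail that won't always look the same can't drag the score down. A hypothetical sketch of that idea (this is illustrative pseudocode over flat pixel lists, not the TM Flow pattern-match algorithm):

```python
# Sketch of masked pattern matching: erased pixels are excluded from the
# score, so unreliable detail can't hurt the match. Illustrative only.

def match_score(image_patch, template, mask):
    """Percent of unmasked template pixels that match the image patch."""
    scored = matched = 0
    for img_px, tpl_px, keep in zip(image_patch, template, mask):
        if not keep:          # erased pixels are ignored entirely
            continue
        scored += 1
        if img_px == tpl_px:
            matched += 1
    return 100 * matched / scored

template = [1, 1, 0, 1, 0, 0]
mask     = [1, 1, 1, 1, 0, 0]   # last two pixels erased (noise / fine detail)
patch    = [1, 1, 0, 1, 1, 1]   # differs from the template only where we erased

score = match_score(patch, template, mask)
```

With the noisy pixels erased, this patch scores a full 100; with the full mask it would score lower, even though the part itself is the same.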
By default it looks in the whole field of view; that's why it's kind of shaded like that. But I can just drag a box that says this is the only place you should look for this part. If it's only going to be in a certain space in the field of view, then that's fine, but I just want to do the whole field of view for this demo, so I'll drag the box around the whole thing and say Next. This is where we tell it how much we can allow the part to rotate and still match the pattern. We can limit the rotation of the part so that if a part's completely upside down, for instance, maybe we don't find it, because it shouldn't be that way. But I'm just going to leave it like that: it can rotate 360 degrees. We go Next, and this is sort of the same thing, only with size. We can allow the part to be a little bit bigger or smaller than the pattern and still be a match, but we'll just leave it at: it's got to be the same size. Go Next, and then we're done with that part. The next thing to look at is the minimum score.

The minimum score is how much the found part has to match the recorded pattern. Right now it's set at 50%, so it has to be a 50% match in order to count as a found part. If the pattern match falls below that, it's not going to find it, so you can slide that around to help you keep from finding parts that don't quite match, or to help you find parts that do match but for some reason score low. OK, next we're going to click on the arrow to go back, and that takes us back to this point. Then we're going to save this vision job, so click on that icon for saving. We'll give it a name; I'll just leave the job name at what I already did. Then confirm that you want to save it, and since you saved it, it asks if you want to quit. We're done, so we'll say yes, then OK, and then OK to this. The node name will be filled in with the vision job name automatically, and so there we have it. You see this little icon to the left there? That's an eye, meaning this is a vision-based job.

The next step is to change the gripper position. I'm going to drag in the image of the robot so you can see what I'm doing, but I'm going to use that Free button that we've talked about in previous videos. It's on the camera, a button on the side; when I push it, I can move the robot around. I'm just going to move it to the position where I want to grab the part, and if it's not exactly right after you test it once, you can go back and move it around, either with the Free button again or by going to the controller. So I've got the part about in the right place where I want to grab it once I'm in this pre-position. Now I need to go train that position, because the one that's here was different, so I'll click on the pencil and go to the point manager. We have to do this twice; I'll show you why. First, we want to overwrite this point in the table with the new pose, or position, so we'll click on that.
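Stepping back, the settings configured in the last few steps — minimum score, rotation range, and scale range — act together as acceptance gates on each candidate match the camera finds. A hypothetical sketch (the field names are illustrative, not TM Flow's):

```python
# Sketch of the acceptance gates on a candidate match: it must clear the
# minimum score, stay inside the allowed rotation range, and stay inside
# the allowed scale range. Illustrative only.

def accept(match, min_score=50, rot_range=(-180, 180), scale_range=(1.0, 1.0)):
    score_ok = match["score"] >= min_score
    rot_ok   = rot_range[0] <= match["rotation_deg"] <= rot_range[1]
    scale_ok = scale_range[0] <= match["scale"] <= scale_range[1]
    return score_ok and rot_ok and scale_ok

candidates = [
    {"score": 97, "rotation_deg":  35, "scale": 1.0},   # good match, rotated
    {"score": 42, "rotation_deg":   0, "scale": 1.0},   # below the 50% minimum
    {"score": 88, "rotation_deg": 170, "scale": 1.0},   # fine: full rotation allowed
]
found = [c for c in candidates if accept(c)]
```

With the demo's settings (50% minimum, full 360° rotation, fixed scale), only the 42% candidate is rejected; tightening any one gate filters out more candidates.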
And it's going to take us back to here, so we're going to go back into the point manager. The next thing I want to do is make sure that the coordinates used are based on the vision image that was just taken, so I'm going to record it on another base, and you can see that "vision Find USB" is one of the bases. I'm going to click on that and then say OK. That means it's going to find the thing, whatever it is, and then adjust its coordinates to move the robot to pick it up the way that we wanted to. And you see that it puts this little icon with the eye there; that's the one that says it's using vision-based positioning. Everything else is the same: once we find it, we're just going to go down, grip it, go back up, and then go to the drop-off and open the gripper. So it's one small modification. Here's what it will look like when we start the program. We're going to bring the robot over.
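What recording the point on the vision base means, mathematically, is that the grasp pose is stored relative to the found pattern: when the camera reports the part at a new position and angle, the same taught offset is rotated and translated to get the new robot target. A sketch of that 2D transform under assumed planar geometry (illustrative math, not Omron's implementation):

```python
# Sketch of a vision-base coordinate adjustment: a grasp offset taught in
# the part's own frame is transformed into world coordinates using the
# part pose the camera found. Illustrative only.

import math

def grasp_in_world(part_x, part_y, part_theta_deg, offset_x, offset_y):
    """Transform a grasp offset taught in the part's frame into world coords."""
    t = math.radians(part_theta_deg)
    wx = part_x + offset_x * math.cos(t) - offset_y * math.sin(t)
    wy = part_y + offset_x * math.sin(t) + offset_y * math.cos(t)
    return wx, wy

# Taught with the part at the origin: grasp point 10 mm along the part's x axis.
OFFSET = (10.0, 0.0)

# Camera later finds the part at (200, 50), rotated 90 degrees:
gx, gy = grasp_in_world(200.0, 50.0, 90.0, *OFFSET)   # approximately (200.0, 60.0)
```

This is why flipping or rotating the part on the table still works in the demo: the vision base moves and rotates with the found pattern, and every point recorded on it moves along with it.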

Sorry — the image of the robot — and I'll press the play/pause button. In the background you can see TM Flow change, and you'll see something here that we haven't seen yet in any of our examples so far. It's doing that initial position, testing the gripper. Now it goes and finds the part; you can see what the camera took an image of there. So we found the part, picked it up, and it's going to move. So far nothing special, because we were doing that before without even finding the part. But this is where it'll be different. I'm going to grab that thing and just move it — look at that, I changed the angle of it. You'll see that reflected here, and the robot's going down to the part now in that new orientation. It's going to pick it up and then drop it off again. So it's not too hard to just add in that vision-assisted pickup. Let's try one more time; I'll flip it the other way. In future videos I'll show you how to increase the speed and share some other tips for working with TM Flow on the robot. But there, you can see it found that part no problem and picked it up. I'm going to hit the stop button, but I like to do that after it drops off the part, so I don't have to pry it from the robot's cold, lifeless fingers, and then we get back to there. That's it for this video. I hope you found it helpful. This is Ray Marquiss with Valin Corporation, and for now the robot and I say goodbye.

If you have any questions or are just looking for some help, we're happy to discuss your application with you.  Reach out to us at (855) 737-4716 or fill out our online form.