Robotics Blog Update

I have grown to understand a great deal about the field of robotics. It is a challenging field, but it is also an area of work that gives me enormous satisfaction, and it is difficult to put into words how much it means to me to have the opportunity to work with some of the best scientists in the world. There are many aspects of robotics and automation that have not been discussed here yet, and in time all of them will be covered. Given the countless e-mails requesting some insight into my current work, I would like to give you an update on what I have been working on so far. My next article will cover this subject in detail, but all I can say for now is that I have moved away from vision robotics. I will return to that subject at a later date; for now I will look at other fundamental issues affecting automation projects.

Robots in Manufacturing and Lab Automation

The use of machine vision systems in the lab has become widespread in recent years, and the evidence suggests this trend will continue for the foreseeable future. It is understood that these systems, when keying from given co-ordinates, can create what is known as solid fixturing; this has yet to be confirmed by any recognized institution, but that is largely a formality and confirmation is expected quite soon. As machine vision systems grow in popularity we can expect far more investment in the technology, especially from the lab automation industry, although an investigation into how it can better our understanding of robotics needs to be carried out first, before appropriate funding can be provided.

Lab Robotics

Locating components in an environment can be a very difficult task, and researchers have begun to use luminescent codes to highlight the location of objects. The system scans a given area using ultraviolet illumination in conjunction with regular illumination, then carries out a second scan using regular illumination alone to create another image. The two images are compared, allowing the system to deduce what is in a given area. The resulting image gives the system a detailed understanding of its surroundings and shows clearly where the various illumination codes are located. Provided the codes are applied in a meaningful way, different codes can represent different elements of an object, and a code can even be placed on a critical spot, allowing the system to track the object's position in relation to the robot arm.
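As a rough illustration of the comparison stage, the sketch below (in plain BASIC, with illustrative array names and a small image size of my own choosing) treats the two scans as pixel arrays and keeps only the pixels that differ between them, which correspond to the luminescent marks:

  10 REM UV(X,Y): scan under ultraviolet plus regular illumination
  20 REM RG(X,Y): scan under regular illumination alone
  30 DIM UV(63,63), RG(63,63), D(63,63)
  40 FOR Y = 0 TO 63
  50 FOR X = 0 TO 63
  60 D(X,Y) = ABS(UV(X,Y) - RG(X,Y))   : REM bright only at the code marks
  70 NEXT X
  80 NEXT Y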


Use in Manufacturing

The most obvious application of these systems is in manufacturing, where such a device can replace human workers, but the application is fraught with challenges that need to be overcome if a successful system is to be implemented. In the manufacturing of non-conforming parts, it can be very difficult for these systems to manipulate objects in a repeatable manner. Intricate manipulation such as folding and twisting is very difficult for most robotic systems, but those fitted with vision systems have made a real breakthrough in this industry; edge detection is paramount to carrying out these tasks without damaging the objects in any way. I will cover this in more detail in a later article, and I will post some more resources for those who want to carry out further reading on the subject.


Robotics in the New World


I have not sent out a personal message on this blog for a while, so I think it’s a good time to finally do one and get back in touch with my readers. The world of robotics has changed somewhat in the last few years, and I believe it has changed for the better, certainly not for the worse. You need to look at what is important to you and make your decisions based on that. I am someone who is very passionate about robotics, and I see it as a stepping stone to bigger and better things. The robotic make-up of life is enough to drive even the sanest person crazy, and I don’t think enough has been said about that. I plan to post more of my thoughts on this website, as there has been demand for this laid-back approach recently. Bookmark this website for more on the subject.


The Editor.

Lab Automation Vision Systems

In many articles on robotics and automation, especially those geared towards lab automation, vision systems are addressed as a subtopic. Many people in the scientific community speculated that machine vision systems would not be commonplace in laboratories for a number of years. That forecast turned out to be off the mark, with the growth rate of machine vision systems rivalling that of machine manipulators. The problem lies in the fact that many people do not truly understand what the term ‘machine vision’ actually means.

Vision Systems

Machine vision is quite often described as the ability of a device to sense, store and internally reconstruct a graphic image that emulates the original as closely as possible. This is a misconception: the objective is far more specific, as machine vision systems usually have a particular assignment. They are used primarily for checking part orientation, in terms of both a part's position and where it is in relation to its environment.

This is a rather simplistic way of viewing it, as the process inside the laboratory can be rather intricate. In lab automation environments, machine vision devices have a specific role, such as picking up vials and placing them in a device. Procedures within the lab must be efficient, completing their tasks with the minimum of deviation. A full graphical recreation of the device's surroundings is therefore unnecessary; only a partial reconstruction is required to navigate the environment. The procedures outlined above take advantage of various elements of the image, using algorithms to deduce the necessary information from it.

Machine Vision

This side of machine vision is often misunderstood, and it can be a difficult concept to grasp. I have a profound interest in this field, and this is reflected in my work, which has received much criticism due to its cognitively demanding nature. This website is dedicated not to the vision process itself but to robotics and automation, so I will not delve too deeply into this area. With that in mind, I will examine only what is required to implement such a system in an automated device. My apologies to those who wanted to explore this subject matter more thoroughly; I may do a more in-depth article at a later date.

Robotic Motion

It is the ease with which they can be programmed that makes robots so versatile, and it is this accessibility that drives the creativity of automation pioneers. Programming of industrial, laboratory or commercial robots can range from the manipulation of a simple tool to the intricate programming of computer chips and microprocessors. One of the greatest difficulties in programming robots is to effectively translate the robot's frame of reference across to the human operating the machine. When most of us consider the space around us we think of right versus left or up versus down, but for robotic devices it is very different: automated machines think of space in terms of their movement within it, and of how their axes must be moved to reach a pre-determined point. The tools these machines use can be treated as yet another frame of reference. For teaching that is both practical and effective, the system being programmed must accommodate transformations between the various co-ordinate systems.
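As a toy example of such a transformation (planar only, with names of my own choosing), the following BASIC fragment expresses a world point (PX, PY) in the frame of a robot positioned at (OX, OY) and rotated by TH radians:

  10 DX = PX - OX : DY = PY - OY
  20 RX =  COS(TH) * DX + SIN(TH) * DY   : REM robot-frame X
  30 RY = -SIN(TH) * DX + COS(TH) * DY   : REM robot-frame Y

A real teaching system performs the equivalent calculation in three dimensions, and for the tool frame as well, but the principle is the same.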


Programming methods that are considered powerful and versatile use two devices: a keyboard and a teach pendant. The many software languages I have referenced on this website were used to illustrate the relationship between teaching the robot and programming it from a keyboard. Both methods are required if you are to build a robot that executes perfectly and does not fall into error routines. This post is something of a summary of the articles that preceded it, as I like to recap the lessons I have taught. I hope you, the readers, are getting as much out of reading this information as I am getting from writing it. Computer-aided design and robotic manipulation have been used for many years, helping to relieve some of the tasks once carried out by manual processes.

ARMBASIC Lab Automation Example

I feel there is no better way to understand a particular concept than through an example, so we will program a lab automation device in ARMBASIC to pick up a vial and empty it into a test tube. The first thing we need to do is define a variable to hold the stepper count for each axis used by the robot. The stepper count is the number of motor steps a given axis needs to reach a given point. It is measured from the home position set using the @RESET command, and the variables are configured as follows:

  • A1 to A5 will be used for the pick-up point A of each axis
  • B1 to B5 will be used for the place point B of each axis
  • C1 to C5 will be used for the approach point C to A of each axis
  • D1 to D5 will be used for the approach point D to B of each axis

Let’s say we have a robot with six axes, where the sixth axis is the gripper. Four points across five axes gives us twenty variables, each of which must have a stepper count defined. The lab automation device has now learned every point, including point C, which was defined as the home position. You are probably wondering whether the @READ command will specify five axes or six. The answer is five, because axis six keeps the same degree of closure in every move. And because C is the home position, where every stepper count is zero, its five variables need not be stored at all; we therefore save only sixteen variables instead of twenty, fifteen for the remaining three points and one for the steps of the gripper closure.


Now it’s time to execute the program and see how it works. First things first: the robot must drop the vial picked up during the teach phase and return to its ready state. The automated machine can then begin the pick-and-place sequence, controlled by the ARMBASIC commands. Before execution, we must also record the stepper count variation between the open gripper and the closed gripper.

The most efficient way to enter these variables into the commanding computer is to manually move the robot into the desired positions and then use the @SET command in conjunction with the @READ command. This is referred to as the manual teach phase, discussed in depth in other articles on this website. The approach greatly reduces the number of variables needed to save the different positions of the axes, which means less data processing overall, and it also cuts down the amount of arithmetic required when using the @STEP command. Approach point C, the position at which the gripper's jaws are wide open, is a good choice for the home reference; it will be the starting and ending position for each execution of our program. Using this language, the operator uses keyboard input to position the robot at the home reference point, with the jaws at their widest, perfect for picking up an object.
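The listing below is a minimal sketch of the whole teach-and-execute sequence in ARMBASIC-style BASIC. It is a reconstruction under stated assumptions, not a definitive program: the speed values, the idea that @STEP takes a speed followed by one step delta per motor (with the gripper as a sixth motor), the port number given to @ARM and the variable G for the gripper closure count are all illustrative.

  10 REM Pick up a vial at A and empty it at B; C (jaws open) is home.
  20 @ARM 1                          : REM assumed port number
  30 @RESET                          : REM zero all counters at home point C
  40 REM -- teach phase: the operator drives the arm from the keyboard
  50 @SET : @READ A1,A2,A3,A4,A5     : REM record pick-up point A
  60 @SET : @READ B1,B2,B3,B4,B5     : REM record place point B
  70 @SET : @READ D1,D2,D3,D4,D5     : REM record approach point D
  80 @STEP 100,-D1,-D2,-D3,-D4,-D5,0 : REM return to home C
  90 REM -- execute phase: each @STEP is incremental from the last point
  100 @STEP 100,A1,A2,A3,A4,A5,0        : REM home C to pick-up point A
  110 @CLOSE                            : REM grip the vial
  120 @STEP 100,-A1,-A2,-A3,-A4,-A5,0   : REM retract to C
  130 @STEP 100,D1,D2,D3,D4,D5,0        : REM C to approach point D
  140 E1=B1-D1 : E2=B2-D2 : E3=B3-D3 : E4=B4-D4 : E5=B5-D5
  150 @STEP 100,E1,E2,E3,E4,E5,0        : REM D to place point B, empty vial
  160 @STEP 50,0,0,0,0,0,-G             : REM reopen the jaws (G = closure count)
  170 @STEP 100,-E1,-E2,-E3,-E4,-E5,0   : REM retract to D
  180 @STEP 100,-D1,-D2,-D3,-D4,-D5,0   : REM return to home C

Because @STEP is incremental, each move is simply the difference between the counts recorded at the destination and those at the current point, which is why the deltas E1 to E5 are computed before the move from D to B.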


From the programming example, the reader can probably see that some of the statements are identical to one another. Regardless of this duplication, as soon as the teach phase has finished the robot will go immediately into action. To make this software more efficient, we must:

  • Have program prompts that interact with the user, requesting they enter points during the teach phase.
  • Include a pause between manually teaching the robot and then executing the code. This will make the robot wait until the operator is ready to execute the program.
  • Insert a function which allows the operator to control how many times the program must execute.

To add these features we can use the conventional structure of the BASIC programming language, as sketched below. A simple delay sub-routine, for example, does nothing except count a variable up to track time; the time actually elapsed is shown by the computer's internal clock, though it can also be shown by the program itself, using an execution-second counter. Programming an external device, robot or lab automation tool to turn on and off can likewise be done using the BASIC programming structure. ARMBASIC is an extension of BASIC, and the two therefore integrate seamlessly.
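Here is a rough sketch of the three additions, wrapping the ARMBASIC commands in ordinary BASIC. The prompt wording and line numbers are illustrative, and GOSUB 1000 stands in for the pick-and-place sequence shown earlier:

  10 @RESET
  20 PRINT "DRIVE THE ARM TO PICK-UP POINT A"
  30 @SET : @READ A1,A2,A3,A4,A5
  40 PRINT "DRIVE THE ARM TO PLACE POINT B"
  50 @SET : @READ B1,B2,B3,B4,B5
  60 PRINT "DRIVE THE ARM TO APPROACH POINT D"
  70 @SET : @READ D1,D2,D3,D4,D5
  80 INPUT "HOW MANY TIMES SHOULD THE SEQUENCE RUN"; N
  90 INPUT "PRESS RETURN WHEN READY TO EXECUTE"; R$
  100 FOR K = 1 TO N
  110 GOSUB 1000                 : REM the pick-and-place moves from above
  120 NEXT K
  130 END
  1000 REM pick-and-place sub-routine goes here
  1010 RETURN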


ARMBASIC Programming for Automation


When using the ARMBASIC programming language you will only need to know the following commands:

  • @RESET – Initializes the positioning variables, establishing what is referred to as the home reference for the motor step counters.
  • @STEP – Delivers pulses to specific stepper motors, moving the robot at a defined speed. Its parameters define the number of step pulses to send, which axes to advance and how fast the robot is to move. Motion is incremental, which means the steps taken by each axis are relative to its position after the previous command.
  • @SET – Switches control of the robot to the keyboard so the machine can be operated manually. This command is used before entering a teach process for the robot.
  • @READ – Records the sum of the pulses sent to each stepper since the @RESET command. It is vital that the position register of stepper pulses is recorded relative to the count in the previous command. This data can then be used to return the robot to previous positions without having to re-learn the steps.
  • @CLOSE – Closes the gripper until a sensor is triggered, indicating a certain degree of resistance.
  • @ARM – When configuring a robot it is important that the correct port number is selected; the @ARM command carries out this process.

As you can see from the list above, each command is prefixed with the @ sign, which distinguishes ARMBASIC commands from BASIC commands. You may also notice that there is a command for closing the gripper but none for opening it. Opening is done by sending a series of pulses that drive the gripper motor open, which can be achieved with the @STEP or @SET commands. A dedicated @CLOSE command is required because the gripper motor must keep running until it is powered off by one of the sensors.
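As a small illustration of this asymmetry, again assuming @STEP can drive the gripper motor as a sixth axis and that G holds the closure step count recorded earlier:

  10 @CLOSE                     : REM runs until the grip sensor trips
  20 REM ...manipulate or transport the object...
  30 @STEP 50,0,0,0,0,0,-G      : REM reopen by stepping the gripper motor back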


From what I have discussed so far, it would be easy to presume that the software engineer is required to make complex calculations of robotic dimensions and angular displacement, but this is simply not true. All the programmer has to do is move the robot using the @SET command; in conjunction with the @READ command, the required pulse data is recorded. If the engineer then wants the robot to return to that position, they simply use the @STEP command with the recorded pulse counts. The trajectory of motion is smooth because the programming language pulses each motor at a controlled rate.


This path is usually not a straight line, but that makes very little difference to the motion of the automated machine. The engineer does not have to carry out any calculations and is often not even aware of the values used; the program calculates and handles them. When the software engineer names the variables, he is simply assigning memory to hold the pulse data, with no underlying knowledge of what that data actually is. The final point to take away is that the entire process is open-loop, meaning the program cannot react to blockages in the motion path.

Lab Programming Commands


The best approach to programming a robot is to structure the commands as sub-routines of a more basic programming language, instead of trying to achieve simple tasks with a complex language. This means that software engineers must be versed in many programming languages and competent at applying that knowledge to any application. It allows more complex programs to borrow features from less complex ones, leading to a more efficient development process. Today's micro-computers contain a number of basic programming tools that developers can access, allowing them to reserve advanced programming techniques for intricate problems.
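In BASIC this amounts to keeping each motion sequence in its own GOSUB sub-routine so that a larger program can simply call it. A minimal sketch, with illustrative line numbers and reusing the point variables from the earlier example:

  10 GOSUB 1000                  : REM pick
  20 GOSUB 2000                  : REM place
  30 END
  1000 REM -- pick: approach the part, grip it, retract
  1010 @STEP 100,A1,A2,A3,A4,A5,0
  1020 @CLOSE
  1030 @STEP 100,-A1,-A2,-A3,-A4,-A5,0
  1040 RETURN
  2000 REM -- place would follow the same pattern
  2010 RETURN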

Lab Programming Defined

The text above is aimed at those looking for an introduction to the structure of most automated programs, and at giving readers the ability to write basic programs of their own. The language semantics used in practice are much more robust than those discussed here. For example, the majority of the motion commands can traverse objects across a set of points, not just move them to one centralized point, and the interpolation inside these devices allows for curvilinear movement, similar to the motion described in previous articles. On top of this there are what are termed ‘convenience routines’, which appear in numerous languages based on C; those of you who have been exposed to coding before will already have come across commands of this kind.


Now that we have had a thorough introduction to linear commands, we shall develop a routine that achieves a predetermined task. This will help you understand how these routines function and how they can be used in real-world applications. The best way to learn any new approach is to actually use it; you can read about it all you want, but if you do not put that knowledge into practice you will soon forget it.

Software Solutions

In previous articles on this subject, the software solutions included various modes, most notably incremental and absolute modes. Many software languages permit both modes, but they are rarely used together. If you want to move a robotic device relative to its current location, or manipulate it in some other way, a special command must be used. Input parameters must be supplied for the software to achieve meaningful goals; without meaningful input, the output would be random.
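For example, a purely incremental command such as @STEP can emulate an absolute move if the program keeps a running total of the pulses already sent. The counter and target names below are illustrative:

  10 REM CUR1..CUR5 hold the running step count for each axis;
  20 REM T1..T5 is an absolute target previously recorded with @READ.
  30 E1=T1-CUR1 : E2=T2-CUR2 : E3=T3-CUR3 : E4=T4-CUR4 : E5=T5-CUR5
  40 @STEP 100,E1,E2,E3,E4,E5,0
  50 CUR1=T1 : CUR2=T2 : CUR3=T3 : CUR4=T4 : CUR5=T5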

Robot actuators use commands that are interlinked with other executable instructions to help them complete the most complex of tasks. A huge repository of commands exists, allowing the programmer to complete various tasks and giving fantastic scope when overcoming complex automation problems. Time delays for the actuators are set in seconds, which allows the machine to function at precise intervals; delay commands such as these are a common feature of many robotics software programs in use today.
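Where a language lacks a built-in delay command, a delay of this kind can be produced in plain BASIC with a counting loop. A rough sketch, noting that the count per second depends entirely on the speed of the computer and would need calibrating:

  1000 REM -- delay sub-routine: T holds the delay in seconds
  1010 FOR I = 1 TO T * 1000     : REM 1000 iterations per second is assumed
  1020 NEXT I
  1030 RETURN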


Programming Routines

These routines take a standard approach to dealing with pallet machines in industrial robotics, so general commands can be used in place of more complex ones. Many constants are assumed when manipulating pallets on a large scale, all of which must be accounted for before the software is developed. Column spacing and row spacing play a prominent role in how objects are picked up and transported from one location to another, and the corner points of each pallet must be defined for the machine to interact with it. If this data is not available, the automated device will not be able to carry out its function.
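A minimal sketch of such a pallet routine, with names of my own choosing: corner point (X0, Y0), column spacing CS, row spacing RS, NC columns and NR rows, plus an array P acting as the parts indicator discussed below.

  10 DIM P(9,9)                  : REM parts indicator: 1 = cell handled
  20 FOR R = 0 TO NR - 1
  30 FOR C = 0 TO NC - 1
  40 X = X0 + C * CS             : REM cell position from the corner point
  50 Y = Y0 + R * RS
  60 REM ...move the gripper to (X, Y) and pick or place here...
  70 P(R,C) = 1                  : REM log the cell as manipulated
  80 NEXT C
  90 NEXT R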


The locations representing the predefined points in the command definitions must be entered as variables in the program. Some software programs treat an object as beginning on the left and proceeding across to the right; in these cases rows are considered horizontal and columns vertical, and the last item in each sequence is taken to be in the upper right-hand corner, with this logical approach adopted across the board. The terms horizontal, vertical, left and right are only meaningful with respect to the robot's plane of vision and the pre-defined orientation; in a real-world example, an object being manipulated could just as well have vertical rows and horizontal columns.

Throughout this process the software keeps a log of all the positions on the object that the robot has manipulated. This log is referred to as the object's, or part's, indicator, and it is used by sub-routines further down the software code so they can carry out the same actions on the object. This is probably one of the most difficult subjects I have blogged about, and you may need to read this article a few times to fully grasp the concept. Robotic devices are very complex, and the software that controls them is more complex still. There are signs that programming approaches are moving towards more basic routines that can be executed more efficiently, but at the moment this subject matter is not suitable for novices.

Teaching Robots to do Work


In many programming languages we can see the robot's ability to interact with its environment through a specific set of instructions. Certain instructions allow the robot to respond rapidly to external communications: one specific instruction will cause the robot to halt whatever it is doing the instant one of its input channels goes low, engaging an error sub-routine. The idea is to halt any operation in progress when a sensor is triggered, indicating that the robot's motion area has been encroached upon. There is a large difference in how these instructions are processed and in the actions they lead to. If the robot is to sit idle, the program instructs the device to do nothing while the input channels are low; but if the robot is to be called into action, a sub-routine triggers the process once an appropriate level is reached on the channels. The robot remains on alert while it waits for further instructions, and the machine can then be made inactive by instructing it to ignore all channel data and await further commands. These instructions are fed through an automated software program but can be over-ridden by manual commands.
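A rough sketch of the idle and triggered behaviours follows, assuming the input channel can be read with BASIC's INP function on port 3; the real channel mapping depends entirely on the hardware, and a genuine halt-on-low would normally be handled below the program level.

  10 REM -- idle state: do nothing while the channel is low
  20 IF INP(3) = 0 THEN GOTO 20
  30 GOSUB 1000                  : REM channel high: call the robot into action
  40 END
  1000 REM -- action routine: re-check the channel between moves and
  1010 REM branch to an error routine if it drops low mid-sequence
  1020 IF INP(3) = 0 THEN GOSUB 3000 : RETURN
  1030 REM ...@STEP moves here...
  1040 RETURN
  3000 REM -- error routine: halt and await manual commands
  3010 RETURN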


This short introduction to programming languages does not cover the full power of these techniques; my aim is to give a quick preview of what these languages can do and of some of the processes that can be carried out. The feats achieved by these machines, and by the languages which control them, are truly amazing. Most modern languages have a semantic make-up similar to C, and you will find that most are based on this early programming approach. There is a programming modularity that provides efficient calling of sub-routines, which is vital to how the C language functions; these sub-routines cede control to each other at various points throughout a program's life-cycle. The language was intended as a general language for various types of lab automation equipment, not just for industrial robots; it is basically a manufacturing language, and this is reflected in its name. I am only going to scratch the surface of what this language does, as I plan a much more detailed article on the subject in the future. To do it justice, I feel a more complete look at the language is needed, so that the full power of the application can be appreciated.