An Easy-to-Build Syringe Pump

This is my attempt at a better dosing mechanism for my bartending robot (more on that soon). It’s a cheap, easy-to-build syringe pump made from off-the-shelf parts.

What I used:

The only modification I had to make was to cut the plunger ring off the syringe and drill some mounting holes in the plunger top and the grip flange.

You could easily make the metal brackets from aluminum channel. You could also use a cheaper stepper with a threaded rod coupled to the shaft to save on cost.

Here’s a short video of the action:


Using the APDS-9960 RGB, Proximity, and Gesture Sensor with the Raspberry Pi

I’m always excited to get my hands on new sensors, so when I saw the APDS-9960 RGB and gesture sensor breakout board from SparkFun I ordered one immediately. The neat thing about this sensor is that it does a lot for its tiny size and relatively low cost.

The features this sensor offers are:

  • Proximity measurement
  • Ambient light intensity measurement
  • Color detection
  • Simple gesture detection including left, right, up, down, forward, and back
  • More complex gestures are apparently possible, although I haven’t seen evidence of this yet

SparkFun offers an Arduino library for the sensor, but no other platforms are supported yet. I really wanted to use the sensor with the Raspberry Pi so I quickly ported it. The connections I’m using are:

I put the code up on BitBucket. You can clone the repository using:

git clone

In order to get it working you’ll need to install wiringPi for I2C and interrupt support. Just follow the installation instructions at the wiringPi website.

Once you have wiringPi installed you can build any of the examples. For example, GestureTest can be built with:

g++ -Wall -o GestureTest GestureTest.cpp APDS9960_RPi.cpp -lwiringPi

It’ll be easier to write makefiles for larger projects so you don’t have to keep doing that, but for testing it’s fine. Before you run the program you’ll need to set up I2C and the interrupt with:

gpio load i2c
gpio edge 7 falling

You might get a warning with the i2c load, which you can safely ignore. Run the program with:

sudo ./GestureTest

And wave your hand over the sensor to see what it does. Neat, right? I’ll be digging further into this code to see if I can get more complex gesture sensing working, if someone doesn’t beat me to it. Have fun!

Designing a Quadruped Robot – Part 3

The key technique for having the quadruped move smoothly is interpolation. Dynamixel servos (or any other servo, really) accept some value to use as the destination; the Dynamixel SDK uses the term “goal position” for this. If you send the servo this value, it will move as quickly as possible to the destination, which will not result in smooth movement when coordinating multiple servos.

To interpolate the servo for smooth movement, the goal is to move it in small increments between the current position and the destination. My approach is to keep the time between increments constant and vary the number of increments to change the speed of the servo.

A basic example for interpolating a single servo might look something like this (excuse the pseudo-ish code throughout this post):

void interpolate_servo(float current_position, float desired_position, int increments) {
  float increment_amount = (desired_position - current_position) / increments;
  for(int i = 0; i < increments; i++) {
    current_position += increment_amount;
    move_servo_to(current_position); // convert the angle and send it to the servo
    // wait a fixed amount of time here so every increment takes the same time
  }
}

Of course “move_servo_to” will vary based on the servos used, and should accept an angle and convert it to a value the servo understands. Also, this method would have to accept the ID of the servo you’re controlling somehow. If writing object oriented code, it could be a method of the “servo” class.

This works well if you’re only moving one servo at a time. But if you’re trying to move a whole leg, this routine will have to finish moving one servo before you can call it for another servo. Therefore, the interpolation routine should be done on the entire leg and not the servos directly. If you look back at Designing a Quadruped Robot – Part 2, some simple code could be written which would move the entire leg to a destination when given a point in 3D space. The key then is to apply the interpolation technique above on the “move_leg” function.

An example for doing this with an entire leg would look like this:

void interpolate_leg(float current_x, float current_y, float current_z, float desired_x, float desired_y, float desired_z, int increments) {
  float x_increment = (desired_x - current_x) / increments;
  float y_increment = (desired_y - current_y) / increments;
  float z_increment = (desired_z - current_z) / increments;

  for(int i = 0; i < increments; i++) {
    current_x += x_increment;
    current_y += y_increment;
    current_z += z_increment;
    move_leg_to(current_x, current_y, current_z);
  }
}


Again, this works well for a single leg. But what about the four legs on the robot? You don’t want to have to move only one at a time or things look pretty bad. I’m sure you’re starting to see the pattern.

At this point I suggest using C++ and creating a leg class that holds a “move_to” method, an “update” method, the current coordinates of the leg, the desired coordinates, the number of increments, and the IDs of the servos that make it up (IDs in the case of Dynamixels; if you’re using normal servos you’ll need some other way of identifying them). This way you can simply set the desired location and number of increments, then call “leg.update()”. The code would look something like this:

class Leg {
  float current_x, current_y, current_z;
  float desired_x, desired_y, desired_z;
  int increments;
  unsigned char coxa_servo_id, femur_servo_id, tibia_servo_id;

  public:
    Leg(unsigned char, unsigned char, unsigned char);
    void set_desired_position(float, float, float);
    void update();
    void move_to(float, float, float);
};

// Constructor here to set up servo IDs
Leg::Leg(unsigned char coxa_servo_id, unsigned char femur_servo_id, unsigned char tibia_servo_id) {
  this->coxa_servo_id = coxa_servo_id;
  this->femur_servo_id = femur_servo_id;
  this->tibia_servo_id = tibia_servo_id;
}

// move the leg one increment
void Leg::update() {
  if(increments > 0) {
    float x_increment = (desired_x - current_x) / increments;
    float y_increment = (desired_y - current_y) / increments;
    float z_increment = (desired_z - current_z) / increments;

    current_x += x_increment;
    current_y += y_increment;
    current_z += z_increment;
    increments--; // count down so the leg stops once it arrives
    move_to(current_x, current_y, current_z);
  }
}

// other class member functions like "move_to" etc. below...

This way in the main loop you can simply go through each leg calling leg.update(), and if a leg has an “increments” value greater than 0 it will move towards its destination; otherwise nothing will happen.
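Sketched out (still pseudo-ish; the servo ID triples and the 20 ms tick are just example numbers), that main loop could look like:

```
Leg legs[4] = { Leg(1, 2, 3), Leg(4, 5, 6), Leg(7, 8, 9), Leg(10, 11, 12) };

while(true) {
  for(int i = 0; i < 4; i++) {
    legs[i].update(); // does nothing once a leg's increments reaches 0
  }
  wait_milliseconds(20); // the fixed time between increments sets overall speed
}
```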

Now each leg can be moved independently and at the same time! Please keep in mind that this is simplified code describing concepts. In reality there are structs to hold points and rotations, a “Joint” class containing servo ID info and abstracted code to control the servo, a bunch of member variables in the leg class to hold angular information, etc., but this should be enough to get things working.

This is the part where you could probably come up with some neat walking gait on your own, but I’ll talk about it briefly at a high level since it’s frequently asked about. I will probably do a more descriptive post about it as well.

For the entire robot to move, two diagonally opposite legs lift at the same time, move forward, and set back down. The other pair of legs then lifts and does the same thing while the first two legs move backward, which pushes the robot forward. The path that the legs follow while on the ground moving backward determines the overall path of the robot. The code just needs to be written so that the path can be a mathematical function (in the case of turning it’s an arc, for walking forward it’s a line, etc.).

There are a ton of edge cases to consider, though. What if the robot wants to stop? Do the legs just return to their “home” position regardless of their current location, or would this cause them to drag their feet across the carpet, potentially stripping the servo gears? What if you want to quickly switch from walking forward to turning in a tight circle? Should you return to home and then start turning, or is there a faster way? Hopefully I can address these issues as well in the next post.

Experimenting With Dynamic Stability

Balancing robots are interesting to me because they use such a simple mechanism to achieve a behavior that seems complex and intelligent. They are also a great introduction to applied control systems, and particularly PID controllers. After seeing videos of these robots online I wanted to give it a try for myself, but I also wanted to try something a little different. This got me thinking about (somewhat) practical uses of a balancing robot, and reasons why they might be better at something than a robot with four wheels. The basic idea of my design was something that could act as a mobile table with some resistance to tipping over. This could be useful if I was carrying food or drinks out of the kitchen and needed another hand. The design has been through two iterations and still needs plenty of work, but this video shows a quick look at the current state:

For the robot to meet my goal, it would have to be:

  • Very stable: It can’t be dumping stir-fry all over the floor
    • This means a good control system with properly tuned PID constants
  • Fast enough to keep up with my walking
    • This is a function of motor speed, but also relies on good control
  • Tall enough to be useful
    • The taller the robot is, the more unstable it’s likely to be. My solution is to place a lot of the weight close to the bottom.
  • Sturdy enough to carry plates, etc.
    • Strong materials and design are a requirement. It can’t be vibrating its motors loose.
  • Able to stay in one place. Many balancing robots drift over time.
    • Somehow it needs to find its own center of balance dynamically. This allows plates, etc. to be set on top without causing it to drive off if they’re not perfectly balanced.

For the sake of simplicity and rapid development, the key parts used in this robot are:

Implementing the PID controller is not a difficult task once you understand the idea, but tuning a system that can damage itself if it falls over can be tough. For example, my quick and dirty PID controller is implemented here:

float ComputePID(float input, float kp, float ki, float kd, float setpoint, float &error_sum, float &last_input, float delta) {
 // compute all working error variables
 float error = setpoint - input;
 error_sum += (error * delta);
 float d_input = (input - last_input) / delta; // take the derivative of the input instead of the error to eliminate derivative kick
 last_input = input;

 // compute PID output
 float output = (kp * error) + (ki * error_sum) - (kd * d_input); // derivative term must be negative because it's based on input
 return output;
}

In the case of my balancing robot, the setpoint is the angle the IMU needs to report to be considered “balancing” and the delta is the time between PID calculations. The error_sum (integral) and last_input (derivative) are passed by reference from the calling method so that different systems can use the PID code. Finally, the output is a PWM value that is common to both motors. A differential term can be added to the motors afterward to allow the robot to turn.

This system can be tuned using the Ziegler–Nichols method to obtain kp, ki, and kd if it’s done carefully. First, the ki and kd are set to zero. The kp is increased slowly until the robot starts to oscillate back and forth steadily without falling over. That kp value is called Ku (ultimate gain) and the period of robot oscillation is Pu (ultimate oscillation period). These values are then plugged into the following equations to obtain kp, ki, and kd:

  • Kp = 0.60*Ku
  • Ki = 2*Kp/Pu
  • Kd = Kp*Pu/8

This gives reasonably good results, but usually requires a little tweaking.

Once this was working, I wanted the robot to find its own setpoint, which depends entirely on the distribution of weight and can change when something is set on top. This is done by cascading PID controllers: the setpoint of one controller (the one that balances the robot) is adjusted by a second controller (one that finds the correct angle to balance at). This can be a complete nightmare to tune, because telling the Pu apart from normal balancing oscillations is very hard. The trick is to set the PID delta (the time between PID calculations) for this second, angle-finding controller to a much longer time than the balancing one. This makes sure Pu results from large, sweeping oscillations, which are much more obvious; I’ve had good luck with a delta that’s 5 to 10 times larger. The second controller takes the common motor PWM (the output of the balancing PID) as its input and zero as its setpoint (assuming the robot shouldn’t be moving; this is really the RPM you’d like the motors to turn at, and it can be adjusted to change the robot’s speed). Its output is the setpoint of the faster, balancing PID controller: the angle that is considered “balanced”.
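In pseudo-ish code, the cascade looks like this (the gains, deltas, and helper names are all stand-ins):

```
float balance_setpoint = 0;  // output of the slow loop, input to the fast loop
float motor_pwm = 0;         // output of the fast loop, input to the slow loop

// every 10 ms: hold the robot at the current balance_setpoint angle
motor_pwm = ComputePID(imu_angle, kp_fast, ki_fast, kd_fast, balance_setpoint,
                       fast_error_sum, fast_last_input, 0.01);

// every 100 ms (a 10x longer delta): drive the average PWM toward zero by
// adjusting the angle the fast loop considers "balanced"
balance_setpoint = ComputePID(motor_pwm, kp_slow, ki_slow, kd_slow, 0,
                              slow_error_sum, slow_last_input, 0.1);
```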

This is all I have done so far, but I’m working on some more interesting behavior now that I’ve got a system that balances well enough.

An update on some projects! It’s been too long.

It’s been quite a while, I know. I’ve honestly forgotten to update here since starting a new career in electrical engineering back in the summer. But there’s good news: I haven’t stopped working on independent projects, and I have some cool things to show over the next few weeks. Some of my newer projects include a 4-foot-tall 2-wheeled balancing robot and a ‘bartending robot’ that uses an interesting pumping mechanism. I’ve also started to experiment with 3D printing as a tool for creating mechanical parts.

But for now I’d like to offer the latest video of my quadruped robot as a peace offering. This robot now gets a reaction somewhere between cute and creepy, but judge for yourself:

Designing a Quadruped Robot – Part 2

In this post I’m going to discuss the mathematics behind moving the legs of the robot correctly. But first, another quick video of the robot roaming free on battery power:

The key to getting a robot like this to move is finding a way to tell each foot exactly where you want it in 3D space. In order to do this, each joint in the leg needs to be rotated at exactly the right angle. The joint angles are calculated using inverse kinematics. In the case of this robot, each leg has three joints:


In order to simplify this from a 3D problem to a 2D problem, find the angle of the coxa joint first:

\theta_{coxa} = \arctan \left( \dfrac{y_{foot}}{x_{foot}} \right)

Find the distance from the femur joint to the foot, ignoring the coxa length. This gives the x position of the foot for the simplified 2DOF problem:

x_{simplified} = \sqrt{x_{foot}^2+y_{foot}^2}-l_{coxa}

Find the length of the imaginary leg in the 2DOF problem:

l_{imaginary} = \sqrt{x_{simplified}^2+z_{foot}^2}

Find the angle of the imaginary leg in the 2DOF problem:

\theta_{imaginary} = - \arctan \left( \dfrac{z_{foot}}{x_{simplified}} \right)

Use the law of cosines to find the tibia joint angle:

\theta_{tibia} = \arccos \left( \dfrac{l_{femur}^2+l_{tibia}^2-l_{imaginary}^2}{2(l_{femur})(l_{tibia})} \right)

Use the law of cosines to find the angle between the imaginary leg and the femur, and then subtract out the imaginary leg angle to get the femur joint angle:

\theta_{femur} = \arccos \left( \dfrac{l_{femur}^2+l_{imaginary}^2-l_{tibia}^2}{2(l_{femur})(l_{imaginary})} \right) -\theta_{imaginary}

Now we have the three joint angles for the leg: \theta_{coxa}, \theta_{femur}, and \theta_{tibia}. All that’s left to do is convert each angle to something the servo understands. Since the Dynamixel AX-12A servos accept a value from 0 to 1023 over a range of 0 to 300 degrees, the following equation can be used to convert radians to a Dynamixel value:

position = \dfrac{3069\theta_{joint}}{5\pi}

And that’s it for the leg IK equations. Simply plug the x, y, and z coordinate values into the above equations and solve. Plug the resulting angles into the last equation to get the position values for the servos, and the foot will move to the desired position.

Next time I’m going to explain how to use this to make the robot walk forward.

Designing a Quadruped Robot – Teaser Video 2

Within a week or so I should be done designing a fancy new chassis for my quadruped robot with some neat fixes and features, so stay tuned!

In the meantime, I’ve taken a short video of a basic walking gait. The idea here was to maximize the size and symmetry of the stability geometries. Before lifting each leg, the robot shifts the center of its chassis over the centroid of the stability triangle created by the grounded legs in order to maintain balance.

Designing a Quadruped Robot – Part 1


With the recent addition of a CNC router to my workbench, I wanted to start a project that really made use of its capabilities. I felt like a walking robot would be a great mechanical design challenge as well as a fun programming exercise.

My initial design goals for this robot:

  • Either 4 or 6 legs
  • At least 3 degrees of freedom per leg
  • Controlled by a Raspberry Pi
  • Most mechanical components made on my CNC router
  • Use robotics servos such as Dynamixel or Herkulex (mainly to gain experience using them)
  • Maximize on battery capacity (within reason)

After looking at the price of robotics servos, I decided this robot would definitely have 4 legs with 3 degrees of freedom per leg; the cost would already be pretty high at that point. Luckily the Raspberry Pi is cheap…

I considered both the Dynamixel AX-12A and Herkulex DRS-0101 servos and researched them extensively. I was leaning toward the Dynamixels because they’ve been used for many quad/hexapod robots and seem to work well for others. That said, the Herkulex servos look awesome and I’ll probably try them out next time.

I initially bought 3 Dynamixel AX-12As so that I could play around with leg designs. Within a day or so I had a prototype:

A freshly cut prototype leg

A freshly cut prototype leg

The prototype leg assembled with the three Dynamixel AX-12A servos

The prototype leg assembled with the 3 Dynamixel AX-12A servos

The gray brackets were included with the servos. Eventually I may try to make one unified bracket, but it works very well for now. In this design the femur (connecting the two servo horns) and the tibia (which comes to a point that acts as the foot) each have an effective length of 100mm. It looks a little long, but these servos have 16.5 kg-cm of holding torque which should be plenty as long as the body doesn’t get too heavy. The main flaws in this prototype were that the ends of the femur that mount to the servo horns were so large that they covered up the connectors on the back, and the support plate that spans the two femur pieces was too short. But the prototype gave me enough confidence to order the remaining nine servos.

I designed a simple body that would hold the four legs and the Raspberry Pi, and redesigned the legs to address the problems I found in the prototype. Once the rest of the servos arrived I put everything together, resulting in this thing:

Putting the mechanical pieces together for the first time

Putting the mechanical pieces together for the first time

When I looked at it at first, I felt like the legs were way too long. I thought about it for a while, and came up with something that required me to design a new type of foot:

New foot after cutting

New foot after cutting

Quadruped with new foot design

Quadruped with new foot design

This used almost all of the brackets that came with the servos, and ended up looking pretty awkward in my opinion. In addition to that, the legs are barely any shorter. Back to the old design.

In part 2, I’m going to talk about the theory behind making a robot like this move. So dig out your trigonometry textbook and stay tuned…

Designing a Quadruped Robot – Teaser Video

Over the past month I’ve started working on a quadrupedal walking robot, and I’m going to be chronicling my progress over the course of a few blog posts. But for now, here’s a teaser video demonstrating my inverse kinematics engine. Stay tuned!

Simple BeagleBoard-xM Based Robot

In my quest to start building smarter robots, I wanted to explore some concepts using parts I had lying around. I bought a BeagleBoard-xM over a year ago for this very purpose but never got around to it. When I got my CNC router, though, I figured this would be the perfect first project.

At minimum I wanted this robot to have a camera (for OpenCV experiments), a “2-wheel plus caster” configuration similar to my cartographer robot, and a computer capable of running OpenCV (plus any necessary connections for controlling the motors). With the BeagleBoard-xM, I get 4 USB ports, an RS-232 port, a GPIO header, and a whole bunch of other stuff.

I went hunting through my collection of robotics parts for things I could use and ended up finding some good stuff I bought throughout the years but never used:

Because these things all had nice mounting holes, I didn’t have to make any weird brackets. I bought some sheets of HDPE from Amazon and designed a simple 2-layer chassis to cut on the router.

A few pictures to show the process:

Just finished cutting a prototype robot chassis

Just finished cutting a prototype robot chassis

Chassis after cutting on router

Chassis after cutting on router

Pieces punched out

Pieces punched out

Beaglebot prototype

Beaglebot prototype

From the side showing BeagleBoard-xM

From the side showing BeagleBoard-xM

I had to make a wire harness with a built-in regulator to bring the 7.2V from the NiCd battery’s Tamiya connector down to 5V at a barrel jack for the BeagleBoard-xM. The harness also provides 7.2V through the proper power connector for the servos.

For now, I just have a simple Python socket server running on the BeagleBoard-xM which forwards any data it receives directly to the servo controller. This allows a Processing program running on a PC to issue commands to the servo controller so the robot can be driven around. Video from the camera is streamed over WiFi using Motion, which lets the operator see where the robot is going. In the future this robot will be my testbed for object tracking experiments.