Jon's Place

Saturday, December 30, 2006

Brain Engineering

I have started working with some really interesting people (hi Andrew!) from the Brain Engineering Lab at Dartmouth College in New Hampshire. It turns out we share some common thoughts on how intelligence works, and I'm going to help them solve some low level hardware and software problems they have (which I also have), and they are providing me with some hardware that will help with MicroRaptor.

They are building an advanced Bioloid humanoid robot as well, and the wireless bridge my brother and I are doing will solve one of their problems, which is how to get high speed low latency communications between a desktop/laptop and the Bioloid robot.

Friday, December 29, 2006

New Bioloid Wiki

I have decided to create and host a new wiki specifically to address the Bioloid community.

http://www.bioloid.info

This is a community wiki, and as such will need the input and efforts from the community at large to be a useful resource.

Wednesday, December 27, 2006

Higher Detail CAD Models

I found (thanks to a post on RoboSavvy) the link on the Robotis site where they have high-detail models of the AX-12, the AX-S1, and a couple of the brackets. I've imported those into Rhino, and I'm providing a link to those four files in Rhino format.

BioloidParts-HighDetail-3DM.zip (2.98 MB)

Tuesday, December 26, 2006

CAD Model Asleep


I was playing with the CAD model some more, just to see if I could get a decent "sleep" or rest pose. I think this qualifies pretty well. Of course, the real trick is going to be seeing if it can get up from this pose.

Sunday, December 24, 2006

Side View of CAD Model

I exchanged a couple messages with someone who was commenting on the weight balance of MicroRaptor. I thought it would be useful to post a render of MicroRaptor from the side, so you can get a better feel for how everything looks with respect to front/rear balance.


One of the nice things is the robot will be able to self-balance to a certain extent, by changing the positions of certain leg servos. The gross level balancing can be done with positioning the hips forwards or backwards along the body, although there isn't a lot of room to adjust it there. The tail has two pieces that I will be machining as well (the cylinder between the servos, and the cone at the end). Depending on need, I can machine those two pieces from a variety of materials (which have a variety of densities), including Delrin, aluminum, and brass. I can use a heavier material and drill a hole in the center, or even mix materials if required.

Updated CAD Model

Here's a new version of the CAD model. Fixes include changing the main body layout, narrowing the hips by a little bit, and much more detail on the PCB. The circuit board has the following components:
The 3.3 volt components on the board are the Wifi module and the 5-axis IMU. Both ATmega128 boards will be run at 5 volts (the Wifi module is 5 volt tolerant for SPI). Since the batteries only provide 9.6 volts, I'll be providing a 12 volt switch-up regulator to provide power for them. I will probably have to modify the board slightly to accommodate 2 of the switch-up regulators, depending on how much current the cameras take. The IMU module will be a custom sensor on the AX-12 bus.

Saturday, December 23, 2006

CAD Components

I'm putting versions of the individual brackets and parts that I used to build the MicroRaptor model up on my web site. I'm including the five types of bracket, the AX-12, and the foot in each package.

I've provided 3 different versions of each package:
Hopefully these will help other people who are interested in these kits. Note that the parts are all taken from the Direct-X derived model posted to a thread on RoboSavvy.

MicroRaptor CAD Model - Walking

Here's a rendering of MicroRaptor, with the legs in what I hope will be a decent walking gait. With the spread apart hips, it can tuck its legs under its body to maintain balance while walking, much the same as people do.

I will need to play with the position of the hips (forwards and backwards), to help balance the robot while it is standing and walking with those two cameras hanging off the front.

Eventually, I plan to add a small actuator to the head to allow the cameras to be pointed in towards each other (again, like people do). This will allow the robot to estimate distance and size of objects that are relatively close, and will hopefully make it easier to do landmark-based navigation.

MicroRaptor CAD Model

So, after a busy week, I finally got some time to work on something I've wanted to work on for a while - a CAD model for MicroRaptor.

I like to design all my robots in CAD before I build them - there are far fewer surprises that way, and you can "test fit" parts together, and check for clearances, before actually starting to build a single part. For MicroRaptor this is especially important, because some of the parts are going to be custom made.

This image shows what I think MicroRaptor will look like. I've decided to put a pair of wireless cameras on it, and a pair of batteries as well - one for the servos, one for the electronics. The feet in the CAD model are the standard Bioloid feet, which I'm not going to be using.

The electronics board on the top has the ATMega128 board on it, as well as the Wifi module. There is plenty of space on the other side for voltage regulators and power switches.

Sunday, December 10, 2006

Wireless

I've decided, as a result of my previously mentioned conversation, to make MicroRaptor wireless, without using the gumstix at all. The past couple weeks of playing with the gumstix has reminded me of what a pain it is to develop on a truly headless platform. I'm going to interface a Wifi module over SPI to an ATMega128 board, which will interface to the Bioloid bus. The SPI can interface at 1 Mbps, so the connection from my laptop to the Bioloid bus will be at least 1 Mbps throughout.

The only real worry about this setup is the latency over the wireless link. If it proves to be too much, I will implement a force-driven actuator system on the AVR, so the latency from the PC will not seriously affect things. I may end up doing that anyways, so that the actuators act more like series elastic actuators.
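To make the force-driven fallback concrete, here's a minimal sketch (in Python for readability; the real loop would be C on the AVR, and the function name and gain are invented for illustration):

```python
# Sketch of a local force-control loop that could run on the AVR, so that
# wireless latency only affects force *setpoints*, not the inner loop.
# The function name and gain are hypothetical, not actual firmware.

def force_control_step(target_force, present_load, present_position, gain=0.05):
    """One iteration of a simple proportional force controller.

    The PC sends target_force over the (possibly laggy) wireless link;
    this loop runs locally at servo-bus rates, nudging the goal position
    until the measured load matches the commanded force.
    """
    error = target_force - present_load
    # Move the goal position in proportion to the force error.
    return present_position + gain * error

# Commanded force exceeds measured load, so the goal position advances.
new_goal = force_control_step(target_force=100, present_load=60, present_position=512)
```

Because the inner loop closes locally, a few tens of milliseconds of Wifi latency only delays changes in the commanded force, not the servo's reaction to a disturbance.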

This decision has several interesting side effects. One is that I can use a wireless camera, with a USB video capture card. That allows me to capture much higher resolution video, at decent frame rates, with full color. Another is that it will be much simpler to control the robot directly, say by using a joystick, which will help when I am working on the motion control.

Of course, the most interesting side effect is I can, assuming that the mechanicals work out the way I hope they will, delve much further into what I'm really trying to accomplish with this robot and its successor - real hard-core artificial intelligence.

Thursday, December 7, 2006

AX-12 Administration

One of the first things I am doing right now is building a user interface to allow me to configure a single servo. I'm assuming the CM-5 module that comes with the Bioloid kit includes something like this, but I don't have one, so I need to make it. The way this application works is by connecting my laptop directly to the robostix in place of the gumstix, using a USB -> TTL serial converter I bought from HVWTech. My laptop is talking to the robostix at 115,200 baud, and of course since the same code that runs on my gumstix also runs on my laptop, I didn't have to do any more work to use my command infrastructure, other than change which serial port the program is opening.

Right now I can query a single servo attached to the bus to find out what its ID is, and then retrieve all its parameters. You can use the little servo widget to drag the servo arm around in real time, and the real servo follows. The parameters that are read only show up in gray, and the editable ones have a white background. You can double-click on the editable ones to change them, and the servo is updated.
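Under the hood, talking to a servo means framing instruction packets the way the AX-12 manual describes: two 0xFF header bytes, the ID, a length, an instruction, parameters, and an inverted-sum checksum. Here's a rough sketch in Python (the Present Position register address is from my reading of the manual; double-check specifics before relying on them):

```python
def ax12_packet(servo_id, instruction, params=()):
    """Build an AX-12 instruction packet per the manual's framing:
    0xFF 0xFF, ID, LENGTH (param count + 2), INSTRUCTION, parameters,
    then the inverted low byte of the sum of everything after the header."""
    length = len(params) + 2
    body = [servo_id, length, instruction, *params]
    checksum = (~sum(body)) & 0xFF
    return bytes([0xFF, 0xFF, *body, checksum])

PING = 0x01
READ = 0x02

# Ping the servo with ID 1:
pkt = ax12_packet(1, PING)
# Read 2 bytes starting at Present Position (address 36 in my reading
# of the control table):
pos_query = ax12_packet(1, READ, (36, 2))
```

The admin tool's parameter grid is basically a series of READ packets like the second one, walking the servo's control table.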

This will allow me to experiment with changing various parameters like compliance, and seeing how they affect the servo operation and feel. One thing that I am going to add to this window shortly is the ability to export the set of servo parameters to a file, so once I get something I like I can easily apply it to all the servos.

On-board versus Wireless

One of the interesting things about building an advanced robot today is there are so many choices available. I've been participating in a thread on RoboSavvy with another person who wants to use the Bioloid kit for building an autonomous robot, but doesn't want on-board processing. Instead, he wants a high-speed wireless link to the Bioloid bus over Wifi, so he (and his students) can program and operate the robot from a PC.

So, to accomplish that, you would need:
In order to get a high-speed link (1 Mbps) with the Bioloid bus from your PC, you would need to use the SPI connection between the Wifi module and the ATMega128. Apparently, the robostix can't run SPI as a master, so I'll have to use a different ATMega128 board, like the one linked above. That would require re-factoring the code my brother Dave wrote that runs on the robostix, so that it listens to the SPI port rather than the hardware UART. The nice side effect is that it would leave the extra hardware UART free, which for me means I could plug the digital camera directly into that, and have frames from it feed back over the wireless link as well.

The only real issue with this kind of a setup versus using a gumstix is the latency introduced with the wireless link. Since I'm going to be doing dynamic balancing, having low-latency response is important. Having the brain run on my laptop instead of on the gumstix would hugely simplify development of it, and I could go a lot further, given how much more memory, processing power, and hard drive space I would have available.

Everything would be easier to code and debug, because I could have interactive user interfaces that run in real-time alongside the controller, to monitor exactly what it is doing and what it is seeing. A downside is the Wifi module chews power (probably close to half an amp), so the battery won't last as long. But batteries aren't hard to swap on a robot like this, so I don't view that as a huge problem.

Wednesday, December 6, 2006

AX-12 from gumstix

Last night I was able to get my test harness set up, and I was able to successfully ping an AX-12 from my gumstix code written in Squeak. I was also able to send a bunch of control commands to the AX-12, and have it move its arm through a sequence.

A big kudos goes to my brother, Dave, who wrote the glue code that runs on the robostix. Basically, my Smalltalk code on the gumstix talks to the robostix over the serial port at 115,200 baud, and the program running on the robostix forwards whatever it gets over that port to the Bioloid bus, at 1 Mbps. It also passes back to the gumstix anything the Bioloid bus sends back in response. One key point to this is you do not need the interface hardware mentioned in the AX-12 manual - you only need to hook together the Rx and Tx pins on the robostix, and they can talk on the bus automagically.
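For anyone curious, the forwarding logic conceptually boils down to something like this (a Python sketch with stand-in port objects, not Dave's actual AVR C):

```python
# Conceptual sketch of the robostix bridge: bytes arriving from the gumstix
# at 115,200 baud get re-sent on the 1 Mbps Bioloid bus, and responses flow
# back. The port class is a hypothetical stand-in for the two UARTs.

class FakePort:
    """Stand-in for a UART with a receive buffer and a send log."""
    def __init__(self, incoming=b""):
        self.incoming = bytearray(incoming)
        self.sent = bytearray()

    def read_available(self):
        data, self.incoming = bytes(self.incoming), bytearray()
        return data

    def write(self, data):
        self.sent += data

def bridge_step(gumstix_port, bioloid_bus):
    # Host -> bus: push any pending command bytes onto the servo bus.
    data = gumstix_port.read_available()
    if data:
        bioloid_bus.write(data)
    # Bus -> host: relay any servo status packets back up.
    reply = bioloid_bus.read_available()
    if reply:
        gumstix_port.write(reply)

host = FakePort(incoming=b"\xff\xff\x01\x02\x01\xfb")  # a ping from the PC
bus = FakePort(incoming=b"\xff\xff\x01\x02\x00\xfc")   # a status reply
bridge_step(host, bus)
```

The real trick, as noted above, is the half-duplex bus wiring; the software side really is just a relay loop like this.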

Dave has posted his source code here and here (the second link is for a referenced file, CBUF.h). Anyone who wants to control a Bioloid robot from a gumstix should pick this up, and let him know you did.

Tuesday, December 5, 2006

AX-12

My pair of AX-12 servos arrived today :-)

Once I get a few pieces of hardware put together, I'll be able to do some limited testing. I've ordered a USB -> TTL level converter from HVWTech, and once I get that I will be able to test out the servos more thoroughly from my laptop.

Mechanical & Electronics

So, following are my thoughts on things mechanical. Bear in mind I have no practical experience building biped robots, but I have been building mechanical stuff since I was a kid.

I'm going to be machining a bunch of parts for MicroRaptor, simply because I'm not happy just using what they provide. Many of the servo connection parts in the kit will work fine, but there are other structural pieces I will need to do. For example, I will be machining a custom "backbone" for this raptor, which will hold the whole thing together. Because I will be using a custom battery pack, and custom control electronics, what they provide with the kit won't cut it.

Another part I will be building from scratch is the head. Because I will be using custom range sensors, plus a digital camera, I'm going to be machining whatever structural components I need to hold those parts together. You can see a CAD rendering of what I think the head might look like here. It has three MaxSonar-EZ1 sonar sensors, a C328-7640 digital camera, and a nozzle for blowing air (for doing firefighting competitions). The big black thing at the back is an AX-12 servo, which is used for pitch control of the head. Right now the plan is that the 3 sonars and the camera will be held to the main board using molex headers that plug into sockets on the board. The board will be held to the servo using stand-offs.

I'm still trying to decide if I'm going to use a single central micro-controller to handle interfacing to all my custom sensors/actuators, or if each sensor group/actuator will get its own controller. If I go with a central controller, here's what will have to be on the board:
  • 2 pin terminal block for battery
  • 3 pin terminal block for bus
  • 2 pin SIP for external motor control
  • 2 pin terminal block for aux switched power
  • 4 pin SIP socket for bump sensors
  • 3x4 pin SIP sockets for sonars
  • 4 pin SIP socket for I2C bus
  • 4 pin SIP socket for console
  • 4 pin SIP socket for HW UART (programming)
  • 2 switching regulators (5 volts, 3.3 volts)
  • 2 pin terminal block for 5 volts out
  • 2 pin terminal block for 3.3 volts out
Seems like a lot, but if I use a micro-controller like a PIC 16F876, the plugs and terminal blocks just get clustered around the chip, and the board can still end up a reasonable size. After looking at the data sheets, I discovered that an 876 running at 16 MHz can use the hardware UART to talk at 1 Mbps, which is the Bioloid bus speed. However, if I'm going to use this, I will need a switching chip, which basically allows disconnecting either Tx or Rx while the other is in use.
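The baud rate math falls out of the 16F87x data sheet's asynchronous-mode formula. A quick Python sanity check (verify against your own copy of the data sheet):

```python
def pic_async_baud(fosc_hz, spbrg, brgh=1):
    """USART baud rate for a PIC16F87x, per the data sheet's formula:
    high speed (BRGH=1): Fosc / (16 * (SPBRG + 1))
    low speed  (BRGH=0): Fosc / (64 * (SPBRG + 1))"""
    divisor = 16 if brgh else 64
    return fosc_hz / (divisor * (spbrg + 1))

# With a 16 MHz clock, SPBRG = 0 in high-speed mode lands exactly on the
# 1 Mbps Bioloid bus speed:
bus_rate = pic_async_baud(16_000_000, spbrg=0)
```

Landing on exactly 1 Mbps with zero baud-rate error is what makes the 16 MHz clock choice attractive here.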

Another mechanical thing I plan to do is build new feet for the robot, with rubber pads on the bottom. I think this whole sliding-around thing that most of these robots do is silly. I realize that many of them need it, because otherwise you have to be a lot more dynamic with balance control. Since I plan to be more dynamic on balance, I plan to have my robot be very sure-footed. The type of walking I'm planning for it pretty much requires that.

Monday, December 4, 2006

Motion Control

Motion control is a biggie for bipedal robots, and it's the thing that I have spent the most time on in the design of MicroRaptor. I firmly believe that motion control is the key to building an intelligent robot. Now, I have no expectation that MicroRaptor will become "smart" - it's really not intended for that. It is intended as a relatively cheap platform to test my motion control system on, and also to check out the overall architecture of the rest of the system.

Series elastic actuators, which I described earlier, are a key part of my control system. The Bioloid AX-12 servos should be able to provide a usable simulation of an SEA, with the appropriate control feedback loops written on my gumstix. A proper SEA is a smart actuator, with an onboard micro-controller that handles the force feedback internally.

One of the major issues with walking is of course maintaining balance. In MicroRaptor, the six-axis IMU will be used to sense balance. I am implementing a generalized sensor query system (mentioned before as the pattern matching system), and it can handle far more than just visual patterns. It will be capable of looking for patterns in any of the sensors, and be able to react accordingly.

Actually, the sensor query system doesn't do any reacting - it just tells the guy who set up the query that it has triggered. In the case of balance, the motion system will set up a permanent sensor query that watches the 6-axis IMU output, and triggers the motion system to react when the sensor tells it the robot is out of balance. Reacting might be as simple as applying more force to the foot servos in one direction, or swinging the head/tail to try and compensate, or as complex as moving a leg out to catch balance.
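Here's a rough sketch of how I picture a sensor query working (all names and thresholds are invented for illustration):

```python
# Sketch of the sensor-query idea: a query doesn't react itself, it just
# notifies whoever registered it. The predicate and callback are supplied
# by the subsystem that cares (here, a stand-in for the motion system).

class SensorQuery:
    def __init__(self, predicate, on_trigger):
        self.predicate = predicate      # the pattern to watch for
        self.on_trigger = on_trigger    # callback into, e.g., the motion system

    def update(self, reading):
        if self.predicate(reading):
            self.on_trigger(reading)

events = []
balance_query = SensorQuery(
    predicate=lambda imu: abs(imu["pitch"]) > 10.0,      # "out of balance"
    on_trigger=lambda imu: events.append(("rebalance", imu["pitch"])),
)
balance_query.update({"pitch": 2.0})    # within limits: no trigger
balance_query.update({"pitch": -14.5})  # fires the motion system's callback
```

The balance query would be one of the permanent ones, re-armed after every trigger; most queries would be transient, set up for a specific goal and torn down afterwards.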

As the robot gets "older", its sense of balance will get better. What I mean by that is the motion profiles used for standing still, walking, running, etc, will be self-tuned by using a genetic-programming technique to improve. Gait smoothness, which is really another word for balance, is one of the performance measures for the genetic system to test against.

To start off, however, I will be kick-starting the process of learning to walk. I plan on building some fairly complex and powerful visual gait editors, to allow me to look at motion samples captured as I move the legs, and to tune those captured samples. I will also be writing software to convert between positional control (which is what I will start with) and force control, which is how it will eventually work.
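Just to illustrate the evolutionary idea, here's a toy sketch: a made-up three-parameter "gait" and a stand-in smoothness score, not the real system (the real score would come from the IMU while the robot actually walks):

```python
import random

# Toy sketch of self-tuning gaits: a population of parameter sets evolves
# against a smoothness score. Everything here is illustrative.

def smoothness(params, ideal=(0.5, 0.5, 0.5)):
    """Higher is smoother; peaks when params hit a hypothetical ideal gait."""
    return -sum((p - i) ** 2 for p, i in zip(params, ideal))

def evolve(population, generations=50, rng=None):
    rng = rng or random.Random(0)   # seeded for repeatability
    for _ in range(generations):
        population.sort(key=smoothness, reverse=True)
        survivors = population[: len(population) // 2]    # keep the best half
        children = [tuple(p + rng.gauss(0, 0.05) for p in parent)
                    for parent in survivors]              # mutated copies
        population = survivors + children
    return max(population, key=smoothness)

seed_gaits = [(0.1, 0.9, 0.2), (0.8, 0.1, 0.7), (0.3, 0.6, 0.9), (0.9, 0.4, 0.1)]
best = evolve(list(seed_gaits))
```

Because the best half always survives, the top score never regresses between generations, which is the property that makes it safe to leave tuning like this running unattended.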

Sunday, December 3, 2006

Software

The autonomous controller I am writing for MicroRaptor is a totally new way (for me) of writing this type of code. I've tried, for everything in this controller, to think about how a person would think.

The main areas in this "brain" are for motion control, short term memory, and long term memory. Short term memory is a very limited place (basically, 6-8 slots) to put things from long term memory that are being actively "thought" about, or worked on. For instance, the current goal that is being accomplished will take up one slot, and will get processing cycles. Part of accomplishing goals may involve going somewhere, so the current navigation engine will get a slot. The physical act of moving involves playing back "muscle memory", which is a fancy way of saying the forces being applied by the actuators over a span of time, so the motion engine will get a slot. Doing landmark-based navigation involves recognizing certain patterns that the sensors "see", so the patterns that are being watched for get a slot as well.
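A rough sketch of the slot idea (capacity and item names are just illustrative):

```python
# Sketch of short-term memory: a handful of slots holding items pulled
# from long-term memory, each of which gets processing cycles per tick.

class ShortTermMemory:
    def __init__(self, capacity=7):     # somewhere in the "6-8 slots" range
        self.capacity = capacity
        self.slots = []

    def focus(self, item):
        """Bring an item into a slot, evicting the oldest if we're full."""
        if item in self.slots:
            return                       # already being thought about
        if len(self.slots) >= self.capacity:
            self.slots.pop(0)            # oldest thought falls out
        self.slots.append(item)

stm = ShortTermMemory()
stm.focus("goal: blow out the candle")
stm.focus("navigation: route to the next room")
stm.focus("motion: walking profile")
stm.focus("pattern: doorway on the left rangefinder")
```

The eviction policy is the interesting design question; oldest-out is the simplest placeholder, but something salience-based would be closer to how attention actually works.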

Some aspects of this version of the system are much more hand-coded and set up than the real version will be. Unfortunately, the gumstix just doesn't have the storage capability that the real system will have, so things have to be a little more explicit, and self-directed learning will be mostly suppressed for now.

The thing that this system is intended to do is prove that the overall architecture works, and that the motion system I am building does the job. The motion system is one part of this version of the controller that will be all-out in terms of capabilities. The robot will, once it can walk at a basic level, be able to self-tune motion profiles for smoothness and efficiency, using a technique that will look something like an evolutionary system.

The goal system and the navigation system are basically special cases of the general knowledge representation system I am building. Navigation will be self-directed, but the map will have to be set up manually. The map system is vector-based, and the robot will make no effort to build an accurate 3D map, nor will it attempt to ever determine exactly where it is. The simple fact of the matter is, building "accurate" 3D maps of the robot's environment suffers from the same problem that actuator rigidity suffers from - the environment is far too dynamic and changeable to be worth the effort.

Think about how we get to someone's house, if we've never been there before... "Take a left at Jackson Street, then the second stop sign is where you turn right onto Builders Lane, and my house is the fourth one on the left, and there will be a Jeep parked out on the road, and I've got a basketball net on the side of my driveway, and the street number is 442."

Think about how a typical robot would solve this:

"Turn to bearing 275 degrees at GPS coordinate XX.XXXX, YY.YYYY, continue on that bearing for 326.3 meters, turn on a bearing of 72 degrees, continue on that bearing for 183 meters, then stop."

This robot will navigate much more like the first example, although on a much smaller scale. Take for example the firefighting competition - most robots do exactly what the second example does, using precise encoders to measure exactly how far the robot has gone before it has to turn. From my perspective, it makes a lot more sense to say something like "Go straight, maintain a distance of 20 cm from the right wall, and continue until the left rangefinder reports a large change in value, which indicates a doorway."
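That kind of rule is easy to express in code. Here's a sketch (distances, thresholds, and the gain are all made up):

```python
# Sketch of one landmark-style rule from the text: go straight, hold about
# 20 cm off the right wall, and stop when the left rangefinder jumps,
# which signals a doorway. All numbers are illustrative.

def follow_right_wall(right_cm, left_cm, prev_left_cm,
                      target_cm=20.0, doorway_jump=30.0, gain=0.02):
    """Return (steering, done). Positive steering turns toward the right
    wall; done becomes True when the left reading changes enough to
    indicate a doorway."""
    if abs(left_cm - prev_left_cm) > doorway_jump:
        return 0.0, True                           # doorway found: stop
    steering = gain * (right_cm - target_cm)       # drift back to 20 cm
    return steering, False

# Drifted out to 25 cm from the right wall: steer gently back toward it.
cmd, done = follow_right_wall(right_cm=25.0, left_cm=40.0, prev_left_cm=41.0)
```

Notice there is no odometry anywhere in it: the doorway itself is the landmark, so wheel/leg slip and drift simply don't accumulate the way they do with dead reckoning.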

Long term memory is a place to store things that the robot will need when it is trying to accomplish a goal. The controller will be able to look up things in long term memory associatively, by following connections. In the beginning, many of those connections will be hard-coded, but eventually new connections will be made. Long term memory will be stored in an object-oriented database, and so things that it learns, and connections that are made will be persisted between sessions. When this robot is powered up, it will not be starting with a blank slate. One of the first things I will have to tell it, each time I power it up, is where it is. It will have a representation of the world it knows about (vector-based, with nodes and paths), so once it knows where it is, it will be able to figure out how to get anywhere else in that "world".
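A toy sketch of that node-and-path lookup (the rooms and connections are invented):

```python
from collections import deque

# Sketch of the vector-based world map: nodes connected by paths. Once the
# robot is told where it is, it chains connections to reach anywhere it
# knows about. This map is made up for illustration.

WORLD = {
    "hallway": ["kitchen", "living room"],
    "kitchen": ["hallway"],
    "living room": ["hallway", "office"],
    "office": ["living room"],
}

def route(world, start, goal):
    """Breadth-first search over the known paths; returns a list of nodes."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in world.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None   # no known connection

plan = route(WORLD, "kitchen", "office")
```

In the real system the edges would carry landmark descriptions ("doorway on the left", "Jeep parked out front") rather than being bare links, but the associative walk is the same.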

MicroRaptor

MicroRaptor will have 20 AX-12 actuators, in the following configuration:

7 for each leg - 3 at the hip, one at the knee, one at the ankle, and two at the foot
4 for the neck/head
2 for the tail

Ideally I would add one or two more to the tail, but I'll have to see how it all works out first.

It will be powered by an 8-cell, AA NiMH battery pack, which provides 9.6 volts. The battery will be slung vertically under a backbone that I will have to machine (probably out of aluminum). The legs will be fastened to each side of the backbone, keeping in mind that the backbone on a velociraptor is more or less horizontal. The neck servos will be fastened to the front end of the backbone, and the tail to the back end, behind the legs.

The gumstix/robostix will be attached to one side of the battery, and the other side will have an electronics box which will hold the six-axis IMU/compass and a couple of switching voltage regulators (5 volts and 3.3 volts).

Each sensor or sensor group in this robot will be on the main bus, just like the AX-12 actuators. The IMU/compass will be treated as a single bus sensor. The three range finder sonars in the head, along with the bump sensor, will also be a single bus sensor. Each of these "bus sensors" will have its own dedicated micro-controller, most likely an AVR ATMega8, to communicate on the bus.

The camera is not going to be on the bus, simply because of the high volume of data the robot will be receiving from it. It will have its own dedicated serial port on the gumstix, and I will either be writing or using a vision package to do interesting things with it. One package I have my eye on is EmbedCV, since it is being written for exactly the type of hardware I am using.

One of the things I plan on doing with this robot, after it can walk in a reasonable fashion, is to have it compete in the firefighting competition. I will build a ducted fan "actuator" that will sit on its back, that the robot can turn on and off over the bus. It will push air through a flexible hose that will run along the robot's neck and through the head, to a custom nozzle that will spray the air out at high pressure. The camera will be used for landmark based navigation, and for finding the candle, and ensuring that it gets blown out.

First though, the robot has to learn how to walk. I will be doing a bunch of work in the beginning to give it a reasonable walking gait, but after that the robot will use evolutionary techniques to improve both the smoothness of the gait and its efficiency (in terms of power usage).

I think one of the main problems with most walking robots today is that they don't walk right. If you look at any of the Robo-One type robots, and any of the high-end research bipeds that I have seen video of, they all share one common trait - the body of the robot sways back and forth as they walk. If you were to mount a video camera on a tripod, and stick it on a busy sidewalk, I guarantee that you won't see that kind of body sway in most people. When you watch people walk, you see their heads bob up and down, but they don't sway from side to side at all. This is the reason, in my opinion, why robots have such a hard time walking smoothly, and I'm pretty sure I have come up with a way to avoid it. Time will tell of course, since I haven't ever built a biped, but I have spent considerable time thinking about this problem. I'm not going to say any more on this topic until after I have tried my technique, but of course at that time I will post the results here...

Series Elastic Actuators

In this post, I'm going to talk about series elastic actuators, and why I think they are important.

Many robots, including some of the most advanced bipeds out there, use gearmotors of one kind or another for actuators. Well-made gearmotors can make for some fantastically precise robot joints, but they all suffer from a common problem - they are rigid, and they make rigid joints. These properties make a robot good at repeatable tasks in a perfectly controlled environment, but not so good in the real world, where we operate. If you look at a lot of the servo-actuated humanoids out there, the actuators are almost all positional. You move a joint by specifying a new position for the servo, and it does its best to move to that exact spot. Once again, this makes for good repeatability, but not good flexibility.

A person's muscles are very stretchy, and provide a natural level of shock absorption. The other important feature of muscles is they are what I call "force-driven" rather than positional. You don't move your arm to a specific position - you apply force to the appropriate muscles, and use a visual or tactile feedback mechanism to determine how much or how little force to use. Once you've performed a specific action enough times, the nerves that control the muscles gain "muscle memory", which allows you to do the same motion with virtually no feedback required.

Series Elastic Actuators were invented in the MIT Leg Lab, and the company Yobotics has been spun off to commercialize the technology. SEAs are force-driven, and force-driven actuators have this really interesting property - if you set the desired force to be zero, the limb being controlled by the actuator goes "limp", and you can move it around manually, all the while sampling the amount of force being applied to the actuator in order for you to move it to its new position. Using SEAs gives your robot a natural flexibility that rigid servo-based or gearmotor-based robots don't have. I believe the combination of these two attributes (force-driven and flexible) will provide, with the right software and sensors behind it, the ability to walk with much more natural gaits, and also to do more advanced things like running, jumping, etc.

With the Bioloid AX-12 servos, I've heard that you can set up the compliance parameters to have a certain amount of "springiness", and by using the torque feedback to control them, you get a poor-man's series elastic actuator. I will be trying this once I get the two AX-12 servos I ordered, and will of course post the results here.
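Here's a toy model of how I imagine the springiness working (this loosely paraphrases the manual's compliance description; the ramp formula and all the numbers are my guesses, not Robotis's):

```python
# Toy model of the compliance idea: inside the margin the servo applies no
# torque, and outside it torque ramps up with the position error, capped
# at max torque. Treat the formula and constants as illustrative.

def compliant_torque(goal, present, margin=1, slope=32, max_torque=1023):
    error = abs(goal - present)
    if error <= margin:
        return 0                       # dead band: behaves like slack
    ramp = 1024 // slope               # gentler slope -> springier joint
    return min(max_torque, (error - margin) * ramp)

# A small deflection produces a small restoring torque (the "spring"):
soft_push = compliant_torque(goal=512, present=500)
# A large deflection saturates at full torque:
hard_push = compliant_torque(goal=512, present=300)
```

Pair a soft slope like this with the servo's torque feedback in an outer loop, and you get the poor-man's SEA behaviour described above: the joint gives under load instead of fighting it rigidly.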

MicroRaptor

This is my first blog post about my new project. Yeah, it seems like I start a lot of new projects, but this one is a little different. This is my first legged robot, my first biped, and my first attempt at integrating some serious AI into a robot.

This micro-raptor will be based on a Bioloid Kit, which I will order sometime (hopefully) early in the new year. I've ordered a couple of AX-12 servos already, and should be getting those sometime this coming week. Anyways, here's the plan from the hardware side:
  • Bioloid Kit (comes with 18 servos) plus 2 extra servos (total 20 degrees of freedom)
  • gumstix (400 MHz, with bluetooth, running Squeak)
  • robostix (running a program my brother Dave wrote, to interface to the Bioloid bus)
  • 6 axis IMU with compass (made from this and this and this)
  • digital camera (hooked to the gumstix)
  • 3 sonar rangefinders (mounted in the head, forward, left, and right)
  • bump sensor in the nose
That's it for the hardware side. Of course, the real thing that will make this robot different is the software. I've been researching for the past two years a new kind of autonomous controller, which works more like the way I think a person's brain works. This robot will not have enough processing power or storage capability to really push the brain software to the limit, but it should be a good start.

Anyways, I'll talk more about the software in a future post. Right now I'd like to get back to the hardware side. This project has a genesis from two different angles. The first angle is the robot I have been designing to go along with the brain software, which you can see a picture of next to this paragraph. It uses something called "series elastic actuators", which I believe are the best kind of actuators that you can actually get for a robot with limbs. I'll probably go into more detail in a later post about what I like about SEAs. Note that the robot pictured here is not MicroRaptor -- it is what I want to build eventually, but simply can't afford to build at this point.

The second angle for this robot comes from my friend Julian, a Smalltalk developer/fanatic like me who is heavy into underwater robotics and a whole bunch of other things we share in common. We've worked together in the past (professionally), and we hang out a lot, watching Firefly and building robots and doing cool design work on various things. Anyways, Julian saw an advertisement for the RoboNova, and basically went apeshit over it. He's wanted a small biped humanoid for a long time, but until recently they haven't been available.

I was looking around at the RoboNova at different sites, for pricing and to see what else was available in the Robo-One category of biped humanoids. I stumbled on a relatively new kit, called the Bioloid, made by a Korean company called Robotis. On the surface, it doesn't look much different than the other servo-driven humanoids, but after reading the specification sheet on the servos, I realized that they are completely different.

The AX-12, which is the custom-made servo that the Bioloid kit uses, is an amazing piece of technology. It is twice as powerful as the standard servo that the RoboNova kit comes with. Instead of using a standard pulse-width modulation signal, they use a serial bus, where each servo has an addressable ID. Using this bus, you can interrogate each servo and retrieve data from it, including the position, real-time rotational speed, internal temperature, and (most importantly) torque.

You can set various configuration parameters for each servo, including compliance, which specifies how closely it tries to maintain the position you tell it to go to. With the appropriate settings, you can make these servos work almost exactly like a series elastic actuator (assuming you put the right control software in the computer controlling them).

Once I realized the capabilities of these servos, I realized I could easily build a small velociraptor for a reasonable price, which will allow me to start experimenting with this new "brain" autonomous controller. Thus the MicroRaptor project was born.

I'm going to stop talking here, and save details for a future post...