
Friday, January 24, 2014

Lego Segway with minimal-order observer control

Self-balancing Lego robots are nothing new, but everyone uses PID controllers. I wanted to implement an observer controller to do something new and flex my controls muscles. 

I built a Mindstorms robot that uses a light sensor to measure light reflected off the floor and thereby the robot's tilt. This turned out to be finicky since I had to set the zero point manually, and ambient light variations screwed things up fairly often. It worked well enough in the end though.


Controller Design
A full-order observer controller uses a model of the system in the control loop, which allows us to observe state information that would otherwise be hidden in the actual system. We can then use that state info in the feedback to reduce the error, which now incorporates both the system and model outputs. This can be a robust way to control high-dimensional systems while also being able to inspect the (estimated) states for useful insights.

However, we may not actually need all the state information. A minimal-order observer (aka functional observer) still uses a model, but requires fewer poles to be chosen than a full-order controller. That simplifies design and eliminates the need to compute state-space transformation matrices.
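For the curious, here's the basic observer idea in Python (the matrices and gains below are placeholders I made up for illustration, not my actual robot model): the observer runs the model forward and corrects its state estimate using the mismatch between the measured and predicted outputs.

```python
import numpy as np

# Hypothetical discretized tilt model: state is (tilt, tilt rate).
# These numbers are illustrative, NOT the real robot's parameters.
A = np.array([[1.0, 0.01], [0.3, 1.0]])   # state transition
B = np.array([[0.0], [0.05]])             # effect of motor command
C = np.array([[1.0, 0.0]])                # we only measure tilt (light sensor)

# Observer gain, chosen so (A - L C) has stable poles (placeholder values).
L_gain = np.array([[0.5], [1.2]])

def observer_step(x_hat, u, y):
    """One observer update: run the model forward, then correct the
    estimate using the measurement error y - C x_hat."""
    y_err = y - (C @ x_hat)
    return A @ x_hat + B * u + L_gain * y_err
```

The feedback controller then acts on the estimated state `x_hat` instead of the (unmeasurable) true state.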

The figure shows the minimal-order observer, with the controller elements labeled as psi 0 and psi 1. In the lower diagram, psi 0 is algebraically combined with the summation block to simplify coding. As noted, each psi function is a ratio of (simple) Z-domain transfer polynomials.

Minimal-order diagram in Simulink. In the actual system, the real robot takes the place of the "Linearized Model".
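In code, each psi block boils down to a one-line difference equation. As a sketch (with placeholder coefficients, not the ones my Matlab scripts actually produced), a first-order transfer function (b0 + b1 z^-1)/(1 + a1 z^-1) looks like:

```python
class FirstOrderTF:
    """Realizes the Z-domain transfer function (b0 + b1 z^-1)/(1 + a1 z^-1)
    as the difference equation y[k] = b0*u[k] + b1*u[k-1] - a1*y[k-1].
    Coefficient values are placeholders for illustration."""
    def __init__(self, b0, b1, a1):
        self.b0, self.b1, self.a1 = b0, b1, a1
        self.u_prev = 0.0   # u[k-1]
        self.y_prev = 0.0   # y[k-1]

    def step(self, u):
        y = self.b0 * u + self.b1 * self.u_prev - self.a1 * self.y_prev
        self.u_prev, self.y_prev = u, y
        return y
```

Each psi function in the diagram is just one of these objects stepped once per control cycle.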
Results
I coded the observer controller in RobotC with the help of a couple of Matlab scripts to choose poles and calculate the coefficients of the transfer polynomials. I could have put more work into accurately modeling the robot (weighing it properly, etc.), but as you can see, it works well enough.


The video's a bit long so you can see the balancing stability - skip to 1:30 to see me driving the robot with a joystick over Bluetooth. The driving could use some smoothing, but it's fun.

Code is here.
            

Monday, January 28, 2013

Maximum information, minimum post

I've been planning for a while to write up some research I worked on in 2011 involving intrinsic "motivation" for robots. We got a workshop paper out of it, and I presented the results to the ECE department last year. I also planned to extend it into my thesis project.

But... the lab went through some advisor round-robin and the project fell apart, and I just don't feel like writing it up into a full post anymore.

In a nutshell, our robot learned a policy for a partially observable Markov decision process (POMDP): it explored objects in a space by manipulating them with its arm and assigned classification probabilities to each object, with the Shannon information gain across all objects as the learning reward.
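The reward signal itself is simple to state: an action is rewarded by how much it reduces the entropy of the robot's classification beliefs. A minimal sketch (generic, not our project's actual code):

```python
import math

def entropy(p):
    """Shannon entropy (in bits) of a discrete probability distribution."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def information_gain(before, after):
    """Reward = reduction in classification uncertainty after an action,
    i.e. H(beliefs before) - H(beliefs after)."""
    return entropy(before) - entropy(after)
```

For example, an action that sharpens a 50/50 belief over two object classes into certainty earns the full 1 bit of reward; an action that teaches the robot nothing earns zero.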

Here's the AAAI workshop abstract, with a link to the full PDF:
http://www.aaai.org/ocs/index.php/WS/AAAIW11/paper/view/3960

Here's a fun picture of the robot!

Sunday, December 18, 2011

Cricket, the toy robot that never was

I decided earlier this year to build a robot as a gift for a young relative. I've always found Braitenberg vehicles interesting and wanted to create a mobile robot with simple sensors and the ability to switch among several Braitenberg-type "personalities" (light-following, sound-avoiding, etc.).
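The appeal of Braitenberg vehicles is how little code a "personality" takes: each one is just a different wiring from two sensors to two motors. A rough sketch of what I mean (behavior names and gains are illustrative, not Cricket's actual firmware):

```python
def braitenberg_step(left_sensor, right_sensor, personality):
    """Map two sensor readings (0..1) to (left_motor, right_motor) speeds
    for a few Braitenberg-style behaviors. Crossed excitatory wiring
    steers toward the stimulus; uncrossed steers away."""
    if personality == "light_following":   # crossed: turn toward the light
        return right_sensor, left_sensor
    if personality == "light_avoiding":    # uncrossed: turn away from it
        return left_sensor, right_sensor
    if personality == "sound_avoiding":    # inhibitory: veer away and slow down
        return 1.0 - right_sensor, 1.0 - left_sensor
    raise ValueError(personality)
```

Switching personalities is then just a matter of selecting which wiring runs in the main loop.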

Thus Cricket was born.

Cricket with, well, some things working.

Turns out I underestimated the chaos of the target environment, with multiple even younger siblings running around. Only a totally bombproof gift would work - which Cricket is not.

That, plus some irritating bugs I don't feel like ironing out, means Cricket is now abandonware. But not forgotten!


Testing the light pods.

Full details and more pics after the jump.


Wednesday, December 7, 2011

Neural networks part 2: Evolving a "living" robot

In my first post on neural networks, I discussed training the network using gradient descent - a pretty straightforward optimization method. This project took a completely different approach: evolving the network's weights with genetic algorithms.
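The idea is to treat the network's weight vector as a genome: keep the fittest individuals, blend and mutate them, and repeat. A generic sketch of that loop (not our project's actual algorithm - the selection and crossover choices here are illustrative):

```python
import random

def evolve(fitness, n_weights, pop_size=30, generations=50, mutation_std=0.1):
    """Minimal genetic algorithm over real-valued weight vectors:
    truncation selection, average crossover, Gaussian mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(n_weights)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # best individuals first
        parents = pop[: pop_size // 2]            # keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(wa + wb) / 2 + random.gauss(0, mutation_std)
                     for wa, wb in zip(a, b)]     # blend + mutate
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)
```

The only problem-specific part is the fitness function - for our agent, fitness came from simulating the robot's behavior with the candidate weights.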

Our project team designed a virtual agent (robot) that learned to avoid obstacles while acting autonomously to "work" and "eat", maintaining its own internal conditions in proper balance like a living animal.

The virtual robot (green circle) navigates from the green "health" waypoint to the red "work" waypoint while avoiding the gray obstacles.

We started in simulation, planning to implement the working system on a physical robot, but ran out of time to get the hardware side functioning. C'est la vie robotique! We did make sure our virtual agent would use the same motor commands as the real robot, so the simulation wasn't completely disconnected from the real world.

Full details, including the multilevel control architecture we developed, after the jump.

Wednesday, November 9, 2011

Neural networks part 1: Teaching Canyonero to drive

Artificial neural networks (ANNs) are modeled after natural neural networks (brains and nervous systems). Though the two don't work exactly alike, both a brain and an ANN can learn arbitrarily complex tasks without being told exactly how - they just need data about the task and feedback on their performance.

A generic artificial neural network.

ANNs have been applied to a lot of artificial intelligence and machine learning problems, from autonomous vehicle driving to recognizing handwritten addresses on envelopes to creating artificial intelligence for video game agents.
  
I won't go deep into the math behind ANNs here; there are great sites on the web (and it's not really difficult, there's just a lot of bookkeeping). 
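To give a flavor of the bookkeeping, here's the core idea shrunk down to a single sigmoid neuron trained by gradient descent (a toy sketch in Python, not Canyonero's actual code - a real network just repeats this layer by layer):

```python
import math
import random

def train_neuron(data, epochs=1000, lr=0.5):
    """Gradient descent on one sigmoid neuron with squared error.
    `data` is a list of (input_vector, target) pairs with targets in {0, 1}."""
    random.seed(1)  # fixed seed so the toy example is reproducible
    w = [random.uniform(-0.5, 0.5) for _ in range(len(data[0][0]))]
    b = 0.0
    for _ in range(epochs):
        for x, target in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            y = 1.0 / (1.0 + math.exp(-z))        # sigmoid activation
            grad = (y - target) * y * (1.0 - y)   # d(error)/dz
            w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
            b -= lr * grad
    return w, b
```

Feed it the truth table of OR, for instance, and the weights settle into values that reproduce the gate.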

Instead, I'll take two posts to describe a couple of neural net projects I've worked on. First up: a mobile robot called Canyonero that learned to compensate for its own mismatched wheels.

Canyonero, with a camera in the front and a netbook running an ANN.


Monday, September 12, 2011

ModDroid, we hardly knew ye

One of the projects I worked on at the Robotics and Neural Systems Lab was a modular robot we were going to design and build for a conference. Each module would be about 8" x 8" and stackable, so you could load up on CPUs and batteries, add some arms, mobility options, different heads...

Alas, for a variety of reasons we didn't get that far - we built just enough modules to make a basic robot we named ModDroid. ModDroid's coolest feature was his adorable head, designed by two of my labmates and featuring 8x8 RGB LED matrix eyes from Sparkfun. I had a lot of fun coding up the eye animations on the PIC control board.


 

I recently saw ModDroid sitting in a heap of parts in a corner of the lab. Looks like all we have to remember him by is this video... bon voyage little buddy!



Particle filters in real time

I love class projects, because it's great to make something that works amidst the theory and pure math. For one of my robotics classes, my team decided to code up a particle filter.

Actually, our plan was to have a Lego Mindstorms NXT robot localize itself (figure out its initially unknown position in a known environment), then navigate to a sound source while avoiding obstacles. We couldn't get the physical robot to cooperate, so we did the localization piece in simulation.
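The filter itself is a short loop: move every particle, weight each one by how well it explains the latest measurement, and resample. A toy 1-D version of that cycle (illustrative only - our class project was 2-D and used different noise models):

```python
import math
import random

def particle_filter_step(particles, move, measured_dist, landmark, noise=0.2):
    """One predict/weight/resample cycle of a 1-D particle filter.
    Particles are candidate x-positions; the robot measures its
    distance to a landmark at a known position."""
    # Predict: apply the motion command to every particle, plus noise.
    particles = [p + move + random.gauss(0, noise) for p in particles]
    # Weight: particles whose predicted measurement matches are likelier.
    weights = [math.exp(-((abs(landmark - p) - measured_dist) ** 2)
                        / (2 * noise ** 2)) for p in particles]
    total = sum(weights)
    if total == 0.0:                 # degenerate case: resample uniformly
        return random.choices(particles, k=len(particles))
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(particles, weights=weights, k=len(particles))
```

Start with particles spread uniformly over the map, iterate a few times, and the cloud collapses onto the positions consistent with the measurements - that collapse is what the videos show.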

We got the particle filter working, and made some nice videos of the particles in action. Those and more info after the jump.

Sunday, September 11, 2011

Little Drummer Boy: a drum machine with real drums

 
"Little Drummer Boy" was my final class project for an embedded microcontroller class. Two Futaba S3004 servos with drumsticks, controlled by a PIC18F4550, play a set of bongos. The frame was built by a teammate from scrap aluminum.

Videos and more info after the jump.