The Search For Focusless Cameras

So I’m in an embedded systems class. We have a bunch of labs, but as an alternative to the labs we can do a semester project, programming a Freescale car. This is lots of fun.

Here I will document the camera algorithm I developed, and the process I took to get there.

The Freescale Cup Car is a small autonomous car about a foot long which is supposed to follow a racetrack. It follows this racetrack using a 128 pixel analog line camera. That is, it’s a camera that only sees a single line. On our car (I’m on a team), the camera is mounted 12″ above the ground at the front bumper of the car, angled so that it sees about 2 feet ahead.

This is all fine and dandy. Except that the camera’s vision dims at the edges:



This graph shows a calibration frame, which is the camera looking at a blank white piece of paper, alongside a frame showing two lines with regulation spacing.

The conventional wisdom for parsing these camera frames is to take the derivative and search for peaks and zero crossings. Let’s do that:
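Concretely, the derivative here is just adjacent-pixel differences. Here’s a tiny sketch of that step – the names and types are mine, not our actual car code:

#include <stdint.h>

#define NPIX 128

/* First derivative of a camera frame as simple adjacent-pixel differences. */
void first_derivative(const uint16_t frame[NPIX], int16_t deriv[NPIX])
{
    deriv[0] = 0;  /* pixel 0 has no left neighbor */
    for (int i = 1; i < NPIX; i++)
        deriv[i] = (int16_t)(frame[i] - frame[i - 1]);
}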


Okay. Not too shabby. There’s clearly something going on. As humans, we can see a line on the left and a line on the right.

How do we get a computer to detect the lines? A dark line shows up where the derivative dips down (the image darkening) and then swings back up (the image lightening). So maybe we could use a threshold for the start of the dark line and another for the end.

It’s clear that this won’t work very well in this case. A negative threshold for darkening won’t be able to catch the left-hand line without catching a substantial portion of the rest of the frame. A positive threshold for the frame getting lighter (lightening?) won’t be able to catch the right-hand line without catching a substantial portion of the rest of the frame.

A second derivative won’t work, either, because the noise spike in the middle is about the same height as the spike on the left. A little bit of smoothing might help, but too much and we smooth away the lines on the left or right.

An astute eye would notice a general slant from the upper left to the lower right in the first derivative. If we found a best-fit line and subtracted that from this sample, then we could more easily identify the lines.

There are two ways to find the amount of this slant: manual tuning, or least-squares of some sort.

Manual tuning takes valuable developer time to get it right, and it will vary from track to track and arena to arena. The lighting in my house is different than the lighting outside which is different than the lighting in a gym, so we have to re-calibrate the car at every event.

Least-squares takes valuable computation time, but we get proportional gain (and later we’ll do parabolic least-squares, and get even more gain). If you constrain the best-fit line to pass through zero halfway through the frame, then all you care about is the slope, and it becomes an averaging problem. Averaging takes N integer adds and an integer division, but it turns out that this integer division is by N, which is a power of 2 – this is a very fast operation.
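Since the exact reduction to an average isn’t spelled out here, the sketch below just uses the standard centered least-squares slope, m = Σ(x·d)/Σ(x²), with x symmetric about the middle of the frame so the fit line passes through zero there. Everything about it – names, types, the doubled x coordinates – is illustrative, not our actual car code:

#include <stdint.h>

#define NPIX 128

/* Estimate the slope of the linear trend in the derivative frame, with the
 * fit line constrained to pass through zero at the frame center, then
 * subtract that trend out. */
void detrend_linear(const int16_t deriv[NPIX], int16_t out[NPIX])
{
    int32_t sum_xd = 0;   /* sum of x * deriv */
    int32_t sum_xx = 0;   /* sum of x^2 (a constant; could be precomputed) */

    for (int i = 0; i < NPIX; i++) {
        int32_t x = 2 * i - (NPIX - 1);   /* odd integers -127..127, symmetric */
        sum_xd += x * deriv[i];
        sum_xx += x * x;
    }

    for (int i = 0; i < NPIX; i++) {
        int32_t x = 2 * i - (NPIX - 1);
        /* Trend value at this pixel: m*x, where m = sum_xd / sum_xx. */
        int32_t trend = (int32_t)(((int64_t)sum_xd * x) / sum_xx);
        out[i] = (int16_t)(deriv[i] - trend);
    }
}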

Both of these, however, will either take up N space to store the value of the line at each pixel, or will require an additional N integer multiplies to remove the trend from the data.

But we still have one more problem, as the title of this post suggests. This method is highly dependent on the focus of the camera. If the camera is out of focus, then the derivatives will be smaller, and the thresholds more likely to miss the lines.

Notice how the first derivative has a straight trendline. That means the second derivative (the slope of the slope) is roughly constant. We all know what that means – the underlying brightness falloff is a parabola.

Enter parabolic least-squares fits to the camera frame.

That sounds expensive, and it is, but it’s less susceptible to camera focus issues.

If you are able to calibrate the camera on a white background, then you can either remember that frame and subtract it from incoming frames, or you can find the best-fit parabola and subtract that from incoming frames. Here is the result of that:


Two clearly identifiable peaks, both orders of magnitude above the noise floor.

This method still requires on-site calibration, though, which we want to avoid as much as possible. But we have a data source that looks a lot like the calibration data – the frame itself. If we fit a parabola to the frame, and then subtract that from the frame, we get something like:


A strangely parabolic noise in the center, caused by the lines on the edges throwing off the least-squares. This data smooths well, and then a peak-finding algorithm can be run to find the lines reliably.
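For what it’s worth, the peak-finding step can be as simple as the sketch below: take the two largest-magnitude deviations from the now-flat baseline, masking out a window around the first one before looking for the second. This is my sketch, not the algorithm we actually run on the car, and the window width and two-line assumption are arbitrary:

#include <stdint.h>
#include <stdlib.h>

#define NPIX 128

/* Find the pixel indices of the two largest-magnitude deviations. */
void find_two_lines(const int16_t detrended[NPIX], int *line_a, int *line_b)
{
    int16_t work[NPIX];
    for (int i = 0; i < NPIX; i++)
        work[i] = detrended[i];

    int found[2];
    for (int n = 0; n < 2; n++) {
        int best = 0;
        for (int i = 1; i < NPIX; i++)
            if (abs(work[i]) > abs(work[best]))
                best = i;
        found[n] = best;

        /* Zero a window around this peak so the next pass finds the other line. */
        for (int i = best - 8; i <= best + 8; i++)
            if (i >= 0 && i < NPIX)
                work[i] = 0;
    }

    *line_a = found[0];
    *line_b = found[1];
}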

The astute reader will notice that generalized linear least-squares is not a cheap algorithm. In fact, it requires 3*128 integer multiplies plus 3*128 integer adds plus a half-dozen floating-point multiplies and divides. Then to evaluate the parabola at every point, we have 128 * 4 multiplies and a similar number of additions.
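Here’s roughly what that looks like, as a sketch rather than our actual car code. With x chosen symmetric about the frame center, the odd power sums vanish and the normal equations for y = ax² + bx + c decouple, so the per-frame work follows the pattern described above: a few integer multiplies and adds per pixel to build the sums, a handful of floating-point operations to solve, and then an evaluation per pixel. (The double accumulators are for clarity; on the car the per-pixel sums are kept in integers.)

#include <stdint.h>

#define NPIX 128

/* Least-squares fit of a parabola y = a*x^2 + b*x + c to one camera frame,
 * then subtraction of the fit. x runs over the symmetric odd integers
 * -127..127 so that sum(x) = sum(x^3) = 0. */
void detrend_parabola(const uint16_t frame[NPIX], int16_t out[NPIX])
{
    double S2 = 0, S4 = 0;            /* sum x^2, sum x^4 (constants, really) */
    double T0 = 0, T1 = 0, T2 = 0;    /* sum y, sum x*y, sum x^2*y */

    for (int i = 0; i < NPIX; i++) {
        int32_t x  = 2 * i - (NPIX - 1);
        int32_t x2 = x * x;
        S2 += x2;
        S4 += (double)x2 * x2;
        T0 += frame[i];
        T1 += (double)x  * frame[i];
        T2 += (double)x2 * frame[i];
    }

    /* Solve the decoupled normal equations for a, b, c. */
    double a = (NPIX * T2 - S2 * T0) / (NPIX * S4 - S2 * S2);
    double b = T1 / S2;
    double c = (T0 - a * S2) / NPIX;

    /* Evaluate the parabola at each pixel (Horner form) and subtract it. */
    for (int i = 0; i < NPIX; i++) {
        int32_t x = 2 * i - (NPIX - 1);
        double fit = (a * x + b) * x + c;
        out[i] = (int16_t)(frame[i] - fit);
    }
}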

This takes a while – we read a camera frame every 20ms, and this process takes about 6-7ms. But we have the time to spare, this is the most crucial part of the project, and now the camera doesn’t have to be re-calibrated under different light sources.

For reference, we’re using Freescale’s KL25Z chipset.

Stay tuned for more car-related algorithms!



Blog Moving

Dear Avid Reader,

There have been a few minor issues with the hosting of this blog. As such, I’m considering moving providers. I very much appreciate everything that WP Engine has done for me, including hosting this blog for free for 4 years, but it occurs to me that perhaps it is time for me to switch to a different solution for blog hosting and give up their server space to paying customers.

This blog has been moved to its new home, hosted in-house on the Pillow servers.

I will continue to keep this blog synchronized with that blog, until this blog is replaced with a 302 redirect.

Thank you,

Lane Kolbly



Italy, Revisited

I haven’t literally been back to Italy. I’m just revisiting the Identify Italy project.

Quick recap: According to a reputable source, Italy has 28% more elevators than the United States (and is “the world’s largest market for elevators”, despite Spain having more elevators than Italy).

When I read it, this statistic struck me as unusual. Italy has 1/30th the land area and 1/5th the population of the United States (as well as its own census agency, Istat, whose website banner crushes the US Census’ website).

Side note: Istat’s website provides random factoids about Italy’s demographics on the front page (22 million Italians read online books, ebooks, or newspapers, for instance). The Census Bureau doesn’t provide any random factoids. Come on, guys, step up your game!

Regardless, my hypothesis was that this elevator crisis is caused by Italy lacking much of a suburban area. So, I downloaded satellite imagery of Italy from Google Maps and set to work identifying swaths of it as one of 7 categories: Urban, Suburban, Industrial, Farm, Rural, Water, and Unknown. “Unknown” was largely for images which weren’t stored for some reason and so appeared grey, or which were mostly cloud. I set up a little web UI where I can quickly identify a bunch of them.

Now I have 1.7 million image squares which cover Italy and the surrounding seas (Adriatic and Tyrrhenian, I was awake in Geography) and 897 data points identifying different images. The big question is how to train a computer to identify the different types. This question remained unanswered for a long period of time.

A few weeks ago, I decided to apply Apache Spark to this project. Spark provides a simple, powerful way to express data analytics algorithms. It also provides a platform which scales well horizontally and evaluates those computations lazily, in memory. It’s really quite fast.

For this project, lacking any and all skills in machine learning, I figured I’d use Spark’s MLlib. They provide an implementation of a random forest, which I used.

After much tinkering with the random forests, I eventually decided on a two-stage tree classifier. The first stage deals only with single colors, and tries to predict the terrain type from the color of a single pixel. That is, for a given image that’s been classified by hand, it assigns that classification to each pixel in the image – for instance, an image marked “Farm” may have a red pixel (255,0,0) and a blue pixel (0,0,255), so both of those colors get associated with “Farm”. This is done for each image we have training data for (actually, 70% of the images, the other 30% being held out for testing), and the resulting stream of pixels is fed into the random forest training algorithm. After building a confusion matrix on the test data, this is the result:

Precision: 0.6874467926180897
Confusion matrix:
983243.0 266.0 11.0 316805.0 10358.0  11868.0 0.0 
47638.0  779.0 0.0  38079.0  9404.0   5182.0 0.0 
80885.0  114.0 40.0 72375.0  7407.0   7811.0 0.0 
382910.0 196.0 2.0  497653.0 3748.0   7255.0 0.0 
69776.0  297.0 0.0  7852.0   770870.0 36286.0 0.0 
44301.0  118.0 17.0 11204.0  9154.0   434804.0 0.0 
17033.0  172.0 0.0  22698.0  422.0    199.0 0.0

Where the axes are both, in order: rural, urban, suburban, farm, unknown, water, and industrial. Obviously, numbers suck, so I made a beautiful chart:

Identify Italy random forest classifier based on single-pixel value alone. Columns are the predicted values, rows are the actual values.

1 is rural, 2 is urban, and so forth. Each column has been normalized. Observe the strong yellow band that goes from the upper-left corner to the lower-right corner, indicating that 897 samples provide a pretty decent training set.

Also note the seeming confusion between categories 1 and 4 (rural and farm). Frankly, the distinction is hard for a human, so I can’t blame the computer.

The astute will note, however, that there are only 6 columns, but 7 rows. This indicates two things:

  1. The classifier never classified the pixel as “industrial” (the last category, 7), and
  2. I’m new to MATLAB, and when I put in a column of all zeroes it thought I didn’t want a column at all.

The trivial approach would be to use a majority vote of the pixels to classify an image. However, I decided to use another random forest model. This model’s inputs are the (normalized) votes of the pixel classifier. For instance, if 2 pixels were classified as “urban” and 5 as “suburban”, etc., then the input to the 2nd-stage classifier would be (2, 5, etc.).

Let’s look at the result:


Identify Italy image classifier confusion matrix. Rows are actual, columns are predictions.

Wow, that sucks. Astute or not, it clearly can only correctly identify 1, 5, and 6 (rural, unknown, and water).

However, what this confusion matrix doesn’t show is that columns 2 and 3 (urban and suburban) each only had a test data size of 1. In other words, we only predicted urban and suburban a single time each out of the test set, which is why it’s so yellow. Only 11 images in the test set were actually either urban or suburban (and there were only about 100 images in the test set overall), so the jury’s still out on how accurate this classifier is.

One last cool thing.

MLlib’s random forest implementation has the property that if you have a trained model, you can access the trees individually. Once the whole-image classifier is trained, I then run the whole-image classifier on every image (all 1.7 million). Because I can see the predictions of the individual trees, I can see which images have the least consensus. That is, the Spark job can determine which images it’s least sure about.
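To give a flavor of the scoring (the real thing is a few lines inside the Spark job, so treat this C translation as purely illustrative): each image’s score is the fraction of trees that agree with the winning class, and the images with the lowest scores are the ones with the least consensus.

#define NUM_CLASSES 7   /* urban, suburban, industrial, farm, rural, water, unknown */

/* Fraction of trees that voted for the winning class for one image.
 * tree_votes[t] is the class index predicted by tree t. */
double consensus_score(const int *tree_votes, int num_trees)
{
    int counts[NUM_CLASSES] = {0};
    for (int t = 0; t < num_trees; t++)
        counts[tree_votes[t]]++;

    int best = 0;
    for (int c = 1; c < NUM_CLASSES; c++)
        if (counts[c] > counts[best])
            best = c;

    return (double)counts[best] / num_trees;
}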

The Spark job then prints out these images, and I copy this list into a MySQL database, which feeds that list into the webpage you can use to classify the images.

So when you go to the site, 50% of the time it shows you one of the top 500 images that it’s least sure about (the other 50% of the time it’s random).

So go, identify images. Your efforts are now optimized.



Pillow Outage

The Pillow servers are back online after a power supply failure which affected the server for a majority of this week.

I’m considering switching to a decentralized Raspberry Pi architecture. Now accepting Pi donations.



Bitcoin Prices

Here they are for the past 381 days:




Identify Italy!

It’s here! Take a stab at figuring out what’s what in Italy:



Identify Italy Preview


Urban City? Suburban homes? Rural farmland? Factories?

Who knows? Brownie points to whoever can correctly identify the type of buildings in the lower left portion of the image.



Every Day We’re Shuffling…

A friend of mine is working on a shuffling algorithm. Here’s what he’s got:

#include <stdlib.h>   /* for malloc() and rand() */

int *shuffle2(int numItems)
{
    //int index[numItems];
    int *randomIndex = malloc(sizeof(int) * numItems);
    int i, j;
    for (i = 0; i < numItems; i++) {
        //index[i] = 0;
        randomIndex[i] = -1;   /* mark every slot as empty */
    }

    int randomNum = rand() % numItems;
    for (j = 0; j < numItems; j++) {
        /* keep drawing until we hit a slot that hasn't been filled yet */
        while (randomIndex[randomNum] != -1)
            randomNum = rand() % numItems; //random(0, numItems-1);
        //index[randomNum] = 1;
        randomIndex[randomNum] = j;
    }
    return randomIndex;
}

It has certain problems. I’ll cover those later. After I figure them out.

But until then, I will study Chi-Squared tests.
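In the meantime, here’s the sort of chi-squared harness I have in mind – my own scaffolding, not my friend’s code, and the item count and trial count are arbitrary. Shuffle a small set many times, count how often each value lands in each position, and compare the counts against the uniform expectation. A fair shuffle gives a statistic on the order of N*(N-1); a badly biased one blows up.

#include <stdio.h>
#include <stdlib.h>

int *shuffle2(int numItems);   /* the function above */

#define N       8
#define TRIALS  100000

int main(void)
{
    /* counts[v][p]: how many times value v ended up at position p. */
    static long counts[N][N];

    for (int t = 0; t < TRIALS; t++) {
        int *perm = shuffle2(N);
        for (int p = 0; p < N; p++)
            counts[perm[p]][p]++;
        free(perm);
    }

    /* Chi-squared statistic against the uniform expectation TRIALS/N per cell. */
    double expected = (double)TRIALS / N;
    double chi2 = 0.0;
    for (int v = 0; v < N; v++) {
        for (int p = 0; p < N; p++) {
            double diff = counts[v][p] - expected;
            chi2 += diff * diff / expected;
        }
    }

    printf("chi-squared statistic: %f over %d cells\n", chi2, N * N);
    return 0;
}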



WikiRank (using PageRank algorithm)

So for grins I implemented pagerank on Wikipedia (this was actually last month). I figured I’d share my results (and code, although it’s kind of a hack job) in case anyone was curious how it turned out.


The code is here:


There are two folders: pagecount/ contains code to download the November page view data and aggregate it. pagerank/ contains all the code to parse the Wikipedia enwiki dump and run pagerank on the result.


If you run this inside pagerank/:

gcc -O2 sparsematrix.c pagerank.c main.c -lpthread -lm -o pagerank

You will get an executable file called “pagerank”, which will perform the pagerank algorithm on an arbitrary network specified in the file “network”. There’s a small test network in the file testdata, which is the one Dr. Cline showed us in class (in the slides). It’ll output the pagerank data as a vector, one element on each line, into the file vector-out.


The file shows how I ran it. On the Wikipedia data you need about 20GB of disk space and 16GB of RAM to run everything comfortably. It ran in about 2-3 hours.


Final results are here, in a CSV file: (warning: 286MB compressed, a 20 minute download or so). Also, it’s bigger than most spreadsheet programs will accept.


There are three columns, the name of the article, the expected rank according to pagerank, and the actual rank based on page view data from November 2014.


The pagerank algorithm worked by multiplying the Markov matrix by a vector 1000 times. I’m not sure if that’s enough, but the eigenvector wasn’t changing by a measurable amount (I measured the angle between successive iterations), so I assumed it was good.
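For reference, the core of that loop looks something like the sketch below. This is a dense toy version for illustration – the real program uses the sparse matrix code in sparsematrix.c, and I’m ignoring things like a damping factor – but the convergence check is the same idea: measure the angle between successive iterates.

#include <math.h>
#include <stdlib.h>

/* Power iteration: repeatedly multiply the (column-stochastic) Markov matrix
 * M by the rank vector v, stopping when the angle between successive
 * iterates drops below tol. Dense row-major M, for illustration only. */
void power_iterate(const double *M, double *v, int n, int max_iters, double tol)
{
    double *next = malloc(sizeof(double) * n);

    for (int iter = 0; iter < max_iters; iter++) {
        /* next = M * v */
        for (int i = 0; i < n; i++) {
            double sum = 0.0;
            for (int j = 0; j < n; j++)
                sum += M[i * n + j] * v[j];
            next[i] = sum;
        }

        /* Angle between v and next. */
        double dot = 0.0, norm_v = 0.0, norm_n = 0.0;
        for (int i = 0; i < n; i++) {
            dot    += v[i] * next[i];
            norm_v += v[i] * v[i];
            norm_n += next[i] * next[i];
        }
        double cosang = dot / (sqrt(norm_v) * sqrt(norm_n));
        if (cosang > 1.0)
            cosang = 1.0;   /* guard against rounding pushing acos out of range */
        double angle = acos(cosang);

        for (int i = 0; i < n; i++)
            v[i] = next[i];

        if (angle < tol)
            break;
    }

    free(next);
}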


Okay, now that the technical stuff is over, now some pictures. I made a plot, where X was the expected rank from pagerank and Y was the actual rank in November, then made a density chart out of that. In this image, red means more points and blue means fewer points:

Odd little graphic (The Z axis is the log of the number of points in the cell). You would expect it to be square, except that a lot of pages didn’t have empirical pagerank data attached to them, so they had to be cut out. The scale is 512 = 16,000,000


Ideally, if the pagerank algorithm perfectly predicted the actual page ranks, then the graph would be white except for a bright red line along Y=X.


We don’t see that, exactly. We see a sort of plume in the lower left corner, which in general follows a line. This plume shows that pagerank in general got the higher-ranked pages right, and in general didn’t guess that unpopular pages would be popular or vice versa.


Over towards the right, we see a few bands of lighter blue. These are presumably because certain pages on Wikipedia aren’t linked to often, and aren’t visited very often either (it’s not hard to think of an example or two). I can imagine there are some clusters of pages which are like this – perhaps bridges in Iran, or Gymnastics competitions in China. These clusters would form those vertical bands.


Anyway, here’s a zoomed-in image of the plume:

As you can see, it’s slightly more likely for a page that’s popular in reality to get a poor ranking from pagerank than it is for a page that pagerank expects to be popular to end up unpopular in reality. (Remember, numerically lower ranks are more popular.)


I would imagine that a lot of the error comes from Wikipedia’s traffic being driven primarily by what’s happening in the news, rather than by network effects the way general internet traffic is.


Anyway, I guess the result of all this is that pagerank actually works. It may still be magic, but it’s magic which actually works. Also, you now have a working sparse-matrix pagerank program, if you ever need one.


Farewell, 2014

Happy New Year! (in about 30 minutes, at least)

Goodbye, 2014. It was nice knowing you, but frankly I’m glad we’re moving on.

2015 should be a blast. Let’s wait and see!

