# Genetic Algorithm

I’m better than I once was.

My glasses broke a couple weeks ago while I was at work. The screw fell out of the temple and is lost forever. I did a quick patch job to make it through the rest of the day, and when I got home, I set out to fix them.

A bit of steel wire, a pair of pliers, and six minutes later, I had them repaired. I figured it would be a stop-gap, and I’d need new ones soon. I remembered back to glasses repairs of my youth. They involved lots of tape and a countdown to being able to get a new pair.

But I’ve surprised myself. The repair has held nicely. It seems to be robust enough that I’m not worried about when I’ll get another pair. The last few years have really given me a lot more experience repairing and building things, and it’s nice to see those improved skills carrying over into the broken-glasses part of my life.

So now to the subject at hand. Something else where I’ve improved from where I was a few years ago: programming.

In 2008, a programmer named Roger Alsing posted this, demonstrating the Mona Lisa being constructed from 50 polygons by a genetic algorithm. I saw the post soon after, and it got me fascinated with genetic algorithms and machine learning. I read quite a bit about different methods, and I wanted to try some myself. I decided to try and copy his result as a starting point. I would use simple shapes, ellipses or triangles, instead of complex polygons, but otherwise do the same: try to match an image by hill-climbing. Create a set of shapes, let them mutate their color, position, size, and shape each frame, and keep every improvement.
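That loop can be sketched in a few lines. Here’s a toy version in plain Java (the class name and numbers are all made up, and it matches an array of values rather than an image, but the mutate-and-keep-improvements logic is the same):

```java
import java.util.Random;

// Toy hill-climbing sketch: nudge one value at a time, keep the mutation
// only when the total difference from the target shrinks.
public class HillClimb {
    // Fitness: sum of absolute per-element differences; lower is better.
    static int diff(int[] a, int[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    // Run the climb and return the final score (0 means a perfect match).
    static int climb(long seed, int iterations) {
        Random rng = new Random(seed);
        int[] target = {10, 200, 55, 120};        // stand-in for target pixel values
        int[] candidate = new int[target.length]; // starts at all zeros
        int best = diff(candidate, target);
        for (int iter = 0; iter < iterations; iter++) {
            int i = rng.nextInt(candidate.length);
            int old = candidate[i];
            // small random nudge in [-5, +5], clamped to the 0..255 range
            candidate[i] = Math.max(0, Math.min(255, old + rng.nextInt(11) - 5));
            int score = diff(candidate, target);
            if (score < best) best = score;       // keep the improvement
            else candidate[i] = old;              // otherwise revert the mutation
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println("final score: " + climb(42, 20000));
    }
}
```

With enough iterations the candidate converges to the target exactly; the interesting part with images is that the fitness landscape is much bumpier.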

And I learned a lot, but never got any result. I had taken a few programming classes, but nothing that went much beyond ‘hello world’ or currency conversions. I started by bit-banging the Targa specification in Python. “If you wish to make an apple pie from scratch, you must first invent the universe” (Carl Sagan) at its finest. How many languages have image libraries? Basically all of them. Still, I managed to do that part, reading and writing images like a champ. What I never did, though, was write anything that could rearrange my random ellipses into anything resembling my target image (maybe I’ll dig up that code and share it if there’s interest).

Anyway, fast forward to now. The other day I had a chat on reddit with Paul Grouchy, who’s working on strong AI at the University of Toronto. He’s got a paper on using ODEs in place of neural networks, and I really think that’s interesting. I have enough interest in this field that I was able to ask him some questions, but the important takeaway was that I should DO something.

And so I have. I revisited that old project. I opened up Processing, and in a couple hours on Wednesday night, I was drawing random triangles like nothing else (ellipses and rectangles too). I got as far in a couple hours as I had in probably a week’s time before. Thursday I tried to finish it up, but I was tired and couldn’t get anything right, so I had reverted all my code by the time I went to bed. But the next day I thought through what I needed to do, and after a few minutes of work this afternoon, it works!

Well… well enough at least. It quickly gets stuck in local minima. But it’s clearly doing what it’s designed to do. Let me share some images and some code:

At the top and bottom I have the target image, in this case a set of color bars, chosen because they lead to quick convergence, which makes it easy to see whether things are working. (Also, I don’t know where I got the color-bar image; I’m hoping this counts as fair use.) In the three columns I have samples of the algorithm running. You see every 2^n-th iteration (1, 2, 4, 8, … 16384), so the first frame of each is random, and the next is after a single mutation. This count is the number of mutations attempted, not the number of improvements kept; on average, slightly less than half of those mutations were beneficial.

I think the convergence after just a few thousand iterations is promising. Keep in mind that yellows/pinks are close in the RGB colorspace. Hopefully the result is apparent to other viewers. I think it’s clearly better than random. I’ll try to run some longer tests on some other images to demonstrate the capabilities. I’d like to do some other improvements as well to keep it from getting stuck in local minima, which it appears to do regularly.
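One standard trick for escaping local minima (not something my code does) is simulated annealing: occasionally accept a *worse* candidate, with a probability that shrinks as a “temperature” parameter cools. The acceptance rule itself is tiny; here’s a sketch in plain Java (class name and scores are made up):

```java
import java.util.Random;

// Metropolis-style acceptance rule from simulated annealing: always keep a
// better candidate, and keep a worse one with probability exp(-delta / temp).
public class Anneal {
    static boolean accept(double oldScore, double newScore, double temp, Random rng) {
        if (newScore < oldScore) return true; // improvement: always keep
        // worse: keep with probability that falls off as temp drops
        return rng.nextDouble() < Math.exp((oldScore - newScore) / temp);
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        // High temperature: a slightly worse score (12 vs 10) is usually accepted.
        System.out.println(accept(10, 12, 100.0, rng));
        // Near-zero temperature: the same worse score is essentially never accepted.
        System.out.println(accept(10, 12, 0.001, rng));
    }
}
```

Early on this lets the search hop out of bad configurations; as the temperature cools it behaves like plain hill-climbing again.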

Here’s the code for anyone interested. Runs in Processing 2.1.

```
PImage target;
float mutateFactor = 0.01;
int numElements = 50;
Tri[] tris;
Tri[] tris_old;
long compare_old = 2147483640;
long compare_new = 0;
int iterations = 0;
int improvements = 0;
float ratio = 1.0;
int divisor = 1;
void setup() {
target = loadImage("target.png"); //load the reference image (filename here is a placeholder)
size(target.width, target.height);
noStroke(); //comment out to see outlines
tris = new Tri[numElements];
tris_old = new Tri[numElements];

//println("Hello");
for (int ii = 0; ii < numElements; ii++) {
tris[ii] = new Tri();
//println(ii);
}
}
void draw() {
//println("In loop... ");
iterations++;
background(0);
//arrayCopy(tris,tris_old);

for (int ii = 0; ii < numElements; ii++) {
tris[ii].store();
tris[ii].mutate();
tris[ii].draws();
}

compare_old = compare_new;
compare_new = compare();
ratio = float(improvements)/float(iterations);

if (compare_new < compare_old) {
improvements++;
println("Diff: " + compare_new + "\tIterations: " + iterations +"\tImprovements: " + improvements + "\tRatio: " + ratio);
}
else {
for (int ii = 0; ii < numElements; ii++) {
tris[ii].recall();
//compare_new = compare_old;
}
}
}

long compare() { //PImage m, PImage n) {
//blend(target,0,0,width,height,0,0,width,height,DIFFERENCE);
long diff = 0;
String name = "TriE";

//save some images
if (iterations % divisor == 0) {
save("TriE" + divisor + ".png");
divisor *= 2;
}

//PImage diffImg = createImage(width,height,RGB);

loadPixels(); //make pixels[] reflect the current frame
target.loadPixels();
for (int i = 0; i < width * height; i++) {
diff += abs(pixels[i] - target.pixels[i]);
//diff += abs(pixels[i].saturation() - target.pixels[i].saturation());
//diff += abs(pixels[i].brightness() - target.pixels[i].brightness());
//pixels[i] = abs(pixels[i] - target.pixels[i]);
}
//updatePixels();
return diff/1000000;
}
class Tri {
int alpha;
int r,g,b;
int x1,y1,x2,y2,x3,y3;

int fitness;
int stable;

Tri() {
alpha = int(random(256));
r = int(random(256));
g = int(random(256));
b = int(random(256));
x1 = int(random(width));
y1 = int(random(height));
x2 = int(random(width));
y2 = int(random(height));
x3 = int(random(width));
y3 = int(random(height));
stable = 0;
}

void mutate() {
//println("mutating");
alpha = blurNum(alpha, true);
//println("mutating2");
r = blurNum(r, true);
g = blurNum(g, true);
b = blurNum(b, true);
x1 = blurNum(x1);
//println("mutating3");
y1 = blurNum(y1);
x2 = blurNum(x2);
y2 = blurNum(y2);
x3 = blurNum(x3);
y3 = blurNum(y3);
//println("mutated");
}

void draws() {
fill(r,g,b,alpha);
triangle(x1,y1,x2,y2,x3,y3);
//ellipse(x1,y1,x2,y2);
//rect(x1,y1,x2,y2);
}

int[] saved = new int[12];

void store() {
saved[0] = alpha;
saved[1] = r;
saved[2] = g;
saved[3] = b;
saved[4] = x1;
saved[5] = y1;
saved[6] = x2;
saved[7] = y2;
saved[8] = x3;
saved[9] = y3;
saved[10] = fitness;
saved[11] = stable;
}

void recall() {
alpha = saved[0];
r = saved[1];
g = saved[2];
b = saved[3];
x1 = saved[4];
y1 = saved[5];
x2 = saved[6];
y2 = saved[7];
x3 = saved[8];
y3 = saved[9];
fitness = saved[10];
stable = saved[11];
}
}
int blurNum(int n) {
float b = float(n);
//int out = int(random(b - width * mutateFactor, b + width * mutateFactor));
int out = int(b + random(-1 * width * mutateFactor, width * mutateFactor + 1));
out = constrain(out, -1 * width, width * 2);
//println(n + "\t" + out);
return out;
}
int blurNum(int n, boolean max) {
float b = float(n);
//int out = int(random(b - 256 * mutateFactor, b + 256 * mutateFactor));
int out = int(b + random(-256 * mutateFactor, 256 * mutateFactor + 1));
if (max) {
out = constrain(out,0,255);
}
return out;
}
```

Potential Improvements:

• The top left corner is my origin (0,0). I allow the shapes to wander around a canvas twice the size of the image, but I forgot to center it, so you can see no shapes cross the vertical or horizontal 0 boundaries.
• The comparison between my generated image and the sample image is based on a simple difference of the value of each pixel. A better way to evaluate the fitness of an image could produce better results.
• My algorithm should take each iteration, mutate it, compare it against the previous result, and keep the better one. But I sometimes see the score for an image come out higher than the one before it, which means there’s a bug somewhere in my code.
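On the second point: compare() subtracts the packed color ints directly, so the channels aren’t weighted evenly (the red channel sits 16 bits up from blue, so a red difference can swamp everything else). Summing the per-channel differences would fix that. Here’s a minimal sketch in plain Java, assuming Processing-style 0xAARRGGBB pixels (the class name and sample values are made up):

```java
// Per-channel pixel difference: unpack R, G, B from each 0xAARRGGBB int and
// sum the absolute differences, so every channel contributes equally.
public class ChannelDiff {
    static long perChannelDiff(int[] img, int[] target) {
        long diff = 0;
        for (int i = 0; i < img.length; i++) {
            int a = img[i], b = target[i];
            diff += Math.abs(((a >> 16) & 0xFF) - ((b >> 16) & 0xFF)); // red
            diff += Math.abs(((a >> 8) & 0xFF) - ((b >> 8) & 0xFF));   // green
            diff += Math.abs((a & 0xFF) - (b & 0xFF));                 // blue
        }
        return diff;
    }

    public static void main(String[] args) {
        int[] img    = {0xFF102030};
        int[] target = {0xFF112233};
        System.out.println(perChannelDiff(img, target)); // 1 + 2 + 3 = 6
    }
}
```

Fancier fitness functions (perceptual color spaces, per-region weighting) are possible, but even this even weighting should behave better than raw int subtraction.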

Anyway, my coding seems to be much better than it was just a few years ago. In a few hours of work I was able to knock out what I couldn’t complete at all back then. Pardon the quick typing of this post; I wanted to get it out there. Please enjoy and comment. I’m happy to discuss, and some interest will probably prompt me to do more with genetic algorithms (real ones even, more than just graphical hill-climbing).

## 2 thoughts on “Genetic Algorithm”

1. Ron Zachow on said:

Hey Bob;
I remember Allen & I doing something like this using Z-Basic and I enjoyed it greatly, but none of the computers today will run Z-Basic. We did one with the Olympic Circle