Thursday, July 19, 2012

I have been writing... I swear.

So I have been writing, just not somewhere where you can see it.

I am changing this fact: my_thesis

I'll try to update nightly, before I go to bed,
along with a brief description of the day's work.


As always, your feedback is warmly received.

~tony

Abstract

    We are in the midst of the information age. Those who can collect, decipher and make
informed decisions from the vast amounts of data at hand will gain a competitive advantage
over their peers. Until recently, it has been the role of a human to transform
data into models and decision-making processes. This process, known as the scientific
method, has been the foundation of progress for many hundreds of years. Data is analyzed,
hypotheses are formulated and tested for validity. Those hypotheses that succinctly explain
the data become theories.
    However, as the amount and rate of data acquisition increase with technology, so do the
difficulties humans have with comprehending, modeling, and describing that data. To deal
with this, techniques in statistics, data mining, machine learning, and artificial intelligence
have been crafted to facilitate our understanding of big data.
    A particular model well known to scientists and engineers is the mathematical equation.
Equations encapsulate the relationships between observed variables and responses. These
analytical models are more than predictive entities; they espouse relationships, and in turn
theories, something which can be studied in its own right.

The question thus arises:
Can a feasible algorithm be developed for the general problem of recovering analytical equations from observational data?

We show here that the task of recovering equations, called Symbolic Regression, is in fact
an achievable goal.

    We introduce Symbolic Regression and formulate it as a problem. We then give a background
of Symbolic Regression implementations, focusing on the most common method:
Genetic Programming, a non-deterministic algorithm. We next describe our main contribution,
a deterministic algorithm for Symbolic Regression, which we call Prioritized Enumeration.
We use a suite of benchmarks and real-world data sets to provide what we aim
to be an in-depth and fair comparison of the most prominent implementation and
Prioritized Enumeration. We conclude by introducing a framework in which both human
and machine work in synergy to perform the task of Symbolic Regression.




Monday, July 2, 2012

Reboot

let me try this again...

I'll talk about this in the next post...

I'd like others to be able to see what I do. Math, Data Mining, and Algorithms are hard for me to explain to the layman at this point. Hopefully I'll do better with you. Think Symbolic Regression (mixed with HCI and a secret ingredient) for the time being.

I encourage you to play around and use the blue button at the bottom. ~cheers

Monday, May 7, 2012

Graph Partitioning 3 - An Update

So my new low score as of this morning was 1339! Maybe we'll find out how good that is in class tomorrow...

I didn't get to too much today, I had to finish a presentation on Ensemble Theory and a Gaussian Mixture Model program. I ended up adding ensemble code to my GMM code and it got a bit better. One class almost done, one final remaining when this blog series ends (Thursday I think).

What I did do was add some flexibility to the partitioning balance and random shuffling of the gain buckets. The partition balancing forces a particular side if the size difference becomes too great; otherwise it randomly chooses a partition. The random shuffling of the gain buckets happens during the initial fill and the roll-back fill. Doing it this way allows me to insert into the buckets and randomize, once per iteration, by adding only one function:

 func (b *BList) randomizeList() {
  if b.nodes.Len() < 2 { return }
  // drain the list into a slice of its elements
  tmp := make( []*list.Element, b.nodes.Len() )
  i := 0
  for e:=b.nodes.Front(); e!=nil; e=b.nodes.Front() {
    tmp[i] = e
    b.nodes.Remove(e)
    i++
  }
  // Fisher-Yates: the swap partner must come from [p, len), not [0, len-p)
  for p := range tmp {
    r := p + rand.Intn(len(tmp)-p)
    tmp[p],tmp[r] = tmp[r],tmp[p]
  }
  // rebuild the list in shuffled order
  for p := range tmp {
    b.nodes.PushFront(tmp[p].Value)
  }
}


The best with flexible partitions is: 1343
The best with flexible partitions and bucket shuffling is: 1335

(I'll update in the morning when the experiments finish)
For the curious, it takes around 8 hours to cover 100 random seeds with 1000 iterations each, where each iteration is a hill climber that itself runs for an unknown (though not large) number of steps.

[and that's 1 core only]




I'll let you look at my Data Mining homework, since it's past the due date now, and I'll talk about EM at a later date:

 package main  
 import (  
  "fmt"  
  "log"  
  "os"  
  "math"  
  rand "math/rand"  
  "flag"  
 )  
 type Point struct {  
  val float64  
  lbl int  
 }  
 var seed = flag.Int("seed",23,"random seed")  
 var numC = flag.Int("numc",3, "number of clusters")  
 func main() {  
  flag.Parse()  
  fmt.Printf( "Worm's MLE/EM\n\n" )  
  pts,min,max := readLabeledFile( "labeled.txt" )  
 //  upts := readUnabeledFile( "unlabeled.txt" )  
  m,sd,c := calcEM( pts,*numC,*seed,min,max )  
  err := calcClassifyError( pts,m,sd,c )  
  fmt.Printf( "Solo Err: %f\n\n", err)  
  rsrc := rand.NewSource(int64(*seed))  
  rgen := rand.New(rsrc)  
  var means,sdevs,cprobs [][]float64  
  I,J := 100,10  
  L := len(pts)  
  sset := make([]Point,L)  
  errSum := 0.0  
  for i:=0; i<I; i++ {  
   rseed := rgen.Int()  
   aveErr := 0.0  
   for j:=0; j<J; j++ {  
    for p:=0; p<L; p++ {  
     r := rgen.Intn(L)  
     sset[p] = pts[r]  
    }  
    m,sd,c := calcEM( sset,*numC,rseed,min,max ) // fit on the bootstrap sample sset
    err := calcClassifyError( pts,m,sd,c )  
    means = append(means,m)  
    sdevs = append(sdevs,sd)  
    cprobs = append(cprobs,c)  
    aveErr += err  
    errSum += err  
   }  
 //   fmt.Printf( "Seed(%d): %f\n", rseed,aveErr/float64(J) )  
  }  
  fmt.Printf( "\n\nAveErr: %f\n\n", errSum/float64(I*J) )  
  ERR := calcClassifyErrorEnsemble(pts,means,sdevs,cprobs)  
  fmt.Printf( "EnsembleErr: %f\n\n", ERR )  
 }  
 func calcMLE( pts []Point ) {  
  fmt.Printf( "Q1: MLE of labeled points\n---------------------------\n" )  
  var means,sdevs [3]float64  
  var cnts [3]int  
  for _,p := range pts {  
   pos := p.lbl  
   means[pos] += p.val  
   cnts[pos] += 1  
  }  
  for i:=0; i<3; i++ {  
   means[i] /= float64(cnts[i])  
  }  
  for _,p := range pts {  
   pos := p.lbl  
   val := means[pos] - p.val  
   sdevs[pos] += val*val  
  }  
  for i:=0; i<3; i++ {  
   sdevs[i] /= float64(cnts[i]-1)  
   sdevs[i] = math.Sqrt(sdevs[i])  
  }  
  fmt.Printf( "cnts: %v\nmeans: %v\nsdevs: %v\n", cnts, means,sdevs )  
  fmt.Printf( "\n\n" )  
 }  
 func calcEM( pts []Point, k, seed int, min,max float64 ) (MEAN,SDEV,CPROB []float64) {  
  //  fmt.Printf( "Q2: EM of unlabeled points\n---------------------------\n" )  
  rand.Seed(int64(seed))  
  means := make( []float64, k )  
  sdevs := make( []float64, k )  
  cprob := make( []float64, k )  
  evals := make( [][]float64, len(pts) )  
  for i:=0; i<len(pts); i++ {  
   evals[i] = make( []float64, k )  
  }  
  diff := max-min  
  for i:=0; i<k; i++ {  
   means[i] = (rand.Float64()*diff)+min  
   sdevs[i] = rand.Float64()+0.5  
   cprob[i] = 1.0/float64(k)  
  }  
  //  fmt.Printf( "Initial Guesses:\nmeans: %v\nsdevs: %v\n\n", means, sdevs )  
  for I:=0; I<20; I++ {  
   // E-step  
   for p,P := range pts {  
    sum := 0.0  
    for c:=0; c<k; c++ {  
     val := calcPDF( P.val, means[c], sdevs[c], cprob[c] )  
     sum += val  
     evals[p][c] = val  
    }  
    for c:=0; c<k; c++ {  
     evals[p][c] /= sum  
    }  
   }  
   // M-step  
   for c:=0; c<k; c++ {  
    means[c] = 0.0  
   }  
   for c:=0; c<k; c++ { sdevs[c] = 0.0 } // reset in place; a := here would shadow the outer sdevs
   csums := make( []float64, k )  
   // update means  
   for p,P := range pts {  
    for c:=0; c<k; c++ {  
     csums[c] += evals[p][c]  
     means[c] += evals[p][c] * P.val  
    }  
   }  
   for c:=0; c<k; c++ {  
    means[c] /= csums[c]  
    cprob[c] = csums[c] / float64(len(pts))  
   }  
   // update std devs  
   for p,P := range pts {  
    for c:=0; c<k; c++ {  
     val := means[c] - P.val  
     sdevs[c] += val*val * evals[p][c]  
    }  
   }  
   for c:=0; c<k; c++ {  
    sdevs[c] /= float64(csums[c]-1)  
    sdevs[c] = math.Sqrt(sdevs[c])  
   }  
   //   fmt.Printf( "Iteration%d:\nprobs: %v\nmeans: %v\nsdevs: %v\n\n", I,csums, means, sdevs )  
  }  
  // sort means & sdevs  
  for i:=1; i<len(means); i++ {  
   for j:=i; j>0 && means[j]<means[j-1]; j-- {  
    m,sd,c := means[j],sdevs[j],cprob[j]  
    means[j],sdevs[j],cprob[j] = means[j-1],sdevs[j-1],cprob[j-1]  
    means[j-1],sdevs[j-1],cprob[j-1] = m,sd,c  
   }  
  }  
  return means,sdevs,cprob  
 }  
 func calcClassifyError( pts []Point, mean,sdev,cprob []float64 ) (error float64) {  
  k := len(mean)  
  evals := make( [][]float64, len(pts) )  
  for i:=0; i<len(pts); i++ {  
   evals[i] = make( []float64, k )  
  }  
  for p,P := range pts {  
   sum := 0.0  
   for c:=0; c<k; c++ {  
    val := calcPDF( P.val, mean[c], sdev[c], cprob[c] )  
    sum += val  
    evals[p][c] = val  
   }  
   for c:=0; c<k; c++ {  
    evals[p][c] /= sum  
   }  
  }  
  r,w := 0,0  
  for p,P := range pts {  
   max,maxI := 0.0, 0  
   for c:=0; c<k; c++ {  
    if evals[p][c] > max {  
     max = evals[p][c]  
     maxI = c  
    }  
   }  
   if P.lbl == maxI {  
    r++  
   } else {  
    w++  
   }  
  }  
  err := float64(w)/float64(r+w)  
  return err  
 }  
 func calcClassifyErrorEnsemble( pts []Point, MEAN,SDEV,CPROB [][]float64 ) (error float64) {  
  evals := make( [][]float64, len(pts) )  
  classes := make( []map[int]int,len(MEAN[0]) )  
  for i:=0; i<len(MEAN[0]); i++ {  
   classes[i] = make(map[int]int)  
  }  
  for C:=0; C<len(MEAN); C++ {  
   mean := MEAN[C]  
   sdev := SDEV[C]  
   cprob := CPROB[C]  
   k := len(mean)  
   for i:=0; i<len(pts); i++ {  
    evals[i] = make( []float64, k )  
   }  
   for p,P := range pts {  
    sum := 0.0  
    for c:=0; c<k; c++ {  
     val := calcPDF( P.val, mean[c], sdev[c], cprob[c] )  
     sum += val  
     evals[p][c] = val  
    }  
    for c:=0; c<k; c++ {  
     evals[p][c] /= sum  
    }  
   }  
   for p,_ := range pts {  
    max,maxI := 0.0, 0  
    for c:=0; c<k; c++ {  
     if evals[p][c] > max {  
      max = evals[p][c]  
      maxI = c  
     }  
    }  
    classes[maxI][p]++  
   }  
  }  
  r,w := 0,0  
  for p,P := range pts {  
   max,maxI := 0, 0  
   for c:=0; c<len(MEAN[0]); c++ {  
    val := classes[c][p]  
    if val>max {  
     max = val  
     maxI = c  
    }  
   }  
   if P.lbl == maxI {  
    r++  
   } else {  
    w++  
   }  
  }  
  err := float64(w)/float64(r+w)  
  return err  
 }  
 func calcPDF( x,m,s,p float64 ) float64 {
  diff := (x-m)
  expo := (-1.0*diff*diff)/(s*s*2.0)
  cons := 1.0/(s*math.Sqrt(2.0*math.Pi))
  return p*cons*math.Exp(expo) // weight by the cluster prior p
 }
 func readLabeledFile( fn string ) (pts []Point, min,max float64 ){  
  file, err := os.Open(fn) // For read access.  
  if err != nil {  
   log.Fatal(err)  
  }  
  pts = make( []Point, 0, 300 )  
  min = math.MaxFloat64  
  max = -math.MaxFloat64 // lowest possible start for a running max
  for {  
   var p Point  
   _, serr := fmt.Fscanf( file, "%f %d", &(p.val), &(p.lbl) )  
   if serr != nil {  
 //    log.Fatal(serr)  
    break  
   }  
   if p.lbl == 2 {  
    p.lbl = 1  
   } else if p.lbl == 5 {  
    p.lbl = 2  
   }  
   pts = append(pts,p)  
   if p.val < min { min = p.val }  
   if p.val > max { max = p.val }  
  }  
  fmt.Printf( "min: %f\nmax: %f\n", min, max )  
  return  
 }  
 func readUnabeledFile( fn string ) (pts []Point ){  
  file, err := os.Open(fn) // For read access.  
  if err != nil {  
   log.Fatal(err)  
  }  
  pts = make( []Point, 0, 300 )  
  for {  
   var p Point  
   _, serr := fmt.Fscanf( file, "%f", &(p.val) )  
   if serr != nil {  
 //    log.Fatal(serr)  
    break  
   }  
   pts = append(pts,p)  
  }  
  return  
 }  


Sunday, May 6, 2012

Graph Partitioning 2 - KL/FM Implementation

In my last post, I gave you an overview of the KL/FM algorithm for graph partitioning. Today we will go over the code and reach a new low score, now that I have (hopefully) correctly implemented the algorithm.

First, some simple structs for storing the graph data:

 type Node struct {
  // normal stuff
  id int
  edges []*Edge
  // KL/FM stuff
  gain int
  part int
  lock bool  
}
type Edge struct {
  id int
  w  int
  n1,n2 *Node
}
type Graph struct {
  nodes []Node
  edges []Edge
  maxN int
}

The Graph and Edge structs are pretty standard; I added a maximum neighbor count to the graph so that I can bound the number of gain buckets I need to [-maxN:maxN]. The Node is augmented to track the current partition, whether the node has been considered (locked), and the current gain value, calculated as follows:

 func (n *Node) calcGain() int {
  var ecost,icost int
  for _,e := range n.edges {
    var n2 *Node
    if n == e.n2 { n2 = e.n1 } else { n2 = e.n2 }
    if n.part == n2.part {
      icost += e.w
    } else {
      ecost += e.w
    }
  }
  n.gain = ecost - icost // cache it for the gain buckets
  return n.gain
}
func (n *Node) calcSwap(n2 *Node) int {
  var E *Edge
  for _,e := range n.edges {
    if e.n1 == n2 || e.n2 == n2 {
      E = e
      break
    }
  }
  c,c2 := n.calcGain(),n2.calcGain()
  if E == nil { // no edge between the pair
    return c + c2
  }
  return c + c2 - 2*E.w // KL pair gain: D(a) + D(b) - 2*c(a,b)
}
func (g *Graph) calcCut() int {
  var cut int
  for _,e := range g.edges {
    if e.n1.part != e.n2.part {
      cut += e.w
    }
  }
  return cut
}

The following is the implementation of the gain buckets:

 type BList struct {
  gain int
  nodes *list.List
}
type Buckets struct {
  side int
  size int
  pos []*BList
  neg []*BList
  maxB  int // max bucket size 
}

func (b *Buckets) insertNode(n *Node) { 
  var side []*BList
  var pos int
  if n.gain < 0 {
    side = b.neg
    pos = -1 * n.gain
  } else {
    side = b.pos
    pos =  n.gain
  }

  bl := side[pos]
  if bl == nil {
    side[pos] = new(BList)
    bl = side[pos]
    bl.gain = n.gain
  }
  
  if bl.gain > b.maxB {
    b.maxB = bl.gain
  }
  bl.pushNode(n)
  b.size++
}

func (b *Buckets) bestNode() (n *Node) {
  for ; b.maxB > -len(b.neg); b.maxB-- {
    var bl *BList
    if b.maxB < 0 {
      pos := -1 * b.maxB
      bl = b.neg[pos]
    } else {
      bl = b.pos[b.maxB]
    }
    if bl != nil && bl.nodes != nil && bl.nodes.Len() > 0 {
      n = bl.popNode()
      b.size--
      break;
    }
  }
  return
}

func (b *Buckets) updateNode( n *Node ) {
  var side []*BList
  var pos int
  if n.gain < 0 {
    side = b.neg
    pos = -1 * n.gain
  } else {
    side = b.pos
    pos =  n.gain
  }
  bl := side[pos]
  bl.rmvNode(n)
  n.gain = n.calcGain() // recompute before reinserting
  b.insertNode(n)  
  b.size-- // readjust for insertNode(n) size++ op
}

BList is a single gain bucket: its gain value and a list used as a stack. The Buckets struct tracks which partition (side) it belongs to and the total number of nodes remaining in its buckets. I have stored the individual buckets in two arrays, one for the negative gains and one for the non-negative gains. I did this to allow for constant-time lookup when updating and inserting nodes. maxB stores the gain of the highest-valued non-empty bucket, for constant-time lookup of the associated bucket when finding the best node to swap.

The Partitions struct, I think, is pretty straightforward:

 type Partitions struct {
  parts []*Buckets  // an array[2] of lists, each is a list of buckets
  sizes []int  // to keep track of the sum of bucket sizes
}

func partitionGraph( g *Graph ) (p *Partitions) {
  // randomly assign nodes
  max := 0
  for i,_ := range g.nodes {
    r := rand.Float64()
    n := &(g.nodes[i])
    if r < 0.5 {
      n.part = 0;
    } else {
      n.part = 1;
    }
    if l:=len(n.edges); l > max {
      max = l
    }
  }
  g.maxN = max+1 // make sure we have enough gain buckets spots
  for i,_ := range g.nodes {
    g.nodes[i].gain = g.nodes[i].calcGain()
  }
  
  // make partitions
  p = new( Partitions )
  p.fillPartitions(g)
  
  return p
}

func (p *Partitions) fillPartitions( g *Graph ) {
  p.parts = make([]*Buckets,2)
  p.sizes = make([]int,2)
  for i,_ := range p.parts {
    p.parts[i] = new(Buckets)
    p.parts[i].side = i
    p.parts[i].pos = make( []*BList, g.maxN )
    p.parts[i].neg = make( []*BList, g.maxN )
  }
  for i,_ := range g.nodes {
    g.nodes[i].lock = false
    p.insertNode(&(g.nodes[i]))
  }
  
}

func (p *Partitions) insertNode( n *Node ) {
  p.sizes[n.part]++
  B := p.parts[n.part]
  B.insertNode(n)
}

I added the fillPartitions() function as a helper when initializing the partitions, but then used it in the KL/FM loop to avoid inserting locked nodes into a gain bucket. I believe that once a node has been considered for a swap, it can't be swapped again, so I don't put it in any gain bucket. After all of the gain buckets are drained, the partitions are reverted to the best found so far, and the nodes are all unlocked and returned to the gain buckets. This repeats until no new global minimum is found.

And finally the main algorithm:

 func KLFM(g *Graph) (MIN int, PARTS []int) {
  p := partitionGraph(g)
  
  Cut := g.calcCut()
  CUT := Cut + 1

  min := 1000000
  PARTS = make( []int, len(g.nodes) )
  
  iter := 0
  for Cut < CUT { // while finding improvements
    CUT = Cut
    cnt := 0
    for cnt < len(g.nodes) { // while unlocked nodes
      var B *Buckets
      diff := p.sizes[0]-p.sizes[1]
      // partition selection
      if (diff > 0 && p.parts[0].size>0) || p.parts[1].size == 0 {
        B = p.parts[0]
        p.sizes[0]--
        p.sizes[1]++
      } else {
        B = p.parts[1]
        p.sizes[0]++
        p.sizes[1]--
      }

      // get best node & swap
      N := B.bestNode()
      N.part = (N.part+1)%2
      N.lock = true
      N.gain = N.calcGain()

      // update neighbors
      for _,e := range N.edges {
        var N2 *Node
        if e.n1 == N {
          N2 = e.n2
        } else {
          N2 = e.n1
        }
        if N2.lock == false {
          p.parts[N2.part].updateNode(N2)
        } else {
          N2.gain = N2.calcGain()
        }
      }

      // calculate cut value (fix? makes the loop O(N*E))
      cut := g.calcCut()
      
      // save iteration best
      if cut < Cut { 
        Cut = cut 
      }
      // save global best
      if cut < min { 
        min = cut 
        // save partition
        for i,n := range g.nodes {
          PARTS[i] = n.part 
        }
      }
      cnt++
    }
    
    // roll back
    for i,_ := range g.nodes {
      g.nodes[i].part = PARTS[i]
    }
    
    // refill gain buckets
    p.fillPartitions(g)
    iter++
  }
  return min,PARTS
}

For the experiments, I am using the assignment data file and add20 from the benchmarks I linked to in my last post. I am running 1000 iterations of the KL/FM algorithm with initial seeds from [0:100). So far my best scores are:

  • datafile3.txt:  1340 (an improvement of 108)
  • add20.graph:  617 (only 23 from the best!)


Tomorrow I will let you know how the experiments finished up. I also want to run the algorithm with a modified partition selection that allows for uneven partition sizing. If I have time I will cover another algorithm too, but it is the end of the semester...

Thursday, May 3, 2012

Graph Partitioning 1 - KL/FM Introduction

Graph partitioning is the problem of separating an undirected graph into equal-sized groups with the minimal edge cut, the total weight of the edges that cross the divide. It is NP-complete, meaning no polynomial-time algorithm is known, so for large graphs we can't realistically expect to find (or verify) the optimal solution. The best we can hope for is a 'good' solution using heuristic algorithms. Here is a link to an overview of graph partitioning.

I will implement several algorithms over the coming posts to give you a feel for what's out there.
Here is a link to a bunch of algorithms, benchmarks, and results. (If you know of others, tell me and I will include them here)

The first algorithm I will cover is the Fiduccia-Mattheyses (FM) algorithm. This is the best link I found; it also covers the KL algorithm upon which FM is built.

The basic idea is to swap nodes between partitions, looking for better solutions until we "can't" get better. The measure of a change for the better is the reduction in total cut value that results from a node being swapped to the other partition, known as its gain. We track which partition each node is in, as well as the gain from swapping it. This information is stored in each partition using a set of gain buckets. A gain bucket is a set of nodes with a common gain value. The buckets themselves are LIFO queues; why, I will explain tomorrow when I detail the implementation and how to achieve the optimal time complexity.

KL/FM Algorithm:
P := initRandomPartitions(G Graph)
while ( we find improvements )
  while ( unlocked nodes remain )
    N := best node from larger partition
    move N & lock
  Roll back to best configuration
  Unlock all nodes
return best P

We begin by splitting the graph into two sets by randomly assigning nodes to a partition. Each node has its gain calculated and is stored in its partition's appropriate gain bucket. We then start switching nodes by choosing the best node from the larger partition. We do this to select the best choice at the time while moving the partitions toward equal sizes. Once a node is switched, we lock it so that we don't continually swap it back and forth. The node is moved into a new gain bucket in its new partition. Each of its neighbors has its gain recalculated and is likely moved to a new bucket.
This process repeats until all nodes are locked. We then recall the best solution seen so far, return to it, and continue to search for better partitionings. This repeated search is a form of hill climbing and continues until no improvement is made whatsoever.

KL/FM is a randomized heuristic for solving the graph partitioning problem. The solution you get depends on where you start, hence the randomized initial partitioning. "In practice TEN is the magic number for FM," says Madden, meaning that when you run the algorithm over and over, you get similar results: scores close to each other that look something like a Gaussian curve. This is common in heuristic algorithms; you can't really ever know if you have the best solution.

Tomorrow I will show you how to efficiently implement this algorithm with an augmented graph and custom linked list. As of now I have a low score of 1448[seed 2] with LIFOs for each partition instead of the gain buckets. (Just a map and an index in, but still thinking on this one)

Wednesday, May 2, 2012

Fungi Freddy Goes LIVE

Check it out!
Give a little, or a lot.
UMF @ Kickstarter.com


p.s.  this is not the post for the day yet...

Tuesday, May 1, 2012

Day two, I'm copping out

sort of...
I spent the morning writing my story on kickstarter and submitted it for review. Check it out: Urban Mushroom Farming 

I may blog about that occasionally...
(let me know if there are any typos or any thoughts you may have)

Tonight I am going to start my final assignment for Madden, Graph Partitioning. You'll get to follow my creation as I set out to use everything he stands against.


Monday, April 30, 2012

A Short Introduction

I've never been a fan of writing, always dreaded it. I know it's important, but I never gave it much attention.
I came across 366 or How I Tricked Myself into Being Awesome the other day and decided to give it a try. So here I am. I have enjoyed stream-of-consciousness writing in the past, so this might become some of that. I also have to work on my editing skills, so that may increasingly show up. Hopefully this experiment will bolster both my skill in writing and my self-perception of it.

"What would I blog about?" I've asked myself this many a time. So a short list:
  • Go is my new favorite Google baby. It has been for over a year now and with Go1 being officially released, I might as well blog with it and maybe about it, or at least how I use it. 
  • Math, I love math. I won't focus on math, but it will be the foundations upon which everything else stands.
  • Algorithms, way more important than cores. (At least Madden and I can agree upon something)
  • Genetic Programming, my master's thesis is on this subject
  • Time Series analysis, my PhD is in this subject
  • Data Mining and Machine Learning
  • The Mind
Yeah, I know, less descriptive the further down the list you go. I get sick of all the blogs and tutorials out there covering only the basics. I will go into detail and delve into current research, discussing theory where needed, all the while showing how to implement current research algorithms with Go.

I'm excited... are you?