
I find it disturbing that, in this day and age, we rely only on vision to monitor our deployments…

What’s up with our cluster?

… I’ll take a look at the logs … Let me peek at the dashboard.

What if your cluster had a voice? I think it would be cool to add such a feature and be able to hear whether our production cluster is happy as a Hippo or in a do-not-resuscitate state…

In my copious spare time, I do a fair bit of wrenching on my automobiles, so I tend to listen to my machines and often pick up issues before the idiot light comes on. Why couldn’t we use machine sounds to identify issues with our clusters? Just like with a car, couldn’t we tell whether our cluster sounds right?

In my past life, working with workstations (I’ll have you know they are making a comeback!), I knew something was going wrong with my software as soon as the fans started frantically spinning out of control. Couldn’t we hear the thunder in the clouds too?

A while back, I watched a great keynote by Joe Armstrong and Sam Aaron on Sonic Pi. It was a cool and entertaining journey into remotely controlling Sonic Pi using Erlang. More recently, I saw my good friend Joseph Wilk brilliantly perform a live coding session at Farm 2017. These totally blew my mind!

While driving home this week, spinning The Pump by one of my favorite artists, Jeff Beck, I realized that perhaps I could mix all these concepts together and give my Kubernetes clusters a voice. Maybe even coin the term Pods Orchestra?


You’re Too Loud!

This is, of course, a very simple demonstration. Below you will see how to play sounds based on pods going in and out of a Kubernetes cluster. Now keep in mind, I have all of 30 minutes of Sonic Pi experience and just ripped and pillaged some of Sam’s examples to get something out quickly. Also, when it comes to music composition…

I have never had one lesson!


You’ve been warned! Now you can watch the video… Need More CowBell, Baby!

For the impatient, you can cruise to the SonicKube app on the Hub.


The Lay Of The Music Land

It turns out one of the coolest (and undocumented?) features of Sonic Pi is the ability to tap into a running loop by connecting to a UDP socket and sending it Open Sound Control - OSC messages. I had never heard of this protocol before, but figured: how hard can it be? An OSC packet takes the following form:

Address Pattern - Type Tags - Args - Terminator

  • Address pattern: a Unix-like path
  • Type tags: the types of the message arguments, i.e. s for string, i for int, etc…
  • Args: the message arguments
  • A null terminator

I know, right?! So intuitive!

NOTE: Each block in an OSC message must fall on a 4-byte boundary.

So for example:

/run-code,ssgoplay 60

will play note 60 on the default synth in Sonic Pi. That run-together blob is the address /run-code, the type tags ,ss, and the two string args go and play 60, with the null padding left out.
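
To make that padding business concrete, here is a minimal Go sketch that builds this exact packet by hand, assuming the standard bytes package is imported. The oscString and oscRunCode helpers are mine, purely for illustration; they are not part of Sonic Pi or SonicKube:

// oscString appends s as an OSC-string: the raw bytes, a null byte,
// then more nulls until the total length is a multiple of 4.
func oscString(buf *bytes.Buffer, s string) {
  buf.WriteString(s)
  for pad := 4 - len(s)%4; pad > 0; pad-- {
    buf.WriteByte(0)
  }
}

// oscRunCode builds a /run-code message with two string args:
// a client id and the Sonic Pi snippet to run.
func oscRunCode(clientID, code string) []byte {
  var buf bytes.Buffer
  oscString(&buf, "/run-code") // address pattern
  oscString(&buf, ",ss")       // type tags: two strings
  oscString(&buf, clientID)    // arg 1: client id, e.g. "go"
  oscString(&buf, code)        // arg 2: the code to run, e.g. "play 60"
  return buf.Bytes()
}

Print the resulting buffer with the nulls stripped and you get exactly the blob shown above.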

You can connect to your Sonic-Pi on UDP port 4557 in Go as follows:

// address is the Sonic Pi server endpoint, e.g. "localhost:4557"
addr, err := net.ResolveUDPAddr("udp", address)
...
conn, err := net.DialUDP("udp", nil, addr)
...

To play note 60, we just need to decorate the message and toss it over our UDP connection. So I came up with a canned Blast method that lets us just send our music snippet while it takes care of the OSC formatting and padding.

conn.Blast("play 60")
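
For the curious, a Blast-style helper does not need much. Here is a rough sketch of what such a helper could look like, assuming the oscRunCode helper from the earlier snippet and a thin, hypothetical SonicConn wrapper around the UDP connection; the actual SonicKube code may differ:

// SonicConn is a hypothetical thin wrapper around the UDP connection,
// so we can hang a Blast method off of it.
type SonicConn struct {
  *net.UDPConn
}

// Blast wraps a Sonic Pi snippet in an OSC /run-code message,
// padding included, and fires it at the server.
func (c *SonicConn) Blast(snippet string) error {
  _, err := c.Write(oscRunCode("go", snippet))
  return err
}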


Any Clusters With Ears On?

The next thing is to tap into Kubernetes events. Thankfully, the awesome K8s team has put forth some really nice APIs that let us get notified about any Kubernetes events flowing into the system. Now keep in mind, we are keeping things simple here and will just monitor the pods flowing in and out of the system.

Provided better musical abilities… one could start putting together some very cool tunes based on either K8s events or your application’s own monitoring events. I could foresee someone tapping into app metrics and laying down some sick beats! Instead of the cacophony I’ve just put together, someone musically inclined could lay down killer harmonies. Just like a graphical interface, the ear interface (EI?) would give the listener insights into the well-being or ills of one’s cluster.

The following highlights how to listen to K8s pod events. Note: our cluster uses the default namespace and we only care about our own pods! So we manifest interest in those events, filter on the default namespace, and register callbacks for pods going up/down.

// Connect to the K8s master using the given client config.
master, err := client.New(config)
if err != nil {
  log.Fatalln("K8s master connect failed!", err)
}

// Watch pods in the default namespace, with no field filtering.
watchlist := cache.NewListWatchFromClient(
  master,
  "pods",
  api.NamespaceDefault,
  fields.Everything(),
)

// Wire up an informer that calls upFunc/downFunc as pods come and go.
_, eController := framework.NewInformer(
  watchlist,
  &api.Pod{},
  resyncPeriod,
  framework.ResourceEventHandlerFuncs{
    AddFunc:    upFunc,
    DeleteFunc: downFunc,
  },
)

// Run the event controller until the end of time.
go eController.Run(wait.NeverStop)
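
And to close the loop, the upFunc and downFunc callbacks only need to turn pod events into notes. Here is a minimal sketch, assuming conn is a package-level connection carrying the Blast helper from above; the note choices are just mine for illustration:

// upFunc plays a brighter note whenever a pod shows up.
func upFunc(obj interface{}) {
  if pod, ok := obj.(*api.Pod); ok {
    log.Println("Pod up:", pod.Name)
    conn.Blast("play 72")
  }
}

// downFunc plays a lower note whenever a pod goes away.
func downFunc(obj interface{}) {
  if pod, ok := obj.(*api.Pod); ok {
    log.Println("Pod down:", pod.Name)
    conn.Blast("play 48")
  }
}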


So Ear You Go?

At this point, you might think to yourself, “Dude’s on crack?” And you could very well be right! To be honest, I am not sure, as of yet, where this road leads.

One thing is for sure: it was a total gas putting this together and hearing my cluster come to life!

I can imagine walking into an office with a zen vibe and hearing their clusters streaming in the background with nature’s sounds: birds, bees, bubbling brooks…

Could be cool? The human brain is great at picking up on patterns; I suspect the rise of electronic music is mostly based on that. So why do we focus on only one sense for monitoring? If you’re pattern-challenged, you could throw deep learning into the mix and get notified when the sound signatures of your cluster are out of whack.

To boot, having my pencil-neck manager come down with “Dude! I no longer hear the crickets?” would make that scene so much more special…

God knows if this contraption has “ears,” but I would be remiss not to remind you that in Music as in Deployments, Silences are indeed critical!

And this year’s MTV Music Award goes to… The ACME Corp Cluster!!


Thanks for listening!!