
There appears to be a disturbance in the force when it comes to deploying applications these days. In the same vein that functional programming is changing the way we write programs, Functions as a Service (FaaS) is changing the way we deploy them. Seeing an application as decorated business logic, one can start decomposing workflows into a series of transformations that process a request. Having experimented with Istio in the past, I began to wonder how cool it would be to apply the same traffic combing techniques to functions deployed in a cluster, aka the function mesh. Venturing into FaaS land, I experimented with some of the contenders and opted to settle on the new cool kid on the block, namely Nuclio.

Sounds like a total gas! So I wrote a blog post about it…



No, Not Netflix, Iconoflix…



For this endeavor, I decided to rewrite my world-famous Iconoflix application as a collection of collaborating functions. Iconoflix is a game where users guess a movie given only a set of icons as clues. For the purposes of this post, the Iconoflix backend is composed of the following components:

  • Iconoflix – Calls the picker function to produce a movie to be guessed
  • λ IMDB – Iconoflix Movie DataBase as a function
  • λ Picker – Calls the IMDB function and picks out a movie at random (sketched below)
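
To make this pipeline concrete, here is a minimal sketch of what the λ Picker boils down to. The Movie type, the helper name, and the in-cluster IMDB URL are hypothetical stand-ins for illustration, not the actual Iconoflix code:

package main

import (
  "encoding/json"
  "fmt"
  "log"
  "math/rand"
  "net/http"
)

// Movie is a hypothetical shape for an Iconoflix entry: a title plus the icon clues.
type Movie struct {
  Title string   `json:"title"`
  Icons []string `json:"icons"`
}

// pickMovie calls the IMDB function over HTTP, decodes the catalog and
// returns one movie at random for the player to guess.
func pickMovie(imdbURL string) (Movie, error) {
  resp, err := http.Get(imdbURL)
  if err != nil {
    return Movie{}, err
  }
  defer resp.Body.Close()

  var movies []Movie
  if err := json.NewDecoder(resp.Body).Decode(&movies); err != nil {
    return Movie{}, err
  }
  if len(movies) == 0 {
    return Movie{}, fmt.Errorf("imdb returned no movies")
  }
  return movies[rand.Intn(len(movies))], nil
}

func main() {
  // Hypothetical in-cluster address for the imdb service.
  movie, err := pickMovie("http://imdb.icx.svc.cluster.local:8080")
  if err != nil {
    log.Fatal(err)
  }
  fmt.Println("Guess this one:", movie.Icons)
}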





First Attempt, OpenFaaS

OpenFaaS is a really cool FaaS framework. It offers functions as containers. Looking around the site and kicking the tires some, I liked what I saw: flexibility, granularity, good tooling, community, docs, and a fairly easy path to get up and running, not to mention killer support! I was able to deploy my Go functions with relative ease and could plow through some interesting use cases. I also like the fact that you can specify function tests and have them run when building the function container. A nice touch indeed!

So Far, So Good?

Diving deeper, I started experimenting with pipelining functions, i.e. f1 calls f2 calls fx. This is where the initial buzz and dev happiness started to fade a bit for me.

On one hand, there is not really built-in support for things like headers, cookies, and other familiar HTTP goodies. Some of those are exposed as environment variables to your function, but it feels a bit clunky not having direct access to these common idioms from my function args, especially in an HTTP context. This is actually where things started to unravel pretty quickly for me. Let’s take a look at a typical function signature in OpenFaaS…

func Handler(in string) string {
  ...
}

At first glance this is innocent enough. Everybody loves strings, right? So usability is pretty high right off the blocks. However, once one starts to implement pipeline-like behavior, that once beloved string quickly takes on a bad Oompa Loompa vibe! I was trying to send and receive JSON data from one function to another and ran into a pilot error in my marshalling/unmarshalling routine. So, as any good programmer would, open and shut case, just sprinkle in a debug or two and bounce? Let’s take a look…

func Handler(in string) string {
  log.Println("Yo!")
  fmt.Println("Something wicked this way comes...")
  // ...
  return "blee"
}

Seems pretty straightforward, right? Well, it turns out this function actually returns…

Yo!
Something wicked this way comes...
blee

Wat? return “blee” I said? What’s up with that? Turns out OpenFaaS merges streams, so stdout/stderr become part of the magical “blee” return. That totally surprised me, and it took me a while to realize that my JSON grokking errors came from my very attempts to debug the problem in the first place…
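
To see why this bites, here is a contrived, self-contained sketch (the JSON payload is made up for illustration): if the handler returns a JSON document but also prints a couple of debug lines, the caller receives the debug output glued onto the payload and the unmarshal falls over…

package main

import (
  "encoding/json"
  "fmt"
)

func main() {
  // What the caller actually receives: the classic watchdog hands back everything
  // the function wrote, debug prints included, as the HTTP response body.
  body := []byte("Yo!\nSomething wicked this way comes...\n{\"title\":\"Blade Runner\"}")

  var movie struct {
    Title string `json:"title"`
  }
  // Fails with something like: invalid character 'Y' looking for beginning of value
  if err := json.Unmarshal(body, &movie); err != nil {
    fmt.Println("unmarshal error:", err)
  }
}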

⚠️ IMPORTANT!! To be fair here, there is a fix for this behavior that Alex Ellis pointed me to: of-watchdog. As of this writing I haven’t had the chance to try it out yet, but this issue has been addressed with the watchdog rewrite by the fine OpenFaaS crew!



Next Stop… Nuclio

Nuclio is the new kid on the block in FaaS land. It approaches the problem once again using Docker containers, but offers a heftier function handler signature that relies on stronger data types…

func Handler(ctx *nuclio.Context, evt nuclio.Event) (interface{}, error) {
  ...
}

As a developer, that makes me happier. My days of string parsing are long gone! I can now log my requests, get call info, and interact with headers and cookies. My use cases started to materialize quickly and I felt well on my way to closing the deal on this prototype…
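
Here is a rough sketch of what that richer signature buys you, based on my reading of the nuclio-sdk-go API at the time; the cookie header lookup and the response payload are made up for illustration:

package main

import (
  "net/http"

  "github.com/nuclio/nuclio-sdk-go"
)

func Handler(ctx *nuclio.Context, evt nuclio.Event) (interface{}, error) {
  // Structured logging comes along for the ride.
  ctx.Logger.InfoWith("Incoming request", "path", evt.GetPath())

  // Headers (and thus cookies) are a method call away, no string surgery needed.
  ctx.Logger.InfoWith("Caller cookies", "cookie", evt.GetHeaderString("Cookie"))

  // Return a typed response instead of a bare string.
  return nuclio.Response{
    StatusCode:  http.StatusOK,
    ContentType: "application/json",
    Body:        []byte(`{"movie": "The Martian"}`),
  }, nil
}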

Wat? No Canaries In FaaS Mines?

Sadly, I would have had better luck locating a pizza at a WeightWatchers gathering than deploying a canary function in FaaS land. The concept does not quite exist as of yet. In this light, I was thinking of a deep learning experiment where I could deploy different implementations of my training function and invoke them at will to see which implementation produces better results. This did put a damper on things, but chatting with the excellent Nuclio folks, I think there is hope here in the near future…

So I figured, no deal! For now, I will deploy different functions and use an extra K8s service to multiplex the workloads. Initially, I tried to use the Nuclio CRD function manifest to set the labels that my custom service would select on. Sounded like a good idea at the time! Turns out I hit a bug in Nuclio where all labels specified in the manifest get wiped out by the controller when generating the final deployment 🐭. So the only labels left standing after that effort were the ones Nuclio injects, i.e. serverless=nuclio, version=latest, name=f1. Rats!

This threw me for a bit, as I could not set the version to anything other than latest, but more importantly I needed the labels to unify my functions into a single service. After a quick cone of silence retreat, I decided to leverage the nuctl CLI instead to build and deploy my functions, and finally I had my custom labels set up. Yes!

# Deploy function f1, label it fn=f1, push it to my Docker registry, and launch it!
@nuctl deploy f1 \
  -e F1_REV=v1 \
  -p f1.go \
  -n coalmine \
  -l fn=f1 \
  -i canary-f1 \
  --runtime golang \
  --registry myregistry \
  --run-image myregistry/f1

This generates a decorated HTTP service out of my f1.go handler and builds a custom Docker image, a K8s pod, a service, and an ingress. This is when it really dawned on me how cool this all is! One no longer needs to think in terms of services and pods and can elevate to the wonderful world of functions and pure business logic. This is a pretty cool shift IMHO!!

Superpowers acquired! I can start exploring the scenarios I was fishing for, namely deploying canary functions in an Istio-aware cluster, and hence leveraging the function mesh and traffic combing to direct traffic to my functions for a given impetus!

With my service fronting different implementations of virtually the same function, I was starting to feel pretty good about achieving my end goal. I now have the ability to deploy f1 and f1’ and have them proxied via my custom f1 service, leveraging label selection to group the functions together. Bitch’n!



OK Then, Unleash them Canaries Already!

The team has been hard at work and produced an Iconoflix B-Movie edition. After our initial testing, we decided to push this new marvel out to production so our customers can help us flush out the remaining issues (Right?)

  • Deploy IMDB V1 and V2

      # Deploy imdb-v1
      @nuctl deploy imdb-v1 -e IMDB_REV=v1 -p fn.go -n nuclio -l app=imdb \
        --runtime golang -i icx-imdb --registry quay.io/imhotepio \
        --run-image quay.io/imhotepio/icx-imdb
      # Deploy imdb-v2
      @nuctl deploy imdb-v2 -e IMDB_REV=v2 -p fn.go -n nuclio -l app=imdb \
        --runtime golang -i icx-imdb --registry quay.io/imhotepio \
        --run-image quay.io/imhotepio/icx-imdb
    
  • IMDB Service K8s Manifest

      # svc.imdb.yml
      apiVersion: v1
      kind:       Service
      metadata:
        name:      imdb
        namespace: icx
        labels:
          app: imdb
      spec:
        type: ClusterIP
        selector:
          app: imdb
        ports:
          - name: http
            port: 8080
    


I The Revolting Slob Approach!

  1. Deploy Already!

     kubectl apply -f k8s/svc.imdb.yml
    

Nice work Fernand! You’ve just pissed off 50% of our customers!

True enough, as my custom IMDB service indiscriminately round-robins traffic between V1 and V2. Rats!!


II The Percentage Game

OK, so how about leveraging an Istio route rule to direct traffic using percentages instead?

  1. 99% To V1 and 1% to V2

     # istio-99-1.yml
     apiVersion: config.istio.io/v1alpha2
     kind:       RouteRule
     metadata:
       name:      icx-99-1-v2
       namespace: icx
     spec:
       precedence: 0
       destination:
         name:      imdb
         namespace: icx
       route:
       - labels:
           rev:  v1
         weight: 99
       - labels:
           rev:  v2
         weight: 1
    
  2. Deploy!

     istioctl create -f istio-99-1.yml
    


And Voila, I rule!! Oh wait, Wat? Still getting phone calls? WTF?


III Friendly Fire Only? Yes Please!

Leveraging Istio traffic combing, one can use specific HTTP headers or cookies to divvy up traffic. Let’s use cookies, and we will give our friendlies a secret URL to try out our B-Movie selections.

  1. V2 Cookie-Based Rule: movie=bmovie

     # cookies.yml
     apiVersion: config.istio.io/v1alpha2
     kind:       RouteRule
     metadata:
       name:      icx-cookie
       namespace: icx
     spec:
       precedence: 2
       destination:
         name:      imdb
         namespace: icx
       match:
         source:
           name: picker
         request:
           headers:
             cookie:
               regex: "^(.*?;)?(movie=bmovie)(;.*)?$"
       route:
         - labels:
             rev: v2
           weight: 100
    
  2. Deploy!

     istioctl create -f cookies.yml
    


A quick Angular push away, and our most friendly of customers get to check out the killer Iconoflix B-Movie Edition and help us with the flush.

No need to spin up a new cluster, URLs, etc. to support this edition. Now that’s money in the bank, happy customers, and ecstatic marketeers… A resounding win!



Flush Out The Pipes…

Breaking out into a function pipeline gives us the opportunity to crack open our full-stack skills and start using different languages for our functions. We can now offer an ecosystem of functions specifically tuned per language, using a best-of-breed approach. That is pure juice for sure, but it introduces some nasty implementation details when it comes to dealing with a bad case of the fail/retry dilemma. Each language in our stack now offers different retry implementations, and to boot, this whole necessary evil is going to clutter our implementations and therefore our business logic. Not to mention we will need some kind of configuration for how many retries and for how long, per implementation? 👻
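
For the sake of argument, here is a rough Go sketch of the kind of retry clutter each caller would otherwise need to carry; the function name, in-cluster URL, and retry policy are all hypothetical, and every other language in the stack would need its own flavor of the same:

package main

import (
  "fmt"
  "io/ioutil"
  "net/http"
  "time"
)

// callIMDB hand-rolls the fail/retry dance: try the IMDB function a few times,
// waiting between attempts, and give up after the last failure.
func callIMDB(url string, attempts int, wait time.Duration) ([]byte, error) {
  var lastErr error
  for i := 0; i < attempts; i++ {
    resp, err := http.Get(url)
    if err == nil && resp.StatusCode == http.StatusOK {
      defer resp.Body.Close()
      return ioutil.ReadAll(resp.Body)
    }
    if err != nil {
      lastErr = err
    } else {
      resp.Body.Close()
      lastErr = fmt.Errorf("imdb returned %d", resp.StatusCode)
    }
    time.Sleep(wait)
  }
  return nil, lastErr
}

func main() {
  // Hypothetical in-cluster address for the imdb service.
  if _, err := callIMDB("http://imdb.icx.svc.cluster.local:8080", 3, 2*time.Second); err != nil {
    fmt.Println("still failing after retries:", err)
  }
}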

Once again, Istio to the rescue. Leveraging Aspect Oriented Programming (AOP) techniques, we can treat retries as a cross-cutting concern and inject the retry logic across the function mesh. No more lame retry code clutter, and furthermore one can dial in different retry behavior across the entire cluster in a single blow. Now that’s the Bomb!

Let’s introduce some disturbance in the IMDB force by emulating function failures 30% of the time, and first see whether our pipeline correctly handles this condition and how well (or not!) we compensate.

  1. Resistance Is Futile…

     # fault.yml
     apiVersion: config.istio.io/v1alpha2
     kind:       RouteRule
     metadata:
       name:      icx-fault
       namespace: icx
     spec:
       precedence: 1
       destination:
         name:      imdb
         namespace: icx
       match:
         source:
           name: picker
       httpFault:
         abort:
           percent:    30
           httpStatus: 400
    
  2. Deploy!

     istioctl create -f fault.yml
    


OK, looks good so far: we are indeed logging the failures correctly. However, customers are impacted since we’re dropping requests on the floor 🤪!


Capt’ain Envoy To The Rescue!

Let’s retry 3 times with a 2-second timeout per try and see how we fare now…

  1. Retry with flair…

     # retry.yml
     apiVersion: config.istio.io/v1alpha2
     kind:       RouteRule
     metadata:
       name:      icx-retry
       namespace: icx
     spec:
       precedence: 2
       destination:
         name:      imdb
         namespace: icx
       httpReqRetries:
         simpleRetry:
           attempts:      3
           perTryTimeout: 2s
    
  2. Deploy!

     istioctl create -f retry.yml
    


Cool deal! Our function mesh is once again happy as a hippo in mud! Our function pipeline is now behaving correctly even though our movie imdb function is toast 30% of the time. That, my friends, is the Duck’s Nuts!



That’s A Wrap! Iconoflix Is In The Can…


Needless to say, I am pretty excited about the Istio and FaaS combo and the various frameworks that are coming out. Best I can tell, leveraging Docker so frameworks don’t have to dictate how functions are implemented is brilliant! I am also very keen on the folks at Nuclio and OpenFaaS; their support, kindness, and insights are excellent! I think it’s very exciting to see the evolution here, though the road is still a bit bumpy at this time. Once again, Istio delivers some really cool aptitudes to shape our function mesh and affords canary scenarios that aren’t readily available in FaaS land at this juncture.


Thank you for Reading!