On Containers and Docker

Containers, containers everywhere!

A container is an isolated area of an operating system (OS) with some resource limits imposed on it. Even though the term container has become something of a buzzword recently, these semantics were available in very early versions of *nix operating systems: for example, chroot, FreeBSD jails, Solaris containers and many more.

As previously mentioned, a container breaks down into two components: an area of the OS operating in isolation, and a set of rules or limits that control the resources it can consume and the access to it from outside the isolated area. This essentially provides the flexibility to share the OS and physical resources effectively among multiple applications, and eventually containers became a popular way of shipping applications.

But having only this raw container support was not very helpful, for a few reasons. Building a container manually can be a challenging task, and it is easy to misconfigure one. Therefore, the container ecosystem needed a layer of abstraction.

Docker is one such abstraction that makes using containers easy. Kernel primitives and the Docker engine are the two components that provide this abstraction. As kernel primitives, Docker uses namespaces, cgroups and the layers of a union file system.

When creating a container, the Docker daemon receives a web request to create a container from a particular image. The daemon then makes a gRPC call to containerd to initialize the container, a task carried out according to the Open Container Initiative (OCI) specification. A runc process takes on the responsibility of creating the container and hands it over to a shim process, which continues its operations. As soon as the container is up and running, the runc process terminates.

One decent advantage of this decoupled Docker architecture is that the Docker daemon and containerd daemon processes can be stopped without affecting the containers running on the machine. This is a huge advantage when it comes to upgrading the Docker engine.

On JavaScript Closures

A Closure is usually described as a function that returns a child function, where the child function still has access to the properties and declarations of the parent function even after the parent function has finished executing. For example,
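The original snippet is not reproduced in this copy of the post; based on the description that follows (a ‘square’ function returning an object with a ‘calc’ function that still reads ‘x’), a minimal reconstruction might look like this:

```javascript
// Reconstructed example: `calc` still reads `x` after `square` has returned.
function square(x) {
  return {
    calc: function () {
      return x * x;
    }
  };
}

const sq = square(2);
console.log(sq.calc()); // 4
```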

In the above example, even after the execution of the ‘square’ function, the returned child function still has access to the value of ‘x’. Below we examine how this is possible.

While many other languages limit the existence of a variable to the code block it is declared in, JavaScript variables (declared with var) exist within the enclosing function block. For example,
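Again the original snippet is missing; a small sketch matching the description that follows could be:

```javascript
// `result` is read before its declaration line, yet this is not an error.
function demo(x) {
  console.log(result); // prints undefined, not a ReferenceError
  var result = x * x;
  return result;
}

demo(2); // logs undefined, then returns 4
```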

Here we first print the value of the ‘result’ variable and only then refer to the value of ‘x’ to initialize ‘result’. But under the hood, the JavaScript engine moves variable declarations to the top of the function block, which is known as hoisting. This ensures that variables are declared within a function block before they are used.
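After hoisting, the example effectively becomes (a conceptual reconstruction, with the declaration moved to the top):

```javascript
// The same function after hoisting, conceptually:
function demo(x) {
  var result;          // declaration hoisted to the top of the function block
  console.log(result); // undefined: the assignment has not run yet
  result = x * x;
  return result;
}
```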

Once hoisting is done, the previous code block takes the form shown above.

For each execution of a function block, a data structure called a Lexical Environment (environment) is created. Environments are stacked based on the execution context; each one holds the outer environment (the environment of the parent’s execution context) and references to the values declared in the current execution context.

Consider the above example, where the ‘square’ function is declared in the global scope and returns an object containing a function called ‘calc’. The environment stack then looks like this: the global environment holds ‘square’; each call to ‘square’ creates an environment holding ‘x’, whose outer environment is the global one; and each call to ‘calc’ creates an environment whose outer environment is that of the ‘square’ call.

Likewise, every execution context has its own environment holding references to the values of its properties. This is also referred to as scope. Since an environment can trace its parent scope’s environment, it can access references to the parent scope’s properties. This lays down the foundation of JavaScript Closures.
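A classic consequence, not from the original post but worth sketching: the parent’s environment outlives the call that created it, so the returned function can even keep mutating it.

```javascript
// `count` lives in makeCounter's environment, which survives the call.
function makeCounter() {
  let count = 0;
  return function () {
    count += 1; // still reachable, and mutable, through the closure
    return count;
  };
}

const next = makeCounter();
console.log(next()); // 1
console.log(next()); // 2
```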


Rock on!!!

On Node Modules

Creating a node module is similar to creating a Node.js application: we run npm init and that’s it!


A module can be either a single file or a directory containing a set of files. When a module contains more than one file, it is usually recommended to have a file named index.js that exposes the properties, objects and functions the module provides to the outside. However, a file named index.js is not always required, since this behavior can be overridden (more on this later).

Importing a module into our application is also straightforward, using Node’s require function. require takes the path to a node module as an argument and performs a synchronous lookup for the module, first in the core modules, then in the current directory and finally in node_modules. We can omit the .js extension when requiring; in that case Node assumes the extension is .js and scans for JavaScript modules. Since JSON files are also parsed into JavaScript objects, Node takes JSON files into account as well when the extension is missing. Once the module is located, require returns the contents of the exports object defined in the module.

We have two options when it comes to exporting properties or functions: exports and module.exports. exports is a module-level reference to module.exports, and module.exports is what ultimately gets exposed to the outside. Since exports is only a reference, Node expects it not to be reassigned to any other object. If anything is assigned to exports directly, the reference between exports and module.exports is broken.

When it comes to using node modules, Node has a mechanism to resolve node_modules without knowing their exact file system location: it searches for the required module in node_modules directories at multiple levels, walking up from the current directory.

Finally, Node expects each node module to have a file named index.js. Otherwise, it looks for a file named package.json inside the module directory, and that package.json should contain a field named main specifying the entry point.
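For instance, a module whose entry point lives at lib/entry.js (a made-up layout) could declare it in its package.json like this:

```json
{
  "name": "my-module",
  "version": "1.0.0",
  "main": "lib/entry.js"
}
```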

Rock on!

Chaos Engineering

Last week I had the pleasure of reading Chaos Engineering: Building Confidence in System Behavior through Experiments, a book written by a few engineers involved in Chaos Engineering at Netflix. As a software engineer involved in a similar line of work in an enterprise software context, I am very thankful to the authors of this book for content that helped me think about formulating a strategy to ensure the reliability of our product.

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

In this blog post I will summarize a simple process for adopting Chaos Engineering.

In the light of Agile development methodologies, we follow different practices to ensure that our applications do what they are supposed to do. This could start at unit-level testing with a Test Driven Development framework, or scale up to component-level integration testing. In all these cases we are testing what we know about our application, or what we expect from it. Once we deploy our applications to production, our support teams have to battle a different set of problems.

Chaos engineering is a discipline that allows us to understand our system’s behavior in a production environment by simulating faults in it. The Principles of Chaos Engineering define it as,

Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

Here are some steps involved in designing experiments:

  1. Create a hypothesis
  2. Define the scope of experiments
  3. Identify the metrics
  4. Notify the organization
  5. Run the experiments
  6. Analyze the results
  7. Increase the scope
  8. Automate

Create a hypothesis

Failures can happen for various reasons: hardware failures, functional bugs, network latency or communication barriers, inconsistent state transitions, etc. What is important at this stage is to select an impactful event that can change the system. Let’s say we have observed that traffic to one region of our APIs is increasing; we could test our load-balancing functionality.

Define the scope of the experiments

It would be great if we could run experiments on our hypothesis in production, but at first we could choose a less impactful environment and gradually move towards production as confidence in our experiments grows over time.

Identify the metrics

Once the hypothesis and scope are defined, we can decide which metrics to use to evaluate the outcome. In the load-balancing scenario, the distribution of traffic across multiple servers or the time taken to return a response to the client could be used.

Notify organization

It is necessary to keep all stakeholders informed about the experiments and to take their input on how the experiments should be designed in order to get maximum insight.

Run the experiments

Lights, Camera, Action! Now we can run the experiments, but at this point it is necessary to keep an eye on the metrics. If the experiments are causing harm to the system, they must be aborted, and a mechanism for doing so should be in place.

Analyze the result

Once the results are available, we can validate the correctness of the hypothesis and communicate the results to the relevant teams. If the problem is with load balancing, maybe the network infrastructure team has to work a bit more on load balancing across the system.

Increase the scope

Once we have grown confident experimenting on smaller-scale problems, we can start extending the scope of the experiments. Increasing the scope can reveal a different set of systemic problems. For example, failures in load balancing can cause timeouts and inconsistent states in different services, which could cause our system to fall apart at peak times.


Automate

Don’t repeat yourself as you gain confidence in your experiments. Start automating what you have already experimented with and look for other areas in which to build confidence.

Finally, a question that naturally comes to mind: how good a decision is it to shut down or play around with your production system? Well, Chaos Engineering is certainly not playing around with your system. It is based on the same empirical process that is used to test new drugs, so whatever work we are doing here is for the betterment of our own products.

Rest Parameters Curry

According to Wikipedia, currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. Simply put, if a function takes three arguments and is curried, it is really three functions. Each function takes one argument and returns a new function that takes the next argument until all arguments are received; then it returns the final result.

Consider a function that takes three arguments:
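The original snippet is not shown in this copy; a minimal stand-in (a hypothetical add function) could be:

```javascript
// A plain three-argument function.
function add(a, b, c) {
  return a + b + c;
}

add(1, 2, 3); // 6
```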


The equivalent curried function:
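The curried equivalent of a three-argument add function (again a hypothetical example) is a chain of single-argument functions:

```javascript
// Each call returns a function awaiting the next argument.
function addCurried(a) {
  return function (b) {
    return function (c) {
      return a + b + c;
    };
  };
}

addCurried(1)(2)(3); // 6
```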


This is somewhat insane: nesting more and more returned functions as the function takes more arguments. A better option is to convert a normal function into a curried function, which is what this article mainly focuses on from here onward.

When converting a normal function into a curried function, it is necessary to know how many arguments the function expects. Functions in JavaScript have a length property, and it can be used here. The main intention, though, is to demonstrate how ES6 Rest Parameters can be used to create a generic curry function.

In short, a Rest Parameter allows a function to receive a variable number of arguments; more details can be found in MDN: Rest Parameters.
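The curry function itself is missing from this copy of the post; a sketch reconstructed from the walkthrough that follows (names such as argLength and args2 come from that description) might be:

```javascript
// A generic curry using rest parameters; reconstructed, not the original code.
function curry(fn, ...args) {
  const argLength = fn.length; // how many arguments fn expects
  return (...args2) => {
    const allArgs = args.concat(args2); // old arguments plus the new ones
    if (allArgs.length >= argLength) {
      return fn(...allArgs); // everything received: apply the original function
    }
    return curry(fn, ...allArgs); // otherwise curry again, awaiting more arguments
  };
}

const add = (a, b, c) => a + b + c;
const curriedAdd = curry(add);
console.log(curriedAdd(1)(2)(3)); // 6
console.log(curriedAdd(1, 2)(3)); // 6
```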


The first time this function is called, it expects one required argument: the function to curry. The args parameter will be an empty array on the first invocation. We save the number of arguments the function expects (argLength) in a local variable, then return a function defined inside curry. When that inner function is invoked, it concatenates the new args2 array with the old args array and checks whether all arguments have been received. If so, it applies the original function and returns the result. If not, it recursively calls curry, passing along all the arguments collected so far, which puts us back in the original position: returning a curried function awaiting more arguments.

With concepts like currying, it is much easier to abstract logic and write clear and concise functions.


  1. Currying on Wikipedia
  2. Rest parameters on MDN

Rock on!