On Containers and Docker

Containers, containers everywhere!

A container is an isolated area of an operating system (OS) with some resource limits imposed on it. Even though the term container has become something of a buzzword recently, these semantics were available in very early versions of *nix operating systems, for example chroot, FreeBSD jails and Solaris Containers.

As mentioned above, a container breaks down into two components: an area of the OS operating in isolation, and a set of rules or limits controlling the resources it can consume and the access to it from outside the isolated area. This provides the flexibility of sharing the OS and physical resources effectively among multiple applications, and it eventually became a popular way of shipping applications.

But having only this raw container support was not very helpful, for a few reasons. Building a container manually can be a challenging task, and it is easy to misconfigure one. Therefore, the container ecosystem needed a layer of abstraction.

Docker is one abstraction that makes using containers easy. Kernel primitives and the Docker engine are the two components that provide this abstraction. As kernel primitives, Docker uses namespaces, cgroups and the layers of a union file system.

When creating a container, the Docker daemon receives the request to create a container from a particular image as a web (REST API) request. The daemon then makes a gRPC call to containerd to initialize the container. This work is done according to the Open Container Initiative (OCI) specification. A runc process takes on the responsibility of creating the container, then hands it over to a shim process which carries on its operations. As soon as the container is up and running, the runc process terminates.

One decent advantage of this decoupled Docker architecture is that it is possible to stop the Docker daemon and containerd daemon processes without affecting the containers running on the machine. This is a huge advantage when it comes to upgrading the Docker engine.

On JavaScript Closures

A function that returns a child function, where the child function still has access to the properties and declarations of the parent function even after the parent function has finished executing, is usually referred to as a Closure. For example,
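A minimal sketch of such a function (the names 'square' and 'calc' follow the discussion below; the original snippet may have looked slightly different):

function square(x) {
    return {
        calc: function () {
            return x * x;
        }
    };
}

const s = square(4);
console.log(s.calc()); // 16, 'calc' can still read 'x' after 'square' has returned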

According to the above example, even after the execution of the 'square' function, the returned child function has access to the value of 'x'. Below we examine how this is possible.

While many other languages limit the existence of a variable to the code block it is declared in, JavaScript variables exist within a function block. For example,
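A sketch of what such a snippet could look like, reusing the 'square' function (illustrative, not the original code):

function square(x) {
    console.log(result); // prints undefined rather than throwing a ReferenceError
    var result = x * x;
    return result;
}

square(4); // logs undefined, then returns 16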

Here we first print the value of the 'result' variable and only then refer to the 'x' variable to initialize 'result'. But under the hood, the JavaScript engine moves variable declarations to the top of the function block, which is known as hoisting. This ensures that variables are declared within a function block before they are used.
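A sketch of the hoisted form, assuming the same 'square' function:

function square(x) {
    var result;          // declaration moved to the top of the function block
    console.log(result); // undefined
    result = x * x;
    return result;
}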

Once the hoisting is done, the previous code block would look like the one above.

For each execution of a function block, a data structure called a Lexical Environment (environment) is created. Environments are stacked based on the execution context; each one holds the outer environment (the environment of the parent's execution context) and references to the values declared in the current execution context.

Consider the above example, where the 'square' function is declared in the global scope and returns an object containing a function called 'calc'. The environment created for 'calc' references the environment of 'square', which in turn references the global environment.

Likewise, every execution context has its own environment where it holds references to the values of its properties. This is also referred to as scope. Since an environment can trace its parent scope's environment, it can access references to the parent scope's properties. This lays the foundation of JavaScript Closures.


Rock on!!!

On Parsing and Modifying JavaScript Abstract Syntax Tree

Recently I published a Yeoman generator for Express.js based APIs. I wanted to modify the JavaScript template files while implementing the Yeoman generator, for example to add new routes. This requires modifying the code from time to time, and I wanted to do it the right way, which led me to try out Esprima and Escodegen.


Esprima is primarily a JavaScript/ECMAScript syntax parser which generates an abstract syntax tree (AST) from a given code snippet. Escodegen, on the other hand, is a JavaScript/ECMAScript code generator.

Parsing JavaScript to get an abstract syntax tree,
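A minimal sketch of parsing with Esprima (assuming a recent version, which exposes parseScript; the snippet and variable names are illustrative):

const esprima = require('esprima');

const code = 'const answer = 42;';     // the source is passed as a string
const ast = esprima.parseScript(code);
console.log(JSON.stringify(ast, null, 2));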


One important thing to note in the above code is that Esprima requires the code as a string in order to parse it.

Modifying the abstract syntax tree,
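A sketch of how a new statement could be injected into the tree and turned back into code with Escodegen (the injected console.log node is illustrative):

const esprima = require('esprima');
const escodegen = require('escodegen');

const ast = esprima.parseScript('const answer = 42;');

// The injected node must follow the same ESTree structure the rest of the tree uses.
ast.body.push({
    type: 'ExpressionStatement',
    expression: {
        type: 'CallExpression',
        callee: {
            type: 'MemberExpression',
            computed: false,
            object: { type: 'Identifier', name: 'console' },
            property: { type: 'Identifier', name: 'log' }
        },
        arguments: [{ type: 'Identifier', name: 'answer' }]
    }
});

console.log(escodegen.generate(ast));
// const answer = 42;
// console.log(answer);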


The tricky thing here is that you need to know the data structure the abstract syntax tree is using, and you have to inject the right data structure into the right place.

Rock on!

On Node Modules

Creating a node module is similar to creating a Node.js application. We have to run npm init and that's it!


A module can be either a single file or a directory containing a set of files. Usually, when the module contains more than one file inside a directory, it is recommended to have a file named index.js that exposes the properties, objects and functions the module provides to the outside. However, a file named index.js is not always required; we can override this behavior (more on this later).

Importing a module into our application is also straightforward and can be done using Node's require function. require takes the path to a node module as an argument and performs a synchronous lookup for the module, first in core modules, then in the current directory and finally in node_modules. We can omit the .js extension when requiring; in that case Node.js assumes the .js extension and scans for JavaScript modules. Since JSON is also parsed into JavaScript objects, Node.js takes JSON files into account when the extension is missing. Once the module is located, the require function returns the contents of the exports object defined in the module.
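A small sketch of the lookup order described above (the local file names here are made up):

const path = require('path');            // found among the core modules first
const routes = require('./routes');      // resolves ./routes.js, extension omitted
const config = require('./config.json'); // JSON files are parsed into objects
const express = require('express');      // finally looked up in node_modules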

We have two options when it comes to exporting properties or functions: exports and module.exports. exports is a reference to module.exports, and module.exports is what ultimately gets exposed to the outside. Since exports is just a reference, Node expects it not to be reassigned to another object. If anything is assigned to exports, the reference between exports and module.exports is broken.
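For illustration, a hypothetical logger.js module showing the difference:

// logger.js
// Works: adding properties to exports, which points to module.exports.
exports.info = (msg) => console.log(`[info] ${msg}`);

// Broken: reassigning exports detaches it from module.exports,
// so this object would never be exported.
// exports = { warn: (msg) => console.warn(msg) };

// To export a whole new object, reassign module.exports instead:
// module.exports = { info: (msg) => console.log(`[info] ${msg}`) };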

When it comes to using node modules, Node uses a mechanism to reuse node_modules without knowing their file system location. It searches for required modules in node_modules directories at multiple levels, walking up from the current directory.

Finally, Node expects a file named index.js in each node module. Otherwise it scans for a file named package.json inside the module directory, and that package.json should contain an element named main specifying the starting point.
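A minimal package.json for a module whose entry point is not index.js (the name and path here are hypothetical):

{
  "name": "my-module",
  "version": "1.0.0",
  "main": "lib/entry.js"
}

With this in place, require('my-module') would load lib/entry.js.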

Rock on!

On Garbage Collecting C++ Addons in NodeJS

NodeJS Addons are C++ objects that can be loaded into NodeJS using require(). The idea is that this allows NodeJS to gain the performance and functionality of C++; the addon acts as an interface between JavaScript and C++. The NodeJS documentation is helpful for getting familiar with Addons.


However, when it comes to garbage collecting (GC), the documentation is missing some important… well I’d refer to those as tricks.

Manually execute GC,

First we have to set the object instance to null and then call the garbage collector in the global context.

obj = null;  // drop the last reference to the addon object
global.gc(); // only available when Node is started with --expose-gc


One consideration with the above is that it is not a reliable way to trigger GC on weak handles.

Another approach is to allocate a large amount of memory so that GC will be triggered,

for (let i = 0; i < 1e6; ++i) {
    const filler = new Array(100).fill(i); // discarded immediately; builds GC pressure
}



Also, GC can be triggered forcefully using V8's command line flags --gc_global and --gc_interval. --gc_global forces V8 to perform a full garbage collection, while --gc_interval forces V8 to perform garbage collection after a given number of allocations.

These command line flags are provided by the underlying V8 JavaScript engine. They are subject to change or removal at any time and are not documented by NodeJS or V8. Therefore it is recommended not to use them outside of testing purposes.


Rock on!

Chaos Engineering

Last week I had the pleasure of reading Chaos Engineering: Building Confidence in System Behavior through Experiments, a book written by a few engineers involved in Chaos Engineering at Netflix. Being a software engineer involved in a similar line of work in an enterprise software context, I am very thankful to the authors for content that helped me think about formulating a strategy to ensure the reliability of our product.

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

In this blog post I will summarize a simple process of adapting Chaos Engineering.

In the light of Agile development methodologies, we follow different practices to ensure that our applications do what they are supposed to do. This could start at unit-level testing with a Test Driven Development framework, or scale up to component-level integration testing. In all these cases we are testing what we know about our application, or what we expect from it. Our support teams have to battle a different set of problems once we deploy our applications to production.

Chaos engineering is a discipline that allows us to understand our system's behavior in a production environment by simulating faults in it. The Principles of Chaos Engineering define it as,

Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

Here are some steps involved in designing experiments,

  1. Create a hypothesis
  2. Define the scope of experiments
  3. Identify the metrics
  4. Notify the organization
  5. Run the experiments
  6. Analyze the results
  7. Increase the scope
  8. Automate

Create a hypothesis

Failures can happen for various reasons: hardware failures, functional bugs, network latency or communication barriers, inconsistent state transitions, etc. What is important at this stage is to select an impactful event that can change the system. Let's say we have observed that traffic to one region of our APIs is increasing; we could test our load balancing functionality.

Define the scope of the experiments

It would be great if we could experiment on our hypothesis in production, but at first we can choose a less impactful environment and gradually move towards production as confidence in our experiments grows over time.

Identify the metrics

Once the hypothesis and scope are defined, we can decide which metrics we are going to use to evaluate the outcome. In a load balancing scenario, the distribution of traffic across multiple servers or the time taken to return a response to the client could be used.

Notify organization

It is necessary to keep all stakeholders informed about the experiments and to take their input on how the experiments should be designed in order to get maximum insight.

Run the experiments

Lights, Camera, Action! Now we can run the experiments, but at this point it is necessary to keep an eye on the metrics. If the experiments are causing harm to the system, they must be aborted, and a mechanism for that should be in place.

Analyze the result

Once the results are available, we can validate the correctness of the hypothesis and communicate the results to the relevant teams. If the problem is with load balancing, maybe the network infrastructure team has to work a bit more on load balancing across the system.

Increase the scope

Once we grow confident experimenting on smaller-scale problems, we can start extending the scope of the experiments. Increasing the scope can reveal a different set of systemic problems. For example, failures in load balancing can cause timeouts and inconsistent states in different services, which could cause our system to fall apart at peak times.


Automate

Don't repeat yourself as you gain confidence in your experiments. Start automating what you have already experimented with and look for other areas in which to build confidence.

Finally, a problem that naturally comes to mind: how good an idea is it to shut down or play around with your system in production? Well, Chaos Engineering is certainly not playing around with your system. It is based on the same empirical process used to test new drugs, so the work we are doing here is for the betterment of our own products.

State Management in ReactJS


I recently started looking into ReactJS. Among many shiny things like the "Virtual DOM", I found the way React manages state interesting, kind of an unsung hero to me. This post gives a few examples of state management in ReactJS.

Let's first consider a very simple function. The whole goal of this function is to fetch a given user. Therefore, we add a parameter to the function to receive the username, and we pass a username in when the function is invoked. For example,

function fetchUser(username) {
    // ajax call using the given username
}


The same intuition can be applied to React components. Suppose we have a simple React component to display a user in the user interface. Now the problem is: how do we communicate which user the component should display? This is where we can pass the username to our component through a custom attribute. The values we pass into components through custom attributes can be accessed inside the component via its "props" object. For example,

class User extends Component {
    render() {
        return (
            <p>Username: {this.props.username}</p>
        );
    }
}

<User username="Isuru" />

Due to the nature of React's component model, it is possible to delegate/encapsulate state management into individual components, which allows us to build large applications out of a bunch of small applications/components. Adding a new property called "state", whose value is an object, is enough to add state to a component. This object represents the entire state of the component, and each key in it represents a distinct piece of that state. Finally, this state can be accessed via the state object, much like the way we accessed props.

class User extends Component {
    state = {
        username: 'Isuru'
    }

    render() {
        return (
            <p>Username: {this.state.username}</p>
        );
    }
}

<User />

This allows us to separate how the application looks from the application's state; ultimately the user interface becomes a function of the application's state. React takes responsibility for changing the state of a component: this is done by calling the "setState()" method when something needs to change, and it is encouraged not to update the state of components directly.
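For example, a hypothetical handler on the User component above might update the username like this (a sketch, not from the original post):

class User extends Component {
    state = {
        username: 'Isuru'
    }

    handleRename = (newUsername) => {
        // Wrong: this.state.username = newUsername;  // React would not re-render
        // Right: let React schedule the update and re-render.
        this.setState({ username: newUsername });
    }

    render() {
        return <p>Username: {this.state.username}</p>;
    }
}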

Alright then, how about forms? Usually the state of a form lives in the DOM. If React manages the state of components inside the component itself, how do we handle forms in React? This is where controlled components come in. Controlled components are components that render a form, but the state of that form lives inside the component.

class User extends Component {
    state = {
        username: ''
    }

    handleChange = (event) => {
        this.setState({username: event.target.value})
    }

    render() {
        return (
            <input type="text" value={this.state.username}
                onChange={this.handleChange} />
        );
    }
}

<User />


In the above example, we bind the value of the input field to state, and with that the state of our form is controlled by React. A few benefits we get from controlled components are,

  1. it allows instant input validation,
  2. allows to conditionally enable/disable buttons,
  3. enforce input formats.

All of these benefits relate to user interface updates based on user input. This is at the heart of not only controlled components, but React in general: if the state of the application changes, the user interface changes based on that state.

Rock on!



Thinking Behind The Boring Company


Elon Musk is one of the great thinkers of our time. He has changed how we deal with and think about humankind's problems, with a proven track record ranging from fin-tech to space travel. The Boring Company is among his many inspiring works, and this article summarizes how he has abstracted the problem and suggested a solution, from my point of view.

The FAQ section of The Boring Company's website is informative enough to understand what it does: it builds an underground network of tunnels for travel. The solution seems obvious, but what's the problem? And why tunnels?

Traveling within cities, or entering a city from the suburbs, is a bit of a hassle, mainly due to traffic congestion. If we look at our city architectures, we have multi-storey buildings. We all arrive in cities on single-level roads and work in buildings with multiple levels. How do we expect to have minimal, if not zero, traffic congestion when our transportation medium is expected to support 100x its capacity?

The solution should be a transport system that spans many levels. As the FAQ describes, tunnels don't get affected by weather and won't fall down like flying cars. Therefore multilevel tunnels can be the solution.

When we are finding a solution, or thinking about a new product idea, the inspiration we can take from this is narrowing the problem down to its root. Get to the ground truth of the problem and solve it.

Imaginative Constraints and Inspiration



I work as a software engineer in the daytime. I have to work within many constraints in each and every project I work on. Constraints come in many forms; they can be physical, technical or imaginative. This blog post explains a few thoughts I had on imaginative difficulties I faced very recently.

In the good old days, we had user manuals for the software we used. Manuals described what each user interface element does and how users can get tasks done. Now if we consider the apps we use on a daily basis or the websites we visit, we don't have user manuals for those. The user interface items we find in those applications are very intuitive for us; they have been placed in the right places very carefully.

I'm currently doing research on user experience for a data analytics platform that has interfaces on web, desktop and mobile. I lost myself in there. How do we decide which user experience suits best? I reached out to a colleague who has a sensible amount of user experience development behind him and asked how he does his magic. His response was very simple: "You have to understand the user's needs and walk in their shoes. Decide what you would prefer to have."

I was thinking... How does an architect design buildings that anyone can walk through without any guidance? Isn't it about understanding what you expect from a certain experience and unconsciously communicating that message to others? What does a solid metal elevator convey to us? Isn't that a message about reliability and safety?

I concluded, yet thought to explore more, that looking elsewhere and bringing in experiences from outside of my universe is essential in user experience design.

Rest Parameters Curry

According to Wikipedia, currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each taking a single argument. Simply put, if a function takes three arguments and is curried, it is really three functions. Each function takes one argument and returns a new function that takes the next argument, until all arguments are received, at which point the final result is returned.

Consider a function that takes three arguments:
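For instance, a hypothetical three-argument function (the original snippet is not shown here):

function volume(length, width, height) {
    return length * width * height;
}

console.log(volume(2, 3, 4)); // 24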


The equivalent curried function:
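A hand-written curried equivalent could look like this:

function curriedVolume(length) {
    return function (width) {
        return function (height) {
            return length * width * height;
        };
    };
}

console.log(curriedVolume(2)(3)(4)); // 24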


This is somewhat insane: returning nested functions as the function takes more and more arguments. One option is to convert a normal function into a curried function automatically, which is what this article mainly focuses on from here onward.

When converting a normal function into a curried function, it is necessary to know how many arguments the function is expecting. Functions in JavaScript have a length property that can be used here. The main intention, though, is to demonstrate how ES6 rest parameters can be used when creating a generic curry function.

Concisely, a rest parameter allows a function to receive a variable number of arguments; more details can be found in MDN: Rest Parameters.
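A sketch of such a generic curry function, which the walkthrough below refers to (the function and variable names are my own reconstruction):

function volume(length, width, height) {
    return length * width * height;
}

function curry(fn, ...args) {
    const argLength = fn.length; // number of arguments fn expects

    return function (...args2) {
        const allArgs = args.concat(args2);

        if (allArgs.length >= argLength) {
            // All arguments received: apply the original function.
            return fn(...allArgs);
        }

        // Not enough yet: re-curry, passing along everything collected so far.
        return curry(fn, ...allArgs);
    };
}

const curriedVolume = curry(volume);
console.log(curriedVolume(2)(3)(4)); // 24
console.log(curriedVolume(2, 3)(4)); // 24, partial application also works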


The first time this function is called it only expects one argument, the function to curry. The args parameter will probably be an empty array on the first invocation. Save the number of arguments the function expects (argLength) to a local variable, then return a function defined inside the curry function. When this inner function is invoked, it concatenates the new args2 array with the old args array and checks whether it has received all the arguments yet. If so, it applies the original function and returns the result. If not, it recursively calls the curry function, passing along all the arguments collected so far, which puts us back in the original position, returning a curried function awaiting more arguments.

With concepts like currying, it is much easier to abstract the logic and make clear and concise functions.


  1. Currying on Wikipedia
  2. Rest parameters on MDN

Rock on!