A container is an isolated area of an operating system (OS) with some resource limits imposed on it. Even though the term container has become something of a buzzword recently, these semantics were available in very early versions of *nix operating systems: chroot, FreeBSD jails, Solaris Containers and many more.
As previously mentioned, a container can be broken down into two components: an OS area operating in isolation, and a set of rules or limits that control the resources it can consume and the access to it from outside the isolated area. This provides the flexibility to share the OS and physical resources effectively among multiple applications, and eventually this became a popular way of shipping applications.
But having only this raw container support was not very helpful, for a few reasons. Building a container manually can be a challenging task, and it is easy to misconfigure one. Therefore, the container ecosystem needed a layer of abstraction.
Docker is one such abstraction that makes using containers easy. Kernel primitives and the Docker engine are the two components that provide this abstraction. As kernel primitives, Docker uses namespaces, cgroups and the layers of a union file system.
When creating a container, the Docker daemon receives a web request to create a container from a particular image. The daemon then makes a gRPC call to containerd to initialize the container, and this task is carried out according to the Open Container Initiative (OCI) specification. A runc process takes on the responsibility of creating the container and hands it over to a shim process, which continues its operations. As soon as the container is up and running, the runc process terminates.
One decent advantage of this decoupled Docker architecture is that it is possible to stop the Docker daemon and containerd daemon processes without affecting the running containers on the machine. This is a huge advantage when it comes to upgrading the Docker engine.
A function that returns a child function, where the child function still has access to the variables and declarations of the parent function even after the parent function has finished executing, is usually referred to as a closure. For example,
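The example code appears to have been lost from the post; here is a minimal sketch consistent with the description that follows (a `square` function holding `x` and returning an object with a `calc` function):

```javascript
// 'square' finishes executing when it returns, yet the returned
// object's 'calc' function still has access to 'x' via a closure.
function square(x) {
  return {
    calc: function () {
      return x * x;
    }
  };
}

const s = square(4);
console.log(s.calc()); // 16 — 'x' is still reachable after square() returned
```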
In the above example, even after the execution of the ‘square’ function, the returned child function still has access to the value of ‘x’. Below we examine how this is possible.
Once hoisting is done, the previous code block would look like the above.
For each execution of a function block, a data structure called a Lexical Environment is created. Environments are stacked based on the execution context; each one holds a reference to the outer environment (the environment of the parent’s execution context) and references to the values declared in the current execution context.
Consider the above example, where the ‘square’ function is declared in the global scope and returns an object containing a function called ‘calc’. The environment stack would look like the below:
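The original diagram appears to be missing; as a stand-in, here is a toy model (an illustration only, not Node's real internals) of how each environment references its outer environment, and how name lookup walks that chain outward — which is exactly why ‘calc’ can still see ‘x’:

```javascript
// A toy model: each environment holds its own bindings plus a
// reference to its outer environment.
function makeEnv(outer, bindings) {
  return { outer: outer, bindings: bindings };
}

// Name lookup walks the chain outward until it finds the name.
function lookup(env, name) {
  for (let e = env; e !== null; e = e.outer) {
    if (name in e.bindings) return e.bindings[name];
  }
  throw new ReferenceError(name + " is not defined");
}

const globalEnv = makeEnv(null, { square: "<function square>" });
const squareEnv = makeEnv(globalEnv, { x: 4 }); // created when square(4) runs
const calcEnv = makeEnv(squareEnv, {});         // created when calc() runs

console.log(lookup(calcEnv, "x")); // 4, found in square's environment
```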
Creating a Node module is similar to creating a Node.js application: we run npm init, and that’s it!
A module can be either a single file or a directory containing a set of files. When the module contains more than one file inside a directory, it is usually recommended to have a file named index.js that exposes the properties, objects and functions the module provides to the outside. However, a file named index.js is not strictly required, as we can override this behavior (more on this later).
We have two options when it comes to exporting properties or functions: exports and module.exports. exports is a reference to module.exports, and module.exports is what ultimately gets exposed to the outside. Since exports is just a reference, Node expects it not to be reassigned to any other object. If anything is assigned to exports directly, the reference between exports and module.exports is broken.
When it comes to using Node modules, Node uses a mechanism to resolve modules without knowing their file system location: it searches for the required module in node_modules directories at multiple levels, walking up the directory tree.
Finally, Node expects to find a file named index.js in each module. Otherwise, it scans for a file named package.json inside the module directory, and that package.json should contain a field named main specifying the entry point.
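For example, a package.json like the following (the module name and the lib/start.js path are hypothetical) tells Node where the module starts:

```json
{
  "name": "my-module",
  "version": "1.0.0",
  "main": "lib/start.js"
}
```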
Last week I had the pleasure of reading “Chaos Engineering: Building Confidence in System Behavior through Experiments“, a book written by a few engineers involved in Chaos Engineering at Netflix. As a software engineer involved in a similar line of work in an enterprise software context, I am very thankful to the authors of this book for content that helped me think about formulating a strategy to ensure the reliability of our product.
In this blog post I will summarize a simple process for adopting Chaos Engineering.
Under Agile development methodologies, we follow different practices to ensure that our applications do what they are supposed to do. This could start with unit-level testing in a Test Driven Development framework, or scale up to component-level integration testing. In all these cases, we are testing what we know about our application, or what we expect from it. Our support teams have to battle a different set of problems once we deploy our applications to production.
Chaos Engineering is a discipline that allows us to understand our system’s behavior in a production environment by simulating faults in it. “Principles of Chaos Engineering” defines it as,
Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.
Here are the steps involved in designing experiments:
Create a hypothesis
Define the scope of experiments
Identify the metrics
Notify the organization
Run the experiments
Analyze the results
Increase the scope
Create a hypothesis
Failures can happen for various reasons: hardware failures, functional bugs, network latency or communication barriers, inconsistent state transmissions, etc. What is important at this stage is to select an impactful event that can change the system. Let’s say we have observed that traffic to one region of our APIs is increasing; we could test out our load-balancing functionality.
Define the scope of the experiments
It would be great if we could run experiments on our hypothesis in production, but at first we could choose a less impactful environment and gradually move towards production as confidence in our experiments grows over time.
Identify the metrics
Once the hypothesis and scope are defined, we can decide what metrics we are going to use to evaluate the outcome. Equal distribution of traffic across multiple servers, or the time taken for a response to reach the client, could be used in the load-balancing scenario.
Notify the organization

It is necessary to keep all stakeholders informed about the experiments and to take their input on how the experiments should be designed in order to get maximum insight.
Run the experiments
Lights, Camera, Action! Now we can run the experiments, but at this point it is necessary to keep an eye on the metrics. If the experiments are causing harm to the system, they must be aborted, and a mechanism for that should be in place.
Analyze the results
Once the results are available, we can validate the correctness of the hypothesis and communicate the results to the relevant teams. If the problem is with load balancing, maybe the network infrastructure team has to work a bit more on load balancing across the system.
Increase the scope
Once we grow confident experimenting on smaller-scale problems, we can start extending the scope of the experiments. Increasing the scope can reveal a different set of systemic problems. For example, failures in load balancing can cause timeouts and inconsistent states in different services, which could cause our system to fall apart at peak times.
Don’t repeat yourself as you gain confidence in your experiments. Start automating what you have already experimented with, and look for other areas in which to build confidence.
Finally, a question that comes to mind naturally: how good an idea is it to shut down or play around with your system in production? Well, Chaos Engineering is certainly not playing around with your system. It is based on the same empirical process that is used to test new drugs, so whatever work we do here is for the betterment of our own products.
According to Wikipedia, currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. Simply put, if a function takes three arguments and is curried, it is really three functions. Each function takes one argument and returns a new function that takes the next argument, until all arguments have been received; then the final result is returned.
Consider a function that takes three arguments:
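The code sample appears to be missing from the post; here is a minimal stand-in (the name add is an assumption):

```javascript
// A plain function that needs all three arguments at once.
function add(a, b, c) {
  return a + b + c;
}

console.log(add(1, 2, 3)); // 6
```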
The equivalent curried function:
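The original sample here is also missing; the curried form of a three-argument add function (names are an assumption) would look like:

```javascript
// Each function takes one argument and returns the next function,
// until the final argument produces the result.
function addCurried(a) {
  return function (b) {
    return function (c) {
      return a + b + c;
    };
  };
}

console.log(addCurried(1)(2)(3)); // 6
```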
This looks somewhat insane: nested functions returned as the function takes more and more arguments. A better option is to convert a normal function into a curried function, which is what this article mainly focuses on from here onward.
In short, a rest parameter allows a function to receive a variable number of arguments; more details can be found in MDN: Rest Parameters.
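The curry helper itself appears to be missing from the post; here is a sketch that matches the walkthrough below (the exact implementation details are my assumption):

```javascript
// Collects arguments across calls using a rest parameter until the
// original function's arity (fn.length) is satisfied.
function curry(fn, ...args) {
  const argLength = fn.length; // number of arguments fn expects

  return function (...args2) {
    const allArgs = args.concat(args2);
    if (allArgs.length >= argLength) {
      return fn.apply(null, allArgs); // all arguments received
    }
    // not enough yet: recurse with everything collected so far
    return curry(fn, ...allArgs);
  };
}

const sum = (a, b, c) => a + b + c;
const curriedSum = curry(sum);
console.log(curriedSum(1)(2)(3)); // 6
console.log(curriedSum(1, 2)(3)); // 6
```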
The first time this function is called, it expects only one argument: the function to curry. The args parameter will be an empty array on the first invocation. We save the number of arguments the function expects (argLength) to a local variable, then return a function defined inside curry. When that returned function is invoked, it concatenates the new args2 array with the old args array and checks whether it has received all the arguments yet. If so, it applies the original function and returns the result. If not, it recursively calls curry, passing along all the arguments collected so far, which puts us back in the original position: returning a curried function that awaits more arguments.
With concepts like currying, it is much easier to abstract logic and write clear, concise functions.