On Node Modules

Creating a Node module is similar to creating a Node.js application: we run npm init, and that's it!


A module can be either a single file or a directory containing a set of files. When a module contains more than one file inside a directory, it is usually recommended to have a file named index.js that exposes which properties, objects and functions the module provides to the outside. However, a file named index.js is not always required; we can override this behavior (more on this later).

Importing a module into our application is also straightforward and can be done using Node's require function. The require function takes the path to a module as an argument and performs a synchronous lookup for the module, first in core modules, then in the current directory and finally in node_modules. We can omit the .js extension when requiring; in that case, Node assumes the extension is .js and scans for JavaScript modules. Since JSON is also treated as JavaScript objects, Node takes JSON files into account when the extension is missing. Once the module is located, the require function returns the contents of the exports object defined in the module.

We have two options when it comes to exporting properties or functions: exports and module.exports. exports is a reference to module.exports, and module.exports is what ultimately gets exposed to the outside. Since exports is only a reference, Node expects it not to be reassigned to another object. If anything is assigned to exports itself, the reference between exports and module.exports is broken.

When it comes to using node modules, Node uses a mechanism to reuse node_modules without knowing their file system location: it searches for required modules in node_modules directories at multiple levels, walking up the directory tree from the current directory.

Finally, Node expects each node module to have a file named index.js. Otherwise it scans for a file named package.json inside the module directory, and that package.json should contain an element named main specifying the entry point.
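For example, a package.json along these lines (the name and file path are illustrative) tells Node to start from lib/entry.js instead of index.js:

```json
{
  "name": "demo-module",
  "version": "1.0.0",
  "main": "lib/entry.js"
}
```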

Rock on!

On Garbage Collecting C++ Addons in NodeJS

NodeJS Addons are C++ objects that can be loaded into NodeJS using require(). The idea is that this allows NodeJS to gain the performance and functionality of C++; the addon acts as an interface between JavaScript and C++. The NodeJS documentation is helpful for getting familiar with Addons.


However, when it comes to garbage collection (GC), the documentation is missing some important… well, I'd refer to them as tricks.

Manually triggering GC,

First we set the object instance to null and then call the garbage collector on the global object.

obj = null;
global.gc(); // only available when Node is started with the --expose-gc flag


One consideration with the above is that it is not a reliable way to trigger GC on weak handles.

Another approach is to allocate a huge amount of memory so that GC is triggered,

for (let i = 0; i < 1e6; ++i) {
    const tmp = { index: i }; // throwaway allocations create heap pressure
}

Also, the GC can be triggered forcefully using V8's command line flags --gc_global and --gc_interval. --gc_global forces V8 to perform a full garbage collection, while --gc_interval forces V8 to perform garbage collection after a given number of allocations.

These command line flags are provided by the underlying V8 JavaScript engine. They are subject to change or removal at any time and are not documented by NodeJS or V8. Therefore it is recommended not to use them outside of testing.


Rock on!

On Kubernetes Pod failures and Restart policy

A Job in Kubernetes is responsible for creating and managing Pods that perform a task until its successful termination, whereas normal Pods are restarted continuously regardless of the exit code. If a job fails before successful termination, the job controller creates a new Pod, and depending on the nature of the system, there's a chance we end up with duplicate Pods.

Let's consider a one-shot job that fails to terminate successfully. If we check the status of our Pod, we notice that it has been restarted multiple times and has ended up in CrashLoopBackOff status.

kubectl get pod -a -l job-name=demojob

NAME            READY   STATUS             RESTARTS   AGE
demojob-3ddk0   0/1     CrashLoopBackOff   4          3m

If we change the restartPolicy from OnFailure to Never, we can avoid this kind of crash loop. For example,

kubectl get pod -a -l job-name=demojob

NAME            READY   STATUS    RESTARTS   AGE
demojob-0wm49   0/1     Error     0          1m
demojob-6h9s2   0/1     Error     0          39s
demojob-hkzw0   1/1     Running   0          6s
demojob-k5swz   0/1     Error     0          28s
demojob-m1rdw   0/1     Error     0          19s
demojob-x157b   0/1     Error     0          57s

What we are seeing here is a set of duplicate Pods in the Error state. With the restart policy set to Never, we tell the kubelet not to restart Pods on failure. But the job object notices the failed Pod and creates a new one in its place, so we end up with multiple duplicate Pods. Since it is uncommon to have a Pod fail on start-up, we can configure the restart policy to OnFailure and avoid duplicate Pods.
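For reference, the restartPolicy lives in the Pod template of the Job spec; a minimal sketch (the job and image names are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: demojob
spec:
  template:
    spec:
      containers:
      - name: demojob
        image: example/demojob:latest   # illustrative image name
      restartPolicy: OnFailure          # restarts the container in place instead of spawning new Pods
```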

Chaos Engineering

Last week I had the pleasure of reading Chaos Engineering: Building Confidence in System Behavior through Experiments, a book written by a few engineers involved in Chaos Engineering at Netflix. As a software engineer doing similar work in an enterprise software context, I am very thankful to the authors of this book for content that helped me think about formulating a strategy to ensure the reliability of our product.

In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state.

In this blog post I will summarize a simple process for adopting Chaos Engineering.

In the light of Agile development methodologies, we follow different practices to ensure that our applications are doing what they are supposed to do. This could start at unit-level testing with a Test Driven Development framework or could be scaled up to component-level integration testing. What we are doing in all these cases is testing what we know about our application, or what we expect from it. Our support teams have to battle a different set of problems once we deploy our applications to production.

Chaos engineering is a discipline that allows us to understand our system's behavior in a production environment by simulating faults in it. The Principles of Chaos Engineering defines it as,

Chaos Engineering is the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.

Here are the steps involved in designing experiments,

  1. Create a hypothesis
  2. Define the scope of experiments
  3. Identify the metrics
  4. Notify the organization
  5. Run the experiments
  6. Analyze the results
  7. Increase the scope
  8. Automate

Create a hypothesis

Failures can happen for various reasons: hardware failures, functional bugs, network latency or communication barriers, inconsistent state transmissions, etc. What is important at this stage is to select an impactful event that can change the system. Let's say we have observed that traffic to one region of our APIs is increasing; we could test our load balancing functionality.

Define the scope of the experiments

It would be great to experiment on our hypothesis in production, but at first we can choose a less impactful environment and gradually move towards production as confidence in our experiments grows over time.

Identify the metrics

Once the hypothesis and scope are defined, we can decide which metrics to use to evaluate the outcome. In the load balancing scenario, the distribution of traffic across servers or the time taken to return a response to the client could be used.

Notify the organization

It is necessary to keep all stakeholders informed about the experiments and to take their input on how the experiments should be designed in order to get maximum insight.

Run the experiments

Lights, Camera, Action! Now we can run the experiments, but at this point it is necessary to keep an eye on the metrics. If the experiments are causing harm to the system, they must be aborted, and a mechanism for that should be in place.

Analyze the results

Once the results are available, we can validate the correctness of the hypothesis and communicate the results to the relevant teams. If the problem is with load balancing, maybe the network infrastructure team has to work a bit more on load balancing across the system.

Increase the scope

Once we have grown confident experimenting on smaller-scale problems, we can start extending the scope of the experiments. Increasing the scope can reveal a different set of systemic problems. For example, failures in load balancing can cause timeouts and inconsistent states in different services, which could cause our system to fall apart at peak times.


Automate

Don't repeat yourself as you gain confidence in your experiments. Start automating what you have already experimented with, and look for other areas in which to build confidence.

Finally, a question that comes to mind naturally: how good a decision is it to shut down, or play around with, your system in production? Well, Chaos Engineering is certainly not playing around with your system. It is based on the same empirical process used to test new drugs, so the work we do here is for the betterment of our own products.

State Management in ReactJS

I recently started looking into ReactJS. Among many shiny things like the "Virtual DOM", I found the way React manages state interesting, a kind of unsung hero to me. This post gives a few examples of state management in ReactJS.

Let's first consider a very simple function. The whole goal of this function is to fetch a given user. Therefore, we can add a parameter to the function to receive the username as an argument and pass a username in when the function is invoked. For example,

function fetchUser(username) {
    // ajax call
}

This same intuition can be applied to React components. Suppose we have a simple React component that displays a user in the user interface. Now the problem is: how do we communicate which user the component should display? This is where we can pass the username to our component through a custom attribute. The values we pass into components through custom attributes can be accessed inside the component via its "props" object. For example,

class User extends Component {
    render() {
        return (
            <p>Username: {this.props.username}</p>
        );
    }
}

<User username="Isuru" />

Due to the nature of React's component model, it is possible to delegate/encapsulate state management to individual components, which allows us to build large applications out of a bunch of small applications/components. Adding a new property called "state", whose value is an object, is enough to add state to a component. This object represents the entire state of the component, and each key in it represents a distinct piece of that state. Finally, this state can be accessed via the state object, much like the way we accessed props.

class User extends Component {
    state = {
        username: 'Isuru'
    };

    render() {
        return (
            <p>Username: {this.state.username}</p>
        );
    }
}

<User />

This allows us to separate how the application looks from the application's state; ultimately the user interface becomes a function of the application's state. React takes responsibility for changing the state of a component: when something needs to change, we call the "setState()" method, since updating the state of a component directly is discouraged.

Alright then, how about forms? Usually the state of a form lives in the DOM. If React manages the state of a component inside the component itself, how do we handle forms in React? This is where controlled components come in. Controlled components are components that render a form, but the state of that form lives inside the component.

class User extends Component {
    state = {
        username: ''
    };

    handleChange = (event) => {
        this.setState({username: event.target.value});
    };

    render() {
        return (
            <input type="text" value={this.state.username}
                onChange={this.handleChange} />
        );
    }
}

<User />


In the above example, we bind the value of our input field to state, and with that the state of our form is controlled by React. A few benefits we get from controlled components are,

  1. instant input validation,
  2. conditionally enabling/disabling buttons,
  3. enforcing input formats.

All of these benefits relate to updating the user interface based on user input. This is at the heart of not only controlled components, but React in general: if the state of the application changes, the user interface changes based on that state.

Rock on!



Thinking Behind The Boring Company


Elon Musk is one of the great thinkers of our time. He has changed how we deal with and think about humankind's problems, with a proven track record ranging from fintech to space travel. The Boring Company is among his many inspiring works, and this article summarizes, from my point of view, how he has abstracted the problem and suggested a solution.

The FAQ section of The Boring Company's website is informative enough to understand what it does: it builds an underground network of tunnels for travel. The solution seems obvious, but what's the problem? And why tunnels?

Traveling within cities, or entering a city from the suburbs, is a bit of a hassle, mainly due to traffic congestion. If we look at our city architecture, we have multi-storey buildings. We all arrive in cities on single-level roads and work in buildings that have multiple levels. How can we expect minimal, if any, traffic congestion when our transportation medium is expected to support 100x its capacity?

The solution should be a transport system that spans many levels. As the FAQ describes, tunnels are not affected by weather and won't fall down like flying cars. Therefore a network of multilevel tunnels can be the solution.

When we are finding a solution, or thinking about a new product idea, the inspiration we can take from this is narrowing the problem down to its root. Get to the ground truth of the problem and solve it.

Imaginative Constraints and Inspiration


I work as a software engineer in the daytime. I have to work within many constraints in each and every project. Constraints come in many forms; they can be physical, technical or imaginative. This blog post explains a few thoughts I had on imaginative difficulties I faced very recently.

In the good old days, we had user manuals for the software we used. Manuals described what each user interface element does and how users can get tasks done. Now if we consider the apps we use on a daily basis or the websites we visit, we don't have user manuals for those. The user interface elements we find in those applications are intuitive to us; they have been placed in the right places very carefully.

I'm currently doing research on user experience for a data analytics platform that has interfaces on web, desktop and mobile. I lost myself in there. How do we decide which user experience suits best? I reached out to a colleague who has a sensible amount of user experience development behind him and asked how he does his magic. His response was very simple: "You have to understand the user's needs and walk in their shoes. Decide what you would prefer to have."

I was thinking… How does an architect design buildings that anyone can walk through without any guidance? Isn't it about understanding what you expect from a certain experience and unconsciously communicating that message to others? What does a solid metal elevator convey to us? Isn't it a message about reliability and safety?

I concluded, yet thought to explore more, that looking elsewhere and bringing in experiences from outside my universe is essential in user experience design.

Rest Parameters Curry

According to Wikipedia, currying is the technique of translating the evaluation of a function that takes multiple arguments into evaluating a sequence of functions, each with a single argument. Simply put, if a function takes three arguments and is curried, it is really three functions. Each function takes one argument and returns a new function that takes the next argument, until all arguments are received and the final result is returned.

Consider a function that takes three arguments:
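(The original snippet appears to be missing from the page; add here is a hypothetical stand-in.)

```javascript
// A plain three-argument function
function add(x, y, z) {
    return x + y + z;
}

console.log(add(1, 2, 3)); // 6
```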


The equivalent curried function:
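(Again the snippet appears to be missing; this hand-curried version of a three-argument addition is illustrative.)

```javascript
// Each function takes one argument and returns the next function
function curriedAdd(x) {
    return function (y) {
        return function (z) {
            return x + y + z;
        };
    };
}

console.log(curriedAdd(1)(2)(3)); // 6
```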


This is somewhat insane: nested functions are returned as the function takes more and more arguments. One option is to convert a normal function into a curried function, which is what this article mainly focuses on from here onward.

When converting a normal function into a curried function, we need to know how many arguments the function expects. Functions in JavaScript have a length property, which can be used here. The main intention, though, is to demonstrate how ES6 Rest Parameters can be used when creating a generic curry function.

In short, a Rest Parameter allows a function to receive a variable number of arguments; more details can be found in MDN: Rest Parameters.
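The curry implementation the description below walks through appears to be missing from the page; here is a sketch consistent with that description (add3 is just an illustrative function to curry):

```javascript
function curry(fn, ...args) {
    // fn.length is the number of declared parameters fn expects
    const argLength = fn.length;

    return (...args2) => {
        const allArgs = args.concat(args2);
        if (allArgs.length >= argLength) {
            // all arguments received: apply the original function
            return fn(...allArgs);
        }
        // not enough yet: re-curry, carrying the arguments collected so far
        return curry(fn, ...allArgs);
    };
}

const add3 = (a, b, c) => a + b + c;
console.log(curry(add3)(1)(2)(3)); // 6
console.log(curry(add3)(1, 2)(3)); // 6
```

Note that fn.length only counts declared parameters, so this approach will not work on functions that themselves use rest parameters or default values.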


The first time this function is called, it expects only one argument: the function to curry. The args parameter will be an empty array on first invocation. We save the number of arguments the function expects (argLength) to a local variable, then return a function defined inside the curry function. When this inner function is invoked, it concatenates the new args2 array with the old args array and checks whether all the arguments have been received. If so, it applies the original function and returns the result. If not, it recursively calls the curry function, passing along all the arguments so far, which puts us back in the original position: a curried function awaiting more arguments.

With concepts like currying, it is much easier to abstract logic and write clear, concise functions.


  1. Currying on Wikipedia
  2. Rest parameters on MDN

Rock on!

On 280 Characters

Twitter recently announced that it has increased the 140-character limit to 280 for some users. This news left Tweeps with mixed feelings, at least in the community I interact with. Indeed, this was launched as a test, but it is sensible to assume it will be the future, considering the problems Twitter deals with as a growing platform.

What is the role of Twitter in the crowded street of social media? Fundamentally, Twitter lets every voice flow through the platform in real time, representing a working, living, breathing view of the community at a personal level. As Chamath Palihapitiya once highlighted, culture is constantly evolving, and this is what Twitter should stand for, fight for and strive for; fancy stories or augmented animations are not part of the core equation in Twitter's case.

“fight for the culture the way it should be…not the way it was or the way its becoming”

The 140-character limit has two rationales, one practical and one conceptual. Initially Twitter was mainly used via SMS, and the website acted as an archive. SMS was the most convenient way of communicating a simple message instantly back then. SMS has a 160-character limit, and leaving 20 characters for the username gives 140 characters to express what's going on in real time. On the other hand, constraints inspire creativity. People tend to live in the moment, be more human, and live within 140 characters.

At one point Twitter decided to increase the character limit to 10,000, which is similar to its direct messaging product. Fortunately, Jack, the CEO of Twitter, scrapped the project, and everyone should be grateful for this decision, which saved the fundamental ideology of Twitter. However, as a platform it has to attract new users, which Twitter is hardly good at. Doubling the character limit may open the door for new users to actively engage with the platform and be more expressive while the core principles remain the same.

Finally, praise the core principles, not how they are represented.

Rock on!