Speed up your code reviews using ESLint and Prettier

Code reviews are very important if you want to build great software. They’re an effective way of sharing knowledge of the codebase with other members of the team, they’re a good opportunity to learn (reviewers might suggest better ways of solving a problem than your usual approach), they help in identifying logical bugs or gaps in the implementation, and they help in ensuring that the codebase stays readable, maintainable and consistent with your team’s coding conventions.

Code reviews are also time-consuming. Reviewers need to go through the changes looking for issues and opportunities for improvement, and the more things they’re checking for, the more time-consuming and less focused the review becomes. Authors, once the review is over, need to refactor the code as per the review comments, do additional testing, self-review the changes and so on. Rinse and repeat.

We should strive to make this process faster so we can deliver the software to our users as quickly as possible. We can lessen the time it takes for reviewers to review the code by automating the code review as much as possible and letting them focus on the non-automatable aspects. For the authors, we can give feedback on the code early so they can refactor much earlier in the development process.

Checking whether the changes follow the coding conventions, best practices and code formatting is something we can automate. These checks are also the ones that trigger the most nitpicks during a code review, generating noise in the review comments.

ESLint and Prettier are two popular tools that can help us achieve this. Prettier formats code, while ESLint enforces coding conventions and finds problematic patterns in code. ESLint also has an auto-fix mode that automatically fixes some of the rule violations. Both have plugins for all popular editors, which ensures that violations are shown to the developer right away.

But if a developer is using an editor that doesn’t have these plugins, or contributes code only sporadically and you don’t want to add friction to their workflow by asking them to install or configure the plugins, we can use git commit hooks so that the code gets automatically checked as it is committed. Git commit hooks also make sure that all the committed code adheres to the rules and that there are no broken windows due to misconfigured editors or other reasons. You can use lint-staged to easily set up git commit hooks, as sketched below.
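To give a rough idea, a lint-staged setup might look something like the following (this is only a sketch; the exact configuration format depends on the versions of husky and lint-staged you use):

// package.json (sketch)
"husky": {
  "hooks": {
    "pre-commit": "lint-staged"
  }
},
"lint-staged": {
  "*.js": [
    "eslint --fix",
    "prettier --write",
    "git add"
  ]
}

With this in place, every commit runs ESLint and Prettier on just the staged JavaScript files and blocks the commit if there are violations that can’t be auto-fixed.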

If you’re setting up a new project and don’t want to spend time up front picking the rules or config, Prettier comes with good defaults and ESLint can be initialized with popular style guides.
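For a new project, the setup can be as simple as this (assuming a recent npm so that npx is available):

npm install --save-dev eslint prettier
npx eslint --init

Prettier works without any configuration, and eslint --init walks you through extending a popular shareable style guide such as Airbnb or Standard.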

If you want to introduce this to an existing project, you can run all the files through Prettier and use ESLint auto-fix to change the existing code as per the new rules. For the violations that auto-fix can’t handle, you can disable the remaining non-auto-fixable rules initially, fix them manually in batches, and re-enable each rule as it’s fixed. If it’s a very large project, you might want to split your codebase into different sections, use directory-specific ESLint configs, and make changes one section at a time.
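ESLint merges configuration from config files up the directory tree, so a section-specific config only needs to list the overrides. As a sketch, assuming a hypothetical legacy/ directory that hasn’t been cleaned up yet:

// legacy/.eslintrc.json (sketch)
{
  "rules": {
    "complexity": "off",
    "max-depth": "off"
  }
}

Delete the override file (or re-enable the rules one by one) as that section gets fixed.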

Disabling Bunyan in tests

Bunyan (as of v1.8.10) doesn’t provide an explicit API to mute logging. Muting is especially useful when you’re running unit tests and don’t want logs and test reports to be mixed.

One workaround suggested by the author is to set the log level to a value above FATAL.

Set an environment variable when running tests:

// package.json
...
"scripts": {
  "test": "NODE_ENV=test mocha"
}
...

Check for it in the logger module and if it matches, set the log level above FATAL.

// logger.js
const bunyan = require("bunyan");

const logger = bunyan.createLogger({ name: "myapp" });

if (process.env.NODE_ENV === "test") {
  logger.level(bunyan.FATAL + 1);
}

module.exports = logger;

This disables logging when running tests.

Unit testing Express route handlers

We can unit test Express.js route handler functions using a mocking library called node-mocks-http.

Let’s say we have a simple Express app:

// index.js
const express = require("express");
const exampleRouter = require("./example-router");

const app = express();
app.use("/example", exampleRouter);
app.listen(3000);

With the route handler defined separately as:

// example-router.js
function exampleRouteHandler(req, res) {
  res.send("hello world!");
}

module.exports = exampleRouteHandler;

For unit testing, we should be able to pass various inputs and check that we get the correct outputs. Here, that means passing valid request (req) and response (res) objects as inputs and, since this function doesn’t return anything, making assertions on the response object (res).

We can do this using node-mocks-http’s createRequest and createResponse APIs.

Let’s write a simple test for this using Mocha:

// example-router.test.js
const assert = require("assert");
const httpMocks = require("node-mocks-http");
const exampleRouteHandler = require("./example-router");

describe("Example Router", () => {
  it("should return 'hello world' for GET /example", () => {
    const mockRequest = httpMocks.createRequest({
      method: "GET",
      url: "/example"
    });
    const mockResponse = httpMocks.createResponse();

    exampleRouteHandler(mockRequest, mockResponse);

    const actualResponseBody = mockResponse._getData();
    const expectedResponseBody = "hello world!";
    // assert.equal compares the two values; a bare assert() would only
    // check that its first argument is truthy
    assert.equal(actualResponseBody, expectedResponseBody);
  });
});

Check out the node-mocks-http repo for more info.

Working with the Fetch API

Fetch is a much-needed improvement over XHR: it simplifies making network requests by exposing an easy-to-use API and supporting promises out of the box.

fetch(url).then(function (response) {
  return response.json();
});

Wrapping fetch

While the above example is good enough for most cases, sometimes you might need to send the same headers in all requests or handle all responses in the same way. Doing so in each and every fetch call would duplicate a lot of code. This can be solved by creating a wrapper around the fetch method and using that wrapper throughout the application instead of fetch.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  options = options || {};
  options.headers = options.headers || {}; // make sure headers exists before adding to it
  options.headers['Custom-Header'] = 'Your custom header value here';
  return fetch(url, options);
}

// books.js
fetchWrapper('/api/books')
  .then(function (response) {
    console.log(response);
  });

Rejecting on HTTP errors

Coming from jQuery.ajax, one of the main gotchas about fetch is that it does not reject on HTTP errors; it only rejects on network failures. While this makes sense (any response, whether 2xx or 4xx, is still a response and thereby a ‘success’), you might want fetch to reject on HTTP errors so that the catch part of your promise chain can handle them appropriately.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  return fetch(url, options)
    .then(handleResponse);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    throw new Error(response.statusText);
  }
}

// books.js
fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error);
  });

Handling JSON responses

If all the responses are guaranteed to be JSON, then we can parse them before passing them down the promise chain. Since fetch throws a TypeError on network errors, we can handle it in handleNetworkError and throw a JSON object similar to the ones we get from our backend.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  return fetch(url, options)
    .then(handleResponse, handleNetworkError);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    return response.json().then(function (error) {
      throw error;
    });
  }
}

function handleNetworkError(error) {
  throw {
    msg: error.message
  };
}

// books.js
fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error.msg);
  });

Timeouts

There’s no support for timeouts in the Fetch API, but this can be achieved by creating a promise that rejects on timeout and racing it against the fetch using the Promise.race API.

function timeout(value) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      reject(new Error('Sorry, request timed out.'));
    }, value);
  });
}

Promise.race([timeout(1000), fetch(url)])
  .then(function (response) {
    console.log(response);
  })
  .catch(function (error) {
    console.error(error);
  });

But keep in mind that since fetch has no support for aborting the request, the above example only rejects the promise; the request itself is still alive. This behavior is different from XHR-based libraries, which abort the request when it takes longer than the timeout value.

Browser support

Fetch is supported in most browsers, and there’s a polyfill for those that don’t support it.
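For example, with the whatwg-fetch polyfill (GitHub’s fetch polyfill), it’s enough to load it once before any code that calls fetch; a sketch assuming a CommonJS bundler:

npm install --save whatwg-fetch

// index.js
require('whatwg-fetch'); // defines window.fetch only if the browser lacks it

fetch('/api/books').then(function (response) {
  return response.json();
});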

Limitations

Fetch is being developed iteratively, and there are certain things it does not yet support, like monitoring progress or aborting a request. If these are absolutely necessary for your application, then you should use XHR or one of its abstractions like jQuery.ajax, axios etc. instead of fetch.

Closing thoughts

Though it seems limited compared to XHR, I think the current feature set is good enough for most cases. The simple API makes it beginner-friendly, and (future) native support means one less dependency to load.

Using Optimizely with React

Optimizely is an A/B testing tool used to test different variations of the same page/component to see which converts better.

Let’s say you have an ecommerce website with a product grid. The products in the grid currently show only minimal information - name, picture, price and a Buy button. Let’s say you have a hypothesis that adding ratings and other details such as weight/size would make more people click on the Buy button. But you’re concerned that adding too much information would clutter the UI and drive away customers. One way of solving this is to run an A/B test between the existing UI (called the “Control” in the A/B testing world) and the new UI with additional details (called the “Variation”) and see which has more people clicking on the Buy button (“Conversion”). The traffic to the website is split into two groups - one group always sees the Control and the other always sees the Variation.

In Optimizely, the above is called an “Experiment”, and both “Control” and “Variation” are the experiment’s “Variations”. Once you create an experiment and its variations, you’ll be shown a visual editor where you can customise the appearance of each variation by modifying/rearranging the elements in the page. The visual editor then translates those customisations into jQuery code called the Variation Code. Depending on which variation a user is grouped into, the Optimizely library loads the appropriate Variation Code and thus displays a different UI.

This workflow works well for static websites, but if the website is dynamic and uses React, then letting the Variation Code do arbitrary DOM manipulations doesn’t look like a good idea.

One solution is to create different components for each variation and use a container component to render the correct one. But before that, we need to know which variation the user belongs to and whether the experiment is running (active). Fortunately, Optimizely exposes a Data Object that can be used to get this information. We can use window.optimizely.data.state.activeExperiments to get the list of all running experiments and window.optimizely.data.state.variationIdsMap[<experimentId>][0] to get the variation the user belongs to.

// abtest-container.js
var AbTestContainer = React.createClass({
  propTypes: {
    experimentId: React.PropTypes.string.isRequired,
    defaultVariationId: React.PropTypes.string.isRequired,
    variations: React.PropTypes.arrayOf(React.PropTypes.shape({
      variationId: React.PropTypes.string.isRequired,
      component: React.PropTypes.element.isRequired
    }))
  },

  getVariation: function () {
    // Make sure you perform appropriate guard checks before using this in production!
    var activeExperiments = window.optimizely.data.state.activeExperiments;
    var isExperimentActive = _.contains(activeExperiments, this.props.experimentId);
    var variationId;

    if (isExperimentActive) {
      variationId = window.optimizely.data.state.variationIdsMap[this.props.experimentId][0];
    } else {
      variationId = this.props.defaultVariationId;
    }

    return _.findWhere(this.props.variations, { variationId: variationId });
  },

  render: function () {
    // render the matched variation's component, not the wrapper object
    return this.getVariation().component;
  }
});

And this can be used as follows:

// products-page.js
...
render: function () {
  var variations = [
    { variationId: '111', component: <ProductGrid/> },
    { variationId: '222', component: <ProductGridWithAdditionalDetails/> }
  ];

  return (
    <AbTestContainer
      experimentId='000'
      defaultVariationId='111'
      variations={variations}
    />
  );
}
...

The IDs for the experiment and its variations can be found in the visual editor page under “Options -> Diagnostic Report”.

Guidelines to choose a JavaScript library

How important is this?

Picking the right JavaScript library is very important, especially at the beginning of a project. If we’re not careful with our early decisions, we’ll end up spending a lot of time later cleaning up the mess. The more tightly coupled the codebase and the dependency, the more careful we must be in selecting the right one. Even more so for frameworks, as our code practically lives inside them. Here are some of the things I look for in an external dependency:

1) How invested are the core contributors?

  • Most open-source projects have a few core contributors and sometimes a company behind them.
  • Finding out how they’re using the project is a good indicator of their commitment. Are they using it in production and on revenue-generating parts of the business? Example: Facebook uses React for News Feed, Instagram etc.
  • This does not always apply, as not all open-source projects have a commercial entity backing them.

2) How widely is it used?

  • If it is widely used by others, then you would have access to a lot of tutorials, discussions about best practices, how-tos, StackOverflow answers etc.
  • Edge cases and bugs would have been detected early and bugfixes made.
  • Widely used libraries/frameworks would also help in hiring as there would be a good number of developers with that experience. Also, a good number of developers would be interested in joining your company to gain experience.
  • This can be found out by keeping your ear to the ground for what’s going on in the JavaScript community.

3) How are breaking changes introduced?

  • The Web moves very fast and breaking changes are inevitable. Breaking changes might be done for deprecating bad ways of doing things, remedying poor architectural decisions made in the past, performance optimizations, availability of new browser features etc.
  • How they’re introduced makes a lot of difference - is this done gradually and incrementally? Is this communicated in advance by showing deprecation warnings?
  • Are any migration tools provided? Example: jQuery provides jQuery migrate, React provides React codemod scripts for migrating to newer versions etc.
  • This ties into the “How invested are the core contributors?” question. If they’re using it for important projects then they would be very careful and systematic with breaking changes.

4) How is the documentation?

  • Good documentation makes the library easy to use and helps avoid wasting time.
  • This depends on the project - libraries with simple and intuitive APIs can get away with minimal documentation whereas complicated libraries with hard to understand concepts need extensive documentation, examples, and tutorials.
  • Depending on how tightly coupled the library and the codebase are going to be, go through the library’s documentation to get a feel for it and try to build a quick proof-of-concept to see if all the current and future requirements could be easily implemented using it.

5) How actively is it being developed?

  • Actively developed projects ensure that any new bugs are quickly fixed, new functionality is added and PRs are merged (all depending on priority and validity).
  • This also depends on the project as some projects are just “done” with nothing more to be added or fixed.

Conclusion

I understand that this is a very broad topic and there are many more factors that could be considered. I formed these guidelines based on my personal experience and on learning from others.

Writing maintainable tests using the data driven approach

One of the main considerations while writing automated tests is making them maintainable. Tests shouldn’t get in the way of refactoring the source. This can be achieved by keeping the tests DRY and abstracting out repeated logic.

For certain kinds of logic, like form validation or URL parsing, tests become unmaintainable because they repeat the same steps again and again with different inputs and outputs.

Consider a simple example of validating a username:

// validation.js
function isValidUserName(userName) {
  // each check implemented with simple string/regex tests
  var isBlank = userName.trim() === '';
  var isTooShort = userName.length < 2;
  var hasSpecialCharacters = /[^a-zA-Z0-9]/.test(userName);
  var hasNumbers = /[0-9]/.test(userName);

  return !(isBlank || isTooShort || hasSpecialCharacters || hasNumbers);
}

The tests for the above function would be something like this:

// validation.test.js
it('should reject blank input', function () {
  var isValid = isValidUserName('');
  expect(isValid).to.be.false;
});

it('should reject input with less than two characters', function () {
  var isValid = isValidUserName('a');
  expect(isValid).to.be.false;
});

it('should reject input with special characters', function () {
  var isValid = isValidUserName('abc#');
  expect(isValid).to.be.false;
});

it('should reject input with numbers', function () {
  var isValid = isValidUserName('ab3c');
  expect(isValid).to.be.false;
});

These kinds of tests are tedious to maintain.

A better way is the data-driven approach, where we decouple the data from the actual testing code.

// validation.data.json
{
  "username": [
    {
      "title": "should reject blank input",
      "input": "",
      "isValid": false
    },
    {
      "title": "should reject input with less than two characters",
      "input": "a",
      "isValid": false
    },
    {
      "title": "should reject input with special characters",
      "input": "abc#",
      "isValid": false
    },
    {
      "title": "should reject input with numbers",
      "input": "ab3c",
      "isValid": false
    }
  ]
}

And modify our test file as follows:

// validation.test.js
var data = require('./validation.data.json');

// generate one test case per entry in the data file
data.username.forEach(function (test) {
  it(test.title, function () {
    var isValid = isValidUserName(test.input);
    expect(isValid).to.equal(test.isValid);
  });
});

This test can now be quickly refactored if needed. A nice bonus of this approach is that, with the test data extracted from the actual test cases, the test scenarios can be understood at a glance.