Working with the Fetch API

Fetch is a much-needed improvement over XHR: it simplifies making network requests by exposing an easy-to-use API and supporting promises out of the box.

fetch(url).then(function (response) {
  return response.json();
});

Wrapping fetch

While the above example is good enough for most cases, sometimes you might need to send the same headers in every request or handle all responses in the same way. Doing so in each and every fetch call would duplicate a lot of code. This can be solved by creating a wrapper around the fetch method and using that wrapper throughout the application instead of fetch.

// fetch-wrapper.js

function fetchWrapper(url, options) {
  options = options || {};
  options.headers = options.headers || {};
  options.headers['Custom-Header'] = 'Your custom header value here';

  return fetch(url, options);
}

// books.js

fetchWrapper('/api/books')
  .then(function (response) {
    console.log(response);
  });

Rejecting on HTTP errors

Coming from jQuery.ajax, one of the main gotchas about fetch is that it does not reject on HTTP errors - it only rejects on network failures. While this makes sense because any response (whether 2xx or 4xx) is still a response and thereby a ‘success’, you might want fetch to reject on HTTP errors so that the catch part of your promise chain can handle them appropriately.

// fetch-wrapper.js

function fetchWrapper(url, options) {
  return fetch(url, options)
    .then(handleResponse);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    throw new Error(response.statusText);
  }
}

// books.js

fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error);
  });

Handling JSON responses

If all the responses are guaranteed to be JSON, then we can parse them before passing them down the promise chain. Since fetch rejects with a TypeError on network errors, we can handle it in handleNetworkError and throw an object similar to the ones our backend returns.

// fetch-wrapper.js

function fetchWrapper(url, options) {
  return fetch(url, options)
    .then(handleResponse, handleNetworkError);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    return response.json().then(function (error) {
      throw error;
    });
  }
}

function handleNetworkError(error) {
  throw {
    msg: error.message
  };
}

// books.js

fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error.msg);
  });

Timeouts

There’s no support for timeouts in the Fetch API, though this can be achieved by creating a promise that rejects on timeout and racing it against the fetch promise with Promise.race.

function timeout(value) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      reject(new Error('Sorry, request timed out.'));
    }, value);
  });
}

Promise.race([timeout(1000), fetch(url)])
  .then(function (response) {
    console.log(response);
  })
  .catch(function (error) {
    console.error(error);
  });

But keep in mind that since fetch has no built-in support for aborting a request, the above example only rejects the promise - the request itself is still alive. This behavior is different from XHR-based libraries, which abort the request when it takes longer than the timeout value.
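In environments that do support AbortController (a later addition to the platform, available in modern browsers and Node 18+), the request itself can be cancelled when the timer fires. A minimal sketch, with the function name being my own:

```javascript
// Sketch, assuming AbortController and fetch are available as globals.
function fetchWithTimeout(url, ms) {
  var controller = new AbortController();
  var timer = setTimeout(function () {
    controller.abort(); // actually cancels the in-flight request
  }, ms);

  return fetch(url, { signal: controller.signal }).then(
    function (response) {
      clearTimeout(timer);
      return response;
    },
    function (error) {
      clearTimeout(timer);
      throw error; // an AbortError when the timeout fired first
    }
  );
}
```

Unlike the Promise.race version, the rejected promise here corresponds to a request that has genuinely been torn down, matching what XHR-based libraries do.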

Browser support

Fetch is supported in most browsers, and there is a polyfill for those that don’t support it.

Limitations

Fetch is being developed iteratively and there are certain things that it does not support, like monitoring progress, aborting a request, etc. If these are absolutely necessary for your application, then you should use XHR or its abstractions like jQuery.ajax, axios, etc. instead of fetch.

Closing thoughts

Though it seems limited compared to XHR, I think the current feature set is good enough for most cases. The simple API makes it beginner friendly, and (future) native support means one less dependency to load.

Using Optimizely with React

Optimizely is an A/B testing tool used to test different variations of the same page/component to see which converts better.

Let’s say you have an ecommerce website with a product grid. The products in the grid currently contain only minimal information - name, picture, price and a Buy button. Let’s say you have a hypothesis that adding ratings and other details such as weight/size would make more people click on the Buy button. But you’re concerned that adding too much information would clutter the UI and drive away customers. One way of solving this would be to do an A/B test between the existing UI (called the “Control” in the A/B testing world) and the new UI with additional details (called the “Variation”) and see which has more people clicking on the Buy button (“Conversion”). The traffic to the website is split into two - one group would always see the Control and the other would always see the Variation.

In Optimizely, the above is called an “Experiment” and both “Control” and “Variation” are the experiment’s “Variations”. Once you create an experiment and its variations, you’ll be shown a visual editor where you can customise the appearance of each variation by modifying/rearranging the elements in the page. The visual editor then translates those customisations into jQuery code called the Variation Code. Depending on which variation a user is grouped into, the Optimizely library loads the appropriate Variation Code and thus displays a different UI.

This workflow works well for static websites, but if the website is dynamic and uses React, then letting the Variation Code do arbitrary DOM manipulations doesn’t look like a good idea.

One solution is to create a different component for each variation and use a container component to render the correct one. But before that, we need to know which variation the user belongs to and whether the experiment is running (active). Fortunately, Optimizely exposes a Data Object which can be used to get this information. We can use window.optimizely.data.state.activeExperiments to get the list of all running experiments and window.optimizely.data.state.variationIdsMap[<experimentId>][0] to get the variation the user belongs to.

// abtest-container.js

var AbTestContainer = React.createClass({

  propTypes: {
    experimentId: React.PropTypes.string.isRequired,
    defaultVariationId: React.PropTypes.string.isRequired,
    variations: React.PropTypes.arrayOf(React.PropTypes.shape({
      variationId: React.PropTypes.string.isRequired,
      component: React.PropTypes.element.isRequired
    }))
  },

  getVariation: function () {
    // Make sure you perform appropriate guard checks before using this in production!
    var activeExperiments = window.optimizely.data.state.activeExperiments;
    var isExperimentActive = _.contains(activeExperiments, this.props.experimentId);
    var variationId;

    if (isExperimentActive) {
      variationId = window.optimizely.data.state.variationIdsMap[this.props.experimentId][0];
    } else {
      variationId = this.props.defaultVariationId;
    }

    return _.findWhere(this.props.variations, { variationId: variationId });
  },

  render: function () {
    var variation = this.getVariation();
    return variation ? variation.component : null;
  }

});

And this can be used as follows

// products-page.js

...

render: function () {
  var variations = [
    { variationId: '111', component: <ProductGrid/> },
    { variationId: '222', component: <ProductGridWithAdditionalDetails/> }
  ];

  return (
    <AbTestContainer
      experimentId='000'
      defaultVariationId='111'
      variations={variations}
    />
  );
}

...

The IDs for experiment and variations would be present in the visual editor page under “Options -> Diagnostic Report”.
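The “appropriate guard checks” mentioned in the getVariation comment can be sketched like this (the helper name is my own); it falls back to an empty state whenever the Optimizely snippet is blocked or hasn’t loaded yet:

```javascript
// Hypothetical helper: read Optimizely's state safely so getVariation
// doesn't throw when window.optimizely isn't fully initialised.
function getOptimizelyState() {
  var optimizely = window.optimizely;
  if (optimizely && optimizely.data && optimizely.data.state) {
    return optimizely.data.state;
  }
  // Fall back to "no experiments running" so the default variation is used.
  return { activeExperiments: [], variationIdsMap: {} };
}
```

getVariation can then read activeExperiments and variationIdsMap off this object instead of reaching into window.optimizely directly.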

Guidelines to choose a JavaScript library

How important is this?

Picking the right JavaScript library is very important, especially at the beginning of a project. If we’re not careful with our decisions early on, we will end up spending a lot of time later cleaning up the mess. The more tightly coupled the codebase and the dependency are, the more careful we must be in selecting the right one. Even more so for frameworks - as our code practically lives inside them. Here are some of the things I look for in an external dependency:

1) How invested are the core contributors?

  • Every opensource project has a couple of core contributors and sometimes a company behind it.
  • Finding out how they’re using the project is a good indicator of their commitment. Are they using it in production and on revenue-generating parts of the business? Example: Facebook uses React for Newsfeed, Instagram, etc.
  • This does not always apply as not all opensource projects have a commercial entity backing them.

2) How widely is it used?

  • If it is widely used by others, then you would have access to a lot of tutorials, discussions about best practices, how-tos, StackOverflow answers etc.
  • Edge cases and bugs would have been detected early and bugfixes made.
  • Widely used libraries/frameworks would also help in hiring as there would be a good number of developers with that experience. Also, a good number of developers would be interested in joining your company to gain experience.
  • This can be found out by keeping your ear to the ground for what’s going on in the JavaScript community.

3) How are breaking changes introduced?

  • The Web moves very fast and breaking changes are inevitable. Breaking changes might be done for deprecating bad ways of doing things, remedying poor architectural decisions made in the past, performance optimizations, availability of new browser features etc.
  • How they’re introduced makes a lot of difference - is this done gradually and incrementally? Is this communicated in advance by showing deprecation warnings?
  • Are any migration tools provided? Example: jQuery provides jQuery migrate, React provides React codemod scripts for migrating to newer versions etc.
  • This ties into the “How invested are the core contributors?” question. If they’re using it for important projects then they would be very careful and systematic with breaking changes.

4) How is the documentation?

  • Good documentation makes the library easy to use and helps avoid wasting time.
  • This depends on the project - libraries with simple and intuitive APIs can get away with minimal documentation whereas complicated libraries with hard to understand concepts need extensive documentation, examples, and tutorials.
  • Depending on how tightly coupled the library and the codebase are going to be, go through the library’s documentation to get a feel for it and try to build a quick proof-of-concept to see if all the current and future requirements can be easily implemented with it.

5) How actively is it being developed?

  • Actively developed projects ensure that any new bugs are quickly fixed, new functionality added and PRs merged (all depending on priority and validity).
  • This also depends on the project as some projects are just “done” with nothing more to be added or fixed.

Conclusion

I understand that this is a very broad topic and there might be a lot more factors that need to be considered. I formed these guidelines based on my personal experience and learning from others.

Writing maintainable tests using the data driven approach

One of the main considerations while writing automated tests is to make them maintainable. Tests shouldn’t get in the way of refactoring the source. This can be achieved by making the tests DRY and abstracting out repeated logic.

Certain logic, like form validation, URL parsing, etc., makes tests unmaintainable by repeating the same tests again and again with different inputs and outputs.

Consider a simple example of validating username:

// validation.js

function isValidUserName(userName) {
  var isBlank = userName.trim() === '';
  var isLessThanTwoCharacters = userName.length < 2;
  var hasSpecialCharacters = /[^a-zA-Z0-9]/.test(userName);
  var hasNumbers = /[0-9]/.test(userName);

  if (isBlank ||
      isLessThanTwoCharacters ||
      hasSpecialCharacters ||
      hasNumbers) {
    return false;
  } else {
    return true;
  }
}

The tests for the above function would be something like this:

// validation.test.js

it('should reject blank input', function () {
  var isValid = isValidUserName('');
  expect(isValid).to.be.false;
});

it('should reject input with less than two characters', function () {
  var isValid = isValidUserName('a');
  expect(isValid).to.be.false;
});

it('should reject input with special characters', function () {
  var isValid = isValidUserName('abc#');
  expect(isValid).to.be.false;
});

it('should reject input with numbers', function () {
  var isValid = isValidUserName('ab3c');
  expect(isValid).to.be.false;
});

These kinds of tests are tedious to maintain for certain kinds of logic.

A better way would be to use the data driven approach. In this approach, we decouple the data and the actual testing code.

// validation.data.json

{
  "username": [
    {
      "title": "should reject blank input",
      "input": "",
      "isValid": false
    },
    {
      "title": "should reject input with less than two characters",
      "input": "a",
      "isValid": false
    },
    {
      "title": "should reject input with special characters",
      "input": "abc#",
      "isValid": false
    },
    {
      "title": "should reject input with numbers",
      "input": "ab3c",
      "isValid": false
    }
  ]
}

And modify our test file as follows:

// validation.test.js

var data = require('./validation.data.json');

data.username.forEach(function (test) {
  it(test.title, function () {
    var isValid = isValidUserName(test.input);
    expect(isValid).to.equal(test.isValid);
  });
});

This test can now be quickly refactored if needed. A nice bonus of this approach is that by extracting the test data from the actual test cases, the test scenarios can be understood at a glance.
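For instance, covering a passing scenario (a hypothetical valid username here) is just one more entry in validation.data.json - no test code needs to change:

```json
{
  "title": "should accept a simple alphabetic username",
  "input": "alice",
  "isValid": true
}
```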