Notes - How React Native Helps Companies Build Better Mobile Apps

These are my notes on the React Native panel at F8 2018, Facebook’s developer conference. Watch the original video here.

Where is it used

Code reuse between multiple platforms

  • Facebook: ~93%
  • Skype: 85-90%
  • TaskRabbit: ~86%
  • Postlight: 90-95%

Why was it chosen

Facebook

  • Platform-specific codebases result in
    • Developers hitting different snags
    • Each platform moving at a different speed
    • Getting out of sync
    • Doing the same things twice
    • Polish/optimization not getting done uniformly on both platforms

Skype

  • Platforms: iOS, Android, Windows, Mac, Linux - Using ReactNative or Electron+React
  • Before ReactNative
    • 7 devs and 7 product owners for each platform across multiple timezones
    • Main issue was communication between different platforms in different timezones
  • After ReactNative
    • Moved from platform specific teams to feature squads that focus on delivering features across different platforms
    • Feature squads reduce the communication challenges and improve development velocity
    • Reduces implementation discrepancies across platforms
    • Got transferable knowledge as all the teams speak the same language
    • Easy to ramp up squads for new features and ramp down squads for other features
    • Ramp up time is almost zero as there’s a single language used
    • Lost some good developers along the way (a downside of the transition)
    • Focus on hiring generalists
    • React - Same UI description language and layout semantics for all platforms
    • React - Everything is encapsulated in a component as opposed to code scattered across different layers
  • Uses ReactXP

TaskRabbit

  • Before ReactNative
    • 3 devs on each mobile platform
  • After ReactNative
    • A 2-person team moving faster than the previous 6-person team
    • Business logic lives in a single place and bugs get fixed for both platforms at the same time
    • Helps in hiring

Postlight

  • Postlight has React (web) experience
  • 1.5 devs; the app was completed before the deadline and under budget

CondeNast

  • CondeNast is a JS shop, so using ReactNative to leverage that experience
  • Why replace Swift with JS?
    • Declarative layouts with JSX
    • Writing layouts in Swift is hard
    • Iterate very rapidly
    • Lower the barrier to create layouts
    • Frees developers from working on “pixel pushing”
    • Gives them time to focus on harder and interesting things

Lessons learned

Postlight

  • Was able to do performance tuning for low-end devices targeting the Indian market
  • ReactNative is not prescriptive and makes it possible to drop out of ReactNative and use native modules when needed
  • Ran into performance issues in places where you wouldn’t expect them with native code (long scrolling lists, etc.)

CondeNast

  • Being an early adopter
    • Going through multiple ReactNative version bumps is tough
    • Better suited for apps that are under active development as batching several updates at once is painful

Packaging Node.js code into cross platform executables

If you’re writing command line tools in Node.js, it can be hard to distribute them since users need to install Node.js on their machines before they can use your tool. It would be a lot easier for users if we could package our app into a single executable file that they can download and run without installing anything extra.

We can use pkg to compile our code into a single executable file for multiple target platforms (Windows, Linux, Mac, etc.).

Let’s start with a simple example:

// index.js
console.log("hello world");

This file prints “hello world” and exits. We can package this by running:

$ npx pkg index.js

This by default builds executables for three platforms - Windows, Linux and Mac:

$ ls -1
index-linux
index-macos
index-win.exe
index.js

The target platforms can be customized by using the --targets flag.
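For example, to build only 64-bit Linux and Windows binaries, something like this should work (the exact target names depend on the pkg and Node.js versions you’re using, so treat these as illustrative):

$ npx pkg index.js --targets node14-linux-x64,node14-win-x64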

I was curious how much space these take up:

$ ls -lh
total 183496
-rwxr-xr-x 1 sheshbabu staff 33M Mar 31 01:40 index-linux
-rwxr-xr-x 1 sheshbabu staff 34M Mar 31 01:40 index-macos
-rw-r--r-- 1 sheshbabu staff 22M Mar 31 01:40 index-win.exe
-rw-r--r-- 1 sheshbabu staff 28B Mar 31 00:46 index.js

22-34MB feels like a bit too much for something that just prints “hello world”. Looking around the internet, it seems we can use tools like upx to reduce the file size.
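I haven’t tried this myself, but assuming upx is installed, compressing a binary should be a one-liner (how much it saves, and how well it plays with pkg-built binaries, may vary):

$ upx index-linux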

Tips for using ESLint in a legacy codebase

ESLint is a fantastic tool that helps in detecting problematic patterns in a codebase and enforcing coding conventions. It’s a must-have tool for any team trying to maintain a high-quality JavaScript codebase.

However, when you’re introducing ESLint to a legacy codebase for the first time, it’s hard not to feel overwhelmed by the bajillion errors it throws.

$ eslint .
✖ 20983 problems (20983 errors, 0 warnings)
19032 errors, 0 warnings potentially fixable with the `--fix` option.

It’s understandable that the old code was written without the ESLint rules in mind, but it becomes a problem when errors in newly written code get drowned in all that noise. In this blog post, I’ll go through some techniques that can help you significantly reduce the number of errors you see.

Autofix

ESLint has a very useful utility that automatically fixes most of the errors. What can and cannot be fixed depends on the rules themselves, and how much this reduces the error count depends on the codebase and the config used. As you can see in the shell snippet above, out of 20983 errors, 19032 can be automatically fixed! Let’s do that:

$ eslint --fix .
✖ 1700 problems (1700 errors, 0 warnings)

1700 errors sounds a lot more manageable than 20983!

Specifying environments and globals

Each JS environment (browser, Node.js, etc.) has its own set of host/global objects (window, process, setTimeout, etc.) in addition to native JS objects (Date, parseInt, etc.). ESLint doesn’t assume an environment, so you might see errors like 'window' is not defined or 'setTimeout' is not defined. By specifying environments, these errors can be fixed.

Until a few years ago, the most common way of adding dependencies to a web codebase was through <script> tags, with libraries exposing their apis via global variables like $ for jQuery and _ for Underscore. You might see errors like '$' is not defined or 'moment' is not defined. These errors can be fixed by specifying globals, as sketched below.
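Both environments and globals go in your ESLint config file. Here’s a minimal sketch; which environments and globals you actually need depends on your codebase, and older ESLint versions use true/false instead of "readonly" for globals:

// .eslintrc.js
module.exports = {
  env: {
    browser: true, // window, document, setTimeout, etc.
    node: true,    // process, require, etc.
  },
  globals: {
    $: "readonly",      // jQuery loaded via a <script> tag
    moment: "readonly",
  },
};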

Disabling rules

This should be the last resort and before we start disabling rules left and right, let’s go through the different levels in which we can disable rules:

Line level

This disables rules for a line

alert('foo'); // eslint-disable-line no-alert, quotes, semi
// eslint-disable-next-line no-alert, quotes, semi
alert('foo');

Block level

This disables rules for multiple lines

/* eslint-disable no-alert, no-console */
alert('foo');
console.log('bar');
/* eslint-enable no-alert, no-console */

File level

This disables rules for the whole file

/* eslint-disable no-alert */
alert('foo');

Directory level

This disables rules for a directory. This is done by creating a new config file inside that directory. There can be multiple ESLint config files inside a project. The closest config file would override the config files defined in outer directories.

This feature can be useful in situations where you need to disable some rules only in a particular directory but still need those rules enabled for the rest of the codebase. For example, if you have the no-magic-numbers rule enabled and see a lot of errors from the tests/ directory because you’re using magic numbers in assertions, you can disable just that rule in a config file created inside the tests/ directory, as shown below.
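The tests/ config could be as small as this (a sketch, assuming the .eslintrc.js format):

// tests/.eslintrc.js
module.exports = {
  rules: {
    // Assertions are full of literal values, so this rule is mostly noise here
    "no-magic-numbers": "off",
  },
};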

Learn more about this feature here.

Project level

You can disable rules for the whole codebase by disabling them in the root config file. This approach is usually used when you’re using a shareable config like eslint-config-airbnb or eslint-config-standard and want to add your own overrides.
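For example, a root config that extends a shareable config and overrides a couple of rules might look like this (the specific rules here are just for illustration, and it assumes eslint-config-airbnb is installed):

// .eslintrc.js
module.exports = {
  extends: "airbnb",
  rules: {
    // Project-level overrides on top of the shared config
    "no-plusplus": "off",
    "no-console": "warn",
  },
};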

Thoughts on disabling rules

It’s very easy to disable rules at the project level but doing so would mean that any new code written won’t be checked against those rules.

Disabling at the line level is best as only those specific lines are affected, but it can be very tedious as we might need to do this for thousands of lines.

Depending on the number of files that have errors, it’s better to disable at a file or directory level. This is easier than disabling at line level and the impact on new code is also smaller than disabling at a project level.

It would be nice to have a tool that goes through the codebase and adds the eslint-disable-line comment to the affected lines, but I don’t know if such a tool exists. A rough sketch of how one could work is below.
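This is a hypothetical, untested sketch built on ESLint’s JSON formatter. It naively appends a comment to each reported line, so multi-line constructs like template literals would need special handling:

// add-disable-comments.js (hypothetical script, use with caution)
const { execSync } = require("child_process");
const fs = require("fs");

let output;
try {
  output = execSync("npx eslint --format json .").toString();
} catch (e) {
  // ESLint exits with a non-zero code when there are lint errors,
  // but the JSON report is still written to stdout
  output = e.stdout.toString();
}

for (const result of JSON.parse(output)) {
  if (result.messages.length === 0) continue;
  const lines = fs.readFileSync(result.filePath, "utf8").split("\n");

  // Group the violated rule ids by line number
  const rulesByLine = {};
  for (const { line, ruleId } of result.messages) {
    if (!ruleId) continue; // e.g. parse errors have no rule id
    rulesByLine[line] = rulesByLine[line] || new Set();
    rulesByLine[line].add(ruleId);
  }

  for (const lineNumber of Object.keys(rulesByLine)) {
    const rules = [...rulesByLine[lineNumber]].join(", ");
    lines[Number(lineNumber) - 1] += " // eslint-disable-line " + rules;
  }

  fs.writeFileSync(result.filePath, lines.join("\n"));
}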

Conclusion

Using the steps above, all the errors in the codebase can be fixed or disabled. Any new errors would then come only from newly written code, and they can be fixed as soon as we see them. Depending on the bandwidth available, we can allocate some time to go through the disabled errors and fix them incrementally.

Thanks for reading! If you liked this, you might also like: Speed up your code reviews using ESLint and Prettier

Measuring response times of Express route handlers

If we want to measure the total time taken by each route in an Express server, we can add a common middleware that tracks the time elapsed between the moment a request comes in and the moment the response is sent back.

// index.js
const express = require("express");
const logResponseTime = require("./response-time-logger");

const app = express();
app.use(logResponseTime);

app.get("/", (req, res) => {
  res.send("hello");
});

app.get("/slow", (req, res) => {
  for (let i = 0; i < 1e10; i++) {}
  res.send("hello");
});

app.listen(3000);

// response-time-logger.js
function logResponseTime(req, res, next) {
  const startHrTime = process.hrtime();

  res.on("finish", () => {
    const elapsedHrTime = process.hrtime(startHrTime);
    const elapsedTimeInMs = elapsedHrTime[0] * 1000 + elapsedHrTime[1] / 1e6;
    console.log("%s : %fms", req.path, elapsedTimeInMs);
  });

  next();
}

module.exports = logResponseTime;

Calling the above two endpoints would log

/ : 1.791791ms
/slow : 18541.045675ms

The index.js file is the entrypoint to our server and is usually the place where all the common middlewares are added. The logResponseTime middleware needs to be registered at the top, before the route handlers, so that the measurement covers as much of the request handling as possible. The process.hrtime api is used to measure the elapsed time as it’s more accurate than the Date api.

Speed up your code reviews using ESLint and Prettier

Code reviews are very important if you want to build great software. They’re an effective way of sharing knowledge of the codebase with other members of the team; they’re a good opportunity to learn, as reviewers might suggest better ways of solving a problem than your usual approach; they help in identifying logical bugs or gaps in the implementation; and they help ensure that the codebase stays readable and maintainable and follows your team’s coding conventions.

Code reviews are also time-consuming. For reviewers, it means going through the changes to look for issues and opportunities for improvement. The more things they’re checking for, the more time-consuming and less focused the review becomes. For the authors, once the review is over, they need to refactor the code as per the review comments, do additional testing, do a self-review, etc. Rinse and repeat.

We should strive to make this process faster so we can deliver the software to our users as quickly as possible. We can lessen the time it takes for reviewers to review the code by automating the code review as much as possible and letting them focus on the non-automatable aspects. For the authors, we can give feedback on the code early so they can refactor much earlier in the development process.

Checking whether the changes follow the coding conventions, best practices, and code formatting is something we can automate. These checks are also the ones that trigger the most nitpicks during a code review, thereby generating the most noise in the review comments.

ESLint and Prettier are two popular tools that can help us achieve this. Prettier formats code, while ESLint enforces coding conventions and finds problematic patterns in code. ESLint also has an auto-fix mode that automatically fixes some of the rule violations. Both have plugins for all popular editors, which ensures that violations are quickly shown to the developer. But if a developer uses an editor that doesn’t have these plugins, or sporadically contributes code and you don’t want to add friction to their workflow by asking them to install or configure the plugins, you can use git commit hooks so that the code gets automatically checked as it is committed. Git commit hooks are also useful for making sure that all committed code adheres to the rules and there are no broken windows due to misconfigured editors or other reasons. You can use lint-staged to easily set up git commit hooks; a minimal setup is sketched below.
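As a sketch, a package.json wiring husky to lint-staged could look something like this (the config format differs between versions of both tools, so check their docs for the versions you’re using):

// package.json (relevant parts only)
{
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged"
    }
  },
  "lint-staged": {
    "*.js": ["eslint --fix", "prettier --write", "git add"]
  }
}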

If you’re newly setting up a project and don’t want to spend time initially to pick the rules or config, Prettier comes with good defaults and ESLint can be initialized with popular style guides.

If you want to introduce this to an existing project, you can run all the files through Prettier and use ESLint auto-fix to change the existing code as per the new rules. For the rules that are not covered by auto-fix, you can initially disable all the remaining non-auto-fixable rules, fix them manually in batches, and re-enable them as they’re fixed. If it’s a very large project, you might want to split your codebase into different sections, use directory-specific ESLint configs, and make changes one section at a time.

Disabling Bunyan in tests

Bunyan (as of v1.8.10) doesn’t provide an explicit api to mute logging. Muting is especially useful when you’re running unit tests and don’t want logs mixed into the test reports.

One workaround suggested by the author is to set the log level to a value above FATAL.

Set an environment variable when running tests:

// package.json
...
"scripts": {
  "test": "NODE_ENV=test mocha"
}
...

Then check for it in the logger module and, if it matches, set the log level above FATAL:

// logger.js
const bunyan = require("bunyan");

const logger = bunyan.createLogger({ name: "myapp" });

if (process.env.NODE_ENV === "test") {
  logger.level(bunyan.FATAL + 1);
}

module.exports = logger;

This disables logging when running tests.

Unit testing Express route handlers

We can unit test Express.js route handler functions using a mocking library called node-mocks-http.

Let’s say we have a simple Express app:

// index.js
const express = require("express");
const exampleRouter = require("./example-router");

const app = express();
app.use("/example", exampleRouter);
app.listen(3000);

With the route handler defined separately as:

// example-router.js
function exampleRouteHandler(req, res) {
  res.send("hello world!");
}

module.exports = exampleRouteHandler;

For unit testing, we should be able to pass various inputs and check whether we get the correct outputs. Here, that means passing valid request (req) and response (res) objects as inputs; since this function doesn’t return anything, we make our assertions on the response object (res).

We can do this by using node-mocks-http’s createRequest and createResponse apis.

Let’s write a simple test for this using mocha:

// example-router.test.js
const assert = require("assert");
const httpMocks = require("node-mocks-http");
const exampleRouteHandler = require("./example-router");

describe("Example Router", () => {
  it("should return 'hello world' for GET /example", () => {
    const mockRequest = httpMocks.createRequest({
      method: "GET",
      url: "/example"
    });
    const mockResponse = httpMocks.createResponse();

    exampleRouteHandler(mockRequest, mockResponse);

    const actualResponseBody = mockResponse._getData();
    const expectedResponseBody = "hello world!";
    assert.equal(actualResponseBody, expectedResponseBody);
  });
});

Check out the node-mocks-http repo for more info.

Working with Fetch api

Fetch is a much-needed improvement over XHR: it simplifies making network requests by exposing an easy-to-use api and supporting promises out of the box.

fetch(url).then(function (response) {
  return response.json();
});

Wrapping fetch

While the above example is good enough for most cases, sometimes you might need to send the same headers in all requests or handle all responses the same way. Doing so in each and every fetch call would duplicate a lot of code. This can be solved by creating a wrapper around the fetch method and using that wrapper throughout the application instead of fetch.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  options = options || {};
  options.headers = options.headers || {}; // guard against a missing headers object
  options.headers['Custom-Header'] = 'Your custom header value here';
  return fetch(url, options);
}

// books.js
fetchWrapper('/api/books')
  .then(function (response) {
    console.log(response);
  });

Rejecting on HTTP errors

Coming from jQuery.ajax, one of the main gotchas with fetch is that it does not reject on HTTP errors - it only rejects on network failures. While this makes sense because any response (whether 2xx or 4xx, etc.) is still a response and thereby a ‘success’, you might want fetch to reject on HTTP errors so that the catch part of your promise chain can handle them appropriately.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  return fetch(url, options).then(handleResponse);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    throw new Error(response.statusText);
  }
}

// books.js
fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error);
  });

Handling JSON responses

If all the responses are guaranteed to be JSON, then we can parse them before passing them down the promise chain. Since fetch rejects with a TypeError on network errors, we can handle it in handleNetworkError and throw an object shaped like the error responses we get from our backend.

// fetch-wrapper.js
function fetchWrapper(url, options) {
  return fetch(url, options).then(handleResponse, handleNetworkError);
}

function handleResponse(response) {
  if (response.ok) {
    return response.json();
  } else {
    return response.json().then(function (error) {
      throw error;
    });
  }
}

function handleNetworkError(error) {
  throw {
    msg: error.message
  };
}

// books.js
fetchWrapper('/api/books')
  .then(function (data) {
    console.log(data);
  })
  .catch(function (error) {
    console.error(error.msg);
  });

Timeouts

There’s no support for timeouts in the fetch api, though this can be achieved by creating a promise that rejects on timeout and using it with the Promise.race api.

function timeout(value) {
  return new Promise(function (resolve, reject) {
    setTimeout(function () {
      reject(new Error('Sorry, request timed out.'));
    }, value);
  });
}

Promise.race([timeout(1000), fetch(url)])
  .then(function (response) {
    console.log(response);
  })
  .catch(function (error) {
    console.error(error);
  });

But keep in mind that since fetch has no support for aborting the request, the above example only rejects the promise - the request itself is still alive. This behavior is different from XHR-based libraries, which abort the request when it takes longer than the timeout value.

Browser support

Fetch is supported in most browsers, and there’s a polyfill for those that don’t support it.

Limitations

Fetch is being developed iteratively and there are certain things it does not yet support, like monitoring progress or aborting a request. If these are absolutely necessary for your application, you should use XHR or abstractions over it like jQuery.ajax, axios, etc. instead of fetch.

Closing thoughts

Though it seems limited compared to XHR, I think the current feature set is good enough for most cases. The simple api makes it beginner-friendly, and (future) native support means one less dependency to load.

Using Optimizely with React

Optimizely is an A/B testing tool used to test different variations of the same page/component to see which converts better.

Let’s say you have an ecommerce website with a product grid. The products in the grid currently show only minimal information - name, picture, price, and a Buy button. Let’s say you have a hypothesis that adding ratings and other details such as weight/size would make more people click on the Buy button. But you’re concerned that adding too much information would clutter the UI and drive away customers. One way of testing this is to run an A/B test between the existing UI (called the “Control” in the A/B testing world) and the new UI with additional details (called the “Variation”) and see which gets more people clicking on the Buy button (“Conversion”). The traffic to the website is split into two groups - one group would always see the Control and the other would always see the Variation.

In Optimizely, the above is called an “Experiment” and both “Control” and “Variation” are the experiment’s “Variations”. Once you create an experiment and its variations, you’ll be shown a visual editor where you can customise the appearance of each variation by modifying/rearranging the elements in the page. The visual editor then translates those customisations into jQuery code called Variation Code. Depending on which variation a user is grouped into, the Optimizely library loads the appropriate Variation Code, thus displaying different UIs.

This workflow works well for static websites, but if the website is dynamic and uses React, then letting the Variation Code do arbitrary DOM manipulations doesn’t look like a good idea.

One solution is to create different components for each variation and use a container component to render the correct one. But before that, we need to know which variation the user belongs to and whether the experiment is running (active) or not. Fortunately, Optimizely exposes a Data Object which can be used to get this data. We can use window.optimizely.data.state.activeExperiments to get the list of all running experiments and window.optimizely.data.state.variationIdsMap[<experimentId>][0] to get the variation the user belongs to.

// abtest-container.js
var AbTestContainer = React.createClass({
  propTypes: {
    experimentId: React.PropTypes.string.isRequired,
    defaultVariationId: React.PropTypes.string.isRequired,
    variations: React.PropTypes.arrayOf(React.PropTypes.shape({
      variationId: React.PropTypes.string.isRequired,
      component: React.PropTypes.element.isRequired
    }))
  },
  getVariation: function () {
    // Make sure you perform appropriate guard checks before using this in production!
    var activeExperiments = window.optimizely.data.state.activeExperiments;
    var isExperimentActive = _.contains(activeExperiments, this.props.experimentId);
    var variationId;
    if (isExperimentActive) {
      variationId = window.optimizely.data.state.variationIdsMap[this.props.experimentId][0];
    } else {
      variationId = this.props.defaultVariationId;
    }
    return _.findWhere(this.props.variations, { variationId: variationId });
  },
  render: function () {
    return this.getVariation().component;
  }
});

And this can be used as follows

// products-page.js
...
render: function () {
  var variations = [
    { variationId: '111', component: <ProductGrid/> },
    { variationId: '222', component: <ProductGridWithAdditionalDetails/> }
  ];
  return (
    <AbTestContainer
      experimentId='000'
      defaultVariationId='111'
      variations={variations}
    />
  );
}
...

The IDs for experiment and variations would be present in the visual editor page under “Options -> Diagnostic Report”.

Guidelines to choose a JavaScript library

How important is this?

Picking the right JavaScript library is very important, especially at the beginning of a project. If we’re not careful with our early decisions, we’ll end up spending a lot of time later cleaning up the mess. The more tightly coupled the codebase and the dependency, the more careful we must be in selecting the right one. Even more so for frameworks - as our code practically lives inside them. Here are some of the things I look for in an external dependency:

1) How invested are the core contributors?

  • Every opensource project has a couple of core contributors and sometimes a company behind it.
  • Finding out how they’re using the project would be a good indicator of their commitment. Are they using it in production and on revenue generating parts of the business? Example: Facebook is using React for Newsfeed, Instagram etc
  • This does not always apply as not all opensource projects have a commercial entity backing them.

2) How widely is it used?

  • If it is widely used by others, then you would have access to a lot of tutorials, discussions about best practices, how-tos, StackOverflow answers etc.
  • Edge cases and bugs would have been detected early and bugfixes made.
  • Widely used libraries/frameworks would also help in hiring as there would be a good number of developers with that experience. Also, a good number of developers would be interested in joining your company to gain experience.
  • This can be found out by keeping your ear to the ground for what’s going on in the JavaScript community.

3) How are breaking changes introduced?

  • The Web moves very fast and breaking changes are inevitable. Breaking changes might be done for deprecating bad ways of doing things, remedying poor architectural decisions made in the past, performance optimizations, availability of new browser features etc.
  • How they’re introduced makes a lot of difference - is this done gradually and incrementally? Is this communicated in advance by showing deprecation warnings?
  • Are any migration tools provided? Example: jQuery provides jQuery migrate, React provides React codemod scripts for migrating to newer versions etc.
  • This ties into the “How invested are the core contributors?” question. If they’re using it for important projects then they would be very careful and systematic with breaking changes.

4) How is the documentation?

  • Good documentation makes the library easy to use and helps avoid wasting time.
  • This depends on the project - libraries with simple and intuitive APIs can get away with minimal documentation whereas complicated libraries with hard to understand concepts need extensive documentation, examples, and tutorials.
  • Depending on how tightly coupled the library and the codebase is going to be, go through the library’s documentation to get a feel of it and try to build a quick proof-of-concept to see if all the current and future requirements could be easily implemented using it.

5) How actively is it being developed?

  • Actively developed projects ensure that any new bugs are quickly fixed, new functionality added and PRs merged (all depending on priority and validity).
  • This also depends on the project as some projects are just “done” with nothing more to be added or fixed.

Conclusion

I understand that this is a very broad topic and there might be a lot more factors that need to be considered. I formed these guidelines based on my personal experience and learning from others.