I joined Privy in the summer of 2017, when the team was exploring what it might mean to migrate off of our Backbone/CoffeeScript frontend and onto a more modern JavaScript framework. A year and a half later, we’re reflecting on this undertaking and the lessons learned along the way.

Why We Chose Backbone

The current iteration of the Privy web app was built in 2012, as Backbone.js was rising in popularity as a front-end JavaScript framework. At the time, the ability to define separate views within a single-page app offered a level of code organization and data handling that vanilla JavaScript and jQuery couldn't match. From then until 2016, our front-end stack consisted of Backbone with Backbone.Relational, Marionette, and CoffeeScript.

Backbone.Relational mirrored ActiveRecord on the backend, and CoffeeScript's syntax nicely complemented Ruby, addressed the verbosity of ES5, and shipped with Rails: a great value proposition for a small team of full-stack developers.

Why We Refactored

The usual suspects

React's focus on reusable components is great for a small, fast-moving team looking to build consistent, reliable, and testable UI. Our Backbone code was untested and repetitive, so React offered an opportunity to write DRYer, more reliable code.

Performance issues

As the product grew in complexity, the cracks in the Backbone facade began to show. The Privy merchant dashboard is a complicated web app that relies on a ton of user interaction. Constant render calls in our core product meant many expensive DOM updates (think: dragging an element across the screen in our display builder), and our frontend took a performance hit. Trying to smooth out these glitches meant adding more third-party libraries, which further burdened the Rails asset pipeline. React's virtual DOM, plus bundling with Webpack, addressed both of these concerns.

Engineering velocity

The relational nature of Backbone models introduced a lot of downstream effects for every user interaction. Combined with a lack of test coverage, this loose separation of concerns meant that each developer had to maintain context on the full breadth of a growing app in order to confidently make changes for a well-defined task. Once developers stopped feeling like they could contribute to the frontend effectively, our feature velocity suffered.

Hiring

By the time we committed to migrating our frontend to React, we already felt the pain of trying to hire developers to work in a legacy Backbone app—we simply couldn’t find talented Backbone engineers who were interested in continuing to be Backbone engineers. We’re a team of curious folks who like to tinker and build things. We wanted to hire JavaScript devs who shared those values, and that meant that we wanted to hire JavaScript devs who were interested in the evolving JS ecosystem.

How we did it

Prototype

The first step was to build a prototype, or proof of concept. We had to wrap our heads around how React could solve our particular use cases and product limitations. Patrick ended up building out an Instagram Ad builder in React, which meant that someone on the team had context on what worked and what didn't.

Start Injecting React components into Backbone

The idea was to have our Backbone views render a React component inside them. This would allow us to continue to make small releases without having to tackle the biggest issues (data fetching, application state, routing) right away. We could gradually chip away at smaller parts of our Backbone Views until the entire page could eventually be its own React component.

We created the following rc helper to do this more easily. We could now pass a component with props into the helper to render within a Backbone.View:

import React from 'react'
import ReactDOM from 'react-dom'
import { v4 as uuid4 } from 'uuid'

export function rc(component, options) {
  const { hash: { rcStyle = '', ...hashProps } } = options

  // Merge any block content in as the component's children
  const props = Object.assign({}, hashProps, { children: options.fn(this) })

  const element = React.createElement(component, props)

  const root   = document.createElement('div')
  const rootId = `rc-${uuid4()}`

  root.setAttribute('id', rootId)
  root.setAttribute('style', `display: inline-block; width: auto; ${rcStyle}`)

  ReactDOM.render(element, root)

  // Hand the populated container back so the view can attach it to the DOM
  return root
}
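
For context, a hypothetical call site might look like the following. The SaveButton component and the hand-built options object here are illustrative, not our actual code:

// Hypothetical usage from a Backbone view: build the Handlebars-style
// options object by hand and attach the returned container.
const SettingsView = Backbone.View.extend({
  render() {
    const mount = rc(SaveButton, {
      hash: { label: 'Save changes', rcStyle: 'margin-left: 8px;' },
      fn:   () => null  // no block content, so no children
    })
    this.$el.append(mount)
    return this
  }
})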

Implement a 100/0 Rule

We committed to writing 100% of our new features in React. This led to a slower start as our devs ramped up on React, but it paid off to have the whole team contributing to our knowledge base and establishing good patterns for the future.

Synchronize Redux and Backbone States

In the summer of 2016, we began building the Automation Rules feature for display campaigns. Since Automation Rules would be the final step in our campaign builder, we would have to straddle two state management systems and keep them in sync throughout the campaign building process: if a user toggled between options in the Form tab (Backbone) and the Automation tab (Redux), the updated attributes had to sync across both stores.

Rather than remembering to call two different updater functions on each configuration change, we opted to create a LegacyConnector class to communicate between Redux and Backbone on campaign changes.

// Dispatch Redux actions on Backbone model changes
export const onChangeLegacyCampaigns = LegacyConnector.createListener({
  add:     payload => ({ type: types.CAMPAIGN_CREATE_SUCCESS,  payload }),
  sync:    payload => ({ type: types.CAMPAIGN_SYNC_SUCCESS,    payload }),
  destroy: payload => ({ type: types.CAMPAIGN_DESTROY_SUCCESS, payload: payload.id })
})

// Update Backbone model on Redux actions
export const legacyCampaignEmitters = {
  [types.CAMPAIGN_FETCH_SUCCESS]:   (collection, payload) => collection.get(payload.id).set(payload),
  [types.CAMPAIGN_CREATE_SUCCESS]:  (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_RECIPE_SUCCESS]:  (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_SYNC_SUCCESS]:    (collection, payload) => collection.get(payload.id).set(payload),
  [types.CAMPAIGN_COPY_SUCCESS]:    (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_DESTROY_SUCCESS]: (collection, payload) => collection.remove(collection.get(payload.id))
}

We initialized these listeners and emitters in our Redux middleware, which meant we could migrate each campaign builder tab with confidence that state would automatically stay in sync.
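
As a rough sketch, assuming a hypothetical middleware shape (our real LegacyConnector encapsulated these details, and the onChangeLegacyCampaigns signature below is an assumption), the wiring looked something like this:

// Hypothetical wiring sketch: connect both directions when the middleware
// is created with a reference to the legacy Backbone collection, e.g.
// applyMiddleware(legacyConnectorMiddleware(campaignsCollection)).
export const legacyConnectorMiddleware = campaigns => store => {
  // Listener direction: Backbone collection events -> Redux actions
  onChangeLegacyCampaigns(campaigns, store.dispatch) // assumed signature

  return next => action => {
    const result = next(action)

    // Emitter direction: Redux actions -> Backbone collection updates
    const emit = legacyCampaignEmitters[action.type]
    if (emit) emit(campaigns, action.payload)

    return result
  }
}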

Chip away at migrating components

In addition to having a dedicated “dashboard refactor” lane in our software priorities, we committed as a team to picking off migration-related tasks in our “down time” between larger projects. Our motto here was “high impact, low lift”. We targeted the parts of the app that would give us the most bang for our buck. Indicators of “high impact” product areas included:

Areas of focus for product development in the upcoming one or two quarters

We knew that upcoming product priorities included a contact segmentation engine that would need new UI in the front end, so we prioritized migrating our contacts page. We also knew that we would be exploring new modern charting libraries, so we migrated our reports section to make a more hospitable environment for experimentation.

Common or reusable components

The highest-impact, lowest-lift component we ever migrated was a simple ConfirmationDialog. It was a straightforward, completely controlled component, but it appeared everywhere throughout our UI. Migrating it early meant we could use it in every migration going forward.
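
For illustration, here's a minimal sketch of a fully controlled dialog like that one (the prop names are hypothetical, not our exact API):

import React from 'react'

// Fully controlled: visibility, copy, and handlers all arrive via props,
// so it can be rendered from a new React page or mounted inside a legacy
// Backbone view without caring where that state lives.
export function ConfirmationDialog({ isOpen, title, body, onConfirm, onCancel }) {
  if (!isOpen) return null

  return (
    <div className="confirmation-dialog">
      <h3>{title}</h3>
      <p>{body}</p>
      <button onClick={onCancel}>Cancel</button>
      <button onClick={onConfirm}>Confirm</button>
    </div>
  )
}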

Buggy or low-performing areas

Our display builder and email builder had by far the most user interaction, and were therefore the most susceptible to bugs. Though they were not low-lift, having these in React has had a huge impact and unblocks future development in these core product areas.

Sh!t Happens

The process wasn't perfect, and things definitely broke along the way. Luckily, we have a close-knit, collaborative team that was able to mitigate most of these issues without too much disruption. Here's what we would do differently if we had to do it over again:

Get a clearer understanding of engineering pain points

It's easy to use tools as shorthand for a deeper problem. For example, it's easy to say, "That part of the app is difficult to work in because it's written in CoffeeScript." It's worth pausing to ask, "Is the problem that it's written in CoffeeScript, or is the problem that the code there is difficult to reason about?"

We realized the error we had made when we decaffeinated our remaining CoffeeScript files and found that they were still just as difficult to work with. We would have had a clearer understanding of the scope of the work ahead if we had taken the time to pinpoint what was actually slowing us down.

Decide on state management early

It's difficult to be decisive about tooling in a fast-changing ecosystem. We migrated smaller components first, which could get by with local state, but we would have saved ourselves time in the long run by deciding on state management earlier. Would we be using Redux? MobX? Flux? Context? GraphQL? Making this call up front would have let us establish good patterns early and stick to them.

Forward thinking but backwards compatible

Once there are two different ways of doing things, it can be hard to tell which is preferable in a given situation. One of the earliest examples was migrating the page where a business can update its information, a change that also had to be reflected in the navbar's user greeting. We had two options:

  1. Use the Backbone save method:

     onClickSubmit() {
       business.save({ ...this.state.formInput })
     }

  2. Use our Redux async actions, and update Backbone incidentally:

     onClickSubmit() {
       this.props.syncBusiness({ ...this.state.formInput })
         .then(() => business.set({ ...this.state.formInput }))
     }

While option 1 requires fewer lines of code, it does not help us achieve our goal of moving our state management to Redux. We used the first pattern a lot in the early days and ended up having to fix it later. We would have saved ourselves time (and bugs!) by being sure to put React first.

Tips

I’ve learned a ton about JavaScript, product development, and teamwork throughout this whole process. My main takeaways are:

Get team buy-in

Tech debt is difficult to explain to non-technical stakeholders. It feels a bit like saying, "We're going to take a ton of time out to work on something. It won't look any different, but trust me, it's better." It was vital to get buy-in from the other teams who would be affected by the time we took for this project. That meant having frank conversations about our "whys": the features customers were requesting that we couldn't support in our current system, the boost in engineering velocity we'd see post-migration, the increased reliability that would help our support team, and our hiring concerns. Their trust and patience meant that we were able to do what we needed to modernize our code base. And in the end, they were thrilled with the quicker turnaround on their feature requests.

Use it as an opportunity for code review, knowledge sharing, lunch and learns

Some of our best solutions came out of team code review and lunch and learns. We had an open invitation for anyone to bring a dashboard refactor PR for discussion, where we could group-think solutions, ask questions, and gain context on which problems had already been solved.

Eye on the prize

Things are going to break. It's inevitable, and it sucks. Throughout the migration, it was easy to fixate on how many bugs were reported, how many support issues were generated, and how much work was still left to do. It's tough to get the big picture of your product when you spend eight hours a day focusing on its deficiencies. It wasn't until a friend raved about her experience setting up Privy campaigns for her small business site that I was able to take a step back and look at the big picture: we have a great product, tech debt and all.