All Posts

Migrating from Backbone to React - Lessons Learned

Amanda Beiner

I joined Privy in the summer of 2017, when the team was exploring what it might mean to migrate off of our Backbone/CoffeeScript frontend and onto a more modern JavaScript framework. A year and a half later, we’re reflecting on this undertaking and the lessons learned along the way.

Why We Chose Backbone

The current iteration of the Privy web app came to be in 2012, when Backbone.js was rising in popularity as a front-end JavaScript framework. At the time, the ability to define different views within a single-page app offered a level of code organization and data handling that vanilla JS and jQuery couldn’t. Until 2016, our front-end stack consisted of Backbone with Backbone.Relational, Marionette, and CoffeeScript.

Backbone.Relational mirrored ActiveRecord in the backend, and CoffeeScript syntax nicely complemented Ruby, addressed the verbosity of ES5, and shipped with Rails—a great value proposition for a small team of full stack developers.

Why We Refactored

The usual suspects

React’s focus on reusable components is great for a small, fast-moving team looking to build consistent, reliable, and testable UI. Our Backbone system was untested and repetitive, so React offered an opportunity to write DRYer, more reliable code.

Performance issues

As the product grew in complexity, the cracks in the Backbone facade began to show. The Privy merchant dashboard is a complicated web app that relies on a ton of user interaction. Constant render calls in our core product meant many expensive DOM updates (think: dragging an element across the screen in our display builder), and our frontend took a performance hit. Trying to smooth out these performance glitches meant adding more third-party libraries, which further burdened the Rails asset pipeline. React’s virtual DOM concept, plus compilation with Webpack, addressed both of these concerns.
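To make the render problem concrete, the pattern below is the kind of thing we mean (an illustrative sketch, not our actual code): a typical Backbone view tears down and rebuilds its entire template on every model change, whereas React diffs its virtual DOM and patches only the nodes that actually changed.

import Backbone from 'backbone'
import _ from 'underscore'

// Illustrative only: a common Backbone pattern that re-renders the whole
// subtree on every model change, which gets expensive during a drag.
const BuilderElementView = Backbone.View.extend({
  template: _.template('<div class="element" style="left: <%= x %>px; top: <%= y %>px"></div>'),

  initialize() {
    this.listenTo(this.model, 'change', this.render)
  },

  render() {
    // Even a one-pixel change to `x` throws away and rebuilds every DOM node
    this.$el.html(this.template(this.model.toJSON()))
    return this
  }
})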

Engineering velocity

The relational nature of Backbone models introduced a lot of downstream effects for every user interaction. Combined with a lack of test coverage, this loose separation of concerns meant that each developer had to maintain context on the full breadth of a growing app in order to confidently make changes for a well-defined task. Once developers stopped feeling like they could contribute to the frontend effectively, our feature velocity suffered.

Hiring

By the time we committed to migrating our frontend to React, we already felt the pain of trying to hire developers to work in a legacy Backbone app—we simply couldn’t find talented Backbone engineers who were interested in continuing to be Backbone engineers. We’re a team of curious folks who like to tinker and build things. We wanted to hire JavaScript devs who shared those values, and that meant that we wanted to hire JavaScript devs who were interested in the evolving JS ecosystem.

How we did it

Prototype

The first step was to build a prototype, or proof of concept. We had to wrap our heads around how React could solve our particular use cases and product limitations. Patrick ended up building out an Instagram Ad builder in React, which meant that someone on the team had context on what worked and what didn’t.

Start injecting React components into Backbone

The idea was to have our Backbone views render a React component inside them. This would allow us to continue to make small releases without having to tackle the biggest issues (data fetching, application state, routing) right away. We could gradually chip away at smaller parts of our Backbone Views until the entire page could eventually be its own React component.

We created the following rc helper to do this more easily. We could now pass a component with props into the helper to render within a Backbone.View:

import React from 'react'
import ReactDOM from 'react-dom'
import { v4 as uuid4 } from 'uuid'

export function rc(component, options) {
  // Separate the inline style override from the props destined for the component
  const { hash: { rcStyle = '', ...props } } = options

  // Fold the helper's block content in as children
  const propsWithChildren = Object.assign({}, props, {
    children: options.fn ? options.fn(this) : null
  })

  const element = React.createElement(component, propsWithChildren)

  // Create a uniquely identified node to mount the component into
  const root   = document.createElement('div')
  const rootId = `rc-${uuid4()}`

  root.setAttribute('id', rootId)
  root.setAttribute('style', `display: inline-block; width: auto; ${rcStyle}`)

  ReactDOM.render(element, root)

  // Return the mounted node so the calling Backbone view can attach it
  return root
}
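A Backbone view could then mount a component during its own render, along these lines (a hypothetical sketch; NavbarView, GreetingBadge, and the markup are made up for illustration):

// Hypothetical usage: mount a React component into a Backbone view's markup.
const NavbarView = Backbone.View.extend({
  template: _.template('<span class="greeting-slot"></span>'),

  render() {
    this.$el.html(this.template())

    // rc returns the mounted node, so we can attach it wherever we like
    const node = rc(GreetingBadge, {
      hash: { name: this.model.get('name'), rcStyle: 'margin-left: 8px;' }
    })
    this.$('.greeting-slot').append(node)

    return this
  }
})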

Implement a 100/0 Rule

We committed to writing 100% of our new features in React. This led to a slower start as our devs ramped up on React, but it paid off to have the whole team contributing to our knowledge base and establishing good patterns for the future.

Synchronize Redux and Backbone States

In the summer of 2016, we began building the Automation Rules feature for display campaigns. Since Automation Rules would be the final step in our campaign builder, we would have to straddle two state management systems and keep them in sync throughout the campaign-building process. If a user toggled between options in the Form tab (Backbone) and the Automation tab (Redux), the updated attributes had to sync across both stores.

Rather than remembering to call two different updater functions on each configuration change, we opted to create a LegacyConnector class to communicate between Redux and Backbone on campaign changes.

// Dispatch Redux actions on Backbone model changes
export const onChangeLegacyCampaigns = LegacyConnector.createListener({
  add:     payload => ({ type: types.CAMPAIGN_CREATE_SUCCESS,  payload }),
  sync:    payload => ({ type: types.CAMPAIGN_SYNC_SUCCESS,    payload }),
  destroy: payload => ({ type: types.CAMPAIGN_DESTROY_SUCCESS, payload: payload.id })
})

// Update the Backbone model on Redux actions
export const legacyCampaignEmitters = {
  [types.CAMPAIGN_FETCH_SUCCESS]:   (collection, payload) => collection.get(payload.id).set(payload),
  [types.CAMPAIGN_CREATE_SUCCESS]:  (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_RECIPE_SUCCESS]:  (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_SYNC_SUCCESS]:    (collection, payload) => collection.get(payload.id).set(payload),
  [types.CAMPAIGN_COPY_SUCCESS]:    (collection, payload) => collection.add(payload),
  [types.CAMPAIGN_DESTROY_SUCCESS]: (collection, payload) => collection.remove(collection.get(payload.id))
}

We initialized these listeners and emitters in our Redux middleware. This meant that we could migrate each campaign builder tab with confidence that the two states would stay in sync automatically.
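In rough terms, the emitter half of that middleware can look like this (an illustrative sketch, not our actual implementation; it assumes the Backbone campaigns collection is passed in when the store is created):

// Illustrative middleware: after each action updates Redux, mirror campaign
// actions onto the legacy Backbone collection via the emitter map above.
export const legacyCampaignMiddleware = campaigns => store => next => action => {
  const result = next(action)

  const emitter = legacyCampaignEmitters[action.type]
  if (emitter) {
    emitter(campaigns, action.payload)
  }

  return result
}

The listener half binds the collection’s add/sync/destroy events to store.dispatch in the same spirit, and a real connector also needs to guard against echoing the same update back and forth between the two stores.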

Chip away at migrating components

In addition to having a dedicated “dashboard refactor” lane in our software priorities, we committed as a team to picking off migration-related tasks in our “down time” between larger projects. Our motto here was “high impact, low lift”. We targeted the parts of the app that would give us the most bang for our buck. Indicators of “high impact” product areas included:

Areas of focus for product development in the upcoming one or two quarters

We knew that upcoming product priorities included a contact segmentation engine that would need new UI in the front end, so we prioritized migrating our contacts page. We also knew that we would be exploring modern charting libraries, so we migrated our reports section to make a more hospitable environment for experimentation.

Common or reusable components

The highest-impact, lowest-lift component we ever migrated was a simple ConfirmationDialog component. It was a straightforward, completely controlled component, but it appeared everywhere in our UI. Migrating it early meant that we could use it in every migration going forward.
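To illustrate what “completely controlled” means here, a hypothetical sketch (not the actual Privy component): the dialog keeps no state of its own, so the parent alone decides when it is open and what confirming or canceling does.

import React from 'react'

// Hypothetical sketch of a fully controlled dialog: no internal state,
// everything is driven by the parent through props.
export function ConfirmationDialog({ isOpen, title, onConfirm, onCancel, children }) {
  if (!isOpen) return null

  return (
    <div className="confirmation-dialog">
      <h3>{title}</h3>
      <div>{children}</div>
      <button onClick={onCancel}>Cancel</button>
      <button onClick={onConfirm}>Confirm</button>
    </div>
  )
}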

Buggy or low-performing areas

Our display builder and email builder had by far the most user interaction, and were therefore the most susceptible to bugs. Though they were not low-lift, the impact of having them in React was huge, and it unblocked future development in these core product areas.

Sh!t Happens

The process wasn’t perfect, and things definitely broke along the way. We luckily have a close-knit, collaborative team that was able to mitigate most of these issues without too much disruption. Here’s what we would do differently if we had to do it over again:

Get a clearer understanding of engineering pain points

It can be easy to use tools as a shorthand for a deeper problem. For example, it’s easy to say “that part of the app is difficult to work in because it’s written in CoffeeScript.” It’s worth pausing to ask, “Is the problem that it’s written in CoffeeScript, or is the problem that the code there is difficult to reason about?”

We realized the error we had made when we decaffeinated our remaining CoffeeScript files and found that they were still just as difficult to work with. We would have had a clearer understanding of the scope of the work if we had taken the time to pinpoint what was actually slowing us down.

Decide on state management early

It’s difficult to be decisive about tooling in a fast-changing ecosystem. We migrated smaller components first because they could get by with local state, but we could have saved ourselves time in the long run by deciding on state management earlier. Would we be using Redux? MobX? Flux? Context? GraphQL? Had we made this decision earlier, we could have established good patterns and stuck to them.

Forward thinking but backwards compatible

Once there are two different ways of doing things, it can be hard to tell which is preferable in a given situation. One of the earliest examples was migrating the page where a business can update their information. The change had to be reflected in the navbar’s user greeting. We had two options:

  1. Use the Backbone save method:

     onClickSubmit() {
       business.save({ ...this.state.formInput })
     }

  2. Use our Redux async actions, and update Backbone incidentally:

     onClickSubmit() {
       this.props.syncBusiness({ ...this.state.formInput })
         .then(() => business.set({ ...this.state.formInput }))
     }

While option 1 requires fewer lines of code, it does not help us achieve our goal of moving our state management to Redux. We used the first pattern a lot in the early days and ended up having to fix it later. We would have saved ourselves time (and bugs!) by putting React first.

Tips

I’ve learned a ton about JavaScript, product development, and teamwork throughout this whole process. My main takeaways are:

Get team buy-in

Tech debt is difficult to explain to non-technical stakeholders. It feels kind of like saying “We’re going to take a ton of time out to work on something. It won’t look any different, but trust me, it’s better.” It was vital to get buy-in from other teams who would be affected by the time we took for this project. This meant having frank conversations about our “whys.” We talked about the features customers were requesting that we couldn’t support in our current system, the boost in engineering velocity we’d see post-migration, the increased reliability that would help our support team, and our hiring concerns. Their trust and patience meant that we were able to do what we needed to do to modernize our code base. And in the end, they were thrilled with the quicker turnaround on their feature requests.

Use it as an opportunity for code review, knowledge sharing, lunch and learns

Some of our best solutions came out of team code review and lunch and learns. We had an open invitation for anyone to bring a dashboard refactor PR for discussion, where we could work through solutions as a group, ask questions, and gain context on which problems had already been solved.

Eye on the prize

Things are going to break. It’s inevitable, and it sucks. Throughout the migration process, it was easy to focus on how many bugs were reported, how many support issues were generated, and how much work was still left to do. It’s tough to get the big picture of your product when you spend 8 hours a day focusing on its deficiencies. It wasn’t until a friend raved about her experience setting up Privy campaigns for her small business’s site that I was able to take a step back and look at the big picture—we have a great product, tech debt and all.

Continue Reading »

An Engineer's Week One Report

Reef Loretto

Reef Loretto joined the Privy engineering team in August. We ask for a new engineer’s observations as part of their onboarding process, and Reef submitted his in essay format, so we decided to post it here. It has been lightly edited for the audience.

This week I started a new job at Privy. Already, there are lots of things I’ve noticed which make me incredibly excited to work, learn, and grow with the team. First among these is the clear and visible value placed on a smooth and enjoyable onboarding process. On day one, I came to my desk and was able to go from “zero” to “functional dev environment” before lunch. The team maintains carefully written onboarding documentation, which includes a very useful bash script to get a Docker environment up and running. The script ran with no issues, and then all it took was a simple docker-compose up to get the entire application running locally. I learned immediately that the Docker configuration greatly reduces the pain of trying to create a local environment resembling those of staging and production (which was a significant cause of stress in previous projects/teams).

Continue Reading »

Fixer Currency Gem

Emily Wilson

Emily Wilson is an engineering intern with Privy for the summer of 2018. She is part of the Georgia Institute of Technology's class of 2021.

Earlier this month we published a new Ruby Gem that handles fetching updated currency conversion rates. We previously used the GoogleCurrency gem to fetch the exchange rates, but the Google endpoint that the gem relies on is no longer supported. This caused errors when we attempted to exchange currencies. After looking at replacements for the gem, we decided it would be best to fork the GoogleCurrency gem and modify it to meet our needs.

Continue Reading »

Updates to our list of excluded security issues

Peter Cai

We've been excited to receive a number of vulnerability reports from security researchers all over the world since launching our security disclosure page earlier this year, and we've learned a lot about the process along the way. Privy is a more secure platform today because of the many reports we received.

Continue Reading »

Our Commitment to Candidates

Peter Cai

Interviewing for a startup job can be grueling, confusing, and demoralizing even when the process is going smoothly.

Continue Reading »

November 25, 2016 Outage Postmortem

Peter Cai

On Friday, November 25th, beginning at 1:32 PM Eastern US time, the Privy.com platform suffered an outage lasting roughly 3 hours.

Continue Reading »

Intercom Conversation Stats: an Open-Source Tool by Privy

Peter Cai

Andrew Knollmeyer is an engineering intern with Privy for summer 2016.

Introducing Intercom Conversation Stats, a tool developed by Privy which is free for anyone to use! This app allows you to gather information about your conversations in Intercom and store it in a Google Sheets document on a regular basis. The provided build aggregates data on conversation tags, but it can be customized to work with any other data from your conversations as well.

Continue Reading »

Building a BellBot

John Careaga

Ever feel like ringing a bell requires too much effort? Ever wish you could automate it to ring when something – like a sale – happens? If you responded "yes" to at least one of these questions, fret not. There is now a solution: BellBot.

Continue Reading »

Excuses not to Test

Peter Cai

At Privy, one of our values is pragmatism, so we don't require formal proofs of correctness and all-du-paths coverage to check in code, because it’s not cost effective (even if those things are valuable in the abstract). But this is such a widely accepted belief that it essentially conveys no information at all; outside of extraordinary operations (like NASA), no one requires 100% path coverage. So how do we determine what, and how much, to test?


Continue Reading »

Understanding Design as an Engineer

Alex Miller

TL;DR

Engineers are great at understanding and building complex logical systems, but often fail when it comes to understanding the people that use them. Unlike logical systems, people often behave unpredictably. In order to help users behave as rationally as possible, we need design to show them something they recognize and understand. Implementing consistent design rules that utilize concepts like contrast, spacing, and alignment will help users focus on the right elements in your product, and will teach them to behave appropriately within the environment you’ve built for them.

As a member of Privy’s lean startup team, I have the unique honor of being both the lead engineer and the company’s only designer. You might find this curious, considering that engineers are often notoriously bad at design. Many can understand intricate and complex systems built with multiple tech stacks, but fail to understand one thing: the people that use the products they build. For this reason, I firmly believe that more engineers should learn the basic principles of design. Design soothes users into behaving rationally by showing them something they recognize, understand, and even love.

Continue Reading »

Database Concurrency, Part 2

Peter Cai

This is part two of a series on database concurrency. Read the introduction at Database Concurrency, Part 1.

Last time, I talked about multi-version concurrency control, or MVCC, and how it enables highly concurrent database systems while still guaranteeing transaction isolation. This is because MVCC allows reality (from the perspective of two distinct transactions) to diverge, giving us the unique advantage that readers and writers don't have to block each other. But how does it achieve this in practice, and what are the caveats?

Continue Reading »

Reactive Systems - An Overview

Patrick McLaren

At Privy, many of our services are fundamentally event-driven. Indeed, our core product value lies in helping merchants capture arbitrary user interactions and react to opportunities as they arise in a tangible and timely manner.

Continue Reading »

Database Concurrency, Part 1

Peter Cai

This is part one of a series I'll be writing about database concurrency. Since this first post is a broad overview, I have simplified many concepts here.

High performance databases must support concurrency. As in many other software systems, databases can use read/write locks to maintain consistency under concurrent use (MyISAM in MySQL does this, for example). Conceptually, this is pretty simple: 1) there can be multiple readers; 2) readers block writers; 3) writers block each other as well as readers.
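Those three rules are easy to sketch in code (a toy async lock in JavaScript, added here for illustration; it is not from the post):

// A toy read/write lock demonstrating the three rules, with a FIFO queue
// so a waiting writer isn't starved by a stream of new readers.
class ReadWriteLock {
  constructor() {
    this.readers = 0      // active readers
    this.writing = false  // whether a writer holds the lock
    this.queue = []       // pending { write, resolve } requests
  }

  _grant() {
    while (this.queue.length > 0) {
      const head = this.queue[0]
      if (head.write) {
        // Writers wait for all readers and any other writer
        if (this.readers > 0 || this.writing) break
        this.writing = true
      } else {
        // Readers share freely, but an active writer blocks them
        if (this.writing) break
        this.readers += 1
      }
      this.queue.shift().resolve()
    }
  }

  acquire(write) {
    return new Promise(resolve => {
      this.queue.push({ write, resolve })
      this._grant()
    })
  }

  releaseRead()  { this.readers -= 1; this._grant() }
  releaseWrite() { this.writing = false; this._grant() }
}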

Continue Reading »

How we sped up our background processing 150x

Peter Cai

Performance has always been an obsession of mine. I enjoy the challenge of understanding why things take as long as they do. In the process, I often discover that there's a way to make things faster by removing bottlenecks. Today I will go over some changes we recently made to Privy that resulted in our production application sending emails 150x faster per node!

Continue Reading »