In January 2015 I got a call from a colleague at Cowboy Ventures – she pitched me on an early-stage startup called Homebase and its founder/CEO, John Waldmann. I wasn’t exactly ready to leave my current role at Crowdflower, but I was open to a conversation with John.
John met me for a burrito in the Mission in SF soon thereafter. I was impressed with his command of the problem he was solving – helping small businesses with straightforward tools that ultimately give them and their teams more time back to serve their customers. He was personable, humble, and energetic – we clicked right away. We agreed that I would meet with the small founding team in SF.
I met with the team a few times – both in the office and over a beer at a local pub. I really liked them! After these conversations John and I agreed that this would be a good role for me to jump into. It would be the closest thing to starting something myself, and while the team of maybe ten people had built the go-to-market product, there was a ton of work to do to build a real engineering practice that would power the company’s post-seed future.
I joined at the end of April 2015 and set about helping the team with anything that needed to be done. Thinking back on that time – we were about ten people, no more than fifteen: John, Rushi, Angela, Evgeniy, David, Eugene, Kai, Andrew, and Greg. I embraced this opportunity, completely bought into John’s vision, and prepared to “make my own mess this time” rather than coming in at a later stage to try to impact someone else’s “mess”!
On day one the first problem to tackle was operational stability. The team was developing against a single git repository and releasing to production once a week – Wednesday nights at 7 pm. At that point we often went down either right after the production deployment or the next day when traffic peaked again. My very first objective was to determine how we were testing the application, both with automated tests written by engineers and through manual verification. I collected every test list that John and the other team members had been reviewing, created the first master list of use cases, and built out a testing matrix of all the cases that needed to be confirmed before a production candidate could be deployed. I don’t think I’ve ever done as much software testing as I did in my first three months at Homebase. I also wrote some simple SQL scripts to run in production that confirmed the application functioned as expected post-release.
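The actual checks were plain SQL run against production; as an illustration only, here is a minimal sketch in Python against an in-memory SQLite database. The schema, table names, and specific checks are all hypothetical – the point is the shape of a post-release sanity pass: core tables are non-empty and referential invariants still hold.

```python
import sqlite3

# Hypothetical schema standing in for the real application tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE shifts (id INTEGER PRIMARY KEY, employee_id INTEGER,
                         starts_at TEXT);
    INSERT INTO employees VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO shifts VALUES (10, 1, '2015-05-01T09:00'),
                              (11, 2, '2015-05-01T13:00');
""")

def run_smoke_checks(conn):
    """Return (check_name, passed) pairs for a post-release sanity pass."""
    checks = []
    # 1. Core tables should not be empty right after a deploy.
    for table in ("employees", "shifts"):
        count = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
        checks.append((f"{table} non-empty", count > 0))
    # 2. No shift should reference a missing employee.
    orphans = conn.execute("""
        SELECT COUNT(*) FROM shifts s
        LEFT JOIN employees e ON e.id = s.employee_id
        WHERE e.id IS NULL
    """).fetchone()[0]
    checks.append(("no orphaned shifts", orphans == 0))
    return checks

results = run_smoke_checks(conn)
for name, passed in results:
    print(f"{name}: {'OK' if passed else 'FAIL'}")
```

In practice a script like this runs read-only queries minutes after the deploy and pages someone if any check fails.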
We had also made an initial investment in unit tests in our Rails application but had not made the full effort to fix broken ones – every branch was red all the time. So we fixed the suite and built discipline around it: when it ran green, we felt safe that no new bugs had been introduced; when it ran red, we knew we had broken something.
Over time we started to release to production consistently without going down, and I began pushing to release more than once per week. But my ability to test production candidates by hand did not scale, so I set out to find a strategy that would.
At Crowdflower I had become aware of an interesting SF startup called RainforestQA. As background, Crowdflower built a platform where users could define a set of tasks to be sent out to crowd workers around the world; the resulting data is collected and verified for the user. RainforestQA built an interface for teams that need test cases run against their application – tests are written in plain language and simple logic. They were a demanding customer for Crowdflower; I met with their CTO several times to solve challenges for the benefit of the RainforestQA platform. I brought them on board at Homebase, and together with a member of our Customer Service team (Andrew), we created sets of RainforestQA tests that let us regression-test a build candidate much more quickly.
In a fairly short period of time we built out a regression suite that stood in place of manual testing, which in turn empowered us to move to daily releases, then eventually to three to five releases per day on the web application – as it turns out, our first step toward quasi-continuous deployment. Our operational stability improved greatly with small, focused deployments.
The other big event in my first months with the founding team was making my first engineering hire. I thought long and hard about what I needed in this first hire – someone with the courage and curiosity to join an early team, someone with broad skills and the desire to take on whatever needed to be done. There were a few people in my network who came to mind – people I had worked with previously and knew I could count on. But one person in particular, Jordan, stood out from our time together at Crowdflower.
Luckily for me, Jordan was up for lunch. We met and I pitched him on the role. Jordan was clear with me – he wanted an opportunity to work on Platform and Infrastructure problems in addition to doing full-stack web feature development. I was excited to offer him this opportunity – we really had little idea of how to tune our AWS infrastructure. Jordan joined in August 2015 and jumped in feet first.
In addition to bringing his powerful skills to the team, Jordan got into the details of AWS and by the end of the year had figured out how to operate our infrastructure and fine-tune resources to keep the platform running smoothly. His foundational work let us run on AWS’s basic services for the next couple of years with nearly 100% uptime and put almost all of our attention toward building out the feature set – critical at a time when we could not afford to hire additional staff.
I remember feeling good toward the end of that first year with the team – our testing strategy was in place, our release cadence had been established, and our operational stability solidified. We were ready to evolve.