ReadWriteBuilders is a series of interviews with developers, designers and other architects of the programmable future.
For a 100-person company founded in 2009, the tech firm Cloudflare certainly seems to have an outsized impact on the Internet.
Shortly after the Heartbleed bug became public knowledge on April 9, Cloudflare decided to revoke all the digital-encryption SSL certificates it managed—a move that would prevent hackers from stealing digital identities from Web servers by exploiting Heartbleed. When it did so, it caused a dramatic spike in such revocations.
Similarly, when it switched its customers by default to a new Internet-address scheme called IPv6, Cloudflare says it expanded what co-founder Matthew Prince calls “the IPv6 Web” by a full five percent.
See also: Exterminating Heartbleed: How To Clear It Out Of Your Data Center
Cloudflare’s primary business is to speed up its client sites and act as a sort of digital bouncer for them. It does this by helping them deliver their information more efficiently and by sheltering them from the Internet’s bad guys—hackers, spammers and scammers who try to knock sites offline via distributed denial-of-service attacks.
In the process, it’s also managed to bring advanced site-management tools—the kind of things that previously only companies like Google could afford—to the masses.
Prince co-founded CloudFlare after bouncing through a number of startups and attending both law and business school. I spoke with him about how getting sued by the porn industry got him started, how he was a lawyer for a day and the role he sees Cloudflare playing as cloud computing continues its astronomical growth.
What follows is a lightly edited transcript of our conversation.
Back When The Web Was “A Fad”
ReadWrite: You describe yourself as the storyteller. How technical are you?
Matthew Prince: When I was seven, my grandmother gave me an Apple II Plus. I grew up in Park City, Utah, and my mom used to sneak me into computer science classes. When I got to college, I was pretty competent as a computer programmer, and got bored in the computer science program fairly quickly.
In 1992, I was technical enough that the school spotted that. Along with two other students, I became one of the campus network engineers. We were building out the network across the campus. Back then, I was installing the switches, running cabling, and learning how the underlying network worked.
The other thing that was fortuitous: in college, a couple of us had started an electronic magazine. There was no World Wide Web in 1992, so we used a programming tool sold by Apple called HyperCard. It was object-oriented, one of the forgotten Apple technologies that was way ahead of its time. We made this interactive magazine with HyperCard stacks. We’d email it around campus. The school loved it. It showed how innovative they were.
The apps would get so large that they would actually crash the mail server. The school kept buying bigger and bigger mail servers to accommodate it, and we ended up making more and more complicated versions of the magazine.
They finally came to us and said, “This isn’t going to scale, but let us introduce you to some organizations.” One was a printer company that had invented a technology called PDF; that company, of course, was Adobe. The other was a group of Ph.D. students at the University of Illinois who had this thing called a browser.
I remember we would write articles and we couldn’t get anyone on campus to read them, but we’d get these emails, in broken English, from Japan. I remember saying to one of the other guys, why do we care if people in Japan are reading this? It was one of the most naive and stupid things I could have said. I wrote my college thesis on, essentially, why the Internet was a fad, which is incredibly embarrassing.
I’m technical enough that I know how this stuff works. When we started CloudFlare, I was writing code. I think I have three lines of code left in the code base. We hired people many orders of magnitude better than I am. Lee Holloway [CloudFlare co-founder] is the technical genius, and Michelle [Zatlyn], who is incredible, is the chief operating officer of our organization. The three of us together create a pretty solid foundation.
Lawyer For A Day
RW: You went to law school, and then worked as a lawyer for just one day?
MP: When I got to the end of college, I had job offers at these companies that I thought had no future: Netscape, Yahoo, a company called BBN, [and] Microsoft, for their online service. I thought this wasn’t going anywhere, so instead I went to law school. My friends were building dot-com companies that were some degree of successful, and I went to Chicago to study law.
In 1999, between my second and third years of law school, I moved to San Francisco for the summer and worked at a law firm called Latham & Watkins. Over the course of that summer, I helped take six companies public. I went back for my third year of law school, and that was when the bubble burst. Latham called and said, “Good news. You still have a job. We don’t have room for securities lawyers, but we have plenty of room in our bankruptcy practice.”
I had accepted the signing bonus and had started to do some work for them. One of my law professors said hey, my brother is starting a company, he’ll match your salary and give you some stock. I stayed in Chicago and worked for this startup [a company called GroupWorks in the insurance-benefits brokerage market].
RW: What inspired you to go back and get an MBA?
MP: The short answer is I went to business school because I got sued by the porn industry. After GroupWorks, I did well enough that I could mess around for a while. I came up with an idea for an anti-spam technology.
Unspam is like the “do not call” list, but for email. The business plan was absurd. We were going to help pass a bunch of [anti-spam] laws all around the country, and build a technology that enables these laws, and then sell it to state governments. But instead of them paying us directly, they’d charge a fee, and we’d take a share of that fee.
I remember pitching that to venture capitalists. They’re like, you’re insane. But that’s exactly what we did. We worked with state legislators around the country to pass these laws, and then we ended up winning the technical services contracts. Lee Holloway was our first technical hire at Unspam.
The pornography industry guys argued it was a violation of their First Amendment rights. They were arguing that they had the right to send adult material. They sued the state of Utah, and we were a contractor to the state, so they sued us as well.
The lawyers said, “You have a good case, but it will take three years to resolve. During that time, lay low.” I sent off applications to eight different business schools, and ended up getting rejected by seven of them, and got into Harvard.
Pahk The Stahtup In The Hahvahd Yahd
RW: And that’s where CloudFlare really got its start?
MP: I continued to run Unspam while I was in business school. Lee was continuing to work for Unspam. As a final project for our last semester, Michelle and I ended up entering a business plan competition, and the business plan was CloudFlare’s business plan. It’s remarkable to read it and see that we’ve basically done what we said we were going to do.
At the same time, Lee was running out of hard technical problems at Unspam. He was getting recruited by Facebook and Google. I always wanted him to be on my team.
I called him a couple of weeks later and said, “What if we design a service that essentially sits in front of the entire Internet, and we will build something that can not only protect websites from attack, it will make things faster?”
I knew that in order to get Lee excited, the project had to be huge. Lee needed something that was really, really big. I spent 30 minutes on the phone pitching it to him. At the end he was silent for about a minute, then he said, okay, that will work. So Lee was on board. Michelle is the operations person, I’m sales and marketing and storytelling, and that ended up being the combination that allowed us to build what we built.
Services Only A Google Could Afford
MP: Five percent of all Web traffic passes through our network. We add 5,000 new customers every day, ranging from teeny little blogs to Fortune 500 companies. International governments use us, the U.S. government uses us, commerce companies like Gilt use us. One out of 21 sites you go to online is a customer, and their traffic passes through our network. We have 25 facilities scattered across North and South America, Europe and Asia, and the plan is to open 50 more in the next year.
The network keeps growing bigger and bigger because we’re offering a compelling value proposition. It takes about five minutes to sign up. Once installed, you’re going to be at least twice as fast and protected against a whole range of attacks, and it decreases the load on your server substantially.
We’ll provide resources that previously only a company like Google could afford, with data centers scattered around the world. We’ll make that easy, affordable and scalable for anyone putting content online, whether it’s through traditional websites, modern web applications, or the back end of mobile apps. We make all of that faster and better.
Begging And Building Frankenservers
RW: What was the first technical challenge that you wanted to address with CloudFlare?
MP: CloudFlare was born in part out of an open source project Lee and I had started called Project Honey Pot. It’s the largest online community tracking fraud and abuse, with over 100,000 participants in 190 countries around the world.
When we were first starting CloudFlare, after we graduated from school and moved to California, we didn’t have any money, and we needed some way to build the first prototype. Amazon Web Services was just getting started at the time, and we were trying to figure out how we were going to get servers.
Michelle said, “You talk about how loyal this Project Honey Pot community is. What if we just ask them if they have some spare servers lying around?” It was an absurd thing, but we started to think, why not?
We had the zip codes of all our members, and we emailed every Project Honey Pot member within 50 miles of San Francisco: “We’re looking for servers to be able to build a prototype on. Do you happen to have any lying around?”
We got an astonishingly high response rate. So Michelle piled into her Volkswagen Jetta and drove around to all these different people, and did two things: she’d pick up the servers and load them into her car, and she’d ask each person what they wanted CloudFlare to be. It was our initial market research. Those Project Honey Pot members were the first CloudFlare users.
None of [the servers] worked on its own, but we were able to cobble parts together into two functional machines, and built the first prototype on those two Frankenservers.
We needed to be building a demo to show investors, but Lee didn’t want to build it. Instead he was focused on this little piece of code that would cache requests for one second. I said, “Seriously, that’s the most important thing you could be working on?”
He said, “Trust me, in three years, you are going to be happy I built this.” Lee is this technical genius who thinks about problems five years in advance. Almost three years from the day he said that, we got some of our first denial-of-service attacks, and the only way our infrastructure could stand up to them was thanks to those layers of caching. The caching layer he was building at the time turned out to be a piece of our foundation that has allowed us to continue to scale.
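CloudFlare hasn’t published that early code, but the idea Prince describes is what’s often called micro-caching, and it’s simple to sketch. The Python below is a minimal illustration under assumed names (`fetch_from_origin` and `handle_request` are hypothetical), not CloudFlare’s implementation: a response is reused for one second, so a flood of identical requests costs the origin at most one fetch per second per URL.

```python
import time

# Minimal one-second "micro-cache" sketch (illustrative only,
# not CloudFlare's actual code). Keyed by URL; entries expire after 1s.
CACHE_TTL = 1.0
_cache = {}  # url -> (timestamp, response_body)

def fetch_from_origin(url):
    # Stand-in for the expensive call to the origin server.
    return "<html>... content for %s ...</html>" % url

def handle_request(url):
    now = time.time()
    cached = _cache.get(url)
    if cached and now - cached[0] < CACHE_TTL:
        return cached[1]  # served from cache: the origin is never touched
    body = fetch_from_origin(url)
    _cache[url] = (now, body)
    return body
```

Under a flood of, say, 10,000 identical requests per second, the origin answers once per second and the cache absorbs the other 9,999, which is why such a small piece of code mattered so much under denial-of-service load.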
Expanding The Taxonomy Of The Cloud
RW: Do you build your own data centers, or rent space in others?
MP: We build our own equipment. We don’t pour foundations and build the buildings, but very few companies do. Even Facebook runs out of other facilities sometimes.
We’re not running on top of Amazon or Rackspace, though those are partners of ours. Instead, we are putting our own equipment in buildings scattered all around the world, and increasingly, putting them in the end-ISP facilities to ensure we have the most coverage and can be as fast as possible.
People talk about the cloud, the taxonomy of the cloud. At the base is what I call the store-and-compute layer. That used to be companies like HP, EMC, Dell and Sun—companies that made the big boxes that held your data and processed your data. Increasingly now it’s AWS [Amazon Web Services], Rackspace, Google, Microsoft with Azure and VMware building out their own clouds. So when people talk about cloud services, often they’re just talking about the store-and-compute layer, where you can rent time on machines you don’t own.
We tend to be great partners with all those store-and-compute service providers. That’s not what we do.
The layer on top of store and compute is the application layer, which used to be run by big bundled suites from companies like Microsoft, SAP and Oracle. Now those bundles are getting unbundled into their component parts: Salesforce does CRM, Box does storage and collaboration, Google does email, Workday does ERP, NetSuite does financial accounting.
All of those used to be in the SAP bundle. Now, instead of buying software, you’re buying those individual components.
Salesforce calls itself a cloud company. It’s not the same as Amazon; it’s a cloud services company living at that application layer.
[These companies also] tend to be partners of ours. Oftentimes a big financial institution wants to use Salesforce. The problem is, if it’s not software running in their own data center anymore, they need something like CloudFlare if they want a layer of protection in front of it, because they can’t call up Salesforce and ask them to put in a firewall.

That leads to the third tier, what I call the edge tier. Previously the edge was a whole bunch of boxes that would live at the top of your rack. Those boxes would be anything with the word firewall in it, from companies like Check Point. Increasingly it’s FireEye, or Imperva, or Palo Alto Networks. These are all firewalls that sit at the edge of your network.
And it’s companies like F5 Networks that do load balancing and WAN optimization, and anyone doing performance caching or DDoS mitigation—these are all boxes that, until yesterday, you’d have to buy and put in your server rack. But increasingly, there is no rack.
Customers, however, still need this same functionality. That’s what CloudFlare is doing. We’re taking all the functionality—firewall, DDoS mitigation, web apps, load balancing, caching—and deploying it as a service, instead of it being a box, or a series of boxes, you have to buy.
So the way we work with Microsoft or Google or Amazon is that they’re providing the store-and-compute layer, or the application layer, and CloudFlare is providing the edge that sits in front of that. Instead of doing it as hardware, we’re doing it as a service.
RW: Isn’t that something Amazon and others would want to build into their cloud offerings over time?
MP: Yeah, potentially. If they’re using all the Amazon services, people tend to use things like Amazon’s Elastic Load Balancer, which is similar to the load balancing we have, or its DNS service, called Route 53.
We have those services too, but we’re finding, in a lot of cases, that we’re a lot better. Our DNS service is faster and more performant than Route 53 or Google’s DNS service, so when you compare apples to apples, we do extremely well.
Second, we’re extremely focused on this. Amazon is a great company, but it’s not entirely dedicated to making sure the publisher’s experience is as good as possible. We also end up being significantly more cost-effective over time. Most people who put us in front cut their AWS bill, often by as much as 50%.
GZip It—GZip It Real Good
RW: What kind of relationship do you have with open-source projects?
MP: There’s a piece of software called gzip. It’s the compression software built into your browser, and it’s probably one of the most common code paths on the Internet. Gzip takes a web page and reduces it in size by as much as half.
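That effect is easy to see with the gzip support in any standard library. A minimal sketch in Python (the sample page is synthetic, and repeated markup compresses far better than a real page would, but roughly halving typical HTML is a fair rule of thumb):

```python
import gzip

# Simulate an HTML page. Markup is repetitive, which is what gzip exploits.
html = ("<div class='post'><h2>Title</h2>"
        "<p>Some article text goes here.</p></div>\n" * 200).encode("utf-8")

compressed = gzip.compress(html, compresslevel=6)  # 6 is a common server default

print(len(html), "bytes raw")
print(len(compressed), "bytes gzipped")
print("compressed to %.1f%% of original" % (100.0 * len(compressed) / len(html)))
```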
Because it’s running on every single request, it’s one of the things that takes up the most CPU on our systems. We have an engineer who left Apple to come work for us, and one of the first things he did was rewrite gzip. I was skeptical, because it’s an open source project—Google uses it, Facebook uses it—so how in the hell are we going to make gzip better?
He goes away, he comes back, and he has massively increased the performance of gzip. We have started to roll that out across our network, which saves us a huge amount of CPU time and allows us to offer our customers significantly faster performance.
One of the things I’m proud of is that we turn around and contribute that work back to the open source community. In the next few months, we’ll roll out our new, improved gzip. We’ve been running the calculations, and the power savings alone, if everyone in the world were to adopt this new version of gzip, are just astronomical.
We’re doing something that has extremely wide impact. It touches so many organizations around the world, and our mission is to build a better Internet. That sounds crazy at some level, but we can do things at our scale that are pretty substantial.
About two months ago, we defaulted all our customers to IPv6 routing, so even if their back end is still on IPv4, we can make sure the front end will support an IPv6 connection. In doing that, we increased the size of the IPv6 Web in one day by something like 5%.
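Mechanically, an edge proxy can do that because the visitor-facing connection and the origin-facing connection are independent sockets. The sketch below illustrates the concept only, not CloudFlare’s code; the origin address is a hypothetical RFC 5737 example, and the relay is deliberately naive (one read each way, no error handling):

```python
import socket

ORIGIN = ("192.0.2.10", 80)  # hypothetical IPv4-only origin server

# Accept visitors over IPv6 at the edge...
listener = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
listener.bind(("::", 8080))
listener.listen(5)

while True:
    client, addr = listener.accept()             # IPv6 connection in
    upstream = socket.create_connection(ORIGIN)  # separate IPv4 connection out
    upstream.sendall(client.recv(65536))         # forward the request
    client.sendall(upstream.recv(65536))         # relay the response
    upstream.close()
    client.close()
```

Because the two legs are separate connections, the origin never needs to speak IPv6 at all; the edge translates by terminating one protocol and originating the other.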
One of the things I’m most excited about: we have a team that’s very close, maybe by the end of this quarter, to being able to turn on SSL-encrypted connections by default even for our free users. The amount of engineering work that goes into something like that is pretty substantial. There are only about two million SSL-protected websites on the Internet, and the day we flip that switch, we will double the number of protected websites on the entire Internet.
RW: What other ways can the Internet be improved?
MP: There’s a Google protocol called SPDY (pronounced “speedy”), and SPDY makes transferring data over the Internet just a ton faster, especially for mobile devices. It’s hard for individual server operators to install, so we just enabled SPDY by default.
If your server is in Texas and a visitor comes to your website from Sweden, that visitor will first hit CloudFlare’s data center in Stockholm and connect via SPDY. For the dynamic content, we need to go fetch it from the server back in Texas, so we’ll open a connection back to Texas and hold that connection open.
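Holding that origin connection open matters because most of the cost of a cross-ocean fetch is the round trips spent setting up a fresh TCP (and TLS) connection. As a rough sketch of the idea, not CloudFlare’s implementation, here is edge-to-origin connection reuse in Python; the origin host name is made up:

```python
import requests

ORIGIN = "https://origin-in-texas.example.com"  # hypothetical far-away origin

# A Session pools keep-alive connections, so repeated fetches to the
# origin skip the TCP/TLS handshake after the first request.
session = requests.Session()

def fetch_dynamic(path):
    # Reuses the already-open connection instead of paying a fresh
    # multi-round-trip handshake across the ocean on every request.
    return session.get(ORIGIN + path, timeout=10).text
```

The first request pays the full handshake; every later one rides the warm connection, which is most of the win Prince is describing.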
We also have a differential compression technology called Railgun. Even on a highly dynamic page like Facebook, some content is personalized to you, but a whole bunch of the HTML is the same for you, me, and everyone else. Sending and resending all that content is just wasted bandwidth; what you really want to send is the stuff that changes.
So Railgun is differential compression for that long haul between Texas and Sweden. The performance is a lot better.
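Railgun itself is proprietary, but the core idea of differential compression can be sketched with Python’s standard difflib: both ends keep the previous version of a page, and only the operations needed to turn it into the new version cross the long haul. The encoding below is a toy, not Railgun’s protocol:

```python
import difflib

# Toy delta encoding in the spirit of differential compression
# (illustrative only; Railgun's actual protocol is proprietary).

def make_delta(old, new):
    """Encode `new` as copy/insert operations against `old`."""
    ops = []
    sm = difflib.SequenceMatcher(None, old, new)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            ops.append(("copy", i1, i2))        # both sides already have this
        else:
            ops.append(("insert", new[j1:j2]))  # only changed bytes travel
    return ops

def apply_delta(old, ops):
    """Rebuild the new version from the old one plus the delta."""
    out = []
    for op in ops:
        if op[0] == "copy":
            out.append(old[op[1]:op[2]])
        else:
            out.append(op[1])
    return "".join(out)

old_page = "<html><body>Hello, Alice. Shared navigation, footer...</body></html>"
new_page = "<html><body>Hello, Bob. Shared navigation, footer...</body></html>"

delta = make_delta(old_page, new_page)
assert apply_delta(old_page, delta) == new_page  # only the changed name traveled
```

In this example, only the name that changed travels as literal bytes; everything both ends already share crosses the wire as short copy instructions.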
Post-Heartbleed, we’re rewriting the underlying [communication and security] protocols so the Internet runs faster. Because we are a larger and larger portion of the edge of the network, there are things we can do that Google has done only for its own properties. If you are not Google, there’s no way to do that unless you use CloudFlare. You don’t have to be Google to be fast, safe and secure.
RW: Who do you consider your main competitor?
MP: Google doesn’t today, but it will increasingly provide some services similar to ours. Amazon already provides some services that overlap. And there’s Akamai; they are increasingly creating a bundle of services that competes with ours.
We each have different strengths and weaknesses. My hunch is there will be somewhere between two and six providers offering this suite of services, and I think we have a good shot at being the leader.
Lead image by Flickr user TechCrunch, CC 2.0; CloudFlare co-founders image courtesy of CloudFlare; illustrations courtesy of Shutterstock