It’s such a glorious Saturday here in Oregon, but I’ve been wanting to put up this post all week about Twitter’s uptime and whether it’s really a fair measure of the service.
The connection to the cloud seems clear: Twitter is simply the leading edge of what we can expect as using Web apps becomes standard practice for everyone, not just the geeks and influencers.
I am reminded of a conversation I had recently with some people who know far more about APIs than I do. We started doing the math for the synaptic effect when the top 1% of all Twitter users post a tweet.
Take ReadWriteWeb’s Twitter stream as an example. The @rww account has more than 1 million followers. One tweet goes out to all of them, including people who have, for example, as many as 500,000 followers of their own. Those followers have their own followers, and so on. So you can see what happens when the World Cup gets going. The synapses are bursting with messages at electric speeds, on a scale that networks have never been accustomed to handling. The system goes kaput, and downtime follows. The network comes back up, and the NBA Finals have the same effect.
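To make that fan-out effect concrete, here's a back-of-the-envelope sketch. The follower count matches @rww, but the retweet rate and the average audience per retweeter are made-up assumptions for illustration, not Twitter's real numbers:

```python
# Hypothetical fan-out estimate -- the rates below are assumptions, not real data.
followers = 1_000_000              # first hop: e.g. an account like @rww
retweet_rate = 0.001               # assumed fraction of followers who pass it on
avg_followers_of_retweeter = 500   # assumed average audience per retweeter

hop1 = followers
hop2 = int(hop1 * retweet_rate * avg_followers_of_retweeter)
total_deliveries = hop1 + hop2

print(hop2)              # 500,000 second-hop deliveries
print(total_deliveries)  # 1,500,000 deliveries from a single tweet
```

Even with a retweet rate of one in a thousand, a single tweet generates half again as many deliveries as the original follower count, and each further hop compounds it.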
Twitter has 65 million tweets per day, which averages about 750 tweets per second. Twitter experienced 2,940 tweets per second when Japan scored against England in the World Cup. There were 3,085 tweets per second when the Lakers won the NBA Finals.
Facebook is dealing with its own velocity issues. At Structure 2010, Facebook’s Tom Cook said it now runs 60,000 servers to serve 400 million users.
Facebook’s Jonathan Heiliger provided this perspective about Facebook’s volume:
- Users spend more than 16 billion minutes on Facebook each day.
- Every week users share more than 6 billion pieces of content, including status updates, photos and notes.
- Each month more than 3 billion photos are uploaded to Facebook.
- Users view more than 1 million photos every second.
- Facebook’s servers perform more than 50 million operations per second, primarily between the caching tier and web servers.
- More than 1 million web sites have implemented features of Facebook Connect.
Just to bring this point home, DJ Patil gives a picture of what the social graph looks like for people with followers in the tens of thousands. It almost looks like a morphing molecular structure.
So, is Twitter’s 99.17% uptime for the month of June really that bad when you compare it to other services?
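For perspective, it's easy to translate that percentage into actual downtime. June has 30 days:

```python
# Convert June's uptime percentage into minutes of downtime.
uptime_pct = 99.17
minutes_in_june = 30 * 24 * 60   # 43,200 minutes in the month

downtime_minutes = minutes_in_june * (100 - uptime_pct) / 100
print(round(downtime_minutes))           # ~359 minutes
print(round(downtime_minutes / 60, 1))   # roughly 6 hours over the month
```

Six hours of downtime in a month sounds bad until you weigh it against the velocity the service is absorbing.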
It’s more accurate to compare Twitter to other social networks in terms of velocity than anything else. It’s the speed of the social graph that Twitter needs to keep in check. What they learn and apply will become tenets for how future social graph services manage their own velocity.
What that means is borrowing terms that have historically belonged to more traditional data mining. Modeling and statistical analysis are now part of the vocabulary needed to fully understand the dynamics of this ever-faster synaptic environment.
John Adams of Twitter illustrated what we are seeing as the number of users and the amount of data continue to scale:
Twitter is even more fascinating if you think about how its velocity will compare to that of other fast-growing services over the next five to ten years.
How that affects developers is something we look forward to exploring in future posts about the issues they face when building services that rely heavily on APIs, both for consuming data and for making it available.