CMG 2012 – Rationalization and Optimization of Energy in the Cloud – Bruno Domingues

AWS Getting Started Guides for Linux and Microsoft Windows

We’ve created three new documents to make it even easier for you to get started with AWS:

The first two documents (Getting Started Guide: AWS Web Application Hosting for Linux and Getting Started Guide: AWS Web Application Hosting for Microsoft Windows) are designed to help you create scalable, robust web applications that handle sophisticated demands and workloads using AWS. Each provides an example architecture diagram of a web application hosted on AWS and a step-by-step walkthrough of how to deploy your web application using AWS services while following best practices.

The guides walk you through each step of the process. You’ll sign up for the services and install the command-line tools. Then you will create an Elastic Load Balancer, EC2 Security Group, and a Key Pair. Next, you will use Auto Scaling to launch a load-balanced array of Amazon EC2 instances and set up a CloudWatch alarm to drive the Auto Scaling process. You will add database capabilities by launching an Amazon RDS DB Instance along with the associated DB Security Group. With the infrastructure in place, you will install and launch your web application. Finally, you will use the CloudFormer tool to capture your setup as a reusable CloudFormation template. The guide also covers the use of Route 53 for DNS hosting and CloudFront for content distribution.

We also have a brand new Microsoft Windows Guide. This guide contains conceptual information about Amazon EC2, as well as information about how to use the service to create new web applications on Windows instances. Separate sections describe how to program with the command line interface (CLI) and the Query API.

— Jeff;


5 ways to protect against vendor lock-in in the cloud

Two weeks ago, Google announced a significant price increase for use of its App Engine Platform-as-a-Service. The increase itself was not a huge surprise. Google had been making noises that something like this was in the offing for a number of months. But the size of the increase shocked the Web development and cloud applications community. For most users, the cost of using the Google runtime environment effectively increased by 100% or more.

A huge online backlash ensued. For its part, Google put off the increase by a month and moderated some of the increases. But the whole incident brought many nagging doubts about the cloud to the surface. Said one poster on one of the many threads that lit up the Google Groups forums after the increase:

I like so many of us have spent a lot of time learning app engine – i have been worried like so many that using app engine is a mistake because any app you invest/build can only be run on… app engine.

Because the Google PaaS requires that developers customize code specifically to run in that environment and nowhere else, rewriting that code takes a lot of time, effort and money. With salaries for programmers hitting record highs in the Bay Area and recent CS graduates pulling in $120,000 or more to code, any big move that forced major code rewrites would ultimately wallop the bottom line. Ironically, these increases disproportionately affected numerous hobbyists and small developers running interesting applications – the creators of the next proverbial Google. Certainly corporate IT departments took notice, as well.

Vendor lock-in will make you vulnerable

Unquestionably, the Google App Engine price increase revealed a fundamental weakness of many cloud businesses: vendor lock-in does exist in the cloud. This seems odd, because one of the promised benefits of the cloud was precisely to obviate vendor lock-in and make applications more portable. In that worldview, no cloud rules them all (not even Amazon), and companies operating applications in the cloud can quickly and easily port their applications to other PaaS offerings or to other IaaS providers.

With vendor lock-in comes vulnerability to price increases. In all likelihood, Google – a data-driven business if there ever was one – was rebalancing pricing to reflect its own need for profitability. But for developers and app makers, this drastic shift effectively turned their decision to go with Google App Engine into what may have been a “bet-the-company” decision without ever realizing it.  For the PaaS industry in general, the move raises significant uncertainty. If Google has to raise its prices this much, who’s next?

Start thinking defensively before you choose a platform

In a similar vein, developers who put their applications up on Heroku may not have realized that their business fate depended on the fidelity of the Amazon EC2 cloud. If a company had been planning a big sales event or promotion during the extended EC2 outage, those three days of hard downtime may have had an outsized impact.

So clearly the rules of the game have changed for anyone who wants to put an app in the cloud and run a real business. Defensive thinking is in order. Here are five key rules to avoid getting gouged by Google App Engine or eviscerated by an EC2 outage:

  1. Avoid vendor lock-in at all costs. This is now a no-brainer. Make sure that your app can be easily ported to other clouds if you need to move due to service outages. If you must write apps that require serious customization, make sure you have a back-up plan and, if you can swing the cost, an alternative cloud running your code as a backup.
  2. Know thy PaaS. Spreading the risk among multiple PaaS providers makes a lot of sense – unless they are all totally dependent on one big cloud to deliver your applications and cloud business. Ask pointed questions about where your PaaS is running and how its operators are managing the risk of a big-cloud failure, and explore installable PaaS options that you yourself control.
  3. Ask hard questions about redundancy and system architecture. Deep under the covers of most clouds are core system architectures that may replicate single points of failure. That’s because, at its core, the cloud infrastructure ecosystem is not a terribly diverse environment: only a few hardware and software companies rule the roost. So ask your cloud provider to completely open their architecture and software kimono and let you examine everything. If they won’t, caveat emptor. If they will, you can judge their redundancy measures for yourself. Ask for specific architecture diagrams if you are going to be dependent on a cloud environment and its reliability, and get a network engineer or system architect buddy to review them. Think this is overkill? Ask Foursquare, Reddit and the other huge sites that have corporate backing or VC money and went down hard in the EC2 outages.
  4. Pick code that’s easier and faster to modify. Not all runtime environments and frameworks are alike. Certain flavors and types of frameworks and Web scripting environments are more difficult to change in a pinch due to the core architecture of the way the scripting language works. Until recently, PHP was far harder to clean up than RoR, and Python, pre-Django, was more unwieldy.
  5. The most popular code may not be the cheapest code. Think about the availability of coders. Many applications companies have a horror story about how their iOS app needed modifications and they either had to pay a high-end dev shop $200 per hour or had to wait for weeks to make the mods. At the same time, some runtime environments like Node.js can be built with JavaScript code throughout the application stack. (We’re biased, as we are strong backers of Node.js.) That means, in a best-case scenario, you eliminate the need for differentiated front- and back-end coding teams. When building your cloud app, think hard about the code selection before you start filling up your GitHub repository.

By no means are these five steps comprehensive. And for the most part they are obvious. But in the cloud things move pretty quickly and sometimes slowing down to think about what your cloud application will be in six, 12 or 24 months is hard to do. So put on your crash helmet, watch your wallet, and be careful out there, people.

Alex Salkever is Director of Product Marketing at Joyent Cloud (@Joyent). He was formerly a technology editor at



Capacity aggregation: Cloud’s great experiment | The Wisdom of Clouds – CNET News

One of the most important of these experiments today is the introduction of true compute capacity aggregators–market services where capacity is available on demand, from multiple providers, with price and service competition.

Achieving a true capacity market, in which capacity can be traded as a commodity product like wheat or energy is an extremely difficult problem to solve. In fact, I’m on record as saying it will be many years before the technical and legal barriers to such a model will be removed.

A very nice article about services for buying and selling virtual capacity: cloud computing traded as a commodity product.

The ABCs of virtual private servers, Part 1: Why go virtual?


Nati Shalom’s Blog: NoCAP


October 16, 2010


In the past few months I was involved in many of the NoSQL discussions. I must admit that I really enjoyed those discussions, as it felt that we finally started to break away from the “one size fits all” dogma and look at data management solutions in a more pragmatic manner. That in itself sparks lots of interesting and innovative ideas that can revolutionize the entire database market, such as the introduction of the document model, map-reduce, and the new query semantics that come with them. As with any new movement, we seem to be going through the classic hype cycle, and right now it seems to me that we’re getting close to its peak. One of the challenges I see when a technology reaches the peak of its hype is that people stop questioning the reason for doing things and jump on the new technology just because X did it. NoSQL is no different in that regard.


In this post I wanted to spend some time on the CAP theorem and clarify some of the confusion I often see when people associate CAP with scalability without fully understanding the implications that come with it and the alternative approaches.

I chose to name this post NoCAP specifically to illustrate the idea that you can achieve scalability without compromising on consistency, at least not to the degree that many of the disk-based NoSQL implementations impose.

Recap on CAP

Quoting the definition on Wikipedia:

The CAP theorem, also known as Brewer’s theorem, states that it is impossible for a distributed computer system to simultaneously provide all three of the following guarantees:[1]


  • Consistency

    (all nodes see the same data at the same time)

  • Availability

    (node failures do not prevent survivors from continuing to operate)

  • Partition Tolerance

    (the system continues to operate despite arbitrary message loss)


Many of the disk-based NoSQL implementations originated from the need to deal with write scalability. This was largely due to changes in traffic behavior driven by social networking, in which most of the content is generated by the users rather than by the site owner.

In a traditional database approach, achieving data consistency requires synchronous writes to disk and distributed transactions (known as the ACID properties).

It was clear that the demand for write scalability would conflict with the traditional approaches for achieving consistency (synchronous writes to a central disk and distributed transactions).

The solution to that was: 1) break the centralized disk access through partitioning of the data into distributed nodes; 2) achieve high availability through redundancy (replication of the data onto multiple nodes); 3) use asynchronous replication to reduce the write latency.
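The first two points of that recipe can be sketched as a toy Python model (all class and method names here are illustrative, not any product’s API): keys are hash-partitioned across shards, and every write is applied synchronously to each replica of the owning shard, with point 3 deliberately left out.

```python
import hashlib

class ShardedStore:
    """Toy key-value store: hash-partitions keys across shards (point 1)
    and synchronously replicates each write to every replica (point 2)."""

    def __init__(self, num_shards=4, replicas_per_shard=2):
        # each shard is a list of replica dicts; replica 0 acts as primary
        self.shards = [[{} for _ in range(replicas_per_shard)]
                       for _ in range(num_shards)]

    def _shard_for(self, key):
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.shards[int(digest, 16) % len(self.shards)]

    def write(self, key, value):
        # synchronous replication: the write returns only after
        # every replica of the owning shard has applied it
        for replica in self._shard_for(key):
            replica[key] = value

    def read(self, key, replica_index=0):
        # reads are consistent no matter which replica serves them
        return self._shard_for(key)[replica_index].get(key)
```

Because the write path updates all replicas before returning, a read from any replica sees the latest value.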

The assumption behind point 3 above is going to be the focus of this specific post.


Graphical representation of the CAP theorem.



The Consistency Challenge

One of the common assumptions behind many of the NoSQL implementations is that to achieve write scalability we need to push as many operations as possible on the write path to a background process, so that we minimize the time in which a user transaction is blocked on a write.

The implication is that with asynchronous writes we lose consistency between write and read operations, i.e. a read operation can return an older version than that of the latest write.

There are different algorithms that were developed to address this type of inconsistency challenge, often referred to as Eventual Consistency.
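The stale-read window that asynchronous replication opens can be shown with a minimal sketch (hypothetical names, not any real database’s API): writes are acknowledged after touching only the primary, and a background step drains a replication queue to the replica later.

```python
from collections import deque

class AsyncReplicatedStore:
    """Toy model of asynchronous replication: writes are acknowledged
    after updating the primary only; replication happens later."""

    def __init__(self):
        self.primary = {}
        self.replica = {}
        self.replication_queue = deque()

    def write(self, key, value):
        self.primary[key] = value            # acknowledged immediately
        self.replication_queue.append((key, value))

    def read_from_replica(self, key):
        return self.replica.get(key)         # may return a stale version

    def replicate_one(self):
        # background process: apply one queued update to the replica
        if self.replication_queue:
            key, value = self.replication_queue.popleft()
            self.replica[key] = value
```

Until the background step runs, a read served by the replica misses the write that the client already saw acknowledged, which is exactly the inconsistency Eventual Consistency algorithms manage.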


For those interested in more information in that regard, I would recommend looking at Jeremiah Peschka’s post Consistency models in nonrelational dbs. Jeremiah provides a good (and short!) summary of the CAP theorem, the Eventual Consistency model, and other common principles that come with it (BASE – Basically Available, Soft-state, Eventually consistent – NRW, vector clocks, ...).

Do we really need Eventual Consistency to achieve write scalability?

Before I dive into this topic I wanted to start with a quick introduction to the term “scalability”, which is often used interchangeably with throughput. Quoting Steve Haines:

The terms “performance” and “scalability” are commonly used interchangeably, but the two are distinct: performance measures the speed with which a single request can be executed, while scalability measures the ability of a request to maintain its performance under increasing load.

(See previous post in that regard: The true meaning of linear scalability)

In our specific case that means that write scalability can be delivered primarily through points 1 and 2 above (1 – break the centralized disk access through partitioning of the data into distributed nodes; 2 – achieve high availability through redundancy and replication of the data onto multiple nodes), while point 3 (use asynchronous replication to those replicas to avoid the replication overhead on write) is mostly related to write throughput and latency, not scalability. Which brings me to the point behind this post:

Eventual consistency has little or no direct impact on write scalability.

To be more specific, my argument is that it is quite often enough to break our data model into partitions (a.k.a. shards) and break out of the centralized disk model to achieve write scalability. In many cases we may find that we can achieve sufficient throughput and latency just by doing that.

We should consider the use of asynchronous write algorithms to optimize write performance and latency, but due to the inherent complexity that comes with them we should consider that only after we have tried simpler alternatives such as db-shards, flash disks, or memory-based devices.

Achieving write throughput without compromising consistency or scalability

The diagram below illustrates one of the examples by which we could achieve write scalability and throughput without compromising on consistency.


As with the previous examples, we break our data into partitions to spread our write load between nodes. To achieve high throughput we use in-memory storage instead of disk. As in-memory devices tend to be significantly faster and more concurrent than disk, and since network speed is no longer a bottleneck, we can achieve high throughput and low latency even when we use synchronous writes to the replica.

The only place in which we’ll use an asynchronous write is the write to the long-term storage (disk). As user transactions don’t access the long-term storage directly through the read or write path, they are not exposed to the potential inconsistency between the memory storage and the long-term storage. The long-term storage can be any of the disk-based alternatives, from a standard SQL database to any of the existing disk-based NoSQL engines.
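The write path described above can be sketched as follows (hypothetical names, not a real product’s API): writes go synchronously to both in-memory replicas, while a write-behind queue feeds the long-term store in the background, invisible to readers.

```python
from collections import deque

class WriteBehindStore:
    """Toy model of the architecture described above: two in-memory
    replicas updated synchronously on the write path, plus an
    asynchronous write-behind queue feeding long-term (disk) storage."""

    def __init__(self):
        self.memory_primary = {}
        self.memory_backup = {}
        self.long_term = {}          # stands in for the disk-based store
        self.write_behind = deque()

    def write(self, key, value):
        # synchronous, consistent write path: both in-memory replicas
        self.memory_primary[key] = value
        self.memory_backup[key] = value
        self.write_behind.append((key, value))   # flushed later

    def read(self, key):
        # reads never touch long-term storage, so its lag is invisible
        return self.memory_primary.get(key)

    def flush_one(self):
        # background process: persist one queued write to disk
        if self.write_behind:
            key, value = self.write_behind.popleft()
            self.long_term[key] = value
```

The key property is that the asynchronous leg exists only between memory and disk, which no user read ever traverses, so readers always see consistent data.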

The other benefit of this approach is that it is significantly simpler, not just in terms of development but also to maintain, compared with the Eventual Consistency alternatives. In distributed systems, simplicity often correlates with reliability and deterministic behavior.


Final words

It is important to note that in this post I was referring mostly to the C in CAP and not CAP in its broad definition. My point was not to say don’t use solutions that are based on the CAP/Eventual Consistency model, but rather don’t jump on Eventual Consistency-based solutions before you have considered the implications and the alternative approaches. There are potentially simpler approaches to deal with write scalability, such as using database shards or in-memory data grids.

As we reach the age of tera-scale devices such as Cisco UCS, where we can achieve huge capacities of memory, network and compute power in a single box, the areas in which we can consider putting our entire data in memory get significantly broader, as we can easily store terabytes of data in just a few boxes. The case of Foursquare’s MongoDB outage is interesting in that regard. 10gen’s CEO Dwight Merriman argued that the entire set of data actually needs to be served completely in memory:

For various reasons the entire DB is accessed frequently, so the working set is basically its entire size. Because of this, the memory requirements for this database were the total size of data in the database. If the database size exceeded the RAM on the machine, the machine would thrash, generating more I/O requests than the four disks could service.

It is a common misconception to think that putting part of the data in an LRU-based cache on top of disk-based storage could yield better performance, as noted in the Stanford research The Case for RAMClouds:

..even a 1% miss ratio for a DRAM cache costs a factor of 10x in performance. A caching approach makes the deceptive suggestion that “a few cache misses are OK” and lures programmers into configurations where system performance is poor..
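The arithmetic behind that quote is easy to verify. With illustrative numbers (a 100 ns DRAM hit, a 10 ms disk miss; both figures are assumptions chosen for the example, not taken from the paper), even a 1% miss ratio makes the average access roughly a thousand times slower than all-in-RAM; the paper’s 10x figure corresponds to a smaller miss penalty.

```python
def effective_access_time(hit_ns, miss_ns, miss_ratio):
    """Average access time of a cache with the given miss ratio."""
    return (1 - miss_ratio) * hit_ns + miss_ratio * miss_ns

dram = 100            # illustrative DRAM access time, ns
disk = 10_000_000     # illustrative disk access time, ns (10 ms)

all_in_ram = effective_access_time(dram, disk, 0.0)    # 100 ns
one_pct_miss = effective_access_time(dram, disk, 0.01) # 100099 ns
print(one_pct_miss / all_in_ram)  # roughly a factor of 1000
```

The slowdown is dominated entirely by the miss term, which is why "a few cache misses" are anything but OK.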

In that case, using a pure In-Memory-Data-Grid as a front end and disk-based storage as long-term storage could potentially work better, with significantly lower maintenance overhead and higher determinism. The capacity of data in this specific case (<100GB) shouldn’t be hard to fit into a single UCS box or a few EC2 boxes.





pron said…

Nice post, Nati, but I believe you are a bit misleading regarding the CAP theorem. CAP applies to all distributed solutions, including the one you’ve described (that does not sacrifice consistency). However, that solution does sacrifice partition tolerance. Asynchronous writes are not only used for reducing latencies (i.e. performance), but also to provide partition tolerance. It’s simply a matter of whether you prefer CAP’s CA, or AP.

Nati Shalom said in reply to pron…


You have a point here

My point was more to do with the Eventual Consistency part of CAP and not CAP in its broad definition.
I’ll add that clarification to the post.

Asynchronous writes are not only used for reducing latencies (i.e. performance), but also to provide partition tolerance

I see how partition tolerance relates to the fact that I need to maintain redundant replicas. I fail to see the direct connection to whether or not the replication is synchronous.

Quoting the Wikipedia definition:

Partition Tolerance (the system continues to operate despite arbitrary message loss)

In the proposed alternative that I was pointing to above, you can achieve partition tolerance by the fact that there is more than one replica of the data available somewhere on the network. So when a node fails or becomes unavailable, the system continues to serve write/read requests through one of the redundant nodes.

My point was that you don’t have to give up consistency to achieve partition tolerance at that level.

pron said in reply to Nati Shalom

“My point was that you don’t have to give up consistency to achieve partition tolerance at that level.”
But you do. By definition, synchronous replication means that the master has to wait for the slave to acknowledge receiving the update. If update messages are lost (a partition is created in the network), the master in effect ceases to function. Both master and slave(s) stop serving queries, because they cannot guarantee consistent responses. With eventual-consistency, nodes can still serve queries even if data replication is temporarily hampered – some nodes will simply return query results that don’t reflect the latest updates. This is precisely the consistency/partition-tolerance trade-off.
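The trade-off pron describes can be reduced to a toy model (all names here are illustrative, not any product’s API): with synchronous replication, the master must refuse writes once the replica is unreachable, because accepting them would break the consistency guarantee.

```python
class SyncReplicatedNode:
    """Toy model of the sync-replication trade-off: during a network
    partition, preserving consistency forces writes to become
    unavailable."""

    def __init__(self):
        self.master = {}
        self.slave = {}
        self.partitioned = False

    def write(self, key, value):
        if self.partitioned:
            # the master cannot get an acknowledgement from the slave,
            # so to preserve consistency it must refuse the write
            raise RuntimeError("unavailable: replica unreachable")
        self.master[key] = value
        self.slave[key] = value  # synchronous acknowledgement
```

An eventually consistent system would instead accept the write on the reachable side and reconcile later, trading consistency for availability under the partition.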

Nati Shalom said in reply to pron…


Let me refine my response.

Synchronous replication behaves differently than what you described. Synchronous replication (thinking of GigaSpaces) works as follows when it comes to a network partition or node failure:

1) Network partition between primary and replica node: if a replica node failed, the partition that is still available continues to serve user requests and logs all the transactions into a log buffer until communication is restored. The failed node restores its state upon startup from the current state of the available node and by replaying the log (to fill in the gap since it started its recovery process).

2) If a primary node fails, users are routed to an alternate node that becomes the new primary. The recovery process of the failed primary is exactly the same as I mentioned above, as it basically becomes the new backup when it comes back alive.

In this model the asynchronous replication part happens only during the recovery process and not on a continuous basis, and thus you don’t have to compromise between consistency and partition tolerance to the degree that you would otherwise.

Nati S.

pron said in reply to Nati Shalom

Nati, you cannot escape the CAP theorem, and I think you may be adding to the confusion rather than clarifying it. I invite you to read

What your (perfectly reasonable) solution does is create a quorum between nodes that contain the “shard”. If you have just two of them, a master and a slave, then in the case of a two-node failure your system is not available (there is no access to the data contained only in those two nodes). In order to survive larger failures you would require a larger quorum, say 3 nodes, but then, because writes are synchronous, write latency MUST suffer, no matter whether the data is stored on disk or in RAM. The write transaction must wait for all shard nodes to acknowledge the operation. This kind of system is called CA (perhaps a bit confusingly), or, one that in an event of partitions becomes unavailable. Again, this may be a perfectly good solution, but it is not a “simpler solution”; rather, it is a solution to a problem with different requirements. It is absolutely not a viable solution for systems that require continued operation in an event of many-node failure, which must make use of data-stores such as Dynamo or Cassandra, just as Dynamo and Cassandra cannot be used by systems that require perfect consistency.
You may, however, argue that node failures are, more often than not, disk failures, so a RAM only data-store is unlikely to incur multiple-node failures.

Nati Shalom said in reply to pron…


Thanks for the pointer. The definition that I was referring to in this post seems to be more consistent with that of Henry Robinson than that of Dr. Michael Stonebraker, so I hope that we’re in agreement here.

In order to survive larger failure you would require a larger quorum, say 3 nodes, but then, because writes are synchronous, write latency MUST suffer

That’s not entirely true. Writes to the replicas happen in parallel to one another, i.e. they are not sequential; therefore the latency is not proportional to the number of replicas but to the response of the slowest node in the replica group.

There are even more efficient ways to deal with large failures than to keep many redundant nodes active all the time and pay the overhead in bandwidth, capacity, and complexity (a replica count of 3 means that for every GB of data we have 2GB of redundant information; with large data systems that can turn into huge numbers and cost).

A better way, IMO, is to add more backups on demand, i.e. only when one of the nodes fails. That way you are guaranteed to have two nodes always available (with the exception of losing the two nodes at the SAME time; statistically I would argue that has the same chance as losing two data centers at the same time, as in most cases the two nodes would live in separate data centers).

In GigaSpaces there is also an option to mix synchronous and asynchronous replicas under the same group. This is in fact how we store data into long-term storage, so you have more options to control the tradeoff between throughput and redundancy in that regard.

Over the years of dealing with those tradeoffs we found that even in the most mission-critical systems you can think of, customers chose the backup-on-demand option rather than having more backups, even though they could.

pron said in reply to Nati Shalom

Another required read for anyone interested in distributed systems and the CAP theorem is this:

Dr. Abadi from Yale University explains that there are really only two operating modes for distributed systems: CA/CP (which are exactly the same thing), and AP.
I, too, believe that AP solutions (that only guarantee eventual consistency) are relevant only for extremely large, “web-scale” applications with thousands of nodes and millions of concurrent users (like Amazon and Facebook), provided they can live without strict consistency, and possibly for smaller applications where consistency is not at all crucial, and where dealing with eventual-consistency bears absolutely no additional development complexity. For most enterprise applications, CA(=CP) solutions should certainly be considered first.
Not surprisingly, the first well-known AP data-stores, Dynamo and Cassandra (which is largely based on Dynamo), were created by Amazon and Facebook. They were the first to face a problem big enough where CA was simply inadequate and AP’s eventual consistency was, luckily, acceptable. Amazon certainly tackled a new problem with an ingenious solution. But, again, if you’re not Amazon, Facebook, Twitter or LinkedIn, CA data-stores are probably a better fit.

Nati Shalom said in reply to pron…

Thanks i’ll add the two references to the reference section of this post.

Nati Shalom said in reply to pron…

And BTW, I’m having a hard time agreeing with the classification of CA vs CP. As Daniel Abadi noted here:


CP and CA are essentially identical. So in reality, there are only two types of systems: CP/CA and AP. I.e., if there is a partition, does the system give up availability or consistency? Having three letters in CAP and saying you can pick any two does nothing but confuse this point.



Nati, I agree with most of this, but are you setting up a strawman here? I don’t see how EC has any bearing. MongoDB is not in the family of databases that use EC as a strategy. It’s sharded, it performs delayed writes, it uses sync memory writes across nodes to emulate durability. Each shard’s master is the sole write master for its data. A read replica is elected in the event the master fails. It’s much closer to your diagram of a disk-backed memory grid – add a master lookup node, and you’ve drawn MongoDB. It’s not a system where you can just write to any node in the cluster and have the nodes make the data consistent later, which is what an EC strategy provides.

“Eventual consistency has no direct impact on write scalability.”

At worst that’s incorrect, at best it needs to be qualified. If you allow copies of a record to be concurrently updated then you have improved the write scalability of the system as there are no coordination bottlenecks on the write path. At the cost of consistency for readers. If you only allow a single copy of the record to exist and force processes to compete for access to that record, then scalability is reduced. And notably the cost of consistency for readers still has not been removed (reduced but not removed). As soon as you cache, this dilemma exists.

The outage issue seems more to do with internal detail, albeit architecturally significant detail – the way indexes and data are structured, how they get paged, the on-disk architecture. Clearly there’s an issue here, or the answer to the problem would be to buy more RAM, not change MongoDB to degrade and rebalance better. But as you’ve realized by mentioning UCS, you can’t easily just buy more RAM and put it into a public cloud machine; you are limited to buying more machines. So in that configuration, rebalancing is a must-have.

Nati Shalom said in reply to Dehora

Hi Dehora
Thanks for the detailed response – i appreciate it!

MongoDB is not in the family of databases that use EC as a strategy. It’s sharded, it performs delayed writes, it uses sync memory writes across nodes to emulate durability. Each shard’s master is the sole write master for its data. A read replica is elected in the event the master fails.

I was reading a series of posts on MongoDB’s consistency model – namely this

You seem to be right in saying that even though MongoDB comes with those knobs to control consistency, it comes with strong consistency as the default setup.

I’ll remove that point from my post.
Again thanks for pointing this out.

If you allow copies of a record to be concurrently updated then you have improved the write scalability of the system as there are no coordination bottlenecks on the write path. At the cost of consistency for readers.

For that specific scenario, can’t I just use optimistic locking on that single record?

Can you point to a scenario where parallel updates on the same record through multiple nodes couldn’t be addressed through more relaxed locking on the same node?

From what I’ve seen, in most cases where Eventual Consistency was brought up it was rarely used to address concurrent update contention on the same record, but rather as a general means to address write scalability through asynchronous writes.
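The optimistic locking mentioned here is essentially compare-and-swap on a record version. A minimal sketch (hypothetical names, not GigaSpaces’ or MongoDB’s API): each record carries a version, and an update succeeds only if the caller read the current version; a concurrent writer forces a retry instead of a lock wait.

```python
class VersionedRecord:
    """Minimal optimistic-concurrency sketch: an update succeeds only
    if the caller supplies the version it last read (compare-and-swap);
    otherwise a concurrent writer won and the caller must retry."""

    def __init__(self, value):
        self.value = value
        self.version = 0

    def read(self):
        return self.value, self.version

    def update(self, new_value, expected_version):
        if expected_version != self.version:
            return False            # stale read: concurrent writer won
        self.value = new_value
        self.version += 1
        return True
```

No coordination is needed on the read path, and contention is paid only by the losing writer, which is why it can handle per-record update races without relaxing consistency.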

To be more accurate, I think that I’ll change my wording to:

“Eventual consistency has little or no direct impact on write scalability”

The outage issue seems more to do with internal detail, albeit architecturally significant detail – the way indexes and data are structured, how they get paged

The point that I was trying to make was the same point that was brought up in the Stanford research. Putting even a large part of your data in cache is always going to lead to complexity and non-deterministic behavior. For example, if most of your queries happen to do things like max() or avg(), they will always end up going down to disk even if 90% of your set is in memory.
The impact of going to disk for just the 10% is not linear, i.e. it wouldn’t yield 10% degradation but 10 times more.

My point was that if your data needs to be served in-memory, then having a disk-based solution with an LRU cache as a front end is just going to make your application more complex. A better approach in this case is to put the entire set of data in memory and use the memory storage as the system of record, as with an In-Memory-Data-Grid.

The size of the data in question seemed to be within the range that could easily fit with that approach.

“I chose to name this post NoCAP specifically to illustrate the idea that you can achieve scalability without compromising on consistency at least not at the degree that many of the disk based NoSQL implementations imposes.”

“It is important to note that in this post i was referring mostly to the C in CAP and not CAP in its broad definition.”

“To be more accurate, I think I’ll change my wording to: Eventual consistency has little or no direct impact on write scalability.”

With all due respect, the main thing that is eventually consistent here is your post. I suggest using the term IDontUnderstandCAP rather than NoCAP (hint: if you decide to ignore partitioning in a solution, the CAP theorem continues to be valid).

I think the whole point comes from the fact that you’re confusing availability with partition tolerance. The fact that a request is routed to some other server when one server is unavailable is something that makes your distributed system highly available (and adds to the A of CAP). That is not partition tolerance.

Partition tolerance comes into play when, for example, the backbone between Europe and the US breaks: Amazon now has two systems (the two partitions, the European one and the US one) that keep going, but they are now inconsistent (there is no consistency on stock availability between the two systems, as they cannot communicate the updates caused by client actions). But eventually they will.

Dependence on synchronous communication means you’ll lose either availability or partition tolerance, depending on the strategy you take when the communication medium over which the synchronous communication runs breaks down.
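The Europe/US example can be sketched as two replicas that accept writes independently during the partition and reconcile when it heals. The `Replica` class and the last-write-wins merge policy are assumptions made purely for illustration, not a description of how Amazon actually reconciles:

```python
class Replica:
    """One side of a partitioned system. While partitioned it applies
    writes locally and logs them; when the partition heals, replicas
    exchange logs and converge via last-write-wins on timestamps
    (a deliberately simple merge policy)."""
    def __init__(self):
        self.stock = {}   # key -> (value, timestamp)
        self.log = []

    def write(self, key, value, ts):
        self.stock[key] = (value, ts)
        self.log.append((key, value, ts))

    def merge(self, other):
        # Keep whichever write for each key carries the newest timestamp.
        for key, value, ts in other.log:
            if key not in self.stock or ts > self.stock[key][1]:
                self.stock[key] = (value, ts)

eu, us = Replica(), Replica()
# Backbone down: each region updates stock independently and diverges.
eu.write("sku-42", 7, ts=1)
us.write("sku-42", 5, ts=2)
assert eu.stock["sku-42"] != us.stock["sku-42"]   # inconsistent while split

# Partition heals: exchange logs, both sides converge on the same value.
eu.merge(us)
us.merge(eu)
print(eu.stock["sku-42"] == us.stock["sku-42"])   # True
```

Both replicas stayed available throughout the split (the A and P of CAP); the price was the temporary inconsistency that the merge later repairs – which is exactly the "eventually" in eventual consistency.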


High Scalability – I, Cloud

Every time a technological innovation has spurred automation – since the time of Henry Ford right up to a minute ago – someone has claimed that machines will displace human beings. But the rainbow and unicorn dream attributed to business stakeholders everywhere, i.e. the elimination of IT, is just that – a dream. It isn’t realistic and in fact it’s downright silly to think that systems that only a few years ago were unable to automatically scale up and scale down will suddenly be able to perform the complex analysis required of IT to keep the business running.

The rare reports of the elimination of IT staff due to cloud computing and automation are highlighted in the news because they evoke visceral reactions in technologists everywhere and, to be honest, they get the click counts rising. But the jury remains out on this one and in fact many postulate that it is not a reduction in staff that will occur, but a transformation of staff, which may eliminate some old timey positions (think sysadmins) and create new ones requiring new skills (think devops).

I am, as you may have guessed, in the latter camp. IT needs to change, yes, but that change is unlikely to be the elimination of IT. Andi Mann (VP with CA Technologies) put it well when he conceded that staff reductions are always a possible outcome of better IT: “Yet with a sunk cost in training and skills, and the seemingly endless list of projects on most CIOs’ desks, is cutting staff numbers really a good outcome? How does reassignment and redeployment fit into this value too – better or worse?” It is that “seemingly endless list of projects” that makes a mass reduction in IT unlikely, along with the fact that systems are simply not ready to “take over” from human beings. Not yet.


Any business stakeholder who dreams of a data center without IT should ask themselves this question: could a machine do my job? Could it analyze data and, from all those disparate numbers and product names and locations, come up with a marketing plan? Could it see trends across disparate industries and, taking into consideration the season and economic conditions, determine what will be the next big seller?

Probably not, because such decisions require an analytical thought process that simply doesn’t exist in the world of technology today. The “intelligence” that exists in any system today is little more than a codified set of rules that were specified by – wait for it, wait for it – yes, a human being. It was a person who sat down and codified a set of basic rules for automatically responding to deviations in performance and capacity and specified what action should be taken. Without those basic rules the systems could not decide whether to turn themselves on or off, let alone make a decision as complex as where to direct any given application request.
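A codified rule of the kind described above often amounts to nothing more than a few lines of conditional logic. The thresholds, sample format, and action names here are hypothetical, invented only to make the point:

```python
# A human-authored scaling rule: pure conditional logic over data points.
# It has no understanding of *why* load changed; it only matches the
# pattern a person wrote down. Thresholds are hypothetical.
SCALE_UP_CPU = 0.80
SCALE_DOWN_CPU = 0.20

def decide(cpu_samples):
    """Return 'scale_up', 'scale_down', or 'hold' from recent CPU
    utilization samples (each between 0.0 and 1.0)."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > SCALE_UP_CPU:
        return "scale_up"
    if avg < SCALE_DOWN_CPU:
        return "scale_down"
    return "hold"

print(decide([0.91, 0.88, 0.95]))   # scale_up
print(decide([0.05, 0.11, 0.09]))   # scale_down
print(decide([0.50, 0.40, 0.60]))   # hold
```

Everything the "intelligent" system will ever do is already enumerated in those branches; any situation the author didn’t anticipate falls through to a default a human also chose.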

When you study communication theory you discover some very basic facts about the nature of learning and intelligence, and they come down to this: words, which are just data, have no intrinsic meaning. Words, numbers, data. These things carry no special value in and of themselves. It is only when they are seen and understood by a human being that they become valuable and become “information”. The same is true of the data that flies around a data center and upon which decisions are made: it’s just data, even to the systems, until it’s interpreted by a human being.

Not convinced? Shall we play a game? *


3.14159…
What is this number? Most of you probably answered, “pi” – the mathematical constant used in formulas involving circles. But it could just as easily represent the number of milliseconds it took for a packet to traverse a segment of the network, or the number of seconds it took for the first byte of a response to an HTTP request to arrive on a user’s desktop, or the average number of items purchased by storefront X in the month of August. It could be the run-rate in hundreds of thousands of dollars of a technology start up or perhaps it’s my youngest daughter’s GPA.


So which is it?


You need context in order to interpret what that number means. I’ll call that a point proven, but in case you’re not convinced, let’s dig a little deeper. Even after it’s interpreted in its proper context this number requires further analysis to become valuable. After all, if that’s my daughter’s GPA you don’t know whether that’s good or bad without knowing a lot more about her. Maybe she’s underachieving, maybe she’s overachieving. Maybe she’s seven years old and in high school and that’s amazing.

There’s just a lot more intelligence required to make sense out of a piece of data than we realize. The reaction taken to this data once it becomes information requires analysis; human analysis. Even if we could codify every rule and correlate all the data necessary to make sense out of a simple number, there’s still the fact that there are exceptions to every rule and there’s always something we didn’t consider that changes the equations.


I’m not talking about the customer-service-likes-to-interact-with-others kind of people skills, I’m talking about analytics and the ability to think through a problem or, even better, simply recognize one when it happens. The best example of the continuing need for such skills is the recent outage experienced by Facebook.


The 150-minute outage, during which the site was turned off completely, was the result of a single incorrect setting that produced a cascade of erroneous traffic, Facebook software engineering director Robert Johnson said in a posting to the site.


“Today we made a change to the persistent copy of a configuration value that was interpreted as invalid. This meant that every single client saw the invalid value and attempted to fix it. Because the fix involves making a query to a cluster of databases, that cluster was quickly overwhelmed by hundreds of thousands of queries a second,” Johnson said.

“To make matters worse, every time a client got an error attempting to query one of the databases, it interpreted it as an invalid value and deleted the corresponding cache key,” he added. “This meant that even after the original problem had been fixed, the stream of queries continued. As long as the databases failed to service some of the requests, they were causing even more requests to themselves. We had entered a feedback loop that didn’t allow the databases to recover.” [emphasis added]

– “Facebook outage due to internal errors, says company”, ZDNet UK (September 26, 2010)
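The feedback loop Johnson describes can be caricatured in a few lines. This toy model shows only the amplification dynamic, not Facebook’s actual system: the `ConfigStore` class, the capacity figure, and the client counts are all invented for illustration:

```python
class ConfigStore:
    """Toy model of the outage's feedback loop: a client that sees an
    invalid cached value deletes the cache key and queries the database;
    when the overloaded database errors out, the client treats the error
    as yet another invalid value and retries, generating more load."""
    DB_CAPACITY = 100   # queries the database can serve per tick (assumed)

    def __init__(self):
        self.cache = {"cfg": "INVALID"}   # the bad value has propagated
        self.db_queries = 0

    def client_read(self, pending):
        """One client's attempt; returns 1 if it will retry next tick."""
        if self.cache.get("cfg") != "VALID":
            self.cache.pop("cfg", None)   # "fix" the cache...
            self.db_queries += 1          # ...by hammering the database
            if pending > self.DB_CAPACITY:
                return 1                  # overload -> error -> retry
        return 0

store = ConfigStore()
pending = 1_000                 # clients that all saw the invalid value
for _ in range(5):              # five ticks of the storm
    retries = sum(store.client_read(pending) for _ in range(pending))
    pending = retries           # the errors regenerate the full load

print(store.db_queries)         # 5000 - the database never catches up
```

Because every error produces a retry, the load never drains below capacity on its own: the loop is stable until a human shuts the system down, exactly as happened.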

The first thing that comes to mind on reading this explanation is that if the configuration change had been made manually by a human being, it might have been followed by an error message, and all subsequent changes halted until it was determined why the system thought the setting was invalid. But the system, elegantly automated, propagated the erroneous setting, and as it cascaded through the system, which was automated in a way as to try to fix the problem on its own, it just made things worse and worse. At no point did the system even recognize that something was wrong; that took a human being. In the aforementioned post on the outage, Robert Johnson stated, “An automated system for verifying configuration values ended up causing much more damage than it fixed.”


In the end, the system needed to be shut down and restarted – a decision made by a human being, not a machine, because it was only when a human being looked at what was happening, when a human being evaluated the flow of data across systems and networks, that the source of the problem could be determined and, ultimately, resolved. The system saw nothing wrong because it was acting exactly the way it should; it followed its programming to the letter despite the fact that it was ultimately destroying the system it was supporting.

It was just acting on data because that’s all it can do; it cannot analyze and interpret that data into information that leads to the right action. Until it can, we don’t really need a “Three Laws of Cloud”, because the systems are not capable of performing the kind of analysis necessary to even recognize that their actions might be harming the very applications they were built to deliver (an adaptation of Asimov’s Second Law of Robotics).


Facebook’s system, like many of those being designed and developed in organizations around the world, is the codification of processes. It is the digitized orchestration of many disparate tasks that can be performed individually and that, as a whole, result in a specific work-flow execution. Its purpose is to minimize the amount of manual intervention required while maximizing the efficiency of such processes. The processes aren’t new, but their form is. They are chunks of conditional logic that take as parameters one or more pieces of data and act upon that data. That’s it. There’s no intelligence, no analysis, no gut instinct to guide the system into making a choice other than the ones codified in its scripts and daemons and services.

History teaches us that assembly line technologies – which are as close a real-world analogy to automation and IT as we’re likely to get – do not reduce the number of human beings required to monitor, manage, and improve the processes codified to achieve such automation. Instead, they free human beings to do what they are best at: analyzing, innovating and finding new, more efficient ways to do what we’ve always done. They allow us to do things faster and, eventually, commoditize the output such that we can focus on leveraging those processes to build better and more awesome versions of the output.

What automation of “IT” does is create new opportunities for IT; it does not erase the need for it. Until our systems are able to analyze and interpret data such that it becomes information, and then act on that information in ways that may be “outside the existing ruleset”, IT – and more specifically the people that comprise IT – will not only be needed, they will be necessary to the continued growth and evolution not only of IT but of the business.


…it was Ransom E. Olds and his Olds Motor Vehicle Company (later known as Oldsmobile) who would dominate this era of automobile production. Its large scale production line was running in 1902. Within a year, Cadillac (formed from the Henry Ford Company), Winton, and Ford were producing cars in the thousands.

— Wikipedia, History of the Automobile 

Notice that “large scale” one hundred years ago meant “in the thousands.” In 2009 alone 61 million cars were produced (Wikipedia, Automotive Industry). And you can bet that there are more people employed today globally in the automotive manufacturing business than there were a century ago. People are still an integral part of that process and, as the technology has become more complex and sophisticated, they have become even more important than ever. The same will be true with automation and cloud computing; as the technology matures and becomes more sophisticated and complex, people will be as essential a part of the equation as when they had to manually enter the commands themselves. They will be able to recognize when processes are inefficient, when they could be improved or applied elsewhere. They will be able to take the time to build out systems that take on the burden of mundane tasks, which is what we’ve always relied upon machines to do.

What will be new and should be exciting is that the people involved will actually be freed to act like people rather than machines. And if we’re lucky, that means that the business stakeholders will stop treating them as though they’re machines and start leveraging their people skills instead. 

* You get 20 geek points if you recognized that question as one from the movie “WarGames”, in which an “intelligent” computer system is unable to differentiate between a “game” and “reality” and nearly starts World War III by launching nuclear missiles. Keep track of those points; some day they might be worth something, like a t-shirt.