Made in USA: Enterprise Application Services
by Abdullah Jibaly
Cloud computing, also known as grid computing or Platform as a Service (PaaS), is the next step in the concept of utility computing first envisioned by John McCarthy more than four decades ago. It is also the outcome of the natural evolution of the Web, and it heavily relies on the best practices, frameworks, and standards driving Web 2.0. Examples include Representational State Transfer (REST), dynamically typed/scripting languages, social networks, server-side scriptability, stateless client-server communication, and so on. What’s interesting is that many technologies first introduced in the mainframe era are now coming back, albeit in different guises. For example, a concept once called timesharing has morphed into virtualization of distributed resources. Here’s another example: post-relational databases are gaining new legs in the utility model where scalability is a critical factor.
Amazon largely started this trend with the introduction of its Elastic Compute Cloud (EC2) and Simple Storage Service (S3). With this bold move, Amazon instantly allowed developers to push their apps to the cloud with enormous scalability in terms of resources, availability, and space. Google introduced the Google App Engine in April 2008, opening up its cloud computing infrastructure to developers everywhere. It went a step further by providing a development environment on the developer's desktop that mimics all the functionality of the real cloud. Other companies, such as Morph, Aptana, and Joyent, have introduced similar offerings.
Developers demand freedom, forcing these services to rely on open standards and software stacks instead of being tied to one service's proprietary infrastructure. So, a common theme among all these services is that there is no vendor lock-in, and the platform is usually based on open source frameworks. To differentiate their value propositions, cloud vendors offer additional services. As mentioned earlier, Amazon's cloud gives free access (no bandwidth cost) to its S3 service. Google's cloud offers access to many of the features available on Google's infrastructure through its published APIs, such as user management, imaging, distributed caching, and so on. These services are becoming the main attractions of one cloud over another, as choosing the right cloud may save a lot of development time and enhance the app's scalability based on your requirements. On the flip side, using specialized services may create the very vendor lock-in that developers thought they were eliminating.
These new cloud computing platforms are also opening up a new business area for third-party vendors. These vendors offer additional services, service level agreements, and wrappers around the underlying platform to facilitate more efficient or effective software development or deployment. Some vendors are also trying to provide their own integration layer with multiple clouds to provide redundancy across clouds; that is, if a particular cloud goes offline, applications can be rerouted to a different cloud without disruption of service.
Here is a recap of cloud computing's key properties.
Resources are physically stored in multiple facilities and geographically separate areas. Application code and data are spread across the distributed network, so the failure of any one resource should not cause a hosted application to fail. The same rule applies to application data, which is usually stored in a distributed file system such as Sun Microsystems' ZFS. Distributed databases are an additional abstraction that comes with a high-level interface like SQL.
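The replication idea behind this can be sketched in a few lines. The following is a deliberately simplified illustration, not any vendor's actual placement algorithm: a key's data is assigned to several distinct nodes so that the loss of any single node never loses the data.

```python
import hashlib

def assign_replicas(key, nodes, replication=3):
    """Pick `replication` distinct nodes for a key by hashing it.

    A toy placement rule for illustration only; real distributed file
    systems use far more sophisticated schemes (consistent hashing,
    rack awareness, and so on).
    """
    digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
    start = digest % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(replication)]

nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]
replicas = assign_replicas("user:42/profile", nodes)
# Three distinct nodes hold copies, so any single failure leaves two.
```

The hypothetical key and node names are invented for the example; the point is only that placement is deterministic and spread across machines.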
Load balancing is a primary feature of grid-based computing and is therefore inherited by cloud-based platforms. Energy conservation and resource consumption are not always a focal point when discussing cloud computing; however, with proper load balancing in place, resource consumption can be kept to a minimum. This not only serves to keep costs low and enterprises "greener", it also puts less stress on the circuits of each individual box, potentially making them last longer. Load balancing also enables other important features such as scalability.
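In its simplest form, load balancing can be as plain as rotating incoming requests across a pool of servers. The sketch below (class and server names are hypothetical) shows the round-robin strategy; production balancers layer health checks and weighting on top of this idea.

```python
import itertools

class RoundRobinBalancer:
    """Minimal illustration of load balancing: requests are handed to
    servers in rotation so no single box carries the whole load."""

    def __init__(self, servers):
        self._cycle = itertools.cycle(servers)

    def next_server(self):
        return next(self._cycle)

balancer = RoundRobinBalancer(["web-1", "web-2", "web-3"])
assignments = [balancer.next_server() for _ in range(6)]
# Each of the three servers receives exactly two of the six requests.
```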
By definition, data, code, and other sensitive information are spread throughout a wide area in a cloud platform, making security an extremely important issue. Access must be granted only to an application's authorized users. This is addressed in several different ways depending on the platform, but should include components at the file system layer, virtualization, and logical database partitioning. The end result is that data obtained manually from any one server, whether through direct hardware access or through an unauthorized connection, is meaningless without being tied to additional data encrypted in the cloud.
Stan Tigrett (CTO, Rapid Reporting Verification Company) makes this point: many organizations cannot use cloud-based services unless they can comply with auditing standards such as SAS 70 certification. Although this task is technically feasible, it adds overhead to these platforms and potentially adds complexity.
Scalability, also known as elasticity, refers to the capability to add more resources on demand. It is especially important for public-facing web applications, as traffic can grow exponentially in a very short period of time (e.g., the Slashdot or Digg effect). With traditional servers this becomes an IT nightmare as administrators struggle to add more servers and keep the load balanced with the original ones. With cloud computing, the infrastructure is already in place, and the system can create new virtual images or replicate data across more instances to keep up with the traffic. In discussions with Robert Johnson (CEO, TeamSupport), he notes that cloud storage is a big factor in providing scalable storage, and that many in the SaaS industry have turned to cloud-based storage to meet the exponential growth of storage demands.
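The elasticity decision itself is simple arithmetic. A purely illustrative scaling rule (the function name and capacity figures are made up for this sketch) might run just enough instances to cover current traffic:

```python
import math

def desired_instances(requests_per_second, capacity_per_instance, minimum=1):
    """Toy elasticity rule: enough instances to cover the current load,
    never fewer than `minimum`. Real autoscalers add cooldowns,
    headroom, and smoothing on top of this."""
    return max(minimum, math.ceil(requests_per_second / capacity_per_instance))

# A quiet day at 80 req/s, with each instance handling 100 req/s,
# needs 1 instance; a Slashdot-style spike to 4,500 req/s needs 45.
```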
In traditional application hosting and enterprise data centers, the cost of the service is proportional to the capacity reserved, whether it's a certain number of machines, storage, or bandwidth; whether all the allocated resources are actually consumed during a billing cycle is irrelevant. With cloud-based platforms, however, the cost is proportional to the actual resources used. This allows a firm to start up with a limited investment in resources and then expand its expenditure in proportion to the resources consumed.
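The difference between the two billing models reduces to which quantity the price multiplies. A minimal sketch, with entirely hypothetical prices in cents per server-hour:

```python
def reserved_cost(allocated_hours, cents_per_hour):
    # Traditional hosting: billed for the full reserved capacity,
    # whether or not it was consumed.
    return allocated_hours * cents_per_hour

def pay_per_use_cost(used_hours, cents_per_hour):
    # Cloud model: billed only for what was actually consumed.
    return used_hours * cents_per_hour

# A firm reserves 720 server-hours for the month but uses only 200.
# At a made-up rate of 10 cents/hour, reserved billing charges 7200
# cents while pay-per-use charges 2000.
```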
Although cloud computing and PaaS can sometimes be used interchangeably, there are distinct implications that come with the PaaS terminology. These additional properties usually build on the basic cloud layer and offer additional services and ease of use benefits. This may include developer Application Programming Interfaces (APIs), Integrated Development Environments (IDEs), and reporting tools. An example of a PaaS-based IDE is Salesforce’s VisualForce. Many of these additional services directly compete with traditional enterprise stacks (WebSphere, WebLogic, .NET) whose licensing models cannot handle the pay-as-you-go model that these new platforms are successfully pushing. In fact, more and more cloud computing platforms are standardizing on open source technologies to avoid extra licensing costs, making the service cheaper.
Gartner Group’s Magic Quadrant for Enterprise Application Server (EAS) and middleware vendors shows how companies measure up when comparing their success of execution to their overall vision. Two interesting trends emerge from the diagram below: First, open source leaders are starting to appear and even compete with their commercial counterparts. Second, cloud-based vendors are touted as among the most visionary companies. In fact, Salesforce.com, with its No Software mentality, is going head-to-head with the big players with a software stack based exclusively on cloud computing. This approach is not only proving successful, judging by the number of ISVs and enterprises jumping on board; it is also forcing the big players to reevaluate their licensing models if they are to survive in the new pay-as-you-go wave of cloud computing.
An additional benefit of going down the open source path is that the customer is not tied to proprietary or non-standard architectures that make it difficult to move to another platform or host an application on multiple platforms for redundancy. This has been an issue with Google's BigTable implementation, which is tied to Google's own proprietary infrastructure. Demand has been growing for Google to offer an alternative open source or commercial implementation that can be deployed to another service.
Another drawback caused by the distributed nature of cloud computing is that certain services, like relational databases, often do not work well, or need a lot of customization to work properly, in a distributed manner. Synchronizing updates and keeping data consistent becomes an overwhelming task and can require significant re-architecture of an application. This is driving third-party vendors to provide services that layer on top of first-tier cloud platforms and offer a consistent development platform that scales seamlessly without rewriting or custom database configuration. Morph (www.mor.ph) is an example of this type of service: it layers on top of Amazon EC2 and S3 but provides a Rails or Java environment that can access resources such as databases without having to consider that they are distributed. All of that is taken care of by the infrastructure layer, which is delivered in the form of 'cubes', logical units that can actually consist of many distributed machines in EC2.
Next we'll look at some of the technologies that mesh well with the cloud computing way of thinking. REST, rich clients, and server-side openness or scriptability are all areas driving developers forward on the new cloud computing platforms.
Representational State Transfer (REST) is a concept that encompasses the World Wide Web's architecture and its preferred way of modeling and interacting with services. The standard methods or verbs it defines on entities, such as GET, POST, PUT, and DELETE, map directly to relational-style thinking with the CRUD (Create, Retrieve, Update, Delete) concepts. More importantly, by standardizing on RESTful interfaces, developers can easily share and consume services from different providers, and providers can open up services on their platforms knowing that anyone following the simple guidelines defined by REST will be able to access and use them. In a way, this provides API conformity similar to Microsoft's .NET API on its Developer Network, which offers similar interfaces spanning multiple languages (C#, VB, C++, etc.). By glancing at the definition of a function, one has a general understanding of how to implement it in any particular language. Similarly, by looking at a RESTful interface I can be comfortable using it and expect it to provide the default actions, even though the representation might differ (in this case it could be HTML, XML, JSON, and so on).
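The verb-to-CRUD mapping can be made concrete with a small sketch. The in-memory store below is purely illustrative (the class and method names are invented for this example, not from any framework); each method stands in for the handler a RESTful service would attach to the corresponding HTTP verb on a resource.

```python
class ResourceStore:
    """Toy in-memory resource collection whose methods mirror the
    REST verb -> CRUD mapping: POST/Create, GET/Retrieve,
    PUT/Update, DELETE/Delete."""

    def __init__(self):
        self._items = {}
        self._next_id = 1

    def post(self, data):            # Create: new resource, new id
        rid = self._next_id
        self._next_id += 1
        self._items[rid] = data
        return rid

    def get(self, rid):              # Retrieve: fetch by id
        return self._items.get(rid)

    def put(self, rid, data):        # Update: replace the resource
        self._items[rid] = data

    def delete(self, rid):           # Delete: remove if present
        self._items.pop(rid, None)
```

A client following REST conventions can guess this entire interface from the resource name alone, which is exactly the conformity the paragraph above describes.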
If user-generated content is king in Web 2.0, then perhaps the next version of the web will be defined by user-generated functionality. In fact, one can argue that this is the main factor in the success of the latest social networks. Facebook, for example, opened its development platform up over a year ago and allowed any user to develop and distribute their own apps on its servers, causing a large number of popular applications to pop up and continue to drive traffic to the site. This can actually be thought of as a form of cloud computing, as Facebook and other similar networks will continue to add features available to developers until they basically become full-fledged cloud computing platforms.
Organizations over the last 10 years have been steadily improving their web infrastructure, data centers, and IT backbone, and many of today’s leading web-based companies can now offer the same proven infrastructure that they build and deploy on to every developer. This gives unprecedented capabilities and reach to someone building the next killer web or enterprise application at a very minimal cost. All of a sudden, with the right development team or software development partner, businesses can focus on their application and almost entirely forget about the hundreds of other things they would have had to consider in the past. Maintaining servers, making sure they’re constantly patched and secured, scaling, and even distributing your application can now be taken for granted, and you can pay attention to what actually matters — focusing on your application and your users.