Cloud Computing Explained by John Rhoton





Cloud Computing Explained provides an overview of cloud computing in an enterprise environment. There is a tremendous amount of enthusiasm around cloud-based solutions and services, as well as the cost savings and flexibility that they can provide. It is imperative that all senior technologists have a solid understanding of the ramifications of cloud computing, since its impact is likely to permeate the entire IT landscape. However, it is not trivial to introduce a fundamentally different service-delivery paradigm into an existing enterprise architecture. This book describes the benefits and challenges of cloud computing and then leads the reader through the process of assessing the suitability of a cloud-based approach for a given situation, calculating and justifying the investment that is required to transform the process or application, and then developing a solid design that considers the implementation as well as the ongoing operations and governance required to maintain the solution in a partially outsourced delivery model.


At the same time, location independence and high levels of resilience allow for an always-connected user experience. Simplified management: administration is simplified through automatic provisioning to meet scalability requirements, user self-service to expedite business processes, and programmatically accessible resources that facilitate integration into enterprise management frameworks.

Affordable resources: the cost of these resources is dramatically reduced for two reasons. There is no requirement for fixed, up-front purchases, and the economies of scale of the service providers allow them to optimize their cost structure with commodity hardware and fine-tuned operational procedures that are not easily matched by most companies. Multitenancy: the cloud is used by many organizations (tenants) and includes mechanisms to protect and isolate each tenant from all others.

Pooling resources across customers is an important factor in achieving scalability and cost savings. Service-Level Management: cloud services typically offer a service-level definition that sets the customer's expectation of how robust the service will be.
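A service-level commitment is easier to weigh once it is converted into allowed downtime. The following sketch does that arithmetic; the availability percentages are illustrative examples, not any particular provider's actual SLA.

```python
# Convert an availability commitment into allowed downtime per year.
# The percentages below are illustrative, not any provider's actual SLA.
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours per year a service may be down while still meeting its SLA."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    hours = allowed_downtime_hours(pct)
    print(f"{pct}% availability allows about {hours:.2f} hours of downtime per year")
```

The difference between a 99% and a 99.9% commitment is roughly 80 hours of permitted outage per year, which is why the precision of the commitment matters for mission-critical applications.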

Some services may come with only minimal or non-existent commitments. They can still be considered cloud services but typically will not be trusted for mission-critical applications to the extent that services governed by more precise commitments might be. All of these attributes will be discussed in more detail in the chapters to come.

Related terms

In addition to the set of characteristics which may be associated with cloud computing, it is worth mentioning some other key technologies that are strongly interrelated with cloud computing.

Service-Oriented Architecture

A service-oriented architecture (SOA) decomposes the information technology landscape of an enterprise into unassociated and loosely coupled functional primitives called services. In contrast to the monolithic applications of the past, these services implement single actions and may be used by many different business applications. The business logic is then tasked with orchestrating the service objects, arranging them sequentially, selectively or iteratively so that they help to fulfill a business objective.
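The orchestration idea can be sketched in a few lines: each service implements a single action, and the business logic arranges them sequentially or selectively. All the service and function names here are invented for illustration.

```python
# Minimal sketch of SOA-style orchestration: single-purpose services
# composed by business logic. All names are hypothetical.

def check_inventory(item: str) -> bool:
    """One service, one action: is the item in stock?"""
    return item in {"widget", "gadget"}

def reserve_stock(item: str) -> str:
    """Another single-purpose service: reserve the item."""
    return f"reservation-for-{item}"

def notify_customer(message: str) -> None:
    """A third service: send a message to the customer."""
    print(message)

def place_order(item: str) -> bool:
    """Business logic orchestrates the services selectively and sequentially."""
    if not check_inventory(item):          # selective branch
        notify_customer(f"{item} unavailable")
        return False
    reservation = reserve_stock(item)      # sequential steps
    notify_customer(f"confirmed: {reservation}")
    return True

place_order("widget")
```

The same three primitives could be recombined by a different business process (say, a returns workflow) without rewriting them, which is the reuse argument made below.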

One of the greatest advantages of this approach is that it maximizes reusability of functionality and thereby reduces the effort needed to build new applications or modify existing programs. There is a high degree of commonality between cloud computing and SOA. An enterprise that uses a service-oriented architecture is better positioned to leverage cloud computing. Cloud computing may also drive increased attention to SOA.

However, the two are independent notions. The best way to think of the relation between them is that SOA is an architecture which is, by nature, technology independent. Cloud computing may be one means of implementing an SOA design.

Grid Computing

Grid computing refers to the use of many interconnected computers to solve a problem through highly parallel computation. These grids are often based on loosely coupled and heterogeneous systems which leverage geographically dispersed volunteer resources.

They are usually devoted to scientific problems which require a huge number of computer processing cycles or access to large amounts of data, but they have also been applied successfully to drug discovery, economic forecasting, seismic analysis and even financial modeling for quantitative trading, including risk management and derivative pricing. There may be some conceptual similarity between grid and cloud computing. Both involve large interconnected systems of computers, distribute their workload and blur the line between system usage and system ownership.

But it is important to also be aware of their distinctions. A grid may be transparent to its users and addresses a narrow problem domain. Cloud services are typically opaque and cover almost every class of informational problem, using a model where the functionality is decoupled from the user.
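The grid pattern described above, splitting a computation-heavy job into independent work units and fanning them out, can be sketched as follows. A local process pool stands in for the many distributed machines of a real grid, and the work function is a toy stand-in.

```python
# Toy sketch of the grid idea: an embarrassingly parallel job split into
# independent work units. A local process pool stands in for the many
# geographically dispersed machines of a real grid.
from concurrent.futures import ProcessPoolExecutor

def work_unit(n: int) -> int:
    """Stand-in for a compute-heavy task (e.g. one slice of a simulation)."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    chunks = [50_000] * 8                   # independent slices of the problem
    with ProcessPoolExecutor() as pool:     # fan out across workers
        partials = pool.map(work_unit, chunks)
    print(sum(partials))                    # combine the partial results
```

The key property is that the work units share no state, so adding more workers (or more volunteer machines, in a grid) scales the job almost linearly.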

Web 2.0

Darcy DiNucci first used the expression in 1999 to refer to radical changes in web design and aesthetics. Tim O'Reilly popularized a recast notion with his Web 2.0 Conference in 2004. The term has evolved to refer to the web as not only a static information source for browser access but a platform for web-based communities which facilitate user participation and collaboration.

There is no intrinsic connection between cloud computing and Web 2.0: cloud computing is a means of delivering services, while Web 2.0 characterizes a class of web applications. Nonetheless, it is worth observing that Web 2.0 applications place heavy demands on rapidly scalable, universally accessible infrastructure. These requirements, and the absence of legacy dependencies, make them optimally suited to cloud platforms.

History

Cloud computing represents an evolution and confluence of several trends.

But the first commercially viable offerings actually came from other sectors of the industry. Amazon was arguably the first company to offer an extensive and thorough set of cloud-based services. This may seem somewhat odd, since Amazon was not initially in the business of providing IT services. However, it had several other advantages that it was able to leverage effectively.

As most readers will recall, Amazon started as an on-line bookstore. Based on its success in the book market it diversified its product portfolio to include CDs, DVDs and other forms of digital media, eventually expanding into computer hardware and software, jewelry, groceries, apparel and even automotive parts and accessories.

A major change in business model involved the creation of merchant partnerships that leveraged Amazon's portal and large customer base. Amazon brokered the transactions for a fee, thereby developing a new ecosystem of partners and even competitors. As Amazon grew, it had to find ways to minimize its IT costs. Its business model [3] implied a very large online presence which was crucial to its success.

Without bricks-and-mortar retail outlets, its data center investments and operations became a significant portion of its cost structure. Amazon chose to minimize hardware expenditures by purchasing only commodity hardware parts and assembling them into a highly standardized framework that was able to guarantee the resilience it needed through extensive replication. In the course of building this infrastructure, its system designers had scrutinized the security required to ensure that the financial transactions and data of customers and retail partners could not be compromised.

The approach met their needs; however, it was not inherently optimized. Amazon and its partners shared a common burden, or boon, depending on how you look at it, with other retailers: a very high proportion of their sales are processed in the weeks leading up to Christmas.

In order to be able to guarantee computing capacity in December they needed to overprovision for the remainder of the year. This meant that a major share of their data center was idle eleven out of twelve months.
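The utilization arithmetic behind that observation is easy to make concrete. The numbers below are invented for illustration (a flat baseline with a single December peak), not Amazon's actual figures.

```python
# Invented numbers: demand peaks in December, so capacity must be sized
# for the peak and sits largely idle the rest of the year.
monthly_demand = [100] * 11 + [1000]   # capacity units needed, Jan..Dec
peak_capacity = max(monthly_demand)    # must provision for the worst case

# Average utilization of a fleet sized for the December peak.
utilization = sum(monthly_demand) / (peak_capacity * len(monthly_demand))
print(f"average utilization: {utilization:.1%}")
```

With these numbers, more than 80% of the provisioned capacity is wasted on average, which is exactly the idle capacity that a utility model can resell.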

The inefficiency contributed an unacceptable amount of unnecessary cost. Amazon decided to turn this weakness into an opportunity. When it launched Amazon Web Services in 2006, it effectively sold some of its idle capacity to other organizations who had computational requirements from January to November.

The proposition was attractive to customers, who were able to take advantage of a secure and reliable infrastructure at reasonable prices without making any long-term financial commitment. Google's story bears some resemblance to Amazon's. Google also hosts a huge worldwide infrastructure with many thousands of servers.

In order to satisfy hundreds of millions of search requests every day, it must process about one petabyte of user-generated data every hour (Vogelstein). However, its primary business model is fundamentally different in that it does not have a huge retail business which it can leverage to easily monetize its services.

Instead, Google's source of revenue is advertising, and its core competence is analytics. Through extensive data mining it is able to identify and classify user interests, and through its portals it can place advertising banners effectively. I will not go into detail on the different implications of these two approaches at this point, but it will be useful to keep them in mind as we discuss the various cloud platforms in the next chapters.

Also keep in mind that there are several other important cloud service providers, such as Salesforce.com. Each has its own history and business model. The two above are merely notable examples which I feel provide some insight into the history of cloud computing. That is not to say that it could not be completely different players who shape the future.

Innovation or Impact?

I hope I have clarified some of the mystery surrounding cloud computing and provided some insight into how the new service delivery model has evolved.

However, there is still one very important question that I have not addressed: What is so novel about cloud computing?


I've listed a number of attributes that characterize the technology. But which of those has made significant breakthroughs in the past few years to set the stage for a revolutionary new approach to computing? Timesharing (multi-tenancy) was popular in the 1960s. A utility pricing model is more recent but also preceded the current cloud boom. The same can be said for Internet-based service delivery, application hosting and outsourcing. Rather than answering the question, I would challenge whether it is essential for there to be a clear technological innovation that triggers a major disruption, even if that disruption is primarily technical in nature.

I will make my case by analogy. If you examine some other recent upheavals in the area of technology, it is similarly difficult to identify any particular novelty associated with them at the time of their breakthrough. Nonetheless, historians tend to agree that their influence on the economy and society was fundamental. More recently, the PC revolution saw a shift of computing from mainframes and minicomputers for large companies to desktops that small businesses and consumers could afford.

There were some advances in technology, in particular around miniaturization and environmental resilience, but these were arguably incremental in nature. The Internet is a particularly clear example of a technological transformation that caught the industry and large segments of business by surprise, but was not primarily a technical breakthrough.

Although Tim Berners-Lee's vision of the World Wide Web brought together many components in a creative and certainly compelling manner, the parts were invented long before the web made it into the mainstream. For example, the fourth version of the Internet Protocol (IPv4, RFC 791), which is the most common network protocol today, was specified in 1981. Even the notion of hypertext isn't new. Vannevar Bush wrote an article in the Atlantic Monthly in 1945 about a device, called a Memex, which created trails of linked and branching sets of pages.

Ted Nelson coined the term hypertext in 1965. And yet, the impact of these technological shifts is hard to overstate, and all those who contributed to their initial breakthrough deserve credit for their vision of what could be done with all of these parts.

The innovation of the Internet, from a technical perspective, lies in identifying the confluence of several technical trends and visualizing how these can combine with improving cost factors, a changing environment and evolving societal needs to create a virtuous circle that generates ever-increasing economies of scale and benefits from network effects.

Cloud computing is similar. It is difficult to isolate a single technological trigger. A number of incremental improvements in various areas such as fine-grained metering, flexible billing, virtualization, broadband, service-oriented architecture and service management have come together recently.

Combined, they enable new business models that can dramatically affect cost and cash-flow patterns and are therefore of great interest to the business, especially in a downturn.

This combined effect has also hit a critical threshold, achieving sufficient scale to dramatically reduce prices and thus leading to a virtuous cycle of benefits (cost reduction for customers, profits for providers), exponential growth, and ramifications that may ripple across many levels of our lives, including technology, business, economic, social and political dimensions.

Technology

The impact of cloud computing is probably most apparent in information technology, where we have seen the enablement of new service delivery models.

New platforms have become available to developers, and utility-priced infrastructure facilitates development, testing and deployment. This foundation can enable and accelerate other applications and technologies; many Web 2.0 services already build on it. There is significant potential to offload batch processing, analytics and compute-intensive desktop applications (Armbrust et al.). Mobile technologies may also receive support from the ubiquitous presence of cloud providers, their high uptime and reduced latency through distributed hosting.

Furthermore, a public cloud environment can reduce some of the security risks associated with mobile computing. It is possible to segment the data so that only non-sensitive data is stored in the cloud and accessible to a mobile device. The exposure from reduced endpoint security (for example, with regard to malware infections) is also minimized if a device is only connected to a public infrastructure. By maximizing service interconnectivity, cloud computing can also increase interoperability between disjoint technologies.

For example, HP's CloudPrint service links mobile devices with printing. As cloud computing establishes itself as a primary service delivery channel, it is likely to have a significant impact on the IT industry by stimulating requirements that support it.

Business

Cloud computing also has an undeniable impact on business strategy.

It overturns traditional models of financing IT expenditures by replacing capital expenditures with operational expenditures. Since the operational expenditures can be directly tied to production, fixed costs tend to vanish in comparison to variable costs thus greatly facilitating accounting transparency and reducing financial risk. The reduction in fixed costs also allows the company to become much more agile and aggressive in pursuing new revenue streams. Since resources can be elastically scaled up and down, they can take advantage of unanticipated high demand but are not burdened with excess costs when the market softens.

The outsourcing of IT infrastructure reduces the responsibilities and focus required in the area of IT. The released capacity can be leveraged to realign internal IT resources with the core competencies of the organization. Rather than investing energy and managerial commitment in industry-standard technologies, these can be redirected toward potential sources of sustainable competitive differentiation.

Another form of business impact may be that the high level of service standardization, which cloud computing entails, may blur traditional market segmentation. For example, the conventional distinction that separates small and medium businesses from enterprises, based on their levels of customization and requirements for sales and services support, may fade in favor of richer sets of options and combinations of service offerings.

As a result of the above, it is very likely that there will be market shifts as some companies leverage the benefits of cloud computing better than others. These may trigger a reshuffling of the competitive landscape, an event that may harbor both risk and opportunity but certainly must not be ignored.

Economic

The business impact may very well spread across the economy.

The reduction in capital costs lowers entry barriers for many industries, which can lead to enhanced competition. Knowledge workers could find themselves increasingly independent of large corporate infrastructure [4] (Carr). Through social productivity and crowdsourcing we may encounter an increasing amount of user-generated media, from blogs and Wikipedia to collaborative video (Live Music, Yair Landau).

The Internet can be a great leveling force, since it essentially removes the inherent advantages of investment capital and location. However, Nicholas Carr suggests there are also countervailing influences. At this stage it is difficult to predict which influences will predominate, but it is likely there will be some effects.

As the cloud unleashes new investment models, it is interesting to consider one of the driving forces of new technologies today: venture capital. Most successful startups have received a great deal of support from venture capitalists. Sand Hill Road in Menlo Park is famous for its startup investments in some of the most successful technology businesses today, ranging from Apple to Google.

On the one hand, small firms may be less reliant on external investors in order to get started. If someone has a PC and an Internet connection, they can conceivably start a billion-dollar business overnight. On the other hand, and more realistically, investors are able to target their financing much more effectively if they can remove an element of fixed costs.

Social

You can expect some demographic effects of cloud computing. By virtue of its location independence there may be increases in off-shoring (Friedman). There may also be an impact on employment as workers need to re-skill to focus on new technologies and business models.

Culturally, we are seeing an increasing invasion of privacy (Carr). While the individual impact of privacy intrusions is rarely severe, there are disturbing ramifications to their use on a large scale.


Carr alerts us to the potential dangers of a feedback loop which reinforces preferences and thereby threatens to increase societal polarization.

From a cognitive perspective, we can observe the blending of human intelligence with system and network intelligence (Carr). While there are certainly benefits from the derived information, it raises the question of our future in a world where it is easier to issue repeated ad hoc searches than to remember salient facts.

Political

Any force that has significant impact across society and the economy inevitably becomes the focus of politicians. There is an increasing number of regulations and compliance requirements that apply to the Internet and information technology. Many of these will also impact cloud computing.

At this point, cloud computing has triggered very little legislation of its own accord. However, given its far-reaching impact on pressing topics such as privacy and governance, there is no doubt it will become an object of intense legal scrutiny in the years to come.

Cloud Architecture

Physical clouds come in all shapes and sizes.

They vary in their position, orientation, texture and color. Cirrus clouds form at the highest altitudes. They are often transparent and tend toward shapes of strands and streaks. Stratus clouds are associated with a horizontal orientation and flat shape. Cumulus clouds are noted for their clear boundaries. They can develop into tall cumulonimbus clouds associated with thunderstorms and inclement weather. The metaphor quite aptly conveys some of the many variations we also find with cloud-like components, services and solutions.

In order to paint a complete and fair picture of cloud computing, we really need to analyze the structure of the offerings as well as the elements that combine to create a useful solution.

Stack

One characteristic aspect of cloud computing is a strong focus on service orientation. Rather than offering only packaged solutions that are installed monolithically on desktops and servers, or investing in single-purpose appliances, you need to decompose all the functionality that users require into primitives, which can be assembled as required.

In principle, this is a simple task, but it is difficult to aggregate the functionality in an optimal manner unless you can get a clear picture of all the services that are available. This is a lot easier if you can provide some structure and a model that illustrates the interrelationships between services. Amazon EC2, for example, is generally considered to be Infrastructure as a Service, Google App Engine a Platform as a Service, and Salesforce.com Software as a Service.

As is often the case with classification systems, the lines are not nearly as clear in reality as they may appear on a diagram. There are many services that do not fit neatly into one category or the other.

Over time, services may also drift between service types. For example, Amazon is constantly enhancing the EC2 offering in an effort to increase differentiation and add value. As the product matures, some may begin to question whether it wouldn't be more accurate to consider it a platform service.

Figure 2-1: Software, Platform and Infrastructure services

Nonetheless, it is easiest to begin with a conceptual distinction. There are two primary dimensions which constrain the offerings: the services differ according to their flexibility and degree of optimization (Figure 2-1).

Software services are typically highly standardized and tuned for efficiency. However, they can only accommodate minimal customization and extensions.

At the other extreme, infrastructure services can host almost any application but are not able to leverage the benefits of economy of scope as easily. Platform services represent a middle ground. They provide flexible frameworks with only a few constraints and are able to accommodate some degree of optimization.

Figure 2-2: SPI Model

The classification illustrates how very different these services can be and yet, at least [5] conceptually, each layer depends on the foundation below it (Figure 2-2). Platforms are built on infrastructure, and software services usually leverage some platform.

Figure 2-3: SPI Origins

In terms of the functionality provided at each of these layers, it may be revealing to look at some of the recent precursors of each (Figure 2-3).

SaaS is a refinement of the hosted applications once offered by application service providers. PaaS is a functional enhancement of the scripting capabilities offered by many web-hosting sites today. IaaS is a powerful evolution of colocation and managed hosting services available from large data centers and outsourcing service providers. The conceptual similarity of pre-cloud offerings often leads to the cynical observation that cloud computing is little more than a rebranding exercise. As we have already seen, there is some truth to the notion that the technical innovation is limited.

However, refined metering, billing and provisioning, coupled with attractive benefits of scale, do have a fundamental impact on how services are consumed with a cloud-based delivery model.

Figure 2-4: Extended Model

We will examine each of the layers in more detail in the next three chapters. But to give you an idea of what each represents, it's useful to take a look inside (Figure 2-4). Software services represent the actual applications that end users leverage to accomplish their business objectives (or personal objectives, in a consumer context).

There is a wide range of domains where you can find SaaS offerings. One of the most popular areas is customer relationship management (CRM). Desktop productivity, including electronic mail, is also very common, as are forms of collaboration such as conferencing or unified communications. But the list is endless, with services for billing, financials, legal, human resources, backup and recovery, and many other domains appearing regularly on the market.

Platforms represent frameworks and common functions that the applications can leverage so that they don't need to re-invent the wheel. The offerings often include programming language interpreters and compilers, development environments, and libraries with interfaces to frequently needed functions. There are also platform services that focus on specific components such as databases, identity management repositories or business intelligence systems and make this functionality available to application developers.

I have divided infrastructure services into three sublevels. I don't mean to imply that they are any more complex or diverse than platform or software services. In fact, they are probably more homogeneous and potentially even simpler than the higher tiers. However, they lend themselves well to further segmentation.

I suggest that most infrastructure services fall into three categories that build on each other. First, there are providers of simple co-location facilities. In the basic scenario the data-center owner rents out floor space and provides power, cooling and a network connection. The rack hardware may also be part of the service, but the owner is not involved in filling the space with the computers or appliances that the customers need.

The next conceptual level is to add hardware to the empty rack space. There are hosting services that will provide and install blade systems for computation and storage. The simplest options involve dedicated servers, internal networking and storage equipment operated by the customer.

There are also managed hosting providers who will take over the administration, monitoring and support of the systems. Very often this implies that they will install a virtualization layer that facilitates automated provisioning, resource management and orchestration while also enforcing consistency of configuration.

In some cases, they will leverage multitenancy in order to maximize resource utilization, but this is not strictly required.

Management Layers

In addition to the software and applications that run in the SPI model and support a cloud application in its core functions, there are also a number of challenges that both the enterprise and service provider need to address in order to successfully keep the solution going (Figure 2-5).

Figure 2-5: Implementation, Operation and Control

Implement

It is necessary to select and integrate all the components into a functioning solution. There is a large and ever-increasing number of cloud-based services and solutions on the market. It is no simple task to categorize and compare them.


And once that is done, it would be naïve to expect them all to work together seamlessly. The integration effort involves a careful selection of interfaces and configuration settings and may require additional connectors or custom software.

Operate

Once the solution has been brought online it is necessary to keep it running.

This means that you need to monitor it, troubleshoot it and support it. Since the service is unlikely to be completely static, you also need processes in place to provision new users, decommission old users, plan for capacity changes, track incidents and implement changes in the service.

Control

The operation of a complex set of services can be a difficult challenge. Some of the operational burden can be shifted to the service provider.

However, this doesn't completely obviate the need for overseeing the task. It is still necessary to ensure that service expectations are well defined and that they are validated on a continuous basis.

Standards and Interoperability

There are software offerings that cover all of the domains from the previous sections, ranging from the SPI layers to integration, operation and governance. One of the biggest challenges to cloud computing is the lack of standards that govern the format and implied functionality of its services.

The resultant lock-in creates risks for users related to the portability of solutions and interoperability between their service providers. The industry is well aware of the problem. Even though it may be in the short-term interests of some providers to guard their proprietary mechanisms, it is clear that cloud computing will not reach its full potential until some progress is made to address the lock-in problems. The problem is quite challenging, since it is not yet exactly clear which interfaces and formats need to be standardized and what functionality needs to be captured in the process.
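In the meantime, a common defensive tactic against lock-in is to isolate provider-specific calls behind a neutral interface, so that swapping providers touches only one adapter. A minimal sketch follows; both the interface and the in-memory adapter are hypothetical stand-ins, not any real provider's SDK.

```python
# Sketch of insulating application code from provider lock-in.
# The store classes are hypothetical stand-ins, not real SDK calls.
from abc import ABC, abstractmethod

class ObjectStore(ABC):
    """Neutral interface the application codes against."""
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryStore(ObjectStore):
    """Stand-in adapter; a real one would wrap a provider's storage API."""
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data
    def get(self, key: str) -> bytes:
        return self._blobs[key]

def archive_report(store: ObjectStore, report: bytes) -> None:
    """Application logic: no provider-specific code appears here."""
    store.put("reports/latest", report)

store = InMemoryStore()
archive_report(store, b"q4 figures")
print(store.get("reports/latest"))
```

Migrating to a different provider then means writing one new `ObjectStore` adapter rather than rewriting every caller, which limits (though does not eliminate) the lock-in risk described above.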

There is also some concern that standards will lead to a cost-focused trend toward commoditization that can potentially stifle future innovation.

Nonetheless, there is substantial activity on the standards front, which is at least an indication that vendors realize the importance of interoperability and portability and are willing to work together to move the technology forward. The Open Cloud Manifesto established a set of core principles in 2009 that several key vendors considered to be of highest priority. Even though the statement did not offer any specific guidance and was not endorsed by the most prominent cloud providers (e.g. Amazon, Microsoft or Google), it demonstrated the importance that the industry attaches to cloud standardization. Since then, several standards organizations have begun to tackle the problem of cloud computing from their own vantage points. The Object Management Group (OMG) is modeling deployment of applications and services on clouds for portability, interoperability and reuse. The Open Group Cloud Work Group is collaborating on standard models and frameworks aimed at eliminating vendor lock-in for enterprises.

They develop benchmarks and support reference implementations for cloud computing. Evidently, the amount of standardization effort reflects the general level of hype around cloud computing. While this is encouraging, it is also a cause for concern. A world with too many standards is only marginally better than one without any.

It is critical that the various organizations coordinate their effort to eliminate redundancy and ensure a complementary and unified result.

Private, Partner and Public Clouds

In the earliest definitions of cloud computing, the term refers to solutions where resources are dynamically provisioned over the Internet from an off-site third-party provider who shares resources and bills on a fine-grained utility computing basis. This computing model carries many inherent advantages in terms of cost and flexibility, but it also has some drawbacks in the areas of governance and security.

Many enterprises have looked at ways that they can leverage at least some of the benefits of cloud computing while minimizing the drawbacks by adopting only some of its aspects. These efforts have led to a restricted model of cloud computing, often designated as Private Cloud, in contrast to the fuller model, which by inference becomes a Public Cloud.

Private Cloud

The term Private Cloud is disputed in some circles, as many would argue that anything less than a full cloud model is not cloud computing at all but rather a simple extension of the current enterprise data center. Nonetheless, the term has become widespread, and it is useful to examine the enterprise options that fall into this category. In simple theoretical terms, a private cloud is one that leverages only some of the aspects of cloud computing (Table 2-1).

It is typically hosted on-premise, scales only into the hundreds or perhaps thousands of nodes, and is connected to the consuming organization primarily through private network links.

Since all applications and servers are shared within the corporation, the notion of multi-tenancy is minimized. From a business perspective, you typically also find that the applications primarily support the business but do not directly drive additional revenue. So the solutions are financial cost centers rather than revenue or profit centers.

Table 2-1: Private and Public Clouds

Common Essence

Given the disparity in descriptions between private and public clouds on topics that seem core to the notion of cloud computing, it is valid to question whether there is any common essence to the two models.

The most obvious area of intersection is around virtualization. Since virtualization enables higher degrees of automation and standardization it is a pivotal technology for many cloud implementations. Enterprises can certainly leverage many of its benefits without necessarily outsourcing their entire infrastructure or running it over the Internet.

Depending on the size of the organization, as well as its internal structure and financial reporting, there may also be other aspects of cloud computing that become relevant even in a deployment that is confined to a single company. A central IT department can provide services on demand and cross-charge the business on a utility basis just as easily as any external provider could.

The model would then be very similar to a public cloud with the business acting as the consumer and IT as the provider. At the same time, the sensitivity of the data may be easier to enforce and the controls would be internal.
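The utility cross-charge model described above can be sketched as simple metered billing; the unit rates and business units below are invented purely for illustration:

```python
def monthly_chargeback(compute_hours: float, storage_gb: float,
                       hourly_rate: float = 0.10,
                       gb_month_rate: float = 0.05) -> float:
    """Bill a business unit the way an external provider would:
    metered usage multiplied by a published unit rate."""
    return compute_hours * hourly_rate + storage_gb * gb_month_rate

# metered consumption reported by the internal IT department (illustrative)
usage = {
    "marketing": {"compute_hours": 720, "storage_gb": 50},
    "finance": {"compute_hours": 200, "storage_gb": 500},
}

bills = {unit: monthly_chargeback(**figures) for unit, figures in usage.items()}
```

The mechanics are identical whether the "provider" is internal IT or a public cloud; only the rate card and the direction of the invoice differ.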

Cloud Continuum

A black-and-white distinction between private and public cloud computing may therefore not be realistic in all cases. In addition to the ambiguity in sourcing options mentioned above, other criteria are not binary. For example, there can be many different levels of multi-tenancy, covered in more detail in a later chapter. There are also many different options an enterprise can choose for security administration, channel marketing, integration, completion and billing.

Some of these may share more similarity with conventional public cloud models while others may reflect a continuation of historic enterprise architectures. What is important is that enterprises must select a combination which not only meets their current requirements in an optimal way but also offers a flexible path that will give them the ability to tailor the options as their requirements and the underlying technologies change over time. In the short term, many corporations will want to adopt a course that minimizes their risk and only barely departs from an internal infrastructure.

However, as cloud computing matures they will want the ability to leverage increasing benefits without redesigning their solutions.

Partner Clouds

For the sake of completeness, it is also important to mention that there are more hosting options than internal versus public.

It is not imperative that a private cloud be operated and hosted by the consuming organization itself. Other possibilities include co-location of servers in an external data center with, or without, managed hosting services. Outsourcing introduces another dimension: outsourcers can manage these services in their own facilities or on the customer's premises. In some ways, you can consider these partner clouds as another point on the continuum between private and public clouds.

Large outsourcers are able to pass on some of their benefits of economy of scale, standardization, specialization and their point in the experience curve.

And yet they offer a degree of protection and data isolation that is not common in public clouds.

Vertical Clouds

In addition to the horizontal applications and platforms which can be used by consumers, professionals and businesses across all industries, there is also increasing talk about vertical solutions that address the needs of companies operating in specific sectors, such as transportation, hospitality or health care. The rationale behind these efforts is that a large part of even the most industry-specific IT solutions fails to generate a sustainable competitive advantage.

Reservations systems, loyalty programs and logistics software are easily replicated by competitors and therefore represent wasted intellectual and administrative effort that could be channeled much more effectively into core competencies.

A much more productive approach would be to share and aggregate best practices and collectively translate them into an optimized infrastructure which all partners can leverage, thereby driving down costs and increasing productivity across the industry. Needless to say, there are many challenges in agreeing on which approaches to use and in financially recompensing those who share their intellectual property.

However, if completed effectively, it can be an example of a rising tide that lifts all boats. One area where there has been significant progress is the development of a government cloud. Terremark has opened a cloud-computing facility that caters specifically to US government customers and addresses some of their common requirements around security and reliability. It offers extensive physical security, ranging from elaborate surveillance, including bomb-sniffing dogs, to steel mesh under the data center floors.

As long as the governments involved belong to the same political entity there is less need for elaborate financial incentives.

And the concerns around multi-tenancy may also be somewhat reduced compared to enterprises sharing infrastructure with their direct competitors.

Multisourcing

The categorization of cloud providers in the previous section into private, partner and public is a great simplification. Not only is there no clear boundary between the three delivery models, but it is very likely that customers will not confine themselves to any given approach.

Instead, you can expect to see a wide variety of hybrid constellations (Figure 2-6).

Figure 2-6: Multisourcing options

The final imperative is to determine the business outcomes that must be achieved and then to analyze and compare the various options for accomplishing them. In some cases, they may be fulfilled with public cloud services securely and cost-effectively. In others, it will be necessary to create internal services or to partner with outsourcing organizations in order to find the best solution.

In some cases there may be legitimate reasons to work with multiple cloud providers that deliver the same functionality. The term cloudbursting characterizes a popular approach: an internal service that can extend into a public cloud when a burst in demand causes it to exceed internal capacity.
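The cloudbursting decision itself is a simple threshold, which can be sketched as follows; the capacity figure and placement labels are invented for illustration:

```python
def place_workload(active_instances: int, internal_capacity: int) -> str:
    """Keep work on the private cloud until capacity is reached,
    then burst the overflow to a public provider."""
    return "private" if active_instances < internal_capacity else "public"

# simulate a demand spike against an internal capacity of 100 instances
placements = [place_workload(n, 100) for n in range(90, 110)]
```

In practice the trigger would come from monitoring metrics and the "public" branch would call a provider's provisioning API, but the routing logic reduces to this comparison.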

Other reasons might be to improve business continuity through redundancy or to facilitate disaster recovery by replicating data and processes.

Topology

Over the past half century we've seen the typical computer topology shift from the mainframe in the 1960s to client-server computing in the 1980s. The 1990s popularized the notion of N-tier architectures, which segregate the client from the business logic and both from the information and database layer (Figure 2-7).

Figure 2-7: Connectivity Evolution

We are now seeing an increase in mesh connectivity. For example, peer-to-peer networks leverage the fact that every system on the network can communicate with the others. Data processing and storage may be shared between systems in a dynamic manner as required.

Cloud computing can facilitate any of these models but is most closely associated with a mesh topology. In particular it is very important to consider the client device as part of the complete cloud computing topology. Desktop virtualization can have a fundamental impact on cloud computing and can also leverage cloud services to provide content on the terminal.

However, Moore's law continues to apply. We may have reached limits in transistor density, but processing power is still advancing with multi-core processors. Therefore it is not realistic to think that cloud equates to thin-client computing. Some functionality is simply easier to process locally, while other functions, particularly those that are collaborative in nature, may be more suitable for the cloud.

The key challenge ahead will be the effective synchronization and blending of these two operating modes. We may also see more potential for hybrid applications.

Content Delivery Model

One way to look at topology is to trace the content flow. There are many possible options for delivering content on the Internet.

These do not necessarily change through cloud computing, but it is important to be aware of all the actors and their respective roles, since they are all very much a part of cloud offerings too.

Figure 2-8: Content Delivery Model

There are at least three different players in many solutions.

The entity that creates the content or provides the ultimate functionality may be hidden from the user.

It is inherent in a service-oriented architecture that the end user not be explicitly cognizant of the individual component services. Instead, the user interacts primarily with a content aggregator who bundles the services and content into a form that adds value to the user.

The third set of players consists of the content delivery networks. These network providers have extensive global presence and very good local connectivity. They can replicate static content and therefore make it available to end users more quickly, thereby improving the user experience and off-loading the hosting requirements from the aggregator.

Value Chain

Although there is some correlation, the path of content delivery is quite distinct from the payment and funding model (Figure 2-9).

Figure 2-9: Payment ecosystem

The simple part of the payment model is the flow from the aggregator to the delivery network and content creator.

This is intuitive and merely reflects a means of profit sharing toward those who facilitate the end-to-end service. The source of the funding model is the bigger challenge for all investors who would like to capitalize on the excitement around cloud computing.

There are at least two ways. In a case where the value of the service is explicitly recognized by the end user, there is the opportunity to charge the user. Most users have an aversion to entering their credit card details on the Internet unless it is absolutely required. This typically means that the user must be convinced the content has a value that covers both the transaction costs (including risk and effort) and the actual billed costs. Services from Amazon and Salesforce.com fall into this category. For small items, the transaction costs may actually exceed the perceived value.

This makes direct billing virtually impossible. However, Google has popularized another way to monetize this value: advertising. This business model means that an advertiser pays the content provider in exchange for advertising exposure to the end user.
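The contrast between the two funding models reduces to simple arithmetic; all prices, costs and traffic figures below are invented for illustration:

```python
def direct_billing_margin(item_price: float, transaction_cost: float) -> float:
    """Margin on a single billed item; negative when the payment
    overhead exceeds what the content is worth."""
    return item_price - transaction_cost

def ad_revenue(impressions: int, cpm: float) -> float:
    """Advertiser pays the content provider per thousand impressions (CPM)."""
    return impressions / 1000 * cpm

# a $0.10 item with $0.30 of payment overhead loses money on every sale,
# while serving the same content 50,000 times at a $2.00 CPM earns $100
small_item = direct_billing_margin(0.10, 0.30)
ad_funded = ad_revenue(50_000, 2.00)
```

This is why low-value, high-volume content gravitates toward advertising, while direct billing works only where the perceived value comfortably exceeds the transaction overhead.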

Ecosystem

In reality, the roles of the value chain are more complex and diverse than just described. An ecosystem ties together a fragmented set of cloud computing vendors.

There are two key characteristics of the cloud computing ecosystem that you should keep in mind as you look at different offerings. First, it is extremely large. The hype surrounding cloud computing, combined with the lack of entry barriers for many functions, has made the sector extremely attractive in an economic downturn.

There are literally hundreds of vendors who consider some of their products and services to relate to cloud computing. Second, it is very dynamic. This means there are many players entering the market, and some are exiting. But it also means that many are dynamically reshaping their offerings on a frequent basis, often extending into other cloud areas.

Even the delivery mechanisms themselves are changing as the technologies evolve and new functionality becomes available. As a result, it is very difficult to paint an accurate picture of the ecosystem which will have any degree of durability or completeness to it.

The market is changing, and I can only provide a glimpse and high-level overview of what it looks like at this point in time.

Total Cloud

There are many parts to cloud computing, and each of these components can be technically delivered in many different ways using a variety of different business models.

A direct outcome of this diversity is that we can expect the effects of the technology to cross many boundaries of influence. A less obvious form of impact is that each of the functions needed to implement cloud computing can, itself, be delivered as a service. Slogans such as Anything as a Service or Everything as a Service are becoming more popular to indicate that we not only have software, platforms and infrastructure as services, but also components of these such as databases, storage and security, which can be offered on-demand and priced on a utility basis.

On top of these, there are services for integrating, managing and governing Internet solutions. There are also emerging services for printing, information management, business intelligence and a variety of other areas. It is unclear where this path will ultimately lead and whether all computational assets will eventually be owned by a few service providers, leveraged by end users only if and when they need them.

But the trend is certainly in the direction of all functionality that is available also being accessible on-demand, over the Internet, and priced to reflect the actual use and value to the customer.

Open Source and Cloud Computing

Richard Stallman, a well-known proponent of open source software, attracted attention with his sharp criticism of cloud computing. His concerns around loss of control and proprietary lock-in may be legitimate. Nonetheless, it is also interesting to observe that cloud computing leverages open source in many ways. Self-supported Linux is by far the most popular operating system for infrastructure services due to the absence of license costs.

Cloud providers often use Xen and KVM for virtualization to minimize their marginal costs as they scale up. Distributed cloud frameworks, such as Hadoop, are usually open source to maximize interoperability and adoption. Web-based APIs also make the client device less relevant. Even though some synchronization will always be useful, the value proposition of thin clients increases as the processing power and storage shifts to the back end. Time will tell whether enterprises and consumers take advantage of this shift to reduce their desktop license fees by adopting Linux, Google Android or other open-source clients.

Many SaaS solutions leverage open-source software for obvious cost and licensing reasons. In some ways, SaaS is an ideal monetization model for open source since it facilitates a controlled revenue stream without requiring any proprietary components.

In summary, there is the potential that cloud computing may act as a catalyst for open source.

Infrastructure as a Service

In the beginning there was the Data Center, at least as far back in time as cloud computing goes. Data centers evolved from company computer rooms to house the servers that became necessary as client-server computing became popular.

Now they have become a critical part of many businesses and represent the technical core of the IT department. The TIA-942 Data Center Standards Overview lists four tiers of requirements that can be used to categorize data centers, ranging from a simple computer room to fully redundant and compartmentalized infrastructure that hosts mission-critical information systems.

Infrastructure as a Service (IaaS) is the simplest of cloud offerings. It is an evolution of virtual private server offerings and merely provides a mechanism to take advantage of hardware and other physical resources without any capital investment or physical administrative requirements.

The benefit of services at this level is that there are very few limitations on the consumer. There may be challenges including or interfacing with dedicated hardware but almost any software application can run in an IaaS context.

The rest of this chapter looks at Infrastructure as a Service. We will first look at what is involved in providing infrastructure as a service and then explore the types of offerings that are available today.

Figure 3-1: Infrastructure Stack

In order to understand infrastructure services, it is useful to first take a look behind the scenes at how an Infrastructure Service provider operates and what it requires in order to build its services.

After all, the tasks and the challenges of the provider are directly related to the benefit of the customer, who is able to outsource the responsibilities.

Co-location

This section describes a co-location service. Note that services at this level are available from many data centers.

It would be stretching the notion of cloud computing beyond my comfort level to call them cloud services. However, they are an essential ingredient to the infrastructure services described in this chapter.

At the lowest level, it is necessary to have a piece of real estate. Choice locations are often re-purposed warehouses or old factories that already have reliable electrical power, but it is becoming increasingly common to take a barren plot of land and place container-based data center modules on it.

Some of the top cloud service providers scout the globe in search of cheap, large real estate with optimal access to critical infrastructure, such as electricity and network connectivity. Power and cooling are critical to the functional continuity of the data center.

Often drawing multiple megawatts, they can represent over a third of the entire costs, so designing them efficiently is indispensable. More importantly, an outage in either one can disrupt the entire operation of the facility and cause serious damage to the equipment. It is very important for the data center to have access to multiple power sources. Points of intersection between the electrical grids of regional electricity providers are particularly attractive, since they facilitate a degree of redundancy should one utility company suffer a widespread power outage.

In any case, it is necessary to have uninterruptible power supplies or backup diesel generators that can keep the vital functions of the data center going over an extended period of time. Another environmental requirement is an efficient cooling system. Over half of the power costs of a data center are often dedicated to cooling. As costs have sky-rocketed, designers have sought more efficient alternatives: most recent cooling designs leverage outside air during the colder months of the year, and subterranean placement of the data center can lead to better insulation in some parts of the world.
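These proportions are commonly tracked with the Power Usage Effectiveness (PUE) metric: total facility power divided by the power that actually reaches the IT equipment. The figures below are illustrative, not measurements from any real facility:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness; 1.0 would mean every watt drawn
    by the facility reaches the IT equipment."""
    return total_facility_kw / it_equipment_kw

def cooling_share(cooling_kw: float, total_facility_kw: float) -> float:
    """Fraction of the facility's total draw spent on cooling."""
    return cooling_kw / total_facility_kw

# a facility drawing 2,200 kW to deliver 1,000 kW to servers,
# with 1,150 kW of the total going to cooling
facility_pue = pue(2200, 1000)
cooling_fraction = cooling_share(1150, 2200)
```

A PUE above 2.0 means the facility spends more on overhead (mostly cooling) than on computation, which is exactly the economics that drive free-air and subterranean designs.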


The interior of the data center is often designed to optimize air flow, for example through alternating orientation of rows of racks, targeted vents using sensors and log data, and plenum spaces with air circulation underneath the floor. One other area of external reliance is the dependency on network connectivity.

Ideally, the data center will have links to multiple network providers. These links don't only need to be virtually distinct; they need to be physically distinct. In other words, it is common for internet service providers to rent the physical lines from another operator. There may be five DSL providers to your home but only one copper wire. If you were looking for resilience, then having five contracts would not help you if someone cuts the cable in front of your house. Whoever owns and operates the data center must also come up with an internal wiring plan that distributes power and routes network access across the entire floor, wherever computer hardware or other electrical infrastructure is likely to be placed.

Other environmental considerations include fire protection systems, and procedures to cope with flooding, earthquakes and other natural disasters. Security considerations include physical perimeter protection, ranging from electrical fences to surveillance systems.

Hardware

The next step of an infrastructure provider is to fill the rented or owned data center space with hardware. These are typically organized in rows of servers mounted in 19-inch rack cabinets. The cabinets are designed according to the Electronic Industries Alliance EIA-310-D specification, which designates dimensions, hole spacings, rack openings and other physical requirements.

Each cabinet accommodates modules which are 19 inches (482.6 mm) wide and multiples of 1U (1.75 inches) high. The challenge is to maximize the number of servers, storage units and network appliances that can be accommodated in the cabinet. Most racks are available in 42U form (42 x 1.75 inches, or 73.5 inches, of usable mounting height), but the density can be augmented by increasing the proportion of 1U blades versus 2U and 3U rack-mountable components. These modules then need to be wired for power and connected to the network.

Again, an advantage of the larger enclosures is the reduction in number of external wires that are necessary since much of the switching fabric is internalized to the system.
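The capacity arithmetic behind this density argument is straightforward; a quick sketch using the standard 42U and 1.75-inch figures from above:

```python
RACK_UNITS = 42          # a standard full-height cabinet
U_HEIGHT_INCHES = 1.75   # one rack unit (1U) per EIA-310-D

def usable_height_inches(rack_units: int = RACK_UNITS) -> float:
    """Total vertical mounting space in a cabinet."""
    return rack_units * U_HEIGHT_INCHES

def servers_per_rack(rack_units: int, server_height_u: int) -> int:
    """Whole servers of a given height that fit in one cabinet."""
    return rack_units // server_height_u

# density advantage of 1U modules over taller form factors
density = {u: servers_per_rack(RACK_UNITS, u) for u in (1, 2, 3)}
```

Halving module height from 2U to 1U doubles the server count per cabinet, which is why providers push toward the densest form factors their power and cooling budgets allow.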
