Essential Cloud Computing Characteristics

According to NIST, the cloud model is composed of five essential characteristics, three service models, and four deployment models

If you ask five different experts what cloud computing is, you may get five different opinions, and all five may be correct. The best definition of cloud computing I have ever found is the National Institute of Standards and Technology (NIST) Definition of Cloud Computing. According to NIST, the cloud model is composed of five essential characteristics, three service models, and four deployment models. In this post I will look at the essential characteristics only and compare them with the traditional computing model; in future posts I will look at the service and deployment models.

Because computing always implies resources (CPU, memory, storage, networking, etc.), the premise of the cloud is an improved way to provision, access, and manage those resources. Let's look at each essential characteristic of the cloud:

On-Demand Self-Service
Essentially this means that you (as a consumer of the resources) can provision the resources at any time you want to, and you can do so without assistance from the resource provider.

Here is an example. In the old days, if your application needed additional computing power to support growing load, the process you normally went through was briefly as follows: call the hardware vendor and order new machines; once the hardware was received, install the operating system, connect the machine to the network, configure any firewall rules, and so on; next, install your application and add the machine to the pool of machines that already handle the load for your application. This is a very simplistic view of the process, but it still requires you to interact with many internal and external teams in order to complete it - hardware vendors, IT administrators, network administrators, database administrators, operations, and others. As a result it can take weeks or even months to get the hardware ready to use.

Thanks to cloud computing, though, you can reduce this process to minutes. The whole lengthy process comes down to a click of a button or a call to the provider's API, and you can have the additional resources available within minutes, without anyone's assistance. Why is this important?

Because in the past the process involved many steps and usually took months, application owners often overprovisioned the environments that hosted their applications. Of course, this results in huge capital expenditures at the beginning of the project, resource underutilization throughout the project, and huge losses if the project doesn't succeed. With cloud computing, though, you are in control, and you can provision only enough resources to support your current load.
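
To make the contrast with the manual process concrete, here is a minimal sketch of such a provisioning call, assuming AWS and its boto3 Python SDK (other providers expose equivalent APIs); the AMI ID and instance type are placeholders, not recommendations:

```python
# Minimal sketch of on-demand self-service, assuming AWS and the boto3 SDK.
# The AMI ID and instance type are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-1")

# One API call replaces the weeks-long procurement process described above:
response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t3.micro",  # start small; add capacity only when needed
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Provisioned {instance_id} within minutes, no vendor calls required")
```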

Broad Network Access
Well, this is not something new - we've had the Internet for more than 20 years already, and the cloud did not invent it. And although NIST notes that the cloud promotes the use of heterogeneous clients (like smartphones, tablets, etc.), I think this would be possible even without the cloud. However, there is one important thing that, in my opinion, the cloud enabled that would be very hard to do with the traditional model: the cloud made it easier to bring your application closer to your users around the world. "What is the difference?", you will ask. "Isn't that the same as the Internet or the Web?" Yes and no. Thanks to the Internet you were able to make your application available to users around the world, but there were significant differences in the user experience in different parts of the world. Let's say your company is based in California and you have a very popular application with millions of users in the US. Because you are based in California, all the servers that host your application are either in your basement or in a nearby datacenter, so that you can easily go and fix any hardware issues that occur. Now, think about the experience your users will get across the country! People on the East Coast will see slower response times and possibly more errors than people on the West Coast. If you wanted to expand globally, these problems would be amplified. The way to solve this issue was to deploy servers on the East Coast and in any other part of the world you wanted to expand to.

With cloud computing, though, you can just provision new resources in the region you want to expand to, deploy your application, and start serving your users.

It again comes down to the cost you incur by deploying new datacenters around the world versus just using resources on demand and releasing them if you are not successful. Because the cloud is broadly accessible, you can rely on having the ability to provision resources in different parts of the world.
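
As a hedged illustration, here is a sketch of what expanding into new regions might look like with a provider SDK, again assuming AWS's boto3; the region names are real AWS regions, but the AMI ID is a placeholder (and real AMI IDs differ from region to region):

```python
# Sketch of multi-region expansion via API calls, assuming AWS's boto3.
# The AMI ID is a placeholder; real AMI IDs differ per region.
import boto3

regions = ["us-east-1", "eu-central-1", "ap-southeast-1"]

for region in regions:
    ec2 = boto3.client("ec2", region_name=region)
    ec2.run_instances(
        ImageId="ami-12345678",   # placeholder machine image
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
    )
    print(f"Capacity provisioned close to users in {region}")
```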

Resource Pooling
One can argue whether resource pooling is good or bad. The part that raises the most concern among users is the colocation of applications on the same hardware or on the same virtual machine. Very often you hear that this compromises security and can impact your application's performance or even bring it down. Those were real concerns in the past, but with the advancements in virtualization technology and the latest application runtimes you can consider them outdated. That doesn't mean you should not think about security and performance when you design your application.

The good side of resource pooling is that it enables cloud providers to achieve higher application density on the same hardware and much higher resource utilization (sometimes going up to 75%-80%, compared to 10%-12% in the traditional approach). As a result, the price of resource usage continues to fall. Another benefit of resource pooling is that resources can easily be shifted to where the demand is, without the customer needing to know where those resources come from or where they are located. Once again, as a customer you can request as many resources from the pool as you need at a certain time; once you are done utilizing them, you can return them to the pool so that somebody else can use them. Because you as a customer are not aware of the size of the resource pool, your perception is that the resources are unlimited. In contrast, in the traditional approach application owners have always been constrained by the resources available on a limited number of machines (i.e., the ones they ordered and installed in their own datacenter).
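
To illustrate the idea (this is a toy model, not any provider's actual implementation), here is a sketch of a shared pool where consumers draw capacity and return it when done; in a real cloud the pool size is invisible to consumers, which is why capacity feels unlimited from the outside:

```python
# Toy model of resource pooling, for illustration only.
class ResourcePool:
    def __init__(self, total_units: int) -> None:
        self.available = total_units  # invisible to consumers in a real cloud

    def acquire(self, units: int) -> int:
        if units > self.available:
            raise RuntimeError("pool exhausted")
        self.available -= units
        return units

    def release(self, units: int) -> None:
        self.available += units  # freed capacity is immediately reusable


pool = ResourcePool(total_units=1000)  # the provider's shared capacity
mine = pool.acquire(8)                 # take only what the current load needs
# ... serve traffic ...
pool.release(mine)                     # return it so somebody else can use it
```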

Rapid Elasticity
Elasticity is tightly related to the pooling of resources and allows you to easily expand and contract the amount of resources your application is using. The best part is that this expansion and contraction can be automated, and thus save you money when your application is under light load and doesn't need many resources.

To achieve this elasticity in the traditional case, the process would look something like this: when the load on your application increases, you power up more machines and add them to the pool of servers that run your application; when the load decreases, you remove servers from the pool and power them off. Of course, we all know that nobody does this, because it is much more expensive to constantly add and remove machines from the pool, and thus everybody runs the maximum number of machines all the time with very low utilization. And we all know that if the resource planning is not done right and the load on the application becomes so heavy that the maximum number of machines cannot handle it, the result is an increase in errors, dropped requests, and unhappy customers.

In the cloud scenario, where you can add and remove resources within minutes, you don't need to spend a great deal of time doing capacity planning. You can start very small, monitor the usage of your application, and add more resources as you grow.
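
A minimal sketch of such an automated scale-out/scale-in loop follows; get_cpu_utilization and set_instance_count are hypothetical stand-ins for a provider's monitoring and scaling APIs, and the thresholds are illustrative:

```python
# Minimal autoscaling sketch. The two functions below are hypothetical
# stand-ins for a provider's monitoring and scaling APIs.
import random
import time

MIN_INSTANCES, MAX_INSTANCES = 2, 20


def get_cpu_utilization() -> float:
    """Hypothetical stand-in for a provider's monitoring API."""
    return random.uniform(0.0, 100.0)


def set_instance_count(count: int) -> None:
    """Hypothetical stand-in for a provider's scaling API."""
    print(f"pool size -> {count} instances")


instances = MIN_INSTANCES
for _ in range(10):  # in practice this loop runs continuously
    cpu = get_cpu_utilization()
    if cpu > 75 and instances < MAX_INSTANCES:
        instances += 1   # expand under heavy load
    elif cpu < 25 and instances > MIN_INSTANCES:
        instances -= 1   # contract, and stop paying, under light load
    set_instance_count(instances)
    time.sleep(1)        # re-evaluate on a fixed interval
```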

Measured Service
In order to make money, cloud providers need the ability to measure resource usage. Because in most cases cloud monetization is based on the pay-per-use model, they need to be able to give customers a breakdown of how much of which resources they have used. As mentioned in the NIST definition, this allows transparency for both the provider and the consumer of the service.

The ability to measure resource usage is important to you, the consumer of the service, in several ways. First, based on historical data you can budget for the future growth of your application. It also allows you to better budget new projects that deliver similar applications. It is also important for application architects and developers, who can optimize their applications for lower resource utilization (in the end, everything comes down to dollars on the monthly bill).
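
As a back-of-the-envelope illustration of that pay-per-use math, here is a sketch with made-up usage numbers and rates; they are not any provider's actual pricing:

```python
# Made-up usage and rates for illustration only; real providers publish
# their own price sheets and meter many more dimensions.
usage = {
    "compute_hours": 720,      # one instance running for a 30-day month
    "storage_gb_months": 50,   # average GB stored over the month
    "egress_gb": 120,          # data transferred out
}
rates = {
    "compute_hours": 0.045,    # hypothetical $ per unit
    "storage_gb_months": 0.023,
    "egress_gb": 0.09,
}

line_items = {name: amount * rates[name] for name, amount in usage.items()}
for name, cost in line_items.items():
    print(f"{name:>18}: ${cost:8.2f}")
print(f"{'total':>18}: ${sum(line_items.values()):8.2f}")
```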

On the other side, it helps cloud providers better optimize their datacenter resources and achieve higher density per unit of hardware. It also helps them with capacity planning, so that they don't end up with 100% utilization and no excess capacity to cover unexpected consumer growth.

Compare this to the traditional approach, where you never knew how much of your compute capacity was utilized, how much of your network capacity was used, or how much of your storage was occupied. In rare cases companies were able to collect such statistics, but those were almost never used to provide financial benefit for the enterprise.

With those five essential characteristics in mind, you should be able to recognize the "true" cloud offerings available on the market. In the next posts I will go over the service and deployment models for cloud computing.

More Stories By Toddy Mladenov

Toddy Mladenov has more than 15 years of experience in software development and technology consulting at companies like Microsoft, SAP, and 3Com. Currently he is the CTO of Agitare Technologies, Inc. - a boutique consulting company that specializes in cloud computing and Big Data solutions. Before Agitare Tech, Toddy spent a few years with the PaaS startup Apprenda and more than six years working on Microsoft's cloud computing platform Windows Azure, as well as Windows Client and MSN/Windows Live. During his career at Microsoft he managed different aspects of the software development process for Windows Azure and Windows Services. He also evangelized Microsoft cloud services among open source communities like PHP and Java. In the past he developed enterprise software for the German software giant SAP and several startups in Europe, and managed technical sales for 3Com in the Balkan region.

With his broad industry experience, international background, and end-user point of view, Toddy has a unique approach to technology. He believes that technology should be developed to improve people's lives, and he is eager to share his knowledge on topics like cloud computing, mobile, and web development.
