The One Problem Cloud Can't Solve. Or Can It?

Cloud computing can’t assure availability of applications in the face of a physical network outage, can it?

Cloud computing providers focus on providing an efficient, scalable environment in which applications can be deployed, and they provide for application availability with load balancing services, health monitoring, and elastic scalability.

But the cloud can’t assure the availability of your network. The Rackspace outage late last year was allegedly caused by a peering issue.

You know, a network problem.

UPDATE: “The issues resulted from a problem with a router used for peering and backbone connectivity located outside the data center at a peering facility, which handles approximately 20% of Rackspace’s Dallas traffic,” Rackspace said in an incident report on its blog. “The problems stemmed from a configuration and testing procedure made at our new Chicago data center, creating a routing loop between the Chicago and Dallas data centers. This activity was in final preparation for network integration between the Chicago and Dallas data centers. The network integration of the facilities was scheduled to take place during the monthly maintenance window outside normal business hours, and today’s incident occurred during final preparations.”

We spend so much time worrying about application availability that we often overlook – both purposefully and accidentally – one of the most basic facts on which applications are built today: the existence of a working, reliable core network.

NO NETWORK, NO APPS

One of the most basic solutions to ensuring availability at the network layer is network redundancy. That is to say, most organizations that determine availability is a number one priority will maintain multiple connections to the Internet – via different providers – and then utilize “link load balancing” to route, re-route, and balance traffic across those connections. This redundancy is supposed to ensure that if one connection (provider) is hit with an outage or is simply experiencing poor performance, another provider can be used to ensure customers and users can access applications.
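
To make the link load balancing idea concrete, here is a minimal sketch of the selection logic such a device applies: probe each upstream link and only send new traffic over links that pass their health check. The link names, local addresses, and probe target below are hypothetical placeholders, not any particular vendor's implementation.

```python
# Minimal sketch of link selection behind "link load balancing":
# probe each upstream link and route new flows only over healthy ones.
# Link names, local addresses, and the probe target are hypothetical.
import socket

LINKS = {
    "isp_a": "203.0.113.10",   # local address on provider A (documentation range)
    "isp_b": "198.51.100.10",  # local address on provider B (documentation range)
}
PROBE_HOST, PROBE_PORT = "example.com", 443

def link_is_healthy(local_addr, timeout=2.0):
    """Try to open a TCP connection sourced from this link's address."""
    try:
        with socket.create_connection((PROBE_HOST, PROBE_PORT),
                                      timeout=timeout,
                                      source_address=(local_addr, 0)):
            return True
    except OSError:
        return False

def healthy_links():
    return [name for name, addr in LINKS.items() if link_is_healthy(addr)]

if __name__ == "__main__":
    up = healthy_links()
    print("usable links:", up or "none, total outage")
```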

This would seem to mean, at first glance, that cloud computing does not have a part to play in network availability. You can’t outsource your physical connectivity to “the cloud”, after all, so it doesn’t seem as though the cloud can help maintain availability from a network perspective.

That’s true. From a network perspective, cloud can’t help. From an internal user/customer perspective, cloud can’t help.

But from an external customer/user perspective, perhaps cloud can be of service (sorry for that one, really) after all.

The reason to keep connectivity available is, ultimately, to deliver applications. While cloud computing cannot address a problem with basic physical connectivity, it can be leveraged in such a way as to help ensure that applications remain available in the unlikely event that an organization’s physical connectivity is interrupted. Using the cloud as a secondary data center, essentially, provides the means by which at least customers external to the network problem can still access applications in the face of an interruption. Cloud as a secondary data center is a fairly mundane and perhaps even boring use of cloud computing, and yet it’s probably one of the more well-understood and cost-effective examples of how cloud computing can be leveraged by organizations of all sizes, particularly smaller ones that may not previously have had the option of a “second” data center due to prohibitive costs.

The only problem – and it is a problem – in this entire scenario is that the global application delivery solution (global server load balancer, or GSLB) must remain available too, which may mean that deployment at the local data center is not an option because, well, if there’s no connectivity to the applications, there’s no connectivity to the GSLB, either. The reason this is a problem is that the GSLB is typically deployed locally, under the control of the organization. In order to take advantage of cloud computing as a secondary data center to combat the potential loss of physical network service, the GSLB would have to be deployed externally so that it remains accessible to external customers and users, as sketched below.
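
As a rough illustration of that externally hosted GSLB decision, the sketch below health-checks the primary data center and answers with the cloud copy’s address when the primary is unreachable. The hostnames and addresses are hypothetical placeholders; a real GSLB would make this decision inside DNS resolution with far richer health monitoring.

```python
# Hedged sketch of an externally hosted GSLB-style decision: answer with the
# primary data center's address while it is reachable, and fail over to the
# cloud-hosted copy of the application when it isn't.
# All hostnames and addresses are hypothetical placeholders.
import socket

PRIMARY_DC = ("192.0.2.50", 443)   # application VIP in the local data center
CLOUD_DC_ADDR = "203.0.113.80"     # same application deployed in the cloud

def reachable(endpoint, timeout=2.0):
    try:
        with socket.create_connection(endpoint, timeout=timeout):
            return True
    except OSError:
        return False

def resolve_app():
    """Return the address a GSLB would hand back for the app's hostname."""
    return PRIMARY_DC[0] if reachable(PRIMARY_DC) else CLOUD_DC_ADDR

if __name__ == "__main__":
    print("answer for www.example-app.com ->", resolve_app())
```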

IS THIS A JOB FOR INTERCLOUD?

Perhaps an external GSLB “service” is what’s required: an external catalog of services that’s based on GSLB and provides core DNS services on an “organizational” scale. A domain “locator” that’s not quite DNS, and yet is. Or perhaps we’re simply looking at a solution that’s more along the lines of a third-party DNS service, where DNS is outsourced to a managed provider and GSLB is an extension or additional option that can be provisioned. Perhaps it, itself, is a cloud-based service that only kicks in when/if you need it.
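
To illustrate the “kicks in when/if you need it” idea, here is a hedged sketch of a small watchdog that asks a managed DNS provider to repoint the application’s record at the cloud data center when the primary site stops answering. The provider endpoint, credential, and record layout are invented placeholders, not any real vendor’s API.

```python
# Rough sketch of "GSLB as an outsourced option": a watchdog that updates a
# managed DNS record to point at the cloud copy when the primary site is down.
# The DNS provider endpoint, token, and record schema are hypothetical.
import socket
import requests

PRIMARY = ("192.0.2.50", 443)          # primary data center VIP (placeholder)
CLOUD_IP = "203.0.113.80"              # cloud-hosted copy (placeholder)
DNS_API = "https://dns.example-provider.com/v1/records/www.example-app.com"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def primary_is_up(timeout=2.0):
    try:
        with socket.create_connection(PRIMARY, timeout=timeout):
            return True
    except OSError:
        return False

def ensure_failover_record():
    target = PRIMARY[0] if primary_is_up() else CLOUD_IP
    # Hypothetical provider call: set the A record to the chosen target
    # with a short TTL so failover takes effect quickly.
    requests.put(DNS_API, json={"type": "A", "value": target, "ttl": 30},
                 headers=HEADERS, timeout=5)

if __name__ == "__main__":
    ensure_failover_record()
```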

There is almost certainly a solution to the problem of maintaining network-level availability that involves “the cloud”, but it is architectural, not technological. It’s not a tangible solution like link load balancing that physically addresses the challenges associated with maintaining network connectivity. It’s a deployment model, an architectural model, that will be necessary to solve this problem. The pieces of the puzzle already exist, generally speaking, so coupling together a solution today would not, strictly speaking, be impossible. But it may be desirable to envision a solution that is based on standards (Intercloud may actually help with this one) or standard practices, and that’s something the cloud doesn’t address today.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
