Business Critical Apps | @CloudExpo #Cloud #BigData #IoT #API #AWS #Azure

Understanding the truths and myths of HA and DR in cloud deployments can dramatically reduce data center costs and risks

The "New Data Center"
The New Data Center has arrived. Over the past decade we have seen the migration from physical servers to virtual machines and now to public cloud, private cloud and hybrid cloud. Each of these migrations has taken a similar path. Test, dev and non-critical workloads are the first to make the move. As the technology matures, business critical tier 1 applications eventually make the move as well. At this point the percentage of applications still running directly on physical servers is rapidly declining. As Cloud IaaS technology such as AWS and Azure matures, many companies are moving their tier 1 applications directly to the cloud along with the rest of their infrastructure.

Figure 1: The New Data Center

This movement to the cloud was predicted by Gartner analysts in October 2013.

"The use of cloud computing is growing, and by 2016 this growth will increase to become the bulk of new IT spend, according to Gartner, Inc. 2016 will be a defining year for cloud as private cloud begins to give way to hybrid cloud, and nearly half of large enterprises will have hybrid cloud deployments by the end of 2017." [1]

This rapid adoption of the cloud puts pressure on cloud providers to deliver on their promises of flexibility, agility and availability. Before moving business-critical applications to the cloud, IT needs to ensure that doing so will not mean sacrificing performance, availability, or disaster protection.

Cloud Availability - 99.95% Uptime "Guaranteed"
Both Amazon Web Services (AWS)[2] and Microsoft Azure[3] offer Service Level Agreements that guarantee 99.95% uptime, which equates to roughly 22 minutes of downtime per month. However, if you read both SLAs carefully, you will see that in order to qualify for the SLA you have to deploy two or more instances per region across different "Availability Zones"[4] or "Fault Domains"[5].
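As a back-of-the-envelope check, the 99.95% figure can be converted into allowed downtime per month. A minimal Python sketch (assuming a 30-day month for simplicity):

```python
# Convert an SLA uptime percentage into the downtime per month it permits.
# 99.95% of a 30-day month works out to roughly 21.6 minutes.

def allowed_downtime_minutes(sla_percent: float, days_in_month: int = 30) -> float:
    """Minutes of downtime per month still within the SLA."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - sla_percent / 100)

print(round(allowed_downtime_minutes(99.95), 1))  # -> 21.6
```

The same function shows how quickly the budget shrinks: at 99.99% you are down to about 4.3 minutes per month.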

Amazon's Availability Zones and Microsoft's Fault Domains are essentially the same concept. Within any geographic region of their IaaS cloud offerings there are sections of infrastructure that are independent of each other, meaning they have no compute, network, storage or power in common. AWS regions have two to three Availability Zones whereas Azure by default allows for two Fault Domains per "Availability Set," but has recently added a feature that allows up to three Fault Domains[6].

What the SLA guarantees is that 99.95% of the time you should be able to reach at least one instance, assuming you have two or more instances running in different Fault Domains or Availability Zones. While that works fine for applications like the web servers shown in Figure 2 and non-transactional application servers where you can simply load balance between instances for high availability and scalability, what do you do for transactional applications like database servers where the data is dynamic? Something must be done to keep the instances in sync with each other.

Figure 2: Azure Fault Domains
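For the stateless web tier in Figure 2, the load-balancing pattern is straightforward: route each request to whichever instance (one per Fault Domain or Availability Zone) is currently healthy. A minimal sketch in Python, with hypothetical instance names:

```python
# Route to the first healthy instance across Fault Domains / Availability
# Zones. Instance names and the health-check callback are illustrative;
# a real deployment would use the cloud provider's load balancer.

def pick_healthy_instance(instances, is_healthy):
    """Return the first healthy instance, or None if all are down."""
    for inst in instances:
        if is_healthy(inst):
            return inst
    return None

instances = ["web-fd0", "web-fd1"]   # one instance per Fault Domain
down = {"web-fd0"}                   # simulate an outage in Fault Domain 0
print(pick_healthy_instance(instances, lambda i: i not in down))  # -> web-fd1
```

This works because any web server can answer any request. As the article notes, transactional servers such as databases cannot be handled this way, because each instance holds changing state that must be kept in sync.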

Another consideration is that all they are guaranteeing is "dial tone." They don't guarantee the application will be up and running or even that it will be performing at an acceptable level.

Note that even the leading cloud providers, including Microsoft and Amazon, have had downtime events in the past 12 months. According to CloudHarmony[7], Amazon EC2 and EBS combined had 46 outages ranging from 19 seconds to 2.8 hours in the 365 days prior to June 16, 2015. Microsoft Azure Virtual Machines and Object Storage in the same period had 242 outages ranging from 10.4 minutes to 13.16 hours.

If your cloud provider doesn't meet its SLA, what is the impact on your organization? In practice, all it means is that you are refunded a fraction of your bill for the month in which the downtime occurred, as shown in Figure 3.

Figure 3: Service Credits for Missed SLAs

If a 25% service credit for 13.16 hours of downtime does not seem like an even trade, you need to protect your applications from downtime yourself. Traditionally, downtime has been minimized by deploying failover clusters.
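To see how little the credit is worth, consider the arithmetic. The tier percentages below are illustrative, roughly following the pattern in Figure 3; the actual thresholds vary by provider and change over time, so always check the current SLA documents:

```python
# Illustrative service-credit calculation. The credit tiers are a sketch,
# not any provider's current terms.

def monthly_uptime_percent(downtime_hours: float, days_in_month: int = 30) -> float:
    """Uptime percentage for a month given total downtime in hours."""
    total_hours = days_in_month * 24
    return 100 * (1 - downtime_hours / total_hours)

def service_credit_percent(uptime: float) -> int:
    if uptime >= 99.95:
        return 0       # SLA met, no credit
    if uptime >= 99.0:
        return 10      # partial miss
    return 25          # severe miss

uptime = monthly_uptime_percent(13.16)   # the longest Azure outage cited above
print(round(uptime, 2), service_credit_percent(uptime))  # -> 98.17 25
```

Thirteen hours of downtime drops monthly uptime below 99%, which under these tiers earns back only a quarter of one month's bill, while the cost of the outage to the business can be far greater.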

Failover Clusters in the Traditional Data Center
Failover clusters have been the traditional mechanism to ensure high availability for transactional applications that are deemed business critical. A traditional failover cluster has the following properties:

  • Two or more "nodes": A failover cluster is made up of a group of servers (aka nodes) that act as safety nets for each other. If one node fails, then one of the remaining nodes will continue to run the clustered workload.
  • Shared Storage: Each cluster node must have access to the same data set, which is typically stored on a shared disk, SAN or iSCSI array.
  • System level monitoring: A cluster uses a heartbeat mechanism to detect failures of an entire system and initiate recovery action to make sure the clustered work load continues to run on one of the remaining cluster nodes.
  • Application level monitoring: Failure of an application to perform properly is detected and recovery action is taken. In some cases the application can be recovered in place, otherwise the application workload will be moved to the standby server.
  • Planned Maintenance: An application workload can be moved from one node to another with minimal downtime, allowing planned maintenance to be performed on the idle node without scheduling a significant outage.
  • Client redirection: Clients connecting to the cluster workload will automatically be reconnected to the active node in the cluster whenever the workload moves between cluster nodes.
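The system-level monitoring above rests on a heartbeat: the standby node expects a periodic signal from the active node and initiates failover only after several consecutive beats are missed, so a brief network blip does not trigger an unnecessary move. A minimal sketch of that logic, with the threshold chosen for illustration:

```python
# Heartbeat-driven failover decision. Each entry in `beats` represents one
# monitoring interval: True if a heartbeat arrived, False if it was missed.
# The threshold of 3 missed beats is an illustrative choice.

def monitor(beats, missed_before_failover=3):
    """Return the interval index at which failover triggers, or None if
    the active node stayed healthy throughout."""
    missed = 0
    for i, beat in enumerate(beats):
        if beat:
            missed = 0                   # any beat resets the counter
        else:
            missed += 1
            if missed >= missed_before_failover:
                return i                 # sustained silence: fail over
    return None

# A single missed beat (interval 2) is tolerated; three in a row are not.
print(monitor([True, True, False, True, False, False, False]))  # -> 6
```

Real cluster software layers quorum votes and application-level checks on top of this, but the core trade-off is the same: a lower threshold fails over faster, a higher one tolerates more transient noise.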

Failover Clusters in the Cloud
For business-critical applications in the cloud, failover clusters are still the best way to ensure that applications remain highly available. A failover cluster in the cloud is required to ensure that, should a failure occur in one Azure Fault Domain or AWS Availability Zone, another node in a separate domain or zone can recover the workload with minimal downtime.

A failover cluster traditionally requires shared storage. In most cloud environments, including both AWS and Azure, shared storage that supports failover clustering is not available. In these cases you have two alternatives: use replication options that come with the application or use third-party SANLess cluster solutions.

Application based replication
Many applications have built-in features that allow for replication and high availability without the use of a SAN. Solutions such as SQL Server AlwaysOn Availability Groups[8], Exchange Server Database Availability Groups[9], DFS-R[10] and Oracle Streams[11] are just some of the examples of replication features built into applications that may help provide availability within cloud deployments.

Each solution mentioned above will have to be understood completely before you embark on your deployment as there are usually restrictions and/or limitations associated with each solution.

SANLess Clusters
Third-party host-based replication[12] solutions have been around since the 1990s and help with high availability and disaster recovery of business-critical applications. Check with your cloud provider to see which solutions are certified for use in their cloud. Choose SANLess clustering software that is easy to implement and configure and that integrates with industry-standard clustering solutions. For example, ensure the SANLess clustering software can be added to a standard Windows Server failover clustering environment, enabling it to be used in cloud, hybrid cloud, and virtual environments where shared storage is impossible or impractical. Such software also allows a greater degree of configuration flexibility, letting you create hybrid cluster environments with any combination of physical, virtual, and cloud nodes.

The benefit of a SANLess cluster is that it behaves the same as a traditional cluster, except it uses local storage instead of shared storage. Most applications have supported traditional clusters for many years and administrators are familiar with the features and functionality. They are particularly useful if you have many different applications to protect as you can manage them all with the same technology.
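The core idea behind a SANLess cluster can be sketched in a few lines: instead of both nodes mounting one shared disk, every write on the active node is synchronously mirrored to the standby's local storage, so either copy is complete if the other node fails. This toy model (dictionaries standing in for local disks) is purely illustrative of the write path, not any vendor's implementation:

```python
# Toy model of SANless synchronous mirroring: the write is acknowledged
# only after it lands on both nodes' local storage, so the standby always
# holds an up-to-date copy and no shared disk is needed.

class MirroredStore:
    def __init__(self):
        self.active = {}    # active node's local disk (simulated)
        self.standby = {}   # standby node's local disk (simulated)

    def write(self, key, value):
        # Synchronous replication: both copies are updated before the
        # write is considered complete.
        self.active[key] = value
        self.standby[key] = value

store = MirroredStore()
store.write("orders/1001", "shipped")
# After an active-node failure, the standby has an identical data set:
print(store.standby == store.active)  # -> True
```

Real products replicate at the block level beneath the file system and also offer asynchronous modes for long-distance disaster recovery, trading a small window of potential data loss for lower write latency.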

Summary
The new data center is inevitable. The benefits of flexibility and agility are too enticing to ignore. However, availability must not be taken for granted: it is incumbent upon your cloud architecture team to understand the steps that must be taken to ensure that tier 1 business-critical applications are highly available.

References

  1. http://www.gartner.com/newsroom/id/2613015
  2. http://aws.amazon.com/ec2/sla/
  3. http://azure.microsoft.com/en-us/support/legal/sla/
  4. http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
  5. https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-manage-availability/
  6. http://azure.microsoft.com/en-us/documentation/templates/101-create-availability-set-3fds/
  7. https://cloudharmony.com/status-1year-for-aws
  8. https://msdn.microsoft.com/en-us/library/ff877884.aspx
  9. https://technet.microsoft.com/en-us/library/Dd638137(v=EXCHG.150).aspx
  10. https://msdn.microsoft.com/en-us/library/Bb540025(v=VS.85).aspx
  11. http://www.oracle.com/technetwork/database/information-management/streams-fov-11g-134280.pdf
  12. http://www.linuxclustering.net/2012/11/07/host-based-replication-vs-san-replication/

More Stories By David Bermingham

David Bermingham is recognized within the technology community as a high availability expert and has been honored by his peers by being elected to be a Microsoft MVP in Clustering since 2010. His work as director of Technical Evangelist at SIOS has him focused on evangelizing Microsoft high availability and disaster recovery solutions as well as providing hands on support, training and professional services for cluster implementations.

David holds numerous technical certifications and draws from over twenty years of experience in IT, including work in the finance, healthcare and education fields, to help organizations design solutions to meet their high availability and disaster recovery needs. He has recently begun speaking on deploying highly available SQL Servers in the Azure Cloud and deploying Azure Hybrid Cloud for disaster recovery.


