Integrate Cloud-Based Disaster Recovery into Business Continuity Strategy

DRaaS will continue to gain market strength as a solution this year while evolving to better meet customer requirements

Cloud-based Recovery-as-a-Service (RaaS) is becoming big business. Research and Markets forecasts the global market for RaaS and cloud-based business continuity will reach $5.77 billion by 2018, creating major opportunities for business continuity and risk management specialists alike. Likewise, Reportstack recently reported that the global Disaster Recovery-as-a-Service (DRaaS) market is expected to grow at a Compound Annual Growth Rate (CAGR) of 54.64 percent from 2014 to 2018.[1]

One of the leading drivers for small and mid-size businesses (SMBs) as well as enterprises seeking cloud solutions is Disaster Recovery (DR).[2] Organizations seek improved resiliency and failover in response to service disruptions of all kinds, including natural disasters, cyber attacks and technical malfunctions. In 2013, the financial impact of natural disasters worldwide was more than double the $100 billion estimate of 1990.[3] The McAfee® Labs Threats Report indicates service disruptions are inevitable and becoming more predictable, with a reported 20 million new types of malware in the third quarter of 2013 alone. In a recent survey, IDC found that 71 percent of respondents experienced less than 10 hours of annual downtime, with a projected financial impact for SMBs of $125,000. Larger enterprise organizations could potentially face a corresponding annual financial impact of $17 million.[4] Dun & Bradstreet surveyed Fortune 500 companies, with 59 percent of respondents reporting 1.5 hours of downtime each week, amounting to a projected $46 million annual impact for companies of 10,000 employees or more.[5]

However, the impact may be even greater. In a 2013 Ponemon Institute study, 91 percent of participants reported that their organizations had experienced unplanned downtime in the past two years. Considering it takes about two days to recover from an IT event, if recovery is possible at all, the cost can be much higher in terms of lost revenue and damage to a company's reputation through reduced customer loyalty.

Floods, mudslides, ice and snow storms, hurricanes, tornados and cyclones, fires and droughts have one thing in common: all can have a negative financial impact on day-to-day business. Hurricane Sandy ranked as the largest global disaster in 2012 with a price tag of $65 billion. At the same time, New Jersey residents and municipalities had to cover an additional $8 million to $13 million in unmet expenses. Businesses are still trying to recover from the hurricane, with many resorting to bankruptcy protection. In 2013, 296 adverse weather events, predominantly in Europe and Asia, caused $192 billion in worldwide economic losses. Although the dollar amount was 4 percent less than the 10-year average, the number of events was greater than the 10-year average of 259.[6]

Other factors generating a need for Disaster Recovery planning include the risk potential from cyber attacks on Wi-Fi access into secure networks, Distributed Denial of Service (DDoS) attacks, resistant malware, insider threats, attacks on employee-owned, bring-your-own-device (BYOD) hardware, and breakdowns of out-of-date legacy systems.

Banks have been particularly hard hit in the last couple of years by DDoS attacks, prompting an April 2014 notice from the Federal Financial Institutions Examination Council (FFIEC), which requires banks to assess risk, monitor systems, and develop response plans to mitigate DDoS attacks.[7] Attacks are becoming more sophisticated and can shut down business activity, slow website connections or prevent access to institutional websites. These attacks can be system-wide or come in via peripherals. For instance, an unsecured keyboard-video-mouse (KVM) switch allows cyber attackers to capture keystrokes and password information or access information through unauthorized universal serial bus (USB) devices and microphones.[8]

Cybercriminals are becoming stealthier, developing tools and botnet source code that are increasingly complex and capable of avoiding detection. CryptoLocker, for instance, can be delivered by e-mail; it infects the system, adds itself to the start-up menu, encrypts the organization's data and locks users out. Criminals then demand a ransom to unlock the data.[9]

Today, 31 percent of PCs still run the Windows XP operating system. It's not just PCs that are at risk: a number of medical devices and point-of-sale (POS) systems use Windows to run transactions, and those systems are not consistently updated. In April 2014, Microsoft ended support and updates for Windows XP, placing systems and equipment at increased risk for cyber attacks. Because enterprises and institutions have invested so much time and money in legacy hardware and software, these systems will require expert knowledge going forward to maintain security.

Business Continuity Planning is No Longer Optional
All of these factors point to the need for systematic security planning. Business Continuity Management (BCM) refers to the plans executed and activities performed on a daily basis to maintain business consistency and ensure critical business systems will be available when disaster strikes. Although the term is sometimes used interchangeably with DR, BCM is a separate, overarching strategic plan that includes disaster recovery, crisis management, incident response and contingency planning, as well as business impact analysis and the setting of recovery time objectives (RTO) and recovery point objectives (RPO).

BCM is a set of processes and practices created to identify and mitigate threats and their potential impact while providing the framework to prevent, mitigate and recover from disruptions of all kinds, including the implementation of new programs, processes, system virtualization and other process shifts. Although closely related, DR is more narrowly about building continuity capabilities for infrastructure and applications; more specifically, DR is a business's ability to maintain critical operations and provide services during a disruptive event.[10]
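
To make the RTO and RPO concepts above concrete, here is a minimal sketch in Python; the application, target values and timestamps are hypothetical and chosen purely for illustration. RPO bounds how much data can be lost (the age of the last usable backup), while RTO bounds how long restoration of service may take.

```python
from datetime import datetime, timedelta

# Hypothetical targets for a critical application (illustrative values only).
RPO = timedelta(minutes=15)   # maximum tolerable data loss
RTO = timedelta(hours=2)      # maximum tolerable time to restore service

def meets_objectives(last_backup: datetime,
                     outage_start: datetime,
                     service_restored: datetime) -> dict:
    """Compare a recovery rehearsal (or real incident) against RPO/RTO targets."""
    data_loss_window = outage_start - last_backup   # data created after the last backup is lost
    downtime = service_restored - outage_start       # time users were without the service
    return {
        "rpo_met": data_loss_window <= RPO,
        "rto_met": downtime <= RTO,
        "data_loss_window": data_loss_window,
        "downtime": downtime,
    }

# Example: backup taken at 09:50, outage begins at 10:00, service restored at 11:30.
result = meets_objectives(datetime(2014, 6, 1, 9, 50),
                          datetime(2014, 6, 1, 10, 0),
                          datetime(2014, 6, 1, 11, 30))
print(result)  # rpo_met=True (10 min of data loss), rto_met=True (1.5 h downtime)
```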

Disaster recovery and business continuity continue to rank as two of today's top business concerns due to the prevalence of natural and man-made disruptions. A recent Continuity Insights and KPMG Continuity Management Program Benchmarking study, involving 434 executives from 22 countries, examined whether enterprise organizations are prepared for a disruptive event. Approximately 71 percent of those surveyed indicated a senior management board had been established to develop a BCM program, which made a big difference when conducting business impact analyses (BIAs), setting recovery objectives, adopting global standards and addressing cyber security issues. However, 36 percent of respondents indicated they did not address cyber terrorism in their BCM plans. More than half of those surveyed said they were prompted to initiate a BCM plan, DR plan or crisis management plan by a disruption. Outages were due to weather problems, power interruptions and IT security issues, and represented a nine percent increase in disruptions over the previous year's responses.[11]

Zero Tolerance for Downtime
New technologies and business trends such as virtualization and mobile device BYOD policies, cloud computing, real-time data analysis, e-commerce, third-party cloud-based providers, and globalization are prompting more companies to establish DR and BCM plans as part of overall business strategies. These trends make 24x7 availability the number one priority. At the same time, enterprise organizations are seeking fast Internet speeds, real-time information and ubiquitous connectivity to remain competitive, which leaves no room for downtime.

So, what is the cost if a business continuity plan is not instituted? Plenty, according to leading analysts. In a published study by Touche Ross and ioSafe, companies without a DR plan have a survival rate of less than 10 percent. Gartner, a leading information technology research company, breaks it down even further, predicting 25 percent of PCs will fail this year, while mid-sized companies will experience about 20 hours of network, system and application downtime at an average cost of $70,000 an hour. Forrester, another leading research company, predicts that 24 percent of companies will have a full data disaster.[12]

Business Continuity Planning is Key
In its annual business continuity trends study, Continuity Central reports some interesting findings on how survey respondents are handling business continuity this year. More than half of those surveyed expect to make small changes to existing BCM plans in 2014, while a quarter of respondents expect bigger changes and eight percent anticipate a more thoroughly integrated plan. Five percent will implement ISO 22301 projects this year. As the first international standard developed for BCM, ISO 22301 specifies the requirements businesses must meet to ensure they can recover from a disaster or disruptive event.

Secure Data with Cloud Computing
Now that cloud computing has matured as a platform, more companies are beginning to trust that moving critical data to the cloud will protect it against loss in the event of a disaster or other disruptive event. Forbes predicts that overall cloud spending will grow by about 25 percent this year, reaching $100 billion for software and services as well as cloud infrastructure. More SMBs will join the cloud at a growth rate of 20 percent over the next five years, and more mid-sized companies will move to public clouds.[13]

More companies are seeking ways to reduce the cost of DR, which represents about 25 percent of the overall IT budget, without sacrificing security. However, as network architecture grows more complex, data recovery from on-site storage is becoming a long and arduous process, and on-site backup and restore carries increased risk because of its potential for failure. The cost becomes even greater when organizations put time, effort and money into additional architecture to mirror all servers, applications, data, software and network connections. To that point, CIOs increasingly recognize that cloud storage poses less risk and that cloud-based recovery makes sound financial sense: cost avoidance is gained because the enterprise no longer needs to make large capital investments and infrastructure upgrades to maintain availability.

Cloud Service Providers (CSPs) offer a range of storage options and as-a-service offerings, which makes DRaaS a faster and simpler process. Likewise, virtualized servers have brought down the cost of cloud storage, making it easier for SMBs to compete on the same level as larger organizations.

DRaaS Provides a Low Cost Solution
DRaaS is a flexible platform, enabling enterprise organizations to choose whether it's necessary to restore the entire organizational infrastructure or just critical applications. Organizations gain more control because they get to decide how data should be saved and what critical infrastructure needs to be restored and in what order. A recent study by the Aberdeen Group reports DRaaS is growing as the preferred solution because it reduces the risk of losing critical business data and experiencing a business interruption; critical applications can be up and running in minutes, not days; and it's a faster way of bringing the business back to normal.

The benefits of DRaaS as a pay-as-you-go recovery model are lower costs and minimized downtime, since applications are restarted automatically once a problem is identified. Because DRaaS runs on a virtual platform rather than an on-site server, business continuity requirements for performance and consistency can also be met. A virtual backup site provides much-needed data replication while delivering faster recovery time at a lower cost because it runs on higher-capacity, shared architecture. Testing can occur more frequently because the system is always ready and does not have to be taken offline to test.[14]
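
As a rough illustration of the "restart automatically once a problem is identified" behavior described above, the sketch below polls an application health check and triggers a failover action after repeated failures. The endpoint URL, failover script and thresholds are hypothetical; this is not any particular DRaaS vendor's API, just a minimal monitoring loop under those assumptions.

```python
import subprocess
import time
import urllib.request

HEALTH_URL = "http://app.example.com/health"  # hypothetical application health endpoint
FAILOVER_CMD = ["./start_recovery_site.sh"]   # hypothetical script that brings up the recovery site
CHECK_INTERVAL = 30   # seconds between probes
MAX_FAILURES = 3      # consecutive failures before failing over

def is_healthy(url: str) -> bool:
    """Return True if the application answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

failures = 0
while True:
    if is_healthy(HEALTH_URL):
        failures = 0
    else:
        failures += 1
        if failures >= MAX_FAILURES:
            # Bring up the replicated copy at the recovery site, then stop monitoring.
            subprocess.run(FAILOVER_CMD, check=True)
            break
    time.sleep(CHECK_INTERVAL)
```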

Creating a Business Continuity Plan
A greater number of businesses today are taking advantage of cost-effective, pay-as-you-go DRaaS and BCM plans. BCM takes into account the scope of requirements for backup and restoration of data, applications, systems and, in some cases, facilities, to ensure business continuity when disaster strikes. The first step when developing DRaaS or BCM is finding the right cloud service provider to help your organization determine a solution architecture that meets your recovery performance needs and requirements; this can be done by performing a business impact analysis with a qualified professional. Once complete, a feasibility plan is needed to ensure proper procedures are implemented and followed. Results must then be measured by testing the system repeatedly for availability, completeness and verified backups. The plan should then be shared with key personnel so everyone knows their roles and responsibilities when downtime occurs.
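
The "verified backups" step above is straightforward to automate. The following sketch, with hypothetical directory paths, hashes files at backup time and compares a restored copy against that manifest, flagging anything missing or corrupted; it is one possible verification approach, not a prescribed procedure.

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large backups do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(source_dir: Path) -> dict:
    """Record a hash for every file at backup time."""
    return {str(p.relative_to(source_dir)): sha256(p)
            for p in source_dir.rglob("*") if p.is_file()}

def verify_restore(manifest: dict, restore_dir: Path) -> list:
    """Return a list of files that are missing or whose contents changed."""
    problems = []
    for rel_path, expected in manifest.items():
        restored = restore_dir / rel_path
        if not restored.is_file():
            problems.append(f"missing: {rel_path}")
        elif sha256(restored) != expected:
            problems.append(f"corrupted: {rel_path}")
    return problems

# Hypothetical usage during a recovery test:
manifest = build_manifest(Path("/data/production"))
Path("manifest.json").write_text(json.dumps(manifest))
issues = verify_restore(manifest, Path("/mnt/restored"))
print("restore verified" if not issues else issues)
```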

The Future of DRaaS and BCM
DRaaS will continue to gain market strength as a solution this year while evolving to better meet customer requirements. The service is expected to become faster while efficiently optimizing infrastructure, storage and servers. Virtualization will be key to meeting customer service-level agreements while addressing recovery point and recovery time objectives. Platform flexibility will be paired with self-service options for larger companies with internal IT staff. Expect more companies to ask for a hybrid DR strategy that combines on-premise backup solutions with cloud platforms for data archiving and recovery, so that on-site and cloud applications can be synced for rapid recovery.
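
As a minimal sketch of that hybrid approach, the snippet below copies nightly on-premise backup archives into cloud object storage for off-site archiving using the AWS SDK for Python (boto3), which is assumed to be installed and configured. The bucket name and local paths are hypothetical, and a comparable object-storage API from any CSP could be substituted.

```python
from pathlib import Path

import boto3  # AWS SDK for Python; other CSPs offer comparable SDKs

BACKUP_DIR = Path("/backups/nightly")  # hypothetical on-premise backup location
BUCKET = "example-dr-archive"          # hypothetical cloud bucket name

s3 = boto3.client("s3")

def archive_to_cloud() -> None:
    """Upload any local backup archive not yet present in the cloud bucket."""
    existing = {obj["Key"]
                for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET)
                for obj in page.get("Contents", [])}
    for archive in sorted(BACKUP_DIR.glob("*.tar.gz")):
        if archive.name not in existing:
            s3.upload_file(str(archive), BUCKET, archive.name)
            print(f"archived {archive.name} to s3://{BUCKET}/{archive.name}")

if __name__ == "__main__":
    archive_to_cloud()
```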

Some customers will seek multiple CSPs for different cloud services, opening up new opportunities for vendors and risk management specialists. Storage is expected to double over the next 10 years, while IT staff remain in demand. CSPs and risk management specialists who can serve as trusted IT advisors will be better positioned to take advantage of opportunities from companies seeking purpose-built backup solutions. At the same time, CSPs that adopt simple, consumer-oriented pricing strategies will make decision-making easier for enterprises and speed up the sales cycle for solution specialists and channel partners. Last but not least, giving customers what they want, true customer support, can make the difference in building a larger customer base and improving customer loyalty.[15]

More Stories By Mike Castañeda

Mike Castañeda is the Director of Technology at Lam Cloud Management, a New Jersey-based provider of proven Business Continuity, Workplace Recovery, Data Center and Network solutions.
