Optimizing VMware Environments for Peak SQL Server Performance

VMware configurations designed to provide high availability often make it difficult to achieve the performance required by mission-critical SQL Server applications. But what if it were possible to have both high availability and high performance without the high cost and complexity normally required?

This article explores two requirements for getting both for SQL Server applications while reducing capital and operational expenditures. The first is to implement a storage architecture within VMware environments designed for both high availability (HA) and high performance (HP); the second is to tune that HA/HP architecture for peak performance.

Building the Foundation with an HA/HP Architecture
SQL Server administrators have many options for implementing HA in a VMware environment. VMware offers vSphere HA, Microsoft offers Windows Server Failover Clustering as a general-purpose HA solution, and SQL Server has its own HA capabilities in AlwaysOn Failover Clusters and AlwaysOn Availability Groups. Then there are the many third-party vendors that offer solutions purpose-built for HA and disaster recovery.

The problem is that many of these HA solutions lack full application availability protection, reduce operational flexibility or have an adverse impact on performance. The performance overhead stems from the layers of abstraction in virtualized servers, which complicate the way virtual machines (VMs) interface with physical devices, including in a Storage Area Network (SAN), where the storage is also virtualized. Both vSphere HA and AlwaysOn Availability Groups fall short in protecting the entire application stack and all application data during failover. And while Windows Server Failover Clustering is the ideal solution to fully address these issues, VMware imposes certain restrictions that reduce IT flexibility, rule out the highest-performing configurations and limit the mobility of VMs configured in the cluster. Let's look at the issues.

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere utilizes a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. RDM is required with shared storage whenever any HA clustering technology is layered on a VMware environment, including a SQL Server failover cluster built on Windows Server Failover Clustering (WSFC).
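For illustration, the sketch below uses VMware's pyVmomi Python SDK to attach a raw LUN to a VM as an RDM disk in physical compatibility mode, the mode WSFC's SCSI reservations require. The vCenter address, credentials, VM name and LUN device path are hypothetical placeholders, and a real cluster would also need SCSI bus sharing configured on the controller.

```python
# Sketch: attach a raw LUN to a VM as an RDM disk in physical compatibility
# mode (the mode WSFC's SCSI reservations require). vCenter address,
# credentials, VM name and LUN device path are hypothetical placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE      # lab shortcut; verify certs in production

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "sqlnode1")   # hypothetical VM

# Back the new virtual disk with the raw LUN instead of a VMDK flat file.
backing = vim.vm.device.VirtualDiskRawDiskMappingVer1BackingInfo()
backing.compatibilityMode = "physicalMode"    # pass SCSI commands through
backing.diskMode = "independent_persistent"
backing.deviceName = "/vmfs/devices/disks/naa.600601..."  # hypothetical LUN

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = 1000   # key of the VM's existing SCSI controller
disk.unitNumber = 1         # a free slot on that controller

change = vim.vm.device.VirtualDeviceSpec()
change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
change.device = disk

task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
Disconnect(si)
```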

RDM makes the storage appear to the guest operating system as if it were a virtual disk file in a VMware Virtual Machine File System (VMFS) volume, yet the mapping maintains 100 percent compatibility with all SAN commands, making virtualized storage access seamless to both the operating system and applications.

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use "raw" or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices. And because RDM interferes with VMware features that employ virtual machine disk (VMDK) files, SQL Server administrators may be unable to fully utilize desirable features like snapshots, VMware Consolidated Backup (VCB), templates and vMotion.

But the real problem for transaction-intensive applications like SQL Server is the inability to utilize performance-enhancing Flash Read Cache when RDM is configured. The best way to get both HA and HP for SQL Server applications in a VMware environment is a SANless configuration that eliminates the need for shared SAN storage. In SANless configurations, both the compute and storage resources are fully redundant (with no single points of failure and automatic failover) and provide the additional flexibility to achieve disaster protection by geographically dispersing redundant resources.

SANless HA/HP architectures make it possible to create a shared-nothing, hardware-agnostic, single-site or multi-site cluster. Some solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.

Just as importantly, a SANless cluster is often easier to implement and operate with both physical and virtual servers. For example, for solutions that are integrated with WSFC, administrators are able to configure high-availability clusters using a familiar feature in a way that avoids the use of shared storage as a potential single point of failure. Once configured, most solutions then automatically synchronize the local storage in two or more servers (in one or more data centers), making them appear to WSFC as a local or shared storage device.

A well-designed SANless HA/HP solution can actually be less expensive than traditional HA configurations owing to savings in two areas. The first involves avoiding the high cost associated with creating a fully redundant SAN across the LAN and WAN. Simply put: HA configurations using local storage with hard disk drives (HDDs) and/or solid state drives (SSDs) are able to deliver superior performance at a lower cost. The second area involves licensing. Because these solutions are designed to deliver carrier-class HA for AlwaysOn Failover Clusters in SQL Server Standard Edition, there is no need for the AlwaysOn Availability Groups feature in the more expensive Enterprise Edition.

The performance advantage of a SANless HA/HP solution is shown in the diagram below. Benchmark testing reveals the 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data in a SAN environment. The test results also show that local storage in an HA configuration performs nearly as well as an unprotected application. To provide an accurate comparison, each alternative utilized identically performing HDDs; using SSDs would deliver an even more significant performance advantage over the SAN-based AlwaysOn Availability Groups configuration.

Benchmark tests comparing SQL Server's AlwaysOn Availability Groups with SANless clusters show the throughput advantage possible with replication techniques designed for HA/HP.

The SANless cluster tested is able to deliver this impressive performance with complete application and data transparency because its architecture implements a low-level, high-efficiency driver that sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and simultaneously sends another copy across the network to the VMDK on the remote secondary server.
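The vendor's driver itself is proprietary, but the write-splitting idea can be sketched conceptually. The Python sketch below mirrors each block write to a local file and to a remote replica over TCP, returning only after both complete (the synchronous case); the host, port and wire format are invented for illustration.

```python
# Conceptual sketch of synchronous block-level write splitting (not the
# vendor's actual driver): each write lands on the local volume and is
# mirrored to a remote replica before the call returns. Host, port, paths
# and wire format are made up for illustration.
import socket
import struct
import threading

class SyncMirrorWriter:
    def __init__(self, local_path, replica_host, replica_port):
        self.local = open(local_path, "r+b")
        self.sock = socket.create_connection((replica_host, replica_port))
        self.lock = threading.Lock()

    def write_block(self, offset, data):
        """Write one block locally and to the replica; return when both done."""
        with self.lock:
            self.local.seek(offset)
            self.local.write(data)
            self.local.flush()
            # Wire format: 8-byte offset, 4-byte length, then the payload.
            header = struct.pack("!QI", offset, len(data))
            self.sock.sendall(header + data)
            ack = self.sock.recv(1)          # block until replica confirms
            if ack != b"\x01":
                raise IOError("replica did not acknowledge write")

# After failover, replication direction simply reverses: the same class is
# instantiated on the old secondary, pointed back at the old primary.
```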

Beyond performance, SANless clusters have many other advantages. For example, those that use block-level replication fully integrated with WSFC are able to protect the entire SQL Server application instance, including all databases, logons and agent jobs, in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which protect only the SQL databases and not other disk-resident data that may be application-specific.

Tuning the Configuration for Peak Performance
Just as virtualization's layers of abstraction make accessing storage more complex, so too do they obscure how the physical resources are performing. This can make optimizing resources for peak performance a never-ending exercise in trial-and-error.

The trial-and-error process is nearly impossible to avoid with traditional application performance management tools that utilize thresholds on discrete events to isolate performance issues. But individual thresholds are unable to account for the interrelated nature of resources in virtualized environments, where a change to one often has a significant impact on another. So even when these tools alert IT to a performance issue, they are incapable of providing meaningful insight into the issue or guidance for its resolution.

Advanced machine learning analytics (MLA) software overcomes these and other limitations by automatically and continuously learning the many complex behaviors and interactions among all interrelated resources. This self-learning and automatic adaptation is what makes it possible for MLA-based solutions to identify the root cause(s) of performance issues more accurately and to provide actionable recommendations for resolving them.

Most machine learning analytics systems work by aggregating, normalizing, and then correlating and analyzing hundreds of thousands of data points from numerous resources across network, storage, compute and application layers. While gathering and analyzing this wealth of data, the MLA system learns what constitutes normal behavior patterns, thereby establishing a baseline for detecting anomalies and finding root causes. Some MLA systems also enable human supervision to accelerate the learning process and improve results.
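As a rough illustration of the baseline idea (not any particular MLA product), the sketch below keeps a rolling window of samples per metric, learns a mean and standard deviation, and flags a sample as anomalous only when correlated metrics deviate together. The metric names and thresholds are hypothetical.

```python
# Minimal sketch of baseline learning for anomaly detection (not any
# particular MLA product): keep a rolling mean/std per metric and flag
# samples whose z-score exceeds a threshold. Metric names are hypothetical.
from collections import defaultdict, deque
import statistics

class MetricBaseline:
    def __init__(self, window=288, z_threshold=3.0):
        self.z = z_threshold
        # e.g., 288 samples = one day of 5-minute intervals
        self.history = defaultdict(lambda: deque(maxlen=window))

    def observe(self, metric, value):
        """Record a sample; return True if it deviates from the baseline."""
        hist = self.history[metric]
        anomalous = False
        if len(hist) >= 30:              # need enough samples for a baseline
            mean = statistics.fmean(hist)
            stdev = statistics.pstdev(hist) or 1e-9
            anomalous = abs(value - mean) / stdev > self.z
        hist.append(value)
        return anomalous

baseline = MetricBaseline()
# Correlate across layers: escalate only when related metrics deviate
# together (e.g., datastore latency AND SQL batch time).
sample = {"datastore_latency_ms": 42.0, "sql_batch_ms": 910.0}
flags = {m: baseline.observe(m, v) for m, v in sample.items()}
if all(flags.values()):
    print("correlated anomaly: investigate the shared storage path")
```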

In addition to identifying root causes, some MLA systems are able to simulate and predict the impact of making changes to resources and configurations. This is key for anticipating and avoiding performance or reliability issues rather than reacting to problems after they occur. In contrast, traditional monitoring tools are reactive by design, built primarily to deliver alerts on current events within the infrastructure. They are manually intensive, with time-consuming and error-prone workflows: IT administrators must run multiple reports, then manually compare the results to find and fix under- and over-provisioning of vCPU and vMemory resources.

MLA systems can identify a wide range of performance issues involving compute or storage contention or incorrectly configured VMs, as well as problems arising from migrated VMs, newly provisioned VMs, "noisy neighbors," misconfigured applications or hardware degradation. Most MLA systems also improve resource efficiency by identifying idle VMs and wasted storage.

SQL administrators often employ host-based caching (HBC), all-flash arrays and/or hybrid storage to improve performance. In SAN environments, HBC normally delivers the greatest improvements in throughput performance by maximizing I/O operations per second (IOPS) for some, but not all applications. And therein lies the challenge.

The improvement in performance is greatest when the cache can hold enough "hot" data to yield a meaningful increase in IOPS. But testing every application that might fit such criteria with different HBC configurations in an attempt to quantify the improvement is an arduous endeavor in organizations running hundreds or thousands of applications.

Because machine learning is able to evaluate the many variables involved, MLA systems make it possible to identify those applications that would benefit the most from host-based caching. Most systems are able to recommend a cost-effective HBC configuration, and some are even able to estimate the likely increase in IOPS, enabling SQL administrators to prioritize the implementation effort.
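A back-of-the-envelope model shows why the cache hit ratio dominates any such estimate. Treat the effective service time as the hit-ratio-weighted mix of cache and backend latency; IOPS at a given queue depth is its reciprocal. This is not any product's actual algorithm, and the latency figures below are illustrative assumptions, not measurements.

```python
# Back-of-the-envelope estimate of host-based caching benefit (illustrative
# latencies, not measured data): effective service time is the hit-ratio-
# weighted mix of cache and backend latency; IOPS is its reciprocal scaled
# by the number of outstanding I/Os.
def estimated_iops(hit_ratio, cache_latency_ms=0.1, disk_latency_ms=5.0,
                   queue_depth=32):
    effective_ms = (hit_ratio * cache_latency_ms
                    + (1 - hit_ratio) * disk_latency_ms)
    return queue_depth * 1000.0 / effective_ms   # I/Os per second

base = estimated_iops(0.0)        # no cache: every I/O hits the SAN
for hit in (0.5, 0.8, 0.95):
    print(f"hit ratio {hit:.0%}: ~{estimated_iops(hit):,.0f} IOPS "
          f"({estimated_iops(hit) / base:.1f}x uncached)")
```

Under these assumptions, an application whose working set yields a 95 percent hit ratio gains roughly 14x the uncached IOPS, while one at 50 percent gains only about 2x, which is why ranking applications by expected hit ratio matters before rolling out HBC broadly.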

Conclusion
Peak performance is impossible to achieve on a shaky foundation, so it is critically important to make certain the infrastructure's architecture is designed for both high availability and high performance. But as with most things, the SQL Server performance devil is in the details of the many physical resource configurations throughout the HA/HP infrastructure. By taking the guesswork out of performance tuning, machine learning analytics makes it easier than ever to achieve peak performance.

Is your VMware infrastructure delivering satisfactory performance for all of your SQL Server applications? You're among good company if the answer is no. The recommendations made here are easy to implement in a development or pilot environment, so there is little to lose and much to gain by giving them a try. And because most vendors today offer free trials of their performance-tuning tools, there is also zero financial risk to trying.

More Stories By Tony Tomarchio

Tony Tomarchio is the Director of Field Engineering for SIOS Technology. He is responsible for defining and delivering technical pre-sales services, support and best practices to SIOS customers, prospects and partners. He has more than a decade of experience providing systems management and high availability solutions to enterprise customers. Prior to joining SIOS, he served as the Global Sales Engineering lead for the Oracle systems management practice. Tony joined Oracle through the acquisitions of Sun Microsystems and Aduva, Inc., where he served as the lead Sales Engineer / Technical Account Manager and played a critical role in product adoption and evolution.
