What if it were possible to have both high availability and high performance without the high cost and complexity?

Optimizing VMware Environments for Peak SQL Server Performance

VMware configurations designed to provide high availability often make it difficult to achieve the performance required by mission-critical SQL Server applications. But what if it were possible to have both high availability and high performance without the high cost and complexity normally required?

This article explores two requirements for getting both for SQL Server applications while reducing capital and operational expenditures. The first is to implement a storage architecture within the VMware environment designed for both high availability (HA) and high performance (HP); the second is to tune that HA/HP architecture for peak performance.

Building the Foundation with an HA/HP Architecture
SQL Server administrators have many options for implementing HA in a VMware environment. VMware offers vSphere HA, Microsoft offers Windows Server Failover Clustering as a general-purpose HA solution, and SQL Server has its own HA capabilities with AlwaysOn Failover Clusters and AlwaysOn Availability Groups. Then there are the many third-party vendors that offer solutions purpose-built for HA and disaster recovery.

The problem is that many of these HA solutions lack full application availability protection, reduce operational flexibility or have an adverse impact on performance. The performance overhead stems from the layers of abstraction in virtualized servers, which complicate the way virtual machines (VMs) interface with physical devices, including in a Storage Area Network (SAN) where the storage is also virtualized. Both vSphere HA and AlwaysOn Availability Groups fall short in protecting the entire application stack and all application data during failover. And while Windows Server Failover Clustering is the ideal solution for fully addressing these issues, VMware imposes certain restrictions that reduce IT flexibility, rule out the highest-performing configurations and limit the mobility of VMs configured in the cluster. Let's look at the issues.

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere uses a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. Using RDM with shared storage is a requirement for layering any HA clustering technology on a VMware environment, including a SQL Server Failover Cluster built on Windows Server Failover Clustering (WSFC).

RDM makes the storage appear to the guest operating system as if it were a virtual disk file in a VMware Virtual Machine File System (VMFS) volume. Because of this, the mapping is able to maintain 100 percent compatibility with SAN commands, making virtualized storage access seamless to both the operating system and applications.

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use "raw" or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices. And because RDM interferes with VMware features that employ virtual machine disk (VMDK) files, SQL Server administrators may be unable to fully utilize desirable features like snapshots, VMware Consolidated Backup (VCB), templates and vMotion.
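Before planning such a cluster it can help to know where RDM is already in use. As a minimal sketch (not part of any product discussed here), the following Python snippet uses the pyVmomi SDK to list VMs with RDM-backed disks; the vCenter hostname and credentials are placeholders.

```python
# Hypothetical helper: inventory RDM-backed disks across a vCenter inventory
# using pyVmomi. The hostname and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        for dev in vm.config.hardware.device:
            backing = getattr(dev, "backing", None)
            if isinstance(dev, vim.vm.device.VirtualDisk) and isinstance(
                    backing, vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo):
                # compatibilityMode is 'physicalMode' or 'virtualMode'
                print(f"{vm.name}: RDM on {backing.deviceName} ({backing.compatibilityMode})")
finally:
    Disconnect(si)
```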

But the real problem for transaction-intensive applications like SQL Server is the inability to utilize performance-enhancing Flash Read Cache when RDM is configured. The best way to achieve both HA and HP for SQL Server applications in a VMware environment is to use a SANless configuration that eliminates the need for shared SAN storage. In SANless configurations both the compute and storage resources are fully redundant (with no single points of failure and automatic failover), and they provide the additional flexibility to achieve disaster protection by geographically dispersing the redundant resources.

SANless HA/HP architectures make it possible to create a shared-nothing, hardware-agnostic, single-site or multi-site cluster. Some solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.

Just as importantly, a SANless cluster is often easier to implement and operate with both physical and virtual servers. For example, for solutions that are integrated with WSFC, administrators are able to configure high-availability clusters using a familiar feature in a way that avoids the use of shared storage as a potential single point of failure. Once configured, most solutions then automatically synchronize the local storage in two or more servers (in one or more data centers), making them appear to WSFC as a local or shared storage device.

A well-designed SANless HA/HP solution can actually be less expensive than traditional HA configurations owing to savings in two areas. The first involves avoiding the high cost associated with creating a fully redundant SAN across the LAN and WAN. Simply put: HA configurations using local storage with hard disk drives (HDDs) and/or solid state drives (SSDs) are able to deliver superior performance at a lower cost. The second area involves licensing. Because these solutions are designed to deliver carrier-class HA for AlwaysOn Failover Clusters in SQL Server Standard Edition, there is no need to use AlwaysOn Availability Groups in the more expensive Enterprise Edition.

The performance advantage of a SANless HA/HP solution is shown in the diagram below. Benchmark testing reveals the 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data in a SAN environment. These test results also show how the use of local storage in an HA configuration is able to perform nearly as well as an unprotected application. To provide an accurate comparison, each alternative utilized identically-performing HDDs. The use of SSDs can deliver an even more significant performance advantage over the SAN-based AlwaysOn Availability Group configuration.

Benchmark tests comparing SQL Server's AlwaysOn Availability Groups with SANless clusters show the throughput advantage possible with replication techniques designed for HA/HP.
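To put the 60-70 percent penalty in concrete terms, the short calculation below works through the arithmetic; the 1,000 transactions-per-second baseline and the roughly 5 percent SANless overhead are assumed figures for illustration, not numbers taken from the benchmark itself.

```python
# Illustrative arithmetic only; the baseline and SANless overhead are assumptions.
baseline_tps = 1000                       # assumed unprotected SQL Server throughput
ag_penalty = (0.60, 0.70)                 # reported AlwaysOn AG overhead range
ag_tps = [baseline_tps * (1 - p) for p in ag_penalty]
sanless_tps = baseline_tps * 0.95         # "nearly as well as unprotected" (assumed ~5% cost)

print(f"AlwaysOn AG over SAN : {min(ag_tps):.0f}-{max(ag_tps):.0f} tps")
print(f"SANless HA/HP cluster: {sanless_tps:.0f} tps")
```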

The SANless cluster tested is able to deliver this impressive performance with complete application and data transparency because its advanced architecture implements a low-level, high-efficiency driver that sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and another copy simultaneously across the network to the VMDK on the remote secondary server.
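The sketch below illustrates only this write-splitting concept in user space; the actual product implements it as a kernel-mode filter driver below NTFS, and the replica address, file name and wire format here are hypothetical.

```python
# Conceptual illustration of synchronous write splitting: one copy of each block
# goes to local storage, one is shipped to the secondary, and the write does not
# complete until the secondary confirms receipt. All names are placeholders.
import socket
import struct

REPLICA_ADDR = ("10.0.0.2", 9000)          # hypothetical secondary node

def mirrored_write(local_file, offset, block, sock):
    # 1. Write the block to the primary's local volume.
    local_file.seek(offset)
    local_file.write(block)
    # 2. Ship the same block to the secondary and wait for its acknowledgement
    #    before completing (synchronous replication semantics).
    sock.sendall(struct.pack("!QI", offset, len(block)) + block)
    if sock.recv(1) != b"\x01":
        raise IOError("secondary did not acknowledge the replicated block")

with socket.create_connection(REPLICA_ADDR) as sock, open("data.mdf", "r+b") as f:
    mirrored_write(f, offset=8192, block=b"\x00" * 8192, sock=sock)
```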

Beyond performance, SANless clusters have many other advantages. For example, those that use block-level replication technology that is fully integrated with WSFC are able to protect the entire SQL Server application instance, including all databases, logons and agent jobs, all in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which protect only the SQL databases and not other disk-resident data that may be application-specific.

Tuning the Configuration for Peak Performance
Just as virtualization's layers of abstraction make accessing storage more complex, so too do they obscure how the physical resources are performing. This can make optimizing resources for peak performance a never-ending exercise in trial-and-error.

The trial-and-error process is nearly impossible to avoid with traditional application performance management tools that use thresholds on discrete events to isolate performance issues. But individual thresholds are unable to account for the interrelated nature of resources in virtualized environments, where a change to one often has a significant impact on another. So even when these tools alert IT to a performance issue, they are incapable of providing meaningful insight into the issue or offering guidance for resolving it.

Advanced machine learning analytics (MLA) software overcomes these and other limitations by automatically and continuously learning the many complex behaviors and interactions among all interrelated resources. Self-learning and automatic adaptation are what make it possible for MLA-based solutions to provide a more accurate means of identifying the root cause(s) of performance issues and offering actionable recommendations for resolving them.

Most machine learning analytics systems work by aggregating, normalizing, and then correlating and analyzing hundreds of thousands of data points from numerous resources across network, storage, compute and application layers. While gathering and analyzing this wealth of data, the MLA system learns what constitutes normal behavior patterns, thereby establishing a baseline for detecting anomalies and finding root causes. Some MLA systems also enable human supervision to accelerate the learning process and improve results.
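A drastically simplified sketch of the baselining idea (not any vendor's actual algorithm) is shown below; the latency samples are synthetic, and real systems learn across many correlated metrics rather than one.

```python
# Learn a metric's normal range from history, then flag readings far outside it.
import statistics

def build_baseline(history):
    """Learn the mean and spread of a metric from past samples."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomaly(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from normal."""
    mean, stdev = baseline
    return abs(value - mean) > threshold * stdev

disk_latency_ms = [4.8, 5.1, 5.0, 4.9, 5.3, 5.2, 4.7, 5.0]   # synthetic history
baseline = build_baseline(disk_latency_ms)
print(is_anomaly(22.0, baseline))   # True  - well outside learned behavior
print(is_anomaly(5.4, baseline))    # False - within normal variation
```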

In addition to identifying root causes, some MLA systems are able to simulate and predict the impact of changes to resources and configurations. This is key to anticipating and avoiding performance or reliability issues rather than reacting to problems in real time after they occur. In contrast, traditional monitoring tools are reactive by design, built primarily to deliver alerts on current events within the infrastructure. These tools are manually intensive and involve time-consuming, error-prone approaches: they require IT administrators to run multiple reports and then manually compare the results to find and fix under- and over-provisioning of vCPU and vMemory resources.
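As a rough sketch of the right-sizing report such tools automate, the snippet below flags VMs whose average CPU use falls well below their allocation; the VM records and the 20 percent cutoff are assumptions chosen purely for illustration.

```python
# Hypothetical utilization data; a real MLA system would pull this from vCenter
# performance counters and learn the cutoff rather than hard-coding it.
vms = [
    {"name": "sql-prod-01", "vcpus": 8, "avg_cpu_pct": 72},
    {"name": "sql-dev-02",  "vcpus": 8, "avg_cpu_pct": 6},
    {"name": "sql-rpt-03",  "vcpus": 4, "avg_cpu_pct": 15},
]

def overprovisioned(vm, cutoff_pct=20):
    """A VM averaging well under its allocation is a candidate for right-sizing."""
    return vm["avg_cpu_pct"] < cutoff_pct

for vm in vms:
    if overprovisioned(vm):
        suggested = max(1, vm["vcpus"] // 2)
        print(f"{vm['name']}: {vm['vcpus']} vCPU at {vm['avg_cpu_pct']}% average "
              f"-> consider {suggested} vCPU")
```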

MLA systems can identify a wide range of performance issues, involving compute or storage contention or incorrectly configured VMs, as well as problems arising from migrated VMs, newly provisioned VMs, "noisy neighbors," misconfigured applications or hardware degradation. Most MLA systems also help improve the efficient use of resources by identifying idle VMs or wasted storage.

SQL administrators often employ host-based caching (HBC), all-flash arrays and/or hybrid storage to improve performance. In SAN environments, HBC normally delivers the greatest improvements in throughput performance by maximizing I/O operations per second (IOPS) for some, but not all applications. And therein lies the challenge.

The improvement in performance is best when the cache is able to contain sufficient "hot" data to have a meaningful increase in IOPS. But testing every application that might fit such criteria with different HBC configurations in an attempt to quantify the improvement is an arduous endeavor in organizations running hundreds or thousands of applications.

Because machine learning is able to evaluate the many variables involved, MLA systems make it possible to identify those applications that would benefit the most from host-based caching. Most systems are able to recommend a cost-effective HBC configuration, and some are even able to estimate the likely increase in IOPS, enabling SQL administrators to prioritize the implementation effort.
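A simple way to see why the cache hit rate drives the benefit is the blended-latency estimate below; the latency figures are assumed, and a real MLA system models many more variables than this.

```python
# Effective latency under host-based caching is the hit-rate-weighted blend of
# cache and SAN latencies; IOPS per outstanding I/O is roughly its reciprocal.
def effective_iops(hit_rate, cache_latency_ms=0.1, backend_latency_ms=5.0):
    blended_ms = hit_rate * cache_latency_ms + (1 - hit_rate) * backend_latency_ms
    return 1000.0 / blended_ms

for hit_rate in (0.0, 0.5, 0.9):
    print(f"cache hit rate {hit_rate:.0%}: ~{effective_iops(hit_rate):.0f} IOPS")
```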

Conclusion
Peak performance is impossible to achieve on a shaky foundation, so it is critically important to make certain the infrastructure's architecture is designed for both high availability and high performance. But as with most things, the SQL Server performance devil is in the details of the many physical resource configurations throughout the HA/HP infrastructure. By taking the guesswork out of performance tuning, machine learning analytics makes it easier than ever to achieve peak performance.

Is your VMware infrastructure delivering satisfactory performance for all of your SQL Server applications? You're among good company if the answer is no. The recommendations made here are easy to implement in a development or pilot environment, so there is little to lose and much to gain by giving them a try. And because most vendors today offer free trials of their performance-tuning tools, there is also zero financial risk to trying.

More Stories By Tony Tomarchio

Tony Tomarchio is the Director of Field Engineering for SIOS Technology. He is responsible for defining and delivering technical pre-sales services, support and best practices to SIOS customers, prospects and partners. He has more than a decade of experience providing systems management and high availability solutions to enterprise customers. Prior to joining SIOS, he served as the Global Sales Engineering lead for the Oracle systems management practice. Tony joined Oracle through the acquisitions of Sun Microsystems and Aduva, Inc., where he served as the lead Sales Engineer / Technical Account Manager and played a critical role in product adoption and evolution.
