Optimizing VMware Environments for Peak SQL Server Performance

VMware configurations designed to provide high availability often make it difficult to achieve the performance that mission-critical SQL Server applications require. But what if it were possible to have both high availability and high performance without the high cost and complexity normally involved?

This article explores two requirements for achieving both for SQL Server applications while reducing capital and operational expenditures. The first is to implement a storage architecture within the VMware environment designed for both high availability (HA) and high performance (HP); the second is to tune the resulting HA/HP architecture for peak performance.

Building the Foundation with an HA/HP Architecture
SQL Server administrators have many options for implementing HA in a VMware environment. VMware offers vSphere HA, Microsoft offers Windows Server Failover Clustering as a general-purpose HA solution, and SQL Server has its own HA capabilities in AlwaysOn Failover Cluster Instances and AlwaysOn Availability Groups. Then there are the many third-party vendors that offer solutions purpose-built for HA and disaster recovery.

The problem is that many of these HA solutions lack full application availability protection, reduce operational flexibility or degrade performance. The performance overhead stems from the layers of abstraction in virtualized servers, which complicate the way virtual machines (VMs) interface with physical devices, including in a Storage Area Network (SAN) where the storage is also virtualized. Both VMware HA and AlwaysOn Availability Groups fall short in protecting the entire application stack and all application data during failover. And while Windows Server Failover Clustering is the ideal solution to fully address these issues, VMware imposes restrictions that reduce IT flexibility, rule out the highest-performing configurations and limit the mobility of VMs configured in the cluster. Let's look at the issues.

To enable compatibility with certain SAN and other shared-storage features, such as I/O fencing and SCSI reservations, vSphere utilizes a technology called Raw Device Mapping (RDM) to create a direct link through the hypervisor between the VM and the external storage system. Using RDM with shared storage is required when layering any HA clustering technology on a VMware environment, including a SQL Server failover cluster built on Windows Server Failover Clustering (WSFC).

RDM makes the storage appear to the guest operating system as if it were a virtual disk file in a VMware Virtual Machine File System (VMFS) volume. Because commands pass through to the underlying device, the mapping maintains 100 percent compatibility with SAN commands, making virtualized storage access seamless to both the operating system and applications.
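For administrators who script vSphere, the shape of an RDM attachment can be sketched with the open-source pyVmomi SDK. This is a minimal illustration only, assuming an already-authenticated session; the vm object, LUN device path, controller key and unit number are placeholders, and the exact required fields vary by vSphere version. Physical compatibility mode is shown because pass-through SCSI is what WSFC-style clustering typically needs.

```python
# Minimal pyVmomi sketch (placeholders throughout): attach a raw LUN to a VM
# as a physical-mode RDM, the pass-through configuration WSFC clustering needs.
from pyVmomi import vim

def add_physical_rdm(vm, lun_device_path, controller_key=1000, unit_number=1):
    backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
    backing.compatibilityMode = 'physicalMode'   # pass SCSI commands through to the LUN
    backing.deviceName = lun_device_path         # e.g. '/vmfs/devices/disks/naa.6000...'
    backing.diskMode = 'independent_persistent'  # keep the RDM out of VM snapshots

    disk = vim.vm.device.VirtualDisk()           # capacity is derived from the LUN itself
    disk.backing = backing
    disk.controllerKey = controller_key          # key of the SCSI controller to attach to
    disk.unitNumber = unit_number
    disk.key = -100                              # temporary key; vSphere assigns a real one

    change = vim.vm.device.VirtualDeviceSpec()
    change.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    change.fileOperation = vim.vm.device.VirtualDeviceSpec.FileOperation.create
    change.device = disk

    # Returns a vSphere task; callers should wait on it and check for errors.
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```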

RDM can be made to work effectively, but achieving the desired result is not always easy, and may not even be possible. For example, RDM does not support disk partitions, so it is necessary to use "raw" or whole LUNs (logical unit numbers), and mapping is not available for direct-attached block storage and certain RAID devices. And because RDM interferes with VMware features that rely on virtual machine disk (VMDK) files, SQL Server administrators may be unable to fully utilize desirable capabilities like snapshots, VMware Consolidated Backup (VCB), templates and vMotion.

But the real problem for transaction-intensive applications like SQL Server is the inability to use the performance-enhancing Flash Read Cache when RDM is configured. Both HA and HP for SQL Server applications in a VMware environment are best achieved with a SANless configuration that eliminates the need for shared SAN storage. In SANless configurations, both the compute and storage resources are fully redundant (with no single points of failure and automatic failover), and the redundant resources can be geographically dispersed for disaster protection.

SANless HA/HP architectures make it possible to create a shared-nothing, hardware-agnostic, single-site or multi-site cluster. Some solutions also make it possible to implement LAN/WAN-optimized, real-time block-level replication in either a synchronous or asynchronous manner. In effect, these solutions are capable of creating a RAID 1 mirror across the network, automatically changing the direction of the data replication (source and target) as needed after failover and failback.
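To make those mechanics concrete, here is a deliberately simplified Python sketch of the replication model, not any vendor's code: two nodes hold identical block maps, each write is mirrored synchronously or queued asynchronously, and failover reverses the direction of replication.

```python
# Simplified model of a RAID 1-style mirror across the network: one node is
# the replication source, the other the target, and failover swaps the roles.
from dataclasses import dataclass, field

@dataclass
class ReplicaNode:
    name: str
    blocks: dict = field(default_factory=dict)    # block number -> block data

class NetworkMirror:
    def __init__(self, source: ReplicaNode, target: ReplicaNode, synchronous: bool = True):
        self.source, self.target = source, target
        self.synchronous = synchronous
        self.pending = []                         # writes queued in asynchronous mode

    def write(self, block_no: int, data: bytes):
        self.source.blocks[block_no] = data       # commit locally
        if self.synchronous:
            self.target.blocks[block_no] = data   # ack only after the remote commit
        else:
            self.pending.append((block_no, data)) # drained to the target in background

    def failover(self):
        """Promote the target and reverse replication for eventual failback."""
        for block_no, data in self.pending:       # flush anything still queued
            self.target.blocks[block_no] = data
        self.pending.clear()
        self.source, self.target = self.target, self.source
```

Synchronous mode holds the acknowledgment until the remote commit lands, which suits low-latency LANs; asynchronous mode queues writes and is the usual choice for higher-latency WAN links.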

Just as importantly, a SANless cluster is often easier to implement and operate with both physical and virtual servers. For example, for solutions that are integrated with WSFC, administrators are able to configure high-availability clusters using a familiar feature in a way that avoids the use of shared storage as a potential single point of failure. Once configured, most solutions then automatically synchronize the local storage in two or more servers (in one or more data centers), making them appear to WSFC as a local or shared storage device.

A well-designed SANless HA/HP solution can actually be less expensive than traditional HA configurations owing to savings in two areas. The first is avoiding the high cost of creating a fully redundant SAN across the LAN and WAN; simply put, HA configurations using local storage with hard disk drives (HDDs) and/or solid state drives (SSDs) deliver superior performance at a lower cost. The second is licensing: because these solutions deliver carrier-class HA for AlwaysOn Failover Cluster Instances in SQL Server Standard Edition, there is no need to use AlwaysOn Availability Groups in the more expensive Enterprise Edition.

The performance advantage of a SANless HA/HP solution is shown in the benchmark results below. Benchmark testing reveals a 60-70 percent performance penalty associated with using SQL Server AlwaysOn Availability Groups to replicate data in a SAN environment. The same results show that an HA configuration using local storage performs nearly as well as an unprotected application. To provide an accurate comparison, each alternative utilized identically performing HDDs; the use of SSDs can deliver an even more significant advantage over the SAN-based AlwaysOn Availability Groups configuration.

[Figure: Benchmark tests comparing SQL Server's AlwaysOn Availability Groups with SANless clusters show the throughput advantage possible with replication techniques designed for HA/HP.]

The SANless cluster tested is able to deliver this impressive performance with complete application and data transparency because its architecture implements a low-level, high-efficiency driver that sits immediately below NTFS. As writes occur on the primary server, the driver writes one copy of the block to the local VMDK and another copy simultaneously across the network to the VMDK on the remote secondary server.
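As a toy illustration of that write-splitting path (emphatically not the actual driver described above), the sketch below commits each block to a local and a remote stand-in in parallel and acknowledges only when both copies land:

```python
# Toy dual-write path: each write is issued to both copies in parallel and
# acknowledged only when both succeed. Dicts stand in for the two VMDKs.
from concurrent.futures import ThreadPoolExecutor

local_vmdk, remote_vmdk = {}, {}
pool = ThreadPoolExecutor(max_workers=2)

def mirrored_write(block_no: int, data: bytes):
    futures = [pool.submit(local_vmdk.__setitem__, block_no, data),
               pool.submit(remote_vmdk.__setitem__, block_no, data)]
    for f in futures:
        f.result()        # surfaces any failure; the ack waits on both copies

mirrored_write(7, b'one 8 KB SQL Server page')
assert local_vmdk[7] == remote_vmdk[7]
```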

Beyond performance, SANless clusters have many other advantages. For example, those that use block-level replication technology fully integrated with WSFC can protect the entire SQL Server instance, including all databases, logons and agent jobs, in an integrated fashion. Contrast this approach with AlwaysOn Availability Groups, which protect only the databases themselves, not other disk-resident data that may be application-specific.

Tuning the Configuration for Peak Performance
Just as virtualization's layers of abstraction make accessing storage more complex, so too do they obscure how the physical resources are performing. This can make optimizing resources for peak performance a never-ending exercise in trial-and-error.

The trial-and-error process is nearly impossible to avoid with traditional application performance management tools that rely on thresholds of discrete events to isolate performance issues. Individual thresholds are unable to account for the interrelated nature of resources in virtualized environments, where a change to one often has a significant impact on another. So even when these tools alert IT to a performance issue, they are incapable of providing meaningful insight into the issue or offering guidance for resolution.

Advanced machine learning analytics (MLA) software overcomes these and other limitations by automatically and continuously learning the many complex behaviors and interactions among interrelated resources. This self-learning and automatic adaptation make it possible for MLA-based solutions to identify the root cause(s) of performance issues more accurately and to provide actionable recommendations for resolving them.

Most machine learning analytics systems work by aggregating, normalizing, and then correlating and analyzing hundreds of thousands of data points from numerous resources across network, storage, compute and application layers. While gathering and analyzing this wealth of data, the MLA system learns what constitutes normal behavior patterns, thereby establishing a baseline for detecting anomalies and finding root causes. Some MLA systems also enable human supervision to accelerate the learning process and improve results.
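The baselining step can be illustrated in a few lines of Python. Real MLA products correlate far more signals than this, but the core idea of learning a normal range per metric and flagging sharp deviations (the metric names and readings below are invented) looks roughly like:

```python
# Learn a per-metric baseline (mean and standard deviation) from history,
# then flag current readings that deviate sharply from normal behavior.
from statistics import mean, stdev

def build_baseline(history):
    """history: metric name -> list of past readings."""
    return {metric: (mean(vals), stdev(vals)) for metric, vals in history.items()}

def anomalies(baseline, current, z_threshold=3.0):
    flagged = {}
    for metric, value in current.items():
        mu, sigma = baseline[metric]
        z = abs(value - mu) / sigma if sigma else 0.0
        if z > z_threshold:
            flagged[metric] = round(z, 1)   # how many deviations from normal
    return flagged

history = {'disk_latency_ms': [4, 5, 6, 5, 4, 5],
           'iops': [900, 950, 1000, 980, 940, 960]}
print(anomalies(build_baseline(history), {'disk_latency_ms': 42, 'iops': 310}))
```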

In addition to identifying root causes, some MLA systems are able to simulate and predict the impact of changes to resources and configurations. This is key to anticipating and preventing performance or reliability issues rather than reacting to them in real time. In contrast, traditional monitoring tools are reactive by design, built primarily to deliver alerts on current events within the infrastructure. They are also manually intensive, requiring IT administrators to run multiple reports and then compare the results by hand, a time-consuming and error-prone way to find and fix under- and over-provisioning of vCPU and vMemory resources.
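As a trivial example of the right-sizing checks such tools automate, the sketch below flags over- and under-provisioned VMs from peak-utilization figures; the thresholds and VM records are illustrative only:

```python
# Flag resizing candidates by comparing peak utilization against simple,
# illustrative thresholds; an MLA system learns these boundaries instead.
def flag_provisioning(vms):
    findings = []
    for vm in vms:
        if vm['peak_cpu_pct'] < 20 and vm['peak_mem_pct'] < 20:
            findings.append((vm['name'], 'over-provisioned: reclaim vCPU/vMemory'))
        elif vm['peak_cpu_pct'] > 90 or vm['peak_mem_pct'] > 90:
            findings.append((vm['name'], 'under-provisioned: add resources'))
    return findings

print(flag_provisioning([
    {'name': 'sql-prod-01', 'peak_cpu_pct': 95, 'peak_mem_pct': 70},
    {'name': 'sql-dev-03',  'peak_cpu_pct': 8,  'peak_mem_pct': 12},
]))
```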

MLA systems can identify a wide range of performance issues, involving compute or storage contention or incorrectly configured VMs, as well as problems arising from migrated VMs, newly provisioned VMs, "noisy neighbors," misconfigured applications or hardware degradation. Most MLA systems also help improve the efficient use of resources by identifying idle VMs or wasted storage.

SQL administrators often employ host-based caching (HBC), all-flash arrays and/or hybrid storage to improve performance. In SAN environments, HBC normally delivers the greatest improvements in throughput performance by maximizing I/O operations per second (IOPS) for some, but not all applications. And therein lies the challenge.

The performance improvement is greatest when the cache can hold enough "hot" data to produce a meaningful increase in IOPS. But testing every application that might fit such criteria with different HBC configurations, in an attempt to quantify the improvement, is an arduous endeavor in organizations running hundreds or thousands of applications.

Because machine learning is able to evaluate the many variables involved, MLA systems make it possible to identify those applications that would benefit the most from host-based caching. Most systems are able to recommend a cost-effective HBC configuration, and some are even able to estimate the likely increase in IOPS, enabling SQL administrators to prioritize the implementation effort.
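One way to see why cacheability varies so much by workload is to replay an I/O trace through a simple least-recently-used (LRU) cache simulation and measure the hit ratio. In the sketch below, with synthetic traces, a workload with a small hot set benefits dramatically while a uniformly random one barely benefits at all:

```python
# Estimate host-based-cache benefit by replaying a block-read trace through
# an LRU cache simulation; a high hit ratio suggests a strong HBC candidate.
import random
from collections import OrderedDict

def lru_hit_ratio(trace, cache_blocks):
    cache, hits = OrderedDict(), 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)        # mark block most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)   # evict the least recently used block
    return hits / len(trace)

hot = [random.randrange(100) for _ in range(10_000)]       # small hot set
cold = [random.randrange(100_000) for _ in range(10_000)]  # uniformly random reads
print(f"hot-set workload: {lru_hit_ratio(hot, 1_000):.0%} cache hit ratio")
print(f"uniform workload: {lru_hit_ratio(cold, 1_000):.0%} cache hit ratio")
```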

Conclusion
Peak performance is impossible to achieve on a shaky foundation, so it is critically important to make certain the infrastructure's architecture is designed for both high availability and high performance. But as with most things, the SQL Server performance devil is in the details of the many physical resource configurations throughout the HA/HP infrastructure. By taking the guesswork out of performance tuning, machine learning analytics makes it easier than ever to achieve peak performance.

Is your VMware infrastructure delivering satisfactory performance for all of your SQL Server applications? You're in good company if the answer is no. The recommendations made here are easy to implement in a development or pilot environment, so there is little to lose and much to gain by giving them a try. And because most vendors today offer free trials of their performance-tuning tools, there is also zero financial risk in trying.

More Stories By Tony Tomarchio

Tony Tomarchio is the Director of Field Engineering for SIOS Technology. He is responsible for defining and delivering technical pre-sales services, support and best practices to SIOS customers, prospects and partners. He has more than a decade of experience providing systems management and high availability solutions to enterprise customers. Prior to joining SIOS, he served as the Global Sales Engineering lead for the Oracle systems management practice. Tony joined Oracle through the acquisitions of Sun Microsystems and Aduva, Inc., where he served as the lead Sales Engineer / Technical Account Manager and played a critical role in product adoption and evolution.


