The Dirty Truth About Efficiency in Hyperconvergence By @ACConboy | @CloudExpo #Cloud

The discussion on efficiency can be extended through every single part of a vendor's architectural and business choices

The business dictionary defines efficiency as the comparison of what is actually produced or performed with what can be achieved with the same consumption of resources (money, time, labor, design, etc.). For example: the designers needed to revise the product specifications because the complexity of its parts reduced the efficiency of the product.

In technology today, we constantly hear "efficiency" used as a marketing term by people who have never actually looked under the hood of the technology in question to see how its architecture actually works. Efficiency is constantly used as a buzzword without real evaluation of what it means in context: how well the product actually does what it was intended to do, compared to the alternatives in the market. There are far too many vendors saying "trust us, ours is the most efficient..."

Sadly, a quote from Paul Graham all too often comes to mind when dealing with vendors and their architectural choices in new and rapidly growing market segments such as Hyperconvergence:

"In a rapidly growing market, you don't worry too much about efficiency. It's more important to grow fast. If there's some mundane problem getting in your way, and there's a simple solution that's somewhat expensive, just take it and get on with more important things."

Understand that when the technology at hand is hyperconverged infrastructure (HCI), the importance of the term "efficiency" cannot be overstated. Bear in mind that what a hyperconverged vendor (or cloud vendor, or converged vendor) is actually doing is taking responsibility for all of the architectural decisions that customers would previously have made for themselves: which parts to use in the solution, how many there should be, how best to utilize them, what capabilities the end product should have, and what the end user should be able to do (and what, in the vendor's opinion, they don't need to be able to do) with the solution.

Likewise, the choices behind the different approaches you see in the HCI market today can have profound effects on the efficiency (and resulting cost) of the end product. All too often, shortcuts are taken that radically decrease efficiency in the name of expediency (see the Paul Graham quote above). Like cloud and flash before it, the HCI space is seen as a "land grab" by several vendors, and getting to market trumps how they get there. With some vendors, those fundamental decisions take a back seat to getting their sales and marketing machines moving.

The IO Path
One great example of technology moving forward is SSD and flash. Used properly, they can radically improve performance and reduce power consumption. However, several HCI vendors use SSD and flash as an essential buffer to hide the very slow IO path that runs between virtual machines, VSAs (just another VM), and the underlying disks. The result is a Rube Goldberg machine of an IO path, one that consumes 4 to 10 disk IOs or more for every IO the VM needs done, rather than using flash and SSD as proper tiers with a QoS-like mechanism that automatically puts the right workloads in the right place at the right time and can move them fluidly between tiers. Any architecture that REQUIRES flash to function at an acceptable speed has clearly not been architected efficiently. If turning off the flash layer results in IO speeds best described as glacial, then the vendor is hardly being efficient in their use of flash or solid state. Flash is not meant to be the curtain that hides the efficiency issues of the solution.
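To make the cost of that amplification concrete, here is a minimal back-of-the-envelope sketch in Python. The raw disk IOPS figure and the amplification factors are illustrative assumptions (the layered factor is taken from the 4-to-10x range cited above), not measurements of any particular product.

```python
# Hypothetical model of VM-visible throughput under IO amplification.
# All numbers are illustrative assumptions, not vendor measurements.

def effective_iops(raw_disk_iops: float, amplification: float) -> float:
    """Each VM-visible IO costs `amplification` backend disk IOs, so
    VM-visible throughput is the raw capability divided by that factor."""
    return raw_disk_iops / amplification

RAW_IOPS = 10_000  # assumed aggregate backend disk capability

direct = effective_iops(RAW_IOPS, amplification=1.0)    # direct block-level path
layered = effective_iops(RAW_IOPS, amplification=10.0)  # high end of the cited 4-10x range

print(f"Direct path:  {direct:,.0f} VM-visible IOPS")   # 10,000
print(f"Layered path: {layered:,.0f} VM-visible IOPS")  # 1,000
```

The gap between those two numbers is what an always-on flash buffer has to paper over.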

Disk Controller Approaches
In other cases, rather than handing disk subsystems with SAN flexibility built in at the block level directly to production VMs, vendors choose to simply virtualize a SAN controller and pull the legacy SAN and storage protocols up into the servers as a separate VM. This creates several IO-path loops, with IOs passing multiple times through VMs in the system and adjacent systems, maintaining and magnifying the overhead of storage protocols (and their foibles). This approach of using storage controller VMs (sometimes called VSAs, or Virtual Storage Appliances) also consumes significant CPU and RAM that could otherwise power additional virtual machines in the architecture. Many vendors have gone this route because of the forced lock-in and lack of flexibility of legacy virtualization approaches.

Essentially, the VSA is a shortcut solution to the "legacy of lock-in" problem. In one case I can think of, the VSA running on each server (or node) in a vendor's architecture BEGINS its RAM consumption at 16GB and 4 vCores per node, then grows from there based on how much additional feature implementation, IO loading, and maintenance it is doing (see above on the IO path). With a different vendor, the VSA reserves over 43GB per node on the entry-point offering and over 100GB of RAM per node on the most common platform: a three-node cluster reserving 300GB of RAM just for IO-path overhead. An average SMB customer could run their entire operation in just the CPU and RAM these VSAs consume. This approach may make sense for a large enterprise that won't miss the consumed resources because the features the VSA offers outweigh the loss, but that is very much not the case for the average SMB customer. Again, not a paragon of the efficiency required in today's SMB and mid-market IT environments.
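A similarly rough sketch shows what those per-node reservations mean at the cluster level. The three nodes and the 100GB-per-node VSA reservation come from the figures cited above; the 256GB node size is a hypothetical assumption for illustration.

```python
# A minimal sketch of the cluster-level cost of per-node VSA reservations.
# The 256 GB node size is an assumption; the VSA figure is the one cited above.

NODES = 3
NODE_RAM_GB = 256   # assumed RAM per node
VSA_RAM_GB = 100    # per-node VSA reservation cited above

total_ram = NODES * NODE_RAM_GB  # 768 GB across the cluster
vsa_ram = NODES * VSA_RAM_GB     # 300 GB held just for the IO path

print(f"Cluster RAM:           {total_ram} GB")
print(f"Reserved by VSAs:      {vsa_ram} GB ({vsa_ram / total_ram:.0%})")
print(f"Left for workload VMs: {total_ram - vsa_ram} GB")
```

On those assumptions, roughly 39% of the cluster's RAM is gone before a single workload VM boots, which is the point of the SMB comparison above.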

This discussion on efficiency can be extended through every single part of a vendor's architectural and business choices: from hardware, hypervisor, management, and disk to the way they run their business. All of these choices have dramatic impacts on the capabilities, performance, cost, and usability of the final product. As technology consumers in the modern SMB datacenter, we need to look beyond the marketeering to truly vet the choices being made for us.

The really disheartening part is when vendors choose to hide their choices, bury them in overly technical manuals, or claim that a given "new" technical approach is a panacea for every use case, without mentioning that the "new" approach is really just a renamed (read: buzzword) and slightly modified version of something that has been around for decades and is already widely known to be appropriate only for very specific uses. Even worse, some simply come out of the gate with the arrogance of "Trust us, our choices are best."

At the end of the day, it is always best to think of efficiency first, in all the areas it touches within your environment and the proposed architecture. Think of it for what it really is, consider how it impacts use, and ask your vendor for specifics. You may well be surprised by the answers you get to the real questions...

More Stories By Alan Conboy

A 20-year industry veteran, Alan Conboy is a technology evangelist who specializes in designing, prototyping, selling, and implementing disruptive storage and virtualization technologies targeted at the SMB and mid-market. He was a first mover in the x86/x64 hyperconvergence space and one of the first 30 people ever certified by SNIA.
