Software-Defined Storage Moving to the Forefront

A gift that keeps giving, software-defined storage now showing IT architecture-wide benefits

The next BriefingsDirect deep-dive discussion explores how one of the most costly and complex parts of any enterprise's IT infrastructure -- storage -- is being dramatically improved by the accelerating adoption of software-defined storage (SDS).

The ability to choose low-cost hardware, to manage across different types of storage, and to radically simplify data storage via intelligent automation amounts to a virtual rewriting of the economics of data.

And as IT leaders seek to simultaneously tackle the storage pain points of scalability, availability, agility, and cost, software-defined storage is also providing significant strategic- and architectural-level benefits.

We're joined by two executives from VMware to unpack these efficiencies and examine the broad innovation behind the rush to exploit software-defined storage, Alberto Farronato, Director of Product Marketing for Cloud Infrastructure Storage and Availability at VMware, and Christos Karamanolis, Chief Architect and a Principal Engineer in the Storage and Availability Engineering Organization at VMware. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Software-defined storage is changing something more fundamental than just data and economics of data. How do you see the wider implications of what’s happening now that software-defined storage is becoming more common?

Farronato: Software-defined storage is certainly about addressing the cost issue of storage, but more importantly, as you said, it’s also about operations. In fact, the overarching goal that VMware has is to bring to storage the efficient operational model that we brought to compute with server virtualization. So we have a set of initiatives around improving storage on all levels, and building a parallel evolution of storage to what we did with compute. We're very excited about what’s coming.

Gardner: Christos, one of my favorite sayings is that "architecture is IT destiny." How do you see software-defined storage at that architectural level? How does it change the game?

Concept of flexibility

Karamanolis: The fundamental architectural principle behind software-defined storage is the concept of flexibility. It's the idea of being able to adapt to different hardware resources, whether those are magnetic disks, flash storage, or other types of non-volatile memories in the future.


How does the end user adapt their storage platform to the needs they have -- in terms of the capabilities of the hardware, the ratios of the different types of storage, and the networking, CPU, and memory resources needed to execute and provide their services -- going forward?

That’s one part of flexibility, but there is another very interesting part, which addresses a very acute problem for VMware customers today: the operational complexity of provisioning storage for applications and virtual machines (VMs), which have become the standard way of packaging applications.

Today, customers virtualize their environments, but in general they still have to provision physical storage containers. They have to anticipate their usage over time and make an investment up front in resources that they'll need over a long period. So they create logical unit numbers (LUNs), file services, or whatever else is needed, for a period of time that spans anything from weeks to years.

Software-defined storage advocates a new model, where applications and VMs are provisioned at the time that the user needs them. The storage resources that they need are provisioned on-demand, exactly for what the application and the user needs -- nothing more or less.

The idea is that you do this in a way that is really intuitive to the end-user, in a way that reflects the abstractions that user understands -- applications, the data containers that the applications need, and the characteristics of the application workloads.

So those two aspects of flexibility are the two fundamental aspects of any software-defined storage.
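To make that on-demand, policy-driven model concrete, here is a minimal sketch in Python. It is illustrative only, not VMware's actual API; the StoragePolicy class and provision_vm_storage function are hypothetical names, and the policy fields simply mirror the kinds of requirements described above.

    # Illustrative sketch only -- not VMware's API; names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class StoragePolicy:
        """Requirements expressed in application terms, not hardware terms."""
        name: str
        failures_to_tolerate: int   # availability requirement
        iops_reservation: int       # performance goal
        snapshot_schedule: str      # data-protection requirement, e.g. "hourly"

    def provision_vm_storage(vm_name: str, size_gb: int, policy: StoragePolicy) -> dict:
        """Carve out exactly the capacity the VM needs, at the time it is
        created, tagged with the policy it must stay compliant with."""
        return {
            "vm": vm_name,
            "capacity_gb": size_gb,          # nothing more, nothing less
            "policy": policy.name,
            "provisioned_on_demand": True,
        }

    gold = StoragePolicy("gold", failures_to_tolerate=1, iops_reservation=2000,
                         snapshot_schedule="hourly")
    print(provision_vm_storage("web-01", size_gb=40, policy=gold))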

Gardner: As we see this increased agility, flexibility, the on-demand nature of virtualization now coupled with software-defined storage, how are organizations benefiting at a business level?


Farronato: There are several benefits and several outcomes of adopting software-defined storage. The first that I would call out is the ability to be much more responsive to the business needs -- and the changing business needs -- in the form of delivering what your applications need, faster.

As Christos was saying, in the old model you had to guess ahead of time what the applications would need, spend a lot of time trying to preconfigure and predetermine the various service levels -- performance, availability, and other things that would be required of your storage by your applications -- and so spend a lot of time setting things up, and then hopefully, down the line, consume it the way you thought you would.

Difficult change management

In many cases, this causes long provisioning cycles. It causes difficult change management after you provision the application. You find that you need to change things around, because either the business needs have changed or what you guessed was wrong. For example, customers have to face constant data migration.

With the policy-driven approach that Christos has just described -- with the ability to create these storage services on-the-fly for a policy approach -- you don’t have to do all that pre-provisioning and preconfiguring. As you create the VMs and specify the requirements, the system responds accordingly. When you have to change things, you just modify the policy and everything in the underlying infrastructure changes accordingly.

Responsiveness, in my opinion, is the one biggest benefit that IT will deliver to the business by shifting to software-defined storage. There are many others, but I want to focus on the most important one.
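As a rough illustration of that "modify the policy and the infrastructure follows" idea, the toy sketch below shows a single policy edit fanning out to every VM bound to that policy. The names are hypothetical and the reconcile messages stand in for work a platform like Virtual SAN performs automatically; this is not the product's actual mechanism.

    # Illustrative sketch only; the reconcile step stands in for work the
    # storage platform performs automatically.
    policies = {"gold": {"failures_to_tolerate": 1, "iops_reservation": 2000}}
    vm_bindings = {"web-01": "gold", "db-01": "gold"}

    def update_policy(name: str, **changes) -> list:
        """Edit the policy once; every VM bound to it is brought back into
        compliance, with no per-LUN or per-volume rework."""
        policies[name].update(changes)
        return [f"reconfigure {vm} to satisfy {policies[name]}"
                for vm, bound in vm_bindings.items() if bound == name]

    for action in update_policy("gold", failures_to_tolerate=2):
        print(action)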


Gardner: Can you explain what happens when software-defined storage becomes strategic at the applications level, perhaps with implications across the entire data lifecycle?

Karamanolis: One thing we already see, not only among VMware customers, but as a more generic trend, is that infrastructure administrators -- the guys who do the heavy lifting in the data centers day in and day out, who manage much more than the traditional servers and applications -- are getting more and more into managing networks and data storage.


Talking about changing models here, what we see is that new tools have to be developed, and software-defined storage is a key technology evolution behind that. These are tools that allow those administrators to manage all the resources they need to get their day-to-day jobs done.

Here, software-defined storage is playing a key role. With technology like Virtual SAN, we make the management of storage accessible to people who are not necessarily experts in the esoterica of a certain vendor's hardware. It allows more IT professionals to specify the requirements of their applications.

Then, the software storage platform can apply those requirements on the fly to provision, configure, and dynamically monitor and enforce compliance for the policy and requirements that are specified for the applications. This is a major shift we see in the IT industry today, and it’s going to be accelerated by technologies like Virtual SAN.
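Conceptually, that "monitor and enforce compliance" step is a check the platform runs continuously for every object against its policy. The sketch below is only a toy version of that idea, with hypothetical field names, not the actual Virtual SAN implementation.

    # Conceptual compliance check -- not the actual Virtual SAN implementation.
    def is_compliant(actual: dict, policy: dict) -> bool:
        """An object is compliant when what is actually configured for it
        meets or exceeds every requirement stated in its policy."""
        return (actual["copies"] >= policy["failures_to_tolerate"] + 1
                and actual["iops_reserved"] >= policy["iops_reservation"])

    policy = {"failures_to_tolerate": 1, "iops_reservation": 1000}
    print(is_compliant({"copies": 2, "iops_reserved": 1200}, policy))  # True
    print(is_compliant({"copies": 1, "iops_reserved": 1200}, policy))  # False -> remediate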

Gardner: When you go to software-defined storage, you gain policy-level control, automation, and intelligence in how you execute on storage. How does software-defined storage simplify storage overall?

Distributed platform

Karamanolis: That's an interesting point, because if you think about this superficially, we're now going from a single, monolithic storage entity to a storage platform that is distributed, controlled by software, and able to span tens or sometimes hundreds of physical nodes and/or entities. Isn’t complexity greater in the latter case?

The reality is that, whether out of necessity or because we've learned a lot over the last 10 to 15 years about how to manage and control large distributed systems, there has been a parallel evolution of ideas about how you manage your infrastructure, including the management of storage.


As we alluded to already, the fundamental model here is that the end user, the IT professional that manages this infrastructure, expresses in a descriptive way, what they need for their applications in terms of CPU, memory, networking, and, in our case, storage.

What do I mean by descriptive? The IT professional does not need to understand all the internal details of the technologies or the hardware used at any point in time, and which may evolve over a period of time.

Instead, they express at a high level a set of requirements -- we call them policies -- that capture the requirements of the application. For example, in the case of storage, they specify the level of availability that is required for certain applications and performance goals, and they can also specify things like the data protection policies for certain data sets.

Of course, for all those things, nothing comes for free. So the user has to be exposed to the consequences of the policy that they choose. There is a cost there for every one of those services.

But the key point is that the software platform automatically configures the appropriate resources, whether the data is spread across multiple physical devices, spread across the network, or replicated asynchronously to a specified remote location in order to comply with certain disaster recovery (DR) policies.

All those things are done by the software, without the user having to worry about whether the storage underneath is highly available storage, in which case only two copies of the data may be needed, or low-end hardware that would require three or four copies of the data. All those things are determined automatically by the platform.

This is the new model. Perhaps I'm oversimplifying some of these problems, but the idea is that the user should really not have to know the specific hardware configuration of a disk array. If the requirements cannot be met, it is because the necessary technologies have not been incorporated into the storage platform.
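The two-copies-versus-three-or-four-copies decision Christos describes can be pictured as a small placement function: the stated availability requirement sets a baseline number of copies, and the platform may add more on less reliable hardware. This is a toy illustration of the idea, not Virtual SAN's actual placement algorithm.

    # Toy placement logic -- not Virtual SAN's actual algorithm.
    def copies_required(failures_to_tolerate: int, hardware_tier: str) -> int:
        """With simple mirroring, tolerating N failures needs N + 1 copies.
        On low-end hardware the platform might keep an extra copy to meet
        the same stated requirement."""
        base = failures_to_tolerate + 1
        extra = 1 if hardware_tier == "low-end" else 0
        return base + extra

    print(copies_required(1, "highly-available"))  # 2 copies
    print(copies_required(1, "low-end"))           # 3 copies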

Policy driven

Farronato: Virtual SAN is a completely policy-driven product, and we call it VM-centric or application-centric. The whole management paradigm for storage, when you use Virtual SAN, is predicated around the VM and the policies that you create and you assign to the VMs as you create your VMs, as you scale your environment.

One of the great things that you can achieve with Virtual SAN is providing differentiated service levels to individual VMs from a single data store. In the past, you had to create individual LUNs or volumes, assign data services like replication or RAID levels to each individual volume, and then map the application to them.

With Virtual SAN, you're simply going to have a capacity container that happens to be distributed across a number of nodes in your cluster -- and everything that happens from that point on is just dropping your VMs into this container. It automatically instantiates all the data services by virtue of having built-in intelligence that interprets the requirements of the policy.


That makes the system extremely simple and intuitive to use. In fact, one of the core design objectives of Virtual SAN is simplicity. If you look at a short description of the system -- radically simple, hypervisor-converged storage -- "radically simple" means taking the idea of eliminating the complexity of storage to the next level.

Gardner: We've talked about simplicity, policy driven, automation, and optimization. It seems to me that those add up very quickly to a fit-for-purpose approach to storage, so that we are not under-provisioning or over-provisioning, and that can lead to significant cost-savings.

So let’s translate this back to economics. Alberto, do you have any thoughts on how we lower total cost of ownership (TCO) through these SDS approaches of simplicity, optimization, policy driven, and intelligence?

Farronato: There are always two sides of the equation. There is a CAPEX and an OPEX component. Looking at how a product like Virtual SAN reduces CAPEX, there are several ways, but I can mention a couple of key components or drivers.

First, I'd call out the fact that it is an x86 server-based storage area network (SAN). It leverages server-side components to deliver shared storage, and by virtue of using server-side resources, right off the bat there are significant savings that you can achieve through lower-cost hardware components. The same hard drive or solid-state drive (SSD) that you would deploy in a shared external storage array can be on the order of 80 percent cheaper when purchased as a server-side component.

The other aspect that I would call out as reducing overall CAPEX is, as you said, the consume-on-demand approach or, as we often put it, grow-as-you-go. With a scale-out model, you can start with a small deployment and a small upfront investment.

You can then progressively scale out as your environment grows, with much finer granularity than you would have with a monolithic array. And as you scale, you scale compute but also IOPS, and that often goes hand in hand with the number of VMs that you are running in your cluster.

System growth

So the system grows with the size of your environment, rather than requiring you to buy a lot of resources upfront that many times remain under-utilized for a long time.

On the OPEX side, when things become simpler, it means that overall administration productivity increases. So we expect a trend where individual administrators will be able to manage a greater amount of capacity, and to do so in conjunction with management of the virtual infrastructure to achieve additional benefits.
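The grow-as-you-go arithmetic is easy to illustrate. The per-node figures below are placeholders chosen for the example, not VMware sizing guidance; the point is only that capacity, IOPS, and spend all step up in node-sized increments as the environment grows, rather than in one large upfront purchase.

    # Placeholder per-node figures -- for illustration only.
    NODE_CAPACITY_TB = 8
    NODE_IOPS = 20_000
    NODE_COST_USD = 15_000

    def cluster_profile(nodes: int) -> dict:
        """Scale-out: every node added grows capacity and IOPS together,
        roughly in step with the number of VMs the cluster hosts."""
        return {
            "nodes": nodes,
            "capacity_tb": nodes * NODE_CAPACITY_TB,
            "iops": nodes * NODE_IOPS,
            "spend_usd": nodes * NODE_COST_USD,
        }

    # Start small, then add nodes only when the environment actually grows.
    for n in (3, 4, 8):
        print(cluster_profile(n))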

Gardner: Virtual SAN has been in general availability now for several months, since March 2014, after being announced last year at VMworld 2013. Now that it’s in place and growing in the market, are there any unintended benefits or unintended consequences from that total-cost perspective in real-world day-in, day-out operations?


I'm looking for ways in which a typical organization is seeing software-defined storage benefiting them culturally and organizationally in terms of skills, labor, and that sort of softer metric.

Karamanolis: That’s a very interesting point. As technologists, we sometimes tend to overlook the cultural shifts that technology causes in the field. In the case of Virtual SAN, we see a lot of VMware administrators, as one customer put it, being empowered to manage their own storage in the vertical they control within their IT organization, without having to depend on the centralized storage organization in the company.

What we really see here is a shift in paradigm in how our customers use Virtual SAN today: it enables them to have a much faster turnaround when trying new applications and new workloads, and to get them from test and dev into production without being constrained by the processes and the timelines imposed by a central storage IT organization.

This is a major achievement, and a major tool for VMware administrators in the field, which we believe is going to lead the way to much wider adoption of Virtual SAN and software-defined storage in general.

Gardner: How does this simplification and automation have a governance, risk, and compliance (GRC) benefit?

Farronato: With this approach, you have a more granular way to control the service levels that you deliver to your internal customers, and a more efficient way to do it, by standardizing through policies rather than trying to standardize service levels over categories of hardware.

Self-service consumption

You can more easily keep track of what each individual application is receiving and whether it's in compliance with the particular policy that you specified. You can also now enable self-service consumption more easily and effectively.

We have, as part of our Policy-Based Management Engine, APIs that allow for integration with cloud automation frameworks, such as vCloud Automation Center or OpenStack, where end users will be able to consume a predefined category of service.

It will speed up the provisioning process, while at the same time, enabling IT to maintain that control and visibility that all the admins want to maintain over how the resources are consumed and allocated.
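The self-service flow Alberto outlines reduces to end users picking from a catalog of predefined service categories while IT keeps control of what those categories mean. The sketch below is a hypothetical rendering of that flow in Python; it is not the actual vCloud Automation Center or OpenStack integration, and the catalog tiers are made up for the example.

    # Hypothetical self-service flow -- not the actual vCloud Automation Center
    # or OpenStack integration; the catalog below is an assumption.
    SERVICE_CATALOG = {          # predefined categories of service exposed by IT
        "bronze": {"failures_to_tolerate": 0, "iops_limit": 500},
        "silver": {"failures_to_tolerate": 1, "iops_limit": 2000},
        "gold":   {"failures_to_tolerate": 1, "iops_limit": 10000},
    }

    def self_service_request(user: str, vm_name: str, tier: str) -> dict:
        """End users consume a predefined tier; IT retains control and
        visibility because the tiers and their policies are defined centrally."""
        if tier not in SERVICE_CATALOG:
            raise ValueError(f"{tier!r} is not an offered service category")
        return {"requested_by": user, "vm": vm_name,
                "policy": SERVICE_CATALOG[tier], "status": "provisioning"}

    print(self_service_request("app-team", "analytics-07", "silver"))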


Gardner: I suppose there are as many on-ramps to software-defined data center as there are enterprises. So it's interesting that it can be done at that custom level, based on actual implementation, but also have a strategic vision or a strategic architectural direction. So, it's future-proof as well as supporting legacy.

How about some examples? Do we have either use-case scenarios or an actual organization that we can look to and say that they have deployed Virtual SAN, they have benefited in certain ways, and they are indicative of what others should expect?

Farronato: Let me give you some statistics and some interesting facts. Looking at some of the early examples, in the last three months since the product became available we've seen significant success already in the marketplace, with a great start in terms of adoption from our customers.


We already have more than 300 paying customers in just one quarter. That follows the great success of the public beta that ran through the fall and the early winter with several thousand customers testing and taking a look at the product.

We are finding that virtual desktop infrastructure (VDI) is the most popular use case for Virtual SAN right now. There are a number of reasons why Virtual SAN fits this model, from its scale-out design to the fact that the hyper-converged storage architecture is particularly well suited to addressing the storage issues of a VDI deployment.

DevOps, or, if you prefer, preproduction environments loosely defined as test/dev, is another area. There are also disaster recovery targets, in combination with vSphere Replication and Site Recovery Manager. And some of the more aggressive customers are starting to deploy it in production use cases.


As I said, the 300 customers that we already have span the gamut in terms of size and names. We have large enterprises and banks, down to smaller accounts and companies, including education and smaller SMBs.

There are a couple of interesting cases that we'll be showcasing at VMworld 2014 in late August. If you look at the session list, they're already listed as actual use cases presented by our customers themselves.

Adobe will be talking about their massive implementation of Virtual SAN for their production environment, on their data analytics platform. There will be another interesting use case with TeleTech, talking about how they have leveraged Cisco UCS to advance their VDI deployments.

VDI equation

Gardner: I'd like to revisit the VDI equation for a moment, because one of the things that’s held people up is the impact on storage and the costs associated with the storage to support VDI. But if you're able to bring down costs by 50 percent, in some cases, using software-defined storage, that radically changes the VDI equation. Isn’t that the case, Christos -- can you now say that you can do VDI more cheaply than with almost any other approach to a virtualized desktop?

Karamanolis: Absolutely, and the cost of storage is the main impediment for organizations implementing a VDI strategy. With Virtual SAN, as Alberto mentioned earlier, we provide a very compelling cost proposition, both in terms of the capacity of the storage and the performance you get out of the storage.


Alberto already touched on the cost of capacity, referring to the difference between the prices one can get from server vendors and the open market, as opposed to the same hardware being procured as part of a traditional disk array.

I'd like to touch on something that is an unsung hero of Virtual SAN and of VDI deployment especially, and that's performance. Virtual SAN, as should be clear by now, is a storage platform that is strongly integrated with our hypervisor. Specifically, the data path implementation and the distributed protocols that are implemented in Virtual SAN are part of the ESXi kernel.

That means we can achieve very high performance while minimizing the CPU cycles consumed to serve those high I/O rates. What that means, especially for VDI, is that we use only a small slice of the CPU and memory of every single ESXi host to implement this distributed, software-driven storage controller.

It doesn't noticeably affect the other VMs that run on the same ESXi host. We have already published extensive and detailed performance evaluations, where we compare VDI deployments on Virtual SAN versus an external disk array.

And even though Virtual SAN's usage is capped at 10 percent of local CPU and memory on those hosts, the consolidation ratio -- the number of virtual desktops we run on those clusters -- is virtually unaffected, while we get the full performance that would be realized with an external, all-flash disk array. So this is the value of Virtual SAN in those environments.

Essentially, you get what your VDI workloads need, in both capacity and performance, for a fraction of the cost you would pay for traditional disk array storage.

Gardner: We're only a few weeks from VMworld 2014 in San Francisco, and I know there's going to be a lot of interest in mobile and in desktop infrastructure for virtualized desktops and applications.

Do you think that we can make some sort of a determination about 2014? Maybe this is the year that we turn the corner on VDI, and that becomes a bigger driver of some of these higher efficiencies. Any closing thoughts on the vision for the software-defined data center, VDI, and the timing with VMworld, Alberto?

Last barrier

Farronato: Certainly. One of the goals we set for ourselves with this Virtual SAN release was solving the VDI use case -- eliminating probably the last barrier and enabling broader adoption of VDI across the enterprise -- and we hope that will materialize. We're very excited about what the early findings show.

With respect to VMworld and some of the other things that we'll be talking about at the conference regarding storage, we'll continue to explain our vision of software-defined storage, talk about the Virtual SAN momentum, and cover some of the key initiatives that we are rolling out with our OEM partners, such as Virtual SAN Ready Nodes.

We're going to talk about how we will extend the concept of policy management and dynamic composition of storage services to external storage, with a technology called Virtual Volumes.

There are many other things, and it's gearing up to be a very exciting VMworld conference on the storage front.

Gardner: Last word to you, Christos. Do you have any thoughts about why 2014 is such a pivotal time in the software-defined storage evolution?

Karamanolis: I think that this is the year when the vision that we've been talking about -- we and the industry at large -- is going to become real in the eyes of some of the bigger, more conservative enterprise IT organizations.

With Virtual SAN from VMware, we're going to make a very strong case at VMworld that this is a real enterprise-class storage system that's applicable across a very wide range of use cases and customers.

With actual customers using the product in the field, I believe it is going to be strong evidence for the rest of the industry that software-defined storage is real, that it solves real-world problems, and that it is here to stay.

Together with opening up some of the management APIs that Virtual SAN uses in VMware products to third parties, through the Virtual Volumes technology that Alberto mentioned, we'll also be initiating an industry-wide push to provide and offer software-defined storage solutions beyond just VMware and the early adopters -- mostly startups so far -- that have embraced this model. It's going to become a key industry direction.


