The Revolution Continues

The consensus seems to be, at least from the myriad surveys, studies, and research, that cloud as a model is the right answer; it’s just the location that’s problematic for most organizations.

Organizations aren’t ignoring reality; they know there are real benefits associated with cloud computing. But they aren’t yet – and may never be – willing to give up control. And there are good reasons to maintain that control, from security to accountability to agility. 

But the “people” still want the benefits of cloud, so the question is: how do we put the power of (cloud | elastic | on-demand) computing into the hands of the people who will benefit from it without requiring that they relocate to a new address “in the cloud”?


The problem is that all the cloud providers have the secret sauce to their efficient, on-demand infrastructures locked up in the palace. They certainly aren’t going to – and shouldn’t really be expected to – reveal those secrets. It’s part of their competitive advantage, after all.

Unlike the French back in 1789 – who decided that if the nobility wasn’t going to share their cake then, well, they’d just revolt, execute them, and take the cake themselves – there’s no way you can “force” cloud providers to hand over their secret sauce. You can revolt, of course, but such a revolution will be digital, not physical, and it’s not really going to change the way providers do business.


EAT THEIR CAKE AND HAVE IT TOO


The problem isn’t necessarily that enterprises don’t want to use the cloud at all. In fact, many organizations are using the cloud or see potential use for the cloud, just not for every application for which they are responsible. Some applications will invariably end up in the cloud while others remain tethered to the local data center for years to come due to integration or security concerns, or just the inherent difficulty in moving something like a COBOL application on IBM big iron into the cloud. Yes, such applications still exist. Yes, they still run businesses. Yes, I know it’s crazy but it works for them and trying to get them to “modernize” is like trying to convince your grandmother she needs an iPhone.

For applications that have been – or will be – moved to the cloud, there it is; that’s all you need. But for those “left behind” for which you’d really like the same benefits of an on-demand, elastic infrastructure – such that you’re not wasting compute resources – you need a way to move from a fairly static network and application network infrastructure to something a whole lot more dynamic.

You’ve probably already invested in a virtualization technology. That’s the easy part. The harder part is implementing the automation and intelligent provisioning necessary to maximize utilization of compute resources across the data center, and somehow managing the volatility that results from moving resources around in a way that optimizes the data center. This is the “secret sauce” part.

What you still need to do:

  1. Normalize storage. One of the things we forget is that in an environment where applications can be deployed at will on any of X physical servers, we either (a) need to keep a copy of the virtual image on every physical server or (b) need a consistent method of access from each physical server so the image can be loaded and executed. As (a) is a management nightmare and could, if you have enough applications, use far more disk space than is reasonable, you’ll want to go with (b). This means implementing a storage layer in your architecture that is normalized – that is, access from any physical machine is consistent. Storage/file virtualization is an excellent method of implementing the storage layer and providing that consistency, and it also happens to make more efficient use of your storage capabilities. (A quick sketch of such a consistency check appears after this list.)
  2. Delegate authority. If you aren’t going to be provisioning and de-provisioning manually, then something – some system, some application, some device – needs to be the authoritative source of these “events”. This could be VMware, Microsoft, a custom application, a third-party solution, etc. Whatever has the ability to interrogate and direct action based on specific resource conditions across the infrastructure – servers, network, application network, security – is a good place to look for this authoritative source.
  3. Prepare the infrastructure. This may be more or less difficult depending on the level of integration that exists between the authoritative source and the infrastructure. Infrastructure needs to be prepared to provide feedback to, and take direction from, the source of authority in the virtual infrastructure. For example, if the authority “knows” that a particular application is nearing capacity, it may (if so configured) decide to spin up another instance. Doing so kicks off an entire chain of events that includes assignment of an IP address, activation of security policies, and recognition by the application delivery network that a new instance is available and should be included in future application routing decisions. (A minimal sketch of this event-driven chain appears after this list.)

    This is the “integration”, the “collaboration”, the “connectivity intelligence” we talk about with Infrastructure 2.0. Many of the moving parts are already integrated – or capable of integrating – with virtual management offerings, and they both give and take feedback from such an authoritative source when making decisions about routing, application switching, user access, etc., in real time. If the integration with the authoritative source you choose does not exist, then you have a few options:
    1. Build/acquire a different source of authority. One that most of the infrastructure does integrate with.
    2. Invest in a different infrastructure solution. One that does integrate with a wide variety of virtual infrastructure management systems and that is likely to continue to integrate with systems in the future. Consider new options, too. For example, HP ProCurve ONE Infrastructure is specifically designed for just such an environment. It may be time to invest in a new infrastructure, one that is capable of seeing you through the changes that are coming (and they are coming, do not doubt that) in the near and far future.
    3. Wait. Yes, this is always an option. If you explore what it would take and decide it’s too costly, the technology is too immature, or it’s just going to take too long right now you can always stop. There’s no time limit on migrating from a static architecture to a dynamic one. No one’s going to make fun of you because you decided to wait. It’s your data center, if it’s just not the right time then it’s not the right time. Period. Eventually the system management tools will exist that handle most of this for you, so perhaps waiting is the right option for your organization.
  4. Define policies. This means more than just the traditional network, security, and application performance policies that are a normal part of a deployment. It also means defining thresholds on compute resources and utilization, and determining at what point it is necessary to spin up – or down – application images. One of the things you’ll need to know is how long it takes to spin up an application. If it takes five minutes, you’ll need to tweak the policies surrounding provisioning to ensure that the “spin up” process starts before you run out of capacity, such that existing allocated resources can handle the load until the new instance is online. (A back-of-the-envelope sketch of this lead-time math appears after this list.)

    This process is one of ongoing tweaking and modification. You’re unlikely to “get it perfect” the first time, and you’ll need to evaluate the execution of these policies on an ongoing basis until you – and business stakeholders – are satisfied with the results. This is the reason visibility is so important in a virtualized infrastructure: you need to be able to see and understand the flow of traffic and data, and how the execution of policies affects everything from availability to security to performance, in order to optimize them in a way that makes sense for your infrastructure, application, and business needs.
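
To make step 1 concrete, here is a minimal sketch of what “normalized” access means: every physical server should see the image store at the same path, with the same content. The host names, the canary path, and the use of ssh below are illustrative assumptions, not features of any particular storage/file virtualization product.

    # Verify that every physical host sees the same image store.
    # Hosts and paths are hypothetical; substitute your own.
    import hashlib
    import subprocess

    HOSTS = ["host-01", "host-02", "host-03"]   # hypothetical physical servers
    CANARY = "/mnt/images/.canary"              # same path expected on every host

    def canary_digest(host):
        # Hash a known file over ssh; identical digests mean every host
        # has consistent access to the shared storage layer.
        result = subprocess.run(["ssh", host, "cat", CANARY],
                                capture_output=True, check=True)
        return hashlib.sha256(result.stdout).hexdigest()

    digests = {host: canary_digest(host) for host in HOSTS}
    if len(set(digests.values())) == 1:
        print("Storage layer is normalized: all hosts see the same image store.")
    else:
        print(f"Inconsistent access detected: {digests}")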
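
Steps 2 and 3 together describe an event-driven pattern: a single authoritative source publishes provisioning events, and each piece of infrastructure subscribes and reacts. The sketch below is a bare-bones illustration of that pattern; every class and handler name is hypothetical, standing in for your virtualization manager, IP address management, security, and application delivery systems.

    # A minimal sketch of the authoritative-source pattern.
    class ProvisioningAuthority:
        """The single authoritative source of provisioning events."""
        def __init__(self):
            self._subscribers = []

        def subscribe(self, handler):
            # Infrastructure components register to receive events.
            self._subscribers.append(handler)

        def publish(self, event, **details):
            # Fan each event out to every registered component, in order.
            for handler in self._subscribers:
                handler(event, details)

    def ipam_handler(event, details):
        if event == "instance_up":
            print(f"IPAM: assigned an address to {details['instance']}")

    def security_handler(event, details):
        if event == "instance_up":
            print(f"Security: activated policies for {details['instance']}")

    def adc_handler(event, details):
        if event == "instance_up":
            # The application delivery network adds the instance to its pool
            # so it is included in future application routing decisions.
            print(f"ADC: added {details['instance']} to pool {details['pool']}")

    authority = ProvisioningAuthority()
    for handler in (ipam_handler, security_handler, adc_handler):
        authority.subscribe(handler)

    # The authority decides an application is nearing capacity and spins up
    # another instance, kicking off the entire chain of events.
    authority.publish("instance_up", instance="app-vm-07", pool="web-app")

In a real deployment the handlers would be API calls into the respective systems rather than print statements; the point is simply that every component takes direction from one source of authority instead of acting in a vacuum.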
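
Finally, the lead-time arithmetic behind step 4, as a back-of-the-envelope sketch. All the numbers are illustrative assumptions; the point is that the provisioning trigger must account for how much load growth the existing instances will absorb while the new one is booting.

    # Illustrative numbers only -- measure your own environment.
    SPIN_UP_SECONDS = 5 * 60        # observed time to bring an instance online
    CAPACITY_RPS = 1000             # requests/sec current instances can handle
    GROWTH_RPS_PER_SEC = 0.5        # observed rate at which load is climbing

    # Load we expect to absorb while the new instance is booting.
    headroom_needed = GROWTH_RPS_PER_SEC * SPIN_UP_SECONDS    # 150 rps

    # Provisioning must start no later than this load level.
    trigger_rps = CAPACITY_RPS - headroom_needed              # 850 rps
    print(f"Start provisioning at {trigger_rps / CAPACITY_RPS:.0%} utilization")
    # -> Start provisioning at 85% utilization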

THE BEST OF BOTH WORLDS


What we are likely to see in the future is a hybrid model of computing; one in which organizations take advantage of cloud computing models both internally and externally as befits the needs of their organization and applications. The dynamic infrastructure revolution is about ensuring you have the means to support a cloud model internally such that you can make the decision on whether any given application should reside “out there” or “in here”. The dynamic infrastructure revolution is about realizing the benefits of a cloud computing model internally as well as externally, so you don’t have to sacrifice performance, or reliability, or security just to reduce costs. The dynamic infrastructure revolution is about a change in the way we view network and application network infrastructure by elevating infrastructure to a first-class citizen in the overall architecture; one that actively participates in the process of delivering applications and provides value through that collaboration.

The dynamic infrastructure revolution is about a change in the way we think about networks. Are they dumb pipes that make routing and security and application delivery decisions in a vacuum or are they intelligent, dynamic partners that provide real value to both applications and the people managing them?

The dynamic infrastructure revolution is not about removing the power of the cloud, it’s about giving that power to the people, too, so both can be leveraged in a way that maximizes the efficiency of all applications, modern and legacy, web and client-server, virtualized and physical.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
