

Java IoT: Blog Feed Post

The Revolution Continues

The consensus seems to be, at least from the myriad surveys, studies, and research, that cloud as a model is the right answer

The consensus seems to be, at least from the myriad surveys, studies, and research, that cloud as a model is the right answer, it’s just the location that’s problematic for most organizations.

Organizations aren’t ignoring reality; they know there are real benefits associated with cloud computing. But they aren’t yet – and may never be – willing to give up control. And there are good reasons to maintain that control, from security to accountability to agility. 

But the “people” still want the benefits of cloud, so the question is: how do we put the power of (cloud | elastic | on-demand) computing into the hands of the people who will benefit from it without requiring that they relocate to a new address “in the cloud”?


The problem is that all the cloud providers have the secret sauce to their efficient, on-demand infrastructures locked up in the palace. They certainly aren’t – and shouldn’t really be expected to – reveal those secrets. It’s part of their competitive advantage, after all.

Unlike the French back in 1789, who decided that if the nobility wasn’t going to share their cake then, well, they’d just revolt, execute them, and take the cake themselves, there’s no way you can “force” cloud providers to hand over their secret sauce. You can revolt, of course, but such a revolution will be digital, not physical, and it’s not really going to change the way providers do business.


The problem isn’t necessarily that enterprises don’t want to use the cloud at all. In fact, many organizations are using the cloud or see potential use for the cloud, just not for every application for which they are responsible. Some applications will invariably end up in the cloud while others remain tethered to the local data center for years to come due to integration or security concerns, or just the inherent difficulty in moving something like a COBOL application on IBM big iron into the cloud. Yes, such applications still exist. Yes, they still run businesses. Yes, I know it’s crazy but it works for them and trying to get them to “modernize” is like trying to convince your grandmother she needs an iPhone.

For applications that have been (or will be) moved to the cloud, that’s all you need. But for those “left behind” for which you’d really like the same benefits of an on-demand, elastic infrastructure, so that you’re not wasting compute resources, you need a way to move from what is a fairly static network and application network infrastructure to something a whole lot more dynamic.

You’ve probably already invested in a virtualization technology. That’s the easy part. The harder part is implementing the automation and intelligent provisioning necessary to maximize utilization of compute resources across the data center and somehow managing the volatility that will occur due to the moving around of resources in a way that optimizes the data center. This is the “secret sauce” part.

What you still need to do:

  1. Normalize storage. One of the things we forget is that in an environment where applications can be deployed at will on any of X physical servers, we either (a) need to keep a copy of the virtual image on every physical server or (b) need a consistent method of access from each physical server so the image can be loaded and executed. Because (a) is a management nightmare and could, if you have enough applications, use far more disk space than is reasonable, you’ll want to go with (b). This means implementing a storage layer in your architecture that is normalized – that is, access from any physical machine is consistent. Storage/file virtualization is an excellent method of implementing that storage layer and providing that consistency, and it also happens to make more efficient use of your storage capacity.
  2. Delegate authority. If you aren’t going to be provisioning and de-provisioning manually, then something – some system, some application, some device – needs to be the authoritative source of these “events”. This could be VMware, Microsoft, a custom application, a third-party solution, etc. Whatever has the ability to interrogate and direct action based on specific resource conditions across the infrastructure – servers, network, application network, security – is a good place to look for this authoritative source.
  3. Prepare the infrastructure. This may be more or less difficult depending on the level of integration that exists between the authoritative source and the infrastructure. The infrastructure needs to be prepared to provide feedback to and take direction from the source of authority in the virtual infrastructure. For example, if the authority “knows” that a particular application is nearing capacity, it may (if so configured) decide to spin up another instance. Doing so kicks off an entire chain of events that includes assignment of an IP address, activation of security policies, and recognition by the application delivery network that a new instance is available and should be included in future application routing decisions.

    This is the “integration”, the “collaboration”, the “connectivity intelligence” we talk about with Infrastructure 2.0. Many of the moving parts are already capable of integrating – and already integrated – with virtual management offerings, and both give and take feedback from such an authoritative source when making decisions about routing, application switching, user access, etc., in real time. If the integration with the authoritative source you choose does not exist, then you have a few options:
    1. Build/acquire a different source of authority. One that most of the infrastructure does integrate with.
    2. Invest in a different infrastructure solution. One that does integrate with a wide variety of virtual infrastructure management systems and that is likely to continue to integrate with systems in the future. Consider new options, too. For example, HP ProCurve ONE Infrastructure is specifically designed for just such an environment. It may be time to invest in a new infrastructure, one that is capable of seeing you through the changes that are coming (and they are coming, do not doubt that) in the near and far future.
    3. Wait. Yes, this is always an option. If you explore what it would take and decide it’s too costly, the technology is too immature, or it’s just going to take too long right now you can always stop. There’s no time limit on migrating from a static architecture to a dynamic one. No one’s going to make fun of you because you decided to wait. It’s your data center, if it’s just not the right time then it’s not the right time. Period. Eventually the system management tools will exist that handle most of this for you, so perhaps waiting is the right option for your organization.
  4. Define policies. This means more than just the traditional network, security, and application performance policies that are a normal part of a deployment. This also means defining thresholds on compute resources and utilization and determining at what point it is necessary to spin up – or down – application images. One of the things you’ll need to know is how long it takes to spin up an application. If it takes five minutes to spin up an application, you’ll need to tweak the policies surrounding provisioning to ensure that the “spin up process” starts before you run out of capacity, such that existing allocated resources can handle the load until that new instance is online.

    This process is one of ongoing tweaking and modification. You’re unlikely to “get it perfect” the first time, and you’ll need to evaluate the execution success of these policies on an ongoing basis until you – and business stakeholders – are satisfied with the results. This is the reason visibility is so important in a virtualized infrastructure: you need to be able to see and understand the flow of traffic and data, and how the execution of policies affects everything from availability to security to performance, in order to optimize them in a way that makes sense for your infrastructure, application, and business needs.
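To make the threshold-and-lead-time point in step 4 concrete, here is a minimal sketch of such a provisioning policy. All of the names (CapacityPolicy, the thresholds, the growth rate) are hypothetical illustrations, not any vendor’s API; in practice this decision logic would live in whatever authoritative source you chose in step 2.

```python
# Hypothetical sketch of a threshold-driven provisioning decision that
# accounts for spin-up lead time, as described in step 4 above.
from dataclasses import dataclass

@dataclass
class CapacityPolicy:
    scale_up_threshold: float    # e.g. 0.8 = act at 80% utilization
    scale_down_threshold: float  # e.g. 0.3 = consider de-provisioning
    spin_up_seconds: int         # how long a new instance takes to come online
    growth_per_second: float     # observed utilization growth rate

    def decide(self, utilization: float, instances: int) -> int:
        """Return the change in instance count: +1, 0, or -1.

        The spin-up lead time is folded in: if utilization is projected
        to cross the threshold before a new instance could come online,
        provisioning starts now rather than when the threshold is hit.
        """
        projected = utilization + self.growth_per_second * self.spin_up_seconds
        if projected >= self.scale_up_threshold:
            return +1
        if utilization <= self.scale_down_threshold and instances > 1:
            return -1
        return 0

# Example: a five-minute spin-up with utilization climbing 0.05% per second.
policy = CapacityPolicy(scale_up_threshold=0.8,
                        scale_down_threshold=0.3,
                        spin_up_seconds=300,
                        growth_per_second=0.0005)

print(policy.decide(utilization=0.70, instances=2))  # projected 0.85 -> +1
print(policy.decide(utilization=0.50, instances=2))  # projected 0.65 -> 0
print(policy.decide(utilization=0.25, instances=2))  # under floor -> -1
```

The tuning loop described above is exactly the adjustment of these numbers: if instances keep arriving after capacity is exhausted, the assumed growth rate or spin-up time was too optimistic and the policy needs revisiting.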


What we are likely to see in the future is a hybrid model of computing; one in which organizations take advantage of cloud computing models both internally and externally as befits the needs of their organization and applications. The dynamic infrastructure revolution is about ensuring you have the means to support a cloud model internally such that you can make the decision on whether any given application should reside “out there” or “in here”. The dynamic infrastructure revolution is about realizing the benefits of a cloud computing model internally as well as externally, so you don’t have to sacrifice performance, or reliability, or security just to reduce costs. The dynamic infrastructure revolution is about a change in the way we view network and application network infrastructure by elevating infrastructure to a first-class citizen in the overall architecture; one that actively participates in the process of delivering applications and provides value through that collaboration.

The dynamic infrastructure revolution is about a change in the way we think about networks. Are they dumb pipes that make routing and security and application delivery decisions in a vacuum or are they intelligent, dynamic partners that provide real value to both applications and the people managing them?

The dynamic infrastructure revolution is not about removing the power of the cloud, it’s about giving that power to the people, too, so both can be leveraged in a way that maximizes the efficiency of all applications, modern and legacy, web and client-server, virtualized and physical.




More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
