The Revolution Continues

The consensus seems to be, at least from the myriad surveys, studies, and research, that cloud as a model is the right answer; it’s just the location that’s problematic for most organizations.

Organizations aren’t ignoring reality; they know there are real benefits associated with cloud computing. But they aren’t yet – and may never be – willing to give up control. And there are good reasons to maintain that control, from security to accountability to agility. 

But the “people” still want the benefits of cloud, so the question is: how do we put the power of (cloud | elastic | on-demand) computing into the hands of the people who will benefit from it without requiring that they relocate to a new address “in the cloud?”


The problem is that all the cloud providers have the secret sauce to their efficient, on-demand infrastructures locked up in the palace. They certainly aren’t going to – and shouldn’t really be expected to – reveal those secrets. It’s part of their competitive advantage, after all.

Unlike the French back in 1789, who decided that if the nobility wasn’t going to share their cake then, well, they’d just revolt, execute them, and take the cake themselves, there’s no way you can “force” cloud providers to hand over their secret sauce. You can revolt, of course, but such a revolution will be digital, not physical, and it’s not really going to change the way providers do business.


EAT THEIR CAKE AND HAVE IT TOO


The problem isn’t necessarily that enterprises don’t want to use the cloud at all. In fact, many organizations are using the cloud or see potential use for the cloud, just not for every application for which they are responsible. Some applications will invariably end up in the cloud while others remain tethered to the local data center for years to come due to integration or security concerns, or just the inherent difficulty in moving something like a COBOL application on IBM big iron into the cloud. Yes, such applications still exist. Yes, they still run businesses. Yes, I know it’s crazy but it works for them and trying to get them to “modernize” is like trying to convince your grandmother she needs an iPhone.

For applications that have been – or will be – moved to the cloud, the cloud itself provides those benefits; that’s all you need. But for those “left behind” for which you’d really like the same benefits of an on-demand, elastic infrastructure – so that you’re not wasting compute resources – you need a way to move from a fairly static network and application network infrastructure to something a whole lot more dynamic.

You’ve probably already invested in a virtualization technology. That’s the easy part. The harder part is implementing the automation and intelligent provisioning necessary to maximize utilization of compute resources across the data center, and somehow managing the volatility that results from moving resources around to optimize it. This is the “secret sauce” part.

What you still need to do (minimal, illustrative sketches of each step follow the list):

  1. Normalize storage. One of the things we forget is that in an environment where applications can be deployed at will on any of X physical servers, we either (a) need to keep a copy of the virtual image on every physical server or (b) need a consistent method of access from each physical server so the image can be loaded and executed. As (a) is a management nightmare and could, if you have enough applications, use far more disk space than is reasonable, you’ll want to go with (b). This means implementing a storage layer in your architecture that is normalized – that is, access from any physical machine is consistent. Storage/file virtualization is an excellent method of implementing that storage layer and providing that consistency, and it also happens to make more efficient use of your storage capabilities.
  2. Delegate authority. If you aren’t going to be provisioning and de-provisioning manually then something – some system, some application, some device – needs to be the authoritative source of these “events”. This could be VMware, Microsoft, a custom application, a third-party solution, etc. Whatever has the ability to interrogate and direct action based on specific resource conditions across the infrastructure – servers, network, application network, security – is a good place to look for this authoritative source.
  3. Prepare the infrastructure. This may be more or less difficult depending on the level of integration that exists between the authoritative source and the infrastructure. The infrastructure needs to be prepared to provide feedback to, and take direction from, the source of authority in the virtual infrastructure. For example, if the authority “knows” that a particular application is nearing capacity, it may (if so configured) decide to spin up another instance. Doing so kicks off an entire chain of events that includes assignment of an IP address, activation of security policies, and recognition by the application delivery network that a new instance is available and should be included in future application routing decisions.

    This is the “integration”, the “collaboration”, the “connectivity intelligence” we talk about with Infrastructure 2.0. Many of the moving parts are already capable of integrating – and already integrated – with virtual management offerings, and both give and take feedback from such an authoritative source when making decisions about routing, application switching, user access, etc. in real time. If the integration with the authoritative source you choose does not exist, then you have a few options:
    1. Build/acquire a different source of authority, one that most of the infrastructure does integrate with.
    2. Invest in a different infrastructure solution, one that does integrate with a wide variety of virtual infrastructure management systems and that is likely to continue to integrate with systems in the future. Consider new options, too. For example, HP ProCurve ONE Infrastructure is specifically designed for just such an environment. It may be time to invest in a new infrastructure, one that is capable of seeing you through the changes that are coming (and they are coming, do not doubt that) in the near and far future.
    3. Wait. Yes, this is always an option. If you explore what it would take and decide it’s too costly, the technology is too immature, or it’s just going to take too long right now, you can always stop. There’s no time limit on migrating from a static architecture to a dynamic one. No one’s going to make fun of you because you decided to wait. It’s your data center; if it’s just not the right time then it’s not the right time. Period. Eventually the system management tools will exist that handle most of this for you, so perhaps waiting is the right option for your organization.
  4. Define policies. This means more than just the traditional network, security, and application performance policies that are a normal part of a deployment. This also means defining thresholds on compute resources and utilization and determining at what point it is necessary to spin up – or down – application images. One of the things you’ll need to know is how long it takes to spin up an application. If it takes five minutes to spin up an application, you’ll need to tweak the policies surrounding provisioning to ensure that the “spin up process” starts before you run out of capacity, such that existing allocated resources can handle the load until that new instance is online.

    This process is one of ongoing tweaking and modification. You’re unlikely to “get it perfect” the first time, and you’ll need to evaluate the execution success of these policies on an ongoing basis until you – and business stakeholders – are satisfied with the results. This is why visibility is so important in a virtualized infrastructure: you need to be able to see and understand the flow of traffic and data, and how the execution of these policies affects everything from availability to security to performance, in order to optimize them in a way that makes sense for your infrastructure, application, and business needs.
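
To make these steps concrete, here are a few minimal sketches in Python. They are illustrations under assumed names and numbers, not implementations of any particular vendor’s API. First, the normalized storage layer from step 1: every physical server resolves an image through the same logical path, so the image lives once behind the storage/file virtualization layer instead of being copied to every host.

```python
# Illustrative sketch of step 1 (normalize storage). The mount point,
# application name, and path scheme are all hypothetical.

LOGICAL_MOUNT = "/images"  # the same logical path on every physical server


def image_path(app: str, version: str) -> str:
    """Return the same logical path from any host; the storage/file
    virtualization layer maps it to wherever the bytes actually live."""
    return f"{LOGICAL_MOUNT}/{app}-{version}.img"


# Any of the X physical servers can locate and load an image the same
# way, so no per-host copies (option (a) above) are needed.
print(image_path("billing", "1.4"))  # /images/billing-1.4.img
```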
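
Step 2’s authoritative source is essentially a publish/subscribe arrangement: one component owns provisioning decisions, and the rest of the infrastructure reacts to the events it emits. A hypothetical sketch (in practice the authority might be VMware, Microsoft, or a third-party orchestrator):

```python
# Minimal sketch of step 2 (delegate authority). Names are hypothetical.
from typing import Callable


class ProvisioningAuthority:
    """The single authoritative source of provision/de-provision events."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[str, str], None]] = []

    def subscribe(self, handler: Callable[[str, str], None]) -> None:
        # Servers, network, application network, and security gear register here.
        self._subscribers.append(handler)

    def emit(self, event: str, app: str) -> None:
        # Broadcast the decision so every layer can react in concert.
        for handler in self._subscribers:
            handler(event, app)


authority = ProvisioningAuthority()
authority.subscribe(lambda ev, app: print(f"network: reacting to {ev} for {app}"))
authority.subscribe(lambda ev, app: print(f"security: reacting to {ev} for {app}"))
authority.emit("provision", "billing")
```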
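
The chain of events in step 3 is sequential by nature: an instance must be addressable and secured before the application delivery network starts routing live traffic to it. Again, every function here is a placeholder for a real integration point (IPAM, firewall, ADC), not an actual API:

```python
# Hypothetical sketch of step 3: the chain of events kicked off when
# the authoritative source decides to spin up a new instance.

def assign_ip(instance_id: str) -> str:
    """Ask IP address management for an address (placeholder)."""
    return "10.0.0.42"  # illustrative only


def apply_security_policies(instance_id: str, ip: str) -> None:
    """Push firewall/ACL rules so the new instance is reachable, safely."""
    print(f"security policies applied to {instance_id} at {ip}")


def register_with_adn(app: str, ip: str) -> None:
    """Tell the application delivery network a new instance exists so it
    is included in future application routing decisions."""
    print(f"{app}: pool member {ip} added")


def provision_instance(app: str, instance_id: str) -> None:
    # Order matters: the instance must be addressable and secured before
    # the application delivery network sends it live traffic.
    ip = assign_ip(instance_id)
    apply_security_policies(instance_id, ip)
    register_with_adn(app, ip)


provision_instance("billing", "billing-07")
```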
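
Finally, step 4’s lead-time arithmetic: if an instance takes five minutes to come online, the provisioning threshold has to fire at least five minutes of load growth before capacity runs out. A sketch with illustrative numbers:

```python
# Sketch of step 4 (define policies), assuming a ~5 minute spin-up time.
# The capacity figures and growth estimate are illustrative only.

SPIN_UP_SECONDS = 5 * 60        # how long a new instance takes to come online
CAPACITY_PER_INSTANCE = 100.0   # requests/sec one instance can absorb


def should_provision(current_load: float,
                     load_growth_per_sec: float,
                     active_instances: int) -> bool:
    """Start provisioning *before* capacity runs out.

    Projects load forward by the spin-up time: if the projected load
    exceeds what the currently allocated instances can handle, a new
    instance must start booting now to be online in time."""
    projected_load = current_load + load_growth_per_sec * SPIN_UP_SECONDS
    current_capacity = active_instances * CAPACITY_PER_INSTANCE
    return projected_load > current_capacity


# Example: 2 instances (200 req/s capacity), load at 170 req/s and climbing
# 0.2 req/s each second -> projected 230 req/s in five minutes, so
# provisioning should begin now even though capacity isn't exhausted yet.
print(should_provision(170.0, 0.2, 2))  # True
```

The growth estimate here is naively linear; in practice you’d derive it from monitoring data, which is exactly the ongoing tweaking and evaluation described above.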

THE BEST OF BOTH WORLDS


What we are likely to see in the future is a hybrid model of computing: one in which organizations take advantage of cloud computing models both internally and externally as befits the needs of their organization and applications. The dynamic infrastructure revolution is about ensuring you have the means to support a cloud model internally, so that you can decide whether any given application should reside “out there” or “in here”. The dynamic infrastructure revolution is about realizing the benefits of a cloud computing model internally as well as externally, so you don’t have to sacrifice performance, or reliability, or security just to reduce costs. The dynamic infrastructure revolution is about a change in the way we view network and application network infrastructure: elevating infrastructure to a first-class citizen in the overall architecture, one that actively participates in the process of delivering applications and provides value through that collaboration.

The dynamic infrastructure revolution is about a change in the way we think about networks. Are they dumb pipes that make routing and security and application delivery decisions in a vacuum or are they intelligent, dynamic partners that provide real value to both applications and the people managing them?

The dynamic infrastructure revolution is not about removing the power of the cloud, it’s about giving that power to the people, too, so both can be leveraged in a way that maximizes the efficiency of all applications, modern and legacy, web and client-server, virtualized and physical.

 


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.

IoT & Smart Cities Stories
At CloudEXPO Silicon Valley, June 24-26, 2019, Digital Transformation (DX) is a major focus with expanded DevOpsSUMMIT and FinTechEXPO programs within the DXWorldEXPO agenda. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation if they plan to survive over the long term. A total of 88% of Fortune 500 companies from a generation ago are now out of business. Only 12% still survive. Similar percentages are found throug...
Blockchain is a new buzzword that promises to revolutionize the way we manage data. If the data is stored in a blockchain there is no need for a middleman - the distributed database is stored on multiple and there is no need to have a centralized server that will ensure that the transactions can be trusted. The best way to understand how a blockchain works is to build one. During this presentation, we'll start with covering the basics (hash, nounce, block, smart contracts) and then we'll crea...
History of how we got here. What IoT devices are most vulnerable? This presentation will demonstrate where hacks are most successful, through hardware, software, firmware or the radio connected to the network. The hacking of IoT devices and systems explained in 6 basic steps. On the other side, protecting devices continue to be a challenging effort. Product vendors/developers and customers are all responsible for improving IoT device security. The top 10 vulnerabilities will be presented a...
As the fourth industrial revolution continues to march forward, key questions remain related to the protection of software, cloud, AI, and automation intellectual property. Recent developments in Supreme Court and lower court case law will be reviewed to explain the intricacies of what inventions are eligible for patent protection, how copyright law may be used to protect application programming interfaces (APIs), and the extent to which trademark and trade secret law may have expanded relev...
Never mind that we might not know what the future holds for cryptocurrencies and how much values will fluctuate or even how the process of mining a coin could cost as much as the value of the coin itself - cryptocurrency mining is a hot industry and shows no signs of slowing down. However, energy consumption to mine cryptocurrency is one of the biggest issues facing this industry. Burning huge amounts of electricity isn't incidental to cryptocurrency, it's basically embedded in the core of "mini...
At CloudEXPO Silicon Valley, June 24-26, 2019, Digital Transformation (DX) is a major focus with expanded DevOpsSUMMIT and FinTechEXPO programs within the DXWorldEXPO agenda. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation if they plan to survive over the long term. A total of 88% of Fortune 500 companies from a generation ago are now out of business. Only 12% still survive. Similar percentages are found throug...
Every organization is facing their own Digital Transformation as they attempt to stay ahead of the competition, or worse, just keep up. Each new opportunity, whether embracing machine learning, IoT, or a cloud migration, seems to bring new development, deployment, and management models. The results are more diverse and federated computing models than any time in our history.
At CloudEXPO Silicon Valley, June 24-26, 2019, Digital Transformation (DX) is a major focus with expanded DevOpsSUMMIT and FinTechEXPO programs within the DXWorldEXPO agenda. Successful transformation requires a laser focus on being data-driven and on using all the tools available that enable transformation if they plan to survive over the long term. A total of 88% of Fortune 500 companies from a generation ago are now out of business. Only 12% still survive. Similar percentages are found throug...
Atmosera delivers modern cloud services that maximize the advantages of cloud-based infrastructures. Offering private, hybrid, and public cloud solutions, Atmosera works closely with customers to engineer, deploy, and operate cloud architectures with advanced services that deliver strategic business outcomes. Atmosera's expertise simplifies the process of cloud transformation and our 20+ years of experience managing complex IT environments provides our customers with the confidence and trust tha...
Where many organizations get into trouble, however, is that they try to have a broad and deep knowledge in each of these areas. This is a huge blow to an organization's productivity. By automating or outsourcing some of these pieces, such as databases, infrastructure, and networks, your team can instead focus on development, testing, and deployment. Further, organizations that focus their attention on these areas can eventually move to a test-driven development structure that condenses several l...