By Mark Little
March 1, 2002
Recently, the question was asked whether the models on which current transaction systems are based (e.g., JTA and JTS) are powerful enough to support end-to-end transactional guarantees for applications. It was further suggested that traditional online transaction processing (OLTP) systems don't suffer from such limitations, rendering them more suitable for emerging e-commerce applications that may require such guarantees.
This article discusses this question and shows that there's nothing inherently wrong with these new models that prevents applications from using them to obtain end-to-end transactionality. Before addressing whether any specific transaction system can provide end-to-end transactional guarantees, however, it's important to realize the following: end-to-end transactionality is not some holy grail that people have been searching for in myths and legends; it's a solution to one specific problem area, not a global panacea for all transaction issues. The ability, or lack thereof, to guarantee end-to-end transaction integrity does not in and of itself prevent a transaction system provider from tackling the many other equally important issues in today's evolving world of e-commerce and mobile applications.
What Is End-to-End Transactionality?
Let’s consider what such end-to-end guarantees are. Atomic transactions (transactions) are used in application programs to control the manipulation of persistent (long-lived) objects. Transactions have the following ACID properties:
• Atomic: If interrupted by failure, all effects are undone (rolled back).
• Consistent: The effects of a transaction preserve invariant properties.
• Isolated: A transaction’s intermediate states are not visible to other transactions. Transactions appear to execute serially, even if they’re performed concurrently.
• Durable: The effects of a completed transaction are persistent; they’re never lost (except in a catastrophic failure).
A transaction can be terminated in two ways: committed or aborted (rolled back). When a transaction is committed, all changes made within it are made durable (forced on to stable storage, e.g., disk). When a transaction is aborted, all the changes are undone. Atomic transactions can also be nested; the effects of a nested transaction are provisional upon the commit/abort of the outermost (top-level) atomic transaction.
A two-phase commit protocol is required to guarantee that all the transaction participants either commit or abort any changes made. Figure 1 illustrates the main aspects of the commit protocol: during phase one, the transaction coordinator, C, attempts to communicate with all the transaction participants, A and B, to determine whether they'll commit or abort. An abort reply from any participant acts as a veto, causing the entire transaction to abort. Based upon these responses (or the lack of them), the coordinator decides whether to commit or abort the transaction. If the decision is to commit, the coordinator records it on stable storage and the protocol enters phase two, in which the coordinator forces the participants to carry out the decision; the coordinator likewise informs the participants if the transaction aborts.
When each participant receives the coordinator's phase-one message, it records sufficient information on stable storage to either commit or abort the changes made during the transaction. A participant that returns a commit response must then remain blocked until it receives the coordinator's phase-two message; until that message arrives, the resources it controls are unavailable to other transactions. If the coordinator fails before the message is delivered, those resources remain blocked. However, if crashed machines eventually recover, crash-recovery mechanisms can be employed to unblock the protocol and terminate the transaction.
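The coordination logic just described can be sketched in plain Java. This is an illustrative toy, not a real transaction API: the Vote, Participant, and Coordinator names are invented for the example, and the forcing of the commit decision to stable storage between the phases is reduced to a comment.

```java
import java.util.List;

// Toy sketch of the two-phase commit decision logic described above.
// All names are invented for illustration; stable-storage logging and
// crash recovery are omitted.
enum Vote { COMMIT, ABORT }

interface Participant {
    Vote prepare();   // phase one: record enough state to go either way, then vote
    void commit();    // phase two: make the changes durable
    void rollback();  // phase two: undo the changes
}

class Coordinator {
    static boolean run(List<Participant> participants) {
        // Phase one: gather votes. A single ABORT vetoes the transaction.
        boolean commit = true;
        for (Participant p : participants) {
            if (p.prepare() == Vote.ABORT) { commit = false; break; }
        }
        // A real coordinator would force its decision to stable storage here,
        // so that recovery can finish the protocol after a crash.
        // Phase two: deliver the decision to every participant.
        for (Participant p : participants) {
            if (commit) p.commit(); else p.rollback();
        }
        return commit;
    }
}
```

Note that nothing in this structure knows where a participant lives: it could wrap a database, a local file, or code running on a client machine.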
A participant’s role depends on the application in which it occurs. For example, a J2EE JTA XAResource is a participant that typically controls the fate of work performed on a database (e.g., Oracle) within the scope of a specific transaction.
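For concreteness, here is a skeletal (deliberately non-functional) sketch of that interface, javax.transaction.xa.XAResource from the JDK, with comments showing how its methods map onto the two phases. The bodies are placeholders; a real implementation, such as a JDBC driver's, would forward each call to the database.

```java
import javax.transaction.xa.XAException;
import javax.transaction.xa.XAResource;
import javax.transaction.xa.Xid;

// Skeletal participant: how the standard XAResource interface maps onto
// the two-phase commit protocol. All method bodies are placeholders.
class SketchXAResource implements XAResource {
    public void start(Xid xid, int flags) throws XAException { /* associate work with xid */ }
    public void end(Xid xid, int flags)   throws XAException { /* dissociate work from xid */ }

    // Phase one: force enough state to stable storage to go either way, then vote.
    // XA_OK is the "commit" vote; throwing XAException vetoes the transaction.
    public int prepare(Xid xid) throws XAException { return XA_OK; }

    // Phase two: the coordinator's decision arrives.
    public void commit(Xid xid, boolean onePhase) throws XAException { /* make durable */ }
    public void rollback(Xid xid)                 throws XAException { /* undo */ }

    // Crash recovery: report branches that are prepared but still undecided.
    public Xid[] recover(int flag) throws XAException { return new Xid[0]; }
    public void forget(Xid xid)    throws XAException { /* discard heuristic outcome */ }

    public boolean isSameRM(XAResource other) throws XAException { return other == this; }
    public int getTransactionTimeout() throws XAException { return 0; }
    public boolean setTransactionTimeout(int seconds) throws XAException { return false; }
}
```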
Note: The two-phase commit protocol that most transaction systems use is not client/server-based. It simply talks about a coordinator and participants, and makes no assumption about where they’re located. Different implementations of the protocol may impose certain restrictions about locality, but these are purely implementation choices.
What Is an End-to-End Transaction?
In this section I use the Web as an example of where end-to-end transactional integrity is required. However, there are other areas (e.g., mobile) that are equally valid; transposing the issues described here to those areas should be relatively straightforward.
Atomic transactions, with their “all-or-nothing” property, are a well-known technique for guaranteeing application consistency in the presence of failures. Although Web applications exist that offer transactional guarantees to users, these guarantees extend only to resources used at or between Web servers; clients (browsers) are not included, even though they play a significant role in certain applications, as mentioned earlier.
Therefore, providing end-to-end transactional integrity between the browser and the application (server) is important, as it allows work that involves both the browser and the server to be atomic. However, current techniques based on CGI scripts can't provide such guarantees. As shown in Figure 2, the user selects a URL that references a CGI script on a Web server (message one); the script performs the transaction and returns a response to the browser (message two) only after the transaction is complete. Returning the response while the transaction is still running would be incorrect, since the transaction may yet fail to commit.
In a failure-free environment, this mechanism works well. However, in the presence of failure it’s possible for message two to be lost between the server and the browser, resulting in work at the server not being atomic with respect to any browser-related work. Thus, there’s no end-to-end transactional guarantee.
Is End-to-End Transactionality Possible?
Yes. There's nothing inherent in any transaction protocol (two-phase, three-phase, presumed abort, presumed nothing, etc.) that prevents end-to-end transactionality. The transaction engine (essentially the coordinator) has very little effect on this, whether it's embedded in a proprietary service or within an industry-standard application server. What end-to-end transactionality does require is a transactional participant residing at each "end"; it does not require a transaction coordinator, with all its associated baggage, at each end.
A contract exists between the transaction coordinator and its participants, which states (briefly, and with many simplifications):
• Once the transaction coordinator has decided to commit or roll back the transaction, it guarantees delivery of this information to every participant, regardless of failures. Note: There are various optimizations to this, such as a presumed abort protocol, but in essence the contract remains the same.
• Once told to prepare, a participant must record sufficient information durably for it to either commit or cancel the work it controls. Until it learns the final outcome, it should neither commit nor cancel that work itself. If a failure occurs, or the final outcome of the transaction is slow in arriving, the participant can typically communicate with the coordinator to determine the current progress of the transaction.
In most transaction systems the majority of the effort goes into designing and developing the transaction-coordinator engine, making it as performant and reliable as possible. However, this by itself is insufficient to provide a usable system: participants are obviously required. Although any contract-conformant participant implementation can be plugged into the two-phase protocol, typically the only ones most people use are those that control work performed on (distributed) databases, e.g., the aforementioned XAResource. As a result, many people equate transactions with databases only, and assume that participant implementations necessarily require significant resources. However, this is not the case: a participant need only be as resource hungry as is necessary to fulfill the contract. A participant could use a local file system to make state information durable, for example, or it could use nonvolatile RAM. It depends on what the programmer deems necessary.
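To make the last point concrete, the following sketch (all names invented for the example) is a participant that satisfies the prepare contract using nothing more than a local file: it forces a small record to disk before voting commit, so that after a crash it can still honor whichever decision the coordinator reached.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// A participant need not be a database: this one makes its state durable
// in a local file. Names are invented for illustration.
class FileDurableParticipant {
    private final Path log;            // where the prepare record is forced
    private final String pendingState; // the provisional work this participant controls

    FileDurableParticipant(Path log, String pendingState) {
        this.log = log;
        this.pendingState = pendingState;
    }

    // Phase one: durably record enough information to commit or cancel later.
    boolean prepare() {
        try {
            Files.write(log, pendingState.getBytes(StandardCharsets.UTF_8),
                        StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                        StandardOpenOption.TRUNCATE_EXISTING, StandardOpenOption.SYNC);
            return true;   // vote commit
        } catch (IOException e) {
            return false;  // durability can't be guaranteed: veto the transaction
        }
    }

    // Phase two (commit): apply the logged state, then discard the record.
    void commit() { /* apply pendingState, then remove the log entry */ }

    // Phase two (abort): the transaction rolled back; discard the record.
    void rollback() {
        try { Files.deleteIfExists(log); } catch (IOException ignored) { }
    }
}
```

Whether a participant spends its durability budget on a database, a file, or nonvolatile RAM is invisible to the coordinator; only the contract matters.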
With the advent of Java it's possible to empower thin (resource-scarce) clients (e.g., browsers) so they can fully participate within transactional applications. Transaction participants tailored to the application and environment where they'll work can be downloaded (on demand) to the client to ensure that they can fully participate within the two-phase commit protocol. There's nothing special about specific transaction system implementations that makes them more easily adapted to this kind of environment. The specialization comes from the end-point resource – the client-side participant.
OLTP vs OO-TP
Are "traditional" online transaction processing (OLTP) engines more suited to end-to-end transactionality guarantees than "newer" object-oriented transaction systems? The quick answer is no. Why should they be? It doesn't matter whether a transaction system is supported by proprietary remote procedure call (RPC) and stub-generation techniques or by an open-standard remote object invocation mechanism such as a CORBA ORB; once the distributed layers are removed, they all share the same core: a two-phase commit protocol engine that supports durable storage and failure recovery. How that engine is invoked, and how it invokes its participants, is immaterial to its overall workings.
The real benefit of OO-TP over OLTP is openness. Over the past eight years there's been a significant move away from proprietary transaction processing systems and their support infrastructure, toward open standards. This move has been driven by users who traditionally found it extremely difficult to move applications from one vendor's product to another, or even between different versions of a product from the same vendor. The OMG pioneered this approach with the Object Transaction Service (OTS) in 1995, when IBM, HP, Digital, and others got together to provide a means whereby their existing products could essentially be wrapped in a standard veneer. This allowed applications developed against the veneer to be ported from one implementation to another, and allowed different implementations to interact (something else that had been extremely difficult to do reliably).
Therefore, it’s inaccurate to conclude that OLTP systems are superior in any way to OO-TP equivalents because of their architecture, support environment, or distribution paradigm. OLTP systems are typically monolithic closed systems, tying users to vendor-specific choices for implementations, languages, etc. If the experiences gained by the developers of efficient and reliable implementations of OLTP are transposed to OO-TP, then there’s nothing to prevent such an OO-TP system from competing well. The advantages should be obvious: open systems allow customers to pick and choose the components they require to develop their applications without worrying about vendor lock-in. Such systems are also more readily ported to new hardware and operating systems, allowing customers even more choice for deployment.
The OTS architecture provides standard interfaces to components that are possessed by all transaction engines. It doesn't modify the model underlying the existing transaction monitor implementations; it mandates a two-phase commit protocol with presumed abort, with which all OTS implementations must comply. It was intended as an adjunct to these systems, not as a replacement.
No company that has spent many years building up reliability in such a critical piece of software as transactions would be prepared to start from scratch and implement again. In addition, no user of such reliable transaction software would be prepared to take the risk of transitioning to this new software, even if it were “open.” In the area of transactions, which are critical fault-tolerance components, it takes time to convince customers that new technology is stable and performant enough to replace what they’ve been using for many years.
Although the CORBA model is typically discussed in terms of client/server architecture, from the outset its designers did not want to impose any restrictions on the type of environment in which it could run. There's no assumption about how "thin" a client is, or how "fat" a server must be, in order to execute a CORBA application. Many programmers these days simply use the client/server model as a convenient way in which to reason about distributed applications. But such applications need not have what would traditionally be considered a thin client: services that a user requires may well be colocated with that user within the same process. CORBA was the first open architecture to support the configurable deployment of services in this way, correctly seeing the separation of client and service as just that: a deployment issue. Nothing in the CORBA model requires a client to be thin and functionally deficient.
The OTS is comparable to CICS, Tuxedo, and DEC ACMS. It differs only in that it’s a standard and allows interoperation. There’s nothing fundamentally wrong with the OTS architecture that prevents it from being used in an end-to-end manner. The OTS supports end-to-end transactionality in exactly the same way CICS or any other “traditional” OLTP would – through its resources.
To ORB or Not to ORB?
There also appears to be some confusion as to whether CORBA implementations (ORBs) are sufficient for mission-critical applications. In the mid-'90s, when implementations first appeared on the market, their performance was not as good as that of handcrafted solutions. However, that has certainly changed over the past few years. The footprint of some ORBs is certainly large, but other ORBs have been tailored specifically for real-time or embedded use. Companies such as IBM, IONA, and BEA have watched ORBs develop over the years into a critical part of the infrastructure, one they need to control, and therefore maintain their own implementations. Other companies have licensed ORB implementations from elsewhere.
When an ORB crashes, it typically cannot recover automatically and continue applications from where they left off. This is because the ORB doesn't have the necessary semantic and syntactic information to automatically checkpoint state for recovery purposes. However, by using suitably defined services such as the OTS and the persistence service, it's possible for applications to do this themselves, or for vendors to use these services to do it on applications' behalf.
It’s been said that OLTP systems provide this kind of feature out-of-the-box, and it may be true. However, it’s an unfair comparison: an OLTP system does just one thing and does it well – it manages transactions. A CORBA ORB is meant to provide support for arbitrary distributed applications, the majority of which probably won’t even need fault tolerance, let alone transactions. However, for those applications that do need these capabilities, it’s entirely possible to provide exactly the same recovery functionality using OMG open standards.
Although the J2EE model is client/server based, as with CORBA there's nothing to prevent a client from being rich in functionality. It's a deployment choice made at build time and runtime (obviously, the capability for such richness has to be built into the client, and even then it would be up to the user to determine whether such functionality was required or possible). Note: Although J2EE didn't start out as an infrastructure that used CORBA, it quickly became evident that the OMG companies' experiences in developing CORBA were extremely important to any distributed system. As a result, over the past few years J2EE has moved closer and closer to the CORBA world, and now requires some critical ORB components in order to run.
The typical way in which J2EE programmers use transactions is through the Java Transaction API (JTA), a mandated part of the specification. The JTA is intended as a higher-level API that isolates programmers from some of the more complex (and sometimes esoteric) aspects of constructing transactional applications. The JTA does not imply a specific underlying transaction system implementation, so it could be layered on CICS, Tuxedo, etc. However, because the OTS is now the transaction standard for most companies and allows interoperation between different implementations, it was decided that the preferred implementation would be based on it (called the JTS, just to place it firmly in the Java domain). The JTS is currently optional, but it may eventually become a mandated part of the J2EE specification.
Application Server Means Thin Client?
No. As shown above, this is essentially a deployment issue. It’s certainly correct to say that most J2EE programmers currently use a thin(-ish) client, with most of the business logic residing within the server; however, this is simply because this solution matches 90% of the problems. Closer examination of all application server applications would certainly reveal that although thin clients are the norm, they are by no means the complete picture.
Nor is there anything fundamentally wrong with the application server model: it does not require the client side of an application to be wafer-thin. If the client wants to embed functionality such as a two-phase-aware transactional resource within itself, that's entirely possible. In fact, an application server could just as easily be embedded within a client, if the footprint allowed. The reasons for not doing this have more to do with footprint size than with any architectural issue.
Can end-to-end transactional guarantees be provided by modern transaction systems such as JTA? Yes, as we’ve shown there’s nothing inherent in these models that prevents them from providing such guarantees. The two-phase commit protocol doesn’t know anything about clients or servers, doesn’t make assumptions about the locality of the coordinator or participants, and doesn’t require any semantic knowledge of the applications. End-to-end transactional guarantees are simply a deployment view on the relative locality of different participants.
Are OLTP systems more suited to end-to-end guarantees than their modern OO-TP cousins? As we’ve shown, since they’re both based on two-phase commit protocols, there’s nothing in either model that would mean they are any more or less ideal for any specific problem domain. However, there are obvious design and implementation decisions that can be made when building a transaction system using either model, which may mean that specific instances are not best suited for end-to-end transactional solutions. It’s important to realize that this is an implementation choice only.
References
• Little, M.C., Shrivastava, S.K., Caughey, S.J., and Ingham, D.B. (1997). "Constructing Reliable Web Applications Using Atomic Actions." Proceedings of the Sixth Web Conference, April 1997.
• Little, M.C. (1997). "Providing End-to-End Transactional Web Applications Using the Object Transaction Service." OMG Success Story.
• "CORBAservices: Common Object Services Specification." OMG Document Number 95-3-31, March 1995.
• "Java Transaction API (JTA) 1.0.1." Sun Microsystems, April 1999.
• "Java Transaction Service (JTS) 1.0." Sun Microsystems, December 1999.