OODBMS & CORBA

Object-oriented database platforms offer several benefits. The first one I think of is that I don't have to write code to transform an object into a row in a table: the object model is the data model. Navigation from reference to reference is efficient because object access happens in the OO language itself. How complicated that gets depends on your intent, your design and the OODB platform you're using. Most Java APIs for OODBMS platforms are maturing quickly, and some interesting variations and parallels are forming.

ObjectStore is a physical, page-based implementation. This means that when an object is referenced, a physical image of the disk area the object is stored on is brought into memory; the physical disk address becomes the object's identifier. Other OODBMS platforms, like Versant, use logical models: instead of bringing a page of disk into memory, they use tree traversal to find the object and bring it into memory. Each paradigm has its advantages. ObjectStore allows (at the time of this writing) one transaction per session and one session per process.

Versant's product allows multiple sessions per process. Both allow multiple threads in a session. Yet no matter what sort of session model the platform uses, whether it is a physical or logical implementation, or how implicit the transaction boundaries become, we, the programmers, must still be involved in transaction management. For instance, you may want to cause a rollback based on a programmatic exception and return an informative exception to the user/client.

Adding CORBA to the fray makes things more complicated because there needs to be some concept of coordination between CORBA transactions and database transactions. The patterns used to implement the service can make it or break it. Three things to pay close attention to going into the problem are interface granularity, scaling issues and use cases.

The granularity of the CORBA interface can make all the difference in the complexity of transaction coordination. Some interface implementations are very fine-grained, using accessor and mutator methods defined in the IDL for each data member. Others are more coarsely grained and deliver structures of data to a client. The client manipulates the data in the structure and returns it to the interface.
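
A quick sketch of the two styles in plain Java (the names here are illustrative, not taken from any real IDL; a toy coarse-grained implementation is included just to show the round trip):

```java
// Fine-grained: one remote call per data member -- chatty across CORBA.
interface FlightFine {
    String getCarrier();
    void setCarrier(String c);
    int getSeatCount();
    void setSeatCount(int n);
}

// Coarse-grained: the interface moves a whole structure in one call.
// FlightStruct stands in for an IDL-defined, behavior-less structure.
final class FlightStruct {
    String carrier;
    int seatCount;
}

interface FlightCoarse {
    FlightStruct fetch();        // one round trip returns all the data
    void store(FlightStruct s);  // client edits locally, sends it back
}

// Minimal in-memory implementation, only to demonstrate the round trip.
final class FlightCoarseImpl implements FlightCoarse {
    private final FlightStruct data = new FlightStruct();

    public FlightStruct fetch() {
        FlightStruct copy = new FlightStruct();
        copy.carrier = data.carrier;
        copy.seatCount = data.seatCount;
        return copy;  // a copy, so the client never touches server state
    }

    public void store(FlightStruct s) {
        data.carrier = s.carrier;
        data.seatCount = s.seatCount;
    }
}
```

With the coarse interface, a client makes one call where the fine-grained version would make four; that difference is why granularity drives both network traffic and the complexity of transaction coordination.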

Scaling issues are a beast in their own right. Some applications achieve scaling by horizontal partitioning using multiple clones of the same CORBA service to distribute load. This is a means to push some of the concurrency issues back into the database engine because it implies having several processes hitting the same database. Others achieve scaling by vertical partitioning in which services are built using a per-method or per-client service. This also implies multiple processes hitting the same database.

It pays to understand your use cases in depth. Some services manage read operations while others are built to manage write operations. Splitting services this way addresses scaling and concurrency directly, rather than alleviating concurrency issues only as a side effect of addressing scaling.

It is certain that if you ever implement an OODB with CORBA, you will use some combination of the above paradigms. For instance, you could implement a read-only service that is horizontally partitioned and coarse-grained. But as you plan your implementation, remember that all of these choices affect concurrency. Study your use cases carefully, and let granularity and scaling decisions fall into place once the use cases are well understood.

If you have a good conceptualization of the previous issues, then the conceptualization of transaction coordination should follow. A CORBA transaction starts when the client request enters the server's domain, and ends when the reply leaves the server. Should the CORBA transaction and the database transaction be parallel? If your data structure is shallow and your interface is fine-grained, the answer could be yes. This would mean the CORBA implementation is a persistent object as well. But if your data structure has associative depth, then you probably shouldn't make the implementation objects persistent objects, and your interface should be coarse-grained. The benefits start at performance and continue into maintenance. Consider the implications if the acting CORBA interface is associated with another CORBA interface. If the CORBA transactions and database transactions are parallel, you are potentially stuck with a two-phase commit. Thanks for playing.

There is no necessary relationship between a CORBA transaction and a database transaction. A CORBA transaction could cause no database transaction or it could be the precursor of many. But to relax the relationship is to bring into question how, in pattern, the CORBA object can be decoupled from the persistent object. The answer begins with a concept called "Persistence Aware." ObjectStore implements the idea directly. But conceptually an object that is persistence-aware is an aggregate of persistence-capable objects. It is aware of which objects in its immediate membership are persistent. Control of database transactions begins at this point, one layer behind the CORBA interface.

If you intend to pass persistent data to your clients via a CORBA interface, then CORBA structures are necessary. In general, it is not a good idea to make your CORBA structures persistent. Each persistent object contains one transient CORBA structure. The structure is declared in the persistent object as transient and the persistent object is responsible for moving data elements back and forth before and after a database event. Most vendors provide hooks (Versant will by the end of the year) in persistence-capable objects that the database engine can call when events, like a fetch or a flush, occur. ObjectStore defines postInitializeContents and preFlushContents methods for every persistence-capable class. Both are there to be overridden. The pattern of containment and the hooks allow easy mapping to occur automatically between CORBA structure and persistent object.
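
The containment pattern can be sketched in plain Java. Only the hook names (postInitializeContents, preFlushContents) come from the vendor APIs discussed above; the class, its fields, and the hand-simulated engine calls are invented here for illustration:

```java
// Transient, IDL-style structure -- the cache handed to CORBA clients.
final class PassengerStruct {
    String name;
    int frequentFlyerMiles;
}

// Sketch of a persistence-capable object. In a real schema this class
// would extend/be post-processed by the vendor's persistence machinery;
// here the hooks are plain methods we call by hand.
class Passenger {
    // Persistent state -- what the OODB engine actually stores.
    private String name;
    private int frequentFlyerMiles;

    // Exactly one transient CORBA structure per persistent object.
    private transient PassengerStruct cache = new PassengerStruct();

    // Engine calls this after a fetch: move data into the transient cache.
    void postInitializeContents() {
        cache = new PassengerStruct();
        cache.name = name;
        cache.frequentFlyerMiles = frequentFlyerMiles;
    }

    // Engine calls this before a flush: drain the cache into persistent state.
    void preFlushContents() {
        name = cache.name;
        frequentFlyerMiles = cache.frequentFlyerMiles;
    }

    PassengerStruct struct() { return cache; }
}
```

In a real ObjectStore schema the engine, not the application, would invoke the hooks around fetch and flush; the point of the sketch is only the direction of data movement on each side of a database event.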

When a client calls into the object via the CORBA interface, the CORBA structure is returned. If the data is fresh, there is potentially no database transaction necessary. The structure is a cache that is refreshed within the boundaries of a database transaction. The database engine will call the preFlush method when the data is being saved, and it will call the postInitialize method when the data is fetched. The data will, in a sense, be transferred back and forth automatically between its transient representation and its persistent representation. The provision of methods guaranteed to be called before and after database events provides a hands-off environment with respect to each transient structure. Responsibility for starting and committing the transaction should be based on the data structure being returned. It can lie in the persistence-aware object or it can be pushed back into the persistence-capable objects. The base heuristic is that the transaction boundary is defined as late as practical. Postponing the transaction could allow you to avoid the transaction altogether. Avoidance could turn throughput into something nearly nonmeasurable.

The issue of transaction control and management becomes more complicated when the service you're building is multithreaded. When threads are implemented well, the increase in performance makes the cost of writing the code diminish quickly, especially when using CORBA. With the CORBA layer decoupled from the database layer, the CORBA transactions become relatively easy to implement, synchronization being key. However, the database transactions become a bit more complicated because you've got n threads running through your objects' accessors and mutators. It will always be economical to allow multiple threads into the same transaction. For instance, you could allow read requests into a write transaction. Consequently, you need to know when a transaction is in progress and what sort (read or write) of transaction it is. The thing to do is to encapsulate transaction management in a subsystem. Within it there needs to be:

  • An object that keeps track of the current transaction state
  • An object that queues and brokers the transaction requests
  • An object that can open, load and close the database
  • Pool management for the semaphores
  • A background thread that pulls requests from the broker and processes them. When the thread asks the broker for a transaction, what gets returned is a sort of heavy semaphore. It contains everything that the thread will need, from a database perspective, to do its job.
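
The "heavy semaphore" handed back by the broker might look like this minimal sketch; the field and method names are assumptions, since the article fixes only the object's role:

```java
// Sketch of the heavy semaphore a thread receives from the broker.
final class ODBSemaphore {
    enum TxType { READ, WRITE }

    private final TxType type;        // what kind of transaction was asked for
    private final Thread requester;   // who is waiting on this request
    private boolean granted = false;

    ODBSemaphore(TxType type, Thread requester) {
        this.type = type;
        this.requester = requester;
    }

    TxType type() { return type; }
    Thread requester() { return requester; }

    // The requesting thread parks here until the state machine admits it.
    synchronized void await() {
        boolean interrupted = false;
        while (!granted) {
            try { wait(); } catch (InterruptedException e) { interrupted = true; }
        }
        if (interrupted) Thread.currentThread().interrupt();
    }

    // Called when the request joins or starts a transaction -- this is the
    // notify() the requesting thread is waiting for.
    synchronized void grant() {
        granted = true;
        notifyAll();
    }
}
```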

    Object pool management consists of a stack or queue and a strategy for allocating new semaphores when the pool is exhausted. Semaphores are not created and destroyed per demand; they are pooled for reuse. The ImplementationStrategy is the manager of the pool.
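
A minimal sketch of that pool, assuming a stack (LIFO) discipline; the nested PooledSemaphore stub stands in for the real semaphore object:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the pool manager the article calls ImplementationStrategy.
final class ImplementationStrategy {
    // Stub standing in for the heavy semaphore being pooled.
    static final class PooledSemaphore { boolean inUse; }

    private final Deque<PooledSemaphore> free = new ArrayDeque<>();

    // Hand out a pooled semaphore, allocating only when the pool runs dry.
    synchronized PooledSemaphore acquire() {
        PooledSemaphore s = free.isEmpty() ? new PooledSemaphore() : free.pop();
        s.inUse = true;
        return s;
    }

    // Return a semaphore to the stack so the next acquire() reuses it.
    synchronized void release(PooledSemaphore s) {
        s.inUse = false;
        free.push(s);
    }

    synchronized int idleCount() { return free.size(); }
}
```

release() pushes the semaphore back onto the stack, so the next acquire() reuses it instead of allocating a fresh one.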

    The DBLoader is responsible for starting things up. It has a static instance() method so it is easily available in the process. It creates the TransactionBroker. All requests for a transaction and subsequent transaction calls come through the DBLoader.

    The TransactionBroker is the middle manager. It creates two vectors to keep read and write requests separate. Because there is a break in the operation sequence, there is a hashtable to store exceptions in. If there is an exception while the request is being processed, it is placed in the hashtable. The requesting thread will check the hashtable after its notify() is called from TransactionState. TransactionBroker creates the TransactionState and subsequently the TransDaemon. When a request is received, it is manifested in the form of the ODBSemaphore.

    The TransactionBroker requests a semaphore from the ImplementationStrategy and initializes it with a reference to the requesting thread and the type of transaction being requested. The TransactionBroker adds the semaphore to the appropriate vector and calls wait() on the semaphore. TransactionBroker also has the job of notifying TransDaemon when a transaction ends.

    The TransDaemon is derived from java.lang.Thread. TransDaemon is given references to the request vectors and works both vectors from the front. The current heuristic gives write requests priority over read requests. If it finds nothing in either vector, it yields. If a request is found, a check is made with the TransactionState to find out whether a transaction is already started. If the system is in a transaction and the new request is a writer, the daemon calls wait(); here it waits to be notified by the TransactionState that the transaction has ended. In all other conditions the logic proceeds by three rules:
    1. If the system is not in a transaction state and there is a writer, pass the request to TransactionState to start the transaction. This rule starts a write transaction.
    2. If the system is in a transaction state and there is no writer, pass the request to TransactionState to join the transaction.
    3. If the system is not in a transaction state and there is no writer, pass the request to TransactionState to start the transaction. This rule starts a read transaction.
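
The three rules, plus the writer-waits case described above, reduce to a small pure decision function. The enum and method names are illustrative; only the rule logic comes from the text:

```java
// The TransDaemon's admission logic as a pure function.
final class TransRules {
    enum Decision { START_WRITE, START_READ, JOIN, WAIT }

    static Decision decide(boolean inTransaction, boolean requestIsWrite) {
        if (!inTransaction && requestIsWrite)  return Decision.START_WRITE; // rule 1
        if (inTransaction  && !requestIsWrite) return Decision.JOIN;        // rule 2
        if (!inTransaction && !requestIsWrite) return Decision.START_READ;  // rule 3
        // In a transaction with a new writer: the daemon waits for notify.
        return Decision.WAIT;
    }
}
```

Keeping the rules in one side-effect-free method makes them easy to test and easy to swap out when the heuristic changes.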

    The TransactionState is, as one would suspect, a state machine: it approves a request. When the TransDaemon pulls a request from one of the request vectors, eventually the request will go to TransactionState to either join the current transaction or to start a new one. When that happens, the TransactionState notifies the waiting semaphore/request and it completes its trip back through the TransactionBroker.

    For general implementation, there are holes in the rules. In a mixed-transaction-type situation there is a real risk of starving readers out. One extension I've been considering is how to always allow readers into a write transaction. That way, if there is a steady stream of write requests, the readers can still join in the game. Readers are allowed in until the current writer intends to commit. At that point, transaction management calls for a checkpoint. All waiting readers are queued until the next transaction starts up. It is important that the clients are completely uncoupled from the database and see only CORBA. The database is loosely coupled to CORBA and sees only IDL-defined, behavior-less structures. CORBA interfaces are uncoupled from the database and see only in-process calls that return IDL-defined structures. Another extension entails the invention of a heuristic object that would act as a transaction legislator. Other objects involved in transaction processing would ask the heuristic object what is allowed; answers come back in the form of true or false.
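
A legislator along those lines could be as small as the following sketch. Everything here is hypothetical, since the extension is only under consideration; the point is that policy questions get one home and yes/no answers:

```java
// Hypothetical transaction legislator: other transaction-processing
// objects ask it what is allowed and act on the boolean answer.
final class TransactionLegislator {
    private boolean readersMayJoinWrite = true;  // the extension discussed above
    private boolean writerCommitPending = false;

    // The writer signals its intent to commit; readers are then queued.
    void setCommitPending(boolean pending) { writerCommitPending = pending; }

    // May a read request join the current transaction right now?
    boolean mayJoinRead() {
        return readersMayJoinWrite && !writerCommitPending;
    }
}
```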

    I've described a pattern that is extensible and flexible. It provides a means to uncouple the dependencies between an OODB platform and CORBA, and places the database functionality in the back of the process. It will be interesting to see how it evolves, both in my domain and from the perspective of other developers. Some things, though, are not as flexible. No matter what OODB platform you choose, the first thing to do is fall into good design habits.

    Analyze and understand your use cases. If you can separate reads and writes into different services, do so; it relieves one concurrency factor. Analyze and understand your scaling issues. Derive a sense of how many clients there could be, and factor in how interactive your interface needs to be. If the service you are building has a high degree of interaction, the number of clients becomes less meaningful: a few clients can cause a great deal of traffic. When you can, design your interface to function in terms of structures rather than fine-grained accessors and mutators. It lessens the degree of interaction.

    David Knox has a BS in Mathematics from Metropolitan State College of Denver. He works for Galileo International, Inc., developers of one of the largest computerized airline reservation systems in the world. David works in the Infrastructure and Middleware organization. His responsibilities include research and development and the first deployment of CORBA technology.

    ObjectStore is a registered trademark of Object Design Inc. OrbixWeb is a registered trademark of IONA Corporation. Versant is a registered trademark of Versant Object Technology.
