OODBMS & CORBA

Object-oriented database platforms offer several benefits. The first one I think of is that I don't have to write code to transform an object into a row in a table: the object model is the data model. Navigation from reference to reference is efficient because object access happens in the OO language itself. How complicated that gets depends on your intent, your design and the OODB platform you're using. Most Java APIs for OODBMS platforms are maturing quickly, and some interesting variations and parallels are forming.

ObjectStore is a physical, page-based implementation. When an object is referenced, a physical image of the disk page the object is stored on is brought into memory, and the physical disk address becomes the object's identifier. Other OODBMS platforms, like Versant, use logical models: instead of bringing a page of disk into memory, they traverse a tree to find the object and bring it into memory. Each paradigm has its strengths. ObjectStore allows (at the time of this writing) one transaction per session and one session per process.

Versant's product allows multiple sessions per process. Both allow multiple threads in a session. Yet no matter what sort of session model the platform uses, whether it is a physical or logical implementation, or how implicit the transaction boundaries become, we, the programmers, must still be involved in transaction management. For instance, you may want to cause a rollback based on a programmatic exception and return an informative exception to the user/client.

Adding CORBA to the fray makes things more complicated because there needs to be some concept of coordination between CORBA transactions and database transactions. The patterns used to implement the service can make or break it. Three things to pay close attention to going into the problem are interface granularity, scaling issues and use cases.

The granularity of the CORBA interface can make all the difference in the complexity of transaction coordination. Some interface implementations are very fine-grained, using accessor and mutator methods defined in the IDL for each data member. Others are more coarsely grained and deliver structures of data to a client. The client manipulates the data in the structure and returns it to the interface.
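To make the contrast concrete, here is a hypothetical Java rendering of the two styles. The names are illustrative only; real interfaces would be generated from the IDL by the ORB's compiler.

// Fine-grained style: one remote call per data member.
interface AccountFineGrained {
    String getOwner();
    void setOwner(String owner);
    long getBalance();
    void setBalance(long balance);
}

// Coarse-grained style: a whole structure travels per call.
final class AccountData {           // stands in for an IDL-defined struct
    String owner;
    long balance;
}

interface AccountCoarseGrained {
    AccountData fetch();            // client edits the struct locally...
    void store(AccountData data);   // ...and returns it in a single call
}

With the coarse-grained style a round trip moves a whole unit of work; with the fine-grained style every field touch is a network call.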

Scaling issues are a beast in their own right. Some applications achieve scaling by horizontal partitioning, using multiple clones of the same CORBA service to distribute load. This pushes some of the concurrency issues back into the database engine, because it implies several processes hitting the same database. Others achieve scaling by vertical partitioning, in which functionality is split across per-method or per-client services. This, too, implies multiple processes hitting the same database.

It pays to understand your use cases in depth. Some services manage read operations while others are built to manage write operations. Separating the two addresses scaling and concurrency directly, rather than alleviating concurrency only as a side effect of addressing scaling.

It is certain that if you ever put an OODB behind CORBA, you will use some combination of the above paradigms. For instance, you could implement a read-only service that is horizontally partitioned and coarse-grained. As you plan your implementation, remember that all of these choices affect concurrency. Study your use cases carefully, and let granularity and scaling fall into place once the use cases are well understood.

If you have a good conceptualization of the previous issues, then the conceptualization of transaction coordination should follow. A CORBA transaction starts when the client request enters the server's domain, and ends when the reply leaves the server. Should the CORBA transaction and the database transaction be parallel? If your data structure is shallow and your interface is fine-grained, the answer could be yes. This would mean the CORBA implementation is a persistent object as well. But if your data structure has associative depth, then you probably shouldn't make the implementation objects persistent objects and your interface should be coarse grained. The benefits start at performance and continue into maintenance. Consider the implications if the acting CORBA interface is associated with another CORBA interface. If the CORBA transactions and database transactions are parallel, you are potentially stuck with a two-phase commit. Thanks for playing.

There is no necessary relationship between a CORBA transaction and a database transaction. A CORBA transaction could cause no database transaction, or it could be the precursor of many. But to relax the relationship is to raise the question of how, as a pattern, the CORBA object can be decoupled from the persistent object. The answer begins with a concept called "Persistence Aware." ObjectStore implements the idea directly, but conceptually an object that is persistence-aware is an aggregate of persistence-capable objects: it is aware of which objects in its immediate membership are persistent. Control of database transactions begins at this point, one layer behind the CORBA interface.

If you intend to pass persistent data to your clients via a CORBA interface, then CORBA structures are necessary. In general, it is not a good idea to make your CORBA structures persistent. Instead, each persistent object contains one transient CORBA structure. The structure is declared in the persistent object as transient, and the persistent object is responsible for moving data elements back and forth before and after a database event. Most vendors provide hooks (Versant will by the end of the year) in persistence-capable objects that the database engine calls when events, like a fetch or a flush, occur. ObjectStore defines postInitializeContents and preFlushContents methods for every persistence-capable class, both there to be overridden. The containment pattern plus the hooks allow the mapping between CORBA structure and persistent object to happen automatically.
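Here is a minimal sketch of the containment pattern, reusing the AccountData stand-in from the earlier fragment. The hook names follow ObjectStore's convention as described above; the exact signatures vary by product and version, and the class and field names are illustrative.

public class PersistentAccount {            // persistence-capable class
    private String owner;                   // persistent members
    private long balance;

    private transient AccountData struct;   // the CORBA structure: never stored

    // Called by the engine after the object's contents are fetched.
    public void postInitializeContents() {
        struct = new AccountData();
        struct.owner = owner;               // persistent -> transient
        struct.balance = balance;
    }

    // Called by the engine before the object's contents are flushed.
    public void preFlushContents() {
        owner = struct.owner;               // transient -> persistent
        balance = struct.balance;
    }

    public AccountData getStruct() {
        return struct;
    }
}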

When a client calls into the object via the CORBA interface, the CORBA structure is returned. If the data is fresh, there is potentially no database transaction necessary: the structure is a cache that is refreshed within the boundaries of a database transaction. The database engine calls the pre-flush hook when the data is being saved and the post-initialize hook when the data is fetched, so the data is, in a sense, transferred automatically between its transient representation and its persistent representation. Methods guaranteed to be called before and after database events provide a hands-off environment with respect to each transient structure. Responsibility for starting and committing the transaction should be based on the data structure being returned; it can lie in the persistence-aware object or be pushed back into the persistence-capable objects. The base heuristic is that the transaction boundary is defined as late as practical. Postponing the transaction could allow you to avoid it altogether, and avoidance can make the cost of a read nearly unmeasurable.
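As a sketch of "as late as practical," here is one shape the read path could take, reusing AccountData from the first fragment. The Transaction class below is a stand-in for whichever vendor API is underneath, not a real one.

public class CachingReader {
    private AccountData cache;      // the transient IDL-struct stand-in
    private boolean fresh;

    public synchronized AccountData read() {
        if (fresh) {
            return cache;           // fresh data: no database transaction at all
        }
        Transaction tx = Transaction.begin(Transaction.READ);
        try {
            cache = loadFromDatabase();   // the engine fires the post-initialize hook
            fresh = true;
        } finally {
            tx.commit();
        }
        return cache;
    }

    private AccountData loadFromDatabase() {   // placeholder for the vendor fetch
        return new AccountData();
    }
}

final class Transaction {                      // vendor-API stand-in
    static final int READ = 0, WRITE = 1;
    static Transaction begin(int type) { return new Transaction(); }
    void commit() { }
}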

The issue of transaction control and management becomes more complicated when the service you're building is multithreaded. When threads are implemented well, the increase in performance makes the cost of writing the code diminish quickly, especially with CORBA. With the CORBA layer decoupled from the database layer, the CORBA transactions become relatively easy to implement, synchronization being key. The database transactions, however, become a bit more complicated because you've got n threads running through your objects' accessors and mutators. It will always be economical to allow multiple threads into the same transaction; for instance, you could allow read requests into a write transaction. Consequently, you need to know when a transaction is in progress and what sort of transaction (read or write) it is. The thing to do is to encapsulate transaction management in a subsystem. Within it there needs to be (sketches of each piece follow):

  • An object that keeps track of the current transaction state
  • An object that queues and brokers the transaction requests
  • An object that can open, load and close the database
  • Pool management for the semaphores
  • A background thread that pulls requests from the broker and processes them. When the thread asks the broker for a transaction, what gets returned is a sort of heavy semaphore. It contains everything that the thread will need, from a database perspective, to do its job.
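Here is one possible shape for that heavy semaphore. The class name comes from the design above; the fields shown are a guess at the minimum, and a production version would also carry the session and database handles.

public class ODBSemaphore {
    public static final int READ = 0;
    public static final int WRITE = 1;

    private Thread requester;   // the thread that asked for the transaction
    private int type;           // READ or WRITE

    public synchronized void init(Thread requester, int type) {
        this.requester = requester;
        this.type = type;
    }

    public synchronized Thread getRequester() {
        return requester;
    }

    public synchronized int getType() {
        return type;
    }
}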

Object pool management consists of a stack or queue and a strategy for allocating new semaphores when the pool faults. Semaphores are not created and destroyed on demand; they are pooled for reuse. The ImplementationStrategy is the manager of the pool.
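A minimal version of the pool, assuming the ODBSemaphore above. The fault strategy here simply allocates a new semaphore when the stack runs empty.

import java.util.Stack;

public class ImplementationStrategy {
    private final Stack pool = new Stack();

    // Reuse a pooled semaphore, or allocate a new one on a pool fault.
    public synchronized ODBSemaphore acquire() {
        return pool.empty() ? new ODBSemaphore() : (ODBSemaphore) pool.pop();
    }

    // Semaphores are never destroyed per demand; they go back in the pool.
    public synchronized void release(ODBSemaphore sem) {
        pool.push(sem);
    }
}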

The DBLoader is responsible for starting things up. It has a static instance() method, so it is easily available anywhere in the process. It creates the TransactionBroker. All requests for a transaction, and all subsequent transaction calls, come through the DBLoader.
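In skeleton form, assuming the TransactionBroker sketched below:

public class DBLoader {
    private static DBLoader theInstance;
    private final TransactionBroker broker;

    private DBLoader() {
        broker = new TransactionBroker();
        // open and load the database here, with the vendor API
    }

    public static synchronized DBLoader instance() {
        if (theInstance == null) {
            theInstance = new DBLoader();
        }
        return theInstance;
    }

    // Every transaction request in the process funnels through here.
    public ODBSemaphore requestTransaction(int type) throws Exception {
        return broker.request(type);
    }
}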

The TransactionBroker is the middle manager. It creates two vectors to keep read and write requests separate. Because there is a break in the operation sequence, there is also a hashtable to store exceptions in: if an exception occurs while a request is being processed, it is placed in the hashtable, and the requesting thread checks the hashtable after its notify() is called from TransactionState. The TransactionBroker creates the TransactionState and, subsequently, the TransDaemon. When a request is received, it takes the form of an ODBSemaphore.

The TransactionBroker requests a semaphore from the ImplementationStrategy and initializes it with a reference to the requesting thread and the type of transaction being requested. The TransactionBroker adds the semaphore to the appropriate vector and calls wait() on the semaphore. The TransactionBroker also has the job of notifying the TransDaemon when a transaction ends.
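The request path in sketch form, assuming the ODBSemaphore and ImplementationStrategy fragments above. Holding the semaphore's monitor across both the enqueue and the wait() means the daemon's notify cannot slip in between them.

import java.util.Hashtable;
import java.util.Vector;

public class TransactionBroker {
    private final Vector readRequests = new Vector();
    private final Vector writeRequests = new Vector();
    private final Hashtable exceptions = new Hashtable();   // keyed by thread
    private final ImplementationStrategy pool = new ImplementationStrategy();
    private final TransactionState state = new TransactionState();
    private final TransDaemon daemon;

    public TransactionBroker() {
        daemon = new TransDaemon(readRequests, writeRequests, state);
        daemon.start();
    }

    public ODBSemaphore request(int type) throws Exception {
        ODBSemaphore sem = pool.acquire();
        sem.init(Thread.currentThread(), type);
        Vector queue = (type == ODBSemaphore.WRITE) ? writeRequests : readRequests;
        synchronized (sem) {
            queue.addElement(sem);
            sem.wait();   // woken by TransactionState's notify()
        }
        // After waking, check whether a failure was recorded for this thread.
        Exception failure = (Exception) exceptions.remove(Thread.currentThread());
        if (failure != null) {
            throw failure;
        }
        return sem;
    }

    // Called when the client's work is done; ends the transaction and wakes the daemon.
    public void transactionEnded() {
        state.endTransaction();
    }
}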

The TransDaemon is derived from java.lang.Thread. The TransDaemon is given references to the request vectors and works both vectors from the front. The current heuristic gives write requests priority over read requests. If it finds nothing in either vector, it yields. If a request is found, a check is made with the TransactionState to find out whether a transaction is already started. If the system is in a transaction and the new request is a writer, the daemon calls wait(); here it waits to be notified by the TransactionState that the transaction has ended. In all cases the logic then proceeds by three rules (sketched after the list):
1. If the system is not in a transaction state and there is a writer, pass the request to TransactionState to start the transaction. This rule starts a write transaction.
2. If the system is in a transaction state and there is no writer, pass the request to TransactionState to join the transaction.
3. If the system is not in a transaction state and there is no writer, pass the request to TransactionState to start the transaction. This rule starts a read transaction.
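Here is the loop in minimal form; the rule numbers in the comments refer to the list above, and the TransactionState it consults is sketched next. The shape is illustrative, not production code.

import java.util.Vector;

public class TransDaemon extends Thread {
    private final Vector readRequests;
    private final Vector writeRequests;
    private final TransactionState state;

    public TransDaemon(Vector reads, Vector writes, TransactionState state) {
        this.readRequests = reads;
        this.writeRequests = writes;
        this.state = state;
    }

    public void run() {
        while (true) {
            ODBSemaphore next = dequeue();
            if (next == null) {
                Thread.yield();         // nothing in either vector
                continue;
            }
            boolean writer = (next.getType() == ODBSemaphore.WRITE);
            synchronized (state) {
                while (writer && state.inTransaction()) {
                    try {
                        state.wait();   // wait for the current transaction to end
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                if (!state.inTransaction()) {
                    state.startTransaction(next);   // rules 1 and 3
                } else {
                    state.joinTransaction(next);    // rule 2
                }
            }
        }
    }

    // The current heuristic: writers have priority over readers.
    private ODBSemaphore dequeue() {
        synchronized (writeRequests) {
            if (!writeRequests.isEmpty()) {
                return (ODBSemaphore) writeRequests.remove(0);
            }
        }
        synchronized (readRequests) {
            if (!readRequests.isEmpty()) {
                return (ODBSemaphore) readRequests.remove(0);
            }
        }
        return null;
    }
}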

The TransactionState is, as one would suspect, a state machine: it approves requests. When the TransDaemon pulls a request from one of the request vectors, the request eventually goes to the TransactionState either to join the current transaction or to start a new one. When that happens, the TransactionState notifies the waiting semaphore/request, which completes its trip back through the TransactionBroker.
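A minimal rendering of the state machine; the vendor begin/commit calls are only marked by comments.

public class TransactionState {
    private boolean inTransaction;
    private int activeType;

    public synchronized boolean inTransaction() {
        return inTransaction;
    }

    public synchronized int activeType() {
        return activeType;
    }

    public synchronized void startTransaction(ODBSemaphore sem) {
        inTransaction = true;
        activeType = sem.getType();
        // begin the database transaction with the vendor API here
        synchronized (sem) {
            sem.notify();   // the request completes its trip back through the broker
        }
    }

    public synchronized void joinTransaction(ODBSemaphore sem) {
        synchronized (sem) {
            sem.notify();   // a reader joins the transaction already in progress
        }
    }

    public synchronized void endTransaction() {
        inTransaction = false;
        // commit or abort with the vendor API here
        notify();           // wake the TransDaemon if it is waiting on us
    }
}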

For a general implementation, there are holes in the rules. In a mixed-transaction-type situation there is an apparent risk of starving readers out. One extension I've been considering is always allowing readers into a write transaction; that way, if there is a steady stream of write requests, the readers can still join the game. Readers are allowed in until the current writer intends to commit. At that point, transaction management calls for a checkpoint, and all waiting readers are queued until the next transaction starts up.

It is important that the clients are completely uncoupled from the database and see only CORBA. The database is loosely coupled to CORBA and sees only IDL-defined, behavior-less structures. The CORBA interfaces are uncoupled from the database and see only in-process calls that return IDL-defined structures.

Another extension entails the invention of a heuristic object that would act as a transaction legislator. Other objects involved in transaction processing would ask the heuristic object what is allowed, and answers come back in the form of true or false.

I've described a pattern that is extensible and flexible. It provides a means to uncouple the dependencies between an OODB platform and CORBA, and it places the database functionality at the back of the process. It will be interesting to see how it evolves, both in my domain and in the hands of other developers. Some things, though, are not as flexible: no matter what OODB platform you choose, the first thing that must be done is to fall into good design habits.

Analyze and understand your use cases. If you can separate reads and writes into different services, do so; it relieves one concurrency factor. Analyze and understand your scaling issues. Derive a sense of how many clients there could be, and weigh how interactive your interface needs to be. If the service you are building has a high degree of interaction, the number of clients becomes less meaningful, since a few clients can generate a great deal of traffic. When you can, design your interface to function in terms of structures rather than fine-grained accessors and mutators. It lessens the degree of interaction.

David Knox has a BS in Mathematics from Metropolitan State College of Denver. He works for Galileo International, Inc., developers of one of the largest computerized airline reservation systems in the world. David works in the Infrastructure and Middleware organization; his responsibilities include research and development and the first deployment of CORBA technology.

ObjectStore is a registered trademark of Object Design Inc. OrbixWeb is a registered trademark of IONA Corporation. Versant is a registered trademark of Versant Object Technology.

