OODBMS & CORBA


Object-oriented database platforms offer several benefits. The first one I think of is that I don't have to write code to transform an object into a row in a table: the object model is the data model. Navigation from reference to reference is efficient because object access happens in the OO language itself. How complicated that gets depends on your intent, your design and the OODB platform you're using. Most Java APIs for OODBMS platforms are maturing quickly, and some interesting variations and parallels are forming.

ObjectStore is a physical, page-based implementation. This means that when an object is referenced, a physical image of the disk area the object is stored on is brought into memory; the physical disk address becomes the object's identifier. Other OODBMS platforms, like Versant, use logical models: instead of bringing a page of disk into memory, they use tree traversal to find the object and bring it into memory. Each paradigm has its strengths. ObjectStore allows (at the time of this writing) one transaction per session and one session per process.

Versant's product allows multiple sessions per process. Both allow multiple threads in a session. Yet no matter what sort of session model the platform uses, whether it is a physical or logical implementation, or how implicit the transaction boundaries become, we, the programmers, must still be involved in transaction management. For instance, you may want to cause a rollback based on a programmatic exception and return an informative exception to the user/client.
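As a sketch of that rollback-and-rethrow pattern, here is a minimal Java example. The Transaction and DataException types are stand-ins I've invented for illustration; a real service would use the vendor's transaction API and an IDL-declared exception.

```java
// Sketch: roll back a database transaction on failure and surface an
// informative exception to the client. Transaction and DataException are
// hypothetical stand-ins for a vendor API (ObjectStore, Versant).
class RollbackExample {
    // Hypothetical minimal transaction handle.
    static class Transaction {
        boolean committed, rolledBack;
        void commit()   { committed = true; }
        void rollback() { rolledBack = true; }
    }
    // The informative exception type the IDL might declare for clients.
    static class DataException extends Exception {
        DataException(String msg, Throwable cause) { super(msg, cause); }
    }

    static Transaction updateRecord(boolean fail) throws DataException {
        Transaction txn = new Transaction();
        try {
            if (fail) throw new IllegalStateException("constraint violated");
            txn.commit();
        } catch (RuntimeException e) {
            txn.rollback();  // undo the database work
            throw new DataException("update failed: " + e.getMessage(), e);
        }
        return txn;
    }
}
```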

Adding CORBA to the fray makes things more complicated because there needs to be some concept of coordination between CORBA transactions and database transactions. The patterns used to implement the service can make it or break it. Three things to pay close attention to going into the problem are interface granularity, scaling issues and use cases.

The granularity of the CORBA interface can make all the difference in the complexity of transaction coordination. Some interface implementations are very fine-grained, using accessor and mutator methods defined in the IDL for each data member. Others are more coarsely grained and deliver structures of data to a client. The client manipulates the data in the structure and returns it to the interface.
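The two styles can be sketched in Java along the lines of the code an IDL compiler would generate. The names here are purely illustrative, not from any particular IDL.

```java
// Sketch contrasting a fine-grained interface (one operation per data
// member) with a coarse-grained one that ships a whole struct to the client.
class Granularity {
    // Coarse-grained: one call moves the whole structure (would be an IDL struct).
    static class CustomerData {
        String name;
        String address;
    }
    interface CoarseCustomer {
        CustomerData getData();       // one round trip for everything
        void setData(CustomerData d);
    }
    // Fine-grained: each member crosses the wire in its own call.
    interface FineCustomer {
        String name();
        void name(String n);
        String address();
        void address(String a);
    }
    // Trivial in-memory implementation of the coarse style, for illustration.
    static class CoarseImpl implements CoarseCustomer {
        private CustomerData data = new CustomerData();
        public CustomerData getData() { return data; }
        public void setData(CustomerData d) { data = d; }
    }
}
```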

Scaling issues are a beast in their own right. Some applications achieve scaling by horizontal partitioning using multiple clones of the same CORBA service to distribute load. This is a means to push some of the concurrency issues back into the database engine because it implies having several processes hitting the same database. Others achieve scaling by vertical partitioning in which services are built using a per-method or per-client service. This also implies multiple processes hitting the same database.

It pays to understand your use cases in depth. Some services manage read operations while others are built to manage write operations. Separating the two addresses scaling and concurrency directly, rather than alleviating concurrency issues only as a side effect of addressing scaling.

It is certain that if you ever implement an OODB with CORBA, you will use some combination of the above paradigms. For instance, you could implement a read-only service that is horizontally partitioned and coarse-grained. But as you plan your implementation, remember that all of these choices affect concurrency. Study your use cases carefully, and let granularity and scaling fall into place once your use cases are well understood.

If you have a good grasp of the previous issues, the conceptualization of transaction coordination should follow. A CORBA transaction starts when the client request enters the server's domain and ends when the reply leaves the server. Should the CORBA transaction and the database transaction be parallel? If your data structure is shallow and your interface is fine-grained, the answer could be yes. This would mean the CORBA implementation is a persistent object as well. But if your data structure has associative depth, then you probably shouldn't make the implementation objects persistent, and your interface should be coarse-grained. The benefits start at performance and continue into maintenance. Consider the implications if the acting CORBA interface is associated with another CORBA interface: if the CORBA transactions and database transactions are parallel, you are potentially stuck with a two-phase commit. Thanks for playing.

There is no necessary relationship between a CORBA transaction and a database transaction. A CORBA transaction could cause no database transaction or it could be the precursor of many. But to relax the relationship is to bring into question how, in pattern, the CORBA object can be decoupled from the persistent object. The answer begins with a concept called "Persistence Aware." ObjectStore implements the idea directly. But conceptually an object that is persistence-aware is an aggregate of persistence-capable objects. It is aware of which objects in its immediate membership are persistent. Control of database transactions begins at this point, one layer behind the CORBA interface.

If you intend to pass persistent data to your clients via a CORBA interface, then CORBA structures are necessary. In general, it is not a good idea to make your CORBA structures persistent. Each persistent object contains one transient CORBA structure. The structure is declared in the persistent object as transient and the persistent object is responsible for moving data elements back and forth before and after a database event. Most vendors provide hooks (Versant will by the end of the year) in persistence-capable objects that the database engine can call when events, like a fetch or a flush, occur. ObjectStore defines postInitializeContents and preFlushContents methods for every persistence-capable class. Both are there to be overridden. The pattern of containment and the hooks allow easy mapping to occur automatically between CORBA structure and persistent object.
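A minimal sketch of the containment pattern follows. The hook names follow ObjectStore's postInitializeContents/preFlushContents convention, but the persistence machinery itself is omitted so the example stands alone; in a real class the engine, not your code, would invoke the hooks.

```java
// Sketch: a persistence-capable object holds one transient CORBA structure
// and copies data in the database-event hooks.
class HookedCustomer {
    // Would be the IDL-generated struct; kept transient, never persisted.
    static class CustomerStruct { String name; }

    transient CustomerStruct cache = new CustomerStruct();
    private String name;  // what the engine actually persists

    // Called by the engine after the object is fetched from disk.
    public void postInitializeContents() {
        cache = new CustomerStruct();
        cache.name = name;              // persistent -> transient
    }
    // Called by the engine just before the object is flushed to disk.
    public void preFlushContents() {
        name = cache.name;              // transient -> persistent
    }
}
```

The mapping between structure and persistent state stays in one place, and callers never see the copy happen.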

When a client calls into the object via the CORBA interface, the CORBA structure is returned. If the data is fresh, there is potentially no database transaction necessary. The structure is a cache that is refreshed within the boundaries of a database transaction. The database engine will call the preFlush method when the data is being saved, and it will call the postInitialize method when the data is fetched. The data is, in a sense, transferred back and forth automatically between its transient representation and its persistent representation. The provision of methods guaranteed to be called before and after database events provides a hands-off environment with respect to each transient structure. Responsibility for starting and committing the transaction should be based on the data structure being returned. It can lie in the persistence-aware object or it can be pushed back into the persistence-capable objects. The base heuristic is that the transaction boundary is defined as late as practical. Postponing the transaction could allow you to avoid it altogether, and an avoided transaction's cost to throughput is nearly unmeasurable.
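The late-boundary heuristic might look like this sketch, with the database fetch and the transaction reduced to stand-ins (the counter is just instrumentation for the example):

```java
// Sketch: the accessor returns the cached struct directly when it is fresh,
// and only opens a (hypothetical) read transaction when it must refetch.
class LazyRead {
    static class Struct { int value; }

    private Struct cache = new Struct();
    private boolean fresh = false;
    int transactionsStarted = 0;   // instrumentation for this sketch

    Struct getData() {
        if (!fresh) {
            transactionsStarted++; // begin read transaction here
            cache.value = 42;      // stand-in for the database fetch
            fresh = true;          // commit; cache now valid
        }
        return cache;              // fresh data: no transaction at all
    }
}
```

Every call after the first is served from the cache with no transaction, which is the whole point of defining the boundary as late as practical.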

The issue of transaction control and management becomes more complicated when the service you're building is multithreaded. When threads are implemented well, the increase in performance makes the cost of writing the code diminish quickly, especially when using CORBA. With the CORBA layer decoupled from the database layer, the CORBA transactions become relatively easy to implement, synchronization being key. However, the database transactions become a bit more complicated because you've got n threads running through your objects' accessors and mutators. It will always be economical to allow multiple threads into the same transaction. For instance, you could allow read requests into a write transaction. Consequently, you need to know when a transaction is in progress and what sort (read or write) of transaction it is. The thing to do is to encapsulate transaction management in a subsystem. Within it there needs to be:

  • An object that keeps track of the current transaction state
  • An object that queues and brokers the transaction requests
  • An object that can open, load and close the database
  • Pool management for the semaphores
  • A background thread that pulls requests from the broker and processes them. When the thread asks the broker for a transaction, what gets returned is a sort of heavy semaphore. It contains everything that the thread will need, from a database perspective, to do its job.
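The heavy semaphore handed back by the broker might be sketched as follows; the fields and method names are illustrative, not from any vendor API.

```java
// Sketch of the "heavy semaphore": it bundles the requesting thread, the
// transaction type, and the wait/notify handshake the broker and daemon use.
class ODBSemaphore {
    enum Type { READ, WRITE }

    private final Thread requester;  // the thread that asked for a transaction
    private final Type type;
    private boolean granted = false;

    ODBSemaphore(Thread requester, Type type) {
        this.requester = requester;
        this.type = type;
    }

    Type type() { return type; }

    // The requesting thread blocks here until the daemon admits it.
    synchronized void await() throws InterruptedException {
        while (!granted) wait();
    }

    // Called from the transaction machinery when the request is approved.
    synchronized void grant() {
        granted = true;
        notify();
    }
}
```

Pooling these objects, as described next, avoids creating and destroying one per request.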

Object pool management consists of a stack or queue and a strategy for allocating new semaphores when the pool is exhausted. Semaphores are not created and destroyed on demand; they are pooled for reuse. The ImplementationStrategy is the manager of the pool.

The DBLoader is responsible for starting things up. It has a static instance() method so it is easily available in the process. It creates the TransactionBroker. All requests for a transaction and subsequent transaction calls come through the DBLoader.

The TransactionBroker is the middle manager. It creates two vectors to keep read and write requests separate. Because there is a break in the operation sequence, there is a hashtable to store exceptions in. If there is an exception while a request is being processed, it is placed in the hashtable; the requesting thread checks the hashtable after its notify() is called from TransactionState. TransactionBroker creates the TransactionState and subsequently the TransDaemon. When a request is received, it is manifested as an ODBSemaphore.

The TransactionBroker requests a semaphore from the ImplementationStrategy and initializes it with a reference to the requesting thread and the type of transaction that is being requested. The TransactionBroker adds the semaphore to the appropriate vector and calls wait() on the semaphore. TransactionBroker also has the job of notifying TransDaemon when a transaction ends.

The TransDaemon is derived from java.lang.Thread. TransDaemon is given references to the request vectors and works both vectors from the front. The current heuristic gives write requests priority over read requests. If it finds nothing in either vector, it yields. If a request is found, a check is made with the TransactionState to find out whether a transaction is already started. If the system is in a transaction and the new request is a writer, the daemon calls wait(); it waits to be notified by the TransactionState that the transaction has ended. Under all other conditions the logic proceeds to three rules:
1. If the system is not in a transaction and the request is a writer, pass the request to TransactionState to start the transaction. This rule starts a write transaction.
2. If the system is in a transaction and the request is not a writer, pass the request to TransactionState to join the transaction.
3. If the system is not in a transaction and the request is not a writer, pass the request to TransactionState to start the transaction. This rule starts a read transaction.
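The three rules, plus the writer-must-wait case, reduce to a small decision function. The names here are my own, not from any actual TransDaemon implementation.

```java
// The admission rules as a pure decision function: given whether a
// transaction is open and whether the request is a writer, decide what the
// daemon passes to TransactionState.
class AdmissionRules {
    enum Action { START_WRITE, START_READ, JOIN, WAIT }

    static Action decide(boolean inTransaction, boolean writer) {
        if (!inTransaction && writer)  return Action.START_WRITE; // rule 1
        if (inTransaction && !writer)  return Action.JOIN;        // rule 2
        if (!inTransaction && !writer) return Action.START_READ;  // rule 3
        return Action.WAIT; // in a transaction and a writer: wait for it to end
    }
}
```

Keeping the decision in one pure function makes the daemon's loop trivial to test apart from the threading.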

The TransactionState is, as one would suspect, a state machine: it approves requests. When the TransDaemon pulls a request from one of the request vectors, eventually the request will go to TransactionState to either join the current transaction or to start a new one. When that happens, the TransactionState notifies the waiting semaphore/request and it completes its trip back through the TransactionBroker.

For general implementation, there are holes in the rules. In a mixed-transaction-type situation there is a real possibility of starving readers out. One extension I've been considering is how to always allow readers into a write transaction. That way, if there is a steady stream of write requests, the readers can still join the game. Readers are allowed in until the current writer intends to commit. At that point, transaction management calls for a checkpoint, and all waiting readers are queued until the next transaction starts.

It is important that the clients are completely uncoupled from the database and see only CORBA. The database is loosely coupled to CORBA and sees only IDL-defined, behavior-less structures. CORBA interfaces are uncoupled from the database and see only in-process calls that return IDL-defined structures.

Another extension entails the invention of a heuristic object that would act as a transaction legislator. Other objects involved in transaction processing would ask the heuristic object what is allowed; answers come back in the form of true or false.
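The proposed legislator might be sketched like this, folding in the reader-friendly extension. Everything here is hypothetical; it only illustrates the true/false question-and-answer style.

```java
// Sketch of the "transaction legislator": other objects ask it whether an
// operation is allowed and get back true or false. This version encodes the
// extension above: readers may join a write transaction until the writer
// signals intent to commit.
class TransactionLegislator {
    private boolean writeInProgress = false;
    private boolean commitPending = false;

    void writeStarted()   { writeInProgress = true; commitPending = false; }
    void commitIntended() { commitPending = true; }   // checkpoint time
    void ended()          { writeInProgress = false; commitPending = false; }

    // Readers are admitted into the write transaction until checkpoint time.
    boolean mayReaderJoin() {
        return !writeInProgress || !commitPending;
    }
}
```

Because the policy lives in one object, swapping in a different heuristic never touches the daemon or the broker.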

I've described a pattern that is extensible and flexible. It provides a means to uncouple the dependencies between an OODB platform and CORBA, and it places the database functionality at the back of the process. It will be interesting to see how it evolves, both in my domain and from the perspective of other developers. Some things, though, are not as flexible: no matter what OODB platform you choose, the first thing that must be done is to fall into good design habits.

Analyze and understand your use cases. If you can separate reads and writes into different services, do so; it relieves one concurrency factor. Analyze and understand your scaling issues. Derive a sense of how many clients there could be, and factor in how interactive your interface needs to be. If the service you are building has a high degree of interaction, the number of clients becomes less meaningful: a few clients can cause a great deal of traffic. When you can, design your interface to function in terms of structures rather than fine-grained accessors and mutators. It lessens the degree of interaction.

More Stories By David Knox

David Knox has a BS in Mathematics from Metropolitan State College of Denver. He works for Galileo International, Inc., developers of one of the largest computerized airline reservation systems in the world. David works in the Infrastructure and Middleware organization. His responsibilities include Research & Development and the first deployment of CORBA technology.

ObjectStore is a registered trademark of Object Design Inc. OrbixWeb is a registered trademark of IONA Corporation. Versant is a registered trademark of Versant Object Technology.
