Object Database Systems: Quo vadis?

I wanted to form an opinion on some critical questions related to object databases:

Where are object database systems going? Are relational database systems becoming object databases? Do we need a standard for object databases? Why did ODMG not succeed?


I have therefore interviewed one of our experts, Mike Card, for his views on the current state of the union of object database systems.
Mike works with Syracuse Research Corporation (SRC) and is involved in object databases and their application to challenging problems, including pattern recognition. He chairs the ODBT group in the OMG to advance object database standardization.

Question 1:
It has been said (see Java Panel II) that for an object database system to be a suitable solution to the object persistence problem, it needs to support not only a richer object model but also set-oriented, scalable, cost-based-optimized query processing and high-throughput transactions.
Do current ODBMSs offer these features?


Mike Card:
In my opinion, no, though support for true transactional processing varies between vendors. Some products use “optimistic” concurrency control, which is suitable only for environments where there is very little concurrent access to the database, such as single-threaded embedded applications. In my opinion, a database engine is not “scalable” (at least in the enterprise sense of the word) if it is based on optimistic concurrency control. This is because most truly large-scale applications require optimal performance with many concurrent transactions, and this cannot be achieved when updates have to be rolled back at transaction commit time and re-attempted due to access conflicts.
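For readers unfamiliar with the mechanism, here is a minimal, self-contained Java sketch of version-based optimistic concurrency control. The class and field names are hypothetical, and a real engine would validate whole transactions at commit time rather than a single field, but the rollback-and-retry cost under contention is the effect Mike describes.

import java.util.concurrent.atomic.AtomicInteger;

// Sketch of commit-time conflict detection: each record carries a version; a writer
// re-reads, applies its change, and commits only if the version is unchanged,
// otherwise it rolls back its work and retries.
public class OptimisticUpdateDemo {

    static class VersionedRecord {
        volatile int version = 0;
        volatile int balance = 100;
    }

    static final VersionedRecord record = new VersionedRecord();
    static final AtomicInteger retries = new AtomicInteger();

    // Attempt to add 'delta' to the balance under optimistic concurrency control.
    static void deposit(int delta) {
        while (true) {
            int readVersion = record.version;          // snapshot at read time
            int newBalance = record.balance + delta;   // do the work outside any lock
            synchronized (record) {                    // "commit": validate, then apply
                if (record.version == readVersion) {
                    record.balance = newBalance;
                    record.version++;
                    return;
                }
            }
            retries.incrementAndGet();                 // conflict: roll back and retry
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[8];
        for (int i = 0; i < workers.length; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) deposit(1);
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // With many concurrent writers the retry count grows quickly, which is the
        // throughput cost of optimistic control on contended, large-scale workloads.
        System.out.println("balance = " + record.balance + ", retries = " + retries.get());
    }
}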

Question 2:
Relational systems are rapidly becoming object database systems (see Java Panel II). Do you agree or disagree with this statement? Why?


Mike Card:
I would disagree, because relational databases still fundamentally access objects as rows of tables and do not offer seamless integration into a host programming language’s type system. It is true that there are some good ORMs out there, but these will never offer the performance or seamlessness that is available with a good ODBMS. I would agree that ORMs are getting better, but relational databases themselves are not becoming object databases.
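To make the distinction concrete, the following Java sketch contrasts the two access styles; the ObjectStore class and its store/load methods are invented purely for illustration and are not any vendor's API.

import java.util.*;

// An ORM flattens an object graph into table rows and must reassemble it on read,
// while an object database stores and returns the graph in the language's own types.
public class MappingStyles {

    static class Address { String city; Address(String city) { this.city = city; } }
    static class Person  { String name; Address home;
                           Person(String n, Address a) { name = n; home = a; } }

    // ORM-style: each object becomes a row (here just a Map) plus a foreign key.
    static Map<String, Object> toPersonRow(Person p, long addressId) {
        Map<String, Object> row = new HashMap<>();
        row.put("name", p.name);
        row.put("home_address_id", addressId);   // the relationship becomes a join key
        return row;
    }

    // ODBMS-style: a hypothetical store that keeps the object graph as-is.
    static class ObjectStore {
        private final Map<Long, Object> objects = new HashMap<>();
        private long nextId = 1;
        long store(Object o) { objects.put(nextId, o); return nextId++; }
        @SuppressWarnings("unchecked")
        <T> T load(long id) { return (T) objects.get(id); }
    }

    public static void main(String[] args) {
        Person p = new Person("Ada", new Address("Syracuse"));

        // Relational path: rows plus reassembly and type conversion on the way back.
        System.out.println("row = " + toPersonRow(p, 42L));

        // Object path: the stored value is already a Person with its Address reachable.
        ObjectStore db = new ObjectStore();
        long id = db.store(p);
        Person loaded = db.load(id);
        System.out.println("loaded.home.city = " + loaded.home.city);
    }
}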

Question 3:
A lot of the world's systems are built on relational technology, and those systems need to be extended and integrated.
That job is always difficult. An ODBMS should be able to participate fully in the enterprise data ecosystem as well as any other DBMS, both for new development and for enhancing existing applications. How can this be achieved?
What is your opinion on this issue?


Mike Card:
As many vendors have noted, this is to some extent a marketing problem in terms of making enterprise customers aware of what object databases can do. It is also a technology issue, however, as engines based on “small-scale” concepts like optimistic concurrency control are not suitable for many enterprise environments.

Question 4:
Object databases vary greatly from vendor to vendor. Is a standard for object databases (still) needed? If yes, what needs to be standardized in your opinion?


Mike Card:
Yes, I believe it is. The APIs for creating, opening, deleting, and synchronizing/replicating databases as well as the native query APIs should be standardized to allow application portability. Any APIs needed to insert objects into the database, remove them from the database, or create an index on them should also be standardized, again for the sake of application portability. I would also like to see a standard XML format for exporting object database contents to allow for data portability. I am not sure our current OMG effort can achieve all of these standardization goals, but I would like it to.
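As a rough illustration of that scope, and only that, a standardized surface might look something like the Java interface below; every name in it is hypothetical and does not come from the OMG working group or any existing product.

import java.util.Collection;

// Hypothetical sketch of the API surface listed above: database lifecycle,
// object insert/remove, index creation, and native query. Names are invented.
public interface ObjectDatabase extends AutoCloseable {

    // Lifecycle / administration
    static ObjectDatabase open(String uri) { throw new UnsupportedOperationException("sketch only"); }
    void delete();                                  // drop the database
    void synchronizeWith(String replicaUri);        // replication/synchronization hook

    // Object management
    <T> void store(T object);                       // insert or update an object
    <T> void remove(T object);                      // remove an object
    <T> void createIndex(Class<T> type, String fieldName);

    // Native query: a predicate expressed in the host language's own terms
    <T> Collection<T> query(Class<T> type, java.util.function.Predicate<T> filter);

    @Override void close();
}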

Question 5:
How would this new standard differ from the previous ODMG effort? And what relationship would this new standard have with standards such as SQL?


Mike Card:
Unlike the previous ODMG standard, the new standard should have a conformance test suite that anyone can download and run against a candidate product. The standard itself should also be unambiguous and use precise language, as is done in ISO standards for things like programming languages, e.g., ISO/IEC 8652 (the Ada programming language standard).

The primary focus of an object database standard should be its support of a native programming language, so I would expect an object database standard to be more closely tied to an ISO standard for an object programming language (Ada, C++, or other ISO-standardized languages that may appear) than to SQL. That said, if a LINQ-like native query capability were included, the object database standard would perhaps also reference the SQL standard, given LINQ's use of SQL-like verbs and semantics.
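LINQ itself is a .NET facility; the closest built-in analogue in Java is the Streams API, so the sketch below uses Streams to show what native-language querying means in practice: the query is an ordinary, compiler-checked expression over typed objects rather than an embedded SQL string.

import java.util.List;
import java.util.stream.Collectors;

// Native-language querying in Java, with the Streams API standing in for LINQ.
// The data here is an in-memory list; an ODBMS would evaluate an equivalent
// expression against stored objects, ideally using indexes instead of a full scan.
public class NativeQueryDemo {

    record Employee(String name, String dept, int salary) {}

    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("Ada",   "Engineering", 120_000),
                new Employee("Grace", "Engineering",  95_000),
                new Employee("Linus", "Support",      70_000));

        // SELECT name FROM employees WHERE dept = 'Engineering' AND salary > 100000,
        // expressed as a typed expression in the host language:
        List<String> names = employees.stream()
                .filter(e -> e.dept().equals("Engineering"))
                .filter(e -> e.salary() > 100_000)
                .map(Employee::name)
                .collect(Collectors.toList());

        System.out.println(names);   // prints [Ada]
    }
}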

Question 6:
LINQ is leading database API innovation by providing native-language data access. Is this a suitable standard for ODBMS? Why?


Mike Card:
LINQ looks like it has a lot of promise in this area. We (the Object Database Technology Working Group in OMG) are currently evaluating LINQ vs. the stack-based query language (SBQL) developed at the Polish-Japanese Institute of Information Technology to see how these technologies compare for handling complex queries. SBQL has proven to be very good for complex queries and is being deployed in several EU projects, though it is unknown to most American developers. We are doing this evaluation to ensure LINQ is a good foundation for developers of applications that require complex queries, and is not too “small-scale” in its current form. We also want to hear from the LINQ community on plans (if any) to include update capability in LINQ, and we need to be sure there are no surprises for parallel transaction execution.

Question 7:
When are object databases a suitable solution for an enterprise, and when are they not?


Mike Card:
They are not suitable when the engine is intended primarily for use in single-threaded embedded systems (optimistic concurrency control is a good indicator of this, as I mentioned earlier).

An object database would be suitable for use in an enterprise system if it was really good at large-scale data management, i.e., the engine was designed to handle large volumes of data and many parallel transactions. Some object databases are not built like this; they are designed primarily for single-threaded embedded applications with fairly small data volumes, and as such they would not be good candidates for enterprise applications.

Besides the technology used in the database engine itself, a good enterprise object database would need database maintenance tools (e.g., taking database A offline and replacing it with database B, updating or fiddling with database A and then bringing it back online, scheduling database backups, replicating databases between sites, etc.).
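A hypothetical sketch of that tooling surface, with invented names that correspond to no product's actual administration API, might look like this:

import java.nio.file.Path;
import java.time.Duration;

// Illustrative-only maintenance operations for an enterprise object database.
public interface DatabaseAdmin {
    void takeOffline(String databaseName);                       // e.g., retire database A
    void bringOnline(String databaseName);                       // e.g., promote database B
    void swap(String offlineDb, String replacementDb);           // replace A with B
    void scheduleBackup(String databaseName, Path target, Duration interval);
    void replicate(String databaseName, String remoteSiteUri);   // cross-site replication
}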

Question 8:
What is the future direction of object databases? Where are they going?


Mike Card:
The answer to this question depends on where object programming languages themselves go. Up to this point, programming languages have not included the concept of persistence; it has always been treated as a “foreign” thing to be dealt with through APIs for things like file I/O. This is a very 1960s view of persistence, where programs were things that lived in core memory and persistent things were data files written out to tape or disk.

The closest thing to true integration of persistence I have seen is in Ruby with its “PStore” class. I would like to see persistence integrated even more fully, where objects can be declared persistent or made persistent, à la

public class myClass {

    // hypothetical "persistent" modifier: this field would survive program restarts
    persistent Integer[] myInts = new Integer[5];

    // an ordinary, transient field
    Integer[] myOtherInts = new Integer[2];

    public void aMethod() {
        // hypothetical runtime call making an existing object persistent
        myOtherInts.makePersistent();
    }
}

and the programming language itself would take care of maintaining them in files and loading them in at program start-up etc. without any additional work from the programmer.

Now there are obviously challenges with this, as this small example shows. What does it mean to initialize a persistent object in a class declaration? Is the object re-initialized when the program starts up? Or is the persisted value retained, rendering the initialization clause meaningless on a subsequent run of the program? Should persistent objects be allowed to have initialization clauses like this? What are the rules about inter-object access? Must persistence by reachability be used to ensure referential integrity? Can a “stack” variable (i.e., a variable declared in a method) be declared or made persistent, or must persistent variables be at the class level or even “global” (static)? Are these questions different for interpreted languages like Ruby, which do not have the same notion of classes as languages like Java? These are computer science/discrete math questions that will be answered during the language design process, which will in turn determine how much “database” functionality ends up in the language itself.

If persistence were fully integrated into an object programming language in this way, then the role of an object database for that language might be to just provide an efficient way to organize and search the program’s persistent variables. This would reduce the scope of what an object database has to do, since today an object database not only has to provide efficient organization and search (index and query) capability, but it also has to make objects persistent as seamlessly as possible. Of course, this “reduction in scope” would only be possible if the default persistence mechanism for the programming language was implemented in a way that was efficient and fast for large numbers of objects.
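As a rough approximation of what such language-managed persistence could look like with today's Java, the sketch below reloads a "persistent" field from a file at start-up and writes it back at shutdown; the file name and the save/load helpers are invented for illustration. It also shows why the efficiency caveat matters: naive per-object serialization like this would not scale to large numbers of objects, which is exactly where an object database's indexing and query machinery would still be needed.

import java.io.*;
import java.nio.file.*;

// Approximation of "persistent Integer[] myInts = new Integer[5];" from the example
// above: the value is reloaded from a file at start-up and written back on exit.
// A real language-integrated design would hide all of this bookkeeping.
public class PersistentFieldDemo {

    private static final Path STORE = Paths.get("myInts.ser");   // hypothetical file name

    // Stand-in for the hypothetical persistent field.
    static Integer[] myInts = loadOrInit();

    static Integer[] loadOrInit() {
        if (Files.exists(STORE)) {
            try (ObjectInputStream in = new ObjectInputStream(Files.newInputStream(STORE))) {
                return (Integer[]) in.readObject();              // persisted value wins...
            } catch (IOException | ClassNotFoundException e) {
                // fall through to the initializer on any failure
            }
        }
        return new Integer[5];                                   // ...otherwise the initializer runs
    }

    static void save() {
        try (ObjectOutputStream out = new ObjectOutputStream(Files.newOutputStream(STORE))) {
            out.writeObject(myInts);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(PersistentFieldDemo::save));
        myInts[0] = (myInts[0] == null) ? 1 : myInts[0] + 1;     // counts program runs
        System.out.println("run count = " + myInts[0]);
    }
}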


