Several fun and important secrets of SQLJ will be unlocked today. Our show will include the following numbers:

  • Some magic tricks for taming the SQLJ translator to do your bidding.
  • Some incantations to turn you into a SQLJ debug-mon.
  • An initiation to the mysteries of execution contexts - for getting full control over executing SQL statements - and of connection contexts.
  • A happy story of brotherly love between JDBC and SQLJ.
  • A map of the hidden location containing all of the remaining SQLJ details that the wonderful folks from Java Developer's Journal neither would nor could let me fit into this column.
Little space and much to say - let's get started right away!

Your Own Private Translator...Is Talking Back
Don't fret if you're stuck and need help - the SQLJ translator may just be able to give you what you need. Enjoy the following translator command-line options:

  • sqlj -help shows a short help message with the important SQLJ command-line options. Saying sqlj by itself is also recognized as a cry for help.
The Translator-Chameleon Is Serving Up Bugs
How I wish that SQLJ translation were built into the javac Java compiler. But tough luck - it ain't! The translator strives hard, however, to make it appear as if the only difference were saying sqlj instead of javac.

Nice: You can freely mix .sqlj and .java source files on the command line. The translator also has some implicit make capability similar to javac. Oh yes, it automatically invokes your Java compiler - but you already knew that!

Nice: The translator reports error messages from your Java compiler on the original SQLJ file and not on the generated Java file.

Not Nice: When your program throws exceptions at runtime, line numbers - such as those issued by printStackTrace() - are shown in terms of the generated Java files.

Incantation #1
Add the flag -linemap to your command line during translation. The translator will then fix up the file names and line numbers in the generated class files so that they point back at the original SQLJ files. If you want this on by default (which it probably should have been in the first place), create an environment variable SQLJ_OPTIONS with the value "-linemap", or add the line sqlj.linemap to a sqlj.properties file.
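If you prefer never to retype the flag, the two defaulting mechanisms above might look like this - a sketch assuming a POSIX shell and a sqlj.properties file in the current directory:

```shell
# Make -linemap the default for every translation via the environment...
export SQLJ_OPTIONS="-linemap"

# ...or persist the setting in a sqlj.properties file instead:
echo "sqlj.linemap" >> sqlj.properties

sqlj MyApp.sqlj    # translates with line mapping switched on
```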

Still Not Nice: If you use Sun's Java debugger jdb to debug your SQLJ program, you'll see that this tip does not work: jdb refuses to show the SQLJ source. No wonder - they only taught it about .java source files!

Magic spell #2
Shout -jdblinemap instead of -linemap whenever you must trick that silly little (de)bugger.

God Mode: Well, not quite...You can trace the goings-on in the SQLJ runtime by installing a profile auditor in your SQLJ profile files (remember those pesky little .ser files that hold the static SQL information of your program). After the usual translation and compilation you add tracing with the following command:
sqlj -P-debug *.ser
Have fun drinking from the fire hose!

More Dizzying Debugging Spells
Remember the JDBC Genie? On many platforms (such as Oracle's), SQLJ runtime calls turn into calls to JDBC at the end of the day. You can trace JDBC calls with DriverManager.setLogStream() (or setLogWriter).
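As a minimal, self-contained sketch (the class name is my own invention), switching on the driver trace looks like this:

```java
import java.io.PrintWriter;
import java.sql.DriverManager;

public class TraceDemo {
    public static void main(String[] args) {
        // Route the JDBC driver trace to stderr; any PrintWriter will do.
        DriverManager.setLogWriter(new PrintWriter(System.err, true));

        // From here on, every driver registered with the DriverManager logs
        // its calls - including those made on behalf of the SQLJ runtime.
        System.out.println("tracing on: " + (DriverManager.getLogWriter() != null));
    }
}
```

Remember to switch the writer off again with DriverManager.setLogWriter(null) before production - the fire hose applies here, too.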

Don't count out them IDEs: Oracle's JDeveloper has been supporting SQLJ programming and debugging for a while. Other IDEs may offer SQLJ support Real Soon Now.

Smokey Bear Says: Only you can prevent runtime bugs. You should always provide the translator with the -user option, so that it can check your SQL code for real.
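For example, a translation with online checking against a scott/tiger schema might look like this (the connect string and account are placeholders for your own):

```shell
# -user turns on online checking; -url names the database to check against.
sqlj -user=scott/tiger -url=jdbc:oracle:oci8:@ MyFile.sqlj
```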

Become a Control Freak - with ExecutionContexts
At times you need to obtain additional information about a SQL statement that just ran or you need to control exactly how you want SQL statements to be executed. Take the following examples:

  • The statement might have resulted in a warning (not an exception) that you want to inspect.
  • An UPDATE or DELETE statement reports the number of rows that were changed or removed, respectively.
  • You want to set a timeout or designate a prefetch size for queries.
  • You want to batch several SQL statements and have them all executed together, rather than pay a round trip to the database for each one. In particular, you want to combine the repeated execution of a DML statement, such as an INSERT, an UPDATE or a DELETE that takes on different values for its bind-variables.

All of this - and more - is available on the sqlj.runtime.ExecutionContext. Every connection context (remember, this is the SQLJ equivalent to JDBC connections) has an associated ExecutionContext which can be accessed with getExecutionContext(). Your static DefaultContext also has one. Consider this example:

import sqlj.runtime.ref.DefaultContext;
import sqlj.runtime.ExecutionContext;
#sql { UPDATE emp SET sal = sal * 2 };
ExecutionContext ec = DefaultContext.getDefaultContext().getExecutionContext();
System.out.println( ec.getUpdateCount() + " employees are rejoicing.");
Or, you can terminate a whole bunch of bad guys in a single batch as follows.
ExecutionContext ec = DefaultContext.getDefaultContext().getExecutionContext();
ec.setBatching(true);
String[] badGuys = { "HORRIBLE", "TERRIBLE", "NOGOOD", "VERYBAD" };
for (int i = 0; i < badGuys.length; i++)
#sql { DELETE FROM emp WHERE ename = :(badGuys[i]) };
ec.executeBatch();
If we hadn't issued executeBatch() explicitly, the badGuys' execution would have happened implicitly when a different #sql statement was encountered. Alternatively, we could have used ec.setBatchLimit(4); to tell SQLJ to flush a batch implicitly once four statements have accumulated. This convenience feature is not (yet?) offered by JDBC.

Being Well Connected - Explicitly
Until now you've been brainwashed. You were led to believe that there's only a single, static connection in your SQLJ program, set once with DefaultContext.setDefaultContext(...) and then forgotten about. Even though this makes great marketing copy, it's not the (only) way the world works.

If you're connecting to more than one schema, if you're running an applet in a browser or if you're connecting to the database in a multithreaded program, then you should - nay, you must - use explicit SQLJ connections.

Don't worry: it's really easy. You just put the connection context instance (or an expression evaluating to one) in square brackets right after the #sql token. An example:
import sqlj.runtime.ref.DefaultContext;
DefaultContext ctx =
new DefaultContext("jdbc:oracle:oci8:@", "scott", "tiger", false);
#sql [ctx] { UPDATE emp SET sal = sal * 2 };
Tip: Always use explicit connection contexts, unless you know that your program owns the world and requires only a single, static database connection.

Rolling Your Own Connection
Sometimes you need to distinguish between different schemas or databases. Consider a program that establishes connections to two different database schemas. The PILOTS schema has personnel data, flight hours and so on for pilots, and the JETS schema contains maintenance and flight data of aircraft. Your program is working on two different sets of tables, views and stored procedures. Now you need to verify your SQLJ statements against both of these schemas. Typed connection contexts are what let you do this. First, declare two different context types:

#sql context Pilots;
#sql context Jets;

At runtime, you connect to each of the schemas using the appropriate type:

Pilots pconn = new Pilots("jdbc:oracle:oci8:@","pilots","ace",false);
Jets jconn = new Jets("jdbc:oracle:oci8:@","jets","stratos",false);
Then the connection context type of the #sql statement clearly shows whether you want a Pilots or a Jets connection:
#sql [pconn] { INSERT INTO pilot VALUES ( .... ) }; // Pilots context
#sql [jconn] { UPDATE maintenance SET status = Checkup( .... ) }; // Jets context
Of course at translate time you also need to explain how to connect to the database for both connection context types:
sqlj -user@Pilots=pilots/ace -user@Jets=jets/stratos MyFile.sqlj
SQLJ and JDBC: Living in Perfect Harmony
SQLJ works just fine and dandy with static SQL - where you know the shape of SQL statements and queries beforehand. But what if your application has to make up the WHERE clause in a SELECT statement on the fly - guess you'd better forget all about SQLJ, right?

Not so quick - SQLJ and JDBC are actually close-knit buddies. JDBC connections and SQLJ connection contexts are mutually convertible and so are java.sql.ResultSets and SQLJ iterators. Let's look at the specifics.

Connecting from JDBC to SQLJ
All connection context constructors and initializers can take an existing JDBC connection. Example:

java.sql.Connection conn = DriverManager.getConnection(....);
DefaultContext ctx = new DefaultContext(conn);
Now SQLJ and JDBC share the same session. Closing the SQLJ context will also clean up the JDBC connection.
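If you would rather keep the JDBC connection alive after the context goes away, the standard runtime lets you say so. A sketch continuing the example above:

```java
// Close only the SQLJ side of the shared session; the JDBC connection
// stays open (KEEP_CONNECTION is a sqlj.runtime.ConnectionContext constant).
ctx.close(sqlj.runtime.ConnectionContext.KEEP_CONNECTION);
// ...more JDBC work on conn...
conn.close(); // close the JDBC connection yourself, later
```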

Connecting from SQLJ to JDBC
All SQLJ connection contexts have the getConnection() method that allows you to retrieve an underlying JDBC connection. For example, if your program uses the static default context, the following will do:

java.sql.Connection conn =
DefaultContext.getDefaultContext().getConnection();
Passing Result Sets from JDBC to SQLJ
If you want to pass off a JDBC result set as a SQLJ iterator, you can do so with a SQLJ CAST statement:
SomeIterator iter;
java.sql.ResultSet rs = stmt.executeQuery();
#sql iter = { CAST :rs };
Why that CAST statement and not simply a Java constructor? So that any vendor's SQLJ runtime implementation can scrutinize that result set very closely.

Passing Iterators from SQLJ to JDBC
This one's a breeze. You just call the iterator's getResultSet() method and - voilà - your JDBC ResultSet.
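Continuing the emp examples above, the handoff might look like this (SomeIterator stands in for any iterator type you have declared):

```java
SomeIterator iter;
#sql iter = { SELECT ename, sal FROM emp };
// Hand the underlying rows to code that expects plain JDBC:
java.sql.ResultSet rs = iter.getResultSet();
```

Once you have handed off the result set, it is safest to fetch the remaining rows through it alone, rather than interleaving fetches with the iterator.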

Exercise 1: Why might it be useful to convert a JDBC result set into a SQLJ iterator or vice versa?

Jump into the Fray
Hope you enjoyed the journey, even though we've not yet seen all the vistas. We stopped short of covering the new fun JDBC 2.0 features that made it into SQLJ, such as support for various SQL LOB types, named SQL types, DataSources and scrollable iterators. Nor did we look at how to put iterator subclassing to use, how connection caching meshes with SQLJ, or what directions SQLJ might take in the future. Is there a place to learn more? In my totally biased opinion (Oracle pays my bills, after all), the vastest amount of information on SQLJ is on the Oracle Technology Network site at http://technet.oracle.com. You'll find the world's most humongous SQLJ manual that answers more questions than you could ever dream of, as well as demos, samples, papers, an FAQ and so on. Beware, though: the Oracle-specific material is happily stirred in with the generic.

Better yet, do it. If you want to get involved on the standards side, subscribe to the SQLJ partners mailing list at [email protected]. Or, if you want to hack the SQLJ reference implementation - yep, all source, all public domain - go to www.sqlj.org. Or download one of the SQLJ implementations from the IBM, Informix, or Oracle Web site and get started.

Summary of SQLJ Syntax

  • Connection context declaration and use:
    #sql context CtxType;
    CtxType ctx = new CtxType(url, user, pwd, auto-commit);
    #sql [ctx] { ...sql statement... };
  • Setting the translate-time connection for CtxType:
    sqlj -user@CtxType=user/pwd ...
  • ExecutionContext declaration and use:
    import sqlj.runtime.ExecutionContext;
    ExecutionContext ec = new ExecutionContext();
    ... set properties on ec ...
    #sql [ec] { ...sql statement... };
    ... retrieve warnings, update count, side-channel results, etc. from ec ...
  • Using an explicit ExecutionContext and an explicit connection context:
    #sql [ctx, ec] { ...sql statement... };
  • Casting JDBC result sets to SQLJ iterators:
    SqljIterator iter;
    #sql iter = { CAST :jdbc_result_set };
