
jOOQ Tuesdays: Glenn Paulley Gives Insight into SQL’s History

Welcome to the jOOQ Tuesdays series. In this series, we’ll publish an article on the third Tuesday of every other month in which we interview someone we find exciting in our industry from a jOOQ perspective. This includes people who work with SQL, Java, Open Source, and a variety of other related topics.

Today I’m very excited to feature Glenn Paulley, who has been working with and on SQL for several decades.

Glenn, you have been a part of the database ecosystem since the very early days, having been Director of Engineering at Sybase and having represented SAP on the SQL standards committee. What is it that fascinates you so much about databases?

Data management technology has been my favourite subject within Computer Science since I was an undergraduate student at the University of Manitoba, and so it has remained throughout much of my career. I was privileged to be part of the team that implemented the first online IBM DB2 application – using IMS/TM as the transaction monitor – in Canada at Great-West Life Assurance in the late 1980s, and we hosted some of the DB2 development team from IBM’s San Jose lab – including Don Haderle – to celebrate that achievement. So yes, I guess you could say that I’ve been around for a while.

Much of my personal expertise lies in the realm of query processing and optimization, having had Paul Larson and Frank Wm. Tompa as my Ph.D. supervisors at the University of Waterloo. But I am interested in many related database subjects: query languages certainly, multidatabase systems, and information retrieval. Two closely related topics within database systems are of particular interest to me. One is scale. Companies and organizations have a lot of data; billion-row tables in a relational database are fairly routine today in many companies. Advances in other technologies, such as the Internet of Things, are going to dramatically increase the data management requirements for firms wanting to take advantage of these technologies. Implementing solutions to achieve the scale required is difficult, and is of great interest to me. The related issue, quite naturally, is processing queries over such vast collections and getting the execution time down to something reasonable. So query optimization remains a favourite topic.

We’ve had a very interesting email conversation about SQL’s experiments related to object orientation in the late 1990s. Today, we have ORDBMSs like Oracle, Informix, and PostgreSQL. Why didn’t ORDBMSs succeed?

ORDBMS implementations were not as successful as their developers expected, to be sure, though I would note that many SQL implementations now contain “object” features even though objects are specifically omitted from the ISO SQL standard. Oracle 12c is a good example – Oracle’s PL/SQL object implementation supports a good selection of object-oriented programming features, such as polymorphism and single inheritance, which, when coupled with collection types, provide a very rich data model that can handle very complex applications. There are others, too, of course: InterSystems’ Caché product, for example, is still available.
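As a brief illustration of the kind of object features Glenn describes, here is a minimal sketch using Oracle’s object type DDL, showing single inheritance (UNDER), polymorphism (OVERRIDING), and a collection type. The type and attribute names (person_t, employee_t, employee_list_t) are hypothetical and chosen only for this example; the trailing “/” lines are SQL*Plus-style statement terminators.

-- A supertype declared NOT FINAL so that other types can inherit from it.
CREATE TYPE person_t AS OBJECT (
  name VARCHAR2(100),
  MEMBER FUNCTION describe RETURN VARCHAR2
) NOT FINAL;
/
CREATE TYPE BODY person_t AS
  MEMBER FUNCTION describe RETURN VARCHAR2 IS
  BEGIN
    RETURN 'Person: ' || name;
  END;
END;
/
-- A subtype: single inheritance via UNDER, polymorphism via OVERRIDING.
CREATE TYPE employee_t UNDER person_t (
  salary NUMBER,
  OVERRIDING MEMBER FUNCTION describe RETURN VARCHAR2
);
/
CREATE TYPE BODY employee_t AS
  OVERRIDING MEMBER FUNCTION describe RETURN VARCHAR2 IS
  BEGIN
    RETURN 'Employee: ' || name || ' (salary ' || salary || ')';
  END;
END;
/
-- A collection type over the object type, of the kind mentioned above.
CREATE TYPE employee_list_t AS TABLE OF employee_t;
/

Invoking describe() on a value whose most specific type is employee_t would then dispatch to the overriding implementation, which is the polymorphism the answer alludes to.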

So, while object support in relational systems is present in many instances, to me the significant issues are (1) the performance of object constructions on larger database instances, and (2) the question of where you want objects to exist in the application stack: in the server proper, or in another tier, implemented in a true object-oriented language. The latter issue is the premise behind object-relational mapping tools, though I think their use often causes as many problems as it solves.

How do you see the future of the SQL language – e.g. with respect to Big Data or alternative models like document stores (which have N1QL) or graph stores (which have Open Cypher)?

My personal view is that SQL will continue to evolve; having an independent query language that permits one to query or manipulate a database but avoid writing a “program” in the traditional sense is a good idea. Over time that language might evolve to something different from today’s SQL, but it is likely that we will still call it S-Q-L. I do not expect a revolutionary approach to be successful; there is simply far too much invested in current applications and infrastructure. Don’t get me wrong – I’m not saying SQL doesn’t need improvements. It does.

Unlike the work of bodies such as the JCP or the W3C, which is public and open, SQL still seems to be developed in a more closed environment. How do you see the interaction between SQL and the end user? Can “ordinary” folks participate in the future of SQL?

The SQL standard is published by the International Organization for Standardization (ISO), whose member countries contribute changes to the standard and vote on them on a regular basis. Member countries that contribute to the ISO SQL standard include (at least) the US, Canada, Germany, the United Kingdom, Japan, and Korea. Participation in the standards process requires individuals or companies to belong to these “national bodies” in order to view drafts of the standard and vote on proposed changes. Usually those meetings are held in person – at least in Canada and the United States – so there is a real cost to participation in the process.

While having SQL as an international ISO standard is, I think, worthwhile, the ISO’s business model is based on activities such as collecting revenue from the sale of standards documents. Unless governments or other benefactors sponsor the development of those standards, it is difficult to see how the standard can be made more freely available. That issue is a political one, rather than a technical one.

After a brief stop at Conestoga College, you’re heading back to SAP. What will you be working on in the next few years?

I will be continuing to focus on database technology now that I’ve returned to SAP. So far I’ve had a fantastic experience being back at the SAP Waterloo lab, and I have every expectation that will continue into the future.

You’ve spent a lot of time at SAP and, before that, at Sybase. How do the different Sybase database products relate to SAP’s new flagship database, HANA?

The various SAP database systems (SQL Anywhere, IQ, ASE) all contain database technology pertinent to the HANA platform.

Last question: What do SQL and Curling have in common? :-)

As a Level 3 curling coach, one of my tasks as a High Performance Coach for the Ontario Curling Council is to collect performance data on the athletes in our various programs. Naturally I use a database system to store that data, and perform various analyses on it, so the two are not as unrelated as you might think!
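To make the connection a little more concrete, here is a minimal sketch of the kind of analysis such performance data might support, written in plain SQL. The table and column names (shot_log, athlete_id, shot_type, accuracy_pct) are hypothetical and are not taken from the interview.

-- Hypothetical schema: one row per recorded practice shot.
CREATE TABLE shot_log (
  athlete_id   INTEGER       NOT NULL,
  shot_type    VARCHAR(20)   NOT NULL,  -- e.g. 'draw', 'takeout'
  accuracy_pct DECIMAL(5, 2) NOT NULL,  -- scored from 0 to 100
  recorded_on  DATE          NOT NULL
);

-- Average accuracy per athlete and shot type since the start of a season.
SELECT athlete_id,
       shot_type,
       AVG(accuracy_pct) AS avg_accuracy
FROM shot_log
WHERE recorded_on >= DATE '2016-01-01'
GROUP BY athlete_id, shot_type
ORDER BY athlete_id, avg_accuracy DESC;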

Lukas Eder

Lukas is a Java and SQL enthusiast and developer. He founded Data Geekery GmbH. He is the creator of jOOQ, a comprehensive SQL library for Java, and he blogs mostly about these three topics: Java, SQL, and jOOQ.

2 Comments
Catherine Marenghi
8 years ago

Hey, thanks for mentioning InterSystems Caché, but just for the record, this particular data platform has had a long record of success and consistent growth — not because it is an OODBMS nor an ORDBMS, but because it has always been a multi-model DBMS. Here’s a link to a 451 Research report on this topic: http://www.intersystems.com/library/library-item/intersystems-touts-multi-model-database-credentials-seeks-developer-led-adoption/

Full disclosure: I am employed by InterSystems.

Evgeny
8 years ago

“There are others, too, of course: InterSystems’ Caché product, for example, is still available.”
I want to respond to this. InterSystems is not just “still available”. InterSystems Caché was listed in the Leaders quadrant of Gartner’s Magic Quadrant for Operational Database Management Systems in 2014 and 2015: http://www.intersystems.com/who-we-are/newsroom/news-item/intersystems-recognized-as-a-leader-in-2015-gartner-magic-quadrant-for-operational-database-management-systems/.
InterSystems Caché is a multi-model database engine which supports SQL, object, key-value, and other types of operations on data.
Have a look at the Caché 2016.2 Field Test, which introduces document data model support: https://community.intersystems.com/post/20162-field-test
