Oracle and the Autonomous Database (a personal perspective from afar)

Yeah, if you hadn’t seen that one coming… hmmm, what can I say.

Lots of people, my old and current peers (?), might have been shocked by, for example, Tim’s blog post about the continuously changing day-to-day work of the “database administrator”, but be honest: do you really want to do backups and recovery stuff, etc., all day…? Some things should be automated from the start.

Anyway, actually, that is not what this post is about. Although I expected all the “autonomous” stuff that Oracle would (should) build into their own Cloud environment, I am more interested in the tech behind some of the areas.

What about the database?

This year is actually the first time since 2007 that I am not on site during Oracle OpenWorld, so I did a search on what is now out in the open (regarding database tech). IMHO the “holy grail” of a database is the self-tuning performance part, and although a lot has been possible in an Oracle database for ages (large parts of it, anyway), the real challenge is the human intervention: retrieving data via SQL or JSON or XQuery or… etc. It is not easy to prevent strange, stupid “SQL” from being run against a database.

The simplest of solutions is just to buy bigger machines, more hardware; the smarter one would be to avoid the work, or to do the needed work smarter. One way in an Oracle database would be via “SQL rewrite” actions (done by the database itself); another, on the horizon since 12.2, is using the new In-Memory Column Store (In-Memory DB) and In-Memory Expressions. This combination opens up a whole new area: analyzing data beforehand, having parts of the SQL/JSON/etc. results at hand beforehand, minimizing data sections (the amount of data) beforehand, doing smart stuff in combination with In-Memory (in memory, not on disk, aka faster)… and starting to automate this part of the query and data manipulation work.
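To make that a bit more concrete, here is a minimal sketch of what the 12.2 In-Memory Column Store and In-Memory Expressions setup looks like; the SALES table and the NET_PRICE virtual column are made-up examples, and the instance needs INMEMORY_SIZE configured first:

```sql
-- Populate the (made-up) SALES table into the In-Memory Column Store
ALTER TABLE sales INMEMORY MEMCOMPRESS FOR QUERY LOW;

-- A virtual column: if queried often ("hot"), its evaluated results can be
-- materialized in memory as an In-Memory Expression (12.2)
ALTER TABLE sales ADD (net_price AS (price * (1 - discount)));

-- Let the database capture frequently used expressions on its own
ALTER SYSTEM SET INMEMORY_EXPRESSIONS_USAGE = 'ENABLE';
EXEC DBMS_INMEMORY_ADMIN.IME_CAPTURE_EXPRESSIONS('CUMULATIVE');
```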

Wouldn’t it be cool to have an autonomous database system that handles whatever you throw at it, despite the incompetent construct (language-wise) you use, and lets the environment figure out the most optimal, fastest way (in memory, not via slower disk operations) to retrieve the needed answer or to update data?

So for me, “the” announcement at Oracle OpenWorld was done beforehand 😉 by Juan Loaiza (27 Sep 2017, YouTube – have a look – good stuff): “to move our in-memory algorithms into flash”.

Which I think makes sense: aligning the two methods, “in-flash columnar” and “in-memory columnar”, and picking the more promising of the two to set up a new fast lane for data movement and handling.

I am guessing, “from afar”, that this is (currently) a packaged deal regarding hardware (Exadata X7 storage, vector processing in the CPU) and the database software (18c, once known as V12.2.0.2).
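If that guess holds, the flash-side columnar format is probably steered per segment, much like the In-Memory Column Store itself. A minimal sketch, assuming the Exadata-only CELLMEMORY segment attribute (12.2) is the knob in play, again with a made-up SALES table:

```sql
-- Cache this table in the columnar (In-Memory) format in Exadata Smart Flash Cache
ALTER TABLE sales CELLMEMORY MEMCOMPRESS FOR QUERY LOW;

-- Or opt the segment out of the columnar flash cache format altogether
ALTER TABLE sales NO CELLMEMORY;
```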

Additionally, an extra cache section is added for OLTP DB In-Memory processing.

The hot-data automation achievable with lifecycle data management functionality can now, even faster, find and prepare (hot, often-used) data and place it in memory, while being aware of the SQL or JSON statements likely to hit it; for hot data it can even have possible virtual column subset data loaded and ready, based on the original table data. Automated, with no need for user intervention or optimization.
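As a sketch of what that lifecycle automation looks like in 12.2 terms, here is an Automatic Data Optimization (ADO) In-Memory policy pair; Heat Map tracking is a prerequisite, and the SALES table is once more a made-up example:

```sql
-- Heat Map tracks segment and row usage; ADO policies act on that data
ALTER SYSTEM SET HEAT_MAP = ON;

-- Move the segment into the In-Memory Column Store shortly after creation...
ALTER TABLE sales ILM ADD POLICY
  SET INMEMORY MEMCOMPRESS FOR QUERY LOW PRIORITY HIGH
  SEGMENT AFTER 3 DAYS OF CREATION;

-- ...and evict it again once it has gone cold
ALTER TABLE sales ILM ADD POLICY
  NO INMEMORY SEGMENT AFTER 30 DAYS OF NO ACCESS;
```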

For most info shared at Oracle OpenWorld this year, you didn’t need foretelling capabilities. Regarding tech improvements, the one discussed in this post is, IMHO, one of the bigger steps forward.

Want to know more? Have a look at: Exadata In-Memory OLTP acceleration

HTH/M 

Written by: Marco Gralike