[AccessD] The direction of data processing

Jim Lawrence accessd at shaw.ca
Mon Jan 27 11:48:20 CST 2014


I feel your pain. 

T-SQL is sort of like Access SQL... or is it only a little alike? Have you considered using SPs and just passing parameters?

Very complex pass-through queries used to crash Access, but maybe that bug has been fixed.
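
If you go the SP route, the call from Access can stay pretty small. A minimal sketch using a temporary DAO pass-through query; the DSN, procedure and parameter names are all invented:

' Minimal sketch: call a server-side stored procedure from Access via a
' temporary pass-through query. The DSN, procedure and parameter names
' are invented placeholders.
Sub CallServerProc(ByVal lngDeptID As Long)
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb
    Set qdf = db.CreateQueryDef("")          ' unnamed = temporary QueryDef
    qdf.Connect = "ODBC;DSN=MyServerDSN"     ' assumed ODBC connection
    qdf.ReturnsRecords = False               ' action proc, no result set
    ' The SQL string is passed to the server untouched.
    qdf.SQL = "EXEC dbo.spUpdateSummary @DeptID = " & lngDeptID & ";"
    qdf.Execute dbFailOnError
    qdf.Close
End Sub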

Jim

----- Original Message -----
From: "Gustav Brock" <gustav at cactus.dk>
To: "Access Developers discussion and problem solving" <accessd at databaseadvisors.com>
Sent: Monday, January 27, 2014 12:13:20 AM
Subject: Re: [AccessD] The direction of data processing

Hi Jim

For the last few days I have been struggling with some updating pass-through
queries, not Oracle but T-SQL.
No fun. As soon as you have more than a few joins, the code turns nearly
unreadable. I'm not very good at it, so I had to build the query and the
joins bit by bit so as not to lose my footing. I will never learn to love
this. Give me C# please.
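
To give an idea of the pattern (an invented example; the real queries carry
far more joins):

' Invented example of the kind of updating pass-through query I mean.
' The T-SQL UPDATE...FROM with joins is the part that turns unreadable
' fast. Table, field and DSN names are all made up.
Sub RunUpdatePassThrough()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb
    Set qdf = db.CreateQueryDef("")
    qdf.Connect = "ODBC;DSN=MySqlServerDSN"
    qdf.ReturnsRecords = False
    qdf.SQL = _
        "UPDATE o SET o.Status = 'Closed' " & _
        "FROM dbo.Orders o " & _
        "INNER JOIN dbo.Invoices i ON i.OrderID = o.OrderID " & _
        "INNER JOIN dbo.Payments p ON p.InvoiceID = i.InvoiceID " & _
        "WHERE p.PaidInFull = 1;"
    qdf.Execute dbFailOnError
    qdf.Close
End Sub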

What a moment to bring a potent Oracle server to its knees. That must have
been a cup of coffee you will never forget.

/gustav
 

-----Original Message-----
From: accessd-bounces at databaseadvisors.com
[mailto:accessd-bounces at databaseadvisors.com] On Behalf Of Jim Lawrence
Sent: 27 January 2014 06:03
To: Access Developers discussion and problem solving
Subject: [AccessD] The direction of data processing

Hi All:

I must admit that there is a bit of a preamble to this, but it is all aimed
at a point and, I believe, at the future of data management.

A number of years ago, when I was working for a government branch that
handled data and policy, I was asked to retrieve a full set of data
summaries and have them ready to display and/or print at the request of the
department head. To say the least, the data was a mess. It had evolved over
years, and each time the data model was improved the data structure changed,
because it was easier to just make a new table than to figure out how to
consolidate the information into one table. To add to the data's complexity,
government policy kept changing, affecting how data was entered into the
existing tables. When the variation became too extreme, it was time for a
new table.

All this data resided in an Oracle database, running on Unix, on one of the
largest HP ProLiant computers I had ever seen. It was somewhere between a
standard server and a mini: fully 64-bit, multi-core, with a huge extended
RAID with a coprocessor running the stacks of drives. 64-bit and multi-core
might not be unusual now, but back then it was really rare and only
available to governments and big businesses. (A perfect piece of network
hardware. ;-))

I decided to run the whole interface through an MS Access FE but to gather
and calculate the data on the big ProLiant and Oracle. It took almost two
weeks to build the Oracle procedure query. I segregated each data table set
and applied subqueries to manage the various policy changes (you used to be
able to go only six levels deep with subqueries on the Oracle of the day,
but I went no more than four levels). After each table set had been
processed and a common data set retrieved, all the data sets were combined
with UNION ALL. The finished query was about two hundred lines long. I had
tested each module extensively, but now it was time to test the fully
integrated query.
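
In outline, the monolithic version had roughly this shape, built from the
Access side; the names here are invented, and the real thing was of course
far larger:

' Rough shape of the monolithic approach: one SELECT per table set (each
' with its own subqueries for the policy rules of its era), glued together
' with UNION ALL and sent to the server as a single pass-through query.
' All table and field names are invented.
Function BuildMonolithSQL() As String
    Dim astrModules(1 To 3) As String
    Dim strSQL As String
    Dim i As Integer

    astrModules(1) = "SELECT CaseID, FiledYear, Amount FROM Cases1998"
    astrModules(2) = "SELECT CaseID, FiledYear, Amount FROM Cases2001"
    astrModules(3) = "SELECT CaseID, FiledYear, Amount FROM Cases2004"

    For i = 1 To 3
        If i > 1 Then strSQL = strSQL & " UNION ALL "
        strSQL = strSQL & astrModules(i)
    Next i

    BuildMonolithSQL = strSQL
End Function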

I pressed the button on my Access form and trotted off for a coffee, as I
knew it would probably take a few minutes even on that great machine. As I
was walking to the coffee room, I could overhear everyone saying the system
had stopped. I knew it was a momentary thing, so I continued on to fix a
coffee. It was momentary... momentary for the rest of the morning. Two
hundred lines of code had that huge server thrashing, and every resource it
had was being used (the benefits of having full admin rights).

After that, I broke that big query into individual modules and controlled
their processing via the Access FE. First one module was called; when its
data was received, it was sorted and added to a local MDB table. Then, a few
minutes later, after the server's memory had cleared, the next module was
called, and the process was repeated until all the modules had been handled,
sorted and appended. This method took less than an hour, and none of the
other staff were inconvenienced.
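
The controlling loop was something like this sketch (VBA/DAO, with invented
names; qryPT_Module1 and so on stand for saved pass-through queries, one per
table set):

' Sketch of the modular approach: run one server-side module at a time,
' append its result set to a local staging table, then pause so the
' server can settle before the next call. All names are invented.
Sub RunModulesOneByOne()
    Dim db As DAO.Database
    Dim avarModules As Variant
    Dim varModule As Variant
    Dim sngStart As Single

    Set db = CurrentDb
    avarModules = Array("qryPT_Module1", "qryPT_Module2", "qryPT_Module3")

    For Each varModule In avarModules
        ' An Access append query can read straight from a saved
        ' pass-through query, so this pulls the module's rows down
        ' and lands them in the local MDB table in one statement.
        db.Execute "INSERT INTO tblLocalSummary SELECT * FROM [" & _
            varModule & "]", dbFailOnError

        ' Give the server a moment between modules (the real pause
        ' was a few minutes).
        sngStart = Timer
        Do While Timer < sngStart + 5
            DoEvents
        Loop
    Next varModule
End Sub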

At that moment, I realized that a system of multiple small computers
probably could have handled the processing far more effectively. Vertical
processing was maxed out (and was a dead-end technology); horizontal
processing was the future of data management. NoSQL databases are not the
biggest part of this shift; it is the data managers like Hadoop, the modern
languages like Erlang and F#, and even torrent networks (and dozens of
similar technologies) that will make the new data revolution a reality.

It is a very exciting time to be working in the computer industry.

Jim     
        
-- 
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com




