[AccessD] Data interface The best way

MartyConnelly martyconnelly at shaw.ca
Tue Oct 18 15:16:27 CDT 2005


Here is one DB I have used on and off since DOS 3.2
CodeBase from Sequiter Software
http://www.codebase.com/products/
Runs on Macs, Linux, Unix, Windows, PDAs, Windows CE.
Really small footprint and price; scales to terabytes.
Lots of ODBC drivers.
Very Fast.
Oh yes, it has an Excel VBA interface, so I guess it works from Access too.
I think it is still ISAM-based.

Gustav Brock wrote:

>Hi Shamil
>
>You are right - there is no single answer to all needs, and that goes for db engines as well.
>I have no idea where, say, Caché is faster than other solutions, if any, but I'm sure in _some_ cases it will be. It is difficult to say, as InterSystems and other suppliers are very reluctant to publish comparable benchmarks that reflect real scenarios.
>
>You can also turn it upside down; in some cases speed is not that important, and for such scenarios an object design down to the lowest level could be preferable.
>
>As for sheer speed using code, some years ago I posted to the list a routine for recursive lookup (self referencing a table) using DAO only - and it ran at a speed that surprised me.
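>
>A minimal sketch of that kind of DAO-only recursive lookup over a
>self-referencing table (table and field names are invented):
>
>  'Hypothetical self-referencing table: Employees(EmployeeID, ManagerID, EmpName)
>  Public Sub PrintChain(ByVal lngEmployeeID As Long)
>      Dim db As DAO.Database, rs As DAO.Recordset
>      Set db = CurrentDb
>      Set rs = db.OpenRecordset( _
>          "SELECT EmployeeID, ManagerID, EmpName FROM Employees", dbOpenSnapshot)
>      WalkUp rs, lngEmployeeID, 0
>      rs.Close
>  End Sub
>
>  Private Sub WalkUp(rs As DAO.Recordset, ByVal lngID As Long, ByVal intLevel As Integer)
>      rs.FindFirst "EmployeeID = " & lngID
>      If rs.NoMatch Then Exit Sub
>      Debug.Print Space(intLevel * 2) & rs!EmpName
>      If Not IsNull(rs!ManagerID) Then
>          WalkUp rs, rs!ManagerID, intLevel + 1  'recurse to the parent row
>      End If
>  End Sub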
>
>/gustav
>
>  
>
>>>>shamil at users.mns.ru 17-10-2005 01:22 >>>
>>>>        
>>>>
>Gustav,
>
>But as far as I understand, well-thought-through and properly applied O/R mapping doesn't reject "dozens of highly optimized queries" - not at all.
>With active O/R mapping (including LINQ), queries/SPs will continue to do their "dirty & heavy duty" optimized and effective work, and the results of this work
>will be mapped to the object level by the O/R mapper; if needed, these results will be polished at that level and then passed on to the presentation/visual level/layer/tier...
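>
>Just to illustrate the idea in Access/DAO terms (the query and class names
>below are invented): the saved query still does the heavy, optimized work,
>and a thin mapping layer only turns the resulting rows into objects:
>
>  '--- class module clsOrder (hypothetical) ---
>  Public OrderID As Long
>  Public Customer As String
>  Public Total As Currency
>
>  '--- standard module ---
>  Public Function LoadOrders() As Collection
>      Dim db As DAO.Database, rs As DAO.Recordset
>      Dim col As New Collection, o As clsOrder
>      Set db = CurrentDb
>      'the "dirty & heavy duty" work stays in the optimized saved query
>      Set rs = db.OpenRecordset("qryOrderTotals", dbOpenSnapshot)
>      Do While Not rs.EOF
>          Set o = New clsOrder
>          o.OrderID = rs!OrderID
>          o.Customer = rs!Customer
>          o.Total = rs!Total
>          col.Add o, CStr(o.OrderID)
>          rs.MoveNext
>      Loop
>      rs.Close
>      Set LoadOrders = col
>  End Function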
>
><<<
>unless, of course, the objects operate directly in the engine, as is the case for InterSystems' Caché engine.
>  
>
>Gustav, I haven't seen/tried Caché - so I may be wrong - but I think/guess that nowadays "pure object DBMSs/engines" can't be as effective as relational
>databases are for real commercial hundred-gigabyte/terabyte-size databases. I mean, they can be as effective or better for some special, well-tuned cases,
>or for the cases when the whole object database is loaded into memory. But the latter effect is déjà vu from the times of the "good old" hierarchical and
>network (CODASYL) data model DBMSs. And even in-memory processing of ad-hoc "chaotic" queries in object DBMSs can be expected (I guess) to be considerably
>slower than similar querying of an in-memory relational DBMS database, because relational databases nowadays use very advanced query processing optimization.
>
>Yes, as far as I can guess, there are many tasks and real applications in many business areas where pure object DBMSs will be OK and effective for development even these days, but this will still be a limited (niche) use compared with the broad and effective applicability of modern relational DBMSs...
>
>There are no miracles in this world (or am I missing them?) - the only ways to effective data processing/querying that the IT industry and computer science have found over the last 40+ years are:
>
>- 1. (fine-)tuning the physical location/placement of the data to be queried,
>to place it as close/effectively as possible for the set of queries
>executed most often;
>- 2. introducing as much parallelism as possible in query execution;
>- 3. creating auxiliary indexing data structures (see the small Jet sketch below);
>- 4. putting/(pre)loading as much as possible of the data to be processed into
>speedy RAM.
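>
>For #3, in Jet/Access terms that can be as simple as (table and index names
>are invented):
>
>  'add an auxiliary index so the engine can avoid a full table scan
>  CurrentDb.Execute _
>      "CREATE INDEX idxCustomerDate ON Orders (CustomerID, OrderDate)", _
>      dbFailOnError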
>
>That's all?
>And the numbering above reflects how these query optimization
>techniques/mainstream ideas were emerging/evolving:
>
>1st: hierarchical and then network (CODASYL) data model DBMSs - computer
>memory was limited, both external and RAM, CPUs were rather slow, and building
>multi-processor units was a very expensive idea - so the only commercially
>proven optimization was #1.
>2nd: the first supercomputers/multi-CPU computers appeared, faster memory
>became reality - the commercially proven idea of parallel query execution (#2)
>arrived but got limited implementation because of insufficient experience and
>still expensive hardware;
>3rd: research into the relational model resulted in commercially proven
>RDBMSs - the only (/main) way to optimize query execution in an RDBMS is by
>using indexes - #3 (together with #2, still limited by hardware) and #1 are
>in use in RDBMSs;
>4th: cheap RAM becomes reality - idea #4 gets into the mainstream
>together with ideas #3 and #2 (multi-core processors), and #1 is still here
>and even getting more use in such advanced modern DBMSs as MS SQL 2005...
>
>I think the time of broad and effective use of pure object DBMSs
>has not yet arrived.
>It should of course come sooner or later - and it may happen that MS's
>evolutionary approach will bring the best results sooner than the other
>"revolutionary" approaches. I may be wrong...
>
>I did a project several years ago (MS Access/mdb) which had a lot
>of complicated queries, with many indexes on the base tables optimized for
>these queries, etc. - we found it slow (not speedy enough for the target
>audience). Then most of the queries were dropped and the code was rewritten
>to (pre)load most of the data into memory - it started to work an order of
>magnitude or more faster. The database did not have that many tables -
>around 15 - but the total number of test records was about half a million.
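>
>A rough sketch of that kind of (pre)loading in VBA (table, field and
>function names are invented):
>
>  'preload a lookup table into memory once, then query the dictionary
>  Public Function LoadPrices() As Object
>      Dim db As DAO.Database, rs As DAO.Recordset
>      Dim dic As Object
>      Set dic = CreateObject("Scripting.Dictionary")
>      Set db = CurrentDb
>      Set rs = db.OpenRecordset( _
>          "SELECT ProductID, UnitPrice FROM Products", dbOpenSnapshot)
>      Do While Not rs.EOF
>          dic(CLng(rs!ProductID)) = rs!UnitPrice  'keyed in-memory lookup
>          rs.MoveNext
>      Loop
>      rs.Close
>      Set LoadPrices = dic
>  End Function
>
>  'later, instead of running a query per record:
>  'curPrice = dicPrices(lngProductID)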
>
>This real-life story shows how far superior in-memory data
>processing can be over a modern relational DBMS's query processing (I think
>MS Access is a good representative relational DBMS in this story because the
>database wasn't big). But, as I noted above, unordered "chaotic" in-memory
>processing of ad-hoc queries in the general case promises to be (much) slower
>than similar ad-hoc querying of an advanced modern relational DBMS, provided
>most of the data to be queried is cached in memory by the RDBMS engine and
>modern query optimization techniques are used by that engine...
>
>And I think that nowadays "true object-oriented DBMSs" use these "tricks":
>
>- in their tests they do not use commercial-size (terabyte) databases
>and/or
>- they are used in very specialized business areas/use-case scenarios
>where the physical data structures are optimized to have most of the data
>preloaded (cached) during query processing and/or processed in parallel...
>
>Am I totally wrong? Where can I find real test results for pure ODBMSs like
>the ones that exist for relational DBMSs - http://www.tpc.org/?
>
>No, my intention is not to start an "OODBMS vs. RDBMS flame" here - plain
>facts are enough to make the decision to use an OODBMS or an RDBMS for a
>certain business area/task - I'd use the one best suited to solve/develop
>the given tasks/projects as effectively and as quickly as possible...
>
>Shamil
>
>----- Original Message ----- 
>From: "Gustav Brock" <Gustav at cactus.dk>
>To: <accessd at databaseadvisors.com>
>Sent: Monday, October 17, 2005 12:19 AM
>Subject: Re: [AccessD] Data interface The best way
>
>
>Hi Shamil
>
>Thanks for your snippets and thoughts.
>
>I think these fellows hit the point. I would like to believe in the
>value of full O/R mapping - and maybe LINQ will make this come true.
>But when I study my latest task (with a Jet backend, not SQL Server),
>with some quite complicated statistical calculations using dozens of
>highly optimized queries - which I probably would not have been able to
>construct without the query designer of Access - I have a hard time
>imagining how this could have been accomplished in pure OO code without
>running at a crawl ... unless, of course, the objects operate
>directly in the engine, as is the case for InterSystems' Caché engine.
>
>But I would like to be positively surprised ...
>
>/gustav
>
>  
>

-- 
Marty Connelly
Victoria, B.C.
Canada





