[AccessD] Pushing data to Azure SQL database VERY VERY slow

Jim Lawrence accessd at shaw.ca
Tue Nov 7 16:25:50 CST 2017


I wonder if there is any site that has done the research to allow Cloud users to cross-reference performance?

My SiL is working for a company as their head programmer/designer, and he is getting his company's site ready for "Black Friday". Their site runs off an AWS segment. When he started, he was left with a bit of a disaster: the previous developer had simply purchased an application that was supposed to be able to handle the thousands of transactions that come with a super-high-volume buying and selling spike. The application could not.

He tried every possible AWS configuration, but could achieve only marginal increases in speed.

He is now programming around the problem. He acquired a number of small AWS segments and created a caching system in which a number of transaction volumes are populated simultaneously. His FE app tracks which volume is least busy and passes each transaction on to it. Code running at the BE federates the data from each small database segment and feeds/syncs it into the master database. The beauty of his new system is that all that is necessary to complete a customer's transaction is an acknowledgement, or receipt, from the individual DB caches.

Using his new system, there is virtually no limit to how many transactions can be managed: AWS virtual DB segments are spawned, in real time, as volumes increase. The system is fairly complex, but he has been testing and building it for months, and now it is crunch time, with 12- to 16-hour shifts.
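To make the idea concrete, here is a minimal sketch of the least-busy routing and federation pattern described above. All names (CacheSegment, Router, federate) are invented for illustration; the real system runs across AWS database segments, not in-memory Python lists, and its spawning logic is far more involved.

```python
class CacheSegment:
    """Stand-in for one small AWS database segment used as a write cache."""
    def __init__(self, name):
        self.name = name
        self.pending = []          # transactions not yet federated to the master

    def load(self):
        return len(self.pending)   # "busyness" = count of queued transactions

    def accept(self, txn):
        self.pending.append(txn)
        # The acknowledgement/receipt is all the FE needs to complete the sale.
        return f"receipt:{self.name}:{txn['id']}"

class Router:
    """FE side: send each transaction to whichever segment is least busy."""
    def __init__(self, segments):
        self.segments = list(segments)

    def submit(self, txn):
        target = min(self.segments, key=CacheSegment.load)
        return target.accept(txn)

def federate(segments, master):
    """BE side: drain every cache segment into the master database."""
    for seg in segments:
        master.extend(seg.pending)
        seg.pending.clear()

segments = [CacheSegment(f"seg{i}") for i in range(3)]
router = Router(segments)
receipts = [router.submit({"id": n}) for n in range(9)]

master = []
federate(segments, master)
```

The design point is that the customer-facing path only ever waits on a fast cache write and its receipt; the slower sync into the master happens asynchronously behind the scenes.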

If you wish, after "Black Friday" is over, he will have a chance to put his feet up and relax, and then maybe you could email him and ask him questions about building an FE. He is creating a BE package called "Kitsilano"(?) that his company will be marketing, which is designed to handle unlimited transaction volumes.

I have also been playing with a friend, building out a number of privately hosted Cloud (NextCloud) instances and using their federation capabilities to sync and collaborate. This may offer a way to help a client's three offices.

Aside: I do remember either duplicating a table or deleting the contents of an MS SQL table, first removing all keys, then bulk importing all the data, and finally re-adding the keys. It was the fastest method for updating a huge table. Of course, it may not be possible to work around Azure's limitations(?).
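The drop-keys, bulk-load, re-index sequence can be sketched as below. This uses Python's built-in SQLite purely to illustrate the pattern; on SQL Server the same three steps would use DROP INDEX / ALTER TABLE, then BULK INSERT or bcp, then CREATE INDEX, and the table and index names here are made up.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE big (id INTEGER, val TEXT)")
cur.execute("CREATE INDEX idx_big_id ON big (id)")

rows = [(i, f"row {i}") for i in range(10_000)]

# 1. Drop the index (and any keys/constraints) before the load,
#    so the engine doesn't maintain them row by row.
cur.execute("DROP INDEX idx_big_id")

# 2. Bulk import everything inside a single transaction.
cur.executemany("INSERT INTO big (id, val) VALUES (?, ?)", rows)

# 3. Re-create the index once, after all rows are in place.
cur.execute("CREATE INDEX idx_big_id ON big (id)")
con.commit()

count = cur.execute("SELECT COUNT(*) FROM big").fetchone()[0]
```

Rebuilding an index once over the finished table is far cheaper than updating it on every insert, which is why this ordering wins for huge loads.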

There is, of course, a host of tools and methods for expediting data imports into the Azure cloud and MS SQL:
https://blogs.msdn.microsoft.com/sqlcat/2010/07/30/loading-data-to-sql-azure-the-fast-way/
http://bit.ly/2m2ro4z

...and...

https://docs.microsoft.com/en-us/azure/machine-learning/team-data-science-process/parallel-load-sql-partitioned-tables
http://bit.ly/2zEiJLE

PS: To automate bulk importing processes using command-line batch files, contact John C., as he is an expert at processing large amounts of data and large files and might be able to give you some pointers.

Jim


----- Original Message -----
From: "Ryan W" <wrwehler at gmail.com>
To: "Access Developers discussion and problem solving" <accessd at databaseadvisors.com>
Sent: Thursday, October 19, 2017 1:38:52 PM
Subject: Re: [AccessD] Pushing data to Azure SQL database VERY VERY slow

Gustav,
  The kicker there is that my $10/mo "Shared Windows Hosting", with a shared SQL
Server with MANY MANY users on it, is faster. I realize Azure is "shared" in
nature too, but with SOME resources that are for your account only (like a
mini virtual machine, of sorts), whereas on the shared hosting that's not
the case: anyone can cause the server you are on to bottleneck and cause
grief.

  I would have thought that the S0 or S1 Azure plan would exceed the shared
Windows hosting plan I have over at HostGator...    I was only testing to
see if my 1-2 minute "insert into" time with HostGator was atypical or
not.  Turns out I'm not doing as badly as I could be, I suppose.



On Thu, Oct 19, 2017 at 3:34 PM, Gustav Brock <gustav at cactus.dk> wrote:

> Hi Ryan
>
> We have some small Azure SQL instances running for testing purposes, and
> they are very slow. Straight reading is relatively fast, but anything else
> is painfully slow. Except for some simple views, we don't run anything else
> on the Azure SQL engine.
>
> The positive aspect of this is that it is a very effective method to
> pinpoint bottlenecks. If you can run a form off the Azure SQL at good
> speed, it will run blazingly fast off a local SQL Server. And what you
> think you can get away with using a fast local server will kick you hard
> when moved to Azure SQL.
>
> I guess that using a larger instance would produce much better results, but
> the costs rise quite quickly. I miss a small, fast, and free instance.
>
> /gustav
>
> ________________________________________
> Fra: AccessD <accessd-bounces at databaseadvisors.com> på vegne af Ryan W <
> wrwehler at gmail.com>
> Sendt: 19. oktober 2017 20:39:34
> Til: Access Developers discussion and problem solving
> Emne: [AccessD] Pushing data to Azure SQL database VERY VERY slow
>
> Back story:
>
> We have an Access FE with a SQL Server back end.  We push up a relatively
> small dataset (6000 rows total) up to our website for some updated
> 'statistics' on client work.
>
> Right now my SQL server is on a shared windows host and pushing those 6000
> rows takes anywhere from 1m30s to 2 minutes, usually!
>
> I'm also testing a snapshot of our SQL database in Azure (S1 plan) for some
> devs we have in Denmark to query off, it doesn't have to be updated so it
> doesn't do any log shipping or anything (not that azure supports it, from
> what I can tell).
>
>
> Anyway, those same tables on my shared Windows hosting plan were created in
> my Azure instance, and those very same queries take over 6 minutes!
>
> First off, it seems highly suspect that Azure would be 3 times slower?
> Secondly aside from WAN latency and such why would it even be taking 2
> minutes to insert 6,000 rows across the WAN?  Is there a way I can speed
> that up in either the shared server or the Azure server?
>
>
> When I check the Azure DB statistics and "DTU usage" it's barely a blip on
> the radar, so I'm not pegging my DTU allocation by any means.
>
>
> When I query that information back out of Azure I pull down those 6,000
> rows in less than one second.. so it doesn't seem like my WAN is
> necessarily the culprit.
> --
> AccessD mailing list
> AccessD at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/accessd
> Website: http://www.databaseadvisors.com
>
-- 
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com


