[AccessD] Pushing data to Azure SQL database VERY VERY slow

Ryan W wrwehler at gmail.com
Mon Oct 30 09:32:06 CDT 2017


Looks like under 90ms to the datacenter I'm at. The closest bobbles up and
down between the mid 30s and mid 80s (ms)...

I'm in "West US 2", I think. I don't remember having a choice when I set up
the DB, or I'd have set it up closer to me geographically.
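For what it's worth, since ICMP ping is blocked, one way to get a rough
round-trip number from inside Access is to time a trivial pass-through
query. A minimal sketch (the driver name, server, database, and credentials
below are placeholders, not my real ones):

Public Sub MeasureSqlLatency()
    Dim qdf As DAO.QueryDef
    Dim rst As DAO.Recordset
    Dim t As Single
    Dim i As Integer

    ' Temporary pass-through query so the round trip goes straight to the server
    Set qdf = CurrentDb.CreateQueryDef("")
    qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
                  "SERVER=myserver.database.windows.net;DATABASE=mydb;" & _
                  "UID=myuser;PWD=mypassword;"
    qdf.ReturnsRecords = True
    qdf.SQL = "SELECT 1;"

    ' The first call includes connection setup; the later ones approximate latency
    For i = 1 To 5
        t = Timer
        Set rst = qdf.OpenRecordset(dbOpenSnapshot)
        rst.Close
        Debug.Print "Round trip " & i & ": " & Format$((Timer - t) * 1000, "0") & " ms"
    Next i
End Sub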



On Mon, Oct 30, 2017 at 8:49 AM, The Smiley Coder <thesmileycoder at gmail.com>
wrote:

> You can try this site:
> http://www.azurespeed.com/
> From my current position at a client (which has an intense firewall, and
> possibly also some software gateway tool which slows things down), most of
> the European datacenters had a latency in the 500-1000 ms range.
>
> When I connect from my own home PC or office PC, they have a latency of
> around 150-400 ms, so significantly lower.
>
> I am guessing the performance changes you see are more related to latency
> than to the performance SLA of the server.
>
>
>
> I remember a while ago I did an interface which pulled and pushed around
> 10k records to and from a MySQL database hosted on a shite web server. The
> program would pull 10k rows, do some Excel magic using the built-in
> forecasting, then upload the results.
> Doing regular inserts against a linked table would take ages. So instead, I
> added code in the frontend which looped through the local recordset and
> built a SQL pass-through string like so:
> Insert into tbl_results (ID, value, x, y) values (6, 'yeah', 6, 7);
> Insert into tbl_results (ID, value, x, y) values (7, 'nooo', 9, 7);
> Insert into tbl_results (ID, value, x, y) values (8, 'yeah', 6, 7);
>
> Then I would execute that SQL string against the server.
> I don't really know if it was required, but I added a check on the length
> of the statement: if it reached 50K characters, it would fire the SQL and
> then start a new string for the remainder of the records.
> The upload of all 10k records took maybe 1.5 seconds, including looping
> through the recordset and writing the SQL string.
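> Roughly, the loop looked something like this (just a sketch: the table,
> field names, and connection string here are placeholders, not the real
> ones from that project):
>
> Public Sub PushResults()
>     Dim db As DAO.Database
>     Dim rst As DAO.Recordset
>     Dim qdf As DAO.QueryDef
>     Dim sql As String
>
>     Set db = CurrentDb
>     Set rst = db.OpenRecordset("tbl_LocalResults", dbOpenSnapshot)
>
>     ' Temporary pass-through QueryDef; the connect string is a placeholder
>     Set qdf = db.CreateQueryDef("")
>     qdf.Connect = "ODBC;DRIVER={ODBC Driver 17 for SQL Server};" & _
>                   "SERVER=myserver;DATABASE=mydb;UID=myuser;PWD=mypassword;"
>     qdf.ReturnsRecords = False
>
>     Do Until rst.EOF
>         sql = sql & "INSERT INTO tbl_results (ID, value, x, y) VALUES (" & _
>               rst!ID & ", '" & Replace(Nz(rst!value, ""), "'", "''") & "', " & _
>               rst!x & ", " & rst!y & ");" & vbCrLf
>
>         ' Fire the batch once the string gets long, then start a fresh one
>         If Len(sql) > 50000 Then
>             qdf.SQL = sql
>             qdf.Execute dbFailOnError
>             sql = ""
>         End If
>         rst.MoveNext
>     Loop
>
>     ' Send whatever is left over
>     If Len(sql) > 0 Then
>         qdf.SQL = sql
>         qdf.Execute dbFailOnError
>     End If
>
>     rst.Close
> End Sub
>
> The 50K character limit was just a guess that worked; I never tested where
> the sweet spot actually is.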
>
> Another option I've heard from other places is to use XML for the upload,
> but never tried that.
>
>
>
> Best regards
> Anders Ebro // TheSmileyCoder <http://www.thesmileycoder.com>
> Access MVP 2014-2018
> <http://mvp.microsoft.com/en-us/mvp/Anders%20Ebro-5000469>
>
> On Mon, Oct 30, 2017 at 2:31 PM, Ryan W <wrwehler at gmail.com> wrote:
>
> > I can't ping it unfortunately. Azure probably has a firewall rule up to
> > block pings.
> >
> >
> >
> > On Mon, Oct 30, 2017 at 5:28 AM, The Smiley Coder
> > <thesmileycoder at gmail.com> wrote:
> >
> > > Curious Ryan, what is the ping time for both servers? I've found that
> > > with a bit of work and some design principles, I can get my local
> > > Access app to run against Azure SQL so fast you can't tell it is not
> > > local.
> > >
> > > Best regards
> > > Anders Ebro // TheSmileyCoder <http://www.thesmileycoder.com>
> > > Access MVP 2014-2018
> > > <http://mvp.microsoft.com/en-us/mvp/Anders%20Ebro-5000469>
> > >
> > > On Thu, Oct 19, 2017 at 10:38 PM, Ryan W <wrwehler at gmail.com> wrote:
> > >
> > > > Gustav,
> > > >   The kicker there is that my $10/mo "Shared Windows Hosting", with a
> > > > shared SQL Server that has MANY MANY users on it, is faster. I realize
> > > > Azure is "shared" in nature too, but with SOME resources that are for
> > > > your account only (like a mini virtual machine, of sorts), whereas on
> > > > the shared hosting that's not the case: anyone can cause the server
> > > > you are on to bottleneck and cause grief.
> > > >
> > > >   I would have thought that the S0 or S1 Azure plan would exceed the
> > > > shared Windows hosting plan I have over at HostGator... I was only
> > > > testing to see if my 1-2 minute "insert into" time with HostGator was
> > > > atypical or not. Turns out I'm not doing as badly as I could be, I
> > > > suppose.
> > > >
> > > >
> > > >
> > > > On Thu, Oct 19, 2017 at 3:34 PM, Gustav Brock <gustav at cactus.dk>
> > > > wrote:
> > > >
> > > > > Hi Ryan
> > > > >
> > > > > We have some small Azure SQL instances running for testing purposes,
> > > > > and they are very slow. Straight reading is relatively fast, but
> > > > > anything else is painfully slow. Except for some simple views, we
> > > > > don't run anything else on the Azure SQL engine.
> > > > >
> > > > > The positive aspect of this is that it is a very effective method to
> > > > > pinpoint bottlenecks. If you can run a form off Azure SQL at good
> > > > > speed, it will run blazingly fast off a local SQL Server. And what
> > > > > you think you can get away with using a fast local server will kick
> > > > > you hard when moved to Azure SQL.
> > > > >
> > > > > I guess that using a larger instance would give much better results,
> > > > > but the costs rise quite quickly. I miss a small, fast, and free
> > > > > instance.
> > > > >
> > > > > /gustav
> > > > >
> > > > > ________________________________________
> > > > > From: AccessD <accessd-bounces at databaseadvisors.com> on behalf of
> > > > > Ryan W <wrwehler at gmail.com>
> > > > > Sent: 19 October 2017 20:39:34
> > > > > To: Access Developers discussion and problem solving
> > > > > Subject: [AccessD] Pushing data to Azure SQL database VERY VERY slow
> > > > >
> > > > > Back story:
> > > > >
> > > > > We have an Access FE with a SQL Server back end. We push a
> > > > > relatively small dataset (6000 rows total) up to our website for
> > > > > some updated 'statistics' on client work.
> > > > >
> > > > > Right now my SQL Server is on a shared Windows host, and pushing
> > > > > those 6000 rows usually takes anywhere from 1m30s to 2 minutes!
> > > > >
> > > > > I'm also testing a snapshot of our SQL database in Azure (S1 plan)
> > > > > for some devs we have in Denmark to query off. It doesn't have to be
> > > > > kept up to date, so it doesn't do any log shipping or anything (not
> > > > > that Azure supports it, from what I can tell).
> > > > >
> > > > > Anyway, those same tables from my shared Windows hosting plan were
> > > > > created in my Azure instance, and those very same queries take over
> > > > > 6 minutes!
> > > > >
> > > > > First off, it seems highly suspect that Azure would be 3 times
> > > > > slower. Secondly, aside from WAN latency and such, why would it even
> > > > > be taking 2 minutes to insert 6,000 rows across the WAN? Is there a
> > > > > way I can speed that up on either the shared server or the Azure
> > > > > server?
> > > > >
> > > > > When I check the Azure DB statistics and "DTU usage", it's barely a
> > > > > blip on the radar, so I'm not pegging my DTU allocation by any means.
> > > > >
> > > > > When I query that information back out of Azure, I pull down those
> > > > > 6,000 rows in less than one second, so it doesn't seem like my WAN
> > > > > is necessarily the culprit.
> --
> AccessD mailing list
> AccessD at databaseadvisors.com
> http://databaseadvisors.com/mailman/listinfo/accessd
> Website: http://www.databaseadvisors.com
>

