Bruce Bruen
bbruen at bigpond.com
Tue May 27 05:39:51 CDT 2003
Thanks Marty... urrgh, too much information!

The pages we are trying to retrieve are "pure" HTML (with some unwanted pictures). Primarily, they are vendor price lists we are trying to monitor against benchmarks for PC componentry. One user, say "Fred", is after motherboard prices. He builds a list of vendor sites and runs a daily check on the supply price for the model(s) he is looking for. Most of his time is spent clicking on a favourite, waiting for the page to load and copying the day's prices into his pricing models.

What we would like to do is kick off the "download" in the morning, grab all the pages while he's getting a coffee, then show them one at a time so Fred can highlight any lines he wants; we'll then scrape that bit off the page and paste it into his models.

What choice do you suggest? I note that the XML route would be the technopolitically right way - but the vendors aren't necessarily agreed suppliers - in fact, in general it's just the opposite! We are trying to keep the agreed suppliers on their toes. (Also note that my Polish isn't up to scratch re the Inet API - or did I get the wrong site, baby?) Finally, if WinHttp is the go, do you know of a site that's a bit more to the point of our problem than the MSDN pages (and pages and pages and ......)?

Yet again, tia
Bruce

-----Original Message-----
From: accessd-bounces at databaseadvisors.com
[mailto:accessd-bounces at databaseadvisors.com] On Behalf Of MartyConnelly
Sent: Tuesday, May 27, 2003 1:29
To: accessd at databaseadvisors.com
Subject: Re: [AccessD] Hyperlink screen scraping

There are at least three ways to do this: WinHttp, the Inet APIs and XMLHttp. These all may require an install or a specific version of IE. It depends on what type of document you want to download - a JPEG, a Word doc, a text file or HTML - and on what OS you want to deploy on. I would probably put the download into an OLE field and only put text into a memo field.

Bruce Bruen wrote:

>Dear List,
>
>I am sure we have covered this before (Seth?) but I cannot find
>anything in the archives (probably can't set a good keyword).
>
>The database has a text field containing a hyperlink. When the user
>clicks a command button on a form showing the address, I want to
>download the hyperlinked document and save the contents of the page in a
>memo field for later scraping and processing.
>
>How is this done? Follow and FollowHyperlink just open IE. Is there
>something I am missing (apart from that!)
>
>Tia
>Bruce
>
_______________________________________________
AccessD mailing list
AccessD at databaseadvisors.com
http://databaseadvisors.com/mailman/listinfo/accessd
Website: http://www.databaseadvisors.com
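
For the XMLHttp route Marty mentions, a minimal VBA sketch of Fred's morning download might look like the following. It assumes MSXML is installed (it ships with IE 5 and later); the table and field names (tblVendorPages with URL, PageHTML and Retrieved fields) are made up for illustration, not anything from the original thread.

Public Function FetchPage(strURL As String) As String
    ' Download one page synchronously and return the raw HTML as a string.
    Dim objHTTP As Object
    Set objHTTP = CreateObject("MSXML2.XMLHTTP")
    objHTTP.Open "GET", strURL, False
    objHTTP.send
    If objHTTP.Status = 200 Then
        FetchPage = objHTTP.responseText
    End If
    Set objHTTP = Nothing
End Function

Public Sub DownloadAllPages()
    ' Walk the (hypothetical) vendor list and stash each page's HTML
    ' in a memo field for later scraping.
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("tblVendorPages", dbOpenDynaset)
    Do Until rs.EOF
        rs.Edit
        rs!PageHTML = FetchPage(rs!URL)
        rs!Retrieved = Now()
        rs.Update
        rs.MoveNext
    Loop
    rs.Close
End Sub

The same shape should work with CreateObject("WinHttp.WinHttpRequest.5.1") if the MSXML/IE dependency is a problem; either way the page text lands in the memo field, ready for the scrape-and-paste step later in the day.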