Hi,
I am trying to paginate through a site and have tried both the Element Click and Pagination selectors.
The graph is:
_root > pagination
> pagination
> …
> …
> extractor
> extractor
When I execute the scrape, it first navigates through all pages and collects the links, and only then goes into each page to extract. Unfortunately, some of the links have expired by the time they are reached.
Is there a way to visit a page, extract from it, then visit the next page, extract, and so on? Basically I want to traverse the tree depth-first, extracting each page's data before following the next pagination link.
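To make the difference in traversal order concrete, here is a minimal sketch in Python against a hypothetical in-memory "site" (the page names and structure are made up for illustration; this is not the Web Scraper extension's API). The first function mirrors the current behavior, the second the order I am after:

```python
# Hypothetical in-memory site: each page has items and a link to the next page.
PAGES = {
    "page1": {"items": ["a1", "a2"], "next": "page2"},
    "page2": {"items": ["b1", "b2"], "next": "page3"},
    "page3": {"items": ["c1", "c2"], "next": None},
}

def collect_then_extract(start):
    """Current behavior: walk the whole pagination chain first, then extract."""
    visited, url = [], start
    while url:
        visited.append(url)
        url = PAGES[url]["next"]
    # Extraction only starts after all pages were visited, so links
    # collected early may have expired by the time they are fetched.
    return [item for u in visited for item in PAGES[u]["items"]]

def extract_as_you_go(start):
    """Desired behavior: extract each page immediately, then move on."""
    results, url = [], start
    while url:
        results.extend(PAGES[url]["items"])  # extract right away
        url = PAGES[url]["next"]
    return results

print(collect_then_extract("page1"))
print(extract_as_you_go("page1"))
```

Both produce the same items here since nothing expires in this toy model; the point is purely the timing of when each page is extracted relative to when its link was discovered.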
Note: to run the scrape, you first need to log in as a guest and pass a captcha.
Sitemap: