Hello everyone,
Congratulations on your application; it is an extremely useful and powerful tool!
I created a sitemap with multiple URLs, but the site I am trying to scrape has some kind of limit on how many pages a user can access at a time.
The problem is that the scraper doesn't remember which URLs it has already accessed, and the URLs are not visited sequentially, so every time I press the scrape button I get the same results over and over again.
Is there a way to skip the URLs that have already been scraped?
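In other words, I'm looking for behavior roughly like this (a minimal Python sketch just to illustrate the idea, not how the tool works internally; the file name and function names are my own invention):

```python
import json
import os

VISITED_FILE = "visited_urls.json"  # hypothetical persistence file

def load_visited(path=VISITED_FILE):
    """Load the set of URLs that were already scraped in earlier runs."""
    if os.path.exists(path):
        with open(path) as f:
            return set(json.load(f))
    return set()

def save_visited(visited, path=VISITED_FILE):
    """Persist the visited set so the next run can skip those URLs."""
    with open(path, "w") as f:
        json.dump(sorted(visited), f)

def urls_to_scrape(all_urls, visited):
    """Return only the URLs that have not been scraped yet."""
    return [u for u in all_urls if u not in visited]

sitemap_urls = ["https://example.com/page1", "https://example.com/page2"]
visited = load_visited()
for url in urls_to_scrape(sitemap_urls, visited):
    # ... scrape the page here ...
    visited.add(url)
save_visited(visited)
```

So each time I press the scrape button, only the URLs not already in the visited set would be fetched, and progress would survive between runs.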