How to make the scraper remember searches

Hello everyone,

Congratulations on your application; it is an extremely useful and powerful tool!

I created a sitemap with multiple URLs, but the site I tried to extract from seems to limit how many pages a user can access at a time.
The problem is that the scraper doesn't remember which URLs it has already visited, and the URLs are not accessed sequentially, so every time I press the scrape button I get the same results over and over again.
Is there a way to skip the already scraped URLs?

@tom123 Hi, yes, it is possible if these URLs/listings have a certain identifier that you could use as a keyword (this requires manually specifying the selector using jQuery selectors such as :not(), :not(:contains()), etc.).
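
As a minimal sketch of that idea: assuming the listing elements share a class like `.listing` and an already-scraped item can be recognized by some text it contains, e.g. "Item #1024" (both the class name and the identifier are placeholders, not from the original post), a selector along these lines would exclude it:

```css
/* hypothetical sketch: keep only listings whose text does NOT contain an already-scraped identifier */
.listing:not(:contains('Item #1024'))
```

Note that :contains() is a jQuery extension rather than standard CSS, so it only works where a jQuery-style selector engine is used, and you would have to update the excluded identifiers manually after each scrape.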

Learn more: