Data on subsequent pages isn't scraped

Hi, I have an issue. This is my first time doing this, but I don't think it's my fault; maybe it's something with the page? I had a lot of problems with the Text element, but I solved those. However, I can't solve the problem with subsequent pages. There are two problems:
1. It only scrapes data from the 1st page.
2. It only goes as far as the 6th page and then stops.

On top of this, the page is continually updated with more replays and has more than 3000 pages per version. So the best option would be to scrape, for example, from page 10 down to page 1 (to get only the latest 10 pages), but I don't know how to do that (assuming the other problems can be solved).

Url: http://wotreplays.eu/

Sitemap:
{"_id":"wotreplays","startUrl":["http://wotreplays.eu/site/index/version/93/sort/uploaded_at.desc/page/5/"],"selectors":[{"id":"replayItem","type":"SelectorElement","parentSelectors":["_root","nextPage"],"selector":"li[class='clearfix'],li[class='clearfix ng-scope']","multiple":true,"delay":0},{"id":"tankName","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ci']>li:nth-of-type(1)>a","multiple":false,"regex":"","delay":0},{"id":"mapName","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ci']>li:nth-of-type(2)>a","multiple":false,"regex":"","delay":0},{"id":"kills","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ri']>li:nth-of-type(1)","multiple":false,"regex":"","delay":0},{"id":"exp","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ri']>li:nth-of-type(2)","multiple":false,"regex":"","delay":0},{"id":"creds","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ri']>li:nth-of-type(3)","multiple":false,"regex":"","delay":0},{"id":"dmg","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ri']>li:nth-of-type(4)","multiple":false,"regex":"","delay":0},{"id":"asist","type":"SelectorText","parentSelectors":["replayItem"],"selector":"ul[class='r-info_ri']>li:nth-of-type(5)","multiple":false,"regex":"","delay":0},{"id":"nextPage","type":"SelectorPopupLink","parentSelectors":["_root","nextPage"],"selector":"li.pagination__item:nth-of-type(n)","multiple":true,"delay":0}]}

It looks like the page loads new items using JS, so a next-page link selector won't work; you have to use an Element Click selector to get the new items. You already have an Element selector for the items, so you can change that to an Element Click selector. Here's how to set it up for a "next page" link:
https://www.webscraper.io/how-to-video/element-click-pagination-next
However, this will make the scraper try to load ALL the pages and it will only extract the data when there are no more items to load.
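For reference, a sketch of what the changed item selector might look like in the sitemap JSON. This is an assumption-laden example: the `clickElementSelector` value (`a.pagination__next`) is a guess at the site's "next" button and must be verified by inspecting the page, and the exact Element Click option values may differ between Web Scraper versions.

```json
{
  "id": "replayItem",
  "type": "SelectorElementClick",
  "parentSelectors": ["_root"],
  "selector": "li.clearfix, li.clearfix.ng-scope",
  "multiple": true,
  "delay": 2000,
  "clickElementSelector": "a.pagination__next",
  "clickType": "clickMore",
  "discardInitialElements": "do-not-discard",
  "clickElementUniquenessType": "uniqueText"
}
```

With `clickType` set to click-more, the scraper keeps clicking the "next" element and collecting items until no new ones appear, which is why it ends up walking every page.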

The best option in this case would be to just remove the link selector and use the URL range feature; change your start URL to something like:

http://wotreplays.eu/site/index/version/93/sort/uploaded_at.desc/page/[1-10]/

This will make the scraper load every page from 1 to 10, for example. You can read more about this in the docs (section "Specify multiple urls with ranges"):
https://webscraper.io/documentation/scraping-a-site
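Applied to your sitemap, only the `startUrl` needs to change; you would also delete the `nextPage` selector and remove `"nextPage"` from the `parentSelectors` of `replayItem`:

```json
{
  "_id": "wotreplays",
  "startUrl": [
    "http://wotreplays.eu/site/index/version/93/sort/uploaded_at.desc/page/[1-10]/"
  ]
}
```

The `[1-10]` range expands into ten start URLs (page/1/ through page/10/), so each page is loaded directly instead of being reached through pagination clicks.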


Thank you for the answer. I did that: I used the Element Click selector and it works fine, except for one thing... it was scraping data for about 8 hours haha, but that should be solved with your second tip. Thank you again, and happy holidays.