Can't navigate into the detail pages of the search results

Hi, when searching for example for the keyword HOTEL, the engine gives some results, and to open a result I have to click the blue button "DETTAGLIO" (which in Italian means DETAIL), but using the link selector I am not able to navigate into it to grab the company detail data.
To test it, please go to:
Url: https://www.ufficiocamerale.it/
then search for the keyword HOTEL and click the blue button "CERCA" to see the results and the blue buttons named "DETTAGLIO" that I am not able to navigate with Web Scraper.
Screenshot:

Sitemap:
{id:"{"_id":"ufficiocamerale-it","startUrl":["https://www.ufficiocamerale.it/"],"selectors":[{"id":"DETTAGLIO","linkType":"linkFromHref","multiple":true,"parentSelectors":["_root"],"selector":"a.btn-block","type":"SelectorLink"},{"id":"name","multiple":false,"parentSelectors":["DETTAGLIO"],"regex":"","selector":"strong#field_denominazione","type":"SelectorText"},{"id":"email","multiple":false,"parentSelectors":["DETTAGLIO"],"regex":"","selector":"strong#field_pec","type":"SelectorText"}]}"}

I can give you a piece of advice: check the site's XML sitemap... you can grab all the data you need from there ))))

https://www.ufficiocamerale.it/sitemapindex.xml

What is the limit on the number of links browsable on a web page, and what is the limit on the number of links in a sitemap? Is this the only guide that explains how to import many links into a sitemap?

It's not too hard to extract URLs from XML.
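
For anyone who wants to try this, here is a minimal sketch of pulling every URL out of that sitemap index with Python's standard library. It assumes the standard sitemap XML namespace and that the sub-sitemaps listed in the index are plain (uncompressed) XML; only the index URL comes from the posts above, the rest is illustrative.

import urllib.request
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def fetch_locs(url):
    # Return every <loc> value found in a sitemap or sitemap index.
    with urllib.request.urlopen(url) as resp:
        root = ET.fromstring(resp.read())
    return [el.text.strip() for el in root.findall(".//sm:loc", NS)]

# 1) sub-sitemaps listed in the index
sub_sitemaps = fetch_locs("https://www.ufficiocamerale.it/sitemapindex.xml")

# 2) page URLs collected from each sub-sitemap
page_urls = []
for sm_url in sub_sitemaps:
    page_urls.extend(fetch_locs(sm_url))

print(len(page_urls), "URLs extracted")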

@matteoraggi Hi, the maximum number of start URLs that can be applied to a sitemap within the browser extension is 10,000, while in Web Scraper Cloud it is 20,000.

Multiple start URLs can also be added via Web Scraper Cloud using the 'Bulk Start URL Import' feature.
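
As a rough illustration of feeding the extracted URLs back in, the sketch below (continuing from page_urls in the earlier snippet) writes a Web Scraper sitemap JSON whose startUrl list stays under the 10,000 limit of the browser extension. It assumes the sitemap URLs point straight to the company detail pages, so the two text selectors hang off _root instead of the DETTAGLIO link; the file name and the 5,000 slice are just example choices.

import json

sitemap = {
    "_id": "ufficiocamerale-it",
    "startUrl": page_urls[:5000],  # stay below the extension's 10,000 cap
    "selectors": [
        {"id": "name", "multiple": False, "parentSelectors": ["_root"],
         "regex": "", "selector": "strong#field_denominazione", "type": "SelectorText"},
        {"id": "email", "multiple": False, "parentSelectors": ["_root"],
         "regex": "", "selector": "strong#field_pec", "type": "SelectorText"},
    ],
}

with open("ufficiocamerale-starturls.json", "w") as f:
    json.dump(sitemap, f, indent=2)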

Thanks, I found the way to set fewer than 10k start URLs and made a first test with 5k URLs, but at the 889th URL it stopped on this page

Without closing the scraping tab, how can I understand what made Web Scraper pause in this case?
And the following link works well too:

Then I restarted, cutting out the already-scraped URLs, but it got stuck here again:

at the 399th URL; this time I set 0.5 seconds per page instead of 2 seconds per page.

Issue solved, here is the solution: How to understand why sometimes webscraper stop to scrape? - #8 by matteoraggi