Pagination Problem yogaalliance.org

Hi,

I want to scrape all of the yoga teachers from the link below:
Url: https://www.yogaalliance.org/Directory-Registrants?type=Teacher
But the scraper returns nothing. Please check my sitemap.
Sitemap:
{"_id":"yoga_teacher","startUrl":["https://www.yogaalliance.org/Directory-Registrants?type=Teacher"],"selectors":[{"id":"Elements","type":"SelectorElement","parentSelectors":["Pager"],"selector":"div.ya_result-item","multiple":true,"delay":0},{"id":"Name","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_teacher-name a","multiple":false,"regex":"","delay":0},{"id":"Address","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_teacher-address div:nth-of-type(1)","multiple":false,"regex":"","delay":0},{"id":"City State Zip Country","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_teacher-address div:nth-of-type(2)","multiple":false,"regex":"","delay":0},{"id":"Designation","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_designations","multiple":false,"regex":"","delay":0},{"id":"Pager","type":"SelectorLink","parentSelectors":["_root"],"selector":"li:nth-of-type(n+3) a.k-link","multiple":true,"delay":0}]}

This sitemap needs an Element Click selector to paginate through the pages:

{"_id":"yoga_teacher","startUrl":["https://www.yogaalliance.org/Directory-Registrants?type=Teacher"],"selectors":[{"id":"Elements","type":"SelectorElementClick","parentSelectors":["_root"],"selector":".ya_result-list .ya_result-item","multiple":true,"delay":"2000","clickElementSelector":"a[aria-label='Go to the next page']","clickType":"clickMore","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueCSSSelector"},{"id":"Name","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_teacher-name a","multiple":false,"regex":"","delay":0},{"id":"Address","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_teacher-address div:not(:has(a))","multiple":false,"regex":"","delay":0},{"id":"City State Zip Country","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_teacher-address div:has(a)","multiple":false,"regex":"","delay":0},{"id":"Designation","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_designations","multiple":false,"regex":"","delay":0}]}
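Before running a sitemap like the one above, its selector tree can be sanity-checked offline. Below is a minimal Python sketch (the `validate` helper is hypothetical, not part of Web Scraper) that parses the corrected sitemap JSON and verifies that every selector's parent exists and that the Element Click selector has a click target configured:

```python
import json

# The corrected sitemap from the post above, pasted verbatim.
SITEMAP_JSON = """{"_id":"yoga_teacher","startUrl":["https://www.yogaalliance.org/Directory-Registrants?type=Teacher"],"selectors":[{"id":"Elements","type":"SelectorElementClick","parentSelectors":["_root"],"selector":".ya_result-list .ya_result-item","multiple":true,"delay":"2000","clickElementSelector":"a[aria-label='Go to the next page']","clickType":"clickMore","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueCSSSelector"},{"id":"Name","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_teacher-name a","multiple":false,"regex":"","delay":0},{"id":"Address","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_teacher-address div:not(:has(a))","multiple":false,"regex":"","delay":0},{"id":"City State Zip Country","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_teacher-address div:has(a)","multiple":false,"regex":"","delay":0},{"id":"Designation","type":"SelectorText","parentSelectors":["Elements"],"selector":".ya_container > div.ya_designations","multiple":false,"regex":"","delay":0}]}"""

def validate(sitemap):
    """Return a list of structural problems found in a Web Scraper sitemap.

    Checks two things: every parentSelectors entry must name an existing
    selector id (or "_root"), and every SelectorElementClick must have a
    non-empty clickElementSelector.
    """
    ids = {"_root"} | {s["id"] for s in sitemap["selectors"]}
    problems = []
    for s in sitemap["selectors"]:
        for parent in s["parentSelectors"]:
            if parent not in ids:
                problems.append(f"{s['id']}: unknown parent {parent!r}")
        if s["type"] == "SelectorElementClick" and not s.get("clickElementSelector"):
            problems.append(f"{s['id']}: missing clickElementSelector")
    return problems

sitemap = json.loads(SITEMAP_JSON)
print(validate(sitemap))  # an empty list means the selector tree is consistent
```

This will not catch a wrong CSS selector on the live page, but it does catch the common copy-paste mistakes (orphaned parents, a click selector left blank) before you burn a long scrape on them.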

The scraper starts, but no data is scraped.

Sounds like the scraping job failed mid-scrape due to running out of RAM, as there are quite a lot of pages that need to be iterated through.

What is the solution then?

It would fail anyway even if you had enough RAM, because WS can only generate 10,000 rows of data max (20,000 rows max for the cloud scraper). So you would need some kind of limiter for the pagination.
Edited: @martins has clarified that the 10K and 20K limits apply only to Start URL imports. WS does not limit the number of rows you can scrape. You would still need to factor in Chrome RAM limits and Chrome crashes, though.