How to select certain links to visit?

Hello, can I restrict the links I visit by some criterion - for example, by the first character of the link text? The amount of information I collect is large and at some point my computer runs out of resources, so I want to split the task into several smaller ones. When I work in another section with fewer categories, everything works without a problem.
I tried the selector a:contains('Backplane'), which selects the first 8 categories on the page, but I was wondering whether there is another way of selecting that would let me choose more (or more flexible) sets of categories to scrape.
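The only other idea that occurred to me was combining several :contains() selectors with a comma, something like this (the second category name here is just a placeholder):

a.catfilterlink:contains('Backplane'), a.catfilterlink:contains('Circular')

but I am not sure this is practical once the list of categories gets longer.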

Can anyone help me with advice?
Thank you!

Url: https://www.digikey.com/products/en/connectors-interconnects/20

Sitemap:
{"_id":"digikey-connectors-interconnects","startUrl":["https://www.digikey.com/products/en/connectors-interconnects/20"],"selectors":[{"id":"digikey-category","type":"SelectorLink","parentSelectors":["_root"],"selector":"a.catfilterlink","multiple":true,"delay":0},{"id":"part-number","type":"SelectorTable","parentSelectors":["digikey-category","page"],"selector":"table#productTable","multiple":true,"columns":[{"header":"Compare Parts","name":"Compare Parts","extract":true},{"header":"Image","name":"Image","extract":true},{"header":"Digi-Key Part Number","name":"Digi-Key Part Number","extract":true},{"header":"Manufacturer Part Number","name":"Manufacturer Part Number","extract":true},{"header":"Manufacturer","name":"Manufacturer","extract":true},{"header":"Description","name":"Description","extract":true},{"header":"Quantity Available","name":"Quantity Available","extract":true},{"header":"Unit Price\n \n USD","name":"Unit Price\n \n USD","extract":false},{"header":"Minimum Quantity","name":"Minimum Quantity","extract":true},{"header":"Packaging","name":"Packaging","extract":true},{"header":"Series","name":"Series","extract":true},{"header":"Part Status","name":"Part Status","extract":true},{"header":"Convert From (Adapter End)","name":"Convert From (Adapter End)","extract":true},{"header":"Convert To (Adapter End)","name":"Convert To (Adapter End)","extract":true},{"header":"Mounting Type","name":"Mounting Type","extract":true}],"delay":0,"tableDataRowSelector":"#lnkPart tr","tableHeaderRowSelector":"#tblhead tr:nth-of-type(1)"},{"id":"page","type":"SelectorLink","parentSelectors":["digikey-category","page"],"selector":".mid-wrapper a.Next","multiple":false,"delay":0}]}

Yes, you would definitely need to scrape in batches. This site has over 75,000 products, while Web Scraper (WS) has a limit of 10,000 rows max (20,000 max for the cloud scraper). Edited: @martins has clarified that the 10K and 20K limits apply only to Start URL imports; WS does not limit the number of rows you can scrape. You would still need to factor in Chrome RAM limits and possible Chrome crashes, though.

The :nth selectors are perfect for this kind of thing. For example, if you only want the first 10 categories, you can use:
li:nth-of-type(-n+10) > a.catfilterlink

For categories 11 to 20, you can use:
li:nth-of-type(n+11):nth-of-type(-n+20) > a.catfilterlink

...and so on. You should use Element preview after each change to confirm the correct categories are being selected. Ref: http://nthmaster.com/
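To put that into your posted sitemap, only the digikey-category selector needs to change - something like this (an untested sketch, assuming the category links still sit inside li elements as they do now; the rest of the sitemap stays as you posted it):

{"id":"digikey-category","type":"SelectorLink","parentSelectors":["_root"],"selector":"li:nth-of-type(-n+10) > a.catfilterlink","multiple":true,"delay":0}

For the next batch, swap in the 11-to-20 selector above, re-run the scrape, and combine the exported data afterwards.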

Dear leemeng, thanks a lot for the tips!
Thanks to your help, I was able to successfully download most of the products.
Unfortunately, the page code has recently changed, and both the selector for the subcategories and the one for moving to the next page have stopped working.

Could you take a look and advise me again, if possible?
Thank you!