Can anyone help me with an infinite "Load more" button?

Describe the problem.

What am I doing wrong? The expected behaviour is: click the "Load more" button, scrape the newly loaded items, then click "Load more" again. Instead, this selector keeps clicking the "Load more" button over and over until the scraper crashes.
Can anyone help me?

Url: https://www.henext.xyz/explore

Sitemap:
{"_id":"henext-recently-sale","startUrl":["https://www.henext.xyz/explore"],"selectors":[{"clickElementSelector":"button.MuiButton-root","clickElementUniquenessType":"uniqueCSSSelector","clickType":"clickMore","delay":5000,"discardInitialElements":"do-not-discard","id":"loadmore","multiple":false,"parentSelectors":["_root","loadmore"],"selector":"div#__next","type":"SelectorElementClick"},{"delay":0,"id":"nftcard","multiple":true,"parentSelectors":["_root"],"selector":"div.MuiGrid-item","type":"SelectorElement"},{"delay":0,"id":"name","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":"h4.css-19lrlhh","type":"SelectorText"},{"delay":0,"id":"owner","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":"a.css-4ixww0","type":"SelectorText"},{"delay":0,"id":"owner-url","multiple":false,"parentSelectors":["nftcard"],"selector":"a.css-4ixww0","type":"SelectorLink"},{"delay":0,"id":"edition","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":".css-1olfk66 h4.css-qpzv6d","type":"SelectorText"},{"delay":0,"id":"collector","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":"a.css-aau5xv","type":"SelectorText"},{"delay":0,"id":"collector-url","multiple":false,"parentSelectors":["nftcard"],"selector":"a.css-aau5xv","type":"SelectorLink"},{"delay":0,"id":"price","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":"h4.css-16tlp0z","type":"SelectorText"},{"delay":0,"id":"sold-in","multiple":false,"parentSelectors":["nftcard"],"regex":"","selector":".css-1k9jv1i h4","type":"SelectorText"}]}

@maznifar Hello. After inspecting the structure of the targeted website, it appears that the discovered items are 'pixel' based, i.e. the list is virtualized: as you scroll down the page, previously discovered items are replaced in the DOM by the newly discovered ones, so the desired data is not consistently available within the HTML.
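You can confirm this yourself with a small check in the browser console (a rough sketch, assuming the item cards are the div.MuiGrid-item elements used in the sitemap above): while scrolling, the number of cards kept in the DOM stays roughly constant even though new items keep appearing on screen.

// Paste into the browser console on https://www.henext.xyz/explore, then scroll.
// A virtualized list keeps the card count roughly constant while the first
// visible title keeps changing; a normal "load more" list would keep growing.
const timer = setInterval(() => {
  const cards = document.querySelectorAll('div.MuiGrid-item');
  const firstTitle = cards[0]?.querySelector('h4')?.textContent ?? '(none)';
  console.log(`cards in DOM: ${cards.length}; first card: ${firstTitle}`);
}, 2000);
// Stop logging with: clearInterval(timer);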

The only viable way to scrape this data is to skip the Explore page and instead use a predefined interval (range) in the start URL, so that each item page is opened directly.

Practical example:

{"_id":"henext-xyz","startUrl":["https://www.henext.xyz/o/[495680-495704]"],"selectors":[{"delay":0,"id":"Title","multiple":false,"parentSelectors":["_root"],"regex":"","selector":"h2","type":"SelectorText"},{"delay":0,"id":"Description","multiple":false,"parentSelectors":["_root"],"regex":"","selector":"p.MuiTypography-gutterBottom","type":"SelectorText"},{"delay":0,"id":"Image","multiple":false,"parentSelectors":["_root"],"selector":"img","type":"SelectorImage"}]}