How to scrape a web file directory?

Hi everyone,

I need to scrape a web file directory with about 500 folders nested inside each other on multiple levels. Documents can sit on any level, not only in the "last" (deepest) folders. I need all the document filenames and file links scraped.

The directory looks like this:

  1. Folder
    1. Subfolder
      1. SubSubFolder
      2. Document
    2. Subfolder
      1. Document
      2. SubSubFolder
        1. SubSubSubFolder
        2. Document
    3. Document
    4. Document
  2. Folder
  3. Folder

I don't know in advance which levels contain only subfolders and which also contain documents.
But the priority is to get all document filenames and file links.

Unfortunately I cannot provide a URL, as it is a confidential online directory.

My solution so far:
I have set up an Element Click Selector that selects a "row" in the directory (one entry, which can be a folder or a document) and clicks its title, which opens that folder or document. This Element Click Selector is in root & itself, so it should keep clicking into folders. On the same levels I have Selectors that scrape the element title (folder or document title) and the type (is it a folder or a document?).

Problem:
The scraper only clicks into the first folder in root, then opens the 3rd subfolder (why not the first?), then the first SubSubFolder, and scrapes the documents that lie inside that folder.
I don't know how to get it to click on ALL the folders one by one.

Sitemap:
{"_id":"scrapedocuments","startUrl":["https://url.com"],"selectors":[{"id":"folder-selector","type":"SelectorElementClick","parentSelectors":["_root"],"selector":"tr.table-row","multiple":true,"delay":"500","clickElementSelector":"span.table-row-name-title","clickType":"clickOnce","discardInitialElements":"do-not-discard","clickElementUniquenessType":"uniqueText"},{"id":"filename","type":"SelectorText","parentSelectors":["_root","folder-selector"],"selector":"span.table-row-name-title","multiple":false,"regex":"","delay":0},{"id":"type","type":"SelectorText","parentSelectors":["_root","folder-selector"],"selector":"td.cdk-column-fileType","multiple":false,"regex":"","delay":0},{"id":"index","type":"SelectorText","parentSelectors":["_root","folder-selector"],"selector":"cdk-column-index div","multiple":false,"regex":"","delay":0}]}

A program like HTTrack Website Copier might work better for this case. It is free (GPL); I use it sometimes.
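A minimal sketch of how I'd run it from the command line, assuming the directory is served as plain HTML that HTTrack can crawl (HTTrack does not execute JavaScript, so it may not cope with a click-driven table like the one above), and with "https://url.com" standing in for the real start URL:

    # hypothetical invocation; adjust the URL, filter and depth to the real site
    httrack "https://url.com/" -O ./mirror "+https://url.com/*" -r10

-O sets the output folder, the "+..." filter keeps the crawl inside the site, and -r10 caps the recursion depth.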