Salesforce Database Data Grabber

I have two problems with Salesforce (has anyone here tried to capture data from it?)

The first problem to solve: I have a table of contacts, and the table is limited to 50 contacts per page, so I must scroll down with the mouse; the system then adds another 50 contacts to the table, and so on until the 2,000-contact limit is reached. How can I simulate this behavior to build a complete list of contacts to grab? Then I must click on every contact name to open the client contact and grab all the data I need. What I don't know is: once I have the table of all contacts, how do I enter each contact one by one, grab it with the single-contact sitemap, exit the client sheet, go back to the contact list, and grab the next contact?

The second problem: I've created the sitemap for a single client contact and it works fine, I can grab the data I've selected. But every time I log out, log back in, and enter the contact page, the CSS selectors have changed, so many fields come back with a 'null' value. Is it possible to use a "partial match" to identify the CSS selector? I've noticed that only the final string of the CSS selector changes; the first part stays the same. It's like an anti-theft mechanism in Salesforce that scrambles CSS selectors :frowning:

Url: it's a private website

Thanks for helping out if you can...
This tool is almost perfect! :slight_smile:

To simulate the pagination behavior, you can use a headless browser automation tool like Selenium or Puppeteer. These tools let you script web interactions in a browser, including scrolling down so the page lazy-loads more rows. Here's a simplified example using Puppeteer in Node.js (the uppercase selectors are placeholders you'll need to fill in):
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('YOUR_SALESFORCE_URL');

  // Simulate scrolling down until no more contacts are lazy-loaded
  let previousHeight;
  while (true) {
    previousHeight = await page.evaluate(() => document.body.scrollHeight);
    await page.evaluate(() => window.scrollTo(0, document.body.scrollHeight));
    // Give the page a moment to fetch and render the next batch of rows
    // (page.waitForTimeout was removed in recent Puppeteer versions, so use a plain delay)
    await new Promise((resolve) => setTimeout(resolve, 1000));
    const newHeight = await page.evaluate(() => document.body.scrollHeight);
    if (newHeight === previousHeight) break; // No new rows appeared: stop scrolling
  }

  // Now that all contacts are loaded, extract their data
  const contacts = await page.evaluate(() => {
    const rows = Array.from(document.querySelectorAll('YOUR_CONTACT_SELECTOR'));
    return rows.map((row) => ({
      name: row.querySelector('CONTACT_NAME_SELECTOR').innerText,
      // Add other fields you want to extract
    }));
  });

  console.log(contacts);

  await browser.close();
})();
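
For the second half of your first question (opening every contact, grabbing it, and going back), one approach is to collect each contact's detail-page URL from the fully loaded list and then visit them one by one, rather than literally clicking and navigating back. This is a minimal sketch under the assumption that each contact name cell is a link; YOUR_CONTACT_LINK_SELECTOR and the field selectors are hypothetical placeholders standing in for whatever your sitemap extracts:

  // Collect the href of every contact link from the fully loaded list
  const contactUrls = await page.evaluate(() =>
    Array.from(document.querySelectorAll('YOUR_CONTACT_LINK_SELECTOR'), (a) => a.href)
  );

  // Visit each detail page in turn and extract its fields
  const details = [];
  for (const url of contactUrls) {
    await page.goto(url, { waitUntil: 'networkidle2' });
    details.push(await page.evaluate(() => ({
      // Hypothetical field selectors: replace with the ones from your sitemap
      phone: document.querySelector('YOUR_PHONE_SELECTOR')?.innerText ?? null,
      email: document.querySelector('YOUR_EMAIL_SELECTOR')?.innerText ?? null,
    })));
  }
  console.log(details);

Navigating by URL instead of clicking "back" avoids re-scrolling the list for every contact, and any contact that fails to load can simply be retried from its saved URL.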

To handle the changing CSS selectors, use a more flexible selector strategy. Instead of relying on the generated classes or IDs that get rewritten on every session, you can use more general attribute selectors, positional selectors, or XPath expressions that are less likely to change. Here's the same extraction using a positional selector (td:nth-child(2)) instead of a generated class name:
const contacts = await page.evaluate(() => {
  const rows = Array.from(document.querySelectorAll('YOUR_CONTACT_SELECTOR'));
  return rows.map((row) => ({
    name: row.querySelector('td:nth-child(2)').innerText, // Second cell of each row
    // Add other fields you want to extract
  }));
});
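
As for the "partial match" you asked about: CSS attribute selectors can do exactly that. [id^="..."] matches a prefix, [id*="..."] a substring, and [id$="..."] a suffix, so if only the tail of an ID is scrambled between sessions you can match on the stable leading part. The selector values below are hypothetical, for illustration only:

// [id^="..."] matches elements whose id STARTS WITH the given string,
// so a scrambled suffix appended by Salesforce no longer matters
const name = await page.evaluate(() => {
  const el = document.querySelector('[id^="contactName-"]'); // Hypothetical stable prefix
  return el ? el.innerText : null;
});

// [class*="..."] matches a SUBSTRING anywhere in the class attribute
const phone = await page.evaluate(() => {
  const el = document.querySelector('[class*="phoneField"]'); // Hypothetical stable fragment
  return el ? el.innerText : null;
});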