The Lumen database has 28,743 results for sanet.st, 12,470 for sanet.lc and 12,520 for sanet.ws. Those are just the three root domains I am aware of, and between them Google has received over 50,000 requests to remove links from its search index. The bots don't need to scan the website itself; they can simply lift the information from Google.
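Just to illustrate how trivial that is (a hedged sketch, not what any particular bot actually runs): the links Google already has indexed for a domain can be pulled through its Custom Search JSON API without ever touching the site. The API key and search engine ID below are placeholders you would have to create yourself.

```python
# Sketch: pull already-indexed URLs for a domain from Google's
# Custom Search JSON API instead of crawling the site itself.
# GOOGLE_API_KEY and SEARCH_ENGINE_ID are placeholders.
import requests

GOOGLE_API_KEY = "YOUR_API_KEY"     # placeholder credential
SEARCH_ENGINE_ID = "YOUR_CX_ID"     # placeholder search engine ID

def indexed_links(domain, pages=3):
    """Return result URLs Google already has indexed for the domain."""
    links = []
    for page in range(pages):
        resp = requests.get(
            "https://www.googleapis.com/customsearch/v1",
            params={
                "key": GOOGLE_API_KEY,
                "cx": SEARCH_ENGINE_ID,
                "q": f"site:{domain}",
                "start": 1 + page * 10,   # results come back 10 at a time
            },
            timeout=10,
        )
        resp.raise_for_status()
        links.extend(item["link"] for item in resp.json().get("items", []))
    return links

if __name__ == "__main__":
    for url in indexed_links("example.com"):
        print(url)
```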
The community pages aren't listed in the website's XML sitemap; the search engine spider would originally have found them by following links from the blog pages, which are in the sitemap.
The XML sitemap appears to be updated quite frequently: when I checked, the last entry was a link to a post published just minutes before I began this reply, and that page was in the Google index in less than five minutes. The spider indexes that page and, along with it, picks up the links to the freshest content on the community side of the website. There is no entry in the robots.txt file telling the spider not to crawl the community pages, so it will naturally index them. The spider has a limited time window for crawling a site and favours fresh content and fast-loading pages for indexing.
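For anyone curious how little the spider actually needs, here is a minimal sketch (assuming the sitemap lives at the usual /sitemap.xml path and the site URL is a placeholder) that reads the sitemap, sorts entries by lastmod to find the freshest posts, and checks whether robots.txt blocks them:

```python
# Sketch: read a site's XML sitemap, pick out the freshest entries,
# and check robots.txt the same way a spider would.
# The /sitemap.xml path and the "*" user agent are assumptions.
import urllib.request
import urllib.robotparser
import xml.etree.ElementTree as ET

SITE = "https://example.com"   # placeholder site
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def freshest_urls(site, limit=10):
    """Return (lastmod, loc) pairs from the sitemap, newest first."""
    with urllib.request.urlopen(f"{site}/sitemap.xml", timeout=10) as resp:
        root = ET.fromstring(resp.read())
    entries = []
    for url in root.findall("sm:url", NS):
        loc = url.findtext("sm:loc", default="", namespaces=NS)
        lastmod = url.findtext("sm:lastmod", default="", namespaces=NS)
        entries.append((lastmod, loc))
    # ISO 8601 lastmod strings sort correctly as plain text
    return sorted(entries, reverse=True)[:limit]

if __name__ == "__main__":
    rp = urllib.robotparser.RobotFileParser(f"{SITE}/robots.txt")
    rp.read()
    for lastmod, loc in freshest_urls(SITE):
        status = "crawlable" if rp.can_fetch("*", loc) else "blocked"
        print(lastmod, loc, status)
```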
It doesn't require somebody sitting in the community to find content for posting elsewhere; the whole process is quite easy to automate with a little know-how, as the rough sketch below illustrates.
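Purely as an illustration (the post URL and the "/community/" path filter are hypothetical; in practice the URL would come from the newest sitemap entry, as in the previous sketch), a script only has to fetch the latest blog post and harvest the outgoing links from it:

```python
# Sketch: given the URL of a fresh blog post (e.g. the top entry from
# the sitemap), harvest its outgoing links without any human involved.
# The "/community/" path filter is a hypothetical example pattern.
from html.parser import HTMLParser
from urllib.parse import urljoin
import urllib.request

class LinkCollector(HTMLParser):
    """Collect href values from every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def outgoing_links(post_url, path_filter="/community/"):
    """Fetch a page and return absolute links matching the filter."""
    req = urllib.request.Request(post_url, headers={"User-Agent": "sketch-bot"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return [urljoin(post_url, h) for h in parser.links if path_filter in h]

if __name__ == "__main__":
    # post_url would normally come from the freshest sitemap entry
    for link in outgoing_links("https://example.com/blog/latest-post"):
        print(link)
```

Run something like that on a schedule against the newest sitemap entries and nobody ever has to sit in the forum watching for new threads.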