SEOnautBot is the generic name of SEOnaut’s web crawler. You can identify it by checking the user agent string in the request; it identifies itself as Mozilla/5.0 (compatible; SEOnautBot/1.0; +https://seonaut.org/bot).
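As an illustration, here is a minimal sketch of how a site operator could spot SEOnautBot requests in their own server code by matching the "SEOnautBot" token in the User-Agent header (this example is not part of SEOnaut itself; the handler and port are arbitrary):

    package main

    import (
        "fmt"
        "net/http"
        "strings"
    )

    // handler checks the User-Agent header of each request and logs
    // the ones that identify themselves as SEOnautBot.
    func handler(w http.ResponseWriter, r *http.Request) {
        if strings.Contains(r.UserAgent(), "SEOnautBot") {
            fmt.Println("SEOnautBot request:", r.URL.Path)
        }
        fmt.Fprintln(w, "ok")
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }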
By default, SEOnautBot respects the robots.txt file, although users can disable this behavior. It usually crawls sites on user demand, requesting pages until it has accessed the entire site, and it waits a random delay, usually around one second, between requests.
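If you want to keep SEOnautBot away from all or part of your site, a standard robots.txt rule should work. This sketch assumes the robots.txt user-agent token is SEOnautBot, matching the identifier in its user agent string, and the /private/ path is just an example:

    # Keep SEOnautBot out of the whole site
    User-agent: SEOnautBot
    Disallow: /

    # Or only out of a specific section
    # User-agent: SEOnautBot
    # Disallow: /private/

Keep in mind that, as noted above, users running SEOnaut can disable robots.txt handling, so this is a request rather than a hard guarantee.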
SEOnautBot crawls websites for SEO analysis, gathering information such as site structure, content, and other relevant data that can be used to improve a website’s visibility and ranking in search engines.
SEOnautBot is designed to be a respectful and well-behaved web crawler. It follows the rules set out in the robots.txt file and does not engage in any malicious or harmful behavior. However, if you believe SEOnautBot is causing issues on your site, or if you have any concerns about its behavior, please email hello@seonaut.org.
SEOnaut is open source software, so anyone running it can modify the code and change the crawler’s behavior.
If you have any questions or concerns about SEOnautBot, please contact us at hello@seonaut.org.