Nigooutbot is a web crawler (also known as a spider). Crawling is the process our websites use to discover new and updated events that might be of interest to people in the cities we cover.

Our websites publish information about third-party external websites through the Nigooutbot crawling system. Our main goal is to provide users with a good online agenda of things to do in specific cities. All the information we publish from external websites is properly linked, and we do not reproduce the full original content. We do not believe that copying content from external websites is a good strategy; when we do show a portion of that content, it is to send traffic to those external websites so the user can read the original content and buy a ticket from the source link. However, if you want information that belongs to you erased or modified, please contact us as soon as possible at

When Nigooutbot scans a website, it does not access the site more than once every five seconds. These scans are usually performed at most once a day and take between 1 and 30 minutes in total, depending on how big your website is. Furthermore, we do not read all events every day, so if part of the content has already been crawled, the crawling operation is shorter. We recrawl an event when we think it may have disappeared or its performance date is getting closer.

To block Nigooutbot access, just include the following text in robots.txt:

User-agent: Nigooutbot
Disallow: /
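As an illustrative check (a sketch using Python's standard urllib.robotparser and a made-up example path, not part of our system), you can confirm that these two lines block the crawler from every URL while leaving other user agents unaffected:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt snippet shown above, as it would appear on your site.
robots_txt = """User-agent: Nigooutbot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Nigooutbot is denied everywhere; an unrelated bot sees no restriction.
print(parser.can_fetch("Nigooutbot", "/events/concert"))    # False
print(parser.can_fetch("SomeOtherBot", "/events/concert"))  # True
```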

You can also specify a different crawling frequency by adding this snippet to robots.txt:

User-agent: Nigooutbot
Crawl-delay: 10
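Likewise, a quick sketch with Python's urllib.robotparser (again using a hypothetical robots.txt body, not a live fetch) shows how a compliant client reads the requested delay back, in seconds:

```python
from urllib.robotparser import RobotFileParser

# The Crawl-delay snippet shown above.
robots_txt = """User-agent: Nigooutbot
Crawl-delay: 10
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler waits this many seconds between requests.
print(parser.crawl_delay("Nigooutbot"))  # 10
```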