A `robots.txt` file lives at the root of a website: for the site www.example.com, it would live at www.example.com/robots.txt. It is a plain-text file that follows the [Robots Exclusion Standard](http://en.wikipedia.org/wiki/Robots_exclusion_standard#About_the_standard) and consists of one or more rules, each of which blocks (or allows) a given crawler's access to a specified file path on that website.
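Based on the paths examined below, the challenge's robots.txt presumably looked something like this (a reconstruction, not the verbatim file):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /encryption/is/a/right
Disallow: /fnagn/unf/znal/cynprf/gb/tb
```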
This probably means there is some sensitive information at one of the `Disallow`ed locations. Let's look at them one by one.
**`/cgi-bin/`**
When opening [`/cgi-bin/`](https://08.adventofctf.com/cgi-bin/), we get a `404` error. So let's skip this one for now.
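A quick way to confirm that from the terminal (a sketch; the exact status line may vary by server):

```bash
# Print only the HTTP status line for /cgi-bin/ (expecting a 404 here)
curl -sI https://08.adventofctf.com/cgi-bin/ | head -n 1
```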
**`/encryption/is/a/right`**
Upon opening [`/encryption/is/a/right`](https://08.adventofctf.com/encryption/is/a/right/), we get an encoded string back. It looks like `base64`, so let's try to decode it with `base64 -d` in the terminal.
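A sketch of that step (assuming the page body is just the encoded string; the actual value isn't reproduced here):

```bash
# Fetch the page body and decode it as base64
curl -s https://08.adventofctf.com/encryption/is/a/right/ | base64 -d
```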
The decoded output doesn't mean much, so let's have a look at the next one.
**`/fnagn/unf/znal/cynprf/gb/tb`**
After opening [`/fnagn/unf/znal/cynprf/gb/tb`](https://08.adventofctf.com/fnagn/unf/znal/cynprf/gb/tb/), we're greeted with the following text:
> "Oh, the places you'll go", my favorite poem... but this is the wrong place. Maybe you read that wrong?
Hmm, it says "Maybe you read that wrong?". The URL also looks kinda weird. It might be `rot13` encoded. So let's try to decode it using `rot13`:
```bash
echo "/fnagn/unf/znal/cynprf/gb/tb" | rot13
> /santa/has/many/places/to/go
```
_Note: `rot13` is not a standard program on Linux; I defined it as an alias for `tr 'A-Za-z' 'N-ZA-Mn-za-m'`._
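To reproduce it, the alias is a one-liner:

```bash
# rot13 via tr, rotating both uppercase and lowercase letters by 13
alias rot13="tr 'A-Za-z' 'N-ZA-Mn-za-m'"
```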
We got a new URL (I hope 😀). Let's try to [access it](https://08.adventofctf.com/santa/has/many/places/to/go/). We got the flag! It is `NOVI{you_have_br@1ns_in_your_head}`.
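For completeness, the same request from the terminal (assuming the flag appears directly in the response body):

```bash
curl -s https://08.adventofctf.com/santa/has/many/places/to/go/
```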