I’ve decided to use Loki and Grafana to aggregate and display the nginx logs of my applications. That requires some configuration changes. This article helps you set everything up, but it is neither perfect nor finished!
In this article I assume that you have two servers, both running Linux: one for nginx and the other for the Loki-Grafana stack. I try to keep it as general as possible so that you can use the configs on other platforms as well.
nginx configuration
We have to change the nginx logging format to structured JSON.
Side note: there are many formats you could choose. It highly depends on your needs and what you want to analyze. The format in this article works for me at the moment.
First, we need to create a log_format that defines the format of the logs we want to have. In the /etc/nginx/nginx.conf file, we need to add the following config in the http block:
log_format json_analytics escape=json '{'
'"msec": "$msec", ' # request unixtime in seconds with a milliseconds resolution
'"connection": "$connection", ' # connection serial number
'"connection_requests": "$connection_requests", ' # number of requests made in connection
'"pid": "$pid", ' # process pid
'"request_id": "$request_id", ' # the unique request id
'"request_length": "$request_length", ' # request length (including headers and body)
'"remote_addr": "$remote_addr", ' # client IP
'"remote_user": "$remote_user", ' # client HTTP username
'"remote_port": "$remote_port", ' # client port
'"time_local": "$time_local", '
'"time_iso8601": "$time_iso8601", ' # local time in the ISO 8601 standard format
'"request": "$request", ' # full path no arguments if the request
'"request_uri": "$request_uri", ' # full path and arguments if the request
'"args": "$args", ' # args
'"status": "$status", ' # response status code
'"body_bytes_sent": "$body_bytes_sent", ' # the number of body bytes exclude headers sent to a client
'"bytes_sent": "$bytes_sent", ' # the number of bytes sent to a client
'"http_referer": "$http_referer", ' # HTTP referer
'"http_user_agent": "$http_user_agent", ' # user agent
'"http_x_forwarded_for": "$http_x_forwarded_for", ' # http_x_forwarded_for
'"http_host": "$http_host", ' # the request Host: header
'"server_name": "$server_name", ' # the name of the vhost serving the request
'"request_time": "$request_time", ' # request processing time in seconds with msec resolution
'"upstream": "$upstream_addr", ' # upstream backend server for proxied requests
'"upstream_connect_time": "$upstream_connect_time", ' # upstream handshake time incl. TLS
'"upstream_header_time": "$upstream_header_time", ' # time spent receiving upstream headers
'"upstream_response_time": "$upstream_response_time", ' # time spend receiving upstream body
'"upstream_response_length": "$upstream_response_length", ' # upstream response length
'"upstream_cache_status": "$upstream_cache_status", ' # cache HIT/MISS where applicable
'"ssl_protocol": "$ssl_protocol", ' # TLS protocol
'"ssl_cipher": "$ssl_cipher", ' # TLS cipher
'"scheme": "$scheme", ' # http or https
'"request_method": "$request_method", ' # request method
'"server_protocol": "$server_protocol", ' # request protocol, like HTTP/1.1 or HTTP/2.0
'"pipe": "$pipe", ' # "p" if request was pipelined, "." otherwise
'"gzip_ratio": "$gzip_ratio", '
'"http_cf_ray": "$http_cf_ray",'
'}';
I took this format from the Grafana dashboard I want to use, minus the geodata.
Side note: This addition does not change anything at this point as we do not ‘use’ this log format yet.
Next, we have to select this logging format in an nginx server block to create logs in the required format.
You can have multiple logging streams at once with different formats. For this article, I’ve created a new logging directory to make the export easier:
/var/log/nginx-json
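A quick sketch of creating that directory; on a default install the nginx master process runs as root and opens the log files itself, so plain root ownership is fine:
sudo mkdir -p /var/log/nginx-json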
Open the nginx config file with the server block and add the following line:
access_log /var/log/nginx-json/ittavern.com.access.log json_analytics;
This creates a new log destination for this server block in the new directory, named after the site and using the previously created log format json_analytics.
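For context, a minimal sketch of where that line can live; the backend on 127.0.0.1:8080 and the listen/TLS details are placeholders, adjust them to your setup:
server {
    listen 443 ssl;
    server_name ittavern.com;

    # TLS certificate directives omitted for brevity

    # structured JSON access log, in addition to any existing access_log directive
    access_log /var/log/nginx-json/ittavern.com.access.log json_analytics;

    location / {
        # hypothetical backend, replace with your own upstream
        proxy_pass http://127.0.0.1:8080;
    }
}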
Check the syntax with sudo nginx -t and reload the nginx config with sudo systemctl reload nginx to finish the nginx configuration.
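If everything is fine, the syntax check should report something like:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful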
Configuring Grafana Alloy
Grafana Alloy is the successor of Promtail and is used to push the local logs to Loki.
As a reference, I’ve used this article from the official documentation.
There are many ways to run Grafana Alloy - please check the official installation documentation and install it with your preferred method. I’ve decided to use the native installation on Ubuntu for this article.
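Roughly, the native installation on Ubuntu boils down to adding the Grafana APT repository and installing the package; please double-check the exact steps against the official docs, as they may change:
sudo mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt-get update && sudo apt-get install alloy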
Before we configure anything, let me explain how it works - or at least how I understand Grafana Alloy:
The main components of Grafana Alloy are the collector, the transformer, and the writer.
The collector gathers logs locally, via an HTTP endpoint, or through other methods.
The transformer allows you to process the logs - filter out certain lines or terms, deduplicate, add labels, and so on (see the sketch after this overview).
The writer pushes the processed logs to the destination.
You can have multiple components and build your own little pipeline.
That is the short form - all functions can be found in the official documentation - and this is only for Loki. Grafana Alloy can be used for Prometheus and other endpoints as well!
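To illustrate the transformer part, here is a hypothetical loki.process block that drops access log lines for a made-up /healthz endpoint before forwarding them. It is not part of the pipeline used below, just a sketch based on the documented stage.drop stage:
loki.process "filter_healthchecks" {
    // drop access log lines matching a hypothetical health-check path
    stage.drop {
        expression = ".*/healthz.*"
    }
    forward_to = [loki.write.grafana_loki.receiver]
}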
The default configuration file can be found here:
/etc/alloy/config.alloy
The default config can be removed; we add the following components, or ‘pipeline’:
local.file_match "local_files" {
path_targets = [{ "__path__" = "/var/log/nginx-json/*.log", job = "nginx", host = "1-prod-mnsn-net" }]
sync_period = "5s"
}
loki.source.file "log_scrape" {
targets = local.file_match.local_files.targets
forward_to = [loki.write.grafana_loki.receiver]
tail_from_end = true
}
loki.write "grafana_loki" {
endpoint {
url = "http://loki.lo.mnsn.net:3100/loki/api/v1/push"
}
}
In the first section, we provide the source of the logs we want to forward, add two labels, and set the sync period.
In the second section, we pick up that source and forward the logs to the writer we want.
In the last section, we choose the destination for the logs - in this case our Loki instance.
This is good enough for this article. For more configuration options please check out the official documentation.
Feel free to check the syntax with sudo alloy validate /etc/alloy/config.alloy.
In the beginning, or for tests, you can run it manually via sudo alloy run /etc/alloy/config.alloy and see if it works; if it does, start the systemd service with sudo systemctl start alloy. Additionally, Grafana Alloy publishes a simple GUI on 127.0.0.1:12345.
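To make it persistent and keep an eye on it, something along these lines should work on a systemd-based install, assuming the package ships a unit named alloy:
sudo systemctl enable --now alloy
sudo systemctl status alloy
journalctl -u alloy -f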
Side note: Please make sure that Grafana Alloy has read access to the logs and that the server running nginx can reach the Loki instance over TCP/3100.
nc -vz loki.lo.mnsn.net 3100
Connection to loki.lo.mnsn.net (10.20.30.56) 3100 port [tcp/*] succeeded!
That is all! Now you can use the logs in Loki. I currently use this Grafana dashboard, but you can simply build your own!
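As a starting point for your own panels, a small LogQL example that relies on the labels set in my Alloy config above (job and host are my values, adjust them) and on the JSON log format; it shows only requests that returned a 5xx status:
{job="nginx", host="1-prod-mnsn-net"} | json | status=~"5.."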