Correct way to include the Kubernetes namespace and service name in access logs?

Hello, I’m trying to make Kong’s NGINX HTTP access logs easy to associate with the service that backed each request. I have hundreds of services, so this kind of filtering would be very handy :slight_smile:

The nginx-ingress-controller project adds variables such as $proxy_upstream_name, $namespace, $ingress_name, $service_name, etc. so the access logs can be nicely filtered to a particular Kubernetes resource.

With Kong these variables aren’t present, and the only way I’ve found to log information such as the upstream name is to register a custom global Kong plugin which sets a new variable:

local ContextVarsHandler = {
  PRIORITY = 10,
  VERSION = "0.1.0",
}

-- Copy the resolved Service name (or its host, as a fallback) into the
-- $service_name NGINX variable so the access log format can reference it.
-- The variable itself has to be declared first (see the helm values below).
function ContextVarsHandler:log(conf)
  if ngx.ctx.service then
    ngx.var.service_name =
        ngx.ctx.service.name ~= ngx.null and
        ngx.ctx.service.name or ngx.ctx.service.host
  end
end

return ContextVarsHandler

Along with some helm values:

env:
  nginx_proxy_set: '$service_name internal'

With the plugin in place I can use $service_name to print “namespace.servicename.port” in the access logs, but this seems horribly manual! Ideally these would also be three separate variables.

Surely there’s an easier way to include this information in logs? If not, can this feature be requested?

I found a similar request, but not for Kubernetes specifically, from August with no answer: Service Name on Access Logs

Thanks for any info!

Not sure there’s a good way to get it into the standard access logs; there are a few hurdles to that.

We generally recommend using one of our logging plugins rather than the NGINX access logs, e.g. https://docs.konghq.com/hub/kong-inc/http-log/. Much of Kong’s internal data isn’t exposed via NGINX variables (and data has to be in a variable before it can appear in an access log), because we track much of the request state in our own internal Lua structures. The logging plugins, by contrast, contain much richer data because of that, including the service name.
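
For reference, a minimal global http-log setup via the ingress controller might look roughly like the sketch below (the endpoint URL is a placeholder for whatever collector you use; the fields are the documented KongClusterPlugin ones):

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: global-http-log
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"   # apply the plugin to every proxied request
plugin: http-log
config:
  # Placeholder endpoint; point this at your log collector.
  http_endpoint: https://log-collector.example.internal/kong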

NGINX variables and modified access log directives both require template modifications (you need to declare the variable before Lua code can populate it, and you need to declare the new access log format/log directive as well). Custom templates are often best avoided, but you should be able to handle both using directive injection. The main thing to note there is that directive injection cannot override the standard access log (which doesn’t have a kong.conf-level setting to choose the format), but it can add additional log directives. Since most K8S-based logging works off container stdout, you’ll usually want to point the standard log at /dev/null and then inject your new log directive targeting /dev/stdout, to avoid duplicate entries in different formats.
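
As a rough sketch of that approach using the helm chart’s env convention (the format name is arbitrary, $service_name assumes a variable populated by a plugin like the one above, and you should double-check the injection keys against your Kong version):

env:
  # Send the standard proxy access log to /dev/null so entries aren't duplicated.
  proxy_access_log: /dev/null
  # Declare a custom format in the http block via directive injection...
  nginx_http_log_format: >-
    show_service_json escape=json
    '{ "service_name": "$service_name", "status": "$status" }'
  # ...and inject an additional access_log directive, pointed at stdout
  # and using that format, into the proxy server block.
  nginx_proxy_access_log: '/dev/stdout show_service_json'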

You further need some code that copies the value out of the internal structure into ngx.var.whatever, as you’ve noted. A custom plugin or an instance of the serverless plugins (pre-function/post-function) can handle that.
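
For instance, a log-phase chunk along these lines (runnable as a custom plugin’s :log() handler, or as a post-function “log” function on versions that support per-phase configuration) could split the KIC-style name back into separate variables; the $k8s_* names are illustrative and would each need a matching injected set directive:

-- Split the "namespace.svcName.port" name KIC gives each Kong Service
-- into three NGINX variables. $k8s_namespace/$k8s_service/$k8s_port are
-- hypothetical names and must be declared via injected "set" directives.
local svc = ngx.ctx.service
local name = svc and svc.name ~= ngx.null and svc.name or nil
if name then
  local namespace, service, port = name:match("^([^.]+)%.([^.]+)%.([^.]+)$")
  if namespace then
    ngx.var.k8s_namespace = namespace
    ngx.var.k8s_service   = service
    ngx.var.k8s_port      = port
  end
end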

The second major hurdle is that K8S-level information isn’t necessarily exposed to Kong: the controller has a pre-defined set of information that it transforms into Kong configuration, and doesn’t provide a means to inject arbitrary info elsewhere. It sounds like the existing namespace.svcName.port naming scheme is sufficient for your needs, but keep that in mind if you think you’ll need more data–something like, say, the K8S resource UID wouldn’t be available without making changes to the controller code.

With that all in mind, do you think that one of the logging plugins would work for you? Are there any major reasons you know you can’t use them?

Although customized NGINX access logs are doable, we often dissuade people from using them because of the extra custom overhead involved: if a logging plugin can meet your needs, or if one mostly meets them but lacks some critical feature, that’s something we want to explore, since the standardized plugins are easier to use out of the box. We recognize that there are K8S-specific concerns: much of the ecosystem expects to ingest logs from stdout (more of a pull model), whereas the logging plugins usually have you configure a log target (more of a push model). That gap is on our minds for longer-term planning, but if there are other concerns on top of that, we’d like to hear them!

cc @Shane-Connelly, this sorta logging gap issue comes up often; it’s something we should review more broadly.

Hey @traines, thanks for the detailed reply. I’ve already been through the custom NGINX log-format configuration fight. After some trial and error, these helm values seem to result in just the actual user traffic being logged in a custom JSON format:

env:
  nginx_http_log_format: >-
    json_upstream_data escape=json
    '{ "remote_addr": "$remote_addr",
    "request": "$request", "status": "$status",
    [....] }'
  proxy_access_log: 'logs/access.log json_upstream_data'

Internal traffic (e.g. the admin API) is still logged in a different format, but that’s not much of a bother.
The ‘last 10%’ of including namespace/name info turned out to be a fair bit harder :slight_smile:

So it does seem like the disconnect is that, yes, the Kubernetes ecosystem values stdout logging. In effect, the source of logs (the application) becomes decoupled from the log aggregator when every application writes to stdout. If an application writes JSON to stdout, a Kubernetes user will be able to grok those structures.


I did actually check out the logging plugins. Several notes there…
Unfortunately it’s not clear whether any of the plugins can transmit logs to Datadog, the logging vendor I’m working with. I can get logs to Datadog by:

  1. Sending one JSON object per line over TCP to a server
    – which must be prefixed by the API key, something tcp-log cannot do
  2. POSTing one or more logs at a time to an HTTPS endpoint along with the API key
    – a payload structure http-log cannot produce
  3. Printing the log to stdout and letting the agents ingest it
    – e.g. with file-log pointed at /dev/stdout, but the file-log docs say it shouldn’t be used in production due to the synchronous I/O model

Even if I were to use one of these logging plugins, they still don’t include separate fields for namespace, svcName, and port. I believe those three would be enough information for most cases. It makes sense that plain Kong puts them all in one string, but the Kong “Ingress Controller” really ought to know how to break that string back into its components, because they have real meaning in Kubernetes.

For example, if there’s a sudden pile of 500s coming from related services on the gateway, the engineers working in the associated namespace are the ones who should be alerted. (This is actually a wider problem with Kong + Kubernetes; the metrics plugins have the same issue.)

Finally, the logs sent by the plugin seem to contain too much information. I pay per byte, so the whole route block feels like a repetitive waste. And the payload appears to include the full HTTP headers, session-ID cookies included…


I don’t want everyone’s authorization strings sitting on a log server.

I can doctor the logs in post-processing, but there are just so many papercuts and minor concerns no matter how I look at it, so it’s good to hear that this is a known friction point.

It kind of seems like the most Kubernetes-friendly option is writing a custom sidecar container to sit next to each Kong, receive tcp-log packets from localhost, and write a reformatted subset of the log to its own stdout to be indexed normally. :roll_eyes:
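
If I went that route, the Kong side would presumably just be a global tcp-log plugin pointed at the sidecar, something roughly like this (resource name and port are arbitrary):

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: tcp-log-to-sidecar
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: tcp-log
config:
  host: 127.0.0.1   # the reformatting sidecar in the same pod
  port: 9999        # arbitrary port the sidecar listens on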

Another related topic: Nginx custom template http log with service name

Hi, this is something we are interested in too, since we use Fluent Bit as a log collector and would like Kong plugins that can work with it. See: https://docs.fluentbit.io/manual/installation/kubernetes
What are your suggestions in this case?