Kong Manager in Kubernetes won't work

I’m using Kong Enterprise with Helm to bring up a test environment, but Kong Manager doesn’t show any data and doesn’t allow any changes to be made. There is no way to view workspaces or create new ones, as Kong Manager doesn’t recognize them.
I’m using an external database because the migrations kept crashing when I used the chart’s default database.

I really don’t know what’s causing this; as far as I can tell there is nothing wrong with the chart values. I’ve checked the pod logs, but they don’t report any errors.

For the test environment, I’m using k3d to create the cluster.
Chart used: kong 2.3.0 · helm/kong

# -----------------------------------------------------------------------------
# Kong parameters
env:
  database: postgres
  pg_database: postgres
  pg_host: postgres-postgresql
  pg_user: postgres
  pg_password: 123
  pg_port: 5432
  log_level: notice
  portal: on
  portal_auth: "basic-auth"
  portal_auto_approve: on
  admin_session_conf: '{"cookie_name":"admin_session","cookie_samesite":"off","secret":"kong123","cookie_secure":true,"storage":"kong"}'
  portal_session_conf: '{"cookie_name":"portal_session","cookie_samesite":"off","secret":"kong123","cookie_secure":true,"storage":"kong"}'
  password:
    valueFrom:
      secretKeyRef:
        name: kong-enterprise-superuser-password
        key: password
# -----------------------------------------------------------------------------
# Kong Services and Endpoints
# -----------------------------------------------------------------------------
image:
  repository: kong/kong-gateway
  tag: "2.5.0.1"
admin:
  enabled: true
  type: ClusterIP
  http:
    enabled: true
    servicePort: 8001
    containerPort: 8001
  ingress:
    enabled: true
    hostname: admin.localhost
    annotations:
      kubernetes.io/ingress.class: "kong"
    path: /
  tls:
    enabled: false
proxy:
  enabled: true
  type: LoadBalancer
  http:
    enabled: true
    servicePort: 80
    containerPort: 8000
  tls:
    enabled: false
manager:
  enabled: true
  type: ClusterIP
  http:
    enabled: true
    servicePort: 8002
    containerPort: 8002
  ingress:
    enabled: true
    hostname: manager.localhost
    annotations:
      kubernetes.io/ingress.class: "kong"
    path: /
  tls:
    enabled: false
portal:
  enabled: true
  type: ClusterIP
  http:
    enabled: true
    servicePort: 8003
    containerPort: 8003
  ingress:
    enabled: true
    hostname: portal.localhost
    annotations:
      kubernetes.io/ingress.class: "kong"
    path: /
  tls:
    enabled: false
portalapi:
  enabled: true
  type: ClusterIP
  http:
    enabled: true
    servicePort: 8004
    containerPort: 8004
  ingress:
    enabled: true
    hostname: portalapi.localhost
    annotations:
      kubernetes.io/ingress.class: "kong"
    path: /
  tls:
    enabled: false
# -----------------------------------------------------------------------------
# Ingress Controller parameters
# -----------------------------------------------------------------------------
ingressController:
  enabled: true
  image:
    repository: kong/kubernetes-ingress-controller
    tag: "1.3"
  env:
    kong_admin_tls_skip_verify: true
    kong_admin_token:
      valueFrom:
        secretKeyRef:
          name: kong-enterprise-superuser-password
          key: password
  ingressClass: kong
  rbac:
    create: true
# -----------------------------------------------------------------------------
# Postgres sub-chart parameters
# -----------------------------------------------------------------------------
postgresql:
  enabled: false
migrations:
  preUpgrade: true
  postUpgrade: true
# -----------------------------------------------------------------------------
# Kong Enterprise parameters
# -----------------------------------------------------------------------------
enterprise:
  enabled: true
  license_secret: kong-enterprise-license
  vitals:
    enabled: true
  portal:
    enabled: true
  rbac:
    enabled: false
  smtp:
    enabled: false

Hi @jpeedroza
I suspect Kong Manager is not able to communicate with the Admin API. If you open up the browser dev tools, check whether there are any errors in the console, specifically failures communicating with the Admin API (e.g. it can’t be reached, a self-signed certificate is rejected, etc.).

How are you accessing the admin API and Kong Manager? You have the proxy set to a LoadBalancer, which will allocate an externally-accessible IP in most cases, but you use admin.localhost and manager.localhost as the Ingress hostnames. I’d normally only expect those to work if you have port-forwarded to the proxy and set up local DNS entries to route those hostnames to localhost (i.e. through the port forward).

For Manager to work, it needs to know a valid URL for the admin API, and the admin API needs to know the URL used to access the GUI so it can set CORS headers. Those are controlled by Kong’s admin_api_uri and admin_gui_url settings (env.admin_api_uri and env.admin_gui_url in the chart values).

When the admin and manager Ingresses are enabled, their ingress.hostname, ingress.path, and TLS/protocol settings will configure those values automatically. For example, since you have TLS disabled, the chart will set KONG_ADMIN_GUI_URL=http://manager.localhost/.
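
As a rough illustration (an approximation based on the hostnames in your values, not verbatim chart output), the auto-generated configuration should end up equivalent to something like:

# Approximate equivalent of the chart's auto-generated settings, derived from
# manager.ingress / admin.ingress with TLS disabled (illustration only):
env:
  admin_gui_url: "http://manager.localhost/"   # becomes KONG_ADMIN_GUI_URL
  admin_api_uri: "admin.localhost"             # becomes KONG_ADMIN_API_URI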

If you’re not actually using those Ingresses to access those resources, those settings will be misconfigured, and Manager won’t actually be able to reach the admin API. I’d recommend either:

  • Mapping DNS to point to the proxy LoadBalancer IP and configuring the chart Ingress settings to use that hostname, then accessing Manager via that hostname.
  • Manually setting env.admin_gui_url and env.admin_api_uri (which will override the auto-generated configuration) to reflect whatever URLs you’re actually using to reach those resources; see the sketch after this list.
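
For example, if you reach Manager and the Admin API through local port-forwards rather than through the Ingresses, the overrides might look roughly like this (the localhost ports are only an assumption about your setup; merge them into your existing env block and substitute whatever URLs you actually use):

# Hypothetical overrides, assuming Manager is reachable at localhost:8002 and
# the Admin API at localhost:8001 (e.g. via kubectl port-forward):
env:
  admin_gui_url: "http://localhost:8002"
  admin_api_uri: "localhost:8001"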

If you’re still not seeing data, reviewing your browser developer console and network diagnostics should clarify exactly what’s failing.