Hello guys
I am trying to install Kong for Kubernetes on AKS. Basically I want to reproduce this scenario:
Here is how I installed it. I have tried both of the following:
helm install kong/kong --generate-name --set ingressController.installCRDs=false --set admin.type=LoadBalancer --set proxy.type=LoadBalancer
and
kubectl apply -f https://bit.ly/k4k8s
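(For reference, here is the Helm variant with the same options expressed as a values file instead of --set flags. This is just my own sketch of how I understand those chart options, not something taken from the chart docs:)

# values.yaml -- same settings as the --set flags above
ingressController:
  installCRDs: false      # CRDs are applied separately
admin:
  type: LoadBalancer      # expose the admin API through a LoadBalancer service
proxy:
  type: LoadBalancer      # expose the proxy through a LoadBalancer service

# then: helm install kong/kong --generate-name -f values.yaml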
The result is always the same:
kubectl get all -n kong
NAME                                         READY   STATUS             RESTARTS   AGE
pod/kong-1595829690-kong-6789f4b45f-rjztq    2/3     CrashLoopBackOff   11         21m

NAME                                  TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
service/kong-1595829690-kong-proxy    LoadBalancer   10.0.88.120   51.116.135.128   80:32104/TCP,443:30928/TCP   21m

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kong-1595829690-kong    0/1     1            0           21m

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/kong-1595829690-kong-6789f4b45f    1         1         0       21m
When I describe the pod (kubectl describe pod/kong-1595829690-kong-6789f4b45f-rjztq -n kong), I get these events:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 18m default-scheduler Successfully assigned kong/kong-1595829690-kong-6789f4b45f-rjztq to aks-agentpool-41512647-vmss000003
Normal Pulling 18m kubelet, aks-agentpool-41512647-vmss000003 Pulling image "docker.io/istio/proxyv2:1.7.0-alpha.0"
Normal Pulled 18m kubelet, aks-agentpool-41512647-vmss000003 Successfully pulled image "docker.io/istio/proxyv2:1.7.0-alpha.0"
Normal Created 18m kubelet, aks-agentpool-41512647-vmss000003 Created container istio-init
Normal Started 18m kubelet, aks-agentpool-41512647-vmss000003 Started container istio-init
Normal Pulled 18m kubelet, aks-agentpool-41512647-vmss000003 Container image "kong:2.1" already present on machine
Normal Pulling 18m kubelet, aks-agentpool-41512647-vmss000003 Pulling image "docker.io/istio/proxyv2:1.7.0-alpha.0"
Normal Started 18m kubelet, aks-agentpool-41512647-vmss000003 Started container proxy
Normal Created 18m kubelet, aks-agentpool-41512647-vmss000003 Created container proxy
Normal Pulled 18m kubelet, aks-agentpool-41512647-vmss000003 Successfully pulled image "docker.io/istio/proxyv2:1.7.0-alpha.0"
Normal Created 18m kubelet, aks-agentpool-41512647-vmss000003 Created container istio-proxy
Normal Started 18m kubelet, aks-agentpool-41512647-vmss000003 Started container istio-proxy
Warning Unhealthy 18m (x3 over 18m) kubelet, aks-agentpool-41512647-vmss000003 Readiness probe failed: HTTP probe failed with statuscode: 500
Normal Created 18m (x2 over 18m) kubelet, aks-agentpool-41512647-vmss000003 Created container ingress-controller
Normal Pulled 18m (x2 over 18m) kubelet, aks-agentpool-41512647-vmss000003 Container image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1" already present on machine
Normal Killing 18m kubelet, aks-agentpool-41512647-vmss000003 Container ingress-controller failed liveness probe, will be restarted
Normal Started 18m (x2 over 18m) kubelet, aks-agentpool-41512647-vmss000003 Started container ingress-controller
Warning Unhealthy 18m (x4 over 18m) kubelet, aks-agentpool-41512647-vmss000003 Liveness probe failed: HTTP probe failed with statuscode: 500
Warning BackOff 3m48s (x50 over 16m) kubelet, aks-agentpool-41512647-vmss000003 Back-off restarting failed container
When I check the logs:
kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq ingress-controller -n kong
-------------------------------------------------------------------------------
Kong Ingress controller
Release: 0.9.1
Build: 2caa524
Repository: git@github.com:kong/kubernetes-ingress-controller.git
Go: go1.14.1
-------------------------------------------------------------------------------
W0727 06:33:17.215799 1 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0727 06:33:17.215947 1 main.go:492] Creating API client for https://10.0.0.1:443
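The ingress-controller log just stops after that "Creating API client for https://10.0.0.1:443" line. A quick way to see whether the API server is even reachable from inside the pod (through the Istio sidecar) might be something like the command below; note that curl being available in the kong:2.1 image is an assumption on my part:

kubectl exec -n kong pod/kong-1595829690-kong-6789f4b45f-rjztq -c proxy -- curl -sk https://10.0.0.1:443/version

Next, the proxy container: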
kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq proxy -n kong
2020/07/27 06:01:53 [warn] 1#0: load balancing method redefined in /kong_prefix/nginx-kong.conf:61
nginx: [warn] load balancing method redefined in /kong_prefix/nginx-kong.conf:61
2020/07/27 06:01:53 [notice] 1#0: using the "epoll" event method
2020/07/27 06:01:53 [notice] 1#0: openresty/1.15.8.3
2020/07/27 06:01:53 [notice] 1#0: built by gcc 9.3.0 (Alpine 9.3.0)
2020/07/27 06:01:53 [notice] 1#0: OS: Linux 4.15.0-1083-azure
2020/07/27 06:01:53 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/07/27 06:01:53 [notice] 1#0: start worker processes
2020/07/27 06:01:53 [notice] 1#0: start worker process 22
2020/07/27 06:01:53 [notice] 22#0: *1 [lua] cache.lua:374: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2020/07/27 06:01:53 [notice] 22#0: *1 [lua] cache.lua:374: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
kubectl logs pod/kong-1595829690-kong-6789f4b45f-rjztq istio-proxy -n kong
2020-07-27T06:01:54.513325Z info FLAG: --concurrency="2"
2020-07-27T06:01:54.513350Z info FLAG: --disableInternalTelemetry="false"
2020-07-27T06:01:54.513355Z info FLAG: --domain="kong.svc.cluster.local"
2020-07-27T06:01:54.513357Z info FLAG: --help="false"
2020-07-27T06:01:54.513360Z info FLAG: --id=""
2020-07-27T06:01:54.513362Z info FLAG: --ip=""
2020-07-27T06:01:54.513364Z info FLAG: --log_as_json="false"
2020-07-27T06:01:54.513367Z info FLAG: --log_caller=""
2020-07-27T06:01:54.513369Z info FLAG: --log_output_level="default:info"
2020-07-27T06:01:54.513371Z info FLAG: --log_rotate=""
2020-07-27T06:01:54.513373Z info FLAG: --log_rotate_max_age="30"
2020-07-27T06:01:54.513376Z info FLAG: --log_rotate_max_backups="1000"
2020-07-27T06:01:54.513379Z info FLAG: --log_rotate_max_size="104857600"
2020-07-27T06:01:54.513381Z info FLAG: --log_stacktrace_level="default:none"
2020-07-27T06:01:54.513386Z info FLAG: --log_target="[stdout]"
2020-07-27T06:01:54.513389Z info FLAG: --meshConfig="./etc/istio/config/mesh"
2020-07-27T06:01:54.513391Z info FLAG: --mixerIdentity=""
2020-07-27T06:01:54.513394Z info FLAG: --outlierLogPath=""
2020-07-27T06:01:54.513396Z info FLAG: --proxyComponentLogLevel="misc:error"
2020-07-27T06:01:54.513399Z info FLAG: --proxyLogLevel="warning"
2020-07-27T06:01:54.513401Z info FLAG: --serviceCluster="kong-1595829690-kong.kong"
2020-07-27T06:01:54.513404Z info FLAG: --serviceregistry="Kubernetes"
2020-07-27T06:01:54.513407Z info FLAG: --stsPort="0"
2020-07-27T06:01:54.513409Z info FLAG: --templateFile=""
2020-07-27T06:01:54.513412Z info FLAG: --tokenManagerPlugin="GoogleTokenExchange"
2020-07-27T06:01:54.513415Z info FLAG: --trust-domain="cluster.local"
2020-07-27T06:01:54.513438Z info Version 1.7.0-alpha.0-37119973c952151e269110170f2fda8c6a34fb5e-dirty-Modified
2020-07-27T06:01:54.513733Z info Obtained private IP [10.244.3.65]
2020-07-27T06:01:54.513807Z info Apply proxy config from env {"proxyMetadata":{"DNS_AGENT":""}}
2020-07-27T06:01:54.514560Z info Effective config: binaryPath: /usr/local/bin/envoy
concurrency: 2
configPath: ./etc/istio/proxy
discoveryAddress: istiod.istio-system.svc:15012
drainDuration: 45s
envoyAccessLogService: {}
envoyMetricsService: {}
parentShutdownDuration: 60s
proxyAdminPort: 15000
proxyMetadata:
  DNS_AGENT: ""
serviceCluster: kong-1595829690-kong.kong
statNameLength: 189
statusPort: 15020
terminationDrainDuration: 5s
tracing:
  zipkin:
    address: zipkin.istio-system:9411
...
2020-07-27T06:01:55.300093Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.301147Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.307657Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:55.308667Z warning envoy filter [src/envoy/http/authn/http_filter_factory.cc:83] mTLS PERMISSIVE mode is used, connection can be either plaintext or TLS, and client cert can be omitted. Please consider to upgrade to mTLS STRICT mode for more secure configuration that only allows TLS connection with client cert. See https://istio.io/docs/tasks/security/mtls-migration/
2020-07-27T06:01:57.014809Z info Envoy proxy is ready
[2020-07-27T06:01:57.676Z] "- - -" 0 UH "-" "-" 0 0 7 - "-" "-" "-" "-" "-" - - 10.0.0.1:443 10.244.3.65:47500 - -
2020-07-27T06:02:00.695289Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:02:06.558113Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz
2020-07-27T06:33:30.695134Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:33:36.557807Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz
[2020-07-27T06:33:30.808Z] "- - -" 0 UH "-" "-" 0 0 0 - "-" "-" "-" "-" "-" - - 10.0.0.1:443 10.244.3.65:41542 - -
2020-07-27T06:33:40.694965Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/readyz
app URL path = /healthz
2020-07-27T06:33:46.557618Z error Request to probe app failed: Get "http://localhost:10254/healthz": dial tcp 127.0.0.1:10254: connect: connection refused, original URL path = /app-health/ingress-controller/livez
app URL path = /healthz
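As a sanity check, the probe endpoint can also be hit directly with a port-forward (port 10254 is the one the probes in the deployment below use). This is just the check I would run, not output I already have:

kubectl port-forward -n kong pod/kong-1595829690-kong-6789f4b45f-rjztq 10254:10254
# in a second terminal:
curl -v http://localhost:10254/healthz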
Here is the deployment:
kubectl get deployment.apps/kong-1595829690-kong -n kong -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kuma.io/gateway: enabled
    meta.helm.sh/release-name: kong-1595829690
    meta.helm.sh/release-namespace: kong
    traffic.sidecar.istio.io/includeInboundPorts: ""
  creationTimestamp: "2020-07-27T06:01:49Z"
  generation: 1
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong-1595829690
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: "2"
    helm.sh/chart: kong-1.8.0
  name: kong-1595829690-kong
  namespace: kong
  resourceVersion: "11494112"
  selfLink: /apis/apps/v1/namespaces/kong/deployments/kong-1595829690-kong
  uid: 67dacf6e-0e04-4a93-92b5-5097ed9a4ed4
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: kong-1595829690
      app.kubernetes.io/name: kong
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: app
        app.kubernetes.io/instance: kong-1595829690
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kong
        app.kubernetes.io/version: "2"
        helm.sh/chart: kong-1.8.0
    spec:
      containers:
      - args:
        - /kong-ingress-controller
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: CONTROLLER_ELECTION_ID
          value: kong-ingress-controller-leader-kong
        - name: CONTROLLER_INGRESS_CLASS
          value: kong
        - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
          value: "true"
        - name: CONTROLLER_KONG_URL
          value: https://localhost:8444
        - name: CONTROLLER_PUBLISH_SERVICE
          value: kong/kong-1595829690-kong-proxy
        image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: ingress-controller
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      - env:
        - name: KONG_ADMIN_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_GUI_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_ADMIN_GUI_ERROR_LOG
          value: /dev/stderr
        - name: KONG_ADMIN_LISTEN
          value: 127.0.0.1:8444 http2 ssl
        - name: KONG_CLUSTER_LISTEN
          value: "off"
        - name: KONG_DATABASE
          value: "off"
        - name: KONG_KIC
          value: "on"
        - name: KONG_LUA_PACKAGE_PATH
          value: /opt/?.lua;/opt/?/init.lua;;
        - name: KONG_NGINX_HTTP_INCLUDE
          value: /kong/servers.conf
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"
        - name: KONG_PLUGINS
          value: bundled
        - name: KONG_PORTAL_API_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PORTAL_API_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PORT_MAPS
          value: 80:8000, 443:8443
        - name: KONG_PREFIX
          value: /kong_prefix/
        - name: KONG_PROXY_ACCESS_LOG
          value: /dev/stdout
        - name: KONG_PROXY_ERROR_LOG
          value: /dev/stderr
        - name: KONG_PROXY_LISTEN
          value: 0.0.0.0:8000, 0.0.0.0:8443 http2 ssl
        - name: KONG_STATUS_LISTEN
          value: 0.0.0.0:8100
        - name: KONG_STREAM_LISTEN
          value: "off"
        - name: KONG_NGINX_DAEMON
          value: "off"
        image: kong:2.1
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - /bin/sleep 15 && kong quit
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: proxy
        ports:
        - containerPort: 8000
          name: proxy
          protocol: TCP
        - containerPort: 8443
          name: proxy-tls
          protocol: TCP
        - containerPort: 9542
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /status
            port: metrics
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /kong_prefix/
          name: kong-1595829690-kong-prefix-dir
        - mountPath: /tmp
          name: kong-1595829690-kong-tmp
        - mountPath: /kong
          name: custom-nginx-template-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: kong-1595829690-kong
      serviceAccountName: kong-1595829690-kong
      terminationGracePeriodSeconds: 30
      volumes:
      - emptyDir: {}
        name: kong-1595829690-kong-prefix-dir
      - emptyDir: {}
        name: kong-1595829690-kong-tmp
      - configMap:
          defaultMode: 420
          name: kong-1595829690-kong-default-custom-server-blocks
        name: custom-nginx-template-volume
status:
  conditions:
  - lastTransitionTime: "2020-07-27T06:01:49Z"
    lastUpdateTime: "2020-07-27T06:01:49Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2020-07-27T06:11:50Z"
    lastUpdateTime: "2020-07-27T06:11:50Z"
    message: ReplicaSet "kong-1595829690-kong-6789f4b45f" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
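For quick reference, these are the parts of that deployment that seem relevant to the failing probes (copied from the YAML above, nothing new):

traffic.sidecar.istio.io/includeInboundPorts: ""   # annotation on the deployment, set by the chart
CONTROLLER_KONG_URL: https://localhost:8444        # ingress-controller talks to the Kong admin API over localhost
readiness/liveness probes: httpGet /healthz on port 10254 of the ingress-controller container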
I would really appreciate your help.