Problem defining port for service

We are currently evaluating the kong-ingress-controller to replace our current setup, where we configure the Kong instance through its admin API.

Our current Kong instance is running version 1.3 with Postgres storage.
The evaluation instance is running Kong v1.3 in DB-less mode with kong-ingress-controller v0.6.

We have a pod exposing an API on port 8080, and a corresponding Kubernetes Service that load-balances the pod, also on port 8080.

We are currently exposing the api through kong with the following config:

curl -i -X POST "http://localhost:8001/services/" \
-d "name=demo-service" \
-d "url=http://demo-service:8080/api"

curl -i -X POST "http://localhost:8001/services/demo-service/routes" \
-d "paths[]=/api/demo_service" \
-d "strip_path=true"
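For reference, with that configuration a request through Kong's proxy (assuming the default proxy port 8000; the `/users` suffix is a hypothetical sub-path added for illustration) is routed as follows:

```shell
# The route matches /api/demo_service, strip_path=true removes that
# matched prefix, and the service path /api is prepended, so the
# upstream receives GET http://demo-service:8080/api/users
curl -i "http://localhost:8000/api/demo_service/users"
```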

When we try to accomplish the same setup with the kong-ingress-controller, the service is exposed on port 80 and we have not found a way to change this. At a first stage we would like to continue using kube-proxy for load balancing, and according to the documentation this should be possible by annotating the service with `ingress.kubernetes.io/service-upstream`.

Below you can find the relevant part of our configuration, where we annotate both the Service and the Ingress with the Kong configuration. In addition we have annotated the Service with `ingress.kubernetes.io/service-upstream: 'true'`, but Kong still keeps port 80 as the service port, ignoring the servicePort from the Ingress.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  namespace: dev
  annotations:
    configuration.konghq.com: api
    kubernetes.io/ingress.class: "kong"
spec:
  rules:
  - http:
      paths:
      - path: /api/demo_service
        backend:
          serviceName: demo-service
          servicePort: 8080

apiVersion: configuration.konghq.com/v1
kind: KongIngress
metadata:
  name: api
  namespace: dev
proxy:
  path: /api
route:
  preserve_host: false

apiVersion: v1
kind: Service
metadata:
  annotations:
    ingress.kubernetes.io/service-upstream: 'true'
  labels:
    app: demo-service
  name: demo-service
  namespace: dev
spec:
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: demo-service
  type: ClusterIP

When verifying the setup through Kong’s admin api we get the following result:

curl -k https://localhost:8444/services

{
	"next": null,
	"data": [{
		"host": "",
		"created_at": 1569921353,
		"connect_timeout": 60000,
		"id": "1fca4d48-68cd-5c0f-a3f5-ca20027b1759",
		"protocol": "http",
		"name": "dev.demo-service.8080",
		"read_timeout": 60000,
		"port": 80,
		"path": "\/api",
		"updated_at": 1569921353,
		"client_certificate": null,
		"tags": null,
		"write_timeout": 60000,
		"retries": 5
	}]
}

curl -k https://localhost:8444/routes

{
	"next": null,
	"data": [{
		"strip_path": true,
		"tags": null,
		"updated_at": 1569921353,
		"destinations": null,
		"headers": null,
		"protocols": ["http", "https"],
		"created_at": 1569921353,
		"snis": null,
		"service": {
			"id": "1fca4d48-68cd-5c0f-a3f5-ca20027b1759"
		},
		"name": "dev.demo.00",
		"preserve_host": false,
		"regex_priority": 0,
		"id": "d54da5df-7dce-5143-aee0-7f500ea65430",
		"sources": null,
		"paths": ["\/api\/demo_service"],
		"https_redirect_status_code": 426,
		"methods": null,
		"hosts": null
	}]
}

So my question is: how can we properly configure a port other than 80 for our service?

I have potentially found the underlying issue.

We have deployed the kong-ingress-controller to a separate namespace, which has the effect that OpenShift's EgressNetworkPolicy does not allow communication between the namespaces.

From OpenShift 3.11 onwards it is possible to accomplish this with a NetworkPolicy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-kong-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: kong
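A sketch of how such a policy could be applied, assuming the controller runs in a namespace called `kong`, the services live in `dev`, and the manifest is saved as `allow-from-kong-namespace.yaml` (all three names are assumptions). Note that `namespaceSelector` matches on labels, not namespace names:

```shell
# The kong namespace must actually carry the name=kong label,
# otherwise the namespaceSelector never matches it.
oc label namespace kong name=kong --overwrite

# Create the policy in the namespace whose pods should accept
# traffic from the kong namespace.
oc apply -f allow-from-kong-namespace.yaml -n dev
```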

Until we get there I will do another try by installing kong-ingress-controller in the same namespace as the services it will be managing.

The service port is actually not used when using Upstreams and Routes in Kong.
The port to use is defined in the corresponding target of the service (GET /upstreams/dev.demo-service.8080/targets).
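To see this, the targets behind the upstream can be listed on the admin API (same self-signed-TLS setup as the earlier calls assumed):

```shell
# Without the service-upstream annotation, the controller registers the
# individual pod endpoints as targets, each carrying the pod's own port
# (8080 here) rather than the service port stored on the Kong service.
curl -k "https://localhost:8444/upstreams/dev.demo-service.8080/targets"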

So, even if your service does have port 80 set, the traffic will be sent to the correct port on the pods that are part of your service.

I just want to make sure that there is nothing that we (maintainers of Kong) can do about it and it is something that you will resolve internally with the admin of your OpenShift cluster. Right?

You are right; I will seek a solution together with the team responsible for our cluster and come back when I have confirmed my theory. If my theory turns out to be correct, we could maybe add some notes to the documentation so that others who hit the same issue can move forward faster.