Kong-1.1 OSS - services and routes are getting deleted automatically

Hi Team,

I created a Service and a Route using the Kong Admin API. After about six minutes, the service and route get deleted automatically. Output below for reference.

    k get all -n kong
    NAME                                          READY   STATUS      RESTARTS   AGE
    pod/kong-7fb986b69c-8pz42                     1/1     Running     0          16m
    pod/kong-ingress-controller-6cb449464-qbftw   2/2     Running     2          16m
    pod/kong-migrations-b66v2                     0/1     Completed   0          16m
    pod/postgres-0                                1/1     Running     0          16m

    NAME                              TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)                      AGE
    service/kong-ingress-controller   LoadBalancer   172.18.193.27    a51777d6a96c211e99e9e06c4a33806b-74.eu-west-1.elb.amazonaws.com   80:31978/TCP                 16m
    service/kong-proxy                LoadBalancer   172.18.195.108   a519c2c3896c211e99e9e06c4a338060.eu-west-1.elb.amazonaws.com   80:32077/TCP,443:31620/TCP   16m
    service/postgres                  ClusterIP      172.18.195.202   <none>                                                                   5432/TCP                     16m

    NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kong                      1/1     1            1           16m
    deployment.apps/kong-ingress-controller   1/1     1            1           16m

    NAME                                                DESIRED   CURRENT   READY   AGE
    replicaset.apps/kong-7fb986b69c                     1         1         1       16m
    replicaset.apps/kong-ingress-controller-6cb449464   1         1         1       16m

    NAME                        READY   AGE
    statefulset.apps/postgres   1/1     16m

    NAME                                                              REFERENCE                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
    horizontalpodautoscaler.autoscaling/kong-ingress-controller-hpa   Deployment/kong-ingress-controller   0%/60%    1         10        1          16m
    horizontalpodautoscaler.autoscaling/kong-proxy-hpa                Deployment/kong                      0%/60%    1         10        1          16m

    NAME                        COMPLETIONS   DURATION   AGE
    job.batch/kong-migrations   1/1           26s        16m

    -------------------------------------------------------

    Service Creation:-

    curl -i -X POST --url http://a51777d6a96c211e99e9e4a33806b-742738932.eu-west-1.elb.amazonaws.com/services/ --data 'name=kong-perfapi.coffee-svc.443' --data 'host=coffee-svc.kong-perfapi.svc' --data 'path=/coffee' --data 'port=443'
    HTTP/1.1 201 Created
    Date: Mon, 24 Jun 2019 21:00:05 GMT
    Content-Type: application/json; charset=utf-8
    Connection: keep-alive
    Access-Control-Allow-Origin: *
    Server: kong/1.1.2
    Content-Length: 305

    {"host":"coffee-svc.kong-perfapi.svc","created_at":1561410005,"connect_timeout":60000,"id":"c32713f2-cbec-40c2-aa95-3f672e3a78c9","protocol":"http","name":"kong-perfapi.coffee-svc.443","read_timeout":60000,"port":443,"path":"\/coffee","updated_at":1561410005,"retries":5,"write_timeout":60000,"tags":null}%
    -----------------------------------------------------

    Route Creation:- 

    curl -X POST http://a51777d6a96c211e99e93806b-742738932.eu-west-1.elb.amazonaws.com/services/kong-perfapi.coffee-svc.443/routes --data 'hosts[]=coffee.api.k8se.de' --data 'name=kong-perfapi.coffee-ingress.00' --data 'paths=/kong-perfapi'

    {"updated_at":1561410058,"created_at":1561410058,"strip_path":true,"snis":null,"hosts":["coffee.api.k8se.de"],"name":"kong-perfapi.coffee-ingress.00","methods":null,"sources":null,"preserve_host":false,"regex_priority":0,"service":{"id":"c32713f2-cbec-40c2-aa95-3f672e3a78c9"},"paths":["\/kong-perfapi"],"destinations":null,"id":"98faa334-b9f1-4b13-8243-ec6cb3f647c7","protocols":["http","https"],"tags":null}%
    -----------------------------------------------------

    Time of creation:- Mon Jun 24 22:59:53 CEST 2019

    Time of automatic deletion of service and routes:- Mon Jun 24 23:06:58 CEST 2019

    -----------------------------------------------------
    Every 2.0s: curl -X GET http://a51777d6a96c211e99e...  : Mon Jun 24 23:06:58 2019

    {"next":null,"data":[]}
    curl -X GET http://a51777d6a96c211e9806b-742738932.eu-wt-1.elb.amazonaws.com/services
    {"next":null,"data":[]}%
    ----
    curl -X GET http://a51777d6a96c206c4a33806b-742738932.eu-wt-1.elb.amazonaws.com/routes
    {"next":null,"data":[]}%

As noted in the README of the Kubernetes Ingress Controller:

Kong Ingress Controller takes care of managing all entities in Kong’s datastore as per the Ingress and custom resource definitions in k8s. Any entity created using Kong’s Admin API will be deleted by the Ingress Controller.

Please use either Ingress resources or Kong's Admin API to manage your Kong configuration, but not both at the same time; mixing them is a recipe for a mess.

If you would like to configure Kong using the Admin API, please do not deploy Kong with the Ingress Controller.
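
For reference, routing like the one created above through the Admin API would instead be declared as a Kubernetes Ingress resource and left to the controller to sync. This is a rough sketch only, reusing the host, path, namespace, and Service name from the curl calls above; the Ingress name and API version are assumptions and may need adjusting for your cluster and controller version:

    kubectl apply -f - <<'EOF'
    apiVersion: extensions/v1beta1   # Ingress API group common on clusters of this era
    kind: Ingress
    metadata:
      name: coffee-ingress           # illustrative name
      namespace: kong-perfapi
    spec:
      rules:
      - host: coffee.api.k8se.de     # same host as the Admin API route
        http:
          paths:
          - path: /kong-perfapi      # same path as the Admin API route
            backend:
              serviceName: coffee-svc   # Kubernetes Service behind the Kong service
              servicePort: 443
    EOF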

Hi Harry,

Thanks for the reply. It was my bad; I made a mistake.

I was using kong-ingress-controller:0.4.0 and observed the issue above. Today I tried kong-ingress-controller:0.5.0-rc0, and everything is working as expected. Below is my scenario.

-----------------------------------------
The plan was to create the configuration (services/routes/plugins) using the Ingress Controller (i.e. manifests/YAMLs), while data (things like consumers) would be created with the Admin API; see the sketch after this block.
-----------------------------------------
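
For the Admin API side, a minimal sketch of what creating a consumer looks like (the admin hostname and username below are placeholders, not values from this cluster):

    curl -i -X POST --url 'http://<kong-admin-host>/consumers/' \
      --data 'username=example-consumer'    # placeholder consumer name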

Regards,
Pradeep.