Last modified November 28, 2017
Services of type LoadBalancer and Multiple Ingress Controllers
In addition to using the default NGINX Ingress Controller, on cloud providers (currently AWS, soon also Azure), you can expose services directly outside your cluster by using Services of type LoadBalancer.
Note that this functionality cannot be used on premises.
Exposing a single Service
Setting the type field of your Service to LoadBalancer will result in your Service being exposed by a dynamically provisioned load balancer.
You can do this with any Service within your cluster, including Services that expose several ports.
The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer will be published in the Service’s
status.loadBalancer field, like the following:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: helloworld
  name: helloworld
spec:
  ports:
  - port: 8080
    targetPort: http
  selector:
    app: helloworld
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: a54cae28bd42b11e7b2c7020a3f15370-27798109.eu-central-1.elb.amazonaws.com
```
The above YAML would expose port 8080 on the provisioned ELB, forwarding traffic to the http port of our helloworld Pods.
Exposing on a non-HTTP port and protocol
You can change the port and protocol of the load balancer by changing the
targetPort field and adding a
ports.protocol field. This way you can expose TCP services directly, without having to customize the Ingress Controller.
The following example would set the ELB protocol to TCP, forwarding traffic to port 8888 on the Pods:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: helloworld
  name: helloworld
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8888
  selector:
    app: helloworld
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - hostname: a54cae28bd42b11e7b2c7020a3f15370-27798109.eu-central-1.elb.amazonaws.com
```
Customizing the External Load Balancer
As we currently support only the AWS cloud, this section focuses on the custom options you can set on the AWS Elastic Load Balancer via a Service of type
LoadBalancer. You can configure these options by adding annotations to the Service.
If you want the ELB to be available only within your VPC (this can be extended to other VPCs via VPC peering), use the following annotation:
```yaml
[...]
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
[...]
```
There are three annotations you can set to configure SSL termination.
The first one specifies the ARN of the certificate you want to use. You can either upload the certificate to IAM or create it within AWS Certificate Manager.
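As a sketch, the certificate ARN is typically set via the `service.beta.kubernetes.io/aws-load-balancer-ssl-cert` annotation from the upstream Kubernetes AWS cloud provider; the ARN below is a placeholder, not a real certificate:

```yaml
[...]
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:123456789012:certificate/12345678-1234-1234-1234-123456789012
[...]
```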
The second annotation specifies which protocol a pod speaks. For HTTPS and SSL, the ELB will expect the pod to authenticate itself over the encrypted connection.
HTTP and HTTPS will select layer 7 proxying: the ELB will terminate the connection with the user, parse headers and inject the
X-Forwarded-For header with the user’s IP address (pods will only see the IP address of the ELB at the other end of its connection) when forwarding requests.
TCP and SSL will select layer 4 proxying: the ELB will forward traffic without modifying the headers. In a mixed-use environment where some ports are secured and others are left unencrypted, the following annotations may be used:
```yaml
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443,8443"
```
In the above example, if the Service contained three ports,
80, 443, and 8443, then 443 and 8443 would use the SSL certificate, but
80 would just be proxied HTTP.
Using Multiple Ingress Controllers
By default a cluster in Giant Swarm is bootstrapped with a default Ingress Controller based on NGINX. This Ingress Controller is registered with the default
nginx Ingress Class.
You can run additional Ingress Controllers by exposing them through Services of type
LoadBalancer as explained above.
Some use cases for this might be:

- An Ingress Controller behind an internal ELB, for traffic between services within the VPC (or a group of peered VPCs)
- An Ingress Controller behind an ELB that already terminates SSL
- An Ingress Controller with different functionality or performance
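The first use case above could be sketched as a Service of type LoadBalancer carrying the internal annotation from the previous section and selecting the Pods of a second Ingress Controller. All names and ports here are illustrative, not a definitive manifest:

```yaml
apiVersion: v1
kind: Service
metadata:
  # Hypothetical name for an additional, internal-only Ingress Controller
  name: internal-ingress-controller
  annotations:
    # Provision an internal ELB, reachable only within the VPC (or peered VPCs)
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    # Assumed label on the additional Ingress Controller's Pods
    app: internal-ingress-controller
  type: LoadBalancer
```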
Note that if you are running multiple Ingress Controllers, you need to annotate each Ingress with the appropriate class.
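For example, an Ingress targeting the default NGINX Ingress Controller (class nginx) would carry the standard `kubernetes.io/ingress.class` annotation. This is a minimal sketch; the host and backend names are illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld
  annotations:
    # Claimed by the Ingress Controller registered with the nginx class
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: helloworld.example.com
    http:
      paths:
      - backend:
          serviceName: helloworld
          servicePort: 8080
```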
Not specifying the annotation will lead to multiple Ingress Controllers claiming the same Ingress. Specifying a value that does not match the class of any existing Ingress Controller will result in all Ingress Controllers ignoring the Ingress.
Further note that if you are running additional Ingress Controllers, you might need to configure them so their Ingress Class does not collide with that of our default NGINX Ingress Controller. For the community-supported NGINX Ingress Controller this is described in the official documentation.