Last modified April 25, 2018
In what data centers does Giant Swarm run clusters?
Giant Swarm runs on AWS, Microsoft Azure, and on bare metal or virtualized hardware using KVM.
How can I create backups or snapshots from volumes?
We currently don’t provide this as a managed service; you have to take care of it yourself, for example by backing up to S3 or another storage solution.
If your cluster is running on AWS using your own AWS account, you can use the EBS Snapshot function.
Can I run a database?
Yes, for most databases there are ready-built images on Docker Hub. If you can’t find one for your database of choice, you should be able to build one easily. However, persistent volumes are currently only available on AWS. With a bare metal (KVM) based cluster, volumes are not persistent and won’t survive rescheduling of their pods.
Check out our guide on using persistent volumes on AWS.
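As a rough sketch (the claim name and storage size are placeholders, not values from our guide), a PersistentVolumeClaim for an EBS-backed volume on AWS might look like this:

```yaml
# Hypothetical PersistentVolumeClaim; adjust name and size to your needs.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-database-data
spec:
  accessModes:
    - ReadWriteOnce        # EBS volumes can be mounted by a single node
  resources:
    requests:
      storage: 10Gi
```

A pod or deployment can then mount this claim as a volume to get storage that survives rescheduling.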
How do I make pods talk to each other?
For pods to be available to other pods or to the outside world, you need to expose them through a Service. See Kubernetes Fundamentals for more details.
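A minimal example (names and ports are placeholders): a Service that makes pods labeled `app: my-app` reachable inside the cluster under a stable name.

```yaml
# Hypothetical Service exposing pods with the label app=my-app.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # must match the labels on your pods
  ports:
    - port: 80           # port other pods connect to
      targetPort: 8080   # port your container listens on
```

Other pods in the cluster can then reach these pods at `my-app:80` via cluster DNS.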
How can I provide environment variables for the containers?
You can either define environment variables in your deployment or better use ConfigMaps and/or Secrets. See Kubernetes Fundamentals for more details.
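To illustrate the ConfigMap approach (the map name, key, and image are placeholder examples): define the values once in a ConfigMap, then reference them from the container spec.

```yaml
# Hypothetical ConfigMap holding configuration values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: debug
---
# A pod referencing the ConfigMap as an environment variable.
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx      # placeholder image
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
```

Secrets work the same way via `secretKeyRef`, and are the better choice for credentials.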
Can I use TLS/HTTPS/SSL?
Yes. Take a look at our advanced ingress guide.
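As a sketch of the general pattern (host, Service, and Secret names are placeholders; see the advanced ingress guide for the details of our setup): an Ingress referencing a TLS Secret terminates HTTPS for a host.

```yaml
# Hypothetical Ingress terminating TLS for example.com.
# example-tls must be a Secret of type kubernetes.io/tls
# containing the certificate and private key.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-tls
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: my-app
              servicePort: 80
```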
Can I use websockets?
Currently not, but we are working on it.
Can I use a third party private registry?
Yes, you just need to set up an ImagePullSecret for your pod.
For AWS-based clusters, using the AWS EC2 Container Registry (ECR) requires specific configuration of the worker nodes. The EC2 instance policies need specific permissions, which are listed in the Kubernetes documentation.
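For a third party registry, the usual pattern (registry URL and credentials below are placeholders) is to create a docker-registry Secret and reference it from the pod spec:

```yaml
# Created with a command like:
#   kubectl create secret docker-registry my-registry-secret \
#     --docker-server=registry.example.com \
#     --docker-username=<user> --docker-password=<password> \
#     --docker-email=<email>
#
# Hypothetical pod pulling from a private registry using that Secret.
apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  imagePullSecrets:
    - name: my-registry-secret
  containers:
    - name: app
      image: registry.example.com/my-team/app:latest  # placeholder image
```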
How can I run a container periodically?
You can use the Kubernetes CronJob resource.
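A minimal sketch (schedule, name, and image are placeholders): a CronJob that runs a container on a cron schedule.

```yaml
# Hypothetical CronJob running every night at 02:00.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-job
spec:
  schedule: "0 2 * * *"      # standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: job
              image: busybox  # placeholder image
              args: ["sh", "-c", "echo running nightly task"]
          restartPolicy: OnFailure
```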
Which IP Block do I need to reserve for my full Giant Swarm installation on AWS?
There are different answers to this, and some depend on your requirements. We advise that you think about where you want to go with your Giant Swarm Platform in light of the limits a smaller IP Block might impose. The following is one possible, well-argued approach.
First, we need IP space for our host cluster and guest clusters, meaning the underlying EC2 machines. We currently suggest taking a Class C IP Block per guest cluster, as this allows some 240 machines, or 120 if spread over two Availability Zones. Taking a full Class B IP Block would let you start well over 200 clusters, or e.g. 80 in three different Amazon Locations, but you might be fine with a smaller IP range. If you want direct IP-to-IP connections to those machines, these IPs need to be routable inside your existing environment. Traffic can normally run over NAT, but the ranges should still not be in use inside your existing network, in case systems inside the cluster need to talk to some of your existing IPs, e.g. a database.
Then, we need a Class B IP Block for the internal Calico-based network between containers. It can be reused within each cluster, as these addresses will not bleed out. They do not need to be routed, but should be reserved internally and used nowhere else, in case workloads need to talk to IPs outside the cluster.
Lastly, we need internal IP space for Kubernetes services. We advise 172.31.0.0 for it, because that is what is used as a default in many examples out there. Alternatively, you can split off a few addresses from the Class B block used for the containers.