Last modified January 7, 2022
Giant Swarm offers a collection of open-source security tools which extend the basic security considerations outlined in our RBAC and PSP tutorial, Network Policy tutorial, and security guide and help you gain deeper observability and control over your clusters.
The stack consists of multiple distinct components which are independently installable and configurable based on the user’s security requirements.
| Capability | Component | Status | App(s) |
|---|---|---|---|
| Image Scanning | Trivy + Starboard | In Catalog | Trivy / Starboard |
| Policy Enforcement | Kyverno | In Catalog | Kyverno |
| CIS Benchmarks | Starboard | In Catalog | Starboard |
| Cloud Security Posture | Cloud Custodian | Planned | |
| Runtime Anomalies | Falco | In Catalog | Falco |
| In-Cluster Registry | Harbor | In Catalog | Harbor |
| Log Alerting | Supported by our managed Observability Stack offering. | In Catalog | Loki |
| Log Shipping + Storage | Supported by our managed EFK Stack offering. | In Catalog | EFK Stack |
| Advanced Network Capabilities* | Supported by our managed Connectivity Stack offering. | In Catalog | Linkerd / Linkerd CNI / Linkerd Visualization |
* mTLS, DNS-based egress policies, and other advanced network capabilities are available through a separately-managed service mesh.
Components with a state of “In Catalog” are available for installation via our App Platform. We are working to improve centralized installation and configuration across components.
A high-level overview of each component is included below. Please refer to the GitHub repository for each individual app for more detailed technical information.
Trivy is a vulnerability scanner created by Aqua Security. It can be run as a command-line tool (for example, in a CI/CD pipeline) or as a Kubernetes operator, which we deploy from our Trivy App. When running as an operator, Trivy can be used as the scanning backend for a Harbor container registry as well as the scanner used by Starboard.
Within our managed security stack, Trivy is deployed in-cluster as the backend for Starboard and Harbor (if in use). We also recommend customers enable vulnerability scanning in their CI/CD pipelines and include support for that integration as part of our managed offering.
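As a sketch of the CI/CD integration, a pipeline step can run the Trivy CLI against a freshly built image and fail the build on serious findings. The image name below is a placeholder, and the severity threshold is an illustrative choice, not part of the managed offering:

```shell
# Scan a built image and fail the pipeline stage if HIGH or CRITICAL
# vulnerabilities are found (--exit-code 1 makes the scan a gating step).
# The registry and image name are placeholders.
trivy image \
  --severity HIGH,CRITICAL \
  --exit-code 1 \
  registry.example.com/my-team/my-app:1.0.0
```

Pinning the Trivy version in the pipeline keeps scan results reproducible across runs.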
Starboard is another open-source project developed by Aqua Security. Starboard runs as an operator (deployed from our Starboard App) and performs several ongoing functions in the cluster, including scanning Pods for vulnerabilities, running Kubernetes CIS benchmarks with kube-bench, and auditing Kubernetes resources against best practices and other policies using Polaris. These functions can all be configured independently. Though not required, Starboard can use an existing Trivy server running in the cluster as its vulnerability scanner. Starboard stores the results of its scans inside the cluster as Kubernetes custom resources.
In our stack, we deploy Starboard alongside Trivy in the cluster to initiate vulnerability scans for running Pods and to perform CIS benchmarks. Users may also choose to enable Polaris configuration scans in addition to our recommended Kyverno policy enforcement. To support monitoring and better observability of the scan results, we have also created a custom Prometheus exporter which reads the CISKubeBenchReport custom resources created by Starboard and exposes the data as Prometheus metrics.
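The exporter's metrics endpoint can also be inspected directly. The service name, namespace, and port below are assumptions based on a typical deployment, not guaranteed values:

```shell
# Port-forward the exporter service locally, then fetch its Prometheus
# metrics endpoint. Service name, namespace, and port are illustrative.
kubectl port-forward service/starboard-exporter 8080:8080 -n monitoring &
curl -s localhost:8080/metrics | grep -i bench
```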
Reporting and Monitoring
The authoritative source of truth for Starboard scans is the set of in-cluster custom resources. For convenience, the data from these CRs is exported to Prometheus, where it can be queried or included in dashboards.
- Starboard scans a Pod.
- Starboard creates a VulnerabilityReport CR, and starboard-exporter exposes metrics for it.
- Prometheus scrapes the metrics from starboard-exporter. Data can then be queried in Prometheus or seen in the Grafana vulnerability dashboard.
Scan results are stored in the cluster and can be retrieved using `kubectl get vulnerabilityreport`:

```
$ kubectl get vulnerabilityreport -n argocd
NAME                                       REPOSITORY      TAG            SCANNER   AGE
...
replicaset-argocd-redis-74d8c6db65-redis   library/redis   6.2.4-alpine   Trivy     23d
replicaset-argocd-redis-759b6bc7f4-redis   library/redis   6.2.1-alpine   Trivy     23d
...
```
To see detailed vulnerability information, describe the resource or use `get -o yaml`, for example:

```
$ kubectl describe vulnerabilityreport -n argocd replicaset-argocd-redis-74d8c6db65-redis
Name:         replicaset-argocd-redis-74d8c6db65-redis
Namespace:    argocd
Labels:       pod-spec-hash=94c6f9fbb
              starboard.container.name=redis
              starboard.resource.kind=ReplicaSet
              starboard.resource.name=argocd-redis-74d8c6db65
              starboard.resource.namespace=argocd
Annotations:  <none>
API Version:  aquasecurity.github.io/v1alpha1
Kind:         VulnerabilityReport
Report:
  Artifact:
    Repository:  library/redis
    Tag:         6.2.4-alpine
  Registry:
    Server:  index.docker.io
  Scanner:
    Name:     Trivy
    Vendor:   Aqua Security
    Version:  0.19.2
  Summary:
    Critical Count:  3
    High Count:      2
    Low Count:       0
    Medium Count:    0
    Unknown Count:   0
  Update Timestamp:  2021-10-30T03:09:37Z
  Vulnerabilities:
    ...
```
Kubernetes CIS benchmark reports can similarly be retrieved with `kubectl get ciskubebenchreport -A` and inspected with describe or `get -o yaml`.
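For example, the full report for a single Node can be pulled as YAML. CIS benchmark reports are typically named after the Node they describe; the node name here is a placeholder:

```shell
# Retrieve the complete kube-bench results for one Node.
# "my-worker-node" is a placeholder for an actual Node name from
# "kubectl get ciskubebenchreport -A".
kubectl get ciskubebenchreport my-worker-node -o yaml
```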
Kyverno is a CNCF project originally created by Nirmata which acts as an admission controller and enforces policies for Kubernetes resources. It loads policies from Kubernetes custom resources and similarly stores reports about policy violations as additional resources within the cluster. It can be used to enforce a wide range of policies including Kubernetes best practices and Pod Security Standards (PSS), as well as custom user-defined policies.
As part of the security offering, Kyverno provides enforcement for PSS policies and image signing as well as custom policies provided by customers using the stack.
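As a minimal sketch of a custom user-defined policy, the following applies a hypothetical Kyverno ClusterPolicy that audits Pods missing an `owner` label. The policy name and label key are illustrative choices, not part of the stack's defaults:

```shell
# Apply a hypothetical audit-mode policy. With validationFailureAction set
# to "audit", violations are recorded in PolicyReports without blocking
# admission; switching to "enforce" would reject non-compliant Pods.
kubectl apply -f - <<'EOF'
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  validationFailureAction: audit
  rules:
    - name: check-owner-label
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "Pods must carry an 'owner' label."
        pattern:
          metadata:
            labels:
              owner: "?*"
EOF
```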
Policy violations are stored in PolicyReport CRs and exposed as Prometheus metrics via policy-reporter. You can retrieve the reports via `kubectl get polr -A`:

```
$ kubectl get polr -A
NAMESPACE          NAME                       PASS   FAIL   WARN   ERROR   SKIP   AGE
argocd             polr-ns-argocd             35     1      0      0       0      14d
default            polr-ns-default            9      0      0      0       0      14d
flux-system        polr-ns-flux-system        54     0      0      0       0      14d
hello-world        polr-ns-hello-world        0      0      0      0       0      6d23h
kube-system        polr-ns-kube-system        0      0      0      0       0      14d
monitoring         polr-ns-monitoring         185    5      0      0       0      14d
replex-k8s-agent   polr-ns-replex-k8s-agent   9      0      0      0       0      14d
```

Use `kubectl get -o yaml` on a report to see detailed information about the policies in place as well as any recorded violations. Reports can also be visualized through the included web UI by port-forwarding it to your local machine:
```
$ kubectl port-forward service/kyverno-ui 8080:8080 -n <kyverno namespace>
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
...
```
Open your browser to `localhost:8080` to view the reports.
Falco is a CNCF project originally created by Sysdig which enables rule-based detection of runtime anomalies in a container or on a host Node. Falco watches Linux system calls (syscalls) for events matching a predefined set of suspicious or malicious activities, for example the reading of a sensitive file or the execution of a shell inside a container.
We include Falco in our managed security stack as a detection mechanism for malicious activity once a Pod has already started. It is deployed from our Falco App, which includes helper components for exposing Prometheus metrics and forwarding events to other channels, such as Elasticsearch, message queues, and alerting backends.
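As an illustration, one of Falco's default rules detects an interactive shell spawned in a running container. The Pod name, namespace, and label selector below are placeholders for an actual deployment:

```shell
# Opening a shell in a running container matches Falco's default
# "Terminal shell in container" rule. Pod name/namespace are placeholders.
kubectl exec -it my-app-pod -n default -- /bin/sh

# After exiting, check Falco's logs for the resulting alert
# (the namespace and label selector are illustrative).
kubectl logs -n falco -l app=falco | grep -i shell
```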