Last modified December 13, 2022

Platform Security


Giant Swarm integrates a collection of open-source security tools which extend the basic security considerations outlined in our RBAC and PSP tutorial, Network Policy tutorial, and security guide, helping you gain deeper observability and control over your developer platform.

The stack consists of multiple distinct components which are independently installable and configurable based on the user’s security requirements.

| Capability | Solution | Status | App(s) |
|---|---|---|---|
| Image Scanning | Trivy + Starboard | In Catalog | Trivy / Starboard |
| Policy Enforcement | Kyverno | In Catalog | Kyverno |
| CIS Benchmarks | Starboard | In Catalog | Starboard |
| Image Provenance | Cosign + Fulcio | Planned | |
| Cloud Security Posture | Cloud Custodian | Planned | |
| Runtime Anomalies | Falco | In Catalog | Falco |
| In-Cluster Registry | Harbor | In Catalog | Harbor |
| Log Alerting | Supported by our managed Observability Stack offering. | In Catalog | Loki |
| Log Shipping + Storage | Supported by our managed EFK Stack offering. | In Catalog | EFK Stack |
| Advanced Network Capabilities* | Supported by our managed Connectivity Stack offering. | In Catalog | Linkerd / Linkerd CNI / Linkerd Visualization |

* mTLS, DNS-based egress policies, and other advanced network capabilities are available through a separately-managed service mesh.

Components with a state of “In Catalog” are available for installation via our App Platform. We are working to improve centralized installation and configuration across components.

A high-level overview of each component is included below. Please refer to the GitHub repository for each individual app for more detailed technical information.

Trivy

Trivy is a vulnerability scanner created by Aqua Security. It can be run as a command-line tool (for example, in a CI/CD pipeline) or as a Kubernetes operator, which we deploy from our Trivy App. When running as an operator, Trivy can be used as the scanning backend for a Harbor container registry as well as the scanner used by Starboard.

Within our managed security stack, Trivy is deployed in-cluster as the backend for Starboard and Harbor (if in use). We also recommend customers enable vulnerability scanning in their CI/CD pipelines and include support for that integration as part of our managed offering.
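As a sketch of what a pipeline integration might look like, the following hypothetical GitHub Actions job scans a freshly built image with Trivy and fails the build on Critical or High findings. The workflow name, image reference, and severity threshold are placeholders to adapt to your own pipeline.

```yaml
# Hypothetical CI job using the official aquasecurity/trivy-action.
# The image reference and thresholds below are illustrative.
name: image-scan
on: [push]
jobs:
  trivy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Scan container image with Trivy
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: registry.example.com/my-app:${{ github.sha }}
          severity: CRITICAL,HIGH
          exit-code: "1"   # non-zero exit fails the pipeline on findings
```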

Starboard

Starboard is another open-source project developed by Aqua Security. Starboard runs as an operator (deployed from our Starboard App) and performs several ongoing functions in the cluster, including scanning Pods for vulnerabilities, running Kubernetes CIS benchmarks with kube-bench, and auditing Kubernetes resources against best practices and other policies using Polaris. These functions can all be configured independently. Though not required, Starboard can use an existing Trivy server running in the cluster as its vulnerability scanner. Starboard stores the results of its scans inside the cluster as Kubernetes custom resources.

In our stack, we deploy Starboard alongside Trivy in the cluster to initiate vulnerability scans for running Pods and to perform CIS benchmarks. Users may also choose to enable Polaris configuration scans in addition to our recommended Kyverno policy enforcement. To support monitoring and better observability of the scan results, we have also created a custom Prometheus exporter which reads the VulnerabilityReport and CISKubeBenchReport custom resources created by Starboard and exposes the data as Prometheus metrics.

Working with Starboard Scan Results

The authoritative source of truth for Starboard scans is the set of in-cluster custom resources. However, scan results, especially VulnerabilityReports, can be lengthy and difficult to read.

There are several available options for viewing and distilling the results of Starboard scans as well as UI integrations to make them easier to work with.

Scan data can be accessed:

Using kubectl

Authoritative scan results are stored in the cluster and can be retrieved using kubectl:

$ kubectl get vulnerabilityreport -n argocd
NAME                                       REPOSITORY      TAG            SCANNER   AGE
replicaset-argocd-redis-74d8c6db65-redis   library/redis   6.2.4-alpine   Trivy     23d
replicaset-argocd-redis-759b6bc7f4-redis   library/redis   6.2.1-alpine   Trivy     23d

To see detailed vulnerability information, describe the resource or use get -o yaml, for example:

$ kubectl describe vulnerabilityreport -n argocd replicaset-argocd-redis-74d8c6db65-redis
Name:         replicaset-argocd-redis-74d8c6db65-redis
Namespace:    argocd
Labels:       pod-spec-hash=94c6f9fbb
Annotations:  <none>
API Version:  aquasecurity.github.io/v1alpha1
Kind:         VulnerabilityReport
Report:
  Artifact:
    Repository:  library/redis
    Tag:         6.2.4-alpine
  Scanner:
    Name:     Trivy
    Vendor:   Aqua Security
    Version:  0.19.2
  Summary:
    Critical Count:  3
    High Count:      2
    Low Count:       0
    Medium Count:    0
    Unknown Count:   0
  Update Timestamp:  2021-10-30T03:09:37Z

Kubernetes CIS benchmark reports can similarly be retrieved with kubectl get ciskubebenchreport -A and inspected with kubectl describe.
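Because the reports are ordinary Kubernetes resources, their JSON output can also be processed programmatically. As a minimal sketch (the summarize_reports helper and the sample data are illustrative, not part of the stack), the severity summaries from kubectl get vulnerabilityreport -A -o json could be aggregated like this:

```python
def summarize_reports(report_list: dict) -> dict:
    """Aggregate severity counts across VulnerabilityReport items.

    Expects the list structure returned by:
      kubectl get vulnerabilityreport -A -o json
    """
    totals = {"criticalCount": 0, "highCount": 0, "mediumCount": 0, "lowCount": 0}
    for item in report_list.get("items", []):
        # Each VulnerabilityReport carries its counts under .report.summary
        summary = item.get("report", {}).get("summary", {})
        for key in totals:
            totals[key] += summary.get(key, 0)
    return totals

# Hand-written fragment mimicking the kubectl JSON output:
sample = {
    "items": [
        {"report": {"summary": {"criticalCount": 3, "highCount": 2}}},
        {"report": {"summary": {"highCount": 1, "lowCount": 4}}},
    ]
}
print(summarize_reports(sample))
# → {'criticalCount': 3, 'highCount': 3, 'mediumCount': 0, 'lowCount': 4}
```

The same pattern works for CISKubeBenchReport resources; only the path to the summary fields changes.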

Reporting and Monitoring

For convenience, data from the in-cluster CRs is exported to Prometheus, where it can be queried, used for alerting, or included in dashboards.

Diagram illustrating the flow of data from Starboard's scan through an exporter to Prometheus and Grafana

Data flow:

  1. Starboard scans a Pod.
  2. Starboard creates a VulnerabilityReport CR.
  3. starboard-exporter reads the VulnerabilityReport CR and exposes metrics.
  4. Prometheus scrapes the metrics from starboard-exporter. Data can then be queried in Prometheus or seen in the Grafana vulnerability dashboard.
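Once the metrics are in Prometheus, they can drive alerting rules. The fragment below is an illustrative sketch only: the metric name follows starboard-exporter conventions but may differ between exporter versions, so check the metrics your deployment actually exposes before using it.

```yaml
# Illustrative Prometheus alerting rule; verify the exact metric
# name and labels against your starboard-exporter version.
groups:
  - name: vulnerability-reports
    rules:
      - alert: CriticalVulnerabilitiesFound
        expr: |
          sum(
            starboard_exporter_vulnerabilityreport_image_vulnerability_severity_count{severity="Critical"}
          ) > 0
        for: 15m
        labels:
          severity: warning
        annotations:
          summary: "At least one running image has Critical vulnerabilities"
```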

Kyverno

Kyverno is a CNCF project originally created by Nirmata which acts as an admission controller and enforces policies for Kubernetes resources. It loads policies from Kubernetes custom resources and similarly stores reports about policy violations as additional resources within the cluster. It can be used to enforce a wide range of policies including Kubernetes best practices and Pod Security Standards (PSS), as well as custom user-defined policies.

As part of the security offering, Kyverno provides enforcement for PSS policies and image signing as well as custom policies provided by customers using the stack.
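To give a sense of what such policies look like, here is a sketch of a Kyverno ClusterPolicy implementing one Pod Security Standards (baseline) check, disallowing privileged containers. The policy name and message are illustrative; Kyverno's documentation ships a full set of PSS policies.

```yaml
# Illustrative Kyverno ClusterPolicy: reject Pods that request
# privileged mode. The =() anchors make the fields optional, so
# Pods that omit securityContext entirely still pass.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: enforce
  rules:
    - name: privileged-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Privileged containers are not allowed."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```

With validationFailureAction set to enforce, a violating Pod is rejected at admission time; set it to audit to only record violations in the PolicyReports described below.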

Policy violations are stored in PolicyReport CRs and exposed as Prometheus metrics via policy-reporter. You can retrieve the reports via kubectl:

$ kubectl get polr -A
NAMESPACE          NAME                       PASS   FAIL   WARN   ERROR   SKIP   AGE
argocd             polr-ns-argocd             35     1      0      0       0      14d
default            polr-ns-default            9      0      0      0       0      14d
flux-system        polr-ns-flux-system        54     0      0      0       0      14d
hello-world        polr-ns-hello-world        0      0      0      0       0      6d23h
kube-system        polr-ns-kube-system        0      0      0      0       0      14d
monitoring         polr-ns-monitoring         185    5      0      0       0      14d
replex-k8s-agent   polr-ns-replex-k8s-agent   9      0      0      0       0      14d

Run kubectl get polr <name> -o yaml on a report to see detailed information about the policies in place, as well as any recorded violations. Reports can also be visualized through the included web UI by port-forwarding it to your local machine:

$ kubectl port-forward service/kyverno-ui 8080:8080 -n <kyverno namespace>
Forwarding from -> 8080
Forwarding from [::1]:8080 -> 8080

Open your browser to localhost:8080 to view the reports.
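PolicyReports can also be filtered programmatically. As a minimal sketch (the failed_checks helper and the sample data are illustrative), the failed results in a report fetched with kubectl get polr <name> -o json could be extracted like this:

```python
def failed_checks(policy_report: dict) -> list:
    """Return (policy, rule, message) tuples for failed results.

    Expects a single PolicyReport as returned by:
      kubectl get polr <name> -o json
    """
    return [
        (r.get("policy"), r.get("rule"), r.get("message", ""))
        for r in policy_report.get("results", [])
        if r.get("result") == "fail"
    ]

# Hand-written fragment mimicking a PolicyReport:
report = {
    "results": [
        {"policy": "disallow-privileged-containers",
         "rule": "privileged-containers",
         "result": "fail",
         "message": "Privileged containers are not allowed."},
        {"policy": "require-run-as-nonroot",
         "rule": "run-as-non-root",
         "result": "pass"},
    ]
}
print(failed_checks(report))
# → [('disallow-privileged-containers', 'privileged-containers', 'Privileged containers are not allowed.')]
```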

Falco

Falco is a CNCF project originally created by Sysdig which enables rule-based detection of runtime anomalies in a container or on a host Node. Falco watches Linux system calls (syscalls) for events matching a predefined set of suspicious or malicious activities, for example the reading of a sensitive file or the execution of a shell inside a container.

We include Falco in our managed security stack as a detection mechanism for malicious activity once a Pod has already started. It is deployed from our Falco App, which includes helper components for exposing Prometheus metrics and for forwarding events to other channels, such as Elasticsearch, message queues, and alerting backends.
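To illustrate the shape of a Falco rule, here is a sketch of a custom rule alerting on a shell spawned inside a container. It assumes the spawned_process and container macros from Falco's default rule set; Falco already ships a comparable built-in rule, so treat this as a syntax example rather than something to deploy as-is.

```yaml
# Illustrative custom Falco rule; assumes the default macro set.
- rule: Shell Spawned in Container
  desc: Detect an interactive shell process started inside a container
  condition: >
    spawned_process and container and proc.name in (bash, sh, zsh)
  output: >
    Shell spawned in container
    (user=%user.name container=%container.name command=%proc.cmdline)
  priority: WARNING
  tags: [container, shell]
```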
