Last modified July 8, 2025
Data Ingestion
The Giant Swarm Observability Platform provides flexible, self-service data ingestion capabilities for both metrics and logs. By default, all clusters are equipped with the necessary components to collect and forward observability data to the central platform.
Architecture overview
Each Giant Swarm cluster comes pre-configured with:
- Prometheus Operator: Manages Prometheus instances and provides CRDs for metric collection configuration
- Grafana Alloy: Acts as the monitoring agent for both metrics and logs collection
- Central storage: Metrics are stored in Grafana Mimir and logs in Grafana Loki
This architecture allows you to configure data collection declaratively using Kubernetes Custom Resources, making it easy to integrate into your existing deployment workflows.
Metrics ingestion
Prerequisites
Before ingesting metrics, ensure your application is properly instrumented to expose metrics in Prometheus format.
Using ServiceMonitors
ServiceMonitors are the primary way to configure metric collection for applications that expose metrics through a Kubernetes Service.
Here’s a basic ServiceMonitor example:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    # Required for discovery by the metrics agent
    observability.giantswarm.io/tenant: my_tenant
    app.kubernetes.io/instance: my-service
  name: my-service
  namespace: monitoring
spec:
  endpoints:
    - interval: 60s   # Collection frequency
      path: /metrics  # Metrics endpoint path
      port: web       # Named port exposing metrics
  selector:           # Service label selector
    matchLabels:
      app.kubernetes.io/instance: my-service
```
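For the ServiceMonitor above to find anything, the target Service must carry the matching label and expose the metrics port under the referenced name. A minimal sketch (the port number is an assumption; use whatever port your application actually serves metrics on):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: monitoring
  labels:
    # Matched by the ServiceMonitor's spec.selector
    app.kubernetes.io/instance: my-service
spec:
  selector:
    app.kubernetes.io/instance: my-service
  ports:
    - name: web        # Port name referenced by the ServiceMonitor endpoint
      port: 8080
      targetPort: 8080
```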
Using PodMonitors
PodMonitors are useful for collecting metrics directly from pods when a Service isn’t necessary or doesn’t exist.
Use PodMonitors when:
- Your application doesn’t require a Service for its primary function
- You need to collect metrics from specific pod instances
- You want more granular control over pod selection
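As a sketch of what this looks like, here is a PodMonitor equivalent of the ServiceMonitor above; it selects pods directly via `spec.selector` and uses `podMetricsEndpoints` instead of `endpoints` (the port name `metrics` is an assumption about your pod spec):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    # Required for discovery by the metrics agent
    observability.giantswarm.io/tenant: my_tenant
  name: my-app
  namespace: monitoring
spec:
  podMetricsEndpoints:
    - interval: 60s   # Collection frequency
      path: /metrics  # Metrics endpoint path
      port: metrics   # Named container port exposing metrics
  selector:           # Pod label selector
    matchLabels:
      app.kubernetes.io/instance: my-app
```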
Key requirements
- Tenant labeling: All ServiceMonitors and PodMonitors must include the `observability.giantswarm.io/tenant` label
- Tenant existence: The specified tenant must exist in a Grafana Organization
- Resource considerations: Monitor resource usage in the ServiceMonitors Overview dashboard
For detailed configuration options and advanced use cases, refer to the Prometheus Operator API documentation.
Log ingestion
Overview
Starting from CAPA v29.2.0 and CAPZ v29.1.0, clusters automatically collect system logs and forward them to the central Loki instance. You can also configure collection of application logs using two approaches:
Method 1: Pod labels (Recommended)
The simplest way to enable log collection is by adding a tenant label to your pods:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: example-namespace
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        # Enable automatic log collection
        observability.giantswarm.io/tenant: my_team
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
```
Method 2: PodLogs resources
PodLogs provide more flexibility for advanced log collection scenarios:
```yaml
apiVersion: monitoring.grafana.com/v1alpha2
kind: PodLogs
metadata:
  name: example-podlog
  namespace: example-namespace
spec:
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: example-namespace
  relabelings:
    # Configure tenant for data routing
    - action: replace
      replacement: myteam
      targetLabel: giantswarm_observability_tenant
  selector:
    matchLabels:
      app: nginx
```
Use PodLogs when you need to:
- Filter pods using complex label selectors
- Apply custom relabeling rules
- Collect from multiple namespaces
- Transform log metadata before ingestion
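For instance, a single PodLogs resource can collect logs from every namespace carrying a shared label instead of naming one namespace explicitly. A sketch, where the `team: my-team` namespace label and the `logging: enabled` pod label are hypothetical conventions you would define yourself:

```yaml
apiVersion: monitoring.grafana.com/v1alpha2
kind: PodLogs
metadata:
  name: team-wide-logs
  namespace: example-namespace
spec:
  namespaceSelector:
    matchLabels:
      team: my-team          # Hypothetical label applied to several namespaces
  relabelings:
    # Route all matched logs to the team's tenant
    - action: replace
      replacement: my_team
      targetLabel: giantswarm_observability_tenant
  selector:
    matchLabels:
      logging: enabled       # Hypothetical opt-in pod label
```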
Default log collection
System logs are automatically collected from these namespaces:
- kube-system
- giantswarm
These are managed through predefined PodLogs resources and shouldn’t be modified.
Multi-tenancy considerations
Both metrics and logs use tenant-based routing to ensure data isolation:
- Metrics: Use the `observability.giantswarm.io/tenant` label on ServiceMonitors/PodMonitors
- Logs: Use the `observability.giantswarm.io/tenant` pod label, or the `giantswarm_observability_tenant` relabeling in PodLogs
Important: Data sent to non-existent tenants will be dropped. Ensure your tenant exists in a Grafana Organization before configuring data collection.
Performance and cost considerations
Data ingestion affects platform resource consumption:
- Metrics: More metrics increase Mimir resource usage
- Logs: More logs increase Loki resource usage and Kubernetes API server load
- Network: Log tailing goes through the Kubernetes API, which adds network traffic
Monitor your usage through:
- ServiceMonitors Overview dashboard for metrics
- Kubernetes / Compute Resources / Pod dashboard for API server impact
Next steps
- Learn more about data exploration to query your ingested data
- Set up multi-tenancy for your teams
- Explore dashboard creation to visualize your data
Need help, got feedback?
We listen to your Slack support channel. You can also reach us at support@giantswarm.io. And of course, we welcome your pull requests!