Data management

Learn how to manage observability data in the Giant Swarm platform, including ingestion, exploration, transformation, and export.

Data management is the backbone of our observability platform, giving you complete control over how your observability data flows through the system. From collecting metrics and logs to exploring and analyzing them, our platform offers powerful capabilities to handle your data lifecycle efficiently.

Think of data management as your control center for observability: it's where you decide what data to collect, how to organize it, and how to make it useful for your teams.

Supported data types

Our observability platform handles two key types of observability data:

Metrics

Time-series data that tracks numerical values over time, perfect for monitoring system health, performance trends, and capacity planning. We store metrics in Mimir, a horizontally scalable, multi-tenant time series database.

Examples:

  • CPU and memory usage
  • Request rates and response times
  • Error counts and success rates
  • Custom application metrics
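
To make the last item concrete: custom application metrics are exposed in the standard Prometheus exposition format, which Mimir ingests as time series. The metric name and labels below are illustrative, not part of the platform:

```text
# HELP http_requests_total Total number of HTTP requests handled.
# TYPE http_requests_total counter
http_requests_total{method="GET",status="200"} 1027
http_requests_total{method="POST",status="500"} 3
```

Each unique combination of metric name and labels becomes its own time series, which is what makes label design important for cardinality.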

Logs

Detailed records of events and activities from your applications and infrastructure. We aggregate logs using Loki, designed for efficiency and seamless integration with our metrics stack.

Examples:

  • Application debug and error messages
  • Kubernetes events and audit logs
  • System and security logs
  • Custom structured logs
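
A custom structured log is typically a single JSON object per line, which makes it easy to parse and filter later with LogQL. The field names below are illustrative:

```json
{"ts": "2024-05-01T12:00:00Z", "level": "error", "msg": "payment failed", "order_id": "A-1042", "service": "checkout"}
```

Emitting one self-describing JSON object per event keeps logs machine-parseable without sacrificing readability.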

Data management capabilities

Our platform provides comprehensive capabilities to handle your observability data throughout its lifecycle:

Data ingestion

Flexible collection from multiple sources:

  • Metrics ingestion: Collect metrics from applications, infrastructure, and external sources using ServiceMonitors and PodMonitors
  • Log ingestion: Gather logs from applications and infrastructure using PodLogs and automatic collection
  • External data sources: Push data from external systems via our Data Import and Export API
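
As a sketch of the first item, a ServiceMonitor tells the platform which Services to scrape for metrics. The names, namespace, labels, and port below are assumptions for illustration:

```yaml
# Sketch of a ServiceMonitor that scrapes an application's /metrics endpoint.
# Name, namespace, label selector, and port name are illustrative.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app        # matches the labels on the target Service
  endpoints:
    - port: metrics      # named port on the Service
      path: /metrics
      interval: 30s      # scrape frequency
```

PodMonitors follow the same pattern but select Pods directly instead of going through a Service.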

Data exploration

Advanced querying and analysis capabilities:

  • Interactive exploration: Use Grafana’s Explore feature for ad-hoc analysis with PromQL and LogQL
  • Dashboard management: Build custom visualizations with GitOps workflows or through the Grafana UI
  • Query languages: PromQL for metrics and LogQL for logs with powerful filtering and aggregation
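
To illustrate the two query languages, here are hedged examples of the kind of ad-hoc queries you might run in Grafana's Explore view; the namespace and label values are placeholders:

```promql
# Per-pod CPU usage rate over the last 5 minutes
sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="my-namespace"}[5m]))
```

```logql
# Error lines per pod, counted per minute
sum by (pod) (count_over_time({namespace="my-namespace"} |= "error" [1m]))
```

Note the shared shape: both languages filter by labels, apply a range function, and aggregate, which makes it easy to move between metrics and logs.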

Data transformation

Transform and enrich your data during collection and visualization:

  • Recording rules: Pre-compute expensive PromQL expressions for better performance
  • Relabeling rules: Modify, filter, or enrich metrics and logs before storage
  • Data parsing: Extract structured data from logs and add contextual information
  • Grafana transformations: Client-side data processing for visualization
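
Recording rules are defined declaratively. The sketch below pre-computes a request-rate aggregation so dashboards can query the cheap recorded series instead of re-evaluating the expression; the rule, metric, and namespace names are illustrative:

```yaml
# Sketch of a recording rule; names and the expression are illustrative.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: my-app-recording-rules
  namespace: my-namespace
spec:
  groups:
    - name: my-app.rules
      rules:
        # Convention: level:metric:operation
        - record: namespace:http_requests:rate5m
          expr: sum by (namespace) (rate(http_requests_total[5m]))
```

Dashboards then query `namespace:http_requests:rate5m` directly, trading a small amount of storage for much faster panel loads.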

Data import and export

Access your data programmatically for external analysis and integration, and send external data to the platform:

  • Data Import and Export API: Secure API access for external tools and custom integrations, plus ingestion of data from external sources
  • External Grafana integration: Connect self-managed Grafana instances to Giant Swarm data
  • Programmatic access: REST APIs compatible with Loki and Prometheus standards
  • Data ingestion: Send logs from external systems using Loki’s native format
  • Future protocols: OpenTelemetry Protocol (OTLP) support is planned for standardized telemetry data exchange
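
The log-ingestion path above uses Loki's native push format: an HTTP POST of a JSON body to Loki's `/loki/api/v1/push` endpoint. A minimal payload looks like the following; the stream labels and log line are illustrative, and timestamps are Unix nanoseconds encoded as strings:

```json
{
  "streams": [
    {
      "stream": { "service": "external-batch", "level": "info" },
      "values": [
        [ "1714560000000000000", "job finished in 42s" ]
      ]
    }
  ]
}
```

Each entry in `streams` carries a label set plus a list of `[timestamp, line]` pairs, mirroring how Loki indexes logs by labels rather than by content.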