Data Management
Learn how to manage observability data, including ingestion, exploration, transformation, and export, in the Giant Swarm platform.
Data management is the backbone of our observability platform, giving you complete control over how your observability data flows through the system. From collecting metrics and logs to exploring and analyzing them, the platform supports the full lifecycle of your data.
Think of data management as your control center for observability: it’s where you decide what data to collect, how to organize it, and how to make it useful for your teams.
Supported data types
Our platform handles two key types of observability data:
Metrics
Time-series data that tracks numerical values over time, perfect for monitoring system health, performance trends, and capacity planning. We store metrics in Mimir, a horizontally scalable, multi-tenant time series database.
Examples:
- CPU and memory usage
- Request rates and response times
- Error counts and success rates
- Custom application metrics
Logs
Detailed records of events and activities from your applications and infrastructure. We aggregate logs using Loki, which indexes only label metadata to keep storage efficient and integrates seamlessly with our metrics stack.
Examples:
- Application debug and error messages
- Kubernetes events and audit logs
- System and security logs
- Custom structured logs
Data management capabilities
Our platform provides comprehensive capabilities to handle your observability data throughout its lifecycle:
Data ingestion
Flexible collection from multiple sources:
- Metrics ingestion: Collect metrics from applications, infrastructure, and external sources using ServiceMonitors and PodMonitors (see the sketch after this list)
- Log ingestion: Gather logs from applications and infrastructure using PodLogs and automatic collection
- External data sources: Push data from external systems via our Observability Platform API
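To make the metrics ingestion path concrete, here is a minimal ServiceMonitor sketch that tells the platform to scrape an application's metrics endpoint. The name, namespace, labels, and port are hypothetical placeholders; adjust them to match your own Service.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: example-app                          # hypothetical application name
  namespace: example-app                     # hypothetical namespace
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: example-app    # must match the labels on your Service
  endpoints:
    - port: metrics                          # name of the Service port exposing metrics
      path: /metrics
      interval: 60s                          # scrape once per minute
```

Log ingestion with PodLogs follows a similar declarative pattern: a selector picks out the pods to collect from, and the platform ships the log lines to Loki.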
Data exploration
Advanced querying and analysis capabilities:
- Interactive exploration: Use Grafana’s Explore feature for ad-hoc analysis with PromQL and LogQL
- Dashboard management: Build custom visualizations with GitOps workflows or through the Grafana UI
- Query languages: PromQL for metrics and LogQL for logs with powerful filtering and aggregation
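As a small illustration of the two query languages, the snippets below show a PromQL aggregation and a LogQL filter you could run in Grafana's Explore view. The metric name and namespace are placeholders, not series the platform is guaranteed to expose.

```
# PromQL: per-namespace request rate over the last 5 minutes
sum(rate(http_requests_total[5m])) by (namespace)

# LogQL: log lines containing "error" from the example-app namespace
{namespace="example-app"} |= "error"
```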
Data transformation
Process and enrich your data during collection:
- Relabeling rules: Modify, filter, or enrich metrics and logs before storage (see the sketch after this list)
- Data validation: Ensure data quality and compliance with platform standards
- Parsing and enrichment: Extract structured data from logs and add contextual information
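As a sketch of what a relabeling rule looks like, the ServiceMonitor excerpt below drops a set of Go runtime metrics and copies one label into another before storage; the metric pattern and label names are illustrative, not rules the platform requires.

```yaml
spec:
  endpoints:
    - port: metrics
      metricRelabelings:
        # Drop Go garbage-collection metrics before they are stored
        - sourceLabels: [__name__]
          regex: "go_gc_.*"
          action: drop
        # Copy the "pod" label into a "workload" label for easier grouping
        - sourceLabels: [pod]
          targetLabel: workload
          action: replace
```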
Data export
Access your data programmatically:
- API access: Programmatic data ingestion and retrieval via REST APIs
- External integration: Connect to external monitoring tools and data sources
- Standard protocols: Support for OpenTelemetry Protocol (OTLP) and Prometheus remote write
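As a sketch of the remote-write path, an external Prometheus instance could push its metrics into the platform with a configuration like the following; the URL, tenant name, and credential path are placeholders rather than real platform endpoints.

```yaml
# Prometheus configuration excerpt on the external system
remote_write:
  - url: https://observability.example.com/api/v1/push    # placeholder endpoint
    basic_auth:
      username: example-tenant                             # placeholder tenant ID
      password_file: /etc/prometheus/secrets/api-token     # placeholder credential
```

Retrieval works the other way around: the same PromQL and LogQL queries used interactively in Grafana can be issued programmatically against the platform's query APIs.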