Last modified July 17, 2025
Data Import and Export
Data import and export capabilities enable you to both send observability data from external sources into the Giant Swarm platform and access your observability data from external systems and tools. This gives you the flexibility to integrate Giant Swarm’s observability platform with your existing monitoring infrastructure, external data sources, and specialized analysis tools.
The Observability Platform API serves as the primary mechanism for both data import and export, providing secure, authenticated access to send and receive metrics, logs, and events from anywhere - not just from within Giant Swarm managed clusters.
Why import and export data
Data import and export capabilities open up powerful integration possibilities:
Data Import Benefits:
- External data sources: Send logs and events from SaaS applications, databases, or other infrastructure not managed by Giant Swarm
- Cross-platform correlation: Combine data from multiple environments and platforms in a single observability stack
- Legacy system integration: Import data from existing monitoring tools during migrations or for hybrid deployments
- Third-party services: Collect observability data from external services, APIs, or cloud providers
Data Export Benefits:
- External monitoring tools: Connect your existing Grafana instances, monitoring dashboards, or business intelligence tools
- Specialized analysis: Use advanced analytics tools, machine learning platforms, or custom applications with your observability data
- Backup and archival: Create additional copies of your observability data for compliance or long-term analysis
- Multi-cloud strategies: Centralize observability data from multiple cloud providers and platforms
How data import and export works
The Observability Platform API provides both data ingestion (sending data to the platform) and data export (retrieving data from the platform) capabilities through a unified, secure interface.
Architecture overview
The API consists of different ingress components that use:
- Shared host: Based on your Giant Swarm installation’s base domain (https://observability.<domain_name>)
- OIDC authentication: Secure access through your identity provider
- Multi-tenant access control: Tenant-scoped data access through HTTP headers
Architecture diagram (a full-size version is available in the online documentation)
Authentication and access control
All data import and export requests require:
- Valid OIDC token: Authentication through your organization’s identity provider
- Tenant specification: Include an X-Scope-OrgId HTTP header with an existing tenant name
Please note that your identity must have access to the specified tenant.
⚠️ Important: Only data from tenants defined in Grafana Organization resources can be accessed. Requests for non-existent tenants will be rejected.
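As a concrete sketch, a minimal authenticated request looks like the following. The Loki label-listing endpoint is used here only as a lightweight check, and your-tenant is a placeholder for a tenant defined in a Grafana Organization:
# Minimal authenticated request: OIDC bearer token plus tenant header.
# Uses Loki's label listing endpoint as a lightweight smoke test.
curl -H "Authorization: Bearer $OIDC_TOKEN" \
  -H "X-Scope-OrgId: your-tenant" \
  "https://observability.<domain_name>/loki/api/v1/labels"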
Available data types
The platform supports importing and exporting different types of observability data:
Logs and events ✅
Currently available for both import and export:
- Application logs: Custom logs from your workloads and external applications
- System logs: Kubernetes events and infrastructure logs
- Audit logs: Security and compliance-related events
- External service logs: Logs from SaaS applications, databases, and third-party services
Metrics 🚧
Metrics capabilities are in development:
- Export: Limited metrics export capabilities are available
- Import: Metrics import is planned for future releases
- Infrastructure metrics: CPU, memory, disk, and network metrics
- Application metrics: Custom business and performance metrics
- Platform metrics: Kubernetes and Giant Swarm platform metrics
Note: Currently, data import via the API is limited to logs and events only. Metrics import will follow in a later release. Keep an eye on our changes and releases for updates on metrics import availability.
Data import methods
Loki API ingestion
Send log data directly to the platform using the Loki push API with properly formatted log streams.
Loki push endpoint
The platform provides a Loki-compatible endpoint for log data ingestion:
- Logs ingestion: https://observability.<domain_name>/loki/api/v1/push
Example: Sending logs via Loki API
# Example Loki logs ingestion
curl -X POST \
-H "Authorization: Bearer $OIDC_TOKEN" \
-H "X-Scope-OrgId: your-tenant" \
-H "Content-Type: application/json" \
-d @logs-payload.json \
"https://observability.<domain>/loki/api/v1/push"
Payload format
Logs should be sent in Loki’s native format. Here’s an example payload structure:
{
  "streams": [
    {
      "stream": {
        "job": "my-external-service",
        "level": "info",
        "service": "auth-service"
      },
      "values": [
        ["1640995200000000000", "Application started successfully"],
        ["1640995201000000000", "User authentication completed"]
      ]
    }
  ]
}
The payload structure includes:
- streams: Array of log streams, each with labels and log entries
- stream: Object containing label key-value pairs to identify the log stream
- values: Array of timestamp-message pairs, where timestamps are in nanoseconds since Unix epoch
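If you generate payloads by hand, a small shell sketch like the following (assuming GNU coreutils date, which supports nanosecond output) can produce a correctly timestamped payload:
# Build a minimal payload with the current time in nanoseconds since the Unix epoch.
# "my-external-service" is an illustrative label value.
TIMESTAMP="$(date +%s%N)"   # GNU date; %N is not available on all platforms
cat > logs-payload.json <<EOF
{
  "streams": [
    {
      "stream": { "job": "my-external-service", "level": "info" },
      "values": [
        ["${TIMESTAMP}", "Payload generated at $(date -u +%Y-%m-%dT%H:%M:%SZ)"]
      ]
    }
  ]
}
EOF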
⚠️ Prerequisites: You must configure OIDC authentication with Giant Swarm before using the import method. Contact your account engineer for setup assistance.
Data export methods
Method 1: External Grafana integration
Connect your self-managed Grafana instance to access Giant Swarm observability data through familiar dashboards and queries.
Setting up Grafana data sources
Configure the connection URL:
- For logs (Loki): https://observability.<domain_name>
- For metrics (Mimir/Prometheus): https://observability.<domain_name>/prometheus
Replace <domain_name> with your installation’s base domain.
Set up authentication:
- Select “Forward OAuth Identity” in the Authentication section
- This passes your OIDC credentials to the API
Configure tenant access:
- Add an X-Scope-OrgID custom header
- Set the value to your target tenant (for example, giantswarm for platform logs, anonymous for platform metrics)
- For custom data, use the tenant you configured during ingestion
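If you prefer to automate these steps instead of clicking through the Grafana UI, the same settings can be applied through Grafana’s data source HTTP API. The sketch below assumes $GRAFANA_URL and $GRAFANA_TOKEN point to your own Grafana instance and a service account token allowed to create data sources; oauthPassThru corresponds to “Forward OAuth Identity”, and httpHeaderName1/httpHeaderValue1 set the custom tenant header:
# Create a Loki data source in your own Grafana via its HTTP API (sketch).
curl -X POST \
  -H "Authorization: Bearer $GRAFANA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Giant Swarm logs",
    "type": "loki",
    "access": "proxy",
    "url": "https://observability.<domain_name>",
    "jsonData": {
      "oauthPassThru": true,
      "httpHeaderName1": "X-Scope-OrgID"
    },
    "secureJsonData": {
      "httpHeaderValue1": "giantswarm"
    }
  }' \
  "$GRAFANA_URL/api/datasources"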
Tenant selection guide
Choose the appropriate tenant based on the data you want to access:
| Data Type | Tenant Value | Description |
|---|---|---|
| Platform logs | giantswarm | System and infrastructure logs |
| Platform metrics | giantswarm | System and infrastructure metrics |
| Custom logs | Your tenant | Logs from your applications |
| Custom metrics | Your tenant | Metrics from your applications |
Method 2: Programmatic API access
Access observability data programmatically through REST APIs for custom integrations and automated analysis.
API endpoints
The platform provides standard observability API endpoints:
- Loki API: Compatible with standard Loki query API for logs
- Prometheus API: Compatible with Prometheus query API for metrics (when available)
- Future protocols: OpenTelemetry Protocol (OTLP) endpoints are planned for comprehensive telemetry data exchange
Example: Querying logs programmatically
# Example LogQL query via API
curl -H "Authorization: Bearer $OIDC_TOKEN" \
-H "X-Scope-OrgId: giantswarm" \
"https://observability.<domain>/loki/api/v1/query_range?query={cluster_id=\"your-cluster\"}"
⚠️ Prerequisites: You must configure OIDC authentication with Giant Swarm before using the API. Contact your account engineer for setup assistance.
Security and compliance
Data import and export maintain the same security standards as the internal platform:
- End-to-end encryption: All data transfer uses TLS encryption
- Identity-based access: Integration with your organization’s OIDC provider
- Tenant isolation: Multi-tenant architecture ensures data separation
- Audit trails: All data access requests are logged for compliance
Getting started with data import
Prerequisites for data import
Before you can import data, ensure you have:
- OIDC provider configured: Work with Giant Swarm to set up identity provider integration
- Tenant setup: Create or identify the tenant where your external data should be stored
- Data format: Prepare your data in Loki’s native log stream format
- Network access: Ensure your external systems can reach https://observability.<domain_name> (a quick reachability check is sketched below)
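A reachability check from the sending system might look like the sketch below. Without credentials the request should be rejected, but receiving an HTTP status code at all confirms DNS, network, and TLS connectivity:
# Unauthenticated reachability check (expect a 4xx status, not a connection error).
curl -sS -o /dev/null -w 'HTTP %{http_code}\n' \
  "https://observability.<domain_name>/loki/api/v1/push"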
Setup process for data import
- Plan your data sources: Identify which external systems will send data to the platform
- Configure authentication: Work with Giant Swarm to set up OIDC integration for your data sources
- Set up tenants: Create appropriate Grafana Organizations for your external data
- Test ingestion: Send sample data to verify connectivity and formatting
- Implement production ingestion: Deploy your chosen import method at scale
- Monitor ingestion: Track data volume and verify data is being properly processed
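For the “Test ingestion” and “Monitor ingestion” steps above, a quick round trip can be sketched as follows: push one test line, then query it back. The tenant and job label are placeholders, and the timestamp assumes GNU date:
# 1. Push a single test log line (placeholder tenant and job label).
NOW="$(date +%s%N)"   # nanoseconds since the Unix epoch (GNU date)
curl -X POST \
  -H "Authorization: Bearer $OIDC_TOKEN" \
  -H "X-Scope-OrgId: your-tenant" \
  -H "Content-Type: application/json" \
  -d "{\"streams\":[{\"stream\":{\"job\":\"import-test\"},\"values\":[[\"${NOW}\",\"import test\"]]}]}" \
  "https://observability.<domain_name>/loki/api/v1/push"

# 2. Query the line back over the last five minutes.
curl -G \
  -H "Authorization: Bearer $OIDC_TOKEN" \
  -H "X-Scope-OrgId: your-tenant" \
  --data-urlencode 'query={job="import-test"}' \
  --data-urlencode 'since=5m' \
  "https://observability.<domain_name>/loki/api/v1/query_range"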
Getting started with data export
Prerequisites for data export
Before you can export data, ensure you have:
- OIDC provider configured: Work with Giant Swarm to set up identity provider integration
- Tenant access: Confirm you have access to the tenants containing your data
- Network access: Ensure your external systems can reach https://observability.<domain_name>
Setup process for data export
- Plan your integration: Identify what data you need and which external tools will consume it
- Configure authentication: Work with Giant Swarm to set up OIDC integration
- Test connectivity: Verify you can authenticate and access your tenants
- Implement export: Set up your external tools or custom integrations
- Monitor usage: Track export volume and performance impact
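For the “Test connectivity” step, one simple sketch is to request Loki’s label names once per tenant you plan to export from; the tenant list below is illustrative:
# Check that the identity behind $OIDC_TOKEN can access each tenant (illustrative list).
for TENANT in giantswarm my-team; do
  echo "tenant ${TENANT}:"
  curl -sS -o /dev/null -w '  HTTP %{http_code}\n' \
    -H "Authorization: Bearer $OIDC_TOKEN" \
    -H "X-Scope-OrgId: ${TENANT}" \
    "https://observability.<domain_name>/loki/api/v1/labels"
done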
Performance considerations
Both data import and export can impact platform resources:
Data Import Impact:
- Ingestion volume: Large volumes of imported data increase storage and processing requirements
- Data frequency: High-frequency data streams consume more ingestion capacity
- Payload size: Large individual payloads affect processing time and memory usage
- Tenant capacity: Multiple tenants importing data share platform ingestion resources
Data Export Impact:
- Query complexity: Complex queries (broad time ranges, intensive filters) consume more resources
- Export volume: Large data exports may affect platform performance
- Concurrent access: Multiple simultaneous export operations share platform capacity
Best practices
For Data Import:
- Batch processing: Send data in appropriately sized batches rather than individual events
- Rate limiting: Implement client-side rate limiting to avoid overwhelming the platform
- Data filtering: Only send relevant data; pre-filter unnecessary logs or events
- Compression: Use compressed payloads where supported to reduce transfer time
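As a sketch of batching and compression combined, the payload from the import example can be gzip-compressed before sending. Whether compressed payloads are accepted depends on the ingestion endpoint, so treat the Content-Encoding header as an assumption to verify against your installation:
# Compress a batched payload and send it in a single request (assumes the
# endpoint accepts gzip-encoded bodies; verify this for your installation).
gzip -c logs-payload.json > logs-payload.json.gz
curl -X POST \
  -H "Authorization: Bearer $OIDC_TOKEN" \
  -H "X-Scope-OrgId: your-tenant" \
  -H "Content-Type: application/json" \
  -H "Content-Encoding: gzip" \
  --data-binary @logs-payload.json.gz \
  "https://observability.<domain_name>/loki/api/v1/push"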
For Data Export:
- Optimize queries: Use specific time ranges and efficient filters
- Implement caching: Cache frequently accessed data in your external systems
- Schedule intensive exports: Run large data exports during off-peak hours
- Monitor impact: Track export performance and adjust patterns as needed
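For query optimization, keeping the time range explicit and the result set bounded goes a long way. The values below are placeholders:
# Narrow time window, label selector plus line filter, and an explicit limit.
curl -G \
  -H "Authorization: Bearer $OIDC_TOKEN" \
  -H "X-Scope-OrgId: your-tenant" \
  --data-urlencode 'query={job="my-external-service"} |= "error"' \
  --data-urlencode 'start=2025-07-17T00:00:00Z' \
  --data-urlencode 'end=2025-07-17T01:00:00Z' \
  --data-urlencode 'limit=1000' \
  "https://observability.<domain_name>/loki/api/v1/query_range"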
Next steps
- Set up data ingestion: Learn how to send data to the platform in our data ingestion guide
- Configure multi-tenancy: Understand tenant management in our multi-tenancy documentation
- Explore data: Use Grafana’s built-in tools with our data exploration guide
- Create dashboards: Build custom visualizations with our dashboard creation guide
Need help, got feedback?
We listen in your Slack support channel. You can also reach us at support@giantswarm.io. And of course, we welcome your pull requests!