Last modified July 17, 2025

Data Import and Export

Data import and export capabilities enable you to both send observability data from external sources into the Giant Swarm platform and access your observability data from external systems and tools. This gives you the flexibility to integrate Giant Swarm’s observability platform with your existing monitoring infrastructure, external data sources, and specialized analysis tools.

The Observability Platform API serves as the primary mechanism for both data import and export, providing secure, authenticated access to send and receive metrics, logs, and events from anywhere - not just from within Giant Swarm managed clusters.

Why import and export data

Data import and export capabilities open up powerful integration possibilities:

Data Import Benefits:

  • External data sources: Send logs and events from SaaS applications, databases, or other infrastructure not managed by Giant Swarm
  • Cross-platform correlation: Combine data from multiple environments and platforms in a single observability stack
  • Legacy system integration: Import data from existing monitoring tools during migrations or for hybrid deployments
  • Third-party services: Collect observability data from external services, APIs, or cloud providers

Data Export Benefits:

  • External monitoring tools: Connect your existing Grafana instances, monitoring dashboards, or business intelligence tools
  • Specialized analysis: Use advanced analytics tools, machine learning platforms, or custom applications with your observability data
  • Backup and archival: Create additional copies of your observability data for compliance or long-term analysis
  • Multi-cloud strategies: Centralize observability data from multiple cloud providers and platforms

How data import and export works

The Observability Platform API provides both data ingestion (sending data to the platform) and data export (retrieving data from the platform) capabilities through a unified, secure interface.

Architecture overview

The API is exposed through a set of ingress components that share:

  • Shared host: Based on your Giant Swarm installation’s base domain (https://observability.<domain_name>)
  • OIDC authentication: Secure access through your identity provider
  • Multi-tenant access control: Tenant-scoped data access through HTTP headers

Data export architecture diagram

Authentication and access control

All data import and export requests require:

  1. Valid OIDC token: Authentication through your organization’s identity provider
  2. Tenant specification: Include an X-Scope-OrgID HTTP header with an existing tenant name (see the example request below)

Please note that your identity must have access to the specified tenant.

⚠️ Important: Only data from tenants defined in Grafana Organization resources can be accessed. Requests for non-existent tenants will be rejected.
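As a minimal sketch of what an authenticated request looks like, the call below lists the log label names visible to a tenant. The OIDC_TOKEN variable and the your-tenant value are placeholders: obtain the token from your identity provider and use a tenant your identity can access.

# Minimal authenticated request: list log label names for a tenant
curl -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgID: your-tenant" \
     "https://observability.<domain_name>/loki/api/v1/labels"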

Available data types

The platform supports importing and exporting different types of observability data:

Logs and events ✅

Currently available for both import and export:

  • Application logs: Custom logs from your workloads and external applications
  • System logs: Kubernetes events and infrastructure logs
  • Audit logs: Security and compliance-related events
  • External service logs: Logs from SaaS applications, databases, and third-party services

Metrics 🚧

Metrics capabilities are in development:

  • Export: Limited metrics export capabilities are available
  • Import: Metrics import is planned for future releases

The metric types in scope include:

  • Infrastructure metrics: CPU, memory, disk, and network metrics
  • Application metrics: Custom business and performance metrics
  • Platform metrics: Kubernetes and Giant Swarm platform metrics

Note: Currently, data import via the API is limited to logs and events only. Metrics import will follow in a later release. Keep an eye on our changes and releases for updates on metrics import availability.

Data import methods

Loki API ingestion

Send log data directly to the platform using the Loki push API with properly formatted log streams.

Loki push endpoint

The platform provides a Loki-compatible endpoint for log data ingestion:

  • Logs ingestion: https://observability.<domain_name>/loki/api/v1/push

Example: Sending logs via Loki API

# Example Loki logs ingestion
curl -X POST \
     -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgID: your-tenant" \
     -H "Content-Type: application/json" \
     -d @logs-payload.json \
     "https://observability.<domain_name>/loki/api/v1/push"

Payload format

Logs should be sent in Loki’s native format. Here’s an example payload structure:

{
  "streams": [
    {
      "stream": {
        "job": "my-external-service",
        "level": "info",
        "service": "auth-service"
      },
      "values": [
        ["1640995200000000000", "Application started successfully"],
        ["1640995201000000000", "User authentication completed"]
      ]
    }
  ]
}

The payload structure includes:

  • streams: Array of log streams, each with labels and log entries
  • stream: Object containing label key-value pairs to identify the log stream
  • values: Array of timestamp-message pairs, where timestamps are in nanoseconds since the Unix epoch (the sketch below shows one way to generate them)
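As an end-to-end sketch, the snippet below builds a single-entry payload with the current time in nanoseconds and pushes it to the endpoint shown above. The label values and tenant are placeholders, and date +%s%N assumes GNU date (on macOS, use gdate from coreutils).

# Build a one-line payload with the current timestamp (nanoseconds since Unix epoch)
TIMESTAMP="$(date +%s%N)"   # GNU date

cat > logs-payload.json <<EOF
{
  "streams": [
    {
      "stream": { "job": "my-external-service", "level": "info" },
      "values": [ ["${TIMESTAMP}", "Example log line pushed from an external system"] ]
    }
  ]
}
EOF

# Push the payload to the Loki-compatible endpoint
curl -X POST \
     -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgID: your-tenant" \
     -H "Content-Type: application/json" \
     -d @logs-payload.json \
     "https://observability.<domain_name>/loki/api/v1/push"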

⚠️ Prerequisites: You must configure OIDC authentication with Giant Swarm before using the import method. Contact your account engineer for setup assistance.

Data export methods

Method 1: External Grafana integration

Connect your self-managed Grafana instance to access Giant Swarm observability data through familiar dashboards and queries. The steps below use the Grafana UI; a scripted sketch using Grafana's data source API follows the list.

Setting up Grafana data sources

  1. Configure the connection URL:

    • For logs (Loki): https://observability.<domain_name>
    • For metrics (Mimir/Prometheus): https://observability.<domain_name>/prometheus

    Replace <domain_name> with your installation’s base domain.

    Data source URL configuration

  2. Set up authentication:

    • Select “Forward OAuth Identity” in the Authentication section
    • This passes your OIDC credentials to the API

    Data source authentication setup

  3. Configure tenant access:

    • Add an X-Scope-OrgID custom header
    • Set the value to your target tenant (for example, giantswarm for platform logs, anonymous for platform metrics)
    • For custom data, use the tenant you configured during ingestion

    Data source headers configuration
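The same configuration can be scripted against Grafana's data source API instead of clicking through the UI. This is a sketch only: the Grafana URL and GRAFANA_API_TOKEN are placeholders for your own instance and a token with permission to create data sources, and the tenant value follows the table below.

# Create a Loki data source that forwards OAuth identity and sets the tenant header
curl -X POST \
     -H "Authorization: Bearer $GRAFANA_API_TOKEN" \
     -H "Content-Type: application/json" \
     -d '{
           "name": "Giant Swarm Loki",
           "type": "loki",
           "access": "proxy",
           "url": "https://observability.<domain_name>",
           "jsonData": {
             "oauthPassThru": true,
             "httpHeaderName1": "X-Scope-OrgID"
           },
           "secureJsonData": {
             "httpHeaderValue1": "giantswarm"
           }
         }' \
     "https://your-grafana.example.com/api/datasources"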

Tenant selection guide

Choose the appropriate tenant based on the data you want to access:

| Data Type | Tenant Value | Description |
|-----------|--------------|-------------|
| Platform logs | giantswarm | System and infrastructure logs |
| Platform metrics | giantswarm | System and infrastructure metrics |
| Custom logs | Your tenant | Logs from your applications |
| Custom metrics | Your tenant | Metrics from your applications |

Method 2: Programmatic API access

Access observability data programmatically through REST APIs for custom integrations and automated analysis.

API endpoints

The platform provides standard observability API endpoints:

  • Loki API: Compatible with standard Loki query API for logs
  • Prometheus API: Compatible with the Prometheus query API for metrics (when available; see the sketch after this list)
  • Future protocols: OpenTelemetry Protocol (OTLP) endpoints are planned for comprehensive telemetry data exchange
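Depending on what is enabled for your installation, a metrics query against the Prometheus-compatible path (the same path used for the Grafana data source above) might look like the following sketch; the query and tenant are placeholders.

# Instant PromQL query against the Prometheus-compatible endpoint
curl -G \
     -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgID: your-tenant" \
     --data-urlencode 'query=up' \
     "https://observability.<domain_name>/prometheus/api/v1/query"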

Example: Querying logs programmatically

# Example LogQL query via API
curl -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgId: giantswarm" \
     "https://observability.<domain>/loki/api/v1/query_range?query={cluster_id=\"your-cluster\"}"

⚠️ Prerequisites: You must configure OIDC authentication with Giant Swarm before using the API. Contact your account engineer for setup assistance.

Security and compliance

Data import and export maintain the same security standards as the internal platform:

  • End-to-end encryption: All data transfer uses TLS encryption
  • Identity-based access: Integration with your organization’s OIDC provider
  • Tenant isolation: Multi-tenant architecture ensures data separation
  • Audit trails: All data access requests are logged for compliance

Getting started with data import

Prerequisites for data import

Before you can import data, ensure you have:

  1. OIDC provider configured: Work with Giant Swarm to set up identity provider integration
  2. Tenant setup: Create or identify the tenant where your external data should be stored
  3. Data format: Prepare your data in Loki’s native log stream format
  4. Network access: Ensure your external systems can reach https://observability.<domain_name>

Setup process for data import

  1. Plan your data sources: Identify which external systems will send data to the platform
  2. Configure authentication: Work with Giant Swarm to set up OIDC integration for your data sources
  3. Set up tenants: Create appropriate Grafana Organizations for your external data
  4. Test ingestion: Send sample data to verify connectivity and formatting
  5. Implement production ingestion: Deploy your chosen import method at scale
  6. Monitor ingestion: Track data volume and verify data is being properly processed

Getting started with data export

Prerequisites for data export

Before you can export data, ensure you have:

  1. OIDC provider configured: Work with Giant Swarm to set up identity provider integration
  2. Tenant access: Confirm you have access to the tenants containing your data
  3. Network access: Ensure your external systems can reach https://observability.<domain_name>

Setup process for data export

  1. Plan your integration: Identify what data you need and which external tools will consume it
  2. Configure authentication: Work with Giant Swarm to set up OIDC integration
  3. Test connectivity: Verify you can authenticate and access your tenants
  4. Implement export: Set up your external tools or custom integrations
  5. Monitor usage: Track export volume and performance impact

Performance considerations

Both data import and export can impact platform resources:

Data Import Impact:

  • Ingestion volume: Large volumes of imported data increase storage and processing requirements
  • Data frequency: High-frequency data streams consume more ingestion capacity
  • Payload size: Large individual payloads affect processing time and memory usage
  • Tenant capacity: Multiple tenants importing data share platform ingestion resources

Data Export Impact:

  • Query complexity: Complex queries (broad time ranges, intensive filters) consume more resources
  • Export volume: Large data exports may affect platform performance
  • Concurrent access: Multiple simultaneous export operations share platform capacity

Best practices

For Data Import:

  • Batch processing: Send data in appropriately sized batches rather than individual events
  • Rate limiting: Implement client-side rate limiting to avoid overwhelming the platform
  • Data filtering: Only send relevant data; pre-filter unnecessary logs or events
  • Compression: Use compressed payloads where supported to reduce transfer time (see the sketch after this list)
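For example, if the ingestion path in your installation accepts gzip-encoded request bodies (worth confirming with your account engineer), a compressed push could look like this sketch, reusing the payload file from the import example above:

# Compress the payload and declare the encoding on the push request
gzip -k logs-payload.json   # produces logs-payload.json.gz and keeps the original

curl -X POST \
     -H "Authorization: Bearer $OIDC_TOKEN" \
     -H "X-Scope-OrgID: your-tenant" \
     -H "Content-Type: application/json" \
     -H "Content-Encoding: gzip" \
     --data-binary @logs-payload.json.gz \
     "https://observability.<domain_name>/loki/api/v1/push"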

For Data Export:

  • Optimize queries: Use specific time ranges and efficient filters
  • Implement caching: Cache frequently accessed data in your external systems
  • Schedule intensive exports: Run large data exports during off-peak hours
  • Monitor impact: Track export performance and adjust patterns as needed

Next steps