What’s Changed
- chore(deps): update go toolchain directive to v1.26.3 by @renovate[bot] in https://github.com/giantswarm/klausctl/pull/217
Full Changelog: https://github.com/giantswarm/klausctl/compare/v0.0.78...v0.0.79
Updates on Giant Swarm workload cluster releases, apps, UI improvements and documentation changes.
Full Changelog: https://github.com/giantswarm/muster/compare/v0.1.157...v0.1.158
Full Changelog: https://github.com/giantswarm/klausctl/compare/v0.0.77...v0.0.78
Full Changelog: https://github.com/giantswarm/klausctl/compare/v0.0.76...v0.0.77
Full Changelog: https://github.com/giantswarm/klausctl/compare/v0.0.75...v0.0.76
useChatSetup constructed a fresh AssistantChatTransport on every render and handed it to useChatRuntime; when its identity changed mid-stream (e.g. as a side effect of a state update triggered by streamed reasoning-delta / tool-input-delta events), the runtime tore down the in-flight chat fetch with TypeError: network error. Envoy logged the symptom as response_flags: DC (downstream remote disconnect) against a healthy Backstage upstream, while the SSE stream summary showed sawFinish=false with tool-input-delta as the last event; in the UI this surfaced as a "Network error" banner even though no real network outage occurred. The transport is now memoised on the resolved API URL and the stable getHeaders / debugFetch callbacks, so a single transport lives for the component's lifetime.

Stream debugging now observes the request's AbortSignal and logs abort events with the reason inline (ABORT signaled by client at <ms> -- reason: <name>: <message>). Stream-outcome lines annotate aborted streams with [client-aborted at Nms reason="..."], so a "Network error" banner can be classified as a deliberate client cancel (transport rebuild, unmount, manual stop) versus a real proxy / network failure (no abort signal fired, raw stream error). Read errors that fire after the caller aborted are now logged as console.warn ("cancelled by client AbortSignal") rather than console.error ("STREAM READ FAILED"), reserving the latter for the genuine pre-finish, non-aborted failure mode.

See ./docs/releases/v0.129.3-changelog.md for more information.
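The memoisation fix above amounts to keying the transport on the resolved API URL so its identity only changes when the URL does. A minimal framework-free sketch of that idea (in the real code this is React's useMemo with the URL and the stable callbacks in the dependency array; ChatTransport and makeTransportCache here are illustrative names, not the actual API):

```typescript
// Illustrative stand-in for AssistantChatTransport: identity matters,
// because the chat runtime tears down the in-flight fetch when it changes.
class ChatTransport {
  constructor(readonly api: string, readonly fetcher?: typeof fetch) {}
}

// Keyed memoisation: return the same transport instance while the resolved
// API URL is unchanged, and rebuild only when the URL actually changes.
function makeTransportCache() {
  let key: string | undefined;
  let cached: ChatTransport | undefined;
  return (api: string, fetcher?: typeof fetch): ChatTransport => {
    if (cached === undefined || key !== api) {
      key = api;
      cached = new ChatTransport(api, fetcher); // rebuilt only on URL change
    }
    return cached; // stable identity across repeated renders/calls
  };
}
```

In a React component the equivalent is `useMemo(() => new AssistantChatTransport(...), [apiUrl, getHeaders, debugFetch])`, which is why the header and fetch callbacks must themselves be referentially stable.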
The end-of-stream summary is now interpolated into the log message string itself (bytes=... events=... sawFinish=... lastEventType=...) instead of being attached as a trailing object argument, so it stays readable in browser-console consumers that flatten the args array (devtools snapshotters, log shippers, the Cursor IDE browser, etc.) rather than collapsing to "[object Object]". Also, a stream read error that arrives after a finish event was already parsed is now classified as post-completion teardown (console.warn, message already committed) instead of a network failure (console.error); this matches the actual semantics of the AI SDK aborting the underlying fetch once it has finished consuming the stream.
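Inlining the summary into the message string can be sketched as below; the field names follow the log line quoted in the entry, while the "stream closed" prefix and function name are assumptions for illustration:

```typescript
// Shape of the per-stream counters described above.
interface StreamSummary {
  bytes: number;
  events: number;
  sawFinish: boolean;
  lastEventType: string;
}

// Build one flat string so console sinks that flatten arguments still show
// the numbers instead of "[object Object]".
function formatStreamSummary(s: StreamSummary): string {
  return `stream closed (bytes=${s.bytes} events=${s.events} ` +
         `sawFinish=${s.sawFinish} lastEventType=${s.lastEventType})`;
}

// Before: console.log("stream closed", summary)  // may flatten to [object Object]
// After:  console.log(formatStreamSummary(summary))
```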
See ./docs/releases/v0.129.2-changelog.md for more information.

Baseline stream-debug logging no longer requires the ai-chat-verbose-debugging feature flag. Verbose payload-level logging (messages, system prompt, tool schemas, per-event SSE detail) remains gated on the feature flag in non-production builds.
See ./docs/releases/v0.129.1-changelog.md for more information.

The backend previously called streamText() without temperature, topP, topK, seed, minP, or maxOutputTokens, so the server's defaults applied, which for vLLM means temperature=1.0, top_p=1.0, top_k=-1, seed=null. That is far too loose for a tool-using agent backed by a reasoning model and was the dominant cause of token-cost variance in production agent loops (same prompt, fresh chat, observed total-token spread of 22k / 607k / 22k across three runs against the same Qwen3 endpoint). Config now accepts an aiChat.sampling block with temperature, topP, topK, minP, seed, and maxOutputTokens; all fields are optional, and behaviour with no sampling: block is unchanged. temperature, topP, topK, seed, and maxOutputTokens are forwarded through the AI SDK to every provider that supports them. minP is spliced into the request body via the OpenAI-compatible provider's transformRequestBody hook, since vLLM accepts it as a top-level field but it is not part of the AI SDK call settings. The ai-chat-backend README documents recommended values per model family (Qwen3 thinking/non-thinking, Qwen3-Coder, GPT-4 / GPT-4o, Anthropic Claude).
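The min_p splice described above can be sketched as a small body transform; the SamplingConfig shape mirrors the aiChat.sampling block from the entry, while the function name and exact hook signature here are assumptions (the real code uses the OpenAI-compatible provider's transformRequestBody hook):

```typescript
// Mirrors the aiChat.sampling config block; every field is optional, and
// absent fields leave the server/provider defaults untouched.
interface SamplingConfig {
  temperature?: number;
  topP?: number;
  topK?: number;
  minP?: number;
  seed?: number;
  maxOutputTokens?: number;
}

// temperature/top_p/top_k/seed/max tokens travel through the AI SDK's own
// call settings; only min_p must be spliced into the raw request body,
// because vLLM accepts it as a top-level field the SDK does not model.
function spliceMinP(
  body: Record<string, unknown>,
  sampling: SamplingConfig,
): Record<string, unknown> {
  return sampling.minP === undefined
    ? body
    : { ...body, min_p: sampling.minP };
}
```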
See ./docs/releases/v0.129.0-changelog.md for more information.

Full Changelog: https://github.com/giantswarm/muster/compare/v0.1.156...v0.1.157