Configuring providers

Add LLM providers, set privacy tiers, choose a runtime strategy, and cap per-provider concurrency. Nine providers across three tiers, OS keychain for keys.

Team-X supports nine LLM providers across three privacy tiers. This guide covers adding providers, setting the privacy tier the runtime is allowed to use, choosing a runtime strategy, and capping per-provider concurrency.

Supported providers

Provider | Privacy tier | Notes
Ollama | Local | Runs on your machine. No data leaves your network.
Anthropic | Proprietary Cloud | Claude Opus, Sonnet, Haiku.
OpenAI | Proprietary Cloud | GPT-4o, GPT-4, GPT-3.5.
Google | Proprietary Cloud | Gemini models.
Groq | Open-Source Cloud | Fast inference for open models.
OpenRouter | Proprietary Cloud | Multi-model router.
Together | Open-Source Cloud | Open model hosting.
Fireworks | Open-Source Cloud | Fast open model inference.
OpenAI-Compatible | Varies | Any endpoint with the OpenAI-compatible API.

Add a provider

  1. Open Settings > Providers.
  2. Click Add Provider.
  3. Pick the provider type from the dropdown.
  4. Enter your API key.
  5. Optionally set:
    • Privacy tier (Local, Open-Source Cloud, Proprietary Cloud).
    • Base URL (for OpenAI-compatible endpoints).
  6. Click Add.

Your API key is stored in the OS keychain (macOS Keychain, Windows Credential Manager, Linux Secret Service). It never touches a config file or a database column.

Test a connection

After adding a provider, click Test Connection on the provider card. Team-X sends a minimal request to verify the API key and endpoint are valid. If the test passes, the card lights green and the provider is eligible to run agents.
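The check itself is provider-specific, but for an OpenAI-compatible endpoint it amounts to one tiny request. A minimal sketch of building such a probe (the endpoint path, placeholder model name, and function name are illustrative assumptions, not Team-X internals):

```python
import json
import urllib.request

def build_probe(base_url: str, api_key: str) -> urllib.request.Request:
    """Build a minimal chat request used only to validate the key and endpoint."""
    payload = {
        "model": "test",  # hypothetical placeholder; a real probe would name a cheap model
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 1,  # keep the probe as cheap as possible
    }
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Sending this with `urllib.request.urlopen` and checking for a 2xx status would complete the check; a 401 response means the key is invalid.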

Enable, disable, and remove

Toggle the switch on any provider card to enable or disable it. The runtime never uses a disabled provider, even if it is the only entry in a role’s preferred provider list. Click Remove to delete the configuration and remove the API key from the keychain.

Privacy tiers

Privacy tiers control which providers your agents are allowed to use.

Tier | Data location | Example providers
Local | Your machine only | Ollama
Open-Source Cloud | Third-party servers, open models | Groq, Together, Fireworks
Proprietary Cloud | Third-party servers, proprietary models | Anthropic, OpenAI, Google

In Settings > Privacy, set the maximum allowed tier:

  • Local only: agents can only use Ollama. No data leaves your machine.
  • Open-Source Cloud: agents can use local or open-source cloud providers.
  • Proprietary Cloud: agents can use any provider (default).

The provider router enforces this filter at call time. If a role’s preferred provider is proprietary but your privacy maximum is Local only, the router skips it and tries the role’s fallback_providers list in order, applying the same tier filter.
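To make the filtering concrete, here is a sketch of tier-aware provider resolution. The function and data shapes are illustrative, not Team-X's actual code; note that disabled providers are skipped the same way:

```python
from enum import IntEnum

class Tier(IntEnum):
    LOCAL = 0
    OPEN_SOURCE_CLOUD = 1
    PROPRIETARY_CLOUD = 2

# Tiers from the table above; `enabled` mirrors the provider card toggle.
PROVIDERS = {
    "ollama":    {"tier": Tier.LOCAL, "enabled": True},
    "groq":      {"tier": Tier.OPEN_SOURCE_CLOUD, "enabled": True},
    "anthropic": {"tier": Tier.PROPRIETARY_CLOUD, "enabled": True},
}

def resolve(preferred: str, fallbacks: list, max_tier: Tier):
    """Return the first enabled provider at or below max_tier, else None."""
    for name in (preferred, *fallbacks):
        p = PROVIDERS.get(name)
        if p and p["enabled"] and p["tier"] <= max_tier:
            return name
    return None
```

With the privacy maximum set to Local only, `resolve("anthropic", ["groq", "ollama"], Tier.LOCAL)` skips both cloud providers and returns `"ollama"`.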

Runtime strategy

The runtime strategy determines how Team-X balances model quality, speed, and resource usage.

Strategy | Behavior
Auto (default) | Profiles your hardware and providers on startup, picks a strategy automatically.
Hybrid | Local models for simple tasks, cloud models for complex ones.
Always-On | Sends everything to the highest-quality available provider.
Lean | Minimizes resource usage; prefers local models and fewer concurrent agents.

Configure in Settings > Runtime:

  1. View your hardware profile (CPU, RAM, GPU, detected at startup).
  2. Pick a strategy from the dropdown, or leave it on Auto.
  3. The effective slot count shows how many agents can run concurrently under the selected strategy.
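The Auto profiling step can be pictured as a simple heuristic over the hardware profile. The thresholds and function below are invented for illustration, not Team-X's real logic:

```python
def pick_strategy(ram_gb: int, has_gpu: bool, has_cloud: bool) -> str:
    """Hypothetical Auto heuristic mapping a hardware profile to a strategy."""
    if not has_cloud:
        return "lean"       # no cloud providers: stay local and conserve resources
    if ram_gb < 16 and not has_gpu:
        return "always-on"  # weak local hardware: route everything to the cloud
    return "hybrid"         # capable machine plus cloud: split work by task complexity
```

A 64 GB GPU workstation with cloud providers configured would land on Hybrid under this sketch, while an 8 GB laptop would land on Always-On.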

Concurrency caps

In Settings > Concurrency:

  • Global orchestrator slots: maximum total concurrent agent runs.
  • Per-provider caps: limit how many concurrent calls go to each provider.

Default per-provider caps:

Provider | Default cap
Ollama | 1
Anthropic | 4
OpenAI | 6
Google | 4
Groq | 10
OpenRouter | 8
Together | 6
Fireworks | 6

These defaults prevent overwhelming local hardware (Ollama) and respect API rate limits (cloud providers). Adjust based on your plan tier and hardware.

See also