# AnythingLLM
AnythingLLM is a self-hosted AI workspace. It can use CoreCube as an OpenAI-compatible LLM backend, but does not currently support per-user identity forwarding to downstream model endpoints. All AnythingLLM users share a single CoreCube identity.
:::warning Per-user access control not available

AnythingLLM has no mechanism to forward the logged-in user's identity to a downstream API endpoint on a per-request basis. Per-user compartment and scope enforcement is therefore not possible with the current version of AnythingLLM. All users in an AnythingLLM workspace see the same knowledge, governed by the single CoreCube key you configure.

This is a limitation of AnythingLLM, not CoreCube. Feature requests are tracked at #696 and #2934 on the AnythingLLM repository.

:::
If per-user knowledge access control is a requirement, consider OpenWebUI, LibreChat, or Onyx instead.
## What is supported
- All AnythingLLM users query CoreCube against a single shared scope
- The scope's compartments and sensitivity ceiling determine what knowledge is accessible
- Works for teams where all users should see the same knowledge pool
## Prerequisites
- CoreCube running and accessible from your AnythingLLM instance
- An admin account on CoreCube
## Step 1 — Create a public API key
Because AnythingLLM cannot forward per-user identity, use a public API key bound directly to the scope your AnythingLLM users should see. Public keys skip user resolution entirely — they authenticate the AnythingLLM instance and resolve knowledge straight from the bound scope.
In CoreCube admin, go to Settings → API Keys → Create Key and choose Public — anonymous access. Pick the scope that covers the compartments AnythingLLM users should reach. If the scope's max_sensitivity is higher than public, you will be asked to confirm that the selected content is intended for all AnythingLLM users.
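The resolution difference between a public key and a user-tied key can be sketched conceptually. This is a simplified model for illustration only — the `ApiKey` type and its field names are assumptions, not CoreCube's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ApiKey:
    """Illustrative model of a CoreCube API key (field names are assumptions)."""
    token: str
    kind: str                # "public" or "personal"
    scope: str               # scope bound at key-creation time
    user_id: Optional[str]   # only set for personal keys


def resolve_scope(key: ApiKey) -> tuple[str, Optional[str]]:
    """Return (scope, user_id) for a request authenticated with `key`.

    Public keys skip user resolution entirely: the scope comes straight
    from the key binding, and the audit log records user_id = null.
    """
    if key.kind == "public":
        return key.scope, None
    # Personal keys attribute every query to the owning user.
    return key.scope, key.user_id


public_key = ApiKey("cc_PUBLIC_KEY", "public", "anythingllm-eng", None)
scope, user = resolve_scope(public_key)
# scope is the bound scope; user is None, so the audit log shows user_id = null
```

The point of the sketch: with a public key there is no per-user branch at all, which is exactly why AnythingLLM's lack of identity forwarding does not matter for this setup.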
## Step 2 — Configure AnythingLLM
In AnythingLLM admin, go to LLM Preference and select Generic OpenAI:
| Field | Value |
|---|---|
| Base URL | `http://corecube:7400/v1` (or your CoreCube address) |
| API key | The public key from Step 1 |
| Model name | A model name from your CoreCube LLM provider configuration |
## How it works
```
Any AnythingLLM user sends a message
  → AnythingLLM calls POST /v1/chat/completions
    with: Authorization: Bearer cc_PUBLIC_KEY
  → CoreCube resolves the bound scope directly (no user lookup)
  → searches compartments in that scope
  → returns cited response (audit log records user_id = null)

All users see the same results
```
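The flow above corresponds to a single OpenAI-style chat completion call. Here is a minimal sketch of the request AnythingLLM issues, built from the Step 2 settings — the base URL and bearer token match the values above, while the model name is a placeholder for whatever your CoreCube LLM provider exposes:

```python
import json

BASE_URL = "http://corecube:7400/v1"   # or your CoreCube address
API_KEY = "cc_PUBLIC_KEY"              # the public key from Step 1


def build_chat_request(message: str, model: str = "your-model-name") -> dict:
    """Assemble the POST /v1/chat/completions call AnythingLLM makes.

    Every user's message goes out with the same bearer token, which is
    why every user resolves to the same scope on the CoreCube side.
    """
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": message}],
        }),
    }


req = build_chat_request("What is our deployment process?")
```

Nothing in the request identifies the individual AnythingLLM user — CoreCube only sees the shared public key.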
## Workspace-level isolation
If you have multiple AnythingLLM workspaces for different teams, create one public key per workspace, each bound to a different scope:
| AnythingLLM workspace | CoreCube scope | Compartments |
|---|---|---|
| Engineering workspace | anythingllm-eng | Engineering, DevOps |
| Marketing workspace | anythingllm-mkt | Marketing, Brand |
This gives you workspace-level isolation without creating dedicated user accounts for each workspace.
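With one public key per workspace, the isolation boundary is simply which bearer token each workspace is configured with. A sketch using the example table above — the scope names come from the table, while the key values are placeholders:

```python
# One public key per AnythingLLM workspace, each bound to its own scope.
# Key values here are placeholders; scope names match the table above.
WORKSPACE_KEYS = {
    "Engineering workspace": {"key": "cc_ENG_KEY", "scope": "anythingllm-eng"},
    "Marketing workspace":   {"key": "cc_MKT_KEY", "scope": "anythingllm-mkt"},
}


def auth_header(workspace: str) -> dict:
    """Return the Authorization header a given workspace should be configured with."""
    entry = WORKSPACE_KEYS[workspace]
    return {"Authorization": f"Bearer {entry['key']}"}


# Engineering traffic authenticates with cc_ENG_KEY and resolves the
# anythingllm-eng scope; Marketing traffic never touches Engineering compartments.
eng_header = auth_header("Engineering workspace")
```

Because the key, not the user, selects the scope, swapping a workspace's bearer token is all it takes to re-point that team at different knowledge.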
## Upgrading from a personal-key setup
Earlier CoreCube versions recommended a personal key tied to a dedicated anythingllm@company.com service account. Public keys replace that pattern:
- No service-account user to manage — the key binds to a scope directly
- Audit logs record `user_id = null` instead of attributing every query to the service account
- Scopes bound to a public key cannot be deleted until the key is re-pointed or revoked
Existing personal-key setups continue to work. To migrate, create a new public key bound to the same scope, swap the bearer token in AnythingLLM, then revoke the old personal key once you have verified traffic is flowing.