Omni Code
Version: 0.4.39
A standalone coding agent built with OmniAgents. It gives an AI assistant safe access to your local workspace and a rich set of tools to search, edit, and run code; fetch docs; and summarize content.
The omni CLI defaults to --mode ink. You can pass through any omniagents run flags.
Examples:
omni                 # Ink TUI
omni --mode web      # Web runner
omni --mode server   # Run the backend server
For launcher/embed scenarios, use:
omni --embedded --mode web --ui-minimal
In this mode Omni Code does not auto-open a browser and emits a machine-readable JSON line from the web backend with the resolved url and port.
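A launcher can read that JSON line from the backend's stdout. A minimal sketch, assuming only that the line is a JSON object containing "url" and "port" keys as described above (any other fields in the real payload are ignored here):

```python
import json

def parse_backend_line(line: str) -> tuple[str, int]:
    """Parse the machine-readable JSON line emitted in --embedded mode.

    Assumes a JSON object with at least "url" and "port" keys; the
    full schema of the real payload is not documented here.
    """
    payload = json.loads(line)
    return payload["url"], int(payload["port"])

# Hypothetical example line; the real one comes from the web backend.
url, port = parse_backend_line('{"url": "http://127.0.0.1:8731", "port": 8731}')
print(url, port)
```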
If you have Docker installed, you can run Omni Code inside a container while mounting your current workspace and your Omni Code config directory (models/MCP/skills):
omni sandbox --ui container
To persist credentials or tool state between runs, mount a persistent Docker volume into the container:
omni sandbox --persist-volume omni-cred-azure:/home/user/.azure
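The --persist-volume argument uses Docker's familiar name:container_path form. A hypothetical helper that splits such a spec (my own illustration, not part of the omni CLI):

```python
def split_volume_spec(spec: str) -> tuple[str, str]:
    """Split a Docker-style volume spec like
    'omni-cred-azure:/home/user/.azure' into (volume_name, container_path).
    Splits on the first ':' only; the container path carries no colon
    in this form."""
    name, _, container_path = spec.partition(":")
    return name, container_path

print(split_volume_spec("omni-cred-azure:/home/user/.azure"))
# → ('omni-cred-azure', '/home/user/.azure')
```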
To run code-server inside the sandbox and expose it on a host port:
omni sandbox --ui container --enable-code-server --code-server-port 8080
By default the sandbox persists OmniAgents state (sessions + Studio traces) in an isolated directory so sandbox history is not mixed with your local OmniAgents history:
Host: ~/.config/omni_code/sandbox/omniagents/ → Container: /home/user/.omniagents/

You can override this location (or intentionally share your local OmniAgents state) with:
omni sandbox --omniagents-home ~/.omniagents
For Electron-style launching, you can run in the background and print only the URL:
omni sandbox --mode web --detach --print-url --port 0
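--port 0 asks the OS for any free port: the same ephemeral-port mechanism you get when binding a socket to port 0. A minimal, omni-independent illustration of that mechanism:

```python
import socket

def pick_free_port() -> int:
    """Bind to port 0 so the OS assigns an unused ephemeral port, then
    report which port was chosen. This is the mechanism behind flags
    like --port 0 above; the tool resolves the port and reports it back."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(pick_free_port())  # some OS-assigned port, e.g. 54321
```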
For programmatic launching (e.g. an Electron wrapper), you can emit a machine-readable JSON payload:
omni sandbox --ui local --enable-code-server --enable-vnc --code-server-port 0 --vnc-port 0 --ui-port 0 --output json
For --ui container/none, JSON output requires --detach.
By default it will pass <workspace>/.env to Docker if present. You can override it with:
omni sandbox --env-file /path/to/.env
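Such .env files use the common KEY=VALUE dotenv format. A minimal parser sketch showing the shape of file being passed through (my own illustration; the CLI's actual parsing, e.g. quoting rules, may differ):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Parse simple KEY=VALUE lines, skipping blanks and '#' comments.
    Mirrors the common dotenv format; real implementations may handle
    quoting and escaping differently."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """
# API credentials
OPENAI_API_KEY=sk-example
SERPAPI_API_KEY=serp-example
"""
print(parse_env_file(sample))
```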
Type these while the assistant is running:
/help - Show available commands
/models - List available models and current session state
/model <name> - Switch models for this session
/reasoning <low|medium|high> - Set reasoning effort level
/compact - Compact conversation context
/exit - End the current session
/good / /bad / /note - Feedback and session notes

Inspect prior conversations stored in the OmniAgents sessions database:
omni sessions list --pretty
omni sessions list --after 2026-02-01 --limit 20 --offset 0 --pretty
omni sessions list --after 2026-02-01 --limit 20 --offset 20 --pretty
omni sessions list --stats --limit 20 --pretty
omni sessions search "kubernetes" --role user --pretty
omni sessions search "kubernetes" --after 2026-02-01 --limit 20 --offset 0 --pretty
omni sessions export <session_id> --format jsonl
omni sessions summarize <session_id> --pretty
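An exported JSONL file holds one JSON object per line, so it is easy to post-process. A sketch that assumes only valid JSON per line; the field names (role, content) below are illustrative, since the actual export schema is not documented here:

```python
import json

def load_jsonl(text: str) -> list[dict]:
    """Parse a JSONL export: one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

# Illustrative records; real data comes from `omni sessions export`.
sample = '{"role": "user", "content": "hi"}\n{"role": "assistant", "content": "hello"}'
for record in load_jsonl(sample):
    print(record["role"])
```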
Environment variables:
- OPENAI_API_KEY (or run omni model setup to store credentials in the global config).
- SERPAPI_API_KEY.

Omni Code supports Agent Skills by injecting discovered skills into the system prompt. Skills are discovered in:
- ~/.config/omni_code/skills/ (Windows: %APPDATA%\OmniCode\skills\)
- <workspace_root>/.omni_code/skills/

Install the omni CLI globally, then run omni model setup to configure credentials:

python3 -m pip install --user pipx
pipx ensurepath
pipx install omni-code --pip-args="--extra-index-url https://pypi.fury.io/ericmichael/"
omni model setup
omni
Note: Keep pip's default index pointed at PyPI. If you've previously run
pip config set global.index-url https://pypi.fury.io/..., reset it with pip config unset global.index-url (or set it back to https://pypi.org/simple) before installing.
Windows (PowerShell):
py -m pip install --user pipx
pipx ensurepath
pipx install omni-code --pip-args="--extra-index-url https://pypi.fury.io/ericmichael/"
omni model setup
omni
Using uv:

python3 -m pip install --user uv
uv tool install omni-code --extra-index-url https://pypi.fury.io/ericmichael/
omni model setup
omni
If your pip configuration overrides the default index, reset it as described above so PyPI remains available.
Windows (PowerShell):
py -m pip install --user uv
uv tool install omni-code --extra-index-url https://pypi.fury.io/ericmichael/
omni model setup
omni
The setup wizard guides you through choosing your provider (OpenAI, Azure, OpenAI-compatible, or LiteLLM), securely collects required keys, and writes them to a global config file:
- macOS/Linux: ~/.config/omni_code/models.json
- Windows: %APPDATA%\OmniCode\models.json

You can rerun omni model setup anytime to update credentials. After setup, run omni from any directory to start the assistant (it auto-launches the setup wizard if no model is configured). By default it launches in the ink TUI; use --mode to switch (web or server).
If you prefer environment variables, you can skip the wizard and just set OPENAI_API_KEY (the built-in OpenAI provider profile references it via ${OPENAI_API_KEY}).
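The ${VAR_NAME} style shown above is the standard shell-like substitution that Python's os.path.expandvars also understands, which gives a quick way to see how such a reference resolves. A sketch of the idea (not Omni Code's actual config loader):

```python
import os

# Pretend the key is set in the environment, as the provider profile expects.
os.environ["OPENAI_API_KEY"] = "sk-example"

# Expand a ${VAR_NAME}-style reference the way a config loader might.
reference = "${OPENAI_API_KEY}"
resolved = os.path.expandvars(reference)
print(resolved)  # → sk-example
```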
Model switching uses references like gpt-5.1 (defaults to the openai provider profile) or azure-prod/gpt-5.2 for a named Azure profile.
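These references follow a provider/model convention: a bare name like gpt-5.1 falls back to the default openai provider profile, while azure-prod/gpt-5.2 names both parts. A hypothetical parser illustrating that convention (my own sketch, not the CLI's code):

```python
def parse_model_ref(ref: str, default_provider: str = "openai") -> tuple[str, str]:
    """Split a 'provider/model' reference; bare model names use the
    default provider profile, matching the convention described above."""
    if "/" in ref:
        provider, _, model = ref.partition("/")
        return provider, model
    return default_provider, ref

print(parse_model_ref("gpt-5.1"))             # → ('openai', 'gpt-5.1')
print(parse_model_ref("azure-prod/gpt-5.2"))  # → ('azure-prod', 'gpt-5.2')
```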
If you installed via pipx, you can upgrade with:
omni update
Non-interactive:
omni update --yes
When the wizard prompts for API keys it uses hidden input, so you will not see characters or *** while typing.
Azure: run omni model setup and choose the Azure option to provide your endpoint, deployment, API version, and API key.

Other model commands:
- omni model list
- omni model default <name>
- omni model show <name>

You can reference environment variables in models.json using ${VAR_NAME} instead of storing keys directly.

This project also includes a voice-optimized agent spec (voice_agent). How you select it depends on the OmniAgents runner/UI you use; look for an agent selection option and choose voice_agent.
Troubleshooting:
- No model credentials configured: run omni model setup.
- ImportError: libgit2 / pygit2 failures: install libgit2 (macOS: brew install libgit2; Debian/Ubuntu: sudo apt-get install -y libgit2-dev).
├── main.py
├── tools/
├── omni_agents/
├── server_functions/
├── omni_code/
├── tests/
├── docs/
├── project.yml
├── pyproject.toml
└── requirements.txt
Prerequisites: libgit2 (macOS: brew install libgit2; Debian/Ubuntu: sudo apt-get install -y libgit2-dev). Run omni model setup once to capture credentials in your global config if you have not already.

git clone git@github.com:<you>/omni-code.git
cd omni-code
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install -e .[dev] --extra-index-url https://pypi.fury.io/ericmichael/
omni model setup
omni
Development environments should also retain https://pypi.org/simple as the primary index. Use pip config unset global.index-url if Gemfury was set globally.
Run omni model setup again anytime you need to update credentials. To use the CLI globally without activating the virtual environment, follow either the pipx or uv instructions above.
pytest
- make bump-patch / bump-minor / bump-major to adjust the version.
- make publish builds distributions and uploads them to Fury using FURY_USER and FURY_TOKEN from .env.
- See docs/RELEASE.md for the full workflow.

pip install -r requirements.txt
The file includes the private Gemfury index URL required for omniagents and agentio.
MIT (see LICENSE)