🦔
Pinned
- rookery (Public, Rust): Local inference command center — manage llama-server and vLLM backends, hot-swap models, monitor GPU, run agents, and browse models from one daemon + CLI + live dashboard.