Starting today, you can use any OpenAI API-compatible LLM with Savvy 🚀
## Why BYOLLM Matters
- Stay compliant with company policy by using company-approved LLMs.
- Experiment with multiple models and pick the ones that work best for you.
- Use any local model and export locally to create workflows, with no data ever leaving your machine.
## Two Easy Steps to Get Started
First, make sure you have the latest version of Savvy CLI:
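If you installed Savvy with the one-line install script, re-running it pulls the latest release. This is a minimal sketch assuming the standard install endpoint from the Savvy README; adjust if you installed another way:

```sh
# Re-run the install script to fetch the latest Savvy CLI release
# (assumes the install endpoint documented in the Savvy README)
curl -fsSL https://install.getsavvy.so/latest | sh

# Verify the upgrade took effect
savvy --version
```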
Second, update your Savvy config file (`~/.config/savvy/config.json`) with your preferred model's details:
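As a concrete example, here is what a config pointing at a local Ollama server (which exposes an OpenAI-compatible API under `/v1`) might look like. The key names below are illustrative assumptions, not the authoritative schema, so check the Savvy docs for the exact fields your version expects:

```json
{
  "llm_base_url": "http://localhost:11434/v1",
  "llm_api_key": "not-needed-for-local-models",
  "llm_model_name": "llama3.1"
}
```

Hosted providers follow the same shape: swap in your provider's OpenAI-compatible endpoint, your API key, and the model identifier you want Savvy to use.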
## Seamless Integration
Once configured, Savvy automatically routes all LLM requests to your specified model. No additional changes are needed: your existing workflows and commands continue to work exactly as before, just with your preferred model doing the heavy lifting.
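For example, assuming you use `savvy ask` to generate commands from natural language, the invocation is unchanged; only the model serving the request differs:

```sh
# Same command as before BYOLLM; the response now comes
# from the model configured in ~/.config/savvy/config.json
savvy ask "compress every .log file older than 7 days"
```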
Have questions or feedback about BYOLLM? Join our community Discord or open an issue on our GitHub repository.