[FEAT] Ollama #17
Open
Labels
backend · enhancement (New feature or request) · triage (New issue needs review)
Description
Objective
Reinstitute Ollama as an option so a user can choose to run the advisor app with their own locally hosted LLM. Currently, Gemini is available to power models, and our own Neon AI models will be available after implementing some commits from this PR: NeonClary#2. The original build had Ollama functioning at least partially, but that functionality was overwritten when access to the Neon models was added.
Initial Implementation Requirements
- Add Ollama functionality back in fully (one possible approach is sketched below)
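
As a rough illustration of what reinstating this could look like, here is a minimal sketch of a provider-selection layer where a local Ollama model is one backend option alongside Gemini and the Neon models. The `LLMProvider` interface, `OllamaProvider` class, default model name `llama3`, and the wiring into the advisor app are all hypothetical assumptions for this issue; the only part taken from Ollama itself is its documented REST endpoint (`POST /api/chat` on `localhost:11434`).

```python
# Minimal sketch, not the advisor app's actual code: a provider
# abstraction where a locally hosted Ollama model is one backend option.
# Assumes an Ollama server is already running on its default port (11434).
from abc import ABC, abstractmethod

import requests


class LLMProvider(ABC):
    """Hypothetical common interface so Gemini, Neon AI, and Ollama are interchangeable."""

    @abstractmethod
    def chat(self, prompt: str) -> str: ...


class OllamaProvider(LLMProvider):
    def __init__(self, model: str = "llama3", host: str = "http://localhost:11434"):
        self.model = model
        self.host = host

    def chat(self, prompt: str) -> str:
        # Ollama's documented chat endpoint; stream=False returns one JSON object.
        resp = requests.post(
            f"{self.host}/api/chat",
            json={
                "model": self.model,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,
            },
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json()["message"]["content"]


def get_provider(name: str) -> LLMProvider:
    # Hypothetical selection point; the real app would plug Gemini/Neon in here.
    if name == "ollama":
        return OllamaProvider()
    raise ValueError(f"Unknown or not-yet-wired provider: {name}")


if __name__ == "__main__":
    advisor = get_provider("ollama")
    print(advisor.chat("In one sentence, what does this advisor app do?"))
```

Running something like this locally would require an Ollama install with the chosen model pulled (e.g. `ollama pull llama3`) plus `pip install requests`; the real integration would presumably route model choice through whatever config mechanism the app already uses for Gemini.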
Other Considerations
No response