[FEAT] Ollama #17

@NeonClary

Description

Objective

Reinstate Ollama as an option so a user can choose to run the advisor app against their own locally hosted LLM. Currently, Gemini is available as a model backend, and our own Neon AI models will be available after implementing some commits from this PR: NeonClary#2. The original build had Ollama at least partially functional, but that functionality was overwritten when access to the Neon models was added.

Initial Implementation Requirements

  • Add Ollama functionality back in fully (a sketch of one possible integration follows this list)
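
For reference, Ollama exposes a simple HTTP API on localhost (port 11434 by default), so restoring the backend largely means pointing the advisor's model-selection logic back at that endpoint. Below is a minimal sketch, assuming Python with the `requests` library; the function name, the `llama3` model tag, and the single-turn wrapper are illustrative only, not the app's actual interface:

```python
import requests

# Ollama's default local endpoint
OLLAMA_URL = "http://localhost:11434"

def ollama_chat(prompt: str, model: str = "llama3") -> str:
    """Send a single-turn chat request to a locally hosted Ollama server."""
    resp = requests.post(
        f"{OLLAMA_URL}/api/chat",
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
            # Return one complete JSON object instead of a streamed response
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    # With stream=False, the reply is in message.content
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(ollama_chat("Give me one piece of advice for today."))
```

A hypothetical model picker in the app could then route to this function when the user selects Ollama, alongside the existing Gemini path and the upcoming Neon AI path.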

Other Considerations

No response
