Production-ready agent implementations for the Haive framework.
53+ working agent implementations covering conversation, planning, reasoning, RAG, memory, research, and multi-agent coordination — built on haive-core and ready for production. Every agent is verified end-to-end with real LLM calls (no mocks).
Building production agents from scratch on LangGraph is hard. You need to:
- Roll your own state schemas with the right fields for tool execution
- Implement reasoning loops with iteration tracking and convergence checks
- Handle structured output validation, tool routing, and error recovery
- Wire up memory, persistence, and KG extraction
- Compose agents into pipelines without losing type safety
haive-agents gives you all of this as a library. Each agent is a `pydantic.BaseModel` you can configure, compose, and extend. Agents follow a consistent pattern: configure with `AugLLMConfig`, run with `agent.run(input)`, get back structured output.
The base agent. Single LLM call, optional structured output, no tools. Use this for chatbots, formatters, classifiers, and any task that doesn't need tool execution.
```python
from haive.agents.simple.agent import SimpleAgent
from haive.core.engine.aug_llm import AugLLMConfig

writer = SimpleAgent(
    name="writer",
    engine=AugLLMConfig(
        temperature=0.8,
        system_message="You are a creative writer.",
    ),
)

result = writer.run("Write a haiku about quantum computing.")
print(result.messages[-1].content)
```

Graph: START → agent_node → END
Use cases: Conversational chatbots, structured data extraction, content generation, classification, summarization.
Implements the ReAct pattern: reason → act → observe → repeat. The agent decides which tools to call, sees the results, and continues reasoning until it has an answer or hits the iteration limit.
```python
from haive.agents.react.agent import ReactAgent
from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Calculate mathematical expressions."""
    # NOTE: eval is restricted here, but still not safe for untrusted input.
    return str(eval(expression, {"__builtins__": {}}))

@tool
def web_search(query: str) -> str:
    """Search the web."""
    return f"Results for: {query}"

researcher = ReactAgent(
    name="researcher",
    engine=AugLLMConfig(
        tools=[calculator, web_search],
        system_message="Use tools to answer questions. Reason step by step.",
    ),
    max_iterations=5,
)

result = researcher.run("What is the population density of Tokyo?")
```

Graph: START → agent_node → [tool_calls?] → tool_node → agent_node → ... → END
Use cases: Research, math, web scraping, multi-step problem solving, anything that needs tools.
Compose multiple agents into pipelines: sequential, parallel, or conditional. Each child agent can be a different type (Simple, React, Memory, etc.). Engines are passed through automatically so child tools work.
```python
from haive.agents.multi.agent import MultiAgent

# Sequential: each agent sees output of previous
pipeline = MultiAgent(
    name="research_pipeline",
    agents=[researcher, analyzer, writer],
    execution_mode="sequential",
)

# Parallel: all run concurrently, results merged
parallel = MultiAgent(
    name="multi_perspective",
    agents=[technical_analyst, business_analyst, security_analyst],
    execution_mode="parallel",
)

# Conditional: route based on state
router = MultiAgent(
    name="router",
    agents=[classifier, simple_handler, complex_handler],
    execution_mode="conditional",
)
```

Use cases: Research pipelines, multi-perspective analysis, content workflows, fan-out/fan-in patterns.
A ReactAgent with handoff tools that can dynamically delegate to other agents. Add agents at runtime, remove them, or create new ones from system messages.
```python
from haive.agents.dynamic_supervisor.agent import DynamicSupervisor

supervisor = DynamicSupervisor(
    name="team_lead",
    engine=AugLLMConfig(system_message="You coordinate a team."),
)

# Add agents at any time
supervisor.add_agent(math_agent, description="Handles math problems")
supervisor.add_agent(writer_agent, description="Writes content")

# Or create one on the fly
supervisor.create_agent(
    name="translator",
    system_message="You translate between languages.",
    description="Handles translation tasks",
)

# Supervisor decides who handles each request
result = supervisor.run("Translate 'hello world' to French")
```

Use cases: Customer service routing, dynamic task delegation, agent marketplaces, runtime adaptation.
The most sophisticated agent in the framework. A ReactAgent extended with:
- Persistent memory — saves facts about users in a store (PostgreSQL or in-memory)
- Auto-context loading — searches memory before each response, injects relevant facts
- KG extraction — automatically extracts subject-predicate-object triples from conversations
- Auto-summarization — summarizes long conversations to manage context length
- Neo4j integration — optional graph database for Cypher queries on the KG
```python
from haive.agents.memory import create_memory_agent

# Dev mode (in-memory)
agent = create_memory_agent(name="assistant", user_id="alice")

# Production (PostgreSQL with pgvector)
agent = create_memory_agent(
    name="assistant",
    user_id="alice",
    connection_string="postgresql://haive:haive@localhost/haive",
)

# With Neo4j knowledge graph
agent = create_memory_agent(
    name="assistant",
    user_id="alice",
    connection_string="postgresql://haive:haive@localhost/haive",
    neo4j_config=True,  # Uses NEO4J_URI/USER/PASSWORD env vars
)

# Have a conversation — agent remembers and extracts KG
agent.run("My name is Alice. I work at DeepMind on reinforcement learning.")
# → Saves memory: "Alice works at DeepMind"
# → Extracts triples: (Alice)-[works_at]->(DeepMind), (Alice)-[focuses_on]->(RL)
# → Stores in PostgreSQL + syncs to Neo4j

agent.run("I also use PyTorch and JAX.")
# → Saves memory + new triples

agent.run("What do you know about me?")
# → Pre-hook: searches memories + KG triples + summaries
# → Injects context into system message
# → LLM responds with full recall
```

Memory tools available to the LLM:

- `save_memory(content, importance)` — save a fact
- `search_memory(query)` — search past memories
- `save_knowledge(subject, predicate, object_)` — save a KG triple
- `search_knowledge(query)` — search KG triples
- `query_knowledge_graph(question)` — Cypher query (when Neo4j connected)
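Conceptually, the KG triple tools behave like a small triple store. The sketch below is framework-independent Python illustrating that idea; the names `Triple` and `TripleStore` are illustrative only and not part of the haive API, and the real store uses vector search rather than substring matching:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    object_: str

class TripleStore:
    """Toy stand-in for the store behind save_knowledge/search_knowledge."""

    def __init__(self) -> None:
        self._triples: set[Triple] = set()

    def save(self, subject: str, predicate: str, object_: str) -> None:
        self._triples.add(Triple(subject, predicate, object_))

    def search(self, query: str) -> list[Triple]:
        # Naive substring match over all three positions (real stores
        # would use embeddings or an index).
        q = query.lower()
        return [
            t for t in self._triples
            if q in t.subject.lower() or q in t.predicate.lower() or q in t.object_.lower()
        ]

store = TripleStore()
store.save("Alice", "works_at", "DeepMind")
store.save("Alice", "focuses_on", "reinforcement learning")
print(len(store.search("alice")))  # → 2
```

The frozen dataclass makes triples hashable, so duplicate facts dedupe for free in the set.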
Document-level KG extraction:
```python
# Extract KG from a document using GraphTransformer
triples = agent.extract_kg_from_document(
    "Marie Curie was born in Warsaw, Poland in 1867. She won two Nobel prizes.",
    allowed_nodes=["Person", "Location", "Award"],
)
# → [{subject: "Marie Curie", predicate: "born_in", object: "Warsaw"}, ...]
```

Use cases: Personal assistants, customer support with history, research assistants, anything that needs long-term memory.
Every meaningful RAG variant from the literature, implemented and ready to use:
| Agent | Pattern | Best For |
|---|---|---|
| BaseRAGAgent | Simple retrieve → generate | Baseline RAG |
| AdaptiveRAGAgent | Routes by query complexity | Mixed query types |
| AgenticRAGAgent | ReactAgent with retrieval tools | Multi-step research |
| DynamicRAGAgent | Multi-source dynamic retrieval | Varied data sources |
| FLARERAGAgent | Forward-Looking Active REtrieval | Long-form generation |
| RAGFusionAgent | Reciprocal rank fusion | Better recall |
| HyDERAGAgent | Hypothetical document embeddings | Better semantic search |
| SelfReflectiveRAGAgent | Generate → grade → regenerate | Hallucination control |
| SelfRouteRAGAgent | Query-aware route selection | Mixed strategies |
| SpeculativeRAGAgent | Hypothesis + parallel verification | Speed + accuracy |
| StepBackRAGAgent | Abstract query → retrieve → answer | Complex reasoning |
| QueryPlanningRAGAgent | Query planning + multi-hop | Hierarchical questions |
| QueryDecomposerAgent | Decompose into sub-queries | Compound questions |
| MemoryAwareRAGAgent | RAG + memory context | Personalized RAG |
| GraphDBRAGAgent | NL → Cypher → Neo4j | Graph-based knowledge |
| SQLRAGAgent | NL → SQL → database | Structured data |
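One of these patterns, reciprocal rank fusion (the core of RAGFusionAgent), is simple enough to illustrate standalone. This sketch is plain Python and independent of the haive API; it shows the scoring rule only, not the query-rewriting step that produces the multiple rankings:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists: each doc scores sum(1 / (k + rank)) across lists."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

# Two retrievers disagree on order; RRF rewards consistent high placement.
fused = reciprocal_rank_fusion([
    ["doc_a", "doc_b", "doc_c"],
    ["doc_b", "doc_c", "doc_a"],
])
print(fused[0])  # → doc_b
```

`doc_b` wins because it is ranked highly in both lists, even though `doc_a` tops one of them; the constant `k` (60 in the original RRF paper) damps the influence of any single top rank.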
```python
from haive.agents.rag.adaptive.agent import AdaptiveRAGAgent
from langchain_core.documents import Document

docs = [Document(page_content="...") for _ in range(100)]

agent = AdaptiveRAGAgent.from_documents(
    documents=docs,
    llm_config=AugLLMConfig(),
    max_query_complexity="complex",
)
result = agent.run("What are the key insights from the documents?")
```

Implementations of recent reasoning paper algorithms:
```python
from haive.agents.reasoning_and_critique.reflexion.agent import ReflexionAgent

agent = ReflexionAgent(
    name="reflexive",
    engine=AugLLMConfig(),
    max_revisions=3,
)
result = agent.run("Solve this complex problem...")
# → Draft → Reflect on draft → Revise → Loop until quality threshold
```

Tree search with UCB1 selection, simulation, backprop, and reflection-based scoring.
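The UCB1 selection rule at the heart of this tree search can be shown in isolation. This is a generic UCB1 sketch in plain Python, not the haive implementation:

```python
import math

def ucb1(value_sum: float, visits: int, parent_visits: int, c: float = 1.41) -> float:
    """UCB1 = exploitation (mean value) + exploration (visit-count bonus)."""
    if visits == 0:
        return float("inf")  # always expand unvisited children first
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Child A: high mean value, heavily visited.
# Child B: lower mean value, barely visited.
score_a = ucb1(value_sum=8.0, visits=10, parent_visits=12)
score_b = ucb1(value_sum=0.5, visits=2, parent_visits=12)
print(score_b > score_a)  # → True
```

Even though child B's average reward is far lower, its exploration bonus dominates at 2 visits, so the search tries it next — the mechanism that keeps LATS-style tree search from tunneling on one branch.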
```python
from haive.agents.reasoning_and_critique.lats.agent import LATSAgent

agent = LATSAgent(
    name="lats",
    engine=AugLLMConfig(tools=[search]),
    max_depth=4,
    n_samples=3,
)
```

Other reasoning agents: ReflectionAgent, LogicReasoningAgent, ToTAgent (Tree of Thoughts), SelfDiscoverAgent.
```python
from haive.agents.planning.plan_and_execute import PlanAndExecuteAgent

agent = PlanAndExecuteAgent.create(
    tools=[calculator, web_search],
    name="planner",
)
result = agent.run("Research AI safety and write a 3-paragraph summary.")
```

Plans tasks as a DAG, executes independent tasks in parallel, joins results, replans if needed.
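Stripped of any haive specifics, the plan-as-DAG idea reduces to grouping tasks into waves that can run concurrently. A minimal sketch using the stdlib `graphlib` (the task names are a toy plan, not real haive output):

```python
from graphlib import TopologicalSorter

# Task -> set of tasks it depends on
deps = {
    "search_web": set(),
    "search_papers": set(),
    "analyze": {"search_web", "search_papers"},
    "write": {"analyze"},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = list(ts.get_ready())  # tasks whose dependencies are all met
    waves.append(sorted(ready))   # everything in one wave can run in parallel
    ts.done(*ready)

print(waves)
# → [['search_papers', 'search_web'], ['analyze'], ['write']]
```

The two searches land in the same wave because neither depends on the other, which is exactly the parallelism a DAG planner exploits.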
```python
from haive.agents.planning.llm_compiler.agent import LLMCompilerAgent

agent = LLMCompilerAgent(
    name="dag_executor",
    engine=AugLLMConfig(tools=[search, calc]),
)
```

Plans all steps upfront, executes them all, then synthesizes. Only 2 LLM calls total: one for planning and one for synthesis (intermediate tool steps don't require LLM reasoning).
```python
from haive.agents.planning.rewoo.agent import ReWOOAgent

agent = ReWOOAgent(name="rewoo", engine=AugLLMConfig(tools=[...]))
```

QueryAnalyzer → Researcher (search + RAG) → Synthesizer
```python
from haive.agents.research import create_research_agent

agent = create_research_agent(
    name="research",
    max_search_iterations=8,
    # Tavily auto-detected if TAVILY_API_KEY set, else mock
)
result = agent.run("What are the latest advances in quantum computing?")
```

Planner → Researcher → Analyzer → FactChecker → Writer
Shared ResearchStore across agents so the analyzer can retrieve what the researcher found.
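Independent of the actual ResearchStore API, that handoff amounts to threading one mutable store through the pipeline; class and method names below are illustrative, not haive's:

```python
class SharedStore:
    """Toy stand-in for a store shared across pipeline agents."""

    def __init__(self) -> None:
        self._items: list[dict] = []

    def add(self, source: str, content: str) -> None:
        self._items.append({"source": source, "content": content})

    def by_source(self, source: str) -> list[str]:
        return [i["content"] for i in self._items if i["source"] == source]

store = SharedStore()
# The researcher step writes findings...
store.add("researcher", "Quantum error correction improved in 2024.")
store.add("researcher", "New 1,000-qubit processors announced.")
# ...and the analyzer step later reads exactly what the researcher stored,
# without the findings having to fit in the message history.
print(len(store.by_source("researcher")))  # → 2
```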
```python
from haive.agents.research import create_deep_research_agent

agent = create_deep_research_agent(
    name="deep",
    include_fact_check=True,
    max_search_iterations=10,
)
result = agent.run("Compare React vs Vue.js for 2025 web development")
```

Six conversation patterns:
| Agent | Description |
|---|---|
| BaseConversationAgent | Foundation for all conversation types |
| CollaborativeConversation | Agents collaborate on a goal |
| DebateConversation | Structured debate with positions |
| DirectedConversation | Moderator-directed flow |
| RoundRobinConversation | Sequential turn-taking |
| SocialMediaConversation | Social media simulation with personalities |
```python
from haive.agents.conversation.debate.agent import DebateConversation

debate = DebateConversation(
    name="ai_safety_debate",
    engine=AugLLMConfig(),
    topic="Should AI development be paused?",
    n_rounds=3,
)
result = debate.run("Begin debate")
```

Strip the noisy LangGraph debug output and see clean agent execution:
```python
from haive.agents.utils.trace import run_traced

result = run_traced(agent, "Tell me about quantum computing", save_to="traces/")
# → Rich-formatted tree showing user message, AI response, tool calls,
#   tool results, timing, token usage, store contents
```

```bash
pip install haive-agents
```

For specific extras:

```bash
pip install haive-agents[memory]    # MemoryAgent + Neo4j support
pip install haive-agents[rag]       # RAG dependencies
pip install haive-agents[research]  # Tavily + research deps
```

```python
from haive.agents.simple.agent import SimpleAgent
from haive.agents.react.agent import ReactAgent
from haive.agents.multi.agent import MultiAgent
from haive.agents.memory import create_memory_agent
from haive.core.engine.aug_llm import AugLLMConfig
from langchain_core.tools import tool

# 1. Simple LLM agent
writer = SimpleAgent(
    name="writer",
    engine=AugLLMConfig(temperature=0.8, system_message="You are a writer."),
)

# 2. Tool-using agent
@tool
def search(query: str) -> str:
    """Search the web."""
    return f"Results for {query}"

researcher = ReactAgent(
    name="researcher",
    engine=AugLLMConfig(tools=[search], system_message="Use search tool."),
    max_iterations=3,
)

# 3. Compose into pipeline
pipeline = MultiAgent(
    name="research_pipeline",
    agents=[researcher, writer],
    execution_mode="sequential",
)
result = pipeline.run("Write an article about quantum computing")

# 4. Add persistent memory
memory_agent = create_memory_agent(
    name="assistant",
    user_id="user123",
    connection_string="postgresql://haive:haive@localhost/haive",
)
memory_agent.run("My name is Alice and I work at DeepMind on RL.")
memory_agent.run("What do you know about me?")  # Recalls everything
```

📖 Full documentation: https://pr1m8.github.io/haive-agents/
| Package | Description |
|---|---|
| haive-core | Foundation: engines, graphs, schemas |
| haive-tools | Tool implementations |
| haive-games | LLM-powered game agents |
| haive-mcp | Dynamic MCP integration |
MIT © pr1m8