Claude Autoresearch Skill — Autonomous goal-directed iteration for Claude Code. Inspired by Karpathy's autoresearch. Modify → Verify → Keep/Discard → Repeat forever.
Updated Apr 6, 2026 - Shell
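The Modify → Verify → Keep/Discard → Repeat loop described above is a simple accept-if-better search. A minimal sketch in Python (all function names here are illustrative, not the skill's actual API):

```python
import random

random.seed(0)  # reproducible toy run

def autoresearch_loop(state, propose, score, max_iters=100):
    """Hill-climbing sketch of the autoresearch loop:
    propose a modification, verify it against an objective,
    keep it only if it improves, otherwise discard it."""
    best = score(state)
    for _ in range(max_iters):
        candidate = propose(state)   # Modify
        s = score(candidate)         # Verify
        if s > best:                 # Keep
            state, best = candidate, s
        # else: Discard and repeat
    return state, best

# Toy usage: maximize -(x - 3)^2 by random perturbation of x
state, best = autoresearch_loop(
    0.0,
    propose=lambda x: x + random.uniform(-1, 1),
    score=lambda x: -(x - 3) ** 2,
)
```

In the real systems listed here, `score` would be an actual verifier (tests, a benchmark, a judge model) and `propose` an LLM edit; the keep-only-if-verified structure is what makes the loop safe to run indefinitely.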
A curated list of autonomous improvement loops, research agents, and autoresearch-style systems inspired by Karpathy's autoresearch.
The knowledge compiler. Raw sources in, interlinked wiki out. Inspired by Karpathy's LLM Wiki pattern.
A complete GPT language model (training and inference) in ~600 lines of pure C#, zero dependencies
Agent Skills-compatible LLM wiki for Claude Code, Cursor, and Codex. Build a Karpathy-style knowledge base from raw sources, citations, and linting.
Inspired by Karpathy's LLM Wiki. Local-first RAG knowledge base compiler with MCP server (Claude Code, Codex, OpenCode, OpenClaw). Turn raw research into a persistent markdown wiki, knowledge graph, and hybrid search that compound over time.
Detailed Python notes & code for the lectures and exercises of Andrej Karpathy's course "Neural Networks: Zero to Hero," which focuses on building neural networks from scratch.
GPT in a QR Code; the most atomic way to train and run inference on a GPT in pure, dependency-free JS/Python.
Karpathy’s LLM Wiki, 100% local with Ollama. Drop Markdown notes → AI extracts concepts → your Obsidian wiki auto-links and grows. Zero cloud. Zero sharing. Your notes stay yours.
LLM-powered knowledge base from your Claude Code, Codex CLI, Copilot, Cursor & Gemini sessions. Karpathy's LLM Wiki pattern — implemented and shipped.
Memoriki - LLM Wiki + MemPalace. Personal knowledge base with real memory.
The Karpathy treatment for OpenClaw - stripped to ε. 515 lines of Python, 6 files, one dependency.
Build Karpathy's LLM Wiki with Claude Code. L1/L2 cache architecture. Logseq + Obsidian support.
Train Llama 3 models from scratch. Any scale, any personality. By Arianna Method.
Train your own ChatGPT on Apple Silicon — MLX port of nanochat
A git template for building your own LLM-powered personal wiki. Training period, metadata standard, lint system included. Clone and go.
The Self-Growing Karpathy LLM Wiki — grown by an AI agent yoyo from Karpathy's founding prompt
⚡ The AI Slop Index linter inspired by @karpathy. Detects hallucinated imports, any-type abuse, vibe coding, and soulless patterns in TypeScript/JavaScript/React. Fast (~4s), educational, CI-ready. npx karpeslop@latest
Autonomous goal-directed iteration for Gemini CLI. Inspired by Karpathy's autoresearch. Modify → Verify → Keep/Discard → Repeat forever.
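One recurring idea in the linters listed above is detecting hallucinated imports: modules a model invented that aren't declared dependencies. A minimal sketch of that check (names and regex are illustrative, not any listed project's actual implementation):

```python
import re

def find_hallucinated_imports(source: str, declared_deps: set) -> list:
    """Flag bare-module imports in JS/TS source whose root package is not
    a declared dependency. Relative imports (./ or ../) are skipped.
    Illustrative only; a real linter handles scopes, require(), etc."""
    pattern = re.compile(r"""import\s+(?:[\w{},*\s]+\s+from\s+)?['"]([^'"]+)['"]""")
    flagged = []
    for module in pattern.findall(source):
        if module.startswith("."):
            continue  # local file, not a package
        package = module.split("/")[0]  # root of the import specifier
        if package not in declared_deps:
            flagged.append(module)
    return flagged

code = '''
import React from "react";
import { magic } from "definitely-real-pkg";
import helper from "./utils";
'''
print(find_hallucinated_imports(code, {"react"}))  # → ['definitely-real-pkg']
```

The same shape (parse declarations, diff against a manifest) generalizes to Python `import` vs. `requirements.txt`.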