An advanced course on building a production-grade, continuously self-improving wiki that ingests Andrej Karpathy’s latest tweets and papers, converts them into a structured knowledge graph, and then autonomously reflects, critiques, detects contradictions, and evolves its own understanding. Each concept is introduced through visual-first geometric explanations of embeddings, similarity, clustering, and attention, followed by clean TypeScript + ai-sdk implementations.
Start by plotting 3D projections of 384- and 1536-dimensional embedding spaces using PCA and t-SNE; observe how semantically related Karpathy tweets cluster even without labels.
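Before reaching for a plotting library, the projection math itself fits in a few lines. Below is a minimal dependency-free sketch (illustrative names, plain TypeScript) of PCA via power iteration on the covariance matrix, yielding a 1-D version of the projection described above; t-SNE is far more involved and best left to a library.

```typescript
type Vec = number[];

function mean(points: Vec[]): Vec {
  const d = points[0].length;
  const m = new Array(d).fill(0);
  for (const p of points) for (let i = 0; i < d; i++) m[i] += p[i] / points.length;
  return m;
}

function center(points: Vec[]): Vec[] {
  const m = mean(points);
  return points.map(p => p.map((x, i) => x - m[i]));
}

// Top eigenvector of the covariance matrix via power iteration.
function topComponent(points: Vec[], iters = 100): Vec {
  const X = center(points);
  const d = X[0].length;
  // Random positive init; almost surely not orthogonal to the top eigenvector.
  let v: Vec = Array.from({ length: d }, () => Math.random() + 1e-3);
  for (let t = 0; t < iters; t++) {
    // w = Xᵀ(Xv): applies the covariance without materializing the d×d matrix.
    const Xv = X.map(row => row.reduce((s, x, i) => s + x * v[i], 0));
    const w = new Array(d).fill(0);
    for (let r = 0; r < X.length; r++)
      for (let i = 0; i < d; i++) w[i] += X[r][i] * Xv[r];
    const norm = Math.hypot(...w);
    v = w.map(x => x / norm);
  }
  return v;
}

// Project each point onto the component: a 1-D "semantic axis" of the corpus.
function project1D(points: Vec[]): number[] {
  const v = topComponent(points);
  const m = mean(points);
  return points.map(p => p.reduce((s, x, i) => s + (x - m[i]) * v[i], 0));
}
```

The same `project1D` idea, run with the top two or three components, produces the 2D/3D scatter plots used throughout the visual lessons.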
Derive cosine similarity from the angle between vectors via the dot product; interactively rotate two tweet vectors in 3D space and watch the similarity score change; connect this to retrieval relevance in a personal wiki.
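The geometric core is small enough to sketch directly in plain TypeScript; `rotate2D` is an illustrative helper for the rotation demo (2D for brevity, the 3D case adds one axis):

```typescript
// Dot product of two equal-length vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// cos(θ) between a and b: the dot product divided by the product of norms.
function cosineSimilarity(a: number[], b: number[]): number {
  return dot(a, b) / (Math.hypot(...a) * Math.hypot(...b));
}

// Rotating a vector by θ changes the angle, and therefore the similarity.
function rotate2D([x, y]: number[], theta: number): number[] {
  return [
    x * Math.cos(theta) - y * Math.sin(theta),
    x * Math.sin(theta) + y * Math.cos(theta),
  ];
}
```

Rotating a unit vector by 60° drops the similarity to cos(60°) = 0.5, which is exactly the live number the interactive demo would display.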
Apply geometric clustering to 50 recent Karpathy tweets; observe how outlier points signal potential contradictions or novel insights; introduce silhouette scores visually.
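The silhouette computation the lesson visualizes can be sketched in plain TypeScript. This assumes at least two clusters and pre-assigned labels; singleton-cluster handling is simplified relative to the standard definition:

```typescript
type Point = number[];

function euclid(a: Point, b: Point): number {
  return Math.hypot(...a.map((x, i) => x - b[i]));
}

// Mean silhouette score: s(i) = (b - a) / max(a, b), where a is the mean
// intra-cluster distance and b the mean distance to the nearest other cluster.
function silhouette(points: Point[], labels: number[]): number {
  const clusters = [...new Set(labels)];
  const scores = points.map((p, i) => {
    const meanDistTo = (label: number) => {
      const members = points.filter((_, j) => labels[j] === label && j !== i);
      if (members.length === 0) return 0; // simplified singleton handling
      return members.reduce((s, q) => s + euclid(p, q), 0) / members.length;
    };
    const a = meanDistTo(labels[i]);
    const b = Math.min(...clusters.filter(c => c !== labels[i]).map(meanDistTo));
    return (b - a) / Math.max(a, b);
  });
  return scores.reduce((s, x) => s + x, 0) / scores.length;
}
```

Scores near 1 mean tight, well-separated clusters; negative scores mark the outlier points the lesson treats as contradiction or novelty candidates.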
Render attention heatmaps as directed graphs in 2D projection; show how attention weights route information flow between knowledge nodes in a reflective loop.
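One way to turn attention rows into the directed edges of such a graph, assuming raw (pre-softmax) scores are available; `attentionEdges` and the display threshold are illustrative:

```typescript
// Numerically stable softmax: subtract the max before exponentiating.
function softmax(xs: number[]): number[] {
  const m = Math.max(...xs);
  const exps = xs.map(x => Math.exp(x - m));
  const z = exps.reduce((s, x) => s + x, 0);
  return exps.map(x => x / z);
}

interface AttnEdge { from: string; to: string; weight: number }

// Convert a (query node × key node) score matrix into directed edges,
// keeping only weights above a display threshold to avoid hairball graphs.
function attentionEdges(
  nodes: string[],
  scores: number[][],
  minWeight = 0.1,
): AttnEdge[] {
  const edges: AttnEdge[] = [];
  scores.forEach((row, i) => {
    softmax(row).forEach((w, j) => {
      if (w >= minWeight) edges.push({ from: nodes[i], to: nodes[j], weight: w });
    });
  });
  return edges;
}
```

Each edge weight is then a natural input for edge thickness or color in the 2D projection.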
Set up a robust TypeScript fetcher using Twitter API v2 and ai-sdk's generateObject to extract atomic knowledge claims from tweets.
Define Zod schemas for wiki entries (claim, context, sources, confidence) and use ai-sdk to produce validated structured objects directly from raw tweets.
Write TypeScript code to compute Voyage or OpenAI embeddings and store them with relationships in Neo4j; visualize the resulting graph in 3D.
Build a TypeScript retriever that queries both vector similarity and 2-hop graph neighborhoods; compare results in a visual dashboard.
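The graph half of that retriever reduces to a bounded breadth-first walk. A sketch with a plain `Map` standing in for Neo4j (the real system would issue a Cypher query such as a variable-length `MATCH` instead):

```typescript
// Adjacency-list graph: node id -> outgoing neighbor ids.
type Graph = Map<string, string[]>;

// Collect every node reachable within k hops of a seed node.
function kHopNeighborhood(graph: Graph, seed: string, k = 2): Set<string> {
  const seen = new Set<string>([seed]);
  let frontier = [seed];
  for (let hop = 0; hop < k; hop++) {
    const next: string[] = [];
    for (const node of frontier) {
      for (const nb of graph.get(node) ?? []) {
        if (!seen.has(nb)) {
          seen.add(nb);
          next.push(nb);
        }
      }
    }
    frontier = next;
  }
  seen.delete(seed); // return only the neighborhood, not the seed itself
  return seen;
}
```

The dashboard comparison then amounts to diffing this set against the top-k vector-similarity hits for the same seed.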
Cluster overlapping retrievals in embedding space and keep only the most central representative; implement this as a reusable ai-sdk tool function.
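The "most central representative" is the cluster medoid: the embedding with the highest mean cosine similarity to its peers. A dependency-free sketch of the selection step (`pickCentral` is an illustrative name; wrapping it as an ai-sdk tool is left to the lesson):

```typescript
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}

// Return the index of the medoid: the embedding with the highest
// mean cosine similarity to all the others in the cluster.
function pickCentral(embeddings: number[][]): number {
  let best = 0;
  let bestScore = -Infinity;
  embeddings.forEach((e, i) => {
    const others = embeddings.filter((_, j) => j !== i);
    const score = others.reduce((s, o) => s + cosine(e, o), 0) / others.length;
    if (score > bestScore) {
      bestScore = score;
      best = i;
    }
  });
  return best;
}
```

Using the medoid rather than the mean keeps an actual retrieved item as the representative, so the surviving node still has real text attached.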
Use the LLM to generate diverse query embeddings from a single seed question and combine their attention-weighted results.
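Once per-query result scores exist, the attention-weighted combination is a softmax over query-to-seed similarities. A sketch assuming the LLM rewriting step has already produced the queries, their seed similarities, and their retrieval scores:

```typescript
// Fuse result sets from several rewritten queries. Each set maps docId -> score;
// seedSim[q] is the similarity of query q to the original seed question.
function fuseResults(
  resultSets: Map<string, number>[],
  seedSim: number[],
): Map<string, number> {
  // Softmax over seed similarities gives each query an attention weight.
  const m = Math.max(...seedSim);
  const exps = seedSim.map(s => Math.exp(s - m));
  const z = exps.reduce((a, b) => a + b, 0);
  const weights = exps.map(e => e / z);

  // Weighted sum of per-document scores across all queries.
  const fused = new Map<string, number>();
  resultSets.forEach((set, q) => {
    set.forEach((score, doc) => {
      fused.set(doc, (fused.get(doc) ?? 0) + weights[q] * score);
    });
  });
  return fused;
}
```

Documents that several diverse queries agree on accumulate weight, while one-off hits are discounted rather than dropped.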
Create structured critique prompts that ask the model to evaluate factual accuracy, logical consistency, and novelty using geometric “distance to frontier” metaphors.
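A possible shape for such a critique prompt; the wording and the `WikiEntry` fields are illustrative, and the exact rubric is up to the lesson:

```typescript
interface WikiEntry {
  claim: string;
  context: string;
  sources: string[];
}

// Build a structured critique prompt covering the three rubric dimensions,
// phrasing novelty with the geometric "distance to frontier" metaphor.
function critiquePrompt(entry: WikiEntry): string {
  return [
    `Evaluate the following wiki claim and respond with a JSON object`,
    `containing a score from 0 to 1 for each dimension.`,
    ``,
    `Claim: ${entry.claim}`,
    `Context: ${entry.context}`,
    `Sources: ${entry.sources.join(", ")}`,
    ``,
    `Dimensions:`,
    `- factual_accuracy: does the claim match its cited sources?`,
    `- logical_consistency: does it cohere with its context?`,
    `- novelty: geometrically, how far does this claim sit from the frontier`,
    `  of what the wiki already states (0 = redundant, 1 = far beyond it)?`,
  ].join("\n");
}
```

The string would then be passed to ai-sdk's `generateObject` with a matching Zod schema so the three scores come back as validated numbers.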
Implement a contradiction engine that flags nodes whose embeddings are far apart yet have contradictory graph edges (e.g., 'supports' vs 'opposes'); visualize the conflicting pairs.
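One reading of that rule, sketched without Neo4j: flag a pair when its edges disagree ('supports' and 'opposes' both present), or when a 'supports' edge links embeddings that cosine similarity says are far apart. Thresholds and names are illustrative assumptions:

```typescript
interface KNode { id: string; embedding: number[] }
interface KEdge { from: string; to: string; relation: "supports" | "opposes" }

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  return dot / (Math.hypot(...a) * Math.hypot(...b));
}

function findContradictions(nodes: KNode[], edges: KEdge[], maxSim = 0.5) {
  const byId = new Map(nodes.map(n => [n.id, n]));
  // Group edge relations per unordered node pair.
  const relations = new Map<string, Set<string>>();
  for (const e of edges) {
    const key = [e.from, e.to].sort().join("|");
    if (!relations.has(key)) relations.set(key, new Set());
    relations.get(key)!.add(e.relation);
  }
  const flagged: { pair: [string, string]; similarity: number; reason: string }[] = [];
  for (const [key, rels] of relations) {
    const [a, b] = key.split("|") as [string, string];
    const sim = cosine(byId.get(a)!.embedding, byId.get(b)!.embedding);
    if (rels.has("supports") && rels.has("opposes")) {
      flagged.push({ pair: [a, b], similarity: sim, reason: "conflicting edge types" });
    } else if (rels.has("supports") && sim < maxSim) {
      flagged.push({ pair: [a, b], similarity: sim, reason: "supports edge but distant embeddings" });
    }
  }
  return flagged;
}
```

The flagged pairs, with their similarity attached, feed directly into the conflicting-pair visualization.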
Build a loop that alternates between critique, edit, and re-embed; use ai-sdk's tool calling to enforce safe knowledge-graph mutation.
Define a merging algorithm that collapses near-duplicate nodes by averaging embeddings and synthesizing a new canonical statement using the LLM.
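The geometric half of the merge is just an average plus renormalization; synthesizing the canonical statement would be an LLM call (e.g. via ai-sdk), stubbed here as a parameter. Field names are illustrative:

```typescript
interface WikiNode {
  id: string;
  statement: string;
  embedding: number[];
}

// Collapse two near-duplicate nodes into one canonical node. The caller
// supplies the LLM-synthesized statement; here we only do the vector math.
function mergeNodes(a: WikiNode, b: WikiNode, canonical: string): WikiNode {
  const avg = a.embedding.map((x, i) => (x + b.embedding[i]) / 2);
  const norm = Math.hypot(...avg);
  return {
    id: `${a.id}+${b.id}`,
    statement: canonical,
    // Renormalize to unit length so downstream cosine math stays consistent.
    embedding: avg.map(x => x / norm),
  };
}
```

A reasonable refinement is to re-embed the synthesized canonical statement instead of averaging, trading an extra embedding call for a vector that actually matches the new text.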
Extend the system to ingest Karpathy-related arXiv papers, embed key sections, and link them to existing wiki nodes via citation and conceptual similarity.
Use Temporal or a simple cron + queue to run nightly reflection passes; add observability so the system logs its own “aha” moments and confidence changes.
Create a React/Next.js frontend that renders 3D graph evolution over time using Three.js and force-directed layouts synced with Neo4j.
Implement a review queue where the agent proposes changes but defers controversial edits to human review; track override impact on future self-evolution.
Run the full agent on 100 new Karpathy tweets, measure knowledge quality improvements across three reflection cycles, and write a short report linking observed behaviors to recent papers on reflective agents.