A focused curriculum that replaces manual matrix calculation with TypeScript implementations, specifically designed to build the mathematical intuition required to understand Transformer architectures and vector databases.
How word meanings are turned into spatial coordinates and compared using vector arithmetic.
Moving from strings to numerical coordinates and why 'direction' represents semantic meaning.
Coding the dot product from scratch and visualizing how it measures alignment between two embedding vectors.
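A minimal sketch of the kind of from-scratch implementation this lesson builds (the function name is illustrative):

```typescript
// Dot product of two equal-length embedding vectors.
// Positive values indicate alignment, zero orthogonality, negative opposition.
function dotProduct(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("Vectors must share a dimension");
  return a.reduce((sum, value, i) => sum + value * b[i], 0);
}

// Two vectors pointing in similar directions score high:
dotProduct([1, 2], [2, 3]); // 1*2 + 2*3 = 8
```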
How to implement L2 normalization to ensure length doesn't distort semantic similarity in LLMs.
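A sketch of what L2 normalization looks like in code — rescaling a vector to unit length so that comparisons depend only on direction, not magnitude (function names are illustrative):

```typescript
// L2 (Euclidean) norm: the geometric length of the vector.
function l2Norm(v: number[]): number {
  return Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
}

// Rescale to unit length so dot products compare direction only.
function normalize(v: number[]): number[] {
  const norm = l2Norm(v);
  if (norm === 0) throw new Error("Cannot normalize the zero vector");
  return v.map((x) => x / norm);
}

normalize([3, 4]); // [0.6, 0.8] — the length is now exactly 1
```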
How matrices act as the 'engines' that transform input embeddings into useful hidden states.
Representing weight matrices in TypeScript and understanding them as foundations for neural layers.
Coding a transformation function to see how a matrix 'moves' a vector from one position to another.
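One possible shape for this transformation function, using a rotation matrix as a visual example of a matrix "moving" a vector (the type and function names are illustrative):

```typescript
// A weight matrix as an array of rows; each row produces one output dimension.
type Matrix = number[][];

// Matrix–vector multiplication: each output component is the dot product
// of one matrix row with the input vector.
function transform(weights: Matrix, input: number[]): number[] {
  return weights.map((row) =>
    row.reduce((sum, w, i) => sum + w * input[i], 0)
  );
}

// A 90° rotation matrix "moves" [1, 0] to [0, 1]:
const rotate90: Matrix = [
  [0, -1],
  [1, 0],
];
transform(rotate90, [1, 0]); // [0, 1]
```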
Understanding how a matrix transformation looks for specific patterns or 'features' in the input data.
The mechanics of moving data between different levels of abstraction (latent spaces).
Implementing the projection formula to see how one vector can be 'mapped' onto another, mimicking the core of attention scores.
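A sketch of the projection formula proj_b(a) = ((a·b) / (b·b)) · b; the scalar coefficient plays the same "how much of b is in a" role that query–key dot products play in attention scoring (function names are illustrative):

```typescript
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, x, i) => sum + x * b[i], 0);
}

// Project vector a onto vector b: scale b by how strongly a points along it.
function projectOnto(a: number[], b: number[]): number[] {
  const scale = dot(a, b) / dot(b, b);
  return b.map((x) => x * scale);
}

projectOnto([3, 4], [1, 0]); // [3, 0] — the component of a along the x-axis
```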
Why 1536-dimensional vectors (like OpenAI embeddings) behave differently from their 2D/3D visualizations.
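One concrete difference can be demonstrated in a few lines: in 2D, two random vectors are often strongly aligned or opposed, but in 1536D their cosine similarity concentrates tightly around zero — random directions are almost always nearly orthogonal (a sketch; function names are illustrative):

```typescript
// Cosine similarity: the dot product of unit-normalized vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// A random vector with components uniform in [-1, 1].
function randomVector(dim: number): number[] {
  return Array.from({ length: dim }, () => Math.random() * 2 - 1);
}

// In high dimensions, random vectors are nearly orthogonal:
cosine(randomVector(1536), randomVector(1536)); // typically within ±0.05 of 0
```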
Writing a check to see if your data vectors provide new information or are just 'echoes' of existing dimensions.
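A possible version of that check, assuming the existing vectors are mutually orthogonal (or have been orthogonalized first): subtract from the candidate its projection onto each known direction, and see whether anything meaningful remains (function names and the epsilon threshold are illustrative):

```typescript
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

// Residual test: remove the candidate's component along each basis vector.
// If almost nothing is left, the candidate is an "echo" — a linear
// combination of directions we already have.
function addsNewInformation(
  basis: number[][],
  candidate: number[],
  epsilon = 1e-9
): boolean {
  let residual = candidate.slice();
  for (const b of basis) {
    const coeff = dot(residual, b) / dot(b, b);
    residual = residual.map((x, i) => x - coeff * b[i]);
  }
  return Math.sqrt(dot(residual, residual)) > epsilon;
}

addsNewInformation([[1, 0, 0], [0, 1, 0]], [3, -2, 0]); // false — an echo
addsNewInformation([[1, 0, 0], [0, 1, 0]], [0, 0, 5]);  // true — a new direction
```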
Applying all previous modules to implement the specific linear algebra operations found in the Transformer architecture.
Using TypeScript to implement the three-way split of an embedding into Query, Key, and Value vectors used in Self-Attention.
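A sketch of that split: one embedding is projected by three learned weight matrices into a Query, a Key, and a Value vector. The matrix values below are illustrative stand-ins for learned weights, and the function names are assumptions:

```typescript
type Matrix = number[][];

// Multiply an input vector by a weight matrix (rows = output dimensions).
function matVec(weights: Matrix, input: number[]): number[] {
  return weights.map((row) =>
    row.reduce((sum, w, i) => sum + w * input[i], 0)
  );
}

// The three-way split: the same embedding, seen through three different
// learned projections.
function toQKV(
  embedding: number[],
  wQ: Matrix,
  wK: Matrix,
  wV: Matrix
): { query: number[]; key: number[]; value: number[] } {
  return {
    query: matVec(wQ, embedding),
    key: matVec(wK, embedding),
    value: matVec(wV, embedding),
  };
}
```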
Combining vector scaling and addition to compute the final context vector for a token.
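That combination can be sketched as a weighted sum: scale each Value vector by its attention weight, then add them all together (function name illustrative):

```typescript
// Weighted sum of value vectors: the token's context vector is each value
// scaled by its attention weight, accumulated dimension by dimension.
function contextVector(weights: number[], values: number[][]): number[] {
  const dim = values[0].length;
  const context = new Array(dim).fill(0);
  for (let t = 0; t < values.length; t++) {
    for (let d = 0; d < dim; d++) {
      context[d] += weights[t] * values[t][d];
    }
  }
  return context;
}

// 70% attention on the first token, 30% on the second:
contextVector([0.7, 0.3], [[1, 0], [0, 1]]); // [0.7, 0.3]
```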
The mathematical reason behind the 'Square Root of d_k' division in the Attention formula.
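The intuition can be checked empirically: dot products of d_k-dimensional vectors with unit-variance components have variance d_k, so their typical magnitude grows like √d_k, and dividing by √d_k keeps scores at unit scale (which keeps softmax out of its saturated regime). A simulation sketch, with illustrative function names:

```typescript
// Box–Muller transform: sample from a standard normal distribution.
function randomGaussian(): number {
  const u = 1 - Math.random(); // shift to (0, 1] so Math.log is safe
  const v = Math.random();
  return Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

// Empirical standard deviation of (optionally scaled) random dot products.
function sampleStd(dK: number, scaled: boolean, trials = 2000): number {
  let sumSq = 0;
  for (let t = 0; t < trials; t++) {
    let dot = 0;
    for (let i = 0; i < dK; i++) dot += randomGaussian() * randomGaussian();
    if (scaled) dot /= Math.sqrt(dK);
    sumSq += dot * dot;
  }
  return Math.sqrt(sumSq / trials);
}

sampleStd(64, false); // ≈ 8 — raw scores spread out like sqrt(64)
sampleStd(64, true);  // ≈ 1 — scaled scores stay at unit scale
```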