AI ResearchKR

From Logit Lens to Tuned Lens: Reading the Intermediate Thoughts of Transformers

What happens inside an LLM between input and output? Logit Lens and Tuned Lens let us observe how Transformers build predictions layer by layer.

You type "The capital of France is" into an LLM and get back "Paris." But *where* inside the model did that answer actually form?

TL;DR

  • Logit Lens projects intermediate hidden states to vocabulary space using the model's final unembedding matrix
  • This reveals how Transformers build predictions incrementally, layer by layer
  • Tuned Lens fixes Logit Lens's systematic bias by learning a lightweight affine transformation per layer
  • Together, these tools give us a principled way to peek inside the black box
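The core mechanic behind both lenses can be sketched in a few lines: take an intermediate hidden state, apply the model's final normalization and unembedding, and read off vocabulary logits. Tuned Lens additionally passes the hidden state through a learned per-layer affine map first. The sketch below is illustrative NumPy, not tied to any specific model library; all parameter names (`ln_gamma`, `ln_beta`, `W_U`, `A`, `b`) are assumptions standing in for the real weights.

```python
import numpy as np

def logit_lens(h, ln_gamma, ln_beta, W_U, eps=1e-5):
    """Project an intermediate hidden state h to vocabulary logits
    using the model's *final* LayerNorm and unembedding matrix W_U --
    the same readout the last layer uses."""
    mu = h.mean(-1, keepdims=True)
    var = h.var(-1, keepdims=True)
    normed = (h - mu) / np.sqrt(var + eps) * ln_gamma + ln_beta
    return normed @ W_U  # shape: (vocab_size,)

def tuned_lens(h, A, b, ln_gamma, ln_beta, W_U):
    """Tuned Lens: first correct h with a learned per-layer affine
    map (A, b), then apply the same final readout."""
    return logit_lens(h @ A + b, ln_gamma, ln_beta, W_U)

# Toy shapes: d_model=4, vocab=3 (illustrative, not a real model)
rng = np.random.default_rng(0)
h = rng.normal(size=4)
W_U = rng.normal(size=(4, 3))
logits = logit_lens(h, np.ones(4), np.zeros(4), W_U)
tuned = tuned_lens(h, np.eye(4), np.zeros(4), np.ones(4), np.zeros(4), W_U)
print(logits.shape)  # (3,)
```

With `A` initialized to the identity and `b` to zero, Tuned Lens reduces exactly to Logit Lens; training then nudges `(A, b)` per layer to undo the systematic bias of reusing the final unembedding on earlier representations.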

1. Where Does the Answer Come From?

When you ask GPT "What is the capital of France?", the answer "Paris" comes back in milliseconds. But think about it: the model has 32, 48, or even 96 layers. Did it "know" the answer at Layer 1? Did it only figure it out at the very last layer? Or did it gradually build up confidence somewhere in the middle?

This is not just a philosophical question. If we could watch the model's internal state evolve from layer to layer, we would gain real insight into *how* these models think. We could debug failures, detect hallucinations, and understand what each layer actually contributes.
