News ArXiv AI Papers 2026-05-12

Belief or Circuitry? Causal Evidence for In-Context Graph Learning

arXiv:2605.08405v1 Announce Type: new Abstract: How do LLMs learn in-context? Is it by pattern-matching recent tokens, or by inferring latent structure? We probe this question using a toy random-walk task over two competing graph structures. In this task the answer is, in principle, decidable: either the model tracks global topology, or it copies local transitions. We present two lines of evidence t
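The probe described above generates in-context sequences as random walks over a fixed graph; a model that has inferred the topology should only ever predict valid neighbors. A minimal sketch of such a task generator (the specific graphs and node labels here are illustrative assumptions, not the paper's setup):

```python
import random

def random_walk(adj, start, length, rng):
    """Sample a walk of `length` steps over adjacency dict `adj`."""
    path = [start]
    node = start
    for _ in range(length):
        node = rng.choice(adj[node])  # local transition: uniform over neighbors
        path.append(node)
    return path

# Two competing graph structures over the same node set (hypothetical
# examples): a 4-cycle vs. a hub graph where every node touches node 0.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
hub = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}

rng = random.Random(0)
walk = random_walk(cycle, start=0, length=20, rng=rng)

# Every consecutive pair in the walk is an edge of the generating graph,
# so a topology-tracking model should assign mass only to valid neighbors.
assert all(b in cycle[a] for a, b in zip(walk, walk[1:]))
```

Serializing such walks as token sequences gives an in-context dataset where "tracks global topology" and "copies local transitions" make different predictions at nodes shared by both graphs.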
