News: ArXiv AI Papers, 2026-05-12

Belief or Circuitry? Causal Evidence for In-Context Graph Learning

arXiv:2605.08405v1 Announce Type: new
Abstract: How do LLMs learn in-context? Is it by pattern-matching recent tokens, or by inferring latent structure? We probe this question using a toy graph random-walk task across two competing graph structures. The task's answer is, in principle, decidable: either the model tracks global topology, or it copies local transitions. We present two lines of evidence t…
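To make the task setup concrete, below is a minimal sketch of generating random walks over two competing graph structures, in the spirit of the abstract. The node labels, graph shapes, walk lengths, and the ring-vs-shortcut choice are illustrative assumptions, not the paper's actual configuration.

```python
import random

def random_walk(adjacency, start, length, rng):
    """Sample a random walk of `length` steps over an adjacency dict."""
    node = start
    walk = [node]
    for _ in range(length):
        node = rng.choice(adjacency[node])
        walk.append(node)
    return walk

# Two hypothetical graphs sharing the same node vocabulary but differing in
# global topology: a ring vs. a graph with extra "shortcut" edges, so local
# transitions partly overlap while the overall structure differs.
nodes = list(range(8))
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in nodes}
shortcut = {i: [(i + 1) % 8, (i + 3) % 8] for i in nodes}

rng = random.Random(0)
# An in-context "prompt" of walks drawn from one graph; the probe asks whether
# a model's next-token predictions reflect that graph's global topology or
# merely copy recently seen local transitions.
prompt_walks = [random_walk(ring, rng.choice(nodes), 6, rng) for _ in range(4)]
for w in prompt_walks:
    print(" ".join(map(str, w)))
```

In such a setup, the walks from each graph are serialized into token sequences and fed in-context; the diagnostic comparison is between predictions consistent with the generating graph's topology and predictions that simply repeat locally observed transitions.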
