The Gradient · 2026-02-18

After Orthogonality: Virtue-Ethical Agency and AI Alignment

Preface

This essay argues that rational people don’t have goals, and that rational AIs shouldn’t have goals. Human actions are rational not because we direct them at some final ‘goals,’ but because we align actions to practices[1]: networks of actions, action-dispositions, action-evaluation criteria…
