Curated signals from AI-safety researchers arguing that artificial superintelligence (ASI) could arrive soon—and what that implies for control, governance, and alignment.
Roman Yampolskiy • The Diary of a CEO (transcript)
Yampolskiy argues modern scaling could yield AGI soon and that control is unlikely once superintelligence arrives.
Yoshua Bengio (Chair) • UK Government / International AI Safety Report
The report synthesizes expert input from 30+ countries and warns that current evaluations miss hazards in general-purpose systems.
Geoffrey Hinton • The Guardian
Hinton revises his estimate of AI-driven extinction risk upward and urges stronger governance and safety evaluation.
Eliezer Yudkowsky • LessWrong
Proposes a moratorium on large training runs, with enforcement up to airstrikes on rogue datacenters, citing uncontrolled-takeoff risk.
We feature empirically grounded signals from senior researchers (papers, reports, talks, interviews) arguing for short AGI/ASI timelines or substantial misalignment risk. Each item links to a primary source and carries topical tags and a brief summary.
Tip: in production, auto-ingest via RSS/APIs and keep a public changelog.
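As a minimal sketch of that tip, the following Python script polls RSS feeds with the feedparser library, dedupes by link, and appends new items to a changelog file. The feed URLs, the `changelog.jsonl` filename, and the dedupe-by-link scheme are illustrative assumptions, not part of this project.

```python
"""Sketch: poll RSS feeds, dedupe by link, append new items to a changelog.

Feed URLs and file layout below are illustrative assumptions.
"""
import json
from pathlib import Path

import feedparser  # pip install feedparser

# Hypothetical feeds; swap in the sources you actually track.
FEEDS = [
    "https://www.lesswrong.com/feed.xml",
    "https://www.theguardian.com/technology/artificialintelligenceai/rss",
]
CHANGELOG = Path("changelog.jsonl")  # one JSON object per ingested item


def seen_links() -> set[str]:
    """Links already recorded, so reruns don't duplicate entries."""
    if not CHANGELOG.exists():
        return set()
    return {
        json.loads(line)["link"]
        for line in CHANGELOG.read_text().splitlines()
        if line.strip()
    }


def ingest() -> int:
    """Fetch every feed and append unseen items to the changelog."""
    seen = seen_links()
    added = 0
    with CHANGELOG.open("a") as log:
        for url in FEEDS:
            for entry in feedparser.parse(url).entries:
                link = entry.get("link", "")
                if not link or link in seen:
                    continue
                log.write(json.dumps({
                    "title": entry.get("title", ""),
                    "link": link,
                    "published": entry.get("published", ""),
                    "source": url,
                }) + "\n")
                seen.add(link)
                added += 1
    return added


if __name__ == "__main__":
    print(f"ingested {ingest()} new item(s)")
```

Appending JSON Lines keeps the changelog diff-friendly and publicly auditable; a cron job or CI schedule can run the script periodically, with tagging and summarization layered on as a separate curation pass.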