ASI Watch

Curated signals from AI-safety researchers arguing that artificial superintelligence (ASI) could arrive soon—and what that implies for control, governance, and alignment.

Methodology & counterpoints

We feature empirically grounded signals from senior researchers (papers, reports, talks, interviews) arguing for short AGI/ASI timelines or substantial misalignment risk. Each item links to its primary source and is annotated with topical tags and a brief summary.
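
A minimal sketch of how one such annotated item could be represented; the field names and the Python dataclass below are illustrative assumptions, not the project's actual data model.

    from dataclasses import dataclass, field

    @dataclass
    class Signal:
        """One curated item: a primary source plus editorial annotations."""
        title: str
        url: str                 # link to the primary source (paper, report, talk, interview)
        author: str              # researcher or group the signal comes from
        tags: list[str] = field(default_factory=list)  # topical tags, e.g. ["timelines", "alignment"]
        summary: str = ""        # brief editorial summary

    # Hypothetical example entry (placeholder title, URL, and author).
    example = Signal(
        title="Example report on AGI timelines",
        url="https://example.org/report",
        author="Example Researcher",
        tags=["timelines"],
        summary="Argues that current scaling trends imply short timelines.",
    )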

Tip: in production, auto-ingest new items via RSS feeds or APIs and keep a public changelog.
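
A minimal sketch of such an ingest step, assuming the third-party feedparser library and a placeholder feed URL; a real pipeline would add deduplication, tagging, and changelog updates.

    import feedparser  # third-party: pip install feedparser

    FEEDS = [
        "https://example.org/ai-safety/feed.xml",  # placeholder feed URL
    ]

    def fetch_new_items(feeds):
        """Pull entries from each RSS/Atom feed and return basic item records."""
        items = []
        for url in feeds:
            parsed = feedparser.parse(url)
            for entry in parsed.entries:
                items.append({
                    "title": entry.get("title", ""),
                    "url": entry.get("link", ""),
                    "published": entry.get("published", ""),
                })
        return items

    if __name__ == "__main__":
        for item in fetch_new_items(FEEDS):
            print(item["title"], item["url"])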