Mark Brakel on the UK AI Summit and the Future of AI Policy
Mark Brakel (Director of Policy at the Future of Life Institute) joins the podcast to discuss the AI Safety Summit in Bletchley Park, objections to AI policy, AI regulation in the EU and US, global institutions for safe AI, and autonomy in weapon systems.
Timestamps:
00:00 AI Safety Summit in the UK
12:18 Are officials up to date on AI?
23:22 Objections to AI policy
31:27 The EU AI Act
43:37 The right level of regulation
57:11 Risks and regulatory tools
1:04:44 Open-source AI
1:14:56 Subsidising AI safety research
1:26:29 Global institutions for safe AI
1:34:34 Autonomy in weapon systems
Peter Wildeford discusses methods for forecasting AI progress and why he sees AI as neither a bubble nor a normal technology, covering economic effects, national security, cyber capabilities, robotics, export controls, and prediction markets.
Physician-scientist Emilia Javorsky argues that curing cancer is limited less by raw intelligence than by biology's complexity, data quality, and incentives, and explores realistic uses of AI in drug development, clinical trials, and reducing medical bureaucracy.
Emilia Javorsky explores how AI can realistically aid cancer research, where current hype exceeds evidence, and what changes researchers, policymakers, and funders must make to turn AI advances into real clinical impact.