Machine Learning Street Talk (MLST)

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour; we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Dr. Tim Scarfe (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a PhD from MIT (https://www.linkedin.com/in/dr-keith-duggar/).

  • 148 - Connor Leahy - e/acc, AGI and the future.

    Connor is the CEO of Conjecture and one of the most famous names in the AI alignment movement. This episode collects the behind-the-scenes footage and bonus Patreon interviews from the day of the Beff Jezos debate, including an interview with Daniel Clothiaux. It's a great insight into Connor's philosophy. At the end there is an unreleased additional interview with Beff.


    Support MLST:

    Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, very early access, exclusive content and lots more.

    https://patreon.com/mlst

    Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA

    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail


    Topics:

    Externalized cognition and the role of society and culture in human intelligence

    The potential for AI systems to develop agency and autonomy

    The future of AGI as a complex mixture of various components

    The concept of agency and its relationship to power

    The importance of coherence in AI systems

    The balance between coherence and variance in exploring potential upsides

    The role of dynamic, competent, and incorruptible institutions in handling risks and developing technology

    Concerns about AI widening the gap between the haves and have-nots

    The concept of equal access to opportunity and maintaining dynamism in the system

    Leahy's perspective on life as a process that "rides entropy"

    The importance of distinguishing between epistemological, decision-theoretic, and aesthetic aspects of morality (including a reference to Hume's Guillotine)

    The concept of continuous agency and the idea that the first AGI will be a messy admixture of various components

    The potential for AI systems to become more physically embedded in the future

    The challenges of aligning AI systems and the societal impacts of AI technologies like ChatGPT and Bing

    The importance of humility in the face of complexity when considering the future of AI and its societal implications


    Disclaimer: this video is not an endorsement of e/acc or of AGI agential existential risk views - the hosts of MLST consider both positions to be quite extreme. We seek diverse views on the channel.


    00:00:00 Intro

    00:00:56 Connor's Philosophy

    00:03:53 Office Skit

    00:05:08 Connor on e/acc and Beff

    00:07:28 Intro to Daniel's Philosophy

    00:08:35 Connor on Entropy, Life, and Morality

    00:19:10 Connor on London

    00:20:21 Connor Office Interview

    00:20:46 Friston Patreon Preview

    00:21:48 Why Are We So Dumb?

    00:23:52 The Voice of the People, the Voice of God / Populism

    00:26:35 Mimetics

    00:30:03 Governance

    00:33:19 Agency

    00:40:25 Daniel Interview - Externalised Cognition, Bing GPT, AGI

    00:56:29 Beff + Connor Bonus Patreons Interview

    Sun, 21 Apr 2024 - 1h 19min
  • 147 - Prof. Chris Bishop's NEW Deep Learning Textbook!

    Professor Chris Bishop is a Technical Fellow and Director at Microsoft Research AI4Science in Cambridge. He is also Honorary Professor of Computer Science at the University of Edinburgh, and a Fellow of Darwin College, Cambridge. In 2004, he was elected Fellow of the Royal Academy of Engineering, in 2007 he was elected Fellow of the Royal Society of Edinburgh, and in 2017 he was elected Fellow of the Royal Society. Chris was a founding member of the UK AI Council, and in 2019 he was appointed to the Prime Minister's Council for Science and Technology.


    At Microsoft Research, Chris oversees a global portfolio of industrial research and development, with a strong focus on machine learning and the natural sciences.

    Chris obtained a BA in Physics from Oxford, and a PhD in Theoretical Physics from the University of Edinburgh, with a thesis on quantum field theory.


    Chris's contributions to the field of machine learning have been truly remarkable. He authored what is arguably the definitive textbook in the field, 'Pattern Recognition and Machine Learning' (PRML), which has served as an essential reference for countless students and researchers around the world; it followed his highly acclaimed first textbook, 'Neural Networks for Pattern Recognition'.


    Recently, Chris has co-authored a new book with his son, Hugh, titled 'Deep Learning: Foundations and Concepts.' This book aims to provide a comprehensive understanding of the key ideas and techniques underpinning the rapidly evolving field of deep learning. It covers both the foundational concepts and the latest advances, making it an invaluable resource for newcomers and experienced practitioners alike.


    Buy Chris' textbook here:

    https://amzn.to/3vvLcCh


    More about Prof. Chris Bishop:

    https://en.wikipedia.org/wiki/Christopher_Bishop

    https://www.microsoft.com/en-us/research/people/cmbishop/


    Support MLST:

    Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access + exclusive content and lots more.

    https://patreon.com/mlst

    Donate: https://www.paypal.com/donate/?hosted_button_id=K2TYRVPBGXVNA

    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail


    TOC:

    00:00:00 - Intro to Chris

    00:06:54 - Changing Landscape of AI

    00:08:16 - Symbolism

    00:09:32 - PRML

    00:11:02 - Bayesian Approach

    00:14:49 - Are NNs One Model or Many, Special vs General

    00:20:04 - Can Language Models Be Creative

    00:22:35 - Sparks of AGI

    00:25:52 - Creativity Gap in LLMs

    00:35:40 - New Deep Learning Book

    00:39:01 - Favourite Chapters

    00:44:11 - Probability Theory

    00:45:42 - AI4Science

    00:48:31 - Inductive Priors

    00:58:52 - Drug Discovery

    01:05:19 - Foundational Bias Models

    01:07:46 - How Fundamental Is Our Physics Knowledge?

    01:12:05 - Transformers

    01:12:59 - Why Does Deep Learning Work?

    01:16:59 - Inscrutability of NNs

    01:18:01 - Example of Simulator

    01:21:09 - Control

    Wed, 10 Apr 2024 - 1h 22min
  • 146 - Philip Ball - How Life Works

    Dr. Philip Ball is a freelance science writer. He has just written a book called "How Life Works", discussing how the science of biology has advanced in the last 20 years. We focus in particular on the concept of agency.


    He trained as a chemist at the University of Oxford, and as a physicist at the University of Bristol. He worked previously at Nature for over 20 years, first as an editor for physical sciences and then as a consultant editor. His writings on science for the popular press have covered topical issues ranging from cosmology to the future of molecular biology.


    YT: https://www.youtube.com/watch?v=n6nxUiqiz9I


    Transcript link in the YT description


    Philip is the author of many popular books on science, including H2O: A Biography of Water, Bright Earth: The Invention of Colour, The Music Instinct and Curiosity: How Science Became Interested in Everything. His book Critical Mass won the 2005 Aventis Prize for Science Books, while Serving the Reich was shortlisted for the Royal Society Winton Science Book Prize in 2014.


    This is one of Tim's personal favourite MLST shows, so we have designated it a special edition. Enjoy!


    Buy Philip's book "How Life Works" here: https://amzn.to/3vSmNqp


    Support MLST:

    Please support us on Patreon. We are entirely funded from Patreon donations right now. Patreon supporters get private Discord access, biweekly calls, early access + exclusive content and lots more.

    https://patreon.com/mlst

    Donate: https://www.paypal.com/donate/?hosted...

    If you would like to sponsor us so we can tell your story, reach out at mlstreettalk at gmail

    Sun, 07 Apr 2024 - 2h 09min
  • 145 - Dr. Paul Lessard - Categorical/Structured Deep Learning

    Dr. Paul Lessard and his collaborators have written a paper, "Categorical Deep Learning: An Algebraic Theory of Architectures". They aim to make neural networks more interpretable, composable and amenable to formal reasoning. The key is mathematical abstraction, as exemplified by category theory - using monads to develop a more principled, algebraic approach to structuring neural networks.


    We also discussed the limitations of current neural network architectures in terms of their ability to generalise and reason in a human-like way - in particular, the inability of neural networks to perform unbounded computation equivalent to a Turing machine (a toy illustration of this distinction follows below). Paul expressed optimism that this is not a fundamental limitation, but an artefact of current architectures and training procedures.
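
    As a toy illustration of that distinction (our own sketch, not from the episode): a feedforward network applies a fixed number of transformations chosen when the architecture is defined, whereas a Turing-machine-style program can loop for an input-dependent, unbounded number of steps.

    def fixed_depth_net(x, layers=4):
        # A feedforward net applies a *fixed* number of transformations,
        # chosen in advance, no matter what the input is.
        for _ in range(layers):
            x = max(0.0, 0.5 * x + 1.0)  # stand-in for one learned layer
        return x

    def unbounded_program(n):
        # A Turing-machine-style computation can loop for as long as the
        # input demands (here: a Collatz-style iteration count).
        steps = 0
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
            steps += 1
        return steps

    print(fixed_depth_net(10.0), unbounded_program(27))  # 2.5 and 111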


    We also touched on the power of abstraction: it allows us to focus on the essential structure of a problem while ignoring extraneous details, which can make certain problems more tractable to reason about. Paul sees category theory as providing a powerful "Lego set" for productively thinking about many practical problems.


    Towards the end, Paul gave an accessible introduction to some core concepts in category theory, such as categories, morphisms, functors and monads, and explained how these abstract constructs can capture essential patterns that arise across different domains of mathematics (see the sketch below).
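
    To make the monad idea concrete, here is a minimal sketch in Python of the Maybe monad, with unit and bind written out explicitly. The function names and the pipeline example are our own illustration and do not come from the paper.

    from typing import Callable, Optional, TypeVar

    A = TypeVar("A")
    B = TypeVar("B")

    # The Maybe monad: a value that may be absent. `unit` wraps a plain
    # value; `bind` chains a computation, short-circuiting on None.
    def unit(x: A) -> Optional[A]:
        return x

    def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
        return None if m is None else f(m)

    # Two partial functions composed safely via bind:
    def safe_div(x: float) -> Optional[float]:
        return None if x == 0 else 1.0 / x

    def safe_sqrt(x: float) -> Optional[float]:
        return None if x < 0 else x ** 0.5

    print(bind(bind(unit(4.0), safe_div), safe_sqrt))  # 0.5
    print(bind(bind(unit(0.0), safe_div), safe_sqrt))  # None

    The monad laws (identity and associativity for bind) are what make such composition well-behaved; this is the flavour of algebraic structure the paper uses to organise neural network architectures.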


    Paul is optimistic about the potential of category theory and related mathematical abstractions to put AI and neural networks on a more robust conceptual foundation to enable interpretability and reasoning. However, significant theoretical and engineering challenges remain in realising this vision.


    Please support us on Patreon. We are entirely funded from Patreon donations right now.

    https://patreon.com/mlst

    If you would like to sponsor us, so we can tell your story - reach out on mlstreettalk at gmail


    Links:

    Categorical Deep Learning: An Algebraic Theory of Architectures

    Bruno Gavranović, Paul Lessard, Andrew Dudzik,

    Tamara von Glehn, João G. M. Araújo, Petar Veličković

    Paper: https://categoricaldeeplearning.com/


    Symbolica:

    https://twitter.com/symbolica

    https://www.symbolica.ai/


    Dr. Paul Lessard (Principal Scientist - Symbolica)

    https://www.linkedin.com/in/paul-roy-lessard/


    Interviewer: Dr. Tim Scarfe


    TOC:

    00:00:00 - Intro

    00:05:07 - What is the category paper all about

    00:07:19 - Composition

    00:10:42 - Abstract Algebra

    00:23:01 - DSLs for machine learning

    00:24:10 - Inscrutability

    00:29:04 - Limitations with current NNs

    00:30:41 - Generative code / NNs don't recurse

    00:34:34 - NNs are not Turing machines (special edition)

    00:53:09 - Abstraction

    00:55:11 - Category theory objects

    00:58:06 - Cat theory vs number theory

    00:59:43 - Data and Code are one and the same

    01:08:05 - Syntax and semantics

    01:14:32 - Category DL elevator pitch

    01:17:05 - Abstraction again

    01:20:25 - Lego set for the universe

    01:23:04 - Reasoning

    01:28:05 - Category theory 101

    01:37:42 - Monads

    01:45:59 - Where to learn more cat theory

    Mon, 01 Apr 2024 - 1h 49min
  • 144 - Can we build a generalist agent? Dr. Minqi Jiang and Dr. Marc Rigter

    Dr. Minqi Jiang and Dr. Marc Rigter explain an innovative new method for making the intelligence of agents more general-purpose: training them to learn many worlds before their usual goal-directed reinforcement learning training. Their new paper is called "Reward-free curricula for training robust world models".

    Paper: https://arxiv.org/pdf/2306.09205.pdf

    https://twitter.com/MinqiJiang

    https://twitter.com/MarcRigter

    Interviewer: Dr. Tim Scarfe

    Please support us on Patreon. Tim is now doing MLST full-time and taking a massive financial hit. If you love MLST and want this to continue, please show your support! In return you get very early access to shows, plus private Discord access and networking.

    https://patreon.com/mlst

    We are also looking for show sponsors - please get in touch if interested: mlstreettalk at gmail.

    MLST Discord: https://discord.gg/machine-learning-street-talk-mlst-937356144060530778
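
    As a very rough, hypothetical sketch of the reward-free curriculum idea described above (not the authors' actual algorithm): repeatedly pick the environment the world model currently predicts worst, gather experience there, and train on prediction error alone, with no reward signal. All names and the selection rule here are illustrative.

    import random

    class ToyWorldModel:
        """Predicts the next state as a learned per-environment constant."""
        def __init__(self, n_envs):
            self.pred = [0.0] * n_envs
        def predict(self, env_id):
            return self.pred[env_id]
        def update(self, env_id, target, lr=0.1):
            self.pred[env_id] += lr * (target - self.pred[env_id])

    def step_env(env_id):
        # Stand-in dynamics: each environment emits noisy states near env_id.
        return env_id + random.gauss(0.0, 0.5)

    n_envs = 5
    errors = [float("inf")] * n_envs       # unexplored envs get top priority
    model = ToyWorldModel(n_envs)
    for _ in range(1000):
        env = errors.index(max(errors))    # curriculum: worst-modelled env
        state = step_env(env)
        err = abs(state - model.predict(env))   # reward-free training signal
        model.update(env, state)
        errors[env] = err if errors[env] == float("inf") else 0.9 * errors[env] + 0.1 * err
    print([round(p, 2) for p in model.pred])    # approx [0.0, 1.0, 2.0, 3.0, 4.0]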

    Wed, 20 Mar 2024 - 1h 57min