Welcome to the Regulating AI: Innovate Responsibly podcast with host and AI regulation expert Sanjay Puri. Sanjay is a pivotal leader at the intersection of technology, policy and entrepreneurship and explores the intricate landscape of artificial intelligence governance on this podcast. You can expect thought-provoking conversations with global leaders as they tackle the challenge of regulating AI without stifling innovation. With diverse perspectives from industry giants, government officials and civil liberty proponents, each episode explores key questions and actionable steps for creating a balanced AI-driven world. Don't miss this essential guide to the future of AI governance, with a fresh episode available every week!
- 32 - Advocating for Stronger AI Regulations To Safeguard Civil Liberties with Congressman Joseph Morelle
On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.
Key Takeaways:
(02:13) Congressman Morelle's extensive experience in AI legislation and its implications.
(04:27) Deep fakes and their growing threat to privacy and integrity.
(07:13) Introducing federal legislation against non-consensual deep fakes.
(14:00) Urgent need for social media platforms to enforce their guidelines rigorously.
(19:46) The No AI Fraud Act and protecting individual likeness in AI use.
(23:06) The importance of adaptable and 'living' statutes in technology regulation.
(32:59) The critical role of continuous education and skill adaptation in the AI era.
(37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.
Resources Mentioned:
Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/
No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9
Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Tue, 30 Apr 2024 - 40min - 31 - Empowering Innovators for a Brighter AI Tomorrow with Dr. Sethuraman Panchanathan
On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to ensure it benefits humanity as a whole.
Key Takeaways:
(00:21) AI’s pivotal role in enhancing speech-language services.
(01:28) Introduction to Sethuraman’s visionary leadership at NSF.
(02:36) NSF’s significant AI investment, totaling over $820 million.
(06:19) The shift toward interdisciplinary AI research at NSF.
(10:26) NSF’s initiative of launching 25 AI institutes for innovation.
(18:26) Emphasis on AI democratization through education and training.
(25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.
(30:21) Focus on ethical AI development to build public trust.
(40:10) AI’s transformative applications in healthcare, agriculture and more.
(42:45) The importance of ethical guardrails in AI’s development.
(43:08) Advancing AI through international collaborations.
(44:53) Lessons from a career in AI and advice for the next generation.
(50:19) Motivating young researchers and entrepreneurs in AI.
(52:24) Advocating for AI innovation and accessibility for everyone.
Resources Mentioned:
https://www.linkedin.com/in/drpanch/
U.S. National Science Foundation | LinkedIn -
https://www.linkedin.com/company/national-science-foundation/
U.S. National Science Foundation | Website -
https://www.nsf.gov/
https://www.linkedin.com/school/arizona-state-university/
https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building
Dr. Sethuraman Panchanathan’s NSF Profile -
https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan
NSF Regional Innovation Engines -
https://new.nsf.gov/funding/initiatives/regional-innovation-engines
National AI Research Resource (NAIRR) -
https://new.nsf.gov/focus-areas/artificial-intelligence/nairr
NSF Focus on Artificial Intelligence -
https://new.nsf.gov/focus-areas/artificial-intelligence
https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research
GRANTED Initiative for Broadening Participation in STEM -
https://new.nsf.gov/funding/initiatives/broadening-participation/granted
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Wed, 24 Apr 2024 - 54min - 30 - Evaluating the Effectiveness of AI Legislation in Cybersecurity with Bruce Schneier
The rapid evolution of artificial intelligence in cybersecurity presents both significant opportunities and daunting challenges. On this episode, I'm joined by Bruce Schneier, who is renowned globally for his expertise in cybersecurity and has been dubbed a “security guru” by The Economist. Bruce, a best-selling author and lecturer at the Harvard Kennedy School, discusses the fast-paced world of AI and cybersecurity, exploring how these technologies intersect with national security and what that means for future regulations.
Key Takeaways:
(00:00) I discuss with Bruce the challenges of regulating AI in the US.
(02:28) Bruce explains the role and future potential of AI in cybersecurity.
(05:05) The benefits of AI in defense, enhancing capabilities at computer speeds.
(07:22) The need for robust regulations akin to those in the EU.
(12:56) Bruce draws analogies between AI regulation and pharmaceutical controls.
(19:56) The critical role of knowledgeable staff in supporting legislators.
(22:24) The challenges of effectively regulating AI.
(26:15) The potential of AI to transform enforcement across various sectors.
(30:58) Reflections on the future of AI governance and ethical considerations.
Resources Mentioned:
Bruce Schneier Website - https://www.schneier.com/
EU AI Strategy - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Tue, 23 Apr 2024 - 33min - 29 - AI's Potential in Public Services with Trooper Sanders
On this episode, I’m joined by Trooper Sanders, CEO of Benefits Data Trust and a member of the White House National Artificial Intelligence Advisory Committee. Trooper’s expertise in leveraging AI to enhance the efficiency and humanity of America’s social safety net offers unique insights into the potential and challenges of AI in public services.
Key Takeaways:
(02:27) The role of Benefits Data Trust in connecting people to essential benefits using AI.
(04:54) The components of trustworthy AI: reliability, public interest alignment, security, transparency, explainability, privacy and harm mitigation.
(09:38) The ‘tortoise and hare’ challenge in aligning AI advancements with legislative processes.
(16:17) The significance of voluntary industry commitments in shaping AI’s ethical use.
(20:32) Ethical considerations in deploying AI, focusing on its societal impact and the readiness of systems for AI integration.
(22:53) Addressing biases in AI to ensure fairness and equitable benefits across all socioeconomic groups.
(27:52) Amplifying diverse voices in the AI discussion to encompass a wide range of societal perspectives.
(34:22) The potential workforce disruption by AI and the necessity of supportive measures for affected individuals.
(37:26) Considering the potentially massive impact of AI-driven career changes across various professions.
Resources Mentioned:
https://www.linkedin.com/in/troopersanders/
Benefits Data Trust | LinkedIn -
https://www.linkedin.com/company/benefits-data-trust/
Benefits Data Trust | Website -
https://bdtrust.org/
White House National Artificial Intelligence Advisory Committee -
https://www.whitehouse.gov/ostp/ostps-teams/nstc/select-committee-on-artificial-intelligence/
BDT Launches AI and Human Services Learning Hub -
https://bdtrust.org/bdt-launches-ai-learning-lab/
Our Vision for an Intelligent Human Services and Benefits Access System -
https://bdtrust.org/our-vision-for-an-intelligent-human-services-and-benefits-access-system
Humans Must Control Human-Serving AI -
https://bdtrust.org/media-coverage-humans-must-control-human-serving-ai/
https://bdtrust.org/trooper-sanders/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Fri, 19 Apr 2024 - 41min - 28 - The Impact of AI on Global Military Strategies with Dr. Paul Lushenko
I'm thrilled to be joined by Dr. Paul Lushenko, a Lieutenant Colonel in the U.S. Army and Director of Special Operations at the U.S. Army War College. Dr. Lushenko brings a wealth of knowledge from the front line of AI implementation in military strategy. He joins me to share his insights into the delicate balance between innovation and regulation.
Key Takeaways:
(02:28) The necessity of addressing AI’s impact on warfare and crisis escalation.
(06:37) The gaps in global governance regarding AI and autonomous weapon systems.
(08:30) U.S. policies on the responsible use of AI in military operations.
(16:29) The importance of cutting-edge research in informing legislative actions on AI.
(18:49) The risk of biases in AI systems used in national security.
(20:09) Discussion on automation bias and its consequences in military operations.
(24:44) Dr. Lushenko argues for the adoption of a strategic framework to guide AI development in military contexts.
(32:49) Emphasis on the importance of careful management and extensive testing to build trust in AI systems within the military.
(39:51) The critical need for data-driven decision-making in high-stakes environments, advocating for leveraging expert insights.
Resources Mentioned:
https://www.linkedin.com/in/paul-lushenko-phd-5b805113/
https://www.linkedin.com/school/united-states-army-war-college/
Political Declaration on Responsible Use of AI in Military Technologies -
https://www.state.gov/wp-content/uploads/2023/10/Latest-Version-Political-Declaration-on-Responsible-Military-Use-of-AI-and-Autonomy.pdf
Memorandum on Ethical Use of AI - White House 2023 -
https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Thu, 18 Apr 2024 - 41min - 27 - Harnessing AI for Equitable Education with Randi Weingarten, President of American Federation of Teachers
On this episode, I welcome Randi Weingarten, President of the American Federation of Teachers (AFT). She discusses why implementing AI in education requires a collaborative effort. Join us as we explore the challenges and opportunities of AI in shaping equitable and effective educational environments.
Key Takeaways:
(01:08) Introduction of Randi Weingarten and her role in the AFT.
(05:00) The critical issue of ensuring equitable access to AI technologies in education.
(08:06) Addressing bias and discrimination within AI-driven educational systems.
(11:53) The importance of inclusive participation in the implementation of educational technologies.
(13:09) The evolving necessity for educators to acquire new skills in response to AI advancements.
(17:26) The role of personalized teaching as a complement, not a replacement, for traditional educational methods.
(18:08) Concerns surrounding data privacy and security within AI-driven platforms.
(20:25) The need for regulation and oversight in the application of AI in educational settings.
(25:22) The potential for productive industry collaboration in developing AI tools for education.
(30:28) Advocating for a just transition fund to support workers displaced by AI and technological advancements.
Resources Mentioned:
Randi Weingarten - https://www.linkedin.com/in/randi-weingarten-05896224/
American Federation of Teachers - https://www.aft.org/
Testimony to Senator Schumer by Randi Weingarten on equity in AI - https://www.aft.org/press-release/afts-weingarten-calls-ai-guardrails-smart-regulation-ensure-new-technology-benefits
Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Mon, 01 Apr 2024 - 36min - 26 - Crafting Effective AI Policies for National Security With Insights From Anja Manuel
AI regulation is not a simple field, particularly in the realm of national security, and it requires a nuanced approach. In this episode, I welcome Anja Manuel, Executive Director of the Aspen Strategy Group and the Aspen Security Forum, as well as Co-Founder and Partner at Rice, Hadley, Gates & Manuel, LLC. Anja’s insights make the path forward clearer, framing effective AI legislation and emphasizing the need for global cooperation and ethical considerations. Her perspective, deeply rooted in national security expertise, underscores the critical balance between innovation and safeguarding against misuse.
Key Takeaways:
(00:17) The functionality of intelligence committees across party lines.
(00:59) AI in warfare reflects a shift from World War I tactics to modern tech battles.
(03:10) The rapid innovation in military technology and the US’s efforts to adapt.
(03:53) Risks of unregulated AI, including in cyber, autonomous weapons and bio-tech.
(07:09) AI regulation is needed both globally and nationally.
(11:21) International collaboration plays a vital role in AI regulation.
(13:39) Ethical considerations unique to AI applications in national security.
(14:31) National security agencies’ openness to regulatory frameworks.
(15:35) Public-private collaboration in addressing national security considerations.
(17:08) Establishing standards in AI technology for national security is necessary.
(18:28) Regulation of autonomous weapons and international agreements.
(19:32) Balancing secrecy in national security operations with public scrutiny of AI use.
(20:17) AI’s role and risks in intelligence and privacy.
(21:13) Regulating AI in cybersecurity and other areas is a challenge.
Resources Mentioned:
Anja Manuel - https://www.linkedin.com/in/anja-manuel-26805023/
Aspen Strategy Group - https://www.aspeninstitute.org/programs/aspen-strategy-group/
Aspen Security Forum - https://www.aspensecurityforum.org/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Tue, 26 Mar 2024 - 24min - 25 - Shaping the Future of Manufacturing With AI Insights with Dr. Gunter Beitinger
On this episode, I’m joined by Dr. Gunter Beitinger, Senior Vice President of Manufacturing and Head of Factory Digitalization and Product Carbon Footprint at Siemens. Dr. Beitinger lends a comprehensive view on AI’s role in transforming manufacturing, emphasizing its potential to enhance productivity, ensure workforce well-being and drive sustainable practices without displacing human labor.
Key Takeaways:
(02:17) Dr. Beitinger’s extensive background and role at Siemens.
(05:13) Specific examples of AI-driven improvements in Siemens’ operations.
(07:52) The measurable productivity gains attributed to AI in manufacturing.
(10:02) The impact of AI on employment and the importance of re-skilling.
(13:06) The necessity for a collaborative approach between governments and the private sector in workforce development.
(16:24) The role of AI in improving the working conditions of industrial workers.
(26:53) The potential for smaller companies to leverage AI and compete with industry giants.
(36:49) AI’s future role in creating digital twins and the industrial metaverse.
Resources Mentioned:
https://www.linkedin.com/in/gunter-dr-beitinger/
Siemens | LinkedIn -
https://www.linkedin.com/showcase/siemens-industry-/?trk=public_post-text
Siemens | Website -
https://www.siemens.com/
https://blog.siemens.com/space/artificial-intelligence-in-industry/
https://blog.siemens.com/2023/07/the-need-to-rethink-production/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Tue, 19 Mar 2024 - 40min - 24 - Exploring AI’s Impact on National Security and Legislation with Sarah Kreps
On this episode, I’m joined by Sarah Kreps, the John L. Wetherell Professor in the Department of Government, Adjunct Professor of Law, and Director of the Tech Policy Institute at the Cornell Brooks School of Public Policy. Her expertise in international politics, technology and national security offers a valuable perspective on shaping AI legislation.
Key Takeaways:
(00:20) The significant impact of industry and NGOs on AI regulation and congressional awareness.
(03:27) AI's multifaceted applications and its national security implications.
(05:07) Advanced efficiency of AI in misinformation campaigns and the importance of legislative responses.
(10:58) Proactive measures by AI firms like OpenAI for electoral fidelity and misinformation control.
(14:23) The challenge of balancing AI innovation with security and economic considerations in legislation.
(20:30) Concerns about potential AI monopolies and the economic consequences.
(28:16) Ethical and practical aspects of AI assistance in legislative processes.
(30:13) The critical need for human involvement in AI-augmented military decisions.
(35:32) National security agencies' approach to AI regulatory frameworks.
(39:13) The imperative of Congress's engagement with diverse sectors for comprehensive AI legislation.
Resources Mentioned:
Sarah Kreps - https://www.linkedin.com/in/sarah-kreps-51a3b7257/
Cornell - https://www.linkedin.com/school/cornell-university/
Sarah Kreps’ paper for the Brookings Institution - https://www.brookings.edu/articles/democratizing-harm-artificial-intelligence-in-the-hands-of-non-state-actors/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Global Governance - https://www.american.edu/sis/news/20230523-four-questions-on-ai-global-governance-following-the-g7-hiroshima-summit.cfm
Sarah Kreps - Cornell University -
https://government.cornell.edu/sarah-kreps
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Thu, 14 Mar 2024 - 44min - 23 - The Ethical Boundaries of AI and Robotics with Professor Emeritus Ronald Arkin
On this episode, I’m joined by Professor Ronald Arkin, a renowned expert in robotics and roboethics from the Georgia Institute of Technology. Our discussion focuses on AI and robotics, exploring the ethical implications and the necessity for regulatory frameworks that ensure responsible development and deployment.
Key Takeaways:
(02:40) Ethical guidelines for AI and robotics.
(03:19) IEEE’s role in creating soft law guidelines.
(06:56) How robotics has been overshadowed by large language models.
(10:13) The necessity of oversight and compliance in AI development.
(15:30) Ethical considerations for emotionally expressive robots.
(23:41) Liability frameworks for ethical lapses in robotics.
(27:43) The debate on open-sourcing robotics software.
(29:52) The impact of robotics on workforce and employment.
(33:37) Human rights implications in robotic deployment.
(42:55) Final insights on cautious advancement in AI regulation.
Resources Mentioned:
Ronald Arkin - https://sites.cc.gatech.edu/aimosaic/faculty/arkin/
Ronald Arkin | LinkedIn - https://www.linkedin.com/in/ronald-arkin-a3a9206/
Georgia Tech Mobile Robot Lab - https://sites.cc.gatech.edu/ai/robot-lab/
Georgia Institute of Technology - https://www.linkedin.com/school/georgia-institute-of-technology/
IEEE Standards Association - https://standards.ieee.org/
United Nations Convention on Certain Conventional Weapons - https://treaties.un.org/pages/ViewDetails.aspx?chapter=26&clang=_en&mtdsg_no=XXVI-2&src=TREATY
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Fri, 08 Mar 2024 - 42min - 22 - Navigating AI Innovation and Ethics in Legislation with Steve Mills
On this episode, I welcome Steve Mills, Global Chief AI Ethics Officer for Boston Consulting Group and Global AI Lead for the Public Sector. Steve shares insights into the intersection of AI innovation and ethical responsibility, guiding us through the often-confusing topic of AI regulation and ethics.
Key Takeaways:
(00:26) The role clear regulations play in fostering innovation.
(02:43) The importance of consultation with industry to set achievable regulations.
(04:07) Addressing the uncertainty surrounding AI regulation.
(06:19) The necessity of sector-specific AI regulations.
(07:33) The debate over establishing a separate AI regulatory body.
(09:22) Adapting AI policy to keep pace with technological advancements.
(11:40) Enhancing AI literacy and upskilling the workforce.
(13:06) Ethical considerations in AI deployment, focusing on trustworthiness and harmlessness.
(15:01) Strategies for ensuring AI systems are fair and equitable.
(20:10) The discussion on open-source AI and combating monopolies.
(22:00) The importance of transparency in AI usage by companies.
Resources Mentioned:
Steve Mills - https://www.linkedin.com/in/stevndmills/
Boston Consulting Group - https://www.linkedin.com/company/boston-consulting-group/
Responsible AI Ethics - https://www.bcg.com/capabilities/artificial-intelligence/responsible-ai
Study on the impact of AI in the workforce - https://www.bcg.com/publications/2022/a-responsible-ai-leader-does-more-than-just-avoiding-risk
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Thu, 07 Mar 2024 - 25min - 21 - The Impact of Rapid AI Evolution with Kai Zenner, Head of Office and Digital Policy Adviser for MEP Axel Voss (EPP group) in the European Parliament
On this episode, I welcome Kai Zenner, Head of Office and Digital Policy Adviser at the European Parliament. We discuss the complexities and challenges of artificial intelligence, focusing especially on the legislative efforts within the EU to regulate AI technologies.
Key Takeaways:
(01:36) Diverse perspectives in AI legislation play a significant role.
(02:34) The EU AI Act’s status and its risk-based, innovation-friendly approach.
(07:11) The recommendation for a vertical, industry-specific approach to AI legislation.
(08:32) Measures in the AI Act to prevent AI power concentration and ensure transparency.
(11:50) The global approach of the EU AI Act and its focus on international alignment.
(14:28) Ethical considerations in AI development addressed by the AI Act.
(16:21) Implementation and enforcement mechanisms of the EU AI Act.
(23:31) The involvement of industry experts, researchers and civil society in developing the AI Act.
(29:51) The importance of educating the public on AI issues.
(33:12) Concerns about deepfake technology and election interference.
Resources Mentioned:
Kai Zenner - https://www.linkedin.com/in/kzenner/?originalSubdomain=be
European Parliament - https://www.linkedin.com/company/european-parliament/
EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Mon, 04 Mar 2024 - 38min - 20 - Existential Risk in AI with Otto Barten
In a world racing toward the development of Artificial General Intelligence (AGI), the balance between innovation and existential risk becomes a pivotal conversation. In this episode, I’m joined by Otto Barten, Founder of the Existential Risk Observatory. We focus on the critical issue of AGI and its potential to pose existential risks to humanity. Otto shares valuable insights into the necessity of global policy innovation and raising public awareness to navigate these uncharted waters responsibly.
Key Takeaways:
(00:18) Public awareness of AI risks is rising rapidly.
(01:39) The Existential Risk Observatory’s mission is to mitigate human extinction risks.
(02:51) The European Union’s political consensus on the EU AI Act.
(04:11) Otto explains multiple AI threat models leading to existential risks.
(07:01) Why distinguish between AGI and current AI capabilities?
(09:18) Sam Altman and Mark Zuckerberg made recent statements on AGI.
(12:15) The potential dangers of open-sourcing AGI.
(14:17) The current regulatory landscapes and potential improvements.
(17:01) The concept of a “pause button” for AI development is introduced.
(20:13) Balancing AI development with ethical considerations and existential risks.
(23:51) Increasing public and legislative awareness of AI risks.
(29:01) The significance of transparency and accountability in AI development.
Resources Mentioned:
Otto Barten - https://www.linkedin.com/in/ottobarten/?originalSubdomain=nl
Existential Risk Observatory - https://www.linkedin.com/company/existential-risk-observatory/
European Union AI Act -
The Bletchley Process for global AI safety summits -
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Tue, 27 Feb 2024 - 37min - 19 - The Role of AI in Society with Lexy Kassan, Lead Data and AI Strategist of Databricks
On this episode, I’m joined by Lexy Kassan, Lead Data and AI Strategist at Databricks and Founder and Host of the Data Science Ethics Podcast. Lexy brings a wealth of knowledge from her dual role as an AI ethicist and industry insider, providing an in-depth perspective on how legislation can shape the future of AI without curbing its potential.
Key Takeaways:
(02:44) The global impact of the EU AI Act.
(03:46) The necessity for risk-based AI model assessments.
(08:20) Ethical challenges hidden within AI applications.
(11:45) Strategies for inclusive AI benefiting marginalized communities.
(13:29) Core ethical principles for AI systems.
(19:50) The complexity of creating unbiased AI data sets.
(21:58) Categories of unacceptable risks in AI according to the EU AI Act.
(27:18) Accountability in AI deployment.
(30:53) The role of open-source models in AI development.
(36:24) Businesses seek clear regulatory guidelines.
Resources Mentioned:
Lexy Kassan - https://www.linkedin.com/in/lexykassan/?originalSubdomain=uk
Data Science Ethics Podcast - https://www.linkedin.com/company/dsethics/
EU AI Act - https://artificialintelligenceact.eu/
Databricks - https://www.databricks.com/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Thu, 29 Feb 2024 - 39min - 18 - A Vision for a Balanced AI Future with Daniel Jeffries of AI Infrastructure Alliance and Kentauros AI
On this episode, I'm joined by Daniel Jeffries, Managing Director of the AI Infrastructure Alliance and CEO of Kentauros, to explore the complexities of AI's potential and the critical need for balanced, forward-thinking legislation.
Key Takeaways:
(02:05) Recent executive orders on AI, watermarking and model size regulation.
(03:54) Autonomous weapons and the need for regulation in areas exempted by governments.
(07:01) Liability in AI-induced harm and the challenge of assigning responsibility.
(07:52) The rapid evolution of AI and the legislative challenge to keep pace.
(10:37) The risk of regulatory capture and the importance of preventing AI monopolies.
(13:29) The role of open source in fostering innovation.
(16:32) Skepticism towards the feasibility of a global consensus on AI regulation.
(18:21) Advocacy for industry-specific regulations, emphasizing use-case and industry nuances.
(22:33) Recommendations for policymakers to focus on real-world problems.
Resources Mentioned:
Daniel Jeffries - https://www.linkedin.com/in/danjeffries/
AI Infrastructure Alliance - https://www.linkedin.com/company/ai-infrastructure-alliance/
Kentauros - https://www.linkedin.com/company/kentauros-ai/
Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.
#AIRegulation #AISafety #AIStandard
Fri, 16 Feb 2024 - 29min - 17 - Crafting Equitable AI Policies for Work and Education with Alex Swartsel
On this episode, I welcome Alex Swartsel, Managing Director of Insights at JFFLabs. We discuss AI’s role in transforming the employment landscape, highlighting the delicate balance between leveraging AI for growth and mitigating its potential disruptions.
Key Takeaways:
(00:16) AI’s transformative impact on employment.
(02:35) The role AI plays in job transformation and skill enhancement.
(04:30) The automation and augmentation of tasks by AI.
(06:10) Rethinking education and skill development in the age of AI.
(09:22) The significance of soft skills in conjunction with technical knowledge.
(11:00) AI’s potential to customize learning experiences.
(17:20) The pivotal role of community colleges in workforce training.
(21:33) The imperative of reskilling and the government’s role.
(29:51) Using AI for personalized education and career guidance.
(35:09) Promoting AI as a tool for human advancement.
Resources Mentioned:
Alex Swartsel - https://www.linkedin.com/in/alexswartsel/
JFFLabs’ New Center for Artificial Intelligence and the Future of Work - https://www.jff.org/
The AI-Ready Workforce report - https://info.jff.org/ai-ready
IMF Report on AI’s Impact on Jobs - https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2024/01/14/Gen-AI-Artificial-Intelligence-and-the-Future-of-Work-542379
Wed, 14 Feb 2024 - 35min - 16 - Envisioning a Harmonious Future Between AI and Humanity with Avi Loeb
On this episode, I'm joined by Professor Avi Loeb, Professor of Science at Harvard University, Director of the Institute for Theory and Computation within the Harvard Smithsonian Center for Astrophysics, Head of the Galileo Project, Chair of Harvard's Department of Astronomy and best-selling author. Avi provides an astrophysicist's perspective on the ethical and regulatory frameworks necessary to ensure the responsible use of artificial intelligence.
Key Takeaways:
(00:36) The essential role of academia in fostering dialogue across differing viewpoints.
(06:58) Professor Loeb's concerns about AI's unpredictability.
(09:18) The importance of training AI systems with value-aligned datasets to moderate societal risks.
(10:59) Assigning responsibility for AI's actions.
(14:29) The need for international treaties to regulate AI's use in national security and warfare.
(17:58) Addressing internal disinformation and the role of AI in amplifying societal divisions.
(22:40) Engaging the public in AI regulation discussions to ensure diverse perspectives.
(26:37) The potential for AI to revolutionize space exploration and decision-making in remote environments.
Resources Mentioned:
Harvard University's Galileo Project - https://projects.iq.harvard.edu/galileo/home
Rubin Observatory - https://rubinobservatory.org/
Thu, 08 Feb 2024 - 35min - 15 - The Potential Effect of AI and Autonomous Flying Robots on National Security with Timothy Bean of Fortem Technologies
In this latest episode, I'm joined by Timothy Bean, President and COO of Fortem Technologies, to explore the intricate interplay between artificial intelligence, national security and the legislative landscape that surrounds it.
Key Takeaways:
(02:42) The evolution of national security tools and the advent of AI.
(03:49) The importance of data privacy in AI legislation and national security.
(05:07) The challenges of regulating AI in a rapidly advancing technological landscape.
(10:13) How legislative bodies should adapt and embrace AI to keep pace with technological advancements.
(12:13) The impending impact of quantum computing on AI and national security.
(15:38) The US faces an arms race in AI and quantum computing against global competitors like China and Russia.
(17:25) Public-private partnerships in enhancing national security through AI.
(18:39) The role of transparency and accountability in AI applications for national security.
(22:16) Debating the merits of open-sourcing AI models in the context of national security.
(24:55) The significance of educating the public on data privacy and the potential of AI.
Resources Mentioned:
https://www.linkedin.com/in/meghalred/
https://www.linkedin.com/company/fortem-technologies/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Department of Defense AI Ethics Principles -
https://www.ai.mil/blog_02_26_21-ai_ethics_principles-highlighting_the_progress_and_future_of_responsible_ai.html
Tue, 06 Feb 2024 - 33min - 14 - AI Education and Policy with Nathan Grant of Teach AI
On this episode, I'm thrilled to chat with Nathan Grant, Policy Fellow of TeachAI, an initiative championed by notable organizations including Code.org, ETS, ISTE, Khan Academy and the World Economic Forum. Nathan shares invaluable insights on integrating AI education within K-12, emphasizing the importance of a balanced approach to harness AI's potential while mitigating its risks.
Key Takeaways:
(01:16) Introduction of Nathan Grant and the TeachAI initiative.
(02:14) TeachAI's broad coalition, including tech giants and educational stakeholders.
(03:45) Perspectives on President Biden's Executive Order on AI.
(06:27) AI literacy's critical role across all subjects in K-12 education.
(07:30) Addressing the digital and AI divide for equitable education.
(09:03) Engaging students in the AI legislation dialogue.
(12:44) Concerns over banning AI tools like ChatGPT in schools.
(14:33) The risk of AI tool monopolization by a few large tech companies.
(16:00) The importance of education in demonstrating AI's potential and ensuring its responsible use.
(18:59) The potential for standardized AI education guidelines.
Resources Mentioned:
Nathan Grant - https://www.linkedin.com/in/nathan-grant-t/
Code.org - https://www.linkedin.com/company/code-org/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
TeachAI initiative - https://www.teachai.org/
Fri, 02 Feb 2024 - 26min - 13 - Unpacking AI's Ethical Implications and Future with Expert Beth Rudden
In a world where AI shapes our daily lives, ethical considerations are paramount. In this episode, I have the pleasure of speaking with Beth Rudden, CEO of Bast AI and a trailblazer in AI ethics. Her journey from IBM to leading Bast AI offers a unique lens on the intricate relationship between AI, ethics and technology.
Key Takeaways:
(01:25) Insights into diverse perspectives on AI regulation.
(02:24) Beth discusses the ethical risks in AI development.
(03:38) The importance of education in AI ethics and technology.
(05:05) Emphasizing explainable AI in regulation.
(06:35) Discussing the role of data privacy and dignity.
(09:01) The necessity of transparency in AI systems.
(12:16) The impact of AI on social media and communication.
(15:33) Core ethical principles in AI development.
(19:25) The role of accountability in AI systems.
(22:09) The concept of AI as a community utility.
(26:39) Beth's views on creating unbiased AI systems.
(30:17) The importance of human rights and privacy in AI.
(34:27) Addressing AI's role in societal issues.
Resources Mentioned:
Beth Rudden - https://www.linkedin.com/in/brudden/
Joy Buolamwini's "Unmasking AI" - https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Bast AI Website - https://bast.ai/
Thu, 01 Feb 2024 - 38min - 12 - Educating Society on Responsible Use of AI with Haniyeh Mahmoudian at DataRobot and NAIAC
Creating a safe and ethical AI system starts at its conception. On this episode, I have the pleasure of speaking with Haniyeh Mahmoudian, Ph.D., distinguished Global AI Ethicist at DataRobot and Advisor to NAIAC (National AI Advisory Committee). We discuss AI regulation, ethical considerations and the importance of education around responsible use of AI.
Key Takeaways:
(02:09) Insights into President Biden’s AI Executive Order.
(04:32) The importance of private-public partnerships in AI education and workforce upskilling.
(06:35) The need for realistic job qualifications in AI-related fields.
(08:23) The EU AI Act, its risk framework for AI use cases and the need for flexible and adaptable legislative frameworks in AI regulation.
(11:42) The US's approach to AI regulation compared to the EU.
(15:59) Ethical risks in AI development, particularly the lack of education in AI literacy.
(18:55) Ensuring historically marginalized communities can participate in and benefit from AI advancements.
(21:04) The need for robust governance processes and accountability at every stage of AI development and deployment.
(23:53) Challenges and benefits of democratizing AI technology access.
(25:50) The necessity of companies disclosing their use of AI systems to end-users.
(27:12) Concerns about the impact of AI, particularly deepfakes, on democracy.
Resources Mentioned:
Haniyeh Mahmoudian - https://www.linkedin.com/in/haniyeh-mahmoudian-ph-d-78a18072
DataRobot - https://www.linkedin.com/company/datarobot
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
National AI Advisory Committee Recommendations - https://ai.gov/naiac/
Thu, 25 Jan 2024 - 31min - 11 - Delving Into the Future of Responsible AI with Dr. Ravit Dotan
This era of rapid technological advancement can make finding the equilibrium between innovation and responsible governance difficult. On this episode, I’m joined by Dr. Ravit Dotan, Founder and CEO of TechBetter, Responsible AI Advocate of Bria and AI Ethicist. We discuss the complexities of AI regulation in our modern world. We also focus on the pivotal role policies and ethics play in steering the course of AI toward a future that benefits all.
Key Takeaways:
(01:18) Discussing President Biden’s Executive Order on AI and its implications for a new era of regulation.
(03:02) Contrasting the divergent paths of the US and UK in AI regulation.
(07:18) Investigating AI regulation’s influence on innovation.
(08:22) Assessing the ethical risks of misinformation within AI systems.
(12:13) Addressing the amplification of biases in AI decision-making.
(16:42) The challenge of achieving fairness in AI.
(17:40) The necessity of banning harmful AI applications.
(19:52) The role of AI ethics officers in organizations.
(21:30) Analyzing responsibility in AI-related incidents.
(24:26) The influence of major tech companies on AI’s direction.
(30:50) Discussing strategies against AI deepfakes in political campaigns.
Resources Mentioned:
Dr. Ravit Dotan - https://www.linkedin.com/in/ravit-dotan/
TechBetter - https://www.linkedin.com/company/techbetter/
Bria - https://www.linkedin.com/company/briaai/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Mon, 22 Jan 2024 - 33min - 10 - Balancing AI Innovation and Civil Liberties with Esha Bhandari of ACLU
On this episode of Regulating AI: Innovate Responsibly, I am thrilled to host Esha Bhandari, Deputy Project Director at the ACLU (American Civil Liberties Union), who shares her expertise in AI and civil liberties. Esha is also a member of the Law Enforcement Subcommittee of the National AI Advisory Committee and Adjunct Professor of Clinical Law at the New York University School of Law.
We explore the complex relationship between artificial intelligence and civil liberties, discussing the implications of AI regulation, the challenges posed by algorithmic bias and the potential impact of AI on various sectors, including law enforcement, housing and employment.
Key Takeaways:
(01:59) Esha’s perspective on President Biden’s Executive Order on AI, emphasizing the inclusion of civil liberties and civil rights.
(04:01) Challenges in law enforcement and national security contexts regarding AI.
(07:56) A discussion on the potential of a separate government agency for AI regulation.
(10:41) The balancing act between preventing AI from replicating societal biases and fostering innovation.
(12:53) The question of liability in AI systems: developer, deployer, or user?
(14:21) Keeping pace with rapid AI advancements in policy and legislation.
(18:51) The ACLU’s stance on open-source technology and AI.
(25:01) The role AI regulation plays on a global scale.
(26:44) Addressing the potential impacts of AI on upcoming elections and protecting civil liberties.
Resources Mentioned:
https://www.linkedin.com/in/eshabhandari/
ACLU (American Civil Liberties Union) -
https://www.linkedin.com/company/aclu/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
Discussions on AI Regulation in the EU -
https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thu, 18 Jan 2024 - 30min - 9 - Delving Into AI Ethics, Safety and Global Regulations with Stuart Russell
On this episode, I'm delighted to be joined by a leading mind in AI, Stuart Russell, Professor of Computer Science at UC Berkeley; Former Chair of the Electrical Engineering and Computer Science Program at UC Berkeley; Holder of the Smith-Zadeh Chair in Engineering; Director of the Center for Human-Compatible AI; and Author of Artificial Intelligence: A Modern Approach, which is currently part of the curriculum in 1,500 universities in 135 countries and has been translated into 20 languages.
Our conversation ventures into the depths of AI's potential, its impact on society and the critical role of legislation in shaping a safe and prosperous AI-powered future.
Key Takeaways:
(00:56) Introduction of Professor Stuart Russell and his significant contributions to AI.
(02:22) Analysis of the Biden Executive Order on AI and its limitations.
(03:49) Evolution and current status of the EU AI Act.
(07:31) The paradox of open-source AI in regulatory contexts.
(08:31) The challenge of controlling AI systems that are more powerful than humans.
(13:08) The necessity of proactive safety measures in AI development.
(15:12) The potential risks and concerns around AI agents.
(17:02) Balancing innovation and regulation in AI.
(19:20) Adapting AI legislation to technological advancements.
(21:49) The need for a dedicated regulatory agency for AI.
(26:08) Global collaboration on AI safety and national security.
(30:33) Public perception and education on AI safety.
(34:23) The role of AI in national security and ethical concerns.
(37:04) The impact of AI and deepfakes on the 2024 elections.
Resources Mentioned:
Stuart Russell - https://www.linkedin.com/in/stuartjonathanrussell/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
EU AI Act - https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
Mon, 15 Jan 2024 - 38min - 8 - Balancing AI Risks and Promises with Congresswoman Anna Eshoo
On this episode, I'm joined by Congresswoman Anna Eshoo, Co-Chair of the AI Caucus. Time Magazine has selected Anna as one of the 100 most influential people in AI, and I’m delighted to hear her invaluable insights into the legislative challenges and opportunities in the world of AI.
Key Takeaways:
(01:23) The role of the National AI Research Resource in President Biden’s executive order.
(03:20) The urgency for Congress to enact durable AI statutes.
(05:31) Objectives of the Create AI Act in making AI accessible to diverse sectors.
(08:03) The dynamic nature of AI policy and state-level legislation's role.
(10:43) The security implications of open-source AI models.
(12:18) Addressing the threat of deepfakes in elections.
(14:29) Strategies for workforce reskilling and attracting global AI talent.
(18:15) Democratizing AI to avert monopolistic trends.
(20:38) US Rep. Eshoo's predictions on the AI legislative timeline.
Resources Mentioned:
Anna Eshoo - https://www.linkedin.com/in/anna-eshoo-b0392095/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
National AI Research Resource - https://www.whitehouse.gov/ostp/news-updates/2023/01/24/national-artificial-intelligence-research-resource-task-force-releases-final-report/
Keep STEM Talent Act 2021 - https://www.congress.gov/bill/117th-congress/house-bill/5924?q=%7B%22search%22%3A%5B%22h.r.+5924%22%2C%22h.r.%22%2C%225924%22%5D%7D&s=1&r=2
Create AI Act - https://eshoo.house.gov/sites/evo-subsites/eshoo.house.gov/files/evo-media-document/eshoo_043_xml.pdf
Thu, 11 Jan 2024 - 21min - 7 - Advocacy for Startups in AI Policy with Nathan Lindfors of Engine
Navigating the labyrinth of AI policy is a daunting task, especially for startups. In this episode, I explore this complex world with Nathan Lindfors, who brings unique insights from his role as Policy Director of Engine, an organization at the forefront of advocating for startup interests in the AI realm.
Key Takeaways:
(01:40) The mission and goals of Engine in advocating for startups.
(02:40) How startups differ from companies like OpenAI and Anthropic in the AI space.
(04:22) The role of Engine in educating startups on AI policy developments.
(05:33) Nathan’s take on President Biden’s Executive Order on AI.
(09:12) Concerns over regulatory capture impacting startup innovation.
(10:28) The debate around open-sourcing AI models.
(13:17) Addressing the risks of AI tools falling into the hands of bad actors.
(16:46) Liability issues in AI and their impact on startups.
(19:50) Preparing the workforce for the future of AI.
(23:25) The need for transparent AI usage disclosures by companies.
(25:28) Discussion on the complexities of global versus regional AI regulations.
Resources Mentioned:
https://www.linkedin.com/in/nathan-lindfors-24032b150/
Engine -
https://www.linkedin.com/company/engine-advocacy/
President Biden’s Executive Order on AI -
https://www.whitehouse.gov/
Wed, 20 Dec 2023 - 32min - 6 - Balancing AI Advancements With Public Safety and Transparency with Senator Pete Ricketts
As artificial intelligence continues to revolutionize our society, the need for thoughtful regulation becomes increasingly crucial. In this episode, I have the honor of discussing these challenges with Senator Pete Ricketts from Nebraska. With his background in governance and entrepreneurship, Senator Ricketts offers invaluable insights into the legislative aspects of AI. Together, we delve into how to harness AI responsibly for the benefit of all.
Key Takeaways:
(01:45) Introduction of a bill for watermarking AI-generated materials.
(03:15) Addressing the concerns of deepfakes and intellectual property in the AI sphere.
(04:01) AI’s transformative potential and the critical need for careful regulation.
(05:19) The impact of AI on national security and election processes.
(05:44) The importance of including small businesses and educational institutions in AI legislation.
(07:00) The need for federal preemption over state laws to avoid a patchwork of AI regulations.
(08:08) The role of workforce reskilling and talent attraction in AI development.
(10:03) Predictions for the timeline of comprehensive AI legislation in Congress.
Resources Mentioned:
Senator Ricketts’ AI Watermarking Bill - https://www.ricketts.senate.gov/press-releases/ricketts-introduces-bill-to-combat-deepfakes-require-watermarks-on-a-i-generated-content/
National Security Implications of AI - https://www.csis.org/analysis/addressing-national-security-implications-ai
AI’s Role in Elections - https://www.brookings.edu/articles/how-ai-will-transform-the-2024-elections/
Mon, 18 Dec 2023 - 12min - 5 - Exploring the Future of AI Regulation With a Congressional Insight
Navigating the complexities of AI isn’t just about technology. It’s about sculpting our future. In this episode, I’m joined by Congressman Jay Obernolte, representing California’s 23rd District and serving as Vice Chair of the Congressional AI Caucus. With a rich background in AI and a keen eye for policy, Congressman Obernolte offers invaluable insights into the intricate dance of AI innovation and regulation.
Key Takeaways:
(02:06) Assessing President Biden’s Executive Order on AI and concerns of regulatory overreach.
(04:54) Exploring the Create AI Act’s goal to democratize AI research across academia.
(06:41) Addressing the risk of regulatory capture in the AI industry.
(08:57) Evaluating the role of AI in hiring and the inherent challenges of bias.
(11:05) Debating the need for a new AI regulatory structure.
(14:25) Delving into the implications of open-source AI.
(16:08) Highlighting the role of AI in spreading misinformation and the importance of transparency.
(18:19) Emphasizing the need for diverse perspectives in shaping AI regulation.
(19:44) Advocating for federal over regional or global AI regulation models.
(21:42) Offering predictions on the timeline and direction of comprehensive AI legislation in Congress.
Resources Mentioned:
Congressman Jay Obernolte - https://www.linkedin.com/in/jayobernolte/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Wed, 13 Dec 2023 - 23min - 4 - The Role of MLOps Community in Influencing AI Policymaking
Are we ready for the AI revolution? How do we balance innovation with regulation? On this episode, I’m joined by Demetrios Brinkmann, Founder and CEO of the MLOps Community, to explore AI's impact on global economies, security and workforce, and the challenges in creating effective regulatory frameworks.
Key Takeaways:
(00:51) The dual role of AI in boosting GDP and posing a threat to workforce and national security.
(01:10) The US Congress' efforts to create a legislative framework for AI.
(02:14) The significance of the MLOps community in AI production.
(03:05) The impact of global AI regulations on the MLOps community.
(03:40) President Biden's Executive Order on AI and the challenges in regulating large language models.
(08:01) The EU's AI Act focusing on risk management and post-market monitoring.
(14:41) Identifying key risks from AI that require regulation.
(21:24) The debate over open-sourcing LLMs.
(26:15) Concerns about regulatory capture by big tech companies.
(30:38) The importance of global or regional AI regulations.
Resources Mentioned:
Demetrios Brinkmann - https://www.linkedin.com/in/dpbrinkm/
MLOps Community - https://ai-infrastructure.org/mlops-community-now/
President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
EU AI Act - https://www.europarl.europa.eu/news/en/headlines/society/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
Thu, 07 Dec 2023 - 37min - 3 - The Role of AI in Job Creation and Global Tech Leadership
In this episode, I’m joined by former Governor Terry McAuliffe, who shares his insights on the future of AI and its impact on job creation, national security and global technological dominance. With his extensive experience in both politics and entrepreneurship, Governor McAuliffe provides a unique perspective on the steps the United States must take to lead in AI innovation and regulation.
Key Takeaways:
(02:08) The significance of President Biden’s Executive Order on AI.
(03:46) The need for long-term, consistent AI standards and legislation.
(04:25) Addressing public concerns about AI and job displacement.
(06:16) The importance of establishing a regulatory agency for AI.
(07:37) Promoting AI education starting from kindergarten.
(09:18) Proposing a scholarship program for AI studies.
(10:19) AI’s role in maintaining global leadership and job growth.
(12:34) AI as a crucial aspect of national security.
Resources Mentioned:
President Biden’s Executive Order on AI
National Science Foundation (NSF)
National Institute of Standards and Technology (NIST)
Wed, 22 Nov 2023 - 13min - 2 - Balancing Innovation With Social Responsibility in the Age of AI
Individual progress in technology isn’t just about personal achievement; it’s about shaping the future for society. On this episode, I’m joined by Congressman Don Beyer, US Representative for Virginia’s 8th District and Vice Chair of the AI Caucus in the House of Representatives, who brings a unique perspective to the table with his dedication to understanding and shaping AI legislation.
Key Takeaways:
(01:29) Congressman Beyer’s unique approach to learning about AI.
(02:55) The significance of President Biden’s Executive Order on AI.
(03:46) The debate on creating a separate regulatory agency for AI.
(06:36) The importance of democratizing AI through legislation like the Create AI Act.
(08:46) The pros and cons of open-sourcing AI models.
(12:10) AI’s role in political advertising and the need for ethical considerations.
(16:22) How AI will impact workforce and immigration policies.
(20:12) The priorities for AI legislation in Congress.
Resources Mentioned:
Congressman Don Beyer - https://www.linkedin.com/in/don-beyer-6b444b4/
House of Representatives - https://www.linkedin.com/company/u.s.-house-of-representatives/
President Biden’s Executive Order on AI - https://www.whitehouse.gov/
Create AI Act - https://www.congress.gov/
Discussions on AI with EU Parliamentarians - https://www.europarl.europa.eu/
National AI Research Resource - https://www.nsf.gov/
Fri, 17 Nov 2023 - 26min - 1 - Navigating the Challenges of AI Legislation
The potential of AI is limitless, yet its implications are complex and multifaceted. Striking a balance between innovation and regulation is crucial for harnessing its benefits while safeguarding against risks.
In this episode, I sit down with Raja Krishnamoorthi, US Congressman representing Illinois’ 8th District, to delve deep into the world of AI, its possibilities, its dangers and how the US is positioning itself in this global race.
Key Takeaways:
(02:36) The necessity of AI regulation.
(03:06) Debating a potential AI regulatory agency.
(04:09) Concerns about global competitiveness, especially China’s AI advances.
(04:52) Introduction of the P.A.S.T. model for AI legislation: Privacy, Accountability, Security and Transparency.
(07:00) Concerns about regulatory capture by corporations and the need for diverse perspectives.
(08:35) Thoughts on open-sourcing large AI language models and implications.
(13:10) The geopolitical impact of AI development, especially in China’s context.
(15:48) Worries about deepfake technology and its election impact.
(21:34) Congressional challenges and ambitious goals for AI regulations, with potential timing considerations.
Resources Mentioned:
Raja Krishnamoorthi - https://www.linkedin.com/in/rajakrishnamoorthi/
US Congressman - https://www.linkedin.com/company/u.s.-house-of-representatives/
Wed, 25 Oct 2023 - 22min