We hear a lot about harm from AI and about how the big platforms use AI and user data to boost their profits. What about developing AI for the good of the rest of us? What would it take to design AI systems that are genuinely beneficial to humans?
In this episode, we talk with Mark Nitzberg, Executive Director of CHAI, the UC Berkeley Center for Human-Compatible AI, and head of strategic outreach for Berkeley AI Research. Mark began studying AI in the early 1980s and completed his PhD in computer vision and human perception under David Mumford at Harvard. He has built companies and products in various AI fields, including The Blindsight Corporation, a maker of assistive technologies for low vision and active aging, which was acquired by Amazon. Mark is also co-author of The AI Generation, which examines how AI reshapes human values, trust, and power around the world.
We talk with Mark about CHAI’s goal of reorienting AI research toward provably beneficial systems, why beneficial AI is hard to develop, variability in human thinking and preferences, the parallels between management OKRs and AI objectives, human-centered AI design, and how AI might help humans realize the future we prefer.
Links:
Learn more about UC Berkeley CHAI
Subscribe to get Artificiality delivered to your email
Learn more about Sonder Studio
P.S. Thanks to Jonathan Coulton for our music