5 questions for Navrina Singh

Happy Friday, and welcome to the latest installment of our regular feature, The Future in 5 Questions. This week I posed the questions to Navrina Singh, the founder of Credo AI, whose work sits right at the intersection of the growth of AI and the growing interest in it from rulemakers.

Credo isn’t an AI development company per se – it’s a platform that aims to help other companies ensure their AI tools comply with the rapidly growing set of laws, regulations, and “recommendations” that govern their use. Singh understands both the policy world and the underlying technology front to back, having served on the Biden administration’s National AI Advisory Committee and worked on artificial intelligence for several years at both Microsoft and the World Economic Forum.

We talked about her belief that governance can actually accelerate innovation, the limits of what AI can do on its own, and her view of the different philosophical approaches the US and EU have taken to AI. Responses have been edited for length and clarity.

What’s the underrated big idea?

How governance can actually increase the benefits of AI.

We’ve seen time and time again, as AI shifts from experimentation and iteration to actual production scenarios—across financial services, insurance, healthcare, and education—how governance and having the right guardrails in place lead to lower failure rates and greater awareness of the risks of these systems. It results in more compliant systems. Governance can actually be a force multiplier for your bets on the technology.

What technology do you think is overrated?

Artificial intelligence itself will not solve all of our problems. With new technologies like generative AI, if you don’t know the technology very well, it can quickly tempt you with the thought, “Oh my God, this is so powerful, it can solve all my problems.” But when you start unpacking the layers of how ChatGPT responds, for example, you start to see how much inaccurate and unrealistic information is in those answers.

You can’t use these systems in high-stakes scenarios, and that’s the problem we’re trying to tackle with generative AI governance this year.

What book has most shaped your perceptions of the future?

If you’re thinking in geopolitical terms about how technology shapes the world, Thomas Friedman’s The World Is Flat is one of my all-time favorite books.

But when it comes to recent developments in artificial intelligence, I would say Weapons of Math Destruction by Cathy O’Neil. It’s definitely one of my favorites. But there are still not enough books being written on why this technology is so important and how to direct the way it shapes humanity. That’s the book I look forward to writing.

What can government do about technology that it isn’t?

I am a member of President Biden’s National AI Advisory Committee, so it’s important to note that this is my own view and I am not speaking on behalf of the committee.

Over the past seven years, I’ve looked around the world — at Europe, Canada, Singapore, the United States, and China — to see what governments do and don’t do. I think there is a wake-up moment happening now among governments, a realization that political will is needed to put guardrails around the technology. The way to put the right guardrails in place is not just by getting policymakers together, but through an open and honest discourse among many stakeholders across the public and private sectors.

But what is not going well is that we need to move toward a better understanding, especially of artificial intelligence, and we need to move faster to come up with mechanisms that make AI governance a reality. For example, some sort of transparency or disclosure report on how a company acquires AI, or how its AI systems are built, is a great step a government can enforce to ensure that AI actually serves citizens.

What surprised you the most this year?

The reception that generative AI technologies have received in the past year, given that a lot of people like me have been working on these technologies for a long time. Seeing consumers use it so quickly, and seeing the scale these generative AI technologies have reached over the past six to seven months, has been amazing to me. I’m excited to see how governance will begin to rein in the use of generative AI in critical applications.

And now, for some counter-programming: Eli Dourado, popular Twitter user and senior research fellow at the Center for Growth and Opportunity, would like to throw some cold water on the AI party.

In a blog post yesterday, Dourado argued that the transformative power of AI will be somewhat limited. Housing? Medicine? Energy and transportation? Those are difficult political issues, with dozens of entrenched human stakeholders standing in the way of true technology-driven transformation. Instead, he makes the case that the main “disruption” generated by AI will be in the media world, where, he said, “the First Amendment ensures that the industry is open to all.”

Dourado cites Neal Stephenson’s science fiction novel Fall; or, Dodge in Hell, which posits a world where an influx of false content generated by artificial intelligence requires every human to have their own personal editor. Which means he’s pessimistic about the benefits of AI media disruption, too: “Large language models like ChatGPT can write, but they aren’t able to edit their own output,” Dourado wrote. “They are not able to make it consistently good… In a world of infinite content, I just want amazing works of genius. It is not clear that a model trained in some sense to produce the next average token can produce something much better than average.”

And one more AI item before the weekend.

Ethan Mollick, an author and associate professor at the Wharton School of the University of Pennsylvania, wrote on his Substack this week about how he’s already integrating ChatGPT into his business classes. Mollick’s approach is to present it to his students not as a tool that will radically change their pedagogical experience, but as one that will support the same basic skills they are trying to develop in the classroom.

“Producing good AI-written material is not easy. Getting an AI to produce meaningful content requires expertise and subject-matter skill,” Mollick writes. “I give them my guide to using AI in writing, and ask them to credit the AI and include the prompts they used when handing in the essay. They will learn how to use the tool even while applying it.”

Mollick acknowledges that while his experiments may not work — they are, after all, experiments — simply bringing the technology into the classroom has its own benefits: “…while the sudden emergence of generative AI may be disturbing to educators, it is even more disruptive to the future of the students we teach. We need to give them the skills to thrive in a changing world by embracing what AI can do.”