|
TL;DR Geoffrey Hinton, the “Godfather of AI,” warns pastors that superintelligent AI may arrive within 5–20 years—and argues the only safe path is building AI that cares for people like a mother cares for her child. 1. Control-based AI safety won’t work—care-based design is essential. 2. Without “maternal instincts,” AI could seek survival and control, threatening humanity. 3. Political dysfunction and research cuts risk leaving society unprepared. 4. AI in healthcare shows promise, but the Church must offer a moral voice now. |
Day Two Insights from Ai4 2025 in Las Vegas
This week at the largest AI event in North America, it’s been a great time to reflect on big questions. Day two of Ai4 2025 delivered an unexpected moment that should have every pastor leaning in: a Nobel Prize-winning computer scientist essentially preaching about love, care, and sacrificial relationships as humanity’s salvation.

Geoffrey Hinton, the “Godfather of AI” whose neural network breakthroughs laid the foundation for today’s AI revolution, spent a significant portion of his keynote interview making arguments that sounded remarkably theological, even as he warned that superintelligent AI could arrive within the next two decades.
The Timeline Has Accelerated
When asked about the timeline to AGI (Artificial General Intelligence), Hinton’s response was startling in its immediacy. His estimates have dramatically shortened over recent years—from “30, 50 years away” to now believing we could see superintelligent AI “sometime between 5 and 20 years.” Even more concerning, he noted that “almost everybody [who knows what they’re talking about] thinks we’re going to get super intelligence.”
This isn’t a distant sci-fi scenario. We’re potentially talking about AI systems vastly more intelligent than humans arriving within the lifetimes of many of us, possibly within the tenure of current ministry leaders.
The Control Problem: Why Dominance Won’t Work
Perhaps Hinton’s most profound insight challenged the entire tech industry’s approach to AI safety. He argued that the prevailing mindset, “we have to stay in control of these AIs, we’ve somehow got to be stronger than them,” is fundamentally flawed.
“They’re going to be smarter than us,” Hinton explained. “It’s going to be like imagine you were in charge of a playgroup of three-year-olds and you worked for them. It wouldn’t be very hard to get control of it. You just promised them free candy for a week.”
His solution? We need to completely reframe the problem.
The “Mother AI” Solution
In what may be the interview’s most thought-provoking moment, Hinton proposed that instead of trying to dominate AI, we need to create “mother AI”—systems built with genuine care for humanity as their core motivation.
“The only model we have of a more intelligent thing being controlled by a less intelligent thing is a mother being controlled by her baby,” he explained. “The mother has all sorts of built-in instincts, hormones as well as social pressures to really care about the baby… We need to build maternal instincts into these things so they really care about people.”
This represents a fundamental shift from control-based to care-based AI development—a concept that should resonate deeply with Christian understandings of love, stewardship, and care for the vulnerable.
The Existential Stakes
Hinton didn’t shy away from the existential implications: “If it’s not going to parent me, it’s going to replace me.” He outlined how any truly intelligent AI will quickly develop two critical sub-goals: staying alive and gaining more control—both of which could put humanity at risk.
The timeline for this potential threat aligns with his AGI predictions: we could be facing these scenarios within decades, not centuries.
Political Realities and Missed Opportunities
In a particularly pointed moment, Hinton suggested that partisan politics could literally cost lives. He referenced how the Biden administration couldn’t pass legislation requiring virus synthesis companies to screen for dangerous pathogens “because, of course, the Republicans wouldn’t collaborate.” His stark assessment: “So we may all die because the Republicans wouldn’t collaborate.”
The dry humor in his delivery got genuine laughs from the audience, but the underlying truth was chilling—here was one of AI’s founding figures suggesting that political dysfunction could undermine critical safety measures precisely when humanity can least afford it.
The Research Crisis
Hinton also sounded alarms about America’s retreat from basic research funding, noting dramatic cuts to NIH and NSF. “If you look at the return on investment from funding basic research, it’s huge. That’s where all the long-term progress comes from. And you’d only cut the basic research if you didn’t care about the long-term future.”
This trend threatens the very academic institutions that historically produced breakthrough innovations—including Hinton’s own foundational work.
Healthcare: A Bright Spot
Despite the warnings, Hinton expressed optimism about AI’s potential in healthcare, particularly in analyzing medical imaging. He noted that AI has already discovered information in retinal scans that ophthalmologists didn’t know was there, and expects similar breakthroughs in cancer detection and treatment.
“Healthcare is a good place to increase efficiency because it’s elastic. We can absorb endless amounts of healthcare. So if you make a doctor 10 times as efficient, we’re all just going to get 10 times as much healthcare.”
The Absent Christian Voice—Again
What struck me throughout Hinton’s presentation was how his core concerns—care versus control, the nature of intelligence, what it means to value human life, the ethics of creating “beings”—are fundamentally theological questions. Yet the Christian voice remains largely absent from these crucial conversations.
When Hinton talks about building “maternal instincts” into AI systems, he’s essentially describing the need to embed sacrificial love and care into the most powerful technology humanity has ever created. This is precisely where biblical understanding of love, stewardship, and human dignity should be informing the conversation.
The Regulation Reality
Hinton was blunt about regulation’s limitations for existential AI risks: “Regulation is not going to work for that. We have to find a way to live with superintelligent AI.” However, he strongly advocated for regulation in other areas, like preventing bioweapon creation.
This nuanced view—that some AI risks require regulatory solutions while others require fundamental changes in how we build these systems—demands sophisticated engagement from faith communities.
Key Insights from Geoffrey Hinton
- On AI timeline: “I think most experts think sometime between five and 20 years from now… Almost everybody thinks we’re going to get super intelligence.”
- On the control fallacy: “People have been saying we have to stay in control of these AIs, we’ve somehow got to be stronger than them… That’s not going to work. They’re going to be smarter than us.”
- On the mother AI concept: “We need AI mothers rather than AI assistants. An assistant is someone you can fire. You can’t fire your mother.”
- On the stakes: “If it’s not going to parent me, it’s going to replace me.”
- On AI goals: “They will very quickly develop two sub goals if they’re smart. One is to stay alive… The other sub goal is Get More Control.”
- On human vulnerability: “It’s going to be like imagine you were in charge of a playgroup of three-year-olds and you worked for them. It wouldn’t be very hard to get control of it.”
- On international cooperation: “All the countries want AI not to take over from people… Our one chance is to build it so it cares about us.”
- On research funding: “I think it’s a huge mistake to give up on the funding of basic research… you’d only cut the basic research if you didn’t care about the long-term future.”
- On AI companies and safety: “Anthropic was set up to be more concerned with safety than OpenAI and it is… among the big AI labs, that’s the most concerned with safety.”
The Church’s Moment of Choice
As I listened to Hinton—a man who helped create the technology now warning about its dangers—I couldn’t help but think about the church’s response. Here is someone calling for AI systems built on care rather than control, on love rather than dominance. These are fundamentally Christian concepts, yet we’re largely absent from the conversation.
The window for influence is still open, but it’s narrowing rapidly. The questions Hinton raised—How do we build caring intelligence? What values should guide the creation of superintelligent beings? How do we ensure powerful systems serve rather than replace humanity?—are questions the church should be leading, not following.
We have perhaps decades, not centuries, to get this right. The question remains: will the church rise to this moment, or will we once again find ourselves trying to retrofit Christian values into systems designed without us?
The stakes, as Hinton made clear, couldn’t be higher. The choice, thankfully, is still ours to make.


