
When AI Becomes God


TL;DR: Joe Rogan’s warning that AI could “become god” highlights the urgent need for faith leaders to guide AI development with ethics, wisdom, and a redemptive vision.

1. History shows tech innovation without ethics leads to harm (social media, marketplaces, etc.).
2. AI is different—its power can shape identity, agency, and belief.
3. Risks: bias, privacy loss, unchecked autonomy, and cultural erosion.
4. Call to action: people of faith must lead AI’s future, ensuring it serves human flourishing.

During his January 10 interview with Mark Zuckerberg, Joe Rogan offered a stark observation: “We’ve got to hope that the right people are in control of AI when it becomes god.”

This statement captures a profound and growing concern about artificial intelligence—a technology poised to shape the future of humanity in ways we can barely comprehend. The idea of AI becoming “god” speaks to its potential to wield extraordinary influence over our decisions, ethics, and even beliefs. As such, ensuring that AI development is guided by a moral, ethical, and human-centric approach is not just a technical challenge but a spiritual and existential imperative. People of faith, equipped with a redemptive worldview, must lead in the development of artificial intelligence to safeguard its alignment with human values and human flourishing.

Below I’ll outline:

  • The trajectory of technology development,
  • The implications of innovation without ethics,
  • A call to action for people of faith to lead in this new era, and
  • A vision for what could be if we walk worthy of the calling we have received.

The Trajectory of Innovation and Ethics

Technological innovation has been nothing short of revolutionary. Each leap—from hardware, to software, to the Web, to AI—has profoundly reshaped human society. This evolution has been driven largely by market forces and profit motives, often without the anchor of a redemptive or ethical framework. And the consequences, while imperfect, have been mostly benign. Until now. Let’s trace this progression:

  1. Hardware: The birth of modern computing in the mid-20th century laid the foundation for technological innovation. Companies like IBM and Apple revolutionized hardware, enabling computation at scales previously unimaginable. At this stage, the focus was on functionality and capability, and while ethical considerations were a “nice to have,” they were not critical to the immediate impact of the technology.
  2. Software: The rise of software, spearheaded by companies like Microsoft, allowed computers to perform diverse functions, revolutionizing productivity and communication. The focus here was still primarily on expanding possibilities and streamlining tasks. Ethical considerations, such as accessibility and fairness, were secondary concerns.
  3. The Internet and Browsers: The internet’s emergence connected the world in unprecedented ways. Browsers like Netscape made the web accessible, democratizing information but also introducing new challenges, such as misinformation and digital inequality. Here, the stakes began to rise, as the internet’s power to influence thoughts and behaviors became evident.
  4. Search: Search engines revolutionized access to knowledge, with companies like Yahoo! and Google becoming gateways to the world’s information. Ethical considerations, such as data privacy and algorithmic transparency, started to emerge as significant concerns. However, the absence of a strong ethical framework meant these issues were often addressed reactively rather than proactively.
  5. Marketplaces: E-commerce platforms reshaped consumer behavior, emphasizing convenience and efficiency. As these platforms grew, ethical issues around labor practices, environmental impact, and monopolistic tendencies surfaced. Yet, these considerations remained peripheral to the primary market-driven objectives.
  6. Social Media: The advent of social media profoundly changed how people connect, communicate, and consume information. Here, the lack of an ethical framework began to have noticeable consequences, such as mental health crises, polarization, and the spread of misinformation. Despite these stakes, faith-based voices were largely absent from the development of these platforms, leaving ethical considerations to be shaped by market priorities.
  7. Artificial Intelligence: Now, with AI, we face an innovation that not only responds to human input, but learns, evolves, and potentially influences our most foundational beliefs and decisions. Unlike previous innovations, AI’s consequences can be existential, impacting humanity’s future at a global scale.

In the earlier stages of innovation, an ethical or redemptive perspective was a secondary consideration. However, as we moved into search, marketplaces, and social media, the stakes grew higher. The absence of people of faith or those with strong moral convictions in these developments has left a void. Now, with AI, the stakes are higher than ever, making it essential for people of faith and those with a high ethical worldview to be central to its development.

The Problem Of Innovation Without Ethics

AI is not merely another tool; it is a technology with the potential to reshape what it means to be human. Zuckerberg’s perspective, shared during his January 10 interview with Rogan, highlights this transformation. He remarked, “I actually think that natural blending of the kind of digital world and the physical is way more natural than this segmentation that we have today… There isn’t like a physical world in a digital world anymore… It’s one world. Like these things should get blended.”

While Zuckerberg’s vision celebrates the seamless integration of digital and physical realities, it also underscores a concerning erosion of boundaries that define human experience. This blending risks diminishing individual agency and autonomy, as machine influence becomes increasingly pervasive and indistinguishable from human decisions.

From automating tasks to augmenting decision-making, AI’s capabilities raise profound questions about autonomy, agency, and identity. The potential risks include:

  • Bias and Inequality: AI systems can perpetuate and even amplify societal biases, leading to systemic injustices.
  • Loss of Privacy: As AI gathers and analyzes vast amounts of data, concerns about surveillance and personal freedom intensify.
  • Autonomous Decision-Making: The prospect of AI systems making critical decisions, from medical diagnoses to military actions, raises moral dilemmas.
  • Existential Risks: The development of superintelligent AI poses theoretical risks that could threaten humanity’s very existence.

Given these stakes, it is imperative that AI development is not left solely to market forces or technocratic elites. Instead, it requires the involvement of individuals and communities committed to ethical and moral principles.

A Call to Action for People of Faith

People of faith bring a unique perspective to the table, grounded in the belief that human life is sacred and that technology should serve, not dominate, humanity. This worldview is essential in shaping AI development in ways that promote human flourishing and align with divine purposes.

  1. Ethical Leadership: Faith communities can advocate for ethical guidelines that prioritize human dignity, equity, and justice in AI development.
  2. Redemptive Innovation: By infusing AI systems with principles of compassion, humility, and stewardship, people of faith can help ensure that technology serves the common good.
  3. Community Engagement: Faith leaders and organizations can facilitate conversations about AI’s implications, empowering communities to participate in shaping its trajectory.
  4. Building Ecosystems: Beyond advocacy, people of faith must actively participate in creating AI technologies and ecosystems that reflect trustworthy and ethical principles. These systems can serve not only faith-based communities but also broader society, providing a model of responsible innovation.

A Vision for Redemptive AI

What might a redemptive approach to AI look like? It starts with integrating principles that honor the dignity of creators, users, and communities. Drawing inspiration from “Our Napster Moment,” this vision includes:

  • Consent: Ensuring that AI systems respect the ownership of content and are never used without permission.
  • Compensation: Creating pathways for fair monetization and revenue sharing for those whose data or intellectual property powers AI.
  • Controls: Providing tools for content owners to manage where, when, and how their work is used.
  • Credit: Offering transparent and accurate attribution to creators and contributors.
  • Clarity: Guaranteeing transparency about the technologies and models in use, fostering trust and informed engagement.
  • Confidence: Committing to the safe and trusted distribution of faith-aligned and ethically grounded content.

Developers and stakeholders must also embrace core principles of fairness, safety, transparency, and human-centricity:

  • Fair: Designed to minimize bias and promote equity.
  • Safe: Built with safeguards to prevent misuse and harm.
  • Transparent: Operating in ways that are understandable and accountable to users.
  • Human-Centric: Prioritizing the well-being and flourishing of individuals and communities.

This vision requires not only technical expertise but also moral courage and spiritual insight. It demands a willingness to challenge prevailing norms and advocate for a higher standard of accountability and purpose.

Conclusion

Joe Rogan’s lament—“We’ve got to hope that the right people are in control of AI when it becomes god”—is a clarion call for people of faith to step into the arena of artificial intelligence. The trajectory of technological innovation has brought us to a pivotal moment, where the consequences of inaction could be profound and irreversible. By bringing a redemptive worldview to AI development, people of faith can help ensure that this transformative technology serves humanity in ways that reflect the image of God.

The time to act is now. The hands that shape AI will, in many ways, shape the future of humanity. Let those hands be guided by wisdom, compassion, and a commitment to the common good.

Jason Malec
Jason helps organizations accelerate mission by streamlining objectives, strategies, and storytelling. He works with Gloo AI, an AI-powered trustworthy source for spiritual growth. (ai.gloo.us) He’s served in executive leadership at Living on the Edge, American Bible Society, New Denver Church, ExploreGod, and North Point Ministries. He and Meredith married in 1997, and have three grown children. He’s an avid cyclist, swimmer, and yogi, and earned second place in a sixth-grade speling be.

