A Conversation on AI, Ethics, and Human Responsibility
Author: Wan Mohd Aimran Wan Mohd Kamil | 11th March 2025
Abstract
This dialogue examines AI ethics, human responsibility, and the theological implications of technology. It critiques Ulama AI, contrasts Western and Islamic views on machines, and stresses positive resistance in AI governance. Emphasizing education and accountability, it argues that AI should aid, not replace, human reasoning in ethical and scholarly decision-making.
The conversation
Key: Shafira Noh (S) and Wan Mohd Aimran (A)
A: Strategically, as Muslims, we need to seek out allies in developing safe and responsible AI.
S: Among other allies, I see that people are starting to develop virtuous AI, which I think is excellent. It shows that we are not the only ones concerned about ethics and virtue.
A: That’s right. The first prerequisite is to learn their key terms: safety, virtue, virtuous. Then, we should compare their vision of AI with ours.
S: Wow, this is mind-blowing! Other key terms would include governance, alignment, and agent (AI agent).
A: Allies are those who recognize the same problems as we do, who identify the same root causes, and who propose the same strategic solutions. At the very least, our allies in AI must acknowledge that technology is not neutral. If that is the minimum criterion, then most researchers and lecturers in our local engineering and Islamic studies faculties would be excluded. So, we don’t seek their validation.
S: Hence, potential allies are those who have the potential to recognize the same problem, who could identify the same root causes, and who might eventually propose the same strategic solutions within certain timelines. Identifying key terms helps in recognizing these potential allies. If local faculties are excluded, we can find allies elsewhere.
A: How do we seek allies? By investigating their values, especially whether they accept these two premises: first, that technology is not neutral; second, that the economy is not built solely on greed. And as far as advocacy is concerned, do you think an Islamic studies lecturer delivering a talk to engineering lecturers and researchers on the worldview of Islam will make them change their theory and practice in AI? If change were as simple as telling people something, many of our problems would already be solved. Telling is not educating.
S: And educating can take a long time and involve many creative approaches. Framing it as education expands the challenge: how do we educate policymakers? It’s not just about pointing things out. How do we educate the public about AI risks? Again, not just by pointing things out. BlueDot is performing an essential fardu kifayah (communal obligation) by continuously improving their platform. They iterate alongside students and experts, which is no easy task. Interestingly, they started as a small reading club.
A: Exactly. Their positive resistance is more impactful and deliberate. Meanwhile, we speak in closed seminars. Frankly, who cares?
S: That’s why I see a challenge when AI safety is understood only from a risk perspective: framed that way, one tempting way to “mitigate risk” is simply to avoid public discussion altogether. I like the phrase “positive resistance” as a benchmark for impact. People often measure impact by the number of students, projects, or pieces of high-impact research. Another way is to assess whether we demystify and clarify AI-related risks appropriately, with neither panic nor complacency. This is why BlueDot’s work is significant.
A: One important step is to categorize different risks. AI safety is not just about making AI safe for humans; it’s also about making humans safe to use AI.
S: From within the field itself, both aspects matter. Researchers are developing ways to make AI safe for humans and the world, which is why there’s an ongoing discussion on existential risks (x-risks, s-risks). My facilitator suggested this resource: https://airisk.mit.edu/. In Malaysia, I met a startup founder developing AI to monitor workers. Yesterday, a startup called Mengaji announced an Ulama AI project: https://www.instagram.com/p/DG3pUwyhe_A/.
A: Interesting. Perhaps they shouldn’t dilute the term ulama.
S: True. But their contribution in helping asatizah (religious teachers) reach a global audience is commendable. AI enables them to teach worldwide.
A: That’s a benefit in terms of increasing the quantity of knowledge dissemination, but it should not compromise the quality of guidance received.
S: They are working with JAKIM, so perhaps JAKIM hasn’t noticed the dilution of the term ulama.
A: Ulama perform istinbat hukum, which means deriving rulings from the Quran and Hadith. But istinbat is not just about matching verses or hadith to situations. That is only one step. For example, when students answer physics problems, do they merely match formulas to given scenarios? When doctors treat patients, do they just match medicines to symptoms? Diagnosis requires distinguishing primary from secondary symptoms, identifying key differences, and comparing with past cases. This requires human intelligence. Solving a problem is not just about matching existing solutions to predefined problems.
S: But proponents of AI-based reasoning argue that modern large language models (LLMs) possess cognitive tools, like chain-of-thought reasoning and advanced inference, which mimic human processes.
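For readers unfamiliar with the jargon: “chain-of-thought” simply means prompting a model to write out intermediate reasoning steps before giving its final answer. A minimal sketch of the technique follows, assuming an OpenAI-style chat API; the model name and prompt are purely illustrative, not part of the original conversation:

```python
# Minimal chain-of-thought prompting sketch (illustrative only).
# Assumes the official `openai` Python package and an OPENAI_API_KEY
# set in the environment; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of model
    messages=[
        {
            "role": "user",
            "content": (
                "A train travels 120 km in 1.5 hours. "
                "What is its average speed? "
                "Think step by step, then state the final answer."
            ),
        },
    ],
)

# The reply contains the model's written-out steps followed by the
# answer (80 km/h); whether those steps constitute genuine reasoning
# is exactly what the dialogue disputes.
print(response.choices[0].message.content)
```

The point of contention is whether such generated steps amount to genuine reasoning or only a statistical imitation of it, which is the claim A challenges next.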
A: We can challenge them: If you were sick and had the choice between a human doctor and an AI doctor, which would you choose? People who argue otherwise often do not understand what thinking truly entails. Not all thinking is formulaic. Bureaucrats may believe so, but creative artists, historians, and scientists will admit that their thinking is not formulaic. Among Muslim scholars, the fuqaha (jurists) are the most formulaic, simply applying existing solutions to new problems. Unfortunately, they dominate our religious institutions.
S: So, that’s why people think an Ulama AI is possible?
A: Exactly. But ulama are not just fuqaha.
S: No wonder you emphasized the dilution of the term ulama.
A: Imam al-Ghazali pointed out nearly a thousand years ago that the meaning of ulama has been restricted to just fuqaha. The fuqaha are like pharmacists—they dispense medicine prescribed by doctors. That’s all.
S: This conversation highlights that AI is often unsafe not because of its inherent nature, but because of human misunderstanding. Fuqaha are accountable for their rulings, but AI has no accountability if it delivers an incorrect fatwa.
A: Something can be unsafe not because it is inherently dangerous, but because of how it is applied. And only people can be held accountable for rulings.
S: Josep Curto also mentions that AI usage should follow risk categories. If it involves high-stakes decisions, AI must be scrutinized before deployment. We shouldn’t panic about AI/LLMs if we leverage our legal structures, institutions, and traditions. Could disclaimers alone be enough?
A: That’s like putting health risk labels on cigarettes or vapes without regulating their use. Disclaimers essentially say: We know this is dangerous, but we want to sell it anyway. If something goes wrong, it’s not our fault—we warned you! It’s a way of relinquishing responsibility.
S: That reinforces the need for education. I’ve been reflecting on Surah At-Tin, where Allah affirms that humans are created in the best form. This challenges the assumption that AI/LLMs are inherently smarter than humans. While machines operate on silicon-based processing, humans possess unique cognitive abilities that go beyond mere computation…
A: After all, God does not mention in the Quran that He created machines to worship Him.
S: I’m perplexed by where this human inferiority complex toward machines comes from, especially in the context of both worshiping Him and fulfilling our role as khalifah (vicegerent) on Earth.
A: Therefore, we also need to investigate the differing attitudes of Westerners and Muslims toward machines.
Food for thought
How do different cultural and religious perspectives shape the way we understand and interact with AI?
What are the risks of assigning human-like qualities to AI, and how can we ensure that AI remains a tool rather than an authority?
How can we educate both policymakers and the public on AI ethics without creating unnecessary fear or complacency?
What practical steps can be taken to ensure that AI development aligns with ethical and moral values rather than purely economic or technological interests?
How does the historical relationship between humans and machines influence our current views on AI and its role in society?
