7.6. How Cultural Protocols and Belief Systems Impact AI

7.6.1. Belief Systems

Belief systems, whether rooted in moral principles, cultural traditions, or personal values, shape how humans define right and wrong, and these same belief systems influence how AI systems are designed and trained. AI does not possess intrinsic understanding or ethics, so it relies entirely on the data and goals programmed by people. This makes it vulnerable to both explicit and unconscious human biases.

Example: Designing Self-Driving Cars

In ethical dilemmas like self-driving car scenarios, different belief systems might lead to vastly different programming choices.

  • The belief that all lives hold equal value and that we should act for the greater good could lead to the conclusion that the car should swerve, sacrificing one life to save five.

  • Religious or cultural beliefs that emphasise the role of fate may suggest that we should not intervene and alter the car’s current path.

  • Moral beliefs and legal frameworks may argue that deliberately taking an action that causes harm is wrong, which would support the decision not to swerve.

Although this is an extreme scenario, it highlights how belief systems influence the design of AI. The choices an AI system makes are ultimately based on decisions programmed by humans, reflecting our own belief systems and values.
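The point that the "same" dilemma resolves differently depending on the encoded belief system can be made concrete with a small sketch. This is a toy illustration only, with hypothetical policy names invented for this example, not how any real autonomous-vehicle system is built:

```python
# Toy sketch: the decision is fixed by the belief system the designers
# encoded, not by anything the car "believes". Policy names are hypothetical.

def choose_action(policy: str, on_current_path: int, on_alternate_path: int) -> str:
    """Return 'swerve' or 'stay' for a forced-choice dilemma.

    policy -- the belief system encoded by the designers:
      'utilitarian'      act for the greater good (minimise total harm)
      'non_intervention' do not alter the car's current path (fate)
      'deontological'    never take a deliberate action that causes harm
    """
    if policy == "utilitarian":
        return "swerve" if on_alternate_path < on_current_path else "stay"
    if policy in ("non_intervention", "deontological"):
        return "stay"
    raise ValueError(f"unknown policy: {policy}")

# Five people on the current path, one on the alternate path: the outcome
# differs only because the encoded values differ.
for policy in ("utilitarian", "non_intervention", "deontological"):
    print(policy, "->", choose_action(policy, on_current_path=5, on_alternate_path=1))
```

Identical inputs, different outputs: the "ethics" of the system is whatever its designers chose to program.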

Example: Bias Towards Dominant Beliefs

When AI systems are trained on data that underrepresents minority populations, the decisions they make may reflect dominant cultural norms while overlooking or misinterpreting the needs and values of other groups. This can reinforce existing inequalities and marginalise certain belief systems. For example, in social media content moderation, AI might approve posts that align with one cultural norm but are considered offensive or inappropriate in another. Over time, this can lead to a loss of trust in AI systems among underrepresented communities and may promote the idea that certain worldviews or experiences are less valid.
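The mechanism described above can be sketched in a few lines. The data below is entirely invented for illustration: a moderation rule learned from labels supplied overwhelmingly by one cultural group will reproduce that group's norms, even for posts that are benign in another community:

```python
# Toy illustration (invented data): a majority-vote moderation rule learned
# from culturally skewed labels. 90% of annotators come from "culture A",
# who (hypothetically) read this phrasing as inappropriate; annotators from
# "culture B", for whom it is everyday speech, are underrepresented.
from collections import Counter

training_labels = [("example post", "remove")] * 90 + [("example post", "keep")] * 10

def learned_decision(post: str) -> str:
    # The "model" here is just the majority label seen in training.
    votes = Counter(label for text, label in training_labels if text == post)
    return votes.most_common(1)[0][0]

print(learned_decision("example post"))  # -> remove
```

The minority view never outvotes the dominant one, so the system removes the post for everyone, which is exactly the marginalising effect described above.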

7.6.2. Cultural Protocols

Cultural protocols are the accepted social norms and practices that shape appropriate behaviour within a group. These protocols often reflect the group’s underlying belief system and can influence how people communicate, make decisions, and determine what is considered respectful or appropriate, for example, in language use, dress, or social interactions.

AI systems must consider cultural differences because cultural norms, values, and expectations significantly influence how people perceive fairness, appropriateness, usefulness, and trust in technology. Ignoring cultural variation can result in AI tools that are biased, ineffective, or even harmful in certain contexts.

Example: Cultural Differences in Aged Care

In sectors like aged care, cultural values play a crucial role in shaping expectations and approaches to care. For example, Western frameworks often emphasise individual autonomy and independence, while many other cultures prioritise family involvement, community care, and interdependence. If AI systems fail to account for these cultural differences, they may make decisions that conflict with the values of the people they serve. This misalignment can lead to ineffective care, mistrust, and even emotional or psychological harm to those receiving support.

The Role of Cultural Values in Shaping AI Interactions

Cultural values shape how individuals engage with AI in various ways:

  • Cultural attitudes toward AI can influence how frequently people use it, which tools they engage with, and when or in what situations its use is considered appropriate.

  • Cultural confidence in AI can influence the level of trust placed in it as a reliable source of information or support.

  • Beliefs about human-AI relationships shape interaction styles, such as how polite, respectful, or informal users are when communicating with AI tools.