Whose ethics should be programmed into the robots of tomorrow?

As we stand on the threshold of a new era, the question of whose ethics should guide the robots of tomorrow becomes increasingly pertinent. This exploration delves into the complexities of programming morality, a task as nuanced as humanity itself.

Are values and morals really about what the majority want?

When we make moral decisions, we weigh competing principles and values against one another, and it is unlikely that we can recreate that process in AI

Purpose of thought experiments

Moral thought experiments serve to perturb the initial conditions of a moral situation until one’s intuitions vary

Building ethical AI

As machines become ever more integrated into our lives, they will inevitably face more ethical decisions

What would you do if…?

An international team of philosophers, scientists, and data analysts gave us The Moral Machine Experiment

The democratization of values for ethical AI

How do we know that what we say we'll do will match what we'll actually do?
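One way to picture what "democratizing values" could mean in practice is to imagine aggregating many respondents' answers to the same dilemma and surfacing the majority preference, in the spirit of a crowdsourced survey like the Moral Machine. The sketch below is a minimal, hypothetical illustration in Python: the scenario names, the `responses` records, and the `majority_preference` function are invented for this example and are not drawn from the Moral Machine dataset or its published methodology.

```python
from collections import Counter

# Hypothetical survey records: each respondent picks an outcome for a dilemma.
# Scenario and option names are invented for illustration only.
responses = [
    {"scenario": "swerve_vs_stay", "choice": "spare_pedestrians"},
    {"scenario": "swerve_vs_stay", "choice": "spare_passengers"},
    {"scenario": "swerve_vs_stay", "choice": "spare_pedestrians"},
    {"scenario": "young_vs_old",   "choice": "spare_young"},
    {"scenario": "young_vs_old",   "choice": "spare_young"},
    {"scenario": "young_vs_old",   "choice": "spare_old"},
]

def majority_preference(records, scenario):
    """Return the most frequently chosen option for one scenario and its vote share."""
    votes = Counter(r["choice"] for r in records if r["scenario"] == scenario)
    choice, count = votes.most_common(1)[0]
    return choice, count / sum(votes.values())

for scenario in ("swerve_vs_stay", "young_vs_old"):
    choice, share = majority_preference(responses, scenario)
    print(f"{scenario}: majority prefers {choice} ({share:.0%} of votes)")
```

Even this toy tally exposes the worry raised above: it records what people say they would choose, not what they would actually do, and it says nothing about whose voices are represented in the sample.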
