Artificial Ethics in the Matrix

“it is the world that has been pulled over your eyes to blind you from the truth”

Hannah Fuhrmann

on 16 January 2014


Transcript of Artificial Ethics in the Matrix


Is it Morally Wrong to Unplug People?
Artificial Ethics Within the Matrix

By: Mohammad Sheikh, Brandon Smith, Hannah Fuhrmann, Cory Crisostomo, and Jyles Datoon
Manipulation and Immorality
Manipulation and Immorality Continued

Special Terms
"The human being's actions in that respect have no real or actual consequences in a world that exists independently of his or her mind" (Driver)

"The Matrix isn't a dream" (Driver)
"Some writers have argued that one cannot be held responsible for what happens in a dream, since dreams themselves are not voluntary, nor are the 'actions' one seems to perform in a dream"
Main Ideas
- "Without actual bad effects, the actions of those in the Matrix are not immoral."

- Controversial claim, "It's the thought that counts"

- Immanuel Kant

- Forming bad intentions is immoral

- Non-veridical wrongdoing in the Matrix
Julia Driver's
Therefore, since the programs and agents of the Matrix display the same capacity for these virtues as humans, they should have the same moral standing as humans too.
Humans associate moral standing with virtues such as consciousness, sentience & rationality. The more prevalent these virtues are in any being, the more entitled that being is to moral rights.
Moral Status of Programs & Immorality
Are there any moral rules in a fake world?

If you die in the Matrix, you die in real life; however, if you get hurt in the Matrix, it will not harm you in real life.

Is it wrong to hurt someone in the Matrix when it does not affect them in the real world?

The Film's Ethical Dilemmas
The morality of a person's actions or choices could either be defined by their intentions at the time of the act, or by the consequences that occur as a result of their actions.
Also, if a person is in some respect deluded, similar to an insane person, they cannot be fully held responsible for their actions.
Manipulation & Immorality Continued
However, even though the people in the Matrix are deceived into believing that their virtual environment is real, each person is equally aware of that shared reality.
Therefore, they proceed to make rational choices, resulting in the same effects that would've occurred had they made the same choice in the real world.

- Decisions in dreams do not impact the real world, but the Matrix isn't a dream
- "The virtual fur coat is the result then of a virtual animal getting killed, but a virtual animal with all the right sorts of mental states - in this case pain and suffering."
If the Matrix is Complete
- Virtual Harms
- Produced bad effects therefore it is immoral
- Relates to the book "Divergent"
If the Matrix is not Complete
- This issue becomes more complicated
- Cypher's steak example:

"If Cypher's virtual steak comes from a virtual meat locker, and the meat locker is the end of the line - and the acquisition of the steak does not involve the killing of a virtual animal with all the same psychology of pain and suffering a 'real' animal feels, then no moral harm has been done."
Manipulation and Immorality
The author's final evaluation:

"My guess is that the Matrix is a complete alternate reality created in the image of the pre-machine reality. And the Matrix, if it does offer such a complete replication of the pre-machine reality, is a self contained world. It has its own objects, its own people, animals and.... ethics. The systematic deception of the humans doesn't change this."
The ideas introduced in Driver's "Artificial Ethics" are directly connected to "The Matrix": the film relies heavily on the ethical concepts discussed in the article both to further the plot and to intellectually challenge the viewer. These ideas appear in the film's questions about the morality of both machines and humans, as well as in its ethical dilemmas: people's actions in a simulated world, and unplugging people from a normal world to have them live in an apocalyptic wasteland.
Therefore, the people in the Matrix CAN be held morally responsible for their actions, even though their choices have no effects in the real world.
Robotic Morals

Since the Matrix is not a dream, the author argues that people in the Matrix can make voluntary choices and that they are not irrational.
Characters like Neo believe what any rational, reasonable person in the Matrix would believe.

Killing Innocent People in the Matrix

Whether you are an enlightened individual or someone inside the Matrix, is it morally right or wrong to kill someone who hasn't been unplugged?

Consider the scene where Neo and Trinity go to save Morpheus and kill the officers in the building.
They both know the Matrix is not the real world, and it doesn't matter to them at the time; however, those people are now dead in the real world.
Julia Driver's view:
"The view I favor is that without actual bad effects the actions of those in the Matrix are not immoral"
Therefore, in Neo's case in that particular scene, he felt it was morally right to kill those people in order to save Morpheus, since the lives of everyone in the city of Zion were threatened while Morpheus was held captive.

Machines are created by humans for their own needs, using them as tools to further civilization

Machines are given emotions, logic, and general thought, yet are treated as servants to humans

"Smith, of course, and his colleagues seem remarkably without affect. Yet, at critical points they do display emotions: anger, fear, and surprise. (Driver)"
As soon as the machines demonstrate their ability to have their own desires, humans feel threatened and a war breaks out.

Humans desire to reap the benefits of their creations, but not when they must live with any negative traits

"'Hateful day when I received life!' I exclaimed in agony. 'Accursed creator! Why did you form a monster so hideous that even you turned from me in disgust?' (Shelley)"
In Mary Shelley's "Frankenstein," Victor Frankenstein creates his monster for his own desire of the advancement of science
The remaining humans want to destroy the entire race of robots for the survival of their own species, despite the consciousness of the machines

Humans are unable to compromise for the sake of the future, destroying the machines' fuel source instead of finding equilibrium

“Just as it would be wrong to flip a switch and kill an innocent human being, no matter how that human being came into existence, it would be wrong to flip a switch and kill a sentient program. As long, of course, as that program possessed the qualities we regard as morally relevant. (Driver)”
Frankenstein notices his creation's appearance and is horrified, abandoning it. The monster is dismayed at being left alone in the world and discriminated against.
Frankenstein does not give his creature the chance to atone for his sins or teach him the ways of the world, but rather continues to treat him as a threat
“I hate this place. This zoo. This prison. This reality, whatever you want to call it, I can't stand it any longer.” - Agent Smith
Machines take humans captive for their own survival
Humans are given relatively humane treatment, with a simulated world like their own
Both humans and machines are fighting for
their own survival

Machines demonstrate moral behaviour throughout the movie

The dynamic between the morality of the humans and the morality of the machines is present

Many similarities to Mary Shelley's "Frankenstein"

In a dream, any actions one performs have no actual good or bad effects once one wakes up. A great example connecting the Matrix to the idea of a dream-like environment is the movie Inception.
Morally Relevant
Sentient / Sentience
Moral Status
Moral Complexity
- Thoreau's point still holds

Henry David Thoreau's View:
"What we seem to do in a dream reflects our own character; and the contents of dreams could reveal true virtue. Even if the actions one performs in a dream have no actual good or bad consequences, they reveal truths about one's desires, in turn revealing their character."
Kant's View:
"All that matters is a good will - actual consequences are irrelevant to moral worth."
But was it wrong to kill them?
- Yes, it is morally wrong to kill someone

- A controversial situation

- What Kant says

- Look at it from a different perspective or example
"There seems to be an implicit view that their existence is less significant, their lives of less moral import, than the lives of ‘naturally’ existing creatures such as ourselves. (Driver)"
Humans are not the only sentient beings within the Matrix, meaning that human morals are not infallible
Both humans and machines attempt to ensure the best result for their own independent races
"But both sides view themselves as fighting for survival, and I imagine that Smith and Smith's creators, as well as Neo and his friends, would argue that moral qualms like these are a luxury. (Driver)"
Robots are given artificial intelligence but not treated as people

As soon as robots begin to ask for basic rights, they are considered a threat

Robots are exiled by humanity, so they build up their own society with their superior ingenuity

Humans attempt to destroy the robots but fail, resulting in a war between the races

The machines eventually win, despite humans blocking out the sun
Two sides to this argument!
Yes... because it makes the life they then have to experience more unpleasant and harder.
No... because unplugging people would give them true freedom from the machines' influence, and give them awareness of what true reality is.
- However, some would argue in regards to morals that "it's simply the thought that counts!": a person is guilty of immorality if they fully intended to do something morally wrong.
Can we really hold the agents responsible for their actions?
They're like soldiers following orders
It's in their own best interest to avoid deletion
Therefore, they carry out their actions based on what the master computer in charge feels is a threat to itself!
Let's Put a Spin on This!
It is not accurate to assume that the humans are exclusively morally justified in their actions
People automatically sympathize with the human struggle
The rebels' mission can be associated with terrorism from the machines' perspective