Logical Agents for Language & Action

Paper presentation for CS3612 Intelligent Systems

Tharindu Rusira

on 5 September 2013


What?
What is "axe"?
The question starts with "who owns", so "axe" must be an object
How should I answer?
Logical Agents for Language & Action
Computer games? Yay!!!
Intelligent non-player characters (NPCs)
Why are they important in real-time game worlds?
How intelligent are the conventional models?
Augmented Natural Deductive Intelligence
ANDI case study

Non-Player Characters
All characters that appear in a game other than the human-controlled roles

Naive in many early games

ANDI - Augmented Natural Deductive Intelligence
An agent architecture where logic represents the knowledge of NPCs in a declarative form that is applicable even in unforeseen situations,
theorem proving derives answers to dialog questions that were not scripted beforehand,
and reasoning about action finds plans for scenarios that were not pre-determined.
Martin Magnusson & Patrick Doherty
A nightmare for:
Developers - "How do I code all these state transitions?"
Script writers drawing dialog trees - "I think that's all... no, wait..."
Game designers - "I never thought that Connor would climb that tree..."
The hallmark of intelligence is the ability to deal with the unexpected
Is it possible to design smart NPCs?
NPCs that:
overcome unforeseen situations,

answer novel questions that are not present in the script,

consider alternative courses of action?
F.E.A.R. (2005) - a planning algorithm (similar to STRIPS)

The Sims - reactive, responsive to the dynamic world

These are all rule-based architectures.
What about NLP?
Well, it sounds cool, but...
complete NLP approaches introduce ambiguity, making things worse.
presented by
Tharindu Rusira
Milinda Fernando
Chamara Philips
This is where ANDIs live:
Magni - human player
Smith - ANDI-Land blacksmith (NPC)
Jack - ANDI-Land woodsman (NPC)

Magni is allowed to talk with NPCs in natural language; Magni can type anything. No restricted dialog trees.
But full natural language understanding is AI-complete,
so agents communicate in a Q & A format.
Who are you?
I am Smith
Magni: Who owns the axe?

Assume that agent Smith has never encountered this question before.
informRef(magni, value(12:15,owner(axe)))
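The mapping from the question to the informRef speech act can be sketched as a toy parser. This is a simplified illustration, not the paper's actual grammar; `parse_question` and its single question pattern are hypothetical:

```python
def parse_question(text, speaker="magni", time="12:15"):
    """Map a 'Who owns X?' question to an informRef speech act string."""
    words = text.strip().rstrip("?").lower().split()
    if words[:2] == ["who", "owns"]:
        obj = words[-1]  # head of the noun phrase, e.g. "axe"
        return f"informRef({speaker}, value({time}, owner({obj})))"
    raise ValueError("unrecognized question form")

print(parse_question("Who owns the axe?"))
# informRef(magni, value(12:15, owner(axe)))
```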

How Programmers Implement Non-Player Characters
There are basically two ways:

Implement the NPC in a pre-defined (dumb) way

Implement an intelligent NPC
Not so easy!
The pre-defined way relies on the developer's foresight:

Programmers design finite state machines that specify what the NPC does in every state it can end up in.

How many states does a complex game character have? 100, 1,000, 10,000, even more?

Can you predict all the human player's moves?

Even if you could enumerate everything, how long would it take to compute?
Intelligent NPC
Basically consists of two parts:
Knowledge Base
Automated Reasoning Engine

ANDI Agent Architecture
Natural Language to Logic Parser
Automated Theorem Prover
Knowledge Base
Temporal Action Logic
Magni goes to Smith, the blacksmith

Questions and Answers
inform(magni, "Id(value(12:15, owner(axe)), smith)")
Ignorance and Learning
Magni: How much gold do I own?
Smith: I don't know.
Since the fluent for Magni's amount of gold is unknown to Smith, the answer is "I don't know."
Magni: I own 6 gold.
Smith: I see.
Smith updates his KB if this fact does not contradict his current KB.

Has Smith now learned Magni's amount of gold?
Since Smith's knowledge base has been updated:
Magni: How much gold do I own?
Smith: You own 6 gold.
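This ignorance-and-learning behavior can be sketched with a simple fluent-value store standing in for the paper's theorem prover; the `KB` class and fluent strings are hypothetical:

```python
class KB:
    """Toy knowledge base: a map from fluent names to known values."""

    def __init__(self):
        self.fluents = {}

    def ask(self, fluent):
        # An unknown fluent yields ignorance, not a guess.
        if fluent not in self.fluents:
            return "I don't know."
        return self.fluents[fluent]

    def tell(self, fluent, value):
        # Accept a told fact only if it does not contradict the current KB.
        if fluent in self.fluents and self.fluents[fluent] != value:
            return "No."
        self.fluents[fluent] = value
        return "I see."

smith = KB()
print(smith.ask("gold(magni)"))      # I don't know.
print(smith.tell("gold(magni)", 6))  # I see.
print(smith.ask("gold(magni)"))      # 6
```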
Reasoning and Proof
Dialogs can be used to teach an NPC simple facts.
Sometimes the answer to a question is not stored in an NPC's knowledge base but is an implicit consequence of it.
Magni: Is my gold more than the axe's price?
Yes/no questions correspond to requests for information about whether a statement is true or false.
So the NLP parser parses this into the
speech act
informIf(magni, "value(12:20, gold(magni)) > value(12:20, price(axe))")
Smith has never seen this question before, but he is still capable of answering.
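The derived answer can be sketched as evaluating the comparison over stored fluent values rather than looking up a stored answer; the axe's price of 5 is an invented value for illustration:

```python
# Assumed fluent values at the time of the question
kb = {"gold(magni)": 6, "price(axe)": 5}

def inform_if(kb, lhs, rhs):
    """Answer 'is lhs > rhs?' by deriving it from stored fluents."""
    if lhs not in kb or rhs not in kb:
        return "I don't know."
    return "Yes." if kb[lhs] > kb[rhs] else "No."

print(inform_if(kb, "gold(magni)", "price(axe)"))  # Yes.
```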
Actions and Execution
Magni: Sell the axe to me.
The grammar recognizes this as an imperative command in which Magni requests Smith to perform the selling action:
There exist t1, t2 such that Occurs(smith, (t1, t2], sell(axe, magni))
Committing to executing an action is not enough to prove that the action will actually occur.
To perform the action, the agent has to check whether it violates the current knowledge base. For example, if Magni has less gold than the axe's price, the action cannot be performed.
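Checking preconditions before executing the action can be sketched as follows. The fluent names and the axe's price are assumptions; the actual system establishes this by proof in Temporal Action Logic:

```python
# Assumed world state; price(axe) = 5 is invented for illustration
kb = {"owner(axe)": "smith", "gold(magni)": 6, "price(axe)": 5}

def sell(kb, item, buyer, seller):
    """Execute the sell action only if its preconditions hold in the KB."""
    if kb[f"owner({item})"] != seller:
        return "cannot sell: not the owner"
    if kb[f"gold({buyer})"] < kb[f"price({item})"]:
        return "cannot sell: insufficient gold"
    kb[f"owner({item})"] = buyer                   # ownership changes hands
    kb[f"gold({buyer})"] -= kb[f"price({item})"]   # the buyer pays
    return "sold"

print(sell(kb, "axe", "magni", "smith"))  # sold
print(kb["owner(axe)"])                   # magni
```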
Persistence and Change
Actions, like Magni's purchase of the axe, change some aspects of the world while leaving MOST UNAFFECTED.
Properties and relations that may change over time, such as owner(axe), are called fluents.
Should we update all the fluents in ANDI-Land every time? No. Why not?
Instead, we make the blanket assumption:
we assume that the values of fluents persist over time.
Errors Due to the Blanket Assumption
Consider the following conversation:
Magni: Who owned the axe yesterday?
Jack: Smith owned the axe yesterday.
Magni: Who owns the axe today?
Jack: Smith owns the axe today.

Jack does not yet know that the owner(axe) fluent has changed, so by the blanket assumption he still thinks the owner of the axe is Smith.
Errors due to the blanket assumption can be avoided using the agents' ability to learn.
Consider the following conversation:

Magni: I bought the axe from Smith.
Jack: I see.
Magni: Who owns the axe today?
Jack: You own the axe today.
Magni: Who owns the axe tomorrow?
Jack: You will own the axe tomorrow.
Goals and Plans
Lies and Trust
What happens if a player tells a lie to an agent, intentionally or accidentally?
If Magni tries to fool Jack by claiming that she owns a bag:
Magni: I own the bag.
Jack: I see.
Jack assumes that Magni is trustworthy, therefore he updates his knowledge base.
Consider the following conversation:
Magni: I own the lumber.
Jack: No. (Because this contradicts Jack's KB.)
Magni: Who owns the lumber?
Jack: I own the lumber.

After Jack gets to know that Magni has lied, he removes all the facts learned from Magni. So if, after lying, Magni asks:
Magni: Who owns the bag?
Jack: I don't know.
Magni: I own the bag.
Jack: Maybe.
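Retracting everything learned from a lying source can be sketched by tagging each fact with its source; the `TrustKB` class and fact strings are hypothetical:

```python
class TrustKB:
    """Toy KB where every learned fact remembers who told it."""

    def __init__(self):
        self.facts = {}  # fact -> source

    def tell(self, fact, source):
        self.facts[fact] = source

    def knows(self, fact):
        return fact in self.facts

    def caught_lying(self, source):
        # Retract all facts learned from the untrustworthy source.
        self.facts = {f: s for f, s in self.facts.items() if s != source}

kb = TrustKB()
kb.tell("owner(bag) = magni", "magni")
print(kb.knows("owner(bag) = magni"))  # True
kb.caught_lying("magni")               # Magni's lie is detected
print(kb.knows("owner(bag) = magni"))  # False
```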
The same logical representation and reasoning can be used to establish goals and action planning.
Assume that Jack wants to be the owner of the lumber. He can state this as a goal in logic:
There exists t such that value(t, owner(lumber)) = jack and t > 12:25
Jack knows that to get lumber he has to chop trees, and that to chop trees he needs an axe (these facts are in his KB). So he makes an action plan to achieve his goal:
There exist t1, t2, t3, t4, t5, t6 such that
Schedule(jack, (t1, t2], request(magni, "Occurs(magni, (t3, t4], sell(axe, jack))")) ∧
Schedule(jack, (t5, t6], chop) ∧
t1 < t2 < t3 < t4 < t5 < t6
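The backward chaining from the goal to the two-step plan can be sketched as follows. The rule encoding is an assumption, standing in for the KB facts about chopping and the axe; the real system derives the plan as a proof:

```python
# Hypothetical rule base: goal -> (preconditions, action that achieves it)
rules = {
    "own(lumber)": (["own(axe)"], "chop"),
    "own(axe)": ([], "request(magni, sell(axe, jack))"),
}

def plan(goal, rules):
    """Backward-chain from a goal to an ordered list of actions."""
    if goal not in rules:
        return []  # assumed to already hold
    preconds, action = rules[goal]
    steps = []
    for p in preconds:
        steps.extend(plan(p, rules))  # achieve preconditions first
    steps.append(action)              # then the action itself
    return steps

print(plan("own(lumber)", rules))
# ['request(magni, sell(axe, jack))', 'chop']
```

The ordering mirrors the temporal constraints t1 < ... < t6 above: the request to buy the axe is scheduled before the chop.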

NPCs can be programmed using AI concepts such as logical representation and logical inference.
Intelligent NPCs can act more naturally than pre-defined finite-state NPC agents.
By making NPCs intelligent, the game becomes more entertaining for the player.

Thank You


Failures and Recovery
After Jack detects that Magni is lying, what happens if Jack disbelieves Magni all the time?

If, after some period, Jack detects that Magni is no longer lying, he starts to believe Magni again. This is called the
recovery process.

The ANDI theorem prover is forced to re-establish the goal by finding an alternative proof, for example by asking another NPC.