IBM Watson, how it works?

Presentation about IBM Watson for the Trondheim Big Data meetup. It includes an introduction to natural language processing and an overview of the architecture and core components of the system. The estimated total time for the talk is 75 minutes.
by Gleb Sizov on 6 October 2017

Transcript of IBM Watson, how it works?

1. If a question gives a number of properties of the answer and a document's content covers those properties, then the answer is likely to be the title of that document


2. If entities in the question have title-oriented documents about them, then it is likely that the answer is in one of those documents
IBM Watson, how it works?
Gleb Sizov
PhD Candidate at NTNU

Structured vs unstructured information
Question answering architecture combines more than 100 algorithms
DeepQA
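
As a rough orientation, here is a minimal sketch of how a DeepQA-style pipeline hangs together: question analysis, primary search, candidate generation, and evidence scoring. All function names and the toy corpus are invented for illustration; the real system combines 100+ algorithms running in parallel.

    # Minimal sketch of a DeepQA-style flow; names and data are illustrative only.

    def analyze_question(question):
        # Determine focus, lexical answer type (LAT) and relations (see below).
        return {"text": question, "focus": None, "lat": None, "relations": []}

    def primary_search(analysis, corpus):
        # Retrieve documents/passages likely to contain the answer.
        words = analysis["text"].split()
        return [doc for doc in corpus if any(word in doc for word in words)]

    def generate_candidates(passages):
        # Candidate answers from titles, noun phrases, metadata, ...
        return {passage.split(":")[0] for passage in passages}

    def score_candidates(candidates, corpus):
        # Each candidate is a hypothesis; collect and score supporting evidence.
        scored = [(c, sum(c in doc for doc in corpus)) for c in candidates]
        return sorted(scored, key=lambda pair: pair[1], reverse=True)

    corpus = ["The Sting: a 1973 caper film starring Robert Redford and Paul Newman."]
    question = "Robert Redford and Paul Newman starred in this depression-era grifter flick."
    analysis = analyze_question(question)
    passages = primary_search(analysis, corpus)
    print(score_candidates(generate_candidates(passages), corpus))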
Factoids
President under whom the U.S. gave full recognition to Communist China.
(Answer: Jimmy Carter)
Puns
The "Jerry Maguire" star who automatically maintains your vehicle’s speed.
(Answer: Tom Cruise control)
Common bonds:
trout, loose change in your pocket, and compliments.
(Answer: things that you fish for)

AI's next challenge
Open-domain question-answering (QA)
Introduction
Focus detection
One of the pronouns it/they/them/its/their, e.g. "It forbids Congress from interfering with a citizen's freedom of religion, speech, assembly, or petition."
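
A minimal sketch of this pronoun rule in Python (the pronoun list is the one from the slide; everything else is invented for illustration):

    import re

    # One focus-detection rule: if the clue contains one of the pronouns
    # it/they/them/its/their, treat that pronoun as the focus, i.e. the part
    # of the question that refers to the answer.
    FOCUS_PRONOUNS = {"it", "they", "them", "its", "their"}

    def pronoun_focus(clue):
        for match in re.finditer(r"[A-Za-z]+", clue):
            if match.group().lower() in FOCUS_PRONOUNS:
                return match.group(), match.start()
        return None

    clue = ("It forbids Congress from interfering with a citizen's freedom "
            "of religion, speech, assembly, or petition.")
    print(pronoun_focus(clue))   # ('It', 0)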
Lexical answer type (LAT)
"[Focus] of/for [X]" extract [X] when [Focus] is any of one/name/type/kind, e.g.

A mom compares her kid’s messy room to
this
kind of hog
enclosure
.
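
A sketch of this LAT rule as a regular expression; it is restricted to the "this [focus] of/for [X]" form for brevity, whereas the real rule keys on the focus detected earlier:

    import re

    # LAT rule from the slide: in "[Focus] of/for [X]", when the focus word is
    # one of one/name/type/kind, take the lexical answer type from [X].
    GENERIC_FOCUS = r"(?:one|name|type|kind)"

    def lat_from_focus(clue):
        pattern = rf"\bthis {GENERIC_FOCUS} (?:of|for) ([\w ]+)"
        match = re.search(pattern, clue, flags=re.IGNORECASE)
        return match.group(1).strip() if match else None

    clue = "A mom compares her kid's messy room to this kind of hog enclosure."
    print(lat_from_focus(clue))   # 'hog enclosure'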

Syntactic parsing
Relation extraction
Dependency-based parse graph
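
To make the dependency-based representation concrete, here is a small sketch using spaCy as a stand-in parser (Watson used its own English Slot Grammar parser, not spaCy); it assumes the en_core_web_sm model is installed:

    import spacy

    # Print the dependency-based parse of a clue: each token with its
    # dependency label and syntactic head.
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Robert Redford and Paul Newman starred in this depression-era grifter flick.")
    for token in doc:
        print(f"{token.text:15} {token.dep_:10} {token.head.text}")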
Domain analysis for initial source acquisition
95.47% of the answers are Wikipedia titles
Types of answers not in the titles:
multiple answer questions (e.g., "Indiana, Wisconsin, and Ohio")
synthesized answers to puzzle questions (e.g., "level evil")
verb phrases (e.g., "get your dog to heel")
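
The 95.47% figure comes from exactly this kind of domain analysis: take questions with known answers and measure how many answers occur verbatim as Wikipedia titles. A toy sketch with invented data:

    # Measure answer coverage: what fraction of gold answers are Wikipedia titles?
    # Both collections below are tiny invented stand-ins for the real data.
    wikipedia_titles = {"Jimmy Carter", "U.S. Open", "The Sting", "Robert Schumann"}
    gold_answers = ["Jimmy Carter", "The Sting", "get your dog to heel"]

    covered = sum(answer in wikipedia_titles for answer in gold_answers)
    print(f"title coverage: {covered / len(gold_answers):.2%}")   # 66.67%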
TENNIS: The first U.S. men’s national singles championship, played right here in 1881, evolved into this New York City tournament.

Multiple facts:
it is the first U.S. men’s singles championship
it was first played in 1881
it is now a tournament played in New York City

These facts are mentioned in a document about the U.S. Open.
Error analysis for additional source acquisition
Questions not covered by Wikipedia:
Inverse definition questions (identify the term given one or more of its definitions)
Quotation questions (complete a quotation or identify the source of the quote)
Bible trivia, book and movie plots, etc.

Additional sources: Wiktionary, Wikiquote, Bible, popular books from Project Gutenberg
Source acquisition
Source transformation
Utilizing title-oriented documents

Source expansion
Question:
Aleksander Kwasniewski became the president of this country in 1995.

First sentence in the Wikipedia article:
Aleksander Kwasniewski is a Polish socialist politician who served as the President of Poland from 1995 to 2005.
Generating title-oriented documents
Literature, the Bible, song lyrics

Generate titles based on:
author name
work name
character names
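
A minimal sketch of this idea: index the same text under several synthetic titles so that the title-based heuristics above still apply. The work record below is invented for illustration.

    # Generate title-oriented documents for a work that lacks them by indexing
    # its text under author name, work name, and character names.
    def title_oriented_documents(work):
        titles = [work["author"], work["title"], *work["characters"]]
        return {title: work["text"] for title in titles}

    work = {
        "author": "Mark Twain",
        "title": "The Adventures of Tom Sawyer",
        "characters": ["Tom Sawyer", "Huckleberry Finn"],
        "text": "Tom Sawyer is a boy growing up along the Mississippi River...",
    }
    for title in title_oriented_documents(work):
        print(title)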
Phrase-based parse tree
Named entity recognition
Making Watson fast
Co-reference resolution
Natural language processing
Automatic knowledge extraction
Projection patterns:
subj-verb-obj
noun-Isa
Axioms:
{<subj, "Einstein"> <verb, "receive"> <obj, "Nobel prize">} (0.9)
{<noun, "Bill Clinton">, <isa, "politician">} (0.7)
Statistics:
frequency
conditional probability
pointwise mutual information
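
A small sketch of the three statistics for one pair of frame slots; the counts are invented, whereas in the real system they come from the frames extracted over the whole corpus:

    import math

    # Statistics over extracted frames: frequency, conditional probability and
    # pointwise mutual information (PMI). All counts are invented.
    count_xy = 40         # frames with subj="Einstein" and obj="Nobel prize"
    count_x  = 200        # frames with subj="Einstein"
    count_y  = 1_000      # frames with obj="Nobel prize"
    total    = 1_000_000  # total number of extracted frames

    p_xy, p_x, p_y = count_xy / total, count_x / total, count_y / total

    frequency   = count_xy
    conditional = p_xy / p_x                 # P(obj="Nobel prize" | subj="Einstein")
    pmi         = math.log2(p_xy / (p_x * p_y))

    print(frequency, round(conditional, 3), round(pmi, 2))   # 40 0.2 7.64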
Corpus processing
Parsing
Named entity recognition
Co-reference resolution
Frame extraction
Frame projection
Question analysis
Multiple queries per question:
(2.0 "Robert Redford") (2.0 "Paul Newman") star depression era grifter (1.5 flick)
depression era grifter flick
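
A sketch of assembling such a weighted query from the question analysis. The grouping of weights (2.0 for named entities, 1.5 for the LAT "flick") is read off the example above; the query notation is just the slide's.

    # Build one weighted query: named entities get weight 2.0, the LAT 1.5,
    # remaining keywords are left unweighted.
    def weighted_query(entities, keywords, lat):
        parts = [f'(2.0 "{entity}")' for entity in entities]
        parts += keywords
        parts.append(f"(1.5 {lat})")
        return " ".join(parts)

    print(weighted_query(
        entities=["Robert Redford", "Paul Newman"],
        keywords=["star", "depression", "era", "grifter"],
        lat="flick",
    ))
    # (2.0 "Robert Redford") (2.0 "Paul Newman") star depression era grifter (1.5 flick)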
Structured resources
Titles of title-oriented documents:
"US Open"
Noun phrases from the retrieved passages that are titles in Wikipedia:
Passage: "...tremendous sacrifice by the Soviet Union, which suffered the highest military casualties in the war, losing approximately 20 million men."
Title: "Soviet Union in World War II"
Based on meta-data: anchor text, document titles, hyperlink targets
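
A sketch of the noun-phrase strategy, with spaCy noun chunks standing in for Watson's parser and a tiny title set standing in for Wikipedia (assumes the en_core_web_sm model is installed):

    import spacy

    # Candidate generation: keep noun phrases from a retrieved passage that are
    # also Wikipedia titles. The title set is a tiny invented stand-in.
    nlp = spacy.load("en_core_web_sm")
    wikipedia_titles = {"Soviet Union", "World War II"}

    passage = ("tremendous sacrifice by the Soviet Union, which suffered the highest "
               "military casualties in the war, losing approximately 20 million men.")

    candidates = set()
    for chunk in nlp(passage).noun_chunks:
        span = chunk[1:] if chunk[0].pos_ == "DET" else chunk   # drop a leading "the"
        if span.text in wikipedia_titles:
            candidates.add(span.text)
    print(candidates)   # expected: {'Soviet Union'}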
Unstructured Information Management Architecture (UIMA)
Hypothesis and evidence scoring
Evidence retrieval
1. Question: "In 1840 this German romantic married Clara Wieck, an outstanding pianist and a composer, too."
2. Primary search: "Clara Wieck Schumann: a German musician, one of the leading pianists of the Romantic era, as well as a composer, and wife of composer Robert Schumann."
3. Candidate answer: "Robert Schumann"
4. Evidence query: "Robert Schumann", "1840", "German", "romantic", "pianist", "Clara Wieck", and "composer".
5. Retrieved evidence: "Although Robert Schumann made some symphonic attempts in the autumn of 1840, soon after he married his beloved Clara Wieck, he did not compose the symphony until early 1841."
Evidence scoring
How good is the evidence?
Matching the evidence against the hypothesis (hypothesis = question + candidate answer) yields a matching score.
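
One of the simplest possible evidence scorers is plain term overlap between hypothesis and passage; here is a sketch (Watson combines many far stronger scorers, including ones that align the syntactic structure of passage and question):

    import re

    # Term-overlap evidence scorer: share of hypothesis terms (question terms
    # plus the candidate answer) that also occur in the evidence passage.
    def terms(text):
        return {w.lower() for w in re.findall(r"[A-Za-z0-9]+", text) if len(w) > 3}

    def overlap_score(question, candidate, passage):
        hypothesis = terms(question) | terms(candidate)
        return len(hypothesis & terms(passage)) / len(hypothesis)

    question = ("In 1840 this German romantic married Clara Wieck, an outstanding "
                "pianist and a composer, too.")
    passage = ("Although Robert Schumann made some symphonic attempts in the autumn "
               "of 1840, soon after he married his beloved Clara Wieck, he did not "
               "compose the symphony until early 1841.")
    print(round(overlap_score(question, "Robert Schumann", passage), 2))   # 0.5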
Question classification
What it takes to win Jeopardy!
buzz in for at least 70% of the questions
get at least 85% correct
A neutron walks into a bar. "How much for a drink?"
To which the bartender responds, "For you, no charge."

How Watson learns
Confidence merging and ranking
Ranking by evidence scores
Likelihood that a candidate answer is correct
Phase-based machine learning framework
Challenges of applying machine learning
Some candidate answers are equivalent (answer merging)
Value of features is different for different question classes (routed models)
Little training data available for some question classes (transfer learning)
Value of features is different at different stages of ranking (standardization)
Extremely heterogeneous features (normalization)
Correct/wrong class imbalance (instance weighting)
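
The likelihood that a candidate answer is correct can be estimated by a classifier over its evidence scores; here is a minimal sketch using scikit-learn logistic regression with invented feature vectors and labels. The real framework layers routing, transfer learning, standardization, normalization and instance weighting on top of this basic step.

    from sklearn.linear_model import LogisticRegression

    # Rank candidates by the estimated probability of being correct, learned
    # from per-candidate evidence scores. All numbers are invented.
    # features: [passage overlap, LAT/type match, source popularity]
    X_train = [
        [0.9, 1.0, 0.7],   # correct candidates
        [0.8, 1.0, 0.2],
        [0.3, 0.0, 0.9],   # wrong candidates
        [0.1, 0.0, 0.1],
        [0.2, 1.0, 0.3],
    ]
    y_train = [1, 1, 0, 0, 0]

    ranker = LogisticRegression().fit(X_train, y_train)

    candidates = {"The Sting": [0.85, 1.0, 0.6], "Paper Moon": [0.30, 1.0, 0.4]}
    for answer, features in candidates.items():
        confidence = ranker.predict_proba([features])[0, 1]
        print(f"{answer}: {confidence:.2f}")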

Question: Robert Redford and Paul Newman starred in this depression-era grifter flick. (Answer: "The Sting")
Focus: "flick" in the NP "this depression-era grifter flick"
LAT: "flick", no co-references
Relations:
actorIn(Robert Redford; flick : focus)
actorIn(Paul Newman; flick : focus)
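
A sketch of how the actorIn() relations could be pulled out of the dependency parse, again with spaCy standing in for Watson's parser and its hand-built relation patterns (assumes the en_core_web_sm model):

    import spacy

    # Extract actorIn(person; focus) relations: PERSON entities whose dependency
    # path leads up to the verb "star" are actors in the focus "flick".
    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Robert Redford and Paul Newman starred in this depression-era grifter flick.")

    def governed_by_star(token):
        # Walk up the dependency tree looking for the verb "star".
        while token.head is not token:
            token = token.head
            if token.lemma_ == "star":
                return True
        return False

    relations = [f"actorIn({ent.text}; flick : focus)"
                 for ent in doc.ents
                 if ent.label_ == "PERSON" and governed_by_star(ent.root)]
    print(relations)
    # expected: ['actorIn(Robert Redford; flick : focus)',
    #            'actorIn(Paul Newman; flick : focus)']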
Question analysis
Hypothesis generation
Unstructured
Machine learning