A reflection on psychometric artificial intelligence

PowerPoint slides from my PAI seminar, pasted into Prezi. Sorry for the quality – the text is the same.
by Tuuli Pöllänen, 6 February 2014


Transcript of a reflection on psychometric artificial intelligence

Bringsjord, S. & Licato, J. (2012). Psychometric Artificial General Intelligence: The Piaget-MacGyver Room. Atlantis Press Preview, Rensselaer Polytechnic Institute (RPI), Troy, NY 12180, USA.
Bringsjord, S. & Schimanski, B. (2004). Psychometric AI: “Pulling it All Together” at the Human Level. AAAI Fall 2004 Symposium Series: Achieving Human-Level Intelligence Through Integrated Systems and Research.
Bringsjord, S. (2011). Psychometric artificial intelligence. Journal of Experimental & Theoretical Artificial Intelligence, 23(3), 271-277.
Evans, G. (1968). A program for the solution of a class of geometry-analogy intelligence-test questions. In M. Minsky (Ed.), Semantic Information Processing (pp. 271-353). Cambridge, MA: MIT Press.
Harnad, S. (1991). Other bodies, other minds: A machine incarnation of an old philosophical problem. Minds and Machines, 1(1), 43-54.
Langley, P., Laird, J. E., Rogers, S. & Sun, R. (2008). Cognitive architectures: Research issues and challenges. Cognitive Systems Research, doi:10.1016/j.cogsys.2006.07.004
Newell, A. (1973). You can't play 20 questions with nature and win: Projective comments on the papers of this symposium. In W. G. Chase (Ed.), Visual Information Processing (pp. 283-308). New York: Academic Press.
Ritter, F. E. & Young, M. R. (2001). Embodied models as simulated users: Introduction to the special issue on using cognitive models to improve interface design. International Journal of Human-Computer Studies, 55, 1-14.
Schimanski, B. (2004). Introducing Psychometric AI, and the robot PERI, its first implementation. Rensselaer Polytechnic Institute, Troy, New York.
Turing, A. (1950). Computing machinery and intelligence. Mind, LIX(236), 433-460.

References

http://www.youtube.com/watch?v=y14VepdBza4
Embodied model – goal was to solve the entire WAIS and then move on to other tests
full system capable of logic and reasoning, vision, physical manipulation, speech and hearing
Based on cognitive robotics, not modeling
Brute-force problem solving and data space searches (see the sketch below)
Robotics, not cognitive architectures!
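To make "brute-force problem solving and data space searches" concrete, here is a minimal sketch on a toy psychometric-style item (a number-series completion). The item, the rule space and the solver are my own illustrative assumptions, not PERI's actual code or item set.

```python
from itertools import product

# Hypothetical number-series item: predict the next element.
series = [2, 4, 8, 16, 32]

# Brute-force search over a small space of candidate rules
# (affine recurrences x[n+1] = a * x[n] + b), no cognitive modeling involved.
def brute_force_next(series, search_range=range(-10, 11)):
    for a, b in product(search_range, repeat=2):
        if all(a * x + b == y for x, y in zip(series, series[1:])):
            return a * series[-1] + b  # first rule that fits the data wins
    return None

print(brute_force_next(series))  # -> 64 (rule: a=2, b=0)
```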
This is interesting, because intelligence is inherently human-like (since it’s a construct built on assessing humans), and cognitive architectures are created to model human-like behavior
Robotics often explicitly avoids modeling human-like behavior,
because robots are created to compensate for shortcomings
Except androids, which are just there to creep you out
 
Is it wise to model intelligence tests with “any means necessary” algorithms?
If the goal of the project is to discover something about human intelligence, it might be better to use a hybrid approach of PAI and cognitive architectures
The last update on PERI was in 2005.
The source code and item list are on their website, if you know Lisp and want to continue where they left off!

PERI (Bringsjord & Schimanski, 2004)

Use different model parameters to simulate different cognitive processes
Embodied model – include sensorimotor components!
Alter parameters to display the contributions of individual aspects of information processing to successful task performance (see the sketch after this list)
Allows developing tasks for special needs groups!
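A minimal sketch of the parameter-ablation idea, under assumptions of mine: the component names (vision, motor, memory) and the toy task runner below are hypothetical stand-ins for whatever an embodied PAI model would actually expose, not anything from the PERI code base.

```python
import random

# Hypothetical embodied-model components that can be degraded individually.
DEFAULT_PARAMS = {"vision_noise": 0.0, "motor_noise": 0.0, "memory_decay": 0.0}

def run_task(params, trials=1000, seed=0):
    """Toy task: success requires vision, motor control and memory to all 'work'."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        ok = (rng.random() > params["vision_noise"]
              and rng.random() > params["motor_noise"]
              and rng.random() > params["memory_decay"])
        successes += ok
    return successes / trials

# Ablate one component at a time to estimate its contribution to performance.
baseline = run_task(DEFAULT_PARAMS)
for component in DEFAULT_PARAMS:
    degraded = dict(DEFAULT_PARAMS, **{component: 0.5})
    print(component, "->", round(baseline - run_task(degraded), 3))
```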
None of this will happen unless (P)AI becomes more dynamic and integrated.

What can AI offer for psychometrics?

Lots of potential, if there were more collaboration within AI and between AI and psychology
Item creation
An exhausting part of psychometrics (time + money)
Could benefit from automation:
Build a working model capable of solving a pre-existing test with excellent metric characteristics and empirical backing
Use the same or similar test to create new items
save time and money.
Automatized item creation (specifically for “culture-free” tests):
A massive number of new items in a short time frame
Virtually no cost other than that required to construct the machine
Primitive types of item analysis even before pilot sample application
E.g., run data space searches, executing an algorithm until a solution to the task is discovered
Reset the program between runs and calculate the average number of iterations required -> an estimate of item difficulty (see the sketch after this list)
Use heuristic strategies instead of algorithms -> difficulty index + attractiveness of distractors
Enter pre-existing items with their metric characteristics as parameters -> construct similar items, or compare them to newly created items for better estimates of their primary metric characteristics
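A minimal sketch of the two ideas above, under stated assumptions: items are simple number-series completions generated from rule templates, and "difficulty" is approximated by the average number of random guesses a naive solver needs before hitting the answer. The item format, the solver and the difficulty proxy are all hypothetical illustrations, not existing PAI tooling.

```python
import random

def make_item(a, b, start=2, length=5):
    """Generate a number-series item from the rule x[n+1] = a * x[n] + b."""
    series = [start]
    for _ in range(length):
        series.append(a * series[-1] + b)
    return series[:-1], series[-1]  # (visible series, correct answer)

def estimate_difficulty(item, answer, runs=200, seed=0):
    """Average number of random-guess iterations needed to hit the answer.

    More iterations on average -> a (very rough) proxy for item difficulty.
    """
    rng = random.Random(seed)
    low, high = min(item + [answer]) - 50, max(item + [answer]) + 50
    total = 0
    for _ in range(runs):
        iterations = 1
        while rng.randint(low, high) != answer:  # reset and guess until solved
            iterations += 1
        total += iterations
    return total / runs

for a, b in [(2, 0), (3, 1), (1, 7)]:
    item, answer = make_item(a, b)
    print(item, "->", answer, "difficulty ~", round(estimate_difficulty(item, answer), 1))
```

Swapping the random-guess loop for heuristic strategies, as suggested in the list above, would also yield distractor-level statistics rather than only an overall difficulty estimate.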
This does nothing for the process of test validation and standardization, however.

What AI has to offer for psychometrics

Newell's critique is still topical; only the domain has changed:
AI and cognitive science have fragmented into several well-defined sub-disciplines
Different interests, goals, evaluation criteria
Practical applications require holistic approaches!
Schimanski (2004) suggests PAI as the shared factor pulling together different fields of AI

I argue that model quality evaluation and shared criteria should draw AI fields together. Whether or not this is done with PAI is arbitrary.
It might be preferable for AI to develop its own assessment paradigm.

The current state of AI and Newell's fragmentation crisis

A general tendency to miss the central point of psychometrics: drafting instruments that can be used to assess some kind of psychological variable
Humans as the domain set the constraints for instrument creation
These are not “care-free tools” for AI model creation!
Uncritical test use skews the image of the construct the authors try to model
Risks integrity of both fields
People who work in PAI must consider the metric characteristics of the tests they use for model construction
Mandatory interdisciplinary collaboration!
The least you can do before building a model of a psychological construct is to familiarize yourself with said construct and the instruments used in its assessment. You don't get to claim to work between two major paradigms and just cherry-pick elements from one while ignoring most of its fundamental characteristics.
Computer science: if it compiles and outputs stuff that looks about right, it works!
Bringsjord & Licato (2012) argue that you can IGNORE construct validity
There is generally a poor understanding of basic psychometric concepts here, but "ignore" construct validity?!
Their argument: Take test a of trait b. If your model can solve test a, it by definition has trait b.
This is true if the test has construct validity. Not all tests do.
Bringsjord & Licato – a computerized space for testing computer models of general intelligence, using something similar to Piagetian tasks
They took construct a, created test b which, they argued, measures trait c – which is related to, but does not equal, trait a (which they did not even discuss) – and then went ahead and assumed construct validity.

Don't even get me started on construct validity...

…to make computer simulations of something that is inherently human, not even try to employ human-like problem solving, and then assume you'll gain some insight into human intelligence by doing this?
- Human intelligence is not the only goal of PAI – AI model quality control
This depends on the nature of intelligence.
Can it be considered a tendency to engage in algorithmic, rather than heuristic, problem solving?
-> I'd rather see combinations of cognitive architectures and PAI
Even when using the same test, the expressions of machine and human intelligence are fundamentally different
IQ tests for humans – comparison to other test-takers (a position in the normal distribution)
No such reference population for machines, hence no relative position on the bell curve (see the sketch below)
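To make the norming point concrete, a minimal sketch with made-up numbers: a human raw score becomes a deviation IQ only relative to a norm sample; for a machine there is no comparable reference population, so the same arithmetic has nothing meaningful to stand on.

```python
from statistics import mean, stdev

# Hypothetical norm sample of raw test scores from human test-takers.
norm_sample = [31, 38, 42, 45, 47, 50, 52, 55, 58, 63]

def iq_score(raw, sample):
    """Deviation IQ: position in the norm distribution, rescaled to mean 100, SD 15."""
    z = (raw - mean(sample)) / stdev(sample)
    return 100 + 15 * z

print(round(iq_score(55, norm_sample)))  # meaningful only relative to the norm sample
```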
We can go easy on the metric characteristics!
Discriminativeness
Construct validity?
Not so fast…

But is it meaningful...

A definition of intelligence that can be actualized computationally
Highly methodological and mathematical
Easy implementation without having to worry about altering the nature of the concept
Direct tools for creation – assuming construct validity of test, all the model has to do is to be able to solve tasks on it
Opportunity for interdisciplinary collaboration and widening the horizons, scope and applications of AI
Evaluation of models – AI has notoriously bad tools for assessing model quality
The Turing test and the TTT (Total Turing Test; Harnad, 1991)
Psychometrics: abundant range of tests with different levels of difficulty, different application domains
AI could develop its own assessment paradigm…

What does psychometrics offer to AI? (many things, second slide!)

PAI: construct and evaluate artificial intelligent entities capable of solving psychometric tests.
Not: model human-like problem solving!
“Any means necessary”-methods
Brute force and extensive data searches are allowed
Psychometrics and AI make an interesting cocktail:
We enable AI people to create models that really genuinely do represent intelligence.
Without psychometrics, the nature of intelligence would be reduced to philosophical self-reflection.
If AI really worked with intelligence, it would quickly run out of ingredients without contributions from psychology
AI = interesting tinkering applications of search heuristics, algorithmic thought and machine learning
Sure looks intelligent – but is it?

What psychometrics can offer to AI

1. Complete processing models
Open door for cognitive architectures – blueprints for creating and evaluating holistic artificial intelligent agents with human-like problem solving approaches.
2. Analyzing complex tasks
Focus on a single complex task instead of designing specific, simple, small experiments to settle specific, tiny question areas
A very popular paradigm, until Deep Blue
Why analyze complex tasks when we have such crazy processor capacities that we can use elementary search algorithms to gain better results?
3. One program for many tasks
Open door for PAI – Newell suggested constructing a model that could complete WAIS
a single system to perform a diverse collection of small experimental tasks
Cognitive architectures immediately gained popularity; PAI only after the turn of the millennium (except Evans, 1968)
Is it a utopian notion to construct AI that can take intelligence tests?
Narrow scope of application
Back to the roots of AI
A drift from Turing's ideas to AI as simple tinkering. Time for a rewind.

Newell's three new paradigms

1. Know the method your subject is using to perform the experimental task – behavior is programmable
Behavior is predictable and can be programmed given realistic constraints and if the parameters are known
2. Never average over methods. This leads to either “garbage, or, even worse: spurious regularity”
Discover the invariant structure of the subject -> infer its methods for problem solving
Once goal and environmental parameters are known -> generate a collection of methods likely to be used for problem solving
Careful experimental design and post hoc analyses to see which method was selected
The context for doing this is a control structure, which constrains which methods can be evoked to perform the task (see the sketch below)
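A minimal sketch of the control-structure idea, under assumptions of mine: a toy task (summing a list), two candidate methods with rough resource profiles, and a controller that only evokes methods conforming to the given space and time limits. This is an illustration, not Newell's own formalism.

```python
import time

# Candidate methods for a toy task (summing a list), with rough resource profiles.
def sum_builtin(xs):          # fast, constant extra space
    return sum(xs)

def sum_recursive(xs):        # uses call-stack space proportional to len(xs)
    return xs[0] + sum_recursive(xs[1:]) if xs else 0

METHODS = {
    "builtin":   (sum_builtin,   {"extra_space": 1}),
    "recursive": (sum_recursive, {"extra_space": 10_000}),
}

def control_structure(task_input, space_limit, time_limit_s=1.0):
    """Evoke only the methods whose declared space use fits the limit,
    then time each admissible method on the task."""
    results = {}
    for name, (method, profile) in METHODS.items():
        if profile["extra_space"] > space_limit:
            continue  # constrained out: this method cannot be evoked
        start = time.perf_counter()
        value = method(task_input)
        elapsed = time.perf_counter() - start
        if elapsed <= time_limit_s:
            results[name] = (value, elapsed)
    return results

print(control_structure(list(range(500)), space_limit=100))
```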
Newell: control structures are best illustrated in programming languages, as they create a space where different methods can be tested, provided they conform to the limitations of space and time.

Newell's two injunctions on how to approach experimentation

Newell: "You can't play 20 questions with nature and win" – the turning point of computational psychology
The fragmentation of cognitive psychology
Commentary on symposium of visual information processing
Exciting, high quality experiments on tiny sections of the problem area
Impossible to draw a conclusion!
Fix with computational models?
Methodological problems: averaging over methods, binary distinctions -> correctional experiments…

PAI and the triumph of computational psychology

More a self-reflection than a thorough overview of the literature
From “psychometric AI” to “computer scientists don’t know anything about psychometrics”
Dire need for interdisciplinary collaboration!

Program:
Roots of PAI – the triumph of computational psychology (Newell, again, sorry!)
Characteristics of PAI
Critique – PAI and its position in mainstream AI
Implications of AI for psychometrics
The PERI project

Contents

Tuuli Pöllänen: Psychometric Artificial Intelligence