AI
What is now called “machine learning” got its start in the 1700s and 1800s, well before there were machines that could learn.
There is no universally acknowledged definition of artificial intelligence.
The idea of artificially intelligent beings is prehistoric, with widely recognized references to intelligent machines appearing as early as the 1300s.
The first actual learning machines showed up in the early 1950s.
The first widely recognized work of AI was the design of Turing-complete “artificial neurons.”
The field blossomed at a conference at Dartmouth, with the presentation of the Logic Theorist program. This is when the term “artificial intelligence” was coined.
Progress accelerated through the 60s, but slowed in the 70s, leading to the first “AI winter.”
Expert Systems brought the field back into vogue.
Progress continued through the late 90s and early 2000s with much wider adoption of machine learning, though “AI” itself had become a bad word.
Today: The current AI Spring was heralded by Watson on Jeopardy! in 2011 and cemented by ImageNet, DeepMind, and AlphaGo in 2014/2015.
Audio transcription
Product recommendation
Robotics
Predictive maintenance
Chat bots
Smart image search/analytics
Writing
Voice of the customer analytics
Targeted advertising
Autonomous vehicles
Search engines
Anomaly detection
Computer vision
Categorization
Voice-to-text
Audio generation
Recommendation engine
Image generation
Natural Language Querying (NLQ)
Natural Language Processing (NLP)
Natural Language Generation (NLG)
Natural Language Understanding (NLU)
Clustering
Support Vector Machines
Random forest
Markov processes
Logistic regression
Linear regression
Symbolic logic
Generative adversarial networks
(Artificial) neural networks
Recurrent neural networks
Convolutional neural networks
Deep neural networks
Expert systems
Decision trees
Not Machine Learning: Regression, Probabilistic classifiers, Support vector machines, Neural networks
Machine Learning: Regression, Probabilistic classifiers, Support vector machines, Neural networks, Clustering, Decision tree learning, Genetic algorithms
DeepMind develops Q-Learning algorithms to beat Atari games.
Google buys DeepMind for $500M.
DeepMind begins development on AlphaGo.
AlphaGo is trained on 30 million professional moves and thousands of matches against itself.
AlphaGo beats one of the top Go players in the world, Lee Sedol (4-1).
Before playing Lee Sedol, AlphaGo’s learning mechanism was turned off.
Was it still a machine learning program?
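For intuition on the technique the timeline names, here is a minimal tabular Q-learning sketch on a toy "corridor" task. It only illustrates the core update rule; DeepMind's Atari agents used deep Q-networks (neural networks in place of this lookup table), and AlphaGo's architecture was far more involved.

```python
# Toy tabular Q-learning: an agent learns to walk right down a 5-state corridor
# to reach a reward at the far end. Illustrative only, not DeepMind's system.
import random

N_STATES = 5                      # states 0..4, reward at state 4
ACTIONS = [0, 1]                  # 0 = step left, 1 = step right
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy action selection.
        a = random.choice(ACTIONS) if random.random() < epsilon else max(ACTIONS, key=lambda x: Q[s][x])
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Core Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

print(Q)  # the "step right" column should dominate in every state
```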
While the limits of expert systems were clearly demonstrated in the 80s, there is much excitement over the potential of “hybrid systems”.
“Human-in-the-loop” systems where ML does 80% of the work and humans close the gap.
NLP: Deep learning applied to syntax disambiguation.
Driverless cars: Computer vision within the confines of a well-defined task (driving).
Agents: Human-defined guard rails on otherwise autonomous AIs (see Microsoft Tay).
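As a rough sketch of that 80/20 split (scikit-learn assumed; the toy data and the 0.9 threshold are illustrative, not from the talk), a model can auto-handle only its high-confidence predictions and route everything else to a person:

```python
# Confidence-gated "human-in-the-loop": the model handles what it is sure about,
# people review the rest. Data, model, and threshold are stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)          # toy labels

clf = LogisticRegression().fit(X[:400], y[:400])
confidence = clf.predict_proba(X[400:]).max(axis=1)    # top-class probability per case

AUTO_THRESHOLD = 0.9                                    # guard rail chosen by humans
automated = confidence >= AUTO_THRESHOLD                # ML closes these on its own
print(f"automated: {automated.mean():.0%}, sent to a human: {(~automated).mean():.0%}")
```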
[Chart: precision vs. available data (/time). An expert system stays flat at ~70% precision regardless of data; linear regression / NBC / kNN, then logistic regression / SVM, then random forest (~85%), then DNNs (~95%) reach progressively higher precision as data grows from 0 through 100, 500, and 1,000 to >10,000 points.]
Reminder: 4Degrees is about relationship management.
Trying to predict tags such as Engineer, VC, Entrepreneur, FinTech, Medicine.
Use people’s public tweets to signal “who” they are.
Determining tags from Twitter
Intuition: engineers are far more likely to mention Python or Node than the average person.
Model
Build out a mini-dictionary of domain-specific words (e.g., Python, Ruby).
Measure incidence/frequency for some known positives and negatives.
Find median/IQR to be able to detect outliers.
Results
~70%
Precision
Biggest source of error is unintended incidence of the terms in unrelated domains (e.g., snake handlers for Python, gemologists for Ruby).
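A minimal sketch of that dictionary-and-outlier approach (the term list, toy calibration tweets, and the 1.5 × IQR cutoff are illustrative assumptions, not the production values):

```python
# Mini-dictionary "expert system": count how often a user's tweets hit
# domain-specific terms, then flag users whose hit rate is an outlier
# relative to known negatives (median/IQR rule).
import numpy as np

ENGINEER_TERMS = {"python", "node", "backend", "api", "kubernetes"}   # illustrative dictionary

def hit_rate(tweets):
    """Fraction of a user's tweets mentioning any dictionary term."""
    hits = sum(any(term in tweet.lower() for term in ENGINEER_TERMS) for tweet in tweets)
    return hits / max(len(tweets), 1)

# Calibrate on users with known labels (toy stand-ins here).
known_negatives = [
    ["brunch was amazing", "go team"],
    ["new gym routine", "the python handling class was wild"],   # the snake-handler problem
    ["loved the concert", "monday again"],
]
rates = np.array([hit_rate(tweets) for tweets in known_negatives])
median = np.median(rates)
iqr = np.subtract(*np.percentile(rates, [75, 25]))

def looks_like_engineer(tweets, k=1.5):
    # Flag users whose hit rate sits well outside the "normal" spread.
    return hit_rate(tweets) > median + k * iqr

print(looks_like_engineer(["shipped a new api in node", "python 3.12 looks great"]))   # True
```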
Model
Build out bag of words on arbitrary Tweet set.
Start with unigrams.
Basic stop word filtering, stemming
Results
~70%
Precision
Error is much more random than expert system; seems to seize on strange oddities of Twitter language (e.g., “via”).
(not meaningfully better than expert system)
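For comparison, a bare-bones version of that unigram bag-of-words model (scikit-learn assumed; the four labeled tweets stand in for the real training set, and stemming is omitted for brevity):

```python
# Unigram bag-of-words + naive Bayes baseline with basic stop-word filtering.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

tweets = [
    "shipped a new python service today",
    "congrats on the raise, excited to join the round",
    "debugging node deployments again",
    "announcing our latest fintech investment",
]
labels = ["engineer", "vc", "engineer", "vc"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1), stop_words="english"),  # unigrams only
    MultinomialNB(),
)
model.fit(tweets, labels)
print(model.predict(["we just led a seed investment"]))
```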
Model
Tailor bag of words to ensure that it includes domain-relevant language.
Beefed up stop word filtering based on Twitter language.
Consider bigrams.
Results
~75%
Precision
Error not as bad as NBC, but model still doesn’t seem to be focusing on “key” words in the way you would expect.
Starting to see some funny--but correct--insights (e.g., “congrats” for VCs).
(hey, ML is doing something for us!)
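Sketched as vectorizer settings, this iteration's tweaks might look like the following (the extra Twitter stop words are illustrative; curating the vocabulary to keep domain-relevant language is not shown):

```python
# Third-iteration tweaks: Twitter-aware stop words, bigrams, and pruning of
# one-off oddities so the model stops seizing on terms like "via".
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS

TWITTER_STOP_WORDS = list(ENGLISH_STOP_WORDS) + ["rt", "via", "amp", "http", "https"]

vectorizer = CountVectorizer(
    ngram_range=(1, 2),             # unigrams + bigrams
    stop_words=TWITTER_STOP_WORDS,  # beefed-up, Twitter-specific stop-word list
    min_df=2,                       # drop terms that appear in only one tweet
)
```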
Model
Deep domain-tailored bag of words from before.
Continue experimenting with bigrams.
Layer on domain-specific language “flags” (similar to the dictionary from the expert system).
Results
~80%
Precision
Getting to the long tail of error / limits of simple ML models (e.g., model confusing medical practitioners with healthcare investors).
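The deck does not name the final classifier, so as a hedged sketch only: the "flag" idea can be implemented by stacking expert-system-style dictionary indicators next to the bag-of-words counts (scikit-learn and SciPy assumed; the term lists, toy tweets, and logistic-regression choice are illustrative):

```python
# Hybrid features: bag-of-words counts plus binary dictionary "flags".
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

FLAG_TERMS = {
    "engineer": ["python", "node", "backend"],
    "vc": ["fund", "portfolio", "term sheet"],
}

def flag_features(texts):
    """One binary column per domain dictionary: does the text hit that dictionary?"""
    rows = [[int(any(term in text.lower() for term in terms)) for terms in FLAG_TERMS.values()]
            for text in texts]
    return csr_matrix(np.array(rows))

tweets = ["shipped a python fix", "announcing our new fund", "term sheet signed", "node is down again"]
labels = ["engineer", "vc", "vc", "engineer"]

vec = CountVectorizer(ngram_range=(1, 2), stop_words="english")
X = hstack([vec.fit_transform(tweets), flag_features(tweets)])      # words + flags side by side
clf = LogisticRegression(max_iter=1000).fit(X, labels)

query = ["raising a new seed fund"]
print(clf.predict(hstack([vec.transform(query), flag_features(query)])))
```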
AI-driven “assistants” work alongside contact center workers (both chat and phone) to recommend relevant knowledge base articles and answers as they listen in on customer conversations.
These capabilities get better over time, increasingly automating customer service agents’ jobs.
Best-in-class today is ~30% automation.
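How the retrieval piece of such an assistant might look, as a minimal sketch (TF-IDF similarity with scikit-learn; the knowledge-base articles and the utterance are made up, and a real system would sit behind speech-to-text and much richer NLP):

```python
# Recommend knowledge-base articles that best match what the customer just said.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

kb_articles = [
    "How to reset your password",
    "Updating billing and payment details",
    "Troubleshooting login errors on mobile",
]

vec = TfidfVectorizer(stop_words="english")
kb_matrix = vec.fit_transform(kb_articles)

def recommend(utterance, top_k=2):
    """Return the top-k articles most similar to the live transcript snippet."""
    sims = cosine_similarity(vec.transform([utterance]), kb_matrix)[0]
    return [kb_articles[i] for i in sims.argsort()[::-1][:top_k]]

print(recommend("I keep getting a login error on mobile"))
```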
Deep learning: disambiguation in NLP, improving nuanced interpretation.
Industry-specific ontologies: enable fuzzy matching.
Human-level voice-to-text: accurate NLP on phone calls.
Central AI keeps an eye on employee communications to:
Identify compliance/risk behavior.
Improve performance.
Judgment improves over time, leading to more automation.
Out-of-the-ordinary is flagged for manual review; outcome fed back into the model.
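One way that flag-and-feed-back loop could be sketched (IsolationForest from scikit-learn on made-up per-message features; the features, contamination rate, and review stub are assumptions for illustration):

```python
# Flag out-of-the-ordinary messages for manual review and keep the human verdicts
# as labeled data for the next version of the model.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Toy per-message features: [length, sent after hours?, number of external recipients]
history = rng.normal(loc=[100.0, 0.1, 1.0], scale=[20.0, 0.1, 0.5], size=(1000, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

reviewed = []   # (features, human verdict) pairs fed back into later training runs

def human_review(features):
    # Stand-in for the manual review queue; in reality a compliance officer decides.
    return "risk" if features[2] > 5 else "fine"

def screen(features):
    if detector.predict([features])[0] == -1:        # out of the ordinary
        reviewed.append((features, human_review(features)))
        return "flagged for review"
    return "ok"

print(screen([105.0, 0.1, 1.0]))    # typical message
print(screen([900.0, 1.0, 12.0]))   # very unusual message
```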
Deep learning: disambiguation in NLP, improving nuanced interpretation; empowers anomaly detection.
Translation of text into structured data.
Key dependencies: availability of data in the enterprise, and employees accepting being monitored.
Algorithms are now at human level in many types of image processing; these capabilities have been opened up publicly in the last ~12 months.
“Winning” the ImageNet competition in 2015 with CNNs was one of the major catalysts of the AI Spring.
Key constraint today is identifying business cases:
Insurance property assessment
Construction drone surveying
Google automated mapping
As of 2015, CNNs have achieved human-level object detection in images.
Facebook and Google have proven out super-human face detection algorithms.
In 2015 researchers proved out super-human emotion detection.
Massive impact potential on creatives (voice actors, graphic designers, photographers, etc.).
AI will be able to create realistic sounds (including speech in a given person’s voice) and images virtually for free.
Business cases completely unexplored but include accessibility, advertising personalization, and data visualization.
GANs: step-change improvement in audio and image creation.
arXiv:1605.05396
arXiv:1512.00570
arXiv:1609.04802
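The papers above describe large image and audio GANs; purely to show the adversarial mechanic, here is a toy sketch (PyTorch assumed) in which a generator learns to imitate a simple 1-D distribution:

```python
# Minimal GAN: generator vs. discriminator on toy 1-D data drawn from N(3, 1).
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))                 # noise -> fake sample
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())   # sample -> P(real)

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) + 3.0             # "real" data
    fake = G(torch.randn(64, 8))                # generated data

    # Discriminator step: learn to tell real from fake.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: learn to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())    # should end up near 3.0
```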
Reinforcement learning agents and process automation.
Mass-automation of customer service.
Unified Information Access.
Text/speech summarization/synthesis.
AI-enabled search function.
Three sites pilot a secure, encrypted messaging system to speed up the flow of information.
Patients to submit digital requisitions online.
Patients can complete some forms online.
Enable citizens to pre-register for appointments online.
Including access to records for all NS citizens.
Caller ID enabled for 10 sites; others done by August.
Enable clinicians to use relevant mobile apps (e.g., translation).
Text appointment reminders to citizens.
Expanded online booking to include diagnostic imaging.
Pins with a blue outline are Top 10 initiatives.