Cheating Detection in Online Games

by Luke Geraghty, 25 October 2016
Transcript of Cheating Detection in Online Games


Cheating in Online Games
Unfair advantages
- cheating affects honest players
Revenue
- players pay in the store and pay for patches; honest players leave the game
Bad reputation
Anti-cheating tools are like anti-virus software: a lifelong commitment
Solutions
- often involve measures that might violate users' privacy
Some current solutions from mainstream companies:
- PunkBuster (real-time scanning of memory)
- Warden by Blizzard (scans memory and CPU processes)
In this paper...
Detecting cheats by monitoring game logs
The authors built an FPS game
- fully online; client and server both under their control
Used Support Vector Machine and Logistic Regression classifiers
Only looking at AimBots; there are lots of other types of online cheating:
- maphacks, speedhacks, artificial lag, etc.
Design and Implementation
1) The game: an FPS called Trojan Battles
- created the game server
- created a game client
- ...and integrated AimBots into that client
2) Came up with features and a feature extractor
3) Lastly, designed a data analyser that trains classifiers and generates models for cheats
Aimbot in Call of Duty
Thank you!
Overview
Paper: "Behavioral-Based Cheating Detection in Online First Person Shooters Using Machine Learning Techniques", published in 2013
Authors: Hashem Alayed, Fotos Frangoudes (University of Southern California), and Clifford Neuman (Information Sciences Institute)
Keywords: Cheating Detection, Online Games, Machine Learning
Presented by Luke Geraghty
Evaluation
Not enough data (460 mins in total)
Only one server and one client
- and everything, hardware and software, is under the authors' control
2 players in training, 3 players in testing: only a few players overall for a multiplayer game
The game was invented for the study
- a commercial game, which is obviously more complicated than a two-map, five-weapon game such as this, might produce different findings
Will using behavior-based cheat detection methods really protect players' privacy?
Game client & server
Game server: keeps track of the game state and keeps logs
Game client: the game itself. Players play deathmatch games and can customise certain settings, e.g. the length of the match and the map.
While in the game, players can choose between 5 weapons.
They can also toggle one of 5 AimBot cheats on and off.
AimBots
Lock (L): locks on to a visible target continuously and instantly
Auto-Switch (AS): switches Lock on and off
Auto-Miss (AM): creates intentional misses
Slow-Aim (SA): like Lock, but slower
Auto-Fire (AF): fires automatically when the player aims at a target
Feature Extractor
Need to define cheating behaviour vs normal behaviour
Features taken from data sent from client to server
Features extracted in terms of time frames: 10, 30, 60, or 90 second windows
...and most based on types of in-game behaviour (movement, firing, targeting)
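As a rough illustration, here is a minimal Python sketch of that windowing step. The event schema and the two features shown (a firing-rate feature and the targeting feature MeanAimAcc discussed later) are assumptions for illustration, not the paper's exact definitions.

```python
from collections import defaultdict

def extract_features(events, window_size=60):
    """Group client-to-server log events into fixed time windows and
    compute per-window behaviour statistics (hypothetical event schema)."""
    windows = defaultdict(list)
    for e in events:
        windows[int(e["t"] // window_size)].append(e)  # e["t"] = timestamp in secs

    rows = []
    for idx in sorted(windows):
        evs = windows[idx]
        shots = [e for e in evs if e["type"] == "fire"]
        aims = [e for e in evs if e["type"] == "aim" and e["target_visible"]]
        on_target = [e for e in aims if e["on_target"]]
        rows.append({
            "window": idx,
            "fire_rate": len(shots) / window_size,  # firing behaviour
            "mean_aim_acc": len(on_target) / len(aims) if aims else 0.0,  # targeting
        })
    return rows
```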
Data Analyser
Takes the generated features file and applies classifiers in a step-by-step process to detect cheating:
1) Train classifiers on labeled data using ten-fold cross-validation
2) Generate detection models to use in a three-player test run
3) Specify the accuracy of each classifier with each AimBot type
Experiments
To produce the training data:
- 18 different deathmatches
- played by 2 players, one using AimBots, one not
- eight matches were 10 mins long; ten were 15 mins long
...for a total time of 460 mins
Feature extractor fed the different time frame sizes mentioned earlier (10, 30, 60, 90 secs)
They classify frames using two classifiers:
Logistic Regression
Support Vector Machines
- Linear kernel (SVM-L)
- Radial Basis Function kernel (SVM-RBF)
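A minimal sketch of this training step, assuming the extracted per-frame features and cheat labels are already in arrays X and y; scikit-learn stands in here for whatever toolkit the authors actually used.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# The three classifiers named above
classifiers = {
    "LogReg":  LogisticRegression(max_iter=1000),
    "SVM-L":   SVC(kernel="linear"),
    "SVM-RBF": SVC(kernel="rbf"),
}

def evaluate(X, y):
    """Ten-fold cross-validation, as in the paper's training procedure."""
    for name, clf in classifiers.items():
        scores = cross_val_score(clf, X, y, cv=10)
        print(f"{name}: mean accuracy {scores.mean():.3f}")
```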
Creating and training the models
1) All cheats together, using multi-class classification
2) All cheats combined, using two classes (yes or no)
3) Lock-Based cheats classified together versus Auto-Fire (Multi-Class)
4) Each cheat classified separately
5) Three Players Test Set
Combining all the cheats into a single "cheating" label, i.e. a frame is labelled "yes" if any cheating occurred.
All the classifiers performed similarly, but accuracy decreases after the 60-second frame size.
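A tiny sketch of that binary relabelling; the five label codes are hypothetical shorthand for the AimBots listed earlier.

```python
CHEAT_LABELS = {"L", "AS", "AM", "SA", "AF"}  # per-window AimBot codes

def to_binary(label):
    """Collapse the five cheat labels into a single yes/no cheating label."""
    return "yes" if label in CHEAT_LABELS else "no"
```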
Separated each cheat, and ran the classifier on each cheat individually.
The 90-second frame is best for classifying Auto-Switch and Auto-Fire, and performs the same as the 60-second window for Slow-Aim.
This model has the best accuracy of any model.

Set-up:
Moving on to testing at last
New, unseen data
Only using SVM classifiers (found to be the best in training)
Only using time frame size 60 (for the same reason)
30-min long deathmatch
Three players:
- one honest player
- two cheaters (one using Lock, one using Auto-Fire)
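A sketch of this testing step under the same array assumptions as before; probability=True (so the SVM emits per-window cheat probabilities rather than hard labels) is my addition, not necessarily the paper's set-up.

```python
from sklearn.svm import SVC

def score_test_match(X_train, y_train, X_test):
    """Train on the labelled matches, then score the unseen 3-player match.
    Assumes binary labels (cheat vs normal) and frame-size-60 features."""
    model = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    return model.predict_proba(X_test)[:, 1]  # per-window cheat probability
```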
Features Ranking
Features that were important and useful for the prediction process
Mean Aiming Accuracy (MeanAimAcc) was the most informative feature for Lock-based cheats
For Auto-Fire, firing-based features are, unsurprisingly, more helpful
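The slides don't say how the ranking was computed; one common approach, sketched here purely as an assumption, is to sort features by the magnitude of a linear model's learned weights.

```python
from sklearn.svm import SVC

def rank_features(X, y, feature_names):
    """Rank features by |weight| in a linear SVM (binary labels assumed)."""
    clf = SVC(kernel="linear").fit(X, y)
    weights = abs(clf.coef_[0])  # one weight per feature
    for name, w in sorted(zip(feature_names, weights), key=lambda p: -p[1]):
        print(f"{name}: {w:.3f}")
```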
Analysis

Conclusion and future work
Game data analysis depends on how well you know the game
Using behavior-based cheat detection methods will protect players' privacy.
The cheat detection can be improved:
- collect more data
- use a mixture of cheats for each player instead of one cheat for the whole game
- add more features
- increase the number of maps and weapons


Experiments: Training and testing results
The models followed on from one another, each based on findings from the previous one.
e.g. in 2), the authors found Lock-Based cheats behaved differently from Auto-Fire, so they separated these two cheats in the classification for 3).
Testing phase
5) Three players test set
Looking at cheats individually gives lower overall accuracy for detecting whether cheating occurred than looking at all cheats together.
There is clearly an optimum time frame size for looking at FPS game cheats.
Smart cheaters activate cheats only when they need them.
The threshold value for detecting cheats depends on detection accuracy and on the developers' policy for cheat detection, and also on when the detection takes place: online (real-time detection) or offline, after a game is finished.
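As an illustration of such a policy threshold, a minimal sketch: flag a player only when enough windows exceed a probability cut-off. Both numbers below are invented policy choices, not values from the paper.

```python
def flag_player(window_probs, threshold=0.8, min_windows=3):
    """Flag a player if at least `min_windows` windows exceed the cheat
    probability threshold (both values are illustrative)."""
    suspicious = sum(1 for p in window_probs if p >= threshold)
    return suspicious >= min_windows
```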
Some results weren't accurate enough. For example:
- The best accuracy obtained was using Logistic Regression with frame size = 60
- Many misclassifications within the four "Lock-Based" cheats (due to their similarity)
Measurements: Overall Accuracy (ACC), True Positive Rate (TPR), and False Positive Rate (FPR)
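For reference, the standard definitions of these three measurements, computed from binary confusion-matrix counts:

```python
def metrics(tp, fp, tn, fn):
    """Overall Accuracy, True Positive Rate and False Positive Rate."""
    acc = (tp + tn) / (tp + fp + tn + fn)  # ACC
    tpr = tp / (tp + fn)                   # TPR (recall)
    fpr = fp / (fp + tn)                   # FPR
    return acc, tpr, fpr
```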
In Model 1, it was clearly visible in the confusion matrix that LB cheats and AF cheats behaved differently, and AF alone was easy to detect.
So the authors separated these two types of cheats and tested them individually.
Findings:
- The classifier often believes other Lock-based cheats are being used (high misclassification percentages)
- AF is much easier to classify
- Normal behaviour is very easy to detect
MeanAimAcc is the number of times a player aimed at a target divided by the number of times a target was visible.
Similar cheats, i.e. the Lock-based cheats, which all show up in similar features (player movement, targeting), are harder for a classifier to tell apart.
The most common top feature is Mean Aim Accuracy (MeanAimAcc).
To apply supervised machine learning models, knowledge of the data context (here, FPS AimBots) is vital.

In online games, lag happens all the time, which causes accuracy to drop, because the message exchange rate falls and sometimes message content is malformed.
The amount of cheating data collected, compared to normal-behaviour data, is not enough.