Cheating Detection in Online Games


Luke Geraghty

on 25 October 2016

Transcript of Cheating Detection in Online Games

Cheating in Online Games
Unfair advantages
- cheating affects players
- pay store, pay for patches, honest players leave game
Bad reputation
Anti-cheating tools are like anti-virus software: a lifelong commitment,
and they involve measures that might violate users' privacy
Some current solutions by mainstream companies:
- PunkBuster (real-time scanning of memory)
- Warden by Blizzard (memory and CPU processes)
In this paper...
Detecting cheats by monitoring game logs
Came up with an FPS game
Fully online. Client & server both controlled.
Support Vector Machines
Logistic Regression classifiers
Only looking at AimBots. Lots of other types of online cheating exist:
- maphacks, speedhacks, artificial lag, etc.
Design and Implementation

The game:
an FPS called Trojan Battles
Created the game server
A game client
...and integrated AimBots into that client

Also came up with a feature extractor

Lastly, designed a data analyzer that trains classifiers and generates models for cheats
Aimbot in Call of Duty
Thank you!
Paper published in 2013
Hashem Alayed, Fotos Frangoudes (University of Southern California), and Clifford Neuman (Information Sciences Institute)
Cheating Detection, Online Games, Machine Learning
Behavioral-Based Cheating Detection in Online First Person Shooters Using Machine Learning Techniques
Luke Geraghty
Not enough data
(460 mins)
Only one server and client, with everything under the authors' control: hardware and software.
2 players in training, 3 players in testing, and only a few players in general for a multiplayer game.
The game was invented for the paper. Perhaps a commercial game, which is obviously more complicated than a two-map, five-weapon game such as this, would have different findings.
Using behavior-based cheat detection methods will protect players' privacy
Game client & server
Game server:
keeps track of the game state and keeps logs
Game client:
the game itself. Players play deathmatch games and can customise certain settings e.g. the length of the match and the map.
While in the game, players can choose between 5 weapons.
Can also toggle on and off one of 5 AimBot cheats.
(L) Lock: locks on to a visible target continuously and instantly
(AS) Auto-Switch: switches the lock on and off
(AM) Auto-Miss: creates intentional misses
(SA) Slow Aim: like Lock, but slower
(AF) Auto-Fire: automatically fires when a player aims at a target
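The lock-style behaviours above can be sketched in code. A minimal 2-D aiming sketch, assuming a hypothetical angle-based aiming model (function names and the rotation step are illustrative, not from the paper):

```python
import math

def lock_aim(player_pos, target_pos):
    """(L) Lock: snap the aim angle directly at the target, instantly."""
    dx = target_pos[0] - player_pos[0]
    dy = target_pos[1] - player_pos[1]
    return math.atan2(dy, dx)

def slow_aim(current_angle, player_pos, target_pos, step=0.1):
    """(SA) Slow Aim: like Lock, but rotate toward the target gradually."""
    goal = lock_aim(player_pos, target_pos)
    # Signed smallest angular difference, wrapped into (-pi, pi].
    diff = (goal - current_angle + math.pi) % (2 * math.pi) - math.pi
    # Clamp the per-tick rotation so the aim converges over several frames.
    return current_angle + max(-step, min(step, diff))
```

Slow Aim exists precisely because an instant snap (Lock) produces aiming statistics no human can match, which is what the behavioural features later exploit.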
Feature Extractor
Need to define cheating behaviour vs normal behaviour
Features taken from data sent from client to server
Features extracted in terms of time frames: 10, 30, 60, or 90 second windows
...and most based on types of in-game behaviour (movement, firing, targeting)
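A minimal sketch of such a window-based feature extractor, assuming a hypothetical event-log schema (the `aim_on_target`, `target_visible`, and `fire` event types and the two features shown are illustrative; the paper's real feature set is richer):

```python
def extract_frames(events, frame_size):
    """Split a time-stamped event log into fixed-size frames and
    compute per-frame behavioural features."""
    if not events:
        return []
    end = max(e["t"] for e in events)
    frames = []
    t0 = 0.0
    while t0 <= end:
        window = [e for e in events if t0 <= e["t"] < t0 + frame_size]
        aims = sum(1 for e in window if e["type"] == "aim_on_target")
        visible = sum(1 for e in window if e["type"] == "target_visible")
        shots = sum(1 for e in window if e["type"] == "fire")
        frames.append({
            # Targeting feature: how often aim landed on a visible target.
            "MeanAimAcc": aims / visible if visible else 0.0,
            # Firing feature: shots per second in this window.
            "FireRate": shots / frame_size,
        })
        t0 += frame_size
    return frames
```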
Data Analyser
Train classifiers on labeled data using ten-fold cross-validation
Generate detection models to use in a three-player test run
Specify the accuracy of each classifier with each AimBot type
To produce the training data:
18 different deathmatches
Played by 2 players, one using AimBots, one not.
Eight matches were 10 mins long; ten were 15 mins long.
a total time of 460 mins
Feature extractor fed different time frames as mentioned earlier (10, 30, 60, 90 secs)
They classify frames using SVMs with two kernels:
- Linear kernel (SVM-L)
- Radial Basis Function kernel (SVM-RBF)
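The ten-fold cross-validation used for training is straightforward to sketch. A stdlib-only version with a pluggable classifier (the paper used SVM and Logistic Regression implementations; the `fit`/`predict` callables here are illustrative placeholders):

```python
def kfold_indices(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def cross_val_accuracy(fit, predict, X, y, k=10):
    """Average test accuracy over k folds; `fit(X, y)` returns a model,
    `predict(model, x)` returns a label."""
    accs = []
    for train, test in kfold_indices(len(X), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        hits = sum(predict(model, X[i]) == y[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / len(accs)
```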
Creating and training the models
1) All cheats together, using multi-class classification
2) All cheats combined, using two classes (yes or no)
3) Lock-Based cheats classified together versus Auto-Fire (Multi-Class)
4) Each cheat classified separately
5) Three Players Test Set
Combining all the cheating as a single cheat, i.e. label a cheat as "yes" if any cheating occurred.
All the classifiers performed similarly.
But we can see that accuracy decreases after the 60 second frame size.
Separated each cheat, and ran the classifier on each cheat.

90 second frame best for classifying Auto-Switch and Auto-Fire, and performs the same as the 60 second window for Slow Aim

This model has the best accuracy of any model.

Moving on to testing at last
New, unseen data
Only using SVM classifiers
(found to be the best)
Only using time frame size 60 (ditto)
30-min long deathmatch
Three players
one honest player
two cheaters (one using Lock, one using Auto-Fire)
Feature Ranking
Features that were important and useful for the prediction process
Mean Aiming Accuracy (MeanAimAcc) was the most informative feature for Lock-based cheats
For Auto-Fire, firing-based features are obviously more helpful
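One simple way to produce such a ranking is a per-feature Fisher-style score: distance between the class means relative to their spread. A sketch (the paper's exact ranking method may differ):

```python
import statistics

def rank_features(X, y):
    """Rank feature indices by |mean_cheat - mean_honest| / (std + std).
    X: list of feature vectors; y: 1 for cheat frames, 0 for honest."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        cheat = [x[j] for x, label in zip(X, y) if label == 1]
        honest = [x[j] for x, label in zip(X, y) if label == 0]
        spread = (statistics.pstdev(cheat) + statistics.pstdev(honest)) or 1e-9
        scores.append(abs(statistics.mean(cheat) - statistics.mean(honest)) / spread)
    # Best-separating features first.
    return sorted(range(n_feat), key=lambda j: -scores[j])
```

On such a score, a feature like MeanAimAcc dominates for Lock-based cheats because locked aim pushes its mean far above any honest player's.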
Conclusion and future work
The data analyzer takes the generated features file and applies classifiers in a step-by-step process to detect cheating
Game data analysis depends on how well you know the game
Using behavior-based cheat detection methods will protect players’ privacy.
The cheat detection can be improved:
collect more data
use a mixture of cheats for each player instead of one cheat for the whole game
add more features
the number of

Training and testing results
Models followed on from one another and were based on findings from the previous model.
e.g. in 1), the authors found Lock-Based cheats behaved differently from Auto-Fire. So they separated these two cheats in the classification for 3).
Testing phase
5) Three players test set
Looking at cheats individually gives less overall accuracy for finding whether cheating occurred than looking at all cheats together.
There is clearly an optimum time frame size for looking at FPS game cheats
Smart cheaters
activate cheats only when they need them
The value used for detecting cheats depends on detection accuracy and on the developers' policy for cheat detection, and also on when the detection takes place: online (real-time detection) or offline, after a game is finished.
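For offline detection, one plausible policy is to flag a player only when the fraction of frames the classifier marks as cheating crosses a threshold. A sketch (the threshold and the policy itself are illustrative assumptions, not from the paper):

```python
def flag_player(frame_predictions, threshold=0.5):
    """Offline detection policy: flag a player as a cheater when the
    share of frames classified 'cheat' exceeds `threshold`.
    A stricter threshold lowers FPR at the cost of missing smart
    cheaters who only activate cheats occasionally."""
    if not frame_predictions:
        return False
    cheat_rate = sum(1 for p in frame_predictions if p == "cheat") / len(frame_predictions)
    return cheat_rate > threshold
```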
Some results weren’t accurate enough. For example:
The best accuracy obtained was using Logistic Regression with frame size = 60

Many misclassifications within the four "Lock-Based" cheats (due to their similarity)
Metrics used: Overall Accuracy (ACC), True Positive Rate (TPR), and False Positive Rate (FPR)
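These three metrics fall straight out of the prediction counts; a sketch for the binary (cheat vs. none) case:

```python
def metrics(y_true, y_pred, positive="cheat"):
    """Compute ACC, TPR and FPR for one binary prediction run."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    acc = (tp + tn) / len(y_true)                  # overall accuracy
    tpr = tp / (tp + fn) if tp + fn else 0.0       # cheats caught
    fpr = fp / (fp + tn) if fp + tn else 0.0       # honest players accused
    return acc, tpr, fpr
```

FPR matters most in practice: every false positive is an honest player punished, which is why the acceptable value depends on the developers' policy.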
In Model 1, it was clearly visible in the confusion matrix that LB cheats and AF cheats behaved differently, and AF alone was easy to detect.
So the authors separated these two types of cheats and tested them individually.
Classifier believes other Lock-based cheats are being used
(high misclassification percentages)
AF much easier to classify
Normal behaviour very easy to detect
Mean Aim Accuracy is the number of times a player aimed at a target divided by the number of times a target was visible
Similar cheats, i.e. the Lock-based cheats, which all use similar features (player movement, targeting), are going to be harder for a classifier to distinguish.
The most common top feature is 'Mean Aim Accuracy'
To apply supervised machine learning models, knowledge about the data context (here, FPS AimBots) is vital.

In online games, lag happens all the time, which causes the accuracy to drop, because the message exchange rate falls and sometimes the content of a message is malformed.
The amount of cheating data collected, compared to normal-behavior data, is not enough