
Algorithm Selection Using Deep Learning Without Feature Extraction

Mohamad Alissa

Kevin Sim

Emma Hart

Overview

[Diagram: the algorithm selection mapping between the problem space (instances P1, P2, P4, …) and the algorithm space (algorithms A1, A3, A4, …), with performance scores on the links]

Example

Packing the item set I = { 46, 23, 26, 44, 25, 40, 77, 6 } with each heuristic:

[Diagram: the resulting packings]

1. Best Fit (BF): Fitness = 0.808
2. First Fit (FF): Fitness = 0.807
3. Worst Fit (WF): Fitness = 0.805
4. Next Fit (NF): Fitness = 0.62
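The slides do not say how these Fitness values are computed; a common bin-packing measure is Falkenauer's fitness, the mean of the squared bin fill ratios, which rewards well-filled bins. A minimal sketch under that assumption:

```python
def packing_fitness(bin_fills, capacity, k=2):
    """Falkenauer-style fitness: mean of (fill / capacity)**k over all bins.

    Equals 1.0 only when every bin is completely full; the exponent k = 2
    rewards a few well-filled bins over many partially filled ones.
    """
    return sum((fill / capacity) ** k for fill in bin_fills) / len(bin_fills)
```

For example, two completely full bins of capacity 100 give fitness 1.0, while two half-full bins give only 0.25, even though both packings use two bins.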

Introduction

[Diagram: machine learning analogy. Labelled pictures (PF1 → Cat, PF3 → Dog, PF5 → Dog, …) train an ML model to classify new images; likewise, labelled performance data (A1 solves P1 with 70% solution quality, A2 solves P1 with 30%) trains a model to pick the best algorithm for a problem instance. Classic machine learning works on extracted features, while deep learning can work on the raw input.]

Algorithm Selection Problem (ASP)

The ASP was introduced by Rice [1976].

VBS & SBS

The Virtual Best Solver (VBS) is a perfect mapping between problem instances and heuristics.

The Single Best Solver (SBS) is the heuristic that achieves the best performance over a set of instances.

[Diagram: the SBS commits to one heuristic (e.g. A1) for every instance, while the VBS picks the best of A1, A2 and A3 per instance]
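To make the distinction concrete, here is a minimal sketch (the heuristic names and scores are made up for illustration): the SBS is the single heuristic with the best average score, while the VBS takes the best score on each instance individually:

```python
def sbs_and_vbs(perf):
    """perf maps each heuristic to its score per instance (higher is better)."""
    n = len(next(iter(perf.values())))
    # SBS: commit to the one heuristic that is best on average
    sbs = max(perf, key=lambda h: sum(perf[h]) / n)
    # VBS: a perfect per-instance mapping, i.e. the best score on each instance
    vbs = [max(perf[h][i] for h in perf) for i in range(n)]
    return sbs, vbs
```

The gap between the VBS scores and what the SBS achieves is exactly what algorithm selection tries to close.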

How is the ASP normally solved?

Knowledge Gap

Whatever the application (Memory Management, Scheduling, Bin Packing) and whatever the approach (Classic Machine Learning, Feature Reduction, Feature Composition), the recipe is the same: extract features from the problem instances, then feed these features to the approach.

Disadvantages of dealing with features

  • It requires hand-crafting and expert input.
  • It is complex.
  • It requires computational effort for:
    • Feature selection
    • Feature reduction
    • Feature composition

1D-Bin Packing Problem

Deterministic heuristics including Best Fit (BF), First Fit (FF), Next Fit (NF) and Worst Fit (WF)

What do we mean by "sequence" or "time series" data? e.g. the order of the items in a packing task.
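A minimal sketch of the four deterministic heuristics; the bin capacity (100 here) is an assumption for illustration, since the slides do not state it. Each heuristic differs only in which open bin it tries for the next item:

```python
def pack(items, capacity, choose):
    """Pack items in arrival order; `choose` returns a bin index or None."""
    bins = []  # remaining capacity of each open bin
    for item in items:
        i = choose(bins, item)
        if i is None:
            bins.append(capacity - item)  # no bin fits: open a new one
        else:
            bins[i] -= item
    return bins

def first_fit(bins, item):
    # FF: first open bin the item fits in
    return next((i for i, r in enumerate(bins) if r >= item), None)

def best_fit(bins, item):
    # BF: feasible bin with the least remaining space (tightest fit)
    feasible = [(r, i) for i, r in enumerate(bins) if r >= item]
    return min(feasible)[1] if feasible else None

def worst_fit(bins, item):
    # WF: feasible bin with the most remaining space
    feasible = [(r, i) for i, r in enumerate(bins) if r >= item]
    return max(feasible)[1] if feasible else None

def next_fit(bins, item):
    # NF: only the most recently opened bin is considered
    return len(bins) - 1 if bins and bins[-1] >= item else None
```

Running the same item sequence through each rule generally yields different packings, which is why the best heuristic depends on the instance and, in particular, on the order of the items.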

Methodology


Our Approach

We avoid using features by training an LSTM with only sequence information. These sequences have NO FEATURES.

[Diagram: item sequence I = { 25, 40, 26, 44, 46, 23 … 80, 66, 21, 82, 33, 92, 45 } → trained LSTM (deep neural network) → predicted best heuristic]

Deep Learning

(DL)

DL has achieved groundbreaking results in many applications.

DL has many topologies including Recurrent Neural Network (RNN).

RNNs are specifically designed to learn from time-series data.

Deep Learning

[Figure: (a) a simple RNN and (b) an example of an unfolded RNN with two time steps]

LSTM

Long Short-Term Memory (LSTM)

is a specialised RNN.

LSTM has been shown to be highly efficient in learning long-term sequences of ordered data.

LSTM neural networks incorporate additional gates that can retain, retrieve and forget information over long periods of time.
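Those gates can be sketched as a single LSTM step; this is a toy NumPy version with the four gate weight matrices stacked into one, and its dimensions and initialisation are illustrative, not the network used in the talk:

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x, h, c, W, b):
    """One LSTM step: gates decide what to retain, retrieve and forget.

    W has shape (4*H, H+D): the input, forget, candidate and output
    transforms stacked row-wise; H is the hidden size, D the input size.
    """
    H = h.shape[0]
    z = W @ np.concatenate([h, x]) + b   # all four gate pre-activations
    i = sigmoid(z[0:H])                  # input gate: what new info to store
    f = sigmoid(z[H:2 * H])              # forget gate: what old info to drop
    g = np.tanh(z[2 * H:3 * H])          # candidate cell update
    o = sigmoid(z[3 * H:4 * H])          # output gate: what to expose
    c_new = f * c + i * g                # cell state carries long-term memory
    h_new = o * np.tanh(c_new)           # hidden state is the step's output
    return h_new, c_new
```

The cell state `c` is what lets the network carry information across long item sequences: the forget gate can keep it nearly unchanged for many steps.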

LSTM

[Figure: LSTM memory block]

Random Instances

Distribution of the best heuristic over the random instances: BF 68%, FF 29%, WF 3%, NF 0%.

[Figure: PCA plot of the performance of all heuristics on all 16,000 random instances]

Evolutionary Algorithm

I = { 25, 40, 26, 44, 46, 23 … 80, 66, 21, 82, 33, 92, 45 }

Evolutionary Algorithm

Optimize the order of items such that the performance of the target algorithm exceeds the performance of the other algorithms.

Evolve instances tailored for the heuristics BF, FF, NF and WF

Evolutionary Algorithm

{ 55, 40, 26, 44, 46, 23 … 80, 66, 21, 82, 33, 92, 45 }

{ 22, 43, 26, 56, 46, 23 … 77, 66, 34, 82, 55, 92, 88 }

{ 77, 50, 26, 32, 46, 23 … 80, 55, 21, 28, 33, 92, 45 }

{ 78, 40, 26, 44, 46, 23 … 80, 66, 55, 92, 44, 92, 45 }

{ 22, 40, 45, 76, 46, 23 … 80, 66, 44, 44, 33, 92, 77 }

{ 11, 40, 26, 44, 46, 23 … 80, 66, 32, 33, 33, 92, 45 }

Population size: 500. Swap mutation with 2% probability.

[Diagram: the candidate instance { 25, 40, 26, 44, 46, 23, 66, 21, 82, 33, 92, 45 } is solved by each heuristic: BF yields a solution with fitness α1, FF with fitness α2, WF with fitness α3 and NF with fitness α4]

Fitness Function = fit(targeted heuristic) − fit(α2)

Evolutionary Algorithm

Tournament selection with a tournament size of 2; single-point crossover with 99% probability; stopping at 1×10^6 iterations.

[Figure: Evolutionary Algorithm workflow]
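The loop above can be sketched as follows. The population size and generation count are scaled-down placeholders (the talk uses population 500 and 10^6 iterations), the fitness function is passed in as a parameter standing in for fit(targeted heuristic) − fit(α2), and elitism (re-inserting the best individual) is an extra assumption so the best fitness never decreases:

```python
import random

def tournament(pop, fits):
    # tournament selection with tournament size 2
    i, j = random.sample(range(len(pop)), 2)
    return pop[i] if fits[i] >= fits[j] else pop[j]

def crossover(a, b, prob=0.99):
    # single-point crossover with 99% probability
    if random.random() < prob:
        p = random.randrange(1, len(a))
        return a[:p] + b[p:]
    return list(a)

def mutate(ind, prob=0.02):
    # swap mutation with 2% probability
    ind = list(ind)
    if random.random() < prob:
        i, j = random.sample(range(len(ind)), 2)
        ind[i], ind[j] = ind[j], ind[i]
    return ind

def evolve(fitness, length=12, pop_size=50, generations=200, seed=0):
    random.seed(seed)
    # each individual is a candidate instance: a sequence of item sizes
    pop = [[random.randint(1, 100) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        best = pop[fits.index(max(fits))]          # elitism: keep the best
        pop = [best] + [mutate(crossover(tournament(pop, fits),
                                         tournament(pop, fits)))
                        for _ in range(pop_size - 1)]
    fits = [fitness(ind) for ind in pop]
    return pop[fits.index(max(fits))]
```

Swap mutation only reorders items within an individual, so the sequence order, the very signal the LSTM later learns from, is the main thing being evolved; crossover additionally mixes item values between individuals.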

Evolved Instances: Examples

Evolutionary part

Objective:

To create 4 balanced datasets, each containing an equal number of instances tailored for FF, BF, WF and NF.

Experimental Setup:

EA experiments were conducted on a High Performance Cluster with 23 nodes, each comprising an 8-core/16-thread CPU and 64 GB of 2133 MHz memory. The code was implemented in Java.

Experimental Setup

Configuration:

Datasets

[Table: dataset generation parameters]

DS2 Performance

[Figure: box plot of DS2 performance with the different heuristics]

Random vs Evolved Instances

[Figure: PCA plot of the performance of all heuristics on all 16,000 evolved problem instances]

Implementation

Objective:

To learn the relationship between the instances and their heuristics.

Experimental Setup:

Deep Learning experiments were conducted on Google Colaboratory with a TPU runtime, and the code was implemented in Python (Keras library).

Implementation

Configuration:

[Figure: the network structure]

The hyper-parameters were optimized manually, since tuning them is a computationally expensive task.

Learning iterations:
  • DS1 and DS2 → 300
  • DS3, DS4 and DS5 → 700

Batch size = 32

Experimental setup

Training and testing workflow

Results

[Table: classification Kappa on the validation set]

Testing Phase

Testing the deep models on the test set.

[Table: results of the best trained model on the test set]

Conclusion

NO FEATURES: the approach learns directly from sequences.

[Diagram: sequences → trained LSTM (deep neural network) → predicted best heuristic]

Future Work

  • Online version of the Bin Packing Problem
  • Other applications, including:
    • Scheduling Problem
    • Memory Management