Figure: the algorithm selection problem as a mapping from the Problem Space (instances P1, P2, P4, ...) onto the Algorithm Space (algorithms A1, A3, A4, ...), where edge labels give the solution quality (%) each algorithm achieves on each instance.
Figure: packing the instance I = { 46, 23, 26, 44, 25, 40, 77, 6 } with the four deterministic heuristics:
1. BF: Fitness = 0.808
2. FF: Fitness = 0.807
3. WF: Fitness = 0.805
4. NF: Fitness = 0.62
Figure: the classic feature-based pipeline [1976]: problem features (PF1, PF2, PF3, PF5, ..., PF10, e.g. "Cat"/"Dog") are extracted from the instances and fed to a Machine Learning / Deep Learning method, producing a trained ML model that maps each instance to an algorithm (e.g. A1 solves P1 with 70% solution quality, while A2 solves P1 with only 30%).
The Virtual Best Solver (VBS) is a perfect mapping between problem instances and heuristics.
The Single Best Solver (SBS) is the single heuristic that achieves the best overall performance across a set of instances.
Figure: performance of the SBS, the individual algorithms A1-A3, and the VBS.
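The two definitions can be made concrete with a toy performance matrix; all names and numbers below are illustrative, not from the experiments:

```python
# Rows: problem instances; columns: algorithms A1..A3.
# Entries: solution quality (higher is better). Illustrative numbers only.
perf = {
    "P1": {"A1": 0.70, "A2": 0.30, "A3": 0.50},
    "P2": {"A1": 0.40, "A2": 0.90, "A3": 0.60},
    "P3": {"A1": 0.80, "A2": 0.20, "A3": 0.75},
}
algos = ["A1", "A2", "A3"]

# SBS: the single algorithm with the best total quality over all instances.
sbs = max(algos, key=lambda a: sum(row[a] for row in perf.values()))
sbs_quality = sum(row[sbs] for row in perf.values()) / len(perf)

# VBS: a perfect per-instance mapping, i.e. the best algorithm on each row.
# Its average quality upper-bounds what any selector can achieve.
vbs_quality = sum(max(row.values()) for row in perf.values()) / len(perf)

print(sbs, sbs_quality, vbs_quality)
```

The gap between the SBS and VBS values is exactly what a per-instance selector tries to close.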
Whatever the application (Memory Management, Scheduling Problems, Bin Packing Problems) and whatever the approach (Classic Machine Learning, Feature Reduction, Feature Composition), the common pipeline is the same: extract features from the problem instances, then feed these features to the approach.
The deterministic heuristics considered are Best Fit (BF), First Fit (FF), Next Fit (NF) and Worst Fit (WF).
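As a minimal sketch, the four heuristics differ only in which open bin they choose for the next item; the bin capacity of 100 and the one-pass (online) packing model below are assumptions for illustration, and the fitness measure itself is not reproduced:

```python
def pack(items, capacity, choose):
    """Pack items one by one into bins of the given capacity.

    `choose` returns the index of an open bin, or None to open a new bin.
    Returns the list of bins (each a list of item sizes).
    """
    bins = []
    for item in items:
        idx = choose(bins, item, capacity)
        if idx is None:
            bins.append([item])
        else:
            bins[idx].append(item)
    return bins

def _fits(bins, item, capacity):
    """Indices of open bins with enough free space for `item`."""
    return [i for i, b in enumerate(bins) if sum(b) + item <= capacity]

# First Fit: the first bin with room.
def first_fit(bins, item, capacity):
    fits = _fits(bins, item, capacity)
    return fits[0] if fits else None

# Best Fit: the bin that would be left with the least free space.
def best_fit(bins, item, capacity):
    fits = _fits(bins, item, capacity)
    return max(fits, key=lambda i: sum(bins[i])) if fits else None

# Worst Fit: the bin with the most free space.
def worst_fit(bins, item, capacity):
    fits = _fits(bins, item, capacity)
    return min(fits, key=lambda i: sum(bins[i])) if fits else None

# Next Fit: only the most recently opened bin is considered.
def next_fit(bins, item, capacity):
    if bins and sum(bins[-1]) + item <= capacity:
        return len(bins) - 1
    return None

items = [46, 23, 26, 44, 25, 40, 77, 6]
for name, h in [("BF", best_fit), ("FF", first_fit),
                ("WF", worst_fit), ("NF", next_fit)]:
    print(name, pack(items, 100, h))
```

Note how BF and FF place the final item 6 differently: FF drops it into the first bin with room, while BF picks the fullest bin it still fits in.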
What do we mean by "sequence" or "time-series" data?
e.g. the order of the items in a packing task
We avoid features altogether by training an LSTM on sequence information only; these sequences have NO FEATURES.
I= { 25, 40, 26, 44, 46, 23 ....... 80, 66, 21, 82, 33, 92, 45 }
Sequences → Trained LSTM (Deep Neural Network) → Predicted Best Heuristic
DL has achieved ground-breaking results in many applications.
DL has many topologies, including the Recurrent Neural Network (RNN).
RNNs are specifically designed to learn from time-series data.
(a) a simple RNN and (b) an example of an unfolded RNN with two time steps
Long Short-Term Memory (LSTM) is a specialised RNN.
LSTMs have been shown to be highly efficient at learning long-term dependencies in ordered data.
LSTM networks incorporate additional gates that can retain, retrieve and forget information over long periods of time.
LSTM memory block
Best-performing heuristic across the 16,000 random instances: BF 68%, FF 29%, WF 3%, NF 0%.
Figure: PCA plot of the performance of all heuristics on all 16,000 random instances
I= { 25, 40, 26, 44, 46, 23 ....... 80, 66, 21, 82, 33, 92,45 }
Optimize the order of the items such that the performance of the target algorithm exceeds the performance of the other algorithms, i.e. evolve instances tailored to each of the heuristics BF, FF, NF and WF.
{ 55, 40, 26, 44, 46, 23 ....... 80, 66, 21, 82, 33, 92,45 }
{ 22, 43, 26, 56, 46, 23 ....... 77, 66, 34, 82, 55, 92,88 }
{ 77, 50, 26, 32, 46, 23 ....... 80, 55, 21, 28, 33, 92,45 }
{ 78, 40, 26, 44, 46, 23 ....... 80, 66, 55, 92, 44, 92,45 }
{ 22, 40, 45, 76, 46, 23 ....... 80, 66, 44, 44, 33, 92,77 }
{ 11, 40, 26, 44, 46, 23 ....... 80, 66, 32, 33, 33, 92,45 }
Figure: Evolutionary Algorithm workflow. Each candidate sequence, e.g. { 25, 40, 26, 44, 46, 23, 66, 21, 82, 33, 92, 45 }, is packed by every heuristic, giving a BF solution with fitness α1, an FF solution with fitness α2, a WF solution with fitness α3 and an NF solution with fitness α4.
Fitness Function = fit(targeted heuristic) − fit(α2)
EA configuration:
- Population size: 500
- Selection: tournament selection with a tournament size of 2
- Crossover: single-point crossover with 99% probability
- Mutation: swap mutation with 2% probability
- Termination: after 1×10^6 iterations
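The workflow above can be sketched as follows. The real fitness evaluates each candidate with all four heuristics (yielding α1-α4); here a stand-in fitness (the sum of the items) keeps the sketch self-contained, the item range 1-100 is assumed, and the population size and iteration count are scaled down from the stated 500 and 1×10^6:

```python
import random

def evolve(fitness, pop_size=500, seq_len=120, generations=1000, seed=0):
    """EA with the stated operators: tournament selection (size 2),
    single-point crossover (99%), swap mutation (2%)."""
    rng = random.Random(seed)
    # Initial population: random item sequences (sizes 1..100 assumed).
    pop = [[rng.randint(1, 100) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection with a tournament size of 2.
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            # Single-point crossover with 99% probability.
            if rng.random() < 0.99:
                cut = rng.randrange(1, seq_len)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Swap mutation with 2% probability.
            if rng.random() < 0.02:
                i, j = rng.sample(range(seq_len), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# Stand-in fitness for demonstration only; the real objective is the
# targeted heuristic's fitness minus that of its competitors.
best = evolve(fitness=sum, pop_size=30, seq_len=10, generations=20, seed=1)
print(sum(best))
```

Swapping `sum` for a function that packs the sequence with all four heuristics and returns the targeted heuristic's margin recovers the instance-evolution setup described above.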
Objective:
To create 4 balanced datasets, each containing an equal number of instances tailored for FF, BF, WF and NF.
Experimental Setup:
EA experiments were conducted on a High-Performance Cluster with 23 nodes, each comprising an 8-core/16-thread CPU and 64 GB of 2133 MHz memory. The code was implemented in Java.
Configuration:
Dataset Generation Parameters
DS2 performance with the different heuristics
PCA plot of the performance of all heuristics on all 16,000 evolved problem instances
Objective:
To learn the relationship between the instances and their best-performing heuristics.
Experimental Setup:
Deep Learning experiments were conducted on Google Colaboratory with a TPU runtime; the code was implemented in Python (Keras library).
Configuration:
The network structure
The hyper-parameters were optimized manually, since tuning them automatically is a computationally expensive task.
Learning iterations:
- DS1 and DS2: 300
- DS3, DS4 and DS5: 700
Batch size = 32
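The exact topology is given in the network-structure table rather than reproduced here, so the following is only a minimal Keras sketch of the training setup, assuming a single LSTM layer, raw item sizes as input (the sequence length of 120 and hidden size of 64 are invented for illustration), and the four heuristic classes:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

SEQ_LEN = 120     # assumed sequence length (not stated here)
NUM_CLASSES = 4   # BF, FF, WF, NF

model = keras.Sequential([
    layers.Input(shape=(SEQ_LEN, 1)),  # raw item sizes only: NO FEATURES
    layers.LSTM(64),                   # assumed hidden size
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Toy stand-in data; real training uses the evolved, balanced datasets
# (300 iterations for DS1/DS2, 700 for DS3-DS5), with batch size 32.
x = np.random.rand(64, SEQ_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=64)
model.fit(x, y, batch_size=32, epochs=1, verbose=0)
pred = model.predict(x[:1], verbose=0).argmax(axis=-1)
```

The softmax output maps each input sequence to one of the four heuristics, matching the Sequences → Trained LSTM → Predicted Best Heuristic pipeline.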
Training and testing workflow
Classification Kappa on the validation set
Results of best trained model on the test set
Sequences with NO FEATURES → Trained LSTM (Deep Neural Network) → Predicted Best Heuristic