
Reproducing results with YATES

This page shows how to replicate the results from our SOSR 2018 paper.


Setup

The running example in the paper uses the AT&T topology from the Internet Topology Zoo. To experiment with this topology, we need the following inputs: the topology file (data/topologies/AttMpls.dot), the traffic demands (data/demands/AttMpls.txt, passed below as both the actual and the predicted demands), the hosts file (data/hosts/AttMpls.hosts), and the link RTTs (data/rtt/AttMpls.rtt).

See the documentation for inputs and outputs on GitHub for more details.
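
As a quick sanity check, you can verify that these inputs are present, assuming you run the commands on this page from the root of the YATES source tree (the paths are the ones used in the commands below):

ls data/topologies/AttMpls.dot \
   data/demands/AttMpls.txt \
   data/hosts/AttMpls.hosts \
   data/rtt/AttMpls.rtt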

Note: Gurobi is required to reproduce all of the results; see the instructions for installing Gurobi. Without Gurobi, you can still generate a partial set of results by skipping the following options, which require solving an LP:

-mcf, -scalesyn, -scale, -semimcfksp, -ffced
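
For example, a Gurobi-free run that evaluates only ECMP and CSPF might look like the sketch below. The paths and flags are taken from the commands later on this page, and the output name (att-no-lp) is arbitrary; since the demand-scaling options (-scalesyn, -scale) are among the skipped flags, the numbers will likely differ from the paper's.

# Hypothetical LP-free run (no Gurobi needed): ECMP and CSPF only,
# without the -scalesyn/-scale demand scaling used in the paper.
yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -ecmp -cspf \
  -num-tms 24 -budget 4 -simtime 100 -out att-no-lp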

Congestion Analysis

Figure 4 in the paper compares the following TE algorithms in terms of maximum congestion:

  • Equal-Cost Multi-Path (ECMP), using 1 as the link weight.
  • Constrained Shortest Path First (CSPF), using 1 as the link weight.
  • Constrained Shortest Path First (CSPF), using link RTT as the link weight.
  • k-shortest paths (KSP) for selecting a static set of paths (using link RTT as the link weight), combined with a restricted version of MCF for dynamically tuning path weights (KSP+MCF).

To run the experiments using 1 as link weights, run YATES with the following arguments:

yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -ecmp -cspf \
  -num-tms 24 -budget 4 -simtime 100 -scalesyn -scale 1 -out att-hops

To run the experiments using link RTTs as link weights, run YATES with the following arguments:

yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -cspf -semimcfksp \
  -num-tms 24 -budget 4 -simtime 100 -scalesyn -scale 1 -rtt-file \
  data/rtt/AttMpls.rtt -out att-rtt

The output for the first run will be written to data/results/att-hops/MaxCongestionVsIterations.dat, and that for the second run to data/results/att-rtt/MaxCongestionVsIterations.dat.
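
To compare the two runs, you can summarize the worst-case congestion per TE scheme directly from these files. The sketch below assumes each data line is whitespace-separated with the scheme name in the first column and the congestion value in the last column, preceded by a one-line header; inspect the generated files and adjust the column indices if the format differs.

# Hypothetical summary: worst max congestion per scheme, for both runs.
for f in data/results/att-hops/MaxCongestionVsIterations.dat \
         data/results/att-rtt/MaxCongestionVsIterations.dat; do
  echo "== $f"
  awk 'NR > 1 { if ($NF + 0 > worst[$1]) worst[$1] = $NF + 0 }
       END { for (s in worst) printf "  %-20s %.4f\n", s, worst[s] }' "$f"
done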

Overheads

Figure 6 illustrates the overheads of MCF-based TE compared to ECMP in terms of latency and path churn. To perform a similar analysis, we can invoke YATES as:

yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -ecmp -mcf \
  -num-tms 24 -budget 4 -simtime 100 -scalesyn -scale 1 -rtt-file \
  data/rtt/AttMpls.rtt -out overheads

This will generate the results in the data/results/overheads directory. The latency distribution statistics for each TE system and traffic matrix are recorded in data/results/overheads/LatencyDistributionVsIterations.dat, while the path-churn data is recorded in data/results/overheads/TMChurnVsIterations.dat. The analysis shows that the MCF-based approach incurs higher latency and path churn than ECMP.
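
As with the congestion results, a quick way to quantify the churn gap is to average the per-TM churn for each scheme. The sketch below makes the same assumptions about the .dat layout as above (scheme name in the first column, value in the last column, one header line); adapt it to the actual file format.

# Hypothetical summary: average path churn per scheme across the 24 TMs.
awk 'NR > 1 { sum[$1] += $NF + 0; n[$1]++ }
     END { for (s in n) printf "%-20s %.2f\n", s, sum[s] / n[s] }' \
  data/results/overheads/TMChurnVsIterations.dat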

Robustness Analysis

Next, we compare the robustness of two approaches: FFC and KSP+MCF. First, to reproduce the scenario where KSP+MCF runs without any recovery mechanism, invoke YATES as:

yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -semimcfksp \
  -robust -fail-num 1 -budget 2 -simtime 100 -scalesyn -scale 1.5 -rtt-file \
  data/rtt/AttMpls.rtt -out robust_no_rec

YATES will iterate over all single-link failure scenarios and record the normalized throughput in data/results/robust_no_rec/TotalThroughputVsIterations.dat.

Next, we implement a recovery mechanism for KSP+MCF: whenever a link fails, we stop using the affected paths and redistribute traffic over the remaining paths between each source and destination. We also run FFC with this recovery mechanism, as FFC is designed to handle failures. To reproduce this scenario, invoke YATES as:

yates data/topologies/AttMpls.dot data/demands/AttMpls.txt \
  data/demands/AttMpls.txt data/hosts/AttMpls.hosts -ffced -semimcfksp \
  -robust -fail-num 1 -budget 2 -simtime 100 -scalesyn -scale 1.5 -rtt-file \
  data/rtt/AttMpls.rtt -lr-delay 0 -out robust_rec

This will record the normalized throughput statistics in data/results/robust_rec/TotalThroughputVsIterations.dat.
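
To see the effect of the recovery mechanism, you can compare the worst-case (minimum) normalized throughput per scheme across the failure scenarios in the two output directories. As before, this is a sketch that assumes whitespace-separated lines with the scheme name first and the throughput value last, plus a one-line header; adapt it to the actual file layout.

# Hypothetical comparison: worst-case normalized throughput per scheme,
# without recovery (robust_no_rec) vs. with recovery (robust_rec).
for f in data/results/robust_no_rec/TotalThroughputVsIterations.dat \
         data/results/robust_rec/TotalThroughputVsIterations.dat; do
  echo "== $f"
  awk 'NR > 1 { v = $NF + 0; if (!($1 in min) || v < min[$1]) min[$1] = v }
       END { for (s in min) printf "  %-20s %.3f\n", s, min[s] }' "$f"
done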