
Once is Never Enough: Foundations for Sound Statistical Inference in Tor Network Experimentation

Process Overview

This page provides some general details about our experimental process, including the exact versions of software we used, how we stage and generate Tor models, and how we process the output.

Generally, our research follows this process:

  1. stage and generate some Tor network models using our methods from §3;
  2. simulate the models in Shadow;
  3. collect and process Shadow’s output; and
  4. visualize the processed data.

Unfortunately, the raw data files produced by Shadow are too large to distribute here. Instead, we provide the Tor model configuration bundles that are used as input to Shadow, and Shadow’s output data after an intermediate processing step. The data that we provide can be used to reproduce the figures presented in the paper.

Please note that all of these steps have since been simplified and are handled directly by the latest version of TorNetTools. We provide details of the more manual process that we followed for posterity.

Software Versions

The contributions we made as part of our work were merged as described on the main page. However, we used slightly different versions of these tools when running experiments and collecting results for this paper. We list here the exact commits that we used during our research:

Package Setup

To run Shadow simulations, you will need to install Shadow and the other tools listed above following the respective installation guides distributed with each tool. To run the analysis explained on this site, the following packages are needed when using Ubuntu 18.04 LTS:

sudo apt-get install \
  openssl libssl-dev libevent-dev build-essential automake zlib1g zlib1g-dev \
  python3-venv dstat pypy texlive-latex-extra texlive-fonts-recommended dvipng cm-super

Python Setup

We used a variety of python modules to simplify our processing and visualization steps, which we installed into a virtual environment as follows:

python3 -m venv ~/venvs/nevenufenv
source ~/venvs/nevenufenv/bin/activate
pip3 install numpy scipy matplotlib

git clone https://github.com/shadow/tornettools
cd tornettools
git checkout -b nevenuf c126160d5f2e16bff30e623b3a9f830e43801b5c
pip3 install -r requirements.txt
pip3 install -I .
cd ..

git clone https://github.com/shadow/tgen
cd tgen
git checkout -b nevenuf 8825a1500cda63e81499be95a60d2783267c39cd
cd tools
pip3 install -r requirements.txt
pip3 install -I .
cd ../..

git clone https://github.com/shadow/oniontrace
cd oniontrace
git checkout -b nevenuf 6c467177306226bcaa82f73be4da388916f81198
cd tools
pip3 install -r requirements.txt
pip3 install -I .
cd ../..
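
As a quick sanity check that the tools were installed into the virtual environment correctly, each command-line entry point should print its usage text without errors (a minimal sketch; the entry-point names follow from the checkouts above, where the TorNetTools command is named tornetgen, as used in the generation script below):

source ~/venvs/nevenufenv/bin/activate

# each of these should print usage/help text and exit cleanly
tornetgen --help
tgentools --help
oniontracetools --help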

Tor Model Configs

Generating Tor models involves two phases: staging and generation.

Staging

We first run the staging phase by downloading Tor consensus documents, server descriptors, and Tor performance measurement results, and processing them as described in the TorNetTools staging instructions. Here we provide the staging files that we produced, covering Tor network state during the period from 2019-01-01 through 2019-01-31:
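
For reference, the raw inputs covering that period can be fetched from CollecTor before running the staging step; the sketch below assumes CollecTor's standard monthly archive layout, and the staging command itself (and its exact arguments) is documented in the pinned TorNetTools checkout:

# Tor network data for January 2019 from CollecTor (URLs assume the standard archive layout)
wget https://collector.torproject.org/archive/relay-descriptors/consensuses/consensuses-2019-01.tar.xz
wget https://collector.torproject.org/archive/relay-descriptors/server-descriptors/server-descriptors-2019-01.tar.xz
wget https://collector.torproject.org/archive/onionperf/onionperf-2019-01.tar.xz

tar xf consensuses-2019-01.tar.xz
tar xf server-descriptors-2019-01.tar.xz
tar xf onionperf-2019-01.tar.xz

# next, run the staging step of the pinned TorNetTools checkout on the extracted
# directories (see its staging instructions) to produce the relayinfo/userinfo
# staging files used by the generation step below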

Generation

Given the above staging files, we can generate any number of Tor models. We used a script similar to the one below to produce all of the Tor model configuration bundles used in §4, §5, and §6 of the paper, modifying the n, l, and v parameters (in the for loops) according to the specific requirements of each section. As written, the script below would generate far more networks than necessary (the full loops would produce 4 × 2 × 100 = 800 configurations).

source ~/venvs/nevenufenv/bin/activate
base=/home/rjansen/tors/tor-0.4.3.6
export PATH=${PATH}:${base}/src/core/or:${base}/src/app:${base}/src/tools

OUT=configs

# n: network scale, l: load scale, v: index of the independently sampled network
for n in 0.01 0.1 0.3 1.0
do
  for l in 1.0 1.2
  do
    for v in {1..100}
    do
      tornetgen generate \
        relayinfo_staging_2019-01-01--2019-02-01.json \
        userinfo_staging_2019-01-01--2019-02-01.json \
        tmodel-ccs2018.github.io \
        --network_scale ${n} \
        --load_scale ${l} \
        --prefix ${OUT}/shadowtor-${n}-${v}-${l}-config \
        --atlas /storage/rjansen/share/atlas.201801.shadow113.noloss.graphml.xml \
        --events BW,ORCONN,CIRC,STREAM
    done
  done
done

Notes:

Unfortunately, we did not record the seeds that were used when generating the configs, so we cannot deterministically recreate them using TorNetTools. (The latest version of TorNetTools has corrected this oversight.)

The model validation, significance analysis, and performance analysis pages describe the generated Tor model configuration bundles included in this repository.

Simulating the Models in Shadow

We used the following commands to run the Shadow experiments. This assumes that Shadow, TGen, OnionTrace, and shadow-plugin-tor have been installed according to the installation instructions of each tool (see the software version links above).

The commands below should be run inside of a Tor model configuration bundle directory, such as shadowtor-0.1-1-1.0-config.

# record system-wide resource usage (CPU, memory, swap, etc.) once per second
dstat -cmstTy --fs --output dstat.log > /dev/null &
dstat_pid=$!

# record memory usage once per second, bracketed by start and end timestamps
date > free.log
free -w -b -l -s 1 >> free.log &
free_pid=$!

# run the simulation with 32 worker threads, compressing the log as it is written
shadow -w 32 shadow.config.xml | xz -T 2 > shadow.log.xz

# stop the resource monitors once the simulation completes
kill ${free_pid}
kill ${dstat_pid}
date >> free.log
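
While a simulation is running, the resource logs written above can be used to keep an eye on progress and memory pressure; a small illustrative example:

# watch memory usage while Shadow runs (Ctrl-C to stop watching)
tail -f free.log

# check how much compressed simulation log has been written so far
ls -lh shadow.log.xz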

Processing Shadow’s Output

Shadow’s output is processed in multiple steps: parsing the log files, extracting visualization data from the parsed output, and plotting the results.

Parsing

We use the TGenTools and OnionTraceTools python modules (installed as described above) to parse the respective log files.

The following commands should be run inside of a model configuration directory, such as shadowtor-0.1-1-1.0-config.

source ~/venvs/nevenufenv/bin/activate

# compress the resource monitoring logs
xz -T 0 -e -9 free.log
xz -T 0 -e -9 dstat.log

# parse the TGen and OnionTrace logs from all simulated hosts
tgentools parse -m 0 shadow.data/hosts
oniontracetools parse -m 0 shadow.data/hosts -e ".*oniontrace\.1001\.log"
# decompress and parse the Shadow log
xzcat shadow.log.xz | pypy /home/rjansen/shadow/src/tools/parse-shadow.py -m 0 -

Extraction

In this phase, we extract visualization data from the output of the parsing step in order to simplify our plotting scripts. The extraction scripts are provided in the process directory in this repository.

source ~/venvs/nevenufenv/bin/activate
dir=shadowtor-0.1-1-1.0-config

python3 extract_oniontrace_cbt.py -i ${dir} -o data
python3 extract_oniontrace_tput.py -i ${dir} -o data
python3 extract_resource_usage.py -i ${dir} -o data
python3 extract_tgen_errrate.py -i ${dir} -o data
python3 extract_tgen_rtt.py -i ${dir} -o data
python3 extract_tgen_ttb.py -i ${dir} -o data
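
When many configuration bundles have been simulated, the same extraction can be looped over all of them; a minimal sketch, assuming the completed bundle directories share a common parent directory:

source ~/venvs/nevenufenv/bin/activate

# run every extraction script over every completed simulation bundle
for dir in shadowtor-*-config
do
  for name in oniontrace_cbt oniontrace_tput resource_usage tgen_errrate tgen_rtt tgen_ttb
  do
    python3 extract_${name}.py -i ${dir} -o data
  done
done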

The model validation, significance analysis, and performance analysis pages describe the extracted data included in this repository.

Plotting

The model validation, significance analysis, and performance analysis pages describe the scripts we used to produce the figures for the paper.