
Once is Never Enough: Foundations for Sound Statistical Inference in Tor Network Experimentation

Overview

This is the landing page for the following research publication:

Once is Never Enough: Foundations for Sound Statistical Inference in Tor Network Experimentation
Proceedings of the 30th USENIX Security Symposium (Sec 2021)
by Rob Jansen, Justin Tracey, and Ian Goldberg
[Conference version] [Extended version]

If you reference this paper or use any of the data or code provided on this site, please cite the paper. Here is a BibTeX entry for LaTeX users:

@inproceedings{neverenough-sec2021,
  author = {Rob Jansen and Justin Tracey and Ian Goldberg},
  title = {Once is Never Enough: Foundations for Sound Statistical Inference in {Tor} Network Experimentation},
  booktitle = {30th USENIX Security Symposium (Sec)},
  year = {2021},
  note = {See also \url{https://neverenough-sec2021.github.io}},
}

Methods and Tools

In §3 of our paper, we describe our approach for producing models that accurately represent the composition and traffic characteristics of the public Tor network. In §4 of our paper, we describe enhancements we made to the Shadow simulator to improve its performance, scalability, and accuracy. In §5 of our paper, we describe analysis methods that allow us to make statistical statements about the results of Tor experiments.

We have distilled our contributions into a set of tools, which have been merged upstream to the Shadow project so that the community may benefit from our work.

Whether you are trying to reproduce our results or performing Tor research of your own, the best place to start is with our TorNetTools artifact. This tool implements our modeling and analysis methods and will help you run Tor simulations in Shadow using best practices, guiding you through each phase of experimentation.

The tool includes an extensive help menu and the GitHub page provides more details to help you get started.

Experiments

The general experimental process that we used during our research is described on the process page. The specific Tor model configuration bundles and visualization data for each section of the paper are described separately as follows.

Model Validation

In §4 of our paper, we run experiments to evaluate our modeling tools and Shadow improvements. More information about reproducing the analysis and graphs (in Figures 1, 2, and 3, and Table 2 in the paper) is available on the model validation page.

Significance Analysis

In §5 of our paper, we describe an analysis methodology that allows us to compute confidence intervals over a set of simulation results, in order to make more informed inferences about Tor network performance. More information about reproducing the analysis and graphs (in Figures 4, 5, and 6 in the paper) is available on the significance analysis page.
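To give a flavor of why repeated simulations matter, here is a minimal, illustrative sketch (not our exact methodology; the paper and the significance analysis page describe the real procedure) of building a confidence interval over a per-run summary statistic across several independent simulation runs, using only the Python standard library. The data is synthetic, and the function name `run_ci` is our own invention for this example:

```python
import random
import statistics

def run_ci(per_run_values, z=1.96):
    """Approximate 95% confidence interval for the mean of a per-run
    summary statistic, computed across independent simulation runs.
    (Normal approximation; with very few runs, a Student-t critical
    value would be more appropriate.)"""
    n = len(per_run_values)
    mean = statistics.mean(per_run_values)
    sem = statistics.stdev(per_run_values) / n ** 0.5
    return mean - z * sem, mean + z * sem

# Illustrative stand-in for simulation output: each "run" yields a
# sample of client download times (seconds); the numbers are synthetic.
random.seed(7)
runs = [[random.lognormvariate(0.5, 0.4) for _ in range(500)]
        for _ in range(10)]

# Summarize each run (here: the median download time), then build a
# confidence interval over the per-run summaries rather than trusting
# any single run in isolation.
medians = [statistics.median(r) for r in runs]
lo, hi = run_ci(medians)
print(f"median download time: {statistics.mean(medians):.3f}s, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

The point of the sketch is the last step: a single run gives one median with no sense of run-to-run variation, while the interval over ten runs quantifies how much that estimate could move between otherwise identical experiments.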

Performance Analysis Case Study

In §6 of our paper, we present the results of a case study of the effects of an increase in Tor usage on Tor client performance. More information about reproducing the analysis and graphs (in Figures 7 and 8 in the paper, and in Figures 9, 10, 11, and 12 in the full version) is available on the performance analysis page.