\newcommand{\ie}{\emph{i.e.}}
\newcommand{\eg}{\emph{e.g.}}
\newcommand{\etc}{\emph{etc.}\xspace}
\newcommand{\PP}{\mathsf{p}}
\newcommand{\EE}{\mathcal{E}}
\section{Modeling Methodology}
\label{sec:model}
The basic model underlying our comparative study of mix network topologies
is \emph{mixing as probabilistic permutation}. At the cost of a
few simplifying but reasonable assumptions about the distribution of message
traffic in the network, we obtain a tractable Markov chain model,
and use a fully automated probabilistic model checking technique to
compute mixing probability distributions for different network topologies and
configurations. We use the \emph{entropy} of each topology's respective
distribution as our comparison metric, in the spirit of~\cite{Diaz02,Serj02}.
\subsection{Mixing as permutation}
Consider a single batch of $N$ messages entering the mix network together.
We can view each message $m_1,\ldots,m_N$ as occupying
a certain position in a (virtual) input array of length $N$.
Suppose the adversary targets a particular
message $m$ in position $i$. Without loss of generality, assume that
$i=1$ (we can always re-number the input array so that the targeted
message is in the first slot).
Having passed the network, all $N$ messages re-appear and may be observed
by the adversary again. Of course, if some of the network nodes have
been compromised by the adversary, the adversary will have access to their
observations, too. Let $m'_1,\ldots,m'_N$ be the (virtual) output array.
Due to the mixing performed by the network, it may or may not be the case
that $m'_i=m_i$, \ie, the messages have been probabilistically permuted
by the network. We will refer to the discrete probability
distribution $\PP_1\ldots\PP_N$, where $\PP_i=\mathit{Prob}(m'_i=m)$,
as the \emph{mixing distribution} of the network.
%\begin{equation}
%\label{maindist}
%\begin{array}{l}
%\PP_1\quad\ldots\quad\PP_N \\
%\mbox{where}\ \PP_i=\mathit{Prob}(m'_i=m)
%\end{array}
%\end{equation}
Informally, each $\PP_i$ is the probability that the targeted
message $m$ re-appears in the $i$th position of the output buffer.
In our basic model, we assume that the network does not lose messages
(this restriction is not critical and may be relaxed if necessary).
Therefore, $\sum_{1 \leq i \leq N}\PP_i = 1$, and the $\PP_i$ form a proper
discrete probability distribution. Following~\cite{Serj02}, we calculate
the \emph{entropy} of this distribution as
\[
\EE = - \sum_{1 \leq i \leq N} \PP_i \log_2(\PP_i)
\]
Very informally, entropy is a measure of ``randomness'' in a distribution.
Other things being equal, network topologies that provide mixing
distributions associated with higher entropy values are considered
preferable.
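To illustrate the metric, the entropy above can be computed directly from the
probabilities $\PP_1,\ldots,\PP_N$. The following Python sketch (our own
illustration; the distributions are hypothetical, not taken from any measured
topology) contrasts a perfectly uniform mixing distribution, which attains the
maximal entropy $\log_2 N$, with a skewed one:

```python
import math

def entropy(dist):
    """Shannon entropy (base 2) of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Uniform mixing over N = 8 output positions: maximal entropy, log2(8) = 3 bits.
uniform = [1.0 / 8] * 8
# A skewed distribution, as might arise when mixing is partially defeated.
skewed = [0.65] + [0.05] * 7

print(entropy(uniform))  # 3.0 bits
print(entropy(skewed))   # strictly less than 3 bits
```

By convention, terms with $\PP_i = 0$ contribute nothing to the sum, which the
`if p > 0` guard reflects.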
\subsection{Overview of the model}
We use the standard techniques of probabilistic verification and model the
mix network as a discrete-time Markov chain. Formally, a \emph{Markov
chain} consists of a finite set of states $S$, an initial state $s_0$,
a transition probability function $T:S\times S\rightarrow[0,1]$ such that
$\forall s\in S\ \sum_{s'\in S} T(s,s')=1$, and a labeling function.
In our model, the states of the Markov chain will represent the position
of the targeted message $m$ in the (virtual) buffer of $N$ messages as
$m$ moves through the network. The initial state $s_0$ corresponds
to the message being in the first slot of the input array
prior to entering the mix network.
Every probabilistic state transition $s \rightarrow s'$ is associated with
$m$ passing through a single mix within the network. Intuitively, $s$
can be interpreted as $m$'s position before passing through the mix,
and $s'$ as its position afterwards.
For the purposes of computing the mixing distribution $\PP_i$, we are
interested in deadlock states, \ie, those corresponding to the situation
in which $m$ has passed through all mixes in its path and exited the
mix network with no further transitions possible. Suppose a special
predicate $\mathit{done}$ is true in such states. Then $\PP_i$ is simply
$\mathit{Prob}[\mathcal{U}(s=i\ \wedge \mathit{done})]$ evaluated
in the initial state $s_0$. (Informally, formula $\mathcal{U}\varphi$
holds if $\varphi$ eventually becomes true.)
We use a probabilistic model checker called PRISM~\cite{Pri} to compute
these probabilities automatically. We omit
the details of the underlying model checking algorithms; a detailed
explanation of how probabilistic model checking is used to analyze
randomized routing protocols can be found in~\cite{S04}.
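PRISM computes these reachability probabilities symbolically; to build
intuition for what is being computed, the same quantity can be obtained for a
small chain by propagating the state distribution forward until all
probability mass settles in the absorbing (\emph{done}) states. The sketch
below is our own illustration over a toy five-state chain, not PRISM code or
the actual network model:

```python
# Toy discrete-time Markov chain: states 0..2 are transient (hops),
# states 3..4 are absorbing ("done", message has exited in slot i).
# T is a row-stochastic transition matrix: T[s][t] = Prob(s -> t).
T = [
    [0.0, 0.5, 0.5, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.7, 0.3],
    [0.0, 0.0, 0.0, 0.4, 0.6],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
]

def absorption_probs(T, s0, steps=1000):
    """Propagate the state distribution from s0; the mass on absorbing
    states converges to Prob[eventually (s = i and done)]."""
    dist = [0.0] * len(T)
    dist[s0] = 1.0
    for _ in range(steps):
        dist = [sum(dist[s] * T[s][t] for s in range(len(T)))
                for t in range(len(T))]
    return dist

p = absorption_probs(T, s0=0)
print(p[3], p[4])  # 0.55 and 0.45, up to floating-point rounding
```

For the small chains in this toy example a few iterations suffice; PRISM
instead solves the corresponding linear equations exactly and scales to the
full network models.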
\subsection{Single-mix model}
\label{singlemix}
Consider a single mix receiving a batch of $K$ messages, including the
targeted message $m$. Assume an uncompromised mix that
collects all $K$ messages before distributing them to their
respective destinations. In this case, the mixing performed by the
mix can be interpreted as a random permutation of a virtual buffer of size $K$.
In particular, the targeted message $m$ appears in any of the $K$
output positions with equal probability after passing through the mix.
Therefore, each honest mix can be modeled by the simple Markov chain shown below
(recall that state $s$ represents the current position of message $m$,
and let $t$ be the sequential number of the current hop). A compromised
mix, by contrast, performs no mixing at all, and thus does not change
the position of any message it processes.
\begin{figure}
\begin{minipage}[ht]{5.75cm}
\includegraphics[scale=0.3]{goodmix}
\caption{Model of a good mix}
\end{minipage}
\hfill
\begin{minipage}[ht]{5.75cm}
\includegraphics[scale=0.3]{badmix}
\caption{Model of a bad mix}
\end{minipage}
\hfill
\end{figure}
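The two transition rules can be sketched in code as follows (an illustrative
simulation with names of our own choosing, not the PRISM model itself): a good
mix maps the targeted message's position to a uniformly random slot of its
buffer, while a bad mix leaves the position unchanged.

```python
import random

def good_mix(position, K, rng=random):
    """Honest mix: the targeted message leaves in any of the K output
    slots of the mix's buffer with equal probability."""
    return rng.randrange(K)

def bad_mix(position, K):
    """Compromised mix: no mixing; the message's position is preserved."""
    return position

# One hop through an honest mix with a buffer of K = 16 messages.
pos = good_mix(0, 16)
assert 0 <= pos < 16
# A compromised mix never moves the message.
assert bad_mix(5, 16) == 5
```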
\subsection{Network model}
We consider several mix network topologies, and compare them under various
assumptions about the density of hostile mixes in the network. Instead of
assuming a fixed number of hostile mixes, in each scenario we will assume
a fixed \emph{probability} that a randomly selected mix is hostile.
For each topology, the behavior of a single node is modeled as
in Section~\ref{singlemix}.
The main difference between topologies
is how the targeted message moves through the network, resulting in
different mixing distributions $\PP_1\ldots\PP_N$.
%\subsubsection{Assumptions.}
We assume the adversary observes the edge of the network and thus
knows the first mix chosen by the targeted message---so the
randomness of mix selection is ignored for the first hop. Formally,
we make the probability $\PP_i$ conditional on the selection of a particular
first mix. Instead of computing
$\mathit{Prob}[\mathcal{U}(s=i\ \wedge \mathit{done})]$,
we compute
$\mathit{Prob}[\mathcal{U}(s=i\ \wedge \mathit{done}\ |\
\mbox{mix $x$ was selected as entry point}
)]$.
Note that we must consider two sources of uncertainty. The first is the
distribution of compromised nodes in the network, which we address by
assuming a fixed probability that any given node is bad. Thus we are
calculating \emph{prior} distributions---effectively the average of all
possible occurrences of compromised nodes in the network. (In contrast,
\cite{Diaz02,Serj02} consider \emph{posterior} distributions, where
certain nodes are known to be bad). The second uncertainty is the users'
selection of message routes, which we address by treating the message
load on each internal link within the network as exactly equal to
the statistically expected load given a particular network topology.
This assumption is approximated with very high probability when the
number of messages in a single batch is significantly higher than the
number of network nodes (see Section~\ref{subsec:average-actual} for
discussion). %, and randomness
%involved in message routing is obtained from unbiased coin tosses in
%the same way for all messages.
For example, suppose there are four mixes in the first layer of the
network, and the batch size is 128. We will analyze the average-case
behavior of the network, \ie, we will assume that each of the mixes
receives exactly 32 messages, even though it is possible (albeit highly
improbable) that in some batch all 128 senders will randomly choose the
same entry mix.
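The gap between the expected load and such worst-case deviations is easy to
quantify. A quick sketch (again our own illustration, with hypothetical
parameter names) samples the entry-mix choices for one batch, compares the
empirical loads to the expected load of 32, and computes the probability that
all 128 senders pick the same mix:

```python
import random
from collections import Counter

MIXES, BATCH = 4, 128
rng = random.Random(0)  # fixed seed for reproducibility

# Sample one batch: each sender picks an entry mix uniformly at random.
loads = Counter(rng.randrange(MIXES) for _ in range(BATCH))
print(dict(loads))  # each count is close to the expected BATCH / MIXES = 32

# Probability that all senders choose the same entry mix:
# MIXES * (1 / MIXES) ** BATCH = (1/4) ** 127, which is negligible.
p_all_same = MIXES * (1.0 / MIXES) ** BATCH
print(p_all_same)
```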
% An alternative to this is to assume instead that the adversary cannot
% observe communication between nodes that he does not control. Under
% either assumption,
Under the equal loading assumption, we treat the size of the input/output
buffer for each mix (see Section~\ref{singlemix}) as a constant which is
determined only by batch size and network topology, and is independent
of the actual random distribution of a given batch through the network.
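Under these assumptions, the end-to-end computation can be sketched for the
simplest case of a cascade, where every mix processes the whole batch of $N$
messages. The Python fragment below (an illustrative closed form under the
per-hop model above, not the general PRISM model) derives the mixing
distribution and its entropy when each mix is independently compromised with
probability \texttt{p\_bad}: the targeted message stays in its slot only if
every hop was bad, and otherwise ends up in a uniformly random slot.

```python
import math

def cascade_distribution(N, hops, p_bad):
    """Mixing distribution P_1..P_N for a cascade of `hops` mixes over a
    batch of N messages, averaging over placements of compromised mixes
    (each mix is independently bad with probability p_bad).
    The target remains in slot 1 only if all hops were bad; a single good
    hop redistributes it uniformly over the N slots."""
    stay = p_bad ** hops            # every hop compromised
    rest = (1.0 - stay) / N         # at least one good hop -> uniform
    return [stay + rest] + [rest] * (N - 1)

def entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

dist = cascade_distribution(8, 3, 0.5)
print(entropy(dist))  # close to, but strictly below, log2(8) = 3 bits
```

With \texttt{p\_bad = 0} the distribution is uniform and the entropy is
maximal; with \texttt{p\_bad = 1} the message never moves and the entropy is
zero, matching the intuition behind the comparison metric.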
Appendix~\ref{sec:walk-through} provides a walk-through of calculating
entropy for each topology, to help the unfamiliar reader build intuition
about our assumptions and results.