\documentclass[10pt]{article}
\usepackage{amsfonts,amsthm,amsmath,amssymb}
\usepackage{array}
\usepackage{epsfig}
\usepackage{fullpage}
\usepackage[table]{xcolor}
\usepackage{xspace}
\usepackage{url}
\usepackage{hyperref}
\usepackage{graphicx}
% packages for the graphs:
%\usepackage{tikz}
%\usepackage{tikz-qtree}
\begin{document}
\input{preamble.tex}
\lecture{9: {\large Multi-Party Computation}}{December 21, 2017}{Nir Bitansky}{Noam Nissan and Omri Shmueli}
\section{Previously on Foundations of Crypto}
Having established the most basic crypto objective --- secure communication --- we now move on to explore more general cryptographic goals. As a preliminary step, we've learned about zero-knowledge (ZK) proofs, and saw that (under minimal computational assumptions) we can prove any NP statement in ZK. While the concept of ZK proofs is well motivated on its own,
we will see how ZK proofs can be applied more generally to enforce (semi-)honest behavior.
\section{Multi-Party Computation}
Today, we will discuss the problem of {\em multi-party computation} (MPC). Here we would like to design a protocol that will allow a set of parties $1,\dots,m$ to compute a joint function
$$
(y_1,\dots,y_m) = f(x_1,\dots,x_m)
$$
of their private inputs $x_i \in \zo^n$, so that each party $i$ will obtain her respective output $y_i$. However, we'd like the protocol to be secure even if some of the parties may be malicious.
\begin{center}
{\em But what does it mean for the protocol to be ``secure''?}
\end{center}
Naturally, security can reflect various requirements on what the protocol should guarantee in the presence of malicious parties. To get some sense of the requirements we might want to capture, let's consider a specific example of {\em auctions}. Here there is an auctioneer party, say $1$, and each of the parties $2,\dots,m$ has a bid $x_i\geq 0$. We'd like that at the end of the auction protocol, party $1$ will learn the identity of the highest bidder, as well as her bid. The rest of the parties should learn whether they have won or not.
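To make the auction functionality concrete, here is a small Python sketch of the function $f$ computed in this example. The function and variable names below are ours, chosen for illustration only; ties are broken in favor of the lowest-numbered bidder.

```python
# Illustrative sketch of the auction functionality f.
# Party 1 is the auctioneer; parties 2, ..., m hold bids x_i >= 0.

def auction(bids):
    """bids[k] is the bid of party k + 2 (party numbering starts at 2).

    Returns the auctioneer's output (winner identity and winning bid)
    and, for each bidder, a single bit indicating whether she won.
    """
    winner = max(range(len(bids)), key=lambda k: bids[k])
    auctioneer_output = (winner + 2, bids[winner])
    bidder_outputs = [k == winner for k in range(len(bids))]
    return auctioneer_output, bidder_outputs
```

Note that the outputs are deliberately asymmetric: only the auctioneer learns the winning bid, while each bidder learns a single bit.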
With this example in mind, we can think of several natural requirements:
\begin{itemize}
\item {\bf Correctness:} We'd like to ensure that the received outputs indeed correspond to computing the prescribed auction function $f$, on some set of inputs, and not a different function, say $f^*$ that always announces $2$ to be the winner with a bid of 100 Shekel.
\item {\bf Input Independence:} Computing the function $f$ on {\em some} inputs isn't enough; we would also like to ensure that parties cannot choose their inputs based on those of other parties. In particular, $2$ shouldn't be able to set his input $x_2$ to $\max \set{x_i\pST i\geq 3}$.
\item
{\bf Privacy:} Parties may not want other parties to learn what their bid was.
\item
{\bf Fairness:} Malicious parties should not be able to prevent the auction from terminating if they don't like the result.
\end{itemize}
%
One could think of other security requirements for such a protocol. We will want one definition that can capture all such requirements. We've already faced a similar (but more specific) challenge in the context of ZK, where we tried to capture {\em all the things that a verifier can learn from a proof}. Our definition will indeed follow a similar approach.
\paragraph{Defining MPC via the Real-Ideal principle.} Specifically, we would like our definition to follow the real-ideal principle:
\begin{center}
{\em The adversary's effect on the real protocol should not be worse than what it could be in an ideal protocol where the function is computed by a trusted party.}
\end{center}
\begin{figure}[h!]
\centering
\includegraphics [width=15cm, trim=1cm 4cm 1cm 3cm, clip=true]{ideal_vs_real.pdf}
\caption{On the left is the ideal world, where a trusted party gets all inputs $x_j$, computes $f$, and returns corresponding outputs $y_j$ to each party. On the right we have the real world, where we do not assume the existence of a trusted party, and $f$ is computed through interaction.}
\end{figure}
%
As in the case of ZK, we will formalize this principle using the simulation paradigm. There are actually various ways to do this, with different nuances. We will focus on a rather simple definition that already captures the essence of MPC. In particular, we shall assume that the honest parties communicate through ideally secure channels; that is, the adversary only controls malicious parties, and does not control the communication between honest parties. This simplifying assumption can in principle be removed using tools like encryption and digital signatures (albeit there are some subtleties to be dealt with, such as synchrony).
\medskip\noindent
In what follows, $m=m(n)$ is a polynomially bounded function, and $f:(\zo^{n})^m \rightarrow (\zo^{*})^m$ is a poly-time computable, possibly randomized function. An $m$-party interactive protocol for $f$ is given by a PPT algorithm $\Pi$ that, at each round, each party $i$ can use to compute its next message in the protocol (given the messages it has received so far and the randomness it has used).
\begin{definition}[MPC (A Slightly Too Strong Definition)]\label{def:MPC}
An $m$-party protocol $\Pi$ for $f$ is secure if for any n.u. PPT $A=\set{A_n}$, there exists a n.u. PPT $S=\set{S_n}$ such that
$$
\set{Real_{A}(n,C,\vec x)}_{n,C,\vec x} \approx_c \set{Ideal_{S}(n,C,\vec x)}_{n,C, \vec x}\enspace,
$$
where $n\in\Nat$, $C\subseteq [m]$ (is the subset of corrupted parties), $\vec x=x_1,\dots,x_m\in \zo^{n}$ (are the inputs), and $Real, Ideal$ are distributions on outputs $\vec y = y_1,\dots,y_m$ defined as follows:
\begin{itemize}
\item $Real_A(n,C,\vec x):$ Each honest party $i\in [m]\setminus C$, follows the protocol $\Pi$ with input $x_i$, to obtain an output $y_i$. For each corrupted $j\in C$, the adversary $A_n$, obtains $x_j$, and throughout the protocol chooses the messages for the party as well as their output $y_j$.
\item $Ideal_S(n,C,\vec x):$ For each corrupted $j\in C$, the simulator $S_n$ obtains $x_j$, chooses a possibly new input $x_j'$, and submits it to the trusted party. Each honest party $i\in [m]\setminus C$ submits her input $x'_i=x_i$ to the trusted party, which computes $(y'_1,\dots,y'_m) \gets f(x_1',\dots,x_m')$ and returns $y'_j$ to each party $j$. For each corrupted $j\in C$, $S_n$ may arbitrarily choose the final output $y_j$; for the honest parties, $y_i=y'_i$.
\end{itemize}
\end{definition}
\paragraph{Remarks:}
\begin{itemize}
\item {\bf Corrupted Parties' Outputs.} The corrupted parties can produce an output that arbitrarily (but efficiently) depends on the adversary's view of the protocol's execution. An equivalent definition that is perhaps more intuitive (but involves more notation) is that the real and ideal distributions include the outputs of the honest parties, as well as the adversary's view of the protocol, including all the messages that she sees during the protocol and her randomness.
\item {\bf Universal Simulation.} As was the case for ZK, a common stronger definition requires that there is one {\em universal PPT simulator} $S$ that can simulate any adversary $A$, given the code of $A$ as auxiliary input. This definition is very useful when composing different protocols together. For now, we will continue to ignore this distinction.
\end{itemize}
%
\noindent It is instructive to try and convince yourself that the above definition indeed captures the specific properties mentioned above in the context of auction protocols. For instance, convince yourself that for any bid $x_2$ for party $2$ and any two bids $x_3