This repository archives the code used in the paper *Post-Quantum Electronic Identity: Adapting OpenID Connect and OAuth 2.0 to the Post-Quantum Era*.
We used Docker to containerize our implementation of OpenID Connect's three roles: the OpenID Connect Provider (`op`), the Relying Party (`rp`), and the User Agent (`user_agent`).
Run `git submodule init` and `git submodule update` to download the required submodules.
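If you are cloning the repository from scratch, the same result can usually be obtained in one step; the URL below is a placeholder:

```sh
# Placeholder URL -- substitute the actual location of this repository
git clone --recurse-submodules https://github.com/<org>/<repo>.git
```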
You need to have at least `docker` and `docker compose` to run our realistic use case. If you want to reproduce the results from our paper locally (i.e. ignoring latency), you also need `mergecap`, `gnuplot` and `traceroute`. Finally, to reproduce our tests in real-world conditions, you will need to rent Amazon EC2 instances (other vendors should work fine) and also have `ssh` installed.
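As a convenience, on a Debian/Ubuntu host the extra tools can typically be installed as sketched below; the package names are our assumption (in particular, `mergecap` usually ships with the Wireshark command-line utilities), and Docker itself is best installed following its official documentation:

```sh
# Assumed Debian/Ubuntu package names; adjust for your distribution.
sudo apt-get update
sudo apt-get install -y gnuplot traceroute wireshark-common   # wireshark-common provides mergecap
```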
Parameters for the simulations are set through environment variables. Relevant variables to keep in mind:
- `TLS_SIGN`: The signature algorithm used in the TLS handshake. The available options are `rsa`, `ecdsa`, `dilithium2`, `dilithium3`, `dilithium5`, `falcon512`, `falcon1024`, `sphincsshake256128fsimple`, `sphincsshake256192fsimple`, and `sphincsshake256256fsimple`. Defaults to `rsa`;
- `JWT_SIGN`: The signature algorithm used to sign the access token, refresh token and the ID token. The available options are the same as for `TLS_SIGN`: `rsa`, `ecdsa`, `dilithium2`, `dilithium3`, `dilithium5`, `falcon512`, `falcon1024`, `sphincsshake256128fsimple`, `sphincsshake256192fsimple`, and `sphincsshake256256fsimple`. Defaults to `rsa`;
- `OP_IP`: The IP address of the OpenID Connect Provider. Defaults to `op` (the container name);
- `RP_IP`: The IP address of the Relying Party (i.e. the client, as per the OAuth 2.0 nomenclature). Defaults to `rp`;
- `REPEAT`: The number of times the test will be repeated. If it is `> 1`, then an extra test is added as a cold start and its timing is removed from the result set. Defaults to `1`.
Less relevant variables:

- `SUBJECT_ALT_NAME_TYPE`: the type of the x509 Subject Alternative Name; default value is `DNS`. In local tests we need to use `DNS` because we use the hostnames `op` and `rp`, which are not valid IP addresses;
- `LOG_LEVEL`: how verbose the output is. Options are `DEBUG`, `INFO`, `WARNING`, `ERROR`, and `CRITICAL`. Defaults to `CRITICAL` (i.e. not showing anything);
- `DELAY_START`: the delay in seconds before starting the first test. Defaults to `1`;
- `DELAY_BETWEEN`: the delay in seconds between checks of whether the RP and OP are running. Defaults to `0.01` (because `0.001` changes nothing as far as we could test and `0.1` adds delay);
- `TIMEOUT`: timeout for any request made by `user_agent` and `rp`. Defaults to `10`;
- `SAVE_TLS_DEBUG`: `true` if you want to save the key log file needed to decrypt the TLS communication. The file is stored at `user_agent/app/tls_debug/user_agent.tls_debug` and `results/*/*/tls_debug/user_agent.tls_debug`. Defaults to `true`.
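All of these variables can be set inline on the `docker compose` command line, as in the examples below. If you prefer, you can also export them once in your shell, which is equivalent for a series of runs; a minimal sketch:

```sh
# Equivalent to prefixing the variables on each docker compose invocation
export TLS_SIGN=dilithium2
export JWT_SIGN=falcon512
export REPEAT=10
docker compose up --exit-code-from user_agent op rp user_agent
```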
Run the following to repeat our use case ten times, using RSA for the JWT and no TLS:

```sh
TLS_SIGN= JWT_SIGN=rsa REPEAT=10 docker compose up --exit-code-from user_agent op rp user_agent
```

It produces the raw performance numbers regarding time and size, which you can find at `user_agent/app/logs/`.
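For a quick look at the summary file without opening a spreadsheet, something like the following works; the file name is taken from the sample output further below, and we assume the container path `/app/logs/` corresponds to `user_agent/app/logs/` on the host:

```sh
# Pretty-print the resumed CSV as a table
column -t -s, "user_agent/app/logs/resumed_TEST=all.csv" | less -S
```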
We created a script to automate a large portion of the empirical evaluation. You can reproduce the experiments from our paper locally (i.e. ignoring latency) with:

```sh
./run_experiments.sh
```
If you want to run in a realistic environment, then start two Amazon EC2 instances.

There are extra variables for remote installation and execution:

- `AMAZON_PEM_FILE`: points to the location of the `.pem` file downloaded from Amazon EC2 to SSH into the machines;
- `AMAZON_USER`: as the name suggests, the user used to SSH into the EC2 instances.
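Before running the installation script, it may help to confirm that SSH access to both instances works with that key; this check is only a suggestion, and the placeholders stand for your own values:

```sh
# Quick connectivity check (repeat for the RP instance)
ssh -i ~/<your pem file>.pem <AMAZON_USER>@54.209.156.87 'echo ok'
```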
Then run the following to install everything that is needed on your EC2 instances (adjust the IP addresses accordingly):

```sh
OP_IP=54.209.156.87 RP_IP=54.87.166.113 AMAZON_PEM_FILE=~/<your pem file>.pem ./install_amazon.sh
```

Then, you can run the experiments with:

```sh
OP_IP=54.209.156.87 RP_IP=54.87.166.113 AMAZON_PEM_FILE=~/teste.pem REPEAT=50 ./run_experiments.sh
```
Warning: `tcpdump` captures grow quickly. E.g., if you run `REPEAT=50 ./run_experiments.sh` you will get around 5 GB of pcap files.
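If you want to inspect one of these captures manually, the key log file saved by `SAVE_TLS_DEBUG` can be passed to Wireshark/tshark to decrypt the TLS traffic. A minimal sketch, assuming the capture sits in one of the `results/*/*/` folders next to the key log (the exact pcap name is an assumption):

```sh
# Paths are illustrative -- point both options at the files of one experiment run
tshark -r "results/<experiment>/<run>/<capture>.pcap" \
       -o tls.keylog_file:"results/<experiment>/<run>/tls_debug/user_agent.tls_debug" \
       -Y http
```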
- If, for some reason, you need to recreate the TLS certificates, then you need to remove the existing containers and, most importantly, the volumes:

  ```sh
  docker kill $(docker ps -q)
  docker rm $(docker ps -q -a)
  docker volume rm post_quantum_op_certs post_quantum_rp_certs
  docker rmi -f $(docker images -a --filter=dangling=true -q)
  ```

- If, for some reason, you need to remove EVERYTHING and start from scratch:

  ```sh
  docker system prune -a --volumes -f
  ```
- If you want to evaluate the TLS handshake times like we did in our paper, you can use our tool built for this purpose.
Run with no TLS, JWT using RSA, and 100 tests locally:
```sh
TLS_SIGN= JWT_SIGN=rsa REPEAT=100 docker compose up --exit-code-from user_agent op rp user_agent
```

```
user_agent_1 | Storing detailed logs (times + sizes) on /app/logs/detailed/TEST=all RP=rp OP=op TLS= JWT=rsa REPEAT=100.csv
user_agent_1 | Storing resumed logs (times + sizes) on /app/logs/resumed_TEST=all.csv
user_agent_1 | Min time: 0.063583
user_agent_1 | Max time: 0.103506
user_agent_1 | Mean time: 0.066275
user_agent_1 | Stdev time: 0.004112
user_agent_1 |
user_agent_1 | Mean req/sec: 317.705191
user_agent_1 | Stdev req/sec: 13.964885
user_agent_1 |
user_agent_1 | Mean resp size: 1006559.000000
user_agent_1 | Stdev resp size: 0.000000
```
Run with TLS using Dilithium 5, JWT using Falcon-512 and 100 tests:
```sh
TLS_SIGN=dilithium5 JWT_SIGN=falcon512 REPEAT=100 docker compose up --exit-code-from user_agent op rp user_agent
```

```
user_agent_1 | Storing detailed logs (times + sizes) on /app/logs/detailed/TEST=all RP=rp OP=op TLS=dilithium5 JWT=falcon512 REPEAT=100.csv
user_agent_1 | Storing resumed logs (times + sizes) on /app/logs/resumed_TEST=all.csv
user_agent_1 | Min time: 0.088119
user_agent_1 | Max time: 0.130666
user_agent_1 | Mean time: 0.092610
user_agent_1 | Stdev time: 0.004317
user_agent_1 |
user_agent_1 | Mean req/sec: 227.131523
user_agent_1 | Stdev req/sec: 8.208270
user_agent_1 |
user_agent_1 | Mean resp size: 1009611.260000
user_agent_1 | Stdev resp size: 5.125929
```
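The `./run_experiments.sh` script mentioned above automates runs like the two examples here, but if you want to benchmark a hand-picked subset of algorithms manually, a plain shell loop over the compose command works; the algorithm list below is only an example:

```sh
# Example sweep over a few JWT signature algorithms at a fixed TLS algorithm
for alg in rsa ecdsa dilithium2 falcon512; do
  TLS_SIGN=ecdsa JWT_SIGN="$alg" REPEAT=100 \
    docker compose up --exit-code-from user_agent op rp user_agent
done
```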
To see what is happening behind the scenes, try:

```sh
TLS_SIGN=ecdsa JWT_SIGN=rsa LOG_LEVEL=DEBUG docker compose up --exit-code-from user_agent op rp user_agent
```
If you want to produce the same images shown in our paper:

- Run the experiments with `./run_experiments.sh` multiple times, i.e. once for each latency scenario you want to evaluate, changing the environment variables as instructed previously;
- Go to the `results` folder;
- For each experiment a folder will be created. Manually prepend the latency of said experiment to each folder's name. This is required to ensure the plotting order is correct;
- Run `run-tls-analyzer-all-results.sh` to extract the TLS handshake times. This might take a while. You can tweak this file to use multiple processes and speed things up (see variable `N`);
- Adapt the script `plot-paper-results.sh` as follows:
  - Change the labels on line 76 to the latencies of your experiments (i.e., the ones you manually prepended to each folder's name);
  - If you executed more or fewer than 4 experiments, you will have to change line 76 and possibly others to adjust the graph generation to your specific use case;
- Run `plot-paper-results.sh` to get the results (`results.csv`, `ratios.pdf` and `stacked.pdf`).