Setting up a secure HTTPS server for iPXE booting

Today iPXE will be evaluated. The goal is to flash a USB stick with a suitably configured iPXE that can,

  1. Securely retrieve secret configuration data from a remote application
  2. Download a kernel and early-userspace image from a remote site (security not important here)
  3. And then hand off to the kernel.

How can security be achieved in point 1?

A detour through nginx

To learn how to configure nginx, a second test-bed is required. Podman can be useful,

podman run --rm -it --name  testing-nginx \
       -v ./files:/files `# file service` \
       -v ./ssl:/etc/charles-plus-ssl `# key material` \
       -v ./nginx.conf:/etc/nginx/nginx.conf:ro `# nginx configuration` \
       -v ./conf.d:/etc/nginx/conf.d:ro `# nginx configuration` \
       -p 8080:80 `# http service` \
       -p 8081:443 `# https service` \
       nginx `# container from the upstream devs`

By configuring an editor to run,

podman exec -it --user root $nginx sh -c 'nginx -t && service nginx configtest && service nginx reload'

after saving the nginx configuration files, the user may live edit a configuration and see the changes via https://localhost:8081 for example.

N.B.1. Because nginx handles existing clients gracefully across reloads, it may be necessary to refresh the browser window a few extra times before nginx starts a new worker that reflects the latest configuration changes, even after a "reload".

N.B.2 Use the web browser development tools to ensure the browser disables its page cache. It can be a source of confusion while testing HTTP* applications locally. Chrome calls this option "Disable cache" in the "Network" devtools at the time of writing.

It is important to see what nginx is doing, hence logging. The official nginx container at the time of writing sets up the access and error logs as /var/log/nginx/{access,error}.log respectively, and these are symlinks to stdout and stderr.

So, after starting nginx in the background, it is possible to watch the nginx logs using podman logs -f testing-nginx.

Keep the variable index close. The logging docs are also useful.
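
For reference, the upstream_time and sslparams log formats used later in this post might be defined along these lines (the exact field selection is an assumption; the variables themselves are standard nginx ones):

```nginx
# Sketch of custom log formats; these go in the http block of nginx.conf.
log_format upstream_time '$remote_addr [$time_local] "$request" $status '
                         'rt=$request_time urt=$upstream_response_time';
log_format sslparams '$remote_addr [$time_local] "$request" $status '
                     'proto=$ssl_protocol cipher=$ssl_cipher '
                     'client_serial=$ssl_client_serial verify=$ssl_client_verify';
```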

A sojourn into nginx SSL support

Certificates and their related cryptography are very complicated subjects requiring a lot of research to fully understand.

Some jargon

  - Certificate Authority (CA): the entity that signs certificates, and whose own certificate clients must trust.
  - Certificate Signing Request (CSR): a file containing a public key and identity details, submitted to a CA for signing.
  - Domain validation, the cheapest sort of check, validates that the owner owns a domain listed on the cert. Could be enough for this use-case.

But basically there are three steps,

  1. Create a private key as a self-proclaimed CA.
  2. Create a certificate and self-sign it.
  3. Install the self-signed certificate into the nginx server.

Since the goal is to be one's own certificate authority, the root certificate should first be generated.

gen_ca() {
    local ca_dir=$1 ca_name=$2

    openssl req -x509 -newkey rsa:2048 -days 3650 -nodes \
            -out "$ca_dir"/"$ca_name"-ca.crt \
            -keyout "$ca_dir"/"$ca_name"-ca.key \
            -subj "/C=GB/L=London/O=Charles Turner (Igalia, Valve)/" \
            -addext "subjectAltName=DNS:localhost" # IP: entries elided here
}
gen_ca ./ssl cturner

Check the cert using openssl x509 -text -in ./ssl/cturner-ca.crt -noout, or inspect it with gcr-viewer,


After making these files available in the /etc/charles-plus-ssl directory, SSL is enabled thusly in nginx,

modified   example1/conf.d/default.conf
@@ -2,9 +2,13 @@
 server {
     listen       80;
+    listen      443 ssl;
     server_name  localhost;
-    access_log  /var/log/nginx/access.log  upstream_time;
+    ssl_certificate /etc/charles-plus-ssl/cturner-ca.crt;
+    ssl_certificate_key /etc/charles-plus-ssl/cturner-ca.key;

A low-level check for basic SSL support,

openssl s_client -connect localhost:8081

This should return Verification error: self signed certificate, since the self-signed certificate is probably not in the trusted certificate store. Tell OpenSSL you trust yourself using,

openssl s_client -connect localhost:8081 -debug -CAfile ./ssl/cturner-ca.crt

And watch for Verification: OK to check it worked. Use wireshark (or tcpdump -i lo) to watch the traffic on lo for deeper debugging.

Check the content of the PXE boot script,

wget --no-check-certificate -O- https://localhost:8081/ipxe



Toying with iPXE

Since an HTTPS server is now running on localhost:8081, it is possible to boot from it with the following iPXE script,

#!ipxe
chain https://localhost:8081/ipxe

Then, create a specific ISO that boots using this script,

make EMBED=ipxescript bin/ipxe.iso DEBUG=tls,x509:3,certstore,privkey TRUST=cturner-ca.crt

Where cturner-ca.crt is the self-signed CA generated earlier.

The iPXE script hosted on localhost:8081 looks like this,
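
As a sketch, such a hosted script might look like the following (the kernel and initrd paths are assumptions):

```
#!ipxe
echo Booting from the local nginx
kernel https://localhost:8081/files/vmlinuz console=ttyS0
initrd https://localhost:8081/files/initrd.img
boot
```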


Hence, when booting the custom build of ipxe,

qemu-system-x86_64 -enable-kvm -cdrom ipxe.iso -nic user,model=virtio-net-pci -nographic ; reset

N.B. reset is useful with -nographic, which often upsets the terminal.

Via a detour through the local nginx, a vmlinuz will greet us,

iPXE Boot Demonstration

Linux (none) 3.16.0-rc4+ #1 SMP Wed Jul 9 15:44:09 BST 2014 x86_64 unknown

Congratulations!  You have successfully booted the iPXE demonstration
image from

See http://ipxe.org for more ideas on how to use iPXE.


With said nginx detour in place, many features can be implemented on the server-side to control the machine boot configuration. Instead of serving a static iPXE script, server-side scripts may now generate something based on the request headers. That will be addressed later in the post.

It is important not to be too proud of this setup: client certificates have not been added yet. With the security currently in place, iPXE can know that it is talking to a server it trusts. The server cannot yet know who it is talking to. The latter point is important, since this server is delivering sensitive configuration data and must therefore authenticate its clients.

Since the whole flow is automated, with no human in the loop to supply credentials, client certificates are needed next.

Enforcing client certificates

Each client must prove they are trusted by the server as well. First, generate private material for the new client,

    openssl req -newkey rsa -keyout client.key -out client.csr -nodes
    openssl ca -config "$__TMP"/ca.cnf -in client.csr -out client.crt

The ca.cnf here is a convenience to sign with the self-signed cert generated earlier. That will be used by nginx again to verify clients are trusted.
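
The ca.cnf itself is not reproduced here, but a minimal sketch of such a configuration might look like this (the paths and policy are assumptions):

```
[ ca ]
default_ca = CA_default

[ CA_default ]
dir           = ./ssl
certificate   = $dir/cturner-ca.crt
private_key   = $dir/cturner-ca.key
new_certs_dir = $dir/newcerts
database      = $dir/index.txt
serial        = $dir/serial
default_md    = sha256
default_days  = 365
policy        = policy_anything

[ policy_anything ]
commonName = supplied
```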

Build an iPXE ISO for the new client, which is specific to their keys like so,

make EMBED=ipxescript bin/ipxe.iso CERT="$__CA_CRT","$__CLIENT_CRT" TRUST="$__CA_CRT" PRIVKEY="$__CLIENT_KEY"

The parameters are what was generated before. Passing DEBUG=tls,x509:3,certstore,privkey is frequently helpful while debugging issues.

The only nginx change required for client certificates is,

@@ -8,6 +8,9 @@ server {
     ssl_certificate /etc/charles-plus-ssl/cturner-ca.crt;
     ssl_certificate_key /etc/charles-plus-ssl/cturner-ca.key;
+    ssl_client_certificate /etc/charles-plus-ssl/cturner-ca.crt;
+    ssl_verify_client on;

Now, proof is established in both directions, with the tradeoff that iPXE boots more slowly (about 2 seconds more in the current environment), since it must now perform expensive cryptographic checks itself.

A local-check using wget,

wget --ca-cert=./ssl/cturner-ca.crt \
  --certificate=client.crt \
  --private-key=client.key \
  https://localhost:8081/ipxe -qO-
Congratulations, security properties complete!

Client-specific PXE configurations with Python

Flask can be used as a simple nginx application server. For example,

from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/ipxe')  # route path assumed from the URLs used elsewhere in this post
def ipxe():
    return b'#!ipxe\r\nchain\r\n'  # chain target elided in the original

if __name__ == "__main__":
    app.run(host='', port=8082)

With the Python service running, nginx can be configured to proxy authenticated requests to it like so,

modified   example1/conf.d/default.conf
@@ -15,7 +15,14 @@ server {
     #return 301 $scheme://$request_uri;
     location / {
-        root   /files;
+	proxy_pass         http://app_servers;
+        proxy_redirect     off;
+        proxy_set_header   Host $host;
+        proxy_set_header   X-Real-IP $remote_addr;
+        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header   X-Forwarded-Host $server_name;
+	proxy_set_header   X-SSL-Client-Serial $ssl_client_serial;
+	proxy_set_header   X-SSL-Client-Fingerprint	$ssl_client_fingerprint;
     #error_page  404              /404.html;
modified   example1/nginx.conf
@@ -29,6 +29,11 @@ http {
     access_log  /var/log/nginx/access.log  sslparams;
+    upstream app_servers {
+        server;
+	# ...
+    }

Note the proxy passing of the client certificate's serial and fingerprint; these can be used by the Python application to return client-specific iPXE configurations. Secrets may be transmitted in the response, since the channel is now encrypted and authenticated in both directions.
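
As a sketch of how the application side might use the forwarded fingerprint, assuming a hypothetical lookup table:

```python
# Hypothetical table mapping client certificate fingerprints (as forwarded
# by nginx in the X-SSL-Client-Fingerprint header) to boot parameters.
BOOT_CONFIGS = {
    "4d5f61bcd81f5dc224cb0b34764d1b2ef6530961": {
        "kernel": "vmlinuz-dut1",
        "initrd": "initrd-dut1.img",
    },
}

def ipxe_script_for(fingerprint):
    """Return a client-specific iPXE script, or drop to a shell if unknown."""
    config = BOOT_CONFIGS.get(fingerprint.lower())
    if config is None:
        return "#!ipxe\necho Unknown client\nshell\n"
    return (
        "#!ipxe\n"
        f"kernel {config['kernel']}\n"
        f"initrd {config['initrd']}\n"
        "boot\n"
    )
```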

It can now be verified using the QEMU testbed that the Python application receives everything it needs to do a good job,

Host: localhost
X-Forwarded-Host: localhost
X-Ssl-Client-Serial: 03
X-Ssl-Client-Fingerprint: 4d5f61bcd81f5dc224cb0b34764d1b2ef6530961
Connection: close
User-Agent: iPXE/1.21.1+ (g85eb)

For production, one can host the application in gunicorn with a file like,

from app import app

if __name__ == "__main__":
    app.run()

And then run the server

gunicorn -b '' --workers=2 wsgi:app

With the application server in place, it's now possible to complete the requirements,

$ wget --ca-cert=./ssl/cturner-ca.crt --certificate=/home/cturner/igalia/graphics/board-farm/ipxe-experiments/client.crt --private-key=/home/cturner/igalia/graphics/board-farm/ipxe-experiments/client.key https://localhost:8081/ipxe/52%3A54%3A00%3A12%3A34%3A56 -qO-
kernel b2c.minio="bbz,,0028ce5566be8940000000002,K002rfqoZWg6fN9Dckrz/Phk1mb+q1s" b2c.volume="perm,mirror=bbz/tchar-dut1-perm,pull_on=pipeline_start,push_on=pipeline_end,overwrite" b2c.container="-v perm:/mnt/perm --tls-verify=false docker://busybox:latest find /mnt/perm" b2c.ntp_peer= b2c.pipefail b2c.cache_device=auto b2c.poweroff_delay=15 console=ttyS0

Everything was generated by a Python script that could make use of either the client's MAC address or unique metadata in the client's SSL certificate (here, the fingerprint is used to select appropriate kernels and ramdisks).
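
One small detail from the wget example above: the ':' separators in the MAC address arrive percent-encoded in the URL path, so the application must decode them before any lookup. The standard library handles this,

```python
from urllib.parse import unquote

# The MAC address as it appears in the request path of the wget example.
encoded = "52%3A54%3A00%3A12%3A34%3A56"
mac = unquote(encoded)
print(mac)  # 52:54:00:12:34:56
```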

To production…

This is great: the requirements are met and a basic prototype is in place. Now the system must be made reproducible for other developers and environments. A deployment for nginx and Flask is needed, as well as a means of maintaining the mapping between client IDs and ramdisks / kernels. The S3 storage is also still handled manually. That's all for the next iteration.


Created: 2021-11-28 Sun 17:58