Verification of Self-Signed Certificates
When interfacing with third-party web services, one often encounters self-signed SSL certificates that trigger verification errors. One workaround is to suppress those errors. (For instance, the curl tool has the --insecure flag for this purpose.) However, at my company, we found ways to verify such certificates and thereby safeguard against man-in-the-middle attacks.
Conventionally, a web browser relies on a Public Key Infrastructure (PKI) to verify SSL certificates. Every certificate is signed by another (signing) certificate, which in turn must be signed by another, forming a chain that ends at a trusted certificate. This linkage allows a web server operator to switch to a new SSL certificate without requiring visitors to his website to update their browsers. Alternatively, he could ask his users to trust his specific certificate directly, so that the browser would not need to walk up the signature chain to verify it.
An SSL certificate carries inside it the public key of the web server. Conceptually, the authenticity of that public key is what allows the browser and the server to establish an authenticated Diffie-Hellman key exchange. Thus, if we could verify that it is the correct public key, we would have “verified” the certificate. And instead of verifying a long public key, we can verify its checksum, which is much shorter. A checksum of the entire certificate is called its fingerprint, and it is conventionally formatted as a colon-separated list of hex octets. For instance, here is the SHA-256 fingerprint of the certificate served by https://google.com:
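Such a fingerprint can be computed from the certificate's DER bytes with nothing but Python's standard library. Here is a sketch; the helper names are ours, and note that ssl.get_server_certificate() fetches the certificate without verifying it, which is exactly what we need for a self-signed one:

```python
import hashlib
import ssl

def sha256_fingerprint(der_bytes: bytes) -> str:
    """Return the SHA-256 digest of DER-encoded certificate bytes,
    formatted as a colon-separated list of uppercase hex octets."""
    digest = hashlib.sha256(der_bytes).hexdigest().upper()
    return ':'.join(digest[i:i + 2] for i in range(0, len(digest), 2))

def fingerprint_of_host(host: str, port: int = 443) -> str:
    """Fetch a server's certificate and compute its fingerprint.
    By default ssl.get_server_certificate() performs no validation."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return sha256_fingerprint(der)
```

Calling fingerprint_of_host('google.com') yields a 95-character string of 32 colon-separated octets that should match what your browser reports.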
To retrieve the details of a website's certificate, click on the lock icon in your browser's URL bar and then inspect the SSL certificate details. Alternatively, you could use the following shell commands:
# Adjust these values to your endpoint. If your environment does not
# require an HTTP proxy, delete the '-proxy $PROXY' parameter below.
HOST=example.com
PORT=443
PROXY=proxy.example.com:3128
echo quit | openssl s_client -showcerts -servername $HOST -connect $HOST:$PORT -proxy $PROXY > result.txt
The output file result.txt includes the certificate in PEM format, along with metadata. The PEM format consists of binary data encoded with Base64 into ASCII, enveloped between "begin" and "end" marker lines like so:
-----BEGIN CERTIFICATE-----
<certificate encoded in base64 encoding>
-----END CERTIFICATE-----
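The envelope is simple enough to handle by hand; here is a standard-library sketch of encoding and decoding it (demonstrated with a dummy payload rather than a real certificate):

```python
import base64
import textwrap

def pem_encode(der_bytes: bytes, label: str = 'CERTIFICATE') -> str:
    """Wrap binary data in a PEM envelope: a Base64 body split into
    64-character lines between BEGIN/END marker lines."""
    body = base64.b64encode(der_bytes).decode('ascii')
    lines = textwrap.wrap(body, 64)
    return '\n'.join([f'-----BEGIN {label}-----', *lines, f'-----END {label}-----'])

def pem_decode(pem_text: str) -> bytes:
    """Extract and decode the Base64 body between the marker lines."""
    lines = [l for l in pem_text.splitlines() if not l.startswith('-----')]
    return base64.b64decode(''.join(lines))

# Round-trip a dummy payload; a real file would hold DER certificate bytes.
assert pem_decode(pem_encode(b'not really a certificate')) == b'not really a certificate'
```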
You may feed the result.txt file into the following command to compute the fingerprint of the certificate. (The openssl tool uses the first certificate it finds in the input file and ignores everything else.)
$ openssl x509 -noout -fingerprint -sha256 -inform pem -in result.txt
The above command uses the -sha256 switch, which fixes the length of the fingerprint at 32 bytes. Only a few hash algorithms are in wide use, so the length of the fingerprint identifies the algorithm used to derive it.
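That observation can be captured in a few lines; here is a sketch (the function name is ours, and the mapping covers only the commonly used digest sizes):

```python
def infer_digest_algorithm(fingerprint: str) -> str:
    """Guess the hash algorithm behind a colon-separated fingerprint
    by counting its octets. Covers the widely used digest sizes."""
    sizes = {16: 'MD5', 20: 'SHA-1', 32: 'SHA-256', 64: 'SHA-512'}
    octets = fingerprint.split(':')
    return sizes.get(len(octets), 'unknown')

print(infer_digest_algorithm(':'.join(['AB'] * 32)))  # → SHA-256
```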
Here are three methods for verifying certificates by their fingerprints.
Method #1: Use Python
Verification of certificates by a fingerprint is supported out-of-the-box by the urllib3 library via the assert_fingerprint parameter:
import urllib3
from urllib.parse import urlparse

def http_get_request(url, fingerprint):
    parsed_url = urlparse(url)
    host = parsed_url.netloc
    path = parsed_url.path
    # assert_fingerprint tells urllib3 to check the certificate's digest
    # instead of walking the signature chain
    pool = urllib3.HTTPSConnectionPool(host, assert_fingerprint=fingerprint)
    return pool.urlopen('GET', path)

response = http_get_request('https://example.com/a/b/c', '14:71:...')
Notice that the fingerprint option configures an HTTPSConnectionPool object, which can then be used to make a series of queries against a website; each query verifies the fingerprint of the certificate.
The requests library supports certificate fingerprint verification as well, because it builds upon the urllib3 library. It is based on adapter objects that return the HTTPSConnectionPool objects discussed above, and it provides the method Session.mount(), which allows setting a custom adapter for a particular base URL. Putting this together, we have this code:
import requests
from urllib.parse import urlparse

def create_fingerprint_session(url, fingerprint):
    host = urlparse(url).netloc
    s = requests.Session()
    # disable chain and hostname verification; the adapter
    # checks the fingerprint instead
    s.verify = False
    s.mount('https://' + host, FingerprintAdapter(fingerprint))
    return s

url = 'https://example.com/a/b/c'
session = create_fingerprint_session(url, '14:71:...')
response = session.get(url)
Note that the verify setting must be set to False; otherwise, the requests library would also try to verify the SSL certificate in the conventional way, by the signature chain and the domain name.
(Note that the verify parameter may also be set to the location of a certificate file that contains a concatenated list of trusted certificates in PEM format. However, a self-signed certificate is signed by a custom Certificate Authority (CA) whose certificate is usually unknown to us. Thus, we do not use this option but set verify to False.)
All that remains now is to implement the FingerprintAdapter. The quickest way is to subclass the HTTPAdapter class and modify the methods that create the connection pool managers so that they include the assert_fingerprint option:
from requests.adapters import HTTPAdapter

class FingerprintAdapter(HTTPAdapter):
    """A TransportAdapter that verifies certificates by their fingerprint."""

    def __init__(self, fingerprint, *args, **kwargs):
        self._fingerprint = fingerprint
        HTTPAdapter.__init__(self, *args, **kwargs)

    def init_poolmanager(self, *args, **kwargs):
        kwargs['assert_fingerprint'] = self._fingerprint
        return super().init_poolmanager(*args, **kwargs)

    def proxy_manager_for(self, *args, **kwargs):
        kwargs['assert_fingerprint'] = self._fingerprint
        return super().proxy_manager_for(*args, **kwargs)
In summary, set verify=True when working with certificates signed by a trusted CA; otherwise, set verify=False and mount a FingerprintAdapter to verify self-signed certificates by fingerprint. Test that the verification is working by altering the fingerprint value and observing a security error.
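Under the hood, the check amounts to comparing digests. Here is a network-free sketch of that comparison (the helper name matches_fingerprint is ours), which also shows how altering a single hex digit makes verification fail:

```python
import hashlib
import hmac

def matches_fingerprint(der_bytes: bytes, fingerprint: str) -> bool:
    """Compare a certificate's SHA-256 digest against an expected
    colon-separated fingerprint, in constant time."""
    expected = bytes.fromhex(fingerprint.replace(':', ''))
    return hmac.compare_digest(hashlib.sha256(der_bytes).digest(), expected)

cert = b'pretend this is a DER certificate'
good = hashlib.sha256(cert).hexdigest()
good_fp = ':'.join(good[i:i + 2] for i in range(0, 64, 2))
assert matches_fingerprint(cert, good_fp)

# Flip one hex digit and the check must fail -- the same negative
# test is worth running against your real endpoint.
bad_fp = ('0' if good_fp[0] != '0' else '1') + good_fp[1:]
assert not matches_fingerprint(cert, bad_fp)
```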
Method #2: Site-wide
What if you wished to use other tools besides Python to query websites served with self-signed certificates? A site-wide solution is to add the self-signed certificate to the list of trusted certificates; once it is there, the verifier performs no further signature checking.
However, the downside of this method is that a compromised trusted third party could now sign certificates for any domain which all programs on the machine would trust. This is a significant security risk for a long-lived server, but it may be tolerable if “site-wide” does not extend beyond a Docker container which runs a program that only connects to one endpoint.
The following instructions are for Ubuntu or Debian; for other distributions, make necessary adjustments.
Look in the directory /usr/share/ca-certificates and you will see the subdirectory mozilla, with many certificate files inside it. Make your own subdirectory at the same nesting level, for instance /usr/share/ca-certificates/custom, and put in it the self-signed certificates of interest in PEM format, stored as separate files with the extension .crt. Next, edit /etc/ca-certificates.conf and list the custom certificates after the mozilla certificates, one path per line relative to /usr/share/ca-certificates (for instance, custom/my-server.crt, a hypothetical file name). Then run the update-ca-certificates command. Once that's done, symlinks to your certificates will appear in the /etc/ssl/certs directory. At this point, the curl tool will accept the self-signed certificate from your server.
However, the fingerprint method described previously has the advantage that it works even if there is a domain name mismatch. A mismatch happens, for example, if you query the target HTTPS server by an IP address (e.g. https://220.127.116.11/a/b/). If this is your situation, you can add an entry to the /etc/hosts file so as to query the web server using the precise domain name that is listed inside the self-signed certificate.
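For instance, if the certificate's subject is secure.example.com (a hypothetical name), an /etc/hosts entry of this shape maps that name onto the server's address:

```
# /etc/hosts -- map the certificate's subject name to the server's address
220.127.116.11    secure.example.com
```

After that, https://secure.example.com/a/b/ reaches the same server while matching the domain name inside the certificate.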
The site-wide method works for all tools that rely on OpenSSL's libssl library, which includes curl. However, it is not sufficient for Python's requests library, since requests does some certificate checking of its own.
Method #3: Strip SSL
Another way to allow a variety of tools to access HTTPS websites with self-signed certificates is to access them through a trusted proxy server that strips the SSL after verifying the legitimacy of the self-signed certificates using fingerprints. We could implement such a proxy server in Python using the techniques above. Alternatively, we could use a utility program called stunnel, the "universal SSL tunnel."
First, prepare a connection.conf configuration file along these lines (the [https] service name and the endpoint values are illustrative):
pid = /var/run/stunnel1.pid
foreground = yes

[https]
client = yes
accept = 127.0.0.1:8081
connect = example.com:443
sni = example.com
CApath = /etc/ssl/certs
verifyChain = no
verifyPeer = yes
Run stunnel with the configuration file as its argument to have https://example.com proxied as http://localhost:8081:
$ stunnel connection.conf >& output.log &
$ curl 'http://localhost:8081/a/b/c'
An important thing to notice in the example configuration file is the verifyChain and verifyPeer options. They combine to verify the certificate by a fingerprint only and to ignore an incomplete signature chain. These options were added to stunnel in July 2016, in version 5.34. Another thing to notice is that the domain name doesn't matter: the sni parameter is used only to instruct the web server which virtual host you are interested in, but it plays no role in validating the certificate.
To deploy stunnel site-wide, make the following configuration changes: store the configuration as /etc/stunnel/stunnel.conf, remove the foreground = yes bit, set the pid file to /var/run/stunnel.pid, and add additional connection sections as needed.
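Putting those changes together, a site-wide /etc/stunnel/stunnel.conf might look like this sketch (the service names and endpoints are illustrative):

```
; /etc/stunnel/stunnel.conf -- run by the init system, so no foreground = yes
pid = /var/run/stunnel.pid

[vendor-api]
client = yes
accept = 127.0.0.1:8081
connect = example.com:443
sni = example.com
CApath = /etc/ssl/certs
verifyChain = no
verifyPeer = yes

; add further [section] blocks to proxy other endpoints on other local ports
```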
We have demonstrated three ways to work with self-signed certificates without compromising security. At my company, we are trusted by vendors of parking systems to protect their data, and we use such techniques to justify their trust.