
Mattermost Office 365 SSO and TLS

My goal was to test Office 365 SSO authentication to Mattermost. The limitation is that Office 365 SSO (Azure AD) requires the OAuth Redirect URI to match the SiteURL configured on the Mattermost server.

My Mattermost HA cluster lab environment was set up with:
Mattermost SiteURL:
NGINX TLS config:

So in my example this will fail: Azure AD expects the configured SiteURL as the redirect URI endpoint, whereas only the HTTPS endpoint published by NGINX is visible to Office 365/Azure AD on the internet.

The Mattermost service is secured with a certificate and domain name I do not own, so I wasn't able to change certs or DNS myself. That left me two options:
1) Configure the NGINX server as a load balancer to pass TLS through to the Mattermost app server cluster
2) Remove the NGINX proxy and configure a load balancer in AWS to pass TLS through to the Mattermost app server cluster

NGINX must be a version built with the Stream module to enable passthrough. First, check that the stream module is installed:

nginx -V 2>&1 | tr -- - '\n' | grep module
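A slightly more targeted variant of that check looks specifically for the `--with-stream` build flags (this also matches `--with-stream=dynamic`). This is a sketch assuming nginx is on the PATH; if it is not, the check simply reports the module as missing:

```shell
# Look for the stream module in nginx's compile-time configuration.
if nginx -V 2>&1 | grep -q -- '--with-stream'; then
    status="stream module available"
else
    status="stream module missing"
fi
echo "$status"
```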

Once I knew NGINX could support passthrough, I focused on getting the SSL certificate onto the Mattermost server. I was able to copy the same certificate and private key files currently used by the NGINX reverse proxy over to the Mattermost server, making sure I changed the owner and permissions:

sudo chown mattermost:mattermost /opt/mattermost/config/fullchain.pem
sudo chown mattermost:mattermost /opt/mattermost/config/privkey.pem
sudo chmod 400 /opt/mattermost/config/*.pem
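After copying certs around like this it is worth confirming that the certificate and private key actually belong together, which can be done by comparing their public-key digests. Below is a self-contained sketch that generates a throwaway self-signed pair just for demonstration; on the real server you would point the two openssl commands at the copied fullchain.pem and privkey.pem instead:

```shell
# Demonstration: verify a certificate matches its private key by comparing
# public-key digests. The self-signed pair here is a throwaway example.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=mm.example.com" \
    -keyout "$tmp/privkey.pem" -out "$tmp/fullchain.pem" 2>/dev/null
# Digest of the public key inside the certificate...
cert_hash=$(openssl x509 -noout -pubkey -in "$tmp/fullchain.pem" | openssl sha256)
# ...and of the public key derived from the private key.
key_hash=$(openssl pkey -pubout -in "$tmp/privkey.pem" 2>/dev/null | openssl sha256)
[ "$cert_hash" = "$key_hash" ] && echo "certificate and key match"
rm -rf "$tmp"
```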

I then followed the TLS configuration steps described in our docs, leaving me with ServiceSettings similar to below:

"ServiceSettings": {
        "SiteURL": "",
        "WebsocketURL": "",
        "LicenseFileLocation": "",
        "ListenAddress": ":443",
        "ConnectionSecurity": "TLS",
        "TLSCertFile": "/opt/mattermost/config/fullchain.pem",
        "TLSKeyFile": "/opt/mattermost/config/privkey.pem",
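One gotcha worth noting, based on the standard Mattermost TLS setup docs rather than anything above: with Mattermost now listening on port 443 directly as the non-root mattermost user, the binary needs permission to bind a privileged port, along the lines of:

```
sudo setcap cap_net_bind_service=+ep /opt/mattermost/bin/mattermost
```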

NGINX needs to be told to look for a passthrough configuration. Edit nginx.conf and add an include statement after the closing brace of the http block. Note that the stream context is a sibling of http, not nested inside it, so the include must sit outside the http block:

sudo nano /etc/nginx/nginx.conf
http {
    ...
}
include /etc/nginx/passthrough.conf;

Now create the passthrough configuration file just referenced.

sudo nano /etc/nginx/passthrough.conf

The NGINX docs are really good here, and I picked out a couple of key points. There are many settings you could use to tune the load balancing, but I wanted to keep my configuration simple in my lab.

Load Balancing Method and Session Persistence
Firstly, the open source version of the product only allows session persistence via the hash or ip_hash directives. I tried ip_hash first but ended up getting the following error:
nginx: [emerg] "ip_hash" directive is not allowed here in /etc/nginx/passthrough.conf:4
The reason is that ip_hash belongs to the http load-balancing module and is not valid in the stream context; the stream equivalent is hash $remote_addr.
I selected for my configuration:

hash – the server to which a request is sent is determined from a user-defined key
$remote_addr – the client address, so each client consistently maps to the same backend
consistent – enables ketama consistent hashing, so adding or removing an upstream server remaps only a few keys rather than reshuffling every client
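To make the idea concrete, here is a toy illustration of client-address hashing. This is not NGINX's actual ketama algorithm; it just shows the property we care about, that hashing the client address to pick a backend means the same client is always routed to the same server:

```shell
# Toy client-address hashing: same input address always yields the same
# backend, so a client's session stays on one server.
pick_backend() {
    sum=$(printf '%s' "$1" | cksum | cut -d' ' -f1)
    echo "backend-$(( sum % 2 ))"
}
pick_backend 10.0.0.5
pick_backend 10.0.0.5     # same client address -> same backend
pick_backend 192.168.1.20
```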

# LB https to 2 backend servers
stream {
    upstream mm_mydomain_com {
        hash $remote_addr consistent;
        # Backend addresses below are placeholders - substitute your own
        # two Mattermost app servers
        server app1.mm.mydomain.com:443;
        server app2.mm.mydomain.com:443;
    }

    server {
        listen 443;
        proxy_pass mm_mydomain_com;
        # In the stream context proxy_next_upstream only takes on|off
        # (the "error timeout" parameters belong to the http module)
        proxy_next_upstream on;
    }
}

Once I had a passthrough.conf that I thought would work, I removed the current NGINX HTTPS reverse proxy configuration.

cd /etc/nginx/sites-enabled
ls -l
sudo rm mattermost

I then tested the NGINX configuration and reloaded the service:

sudo nginx -t
sudo systemctl reload nginx

The final test was to reboot my two Mattermost app servers in turn and test connections to both, which finally worked!
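One quick way to confirm passthrough end to end is to inspect the certificate presented on port 443: with passthrough in place it is now served by the Mattermost process itself rather than terminated at NGINX. The hostname below is a placeholder for your own site:

```
openssl s_client -connect mm.mydomain.com:443 -servername mm.mydomain.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer
```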

This was the quickest way I had to get a working configuration in place to test Office 365 SSO with Mattermost, and it gave me a chance to learn a bit about NGINX. There are likely better ways I could have done this, but it works well for testing Mattermost Office 365 sign-on in my lab.

If I were looking at this for a production instance, I would investigate the ip_hash configuration, health_check, and proxy_next_upstream settings in more detail.

Thank you to those who have shared before; their work was used as a basis for this:

Published in Mattermost Recipes
