Docker, .NET Core 5.0, Angular 11, Nginx and Postgres on the Google Cloud Platform — Pt 2

In part 1 we built and ran our images locally and covered the docker-compose file and the docker build files. Now it's time to look at moving everything to the cloud and getting it running almost for free!

The source code for the example can be found here.

Pushing The Api Docker Image

When we move to the cloud we'll be relying on the docker-compose instance on our VM to pull down the prebuilt Api image we created in the first step. In order to make our image accessible we'll be putting it up on Docker Hub, a docker image repository; other repositories are available, such as Azure Container Registry and Google's Container Registry.

First you’ll need to sign up and create a Docker Hub account and give yourself a Docker ID.

Once you've completed this step it's time to push our image to the hub with the following commands.

Enter your Docker ID as your username and then enter your password.

docker login

Now open PowerShell in the root of our solution, where the docker-compose.yml file lives, and run the build below. This will create fresh images for our applications, including the Api.

Note, it's faster to just go to the server directory and run a docker build there, but I want to keep things as simple as possible, with as few commands to explain.

docker-compose build

Running docker images will show us all the images on our system, including the Api image.

docker images

We should get an output that looks something like this.

In order to push the api image to Docker Hub we'll need to give it a tag that includes the Docker ID you created earlier. In my case it's my name, 'stephenadam'.

docker tag gcpblog_api stephenadam/gcpblog_api

Now to push our image to Docker Hub.

docker push stephenadam/gcpblog_api

Note, by default this image will be public to the outside world. Docker Hub allows you to have one private image for free; when working with your own site, go to the settings for the image on Docker Hub and set it to private.

Setting Up the Cloud Environment

For this step we'll need a Google account to access the Google Cloud Platform and take advantage of the 3 month / $300 trial they offer.

Once you have signed in to Google, create a free Google Cloud Platform account by going to the following URL.

https://cloud.google.com/free/

Next let's go to the Cloud Console and get to work!

https://console.cloud.google.com/

First we need to create a VM to run our site on. Find 'Compute Engine' in the left hand menu and then go to VM instances. While there are other options for running our code, Compute Engine's VMs are the cheapest option and let us configure our own environment.

Google offers a free VM option which we'll be using here. We'll need to provision an f1-micro instance in one of the following locations: us-west1, us-central1 or us-east1.

The only option we’ll need to pay for later is the fixed IP which comes out as $2.88 a month. Not bad for a site hosted in GCP!

Create an instance.

Now we need to configure our virtual machine. For this example we'll be leaving everything at its default settings apart from the following options:

Name: Pick something suitable. I’ve gone for gcp-blog.
Machine type: e2-small. This is a low powered machine but it’s cheap and will happily work for our purposes.
Firewall: Here ensure we allow both HTTP and HTTPS access.

We have up to 30 GB of free storage, so go to the boot disk section and click on 'Change'.

Now up the disk space to 30 GB.

If you have set up the machine successfully you should see the following message in the top right of the screen when setting up the VM.

Now just click the ‘Create’ button to create your VM.

Setting Up a Swap File

The f1-micro we're using only has 614 MB of RAM available. We might run into trouble with so little working memory, so let's guard against this by setting up a swap file in case we run out.

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
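To confirm the swap space is active, you can check it with either of the following standard Linux tools (no extra installs needed):

sudo swapon --show
free -h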

Installing Docker on the VM

Now we have our shiny new VM created, we're going to need to install Docker and docker-compose on it in order to run our application. In the VM instances section of Compute Engine, click on the SSH option to open up a command prompt on the machine.

Run the following commands.

sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

You may get the following error when running 'sudo apt-get update'.

https://download.docker.com/linux/debian buster InRelease
The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 7EA0A9C3F273FCD8

The apt packaging system has a set of trusted keys that determine whether a package can be authenticated and therefore trusted to be installed on the system. Sometimes the system does not have all the keys it needs and runs into this issue. Fortunately, there is a quick fix. Each key that is listed as missing needs to be added to the apt key manager so that it can authenticate the packages.

https://chrisjean.com/fix-apt-get-update-the-following-signatures-couldnt-be-verified-because-the-public-key-is-not-available/

In order to fix this we just need to run the following command with the missing key it’s complaining about.

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7EA0A9C3F273FCD8
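Once the key is added, re-run the install commands above. A quick way to confirm Docker is working afterwards is to run the hello-world image, which simply prints a confirmation message:

sudo docker run hello-world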

Installing Docker Compose on the VM

sudo curl -L "https://github.com/docker/compose/releases/download/1.28.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
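To check the binary landed in the right place and is executable, print its version:

docker-compose --version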

Uploading the Site to Google’s CDN

Now we've set up our shiny new VM, it's time to move our site to a bucket on Google's CDN. It's far cheaper to store and serve our data from the CDN.

To keep things as simple as possible we'll be serving our CDN files from the NGINX server on our VM via its reverse proxy feature.

Note, this is far from ideal in a production setup, as we will be making two hops and losing much of the benefit of a CDN, such as reduced server load and geolocation of the content.

First let's build the site in production mode. We'll need to update 'GcpBlog\client\src\environments\environment.prod.ts' to point to the correct domain. Here I've set it to gcpblog.dev.

export const environment = {
  production: true,
  baseUrl: 'https://gcpblog.dev/api/'
};

Now open the client directory in PowerShell and build the site in production mode.

ng build --prod

We now have our production ready files in the following folder.

‘GcpBlog\client\dist\client’

Now go to the Google Cloud Platform console and select 'Cloud Storage' in the left hand menu.

Click ‘Create Bucket’ and enter a name for our new storage bucket. You’ll need to use this later to configure our NGINX server.

Select your region setting and then select ‘standard’ for the storage class. Access control should be set to uniform and advanced settings should be left in their default state.

When presented with the new bucket, click upload files, select all the files in the 'GcpBlog\client\dist\client' directory and upload them to our new bucket!
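If you have the Google Cloud SDK installed locally, the same upload can also be done from the command line with gsutil; the bucket name below is the example one used throughout this article, so substitute your own:

gsutil -m cp -r GcpBlog/client/dist/client/* gs://gcpblog-bucket/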

Setting public permissions on the files

By default the files will be private. The final step will be making them viewable to the outside world. To do this carry out the following steps.

Select the Permissions tab near the top of the page.

  • Click the Add members button.
  • The Add members dialog box appears.
  • In the New members field, enter allUsers.
  • In the Select a role drop down, select the Cloud Storage sub-menu, and click the Storage Object Viewer option.
  • Click Save.
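Again, if you prefer the command line, the equivalent grant can be applied with gsutil (same caveat about substituting your own bucket name):

gsutil iam ch allUsers:objectViewer gs://gcpblog-bucket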

Reserving a Static IP Address

By default the IP address assigned to our VM is ephemeral, that is, it can change at any time. When we later come to point our domain A record at the VM we’ll want an IP address which won’t change.

To get our fixed IP address, go to the left hand menu and select VPC Network from the Networking section. We should see the current external IP address of our VM along with a 'Type' drop down menu. Click on that, select 'Static' and give the IP address a name.

We should now see the same IP address come up as static on the listing page.
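For reference, the same promotion from ephemeral to static can be done with the gcloud CLI if you prefer; the address name is whatever you want to call it, and the IP and region below are placeholders to replace with your VM's current values:

gcloud compute addresses create gcp-blog-ip --addresses=CURRENT_EXTERNAL_IP --region=us-central1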

The Production Docker Compose

The production docker-compose file is very similar to the compose file we used to run the application locally in part 1. This, along with the other files which will live on the server, can be found in the 'vm-files' folder in our solution.

Here I’ll cover off the differences before moving our application to the virtual machine.

Postgres Database

The Postgres database running on the database service is identical other than the volumes statement. When running our example locally we weren't persisting our data; once the container was taken down, any changes to the database were lost.

The volumes option allows us to create a link between the container and the file system of the machine we are running on, and so persist the data. Here we create a link between the 'database-data' volume and the '/var/lib/postgresql/data/' folder on the container. The physical path to the data on the host machine is '/var/lib/docker/volumes/username_database-data'.

Now we can restart our database container and deploy updates without losing any data we have stored on it.

We define the volume at the bottom of the compose file.
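The full compose file isn't reproduced here, but as a rough sketch the database service and volume definition described above look something like this (the image, restart policy and environment variable are assumptions on my part; check the file in the 'vm-files' folder for the exact values):

database:
  image: postgres               # assumed image; use whatever the real file specifies
  restart: unless-stopped
  environment:
    POSTGRES_PASSWORD: changeme # placeholder only
  volumes:
    # named volume mapped to Postgres's data directory inside the container
    - database-data:/var/lib/postgresql/data/

# declared at the bottom of the compose file
volumes:
  database-data: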

.NET Core Web Api

The ASPNETCORE_ENVIRONMENT variable is set to 'Production' rather than 'LocalDocker'. This will load the appsettings.Production.json file with the production database connection string rather than the local one.

Also, we are no longer publishing the ports. In the local example the Angular application was calling the Api directly from our host machine. In our production example we're using an NGINX reverse proxy to handle all incoming requests, so calls to the api go to the same URL and port as our application code. This means we don't need to expose the Api's ports to the outside world. More on this later in the NGINX section!
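Similarly, the api service ends up looking roughly like the sketch below; the image name is the one we pushed to Docker Hub earlier, while the restart policy and database dependency are assumptions, so again treat this as a sketch rather than the exact file:

api:
  image: stephenadam/gcpblog_api
  restart: unless-stopped
  environment:
    - ASPNETCORE_ENVIRONMENT=Production
  depends_on:
    - database
  # no 'ports' entry - only NGINX needs to reach the api, and it can do so
  # over the internal compose network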

Looking further down the file we see two new services, certbot and nginx. Both of these deserve a proper explanation and are covered below.

Here’s a reminder of how it fits together.

Accessing Files on the VM Through WinSCP

The next step in the process requires us to edit files on the server, and we'll shortly be needing to upload files too, so it's a great time to get WinSCP set up.

WinSCP (Windows Secure Copy) is a free and open-source SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), WebDAV, Amazon S3, and secure copy protocol (SCP) client for Microsoft Windows.

WinSCP lets us upload, download and edit files on our VM. Download and install it from here.

We're also going to need to generate a public and a private SSH key. PuTTYgen is a great tool for doing this, so let's get that downloaded and installed too.

First let's create the SSH key pair. Open up PuTTYgen and click Generate. Then enter your Google account name into the Key Comment field. This is the same username shown in the SSH terminal we've been using.

GCP Username

We need to save two keys. First, copy the public key from the top of the window, under the header 'Public key for pasting into OpenSSH authorized_keys file:', paste it into a text file and keep it safe. Next, click 'Save private key', save this in the same location as the public key from the previous step, and select the option to NOT password protect it.

PuTTY Key Generator

Now we have our key pair, it's time to enter the public one on the Google Cloud Platform. Select Metadata from the Compute Engine section of the GCP left hand menu.

Now select SSH Keys in the top nav and add the public key we saved in the previous step.

With the public SSH key set up on our Compute Engine instance, we can now access the files on our VM using WinSCP and the private key.

Open up WinSCP and select 'New Session'. Click 'New Site', enter a name, add the IP address of the VM into the 'Host Name' section and enter your GCP username into the 'User name' field.

We are using the private key to access our VM, so select 'Advanced', then 'Authentication' from the window that comes up. Next, select the private key file and hit 'OK'.

The setup is complete. Hit save and then after selecting the site hit login and we’ll be able to access and upload files onto the server!

NGINX

NGINX is a free and extremely fast web server we'll be using to host our application and act as a reverse proxy. The 'command' in the docker-compose file ensures that every 6 hours it reloads the SSL certificate, which may have been updated by certbot.

Along with the volumes pointing to the certificate and the challenge request data, we also mount the configuration file used to run our application. Let's dive into that now.

The first part of the configuration is the upstream statement. This is used to define groups of servers that can be referenced by proxy_pass. Here we're referencing the container running the api. Within the set of services running in our docker compose we can address them using their service name, in this case 'api' running on port 880. We use this at the bottom of the second server block to proxy requests coming into our domain matching the pattern 'gcpblog.dev/api' to our api application.
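Based on that description, the upstream block looks something like this; 'web-api' is the name referenced by the proxy_pass directive further down, and the service name and port are the ones quoted above, so check the app.conf in 'vm-files' for the exact values:

upstream web-api {
    # 'api' is the docker-compose service name, resolved on the compose network
    server api:880;
}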

Next we have our first server block; these are analogous to Apache's VirtualHosts. Here we are listening for incoming HTTP requests on port 80 to gcpblog.dev. The first location block handles returning the challenge data required by Let's Encrypt to prove ownership of the domain. We need to provide this over HTTP as we won't have set up our SSL cert at this point.

The following statement ensures any other requests for the HTTP version of the site are redirected with a 301 code to the HTTPS version of the site.
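Putting those pieces together, the HTTP server block follows the standard certbot webroot pattern, something along these lines (the challenge path matches the ./data/certbot/www volume mounted into NGINX; check app.conf for the exact file):

server {
    listen 80;
    server_name gcpblog.dev www.gcpblog.dev;

    # serve the Let's Encrypt challenge files written by certbot
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }

    # redirect everything else to the HTTPS version of the site
    location / {
        return 301 https://$host$request_uri;
    }
}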

The following server block handles the HTTPS requests coming in on port 443. We include the paths to the SSL certificate we'll be setting up in the next step, along with the other associated SSL configuration.
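The opening of that block looks roughly like this; the certificate paths are the ones we edit in app.conf later on, while the include and dhparam lines come from the nginx-certbot template this setup is based on, so treat them as assumptions:

server {
    listen 443 ssl;
    server_name gcpblog.dev www.gcpblog.dev;

    ssl_certificate /etc/letsencrypt/live/gcpblog.dev/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/gcpblog.dev/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # location blocks covered below go here
}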

Now we set up the proxy configuration for the website files we uploaded to the GCP bucket in the previous step. The more specific our location statement, the higher the priority it is given.

The first statement uses an exact-match syntax, meaning any requests coming directly to the root of our gcpblog.dev domain are forwarded to the index.html file in our bucket, loading our Angular application.

location = / {
    proxy_pass https://storage.googleapis.com/gcpblog-bucket/index.html;
}

This statement serves up the files we need, mapping any CSS, JavaScript and image files we have hosted in our bucket.

location / {
    proxy_pass https://storage.googleapis.com/gcpblog-bucket/;
}

We do have a problem with paths now. If we navigate through the Angular application to the https://www.gcpblog.dev/add-article page and then hit F5 to refresh, the GCP bucket will have no idea what to do with that path, as it doesn't exist on the server. What we need to do is redirect it, and all other routes in the application, to the index.html file and let Angular handle the routing.

location /add-post {
    proxy_pass https://storage.googleapis.com/gcpblog-bucket/index.html;
}

If you remember from the first article, our articles have an SEO-friendly URL, with the title turned into a slug and used to identify them, such as 'https://www.gcpblog.dev/article/angular-11/'.

The block below matches any incoming requests beginning with '/article/' followed by any characters. We then use a rewrite rule to take that path and rewrite it to /index.html. The 'last' flag is used to stop processing further rewrites and pass the result on.

If we didn't take these steps then everything in the path after the domain would be passed through the proxy_pass directive to the storage bucket and we'd get an error.

location /article {
    rewrite ^/article/(.*)$ /index.html last;
    proxy_pass https://storage.googleapis.com/gcpblog-bucket/index.html;
}

Finally, the last location statement takes any requests to the api and passes them, along with the rest of the path, over to the 'web-api' group we defined earlier in the upstream directive.

location /api/ {
    proxy_pass http://web-api/;
}

Let’s Encrypt, Certbot and enabling HTTPS

Ideally all sites should be running HTTPS, for security, SEO and the peace of mind it gives your users. Here we dive into how to get a free SSL certificate for your site and automate the process of retrieving new certificates when the current one expires.

I want to take this opportunity to thank Philipp for his excellent article on the subject which I used to figure all this out. While I’ll be running over this, I strongly suggest you read through his article which goes into plenty of depth and will help you work through any issues you may have.

Let’s Encrypt is a free, automated, and open certificate authority. In order to use this service and automate much of the process we’ll be using a certbot image.

Certbot is a free, open source software tool for automatically using Let’s Encrypt certificates on manually-administrated websites to enable HTTPS.

https://certbot.eff.org/about/

First let's look over the docker-compose file again to see what we've needed to add to get this working.

certbot:
  image: certbot/certbot
  restart: unless-stopped
  volumes:
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

nginx:
  image: nginx:1.15-alpine
  restart: unless-stopped
  volumes:
    - ./data/nginx:/etc/nginx/conf.d
    - ./data/certbot/conf:/etc/letsencrypt
    - ./data/certbot/www:/var/www/certbot
  ports:
    - "80:80"
    - "443:443"
  command: '/bin/sh -c ''while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'''

First we specify the certbot service image and make sure we restart it if it errors. Next we create two shared volumes which are used by both certbot and nginx.

In order for Let's Encrypt to verify you have control of a domain, it issues a challenge request. This challenge data needs to be created by certbot and served by our NGINX server. We also need to serve up our SSL certificate via NGINX. This is why the two volumes are shared by both NGINX and the certbot service.

The next line to cover is the 'entrypoint' in the certbot service. By default the certbot service won't automatically renew our certificate; this line fixes that by checking every 12 hours whether our certificate is about to expire and renewing it if it is.

We do have a problem, however: we need to create a dummy certificate. From Philipp's article:

Now for the tricky part. We need nginx to perform the Let's Encrypt validation. But nginx won't start if the certificates are missing.

So what do we do? Create a dummy certificate, start nginx, delete the dummy and request the real certificates.
Luckily, you don’t have to do all this manually, I have created a convenient script for this.

SSH onto our VM and execute the following statement to pull down his script.

curl -L https://raw.githubusercontent.com/wmnnd/nginx-certbot/master/init-letsencrypt.sh > init-letsencrypt.sh

Putting it Live!

Now we need to use WinSCP to edit the init-letsencrypt.sh shell script file that’s now on our server.

Open up the site in WinSCP and right click 'Edit' on the 'init-letsencrypt.sh' file. Now find the domains line and add your domain rather than example.com. For example:

domains=(gcpblog.dev www.gcpblog.dev)

Now upload all of the files and the data folder in the 'vm-files' folder to the default folder on the VM using WinSCP.

Next edit the app.conf file in the following folder and change the two instances of server_name to the domain you’re using.

GcpBlog\vm-files\data\nginx\app.conf

Then update the SSL certificate paths to match the domain name too. For example.

ssl_certificate /etc/letsencrypt/live/gcpblog.dev/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/gcpblog.dev/privkey.pem;

Now point your domain A record at the server.

Once we have waited for our A record update to propagate, it’s time to get our SSL certificate.

First log in to Docker on the VM so we can pull down our Api image.

docker login

Now set execute permissions on the script file.

chmod +x init-letsencrypt.sh

Then get our cert by running the following.

sudo ./init-letsencrypt.sh

You should now get a success message stating that a certificate has been correctly generated for your domain.

Now we just need to start the site by bringing up our containers through Docker Compose.

docker-compose up
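Running it this way keeps the containers attached to your SSH session, which is handy for watching the logs on first start. Once everything comes up cleanly, you'll probably want to run it detached so the site stays up after you log out:

docker-compose up -d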

Now when you navigate to your domain you should see the site!
