ALC 4 Cloud Challenge Phase II using GCP Virtual Machine

I will show you how to use GCP's Compute Engine virtual machines to complete the challenge. All of this will be done from either Windows PowerShell (for Windows users) or a Bash terminal (for Ubuntu users) rather than from the GCP console, since the console route would mean a lot of screenshots. Still, keep your GCP console open so you can see the results of your terminal commands.

Prerequisites:

  1. Cloud SDK installed and setup on your local machine

The steps will be:

  1. Create the virtual machine,

Create the virtual Machine:

gcloud beta compute --project=spikey-bigtable-new instances create challenge-vm --zone=us-central1-a --machine-type=g1-small --tags=http-server,https-server --image=ubuntu-1604-xenial-v20191010 --image-project=ubuntu-os-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=ubuntu-vm

Special note: we are using

  • “--machine-type” of “g1-small” to reduce cost

The other options should be self-explanatory.

Install Nodejs and Docker:

Stop the VM so we can add a startup script that installs both tools; this can be done from your console. From the instance details page, complete the following steps:

  1. Stop the instance
  2. Edit the instance and add a metadata entry with the key “startup-script”, supplying the startup script contents directly as the value:

#! /bin/bash
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash -
sudo apt-get install -y nodejs

Simply copy that and paste it into your startup script window. Now restart the VM. If the applications installed by the script aren't available when you test or try to use them, run this in the open SSH terminal:

sudo google_metadata_script_runner --script-type startup --debug
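If you prefer your local terminal to the console for attaching the startup script, the metadata can also be set with the gcloud CLI. This is a sketch, not a step from the original walkthrough: it only builds and echoes the command, and “startup.sh” is an assumed local file containing the script shown above.

```shell
# Build (but do not run) the command that attaches a startup script
# from a local file to the stopped instance.
SCRIPT_FILE="startup.sh"   # assumed local file holding the script above
CMD="gcloud compute instances add-metadata challenge-vm --zone=us-central1-a --metadata-from-file startup-script=$SCRIPT_FILE"
echo "$CMD"
```

Run the echoed command yourself once you are happy with it, then start the instance again.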

Create your applications:

You will SSH into your VM and set up your React app. You will be working from a terminal, either locally with

gcloud compute ssh name_of_instance

or using the “SSH” button on the instance in the GCP console. Create a directory we will be working from:

mkdir challenge-home
cd challenge-home

Now we run the commands to:

  1. Install yarn:
sudo npm install -g yarn

2. Install the “create-react-app” tool:

yarn global add create-react-app

3. Create the React app, run it, and test it:

# windows users
yarn create react-app challenge-app
cd challenge-app
yarn start
# for Ubuntu users
create-react-app challenge-app
cd challenge-app
yarn start

4. Create a firewall rule to allow you to test the app running on your VM instance from any browser:

gcloud compute firewall-rules create node-test-rule --action=ALLOW --direction=INGRESS --rules=tcp:3000 --source-ranges="0.0.0.0/0"

Now copy the instance's external IP address and paste it into a browser in the following format: “http://35.223.16.237:3000/”. Note that React is being served over “http”, not “https”, and don't forget to add the port, “3000”.
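Rather than copying the external IP from the console each time, you can fetch it with gcloud from your local terminal. A sketch that only builds and echoes the command, so it is safe to run anywhere:

```shell
# Build (but do not run) the command that prints challenge-vm's
# external (NAT) IP address using a gcloud output projection.
CMD="gcloud compute instances describe challenge-vm --zone=us-central1-a --format=value(networkInterfaces[0].accessConfigs[0].natIP)"
echo "$CMD"
```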

Now shut it down with this keyboard combination:

ctrl + c

5. Create a Docker image using a Dockerfile. We will create the file, called “Dockerfile”, with the following command:

touch Dockerfile

and add the following lines to it:

FROM node as build-deps
WORKDIR /usr/src/app
COPY package.json yarn.lock ./
RUN yarn
COPY . ./
RUN yarn build
FROM nginx:1.12-alpine
COPY --from=build-deps /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
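One small addition worth considering here (not part of the original steps): a `.dockerignore` file next to the Dockerfile, so that `COPY . ./` does not drag `node_modules`, the local `build` output, or the git history into the build context:

```
node_modules
build
.git
```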

We now create the image with the docker command:

sudo docker build . -t udoyen/challenge-image

Note that “udoyen” is my Docker Hub username; use yours. Now test the image:

sudo docker run -d -p 8080:80 --name alc-challenge udoyen/challenge-image

Create a firewall rule to access the Docker container:

gcloud compute firewall-rules create docker-rule --action=ALLOW --direction=INGRESS --rules=tcp:8080 --source-ranges="0.0.0.0/0"

Copy the instance's external IP address and paste it into a browser in the format “http://35.223.16.237:8080”.

If that goes well, stop the container and prepare to push the image to Google Container Registry.

sudo docker container stop alc-challenge

Push the created image to Google Container Registry:

This will involve these steps:

  1. Set up the instance to allow pushing of Docker images to Google Container Registry:
gcloud compute instances stop challenge-vm --zone=us-central1-a
gcloud compute instances set-service-account challenge-vm --scopes=storage-rw --zone=us-central1-a
gcloud compute instances start challenge-vm --zone=us-central1-a

2. Set up the Docker configuration:

gcloud auth configure-docker
sudo chown $USER:$USER ~/.docker/config.json
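After `gcloud auth configure-docker` runs, `~/.docker/config.json` should contain a credential-helper entry along these lines (a sketch of just the relevant fragment; your file may hold other keys too):

```json
{
  "credHelpers": {
    "gcr.io": "gcloud"
  }
}
```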

3. Remove the default Cloud SDK installed with the Ubuntu package manager (this is crucial to get the image to push successfully to Google Container Registry):

a. Download the Linux installer using the curl command:

curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-267.0.0-linux-x86_64.tar.gz

b. Untar it:

tar xvf google-cloud-sdk-267.0.0-linux-x86_64.tar.gz

c. Change into the untarred folder and run the installer, accepting all prompts. Also note that “&&” means the second command won't run if the first one fails:

cd google-cloud-sdk && ./install.sh

d. Remove the old installation:

sudo rm -rf /usr/lib/google-cloud-sdk

e. Install the gcloud Docker credential helper component (optional):

gcloud components install docker-credential-gcr

f. Authenticate with docker:

gcloud auth print-access-token | sudo docker login -u oauth2accesstoken --password-stdin https://gcr.io

4. Create a Docker access token to use; don't use your current Docker Hub password. Follow this link: https://hub.docker.com/settings/security

5. Log into Docker:

a. Create a firewall rule to allow outbound traffic to the Docker site:

gcloud compute firewall-rules create docker-rule-out --allow=tcp --direction=EGRESS --destination-ranges="0.0.0.0/0"

b. Log into Docker using your Docker credentials (use the token from the previous step as the password):

docker login --username <your_username>

6. Tag the image. Note that the “gcr.io/spikey-bigtable-new” part is a must: “gcr.io” is the registry host and “spikey-bigtable-new” is the project name on GCP. The “challengeapp:ubuntu-version” part can be anything you like:

sudo docker tag udoyen/challenge-image gcr.io/spikey-bigtable-new/challengeapp:ubuntu-version
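The structure of that tag can be checked in plain shell. This sketch splits the image reference on “/” and “:” to recover the host, project, and name:tag pieces described above; it is pure string handling, so it is safe to run anywhere:

```shell
# Split a GCR image reference into its parts.
IMAGE="gcr.io/spikey-bigtable-new/challengeapp:ubuntu-version"
HOST="${IMAGE%%/*}"      # registry host: gcr.io
REST="${IMAGE#*/}"
PROJECT="${REST%%/*}"    # GCP project: spikey-bigtable-new
NAME_TAG="${REST#*/}"    # image name and tag: challengeapp:ubuntu-version
TAG="${NAME_TAG#*:}"     # tag: ubuntu-version
echo "$HOST / $PROJECT / $NAME_TAG"
```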

7. Push to Google Container Registry:

sudo docker push gcr.io/spikey-bigtable-new/challengeapp:ubuntu-version

Man, that was a handful. The issue with “docker-credential-gcr” not being installable via the Ubuntu package manager's version of the SDK was not what I had expected; I thought it would be easy. Now let's move on to the stage of creating the Kubernetes cluster.

Create kubernetes cluster:

Create cluster:

gcloud beta container --project "spikey-bigtable-new" clusters create "challenge-cluster" --zone "us-central1-a"  --machine-type "g1-small" --num-nodes "3"

Deploy the GCR container to the cluster:

1. Install the kubectl tool:

gcloud components install kubectl
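If kubectl was installed after the cluster was created, it may not yet know about the cluster; fetching the cluster credentials explicitly should fix that. A sketch that only builds and echoes the command:

```shell
# Build (but do not run) the command that points kubectl at the new cluster.
CMD="gcloud container clusters get-credentials challenge-cluster --zone us-central1-a --project spikey-bigtable-new"
echo "$CMD"
```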

2. Create a firewall rule to allow the deployment, then close the SSH session and open a new one. (This step is only needed if you are working from the instance's terminal; if you are using your own system's terminal, whether Windows or Linux, it is not needed.) If this part is giving you a headache, close the instance SSH session and deploy from your local terminal instead, since we now have the image in Google Container Registry:

gcloud compute --project=spikey-bigtable-new firewall-rules create kubernetes-deploy-out --direction=EGRESS --priority=1000 --network=default --action=ALLOW --rules=tcp:8080 --destination-ranges=0.0.0.0/0

3. Deploy the image we stored in Google Container Registry:

kubectl create deployment challenge-deploy --image=gcr.io/spikey-bigtable-new/challengeapp:ubuntu-version

Expose the deployment using a LoadBalancer:

kubectl expose deployment challenge-deploy --type=LoadBalancer --port 80 --target-port 80
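For reference, the two kubectl commands above correspond roughly to this manifest (a sketch: the `app: challenge-deploy` label matches what `kubectl create deployment` generates from the deployment name):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: challenge-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: challenge-deploy
  template:
    metadata:
      labels:
        app: challenge-deploy
    spec:
      containers:
      - name: challengeapp
        image: gcr.io/spikey-bigtable-new/challengeapp:ubuntu-version
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: challenge-deploy
spec:
  type: LoadBalancer
  selector:
    app: challenge-deploy
  ports:
  - port: 80
    targetPort: 80
```

Once the LoadBalancer gets an external IP (check with `kubectl get service challenge-deploy`), the app is reachable on port 80.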

Finally, we are at the end. Some of the challenge participants were using machines that were unable to run Docker, so I decided to show how we could use the GCP resources available to us to complete the challenge. It is noticeably longer than using one's own system, but better than doing nothing. Now that we are done, please clean up the resources we provisioned in the course of this challenge, such as the virtual machine instance and the various firewall rules; and of course, you decide when to bring down your cluster after completing the challenge. Do leave us a clap if you found this useful!
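The cleanup can be scripted. This sketch only echoes each delete command (flip DRY_RUN to 0 to actually run them); the resource names are the ones created earlier in this walkthrough:

```shell
#!/bin/sh
# Cleanup sketch: echoes each delete command instead of running it.
# Set DRY_RUN=0 to execute for real.
DRY_RUN=1
run() {
  if [ "$DRY_RUN" -eq 1 ]; then echo "$*"; else "$@"; fi
}
run gcloud compute instances delete challenge-vm --zone=us-central1-a --quiet
run gcloud compute firewall-rules delete node-test-rule docker-rule docker-rule-out kubernetes-deploy-out --quiet
run gcloud container clusters delete challenge-cluster --zone=us-central1-a --quiet
```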
