Thursday, February 18, 2021

Creating Atlas Cluster GCP /24 in Terraform

 

1. Generating MongoDB Atlas Provider API Keys

In order to configure the authentication with the MongoDB Atlas provider, an API key must be generated.

We log in to the MongoDB Atlas portal and select our organization (or create a new one), then open Access Manager and click the Create API Key button.


We enter a description for the API Key and select the Organization Project Creator permission.


We then copy the private API key to a safe place.

Finally, we can see our API keys listed in the portal.

2. Configuring the MongoDB Atlas Provider

We will need to configure the MongoDB Atlas provider using the API keys generated in the previous step.

We have two options: using Static Credentials or Environment Variables.

2.1. Configuring the MongoDB Atlas Provider using Static Credentials

We create a file called provider-main.tf to configure both the Terraform and MongoDB Atlas providers, and add the following code:

# Define Terraform provider
terraform {
  required_version = ">= 0.12"
}

# Define the MongoDB Atlas Provider
provider "mongodbatlas" {
  public_key  = var.atlas_public_key
  private_key = var.atlas_private_key
}

and create the file provider-variables.tf to manage the provider variables:

variable "atlas_public_key" {
type = string
description = "MongoDB Atlas Public Key"
}
variable "atlas_private_key" {
type = string
description = "MongoDB Atlas Private Key"
}
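Note: if you are on Terraform 0.13 or later, the Atlas provider also needs to be declared in a required_providers block so Terraform knows where to download it from. A minimal sketch of the terraform block in that case (the version constraint is only an example):

terraform {
  required_version = ">= 0.13"

  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 0.8" # example constraint; pin whatever matches your setup
    }
  }
}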

2.2. Configuring the MongoDB Atlas Provider using Environment Variables

We can also configure our API credentials via the environment variables, MONGODB_ATLAS_PUBLIC_KEY and MONGODB_ATLAS_PRIVATE_KEY, for our public and private MongoDB Atlas API key pair.

We create a file called provider.tf to configure both the Terraform and MongoDB Atlas providers, and add the following code:

# Define Terraform provider
terraform {
  required_version = ">= 0.12"
}

# Define the MongoDB Atlas Provider
provider "mongodbatlas" {}

Usage:

$ export MONGODB_ATLAS_PUBLIC_KEY="mncbcoqr"
$ export MONGODB_ATLAS_PRIVATE_KEY="c35902a3-a047-9497-c2b3-341415372389"
$ terraform init

3. Creating a MongoDB Atlas Project

MongoDB Atlas Projects (also known as Groups) help us organize our projects and resources inside the organization.

To create a project using Terraform, we will need the MongoDB Atlas Organization ID and the Organization Owner or Organization Project Creator permission (defined when we created the MongoDB Atlas provider API keys in step 1).

3.1. Getting the Organization ID

In order to create a MongoDB Atlas project, we will need to get the Organization ID from the MongoDB Atlas portal.

We click on the Settings icon, located next to our organization name, and copy the Organization ID.


3.2. Creating a MongoDB Atlas Project using Terraform

Create a file atlas-main.tf and add the following code to create a project:

# Create a Project
resource "mongodbatlas_project" "atlas-project" {
  org_id = var.atlas_org_id
  name   = var.atlas_project_name
}

and a file called atlas-variables.tf to manage the project variables:

# Atlas Organization ID
variable "atlas_org_id" {
  type        = string
  description = "Atlas organization id"
}

# Atlas Project Name
variable "atlas_project_name" {
  type        = string
  description = "Atlas project name"
}

4. Creating a Database User

In this section, we will create a database user that will be applied to all MongoDB clusters within the project.

We can add multiple roles blocks to grant a single user different levels of access to several databases; a sketch follows the database user example below.

Built-in MongoDB Roles or Privileges:

  • atlasAdmin (Atlas admin)
  • readWriteAnyDatabase (Read and write to any database)
  • readAnyDatabase (Only read any database)

Custom User Privileges:

  • backup
  • clusterMonitor
  • dbAdmin
  • dbAdminAnyDatabase
  • enableSharding
  • read
  • readWrite
  • readWriteAnyDatabase
  • readAnyDatabase

Note: In Atlas deployments of MongoDB, the authentication database resource (auth_database_name) is always the admin database.

We add the following code to the existing atlas-main.tf file to create a random password and a database user:

# Create a Database Password
resource "random_password" "db-user-password" {
  length           = 16
  special          = true
  override_special = "_%@"
}

# Create a Database User
resource "mongodbatlas_database_user" "db-user" {
  username           = "galaxy-read"
  password           = random_password.db-user-password.result
  project_id         = mongodbatlas_project.atlas-project.id
  auth_database_name = "admin"

  roles {
    role_name     = "read"
    database_name = "${var.atlas_project_name}-db"
  }
}
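As mentioned above, a single user can carry several roles blocks. A sketch of what that could look like, using a second, purely illustrative database name:

# Example only: a user with read access to the main database and
# read/write access to a hypothetical analytics database
resource "mongodbatlas_database_user" "reporting-user" {
  username           = "galaxy-reporting"
  password           = random_password.db-user-password.result # in practice, generate a separate password per user
  project_id         = mongodbatlas_project.atlas-project.id
  auth_database_name = "admin"

  roles {
    role_name     = "read"
    database_name = "${var.atlas_project_name}-db"
  }

  roles {
    role_name     = "readWrite"
    database_name = "${var.atlas_project_name}-analytics"
  }
}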

5. Granting IP Access to our MongoDB Atlas Project

We can use the mongodbatlas_project_ip_whitelist resource to grant access from IPs and CIDRs to clusters within the Project.

Note: we can use cidr_block or ip_address. They are mutually exclusive.

Using CIDR Block

In the example below, we add the CIDR block 200.171.171.0/24 to the project whitelist.

resource "mongodbatlas_project_ip_whitelist" "atlas-whitelist" {
project_id = mongodbatlas_project.atlas-project.id
cidr_block = "200.171.171.0/24"
comment = "CIDR block for main office"
}

Using the IP Address

In this example, we will use the http data source to get our current IP address and pass it to the ip_address parameter.

# Get My IP Address
data "http" "myip" {
  url = "http://ipv4.icanhazip.com"
}

# Whitelist my current IP address
resource "mongodbatlas_project_ip_whitelist" "project-whitelist-myip" {
  project_id = mongodbatlas_project.atlas-project.id
  ip_address = chomp(data.http.myip.body)
  comment    = "IP Address for home office"
}
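Note: in newer releases of the mongodbatlas provider, the whitelist resource has been renamed to mongodbatlas_project_ip_access_list. If you are on one of those versions, the equivalent definition would look like this (same arguments, different resource type):

resource "mongodbatlas_project_ip_access_list" "project-access-myip" {
  project_id = mongodbatlas_project.atlas-project.id
  ip_address = chomp(data.http.myip.body)
  comment    = "IP Address for home office"
}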

The whitelist entries can then be reviewed in the MongoDB Atlas portal.

6. Creating a MongoDB Atlas Cluster

In this section, we will use the mongodbatlas_cluster Terraform resource to create a Cluster resource. This resource lets us create, edit, and delete clusters.

Note: the MongoDB Atlas provider (and the Atlas API) does not support creating Free tier (M0) clusters.

For specific details about the provider_instance_size_name and the provider_region_name, please check https://docs.atlas.mongodb.com/reference/google-gcp/

We add the following code to the existing atlas-main.tf file to create the cluster:

resource "mongodbatlas_cluster" "atlas-cluster" {
project_id = mongodbatlas_project.atlas-project.id
name = "${var.atlas_project_name}-${var.environment}-cluster"
num_shards = 1
replication_factor = 3
provider_backup_enabled = true
auto_scaling_disk_gb_enabled = true
mongo_db_major_version = "4.2"

provider_name = "GCP"
disk_size_gb = 40
provider_instance_size_name = var.cluster_instance_size_name
provider_region_name = var.atlas_region
}
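Once the cluster is created, it can be handy to expose its connection string as a Terraform output. A minimal sketch, assuming the connection_strings attribute exported by mongodbatlas_cluster (the exact attribute layout varies slightly between provider versions):

# Output the SRV connection string of the new cluster
output "atlas_cluster_connection_string" {
  value = mongodbatlas_cluster.atlas-cluster.connection_strings[0].standard_srv
}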

and the following code to the existing atlas-variables.tf file:

# Atlas Project environment
variable "environment" {
  type        = string
  description = "The environment to be built"
}

# Cluster instance size name
variable "cluster_instance_size_name" {
  type        = string
  description = "Cluster instance size name"
  default     = "M10"
}

# Atlas region
variable "atlas_region" {
  type        = string
  description = "GCP region where resources will be created"
  default     = "WESTERN_EUROPE"
}

7. Creating the Input Definition Variables File

Next, we create the input definition variables file terraform.tfvars and add the following values:

atlas_public_key           = "mncbcoqr"
atlas_private_key          = "c35902a3-a047-9497-c2b3-3414153723897"
atlas_org_id               = "5egaf79a8693fg52367876h3"
atlas_project_name         = "galaxy"
environment                = "dev"
cluster_instance_size_name = "M10"
atlas_region               = "WESTERN_EUROPE"

8. Initializing the Terraform Stack

We open a command-line console and run terraform init to initialize our Terraform stack.
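For reference (the plan step is optional, but lets us review the changes before applying them):

$ terraform init
$ terraform plan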


9. Executing the Terraform Stack

From the command-line console, we run terraform apply to execute our Terraform stack.
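For reference:

$ terraform apply
# review the proposed changes, then answer "yes" to create the Atlas resources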


And our MongoDB Atlas cluster now appears in the Atlas console!



All the code to create this project is here:

 https://github.com/jgschmitz/Atlas-GCP







Friday, November 27, 2020

 

How to set up an EKS cluster running MongoDB

Follow the AWS instructions on how to build a basic three-node Kubernetes cluster in EKS; it's pretty simple: https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html
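For example, a comparable three-node cluster can be stood up with eksctl (the cluster name and region below are just placeholders):

$ eksctl create cluster \
    --name px-mongo-demo \
    --region us-west-2 \
    --nodes 3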

You should have a three node Kubernetes cluster deployed based on the default EKS configuration.

$ kubectl get nodes
NAME                                            STATUS    ROLES     AGE       VERSION
ip-192-168-113-226.us-west-2.compute.internal   Ready     <none>    9m        v1.10.3
ip-192-168-181-241.us-west-2.compute.internal   Ready     <none>    9m        v1.10.3
ip-192-168-201-131.us-west-2.compute.internal   Ready     <none>    9m        v1.10.3


Installing Portworx in EKS

Installing Portworx on Amazon EKS is not very different from installing it on a Kubernetes cluster set up through Kops. The Portworx EKS documentation has the steps involved in running a Portworx cluster in a Kubernetes environment deployed in AWS.

The Portworx cluster needs to be up and running on EKS before proceeding to the next step. The kube-system namespace should have the Portworx pods in the Running state.

$ kubectl get pods -n=kube-system -l name=portworx
NAME             READY     STATUS    RESTARTS   AGE
portworx-42kg4   1/1       Running   0          1d
portworx-5c6pp   1/1       Running   0          1d
portworx-dqfhz   1/1       Running   0          1d

Creating a storage class for MongoDB

Once the EKS cluster is up and running, and Portworx is installed and configured, we will deploy a highly available MongoDB database.

Through storage class objects, an admin can define different classes of Portworx volumes that are offered in a cluster. These classes will be used during the dynamic provisioning of volumes. The storage class defines the replication factor, I/O profile (e.g., for a database or a CMS), and priority (e.g., SSD or HDD). These parameters impact the availability and throughput of workloads and can be specified for each volume. This is important because a production database will have different requirements than a development Jenkins cluster.

In this example, the storage class that we deploy has a replication factor of 3, the I/O profile set to "db_remote", and priority set to "high." This means that the storage will be optimized for low-latency database workloads like MongoDB and automatically placed on the highest performance storage available in the cluster. Notice that we also specify the filesystem, xfs, in the storage class.

$ cat > px-mongo-sc.yaml << EOF
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
    name: px-ha-sc
provisioner: kubernetes.io/portworx-volume
parameters:
   repl: "3"
   io_profile: "db_remote"
   priority_io: "high"
   fs: "xfs"
EOF

Create the storage class and verify that it's available.

$ kubectl create -f px-mongo-sc.yaml
storageclass.storage.k8s.io "px-ha-sc" created

$ kubectl get sc
NAME                PROVISIONER                     AGE
px-ha-sc            kubernetes.io/portworx-volume   10s
stork-snapshot-sc   stork-snapshot                  3d

Creating a MongoDB PVC on Kubernetes

We can now create a Persistent Volume Claim (PVC) based on the Storage Class. Thanks to dynamic provisioning, the claims will be created without explicitly provisioning a persistent volume (PV).

$ cat > px-mongo-pvc.yaml << EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
   name: px-mongo-pvc
   annotations:
     volume.beta.kubernetes.io/storage-class: px-ha-sc
spec:
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 1Gi
EOF

$ kubectl create -f px-mongo-pvc.yaml
persistentvolumeclaim "px-mongo-pvc" created

$ kubectl get pvc
NAME           STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
px-mongo-pvc   Bound     pvc-e0acf1df-9231-11e8-864b-0abd3d2e35a4   1Gi       RWO            px-ha-sc       19s

Deploying MongoDB on EKS

Finally, let’s create a MongoDB instance as a Kubernetes deployment object. For simplicity’s sake, we will just be deploying a single Mongo pod. Because Portworx provides synchronous replication for High Availability, a single MongoDB instance might be the best deployment option for your MongoDB database. Portworx can also provide backing volumes for multi-node MongoDB replica sets. The choice is yours.

$ cat > px-mongo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mongo  
  template:
    metadata:
      labels:
        app: mongo
    spec:
      schedulerName: stork
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: px-mongo-pvc
EOF
$ kubectl create -f px-mongo-app.yaml
deployment.extensions "mongo" created

The MongoDB deployment defined above is explicitly associated with the px-mongo-pvc claim created in the previous step.

This deployment creates a single pod running MongoDB backed by Portworx.

$ kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mongo-68cc69bc95-mxqsb   1/1       Running   0          54s

We can inspect the Portworx volume by running the pxctl tool from one of the Portworx pods.

$ VOL=`kubectl get pvc | grep px-mongo-pvc | awk '{print $3}'`
$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume inspect ${VOL}
Volume	:  607995846497198316
	Name            	 :  pvc-64b57bdd-9254-11e8-8c5e-0253036635a0
	Size            	 :  1.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Jul 28 10:53:01 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-95-234.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd607995846497198316
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  52
	Reads MS        	 :  20
	Bytes Read      	 :  225280
	Writes          	 :  106
	Writes MS       	 :  236
	Bytes Written   	 :  2453504
	IOs in progress 	 :  0
	Bytes used      	 :  10 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.95.234 (Pool 0)
		  Node 		 :  192.168.203.81 (Pool 0)
		  Node 		 :  192.168.185.157 (Pool 0)
	Replication Status	 :  Up


The output from the above command confirms the creation of the volume backing the MongoDB database instance.

Failing over MongoDB pod on Kubernetes

Populating sample data

Let’s populate the database with some sample data.

We will first find the pod that’s running MongoDB to access the shell.

$ POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`

$ kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..

Now that we are inside the shell, we can populate a collection.

db.ships.insert({name:'USS Enterprise-D',operator:'Starfleet',type:'Explorer',class:'Galaxy',crew:750,codes:[10,11,12]})
db.ships.insert({name:'USS Prometheus',operator:'Starfleet',class:'Prometheus',crew:4,codes:[1,14,17]})
db.ships.insert({name:'USS Defiant',operator:'Starfleet',class:'Defiant',crew:50,codes:[10,17,19]})
db.ships.insert({name:'IKS Buruk',operator:' Klingon Empire',class:'Warship',crew:40,codes:[100,110,120]})
db.ships.insert({name:'IKS Somraw',operator:' Klingon Empire',class:'Raptor',crew:50,codes:[101,111,120]})
db.ships.insert({name:'Scimitar',operator:'Romulan Star Empire',type:'Warbird',class:'Warbird',crew:25,codes:[201,211,220]})
db.ships.insert({name:'Narada',operator:'Romulan Star Empire',type:'Warbird',class:'Warbird',crew:65,codes:[251,251,220]})

Let’s run a few queries on the Mongo collection.

Find one arbitrary document:

db.ships.findOne()
{
	"_id" : ObjectId("5b5c16221108c314d4c000cd"),
	"name" : "USS Enterprise-D",
	"operator" : "Starfleet",
	"type" : "Explorer",
	"class" : "Galaxy",
	"crew" : 750,
	"codes" : [
		10,
		11,
		12
	]
}

Find all documents, with nice formatting:

db.ships.find().pretty()
…..
{
	"_id" : ObjectId("5b5c16221108c314d4c000d1"),
	"name" : "IKS Somraw",
	"operator" : " Klingon Empire",
	"class" : "Raptor",
	"crew" : 50,
	"codes" : [
		101,
		111,
		120
	]
}
{
	"_id" : ObjectId("5b5c16221108c314d4c000d2"),
	"name" : "Scimitar",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 25,
	"codes" : [
		201,
		211,
		220
	]
}
…..

Show only the names of the ships:

db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }


Find one document by attribute:

db.ships.findOne({'name':'USS Defiant'})
{
	"_id" : ObjectId("5b5c16221108c314d4c000cf"),
	"name" : "USS Defiant",
	"operator" : "Starfleet",
	"class" : "Defiant",
	"crew" : 50,
	"codes" : [
		10,
		17,
		19
	]
}

Exit from the client shell to return to the host.

Simulating node failure

Now, let’s simulate the node failure by cordoning off the node on which MongoDB is running.

$ NODE=`kubectl get pods -l app=mongo -o wide | grep -v NAME | awk '{print $7}'`

$ kubectl cordon ${NODE}
node "ip-192-168-217-164.us-west-2.compute.internal" cordoned

The above command disabled scheduling on one of the nodes.

$ kubectl get nodes
NAME                                            STATUS                     ROLES     AGE       VERSION
ip-192-168-128-254.us-west-2.compute.internal   Ready                      <none>    3d        v1.10.3
ip-192-168-217-164.us-west-2.compute.internal   Ready,SchedulingDisabled   <none>    3d        v1.10.3
ip-192-168-94-92.us-west-2.compute.internal     Ready                      <none>    3d        v1.10.3

Now, let’s go ahead and delete the MongoDB pod.

$ POD=`kubectl get pods -l app=mongo -o wide | grep -v NAME | awk '{print $1}'`
$ kubectl delete pod ${POD}
pod "mongo-68cc69bc95-mxqsb" deleted

As soon as the pod is deleted, it is relocated to the node with the replicated data, even when that node is in a different Availability Zone. STorage ORchestrator for Kubernetes (STORK), a Portworx-contributed open source storage scheduler, ensures that the pod is rescheduled on the exact node where the data is stored.

Let’s verify this by running the below command. We will notice that a new pod has been created and scheduled in a different node.

$ kubectl get pods -l app=mongo -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP               NODE
mongo-68cc69bc95-thqbm   1/1       Running   0          19s       192.168.82.119   ip-192-168-94-92.us-west-2.compute.internal

Let’s uncordon the node to bring it back to action.

$ kubectl uncordon ${NODE}
node "ip-192-168-217-164.us-west-2.compute.internal" uncordoned

Finally, let’s verify that the data is still available.

Verifying that the data is intact

Let’s find the pod name and run the ‘exec’ command, and then access the Mongo shell.

$ POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`
$ kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..

We will query the collection to verify that the data is intact.

Find one arbitrary document:

db.ships.findOne()
{
	"_id" : ObjectId("5b5c16221108c314d4c000cd"),
	"name" : "USS Enterprise-D",
	"operator" : "Starfleet",
	"type" : "Explorer",
	"class" : "Galaxy",
	"crew" : 750,
	"codes" : [
		10,
		11,
		12
	]
}

Find all documents, with nice formatting:

db.ships.find().pretty()
…..
{
	"_id" : ObjectId("5b5c16221108c314d4c000d1"),
	"name" : "IKS Somraw",
	"operator" : " Klingon Empire",
	"class" : "Raptor",
	"crew" : 50,
	"codes" : [
		101,
		111,
		120
	]
}
{
	"_id" : ObjectId("5b5c16221108c314d4c000d2"),
	"name" : "Scimitar",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 25,
	"codes" : [
		201,
		211,
		220
	]
}
…..

Show only the names of the ships:

db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }

Find one document by attribute:

db.ships.findOne({'name':'Narada'})
{
	"_id" : ObjectId("5b5c16221108c314d4c000d3"),
	"name" : "Narada",
	"operator" : "Romulan Star Empire",
	"type" : "Warbird",
	"class" : "Warbird",
	"crew" : 65,
	"codes" : [
		251,
		251,
		220
	]
}

Observe that the collection is still there and all the content is intact! Exit from the client shell to return to the host.

Performing Storage Operations on MongoDB

After testing end-to-end failover of the database, let’s perform StorageOps for MongoDB on our EKS cluster.

Expanding the Kubernetes Volume with no downtime

Currently, the Portworx volume that we created at the beginning is 1GiB in size. We will now expand it to double the storage capacity.

First, let’s get the volume name and inspect it through the pxctl tool.

If you have access, SSH into one of the nodes and run the following command.

$ VOL=`/opt/pwx/bin/pxctl volume list --label pvc=px-mongo-pvc | grep -v ID | awk '{print $1}'`
$ /opt/pwx/bin/pxctl v i $VOL
Volume	:  607995846497198316
	Name            	 :  pvc-64b57bdd-9254-11e8-8c5e-0253036635a0
	Size            	 :  1.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Jul 28 10:53:01 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-95-234.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd607995846497198316
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  52
	Reads MS        	 :  20
	Bytes Read      	 :  225280
	Writes          	 :  106
	Writes MS       	 :  236
	Bytes Written   	 :  2453504
	IOs in progress 	 :  0
	Bytes used      	 :  10 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.95.234 (Pool 0)
		  Node 		 :  192.168.203.81 (Pool 0)
		  Node 		 :  192.168.185.157 (Pool 0)
	Replication Status	 :  Up

Notice the current Portworx volume. It is 1GiB. Let’s expand it to 2GiB.

$ /opt/pwx/bin/pxctl volume update $VOL --size=2
Update Volume: Volume update successful for volume 607995846497198316

Check the new volume size.

$ /opt/pwx/bin/pxctl v i $VOL
Volume	:  607995846497198316
	Name            	 :  pvc-64b57bdd-9254-11e8-8c5e-0253036635a0
	Size            	 :  2.0 GiB
	Format          	 :  xfs
	HA              	 :  3
	IO Priority     	 :  LOW
	Creation time   	 :  Jul 28 10:53:01 UTC 2018
	Shared          	 :  no
	Status          	 :  up
	State           	 :  Attached: ip-192-168-95-234.us-west-2.compute.internal
	Device Path     	 :  /dev/pxd/pxd607995846497198316
	Labels          	 :  namespace=default,pvc=px-mongo-pvc
	Reads           	 :  65
	Reads MS        	 :  20
	Bytes Read      	 :  278528
	Writes          	 :  218
	Writes MS       	 :  344
	Bytes Written   	 :  3149824
	IOs in progress 	 :  0
	Bytes used      	 :  11 MiB
	Replica sets on nodes:
		Set  0
		  Node 		 :  192.168.95.234 (Pool 0)
		  Node 		 :  192.168.203.81 (Pool 0)
		  Node 		 :  192.168.185.157 (Pool 0)
	Replication Status	 :  Up


Taking Snapshots of a Kubernetes volume and restoring the database

Portworx supports creating snapshots for Kubernetes PVCs.

Let’s create a snapshot for the PVC we created for MongoDB.

$ cat > px-mongo-snap.yaml << EOF
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: px-mongo-snapshot
  namespace: default
spec:
  persistentVolumeClaimName: px-mongo-pvc
EOF
$ kubectl create -f px-mongo-snap.yaml
volumesnapshot.volumesnapshot.external-storage.k8s.io "px-mongo-snapshot" created

Verify the creation of volume snapshot.

$ kubectl get volumesnapshot
NAME                AGE
px-mongo-snapshot   1m
$ kubectl get volumesnapshotdatas
NAME                                                       AGE
k8s-volume-snapshot-9e539249-9255-11e8-b018-e2f4b6cbb690   2m

With the snapshot in place, let’s go ahead and delete the database.

$ POD=`kubectl get pods -l app=mongo | grep Running | grep 1/1 | awk '{print $1}'`
$ kubectl exec -it $POD mongo
db.ships.drop()
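Still inside the mongo shell, a quick optional check confirms the collection is gone before we restore it:

db.ships.find().count()
// returns 0 now that the collection has been dropped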

Since snapshots are just like volumes, we can use them to start a new instance of MongoDB. Let's create a new instance of MongoDB by restoring the snapshot data.

$ cat > px-mongo-snap-pvc.yaml << EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-mongo-snap-clone
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: px-mongo-snapshot
spec:
  accessModes:
     - ReadWriteOnce
  storageClassName: stork-snapshot-sc
  resources:
    requests:
      storage: 2Gi
EOF

$ kubectl create -f px-mongo-snap-pvc.yaml
persistentvolumeclaim "px-mongo-snap-clone" created

From the new PVC, we will create a MongoDB pod.

$ cat > px-mongo-snap-restore.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-snap
spec:
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  replicas: 1
  selector:
    matchLabels:
      app: mongo-snap
  template:
    metadata:
      labels:
        app: mongo-snap
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: px/running
                operator: NotIn
                values:
                - "false"
              - key: px/enabled
                operator: NotIn
                values:
                - "false"
      containers:
      - name: mongo
        image: mongo
        imagePullPolicy: "Always"
        ports:
        - containerPort: 27017
        volumeMounts:
        - mountPath: /data/db
          name: mongodb
      volumes:
      - name: mongodb
        persistentVolumeClaim:
          claimName: px-mongo-snap-clone
EOF

$ kubectl create -f px-mongo-snap-restore.yaml
deployment.extensions "mongo-snap" created

Verify that the new pod is in running state.

$ kubectl get pods -l app=mongo-snap
NAME                         READY     STATUS    RESTARTS   AGE
mongo-snap-6cd7d5b7f-gcrw2   1/1       Running   0          5m

Finally, let’s access the sample data created earlier in the walkthrough.

$ POD=`kubectl get pods -l app=mongo-snap | grep Running | grep 1/1 | awk '{print $1}'`
$ kubectl exec -it $POD mongo
MongoDB shell version v4.0.0
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.0
Welcome to the MongoDB shell.
…..
db.ships.find({}, {name:true, _id:false})
{ "name" : "USS Enterprise-D" }
{ "name" : "USS Prometheus" }
{ "name" : "USS Defiant" }
{ "name" : "IKS Buruk" }
{ "name" : "IKS Somraw" }
{ "name" : "Scimitar" }
{ "name" : "Narada" }

Notice that the collection is still there with the data intact. We can also push the snapshot to Amazon S3 if we want to create a Disaster Recovery backup in another Amazon region. Portworx snapshots also work with any S3-compatible object storage, so the backup can go to a different cloud or even an on-premises data center.
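For reference, pushing a snapshot to object storage is typically driven through pxctl cloud snapshots; a rough sketch, assuming S3 (or S3-compatible) credentials have already been registered with pxctl credentials create (flags omitted, as they depend on your Portworx version):

# Trigger a cloud snapshot (backup) of the MongoDB volume to the configured object store
$ /opt/pwx/bin/pxctl cloudsnap backup ${VOL}
# Check the progress of the cloud snapshot
$ /opt/pwx/bin/pxctl cloudsnap status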
