To Migrate Anypoint Platform Private Cloud Edition, Version 1.6.0 to 1.6.1 (One Node)
This topic describes how to migrate a one-node installation of Anypoint Platform Private Cloud Edition from version 1.6.0 to version 1.6.1.
During the migration process, there may be some downtime as each node is migrated.
Prerequisites
Before migrating, ensure that you have met the following prerequisites:
- Perform a backup of your system as described in About Backup and Recovery.
- Ensure that your environment meets all of the system and network requirements described in About Minimum System Requirements.
- Enable TCP ports 5973, 3022, and 7373 intra-node to enable communication with the database cluster (see the example after this list).
- Ensure you have permission to run the sudo command on the node where you launch the migration tool.
- Ensure the kubectl command is available on the node where you are performing the migration. To verify that kubectl is installed, run the following:
sudo gravity enter
kubectl
- Ensure that you have run the following script on your one-node installation. You must run this script to avoid data loss in your installation (see the example after this list).
https://anypoint-anywhere.s3.amazonaws.com/1.6.1-GA/mounts-data-migration.sh?Signature=rt5yMRQrqHEJQh6rMHWJ%2BqdyX6s%3D&Expires=1536705930&AWSAccessKeyId=AKIAITTY5MSTT3INJ7XQ
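The exact commands for these last two prerequisites depend on your host configuration. The following is a minimal sketch, assuming a firewalld-based node and that the data-migration script requires sudo; the firewall tool and the saved file name are assumptions, and <signed-url> stands for the signed URL shown above.
# Open the intra-node TCP ports (assumes firewalld; adapt to your firewall tooling).
sudo firewall-cmd --permanent --add-port=5973/tcp
sudo firewall-cmd --permanent --add-port=3022/tcp
sudo firewall-cmd --permanent --add-port=7373/tcp
sudo firewall-cmd --reload
# Download and run the data-migration script (file name assumed from the URL above).
curl -o mounts-data-migration.sh '<signed-url>'
chmod +x mounts-data-migration.sh
sudo ./mounts-data-migration.sh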
Performing the Upgrade
- Obtain the anypoint-1.6.1-installer.tar.gz archive from your customer success representative.
Use
ssh
to login to the node of your installation. -
Upload the installer to the node of your installation.
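One way to perform these two steps, assuming SSH access to the node (the user centos and the address 203.0.113.10 are placeholders):
# Copy the installer archive to the node.
scp anypoint-1.6.1-installer.tar.gz centos@203.0.113.10:~
# Then log in to the node.
ssh centos@203.0.113.10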
- Uncompress the installer archive.
mkdir anypoint-1.6.1
tar -xzf anypoint-1.6.1-installer.tar.gz -C anypoint-1.6.1
- Navigate to the anypoint-1.6.1 directory, then run the upload script.
cd anypoint-1.6.1
sudo ./upload
Depending on your network configuration, this command may take a while to complete. Wait until the command finishes before proceeding to the next step.
Note: If this command fails, it may be due to a lack of space in your /tmp directory.
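Before retrying, you can check the available space with standard tools; a minimal sketch:
# Show free space on the filesystem backing /tmp.
df -h /tmp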
- Download the installation script and copy it to each node in your cluster.
- Download the script from the following URL:
https://anypoint-anywhere.s3.amazonaws.com/1.6.1-GA/manual_update_161.sh?Signature=xTkDlF%2F1OKFYtjG2lXPZcuc2itY%3D&Expires=1536705930&AWSAccessKeyId=AKIAITTY5MSTT3INJ7XQ
- Copy the script to the directory where you ran ./upload.
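One way to place the script, assuming the same placeholder user and address as in the earlier example:
# Copy the script next to the uncompressed installer.
scp manual_update_161.sh centos@203.0.113.10:~/anypoint-1.6.1/
# Make the script executable on the node.
ssh centos@203.0.113.10 'chmod +x ~/anypoint-1.6.1/manual_update_161.sh'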
- Export the RBAC bootstrap package.
From the master node, run the following command:
./manual_update_161.sh export-rbac
- Scale down deployments.
From the master node, run the following command:
./manual_update_161.sh scale-down
- Initiate the update operation.
From the master node, run the following command:
./manual_update_161.sh start-update
The output of this command should be similar to the following:
./manual_update_161.sh start-update
Initiating the update process, setting it in manual mode
updating anypoint from 1.5.2 to 1.6.1
update operation (3f097853-64dd-4201-973f-2bb4a686c9ee) has been started
The update operation has been created in manual mode.
- Bootstrap the system update process.
From the master node, run the following command:
./manual_update_161.sh bootstrap-system
- Perform the system update.
Run the following command by logging in to each node in your cluster. You must run this command sequentially on each node: wait until it completes on one node before running it on the next (see the sketch after this step).
./manual_update_161.sh update-system
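For a multi-node cluster, the sequential requirement can be scripted; the following is a sketch assuming SSH access, placeholder node addresses, and the script located in ~/anypoint-1.6.1 on each node (a one-node installation needs only a single run):
# Run update-system on each node, strictly one at a time.
for node in 203.0.113.10 203.0.113.11; do
  ssh centos@"$node" 'cd ~/anypoint-1.6.1 && ./manual_update_161.sh update-system'
done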
- Bootstrap the RBAC configuration in the cluster.
From the master node, run the following command:
./manual_update_161.sh bootstrap-rbac
- Determine the name of each of your nodes using the following commands:
sudo gravity enter
kubectl get nodes
- Exit the gravity shell.
exit
- Drain each of the nodes in your cluster.
From the master node, run the following command once for each node in your cluster. You must pass the node name for each node.
./manual_update_161.sh drain=<node-name>
The output of this command should be similar to the following:
./manual_update_161.sh drain=172.31.11.215
Draining node 172.31.11.215
node "172.31.11.215" cordoned
WARNING: Ignoring DaemonSet-managed pods: cassandra-p4mjy, stolon-keeper-d2get, gravity-site-tgme5, kube-dns-v18-41u28, log-forwarder-ujp6d; Deleting pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: bandwagon; Deleting pods with local storage: bandwagon-mulesoft-install-35afd2-ingx2, gravity-site-tgme5, monitoring-app-install-39664d-l7xo4, pithos-app-install-95fa7b-58flh, site-app-post-install-916df9-03pol, stolon-app-install-5480c4-v6n81
pod "exchange-api-db-migration-q8itn" evicted
pod "site-app-post-install-916df9-03pol" evicted
pod "pithos-app-install-95fa7b-58flh" evicted
...
Before continuing, ensure that all pods are in Running or Pending state. No pod should be in CrashLoopBackOff or Terminating state.
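To confirm the pod states, you can list pods across all namespaces from inside the gravity shell; a minimal sketch (the grep filter is illustrative):
sudo gravity enter
# Inspect the STATUS column for every pod.
kubectl get pods --all-namespaces
# Optionally show only pods that are not in a healthy state.
kubectl get pods --all-namespaces | grep -Ev 'Running|Pending|Completed'
exit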
- Make sure each of the nodes in your cluster is schedulable.
From the master node, run the following command for each node in your cluster. You must pass the node name for each node.
./manual_update_161.sh uncordon=<node-name>
- Check the status of your cluster.
kubectl get pods
Verify that all of the pods in your cluster are running. Wait until all pods are running before continuing to the next procedure.
- Fix the LDAP config directory permissions.
./manual_update_161.sh fix-ldap
- Initiate the application update.
./manual_update_161.sh update-app
- Finalize and complete the update operation.
./manual_update_161.sh finalize-update
- If you are running a load balancer in your installation, update the health check on the load balancer.
You must enable port 10248 for the load balancer health check.
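One way to verify that the port responds before updating the load balancer, assuming a kubelet-style /healthz endpoint (an assumption) and a placeholder node address:
# Probe the health check port on a node (address and path are assumptions).
curl -v http://203.0.113.10:10248/healthz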