To Verify System Requirements
This topic provides information on how to verify that your system meets the minimum prerequisites to install and run the Anypoint Platform Private Cloud Edition.
Disk Performance Verification
To Verify Disk Performance
To measure disk throughput, use a tool such as hdparm. On CentOS, you can install hdparm by running:
sudo yum install -y hdparm
To Test Disk Throughput
All disks should have at least 300 MB/sec of throughput. Use the following command to verify the throughput of your disk:
sudo hdparm -t <device>
For example:
$ sudo hdparm -t /dev/sdd

/dev/sdd:
 Timing buffered disk reads: 4726 MB in 3.00 seconds = 1574.94 MB/sec
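If you have several candidate disks, you can check them all in one pass. The following loop is a minimal sketch; the device names are placeholders you must replace with your own, and each result should meet the 300 MB/sec minimum above:
# Hypothetical device list; replace with the devices on your system
for dev in /dev/sdb /dev/sdc /dev/sdd; do
  echo "== $dev =="
  sudo hdparm -t "$dev"
done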
You can also measure throughput using the dd tool. dd writes directly to the specified file, even if it is a device. Do not use this tool on a bare device. Instead, after a device is formatted and mounted, write to a file on that device to measure throughput.
$ sudo mount /dev/sdb /var/lib/gravity   # must be already formatted!
$ sudo dd if=/dev/zero of=/var/lib/gravity/testfile count=1000 bs=1M
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 0.382535 s, 2.7 GB/s
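Note that dd writes through the page cache by default, which can inflate the reported rate (such as the 2.7 GB/s figure above). If your version of dd supports it, the oflag=direct option bypasses the cache and gives a number closer to raw device throughput; a sketch, assuming the same mount point as above:
$ sudo dd if=/dev/zero of=/var/lib/gravity/testfile count=1000 bs=1M oflag=direct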
The dd command provides less accurate information than hdparm, but it is available on any operating system and lets you easily verify general disk performance.
Test Disk IOPS
Depending on your hardware or virtualization provider, you may be able to configure disk IOPS (I/O Operations Per Second). Using a tool such as iops (available at https://benjamin-schweizer.de/files/iops/iops-2011-02-11), you can verify the available IOPS:
$ sudo ./iops --time 2 /dev/xvdb
/dev/xvdb, 107.37 GB, 32 threads:
 512   B blocks: 1893.0 IO/s,  946.5 KiB/s (  7.8 Mbit/s)
   1 KiB blocks: 1354.8 IO/s,    1.3 MiB/s ( 11.1 Mbit/s)
   2 KiB blocks: 1091.8 IO/s,    2.1 MiB/s ( 17.9 Mbit/s)
   4 KiB blocks:  807.1 IO/s,    3.2 MiB/s ( 26.4 Mbit/s)
   8 KiB blocks:  803.7 IO/s,    6.3 MiB/s ( 52.7 Mbit/s)
  16 KiB blocks:  787.4 IO/s,   12.3 MiB/s (103.2 Mbit/s)
  32 KiB blocks:  700.8 IO/s,   21.9 MiB/s (183.7 Mbit/s)
  64 KiB blocks:  590.0 IO/s,   36.9 MiB/s (309.3 Mbit/s)
 128 KiB blocks:  327.6 IO/s,   40.9 MiB/s (343.5 Mbit/s)
...
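If the iops script is not available, fio is a widely packaged alternative for measuring random-read IOPS. The following invocation is a sketch, not part of this product's tooling; the test file path and job parameters are assumptions to adapt to your environment:
sudo yum install -y fio
sudo fio --name=iops-test --filename=/var/lib/gravity/fio-test \
  --rw=randread --bs=4k --size=1G --direct=1 --iodepth=32 \
  --ioengine=libaio --runtime=30 --time_based
sudo rm /var/lib/gravity/fio-test   # remove the test file afterwards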
Install Logical Volume Manager 2 (LVM2)
To install and run Anypoint Platform Private Cloud Edition, you must install LVM2. LVM2 is a tool that adds a layer of abstraction between your operating system and the disks/partitions it uses. You must have root access to install LVM2.
Only LVM2 versions 2.02.166-1.el7_3.5 and earlier are supported. Later versions of LVM2 are not supported.
You can install LVM2 using Yum, an open-source command-line package-management utility for Linux operating systems that use the RPM Package Manager. Run the following command:
sudo yum install lvm2
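Because only LVM2 versions up to 2.02.166-1.el7_3.5 are supported, verify the installed version before proceeding. A minimal check, assuming an RPM-based system such as CentOS:
rpm -q lvm2       # full package version, for example lvm2-2.02.166-1.el7_3.5.x86_64
sudo lvm version  # LVM library and driver versions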
Uninstall Unsupported Packages
The following sections describe software that may cause conflicts with Anypoint Platform Private Cloud Edition. You must uninstall these components before performing the installation.
Device Requirements
The platform requires four dedicated devices: one for the system state directory, one for application data, one for cluster coordination (etcd) data, and one as a target for the Docker devicemapper configuration.
Anypoint system data device
The main purpose of the system state directory is to store system configuration and metadata, for example, the database and packages, among other things. As package sizes can be arbitrarily large, it is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time.
This device will be formatted as either xfs or ext4 and mounted as /var/lib/gravity. You can use the following shell snippet to guide this process (be sure to specify the correct device name in both places).
The following is provided only as an example and should not be used as-is in production. Formatting and mounting disks should be done by your system administrator. The end result is mounting the Anypoint system data device to the path /var/lib/gravity. This can be achieved using systemd mount files as shown in the example below, /etc/fstab, or some other method. Consult your system administrator to determine the best way to meet this requirement.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity.mount
sudo systemctl start var-lib-gravity.mount
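After running the snippet, you may want to confirm that the mount unit is active and the device is mounted where expected; the same checks apply to the etcd and application data mounts described below. A minimal verification sketch:
systemctl status var-lib-gravity.mount   # the unit should report active (mounted)
findmnt /var/lib/gravity                 # shows the source device and filesystem type
df -h /var/lib/gravity                   # confirms the available capacity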
etcd device
The main purpose of the etcd device is to provide dedicated storage for the distributed database used for cluster coordination. It does not require much space; 20 GB should be enough.
This device will be formatted as either xfs or ext4 and mounted as /var/lib/gravity/planet/etcd. You can use the following shell snippet to guide this process (be sure to specify the correct device name in both places).
The following is provided only as an example and should not be used as-is in production. Formatting and mounting disks should be done by your system administrator. The end result is mounting the etcd device to the path /var/lib/gravity/planet/etcd. This can be achieved using systemd mount files as shown in the example below, /etc/fstab, or some other method. Consult your system administrator to determine the best way to meet this requirement.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/gravity/planet/etcd
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/gravity/planet/etcd\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-gravity-planet-etcd.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-gravity-planet-etcd.mount
sudo systemctl start var-lib-gravity-planet-etcd.mount
Anypoint application data device
The main purpose of the application data directory is to store application configuration and data. You need at least 250 GB, but the amount might vary depending on your specific use case. It is important to estimate the minimum size requirements and allocate enough space as a dedicated device ahead of time.
This device will be formatted as either xfs or ext4 and mounted as /var/lib/data. You can use the following shell snippet to guide this process (be sure to specify the correct device name in both places).
The following is provided only as an example and should not be used as-is in production. Formatting and mounting disks should be done by your system administrator. The end result is mounting the Anypoint application data device to the path /var/lib/data. This can be achieved using systemd mount files as shown in the example below, /etc/fstab, or some other method. Consult your system administrator to determine the best way to meet this requirement.
sudo mkfs.ext4 /dev/<device name>
sudo mkdir -p /var/lib/data
echo -e "[Mount]\nWhat=/dev/<device name>\nWhere=/var/lib/data\nType=ext4\n[Install]\nWantedBy=local-fs.target" | sudo tee /etc/systemd/system/var-lib-data.mount
sudo systemctl daemon-reload
sudo systemctl enable var-lib-data.mount
sudo systemctl start var-lib-data.mount
Docker device
This device is used by Docker’s Device Mapper storage driver.
It is strongly recommended to use a device of at least 100 GB for the Device Mapper directory. With devices of 50 GB or less, system performance degrades dramatically or the system might not work at all.
Unless configured otherwise, Docker defaults to using Device Mapper in loopback mode (using /dev/loopX devices), which is not recommended for production. To configure Docker to use a dedicated device for the Device Mapper storage driver, provide an unformatted device or partition (for example, /dev/sdd) during installation. The device is then automatically configured and set up for use.
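After installation, you can confirm that Docker is using the dedicated device rather than loopback mode. A sketch, assuming Docker is already running on the node; in loopback mode, docker info reports "Data loop file" entries, while a dedicated device does not:
sudo docker info | grep -A 2 -i 'storage driver'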
Unformatted devices that can be used for the system directory or Device Mapper are automatically discovered by agents running on each node. Discovered devices are offered in a drop-down menu for configuration before the installation is started.
You can list unmounted devices with the following command:
lsblk --output=NAME,TYPE,SIZE,FSTYPE -P -I 8,9,202 | grep 'FSTYPE=""'
Unmounted devices have an empty value in the FSTYPE column. Devices with TYPE="part" are partitions on another device. This command lists only the following device types:
Device type | Description
----------- | ---------------------------------------
8           | SCSI disk devices
9           | Metadisk (RAID) devices
202         | Xen virtual block devices (Amazon EC2)
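Because the lsblk command above uses the -P (pairs) output format, each device appears as one line of KEY="value" pairs. A hypothetical result on a host with a single unformatted 100 GB disk might look like this:
NAME="sdd" TYPE="disk" SIZE="100G" FSTYPE=""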
To Manually Reset Devices and Partitions
Logical Volume Manager lets you group multiple physical volumes into a single storage volume (a volume group) and then divide it into logical volumes. A physical volume is either a whole device or a partition.
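To make the terminology concrete, the following sketch builds that hierarchy bottom-up on a spare device; the device name placeholder and the docker/thinpool names are illustrative only and mirror the reset example below:
sudo pvcreate /dev/<device name>            # register the device as a physical volume
sudo vgcreate docker /dev/<device name>     # group it into a volume group named "docker"
sudo lvcreate -n thinpool -l 90%VG docker   # carve a logical volume out of the group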
The following commands may be useful when a device is in use by another logical volume, or when you want to manually reset a device previously configured for Device Mapper.
The Logical Volume Manager toolset consists of the following commands:
* dmsetup - low-level logical volume management
* pv/vg/lv-prefixed commands, such as pvdisplay and pvcreate/pvremove - for working with specific LVM object types (for example, lv for logical volumes and vg for volume groups)
To reset a device, use the following commands:
* Remove the logical volume with lvremove -f docker/thinpool (use lvdisplay to find the volume to remove).
* Remove the volume group with vgremove docker (use vgdisplay to locate the volume group to remove).
* Remove the physical volume and reset the device with pvremove /dev/<device name> (use pvdisplay to find the physical volume to remove and the device it is on).