Cisco HyperFlex Systems Administration Guide for Kubernetes, Release 3.5
First Published: 2018-10-15
Last Modified: 2018-12-07

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA 95134-1706
USA
http://www.cisco.com
Tel: 408 526-4000
800 553-NETS (6387)
Fax: 408 527-0883
CHAPTER 1
Cisco HyperFlex Storage Integration for Kubernetes

• Overview, on page 1
• Components, on page 2
Overview

The Cisco HyperFlex Storage Integration for Kubernetes allows HyperFlex to dynamically provide persistent storage to Kubernetes Pods running on HyperFlex. The integration enables orchestration of the entire Persistent Volume object lifecycle to be offloaded to and managed by HyperFlex, while ultimately being driven (initiated) by developers and users through standard Kubernetes Persistent Volume Claim objects. Developers and users get the benefit of leveraging HyperFlex for their Kubernetes persistent storage needs with zero additional administration overhead from their perspective.
With the HyperFlex Storage Integration for Kubernetes, each Persistent Volume object in Kubernetes is represented by an iSCSI-based LUN residing on the HyperFlex storage subsystem. Each iSCSI LUN is presented to the Kubernetes nodes (VMs) by the scvmclient service running on the local ESXi host where the VMs reside. The Kubernetes nodes (VMs) themselves run the software iSCSI initiator service, which allows them to mount the iSCSI LUNs provided by the iSCSI target. Each ESXi host is configured with a host-only vSwitch that carries all iSCSI traffic between the iSCSI target (the scvmclient service in ESXi) and the iSCSI initiators (the Kubernetes VMs running locally on that ESXi host).
Components

There are two components that make up the Cisco HyperFlex Storage Integration for Kubernetes. Both components work in tandem to dynamically provision storage in HyperFlex and, ultimately, provide that storage to the appropriate Kubernetes Pod as a Persistent Volume object.
• HyperFlex FlexVolume Plug-in
• HyperFlex Kubernetes Storage Provisioner
HyperFlex FlexVolume Plug-in
The HyperFlex FlexVolume Plug-in is developed by Cisco Systems and leverages the Kubernetes FlexVolume "out-of-tree" framework. The HyperFlex FlexVolume Plug-in manages connections to the HyperFlex cluster from the Kubernetes nodes and makes storage volumes available to containers through the Kubernetes FlexVolume interface.
HyperFlex Kubernetes Storage Provisioner
The HyperFlex Kubernetes Storage Provisioner is a container that is developed by Cisco Systems. This container is deployed within the target Kubernetes cluster and serves as the storage provisioning orchestration point for Persistent Volumes from HyperFlex. Developers submit their application storage requirements using Persistent Volume Claims that reference a specific StorageClass associated with HyperFlex. Kubernetes passes the required storage request to the HyperFlex Kubernetes Storage Provisioner configured in the StorageClass.
CHAPTER 2
Kubernetes Support in HyperFlex Connect

• Kubernetes Integration in HyperFlex Connect, on page 3
• Preventing FlexVolume Traffic Disruption, on page 3
• Enabling Kubernetes Integration, on page 4
• Partially Enabled Cluster Status, on page 5
Kubernetes Integration in HyperFlex Connect

Kubernetes support must be explicitly enabled within HyperFlex Connect in order to use the HyperFlex Storage Integration for Kubernetes, regardless of whether you are using the Cisco Container Platform (CCP) or the RedHat OpenShift Container Platform (OCP). Enabling Kubernetes support in HyperFlex Connect configures the underlying HyperFlex storage subsystem to support the iSCSI-based LUNs (which ultimately map to Persistent Volume objects). In addition, enabling Kubernetes support configures the required ESXi networking to support iSCSI traffic between the iSCSI target (the scvmclient service in each ESXi host) and the iSCSI initiators (the Kubernetes VMs residing locally on each ESXi host).
Note: The ESXi network configuration required to support the HyperFlex Storage Integration for Kubernetes will not disrupt any previous or existing network configuration in place on each ESXi host. In other words, enabling Kubernetes support in HyperFlex Connect will not disrupt any virtual machines or applications currently running on the HyperFlex system.
Preventing FlexVolume Traffic Disruption

After you upgrade your Kubernetes cluster from an older HXDP release to 3.5(2a) or later, ensure that the following configuration changes are completed on all the storage controller VM nodes. This avoids an incorrect FlexVolume configuration on the Kubernetes nodes.
Step 1 Open the application.conf file located at /opt/springpath/storfs-mgmt/stMgr-1.0/conf. Search for iscsiTargetAddress. Modify the value of this parameter from 169.254.254.1 to 169.254.1.1.
Step 2 Open the application.conf file located at /opt/springpath/storfs-mgmt/hxSvcMgr-1.0/conf/. Search for istgtConfTargetAddress. Modify the value of this parameter from 169.254.254.1 to 169.254.1.1.
Step 3 Run the following commands on all the storage controller VM nodes:
# restart hxSvcMgr
# restart stMgr
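Steps 1 through 3 can be sketched as a small script. The sketch below demonstrates the substitution on a throwaway sample file; on a real storage controller VM you would target the two application.conf paths listed above and then restart the services.

```shell
# Demonstrates the Step 1 / Step 2 substitution on a sample file.
# Real targets on a storage controller VM would be:
#   /opt/springpath/storfs-mgmt/stMgr-1.0/conf/application.conf    (iscsiTargetAddress)
#   /opt/springpath/storfs-mgmt/hxSvcMgr-1.0/conf/application.conf (istgtConfTargetAddress)
CONF="$(mktemp)"
echo 'iscsiTargetAddress = "169.254.254.1"' > "$CONF"
sed -i 's/169\.254\.254\.1/169.254.1.1/' "$CONF"
cat "$CONF"    # iscsiTargetAddress = "169.254.1.1"
# Step 3, on the controller VM itself (not runnable here):
#   restart hxSvcMgr
#   restart stMgr
```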
Enabling Kubernetes Integration

The following procedure details the steps that are required to enable Kubernetes support in HyperFlex Connect:
1. Navigate to the HyperFlex cluster by using a supported web browser (for example, https://<hyperflex_cluster_management_IP_address>).
2. Log in to HyperFlex Connect using a VMware SSO account and password with administrative privileges (that is, administrator@vsphere.local).
3. In the upper right-hand corner of HyperFlex Connect, click the Settings menu icon (represented by a Gear icon).
Figure 1: HyperFlex Connect Settings Menu
4. Under Integrations, click Kubernetes.
Figure 2: Selecting Integrations > Kubernetes
5. On the Enable Persistent Volumes for Kubernetes page, the Current Status: value is Disabled for a new cluster. Click Enable to configure the HyperFlex cluster to support Persistent Volumes for Kubernetes.
Figure 3: Enable Persistent Volumes for Kubernetes Page
Partially Enabled Cluster Status

There are certain circumstances where a previously enabled cluster may show a current status of Partially Enabled. This status typically appears when one of the following scenarios occurs:
• Expansion of the HyperFlex cluster
• Change to required ESXi networking for iSCSI
In either of the preceding scenarios, perform the procedure outlined in Enabling Kubernetes Integration, on page 4 to reenable Kubernetes support in HyperFlex Connect. HyperFlex ensures that all hosts are properly enabled and configured. After reenabling Kubernetes support, the current status should change to Enabled.
CHAPTER 3
Configuring HyperFlex FlexVolume Storage Integration for Cisco Container Platform

• Support Matrix for HX FlexVolume Integration with CCP, on page 7
• Prerequisites, on page 8
• Creating Cisco Container Platform Tenant Cluster, on page 8
• Managing HyperFlex FlexVolume Plug-in, on page 8
• Managing HyperFlex FlexVolume Provisioner, on page 10
• Configuring Storage Classes, on page 11
• Provisioning Persistent Volumes, on page 11
Support Matrix for HX FlexVolume Integration with CCP

The following table summarizes the Cisco Container Platform (CCP) software versions that are supported with each of the HX Data Platform software versions.
Table 1: Support Matrix for HX FlexVolume Integration with CCP

HX Data Platform Version | CCP 1.0   | CCP 1.1, 1.2, 1.3, 1.4 | CCP 1.5       | CCP 2.0   | CCP 2.1, 2.2 | CCP 3.0
3.0(1a) or later         | Supported | Supported              | Supported     | —         | —            | —
3.5(1a) or later         | —         | —                      | Not Supported | Supported | Planned      | —
3.5(2a) or later         | —         | —                      | —             | Planned   | Planned      | —
4.0(1a) or later         | —         | —                      | —             | —         | Planned      | Planned
4.1(1a) or later         | —         | —                      | —             | —         | Planned      | Planned
Prerequisites

The following prerequisites must be met prior to configuring the HyperFlex FlexVolume Storage Integration for Cisco Container Platform.
• Cisco HyperFlex cluster is installed and running 3.5(1a) or later.
• Cisco Container Platform Control Plane is installed and running 2.0 or later.
Creating Cisco Container Platform Tenant Cluster

During the CCP tenant cluster creation workflow within the CCP control plane, you must set the HyperFlex Local Network option in order to install and configure the HyperFlex FlexVolume Storage Integration for Kubernetes. Once the option is selected, CCP automatically installs both the HyperFlex FlexVolume Plug-in and the HyperFlex FlexVolume Provisioner as part of the tenant cluster deployment.
The following section is an abbreviated description of the CCP "Create Cluster" workflow which highlights the HyperFlex Local Network option used to install the HyperFlex FlexVolume Storage Integration for Kubernetes. For more details on creating a new CCP tenant cluster, refer to the CCP documentation.
Step 1 Log into the CCP control plane UI.
Step 2 On the Clusters page, click New Cluster.
Step 3 On the Basic Information page, enter the appropriate information and click Next.
Step 4 On the Provider Settings page, enter the appropriate information and ensure the HyperFlex Local Network option is set to k8-priv-iscsivm-network in order to tell CCP to install and configure the HyperFlex FlexVolume Plug-in.
Step 5 Click Next.
Step 6 On the Summary page, view the cluster information and click Submit.
The CCP control plane then deploys the requested CCP tenant cluster, including the installation and configuration of all required components of the HyperFlex FlexVolume Storage Integration for Kubernetes.
Managing HyperFlex FlexVolume Plug-in
Installing HyperFlex FlexVolume Plug-in

If the HyperFlex Local Network option is configured properly when deploying the CCP tenant cluster, the CCP control plane automatically installs and configures the HyperFlex FlexVolume Plug-in on the CCP tenant cluster. No additional configuration is required.
Checking HyperFlex FlexVolume Plug-in Version

Important: The following steps must be performed on one CCP tenant cluster node only.
Step 1 Log into one of the tenant Kubernetes cluster nodes using SSH. Use the username that corresponds to the SSH keyprovided during the New Cluster workflow when provisioning the CCP tenant cluster through CCP.
Step 2 Run the following command to change directories to the /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hyperflex~hxvolume/ directory:
cd /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hyperflex~hxvolume/
Upgrading HyperFlex FlexVolume Plug-in

Upgrading of the HyperFlex FlexVolume Plug-in is performed as part of the CCP tenant cluster upgrade process. Follow the steps that are provided in the CCP documentation to upgrade the CCP tenant cluster. This process includes upgrading the HyperFlex FlexVolume Plug-in to the latest available version.
Modifying Configuration for Kubernetes VMs

Complete the following task when existing HyperFlex clusters use IP addresses in the range 169.254.1.0 to 169.254.1.24 for ESXi. After a Kubernetes cluster operation, such as a scale up or an upgrade, this procedure must be repeated on ALL new VMs.
After an upgrade to HXDP release 3.5(2a), run the following command on each Kubernetes VM. This command finds the parameter "targetIp" in the hxflexvolume.json file and replaces the value "169.254.1.1" with "169.254.254.1".
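As a sketch, the substitution can be exercised on a stand-in file first. The OLD_IP and NEW_IP values below are assumptions taken from the surrounding text; confirm the correct direction for your environment before touching /etc/kubernetes/hxflexvolume.json.

```shell
# Demonstrates the targetIp substitution on a stand-in file.
# On a real Kubernetes VM the target would be /etc/kubernetes/hxflexvolume.json.
OLD_IP='169.254.1.1'       # assumed pre-change target address
NEW_IP='169.254.254.1'     # assumed post-change target address
F="$(mktemp)"
printf '{ "targetIp": "%s" }\n' "$OLD_IP" > "$F"   # stand-in for hxflexvolume.json
sed -i "s/$OLD_IP/$NEW_IP/" "$F"
cat "$F"
```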
Note: The <ssh user> must match the SSH user that was specified during cluster creation. The <private key file> must correspond to the public key that was specified during cluster creation.
Managing HyperFlex FlexVolume Provisioner
Installing HyperFlex FlexVolume Provisioner

If the HyperFlex Local Network option is configured properly when deploying the CCP tenant cluster, the CCP control plane automatically installs and configures the HyperFlex FlexVolume Provisioner on the CCP tenant cluster. No additional configuration is required.
Checking HyperFlex FlexVolume Provisioner Version
Step 1 Run the kubectl get pods -n kube-system command to get the complete name of the deployed HyperFlex FlexVolume Provisioner pod.
kubectl get pods -n kube-system
Step 2 Run the kubectl describe pods command to get the complete details of the deployed HyperFlex FlexVolume Provisioner pod. Look for the hx-provisioner container image name, which includes the version as a tag (that is, after the colon in the container name).
kubectl describe pods <pod_name> -n kube-system
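For example, the version tag can be pulled out of the Image: line of the describe output. The image name and tag below are hypothetical sample text; in practice, pipe the output of the real kubectl describe command instead.

```shell
# Extracts the tag (after the colon) from a describe-pods "Image:" line.
# SAMPLE is hypothetical; replace the echo with:
#   kubectl describe pods <pod_name> -n kube-system
SAMPLE='    Image:          hx-provisioner:1.2.3'
TAG=$(echo "$SAMPLE" | awk -F: '/Image/ {gsub(/ /, "", $3); print $3}')
echo "$TAG"    # prints the version tag, e.g. 1.2.3 for the sample line
```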
Upgrading HyperFlex FlexVolume Provisioner

Upgrading of the HyperFlex FlexVolume Provisioner is performed as part of the CCP tenant cluster upgrade. Follow the steps that are provided in the CCP documentation to upgrade the CCP tenant cluster. This process includes upgrading the HyperFlex FlexVolume Provisioner to the latest available version.
Configuring Storage Classes

If the HyperFlex Local Network option was set properly when deploying the CCP tenant cluster, the CCP control plane automatically creates a StorageClass for HyperFlex on the CCP tenant cluster. By default, the HyperFlex StorageClass is not set as the default StorageClass in the CCP tenant cluster. In this case, developers must explicitly specify HyperFlex as the StorageClass in Persistent Volume Claims to use the HyperFlex FlexVolume storage integration.
Use the kubectl get sc command to view the StorageClasses on the CCP tenant cluster.
Provisioning Persistent Volumes

Step 2 Run the kubectl create command to submit the pvc.yaml file and create the Persistent Volume Claim object in the CCP tenant Kubernetes cluster. In parallel, as part of the operation, HyperFlex creates a Persistent Volume object to complement the Persistent Volume Claim object, and the two are bound together in Kubernetes.
kubectl create -f ~/hxkube/<pvc_name>.yaml
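A minimal pvc.yaml for this step might look like the following sketch. The claim name matches the example output shown later, the requested size is illustrative, and storageClassName: hyperflex assumes the StorageClass created by CCP kept that name.

```shell
# Writes a hypothetical pvc.yaml; the name, size, and storage class are examples.
mkdir -p ~/hxkube
cat > ~/hxkube/message-board-pvc.yaml <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: message-board-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: hyperflex
EOF
# kubectl create -f ~/hxkube/message-board-pvc.yaml   # requires cluster access
cat ~/hxkube/message-board-pvc.yaml
```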
Step 3 Check the status of the Persistent Volume Claim object with the kubectl get pvc command to make sure it was created successfully and is bound to a Persistent Volume object.
kubectl get pvc
Example:
ccpuser@admin-host:~$ kubectl get pvc
NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
message-board-pvc   Bound
Step 4 Deploy the Kubernetes Pod using the kubectl create command while specifying the Persistent Volume Claim object in the Pod YAML file.
kubectl create -f <pod_yaml_file>
Example:
CHAPTER 4
Configuring HyperFlex FlexVolume Storage Integration for RedHat OpenShift Container Platform

• Support Matrix for HX FlexVolume Integration with OCP, on page 15
• Prerequisites, on page 16
• Setting Up an Administrator Host, on page 16
• Command Execution, on page 17
• Deploying RedHat OpenShift Container Platform, on page 17
• Distributing HyperFlex FlexVolume Software, on page 18
• Managing HyperFlex FlexVolume Plug-in, on page 23
• Managing HyperFlex FlexVolume Provisioner, on page 31
• Configuring Storage Classes, on page 35
• Provisioning Persistent Volumes, on page 36
Support Matrix for HX FlexVolume Integration with OCP

The following table summarizes the Red Hat OpenShift Container Platform (OCP) software versions that are supported with each of the HX Data Platform software versions.
Table 2: Support Matrix for HX FlexVolume Integration with Red Hat OCP

HX Data Platform Version | OCP 3.7       | OCP 3.9       | OCP 3.10  | OCP 3.11 | OCP 3.12 | OCP 3.13
3.0(1a) or later         | Not Supported | Not Supported | —         | —        | —        | —
3.5(1a) or later         | —             | Supported     | Supported | —        | —        | —
3.5(2a) or later         | —             | Planned       | Planned   | —        | —        | —
4.0(1a) or later         | —             | —             | Planned   | Planned  | —        | —
4.1(1a) or later         | —             | —             | Planned   | Planned  | TBD      | TBD
Prerequisites

The following prerequisites must be met before configuring the HyperFlex FlexVolume Storage Integration for RedHat OpenShift Container Platform.
• Cisco HyperFlex cluster is installed and running 3.5(1a) or later
• RedHat OpenShift Container Platform installed and running version 3.9 or later
• Downloaded the latest HyperFlex Kubernetes bundle (zip) file from the HyperFlex HX Data Platformsection of Cisco Software Downloads.
Setting Up an Administrator Host

In the context of this document, the administrator host refers to a Linux-based host used for remotely administering the OpenShift cluster. This document does not dictate which Linux distribution should be used for the administrator host operating system; however, some commands may vary slightly based on the distribution that is used. The administrator host may be a newly deployed host, or it can be an existing host in the environment. The server used for the automated installation of the OpenShift cluster using Ansible, also known as the "bastion" node, makes a good candidate for the administrator host because it typically already meets most of the required prerequisites, such as OpenShift node connectivity, password-less SSH access, and so on. The examples in the following sections use the "bastion" node as the administrator host.
Important: Perform the following steps on the administrator host.
Step 1 Ensure the OpenShift command-line tool oc is installed and configured to manage the target OpenShift cluster. If the tool is not installed, refer to the RedHat OpenShift Container Platform documentation for the installation procedure for your Linux distribution.
Step 2 Ensure an SSH (public and private) keypair has been generated. The SSH keypair is used to manage the remote OpenShiftcluster. Ensure that password-less SSH authentication works between the administrator host and all OpenShift clusternodes.
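A hedged sketch of this step follows. The key path is a demo location, and the distribution loop reuses the ocp_nodes.txt convention from the Command Execution section; adjust both to your environment.

```shell
# Generate a keypair for managing the OpenShift nodes (demo location shown;
# in practice ~/.ssh is the usual place).
KEYDIR="$(mktemp -d)"
ssh-keygen -q -t rsa -b 2048 -N '' -f "$KEYDIR/id_rsa_hx"
ls "$KEYDIR"
# Distribute the public key to each node (requires node access, not runnable here):
#   cat ~/hxkube/ocp_nodes.txt | while read host; do \
#     ssh-copy-id -i "$KEYDIR/id_rsa_hx.pub" <ocpuser>@$host; done
```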
Step 3 Download the latest HyperFlex Kubernetes bundle (zip) file from the HyperFlex HX Data Platform section of Cisco Software Downloads. Transfer the HyperFlex Kubernetes bundle (zip) file to the administrator host using any preferred method, such as scp. The remainder of this document assumes the HyperFlex Kubernetes bundle (zip) file has been copied to the ~/hxkube directory on the administrator host.
Note: By default, the ~/hxkube directory does not exist and needs to be created.
Step 4 Unzip the HyperFlex Kubernetes bundle (zip) file to the ~/hxkube directory on the administrator host. You may need toinstall the unzip package using a package manager (for example, yum or apt-get) based on the administrator host’sLinux distribution.
Command Execution

The sections in this chapter are related to OpenShift and require that some commands be repeated across multiple nodes in the OpenShift cluster. You may manually run each command on all required OpenShift nodes; however, this is highly repetitive. It is recommended to leverage a "while" loop in order to iterate through a list of all OpenShift nodes and execute the required commands.
Example: Using a “while” loop and a text file containing a list of OpenShift nodes
Create a file containing the IP addresses or hostnames of all OpenShift nodes.
administrator-host:~/hxkube$ vi ./ocp_nodes.txt
ocp-master
ocp-infra1
ocp-infra2
ocp-node1
ocp-node2
Iterate through the list of OpenShift nodes and run the command on each node.
administrator-host:~/hxkube$ cat ~/hxkube/ocp_nodes.txt | while read host; \
do ssh -n <ocpuser>@$host <command>; \
done
administrator-host:~/hxkube$
Note: Hostnames or IP addresses can be used for each OpenShift node in the text file for the "while" loop.
Note: Pay close attention to which commands should be run on which nodes; not all commands are run on all nodes.
Deploying RedHat OpenShift Container Platform

RedHat provides Ansible playbooks as a standard mechanism for automating the installation of a RedHat OpenShift Container Platform cluster. Extensive documentation and information can be found on the RedHat website for installing and configuring an OpenShift cluster using Ansible. The subsequent sections assume a running instance of RedHat OpenShift Container Platform exists or has been installed using standard RedHat methods and best practices, as found on the RedHat website.
Important: Add an additional virtual machine network interface for each OpenShift node and attach it to the k8-priv-iscsivm-network VMware port-group. This interface is required to use the HyperFlex FlexVolume Storage Integration for OpenShift.
Note: The additional interface can be added during OpenShift node deployment, or it can be added after deployment by editing the VMware virtual machine settings, as long as the additional interface exists prior to moving forward with the HyperFlex FlexVolume Storage Integration for OpenShift installation. There is no need to configure the added interface within the operating system; this is done as part of the HyperFlex FlexVolume Storage Integration for OpenShift installation.
Distributing HyperFlex FlexVolume Software

In order to properly install and configure the HyperFlex FlexVolume Storage Integration for OpenShift, distribute the HyperFlex Kubernetes bundle (zip) file across all OpenShift cluster nodes. The following steps detail the process for distributing the HyperFlex Kubernetes bundle (zip) file to the appropriate hosts.
Step 1 Run the following command to create a directory named hxkube on each OpenShift cluster node.

Scope:
Run on all OpenShift nodes.

Command:
mkdir -p ~/hxkube

Step 2 If the unzip package is not already present, install it on each OpenShift cluster node (for example, sudo yum install -y unzip).

Example (one node shown; the output repeats for each node):
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package unzip.x86_64 0:6.0-19.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package    Arch      Version       Repository            Size
================================================================================
Installing:
 unzip      x86_64    6.0-19.el7    rhel-7-server-rpms    170 k

Complete!
Step 1 The Ansible installation playbooks for OpenShift should, by default, install the iscsi-initiator-utils package. Verify that the package is installed.

Scope:
Run on all OpenShift nodes.

Command:
sudo yum list installed iscsi-initiator-utils

Example:
administrator-host:hxkube$ cat ~/hxkube/ocp_nodes.txt | while read host; \
do ssh -n $host sudo yum list installed iscsi-initiator-utils; \
done
Step 2 Install the avahi-autoipd package on each OpenShift cluster node. This package provides automatic link-local (169.254.0.0/16) IPv4 addressing for the added iSCSI interface.

Scope:
Run on all OpenShift nodes.

Command:
sudo yum install -y avahi-autoipd

Example (one node shown; the output repeats for each node):
Loaded plugins: product-id, search-disabled-repos, subscription-manager
Resolving Dependencies
--> Running transaction check
---> Package avahi-autoipd.x86_64 0:0.6.31-19.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package          Arch      Version          Repository            Size
================================================================================
Installing:
 avahi-autoipd    x86_64    0.6.31-19.el7    rhel-7-server-rpms    40 k

Complete!
Step 4 In the Deploying RedHat OpenShift Container Platform section, you were instructed to add an additional virtual machine network interface to each OpenShift node. The interface name of the added virtual machine network interface is now required in order to proceed with the installation.

Use the ifconfig -a command to find the name of the interface on one of the OpenShift nodes. The interface should not have an IP address assigned, and it is recommended to cross-reference the MAC address with VMware vCenter in order to identify the correct interface. In this particular environment the added interface is named ens224.
Note: The ifconfig -a command only needs to be run on a single OpenShift node.

Scope:
Run on a single OpenShift node.

Command:
ifconfig -a
Step 5 Ensure there is no network configuration file (/etc/sysconfig/network-scripts/ifcfg-<interface_name>) for theadditional virtual machine network interface that was added for the HyperFlex Storage Integration for OpenShift.
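A quick check for this step can be scripted as follows. ens224 is the example interface name used in this document; substitute your interface name.

```shell
# Hedged pre-check for Step 5: flag any leftover network script for the
# added interface (ens224 is this document's example interface name).
IFCFG="/etc/sysconfig/network-scripts/ifcfg-ens224"
if [ -e "$IFCFG" ]; then
  MSG="found $IFCFG; remove or rename it before continuing"
else
  MSG="ok: no ifcfg file for the added interface"
fi
echo "$MSG"
```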
Step 9 Edit the /etc/kubernetes/hxflexvolume.json.example configuration file to change the target IP address from 169.254.1.1 to 169.254.254.1.
Command:
sudo sed -i -e s/169.254.1.1/169.254.254.1/ /etc/kubernetes/hxflexvolume.json.example
Example:
administrator-host:~/hxkube$ cat ~/hxkube/ocp_nodes.txt | while read host; \
do ssh -n $host sudo sed -i -e s/169.254.1.1/169.254.254.1/ /etc/kubernetes/hxflexvolume.json.example; \
done
Step 10 Create the HyperFlex FlexVolume Plug-in configuration file. The file is copied from the example file found in the /etc/kubernetes/ directory.

Scope:
Run on all OpenShift nodes.

Command:
sudo cp /etc/kubernetes/hxflexvolume.json.example /etc/kubernetes/hxflexvolume.json
Step 11 Edit the /etc/kubernetes/hxflexvolume.json configuration file and update it with the appropriate name for the interface to be used for the HyperFlex Storage Integration for OpenShift. In the example in this document the interface is named ens224.
Scope:
Run on all OpenShift nodes.
Command:
sudo sed -i -e 's/ens[0-9]*/<interface_name>/' /etc/kubernetes/hxflexvolume.json
Example:
administrator-host:hxkube$ cat ~/hxkube/ocp_nodes.txt | while read host; \
do ssh -n $host sudo sed -i -e 's/ens[0-9]*/ens224/' /etc/kubernetes/hxflexvolume.json; \
done
Example:
ocpuser@openshift-master:~$ cd /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hyperflex~hxvolume/
ocpuser@openshift-master:/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hyperflex~hxvolume$
Step 3 Run the hxvolume version command as the root user (with sudo) to view the HyperFlex FlexVolume plug-in version.
Step 2 Run the hx-provisioner-setup script to generate the required YAML file for deploying the HyperFlex Provisioner pod on the OpenShift cluster. Provide the following information as parameters when running the hx-provisioner-setup script.
Parameters:
• -cluster-name—Name of the OpenShift cluster (must be unique across the HyperFlex cluster).
• -url—URL to reach the HyperFlex API. This URL is equivalent to https://<hyperflex_cluster_management_IP_address>.
• -username—The username that is used to authenticate to the HyperFlex cluster. Typically, a vCenter SSO account such as administrator@vsphere.local.
• -password—The password that is used to authenticate to the HyperFlex cluster.
• (output file)—Name of the resulting output file generated by the hx-provisioner-setup script.
Note: This procedure schedules the HyperFlex Provisioner pod to run on one of the OpenShift cluster nodes. While there is no technical reason for it, if you require the HyperFlex Provisioner pod to run on a specific OpenShift cluster node (perhaps the infra nodes or even the master node), edit the <output> file created by the hx-provisioner-setup script to include a Toleration to overcome any configured taints. For more information, see https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/.
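As a sketch, a toleration such as the following could be merged under the pod's spec: section in the <output> file. The key and effect below assume the standard master-node taint; match them to the taint actually configured on your target node.

```shell
# Writes a hypothetical toleration fragment to merge into the <output> file.
# The key/effect assume the standard master-node taint.
cat > /tmp/hx-toleration.yaml <<'EOF'
tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
EOF
cat /tmp/hx-toleration.yaml
```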
Step 2 Run the describe pods command to get the complete details of the deployed HyperFlex FlexVolume Provisioner pod. Look for the hx-provisioner container image name, which includes the version as a tag (that is, after the colon in the container name).
Step 1 Download the latest HX Kubernetes release package from Cisco Software Downloads.
Step 2 Follow the steps in Distributing HyperFlex FlexVolume Software to copy the latest HX Kubernetes release package to each OpenShift node. Unzip the file on each OpenShift node.
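The copy-and-unzip step can be scripted from the administrator host. As a sketch, with the node names, SSH user, and package filename as placeholders:

```shell
# Illustrative only: node names, the SSH user, and the package
# filename are placeholders for your environment's actual values.
for node in ocp-node1 ocp-node2 ocp-node3; do
    scp hxkube-release.zip ocpuser@"${node}":~/
    ssh ocpuser@"${node}" 'unzip -o ~/hxkube-release.zip -d ~/hxkube'
done
```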
Scope:
Step 2 Edit the hx-storageclass.yml file using your choice of file editor (for example, vi) and insert the following text. Be sure to copy the text exactly as shown below, including indentations. Save the file once complete.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: hyperflex
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: hyperflex.io/hxvolume
If you do not wish for the hyperflex storage class to be the default storage class, you can remove the following two lines from the above hx-storageclass.yml file before creating the storage class:

annotations:
  storageclass.kubernetes.io/is-default-class: "true"
Step 4 Use the get sc command to view the new hyperflex storage class on the OpenShift cluster.
Scope:
Run on administrator host.
Command:oc get sc
Example:
administrator-host:hxkube$ oc get sc
NAME                  PROVISIONER             AGE
hyperflex (default)   hyperflex.io/hxvolume   23s
administrator-host:hxkube$
Provisioning Persistent Volumes

At this point, the HyperFlex Storage Integration for OpenShift has been fully deployed and can now be leveraged to provide persistent storage to OpenShift workloads. Developers and users can now simply submit Persistent Volume Claim requests, and if the hyperflex storage class is configured as the default storage class, the requested storage will be automatically provisioned by HyperFlex and provided to the OpenShift environment. This results in a new Persistent Volume Claim bound to a new Persistent Volume object as well. The Persistent Volume Claim can then be used when deploying workloads within OpenShift.
When creating a Persistent Volume Claim request, the storageClassName: hyperflex line is required only if you decide not to create the hyperflex storage class as the default storage class. If the hyperflex storage class is the default storage class, you can remove that line from any Persistent Volume Claim requests.
Note
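For illustration, a Persistent Volume Claim matching the example output later in this workflow (name message-board-pvc, 100Gi capacity, RWO and ROX access modes) might look like the following sketch:

```yaml
# Sketch of a PVC request; the storageClassName line may be omitted
# when hyperflex is configured as the default storage class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: message-board-pvc
spec:
  storageClassName: hyperflex
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  resources:
    requests:
      storage: 100Gi
```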
The following steps provide a sample workflow of deploying a simple “Cisco Message Board” applicationusing persistent storage from HyperFlex.
Step 1 Create a new project (namespace) for the sample application.
Step 4 Run the oc create <file> command to submit the pvc.yaml file and create the Persistent Volume Claim object in the OpenShift cluster. In parallel, as part of the operation, HyperFlex will create a Persistent Volume object to complement the Persistent Volume Claim object and bind the two together in OpenShift.
Step 5 Check the status of the Persistent Volume Claim object with the oc get pvc command to make sure it was created successfully and is “Bound” to a Persistent Volume object.
Scope:
Run on administrator host.
Command:oc get pvc
Example:administrator-host:hxkube$ oc get pvc
NAME                STATUS   VOLUME                                                               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
message-board-pvc   Bound    hx-default-message-board-pvc-c54defc5-f26b-11e8-8aff-00505698adb3   100Gi      RWO,ROX        hyperflex      <invalid>
administrator-host:hxkube$
Step 6 Now that the Persistent Volume Claim and the supporting Persistent Volume (from HyperFlex) have been successfully created, you can deploy the Message Board Pod. Start the process by creating a file called message-board.yml in the ~/hxkube directory.
Step 7 Edit the message-board.yml file and insert the following text. Save the file once complete.
For purposes of this example, the Pod definition also includes a simple Service definition so the applicationcan be reached outside the OpenShift cluster.
Note
The Persistent Volume Claim message-board-pvc is referenced in the Pod definition.
Note
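The exact Pod and Service text ships with the release package. As a sketch of the shape being described (the container image, port, and mount path below are placeholders, not the actual values), it would resemble:

```yaml
# Hypothetical sketch only: image, port, and mountPath are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: message-board
  labels:
    app: message-board
spec:
  containers:
    - name: message-board
      image: <message_board_container_image>
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: message-board-data
          mountPath: /data
  volumes:
    - name: message-board-data
      persistentVolumeClaim:
        claimName: message-board-pvc
---
# Simple Service so the application can be reached outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: message-board-service
spec:
  type: NodePort
  selector:
    app: message-board
  ports:
    - port: 8080
      targetPort: 8080
```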
Step 8 By default, OpenShift runs containers as the default user within a project (namespace). In order to successfully deploy the Message Board Pod, it is required that you add the privileged security context constraint (scc) to the default user in the message-board project (namespace).
This step is not recommended in production as this presents a security risk.
Important
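Assuming the project is named message-board as in this example, the constraint can be added to the default service account with a command along these lines:

```shell
# Grants the privileged SCC to the "default" service account in the
# message-board project. Not recommended in production (security risk).
oc adm policy add-scc-to-user privileged -z default -n message-board
```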