Install Portworx on OpenShift on vSphere


This article provides instructions for installing Portworx on OpenShift running on vSphere. To accomplish this, you must:

  • Install the Portworx Operator using the Red Hat OperatorHub
  • Deploy Portworx using the Operator
  • Verify your installation

Once you’ve successfully installed and verified Portworx, you’re ready to start using it. To get started after installation, you may want to perform two common tasks:

  • Create a PersistentVolumeClaim
  • Set up cluster monitoring

Prerequisites

  • Your cluster must be running OpenShift 4 or higher.
  • You must have an OpenShift cluster deployed on infrastructure that meets the minimum requirements for Portworx.
  • Ensure that any underlying nodes used for Portworx in OCP have Secure Boot disabled.

Install the Portworx Operator

Before you can install Portworx on your OpenShift cluster, you must first install the Portworx Operator. Perform the following steps to prepare your OpenShift cluster by installing the Operator.

  1. Navigate to the OperatorHub tab of your OpenShift cluster admin page.

  2. Select the kube-system project from the project dropdown. This defines the namespace in which the Operator will be deployed.

  3. Search for and select either the Portworx Enterprise or Portworx Essentials Operator.

  4. Select Install to install the Certified Portworx Operator.

    The Portworx Operator begins to install and takes you to the Installed Operators page. From there, you can deploy Portworx onto your cluster.
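
If you prefer the CLI, you can confirm that the Operator is installed before moving on. This is a quick sketch, assuming the kube-system namespace selected above; csv is the OLM ClusterServiceVersion resource:

oc get csv -n kube-system
oc get pods -n kube-system | grep portworx-operator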

Deploy Portworx using the Operator

The Portworx Enterprise Operator takes a custom Kubernetes resource called StorageCluster as input. The StorageCluster is a representation of your Portworx cluster configuration. Once the StorageCluster object is created, the Operator will deploy a Portworx cluster corresponding to the specification in the StorageCluster object. The Operator will watch for changes on the StorageCluster and update your cluster according to the latest specifications.

For more information about the StorageCluster object and how the Operator manages changes, refer to the StorageCluster article.
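
To make the input concrete, the following is a trimmed sketch of the kind of StorageCluster spec the spec generator (covered below) produces for vSphere. Every value shown, including the image tag, vCenter address, datastore prefix, and device size, is a placeholder; treat the generator's output as authoritative:

apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster                        # placeholder name
  namespace: kube-system
spec:
  image: portworx/oci-monitor:2.11.0      # placeholder version
  kvdb:
    internal: true                        # built-in etcd
  cloudStorage:
    deviceSpecs:
      - type=thin,size=150                # vSphere virtual disk spec; placeholder size
  env:
    - name: VSPHERE_VCENTER
      value: "my-vcenter.example.com"     # placeholder endpoint
    - name: VSPHERE_DATASTORE_PREFIX
      value: "Datastore"
    - name: VSPHERE_USER                  # credentials come from the secret created below
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_USER
    - name: VSPHERE_PASSWORD
      valueFrom:
        secretKeyRef:
          name: px-vsphere-secret
          key: VSPHERE_PASSWORD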

Grant the required cloud permissions

Grant the permissions Portworx requires by creating a secret with your vCenter user credentials:

  1. Create a secret using the following template. Retrieve the credentials from your own environment, base64-encode them, and specify them under the data section:

    apiVersion: v1
    kind: Secret
    metadata:
        name: px-vsphere-secret
        namespace: kube-system
    type: Opaque
    data:
        VSPHERE_USER: <your-vcenter-server-user>
        VSPHERE_PASSWORD: <your-vcenter-server-password>
    • VSPHERE_USER: to generate the base64-encoded value of your vSphere user, enter the following command:

      echo '<vcenter-server-user>' | base64
    • VSPHERE_PASSWORD: to generate the base64-encoded value of your vSphere password, enter the following command:

      echo '<vcenter-server-password>' | base64

    Once you’ve updated the template with your user and password, apply the spec:

    oc apply -f <your-spec-name>
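
    For illustration, here is the full workflow with made-up credentials; both values and the px-vsphere-secret.yaml filename are placeholders. Note that a plain echo appends a trailing newline before encoding, so use echo -n to encode only the credential itself:

    # Encode hypothetical credentials (replace with your own)
    echo -n 'administrator@vsphere.local' | base64
    # YWRtaW5pc3RyYXRvckB2c3BoZXJlLmxvY2Fs
    echo -n 'example-password' | base64
    # ZXhhbXBsZS1wYXNzd29yZA==

    # Paste the encoded values into the data section of the template, then apply it
    oc apply -f px-vsphere-secret.yaml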
  2. Ensure that ports 17001-17020 on worker nodes are reachable from the control plane node and other worker nodes (a quick way to spot-check this is sketched after this list).

  3. If you’re running a Portworx Essentials cluster, then create the following secret with your Essential Entitlement ID:

    oc -n kube-system create secret generic px-essential \
        --from-literal=px-essen-user-id=YOUR_ESSENTIAL_ENTITLEMENT_ID \
        --from-literal=px-osb-endpoint='https://pxessentials.portworx.com/osb/billing/v1/register'
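
As noted in step 2, the Portworx node-to-node ports must be open. To spot-check reachability, you can probe one of the ports from another node with a generic TCP tool; nc is just one option, and the address is a placeholder:

nc -zv <worker-node-ip> 17001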

Generate the StorageCluster spec

To install Portworx with OpenShift, you must generate a StorageCluster spec that you will deploy in your cluster.

  1. Navigate to the Portworx spec generator.

  2. Select Portworx Enterprise from the product catalog.

  3. On the Product Line page, select Portworx Enterprise and select Continue to start the spec generator.

  4. On the Basic tab, select Use the Portworx Operator and select the Portworx version you want. Choose Built-in etcd if you have no external etcd cluster.

    Select the Next button to continue.

  5. On the Storage tab:

    • At the Select your environment dialog, select the Cloud radio button.
    • At the Select cloud platform dialog, select vSphere.
    • At the bottom pane, enter your vCenter endpoint, vCenter datastore prefix, and the Kubernetes Secret Name you created in step 1 of the Grant the required cloud permissions section.

    Select the Next button to continue.

  6. On the Network tab, keep the default values and select the Next button to continue.

  7. On the Customize tab, select the OpenShift 4+ radio button from the Are you running either of these? dialog box.

    • If you’re using a proxy, you can add your details to the Environment Variables section.

    • If you’re using a private container registry, enter your registry location, registry secret, and specify an image pull policy under Registry and Image Settings.

    Select the Finish button to continue.

  8. Save and download the spec for future reference.

Apply the StorageCluster spec

You can apply the StorageCluster spec in one of two ways:

  • Using the OpenShift UI
  • Using the CLI

Apply the spec using the OpenShift UI

  1. From the Installed Operators page, select the Portworx Enterprise Operator.

  2. Select Create StorageCluster to create a StorageCluster object.

  3. The spec displayed here represents a very basic default spec. Copy the spec you created with the spec generator, paste it over the default spec in the YAML view, and select the Create button.

  4. Verify that Portworx has deployed successfully by navigating to the Storage Cluster tab of the Installed Operators page.

    Once Portworx has fully deployed, the status will show as Online.

Apply the spec using the CLI

If you’re not using the OpenShift console, you can create the StorageCluster object using the oc command:

  1. Apply the generated specs to your cluster with the oc apply command:

    oc apply -f px-spec.yaml
  2. Using the oc get pods command, monitor the Portworx deployment process. Wait until all Portworx pods show as ready (see the oc wait sketch after this list for a way to block until they are):

    oc get pods -o wide -n kube-system -l name=portworx
  3. Verify that Portworx is deployed by checking its status with the following commands:

    PX_POD=$(oc get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
     
    oc exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
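
If you'd rather block until the Portworx pods report ready than poll with oc get pods, the standard oc wait command can do so. A sketch, using the same label selector as above and an arbitrary ten-minute timeout:

oc wait --for=condition=Ready pod -l name=portworx -n kube-system --timeout=600s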

Verify your Portworx installation

Once you’ve installed Portworx, you can perform the following tasks to verify that Portworx has installed correctly.

Verify if all pods are running

Enter the following oc get pods command to list and filter the results for Portworx pods:

oc get pods -n kube-system -o wide | grep -e portworx -e px
portworx-api-774c2                                      1/1     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-api-t4lf9                                      1/1     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
portworx-kvdb-94bpk                                     1/1     Running   0                4s      192.168.121.196   username-k8s1-node0    <none>           <none>
portworx-operator-58967ddd6d-kmz6c                      1/1     Running   0                4m1s    10.244.1.99       username-k8s1-node0    <none>           <none>
prometheus-px-prometheus-0                              2/2     Running   0                2m41s   10.244.1.105      username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-9gs79   2/2     Running   0                2m55s   192.168.121.196   username-k8s1-node0    <none>           <none>
px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx   1/2     Running   0                2m55s   192.168.121.99    username-k8s1-node1    <none>           <none>
px-csi-ext-868fcb9fc6-54bmc                             4/4     Running   0                3m5s    10.244.1.103      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-8tk79                             4/4     Running   0                3m5s    10.244.1.102      username-k8s1-node0    <none>           <none>
px-csi-ext-868fcb9fc6-vbqzk                             4/4     Running   0                3m5s    10.244.3.107      username-k8s1-node1    <none>           <none>
px-prometheus-operator-59b98b5897-9nwfv                 1/1     Running   0                3m3s    10.244.1.104      username-k8s1-node0    <none>           <none>

Note the name of one of your px-cluster pods. You’ll run pxctl commands from these pods in the following steps.

Verify Portworx cluster status

You can find the status of the Portworx cluster by running pxctl status commands from a pod. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n kube-system -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
Telemetry: Disabled or Unhealthy
Metering: Disabled or Unhealthy
License: Trial (expires in 31 days)
Node ID: 788bf810-57c4-4df1-9a5a-70c31d0f478e
        IP: 192.168.121.99 
        Local Storage Pool: 1 pool
        POOL    IO_PRIORITY     RAID_LEVEL      USABLE  USED    STATUS  ZONE    REGION
        0       HIGH            raid0           3.0 TiB 10 GiB  Online  default default
        Local Storage Devices: 3 devices
        Device  Path            Media Type              Size            Last-Scan
        0:1     /dev/vdb        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:2     /dev/vdc        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        0:3     /dev/vdd        STORAGE_MEDIUM_MAGNETIC 1.0 TiB         14 Jul 22 22:03 UTC
        * Internal kvdb on this node is sharing this storage device /dev/vdc  to store its data.
        total           -       3.0 TiB
        Cache Devices:
         * No cache devices
Cluster Summary
        Cluster ID: px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d
        Cluster UUID: 33a82fe9-d93b-435b-943e-6f3fd5522eae
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online)
        IP              ID                                      SchedulerNodeName       Auth            StorageNode     Used    Capacity        Status  StorageStatus       Version         Kernel                  OS
        192.168.121.196 f6d87392-81f4-459a-b3d4-fad8c65b8edc    username-k8s1-node0      Disabled        Yes             10 GiB  3.0 TiB         Online  Up 2.11.0-81faacc   3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
        192.168.121.99  788bf810-57c4-4df1-9a5a-70c31d0f478e    username-k8s1-node1      Disabled        Yes             10 GiB  3.0 TiB         Online  Up (This node)      2.11.0-81faacc  3.10.0-1127.el7.x86_64  CentOS Linux 7 (Core)
Global Storage Pool
        Total Used      :  20 GiB
        Total Capacity  :  6.0 TiB

The Portworx status will display PX is operational if your cluster is running as intended.

Verify pxctl cluster provision status

  • Find the storage cluster; its status should show as Online:

    oc -n kube-system get storagecluster
    NAME                                              CLUSTER UUID                           STATUS   VERSION   AGE
    px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d   33a82fe9-d93b-435b-943e-6f3fd5522eae   Online   2.11.0    10m
  • Find the storage nodes; their statuses should show as Online:

    oc -n kube-system get storagenodes
    NAME                  ID                                     STATUS   VERSION          AGE
    username-k8s1-node0   f6d87392-81f4-459a-b3d4-fad8c65b8edc   Online   2.11.0-81faacc   11m
    username-k8s1-node1   788bf810-57c4-4df1-9a5a-70c31d0f478e   Online   2.11.0-81faacc   11m
  • Verify the Portworx cluster provision status. Enter the following oc exec command, specifying the pod name you retrieved in the previous section:

    oc exec px-cluster-1c3edc42-4541-48fc-b173-3e9bf3cd834d-vpptx -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
    Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
    NODE                                    NODE STATUS     POOL                                            POOL STATUS     IO_PRIORITY     SIZE    AVAILABLE  USED     PROVISIONED     ZONE    REGION  RACK
    788bf810-57c4-4df1-9a5a-70c31d0f478e    Up              0 ( 96e7ff01-fcff-4715-b61b-4d74ecc7e159 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default
    f6d87392-81f4-459a-b3d4-fad8c65b8edc    Up              0 ( e06386e7-b769-4ce0-b674-97e4359e57c0 )      Online          HIGH            3.0 TiB 3.0 TiB    10 GiB   0 B             default default default

Create your first StorageClass and PVC

For your apps to use persistent volumes powered by Portworx, you must create a StorageClass that references Portworx as the provisioner. Once you’ve defined a StorageClass, you can create PersistentVolumeClaims (PVCs) that reference this StorageClass. For a more general overview of how storage works within Kubernetes, refer to the Persistent Volumes section of the Kubernetes documentation.

Perform the steps in this topic to create and associate StorageClass and PVC objects in your cluster.

Create a StorageClass

  1. Create a StorageClass using the following spec and save it as sc-1.yaml. This StorageClass uses the Portworx CSI provisioner:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: <your-storageclass-name>
    provisioner: pxd.portworx.com
    parameters:
      repl: "1"
  2. Apply the spec using the following oc apply command to create the StorageClass:

    oc apply -f sc-1.yaml
    storageclass.storage.k8s.io/example-storageclass created
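
Optionally, if you want PVCs that omit storageClassName to use this class, mark it as the cluster default. This uses a standard Kubernetes annotation rather than anything Portworx-specific:

oc patch storageclass <your-storageclass-name> -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'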

Create a PVC

  1. Create a PVC that references your StorageClass using the following spec, and save it as <your-pvc-name>.yaml:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: <your-pvc-name>
    spec:
      storageClassName: <your-storageclass-name>
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi
  2. Run the oc apply command to create a PVC:

    oc apply -f <your-pvc-name>.yaml
    persistentvolumeclaim/example-pvc created

Verify your StorageClass and PVC

  1. Enter the following oc get storageclass command, specifying the name of the StorageClass you created in the steps above:

    oc get storageclass <your-storageclass-name>
    NAME                   PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    example-storageclass   pxd.portworx.com   Delete          Immediate           false                  24m

    oc will return details about your StorageClass if it was created correctly. Verify the configuration details appear as you intended.

  2. Enter the oc get pvc command, specifying the name of your PVC. If this is the only PVC you’ve created, you should see only one entry in the output:

    oc get pvc <your-pvc-name>
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    example-pvc   Bound    pvc-dce346e8-ff02-4dfb-935c-2377767c8ce0   2Gi        RWO            example-storageclass   3m7s

    oc will return details about your PVC if it was created correctly. Verify the configuration details appear as you intended.
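
With the PVC bound, a pod can consume the Portworx-backed volume by referencing the claim. The following is a minimal sketch; the pod name, image, and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: example-app                    # hypothetical name
spec:
  containers:
    - name: app
      image: nginx                     # any image that uses the mount will do
      volumeMounts:
        - name: data
          mountPath: /data             # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: <your-pvc-name>     # the PVC created above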


