Setting up virtualization on a Satellite location

You can set up your Bare Metal Servers to use Red Hat OpenShift virtualization in your Satellite location. By using virtualization, you can provision Windows or other virtual machines on your Bare Metal Servers in a managed Red Hat OpenShift environment.

Supported host operating systems
Red Hat CoreOS (RHCOS)

Prerequisites

  • Create a RHCOS-enabled location. To check whether your location is RHCOS-enabled, see Is my location enabled for Red Hat CoreOS?. If your location is not enabled, create a new location with RHCOS enabled.
  • Attach hosts to your location and set up your location control plane.
  • Find and record your bare metal host name.
  • Find your bare metal server network information. Record the CIDR and gateway information for the public and private interfaces for your system.
  • If you want to use IBM Cloud Object Storage to store your ignition file, create or identify a bucket.
  • Create or identify a cluster within the Satellite location that runs a supported operating system. For example, this tutorial uses a Red Hat OpenShift cluster that runs version 4.11.
  • If you want to use OpenShift Data Foundation as your storage solution, add 2 storage disks to each of your Bare Metal Servers when you provision them.

Bare Metal Server requirements for Satellite

To set up virtualization, your Bare Metal Server must meet the following requirements.

  • Must support virtualization technology.
    • For Intel CPUs, support for virtualization is referred to as Intel VT or VT-x.
    • For AMD CPUs, support for virtualization is referred to as AMD Virtualization or AMD-V.
  • Must have a minimum of 8 cores and 32 GB RAM, plus any additional cores that you need for your vCPU overhead. For more information, see CPU overhead in the Red Hat OpenShift docs.
  • Must include enough memory for your workload needs plus virtualization overhead. For example, the total memory that a virtual machine consumes is approximately: 360 MiB + (1.002 * requested memory) + 146 MiB + 8 MiB * (number of vCPUs) + 16 MiB * (number of graphics devices). For a worked example, see the calculation after this list. For more information, see Memory overhead in the Red Hat OpenShift docs.
  • Must have no operating system installed. The Red Hat CoreOS operating system is installed later in this process.
  • If you want to use OpenShift Data Foundation as your storage solution, add 2 storage disks to each of your Bare Metal Servers when you provision them.
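
As a worked example of the memory formula, consider a virtual machine that requests 8 GiB (8192 MiB) of memory with 2 vCPUs and 1 graphics device. The workload values here are illustrative.

    360 MiB                      # fixed overhead term from the formula above
    + (1.002 * 8192 MiB)         # ~8208 MiB for the requested memory
    + 146 MiB                    # fixed per-VM overhead
    + (8 MiB * 2 vCPUs)          # 16 MiB
    + (16 MiB * 1 graphics dev)  # 16 MiB
    = ~8746 MiB total, or about 554 MiB of overhead beyond the 8 GiB request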

If your existing servers do not meet these requirements, you can create a new Bare Metal Server that does. For a list of bare metal options, see Available options for a bare metal server.

Attaching bare metal servers to your location

Follow these general steps to attach your bare metal servers to your location. These steps might vary, depending on your specific hardware. For a complete tutorial, see Attaching IBM Cloud Bare Metal Servers for Classic hosts.

  1. Download a Red Hat CoreOS ISO. Find the corresponding ISO version that matches the Red Hat OpenShift version that you want to use. For example, if you want to use version 4.11, download a version of RHCOS for 4.11, such as rhcos-4.11.9-x86_64-live.x86_64.iso.
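
    For example, you might download the ISO from the Red Hat mirror. The URL shown follows the mirror's usual layout; verify the exact path for the version that you want to use.

    curl -LO https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.11/4.11.9/rhcos-4.11.9-x86_64-live.x86_64.iso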

  2. Log in to your bare metal server.

  3. In the BIOS settings, ensure that virtualization is enabled.

  4. Set up your boot order so that the server boots from the Red Hat CoreOS ISO file that you downloaded in step 1.

  5. Boot your system from the ISO.

  6. After Red Hat CoreOS is booted into memory, set up network connections so that you can retrieve the location ignition file and attach the server to your location, as shown in the example after this step.
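
    For example, you can configure a static IP address with nmcli, which is available in the RHCOS live environment. The connection name, IP address, gateway, and DNS server are placeholders; use the values that you recorded for your server's public and private interfaces.

    # List the connections that NetworkManager detected.
    sudo nmcli connection show
    # Assign the IP address, gateway, and DNS server that you recorded.
    sudo nmcli connection modify '<connection_name>' ipv4.method manual ipv4.addresses <ip_address>/<prefix_length> ipv4.gateway <gateway> ipv4.dns <dns_server>
    sudo nmcli connection up '<connection_name>'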

  7. Download the ignition file for your Satellite location.

    ibmcloud sat host attach --location <location_name> --operating-system RHCOS
    
  8. Edit your ignition file to include the bare metal host name and network information. For more information about adding these details, see Configuring your ignition file. Note that you must edit the ignition file for each bare metal server that you want to attach to your location.

  9. Put your ignition file in a location that your bare metal server can reach. For example, you can upload it to an IBM Cloud Object Storage bucket.
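
    For example, with the IBM Cloud Object Storage CLI plug-in, the upload might look like the following. The bucket name is a placeholder, and the command assumes that the plug-in is installed and configured for your account.

    ibmcloud cos upload --bucket <bucket_name> --key ignition.ign --file ignition.ign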

  10. Download the ignition file to your bare metal server.

    curl <ignition_file_location> -o ignition.ign
    
  11. Run the following command to install Red Hat CoreOS with the ignition file. Replace <diskName> with the full path of the disk where you want Red Hat CoreOS installed, and replace <filename> with the path of the ignition file.

    sudo coreos-installer install <diskName> --ignition-file <filename>
    

    The installation process can take an hour or two to complete.
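
    For example, assuming that the target disk is /dev/sda and that the ignition file is in the current directory:

    sudo coreos-installer install /dev/sda --ignition-file ignition.ign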

  12. After the installation completes, remove or unmount the RHCOS ISO and reboot the server from its hard disk.

  13. Check your Satellite location to confirm that your bare metal server is attached.
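
    You can check from the Satellite console, or list the hosts with the CLI and confirm that the server appears:

    ibmcloud sat hosts --location <location_name>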

  14. Repeat these steps for each bare metal host that you want to attach to your location.

Assigning a Bare Metal Server host to your Red Hat OpenShift cluster

After your Bare Metal Server is attached to your location, you can assign it to a Red Hat OpenShift cluster worker pool.

  1. Find the hosts to add to your Red Hat OpenShift cluster worker pool.

    ibmcloud sat hosts --location <locationID>
    
  2. Assign the Bare Metal Server to the Red Hat OpenShift cluster worker pool.

    ibmcloud sat host assign --location <locationID> --cluster <clusterID> --host <hostID> --worker-pool default --zone <zone>
    

Repeat these steps to assign more Bare Metal Servers to your cluster.
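
To confirm that the assigned hosts appear as worker nodes, you can list the workers in your cluster. For example:

    ibmcloud oc worker ls --cluster <clusterID>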

Now that your Bare Metal Server is assigned to a worker pool, you can set up Red Hat OpenShift virtualization.

Setting up storage for your cluster

In this example scenario, you deploy OpenShift Data Foundation across 3 nodes in the cluster, and ODF automatically discovers the available storage disks on your Bare Metal Servers.

After you attach at least 3 Bare Metal Servers to your location and assign them as worker nodes in your cluster, you can deploy OpenShift Data Foundation by using the odf-local Satellite storage template.

  1. From the Satellite locations console, click your location, then click Storage > Create storage configuration.
  2. Give your configuration a name.
  3. Select OpenShift Data Foundation for local devices and select version 4.10.
  4. For this example, leave the rest of the default settings and click Next.
  5. Wait for ODF to deploy. Then, verify that the pods are ready by listing the pods in the openshift-storage namespace.
    oc get pods -n openshift-storage
    
    Example output
    NAME                                                              READY   STATUS      RESTARTS   AGE
    ocs-metrics-exporter-5b85d48d66-lwzfn                             1/1     Running     0          2d1h
    ocs-operator-86498bf74c-qcgvh                                     1/1     Running     0          2d1h
    odf-console-68bcd54c7c-5fvkq                                      1/1     Running     0          2d1h
    rook-ceph-mgr-a-758845d77c-xjqkg                                  2/2     Running     0          2d1h
    rook-ceph-mon-a-85d65d9f66-crrhb                                  2/2     Running     0          2d1h
    rook-ceph-mon-b-74fd78856d-s2pdf                                  2/2     Running     0          2d1h
    rook-ceph-mon-c-76f9b8b5f9-gqcm4                                  2/2     Running     0          2d1h
    rook-ceph-operator-5d659cb494-ctkx6                               1/1     Running     0          2d1h
    rook-ceph-osd-0-846cf86f79-z97mc                                  2/2     Running     0          2d1h
    rook-ceph-osd-1-7f79ccf77d-8g4cn                                  2/2     Running     0          2d1h
    rook-ceph-osd-2-549cc486b4-7wf5k                                  2/2     Running     0          2d1h
    rook-ceph-osd-prepare-ocs-deviceset-0-data-0z6pn9-6fwqr           0/1     Completed   0          10d
    rook-ceph-osd-prepare-ocs-deviceset-1-data-0kkxrw-cppk9           0/1     Completed   0          10d
    rook-ceph-osd-prepare-ocs-deviceset-2-data-0pxktc-xm2rc           0/1     Completed   0          10d
    rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-54c58859nc8j   2/2     Running     0          2d1h
    ...
    ...
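
    You can also list the storage classes to confirm that the ODF classes were created; the sat-ocs-cephrbd-gold class is used later in this example.

    oc get sc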
    

Installing the virtualization operator

Follow the steps to Install Red Hat OpenShift Virtualization using the CLI.

Setting up the virtctl CLI

Follow the steps to download and install the virtctl CLI tool.
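
To verify the installation, you can print the client version:

    virtctl version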

Creating a data volume for your virtual machine

After you deploy OpenShift Data Foundation, you can use the sat-ocs-cephrbd-gold storage class to create a data volume that provides storage for your virtual machine.

  1. Copy the following example data volume and save it to a file called datavol.yaml.
    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: fedora-1
      namespace: openshift-cnv
    spec:
      source:
        registry:
          pullMethod: node
          url: docker://quay.io/containerdisks/fedora@sha256:29b80ef738f9b09c19efc245aac3921deab9acd542c886cf5295c94ab847dfb5
      pvc:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 10Gi
        volumeMode: Block
        storageClassName: sat-ocs-cephrbd-gold
    
  2. Create the data volume.
    oc apply -f datavol.yaml
    
  3. Verify that the data volume and corresponding PVC were created.
    oc get dv,pvc -n openshift-cnv
    
    Example output.
    NAME                                  PHASE       PROGRESS   RESTARTS   AGE
    datavolume.cdi.kubevirt.io/fedora-1   Succeeded   100.0%                16h
    NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
    persistentvolumeclaim/fedora-1   Bound    pvc-fd8b1a5b-cc32-42bd-95d0-4ccf2e40bca7   10Gi       RWX            sat-ocs-cephrbd-gold   16h
    

Creating a virtual machine

  1. Copy the following VirtualMachine configuration and save it to a file called vm.yaml. Note that you can also create virtual machines through the OpenShift web console.

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      labels:
        app: fedora-1
      name: fedora-1
      namespace: openshift-cnv
    spec:
      running: false
      template:
        metadata:
          labels:
            kubevirt.io/domain: fedora-1
        spec:
          domain:
            cpu:
              cores: 1
              sockets: 2
              threads: 1
            devices:
              disks:
              - disk:
                  bus: virtio
                name: rootdisk
              - disk:
                  bus: virtio
                name: cloudinitdisk
              interfaces:
              - masquerade: {}
                name: default
              rng: {}
            features:
              smm:
                enabled: true
            firmware:
              bootloader:
                efi: {}
            resources:
              requests:
                memory: 8Gi
          evictionStrategy: LiveMigrate
          networks:
          - name: default
            pod: {}
          volumes:
          - dataVolume:
              name: fedora-1
            name: rootdisk
          - cloudInitNoCloud:
              userData: |-
                #cloud-config
                user: cloud-user
                password: 'fedora-1-password' 
                chpasswd: { expire: False }
            name: cloudinitdisk
    
  2. Create the virtual machine in your cluster.

    oc apply -f vm.yaml
    
  3. Start the virtual machine.

    virtctl start fedora-1 -n openshift-cnv
    
  4. Verify that the virtual machine is running.

    oc get vm -n openshift-cnv
    

    Example output.

    NAME                                  AGE   STATUS    READY
    virtualmachine.kubevirt.io/fedora-1   16h   Running   True
    
  5. From the OpenShift web console, log in to your VM by using the username and password you specified in the VirtualMachine config. For example, user: cloud-user and password: 'fedora-1-password'.
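
    Alternatively, you can open a serial console to the virtual machine directly from your terminal:

    virtctl console fedora-1 -n openshift-cnv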

Congratulations! You just deployed a Fedora virtual machine on your Satellite cluster.

You can find more information about what to do next in the OpenShift Virtualization Hands-on Lab.

Additional resources