Planning your migration to Red Hat OpenShift Virtualization
Planning your migration of virtual machines from VMware vSphere, Red Hat Virtualization, OpenStack, or other platforms to Red Hat OpenShift Virtualization by using the Migration Toolkit for Virtualization
Abstract
- 1. Planning a migration
- 2. Cold and warm migration in MTV
- 3. Live migration in MTV
- 4. Prerequisites for migration
- 4.1. Software requirements
- 4.2. Storage support and default modes
- 4.3. Network prerequisites
- 4.4. Source virtual machine prerequisites
- 4.5. MTV encryption support
- 4.6. Red Hat Virtualization prerequisites
- 4.7. OpenStack prerequisites
- 4.8. Additional authentication methods for migrations with OpenStack source providers
- 4.9. Using token authentication with an OpenStack source provider
- 4.10. Using application credential authentication with an OpenStack source provider
- 4.11. VMware prerequisites
- 4.12. Open Virtual Appliance (OVA) prerequisites
- 4.13. OpenShift Virtualization prerequisites
- 4.14. Software compatibility guidelines
- 5. Installing and configuring the MTV Operator
- 6. Migrating virtual machines by using the Red Hat OpenShift web console
- 7. Migrating virtual machines by using the command-line interface
- 8. Mapping networks and storage in migration plans
- 9. Planning migration of virtual machines from VMware vSphere
- 9.1. Creating ownerless network maps in the MTV UI
- 9.2. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
- 9.3. Adding a VMware vSphere source provider
- 9.4. Selecting a migration network for a VMware source provider
- 9.5. Adding an OpenShift Virtualization destination provider
- 9.6. Selecting a migration network for an OpenShift Virtualization provider
- 9.7. Creating a VMware vSphere migration plan by using the MTV wizard
- 10. Planning a migration of virtual machines from Red Hat Virtualization
- 10.1. Creating ownerless network maps in the MTV UI
- 10.2. Creating ownerless storage maps using the form page of the MTV UI
- 10.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
- 10.4. Adding a Red Hat Virtualization source provider
- 10.5. Adding an OpenShift Virtualization destination provider
- 10.6. Selecting a migration network for an OpenShift Virtualization provider
- 10.7. Creating a Red Hat Virtualization migration plan by using the MTV wizard
- 11. Planning migration of virtual machines from OpenStack
- 11.1. Creating ownerless network maps in the MTV UI
- 11.2. Creating ownerless storage maps using the form page of the MTV UI
- 11.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
- 11.4. Adding an OpenStack source provider
- 11.5. Adding an OpenShift Virtualization destination provider
- 11.6. Selecting a migration network for an OpenShift Virtualization provider
- 11.7. Creating an OpenStack migration plan by using the MTV wizard
- 12. Planning a migration of virtual machines from OVA
- 12.1. Creating ownerless network maps in the MTV UI
- 12.2. Creating ownerless storage maps using the form page of the MTV UI
- 12.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
- 12.4. Adding an Open Virtual Appliance (OVA) source provider
- 12.5. Adding an OpenShift Virtualization destination provider
- 12.6. Selecting a migration network for an OpenShift Virtualization provider
- 12.7. Creating an Open Virtualization Appliance (OVA) migration plan by using the MTV wizard
- 12.8. Configuring OVA file upload by web browser
- 13. Planning a migration of virtual machines from OpenShift Virtualization
- 13.1. Creating ownerless network maps in the MTV UI
- 13.2. Creating ownerless storage maps using the form page of the MTV UI
- 13.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
- 13.4. Adding a Red Hat OpenShift Virtualization source provider
- 13.5. Adding an OpenShift Virtualization destination provider
- 13.6. Selecting a migration network for an OpenShift Virtualization provider
- 13.7. Creating an OpenShift Virtualization migration plan by using the MTV wizard
Chapter 1. Planning a migration
You can use the Migration Toolkit for Virtualization (MTV) to plan your migration of virtual machines from the following source providers to OpenShift Virtualization destination providers:
- VMware vSphere
- Red Hat Virtualization
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote OpenShift Virtualization clusters
1.1. Types of migration
MTV supports three types of migration: cold, warm, and live.
- Cold migration is available for all of the source providers listed above. This type of migration migrates VMs that are powered off and does not require shared storage.
- Warm migration is available only for VMware vSphere and Red Hat Virtualization. This type of migration migrates VMs that are powered on and requires shared storage.

These two types of migration are discussed in detail in About cold and warm migration.

- Live migration is available only for migrations between OpenShift Virtualization clusters or between namespaces on the same OpenShift Virtualization cluster. It requires MTV version 2.10 or later and OpenShift Virtualization version 4.20 or later.

Live migration is discussed in detail in Live migration in MTV.

Important: Live migration is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.
Chapter 2. Cold and warm migration in MTV
Cold migration is when a powered off virtual machine (VM) is migrated to a separate host. The VM is powered off, and there is no need for common shared storage.
Warm migration is when a powered on VM is migrated to a separate host. The source host state is cloned to the destination host.
2.1. About cold and warm migration
Migration Toolkit for Virtualization (MTV) supports cold and warm migration as follows:
MTV supports cold migration from the following source providers:
- VMware vSphere
- Red Hat Virtualization
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- Remote OpenShift Virtualization clusters
MTV supports warm migration from the following source providers:
- VMware vSphere
- Red Hat Virtualization
2.1.1. Cold migration
Cold migration is the default migration type. The source virtual machines are shut down while the data is copied.
VMware only: In cold migrations, if a package manager cannot be used during the migration, MTV does not install the qemu-guest-agent daemon on the migrated VMs. This has some impact on the functionality of the migrated VMs, but overall, they are still expected to function.
To enable MTV to automatically install qemu-guest-agent on the migrated VMs, ensure that your package manager can install the daemon during the first boot of the VM after migration.
If that is not possible, use your preferred automated or manual procedure to install qemu-guest-agent manually.
2.1.2. Warm migration
Most of the data is copied during the precopy stage while the source virtual machines (VMs) are running.
Then the VMs are shut down and the remaining data is copied during the cutover stage.
2.1.3. Precopy stage
The VMs are not shut down during the precopy stage.
The VM disks are copied incrementally by using changed block tracking (CBT) snapshots. The snapshots are created at one-hour intervals by default. You can change the snapshot interval by updating the forklift-controller deployment.
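As a sketch of the interval change described above, you can set the interval in the ForkliftController custom resource. This is a hedged illustration: the parameter name `controller_precopy_interval` (in minutes) and the default resource names are assumptions to verify against your MTV version:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller   # default name in the openshift-mtv namespace
  namespace: openshift-mtv
spec:
  # Assumption: controller_precopy_interval sets the snapshot interval in minutes.
  controller_precopy_interval: 30
```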
You must enable CBT for each source VM and each VM disk.
A VM can support up to 28 CBT snapshots. If the source VM has too many CBT snapshots and the Migration Controller service is not able to create a new snapshot, warm migration might fail. The Migration Controller service deletes each snapshot when the snapshot is no longer required.
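Enabling CBT happens on the vSphere side through the VM's advanced configuration parameters. The following is a minimal sketch of those parameters: `ctkEnabled` turns CBT on for the VM, and one `scsiX:Y.ctkEnabled` entry is needed per disk to track. The VM should have no snapshots and be powered off when you change these settings; verify the exact procedure against VMware documentation for your vSphere version.

```ini
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
```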
The precopy stage runs until the cutover stage is started manually or is scheduled to start.
2.1.4. Cutover stage
The VMs are shut down during the cutover stage and the remaining data is migrated. Data stored in RAM is not migrated.
You can start the cutover stage manually by using the MTV console or you can schedule a cutover time in the Migration manifest.
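A scheduled cutover can be expressed declaratively. The sketch below shows the idea, assuming a Migration resource in the `openshift-mtv` namespace; the migration and plan names are hypothetical, and `cutover` takes an ISO 8601 timestamp:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: warm-migration-example     # hypothetical name
  namespace: openshift-mtv
spec:
  plan:
    name: my-warm-plan             # hypothetical plan name
    namespace: openshift-mtv
  cutover: "2025-01-15T02:00:00Z"  # scheduled cutover time (UTC)
```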
2.1.5. Advantages and disadvantages of cold and warm migrations
The table that follows offers a more detailed description of the advantages and disadvantages of cold migration and warm migration. It assumes that you have installed Red Hat Enterprise Linux (RHEL) 9 on the Red Hat OpenShift platform on which you installed MTV:
Table 2.1. Advantages and disadvantages of cold and warm migrations
| | Cold migration | Warm migration |
|---|---|---|
| Duration | Correlates to the amount of data on the disks. Each block is copied once. | Correlates to the amount of data on the disks and VM utilization. Blocks may be copied multiple times. |
| Fail fast | Convert and then transfer. Each VM is converted to be compatible with OpenShift and, if the conversion is successful, the VM is transferred. If a VM cannot be converted, the migration fails immediately. | Transfer and then convert. For each VM, MTV creates a snapshot and transfers it to Red Hat OpenShift. When you start the cutover, MTV creates the last snapshot, transfers it, and then converts the VM. |
| Data transferred | Approximate sum of all disks | Approximate sum of all disks and VM utilization |
| VM downtime | High: The VMs are shut down, and the disks are transferred. | Low: Disks are transferred in the background. The VMs are shut down during the cutover stage, and the remaining data is migrated. Data stored in RAM is not migrated. |
| Parallelism | Disks are transferred sequentially for each VM. For remote migration to a destination that does not have MTV installed, disks are transferred in parallel using CDI. | Disks are transferred in parallel by different pods. |
| Connection use | Keeps the connection to the source only during the disk transfer. | Keeps the connection to the source during the disk transfer, but the connection is released between snapshots. |
| Tools | MTV only. | MTV and Containerized Data Importer (CDI), a persistent storage management add-on, from OpenShift Virtualization. |
The preceding table describes the situation for VMs that are running because the main benefit of warm migration is the reduced downtime, and there is no reason to initiate warm migration for VMs that are down. However, performing warm migration for VMs that are down is not the same as cold migration, even when MTV uses virt-v2v and RHEL 9. For VMs that are down, MTV transfers the disks using CDI, unlike in cold migration.
When importing from VMware, additional factors impact the migration speed, such as limits related to ESXi, vSphere, or VDDK.
2.1.6. Conclusions
Based on the preceding information, we can draw the following conclusions about cold migration vs. warm migration:
- The shortest downtime of VMs can be achieved by using warm migration.
- The shortest duration for VMs with a large amount of data on a single disk can be achieved by using cold migration.
- The shortest duration for VMs with a large amount of data that is spread evenly across multiple disks can be achieved by using warm migration.
2.2. Migration speed comparison
If you compare the migration speeds of cold and warm migrations, you can observe that:
- The observed speeds for the warm migration single disk transfer and disk conversion are approximately the same as for the cold migration.
- The benefit of warm migration is that the transfer of the snapshot happens in the background while the VM is powered on.
- By default, a snapshot is taken every 60 minutes. If VMs change substantially, more data needs to be transferred than in a cold migration, where the VM is powered off.
- The cutover time, meaning the shutdown of the VM and last snapshot transfer, depends on how much the VM has changed since the last snapshot.
Live migration reduces downtime even further than warm migration, but live migration is available only for migration between OpenShift Virtualization clusters or between namespaces on the same OpenShift Virtualization cluster. Therefore it is not included in the comparison above.
Chapter 3. Live migration in MTV
You can use live migration to migrate virtual machines (VMs) between OpenShift Virtualization clusters, or between namespaces on the same OpenShift Virtualization cluster, with extremely limited downtime.
Live migration is supported by Migration Toolkit for Virtualization (MTV) version 2.10.0 and later. It requires OpenShift Virtualization 4.20 or later on both your source and target clusters.
Live migration makes it easier for you to perform Day 2 tasks, such as seamless maintenance and workload balancing after you have migrated your VMs to OpenShift Virtualization.
3.1. Benefits of live migration
The major advantage of live migration is that it significantly reduces the amount of downtime needed to perform migrations. As a result, you can perform migrations with minimal service interruption. This allows your end-users to continue using critical applications during migrations.
Live migration also gives you the following benefits:
- Additional migration functionality: Live migration supports migrating virtual machines (VMs) between OpenShift Virtualization clusters and between namespaces on the same OpenShift Virtualization cluster, making Day 2 operations easier and safer to perform.
- Improved service continuity: Live migration lets you quickly migrate VMs from one cluster to another, allowing you to eliminate the need for scheduled downtime during cluster maintenance or upgrades. This allows you to provide more consistent and reliable services.
- Greater operational flexibility: Live migration allows your IT team to manage your infrastructure dynamically without harming business operations. Your team can respond to changing demands or perform necessary maintenance without complex, disruptive procedures.
- Enhanced performance and scalability: Live migration gives you the ability to balance workloads across clusters. This helps ensure that applications have the resources they need, leading to better overall system performance and scalability.
3.2. Live migration, MTV, and OpenShift Virtualization
Live migration is a joint operation between Migration Toolkit for Virtualization (MTV) and OpenShift Virtualization that allows you to leverage the strengths of MTV when you migrate virtual machines (VMs) from one OpenShift Virtualization cluster to another. Tasks and responsibilities are divided between the two as follows:
- MTV handles the high-level orchestration that is needed to perform a live migration of OpenShift Virtualization KubeVirt VMs from one cluster to another.
- OpenShift Virtualization is responsible for the low-level migration mechanics, such as the actual state and storage transfer between the clusters.
Orchestration is done by the ForkliftController component of MTV, rather than by OpenShift Virtualization, because ForkliftController is already designed to manage the migration pipeline, which includes the following responsibilities:
- Build an inventory of source resources and map them to the destination cluster.
- Create and run the migration plan.
- Ensure that all necessary shared resources, such as instance types, SSH keys, secrets, and config maps, are available and accessible on the destination cluster.
3.3. Limitations of live migration
Live migration lets you migrate virtual machines (VMs) between OpenShift Virtualization clusters or between namespaces on the same OpenShift Virtualization cluster with a minimum of downtime, but the feature does have the following limitations:
- Live migration is available only for migrations between OpenShift Virtualization clusters or between namespaces on the same OpenShift Virtualization cluster. It is not available for any other source provider, whether the provider is supported by Migration Toolkit for Virtualization (MTV) or not.
- Live migration does not establish connectivity between OpenShift Virtualization clusters. Establishing such connectivity is the responsibility of the cluster administrator.
- Live migration does not migrate resources unrelated to VMs, such as services, routes, or other application components, that may be necessary for application availability after a migration.
3.4. Live migration workflow
Live migration uses a different workflow than other types of migration. You can use the following workflow to understand how Migration Toolkit for Virtualization (MTV) orchestrates a live migration with OpenShift Virtualization handling the low-level mechanics of the migration. This workflow is also designed to help you troubleshoot problems that might occur during a live migration.
- Start: When you click Start plan, MTV initiates the migration plan.
- PreHook: If you added a pre-migration hook, MTV runs it now.
- Create empty DataVolumes: MTV creates empty target `DataVolumes` in the target OpenShift Virtualization cluster. OpenShift Virtualization uses KubeVirt to handle the actual storage migration.
- Ensure resources: MTV copies all secrets or config maps that are mounted by a source VM to the target namespace.
- Create target VMs: MTV creates target VMs in a running state and creates a `VirtualMachineInstanceMigration` resource on each cluster. The VMs have a special KubeVirt annotation indicating to start them in migration target mode.
- Wait for state transfer: MTV waits for KubeVirt to handle the state transfer and for the destination VMs to report as ready. KubeVirt also handles the shutdown of the source VMs after the state transfer.
- PostHook: If you added a post-migration hook, MTV runs it now.
- Completed: MTV indicates that the migration is finished.
Chapter 4. Prerequisites for migration
Review the following prerequisites to ensure that your environment is prepared for migration.
4.1. Software requirements
Migration Toolkit for Virtualization (MTV) has software requirements for all providers as well as specific software requirements per provider.
You must install compatible versions of Red Hat OpenShift and OpenShift Virtualization.
4.2. Storage support and default modes
Migration Toolkit for Virtualization (MTV) uses the following default volume and access modes for supported storage.
Table 4.1. Default volume and access modes
| Provisioner | Volume mode | Access mode |
|---|---|---|
| kubernetes.io/aws-ebs | Block | ReadWriteOnce |
| kubernetes.io/azure-disk | Block | ReadWriteOnce |
| kubernetes.io/azure-file | Filesystem | ReadWriteMany |
| kubernetes.io/cinder | Block | ReadWriteOnce |
| kubernetes.io/gce-pd | Block | ReadWriteOnce |
| kubernetes.io/hostpath-provisioner | Filesystem | ReadWriteOnce |
| manila.csi.openstack.org | Filesystem | ReadWriteMany |
| openshift-storage.cephfs.csi.ceph.com | Filesystem | ReadWriteMany |
| openshift-storage.rbd.csi.ceph.com | Block | ReadWriteOnce |
| kubernetes.io/rbd | Block | ReadWriteOnce |
| kubernetes.io/vsphere-volume | Block | ReadWriteOnce |
If the OpenShift Virtualization storage does not support dynamic provisioning, you must apply the following settings:
- `Filesystem` volume mode: `Filesystem` volume mode is slower than `Block` volume mode.
- `ReadWriteOnce` access mode: `ReadWriteOnce` access mode does not support live virtual machine migration.
See Enabling a statically-provisioned storage class for details on editing the storage profile.
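As an illustration of these settings, a CDI StorageProfile for a statically provisioned storage class might look like the following sketch. The storage class name is hypothetical; follow the linked procedure for the authoritative steps:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: StorageProfile
metadata:
  name: my-static-sc        # hypothetical; must match the storage class name
spec:
  claimPropertySets:
  - accessModes:
    - ReadWriteOnce         # ReadWriteOnce does not support live VM migration
    volumeMode: Filesystem  # Filesystem is slower than Block
```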
If your migration uses block storage and persistent volumes created with an EXT4 file system, increase the file system overhead in the Containerized Data Importer (CDI) to more than 10%. The default overhead that is assumed by CDI does not completely include the space reserved for the root partition. If you do not increase the file system overhead in CDI by this amount, your migration might fail.
When you migrate from OpenStack, or when you run a cold migration from Red Hat Virtualization to the Red Hat OpenShift cluster that MTV is deployed on, the migration allocates persistent volumes without CDI. In these cases, you might need to adjust the file system overhead.
If the configured file system overhead, which has a default value of 10%, is too low, the disk transfer fails due to lack of space. In that case, increase the file system overhead.
In other cases, however, you might want to decrease the file system overhead to reduce storage consumption.
You can change the file system overhead by changing the value of `controller_filesystem_overhead` in the `spec` portion of the `forklift-controller` CR, as described in Configuring the MTV Operator.
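For example, a sketch of the ForkliftController CR with the overhead raised to 15%. The resource names follow the defaults in the openshift-mtv namespace; verify them against your installation:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_filesystem_overhead: 15   # percentage; the default is 10
```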
4.3. Network prerequisites
The following network prerequisites apply to all migrations:
- Do not change IP addresses, VLANs, and other network configuration settings during a migration. The MAC addresses of the virtual machines (VMs) are preserved during migration.
- The network connections between the source environment, the OpenShift Virtualization cluster, and the replication repository must be reliable and uninterrupted.
- If you are mapping more than one source and destination network, you must create a network attachment definition for each additional destination network.
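For reference, a minimal network attachment definition for one additional destination network might look like this sketch; the name, namespace, and bridge are illustrative assumptions:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vlan10-net           # hypothetical; referenced from the network map
  namespace: openshift-mtv
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "vlan10-net",
      "type": "bridge",
      "bridge": "br1",
      "ipam": {}
    }
```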
4.3.1. Ports
The firewalls must enable traffic over the following ports:
Table 4.2. Network ports required for migrating from VMware vSphere
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 443 | TCP | OpenShift nodes | VMware vCenter | VMware provider inventory; disk transfer authentication |
| 443 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer authentication |
| 902 | TCP | OpenShift nodes | VMware ESXi hosts | Disk transfer data copy |
Table 4.3. Network ports required for migrating from Red Hat Virtualization
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 443 | TCP | OpenShift nodes | RHV Engine | RHV provider inventory; disk transfer authentication |
| 54322 | TCP | OpenShift nodes | RHV hosts | Disk transfer data copy |
Table 4.4. Network ports required for migrating from OpenStack
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 8776 | Cinder | OpenShift nodes | OpenStack hosts | Block storage API |
| 8774 | Nova | OpenShift nodes | OpenStack hosts | Virtualization API |
| 5000 | Keystone | OpenShift nodes | OpenStack hosts | Authentication API |
| 9696 | Neutron | OpenShift nodes | OpenStack hosts | Network API |
| 9292 | Glance | OpenShift nodes | OpenStack hosts | Image service API |
Table 4.5. Network ports required for migrating from Open Virtual Appliance (OVA) files
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 2049 | TCP | OpenShift nodes | Server containing the OVA files | NFS service |
| 111 | TCP or UDP | OpenShift nodes | Server containing the OVA files | RPC Portmapper, only needed for NFSv4.0 |
Table 4.6. Network ports required for migrating from OpenShift Virtualization
| Port | Protocol | Source | Destination | Purpose |
|---|---|---|---|---|
| 6443 | API | OpenShift nodes | OpenShift Virtualization host | Access API to get information from a VM’s manifest |
| 443 | TCP | OpenShift nodes | OpenShift Virtualization host | Download VM data using the |
4.4. Source virtual machine prerequisites
The following prerequisites for source virtual machines (VMs) apply to all migrations:
- ISO images and CD-ROMs are unmounted.
- Each NIC has an IPv4 address, an IPv6 address, or both.
- The operating system of each VM is certified and supported as a guest operating system for conversions.
You can check that the operating system is supported by referring to the table in Converting virtual machines from other hypervisors to KVM with virt-v2v. See the columns of the table that refer to RHEL 8 hosts and RHEL 9 hosts.
- VMs that you want to migrate with MTV 2.6.z run on RHEL 8.
- VMs that you want to migrate with MTV 2.7.z run on RHEL 9.
- The name of a VM must not contain a period (`.`). Migration Toolkit for Virtualization (MTV) changes any period in a VM name to a dash (`-`).
- The name of a VM must not be the same as the name of any other VM in the OpenShift Virtualization environment.
Warning: MTV has limited support for the migration of VMs with dual-boot operating systems.
For a dual-boot VM, MTV tries to convert the first boot disk that it finds. Alternatively, you can specify the root device in the MTV UI.
Warning: For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications.

When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest operating system in order for the snapshot and `Quiesce guest file system` to succeed. If you do not start VSS on the Windows guest operating system, the snapshot creation during the warm migration fails with the following error:

    An error occurred while taking a snapshot: Failed to restart the virtual machine

If you set the VSS service to Manual and start a snapshot creation with `Quiesce guest file system = yes`, the VMware Snapshot provider service requests VSS to start the shadow copy in the background.

Note: Migration Toolkit for Virtualization automatically assigns a new name to a VM that does not comply with the rules. Migration Toolkit for Virtualization makes the following changes when it automatically generates a new VM name:

- Excluded characters are removed.
- Uppercase letters are switched to lowercase letters.
- Any underscore (`_`) is changed to a dash (`-`).

This feature allows a migration to proceed smoothly even if someone enters a VM name that does not follow the rules.
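The renaming rules above can be sketched as a small shell function. This is an illustration only, not MTV's actual implementation, and it covers only the case, period, and underscore rules:

```shell
# Illustrative sketch of MTV-style VM name normalization:
# lowercase the name, then map '.' and '_' to '-'.
normalize_vm_name() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr '._' '--'
}

normalize_vm_name "My_VM.prod"   # prints: my-vm-prod
```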
Note: Microsoft Windows VMs that use the Measured Boot feature cannot be migrated. Measured Boot is a mechanism to prevent any kind of device change by checking each start-up component, from the firmware all the way to the boot driver.
The alternative to migration is to re-create the Windows VM directly on OpenShift Virtualization.
Note: Virtual machines (VMs) with Secure Boot enabled currently might not be migrated automatically, because Secure Boot would prevent the VMs from booting on the destination provider. Secure Boot is a security standard developed by members of the PC industry to ensure that a device boots using only software that is trusted by the Original Equipment Manufacturer (OEM).
Workaround: The current workaround is to disable Secure Boot on the destination. For more details, see Disabling Secure Boot. (MTV-1548)
4.5. MTV encryption support
Migration Toolkit for Virtualization (MTV) supports the following types of encryption for virtual machines (VMs):
- For VMs that run on Linux: Linux Unified Key Setup (LUKS)
- For VMs that run on Windows: BitLocker
4.6. Red Hat Virtualization prerequisites
The following prerequisites apply to Red Hat Virtualization migrations:
- To create a source provider, you must have at least the `UserRole` and `ReadOnlyAdmin` roles assigned to you. These are the minimum required permissions; however, any other administrator or superuser permissions also work.

  You must keep the `UserRole` and `ReadOnlyAdmin` roles until the virtual machines of the source provider have been migrated. Otherwise, the migration fails.

- To migrate virtual machines, you must have one of the following:
  - RHV admin permissions. These permissions allow you to migrate any virtual machine in the system.
  - `DiskCreator` and `UserVmManager` permissions on every virtual machine that you want to migrate.
- You must use a compatible version of Red Hat Virtualization.
- You must have the Manager CA certificate, unless it was replaced by a third-party certificate, in which case you specify the Manager Apache CA certificate.

  You can obtain the Manager CA certificate by navigating to https://<engine_host>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA in a browser.
- If you are migrating a virtual machine with a direct logical unit number (LUN) disk, ensure that the nodes in the OpenShift Virtualization destination cluster that the VM is expected to run on can access the backend storage.
- Unlike disk images that are copied from a source provider to a target provider, LUNs are detached, but not removed, from virtual machines in the source provider and then attached to the virtual machines (VMs) that are created in the target provider.
- LUNs are not removed from the source provider during the migration in case fallback to the source provider is required. However, before re-attaching the LUNs to VMs in the source provider, ensure that the LUNs are not used by VMs on the target environment at the same time, which might lead to data corruption.
4.7. OpenStack prerequisites
The following prerequisites apply to OpenStack migrations:
- You must use a compatible version of OpenStack.
4.8. Additional authentication methods for migrations with OpenStack source providers
MTV versions 2.6 and later support the following authentication methods for migrations with OpenStack source providers in addition to the standard username and password credential set:
- Token authentication
- Application credential authentication
You can use these methods to migrate virtual machines with OpenStack source providers using the command-line interface (CLI) the same way you migrate other virtual machines, except for how you prepare the Secret manifest.
4.9. Using token authentication with an OpenStack source provider
You can use token authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of token authentication:
- Token with user ID
- Token with user name
For each type of token authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
Have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
- Expand Download OpenStack RC file and click OpenStack RC file.

  The file that is downloaded, referred to here as `<openstack_rc_file>`, includes the following fields used for token authentication:

      OS_AUTH_URL
      OS_PROJECT_ID
      OS_PROJECT_NAME
      OS_DOMAIN_NAME
      OS_USERNAME

- To get the data needed for token authentication, run the following command:

      $ openstack token issue

  The output, referred to here as `<openstack_token_output>`, includes the `token`, `userID`, and `projectID` that you need for authentication using a token with user ID.

- Create a `Secret` manifest similar to the following:

  For authentication using a token with user ID:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenid
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        projectID: <projectID_from_openstack_token_output>
        userID: <userID_from_openstack_token_output>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF

  For authentication using a token with user name:

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: Secret
      metadata:
        name: openstack-secret-tokenname
        namespace: openshift-mtv
        labels:
          createdForProviderType: openstack
      type: Opaque
      stringData:
        authType: token
        token: <token_from_openstack_token_output>
        domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
        projectName: <OS_PROJECT_NAME_from_openstack_rc_file>
        username: <OS_USERNAME_from_openstack_rc_file>
        url: <OS_AUTH_URL_from_openstack_rc_file>
      EOF
4.10. Using application credential authentication with an OpenStack source provider
You can use application credential authentication, instead of username and password authentication, when you create an OpenStack source provider.
MTV supports both of the following types of application credential authentication:
- Application credential ID
- Application credential name
For each type of application credential authentication, you need to use data from OpenStack to create a Secret manifest.
Prerequisites
You have an OpenStack account.
Procedure
- In the dashboard of the OpenStack web console, click Project > API Access.
Expand Download OpenStack RC file and click OpenStack RC file.
The file that is downloaded, referred to here as `<openstack_rc_file>`, includes the following fields used for application credential authentication:

```
OS_AUTH_URL
OS_PROJECT_ID
OS_PROJECT_NAME
OS_DOMAIN_NAME
OS_USERNAME
```
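The RC file is a shell script that exports these values, so sourcing it makes them available when you build the `Secret` manifest. The following is a minimal sketch with illustrative values; in practice, source the `<openstack_rc_file>` that you downloaded from the OpenStack web console:

```shell
# Sketch with illustrative values; source your downloaded <openstack_rc_file> instead.
cat > openstack.rc <<'EOF'
export OS_AUTH_URL=https://overcloud.example.com:13000
export OS_DOMAIN_NAME=Default
export OS_USERNAME=admin
EOF
source ./openstack.rc
# These variables map directly onto the url, domainName, and username
# fields of the Secret manifests shown in this section.
echo "$OS_AUTH_URL $OS_DOMAIN_NAME $OS_USERNAME"
```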
To get the data needed for application credential authentication, run the following command:
$ openstack application credential create --role member --role reader --secret redhat forklift
The output, referred to here as `<openstack_credential_output>`, includes:
- The `id` and `secret` that you need for authentication using an application credential ID
- The `name` and `secret` that you need for authentication using an application credential name

Create a `Secret` manifest similar to the following:

For authentication using the application credential ID:
```
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appid
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialID: <id_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
```
For authentication using the application credential name:
```
cat << EOF | oc apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: openstack-secret-appname
  namespace: openshift-mtv
  labels:
    createdForProviderType: openstack
type: Opaque
stringData:
  authType: applicationcredential
  applicationCredentialName: <name_from_openstack_credential_output>
  applicationCredentialSecret: <secret_from_openstack_credential_output>
  domainName: <OS_DOMAIN_NAME_from_openstack_rc_file>
  username: <OS_USERNAME_from_openstack_rc_file>
  url: <OS_AUTH_URL_from_openstack_rc_file>
EOF
```
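A `Secret` created with either authentication method can then be referenced from an OpenStack `Provider` custom resource, for example when you add the source provider by using the CLI. The following is a minimal, illustrative sketch; the provider name is hypothetical:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: openstack-provider   # hypothetical name
  namespace: openshift-mtv
spec:
  type: openstack
  url: <OS_AUTH_URL_from_openstack_rc_file>
  secret:
    name: openstack-secret-appid   # the Secret created above
    namespace: openshift-mtv
```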
4.11. VMware prerequisites
It is strongly recommended to create a VDDK image to accelerate migrations. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.
The following prerequisites apply to VMware migrations:
- You must use a compatible version of VMware vSphere.
- You must be logged in as a user with at least the minimal set of VMware privileges.
- To access the virtual machine using a pre-migration hook, VMware Tools must be installed on the source virtual machine.
- The VM operating system must be certified and supported for use as a guest operating system with OpenShift Virtualization and for conversion to KVM with `virt-v2v`.
- If you are running a warm migration, you must enable changed block tracking (CBT) on the VMs and on the VM disks.
- If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host.
- It is strongly recommended to disable hibernation because Migration Toolkit for Virtualization (MTV) does not support migrating hibernated VMs.
For virtual machines (VMs) running Microsoft Windows, Volume Shadow Copy Service (VSS) inside the guest VM is used to quiesce the file system and applications.
When performing a warm migration of a Microsoft Windows virtual machine from VMware, you must start VSS on the Windows guest operating system in order for the snapshot and Quiesce guest file system to succeed.
If you do not start VSS on the Windows guest operating system, the snapshot creation during the Warm migration fails with the following error:
An error occurred while taking a snapshot: Failed to restart the virtual machine
If you set the VSS service to Manual and start a snapshot creation with Quiesce guest file system = yes, the VMware Snapshot provider service requests VSS, in the background, to start the shadow copy.
In case of a power outage, data might be lost for a VM with disabled hibernation. However, if hibernation is not disabled, migration will fail.
Neither MTV nor OpenShift Virtualization support conversion of Btrfs for migrating VMs from VMware.
4.11.1. VMware privileges
The following minimal set of VMware privileges is required to migrate virtual machines to OpenShift Virtualization with the Migration Toolkit for Virtualization (MTV).
Table 4.7. VMware privileges

| Privilege | Description |
|---|---|
| `Virtual machine.Interaction` privileges: | |
| `Virtual machine.Interaction.Power Off` | Allows powering off a powered-on virtual machine. This operation powers down the guest operating system. |
| `Virtual machine.Interaction.Power On` | Allows powering on a powered-off virtual machine and resuming a suspended virtual machine. |
| `Virtual machine.Guest operating system management by VIX API` | Allows managing a virtual machine by the VMware Virtual Infrastructure eXtension (VIX) API. |
| `Virtual machine.Provisioning` privileges: Note: All `Virtual machine.Provisioning` privileges are required. | |
| `Virtual machine.Provisioning.Allow disk access` | Allows opening a disk on a virtual machine for random read and write access. Used mostly for remote disk mounting. |
| `Virtual machine.Provisioning.Allow file access` | Allows operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| `Virtual machine.Provisioning.Allow read-only disk access` | Allows opening a disk on a virtual machine for random read access. Used mostly for remote disk mounting. |
| `Virtual machine.Provisioning.Allow virtual machine download` | Allows read operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| `Virtual machine.Provisioning.Allow virtual machine files upload` | Allows write operations on files associated with a virtual machine, including VMX, disks, logs, and NVRAM. |
| `Virtual machine.Provisioning.Clone template` | Allows cloning of a template. |
| `Virtual machine.Provisioning.Clone virtual machine` | Allows cloning of an existing virtual machine and allocation of resources. |
| `Virtual machine.Provisioning.Create template from virtual machine` | Allows creation of a new template from a virtual machine. |
| `Virtual machine.Provisioning.Customize guest` | Allows customization of a virtual machine’s guest operating system without moving the virtual machine. |
| `Virtual machine.Provisioning.Deploy template` | Allows deployment of a virtual machine from a template. |
| `Virtual machine.Provisioning.Mark as template` | Allows marking an existing powered-off virtual machine as a template. |
| `Virtual machine.Provisioning.Mark as virtual machine` | Allows marking an existing template as a virtual machine. |
| `Virtual machine.Provisioning.Modify customization specification` | Allows creation, modification, or deletion of customization specifications. |
| `Virtual machine.Provisioning.Promote disks` | Allows promote operations on a virtual machine’s disks. |
| `Virtual machine.Provisioning.Read customization specifications` | Allows reading a customization specification. |
| `Virtual machine.Snapshot management` privileges: | |
| `Virtual machine.Snapshot management.Create snapshot` | Allows creation of a snapshot from the virtual machine’s current state. |
| `Virtual machine.Snapshot management.Remove snapshot` | Allows removal of a snapshot from the snapshot history. |
| `Datastore` privileges: | |
| `Datastore.Browse datastore` | Allows exploring the contents of a datastore. |
| `Datastore.Low level file operations` | Allows performing low-level file operations - read, write, delete, and rename - in a datastore. |
| `Sessions` privileges: | |
| `Sessions.Validate session` | Allows verification of the validity of a session. |
| `Cryptographic operations` privileges: | |
| `Cryptographic operations.Decrypt` | Allows decryption of an encrypted virtual machine. |
| `Cryptographic operations.Direct access` | Allows access to encrypted resources. |
Create a role in VMware with the permissions described in the preceding table and then apply this role to the Inventory section, as described in Creating a VMware role to grant MTV privileges.
4.11.2. Creating a VMware role to grant MTV privileges
You can create a role in VMware to grant privileges for Migration Toolkit for Virtualization (MTV) and then grant those privileges to users with that role.
The procedure that follows explains how to do this in general. For detailed instructions, see VMware documentation.
Procedure
- In the vCenter Server UI, create a role that includes the set of privileges described in the table in VMware prerequisites.
In the vSphere inventory UI, grant privileges for users with this role to the appropriate vSphere logical objects at one of the following levels:
- At the user or group level: Assign privileges to the appropriate logical objects in the data center and use the Propagate to child objects option.
- At the object level: Apply the same role individually to all the relevant vSphere logical objects involved in the migration, for example, hosts, vSphere clusters, data centers, or networks.
4.11.3. Creating a VDDK image
It is strongly recommended that you use Migration Toolkit for Virtualization (MTV) with the VMware Virtual Disk Development Kit (VDDK) when transferring virtual disks from VMware vSphere. Although creating a VDDK image is optional, it is highly recommended: using MTV without VDDK can result in significantly slower migrations.
To make use of this feature, you download the VDDK, build a VDDK image, and push the VDDK image to your image registry.
The VDDK package contains symbolic links; therefore, you must create the VDDK image on a file system that preserves symbolic links (symlinks).
Storing the VDDK image in a public registry might violate the VMware license terms.
Prerequisites
- Red Hat OpenShift image registry.
- `podman` installed.
- You are working on a file system that preserves symbolic links (symlinks).
- If you are using an external registry, OpenShift Virtualization must be able to access it.
Procedure
Create and navigate to a temporary directory:
$ mkdir /tmp/<dir_name> && cd /tmp/<dir_name>
- In a browser, navigate to the VMware VDDK version 8 download page.
- Select version 8.0.1 and click Download.
In order to migrate to OpenShift Virtualization 4.12, download VDDK version 7.0.3.2 from the VMware VDDK version 7 download page.
- Save the VDDK archive file in the temporary directory.
Extract the VDDK archive:
$ tar -xzf VMware-vix-disklib-<version>.x86_64.tar.gz
Create a `Dockerfile`:

```
$ cat > Dockerfile <<EOF
FROM registry.access.redhat.com/ubi8/ubi-minimal
USER 1001
COPY vmware-vix-disklib-distrib /vmware-vix-disklib-distrib
RUN mkdir -p /opt
ENTRYPOINT ["cp", "-r", "/vmware-vix-disklib-distrib", "/opt"]
EOF
```
Build the VDDK image:
$ podman build . -t <registry_route_or_server_path>/vddk:<tag>
Push the VDDK image to the registry:
$ podman push <registry_route_or_server_path>/vddk:<tag>
- Ensure that the image is accessible to your OpenShift Virtualization environment.
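After the image is pushed, migrations can consume it through the vSphere `Provider` custom resource. The following is a minimal sketch; the provider name, vCenter URL, and Secret name are illustrative:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-provider   # illustrative name
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk   # illustrative vCenter URL
  secret:
    name: vsphere-credentials            # illustrative Secret name
    namespace: openshift-mtv
  settings:
    vddkInitImage: <registry_route_or_server_path>/vddk:<tag>
```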
4.11.4. Increasing the NFC service memory of an ESXi host
If you are migrating more than 10 VMs from an ESXi host in the same migration plan, you must increase the Network File Copy (NFC) service memory of the host. Otherwise, the migration fails because the NFC service memory is limited to 10 parallel connections.
Procedure
- Log in to the ESXi host as root.
Change the value of `maxMemory` to `1000000000` in `/etc/vmware/hostd/config.xml`:

```
...
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>1000000000</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
...
```

Restart `hostd`:

```
# /etc/init.d/hostd restart
```
You do not need to reboot the host.
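If you prefer to script the edit, the value can be changed with a sed one-liner. This is a sketch, not part of the official procedure; it runs here against a scratch copy of the snippet, whereas on the ESXi host the target file is `/etc/vmware/hostd/config.xml`:

```shell
# Sketch: scripting the maxMemory edit with sed against a scratch copy.
config=$(mktemp)
cat > "$config" <<'EOF'
<nfcsvc>
  <path>libnfcsvc.so</path>
  <enabled>true</enabled>
  <maxMemory>16777216</maxMemory>
  <maxStreamMemory>10485760</maxStreamMemory>
</nfcsvc>
EOF
# Replace whatever numeric value is present with 1000000000.
sed -i 's|<maxMemory>[0-9]*</maxMemory>|<maxMemory>1000000000</maxMemory>|' "$config"
grep '<maxMemory>' "$config"
```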
4.11.5. VDDK validator containers need requests and limits
If you have the cluster or project resource quotas set, you must ensure that you have a sufficient quota for the MTV pods to perform the migration.
You can see the defaults, which you can override in the ForkliftController custom resource (CR), listed as follows. If necessary, you can adjust these defaults.
These settings depend heavily on your environment. If many migrations are running at once and the quotas are set too low for them, the migrations can fail. This also interacts with the `MAX_VM_INFLIGHT` setting, which determines how many VMs or disks are migrated at once.
The following defaults can be overridden in the `ForkliftController` CR:
Defaults that affect both cold and warm migrations:
Cold migration is likely to be more resource-intensive because it performs the disk copy. For warm migration, you could potentially reduce the requests.

- `virt_v2v_container_limits_cpu`: `4000m`
- `virt_v2v_container_limits_memory`: `8Gi`
- `virt_v2v_container_requests_cpu`: `1000m`
- `virt_v2v_container_requests_memory`: `1Gi`

Note: Cold and warm migration using `virt-v2v` can be resource-intensive. For more details, see Compute power and RAM.
Defaults that affect any migrations with hooks:

- `hooks_container_limits_cpu`: `1000m`
- `hooks_container_limits_memory`: `1Gi`
- `hooks_container_requests_cpu`: `100m`
- `hooks_container_requests_memory`: `150Mi`
Defaults that affect any OVA migrations:

- `ova_container_limits_cpu`: `1000m`
- `ova_container_limits_memory`: `1Gi`
- `ova_container_requests_cpu`: `100m`
- `ova_container_requests_memory`: `150Mi`
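These limits and requests are plain fields in the `ForkliftController` spec. The following sketch overrides two of them; the values are illustrative, not recommendations:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  virt_v2v_container_limits_cpu: "8000m"    # illustrative: raise the virt-v2v CPU limit
  virt_v2v_container_requests_memory: "2Gi" # illustrative: raise the virt-v2v memory request
```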
4.12. Open Virtual Appliance (OVA) prerequisites
The following prerequisites apply to Open Virtual Appliance (OVA) file migrations:
- All OVA files are created by VMware vSphere.
Migration of OVA files that were not created by VMware vSphere but are compatible with vSphere might succeed. However, migration of such files is not supported by MTV. MTV supports only OVA files created by VMware vSphere.
The OVA files are in one or more folders under an NFS shared directory in one of the following structures:

- In one or more compressed Open Virtualization Format (OVF) packages that hold all the VM information.

  The filename of each compressed package must have the `.ova` extension. Several compressed packages can be stored in the same folder.

  When this structure is used, MTV scans the root folder and the first-level subfolders for compressed packages.

  For example, if the NFS share is `/nfs`, then:
  - The folder `/nfs` is scanned.
  - The folder `/nfs/subfolder1` is scanned.
  - However, `/nfs/subfolder1/subfolder2` is not scanned.

- In extracted OVF packages.

  When this structure is used, MTV scans the root folder, first-level subfolders, and second-level subfolders for extracted OVF packages. However, there can be only one `.ovf` file in a folder. Otherwise, the migration will fail.

  For example, if the NFS share is `/nfs`, then:
  - The OVF file `/nfs/vm.ovf` is scanned.
  - The OVF file `/nfs/subfolder1/vm.ovf` is scanned.
  - The OVF file `/nfs/subfolder1/subfolder2/vm.ovf` is scanned.
  - However, the OVF file `/nfs/subfolder1/subfolder2/subfolder3/vm.ovf` is not scanned.
4.13. OpenShift Virtualization prerequisites
The following prerequisites apply to migrations from one OpenShift Virtualization cluster to another:
- Both the source and destination OpenShift Virtualization clusters must have the same version of Migration Toolkit for Virtualization (MTV) installed.
- The source cluster must use OpenShift Virtualization 4.16 or later.
- Migration from a later version of OpenShift Virtualization to an earlier one is not supported.
- Migration from an earlier version of OpenShift Virtualization to a later version is supported if both are supported by the current version of MTV. For example, if the current version of OpenShift Virtualization is 4.18, a migration from version 4.16 or 4.17 to version 4.18 is supported, but a migration from version 4.15 to any version is not.
It is strongly recommended to migrate only between clusters with the same version of OpenShift Virtualization, although migration from an earlier version of OpenShift Virtualization to a later one is supported.
4.13.1. OpenShift Virtualization live migration prerequisites
In addition to the regular OpenShift Virtualization prerequisites, live migration has the following additional prerequisites:
- Migration Toolkit for Virtualization (MTV) 2.10.0 or later installed. MTV treats all OpenShift Virtualization migrations run on MTV 2.9 or earlier as cold migrations, even if they are configured as live migrations.
- OpenShift Virtualization 4.20.0 or later installed on both source and target clusters.
- In the MTV Operator, in the `spec` portion of the `forklift-controller` YAML, `feature_ocp_live_migration` is set to `true`. You must have `cluster-admin` privileges to set this field.
- In the `KubeVirt` resource of both clusters, `DecentralizedLiveMigration` is listed in the `featureGates` section of the YAML. You must have `cluster-admin` privileges to set this field.
- Connectivity between the clusters must be established, including connectivity for state transfer. Technologies such as Submariner can be used for this purpose.
- The target cluster has `VirtualMachineInstanceTypes` and `VirtualMachinePreferences` that match those used by the VMs on the source cluster.
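As a sketch, the two feature settings above might look as follows. The `KubeVirt` resource name and namespace are environment-specific (the values here are illustrative), and the feature gate list is assumed to live at the standard `spec.configuration.developerConfiguration.featureGates` path:

```yaml
# ForkliftController (MTV Operator): enable the live-migration feature.
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  feature_ocp_live_migration: true
---
# KubeVirt resource on BOTH clusters: add the feature gate.
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt-kubevirt-hyperconverged   # illustrative name
  namespace: openshift-cnv                 # illustrative namespace
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - DecentralizedLiveMigration
```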
4.14. Software compatibility guidelines
You must install compatible software versions. The table that follows lists the relevant software versions for this version of Migration Toolkit for Virtualization (MTV).
Table 4.8. Compatible software versions
| Migration Toolkit for Virtualization | Red Hat OpenShift | OpenShift Virtualization | VMware vSphere | Red Hat Virtualization | OpenStack |
|---|---|---|---|---|---|
| 2.10 | 4.20, 4.19, 4.18 | 4.20, 4.19, 4.18 | 6.5 or later | 4.4 SP1 or later | 16.1 or later |
Migration from Red Hat Virtualization 4.3
MTV was tested only with Red Hat Virtualization 4.4 SP1. Migration from Red Hat Virtualization (RHV) 4.3 has not been tested with MTV 2.10. While not supported, basic migrations from RHV 4.3 are expected to work; migrations from RHV 4.3.11 were tested with MTV 2.3 and might work in practice in many environments using MTV 2.10. In any case, it is recommended that you upgrade Red Hat Virtualization Manager to the supported version listed in the preceding table before migrating to OpenShift Virtualization.
4.14.1. OpenShift Operator Life Cycles
For more information about the software maintenance Life Cycle classifications for Operators shipped by Red Hat for use with OpenShift Container Platform, see OpenShift Operator Life Cycles.
Chapter 5. Installing and configuring the MTV Operator
You can install the MTV Operator by using the Red Hat OpenShift web console or the command-line interface (CLI).
In Migration Toolkit for Virtualization (MTV) version 2.4 and later, the MTV Operator includes the MTV plugin for the Red Hat OpenShift web console.
After you install the MTV Operator by using either the Red Hat OpenShift web console or the CLI, you can configure the Operator.
5.1. Installing the MTV Operator by using the Red Hat OpenShift web console
You can install the MTV Operator by using the Red Hat OpenShift web console.
Prerequisites
- Red Hat OpenShift 4.20, 4.19, 4.18 installed.
- OpenShift Virtualization Operator installed on an OpenShift migration target cluster.
- You must be logged in as a user with `cluster-admin` permissions.
Procedure
- In the Red Hat OpenShift web console, click Operators → OperatorHub.
- Use the Filter by keyword field to search for mtv-operator.
- Click Migration Toolkit for Virtualization Operator and then click Install.
- Click Create ForkliftController when the button becomes active.
Click Create.
Your ForkliftController appears in the list that is displayed.
- Click Workloads → Pods to verify that the MTV pods are running.
Click Operators → Installed Operators to verify that Migration Toolkit for Virtualization Operator appears in the openshift-mtv project with the status Succeeded.
When the plugin is ready, you are prompted to reload the page. The Migration menu item is automatically added to the navigation bar, displayed on the left of the Red Hat OpenShift web console.
5.2. Installing the MTV Operator by using the command-line interface
You can install the MTV Operator by using the command-line interface (CLI).
Prerequisites
- Red Hat OpenShift 4.20, 4.19, 4.18 installed.
- OpenShift Virtualization Operator installed on an OpenShift migration target cluster.
- You must be logged in as a user with `cluster-admin` permissions.
Procedure
Create the openshift-mtv project:
```
$ cat << EOF | oc apply -f -
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: openshift-mtv
EOF
```
Create an `OperatorGroup` CR called `migration`:

```
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
EOF
```

Create a `Subscription` CR for the Operator:

```
$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  channel: release-v2.10
  installPlanApproval: Automatic
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: "mtv-operator.v2.10.0"
EOF
```

Create a `ForkliftController` CR:

```
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  olm_managed: true
EOF
```
Verify that the MTV pods are running:
$ oc get pods -n openshift-mtv
Example output:

```
NAME                                                    READY   STATUS    RESTARTS   AGE
forklift-api-bb45b8db4-cpzlg                            1/1     Running   0          6m34s
forklift-controller-7649db6845-zd25p                    2/2     Running   0          6m38s
forklift-must-gather-api-78fb4bcdf6-h2r4m               1/1     Running   0          6m28s
forklift-operator-59c87cfbdc-pmkfc                      1/1     Running   0          28m
forklift-ui-plugin-5c5564f6d6-zpd85                     1/1     Running   0          6m24s
forklift-validation-7d84c74c6f-fj9xg                    1/1     Running   0          6m30s
forklift-volume-populator-controller-85d5cb64b6-mrlmc   1/1     Running   0          6m36s
```
5.3. Configuring the MTV Operator
You can configure the following settings of the MTV Operator by modifying the ForkliftController custom resource (CR), or in the Settings section of the Overview page, unless otherwise indicated.
- Maximum number of virtual machines (VMs) or disks per plan that Migration Toolkit for Virtualization (MTV) can migrate simultaneously.
- How long `must gather` reports are retained before being automatically deleted (`ForkliftController` CR only).
- CPU limit allocated to the main controller container.
- Memory limit allocated to the main controller container.
- Interval at which a new snapshot is requested before initiating a warm migration.
- Frequency with which the system checks the status of snapshot creation or removal during a warm migration.
- Percentage of space in persistent volumes allocated as file system overhead when the `storageclass` is `filesystem` (`ForkliftController` CR only).
- Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any block-based `storageclass` (`ForkliftController` CR only).
- Configuration map of operating systems to preferences for vSphere source providers (`ForkliftController` CR only).
- Configuration map of operating systems to preferences for Red Hat Virtualization (RHV) source providers (`ForkliftController` CR only).
- Whether to retain importer pods so that the Containerized Data Importer (CDI) does not delete them during migration (`ForkliftController` CR only).
The procedure for configuring these settings by using the user interface is presented in Configuring MTV settings. The procedure for configuring these settings by modifying the ForkliftController CR is presented following.
Procedure
Change a parameter’s value in the `spec` section of the `ForkliftController` CR by adding the parameter and value as follows:

```
spec:
  parameter: value
```

Parameters that you can configure using the CLI are shown in the table that follows, along with a description of each parameter and its default value.
Table 5.1. MTV Operator parameters

| Parameter | Description | Default value |
|---|---|---|
| `controller_max_vm_inflight` | The maximum number of VMs or disks that MTV can migrate simultaneously. The precise meaning varies with the provider, as described in Configuring the controller_max_vm_inflight parameter. | `20` |
| `must_gather_api_cleanup_max_age` | The duration in hours for retaining `must gather` reports before they are automatically deleted. | `-1` (disabled) |
| `controller_container_limits_cpu` | The CPU limit allocated to the main controller container. | `500m` |
| `controller_container_limits_memory` | The memory limit allocated to the main controller container. | `800Mi` |
| `controller_precopy_interval` | The interval in minutes at which a new snapshot is requested before initiating a warm migration. | `60` |
| `controller_snapshot_status_check_rate_seconds` | The frequency in seconds with which the system checks the status of snapshot creation or removal during a warm migration. | `10` |
| `controller_filesystem_overhead` | Percentage of space in persistent volumes allocated as file system overhead when the `storageclass` is `filesystem`. | `10` |
| `controller_block_overhead` | Fixed amount of additional space allocated in persistent block volumes. This setting is applicable for any block-based `storageclass`. | `0` |
| | Config map for vSphere source providers. This config map maps the operating system of the incoming VM to an OpenShift Virtualization preference name. This config map needs to be in the namespace where the MTV Operator is deployed. To see the list of preferences in your OpenShift Virtualization environment, open the OpenShift web console and click Virtualization → Preferences. Add values to the config map when this parameter has the default value. | |
| | Config map for RHV source providers. This config map maps the operating system of the incoming VM to an OpenShift Virtualization preference name. This config map needs to be in the namespace where the MTV Operator is deployed. To see the list of preferences in your OpenShift Virtualization environment, open the OpenShift web console and click Virtualization → Preferences. You can add values to the config map when this parameter has the default value. | |
| `controller_retain_precopy_importer_pods` | Whether to retain importer pods so that the Containerized Data Importer (CDI) does not delete them during migration. | `false` |
5.4. Configuring the controller_max_vm_inflight parameter
The value of the `controller_max_vm_inflight` parameter, which is shown in the UI as Max concurrent virtual machine migrations, varies by the source provider of the migration.
For all migrations except Open Virtual Appliance (OVA) or VMware migrations, the parameter specifies the maximum number of disks that Migration Toolkit for Virtualization (MTV) can transfer simultaneously. In these migrations, MTV migrates the disks in parallel. This means that if the combined number of disks that you want to migrate is greater than the value of the setting, additional disks must wait until the queue is free, without regard for whether a VM has finished migrating.
For example, if the value of the parameter is 15, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks except for the 16th disk start migrating at the same time. Once any of them has migrated, the 16th disk can be migrated, even though not all the disks on VM A and the disks on VM B have finished migrating.
For OVA migrations, the parameter specifies the maximum number of VMs that MTV can migrate simultaneously, meaning that all additional disks must wait until at least one VM has been completely migrated.
For example, if the value of the parameter is 2, and VM A has 5 disks, VM B has 5 disks, and VM C has 6 disks, all the disks on VM C must wait to migrate until either all the disks on VM A or on VM B finish migrating.
For VMware migrations, the parameter has the following meanings:
Cold migration:
- To local OpenShift Virtualization: VMs for each ESXi host that can migrate simultaneously.
- To remote OpenShift Virtualization: Disks for each ESXi host that can migrate simultaneously.
- Warm migration: Disks for each ESXi host that can migrate simultaneously.
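For example, to raise the limit from its default of 20, you can set the parameter in the `ForkliftController` CR; the value here is illustrative:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  controller_max_vm_inflight: 30  # illustrative value; the default is 20
```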
Chapter 6. Migrating virtual machines by using the Red Hat OpenShift web console
Use the MTV user interface to migrate virtual machines (VMs). It is located in the Virtualization section of the Red Hat OpenShift web console.
6.1. The MTV user interface
The Migration Toolkit for Virtualization (MTV) user interface is integrated into the OpenShift web console.
In the left panel, you can choose a page related to a component of the migration progress, for example, Providers. Or, if you are an administrator, you can choose Overview, which contains information about migrations and lets you configure MTV settings.
In pages related to components, you can click on the Projects list, which is in the upper-left portion of the page, and see which projects (namespaces) you are allowed to work with.
6.2. The MTV Overview page
The Migration Toolkit for Virtualization (MTV) Overview page displays system-wide information about migrations and a list of Settings you can change.
If you have Administrator privileges, you can access the Overview page by clicking Migration → Overview in the Red Hat OpenShift web console.
The Overview page has five tabs:
- Overview
- YAML
- Health
- History
- Settings
6.2.1. Overview tab
The Overview tab helps you quickly create providers and find information about the whole system:
- In the upper pane is the Welcome section, which includes buttons that let you open the Create provider UI for each vendor (VMware, Open Virtual Appliance, OpenStack, Red Hat Virtualization, and OpenShift Virtualization). You can close this section by clicking the Options menu in the upper-right corner and selecting Hide from view. You can reopen it by clicking Show the welcome card in the upper-right corner.
- In the center-left pane is a "donut" chart named Virtual machines. This chart shows the number of running, failed, and successful virtual machine migrations that MTV ran for the time interval that you select. You can choose a different interval by clicking the list in the upper-right corner of the pane. The options are: Last 24 hours, Last 10 days, Last 31 days, and All. By clicking each division of the chart, you can navigate to the History tab for information about the migrations.
  Note: Data for this chart includes only the most recent run of a migration plan that was modified due to a failure. For example, if a plan with 3 VMs fails 4 times, then this chart shows that 3 VMs failed, not 12.
- In the center-right pane is an area chart named Migration history. This chart shows the number of migrations that succeeded, failed, or were running during the interval shown in the title of the chart. You can choose a different interval by clicking the Options menu in the upper-right corner of the pane. The options are: Last 24 hours, Last 10 days, and Last 31 days. By clicking each division of the chart, you can navigate to the History tab for information about the migrations.
- In the lower-left pane is a "donut" chart named Migration plans. This chart shows the current number of migration plans grouped by their status. This includes plans that were not started, cannot be started, are incomplete, archived, paused, or have an unknown status. By clicking the Show all plans link, you can quickly navigate to the Migration plans page.

  Note: Since a single migration might involve many virtual machines, the number of migrations performed using MTV might vary significantly from the number of migrated virtual machines.
- In the lower-right pane is a table named MTV health. This table lists all of the MTV pods. The most important one, `forklift-controller`, is first. The remaining pods are listed in alphabetical order. The View all link opens the Health tab. The status and creation time of each pod are listed. There is also a link to the logs of each pod.
6.2.2. YAML tab
The YAML tab displays the ForkliftController custom resource (CR) that defines the operation of the MTV Operator. You can modify the CR in this tab.
6.2.3. Health tab
The Health tab has two panes:
- In the upper pane, there is a table named Health. It lists all the MTV pods. The most important one, `forklift-controller`, is first. The remaining pods are listed in alphabetical order. For each pod, the status and creation time are listed, and there is a link to the logs of the pod.
- In the lower pane, there is a table named Conditions. It lists the possible types (states) of the MTV Operator, the status of each type, the last time the condition was updated, the reason for the update, and a message about the condition.
6.2.4. History tab
The History tab displays information about migrations.
- In the upper-left of the page, there is a filter that you can use to display only migrations of a certain status, for example, Succeeded.
- To the right of the filter is the Group by plan toggle switch, which lets you display either all migrations or view only the most recent migration run per plan within the specified time range.
6.2.5. Settings tab
The table that follows describes the settings that are visible in the Settings tab, their default values, and other possible values that can be set or chosen, if needed.
Table 6.1. MTV settings
| Setting | Description | Default value | Additional values |
|---|---|---|---|
| Maximum concurrent VM migrations | Varies with provider as follows: | 20 | Adjustable either by using the + and - keys to set a different value or by clicking the text box and entering a new value. |
| Controller main container CPU limit | The CPU limit that is allocated to the main controller container, in milliCPUs (m). | 500 m | Adjustable by selecting another value from the list. Options: 200 m, 500 m, 2000 m, 8000 m. |
| Controller main container memory limit | The memory limit that is allocated to the main controller container, in mebibytes (Mi). | 800 Mi | Adjustable by selecting another value from the list. Options: 200 Mi, 800 Mi, 2000 Mi, 8000 Mi. |
| Controller inventory container memory limit | The memory limit that is allocated to the inventory controller container, in mebibytes (Mi). | 1000 Mi | Adjustable by selecting another value from the list. Options: 400 Mi, 1000 Mi, 2000 Mi, 8000 Mi. |
| Precopy interval (minutes) | The interval, in minutes, at which a new snapshot is requested before initiating a warm migration. | 60 minutes | Adjustable by selecting another value from the list. Options: 5 minutes, 30 minutes, 60 minutes, 120 minutes. |
| Snapshot polling interval | The interval, in seconds, at which the system checks the status of snapshot creation or removal during a warm migration. | 10 seconds | Adjustable by choosing another value from the list. Options: 1 second, 5 seconds, 10 seconds, 60 seconds. |
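The settings above are stored as fields on the ForkliftController CR shown in the YAML tab. As an illustration only, the spec field names below (controller_max_vm_inflight, controller_precopy_interval) and the CR instance name are assumptions drawn from the forklift operator and should be verified against the CR on your cluster:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller   # assumed default instance name
  namespace: openshift-mtv
spec:
  controller_max_vm_inflight: 30   # maximum concurrent VM migrations (default: 20)
  controller_precopy_interval: 30  # precopy interval in minutes (default: 60)
```

Editing these fields in the YAML tab has the same effect as changing the corresponding values in the Settings tab.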
6.3. Migrating virtual machines by using the MTV user interface
Use the MTV user interface to migrate virtual machines (VMs) from the following providers:
- VMware vSphere
- Red Hat Virtualization (RHV)
- OpenStack
- Open Virtual Appliances (OVAs) that were created by VMware vSphere
- OpenShift Virtualization clusters
For all migrations, you specify the source provider, the destination provider, and the migration plan. The specific procedures vary per provider.
You must ensure that all prerequisites are met. For more information, see Prerequisites for migration.
VMware only: You must have the minimal set of VMware privileges.
VMware only: Creating a VMware Virtual Disk Development Kit (VDDK) image increases migration speed.
6.4. Renaming virtual machines for migration
You can rename source virtual machines (VMs) in the MTV UI to address naming conflicts. Renaming VMs ensures conformity with the Kubernetes-based naming conventions of Red Hat OpenShift Virtualization.
In the OpenShift platform, all resource names, including VMs, must have DNS-compliant names. Valid names consist only of lowercase alphanumeric characters (a-z, 0-9) and hyphens (-). Names must not start or end with a hyphen or contain consecutive hyphens, and the length of a name is limited to 63 characters. The OpenShift API rejects noncompliant source VM names during the creation of the target VM.
Table 6.2. Examples of VM naming conflicts
| Source VM name | Naming conflict | New target VM name |
|---|---|---|
| Legacy_VM_01 | Contains uppercase letters and underscores | legacy-vm-01 |
| -vm-01 | Starts with a hyphen | vm-01 |
| vm--01 | Contains consecutive hyphens | vm-01 |
| (a name longer than 63 characters) | Exceeds the 63-character length limit | (the name shortened to 63 characters or fewer) |
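The naming rules described above can be made concrete with a short validation sketch. This script is not part of MTV; it only illustrates the constraints (lowercase alphanumeric characters and hyphens, no leading, trailing, or consecutive hyphens, at most 63 characters):

```python
import re

# Pattern for a DNS-compliant target VM name, per the rules above:
# lowercase alphanumerics and hyphens, no leading or trailing hyphen,
# and no consecutive hyphens.
NAME_RE = re.compile(r"^[a-z0-9]([a-z0-9]|-(?!-))*(?<!-)$")

def is_valid_target_name(name: str) -> bool:
    """Return True if name is a valid target VM name (max 63 chars)."""
    return len(name) <= 63 and bool(NAME_RE.match(name))
```

For example, `is_valid_target_name("legacy-vm-01")` is `True`, while names containing uppercase letters, underscores, consecutive hyphens, or more than 63 characters fail the check.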
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
- Open the Plan Details page for your migration plan.
- Click the Virtual Machines tab to view a table of all VMs from the configured source provider.
- If the Target name column does not already show in the VM table, click Manage columns to select Target name and display the column.
- Identify non-conformant names in the list of VMs by checking the alerts in the Concerns column.
- To rename a VM, click the More icon at the end of the row for the VM, and click Edit target name.
- Enter and save a new name for the VM. Ensure that the new name consists only of lowercase alphanumeric characters (a-z, 0-9) and hyphens, does not start or end with a hyphen, does not contain consecutive hyphens, and is no more than 63 characters long.
Chapter 7. Migrating virtual machines by using the command-line interface
You migrate virtual machines (VMs) to OpenShift Virtualization from the command line by creating MTV custom resources (CRs). The CRs and the migration procedure vary by source provider.
You must specify a name for cluster-scoped CRs.
You must specify both a name and a namespace for namespace-scoped CRs.
To migrate to or from an OpenShift cluster that is different from the one the migration plan is defined on, you must have an OpenShift Virtualization service account token with cluster-admin privileges.
You must ensure that all prerequisites are met. For more information, see Prerequisites for migration.
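As a rough sketch of the shape of these CRs, a minimal namespace-scoped Plan CR might look like the following. The placeholder values in angle brackets are yours to fill in, and the exact fields required vary by source provider, as described in the provider-specific procedures:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: <plan_name>
  namespace: <namespace>
spec:
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
  map:
    network:
      name: <network_map>
      namespace: <namespace>
    storage:
      name: <storage_map>
      namespace: <namespace>
  targetNamespace: <target_namespace>
  vms:
    - name: <vm_name>
```

The referenced NetworkMap and StorageMap CRs are covered in Mapping networks and storage in migration plans.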
7.1. Permissions needed by non-administrators to work with migration plan components
If you are an administrator, you can work with all components of migration plans (for example, providers, network mappings, and migration plans).
By default, non-administrators have limited ability to work with migration plans and their components. As an administrator, you can modify their roles to allow them full access to all components, or you can give them limited permissions.
For example, administrators can assign non-administrators one or more of the following cluster roles for migration plans:
Table 7.1. Example migration plan roles and their privileges
| Role | Description |
|---|---|
| plans.forklift.konveyor.io-v1beta1-view | Can view migration plans, but cannot create, delete, or modify them |
| plans.forklift.konveyor.io-v1beta1-edit | Can create, delete, or modify all parts of a migration plan |
| plans.forklift.konveyor.io-v1beta1-admin | All edit privileges |
Predefined cluster roles include a resource (for example, plans), an API group (for example, forklift.konveyor.io-v1beta1) and an action (for example, view, edit).
As a more comprehensive example, you can grant non-administrators the following set of permissions per namespace:
- Create and modify storage maps, network maps, and migration plans for the namespaces they have access to
- Attach providers created by administrators to storage maps, network maps, and migration plans
- Not be able to create providers or to change system settings
Table 7.2. Example permissions required for non-administrators to work with migration plan components but not create providers
| Actions | API group | Resource |
|---|---|---|
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | plans |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | migrations |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | hooks |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | networkmaps |
| get, list, watch, create, update, patch, delete | forklift.konveyor.io | storagemaps |
| get, list, watch | forklift.konveyor.io | providers |
| create, patch, delete | Empty string | secrets |
To create migration plans, non-administrators need the create permissions that are part of the edit roles for network maps and for storage maps, even when using a template for a network map or a storage map.
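For example, an administrator might grant a user edit access to migration plans in a single namespace with a RoleBinding such as the following sketch. The cluster role name used here is an assumption that follows the resource, API group, and action pattern described above; verify the exact role names that exist on your cluster:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: migration-plan-editors   # placeholder name
  namespace: <namespace>
subjects:
  - kind: User
    name: <username>
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: plans.forklift.konveyor.io-v1beta1-edit  # assumed predefined role name
  apiGroup: rbac.authorization.k8s.io
```

Because a RoleBinding is namespace-scoped, the user gains these privileges only in the namespace where the binding is created.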
Chapter 8. Mapping networks and storage in migration plans
You can create network maps and storage maps in the Migration Toolkit for Virtualization (MTV) to map source networks and disk storage to OpenShift Virtualization networks and storage classes.
8.1. About network maps in migration plans
You can create network maps in Migration Toolkit for Virtualization (MTV) migration plans to map source networks to OpenShift Virtualization networks.
There are two types of network maps: maps created for a specific migration plan and maps created for use by any migration plan.
- Maps created for a specific plan are said to be owned by that plan. You can create these kinds of maps in the Network maps step of the Plan creation wizard.
- Maps created for use by any migration plan are said to be ownerless. You can create these kinds of maps in the Network maps page of the Migration for Virtualization section of the OpenShift Virtualization web console.
You, or anyone working in the same project, can use them when creating a migration plan in the Plan creation wizard. When you choose one of these unowned maps for a migration plan, MTV creates a copy of the map and defines your migration plan as the owner of that copy. Any changes you make to the copy do not affect the original map, nor do they apply to any other plan that uses a copy of the map.
Both types of network maps for a project are shown in the Network maps page, but there is an important difference in the information displayed in the Owner column of that page for each:
- Maps created in the Network maps page of the Migration for Virtualization section of the OpenShift Virtualization web console are shown as having no owner.
- Maps created in the Network maps step of the Plan creation wizard are shown as being owned by the migration plan.
8.2. About storage maps in migration plans
You can create storage maps in Migration Toolkit for Virtualization (MTV) migration plans to map source disk storages to OpenShift Virtualization storage classes.
There are two types of storage maps: maps created for a specific migration plan and maps created for use by any migration plan.
- Maps created for a specific plan are said to be owned by that plan. You can create these kinds of maps in the Storage maps step of the Plan creation wizard.
- Maps created for use by any migration plan are said to be ownerless. You can create these kinds of maps in the Storage maps page of the Migration for Virtualization section of the OpenShift Virtualization web console.
You, or anyone working in the same project, can use them when creating a migration plan in the Plan creation wizard. When you choose one of these unowned maps for a migration plan, MTV creates a copy of the map and defines your migration plan as the owner of that copy. Any changes you make to the copy do not affect the original map, nor do they apply to any other plan that uses a copy of the map.
Both types of storage maps for a project are shown in the Storage maps page, but there is an important difference in the information displayed in the Owner column of that page for each:
- Maps created in the Storage maps page of the Migration for Virtualization section of the OpenShift Virtualization web console are shown as having no owner.
- Maps created in the Storage maps step of the Plan creation wizard are shown as being owned by the migration plan.
8.3. Creating ownerless storage maps in the MTV UI
You can create ownerless storage maps by using the MTV UI to map source disk storage to OpenShift Virtualization storage classes.
You can create this type of map by using one of the following methods:
- Create with form, selecting items such as a source provider from lists
- Create with YAML, either by entering YAML or JSON definitions or by attaching files containing the same
Chapter 9. Planning migration of virtual machines from VMware vSphere
You prepare and create your VMware vSphere migration plan by performing the following high-level steps in the MTV UI:
- Create ownerless network maps
- Add a VMware vSphere source provider
- Select a migration network for a VMware source provider
- Add an OpenShift Virtualization destination provider
- Select a migration network for an OpenShift Virtualization provider
- Create a VMware vSphere migration plan
9.1. Creating ownerless network maps in the MTV UI
You can create ownerless network maps by using the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
Click Create NetworkMap.
The Create NetworkMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
name: <network_map>
namespace: <namespace>
spec:
map:
- destination:
name: <network_name>
type: pod 1
source: 2
id: <source_network_id>
name: <source_network_name>
- destination:
name: <network_attachment_definition> 3
namespace: <network_attachment_definition_namespace> 4
type: multus
source:
id: <source_network_id>
name: <source_network_name>
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1
- Allowed values are pod, multus, and ignored. Use ignored to avoid attaching VMs to this network for this migration.
- 2
- You can use either the id or the name parameter to specify the source network. For id, specify the VMware vSphere network Managed Object Reference (moRef). For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines to Red Hat OpenShift Virtualization.
- 3
- Specify a network attachment definition for each additional OpenShift Virtualization network.
- 4
- Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of network maps.
You can create ownerless storage maps by using the form page of the MTV UI.
Prerequisites
- Have a VMware source provider and an OpenShift Virtualization destination provider. For more information, see Adding a VMware vSphere source provider or Adding an OpenShift Virtualization destination provider.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
- Click Create storage map > Create with form.
Specify the following:
- Map name: Name of the storage map.
- Project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Source storage: Select from the list.
- Target storage: Select from the list.
Optional: If this is a storage map for a migration using storage copy offload, specify the following offload options:
- Offload plugin: Select from the list.
- Storage secret: Select from the list.
- Storage product: Select from the list.
Important: Storage copy offload is Developer Preview software only. Developer Preview software is not supported by Red Hat in any way and is not functionally complete or production-ready. Do not use Developer Preview software for production or business-critical workloads. Developer Preview software provides early access to upcoming product software in advance of its possible inclusion in a Red Hat product offering. Customers can use this software to test functionality and provide feedback during the development process. This software might not have any documentation, is subject to change or removal at any time, and has received limited testing. Red Hat might provide ways to submit feedback on Developer Preview software without an associated SLA.
For more information about the support scope of Red Hat Developer Preview software, see Developer Preview Support Scope.
- Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
Click Create.
Your map appears in the list of storage maps.
9.2. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
You can create ownerless storage maps by using YAML or JSON definitions in the Migration Toolkit for Virtualization (MTV) UI.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
Click Create storage map > Create with YAML.
The Create StorageMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
name: <storage_map>
namespace: <namespace>
spec:
map:
- destination:
storageClass: <storage_class>
accessMode: <access_mode> 1
source:
id: <source_datastore> 2
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1
- Allowed values are ReadWriteOnce and ReadWriteMany.
- 2
- Specify the VMware vSphere datastore moRef. For example, f2737930-b567-451a-9ceb-2887f6207009. For more information about retrieving the moRef, see Retrieving a VMware vSphere moRef in Migrating your virtual machines to Red Hat OpenShift Virtualization.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of storage maps.
9.3. Adding a VMware vSphere source provider
You can migrate VMware vSphere VMs from VMware vCenter or from a VMware ESX/ESXi server without going through vCenter.
EMS enforcement is disabled for migrations with VMware vSphere source providers in order to enable migrations from versions of vSphere that are supported by Migration Toolkit for Virtualization but do not comply with the 2023 FIPS requirements. Therefore, users should consider whether migrations from vSphere source providers risk their compliance with FIPS. Supported versions of vSphere are specified in Software compatibility guidelines.
Anti-virus software can cause migrations to fail. It is strongly recommended to remove such software from source VMs before you start a migration.
MTV does not support migrating VMware Non-Volatile Memory Express (NVMe) disks.
If you set a maximum transmission unit (MTU) value other than the default in your migration network, you must also set the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a VMware vSphere migration plan using the MTV wizard.
Prerequisites
- It is strongly recommended to create a VMware Virtual Disk Development Kit (VDDK) image in a secure registry that is accessible to all clusters. A VDDK image accelerates migration and reduces the risk of a plan failing. If you are not using VDDK and a plan fails, retry with VDDK installed. For more information, see Creating a VDDK image.
Virtual machine (VM) migrations do not work without VDDK when a VM is backed by VMware vSAN.
Procedure
Access the Create provider page for VMware by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
- Click VMware.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click VMware.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click VMware when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
Provider details
- Provider resource name: Name of the source provider.
- Endpoint type: Select the vSphere provider endpoint type. Options: vCenter or ESXi. You can migrate virtual machines from vCenter, an ESX/ESXi server that is not managed by vCenter, or from an ESX/ESXi server that is managed by vCenter but does not go through vCenter.
- URL: URL of the SDK endpoint of the vCenter on which the source VM is mounted. Ensure that the URL includes the sdk path, usually /sdk. For example, https://vCenter-host-example.com/sdk. If a certificate for FQDN is specified, the value of this field needs to match the FQDN in the certificate.
- VDDK init image: VDDKInitImage path. It is strongly recommended to create a VDDK init image to accelerate migrations. For more information, see Creating a VDDK image. Do one of the following:
- Select Skip VMware Virtual Disk Development Kit (VDDK) SDK acceleration (not recommended).
- Enter the path in the VDDK init image text box. Format: <registry_route_or_server_path>/vddk:<tag>.
- Upload a VDDK archive and build a VDDK init image from the archive by doing the following:
- Click Browse next to the VDDK init image archive text box, select the desired file, and click Select.
Click Upload.
The URL of the uploaded archive is displayed in the VDDK init image archive text box.
Provider credentials
- Username: vCenter user or ESXi user. For example, user@vsphere.local.
- Password: vCenter user password or ESXi user password.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Note: It might take a few minutes for the provider to have the status Ready.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
9.4. Selecting a migration network for a VMware source provider
You can select a migration network in the Red Hat OpenShift web console for a source provider to reduce risk to the source environment and to improve performance.
Using the default network for migration can result in poor performance because the network might not have sufficient bandwidth. This situation can have a negative effect on the source platform because the disk transfer operation might saturate the network.
You can also control the network from which disks are transferred from a host by using the Network File Copy (NFC) service in vSphere.
If you set a maximum transmission unit (MTU) value other than the default in your migration network, you must also set the same value in the OpenShift transfer network that you use. For more information about the OpenShift transfer network, see Creating a migration plan.
Prerequisites
- The migration network must have sufficient throughput, with a minimum speed of 10 Gbps, for disk transfer.
The migration network must be accessible to the OpenShift Virtualization nodes through the default gateway.
Note: The source virtual disks are copied by a pod that is connected to the pod network of the target namespace.
- The migration network should have jumbo frames enabled.
Procedure
- In the Red Hat OpenShift web console, click Migration > Providers for virtualization.
- Click the host number in the Hosts column beside a provider to view a list of hosts.
- Select one or more hosts and click Select migration network.
Specify the following fields:
- Network: Network name
- ESXi host admin username: For example, root.
- ESXi host admin password: Password.
- Click Save.
Verify that the status of each host is Ready.
If a host status is not Ready, the host might be unreachable on the migration network or the credentials might be incorrect. You can modify the host configuration and save the changes.
9.5. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
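One possible way to obtain such a token is to create a dedicated service account on the remote cluster and bind it to the cluster-admin cluster role, as in the following sketch. The service account and binding names are placeholders:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: mtv-migration          # placeholder name
  namespace: openshift-mtv
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: mtv-migration-cluster-admin  # placeholder name
subjects:
  - kind: ServiceAccount
    name: mtv-migration
    namespace: openshift-mtv
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
```

You can then request a token for the service account, for example with oc create token mtv-migration -n openshift-mtv, and use it as the Service account bearer token when adding the provider.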
Procedure
Access the Create OpenShift Virtualization provider interface by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
Service account bearer token: Token for a service account with cluster-admin privileges. If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
9.6. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
In MTV version 2.9 and earlier, MTV used the pod network as the default network.
In version 2.10.0 and later, MTV detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN as the namespace of the migration, you do not need to select a new default network when you create your migration plan.
MTV supports using UDNs for all providers except OpenShift Virtualization.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration > Providers for virtualization.
Click the OpenShift Virtualization provider whose migration network you want to change.
The Provider details page opens.
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
Configure a gateway in the network used for MTV migrations by completing the following steps:
- In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
- Select the appropriate default transfer network NAD.
- Click the YAML tab.
Add forklift.konveyor.io/route to the metadata.annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> 1
- 1
- The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
- Click Save.
9.7. Creating a VMware vSphere migration plan by using the MTV wizard
You can migrate VMware vSphere virtual machines (VMs) from VMware vCenter or from a VMware ESX or ESXi server by using the Migration Toolkit for Virtualization plan creation wizard.
The wizard is designed to lead you step-by-step in creating a migration plan.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding these VMs prevents concurrent disk access to the storage that the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
When you click Create plan on the Review and create page of the wizard, Migration Toolkit for Virtualization (MTV) validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.
Prerequisites
- Have a VMware source provider and an OpenShift Virtualization destination provider. For more information, see Adding a VMware vSphere source provider or Adding an OpenShift Virtualization destination provider.
- If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
- If you are using a user-defined network (UDN), note the name of its namespace as defined in OpenShift Virtualization.
Procedure
- On the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Target project: Select from the list. If you are using a UDN, this is the namespace defined in OpenShift Virtualization.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and click Next.
- If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans and are therefore ownerless from the system's perspective. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans and are therefore ownerless from the system's perspective. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
On the Migration type page, choose one of the following:
- Cold migration (default)
- Warm migration
- Click Next.
On the Other settings (optional) page, specify any of the following settings that are appropriate for your plan. All are optional.
Disk decryption passphrases: For disks encrypted using Linux Unified Key Setup (LUKS).
- Enter a decryption passphrase for a LUKS-encrypted device.
- To add another passphrase, click Add passphrase and add a passphrase.
Repeat as needed.
You do not need to enter the passphrases in a specific order. For each LUKS-encrypted device, MTV tries each passphrase until one unlocks the device.
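If you are not certain which passphrases are still valid, you can test them on the source side before adding them to the plan. The following optional check is a sketch that assumes shell access to the source VM or storage host; <luks_device> is a placeholder for the encrypted block device:

```shell
# Check whether a passphrase unlocks a LUKS device without mapping it.
# cryptsetup prompts for the passphrase and exits with status 0 if it is
# correct; --test-passphrase performs the check only.
cryptsetup open --test-passphrase /dev/<luks_device>
```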
Transfer Network: The network used to transfer the VMs to OpenShift Virtualization. By default, this is the transfer network of the provider.
- Verify that the transfer network is in the selected target project.
- To choose a different transfer network, select a different transfer network from the list.
Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
- To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
Preserve static IPs: By default, virtual network interface controllers (vNICs) change during the migration process. As a result, vNICs that are configured with a static IP linked to the interface name in the guest VM lose their IP during migration.
To preserve static IPs, select the Preserve the static IPs checkbox.
MTV then issues a warning message about any VMs whose vNIC properties are missing. To retrieve any missing vNIC properties, run those VMs in vSphere. This causes the vNIC properties to be reported to MTV.
Root device: Applies to multi-boot VM migrations only. By default, MTV uses the first bootable device detected as the root device.
To specify a different root device, enter it in the text box.
MTV uses the following format for the disk location: /dev/sd<disk_identifier><disk_partition>. For example, if the second disk is the root device and the operating system is on the disk’s second partition, the format is /dev/sdb2. After you enter the boot device, click Save.
If the conversion fails because the boot device provided is incorrect, you can find the correct information by checking the conversion pod logs.
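If you are not sure which device the guest boots from, you can check inside a running Linux guest before migration. This optional check assumes shell access to the guest:

```shell
# Print the block device that backs the root filesystem, for example
# /dev/sdb2. The value maps directly to the
# /dev/sd<disk_identifier><disk_partition> format that MTV expects.
findmnt -n -o SOURCE /
```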
Shared disks: Applies to cold migrations only. Shared disks are disks that are attached to multiple VMs and that use the multi-writer option. These characteristics make shared disks difficult to migrate. By default, MTV migrates shared disks.
Note: Migrating shared disks might slow down the migration process.
- To migrate shared disks in the migration plan, verify that the Shared disks checkbox is selected.
- To avoid migrating shared disks, clear the Shared disks checkbox.
- Click Next.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
The Plan settings section of the page includes settings that you specified on the Other settings (optional) page and some additional optional settings. The steps below refer to the additional optional settings, but you can edit any of the settings by clicking the Options menu, making the change, and then clicking Save.
Check the following items on the Plan settings section of the page:
Volume name template: Specifies a template for the volume interface name for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
- .PVCName: Name of the PVC mounted to the VM using this volume
- .VolumeIndex: Sequential index of the volume interface (0-based)
Examples:
- "disk-{{.VolumeIndex}}"
- "pvc-{{.PVCName}}"
Variable names cannot exceed 63 characters.
To specify a volume name template for all the VMs in your plan, do the following:
- Click the Edit icon.
- Click Enter custom naming template.
- Enter the template according to the instructions.
- Click Save.
To specify a different volume name template only for specific VMs, do the following:
- Click the Virtual Machines tab.
- Select the desired VMs.
- Click the Options menu of the VM.
- Select Edit Volume name template.
- Enter the template according to the instructions.
Click Save.
Important: Changes you make on the Virtual Machines tab override any changes on the Plan details page.
- PVC name template: Specifies a template for the name of the persistent volume claim (PVC) for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
- .VmName: Name of the VM
- .PlanName: Name of the migration plan
- .DiskIndex: Initial volume index of the disk
- .RootDiskIndex: Index of the root disk
Examples:
- "{{.VmName}}-disk-{{.DiskIndex}}"
- "{{if eq .DiskIndex .RootDiskIndex}}root{{else}}data{{end}}-{{.DiskIndex}}"
Variable names cannot exceed 63 characters.
To specify a PVC name template for all the VMs in your plan, do the following:
- Click the Edit icon.
- Click Enter custom naming template.
- Enter the template according to the instructions.
- Click Save.
To specify a PVC name template only for specific VMs, do the following:
- Click the Virtual Machines tab.
- Select the desired VMs.
- Click the Options menu of the VM.
- Select Edit PVC name template.
- Enter the template according to the instructions.
Click Save.
Important: Changes you make on the Virtual Machines tab override any changes on the Plan details page.
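If you manage plans from the CLI, the same PVC naming template can be set directly on the Plan resource. The following sketch is an assumption: the pvcNameTemplate field name is based on the Forklift Plan custom resource and should be verified against the YAML tab of your plan before you patch anything:

```shell
# Sketch: set a PVC name template on an existing plan by patching its spec.
# The pvcNameTemplate field name is an assumption; confirm it in the plan's
# YAML tab first.
oc patch plans.forklift.konveyor.io <plan_name> -n <namespace> --type merge \
  -p '{"spec":{"pvcNameTemplate":"{{.VmName}}-disk-{{.DiskIndex}}"}}'
```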
- Network name template: Specifies a template for the network interface name for the VMs in your plan.
The template follows the Go template syntax and has access to the following variables:
- .NetworkName: If the target network is multus, the name of the Multus network attachment definition. Otherwise, leave this variable empty.
- .NetworkNamespace: If the target network is multus, the namespace where the Multus network attachment definition is located.
- .NetworkType: Network type. Options: multus or pod.
- .NetworkIndex: Sequential index of the network interface (0-based).
Examples:
- "net-{{.NetworkIndex}}"
- "{{if eq .NetworkType "pod"}}pod{{else}}multus-{{.NetworkIndex}}{{end}}"
Variable names cannot exceed 63 characters.
To specify a network name template for all the VMs in your plan, do the following:
- Click the Edit icon.
- Click Enter custom naming template.
- Enter the template according to the instructions.
- Click Save.
To specify a different network name template only for specific VMs, do the following:
- Click the Virtual Machines tab.
- Select the desired VMs.
- Click the Options menu of the VM.
- Select Edit Network name template.
- Enter the template according to the instructions.
Click Save.
Important: Changes you make on the Virtual Machines tab override any changes on the Plan details page.
- Raw copy mode: By default, during migration, virtual machines (VMs) are converted by using a tool named virt-v2v that makes them compatible with OpenShift Virtualization. For more information about the virt-v2v conversion process, see How MTV uses the virt-v2v tool in Migrating your virtual machines to Red Hat OpenShift Virtualization. Raw copy mode copies VMs without converting them. This allows for faster migrations, migrating VMs that run a wider range of operating systems, and migrating disks encrypted using Linux Unified Key Setup (LUKS) without needing keys. However, VMs migrated using raw copy mode might not function properly on OpenShift Virtualization.
To use raw copy mode for your migration plan, do the following:
- Click the Edit icon.
- Toggle the Raw copy mode switch.
Click Save.
MTV validates any changes you made on this page.
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 9.1. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan’s details, including the source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any
Chapter 10. Planning a migration of virtual machines from Red Hat Virtualization
You prepare and create your Red Hat Virtualization migration plan by performing the following high-level steps in the MTV UI:
- Create ownerless network maps.
- Add a Red Hat Virtualization source provider.
- Select a migration network for a Red Hat Virtualization source provider.
- Add an OpenShift Virtualization destination provider.
- Select a migration network for an OpenShift Virtualization provider.
- Create a Red Hat Virtualization migration plan.
10.1. Creating ownerless network maps in the MTV UI
You can create ownerless network maps by using the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
Click Create NetworkMap.
The Create NetworkMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: <network_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        name: <network_name>
        type: pod 1
      source: 2
        id: <source_network_id>
        name: <source_network_name>
    - destination:
        name: <network_attachment_definition> 3
        namespace: <network_attachment_definition_namespace> 4
        type: multus
      source:
        id: <source_network_id>
        name: <source_network_name>
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
- 1: Allowed values are pod and multus.
- 2: You can use either the id or the name parameter to specify the source network. For id, specify the RHV network Universal Unique ID (UUID).
- 3: Specify a network attachment definition for each additional OpenShift Virtualization network.
- 4: Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of network maps.
10.2. Creating ownerless storage maps using the form page of the MTV UI
You can create ownerless storage maps by using the form page of the MTV UI.
Prerequisites
- Have a Red Hat Virtualization source provider and an OpenShift Virtualization destination provider. For more information, see Adding a Red Hat Virtualization source provider or Adding an OpenShift Virtualization destination provider.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
- Click Create storage map > Create with form.
Specify the following:
- Map name: Name of the storage map.
- Project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Source storage: Select from the list.
- Target storage: Select from the list.
- Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
Click Create.
Your map appears in the list of storage maps.
10.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
You can create ownerless storage maps by using YAML or JSON definitions in the Migration Toolkit for Virtualization (MTV) UI.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
Click Create storage map > Create with YAML.
The Create StorageMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: <storage_map>
  namespace: <namespace>
spec:
  map:
    - destination:
        storageClass: <storage_class>
        accessMode: <access_mode> 1
      source:
        id: <source_storage_domain> 2
  provider:
    source:
      name: <source_provider>
      namespace: <namespace>
    destination:
      name: <destination_provider>
      namespace: <namespace>
EOF
- 1: Allowed values are ReadWriteOnce and ReadWriteMany.
- 2: Specify the RHV storage domain UUID.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of storage maps.
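You can also confirm from the CLI that the map was created and reconciled. This assumes the oc client is logged in to the cluster where the MTV Operator is installed:

```shell
# List storage maps in the namespace and inspect the conditions of one map.
# StorageMap belongs to the forklift.konveyor.io API group installed by MTV.
oc get storagemaps.forklift.konveyor.io -n <namespace>
oc describe storagemaps.forklift.konveyor.io <storage_map> -n <namespace>
```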
10.4. Adding a Red Hat Virtualization source provider
You can add a Red Hat Virtualization source provider by using the Red Hat OpenShift web console.
Prerequisites
- Manager CA certificate, unless it was replaced by a third-party certificate, in which case, specify the Manager Apache CA certificate
Procedure
Access the Create provider page for Red Hat Virtualization by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.
- Click Red Hat Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click Red Hat Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click Red Hat Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider.
-
URL: URL of the API endpoint of the Red Hat Virtualization Manager (RHVM) on which the source VM is mounted. Ensure that the URL includes the path leading to the RHVM API server, usually /ovirt-engine/api. For example, https://rhv-host-example.com/ovirt-engine/api.
- Username: Username.
- Password: Password.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
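If the provider reports a connection error, a quick reachability check of the Manager API endpoint can help separate network problems from credential problems. This optional sketch reuses the example hostname from the URL field above; the -k flag skips TLS verification for this check only:

```shell
# Print only the HTTP status code returned by the RHVM API endpoint.
# A 401 response still proves the endpoint is reachable; credentials are
# validated separately when MTV connects.
curl -sk -o /dev/null -w '%{http_code}\n' https://rhv-host-example.com/ovirt-engine/api
```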
10.5. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
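The prerequisite service account and token can be created with the oc CLI. The names below are examples, not requirements, and oc create token requires OpenShift 4.11 or later:

```shell
# Create a service account for MTV migrations in the target cluster.
oc create serviceaccount mtv-migrator -n openshift-mtv
# Grant it cluster-admin, as the prerequisite requires.
oc adm policy add-cluster-role-to-user cluster-admin \
  system:serviceaccount:openshift-mtv:mtv-migrator
# Request a bearer token for the service account.
oc create token mtv-migrator -n openshift-mtv
```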
Procedure
Access the Create OpenShift Virtualization provider interface by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects that you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
Service account bearer token: Token for a service account with cluster-admin privileges.
If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
10.6. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
In MTV version 2.9 and earlier, MTV used the pod network as the default network.
In version 2.10.0 and later, MTV detects whether you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN as the namespace of the migration, you do not need to select a new default network when you create your migration plan.
MTV supports using UDNs for all providers except OpenShift Virtualization.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Providers detail page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
Configure a gateway in the network used for MTV migrations by completing the following steps:
- In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
- Select the appropriate default transfer network NAD.
- Click the YAML tab.
Add forklift.konveyor.io/route to the metadata:annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> 1
- 1: The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
- Click Save.
10.7. Creating a Red Hat Virtualization migration plan by using the MTV wizard
You can migrate Red Hat Virtualization virtual machines (VMs) by using the Migration Toolkit for Virtualization plan creation wizard.
The wizard is designed to lead you step-by-step in creating a migration plan.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding such VMs prevents concurrent disk access to the storage that the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
When you click Create plan on the Review and create page of the wizard, Migration Toolkit for Virtualization (MTV) validates your plan. If the plan is valid, the Plan details page for your plan opens. This page contains important settings that do not appear in the wizard. Read and follow the instructions on this page carefully, even though it is outside the plan creation wizard. You can reopen the page at any time before you run the plan.
Prerequisites
- Have a Red Hat Virtualization source provider and an OpenShift Virtualization destination provider. For more information, see Adding a Red Hat Virtualization source provider or Adding an OpenShift Virtualization destination provider.
- If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
- If you are using a user-defined network (UDN), note the name of the namespace as defined in OpenShift Virtualization.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Target project: Select from the list. If you are using a UDN, this is the namespace defined in OpenShift Virtualization.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and click Next.
- If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans and are therefore ownerless from the system's perspective. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans and are therefore ownerless from the system's perspective. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original map or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
On the Migration type page, choose one of the following:
- Cold migration (default)
- Warm migration
- Click Next.
On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.
The transfer network is the network used to transfer the VMs to OpenShift Virtualization. By default, the transfer network of the provider is used.
- Verify that the transfer network is in the selected target project.
- To choose a different transfer network, select one from the list.
Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
- To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Click Next.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
The Plan settings section of the page includes settings that you specified in the Other settings (optional) page and some additional optional settings. The steps below refer to the additional optional settings, but all of the settings can be edited by clicking the Options menu, making the change, and then clicking Save.
Check the following item on the Plan settings section of the page:
Preserve CPU model: Generally, the CPU model (type) for Red Hat Virtualization VMs is set at the cluster level. However, the CPU model can be set at the VM level, which is called a custom CPU model.
By default, MTV sets the CPU model on the destination cluster as follows: MTV preserves custom CPU settings for VMs that have them. For VMs without custom CPU settings, MTV does not set the CPU model. Instead, the CPU model is later set by OpenShift Virtualization.
To preserve the cluster-level CPU model of your Red Hat Virtualization VMs, do the following:
- Click the Options menu.
- Toggle the Whether to preserve the CPU model switch.
- Click Save.
MTV validates any changes you made on this page.
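The default behavior described above can be summarized as a small decision function. The following Python sketch is illustrative only; the function and field names are hypothetical, not MTV code:

```python
def target_cpu_model(vm, preserve_cluster_cpu_model, cluster_cpu_model=None):
    """Sketch of how MTV chooses the CPU model on the destination cluster.

    vm: dict with an optional "custom_cpu_model" entry (a VM-level,
    custom CPU model). All names here are hypothetical.
    """
    # MTV preserves custom CPU settings for VMs that have them.
    if vm.get("custom_cpu_model"):
        return vm["custom_cpu_model"]
    # With "Preserve CPU model" enabled, the cluster-level model is carried over.
    if preserve_cluster_cpu_model:
        return cluster_cpu_model
    # Otherwise MTV does not set the model; OpenShift Virtualization sets it later.
    return None
```

A VM with a custom CPU model keeps it regardless of the plan setting; only VMs without one are affected by the toggle.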
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 10.1. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any
Chapter 11. Planning migration of virtual machines from OpenStack
You prepare and create your OpenStack migration plan by performing the following high-level steps in the MTV UI:
- Create ownerless network maps.
- Add an OpenStack source provider.
- Select a migration network for an OpenStack provider.
- Add an OpenShift Virtualization destination provider.
- Select a migration network for an OpenShift Virtualization provider.
- Create an OpenStack migration plan.
11.1. Creating ownerless network maps in the MTV UI
You can create ownerless network maps by using the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
Click Create NetworkMap.
The Create NetworkMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
name: <network_map>
namespace: <namespace>
spec:
map:
- destination:
name: <network_name>
type: pod 1
source: 2
id: <source_network_id>
name: <source_network_name>
- destination:
name: <network_attachment_definition> 3
namespace: <network_attachment_definition_namespace> 4
type: multus
source:
id: <source_network_id>
name: <source_network_name>
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are pod and multus.
- 2: You can use either the id or the name parameter to specify the source network. For id, specify the OpenStack network UUID.
- 3: Specify a network attachment definition for each additional OpenShift Virtualization network.
- 4: Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of network maps.
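The editor accepts JSON as well as YAML. As an illustration, the following Python sketch builds an equivalent NetworkMap definition as a dictionary and prints it as JSON that could be pasted into the editor. All names and IDs are placeholders, not values from a real cluster:

```python
import json

# Placeholder values; replace them with your own providers and networks.
network_map = {
    "apiVersion": "forklift.konveyor.io/v1beta1",
    "kind": "NetworkMap",
    "metadata": {"name": "openstack-network-map", "namespace": "openshift-mtv"},
    "spec": {
        "map": [
            {
                # Map a source network to the default pod network.
                "destination": {"name": "default", "type": "pod"},
                "source": {"name": "provider-network-1"},
            },
            {
                # Map a source network to a Multus network attachment definition;
                # the destination namespace is required when type is multus.
                "destination": {
                    "name": "my-nad",
                    "namespace": "my-nad-namespace",
                    "type": "multus",
                },
                "source": {"id": "source-network-uuid"},
            },
        ],
        "provider": {
            "source": {"name": "my-openstack-provider", "namespace": "openshift-mtv"},
            "destination": {"name": "host", "namespace": "openshift-mtv"},
        },
    },
}

print(json.dumps(network_map, indent=2))
```

As in the YAML example, each entry in spec.map pairs one source network with one destination, and the provider section names the source and destination providers.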
11.2. Creating ownerless storage maps using the form page of the MTV UI
You can create ownerless storage maps by using the form page of the MTV UI.
Prerequisites
- Have an OpenStack source provider and an OpenShift Virtualization destination provider. For more information, see Adding an OpenStack source provider or Adding an OpenShift Virtualization destination provider.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
- Click Create storage map > Create with form.
Specify the following:
- Map name: Name of the storage map.
- Project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Source storage: Select from the list.
- Target storage: Select from the list.
- Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
Click Create.
Your map appears in the list of storage maps.
11.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
You can create ownerless storage maps by using YAML or JSON definitions in the Migration Toolkit for Virtualization (MTV) UI.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
Click Create storage map > Create with YAML.
The Create StorageMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
name: <storage_map>
namespace: <namespace>
spec:
map:
- destination:
storageClass: <storage_class>
accessMode: <access_mode> 1
source:
id: <source_volume_type> 2
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are ReadWriteOnce and ReadWriteMany.
- 2: Specify the UUID of the source volume type.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of storage maps.
11.4. Adding an OpenStack source provider
You can add an OpenStack source provider by using the Red Hat OpenShift web console.
When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM, and the data from the snapshot is copied over to the target VM. This means that the target VM will have the same state as that of the source VM at the time the snapshot was created.
Procedure
Access the Create provider page for OpenStack by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project. If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
- Click OpenStack.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenStack.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenStack when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project. If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider.
- URL: URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3.
Authentication type: Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret.
Application credential ID
- Application credential ID: OpenStack application credential ID
- Application credential secret: OpenStack application credential secret
Application credential name
- Application credential name: OpenStack application credential name
- Application credential secret: OpenStack application credential secret
- Username: OpenStack username
- Domain: OpenStack domain name
Token with user ID
- Token: OpenStack token
- User ID: OpenStack user ID
- Project ID: OpenStack project ID
Token with user name
- Token: OpenStack token
- Username: OpenStack username
- Project: OpenStack project
- Domain name: OpenStack domain name
Password
- Username: OpenStack username
- Password: OpenStack password
- Project: OpenStack project
- Domain: OpenStack domain name
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
11.5. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
- You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
Access the Create OpenShift Virtualization provider interface by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project. If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project. If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
- Service account bearer token: Token for a service account with cluster-admin privileges.
If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
- Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
11.6. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
In MTV version 2.9 and earlier, MTV used the pod network as the default network.
In version 2.10.0 and later, MTV detects if you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN to be the migration’s namespace, you do not need to select a new default network when you create your migration plan.
MTV supports using UDNs for all providers except OpenShift Virtualization.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Providers detail page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
Configure a gateway in the network used for MTV migrations by completing the following steps:
- In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
- Select the appropriate default transfer network NAD.
- Click the YAML tab.
Add forklift.konveyor.io/route to the metadata: annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> 1
- 1: The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
- Click Save.
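In terms of the manifest, the step above adds a single annotation. The following Python sketch shows the change on a NetworkAttachmentDefinition represented as a dictionary; the gateway address is a placeholder:

```python
# NetworkAttachmentDefinition manifest as a dictionary (metadata values taken
# from the example above; the gateway address below is a placeholder).
nad = {
    "apiVersion": "k8s.cni.cncf.io/v1",
    "kind": "NetworkAttachmentDefinition",
    "metadata": {"name": "localnet-network", "namespace": "mtv-test"},
}

gateway_ip = "192.168.1.1"  # placeholder; use the gateway of your network

# Add the forklift.konveyor.io/route annotation, creating the
# annotations section if it does not exist yet.
annotations = nad["metadata"].setdefault("annotations", {})
annotations["forklift.konveyor.io/route"] = gateway_ip

print(nad["metadata"]["annotations"])
# → {'forklift.konveyor.io/route': '192.168.1.1'}
```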
11.7. Creating an OpenStack migration plan by using the MTV wizard
You can migrate OpenStack virtual machines (VMs) by using the Migration Toolkit for Virtualization plan creation wizard.
The wizard is designed to lead you step-by-step in creating a migration plan.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
Excluding these virtual machines prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
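A hypothetical pre-flight check for both documented limits might look like the following sketch; the plan shape (a list of VM dicts with a disk count) is invented for illustration:

```python
MAX_VMS = 500
MAX_DISKS = 500

def check_plan_limits(vms):
    """Validate the documented limits: at most 500 VMs and 500 disks per plan.

    vms: list of dicts, each with a "disks" count (hypothetical shape).
    Returns a list of problems; an empty list means the plan is within limits.
    """
    problems = []
    if len(vms) > MAX_VMS:
        problems.append(f"plan has {len(vms)} VMs; the limit is {MAX_VMS}")
    total_disks = sum(vm.get("disks", 0) for vm in vms)
    if total_disks > MAX_DISKS:
        problems.append(f"plan has {total_disks} disks; the limit is {MAX_DISKS}")
    return problems

# A 600-VM plan violates both limits (600 VMs and 600 disks).
print(check_plan_limits([{"disks": 1}] * 600))
```

If a plan exceeds either limit, split it into several smaller plans before migrating.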
When you click Create plan on the Review and create page of the wizard, Migration Toolkit for Virtualization (MTV) validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.
Prerequisites
- Have an OpenStack source provider and an OpenShift Virtualization destination provider. For more information, see Adding an OpenStack source provider or Adding an OpenShift Virtualization destination provider.
- If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
- If you are using a user-defined network (UDN), note the name of its namespace as defined in OpenShift Virtualization.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Target project: Select from the list. If you are using a UDN, this is the namespace defined in OpenShift Virtualization.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and click Next.
- If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.
The transfer network is the network used to transfer the VMs to OpenShift Virtualization. By default, the transfer network of the provider is used.
- Verify that the transfer network is in the selected target project.
- To choose a different transfer network, select one from the list.
Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
- To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Click Next.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 11.1. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any
Chapter 12. Planning a migration of virtual machines from OVA
You prepare and create your OVA migration plan by performing the following high-level steps in the MTV UI:
- Create ownerless network maps.
- Add an OVA source provider.
- Select a migration network for an OVA source provider.
- Add an OpenShift Virtualization destination provider.
- Select a migration network for an OpenShift Virtualization provider.
- Create an OVA migration plan.
12.1. Creating ownerless network maps in the MTV UI
You can create ownerless network maps by using the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
Click Create NetworkMap.
The Create NetworkMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
name: <network_map>
namespace: <namespace>
spec:
map:
- destination:
name: <network_name>
type: pod 1
source:
id: <source_network_id> 2
- destination:
name: <network_attachment_definition> 3
namespace: <network_attachment_definition_namespace> 4
type: multus
source:
id: <source_network_id>
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are pod and multus.
- 2: Specify the OVA network Universally Unique Identifier (UUID).
- 3: Specify a network attachment definition for each additional OpenShift Virtualization network.
- 4: Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of network maps.
12.2. Creating ownerless storage maps using the form page of the MTV UI
You can create ownerless storage maps by using the form page of the MTV UI.
Prerequisites
- Have an Open Virtual Appliance (OVA) source provider and an OpenShift Virtualization destination provider. For more information, see Adding an Open Virtual Appliance (OVA) source provider or Adding an OpenShift Virtualization destination provider.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
- Click Create storage map > Create with form.
Specify the following:
- Map name: Name of the storage map.
- Project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Source storage: Select from the list.
- Target storage: Select from the list.
- Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
Click Create.
Your map appears in the list of storage maps.
12.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
You can create ownerless storage maps by using YAML or JSON definitions in the Migration Toolkit for Virtualization (MTV) UI.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
Click Create storage map > Create with YAML.
The Create StorageMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
name: <storage_map>
namespace: <namespace>
spec:
map:
- destination:
storageClass: <storage_class>
accessMode: <access_mode> 1
source:
name: Dummy storage for source provider <provider_name> 2
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are ReadWriteOnce and ReadWriteMany.
- 2: For OVA, the StorageMap can map only a single storage, which all the disks from the OVA are associated with, to a storage class at the destination. For this reason, the storage is referred to in the UI as "Dummy storage for source provider <provider_name>". In the YAML, write the phrase as it appears above, without the quotation marks and replacing <provider_name> with the actual name of the provider.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of storage maps.
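Because the dummy-storage phrase must be written exactly, a small helper like the following sketch (hypothetical, not part of MTV) can build the source name for a given provider:

```python
def ova_dummy_storage_name(provider_name: str) -> str:
    """Build the exact source-storage name expected in an OVA StorageMap.

    The OVA provider exposes a single storage, with which all disks from
    the OVA are associated, named with this fixed phrase.
    """
    return f"Dummy storage for source provider {provider_name}"

print(ova_dummy_storage_name("my-ova-provider"))
# → Dummy storage for source provider my-ova-provider
```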
12.4. Adding an Open Virtual Appliance (OVA) source provider
You can add Open Virtual Appliance (OVA) files that were created by VMware vSphere as a source provider by using the Red Hat OpenShift web console.
Procedure
Access the Create provider page for Open Virtual Appliance by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project. If you have Administrator privileges, you can see all projects; otherwise, you can see only the projects you are authorized to work with.
- Click Open Virtual Appliance.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click Open Virtual Appliance.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click Open Virtual Appliance when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the NFS file share that serves the OVA
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Note: An error message might appear that states that an error has occurred. You can ignore this message.
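If you prefer the CLI, the same provider can be expressed as a Provider custom resource, in the style of the other YAML examples in this guide. The following is a minimal sketch, not a definitive manifest: the name and NFS path are placeholders, the openshift-mtv namespace is assumed, and the field layout follows the forklift.konveyor.io/v1beta1 Provider API.

```shell
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <ova_provider>              # Provider resource name (placeholder)
  namespace: openshift-mtv          # assumes the default MTV project
spec:
  type: ova                         # identifies this provider as an OVA source
  url: <nfs_server>:/<share_path>   # NFS file share that serves the OVA (placeholder)
EOF
```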
12.5. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
-
You must have an OpenShift Virtualization service account token with cluster-admin privileges.
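As a rough sketch of how such a token might be obtained with standard oc commands (the service account name mtv-sa is illustrative, and oc create token requires a reasonably recent oc client):

```shell
# Create a service account and grant it cluster-admin (illustrative names)
$ oc create serviceaccount mtv-sa -n openshift-mtv
$ oc adm policy add-cluster-role-to-user cluster-admin -z mtv-sa -n openshift-mtv

# Request a bearer token for the service account
$ oc create token mtv-sa -n openshift-mtv
```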
Procedure
Access the Create OpenShift Virtualization provider interface by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
Service account bearer token: Token for a service account with cluster-admin privileges.
If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
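Equivalently, a destination provider can be sketched as a Provider custom resource; this is a hedged example, not the definitive manifest — the URL and secret name are placeholders, the openshift-mtv namespace is assumed, and the referenced Secret is assumed to hold the service account bearer token:

```shell
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: <destination_provider>
  namespace: openshift-mtv
spec:
  type: openshift                 # OpenShift Virtualization provider
  url: <api_server_url>           # for example, https://api.<cluster>.example.com:6443
  secret:
    name: <secret_name>           # Secret containing the bearer token (placeholder)
    namespace: openshift-mtv
EOF
```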
12.6. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
In MTV version 2.9 and earlier, MTV used the pod network as the default network.
In version 2.10.0 and later, MTV detects if you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN to be the migration’s namespace, you do not need to select a new default network when you create your migration plan.
MTV supports using UDNs for all providers except OpenShift Virtualization.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Providers detail page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
Configure a gateway in the network used for MTV migrations by completing the following steps:
- In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
- Select the appropriate default transfer network NAD.
- Click the YAML tab.
Add forklift.konveyor.io/route to the metadata.annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> 1
- 1: The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
- Click Save.
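As a CLI alternative to editing the YAML, the annotation can typically be added with oc annotate; the NAD name and namespace below match the example above, and the IP address is a placeholder:

```shell
# Add the gateway annotation to the transfer network NAD (placeholder IP)
$ oc annotate net-attach-def localnet-network -n mtv-test \
    forklift.konveyor.io/route=<IP address>
```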
12.7. Creating an Open Virtualization Appliance (OVA) migration plan by using the MTV wizard
You can migrate Open Virtual Appliance (OVA) files that were created by VMware vSphere by using the Migration Toolkit for Virtualization plan creation wizard.
The wizard is designed to lead you step-by-step in creating a migration plan.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration.
This prevents concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
When you click Create plan on the Review and create page of the wizard, Migration Toolkit for Virtualization (MTV) validates your plan. If everything is OK, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard, but are important. Be sure to read and follow the instructions for this page carefully, even though it is outside the plan creation wizard. The page can be opened later, any time before you run the plan, so you can come back to it if needed.
Prerequisites
- Have an OVA source provider and an OpenShift Virtualization destination provider. For more information, see Adding an Open Virtual Appliance (OVA) source provider or Adding an OpenShift Virtualization destination provider.
- If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
- If you are using a user-defined network (UDN), note the name of its namespace as defined in OpenShift Virtualization.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Target project: Select from the list. If you are using a UDN, this is the namespace defined in OpenShift Virtualization.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and click Next.
- If you are using a UDN, verify that the IP address of the provider is outside the subnet of the UDN. If the IP address is within the subnet of the UDN, the migration fails.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
On the Other settings (optional) page, you have the option to change the Transfer network of your migration plan.
The transfer network is the network used to transfer the VMs to OpenShift Virtualization. This is the default transfer network of the provider.
- Verify that the transfer network is in the selected target project.
- To choose a different transfer network, select a different transfer network from the list.
Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
- To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Click Next.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
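You can also inspect the same conditions from the command line; the plan name is a placeholder, and the namespace assumes the default MTV project:

```shell
# Print the status conditions of a migration plan (placeholder plan name)
$ oc get plan <plan_name> -n openshift-mtv -o jsonpath='{.status.conditions}'
```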
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 12.1. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan’s details, including source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any
12.8. Configuring OVA file upload by web browser
You can configure Open Virtual Appliance (OVA) file upload by web browser to upload an OVA file directly to an NFS share. To configure OVA file upload, you first enable OVA appliance management in the ForkliftController custom resource (CR) and then enable OVA upload for each OVA provider. When you enable OVA upload for an OVA provider, the Upload local OVA files option appears on the provider’s Details page in the MTV UI.
Prerequisites
- You have an NFS share to point the OVA provider at.
- You have enough storage space in your NFS share.
- You have a valid .ova file to upload.
- Your .ova file has a unique file name.
Procedure
- In the Red Hat OpenShift web console, click Operators > Installed Operators.
Click Migration Toolkit for Virtualization Operator.
The Operator Details page opens in the Details tab.
-
Click the ForkliftController tab, and open the forklift-controller resource.
Click the forklift-controller YAML tab, and add the feature_ova_appliance_management field to the spec section of the forklift-controller custom resource (CR). Example:
spec:
  ...
  feature_ova_appliance_management: 'true'
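The same field can be set non-interactively with a merge patch; the resource name forklift-controller and the openshift-mtv namespace are the defaults assumed here:

```shell
# Enable OVA appliance management on the ForkliftController CR (assumed defaults)
$ oc patch forkliftcontroller forklift-controller -n openshift-mtv \
    --type merge -p '{"spec":{"feature_ova_appliance_management":"true"}}'
```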
- In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
Click the provider to open the Details page. Click the provider’s YAML tab.
For information about creating a provider, see Adding an Open Virtual Appliance (OVA) source provider.
Scroll down the provider’s YAML file to spec.settings.applianceManagement, and set applianceManagement to 'true'. Example:
spec:
  secret:
    ...
  settings:
    applianceManagement: 'true'
  type: ova
  ...
- A temporary ConnectionTestFailed error message displays while the update is processing. You can ignore the error message.
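As a CLI alternative, a merge patch along the following lines sets the same field; the provider name is a placeholder, and the openshift-mtv namespace is assumed:

```shell
# Enable appliance management on an OVA provider (placeholder name)
$ oc patch provider <ova_provider> -n openshift-mtv \
    --type merge -p '{"spec":{"settings":{"applianceManagement":"true"}}}'
```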
- Click the provider’s Details tab, and scroll down to the Conditions section. Verify that ApplianceManagementEnabled shows as True in the list of conditions.
- In the Upload local OVA files section, click Browse to find a valid .ova file.
Click Upload.
A success message confirms the file upload. After several seconds, the number of virtual machines increases under the Provider inventory section.
- If your OVA appliance is large, you might receive a request timeout error message.
Chapter 13. Planning a migration of virtual machines from OpenShift Virtualization
You prepare and create your OpenShift Virtualization migration plan by performing the following high-level steps in the MTV UI:
- Create ownerless network maps.
- Add an OpenShift Virtualization source provider.
- Select a migration network for an OpenShift Virtualization provider.
- Add an OpenShift Virtualization destination provider.
- Select a migration network for an OpenShift Virtualization provider.
- Create an OpenShift Virtualization migration plan.
13.1. Creating ownerless network maps in the MTV UI
You can create ownerless network maps by using the Migration Toolkit for Virtualization (MTV) UI to map source networks to OpenShift Virtualization networks.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Network maps.
Click Create NetworkMap.
The Create NetworkMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
name: <network_map>
namespace: <namespace>
spec:
map:
- destination:
name: <network_name>
type: pod 1
source:
name: <network_name>
type: pod
- destination:
name: <network_attachment_definition> 2
namespace: <network_attachment_definition_namespace> 3
type: multus
source:
name: <network_attachment_definition>
namespace: <network_attachment_definition_namespace>
type: multus
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are pod and multus.
- 2: Specify a network attachment definition for each additional OpenShift Virtualization network. Specify the namespace either by using the namespace property or with a name built as follows: <network_namespace>/<network_name>.
- 3: Required only when type is multus. Specify the namespace of the OpenShift Virtualization network attachment definition.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of network maps.
13.2. Creating ownerless storage maps using the form page of the MTV UI
You can create ownerless storage maps by using the form page of the MTV UI.
Prerequisites
- Have an OpenShift Virtualization source provider and an OpenShift Virtualization destination provider. For more information, see Adding an OpenShift Virtualization source provider or Adding an OpenShift Virtualization destination provider.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
- Click Create storage map > Create with form.
Specify the following:
- Map name: Name of the storage map.
- Project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Source storage: Select from the list.
- Target storage: Select from the list.
- Optional: Click Add mapping to create additional storage maps, including mapping multiple storage sources to a single target storage class.
Click Create.
Your map appears in the list of storage maps.
13.3. Creating ownerless storage maps using YAML or JSON definitions in the MTV UI
You can create ownerless storage maps by using YAML or JSON definitions in the Migration Toolkit for Virtualization (MTV) UI.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Storage maps.
Click Create storage map > Create with YAML.
The Create StorageMap page opens.
- Enter the YAML or JSON definitions into the editor, or drag and drop a file into the editor.
- If you enter YAML definitions, use the following:
$ cat << EOF | oc apply -f -
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
name: <storage_map>
namespace: <namespace>
spec:
map:
- destination:
storageClass: <storage_class>
accessMode: <access_mode> 1
source:
name: <storage_class>
provider:
source:
name: <source_provider>
namespace: <namespace>
destination:
name: <destination_provider>
namespace: <namespace>
EOF
- 1: Allowed values are ReadWriteOnce and ReadWriteMany.
- Optional: To download your input, click Download.
Click Create.
Your map appears in the list of storage maps.
13.4. Adding a Red Hat OpenShift Virtualization source provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on.
The Red Hat OpenShift cluster version of the source provider must be 4.16 or later.
Procedure
Access the Create provider page for OpenShift Virtualization by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
Service account bearer token: Token for a service account with cluster-admin privileges.
If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
Optional: Add access to the UI of the provider:
On the Providers page, click the provider.
The Provider details page opens.
- Click the Edit icon under External UI web link.
Enter the link and click Save.
Note: If you do not enter a link, MTV attempts to calculate the correct link.
- If MTV succeeds, the hyperlink of the field points to the calculated link.
- If MTV does not succeed, the field remains empty.
13.5. Adding an OpenShift Virtualization destination provider
You can use a Red Hat OpenShift Virtualization provider as both a source provider and a destination provider.
Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider.
You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV.
You can migrate VMs from the cluster that MTV is deployed on to another cluster or from a remote cluster to the cluster that MTV is deployed on.
Prerequisites
-
You must have an OpenShift Virtualization service account token with cluster-admin privileges.
Procedure
Access the Create OpenShift Virtualization provider interface by doing one of the following:
In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
- Click Create Provider.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
- Click OpenShift Virtualization.
If you have Administrator privileges, in the Red Hat OpenShift web console, click Migration for Virtualization > Overview.
In the Welcome pane, click OpenShift Virtualization.
If the Welcome pane is not visible, click Show the welcome card in the upper-right corner of the page, and click OpenShift Virtualization when the Welcome pane opens.
Select a Project from the list. The default project shown depends on the active project of MTV.
If the active project is All projects, then the default project is openshift-mtv. Otherwise, the default project is the same as the active project.
If you have Administrator privileges, you can see all projects. Otherwise, you can see only the projects you are authorized to work with.
Specify the following fields:
- Provider resource name: Name of the source provider
- URL: URL of the endpoint of the API server
Service account bearer token: Token for a service account with cluster-admin privileges.
If both URL and Service account bearer token are left blank, the local OpenShift cluster is used.
Choose one of the following options for validating CA certificates:
- Use a custom CA certificate: Migrate after validating a custom CA certificate.
- Use the system CA certificate: Migrate after validating the system CA certificate.
Skip certificate validation: Migrate without validating a CA certificate.
- To use a custom CA certificate, leave the Skip certificate validation switch toggled to the left, and either drag the CA certificate to the text box or browse for it and click Select.
- To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty.
- To skip certificate validation, toggle the Skip certificate validation switch to the right.
Optional: Ask MTV to fetch a custom CA certificate from the provider’s API endpoint URL.
- Click Fetch certificate from URL. The Verify certificate window opens.
If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm. If not, click Cancel, and then, enter the correct certificate information manually.
Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint.
Click Create provider to add and save the provider.
The provider appears in the list of providers.
13.6. Selecting a migration network for an OpenShift Virtualization provider
You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured.
After you select a transfer network, associate its network attachment definition (NAD) with the gateway to be used by this network.
In MTV version 2.9 and earlier, MTV used the pod network as the default network.
In version 2.10.0 and later, MTV detects if you have selected a user-defined network (UDN) as your default network. Therefore, if you set the UDN to be the migration’s namespace, you do not need to select a new default network when you create your migration plan.
MTV supports using UDNs for all providers except OpenShift Virtualization.
You can override the default migration network of the provider by selecting a different network when you create a migration plan.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Providers.
Click the OpenShift Virtualization provider whose migration network you want to change.
When the Providers detail page opens:
- Click the Networks tab.
- Click Set default transfer network.
- Select a default transfer network from the list and click Save.
Configure a gateway in the network used for MTV migrations by completing the following steps:
- In the Red Hat OpenShift web console, click Networking > NetworkAttachmentDefinitions.
- Select the appropriate default transfer network NAD.
- Click the YAML tab.
Add forklift.konveyor.io/route to the metadata.annotations section of the YAML, as in the following example:
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: localnet-network
  namespace: mtv-test
  annotations:
    forklift.konveyor.io/route: <IP address> 1
- 1: The NetworkAttachmentDefinition parameter is needed to configure an IP address for the interface, either from the Dynamic Host Configuration Protocol (DHCP) or statically. Configuring the IP address enables the interface to reach the configured gateway.
- Click Save.
13.7. Creating an OpenShift Virtualization migration plan by using the MTV wizard
You can migrate OpenShift Virtualization virtual machines (VMs) by using the Migration Toolkit for Virtualization plan creation wizard.
The wizard is designed to lead you step-by-step in creating a migration plan.
Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. Such connections require either additional planning before migration or reconfiguration after migration to prevent concurrent disk access to the storage the guest points to.
A plan cannot contain more than 500 VMs or 500 disks.
When you click Create plan on the Review and create page of the wizard, Migration Toolkit for Virtualization (MTV) validates your plan. If the plan is valid, the Plan details page for your plan opens. This page contains settings that do not appear in the wizard but are important. Be sure to read and follow the instructions on this page carefully, even though it is outside the plan creation wizard. You can reopen the page at any time before you run the plan, so you can come back to it if needed.
Prerequisites
- Have an OpenShift Virtualization source provider and an OpenShift Virtualization destination provider. For more information, see Adding an OpenShift Virtualization source provider or Adding an OpenShift Virtualization destination provider.
- If you plan to create a Network map or a Storage map that will be used by more than one migration plan, create it in the Network maps or Storage maps page of the UI before you create a migration plan that uses that map.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list.
- Target provider: Select from the list.
- Target project: Select from the list.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and click Next.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
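Behind the wizard, a network map is stored as a NetworkMap custom resource that the plan owns. The following hedged sketch shows one possible shape for an OpenShift Virtualization source; the provider, network, and project names are hypothetical:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: ocp-network-map          # hypothetical map name
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: source-cluster       # hypothetical source provider
      namespace: openshift-mtv
    destination:
      name: host                 # hypothetical destination provider
      namespace: openshift-mtv
  map:
    # Map a Multus secondary network on the source to one on the target
    - source:
        type: multus
        name: vm-network
        namespace: source-project
      destination:
        type: multus
        name: vm-network
        namespace: target-project
    # Map the source pod network to the target pod network
    - source:
        type: pod
      destination:
        type: pod
```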
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
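Similarly, each storage map is stored as a StorageMap custom resource owned by the plan. A hedged sketch, with hypothetical provider names and storage class names; for an OpenShift Virtualization source, the source entry refers to a storage class on the source cluster:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: ocp-storage-map          # hypothetical map name
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: source-cluster       # hypothetical source provider
      namespace: openshift-mtv
    destination:
      name: host                 # hypothetical destination provider
      namespace: openshift-mtv
  map:
    - source:
        name: standard-csi       # storage class on the source cluster
      destination:
        storageClass: target-storage-class   # storage class on the target cluster
```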
On the Other settings (optional) page, you can change the Transfer network of your migration plan.
The transfer network is the network used to transfer the VMs to OpenShift Virtualization. By default, the plan uses the default transfer network of the provider.
- Verify that the transfer network is in the selected target project.
- To choose a different transfer network, select a different transfer network from the list.
Optional: To configure another OpenShift network in the OpenShift web console, click Networking > NetworkAttachmentDefinitions.
To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform.
- To adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider.
- Click Next.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
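Each hook you enable is stored as a Hook custom resource that the plan references as a PreHook or PostHook step. The following is a hedged sketch: the hook name is hypothetical, the image shown is the default hook runner image documented for MTV, and the playbook value is the base64-encoded form of the Ansible playbook you paste into the wizard:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Hook
metadata:
  name: pre-migration-hook       # hypothetical hook name
  namespace: openshift-mtv
spec:
  image: quay.io/konveyor/hook-runner
  # Base64-encoded Ansible playbook; the wizard accepts it as plain text
  playbook: <base64_encoded_playbook>
```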
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 13.1. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan's details, including source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any
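Taken together, the wizard's output is a Plan custom resource, which is what the YAML tab displays. The following hedged sketch shows the overall shape, with hypothetical provider, map, project, and VM names:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: ocp-plan                 # the plan name from the General page
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: source-cluster       # hypothetical source provider
      namespace: openshift-mtv
    destination:
      name: host                 # hypothetical destination provider
      namespace: openshift-mtv
  map:
    network:
      name: ocp-network-map      # the network map attached to the plan
      namespace: openshift-mtv
    storage:
      name: ocp-storage-map      # the storage map attached to the plan
      namespace: openshift-mtv
  targetNamespace: target-project  # the target project from the General page
  vms:
    - name: vm-1                 # a VM selected on the Virtual machines page
```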
13.7.1. Creating a migration plan for a live migration by using the MTV wizard
You create a migration plan for a live migration of virtual machines (VMs) in almost the same way as you create a migration plan for a cold migration. The only difference is that you select Live migration on the Migration type page.
Prerequisites
- The prerequisites described in OpenShift Virtualization live migration prerequisites.
Procedure
- In the Red Hat OpenShift web console, click Migration for Virtualization > Migration plans.
Click Create plan.
The Create migration plan wizard opens.
On the General page, specify the following fields:
- Plan name: Enter a name.
- Plan project: Select from the list.
- Source provider: Select from the list. You can use any OpenShift Virtualization provider. You do not need to create a new one for a live migration.
- Target provider: Select from the list. Be sure to select the correct OpenShift Virtualization target provider.
- Target project: Select from the list.
- Click Next.
- On the Virtual machines page, select the virtual machines you want to migrate and ensure they are powered on. A live migration fails if any of its VMs are powered off.
- Click Next.
On the Network map page, choose one of the following options:
Use an existing network map: Select an existing network map from the list.
These are network maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you select an existing map, be sure it has the same source provider and target provider as the ones you want to use in your plan.
Use a new network map: Allows you to create a new network map by supplying the following data. This map is attached to this plan, which is then considered to be its owner. Maps that you create using this option are not available in the Use an existing network map option because each is created with an owner.
Note: You can create an ownerless network map, which you and others can use for additional migration plans, in the Network Maps section of the UI.
- Source network: Select from the list.
Target network: Select from the list.
If needed, click Add mapping to add another mapping.
- Network map name: Enter a name or let MTV automatically generate a name for the network map.
- Click Next.
On the Storage map page, choose one of the following options:
Use an existing storage map: Select an existing storage map from the list.
These are storage maps available for all plans, and therefore, they are ownerless in terms of the system. If you select this option and choose a map, a copy of that map is attached to your plan, and your plan is the owner of that copy. Any changes you make to your copy do not affect the original plan or any copies that other users have.
Note: If you choose an existing map, be sure it has the same source provider and the same target provider as the ones you want to use in your plan.
Use new storage map: Allows you to create one or two new storage maps by supplying the following data. These maps are attached to this plan, which is then their owner. Maps that you create using this option are not available in the Use an existing storage map option because each is created with an owner.
Note: You can create an ownerless storage map, which you and others can use for additional migration plans, in the Storage Maps section of the UI.
- Source storage: Select from the list.
Target storage: Select from the list.
If needed, click Add mapping to add another mapping.
- Storage map name: Enter a name or let MTV automatically generate a name for the storage map.
- Click Next.
On the Migration type page, choose Live migration.
If Live migration does not appear as an option, return to the General page and verify that both the source provider and target provider are OpenShift Virtualization clusters.
If they are both OpenShift Virtualization clusters, have someone with cluster-admin privileges ensure that the prerequisites for live migration are met. For more information, see OpenShift Virtualization live migration prerequisites.
- Click Next.
- On the Other settings (optional) page, click Next without doing anything else.
- On the Hooks (optional) page, you can add a pre-migration hook, a post-migration hook, or both types of migration hooks. All are optional.
- To add a hook, select the appropriate Enable hook checkbox.
- Enter the Hook runner image.
Enter the Ansible playbook of the hook in the window.
Note: You cannot include more than one pre-migration hook or more than one post-migration hook in a migration plan.
- Click Next.
- On the Review and Create page, review the information displayed.
Edit any item by doing the following:
Click its Edit step link.
The wizard opens to the page where you defined the item.
- Edit the item.
- Either click Next to advance to the next page of the wizard, or click Skip to review to return directly to the Review and create page.
When you finish reviewing the details of the plan, click Create plan. MTV validates your plan.
When your plan is validated, the Plan details page for your plan opens in the Details tab.
In addition to listing details based on your entries in the wizard, the Plan details tab includes the following two sections after the details of the plan:
- Migration history: Details about successful and unsuccessful attempts to run the plan
- Conditions: Any changes that need to be made to the plan so that it can run successfully
When you have fixed all conditions listed, you can run your plan from the Plans page.
The Plan details page also includes five additional tabs, which are described in the table that follows:
Table 13.2. Tabs of the Plan details page
- YAML: Editable YAML Plan manifest based on your plan's details, including source provider, network and storage maps, VMs, and any issues with your VMs
- Virtual Machines: The VMs the plan migrates
- Resources: Calculated resources: VMs, CPUs, and total memory for both total VMs and running VMs
- Mappings: Editable specification of the network and storage maps used by your plan
- Hooks: Updatable specification of the hooks used by your plan, if any