Chapter 12. File systems and storage
12.1. File systems
12.1.1. Btrfs has been removed
The Btrfs file system has been removed in Red Hat Enterprise Linux 8. This includes the following components:
- The btrfs.ko kernel module
- The btrfs-progs package
- The snapper package
You can no longer create, mount, or install on Btrfs file systems in Red Hat Enterprise Linux 8. The Anaconda installer and the Kickstart commands no longer support Btrfs.
12.1.3. The ext4 file system now supports metadata checksums
With this update, ext4 metadata is protected by checksums. This enables the file system to recognize corrupt metadata, which avoids damage and increases file system resilience.
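In RHEL 8, newly created ext4 file systems typically have the metadata_csum feature enabled by default. As a quick check, you can verify that an existing file system carries the feature; the device name /dev/sdb1 is only a placeholder:
# dumpe2fs -h /dev/sdb1 | grep features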
12.1.4. The /etc/sysconfig/nfs file and legacy NFS service names are no longer available
In Red Hat Enterprise Linux 8.0, the NFS configuration has moved from the /etc/sysconfig/nfs configuration file, which was used in Red Hat Enterprise Linux 7, to /etc/nfs.conf.
The /etc/nfs.conf file uses a different syntax. Red Hat Enterprise Linux 8 attempts to automatically convert all options from /etc/sysconfig/nfs to /etc/nfs.conf when upgrading from Red Hat Enterprise Linux 7.
Both configuration files are supported in Red Hat Enterprise Linux 7. Red Hat recommends that you use the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux compatible with automated configuration systems.
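For example, a minimal /etc/nfs.conf might set the number of nfsd threads in its [nfsd] section; the value shown here is only an illustration, not a recommendation:
[nfsd]
threads=16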
Additionally, the following NFS service aliases have been removed and replaced by their upstream names:
- nfs.service, replaced by nfs-server.service
- nfs-secure.service, replaced by rpc-gssd.service
- rpcgssd.service, replaced by rpc-gssd.service
- nfs-idmap.service, replaced by nfs-idmapd.service
- rpcidmapd.service, replaced by nfs-idmapd.service
- nfs-lock.service, replaced by rpc-statd.service
- nfslock.service, replaced by rpc-statd.service
12.2. Storage
12.2.1. The BOOM boot manager simplifies the process of creating boot entries
BOOM is a boot manager for Linux systems that use boot loaders supporting the BootLoader Specification for boot entry configuration. It enables flexible boot configuration and simplifies the creation of new or modified boot entries: for example, to boot snapshot images of the system created using LVM.
BOOM does not modify the existing boot loader configuration, and only inserts additional entries. The existing configuration is maintained, and any distribution integration, such as kernel installation and update scripts, continue to function as before.
BOOM has a simplified command-line interface (CLI) and API that ease the task of creating boot entries.
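For example, after creating an LVM snapshot of the root logical volume, you could add a boot entry for it with a command similar to the following; the title and the rhel/root-snapshot names are placeholders for your own volume group and snapshot:
# boom create --title "Root snapshot before upgrade" --rootlv rhel/root-snapshot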
12.2.2. Stratis is now available
Stratis is a new local storage manager. It provides managed file systems on top of pools of storage with additional features to the user.
Stratis enables you to more easily perform storage tasks such as:
- Manage snapshots and thin provisioning
- Automatically grow file system sizes as needed
- Maintain file systems
To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service.
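As a sketch, the following commands create a pool on a spare block device, create a file system in it, and mount it; the device and the mypool and myfs names are placeholders:
# stratis pool create mypool /dev/sdb
# stratis filesystem create mypool myfs
# mount /dev/stratis/mypool/myfs /mnt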
Stratis is provided as a Technology Preview.
For more information, see the Stratis documentation: Setting up Stratis file systems.
12.2.3. LUKS2 is now the default format for encrypting volumes
In RHEL 8, the LUKS version 2 (LUKS2) format replaces the legacy LUKS (LUKS1) format. The dm-crypt subsystem and the cryptsetup tool now use LUKS2 as the default format for encrypted volumes. LUKS2 provides encrypted volumes with metadata redundancy and auto-recovery in case of a partial metadata corruption event.
Due to its internal flexible layout, LUKS2 also enables future features. It supports auto-unlocking through the generic kernel-keyring token built into libcryptsetup, which allows users to unlock LUKS2 volumes using a passphrase stored in the kernel-keyring retention service.
Other notable enhancements include:
- The protected key setup using the wrapped key cipher scheme.
- Easier integration with Policy-Based Decryption (Clevis).
- Up to 32 key slots - LUKS1 provides only 8 key slots.
For more details, see the cryptsetup(8) and cryptsetup-reencrypt(8) man pages.
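For example, formatting a device with cryptsetup now creates a LUKS2 header by default, and an existing LUKS1 device can be converted in place; the device name is a placeholder, and you should back up the LUKS header before converting:
# cryptsetup luksFormat /dev/sdb1
# cryptsetup convert --type luks2 /dev/sdb1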
12.2.4. Multiqueue scheduling on block devices
Block devices now use multiqueue scheduling in Red Hat Enterprise Linux 8. This enables the block layer performance to scale well with fast solid-state drives (SSDs) and multi-core systems.
The SCSI Multiqueue (scsi-mq) driver is now enabled by default, and the kernel boots with the scsi_mod.use_blk_mq=Y option. This change is consistent with the upstream Linux kernel.
Device Mapper Multipath (DM Multipath) requires the scsi-mq driver to be active.
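As a quick check, you can confirm that SCSI multiqueue is active by reading the module parameter, which should report Y on a default RHEL 8 boot:
# cat /sys/module/scsi_mod/parameters/use_blk_mq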
12.2.5. VDO now supports all architectures
Virtual Data Optimizer (VDO) is now available on all of the architectures supported by RHEL 8.
12.2.6. VDO no longer supports read cache
The read cache functionality has been removed from Virtual Data Optimizer (VDO). The read cache is always disabled on VDO volumes, and you can no longer enable it using the --readCache option of the vdo utility.
Red Hat might reintroduce the VDO read cache in a later Red Hat Enterprise Linux release, using a different implementation.
12.2.7. The dmraid package has been removed
The dmraid package has been removed from Red Hat Enterprise Linux 8. Users requiring support for combined hardware and software RAID host bus adapters (HBA) should use the mdadm utility, which supports native MD software RAID, the SNIA RAID Common Disk Data Format (DDF), and the Intel® Matrix Storage Manager (IMSM) formats.
12.2.8. Software FCoE and Fibre Channel no longer support the target mode
- Software FCoE: NIC Software FCoE target functionality is removed in Red Hat Enterprise Linux 8.0.
- Fibre Channel: Target mode is disabled for the qla2xxx QLogic Fibre Channel driver in Red Hat Enterprise Linux 8.0.
For more information, see FCoE software removal.
12.2.9. The detection of marginal paths in DM Multipath has been improved
The multipathd service now supports improved detection of marginal paths. This helps multipath devices avoid paths that are likely to fail repeatedly, and improves performance. Marginal paths are paths with persistent but intermittent I/O errors.
The following options in the /etc/multipath.conf file control the behavior of marginal paths:
- marginal_path_double_failed_time
- marginal_path_err_sample_time
- marginal_path_err_rate_threshold
- marginal_path_err_recheck_gap_time
DM Multipath disables a path and tests it with repeated I/O for the configured sample time if:
- the listed multipath.conf options are set,
- a path fails twice in the configured time, and
- other paths are available.
If the path has more than the configured error rate during this testing, DM Multipath ignores it for the configured gap time, and then retests it to see whether it is working well enough to be reinstated.
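A sketch of how these options might be set in the defaults section of /etc/multipath.conf; the values are illustrative, not tuned recommendations:
defaults {
    marginal_path_double_failed_time 5
    marginal_path_err_sample_time 120
    marginal_path_err_rate_threshold 10
    marginal_path_err_recheck_gap_time 600
}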
For more information, see the multipath.conf man page on your system.
12.2.10. New overrides section of the DM Multipath configuration file
The /etc/multipath.conf file now includes an overrides section that allows you to set a configuration value for all of your devices. These attributes are used by DM Multipath for all devices unless they are overwritten by the attributes specified in the multipaths section of the /etc/multipath.conf file for paths that contain the device. This functionality replaces the all_devs parameter of the devices section of the configuration file, which is no longer supported.
12.2.11. NVMe/FC is fully supported on Broadcom Emulex and Marvell Qlogic Fibre Channel adapters
The NVMe over Fibre Channel (NVMe/FC) transport type is now fully supported in Initiator mode when used with Broadcom Emulex and Marvell Qlogic Fibre Channel 32Gbit adapters that feature NVMe support.
NVMe over Fibre Channel is an additional fabric transport type for the Nonvolatile Memory Express (NVMe) protocol, in addition to the Remote Direct Memory Access (RDMA) protocol that was previously introduced in Red Hat Enterprise Linux.
Enabling NVMe/FC:
- To enable NVMe/FC in the lpfc driver, edit the /etc/modprobe.d/lpfc.conf file and add the following option:
  lpfc_enable_fc4_type=3
- To enable NVMe/FC in the qla2xxx driver, edit the /etc/modprobe.d/qla2xxx.conf file and add the following option:
  qla2xxx.ql2xnvmeenable=1
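For reference, a sketch of the resulting modprobe configuration lines; confirm the exact parameter syntax against your driver documentation:
# contents of /etc/modprobe.d/lpfc.conf
options lpfc lpfc_enable_fc4_type=3
# contents of /etc/modprobe.d/qla2xxx.conf
options qla2xxx ql2xnvmeenable=1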
Additional restrictions:
- NVMe clustering is not supported with NVMe/FC.
- kdump is not supported with NVMe/FC.
- Booting from Storage Area Network (SAN) NVMe/FC is not supported.
12.2.12. Support for Data Integrity Field/Data Integrity Extension (DIF/DIX)
DIF/DIX is an addition to the SCSI Standard. It remains in Technology Preview for all HBAs and storage arrays, except for those specifically listed as supported.
DIF/DIX increases the size of the commonly used 512 byte disk block from 512 to 520 bytes, adding the Data Integrity Field (DIF). The DIF stores a checksum value for the data block that is calculated by the Host Bus Adapter (HBA) when a write occurs. The storage device then confirms the checksum on receipt, and stores both the data and the checksum. Conversely, when a read occurs, the checksum can be verified by the storage device, and by the receiving HBA.
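As a quick, read-only check, you can see whether the kernel has registered an integrity profile for a block device through sysfs; the device name is a placeholder, and a device without DIF support typically reports none or an empty value:
# cat /sys/block/sda/integrity/format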
12.2.13. libstoragemgmt-netapp-plugin has been removed
The libstoragemgmt-netapp-plugin package used by the libStorageMgmt library has been removed. It is no longer supported because:
- The package requires the NetApp 7-mode API, which is being phased out by NetApp.
- RHEL 8 has removed default support for the TLSv1.0 protocol with the TLS_RSA_WITH_3DES_EDE_CBC_SHA cipher; as a result, using this plug-in with TLS does not work.
12.2.14. Removal of Cylinder-Head-Sector addressing from sfdisk and cfdisk
Cylinder-Head-Sector (CHS) addressing is no longer useful for modern storage devices. It has been removed as an option from the sfdisk and cfdisk commands. Since RHEL 8, you cannot use the following options:
- -C, --cylinders number
- -H, --heads number
- -S, --sectors number
For more information, see the sfdisk(8) and cfdisk(8) man pages.
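Partitions are now specified in sectors instead. For example, sfdisk accepts a sector-based partition description on standard input; the device name, start, and size here are placeholders:
# echo 'start=2048, size=2097152, type=83' | sfdisk /dev/sdb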
12.3. LVM
12.3.2. Removal of lvmetad daemon
LVM no longer uses the lvmetad daemon for caching metadata, and will always read metadata from disk. LVM disk reading has been reduced, which reduces the benefits of caching.
Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the lvm.conf configuration file. The correct way to disable autoactivation continues to be setting auto_activation_volume_list in the lvm.conf file.
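For example, to restrict autoactivation to specific volume groups or logical volumes, the activation section of lvm.conf can list them explicitly; the names are placeholders:
activation {
    auto_activation_volume_list = [ "vg_data", "vg_sys/lv_root" ]
}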
12.3.3. LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format
LVM can no longer manage devices formatted with the GFS pool volume manager or the lvm1 metadata format. This may affect you if you created your logical volume before Red Hat Enterprise Linux 4 was introduced. Volume groups using the lvm1 format should be converted to the lvm2 format using the vgconvert command.
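For example, a sketch of converting a volume group named my_vg (a placeholder) to the lvm2 metadata format, typically run on the older release before upgrading:
# vgconvert -M2 my_vg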
12.3.4. LVM libraries and LVM Python bindings have been removed
The lvm2app library and LVM Python bindings, which were provided by the lvm2-python-libs package, have been removed. Red Hat recommends the following solutions instead:
- The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3.
- The LVM command-line utilities with JSON formatting; this formatting has been available since the lvm2 package version 2.02.158.
- The libblockdev library, included in AppStream, for C/C++.
You must port any applications using the removed libraries and bindings to the D-Bus API before upgrading to Red Hat Enterprise Linux 8.
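For example, as a sketch of the JSON-formatting alternative, the command-line utilities can emit machine-readable reports directly:
# lvs --reportformat json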
12.3.5. The ability to mirror the log for LVM mirrors has been removed
The mirrored log feature of mirrored LVM volumes has been removed. Red Hat Enterprise Linux (RHEL) 8 no longer supports creating or activating LVM volumes with a mirrored mirror log.
The recommended replacements are:
- RAID1 LVM volumes. The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure.
- Disk mirror log. To convert a mirrored mirror log to a disk mirror log, use the following command:
  lvconvert --mirrorlog disk my_vg/my_lv
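For example, a sketch of creating a RAID1 logical volume as a replacement; the size and the my_vg and my_lv names are placeholders:
# lvcreate --type raid1 -m 1 -L 10G -n my_lv my_vg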