It is possible to implement RAID (Redundant Array of Independent Disks) on Red Hat Enterprise Linux 9.2 using either a hardware or a software approach. Here's an overview of both methods:
1. Hardware RAID
Hardware RAID is managed by a dedicated RAID controller card or by the motherboard if it has RAID functionality. The RAID configuration is abstracted from the operating system.
Steps for Hardware RAID:
- Configure the RAID Controller:
  - Access the RAID controller firmware during the boot process (usually via a key combination like Ctrl+R, Ctrl+H, or a similar key, depending on the RAID controller vendor).
  - Create a RAID array by selecting the desired disks and choosing a RAID level (e.g., RAID 0, RAID 1, RAID 5).
- Install Red Hat Enterprise Linux 9.2:
  - During the installation, the RAID array will appear as a single disk to the installer.
  - Partition and format the RAID disk as needed during the installation process.
- Install RAID Controller Drivers (if needed):
  - Some RAID controllers require additional drivers for Red Hat Enterprise Linux. Obtain the drivers from the controller vendor and install them as per the documentation.
2. Software RAID:
Software RAID is managed by the operating system using utilities like `mdadm`.
Steps for Software RAID:
- Install Required Packages: ensure `mdadm` is installed with `sudo dnf install mdadm`.
- Create RAID Devices: build the array from the member disks with `mdadm --create` (see the sketch after this list).
- Format and Mount the RAID Array: create a filesystem on the new `/dev/mdX` device and mount it.
- Persist RAID Configuration: record the array in `/etc/mdadm.conf` so it is assembled automatically at boot.
- Enable Auto-Mount: add an entry to `/etc/fstab` for the RAID device.
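The steps above can be sketched with `mdadm` and standard filesystem tools. In the sketch below, the member disks `/dev/sdb` and `/dev/sdc`, the RAID 1 level, the ext4 filesystem, and the mount point `/mnt/raid` are illustrative assumptions, not fixed requirements:

```bash
# Create a RAID 1 array from two disks (device names are examples).
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Format the array and mount it.
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid

# Persist the RAID configuration so the array is assembled at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf

# Optionally rebuild the initramfs so the array is available early in boot.
sudo dracut --force

# Enable auto-mount: add an /etc/fstab entry (get the UUID from `blkid /dev/md0`).
echo 'UUID=<uuid-of-md0>  /mnt/raid  ext4  defaults  0 0' | sudo tee -a /etc/fstab
```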
Comparison: Hardware vs. Software RAID:
| Aspect | Hardware RAID | Software RAID |
|---------------------|-------------------------------------|-------------------------------|
| Performance | Offloaded to RAID controller | Uses CPU for RAID operations |
| Flexibility | Limited to controller features | Highly configurable |
| Cost | Additional cost for RAID hardware | No additional cost |
| Ease of Use | Managed in hardware BIOS | Requires Linux expertise |
If your hardware supports RAID, using hardware RAID might be preferable for performance. However, software RAID offers flexibility and is an excellent choice for environments without a RAID controller.
The LVM system organizes hard disks into Volume Groups (VGs). Essentially, physical hard disk partitions (or possibly RAID arrays) are divided into equal-sized chunks known as Physical Extents (PEs). As there are several other concepts associated with the LVM system, let's start with some basic definitions:
- Physical Volume (PV) is the standard partition that you add to the LVM mix. Normally, a physical volume is a standard primary or logical partition. It can also be a RAID array.
- Physical Extent (PE) is a chunk of disk space. Every PV is divided into a number of equal-sized PEs. Every PE in a volume group is the same size; different volume groups can have different PE sizes.
- Logical Extent (LE) is also a chunk of disk space. Every LE is mapped to a specific PE.
- Logical Volume (LV) is composed of a group of LEs. You can mount a filesystem such as /home or /var on an LV.
- Volume Group (VG) pools one or more PVs into a single allocation space from which LVs are created. It is the organizational unit for LVM; most of the commands that you'll use apply to a specific VG.
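As a concrete example of how these pieces fit together, the sketch below builds a PV, a VG, and an LV from scratch. The partition `/dev/sdd1`, the names `vg0` and `lv1`, the 4 MiB PE size, and the mount point `/mnt/data` are illustrative assumptions:

```bash
# Initialize a partition (or RAID array) as a Physical Volume.
sudo pvcreate /dev/sdd1

# Create a Volume Group from the PV; -s sets the Physical Extent size (4 MiB here).
sudo vgcreate -s 4M vg0 /dev/sdd1

# Carve a 1 GiB Logical Volume out of the VG.
sudo lvcreate -L 1G -n lv1 vg0

# Put a filesystem on the LV and mount it.
sudo mkfs.ext4 /dev/vg0/lv1
sudo mkdir -p /mnt/data
sudo mount /dev/vg0/lv1 /mnt/data
```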
To extend an existing LV (for example, `/dev/vg0/lv1`) and its filesystem:
- Verify the size of the Logical Volume: `lvdisplay /dev/vg0/lv1`
- Verify the size of the mounted directory: `df -h` or `df -h <mounted directory>`
- Extend the LV: `lvextend -L +400M /dev/vg0/lv1`
- Run `resize2fs /dev/vg0/lv1` to bring the extended size online.
- Verify again using the `lvdisplay` and `df -h` commands.
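Put together, and assuming the same illustrative `/dev/vg0/lv1` volume with an ext4 filesystem mounted at `/mnt/data`, the resize sequence looks like this:

```bash
# Check the current LV size and filesystem usage.
sudo lvdisplay /dev/vg0/lv1
df -h /mnt/data

# Grow the LV by 400 MiB.
sudo lvextend -L +400M /dev/vg0/lv1

# Grow the ext4 filesystem to fill the extended LV (works while mounted).
sudo resize2fs /dev/vg0/lv1

# Confirm the new size.
sudo lvdisplay /dev/vg0/lv1
df -h /mnt/data
```

Note that `lvextend -r` (short for `--resizefs`) can grow the filesystem in the same step, which avoids running `resize2fs` separately.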
The next lesson concludes this module.