1 .\" $OpenBSD: softraid.4,v 1.55 2024年04月25日 07:21:43 stsp Exp $ 2 .\" 3 .\" Copyright (c) 2007 Todd T. Fries <todd@OpenBSD.org> 4 .\" Copyright (c) 2007 Marco Peereboom <marco@OpenBSD.org> 5 .\" 6 .\" Permission to use, copy, modify, and distribute this software for any 7 .\" purpose with or without fee is hereby granted, provided that the above 8 .\" copyright notice and this permission notice appear in all copies. 9 .\" 10 .\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES 11 .\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF 12 .\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR 13 .\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES 14 .\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN 15 .\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF 16 .\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 17 .\" 18 .Dd $Mdocdate: April 25 2024 $ 19 .Dt SOFTRAID 4 20 .Os 21 .Sh NAME 22 .Nm softraid 23 .Nd software RAID 24 .Sh SYNOPSIS 25 .Cd "softraid0 at root" 26 .Sh DESCRIPTION 27The 28 .Nm 29device emulates a Host Bus Adapter (HBA) that provides RAID and other I/O 30related services. 31The 32 .Nm 33device provides a scaffold to implement more complex I/O transformation 34disciplines. 35For example, one can tie chunks together into a mirroring discipline. 36There really is no limit on what type of discipline one can write as long 37as it fits the SCSI model. 38 .Pp 39 .Nm 40supports a number of 41 .Em disciplines . 42A discipline is a collection of functions 43that provides specific I/O functionality. 44This includes I/O path, bring-up, failure recovery, and statistical 45information gathering. 46Essentially a discipline is a lower 47level driver that provides the I/O transformation for the softraid 48device. 49 .Pp 50A 51 .Em volume 52is a virtual disk device that is made up of a collection of chunks. 53 .Pp 54A 55 .Em chunk 56is a partition or storage area of fstype 57 .Dq RAID . 58 .Xr disklabel 8 59is used to alter the fstype. 60 .Pp 61Currently 62 .Nm 63supports the following disciplines: 64 .Bl -ohang -offset indent 65 .It RAID 0 66A 67 .Em striping 68discipline. 69It segments data over a number of chunks to increase performance. 70RAID 0 does not provide for data loss (redundancy). 71 .It RAID 1 72A 73 .Em mirroring 74discipline. 75It copies data across more than one chunk to provide for data loss. 76Read performance is increased, 77though at the cost of write speed. 78Unlike traditional RAID 1, 79 .Nm 80supports the use of more than two chunks in a RAID 1 setup. 81 .It RAID 5 82A striping discipline with 83 .Em floating parity 84across all chunks. 85It stripes data across chunks and provides parity to prevent data loss of 86a single chunk failure. 87Read performance is increased; 88write performance does incur additional overhead. 89 .It CRYPTO 90An 91 .Em encrypting 92discipline. 93It encrypts data on a single chunk to provide for data confidentiality. 94CRYPTO does not provide redundancy. 95 .It CONCAT 96A 97 .Em concatenating 98discipline. 99It writes data to each chunk in sequence to provide increased capacity. 100CONCAT does not provide redundancy. 101 .It RAID 1C 102A 103 .Em mirroring 104and 105 .Em encrypting 106discipline. 107It encrypts data to provide for data confidentiality and copies the 108encrypted data across more than one chunk to prevent data loss in 109case of a chunk failure. 
.Pp
.Xr installboot 8
may be used to install
.Xr boot 8
in the boot storage area of the
.Nm
volume.
All chunks in the volume will then be bootable.
Boot support is currently limited to the CRYPTO and RAID 1 disciplines
on the amd64, arm64, i386, riscv64 and sparc64 platforms.
amd64, arm64, riscv64 and sparc64 also have boot support for the
RAID 1C discipline.
On sparc64, bootable chunks must be RAID partitions using the letter
.Sq a .
At the
.Xr boot 8
prompt, softraid volumes have names beginning with
.Sq sr
and can be booted from like a normal disk device.
CRYPTO and RAID 1C volumes require a decryption passphrase or keydisk
at boot time.
.Pp
The status of
.Nm
volumes is reported via
.Xr sysctl 8
such that it can be monitored by
.Xr sensorsd 8 .
Each volume has one fourth-level node named
.Va hw.sensors.softraid0.drive Ns Ar N ,
where
.Ar N
is a small integer indexing the volume.
The format of the volume status is:
.Pp
.D1 Ar value Po Ar device Pc , Ar status
.Pp
The
.Ar device
identifies the
.Nm
volume.
The following combinations of
.Ar value
and
.Ar status
can occur:
.Bl -tag -width Ds -offset indent
.It Sy online , OK
The volume is operating normally.
.It Sy degraded , WARNING
The volume as a whole is operational, but not all of its chunks are.
In many cases, using
.Xr bioctl 8
.Fl R
to rebuild the failed chunk is advisable.
.It Sy rebuilding , WARNING
A rebuild operation was recently started and has not yet completed.
.It Sy failed , CRITICAL
The device is currently unable to process I/O.
.It Sy unknown , UNKNOWN
The status is unknown to the system.
.El
.Sh EXAMPLES
The following example creates a 3 chunk RAID 1 volume from scratch.
.Pp
Initialize the partition tables of all disks:
.Bd -literal -offset indent
# fdisk -iy wd1
# fdisk -iy wd2
# fdisk -iy wd3
.Ed
.Pp
Now create RAID partitions on all disks:
.Bd -literal -offset indent
# echo 'RAID *' | disklabel -wAT- wd1
# echo 'RAID *' | disklabel -wAT- wd2
# echo 'RAID *' | disklabel -wAT- wd3
.Ed
.Pp
Assemble the RAID volume:
.Bd -literal -offset indent
# bioctl -c 1 -l /dev/wd1a,/dev/wd2a,/dev/wd3a softraid0
.Ed
.Pp
The console will show what device was added to the system:
.Bd -literal -offset indent
scsibus0 at softraid0: 1 targets
sd0 at scsibus0 targ 0 lun 0: <OPENBSD, SR RAID 1, 001> SCSI2
sd0: 1MB, 0 cyl, 255 head, 63 sec, 512 bytes/sec, 3714 sec total
.Ed
.Pp
It is good practice to wipe the front of the disk before using it:
.Bd -literal -offset indent
# dd if=/dev/zero of=/dev/rsd0c bs=1m count=1
.Ed
.Pp
Initialize the partition table and create a filesystem on the
new RAID volume:
.Bd -literal -offset indent
# fdisk -iy sd0
# echo '/ *' | disklabel -wAT- sd0
# newfs /dev/rsd0a
.Ed
.Pp
The RAID volume is now ready to be used as a normal disk device.
See
.Xr bioctl 8
for more information on configuration of RAID sets.
.Pp
Install
.Xr boot 8
on the RAID volume, writing boot loaders to all 3 chunks:
.Bd -literal -offset indent
# installboot sd0
.Ed
.Pp
At the
.Xr boot 8
prompt, load the /bsd kernel from the RAID volume:
.Bd -literal -offset indent
boot> boot sr0a:/bsd
.Ed
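.Pp
The health of the volume can be checked at any time via
.Xr sysctl 8 .
Assuming the volume created above is the first
.Nm
volume, the sensor and its output would look similar to the following:
.Bd -literal -offset indent
# sysctl hw.sensors.softraid0.drive0
hw.sensors.softraid0.drive0=online (sd0), OK
.Ed
.Pp
Should a chunk fail and the volume become degraded, it can be rebuilt
onto a replacement RAID partition; for example, after the disk at wd3
has been replaced and relabeled:
.Bd -literal -offset indent
# bioctl -R /dev/wd3a sd0
.Ed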
.Sh SEE ALSO
.Xr bio 4 ,
.Xr bioctl 8 ,
.Xr boot_sparc64 8 ,
.Xr disklabel 8 ,
.Xr fdisk 8 ,
.Xr installboot 8 ,
.Xr newfs 8
.Sh HISTORY
The
.Nm
driver first appeared in
.Ox 4.2 .
.Sh AUTHORS
.An Marco Peereboom .
.Sh CAVEATS
The driver relies on the underlying hardware to properly fail chunks.
.Pp
The RAID 1 discipline does not initialize the mirror upon creation.
This is by design: sectors are written before they are ever read,
so there is no point in wasting time syncing random data.
.Pp
The RAID 5 discipline does not initialize parity upon creation;
parity is only updated upon write.
.Pp
Stacking disciplines (CRYPTO on top of RAID 1, for example) is not
supported at this time.
.Pp
Currently there is no automated mechanism to recover from failed disks.
.Pp
Certain RAID levels can protect against some data loss
due to component failure.
RAID is
.Em not
a substitute for good backup practices.