
1 .\" $NetBSD: raidctl.8,v 1.82 2023年09月25日 21:59:38 oster Exp $
2 .\"
3 .\" Copyright (c) 1998, 2002 The NetBSD Foundation, Inc.
4 .\" All rights reserved.
5 .\"
6 .\" This code is derived from software contributed to The NetBSD Foundation
7 .\" by Greg Oster
8 .\"
9 .\" Redistribution and use in source and binary forms, with or without
10 .\" modification, are permitted provided that the following conditions
11 .\" are met:
12 .\" 1. Redistributions of source code must retain the above copyright
13 .\" notice, this list of conditions and the following disclaimer.
14 .\" 2. Redistributions in binary form must reproduce the above copyright
15 .\" notice, this list of conditions and the following disclaimer in the
16 .\" documentation and/or other materials provided with the distribution.
17 .\"
18 .\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS
19 .\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED
20 .\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
21 .\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS
22 .\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
23 .\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
24 .\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
25 .\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
26 .\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
27 .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
28 .\" POSSIBILITY OF SUCH DAMAGE.
29 .\"
30 .\"
31 .\" Copyright (c) 1995 Carnegie-Mellon University.
32 .\" All rights reserved.
33 .\"
34 .\" Author: Mark Holland
35 .\"
36 .\" Permission to use, copy, modify and distribute this software and
37 .\" its documentation is hereby granted, provided that both the copyright
38 .\" notice and this permission notice appear in all copies of the
39 .\" software, derivative works or modified versions, and any portions
40 .\" thereof, and that both notices appear in supporting documentation.
41 .\"
42 .\" CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
43 .\" CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
44 .\" FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
45 .\"
46 .\" Carnegie Mellon requests users of this software to return to
47 .\"
48 .\" Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU
49 .\" School of Computer Science
50 .\" Carnegie Mellon University
51 .\" Pittsburgh PA 15213-3890
52 .\"
53 .\" any improvements or extensions that they make and grant Carnegie the
54 .\" rights to redistribute these changes.
55 .\"
56 .Dd September 25, 2023
57 .Dt RAIDCTL 8
58 .Os
59 .Sh NAME
60 .Nm raidctl
61 .Nd configuration utility for the RAIDframe disk driver
62 .Sh SYNOPSIS
63 .Nm
64 .Ar dev
65 .Ar command
66 .Op Ar arg Op ...
67 .Nm
68 .Op Fl v
69 .Fl A Op yes | no | forceroot | softroot
70 .Ar dev
71 .Nm
72 .Op Fl v
73 .Fl a Ar component Ar dev
74 .Nm
75 .Op Fl v
76 .Fl C Ar config_file Ar dev
77 .Nm
78 .Op Fl v
79 .Fl c Ar config_file Ar dev
80 .Nm
81 .Op Fl v
82 .Fl F Ar component Ar dev
83 .Nm
84 .Op Fl v
85 .Fl f Ar component Ar dev
86 .Nm
87 .Op Fl v
88 .Fl G Ar dev
89 .Nm
90 .Op Fl v
91 .Fl g Ar component Ar dev
92 .Nm
93 .Op Fl v
94 .Fl I Ar serial_number Ar dev
95 .Nm
96 .Op Fl v
97 .Fl i Ar dev
98 .Nm
99 .Op Fl v
100 .Fl L Ar dev
101 .Nm
102 .Op Fl v
103 .Fl M
104 .Oo yes | no | set
105 .Ar params
106 .Oc
107 .Ar dev
108 .Nm
109 .Op Fl v
110 .Fl m Ar dev
111 .Nm
112 .Op Fl v
113 .Fl P Ar dev
114 .Nm
115 .Op Fl v
116 .Fl p Ar dev
117 .Nm
118 .Op Fl v
119 .Fl R Ar component Ar dev
120 .Nm
121 .Op Fl v
122 .Fl r Ar component Ar dev
123 .Nm
124 .Op Fl v
125 .Fl S Ar dev
126 .Nm
127 .Op Fl v
128 .Fl s Ar dev
129 .Nm
130 .Op Fl v
131 .Fl t Ar config_file
132 .Nm
133 .Op Fl v
134 .Fl U Ar unit Ar dev
135 .Nm
136 .Op Fl v
137 .Fl u Ar dev
138 .Sh DESCRIPTION
139 .Nm
140is the user-land control program for
141 .Xr raid 4 ,
142the RAIDframe disk device.
143 .Nm
144is primarily used to dynamically configure and unconfigure RAIDframe disk
145devices.
146For more information about the RAIDframe disk device, see
147 .Xr raid 4 .
148 .Pp
149This document assumes the reader has at least rudimentary knowledge of
150RAID and RAID concepts.
151 .Pp
152The simplified command-line options for
153 .Nm
154are as follows:
155 .Bl -tag -width indent
156 .It Ic create Ar level Ar component1 Ar component2 Ar ...
157where
158 .Ar level
159specifies the RAID level and is one of
.Ar 0 ,
.Ar 1
(or
.Ar mirror ) ,
or
.Ar 5 ,
and each
.Ar componentN
specifies a device to be configured into the RAID set.
170 .El
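.Pp
For example, assuming two disks with
.Dv RAID
partitions at
.Pa /dev/wd0e
and
.Pa /dev/wd1e
(illustrative device names), a mirrored set could be created with:
.Bd -literal -offset indent
raidctl raid0 create mirror /dev/wd0e /dev/wd1e
.Ed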
171 .Pp
172The advanced command-line options for
173 .Nm
174are as follows:
175 .Bl -tag -width indent
176 .It Fl A Ic yes Ar dev
177Make the RAID set auto-configurable.
178The RAID set will be automatically configured at boot
179 .Ar before
180the root file system is mounted.
181Note that all components of the set must be of type
182 .Dv RAID
183in the disklabel.
184 .It Fl A Ic no Ar dev
185Turn off auto-configuration for the RAID set.
186 .It Fl A Ic forceroot Ar dev
187Make the RAID set auto-configurable, and also mark the set as being
188eligible to be the root partition.
189A RAID set configured this way will
190 .Ar override
191the use of the boot disk as the root device.
192All components of the set must be of type
193 .Dv RAID
194in the disklabel.
195Note that only certain architectures
196(currently arc, alpha, amd64, bebox, cobalt, emips, evbarm, i386, landisk,
197ofppc, pmax, riscv, sandpoint, sgimips, sparc, sparc64, and vax)
198support booting a kernel directly from a RAID set.
199Please note that
200 .Ic forceroot
201mode was referred to as
202 .Ic root
203mode on earlier versions of
204 .Nx .
205For compatibility reasons,
206 .Ic root
207can be used as an alias for
208 .Ic forceroot .
209 .It Fl A Ic softroot Ar dev
210Like
211 .Ic forceroot ,
212but only change the root device if the boot device is part of the RAID set.
213 .It Fl a Ar component Ar dev
214Add
215 .Ar component
216as a hot spare for the device
217 .Ar dev .
218Component labels (which identify the location of a given
219component within a particular RAID set) are automatically added to the
220hot spare after it has been used and are not required for
221 .Ar component
222before it is used.
223 .It Fl C Ar config_file Ar dev
224As for
225 .Fl c ,
226but forces the configuration to take place.
227Fatal errors due to uninitialized components are ignored.
228This is required the first time a RAID set is configured.
229 .It Fl c Ar config_file Ar dev
230Configure the RAIDframe device
231 .Ar dev
232according to the configuration given in
233 .Ar config_file .
234A description of the contents of
235 .Ar config_file
236is given later.
237 .It Fl F Ar component Ar dev
Fails the specified
.Ar component
of the device, and immediately begins a reconstruction of the failed
disk onto an available hot spare.
This is one of the mechanisms used to start
the reconstruction process if a component has a hardware failure.
244 .It Fl f Ar component Ar dev
245This marks the specified
246 .Ar component
247as having failed, but does not initiate a reconstruction of that component.
248 .It Fl G Ar dev
249Generate the configuration of the RAIDframe device in a format suitable for
250use with the
251 .Fl c
252or
253 .Fl C
254options.
255 .It Fl g Ar component Ar dev
256Get the component label for the specified component.
257 .It Fl I Ar serial_number Ar dev
258Initialize the component labels on each component of the device.
259 .Ar serial_number
260is used as one of the keys in determining whether a
261particular set of components belong to the same RAID set.
262While not strictly enforced, different serial numbers should be used for
263different RAID sets.
264This step
265 .Em MUST
266be performed when a new RAID set is created.
267 .It Fl i Ar dev
268Initialize the RAID device.
269In particular, (re-)write the parity on the selected device.
270This
271 .Em MUST
272be done for
273 .Em all
274RAID sets before the RAID device is labeled and before
275file systems are created on the RAID device.
276 .It Fl L Ar dev
Rescan all devices on the system, looking for RAID sets that can be
auto-configured.
The RAID device provided here has to be a valid device, but does not
need to be configured.
For example,
.Bd -literal -offset indent
raidctl -L raid0
.Ed
.Pp
is all that is needed to perform a rescan.
285 .It Fl M Ic yes Ar dev
286 .\"XXX should there be a section with more info on the parity map feature?
287Enable the use of a parity map on the RAID set; this is the default,
288and greatly reduces the time taken to check parity after unclean
289shutdowns at the cost of some very slight overhead during normal
290operation.
291Changes to this setting will take effect the next time the set is
292configured.
293Note that RAID-0 sets, having no parity, will not use a parity map in
294any case.
295 .It Fl M Ic no Ar dev
296Disable the use of a parity map on the RAID set; doing this is not
297recommended.
298This will take effect the next time the set is configured.
299 .It Fl M Ic set Ar cooldown Ar tickms Ar regions Ar dev
300Alter the parameters of the parity map; parameters to leave unchanged
301can be given as 0, and trailing zeroes may be omitted.
302 .\"XXX should this explanation be deferred to another section as well?
303The RAID set is divided into
304 .Ar regions
305regions; each region is marked dirty for at most
306 .Ar cooldown
307intervals of
308 .Ar tickms
309milliseconds each after a write to it, and at least
310 .Ar cooldown
311\- 1 such intervals.
312Changes to
313 .Ar regions
take effect the next time the set is configured, while changes to the other
315parameters are applied immediately.
316The default parameters are expected to be reasonable for most workloads.
317 .It Fl m Ar dev
318Display status information about the parity map on the RAID set, if any.
319If used with
320 .Fl v
321then the current contents of the parity map will be output (in
322hexadecimal format) as well.
323 .It Fl P Ar dev
324Check the status of the parity on the RAID set, and initialize
325(re-write) the parity if the parity is not known to be up-to-date.
326This is normally used after a system crash (and before a
327 .Xr fsck 8 )
328to ensure the integrity of the parity.
329 .It Fl p Ar dev
330Check the status of the parity on the RAID set.
331Displays a status message,
332and returns successfully if the parity is up-to-date.
333 .It Fl R Ar component Ar dev
334Fails the specified
335 .Ar component ,
336if necessary, and immediately begins a reconstruction back to
337 .Ar component .
338This is useful for reconstructing back onto a component after
339it has been replaced following a failure.
340 .It Fl r Ar component Ar dev
341Remove the specified
342 .Ar component
from the RAID.
The component must be in the failed, spare, or spared state
in order to be removed.
345 .It Fl S Ar dev
346Check the status of parity re-writing and component reconstruction.
347The output indicates the amount of progress
348achieved in each of these areas.
349 .It Fl s Ar dev
350Display the status of the RAIDframe device for each of the components
351and spares.
352 .It Fl t Ar config_file
353Read and parse the
354 .Ar config_file ,
355reporting any errors, then exit.
No RAIDframe operations are performed.
357 .It Fl U Ar unit Ar dev
Set the
.Dv last_unit
field in all the components of the RAID set, so that the next time
the set is auto-configured it will be configured as that
.Ar unit .
363 .It Fl u Ar dev
364Unconfigure the RAIDframe device.
365This does not remove any component labels or change any configuration
366settings (e.g. auto-configuration settings) for the RAID set.
367 .It Fl v
368Be more verbose, and provide a progress indicator for operations such
369as reconstructions and parity re-writing.
370 .El
371 .Pp
372The device used by
373 .Nm
374is specified by
375 .Ar dev .
376 .Ar dev
377may be either the full name of the device, e.g.,
378 .Pa /dev/rraid0d ,
379for the i386 architecture, or
380 .Pa /dev/rraid0c
381for many others, or just simply
382 .Pa raid0
383(for
384 .Pa /dev/rraid0[cd] ) .
385It is recommended that the partitions used to represent the
386RAID device are not used for file systems.
387 .Ss Simple RAID configuration
388For simple RAID configurations using RAID levels 0 (simple striping),
3891 (mirroring), or 5 (striping with distributed parity)
390 .Nm
391supports command-line configuration of RAID setups without
the use of a configuration file.
For example,
393 .Bd -literal -offset indent
394raidctl raid0 create 0 /dev/wd0e /dev/wd1e /dev/wd2e
395 .Ed
396 .Pp
397will create a RAID level 0 set on the device named
398 .Pa raid0
399using the components
400 .Pa /dev/wd0e ,
401 .Pa /dev/wd1e ,
402and
403 .Pa /dev/wd2e .
404Similarly,
405 .Bd -literal -offset indent
406raidctl raid0 create mirror absent /dev/wd1e
407 .Ed
408 .Pp
409will create a RAID level 1 (mirror) set with an absent first component
410and
411 .Pa /dev/wd1e
as the second component.
In all cases the resulting RAID device will be marked as
auto-configurable, will have a serial number set (based on the
current time), and parity will be initialized (if the RAID level has
parity and sufficient components are present).
Reasonable performance values are automatically used by default for
other parameters normally specified in the configuration file.
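.Pp
A RAID level 5 set could be created the same way; for example (with
illustrative component names):
.Bd -literal -offset indent
raidctl raid0 create 5 /dev/wd0e /dev/wd1e /dev/wd2e /dev/wd3e
.Ed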
419 .Ss Configuration file
420The format of the configuration file is complex, and
421only an abbreviated treatment is given here.
422In the configuration files, a
423 .Sq #
424indicates the beginning of a comment.
425 .Pp
426There are 4 required sections of a configuration file, and 2
427optional sections.
428Each section begins with a
429 .Sq START ,
430followed by the section name,
431and the configuration parameters associated with that section.
432The first section is the
433 .Sq array
434section, and it specifies
435the number of columns, and spare disks in the RAID set.
436For example:
437 .Bd -literal -offset indent
438START array
4393 0
440 .Ed
441 .Pp
442indicates an array with 3 columns, and 0 spare disks.
443Old configurations specified a 3rd value in front of the
444number of columns and spare disks.
445This old value, if provided, must be specified as 1:
446 .Bd -literal -offset indent
447START array
4481 3 0
449 .Ed
450 .Pp
451The second section, the
452 .Sq disks
453section, specifies the actual components of the device.
454For example:
455 .Bd -literal -offset indent
456START disks
457/dev/sd0e
458/dev/sd1e
459/dev/sd2e
460 .Ed
461 .Pp
462specifies the three component disks to be used in the RAID device.
463Disk wedges may also be specified with the NAME=<wedge name> syntax.
464If any of the specified drives cannot be found when the RAID device is
465configured, then they will be marked as
466 .Sq failed ,
467and the system will operate in degraded mode.
468Note that it is
469 .Em imperative
470that the order of the components in the configuration file does not
471change between configurations of a RAID device.
472Changing the order of the components will result in data loss
473if the set is configured with the
474 .Fl C
475option.
476In normal circumstances, the RAID set will not configure if only
477 .Fl c
478is specified, and the components are out-of-order.
479 .Pp
480The next section, which is the
481 .Sq spare
482section, is optional, and, if present, specifies the devices to be used as
483 .Sq hot spares
484\(em devices which are on-line,
485but are not actively used by the RAID driver unless
one of the main components fails.
487A simple
488 .Sq spare
489section might be:
490 .Bd -literal -offset indent
491START spare
492/dev/sd3e
493 .Ed
494 .Pp
495for a configuration with a single spare component.
496If no spare drives are to be used in the configuration, then the
497 .Sq spare
498section may be omitted.
499 .Pp
500The next section is the
501 .Sq layout
502section.
503This section describes the general layout parameters for the RAID device,
504and provides such information as
505sectors per stripe unit,
506stripe units per parity unit,
507stripe units per reconstruction unit,
508and the parity configuration to use.
509This section might look like:
510 .Bd -literal -offset indent
511START layout
512# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
51332 1 1 5
514 .Ed
515 .Pp
516The sectors per stripe unit specifies, in blocks, the interleave
517factor; i.e., the number of contiguous sectors to be written to each
518component for a single stripe.
519Appropriate selection of this value (32 in this example)
520is the subject of much research in RAID architectures.
521The stripe units per parity unit and
522stripe units per reconstruction unit are normally each set to 1.
523While certain values above 1 are permitted, a discussion of valid
524values and the consequences of using anything other than 1 are outside
525the scope of this document.
526The last value in this section (5 in this example)
527indicates the parity configuration desired.
528Valid entries include:
529 .Bl -tag -width inde
530 .It 0
531RAID level 0.
532No parity, only simple striping.
533 .It 1
534RAID level 1.
535Mirroring.
536The parity is the mirror.
537 .It 4
538RAID level 4.
539Striping across components, with parity stored on the last component.
540 .It 5
541RAID level 5.
542Striping across components, parity distributed across all components.
543 .El
544 .Pp
545There are other valid entries here, including those for Even-Odd
546parity, RAID level 5 with rotated sparing, Chained declustering,
547and Interleaved declustering, but as of this writing the code for
548those parity operations has not been tested with
549 .Nx .
550 .Pp
551The next required section is the
552 .Sq queue
553section.
554This is most often specified as:
555 .Bd -literal -offset indent
556START queue
557fifo 100
558 .Ed
559 .Pp
560where the queuing method is specified as fifo (first-in, first-out),
561and the size of the per-component queue is limited to 100 requests.
562Other queuing methods may also be specified, but a discussion of them
563is beyond the scope of this document.
564 .Pp
565The final section, the
566 .Sq debug
567section, is optional.
568For more details on this the reader is referred to
569the RAIDframe documentation discussed in the
570 .Sx HISTORY
571section.
572 .Pp
Since
.Nx 10 ,
RAIDframe has been capable of auto-configuring components originally
configured on systems of the opposite endianness.
The current label endianness will be retained.
578 .Pp
579See
580 .Sx EXAMPLES
581for a more complete configuration file example.
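.Pp
A configuration file can be checked for syntax errors, without
performing any RAIDframe operations, by using the
.Fl t
option, for example:
.Bd -literal -offset indent
raidctl -t raid0.conf
.Ed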
582 .Sh FILES
583 .Bl -tag -width /dev/XXrXraidX -compact
584 .It Pa /dev/{,r}raid*
585 .Cm raid
586device special files.
587 .El
588 .Sh EXAMPLES
589The examples given in this section are for more complex
590setups than can be configured with the simplified command-line
configuration option described earlier.
592 .Pp
It is highly recommended that, before using the RAID driver for real
file systems, the system administrator(s) become quite familiar
595with the use of
596 .Nm ,
597and that they understand how the component reconstruction process works.
598The examples in this section will focus on configuring a
599number of different RAID sets of varying degrees of redundancy.
600By working through these examples, administrators should be able to
601develop a good feel for how to configure a RAID set, and how to
602initiate reconstruction of failed components.
603 .Pp
604In the following examples
605 .Sq raid0
606will be used to denote the RAID device.
607Depending on the architecture,
608 .Pa /dev/rraid0c
609or
610 .Pa /dev/rraid0d
611may be used in place of
612 .Pa raid0 .
613 .Ss Initialization and Configuration
614The initial step in configuring a RAID set is to identify the components
615that will be used in the RAID set.
616All components should be the same size.
617Each component should have a disklabel type of
618 .Dv FS_RAID ,
619and a typical disklabel entry for a RAID component might look like:
620 .Bd -literal -offset indent
621f: 1800000 200495 RAID # (Cyl. 405*- 4041*)
622 .Ed
623 .Pp
624While
625 .Dv FS_BSDFFS
626will also work as the component type, the type
627 .Dv FS_RAID
628is preferred for RAIDframe use, as it is required for features such as
629auto-configuration.
630As part of the initial configuration of each RAID set,
631each component will be given a
632 .Sq component label .
633A
634 .Sq component label
635contains important information about the component, including a
636user-specified serial number, the column of that component in
637the RAID set, the redundancy level of the RAID set, a
638 .Sq modification counter ,
639and whether the parity information (if any) on that
640component is known to be correct.
641Component labels are an integral part of the RAID set,
642since they are used to ensure that components
643are configured in the correct order, and used to keep track of other
644vital information about the RAID set.
645Component labels are also required for the auto-detection
646and auto-configuration of RAID sets at boot time.
647For a component label to be considered valid, that
648particular component label must be in agreement with the other
649component labels in the set.
650For example, the serial number,
651 .Sq modification counter ,
652and number of columns must all be in agreement.
653If any of these are different, then the component is
654not considered to be part of the set.
655See
656 .Xr raid 4
657for more information about component labels.
658 .Pp
659Once the components have been identified, and the disks have
660appropriate labels,
661 .Nm
662is then used to configure the
663 .Xr raid 4
664device.
665To configure the device, a configuration file which looks something like:
666 .Bd -literal -offset indent
667START array
668# numCol numSpare
6693 1
670
671START disks
672/dev/sd1e
673/dev/sd2e
674/dev/sd3e
675
676START spare
677/dev/sd4e
678
679START layout
680# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
68132 1 1 5
682
683START queue
684fifo 100
685 .Ed
686 .Pp
687is created in a file.
688The above configuration file specifies a RAID 5
689set consisting of the components
690 .Pa /dev/sd1e ,
691 .Pa /dev/sd2e ,
692and
693 .Pa /dev/sd3e ,
694with
695 .Pa /dev/sd4e
696available as a
697 .Sq hot spare
698in case one of the three main drives should fail.
699A RAID 0 set would be specified in a similar way:
700 .Bd -literal -offset indent
701START array
702# numCol numSpare
7034 0
704
705START disks
706/dev/sd10e
707/dev/sd11e
708/dev/sd12e
709/dev/sd13e
710
711START layout
712# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
71364 1 1 0
714
715START queue
716fifo 100
717 .Ed
718 .Pp
719In this case, devices
720 .Pa /dev/sd10e ,
721 .Pa /dev/sd11e ,
722 .Pa /dev/sd12e ,
723and
724 .Pa /dev/sd13e
725are the components that make up this RAID set.
726Note that there are no hot spares for a RAID 0 set,
727since there is no way to recover data if any of the components fail.
728 .Pp
729For a RAID 1 (mirror) set, the following configuration might be used:
730 .Bd -literal -offset indent
731START array
732# numCol numSpare
7332 0
734
735START disks
736/dev/sd20e
737/dev/sd21e
738
739START layout
740# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
741128 1 1 1
742
743START queue
744fifo 100
745 .Ed
746 .Pp
747In this case,
748 .Pa /dev/sd20e
749and
750 .Pa /dev/sd21e
751are the two components of the mirror set.
752While no hot spares have been specified in this
753configuration, they easily could be, just as they were specified in
754the RAID 5 case above.
755Note as well that RAID 1 sets are currently limited to only 2 components.
756At present, n-way mirroring is not possible.
757 .Pp
758The first time a RAID set is configured, the
759 .Fl C
760option must be used:
761 .Bd -literal -offset indent
762raidctl -C raid0.conf raid0
763 .Ed
764 .Pp
765where
766 .Pa raid0.conf
767is the name of the RAID configuration file.
768The
769 .Fl C
770forces the configuration to succeed, even if any of the component
771labels are incorrect.
772The
773 .Fl C
774option should not be used lightly in
775situations other than initial configurations, as if
776the system is refusing to configure a RAID set, there is probably a
777very good reason for it.
778After the initial configuration is done (and
779appropriate component labels are added with the
780 .Fl I
781option) then raid0 can be configured normally with:
782 .Bd -literal -offset indent
783raidctl -c raid0.conf raid0
784 .Ed
785 .Pp
786When the RAID set is configured for the first time, it is
787necessary to initialize the component labels, and to initialize the
788parity on the RAID set.
789Initializing the component labels is done with:
790 .Bd -literal -offset indent
791raidctl -I 112341 raid0
792 .Ed
793 .Pp
794where
795 .Sq 112341
796is a user-specified serial number for the RAID set.
797This initialization step is
798 .Em required
799for all RAID sets.
800As well, using different serial numbers between RAID sets is
801 .Em strongly encouraged ,
802as using the same serial number for all RAID sets will only serve to
803decrease the usefulness of the component label checking.
804 .Pp
805Initializing the RAID set is done via the
806 .Fl i
807option.
808This initialization
809 .Em MUST
810be done for
811 .Em all
812RAID sets, since among other things it verifies that the parity (if
813any) on the RAID set is correct.
814Since this initialization may be quite time-consuming, the
815 .Fl v
816option may be also used in conjunction with
817 .Fl i :
818 .Bd -literal -offset indent
819raidctl -iv raid0
820 .Ed
821 .Pp
822This will give more verbose output on the
823status of the initialization:
824 .Bd -literal -offset indent
825Initiating re-write of parity
826Parity Re-write status:
827 10% |**** | ETA: 06:03 /
828 .Ed
829 .Pp
830The output provides a
831 .Sq Percent Complete
832in both a numeric and graphical format, as well as an estimated time
833to completion of the operation.
834 .Pp
835Since it is the parity that provides the
836 .Sq redundancy
837part of RAID, it is critical that the parity is correct as much as possible.
838If the parity is not correct, then there is no
839guarantee that data will not be lost if a component fails.
840 .Pp
841Once the parity is known to be correct, it is then safe to perform
842 .Xr disklabel 8 ,
843 .Xr newfs 8 ,
844or
845 .Xr fsck 8
846on the device or its file systems, and then to mount the file systems
847for use.
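.Pp
For example, a typical sequence (the partition letter and label
details will vary; these commands mirror the
.Sx Summary
below) might be:
.Bd -literal -offset indent
disklabel raid0 > /tmp/label
vi /tmp/label
disklabel -R -r raid0 /tmp/label
newfs /dev/rraid0e
mount /dev/raid0e /mnt
.Ed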
848 .Pp
849Under certain circumstances (e.g., the additional component has not
850arrived, or data is being migrated off of a disk destined to become a
851component) it may be desirable to configure a RAID 1 set with only
852a single component.
853This can be achieved by using the word
854 .Dq absent
855to indicate that a particular component is not present.
856In the following:
857 .Bd -literal -offset indent
858START array
859# numCol numSpare
8602 0
861
862START disks
863absent
864/dev/sd0e
865
866START layout
867# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_1
868128 1 1 1
869
870START queue
871fifo 100
872 .Ed
873 .Pp
874 .Pa /dev/sd0e
875is the real component, and will be the second disk of a RAID 1 set.
876The first component is simply marked as being absent.
877Configuration (using
878 .Fl C
879and
880 .Fl I Ar 12345
881as above) proceeds normally, but initialization of the RAID set will
882have to wait until all physical components are present.
883After configuration, this set can be used normally, but will be operating
884in degraded mode.
885Once a second physical component is obtained, it can be hot-added,
886the existing data mirrored, and normal operation resumed.
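.Pp
For example, assuming the new disk appears as
.Pa /dev/sd1e
and the absent component is reported as
.Sq component0
in the output of
.Fl s
(both names are illustrative), the disk could be added as a hot spare
and the set reconstructed onto it with:
.Bd -literal -offset indent
raidctl -a /dev/sd1e raid0
raidctl -F component0 raid0
.Ed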
887 .Pp
888The size of the resulting RAID set will depend on the number of data
889components in the set.
890Space is automatically reserved for the component labels, and
891the actual amount of space used
892for data on a component will be rounded down to the largest possible
893multiple of the sectors per stripe unit (sectPerSU) value.
894Thus, the amount of space provided by the RAID set will be less
895than the sum of the size of the components.
896 .Ss Maintenance of the RAID set
897After the parity has been initialized for the first time, the command:
898 .Bd -literal -offset indent
899raidctl -p raid0
900 .Ed
901 .Pp
902can be used to check the current status of the parity.
To check the parity and rebuild it if necessary (for example,
904after an unclean shutdown) the command:
905 .Bd -literal -offset indent
906raidctl -P raid0
907 .Ed
908 .Pp
909is used.
910Note that re-writing the parity can be done while
911other operations on the RAID set are taking place (e.g., while doing a
912 .Xr fsck 8
913on a file system on the RAID set).
914However: for maximum effectiveness of the RAID set, the parity should be
915known to be correct before any data on the set is modified.
916 .Pp
917To see how the RAID set is doing, the following command can be used to
918show the RAID set's status:
919 .Bd -literal -offset indent
920raidctl -s raid0
921 .Ed
922 .Pp
923The output will look something like:
924 .Bd -literal -offset indent
925Components:
926 /dev/sd1e: optimal
927 /dev/sd2e: optimal
928 /dev/sd3e: optimal
929Spares:
930 /dev/sd4e: spare
931Component label for /dev/sd1e:
932 Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
933 Version: 2 Serial Number: 13432 Mod Counter: 65
934 Clean: No Status: 0
935 sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
936 RAID Level: 5 blocksize: 512 numBlocks: 1799936
937 Autoconfig: No
938 Last configured as: raid0
939Component label for /dev/sd2e:
940 Row: 0 Column: 1 Num Rows: 1 Num Columns: 3
941 Version: 2 Serial Number: 13432 Mod Counter: 65
942 Clean: No Status: 0
943 sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
944 RAID Level: 5 blocksize: 512 numBlocks: 1799936
945 Autoconfig: No
946 Last configured as: raid0
947Component label for /dev/sd3e:
948 Row: 0 Column: 2 Num Rows: 1 Num Columns: 3
949 Version: 2 Serial Number: 13432 Mod Counter: 65
950 Clean: No Status: 0
951 sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
952 RAID Level: 5 blocksize: 512 numBlocks: 1799936
953 Autoconfig: No
954 Last configured as: raid0
955Parity status: clean
956Reconstruction is 100% complete.
957Parity Re-write is 100% complete.
958 .Ed
959 .Pp
960This indicates that all is well with the RAID set.
961Of importance here are the component lines which read
962 .Sq optimal ,
963and the
964 .Sq Parity status
965line.
966 .Sq Parity status: clean
967indicates that the parity is up-to-date for this RAID set,
968whether or not the RAID set is in redundant or degraded mode.
969 .Sq Parity status: DIRTY
970indicates that it is not known if the parity information is
971consistent with the data, and that the parity information needs
972to be checked.
973Note that if there are file systems open on the RAID set,
974the individual components will not be
975 .Sq clean
976but the set as a whole can still be clean.
977 .Pp
978To check the component label of
979 .Pa /dev/sd1e ,
980the following is used:
981 .Bd -literal -offset indent
982raidctl -g /dev/sd1e raid0
983 .Ed
984 .Pp
985The output of this command will look something like:
986 .Bd -literal -offset indent
987Component label for /dev/sd1e:
988 Row: 0 Column: 0 Num Rows: 1 Num Columns: 3
989 Version: 2 Serial Number: 13432 Mod Counter: 65
990 Clean: No Status: 0
991 sectPerSU: 32 SUsPerPU: 1 SUsPerRU: 1
992 RAID Level: 5 blocksize: 512 numBlocks: 1799936
993 Autoconfig: No
994 Last configured as: raid0
995 .Ed
996 .Ss Dealing with Component Failures
997If for some reason
998(perhaps to test reconstruction) it is necessary to pretend a drive
999has failed, the following will perform that function:
1000 .Bd -literal -offset indent
1001raidctl -f /dev/sd2e raid0
1002 .Ed
1003 .Pp
1004The system will then be performing all operations in degraded mode,
1005where missing data is re-computed from existing data and the parity.
1006In this case, obtaining the status of raid0 will return (in part):
1007 .Bd -literal -offset indent
1008Components:
1009 /dev/sd1e: optimal
1010 /dev/sd2e: failed
1011 /dev/sd3e: optimal
1012Spares:
1013 /dev/sd4e: spare
1014 .Ed
1015 .Pp
1016Note that with the use of
1017 .Fl f
1018a reconstruction has not been started.
1019To both fail the disk and start a reconstruction, the
1020 .Fl F
1021option must be used:
1022 .Bd -literal -offset indent
1023raidctl -F /dev/sd2e raid0
1024 .Ed
1025 .Pp
1026The
1027 .Fl f
1028option may be used first, and then the
1029 .Fl F
1030option used later, on the same disk, if desired.
1031Immediately after the reconstruction is started, the status will report:
1032 .Bd -literal -offset indent
1033Components:
1034 /dev/sd1e: optimal
1035 /dev/sd2e: reconstructing
1036 /dev/sd3e: optimal
1037Spares:
1038 /dev/sd4e: used_spare
1039[...]
1040Parity status: clean
1041Reconstruction is 10% complete.
1042Parity Re-write is 100% complete.
1043 .Ed
1044 .Pp
1045This indicates that a reconstruction is in progress.
1046To find out how the reconstruction is progressing the
1047 .Fl S
1048option may be used.
1049This will indicate the progress in terms of the
1050percentage of the reconstruction that is completed.
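For example:
.Bd -literal -offset indent
raidctl -S raid0
.Ed
.Pp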
1051When the reconstruction is finished the
1052 .Fl s
1053option will show:
1054 .Bd -literal -offset indent
1055Components:
1056 /dev/sd1e: optimal
1057 /dev/sd4e: optimal
1058 /dev/sd3e: optimal
1059No spares.
1060[...]
1061Parity status: clean
1062Reconstruction is 100% complete.
1063Parity Re-write is 100% complete.
1064 .Ed
1065 .Pp
1066as
1067 .Pa /dev/sd2e
1068has been removed and replaced with
1069 .Pa /dev/sd4e .
1070 .Pp
1071If a component fails and there are no hot spares
1072available on-line, the status of the RAID set might (in part) look like:
1073 .Bd -literal -offset indent
1074Components:
1075 /dev/sd1e: optimal
1076 /dev/sd2e: failed
1077 /dev/sd3e: optimal
1078No spares.
1079 .Ed
1080 .Pp
1081In this case there are a number of options.
1082The first option is to add a hot spare using:
1083 .Bd -literal -offset indent
1084raidctl -a /dev/sd4e raid0
1085 .Ed
1086 .Pp
1087After the hot add, the status would then be:
1088 .Bd -literal -offset indent
1089Components:
1090 /dev/sd1e: optimal
1091 /dev/sd2e: failed
1092 /dev/sd3e: optimal
1093Spares:
1094 /dev/sd4e: spare
1095 .Ed
1096 .Pp
1097Reconstruction could then take place using
1098 .Fl F
1099as described above.
1100 .Pp
1101A second option is to rebuild directly onto
1102 .Pa /dev/sd2e .
1103Once the disk containing
1104 .Pa /dev/sd2e
1105has been replaced, one can simply use:
1106 .Bd -literal -offset indent
1107raidctl -R /dev/sd2e raid0
1108 .Ed
1109 .Pp
1110to rebuild the
1111 .Pa /dev/sd2e
1112component.
1113As the rebuilding is in progress, the status will be:
1114 .Bd -literal -offset indent
1115Components:
1116 /dev/sd1e: optimal
1117 /dev/sd2e: reconstructing
1118 /dev/sd3e: optimal
1119No spares.
1120 .Ed
1121 .Pp
1122and when completed, will be:
1123 .Bd -literal -offset indent
1124Components:
1125 /dev/sd1e: optimal
1126 /dev/sd2e: optimal
1127 /dev/sd3e: optimal
1128No spares.
1129 .Ed
1130 .Pp
1131In circumstances where a particular component is completely
1132unavailable after a reboot, a special component name will be used to
1133indicate the missing component.
1134For example:
1135 .Bd -literal -offset indent
1136Components:
1137 /dev/sd2e: optimal
1138 component1: failed
1139No spares.
1140 .Ed
1141 .Pp
1142indicates that the second component of this RAID set was not detected
1143at all by the auto-configuration code.
1144The name
1145 .Sq component1
1146can be used anywhere a normal component name would be used.
1147For example, to add a hot spare to the above set, and rebuild to that hot
1148spare, the following could be done:
1149 .Bd -literal -offset indent
1150raidctl -a /dev/sd3e raid0
1151raidctl -F component1 raid0
1152 .Ed
1153 .Pp
1154at which point the data missing from
1155 .Sq component1
1156would be reconstructed onto
1157 .Pa /dev/sd3e .
1158 .Pp
1159When more than one component is marked as
1160 .Sq failed
1161due to a non-component hardware failure (e.g., loss of power to two
1162components, adapter problems, termination problems, or cabling issues) it
1163is quite possible to recover the data on the RAID set.
1164The first thing to be aware of is that the first disk to fail will
1165almost certainly be out-of-sync with the remainder of the array.
1166If any IO was performed between the time the first component is considered
1167 .Sq failed
1168and when the second component is considered
1169 .Sq failed ,
1170then the first component to fail will
1171 .Em not
1172contain correct data, and should be ignored.
1173When the second component is marked as failed, however, the RAID device will
1174(currently) panic the system.
1175At this point the data on the RAID set
1176(not including the first failed component) is still self consistent,
1177and will be in no worse state of repair than had the power gone out in
1178the middle of a write to a file system on a non-RAID device.
1179The problem, however, is that the component labels may now have 3 different
1180 .Sq modification counters
1181(one value on the first component that failed, one value on the second
1182component that failed, and a third value on the remaining components).
1183In such a situation, the RAID set will not autoconfigure,
1184and can only be forcibly re-configured
1185with the
1186 .Fl C
1187option.
1188To recover the RAID set, one must first remedy whatever physical
1189problem caused the multiple-component failure.
1190After that is done, the RAID set can be restored by forcibly
1191configuring the raid set
1192 .Em without
1193the component that failed first.
1194For example, if
1195 .Pa /dev/sd1e
1196and
1197 .Pa /dev/sd2e
1198fail (in that order) in a RAID set of the following configuration:
1199 .Bd -literal -offset indent
1200START array
12014 0
1202
1203START disks
1204/dev/sd1e
1205/dev/sd2e
1206/dev/sd3e
1207/dev/sd4e
1208
1209START layout
1210# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
121164 1 1 5
1212
1213START queue
1214fifo 100
1215
1216 .Ed
1217 .Pp
1218then the following configuration (say "recover_raid0.conf")
1219 .Bd -literal -offset indent
1220START array
12214 0
1222
1223START disks
1224absent
1225/dev/sd2e
1226/dev/sd3e
1227/dev/sd4e
1228
1229START layout
1230# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
123164 1 1 5
1232
1233START queue
1234fifo 100
1235 .Ed
1236 .Pp
1237can be used with
1238 .Bd -literal -offset indent
1239raidctl -C recover_raid0.conf raid0
1240 .Ed
1241 .Pp
1242to force the configuration of raid0.
1243A
1244 .Bd -literal -offset indent
1245raidctl -I 12345 raid0
1246 .Ed
1247 .Pp
1248will be required in order to synchronize the component labels.
1249At this point the file systems on the RAID set can then be checked and
1250corrected.
To complete the reconstruction of the RAID set,
1252 .Pa /dev/sd1e
1253is simply hot-added back into the array, and reconstructed
1254as described earlier.
1255 .Ss RAID on RAID
1256RAID sets can be layered to create more complex and much larger RAID sets.
1257A RAID 0 set, for example, could be constructed from four RAID 5 sets.
1258The following configuration file shows such a setup:
1259 .Bd -literal -offset indent
1260START array
1261# numCol numSpare
12624 0
1263
1264START disks
1265/dev/raid1e
1266/dev/raid2e
1267/dev/raid3e
1268/dev/raid4e
1269
1270START layout
1271# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_0
1272128 1 1 0
1273
1274START queue
1275fifo 100
1276 .Ed
1277 .Pp
1278A similar configuration file might be used for a RAID 0 set
1279constructed from components on RAID 1 sets.
1280In such a configuration, the mirroring provides a high degree
1281of redundancy, while the striping provides additional speed benefits.
1282 .Ss Auto-configuration and Root on RAID
1283RAID sets can also be auto-configured at boot.
1284To make a set auto-configurable,
1285simply prepare the RAID set as above, and then do a:
1286 .Bd -literal -offset indent
1287raidctl -A yes raid0
1288 .Ed
1289 .Pp
1290to turn on auto-configuration for that set.
1291To turn off auto-configuration, use:
1292 .Bd -literal -offset indent
1293raidctl -A no raid0
1294 .Ed
1295 .Pp
1296RAID sets which are auto-configurable will be configured before the
1297root file system is mounted.
1298These RAID sets are thus available for
1299use as a root file system, or for any other file system.
1300A primary advantage of using the auto-configuration is that RAID components
1301become more independent of the disks they reside on.
1302For example, SCSI ID's can change, but auto-configured sets will always be
1303configured correctly, even if the SCSI ID's of the component disks
1304have become scrambled.
1305 .Pp
1306Having a system's root file system
1307 .Pq Pa /
1308on a RAID set is also allowed, with the
1309 .Sq a
1310partition of such a RAID set being used for
1311 .Pa / .
1312To use raid0a as the root file system, simply use:
1313 .Bd -literal -offset indent
1314raidctl -A forceroot raid0
1315 .Ed
1316 .Pp
To return raid0a to being just an auto-configuring set, simply use the
.Fl A Ic yes
arguments again.
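.Pp
If the set should take over as the root device only when the machine
was actually booted from one of its components,
.Ic softroot
can be used instead:
.Bd -literal -offset indent
raidctl -A softroot raid0
.Ed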
1320 .Pp
1321Note that kernels can only be directly read from RAID 1 components on
1322architectures that support that
1323(currently alpha, i386, pmax, sandpoint, sparc, sparc64, and vax).
1324On those architectures, the
1325 .Dv FS_RAID
1326file system is recognized by the bootblocks, and will properly load the
1327kernel directly from a RAID 1 component.
1328For other architectures, or to support the root file system
1329on other RAID sets, some other mechanism must be used to get a kernel booting.
1330For example, a small partition containing only the secondary boot-blocks
1331and an alternate kernel (or two) could be used.
1332Once a kernel is booting however, and an auto-configuring RAID set is
1333found that is eligible to be root, then that RAID set will be
1334auto-configured and used as the root device.
1335If two or more RAID sets claim to be root devices, then the
1336user will be prompted to select the root device.
1337At this time, RAID 0, 1, 4, and 5 sets are all supported as root devices.
1338 .Pp
1339A typical RAID 1 setup with root on RAID might be as follows:
1340 .Bl -enum
1341 .It
1342wd0a - a small partition, which contains a complete, bootable, basic
1343 .Nx
1344installation.
1345 .It
1346wd1a - also contains a complete, bootable, basic
1347 .Nx
1348installation.
1349 .It
1350wd0e and wd1e - a RAID 1 set, raid0, used for the root file system.
1351 .It
1352wd0f and wd1f - a RAID 1 set, raid1, which will be used only for
1353swap space.
1354 .It
1355wd0g and wd1g - a RAID 1 set, raid2, used for
1356 .Pa /usr ,
1357 .Pa /home ,
1358or other data, if desired.
1359 .It
1360wd0h and wd1h - a RAID 1 set, raid3, if desired.
1361 .El
1362 .Pp
1363RAID sets raid0, raid1, and raid2 are all marked as auto-configurable.
1364raid0 is marked as being a root file system.
1365When new kernels are installed, the kernel is not only copied to
1366 .Pa / ,
1367but also to wd0a and wd1a.
1368The kernel on wd0a is required, since that
1369is the kernel the system boots from.
1370The kernel on wd1a is also
1371required, since that will be the kernel used should wd0 fail.
1372The important point here is to have redundant copies of the kernel
available, in the event that one of the drives fails.
1374 .Pp
1375There is no requirement that the root file system be on the same disk
1376as the kernel.
1377For example, obtaining the kernel from wd0a, and using
1378sd0e and sd1e for raid0, and the root file system, is fine.
1379It
1380 .Em is
1381critical, however, that there be multiple kernels available, in the
1382event of media failure.
1383 .Pp
1384Multi-layered RAID devices (such as a RAID 0 set made
1385up of RAID 1 sets) are
1386 .Em not
1387supported as root devices or auto-configurable devices at this point.
1388(Multi-layered RAID devices
1389 .Em are
1390supported in general, however, as mentioned earlier.)
1391Note that in order to enable component auto-detection and
1392auto-configuration of RAID devices, the line:
1393 .Bd -literal -offset indent
1394options RAID_AUTOCONFIG
1395 .Ed
1396 .Pp
1397must be in the kernel configuration file.
1398See
1399 .Xr raid 4
1400for more details.
1401 .Ss Swapping on RAID
1402A RAID device can be used as a swap device.
1403In order to ensure that a RAID device used as a swap device
1404is correctly unconfigured when the system is shutdown or rebooted,
1405it is recommended that the line
1406 .Bd -literal -offset indent
1407swapoff=YES
1408 .Ed
1409 .Pp
1410be added to
1411 .Pa /etc/rc.conf .
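.Pp
The swap device itself is listed in
.Pa /etc/fstab
as usual; for example, assuming the
.Sq b
partition of raid1 is used for swap:
.Bd -literal -offset indent
/dev/raid1b none swap sw 0 0
.Ed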
1412 .Ss Unconfiguration
1413The final operation performed by
1414 .Nm
1415is to unconfigure a
1416 .Xr raid 4
1417device.
1418This is accomplished via a simple:
1419 .Bd -literal -offset indent
1420raidctl -u raid0
1421 .Ed
1422 .Pp
1423at which point the device is ready to be reconfigured.
1424 .Ss Performance Tuning
1425Selection of the various parameter values which result in the best
1426performance can be quite tricky, and often requires a bit of
1427trial-and-error to get those values most appropriate for a given system.
1428A whole range of factors come into play, including:
1429 .Bl -enum
1430 .It
1431Types of components (e.g., SCSI vs. IDE) and their bandwidth
1432 .It
1433Types of controller cards and their bandwidth
1434 .It
1435Distribution of components among controllers
1436 .It
1437IO bandwidth
1438 .It
File system access patterns
1440 .It
1441CPU speed
1442 .El
1443 .Pp
1444As with most performance tuning, benchmarking under real-life loads
1445may be the only way to measure expected performance.
1446Understanding some of the underlying technology is also useful in tuning.
1447The goal of this section is to provide pointers to those parameters which may
1448make significant differences in performance.
1449 .Pp
1450For a RAID 1 set, a SectPerSU value of 64 or 128 is typically sufficient.
1451Since data in a RAID 1 set is arranged in a linear
1452fashion on each component, selecting an appropriate stripe size is
1453somewhat less critical than it is for a RAID 5 set.
1454However: a stripe size that is too small will cause large IO's to be
1455broken up into a number of smaller ones, hurting performance.
1456At the same time, a large stripe size may cause problems with
1457concurrent accesses to stripes, which may also affect performance.
1458Thus values in the range of 32 to 128 are often the most effective.
1459 .Pp
1460Tuning RAID 5 sets is trickier.
1461In the best case, IO is presented to the RAID set one stripe at a time.
1462Since the entire stripe is available at the beginning of the IO,
1463the parity of that stripe can be calculated before the stripe is written,
1464and then the stripe data and parity can be written in parallel.
1465When the amount of data being written is less than a full stripe worth, the
1466 .Sq small write
1467problem occurs.
1468Since a
1469 .Sq small write
1470means only a portion of the stripe on the components is going to
1471change, the data (and parity) on the components must be updated
1472slightly differently.
1473First, the
1474 .Sq old parity
1475and
1476 .Sq old data
1477must be read from the components.
1478Then the new parity is constructed,
1479using the new data to be written, and the old data and old parity.
1480Finally, the new data and new parity are written.
1481All this extra data shuffling results in a serious loss of performance,
1482and is typically 2 to 4 times slower than a full stripe write (or read).
1483To combat this problem in the real world, it may be useful
1484to ensure that stripe sizes are small enough that a
1485 .Sq large IO
1486from the system will use exactly one large stripe write.
1487As is seen later, there are some file system dependencies
1488which may come into play here as well.
1489 .Pp
1490Since the size of a
1491 .Sq large IO
1492is often (currently) only 32K or 64K, on a 5-drive RAID 5 set it may
1493be desirable to select a SectPerSU value of 16 blocks (8K) or 32
1494blocks (16K).
Since there are 4 data stripe units per stripe, the maximum
data per stripe is 64 blocks (32K) or 128 blocks (64K).
1497Again, empirical measurement will provide the best indicators of which
1498values will yield better performance.
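.Pp
As a sketch, the
.Sq layout
section for such a 5-drive RAID 5 set with a stripe unit of 16 blocks
would be:
.Bd -literal -offset indent
START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level_5
16 1 1 5
.Ed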
1499 .Pp
1500The parameters used for the file system are also critical to good performance.
1501For
1502 .Xr newfs 8 ,
1503for example, increasing the block size to 32K or 64K may improve
1504performance dramatically.
1505As well, changing the cylinders-per-group
1506parameter from 16 to 32 or higher is often not only necessary for
1507larger file systems, but may also have positive performance implications.
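.Pp
For example, a file system with 64 KB blocks might be created with
something like the following (see
.Xr newfs 8
for the options supported for block size and cylinders-per-group on a
given release):
.Bd -literal -offset indent
newfs -b 65536 /dev/rraid0e
.Ed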
1508 .Ss Summary
Despite the length of this man page, configuring a RAID set is a
relatively straightforward process.
1511All that needs to be done is the following steps:
1512 .Bl -enum
1513 .It
1514Use
1515 .Xr disklabel 8
1516to create the components (of type RAID).
1517 .It
1518Construct a RAID configuration file: e.g.,
1519 .Pa raid0.conf
1520 .It
1521Configure the RAID set with:
1522 .Bd -literal -offset indent
1523raidctl -C raid0.conf raid0
1524 .Ed
1525 .It
1526Initialize the component labels with:
1527 .Bd -literal -offset indent
1528raidctl -I 123456 raid0
1529 .Ed
1530 .It
1531Initialize other important parts of the set with:
1532 .Bd -literal -offset indent
1533raidctl -i raid0
1534 .Ed
1535 .It
1536Get the default label for the RAID set:
1537 .Bd -literal -offset indent
1538disklabel raid0 > /tmp/label
1539 .Ed
1540 .It
1541Edit the label:
1542 .Bd -literal -offset indent
1543vi /tmp/label
1544 .Ed
1545 .It
1546Put the new label on the RAID set:
1547 .Bd -literal -offset indent
1548disklabel -R -r raid0 /tmp/label
1549 .Ed
1550 .It
1551Create the file system:
1552 .Bd -literal -offset indent
1553newfs /dev/rraid0e
1554 .Ed
1555 .It
1556Mount the file system:
1557 .Bd -literal -offset indent
1558mount /dev/raid0e /mnt
1559 .Ed
1560 .It
1561Use:
1562 .Bd -literal -offset indent
1563raidctl -c raid0.conf raid0
1564 .Ed
1565 .Pp
1566To re-configure the RAID set the next time it is needed, or put
1567 .Pa raid0.conf
1568into
1569 .Pa /etc
1570where it will automatically be started by the
1571 .Pa /etc/rc.d
1572scripts.
1573 .El
1574 .Sh SEE ALSO
1575 .Xr ccd 4 ,
1576 .Xr raid 4 ,
1577 .Xr rc 8
1578 .Sh HISTORY
1579RAIDframe is a framework for rapid prototyping of RAID structures
1580developed by the folks at the Parallel Data Laboratory at Carnegie
1581Mellon University (CMU).
1582A more complete description of the internals and functionality of
1583RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
1584for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
1585Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
1586Parallel Data Laboratory of Carnegie Mellon University.
1587The
1588 .Nm
1589command first appeared as a program in CMU's RAIDframe v1.1 distribution.
1590This version of
1591 .Nm
1592is a complete re-write, and first appeared in
1593 .Nx 1.4 .
1594 .Sh COPYRIGHT
1595 .Bd -literal
1596The RAIDframe Copyright is as follows:
1597
1598Copyright (c) 1994-1996 Carnegie-Mellon University.
1599All rights reserved.
1600
1601Permission to use, copy, modify and distribute this software and
1602its documentation is hereby granted, provided that both the copyright
1603notice and this permission notice appear in all copies of the
1604software, derivative works or modified versions, and any portions
1605thereof, and that both notices appear in supporting documentation.
1606
1607CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
1608CONDITION. CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
1609FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.
1610
1611Carnegie Mellon requests users of this software to return to
1612
1613 Software Distribution Coordinator or Software.Distribution@CS.CMU.EDU
1614 School of Computer Science
1615 Carnegie Mellon University
1616 Pittsburgh PA 15213-3890
1617
1618any improvements or extensions that they make and grant Carnegie the
1619rights to redistribute these changes.
1620 .Ed
1621 .Sh WARNINGS
1622Certain RAID levels (1, 4, 5, 6, and others) can protect against some
1623data loss due to component failure.
1624However the loss of two components of a RAID 4 or 5 system,
1625or the loss of a single component of a RAID 0 system will
1626result in the entire file system being lost.
1627RAID is
1628 .Em NOT
1629a substitute for good backup practices.
1630 .Pp
1631Recomputation of parity
1632 .Em MUST
1633be performed whenever there is a chance that it may have been compromised.
1634This includes after system crashes, or before a RAID
1635device has been used for the first time.
1636Failure to keep parity correct will be catastrophic should a
1637component ever fail \(em it is better to use RAID 0 and get the
1638additional space and speed, than it is to use parity, but
1639not keep the parity correct.
1640At least with RAID 0 there is no perception of increased data security.
1641 .Pp
1642When replacing a failed component of a RAID set, it is a good
idea to zero out the first 64 blocks of the new component to ensure the
1644RAIDframe driver doesn't erroneously detect a component label in the
1645new component.
1646This is particularly true on
1647 .Em RAID 1
1648sets because there is at most one correct component label in a failed RAID
16491 installation, and the RAIDframe driver picks the component label with the
1650highest serial number and modification value as the authoritative source
1651for the failed RAID set when choosing which component label to use to
1652configure the RAID set.
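.Pp
For example, assuming the replacement component is
.Pa /dev/sd2e
(an illustrative name) and has 512-byte sectors, its first 64 blocks
could be zeroed with:
.Bd -literal -offset indent
dd if=/dev/zero of=/dev/rsd2e bs=512 count=64
.Ed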
1653 .Sh BUGS
1654Hot-spare removal is currently not available.