Managing ZFS File System Changes

Several ZFS file system features that are not available in the Oracle Solaris 10 release are available in Oracle Solaris 11. The following sections describe these changes and how to work with them.

Displaying ZFS File System Information

After the system is installed, review your ZFS storage pool and ZFS file system information.

Display ZFS storage pool information by using the zpool status command. For example:

# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c2t0d0s0  ONLINE       0     0     0

errors: No known data errors

Display ZFS file system information by using the zfs list command. For example:

# zfs list -r rpool
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    5.39G  67.5G  74.5K  /rpool
rpool/ROOT               3.35G  67.5G    31K  legacy
rpool/ROOT/solaris       3.35G  67.5G  3.06G  /
rpool/ROOT/solaris/var    283M  67.5G   214M  /var
rpool/dump               1.01G  67.5G  1000M  -
rpool/export             97.5K  67.5G    32K  /rpool/export
rpool/export/home        65.5K  67.5G    32K  /rpool/export/home
rpool/export/home/admin  33.5K  67.5G  33.5K  /rpool/export/home/admin
rpool/swap               1.03G  67.5G  1.00G  -

For a description of the root pool components, see Reviewing the Initial ZFS BE After an Installation.

Resolving ZFS File System Space Reporting Issues

The zpool list and zfs list commands are better than the legacy df and du commands for determining your available pool and file system space. With the legacy commands, you cannot easily distinguish between pool and file system space, nor do they account for space that is consumed by descendent file systems or snapshots.
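To illustrate the snapshot case, consider a hypothetical tank/data file system; the names and sizes below are illustrative only:

# zfs snapshot tank/data@before
# rm /tank/data/largefile
# du -sh /tank/data
 1.0G   /tank/data
# zfs list tank/data
NAME        USED  AVAIL  REFER  MOUNTPOINT
tank/data   3.0G  63.5G  1.0G   /tank/data

Here, du reports only the 1 GB of live file data, while the USED column also counts the roughly 2 GB of blocks that the @before snapshot still holds in the pool.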

For example, the following root pool (rpool) has 5.46 GB allocated and 68.5 GB free.

# zpool list rpool
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool    74G  5.46G  68.5G   7%  1.00x  ONLINE  -

If you compare the pool space accounting with the file system space accounting by reviewing the USED column of the individual file systems, you can see that the pool's allocated space is accounted for: the top-level rpool dataset's USED value (5.41 GB) closely matches the pool's ALLOC value (5.46 GB). For example:

# zfs list -r rpool
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    5.41G  67.4G  74.5K  /rpool
rpool/ROOT               3.37G  67.4G    31K  legacy
rpool/ROOT/solaris       3.37G  67.4G  3.07G  /
rpool/ROOT/solaris/var    302M  67.4G   214M  /var
rpool/dump               1.01G  67.5G  1000M  -
rpool/export             97.5K  67.4G    32K  /rpool/export
rpool/export/home        65.5K  67.4G    32K  /rpool/export/home
rpool/export/home/admin  33.5K  67.4G  33.5K  /rpool/export/home/admin
rpool/swap               1.03G  67.5G  1.00G  -

Resolving ZFS Storage Pool Space Reporting Issues

The SIZE value that is reported by the zpool list command is generally the amount of physical disk space in the pool, but it varies depending on the pool's redundancy level. The zfs list command lists the usable space that is available to file systems, which is disk space minus ZFS pool redundancy metadata overhead, if any. See the examples below.
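As a hypothetical illustration, consider two pools built from 68 GB disks: mpool, a two-way mirror, and rzpool, a three-disk RAID-Z pool. The pool names and sizes are illustrative only. For a mirror, zpool list reports the deflated size of one side of the mirror, so the two commands roughly agree; for RAID-Z, zpool list reports the total disk space including parity, which zfs list then excludes from the space available to file systems:

# zpool list mpool rzpool
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
mpool     68G   100K    68G   0%  1.00x  ONLINE  -
rzpool   204G   176K   204G   0%  1.00x  ONLINE  -
# zfs list mpool rzpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
mpool    85K  66.9G    31K  /mpool
rzpool  105K   134G  34.9K  /rzpool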

Making ZFS File Systems Available

Making ZFS file systems available is similar to Oracle Solaris 10 releases: a ZFS file system is mounted automatically when it is created and is remounted automatically when the system boots, based on the file system's mountpoint property. No /etc/vfstab entry is required.
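For example, a minimal sketch using a hypothetical tank/data file system (the dataset name and mount point are illustrative only):

# zfs set mountpoint=/export/data tank/data
# zfs get mountpoint tank/data
NAME       PROPERTY    VALUE         SOURCE
tank/data  mountpoint  /export/data  local

ZFS unmounts and remounts the file system when the property changes, so the new mount point takes effect immediately.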

ZFS File System Sharing Changes

In Oracle Solaris 10, you could set the sharenfs or sharesmb property to create and publish a ZFS file system share, or you could use the legacy share command.

In this Solaris release, you create a ZFS file system share and then publish the share as follows:
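A minimal sketch of the two-step model, using a hypothetical tank/fs1 file system and share name fs1; see the zfs(1M) man page for the authoritative share property syntax:

# zfs set share=name=fs1,path=/tank/fs1,prot=nfs tank/fs1
name=fs1,path=/tank/fs1,prot=nfs
# zfs set sharenfs=on tank/fs1

The first command defines the share, and setting the sharenfs property publishes it.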

The primary differences from the Oracle Solaris 10 sharing model are described in the sections that follow.

Legacy ZFS Sharing Syntax

Legacy sharing syntax is still supported, and you do not have to modify the /etc/dfs/dfstab file to make shares persistent. Legacy shares are managed by an SMF service.

  1. Use the share command to share a file system.

    For example, to share a ZFS file system:

    # share -F nfs /tank/zfsfs
    # cat /etc/dfs/sharetab
    /tank/zfsfs - nfs rw

    The above syntax is identical to sharing a UFS file system:

    # share -F nfs /ufsfs
    # cat /etc/dfs/sharetab
    /ufsfs - nfs rw 
    /tank/zfsfs - nfs rw 
  2. You can create a file system with the sharenfs property enabled, as in previous releases. In Oracle Solaris 11, this behavior creates a default share for the file system.

    # zfs create -o sharenfs=on rpool/data
    # cat /etc/dfs/sharetab
    /rpool/data rpool_data nfs sec=sys,rw

The above file system shares are published immediately.
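To stop sharing a file system, set the property back to off. For example, for the rpool/data file system shared above:

# zfs set sharenfs=off rpool/data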

ZFS Sharing Migration/Transition Issues

Review the share transition issues in this section.

ZFS Data Deduplication Requirements

In Oracle Solaris 11, you can use the deduplication (dedup) property to remove redundant data from your ZFS file systems. If a file system has the dedup property enabled, duplicate data blocks are removed synchronously. The result is that only unique data is stored, and common components are shared between files. For example:

# zfs set dedup=on tank/home

Do not enable the dedup property on file systems that reside on production systems until you perform the following steps to determine if your system can support data deduplication.

  1. Determine if your data would benefit from deduplication space savings. If your data is not dedup-able, there is no point in enabling dedup. Running the following command is very memory intensive:

    # zdb -S tank
    Simulated DDT histogram:

    bucket              allocated                        referenced
    ______   ______________________________   ______________________________
    refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
    ------   ------   -----   -----   -----   ------   -----   -----   -----
         1    2.27M    239G    188G    194G    2.27M    239G    188G    194G
         2     327K   34.3G   27.8G   28.1G     698K   73.3G   59.2G   59.9G
         4    30.1K   2.91G   2.10G   2.11G     152K   14.9G   10.6G   10.6G
         8    7.73K    691M    529M    529M    74.5K   6.25G   4.79G   4.80G
        16      673   43.7M   25.8M   25.9M    13.1K    822M    492M    494M
        32      197   12.3M   7.02M   7.03M    7.66K    480M    269M    270M
        64       47   1.27M    626K    626K    3.86K    103M   51.2M   51.2M
       128       22    908K    250K    251K    3.71K    150M   40.3M   40.3M
       256        7    302K     48K   53.7K    2.27K   88.6M   17.3M   19.5M
       512        4    131K   7.50K   7.75K    2.74K    102M   5.62M   5.79M
        2K        1      2K      2K      2K    3.23K   6.47M   6.47M   6.47M
        8K        1    128K      5K      5K    13.9K   1.74G   69.5M   69.5M
     Total    2.63M    277G    218G    225G    3.22M    337G    263G    270G

    dedup = 1.20, compress = 1.28, copies = 1.03, dedup * compress / copies = 1.50

    If the estimated dedup ratio is greater than 2, then you might see dedup space savings.

    In this example, the dedup ratio (dedup = 1.20) is less than 2, so enabling dedup is not recommended.

  2. Make sure your system has enough memory to support dedup.

    • Each in-core dedup table entry is approximately 320 bytes.

    • Multiply the total number of allocated blocks (the blocks value in the Total row of the zdb output) by 320. For example:

      in-core DDT size = 2.63M x 320 = 841.60M
  3. Dedup performance is best when the deduplication table fits into memory. If the dedup table has to be written to disk, then performance decreases. If you enable deduplication on your file systems without sufficient memory resources, system performance might degrade during file system-related operations. For example, removing a large dedup-enabled file system without sufficient memory might itself impact system performance. A quick way to compare the DDT estimate against physical memory is sketched after this list.
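For example, you can check physical memory with prtconf and compare it against the in-core DDT estimate from step 2. The output below is hypothetical:

# prtconf | grep Memory
Memory size: 4096 Megabytes

In this hypothetical case, an estimated 841.60M dedup table would consume roughly one fifth of physical memory before any other demand on system memory, which argues for adding memory or leaving dedup disabled on this system.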

Copyright © 2012, Oracle and/or its affiliates. All rights reserved. Legal Notices