Solaris 10 Configuration Notes: How the hell does this thing work again….


It’s been a while since I’ve had the pleasure (read: pain) of working with Sloowaris, but now that we have two 48TB Sun X4540 Thumpers in house, I have to…..Uggghhhh :-). Here are some notes:

  • Remember that sudo -i does not work here. Log in over ssh as a regular user and use "su -" to get the root environment.
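    A minimal session sketch (host and user names are made up):

    laptop$ ssh someuser@thumper1
    someuser@thumper1$ su -
    Password:
    root #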
  • The machine has 6 controllers with 8 disks each, for a total of 48 disks.
  • To find out all the disks that are available on your system and their labels, run:

    root # format
    Searching for disks...done

    AVAILABLE DISK SELECTIONS:

  • To see the status of the zpool, run:

    root # zpool status
    pool: pool1
    state: ONLINE
    status: The pool is formatted using an older on-disk format. The pool can
    still be used, but some features are unavailable.
    action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
    pool will no longer be accessible on older software versions.
    scrub: none requested
    config:

    NAME          STATE     READ WRITE CKSUM
    pool1         ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c0t3d0    ONLINE       0     0     0
        c1t3d0    ONLINE       0     0     0
        c2t3d0    ONLINE       0     0     0
        c3t3d0    ONLINE       0     0     0
        c4t3d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c5t3d0    ONLINE       0     0     0
        c0t7d0    ONLINE       0     0     0
        c1t7d0    ONLINE       0     0     0
        c2t7d0    ONLINE       0     0     0
        c3t7d0    ONLINE       0     0     0
    spares
      c4t7d0      AVAIL
      c5t7d0      AVAIL

    errors: No known data errors
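
    The "scrub: none requested" line is a reminder that ZFS only verifies data as it is read unless you ask for a full pass; an occasional scrub is cheap insurance. A quick sketch:

    root # zpool scrub pool1
    root # zpool status pool1 | grep scrub    # check progress / completion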

  • Our zpool is at version 10 and the latest is version 15, so we upgrade:

    root # zpool upgrade
    This system is currently running ZFS pool version 15.

    The following pools are out of date, and can be upgraded. After being
    upgraded, these pools will no longer be accessible by older software versions.

    VER POOL
    --- ------------
    10 pool1

    Use 'zpool upgrade -v' for a list of available versions and their associated
    features.
    root # zpool upgrade -v
    This system is currently running ZFS pool version 15.

    The following versions are supported:

    VER DESCRIPTION
    --- --------------------------------------------------------
    1 Initial ZFS version
    2 Ditto blocks (replicated metadata)
    3 Hot spares and double parity RAID-Z
    4 zpool history
    5 Compression using the gzip algorithm
    6 bootfs pool property
    7 Separate intent log devices
    8 Delegated administration
    9 refquota and refreservation properties
    10 Cache devices
    11 Improved scrub performance
    12 Snapshot properties
    13 snapused property
    14 passthrough-x aclinherit
    15 user/group space accounting
    For more information on a particular version, including supported releases, see:

    http://www.opensolaris.org/os/community/zfs/version/N

    Where 'N' is the version number.
    root #
    root # zpool upgrade pool1
    This system is currently running ZFS pool version 15.

    Successfully upgraded 'pool1' from version 10 to version 15
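
    To sanity-check afterwards, the on-disk version is exposed as a pool property (a quick sketch; exact output layout varies by release):

    root # zpool get version pool1    # should now report 15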

  • Top-level vdevs behave like autonomous RAID subsystems that get combined into a pool (the pool is roughly analogous to an LVM volume group). There are 3 types of redundant vdevs: raidz (RAID-5-like), raidz2 (RAID-6-like) and mirror.
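    For illustration, creating a brand-new pool with each vdev type would look like the lines below (hypothetical pool name "tank" and made-up disk names; each line is an alternative, not a sequence):

    root # zpool create tank raidz  c0t1d0 c1t1d0 c2t1d0           # RAID-5-like
    root # zpool create tank raidz2 c0t1d0 c1t1d0 c2t1d0 c3t1d0    # RAID-6-like
    root # zpool create tank mirror c0t1d0 c1t1d0                  # plain mirror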
  • c0t0d0 and c1t0d0 are special and can't be included in a zpool: they hold the OS and the SVM (Solaris Volume Manager) metadevice state database replicas, as metadb -i shows. Leave them out:

    root # metadb -i
    flags           first blk       block count
    a m  p  luo     16              8192            /dev/dsk/c0t0d0s7
    a    p  luo     8208            8192            /dev/dsk/c0t0d0s7
    a    p  luo     16400           8192            /dev/dsk/c0t0d0s7
    a    p  luo     16              8192            /dev/dsk/c1t0d0s7
    a    p  luo     8208            8192            /dev/dsk/c1t0d0s7
    a    p  luo     16400           8192            /dev/dsk/c1t0d0s7
    r - replica does not have device relocation information
    o - replica active prior to last mddb configuration change
    u - replica is up to date
    l - locator for this replica was read successfully
    c - replica's location was in /etc/lvm/mddb.cf
    p - replica's location was patched in kernel
    m - replica is master, this is replica selected as input
    W - replica has device write errors
    a - replica is active, commits are occurring to this replica
    M - replica had problem with master blocks
    D - replica had problem with data blocks
    F - replica had format problems
    S - replica is too small to hold current data base
    R - replica had device read errors
  • The following commands add the remaining raidz1 vdevs to the pool:

    root # zpool add pool1 raidz1 c2t0d0 c3t0d0 c4t0d0 c5t0d0 c0t1d0
    root # zpool add pool1 raidz1 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0
    root # zpool add pool1 raidz1 c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0
    root # zpool add pool1 raidz1 c1t4d0 c2t4d0 c3t4d0 c4t4d0 c5t4d0
    root # zpool add pool1 raidz1 c0t5d0 c2t5d0 c3t5d0 c4t5d0 c5t5d0
    root # zpool add pool1 raidz1 c0t6d0 c1t6d0 c3t6d0 c4t6d0 c5t6d0
  • This leaves the following 4 disks, which we add as hot spares:

    root # zpool add pool1 spare c5t2d0 c0t4d0 c1t5d0 c2t6d0
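    A spare should take over automatically once ZFS marks a disk faulted, but you can also swap one in by hand. A hypothetical sketch, assuming c0t3d0 failed:

    root # zpool replace pool1 c0t3d0 c4t7d0    # resilver onto the spare
    root # zpool detach pool1 c0t3d0            # drop the dead disk once resilvering finishes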
  • Now for the fun part: finding out what the heck all this did to the system:

    root # zpool status
    pool: pool1
    state: ONLINE
    scrub: none requested
    config:

    NAME          STATE     READ WRITE CKSUM
    pool1         ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c0t3d0    ONLINE       0     0     0
        c1t3d0    ONLINE       0     0     0
        c2t3d0    ONLINE       0     0     0
        c3t3d0    ONLINE       0     0     0
        c4t3d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c5t3d0    ONLINE       0     0     0
        c0t7d0    ONLINE       0     0     0
        c1t7d0    ONLINE       0     0     0
        c2t7d0    ONLINE       0     0     0
        c3t7d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c2t0d0    ONLINE       0     0     0
        c3t0d0    ONLINE       0     0     0
        c4t0d0    ONLINE       0     0     0
        c5t0d0    ONLINE       0     0     0
        c0t1d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c1t1d0    ONLINE       0     0     0
        c2t1d0    ONLINE       0     0     0
        c3t1d0    ONLINE       0     0     0
        c4t1d0    ONLINE       0     0     0
        c5t1d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c0t2d0    ONLINE       0     0     0
        c1t2d0    ONLINE       0     0     0
        c2t2d0    ONLINE       0     0     0
        c3t2d0    ONLINE       0     0     0
        c4t2d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c1t4d0    ONLINE       0     0     0
        c2t4d0    ONLINE       0     0     0
        c3t4d0    ONLINE       0     0     0
        c4t4d0    ONLINE       0     0     0
        c5t4d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c0t5d0    ONLINE       0     0     0
        c2t5d0    ONLINE       0     0     0
        c3t5d0    ONLINE       0     0     0
        c4t5d0    ONLINE       0     0     0
        c5t5d0    ONLINE       0     0     0
      raidz1      ONLINE       0     0     0
        c0t6d0    ONLINE       0     0     0
        c1t6d0    ONLINE       0     0     0
        c3t6d0    ONLINE       0     0     0
        c4t6d0    ONLINE       0     0     0
        c5t6d0    ONLINE       0     0     0
    spares
      c4t7d0      AVAIL
      c5t7d0      AVAIL
      c5t2d0      AVAIL
      c0t4d0      AVAIL
      c1t5d0      AVAIL
      c2t6d0      AVAIL

    errors: No known data errors
    root # zpool list
    NAME    SIZE    USED   AVAIL   CAP   HEALTH   ALTROOT
    pool1   36.2T   222K   36.2T   0%    ONLINE   -
    root # zfs list
    NAME    USED   AVAIL   REFER   MOUNTPOINT
    pool1   161K   28.5T   28.8K   /vol1
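
    The two sizes differ because zpool list reports raw capacity while zfs list reports usable space after raidz1 parity. Rough arithmetic (assuming the ~1TB disks that a 48-disk/48TB Thumper implies): 8 vdevs x 5 disks = 40 disks of raw space (36.2T); each 5-disk raidz1 gives one disk's worth to parity, so usable space is about 4/5 of that, 36.2T x 4/5 ≈ 29T, which lines up with the 28.5T shown once metadata overhead is counted.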

  • To create a special CX mount point we do:

    root # zfs create pool1/CX
    root # zfs list
    NAME       USED    AVAIL   REFER   MOUNTPOINT
    pool1      201K    28.5T   28.8K   /vol1
    pool1/CX   33.6K   28.5T   33.6K   /vol1/CX
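    Datasets inherit their mountpoint from the pool, hence /vol1/CX. Per-dataset properties can be set the same way; a sketch with made-up values (check zfs(1M) for what your release supports):

    root # zfs set compression=on pool1/CX    # lzjb by default; gzip exists as of pool v5 above
    root # zfs set quota=5T pool1/CX          # hypothetical 5TB cap for the dataset
    root # zfs get compression,quota pool1/CX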
  • When compiling iRODS on the X4540 you might (read: will) get an error like “make: Fatal error in reader: config/config.mk………Unexpected end of line seen”. This happens because the system defaults to Sun’s make (/usr/ccs/bin/make) rather than GNU make, which lives at /usr/sfw/bin/gmake on Solaris 10. To fix it, put /usr/sfw/bin at the front of your $PATH, export it, and use gmake instead of make, as in the sketch below (read the INSTALL.txt that ships with iRODS to see how to do the steps manually instead of using the irodssetup command).
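    A minimal sketch of the PATH fix (sh/ksh syntax; the iRODS source path is hypothetical):

    root # PATH=/usr/sfw/bin:$PATH
    root # export PATH
    root # which gmake
    /usr/sfw/bin/gmake
    root # cd /path/to/iRODS && gmake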