ZFS (Zettabyte File System)

by | March 18, 2008

Features:

1. Massive capacity: up to 256 quadrillion zettabytes of storage (the scale runs terabytes, petabytes, exabytes, zettabytes; 1 zettabyte = 1024 exabytes)
2. RAID-0/1 & RAID-Z (RAID-5 with enhancements; requires at least 2 virtual devices)
3. Snapshots – read-only copies of file systems or volumes
4. Create volumes
5. Uses storage pools to manage storage – aggregates virtual devices
6. File systems attached to pools grow dynamically as storage is added
7. File systems may span multiple physical disks
8. ZFS is transactional, similar to a database. Example: writing 100 MB in a traditional file system where only 80 MB gets committed leaves 20 MB unwritten and the data corrupted; in ZFS either ALL of the write lands or NONE of it does
9. Pools & file systems are auto-mounted. No need to maintain /etc/vfstab
10. Supports file system hierarchies: /pool1/{home(5GB),var(10GB), etc.}
11. Supports reservation of storage: /pool1(36GB)/{home(10GB), var(36-10GB, i.e. the remaining 26GB)} (see the CLI sketch after this list)
12. Provides a secure web-based management tool
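
A minimal taste of how features 3, 4, and 11 look on the command line; the pool and dataset names here are hypothetical, and the full walkthrough starts below:

# zfs snapshot pool1/home@today          (read-only snapshot of a file system)
# zfs create -V 2g pool1/vol1            (creates a 2 GB volume)
# zfs set reservation=10G pool1/home     (sets aside 10 GB for pool1/home)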

*********** ZFS – CLI **************
# which zpool
/usr/sbin/zpool

zpool list – lists known pools
zpool create pool_name – pool names may contain alphanumeric characters plus underscore (_), hyphen (-), colon (:), and period (.)

Pool Name Constraints – the following are reserved words and may not be used as pool names:
1. mirror
2. raidz
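
Both words are reserved because they are keywords in the create syntax itself. A sketch of each form (device names hypothetical):

# zpool create pool1 mirror c0d1 c1d1          (two-way mirrored pool)
# zpool create pool1 raidz c0d1 c1d1 c2d1      (single-parity RAID-Z pool)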

—— Pause: note to vmware —————————————————————-

  Add a second disk to the image in VMware

To add a second hard disk with VMware Fusion:

  • Solaris must be halted.
  • VM must be shut down.
  • Click the + sign, add disk and enter a size.
  • Run devfsadm so that format sees the new device (almost typed reboot -- -r, but that would be "old think").

format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 1302 alt 2 hd 255 sec 63>
          /pci@0,0/pci1000,30@10/sd@0,0
       1. c1t1d0 <DEFAULT cyl 2557 alt 2 hd 128 sec 32>
          /pci@0,0/pci1000,30@10/sd@1,0
———————————————————————————————————-

Continue….

zpool create pool_name device_name1 device_name2 device_name3 … (device names are separated by spaces, not commas)

bash-3.00# devfsadm
bash-3.00# format
Searching for disks…done

AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
       1. c0d1 <DEFAULT cyl 1303 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@1,0
Specify disk (enter its number): ^Z

[1]+  Stopped                 format
bash-3.00# zpool create pool1 c0d1
bash-3.00# echo $?
0

bash-3.00# mount
/pool1 on pool1 read/write/setuid/devices/exec/xattr/atime/dev=2d50002 on Tue Mar 18 09:24:51 2008

bash-3.00# zpool list
NAME                    SIZE    USED   AVAIL    CAP  HEALTH     ALTROOT
pool1                  9.94G     88K   9.94G     0%  ONLINE     –

ZFS Pool Statuses
1. ONLINE
2. DEGRADED (part of a mirror is broken or an entire disk has failed, but the pool is still available)
3. FAULTED
4. OFFLINE
5. UNAVAILABLE

(These states apply to pools and to the virtual devices within them.)
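
A quick health check across every pool on the system:

# zpool status -x        (prints 'all pools are healthy' when nothing is wrong)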

—————————-
zfs list – returns ZFS dataset info

bash-3.00# zfs list
NAME    USED  AVAIL  REFER  MOUNTPOINT
pool1  85.5K  9.78G    25K  /pool1

zfs mount – lists mounted ZFS file systems and their mount points
bash-3.00# zfs mount
pool1                           /pool1

zpool status – returns virtual devices that constitute pools
bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c0d1      ONLINE       0     0     0

errors: No known data errors

bash-3.00# zpool status -v pool1

Note: ZFS requires virtual devices of at least 128 MB to create a pool
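
A way to experiment with that minimum without a spare disk, assuming file-backed virtual devices are acceptable for testing (paths hypothetical):

# mkfile 256m /export/zfile          (comfortably above the 128 MB floor)
# zpool create testpool /export/zfile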

(Recommendation: put every disk under ZFS except the one holding / .)
For performance, /var and /home should live on separate disks.
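
Pointing a conventional directory such as /export/home at a ZFS file system is a one-liner via the mountpoint property; a sketch, assuming the pool1/home dataset created later in this walkthrough:

# zfs set mountpoint=/export/home pool1/home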

——————-DESTROY ————————————————
zpool destroy pool1 – destroys the pool and its associated file systems

bash-3.00# zpool destroy pool1
bash-3.00# echo $?
0
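
Destruction is immediate and asks for no confirmation. A destroyed pool may still be recoverable as long as its disks have not been reused:

# zpool import -D            (lists destroyed pools that are still importable)
# zpool import -D pool1      (brings pool1 back)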

—————-CREATE File systems within pool1 ———————
zfs create pool1/home – creates file system named ‘home’ in pool1

bash-3.00# zfs create pool1/home
bash-3.00# echo $?
0

bash-3.00# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool1        114K  9.78G  25.5K  /pool1
pool1/home  24.5K  9.78G  24.5K  /pool1/home

Note: by default, 'zfs create pool1/home' leaves all of the storage available to 'pool1' also available to 'pool1/home'
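
The quota property shown next caps how much a file system may consume; its counterpart, the reservation property, guarantees it space instead. A sketch (the 2G value is hypothetical):

# zfs set reservation=2G pool1/home      (2 GB of pool1 is set aside for pool1/home)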

##### Set quota on existing file system ######
zfs set quota=5G pool1/home

bash-3.00# zfs set quota=5G pool1/home
bash-3.00# zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
pool1        114K  9.78G  25.5K  /pool1
pool1/home  24.5K  5.00G  24.5K  /pool1/home

##### Create user-based file system beneath pool1/home #####

zfs create pool1/home/zivo

bash-3.00# zfs get -r quota pool1
NAME                PROPERTY  VALUE  SOURCE
pool1               quota     none   default
pool1/home          quota     5G     local
pool1/home/pedrito  quota     none   default
pool1/home/zivo     quota     none   default

bash-3.00# zfs get -r compression pool1
NAME                PROPERTY     VALUE               SOURCE
pool1               compression  off                 default
pool1/home          compression  off                 default
pool1/home/pedrito  compression  off                 default
pool1/home/zivo     compression  off                 default
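
Compression is off by default; enabling it affects only data written after the change:

# zfs set compression=on pool1/home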

############ Rename file systems ######################

bash-3.00# zfs rename pool1/home/zivo pool1/home/zivob
bash-3.00# echo $?
0
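
The mount point follows the rename automatically. A quick verification, assuming the rename above:

# zfs list -r pool1/home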

############ Extending pool storage dynamically ######################
AVAILABLE DISK SELECTIONS:
       0. c0d0 <DEFAULT cyl 2607 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@0,0
       1. c0d1 <VMware V-0000000000000000-0001-10.00GB>
          /pci@0,0/pci-ide@7,1/ide@0/cmdk@1,0
       2. c1d1 <DEFAULT cyl 4093 alt 2 hd 128 sec 32>
          /pci@0,0/pci-ide@7,1/ide@1/cmdk@1,0

bash-3.00# zpool add pool1 c1d1
bash-3.00# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
pool1                    210K  17.6G  25.5K  /pool1
pool1/home              76.5K  5.00G  27.5K  /pool1/home
pool1/home/pedrito      24.5K  5.00G  24.5K  /pool1/home/pedrito
pool1/home/zivokickass  24.5K  5.00G  24.5K  /pool1/home/zivokickass

bash-3.00# zpool status
  pool: pool1
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          c0d1      ONLINE       0     0     0
          c1d1      ONLINE       0     0     0

errors: No known data errors
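
Note: 'zpool add' stripes the new disk into the pool, and in this generation of ZFS a plain device cannot be removed again, so the step is effectively permanent. To buy redundancy rather than capacity, one could instead attach a mirror to an existing device; a sketch with a hypothetical new disk c2d1:

# zpool attach pool1 c0d1 c2d1       (turns c0d1 into a two-way mirror with c2d1)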