<!DOCTYPE html>
<html dir="ltr" lang="en">
    <head>
        <meta charset='utf-8'>
        <title>LVM</title>
    </head>
    <body>

        <a href="index.html">Tools Index</a>

        <h1>LVM</h1>

        <p>Read <a href="https://raid.wiki.kernel.org/index.php/RAID_setup">RAID Setup</a>;
        as it says, the only things you will need besides the system are
        "Patience, Pizza, and your favorite caffeinated beverage.".
        See also the <a href="https://wiki.archlinux.org/index.php/Software_RAID_and_LVM">Arch Wiki</a>
        article about Software RAID and LVM.</p>

        <p>LVM, the Logical Volume Manager, adds one more layer of abstraction; read
        <a href="http://www.tuxradar.com/content/lvm-made-easy">LVM made easy</a>.
        Partitions under LVM are easy to resize and move, and there is
        tooling to help with encryption. There is also more freedom in naming
        volumes, e.g. production, development, backups...</p>
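
        <p>As a quick sketch of those layers (assuming the lvm2 tools are already
        installed), each layer has its own reporting command;</p>

        <pre>
        # pvs    # physical volumes: disks or partitions handed to LVM
        # vgs    # volume groups built from those physical volumes
        # lvs    # logical volumes carved out of the groups
        </pre>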


        <p>The basic idea behind RAID is to treat independent disks
        as an array of drives. RAID 0 uses two or more disks as one,
        giving a performance gain but no fault tolerance. RAID levels 1
        to 6 offer different fault-tolerance mechanisms.</p>
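
        <p>As a rough sketch only (the RAID 1 level and the device names are
        examples, see the RAID Setup page above), an array is created with mdadm
        and can then be handed to LVM as a single physical volume;</p>

        <pre>
        # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
        # pvcreate /dev/md0
        </pre>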


        <p>Up to now "from install" there is only one partition;
        it is a good idea to have a system with a different partition for each
        purpose. If it is a "fresh install";</p>

        <pre>
        # cd /iso/crux/opt/
        # pkgadd lvm2#2.02.107-1.pkg.tar.xz
        #
        </pre>
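
        <p>If the running kernel does not have device-mapper built in (an
        assumption; many kernels do), the module may need to be loaded before the
        LVM tools can do anything;</p>

        <pre>
        # modprobe dm-mod
        </pre>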

        <h2 id="lvmpart">1. LVM Partition</h2>

        <p>There is no need to create a partition with fdisk or parted
        if the whole device will be used for LVM; just run <a href="#pv">pvcreate</a>
        against the device (pvcreate /dev/sda).</p>

        <p>Create an LVM partition with parted;</p>

        <pre>
        parted --script ${DEV} \
                unit mib \
                mkpart primary 1000 4000 \
                set 1 lvm on
        </pre>
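
        <p>As a quick check (same ${DEV} as above), print the partition table;</p>

        <pre>
        # parted ${DEV} unit mib print
        </pre>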

        <h2 id="pv">2. Create physical volume</h2>

        <pre>
         # pvcreate /dev/sdb3
          Physical volume "/dev/sdb3" successfully created
        </pre>
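
        <p>The new physical volume can be verified with pvs or pvdisplay;</p>

        <pre>
        # pvs /dev/sdb3
        # pvdisplay /dev/sdb3
        </pre>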

        <h2 id="vg">3. Create volume group</h2>

        <pre>
        # vgcreate vg_system /dev/sdb3
          Volume group "vg_system" successfully created
        # vgcreate homevg /dev/sdb4
          Volume group "homevg" successfully created
        #
        </pre>

        <h3>3.1. Scan Volume Groups</h3>

        <pre>
        # vgscan
          Reading all physical volumes.  This may take a while...
          Found volume group "homevg" using metadata type lvm2
          Found volume group "vg_system" using metadata type lvm2
        #
        </pre>

        <h2 id="lv">4. Create logical volume</h2>

        <pre>
        # lvcreate -L 15G -n distfileslv vg_system
          Logical volume "distfileslv" created.
        # lvcreate -L 8G -n packageslv vg_system
          Logical volume "packageslv" created.
        # lvcreate -L 4G -n swaplv vg_system
          Logical volume "swaplv" created.
        # lvcreate -L 80G -n homelv homevg
          Logical volume "homelv" created.
        #
        </pre>

        <p>Create filesystems and swap on the new logical volumes;</p>

        <pre>
        # mkfs.ext4 /dev/vg_system/distfileslv
        # mkfs.ext4 /dev/vg_system/packageslv
        # mkswap /dev/vg_system/swaplv
        # mkfs.ext4 /dev/homevg/homelv
        </pre>
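
        <p>A possible next step (mount points here are only examples) is to enable
        the swap, mount the new filesystems and add them to /etc/fstab using the
        stable /dev/&lt;vg&gt;/&lt;lv&gt; paths;</p>

        <pre>
        # swapon /dev/vg_system/swaplv
        # mount /dev/vg_system/distfileslv /mnt/distfiles
        # mount /dev/homevg/homelv /home
        </pre>

        <pre>
        /dev/vg_system/swaplv       swap            swap  defaults  0 0
        /dev/vg_system/distfileslv  /mnt/distfiles  ext4  defaults  0 2
        /dev/homevg/homelv          /home           ext4  defaults  0 2
        </pre>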

        <h3>4.1. Activate and Deactivate</h3>

        <p>Deactivate logical volumes;</p>

        <pre>
        # lvchange -a n /dev/vg_system/packageslv
        # lvchange -a n /dev/vg_system/distfileslv
        # swapoff /dev/vg_system/swaplv
        # lvchange -a n /dev/vg_system/swaplv
        </pre>

        <p>Deactivate volume group;</p>

        <pre>
         # vgchange -a n vg_system
         0 logical volume(s) in volume group "vg_system" now active
         #
        </pre>

        <p>Activate volume group;</p>
        <pre>
        # vgchange -a y vg_system
          3 logical volume(s) in volume group "vg_system" now active
        #
        </pre>

        <h2 id="fsck">5. Maintenance</h2>

        <h3 id="resize">5.1. Resize</h3>

        <p>First unmount all LVM partitions;</p>

        <p>Check the physical volumes;</p>

        <pre>
        # pvs
        </pre>

        <p>Grow the physical volume to use all available space on the device;</p>

        <pre>
        # pvresize /dev/sdb
        </pre>

        <p>Check the volume groups;</p>

        <pre>
        # vgs
        </pre>

        <p>Resize a logical volume and its filesystem in one step;</p>

        <pre>
        # lvresize --resizefs --size +25GB /dev/mapper/vg_system-distfileslv
        </pre>

        <p>Check the volume groups again;</p>

        <pre>
        # vgs
        </pre>
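
        <p>If a volume group runs out of free space, it can be extended with
        another physical volume (the device name below is an example) and the
        logical volumes grown afterwards;</p>

        <pre>
        # pvcreate /dev/sdc1
        # vgextend vg_system /dev/sdc1
        # lvextend --resizefs --size +10G /dev/vg_system/distfileslv
        </pre>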

        <h2 id="encrypt">6. Encryption</h2>
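
        <p>One common approach (a sketch only, using cryptsetup/LUKS on top of a
        logical volume; the names are examples) is to format the logical volume as
        a LUKS container, open it, and put the filesystem on the mapped device;</p>

        <pre>
        # cryptsetup luksFormat /dev/homevg/homelv
        # cryptsetup open /dev/homevg/homelv home_crypt
        # mkfs.ext4 /dev/mapper/home_crypt
        # mount /dev/mapper/home_crypt /home
        </pre>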

        <a href="index.html">Tools Index</a>
        <p>
        This is part of the Hive System Documentation.
        Copyright (C) 2018
        Hive Team.
        See the file <a href="../fdl-1.3-standalone.html">GNU Free Documentation License</a>
        for copying conditions.</p>
    </body>
</html>