User:Sram ss/sandbox

prtconf gives you complete information on the server.

lsvg -p vg_mea10
vg_mea10:
PV_NAME          PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk51          active            870         1           00..00..00..00..01
hdisk34          active            2614        0           00..00..00..00..00

lsvg vg_mea10
VOLUME GROUP:       vg_mea10                 VG IDENTIFIER:  00c7908e00004c000000014f64bab9db
VG STATE:           active                   PP SIZE:        256 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      3484 (891904 megabytes)
MAX LVs:            512                      FREE PPs:       1 (256 megabytes)
LVs:                12                       USED PPs:       3483 (891648 megabytes)
OPEN LVs:           12                       QUORUM:         2 (Enabled)
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     128016
MAX PPs per PV:     6096                     MAX PVs:        21
LTG size (Dynamic): 512 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
PV RESTRICTION:     none                     INFINITE RETRY: no
DISK BLOCK SIZE:    512                      CRITICAL VG:    no
FS SYNC OPTION:     no

lspv
hdisk51         00c7908e746426d1                    vg_mea10        active
hdisk34         00c7908eb3739892                    vg_mea10        active

lspv -p hdisk51
hdisk51:
PP RANGE   STATE   REGION        LV NAME             TYPE       MOUNT POINT
1-27       used    outer edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
28-34      used    outer edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
35-46      used    outer edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
47-49      used    outer edge    lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
50-133     used    outer edge    lv_atdmea10         jfs2       /apps/oradbf/atdmea10
134-135    used    outer edge    lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
136-153    used    outer edge    lv_atpmea10         jfs2       /apps/oradbf/atpmea10
154-170    used    outer edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
171-171    used    outer edge    lv_atpmea10         jfs2       /apps/oradbf/atpmea10
172-174    used    outer edge    lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
175-226    used    outer middle  lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
227-232    used    outer middle  lv_mea10infocdr     jfs2       /apps/atlas/atlas2v0/mea10/infocdr
233-326    used    outer middle  lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
327-337    used    outer middle  lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
338-348    used    outer middle  lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
349-378    used    center        lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
379-417    used    center        lv_atpmea10         jfs2       /apps/oradbf/atpmea10
418-522    used    center        lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
523-525    used    inner middle  lv_atpmea10         jfs2       /apps/oradbf/atpmea10
526-677    used    inner middle  lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
678-696    used    inner middle  lv_atpmea10         jfs2       /apps/oradbf/atpmea10
697-706    used    inner edge    lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
707-707    used    inner edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
708-708    free    inner edge
709-719    used    inner edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
720-739    used    inner edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
740-747    used    inner edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
748-796    used    inner edge    lv_atpmea10         jfs2       /apps/oradbf/atpmea10
797-816    used    inner edge    lv_atdmea10         jfs2       /apps/oradbf/atdmea10
817-839    used    inner edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
840-870    used    inner edge    lv_atpmea10         jfs2       /apps/oradbf/atpmea10

lspv -p hdisk34
hdisk34:
PP RANGE   STATE   REGION        LV NAME             TYPE       MOUNT POINT
1-3        used    outer edge    lv_impmea10         jfs2       /apps/atlas/atlas2v0/mea10/data1/imp
4-523      used    outer edge    lv_atpmea10         jfs2       /apps/oradbf/atpmea10
524-643    used    outer middle  lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
644-663    used    outer middle  lv_tipsme10         jfs2       /apps/oradbf/tipsme10
664-664    used    outer middle  lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
665-834    used    outer middle  lv_mea10infocdr     jfs2       /apps/atlas/atlas2v0/mea10/infocdr
835-835    used    outer middle  loglv17             jfs2log    N/A
836-836    used    outer middle  lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
837-842    used    outer middle  lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
843-846    used    outer middle  lv_orionmea10       jfs2       /apps/orion/060/mea10
847-850    used    outer middle  lv_mea10crnet       jfs2       /apps/crnet/030/mea10
851-1046   used    outer middle  lv_mea10fic         jfs2       /apps/atlas/atlas2v0/mea10/data1/fic
1047-1510  used    center        lv_mea10fic         jfs2       /apps/atlas/atlas2v0/mea10/data1/fic
1511-1568  used    center        lv_atdmea10         jfs2       /apps/oradbf/atdmea10
1569-1670  used    inner middle  lv_atdmea10         jfs2       /apps/oradbf/atdmea10
1671-1715  used    inner middle  lv_mea10fic         jfs2       /apps/atlas/atlas2v0/mea10/data1/fic
1716-1717  used    inner middle  lv_impmea10         jfs2       /apps/atlas/atlas2v0/mea10/data1/imp
1718-2005  used    inner middle  lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
2006-2028  used    inner middle  lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
2029-2091  used    inner middle  lv_mea10fic         jfs2       /apps/atlas/atlas2v0/mea10/data1/fic
2092-2103  used    inner edge    lv_mea10fic         jfs2       /apps/atlas/atlas2v0/mea10/data1/fic
2104-2107  used    inner edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc
2108-2128  used    inner edge    lv_mea10            jfs2       /apps/atlas/atlas2v0/mea10
2129-2155  used    inner edge    lv_impmea10         jfs2       /apps/atlas/atlas2v0/mea10/data1/imp
2156-2587  used    inner edge    lv_atdmea10         jfs2       /apps/oradbf/atdmea10
2588-2595  used    inner edge    lv_data1mea10       jfs2       /apps/atlas/atlas2v0/mea10/data1
2596-2614  used    inner edge    lv_mea10infoc       jfs2       /apps/atlas/atlas2v0/mea10/infoc

Conclusion: total PPs as per the volume group: 3484.

hdisk51: 870 PPs and hdisk34: 2614 PPs, so the total is 3484, which matches the TOTAL PPs of the volume group. If you look at the output above, logical volume lv_mea10 is built from both hdisk51 and hdisk34.
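This cross-check can be scripted; a minimal sketch, with the `lsvg -p` sample output embedded so it runs anywhere (on the server itself you would pipe in `lsvg -p vg_mea10` instead):

```shell
# Sum the TOTAL PPs column of `lsvg -p` output and compare with the
# VG total reported by `lsvg`.
lsvg_p_output='vg_mea10:
PV_NAME          PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk51          active            870         1           00..00..00..00..01
hdisk34          active            2614        0           00..00..00..00..00'

# Skip the VG name line and the header, then add up column 3.
total=$(printf '%s\n' "$lsvg_p_output" | awk 'NR > 2 { sum += $3 } END { print sum }')
echo "total PPs across PVs: $total"   # 870 + 2614 = 3484
```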

lslv -l lv_mea10
lv_mea10:/apps/atlas/atlas2v0/mea10
PV               COPIES        IN BAND       DISTRIBUTION
hdisk34          141:000:000   85%           000:120:000:000:021
hdisk51          451:000:000   23%           039:105:135:152:020

lslv lv_mea10
LOGICAL VOLUME:     lv_mea10               VOLUME GROUP:   vg_mea10
LV IDENTIFIER:      00c7908e00004c000000014f64bab9db.1    PERMISSION: read/write
VG STATE:           active/complete        LV STATE:       opened/syncd
TYPE:               jfs2                   WRITE VERIFY:   off
MAX LPs:            4096                   PP SIZE:        256 megabyte(s)
COPIES:             1                      SCHED POLICY:   parallel
LPs:                592                    PPs:            592
STALE PPs:          0                      BB POLICY:      relocatable
INTER-POLICY:       minimum                RELOCATABLE:    yes
INTRA-POLICY:       middle                 UPPER BOUND:    32
MOUNT POINT:        /apps/atlas/atlas2v0/mea10            LABEL: /apps/atlas/atlas2v0/mea10
DEVICE UID:         0                      DEVICE GID:     0
DEVICE PERMISSIONS: 432
MIRROR WRITE CONSISTENCY: on/ACTIVE
EACH LP COPY ON A SEPARATE PV ?: yes
Serialize IO ?:     NO
INFINITE RETRY:     no                     PREFERRED READ: 0

Logical Volume Manager:- The set of operating system commands, library subroutines, and other tools that allow you to establish and control logical volume storage is called the Logical Volume Manager (LVM). It manages large disk farms by letting you add disks, replace disks, and copy and share contents from one disk to another without disrupting service (hot swapping). One can think of the LVM as a thin software layer on top of the hard disks and partitions, which creates an illusion of continuity and ease of use for managing hard-drive replacement, repartitioning, and backup. The LVM controls disk resources by mapping data between a simpler, more flexible logical view of storage space and the actual physical disks. It does this using a layer of device-driver code that runs above the traditional disk device drivers.

Logical volume storage concepts:- The five basic logical storage concepts are: physical volumes, volume groups, physical partitions, logical volumes, and logical partitions. Each individual fixed-disk drive is called a physical volume (PV). Each physical volume belongs to exactly one volume group (VG); the default volume group created at installation is named rootvg. All of the physical volumes in a volume group are divided into physical partitions (PPs) of the same size. Within each volume group, one or more logical volumes (LVs) are defined. Logical volumes are groups of information located on physical volumes. Each logical volume consists of one or more logical partitions (LPs). Each logical partition corresponds to at least one physical partition.
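The size arithmetic implied by these concepts is simple: an LV occupies LPs × PP size (× copies, when mirrored). A quick sketch using the lv_mea10 figures shown earlier (592 LPs, 256 MB PPs, one copy):

```shell
# LV size in MB = LPs x PP size (x copies when mirrored).
# Figures taken from the `lslv lv_mea10` output above.
lps=592
pp_size_mb=256
copies=1
lv_mb=$((lps * pp_size_mb * copies))
echo "lv_mea10 occupies ${lv_mb} MB"   # 592 * 256 = 151552 MB
```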

Before you can start using Logical Volume Manager you must understand the basic mechanics and terminology.

The vary-on process is one of the mechanisms that the LVM uses to ensure that a volume group is ready to use and contains the most up-to-date data. The varyonvg and varyoffvg commands activate or deactivate a volume group that you have defined to the system.

If the vary-on operation cannot access one or more of the physical volumes defined in the volume group, the command displays the names of all physical volumes defined for that volume group and their status. This helps you decide whether to vary-off this volume group.

Two important things you should know:-

/var/adm/ras/lvmcfg.log is the LVM log file; it shows which LVM commands were used.

alog -ot lvmcfg shows the LVM commands that were run; alog -ot lvmt shows LVM command and library trace information.

There are limitations that you have to be aware of, which are listed in the table below:

VG type       Maximum PVs   Maximum LVs   Maximum PPs per VG   Maximum PP size
Normal VG     32            256           32512 (1016*32)      1 GB
Big VG        128           512           130048 (1016*128)    1 GB
Scalable VG   1024          4096          2097152              128 GB
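For normal and big VGs, the per-VG PP limit in the table is just 1016 PPs per PV times the maximum number of PVs; a sketch of that arithmetic:

```shell
# Max PPs per VG = 1016 * max PVs (normal and big VGs only; a
# scalable VG has a flat 2097152 limit instead).
normal=$((1016 * 32))
big=$((1016 * 128))
echo "normal VG: $normal PPs, big VG: $big PPs"   # 32512 and 130048
```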

Physical Volumes (PV):

When a disk drive is initially added to the system, it is seen as a simple device. The disk is not yet accessible for LVM operations; to make it accessible, it has to be assigned to a volume group, which changes it from a disk into a physical volume. For each disk, two device drivers are created under the /dev directory: one block device driver and one character device driver. The disk drive is assigned a unique identifier that is called a physical volume identifier (PVID).

1.1 PVID: Two disks will never have the same PVID. The PVIDs are also stored in the ODM. They are used by LVM commands and by external applications such as HACMP. The chdev command changes an available disk device into a physical volume by assigning a PVID.

Commands-


 * 1) chdev -l hdiskn -a pv=yes ( to clear the PVID, use pv=clear)

Listing information about physical volumes


 * 1) lspv hdisk2

PHYSICAL VOLUME:   hdisk2                   VOLUME GROUP:     testvg
PV IDENTIFIER:     00c478de09caf37f         VG IDENTIFIER     00c478de00004c00000001078fc3497d
PV STATE:          active
STALE PARTITIONS:  0                        ALLOCATABLE:      yes
PP SIZE:           128 megabyte(s)          LOGICAL VOLUMES:  1
TOTAL PPs:         546 (69888 megabytes)    VG DESCRIPTORS:   2
FREE PPs:          542 (69376 megabytes)    HOT SPARE:        no
USED PPs:          4 (512 megabytes)        MAX REQUEST:      256 kilobytes
FREE DISTRIBUTION: 110..105..109..109..109
USED DISTRIBUTION: 00..04..00..00..00
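The fields in this output are internally consistent and worth knowing how to cross-check: FREE PPs + USED PPs = TOTAL PPs, and the five DISTRIBUTION columns (outer edge, outer middle, center, inner middle, inner edge) together sum to the disk's PP total. A sketch using the hdisk2 numbers above:

```shell
# Cross-check the hdisk2 figures reported by lspv (numbers copied
# from the output above).
total=546; free=542; used=4
[ $((free + used)) -eq "$total" ] && echo "free + used = total: OK"

# The five DISTRIBUTION columns are the disk regions: outer edge,
# outer middle, center, inner middle, inner edge.
free_dist="110 105 109 109 109"
used_dist="0 4 0 0 0"
sum=0
for n in $free_dist $used_dist; do sum=$((sum + n)); done
echo "distribution total: $sum PPs"   # should equal TOTAL PPs, 546
```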


 * 1) lspv -l hdisk0

Option -M is used to display the layout of a physical volume.

Changing the allocation permission for a physical volume-

The allocation permission for a physical volume determines whether physical partitions on that physical volume which have not yet been allocated to a logical volume can be allocated to logical volumes.

To turn on the allocation permission, use the following command:


 * 1) chpv -ay hdisk2

Changing the availability of a physical volume-


 * 1) lsvg testvg ( shows that the VG is active, contains two PVs, both PVs are active, and the VG has three VGDAs.)

Option -> -p (shows all hdisks in the VG and their status)


 * 1) lspv hdisk3 ( shows that hdisk3 is active and has two VGDAs)


 * 1) chpv -vr hdisk3 ( makes hdisk3 unavailable.)


 * 1) chpv -va hdisk3 ( makes hdisk3 available again)

Cleaning the boot record from a physical volume-


 * 1) chpv -c hdiskn ( clears the boot record located on the physical volume)

Declaring a physical volume hot spare-


 * 1) chpv -hy hdisk3 ( defines hdisk3 as a hot spare)


 * 1) chpv -hn hdisk3 ( removes hdisk3 from the hot spare pool of its volume group)

Migrating data from physical volumes-

e.g. (hdiskA is the source disk and hdiskB the destination; both names are placeholders):

 * 1) lsvg -p rootvg ( displays all PVs that are contained in rootvg.)

 * 1) lspv -M hdiskA ( displays the map of all physical partitions located on hdiskA.)

 * 1) lspv -M hdiskB ( shows that all partitions of hdiskB are unallocated.)

 * 1) migratepv hdiskA hdiskB ( migrates the data from hdiskA to hdiskB.)

 * 1) lspv -M hdiskA ( confirms that hdiskA now has all partitions free.)

 * 1) chpv -c hdiskA ( clears the boot record from hdiskA.)

 * 1) lspv -M hdiskB ( confirms that all physical partitions have been migrated to hdiskB.)

Migrating partitions-


 * 1) migratelp testlv/1/2 hdiskA/123 ( migrates the data from the second copy of logical partition number 1 of logical volume testlv to physical partition 123 on hdiskA; hdiskA is a placeholder for the destination disk.)

Finding the LTG size-


 * 1) lquerypv -M hdisk0 ( shows the LTG size; the logical track group (LTG) size is the maximum allowed transfer size for a disk I/O operation.)

O/p -> 256

Volume Groups (VG):

When the operating system is installed, one volume group named rootvg is created by default. Additional volume groups can be created on the system using one or more physical volumes that have not been allocated to other volume groups yet and are in an available state. All physical volumes will be divided in physical partitions having the same size. The size of the physical partitions cannot be changed after the volume group is created.

Basic Commands for VG:


 * 1) lsvg


 * 1) lsvg -o


 * 1) lsattr -El hdiskn

Creating an original volume group :-


 * 1) mkvg -y vg1 -s64 -V99 hdiskn

O/p -> vg1

Suppose VG creation fails:

 * 1) mkvg -y testvg -s 4 -f

O/p ->
0516-1254 mkvg: Changing the PVID in the ODM.
0516-1208 mkvg: Warning, The Physical Partition Size of 4 requires the
creation of 17501 partitions. The system limitation is 16256
physical partitions per disk at a factor value of 16. Specify a larger
Physical Partition Size or a larger factor value in order to create a
volume group on this disk.
0516-862 mkvg: Unable to create volume group.
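The 0516-1208 failure is pure arithmetic: at the requested PP size the disk needs more partitions than the per-disk limit of 1016 × factor allows. A sketch of the check mkvg is effectively performing (the ~70004 MB disk size is an assumption, inferred from the 17501-partition figure in the message):

```shell
# Why `mkvg -s 4` failed: the disk would need more PPs than the
# per-disk limit of 1016 * factor. The disk size is an assumed value
# consistent with the error message above (17501 * 4 MB).
disk_mb=70004
pp_size_mb=4
factor=16
needed=$(( (disk_mb + pp_size_mb - 1) / pp_size_mb ))   # round up
limit=$(( 1016 * factor ))
echo "needs $needed PPs, per-disk limit is $limit"
[ "$needed" -gt "$limit" ] && echo "mkvg fails: use a larger PP size (-s) or factor (-t)"
```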

Creating a big volume group and Scalable volume group:-


 * 1) mkvg -B -y vg2 -s 128 -f -n -V 101

O/p -> vg2

Option -S is used for a scalable VG.

eg:-


 * 1) mkvg -S -y vg2 -s 128 -f -n -V 101

O/p -> 0516-1254 mkvg: Changing the PVID in the ODM.

0516-1254 mkvg: Changing the PVID in the ODM.

0516-1254 mkvg: Changing the PVID in the ODM.

0516-1254 mkvg: Changing the PVID in the ODM.

If it fails, the same error output as shown earlier is printed.

The mkvg command will automatically vary on the newly created volume group by calling the varyonvg command.

Note: if any error occurs in a VG command, check the limitations for normal, big, and scalable VGs.

Changing volume group characteristics:-

Varyon flag:-

This command configures the volume group newvg to be activated automatically the next time the system is restarted:


 * 1) chvg -ay newvg

This command configures the volume group newvg not to be activated automatically the next time the system is restarted:


 * 1) chvg -an newvg

Quorum:-

The quorum is one of the mechanisms that the LVM uses to ensure that a volume group is ready to use and contains the most up-to-date data.

Nonquorum Volume Group:-

The Logical Volume Manager (LVM) automatically deactivates the volume group when it lacks a quorum of Volume Group Descriptor Areas (VGDAs) or Volume Group Status Areas (VGSAs). However, you can choose an option that allows the group to stay online as long as there is one VGDA/VGSA pair intact. This option produces a nonquorum volume group.
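Quorum is a simple majority of VGDAs. With the two-disk layout seen earlier (two VGDAs on one disk, one on the other), losing the two-VGDA disk drops the VG below quorum; a sketch of the majority test:

```shell
# Majority test over VGDAs. A two-PV volume group keeps 3 VGDAs:
# two on the first disk, one on the second.
total_vgdas=3
surviving=1      # the disk holding two VGDAs has failed
if [ $((2 * surviving)) -gt "$total_vgdas" ]; then
  echo "quorum held: VG stays online"
else
  echo "quorum lost: LVM varies the VG off (unless quorum is disabled)"
fi
```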

To turn off the quorum, use the command:


 * 1) chvg -Qn testvg

To turn on the quorum, use the command:


 * 1) chvg -Qy testvg

Changing a volume group format:-

You can change the format of an original volume group to either big or scalable. Once the volume group has been converted to scalable format, it cannot be changed back to a different format. Type:


 * 1) varyoffvg xyz


 * 1) chvg -G xyz

The chvg -G command is used to change the format of the volume group xyz from original to scalable.


 * 1) varyonvg xyz

Changing LTG size:-

Volume groups in AIX 5L Version 5.3 are created with a variable logical track group size. For volume groups created to be compatible with a previous version of AIX 5L, you can change the LTG size to 0, 128, 256, 512, or 1024 KB. The new LTG size should be less than or equal to the smallest maximum transfer size of all disks in the volume group. You can change the LTG size for the testvg volume group using the following command.


 * 1) chvg -L 128 testvg
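The rule above can be computed directly: take the smallest of the per-disk maximum transfer sizes, then pick the largest valid LTG value that does not exceed it. A sketch with assumed per-disk values (on a real system they come from each disk's max_transfer attribute):

```shell
# Per-disk maximum transfer sizes in KB (assumed illustration values;
# on AIX you would read the max_transfer attribute of each hdisk).
transfers="1024 256 512"
min=
for t in $transfers; do
  if [ -z "$min" ] || [ "$t" -lt "$min" ]; then min=$t; fi
done
# Valid fixed LTG sizes, largest first.
best=
for ltg in 1024 512 256 128; do
  if [ "$ltg" -le "$min" ]; then best=$ltg; break; fi
done
echo "smallest max transfer: ${min} KB; largest usable LTG: ${best} KB"
```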

Changing the hot spare policy:-

To improve data availability, one or more disks from a volume group can be designated as hot spares. Physical volumes that are to be used as hot spares must have all physical partitions free. All logical volumes from the volume group that contains hot spare disks must be mirrored.


 * 1) chpv -hy hdiskn ( designates hdiskn as a hot spare)


 * 1) chvg -hy test1vg ( changes the hot spare policy of the volume group to migrate data from a failing disk to one spare disk)

Options -> -Y ( changes the hot spare policy of the volume group to migrate data from a failing disk to the entire pool of spare disks)

-n ( disables the hot spare policy of the volume group)

-sy ( changes the synchronization policy of a volume group)

-P ( changes the maximum number of physical partitions within a volume group)

-v ( changes the maximum number of logical volumes within a volume group)

-u ( removes the lock on a volume group)


 * 1) lsvg -p test1vg ( displays physical volumes that are part of test1vg)

Extending a volume group:-

You can increase the space available in a volume group by adding new physical volumes using the extendvg command. Before adding a new disk, you have to ensure that the disk is in an available state. If the disk has one VGDA corresponding to another already varied on volume group, the command exits. If the VGDA belongs to a volume group that is varied off, the system will prompt the user for confirmation in continuing with command execution. If the user says yes, the old VGDA is erased and all previous data on that disk will be unavailable.

How to Extend VG:-


 * 1) extendvg test1vg hdiskn ( where hdiskn is the disk you have added)

O/p -> 0516-1254 extendvg: Changing the PVID in the ODM.

This assigns a PVID to the new disk and adds it to the volume group test1vg.

Option -> -f ( forcibly adds the hdisk to the volume group)

Reducing a volume group:-

The volume group must be varied on. When you remove the last physical volume from the volume group, the VG itself is also removed. For volume groups created on AIX 5L Version 5.3 and varied on without using varyonvg -M, reducevg will dynamically raise the LTG size if the remaining disks permit it.


 * 1) reducevg -f testvg hdiskn ( prompts the user for confirmation, deletes the data located on the physical volume, and removes the disk definition from the testvg volume group; hdiskn is the disk being removed)

Note: even if we close logical volumes lv1, lv2, and loglv01 by unmounting the corresponding file systems, reducevg testvg still does not work; only the -f option will.

Resynchronizing the device configuration database:-


 * 1) synclvodm testvg

Exporting a volume group & Importing a volume group-

There are situations when all data from a volume group needs to be moved from one system to another. You will need to delete any reference to that data from the originating system.

The exportvg command only removes the volume group definition from the ODM and does not delete any data from the physical disks. It also clears the corresponding stanzas from /etc/filesystems.

Importing a volume group means recreating the reference to the volume group data and making that data available. The importvg command reads the VGDA of one of the physical volumes that are part of the volume group. It uses redefinevg to find all other disks that belong to the volume group, adds corresponding entries into the ODM database, and updates /etc/filesystems.

To export the volume group testvg, use the command


 * 1) exportvg testvg

To import the volume group testvg, use the command


 * 1) importvg -y testvg

O/p -> 0516-530 synclvodm: Logical volume name test1lv changed to fslv02.

0516-530 synclvodm: Logical volume name loglv00 changed to loglv01.

imfs: Warning: mount point /testmp already exists in /etc/filesystems.

test1vg


 * 1) lsvg -l test1vg

An imported volume group is automatically varied on, unless it is concurrent capable.

Note: You should run the fsck command before mounting the file systems.

Inter-policy:-

The inter-physical volume allocation policy can be minimum or maximum. Minimum: the smallest possible number of PVs is used to allocate the PPs (the data is not spread across all PVs if avoidable). Maximum: the physical partitions of the logical volume are spread over as many physical volumes as possible.

This illustration shows 2 physical volumes. One contains partition 1 and a copy of partition 2. The other contains partition 2 with a copy of partition 1. The formula for allocation is Maximum Inter-Disk Policy (Range=maximum) with a Single Logical Volume Copy per Disk (Strict=y).

Each LP copy on a separate PV - the strictness value. The current state of allocation can be strict, nonstrict, or superstrict. A strict allocation states that no copies of a logical partition are allocated on the same physical volume. If the allocation does not follow the strict criteria, it is called nonstrict: copies of a logical partition can share the same physical volume. A superstrict allocation states that no partition from one mirror copy may reside on the same disk as another mirror copy (mirror 2 and mirror 3 cannot be on the same disk).

So the inter-policy and strictness together determine how many disks are used: spreading the first LPs to the maximum number of disks and then mirroring them requires another set of disks, whereas spreading to the minimum number of disks and mirroring needs fewer disks.
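The effect of the two inter-policies can be illustrated with a toy allocator: minimum exhausts one PV before touching the next, while maximum deals PPs round-robin across all PVs. The disk names and the 4-free-PPs-per-PV figure below are made up for illustration:

```shell
pps=6; free=4   # 6 PPs to place, 4 free PPs per PV (made-up numbers)

# minimum inter-policy: exhaust each PV in turn
left=$pps; out=""
for pv in hdiskA hdiskB hdiskC; do
  take=$free
  [ "$left" -lt "$free" ] && take=$left
  out="$out $pv:$take"
  left=$((left - take))
done
echo "minimum:$out"                              # hdiskA:4 hdiskB:2 hdiskC:0

# maximum inter-policy: deal PPs round-robin across all PVs
a=0; b=0; c=0; i=0
while [ "$i" -lt "$pps" ]; do
  case $((i % 3)) in 0) a=$((a+1));; 1) b=$((b+1));; 2) c=$((c+1));; esac
  i=$((i+1))
done
echo "maximum: hdiskA:$a hdiskB:$b hdiskC:$c"    # 2 PPs on each PV
```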

Intra-policy:-

The intra-physical volume allocation policy specifies what strategy should be used for choosing PPs on a PV. It can be: edge (outer edge), middle (outer middle), center, inner middle, or inner edge. If you specify a region but it gets full, further partitions are allocated as near to that region as possible. Logical volumes with heavy I/O are traditionally placed at the center of the disk, which has the fastest average seek times.

Reorganizing a volume group:-

The reorgvg command is used to reorganize physical partitions within a volume group.

To reorganize only logical volumes lv1 and lv2 from volume group testvg, use:


 * 1) reorgvg testvg lv1 lv2

To reorganize only partitions that are located on given physical volumes (read from standard input via -i) and belong to logical volumes lv1 and lv2 from volume group testvg, use:

 * 1) echo " " | reorgvg -i testvg lv1 lv2 ( the echoed string lists the hdisk names)

Synchronizing a volume group:-

The syncvg command is used to synchronize stale physical partitions. It accepts names of logical volumes, physical volumes, or volume groups as parameters. The synchronization process can be time consuming, depending on the hardware characteristics and the total amount of data.

To synchronize the copies located on given physical volumes, use:


 * 1) syncvg -p

To synchronize all physical partitions from volume group testvg, use:


 * 1) syncvg -v testvg

Note: When the -f flag is used, synchronization is forced and an uncorrupted physical copy is chosen and propagated to all other copies of the logical partition, whether or not they are stale.

Mirroring a volume group:-

You can use the mirrorvg command to mirror all logical volumes within a volume group.


 * 1) extendvg rootvg hdiskA


 * 1) mirrorvg rootvg

O/p ->
0516-1124 mirrorvg: Quorum requirement turned off, reboot system for this
to take effect for rootvg.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
bosboot of system to initialize boot records. Then, user must modify
bootlist to include: hdisk0.


 * 1) bosboot -ad /dev/hdiskA


 * 1) bootlist -m normal hdiskA hdiskB


 * 1) lsvg -l rootvg

Logical Volumes (LV):

Logical volumes provide applications with the ability to access data as though it was stored contiguously. A logical volume consists of a sequence of one or more numbered logical partitions. Each logical partition has at least one and a maximum of three corresponding physical partitions that can be located on different physical volumes. The location on the disk for physical partitions is determined by intra-physical and inter-physical allocation policies.
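Since each LP can map to up to three PPs, a mirrored LV consumes LPs × copies × PP size of physical space while applications still see only LPs × PP size. A quick sketch (the 100-LP figure is illustrative; the 256 MB PP size matches the VG shown earlier):

```shell
# Physical vs logical size of a mirrored LV.
lps=100
pp_mb=256
copies=3                          # at most three PPs (copies) per LP
logical_mb=$((lps * pp_mb))
physical_mb=$((lps * pp_mb * copies))
echo "logical: ${logical_mb} MB, physical: ${physical_mb} MB"
```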

Notes: When the system is installed, the root volume group (rootvg) is created. This is where the AIX operating system files will be contained. Additional disks can either be added to rootvg or a new volume group can be created for them. There can be up to 255 VGs per system.

If you have external disks, it is recommended that they be placed in a separate volume group. By maintaining the user file systems and the operating system files in distinct volume groups, the user files are not jeopardized during operating system updates, reinstallations, and crash recoveries. Maintenance is easier because you can update or reinstall the operating system without having to restore user data. For security, you can make the volume group unavailable using varyoffvg.

Logical Volume types:

1. Log logical volume: used by jfs/jfs2. 2. Dump logical volume: used by the system dump to copy selected areas of kernel data when an unexpected system halt occurs. 3. Boot logical volume: contains the initial information required to start the system. 4. Paging logical volume: used by the virtual memory manager to swap out pages of memory. Users and applications use these LVs: 5. Raw logical volumes: controlled directly by the application (they do not use jfs/jfs2). 6. Journaled file systems.

Striped logical volumes:- Allow the I/O capacity of several physical volumes to be used in parallel when accessing the data.

LVCB (Logical Volume Control Block):- The LVCB stores the attributes of the LV. JFS does not access this area. Traditionally it was the first 512-byte block of the logical volume.


 * 1) getlvcb -AT lvname

Creating a new log logical volume:

You can create logical volumes using the mklv command.


 * 1) mklv -y lvname -t jfs2log vgname 1 pvname ( creates the log LV)


 * 1) logform -V jfs2 /dev/lvname ( formats the new log logical volume)


 * 1) chfs -a log=/dev/lvname /fsname ( changes the log LV of the file system; it can be checked in /etc/filesystems)

Removing a logical volume:-


 * 1) rmlv lvname (prompts for user confirmation and then deletes lv.)


 * 1) umount /fs ( closes lv)


 * 1) rmlv -p hdiskname lvname ( deletes the partitions of the LV located on the given hdisk, after prompting for user confirmation)


 * 1) lslv -l lv1 ( confirms that the physical partitions of the LV located on the hdisk were deleted)

Listing information about logical volumes:-


 * 1) lslv lvname


 * 1) lslv -l lv1

O/p -> lv1:/fs1

PV COPIES IN BAND DISTRIBUTION

hdisk5 009:000:000 66% 000:003:000:000:006

hdisk6 009:000:000 66% 000:003:000:000:006


 * 1) lsvg -p testvg

O/p -> testvg:

PV_NAME PV STATE TOTAL PPs FREE PPs FREE DISTRIBUTION

hdisk5 active 273 266 55..48..54..54..55

hdisk6 active 273 273 55..55..54..54..55

hdisk7 active 273 266 55..48..54..54..55


 * 1) lslv -m testlv

O/p -> testlv:/test

LP PP1 PV1 PP2 PV2 PP3 PV3

0001 0056 hdisk5 0059 hdisk7

0002 0057 hdisk5 0060 hdisk7

0003 0058 hdisk5 0061 hdisk7


 * 1) lslv -n hdisk6 testlv


 * 1) getlvcb -AT lv1 ( displays the LVCB of a logical volume using this intermediate-level command)

Increasing the size of a logical volume:-

Additional logical partitions can be added to an already existing logical volume using the extendlv command. By default, the logical volume is expanded while preserving its characteristics. The initial characteristics of the whole volume group will remain unchanged. You can specify one or multiple disks. You can also specify blocks whose size is measured in KB, MB, or GB.


 * 1) extendlv -a ie -ex lv1 3 hdisk5 hdisk6


 * 1) lslv -l lv1
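When extendlv is given a size in KB/MB/GB instead of a number of LPs, the request is in effect rounded up to whole partitions; a sketch of that conversion, assuming the 256 MB PP size used in this VG (the 1000 MB request is illustrative):

```shell
# A size request to extendlv (e.g. `extendlv lv1 1000M`) is rounded
# up to whole logical partitions. Values here are illustrative.
pp_mb=256
req_mb=1000
lps=$(( (req_mb + pp_mb - 1) / pp_mb ))   # ceiling division
echo "a ${req_mb} MB request allocates $lps LPs = $((lps * pp_mb)) MB"
```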

Copying a logical volume:-

You can copy the content of a logical volume to either a new or an already existing logical volume.


 * 1) cplv -v dumpvg -y lvA lvB

Creating copies of logical volumes

We use the mklvcopy command to create and synchronize one extra copy of each of the logical partitions of logical volume lv1.


 * 1) mklvcopy -k lv1 3 hdisk7 &


 * 1) lslv -m lv1

Changing characteristics of logical volumes:-

We use the chlv command to change, for logical volume lv1, the maximum number of logical partitions to 1000 and the scheduling policy for I/O operations to parallel/round-robin.


 * 1) chlv -x 1000 -d pr lv1


 * 1) lslv lv1

Splitting a logical volume:-

You can use the splitlvcopy command to split a logical volume that has at least two copies of each logical partition into two different logical volumes. If the original logical volume contains a file system, the data from the newly created logical volume will have to be accessed as a different file system.


 * 1) umount /test  (closes logical volume testlv.)


 * 1) splitlvcopy -y copylv testlv 2 (splits the logical volume.)


 * 1) crfs -v jfs2 -d /dev/copylv -m /copy  ( creates the file system structure for copylv. Note that this command will destroy any file system data.)


 * 1) lsvg -l testvg


 * 1) lslv -m testlv


 * 1) lslv -m copylv

If you want to maintain the file system data on the original logical volume, instead of running the crfs command in the last step, perform the following:


 * 1) mkdir /copy (creates a copy directory.)


 * 1) mount /dev/copylv /copy (mounts the copied file system.)

Edit the /etc/filesystems file manually and add an entry for the /copy mount point.

Removing a copy of a logical volume:-

You can use the rmlvcopy command to remove copies of logical partitions of a logical volume


 * 1) rmlvcopy testlv 2 hdisk6  ( removes copies located on hdisk6 and leaves two mirror copies.)

 * 1) lslv -m testlv ( shows that testlv now has two mirror copies located on hdisk5 and hdisk7.)

How to Extend/Reduce LVM’s (Logical Volume Management) in Linux – Part II

by Babin Lonston | Published: August 8, 2014 | Last Updated: June 27, 2017

Previously we have seen how to create flexible disk storage using LVM. Here we are going to see how to extend a volume group and how to extend and reduce a logical volume. With Logical Volume Management (LVM), also called a flexible volume file system, we can reduce or extend partitions.

Extend/Reduce LVMs in Linux

Requirements -> Create Flexible Disk Storage with LVM – Part I

When do we need to reduce volume?

We may need a separate partition for some other use, or we may need to expand a partition that is low on space. In that case we can shrink a larger partition and grow the low-space one easily with the following simple steps.

My Server Setup – Requirements:
• Operating System – CentOS 6.5 with LVM installed
• Server IP – 192.168.0.200

How to Extend Volume Group and Reduce Logical Volume

Logical Volume Extending

Currently we have one PV, one VG and two LVs. Let's list them one by one using the following commands.
 * 1) pvs
 * 2) vgs
 * 3) lvs

Logical Volume Extending

There is no free space available in the physical volume or the volume group, so we can't extend the LV yet. To extend it we need to add a physical volume (PV) and then grow the volume group with it. That gives us enough space to extend the logical volume, so first we add a physical volume.

To add a new PV we use fdisk to create an LVM partition:
• Press n to create a new partition.
• Choose p for a primary partition.
• Choose the partition number for the new primary partition (press 1 if it is the first available).
• Press t to change the partition type.
• Type 8e to change the partition type to Linux LVM.
• Press p to print the partition table (optional).
• Press w to write the changes.
 * 1) fdisk -cu /dev/sda

Restart the system once completed.

Create LVM Partition

List and check the partition we have created using fdisk.
 * 1) fdisk -l /dev/sda

Verify LVM Partition

Next, create the new PV (Physical Volume) using the following command.
 * 1) pvcreate /dev/sda1

Verify the PV using the command below.
 * 1) pvs

Create Physical Volume

Extending Volume Group

Add this PV to the vg_tecmint VG to extend the volume group, giving us more space for expanding the LV.
 * 1) vgextend vg_tecmint /dev/sda1

Let us check the size of the volume group now:
 * 1) vgs

Extend Volume Group

We can also see which PVs were used to create a particular volume group:
 * 1) pvscan

Check Volume Group

Here we can see which volume groups are on which physical volumes. The PV we just added is totally free. Let us check the size of each logical volume we currently have before expanding.

Check All Logical Volume
• LogVol00 is defined for swap.
• LogVol01 is defined for / (root), currently 16.50 GB.
• LogVol01 currently occupies 4226 physical extents (PE).

Now we are going to expand the / partition, LogVol01. After expanding we can list the sizes as above for confirmation. We can extend using GB or PE, as explained in LVM Part I; here I'm using PE to extend.

To get the available physical extent count, run:
 * 1) vgdisplay

Check Available Physical Size

There are 4607 free PEs available, which equals about 18 GB of free space, so we can expand our logical volume by up to 18 GB. Let us use the PE count to extend:
 * 1) lvextend -l +4607 /dev/vg_tecmint/LogVol01

Use + to add more space. After extending, we need to resize the file system:
 * 1) resize2fs /dev/vg_tecmint/LogVol01

Expand Logical Volume
• The command above extends the logical volume using physical extents.
• Here we can see it extended to 34 GB from 16.51 GB.
• Resize the file system if it is mounted and currently in use.
• For extending logical volumes we do not need to unmount the file system.
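The "4607 free PEs ≈ 18 GB" figure can be sanity-checked with plain shell arithmetic, assuming the default 4 MB extent size reported by vgdisplay:

```shell
# 4607 free extents x 4 MB per extent = total free space in MB
FREE_PE=4607
PE_SIZE_MB=4
echo $(( FREE_PE * PE_SIZE_MB ))   # prints 18428 (MB), just under 18 GB
```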

Now let's see the size of the resized logical volume:
 * 1) lvdisplay

Resize Logical Volume
• LogVol01 is the extended / volume.
• After extending it is 34.50 GB, up from 16.50 GB.
• Current extents: before extending there were 4226; we added 4607, so there are now 8833 in total.

Now if we check the VG's free PE count, it will be 0:
 * 1) vgdisplay

See the result of extending.
 * 1) pvs
 * 2) vgs
 * 3) lvs

Verify Resize Partition
• A new physical volume was added.
• Volume group vg_tecmint was extended from 17.51 GB to 35.50 GB.
• Logical volume LogVol01 was extended from 16.51 GB to 34.50 GB.

Here we have completed the process of extending the volume group and logical volumes. Now let us move to an interesting part of logical volume management.

Reducing Logical Volume (LVM)

Here we are going to see how to reduce logical volumes. Everyone says this is the critical part and may end in disaster, but reducing an LV is the most interesting part of logical volume management.
• Before starting, always back up your data, so that it is not a headache if something goes wrong.
• Reducing a logical volume involves several steps that need to be done very carefully.
• While extending a volume we can do it with the volume mounted (online), but to reduce we must unmount the file system first.

Let's see the steps below:
• Unmount the file system.
• Check the file system after unmounting.
• Reduce the file system.
• Reduce the logical volume size below the current size.
• Recheck the file system for errors.
• Remount the file system.

For demonstration, I have created a separate volume group and logical volume. Here I'm going to reduce the logical volume tecmint_reduce_test. It is now 18 GB in size and we need to reduce it to 10 GB without data loss; that means removing 8 GB of the 18 GB. There are already 4 GB of data in the volume. 18GB ---> 10GB

While reducing, we remove only 8 GB, so the volume will end up at 10 GB after the reduction.
 * 1) lvs

Reduce Logical Volume

Here we can see the file-system information.
 * 1) df -h

Check File System Size
• The size of the volume is 18 GB.
• 3.9 GB is already used.
• 13 GB is available.

First unmount the mount point.
 * 1) umount -v /mnt/tecmint_reduce_test/

Unmount Partition

Then check the file system for errors using the following command.
 * 1) e2fsck -ff /dev/vg_tecmint_extra/tecmint_reduce_test

Scan Partition for Errors

Note: the file system must pass all five passes of the check; if it does not, there may be an issue with your file system.

Next, reduce the file-system.
 * 1) resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test 10G

Reduce File System

Reduce the logical volume using a GB size:
 * 1) lvreduce -L -8G /dev/vg_tecmint_extra/tecmint_reduce_test

Reduce Logical Partition

To reduce a logical volume using a PE count, we need to know the default PE size and the total PEs of the volume group, and then do a small calculation to get an accurate reduction size.
 * 1) lvdisplay vg_tecmint_extra

Here we need a little calculation, using the bc command, to get the PE count for the 8 GB we want to remove: 8 GB x 1024 = 8192 MB; 8192 MB / 4 MB per PE = 2048 PE.

Press CTRL+D to exit from bc.

Calculate PE Size
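The same arithmetic can be done in plain shell without bc, assuming the 4 MB PE size shown by vgdisplay:

```shell
# Extents to remove when shrinking by 8 GB with a 4 MB PE size:
# 8 GB = 8 * 1024 MB = 8192 MB; 8192 MB / 4 MB = 2048 extents.
GB_TO_REMOVE=8
PE_SIZE_MB=4
echo $(( GB_TO_REMOVE * 1024 / PE_SIZE_MB ))   # prints 2048
```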

Reduce the size using PE.
 * 1) lvreduce -l -2048 /dev/vg_tecmint_extra/tecmint_reduce_test

Reduce Size Using PE

Resize the file system back. If there is any error in this step, it means we have messed up our file system.
 * 1) resize2fs /dev/vg_tecmint_extra/tecmint_reduce_test

Resize File System

Mount the file system back to the same mount point.
 * 1) mount /dev/vg_tecmint_extra/tecmint_reduce_test /mnt/tecmint_reduce_test/

Mount File System

Check the size of partition and files.
 * 1) lvdisplay vg_tecmint_extra

Here we can see the final result: the logical volume was reduced to 10 GB.

Verify Logical Volume Size

In this article, we have seen how to extend the volume group, logical volume and reduce the logical volume. In the next part (Part III), we will see how to take a Snapshot of logical volume and restore it to earlier stage.

ls -lrt | awk '{ if($5>=300 && $5<=1000) {print $0} }'
ls -lrt | awk '{print $(NF-1)}'
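A runnable illustration of the first filter on made-up long-listing lines (column 5 is the size field in `ls -l` output):

```shell
# Keep only lines whose 5th field (size) is between 300 and 1000.
printf 'f1 1 u g 250 Jan 1 a.txt\nf2 1 u g 500 Jan 1 b.txt\nf3 1 u g 1500 Jan 1 c.txt\n' |
awk '{ if ($5>=300 && $5<=1000) print $0 }'
# prints: f2 1 u g 500 Jan 1 b.txt
```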

ls -lrt | awk '{if ($0~"sql") for (i=1;i<=2;i++) {print $0} }'
ls -lrt | awk '{if ($0~"sql") for (i=1;i<=2;i++) {print $0} else {print $0} }'

cat a
1  New York, New York[10]  8,244,910   1   New York-Northern New Jersey-Long Island, NY-NJ-PA MSA  19,015,900  1   New York-Newark-Bridgeport, NY-NJ-CT-PA CSA 22,214,083
2  Los Angeles, California 3,819,702   2   Los Angeles-Long Beach-Santa Ana, CA MSA    12,944,801  2   Los Angeles-Long Beach-Riverside, CA CSA    18,081,569
3  Chicago, Illinois   2,707,120   3   Chicago-Joliet-Naperville, IL-IN-WI MSA 9,504,753   3   Chicago-Naperville-Michigan City, IL-IN-WI CSA  9,729,825
4  Houston, Texas  2,145,146   4   Dallas-Fort Worth-Arlington, TX MSA 6,526,548   4   Washington-Baltimore-Northern Virginia, DC-MD-VA-WV CSA 8,718,083
5  Philadelphia, Pennsylvania[11]  1,536,471   5   Houston-Sugar Land-Baytown, TX MSA  6,086,538   5   Boston-Worcester-Manchester, MA-RI-NH CSA   7

cut -d"," -f -4 a | awk -F"," '{print $1","$2",",$3","substr($4,1,3)}'
1  New York, New York[10]  8, 244,910
2  Los Angeles, California 3, 819,702
3  Chicago, Illinois   2, 707,120
4  Houston, Texas  2, 145,146
5  Philadelphia, Pennsylvania[11]  1, 536,471

cut -d"," -f -4 a | awk '{FS=","} {OFS=","} {print $1,$2,$3,substr($4,1,3)}'
1  New York, New York[10]  8,244,910
2  Los Angeles, California 3,819,702
3  Chicago, Illinois   2,707,120
4  Houston, Texas  2,145,146
5  Philadelphia, Pennsylvania[11]  1,536,471
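One awk subtlety worth noting: assigning FS in the main rule only takes effect from the second input record, because the first record has already been split when the assignment runs. Setting FS in a BEGIN block (or with -F) makes the first record split correctly too. A small illustration with made-up data:

```shell
# FS set in the main block: the first record is still split on whitespace.
echo 'a,b,c' | awk '{FS=","} {print $1}'        # prints: a,b,c
# FS set in BEGIN: the first record is split on commas as intended.
echo 'a,b,c' | awk 'BEGIN{FS=","} {print $1}'   # prints: a
```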

awk '{if ( $2=="" || $3=="" || $4 =="" ) {print "Not all scores are available for "$1}}' a
awk '{ if($2>=50 && $3>=50 && $4>=50) {print $1,": Pass"} else {print $1,": Fail"} }'
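A runnable version of the pass/fail one-liner, fed an inline sample marks file (the names and scores are made up):

```shell
# Fields: name score1 score2 score3; pass requires all three >= 50.
printf 'ram 60 55 70\nraja 40 80 90\n' |
awk '{ if ($2>=50 && $3>=50 && $4>=50) {print $1,": Pass"} else {print $1,": Fail"} }'
# prints:
# ram : Pass
# raja : Fail
```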

cat a 1234 5678 9101 1234 2999 5178 9101 2234 9999 5628 9201 1232 8888 3678 9101 1232

sed 's/[0-9]\{4\}/****/1' a | sed 's/[0-9]\{4\}/****/1' | sed 's/[0-9]\{4\}/****/1'
**** **** **** 1234
**** **** **** 2234
**** **** **** 1232
**** **** **** 1232
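The three piped sed processes above can be collapsed into a single sed invocation with three chained s commands; each un-flagged s masks the next leftmost remaining 4-digit run:

```shell
echo '1234 5678 9101 1232' |
sed 's/[0-9]\{4\}/****/; s/[0-9]\{4\}/****/; s/[0-9]\{4\}/****/'
# prints: **** **** **** 1232
```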

sed 's/\(.*\) \(.*\) \(.*\) \(.*\)/\1 \3 \2 \4/g' a

echo "raja ram mahi mama"| sed 's/\(.*\) \(.*\) \(.*\) \(.*\)/\1 \3 \2 \4/g'

sed 's/\(.*\) \(.*\) \(.*\) \(.*\)/\4 \3 \2 \1/g'
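The full-reverse variant applied to a sample line: \1..\4 capture the four space-separated words, and the replacement emits them in reverse order.

```shell
echo 'raja ram mahi mama' | sed 's/\(.*\) \(.*\) \(.*\) \(.*\)/\4 \3 \2 \1/'
# prints: mama mahi ram raja
```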

sed '/unix/ a "Add a new line"' file.txt
a -- append, i -- insert, c -- change.

Access rights on directories:
• r allows users to list files in the directory;
• w means that users may delete files from the directory or move files into it;
• x means the right to access files in the directory. This implies that you may read files in the directory provided you have read permission on the individual files.

So, in order to read a file, you must have execute permission on the directory containing that file, and hence on any directory containing that directory as a subdirectory, and so on, up the tree.

--- "You would predominantly work on unix," he said. What would be the numbers that we are looking for?

- Please help to complete this request on priority, since the restoration for a production incident is affected by it.
- The job has aborted since there is no space left on the drive. Please help to delete the database from the following directory, since it will be restored soon anyway. Hostname: sinsa1100093.sg Path: /apps/oradbf/in/atin11
- RITM2403383 is raised for this and assigned to your queue. Please confirm closure. Please assist.
- RITM2365060 is raised to restore SG UAT C Prodbase and Sigbase on sinsa1100093.sg.server with 20th June 2016 BEOD data. The request is approved and currently in your queue. Kindly help to prioritize this request since it is related to a production incident.
- Please help to start with the following 2 requests on priority since they are required for a production incident.
- The attached roadmap for Dashboard that was given in the ITECOS is not in the IC guidelines (PFA, attachment 1). Kindly help us get it into the proper IC roadmap format (PFA, attachment 2 for reference).
- The below files are missing on the Spain A2 pre-prod server s00va9924863: /apps/atlas/backup/JAS9B2.bnp_exploit.tgz /apps/atlas/backup/CPP1/JAS9B2.rep_atlas2v0.tgz Could you please help us get these files?
- Thanks for your input. Could you please advise on the below?
- Could you provide clarification on the below, as it is blocking one of the validations in release RS16-2?
- This GDI has been waiting for more than a year now. Can anyone provide an update on this?
- We are waiting for feedback from the user. Will keep you posted.
- Could you please provide an update on the below restoration request?
- I have transferred them again under \\lnw552\Transfer\a45412 in the name ISM_Files_spain2. Please let us know if there is any issue.
- Any update on the below? Is there any update on the below?
- Could you please advise on the below for GDI 229916 (BSC-9143 / CBGS-17107)?
- Kindly update us on the below.
- We are facing the below issue while trying to log in to the ite3 link with the below details.
- I will check with the Dev team here and let you know their views on this.
- Thanks a lot for your action. Kindly let us know when the files are available / the sending is completed.
- The refresh is in progress. Please provide an update on this refresh.
- Prodbase restoration is in progress.
- This is a mandatory session for all team members. The below schedule is for Mumbai. Please complete the same.
- Any update on this refresh? Thanks for the update. Will wait for your file.
- Please advise us what the "missing 2 values" from the above description in the GDI are, so we can fix the same issue in production. This is a priority request and we expect your co-operation to fix this issue on priority.
- We will work on the below export with the attached script.
- Thank you Moussa for your prompt reply. We are only seeing "UK Monthly BNPP" in the drop-down of the ite3 link. Could you help add "UK Daily BNPP"?
- We performed ISM sending yesterday and have not received the ISM return for spain2 on S00VA9931707. Please find the details and attached files below. Kindly do the needful.
- Yes, the monthly loading for cemli2, hunga2 and polan2 for the IST phase, and we performed another round of monthly loading (monthly jump for UAT day-1 date compatibility) on all 3 sites without any issues.
- Please send the returns. I have validated the files from production; it is the same case there as well. Please help fix this issue ASAP.
- Could you send the ISM returns for the attached files ASAP?
- BSC-9516 and BSC-9295 are long-pending jiras, waiting for your feedback.

Kindly provide your feedback by EOD.

We are closing these jiras. Please feel free to open a new jira if the issue still persists.
- PFA, the file and the screenshot of the error.
- Could you please provide an update on the below?

We would really like to close this long pending item before this month end.

Thanks in advance for your response

--- As mentioned below, please provide an update on this GDI. It is getting high priority now; we need to resolve this issue ASAP.

We are facing an FS space issue while trying to refresh the spain2 env. Kindly increase it.

Server: S00VA9931707
Env: spain2
FS: /apps/spain1/infoc/
Space Increase: +4 GB

--- Please fix it as soon as possible because the user needs to extract data from the Significant base before month end. --- Please confirm if we can resolve this jira.

Could you please provide ETA for fixing the below issue.

Thanks in advance. - We have received a mail from Xavier regarding the missing file on the atlas side. Please find the details below. The feedback is pending only with you.

Please ensure you submit it ASAP

To PARIS IPS CIB IA BP 

We are getting the below error, while trying to connect to database.

->sqlplus /

SQL*Plus: Release 11.2.0.3.0 Production on Wed Aug 10 14:52:16 2016

Copyright (c) 1982, 2011, Oracle. All rights reserved.

ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
IBM AIX RISC System/6000 Error: 2: No such file or directory
Process ID: 0
Session ID: 0
Serial number: 0

Enter user-name:

Server: S00VA9926665
SID: IAMDE2I0
Env: euro2d

Kindly help fixing this issue ASAP.

Thanks & Regards,

-- https://apac.cib.echonet/teams/ISPL-CBT/ADM-TS/Lists/Team%20Leave%20Tracker/calendar.aspx since when she is waiting. ---