LVM – thin-pool only 43% full but almost all PEs used

So I’ve got a bit of a situation; I’ll try to keep it brief!
Initial server setup:

2x drives with soft RAID1
LVM thin-pool setup on the md3 RAID device

Then md3 (the RAID device) filled up, so in my haste to fix the problem (knowing I couldn’t get an additional drive added at the DC) I removed the second disk from the RAID array and set up a new PV on it. The remaining disk was left in the (now single-drive) RAID array as a PV in the VG. I then extended the thin-pool onto this “new” PV to effectively double the storage available to the VMs.
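For reference, the sequence was roughly equivalent to the following (a reconstruction, not the exact commands I ran; I’m assuming the freed partition is /dev/nvme0n1p4, as shown in the pvdisplay output below):

  # fail and remove the second member from the RAID1 array
  mdadm /dev/md3 --fail /dev/nvme0n1p4 --remove /dev/nvme0n1p4
  # turn the freed partition into a PV and add it to the VG
  pvcreate /dev/nvme0n1p4
  vgextend vg1 /dev/nvme0n1p4
  # grow the thin-pool onto the new free extents
  lvextend -l +100%FREE vg1/thin_pool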
The mistake I made when doing this (probably one of many) is that I forgot to wipe the drive first, so the previous LV structure was still present when I extended the thin-pool, and I didn’t realise this until some time afterwards. However, from my research I understand that creating a new PV on an existing volume is a destructive act, so despite the previous LV structure still showing, the new drive is being used by LVM and new data is being spanned across the two PVs.
Not a perfect solution but it resolved the initial issue of no space being available.

Now I’m left trying to resolve this issue:
The thin-pool across the two drives is only 43% full (data + metadata):

[root@XXX ~]# lvs
  LV                                        VG  Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  test                                      vg1 Vwi-a-tz--   5.00g thin_pool        20.00
  thin_pool                                 vg1 twi-aotz--   3.14t                  42.63  28.69

however…

[root@XXX ~]# pvdisplay -C
  PV             VG  Fmt  Attr PSize  PFree
  /dev/md3       vg1 lvm2 a--  <1.58t     0
  /dev/nvme0n1p4 vg1 lvm2 a--  <1.58t 15.73g
[root@XXX ~]# vgdisplay -C
  VG  #PV #LV #SN Attr   VSize  VFree
  vg1   2  59   0 wz--n- <3.16t 15.73g

It would appear that even though there should be enough space left across the two drives, the PEs that were “left over” (from when I forgot to wipe the disk) are still being treated as if they’re in use, so I have barely any free PEs left.

When you run lvs, the LVs are not duplicated in either count or size; they are exactly what I would expect them to be, and their usage is likewise what I would expect, not duplicated or doubled.
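As far as I can tell, the standard LVM reporting options should show exactly which LVs are consuming the allocated extents on each PV (the exact columns available depend on the LVM version):

  # how much of each PV is actually allocated
  pvs -o +pv_used
  # every LV segment and the PV extents backing it
  lvs -a -o +devices,seg_pe_ranges vg1
  # per-PV map of which LVs own which extent ranges
  pvdisplay --maps /dev/nvme0n1p4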

Any ideas on how to resolve this highly frustrating and self-made situation?

Any help would be enormously appreciated.

Author: papa_face

Running Windows in QEMU with LVM causes very slow disk access

I have the problem that whenever I try to run Windows within QEMU, disk access seems to become very slow after a short while. Surprisingly, disk access both from within the VM and from outside of it becomes slow.

I have both my home directory and my QEMU Windows drive on the same disk (this is a laptop, so I cannot use multiple disks), but on different LVM volumes (no qcow2 or anything, just the raw LV). After just a few minutes, Windows becomes unusably slow, and the host also becomes slow. As soon as I shut down the VM, the host becomes usable again. Using the Resource Monitor I traced the problem inside Windows to slow disk access, but the problem seems to be on the LVM side of the host. If I run iostat -xz I get the following:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          17,36   10,14    5,41   38,51    0,00   28,58

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz  aqu-sz  %util
dm-0             7,63     82,81     0,00   0,00   84,37    10,85   22,74    209,61     0,00   0,00  121,53     9,22    0,00      0,00     0,00   0,00    0,00     0,00    3,41   3,32
dm-1             0,05      0,27     0,00   0,00    5,05     5,49    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-2             0,00      0,09     0,00   0,00    6,10    21,75    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-3             2,28      9,12     0,00   0,00   21,28     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,05   0,53
dm-4             0,02      0,09     0,00   0,00    4,95     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-5             0,00      0,00     0,00   0,00   31,64     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-6             0,02      0,09     0,00   0,00    4,94     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-7             0,00      0,09     0,00   0,00    6,85    21,75    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-8            36,13   1454,39     0,00   0,00   46,74    40,25  528,37   2107,50     0,00   0,00  122,58     3,99    0,00      0,00     0,00   0,00    0,00     0,00   66,46  12,66
dm-9             7,63     82,77     0,00   0,00   84,49    10,85   22,74    213,86     0,00   0,00 1578,74     9,40    0,00      0,00     0,00   0,00    0,00     0,00   36,55   3,18
nvme0n1          4,49    176,13     5,82  56,45    0,19    39,26  101,54    445,00     0,07   0,07    0,95     4,38    0,00      0,00     0,00   0,00    0,00     0,00    0,07   0,38
sda             41,89   1547,29     4,19   9,10   46,21    36,93   41,60   2317,08   509,51  92,45  158,29    55,70    0,00      0,00     0,00   0,00    0,00     0,00    8,40  15,35

dm-8 is the Windows LV and dm-9 is my home drive, so for some reason data is being queued for both of these devices. The write speed while the system is sluggish is only somewhere around 1–5 MB/s at most, which is very slow for the drive in this system.
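For reference, the dm-N kernel names can be mapped back to LV names with the standard tools:

  # list device-mapper devices with their names
  dmsetup ls
  # show kernel names alongside LV names and mountpoints
  lsblk -o NAME,KNAME,TYPE,SIZE,MOUNTPOINT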

CPU usage is very low while the VM is running (both inside the VM, according to Resource Monitor, and on the host); usually it is only around 10%.

I am using virtio as the storage adapter and I have already tried different configurations (threads, caching, etc.), but nothing seems to change this problem.
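The setup is roughly equivalent to attaching the raw LV like this (a sketch, not my exact command line; /dev/vg0/windows is a placeholder for the actual LV path):

  # raw LV attached via virtio, bypassing the host page cache
  qemu-system-x86_64 -enable-kvm -m 8G \
    -drive file=/dev/vg0/windows,format=raw,if=virtio,cache=none,aio=native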

Is there some other configuration that I could try to get better disk access?

Author: LiKao

LVM: Fix a logical volume to a physical device

I’m not sure if what I want to do is necessary or actually helpful, but maybe someone can clarify, as I couldn’t find a good explanation/solution.

I have an LVM volume group that originally consisted of just a single PV. I have now added a second PV and converted one of the LVs into a mirror using lvconvert -m1 vg/data. I would like the other LVs to remain on the original disk and not be spanned over multiple disks. That is, if I later extend one of the LVs, it should never be allowed to be stored on two PVs (except if mirrored), the intention being that if a disk fails I can still recover all the data from the other PV.

Basically, I would like to modify the LV so it is fixed to a single PV, as when you create it using lvcreate -n fixedToDiskA -L10G vg /dev/sda.

Question: how do I know whether that’s already the case, i.e. that the LV will never grow to span data onto the second PV, and if it isn’t, what’s the command to make it so?
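What I’ve found so far: the devices column appears to show which PVs currently back each LV, and lvextend seems to accept the same trailing PV argument as lvcreate:

  # show which PVs back each LV
  lvs -o +devices vg
  # extend only onto a specific PV
  lvextend -L +5G vg/fixedToDiskA /dev/sda

but I can’t tell whether that pins the LV permanently or only constrains that single operation.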

Author: schneida

How would you (incrementally) backup KVM VMs based on LVM storage?

I’m running a set of KVM hypervisors on LVM storage (unfortunately I can’t use QCOW2, because the virtualization platform I’m using is strictly LVM-based).

The virtualization platform has very poor backup support, so I wrote a series of scripts that take LVM snapshots, grab the image with qemu-img, compress it, and store it on separate storage.
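The core of each backup is roughly the following (a simplified sketch; the VG/LV names and backup path are placeholders):

  # snapshot the VM's LV while it is running
  lvcreate -s -L 10G -n vm1_snap vg0/vm1
  # stream the snapshot into a compressed qcow2 image on the backup storage
  qemu-img convert -O qcow2 -c /dev/vg0/vm1_snap /backup/vm1-$(date +%F).qcow2
  # drop the snapshot
  lvremove -y vg0/vm1_snap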

My scripts work well enough, but with an increasing number of VMs and a growing amount of data to manage, they are beginning to show their limits.

Can someone suggest a free or commercial solution that does this job well?
This is what I’m doing now and what I need to do:

  • scheduled backups
  • daily and weekly rotation and retention
  • backup saved on external storage
  • restore system
  • (extra points for incremental backup)

The VMs are both Linux and Windows, so I can’t rely on filesystem-level tools inside the guests.

I don’t need a web UI or other frills, CLI management is enough.

Author: godzillante