LVM – thin pool only 43% full but almost all PEs used

So I’ve got a bit of a situation, I’ll try to keep it brief!
Initial server setup:

2x drives with soft RAID1
LVM thin-pool setup on the md3 RAID device

Then the md3 (RAID) device got full, so in my haste to fix the problem (knowing I couldn’t get an additional drive added at the DC) I removed the 2nd disk from the RAID array and set up a new PV on it. The remaining disk was left in the RAID (now just one drive) as a PV in the VG. I then extended the thin pool onto this “new” PV to effectively double the storage available to the VMs.
The mistake I made when doing this (probably one of many) is that I forgot to completely wipe the drive, so the previous LV structure was still present when I extended the thin pool, and I didn’t realise this until some time afterwards. However, from my research I understand that setting up a new PV on an existing volume is a destructive act, so despite the previous LV structure still showing, the new drive is being used by LVM and it is spanning any new data across the two PVs.
Not a perfect solution but it resolved the initial issue of no space being available.
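
For reference, the sequence I ran was roughly the following (from memory, so treat the exact flags as approximate rather than verbatim):

  # drop the 2nd member from the RAID1 array
  mdadm /dev/md3 --fail /dev/nvme0n1p4 --remove /dev/nvme0n1p4
  # turn the freed partition into a PV and add it to the existing VG
  pvcreate /dev/nvme0n1p4
  vgextend vg1 /dev/nvme0n1p4
  # grow the thin pool's data LV into the new space (give or take the few GiB still free)
  lvextend -l +100%FREE vg1/thin_pool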

Now I’m left trying to resolve this issue:
The thin pool across the two drives is only about 43% full of data (metadata is under 30%):

[root@XXX ~]# lvs
  LV                                        VG  Attr       LSize   Pool      Origin Data%  Meta%  Move Log Cpy%Sync Convert
  test                                      vg1 Vwi-a-tz--   5.00g thin_pool        20.00
  thin_pool                                 vg1 twi-aotz--   3.14t                  42.63  28.69

however…

[root@XXX ~]# pvdisplay -C
  PV             VG  Fmt  Attr PSize  PFree
  /dev/md3       vg1 lvm2 a--  <1.58t     0
  /dev/nvme0n1p4 vg1 lvm2 a--  <1.58t 15.73g
[root@XXX ~]# vgdisplay -C
  VG  #PV #LV #SN Attr   VSize  VFree
  vg1   2  59   0 wz--n- <3.16t 15.73g

It would appear that even though there should be enough space left across the two drives, the PEs that were “left over” (from when I forgot to wipe the drive) are still being treated as if they’re in use, so I barely have any free PEs left.

When I run lvs, the LVs are not duplicated in either number or size; they are exactly what I would expect, and their usage is likewise what I would expect, not duplicated or doubled.
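
For what it’s worth, these are the commands I was going to use to see exactly which LVs the allocated PEs belong to (standard LVM reporting, nothing exotic), and I’m happy to post the output:

  pvdisplay --maps /dev/md3 /dev/nvme0n1p4   # per-PV map of which LV owns each PE range
  lvs -a -o +devices,seg_pe_ranges vg1       # include hidden LVs (thin pool data/metadata) and their segments
  vgs -o +vg_extent_count,vg_free_count vg1  # total vs free extents at the VG level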

Any ideas on how to resolve this highly frustrating and self-made situation?

Any help would be enormously appreciated.

Author: papa_face

Running Windows in QEMU with LVM causes very slow disk access

I have a problem where, whenever I run Windows within QEMU, disk access seems to become very slow after a short while. Surprisingly, access to the disk both from within the VM and from outside it becomes slow.

I have both my home directory and my QEMU Windows drive on the same disk (this is a laptop, so I cannot use multiple disks), but on different LVM volumes (no qcow or anything, just the raw LV). After just a few minutes, Windows becomes unusably slow, and the host becomes slow too. As soon as I shut down the VM, the host becomes usable again. Using Resource Monitor I have traced the problem inside Windows back to slow disk access, but the problem seems to be on the LVM side of the host. If I run iostat -xz I get the following:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          17,36   10,14    5,41   38,51    0,00   28,58

Device            r/s     rkB/s   rrqm/s  %rrqm r_await rareq-sz     w/s     wkB/s   wrqm/s  %wrqm w_await wareq-sz     d/s     dkB/s   drqm/s  %drqm d_await dareq-sz  aqu-sz  %util
dm-0             7,63     82,81     0,00   0,00   84,37    10,85   22,74    209,61     0,00   0,00  121,53     9,22    0,00      0,00     0,00   0,00    0,00     0,00    3,41   3,32
dm-1             0,05      0,27     0,00   0,00    5,05     5,49    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-2             0,00      0,09     0,00   0,00    6,10    21,75    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-3             2,28      9,12     0,00   0,00   21,28     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,05   0,53
dm-4             0,02      0,09     0,00   0,00    4,95     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-5             0,00      0,00     0,00   0,00   31,64     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-6             0,02      0,09     0,00   0,00    4,94     4,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-7             0,00      0,09     0,00   0,00    6,85    21,75    0,00      0,00     0,00   0,00    0,00     0,00    0,00      0,00     0,00   0,00    0,00     0,00    0,00   0,00
dm-8            36,13   1454,39     0,00   0,00   46,74    40,25  528,37   2107,50     0,00   0,00  122,58     3,99    0,00      0,00     0,00   0,00    0,00     0,00   66,46  12,66
dm-9             7,63     82,77     0,00   0,00   84,49    10,85   22,74    213,86     0,00   0,00 1578,74     9,40    0,00      0,00     0,00   0,00    0,00     0,00   36,55   3,18
nvme0n1          4,49    176,13     5,82  56,45    0,19    39,26  101,54    445,00     0,07   0,07    0,95     4,38    0,00      0,00     0,00   0,00    0,00     0,00    0,07   0,38
sda             41,89   1547,29     4,19   9,10   46,21    36,93   41,60   2317,08   509,51  92,45  158,29    55,70    0,00      0,00     0,00   0,00    0,00     0,00    8,40  15,35

dm-8 is the Windows LV and dm-9 is my home drive. So for some reason data is being queued for both of these devices. The write speed isn’t terribly fast while the system is sluggish, somewhere around 1–5 MB/s at most, which is very slow for the drive in this system.
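
(For anyone wanting to double-check the dm-N mapping, this is how I matched the numbers to LVs:)

  dmsetup ls          # lists each device-mapper name with its (major, minor) pair
  ls -l /dev/mapper/  # the symlinks point at the corresponding dm-N nodes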

CPU usage is very low while the VM is running (both inside the VM according to Resource Monitor and on the host); it is usually only around 10%.

I am using virtio as the storage adapter and have already tried different configurations (threads, caching, etc.), but nothing seems to change the problem.
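
For completeness, stripped of the libvirt layer, the setup I keep coming back to boils down to something like this on the plain QEMU command line (the LV path is a placeholder, and cache/aio are the values I have been toggling):

  qemu-system-x86_64 -enable-kvm -m 8G -cpu host \
      -drive file=/dev/vg0/win10,format=raw,if=virtio,cache=none,aio=native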

Is there some other configuration I could try to get better disk access?

Author: LiKao

Correctly installing / configuring locally built qemu / libvirt

On Ubuntu 18.04, the default installation of qemu is something like version 3, and I needed virtiofs, which is only supported in later versions. So I uninstalled qemu and the related packages, downloaded the qemu 5.0 sources and compiled them locally.

All worked well, including make install, which put the binaries under /usr/local/, which I guess is the default prefix unless told otherwise.

Most things are working OK, but I’m now trying to get graceful shutdown / restart of guests working when the host is restarted, and have hit 2 snags so far.

  1. On host startup, I would see /usr/local/libexec/libvirt-guests.sh: 29: .: Can't open /usr/local/bin/gettext.sh. Of course, that’s not where gettext.sh normally lives, but I can get round that with ln -s /usr/bin/gettext.sh /usr/local/bin/gettext.sh
  2. No failure message there now, but later in the host boot logs I see libvirt-guests.sh[2166]: touch: cannot touch '/usr/local/var/lock/subsys/libvirt-guests': No such file or directory
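
Presumably I could paper over that second one in a similar way, e.g.:

  # guess at a workaround for snag 2: create the lock directory libvirt-guests expects
  mkdir -p /usr/local/var/lock/subsys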

I could go on symlinking files and creating directories so everything appears where libvirt expects it, but I’m wondering if the correct fix is actually to install qemu where it expects to be.

So, first question: is reinstalling the right approach, or have I just missed some basic configuration that would leave the local package where it is but allow everything to work as expected?

If not, I guess I will have to run ./configure --prefix=/usr and rebuild, but how can I cleanly remove the version currently installed in /usr/local/ first? I’d also ideally like to keep my current VM configurations. Searching for the XML file of a particular domain, I see two copies:

# find / -name 07x2.xml
/usr/local/var/run/libvirt/qemu/07x2.xml
/usr/local/etc/libvirt/qemu/07x2.xml

I’m not sure why there are two, but I guess I could just virsh dumpxml everything before removing anything.
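
If dumping everything first is the sensible route, I assume a loop along these lines would capture the persistent definitions (untested, just the obvious approach):

  mkdir -p ~/domain-backups
  for dom in $(virsh list --all --name); do
      virsh dumpxml "$dom" > ~/domain-backups/"$dom.xml"
  done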

Author: dsl101

How would you (incrementally) backup KVM VMs based on LVM storage?

I’m running a set of KVM hypervisors on LVM storage (unfortunately I can’t use QCOW2, because the virtualization platform I’m using is strictly LVM-based).

The virtualization platform has very poor backup support, so I wrote a series of scripts that take LVM snapshots, grab the image with qemu-img, compress it and store it on separate storage.
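
Stripped down, what my scripts do per VM is more or less the following (the VG/LV names and target path here are just placeholders):

  lvcreate -s -n win01_snap -L 10G /dev/vg0/win01   # snapshot the VM's LV while it keeps running
  qemu-img convert -O qcow2 -c /dev/vg0/win01_snap /backup/win01-$(date +%F).qcow2   # copy + compress
  lvremove -f /dev/vg0/win01_snap                   # drop the snapshot afterwards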

My scripts work well enough, but with an increasing number of VMs and more data to manage, they are beginning to show their limits.

Can someone suggest a free or commercial solution that does this job well?
This is what I’m doing now and what I need to do:

  • scheduled backups
  • daily and weekly rotation and retention
  • backup saved on external storage
  • restore system
  • (extra points for incremental backup)

The VMs are both Linux and Windows, so I can’t rely on tools inside the guest filesystems.

I don’t need a web UI or other frills, CLI management is enough.

Author: godzillante

How do I remove the default storage pool from a libvirt hypervisor, so that even after libvirtd restarts there is NO storage pool?

I want to remove the default storage pool from my virt-manager AND NOT HAVE IT COME BACK BY ITSELF, EVER. I can destroy it and undefine it all I want, but when I restart libvirtd (for me that’s “sudo systemctl restart libvirtd” in an Arch Linux terminal window) and restart virt-manager, the default storage pool is back, just like Frankenstein.
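
For the record, the removal itself is just the usual:

  virsh pool-destroy default    # stop the pool
  virsh pool-undefine default   # remove its persistent definition (and yet it reappears)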

I don’t want a storage pool of any kind. I simply want to move from the dual boot I have now (Arch Linux and Windows) to running the two OSes simultaneously. I intend to provision two physical disk partitions on the host as disks on the guest, and I can do this via the XML that defines the domain.
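
As an illustration of what I mean (the domain, partition and target names are made up), handing a partition straight to the guest needs no pool at all:

  virsh attach-disk win10 /dev/nvme0n1p5 vdb --targetbus virtio --config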

Or am I required to have a storage pool no matter what?

Author: Scott Petrack