[libvirt] [PATCH] storage: Don't hold storage pool locked during wipeVol

Posted by Michal Privoznik 5 years, 9 months ago
Currently, the way virStorageVolWipe() works is that it looks up
the pool containing the given volume and holds the pool locked
throughout the API execution. This is suboptimal, because wiping a
volume means writing data to it, which can take ages. And if the
whole pool is locked during that operation, no other API can be
issued (not even pool-list).

Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
 src/storage/storage_backend_iscsi_direct.c | 5 +++++
 src/storage/storage_backend_rbd.c          | 7 ++++++-
 src/storage/storage_util.c                 | 8 +++++++-
 3 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/src/storage/storage_backend_iscsi_direct.c b/src/storage/storage_backend_iscsi_direct.c
index 1624066e9c..58d25557f1 100644
--- a/src/storage/storage_backend_iscsi_direct.c
+++ b/src/storage/storage_backend_iscsi_direct.c
@@ -691,6 +691,9 @@ virStorageBackenISCSIDirectWipeVol(virStoragePoolObjPtr pool,
     if (!(iscsi = virStorageBackendISCSIDirectSetConnection(pool, NULL)))
         return -1;
 
+    vol->in_use++;
+    virObjectUnlock(pool);
+
     switch ((virStorageVolWipeAlgorithm) algorithm) {
     case VIR_STORAGE_VOL_WIPE_ALG_ZERO:
         if (virStorageBackendISCSIDirectVolWipeZero(vol, iscsi) < 0) {
@@ -719,6 +722,8 @@ virStorageBackenISCSIDirectWipeVol(virStoragePoolObjPtr pool,
  cleanup:
     virISCSIDirectDisconnect(iscsi);
     iscsi_destroy_context(iscsi);
+    virObjectLock(pool);
+    vol->in_use--;
     return ret;
 }
 
diff --git a/src/storage/storage_backend_rbd.c b/src/storage/storage_backend_rbd.c
index 642cacb673..30c94c109b 100644
--- a/src/storage/storage_backend_rbd.c
+++ b/src/storage/storage_backend_rbd.c
@@ -1212,7 +1212,10 @@ virStorageBackendRBDVolWipe(virStoragePoolObjPtr pool,
     VIR_DEBUG("Wiping RBD image %s/%s", def->source.name, vol->name);
 
     if (!(ptr = virStorageBackendRBDNewState(pool)))
-        goto cleanup;
+        return -1;
+
+    vol->in_use++;
+    virObjectUnlock(pool);
 
     if ((r = rbd_open(ptr->ioctx, vol->name, &image, NULL)) < 0) {
         virReportSystemError(-r, _("failed to open the RBD image %s"),
@@ -1271,6 +1274,8 @@ virStorageBackendRBDVolWipe(virStoragePoolObjPtr pool,
         rbd_close(image);
 
     virStorageBackendRBDFreeState(&ptr);
+    virObjectLock(pool);
+    vol->in_use--;
 
     return ret;
 }
diff --git a/src/storage/storage_util.c b/src/storage/storage_util.c
index 42a9b6abf0..5c1fb7b7d3 100644
--- a/src/storage/storage_util.c
+++ b/src/storage/storage_util.c
@@ -2750,7 +2750,7 @@ storageBackendVolWipePloop(virStorageVolDefPtr vol,
 
 
 int
-virStorageBackendVolWipeLocal(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+virStorageBackendVolWipeLocal(virStoragePoolObjPtr pool,
                               virStorageVolDefPtr vol,
                               unsigned int algorithm,
                               unsigned int flags)
@@ -2759,6 +2759,9 @@ virStorageBackendVolWipeLocal(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
 
     virCheckFlags(0, -1);
 
+    vol->in_use++;
+    virObjectUnlock(pool);
+
     VIR_DEBUG("Wiping volume with path '%s' and algorithm %u",
               vol->target.path, algorithm);
 
@@ -2769,6 +2772,9 @@ virStorageBackendVolWipeLocal(virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
                                              vol->target.allocation, false);
     }
 
+    virObjectLock(pool);
+    vol->in_use--;
+
     return ret;
 }
 
-- 
2.16.4
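
The three hunks above all apply the same shape of change; here is a
minimal distilled sketch of the pattern (wipeWork() stands in for the
backend-specific I/O and is not a real libvirt function):

static int
exampleWipeVol(virStoragePoolObjPtr pool,
               virStorageVolDefPtr vol,
               unsigned int algorithm)
{
    int ret;

    /* Mark the volume busy while the pool lock is still held... */
    vol->in_use++;
    /* ...then drop the pool lock so other pool APIs can proceed. */
    virObjectUnlock(pool);

    /* The long-running I/O runs with the pool unlocked. */
    ret = wipeWork(vol, algorithm);

    /* Re-acquire the lock before clearing the busy marker. */
    virObjectLock(pool);
    vol->in_use--;

    return ret;
}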

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list
Re: [libvirt] [PATCH] storage: Don't hold storage pool locked during wipeVol
Posted by John Ferlan 5 years, 9 months ago

On 08/13/2018 06:49 AM, Michal Privoznik wrote:
> Currently, the way virStorageVolWipe() works is that it looks up
> the pool containing the given volume and holds the pool locked
> throughout the API execution. This is suboptimal, because wiping a
> volume means writing data to it, which can take ages. And if the
> whole pool is locked during that operation, no other API can be
> issued (not even pool-list).
> 
> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
> ---
>  src/storage/storage_backend_iscsi_direct.c | 5 +++++
>  src/storage/storage_backend_rbd.c          | 7 ++++++-
>  src/storage/storage_util.c                 | 8 +++++++-
>  3 files changed, 18 insertions(+), 2 deletions(-)
> 

Why not be more similar to storageVolCreateXMLFrom? That is, handle the
in_use incr/decr at the storage driver level. Seems we could create a
helper that would follow the same steps...

For volume wiping, RBD and iscsi-direct use the @pool (obj), but by
incrementing not only vol->in_use but also the pool asyncjobs, we can
inhibit the undefine, destroy, or deletion of the pool that would cause
issues for those uses AFAICT. Whether inhibiting refresh during these
operations is a good idea is perhaps a matter of opinion - although I
suppose if a volume is open for write (locked), trying to open/read to
get stat-type information is probably going to wait anyway (although
I'll admit I haven't put much time or research into that thought - just
going by gut feel ;-)).

BTW: Wouldn't uploadVol have the same issues?  Seems we have only cared
about the build-vol-from path. Since uploadVol checks vol->in_use, it
seems logical, by the same argument as above, that it too should use
some new helper to change the pool asyncjobs and vol in_use. The
building setting could just be another parameter.
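
A hypothetical helper pair along these lines (the names
storageVolJobBegin/End are invented here; the asyncjobs accessors are
the ones the create-from path already uses, and @building mirrors the
voldef->building flag that create/create-from set):

static void
storageVolJobBegin(virStoragePoolObjPtr obj,
                   virStorageVolDefPtr voldef,
                   bool building)
{
    /* Inhibit pool undefine/destroy/delete while the job runs... */
    virStoragePoolObjIncrAsyncjobs(obj);
    /* ...and mark the volume itself as busy. */
    voldef->in_use++;
    if (building)
        voldef->building = true;
    virObjectUnlock(obj);
}

static void
storageVolJobEnd(virStoragePoolObjPtr obj,
                 virStorageVolDefPtr voldef,
                 bool building)
{
    virObjectLock(obj);
    voldef->in_use--;
    if (building)
        voldef->building = false;
    virStoragePoolObjDecrAsyncjobs(obj);
}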

John

> diff --git a/src/storage/storage_backend_iscsi_direct.c b/src/storage/storage_backend_iscsi_direct.c
> index 1624066e9c..58d25557f1 100644
> --- a/src/storage/storage_backend_iscsi_direct.c
> +++ b/src/storage/storage_backend_iscsi_direct.c
> @@ -691,6 +691,9 @@ virStorageBackenISCSIDirectWipeVol(virStoragePoolObjPtr pool,

<sigh>

Should be BackendISCSI instead of BackenISCSI

and I see this whole new module used single blank line spacing between
functions instead of 2 blank lines. Both would be trivial patches IMO.

>      if (!(iscsi = virStorageBackendISCSIDirectSetConnection(pool, NULL)))
>          return -1;
>  
> +    vol->in_use++;
> +    virObjectUnlock(pool);
> +
>      switch ((virStorageVolWipeAlgorithm) algorithm) {
>      case VIR_STORAGE_VOL_WIPE_ALG_ZERO:
>          if (virStorageBackendISCSIDirectVolWipeZero(vol, iscsi) < 0) {
> @@ -719,6 +722,8 @@ virStorageBackenISCSIDirectWipeVol(virStoragePoolObjPtr pool,
>   cleanup:
>      virISCSIDirectDisconnect(iscsi);
>      iscsi_destroy_context(iscsi);
> +    virObjectLock(pool);
> +    vol->in_use--;
>      return ret;
>  }

[...]

Re: [libvirt] [PATCH] storage: Don't hold storage pool locked during wipeVol
Posted by Michal Privoznik 5 years, 9 months ago
On 08/16/2018 05:22 PM, John Ferlan wrote:
> 
> 
> On 08/13/2018 06:49 AM, Michal Privoznik wrote:
>> Currently, the way virStorageVolWipe() works is that it looks up
>> the pool containing the given volume and holds the pool locked
>> throughout the API execution. This is suboptimal, because wiping a
>> volume means writing data to it, which can take ages. And if the
>> whole pool is locked during that operation, no other API can be
>> issued (not even pool-list).
>>
>> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
>> ---
>>  src/storage/storage_backend_iscsi_direct.c | 5 +++++
>>  src/storage/storage_backend_rbd.c          | 7 ++++++-
>>  src/storage/storage_util.c                 | 8 +++++++-
>>  3 files changed, 18 insertions(+), 2 deletions(-)
>>
> 
> Why not be more similar to storageVolCreateXMLFrom? That is, handle the
> in_use incr/decr at the storage driver level. Seems we could create a
> helper that would follow the same steps...

I'm not sure we can do that. Sure, I thought of that, but then I saw
virStorageBackendRBDVolWipe(), which calls virStorageBackendRBDNewState()
-> virStoragePoolObjGetDef(), and we certainly do not want to call that
with the pool unlocked.


> 
> For volume wiping, RBD and iscsi-direct use the @pool (obj), but by
> incrementing not only vol->in_use but also the pool asyncjobs, we can
> inhibit the undefine, destroy, or deletion of the pool that would cause
> issues for those uses AFAICT.

Ah, okay, I hadn't realized we have asyncJobs. That way we can unlock
the pool object just before calling the backend callback. However, for
the RBD and iscsi-direct cases I think we should still guard the
PoolObjGetDef() calls with a pool lock+unlock.
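
Concretely, that guard could be as small as re-taking the lock around
the def access; a sketch (the function name is invented; VIR_STRDUP
copies the string out while the lock is held):

static int
exampleCopyPoolSourceName(virStoragePoolObjPtr pool,
                          char **sourceName)
{
    virStoragePoolDefPtr def;
    int ret = -1;

    /* Re-take the pool lock just long enough to read the def and
     * copy out what the unlocked code path needs. */
    virObjectLock(pool);
    def = virStoragePoolObjGetDef(pool);
    if (VIR_STRDUP(*sourceName, def->source.name) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    virObjectUnlock(pool);
    return ret;
}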

> Whether inhibiting refresh during these operations is a good idea is
> perhaps a matter of opinion - although I suppose if a volume is open
> for write (locked), trying to open/read to get stat-type information
> is probably going to wait anyway (although I'll admit I haven't put
> much time or research into that thought - just going by gut feel ;-)).

Whether it's a good idea or not, we should be prepared for it. But if
we set an asyncJob then refresh is denied, so we are safe IMO.

> 
> BTW: Wouldn't uploadVol have the same issues?  Seems we have only cared
> about the build-vol-from path. Since uploadVol checks vol->in_use, it
> seems logical, by the same argument as above, that it too should use
> some new helper to change the pool asyncjobs and vol in_use. The
> building setting could just be another parameter.

Okay, I'll rework my patch.

Michal

Re: [libvirt] [PATCH] storage: Don't hold storage pool locked during wipeVol
Posted by John Ferlan 5 years, 9 months ago

On 08/17/2018 05:40 AM, Michal Privoznik wrote:
> On 08/16/2018 05:22 PM, John Ferlan wrote:
>>
>>
>> On 08/13/2018 06:49 AM, Michal Privoznik wrote:
>>> Currently, the way virStorageVolWipe() works is that it looks up
>>> the pool containing the given volume and holds the pool locked
>>> throughout the API execution. This is suboptimal, because wiping a
>>> volume means writing data to it, which can take ages. And if the
>>> whole pool is locked during that operation, no other API can be
>>> issued (not even pool-list).
>>>
>>> Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
>>> ---
>>>  src/storage/storage_backend_iscsi_direct.c | 5 +++++
>>>  src/storage/storage_backend_rbd.c          | 7 ++++++-
>>>  src/storage/storage_util.c                 | 8 +++++++-
>>>  3 files changed, 18 insertions(+), 2 deletions(-)
>>>
>>
>> Why not be more similar to storageVolCreateXMLFrom? That is, handle the
>> in_use incr/decr at the storage driver level. Seems we could create a
>> helper that would follow the same steps...
> 
> I'm not sure we can do that. Sure, I thought of that, but then I saw
> virStorageBackendRBDVolWipe(), which calls virStorageBackendRBDNewState()
> -> virStoragePoolObjGetDef(), and we certainly do not want to call that
> with the pool unlocked.
> 
> 
>>
>> For volume wiping, RBD and iscsi-direct use the @pool (obj), but by
>> incrementing not only vol->in_use but also the pool asyncjobs, we can
>> inhibit the undefine, destroy, or deletion of the pool that would cause
>> issues for those uses AFAICT.
> 
> Ah, okay, I hadn't realized we have asyncJobs. That way we can unlock
> the pool object just before calling the backend callback. However, for
> the RBD and iscsi-direct cases I think we should still guard the
> PoolObjGetDef() calls with a pool lock+unlock.
> 

Yeah, asyncjobs is kind of hidden and perhaps appears to be applicable
only to the create/create-from logic, but once I looked at it in terms
of what is inhibited when it's set, the realization kicked in.

Since the object cannot be deleted with asyncjobs incremented, and the
reason to hold the pool lock was to be sure the object wasn't deleted
while we were looking at it and getting the ObjDef, I think we're
still safe.

If a PoolDefine that changes the def happens while we're processing,
does it matter?  Maybe we need to alter the Define code to fail
similarly when an asyncjob is in progress.
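
If so, a guard in the define path could mirror the existing
undefine/destroy checks; a sketch (not part of the posted patch):

/* In the define path, after looking up @obj and its @def: */
if (virStoragePoolObjGetAsyncjobs(obj) > 0) {
    virReportError(VIR_ERR_OPERATION_INVALID,
                   _("pool '%s' has asynchronous jobs running."),
                   def->name);
    goto cleanup;
}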

John

>> Whether inhibiting refresh during these operations is a good idea is
>> perhaps a matter of opinion - although I suppose if a volume is open
>> for write (locked), trying to open/read to get stat-type information
>> is probably going to wait anyway (although I'll admit I haven't put
>> much time or research into that thought - just going by gut feel ;-)).
> 
> Whether it's a good idea or not, we should be prepared for it. But if
> we set an asyncJob then refresh is denied, so we are safe IMO.
> 
>>
>> BTW: Wouldn't uploadVol have the same issues?  Seems we have only cared
>> about the build-vol-from path. Since uploadVol checks vol->in_use, it
>> seems logical, by the same argument as above, that it too should use
>> some new helper to change the pool asyncjobs and vol in_use. The
>> building setting could just be another parameter.
> 
> Okay, I'll rework my patch.
> 
> Michal
> 
