Qcow2COWRegion has two attributes:

- The offset of the COW region from the start of the first cluster
  touched by the I/O request. Since it's always going to be positive
  and the maximum request size is at most INT_MAX, we can use a
  regular unsigned int to store this offset.

- The size of the COW region in bytes. This is guaranteed to be >= 0,
  so we should use an unsigned type instead.

On x86_64 this reduces the size of Qcow2COWRegion from 16 to 8 bytes.
It will also help keep some assertions simpler now that we know that
there are no negative numbers.

The prototype of do_perform_cow() is also updated to reflect these
changes.
Signed-off-by: Alberto Garcia <berto@igalia.com>
---
 block/qcow2-cluster.c | 4 ++--
 block/qcow2.h         | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
index d1c419f52b..a86c5a75a9 100644
--- a/block/qcow2-cluster.c
+++ b/block/qcow2-cluster.c
@@ -406,8 +406,8 @@ int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
 static int coroutine_fn do_perform_cow(BlockDriverState *bs,
                                        uint64_t src_cluster_offset,
                                        uint64_t cluster_offset,
-                                       int offset_in_cluster,
-                                       int bytes)
+                                       unsigned offset_in_cluster,
+                                       unsigned bytes)
 {
     BDRVQcow2State *s = bs->opaque;
     QEMUIOVector qiov;
diff --git a/block/qcow2.h b/block/qcow2.h
index 1801dc30dc..c26ee0a33d 100644
--- a/block/qcow2.h
+++ b/block/qcow2.h
@@ -301,10 +301,10 @@ typedef struct Qcow2COWRegion {
      * Offset of the COW region in bytes from the start of the first cluster
      * touched by the request.
      */
-    uint64_t offset;
+    unsigned offset;
 
     /** Number of bytes to copy */
-    int nb_bytes;
+    unsigned nb_bytes;
 } Qcow2COWRegion;
 
 /**
--
2.11.0
On 06/07/2017 09:08 AM, Alberto Garcia wrote:
> Qcow2COWRegion has two attributes:
>
> - The offset of the COW region from the start of the first cluster
>   touched by the I/O request. Since it's always going to be positive
>   and the maximum request size is at most INT_MAX, we can use a
>   regular unsigned int to store this offset.

I don't know if we will ever get to the point that we allow a 64-bit
request at the block layer (and then the block layer guarantees it is
split down to the driver's limits, which works when a driver is still
bound by 32-bit limits). But we ALSO know that a cluster is at most 2M
(in our current implementation of qcow2), so the offset of where the
COW region starts in relation to the start of a cluster is < 2M.

> - The size of the COW region in bytes. This is guaranteed to be >= 0,
>   so we should use an unsigned type instead.

And likewise, since a COW region is a sub-cluster, and clusters are
bounded at 2M, we also have a sub-int upper bound on the size of the
region.

> In x86_64 this reduces the size of Qcow2COWRegion from 16 to 8 bytes.
> It will also help keep some assertions simpler now that we know that
> there are no negative numbers.
>
> The prototype of do_perform_cow() is also updated to reflect these
> changes.
>
> Signed-off-by: Alberto Garcia <berto@igalia.com>
> ---
>  block/qcow2-cluster.c | 4 ++--
>  block/qcow2.h         | 4 ++--
>  2 files changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/block/qcow2-cluster.c b/block/qcow2-cluster.c
> index d1c419f52b..a86c5a75a9 100644
> --- a/block/qcow2-cluster.c
> +++ b/block/qcow2-cluster.c
> @@ -406,8 +406,8 @@ int qcow2_encrypt_sectors(BDRVQcow2State *s, int64_t sector_num,
>  static int coroutine_fn do_perform_cow(BlockDriverState *bs,
>                                         uint64_t src_cluster_offset,
>                                         uint64_t cluster_offset,
> -                                       int offset_in_cluster,
> -                                       int bytes)
> +                                       unsigned offset_in_cluster,
> +                                       unsigned bytes)

I don't know if the code base has a strong preference for 'unsigned
int' over 'unsigned', but it doesn't bother me.

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org
On Wed 07 Jun 2017 06:02:06 PM CEST, Eric Blake wrote:
>> - The offset of the COW region from the start of the first cluster
>>   touched by the I/O request. Since it's always going to be positive
>>   and the maximum request size is at most INT_MAX, we can use a
>>   regular unsigned int to store this offset.
>
> I don't know if we will ever get to the point that we allow a 64-bit
> request at the block layer (and then the block layer guarantees it is
> split down to the driver's limits, which works when a driver is still
> bound by 32-bit limits). But we ALSO know that a cluster is at most
> 2M (in our current implementation of qcow2), so the offset of where
> the COW region starts in relation to the start of a cluster is < 2M.

That's correct for the offset of the first region (m->cow_start),
however m->cow_end is also relative to the offset of the first
allocated cluster, so it can be > 2M if the request spans several
clusters.

So I guess the maximum theoretical offset would be something like
INT_MAX + 2*cluster_size (a bit below that actually).

>> - The size of the COW region in bytes. This is guaranteed to be >= 0,
>>   so we should use an unsigned type instead.
>
> And likewise, since a COW region is a sub-cluster, and clusters are
> bounded at 2M, we also have a sub-int upper bound on the size of the
> region.

That's correct.

Berto
On 06/08/2017 08:06 AM, Alberto Garcia wrote:
> On Wed 07 Jun 2017 06:02:06 PM CEST, Eric Blake wrote:
>
>>> - The offset of the COW region from the start of the first cluster
>>>   touched by the I/O request. Since it's always going to be positive
>>>   and the maximum request size is at most INT_MAX, we can use a
>>>   regular unsigned int to store this offset.
>>
>> I don't know if we will ever get to the point that we allow a 64-bit
>> request at the block layer (and then the block layer guarantees it is
>> split down to the driver's limits, which works when a driver is still
>> bound by 32-bit limits). But we ALSO know that a cluster is at most
>> 2M (in our current implementation of qcow2), so the offset of where
>> the COW region starts in relation to the start of a cluster is < 2M.
>
> That's correct for the offset of the first region (m->cow_start),
> however m->cow_end is also relative to the offset of the first
> allocated cluster, so it can be > 2M if the request spans several
> clusters.
>
> So I guess the maximum theoretical offset would be something like
> INT_MAX + 2*cluster_size (a bit below that actually).

At which point, your earlier claim that we bound requests at INT_MAX
takes over. But thanks for correcting me that the end COW region can
indeed be more than a cluster away from the beginning region.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3266
Virtualization:  qemu.org | libvirt.org