Hi all!

Here is a small new io API: blk_co_pcache, which does copy-on-read
without an extra buffer for the read data. This means that only the
parts that need COR will actually be read, and only the corresponding
buffers allocated, no more.

This allows us to improve block-stream and NBD_CMD_CACHE a bit.

Vladimir Sementsov-Ogievskiy (3):
  block: implement blk_co_pcache
  block/stream: use blk_co_pcache
  nbd: improve CMD_CACHE: use blk_co_pcache

 include/block/block.h          |  8 ++++++-
 include/sysemu/block-backend.h |  7 ++++++
 block/io.c                     | 18 +++++++++-----
 block/stream.c                 | 19 +++++----------
 nbd/server.c                   | 43 +++++++++++++++++++++++++++-------
 5 files changed, 67 insertions(+), 28 deletions(-)

--
2.18.0
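As a sketch of what this means for block/stream.c (an assumed shape, not
the literal patch), stream_populate() can drop its bounce buffer: today
it allocates one with qemu_blockalign(), wraps it in a QEMUIOVector and
calls blk_co_preadv() with BDRV_REQ_COPY_ON_READ, only to throw the data
away. With the proposed API it reduces to roughly:

    /* Sketch, assuming the blk_co_pcache() signature proposed by this
     * series: trigger copy-on-read for a byte range without ever
     * materializing the data in a caller-visible buffer. */
    static int coroutine_fn stream_populate(BlockBackend *blk,
                                            int64_t offset, uint64_t bytes)
    {
        /* Only the parts that actually need COR get read and allocated. */
        return blk_co_pcache(blk, offset, bytes);
    }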
On 6/6/19 8:48 AM, Vladimir Sementsov-Ogievskiy wrote:
> Hi all!
>
> Here is a small new io API: blk_co_pcache, which does copy-on-read
> without an extra buffer for the read data. This means that only the
> parts that need COR will actually be read, and only the corresponding
> buffers allocated, no more.
>
> This allows us to improve block-stream and NBD_CMD_CACHE a bit.

I'd really like to see qemu-io gain access to calling this command, so
that we can add iotests coverage of this new feature. Note that the
in-development libnbd
(https://github.com/libguestfs/libnbd/commits/master) is also usable as
an NBD client that can drive NBD_CMD_CACHE, although it's still new
enough that we probably don't want to rely on it being available yet.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
06.06.2019 16:55, Eric Blake wrote:
> On 6/6/19 8:48 AM, Vladimir Sementsov-Ogievskiy wrote:
>> Hi all!
>>
>> Here is a small new io API: blk_co_pcache, which does copy-on-read
>> without an extra buffer for the read data. This means that only the
>> parts that need COR will actually be read, and only the corresponding
>> buffers allocated, no more.
>>
>> This allows us to improve block-stream and NBD_CMD_CACHE a bit.
>
> I'd really like to see qemu-io gain access to calling this command, so
> that we can add iotests coverage of this new feature. Note that the
> in-development libnbd
> (https://github.com/libguestfs/libnbd/commits/master) is also usable as
> an NBD client that can drive NBD_CMD_CACHE, although it's still new
> enough that we probably don't want to rely on it being available yet.

Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called on
the nbd driver? I didn't implement that, but maybe I should..

My aim was only to avoid the extra allocation and unnecessary reads. But
if we implement a full-featured io request, what should it do?

On qcow2 with a backing file it should pull data from the backing file
into the top layer, like copy-on-read. And for nbd it would send
NBD_CMD_CACHE? These semantics seem different, but why not?

--
Best regards,
Vladimir
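As a sketch of the nbd/server.c side of the series (patch 3), the
NBD_CMD_CACHE handling presumably reduces to a fragment like the one
below; the dispatch context and the error message are assumptions, but
nbd_send_generic_reply() is the existing reply helper:

    /* Assumed fragment from the NBD server's request dispatch: serve
     * NBD_CMD_CACHE without reading request->len bytes into a scratch
     * buffer; the COR path reads only what the top layer is missing. */
    case NBD_CMD_CACHE:
        ret = blk_co_pcache(exp->blk, request->from, request->len);
        return nbd_send_generic_reply(client, request->handle, ret,
                                      "caching data failed", errp);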
06.06.2019 17:07, Vladimir Sementsov-Ogievskiy wrote:
> 06.06.2019 16:55, Eric Blake wrote:
>> On 6/6/19 8:48 AM, Vladimir Sementsov-Ogievskiy wrote:
>>> Hi all!
>>>
>>> Here is a small new io API: blk_co_pcache, which does copy-on-read
>>> without an extra buffer for the read data. This means that only the
>>> parts that need COR will actually be read, and only the corresponding
>>> buffers allocated, no more.
>>>
>>> This allows us to improve block-stream and NBD_CMD_CACHE a bit.
>>
>> I'd really like to see qemu-io gain access to calling this command, so
>> that we can add iotests coverage of this new feature. Note that the
>> in-development libnbd
>> (https://github.com/libguestfs/libnbd/commits/master) is also usable as
>> an NBD client that can drive NBD_CMD_CACHE, although it's still new
>> enough that we probably don't want to rely on it being available yet.
>>
>
> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called on
> the nbd driver? I didn't implement that, but maybe I should..
>
> My aim was only to avoid the extra allocation and unnecessary reads. But
> if we implement a full-featured io request, what should it do?
>
> On qcow2 with a backing file it should pull data from the backing file
> into the top layer, like copy-on-read. And for nbd it would send
> NBD_CMD_CACHE? These semantics seem different, but why not?
>

Any opinions here? Should I resend, or could we use it as a first step,
not touching the external API but improving things a little bit?

--
Best regards,
Vladimir
Am 17.06.2019 um 13:20 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 06.06.2019 17:07, Vladimir Sementsov-Ogievskiy wrote:
> > 06.06.2019 16:55, Eric Blake wrote:
> >> On 6/6/19 8:48 AM, Vladimir Sementsov-Ogievskiy wrote:
> >>> Hi all!
> >>>
> >>> Here is a small new io API: blk_co_pcache, which does copy-on-read
> >>> without an extra buffer for the read data. This means that only the
> >>> parts that need COR will actually be read, and only the
> >>> corresponding buffers allocated, no more.
> >>>
> >>> This allows us to improve block-stream and NBD_CMD_CACHE a bit.
> >>
> >> I'd really like to see qemu-io gain access to calling this command,
> >> so that we can add iotests coverage of this new feature. Note that
> >> the in-development libnbd
> >> (https://github.com/libguestfs/libnbd/commits/master) is also usable
> >> as an NBD client that can drive NBD_CMD_CACHE, although it's still
> >> new enough that we probably don't want to rely on it being available
> >> yet.
> >>
> >
> > Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called on
> > the nbd driver? I didn't implement that, but maybe I should..
> >
> > My aim was only to avoid the extra allocation and unnecessary reads.
> > But if we implement a full-featured io request, what should it do?
> >
> > On qcow2 with a backing file it should pull data from the backing
> > file into the top layer, like copy-on-read. And for nbd it would send
> > NBD_CMD_CACHE? These semantics seem different, but why not?
> >
>
> Any opinions here? Should I resend, or could we use it as a first step,
> not touching the external API but improving things a little bit?

I'm not opposed to making only a first step now. The interface seems to
make sense to me; the only thing that I don't really like is the naming,
both of the operation (blk_co_pcache) and of the flag (BDRV_REQ_CACHE),
because to me, "cache" doesn't mean "read, but ignore the result".

The operation only results in something being cached if the block graph
is configured to cache things. And indeed, the most important use case
for the flag is not populating a cache, but triggering copy-on-read. So
I think calling this operation "cache" is misleading.

Now, I don't have very good ideas for better names either. I guess
BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm not
sure a blk_co_preadv_no_read wrapper is even needed when you can just
call blk_co_preadv with the right flag.)

I'm open to good naming ideas.

Kevin
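To make the flag-only option concrete, a caller would pass a NULL qiov
together with the flag; BDRV_REQ_NO_DATA below is just the name floated
in this mail, not an existing qemu flag:

    /* Hypothetical: read-without-data via a plain blk_co_preadv() call.
     * A NULL qiov would be accepted since no data is returned. */
    ret = blk_co_preadv(blk, offset, bytes, NULL, BDRV_REQ_NO_DATA);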
17.06.2019 15:09, Kevin Wolf wrote:
> Am 17.06.2019 um 13:20 hat Vladimir Sementsov-Ogievskiy geschrieben:
>> 06.06.2019 17:07, Vladimir Sementsov-Ogievskiy wrote:
>>> 06.06.2019 16:55, Eric Blake wrote:
>>>> On 6/6/19 8:48 AM, Vladimir Sementsov-Ogievskiy wrote:
>>>>> Hi all!
>>>>>
>>>>> Here is a small new io API: blk_co_pcache, which does copy-on-read
>>>>> without an extra buffer for the read data. This means that only the
>>>>> parts that need COR will actually be read, and only the
>>>>> corresponding buffers allocated, no more.
>>>>>
>>>>> This allows us to improve block-stream and NBD_CMD_CACHE a bit.
>>>>
>>>> I'd really like to see qemu-io gain access to calling this command,
>>>> so that we can add iotests coverage of this new feature. Note that
>>>> the in-development libnbd
>>>> (https://github.com/libguestfs/libnbd/commits/master) is also usable
>>>> as an NBD client that can drive NBD_CMD_CACHE, although it's still
>>>> new enough that we probably don't want to rely on it being available
>>>> yet.
>>>>
>>>
>>> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called on
>>> the nbd driver? I didn't implement that, but maybe I should..
>>>
>>> My aim was only to avoid the extra allocation and unnecessary reads.
>>> But if we implement a full-featured io request, what should it do?
>>>
>>> On qcow2 with a backing file it should pull data from the backing
>>> file into the top layer, like copy-on-read. And for nbd it would send
>>> NBD_CMD_CACHE? These semantics seem different, but why not?
>>>
>>
>> Any opinions here? Should I resend, or could we use it as a first
>> step, not touching the external API but improving things a little bit?
>
> I'm not opposed to making only a first step now. The interface seems to
> make sense to me; the only thing that I don't really like is the
> naming, both of the operation (blk_co_pcache) and of the flag
> (BDRV_REQ_CACHE), because to me, "cache" doesn't mean "read, but ignore
> the result".
>
> The operation only results in something being cached if the block graph
> is configured to cache things. And indeed, the most important use case
> for the flag is not populating a cache, but triggering copy-on-read. So
> I think calling this operation "cache" is misleading.
>
> Now, I don't have very good ideas for better names either. I guess
> BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm not
> sure a blk_co_preadv_no_read wrapper is even needed when you can just
> call blk_co_preadv with the right flag.)
>
> I'm open to good naming ideas.
>

My first try (not published) was BDRV_REQ_FAKE_READ, passed as a flag to
blk_co_preadv, without a separate io request function. I decided to make
it a cache request, inspired by NBD_CMD_CACHE, which was created to do
exactly this copy-on-read operation. So if we call it cache, it will
correspond to the NBD protocol.

_NO_DATA also works for me; it's not a problem to resend with this flag
and without the additional wrapper, as a first step.

--
Best regards,
Vladimir
On 6/17/19 7:09 AM, Kevin Wolf wrote:

>>> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called on
>>> the nbd driver? I didn't implement that, but maybe I should..
>>>
>>> My aim was only to avoid the extra allocation and unnecessary reads.
>>> But if we implement a full-featured io request, what should it do?
>>>
>>> On qcow2 with a backing file it should pull data from the backing
>>> file into the top layer, like copy-on-read. And for nbd it would send
>>> NBD_CMD_CACHE? These semantics seem different, but why not?
>>>
>>
>> Any opinions here? Should I resend, or could we use it as a first
>> step, not touching the external API but improving things a little bit?
>
> I'm not opposed to making only a first step now. The interface seems to
> make sense to me; the only thing that I don't really like is the
> naming, both of the operation (blk_co_pcache) and of the flag
> (BDRV_REQ_CACHE), because to me, "cache" doesn't mean "read, but ignore
> the result".
>
> The operation only results in something being cached if the block graph
> is configured to cache things. And indeed, the most important use case
> for the flag is not populating a cache, but triggering copy-on-read. So
> I think calling this operation "cache" is misleading.
>
> Now, I don't have very good ideas for better names either. I guess
> BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm not
> sure a blk_co_preadv_no_read wrapper is even needed when you can just
> call blk_co_preadv with the right flag.)
>
> I'm open to good naming ideas.

Would 'prefetch' be a more useful name? The name NBD_CMD_CACHE was
chosen in the NBD project, but we are not stuck with that name if we
think something better makes more sense.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org
Am 17.06.2019 um 15:09 hat Eric Blake geschrieben:
> On 6/17/19 7:09 AM, Kevin Wolf wrote:
>
> >>> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called
> >>> on the nbd driver? I didn't implement that, but maybe I should..
> >>>
> >>> My aim was only to avoid the extra allocation and unnecessary
> >>> reads. But if we implement a full-featured io request, what should
> >>> it do?
> >>>
> >>> On qcow2 with a backing file it should pull data from the backing
> >>> file into the top layer, like copy-on-read. And for nbd it would
> >>> send NBD_CMD_CACHE? These semantics seem different, but why not?
> >>>
> >>
> >> Any opinions here? Should I resend, or could we use it as a first
> >> step, not touching the external API but improving things a little
> >> bit?
> >
> > I'm not opposed to making only a first step now. The interface seems
> > to make sense to me; the only thing that I don't really like is the
> > naming, both of the operation (blk_co_pcache) and of the flag
> > (BDRV_REQ_CACHE), because to me, "cache" doesn't mean "read, but
> > ignore the result".
> >
> > The operation only results in something being cached if the block
> > graph is configured to cache things. And indeed, the most important
> > use case for the flag is not populating a cache, but triggering
> > copy-on-read. So I think calling this operation "cache" is
> > misleading.
> >
> > Now, I don't have very good ideas for better names either. I guess
> > BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm not
> > sure a blk_co_preadv_no_read wrapper is even needed when you can just
> > call blk_co_preadv with the right flag.)
> >
> > I'm open to good naming ideas.
>
> Would 'prefetch' be a more useful name? The name NBD_CMD_CACHE was
> chosen in the NBD project, but we are not stuck with that name if we
> think something better makes more sense.

Whether 'prefetch' is entirely accurate really depends on the graph
configuration, too. But at least it gives me the right idea of "read
something, but don't return it yet", so yes, I think that would work
for me.

Kevin
17.06.2019 16:20, Kevin Wolf wrote:
> Am 17.06.2019 um 15:09 hat Eric Blake geschrieben:
>> On 6/17/19 7:09 AM, Kevin Wolf wrote:
>>
>>>>> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when called
>>>>> on the nbd driver? I didn't implement that, but maybe I should..
>>>>>
>>>>> My aim was only to avoid the extra allocation and unnecessary
>>>>> reads. But if we implement a full-featured io request, what should
>>>>> it do?
>>>>>
>>>>> On qcow2 with a backing file it should pull data from the backing
>>>>> file into the top layer, like copy-on-read. And for nbd it would
>>>>> send NBD_CMD_CACHE? These semantics seem different, but why not?
>>>>>
>>>>
>>>> Any opinions here? Should I resend, or could we use it as a first
>>>> step, not touching the external API but improving things a little
>>>> bit?
>>>
>>> I'm not opposed to making only a first step now. The interface seems
>>> to make sense to me; the only thing that I don't really like is the
>>> naming, both of the operation (blk_co_pcache) and of the flag
>>> (BDRV_REQ_CACHE), because to me, "cache" doesn't mean "read, but
>>> ignore the result".
>>>
>>> The operation only results in something being cached if the block
>>> graph is configured to cache things. And indeed, the most important
>>> use case for the flag is not populating a cache, but triggering
>>> copy-on-read. So I think calling this operation "cache" is
>>> misleading.
>>>
>>> Now, I don't have very good ideas for better names either. I guess
>>> BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm not
>>> sure a blk_co_preadv_no_read wrapper is even needed when you can just
>>> call blk_co_preadv with the right flag.)
>>>
>>> I'm open to good naming ideas.
>>
>> Would 'prefetch' be a more useful name? The name NBD_CMD_CACHE was
>> chosen in the NBD project, but we are not stuck with that name if we
>> think something better makes more sense.
>
> Whether 'prefetch' is entirely accurate really depends on the graph
> configuration, too. But at least it gives me the right idea of "read
> something, but don't return it yet", so yes, I think that would work
> for me.
>

Do you mean BDRV_REQ_PREFETCH + blk_co_pprefetch, or only the flag?
Hmm, the doubled 'p' in blk_co_pprefetch looks strange, but it should be
there for consistency with the other requests..

--
Best regards,
Vladimir
Am 18.06.2019 um 09:38 hat Vladimir Sementsov-Ogievskiy geschrieben:
> 17.06.2019 16:20, Kevin Wolf wrote:
> > Am 17.06.2019 um 15:09 hat Eric Blake geschrieben:
> >> On 6/17/19 7:09 AM, Kevin Wolf wrote:
> >>
> >>>>> Hmm, do you expect blk_co_pcache to send NBD_CMD_CACHE when
> >>>>> called on the nbd driver? I didn't implement that, but maybe I
> >>>>> should..
> >>>>>
> >>>>> My aim was only to avoid the extra allocation and unnecessary
> >>>>> reads. But if we implement a full-featured io request, what
> >>>>> should it do?
> >>>>>
> >>>>> On qcow2 with a backing file it should pull data from the backing
> >>>>> file into the top layer, like copy-on-read. And for nbd it would
> >>>>> send NBD_CMD_CACHE? These semantics seem different, but why not?
> >>>>>
> >>>>
> >>>> Any opinions here? Should I resend, or could we use it as a first
> >>>> step, not touching the external API but improving things a little
> >>>> bit?
> >>>
> >>> I'm not opposed to making only a first step now. The interface
> >>> seems to make sense to me; the only thing that I don't really like
> >>> is the naming, both of the operation (blk_co_pcache) and of the
> >>> flag (BDRV_REQ_CACHE), because to me, "cache" doesn't mean "read,
> >>> but ignore the result".
> >>>
> >>> The operation only results in something being cached if the block
> >>> graph is configured to cache things. And indeed, the most important
> >>> use case for the flag is not populating a cache, but triggering
> >>> copy-on-read. So I think calling this operation "cache" is
> >>> misleading.
> >>>
> >>> Now, I don't have very good ideas for better names either. I guess
> >>> BDRV_REQ_NO_DATA could work, though it's not perfect. (Also, I'm
> >>> not sure a blk_co_preadv_no_read wrapper is even needed when you
> >>> can just call blk_co_preadv with the right flag.)
> >>>
> >>> I'm open to good naming ideas.
> >>
> >> Would 'prefetch' be a more useful name? The name NBD_CMD_CACHE was
> >> chosen in the NBD project, but we are not stuck with that name if we
> >> think something better makes more sense.
> >
> > Whether 'prefetch' is entirely accurate really depends on the graph
> > configuration, too. But at least it gives me the right idea of "read
> > something, but don't return it yet", so yes, I think that would work
> > for me.
>
> Do you mean BDRV_REQ_PREFETCH + blk_co_pprefetch, or only the flag?
> Hmm, the doubled 'p' in blk_co_pprefetch looks strange, but it should
> be there for consistency with the other requests..

I think I would only do the flag, because the wrapper is so trivial, but
this is a matter of taste. It's the kind of thing that is decided by
whoever writes the patch.

Kevin
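For reference, the wrapper really is trivial; assuming the 'prefetch'
spelling from this subthread (BDRV_REQ_PREFETCH and the doubled-'p'
function name are still proposals at this point), it would be little
more than:

    /* Hypothetical thin wrapper using the 'prefetch' naming discussed
     * above; it only spares callers the NULL qiov + flag boilerplate. */
    static inline int coroutine_fn
    blk_co_pprefetch(BlockBackend *blk, int64_t offset, unsigned int bytes)
    {
        return blk_co_preadv(blk, offset, bytes, NULL, BDRV_REQ_PREFETCH);
    }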