From: Manos Pitsidianakis
To: qemu-devel
Cc: Kevin Wolf, Alberto Garcia, Stefan Hajnoczi, qemu-block
Date: Fri, 23 Jun 2017 15:46:53 +0300
Message-Id: <20170623124700.1389-2-el13635@mail.ntua.gr>
In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr>
References: <20170623124700.1389-1-el13635@mail.ntua.gr>
Subject: [Qemu-devel] [PATCH RFC v3 1/8] block: move ThrottleGroup membership to ThrottleGroupMember

This commit gathers ThrottleGroup membership details from BlockBackendPublic into ThrottleGroupMember and refactors existing code to use the structure.
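For reference, the resulting layout is roughly the following (abridged from the diff below; comments, includes and some details are omitted, so this is an illustration rather than the exact header contents):

    typedef struct ThrottleGroupMember {
        CoMutex throttled_reqs_lock;     /* protects the two CoQueues below */
        CoQueue throttled_reqs[2];       /* queued requests, [0] reads / [1] writes */
        unsigned int io_limits_disabled; /* accessed with atomic operations */
        /* The remaining fields are protected by the ThrottleGroup lock;
         * throttle_state is non-NULL iff I/O limits are configured. */
        ThrottleState *throttle_state;
        ThrottleTimers throttle_timers;
        unsigned pending_reqs[2];
        QLIST_ENTRY(ThrottleGroupMember) round_robin;
    } ThrottleGroupMember;

    typedef struct BlockBackendPublic {
        /* previously these fields lived here directly */
        ThrottleGroupMember throttle_group_member;
    } BlockBackendPublic;

    /* throttle-groups.c entry points now take the member instead of the
     * BlockBackend, e.g.:
     *     throttle_group_register_tgm(&blk->public.throttle_group_member, group);
     */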
Signed-off-by: Manos Pitsidianakis Reviewed-by: Stefan Hajnoczi --- block/block-backend.c | 66 +++++---- block/qapi.c | 8 +- block/throttle-groups.c | 304 ++++++++++++++++++++----------------= ---- blockdev.c | 4 +- include/block/throttle-groups.h | 15 +- include/qemu/throttle.h | 26 ++++ include/sysemu/block-backend.h | 20 +-- tests/test-throttle.c | 53 +++---- 8 files changed, 260 insertions(+), 236 deletions(-) diff --git a/block/block-backend.c b/block/block-backend.c index a2bbae90b1..90a7abaa53 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -216,9 +216,9 @@ BlockBackend *blk_new(uint64_t perm, uint64_t shared_pe= rm) blk->shared_perm =3D shared_perm; blk_set_enable_write_cache(blk, true); =20 - qemu_co_mutex_init(&blk->public.throttled_reqs_lock); - qemu_co_queue_init(&blk->public.throttled_reqs[0]); - qemu_co_queue_init(&blk->public.throttled_reqs[1]); + qemu_co_mutex_init(&blk->public.throttle_group_member.throttled_reqs_l= ock); + qemu_co_queue_init(&blk->public.throttle_group_member.throttled_reqs[0= ]); + qemu_co_queue_init(&blk->public.throttle_group_member.throttled_reqs[1= ]); block_acct_init(&blk->stats); =20 notifier_list_init(&blk->remove_bs_notifiers); @@ -286,7 +286,7 @@ static void blk_delete(BlockBackend *blk) assert(!blk->refcnt); assert(!blk->name); assert(!blk->dev); - if (blk->public.throttle_state) { + if (blk->public.throttle_group_member.throttle_state) { blk_io_limits_disable(blk); } if (blk->root) { @@ -597,9 +597,12 @@ BlockBackend *blk_by_public(BlockBackendPublic *public) */ void blk_remove_bs(BlockBackend *blk) { + ThrottleTimers *tt; + notifier_list_notify(&blk->remove_bs_notifiers, blk); - if (blk->public.throttle_state) { - throttle_timers_detach_aio_context(&blk->public.throttle_timers); + if (blk->public.throttle_group_member.throttle_state) { + tt =3D &blk->public.throttle_group_member.throttle_timers; + throttle_timers_detach_aio_context(tt); } =20 blk_update_root_state(blk); @@ -621,9 +624,10 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState = *bs, Error **errp) bdrv_ref(bs); =20 notifier_list_notify(&blk->insert_bs_notifiers, blk); - if (blk->public.throttle_state) { + if (blk->public.throttle_group_member.throttle_state) { throttle_timers_attach_aio_context( - &blk->public.throttle_timers, bdrv_get_aio_context(bs)); + &blk->public.throttle_group_member.throttle_timers, + bdrv_get_aio_context(bs)); } =20 return 0; @@ -985,8 +989,9 @@ int coroutine_fn blk_co_preadv(BlockBackend *blk, int64= _t offset, bdrv_inc_in_flight(bs); =20 /* throttling disk I/O */ - if (blk->public.throttle_state) { - throttle_group_co_io_limits_intercept(blk, bytes, false); + if (blk->public.throttle_group_member.throttle_state) { + throttle_group_co_io_limits_intercept(&blk->public.throttle_group_= member, + bytes, false); } =20 ret =3D bdrv_co_preadv(blk->root, offset, bytes, qiov, flags); @@ -1009,10 +1014,10 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, = int64_t offset, } =20 bdrv_inc_in_flight(bs); - /* throttling disk I/O */ - if (blk->public.throttle_state) { - throttle_group_co_io_limits_intercept(blk, bytes, true); + if (blk->public.throttle_group_member.throttle_state) { + throttle_group_co_io_limits_intercept(&blk->public.throttle_group_= member, + bytes, true); } =20 if (!blk->enable_write_cache) { @@ -1681,15 +1686,17 @@ static AioContext *blk_aiocb_get_aio_context(BlockA= IOCB *acb) void blk_set_aio_context(BlockBackend *blk, AioContext *new_context) { BlockDriverState *bs =3D blk_bs(blk); + ThrottleTimers *tt; =20 if (bs) { 
- if (blk->public.throttle_state) { - throttle_timers_detach_aio_context(&blk->public.throttle_timer= s); + if (blk->public.throttle_group_member.throttle_state) { + tt =3D &blk->public.throttle_group_member.throttle_timers; + throttle_timers_detach_aio_context(tt); } bdrv_set_aio_context(bs, new_context); - if (blk->public.throttle_state) { - throttle_timers_attach_aio_context(&blk->public.throttle_timer= s, - new_context); + if (blk->public.throttle_group_member.throttle_state) { + tt =3D &blk->public.throttle_group_member.throttle_timers; + throttle_timers_attach_aio_context(tt, new_context); } } } @@ -1907,33 +1914,34 @@ int blk_commit_all(void) /* throttling disk I/O limits */ void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg) { - throttle_group_config(blk, cfg); + throttle_group_config(&blk->public.throttle_group_member, cfg); } =20 void blk_io_limits_disable(BlockBackend *blk) { - assert(blk->public.throttle_state); + assert(blk->public.throttle_group_member.throttle_state); bdrv_drained_begin(blk_bs(blk)); - throttle_group_unregister_blk(blk); + throttle_group_unregister_tgm(&blk->public.throttle_group_member); bdrv_drained_end(blk_bs(blk)); } =20 /* should be called before blk_set_io_limits if a limit is set */ void blk_io_limits_enable(BlockBackend *blk, const char *group) { - assert(!blk->public.throttle_state); - throttle_group_register_blk(blk, group); + assert(!blk->public.throttle_group_member.throttle_state); + throttle_group_register_tgm(&blk->public.throttle_group_member, group); } =20 void blk_io_limits_update_group(BlockBackend *blk, const char *group) { /* this BB is not part of any group */ - if (!blk->public.throttle_state) { + if (!blk->public.throttle_group_member.throttle_state) { return; } =20 /* this BB is a part of the same group than the one we want */ - if (!g_strcmp0(throttle_group_get_name(blk), group)) { + if (!g_strcmp0(throttle_group_get_name(&blk->public.throttle_group_mem= ber), + group)) { return; } =20 @@ -1955,8 +1963,8 @@ static void blk_root_drained_begin(BdrvChild *child) /* Note that blk->root may not be accessible here yet if we are just * attaching to a BlockDriverState that is drained. 
Use child instead.= */ =20 - if (atomic_fetch_inc(&blk->public.io_limits_disabled) =3D=3D 0) { - throttle_group_restart_blk(blk); + if (atomic_fetch_inc(&blk->public.throttle_group_member.io_limits_disa= bled) =3D=3D 0) { + throttle_group_restart_tgm(&blk->public.throttle_group_member); } } =20 @@ -1965,8 +1973,8 @@ static void blk_root_drained_end(BdrvChild *child) BlockBackend *blk =3D child->opaque; assert(blk->quiesce_counter); =20 - assert(blk->public.io_limits_disabled); - atomic_dec(&blk->public.io_limits_disabled); + assert(blk->public.throttle_group_member.io_limits_disabled); + atomic_dec(&blk->public.throttle_group_member.io_limits_disabled); =20 if (--blk->quiesce_counter =3D=3D 0) { if (blk->dev_ops && blk->dev_ops->drained_end) { diff --git a/block/qapi.c b/block/qapi.c index 0a41d59bf3..70ec5552be 100644 --- a/block/qapi.c +++ b/block/qapi.c @@ -67,10 +67,11 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *b= lk, info->backing_file_depth =3D bdrv_get_backing_file_depth(bs); info->detect_zeroes =3D bs->detect_zeroes; =20 - if (blk && blk_get_public(blk)->throttle_state) { + if (blk && blk_get_public(blk)->throttle_group_member.throttle_state) { ThrottleConfig cfg; + BlockBackendPublic *blkp =3D blk_get_public(blk); =20 - throttle_group_get_config(blk, &cfg); + throttle_group_get_config(&blkp->throttle_group_member, &cfg); =20 info->bps =3D cfg.buckets[THROTTLE_BPS_TOTAL].avg; info->bps_rd =3D cfg.buckets[THROTTLE_BPS_READ].avg; @@ -118,7 +119,8 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *b= lk, info->iops_size =3D cfg.op_size; =20 info->has_group =3D true; - info->group =3D g_strdup(throttle_group_get_name(blk)); + info->group =3D + g_strdup(throttle_group_get_name(&blkp->throttle_group_member)= ); } =20 info->write_threshold =3D bdrv_write_threshold_get(bs); diff --git a/block/throttle-groups.c b/block/throttle-groups.c index a181cb1dee..5e9d8fb4d6 100644 --- a/block/throttle-groups.c +++ b/block/throttle-groups.c @@ -30,7 +30,7 @@ #include "sysemu/qtest.h" =20 /* The ThrottleGroup structure (with its ThrottleState) is shared - * among different BlockBackends and it's independent from + * among different ThrottleGroupMembers and it's independent from * AioContext, so in order to use it from different threads it needs * its own locking. * @@ -40,7 +40,7 @@ * The whole ThrottleGroup structure is private and invisible to * outside users, that only use it through its ThrottleState. * - * In addition to the ThrottleGroup structure, BlockBackendPublic has + * In addition to the ThrottleGroup structure, ThrottleGroupMember has * fields that need to be accessed by other members of the group and * therefore also need to be protected by this lock. Once a * BlockBackend is registered in a group those fields can be accessed @@ -58,8 +58,8 @@ typedef struct ThrottleGroup { =20 QemuMutex lock; /* This lock protects the following four fields */ ThrottleState ts; - QLIST_HEAD(, BlockBackendPublic) head; - BlockBackend *tokens[2]; + QLIST_HEAD(, ThrottleGroupMember) head; + ThrottleGroupMember *tokens[2]; bool any_timer_armed[2]; =20 /* These two are protected by the global throttle_groups_lock */ @@ -133,114 +133,112 @@ void throttle_group_unref(ThrottleState *ts) qemu_mutex_unlock(&throttle_groups_lock); } =20 -/* Get the name from a BlockBackend's ThrottleGroup. The name (and the poi= nter) +/* Get the name from a ThrottleGroupMember's group. The name (and the poin= ter) * is guaranteed to remain constant during the lifetime of the group. 
* - * @blk: a BlockBackend that is member of a throttling group + * @tgm: a ThrottleGroupMember * @ret: the name of the group. */ -const char *throttle_group_get_name(BlockBackend *blk) +const char *throttle_group_get_name(ThrottleGroupMember *tgm) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); + ThrottleGroup *tg =3D container_of(tgm->throttle_state, ThrottleGroup,= ts); return tg->name; } =20 -/* Return the next BlockBackend in the round-robin sequence, simulating a - * circular list. +/* Return the next ThrottleGroupMember in the round-robin sequence, simula= ting + * a circular list. * * This assumes that tg->lock is held. * - * @blk: the current BlockBackend - * @ret: the next BlockBackend in the sequence + * @tgm: the current ThrottleGroupMember + * @ret: the next ThrottleGroupMember in the sequence */ -static BlockBackend *throttle_group_next_blk(BlockBackend *blk) +static ThrottleGroupMember *throttle_group_next_tgm(ThrottleGroupMember *t= gm) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleState *ts =3D blkp->throttle_state; + ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); - BlockBackendPublic *next =3D QLIST_NEXT(blkp, round_robin); + ThrottleGroupMember *next =3D QLIST_NEXT(tgm, round_robin); =20 if (!next) { next =3D QLIST_FIRST(&tg->head); } =20 - return blk_by_public(next); + return next; } =20 /* - * Return whether a BlockBackend has pending requests. + * Return whether a ThrottleGroupMember has pending requests. * * This assumes that tg->lock is held. * - * @blk: the BlockBackend - * @is_write: the type of operation (read/write) - * @ret: whether the BlockBackend has pending requests. + * @tgm: the ThrottleGroupMember + * @is_write: the type of operation (read/write) + * @ret: whether the ThrottleGroupMember has pending requests. */ -static inline bool blk_has_pending_reqs(BlockBackend *blk, +static inline bool tgm_has_pending_reqs(ThrottleGroupMember *tgm, bool is_write) { - const BlockBackendPublic *blkp =3D blk_get_public(blk); - return blkp->pending_reqs[is_write]; + return tgm->pending_reqs[is_write]; } =20 -/* Return the next BlockBackend in the round-robin sequence with pending I= /O - * requests. +/* Return the next ThrottleGroupMember in the round-robin sequence with pe= nding + * I/O requests. * * This assumes that tg->lock is held. * - * @blk: the current BlockBackend + * @tgm: the current ThrottleGroupMember * @is_write: the type of operation (read/write) - * @ret: the next BlockBackend with pending requests, or blk if ther= e is - * none. + * @ret: the next ThrottleGroupMember with pending requests, or tgm = if + * there is none. 
*/ -static BlockBackend *next_throttle_token(BlockBackend *blk, bool is_write) +static ThrottleGroupMember *next_throttle_token(ThrottleGroupMember *tgm, + bool is_write) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); - BlockBackend *token, *start; + ThrottleState *ts =3D tgm->throttle_state; + ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); + ThrottleGroupMember *token, *start; =20 start =3D token =3D tg->tokens[is_write]; =20 /* get next bs round in round robin style */ - token =3D throttle_group_next_blk(token); - while (token !=3D start && !blk_has_pending_reqs(token, is_write)) { - token =3D throttle_group_next_blk(token); + token =3D throttle_group_next_tgm(token); + while (token !=3D start && !tgm_has_pending_reqs(token, is_write)) { + token =3D throttle_group_next_tgm(token); } =20 /* If no IO are queued for scheduling on the next round robin token - * then decide the token is the current bs because chances are - * the current bs get the current request queued. + * then decide the token is the current tgm because chances are + * the current tgm get the current request queued. */ - if (token =3D=3D start && !blk_has_pending_reqs(token, is_write)) { - token =3D blk; + if (token =3D=3D start && !tgm_has_pending_reqs(token, is_write)) { + token =3D tgm; } =20 - /* Either we return the original BB, or one with pending requests */ - assert(token =3D=3D blk || blk_has_pending_reqs(token, is_write)); + /* Either we return the original TGM, or one with pending requests */ + assert(token =3D=3D tgm || tgm_has_pending_reqs(token, is_write)); =20 return token; } =20 -/* Check if the next I/O request for a BlockBackend needs to be throttled = or - * not. If there's no timer set in this group, set one and update the token - * accordingly. +/* Check if the next I/O request for a ThrottleGroupMember needs to be + * throttled or not. If there's no timer set in this group, set one and up= date + * the token accordingly. * * This assumes that tg->lock is held. * - * @blk: the current BlockBackend + * @tgm: the current ThrottleGroupMember * @is_write: the type of operation (read/write) * @ret: whether the I/O request needs to be throttled or not */ -static bool throttle_group_schedule_timer(BlockBackend *blk, bool is_write) +static bool throttle_group_schedule_timer(ThrottleGroupMember *tgm, + bool is_write) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleState *ts =3D blkp->throttle_state; - ThrottleTimers *tt =3D &blkp->throttle_timers; + ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); + ThrottleTimers *tt =3D &tgm->throttle_timers; bool must_wait; =20 - if (atomic_read(&blkp->io_limits_disabled)) { + if (atomic_read(&tgm->io_limits_disabled)) { return false; } =20 @@ -251,30 +249,29 @@ static bool throttle_group_schedule_timer(BlockBacken= d *blk, bool is_write) =20 must_wait =3D throttle_schedule_timer(ts, tt, is_write); =20 - /* If a timer just got armed, set blk as the current token */ + /* If a timer just got armed, set tgm as the current token */ if (must_wait) { - tg->tokens[is_write] =3D blk; + tg->tokens[is_write] =3D tgm; tg->any_timer_armed[is_write] =3D true; } =20 return must_wait; } =20 -/* Start the next pending I/O request for a BlockBackend. Return whether +/* Start the next pending I/O request for a ThrottleGroupMember. Return w= hether * any request was actually pending. 
* - * @blk: the current BlockBackend + * @tgm: the current ThrottleGroupMember * @is_write: the type of operation (read/write) */ -static bool coroutine_fn throttle_group_co_restart_queue(BlockBackend *blk, +static bool coroutine_fn throttle_group_co_restart_queue(ThrottleGroupMemb= er *tgm, bool is_write) { - BlockBackendPublic *blkp =3D blk_get_public(blk); bool ret; =20 - qemu_co_mutex_lock(&blkp->throttled_reqs_lock); - ret =3D qemu_co_queue_next(&blkp->throttled_reqs[is_write]); - qemu_co_mutex_unlock(&blkp->throttled_reqs_lock); + qemu_co_mutex_lock(&tgm->throttled_reqs_lock); + ret =3D qemu_co_queue_next(&tgm->throttled_reqs[is_write]); + qemu_co_mutex_unlock(&tgm->throttled_reqs_lock); =20 return ret; } @@ -283,19 +280,19 @@ static bool coroutine_fn throttle_group_co_restart_qu= eue(BlockBackend *blk, * * This assumes that tg->lock is held. * - * @blk: the current BlockBackend + * @tgm: the current ThrottleGroupMember * @is_write: the type of operation (read/write) */ -static void schedule_next_request(BlockBackend *blk, bool is_write) +static void schedule_next_request(ThrottleGroupMember *tgm, bool is_write) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); + ThrottleState *ts =3D tgm->throttle_state; + ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); bool must_wait; - BlockBackend *token; + ThrottleGroupMember *token; =20 /* Check if there's any pending request to schedule next */ - token =3D next_throttle_token(blk, is_write); - if (!blk_has_pending_reqs(token, is_write)) { + token =3D next_throttle_token(tgm, is_write); + if (!tgm_has_pending_reqs(token, is_write)) { return; } =20 @@ -304,12 +301,12 @@ static void schedule_next_request(BlockBackend *blk, = bool is_write) =20 /* If it doesn't have to wait, queue it for immediate execution */ if (!must_wait) { - /* Give preference to requests from the current blk */ + /* Give preference to requests from the current tgm */ if (qemu_in_coroutine() && - throttle_group_co_restart_queue(blk, is_write)) { - token =3D blk; + throttle_group_co_restart_queue(tgm, is_write)) { + token =3D tgm; } else { - ThrottleTimers *tt =3D &blk_get_public(token)->throttle_timers; + ThrottleTimers *tt =3D &token->throttle_timers; int64_t now =3D qemu_clock_get_ns(tt->clock_type); timer_mod(tt->timers[is_write], now); tg->any_timer_armed[is_write] =3D true; @@ -318,80 +315,80 @@ static void schedule_next_request(BlockBackend *blk, = bool is_write) } } =20 -/* Check if an I/O request needs to be throttled, wait and set a timer - * if necessary, and schedule the next request using a round robin - * algorithm. +/* Check if an I/O request needs to be throttled, wait and set a timer if + * necessary, and schedule the next request using a round robin algorithm. 
* - * @blk: the current BlockBackend + * @tgm: the current ThrottleGroupMember * @bytes: the number of bytes for this I/O * @is_write: the type of operation (read/write) */ -void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk, +void coroutine_fn throttle_group_co_io_limits_intercept(ThrottleGroupMembe= r *tgm, unsigned int bytes, bool is_write) { bool must_wait; - BlockBackend *token; - - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); + ThrottleGroupMember *token; + ThrottleGroup *tg =3D container_of(tgm->throttle_state, ThrottleGroup,= ts); qemu_mutex_lock(&tg->lock); =20 /* First we check if this I/O has to be throttled. */ - token =3D next_throttle_token(blk, is_write); + token =3D next_throttle_token(tgm, is_write); must_wait =3D throttle_group_schedule_timer(token, is_write); =20 /* Wait if there's a timer set or queued requests of this type */ - if (must_wait || blkp->pending_reqs[is_write]) { - blkp->pending_reqs[is_write]++; + if (must_wait || tgm->pending_reqs[is_write]) { + tgm->pending_reqs[is_write]++; qemu_mutex_unlock(&tg->lock); - qemu_co_mutex_lock(&blkp->throttled_reqs_lock); - qemu_co_queue_wait(&blkp->throttled_reqs[is_write], - &blkp->throttled_reqs_lock); - qemu_co_mutex_unlock(&blkp->throttled_reqs_lock); + qemu_co_mutex_lock(&tgm->throttled_reqs_lock); + qemu_co_queue_wait(&tgm->throttled_reqs[is_write], + &tgm->throttled_reqs_lock); + qemu_co_mutex_unlock(&tgm->throttled_reqs_lock); qemu_mutex_lock(&tg->lock); - blkp->pending_reqs[is_write]--; + tgm->pending_reqs[is_write]--; } =20 /* The I/O will be executed, so do the accounting */ - throttle_account(blkp->throttle_state, is_write, bytes); + throttle_account(tgm->throttle_state, is_write, bytes); =20 /* Schedule the next request */ - schedule_next_request(blk, is_write); + schedule_next_request(tgm, is_write); =20 qemu_mutex_unlock(&tg->lock); } =20 typedef struct { - BlockBackend *blk; + ThrottleGroupMember *tgm; bool is_write; } RestartData; =20 static void coroutine_fn throttle_group_restart_queue_entry(void *opaque) { RestartData *data =3D opaque; - BlockBackend *blk =3D data->blk; + ThrottleGroupMember *tgm =3D data->tgm; + ThrottleState *ts =3D tgm->throttle_state; + ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); bool is_write =3D data->is_write; - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); bool empty_queue; =20 - empty_queue =3D !throttle_group_co_restart_queue(blk, is_write); + empty_queue =3D !throttle_group_co_restart_queue(tgm, is_write); =20 /* If the request queue was empty then we have to take care of * scheduling the next one */ if (empty_queue) { qemu_mutex_lock(&tg->lock); - schedule_next_request(blk, is_write); + schedule_next_request(tgm, is_write); qemu_mutex_unlock(&tg->lock); } } =20 -static void throttle_group_restart_queue(BlockBackend *blk, bool is_write) +static void throttle_group_restart_queue(ThrottleGroupMember *tgm, bool is= _write) { + BlockBackendPublic *blkp =3D container_of(tgm, BlockBackendPublic, + throttle_group_member); + BlockBackend *blk =3D blk_by_public(blkp); Coroutine *co; RestartData rd =3D { - .blk =3D blk, + .tgm =3D tgm, .is_write =3D is_write }; =20 @@ -399,28 +396,24 @@ static void throttle_group_restart_queue(BlockBackend= *blk, bool is_write) aio_co_enter(blk_get_aio_context(blk), co); } =20 -void throttle_group_restart_blk(BlockBackend *blk) +void 
throttle_group_restart_tgm(ThrottleGroupMember *tgm) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - - if (blkp->throttle_state) { - throttle_group_restart_queue(blk, 0); - throttle_group_restart_queue(blk, 1); + if (tgm->throttle_state) { + throttle_group_restart_queue(tgm, 0); + throttle_group_restart_queue(tgm, 1); } } =20 -/* Update the throttle configuration for a particular group. Similar - * to throttle_config(), but guarantees atomicity within the - * throttling group. +/* Update the throttle configuration for a particular group. Similar to + * throttle_config(), but guarantees atomicity within the throttling group. * - * @blk: a BlockBackend that is a member of the group - * @cfg: the configuration to set + * @tgm: a ThrottleGroupMember that is a member of the group + * @cfg: the configuration to set */ -void throttle_group_config(BlockBackend *blk, ThrottleConfig *cfg) +void throttle_group_config(ThrottleGroupMember *tgm, ThrottleConfig *cfg) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleTimers *tt =3D &blkp->throttle_timers; - ThrottleState *ts =3D blkp->throttle_state; + ThrottleTimers *tt =3D &tgm->throttle_timers; + ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); qemu_mutex_lock(&tg->lock); /* throttle_config() cancels the timers */ @@ -433,28 +426,26 @@ void throttle_group_config(BlockBackend *blk, Throttl= eConfig *cfg) throttle_config(ts, tt, cfg); qemu_mutex_unlock(&tg->lock); =20 - throttle_group_restart_blk(blk); + throttle_group_restart_tgm(tgm); } =20 /* Get the throttle configuration from a particular group. Similar to - * throttle_get_config(), but guarantees atomicity within the - * throttling group. + * throttle_get_config(), but guarantees atomicity within the throttling g= roup. * - * @blk: a BlockBackend that is a member of the group - * @cfg: the configuration will be written here + * @tgm: a ThrottleGroupMember that is a member of the group + * @cfg: the configuration will be written here */ -void throttle_group_get_config(BlockBackend *blk, ThrottleConfig *cfg) +void throttle_group_get_config(ThrottleGroupMember *tgm, ThrottleConfig *c= fg) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleState *ts =3D blkp->throttle_state; + ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); qemu_mutex_lock(&tg->lock); throttle_get_config(ts, cfg); qemu_mutex_unlock(&tg->lock); } =20 -/* ThrottleTimers callback. This wakes up a request that was waiting - * because it had been throttled. +/* ThrottleTimers callback. This wakes up a request that was waiting becau= se it + * had been throttled. 
* * @blk: the BlockBackend whose request had been throttled * @is_write: the type of operation (read/write) @@ -462,7 +453,8 @@ void throttle_group_get_config(BlockBackend *blk, Throt= tleConfig *cfg) static void timer_cb(BlockBackend *blk, bool is_write) { BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleState *ts =3D blkp->throttle_state; + ThrottleGroupMember *tgm =3D &blkp->throttle_group_member; + ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); =20 /* The timer has just been fired, so we can update the flag */ @@ -471,7 +463,7 @@ static void timer_cb(BlockBackend *blk, bool is_write) qemu_mutex_unlock(&tg->lock); =20 /* Run the request that was waiting for this timer */ - throttle_group_restart_queue(blk, is_write); + throttle_group_restart_queue(tgm, is_write); } =20 static void read_timer_cb(void *opaque) @@ -484,17 +476,20 @@ static void write_timer_cb(void *opaque) timer_cb(opaque, true); } =20 -/* Register a BlockBackend in the throttling group, also initializing its - * timers and updating its throttle_state pointer to point to it. If a +/* Register a ThrottleGroupMember from the throttling group, also initiali= zing + * its timers and updating its throttle_state pointer to point to it. If a * throttling group with that name does not exist yet, it will be created. * - * @blk: the BlockBackend to insert + * @tgm: the ThrottleGroupMember to insert * @groupname: the name of the group */ -void throttle_group_register_blk(BlockBackend *blk, const char *groupname) +void throttle_group_register_tgm(ThrottleGroupMember *tgm, + const char *groupname) { int i; - BlockBackendPublic *blkp =3D blk_get_public(blk); + BlockBackendPublic *blkp =3D container_of(tgm, BlockBackendPublic, + throttle_group_member); + BlockBackend *blk =3D blk_by_public(blkp); ThrottleState *ts =3D throttle_group_incref(groupname); ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); int clock_type =3D QEMU_CLOCK_REALTIME; @@ -504,19 +499,19 @@ void throttle_group_register_blk(BlockBackend *blk, c= onst char *groupname) clock_type =3D QEMU_CLOCK_VIRTUAL; } =20 - blkp->throttle_state =3D ts; + tgm->throttle_state =3D ts; =20 qemu_mutex_lock(&tg->lock); - /* If the ThrottleGroup is new set this BlockBackend as the token */ + /* If the ThrottleGroup is new set this ThrottleGroupMember as the tok= en */ for (i =3D 0; i < 2; i++) { if (!tg->tokens[i]) { - tg->tokens[i] =3D blk; + tg->tokens[i] =3D tgm; } } =20 - QLIST_INSERT_HEAD(&tg->head, blkp, round_robin); + QLIST_INSERT_HEAD(&tg->head, tgm, round_robin); =20 - throttle_timers_init(&blkp->throttle_timers, + throttle_timers_init(&tgm->throttle_timers, blk_get_aio_context(blk), clock_type, read_timer_cb, @@ -526,45 +521,46 @@ void throttle_group_register_blk(BlockBackend *blk, c= onst char *groupname) qemu_mutex_unlock(&tg->lock); } =20 -/* Unregister a BlockBackend from its group, removing it from the list, +/* Unregister a ThrottleGroupMember from its group, removing it from the l= ist, * destroying the timers and setting the throttle_state pointer to NULL. * - * The BlockBackend must not have pending throttled requests, so the calle= r has - * to drain them first. + * The ThrottleGroupMember must not have pending throttled requests, so the + * caller has to drain them first. * * The group will be destroyed if it's empty after this operation. 
* - * @blk: the BlockBackend to remove + * @tgm the ThrottleGroupMember to remove */ -void throttle_group_unregister_blk(BlockBackend *blk) +void throttle_group_unregister_tgm(ThrottleGroupMember *tgm) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroup *tg =3D container_of(blkp->throttle_state, ThrottleGroup= , ts); + ThrottleState *ts =3D tgm->throttle_state; + ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); + ThrottleGroupMember *token; int i; =20 - assert(blkp->pending_reqs[0] =3D=3D 0 && blkp->pending_reqs[1] =3D=3D = 0); - assert(qemu_co_queue_empty(&blkp->throttled_reqs[0])); - assert(qemu_co_queue_empty(&blkp->throttled_reqs[1])); + assert(tgm->pending_reqs[0] =3D=3D 0 && tgm->pending_reqs[1] =3D=3D 0); + assert(qemu_co_queue_empty(&tgm->throttled_reqs[0])); + assert(qemu_co_queue_empty(&tgm->throttled_reqs[1])); =20 qemu_mutex_lock(&tg->lock); for (i =3D 0; i < 2; i++) { - if (tg->tokens[i] =3D=3D blk) { - BlockBackend *token =3D throttle_group_next_blk(blk); - /* Take care of the case where this is the last blk in the gro= up */ - if (token =3D=3D blk) { + if (tg->tokens[i] =3D=3D tgm) { + token =3D throttle_group_next_tgm(tgm); + /* Take care of the case where this is the last tgm in the gro= up */ + if (token =3D=3D tgm) { token =3D NULL; } tg->tokens[i] =3D token; } } =20 - /* remove the current blk from the list */ - QLIST_REMOVE(blkp, round_robin); - throttle_timers_destroy(&blkp->throttle_timers); + /* remove the current tgm from the list */ + QLIST_REMOVE(tgm, round_robin); + throttle_timers_destroy(&tgm->throttle_timers); qemu_mutex_unlock(&tg->lock); =20 throttle_group_unref(&tg->ts); - blkp->throttle_state =3D NULL; + tgm->throttle_state =3D NULL; } =20 static void throttle_groups_init(void) diff --git a/blockdev.c b/blockdev.c index f92dcf24bf..794e681cf8 100644 --- a/blockdev.c +++ b/blockdev.c @@ -2696,7 +2696,7 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, = Error **errp) if (throttle_enabled(&cfg)) { /* Enable I/O limits if they're not enabled yet, otherwise * just update the throttling group. */ - if (!blk_get_public(blk)->throttle_state) { + if (!blk_get_public(blk)->throttle_group_member.throttle_state) { blk_io_limits_enable(blk, arg->has_group ? arg->group : arg->has_device ? 
arg->device : @@ -2706,7 +2706,7 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, = Error **errp) } /* Set the new throttling configuration */ blk_set_io_limits(blk, &cfg); - } else if (blk_get_public(blk)->throttle_state) { + } else if (blk_get_public(blk)->throttle_group_member.throttle_state) { /* If all throttling settings are set to 0, disable I/O limits */ blk_io_limits_disable(blk); } diff --git a/include/block/throttle-groups.h b/include/block/throttle-group= s.h index d983d34074..487b2da461 100644 --- a/include/block/throttle-groups.h +++ b/include/block/throttle-groups.h @@ -28,19 +28,20 @@ #include "qemu/throttle.h" #include "block/block_int.h" =20 -const char *throttle_group_get_name(BlockBackend *blk); +const char *throttle_group_get_name(ThrottleGroupMember *tgm); =20 ThrottleState *throttle_group_incref(const char *name); void throttle_group_unref(ThrottleState *ts); =20 -void throttle_group_config(BlockBackend *blk, ThrottleConfig *cfg); -void throttle_group_get_config(BlockBackend *blk, ThrottleConfig *cfg); +void throttle_group_config(ThrottleGroupMember *tgm, ThrottleConfig *cfg); +void throttle_group_get_config(ThrottleGroupMember *tgm, ThrottleConfig *c= fg); =20 -void throttle_group_register_blk(BlockBackend *blk, const char *groupname); -void throttle_group_unregister_blk(BlockBackend *blk); -void throttle_group_restart_blk(BlockBackend *blk); +void throttle_group_register_tgm(ThrottleGroupMember *tgm, + const char *groupname); +void throttle_group_unregister_tgm(ThrottleGroupMember *tgm); +void throttle_group_restart_tgm(ThrottleGroupMember *tgm); =20 -void coroutine_fn throttle_group_co_io_limits_intercept(BlockBackend *blk, +void coroutine_fn throttle_group_co_io_limits_intercept(ThrottleGroupMembe= r *tgm, unsigned int bytes, bool is_write); =20 diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h index 9109657609..e99cbfc865 100644 --- a/include/qemu/throttle.h +++ b/include/qemu/throttle.h @@ -27,6 +27,7 @@ =20 #include "qemu-common.h" #include "qemu/timer.h" +#include "qemu/coroutine.h" =20 #define THROTTLE_VALUE_MAX 1000000000000000LL =20 @@ -153,4 +154,29 @@ bool throttle_schedule_timer(ThrottleState *ts, =20 void throttle_account(ThrottleState *ts, bool is_write, uint64_t size); =20 + +/* The ThrottleGroupMember structure indicates membership in a ThrottleGro= up + * and holds related data. + */ + +typedef struct ThrottleGroupMember { + /* throttled_reqs_lock protects the CoQueues for throttled requests. = */ + CoMutex throttled_reqs_lock; + CoQueue throttled_reqs[2]; + + /* Nonzero if the I/O limits are currently being ignored; generally + * it is zero. Accessed with atomic operations. + */ + unsigned int io_limits_disabled; + + /* The following fields are protected by the ThrottleGroup lock. + * See the ThrottleGroup documentation for details. + * throttle_state tells us if I/O limits are configured. */ + ThrottleState *throttle_state; + ThrottleTimers throttle_timers; + unsigned pending_reqs[2]; + QLIST_ENTRY(ThrottleGroupMember) round_robin; + +} ThrottleGroupMember; + #endif diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h index 999eb2333a..4fec907b7f 100644 --- a/include/sysemu/block-backend.h +++ b/include/sysemu/block-backend.h @@ -70,24 +70,10 @@ typedef struct BlockDevOps { =20 /* This struct is embedded in (the private) BlockBackend struct and contai= ns * fields that must be public. 
This is in particular for QLIST_ENTRY() and - * friends so that BlockBackends can be kept in lists outside block-backen= d.c */ + * friends so that BlockBackends can be kept in lists outside block-backen= d.c + * */ typedef struct BlockBackendPublic { - /* throttled_reqs_lock protects the CoQueues for throttled requests. = */ - CoMutex throttled_reqs_lock; - CoQueue throttled_reqs[2]; - - /* Nonzero if the I/O limits are currently being ignored; generally - * it is zero. Accessed with atomic operations. - */ - unsigned int io_limits_disabled; - - /* The following fields are protected by the ThrottleGroup lock. - * See the ThrottleGroup documentation for details. - * throttle_state tells us if I/O limits are configured. */ - ThrottleState *throttle_state; - ThrottleTimers throttle_timers; - unsigned pending_reqs[2]; - QLIST_ENTRY(BlockBackendPublic) round_robin; + ThrottleGroupMember throttle_group_member; } BlockBackendPublic; =20 BlockBackend *blk_new(uint64_t perm, uint64_t shared_perm); diff --git a/tests/test-throttle.c b/tests/test-throttle.c index a9201b1fea..0f95da2592 100644 --- a/tests/test-throttle.c +++ b/tests/test-throttle.c @@ -592,6 +592,7 @@ static void test_groups(void) ThrottleConfig cfg1, cfg2; BlockBackend *blk1, *blk2, *blk3; BlockBackendPublic *blkp1, *blkp2, *blkp3; + ThrottleGroupMember *tgm1, *tgm2, *tgm3; =20 /* No actual I/O is performed on these devices */ blk1 =3D blk_new(0, BLK_PERM_ALL); @@ -602,21 +603,25 @@ static void test_groups(void) blkp2 =3D blk_get_public(blk2); blkp3 =3D blk_get_public(blk3); =20 - g_assert(blkp1->throttle_state =3D=3D NULL); - g_assert(blkp2->throttle_state =3D=3D NULL); - g_assert(blkp3->throttle_state =3D=3D NULL); + tgm1 =3D &blkp1->throttle_group_member; + tgm2 =3D &blkp2->throttle_group_member; + tgm3 =3D &blkp3->throttle_group_member; =20 - throttle_group_register_blk(blk1, "bar"); - throttle_group_register_blk(blk2, "foo"); - throttle_group_register_blk(blk3, "bar"); + g_assert(tgm1->throttle_state =3D=3D NULL); + g_assert(tgm2->throttle_state =3D=3D NULL); + g_assert(tgm3->throttle_state =3D=3D NULL); =20 - g_assert(blkp1->throttle_state !=3D NULL); - g_assert(blkp2->throttle_state !=3D NULL); - g_assert(blkp3->throttle_state !=3D NULL); + throttle_group_register_tgm(tgm1, "bar"); + throttle_group_register_tgm(tgm2, "foo"); + throttle_group_register_tgm(tgm3, "bar"); =20 - g_assert(!strcmp(throttle_group_get_name(blk1), "bar")); - g_assert(!strcmp(throttle_group_get_name(blk2), "foo")); - g_assert(blkp1->throttle_state =3D=3D blkp3->throttle_state); + g_assert(tgm1->throttle_state !=3D NULL); + g_assert(tgm2->throttle_state !=3D NULL); + g_assert(tgm3->throttle_state !=3D NULL); + + g_assert(!strcmp(throttle_group_get_name(tgm1), "bar")); + g_assert(!strcmp(throttle_group_get_name(tgm2), "foo")); + g_assert(tgm1->throttle_state =3D=3D tgm3->throttle_state); =20 /* Setting the config of a group member affects the whole group */ throttle_config_init(&cfg1); @@ -624,29 +629,29 @@ static void test_groups(void) cfg1.buckets[THROTTLE_BPS_WRITE].avg =3D 285000; cfg1.buckets[THROTTLE_OPS_READ].avg =3D 20000; cfg1.buckets[THROTTLE_OPS_WRITE].avg =3D 12000; - throttle_group_config(blk1, &cfg1); + throttle_group_config(tgm1, &cfg1); =20 - throttle_group_get_config(blk1, &cfg1); - throttle_group_get_config(blk3, &cfg2); + throttle_group_get_config(tgm1, &cfg1); + throttle_group_get_config(tgm3, &cfg2); g_assert(!memcmp(&cfg1, &cfg2, sizeof(cfg1))); =20 cfg2.buckets[THROTTLE_BPS_READ].avg =3D 4547; cfg2.buckets[THROTTLE_BPS_WRITE].avg 
= 1349; cfg2.buckets[THROTTLE_OPS_READ].avg = 123; cfg2.buckets[THROTTLE_OPS_WRITE].avg = 86; - throttle_group_config(blk3, &cfg1); + throttle_group_config(tgm3, &cfg1); - throttle_group_get_config(blk1, &cfg1); - throttle_group_get_config(blk3, &cfg2); + throttle_group_get_config(tgm1, &cfg1); + throttle_group_get_config(tgm3, &cfg2); g_assert(!memcmp(&cfg1, &cfg2, sizeof(cfg1))); - throttle_group_unregister_blk(blk1); - throttle_group_unregister_blk(blk2); - throttle_group_unregister_blk(blk3); + throttle_group_unregister_tgm(tgm1); + throttle_group_unregister_tgm(tgm2); + throttle_group_unregister_tgm(tgm3); - g_assert(blkp1->throttle_state == NULL); - g_assert(blkp2->throttle_state == NULL); - g_assert(blkp3->throttle_state == NULL); + g_assert(tgm1->throttle_state == NULL); + g_assert(tgm2->throttle_state == NULL); + g_assert(tgm3->throttle_state == NULL); } int main(int argc, char **argv) --
2.11.0

From: Manos Pitsidianakis
To: qemu-devel
Date: Fri, 23 Jun 2017 15:46:54 +0300
Message-Id: <20170623124700.1389-3-el13635@mail.ntua.gr>
In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr>
References: <20170623124700.1389-1-el13635@mail.ntua.gr>
Subject: [Qemu-devel] [PATCH RFC v3 2/8] block: Add aio_context field in ThrottleGroupMember
Cc: Kevin Wolf, Alberto Garcia, Stefan Hajnoczi, qemu-block

timer_cb() needs to know about the current AioContext of the throttle request that is woken up. In order to make ThrottleGroupMember backend agnostic, this information is stored in an aio_context field instead of accessing it from BlockBackend.

Signed-off-by: Manos Pitsidianakis
---
 block/block-backend.c | 1 + block/throttle-groups.c | 19 +++++---------- include/qemu/throttle.h | 1 + tests/test-throttle.c | 65 +++++++++++++++++++++++++++---------------------- util/throttle.c | 4 +++ 5 files changed, 48 insertions(+), 42 deletions(-) diff --git a/block/block-backend.c b/block/block-backend.c index 90a7abaa53..1d501ec973 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -1928,6 +1928,7 @@ void blk_io_limits_disable(BlockBackend *blk) /* should be called before blk_set_io_limits if a limit is set */ void blk_io_limits_enable(BlockBackend *blk, const char *group) { + blk->public.throttle_group_member.aio_context = blk_get_aio_context(blk); assert(!blk->public.throttle_group_member.throttle_state); throttle_group_register_tgm(&blk->public.throttle_group_member, group); } diff --git a/block/throttle-groups.c b/block/throttle-groups.c index 5e9d8fb4d6..7883cbb511 100644 --- a/block/throttle-groups.c +++ b/block/throttle-groups.c @@ -71,6 +71,7 @@ static QemuMutex throttle_groups_lock; static QTAILQ_HEAD(, ThrottleGroup) throttle_groups = QTAILQ_HEAD_INITIALIZER(throttle_groups); + /* Increments the reference count of a ThrottleGroup given its name. * * If no ThrottleGroup is found with the given name a new one is @@ -383,9 +384,6 @@ static void coroutine_fn throttle_group_restart_queue_entry(void *opaque) static void throttle_group_restart_queue(ThrottleGroupMember *tgm, bool is_write) { - BlockBackendPublic *blkp = container_of(tgm, BlockBackendPublic, - throttle_group_member); - BlockBackend *blk = blk_by_public(blkp); Coroutine *co; RestartData rd = { .tgm = tgm, @@ -393,7 +391,7 @@ static void throttle_group_restart_queue(ThrottleGroupMember *tgm, bool is_write }; co = qemu_coroutine_create(throttle_group_restart_queue_entry, &rd); - aio_co_enter(blk_get_aio_context(blk), co); + aio_co_enter(tgm->aio_context, co); } void throttle_group_restart_tgm(ThrottleGroupMember *tgm) @@ -447,13 +445,11 @@ void throttle_group_get_config(ThrottleGroupMember *tgm, ThrottleConfig *cfg) /* ThrottleTimers callback. This wakes up a request that was waiting because it * had been throttled.
* - * @blk: the BlockBackend whose request had been throttled + * @tgm: the ThrottleGroupMember whose request had been throttled * @is_write: the type of operation (read/write) */ -static void timer_cb(BlockBackend *blk, bool is_write) +static void timer_cb(ThrottleGroupMember *tgm, bool is_write) { - BlockBackendPublic *blkp =3D blk_get_public(blk); - ThrottleGroupMember *tgm =3D &blkp->throttle_group_member; ThrottleState *ts =3D tgm->throttle_state; ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); =20 @@ -487,9 +483,6 @@ void throttle_group_register_tgm(ThrottleGroupMember *t= gm, const char *groupname) { int i; - BlockBackendPublic *blkp =3D container_of(tgm, BlockBackendPublic, - throttle_group_member); - BlockBackend *blk =3D blk_by_public(blkp); ThrottleState *ts =3D throttle_group_incref(groupname); ThrottleGroup *tg =3D container_of(ts, ThrottleGroup, ts); int clock_type =3D QEMU_CLOCK_REALTIME; @@ -512,11 +505,11 @@ void throttle_group_register_tgm(ThrottleGroupMember = *tgm, QLIST_INSERT_HEAD(&tg->head, tgm, round_robin); =20 throttle_timers_init(&tgm->throttle_timers, - blk_get_aio_context(blk), + tgm->aio_context, clock_type, read_timer_cb, write_timer_cb, - blk); + tgm); =20 qemu_mutex_unlock(&tg->lock); } diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h index e99cbfc865..3e92d4d4eb 100644 --- a/include/qemu/throttle.h +++ b/include/qemu/throttle.h @@ -160,6 +160,7 @@ void throttle_account(ThrottleState *ts, bool is_write,= uint64_t size); */ =20 typedef struct ThrottleGroupMember { + AioContext *aio_context; /* throttled_reqs_lock protects the CoQueues for throttled requests. = */ CoMutex throttled_reqs_lock; CoQueue throttled_reqs[2]; diff --git a/tests/test-throttle.c b/tests/test-throttle.c index 0f95da2592..d3298234aa 100644 --- a/tests/test-throttle.c +++ b/tests/test-throttle.c @@ -24,8 +24,9 @@ static AioContext *ctx; static LeakyBucket bkt; static ThrottleConfig cfg; +static ThrottleGroupMember tgm; static ThrottleState ts; -static ThrottleTimers tt; +static ThrottleTimers *tt; =20 /* useful function */ static bool double_cmp(double x, double y) @@ -153,19 +154,21 @@ static void test_init(void) { int i; =20 + tt =3D &tgm.throttle_timers; + /* fill the structures with crap */ memset(&ts, 1, sizeof(ts)); - memset(&tt, 1, sizeof(tt)); + memset(tt, 1, sizeof(*tt)); =20 /* init structures */ throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); =20 /* check initialized fields */ - g_assert(tt.clock_type =3D=3D QEMU_CLOCK_VIRTUAL); - g_assert(tt.timers[0]); - g_assert(tt.timers[1]); + g_assert(tt->clock_type =3D=3D QEMU_CLOCK_VIRTUAL); + g_assert(tt->timers[0]); + g_assert(tt->timers[1]); =20 /* check other fields where cleared */ g_assert(!ts.previous_leak); @@ -176,18 +179,18 @@ static void test_init(void) g_assert(!ts.cfg.buckets[i].level); } =20 - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); } =20 static void test_destroy(void) { int i; throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); for (i =3D 0; i < 2; i++) { - g_assert(!tt.timers[i]); + g_assert(!tt->timers[i]); } } =20 @@ -224,11 +227,11 @@ static void test_config_functions(void) orig_cfg.op_size =3D 1; =20 throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + 
throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); /* structure reset by throttle_init previous_leak should be null */ g_assert(!ts.previous_leak); - throttle_config(&ts, &tt, &orig_cfg); + throttle_config(&ts, tt, &orig_cfg); =20 /* has previous leak been initialized by throttle_config ? */ g_assert(ts.previous_leak); @@ -236,7 +239,7 @@ static void test_config_functions(void) /* get back the fixed configuration */ throttle_get_config(&ts, &final_cfg); =20 - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); =20 g_assert(final_cfg.buckets[THROTTLE_BPS_TOTAL].avg =3D=3D 153); g_assert(final_cfg.buckets[THROTTLE_BPS_READ].avg =3D=3D 56); @@ -417,45 +420,45 @@ static void test_have_timer(void) { /* zero structures */ memset(&ts, 0, sizeof(ts)); - memset(&tt, 0, sizeof(tt)); + memset(tt, 0, sizeof(*tt)); =20 /* no timer set should return false */ - g_assert(!throttle_timers_are_initialized(&tt)); + g_assert(!throttle_timers_are_initialized(tt)); =20 /* init structures */ throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); =20 /* timer set by init should return true */ - g_assert(throttle_timers_are_initialized(&tt)); + g_assert(throttle_timers_are_initialized(tt)); =20 - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); } =20 static void test_detach_attach(void) { /* zero structures */ memset(&ts, 0, sizeof(ts)); - memset(&tt, 0, sizeof(tt)); + memset(tt, 0, sizeof(*tt)); =20 /* init the structure */ throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); =20 /* timer set by init should return true */ - g_assert(throttle_timers_are_initialized(&tt)); + g_assert(throttle_timers_are_initialized(tt)); =20 /* timer should no longer exist after detaching */ - throttle_timers_detach_aio_context(&tt); - g_assert(!throttle_timers_are_initialized(&tt)); + throttle_timers_detach_aio_context(tt); + g_assert(!throttle_timers_are_initialized(tt)); =20 /* timer should exist again after attaching */ - throttle_timers_attach_aio_context(&tt, ctx); - g_assert(throttle_timers_are_initialized(&tt)); + throttle_timers_attach_aio_context(tt, ctx); + g_assert(throttle_timers_are_initialized(tt)); =20 - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); } =20 static bool do_test_accounting(bool is_ops, /* are we testing bps or ops */ @@ -484,9 +487,9 @@ static bool do_test_accounting(bool is_ops, /* are we t= esting bps or ops */ cfg.op_size =3D op_size; =20 throttle_init(&ts); - throttle_timers_init(&tt, ctx, QEMU_CLOCK_VIRTUAL, + throttle_timers_init(tt, ctx, QEMU_CLOCK_VIRTUAL, read_timer_cb, write_timer_cb, &ts); - throttle_config(&ts, &tt, &cfg); + throttle_config(&ts, tt, &cfg); =20 /* account a read */ throttle_account(&ts, false, size); @@ -511,7 +514,7 @@ static bool do_test_accounting(bool is_ops, /* are we t= esting bps or ops */ return false; } =20 - throttle_timers_destroy(&tt); + throttle_timers_destroy(tt); =20 return true; } @@ -607,6 +610,10 @@ static void test_groups(void) tgm2 =3D &blkp2->throttle_group_member; tgm3 =3D &blkp3->throttle_group_member; =20 + tgm1->aio_context =3D blk_get_aio_context(blk1); + tgm2->aio_context =3D blk_get_aio_context(blk2); + tgm3->aio_context =3D blk_get_aio_context(blk3); + g_assert(tgm1->throttle_state =3D=3D NULL); g_assert(tgm2->throttle_state =3D=3D NULL); 
g_assert(tgm3->throttle_state == NULL); diff --git a/util/throttle.c b/util/throttle.c index 3570ed25fc..e763474b1a 100644 --- a/util/throttle.c +++ b/util/throttle.c @@ -185,6 +185,8 @@ static bool throttle_compute_timer(ThrottleState *ts, void throttle_timers_attach_aio_context(ThrottleTimers *tt, AioContext *new_context) { + ThrottleGroupMember *tgm = container_of(tt, ThrottleGroupMember, throttle_timers); + tgm->aio_context = new_context; tt->timers[0] = aio_timer_new(new_context, tt->clock_type, SCALE_NS, tt->read_timer_cb, tt->timer_opaque); tt->timers[1] = aio_timer_new(new_context, tt->clock_type, SCALE_NS, @@ -242,6 +244,8 @@ static void throttle_timer_destroy(QEMUTimer **timer) void throttle_timers_detach_aio_context(ThrottleTimers *tt) { int i; + ThrottleGroupMember *tgm = container_of(tt, ThrottleGroupMember, throttle_timers); + tgm->aio_context = NULL; for (i = 0; i < 2; i++) { throttle_timer_destroy(&tt->timers[i]); --
2.11.0

From: Manos Pitsidianakis
To: qemu-devel
Date: Fri, 23 Jun 2017 15:46:55 +0300
Message-Id: <20170623124700.1389-4-el13635@mail.ntua.gr>
In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr>
References: <20170623124700.1389-1-el13635@mail.ntua.gr>
Subject: [Qemu-devel] [PATCH RFC v3 3/8] block: add throttle block filter driver
Cc: Kevin Wolf, Alberto Garcia, Stefan Hajnoczi, qemu-block

block/throttle.c uses the existing I/O throttling infrastructure inside a block filter driver. I/O operations are intercepted in the filter's read/write coroutines and referred to block/throttle-groups.c. The driver can be used with the command

-drive driver=throttle,file.filename=foo.qcow2,iops-total=...

The configuration flags and semantics are identical to the hardcoded throttling ones.

Signed-off-by: Manos Pitsidianakis
---
 block/Makefile.objs | 1 + block/throttle.c | 427 ++++++++++++++++++++++++++++++++++++++++ include/qemu/throttle-options.h | 60 ++++-- 3 files changed, 469 insertions(+), 19 deletions(-) create mode 100644 block/throttle.c diff --git a/block/Makefile.objs b/block/Makefile.objs index ea955302c8..bb811a4d01 100644 --- a/block/Makefile.objs +++ b/block/Makefile.objs @@ -25,6 +25,7 @@ block-obj-y += accounting.o dirty-bitmap.o block-obj-y += write-threshold.o block-obj-y += backup.o block-obj-$(CONFIG_REPLICATION) += replication.o +block-obj-y += throttle.o block-obj-y += crypto.o diff --git a/block/throttle.c b/block/throttle.c new file mode 100644 index 0000000000..0c17051161 --- /dev/null +++ b/block/throttle.c @@ -0,0 +1,427 @@ +/* + * QEMU block throttling filter driver infrastructure + * + * Copyright (c) 2017 Manos Pitsidianakis + * + * This program is free software; you can redistribute it and/or + * modify it under the terms of the GNU General Public License as + * published by the Free Software Foundation; either version 2 or + * (at your option) version 3 of the License. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, see .
+ */ + +#include "qemu/osdep.h" +#include "block/throttle-groups.h" +#include "qemu/throttle-options.h" +#include "qapi/error.h" + + +static QemuOptsList throttle_opts =3D { + .name =3D "throttle", + .head =3D QTAILQ_HEAD_INITIALIZER(throttle_opts.head), + .desc =3D { + { + .name =3D QEMU_OPT_IOPS_TOTAL, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit total I/O operations per second", + },{ + .name =3D QEMU_OPT_IOPS_READ, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit read operations per second", + },{ + .name =3D QEMU_OPT_IOPS_WRITE, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit write operations per second", + },{ + .name =3D QEMU_OPT_BPS_TOTAL, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit total bytes per second", + },{ + .name =3D QEMU_OPT_BPS_READ, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit read bytes per second", + },{ + .name =3D QEMU_OPT_BPS_WRITE, + .type =3D QEMU_OPT_NUMBER, + .help =3D "limit write bytes per second", + },{ + .name =3D QEMU_OPT_IOPS_TOTAL_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "I/O operations burst", + },{ + .name =3D QEMU_OPT_IOPS_READ_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "I/O operations read burst", + },{ + .name =3D QEMU_OPT_IOPS_WRITE_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "I/O operations write burst", + },{ + .name =3D QEMU_OPT_BPS_TOTAL_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "total bytes burst", + },{ + .name =3D QEMU_OPT_BPS_READ_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "total bytes read burst", + },{ + .name =3D QEMU_OPT_BPS_WRITE_MAX, + .type =3D QEMU_OPT_NUMBER, + .help =3D "total bytes write burst", + },{ + .name =3D QEMU_OPT_IOPS_TOTAL_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the iopstotalmax burst period, in seconds= ", + },{ + .name =3D QEMU_OPT_IOPS_READ_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the iopsreadmax burst period, in seconds", + },{ + .name =3D QEMU_OPT_IOPS_WRITE_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the iopswritemax burst period, in seconds= ", + },{ + .name =3D QEMU_OPT_BPS_TOTAL_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the bpstotalmax burst period, in seconds", + },{ + .name =3D QEMU_OPT_BPS_READ_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the bpsreadmax burst period, in seconds", + },{ + .name =3D QEMU_OPT_BPS_WRITE_MAX_LENGTH, + .type =3D QEMU_OPT_NUMBER, + .help =3D "length of the bpswritemax burst period, in seconds", + },{ + .name =3D QEMU_OPT_IOPS_SIZE, + .type =3D QEMU_OPT_NUMBER, + .help =3D "when limiting by iops max size of an I/O in bytes", + }, + { + .name =3D QEMU_OPT_THROTTLE_GROUP_NAME, + .type =3D QEMU_OPT_STRING, + .help =3D "throttle group name", + }, + { /* end of list */ } + }, +}; + +/* Extract ThrottleConfig options. Assumes cfg is initialized and will be + * checked for validity. 
+ */ + +static void throttle_extract_options(QemuOpts *opts, ThrottleConfig *cfg) +{ + if (qemu_opt_get(opts, QEMU_OPT_BPS_TOTAL)) { + cfg->buckets[THROTTLE_BPS_TOTAL].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_TOTAL, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_READ)) { + cfg->buckets[THROTTLE_BPS_READ].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_READ, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_WRITE)) { + cfg->buckets[THROTTLE_BPS_WRITE].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_WRITE, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_TOTAL)) { + cfg->buckets[THROTTLE_OPS_TOTAL].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_TOTAL, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_READ)) { + cfg->buckets[THROTTLE_OPS_READ].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_READ, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_WRITE)) { + cfg->buckets[THROTTLE_OPS_WRITE].avg =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_WRITE, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_TOTAL_MAX)) { + cfg->buckets[THROTTLE_BPS_TOTAL].max =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_TOTAL_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_READ_MAX)) { + cfg->buckets[THROTTLE_BPS_READ].max =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_READ_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_WRITE_MAX)) { + cfg->buckets[THROTTLE_BPS_WRITE].max =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_WRITE_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_TOTAL_MAX)) { + cfg->buckets[THROTTLE_OPS_TOTAL].max =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_TOTAL_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_READ_MAX)) { + cfg->buckets[THROTTLE_OPS_READ].max =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_READ_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_WRITE_MAX)) { + cfg->buckets[THROTTLE_OPS_WRITE].max =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_WRITE_MAX, 0); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_TOTAL_MAX_LENGTH)) { + cfg->buckets[THROTTLE_BPS_TOTAL].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_TOTAL_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_READ_MAX_LENGTH)) { + cfg->buckets[THROTTLE_BPS_READ].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_READ_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_BPS_WRITE_MAX_LENGTH)) { + cfg->buckets[THROTTLE_BPS_WRITE].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_BPS_WRITE_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_TOTAL_MAX_LENGTH)) { + cfg->buckets[THROTTLE_OPS_TOTAL].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_TOTAL_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_READ_MAX_LENGTH)) { + cfg->buckets[THROTTLE_OPS_READ].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_READ_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_WRITE_MAX_LENGTH)) { + cfg->buckets[THROTTLE_OPS_WRITE].burst_length =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_WRITE_MAX_LENGTH, 1); + } + if (qemu_opt_get(opts, QEMU_OPT_IOPS_SIZE)) { + cfg->op_size =3D + qemu_opt_get_number(opts, QEMU_OPT_IOPS_SIZE, 0); + } +} + +static int throttle_configure_tgm(BlockDriverState *bs, ThrottleGroupMembe= r *tgm, + QDict *options, Error = **errp) +{ + int ret =3D 0; + ThrottleState *ts; + ThrottleTimers *tt; + ThrottleConfig cfg; + QemuOpts *opts =3D NULL; + const char *group_name =3D NULL; + Error *local_err =3D NULL; + + opts =3D qemu_opts_create(&throttle_opts, NULL, 0, &local_err); + if (local_err) { + error_propagate(errp, local_err); + return -EINVAL; + } + 
qemu_opts_absorb_qdict(opts, options, &local_err); + if (local_err) { + goto err; + } + + group_name =3D qemu_opt_get(opts, QEMU_OPT_THROTTLE_GROUP_NAME); + if (!group_name) { + group_name =3D bdrv_get_device_or_node_name(bs); + if (!strlen(group_name)) { + error_setg(&local_err, + "A group name must be specified for this device."); + goto err; + } + } + + tgm->aio_context =3D bdrv_get_aio_context(bs); + /* Register membership to group with name group_name */ + throttle_group_register_tgm(tgm, group_name); + + ts =3D tgm->throttle_state; + /* Copy previous configuration */ + throttle_get_config(ts, &cfg); + + /* Change limits if user has specified them */ + throttle_extract_options(opts, &cfg); + if (!throttle_is_valid(&cfg, &local_err)) { + throttle_group_unregister_tgm(tgm); + goto err; + } + tt =3D &tgm->throttle_timers; + /* Update group configuration */ + throttle_config(ts, tt, &cfg); + + qemu_co_queue_init(&tgm->throttled_reqs[0]); + qemu_co_queue_init(&tgm->throttled_reqs[1]); + + goto fin; + +err: + error_propagate(errp, local_err); + ret =3D -EINVAL; +fin: + qemu_opts_del(opts); + return ret; +} + +static int throttle_open(BlockDriverState *bs, QDict *options, + int flags, Error **errp) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + Error *local_err =3D NULL; + + bs->open_flags =3D flags; + bs->file =3D bdrv_open_child(NULL, options, "file", + bs, &child_file, false, &local_err); + + if (local_err) { + error_propagate(errp, local_err); + return -EINVAL; + } + + qdict_flatten(options); + return throttle_configure_tgm(bs, tgm, options, errp); +} + +static void throttle_close(BlockDriverState *bs) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + bdrv_drained_begin(bs); + throttle_group_unregister_tgm(tgm); + bdrv_drained_end(bs); + return; +} + + +static int64_t throttle_getlength(BlockDriverState *bs) +{ + return bdrv_getlength(bs->file->bs); +} + + +static int coroutine_fn throttle_co_preadv(BlockDriverState *bs, uint64_t = offset, + uint64_t bytes, QEMUIOVector *= qiov, + int flags) +{ + + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_group_co_io_limits_intercept(tgm, bytes, false); + + return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags); +} + +static int coroutine_fn throttle_co_pwritev(BlockDriverState *bs, uint64_t= offset, + uint64_t bytes, QEMUIOVector *= qiov, + int flags) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_group_co_io_limits_intercept(tgm, bytes, true); + + return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags); +} + +static int coroutine_fn throttle_co_pwrite_zeroes(BlockDriverState *bs, + int64_t offset, int bytes, BdrvRequestFlags flags) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_group_co_io_limits_intercept(tgm, bytes, true); + + return bdrv_co_pwrite_zeroes(bs->file, offset, bytes, flags); +} + +static int coroutine_fn throttle_co_pdiscard(BlockDriverState *bs, + int64_t offset, int bytes) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_group_co_io_limits_intercept(tgm, bytes, true); + + return bdrv_co_pdiscard(bs->file->bs, offset, bytes); +} + +static int throttle_co_flush(BlockDriverState *bs) +{ + return bdrv_co_flush(bs->file->bs); +} + +static void throttle_detach_aio_context(BlockDriverState *bs) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_timers_detach_aio_context(&tgm->throttle_timers); +} + +static void throttle_attach_aio_context(BlockDriverState *bs, + AioContext *new_context) +{ + ThrottleGroupMember *tgm =3D bs->opaque; + throttle_timers_attach_aio_context(&tgm->throttle_timers, 
new_context); +} + +static int throttle_reopen_prepare(BDRVReopenState *reopen_state, + BlockReopenQueue *queue, Error **err= p) +{ + ThrottleGroupMember *tgm =3D NULL; + + assert(reopen_state !=3D NULL); + assert(reopen_state->bs !=3D NULL); + + reopen_state->opaque =3D g_new0(ThrottleGroupMember, 1); + tgm =3D reopen_state->opaque; + + return throttle_configure_tgm(reopen_state->bs, tgm, reopen_state->opt= ions, + errp); +} + +static void throttle_reopen_commit(BDRVReopenState *state) +{ + ThrottleGroupMember *tgm =3D state->bs->opaque; + BlockDriverState *bs =3D state->bs; + + bdrv_drained_begin(bs); + throttle_group_unregister_tgm(tgm); + g_free(state->bs->opaque); + state->bs->opaque =3D state->opaque; + bdrv_drained_end(bs); + + state->opaque =3D NULL; +} + +static void throttle_reopen_abort(BDRVReopenState *state) +{ + ThrottleGroupMember *tgm =3D state->opaque; + throttle_group_unregister_tgm(tgm); + g_free(state->opaque); + state->opaque =3D NULL; +} + +static BlockDriver bdrv_throttle =3D { + .format_name =3D "throttle", + .protocol_name =3D "throttle", + .instance_size =3D sizeof(ThrottleGroupMember), + + .bdrv_file_open =3D throttle_open, + .bdrv_close =3D throttle_close, + .bdrv_co_flush =3D throttle_co_flush, + + .bdrv_child_perm =3D bdrv_filter_default_perms, + + .bdrv_getlength =3D throttle_getlength, + + .bdrv_co_preadv =3D throttle_co_preadv, + .bdrv_co_pwritev =3D throttle_co_pwritev, + + .bdrv_co_pwrite_zeroes =3D throttle_co_pwrite_zeroes, + .bdrv_co_pdiscard =3D throttle_co_pdiscard, + + .bdrv_recurse_is_first_non_filter =3D bdrv_recurse_is_first_non_fi= lter, + + .bdrv_attach_aio_context =3D throttle_attach_aio_context, + .bdrv_detach_aio_context =3D throttle_detach_aio_context, + + .bdrv_reopen_prepare =3D throttle_reopen_prepare, + .bdrv_reopen_commit =3D throttle_reopen_commit, + .bdrv_reopen_abort =3D throttle_reopen_abort, + + .is_filter =3D true, +}; + +static void bdrv_throttle_init(void) +{ + bdrv_register(&bdrv_throttle); +} + +block_init(bdrv_throttle_init); diff --git a/include/qemu/throttle-options.h b/include/qemu/throttle-option= s.h index 3133d1ca40..508ee72625 100644 --- a/include/qemu/throttle-options.h +++ b/include/qemu/throttle-options.h @@ -10,81 +10,103 @@ #ifndef THROTTLE_OPTIONS_H #define THROTTLE_OPTIONS_H =20 +#define QEMU_OPT_IOPS_TOTAL "iops-total" +#define QEMU_OPT_IOPS_TOTAL_MAX "iops-total-max" +#define QEMU_OPT_IOPS_TOTAL_MAX_LENGTH "iops-total-max-length" +#define QEMU_OPT_IOPS_READ "iops-read" +#define QEMU_OPT_IOPS_READ_MAX "iops-read-max" +#define QEMU_OPT_IOPS_READ_MAX_LENGTH "iops-read-max-length" +#define QEMU_OPT_IOPS_WRITE "iops-write" +#define QEMU_OPT_IOPS_WRITE_MAX "iops-write-max" +#define QEMU_OPT_IOPS_WRITE_MAX_LENGTH "iops-write-max-length" +#define QEMU_OPT_BPS_TOTAL "bps-total" +#define QEMU_OPT_BPS_TOTAL_MAX "bps-total-max" +#define QEMU_OPT_BPS_TOTAL_MAX_LENGTH "bps-total-max-length" +#define QEMU_OPT_BPS_READ "bps-read" +#define QEMU_OPT_BPS_READ_MAX "bps-read-max" +#define QEMU_OPT_BPS_READ_MAX_LENGTH "bps-read-max-length" +#define QEMU_OPT_BPS_WRITE "bps-write" +#define QEMU_OPT_BPS_WRITE_MAX "bps-write-max" +#define QEMU_OPT_BPS_WRITE_MAX_LENGTH "bps-write-max-length" +#define QEMU_OPT_IOPS_SIZE "iops-size" +#define QEMU_OPT_THROTTLE_GROUP_NAME "throttling-group" + +#define THROTTLE_OPT_PREFIX "throttling." 
#define THROTTLE_OPTS \ { \ - .name =3D "throttling.iops-total",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_TOTAL,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit total I/O operations per second",\ },{ \ - .name =3D "throttling.iops-read",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_READ,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit read operations per second",\ },{ \ - .name =3D "throttling.iops-write",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_WRITE,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit write operations per second",\ },{ \ - .name =3D "throttling.bps-total",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_TOTAL,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit total bytes per second",\ },{ \ - .name =3D "throttling.bps-read",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_READ,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit read bytes per second",\ },{ \ - .name =3D "throttling.bps-write",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_WRITE,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "limit write bytes per second",\ },{ \ - .name =3D "throttling.iops-total-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_TOTAL_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "I/O operations burst",\ },{ \ - .name =3D "throttling.iops-read-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_READ_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "I/O operations read burst",\ },{ \ - .name =3D "throttling.iops-write-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_WRITE_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "I/O operations write burst",\ },{ \ - .name =3D "throttling.bps-total-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_TOTAL_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "total bytes burst",\ },{ \ - .name =3D "throttling.bps-read-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_READ_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "total bytes read burst",\ },{ \ - .name =3D "throttling.bps-write-max",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_WRITE_MAX,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "total bytes write burst",\ },{ \ - .name =3D "throttling.iops-total-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_TOTAL_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the iops-total-max burst period, in secon= ds",\ },{ \ - .name =3D "throttling.iops-read-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_READ_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the iops-read-max burst period, in second= s",\ },{ \ - .name =3D "throttling.iops-write-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_WRITE_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the iops-write-max burst period, in secon= ds",\ },{ \ - .name =3D "throttling.bps-total-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_TOTAL_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the bps-total-max burst period, in second= s",\ },{ \ - .name =3D "throttling.bps-read-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_READ_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the bps-read-max burst period, in seconds= ",\ },{ \ - .name =3D "throttling.bps-write-max-length",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_BPS_WRITE_MAX_LENGTH,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "length of the bps-write-max burst period, in second= s",\ },{ \ - .name =3D "throttling.iops-size",\ + .name =3D THROTTLE_OPT_PREFIX QEMU_OPT_IOPS_SIZE,\ .type =3D QEMU_OPT_NUMBER,\ .help =3D "when limiting by iops max size of an I/O in bytes",\ } --=20 
2.11.0 From nobody Wed Nov 5 11:54:52 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1498222243492495.3792316024684; Fri, 23 Jun 2017 05:50:43 -0700 (PDT) Received: from localhost ([::1]:35389 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO2s-0003Ij-0Z for importer@patchew.org; Fri, 23 Jun 2017 08:50:42 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:60087) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0q-0001eM-6n for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:38 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dOO0o-0004U6-37 for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:36 -0400 Received: from smtp1.ntua.gr ([2001:648:2000:de::183]:23052) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0h-0004Ex-0v; Fri, 23 Jun 2017 08:48:27 -0400 Received: from mail.ntua.gr (ppp141255063244.access.hol.gr [141.255.63.244]) (authenticated bits=0) by smtp1.ntua.gr (8.15.2/8.15.2) with ESMTPSA id v5NCljBX058496 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 23 Jun 2017 15:47:45 +0300 (EEST) (envelope-from el13635@mail.ntua.gr) X-Authentication-Warning: smtp1.ntua.gr: Host ppp141255063244.access.hol.gr [141.255.63.244] claimed to be mail.ntua.gr From: Manos Pitsidianakis To: qemu-devel Date: Fri, 23 Jun 2017 15:46:56 +0300 Message-Id: <20170623124700.1389-5-el13635@mail.ntua.gr> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr> References: <20170623124700.1389-1-el13635@mail.ntua.gr> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2001:648:2000:de::183 Subject: [Qemu-devel] [PATCH RFC v3 4/8] block: convert ThrottleGroup to object with QOM X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Alberto Garcia , Stefan Hajnoczi , qemu-block Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" ThrottleGroup is converted to an object to allow easy runtime configuration of throttling filter nodes in the BDS graph using QOM. 
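For example, once the type below is registered, a group can be created up front and then referenced by name from one or more throttle filter nodes, either with -object on the command line or with the object-add QMP command. A rough sketch of the intended usage, assuming the QOM type name "throttling-group" and the property names added in this patch (the exact command-line syntax here is illustrative, not taken from this series):

    # create a named group capped at 1000 total IOPS
    -object throttling-group,id=group0,iops-total=1000
    # join the group from a throttle filter node (see the throttle driver patch)
    -drive driver=throttle,file.filename=foo.qcow2,throttling-group=group0

All filter nodes that name the same group share a single ThrottleState, so the configured limits apply to their combined I/O.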
Signed-off-by: Manos Pitsidianakis --- block/throttle-groups.c | 351 ++++++++++++++++++++++++++++++++++++++++++++= ++++ include/qemu/throttle.h | 4 + 2 files changed, 355 insertions(+) diff --git a/block/throttle-groups.c b/block/throttle-groups.c index 7883cbb511..60079dc8ea 100644 --- a/block/throttle-groups.c +++ b/block/throttle-groups.c @@ -25,9 +25,11 @@ #include "qemu/osdep.h" #include "sysemu/block-backend.h" #include "block/throttle-groups.h" +#include "qemu/throttle-options.h" #include "qemu/queue.h" #include "qemu/thread.h" #include "sysemu/qtest.h" +#include "qapi/error.h" =20 /* The ThrottleGroup structure (with its ThrottleState) is shared * among different ThrottleGroupMembers and it's independent from @@ -54,6 +56,7 @@ * that BlockBackend has throttled requests in the queue. */ typedef struct ThrottleGroup { + Object parent_obj; char *name; /* This is constant during the lifetime of the group */ =20 QemuMutex lock; /* This lock protects the following four fields */ @@ -562,3 +565,351 @@ static void throttle_groups_init(void) } =20 block_init(throttle_groups_init); + + +static bool throttle_group_exists(const char *name) +{ + ThrottleGroup *iter; + bool ret =3D false; + + qemu_mutex_lock(&throttle_groups_lock); + /* Look for an existing group with that name */ + QTAILQ_FOREACH(iter, &throttle_groups, list) { + if (!strcmp(name, iter->name)) { + ret =3D true; + break; + } + } + + qemu_mutex_unlock(&throttle_groups_lock); + return ret; +} + +typedef struct ThrottleGroupClass { + /* private */ + ObjectClass parent_class; + /* public */ +} ThrottleGroupClass; + + +#define DOUBLE 0 +#define UINT64 1 +#define UNSIGNED 2 + +typedef struct { + BucketType type; + int size; /* field size */ + ptrdiff_t offset; /* offset in LeakyBucket struct. */ +} ThrottleParamInfo; + +static ThrottleParamInfo throttle_iops_total_info =3D { + THROTTLE_OPS_TOTAL, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_iops_total_max_info =3D { + THROTTLE_OPS_TOTAL, DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_iops_total_max_length_info =3D { + THROTTLE_OPS_TOTAL, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_iops_read_info =3D { + THROTTLE_OPS_READ, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_iops_read_max_info =3D { + THROTTLE_OPS_READ, DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_iops_read_max_length_info =3D { + THROTTLE_OPS_READ, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_iops_write_info =3D { + THROTTLE_OPS_WRITE, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_iops_write_max_info =3D { + THROTTLE_OPS_WRITE, DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_iops_write_max_length_info =3D { + THROTTLE_OPS_WRITE, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_bps_total_info =3D { + THROTTLE_BPS_TOTAL, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_bps_total_max_info =3D { + THROTTLE_BPS_TOTAL, DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_bps_total_max_length_info =3D { + THROTTLE_BPS_TOTAL, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_bps_read_info =3D { + THROTTLE_BPS_READ, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_bps_read_max_info =3D { + THROTTLE_BPS_READ, 
DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_bps_read_max_length_info =3D { + THROTTLE_BPS_READ, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_bps_write_info =3D { + THROTTLE_BPS_WRITE, DOUBLE, offsetof(LeakyBucket, avg), +}; + +static ThrottleParamInfo throttle_bps_write_max_info =3D { + THROTTLE_BPS_WRITE, DOUBLE, offsetof(LeakyBucket, max), +}; + +static ThrottleParamInfo throttle_bps_write_max_length_info =3D { + THROTTLE_BPS_WRITE, UNSIGNED, offsetof(LeakyBucket, burst_length), +}; + +static ThrottleParamInfo throttle_iops_size_info =3D { + 0, UINT64, offsetof(ThrottleConfig, op_size), +}; + + +static void throttle_group_obj_complete(UserCreatable *obj, Error **errp) +{ + char *name =3D NULL; + Error *local_error =3D NULL; + ThrottleGroup *tg =3D THROTTLE_GROUP(obj); + + name =3D object_get_canonical_path_component(OBJECT(obj)); + if (throttle_group_exists(name)) { + error_setg(&local_error, "A throttle group with this name already \ + exists."); + goto ret; + } + + qemu_mutex_lock(&throttle_groups_lock); + tg->name =3D name; + qemu_mutex_init(&tg->lock); + QLIST_INIT(&tg->head); + QTAILQ_INSERT_TAIL(&throttle_groups, tg, list); + tg->refcount++; + qemu_mutex_unlock(&throttle_groups_lock); + +ret: + error_propagate(errp, local_error); + return; + +} + +static void throttle_group_set(Object *obj, Visitor *v, const char * name, + void *opaque, Error **errp) + +{ + ThrottleGroup *tg =3D THROTTLE_GROUP(obj); + ThrottleConfig cfg =3D tg->ts.cfg; + Error *local_err =3D NULL; + ThrottleParamInfo *info =3D opaque; + int64_t value; + + visit_type_int64(v, name, &value, &local_err); + + if (local_err) { + goto out; + } + if (value < 0) { + error_setg(&local_err, "%s value must be in range [0, %"PRId64"]", + "iops-total", INT64_MAX); /* change option string */ + goto out; + } + + switch (info->size) { + case UINT64: + { + uint64_t *field =3D (void *)&cfg.buckets[info->type] + info->o= ffset; + *field =3D value; + } + break; + case DOUBLE: + { + double *field =3D (void *)&cfg.buckets[info->type] + info->off= set; + *field =3D value; + } + break; + case UNSIGNED: + { + unsigned *field =3D (void *)&cfg.buckets[info->type] + info->o= ffset; + *field =3D value; + } + } + + tg->ts.cfg =3D cfg; + +out: + error_propagate(errp, local_err); +} + +static void throttle_group_get(Object *obj, Visitor *v, const char *name, + void *opaque, Error **errp) +{ + ThrottleGroup *tg =3D THROTTLE_GROUP(obj); + ThrottleConfig cfg =3D tg->ts.cfg; + ThrottleParamInfo *info =3D opaque; + int64_t value; + + switch (info->size) { + case UINT64: + { + uint64_t *field =3D (void *)&cfg.buckets[info->type] + info->o= ffset; + value =3D *field; + } + break; + case DOUBLE: + { + double *field =3D (void *)&cfg.buckets[info->type] + info->off= set; + value =3D *field; + } + break; + case UNSIGNED: + { + unsigned *field =3D (void *)&cfg.buckets[info->type] + info->o= ffset; + value =3D *field; + } + } + + visit_type_int64(v, name, &value, errp); + +} + +static void throttle_group_init(Object *obj) +{ + ThrottleGroup *tg =3D THROTTLE_GROUP(obj); + throttle_init(&tg->ts); +} + +static void throttle_group_class_init(ObjectClass *klass, void *class_data) +{ + UserCreatableClass *ucc =3D USER_CREATABLE_CLASS(klass); + + ucc->complete =3D throttle_group_obj_complete; + /* iops limits */ + object_class_property_add(klass, QEMU_OPT_IOPS_TOTAL, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_total_info, &error_abort); + 
object_class_property_add(klass, QEMU_OPT_IOPS_TOTAL_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_total_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_TOTAL_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_total_max_length_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_READ, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_read_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_READ_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_read_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_READ_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_read_max_length_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_WRITE, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_write_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_WRITE_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_write_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_IOPS_WRITE_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_write_max_length_info, &error_abort); + /* bps limits */ + object_class_property_add(klass, QEMU_OPT_BPS_TOTAL, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_total_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_TOTAL_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_total_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_TOTAL_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_total_max_length_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_READ, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_read_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_READ_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_read_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_READ_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_read_max_length_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_WRITE, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_write_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_WRITE_MAX, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_write_max_info, &error_abort); + object_class_property_add(klass, QEMU_OPT_BPS_WRITE_MAX_LENGTH, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_bps_write_max_length_info, &error_abort); + /* rest */ + object_class_property_add(klass, QEMU_OPT_IOPS_SIZE, "int", + throttle_group_get, + throttle_group_set, + NULL, &throttle_iops_size_info, &error_abort); +} + + +static void throttle_group_finalize(Object *obj) +{ + ThrottleGroup *tg =3D THROTTLE_GROUP(obj); + + qemu_mutex_lock(&throttle_groups_lock); + if (--tg->refcount =3D=3D 0) { + QTAILQ_REMOVE(&throttle_groups, tg, list); + qemu_mutex_destroy(&tg->lock); + g_free(tg->name); + g_free(tg); + } + qemu_mutex_unlock(&throttle_groups_lock); + +} + +static const TypeInfo throttle_group_info =3D { + .name =3D TYPE_THROTTLE_GROUP, + .parent =3D TYPE_OBJECT, + .class_init =3D throttle_group_class_init, + .instance_size =3D sizeof(ThrottleGroup), + 
.instance_init =3D throttle_group_init, + .instance_finalize =3D throttle_group_finalize, + .interfaces =3D (InterfaceInfo[]) { + { TYPE_USER_CREATABLE }, + { } + }, +}; + +static void throttle_group_register_types(void) +{ + qemu_mutex_init(&throttle_groups_lock); + type_register_static(&throttle_group_info); +} + +type_init(throttle_group_register_types); diff --git a/include/qemu/throttle.h b/include/qemu/throttle.h index 3e92d4d4eb..dd56baeb35 100644 --- a/include/qemu/throttle.h +++ b/include/qemu/throttle.h @@ -28,6 +28,8 @@ #include "qemu-common.h" #include "qemu/timer.h" #include "qemu/coroutine.h" +#include "qom/object.h" +#include "qom/object_interfaces.h" =20 #define THROTTLE_VALUE_MAX 1000000000000000LL =20 @@ -180,4 +182,6 @@ typedef struct ThrottleGroupMember { =20 } ThrottleGroupMember; =20 +#define TYPE_THROTTLE_GROUP "throttling-group" +#define THROTTLE_GROUP(obj) OBJECT_CHECK(ThrottleGroup, (obj), TYPE_THROTT= LE_GROUP) #endif --=20 2.11.0 From nobody Wed Nov 5 11:54:52 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1498222297069992.4545892905556; Fri, 23 Jun 2017 05:51:37 -0700 (PDT) Received: from localhost ([::1]:35391 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO3i-0003yD-IS for importer@patchew.org; Fri, 23 Jun 2017 08:51:34 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:60034) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0o-0001Zl-Bp for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:35 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dOO0n-0004S6-1W for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:34 -0400 Received: from smtp1.ntua.gr ([2001:648:2000:de::183]:23050) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0g-0004F7-20; Fri, 23 Jun 2017 08:48:26 -0400 Received: from mail.ntua.gr (ppp141255063244.access.hol.gr [141.255.63.244]) (authenticated bits=0) by smtp1.ntua.gr (8.15.2/8.15.2) with ESMTPSA id v5NClkRU058518 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 23 Jun 2017 15:47:46 +0300 (EEST) (envelope-from el13635@mail.ntua.gr) X-Authentication-Warning: smtp1.ntua.gr: Host ppp141255063244.access.hol.gr [141.255.63.244] claimed to be mail.ntua.gr From: Manos Pitsidianakis To: qemu-devel Date: Fri, 23 Jun 2017 15:46:57 +0300 Message-Id: <20170623124700.1389-6-el13635@mail.ntua.gr> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr> References: <20170623124700.1389-1-el13635@mail.ntua.gr> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. 
X-Received-From: 2001:648:2000:de::183 Subject: [Qemu-devel] [PATCH RFC v3 5/8] block: add BlockDevOptionsThrottle to QAPI X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Alberto Garcia , Stefan Hajnoczi , qemu-block Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This is needed to configure throttle filter driver nodes with QAPI. Signed-off-by: Manos Pitsidianakis Reviewed-by: Stefan Hajnoczi --- qapi/block-core.json | 19 ++++++++++++++++++- 1 file changed, 18 insertions(+), 1 deletion(-) diff --git a/qapi/block-core.json b/qapi/block-core.json index f85c2235c7..1d4afafe8c 100644 --- a/qapi/block-core.json +++ b/qapi/block-core.json @@ -2119,7 +2119,7 @@ 'host_device', 'http', 'https', 'iscsi', 'luks', 'nbd', 'nfs', 'null-aio', 'null-co', 'parallels', 'qcow', 'qcow2', 'qed', 'quorum', 'raw', 'rbd', 'replication', 'sheepdog', 'ssh', - 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs' ] } + 'throttle', 'vdi', 'vhdx', 'vmdk', 'vpc', 'vvfat', 'vxhs' ] } =20 ## # @BlockdevOptionsFile: @@ -2984,6 +2984,7 @@ 'replication':'BlockdevOptionsReplication', 'sheepdog': 'BlockdevOptionsSheepdog', 'ssh': 'BlockdevOptionsSsh', + 'throttle': 'BlockdevOptionsThrottle', 'vdi': 'BlockdevOptionsGenericFormat', 'vhdx': 'BlockdevOptionsGenericFormat', 'vmdk': 'BlockdevOptionsGenericCOWFormat', @@ -3723,3 +3724,19 @@ 'data' : { 'parent': 'str', '*child': 'str', '*node': 'str' } } + +## +# @BlockdevOptionsThrottle: +# +# Driver specific block device options for Throttle +# +# @throttling-group: the name of the throttling group to use +# +# @options: BlockIOThrottle options +# Since: 2.9 +## +{ 'struct': 'BlockdevOptionsThrottle', + 'data': { 'throttling-group': 'str', + 'file' : 'BlockdevRef', + '*options' : 'BlockIOThrottle' + } } --=20 2.11.0 From nobody Wed Nov 5 11:54:52 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1498222545329569.1865116289254; Fri, 23 Jun 2017 05:55:45 -0700 (PDT) Received: from localhost ([::1]:35411 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO7j-0007xP-1O for importer@patchew.org; Fri, 23 Jun 2017 08:55:43 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:60056) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0p-0001aI-1k for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:36 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dOO0n-0004SE-50 for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:35 -0400 Received: from smtp1.ntua.gr ([2001:648:2000:de::183]:23055) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0g-0004F0-1t; Fri, 23 Jun 2017 08:48:26 -0400 Received: from mail.ntua.gr (ppp141255063244.access.hol.gr [141.255.63.244]) (authenticated bits=0) by smtp1.ntua.gr (8.15.2/8.15.2) 
with ESMTPSA id v5NCllph058573 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 23 Jun 2017 15:47:47 +0300 (EEST) (envelope-from el13635@mail.ntua.gr) X-Authentication-Warning: smtp1.ntua.gr: Host ppp141255063244.access.hol.gr [141.255.63.244] claimed to be mail.ntua.gr From: Manos Pitsidianakis To: qemu-devel Date: Fri, 23 Jun 2017 15:46:58 +0300 Message-Id: <20170623124700.1389-7-el13635@mail.ntua.gr> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr> References: <20170623124700.1389-1-el13635@mail.ntua.gr> X-detected-operating-system: by eggs.gnu.org: Genre and OS details not recognized. X-Received-From: 2001:648:2000:de::183 Subject: [Qemu-devel] [PATCH RFC v3 6/8] block: add options parameter to bdrv_new_open_driver() X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Alberto Garcia , Stefan Hajnoczi , qemu-block Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" Allow passing a QDict *options parameter to bdrv_new_open_driver() so that it can be used if a driver needs it upon creation. The previous behaviour (empty bs->options and bs->explicit_options) remains when options is NULL. Signed-off-by: Manos Pitsidianakis --- block.c | 13 +++++++++---- block/commit.c | 4 ++-- block/mirror.c | 2 +- block/vvfat.c | 2 +- include/block/block.h | 2 +- 5 files changed, 14 insertions(+), 9 deletions(-) diff --git a/block.c b/block.c index 694396281b..c7d9f8959a 100644 --- a/block.c +++ b/block.c @@ -1150,20 +1150,25 @@ free_and_fail: } =20 BlockDriverState *bdrv_new_open_driver(BlockDriver *drv, const char *node_= name, - int flags, Error **errp) + int flags, QDict *options, Error **= errp) { BlockDriverState *bs; int ret; =20 bs =3D bdrv_new(); bs->open_flags =3D flags; - bs->explicit_options =3D qdict_new(); - bs->options =3D qdict_new(); + if (options) { + bs->explicit_options =3D qdict_clone_shallow(options); + bs->options =3D qdict_clone_shallow(options); + } else { + bs->explicit_options =3D qdict_new(); + bs->options =3D qdict_new(); + } bs->opaque =3D NULL; =20 update_options_from_flags(bs->options, flags); =20 - ret =3D bdrv_open_driver(bs, drv, node_name, bs->options, flags, errp); + ret =3D bdrv_open_driver(bs, drv, node_name, options, flags, errp); if (ret < 0) { QDECREF(bs->explicit_options); QDECREF(bs->options); diff --git a/block/commit.c b/block/commit.c index af6fa68cf3..e855f20d7f 100644 --- a/block/commit.c +++ b/block/commit.c @@ -338,7 +338,7 @@ void commit_start(const char *job_id, BlockDriverState = *bs, /* Insert commit_top block node above top, so we can block consistent = read * on the backing chain below it */ commit_top_bs =3D bdrv_new_open_driver(&bdrv_commit_top, filter_node_n= ame, 0, - errp); + NULL, errp); if (commit_top_bs =3D=3D NULL) { goto fail; } @@ -486,7 +486,7 @@ int bdrv_commit(BlockDriverState *bs) backing_file_bs =3D backing_bs(bs); =20 commit_top_bs =3D bdrv_new_open_driver(&bdrv_commit_top, NULL, BDRV_O_= RDWR, - &local_err); + NULL, &local_err); if (commit_top_bs =3D=3D NULL) { error_report_err(local_err); goto ro_cleanup; diff --git a/block/mirror.c b/block/mirror.c index 19afcc6f1a..3ebe953f18 100644 --- a/block/mirror.c +++ b/block/mirror.c @@ -1155,7 +1155,7 @@ static void 
mirror_start_job(const char *job_id, Bloc= kDriverState *bs, * reads on the top, while disabling it in the intermediate nodes, and= make * the backing chain writable. */ mirror_top_bs =3D bdrv_new_open_driver(&bdrv_mirror_top, filter_node_n= ame, - BDRV_O_RDWR, errp); + BDRV_O_RDWR, NULL, errp); if (mirror_top_bs =3D=3D NULL) { return; } diff --git a/block/vvfat.c b/block/vvfat.c index 8ab647c0c6..c8e8944947 100644 --- a/block/vvfat.c +++ b/block/vvfat.c @@ -3064,7 +3064,7 @@ static int enable_write_target(BlockDriverState *bs, = Error **errp) #endif =20 backing =3D bdrv_new_open_driver(&vvfat_write_target, NULL, BDRV_O_ALL= OW_RDWR, - &error_abort); + NULL, &error_abort); *(void**) backing->opaque =3D s; =20 bdrv_set_backing_hd(s->bs, backing, &error_abort); diff --git a/include/block/block.h b/include/block/block.h index a4f09df95a..e7db67c059 100644 --- a/include/block/block.h +++ b/include/block/block.h @@ -261,7 +261,7 @@ int bdrv_open_backing_file(BlockDriverState *bs, QDict = *parent_options, BlockDriverState *bdrv_open(const char *filename, const char *reference, QDict *options, int flags, Error **errp); BlockDriverState *bdrv_new_open_driver(BlockDriver *drv, const char *node_= name, - int flags, Error **errp); + int flags, QDict *options, Error **= errp); BlockReopenQueue *bdrv_reopen_queue(BlockReopenQueue *bs_queue, BlockDriverState *bs, QDict *options, int flags); --=20 2.11.0 From nobody Wed Nov 5 11:54:52 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) client-ip=208.118.235.17; envelope-from=qemu-devel-bounces+importer=patchew.org@nongnu.org; helo=lists.gnu.org; Authentication-Results: mx.zoho.com; spf=pass (zoho.com: domain of gnu.org designates 208.118.235.17 as permitted sender) smtp.mailfrom=qemu-devel-bounces+importer=patchew.org@nongnu.org; Return-Path: Received: from lists.gnu.org (lists.gnu.org [208.118.235.17]) by mx.zohomail.com with SMTPS id 1498222607734738.7777831143042; Fri, 23 Jun 2017 05:56:47 -0700 (PDT) Received: from localhost ([::1]:35414 helo=lists.gnu.org) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO8k-0000DJ-4n for importer@patchew.org; Fri, 23 Jun 2017 08:56:46 -0400 Received: from eggs.gnu.org ([2001:4830:134:3::10]:60117) by lists.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0r-0001jC-A1 for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:43 -0400 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1dOO0o-0004UY-Rh for qemu-devel@nongnu.org; Fri, 23 Jun 2017 08:48:37 -0400 Received: from smtp1.ntua.gr ([2001:648:2000:de::183]:23053) by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from ) id 1dOO0h-0004F2-1E; Fri, 23 Jun 2017 08:48:27 -0400 Received: from mail.ntua.gr (ppp141255063244.access.hol.gr [141.255.63.244]) (authenticated bits=0) by smtp1.ntua.gr (8.15.2/8.15.2) with ESMTPSA id v5NCllMe058584 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Fri, 23 Jun 2017 15:47:49 +0300 (EEST) (envelope-from el13635@mail.ntua.gr) X-Authentication-Warning: smtp1.ntua.gr: Host ppp141255063244.access.hol.gr [141.255.63.244] claimed to be mail.ntua.gr From: Manos Pitsidianakis To: qemu-devel Date: Fri, 23 Jun 2017 15:46:59 +0300 Message-Id: <20170623124700.1389-8-el13635@mail.ntua.gr> X-Mailer: git-send-email 2.11.0 In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr> References: <20170623124700.1389-1-el13635@mail.ntua.gr> X-detected-operating-system: by eggs.gnu.org: Genre 
and OS details not recognized. X-Received-From: 2001:648:2000:de::183 Subject: [Qemu-devel] [PATCH RFC v3 7/8] block: remove legacy I/O throttling X-BeenThere: qemu-devel@nongnu.org X-Mailman-Version: 2.1.21 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Cc: Kevin Wolf , Alberto Garcia , Stefan Hajnoczi , qemu-block Errors-To: qemu-devel-bounces+importer=patchew.org@nongnu.org Sender: "Qemu-devel" X-ZohoMail: RSF_0 Z_629925259 SPT_0 Content-Transfer-Encoding: quoted-printable MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" This commit removes all I/O throttling from block/block-backend.c. In order to support the existing interface, it is changed to use the block/throttle.c filter driver. The throttle filter node that is created by the legacy interface is stored in a 'throttle_node' field in the BlockBackendPublic of the device. The legacy throttle node is managed by the legacy interface completely. More advanced configurations with the filter drive are possible using the QMP API, but these will be ignored by the legacy interface. Signed-off-by: Manos Pitsidianakis --- block/block-backend.c | 158 ++++++++++++++++++++++++++-----------= ---- block/qapi.c | 8 +-- block/throttle.c | 4 ++ blockdev.c | 55 ++++++++++---- include/sysemu/block-backend.h | 8 +-- tests/test-throttle.c | 15 ++-- util/throttle.c | 4 -- 7 files changed, 160 insertions(+), 92 deletions(-) diff --git a/block/block-backend.c b/block/block-backend.c index 1d501ec973..c777943572 100644 --- a/block/block-backend.c +++ b/block/block-backend.c @@ -216,9 +216,6 @@ BlockBackend *blk_new(uint64_t perm, uint64_t shared_pe= rm) blk->shared_perm =3D shared_perm; blk_set_enable_write_cache(blk, true); =20 - qemu_co_mutex_init(&blk->public.throttle_group_member.throttled_reqs_l= ock); - qemu_co_queue_init(&blk->public.throttle_group_member.throttled_reqs[0= ]); - qemu_co_queue_init(&blk->public.throttle_group_member.throttled_reqs[1= ]); block_acct_init(&blk->stats); =20 notifier_list_init(&blk->remove_bs_notifiers); @@ -286,8 +283,8 @@ static void blk_delete(BlockBackend *blk) assert(!blk->refcnt); assert(!blk->name); assert(!blk->dev); - if (blk->public.throttle_group_member.throttle_state) { - blk_io_limits_disable(blk); + if (blk->public.throttle_node) { + blk_io_limits_disable(blk, &error_abort); } if (blk->root) { blk_remove_bs(blk); @@ -597,13 +594,7 @@ BlockBackend *blk_by_public(BlockBackendPublic *public) */ void blk_remove_bs(BlockBackend *blk) { - ThrottleTimers *tt; - notifier_list_notify(&blk->remove_bs_notifiers, blk); - if (blk->public.throttle_group_member.throttle_state) { - tt =3D &blk->public.throttle_group_member.throttle_timers; - throttle_timers_detach_aio_context(tt); - } =20 blk_update_root_state(blk); =20 @@ -624,12 +615,6 @@ int blk_insert_bs(BlockBackend *blk, BlockDriverState = *bs, Error **errp) bdrv_ref(bs); =20 notifier_list_notify(&blk->insert_bs_notifiers, blk); - if (blk->public.throttle_group_member.throttle_state) { - throttle_timers_attach_aio_context( - &blk->public.throttle_group_member.throttle_timers, - bdrv_get_aio_context(bs)); - } - return 0; } =20 @@ -987,13 +972,6 @@ int coroutine_fn blk_co_preadv(BlockBackend *blk, int6= 4_t offset, } =20 bdrv_inc_in_flight(bs); - - /* throttling disk I/O */ - if (blk->public.throttle_group_member.throttle_state) { - throttle_group_co_io_limits_intercept(&blk->public.throttle_group_= member, - bytes, false); - } - ret =3D bdrv_co_preadv(blk->root, offset, bytes, qiov, flags); 
bdrv_dec_in_flight(bs); return ret; @@ -1014,11 +992,6 @@ int coroutine_fn blk_co_pwritev(BlockBackend *blk, in= t64_t offset, } =20 bdrv_inc_in_flight(bs); - /* throttling disk I/O */ - if (blk->public.throttle_group_member.throttle_state) { - throttle_group_co_io_limits_intercept(&blk->public.throttle_group_= member, - bytes, true); - } =20 if (!blk->enable_write_cache) { flags |=3D BDRV_REQ_FUA; @@ -1686,18 +1659,9 @@ static AioContext *blk_aiocb_get_aio_context(BlockAI= OCB *acb) void blk_set_aio_context(BlockBackend *blk, AioContext *new_context) { BlockDriverState *bs =3D blk_bs(blk); - ThrottleTimers *tt; =20 if (bs) { - if (blk->public.throttle_group_member.throttle_state) { - tt =3D &blk->public.throttle_group_member.throttle_timers; - throttle_timers_detach_aio_context(tt); - } bdrv_set_aio_context(bs, new_context); - if (blk->public.throttle_group_member.throttle_state) { - tt =3D &blk->public.throttle_group_member.throttle_timers; - throttle_timers_attach_aio_context(tt, new_context); - } } } =20 @@ -1914,45 +1878,115 @@ int blk_commit_all(void) /* throttling disk I/O limits */ void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg) { - throttle_group_config(&blk->public.throttle_group_member, cfg); + ThrottleGroupMember *tgm; + + assert(blk->public.throttle_node); + tgm =3D blk->public.throttle_node->opaque; + throttle_group_config(tgm, cfg); } =20 -void blk_io_limits_disable(BlockBackend *blk) +void blk_io_limits_disable(BlockBackend *blk, Error **errp) { - assert(blk->public.throttle_group_member.throttle_state); - bdrv_drained_begin(blk_bs(blk)); - throttle_group_unregister_tgm(&blk->public.throttle_group_member); - bdrv_drained_end(blk_bs(blk)); + Error *local_err =3D NULL; + BlockDriverState *bs, *throttle_node; + + throttle_node =3D blk_get_public(blk)->throttle_node; + + assert(throttle_node && throttle_node->refcnt =3D=3D 1); + + bs =3D throttle_node->file->bs; + blk_get_public(blk)->throttle_node =3D NULL; + + /* ref throttle_node's child bs so that it isn't lost when throttle_no= de is + * destroyed */ + bdrv_ref(bs); + + /* this destroys throttle_node */ + blk_remove_bs(blk); + + blk_insert_bs(blk, bs, &local_err); + if (local_err) { + error_propagate(errp, local_err); + blk_insert_bs(blk, bs, NULL); + } + bdrv_unref(bs); } =20 /* should be called before blk_set_io_limits if a limit is set */ -void blk_io_limits_enable(BlockBackend *blk, const char *group) +void blk_io_limits_enable(BlockBackend *blk, const char *group, Error **e= rrp) { - blk->public.throttle_group_member.aio_context =3D blk_get_aio_context(= blk); - assert(!blk->public.throttle_group_member.throttle_state); - throttle_group_register_tgm(&blk->public.throttle_group_member, group); + BlockDriverState *bs =3D blk_bs(blk), *throttle_node; + Error *local_err =3D NULL; + /* + * increase bs refcount so it doesn't get deleted when removed + * from the BlockBackend's root + * */ + bdrv_ref(bs); + blk_remove_bs(blk); + + QDict *options =3D qdict_new(); + qdict_set_default_str(options, "file", bs->node_name); + qdict_set_default_str(options, "throttling-group", group); + throttle_node =3D bdrv_new_open_driver(bdrv_find_format("throttle"), + NULL, bdrv_get_flags(bs), options, &local_err); + + QDECREF(options); + if (local_err) { + blk_insert_bs(blk, bs, NULL); + bdrv_unref(bs); + error_propagate(errp, local_err); + return; + } + /* bs will be throttle_node's child now so unref it*/ + bdrv_unref(bs); + + blk_insert_bs(blk, throttle_node, &local_err); + if (local_err) { + error_propagate(errp, 
local_err); + bdrv_ref(bs); + bdrv_unref(throttle_node); + blk_insert_bs(blk, bs, NULL); + bdrv_unref(bs); + return; + } + bdrv_unref(throttle_node); + + assert(throttle_node->file->bs =3D=3D bs); + assert(throttle_node->refcnt =3D=3D 1); + + blk_get_public(blk)->throttle_node =3D throttle_node; } =20 -void blk_io_limits_update_group(BlockBackend *blk, const char *group) +void blk_io_limits_update_group(BlockBackend *blk, const char *group, Erro= r **errp) { + ThrottleGroupMember *tgm; + Error *local_err =3D NULL; + /* this BB is not part of any group */ - if (!blk->public.throttle_group_member.throttle_state) { + if (!blk->public.throttle_node) { return; } =20 + tgm =3D blk->public.throttle_node->opaque; + /* this BB is a part of the same group than the one we want */ - if (!g_strcmp0(throttle_group_get_name(&blk->public.throttle_group_mem= ber), + if (!g_strcmp0(throttle_group_get_name(tgm), group)) { return; } =20 - /* need to change the group this bs belong to */ - blk_io_limits_disable(blk); - blk_io_limits_enable(blk, group); + /* need to change the group this bs belongs to */ + blk_io_limits_disable(blk, &local_err); + if (local_err) { + error_propagate(errp, local_err); + return; + } + blk_io_limits_enable(blk, group, errp); } =20 static void blk_root_drained_begin(BdrvChild *child) { + ThrottleGroupMember *tgm; BlockBackend *blk =3D child->opaque; =20 if (++blk->quiesce_counter =3D=3D 1) { @@ -1963,19 +1997,25 @@ static void blk_root_drained_begin(BdrvChild *child) =20 /* Note that blk->root may not be accessible here yet if we are just * attaching to a BlockDriverState that is drained. Use child instead.= */ - - if (atomic_fetch_inc(&blk->public.throttle_group_member.io_limits_disa= bled) =3D=3D 0) { - throttle_group_restart_tgm(&blk->public.throttle_group_member); + if (blk->public.throttle_node) { + tgm =3D blk->public.throttle_node->opaque; + if (atomic_fetch_inc(&tgm->io_limits_disabled) =3D=3D 0) { + throttle_group_restart_tgm(tgm); + } } } =20 static void blk_root_drained_end(BdrvChild *child) { + ThrottleGroupMember *tgm; BlockBackend *blk =3D child->opaque; assert(blk->quiesce_counter); =20 - assert(blk->public.throttle_group_member.io_limits_disabled); - atomic_dec(&blk->public.throttle_group_member.io_limits_disabled); + if (blk->public.throttle_node) { + tgm =3D blk->public.throttle_node->opaque; + assert(tgm->io_limits_disabled); + atomic_dec(&tgm->io_limits_disabled); + } =20 if (--blk->quiesce_counter =3D=3D 0) { if (blk->dev_ops && blk->dev_ops->drained_end) { diff --git a/block/qapi.c b/block/qapi.c index 70ec5552be..053c3cb8b3 100644 --- a/block/qapi.c +++ b/block/qapi.c @@ -67,11 +67,11 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *b= lk, info->backing_file_depth =3D bdrv_get_backing_file_depth(bs); info->detect_zeroes =3D bs->detect_zeroes; =20 - if (blk && blk_get_public(blk)->throttle_group_member.throttle_state) { + if (blk && blk_get_public(blk)->throttle_node) { ThrottleConfig cfg; - BlockBackendPublic *blkp =3D blk_get_public(blk); + ThrottleGroupMember *tgm =3D blk_get_public(blk)->throttle_node->o= paque; =20 - throttle_group_get_config(&blkp->throttle_group_member, &cfg); + throttle_group_get_config(tgm, &cfg); =20 info->bps =3D cfg.buckets[THROTTLE_BPS_TOTAL].avg; info->bps_rd =3D cfg.buckets[THROTTLE_BPS_READ].avg; @@ -120,7 +120,7 @@ BlockDeviceInfo *bdrv_block_device_info(BlockBackend *b= lk, =20 info->has_group =3D true; info->group =3D - g_strdup(throttle_group_get_name(&blkp->throttle_group_member)= ); + 
g_strdup(throttle_group_get_name(tgm)); } =20 info->write_threshold =3D bdrv_write_threshold_get(bs); diff --git a/block/throttle.c b/block/throttle.c index 0c17051161..62fa28315a 100644 --- a/block/throttle.c +++ b/block/throttle.c @@ -341,6 +341,8 @@ static int throttle_co_flush(BlockDriverState *bs) static void throttle_detach_aio_context(BlockDriverState *bs) { ThrottleGroupMember *tgm =3D bs->opaque; + tgm->aio_context =3D NULL; + throttle_timers_detach_aio_context(&tgm->throttle_timers); } =20 @@ -348,6 +350,8 @@ static void throttle_attach_aio_context(BlockDriverStat= e *bs, AioContext *new_context) { ThrottleGroupMember *tgm =3D bs->opaque; + tgm->aio_context =3D new_context; + throttle_timers_attach_aio_context(&tgm->throttle_timers, new_context); } =20 diff --git a/blockdev.c b/blockdev.c index 794e681cf8..c928ced35a 100644 --- a/blockdev.c +++ b/blockdev.c @@ -610,7 +610,14 @@ static BlockBackend *blockdev_init(const char *file, Q= Dict *bs_opts, if (!throttling_group) { throttling_group =3D id; } - blk_io_limits_enable(blk, throttling_group); + blk_io_limits_enable(blk, throttling_group, &error); + if (error) { + error_propagate(errp, error); + blk_unref(blk); + blk =3D NULL; + goto err_no_bs_opts; + + } blk_set_io_limits(blk, &cfg); } =20 @@ -2621,6 +2628,9 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg, = Error **errp) BlockDriverState *bs; BlockBackend *blk; AioContext *aio_context; + BlockDriverState *throttle_node =3D NULL; + ThrottleGroupMember *tgm; + Error *local_err =3D NULL; =20 blk =3D qmp_get_blk(arg->has_device ? arg->device : NULL, arg->has_id ? arg->id : NULL, @@ -2696,19 +2706,38 @@ void qmp_block_set_io_throttle(BlockIOThrottle *arg= , Error **errp) if (throttle_enabled(&cfg)) { /* Enable I/O limits if they're not enabled yet, otherwise * just update the throttling group. */ - if (!blk_get_public(blk)->throttle_group_member.throttle_state) { - blk_io_limits_enable(blk, - arg->has_group ? arg->group : - arg->has_device ? arg->device : - arg->id); - } else if (arg->has_group) { - blk_io_limits_update_group(blk, arg->group); + if (!blk_get_public(blk)->throttle_node) { + blk_io_limits_enable(blk, arg->has_group ? arg->group : + arg->has_device ? 
+                                 arg->has_device ? arg->device : arg->id, &local_err);
+            if (local_err) {
+                error_propagate(errp, local_err);
+                goto out;
+            }
         }
-        /* Set the new throttling configuration */
-        blk_set_io_limits(blk, &cfg);
-    } else if (blk_get_public(blk)->throttle_group_member.throttle_state) {
-        /* If all throttling settings are set to 0, disable I/O limits */
-        blk_io_limits_disable(blk);
+
+        if (arg->has_group) {
+            /* move throttle node membership to arg->group */
+            blk_io_limits_update_group(blk, arg->group, &local_err);
+            if (local_err) {
+                error_propagate(errp, local_err);
+                goto out;
+            }
+        }
+
+        throttle_node = blk_get_public(blk)->throttle_node;
+        tgm = throttle_node->opaque;
+        throttle_group_config(tgm, &cfg);
+    } else if (blk_get_public(blk)->throttle_node) {
+        /*
+         * If all throttling settings are set to 0, disable I/O limits
+         * by deleting the legacy throttle node
+         * */
+        blk_io_limits_disable(blk, &local_err);
+        if (local_err) {
+            error_propagate(errp, local_err);
+            goto out;
+        }
+    }
 
 out:
diff --git a/include/sysemu/block-backend.h b/include/sysemu/block-backend.h
index 4fec907b7f..bef8fd53fa 100644
--- a/include/sysemu/block-backend.h
+++ b/include/sysemu/block-backend.h
@@ -73,7 +73,7 @@ typedef struct BlockDevOps {
  * friends so that BlockBackends can be kept in lists outside block-backend.c
  * */
 typedef struct BlockBackendPublic {
-    ThrottleGroupMember throttle_group_member;
+    BlockDriverState *throttle_node;
 } BlockBackendPublic;
 
 BlockBackend *blk_new(uint64_t perm, uint64_t shared_perm);
@@ -221,8 +221,8 @@ BlockAIOCB *blk_abort_aio_request(BlockBackend *blk,
                                   void *opaque, int ret);
 
 void blk_set_io_limits(BlockBackend *blk, ThrottleConfig *cfg);
-void blk_io_limits_disable(BlockBackend *blk);
-void blk_io_limits_enable(BlockBackend *blk, const char *group);
-void blk_io_limits_update_group(BlockBackend *blk, const char *group);
+void blk_io_limits_disable(BlockBackend *blk, Error **errp);
+void blk_io_limits_enable(BlockBackend *blk, const char *group, Error **errp);
+void blk_io_limits_update_group(BlockBackend *blk, const char *group, Error **errp);
 
 #endif
diff --git a/tests/test-throttle.c b/tests/test-throttle.c
index d3298234aa..c5b5492e78 100644
--- a/tests/test-throttle.c
+++ b/tests/test-throttle.c
@@ -594,7 +594,6 @@ static void test_groups(void)
 {
     ThrottleConfig cfg1, cfg2;
     BlockBackend *blk1, *blk2, *blk3;
-    BlockBackendPublic *blkp1, *blkp2, *blkp3;
     ThrottleGroupMember *tgm1, *tgm2, *tgm3;
 
     /* No actual I/O is performed on these devices */
@@ -602,13 +601,9 @@ static void test_groups(void)
     blk2 = blk_new(0, BLK_PERM_ALL);
     blk3 = blk_new(0, BLK_PERM_ALL);
 
-    blkp1 = blk_get_public(blk1);
-    blkp2 = blk_get_public(blk2);
-    blkp3 = blk_get_public(blk3);
-
-    tgm1 = &blkp1->throttle_group_member;
-    tgm2 = &blkp2->throttle_group_member;
-    tgm3 = &blkp3->throttle_group_member;
+    tgm1 = g_new0(ThrottleGroupMember, 1);
+    tgm2 = g_new0(ThrottleGroupMember, 1);
+    tgm3 = g_new0(ThrottleGroupMember, 1);
 
     tgm1->aio_context = blk_get_aio_context(blk1);
     tgm2->aio_context = blk_get_aio_context(blk2);
@@ -659,6 +654,10 @@ static void test_groups(void)
     g_assert(tgm1->throttle_state == NULL);
     g_assert(tgm2->throttle_state == NULL);
     g_assert(tgm3->throttle_state == NULL);
+
+    g_free(tgm1);
+    g_free(tgm2);
+    g_free(tgm3);
 }
 
 int main(int argc, char **argv)
diff --git a/util/throttle.c b/util/throttle.c
index e763474b1a..3570ed25fc 100644
--- a/util/throttle.c
+++ b/util/throttle.c
@@ -185,8 +185,6 @@ static bool throttle_compute_timer(ThrottleState *ts,
 void throttle_timers_attach_aio_context(ThrottleTimers *tt,
                                         AioContext *new_context)
 {
-    ThrottleGroupMember *tgm = container_of(tt, ThrottleGroupMember, throttle_timers);
-    tgm->aio_context = new_context;
     tt->timers[0] = aio_timer_new(new_context, tt->clock_type, SCALE_NS,
                                   tt->read_timer_cb, tt->timer_opaque);
     tt->timers[1] = aio_timer_new(new_context, tt->clock_type, SCALE_NS,
@@ -244,8 +242,6 @@ static void throttle_timer_destroy(QEMUTimer **timer)
 void throttle_timers_detach_aio_context(ThrottleTimers *tt)
 {
     int i;
-    ThrottleGroupMember *tgm = container_of(tt, ThrottleGroupMember, throttle_timers);
-    tgm->aio_context = NULL;
 
     for (i = 0; i < 2; i++) {
         throttle_timer_destroy(&tt->timers[i]);
-- 
2.11.0
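A note on the pattern shared by the patches above and the test that follows: the series moves every throttling field that used to live in BlockBackendPublic into a ThrottleGroupMember owned by a throttle filter node, and the filter keeps that member's AioContext and timers in sync while routing each request through the group's intercept helper. The sketch below is an illustration only, not code from the series: the sketch_* names are invented, the helpers used (throttle_group_co_io_limits_intercept, throttle_timers_attach_aio_context, throttle_timers_detach_aio_context) are the ones visible in the diffs, and their exact signatures in a given tree may differ.

/* Illustrative sketch only (assumes the in-tree QEMU headers below); it
 * mirrors the pattern shown by block/throttle.c in the diffs above. */
#include "qemu/osdep.h"
#include "block/block_int.h"
#include "block/throttle-groups.h"

/* Keep the member's cached AioContext and its timers in sync with the node. */
static void sketch_attach_aio_context(BlockDriverState *bs,
                                      AioContext *new_context)
{
    ThrottleGroupMember *tgm = bs->opaque;

    tgm->aio_context = new_context;
    throttle_timers_attach_aio_context(&tgm->throttle_timers, new_context);
}

static void sketch_detach_aio_context(BlockDriverState *bs)
{
    ThrottleGroupMember *tgm = bs->opaque;

    tgm->aio_context = NULL;
    throttle_timers_detach_aio_context(&tgm->throttle_timers);
}

/* Every read first waits until the member's group allows more I/O, then
 * forwards the request unchanged to the child node. */
static int coroutine_fn sketch_co_preadv(BlockDriverState *bs, uint64_t offset,
                                         uint64_t bytes, QEMUIOVector *qiov,
                                         int flags)
{
    ThrottleGroupMember *tgm = bs->opaque;

    throttle_group_co_io_limits_intercept(tgm, bytes, false);
    return bdrv_co_preadv(bs->file, offset, bytes, qiov, flags);
}

The qemu-iotests case below exercises the same interface from the outside, through QMP, rather than through these driver callbacks.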
From nobody Wed Nov 5 11:54:52 2025
From: Manos Pitsidianakis
To: qemu-devel
Cc: Kevin Wolf, Alberto Garcia, Stefan Hajnoczi, qemu-block
Date: Fri, 23 Jun 2017 15:47:00 +0300
Message-Id: <20170623124700.1389-9-el13635@mail.ntua.gr>
In-Reply-To: <20170623124700.1389-1-el13635@mail.ntua.gr>
References: <20170623124700.1389-1-el13635@mail.ntua.gr>
Subject: [Qemu-devel] [PATCH RFC v3 8/8] block: add throttle block filter driver interface tests

Signed-off-by: Manos Pitsidianakis
---
 tests/qemu-iotests/184     | 144 +++++++++++++++++++++++++++++++++++++++++++++
 tests/qemu-iotests/184.out |  31 ++++++++++
 tests/qemu-iotests/group   |   1 +
 3 files changed, 176 insertions(+)
 create mode 100755 tests/qemu-iotests/184
 create mode 100644 tests/qemu-iotests/184.out

diff --git a/tests/qemu-iotests/184 b/tests/qemu-iotests/184
new file mode 100755
index 0000000000..529f9edbec
--- /dev/null
+++ b/tests/qemu-iotests/184
@@ -0,0 +1,144 @@
+#!/bin/bash
+#
+# Test I/O throttle block filter driver interface
+#
+# Copyright (C) 2017 Manos Pitsidianakis
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program.  If not, see .
+#
+
+# creator
+owner=
+
+seq=`basename $0`
+echo "QA output created by $seq"
+
+here=`pwd`
+status=1        # failure is the default!
+
+_cleanup()
+{
+    _cleanup_test_img
+}
+trap "_cleanup; exit \$status" 0 1 2 3 15
+
+# get standard environment, filters and checks
+. ./common.rc
+. ./common.filter
+
+_supported_fmt raw qcow2
+_supported_proto file
+_supported_os Linux
+
+function do_run_qemu()
+{
+    echo Testing: "$@" | _filter_imgfmt
+    $QEMU -nographic -qmp stdio -serial none "$@"
+    echo
+}
+
+function run_qemu()
+{
+    do_run_qemu "$@" 2>&1 | _filter_testdir | _filter_qemu | _filter_qmp\
+                          | _filter_qemu_io | _filter_generated_node_ids
+}
+
+_make_test_img 64M
+test_throttle=$($QEMU_IMG --help|grep throttle)
+[ "$test_throttle" = "" ] && _supported_fmt throttle
+
+throttle="driver=throttle"
+
+echo
+echo "== checking interface =="
+
+run_qemu <