From: Vincent Donnefort <vdonnefort@google.com>
To: rostedt@goodmis.org, mhiramat@kernel.org, linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev
Cc: kvmarm@lists.linux.dev, will@kernel.org, qperret@google.com, kernel-team@android.com, linux-kernel@vger.kernel.org, Vincent Donnefort <vdonnefort@google.com>
Subject: [PATCH 08/13] KVM: arm64: Add hyp tracing to tracefs
Date: Wed, 11 Sep 2024 10:30:24 +0100
Message-ID: <20240911093029.3279154-9-vdonnefort@google.com>
In-Reply-To: <20240911093029.3279154-1-vdonnefort@google.com>
References: <20240911093029.3279154-1-vdonnefort@google.com>

When running in KVM protected mode, the hypervisor is able to generate
events into tracefs-compatible ring-buffers. Plug those ring-buffers
into tracefs. The interface is found in hypervisor/ and mirrors the
hierarchy of host trace instances, which eases support by existing
user-space tools.

This patch does not provide any event support; that comes later in the
series.
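A typical session could look as follows. This is only a sketch: it
assumes tracefs is mounted at /sys/kernel/tracing and the kernel was
booted in protected mode; the file names match the ones created by
hyp_trace_init_tracefs() below.

  cd /sys/kernel/tracing/hypervisor

  # Size the per-CPU ring-buffers (in KiB) before they are loaded.
  echo 16 > buffer_size_kb

  # Load the buffers and start hypervisor tracing.
  echo 1 > tracing_on

  # Consume events from all CPUs, or from a single one.
  cat trace_pipe
  cat per_cpu/cpu0/trace_pipe

  # Stop tracing; the buffers are torn down once empty and unused.
  echo 0 > tracing_on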
TRACEFS_DIR "hypervisor" +#define TRACEFS_MODE_WRITE 0640 +#define TRACEFS_MODE_READ 0440 + +static struct hyp_trace_buffer { + struct hyp_trace_desc *desc; + struct ring_buffer_writer writer; + struct trace_buffer *trace_buffer; + size_t desc_size; + bool tracing_on; + int nr_readers; + struct mutex lock; +} hyp_trace_buffer =3D { + .lock =3D __MUTEX_INITIALIZER(hyp_trace_buffer.lock), +}; + +static size_t hyp_trace_buffer_size =3D 7 << 10; + +/* Number of pages the ring-buffer requires to accommodate for size */ +#define NR_PAGES(size) \ + ((PAGE_ALIGN(size) >> PAGE_SHIFT) + 1) + +static inline bool hyp_trace_buffer_loaded(struct hyp_trace_buffer *hyp_bu= ffer) +{ + return !!hyp_buffer->trace_buffer; +} + +static inline bool hyp_trace_buffer_used(struct hyp_trace_buffer *hyp_buff= er) +{ + return hyp_buffer->nr_readers || hyp_buffer->tracing_on || + !ring_buffer_empty(hyp_buffer->trace_buffer); +} + +static int +bpage_backing_alloc(struct hyp_buffer_pages_backing *bpage_backing, size_t= size) +{ + size_t backing_size; + void *start; + + backing_size =3D PAGE_ALIGN(STRUCT_HYP_BUFFER_PAGE_SIZE * NR_PAGES(size) * + num_possible_cpus()); + + start =3D alloc_pages_exact(backing_size, GFP_KERNEL_ACCOUNT); + if (!start) + return -ENOMEM; + + bpage_backing->start =3D (unsigned long)start; + bpage_backing->size =3D backing_size; + + return 0; +} + +static void +bpage_backing_free(struct hyp_buffer_pages_backing *bpage_backing) +{ + free_pages_exact((void *)bpage_backing->start, bpage_backing->size); +} + +static int __get_reader_page(int cpu) +{ + return kvm_call_hyp_nvhe(__pkvm_swap_reader_tracing, cpu); +} + +static void hyp_trace_free_pages(struct hyp_trace_desc *desc) +{ + struct rb_page_desc *rb_desc; + int cpu, id; + + for_each_rb_page_desc(rb_desc, cpu, &desc->page_desc) { + free_page(rb_desc->meta_va); + for (id =3D 0; id < rb_desc->nr_page_va; id++) + free_page(rb_desc->page_va[id]); + } +} + +static int hyp_trace_alloc_pages(struct hyp_trace_desc *desc, size_t size) +{ + int err =3D 0, cpu, id, nr_pages =3D NR_PAGES(size); + struct trace_page_desc *trace_desc; + struct rb_page_desc *rb_desc; + + trace_desc =3D &desc->page_desc; + trace_desc->nr_cpus =3D 0; + + rb_desc =3D (struct rb_page_desc *)&trace_desc->__data[0]; + + for_each_possible_cpu(cpu) { + rb_desc->cpu =3D cpu; + rb_desc->nr_page_va =3D 0; + rb_desc->meta_va =3D (unsigned long)page_to_virt(alloc_page(GFP_KERNEL)); + if (!rb_desc->meta_va) { + err =3D -ENOMEM; + break; + } + for (id =3D 0; id < nr_pages; id++) { + rb_desc->page_va[id] =3D (unsigned long)page_to_virt(alloc_page(GFP_KER= NEL)); + if (!rb_desc->page_va[id]) { + err =3D -ENOMEM; + break; + } + rb_desc->nr_page_va++; + } + trace_desc->nr_cpus++; + rb_desc =3D __next_rb_page_desc(rb_desc); + } + + if (err) { + hyp_trace_free_pages(desc); + return err; + } + + return 0; +} + +static int __load_page(unsigned long va) +{ + return kvm_call_hyp_nvhe(__pkvm_host_share_hyp, virt_to_pfn((void *)va), = 1); +} + +static void __teardown_page(unsigned long va) +{ + WARN_ON(kvm_call_hyp_nvhe(__pkvm_host_unshare_hyp, virt_to_pfn((void *)va= ), 1)); +} + +static void hyp_trace_teardown_pages(struct hyp_trace_desc *desc, + int last_cpu) +{ + struct rb_page_desc *rb_desc; + int cpu, id; + + for_each_rb_page_desc(rb_desc, cpu, &desc->page_desc) { + if (cpu > last_cpu) + break; + __teardown_page(rb_desc->meta_va); + for (id =3D 0; id < rb_desc->nr_page_va; id++) + __teardown_page(rb_desc->page_va[id]); + } +} + +static int hyp_trace_load_pages(struct hyp_trace_desc *desc) +{ + int 
+	int last_loaded_cpu = 0, cpu, id, err = -EINVAL;
+	struct rb_page_desc *rb_desc;
+
+	for_each_rb_page_desc(rb_desc, cpu, &desc->page_desc) {
+		err = __load_page(rb_desc->meta_va);
+		if (err)
+			break;
+
+		for (id = 0; id < rb_desc->nr_page_va; id++) {
+			err = __load_page(rb_desc->page_va[id]);
+			if (err)
+				break;
+		}
+
+		if (!err)
+			continue;
+
+		for (id--; id >= 0; id--)
+			__teardown_page(rb_desc->page_va[id]);
+
+		last_loaded_cpu = cpu - 1;
+
+		break;
+	}
+
+	if (!err)
+		return 0;
+
+	hyp_trace_teardown_pages(desc, last_loaded_cpu);
+
+	return err;
+}
+
+static int hyp_trace_buffer_load(struct hyp_trace_buffer *hyp_buffer, size_t size)
+{
+	int ret, nr_pages = NR_PAGES(size);
+	struct rb_page_desc *rbdesc;
+	struct hyp_trace_desc *desc;
+	size_t desc_size;
+
+	if (hyp_trace_buffer_loaded(hyp_buffer))
+		return 0;
+
+	desc_size = size_add(offsetof(struct hyp_trace_desc, page_desc),
+			     offsetof(struct trace_page_desc, __data));
+	desc_size = size_add(desc_size,
+			     size_mul(num_possible_cpus(),
+				      struct_size(rbdesc, page_va, nr_pages)));
+	if (desc_size == SIZE_MAX)
+		return -E2BIG;
+
+	/*
+	 * The hypervisor will unmap the descriptor from the host to protect the
+	 * reading. Page granularity for the allocation ensures no other
+	 * useful data will be unmapped.
+	 */
+	desc_size = PAGE_ALIGN(desc_size);
+
+	desc = (struct hyp_trace_desc *)alloc_pages_exact(desc_size, GFP_KERNEL);
+	if (!desc)
+		return -ENOMEM;
+
+	ret = hyp_trace_alloc_pages(desc, size);
+	if (ret)
+		goto err_free_desc;
+
+	ret = bpage_backing_alloc(&desc->backing, size);
+	if (ret)
+		goto err_free_pages;
+
+	ret = hyp_trace_load_pages(desc);
+	if (ret)
+		goto err_free_backing;
+
+	ret = kvm_call_hyp_nvhe(__pkvm_load_tracing, (unsigned long)desc, desc_size);
+	if (ret)
+		goto err_teardown_pages;
+
+	hyp_buffer->writer.pdesc = &desc->page_desc;
+	hyp_buffer->writer.get_reader_page = __get_reader_page;
+	hyp_buffer->trace_buffer = ring_buffer_reader(&hyp_buffer->writer);
+	if (!hyp_buffer->trace_buffer) {
+		ret = -ENOMEM;
+		goto err_teardown_tracing;
+	}
+
+	hyp_buffer->desc = desc;
+	hyp_buffer->desc_size = desc_size;
+
+	return 0;
+
+err_teardown_tracing:
+	kvm_call_hyp_nvhe(__pkvm_teardown_tracing);
+err_teardown_pages:
+	hyp_trace_teardown_pages(desc, INT_MAX);
+err_free_backing:
+	bpage_backing_free(&desc->backing);
+err_free_pages:
+	hyp_trace_free_pages(desc);
+err_free_desc:
+	free_pages_exact(desc, desc_size);
+
+	return ret;
+}
+
+static void hyp_trace_buffer_teardown(struct hyp_trace_buffer *hyp_buffer)
+{
+	struct hyp_trace_desc *desc = hyp_buffer->desc;
+	size_t desc_size = hyp_buffer->desc_size;
+
+	if (!hyp_trace_buffer_loaded(hyp_buffer))
+		return;
+
+	if (hyp_trace_buffer_used(hyp_buffer))
+		return;
+
+	if (kvm_call_hyp_nvhe(__pkvm_teardown_tracing))
+		return;
+
+	ring_buffer_free(hyp_buffer->trace_buffer);
+	hyp_trace_teardown_pages(desc, INT_MAX);
+	bpage_backing_free(&desc->backing);
+	hyp_trace_free_pages(desc);
+	free_pages_exact(desc, desc_size);
+	hyp_buffer->trace_buffer = NULL;
+}
+
+static int hyp_trace_start(void)
+{
+	struct hyp_trace_buffer *hyp_buffer = &hyp_trace_buffer;
+	int ret = 0;
+
+	mutex_lock(&hyp_buffer->lock);
+
+	if (hyp_buffer->tracing_on)
+		goto out;
+
+	ret = hyp_trace_buffer_load(hyp_buffer, hyp_trace_buffer_size);
+	if (ret)
+		goto out;
+
+	ret = kvm_call_hyp_nvhe(__pkvm_enable_tracing, true);
+	if (ret) {
+		hyp_trace_buffer_teardown(hyp_buffer);
+		goto out;
+	}
+
+	hyp_buffer->tracing_on = true;
+
+out:
+	mutex_unlock(&hyp_buffer->lock);
+
+	return ret;
+}
+
+static void hyp_trace_stop(void)
+{
+	struct hyp_trace_buffer *hyp_buffer = &hyp_trace_buffer;
+	int ret;
+
+	mutex_lock(&hyp_buffer->lock);
+
+	if (!hyp_buffer->tracing_on)
+		goto end;
+
+	ret = kvm_call_hyp_nvhe(__pkvm_enable_tracing, false);
+	if (!ret) {
+		ring_buffer_poll_writer(hyp_buffer->trace_buffer,
+					RING_BUFFER_ALL_CPUS);
+		hyp_buffer->tracing_on = false;
+		hyp_trace_buffer_teardown(hyp_buffer);
+	}
+
+end:
+	mutex_unlock(&hyp_buffer->lock);
+}
+
+static ssize_t hyp_tracing_on(struct file *filp, const char __user *ubuf,
+			      size_t cnt, loff_t *ppos)
+{
+	unsigned long val;
+	int ret;
+
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
+		return ret;
+
+	if (val)
+		ret = hyp_trace_start();
+	else
+		hyp_trace_stop();
+
+	return ret ? ret : cnt;
+}
+
+static ssize_t hyp_tracing_on_read(struct file *filp, char __user *ubuf,
+				   size_t cnt, loff_t *ppos)
+{
+	char buf[3];
+	int r;
+
+	mutex_lock(&hyp_trace_buffer.lock);
+	r = sprintf(buf, "%d\n", hyp_trace_buffer.tracing_on);
+	mutex_unlock(&hyp_trace_buffer.lock);
+
+	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+}
+
+static const struct file_operations hyp_tracing_on_fops = {
+	.write	= hyp_tracing_on,
+	.read	= hyp_tracing_on_read,
+};
+
+static ssize_t hyp_buffer_size(struct file *filp, const char __user *ubuf,
+			       size_t cnt, loff_t *ppos)
+{
+	unsigned long val;
+	int ret;
+
+	ret = kstrtoul_from_user(ubuf, cnt, 10, &val);
+	if (ret)
+		return ret;
+
+	if (!val)
+		return -EINVAL;
+
+	mutex_lock(&hyp_trace_buffer.lock);
+	hyp_trace_buffer_size = val << 10; /* KB to B */
+	mutex_unlock(&hyp_trace_buffer.lock);
+
+	return cnt;
+}
+
+static ssize_t hyp_buffer_size_read(struct file *filp, char __user *ubuf,
+				    size_t cnt, loff_t *ppos)
+{
+	char buf[64];
+	int r;
+
+	mutex_lock(&hyp_trace_buffer.lock);
+	r = sprintf(buf, "%lu (%s)\n", hyp_trace_buffer_size >> 10,
+		    hyp_trace_buffer_loaded(&hyp_trace_buffer) ?
+		    "loaded" : "unloaded");
+	mutex_unlock(&hyp_trace_buffer.lock);
+
+	return simple_read_from_buffer(ubuf, cnt, ppos, buf, r);
+}
+
+static const struct file_operations hyp_buffer_size_fops = {
+	.write	= hyp_buffer_size,
+	.read	= hyp_buffer_size_read,
+};
+
+static void ht_print_trace_time(struct ht_iterator *iter)
+{
+	unsigned long usecs_rem;
+	u64 ts_ns = iter->ts;
+
+	do_div(ts_ns, 1000);
+	usecs_rem = do_div(ts_ns, USEC_PER_SEC);
+
+	trace_seq_printf(&iter->seq, "%5lu.%06lu: ",
+			 (unsigned long)ts_ns, usecs_rem);
+}
+
+static void ht_print_trace_cpu(struct ht_iterator *iter)
+{
+	trace_seq_printf(&iter->seq, "[%03d]\t", iter->ent_cpu);
+}
+
+static int ht_print_trace_fmt(struct ht_iterator *iter)
+{
+	if (iter->lost_events)
+		trace_seq_printf(&iter->seq, "CPU:%d [LOST %lu EVENTS]\n",
+				 iter->ent_cpu, iter->lost_events);
+
+	ht_print_trace_cpu(iter);
+	ht_print_trace_time(iter);
+
+	return trace_seq_has_overflowed(&iter->seq) ? -EOVERFLOW : 0;
+}
+
+static struct ring_buffer_event *__ht_next_pipe_event(struct ht_iterator *iter)
+{
+	struct ring_buffer_event *evt = NULL;
+	int cpu = iter->cpu;
+
+	if (cpu != RING_BUFFER_ALL_CPUS) {
+		if (ring_buffer_empty_cpu(iter->trace_buffer, cpu))
+			return NULL;
+
+		iter->ent_cpu = cpu;
+
+		return ring_buffer_peek(iter->trace_buffer, cpu, &iter->ts,
+					&iter->lost_events);
+	}
+
+	iter->ts = LLONG_MAX;
+	for_each_possible_cpu(cpu) {
+		struct ring_buffer_event *_evt;
+		unsigned long lost_events;
+		u64 ts;
+
+		if (ring_buffer_empty_cpu(iter->trace_buffer, cpu))
+			continue;
+
+		_evt = ring_buffer_peek(iter->trace_buffer, cpu, &ts,
+					&lost_events);
+		if (!_evt)
+			continue;
+
+		if (ts >= iter->ts)
+			continue;
+
+		iter->ts = ts;
+		iter->ent_cpu = cpu;
+		iter->lost_events = lost_events;
+		evt = _evt;
+	}
+
+	return evt;
+}
+
+static void *ht_next_pipe_event(struct ht_iterator *iter)
+{
+	struct ring_buffer_event *event;
+
+	event = __ht_next_pipe_event(iter);
+	if (!event)
+		return NULL;
+
+	iter->ent = (struct hyp_entry_hdr *)&event->array[1];
+	iter->ent_size = event->array[0];
+
+	return iter;
+}
+
+static ssize_t
+hyp_trace_pipe_read(struct file *file, char __user *ubuf,
+		    size_t cnt, loff_t *ppos)
+{
+	struct ht_iterator *iter = (struct ht_iterator *)file->private_data;
+	int ret;
+
+copy_to_user:
+	ret = trace_seq_to_user(&iter->seq, ubuf, cnt);
+	if (ret != -EBUSY)
+		return ret;
+
+	trace_seq_init(&iter->seq);
+
+	ret = ring_buffer_wait(iter->trace_buffer, iter->cpu, 0, NULL, NULL);
+	if (ret < 0)
+		return ret;
+
+	while (ht_next_pipe_event(iter)) {
+		int prev_len = iter->seq.seq.len;
+
+		if (ht_print_trace_fmt(iter)) {
+			iter->seq.seq.len = prev_len;
+			break;
+		}
+
+		ring_buffer_consume(iter->trace_buffer, iter->ent_cpu, NULL,
+				    NULL);
+	}
+
+	goto copy_to_user;
+}
+
+static void __poll_writer(struct work_struct *work)
+{
+	struct delayed_work *dwork = to_delayed_work(work);
+	struct ht_iterator *iter;
+
+	iter = container_of(dwork, struct ht_iterator, poll_work);
+
+	ring_buffer_poll_writer(iter->trace_buffer, iter->cpu);
+
+	schedule_delayed_work((struct delayed_work *)work,
+			      msecs_to_jiffies(RB_POLL_MS));
+}
+
+static int hyp_trace_pipe_open(struct inode *inode, struct file *file)
+{
+	struct hyp_trace_buffer *hyp_buffer = &hyp_trace_buffer;
+	int cpu = (s64)inode->i_private;
+	struct ht_iterator *iter = NULL;
+	int ret;
+
+	mutex_lock(&hyp_buffer->lock);
+
+	if (hyp_buffer->nr_readers == INT_MAX) {
+		ret = -EBUSY;
+		goto unlock;
+	}
+
+	ret = hyp_trace_buffer_load(hyp_buffer, hyp_trace_buffer_size);
+	if (ret)
+		goto unlock;
+
+	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+	if (!iter) {
+		ret = -ENOMEM;
+		goto unlock;
+	}
+	iter->trace_buffer = hyp_buffer->trace_buffer;
+	iter->cpu = cpu;
+	trace_seq_init(&iter->seq);
+	file->private_data = iter;
+
+	ret = ring_buffer_poll_writer(hyp_buffer->trace_buffer, cpu);
+	if (ret)
+		goto unlock;
+
+	INIT_DELAYED_WORK(&iter->poll_work, __poll_writer);
+	schedule_delayed_work(&iter->poll_work, msecs_to_jiffies(RB_POLL_MS));
+
+	hyp_buffer->nr_readers++;
+
+unlock:
+	if (ret) {
+		hyp_trace_buffer_teardown(hyp_buffer);
+		kfree(iter);
+	}
+
+	mutex_unlock(&hyp_buffer->lock);
+
+	return ret;
+}
+
+static int hyp_trace_pipe_release(struct inode *inode, struct file *file)
+{
+	struct hyp_trace_buffer *hyp_buffer = &hyp_trace_buffer;
+	struct ht_iterator *iter = file->private_data;
+
+	cancel_delayed_work_sync(&iter->poll_work);
+
+	mutex_lock(&hyp_buffer->lock);
+
+	WARN_ON(--hyp_buffer->nr_readers < 0);
+
+	hyp_trace_buffer_teardown(hyp_buffer);
+
+	mutex_unlock(&hyp_buffer->lock);
+
+	kfree(iter);
+
+	return 0;
+}
+
+static const struct file_operations hyp_trace_pipe_fops = {
+	.open		= hyp_trace_pipe_open,
+	.read		= hyp_trace_pipe_read,
+	.release	= hyp_trace_pipe_release,
+	.llseek		= no_llseek,
+};
+
+int hyp_trace_init_tracefs(void)
+{
+	struct dentry *root, *per_cpu_root;
+	char per_cpu_name[16];
+	long cpu;
+
+	if (!is_protected_kvm_enabled())
+		return 0;
+
+	root = tracefs_create_dir(TRACEFS_DIR, NULL);
+	if (!root) {
+		pr_err("Failed to create tracefs "TRACEFS_DIR"/\n");
+		return -ENODEV;
+	}
+
+	tracefs_create_file("tracing_on", TRACEFS_MODE_WRITE, root, NULL,
+			    &hyp_tracing_on_fops);
+
+	tracefs_create_file("buffer_size_kb", TRACEFS_MODE_WRITE, root, NULL,
+			    &hyp_buffer_size_fops);
+
+	tracefs_create_file("trace_pipe", TRACEFS_MODE_WRITE, root,
+			    (void *)RING_BUFFER_ALL_CPUS, &hyp_trace_pipe_fops);
+
+	per_cpu_root = tracefs_create_dir("per_cpu", root);
+	if (!per_cpu_root) {
+		pr_err("Failed to create tracefs folder "TRACEFS_DIR"/per_cpu/\n");
+		return -ENODEV;
+	}
+
+	for_each_possible_cpu(cpu) {
+		struct dentry *per_cpu_dir;
+
+		snprintf(per_cpu_name, sizeof(per_cpu_name), "cpu%ld", cpu);
+		per_cpu_dir = tracefs_create_dir(per_cpu_name, per_cpu_root);
+		if (!per_cpu_dir) {
+			pr_warn("Failed to create tracefs "TRACEFS_DIR"/per_cpu/cpu%ld\n",
+				cpu);
+			continue;
+		}
+
+		tracefs_create_file("trace_pipe", TRACEFS_MODE_READ, per_cpu_dir,
+				    (void *)cpu, &hyp_trace_pipe_fops);
+	}
+
+	return 0;
+}
diff --git a/arch/arm64/kvm/hyp_trace.h b/arch/arm64/kvm/hyp_trace.h
new file mode 100644
index 000000000000..14fc06c625a6
--- /dev/null
+++ b/arch/arm64/kvm/hyp_trace.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef __ARM64_KVM_HYP_TRACE_H__
+#define __ARM64_KVM_HYP_TRACE_H__
+
+#include
+#include
+
+struct ht_iterator {
+	struct trace_buffer *trace_buffer;
+	int cpu;
+	struct hyp_entry_hdr *ent;
+	unsigned long lost_events;
+	int ent_cpu;
+	size_t ent_size;
+	u64 ts;
+	void *spare;
+	size_t copy_leftover;
+	struct trace_seq seq;
+	struct delayed_work poll_work;
+};
+
+#ifdef CONFIG_TRACING
+int hyp_trace_init_tracefs(void);
+#else
+static inline int hyp_trace_init_tracefs(void) { return 0; }
+#endif
+#endif
-- 
2.46.0.598.g6f2099f65c-goog