From nobody Fri Dec 19 20:16:01 2025
Date: Fri, 7 Nov 2025 09:38:15 +0000
In-Reply-To:
<20251107093840.3779150-1-vdonnefort@google.com>
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
Mime-Version: 1.0
References: <20251107093840.3779150-1-vdonnefort@google.com>
X-Mailer: git-send-email 2.51.2.1041.gc1ab5b90ca-goog
Message-ID: <20251107093840.3779150-4-vdonnefort@google.com>
Subject: [PATCH v8 03/28] ring-buffer: Introduce ring-buffer remotes
From: Vincent Donnefort
To: rostedt@goodmis.org, mhiramat@kernel.org, mathieu.desnoyers@efficios.com,
	linux-trace-kernel@vger.kernel.org, maz@kernel.org, oliver.upton@linux.dev,
	joey.gouly@arm.com, suzuki.poulose@arm.com, yuzenghui@huawei.com
Cc: kvmarm@lists.linux.dev, linux-arm-kernel@lists.infradead.org,
	jstultz@google.com, qperret@google.com, will@kernel.org,
	aneesh.kumar@kernel.org, kernel-team@android.com,
	linux-kernel@vger.kernel.org, Vincent Donnefort
Content-Type: text/plain; charset="utf-8"

A ring-buffer remote is an entity outside of the kernel (most likely
firmware or a hypervisor) capable of writing events into a ring-buffer
that follows the same format as the tracefs ring-buffer.

To set up the ring-buffer on the kernel side, a description of the pages
forming the ring-buffer (struct trace_buffer_desc) must be given, and the
swap_reader_page() and reset() callbacks must be provided. The remote is
expected to keep the meta-page up to date.
Signed-off-by: Vincent Donnefort

diff --git a/include/linux/ring_buffer.h b/include/linux/ring_buffer.h
index 876358cfe1b1..41193c5b0d28 100644
--- a/include/linux/ring_buffer.h
+++ b/include/linux/ring_buffer.h
@@ -250,4 +250,62 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
 		    struct vm_area_struct *vma);
 int ring_buffer_unmap(struct trace_buffer *buffer, int cpu);
 int ring_buffer_map_get_reader(struct trace_buffer *buffer, int cpu);
+
+struct ring_buffer_desc {
+	int		cpu;
+	unsigned int	nr_page_va; /* excludes the meta page */
+	unsigned long	meta_va;
+	unsigned long	page_va[] __counted_by(nr_page_va);
+};
+
+struct trace_buffer_desc {
+	int	nr_cpus;
+	size_t	struct_len;
+	char	__data[]; /* list of ring_buffer_desc */
+};
+
+static inline struct ring_buffer_desc *__next_ring_buffer_desc(struct ring_buffer_desc *desc)
+{
+	size_t len = struct_size(desc, page_va, desc->nr_page_va);
+
+	return (struct ring_buffer_desc *)((void *)desc + len);
+}
+
+static inline struct ring_buffer_desc *__first_ring_buffer_desc(struct trace_buffer_desc *desc)
+{
+	return (struct ring_buffer_desc *)(&desc->__data[0]);
+}
+
+static inline size_t trace_buffer_desc_size(size_t buffer_size, unsigned int nr_cpus)
+{
+	unsigned int nr_pages = max(DIV_ROUND_UP(buffer_size, PAGE_SIZE), 2UL) + 1;
+	struct ring_buffer_desc *rbdesc;
+
+	return size_add(offsetof(struct trace_buffer_desc, __data),
+			size_mul(nr_cpus, struct_size(rbdesc, page_va, nr_pages)));
+}
+
+#define for_each_ring_buffer_desc(__pdesc, __cpu, __trace_pdesc)	\
+	for (__pdesc = __first_ring_buffer_desc(__trace_pdesc), __cpu = 0;	\
+	     (__cpu) < (__trace_pdesc)->nr_cpus;				\
+	     (__cpu)++, __pdesc = __next_ring_buffer_desc(__pdesc))
+
+struct ring_buffer_remote {
+	struct trace_buffer_desc	*desc;
+	int				(*swap_reader_page)(unsigned int cpu, void *priv);
+	int				(*reset)(unsigned int cpu, void *priv);
+	void				*priv;
+};
+
+int ring_buffer_poll_remote(struct trace_buffer *buffer, int cpu);
+
+struct trace_buffer *
+__ring_buffer_alloc_remote(struct ring_buffer_remote *remote,
+			   struct lock_class_key *key);
+
+#define ring_buffer_alloc_remote(remote)		\
+({							\
+	static struct lock_class_key __key;		\
+	__ring_buffer_alloc_remote(remote, &__key);	\
+})
 #endif /* _LINUX_RING_BUFFER_H */
diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index 80099a8b2495..1d300189f261 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -523,6 +523,8 @@ struct ring_buffer_per_cpu {
 	struct trace_buffer_meta	*meta_page;
 	struct ring_buffer_cpu_meta	*ring_meta;
 
+	struct ring_buffer_remote	*remote;
+
 	/* ring buffer pages to update, > 0 to add, < 0 to remove */
 	long				nr_pages_to_update;
 	struct list_head		new_pages; /* new pages to add */
@@ -545,6 +547,8 @@ struct trace_buffer {
 
 	struct ring_buffer_per_cpu	**buffers;
 
+	struct ring_buffer_remote	*remote;
+
 	struct hlist_node		node;
 	u64				(*clock)(void);
 
@@ -2197,6 +2201,40 @@ static void rb_meta_buffer_update(struct ring_buffer_per_cpu *cpu_buffer,
 	}
 }
 
+static struct ring_buffer_desc *ring_buffer_desc(struct trace_buffer_desc *trace_desc, int cpu)
+{
+	struct ring_buffer_desc *desc, *end;
+	size_t len;
+	int i;
+
+	if (!trace_desc)
+		return NULL;
+
+	if (cpu >= trace_desc->nr_cpus)
+		return NULL;
+
+	end = (struct ring_buffer_desc *)((void *)trace_desc + trace_desc->struct_len);
+	desc = __first_ring_buffer_desc(trace_desc);
+	len = struct_size(desc, page_va, desc->nr_page_va);
+	desc = (struct ring_buffer_desc *)((void *)desc + (len * cpu));
+
+	if (desc < end && desc->cpu == cpu)
+		return desc;
+
+	/* Missing CPUs, need to linear search */
+	for_each_ring_buffer_desc(desc, i, trace_desc) {
+		if (desc->cpu == cpu)
+			return desc;
+	}
+
+	return NULL;
+}
+
+static void *ring_buffer_desc_page(struct ring_buffer_desc *desc, int page_id)
+{
+	return page_id > desc->nr_page_va ? NULL : (void *)desc->page_va[page_id];
+}
+
 static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 			       long nr_pages, struct list_head *pages)
 {
@@ -2204,6 +2242,7 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 	struct ring_buffer_cpu_meta *meta = NULL;
 	struct buffer_page *bpage, *tmp;
 	bool user_thread = current->mm != NULL;
+	struct ring_buffer_desc *desc = NULL;
 	gfp_t mflags;
 	long i;
 
@@ -2240,6 +2279,12 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 	if (buffer->range_addr_start)
 		meta = rb_range_meta(buffer, nr_pages, cpu_buffer->cpu);
 
+	if (buffer->remote) {
+		desc = ring_buffer_desc(buffer->remote->desc, cpu_buffer->cpu);
+		if (!desc || WARN_ON(desc->nr_page_va != (nr_pages + 1)))
+			return -EINVAL;
+	}
+
 	for (i = 0; i < nr_pages; i++) {
 		struct page *page;
 
@@ -2266,6 +2311,16 @@ static int __rb_allocate_pages(struct ring_buffer_per_cpu *cpu_buffer,
 				rb_meta_buffer_update(cpu_buffer, bpage);
 			bpage->range = 1;
 			bpage->id = i + 1;
+		} else if (desc) {
+			void *p = ring_buffer_desc_page(desc, i + 1);
+
+			if (WARN_ON(!p))
+				goto free_pages;
+
+			bpage->page = p;
+			bpage->range = 1; /* bpage->page can't be freed */
+			bpage->id = i + 1;
+			cpu_buffer->subbuf_ids[i + 1] = bpage;
 		} else {
 			page = alloc_pages_node(cpu_to_node(cpu_buffer->cpu),
 						mflags | __GFP_COMP | __GFP_ZERO,
@@ -2369,6 +2424,30 @@ rb_allocate_cpu_buffer(struct trace_buffer *buffer, long nr_pages, int cpu)
 		if (cpu_buffer->ring_meta->head_buffer)
 			rb_meta_buffer_update(cpu_buffer, bpage);
 		bpage->range = 1;
+	} else if (buffer->remote) {
+		struct ring_buffer_desc *desc = ring_buffer_desc(buffer->remote->desc, cpu);
+
+		if (!desc)
+			goto fail_free_reader;
+
+		cpu_buffer->remote = buffer->remote;
+		cpu_buffer->meta_page = (struct trace_buffer_meta *)(void *)desc->meta_va;
+		cpu_buffer->nr_pages = nr_pages;
+		cpu_buffer->subbuf_ids = kcalloc(cpu_buffer->nr_pages + 1,
+						 sizeof(*cpu_buffer->subbuf_ids), GFP_KERNEL);
+		if (!cpu_buffer->subbuf_ids)
+			goto fail_free_reader;
+
+		/* Remote buffers are read-only and immutable */
+		atomic_inc(&cpu_buffer->record_disabled);
+		atomic_inc(&cpu_buffer->resize_disabled);
+
+		bpage->page = ring_buffer_desc_page(desc, cpu_buffer->meta_page->reader.id);
+		if (!bpage->page)
+			goto fail_free_reader;
+
+		bpage->range = 1;
+		cpu_buffer->subbuf_ids[0] = bpage;
 	} else {
 		page = alloc_pages_node(cpu_to_node(cpu),
 					GFP_KERNEL | __GFP_COMP | __GFP_ZERO,
@@ -2431,6 +2510,9 @@ static void rb_free_cpu_buffer(struct ring_buffer_per_cpu *cpu_buffer)
 
 	irq_work_sync(&cpu_buffer->irq_work.work);
 
+	if (cpu_buffer->remote)
+		kfree(cpu_buffer->subbuf_ids);
+
 	free_buffer_page(cpu_buffer->reader_page);
 
 	if (head) {
@@ -2453,7 +2535,8 @@ static struct trace_buffer *alloc_buffer(unsigned long size, unsigned flags,
 					 int order, unsigned long start,
 					 unsigned long end,
 					 unsigned long scratch_size,
-					 struct lock_class_key *key)
+					 struct lock_class_key *key,
+					 struct ring_buffer_remote *remote)
 {
 	struct trace_buffer *buffer __free(kfree) = NULL;
 	long nr_pages;
@@ -2493,6 +2576,8 @@ static struct trace_buffer *alloc_buffer(unsigned long size, unsigned flags,
 	if (!buffer->buffers)
 		goto fail_free_cpumask;
 
+	cpu = raw_smp_processor_id();
+
 	/* If start/end are specified, then that overrides size */
 	if (start && end) {
 		unsigned long buffers_start;
@@ -2548,6 +2633,15 @@ static struct trace_buffer *alloc_buffer(unsigned long size, unsigned flags,
 		buffer->range_addr_end = end;
 
 		rb_range_meta_init(buffer, nr_pages, scratch_size);
+	} else if (remote) {
+		struct ring_buffer_desc *desc = ring_buffer_desc(remote->desc, cpu);
+
+		buffer->remote = remote;
+		/* The writer is remote. This ring-buffer is read-only */
+		atomic_inc(&buffer->record_disabled);
+		nr_pages = desc->nr_page_va - 1;
+		if (nr_pages < 2)
+			goto fail_free_buffers;
 	} else {
 
 		/* need at least two pages */
@@ -2556,7 +2650,6 @@ static struct trace_buffer *alloc_buffer(unsigned long size, unsigned flags,
 			nr_pages = 2;
 	}
 
-	cpu = raw_smp_processor_id();
 	cpumask_set_cpu(cpu, buffer->cpumask);
 	buffer->buffers[cpu] = rb_allocate_cpu_buffer(buffer, nr_pages, cpu);
 	if (!buffer->buffers[cpu])
@@ -2598,7 +2691,7 @@ struct trace_buffer *__ring_buffer_alloc(unsigned long size, unsigned flags,
 					 struct lock_class_key *key)
 {
 	/* Default buffer page size - one system page */
-	return alloc_buffer(size, flags, 0, 0, 0, 0, key);
+	return alloc_buffer(size, flags, 0, 0, 0, 0, key, NULL);
 
 }
 EXPORT_SYMBOL_GPL(__ring_buffer_alloc);
@@ -2625,7 +2718,18 @@ struct trace_buffer *__ring_buffer_alloc_range(unsigned long size, unsigned flag
 					       struct lock_class_key *key)
 {
 	return alloc_buffer(size, flags, order, start, start + range_size,
-			    scratch_size, key);
+			    scratch_size, key, NULL);
+}
+
+/**
+ * __ring_buffer_alloc_remote - allocate a new ring_buffer from a remote
+ * @remote: Contains a description of the ring-buffer pages and remote callbacks.
+ * @key: ring buffer reader_lock_key.
+ */
+struct trace_buffer *__ring_buffer_alloc_remote(struct ring_buffer_remote *remote,
+						struct lock_class_key *key)
+{
+	return alloc_buffer(0, 0, 0, 0, 0, 0, key, remote);
 }
 
 void *ring_buffer_meta_scratch(struct trace_buffer *buffer, unsigned int *size)
@@ -5233,6 +5337,16 @@ unsigned long ring_buffer_overruns(struct trace_buffer *buffer)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_overruns);
 
+static bool rb_read_remote_meta_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	local_set(&cpu_buffer->entries, READ_ONCE(cpu_buffer->meta_page->entries));
+	local_set(&cpu_buffer->overrun, READ_ONCE(cpu_buffer->meta_page->overrun));
+	local_set(&cpu_buffer->pages_touched, READ_ONCE(cpu_buffer->meta_page->pages_touched));
+	local_set(&cpu_buffer->pages_lost, READ_ONCE(cpu_buffer->meta_page->pages_lost));
+
+	return rb_num_of_entries(cpu_buffer);
+}
+
 static void rb_iter_reset(struct ring_buffer_iter *iter)
 {
 	struct ring_buffer_per_cpu *cpu_buffer = iter->cpu_buffer;
@@ -5387,7 +5501,43 @@ rb_update_iter_read_stamp(struct ring_buffer_iter *iter,
 }
 
 static struct buffer_page *
-rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+__rb_get_reader_page_from_remote(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	struct buffer_page *new_reader, *prev_reader;
+
+	if (!rb_read_remote_meta_page(cpu_buffer))
+		return NULL;
+
+	/* More to read on the reader page */
+	if (cpu_buffer->reader_page->read < rb_page_size(cpu_buffer->reader_page)) {
+		if (!cpu_buffer->reader_page->read)
+			cpu_buffer->read_stamp = cpu_buffer->reader_page->page->time_stamp;
+		return cpu_buffer->reader_page;
+	}
+
+	prev_reader = cpu_buffer->subbuf_ids[cpu_buffer->meta_page->reader.id];
+
+	WARN_ON_ONCE(cpu_buffer->remote->swap_reader_page(cpu_buffer->cpu,
+							  cpu_buffer->remote->priv));
+	/* nr_pages doesn't include the reader page */
+	if (WARN_ON_ONCE(cpu_buffer->meta_page->reader.id > cpu_buffer->nr_pages))
+		return NULL;
+
+	new_reader = cpu_buffer->subbuf_ids[cpu_buffer->meta_page->reader.id];
+
+	WARN_ON_ONCE(prev_reader == new_reader);
+
+	cpu_buffer->reader_page->page = new_reader->page;
+	cpu_buffer->reader_page->id = new_reader->id;
+	cpu_buffer->reader_page->read = 0;
+	cpu_buffer->read_stamp = cpu_buffer->reader_page->page->time_stamp;
+	cpu_buffer->lost_events = cpu_buffer->meta_page->reader.lost_events;
+
+	return rb_page_size(cpu_buffer->reader_page) ? cpu_buffer->reader_page : NULL;
+}
+
+static struct buffer_page *
+__rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct buffer_page *reader = NULL;
 	unsigned long bsize = READ_ONCE(cpu_buffer->buffer->subbuf_size);
@@ -5557,6 +5707,13 @@ rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 	return reader;
 }
 
+static struct buffer_page *
+rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
+{
+	return cpu_buffer->remote ? __rb_get_reader_page_from_remote(cpu_buffer) :
+				    __rb_get_reader_page(cpu_buffer);
+}
+
 static void rb_advance_reader(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct ring_buffer_event *event;
@@ -5957,7 +6114,7 @@ ring_buffer_read_start(struct trace_buffer *buffer, int cpu, gfp_t flags)
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct ring_buffer_iter *iter;
 
-	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+	if (!cpumask_test_cpu(cpu, buffer->cpumask) || buffer->remote)
 		return NULL;
 
 	iter = kzalloc(sizeof(*iter), flags);
@@ -6125,6 +6282,23 @@ rb_reset_cpu(struct ring_buffer_per_cpu *cpu_buffer)
 {
 	struct buffer_page *page;
 
+	if (cpu_buffer->remote) {
+		if (!cpu_buffer->remote->reset)
+			return;
+
+		cpu_buffer->remote->reset(cpu_buffer->cpu, cpu_buffer->remote->priv);
+		rb_read_remote_meta_page(cpu_buffer);
+
+		/* Read related values, not covered by the meta-page */
+		local_set(&cpu_buffer->pages_read, 0);
+		cpu_buffer->read = 0;
+		cpu_buffer->read_bytes = 0;
+		cpu_buffer->last_overrun = 0;
+		cpu_buffer->reader_page->read = 0;
+
+		return;
+	}
+
 	rb_head_page_deactivate(cpu_buffer);
 
 	cpu_buffer->head_page
@@ -6355,6 +6529,48 @@ bool ring_buffer_empty_cpu(struct trace_buffer *buffer, int cpu)
 }
 EXPORT_SYMBOL_GPL(ring_buffer_empty_cpu);
 
+int ring_buffer_poll_remote(struct trace_buffer *buffer, int cpu)
+{
+	struct ring_buffer_per_cpu *cpu_buffer;
+
+	if (cpu != RING_BUFFER_ALL_CPUS) {
+		if (!cpumask_test_cpu(cpu, buffer->cpumask))
+			return -EINVAL;
+
+		cpu_buffer = buffer->buffers[cpu];
+
+		guard(raw_spinlock)(&cpu_buffer->reader_lock);
+		if (rb_read_remote_meta_page(cpu_buffer))
+			rb_wakeups(buffer, cpu_buffer);
+
+		return 0;
+	}
+
+	cpus_read_lock();
+
+	/*
+	 * Make sure all the ring buffers are up to date before we start reading
+	 * them.
+	 */
+	for_each_buffer_cpu(buffer, cpu) {
+		cpu_buffer = buffer->buffers[cpu];
+
+		guard(raw_spinlock)(&cpu_buffer->reader_lock);
+		rb_read_remote_meta_page(cpu_buffer);
+	}
+
+	for_each_buffer_cpu(buffer, cpu) {
+		cpu_buffer = buffer->buffers[cpu];
+
+		if (rb_num_of_entries(cpu_buffer))
+			rb_wakeups(buffer, cpu_buffer);
+	}
+
+	cpus_read_unlock();
+
+	return 0;
+}
+
 #ifdef CONFIG_RING_BUFFER_ALLOW_SWAP
 /**
  * ring_buffer_swap_cpu - swap a CPU buffer between two ring buffers
@@ -6600,6 +6816,7 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	unsigned int commit;
 	unsigned int read;
 	u64 save_timestamp;
+	bool force_memcpy;
 
 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
 		return -1;
@@ -6637,6 +6854,8 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	/* Check if any events were dropped */
 	missed_events = cpu_buffer->lost_events;
 
+	force_memcpy = cpu_buffer->mapped || cpu_buffer->remote;
+
 	/*
 	 * If this page has been partially read or
 	 * if len is not big enough to read the rest of the page or
@@ -6646,7 +6865,7 @@ int ring_buffer_read_page(struct trace_buffer *buffer,
 	 */
 	if (read || (len < (commit - read)) ||
 	    cpu_buffer->reader_page == cpu_buffer->commit_page ||
-	    cpu_buffer->mapped) {
+	    force_memcpy) {
 		struct buffer_data_page *rpage = cpu_buffer->reader_page->page;
 		unsigned int rpos = read;
 		unsigned int pos = 0;
@@ -7225,7 +7444,7 @@ int ring_buffer_map(struct trace_buffer *buffer, int cpu,
 	unsigned long flags;
 	int err;
 
-	if (!cpumask_test_cpu(cpu, buffer->cpumask))
+	if (!cpumask_test_cpu(cpu, buffer->cpumask) || buffer->remote)
 		return -EINVAL;
 
 	cpu_buffer = buffer->buffers[cpu];
-- 
2.51.2.1041.gc1ab5b90ca-goog