Hello everyone,

There was some interest at OSPM'25 to explore using the push task
mechanism for idle and newidle balance. This series implements one such
idea. The main reason for the RFC is to understand if this is the
implementation people were in favor of before trying to optimize it for
all the workloads from my test setup.

Note: The current performance of the prototype is rough. I haven't
optimized it yet since I would love some feedback first on the approach.

The ADMV1014 is a silicon germanium (SiGe), wideband, microwave
downconverter optimized for point-to-point microwave radio designs
operating in the 24 GHz to 44 GHz frequency range.

Datasheet:
https://www.analog.com/media/en/technical-documentation/data-sheets/ADMV1014.pdf

NOTE:
Currently depends on a 64-bit architecture since the input clock that
serves as the Local Oscillator should support values in the range
24 GHz to 44 GHz.

We might need some scaling implementation in the clock framework so
that u64 types are supported when using 32-bit architectures.
Antoniu Miclaus (3):
  iio:frequency:admv1014: add support for ADMV1014
  dt-bindings:iio:frequency: add admv1014 doc
  Documentation:ABI:testing:admv1014: add ABI docs

 .../testing/sysfs-bus-iio-frequency-admv1014 |  23 +
 .../bindings/iio/frequency/adi,admv1014.yaml |  97 +++
 drivers/iio/frequency/Kconfig                |  10 +
 drivers/iio/frequency/Makefile               |   1 +
 drivers/iio/frequency/admv1014.c             | 784 ++++++++++++++++++
 5 files changed, 915 insertions(+)
 create mode 100644 Documentation/ABI/testing/sysfs-bus-iio-frequency-admv1014
 create mode 100644 Documentation/devicetree/bindings/iio/frequency/adi,admv1014.yaml
 create mode 100644 drivers/iio/frequency/admv1014.c


Current approach
================

The push task framework for the fair class has been cherry-picked from
Vincent's series and has been implemented for the !EAS case.
This series implements the idea from Valentin [2] where, in the
presence of pushable tasks, the CPU will set itself on a per-LLC
"overloaded_mask".

The inter-NUMA newidle balance has been modified to traverse the CPUs
set on the overloaded mask, first in the local LLC, and then the CPUs
set on the overloaded masks of other LLCs in the same NUMA node, with
the goal of pulling a single task towards itself rather than performing
a full-fledged load balance.

This implements some of the ideas from David Vernet's SHARED_RUNQ
prototype [3] except, instead of a single SHARED_RUNQ per-LLC /
per-shard, the overloaded mask serves as an indicator of the per-CPU
rq(s) containing pushable tasks that can be migrated to the CPU going
idle. This avoids having a per-SHARED_RUNQ lock at the expense of
maintaining the overloaded cpumask.

The push callback itself has been modified to try to push the tasks on
the pushable task list to one of the CPUs on the "nohz.idle_cpus_mask",
taking the load off of idle balancing.

Clarification required
======================

I believe using the per-CPU pushable task list as a proxy for a single
SHARED_RUNQ was the idea Peter was implying during the discussion. Is
this correct or did I completely misunderstand it? P.S. A SHARED_RUNQ
could also be modelled as a large per-LLC push list.

An alternate implementation is to allow CPUs to go idle as quickly as
possible and then rely completely on the push mechanism and the
"nohz.idle_cpus_mask" to push tasks to an idle CPU; however, this puts
the burden of moving tasks on a busy overloaded CPU, which may not be
ideal.

Since folks mentioned using the "push mechanism" for newidle balance,
was the above idea the one they had in mind?

There seems to be some clear advantage from doing a complete balance in
the newidle path. Since the schedstats are not rigged up yet for the
new approach, I'm not completely sure where the advantages vs.
disadvantages currently are.

If the current approach is right, I'll dig deeper to try to address all
the shortcomings of this prototype.

Systems with a unified LLC will likely run into bottlenecks maintaining
a large per-LLC mask that can have multiple concurrent updates. I have
plans to implement an "sd_shard" which shards the large LLC, making the
cpumask maintenance less heavy on these systems.

References
==========

[1] https://lore.kernel.org/lkml/20250302210539.1563190-6-vincent.guittot@linaro.org/
[2] https://lore.kernel.org/lkml/xhsmh1putoxbz.mognet@vschneid-thinkpadt14sgen2i.remote.csb/
[3] https://lore.kernel.org/lkml/20231212003141.216236-1-void@manifault.com/

--
K Prateek Nayak (4):
  sched/fair: Introduce overloaded_mask in sched_domain_shared
  sched/fair: Update overloaded mask in presence of pushable task
  sched/fair: Rework inter-NUMA newidle balancing
  sched/fair: Proactive idle balance using push mechanism

Vincent Guittot (1):
  sched/fair: Add push task framework

 include/linux/sched/topology.h |   1 +
 kernel/sched/fair.c            | 297 +++++++++++++++++++++++++++++++--
 kernel/sched/sched.h           |   2 +
 kernel/sched/topology.c        |  25 ++-
 4 files changed, 306 insertions(+), 19 deletions(-)


base-commit: 6432e163ba1b7d80b5876792ce53e511f041ab91
--
2.34.1
From: Vincent Guittot <vincent.guittot@linaro.org>

Add the skeleton for push task infrastructure. The empty
push_fair_task() prototype will be used to implement proactive idle
balancing in subsequent commits.

[ prateek: Broke off relevant bits from [1] ]

Link: https://lore.kernel.org/all/20250302210539.1563190-6-vincent.guittot@linaro.org/ [1]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c  | 85 ++++++++++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h |  2 ++
 2 files changed, 87 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index XXXXXXX..XXXXXXX 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -XXX,XX +XXX,XX @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	hrtick_update(rq);
 }

+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p);
 static void set_next_buddy(struct sched_entity *se);

 /*
@@ -XXX,XX +XXX,XX @@ static int dequeue_entities(struct rq *rq, struct sched_entity *se, int flags)
 			h_nr_idle = task_has_idle_policy(p);
 			if (task_sleep || task_delayed || !se->sched_delayed)
 				h_nr_runnable = 1;
+
+			fair_remove_pushable_task(rq, p);
 		} else {
 			cfs_rq = group_cfs_rq(se);
 			slice = cfs_rq_min_slice(cfs_rq);
@@ -XXX,XX +XXX,XX @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu)
 	return target;
 }

+static inline bool fair_push_task(struct task_struct *p)
+{
+	if (!task_on_rq_queued(p))
+		return false;
+
+	if (p->se.sched_delayed)
+		return false;
+
+	if (p->nr_cpus_allowed == 1)
+		return false;
+
+	return true;
+}
+
+static inline int has_pushable_tasks(struct rq *rq)
+{
+	return !plist_head_empty(&rq->cfs.pushable_tasks);
+}
+
+/*
+ * See if the non-running fair tasks on this rq can be sent to other
+ * CPUs that fit better with their profile.
+ */
+static bool push_fair_task(struct rq *rq)
+{
+	return false;
+}
+
+static void push_fair_tasks(struct rq *rq)
+{
+	/* push_fair_task() will return true if it moved a fair task */
+	while (push_fair_task(rq))
+		;
+}
+
+static DEFINE_PER_CPU(struct balance_callback, fair_push_head);
+
+static inline void fair_queue_pushable_tasks(struct rq *rq)
+{
+	if (!has_pushable_tasks(rq))
+		return;
+
+	queue_balance_callback(rq, &per_cpu(fair_push_head, rq->cpu), push_fair_tasks);
+}
+
+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p)
+{
+	plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+}
+
+static void fair_add_pushable_task(struct rq *rq, struct task_struct *p)
+{
+	if (fair_push_task(p)) {
+		plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+		plist_node_init(&p->pushable_tasks, p->prio);
+		plist_add(&p->pushable_tasks, &rq->cfs.pushable_tasks);
+	}
+}
+
 /*
  * select_task_rq_fair: Select target runqueue for the waking task in domains
  * that have the relevant SD flag set. In practice, this is SD_BALANCE_WAKE,
@@ -XXX,XX +XXX,XX @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 	return sched_balance_newidle(rq, rf) != 0;
 }
 #else
+static inline void fair_queue_pushable_tasks(struct rq *rq) {}
+static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p) {}
+static inline void fair_add_pushable_task(struct rq *rq, struct task_struct *p) {}
 static inline void set_task_max_allowed_capacity(struct task_struct *p) {}
 #endif /* CONFIG_SMP */

@@ -XXX,XX +XXX,XX @@ pick_next_task_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf
 		put_prev_entity(cfs_rq, pse);
 		set_next_entity(cfs_rq, se);

+		/*
+		 * The previous task might be eligible for being pushed on
+		 * another CPU if it is still active.
+		 */
+		fair_add_pushable_task(rq, prev);
+
 		__set_next_task_fair(rq, p, true);
 	}

@@ -XXX,XX +XXX,XX @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev, struct t
 		cfs_rq = cfs_rq_of(se);
 		put_prev_entity(cfs_rq, se);
 	}
+
+	/*
+	 * The previous task might be eligible for being pushed on another CPU
+	 * if it is still active.
+	 */
+	fair_add_pushable_task(rq, prev);
+
 }

 /*
@@ -XXX,XX +XXX,XX @@ static void __set_next_task_fair(struct rq *rq, struct task_struct *p, bool firs
 {
 	struct sched_entity *se = &p->se;

+	fair_remove_pushable_task(rq, p);
+
 #ifdef CONFIG_SMP
 	if (task_on_rq_queued(p)) {
 		/*
@@ -XXX,XX +XXX,XX @@ static void __set_next_task_fair(struct rq *rq, struct task_struct *p, bool firs
 		if (hrtick_enabled_fair(rq))
 			hrtick_start_fair(rq, p);

+		/*
+		 * Try to push the prev task before checking misfit for the
+		 * next task as the migration of prev can make next fit the CPU
+		 */
+		fair_queue_pushable_tasks(rq);
 		update_misfit_status(p, rq);
 		sched_fair_update_stop_tick(rq, p);
 	}
@@ -XXX,XX +XXX,XX @@ void init_cfs_rq(struct cfs_rq *cfs_rq)
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
 	cfs_rq->min_vruntime = (u64)(-(1LL << 20));
 #ifdef CONFIG_SMP
+	plist_head_init(&cfs_rq->pushable_tasks);
 	raw_spin_lock_init(&cfs_rq->removed.lock);
 #endif
 }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index XXXXXXX..XXXXXXX 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -XXX,XX +XXX,XX @@ struct cfs_rq {
 	struct list_head	leaf_cfs_rq_list;
 	struct task_group	*tg;	/* group that "owns" this runqueue */

+	struct plist_head	pushable_tasks;
+
 	/* Locally cached copy of our task_group's idle value */
 	int			idle;

--
2.34.1
Introduce a new cpumask member "overloaded_mask" in sched_domain_shared.
This mask will be used to keep track of overloaded CPUs with pushable
tasks on them and will later be used by newidle balance to only scan
through the overloaded CPUs to pull a task to it.

Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched/topology.h |  1 +
 kernel/sched/topology.c        | 25 ++++++++++++++++++-------
 2 files changed, 19 insertions(+), 7 deletions(-)

diff --git a/include/linux/sched/topology.h b/include/linux/sched/topology.h
index XXXXXXX..XXXXXXX 100644
--- a/include/linux/sched/topology.h
+++ b/include/linux/sched/topology.h
@@ -XXX,XX +XXX,XX @@ struct sched_domain_shared {
 	atomic_t	nr_busy_cpus;
 	int		has_idle_cores;
 	int		nr_idle_scan;
+	cpumask_var_t	overloaded_mask;
 };

 struct sched_domain {
diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index XXXXXXX..XXXXXXX 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -XXX,XX +XXX,XX @@ static void destroy_sched_domain(struct sched_domain *sd)
 	 */
 	free_sched_groups(sd->groups, 1);

-	if (sd->shared && atomic_dec_and_test(&sd->shared->ref))
+	if (sd->shared && atomic_dec_and_test(&sd->shared->ref)) {
+		free_cpumask_var(sd->shared->overloaded_mask);
 		kfree(sd->shared);
+	}
 	kfree(sd);
 }

@@ -XXX,XX +XXX,XX @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 			return -ENOMEM;

 		for_each_cpu(j, cpu_map) {
+			int node = cpu_to_node(j);
 			struct sched_domain *sd;
 			struct sched_domain_shared *sds;
 			struct sched_group *sg;
 			struct sched_group_capacity *sgc;

 			sd = kzalloc_node(sizeof(struct sched_domain) + cpumask_size(),
-					GFP_KERNEL, cpu_to_node(j));
+					GFP_KERNEL, node);
 			if (!sd)
 				return -ENOMEM;

 			*per_cpu_ptr(sdd->sd, j) = sd;

 			sds = kzalloc_node(sizeof(struct sched_domain_shared),
-					GFP_KERNEL, cpu_to_node(j));
+					GFP_KERNEL, node);
 			if (!sds)
 				return -ENOMEM;

+			if (!zalloc_cpumask_var_node(&sds->overloaded_mask, GFP_KERNEL, node))
+				return -ENOMEM;
+
 			*per_cpu_ptr(sdd->sds, j) = sds;

 			sg = kzalloc_node(sizeof(struct sched_group) + cpumask_size(),
-					GFP_KERNEL, cpu_to_node(j));
+					GFP_KERNEL, node);
 			if (!sg)
 				return -ENOMEM;

@@ -XXX,XX +XXX,XX @@ static int __sdt_alloc(const struct cpumask *cpu_map)
 			*per_cpu_ptr(sdd->sg, j) = sg;

 			sgc = kzalloc_node(sizeof(struct sched_group_capacity) + cpumask_size(),
-					GFP_KERNEL, cpu_to_node(j));
+					GFP_KERNEL, node);
 			if (!sgc)
 				return -ENOMEM;

@@ -XXX,XX +XXX,XX @@ static void __sdt_free(const struct cpumask *cpu_map)
 				kfree(*per_cpu_ptr(sdd->sd, j));
 			}

-			if (sdd->sds)
-				kfree(*per_cpu_ptr(sdd->sds, j));
+			if (sdd->sds) {
+				struct sched_domain_shared *sds = *per_cpu_ptr(sdd->sds, j);
+
+				if (sds)
+					free_cpumask_var(sds->overloaded_mask);
+				kfree(sds);
+			}
 			if (sdd->sg)
 				kfree(*per_cpu_ptr(sdd->sg, j));
 			if (sdd->sgc)
--
2.34.1
1 | With the introduction of "overloaded_mask" in sched_domain_shared | 1 | The ADMV1014 is a silicon germanium (SiGe), wideband, |
---|---|---|---|
2 | struct, it is now possible to scan through the CPUs that contain | 2 | microwave downconverter optimized for point to point microwave |
3 | pushable tasks that could be run on the CPU going newly idle. | 3 | radio designs operating in the 24 GHz to 44 GHz frequency range. |
4 | 4 | ||
5 | Redesign the inter-NUMA newidle balancing to opportunistically pull a | 5 | Datasheet: |
6 | task to the CPU going idle from the overloaded CPUs only. | 6 | https://www.analog.com/media/en/technical-documentation/data-sheets/ADMV1014.pdf |
7 | Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com> | ||
8 | --- | ||
9 | drivers/iio/frequency/Kconfig | 10 + | ||
10 | drivers/iio/frequency/Makefile | 1 + | ||
11 | drivers/iio/frequency/admv1014.c | 784 +++++++++++++++++++++++++++++++ | ||
12 | 3 files changed, 795 insertions(+) | ||
13 | create mode 100644 drivers/iio/frequency/admv1014.c | ||
7 | 14 | ||
8 | The search starts from sd_llc and moves up until sd_numa. Since | 15 | diff --git a/drivers/iio/frequency/Kconfig b/drivers/iio/frequency/Kconfig |
9 | "overloaded_mask" is per-LLC, each LLC domain is visited individually | ||
10 | using per-CPU sd_llc struct shared by all CPUs in an LLC. | ||
11 | |||
12 | Once visited for one, all CPUs in the LLC are marked visited and the | ||
13 | search resumes for the LLCs of CPUs that remain to be visited. | ||
14 | |||
15 | detach_one_task() was used in instead of pick_next_pushable_fair_task() | ||
16 | since detach_one_task() also considers the CPU affinity of the task | ||
17 | being pulled as opposed to pick_next_pushable_fair_task() which returns | ||
18 | the first pushable task. | ||
19 | |||
20 | Since each iteration of overloaded_mask rechecks the idle state of the | ||
21 | CPU doing newidle balance, the initial gating factor based on | ||
22 | "rq->avg_idle" has been removed. | ||
23 | |||
24 | Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> | ||
25 | --- | ||
26 | kernel/sched/fair.c | 129 +++++++++++++++++++++++++++++++++++++++----- | ||
27 | 1 file changed, 117 insertions(+), 12 deletions(-) | ||
28 | |||
29 | diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c | ||
30 | index XXXXXXX..XXXXXXX 100644 | 16 | index XXXXXXX..XXXXXXX 100644 |
31 | --- a/kernel/sched/fair.c | 17 | --- a/drivers/iio/frequency/Kconfig |
32 | +++ b/kernel/sched/fair.c | 18 | +++ b/drivers/iio/frequency/Kconfig |
33 | @@ -XXX,XX +XXX,XX @@ static inline bool nohz_idle_balance(struct rq *this_rq, enum cpu_idle_type idle | 19 | @@ -XXX,XX +XXX,XX @@ config ADF4371 |
34 | static inline void nohz_newidle_balance(struct rq *this_rq) { } | 20 | To compile this driver as a module, choose M here: the |
35 | #endif /* CONFIG_NO_HZ_COMMON */ | 21 | module will be called adf4371. |
36 | 22 | ||
37 | +static inline bool sched_newidle_continue_balance(struct rq *rq) | 23 | +config ADMV1014 |
38 | +{ | 24 | + tristate "Analog Devices ADMV1014 Microwave Downconverter" |
39 | + return !rq->nr_running && !rq->ttwu_pending; | 25 | + depends on SPI && COMMON_CLK && 64BIT |
40 | +} | 26 | + help |
41 | + | 27 | + Say yes here to build support for Analog Devices ADMV1014 |
42 | +static inline int sched_newidle_pull_overloaded(struct sched_domain *sd, | 28 | + 24 GHz to 44 GHz, Wideband, Microwave Downconverter. |
43 | + struct rq *this_rq, | 29 | + |
44 | + int *continue_balancing) | 30 | + To compile this driver as a module, choose M here: the |
45 | +{ | 31 | + module will be called admv1014. |
46 | + struct cpumask *cpus = this_cpu_cpumask_var_ptr(load_balance_mask); | 32 | + |
47 | + int cpu, this_cpu = cpu_of(this_rq); | 33 | config ADRF6780 |
48 | + struct sched_domain *sd_parent; | 34 | tristate "Analog Devices ADRF6780 Microwave Upconverter" |
49 | + struct lb_env env = { | 35 | depends on SPI |
50 | + .dst_cpu = this_cpu, | 36 | diff --git a/drivers/iio/frequency/Makefile b/drivers/iio/frequency/Makefile |
51 | + .dst_rq = this_rq, | 37 | index XXXXXXX..XXXXXXX 100644 |
52 | + .idle = CPU_NEWLY_IDLE, | 38 | --- a/drivers/iio/frequency/Makefile |
53 | + }; | 39 | +++ b/drivers/iio/frequency/Makefile |
54 | + | 40 | @@ -XXX,XX +XXX,XX @@ |
55 | + | 41 | obj-$(CONFIG_AD9523) += ad9523.o |
56 | + cpumask_and(cpus, sched_domain_span(sd), cpu_active_mask); | 42 | obj-$(CONFIG_ADF4350) += adf4350.o |
57 | + | 43 | obj-$(CONFIG_ADF4371) += adf4371.o |
58 | +next_domain: | 44 | +obj-$(CONFIG_ADMV1014) += admv1014.o |
59 | + env.sd = sd; | 45 | obj-$(CONFIG_ADRF6780) += adrf6780.o |
60 | + /* Allow migrating cache_hot tasks too. */ | 46 | diff --git a/drivers/iio/frequency/admv1014.c b/drivers/iio/frequency/admv1014.c |
61 | + sd->nr_balance_failed = sd->cache_nice_tries + 1; | 47 | new file mode 100644 |
62 | + | 48 | index XXXXXXX..XXXXXXX |
63 | + for_each_cpu_wrap(cpu, cpus, this_cpu) { | 49 | --- /dev/null |
64 | + struct sched_domain_shared *sd_share; | 50 | +++ b/drivers/iio/frequency/admv1014.c |
65 | + struct cpumask *overloaded_mask; | 51 | @@ -XXX,XX +XXX,XX @@ |
66 | + struct sched_domain *cpu_llc; | 52 | +// SPDX-License-Identifier: GPL-2.0-only |
67 | + int overloaded_cpu; | 53 | +/* |
68 | + | 54 | + * ADMV1014 driver |
69 | + cpu_llc = rcu_dereference(per_cpu(sd_llc, cpu)); | 55 | + * |
70 | + if (!cpu_llc) | 56 | + * Copyright 2021 Analog Devices Inc. |
71 | + break; | 57 | + */ |
72 | + | 58 | + |
73 | + sd_share = cpu_llc->shared; | 59 | +#include <linux/bitfield.h> |
74 | + if (!sd_share) | 60 | +#include <linux/bits.h> |
75 | + break; | 61 | +#include <linux/clk.h> |
76 | + | 62 | +#include <linux/clkdev.h> |
77 | + overloaded_mask = sd_share->overloaded_mask; | 63 | +#include <linux/clk-provider.h> |
78 | + if (!overloaded_mask) | 64 | +#include <linux/device.h> |
79 | + break; | 65 | +#include <linux/iio/iio.h> |
80 | + | 66 | +#include <linux/module.h> |
81 | + for_each_cpu_wrap(overloaded_cpu, overloaded_mask, this_cpu + 1) { | 67 | +#include <linux/mod_devicetable.h> |
82 | + struct rq *overloaded_rq = cpu_rq(overloaded_cpu); | 68 | +#include <linux/notifier.h> |
83 | + struct task_struct *p = NULL; | 69 | +#include <linux/property.h> |
84 | + | 70 | +#include <linux/regulator/consumer.h> |
85 | + if (sched_newidle_continue_balance(this_rq)) { | 71 | +#include <linux/spi/spi.h> |
86 | + *continue_balancing = 0; | 72 | +#include <linux/units.h> |
87 | + return 0; | 73 | + |
88 | + } | 74 | +#include <asm/unaligned.h> |
89 | + | 75 | + |
90 | + /* Quick peek to find if pushable tasks exist. */ | 76 | +/* ADMV1014 Register Map */ |
91 | + if (!has_pushable_tasks(overloaded_rq)) | 77 | +#define ADMV1014_REG_SPI_CONTROL 0x00 |
92 | + continue; | 78 | +#define ADMV1014_REG_ALARM 0x01 |
93 | + | 79 | +#define ADMV1014_REG_ALARM_MASKS 0x02 |
94 | + scoped_guard (rq_lock, overloaded_rq) { | 80 | +#define ADMV1014_REG_ENABLE 0x03 |
95 | + update_rq_clock(overloaded_rq); | 81 | +#define ADMV1014_REG_QUAD 0x04 |
96 | + | 82 | +#define ADMV1014_REG_LO_AMP_PHASE_ADJUST1 0x05 |
97 | + if (!has_pushable_tasks(overloaded_rq)) | 83 | +#define ADMV1014_REG_MIXER 0x07 |
98 | + break; | 84 | +#define ADMV1014_REG_IF_AMP 0x08 |
99 | + | 85 | +#define ADMV1014_REG_IF_AMP_BB_AMP 0x09 |
100 | + env.src_cpu = overloaded_cpu; | 86 | +#define ADMV1014_REG_BB_AMP_AGC 0x0A |
101 | + env.src_rq = overloaded_rq; | 87 | +#define ADMV1014_REG_VVA_TEMP_COMP 0x0B |
102 | + | 88 | + |
103 | + p = detach_one_task(&env); | 89 | +/* ADMV1014_REG_SPI_CONTROL Map */ |
104 | + } | 90 | +#define ADMV1014_PARITY_EN_MSK BIT(15) |
105 | + | 91 | +#define ADMV1014_SPI_SOFT_RESET_MSK BIT(14) |
106 | + if (!p) | 92 | +#define ADMV1014_CHIP_ID_MSK GENMASK(11, 4) |
107 | + continue; | 93 | +#define ADMV1014_CHIP_ID 0x9 |
108 | + | 94 | +#define ADMV1014_REVISION_ID_MSK GENMASK(3, 0) |
109 | + attach_one_task(this_rq, p); | 95 | + |
110 | + return 1; | 96 | +/* ADMV1014_REG_ALARM Map */ |
97 | +#define ADMV1014_PARITY_ERROR_MSK BIT(15) | ||
98 | +#define ADMV1014_TOO_FEW_ERRORS_MSK BIT(14) | ||
99 | +#define ADMV1014_TOO_MANY_ERRORS_MSK BIT(13) | ||
100 | +#define ADMV1014_ADDRESS_RANGE_ERROR_MSK BIT(12) | ||
101 | + | ||
102 | +/* ADMV1014_REG_ENABLE Map */ | ||
103 | +#define ADMV1014_IBIAS_PD_MSK BIT(14) | ||
104 | +#define ADMV1014_P1DB_COMPENSATION_MSK GENMASK(13, 12) | ||
105 | +#define ADMV1014_IF_AMP_PD_MSK BIT(11) | ||
106 | +#define ADMV1014_QUAD_BG_PD_MSK BIT(9) | ||
107 | +#define ADMV1014_BB_AMP_PD_MSK BIT(8) | ||
108 | +#define ADMV1014_QUAD_IBIAS_PD_MSK BIT(7) | ||
109 | +#define ADMV1014_DET_EN_MSK BIT(6) | ||
110 | +#define ADMV1014_BG_PD_MSK BIT(5) | ||
111 | + | ||
112 | +/* ADMV1014_REG_QUAD Map */ | ||
113 | +#define ADMV1014_QUAD_SE_MODE_MSK GENMASK(9, 6) | ||
114 | +#define ADMV1014_QUAD_FILTERS_MSK GENMASK(3, 0) | ||
115 | + | ||
116 | +/* ADMV1014_REG_LO_AMP_PHASE_ADJUST1 Map */ | ||
117 | +#define ADMV1014_LOAMP_PH_ADJ_I_FINE_MSK GENMASK(15, 9) | ||
118 | +#define ADMV1014_LOAMP_PH_ADJ_Q_FINE_MSK GENMASK(8, 2) | ||
119 | + | ||
120 | +/* ADMV1014_REG_MIXER Map */ | ||
121 | +#define ADMV1014_MIXER_VGATE_MSK GENMASK(15, 9) | ||
122 | +#define ADMV1014_DET_PROG_MSK GENMASK(6, 0) | ||
123 | + | ||
124 | +/* ADMV1014_REG_IF_AMP Map */ | ||
125 | +#define ADMV1014_IF_AMP_COARSE_GAIN_I_MSK GENMASK(11, 8) | ||
126 | +#define ADMV1014_IF_AMP_FINE_GAIN_Q_MSK GENMASK(7, 4) | ||
127 | +#define ADMV1014_IF_AMP_FINE_GAIN_I_MSK GENMASK(3, 0) | ||
128 | + | ||
129 | +/* ADMV1014_REG_IF_AMP_BB_AMP Map */ | ||
130 | +#define ADMV1014_IF_AMP_COARSE_GAIN_Q_MSK GENMASK(15, 12) | ||
131 | +#define ADMV1014_BB_AMP_OFFSET_Q_MSK GENMASK(9, 5) | ||
132 | +#define ADMV1014_BB_AMP_OFFSET_I_MSK GENMASK(4, 0) | ||
133 | + | ||
134 | +/* ADMV1014_REG_BB_AMP_AGC Map */ | ||
135 | +#define ADMV1014_BB_AMP_REF_GEN_MSK GENMASK(6, 3) | ||
136 | +#define ADMV1014_BB_AMP_GAIN_CTRL_MSK GENMASK(2, 1) | ||
137 | +#define ADMV1014_BB_SWITCH_HIGH_LOW_CM_MSK BIT(0) | ||
138 | + | ||
139 | +/* ADMV1014_REG_VVA_TEMP_COMP Map */ | ||
140 | +#define ADMV1014_VVA_TEMP_COMP_MSK GENMASK(15, 0) | ||
141 | + | ||
142 | +/* ADMV1014 Miscellaneous Defines */ | ||
143 | +#define ADMV1014_READ BIT(7) | ||
144 | +#define ADMV1014_REG_ADDR_READ_MSK GENMASK(6, 1) | ||
145 | +#define ADMV1014_REG_ADDR_WRITE_MSK GENMASK(22, 17) | ||
146 | +#define ADMV1014_REG_DATA_MSK GENMASK(16, 1) | ||
147 | + | ||
148 | +enum { | ||
149 | + ADMV1014_IQ_MODE, | ||
150 | + ADMV1014_IF_MODE | ||
151 | +}; | ||
152 | + | ||
153 | +enum { | ||
154 | + ADMV1014_SE_MODE_POS = 6, | ||
155 | + ADMV1014_SE_MODE_NEG = 9, | ||
156 | + ADMV1014_SE_MODE_DIFF = 12 | ||
157 | +}; | ||
158 | + | ||
159 | +enum { | ||
160 | + ADMV1014_GAIN_COARSE, | ||
161 | + ADMV1014_GAIN_FINE, | ||
162 | +}; | ||
163 | + | ||
164 | +static const int detector_table[] = {0, 1, 2, 4, 8, 16, 32, 64}; | ||
165 | + | ||
166 | +struct admv1014_state { | ||
167 | + struct spi_device *spi; | ||
168 | + struct clk *clkin; | ||
169 | + struct notifier_block nb; | ||
170 | + /* Protect against concurrent accesses to the device and to data*/ | ||
171 | + struct mutex lock; | ||
172 | + struct regulator *reg; | ||
173 | + unsigned int input_mode; | ||
174 | + unsigned int quad_se_mode; | ||
175 | + unsigned int p1db_comp; | ||
176 | + bool det_en; | ||
177 | + u8 data[3] ____cacheline_aligned; | ||
178 | +}; | ||
179 | + | ||
180 | +static const int mixer_vgate_table[] = {106, 107, 108, 110, 111, 112, 113, 114, 117, 118, 119, 120, 122, 123, 44, 45}; | ||
181 | + | ||
182 | +static int __admv1014_spi_read(struct admv1014_state *st, unsigned int reg, | ||
183 | + unsigned int *val) | ||
184 | +{ | ||
185 | + int ret; | ||
186 | + struct spi_transfer t = {0}; | ||
187 | + | ||
188 | + st->data[0] = ADMV1014_READ | FIELD_PREP(ADMV1014_REG_ADDR_READ_MSK, reg); | ||
189 | + st->data[1] = 0x0; | ||
190 | + st->data[2] = 0x0; | ||
191 | + | ||
192 | + t.rx_buf = &st->data[0]; | ||
193 | + t.tx_buf = &st->data[0]; | ||
194 | + t.len = 3; | ||
195 | + | ||
196 | + ret = spi_sync_transfer(st->spi, &t, 1); | ||
197 | + if (ret) | ||
198 | + return ret; | ||
199 | + | ||
200 | + *val = FIELD_GET(ADMV1014_REG_DATA_MSK, get_unaligned_be24(&st->data[0])); | ||
201 | + | ||
202 | + return ret; | ||
203 | +} | ||
204 | + | ||
205 | +static int admv1014_spi_read(struct admv1014_state *st, unsigned int reg, | ||
206 | + unsigned int *val) | ||
207 | +{ | ||
208 | + int ret; | ||
209 | + | ||
210 | + mutex_lock(&st->lock); | ||
211 | + ret = __admv1014_spi_read(st, reg, val); | ||
212 | + mutex_unlock(&st->lock); | ||
213 | + | ||
214 | + return ret; | ||
215 | +} | ||
216 | + | ||
217 | +static int __admv1014_spi_write(struct admv1014_state *st, | ||
218 | + unsigned int reg, | ||
219 | + unsigned int val) | ||
220 | +{ | ||
221 | + put_unaligned_be24(FIELD_PREP(ADMV1014_REG_DATA_MSK, val) | | ||
222 | + FIELD_PREP(ADMV1014_REG_ADDR_WRITE_MSK, reg), &st->data[0]); | ||
223 | + | ||
224 | + return spi_write(st->spi, &st->data[0], 3); | ||
225 | +} | ||
226 | + | ||
227 | +static int admv1014_spi_write(struct admv1014_state *st, unsigned int reg, | ||
228 | + unsigned int val) | ||
229 | +{ | ||
230 | + int ret; | ||
231 | + | ||
232 | + mutex_lock(&st->lock); | ||
233 | + ret = __admv1014_spi_write(st, reg, val); | ||
234 | + mutex_unlock(&st->lock); | ||
235 | + | ||
236 | + return ret; | ||
237 | +} | ||
238 | + | ||
239 | +static int __admv1014_spi_update_bits(struct admv1014_state *st, unsigned int reg, | ||
240 | + unsigned int mask, unsigned int val) | ||
241 | +{ | ||
242 | + int ret; | ||
243 | + unsigned int data, temp; | ||
244 | + | ||
245 | + ret = __admv1014_spi_read(st, reg, &data); | ||
246 | + if (ret) | ||
247 | + return ret; | ||
248 | + | ||
249 | + temp = (data & ~mask) | (val & mask); | ||
250 | + | ||
251 | + return __admv1014_spi_write(st, reg, temp); | ||
252 | +} | ||
253 | + | ||
254 | +static int admv1014_spi_update_bits(struct admv1014_state *st, unsigned int reg, | ||
255 | + unsigned int mask, unsigned int val) | ||
256 | +{ | ||
257 | + int ret; | ||
258 | + | ||
259 | + mutex_lock(&st->lock); | ||
260 | + ret = __admv1014_spi_update_bits(st, reg, mask, val); | ||
261 | + mutex_unlock(&st->lock); | ||
262 | + | ||
263 | + return ret; | ||
264 | +} | ||
265 | + | ||
266 | +static int admv1014_update_quad_filters(struct admv1014_state *st) | ||
267 | +{ | ||
268 | + unsigned int filt_raw; | ||
269 | + u64 rate = clk_get_rate(st->clkin); | ||
270 | + | ||
271 | + if (rate >= (5400 * HZ_PER_MHZ) && rate <= (7000 * HZ_PER_MHZ)) | ||
272 | + filt_raw = 15; | ||
273 | + else if (rate >= (5400 * HZ_PER_MHZ) && rate <= (8000 * HZ_PER_MHZ)) | ||
274 | + filt_raw = 10; | ||
275 | + else if (rate >= (6600 * HZ_PER_MHZ) && rate <= (9200 * HZ_PER_MHZ)) | ||
276 | + filt_raw = 5; | ||
277 | + else | ||
278 | + filt_raw = 0; | ||
279 | + | ||
280 | + return __admv1014_spi_update_bits(st, ADMV1014_REG_QUAD, | ||
281 | + ADMV1014_QUAD_FILTERS_MSK, | ||
282 | + FIELD_PREP(ADMV1014_QUAD_FILTERS_MSK, filt_raw)); | ||
283 | +} | ||
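The LO-rate-to-filter-code mapping above can be modeled as plain C. Note that the first two ranges overlap, so the first match wins and the second branch effectively covers only 7 GHz to 8 GHz; `HZ_PER_MHZ` is redefined locally here for a self-contained sketch:

```c
#include <assert.h>
#include <stdint.h>

#define HZ_PER_MHZ 1000000ULL

/* Mirrors the if/else chain in admv1014_update_quad_filters() above. */
static unsigned int quad_filter_code(uint64_t rate)
{
	if (rate >= 5400 * HZ_PER_MHZ && rate <= 7000 * HZ_PER_MHZ)
		return 15;
	if (rate >= 5400 * HZ_PER_MHZ && rate <= 8000 * HZ_PER_MHZ)
		return 10;	/* reachable only for (7000, 8000] MHz */
	if (rate >= 6600 * HZ_PER_MHZ && rate <= 9200 * HZ_PER_MHZ)
		return 5;	/* reachable only for (8000, 9200] MHz */
	return 0;
}
```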
284 | + | ||
285 | +static int admv1014_update_vcm_settings(struct admv1014_state *st) | ||
286 | +{ | ||
287 | + unsigned int i, vcm_mv, vcm_comp, bb_sw_high_low_cm; | ||
288 | + int ret; | ||
289 | + | ||
290 | + vcm_mv = regulator_get_voltage(st->reg) / 1000; | ||
291 | + for (i = 0; i < ARRAY_SIZE(mixer_vgate_table); i++) { | ||
292 | + vcm_comp = 1050 + (i * 50) + (i / 8 * 50); | ||
293 | + if (vcm_mv == vcm_comp) { | ||
294 | + ret = __admv1014_spi_update_bits(st, ADMV1014_REG_MIXER, | ||
295 | + ADMV1014_MIXER_VGATE_MSK, | ||
296 | + FIELD_PREP(ADMV1014_MIXER_VGATE_MSK, | ||
297 | + mixer_vgate_table[i])); | ||
298 | + if (ret) | ||
299 | + return ret; | ||
300 | + | ||
301 | + bb_sw_high_low_cm = ~(i / 8); | ||
302 | + | ||
303 | + return __admv1014_spi_update_bits(st, ADMV1014_REG_BB_AMP_AGC, | ||
304 | + ADMV1014_BB_AMP_REF_GEN_MSK | | ||
305 | + ADMV1014_BB_SWITCH_HIGH_LOW_CM_MSK, | ||
306 | + FIELD_PREP(ADMV1014_BB_AMP_REF_GEN_MSK, i) | | ||
307 | + FIELD_PREP(ADMV1014_BB_SWITCH_HIGH_LOW_CM_MSK, bb_sw_high_low_cm)); | ||
111 | + } | 308 | + } |
112 | + | 309 | + } |
113 | + cpumask_andnot(cpus, cpus, sched_domain_span(cpu_llc)); | 310 | + |
114 | + } | 311 | + return -EINVAL; |
115 | + | 312 | +} |
116 | + if (sched_newidle_continue_balance(this_rq)) { | 313 | + |
117 | + *continue_balancing = 0; | 314 | +static int admv1014_read_raw(struct iio_dev *indio_dev, |
118 | + return 0; | 315 | + struct iio_chan_spec const *chan, |
119 | + } | 316 | + int *val, int *val2, long info) |
120 | + | 317 | +{ |
121 | + sd_parent = sd->parent; | 318 | + struct admv1014_state *st = iio_priv(indio_dev); |
122 | + if (sd_parent && !(sd_parent->flags & SD_NUMA)) { | 319 | + unsigned int data; |
123 | + cpumask_andnot(cpus, sched_domain_span(sd_parent), sched_domain_span(sd)); | 320 | + int ret; |
124 | + sd = sd_parent; | 321 | + |
125 | + goto next_domain; | 322 | + switch (info) { |
126 | + } | 323 | + case IIO_CHAN_INFO_OFFSET: |
324 | + ret = admv1014_spi_read(st, ADMV1014_REG_IF_AMP_BB_AMP, &data); | ||
325 | + if (ret) | ||
326 | + return ret; | ||
327 | + | ||
328 | + if (chan->channel2 == IIO_MOD_I) | ||
329 | + *val = FIELD_GET(ADMV1014_BB_AMP_OFFSET_I_MSK, data); | ||
330 | + else | ||
331 | + *val = FIELD_GET(ADMV1014_BB_AMP_OFFSET_Q_MSK, data); | ||
332 | + | ||
333 | + return IIO_VAL_INT; | ||
334 | + case IIO_CHAN_INFO_PHASE: | ||
335 | + ret = admv1014_spi_read(st, ADMV1014_REG_LO_AMP_PHASE_ADJUST1, &data); | ||
336 | + if (ret) | ||
337 | + return ret; | ||
338 | + | ||
339 | + if (chan->channel2 == IIO_MOD_I) | ||
340 | + *val = FIELD_GET(ADMV1014_LOAMP_PH_ADJ_I_FINE_MSK, data); | ||
341 | + else | ||
342 | + *val = FIELD_GET(ADMV1014_LOAMP_PH_ADJ_Q_FINE_MSK, data); | ||
343 | + | ||
344 | + return IIO_VAL_INT; | ||
345 | + case IIO_CHAN_INFO_SCALE: | ||
346 | + ret = admv1014_spi_read(st, ADMV1014_REG_MIXER, &data); | ||
347 | + if (ret) | ||
348 | + return ret; | ||
349 | + | ||
350 | + *val = FIELD_GET(ADMV1014_DET_PROG_MSK, data); | ||
351 | + return IIO_VAL_INT; | ||
352 | + case IIO_CHAN_INFO_HARDWAREGAIN: | ||
353 | + ret = admv1014_spi_read(st, ADMV1014_REG_BB_AMP_AGC, &data); | ||
354 | + if (ret) | ||
355 | + return ret; | ||
356 | + | ||
357 | + *val = FIELD_GET(ADMV1014_BB_AMP_GAIN_CTRL_MSK, data); | ||
358 | + return IIO_VAL_INT; | ||
359 | + default: | ||
360 | + return -EINVAL; | ||
361 | + } | ||
362 | +} | ||
363 | + | ||
364 | +static int admv1014_write_raw(struct iio_dev *indio_dev, | ||
365 | + struct iio_chan_spec const *chan, | ||
366 | + int val, int val2, long info) | ||
367 | +{ | ||
368 | + struct admv1014_state *st = iio_priv(indio_dev); | ||
369 | + | ||
370 | + switch (info) { | ||
371 | + case IIO_CHAN_INFO_OFFSET: | ||
372 | + if (chan->channel2 == IIO_MOD_I) | ||
373 | + return admv1014_spi_update_bits(st, ADMV1014_REG_IF_AMP_BB_AMP, | ||
374 | + ADMV1014_BB_AMP_OFFSET_I_MSK, | ||
375 | + FIELD_PREP(ADMV1014_BB_AMP_OFFSET_I_MSK, val)); | ||
376 | + else | ||
377 | + return admv1014_spi_update_bits(st, ADMV1014_REG_IF_AMP_BB_AMP, | ||
378 | + ADMV1014_BB_AMP_OFFSET_Q_MSK, | ||
379 | + FIELD_PREP(ADMV1014_BB_AMP_OFFSET_Q_MSK, val)); | ||
380 | + case IIO_CHAN_INFO_PHASE: | ||
381 | + if (chan->channel2 == IIO_MOD_I) | ||
382 | + return admv1014_spi_update_bits(st, ADMV1014_REG_LO_AMP_PHASE_ADJUST1, | ||
383 | + ADMV1014_LOAMP_PH_ADJ_I_FINE_MSK, | ||
384 | + FIELD_PREP(ADMV1014_LOAMP_PH_ADJ_I_FINE_MSK, val)); | ||
385 | + else | ||
386 | + return admv1014_spi_update_bits(st, ADMV1014_REG_LO_AMP_PHASE_ADJUST1, | ||
387 | + ADMV1014_LOAMP_PH_ADJ_Q_FINE_MSK, | ||
388 | + FIELD_PREP(ADMV1014_LOAMP_PH_ADJ_Q_FINE_MSK, val)); | ||
389 | + case IIO_CHAN_INFO_SCALE: | ||
390 | + return admv1014_spi_update_bits(st, ADMV1014_REG_MIXER, | ||
391 | + ADMV1014_DET_PROG_MSK, | ||
392 | + FIELD_PREP(ADMV1014_DET_PROG_MSK, val)); | ||
393 | + case IIO_CHAN_INFO_HARDWAREGAIN: | ||
394 | + return admv1014_spi_update_bits(st, ADMV1014_REG_BB_AMP_AGC, | ||
395 | + ADMV1014_BB_AMP_GAIN_CTRL_MSK, | ||
396 | + FIELD_PREP(ADMV1014_BB_AMP_GAIN_CTRL_MSK, val)); | ||
397 | + default: | ||
398 | + return -EINVAL; | ||
399 | + } | ||
400 | +} | ||
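Each `write_raw` case above reduces to "pick a mask based on the I/Q channel modifier, then pack the value into it". A minimal standalone sketch with illustrative masks (not the real ADMV1014 register layout):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative field positions; the actual ADMV1014 masks differ. */
#define OFFSET_I_MSK 0x0F00u
#define OFFSET_Q_MSK 0xF000u

/* Select the field by channel modifier, then pack FIELD_PREP-style. */
static uint32_t prep_offset(int is_i, uint32_t val)
{
	uint32_t mask = is_i ? OFFSET_I_MSK : OFFSET_Q_MSK;

	return (val << __builtin_ctz(mask)) & mask;
}
```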
401 | + | ||
402 | +static ssize_t admv1014_read(struct iio_dev *indio_dev, | ||
403 | + uintptr_t private, | ||
404 | + const struct iio_chan_spec *chan, | ||
405 | + char *buf) | ||
406 | +{ | ||
407 | + struct admv1014_state *st = iio_priv(indio_dev); | ||
408 | + unsigned int data; | ||
409 | + int ret; | ||
410 | + | ||
411 | + switch ((u32)private) { | ||
412 | + case ADMV1014_GAIN_COARSE: | ||
413 | + if (chan->channel2 == IIO_MOD_I) { | ||
414 | + ret = admv1014_spi_read(st, ADMV1014_REG_IF_AMP, &data); | ||
415 | + if (ret) | ||
416 | + return ret; | ||
417 | + | ||
418 | + data = FIELD_GET(ADMV1014_IF_AMP_COARSE_GAIN_I_MSK, data); | ||
419 | + } else { | ||
420 | + ret = admv1014_spi_read(st, ADMV1014_REG_IF_AMP_BB_AMP, &data); | ||
421 | + if (ret) | ||
422 | + return ret; | ||
423 | + | ||
424 | + data = FIELD_GET(ADMV1014_IF_AMP_COARSE_GAIN_Q_MSK, data); | ||
425 | + } | ||
426 | + break; | ||
427 | + case ADMV1014_GAIN_FINE: | ||
428 | + ret = admv1014_spi_read(st, ADMV1014_REG_IF_AMP, &data); | ||
429 | + if (ret) | ||
430 | + return ret; | ||
431 | + | ||
432 | + if (chan->channel2 == IIO_MOD_I) | ||
433 | + data = FIELD_GET(ADMV1014_IF_AMP_FINE_GAIN_I_MSK, data); | ||
434 | + else | ||
435 | + data = FIELD_GET(ADMV1014_IF_AMP_FINE_GAIN_Q_MSK, data); | ||
436 | + break; | ||
437 | + default: | ||
438 | + return -EINVAL; | ||
439 | + } | ||
440 | + | ||
441 | + return sysfs_emit(buf, "%u\n", data); | ||
442 | +} | ||
443 | + | ||
444 | +static ssize_t admv1014_write(struct iio_dev *indio_dev, | ||
445 | + uintptr_t private, | ||
446 | + const struct iio_chan_spec *chan, | ||
447 | + const char *buf, size_t len) | ||
448 | +{ | ||
449 | + struct admv1014_state *st = iio_priv(indio_dev); | ||
450 | + unsigned int data, addr, msk; | ||
451 | + int ret; | ||
452 | + | ||
453 | + ret = kstrtou32(buf, 10, &data); | ||
454 | + if (ret) | ||
455 | + return ret; | ||
456 | + | ||
457 | + switch ((u32)private) { | ||
458 | + case ADMV1014_GAIN_COARSE: | ||
459 | + if (chan->channel2 == IIO_MOD_I) { | ||
460 | + addr = ADMV1014_REG_IF_AMP; | ||
461 | + msk = ADMV1014_IF_AMP_COARSE_GAIN_I_MSK; | ||
462 | + data = FIELD_PREP(ADMV1014_IF_AMP_COARSE_GAIN_I_MSK, data); | ||
463 | + } else { | ||
464 | + addr = ADMV1014_REG_IF_AMP_BB_AMP; | ||
465 | + msk = ADMV1014_IF_AMP_COARSE_GAIN_Q_MSK; | ||
466 | + data = FIELD_PREP(ADMV1014_IF_AMP_COARSE_GAIN_Q_MSK, data); | ||
467 | + } | ||
468 | + break; | ||
469 | + case ADMV1014_GAIN_FINE: | ||
470 | + addr = ADMV1014_REG_IF_AMP; | ||
471 | + | ||
472 | + if (chan->channel2 == IIO_MOD_I) { | ||
473 | + msk = ADMV1014_IF_AMP_FINE_GAIN_I_MSK; | ||
474 | + data = FIELD_PREP(ADMV1014_IF_AMP_FINE_GAIN_I_MSK, data); | ||
475 | + } else { | ||
476 | + msk = ADMV1014_IF_AMP_FINE_GAIN_Q_MSK; | ||
477 | + data = FIELD_PREP(ADMV1014_IF_AMP_FINE_GAIN_Q_MSK, data); | ||
478 | + } | ||
479 | + break; | ||
480 | + default: | ||
481 | + return -EINVAL; | ||
482 | + } | ||
483 | + | ||
484 | + ret = admv1014_spi_update_bits(st, addr, msk, data); | ||
485 | + | ||
486 | + return ret ? ret : len; | ||
487 | +} | ||
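The store path above is `kstrtou32()` followed by a masked register update. A userspace approximation of the parse-and-pack step, using a hypothetical 4-bit fine-gain field (the Q field one nibble higher) rather than the device's real layout:

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Parse a decimal sysfs-style string and pack it into a hypothetical
 * 4-bit fine-gain field, rejecting junk and out-of-range values. */
static int pack_fine_gain(const char *buf, int is_i, uint32_t *out)
{
	char *end;
	unsigned long v = strtoul(buf, &end, 10);

	if (end == buf || *end != '\0' || v > 15)
		return -EINVAL;

	*out = is_i ? (uint32_t)v : (uint32_t)v << 4;
	return 0;
}
```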
488 | + | ||
489 | +static int admv1014_read_avail(struct iio_dev *indio_dev, | ||
490 | + struct iio_chan_spec const *chan, | ||
491 | + const int **vals, int *type, int *length, | ||
492 | + long info) | ||
493 | +{ | ||
494 | + switch (info) { | ||
495 | + case IIO_CHAN_INFO_SCALE: | ||
496 | + *vals = detector_table; | ||
497 | + *type = IIO_VAL_INT; | ||
498 | + *length = ARRAY_SIZE(detector_table); | ||
499 | + | ||
500 | + return IIO_AVAIL_LIST; | ||
501 | + default: | ||
502 | + return -EINVAL; | ||
503 | + } | ||
504 | +} | ||
505 | + | ||
506 | +static int admv1014_reg_access(struct iio_dev *indio_dev, | ||
507 | + unsigned int reg, | ||
508 | + unsigned int write_val, | ||
509 | + unsigned int *read_val) | ||
510 | +{ | ||
511 | + struct admv1014_state *st = iio_priv(indio_dev); | ||
512 | + int ret; | ||
513 | + | ||
514 | + if (read_val) | ||
515 | + ret = admv1014_spi_read(st, reg, read_val); | ||
516 | + else | ||
517 | + ret = admv1014_spi_write(st, reg, write_val); | ||
518 | + | ||
519 | + return ret; | ||
520 | +} | ||
521 | + | ||
522 | +static const struct iio_info admv1014_info = { | ||
523 | + .read_raw = admv1014_read_raw, | ||
524 | + .write_raw = admv1014_write_raw, | ||
525 | + .read_avail = &admv1014_read_avail, | ||
526 | + .debugfs_reg_access = &admv1014_reg_access, | ||
527 | +}; | ||
528 | + | ||
529 | +static int admv1014_freq_change(struct notifier_block *nb, unsigned long action, void *data) | ||
530 | +{ | ||
531 | + struct admv1014_state *st = container_of(nb, struct admv1014_state, nb); | ||
532 | + int ret; | ||
533 | + | ||
534 | + if (action == POST_RATE_CHANGE) { | ||
535 | + mutex_lock(&st->lock); | ||
536 | + ret = notifier_from_errno(admv1014_update_quad_filters(st)); | ||
537 | + mutex_unlock(&st->lock); | ||
538 | + return ret; | ||
539 | + } | ||
540 | + | ||
541 | + return NOTIFY_OK; | ||
542 | +} | ||
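The notifier returns `notifier_from_errno(...)` so the clock framework can recover the errno from a failing callback. A self-contained mirror of that encoding (constants copied from my reading of `include/linux/notifier.h`; treat the exact values as illustrative):

```c
#include <assert.h>

#define NOTIFY_OK        0x0001
#define NOTIFY_STOP_MASK 0x8000

/* Fold an errno into a notifier return value... */
static int notifier_from_errno(int err)
{
	if (err)
		return NOTIFY_STOP_MASK | (NOTIFY_OK - err);
	return NOTIFY_OK;
}

/* ...and recover it on the caller's side. */
static int notifier_to_errno(int ret)
{
	ret &= ~NOTIFY_STOP_MASK;
	return ret > NOTIFY_OK ? NOTIFY_OK - ret : 0;
}
```

The round trip is lossless, which is what lets `clk_notifier` callers propagate a driver's `-EINVAL` back through the rate-change machinery.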
543 | + | ||
544 | +#define _ADMV1014_EXT_INFO(_name, _shared, _ident) { \ | ||
545 | + .name = _name, \ | ||
546 | + .read = admv1014_read, \ | ||
547 | + .write = admv1014_write, \ | ||
548 | + .private = _ident, \ | ||
549 | + .shared = _shared, \ | ||
550 | +} | ||
551 | + | ||
552 | +static const struct iio_chan_spec_ext_info admv1014_ext_info[] = { | ||
553 | + _ADMV1014_EXT_INFO("gain_coarse", IIO_SEPARATE, ADMV1014_GAIN_COARSE), | ||
554 | + _ADMV1014_EXT_INFO("gain_fine", IIO_SEPARATE, ADMV1014_GAIN_FINE), | ||
555 | + { }, | ||
556 | +}; | ||
557 | + | ||
558 | +#define ADMV1014_CHAN(_channel, rf_comp) { \ | ||
559 | + .type = IIO_ALTVOLTAGE, \ | ||
560 | + .modified = 1, \ | ||
561 | + .output = 0, \ | ||
562 | + .indexed = 1, \ | ||
563 | + .channel2 = IIO_MOD_##rf_comp, \ | ||
564 | + .channel = _channel, \ | ||
565 | + .info_mask_separate = BIT(IIO_CHAN_INFO_PHASE) | \ | ||
566 | + BIT(IIO_CHAN_INFO_OFFSET), \ | ||
567 | + .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_HARDWAREGAIN) \ | ||
568 | + } | ||
569 | + | ||
570 | +#define ADMV1014_CHAN_GAIN(_channel, rf_comp, _admv1014_ext_info) { \ | ||
571 | + .type = IIO_ALTVOLTAGE, \ | ||
572 | + .modified = 1, \ | ||
573 | + .output = 0, \ | ||
574 | + .indexed = 1, \ | ||
575 | + .channel2 = IIO_MOD_##rf_comp, \ | ||
576 | + .channel = _channel, \ | ||
577 | + .ext_info = _admv1014_ext_info \ | ||
578 | + } | ||
579 | + | ||
580 | +#define ADMV1014_CHAN_DETECTOR(_channel) { \ | ||
581 | + .type = IIO_POWER, \ | ||
582 | + .modified = 1, \ | ||
583 | + .output = 0, \ | ||
584 | + .indexed = 1, \ | ||
585 | + .channel = _channel, \ | ||
586 | + .info_mask_separate = BIT(IIO_CHAN_INFO_SCALE), \ | ||
587 | + .info_mask_shared_by_type_available = BIT(IIO_CHAN_INFO_SCALE) \ | ||
588 | + } | ||
589 | + | ||
590 | +static const struct iio_chan_spec admv1014_channels[] = { | ||
591 | + ADMV1014_CHAN(0, I), | ||
592 | + ADMV1014_CHAN(0, Q), | ||
593 | + ADMV1014_CHAN_GAIN(0, I, admv1014_ext_info), | ||
594 | + ADMV1014_CHAN_GAIN(0, Q, admv1014_ext_info), | ||
595 | + ADMV1014_CHAN_DETECTOR(0) | ||
596 | +}; | ||
597 | + | ||
598 | +static int admv1014_init(struct admv1014_state *st) | ||
599 | +{ | ||
600 | + int ret; | ||
601 | + unsigned int chip_id, enable_reg, enable_reg_msk; | ||
602 | + struct spi_device *spi = st->spi; | ||
603 | + | ||
604 | + /* Perform a software reset */ | ||
605 | + ret = __admv1014_spi_update_bits(st, ADMV1014_REG_SPI_CONTROL, | ||
606 | + ADMV1014_SPI_SOFT_RESET_MSK, | ||
607 | + FIELD_PREP(ADMV1014_SPI_SOFT_RESET_MSK, 1)); | ||
608 | + if (ret) { | ||
609 | + dev_err(&spi->dev, "ADMV1014 SPI software reset failed.\n"); | ||
610 | + return ret; | ||
611 | + } | ||
612 | + | ||
613 | + ret = __admv1014_spi_update_bits(st, ADMV1014_REG_SPI_CONTROL, | ||
614 | + ADMV1014_SPI_SOFT_RESET_MSK, | ||
615 | + FIELD_PREP(ADMV1014_SPI_SOFT_RESET_MSK, 0)); | ||
616 | + if (ret) { | ||
617 | + dev_err(&spi->dev, "ADMV1014 SPI software reset disable failed.\n"); | ||
618 | + return ret; | ||
619 | + } | ||
620 | + | ||
621 | + ret = admv1014_spi_write(st, ADMV1014_REG_VVA_TEMP_COMP, 0x727C); | ||
622 | + if (ret) { | ||
623 | + dev_err(&spi->dev, "Writing default Temperature Compensation value failed.\n"); | ||
624 | + return ret; | ||
625 | + } | ||
626 | + | ||
627 | + ret = admv1014_spi_read(st, ADMV1014_REG_SPI_CONTROL, &chip_id); | ||
628 | + if (ret) | ||
629 | + return ret; | ||
630 | + | ||
631 | + chip_id = (chip_id & ADMV1014_CHIP_ID_MSK) >> 4; | ||
632 | + if (chip_id != ADMV1014_CHIP_ID) { | ||
633 | + dev_err(&spi->dev, "Invalid Chip ID.\n"); | ||
634 | + return -EINVAL; | ||
635 | + } | ||
636 | + | ||
637 | + ret = __admv1014_spi_update_bits(st, ADMV1014_REG_QUAD, | ||
638 | + ADMV1014_QUAD_SE_MODE_MSK, | ||
639 | + FIELD_PREP(ADMV1014_QUAD_SE_MODE_MSK, | ||
640 | + st->quad_se_mode)); | ||
641 | + if (ret) { | ||
642 | + dev_err(&spi->dev, "Writing Quad SE Mode failed.\n"); | ||
643 | + return ret; | ||
644 | + } | ||
645 | + | ||
646 | + ret = admv1014_update_quad_filters(st); | ||
647 | + if (ret) { | ||
648 | + dev_err(&spi->dev, "Update Quad Filters failed.\n"); | ||
649 | + return ret; | ||
650 | + } | ||
651 | + | ||
652 | + ret = admv1014_update_vcm_settings(st); | ||
653 | + if (ret) { | ||
654 | + dev_err(&spi->dev, "Update VCM Settings failed.\n"); | ||
655 | + return ret; | ||
656 | + } | ||
657 | + | ||
658 | + enable_reg_msk = ADMV1014_P1DB_COMPENSATION_MSK | | ||
659 | + ADMV1014_IF_AMP_PD_MSK | | ||
660 | + ADMV1014_BB_AMP_PD_MSK | | ||
661 | + ADMV1014_DET_EN_MSK; | ||
662 | + | ||
663 | + enable_reg = FIELD_PREP(ADMV1014_P1DB_COMPENSATION_MSK, st->p1db_comp) | | ||
664 | + FIELD_PREP(ADMV1014_IF_AMP_PD_MSK, !(st->input_mode)) | | ||
665 | + FIELD_PREP(ADMV1014_BB_AMP_PD_MSK, st->input_mode) | | ||
666 | + FIELD_PREP(ADMV1014_DET_EN_MSK, st->det_en); | ||
667 | + | ||
668 | + return __admv1014_spi_update_bits(st, ADMV1014_REG_ENABLE, enable_reg_msk, enable_reg); | ||
669 | +} | ||
670 | + | ||
671 | +static void admv1014_clk_disable(void *data) | ||
672 | +{ | ||
673 | + clk_disable_unprepare(data); | ||
674 | +} | ||
675 | + | ||
676 | +static void admv1014_reg_disable(void *data) | ||
677 | +{ | ||
678 | + regulator_disable(data); | ||
679 | +} | ||
680 | + | ||
681 | +static void admv1014_powerdown(void *data) | ||
682 | +{ | ||
683 | + unsigned int enable_reg, enable_reg_msk; | ||
684 | + | ||
685 | + /* Disable all components in the Enable Register */ | ||
686 | + enable_reg_msk = ADMV1014_IBIAS_PD_MSK | | ||
687 | + ADMV1014_IF_AMP_PD_MSK | | ||
688 | + ADMV1014_QUAD_BG_PD_MSK | | ||
689 | + ADMV1014_BB_AMP_PD_MSK | | ||
690 | + ADMV1014_QUAD_IBIAS_PD_MSK | | ||
691 | + ADMV1014_BG_PD_MSK; | ||
692 | + | ||
693 | + enable_reg = FIELD_PREP(ADMV1014_IBIAS_PD_MSK, 1) | | ||
694 | + FIELD_PREP(ADMV1014_IF_AMP_PD_MSK, 1) | | ||
695 | + FIELD_PREP(ADMV1014_QUAD_BG_PD_MSK, 1) | | ||
696 | + FIELD_PREP(ADMV1014_BB_AMP_PD_MSK, 1) | | ||
697 | + FIELD_PREP(ADMV1014_QUAD_IBIAS_PD_MSK, 1) | | ||
698 | + FIELD_PREP(ADMV1014_BG_PD_MSK, 1); | ||
699 | + | ||
700 | + admv1014_spi_update_bits(data, ADMV1014_REG_ENABLE, enable_reg_msk, enable_reg); | ||
701 | +} | ||
702 | + | ||
703 | +static int admv1014_properties_parse(struct admv1014_state *st) | ||
704 | +{ | ||
705 | + int ret; | ||
706 | + const char *str; | ||
707 | + struct spi_device *spi = st->spi; | ||
708 | + | ||
709 | + st->det_en = device_property_read_bool(&spi->dev, "adi,detector-enable"); | ||
710 | + | ||
711 | + st->p1db_comp = device_property_read_bool(&spi->dev, "adi,p1db-comp-enable"); | ||
712 | + if (st->p1db_comp) | ||
713 | + st->p1db_comp = 3; | ||
714 | + | ||
715 | + ret = device_property_read_string(&spi->dev, "adi,input-mode", &str); | ||
716 | + /* Property absent: keep the default and skip the string compare. */ | ||
717 | + if (ret) | ||
718 | + st->input_mode = ADMV1014_IQ_MODE; | ||
719 | + else if (!strcmp(str, "iq")) | ||
720 | + st->input_mode = ADMV1014_IQ_MODE; | ||
721 | + else if (!strcmp(str, "if")) | ||
722 | + st->input_mode = ADMV1014_IF_MODE; | ||
723 | + else | ||
724 | + return -EINVAL; | ||
725 | + | ||
726 | + ret = device_property_read_string(&spi->dev, "adi,quad-se-mode", &str); | ||
727 | + /* Property absent: keep the default and skip the string compare. */ | ||
728 | + if (ret) | ||
729 | + st->quad_se_mode = ADMV1014_SE_MODE_DIFF; | ||
730 | + else if (!strcmp(str, "diff")) | ||
731 | + st->quad_se_mode = ADMV1014_SE_MODE_DIFF; | ||
732 | + else if (!strcmp(str, "se-pos")) | ||
733 | + st->quad_se_mode = ADMV1014_SE_MODE_POS; | ||
734 | + else if (!strcmp(str, "se-neg")) | ||
735 | + st->quad_se_mode = ADMV1014_SE_MODE_NEG; | ||
736 | + else | ||
737 | + return -EINVAL; | ||
738 | + | ||
739 | + st->reg = devm_regulator_get(&spi->dev, "vcm"); | ||
740 | + if (IS_ERR(st->reg)) | ||
741 | + return dev_err_probe(&spi->dev, PTR_ERR(st->reg), | ||
742 | + "failed to get the common-mode voltage\n"); | ||
743 | + | ||
744 | + st->clkin = devm_clk_get(&spi->dev, "lo_in"); | ||
745 | + if (IS_ERR(st->clkin)) | ||
746 | + return dev_err_probe(&spi->dev, PTR_ERR(st->clkin), | ||
747 | + "failed to get the LO input clock\n"); | ||
127 | + | 748 | + |
128 | + return 0; | 749 | + return 0; |
129 | +} | 750 | +} |
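The property parsing above uses `strcmp()` chains to map a devicetree string onto a mode constant. A table-driven equivalent, as a standalone sketch (the enum values here are placeholders, not the hardware encodings):

```c
#include <assert.h>
#include <errno.h>
#include <string.h>

/* Placeholder modes; the real driver maps these to register encodings. */
enum se_mode { SE_MODE_DIFF, SE_MODE_POS, SE_MODE_NEG };

/* Table-driven version of the strcmp() chain in properties_parse(). */
static int parse_se_mode(const char *str)
{
	static const struct {
		const char *name;
		enum se_mode mode;
	} tbl[] = {
		{ "diff",   SE_MODE_DIFF },
		{ "se-pos", SE_MODE_POS },
		{ "se-neg", SE_MODE_NEG },
	};

	for (unsigned int i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++)
		if (!strcmp(str, tbl[i].name))
			return tbl[i].mode;

	return -EINVAL;
}
```

Unknown strings fall through to `-EINVAL`, matching the driver's behavior of rejecting a bad `adi,quad-se-mode` value at probe time.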
130 | + | 751 | + |
131 | /* | 752 | +static int admv1014_probe(struct spi_device *spi) |
132 | * sched_balance_newidle is called by schedule() if this_cpu is about to become | 753 | +{ |
133 | * idle. Attempts to pull tasks from other CPUs. | 754 | + struct iio_dev *indio_dev; |
134 | @@ -XXX,XX +XXX,XX @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) | 755 | + struct admv1014_state *st; |
135 | u64 t0, t1, curr_cost = 0; | 756 | + int ret; |
136 | struct sched_domain *sd; | 757 | + |
137 | int pulled_task = 0; | 758 | + indio_dev = devm_iio_device_alloc(&spi->dev, sizeof(*st)); |
138 | + u64 domain_cost; | 759 | + if (!indio_dev) |
139 | 760 | + return -ENOMEM; | |
140 | update_misfit_status(NULL, this_rq); | 761 | + |
141 | 762 | + st = iio_priv(indio_dev); | |
142 | @@ -XXX,XX +XXX,XX @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) | 763 | + |
143 | rq_unpin_lock(this_rq, rf); | 764 | + indio_dev->info = &admv1014_info; |
144 | 765 | + indio_dev->name = "admv1014"; | |
145 | rcu_read_lock(); | 766 | + indio_dev->channels = admv1014_channels; |
146 | - sd = rcu_dereference_check_sched_domain(this_rq->sd); | 767 | + indio_dev->num_channels = ARRAY_SIZE(admv1014_channels); |
147 | - | 768 | + |
148 | - if (!get_rd_overloaded(this_rq->rd) || | 769 | + st->spi = spi; |
149 | - (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) { | 770 | + |
150 | - | 771 | + ret = admv1014_properties_parse(st); |
151 | - if (sd) | 772 | + if (ret) |
152 | - update_next_balance(sd, &next_balance); | 773 | + return ret; |
153 | + if (!get_rd_overloaded(this_rq->rd)) { | 774 | + |
154 | rcu_read_unlock(); | 775 | + ret = regulator_enable(st->reg); |
155 | - | 776 | + if (ret) { |
156 | goto out; | 777 | + dev_err(&spi->dev, "Failed to enable specified Common-Mode Voltage!\n"); |
157 | } | 778 | + return ret; |
158 | rcu_read_unlock(); | 779 | + } |
159 | 780 | + | |
160 | raw_spin_rq_unlock(this_rq); | 781 | + ret = devm_add_action_or_reset(&spi->dev, admv1014_reg_disable, st->reg); |
161 | 782 | + if (ret) | |
162 | + rcu_read_lock(); | 783 | + return ret; |
163 | t0 = sched_clock_cpu(this_cpu); | 784 | + |
164 | - sched_balance_update_blocked_averages(this_cpu); | 785 | + ret = clk_prepare_enable(st->clkin); |
165 | 786 | + if (ret) | |
166 | - rcu_read_lock(); | 787 | + return ret; |
167 | - for_each_domain(this_cpu, sd) { | 788 | + |
168 | - u64 domain_cost; | 789 | + ret = devm_add_action_or_reset(&spi->dev, admv1014_clk_disable, st->clkin); |
169 | + sd = rcu_dereference(per_cpu(sd_llc, this_cpu)); | 790 | + if (ret) |
170 | + if (sd) { | 791 | + return ret; |
171 | + pulled_task = sched_newidle_pull_overloaded(sd, this_rq, &continue_balancing); | 792 | + |
172 | + | 793 | + st->nb.notifier_call = admv1014_freq_change; |
173 | + t1 = sched_clock_cpu(this_cpu); | 794 | + ret = devm_clk_notifier_register(&spi->dev, st->clkin, &st->nb); |
174 | + domain_cost = t1 - t0; | 795 | + if (ret) |
175 | + curr_cost += domain_cost; | 796 | + return ret; |
176 | + t0 = t1; | 797 | + |
177 | 798 | + mutex_init(&st->lock); | |
178 | + if (pulled_task || !continue_balancing) | 799 | + |
179 | + goto skip_numa; | 800 | + ret = admv1014_init(st); |
180 | + } | 801 | + if (ret) |
181 | + | 802 | + return ret; |
182 | + sched_balance_update_blocked_averages(this_cpu); | 803 | + |
183 | + | 804 | + ret = devm_add_action_or_reset(&spi->dev, admv1014_powerdown, st); |
184 | + sd = rcu_dereference(per_cpu(sd_numa, this_cpu)); | 805 | + if (ret) |
185 | + while (sd) { | 806 | + return ret; |
186 | update_next_balance(sd, &next_balance); | 807 | + |
187 | 808 | + return devm_iio_device_register(&spi->dev, indio_dev); | |
188 | if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost) | 809 | +} |
189 | @@ -XXX,XX +XXX,XX @@ static int sched_balance_newidle(struct rq *this_rq, struct rq_flags *rf) | 810 | + |
190 | */ | 811 | +static const struct spi_device_id admv1014_id[] = { |
191 | if (pulled_task || !continue_balancing) | 812 | + { "admv1014", 0 }, |
192 | break; | 813 | + {} |
193 | + | 814 | +}; |
194 | + sd = sd->parent; | 815 | +MODULE_DEVICE_TABLE(spi, admv1014_id); |
195 | } | 816 | + |
196 | + | 817 | +static const struct of_device_id admv1014_of_match[] = { |
197 | +skip_numa: | 818 | + { .compatible = "adi,admv1014" }, |
198 | rcu_read_unlock(); | 819 | + {}, |
199 | 820 | +}; | |
200 | raw_spin_rq_lock(this_rq); | 821 | +MODULE_DEVICE_TABLE(of, admv1014_of_match); |
822 | + | ||
823 | +static struct spi_driver admv1014_driver = { | ||
824 | + .driver = { | ||
825 | + .name = "admv1014", | ||
826 | + .of_match_table = admv1014_of_match, | ||
827 | + }, | ||
828 | + .probe = admv1014_probe, | ||
829 | + .id_table = admv1014_id, | ||
830 | +}; | ||
831 | +module_spi_driver(admv1014_driver); | ||
832 | + | ||
833 | +MODULE_AUTHOR("Antoniu Miclaus <antoniu.miclaus@analog.com>"); | ||
834 | +MODULE_DESCRIPTION("Analog Devices ADMV1014"); | ||
835 | +MODULE_LICENSE("GPL v2"); | ||
201 | -- | 836 | -- |
202 | 2.34.1 | 837 | 2.34.1 |
838 | diff view generated by jsdifflib |
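The probe routine above registers its cleanup with `devm_add_action_or_reset()`, so on detach the actions run in reverse registration order: the part is powered down before its clock and regulator are dropped. A toy model of that LIFO behavior (all names here are illustrative, not the devres API):

```c
#include <assert.h>

#define MAX_ACTIONS 8

typedef void (*action_fn)(void);

static action_fn stack[MAX_ACTIONS];
static int depth;

/* Register a cleanup action in probe order. */
static void add_action(action_fn fn)
{
	stack[depth++] = fn;
}

/* On release, run actions last-registered-first, like devres. */
static void release_all(void)
{
	while (depth)
		stack[--depth]();
}

/* Trace in which order the "cleanups" ran. */
static int trace[MAX_ACTIONS], traced;
static void reg_disable(void) { trace[traced++] = 1; }
static void clk_disable(void) { trace[traced++] = 2; }
static void powerdown(void)   { trace[traced++] = 3; }
```

Registering `reg_disable`, `clk_disable`, then `powerdown` and releasing yields the reverse order, which is why the driver needs no explicit remove() callback.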
1 | Proactively try to push tasks to one of the CPUs set in the | 1 | Add device tree bindings for the ADMV1014 Downconverter. |
---|---|---|---|
2 | "nohz.idle_cpus_mask" from the push callback. | ||
3 | 2 | ||
4 | pick_next_pushable_fair_task() is taken from Vincent's series [1] as-is, | 3 | Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com> |
5 | but the locking rules in push_fair_task() have been relaxed to release | 4 | --- |
6 | the local rq lock after dequeuing the task and reacquiring it after | 5 | .../bindings/iio/frequency/adi,admv1014.yaml | 97 +++++++++++++++++++ |
7 | pushing it to the idle target. | 6 | 1 file changed, 97 insertions(+) |
7 | create mode 100644 Documentation/devicetree/bindings/iio/frequency/adi,admv1014.yaml | ||
8 | 8 | ||
9 | double_lock_balance() used in RT seems necessary to maintain strict | 9 | diff --git a/Documentation/devicetree/bindings/iio/frequency/adi,admv1014.yaml b/Documentation/devicetree/bindings/iio/frequency/adi,admv1014.yaml |
10 | priority ordering; however, that may not be necessary for fair tasks. | 10 | new file mode 100644 |
11 | 11 | index XXXXXXX..XXXXXXX | |
12 | Link: https://lore.kernel.org/all/20250302210539.1563190-6-vincent.guittot@linaro.org/ [1] | 12 | --- /dev/null |
13 | Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> | 13 | +++ b/Documentation/devicetree/bindings/iio/frequency/adi,admv1014.yaml |
14 | --- | 14 | @@ -XXX,XX +XXX,XX @@ |
15 | kernel/sched/fair.c | 59 +++++++++++++++++++++++++++++++++++++++++++++ | 15 | +# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) |
16 | 1 file changed, 59 insertions(+) | 16 | +%YAML 1.2 |
17 | 17 | +--- | |
18 | diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c | 18 | +$id: http://devicetree.org/schemas/iio/frequency/adi,admv1014.yaml# |
19 | index XXXXXXX..XXXXXXX 100644 | 19 | +$schema: http://devicetree.org/meta-schemas/core.yaml# |
20 | --- a/kernel/sched/fair.c | ||
21 | +++ b/kernel/sched/fair.c | ||
22 | @@ -XXX,XX +XXX,XX @@ static inline int has_pushable_tasks(struct rq *rq) | ||
23 | return !plist_head_empty(&rq->cfs.pushable_tasks); | ||
24 | } | ||
25 | |||
26 | +static struct task_struct *pick_next_pushable_fair_task(struct rq *rq) | ||
27 | +{ | ||
28 | + struct task_struct *p; | ||
29 | + | 20 | + |
30 | + if (!has_pushable_tasks(rq)) | 21 | +title: ADMV1014 Microwave Downconverter |
31 | + return NULL; | ||
32 | + | 22 | + |
33 | + p = plist_first_entry(&rq->cfs.pushable_tasks, | 23 | +maintainers: |
34 | + struct task_struct, pushable_tasks); | 24 | + - Antoniu Miclaus <antoniu.miclaus@analog.com> |
35 | + | 25 | + |
36 | + WARN_ON_ONCE(rq->cpu != task_cpu(p)); | 26 | +description: | |
37 | + WARN_ON_ONCE(task_current(rq, p)); | 27 | + Wideband, microwave downconverter optimized for point to point microwave |
38 | + WARN_ON_ONCE(p->nr_cpus_allowed <= 1); | 28 | + radio designs operating in the 24 GHz to 44 GHz frequency range. |
39 | + WARN_ON_ONCE(!task_on_rq_queued(p)); | ||
40 | + | 29 | + |
41 | + /* | 30 | + https://www.analog.com/en/products/admv1014.html |
42 | + * Remove task from the pushable list as we try only once after that | ||
43 | + * the task has been put back in enqueued list. | ||
44 | + */ | ||
45 | + plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks); | ||
46 | + | 31 | + |
47 | + return p; | 32 | +properties: |
48 | +} | 33 | + compatible: |
34 | + enum: | ||
35 | + - adi,admv1014 | ||
49 | + | 36 | + |
50 | +static void fair_add_pushable_task(struct rq *rq, struct task_struct *p); | 37 | + reg: |
51 | +static void attach_one_task(struct rq *rq, struct task_struct *p); | 38 | + maxItems: 1 |
52 | + | 39 | + |
53 | /* | 40 | + spi-max-frequency: |
54 | * See if the non running fair tasks on this rq can be sent on other CPUs | 41 | + maximum: 1000000 |
55 | * that fits better with their profile. | ||
56 | */ | ||
57 | static bool push_fair_task(struct rq *rq) | ||
58 | { | ||
59 | + struct cpumask *cpus = this_cpu_cpumask_var_ptr(load_balance_mask); | ||
60 | + struct task_struct *p = pick_next_pushable_fair_task(rq); | ||
61 | + int cpu, this_cpu = cpu_of(rq); | ||
62 | + | 42 | + |
63 | + if (!p) | 43 | + clocks: |
64 | + return false; | 44 | + description: |
45 | + Definition of the external clock. | ||
46 | + minItems: 1 | ||
65 | + | 47 | + |
66 | + if (!cpumask_and(cpus, nohz.idle_cpus_mask, housekeeping_cpumask(HK_TYPE_KERNEL_NOISE))) | 48 | + clock-names: |
67 | + goto requeue; | 49 | + items: |
50 | + - const: lo_in | ||
68 | + | 51 | + |
69 | + if (!cpumask_and(cpus, cpus, p->cpus_ptr)) | 52 | + vcm-supply: |
70 | + goto requeue; | 53 | + description: |
54 | + Analog voltage regulator. | ||
71 | + | 55 | + |
72 | + for_each_cpu_wrap(cpu, cpus, this_cpu + 1) { | 56 | + adi,input-mode: |
73 | + struct rq *target_rq; | 57 | + description: |
58 | + Select the input mode. | ||
59 | + iq - in-phase quadrature (I/Q) input | ||
60 | + if - complex intermediate frequency (IF) input | ||
61 | + enum: [iq, if] | ||
74 | + | 62 | + |
75 | + if (!idle_cpu(cpu)) | 63 | + adi,detector-enable: |
76 | + continue; | 64 | + description: |
65 | + Digital Rx Detector Enable. The Square Law Detector output is | ||
66 | + available at output pin VDET. | ||
67 | + type: boolean | ||
77 | + | 68 | + |
78 | + target_rq = cpu_rq(cpu); | 69 | + adi,p1db-comp-enable: |
79 | + deactivate_task(rq, p, 0); | 70 | + description: |
80 | + set_task_cpu(p, cpu); | 71 | + Turn on bits to optimize P1dB. |
81 | + raw_spin_rq_unlock(rq); | 72 | + type: boolean |
82 | + | 73 | + |
83 | + attach_one_task(target_rq, p); | 74 | + adi,quad-se-mode: |
84 | + raw_spin_rq_lock(rq); | 75 | + description: |
76 | + Switch the LO path from differential to single-ended operation. | ||
77 | + se-neg - Single-Ended Mode, Negative Side Disabled. | ||
78 | + se-pos - Single-Ended Mode, Positive Side Disabled. | ||
79 | + diff - Differential Mode. | ||
80 | + enum: [se-neg, se-pos, diff] | ||
85 | + | 81 | + |
86 | + return true; | 82 | + '#clock-cells': |
87 | + } | 83 | + const: 0 |
88 | + | 84 | + |
89 | +requeue: | 85 | +required: |
90 | + fair_add_pushable_task(rq, p); | 86 | + - compatible |
91 | return false; | 87 | + - reg |
92 | } | 88 | + - clocks |
93 | 89 | + - clock-names | |
90 | + - vcm-supply | ||
91 | + | ||
92 | +additionalProperties: false | ||
93 | + | ||
94 | +examples: | ||
95 | + - | | ||
96 | + spi { | ||
97 | + #address-cells = <1>; | ||
98 | + #size-cells = <0>; | ||
99 | + admv1014@0{ | ||
100 | + compatible = "adi,admv1014"; | ||
101 | + reg = <0>; | ||
102 | + spi-max-frequency = <1000000>; | ||
103 | + clocks = <&admv1014_lo>; | ||
104 | + clock-names = "lo_in"; | ||
105 | + vcm-supply = <&vcm>; | ||
106 | + adi,quad-se-mode = "diff"; | ||
107 | + adi,detector-enable; | ||
108 | + adi,p1db-comp-enable; | ||
109 | + }; | ||
110 | + }; | ||
111 | +... | ||
94 | -- | 112 | -- |
95 | 2.34.1 | 113 | 2.34.1 |
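The scan in `push_fair_task()` (first patch above) walks the idle mask starting just past `this_cpu` via `for_each_cpu_wrap()` and takes the first idle CPU the task is allowed on. A standalone model of that wrap-around search, with plain arrays standing in for cpumasks:

```c
#include <assert.h>

#define NR_CPUS 8

/* Start just after this_cpu and wrap; return the first CPU that is
 * both idle and in the task's allowed set, or -1 if none qualifies. */
static int pick_push_target(const int *idle, const int *allowed, int this_cpu)
{
	for (int i = 1; i <= NR_CPUS; i++) {
		int cpu = (this_cpu + i) % NR_CPUS;

		if (idle[cpu] && allowed[cpu])
			return cpu;
	}
	return -1;
}
```

Starting past `this_cpu` spreads pushed tasks across idle CPUs instead of always dumping them on the lowest-numbered one.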
1 | In the presence of pushable tasks on the CPU, set it in the newly introduced | 1 | Add documentation for the use of the Digital Attenuator gain. |
---|---|---|---|
2 | "overloaded_mask" in the sched_domain_shared struct. This will be used by | ||
3 | the newidle balance to limit the scanning to these overloaded CPUs since | ||
4 | they contain tasks that could be run on the newly idle target. | ||
5 | 2 | ||
6 | Signed-off-by: K Prateek Nayak <kprateek.nayak@amd.com> | 3 | Signed-off-by: Antoniu Miclaus <antoniu.miclaus@analog.com> |
7 | --- | 4 | --- |
8 | kernel/sched/fair.c | 24 ++++++++++++++++++++++++ | 5 | .../testing/sysfs-bus-iio-frequency-admv1014 | 23 +++++++++++++++++++ |
9 | 1 file changed, 24 insertions(+) | 6 | 1 file changed, 23 insertions(+) |
7 | create mode 100644 Documentation/ABI/testing/sysfs-bus-iio-frequency-admv1014 | ||
10 | 8 | ||
11 | diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c | 9 | diff --git a/Documentation/ABI/testing/sysfs-bus-iio-frequency-admv1014 b/Documentation/ABI/testing/sysfs-bus-iio-frequency-admv1014 |
12 | index XXXXXXX..XXXXXXX 100644 | 10 | new file mode 100644 |
13 | --- a/kernel/sched/fair.c | 11 | index XXXXXXX..XXXXXXX |
14 | +++ b/kernel/sched/fair.c | 12 | --- /dev/null |
15 | @@ -XXX,XX +XXX,XX @@ static int find_energy_efficient_cpu(struct task_struct *p, int prev_cpu) | 13 | +++ b/Documentation/ABI/testing/sysfs-bus-iio-frequency-admv1014 |
16 | return target; | 14 | @@ -XXX,XX +XXX,XX @@ |
17 | } | 15 | +What: /sys/bus/iio/devices/iio:deviceX/in_altvoltage0_i_gain_coarse |
18 | 16 | +KernelVersion: | |
19 | +static inline void update_overloaded_mask(int cpu, bool contains_pushable) | 17 | +Contact: linux-iio@vger.kernel.org |
20 | +{ | 18 | +Description: |
21 | + struct sched_domain_shared *sd_share = rcu_dereference(per_cpu(sd_llc_shared, cpu)); | 19 | + Read/write value for the digital attenuator gain (IF_I) with coarse steps. |
22 | + cpumask_var_t overloaded_mask; | ||
23 | + | 20 | + |
24 | + if (!sd_share) | 21 | +What: /sys/bus/iio/devices/iio:deviceX/in_altvoltage0_q_gain_coarse |
25 | + return; | 22 | +KernelVersion: |
23 | +Contact: linux-iio@vger.kernel.org | ||
24 | +Description: | ||
25 | + Read/write value for the digital attenuator gain (IF_Q) with coarse steps. | ||
26 | + | 26 | + |
27 | + overloaded_mask = sd_share->overloaded_mask; | 27 | +What: /sys/bus/iio/devices/iio:deviceX/in_altvoltage0_i_gain_fine |
28 | + if (!overloaded_mask) | 28 | +KernelVersion: |
29 | + return; | 29 | +Contact: linux-iio@vger.kernel.org |
30 | +Description: | ||
31 | + Read/write value for the digital attenuator gain (IF_I) with fine steps. | ||
30 | + | 32 | + |
31 | + if (contains_pushable) | 33 | +What: /sys/bus/iio/devices/iio:deviceX/in_altvoltage0_q_gain_fine |
32 | + cpumask_set_cpu(cpu, overloaded_mask); | 34 | +KernelVersion: |
33 | + else | 35 | +Contact: linux-iio@vger.kernel.org |
34 | + cpumask_clear_cpu(cpu, overloaded_mask); | 36 | +Description: |
35 | +} | 37 | + Read/write value for the digital attenuator gain (IF_Q) with fine steps. |
36 | + | ||
37 | static inline bool fair_push_task(struct task_struct *p) | ||
38 | { | ||
39 | if (!task_on_rq_queued(p)) | ||
40 | @@ -XXX,XX +XXX,XX @@ static inline void fair_queue_pushable_tasks(struct rq *rq) | ||
41 | static void fair_remove_pushable_task(struct rq *rq, struct task_struct *p) | ||
42 | { | ||
43 | plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks); | ||
44 | + | ||
45 | + if (!has_pushable_tasks(rq)) | ||
46 | + update_overloaded_mask(rq->cpu, false); | ||
47 | } | ||
48 | |||
49 | static void fair_add_pushable_task(struct rq *rq, struct task_struct *p) | ||
50 | { | ||
51 | if (fair_push_task(p)) { | ||
52 | + if (!has_pushable_tasks(rq)) | ||
53 | + update_overloaded_mask(rq->cpu, true); | ||
54 | + | ||
55 | plist_del(&p->pushable_tasks, &rq->cfs.pushable_tasks); | ||
56 | plist_node_init(&p->pushable_tasks, p->prio); | ||
57 | plist_add(&p->pushable_tasks, &rq->cfs.pushable_tasks); | ||
58 | -- | 38 | -- |
59 | 2.34.1 | 39 | 2.34.1 |
40 | diff view generated by jsdifflib |