Hardware Spinlock Framework

1. Introduction

Hardware spinlock modules provide hardware assistance for synchronization
and mutual exclusion between heterogeneous processors and those not operating
under a single, shared operating system.

For example, OMAP4 has dual Cortex-A9, dual Cortex-M3 and a C64x+ DSP,
each of which is running a different Operating System (the master, A9,
is usually running Linux and the slave processors, the M3 and the DSP,
are running some flavor of RTOS).

A generic hwspinlock framework allows platform-independent drivers to use
the hwspinlock device in order to access data structures that are shared
between remote processors that otherwise have no alternative mechanism
to accomplish synchronization and mutual exclusion operations.

This is necessary, for example, for Inter-processor communications:
on OMAP4, cpu-intensive multimedia tasks are offloaded by the host to the
remote M3 and/or C64x+ slave processors (by an IPC subsystem called Syslink).

To achieve fast message-based communications, minimal kernel support
is needed to deliver messages arriving from a remote processor to the
appropriate user process.

This communication is based on simple data structures that are shared
between the remote processors, and access to them is synchronized using
the hwspinlock module (the remote processor directly places new messages
in this shared data structure).

A common hwspinlock interface makes it possible to have generic,
platform-independent, drivers.

2. User API

  struct hwspinlock *hwspin_lock_request(void);
   - dynamically assign an hwspinlock and return its address, or NULL
     in case an unused hwspinlock isn't available. Users of this
     API will usually want to communicate the lock's id to the remote core
     before it can be used to achieve synchronization.
     Can be called from an atomic context (this function will not sleep) but
     not from within interrupt context.

  struct hwspinlock *hwspin_lock_request_specific(unsigned int id);
   - assign a specific hwspinlock id and return its address, or NULL
     if that hwspinlock is already in use. Usually board code will
     be calling this function in order to reserve specific hwspinlock
     ids for predefined purposes.
     Can be called from an atomic context (this function will not sleep) but
     not from within interrupt context.

  int hwspin_lock_free(struct hwspinlock *hwlock);
   - free a previously-assigned hwspinlock; returns 0 on success, or an
     appropriate error code on failure (e.g. -EINVAL if the hwspinlock
     is already free).
     Can be called from an atomic context (this function will not sleep) but
     not from within interrupt context.

  int hwspin_lock_timeout(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irq(struct hwspinlock *hwlock, unsigned int timeout);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled, so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.

  int hwspin_lock_timeout_irqsave(struct hwspinlock *hwlock, unsigned int to,
                                  unsigned long *flags);
   - lock a previously-assigned hwspinlock with a timeout limit (specified in
     msecs). If the hwspinlock is already taken, the function will busy loop
     waiting for it to be released, but give up when the timeout elapses.
     Upon a successful return from this function, preemption is disabled,
     local interrupts are disabled and their previous state is saved at the
     given flags placeholder. The caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 when successful and an appropriate error code otherwise (most
     notably -ETIMEDOUT if the hwspinlock is still busy after timeout msecs).
     The function will never sleep.
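
  As a usage sketch, the irqsave variant pairs with hwspin_unlock_irqrestore().
  The helper name, the 10 msec timeout and the shared counter below are
  illustrative assumptions, not part of the API:

```c
#include <linux/hwspinlock.h>

/* illustrative helper: bump a counter shared with a remote core */
int update_shared_counter(struct hwspinlock *hwlock, unsigned int *shared)
{
	unsigned long flags;
	int ret;

	/* spin for up to 10 msecs; irqs are disabled upon success */
	ret = hwspin_lock_timeout_irqsave(hwlock, 10, &flags);
	if (ret)
		return ret;

	/* we hold the lock: touch the shared data, but do NOT sleep */
	(*shared)++;

	/* release the lock, reenable preemption, restore the irq state */
	hwspin_unlock_irqrestore(hwlock, &flags);

	return 0;
}
```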

  int hwspin_trylock(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled so
     the caller must not sleep, and is advised to release the hwspinlock as
     soon as possible, in order to minimize remote cores polling on the
     hardware interconnect.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irq(struct hwspinlock *hwlock);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption and the local
     interrupts are disabled so the caller must not sleep, and is advised to
     release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  int hwspin_trylock_irqsave(struct hwspinlock *hwlock, unsigned long *flags);
   - attempt to lock a previously-assigned hwspinlock, but immediately fail if
     it is already taken.
     Upon a successful return from this function, preemption is disabled,
     the local interrupts are disabled and their previous state is saved
     at the given flags placeholder. The caller must not sleep, and is advised
     to release the hwspinlock as soon as possible.
     Returns 0 on success and an appropriate error code otherwise (most
     notably -EBUSY if the hwspinlock was already taken).
     The function will never sleep.

  void hwspin_unlock(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock. Always succeeds, and can be called
     from any context (the function never sleeps). Note: code should _never_
     unlock an hwspinlock which is already unlocked (there is no protection
     against this).

  void hwspin_unlock_irq(struct hwspinlock *hwlock);
   - unlock a previously-locked hwspinlock and enable local interrupts.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption and local
     interrupts are enabled. This function will never sleep.

  void
  hwspin_unlock_irqrestore(struct hwspinlock *hwlock, unsigned long *flags);
   - unlock a previously-locked hwspinlock.
     The caller should _never_ unlock an hwspinlock which is already unlocked.
     Doing so is considered a bug (there is no protection against this).
     Upon a successful return from this function, preemption is reenabled,
     and the state of the local interrupts is restored to the state saved at
     the given flags. This function will never sleep.

  int hwspin_lock_get_id(struct hwspinlock *hwlock);
   - retrieve id number of a given hwspinlock. This is needed when an
     hwspinlock is dynamically assigned: before it can be used to achieve
     mutual exclusion with a remote cpu, the id number should be communicated
     to the remote task with which we want to synchronize.
     Returns the hwspinlock id number, or -EINVAL if hwlock is null.

3. Typical usage

#include <linux/hwspinlock.h>
#include <linux/err.h>

int hwspinlock_example1(void)
{
	struct hwspinlock *hwlock;
	int ret, id;

	/* dynamically assign a hwspinlock */
	hwlock = hwspin_lock_request();
	if (!hwlock)
		...

	id = hwspin_lock_get_id(hwlock);
	/* probably need to communicate id to a remote processor now */

	/* take the lock, spin for 1 sec if it's already taken */
	ret = hwspin_lock_timeout(hwlock, 1000);
	if (ret)
		...

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
}

int hwspinlock_example2(void)
{
	struct hwspinlock *hwlock;
	int ret;

	/*
	 * assign a specific hwspinlock id - this should be called early
	 * by board init code.
	 */
	hwlock = hwspin_lock_request_specific(PREDEFINED_LOCK_ID);
	if (!hwlock)
		...

	/* try to take it, but don't spin on it */
	ret = hwspin_trylock(hwlock);
	if (ret) {
		pr_info("lock is already taken\n");
		return -EBUSY;
	}

	/*
	 * we took the lock, do our thing now, but do NOT sleep
	 */

	/* release the lock */
	hwspin_unlock(hwlock);

	/* free the lock */
	ret = hwspin_lock_free(hwlock);
	if (ret)
		...

	return ret;
}


4. API for implementors

  int hwspin_lock_register(struct hwspinlock *hwlock);
   - to be called from the underlying platform-specific implementation, in
     order to register a new hwspinlock instance. Can be called from an atomic
     context (this function will not sleep) but not from within interrupt
     context. Returns 0 on success, or an appropriate error code on failure.

  struct hwspinlock *hwspin_lock_unregister(unsigned int id);
   - to be called from the underlying vendor-specific implementation, in order
     to unregister an existing (and unused) hwspinlock instance.
     Can be called from an atomic context (will not sleep) but not from
     within interrupt context.
     Returns the address of hwspinlock on success, or NULL on error (e.g.
     if the hwspinlock is still in use).

5. struct hwspinlock

This struct represents an hwspinlock instance. It is registered by the
underlying hwspinlock implementation using the hwspin_lock_register() API.

/**
 * struct hwspinlock - vendor-specific hwspinlock implementation
 *
 * @dev: underlying device, will be used with runtime PM api
 * @ops: vendor-specific hwspinlock handlers
 * @id: a global, unique, system-wide, index of the lock.
 * @lock: initialized and used by hwspinlock core
 * @owner: underlying implementation module, used to maintain module ref count
 */
struct hwspinlock {
	struct device *dev;
	const struct hwspinlock_ops *ops;
	int id;
	spinlock_t lock;
	struct module *owner;
};

The underlying implementation is responsible for assigning the dev, ops, id
and owner members. The lock member, on the other hand, is initialized and
used by the hwspinlock core.

6. Implementation callbacks

There are three possible callbacks defined in 'struct hwspinlock_ops':

struct hwspinlock_ops {
	int (*trylock)(struct hwspinlock *lock);
	void (*unlock)(struct hwspinlock *lock);
	void (*relax)(struct hwspinlock *lock);
};

The first two callbacks are mandatory:

The ->trylock() callback should make a single attempt to take the lock, and
return 0 on failure and 1 on success. This callback may _not_ sleep.

The ->unlock() callback releases the lock. It always succeeds, and it, too,
may _not_ sleep.

The ->relax() callback is optional. It is called by the hwspinlock core while
spinning on a lock, and can be used by the underlying implementation to force
a delay between two successive invocations of ->trylock(). It may _not_ sleep.
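
To make the callback contract concrete, here is a hedged sketch of a vendor
implementation and its registration. The memory-mapped register semantics,
the my_* names and the setup wiring are all invented for illustration; real
hardware and probe code will differ:

```c
#include <linux/hwspinlock.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/module.h>

/* hypothetical hardware: reading the lock register returns 1 when the
 * read itself acquired the lock; writing 0 releases it */
struct my_hwspinlock {
	struct hwspinlock lock;
	void __iomem *reg;
};

static int my_hwspinlock_trylock(struct hwspinlock *lock)
{
	struct my_hwspinlock *my = container_of(lock, struct my_hwspinlock,
						lock);

	/* single attempt only: 1 on success, 0 on failure, never sleeps */
	return readl(my->reg) == 1;
}

static void my_hwspinlock_unlock(struct hwspinlock *lock)
{
	struct my_hwspinlock *my = container_of(lock, struct my_hwspinlock,
						lock);

	writel(0, my->reg);
}

static const struct hwspinlock_ops my_hwspinlock_ops = {
	.trylock	= my_hwspinlock_trylock,
	.unlock		= my_hwspinlock_unlock,
	/* ->relax() omitted: it is optional */
};

/* called from the (hypothetical) probe path, once per lock */
static int my_hwspinlock_setup(struct my_hwspinlock *my, struct device *dev,
			       int id)
{
	my->lock.dev = dev;
	my->lock.ops = &my_hwspinlock_ops;
	my->lock.id = id;
	my->lock.owner = THIS_MODULE;

	/* the core initializes the remaining (lock) member */
	return hwspin_lock_register(&my->lock);
}
```

Note how the implementation fills in only the dev, ops, id and owner members
described in section 5, leaving the lock member to the hwspinlock core.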