1 /* AArch64-specific support for NN-bit ELF.
2 Copyright (C) 2009-2015 Free Software Foundation, Inc.
3 Contributed by ARM Ltd.
4
5 This file is part of BFD, the Binary File Descriptor library.
6
7 This program is free software; you can redistribute it and/or modify
8 it under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3 of the License, or
10 (at your option) any later version.
11
12 This program is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 GNU General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with this program; see the file COPYING3. If not,
19 see <http://www.gnu.org/licenses/>. */
20
21 /* Notes on implementation:
22
23 Thread Local Storage (TLS)
24
25 Overview:
26
27 The implementation currently supports both traditional TLS and TLS
28 descriptors, but only general dynamic (GD).
29
30 For traditional TLS the assembler will present us with code
31 fragments of the form:
32
33 adrp x0, :tlsgd:foo
34 R_AARCH64_TLSGD_ADR_PAGE21(foo)
35 add x0, x0, :tlsgd_lo12:foo
36 R_AARCH64_TLSGD_ADD_LO12_NC(foo)
37 bl __tls_get_addr
38 nop
39
40 For TLS descriptors the assembler will present us with code
41 fragments of the form:
42
43 adrp x0, :tlsdesc:foo R_AARCH64_TLSDESC_ADR_PAGE21(foo)
44 ldr x1, [x0, #:tlsdesc_lo12:foo] R_AARCH64_TLSDESC_LD64_LO12(foo)
45 add x0, x0, #:tlsdesc_lo12:foo R_AARCH64_TLSDESC_ADD_LO12(foo)
46 .tlsdesccall foo
47 blr x1 R_AARCH64_TLSDESC_CALL(foo)
48
49 The relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} against foo
50 indicate that foo is thread local and should be accessed via the
51 traditional TLS mechanism.
52
53 The relocations R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC}
54 against foo indicate that 'foo' is thread local and should be accessed
55 via a TLS descriptor mechanism.
56
57 The precise instruction sequence is only relevant from the
58 perspective of linker relaxation, which is currently not implemented.
59
60 The static linker must detect that 'foo' is a TLS object and
61 allocate a double GOT entry. The GOT entry must be created for both
62 global and local TLS symbols. Note that this is different from
63 non-TLS local objects, which do not need a GOT entry.
64
65 In the traditional TLS mechanism, the double GOT entry is used to
66 provide the tls_index structure, containing module and offset
67 entries. The static linker places the relocation R_AARCH64_TLS_DTPMOD
68 on the module entry. The loader will subsequently fix up this
69 relocation with the module identity.
70
71 For global traditional TLS symbols the static linker places an
72 R_AARCH64_TLS_DTPREL relocation on the offset entry. The loader
73 will subsequently fix up the offset. For local TLS symbols the static
74 linker fixes up the offset.
75
76 In the TLS descriptor mechanism the double GOT entry is used to
77 provide the descriptor. The static linker places the relocation
78 R_AARCH64_TLSDESC on the first GOT slot. The loader will
79 subsequently fix this up.
80
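   As an illustration only -- the type names below are hypothetical and
   not the structures this backend actually uses -- the double GOT slot
   can be pictured as one of:

     struct tls_index { bfd_vma ti_module; bfd_vma ti_offset; };
     struct tlsdesc   { bfd_vma resolver;  bfd_vma argument;  };

   For traditional TLS the slot holds a tls_index, with
   R_AARCH64_TLS_DTPMOD covering the module word and, for globals,
   R_AARCH64_TLS_DTPREL covering the offset word. For TLS descriptors
   the slot holds a tlsdesc, with R_AARCH64_TLSDESC covering the first
   word; the loader fills in both words when the descriptor is resolved.
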
81 Implementation:
82
83 The handling of TLS symbols is implemented across a number of
84 different backend functions. The following is a top level view of
85 what processing is performed where.
86
87 The TLS implementation maintains state information for each TLS
88 symbol. The state information for local and global symbols is kept
89 in different places. Global symbols use generic BFD structures while
90 local symbols use backend specific structures that are allocated and
91 maintained entirely by the backend.
92
93 The flow:
94
95 elfNN_aarch64_check_relocs()
96
97 This function is invoked for each relocation.
98
99 The TLS relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} and
100 R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC} are
101 spotted. The local symbol data structures are created once, when the
102 first local symbol is seen.
103
104 The reference count for a symbol is incremented. The GOT type for
105 each symbol is marked as general dynamic.
106
107 elfNN_aarch64_allocate_dynrelocs ()
108
109 For each global symbol with a positive reference count we allocate a
110 double GOT slot. For a traditional TLS symbol we allocate space for
111 two relocation entries on the GOT; for a TLS descriptor symbol we
112 allocate space for one relocation on the slot. Record the GOT offset
113 for this symbol.
114
115 elfNN_aarch64_size_dynamic_sections ()
116
117 Iterate over all input BFDs, look in the local symbol data structures
118 constructed earlier for local TLS symbols and allocate them double
119 GOT slots along with space for a single GOT relocation. Update the
120 local symbol structure to record the GOT offset allocated.
121
122 elfNN_aarch64_relocate_section ()
123
124 Calls elfNN_aarch64_final_link_relocate ()
125
126 Emit the relevant TLS relocations against the GOT for each TLS
127 symbol. For local TLS symbols emit the GOT offset directly. The GOT
128 relocations are emitted only once, the first time a TLS symbol is
129 encountered. The implementation uses the LSB of the GOT offset to
130 flag that the relevant GOT relocations for a symbol have been
131 emitted. All of the TLS code that uses the GOT offset needs to take
132 care to mask out this flag bit before using the offset.
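
   A minimal sketch of that convention (illustrative only; not the code
   in this file):

     bfd_vma off = <GOT offset recorded for the symbol>;
     if ((off & 1) == 0)
       {
         ...emit the GOT relocation(s) for the symbol...;
         ...record (off | 1) back against the symbol...;
       }
     off &= ~(bfd_vma) 1;   and only then use OFF as the GOT offset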
133
134 elfNN_aarch64_final_link_relocate ()
135
136 Fix up the R_AARCH64_TLSGD_{ADR_PREL21, ADD_LO12_NC} relocations. */
137
138 #include "sysdep.h"
139 #include "bfd.h"
140 #include "libiberty.h"
141 #include "libbfd.h"
142 #include "bfd_stdint.h"
143 #include "elf-bfd.h"
144 #include "bfdlink.h"
145 #include "objalloc.h"
146 #include "elf/aarch64.h"
147 #include "elfxx-aarch64.h"
148
149 #define ARCH_SIZE NN
150
151 #if ARCH_SIZE == 64
152 #define AARCH64_R(NAME) R_AARCH64_ ## NAME
153 #define AARCH64_R_STR(NAME) "R_AARCH64_" #NAME
154 #define HOWTO64(...) HOWTO (__VA_ARGS__)
155 #define HOWTO32(...) EMPTY_HOWTO (0)
156 #define LOG_FILE_ALIGN 3
157 #endif
158
159 #if ARCH_SIZE == 32
160 #define AARCH64_R(NAME) R_AARCH64_P32_ ## NAME
161 #define AARCH64_R_STR(NAME) "R_AARCH64_P32_" #NAME
162 #define HOWTO64(...) EMPTY_HOWTO (0)
163 #define HOWTO32(...) HOWTO (__VA_ARGS__)
164 #define LOG_FILE_ALIGN 2
165 #endif
166
167 #define IS_AARCH64_TLS_RELOC(R_TYPE) \
168 ((R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21 \
169 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PREL21 \
170 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC \
171 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G1 \
172 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC \
173 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21 \
174 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC \
175 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC \
176 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19 \
177 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12 \
178 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12 \
179 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC \
180 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2 \
181 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 \
182 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC \
183 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0 \
184 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC \
185 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPMOD \
186 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPREL \
187 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_TPREL \
188 || IS_AARCH64_TLSDESC_RELOC ((R_TYPE)))
189
190 #define IS_AARCH64_TLSDESC_RELOC(R_TYPE) \
191 ((R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD_PREL19 \
192 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21 \
193 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21 \
194 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC \
195 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC \
196 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC \
197 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G1 \
198 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G0_NC \
199 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LDR \
200 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD \
201 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_CALL \
202 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC)
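
/* For example (an illustrative use only, not code from this backend):
   given a BFD reloc code R, e.g. the result of
   elfNN_aarch64_bfd_reloc_from_type below, the two predicates above
   classify it as

     if (IS_AARCH64_TLSDESC_RELOC (r))
       ... take the TLS descriptor path ...
     else if (IS_AARCH64_TLS_RELOC (r))
       ... take the traditional TLS (GD/IE/LE) path ...

   noting that IS_AARCH64_TLS_RELOC also accepts the descriptor relocs,
   so the descriptor check must come first in such a cascade.  */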
203
204 #define ELIMINATE_COPY_RELOCS 0
205
206 /* Return the size of a relocation entry. HTAB is the bfd's
207 elf_aarch64_link_hash_table. */
208 #define RELOC_SIZE(HTAB) (sizeof (ElfNN_External_Rela))
209
210 /* GOT Entry size - 8 bytes in ELF64 and 4 bytes in ELF32. */
211 #define GOT_ENTRY_SIZE (ARCH_SIZE / 8)
212 #define PLT_ENTRY_SIZE (32)
213 #define PLT_SMALL_ENTRY_SIZE (16)
214 #define PLT_TLSDESC_ENTRY_SIZE (32)
215
216 /* Encoding of the nop instruction */
217 #define INSN_NOP 0xd503201f
218
219 #define aarch64_compute_jump_table_size(htab) \
220 (((htab)->root.srelplt == NULL) ? 0 \
221 : (htab)->root.srelplt->reloc_count * GOT_ENTRY_SIZE)
222
223 /* The first entry in a procedure linkage table looks like this.
224 If the distance between the PLTGOT and the PLT is < 4GB, use
225 these PLT entries. Note that the dynamic linker gets &PLTGOT[2]
226 in x16 and needs to work out PLTGOT[1] by using an address of
227 [x16,#-GOT_ENTRY_SIZE]. */
228 static const bfd_byte elfNN_aarch64_small_plt0_entry[PLT_ENTRY_SIZE] =
229 {
230 0xf0, 0x7b, 0xbf, 0xa9, /* stp x16, x30, [sp, #-16]! */
231 0x10, 0x00, 0x00, 0x90, /* adrp x16, (GOT+16) */
232 #if ARCH_SIZE == 64
233 0x11, 0x0A, 0x40, 0xf9, /* ldr x17, [x16, #PLT_GOT+0x10] */
234 0x10, 0x42, 0x00, 0x91, /* add x16, x16,#PLT_GOT+0x10 */
235 #else
236 0x11, 0x0A, 0x40, 0xb9, /* ldr w17, [x16, #PLT_GOT+0x8] */
237 0x10, 0x22, 0x00, 0x11, /* add w16, w16,#PLT_GOT+0x8 */
238 #endif
239 0x20, 0x02, 0x1f, 0xd6, /* br x17 */
240 0x1f, 0x20, 0x03, 0xd5, /* nop */
241 0x1f, 0x20, 0x03, 0xd5, /* nop */
242 0x1f, 0x20, 0x03, 0xd5, /* nop */
243 };
244
245 /* A per-function entry in a procedure linkage table looks like this.
246 If the distance between the PLTGOT and the PLT is < 4GB, use
247 these PLT entries. */
248 static const bfd_byte elfNN_aarch64_small_plt_entry[PLT_SMALL_ENTRY_SIZE] =
249 {
250 0x10, 0x00, 0x00, 0x90, /* adrp x16, PLTGOT + n * 8 */
251 #if ARCH_SIZE == 64
252 0x11, 0x02, 0x40, 0xf9, /* ldr x17, [x16, PLTGOT + n * 8] */
253 0x10, 0x02, 0x00, 0x91, /* add x16, x16, :lo12:PLTGOT + n * 8 */
254 #else
255 0x11, 0x02, 0x40, 0xb9, /* ldr w17, [x16, PLTGOT + n * 4] */
256 0x10, 0x02, 0x00, 0x11, /* add w16, w16, :lo12:PLTGOT + n * 4 */
257 #endif
258 0x20, 0x02, 0x1f, 0xd6, /* br x17. */
259 };
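
/* As a worked example (illustrative only) of the entry above for PLT
   slot n on ELF64:

     adrp x16, PLTGOT + n * 8        x16 = Page (&PLTGOT[n])
     ldr  x17, [x16, :lo12:...]      x17 = PLTGOT[n], the target address
     add  x16, x16, :lo12:...        x16 = &PLTGOT[n]
     br   x17

   where Page (x) is (x & ~(bfd_vma) 0xfff), so the adrp/:lo12: pair
   reconstructs the full address of the GOT slot.  */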
260
261 static const bfd_byte
262 elfNN_aarch64_tlsdesc_small_plt_entry[PLT_TLSDESC_ENTRY_SIZE] =
263 {
264 0xe2, 0x0f, 0xbf, 0xa9, /* stp x2, x3, [sp, #-16]! */
265 0x02, 0x00, 0x00, 0x90, /* adrp x2, 0 */
266 0x03, 0x00, 0x00, 0x90, /* adrp x3, 0 */
267 #if ARCH_SIZE == 64
268 0x42, 0x00, 0x40, 0xf9, /* ldr x2, [x2, #0] */
269 0x63, 0x00, 0x00, 0x91, /* add x3, x3, 0 */
270 #else
271 0x42, 0x00, 0x40, 0xb9, /* ldr w2, [x2, #0] */
272 0x63, 0x00, 0x00, 0x11, /* add w3, w3, 0 */
273 #endif
274 0x40, 0x00, 0x1f, 0xd6, /* br x2 */
275 0x1f, 0x20, 0x03, 0xd5, /* nop */
276 0x1f, 0x20, 0x03, 0xd5, /* nop */
277 };
278
279 #define elf_info_to_howto elfNN_aarch64_info_to_howto
280 #define elf_info_to_howto_rel elfNN_aarch64_info_to_howto
281
282 #define AARCH64_ELF_ABI_VERSION 0
283
284 /* In case we're on a 32-bit machine, construct a 64-bit "-1" value. */
285 #define ALL_ONES (~ (bfd_vma) 0)
286
287 /* Indexed by the bfd internal reloc enumerators.
288 Therefore, the table needs to be synced with BFD_RELOC_AARCH64_*
289 in reloc.c. */
290
291 static reloc_howto_type elfNN_aarch64_howto_table[] =
292 {
293 EMPTY_HOWTO (0),
294
295 /* Basic data relocations. */
296
297 #if ARCH_SIZE == 64
298 HOWTO (R_AARCH64_NULL, /* type */
299 0, /* rightshift */
300 3, /* size (0 = byte, 1 = short, 2 = long) */
301 0, /* bitsize */
302 FALSE, /* pc_relative */
303 0, /* bitpos */
304 complain_overflow_dont, /* complain_on_overflow */
305 bfd_elf_generic_reloc, /* special_function */
306 "R_AARCH64_NULL", /* name */
307 FALSE, /* partial_inplace */
308 0, /* src_mask */
309 0, /* dst_mask */
310 FALSE), /* pcrel_offset */
311 #else
312 HOWTO (R_AARCH64_NONE, /* type */
313 0, /* rightshift */
314 3, /* size (0 = byte, 1 = short, 2 = long) */
315 0, /* bitsize */
316 FALSE, /* pc_relative */
317 0, /* bitpos */
318 complain_overflow_dont, /* complain_on_overflow */
319 bfd_elf_generic_reloc, /* special_function */
320 "R_AARCH64_NONE", /* name */
321 FALSE, /* partial_inplace */
322 0, /* src_mask */
323 0, /* dst_mask */
324 FALSE), /* pcrel_offset */
325 #endif
326
327 /* .xword: (S+A) */
328 HOWTO64 (AARCH64_R (ABS64), /* type */
329 0, /* rightshift */
330 4, /* size (4 = long long) */
331 64, /* bitsize */
332 FALSE, /* pc_relative */
333 0, /* bitpos */
334 complain_overflow_unsigned, /* complain_on_overflow */
335 bfd_elf_generic_reloc, /* special_function */
336 AARCH64_R_STR (ABS64), /* name */
337 FALSE, /* partial_inplace */
338 ALL_ONES, /* src_mask */
339 ALL_ONES, /* dst_mask */
340 FALSE), /* pcrel_offset */
341
342 /* .word: (S+A) */
343 HOWTO (AARCH64_R (ABS32), /* type */
344 0, /* rightshift */
345 2, /* size (0 = byte, 1 = short, 2 = long) */
346 32, /* bitsize */
347 FALSE, /* pc_relative */
348 0, /* bitpos */
349 complain_overflow_unsigned, /* complain_on_overflow */
350 bfd_elf_generic_reloc, /* special_function */
351 AARCH64_R_STR (ABS32), /* name */
352 FALSE, /* partial_inplace */
353 0xffffffff, /* src_mask */
354 0xffffffff, /* dst_mask */
355 FALSE), /* pcrel_offset */
356
357 /* .half: (S+A) */
358 HOWTO (AARCH64_R (ABS16), /* type */
359 0, /* rightshift */
360 1, /* size (0 = byte, 1 = short, 2 = long) */
361 16, /* bitsize */
362 FALSE, /* pc_relative */
363 0, /* bitpos */
364 complain_overflow_unsigned, /* complain_on_overflow */
365 bfd_elf_generic_reloc, /* special_function */
366 AARCH64_R_STR (ABS16), /* name */
367 FALSE, /* partial_inplace */
368 0xffff, /* src_mask */
369 0xffff, /* dst_mask */
370 FALSE), /* pcrel_offset */
371
372 /* .xword: (S+A-P) */
373 HOWTO64 (AARCH64_R (PREL64), /* type */
374 0, /* rightshift */
375 4, /* size (4 = long long) */
376 64, /* bitsize */
377 TRUE, /* pc_relative */
378 0, /* bitpos */
379 complain_overflow_signed, /* complain_on_overflow */
380 bfd_elf_generic_reloc, /* special_function */
381 AARCH64_R_STR (PREL64), /* name */
382 FALSE, /* partial_inplace */
383 ALL_ONES, /* src_mask */
384 ALL_ONES, /* dst_mask */
385 TRUE), /* pcrel_offset */
386
387 /* .word: (S+A-P) */
388 HOWTO (AARCH64_R (PREL32), /* type */
389 0, /* rightshift */
390 2, /* size (0 = byte, 1 = short, 2 = long) */
391 32, /* bitsize */
392 TRUE, /* pc_relative */
393 0, /* bitpos */
394 complain_overflow_signed, /* complain_on_overflow */
395 bfd_elf_generic_reloc, /* special_function */
396 AARCH64_R_STR (PREL32), /* name */
397 FALSE, /* partial_inplace */
398 0xffffffff, /* src_mask */
399 0xffffffff, /* dst_mask */
400 TRUE), /* pcrel_offset */
401
402 /* .half: (S+A-P) */
403 HOWTO (AARCH64_R (PREL16), /* type */
404 0, /* rightshift */
405 1, /* size (0 = byte, 1 = short, 2 = long) */
406 16, /* bitsize */
407 TRUE, /* pc_relative */
408 0, /* bitpos */
409 complain_overflow_signed, /* complain_on_overflow */
410 bfd_elf_generic_reloc, /* special_function */
411 AARCH64_R_STR (PREL16), /* name */
412 FALSE, /* partial_inplace */
413 0xffff, /* src_mask */
414 0xffff, /* dst_mask */
415 TRUE), /* pcrel_offset */
416
417 /* Group relocations to create a 16, 32, 48 or 64 bit
418 unsigned data or abs address inline. */
419
420 /* MOVZ: ((S+A) >> 0) & 0xffff */
421 HOWTO (AARCH64_R (MOVW_UABS_G0), /* type */
422 0, /* rightshift */
423 2, /* size (0 = byte, 1 = short, 2 = long) */
424 16, /* bitsize */
425 FALSE, /* pc_relative */
426 0, /* bitpos */
427 complain_overflow_unsigned, /* complain_on_overflow */
428 bfd_elf_generic_reloc, /* special_function */
429 AARCH64_R_STR (MOVW_UABS_G0), /* name */
430 FALSE, /* partial_inplace */
431 0xffff, /* src_mask */
432 0xffff, /* dst_mask */
433 FALSE), /* pcrel_offset */
434
435 /* MOVK: ((S+A) >> 0) & 0xffff [no overflow check] */
436 HOWTO (AARCH64_R (MOVW_UABS_G0_NC), /* type */
437 0, /* rightshift */
438 2, /* size (0 = byte, 1 = short, 2 = long) */
439 16, /* bitsize */
440 FALSE, /* pc_relative */
441 0, /* bitpos */
442 complain_overflow_dont, /* complain_on_overflow */
443 bfd_elf_generic_reloc, /* special_function */
444 AARCH64_R_STR (MOVW_UABS_G0_NC), /* name */
445 FALSE, /* partial_inplace */
446 0xffff, /* src_mask */
447 0xffff, /* dst_mask */
448 FALSE), /* pcrel_offset */
449
450 /* MOVZ: ((S+A) >> 16) & 0xffff */
451 HOWTO (AARCH64_R (MOVW_UABS_G1), /* type */
452 16, /* rightshift */
453 2, /* size (0 = byte, 1 = short, 2 = long) */
454 16, /* bitsize */
455 FALSE, /* pc_relative */
456 0, /* bitpos */
457 complain_overflow_unsigned, /* complain_on_overflow */
458 bfd_elf_generic_reloc, /* special_function */
459 AARCH64_R_STR (MOVW_UABS_G1), /* name */
460 FALSE, /* partial_inplace */
461 0xffff, /* src_mask */
462 0xffff, /* dst_mask */
463 FALSE), /* pcrel_offset */
464
465 /* MOVK: ((S+A) >> 16) & 0xffff [no overflow check] */
466 HOWTO64 (AARCH64_R (MOVW_UABS_G1_NC), /* type */
467 16, /* rightshift */
468 2, /* size (0 = byte, 1 = short, 2 = long) */
469 16, /* bitsize */
470 FALSE, /* pc_relative */
471 0, /* bitpos */
472 complain_overflow_dont, /* complain_on_overflow */
473 bfd_elf_generic_reloc, /* special_function */
474 AARCH64_R_STR (MOVW_UABS_G1_NC), /* name */
475 FALSE, /* partial_inplace */
476 0xffff, /* src_mask */
477 0xffff, /* dst_mask */
478 FALSE), /* pcrel_offset */
479
480 /* MOVZ: ((S+A) >> 32) & 0xffff */
481 HOWTO64 (AARCH64_R (MOVW_UABS_G2), /* type */
482 32, /* rightshift */
483 2, /* size (0 = byte, 1 = short, 2 = long) */
484 16, /* bitsize */
485 FALSE, /* pc_relative */
486 0, /* bitpos */
487 complain_overflow_unsigned, /* complain_on_overflow */
488 bfd_elf_generic_reloc, /* special_function */
489 AARCH64_R_STR (MOVW_UABS_G2), /* name */
490 FALSE, /* partial_inplace */
491 0xffff, /* src_mask */
492 0xffff, /* dst_mask */
493 FALSE), /* pcrel_offset */
494
495 /* MOVK: ((S+A) >> 32) & 0xffff [no overflow check] */
496 HOWTO64 (AARCH64_R (MOVW_UABS_G2_NC), /* type */
497 32, /* rightshift */
498 2, /* size (0 = byte, 1 = short, 2 = long) */
499 16, /* bitsize */
500 FALSE, /* pc_relative */
501 0, /* bitpos */
502 complain_overflow_dont, /* complain_on_overflow */
503 bfd_elf_generic_reloc, /* special_function */
504 AARCH64_R_STR (MOVW_UABS_G2_NC), /* name */
505 FALSE, /* partial_inplace */
506 0xffff, /* src_mask */
507 0xffff, /* dst_mask */
508 FALSE), /* pcrel_offset */
509
510 /* MOVZ: ((S+A) >> 48) & 0xffff */
511 HOWTO64 (AARCH64_R (MOVW_UABS_G3), /* type */
512 48, /* rightshift */
513 2, /* size (0 = byte, 1 = short, 2 = long) */
514 16, /* bitsize */
515 FALSE, /* pc_relative */
516 0, /* bitpos */
517 complain_overflow_unsigned, /* complain_on_overflow */
518 bfd_elf_generic_reloc, /* special_function */
519 AARCH64_R_STR (MOVW_UABS_G3), /* name */
520 FALSE, /* partial_inplace */
521 0xffff, /* src_mask */
522 0xffff, /* dst_mask */
523 FALSE), /* pcrel_offset */
524
525 /* Group relocations to create high part of a 16, 32, 48 or 64 bit
526 signed data or abs address inline. Will change instruction
527 to MOVN or MOVZ depending on sign of calculated value. */
528
529 /* MOV[ZN]: ((S+A) >> 0) & 0xffff */
530 HOWTO (AARCH64_R (MOVW_SABS_G0), /* type */
531 0, /* rightshift */
532 2, /* size (0 = byte, 1 = short, 2 = long) */
533 16, /* bitsize */
534 FALSE, /* pc_relative */
535 0, /* bitpos */
536 complain_overflow_signed, /* complain_on_overflow */
537 bfd_elf_generic_reloc, /* special_function */
538 AARCH64_R_STR (MOVW_SABS_G0), /* name */
539 FALSE, /* partial_inplace */
540 0xffff, /* src_mask */
541 0xffff, /* dst_mask */
542 FALSE), /* pcrel_offset */
543
544 /* MOV[ZN]: ((S+A) >> 16) & 0xffff */
545 HOWTO64 (AARCH64_R (MOVW_SABS_G1), /* type */
546 16, /* rightshift */
547 2, /* size (0 = byte, 1 = short, 2 = long) */
548 16, /* bitsize */
549 FALSE, /* pc_relative */
550 0, /* bitpos */
551 complain_overflow_signed, /* complain_on_overflow */
552 bfd_elf_generic_reloc, /* special_function */
553 AARCH64_R_STR (MOVW_SABS_G1), /* name */
554 FALSE, /* partial_inplace */
555 0xffff, /* src_mask */
556 0xffff, /* dst_mask */
557 FALSE), /* pcrel_offset */
558
559 /* MOV[ZN]: ((S+A) >> 32) & 0xffff */
560 HOWTO64 (AARCH64_R (MOVW_SABS_G2), /* type */
561 32, /* rightshift */
562 2, /* size (0 = byte, 1 = short, 2 = long) */
563 16, /* bitsize */
564 FALSE, /* pc_relative */
565 0, /* bitpos */
566 complain_overflow_signed, /* complain_on_overflow */
567 bfd_elf_generic_reloc, /* special_function */
568 AARCH64_R_STR (MOVW_SABS_G2), /* name */
569 FALSE, /* partial_inplace */
570 0xffff, /* src_mask */
571 0xffff, /* dst_mask */
572 FALSE), /* pcrel_offset */
573
574 /* Relocations to generate 19, 21 and 33 bit PC-relative load/store
575 addresses: PG(x) is (x & ~0xfff). */
576
577 /* LD-lit: ((S+A-P) >> 2) & 0x7ffff */
578 HOWTO (AARCH64_R (LD_PREL_LO19), /* type */
579 2, /* rightshift */
580 2, /* size (0 = byte, 1 = short, 2 = long) */
581 19, /* bitsize */
582 TRUE, /* pc_relative */
583 0, /* bitpos */
584 complain_overflow_signed, /* complain_on_overflow */
585 bfd_elf_generic_reloc, /* special_function */
586 AARCH64_R_STR (LD_PREL_LO19), /* name */
587 FALSE, /* partial_inplace */
588 0x7ffff, /* src_mask */
589 0x7ffff, /* dst_mask */
590 TRUE), /* pcrel_offset */
591
592 /* ADR: (S+A-P) & 0x1fffff */
593 HOWTO (AARCH64_R (ADR_PREL_LO21), /* type */
594 0, /* rightshift */
595 2, /* size (0 = byte, 1 = short, 2 = long) */
596 21, /* bitsize */
597 TRUE, /* pc_relative */
598 0, /* bitpos */
599 complain_overflow_signed, /* complain_on_overflow */
600 bfd_elf_generic_reloc, /* special_function */
601 AARCH64_R_STR (ADR_PREL_LO21), /* name */
602 FALSE, /* partial_inplace */
603 0x1fffff, /* src_mask */
604 0x1fffff, /* dst_mask */
605 TRUE), /* pcrel_offset */
606
607 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
608 HOWTO (AARCH64_R (ADR_PREL_PG_HI21), /* type */
609 12, /* rightshift */
610 2, /* size (0 = byte, 1 = short, 2 = long) */
611 21, /* bitsize */
612 TRUE, /* pc_relative */
613 0, /* bitpos */
614 complain_overflow_signed, /* complain_on_overflow */
615 bfd_elf_generic_reloc, /* special_function */
616 AARCH64_R_STR (ADR_PREL_PG_HI21), /* name */
617 FALSE, /* partial_inplace */
618 0x1fffff, /* src_mask */
619 0x1fffff, /* dst_mask */
620 TRUE), /* pcrel_offset */
621
622 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff [no overflow check] */
623 HOWTO64 (AARCH64_R (ADR_PREL_PG_HI21_NC), /* type */
624 12, /* rightshift */
625 2, /* size (0 = byte, 1 = short, 2 = long) */
626 21, /* bitsize */
627 TRUE, /* pc_relative */
628 0, /* bitpos */
629 complain_overflow_dont, /* complain_on_overflow */
630 bfd_elf_generic_reloc, /* special_function */
631 AARCH64_R_STR (ADR_PREL_PG_HI21_NC), /* name */
632 FALSE, /* partial_inplace */
633 0x1fffff, /* src_mask */
634 0x1fffff, /* dst_mask */
635 TRUE), /* pcrel_offset */
636
637 /* ADD: (S+A) & 0xfff [no overflow check] */
638 HOWTO (AARCH64_R (ADD_ABS_LO12_NC), /* type */
639 0, /* rightshift */
640 2, /* size (0 = byte, 1 = short, 2 = long) */
641 12, /* bitsize */
642 FALSE, /* pc_relative */
643 10, /* bitpos */
644 complain_overflow_dont, /* complain_on_overflow */
645 bfd_elf_generic_reloc, /* special_function */
646 AARCH64_R_STR (ADD_ABS_LO12_NC), /* name */
647 FALSE, /* partial_inplace */
648 0x3ffc00, /* src_mask */
649 0x3ffc00, /* dst_mask */
650 FALSE), /* pcrel_offset */
651
652 /* LD/ST8: (S+A) & 0xfff */
653 HOWTO (AARCH64_R (LDST8_ABS_LO12_NC), /* type */
654 0, /* rightshift */
655 2, /* size (0 = byte, 1 = short, 2 = long) */
656 12, /* bitsize */
657 FALSE, /* pc_relative */
658 0, /* bitpos */
659 complain_overflow_dont, /* complain_on_overflow */
660 bfd_elf_generic_reloc, /* special_function */
661 AARCH64_R_STR (LDST8_ABS_LO12_NC), /* name */
662 FALSE, /* partial_inplace */
663 0xfff, /* src_mask */
664 0xfff, /* dst_mask */
665 FALSE), /* pcrel_offset */
666
667 /* Relocations for control-flow instructions. */
668
669 /* TBZ/NZ: ((S+A-P) >> 2) & 0x3fff */
670 HOWTO (AARCH64_R (TSTBR14), /* type */
671 2, /* rightshift */
672 2, /* size (0 = byte, 1 = short, 2 = long) */
673 14, /* bitsize */
674 TRUE, /* pc_relative */
675 0, /* bitpos */
676 complain_overflow_signed, /* complain_on_overflow */
677 bfd_elf_generic_reloc, /* special_function */
678 AARCH64_R_STR (TSTBR14), /* name */
679 FALSE, /* partial_inplace */
680 0x3fff, /* src_mask */
681 0x3fff, /* dst_mask */
682 TRUE), /* pcrel_offset */
683
684 /* B.cond: ((S+A-P) >> 2) & 0x7ffff */
685 HOWTO (AARCH64_R (CONDBR19), /* type */
686 2, /* rightshift */
687 2, /* size (0 = byte, 1 = short, 2 = long) */
688 19, /* bitsize */
689 TRUE, /* pc_relative */
690 0, /* bitpos */
691 complain_overflow_signed, /* complain_on_overflow */
692 bfd_elf_generic_reloc, /* special_function */
693 AARCH64_R_STR (CONDBR19), /* name */
694 FALSE, /* partial_inplace */
695 0x7ffff, /* src_mask */
696 0x7ffff, /* dst_mask */
697 TRUE), /* pcrel_offset */
698
699 /* B: ((S+A-P) >> 2) & 0x3ffffff */
700 HOWTO (AARCH64_R (JUMP26), /* type */
701 2, /* rightshift */
702 2, /* size (0 = byte, 1 = short, 2 = long) */
703 26, /* bitsize */
704 TRUE, /* pc_relative */
705 0, /* bitpos */
706 complain_overflow_signed, /* complain_on_overflow */
707 bfd_elf_generic_reloc, /* special_function */
708 AARCH64_R_STR (JUMP26), /* name */
709 FALSE, /* partial_inplace */
710 0x3ffffff, /* src_mask */
711 0x3ffffff, /* dst_mask */
712 TRUE), /* pcrel_offset */
713
714 /* BL: ((S+A-P) >> 2) & 0x3ffffff */
715 HOWTO (AARCH64_R (CALL26), /* type */
716 2, /* rightshift */
717 2, /* size (0 = byte, 1 = short, 2 = long) */
718 26, /* bitsize */
719 TRUE, /* pc_relative */
720 0, /* bitpos */
721 complain_overflow_signed, /* complain_on_overflow */
722 bfd_elf_generic_reloc, /* special_function */
723 AARCH64_R_STR (CALL26), /* name */
724 FALSE, /* partial_inplace */
725 0x3ffffff, /* src_mask */
726 0x3ffffff, /* dst_mask */
727 TRUE), /* pcrel_offset */
728
729 /* LD/ST16: (S+A) & 0xffe */
730 HOWTO (AARCH64_R (LDST16_ABS_LO12_NC), /* type */
731 1, /* rightshift */
732 2, /* size (0 = byte, 1 = short, 2 = long) */
733 12, /* bitsize */
734 FALSE, /* pc_relative */
735 0, /* bitpos */
736 complain_overflow_dont, /* complain_on_overflow */
737 bfd_elf_generic_reloc, /* special_function */
738 AARCH64_R_STR (LDST16_ABS_LO12_NC), /* name */
739 FALSE, /* partial_inplace */
740 0xffe, /* src_mask */
741 0xffe, /* dst_mask */
742 FALSE), /* pcrel_offset */
743
744 /* LD/ST32: (S+A) & 0xffc */
745 HOWTO (AARCH64_R (LDST32_ABS_LO12_NC), /* type */
746 2, /* rightshift */
747 2, /* size (0 = byte, 1 = short, 2 = long) */
748 12, /* bitsize */
749 FALSE, /* pc_relative */
750 0, /* bitpos */
751 complain_overflow_dont, /* complain_on_overflow */
752 bfd_elf_generic_reloc, /* special_function */
753 AARCH64_R_STR (LDST32_ABS_LO12_NC), /* name */
754 FALSE, /* partial_inplace */
755 0xffc, /* src_mask */
756 0xffc, /* dst_mask */
757 FALSE), /* pcrel_offset */
758
759 /* LD/ST64: (S+A) & 0xff8 */
760 HOWTO (AARCH64_R (LDST64_ABS_LO12_NC), /* type */
761 3, /* rightshift */
762 2, /* size (0 = byte, 1 = short, 2 = long) */
763 12, /* bitsize */
764 FALSE, /* pc_relative */
765 0, /* bitpos */
766 complain_overflow_dont, /* complain_on_overflow */
767 bfd_elf_generic_reloc, /* special_function */
768 AARCH64_R_STR (LDST64_ABS_LO12_NC), /* name */
769 FALSE, /* partial_inplace */
770 0xff8, /* src_mask */
771 0xff8, /* dst_mask */
772 FALSE), /* pcrel_offset */
773
774 /* LD/ST128: (S+A) & 0xff0 */
775 HOWTO (AARCH64_R (LDST128_ABS_LO12_NC), /* type */
776 4, /* rightshift */
777 2, /* size (0 = byte, 1 = short, 2 = long) */
778 12, /* bitsize */
779 FALSE, /* pc_relative */
780 0, /* bitpos */
781 complain_overflow_dont, /* complain_on_overflow */
782 bfd_elf_generic_reloc, /* special_function */
783 AARCH64_R_STR (LDST128_ABS_LO12_NC), /* name */
784 FALSE, /* partial_inplace */
785 0xff0, /* src_mask */
786 0xff0, /* dst_mask */
787 FALSE), /* pcrel_offset */
788
789 /* Set a load-literal immediate field to bits
790 0x1FFFFC of G(S)-P */
791 HOWTO (AARCH64_R (GOT_LD_PREL19), /* type */
792 2, /* rightshift */
793 2, /* size (0 = byte,1 = short,2 = long) */
794 19, /* bitsize */
795 TRUE, /* pc_relative */
796 0, /* bitpos */
797 complain_overflow_signed, /* complain_on_overflow */
798 bfd_elf_generic_reloc, /* special_function */
799 AARCH64_R_STR (GOT_LD_PREL19), /* name */
800 FALSE, /* partial_inplace */
801 0xffffe0, /* src_mask */
802 0xffffe0, /* dst_mask */
803 TRUE), /* pcrel_offset */
804
805 /* Get to the page for the GOT entry for the symbol
806 (G(S) - P) using an ADRP instruction. */
807 HOWTO (AARCH64_R (ADR_GOT_PAGE), /* type */
808 12, /* rightshift */
809 2, /* size (0 = byte, 1 = short, 2 = long) */
810 21, /* bitsize */
811 TRUE, /* pc_relative */
812 0, /* bitpos */
813 complain_overflow_dont, /* complain_on_overflow */
814 bfd_elf_generic_reloc, /* special_function */
815 AARCH64_R_STR (ADR_GOT_PAGE), /* name */
816 FALSE, /* partial_inplace */
817 0x1fffff, /* src_mask */
818 0x1fffff, /* dst_mask */
819 TRUE), /* pcrel_offset */
820
821 /* LD64: GOT offset G(S) & 0xff8 */
822 HOWTO64 (AARCH64_R (LD64_GOT_LO12_NC), /* type */
823 3, /* rightshift */
824 2, /* size (0 = byte, 1 = short, 2 = long) */
825 12, /* bitsize */
826 FALSE, /* pc_relative */
827 0, /* bitpos */
828 complain_overflow_dont, /* complain_on_overflow */
829 bfd_elf_generic_reloc, /* special_function */
830 AARCH64_R_STR (LD64_GOT_LO12_NC), /* name */
831 FALSE, /* partial_inplace */
832 0xff8, /* src_mask */
833 0xff8, /* dst_mask */
834 FALSE), /* pcrel_offset */
835
836 /* LD32: GOT offset G(S) & 0xffc */
837 HOWTO32 (AARCH64_R (LD32_GOT_LO12_NC), /* type */
838 2, /* rightshift */
839 2, /* size (0 = byte, 1 = short, 2 = long) */
840 12, /* bitsize */
841 FALSE, /* pc_relative */
842 0, /* bitpos */
843 complain_overflow_dont, /* complain_on_overflow */
844 bfd_elf_generic_reloc, /* special_function */
845 AARCH64_R_STR (LD32_GOT_LO12_NC), /* name */
846 FALSE, /* partial_inplace */
847 0xffc, /* src_mask */
848 0xffc, /* dst_mask */
849 FALSE), /* pcrel_offset */
850
851 /* LD32: GOT offset to the page address of GOT table.
852 (G(S) - PAGE (_GLOBAL_OFFSET_TABLE_)) & 0x5ffc. */
853 HOWTO32 (AARCH64_R (LD32_GOTPAGE_LO14), /* type */
854 2, /* rightshift */
855 2, /* size (0 = byte, 1 = short, 2 = long) */
856 12, /* bitsize */
857 FALSE, /* pc_relative */
858 0, /* bitpos */
859 complain_overflow_unsigned, /* complain_on_overflow */
860 bfd_elf_generic_reloc, /* special_function */
861 AARCH64_R_STR (LD32_GOTPAGE_LO14), /* name */
862 FALSE, /* partial_inplace */
863 0x5ffc, /* src_mask */
864 0x5ffc, /* dst_mask */
865 FALSE), /* pcrel_offset */
866
867 /* LD64: GOT offset to the page address of GOT table.
868 (G(S) - PAGE (_GLOBAL_OFFSET_TABLE_)) & 0x7ff8. */
869 HOWTO64 (AARCH64_R (LD64_GOTPAGE_LO15), /* type */
870 3, /* rightshift */
871 2, /* size (0 = byte, 1 = short, 2 = long) */
872 12, /* bitsize */
873 FALSE, /* pc_relative */
874 0, /* bitpos */
875 complain_overflow_unsigned, /* complain_on_overflow */
876 bfd_elf_generic_reloc, /* special_function */
877 AARCH64_R_STR (LD64_GOTPAGE_LO15), /* name */
878 FALSE, /* partial_inplace */
879 0x7ff8, /* src_mask */
880 0x7ff8, /* dst_mask */
881 FALSE), /* pcrel_offset */
882
883 /* Get to the page for the GOT entry for the symbol
884 (G(S) - P) using an ADRP instruction. */
885 HOWTO (AARCH64_R (TLSGD_ADR_PAGE21), /* type */
886 12, /* rightshift */
887 2, /* size (0 = byte, 1 = short, 2 = long) */
888 21, /* bitsize */
889 TRUE, /* pc_relative */
890 0, /* bitpos */
891 complain_overflow_dont, /* complain_on_overflow */
892 bfd_elf_generic_reloc, /* special_function */
893 AARCH64_R_STR (TLSGD_ADR_PAGE21), /* name */
894 FALSE, /* partial_inplace */
895 0x1fffff, /* src_mask */
896 0x1fffff, /* dst_mask */
897 TRUE), /* pcrel_offset */
898
899 HOWTO (AARCH64_R (TLSGD_ADR_PREL21), /* type */
900 0, /* rightshift */
901 2, /* size (0 = byte, 1 = short, 2 = long) */
902 21, /* bitsize */
903 TRUE, /* pc_relative */
904 0, /* bitpos */
905 complain_overflow_dont, /* complain_on_overflow */
906 bfd_elf_generic_reloc, /* special_function */
907 AARCH64_R_STR (TLSGD_ADR_PREL21), /* name */
908 FALSE, /* partial_inplace */
909 0x1fffff, /* src_mask */
910 0x1fffff, /* dst_mask */
911 TRUE), /* pcrel_offset */
912
913 /* ADD: GOT offset G(S) & 0xff8 [no overflow check] */
914 HOWTO (AARCH64_R (TLSGD_ADD_LO12_NC), /* type */
915 0, /* rightshift */
916 2, /* size (0 = byte, 1 = short, 2 = long) */
917 12, /* bitsize */
918 FALSE, /* pc_relative */
919 0, /* bitpos */
920 complain_overflow_dont, /* complain_on_overflow */
921 bfd_elf_generic_reloc, /* special_function */
922 AARCH64_R_STR (TLSGD_ADD_LO12_NC), /* name */
923 FALSE, /* partial_inplace */
924 0xfff, /* src_mask */
925 0xfff, /* dst_mask */
926 FALSE), /* pcrel_offset */
927
928 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G1), /* type */
929 16, /* rightshift */
930 2, /* size (0 = byte, 1 = short, 2 = long) */
931 16, /* bitsize */
932 FALSE, /* pc_relative */
933 0, /* bitpos */
934 complain_overflow_dont, /* complain_on_overflow */
935 bfd_elf_generic_reloc, /* special_function */
936 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G1), /* name */
937 FALSE, /* partial_inplace */
938 0xffff, /* src_mask */
939 0xffff, /* dst_mask */
940 FALSE), /* pcrel_offset */
941
942 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G0_NC), /* type */
943 0, /* rightshift */
944 2, /* size (0 = byte, 1 = short, 2 = long) */
945 16, /* bitsize */
946 FALSE, /* pc_relative */
947 0, /* bitpos */
948 complain_overflow_dont, /* complain_on_overflow */
949 bfd_elf_generic_reloc, /* special_function */
950 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G0_NC), /* name */
951 FALSE, /* partial_inplace */
952 0xffff, /* src_mask */
953 0xffff, /* dst_mask */
954 FALSE), /* pcrel_offset */
955
956 HOWTO (AARCH64_R (TLSIE_ADR_GOTTPREL_PAGE21), /* type */
957 12, /* rightshift */
958 2, /* size (0 = byte, 1 = short, 2 = long) */
959 21, /* bitsize */
960 FALSE, /* pc_relative */
961 0, /* bitpos */
962 complain_overflow_dont, /* complain_on_overflow */
963 bfd_elf_generic_reloc, /* special_function */
964 AARCH64_R_STR (TLSIE_ADR_GOTTPREL_PAGE21), /* name */
965 FALSE, /* partial_inplace */
966 0x1fffff, /* src_mask */
967 0x1fffff, /* dst_mask */
968 FALSE), /* pcrel_offset */
969
970 HOWTO64 (AARCH64_R (TLSIE_LD64_GOTTPREL_LO12_NC), /* type */
971 3, /* rightshift */
972 2, /* size (0 = byte, 1 = short, 2 = long) */
973 12, /* bitsize */
974 FALSE, /* pc_relative */
975 0, /* bitpos */
976 complain_overflow_dont, /* complain_on_overflow */
977 bfd_elf_generic_reloc, /* special_function */
978 AARCH64_R_STR (TLSIE_LD64_GOTTPREL_LO12_NC), /* name */
979 FALSE, /* partial_inplace */
980 0xff8, /* src_mask */
981 0xff8, /* dst_mask */
982 FALSE), /* pcrel_offset */
983
984 HOWTO32 (AARCH64_R (TLSIE_LD32_GOTTPREL_LO12_NC), /* type */
985 2, /* rightshift */
986 2, /* size (0 = byte, 1 = short, 2 = long) */
987 12, /* bitsize */
988 FALSE, /* pc_relative */
989 0, /* bitpos */
990 complain_overflow_dont, /* complain_on_overflow */
991 bfd_elf_generic_reloc, /* special_function */
992 AARCH64_R_STR (TLSIE_LD32_GOTTPREL_LO12_NC), /* name */
993 FALSE, /* partial_inplace */
994 0xffc, /* src_mask */
995 0xffc, /* dst_mask */
996 FALSE), /* pcrel_offset */
997
998 HOWTO (AARCH64_R (TLSIE_LD_GOTTPREL_PREL19), /* type */
999 2, /* rightshift */
1000 2, /* size (0 = byte, 1 = short, 2 = long) */
1001 19, /* bitsize */
1002 FALSE, /* pc_relative */
1003 0, /* bitpos */
1004 complain_overflow_dont, /* complain_on_overflow */
1005 bfd_elf_generic_reloc, /* special_function */
1006 AARCH64_R_STR (TLSIE_LD_GOTTPREL_PREL19), /* name */
1007 FALSE, /* partial_inplace */
1008 0x1ffffc, /* src_mask */
1009 0x1ffffc, /* dst_mask */
1010 FALSE), /* pcrel_offset */
1011
1012 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G2), /* type */
1013 32, /* rightshift */
1014 2, /* size (0 = byte, 1 = short, 2 = long) */
1015 16, /* bitsize */
1016 FALSE, /* pc_relative */
1017 0, /* bitpos */
1018 complain_overflow_unsigned, /* complain_on_overflow */
1019 bfd_elf_generic_reloc, /* special_function */
1020 AARCH64_R_STR (TLSLE_MOVW_TPREL_G2), /* name */
1021 FALSE, /* partial_inplace */
1022 0xffff, /* src_mask */
1023 0xffff, /* dst_mask */
1024 FALSE), /* pcrel_offset */
1025
1026 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G1), /* type */
1027 16, /* rightshift */
1028 2, /* size (0 = byte, 1 = short, 2 = long) */
1029 16, /* bitsize */
1030 FALSE, /* pc_relative */
1031 0, /* bitpos */
1032 complain_overflow_dont, /* complain_on_overflow */
1033 bfd_elf_generic_reloc, /* special_function */
1034 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1), /* name */
1035 FALSE, /* partial_inplace */
1036 0xffff, /* src_mask */
1037 0xffff, /* dst_mask */
1038 FALSE), /* pcrel_offset */
1039
1040 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G1_NC), /* type */
1041 16, /* rightshift */
1042 2, /* size (0 = byte, 1 = short, 2 = long) */
1043 16, /* bitsize */
1044 FALSE, /* pc_relative */
1045 0, /* bitpos */
1046 complain_overflow_dont, /* complain_on_overflow */
1047 bfd_elf_generic_reloc, /* special_function */
1048 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1_NC), /* name */
1049 FALSE, /* partial_inplace */
1050 0xffff, /* src_mask */
1051 0xffff, /* dst_mask */
1052 FALSE), /* pcrel_offset */
1053
1054 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0), /* type */
1055 0, /* rightshift */
1056 2, /* size (0 = byte, 1 = short, 2 = long) */
1057 16, /* bitsize */
1058 FALSE, /* pc_relative */
1059 0, /* bitpos */
1060 complain_overflow_dont, /* complain_on_overflow */
1061 bfd_elf_generic_reloc, /* special_function */
1062 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0), /* name */
1063 FALSE, /* partial_inplace */
1064 0xffff, /* src_mask */
1065 0xffff, /* dst_mask */
1066 FALSE), /* pcrel_offset */
1067
1068 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0_NC), /* type */
1069 0, /* rightshift */
1070 2, /* size (0 = byte, 1 = short, 2 = long) */
1071 16, /* bitsize */
1072 FALSE, /* pc_relative */
1073 0, /* bitpos */
1074 complain_overflow_dont, /* complain_on_overflow */
1075 bfd_elf_generic_reloc, /* special_function */
1076 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0_NC), /* name */
1077 FALSE, /* partial_inplace */
1078 0xffff, /* src_mask */
1079 0xffff, /* dst_mask */
1080 FALSE), /* pcrel_offset */
1081
1082 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_HI12), /* type */
1083 12, /* rightshift */
1084 2, /* size (0 = byte, 1 = short, 2 = long) */
1085 12, /* bitsize */
1086 FALSE, /* pc_relative */
1087 0, /* bitpos */
1088 complain_overflow_unsigned, /* complain_on_overflow */
1089 bfd_elf_generic_reloc, /* special_function */
1090 AARCH64_R_STR (TLSLE_ADD_TPREL_HI12), /* name */
1091 FALSE, /* partial_inplace */
1092 0xfff, /* src_mask */
1093 0xfff, /* dst_mask */
1094 FALSE), /* pcrel_offset */
1095
1096 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12), /* type */
1097 0, /* rightshift */
1098 2, /* size (0 = byte, 1 = short, 2 = long) */
1099 12, /* bitsize */
1100 FALSE, /* pc_relative */
1101 0, /* bitpos */
1102 complain_overflow_unsigned, /* complain_on_overflow */
1103 bfd_elf_generic_reloc, /* special_function */
1104 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12), /* name */
1105 FALSE, /* partial_inplace */
1106 0xfff, /* src_mask */
1107 0xfff, /* dst_mask */
1108 FALSE), /* pcrel_offset */
1109
1110 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12_NC), /* type */
1111 0, /* rightshift */
1112 2, /* size (0 = byte, 1 = short, 2 = long) */
1113 12, /* bitsize */
1114 FALSE, /* pc_relative */
1115 0, /* bitpos */
1116 complain_overflow_dont, /* complain_on_overflow */
1117 bfd_elf_generic_reloc, /* special_function */
1118 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12_NC), /* name */
1119 FALSE, /* partial_inplace */
1120 0xfff, /* src_mask */
1121 0xfff, /* dst_mask */
1122 FALSE), /* pcrel_offset */
1123
1124 HOWTO (AARCH64_R (TLSDESC_LD_PREL19), /* type */
1125 2, /* rightshift */
1126 2, /* size (0 = byte, 1 = short, 2 = long) */
1127 19, /* bitsize */
1128 TRUE, /* pc_relative */
1129 0, /* bitpos */
1130 complain_overflow_dont, /* complain_on_overflow */
1131 bfd_elf_generic_reloc, /* special_function */
1132 AARCH64_R_STR (TLSDESC_LD_PREL19), /* name */
1133 FALSE, /* partial_inplace */
1134 0x0ffffe0, /* src_mask */
1135 0x0ffffe0, /* dst_mask */
1136 TRUE), /* pcrel_offset */
1137
1138 HOWTO (AARCH64_R (TLSDESC_ADR_PREL21), /* type */
1139 0, /* rightshift */
1140 2, /* size (0 = byte, 1 = short, 2 = long) */
1141 21, /* bitsize */
1142 TRUE, /* pc_relative */
1143 0, /* bitpos */
1144 complain_overflow_dont, /* complain_on_overflow */
1145 bfd_elf_generic_reloc, /* special_function */
1146 AARCH64_R_STR (TLSDESC_ADR_PREL21), /* name */
1147 FALSE, /* partial_inplace */
1148 0x1fffff, /* src_mask */
1149 0x1fffff, /* dst_mask */
1150 TRUE), /* pcrel_offset */
1151
1152 /* Get to the page for the GOT entry for the symbol
1153 (G(S) - P) using an ADRP instruction. */
1154 HOWTO (AARCH64_R (TLSDESC_ADR_PAGE21), /* type */
1155 12, /* rightshift */
1156 2, /* size (0 = byte, 1 = short, 2 = long) */
1157 21, /* bitsize */
1158 TRUE, /* pc_relative */
1159 0, /* bitpos */
1160 complain_overflow_dont, /* complain_on_overflow */
1161 bfd_elf_generic_reloc, /* special_function */
1162 AARCH64_R_STR (TLSDESC_ADR_PAGE21), /* name */
1163 FALSE, /* partial_inplace */
1164 0x1fffff, /* src_mask */
1165 0x1fffff, /* dst_mask */
1166 TRUE), /* pcrel_offset */
1167
1168 /* LD64: GOT offset G(S) & 0xff8. */
1169 HOWTO64 (AARCH64_R (TLSDESC_LD64_LO12_NC), /* type */
1170 3, /* rightshift */
1171 2, /* size (0 = byte, 1 = short, 2 = long) */
1172 12, /* bitsize */
1173 FALSE, /* pc_relative */
1174 0, /* bitpos */
1175 complain_overflow_dont, /* complain_on_overflow */
1176 bfd_elf_generic_reloc, /* special_function */
1177 AARCH64_R_STR (TLSDESC_LD64_LO12_NC), /* name */
1178 FALSE, /* partial_inplace */
1179 0xff8, /* src_mask */
1180 0xff8, /* dst_mask */
1181 FALSE), /* pcrel_offset */
1182
1183 /* LD32: GOT offset G(S) & 0xffc. */
1184 HOWTO32 (AARCH64_R (TLSDESC_LD32_LO12_NC), /* type */
1185 2, /* rightshift */
1186 2, /* size (0 = byte, 1 = short, 2 = long) */
1187 12, /* bitsize */
1188 FALSE, /* pc_relative */
1189 0, /* bitpos */
1190 complain_overflow_dont, /* complain_on_overflow */
1191 bfd_elf_generic_reloc, /* special_function */
1192 AARCH64_R_STR (TLSDESC_LD32_LO12_NC), /* name */
1193 FALSE, /* partial_inplace */
1194 0xffc, /* src_mask */
1195 0xffc, /* dst_mask */
1196 FALSE), /* pcrel_offset */
1197
1198 /* ADD: GOT offset G(S) & 0xfff. */
1199 HOWTO (AARCH64_R (TLSDESC_ADD_LO12_NC), /* type */
1200 0, /* rightshift */
1201 2, /* size (0 = byte, 1 = short, 2 = long) */
1202 12, /* bitsize */
1203 FALSE, /* pc_relative */
1204 0, /* bitpos */
1205 complain_overflow_dont, /* complain_on_overflow */
1206 bfd_elf_generic_reloc, /* special_function */
1207 AARCH64_R_STR (TLSDESC_ADD_LO12_NC), /* name */
1208 FALSE, /* partial_inplace */
1209 0xfff, /* src_mask */
1210 0xfff, /* dst_mask */
1211 FALSE), /* pcrel_offset */
1212
1213 HOWTO64 (AARCH64_R (TLSDESC_OFF_G1), /* type */
1214 16, /* rightshift */
1215 2, /* size (0 = byte, 1 = short, 2 = long) */
1216 12, /* bitsize */
1217 FALSE, /* pc_relative */
1218 0, /* bitpos */
1219 complain_overflow_dont, /* complain_on_overflow */
1220 bfd_elf_generic_reloc, /* special_function */
1221 AARCH64_R_STR (TLSDESC_OFF_G1), /* name */
1222 FALSE, /* partial_inplace */
1223 0xffff, /* src_mask */
1224 0xffff, /* dst_mask */
1225 FALSE), /* pcrel_offset */
1226
1227 HOWTO64 (AARCH64_R (TLSDESC_OFF_G0_NC), /* type */
1228 0, /* rightshift */
1229 2, /* size (0 = byte, 1 = short, 2 = long) */
1230 12, /* bitsize */
1231 FALSE, /* pc_relative */
1232 0, /* bitpos */
1233 complain_overflow_dont, /* complain_on_overflow */
1234 bfd_elf_generic_reloc, /* special_function */
1235 AARCH64_R_STR (TLSDESC_OFF_G0_NC), /* name */
1236 FALSE, /* partial_inplace */
1237 0xffff, /* src_mask */
1238 0xffff, /* dst_mask */
1239 FALSE), /* pcrel_offset */
1240
1241 HOWTO64 (AARCH64_R (TLSDESC_LDR), /* type */
1242 0, /* rightshift */
1243 2, /* size (0 = byte, 1 = short, 2 = long) */
1244 12, /* bitsize */
1245 FALSE, /* pc_relative */
1246 0, /* bitpos */
1247 complain_overflow_dont, /* complain_on_overflow */
1248 bfd_elf_generic_reloc, /* special_function */
1249 AARCH64_R_STR (TLSDESC_LDR), /* name */
1250 FALSE, /* partial_inplace */
1251 0x0, /* src_mask */
1252 0x0, /* dst_mask */
1253 FALSE), /* pcrel_offset */
1254
1255 HOWTO64 (AARCH64_R (TLSDESC_ADD), /* type */
1256 0, /* rightshift */
1257 2, /* size (0 = byte, 1 = short, 2 = long) */
1258 12, /* bitsize */
1259 FALSE, /* pc_relative */
1260 0, /* bitpos */
1261 complain_overflow_dont, /* complain_on_overflow */
1262 bfd_elf_generic_reloc, /* special_function */
1263 AARCH64_R_STR (TLSDESC_ADD), /* name */
1264 FALSE, /* partial_inplace */
1265 0x0, /* src_mask */
1266 0x0, /* dst_mask */
1267 FALSE), /* pcrel_offset */
1268
1269 HOWTO (AARCH64_R (TLSDESC_CALL), /* type */
1270 0, /* rightshift */
1271 2, /* size (0 = byte, 1 = short, 2 = long) */
1272 0, /* bitsize */
1273 FALSE, /* pc_relative */
1274 0, /* bitpos */
1275 complain_overflow_dont, /* complain_on_overflow */
1276 bfd_elf_generic_reloc, /* special_function */
1277 AARCH64_R_STR (TLSDESC_CALL), /* name */
1278 FALSE, /* partial_inplace */
1279 0x0, /* src_mask */
1280 0x0, /* dst_mask */
1281 FALSE), /* pcrel_offset */
1282
1283 HOWTO (AARCH64_R (COPY), /* type */
1284 0, /* rightshift */
1285 2, /* size (0 = byte, 1 = short, 2 = long) */
1286 64, /* bitsize */
1287 FALSE, /* pc_relative */
1288 0, /* bitpos */
1289 complain_overflow_bitfield, /* complain_on_overflow */
1290 bfd_elf_generic_reloc, /* special_function */
1291 AARCH64_R_STR (COPY), /* name */
1292 TRUE, /* partial_inplace */
1293 0xffffffff, /* src_mask */
1294 0xffffffff, /* dst_mask */
1295 FALSE), /* pcrel_offset */
1296
1297 HOWTO (AARCH64_R (GLOB_DAT), /* type */
1298 0, /* rightshift */
1299 2, /* size (0 = byte, 1 = short, 2 = long) */
1300 64, /* bitsize */
1301 FALSE, /* pc_relative */
1302 0, /* bitpos */
1303 complain_overflow_bitfield, /* complain_on_overflow */
1304 bfd_elf_generic_reloc, /* special_function */
1305 AARCH64_R_STR (GLOB_DAT), /* name */
1306 TRUE, /* partial_inplace */
1307 0xffffffff, /* src_mask */
1308 0xffffffff, /* dst_mask */
1309 FALSE), /* pcrel_offset */
1310
1311 HOWTO (AARCH64_R (JUMP_SLOT), /* type */
1312 0, /* rightshift */
1313 2, /* size (0 = byte, 1 = short, 2 = long) */
1314 64, /* bitsize */
1315 FALSE, /* pc_relative */
1316 0, /* bitpos */
1317 complain_overflow_bitfield, /* complain_on_overflow */
1318 bfd_elf_generic_reloc, /* special_function */
1319 AARCH64_R_STR (JUMP_SLOT), /* name */
1320 TRUE, /* partial_inplace */
1321 0xffffffff, /* src_mask */
1322 0xffffffff, /* dst_mask */
1323 FALSE), /* pcrel_offset */
1324
1325 HOWTO (AARCH64_R (RELATIVE), /* type */
1326 0, /* rightshift */
1327 2, /* size (0 = byte, 1 = short, 2 = long) */
1328 64, /* bitsize */
1329 FALSE, /* pc_relative */
1330 0, /* bitpos */
1331 complain_overflow_bitfield, /* complain_on_overflow */
1332 bfd_elf_generic_reloc, /* special_function */
1333 AARCH64_R_STR (RELATIVE), /* name */
1334 TRUE, /* partial_inplace */
1335 ALL_ONES, /* src_mask */
1336 ALL_ONES, /* dst_mask */
1337 FALSE), /* pcrel_offset */
1338
1339 HOWTO (AARCH64_R (TLS_DTPMOD), /* type */
1340 0, /* rightshift */
1341 2, /* size (0 = byte, 1 = short, 2 = long) */
1342 64, /* bitsize */
1343 FALSE, /* pc_relative */
1344 0, /* bitpos */
1345 complain_overflow_dont, /* complain_on_overflow */
1346 bfd_elf_generic_reloc, /* special_function */
1347 #if ARCH_SIZE == 64
1348 AARCH64_R_STR (TLS_DTPMOD64), /* name */
1349 #else
1350 AARCH64_R_STR (TLS_DTPMOD), /* name */
1351 #endif
1352 FALSE, /* partial_inplace */
1353 0, /* src_mask */
1354 ALL_ONES, /* dst_mask */
1355 FALSE), /* pcrel_offset */
1356
1357 HOWTO (AARCH64_R (TLS_DTPREL), /* type */
1358 0, /* rightshift */
1359 2, /* size (0 = byte, 1 = short, 2 = long) */
1360 64, /* bitsize */
1361 FALSE, /* pc_relative */
1362 0, /* bitpos */
1363 complain_overflow_dont, /* complain_on_overflow */
1364 bfd_elf_generic_reloc, /* special_function */
1365 #if ARCH_SIZE == 64
1366 AARCH64_R_STR (TLS_DTPREL64), /* name */
1367 #else
1368 AARCH64_R_STR (TLS_DTPREL), /* name */
1369 #endif
1370 FALSE, /* partial_inplace */
1371 0, /* src_mask */
1372 ALL_ONES, /* dst_mask */
1373 FALSE), /* pcrel_offset */
1374
1375 HOWTO (AARCH64_R (TLS_TPREL), /* type */
1376 0, /* rightshift */
1377 2, /* size (0 = byte, 1 = short, 2 = long) */
1378 64, /* bitsize */
1379 FALSE, /* pc_relative */
1380 0, /* bitpos */
1381 complain_overflow_dont, /* complain_on_overflow */
1382 bfd_elf_generic_reloc, /* special_function */
1383 #if ARCH_SIZE == 64
1384 AARCH64_R_STR (TLS_TPREL64), /* name */
1385 #else
1386 AARCH64_R_STR (TLS_TPREL), /* name */
1387 #endif
1388 FALSE, /* partial_inplace */
1389 0, /* src_mask */
1390 ALL_ONES, /* dst_mask */
1391 FALSE), /* pcrel_offset */
1392
1393 HOWTO (AARCH64_R (TLSDESC), /* type */
1394 0, /* rightshift */
1395 2, /* size (0 = byte, 1 = short, 2 = long) */
1396 64, /* bitsize */
1397 FALSE, /* pc_relative */
1398 0, /* bitpos */
1399 complain_overflow_dont, /* complain_on_overflow */
1400 bfd_elf_generic_reloc, /* special_function */
1401 AARCH64_R_STR (TLSDESC), /* name */
1402 FALSE, /* partial_inplace */
1403 0, /* src_mask */
1404 ALL_ONES, /* dst_mask */
1405 FALSE), /* pcrel_offset */
1406
1407 HOWTO (AARCH64_R (IRELATIVE), /* type */
1408 0, /* rightshift */
1409 2, /* size (0 = byte, 1 = short, 2 = long) */
1410 64, /* bitsize */
1411 FALSE, /* pc_relative */
1412 0, /* bitpos */
1413 complain_overflow_bitfield, /* complain_on_overflow */
1414 bfd_elf_generic_reloc, /* special_function */
1415 AARCH64_R_STR (IRELATIVE), /* name */
1416 FALSE, /* partial_inplace */
1417 0, /* src_mask */
1418 ALL_ONES, /* dst_mask */
1419 FALSE), /* pcrel_offset */
1420
1421 EMPTY_HOWTO (0),
1422 };
1423
1424 static reloc_howto_type elfNN_aarch64_howto_none =
1425 HOWTO (R_AARCH64_NONE, /* type */
1426 0, /* rightshift */
1427 3, /* size (0 = byte, 1 = short, 2 = long) */
1428 0, /* bitsize */
1429 FALSE, /* pc_relative */
1430 0, /* bitpos */
1431 complain_overflow_dont,/* complain_on_overflow */
1432 bfd_elf_generic_reloc, /* special_function */
1433 "R_AARCH64_NONE", /* name */
1434 FALSE, /* partial_inplace */
1435 0, /* src_mask */
1436 0, /* dst_mask */
1437 FALSE); /* pcrel_offset */
1438
1439 /* Given HOWTO, return the bfd internal relocation enumerator. */
1440
1441 static bfd_reloc_code_real_type
1442 elfNN_aarch64_bfd_reloc_from_howto (reloc_howto_type *howto)
1443 {
1444 const int size
1445 = (int) ARRAY_SIZE (elfNN_aarch64_howto_table);
1446 const ptrdiff_t offset
1447 = howto - elfNN_aarch64_howto_table;
1448
1449 if (offset > 0 && offset < size - 1)
1450 return BFD_RELOC_AARCH64_RELOC_START + offset;
1451
1452 if (howto == &elfNN_aarch64_howto_none)
1453 return BFD_RELOC_AARCH64_NONE;
1454
1455 return BFD_RELOC_AARCH64_RELOC_START;
1456 }
1457
1458 /* Given R_TYPE, return the bfd internal relocation enumerator. */
1459
1460 static bfd_reloc_code_real_type
1461 elfNN_aarch64_bfd_reloc_from_type (unsigned int r_type)
1462 {
1463 static bfd_boolean initialized_p = FALSE;
1464 /* Indexed by R_TYPE, values are offsets in the howto_table. */
1465 static unsigned int offsets[R_AARCH64_end];
1466
1467 if (initialized_p == FALSE)
1468 {
1469 unsigned int i;
1470
1471 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1472 if (elfNN_aarch64_howto_table[i].type != 0)
1473 offsets[elfNN_aarch64_howto_table[i].type] = i;
1474
1475 initialized_p = TRUE;
1476 }
1477
1478 if (r_type == R_AARCH64_NONE || r_type == R_AARCH64_NULL)
1479 return BFD_RELOC_AARCH64_NONE;
1480
1481 /* PR 17512: file: b371e70a. */
1482 if (r_type >= R_AARCH64_end)
1483 {
1484 _bfd_error_handler (_("Invalid AArch64 reloc number: %d"), r_type);
1485 bfd_set_error (bfd_error_bad_value);
1486 return BFD_RELOC_AARCH64_NONE;
1487 }
1488
1489 return BFD_RELOC_AARCH64_RELOC_START + offsets[r_type];
1490 }
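
/* For instance (illustrative only): after the one-off initialisation
   above, offsets[] maps an ELF r_type back to its index in
   elfNN_aarch64_howto_table, so

     elfNN_aarch64_bfd_reloc_from_type (AARCH64_R (ABS32))

   returns BFD_RELOC_AARCH64_RELOC_START plus the index of the ABS32
   entry, i.e. the bfd internal enumerator for that relocation.  */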
1491
1492 struct elf_aarch64_reloc_map
1493 {
1494 bfd_reloc_code_real_type from;
1495 bfd_reloc_code_real_type to;
1496 };
1497
1498 /* Map bfd generic reloc to AArch64-specific reloc. */
1499 static const struct elf_aarch64_reloc_map elf_aarch64_reloc_map[] =
1500 {
1501 {BFD_RELOC_NONE, BFD_RELOC_AARCH64_NONE},
1502
1503 /* Basic data relocations. */
1504 {BFD_RELOC_CTOR, BFD_RELOC_AARCH64_NN},
1505 {BFD_RELOC_64, BFD_RELOC_AARCH64_64},
1506 {BFD_RELOC_32, BFD_RELOC_AARCH64_32},
1507 {BFD_RELOC_16, BFD_RELOC_AARCH64_16},
1508 {BFD_RELOC_64_PCREL, BFD_RELOC_AARCH64_64_PCREL},
1509 {BFD_RELOC_32_PCREL, BFD_RELOC_AARCH64_32_PCREL},
1510 {BFD_RELOC_16_PCREL, BFD_RELOC_AARCH64_16_PCREL},
1511 };
1512
1513 /* Given the bfd internal relocation enumerator in CODE, return the
1514 corresponding howto entry. */
1515
1516 static reloc_howto_type *
1517 elfNN_aarch64_howto_from_bfd_reloc (bfd_reloc_code_real_type code)
1518 {
1519 unsigned int i;
1520
1521 /* Convert bfd generic reloc to AArch64-specific reloc. */
1522 if (code < BFD_RELOC_AARCH64_RELOC_START
1523 || code > BFD_RELOC_AARCH64_RELOC_END)
1524 for (i = 0; i < ARRAY_SIZE (elf_aarch64_reloc_map); i++)
1525 if (elf_aarch64_reloc_map[i].from == code)
1526 {
1527 code = elf_aarch64_reloc_map[i].to;
1528 break;
1529 }
1530
1531 if (code > BFD_RELOC_AARCH64_RELOC_START
1532 && code < BFD_RELOC_AARCH64_RELOC_END)
1533 if (elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START].type)
1534 return &elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START];
1535
1536 if (code == BFD_RELOC_AARCH64_NONE)
1537 return &elfNN_aarch64_howto_none;
1538
1539 return NULL;
1540 }
1541
1542 static reloc_howto_type *
1543 elfNN_aarch64_howto_from_type (unsigned int r_type)
1544 {
1545 bfd_reloc_code_real_type val;
1546 reloc_howto_type *howto;
1547
1548 #if ARCH_SIZE == 32
1549 if (r_type > 256)
1550 {
1551 bfd_set_error (bfd_error_bad_value);
1552 return NULL;
1553 }
1554 #endif
1555
1556 if (r_type == R_AARCH64_NONE)
1557 return &elfNN_aarch64_howto_none;
1558
1559 val = elfNN_aarch64_bfd_reloc_from_type (r_type);
1560 howto = elfNN_aarch64_howto_from_bfd_reloc (val);
1561
1562 if (howto != NULL)
1563 return howto;
1564
1565 bfd_set_error (bfd_error_bad_value);
1566 return NULL;
1567 }
1568
1569 static void
1570 elfNN_aarch64_info_to_howto (bfd *abfd ATTRIBUTE_UNUSED, arelent *bfd_reloc,
1571 Elf_Internal_Rela *elf_reloc)
1572 {
1573 unsigned int r_type;
1574
1575 r_type = ELFNN_R_TYPE (elf_reloc->r_info);
1576 bfd_reloc->howto = elfNN_aarch64_howto_from_type (r_type);
1577 }
1578
1579 static reloc_howto_type *
1580 elfNN_aarch64_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1581 bfd_reloc_code_real_type code)
1582 {
1583 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (code);
1584
1585 if (howto != NULL)
1586 return howto;
1587
1588 bfd_set_error (bfd_error_bad_value);
1589 return NULL;
1590 }
1591
1592 static reloc_howto_type *
1593 elfNN_aarch64_reloc_name_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1594 const char *r_name)
1595 {
1596 unsigned int i;
1597
1598 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1599 if (elfNN_aarch64_howto_table[i].name != NULL
1600 && strcasecmp (elfNN_aarch64_howto_table[i].name, r_name) == 0)
1601 return &elfNN_aarch64_howto_table[i];
1602
1603 return NULL;
1604 }
1605
1606 #define TARGET_LITTLE_SYM aarch64_elfNN_le_vec
1607 #define TARGET_LITTLE_NAME "elfNN-littleaarch64"
1608 #define TARGET_BIG_SYM aarch64_elfNN_be_vec
1609 #define TARGET_BIG_NAME "elfNN-bigaarch64"
1610
1611 /* The linker script knows the section names for placement.
1612 The entry_names are used to do simple name mangling on the stubs.
1613 Given a function name, and its type, the stub can be found. The
1614 name can be changed. The only requirement is that the %s be present. */
1615 #define STUB_ENTRY_NAME "__%s_veneer"
1616
1617 /* The name of the dynamic interpreter. This is put in the .interp
1618 section. */
1619 #define ELF_DYNAMIC_INTERPRETER "/lib/ld.so.1"
1620
1621 #define AARCH64_MAX_FWD_BRANCH_OFFSET \
1622 (((1 << 25) - 1) << 2)
1623 #define AARCH64_MAX_BWD_BRANCH_OFFSET \
1624 (-((1 << 25) << 2))
1625
1626 #define AARCH64_MAX_ADRP_IMM ((1 << 20) - 1)
1627 #define AARCH64_MIN_ADRP_IMM (-(1 << 20))
1628
1629 static int
1630 aarch64_valid_for_adrp_p (bfd_vma value, bfd_vma place)
1631 {
1632 bfd_signed_vma offset = (bfd_signed_vma) (PG (value) - PG (place)) >> 12;
1633 return offset <= AARCH64_MAX_ADRP_IMM && offset >= AARCH64_MIN_ADRP_IMM;
1634 }
1635
1636 static int
1637 aarch64_valid_branch_p (bfd_vma value, bfd_vma place)
1638 {
1639 bfd_signed_vma offset = (bfd_signed_vma) (value - place);
1640 return (offset <= AARCH64_MAX_FWD_BRANCH_OFFSET
1641 && offset >= AARCH64_MAX_BWD_BRANCH_OFFSET);
1642 }
1643
1644 static const uint32_t aarch64_adrp_branch_stub [] =
1645 {
1646 0x90000010, /* adrp ip0, X */
1647 /* R_AARCH64_ADR_HI21_PCREL(X) */
1648 0x91000210, /* add ip0, ip0, :lo12:X */
1649 /* R_AARCH64_ADD_ABS_LO12_NC(X) */
1650 0xd61f0200, /* br ip0 */
1651 };
1652
1653 static const uint32_t aarch64_long_branch_stub[] =
1654 {
1655 #if ARCH_SIZE == 64
1656 0x58000090, /* ldr ip0, 1f */
1657 #else
1658 0x18000090, /* ldr wip0, 1f */
1659 #endif
1660 0x10000011, /* adr ip1, #0 */
1661 0x8b110210, /* add ip0, ip0, ip1 */
1662 0xd61f0200, /* br ip0 */
1663 0x00000000, /* 1: .xword or .word
1664 R_AARCH64_PRELNN(X) + 12
1665 */
1666 0x00000000,
1667 };
1668
1669 static const uint32_t aarch64_erratum_835769_stub[] =
1670 {
1671 0x00000000, /* Placeholder for multiply accumulate. */
1672 0x14000000, /* b <label> */
1673 };
1674
1675 static const uint32_t aarch64_erratum_843419_stub[] =
1676 {
1677 0x00000000, /* Placeholder for LDR instruction. */
1678 0x14000000, /* b <label> */
1679 };
1680
1681 /* Section name for stubs is the associated section name plus this
1682 string. */
1683 #define STUB_SUFFIX ".stub"
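/* For example, stubs grouped under an input section named ".text" are
   collected in a stub section named ".text.stub".  */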
1684
1685 enum elf_aarch64_stub_type
1686 {
1687 aarch64_stub_none,
1688 aarch64_stub_adrp_branch,
1689 aarch64_stub_long_branch,
1690 aarch64_stub_erratum_835769_veneer,
1691 aarch64_stub_erratum_843419_veneer,
1692 };
1693
1694 struct elf_aarch64_stub_hash_entry
1695 {
1696 /* Base hash table entry structure. */
1697 struct bfd_hash_entry root;
1698
1699 /* The stub section. */
1700 asection *stub_sec;
1701
1702 /* Offset within stub_sec of the beginning of this stub. */
1703 bfd_vma stub_offset;
1704
1705 /* Given the symbol's value and its section we can determine its final
1706 value when building the stubs (so the stub knows where to jump). */
1707 bfd_vma target_value;
1708 asection *target_section;
1709
1710 enum elf_aarch64_stub_type stub_type;
1711
1712 /* The symbol table entry, if any, that this was derived from. */
1713 struct elf_aarch64_link_hash_entry *h;
1714
1715 /* Destination symbol type. */
1716 unsigned char st_type;
1717
1718 /* Where this stub is being called from, or, in the case of combined
1719 stub sections, the first input section in the group. */
1720 asection *id_sec;
1721
1722 /* The name for the local symbol at the start of this stub. The
1723 stub name in the hash table has to be unique; this does not, so
1724 it can be friendlier. */
1725 char *output_name;
1726
1727 /* The instruction which caused this stub to be generated (only valid for
1728 erratum 835769 workaround stubs at present). */
1729 uint32_t veneered_insn;
1730
1731 /* In an erratum 843419 workaround stub, the ADRP instruction offset. */
1732 bfd_vma adrp_offset;
1733 };
1734
1735 /* Used to build a map of a section. This is required for mixed-endian
1736 code/data. */
1737
1738 typedef struct elf_elf_section_map
1739 {
1740 bfd_vma vma;
1741 char type;
1742 }
1743 elf_aarch64_section_map;
1744
1745
1746 typedef struct _aarch64_elf_section_data
1747 {
1748 struct bfd_elf_section_data elf;
1749 unsigned int mapcount;
1750 unsigned int mapsize;
1751 elf_aarch64_section_map *map;
1752 }
1753 _aarch64_elf_section_data;
1754
1755 #define elf_aarch64_section_data(sec) \
1756 ((_aarch64_elf_section_data *) elf_section_data (sec))
1757
1758 /* The size of the thread control block, which is defined to be two pointers. */
1759 #define TCB_SIZE ((ARCH_SIZE / 8) * 2)
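/* That is 16 bytes when ARCH_SIZE is 64 (LP64) and 8 bytes when it is
   32 (ILP32).  */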
1760
1761 struct elf_aarch64_local_symbol
1762 {
1763 unsigned int got_type;
1764 bfd_signed_vma got_refcount;
1765 bfd_vma got_offset;
1766
1767 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The
1768 offset is from the end of the jump table and reserved entries
1769 within the PLTGOT.
1770
1771 The magic value (bfd_vma) -1 indicates that an offset has not been
1772 allocated. */
1773 bfd_vma tlsdesc_got_jump_table_offset;
1774 };
1775
1776 struct elf_aarch64_obj_tdata
1777 {
1778 struct elf_obj_tdata root;
1779
1780 /* Local symbol descriptors. */
1781 struct elf_aarch64_local_symbol *locals;
1782
1783 /* Zero to warn when linking objects with incompatible enum sizes. */
1784 int no_enum_size_warning;
1785
1786 /* Zero to warn when linking objects with incompatible wchar_t sizes. */
1787 int no_wchar_size_warning;
1788 };
1789
1790 #define elf_aarch64_tdata(bfd) \
1791 ((struct elf_aarch64_obj_tdata *) (bfd)->tdata.any)
1792
1793 #define elf_aarch64_locals(bfd) (elf_aarch64_tdata (bfd)->locals)
1794
1795 #define is_aarch64_elf(bfd) \
1796 (bfd_get_flavour (bfd) == bfd_target_elf_flavour \
1797 && elf_tdata (bfd) != NULL \
1798 && elf_object_id (bfd) == AARCH64_ELF_DATA)
1799
1800 static bfd_boolean
1801 elfNN_aarch64_mkobject (bfd *abfd)
1802 {
1803 return bfd_elf_allocate_object (abfd, sizeof (struct elf_aarch64_obj_tdata),
1804 AARCH64_ELF_DATA);
1805 }
1806
1807 #define elf_aarch64_hash_entry(ent) \
1808 ((struct elf_aarch64_link_hash_entry *)(ent))
1809
1810 #define GOT_UNKNOWN 0
1811 #define GOT_NORMAL 1
1812 #define GOT_TLS_GD 2
1813 #define GOT_TLS_IE 4
1814 #define GOT_TLSDESC_GD 8
1815
1816 #define GOT_TLS_GD_ANY_P(type) ((type & GOT_TLS_GD) || (type & GOT_TLSDESC_GD))
1817
1818 /* AArch64 ELF linker hash entry. */
1819 struct elf_aarch64_link_hash_entry
1820 {
1821 struct elf_link_hash_entry root;
1822
1823 /* Track dynamic relocs copied for this symbol. */
1824 struct elf_dyn_relocs *dyn_relocs;
1825
1826 /* Since PLT entries have variable size, we need to record the
1827 index into .got.plt instead of recomputing it from the PLT
1828 offset. */
1829 bfd_signed_vma plt_got_offset;
1830
1831 /* Bit mask representing the types of GOT entry, if any, required by
1832 this symbol. */
1833 unsigned int got_type;
1834
1835 /* A pointer to the most recently used stub hash entry against this
1836 symbol. */
1837 struct elf_aarch64_stub_hash_entry *stub_cache;
1838
1839 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The offset
1840 is from the end of the jump table and reserved entries within the PLTGOT.
1841
1842 The magic value (bfd_vma) -1 indicates that an offset has not
1843 been allocated. */
1844 bfd_vma tlsdesc_got_jump_table_offset;
1845 };
1846
1847 static unsigned int
1848 elfNN_aarch64_symbol_got_type (struct elf_link_hash_entry *h,
1849 bfd *abfd,
1850 unsigned long r_symndx)
1851 {
1852 if (h)
1853 return elf_aarch64_hash_entry (h)->got_type;
1854
1855 if (! elf_aarch64_locals (abfd))
1856 return GOT_UNKNOWN;
1857
1858 return elf_aarch64_locals (abfd)[r_symndx].got_type;
1859 }
1860
1861 /* Get the AArch64 elf linker hash table from a link_info structure. */
1862 #define elf_aarch64_hash_table(info) \
1863 ((struct elf_aarch64_link_hash_table *) ((info)->hash))
1864
1865 #define aarch64_stub_hash_lookup(table, string, create, copy) \
1866 ((struct elf_aarch64_stub_hash_entry *) \
1867 bfd_hash_lookup ((table), (string), (create), (copy)))
1868
1869 /* AArch64 ELF linker hash table. */
1870 struct elf_aarch64_link_hash_table
1871 {
1872 /* The main hash table. */
1873 struct elf_link_hash_table root;
1874
1875 /* Nonzero to force PIC branch veneers. */
1876 int pic_veneer;
1877
1878 /* Fix erratum 835769. */
1879 int fix_erratum_835769;
1880
1881 /* Fix erratum 843419. */
1882 int fix_erratum_843419;
1883
1884 /* Enable ADRP->ADR rewrite for erratum 843419 workaround. */
1885 int fix_erratum_843419_adr;
1886
1887 /* The number of bytes in the initial entry in the PLT. */
1888 bfd_size_type plt_header_size;
1889
1890 /* The number of bytes in the subsequent PLT entries. */
1891 bfd_size_type plt_entry_size;
1892
1893 /* Short-cuts to get to dynamic linker sections. */
1894 asection *sdynbss;
1895 asection *srelbss;
1896
1897 /* Small local sym cache. */
1898 struct sym_cache sym_cache;
1899
1900 /* For convenience in allocate_dynrelocs. */
1901 bfd *obfd;
1902
1903 /* The amount of space used by the reserved portion of the sgotplt
1904 section, plus whatever space is used by the jump slots. */
1905 bfd_vma sgotplt_jump_table_size;
1906
1907 /* The stub hash table. */
1908 struct bfd_hash_table stub_hash_table;
1909
1910 /* Linker stub bfd. */
1911 bfd *stub_bfd;
1912
1913 /* Linker call-backs. */
1914 asection *(*add_stub_section) (const char *, asection *);
1915 void (*layout_sections_again) (void);
1916
1917 /* Array to keep track of which stub sections have been created, and
1918 information on stub grouping. */
1919 struct map_stub
1920 {
1921 /* This is the section to which stubs in the group will be
1922 attached. */
1923 asection *link_sec;
1924 /* The stub section. */
1925 asection *stub_sec;
1926 } *stub_group;
1927
1928 /* Assorted information used by elfNN_aarch64_size_stubs. */
1929 unsigned int bfd_count;
1930 int top_index;
1931 asection **input_list;
1932
1933 /* The offset into splt of the PLT entry for the TLS descriptor
1934 resolver. Special values are 0, if not necessary (or not found
1935 to be necessary yet), and -1 if needed but not determined
1936 yet. */
1937 bfd_vma tlsdesc_plt;
1938
1939 /* The GOT offset for the lazy trampoline. Communicated to the
1940 loader via DT_TLSDESC_GOT. The magic value (bfd_vma) -1
1941 indicates an offset is not allocated. */
1942 bfd_vma dt_tlsdesc_got;
1943
1944 /* Used by local STT_GNU_IFUNC symbols. */
1945 htab_t loc_hash_table;
1946 void * loc_hash_memory;
1947 };
1948
1949 /* Create an entry in an AArch64 ELF linker hash table. */
1950
1951 static struct bfd_hash_entry *
1952 elfNN_aarch64_link_hash_newfunc (struct bfd_hash_entry *entry,
1953 struct bfd_hash_table *table,
1954 const char *string)
1955 {
1956 struct elf_aarch64_link_hash_entry *ret =
1957 (struct elf_aarch64_link_hash_entry *) entry;
1958
1959 /* Allocate the structure if it has not already been allocated by a
1960 subclass. */
1961 if (ret == NULL)
1962 ret = bfd_hash_allocate (table,
1963 sizeof (struct elf_aarch64_link_hash_entry));
1964 if (ret == NULL)
1965 return (struct bfd_hash_entry *) ret;
1966
1967 /* Call the allocation method of the superclass. */
1968 ret = ((struct elf_aarch64_link_hash_entry *)
1969 _bfd_elf_link_hash_newfunc ((struct bfd_hash_entry *) ret,
1970 table, string));
1971 if (ret != NULL)
1972 {
1973 ret->dyn_relocs = NULL;
1974 ret->got_type = GOT_UNKNOWN;
1975 ret->plt_got_offset = (bfd_vma) - 1;
1976 ret->stub_cache = NULL;
1977 ret->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
1978 }
1979
1980 return (struct bfd_hash_entry *) ret;
1981 }
1982
1983 /* Initialize an entry in the stub hash table. */
1984
1985 static struct bfd_hash_entry *
1986 stub_hash_newfunc (struct bfd_hash_entry *entry,
1987 struct bfd_hash_table *table, const char *string)
1988 {
1989 /* Allocate the structure if it has not already been allocated by a
1990 subclass. */
1991 if (entry == NULL)
1992 {
1993 entry = bfd_hash_allocate (table,
1994 sizeof (struct
1995 elf_aarch64_stub_hash_entry));
1996 if (entry == NULL)
1997 return entry;
1998 }
1999
2000 /* Call the allocation method of the superclass. */
2001 entry = bfd_hash_newfunc (entry, table, string);
2002 if (entry != NULL)
2003 {
2004 struct elf_aarch64_stub_hash_entry *eh;
2005
2006 /* Initialize the local fields. */
2007 eh = (struct elf_aarch64_stub_hash_entry *) entry;
2008 eh->adrp_offset = 0;
2009 eh->stub_sec = NULL;
2010 eh->stub_offset = 0;
2011 eh->target_value = 0;
2012 eh->target_section = NULL;
2013 eh->stub_type = aarch64_stub_none;
2014 eh->h = NULL;
2015 eh->id_sec = NULL;
2016 }
2017
2018 return entry;
2019 }
2020
2021 /* Compute a hash of a local hash entry. We use elf_link_hash_entry
2022 for local symbols so that we can handle local STT_GNU_IFUNC symbols
2023 as global symbols. We reuse indx and dynstr_index for the local
2024 symbol hash since they aren't used by global symbols in this backend. */
2025
2026 static hashval_t
2027 elfNN_aarch64_local_htab_hash (const void *ptr)
2028 {
2029 struct elf_link_hash_entry *h
2030 = (struct elf_link_hash_entry *) ptr;
2031 return ELF_LOCAL_SYMBOL_HASH (h->indx, h->dynstr_index);
2032 }
2033
2034 /* Compare local hash entries. */
2035
2036 static int
2037 elfNN_aarch64_local_htab_eq (const void *ptr1, const void *ptr2)
2038 {
2039 struct elf_link_hash_entry *h1
2040 = (struct elf_link_hash_entry *) ptr1;
2041 struct elf_link_hash_entry *h2
2042 = (struct elf_link_hash_entry *) ptr2;
2043
2044 return h1->indx == h2->indx && h1->dynstr_index == h2->dynstr_index;
2045 }
2046
2047 /* Find and/or create a hash entry for a local symbol. */
2048
2049 static struct elf_link_hash_entry *
2050 elfNN_aarch64_get_local_sym_hash (struct elf_aarch64_link_hash_table *htab,
2051 bfd *abfd, const Elf_Internal_Rela *rel,
2052 bfd_boolean create)
2053 {
2054 struct elf_aarch64_link_hash_entry e, *ret;
2055 asection *sec = abfd->sections;
2056 hashval_t h = ELF_LOCAL_SYMBOL_HASH (sec->id,
2057 ELFNN_R_SYM (rel->r_info));
2058 void **slot;
2059
2060 e.root.indx = sec->id;
2061 e.root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2062 slot = htab_find_slot_with_hash (htab->loc_hash_table, &e, h,
2063 create ? INSERT : NO_INSERT);
2064
2065 if (!slot)
2066 return NULL;
2067
2068 if (*slot)
2069 {
2070 ret = (struct elf_aarch64_link_hash_entry *) *slot;
2071 return &ret->root;
2072 }
2073
2074 ret = (struct elf_aarch64_link_hash_entry *)
2075 objalloc_alloc ((struct objalloc *) htab->loc_hash_memory,
2076 sizeof (struct elf_aarch64_link_hash_entry));
2077 if (ret)
2078 {
2079 memset (ret, 0, sizeof (*ret));
2080 ret->root.indx = sec->id;
2081 ret->root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2082 ret->root.dynindx = -1;
2083 *slot = ret;
2084 }
2085 return &ret->root;
2086 }
2087
2088 /* Copy the extra info we tack onto an elf_link_hash_entry. */
2089
2090 static void
2091 elfNN_aarch64_copy_indirect_symbol (struct bfd_link_info *info,
2092 struct elf_link_hash_entry *dir,
2093 struct elf_link_hash_entry *ind)
2094 {
2095 struct elf_aarch64_link_hash_entry *edir, *eind;
2096
2097 edir = (struct elf_aarch64_link_hash_entry *) dir;
2098 eind = (struct elf_aarch64_link_hash_entry *) ind;
2099
2100 if (eind->dyn_relocs != NULL)
2101 {
2102 if (edir->dyn_relocs != NULL)
2103 {
2104 struct elf_dyn_relocs **pp;
2105 struct elf_dyn_relocs *p;
2106
2107 /* Add reloc counts against the indirect sym to the direct sym
2108 list. Merge any entries against the same section. */
2109 for (pp = &eind->dyn_relocs; (p = *pp) != NULL;)
2110 {
2111 struct elf_dyn_relocs *q;
2112
2113 for (q = edir->dyn_relocs; q != NULL; q = q->next)
2114 if (q->sec == p->sec)
2115 {
2116 q->pc_count += p->pc_count;
2117 q->count += p->count;
2118 *pp = p->next;
2119 break;
2120 }
2121 if (q == NULL)
2122 pp = &p->next;
2123 }
2124 *pp = edir->dyn_relocs;
2125 }
2126
2127 edir->dyn_relocs = eind->dyn_relocs;
2128 eind->dyn_relocs = NULL;
2129 }
2130
2131 if (ind->root.type == bfd_link_hash_indirect)
2132 {
2133 /* Copy over PLT info. */
2134 if (dir->got.refcount <= 0)
2135 {
2136 edir->got_type = eind->got_type;
2137 eind->got_type = GOT_UNKNOWN;
2138 }
2139 }
2140
2141 _bfd_elf_link_hash_copy_indirect (info, dir, ind);
2142 }
2143
2144 /* Destroy an AArch64 elf linker hash table. */
2145
2146 static void
2147 elfNN_aarch64_link_hash_table_free (bfd *obfd)
2148 {
2149 struct elf_aarch64_link_hash_table *ret
2150 = (struct elf_aarch64_link_hash_table *) obfd->link.hash;
2151
2152 if (ret->loc_hash_table)
2153 htab_delete (ret->loc_hash_table);
2154 if (ret->loc_hash_memory)
2155 objalloc_free ((struct objalloc *) ret->loc_hash_memory);
2156
2157 bfd_hash_table_free (&ret->stub_hash_table);
2158 _bfd_elf_link_hash_table_free (obfd);
2159 }
2160
2161 /* Create an AArch64 elf linker hash table. */
2162
2163 static struct bfd_link_hash_table *
2164 elfNN_aarch64_link_hash_table_create (bfd *abfd)
2165 {
2166 struct elf_aarch64_link_hash_table *ret;
2167 bfd_size_type amt = sizeof (struct elf_aarch64_link_hash_table);
2168
2169 ret = bfd_zmalloc (amt);
2170 if (ret == NULL)
2171 return NULL;
2172
2173 if (!_bfd_elf_link_hash_table_init
2174 (&ret->root, abfd, elfNN_aarch64_link_hash_newfunc,
2175 sizeof (struct elf_aarch64_link_hash_entry), AARCH64_ELF_DATA))
2176 {
2177 free (ret);
2178 return NULL;
2179 }
2180
2181 ret->plt_header_size = PLT_ENTRY_SIZE;
2182 ret->plt_entry_size = PLT_SMALL_ENTRY_SIZE;
2183 ret->obfd = abfd;
2184 ret->dt_tlsdesc_got = (bfd_vma) - 1;
2185
2186 if (!bfd_hash_table_init (&ret->stub_hash_table, stub_hash_newfunc,
2187 sizeof (struct elf_aarch64_stub_hash_entry)))
2188 {
2189 _bfd_elf_link_hash_table_free (abfd);
2190 return NULL;
2191 }
2192
2193 ret->loc_hash_table = htab_try_create (1024,
2194 elfNN_aarch64_local_htab_hash,
2195 elfNN_aarch64_local_htab_eq,
2196 NULL);
2197 ret->loc_hash_memory = objalloc_create ();
2198 if (!ret->loc_hash_table || !ret->loc_hash_memory)
2199 {
2200 elfNN_aarch64_link_hash_table_free (abfd);
2201 return NULL;
2202 }
2203 ret->root.root.hash_table_free = elfNN_aarch64_link_hash_table_free;
2204
2205 return &ret->root.root;
2206 }
2207
2208 static bfd_boolean
2209 aarch64_relocate (unsigned int r_type, bfd *input_bfd, asection *input_section,
2210 bfd_vma offset, bfd_vma value)
2211 {
2212 reloc_howto_type *howto;
2213 bfd_vma place;
2214
2215 howto = elfNN_aarch64_howto_from_type (r_type);
2216 place = (input_section->output_section->vma + input_section->output_offset
2217 + offset);
2218
2219 r_type = elfNN_aarch64_bfd_reloc_from_type (r_type);
2220 value = _bfd_aarch64_elf_resolve_relocation (r_type, place, value, 0, FALSE);
2221 return _bfd_aarch64_elf_put_addend (input_bfd,
2222 input_section->contents + offset, r_type,
2223 howto, value);
2224 }
2225
2226 static enum elf_aarch64_stub_type
2227 aarch64_select_branch_stub (bfd_vma value, bfd_vma place)
2228 {
2229 if (aarch64_valid_for_adrp_p (value, place))
2230 return aarch64_stub_adrp_branch;
2231 return aarch64_stub_long_branch;
2232 }
2233
2234 /* Determine the type of stub needed, if any, for a call. */
2235
2236 static enum elf_aarch64_stub_type
2237 aarch64_type_of_stub (struct bfd_link_info *info,
2238 asection *input_sec,
2239 const Elf_Internal_Rela *rel,
2240 unsigned char st_type,
2241 struct elf_aarch64_link_hash_entry *hash,
2242 bfd_vma destination)
2243 {
2244 bfd_vma location;
2245 bfd_signed_vma branch_offset;
2246 unsigned int r_type;
2247 struct elf_aarch64_link_hash_table *globals;
2248 enum elf_aarch64_stub_type stub_type = aarch64_stub_none;
2249 bfd_boolean via_plt_p;
2250
2251 if (st_type != STT_FUNC)
2252 return stub_type;
2253
2254 globals = elf_aarch64_hash_table (info);
2255 via_plt_p = (globals->root.splt != NULL && hash != NULL
2256 && hash->root.plt.offset != (bfd_vma) - 1);
2257
2258 if (via_plt_p)
2259 return stub_type;
2260
2261 /* Determine where the call point is. */
2262 location = (input_sec->output_offset
2263 + input_sec->output_section->vma + rel->r_offset);
2264
2265 branch_offset = (bfd_signed_vma) (destination - location);
2266
2267 r_type = ELFNN_R_TYPE (rel->r_info);
2268
2269 /* We don't want to redirect any old unconditional jump in this way,
2270 only one which is being used for a sibcall, where it is
2271 acceptable for the IP0 and IP1 registers to be clobbered. */
2272 if ((r_type == AARCH64_R (CALL26) || r_type == AARCH64_R (JUMP26))
2273 && (branch_offset > AARCH64_MAX_FWD_BRANCH_OFFSET
2274 || branch_offset < AARCH64_MAX_BWD_BRANCH_OFFSET))
2275 {
2276 stub_type = aarch64_stub_long_branch;
2277 }
2278
2279 return stub_type;
2280 }
2281
2282 /* Build a name for an entry in the stub hash table. */
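/* For illustration (the numeric values below are made up): a stub for a
   global symbol is named "<section-id>_<symbol>+<addend>", e.g.
   "0000002a_printf+0", while a stub for a local symbol is named
   "<section-id>_<sym-sec-id>:<sym-index>+<addend>", e.g. "0000002a_1c:5+0".  */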
2283
2284 static char *
2285 elfNN_aarch64_stub_name (const asection *input_section,
2286 const asection *sym_sec,
2287 const struct elf_aarch64_link_hash_entry *hash,
2288 const Elf_Internal_Rela *rel)
2289 {
2290 char *stub_name;
2291 bfd_size_type len;
2292
2293 if (hash)
2294 {
2295 len = 8 + 1 + strlen (hash->root.root.root.string) + 1 + 16 + 1;
2296 stub_name = bfd_malloc (len);
2297 if (stub_name != NULL)
2298 snprintf (stub_name, len, "%08x_%s+%" BFD_VMA_FMT "x",
2299 (unsigned int) input_section->id,
2300 hash->root.root.root.string,
2301 rel->r_addend);
2302 }
2303 else
2304 {
2305 len = 8 + 1 + 8 + 1 + 8 + 1 + 16 + 1;
2306 stub_name = bfd_malloc (len);
2307 if (stub_name != NULL)
2308 snprintf (stub_name, len, "%08x_%x:%x+%" BFD_VMA_FMT "x",
2309 (unsigned int) input_section->id,
2310 (unsigned int) sym_sec->id,
2311 (unsigned int) ELFNN_R_SYM (rel->r_info),
2312 rel->r_addend);
2313 }
2314
2315 return stub_name;
2316 }
2317
2318 /* Look up an entry in the stub hash. Stub entries are cached because
2319 creating the stub name takes a bit of time. */
2320
2321 static struct elf_aarch64_stub_hash_entry *
2322 elfNN_aarch64_get_stub_entry (const asection *input_section,
2323 const asection *sym_sec,
2324 struct elf_link_hash_entry *hash,
2325 const Elf_Internal_Rela *rel,
2326 struct elf_aarch64_link_hash_table *htab)
2327 {
2328 struct elf_aarch64_stub_hash_entry *stub_entry;
2329 struct elf_aarch64_link_hash_entry *h =
2330 (struct elf_aarch64_link_hash_entry *) hash;
2331 const asection *id_sec;
2332
2333 if ((input_section->flags & SEC_CODE) == 0)
2334 return NULL;
2335
2336 /* If this input section is part of a group of sections sharing one
2337 stub section, then use the id of the first section in the group.
2338 Stub names need to include a section id, as there may well be
2339 more than one stub used to reach, say, printf, and we need to
2340 distinguish between them. */
2341 id_sec = htab->stub_group[input_section->id].link_sec;
2342
2343 if (h != NULL && h->stub_cache != NULL
2344 && h->stub_cache->h == h && h->stub_cache->id_sec == id_sec)
2345 {
2346 stub_entry = h->stub_cache;
2347 }
2348 else
2349 {
2350 char *stub_name;
2351
2352 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, h, rel);
2353 if (stub_name == NULL)
2354 return NULL;
2355
2356 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table,
2357 stub_name, FALSE, FALSE);
2358 if (h != NULL)
2359 h->stub_cache = stub_entry;
2360
2361 free (stub_name);
2362 }
2363
2364 return stub_entry;
2365 }
2366
2367
2368 /* Create a stub section. */
2369
2370 static asection *
2371 _bfd_aarch64_create_stub_section (asection *section,
2372 struct elf_aarch64_link_hash_table *htab)
2373 {
2374 size_t namelen;
2375 bfd_size_type len;
2376 char *s_name;
2377
2378 namelen = strlen (section->name);
2379 len = namelen + sizeof (STUB_SUFFIX);
2380 s_name = bfd_alloc (htab->stub_bfd, len);
2381 if (s_name == NULL)
2382 return NULL;
2383
2384 memcpy (s_name, section->name, namelen);
2385 memcpy (s_name + namelen, STUB_SUFFIX, sizeof (STUB_SUFFIX));
2386 return (*htab->add_stub_section) (s_name, section);
2387 }
2388
2389
2390 /* Find or create a stub section for a link section.
2391
2392 Find or create the stub section used to collect stubs attached to
2393 the specified link section. */
2394
2395 static asection *
2396 _bfd_aarch64_get_stub_for_link_section (asection *link_section,
2397 struct elf_aarch64_link_hash_table *htab)
2398 {
2399 if (htab->stub_group[link_section->id].stub_sec == NULL)
2400 htab->stub_group[link_section->id].stub_sec
2401 = _bfd_aarch64_create_stub_section (link_section, htab);
2402 return htab->stub_group[link_section->id].stub_sec;
2403 }
2404
2405
2406 /* Find or create a stub section in the stub group for an input
2407 section. */
2408
2409 static asection *
2410 _bfd_aarch64_create_or_find_stub_sec (asection *section,
2411 struct elf_aarch64_link_hash_table *htab)
2412 {
2413 asection *link_sec = htab->stub_group[section->id].link_sec;
2414 return _bfd_aarch64_get_stub_for_link_section (link_sec, htab);
2415 }
2416
2417
2418 /* Add a new stub entry in the stub group associated with an input
2419 section to the stub hash. Not all fields of the new stub entry are
2420 initialised. */
2421
2422 static struct elf_aarch64_stub_hash_entry *
2423 _bfd_aarch64_add_stub_entry_in_group (const char *stub_name,
2424 asection *section,
2425 struct elf_aarch64_link_hash_table *htab)
2426 {
2427 asection *link_sec;
2428 asection *stub_sec;
2429 struct elf_aarch64_stub_hash_entry *stub_entry;
2430
2431 link_sec = htab->stub_group[section->id].link_sec;
2432 stub_sec = _bfd_aarch64_create_or_find_stub_sec (section, htab);
2433
2434 /* Enter this entry into the linker stub hash table. */
2435 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2436 TRUE, FALSE);
2437 if (stub_entry == NULL)
2438 {
2439 (*_bfd_error_handler) (_("%s: cannot create stub entry %s"),
2440 section->owner, stub_name);
2441 return NULL;
2442 }
2443
2444 stub_entry->stub_sec = stub_sec;
2445 stub_entry->stub_offset = 0;
2446 stub_entry->id_sec = link_sec;
2447
2448 return stub_entry;
2449 }
2450
2451 /* Add a new stub entry in the final stub section to the stub hash.
2452 Not all fields of the new stub entry are initialised. */
2453
2454 static struct elf_aarch64_stub_hash_entry *
2455 _bfd_aarch64_add_stub_entry_after (const char *stub_name,
2456 asection *link_section,
2457 struct elf_aarch64_link_hash_table *htab)
2458 {
2459 asection *stub_sec;
2460 struct elf_aarch64_stub_hash_entry *stub_entry;
2461
2462 stub_sec = _bfd_aarch64_get_stub_for_link_section (link_section, htab);
2463 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2464 TRUE, FALSE);
2465 if (stub_entry == NULL)
2466 {
2467 (*_bfd_error_handler) (_("cannot create stub entry %s"), stub_name);
2468 return NULL;
2469 }
2470
2471 stub_entry->stub_sec = stub_sec;
2472 stub_entry->stub_offset = 0;
2473 stub_entry->id_sec = link_section;
2474
2475 return stub_entry;
2476 }
2477
2478
2479 static bfd_boolean
2480 aarch64_build_one_stub (struct bfd_hash_entry *gen_entry,
2481 void *in_arg ATTRIBUTE_UNUSED)
2482 {
2483 struct elf_aarch64_stub_hash_entry *stub_entry;
2484 asection *stub_sec;
2485 bfd *stub_bfd;
2486 bfd_byte *loc;
2487 bfd_vma sym_value;
2488 bfd_vma veneered_insn_loc;
2489 bfd_vma veneer_entry_loc;
2490 bfd_signed_vma branch_offset = 0;
2491 unsigned int template_size;
2492 const uint32_t *template;
2493 unsigned int i;
2494
2495 /* Massage our args to the form they really have. */
2496 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2497
2498 stub_sec = stub_entry->stub_sec;
2499
2500 /* Make a note of the offset within the stubs for this entry. */
2501 stub_entry->stub_offset = stub_sec->size;
2502 loc = stub_sec->contents + stub_entry->stub_offset;
2503
2504 stub_bfd = stub_sec->owner;
2505
2506 /* This is the address of the stub destination. */
2507 sym_value = (stub_entry->target_value
2508 + stub_entry->target_section->output_offset
2509 + stub_entry->target_section->output_section->vma);
2510
2511 if (stub_entry->stub_type == aarch64_stub_long_branch)
2512 {
2513 bfd_vma place = (stub_entry->stub_offset + stub_sec->output_section->vma
2514 + stub_sec->output_offset);
2515
2516 /* See if we can relax the stub. */
2517 if (aarch64_valid_for_adrp_p (sym_value, place))
2518 stub_entry->stub_type = aarch64_select_branch_stub (sym_value, place);
2519 }
2520
2521 switch (stub_entry->stub_type)
2522 {
2523 case aarch64_stub_adrp_branch:
2524 template = aarch64_adrp_branch_stub;
2525 template_size = sizeof (aarch64_adrp_branch_stub);
2526 break;
2527 case aarch64_stub_long_branch:
2528 template = aarch64_long_branch_stub;
2529 template_size = sizeof (aarch64_long_branch_stub);
2530 break;
2531 case aarch64_stub_erratum_835769_veneer:
2532 template = aarch64_erratum_835769_stub;
2533 template_size = sizeof (aarch64_erratum_835769_stub);
2534 break;
2535 case aarch64_stub_erratum_843419_veneer:
2536 template = aarch64_erratum_843419_stub;
2537 template_size = sizeof (aarch64_erratum_843419_stub);
2538 break;
2539 default:
2540 abort ();
2541 }
2542
2543 for (i = 0; i < (template_size / sizeof template[0]); i++)
2544 {
2545 bfd_putl32 (template[i], loc);
2546 loc += 4;
2547 }
2548
2549 template_size = (template_size + 7) & ~7;
2550 stub_sec->size += template_size;
2551
2552 switch (stub_entry->stub_type)
2553 {
2554 case aarch64_stub_adrp_branch:
2555 if (aarch64_relocate (AARCH64_R (ADR_PREL_PG_HI21), stub_bfd, stub_sec,
2556 stub_entry->stub_offset, sym_value))
2557 /* The stub would not have been relaxed if the offset was out
2558 of range. */
2559 BFD_FAIL ();
2560
2561 if (aarch64_relocate (AARCH64_R (ADD_ABS_LO12_NC), stub_bfd, stub_sec,
2562 stub_entry->stub_offset + 4, sym_value))
2563 BFD_FAIL ();
2564 break;
2565
2566 case aarch64_stub_long_branch:
2567 /* We want the value relative to the address 12 bytes back from the
2568 value itself. */
2569 if (aarch64_relocate (AARCH64_R (PRELNN), stub_bfd, stub_sec,
2570 stub_entry->stub_offset + 16, sym_value + 12))
2571 BFD_FAIL ();
2572 break;
2573
2574 case aarch64_stub_erratum_835769_veneer:
2575 veneered_insn_loc = stub_entry->target_section->output_section->vma
2576 + stub_entry->target_section->output_offset
2577 + stub_entry->target_value;
2578 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
2579 + stub_entry->stub_sec->output_offset
2580 + stub_entry->stub_offset;
2581 branch_offset = veneered_insn_loc - veneer_entry_loc;
2582 branch_offset >>= 2;
2583 branch_offset &= 0x3ffffff;
2584 bfd_putl32 (stub_entry->veneered_insn,
2585 stub_sec->contents + stub_entry->stub_offset);
2586 bfd_putl32 (template[1] | branch_offset,
2587 stub_sec->contents + stub_entry->stub_offset + 4);
2588 break;
2589
2590 case aarch64_stub_erratum_843419_veneer:
2591 if (aarch64_relocate (AARCH64_R (JUMP26), stub_bfd, stub_sec,
2592 stub_entry->stub_offset + 4, sym_value + 4))
2593 BFD_FAIL ();
2594 break;
2595
2596 default:
2597 abort ();
2598 }
2599
2600 return TRUE;
2601 }
2602
2603 /* As above, but don't actually build the stub. Just bump offset so
2604 we know stub section sizes. */
2605
2606 static bfd_boolean
2607 aarch64_size_one_stub (struct bfd_hash_entry *gen_entry,
2608 void *in_arg ATTRIBUTE_UNUSED)
2609 {
2610 struct elf_aarch64_stub_hash_entry *stub_entry;
2611 int size;
2612
2613 /* Massage our args to the form they really have. */
2614 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2615
2616 switch (stub_entry->stub_type)
2617 {
2618 case aarch64_stub_adrp_branch:
2619 size = sizeof (aarch64_adrp_branch_stub);
2620 break;
2621 case aarch64_stub_long_branch:
2622 size = sizeof (aarch64_long_branch_stub);
2623 break;
2624 case aarch64_stub_erratum_835769_veneer:
2625 size = sizeof (aarch64_erratum_835769_stub);
2626 break;
2627 case aarch64_stub_erratum_843419_veneer:
2628 size = sizeof (aarch64_erratum_843419_stub);
2629 break;
2630 default:
2631 abort ();
2632 }
2633
2634 size = (size + 7) & ~7;
2635 stub_entry->stub_sec->size += size;
2636 return TRUE;
2637 }
2638
2639 /* External entry points for sizing and building linker stubs. */
2640
2641 /* Set up various things so that we can make a list of input sections
2642 for each output section included in the link. Returns -1 on error,
2643 0 when no stubs will be needed, and 1 on success. */
2644
2645 int
2646 elfNN_aarch64_setup_section_lists (bfd *output_bfd,
2647 struct bfd_link_info *info)
2648 {
2649 bfd *input_bfd;
2650 unsigned int bfd_count;
2651 int top_id, top_index;
2652 asection *section;
2653 asection **input_list, **list;
2654 bfd_size_type amt;
2655 struct elf_aarch64_link_hash_table *htab =
2656 elf_aarch64_hash_table (info);
2657
2658 if (!is_elf_hash_table (htab))
2659 return 0;
2660
2661 /* Count the number of input BFDs and find the top input section id. */
2662 for (input_bfd = info->input_bfds, bfd_count = 0, top_id = 0;
2663 input_bfd != NULL; input_bfd = input_bfd->link.next)
2664 {
2665 bfd_count += 1;
2666 for (section = input_bfd->sections;
2667 section != NULL; section = section->next)
2668 {
2669 if (top_id < section->id)
2670 top_id = section->id;
2671 }
2672 }
2673 htab->bfd_count = bfd_count;
2674
2675 amt = sizeof (struct map_stub) * (top_id + 1);
2676 htab->stub_group = bfd_zmalloc (amt);
2677 if (htab->stub_group == NULL)
2678 return -1;
2679
2680 /* We can't use output_bfd->section_count here to find the top output
2681 section index as some sections may have been removed, and
2682 _bfd_strip_section_from_output doesn't renumber the indices. */
2683 for (section = output_bfd->sections, top_index = 0;
2684 section != NULL; section = section->next)
2685 {
2686 if (top_index < section->index)
2687 top_index = section->index;
2688 }
2689
2690 htab->top_index = top_index;
2691 amt = sizeof (asection *) * (top_index + 1);
2692 input_list = bfd_malloc (amt);
2693 htab->input_list = input_list;
2694 if (input_list == NULL)
2695 return -1;
2696
2697 /* For sections we aren't interested in, mark their entries with a
2698 value we can check later. */
2699 list = input_list + top_index;
2700 do
2701 *list = bfd_abs_section_ptr;
2702 while (list-- != input_list);
2703
2704 for (section = output_bfd->sections;
2705 section != NULL; section = section->next)
2706 {
2707 if ((section->flags & SEC_CODE) != 0)
2708 input_list[section->index] = NULL;
2709 }
2710
2711 return 1;
2712 }
2713
2714 /* Used by elfNN_aarch64_next_input_section and group_sections. */
2715 #define PREV_SEC(sec) (htab->stub_group[(sec)->id].link_sec)
2716
2717 /* The linker repeatedly calls this function for each input section,
2718 in the order that input sections are linked into output sections.
2719 Build lists of input sections to determine groupings between which
2720 we may insert linker stubs. */
2721
2722 void
2723 elfNN_aarch64_next_input_section (struct bfd_link_info *info, asection *isec)
2724 {
2725 struct elf_aarch64_link_hash_table *htab =
2726 elf_aarch64_hash_table (info);
2727
2728 if (isec->output_section->index <= htab->top_index)
2729 {
2730 asection **list = htab->input_list + isec->output_section->index;
2731
2732 if (*list != bfd_abs_section_ptr)
2733 {
2734 /* Steal the link_sec pointer for our list. */
2735 /* This happens to make the list in reverse order,
2736 which is what we want. */
2737 PREV_SEC (isec) = *list;
2738 *list = isec;
2739 }
2740 }
2741 }
2742
2743 /* See whether we can group stub sections together. Grouping stub
2744 sections may result in fewer stubs. More importantly, we need to
2745 put all .init* and .fini* stubs at the beginning of the .init or
2746 .fini output sections respectively, because glibc splits the
2747 _init and _fini functions into multiple parts. Putting a stub in
2748 the middle of a function is not a good idea. */
2749
2750 static void
2751 group_sections (struct elf_aarch64_link_hash_table *htab,
2752 bfd_size_type stub_group_size,
2753 bfd_boolean stubs_always_before_branch)
2754 {
2755 asection **list = htab->input_list + htab->top_index;
2756
2757 do
2758 {
2759 asection *tail = *list;
2760
2761 if (tail == bfd_abs_section_ptr)
2762 continue;
2763
2764 while (tail != NULL)
2765 {
2766 asection *curr;
2767 asection *prev;
2768 bfd_size_type total;
2769
2770 curr = tail;
2771 total = tail->size;
2772 while ((prev = PREV_SEC (curr)) != NULL
2773 && ((total += curr->output_offset - prev->output_offset)
2774 < stub_group_size))
2775 curr = prev;
2776
2777 /* OK, the size from the start of CURR to the end is less
2778 than stub_group_size and thus can be handled by one stub
2779 section. (Or the tail section is itself larger than
2780 stub_group_size, in which case we may be toast.)
2781 We should really be keeping track of the total size of
2782 stubs added here, as stubs contribute to the final output
2783 section size. */
2784 do
2785 {
2786 prev = PREV_SEC (tail);
2787 /* Set up this stub group. */
2788 htab->stub_group[tail->id].link_sec = curr;
2789 }
2790 while (tail != curr && (tail = prev) != NULL);
2791
2792 /* But wait, there's more! Input sections up to stub_group_size
2793 bytes before the stub section can be handled by it too. */
2794 if (!stubs_always_before_branch)
2795 {
2796 total = 0;
2797 while (prev != NULL
2798 && ((total += tail->output_offset - prev->output_offset)
2799 < stub_group_size))
2800 {
2801 tail = prev;
2802 prev = PREV_SEC (tail);
2803 htab->stub_group[tail->id].link_sec = curr;
2804 }
2805 }
2806 tail = prev;
2807 }
2808 }
2809 while (list-- != htab->input_list);
2810
2811 free (htab->input_list);
2812 }
2813
2814 #undef PREV_SEC
2815
2816 #define AARCH64_BITS(x, pos, n) (((x) >> (pos)) & ((1 << (n)) - 1))
2817
2818 #define AARCH64_RT(insn) AARCH64_BITS (insn, 0, 5)
2819 #define AARCH64_RT2(insn) AARCH64_BITS (insn, 10, 5)
2820 #define AARCH64_RA(insn) AARCH64_BITS (insn, 10, 5)
2821 #define AARCH64_RD(insn) AARCH64_BITS (insn, 0, 5)
2822 #define AARCH64_RN(insn) AARCH64_BITS (insn, 5, 5)
2823 #define AARCH64_RM(insn) AARCH64_BITS (insn, 16, 5)
2824
2825 #define AARCH64_MAC(insn) (((insn) & 0xff000000) == 0x9b000000)
2826 #define AARCH64_BIT(insn, n) AARCH64_BITS (insn, n, 1)
2827 #define AARCH64_OP31(insn) AARCH64_BITS (insn, 21, 3)
2828 #define AARCH64_ZR 0x1f
2829
2830 /* All ld/st ops. See C4-182 of the ARM ARM. The encoding space for
2831 LD_PCREL, LDST_RO, LDST_UI and LDST_UIMM covers prefetch ops. */
2832
2833 #define AARCH64_LD(insn) (AARCH64_BIT (insn, 22) == 1)
2834 #define AARCH64_LDST(insn) (((insn) & 0x0a000000) == 0x08000000)
2835 #define AARCH64_LDST_EX(insn) (((insn) & 0x3f000000) == 0x08000000)
2836 #define AARCH64_LDST_PCREL(insn) (((insn) & 0x3b000000) == 0x18000000)
2837 #define AARCH64_LDST_NAP(insn) (((insn) & 0x3b800000) == 0x28000000)
2838 #define AARCH64_LDSTP_PI(insn) (((insn) & 0x3b800000) == 0x28800000)
2839 #define AARCH64_LDSTP_O(insn) (((insn) & 0x3b800000) == 0x29000000)
2840 #define AARCH64_LDSTP_PRE(insn) (((insn) & 0x3b800000) == 0x29800000)
2841 #define AARCH64_LDST_UI(insn) (((insn) & 0x3b200c00) == 0x38000000)
2842 #define AARCH64_LDST_PIIMM(insn) (((insn) & 0x3b200c00) == 0x38000400)
2843 #define AARCH64_LDST_U(insn) (((insn) & 0x3b200c00) == 0x38000800)
2844 #define AARCH64_LDST_PREIMM(insn) (((insn) & 0x3b200c00) == 0x38000c00)
2845 #define AARCH64_LDST_RO(insn) (((insn) & 0x3b200c00) == 0x38200800)
2846 #define AARCH64_LDST_UIMM(insn) (((insn) & 0x3b000000) == 0x39000000)
2847 #define AARCH64_LDST_SIMD_M(insn) (((insn) & 0xbfbf0000) == 0x0c000000)
2848 #define AARCH64_LDST_SIMD_M_PI(insn) (((insn) & 0xbfa00000) == 0x0c800000)
2849 #define AARCH64_LDST_SIMD_S(insn) (((insn) & 0xbf9f0000) == 0x0d000000)
2850 #define AARCH64_LDST_SIMD_S_PI(insn) (((insn) & 0xbf800000) == 0x0d800000)
2851
2852 /* Classify an INSN if it is indeed a load/store.
2853
2854 Return TRUE if INSN is a LD/ST instruction, otherwise return FALSE.
2855
2856 For scalar LD/ST instructions PAIR is FALSE, RT is returned and RT2
2857 is set equal to RT.
2858
2859 For LD/ST pair instructions PAIR is TRUE, RT and RT2 are returned.
2860
2861 */
2862
2863 static bfd_boolean
2864 aarch64_mem_op_p (uint32_t insn, unsigned int *rt, unsigned int *rt2,
2865 bfd_boolean *pair, bfd_boolean *load)
2866 {
2867 uint32_t opcode;
2868 unsigned int r;
2869 uint32_t opc = 0;
2870 uint32_t v = 0;
2871 uint32_t opc_v = 0;
2872
2873 /* Bail out quickly if INSN doesn't fall into the load-store
2874 encoding space. */
2875 if (!AARCH64_LDST (insn))
2876 return FALSE;
2877
2878 *pair = FALSE;
2879 *load = FALSE;
2880 if (AARCH64_LDST_EX (insn))
2881 {
2882 *rt = AARCH64_RT (insn);
2883 *rt2 = *rt;
2884 if (AARCH64_BIT (insn, 21) == 1)
2885 {
2886 *pair = TRUE;
2887 *rt2 = AARCH64_RT2 (insn);
2888 }
2889 *load = AARCH64_LD (insn);
2890 return TRUE;
2891 }
2892 else if (AARCH64_LDST_NAP (insn)
2893 || AARCH64_LDSTP_PI (insn)
2894 || AARCH64_LDSTP_O (insn)
2895 || AARCH64_LDSTP_PRE (insn))
2896 {
2897 *pair = TRUE;
2898 *rt = AARCH64_RT (insn);
2899 *rt2 = AARCH64_RT2 (insn);
2900 *load = AARCH64_LD (insn);
2901 return TRUE;
2902 }
2903 else if (AARCH64_LDST_PCREL (insn)
2904 || AARCH64_LDST_UI (insn)
2905 || AARCH64_LDST_PIIMM (insn)
2906 || AARCH64_LDST_U (insn)
2907 || AARCH64_LDST_PREIMM (insn)
2908 || AARCH64_LDST_RO (insn)
2909 || AARCH64_LDST_UIMM (insn))
2910 {
2911 *rt = AARCH64_RT (insn);
2912 *rt2 = *rt;
2913 if (AARCH64_LDST_PCREL (insn))
2914 *load = TRUE;
2915 opc = AARCH64_BITS (insn, 22, 2);
2916 v = AARCH64_BIT (insn, 26);
2917 opc_v = opc | (v << 2);
2918 *load = (opc_v == 1 || opc_v == 2 || opc_v == 3
2919 || opc_v == 5 || opc_v == 7);
2920 return TRUE;
2921 }
2922 else if (AARCH64_LDST_SIMD_M (insn)
2923 || AARCH64_LDST_SIMD_M_PI (insn))
2924 {
2925 *rt = AARCH64_RT (insn);
2926 *load = AARCH64_BIT (insn, 22);
2927 opcode = (insn >> 12) & 0xf;
2928 switch (opcode)
2929 {
2930 case 0:
2931 case 2:
2932 *rt2 = *rt + 3;
2933 break;
2934
2935 case 4:
2936 case 6:
2937 *rt2 = *rt + 2;
2938 break;
2939
2940 case 7:
2941 *rt2 = *rt;
2942 break;
2943
2944 case 8:
2945 case 10:
2946 *rt2 = *rt + 1;
2947 break;
2948
2949 default:
2950 return FALSE;
2951 }
2952 return TRUE;
2953 }
2954 else if (AARCH64_LDST_SIMD_S (insn)
2955 || AARCH64_LDST_SIMD_S_PI (insn))
2956 {
2957 *rt = AARCH64_RT (insn);
2958 r = (insn >> 21) & 1;
2959 *load = AARCH64_BIT (insn, 22);
2960 opcode = (insn >> 13) & 0x7;
2961 switch (opcode)
2962 {
2963 case 0:
2964 case 2:
2965 case 4:
2966 *rt2 = *rt + r;
2967 break;
2968
2969 case 1:
2970 case 3:
2971 case 5:
2972 *rt2 = *rt + (r == 0 ? 2 : 3);
2973 break;
2974
2975 case 6:
2976 *rt2 = *rt + r;
2977 break;
2978
2979 case 7:
2980 *rt2 = *rt + (r == 0 ? 2 : 3);
2981 break;
2982
2983 default:
2984 return FALSE;
2985 }
2986 return TRUE;
2987 }
2988
2989 return FALSE;
2990 }
2991
2992 /* Return TRUE if INSN is multiply-accumulate. */
2993
2994 static bfd_boolean
2995 aarch64_mlxl_p (uint32_t insn)
2996 {
2997 uint32_t op31 = AARCH64_OP31 (insn);
2998
2999 if (AARCH64_MAC (insn)
3000 && (op31 == 0 || op31 == 1 || op31 == 5)
3001 /* Exclude MUL instructions which are encoded as a multiple accumulate
3002 with RA = XZR. */
3003 && AARCH64_RA (insn) != AARCH64_ZR)
3004 return TRUE;
3005
3006 return FALSE;
3007 }
3008
3009 /* Some early revisions of the Cortex-A53 have an erratum (835769) whereby
3010 it is possible for a 64-bit multiply-accumulate instruction to generate an
3011 incorrect result. The details are quite complex and hard to
3012 determine statically, since branches in the code may exist in some
3013 circumstances, but all cases end with a memory (load, store, or
3014 prefetch) instruction followed immediately by the multiply-accumulate
3015 operation. We employ a linker patching technique, by moving the potentially
3016 affected multiply-accumulate instruction into a patch region and replacing
3017 the original instruction with a branch to the patch. This function checks
3018 if INSN_1 is the memory operation followed by a multiply-accumulate
3019 operation (INSN_2). Return TRUE if an erratum sequence is found, FALSE
3020 if INSN_1 and INSN_2 are safe. */
3021
3022 static bfd_boolean
3023 aarch64_erratum_sequence (uint32_t insn_1, uint32_t insn_2)
3024 {
3025 uint32_t rt;
3026 uint32_t rt2;
3027 uint32_t rn;
3028 uint32_t rm;
3029 uint32_t ra;
3030 bfd_boolean pair;
3031 bfd_boolean load;
3032
3033 if (aarch64_mlxl_p (insn_2)
3034 && aarch64_mem_op_p (insn_1, &rt, &rt2, &pair, &load))
3035 {
3036 /* Any SIMD memory op is independent of the subsequent MLA
3037 by definition of the erratum. */
3038 if (AARCH64_BIT (insn_1, 26))
3039 return TRUE;
3040
3041 /* If not SIMD, check for integer memory ops and MLA relationship. */
3042 rn = AARCH64_RN (insn_2);
3043 ra = AARCH64_RA (insn_2);
3044 rm = AARCH64_RM (insn_2);
3045
3046 /* If this is a load and there's a true (RAW) dependency, we are safe
3047 and this is not an erratum sequence. */
3048 if (load &&
3049 (rt == rn || rt == rm || rt == ra
3050 || (pair && (rt2 == rn || rt2 == rm || rt2 == ra))))
3051 return FALSE;
3052
3053 /* We conservatively put out stubs for all other cases (including
3054 writebacks). */
3055 return TRUE;
3056 }
3057
3058 return FALSE;
3059 }
3060
3061 /* Used to order a list of mapping symbols by address. */
3062
3063 static int
3064 elf_aarch64_compare_mapping (const void *a, const void *b)
3065 {
3066 const elf_aarch64_section_map *amap = (const elf_aarch64_section_map *) a;
3067 const elf_aarch64_section_map *bmap = (const elf_aarch64_section_map *) b;
3068
3069 if (amap->vma > bmap->vma)
3070 return 1;
3071 else if (amap->vma < bmap->vma)
3072 return -1;
3073 else if (amap->type > bmap->type)
3074 /* Ensure results do not depend on the host qsort for objects with
3075 multiple mapping symbols at the same address by sorting on type
3076 after vma. */
3077 return 1;
3078 else if (amap->type < bmap->type)
3079 return -1;
3080 else
3081 return 0;
3082 }
3083
3084
3085 static char *
3086 _bfd_aarch64_erratum_835769_stub_name (unsigned num_fixes)
3087 {
3088 char *stub_name = (char *) bfd_malloc
3089 (strlen ("__erratum_835769_veneer_") + 16);
3090 if (stub_name != NULL)
    sprintf (stub_name, "__erratum_835769_veneer_%d", num_fixes);
3091 return stub_name;
3092 }
3093
3094 /* Scan for Cortex-A53 erratum 835769 sequence.
3095
3096 Return TRUE on a successful scan, FALSE on abnormal termination. */
3097
3098 static bfd_boolean
3099 _bfd_aarch64_erratum_835769_scan (bfd *input_bfd,
3100 struct bfd_link_info *info,
3101 unsigned int *num_fixes_p)
3102 {
3103 asection *section;
3104 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3105 unsigned int num_fixes = *num_fixes_p;
3106
3107 if (htab == NULL)
3108 return TRUE;
3109
3110 for (section = input_bfd->sections;
3111 section != NULL;
3112 section = section->next)
3113 {
3114 bfd_byte *contents = NULL;
3115 struct _aarch64_elf_section_data *sec_data;
3116 unsigned int span;
3117
3118 if (elf_section_type (section) != SHT_PROGBITS
3119 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3120 || (section->flags & SEC_EXCLUDE) != 0
3121 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3122 || (section->output_section == bfd_abs_section_ptr))
3123 continue;
3124
3125 if (elf_section_data (section)->this_hdr.contents != NULL)
3126 contents = elf_section_data (section)->this_hdr.contents;
3127 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3128 return FALSE;
3129
3130 sec_data = elf_aarch64_section_data (section);
3131
3132 qsort (sec_data->map, sec_data->mapcount,
3133 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3134
3135 for (span = 0; span < sec_data->mapcount; span++)
3136 {
3137 unsigned int span_start = sec_data->map[span].vma;
3138 unsigned int span_end = ((span == sec_data->mapcount - 1)
3139 ? sec_data->map[0].vma + section->size
3140 : sec_data->map[span + 1].vma);
3141 unsigned int i;
3142 char span_type = sec_data->map[span].type;
3143
3144 if (span_type == 'd')
3145 continue;
3146
3147 for (i = span_start; i + 4 < span_end; i += 4)
3148 {
3149 uint32_t insn_1 = bfd_getl32 (contents + i);
3150 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3151
3152 if (aarch64_erratum_sequence (insn_1, insn_2))
3153 {
3154 struct elf_aarch64_stub_hash_entry *stub_entry;
3155 char *stub_name = _bfd_aarch64_erratum_835769_stub_name (num_fixes);
3156 if (! stub_name)
3157 return FALSE;
3158
3159 stub_entry = _bfd_aarch64_add_stub_entry_in_group (stub_name,
3160 section,
3161 htab);
3162 if (! stub_entry)
3163 return FALSE;
3164
3165 stub_entry->stub_type = aarch64_stub_erratum_835769_veneer;
3166 stub_entry->target_section = section;
3167 stub_entry->target_value = i + 4;
3168 stub_entry->veneered_insn = insn_2;
3169 stub_entry->output_name = stub_name;
3170 num_fixes++;
3171 }
3172 }
3173 }
3174 if (elf_section_data (section)->this_hdr.contents == NULL)
3175 free (contents);
3176 }
3177
3178 *num_fixes_p = num_fixes;
3179
3180 return TRUE;
3181 }
3182
3183
3184 /* Test if instruction INSN is ADRP. */
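/* The mask 0x9f000000 keeps bit 31 plus bits 28-24 of the PC-relative
   addressing group; ADRP matches 0x90000000 (bit 31 set), whereas ADR
   would match 0x10000000.  */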
3185
3186 static bfd_boolean
3187 _bfd_aarch64_adrp_p (uint32_t insn)
3188 {
3189 return ((insn & 0x9f000000) == 0x90000000);
3190 }
3191
3192
3193 /* Helper predicate to look for cortex-a53 erratum 843419 sequence 1. */
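/* A sketch of the shape being matched; the registers and symbol below
   are only illustrative:

     adrp x0, sym                 <- INSN_1
     stp  x1, x2, [x3]            <- INSN_2: a load/store, but not a load pair
     ldr  x4, [x0, #:lo12:sym]    <- INSN_3: unsigned-immediate load/store
                                     whose base is the ADRP destination.  */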
3194
3195 static bfd_boolean
3196 _bfd_aarch64_erratum_843419_sequence_p (uint32_t insn_1, uint32_t insn_2,
3197 uint32_t insn_3)
3198 {
3199 uint32_t rt;
3200 uint32_t rt2;
3201 bfd_boolean pair;
3202 bfd_boolean load;
3203
3204 return (aarch64_mem_op_p (insn_2, &rt, &rt2, &pair, &load)
3205 && (!pair
3206 || (pair && !load))
3207 && AARCH64_LDST_UIMM (insn_3)
3208 && AARCH64_RN (insn_3) == AARCH64_RD (insn_1));
3209 }
3210
3211
3212 /* Test for the presence of Cortex-A53 erratum 843419 instruction sequence.
3213
3214 Return TRUE if section CONTENTS at offset I contains one of the
3215 erratum 843419 sequences, otherwise return FALSE. If a sequence is
3216 seen, set P_VENEER_I to the offset of the final LOAD/STORE
3217 instruction in the sequence.
3218 */
3219
3220 static bfd_boolean
3221 _bfd_aarch64_erratum_843419_p (bfd_byte *contents, bfd_vma vma,
3222 bfd_vma i, bfd_vma span_end,
3223 bfd_vma *p_veneer_i)
3224 {
3225 uint32_t insn_1 = bfd_getl32 (contents + i);
3226
3227 if (!_bfd_aarch64_adrp_p (insn_1))
3228 return FALSE;
3229
3230 if (span_end < i + 12)
3231 return FALSE;
3232
3233 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3234 uint32_t insn_3 = bfd_getl32 (contents + i + 8);
3235
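/* Erratum 843419 can only trigger when the ADRP sits in one of the
   last two instruction slots of a 4KiB page, i.e. at page offset
   0xff8 or 0xffc.  */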
3236 if ((vma & 0xfff) != 0xff8 && (vma & 0xfff) != 0xffc)
3237 return FALSE;
3238
3239 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_3))
3240 {
3241 *p_veneer_i = i + 8;
3242 return TRUE;
3243 }
3244
3245 if (span_end < i + 16)
3246 return FALSE;
3247
3248 uint32_t insn_4 = bfd_getl32 (contents + i + 12);
3249
3250 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_4))
3251 {
3252 *p_veneer_i = i + 12;
3253 return TRUE;
3254 }
3255
3256 return FALSE;
3257 }
3258
3259
3260 /* Resize all stub sections. */
3261
3262 static void
3263 _bfd_aarch64_resize_stubs (struct elf_aarch64_link_hash_table *htab)
3264 {
3265 asection *section;
3266
3267 /* OK, we've added some stubs. Find out the new size of the
3268 stub sections. */
3269 for (section = htab->stub_bfd->sections;
3270 section != NULL; section = section->next)
3271 {
3272 /* Ignore non-stub sections. */
3273 if (!strstr (section->name, STUB_SUFFIX))
3274 continue;
3275 section->size = 0;
3276 }
3277
3278 bfd_hash_traverse (&htab->stub_hash_table, aarch64_size_one_stub, htab);
3279
3280 for (section = htab->stub_bfd->sections;
3281 section != NULL; section = section->next)
3282 {
3283 if (!strstr (section->name, STUB_SUFFIX))
3284 continue;
3285
3286 if (section->size)
3287 section->size += 4;
3288
3289 /* Ensure all stub sections have a size which is a multiple of
3290 4096. This is important in order to ensure that the insertion
3291 of stub sections does not in itself move existing code around
3292 in such a way that new errata sequences are created. */
3293 if (htab->fix_erratum_843419)
3294 if (section->size)
3295 section->size = BFD_ALIGN (section->size, 0x1000);
3296 }
3297 }
3298
3299
3300 /* Construct an erratum 843419 workaround stub name.
3301 */
3302
3303 static char *
3304 _bfd_aarch64_erratum_843419_stub_name (asection *input_section,
3305 bfd_vma offset)
3306 {
3307 const bfd_size_type len = 8 + 4 + 1 + 8 + 1 + 16 + 1;
3308 char *stub_name = bfd_malloc (len);
3309
3310 if (stub_name != NULL)
3311 snprintf (stub_name, len, "e843419@%04x_%08x_%" BFD_VMA_FMT "x",
3312 input_section->owner->id,
3313 input_section->id,
3314 offset);
3315 return stub_name;
3316 }
3317
3318 /* Build a stub_entry structure describing an 843419 fixup.
3319
3320 The stub_entry constructed is populated with the bit pattern INSN
3321 of the instruction located at LDST_OFFSET within input SECTION.
3322
3323 Returns TRUE on success. */
3324
3325 static bfd_boolean
3326 _bfd_aarch64_erratum_843419_fixup (uint32_t insn,
3327 bfd_vma adrp_offset,
3328 bfd_vma ldst_offset,
3329 asection *section,
3330 struct bfd_link_info *info)
3331 {
3332 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3333 char *stub_name;
3334 struct elf_aarch64_stub_hash_entry *stub_entry;
3335
3336 stub_name = _bfd_aarch64_erratum_843419_stub_name (section, ldst_offset);
3337 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
3338 FALSE, FALSE);
3339 if (stub_entry)
3340 {
3341 free (stub_name);
3342 return TRUE;
3343 }
3344
3345 /* We always place an 843419 workaround veneer in the stub section
3346 attached to the input section in which an erratum sequence has
3347 been found. This ensures that later in the link process (in
3348 elfNN_aarch64_write_section) when we copy the veneered
3349 instruction from the input section into the stub section the
3350 copied instruction will have had any relocations applied to it.
3351 If we placed workaround veneers in any other stub section then we
3352 could not assume that all relocations have been processed on the
3353 corresponding input section at the point we output the stub
3354 section.
3355 */
3356
3357 stub_entry = _bfd_aarch64_add_stub_entry_after (stub_name, section, htab);
3358 if (stub_entry == NULL)
3359 {
3360 free (stub_name);
3361 return FALSE;
3362 }
3363
3364 stub_entry->adrp_offset = adrp_offset;
3365 stub_entry->target_value = ldst_offset;
3366 stub_entry->target_section = section;
3367 stub_entry->stub_type = aarch64_stub_erratum_843419_veneer;
3368 stub_entry->veneered_insn = insn;
3369 stub_entry->output_name = stub_name;
3370
3371 return TRUE;
3372 }
3373
3374
3375 /* Scan an input section looking for the signature of erratum 843419.
3376
3377 Scans input SECTION in INPUT_BFD looking for erratum 843419
3378 signatures; for each signature found, a stub_entry is created
3379 describing the location of the erratum for subsequent fixup.
3380
3381 Return TRUE on successful scan, FALSE on failure to scan.
3382 */
3383
3384 static bfd_boolean
3385 _bfd_aarch64_erratum_843419_scan (bfd *input_bfd, asection *section,
3386 struct bfd_link_info *info)
3387 {
3388 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3389
3390 if (htab == NULL)
3391 return TRUE;
3392
3393 if (elf_section_type (section) != SHT_PROGBITS
3394 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3395 || (section->flags & SEC_EXCLUDE) != 0
3396 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3397 || (section->output_section == bfd_abs_section_ptr))
3398 return TRUE;
3399
3400 do
3401 {
3402 bfd_byte *contents = NULL;
3403 struct _aarch64_elf_section_data *sec_data;
3404 unsigned int span;
3405
3406 if (elf_section_data (section)->this_hdr.contents != NULL)
3407 contents = elf_section_data (section)->this_hdr.contents;
3408 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3409 return FALSE;
3410
3411 sec_data = elf_aarch64_section_data (section);
3412
3413 qsort (sec_data->map, sec_data->mapcount,
3414 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3415
3416 for (span = 0; span < sec_data->mapcount; span++)
3417 {
3418 unsigned int span_start = sec_data->map[span].vma;
3419 unsigned int span_end = ((span == sec_data->mapcount - 1)
3420 ? sec_data->map[0].vma + section->size
3421 : sec_data->map[span + 1].vma);
3422 unsigned int i;
3423 char span_type = sec_data->map[span].type;
3424
3425 if (span_type == 'd')
3426 continue;
3427
3428 for (i = span_start; i + 8 < span_end; i += 4)
3429 {
3430 bfd_vma vma = (section->output_section->vma
3431 + section->output_offset
3432 + i);
3433 bfd_vma veneer_i;
3434
3435 if (_bfd_aarch64_erratum_843419_p
3436 (contents, vma, i, span_end, &veneer_i))
3437 {
3438 uint32_t insn = bfd_getl32 (contents + veneer_i);
3439
3440 if (!_bfd_aarch64_erratum_843419_fixup (insn, i, veneer_i,
3441 section, info))
3442 return FALSE;
3443 }
3444 }
3445 }
3446
3447 if (elf_section_data (section)->this_hdr.contents == NULL)
3448 free (contents);
3449 }
3450 while (0);
3451
3452 return TRUE;
3453 }
3454
3455
3456 /* Determine and set the size of the stub section for a final link.
3457
3458 The basic idea here is to examine all the relocations looking for
3459 PC-relative calls to a target that is unreachable with a "bl"
3460 instruction. */
3461
3462 bfd_boolean
3463 elfNN_aarch64_size_stubs (bfd *output_bfd,
3464 bfd *stub_bfd,
3465 struct bfd_link_info *info,
3466 bfd_signed_vma group_size,
3467 asection * (*add_stub_section) (const char *,
3468 asection *),
3469 void (*layout_sections_again) (void))
3470 {
3471 bfd_size_type stub_group_size;
3472 bfd_boolean stubs_always_before_branch;
3473 bfd_boolean stub_changed = FALSE;
3474 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3475 unsigned int num_erratum_835769_fixes = 0;
3476
3477 /* Propagate mach to stub bfd, because it may not have been
3478 finalized when we created stub_bfd. */
3479 bfd_set_arch_mach (stub_bfd, bfd_get_arch (output_bfd),
3480 bfd_get_mach (output_bfd));
3481
3482 /* Stash our params away. */
3483 htab->stub_bfd = stub_bfd;
3484 htab->add_stub_section = add_stub_section;
3485 htab->layout_sections_again = layout_sections_again;
3486 stubs_always_before_branch = group_size < 0;
3487 if (group_size < 0)
3488 stub_group_size = -group_size;
3489 else
3490 stub_group_size = group_size;
3491
3492 if (stub_group_size == 1)
3493 {
3494 /* Default values. */
3495 /* AArch64 branch range is +-128MB. The value used is 1MB less. */
3496 stub_group_size = 127 * 1024 * 1024;
3497 }
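
/* The 1MB of slack below the architectural +/-128MB limit is
   presumably there so that, once a group's stub section is appended
   after the group, branches from anywhere in the group can still
   reach stubs placed at its far end.  */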
3498
3499 group_sections (htab, stub_group_size, stubs_always_before_branch);
3500
3501 (*htab->layout_sections_again) ();
3502
3503 if (htab->fix_erratum_835769)
3504 {
3505 bfd *input_bfd;
3506
3507 for (input_bfd = info->input_bfds;
3508 input_bfd != NULL; input_bfd = input_bfd->link.next)
3509 if (!_bfd_aarch64_erratum_835769_scan (input_bfd, info,
3510 &num_erratum_835769_fixes))
3511 return FALSE;
3512
3513 _bfd_aarch64_resize_stubs (htab);
3514 (*htab->layout_sections_again) ();
3515 }
3516
3517 if (htab->fix_erratum_843419)
3518 {
3519 bfd *input_bfd;
3520
3521 for (input_bfd = info->input_bfds;
3522 input_bfd != NULL;
3523 input_bfd = input_bfd->link.next)
3524 {
3525 asection *section;
3526
3527 for (section = input_bfd->sections;
3528 section != NULL;
3529 section = section->next)
3530 if (!_bfd_aarch64_erratum_843419_scan (input_bfd, section, info))
3531 return FALSE;
3532 }
3533
3534 _bfd_aarch64_resize_stubs (htab);
3535 (*htab->layout_sections_again) ();
3536 }
3537
3538 while (1)
3539 {
3540 bfd *input_bfd;
3541
3542 for (input_bfd = info->input_bfds;
3543 input_bfd != NULL; input_bfd = input_bfd->link.next)
3544 {
3545 Elf_Internal_Shdr *symtab_hdr;
3546 asection *section;
3547 Elf_Internal_Sym *local_syms = NULL;
3548
3549 /* We'll need the symbol table in a second. */
3550 symtab_hdr = &elf_tdata (input_bfd)->symtab_hdr;
3551 if (symtab_hdr->sh_info == 0)
3552 continue;
3553
3554 /* Walk over each section attached to the input bfd. */
3555 for (section = input_bfd->sections;
3556 section != NULL; section = section->next)
3557 {
3558 Elf_Internal_Rela *internal_relocs, *irelaend, *irela;
3559
3560 /* If there aren't any relocs, then there's nothing more
3561 to do. */
3562 if ((section->flags & SEC_RELOC) == 0
3563 || section->reloc_count == 0
3564 || (section->flags & SEC_CODE) == 0)
3565 continue;
3566
3567 /* If this section is a link-once section that will be
3568 discarded, then don't create any stubs. */
3569 if (section->output_section == NULL
3570 || section->output_section->owner != output_bfd)
3571 continue;
3572
3573 /* Get the relocs. */
3574 internal_relocs
3575 = _bfd_elf_link_read_relocs (input_bfd, section, NULL,
3576 NULL, info->keep_memory);
3577 if (internal_relocs == NULL)
3578 goto error_ret_free_local;
3579
3580 /* Now examine each relocation. */
3581 irela = internal_relocs;
3582 irelaend = irela + section->reloc_count;
3583 for (; irela < irelaend; irela++)
3584 {
3585 unsigned int r_type, r_indx;
3586 enum elf_aarch64_stub_type stub_type;
3587 struct elf_aarch64_stub_hash_entry *stub_entry;
3588 asection *sym_sec;
3589 bfd_vma sym_value;
3590 bfd_vma destination;
3591 struct elf_aarch64_link_hash_entry *hash;
3592 const char *sym_name;
3593 char *stub_name;
3594 const asection *id_sec;
3595 unsigned char st_type;
3596 bfd_size_type len;
3597
3598 r_type = ELFNN_R_TYPE (irela->r_info);
3599 r_indx = ELFNN_R_SYM (irela->r_info);
3600
3601 if (r_type >= (unsigned int) R_AARCH64_end)
3602 {
3603 bfd_set_error (bfd_error_bad_value);
3604 error_ret_free_internal:
3605 if (elf_section_data (section)->relocs == NULL)
3606 free (internal_relocs);
3607 goto error_ret_free_local;
3608 }
3609
3610 /* Only look for stubs on unconditional branch and
3611 branch and link instructions. */
3612 if (r_type != (unsigned int) AARCH64_R (CALL26)
3613 && r_type != (unsigned int) AARCH64_R (JUMP26))
3614 continue;
3615
3616 /* Now determine the call target, its name, value,
3617 section. */
3618 sym_sec = NULL;
3619 sym_value = 0;
3620 destination = 0;
3621 hash = NULL;
3622 sym_name = NULL;
3623 if (r_indx < symtab_hdr->sh_info)
3624 {
3625 /* It's a local symbol. */
3626 Elf_Internal_Sym *sym;
3627 Elf_Internal_Shdr *hdr;
3628
3629 if (local_syms == NULL)
3630 {
3631 local_syms
3632 = (Elf_Internal_Sym *) symtab_hdr->contents;
3633 if (local_syms == NULL)
3634 local_syms
3635 = bfd_elf_get_elf_syms (input_bfd, symtab_hdr,
3636 symtab_hdr->sh_info, 0,
3637 NULL, NULL, NULL);
3638 if (local_syms == NULL)
3639 goto error_ret_free_internal;
3640 }
3641
3642 sym = local_syms + r_indx;
3643 hdr = elf_elfsections (input_bfd)[sym->st_shndx];
3644 sym_sec = hdr->bfd_section;
3645 if (!sym_sec)
3646 /* This is an undefined symbol. It can never
3647 be resolved. */
3648 continue;
3649
3650 if (ELF_ST_TYPE (sym->st_info) != STT_SECTION)
3651 sym_value = sym->st_value;
3652 destination = (sym_value + irela->r_addend
3653 + sym_sec->output_offset
3654 + sym_sec->output_section->vma);
3655 st_type = ELF_ST_TYPE (sym->st_info);
3656 sym_name
3657 = bfd_elf_string_from_elf_section (input_bfd,
3658 symtab_hdr->sh_link,
3659 sym->st_name);
3660 }
3661 else
3662 {
3663 int e_indx;
3664
3665 e_indx = r_indx - symtab_hdr->sh_info;
3666 hash = ((struct elf_aarch64_link_hash_entry *)
3667 elf_sym_hashes (input_bfd)[e_indx]);
3668
3669 while (hash->root.root.type == bfd_link_hash_indirect
3670 || hash->root.root.type == bfd_link_hash_warning)
3671 hash = ((struct elf_aarch64_link_hash_entry *)
3672 hash->root.root.u.i.link);
3673
3674 if (hash->root.root.type == bfd_link_hash_defined
3675 || hash->root.root.type == bfd_link_hash_defweak)
3676 {
3677 struct elf_aarch64_link_hash_table *globals =
3678 elf_aarch64_hash_table (info);
3679 sym_sec = hash->root.root.u.def.section;
3680 sym_value = hash->root.root.u.def.value;
3681 /* For a destination in a shared library,
3682 use the PLT stub as target address to
3683 decide whether a branch stub is
3684 needed. */
3685 if (globals->root.splt != NULL && hash != NULL
3686 && hash->root.plt.offset != (bfd_vma) - 1)
3687 {
3688 sym_sec = globals->root.splt;
3689 sym_value = hash->root.plt.offset;
3690 if (sym_sec->output_section != NULL)
3691 destination = (sym_value
3692 + sym_sec->output_offset
3693 + sym_sec->output_section->vma);
3695 }
3696 else if (sym_sec->output_section != NULL)
3697 destination = (sym_value + irela->r_addend
3698 + sym_sec->output_offset
3699 + sym_sec->output_section->vma);
3700 }
3701 else if (hash->root.root.type == bfd_link_hash_undefined
3702 || (hash->root.root.type
3703 == bfd_link_hash_undefweak))
3704 {
3705 /* For a shared library, use the PLT stub as
3706 the target address to decide whether a long
3707 branch stub is needed.
3708 For absolute code, such branches cannot be handled. */
3709 struct elf_aarch64_link_hash_table *globals =
3710 elf_aarch64_hash_table (info);
3711
3712 if (globals->root.splt != NULL && hash != NULL
3713 && hash->root.plt.offset != (bfd_vma) - 1)
3714 {
3715 sym_sec = globals->root.splt;
3716 sym_value = hash->root.plt.offset;
3717 if (sym_sec->output_section != NULL)
3718 destination = (sym_value
3719 + sym_sec->output_offset
3720 + sym_sec->output_section->vma);
3722 }
3723 else
3724 continue;
3725 }
3726 else
3727 {
3728 bfd_set_error (bfd_error_bad_value);
3729 goto error_ret_free_internal;
3730 }
3731 st_type = ELF_ST_TYPE (hash->root.type);
3732 sym_name = hash->root.root.root.string;
3733 }
3734
3735 /* Determine what (if any) linker stub is needed. */
3736 stub_type = aarch64_type_of_stub
3737 (info, section, irela, st_type, hash, destination);
3738 if (stub_type == aarch64_stub_none)
3739 continue;
3740
3741 /* Support for grouping stub sections. */
3742 id_sec = htab->stub_group[section->id].link_sec;
3743
3744 /* Get the name of this stub. */
3745 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, hash,
3746 irela);
3747 if (!stub_name)
3748 goto error_ret_free_internal;
3749
3750 stub_entry =
3751 aarch64_stub_hash_lookup (&htab->stub_hash_table,
3752 stub_name, FALSE, FALSE);
3753 if (stub_entry != NULL)
3754 {
3755 /* The proper stub has already been created. */
3756 free (stub_name);
3757 continue;
3758 }
3759
3760 stub_entry = _bfd_aarch64_add_stub_entry_in_group
3761 (stub_name, section, htab);
3762 if (stub_entry == NULL)
3763 {
3764 free (stub_name);
3765 goto error_ret_free_internal;
3766 }
3767
3768 stub_entry->target_value = sym_value;
3769 stub_entry->target_section = sym_sec;
3770 stub_entry->stub_type = stub_type;
3771 stub_entry->h = hash;
3772 stub_entry->st_type = st_type;
3773
3774 if (sym_name == NULL)
3775 sym_name = "unnamed";
3776 len = sizeof (STUB_ENTRY_NAME) + strlen (sym_name);
3777 stub_entry->output_name = bfd_alloc (htab->stub_bfd, len);
3778 if (stub_entry->output_name == NULL)
3779 {
3780 free (stub_name);
3781 goto error_ret_free_internal;
3782 }
3783
3784 snprintf (stub_entry->output_name, len, STUB_ENTRY_NAME,
3785 sym_name);
3786
3787 stub_changed = TRUE;
3788 }
3789
3790 /* We're done with the internal relocs, free them. */
3791 if (elf_section_data (section)->relocs == NULL)
3792 free (internal_relocs);
3793 }
3794 }
3795
3796 if (!stub_changed)
3797 break;
3798
3799 _bfd_aarch64_resize_stubs (htab);
3800
3801 /* Ask the linker to do its stuff. */
3802 (*htab->layout_sections_again) ();
3803 stub_changed = FALSE;
3804 }
3805
3806 return TRUE;
3807
3808 error_ret_free_local:
3809 return FALSE;
3810 }
3811
3812 /* Build all the stubs associated with the current output file. The
3813 stubs are kept in a hash table attached to the main linker hash
3814 table. We also set up the .plt entries for statically linked PIC
3815 functions here. This function is called via aarch64_elf_finish in the
3816 linker. */
3817
3818 bfd_boolean
3819 elfNN_aarch64_build_stubs (struct bfd_link_info *info)
3820 {
3821 asection *stub_sec;
3822 struct bfd_hash_table *table;
3823 struct elf_aarch64_link_hash_table *htab;
3824
3825 htab = elf_aarch64_hash_table (info);
3826
3827 for (stub_sec = htab->stub_bfd->sections;
3828 stub_sec != NULL; stub_sec = stub_sec->next)
3829 {
3830 bfd_size_type size;
3831
3832 /* Ignore non-stub sections. */
3833 if (!strstr (stub_sec->name, STUB_SUFFIX))
3834 continue;
3835
3836 /* Allocate memory to hold the linker stubs. */
3837 size = stub_sec->size;
3838 stub_sec->contents = bfd_zalloc (htab->stub_bfd, size);
3839 if (stub_sec->contents == NULL && size != 0)
3840 return FALSE;
3841 stub_sec->size = 0;
3842
3843 bfd_putl32 (0x14000000 | (size >> 2), stub_sec->contents);
3844 stub_sec->size += 4;
3845 }
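
/* To illustrate the instruction emitted above: 0x14000000 is an
   unconditional B with a 26-bit word offset, so ORing in (size >> 2)
   produces a branch of SIZE bytes, i.e. over the whole stub section
   that was just sized.  For a 0x28-byte stub section the word written
   is 0x1400000a (b .+0x28), so execution that falls into the start of
   the section skips the veneers.  */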
3846
3847 /* Build the stubs as directed by the stub hash table. */
3848 table = &htab->stub_hash_table;
3849 bfd_hash_traverse (table, aarch64_build_one_stub, info);
3850
3851 return TRUE;
3852 }
3853
3854
3855 /* Add an entry to the code/data map for section SEC. */
3856
3857 static void
3858 elfNN_aarch64_section_map_add (asection *sec, char type, bfd_vma vma)
3859 {
3860 struct _aarch64_elf_section_data *sec_data =
3861 elf_aarch64_section_data (sec);
3862 unsigned int newidx;
3863
3864 if (sec_data->map == NULL)
3865 {
3866 sec_data->map = bfd_malloc (sizeof (elf_aarch64_section_map));
3867 sec_data->mapcount = 0;
3868 sec_data->mapsize = 1;
3869 }
3870
3871 newidx = sec_data->mapcount++;
3872
3873 if (sec_data->mapcount > sec_data->mapsize)
3874 {
3875 sec_data->mapsize *= 2;
3876 sec_data->map = bfd_realloc_or_free
3877 (sec_data->map, sec_data->mapsize * sizeof (elf_aarch64_section_map));
3878 }
3879
3880 if (sec_data->map)
3881 {
3882 sec_data->map[newidx].vma = vma;
3883 sec_data->map[newidx].type = type;
3884 }
3885 }
3886
3887
3888 /* Initialise maps of insn/data for input BFDs. */
3889 void
3890 bfd_elfNN_aarch64_init_maps (bfd *abfd)
3891 {
3892 Elf_Internal_Sym *isymbuf;
3893 Elf_Internal_Shdr *hdr;
3894 unsigned int i, localsyms;
3895
3896 /* Make sure that we are dealing with an AArch64 elf binary. */
3897 if (!is_aarch64_elf (abfd))
3898 return;
3899
3900 if ((abfd->flags & DYNAMIC) != 0)
3901 return;
3902
3903 hdr = &elf_symtab_hdr (abfd);
3904 localsyms = hdr->sh_info;
3905
3906 /* Obtain a buffer full of symbols for this BFD. The hdr->sh_info field
3907 should contain the number of local symbols, which should come before any
3908 global symbols. Mapping symbols are always local. */
3909 isymbuf = bfd_elf_get_elf_syms (abfd, hdr, localsyms, 0, NULL, NULL, NULL);
3910
3911 /* No internal symbols read? Skip this BFD. */
3912 if (isymbuf == NULL)
3913 return;
3914
3915 for (i = 0; i < localsyms; i++)
3916 {
3917 Elf_Internal_Sym *isym = &isymbuf[i];
3918 asection *sec = bfd_section_from_elf_index (abfd, isym->st_shndx);
3919 const char *name;
3920
3921 if (sec != NULL && ELF_ST_BIND (isym->st_info) == STB_LOCAL)
3922 {
3923 name = bfd_elf_string_from_elf_section (abfd,
3924 hdr->sh_link,
3925 isym->st_name);
3926
3927 if (bfd_is_aarch64_special_symbol_name
3928 (name, BFD_AARCH64_SPECIAL_SYM_TYPE_MAP))
3929 elfNN_aarch64_section_map_add (sec, name[1], isym->st_value);
3930 }
3931 }
3932 }
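
/* For illustration: the mapping symbols read above are the "$x"
   (start of A64 code) and "$d" (start of data) markers, so name[1] is
   the span type recorded in the section map.  An object with $x at
   offset 0 and $d at offset 0x40 therefore yields a code span
   [0, 0x40) and a data span from 0x40 onwards; the erratum scans skip
   the 'd' spans.  */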
3933
3934 /* Set option values needed during linking. */
3935 void
3936 bfd_elfNN_aarch64_set_options (struct bfd *output_bfd,
3937 struct bfd_link_info *link_info,
3938 int no_enum_warn,
3939 int no_wchar_warn, int pic_veneer,
3940 int fix_erratum_835769,
3941 int fix_erratum_843419)
3942 {
3943 struct elf_aarch64_link_hash_table *globals;
3944
3945 globals = elf_aarch64_hash_table (link_info);
3946 globals->pic_veneer = pic_veneer;
3947 globals->fix_erratum_835769 = fix_erratum_835769;
3948 globals->fix_erratum_843419 = fix_erratum_843419;
3949 globals->fix_erratum_843419_adr = TRUE;
3950
3951 BFD_ASSERT (is_aarch64_elf (output_bfd));
3952 elf_aarch64_tdata (output_bfd)->no_enum_size_warning = no_enum_warn;
3953 elf_aarch64_tdata (output_bfd)->no_wchar_size_warning = no_wchar_warn;
3954 }
3955
3956 static bfd_vma
3957 aarch64_calculate_got_entry_vma (struct elf_link_hash_entry *h,
3958 struct elf_aarch64_link_hash_table
3959 *globals, struct bfd_link_info *info,
3960 bfd_vma value, bfd *output_bfd,
3961 bfd_boolean *unresolved_reloc_p)
3962 {
3963 bfd_vma off = (bfd_vma) - 1;
3964 asection *basegot = globals->root.sgot;
3965 bfd_boolean dyn = globals->root.dynamic_sections_created;
3966
3967 if (h != NULL)
3968 {
3969 BFD_ASSERT (basegot != NULL);
3970 off = h->got.offset;
3971 BFD_ASSERT (off != (bfd_vma) - 1);
3972 if (!WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, info->shared, h)
3973 || (info->shared
3974 && SYMBOL_REFERENCES_LOCAL (info, h))
3975 || (ELF_ST_VISIBILITY (h->other)
3976 && h->root.type == bfd_link_hash_undefweak))
3977 {
3978 /* This is actually a static link, or it is a -Bsymbolic link
3979 and the symbol is defined locally. We must initialize this
3980 entry in the global offset table. Since the offset must
3981 always be a multiple of 8 (4 in the case of ILP32), we use
3982 the least significant bit to record whether we have
3983 initialized it already.
3984 When doing a dynamic link, we create a .rel(a).got relocation
3985 entry to initialize the value. This is done in the
3986 finish_dynamic_symbol routine. */
3987 if ((off & 1) != 0)
3988 off &= ~1;
3989 else
3990 {
3991 bfd_put_NN (output_bfd, value, basegot->contents + off);
3992 h->got.offset |= 1;
3993 }
3994 }
3995 else
3996 *unresolved_reloc_p = FALSE;
3997
3998 off = off + basegot->output_section->vma + basegot->output_offset;
3999 }
4000
4001 return off;
4002 }
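
/* A small worked example of the low-bit trick used above (values are
   hypothetical): GOT offsets are multiples of 8 (4 for ILP32), so a
   stored h->got.offset of 0x29 means "entry at offset 0x28, already
   initialized".  The first call sees the flag clear, writes the value
   into the slot and sets h->got.offset |= 1; subsequent calls see the
   flag, strip it with off &= ~1 and skip the store, so the GOT slot
   is filled exactly once.  */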
4003
4004 /* Change R_TYPE to a more efficient access model where possible,
4005 return the new reloc type. */
4006
4007 static bfd_reloc_code_real_type
4008 aarch64_tls_transition_without_check (bfd_reloc_code_real_type r_type,
4009 struct elf_link_hash_entry *h)
4010 {
4011 bfd_boolean is_local = h == NULL;
4012
4013 switch (r_type)
4014 {
4015 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4016 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4017 return (is_local
4018 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
4019 : BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21);
4020
4021 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4022 return (is_local
4023 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
4024 : r_type);
4025
4026 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4027 return (is_local
4028 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
4029 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
4030
4031 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
4032 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4033 return (is_local
4034 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
4035 : BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC);
4036
4037 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4038 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 : r_type;
4039
4040 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
4041 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC : r_type;
4042
4043 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4044 return r_type;
4045
4046 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4047 return (is_local
4048 ? BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12
4049 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
4050
4051 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4052 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4053 /* Instructions with these relocations will become NOPs. */
4054 return BFD_RELOC_AARCH64_NONE;
4055
4056 default:
4057 break;
4058 }
4059
4060 return r_type;
4061 }
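
/* For example, given the mapping above: a General Dynamic sequence
   built from BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21 and
   BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC relaxes to the Local Exec forms
   (TLSLE_MOVW_TPREL_G1 / TLSLE_MOVW_TPREL_G0_NC) when the symbol
   binds locally, and to the Initial Exec forms
   (TLSIE_ADR_GOTTPREL_PAGE21 / TLSIE_LDNN_GOTTPREL_LO12_NC)
   otherwise.  */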
4062
4063 static unsigned int
4064 aarch64_reloc_got_type (bfd_reloc_code_real_type r_type)
4065 {
4066 switch (r_type)
4067 {
4068 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4069 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4070 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
4071 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4072 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4073 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4074 return GOT_NORMAL;
4075
4076 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4077 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4078 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4079 return GOT_TLS_GD;
4080
4081 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4082 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4083 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4084 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4085 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
4086 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
4087 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4088 return GOT_TLSDESC_GD;
4089
4090 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4091 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4092 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4093 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4094 return GOT_TLS_IE;
4095
4096 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4097 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4098 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
4099 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
4100 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
4101 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
4102 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
4103 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
4104 return GOT_UNKNOWN;
4105
4106 default:
4107 break;
4108 }
4109 return GOT_UNKNOWN;
4110 }
4111
4112 static bfd_boolean
4113 aarch64_can_relax_tls (bfd *input_bfd,
4114 struct bfd_link_info *info,
4115 bfd_reloc_code_real_type r_type,
4116 struct elf_link_hash_entry *h,
4117 unsigned long r_symndx)
4118 {
4119 unsigned int symbol_got_type;
4120 unsigned int reloc_got_type;
4121
4122 if (! IS_AARCH64_TLS_RELOC (r_type))
4123 return FALSE;
4124
4125 symbol_got_type = elfNN_aarch64_symbol_got_type (h, input_bfd, r_symndx);
4126 reloc_got_type = aarch64_reloc_got_type (r_type);
4127
4128 if (symbol_got_type == GOT_TLS_IE && GOT_TLS_GD_ANY_P (reloc_got_type))
4129 return TRUE;
4130
4131 if (info->shared)
4132 return FALSE;
4133
4134 if (h && h->root.type == bfd_link_hash_undefweak)
4135 return FALSE;
4136
4137 return TRUE;
4138 }
4139
4140 /* Given the relocation code R_TYPE, return the relaxed bfd reloc
4141 enumerator. */
4142
4143 static bfd_reloc_code_real_type
4144 aarch64_tls_transition (bfd *input_bfd,
4145 struct bfd_link_info *info,
4146 unsigned int r_type,
4147 struct elf_link_hash_entry *h,
4148 unsigned long r_symndx)
4149 {
4150 bfd_reloc_code_real_type bfd_r_type
4151 = elfNN_aarch64_bfd_reloc_from_type (r_type);
4152
4153 if (! aarch64_can_relax_tls (input_bfd, info, bfd_r_type, h, r_symndx))
4154 return bfd_r_type;
4155
4156 return aarch64_tls_transition_without_check (bfd_r_type, h);
4157 }
4158
4159 /* Return the base VMA address which should be subtracted from real addresses
4160 when resolving R_AARCH64_TLS_DTPREL relocation. */
4161
4162 static bfd_vma
4163 dtpoff_base (struct bfd_link_info *info)
4164 {
4165 /* If tls_sec is NULL, we should have signalled an error already. */
4166 BFD_ASSERT (elf_hash_table (info)->tls_sec != NULL);
4167 return elf_hash_table (info)->tls_sec->vma;
4168 }
4169
4170 /* Return the base VMA address which should be subtracted from real addresses
4171 when resolving R_AARCH64_TLS_GOTTPREL64 relocations. */
4172
4173 static bfd_vma
4174 tpoff_base (struct bfd_link_info *info)
4175 {
4176 struct elf_link_hash_table *htab = elf_hash_table (info);
4177
4178 /* If tls_sec is NULL, we should have signalled an error already. */
4179 BFD_ASSERT (htab->tls_sec != NULL);
4180
4181 bfd_vma base = align_power ((bfd_vma) TCB_SIZE,
4182 htab->tls_sec->alignment_power);
4183 return htab->tls_sec->vma - base;
4184 }
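
/* Worked example (hypothetical numbers): with the AArch64 TCB_SIZE of
   16 and a TLS segment aligned to 16 bytes, the base computed above
   is 16, so tpoff_base () returns tls_sec->vma - 16.  A variable 8
   bytes into the TLS segment then resolves to the TP offset
   (tls_sec->vma + 8) - tpoff_base () = 24, i.e. it lives just past
   the two-pointer thread control block that sits at TP.  */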
4185
4186 static bfd_vma *
4187 symbol_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4188 unsigned long r_symndx)
4189 {
4190 /* Return a pointer to the stored GOT offset for the symbol referred
4191 to by H, or for the local symbol with index R_SYMNDX if H is NULL. */
4192 if (h != NULL)
4193 return &h->got.offset;
4194 else
4195 {
4196 /* local symbol */
4197 struct elf_aarch64_local_symbol *l;
4198
4199 l = elf_aarch64_locals (input_bfd);
4200 return &l[r_symndx].got_offset;
4201 }
4202 }
4203
4204 static void
4205 symbol_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4206 unsigned long r_symndx)
4207 {
4208 bfd_vma *p;
4209 p = symbol_got_offset_ref (input_bfd, h, r_symndx);
4210 *p |= 1;
4211 }
4212
4213 static int
4214 symbol_got_offset_mark_p (bfd *input_bfd, struct elf_link_hash_entry *h,
4215 unsigned long r_symndx)
4216 {
4217 bfd_vma value;
4218 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4219 return value & 1;
4220 }
4221
4222 static bfd_vma
4223 symbol_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4224 unsigned long r_symndx)
4225 {
4226 bfd_vma value;
4227 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4228 value &= ~1;
4229 return value;
4230 }
4231
4232 static bfd_vma *
4233 symbol_tlsdesc_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4234 unsigned long r_symndx)
4235 {
4236 /* Return a pointer to the stored TLSDESC GOT offset for the symbol
4237 referred to by H, or for the local symbol with index R_SYMNDX. */
4238 if (h != NULL)
4239 {
4240 struct elf_aarch64_link_hash_entry *eh;
4241 eh = (struct elf_aarch64_link_hash_entry *) h;
4242 return &eh->tlsdesc_got_jump_table_offset;
4243 }
4244 else
4245 {
4246 /* local symbol */
4247 struct elf_aarch64_local_symbol *l;
4248
4249 l = elf_aarch64_locals (input_bfd);
4250 return &l[r_symndx].tlsdesc_got_jump_table_offset;
4251 }
4252 }
4253
4254 static void
4255 symbol_tlsdesc_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4256 unsigned long r_symndx)
4257 {
4258 bfd_vma *p;
4259 p = symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4260 *p |= 1;
4261 }
4262
4263 static int
4264 symbol_tlsdesc_got_offset_mark_p (bfd *input_bfd,
4265 struct elf_link_hash_entry *h,
4266 unsigned long r_symndx)
4267 {
4268 bfd_vma value;
4269 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4270 return value & 1;
4271 }
4272
4273 static bfd_vma
4274 symbol_tlsdesc_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4275 unsigned long r_symndx)
4276 {
4277 bfd_vma value;
4278 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4279 value &= ~1;
4280 return value;
4281 }
4282
4283 /* Data for make_branch_to_erratum_835769_stub(). */
4284
4285 struct erratum_835769_branch_to_stub_data
4286 {
4287 struct bfd_link_info *info;
4288 asection *output_section;
4289 bfd_byte *contents;
4290 };
4291
4292 /* Helper to insert branches to erratum 835769 stubs in the right
4293 places for a particular section. */
4294
4295 static bfd_boolean
4296 make_branch_to_erratum_835769_stub (struct bfd_hash_entry *gen_entry,
4297 void *in_arg)
4298 {
4299 struct elf_aarch64_stub_hash_entry *stub_entry;
4300 struct erratum_835769_branch_to_stub_data *data;
4301 bfd_byte *contents;
4302 unsigned long branch_insn = 0;
4303 bfd_vma veneered_insn_loc, veneer_entry_loc;
4304 bfd_signed_vma branch_offset;
4305 unsigned int target;
4306 bfd *abfd;
4307
4308 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4309 data = (struct erratum_835769_branch_to_stub_data *) in_arg;
4310
4311 if (stub_entry->target_section != data->output_section
4312 || stub_entry->stub_type != aarch64_stub_erratum_835769_veneer)
4313 return TRUE;
4314
4315 contents = data->contents;
4316 veneered_insn_loc = stub_entry->target_section->output_section->vma
4317 + stub_entry->target_section->output_offset
4318 + stub_entry->target_value;
4319 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4320 + stub_entry->stub_sec->output_offset
4321 + stub_entry->stub_offset;
4322 branch_offset = veneer_entry_loc - veneered_insn_loc;
4323
4324 abfd = stub_entry->target_section->owner;
4325 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4326 (*_bfd_error_handler)
4327 (_("%B: error: Erratum 835769 stub out "
4328 "of range (input file too large)"), abfd);
4329
4330 target = stub_entry->target_value;
4331 branch_insn = 0x14000000;
4332 branch_offset >>= 2;
4333 branch_offset &= 0x3ffffff;
4334 branch_insn |= branch_offset;
4335 bfd_putl32 (branch_insn, &contents[target]);
4336
4337 return TRUE;
4338 }
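
/* To illustrate the encoding above with hypothetical addresses: for a
   veneered instruction at 0x401000 and its veneer at 0x40a000,
   branch_offset is 0x9000; shifting right by 2 and masking to 26 bits
   gives imm26 = 0x2400, so the word patched over the instruction is
   0x14002400 (b .+0x9000).  Backward branches work the same way: the
   masked two's-complement offset is sign-extended again by the
   processor when it decodes imm26.  */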
4339
4340
4341 static bfd_boolean
4342 _bfd_aarch64_erratum_843419_branch_to_stub (struct bfd_hash_entry *gen_entry,
4343 void *in_arg)
4344 {
4345 struct elf_aarch64_stub_hash_entry *stub_entry
4346 = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4347 struct erratum_835769_branch_to_stub_data *data
4348 = (struct erratum_835769_branch_to_stub_data *) in_arg;
4349 struct bfd_link_info *info;
4350 struct elf_aarch64_link_hash_table *htab;
4351 bfd_byte *contents;
4352 asection *section;
4353 bfd *abfd;
4354 bfd_vma place;
4355 uint32_t insn;
4356
4357 info = data->info;
4358 contents = data->contents;
4359 section = data->output_section;
4360
4361 htab = elf_aarch64_hash_table (info);
4362
4363 if (stub_entry->target_section != section
4364 || stub_entry->stub_type != aarch64_stub_erratum_843419_veneer)
4365 return TRUE;
4366
4367 insn = bfd_getl32 (contents + stub_entry->target_value);
4368 bfd_putl32 (insn,
4369 stub_entry->stub_sec->contents + stub_entry->stub_offset);
4370
4371 place = (section->output_section->vma + section->output_offset
4372 + stub_entry->adrp_offset);
4373 insn = bfd_getl32 (contents + stub_entry->adrp_offset);
4374
4375 if ((insn & AARCH64_ADRP_OP_MASK) != AARCH64_ADRP_OP)
4376 abort ();
4377
4378 bfd_signed_vma imm =
4379 (_bfd_aarch64_sign_extend
4380 ((bfd_vma) _bfd_aarch64_decode_adrp_imm (insn) << 12, 33)
4381 - (place & 0xfff));
4382
4383 if (htab->fix_erratum_843419_adr
4384 && (imm >= AARCH64_MIN_ADRP_IMM && imm <= AARCH64_MAX_ADRP_IMM))
4385 {
4386 insn = (_bfd_aarch64_reencode_adr_imm (AARCH64_ADR_OP, imm)
4387 | AARCH64_RT (insn));
4388 bfd_putl32 (insn, contents + stub_entry->adrp_offset);
4389 }
4390 else
4391 {
4392 bfd_vma veneered_insn_loc;
4393 bfd_vma veneer_entry_loc;
4394 bfd_signed_vma branch_offset;
4395 uint32_t branch_insn;
4396
4397 veneered_insn_loc = stub_entry->target_section->output_section->vma
4398 + stub_entry->target_section->output_offset
4399 + stub_entry->target_value;
4400 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4401 + stub_entry->stub_sec->output_offset
4402 + stub_entry->stub_offset;
4403 branch_offset = veneer_entry_loc - veneered_insn_loc;
4404
4405 abfd = stub_entry->target_section->owner;
4406 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4407 (*_bfd_error_handler)
4408 (_("%B: error: Erratum 843419 stub out "
4409 "of range (input file too large)"), abfd);
4410
4411 branch_insn = 0x14000000;
4412 branch_offset >>= 2;
4413 branch_offset &= 0x3ffffff;
4414 branch_insn |= branch_offset;
4415 bfd_putl32 (branch_insn, contents + stub_entry->target_value);
4416 }
4417 return TRUE;
4418 }
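
/* A worked example of the ADR rewrite above (addresses are
   hypothetical): ADRP computes (PC & ~0xfff) + (imm << 12), so for an
   ADRP at 0x400a24 with imm = 1 the target is 0x401000.  The
   equivalent ADR immediate is (1 << 12) - (0x400a24 & 0xfff) = 0x5dc,
   well within ADR's +/-1MB reach, so the ADRP can simply be rewritten
   as an ADR of the same destination register.  When the displacement
   does not fit, the else branch instead replaces the veneered
   instruction with a branch to the stub, which holds a copy of that
   instruction (written into the stub at the top of this function).  */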
4419
4420
4421 static bfd_boolean
4422 elfNN_aarch64_write_section (bfd *output_bfd ATTRIBUTE_UNUSED,
4423 struct bfd_link_info *link_info,
4424 asection *sec,
4425 bfd_byte *contents)
4426
4427 {
4428 struct elf_aarch64_link_hash_table *globals =
4429 elf_aarch64_hash_table (link_info);
4430
4431 if (globals == NULL)
4432 return FALSE;
4433
4434 /* Fix code to point to erratum 835769 stubs. */
4435 if (globals->fix_erratum_835769)
4436 {
4437 struct erratum_835769_branch_to_stub_data data;
4438
4439 data.info = link_info;
4440 data.output_section = sec;
4441 data.contents = contents;
4442 bfd_hash_traverse (&globals->stub_hash_table,
4443 make_branch_to_erratum_835769_stub, &data);
4444 }
4445
4446 if (globals->fix_erratum_843419)
4447 {
4448 struct erratum_835769_branch_to_stub_data data;
4449
4450 data.info = link_info;
4451 data.output_section = sec;
4452 data.contents = contents;
4453 bfd_hash_traverse (&globals->stub_hash_table,
4454 _bfd_aarch64_erratum_843419_branch_to_stub, &data);
4455 }
4456
4457 return FALSE;
4458 }
4459
4460 /* Perform a relocation as part of a final link. */
4461 static bfd_reloc_status_type
4462 elfNN_aarch64_final_link_relocate (reloc_howto_type *howto,
4463 bfd *input_bfd,
4464 bfd *output_bfd,
4465 asection *input_section,
4466 bfd_byte *contents,
4467 Elf_Internal_Rela *rel,
4468 bfd_vma value,
4469 struct bfd_link_info *info,
4470 asection *sym_sec,
4471 struct elf_link_hash_entry *h,
4472 bfd_boolean *unresolved_reloc_p,
4473 bfd_boolean save_addend,
4474 bfd_vma *saved_addend,
4475 Elf_Internal_Sym *sym)
4476 {
4477 Elf_Internal_Shdr *symtab_hdr;
4478 unsigned int r_type = howto->type;
4479 bfd_reloc_code_real_type bfd_r_type
4480 = elfNN_aarch64_bfd_reloc_from_howto (howto);
4481 bfd_reloc_code_real_type new_bfd_r_type;
4482 unsigned long r_symndx;
4483 bfd_byte *hit_data = contents + rel->r_offset;
4484 bfd_vma place, off;
4485 bfd_signed_vma signed_addend;
4486 struct elf_aarch64_link_hash_table *globals;
4487 bfd_boolean weak_undef_p;
4488 asection *base_got;
4489
4490 globals = elf_aarch64_hash_table (info);
4491
4492 symtab_hdr = &elf_symtab_hdr (input_bfd);
4493
4494 BFD_ASSERT (is_aarch64_elf (input_bfd));
4495
4496 r_symndx = ELFNN_R_SYM (rel->r_info);
4497
4498 /* It is possible to have linker relaxations on some TLS access
4499 models. Update our information here. */
4500 new_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type, h, r_symndx);
4501 if (new_bfd_r_type != bfd_r_type)
4502 {
4503 bfd_r_type = new_bfd_r_type;
4504 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
4505 BFD_ASSERT (howto != NULL);
4506 r_type = howto->type;
4507 }
4508
4509 place = input_section->output_section->vma
4510 + input_section->output_offset + rel->r_offset;
4511
4512 /* Get addend, accumulating the addend for consecutive relocs
4513 which refer to the same offset. */
4514 signed_addend = saved_addend ? *saved_addend : 0;
4515 signed_addend += rel->r_addend;
4516
4517 weak_undef_p = (h ? h->root.type == bfd_link_hash_undefweak
4518 : bfd_is_und_section (sym_sec));
4519
4520 /* Since STT_GNU_IFUNC symbol must go through PLT, we handle
4521 it here if it is defined in a non-shared object. */
4522 if (h != NULL
4523 && h->type == STT_GNU_IFUNC
4524 && h->def_regular)
4525 {
4526 asection *plt;
4527 const char *name;
4528 bfd_vma addend = 0;
4529
4530 if ((input_section->flags & SEC_ALLOC) == 0
4531 || h->plt.offset == (bfd_vma) -1)
4532 abort ();
4533
4534 /* STT_GNU_IFUNC symbol must go through PLT. */
4535 plt = globals->root.splt ? globals->root.splt : globals->root.iplt;
4536 value = (plt->output_section->vma + plt->output_offset + h->plt.offset);
4537
4538 switch (bfd_r_type)
4539 {
4540 default:
4541 if (h->root.root.string)
4542 name = h->root.root.string;
4543 else
4544 name = bfd_elf_sym_name (input_bfd, symtab_hdr, sym,
4545 NULL);
4546 (*_bfd_error_handler)
4547 (_("%B: relocation %s against STT_GNU_IFUNC "
4548 "symbol `%s' isn't handled by %s"), input_bfd,
4549 howto->name, name, __FUNCTION__);
4550 bfd_set_error (bfd_error_bad_value);
4551 return FALSE;
4552
4553 case BFD_RELOC_AARCH64_NN:
4554 if (rel->r_addend != 0)
4555 {
4556 if (h->root.root.string)
4557 name = h->root.root.string;
4558 else
4559 name = bfd_elf_sym_name (input_bfd, symtab_hdr,
4560 sym, NULL);
4561 (*_bfd_error_handler)
4562 (_("%B: relocation %s against STT_GNU_IFUNC "
4563 "symbol `%s' has non-zero addend: %d"),
4564 input_bfd, howto->name, name, rel->r_addend);
4565 bfd_set_error (bfd_error_bad_value);
4566 return FALSE;
4567 }
4568
4569 /* Generate dynamic relocation only when there is a
4570 non-GOT reference in a shared object. */
4571 if (info->shared && h->non_got_ref)
4572 {
4573 Elf_Internal_Rela outrel;
4574 asection *sreloc;
4575
4576 /* Need a dynamic relocation to get the real function
4577 address. */
4578 outrel.r_offset = _bfd_elf_section_offset (output_bfd,
4579 info,
4580 input_section,
4581 rel->r_offset);
4582 if (outrel.r_offset == (bfd_vma) -1
4583 || outrel.r_offset == (bfd_vma) -2)
4584 abort ();
4585
4586 outrel.r_offset += (input_section->output_section->vma
4587 + input_section->output_offset);
4588
4589 if (h->dynindx == -1
4590 || h->forced_local
4591 || info->executable)
4592 {
4593 /* This symbol is resolved locally. */
4594 outrel.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
4595 outrel.r_addend = (h->root.u.def.value
4596 + h->root.u.def.section->output_section->vma
4597 + h->root.u.def.section->output_offset);
4598 }
4599 else
4600 {
4601 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4602 outrel.r_addend = 0;
4603 }
4604
4605 sreloc = globals->root.irelifunc;
4606 elf_append_rela (output_bfd, sreloc, &outrel);
4607
4608 /* If this reloc is against an external symbol, we
4609 do not want to fiddle with the addend. Otherwise,
4610 we need to include the symbol value so that it
4611 becomes an addend for the dynamic reloc. For an
4612 internal symbol, we have already updated the addend. */
4613 return bfd_reloc_ok;
4614 }
4615 /* FALLTHROUGH */
4616 case BFD_RELOC_AARCH64_CALL26:
4617 case BFD_RELOC_AARCH64_JUMP26:
4618 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4619 signed_addend,
4620 weak_undef_p);
4621 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
4622 howto, value);
4623 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4624 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4625 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
4626 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4627 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4628 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4629 base_got = globals->root.sgot;
4630 off = h->got.offset;
4631
4632 if (base_got == NULL)
4633 abort ();
4634
4635 if (off == (bfd_vma) -1)
4636 {
4637 bfd_vma plt_index;
4638
4639 /* We can't use h->got.offset here to save state, or
4640 even just remember the offset, as finish_dynamic_symbol
4641 would use that as offset into .got. */
4642
4643 if (globals->root.splt != NULL)
4644 {
4645 plt_index = ((h->plt.offset - globals->plt_header_size) /
4646 globals->plt_entry_size);
4647 off = (plt_index + 3) * GOT_ENTRY_SIZE;
4648 base_got = globals->root.sgotplt;
4649 }
4650 else
4651 {
4652 plt_index = h->plt.offset / globals->plt_entry_size;
4653 off = plt_index * GOT_ENTRY_SIZE;
4654 base_got = globals->root.igotplt;
4655 }
4656
4657 if (h->dynindx == -1
4658 || h->forced_local
4659 || info->symbolic)
4660 {
4661 /* This references the local definition. We must
4662 initialize this entry in the global offset table.
4663 Since the offset must always be a multiple of 8,
4664 we use the least significant bit to record
4665 whether we have initialized it already.
4666
4667 When doing a dynamic link, we create a .rela.got
4668 relocation entry to initialize the value. This
4669 is done in the finish_dynamic_symbol routine. */
4670 if ((off & 1) != 0)
4671 off &= ~1;
4672 else
4673 {
4674 bfd_put_NN (output_bfd, value,
4675 base_got->contents + off);
4676 /* Note that this is harmless as -1 | 1 still is -1. */
4677 h->got.offset |= 1;
4678 }
4679 }
4680 value = (base_got->output_section->vma
4681 + base_got->output_offset + off);
4682 }
4683 else
4684 value = aarch64_calculate_got_entry_vma (h, globals, info,
4685 value, output_bfd,
4686 unresolved_reloc_p);
4687 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15
4688 || bfd_r_type == BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14)
4689 addend = (globals->root.sgot->output_section->vma
4690 + globals->root.sgot->output_offset);
4691 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4692 addend, weak_undef_p);
4693 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type, howto, value);
4694 case BFD_RELOC_AARCH64_ADD_LO12:
4695 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4696 break;
4697 }
4698 }
4699
4700 switch (bfd_r_type)
4701 {
4702 case BFD_RELOC_AARCH64_NONE:
4703 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4704 *unresolved_reloc_p = FALSE;
4705 return bfd_reloc_ok;
4706
4707 case BFD_RELOC_AARCH64_NN:
4708
4709 /* When generating a shared object or relocatable executable, these
4710 relocations are copied into the output file to be resolved at
4711 run time. */
4712 if (((info->shared == TRUE) || globals->root.is_relocatable_executable)
4713 && (input_section->flags & SEC_ALLOC)
4714 && (h == NULL
4715 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
4716 || h->root.type != bfd_link_hash_undefweak))
4717 {
4718 Elf_Internal_Rela outrel;
4719 bfd_byte *loc;
4720 bfd_boolean skip, relocate;
4721 asection *sreloc;
4722
4723 *unresolved_reloc_p = FALSE;
4724
4725 skip = FALSE;
4726 relocate = FALSE;
4727
4728 outrel.r_addend = signed_addend;
4729 outrel.r_offset =
4730 _bfd_elf_section_offset (output_bfd, info, input_section,
4731 rel->r_offset);
4732 if (outrel.r_offset == (bfd_vma) - 1)
4733 skip = TRUE;
4734 else if (outrel.r_offset == (bfd_vma) - 2)
4735 {
4736 skip = TRUE;
4737 relocate = TRUE;
4738 }
4739
4740 outrel.r_offset += (input_section->output_section->vma
4741 + input_section->output_offset);
4742
4743 if (skip)
4744 memset (&outrel, 0, sizeof outrel);
4745 else if (h != NULL
4746 && h->dynindx != -1
4747 && (!info->shared || !SYMBOLIC_BIND (info, h) || !h->def_regular))
4748 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4749 else
4750 {
4751 int symbol;
4752
4753 /* On SVR4-ish systems, the dynamic loader cannot
4754 relocate the text and data segments independently,
4755 so the symbol does not matter. */
4756 symbol = 0;
4757 outrel.r_info = ELFNN_R_INFO (symbol, AARCH64_R (RELATIVE));
4758 outrel.r_addend += value;
4759 }
4760
4761 sreloc = elf_section_data (input_section)->sreloc;
4762 if (sreloc == NULL || sreloc->contents == NULL)
4763 return bfd_reloc_notsupported;
4764
4765 loc = sreloc->contents + sreloc->reloc_count++ * RELOC_SIZE (globals);
4766 bfd_elfNN_swap_reloca_out (output_bfd, &outrel, loc);
4767
4768 if (sreloc->reloc_count * RELOC_SIZE (globals) > sreloc->size)
4769 {
4770 /* Sanity check that we have previously allocated
4771 sufficient space in the relocation section for the
4772 number of relocations we actually want to emit. */
4773 abort ();
4774 }
4775
4776 /* If this reloc is against an external symbol, we do not want to
4777 fiddle with the addend. Otherwise, we need to include the symbol
4778 value so that it becomes an addend for the dynamic reloc. */
4779 if (!relocate)
4780 return bfd_reloc_ok;
4781
4782 return _bfd_final_link_relocate (howto, input_bfd, input_section,
4783 contents, rel->r_offset, value,
4784 signed_addend);
4785 }
4786 else
4787 value += signed_addend;
4788 break;
4789
4790 case BFD_RELOC_AARCH64_CALL26:
4791 case BFD_RELOC_AARCH64_JUMP26:
4792 {
4793 asection *splt = globals->root.splt;
4794 bfd_boolean via_plt_p =
4795 splt != NULL && h != NULL && h->plt.offset != (bfd_vma) - 1;
4796
4797 /* A call to an undefined weak symbol is converted to a jump to
4798 the next instruction unless a PLT entry will be created.
4799 The jump to the next instruction is optimized as a NOP.
4800 Do the same for local undefined symbols. */
4801 if (weak_undef_p && ! via_plt_p)
4802 {
4803 bfd_putl32 (INSN_NOP, hit_data);
4804 return bfd_reloc_ok;
4805 }
4806
4807 /* If the call goes through a PLT entry, make sure to
4808 check distance to the right destination address. */
4809 if (via_plt_p)
4810 {
4811 value = (splt->output_section->vma
4812 + splt->output_offset + h->plt.offset);
4813 *unresolved_reloc_p = FALSE;
4814 }
4815
4816 /* If the target symbol is global and marked as a function, the
4817 relocation applies to a function call or a tail call. In this
4818 situation we can veneer out-of-range branches. The veneers
4819 use IP0 and IP1, hence they cannot be used for arbitrary
4820 out-of-range branches that occur within the body of a function. */
4821 if (h && h->type == STT_FUNC)
4822 {
4823 /* Check if a stub has to be inserted because the destination
4824 is too far away. */
4825 if (! aarch64_valid_branch_p (value, place))
4826 {
4827 /* The target is out of reach, so redirect the branch to
4828 the local stub for this function. */
4829 struct elf_aarch64_stub_hash_entry *stub_entry;
4830 stub_entry = elfNN_aarch64_get_stub_entry (input_section,
4831 sym_sec, h,
4832 rel, globals);
4833 if (stub_entry != NULL)
4834 value = (stub_entry->stub_offset
4835 + stub_entry->stub_sec->output_offset
4836 + stub_entry->stub_sec->output_section->vma);
4837 }
4838 }
4839 }
4840 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4841 signed_addend, weak_undef_p);
4842 break;
4843
4844 case BFD_RELOC_AARCH64_16_PCREL:
4845 case BFD_RELOC_AARCH64_32_PCREL:
4846 case BFD_RELOC_AARCH64_64_PCREL:
4847 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
4848 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4849 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
4850 case BFD_RELOC_AARCH64_LD_LO19_PCREL:
4851 if (info->shared
4852 && (input_section->flags & SEC_ALLOC) != 0
4853 && (input_section->flags & SEC_READONLY) != 0
4854 && h != NULL
4855 && !h->def_regular)
4856 {
4857 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
4858
4859 (*_bfd_error_handler)
4860 (_("%B: relocation %s against external symbol `%s' can not be used"
4861 " when making a shared object; recompile with -fPIC"),
4862 input_bfd, elfNN_aarch64_howto_table[howto_index].name,
4863 h->root.root.string);
4864 bfd_set_error (bfd_error_bad_value);
4865 return FALSE;
4866 }
4867
4868 case BFD_RELOC_AARCH64_16:
4869 #if ARCH_SIZE == 64
4870 case BFD_RELOC_AARCH64_32:
4871 #endif
4872 case BFD_RELOC_AARCH64_ADD_LO12:
4873 case BFD_RELOC_AARCH64_BRANCH19:
4874 case BFD_RELOC_AARCH64_LDST128_LO12:
4875 case BFD_RELOC_AARCH64_LDST16_LO12:
4876 case BFD_RELOC_AARCH64_LDST32_LO12:
4877 case BFD_RELOC_AARCH64_LDST64_LO12:
4878 case BFD_RELOC_AARCH64_LDST8_LO12:
4879 case BFD_RELOC_AARCH64_MOVW_G0:
4880 case BFD_RELOC_AARCH64_MOVW_G0_NC:
4881 case BFD_RELOC_AARCH64_MOVW_G0_S:
4882 case BFD_RELOC_AARCH64_MOVW_G1:
4883 case BFD_RELOC_AARCH64_MOVW_G1_NC:
4884 case BFD_RELOC_AARCH64_MOVW_G1_S:
4885 case BFD_RELOC_AARCH64_MOVW_G2:
4886 case BFD_RELOC_AARCH64_MOVW_G2_NC:
4887 case BFD_RELOC_AARCH64_MOVW_G2_S:
4888 case BFD_RELOC_AARCH64_MOVW_G3:
4889 case BFD_RELOC_AARCH64_TSTBR14:
4890 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4891 signed_addend, weak_undef_p);
4892 break;
4893
4894 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4895 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4896 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
4897 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4898 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4899 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4900 if (globals->root.sgot == NULL)
4901 BFD_ASSERT (h != NULL);
4902
4903 if (h != NULL)
4904 {
4905 bfd_vma addend = 0;
4906 value = aarch64_calculate_got_entry_vma (h, globals, info, value,
4907 output_bfd,
4908 unresolved_reloc_p);
4909 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15
4910 || bfd_r_type == BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14)
4911 addend = (globals->root.sgot->output_section->vma
4912 + globals->root.sgot->output_offset);
4913 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4914 addend, weak_undef_p);
4915 }
4916 else
4917 {
4918 bfd_vma addend = 0;
4919 struct elf_aarch64_local_symbol *locals
4920 = elf_aarch64_locals (input_bfd);
4921
4922 if (locals == NULL)
4923 {
4924 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
4925 (*_bfd_error_handler)
4926 (_("%B: Local symbol descriptor table be NULL when applying "
4927 "relocation %s against local symbol"),
4928 input_bfd, elfNN_aarch64_howto_table[howto_index].name);
4929 abort ();
4930 }
4931
4932 off = symbol_got_offset (input_bfd, h, r_symndx);
4933 base_got = globals->root.sgot;
4934 bfd_vma got_entry_addr = (base_got->output_section->vma
4935 + base_got->output_offset + off);
4936
4937 if (!symbol_got_offset_mark_p (input_bfd, h, r_symndx))
4938 {
4939 bfd_put_64 (output_bfd, value, base_got->contents + off);
4940
4941 if (info->shared)
4942 {
4943 asection *s;
4944 Elf_Internal_Rela outrel;
4945
4946 /* For a local symbol, the absolute relocation has already been
4947 done at the static linking stage. For a shared library, however,
4948 the GOT entry contents must be adjusted by the shared object's
4949 load base address, so we generate an R_AARCH64_RELATIVE reloc
4950 for the dynamic linker. */
4951 s = globals->root.srelgot;
4952 if (s == NULL)
4953 abort ();
4954
4955 outrel.r_offset = got_entry_addr;
4956 outrel.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
4957 outrel.r_addend = value;
4958 elf_append_rela (output_bfd, s, &outrel);
4959 }
4960
4961 symbol_got_offset_mark (input_bfd, h, r_symndx);
4962 }
4963
4964 /* Update the relocation value to the GOT entry address, as we have
4965 transformed the direct data access into an indirect access through the GOT. */
4966 value = got_entry_addr;
4967
4968 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15
4969 || bfd_r_type == BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14)
4970 addend = base_got->output_section->vma + base_got->output_offset;
4971
4972 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4973 addend, weak_undef_p);
4974 }
4975
4976 break;
4977
4978 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4979 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4980 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4981 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4982 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4983 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4984 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4985 if (globals->root.sgot == NULL)
4986 return bfd_reloc_notsupported;
4987
4988 value = (symbol_got_offset (input_bfd, h, r_symndx)
4989 + globals->root.sgot->output_section->vma
4990 + globals->root.sgot->output_offset);
4991
4992 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4993 0, weak_undef_p);
4994 *unresolved_reloc_p = FALSE;
4995 break;
4996
4997 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4998 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4999 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5000 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5001 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5002 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5003 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5004 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5005 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
5006 signed_addend - tpoff_base (info),
5007 weak_undef_p);
5008 *unresolved_reloc_p = FALSE;
5009 break;
5010
5011 case BFD_RELOC_AARCH64_TLSDESC_ADD:
5012 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5013 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5014 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5015 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
5016 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
5017 case BFD_RELOC_AARCH64_TLSDESC_LDR:
5018 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5019 if (globals->root.sgot == NULL)
5020 return bfd_reloc_notsupported;
5021 value = (symbol_tlsdesc_got_offset (input_bfd, h, r_symndx)
5022 + globals->root.sgotplt->output_section->vma
5023 + globals->root.sgotplt->output_offset
5024 + globals->sgotplt_jump_table_size);
5025
5026 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
5027 0, weak_undef_p);
5028 *unresolved_reloc_p = FALSE;
5029 break;
5030
5031 default:
5032 return bfd_reloc_notsupported;
5033 }
5034
5035 if (saved_addend)
5036 *saved_addend = value;
5037
5038 /* Only apply the final relocation in a sequence. */
5039 if (save_addend)
5040 return bfd_reloc_continue;
5041
5042 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
5043 howto, value);
5044 }
5045
5046 /* Handle TLS relaxations. Relaxing is possible for symbols that use
5047 R_AARCH64_TLSDESC_{ADR_PAGE21, LD64_LO12_NC, ADD_LO12_NC} during a
5048 static link.
5049
5050 Return bfd_reloc_ok if we're done, bfd_reloc_continue if the caller
5051 is to then call final_link_relocate. Return other values in the
5052 case of error. */
5053
5054 static bfd_reloc_status_type
5055 elfNN_aarch64_tls_relax (struct elf_aarch64_link_hash_table *globals,
5056 bfd *input_bfd, bfd_byte *contents,
5057 Elf_Internal_Rela *rel, struct elf_link_hash_entry *h)
5058 {
5059 bfd_boolean is_local = h == NULL;
5060 unsigned int r_type = ELFNN_R_TYPE (rel->r_info);
5061 unsigned long insn;
5062
5063 BFD_ASSERT (globals && input_bfd && contents && rel);
5064
5065 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
5066 {
5067 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5068 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5069 if (is_local)
5070 {
5071 /* GD->LE relaxation:
5072 adrp x0, :tlsgd:var => movz x0, :tprel_g1:var
5073 or
5074 adrp x0, :tlsdesc:var => movz x0, :tprel_g1:var
5075 */
5076 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
5077 return bfd_reloc_continue;
5078 }
5079 else
5080 {
5081 /* GD->IE relaxation:
5082 adrp x0, :tlsgd:var => adrp x0, :gottprel:var
5083 or
5084 adrp x0, :tlsdesc:var => adrp x0, :gottprel:var
5085 */
5086 return bfd_reloc_continue;
5087 }
5088
5089 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5090 BFD_ASSERT (0);
5091 break;
5092
5093 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5094 if (is_local)
5095 {
5096 /* Tiny TLSDESC->LE relaxation:
5097 ldr x1, :tlsdesc:var => movz x0, #:tprel_g1:var
5098 adr x0, :tlsdesc:var => movk x0, #:tprel_g0_nc:var
5099 .tlsdesccall var
5100 blr x1 => nop
5101 */
5102 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
5103 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
5104
5105 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5106 AARCH64_R (TLSLE_MOVW_TPREL_G0_NC));
5107 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5108
5109 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
5110 bfd_putl32 (0xf2800000, contents + rel->r_offset + 4);
5111 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
5112 return bfd_reloc_continue;
5113 }
5114 else
5115 {
5116 /* Tiny TLSDESC->IE relaxation:
5117 ldr x1, :tlsdesc:var => ldr x0, :gottprel:var
5118 adr x0, :tlsdesc:var => nop
5119 .tlsdesccall var
5120 blr x1 => nop
5121 */
5122 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
5123 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
5124
5125 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5126 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5127
5128 bfd_putl32 (0x58000000, contents + rel->r_offset);
5129 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 4);
5130 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
5131 return bfd_reloc_continue;
5132 }
5133
5134 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5135 if (is_local)
5136 {
5137 /* Tiny GD->LE relaxation:
5138 adr x0, :tlsgd:var => mrs x1, tpidr_el0
5139 bl __tls_get_addr => add x0, x1, #:tprel_hi12:x, lsl #12
5140 nop => add x0, x0, #:tprel_lo12_nc:x
5141 */
5142
5143 /* First kill the tls_get_addr reloc on the bl instruction. */
5144 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5145
5146 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 0);
5147 bfd_putl32 (0x91400020, contents + rel->r_offset + 4);
5148 bfd_putl32 (0x91000000, contents + rel->r_offset + 8);
5149
5150 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5151 AARCH64_R (TLSLE_ADD_TPREL_LO12_NC));
5152 rel[1].r_offset = rel->r_offset + 8;
5153
5154 /* Move the current relocation to the second instruction in
5155 the sequence. */
5156 rel->r_offset += 4;
5157 rel->r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5158 AARCH64_R (TLSLE_ADD_TPREL_HI12));
5159 return bfd_reloc_continue;
5160 }
5161 else
5162 {
5163 /* Tiny GD->IE relaxation:
5164 adr x0, :tlsgd:var => ldr x0, :gottprel:var
5165 bl __tls_get_addr => mrs x1, tpidr_el0
5166 nop => add x0, x0, x1
5167 */
5168
5169 /* First kill the tls_get_addr reloc on the bl instruction. */
5170 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5171 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5172
5173 bfd_putl32 (0x58000000, contents + rel->r_offset);
5174 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5175 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5176 return bfd_reloc_continue;
5177 }
5178
5179 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5180 return bfd_reloc_continue;
5181
5182 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5183 if (is_local)
5184 {
5185 /* GD->LE relaxation:
5186 ldr xd, [x0, #:tlsdesc_lo12:var] => movk x0, :tprel_g0_nc:var
5187 */
5188 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5189 return bfd_reloc_continue;
5190 }
5191 else
5192 {
5193 /* GD->IE relaxation:
5194 ldr xd, [x0, #:tlsdesc_lo12:var] => ldr x0, [x0, #:gottprel_lo12:var]
5195 */
5196 insn = bfd_getl32 (contents + rel->r_offset);
5197 insn &= 0xffffffe0;
5198 bfd_putl32 (insn, contents + rel->r_offset);
5199 return bfd_reloc_continue;
5200 }
5201
5202 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5203 if (is_local)
5204 {
5205 /* GD->LE relaxation
5206 add x0, #:tlsgd_lo12:var => movk x0, :tprel_g0_nc:var
5207 bl __tls_get_addr => mrs x1, tpidr_el0
5208 nop => add x0, x1, x0
5209 */
5210
5211 /* First kill the tls_get_addr reloc on the bl instruction. */
5212 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5213 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5214
5215 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5216 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5217 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5218 return bfd_reloc_continue;
5219 }
5220 else
5221 {
5222 /* GD->IE relaxation
5223 ADD x0, #:tlsgd_lo12:var => ldr x0, [x0, #:gottprel_lo12:var]
5224 BL __tls_get_addr => mrs x1, tpidr_el0
5225 R_AARCH64_CALL26
5226 NOP => add x0, x1, x0
5227 */
5228
5229 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (CALL26));
5230
5231 /* Remove the relocation on the BL instruction. */
5232 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5233
5234 bfd_putl32 (0xf9400000, contents + rel->r_offset);
5235
5236 /* We choose to fix up the BL and NOP instructions using the
5237 offset from the second relocation to allow flexibility in
5238 scheduling instructions between the ADD and BL. */
5239 bfd_putl32 (0xd53bd041, contents + rel[1].r_offset);
5240 bfd_putl32 (0x8b000020, contents + rel[1].r_offset + 4);
5241 return bfd_reloc_continue;
5242 }
5243
5244 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5245 case BFD_RELOC_AARCH64_TLSDESC_CALL:
5246 /* GD->IE/LE relaxation:
5247 add x0, x0, #:tlsdesc_lo12:var => nop
5248 blr xd => nop
5249 */
5250 bfd_putl32 (INSN_NOP, contents + rel->r_offset);
5251 return bfd_reloc_ok;
5252
5253 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5254 /* IE->LE relaxation:
5255 adrp xd, :gottprel:var => movz xd, :tprel_g1:var
5256 */
5257 if (is_local)
5258 {
5259 insn = bfd_getl32 (contents + rel->r_offset);
5260 bfd_putl32 (0xd2a00000 | (insn & 0x1f), contents + rel->r_offset);
5261 }
5262 return bfd_reloc_continue;
5263
5264 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5265 /* IE->LE relaxation:
5266 ldr xd, [xm, #:gottprel_lo12:var] => movk xd, :tprel_g0_nc:var
5267 */
5268 if (is_local)
5269 {
5270 insn = bfd_getl32 (contents + rel->r_offset);
5271 bfd_putl32 (0xf2800000 | (insn & 0x1f), contents + rel->r_offset);
5272 }
5273 return bfd_reloc_continue;
5274
5275 default:
5276 return bfd_reloc_continue;
5277 }
5278
5279 return bfd_reloc_ok;
5280 }
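
/* For reference, the instruction words written by the relaxations
   above are emitted with zero-valued immediate fields and rely on the
   rewritten relocation (applied after the bfd_reloc_continue return)
   to fill in the TPREL or GOTTPREL value: 0xd2a00000 is
   movz x0, #0, lsl #16; 0xf2800000 is movk x0, #0; 0x91400020 is
   add x0, x1, #0, lsl #12; 0x58000000 is ldr x0, <literal>;
   0xf9400000 is ldr x0, [x0]; 0xd53bd041 is mrs x1, tpidr_el0; and
   0x8b000020 is add x0, x1, x0.  */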
5281
5282 /* Relocate an AArch64 ELF section. */
5283
5284 static bfd_boolean
5285 elfNN_aarch64_relocate_section (bfd *output_bfd,
5286 struct bfd_link_info *info,
5287 bfd *input_bfd,
5288 asection *input_section,
5289 bfd_byte *contents,
5290 Elf_Internal_Rela *relocs,
5291 Elf_Internal_Sym *local_syms,
5292 asection **local_sections)
5293 {
5294 Elf_Internal_Shdr *symtab_hdr;
5295 struct elf_link_hash_entry **sym_hashes;
5296 Elf_Internal_Rela *rel;
5297 Elf_Internal_Rela *relend;
5298 const char *name;
5299 struct elf_aarch64_link_hash_table *globals;
5300 bfd_boolean save_addend = FALSE;
5301 bfd_vma addend = 0;
5302
5303 globals = elf_aarch64_hash_table (info);
5304
5305 symtab_hdr = &elf_symtab_hdr (input_bfd);
5306 sym_hashes = elf_sym_hashes (input_bfd);
5307
5308 rel = relocs;
5309 relend = relocs + input_section->reloc_count;
5310 for (; rel < relend; rel++)
5311 {
5312 unsigned int r_type;
5313 bfd_reloc_code_real_type bfd_r_type;
5314 bfd_reloc_code_real_type relaxed_bfd_r_type;
5315 reloc_howto_type *howto;
5316 unsigned long r_symndx;
5317 Elf_Internal_Sym *sym;
5318 asection *sec;
5319 struct elf_link_hash_entry *h;
5320 bfd_vma relocation;
5321 bfd_reloc_status_type r;
5322 arelent bfd_reloc;
5323 char sym_type;
5324 bfd_boolean unresolved_reloc = FALSE;
5325 char *error_message = NULL;
5326
5327 r_symndx = ELFNN_R_SYM (rel->r_info);
5328 r_type = ELFNN_R_TYPE (rel->r_info);
5329
5330 bfd_reloc.howto = elfNN_aarch64_howto_from_type (r_type);
5331 howto = bfd_reloc.howto;
5332
5333 if (howto == NULL)
5334 {
5335 (*_bfd_error_handler)
5336 (_("%B: unrecognized relocation (0x%x) in section `%A'"),
5337 input_bfd, input_section, r_type);
5338 return FALSE;
5339 }
5340 bfd_r_type = elfNN_aarch64_bfd_reloc_from_howto (howto);
5341
5342 h = NULL;
5343 sym = NULL;
5344 sec = NULL;
5345
5346 if (r_symndx < symtab_hdr->sh_info)
5347 {
5348 sym = local_syms + r_symndx;
5349 sym_type = ELFNN_ST_TYPE (sym->st_info);
5350 sec = local_sections[r_symndx];
5351
5352 /* An object file might have a reference to a local
5353 undefined symbol. This is a daft object file, but we
5354 should at least do something about it. */
5355 if (r_type != R_AARCH64_NONE && r_type != R_AARCH64_NULL
5356 && bfd_is_und_section (sec)
5357 && ELF_ST_BIND (sym->st_info) != STB_WEAK)
5358 {
5359 if (!info->callbacks->undefined_symbol
5360 (info, bfd_elf_string_from_elf_section
5361 (input_bfd, symtab_hdr->sh_link, sym->st_name),
5362 input_bfd, input_section, rel->r_offset, TRUE))
5363 return FALSE;
5364 }
5365
5366 relocation = _bfd_elf_rela_local_sym (output_bfd, sym, &sec, rel);
5367
5368 /* Relocate against local STT_GNU_IFUNC symbol. */
5369 if (!info->relocatable
5370 && ELF_ST_TYPE (sym->st_info) == STT_GNU_IFUNC)
5371 {
5372 h = elfNN_aarch64_get_local_sym_hash (globals, input_bfd,
5373 rel, FALSE);
5374 if (h == NULL)
5375 abort ();
5376
5377 /* Set STT_GNU_IFUNC symbol value. */
5378 h->root.u.def.value = sym->st_value;
5379 h->root.u.def.section = sec;
5380 }
5381 }
5382 else
5383 {
5384 bfd_boolean warned, ignored;
5385
5386 RELOC_FOR_GLOBAL_SYMBOL (info, input_bfd, input_section, rel,
5387 r_symndx, symtab_hdr, sym_hashes,
5388 h, sec, relocation,
5389 unresolved_reloc, warned, ignored);
5390
5391 sym_type = h->type;
5392 }
5393
5394 if (sec != NULL && discarded_section (sec))
5395 RELOC_AGAINST_DISCARDED_SECTION (info, input_bfd, input_section,
5396 rel, 1, relend, howto, 0, contents);
5397
5398 if (info->relocatable)
5399 continue;
5400
5401 if (h != NULL)
5402 name = h->root.root.string;
5403 else
5404 {
5405 name = (bfd_elf_string_from_elf_section
5406 (input_bfd, symtab_hdr->sh_link, sym->st_name));
5407 if (name == NULL || *name == '\0')
5408 name = bfd_section_name (input_bfd, sec);
5409 }
5410
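/* Diagnose a TLS relocation applied to a non-TLS symbol, and
   vice versa. */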
5411 if (r_symndx != 0
5412 && r_type != R_AARCH64_NONE
5413 && r_type != R_AARCH64_NULL
5414 && (h == NULL
5415 || h->root.type == bfd_link_hash_defined
5416 || h->root.type == bfd_link_hash_defweak)
5417 && IS_AARCH64_TLS_RELOC (bfd_r_type) != (sym_type == STT_TLS))
5418 {
5419 (*_bfd_error_handler)
5420 ((sym_type == STT_TLS
5421 ? _("%B(%A+0x%lx): %s used with TLS symbol %s")
5422 : _("%B(%A+0x%lx): %s used with non-TLS symbol %s")),
5423 input_bfd,
5424 input_section, (long) rel->r_offset, howto->name, name);
5425 }
5426
5427 /* We relax only if we can see that there can be a valid transition
5428 from one reloc type to another.
5429 We call elfNN_aarch64_final_link_relocate unless we're completely
5430 done, i.e., the relaxation produced the final output we want. */
5431
5432 relaxed_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type,
5433 h, r_symndx);
5434 if (relaxed_bfd_r_type != bfd_r_type)
5435 {
5436 bfd_r_type = relaxed_bfd_r_type;
5437 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
5438 BFD_ASSERT (howto != NULL);
5439 r_type = howto->type;
5440 r = elfNN_aarch64_tls_relax (globals, input_bfd, contents, rel, h);
5441 unresolved_reloc = 0;
5442 }
5443 else
5444 r = bfd_reloc_continue;
5445
5446 /* There may be multiple consecutive relocations for the
5447 same offset. In that case we are supposed to treat the
5448 output of each relocation as the addend for the next. */
5449 if (rel + 1 < relend
5450 && rel->r_offset == rel[1].r_offset
5451 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NONE
5452 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NULL)
5453 save_addend = TRUE;
5454 else
5455 save_addend = FALSE;
5456
5457 if (r == bfd_reloc_continue)
5458 r = elfNN_aarch64_final_link_relocate (howto, input_bfd, output_bfd,
5459 input_section, contents, rel,
5460 relocation, info, sec,
5461 h, &unresolved_reloc,
5462 save_addend, &addend, sym);
5463
5464 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
5465 {
5466 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5467 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5468 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5469 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5470 {
5471 bfd_boolean need_relocs = FALSE;
5472 bfd_byte *loc;
5473 int indx;
5474 bfd_vma off;
5475
5476 off = symbol_got_offset (input_bfd, h, r_symndx);
5477 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5478
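/* A dynamic relocation is needed when producing a shared object or
   when the symbol is dynamic (indx != 0); it can be skipped for an
   undefined weak symbol with non-default visibility, which is known
   to resolve to zero. */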
5479 need_relocs =
5480 (info->shared || indx != 0) &&
5481 (h == NULL
5482 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5483 || h->root.type != bfd_link_hash_undefweak);
5484
5485 BFD_ASSERT (globals->root.srelgot != NULL);
5486
5487 if (need_relocs)
5488 {
5489 Elf_Internal_Rela rela;
5490 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPMOD));
5491 rela.r_addend = 0;
5492 rela.r_offset = globals->root.sgot->output_section->vma +
5493 globals->root.sgot->output_offset + off;
5494
5495
5496 loc = globals->root.srelgot->contents;
5497 loc += globals->root.srelgot->reloc_count++
5498 * RELOC_SIZE (globals);
5499 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5500
5501 if (indx == 0)
5502 {
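/* The DTP-relative offset is known at link time, so store it
   directly in the second GOT slot. */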
5503 bfd_put_NN (output_bfd,
5504 relocation - dtpoff_base (info),
5505 globals->root.sgot->contents + off
5506 + GOT_ENTRY_SIZE);
5507 }
5508 else
5509 {
5510 /* This TLS symbol is global. We emit a
5511 relocation to fixup the tls offset at load
5512 time. */
5513 rela.r_info =
5514 ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPREL));
5515 rela.r_addend = 0;
5516 rela.r_offset =
5517 (globals->root.sgot->output_section->vma
5518 + globals->root.sgot->output_offset + off
5519 + GOT_ENTRY_SIZE);
5520
5521 loc = globals->root.srelgot->contents;
5522 loc += globals->root.srelgot->reloc_count++
5523 * RELOC_SIZE (globals);
5524 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5525 bfd_put_NN (output_bfd, (bfd_vma) 0,
5526 globals->root.sgot->contents + off
5527 + GOT_ENTRY_SIZE);
5528 }
5529 }
5530 else
5531 {
5532 bfd_put_NN (output_bfd, (bfd_vma) 1,
5533 globals->root.sgot->contents + off);
5534 bfd_put_NN (output_bfd,
5535 relocation - dtpoff_base (info),
5536 globals->root.sgot->contents + off
5537 + GOT_ENTRY_SIZE);
5538 }
5539
5540 symbol_got_offset_mark (input_bfd, h, r_symndx);
5541 }
5542 break;
5543
5544 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5545 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5546 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5547 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5548 {
5549 bfd_boolean need_relocs = FALSE;
5550 bfd_byte *loc;
5551 int indx;
5552 bfd_vma off;
5553
5554 off = symbol_got_offset (input_bfd, h, r_symndx);
5555
5556 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5557
5558 need_relocs =
5559 (info->shared || indx != 0) &&
5560 (h == NULL
5561 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5562 || h->root.type != bfd_link_hash_undefweak);
5563
5564 BFD_ASSERT (globals->root.srelgot != NULL);
5565
5566 if (need_relocs)
5567 {
5568 Elf_Internal_Rela rela;
5569
5570 if (indx == 0)
5571 rela.r_addend = relocation - dtpoff_base (info);
5572 else
5573 rela.r_addend = 0;
5574
5575 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_TPREL));
5576 rela.r_offset = globals->root.sgot->output_section->vma +
5577 globals->root.sgot->output_offset + off;
5578
5579 loc = globals->root.srelgot->contents;
5580 loc += globals->root.srelgot->reloc_count++
5581 * RELOC_SIZE (globals);
5582
5583 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5584
5585 bfd_put_NN (output_bfd, rela.r_addend,
5586 globals->root.sgot->contents + off);
5587 }
5588 else
5589 bfd_put_NN (output_bfd, relocation - tpoff_base (info),
5590 globals->root.sgot->contents + off);
5591
5592 symbol_got_offset_mark (input_bfd, h, r_symndx);
5593 }
5594 break;
5595
5596 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5597 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5598 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5599 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5600 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5601 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5602 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5603 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5604 break;
5605
5606 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5607 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5608 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5609 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5610 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5611 if (! symbol_tlsdesc_got_offset_mark_p (input_bfd, h, r_symndx))
5612 {
5613 bfd_boolean need_relocs = FALSE;
5614 int indx = h && h->dynindx != -1 ? h->dynindx : 0;
5615 bfd_vma off = symbol_tlsdesc_got_offset (input_bfd, h, r_symndx);
5616
5617 need_relocs = (h == NULL
5618 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5619 || h->root.type != bfd_link_hash_undefweak);
5620
5621 BFD_ASSERT (globals->root.srelgot != NULL);
5622 BFD_ASSERT (globals->root.sgot != NULL);
5623
5624 if (need_relocs)
5625 {
5626 bfd_byte *loc;
5627 Elf_Internal_Rela rela;
5628 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLSDESC));
5629
5630 rela.r_addend = 0;
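/* TLSDESC GOT entries are allocated in .got.plt, after the slots
   reserved for the PLT jump table; sgotplt_jump_table_size accounts
   for those reserved slots. */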
5631 rela.r_offset = (globals->root.sgotplt->output_section->vma
5632 + globals->root.sgotplt->output_offset
5633 + off + globals->sgotplt_jump_table_size);
5634
5635 if (indx == 0)
5636 rela.r_addend = relocation - dtpoff_base (info);
5637
5638 /* Allocate the next available slot in the PLT reloc
5639 section to hold our R_AARCH64_TLSDESC; the next
5640 available slot is determined from reloc_count,
5641 which we step. But note, reloc_count was
5642 artificially moved down while allocating slots for
5643 real PLT relocs such that all of the PLT relocs
5644 will fit above the initial reloc_count and the
5645 extra stuff will fit below. */
5646 loc = globals->root.srelplt->contents;
5647 loc += globals->root.srelplt->reloc_count++
5648 * RELOC_SIZE (globals);
5649
5650 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5651
5652 bfd_put_NN (output_bfd, (bfd_vma) 0,
5653 globals->root.sgotplt->contents + off +
5654 globals->sgotplt_jump_table_size);
5655 bfd_put_NN (output_bfd, (bfd_vma) 0,
5656 globals->root.sgotplt->contents + off +
5657 globals->sgotplt_jump_table_size +
5658 GOT_ENTRY_SIZE);
5659 }
5660
5661 symbol_tlsdesc_got_offset_mark (input_bfd, h, r_symndx);
5662 }
5663 break;
5664 default:
5665 break;
5666 }
5667
5668 if (!save_addend)
5669 addend = 0;
5670
5671
5672 /* Dynamic relocs are not propagated for SEC_DEBUGGING sections
5673 because such sections are not SEC_ALLOC and thus ld.so will
5674 not process them. */
5675 if (unresolved_reloc
5676 && !((input_section->flags & SEC_DEBUGGING) != 0
5677 && h->def_dynamic)
5678 && _bfd_elf_section_offset (output_bfd, info, input_section,
5679 rel->r_offset) != (bfd_vma) - 1)
5680 {
5681 (*_bfd_error_handler)
5682 (_
5683 ("%B(%A+0x%lx): unresolvable %s relocation against symbol `%s'"),
5684 input_bfd, input_section, (long) rel->r_offset, howto->name,
5685 h->root.root.string);
5686 return FALSE;
5687 }
5688
5689 if (r != bfd_reloc_ok && r != bfd_reloc_continue)
5690 {
5691 switch (r)
5692 {
5693 case bfd_reloc_overflow:
5694 if (!(*info->callbacks->reloc_overflow)
5695 (info, (h ? &h->root : NULL), name, howto->name, (bfd_vma) 0,
5696 input_bfd, input_section, rel->r_offset))
5697 return FALSE;
5698 break;
5699
5700 case bfd_reloc_undefined:
5701 if (!((*info->callbacks->undefined_symbol)
5702 (info, name, input_bfd, input_section,
5703 rel->r_offset, TRUE)))
5704 return FALSE;
5705 break;
5706
5707 case bfd_reloc_outofrange:
5708 error_message = _("out of range");
5709 goto common_error;
5710
5711 case bfd_reloc_notsupported:
5712 error_message = _("unsupported relocation");
5713 goto common_error;
5714
5715 case bfd_reloc_dangerous:
5716 /* error_message should already be set. */
5717 goto common_error;
5718
5719 default:
5720 error_message = _("unknown error");
5721 /* Fall through. */
5722
5723 common_error:
5724 BFD_ASSERT (error_message != NULL);
5725 if (!((*info->callbacks->reloc_dangerous)
5726 (info, error_message, input_bfd, input_section,
5727 rel->r_offset)))
5728 return FALSE;
5729 break;
5730 }
5731 }
5732 }
5733
5734 return TRUE;
5735 }
5736
5737 /* Set the right machine number. */
5738
5739 static bfd_boolean
5740 elfNN_aarch64_object_p (bfd *abfd)
5741 {
5742 #if ARCH_SIZE == 32
5743 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64_ilp32);
5744 #else
5745 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64);
5746 #endif
5747 return TRUE;
5748 }
5749
5750 /* Function to keep AArch64 specific flags in the ELF header. */
5751
5752 static bfd_boolean
5753 elfNN_aarch64_set_private_flags (bfd *abfd, flagword flags)
5754 {
5755 if (elf_flags_init (abfd) && elf_elfheader (abfd)->e_flags != flags)
5756 {
5757 }
5758 else
5759 {
5760 elf_elfheader (abfd)->e_flags = flags;
5761 elf_flags_init (abfd) = TRUE;
5762 }
5763
5764 return TRUE;
5765 }
5766
5767 /* Merge backend specific data from an object file to the output
5768 object file when linking. */
5769
5770 static bfd_boolean
5771 elfNN_aarch64_merge_private_bfd_data (bfd *ibfd, bfd *obfd)
5772 {
5773 flagword out_flags;
5774 flagword in_flags;
5775 bfd_boolean flags_compatible = TRUE;
5776 asection *sec;
5777
5778 /* Check if we have the same endianness. */
5779 if (!_bfd_generic_verify_endian_match (ibfd, obfd))
5780 return FALSE;
5781
5782 if (!is_aarch64_elf (ibfd) || !is_aarch64_elf (obfd))
5783 return TRUE;
5784
5785 /* The input BFD must have had its flags initialised. */
5786 /* The following seems bogus to me -- The flags are initialized in
5787 the assembler but I don't think an elf_flags_init field is
5788 written into the object. */
5789 /* BFD_ASSERT (elf_flags_init (ibfd)); */
5790
5791 in_flags = elf_elfheader (ibfd)->e_flags;
5792 out_flags = elf_elfheader (obfd)->e_flags;
5793
5794 if (!elf_flags_init (obfd))
5795 {
5796 /* If the input is the default architecture and had the default
5797 flags then do not bother setting the flags for the output
5798 architecture; instead allow future merges to do this. If no
5799 future merges ever set these flags then they will retain their
5800 uninitialised values, which, surprise surprise, correspond
5801 to the default values. */
5802 if (bfd_get_arch_info (ibfd)->the_default
5803 && elf_elfheader (ibfd)->e_flags == 0)
5804 return TRUE;
5805
5806 elf_flags_init (obfd) = TRUE;
5807 elf_elfheader (obfd)->e_flags = in_flags;
5808
5809 if (bfd_get_arch (obfd) == bfd_get_arch (ibfd)
5810 && bfd_get_arch_info (obfd)->the_default)
5811 return bfd_set_arch_mach (obfd, bfd_get_arch (ibfd),
5812 bfd_get_mach (ibfd));
5813
5814 return TRUE;
5815 }
5816
5817 /* Identical flags must be compatible. */
5818 if (in_flags == out_flags)
5819 return TRUE;
5820
5821 /* Check to see if the input BFD actually contains any sections. If
5822 not, its flags may not have been initialised either, but it
5823 cannot actually cause any incompatibility. Do not short-circuit
5824 dynamic objects; their section list may be emptied by
5825 elf_link_add_object_symbols.
5826
5827 Also check to see if there are no code sections in the input.
5828 In this case there is no need to check for code specific flags.
5829 XXX - do we need to worry about floating-point format compatibility
5830 in data sections? */
5831 if (!(ibfd->flags & DYNAMIC))
5832 {
5833 bfd_boolean null_input_bfd = TRUE;
5834 bfd_boolean only_data_sections = TRUE;
5835
5836 for (sec = ibfd->sections; sec != NULL; sec = sec->next)
5837 {
5838 if ((bfd_get_section_flags (ibfd, sec)
5839 & (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5840 == (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5841 only_data_sections = FALSE;
5842
5843 null_input_bfd = FALSE;
5844 break;
5845 }
5846
5847 if (null_input_bfd || only_data_sections)
5848 return TRUE;
5849 }
5850
5851 return flags_compatible;
5852 }
5853
5854 /* Display the flags field. */
5855
5856 static bfd_boolean
5857 elfNN_aarch64_print_private_bfd_data (bfd *abfd, void *ptr)
5858 {
5859 FILE *file = (FILE *) ptr;
5860 unsigned long flags;
5861
5862 BFD_ASSERT (abfd != NULL && ptr != NULL);
5863
5864 /* Print normal ELF private data. */
5865 _bfd_elf_print_private_bfd_data (abfd, ptr);
5866
5867 flags = elf_elfheader (abfd)->e_flags;
5868 /* Ignore init flag - it may not be set, despite the flags field
5869 containing valid data. */
5870
5871 /* xgettext:c-format */
5872 fprintf (file, _("private flags = %lx:"), elf_elfheader (abfd)->e_flags);
5873
5874 if (flags)
5875 fprintf (file, _("<Unrecognised flag bits set>"));
5876
5877 fputc ('\n', file);
5878
5879 return TRUE;
5880 }
5881
5882 /* Update the got entry reference counts for the section being removed. */
5883
5884 static bfd_boolean
5885 elfNN_aarch64_gc_sweep_hook (bfd *abfd,
5886 struct bfd_link_info *info,
5887 asection *sec,
5888 const Elf_Internal_Rela * relocs)
5889 {
5890 struct elf_aarch64_link_hash_table *htab;
5891 Elf_Internal_Shdr *symtab_hdr;
5892 struct elf_link_hash_entry **sym_hashes;
5893 struct elf_aarch64_local_symbol *locals;
5894 const Elf_Internal_Rela *rel, *relend;
5895
5896 if (info->relocatable)
5897 return TRUE;
5898
5899 htab = elf_aarch64_hash_table (info);
5900
5901 if (htab == NULL)
5902 return FALSE;
5903
5904 elf_section_data (sec)->local_dynrel = NULL;
5905
5906 symtab_hdr = &elf_symtab_hdr (abfd);
5907 sym_hashes = elf_sym_hashes (abfd);
5908
5909 locals = elf_aarch64_locals (abfd);
5910
5911 relend = relocs + sec->reloc_count;
5912 for (rel = relocs; rel < relend; rel++)
5913 {
5914 unsigned long r_symndx;
5915 unsigned int r_type;
5916 struct elf_link_hash_entry *h = NULL;
5917
5918 r_symndx = ELFNN_R_SYM (rel->r_info);
5919
5920 if (r_symndx >= symtab_hdr->sh_info)
5921 {
5922
5923 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
5924 while (h->root.type == bfd_link_hash_indirect
5925 || h->root.type == bfd_link_hash_warning)
5926 h = (struct elf_link_hash_entry *) h->root.u.i.link;
5927 }
5928 else
5929 {
5930 Elf_Internal_Sym *isym;
5931
5932 /* A local symbol. */
5933 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
5934 abfd, r_symndx);
5935
5936 /* Check relocation against local STT_GNU_IFUNC symbol. */
5937 if (isym != NULL
5938 && ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
5939 {
5940 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel, FALSE);
5941 if (h == NULL)
5942 abort ();
5943 }
5944 }
5945
5946 if (h)
5947 {
5948 struct elf_aarch64_link_hash_entry *eh;
5949 struct elf_dyn_relocs **pp;
5950 struct elf_dyn_relocs *p;
5951
5952 eh = (struct elf_aarch64_link_hash_entry *) h;
5953
5954 for (pp = &eh->dyn_relocs; (p = *pp) != NULL; pp = &p->next)
5955 if (p->sec == sec)
5956 {
5957 /* Everything must go for SEC. */
5958 *pp = p->next;
5959 break;
5960 }
5961 }
5962
5963 r_type = ELFNN_R_TYPE (rel->r_info);
5964 switch (aarch64_tls_transition (abfd, info, r_type, h, r_symndx))
5965 {
5966 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
5967 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
5968 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
5969 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
5970 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
5971 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
5972 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5973 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5974 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5975 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
5976 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
5977 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5978 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5979 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5980 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5981 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5982 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
5983 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
5984 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5985 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5986 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5987 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5988 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5989 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5990 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5991 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5992 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5993 if (h != NULL)
5994 {
5995 if (h->got.refcount > 0)
5996 h->got.refcount -= 1;
5997
5998 if (h->type == STT_GNU_IFUNC)
5999 {
6000 if (h->plt.refcount > 0)
6001 h->plt.refcount -= 1;
6002 }
6003 }
6004 else if (locals != NULL)
6005 {
6006 if (locals[r_symndx].got_refcount > 0)
6007 locals[r_symndx].got_refcount -= 1;
6008 }
6009 break;
6010
6011 case BFD_RELOC_AARCH64_CALL26:
6012 case BFD_RELOC_AARCH64_JUMP26:
6013 /* If this is a local symbol then we resolve it
6014 directly without creating a PLT entry. */
6015 if (h == NULL)
6016 continue;
6017
6018 if (h->plt.refcount > 0)
6019 h->plt.refcount -= 1;
6020 break;
6021
6022 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
6023 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6024 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
6025 case BFD_RELOC_AARCH64_MOVW_G0_NC:
6026 case BFD_RELOC_AARCH64_MOVW_G1_NC:
6027 case BFD_RELOC_AARCH64_MOVW_G2_NC:
6028 case BFD_RELOC_AARCH64_MOVW_G3:
6029 case BFD_RELOC_AARCH64_NN:
6030 if (h != NULL && info->executable)
6031 {
6032 if (h->plt.refcount > 0)
6033 h->plt.refcount -= 1;
6034 }
6035 break;
6036
6037 default:
6038 break;
6039 }
6040 }
6041
6042 return TRUE;
6043 }
6044
6045 /* Adjust a symbol defined by a dynamic object and referenced by a
6046 regular object. The current definition is in some section of the
6047 dynamic object, but we're not including those sections. We have to
6048 change the definition to something the rest of the link can
6049 understand. */
6050
6051 static bfd_boolean
6052 elfNN_aarch64_adjust_dynamic_symbol (struct bfd_link_info *info,
6053 struct elf_link_hash_entry *h)
6054 {
6055 struct elf_aarch64_link_hash_table *htab;
6056 asection *s;
6057
6058 /* If this is a function, put it in the procedure linkage table. We
6059 will fill in the contents of the procedure linkage table later,
6060 when we know the address of the .got section. */
6061 if (h->type == STT_FUNC || h->type == STT_GNU_IFUNC || h->needs_plt)
6062 {
6063 if (h->plt.refcount <= 0
6064 || (h->type != STT_GNU_IFUNC
6065 && (SYMBOL_CALLS_LOCAL (info, h)
6066 || (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT
6067 && h->root.type == bfd_link_hash_undefweak))))
6068 {
6069 /* This case can occur if we saw a CALL26 reloc in
6070 an input file, but the symbol wasn't referred to
6071 by a dynamic object or all references were
6072 garbage collected. In that case we can resolve
6073 the calls directly and do not need a PLT entry. */
6074 h->plt.offset = (bfd_vma) - 1;
6075 h->needs_plt = 0;
6076 }
6077
6078 return TRUE;
6079 }
6080 else
6081 /* Otherwise, reset to -1. */
6082 h->plt.offset = (bfd_vma) - 1;
6083
6084
6085 /* If this is a weak symbol, and there is a real definition, the
6086 processor independent code will have arranged for us to see the
6087 real definition first, and we can just use the same value. */
6088 if (h->u.weakdef != NULL)
6089 {
6090 BFD_ASSERT (h->u.weakdef->root.type == bfd_link_hash_defined
6091 || h->u.weakdef->root.type == bfd_link_hash_defweak);
6092 h->root.u.def.section = h->u.weakdef->root.u.def.section;
6093 h->root.u.def.value = h->u.weakdef->root.u.def.value;
6094 if (ELIMINATE_COPY_RELOCS || info->nocopyreloc)
6095 h->non_got_ref = h->u.weakdef->non_got_ref;
6096 return TRUE;
6097 }
6098
6099 /* If we are creating a shared library, we must presume that the
6100 only references to the symbol are via the global offset table.
6101 For such cases we need not do anything here; the relocations will
6102 be handled correctly by relocate_section. */
6103 if (info->shared)
6104 return TRUE;
6105
6106 /* If there are no references to this symbol that do not use the
6107 GOT, we don't need to generate a copy reloc. */
6108 if (!h->non_got_ref)
6109 return TRUE;
6110
6111 /* If -z nocopyreloc was given, we won't generate them either. */
6112 if (info->nocopyreloc)
6113 {
6114 h->non_got_ref = 0;
6115 return TRUE;
6116 }
6117
6118 /* We must allocate the symbol in our .dynbss section, which will
6119 become part of the .bss section of the executable. There will be
6120 an entry for this symbol in the .dynsym section. The dynamic
6121 object will contain position independent code, so all references
6122 from the dynamic object to this symbol will go through the global
6123 offset table. The dynamic linker will use the .dynsym entry to
6124 determine the address it must put in the global offset table, so
6125 both the dynamic object and the regular object will refer to the
6126 same memory location for the variable. */
6127
6128 htab = elf_aarch64_hash_table (info);
6129
6130 /* We must generate a R_AARCH64_COPY reloc to tell the dynamic linker
6131 to copy the initial value out of the dynamic object and into the
6132 runtime process image. */
6133 if ((h->root.u.def.section->flags & SEC_ALLOC) != 0 && h->size != 0)
6134 {
6135 htab->srelbss->size += RELOC_SIZE (htab);
6136 h->needs_copy = 1;
6137 }
6138
6139 s = htab->sdynbss;
6140
6141 return _bfd_elf_adjust_dynamic_copy (info, h, s);
6142
6143 }
6144
6145 static bfd_boolean
6146 elfNN_aarch64_allocate_local_symbols (bfd *abfd, unsigned number)
6147 {
6148 struct elf_aarch64_local_symbol *locals;
6149 locals = elf_aarch64_locals (abfd);
6150 if (locals == NULL)
6151 {
6152 locals = (struct elf_aarch64_local_symbol *)
6153 bfd_zalloc (abfd, number * sizeof (struct elf_aarch64_local_symbol));
6154 if (locals == NULL)
6155 return FALSE;
6156 elf_aarch64_locals (abfd) = locals;
6157 }
6158 return TRUE;
6159 }
6160
6161 /* Create the .got section to hold the global offset table. */
6162
6163 static bfd_boolean
6164 aarch64_elf_create_got_section (bfd *abfd, struct bfd_link_info *info)
6165 {
6166 const struct elf_backend_data *bed = get_elf_backend_data (abfd);
6167 flagword flags;
6168 asection *s;
6169 struct elf_link_hash_entry *h;
6170 struct elf_link_hash_table *htab = elf_hash_table (info);
6171
6172 /* This function may be called more than once. */
6173 s = bfd_get_linker_section (abfd, ".got");
6174 if (s != NULL)
6175 return TRUE;
6176
6177 flags = bed->dynamic_sec_flags;
6178
6179 s = bfd_make_section_anyway_with_flags (abfd,
6180 (bed->rela_plts_and_copies_p
6181 ? ".rela.got" : ".rel.got"),
6182 (bed->dynamic_sec_flags
6183 | SEC_READONLY));
6184 if (s == NULL
6185 || ! bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6186 return FALSE;
6187 htab->srelgot = s;
6188
6189 s = bfd_make_section_anyway_with_flags (abfd, ".got", flags);
6190 if (s == NULL
6191 || !bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6192 return FALSE;
6193 htab->sgot = s;
6194 htab->sgot->size += GOT_ENTRY_SIZE;
6195
6196 if (bed->want_got_sym)
6197 {
6198 /* Define the symbol _GLOBAL_OFFSET_TABLE_ at the start of the .got
6199 (or .got.plt) section. We don't do this in the linker script
6200 because we don't want to define the symbol if we are not creating
6201 a global offset table. */
6202 h = _bfd_elf_define_linkage_sym (abfd, info, s,
6203 "_GLOBAL_OFFSET_TABLE_");
6204 elf_hash_table (info)->hgot = h;
6205 if (h == NULL)
6206 return FALSE;
6207 }
6208
6209 if (bed->want_got_plt)
6210 {
6211 s = bfd_make_section_anyway_with_flags (abfd, ".got.plt", flags);
6212 if (s == NULL
6213 || !bfd_set_section_alignment (abfd, s,
6214 bed->s->log_file_align))
6215 return FALSE;
6216 htab->sgotplt = s;
6217 }
6218
6219 /* The first bit of the global offset table is the header. */
6220 s->size += bed->got_header_size;
6221
6222 return TRUE;
6223 }
6224
6225 /* Look through the relocs for a section during the first phase. */
6226
6227 static bfd_boolean
6228 elfNN_aarch64_check_relocs (bfd *abfd, struct bfd_link_info *info,
6229 asection *sec, const Elf_Internal_Rela *relocs)
6230 {
6231 Elf_Internal_Shdr *symtab_hdr;
6232 struct elf_link_hash_entry **sym_hashes;
6233 const Elf_Internal_Rela *rel;
6234 const Elf_Internal_Rela *rel_end;
6235 asection *sreloc;
6236
6237 struct elf_aarch64_link_hash_table *htab;
6238
6239 if (info->relocatable)
6240 return TRUE;
6241
6242 BFD_ASSERT (is_aarch64_elf (abfd));
6243
6244 htab = elf_aarch64_hash_table (info);
6245 sreloc = NULL;
6246
6247 symtab_hdr = &elf_symtab_hdr (abfd);
6248 sym_hashes = elf_sym_hashes (abfd);
6249
6250 rel_end = relocs + sec->reloc_count;
6251 for (rel = relocs; rel < rel_end; rel++)
6252 {
6253 struct elf_link_hash_entry *h;
6254 unsigned long r_symndx;
6255 unsigned int r_type;
6256 bfd_reloc_code_real_type bfd_r_type;
6257 Elf_Internal_Sym *isym;
6258
6259 r_symndx = ELFNN_R_SYM (rel->r_info);
6260 r_type = ELFNN_R_TYPE (rel->r_info);
6261
6262 if (r_symndx >= NUM_SHDR_ENTRIES (symtab_hdr))
6263 {
6264 (*_bfd_error_handler) (_("%B: bad symbol index: %d"), abfd,
6265 r_symndx);
6266 return FALSE;
6267 }
6268
6269 if (r_symndx < symtab_hdr->sh_info)
6270 {
6271 /* A local symbol. */
6272 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6273 abfd, r_symndx);
6274 if (isym == NULL)
6275 return FALSE;
6276
6277 /* Check relocation against local STT_GNU_IFUNC symbol. */
6278 if (ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
6279 {
6280 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel,
6281 TRUE);
6282 if (h == NULL)
6283 return FALSE;
6284
6285 /* Fake a STT_GNU_IFUNC symbol. */
6286 h->type = STT_GNU_IFUNC;
6287 h->def_regular = 1;
6288 h->ref_regular = 1;
6289 h->forced_local = 1;
6290 h->root.type = bfd_link_hash_defined;
6291 }
6292 else
6293 h = NULL;
6294 }
6295 else
6296 {
6297 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
6298 while (h->root.type == bfd_link_hash_indirect
6299 || h->root.type == bfd_link_hash_warning)
6300 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6301
6302 /* PR15323, ref flags aren't set for references in the same
6303 object. */
6304 h->root.non_ir_ref = 1;
6305 }
6306
6307 /* Could be done earlier, if h were already available. */
6308 bfd_r_type = aarch64_tls_transition (abfd, info, r_type, h, r_symndx);
6309
6310 if (h != NULL)
6311 {
6312 /* Create the ifunc sections for static executables. If we
6313 never see an indirect function symbol and we are not building
6314 a static executable, those sections will be empty and
6315 won't appear in the output. */
6316 switch (bfd_r_type)
6317 {
6318 default:
6319 break;
6320
6321 case BFD_RELOC_AARCH64_ADD_LO12:
6322 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6323 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6324 case BFD_RELOC_AARCH64_CALL26:
6325 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6326 case BFD_RELOC_AARCH64_JUMP26:
6327 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
6328 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6329 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
6330 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6331 case BFD_RELOC_AARCH64_NN:
6332 if (htab->root.dynobj == NULL)
6333 htab->root.dynobj = abfd;
6334 if (!_bfd_elf_create_ifunc_sections (htab->root.dynobj, info))
6335 return FALSE;
6336 break;
6337 }
6338
6339 /* It is referenced by a non-shared object. */
6340 h->ref_regular = 1;
6341 h->root.non_ir_ref = 1;
6342 }
6343
6344 switch (bfd_r_type)
6345 {
6346 case BFD_RELOC_AARCH64_NN:
6347
6348 /* We don't need to handle relocs into sections not going into
6349 the "real" output. */
6350 if ((sec->flags & SEC_ALLOC) == 0)
6351 break;
6352
6353 if (h != NULL)
6354 {
6355 if (!info->shared)
6356 h->non_got_ref = 1;
6357
6358 h->plt.refcount += 1;
6359 h->pointer_equality_needed = 1;
6360 }
6361
6362 /* No need to do anything if we're not creating a shared
6363 object. */
6364 if (! info->shared)
6365 break;
6366
6367 {
6368 struct elf_dyn_relocs *p;
6369 struct elf_dyn_relocs **head;
6370
6371 /* We must copy these reloc types into the output file.
6372 Create a reloc section in dynobj and make room for
6373 this reloc. */
6374 if (sreloc == NULL)
6375 {
6376 if (htab->root.dynobj == NULL)
6377 htab->root.dynobj = abfd;
6378
6379 sreloc = _bfd_elf_make_dynamic_reloc_section
6380 (sec, htab->root.dynobj, LOG_FILE_ALIGN, abfd, /*rela? */ TRUE);
6381
6382 if (sreloc == NULL)
6383 return FALSE;
6384 }
6385
6386 /* If this is a global symbol, we count the number of
6387 relocations we need for this symbol. */
6388 if (h != NULL)
6389 {
6390 struct elf_aarch64_link_hash_entry *eh;
6391 eh = (struct elf_aarch64_link_hash_entry *) h;
6392 head = &eh->dyn_relocs;
6393 }
6394 else
6395 {
6396 /* Track dynamic relocs needed for local syms too.
6397 We really need local syms available to do this
6398 easily. Oh well. */
6399
6400 asection *s;
6401 void **vpp;
6402
6403 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6404 abfd, r_symndx);
6405 if (isym == NULL)
6406 return FALSE;
6407
6408 s = bfd_section_from_elf_index (abfd, isym->st_shndx);
6409 if (s == NULL)
6410 s = sec;
6411
6412 /* Beware of type punned pointers vs strict aliasing
6413 rules. */
6414 vpp = &(elf_section_data (s)->local_dynrel);
6415 head = (struct elf_dyn_relocs **) vpp;
6416 }
6417
6418 p = *head;
6419 if (p == NULL || p->sec != sec)
6420 {
6421 bfd_size_type amt = sizeof *p;
6422 p = ((struct elf_dyn_relocs *)
6423 bfd_zalloc (htab->root.dynobj, amt));
6424 if (p == NULL)
6425 return FALSE;
6426 p->next = *head;
6427 *head = p;
6428 p->sec = sec;
6429 }
6430
6431 p->count += 1;
6432
6433 }
6434 break;
6435
6436 /* RR: We probably want to keep a consistency check that
6437 there are no dangling GOT_PAGE relocs. */
6438 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6439 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6440 case BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14:
6441 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6442 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
6443 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6444 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
6445 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
6446 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
6447 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
6448 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
6449 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
6450 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
6451 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
6452 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
6453 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
6454 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
6455 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
6456 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
6457 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
6458 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
6459 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
6460 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
6461 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
6462 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
6463 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
6464 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
6465 {
6466 unsigned got_type;
6467 unsigned old_got_type;
6468
6469 got_type = aarch64_reloc_got_type (bfd_r_type);
6470
6471 if (h)
6472 {
6473 h->got.refcount += 1;
6474 old_got_type = elf_aarch64_hash_entry (h)->got_type;
6475 }
6476 else
6477 {
6478 struct elf_aarch64_local_symbol *locals;
6479
6480 if (!elfNN_aarch64_allocate_local_symbols
6481 (abfd, symtab_hdr->sh_info))
6482 return FALSE;
6483
6484 locals = elf_aarch64_locals (abfd);
6485 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6486 locals[r_symndx].got_refcount += 1;
6487 old_got_type = locals[r_symndx].got_type;
6488 }
6489
6490 /* If a variable is accessed with both general dynamic TLS
6491 methods, two slots may be created. */
6492 if (GOT_TLS_GD_ANY_P (old_got_type) && GOT_TLS_GD_ANY_P (got_type))
6493 got_type |= old_got_type;
6494
6495 /* We will already have issued an error message if there
6496 is a TLS/non-TLS mismatch, based on the symbol type.
6497 So just combine any TLS types needed. */
6498 if (old_got_type != GOT_UNKNOWN && old_got_type != GOT_NORMAL
6499 && got_type != GOT_NORMAL)
6500 got_type |= old_got_type;
6501
6502 /* If the symbol is accessed by both IE and GD methods, we
6503 are able to relax. Turn off the GD flag, without
6504 messing up any other TLS types that may be
6505 involved. */
6506 if ((got_type & GOT_TLS_IE) && GOT_TLS_GD_ANY_P (got_type))
6507 got_type &= ~ (GOT_TLSDESC_GD | GOT_TLS_GD);
6508
6509 if (old_got_type != got_type)
6510 {
6511 if (h != NULL)
6512 elf_aarch64_hash_entry (h)->got_type = got_type;
6513 else
6514 {
6515 struct elf_aarch64_local_symbol *locals;
6516 locals = elf_aarch64_locals (abfd);
6517 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6518 locals[r_symndx].got_type = got_type;
6519 }
6520 }
6521
6522 if (htab->root.dynobj == NULL)
6523 htab->root.dynobj = abfd;
6524 if (! aarch64_elf_create_got_section (htab->root.dynobj, info))
6525 return FALSE;
6526 break;
6527 }
6528
6529 case BFD_RELOC_AARCH64_MOVW_G0_NC:
6530 case BFD_RELOC_AARCH64_MOVW_G1_NC:
6531 case BFD_RELOC_AARCH64_MOVW_G2_NC:
6532 case BFD_RELOC_AARCH64_MOVW_G3:
6533 if (info->shared)
6534 {
6535 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
6536 (*_bfd_error_handler)
6537 (_("%B: relocation %s against `%s' can not be used when making "
6538 "a shared object; recompile with -fPIC"),
6539 abfd, elfNN_aarch64_howto_table[howto_index].name,
6540 (h) ? h->root.root.string : "a local symbol");
6541 bfd_set_error (bfd_error_bad_value);
6542 return FALSE;
6543 }
6544
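/* Fall through. */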
6545 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
6546 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6547 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
6548 if (h != NULL && info->executable)
6549 {
6550 /* If this reloc is in a read-only section, we might
6551 need a copy reloc. We can't check reliably at this
6552 stage whether the section is read-only, as input
6553 sections have not yet been mapped to output sections.
6554 Tentatively set the flag for now, and correct in
6555 adjust_dynamic_symbol. */
6556 h->non_got_ref = 1;
6557 h->plt.refcount += 1;
6558 h->pointer_equality_needed = 1;
6559 }
6560 /* FIXME: RR need to handle these in shared libraries
6561 and essentially bomb out, as these are non-PIC
6562 relocations in shared libraries. */
6563 break;
6564
6565 case BFD_RELOC_AARCH64_CALL26:
6566 case BFD_RELOC_AARCH64_JUMP26:
6567 /* If this is a local symbol then we resolve it
6568 directly without creating a PLT entry. */
6569 if (h == NULL)
6570 continue;
6571
6572 h->needs_plt = 1;
6573 if (h->plt.refcount <= 0)
6574 h->plt.refcount = 1;
6575 else
6576 h->plt.refcount += 1;
6577 break;
6578
6579 default:
6580 break;
6581 }
6582 }
6583
6584 return TRUE;
6585 }
6586
6587 /* Treat mapping symbols as special target symbols. */
6588
6589 static bfd_boolean
6590 elfNN_aarch64_is_target_special_symbol (bfd *abfd ATTRIBUTE_UNUSED,
6591 asymbol *sym)
6592 {
6593 return bfd_is_aarch64_special_symbol_name (sym->name,
6594 BFD_AARCH64_SPECIAL_SYM_TYPE_ANY);
6595 }
6596
6597 /* This is a copy of elf_find_function () from elf.c except that
6598 AArch64 mapping symbols are ignored when looking for function names. */
6599
6600 static bfd_boolean
6601 aarch64_elf_find_function (bfd *abfd ATTRIBUTE_UNUSED,
6602 asymbol **symbols,
6603 asection *section,
6604 bfd_vma offset,
6605 const char **filename_ptr,
6606 const char **functionname_ptr)
6607 {
6608 const char *filename = NULL;
6609 asymbol *func = NULL;
6610 bfd_vma low_func = 0;
6611 asymbol **p;
6612
6613 for (p = symbols; *p != NULL; p++)
6614 {
6615 elf_symbol_type *q;
6616
6617 q = (elf_symbol_type *) * p;
6618
6619 switch (ELF_ST_TYPE (q->internal_elf_sym.st_info))
6620 {
6621 default:
6622 break;
6623 case STT_FILE:
6624 filename = bfd_asymbol_name (&q->symbol);
6625 break;
6626 case STT_FUNC:
6627 case STT_NOTYPE:
6628 /* Skip mapping symbols. */
6629 if ((q->symbol.flags & BSF_LOCAL)
6630 && (bfd_is_aarch64_special_symbol_name
6631 (q->symbol.name, BFD_AARCH64_SPECIAL_SYM_TYPE_ANY)))
6632 continue;
6633 /* Fall through. */
6634 if (bfd_get_section (&q->symbol) == section
6635 && q->symbol.value >= low_func && q->symbol.value <= offset)
6636 {
6637 func = (asymbol *) q;
6638 low_func = q->symbol.value;
6639 }
6640 break;
6641 }
6642 }
6643
6644 if (func == NULL)
6645 return FALSE;
6646
6647 if (filename_ptr)
6648 *filename_ptr = filename;
6649 if (functionname_ptr)
6650 *functionname_ptr = bfd_asymbol_name (func);
6651
6652 return TRUE;
6653 }
6654
6655
6656 /* Find the nearest line to a particular section and offset, for error
6657 reporting. This code is a duplicate of the code in elf.c, except
6658 that it uses aarch64_elf_find_function. */
6659
6660 static bfd_boolean
6661 elfNN_aarch64_find_nearest_line (bfd *abfd,
6662 asymbol **symbols,
6663 asection *section,
6664 bfd_vma offset,
6665 const char **filename_ptr,
6666 const char **functionname_ptr,
6667 unsigned int *line_ptr,
6668 unsigned int *discriminator_ptr)
6669 {
6670 bfd_boolean found = FALSE;
6671
6672 if (_bfd_dwarf2_find_nearest_line (abfd, symbols, NULL, section, offset,
6673 filename_ptr, functionname_ptr,
6674 line_ptr, discriminator_ptr,
6675 dwarf_debug_sections, 0,
6676 &elf_tdata (abfd)->dwarf2_find_line_info))
6677 {
6678 if (!*functionname_ptr)
6679 aarch64_elf_find_function (abfd, symbols, section, offset,
6680 *filename_ptr ? NULL : filename_ptr,
6681 functionname_ptr);
6682
6683 return TRUE;
6684 }
6685
6686 /* Skip _bfd_dwarf1_find_nearest_line since no known AArch64
6687 toolchain uses DWARF1. */
6688
6689 if (!_bfd_stab_section_find_nearest_line (abfd, symbols, section, offset,
6690 &found, filename_ptr,
6691 functionname_ptr, line_ptr,
6692 &elf_tdata (abfd)->line_info))
6693 return FALSE;
6694
6695 if (found && (*functionname_ptr || *line_ptr))
6696 return TRUE;
6697
6698 if (symbols == NULL)
6699 return FALSE;
6700
6701 if (!aarch64_elf_find_function (abfd, symbols, section, offset,
6702 filename_ptr, functionname_ptr))
6703 return FALSE;
6704
6705 *line_ptr = 0;
6706 return TRUE;
6707 }
6708
6709 static bfd_boolean
6710 elfNN_aarch64_find_inliner_info (bfd *abfd,
6711 const char **filename_ptr,
6712 const char **functionname_ptr,
6713 unsigned int *line_ptr)
6714 {
6715 bfd_boolean found;
6716 found = _bfd_dwarf2_find_inliner_info
6717 (abfd, filename_ptr,
6718 functionname_ptr, line_ptr, &elf_tdata (abfd)->dwarf2_find_line_info);
6719 return found;
6720 }
6721
6722
6723 static void
6724 elfNN_aarch64_post_process_headers (bfd *abfd,
6725 struct bfd_link_info *link_info)
6726 {
6727 Elf_Internal_Ehdr *i_ehdrp; /* ELF file header, internal form. */
6728
6729 i_ehdrp = elf_elfheader (abfd);
6730 i_ehdrp->e_ident[EI_ABIVERSION] = AARCH64_ELF_ABI_VERSION;
6731
6732 _bfd_elf_post_process_headers (abfd, link_info);
6733 }
6734
6735 static enum elf_reloc_type_class
6736 elfNN_aarch64_reloc_type_class (const struct bfd_link_info *info ATTRIBUTE_UNUSED,
6737 const asection *rel_sec ATTRIBUTE_UNUSED,
6738 const Elf_Internal_Rela *rela)
6739 {
6740 switch ((int) ELFNN_R_TYPE (rela->r_info))
6741 {
6742 case AARCH64_R (RELATIVE):
6743 return reloc_class_relative;
6744 case AARCH64_R (JUMP_SLOT):
6745 return reloc_class_plt;
6746 case AARCH64_R (COPY):
6747 return reloc_class_copy;
6748 default:
6749 return reloc_class_normal;
6750 }
6751 }
6752
6753 /* Handle an AArch64 specific section when reading an object file. This is
6754 called when bfd_section_from_shdr finds a section with an unknown
6755 type. */
6756
6757 static bfd_boolean
6758 elfNN_aarch64_section_from_shdr (bfd *abfd,
6759 Elf_Internal_Shdr *hdr,
6760 const char *name, int shindex)
6761 {
6762 /* There ought to be a place to keep ELF backend specific flags, but
6763 at the moment there isn't one. We just keep track of the
6764 sections by their name, instead. Fortunately, the ABI gives
6765 names for all the AArch64 specific sections, so we will probably get
6766 away with this. */
6767 switch (hdr->sh_type)
6768 {
6769 case SHT_AARCH64_ATTRIBUTES:
6770 break;
6771
6772 default:
6773 return FALSE;
6774 }
6775
6776 if (!_bfd_elf_make_section_from_shdr (abfd, hdr, name, shindex))
6777 return FALSE;
6778
6779 return TRUE;
6780 }
6781
6782 /* A structure used to record a list of sections, independently
6783 of the next and prev fields in the asection structure. */
6784 typedef struct section_list
6785 {
6786 asection *sec;
6787 struct section_list *next;
6788 struct section_list *prev;
6789 }
6790 section_list;
6791
6792 /* Unfortunately we need to keep a list of sections for which
6793 an _aarch64_elf_section_data structure has been allocated. This
6794 is because it is possible for functions like elfNN_aarch64_write_section
6795 to be called on a section which has had an elf_data_structure
6796 allocated for it (and so the used_by_bfd field is valid) but
6797 for which the AArch64 extended version of this structure - the
6798 _aarch64_elf_section_data structure - has not been allocated. */
6799 static section_list *sections_with_aarch64_elf_section_data = NULL;
6800
6801 static void
6802 record_section_with_aarch64_elf_section_data (asection *sec)
6803 {
6804 struct section_list *entry;
6805
6806 entry = bfd_malloc (sizeof (*entry));
6807 if (entry == NULL)
6808 return;
6809 entry->sec = sec;
6810 entry->next = sections_with_aarch64_elf_section_data;
6811 entry->prev = NULL;
6812 if (entry->next != NULL)
6813 entry->next->prev = entry;
6814 sections_with_aarch64_elf_section_data = entry;
6815 }
6816
6817 static struct section_list *
6818 find_aarch64_elf_section_entry (asection *sec)
6819 {
6820 struct section_list *entry;
6821 static struct section_list *last_entry = NULL;
6822
6823 /* This is a short cut for the typical case where the sections are added
6824 to the sections_with_aarch64_elf_section_data list in forward order and
6825 then looked up here in backwards order. This makes a real difference
6826 to the ld-srec/sec64k.exp linker test. */
6827 entry = sections_with_aarch64_elf_section_data;
6828 if (last_entry != NULL)
6829 {
6830 if (last_entry->sec == sec)
6831 entry = last_entry;
6832 else if (last_entry->next != NULL && last_entry->next->sec == sec)
6833 entry = last_entry->next;
6834 }
6835
6836 for (; entry; entry = entry->next)
6837 if (entry->sec == sec)
6838 break;
6839
6840 if (entry)
6841 /* Record the entry prior to this one - it is the entry we are
6842 most likely to want to locate next time. Also this way if we
6843 have been called from
6844 unrecord_section_with_aarch64_elf_section_data () we will not
6845 be caching a pointer that is about to be freed. */
6846 last_entry = entry->prev;
6847
6848 return entry;
6849 }
6850
6851 static void
6852 unrecord_section_with_aarch64_elf_section_data (asection *sec)
6853 {
6854 struct section_list *entry;
6855
6856 entry = find_aarch64_elf_section_entry (sec);
6857
6858 if (entry)
6859 {
6860 if (entry->prev != NULL)
6861 entry->prev->next = entry->next;
6862 if (entry->next != NULL)
6863 entry->next->prev = entry->prev;
6864 if (entry == sections_with_aarch64_elf_section_data)
6865 sections_with_aarch64_elf_section_data = entry->next;
6866 free (entry);
6867 }
6868 }
6869
6870
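/* Context passed to the mapping and stub symbol output helpers
   below. */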
6871 typedef struct
6872 {
6873 void *finfo;
6874 struct bfd_link_info *info;
6875 asection *sec;
6876 int sec_shndx;
6877 int (*func) (void *, const char *, Elf_Internal_Sym *,
6878 asection *, struct elf_link_hash_entry *);
6879 } output_arch_syminfo;
6880
6881 enum map_symbol_type
6882 {
6883 AARCH64_MAP_INSN,
6884 AARCH64_MAP_DATA
6885 };
6886
6887
6888 /* Output a single mapping symbol. */
6889
6890 static bfd_boolean
6891 elfNN_aarch64_output_map_sym (output_arch_syminfo *osi,
6892 enum map_symbol_type type, bfd_vma offset)
6893 {
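/* Mapping symbol names defined by the AArch64 ELF ABI: "$x" marks the
   start of a sequence of A64 instructions, "$d" the start of a
   sequence of data. */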
6894 static const char *names[2] = { "$x", "$d" };
6895 Elf_Internal_Sym sym;
6896
6897 sym.st_value = (osi->sec->output_section->vma
6898 + osi->sec->output_offset + offset);
6899 sym.st_size = 0;
6900 sym.st_other = 0;
6901 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_NOTYPE);
6902 sym.st_shndx = osi->sec_shndx;
6903 return osi->func (osi->finfo, names[type], &sym, osi->sec, NULL) == 1;
6904 }
6905
6906
6907
6908 /* Output mapping symbols for PLT entries associated with H. */
6909
6910 static bfd_boolean
6911 elfNN_aarch64_output_plt_map (struct elf_link_hash_entry *h, void *inf)
6912 {
6913 output_arch_syminfo *osi = (output_arch_syminfo *) inf;
6914 bfd_vma addr;
6915
6916 if (h->root.type == bfd_link_hash_indirect)
6917 return TRUE;
6918
6919 if (h->root.type == bfd_link_hash_warning)
6920 /* When warning symbols are created, they **replace** the "real"
6921 entry in the hash table, thus we never get to see the real
6922 symbol in a hash traversal. So look at it now. */
6923 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6924
6925 if (h->plt.offset == (bfd_vma) - 1)
6926 return TRUE;
6927
6928 addr = h->plt.offset;
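/* Offset 32 is the first entry after the 32-byte PLT header; a single
   $x mapping symbol there appears to be enough to mark the PLT
   entries as code. */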
6929 if (addr == 32)
6930 {
6931 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6932 return FALSE;
6933 }
6934 return TRUE;
6935 }
6936
6937
6938 /* Output a single local symbol for a generated stub. */
6939
6940 static bfd_boolean
6941 elfNN_aarch64_output_stub_sym (output_arch_syminfo *osi, const char *name,
6942 bfd_vma offset, bfd_vma size)
6943 {
6944 Elf_Internal_Sym sym;
6945
6946 sym.st_value = (osi->sec->output_section->vma
6947 + osi->sec->output_offset + offset);
6948 sym.st_size = size;
6949 sym.st_other = 0;
6950 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_FUNC);
6951 sym.st_shndx = osi->sec_shndx;
6952 return osi->func (osi->finfo, name, &sym, osi->sec, NULL) == 1;
6953 }
6954
6955 static bfd_boolean
6956 aarch64_map_one_stub (struct bfd_hash_entry *gen_entry, void *in_arg)
6957 {
6958 struct elf_aarch64_stub_hash_entry *stub_entry;
6959 asection *stub_sec;
6960 bfd_vma addr;
6961 char *stub_name;
6962 output_arch_syminfo *osi;
6963
6964 /* Massage our args to the form they really have. */
6965 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
6966 osi = (output_arch_syminfo *) in_arg;
6967
6968 stub_sec = stub_entry->stub_sec;
6969
6970 /* Ensure this stub is attached to the current section being
6971 processed. */
6972 if (stub_sec != osi->sec)
6973 return TRUE;
6974
6975 addr = (bfd_vma) stub_entry->stub_offset;
6976
6977 stub_name = stub_entry->output_name;
6978
6979 switch (stub_entry->stub_type)
6980 {
6981 case aarch64_stub_adrp_branch:
6982 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6983 sizeof (aarch64_adrp_branch_stub)))
6984 return FALSE;
6985 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6986 return FALSE;
6987 break;
6988 case aarch64_stub_long_branch:
6989 if (!elfNN_aarch64_output_stub_sym
6990 (osi, stub_name, addr, sizeof (aarch64_long_branch_stub)))
6991 return FALSE;
6992 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6993 return FALSE;
6994 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_DATA, addr + 16))
6995 return FALSE;
6996 break;
6997 case aarch64_stub_erratum_835769_veneer:
6998 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6999 sizeof (aarch64_erratum_835769_stub)))
7000 return FALSE;
7001 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
7002 return FALSE;
7003 break;
7004 case aarch64_stub_erratum_843419_veneer:
7005 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
7006 sizeof (aarch64_erratum_843419_stub)))
7007 return FALSE;
7008 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
7009 return FALSE;
7010 break;
7011
7012 default:
7013 abort ();
7014 }
7015
7016 return TRUE;
7017 }
7018
7019 /* Output mapping symbols for linker generated sections. */
7020
7021 static bfd_boolean
7022 elfNN_aarch64_output_arch_local_syms (bfd *output_bfd,
7023 struct bfd_link_info *info,
7024 void *finfo,
7025 int (*func) (void *, const char *,
7026 Elf_Internal_Sym *,
7027 asection *,
7028 struct elf_link_hash_entry
7029 *))
7030 {
7031 output_arch_syminfo osi;
7032 struct elf_aarch64_link_hash_table *htab;
7033
7034 htab = elf_aarch64_hash_table (info);
7035
7036 osi.finfo = finfo;
7037 osi.info = info;
7038 osi.func = func;
7039
7040 /* Long call stubs. */
7041 if (htab->stub_bfd && htab->stub_bfd->sections)
7042 {
7043 asection *stub_sec;
7044
7045 for (stub_sec = htab->stub_bfd->sections;
7046 stub_sec != NULL; stub_sec = stub_sec->next)
7047 {
7048 /* Ignore non-stub sections. */
7049 if (!strstr (stub_sec->name, STUB_SUFFIX))
7050 continue;
7051
7052 osi.sec = stub_sec;
7053
7054 osi.sec_shndx = _bfd_elf_section_from_bfd_section
7055 (output_bfd, osi.sec->output_section);
7056
7057 /* The first instruction in a stub is always a branch. */
7058 if (!elfNN_aarch64_output_map_sym (&osi, AARCH64_MAP_INSN, 0))
7059 return FALSE;
7060
7061 bfd_hash_traverse (&htab->stub_hash_table, aarch64_map_one_stub,
7062 &osi);
7063 }
7064 }
7065
7066 /* Finally, output mapping symbols for the PLT. */
7067 if (!htab->root.splt || htab->root.splt->size == 0)
7068 return TRUE;
7069
7070 /* For now live without mapping symbols for the plt. */
7071 osi.sec_shndx = _bfd_elf_section_from_bfd_section
7072 (output_bfd, htab->root.splt->output_section);
7073 osi.sec = htab->root.splt;
7074
7075 elf_link_hash_traverse (&htab->root, elfNN_aarch64_output_plt_map,
7076 (void *) &osi);
7077
7078 return TRUE;
7079
7080 }
7081
7082 /* Allocate target specific section data. */
7083
7084 static bfd_boolean
7085 elfNN_aarch64_new_section_hook (bfd *abfd, asection *sec)
7086 {
7087 if (!sec->used_by_bfd)
7088 {
7089 _aarch64_elf_section_data *sdata;
7090 bfd_size_type amt = sizeof (*sdata);
7091
7092 sdata = bfd_zalloc (abfd, amt);
7093 if (sdata == NULL)
7094 return FALSE;
7095 sec->used_by_bfd = sdata;
7096 }
7097
7098 record_section_with_aarch64_elf_section_data (sec);
7099
7100 return _bfd_elf_new_section_hook (abfd, sec);
7101 }
7102
7103
7104 static void
7105 unrecord_section_via_map_over_sections (bfd *abfd ATTRIBUTE_UNUSED,
7106 asection *sec,
7107 void *ignore ATTRIBUTE_UNUSED)
7108 {
7109 unrecord_section_with_aarch64_elf_section_data (sec);
7110 }
7111
7112 static bfd_boolean
7113 elfNN_aarch64_close_and_cleanup (bfd *abfd)
7114 {
7115 if (abfd->sections)
7116 bfd_map_over_sections (abfd,
7117 unrecord_section_via_map_over_sections, NULL);
7118
7119 return _bfd_elf_close_and_cleanup (abfd);
7120 }
7121
7122 static bfd_boolean
7123 elfNN_aarch64_bfd_free_cached_info (bfd *abfd)
7124 {
7125 if (abfd->sections)
7126 bfd_map_over_sections (abfd,
7127 unrecord_section_via_map_over_sections, NULL);
7128
7129 return _bfd_free_cached_info (abfd);
7130 }
7131
7132 /* Create dynamic sections. This is different from the ARM backend in that
7133 the got, plt, gotplt and their relocation sections are all created in the
7134 standard part of the bfd elf backend. */
7135
7136 static bfd_boolean
7137 elfNN_aarch64_create_dynamic_sections (bfd *dynobj,
7138 struct bfd_link_info *info)
7139 {
7140 struct elf_aarch64_link_hash_table *htab;
7141
7142 /* We need to create .got section. */
7143 if (!aarch64_elf_create_got_section (dynobj, info))
7144 return FALSE;
7145
7146 if (!_bfd_elf_create_dynamic_sections (dynobj, info))
7147 return FALSE;
7148
7149 htab = elf_aarch64_hash_table (info);
7150 htab->sdynbss = bfd_get_linker_section (dynobj, ".dynbss");
7151 if (!info->shared)
7152 htab->srelbss = bfd_get_linker_section (dynobj, ".rela.bss");
7153
7154 if (!htab->sdynbss || (!info->shared && !htab->srelbss))
7155 abort ();
7156
7157 return TRUE;
7158 }
7159
7160
7161 /* Allocate space in .plt, .got and associated reloc sections for
7162 dynamic relocs. */
7163
7164 static bfd_boolean
7165 elfNN_aarch64_allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf)
7166 {
7167 struct bfd_link_info *info;
7168 struct elf_aarch64_link_hash_table *htab;
7169 struct elf_aarch64_link_hash_entry *eh;
7170 struct elf_dyn_relocs *p;
7171
7172 /* An example of a bfd_link_hash_indirect symbol is a versioned
7173 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7174 -> __gxx_personality_v0(bfd_link_hash_defined)
7175
7176 There is no need to process bfd_link_hash_indirect symbols here
7177 because we will also be presented with the concrete instance of
7178 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7179 called to copy all relevant data from the generic to the concrete
7180 symbol instance.
7181 */
7182 if (h->root.type == bfd_link_hash_indirect)
7183 return TRUE;
7184
7185 if (h->root.type == bfd_link_hash_warning)
7186 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7187
7188 info = (struct bfd_link_info *) inf;
7189 htab = elf_aarch64_hash_table (info);
7190
7191 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle it
7192 here if it is defined and referenced in a non-shared object. */
7193 if (h->type == STT_GNU_IFUNC
7194 && h->def_regular)
7195 return TRUE;
7196 else if (htab->root.dynamic_sections_created && h->plt.refcount > 0)
7197 {
7198 /* Make sure this symbol is output as a dynamic symbol.
7199 Undefined weak syms won't yet be marked as dynamic. */
7200 if (h->dynindx == -1 && !h->forced_local)
7201 {
7202 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7203 return FALSE;
7204 }
7205
7206 if (info->shared || WILL_CALL_FINISH_DYNAMIC_SYMBOL (1, 0, h))
7207 {
7208 asection *s = htab->root.splt;
7209
7210 /* If this is the first .plt entry, make room for the special
7211 first entry. */
7212 if (s->size == 0)
7213 s->size += htab->plt_header_size;
7214
7215 h->plt.offset = s->size;
7216
7217 /* If this symbol is not defined in a regular file, and we are
7218 not generating a shared library, then set the symbol to this
7219 location in the .plt. This is required to make function
7220 pointers compare as equal between the normal executable and
7221 the shared library. */
7222 if (!info->shared && !h->def_regular)
7223 {
7224 h->root.u.def.section = s;
7225 h->root.u.def.value = h->plt.offset;
7226 }
7227
7228 /* Make room for this entry. For now we only create the
7229 small model PLT entries. We later need to find a way
7230 of relaxing into these from the large model PLT entries. */
7231 s->size += PLT_SMALL_ENTRY_SIZE;
7232
7233 /* We also need to make an entry in the .got.plt section, which
7234 will be placed in the .got section by the linker script. */
7235 htab->root.sgotplt->size += GOT_ENTRY_SIZE;
7236
7237 /* We also need to make an entry in the .rela.plt section. */
7238 htab->root.srelplt->size += RELOC_SIZE (htab);
7239
7240 /* We need to ensure that all GOT entries that serve the PLT
7241 are consecutive with the special GOT slots [0] [1] and
7242 [2]. Any additional relocations, such as
7243 R_AARCH64_TLSDESC, must be placed after the PLT related
7244 entries. We abuse the reloc_count such that during
7245 sizing we adjust reloc_count to indicate the number of
7246 PLT related reserved entries. In subsequent phases when
7247 filling in the contents of the reloc entries, PLT related
7248 entries are placed by computing their PLT index (0
7249 .. reloc_count), while other non-PLT relocs are placed
7250 at the slot indicated by reloc_count and reloc_count is
7251 updated. */
7252
7253 htab->root.srelplt->reloc_count++;
7254 }
7255 else
7256 {
7257 h->plt.offset = (bfd_vma) - 1;
7258 h->needs_plt = 0;
7259 }
7260 }
7261 else
7262 {
7263 h->plt.offset = (bfd_vma) - 1;
7264 h->needs_plt = 0;
7265 }
7266
7267 eh = (struct elf_aarch64_link_hash_entry *) h;
7268 eh->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7269
7270 if (h->got.refcount > 0)
7271 {
7272 bfd_boolean dyn;
7273 unsigned got_type = elf_aarch64_hash_entry (h)->got_type;
7274
7275 h->got.offset = (bfd_vma) - 1;
7276
7277 dyn = htab->root.dynamic_sections_created;
7278
7279 /* Make sure this symbol is output as a dynamic symbol.
7280 Undefined weak syms won't yet be marked as dynamic. */
7281 if (dyn && h->dynindx == -1 && !h->forced_local)
7282 {
7283 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7284 return FALSE;
7285 }
7286
7287 if (got_type == GOT_UNKNOWN)
7288 {
7289 }
7290 else if (got_type == GOT_NORMAL)
7291 {
7292 h->got.offset = htab->root.sgot->size;
7293 htab->root.sgot->size += GOT_ENTRY_SIZE;
7294 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7295 || h->root.type != bfd_link_hash_undefweak)
7296 && (info->shared
7297 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7298 {
7299 htab->root.srelgot->size += RELOC_SIZE (htab);
7300 }
7301 }
7302 else
7303 {
7304 int indx;
7305 if (got_type & GOT_TLSDESC_GD)
7306 {
7307 eh->tlsdesc_got_jump_table_offset =
7308 (htab->root.sgotplt->size
7309 - aarch64_compute_jump_table_size (htab));
7310 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
7311 h->got.offset = (bfd_vma) - 2;
7312 }
7313
7314 if (got_type & GOT_TLS_GD)
7315 {
7316 h->got.offset = htab->root.sgot->size;
7317 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7318 }
7319
7320 if (got_type & GOT_TLS_IE)
7321 {
7322 h->got.offset = htab->root.sgot->size;
7323 htab->root.sgot->size += GOT_ENTRY_SIZE;
7324 }
7325
7326 indx = h && h->dynindx != -1 ? h->dynindx : 0;
7327 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7328 || h->root.type != bfd_link_hash_undefweak)
7329 && (info->shared
7330 || indx != 0
7331 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7332 {
7333 if (got_type & GOT_TLSDESC_GD)
7334 {
7335 htab->root.srelplt->size += RELOC_SIZE (htab);
7336 /* Note reloc_count not incremented here! We have
7337 already adjusted reloc_count for this relocation
7338 type. */
7339
7340 /* TLSDESC PLT is now needed, but not yet determined. */
7341 htab->tlsdesc_plt = (bfd_vma) - 1;
7342 }
7343
7344 if (got_type & GOT_TLS_GD)
7345 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7346
7347 if (got_type & GOT_TLS_IE)
7348 htab->root.srelgot->size += RELOC_SIZE (htab);
7349 }
7350 }
7351 }
7352 else
7353 {
7354 h->got.offset = (bfd_vma) - 1;
7355 }
7356
7357 if (eh->dyn_relocs == NULL)
7358 return TRUE;
7359
7360 /* In the shared -Bsymbolic case, discard space allocated for
7361 dynamic pc-relative relocs against symbols which turn out to be
7362 defined in regular objects. For the normal shared case, discard
7363 space for pc-relative relocs that have become local due to symbol
7364 visibility changes. */
7365
7366 if (info->shared)
7367 {
7368 /* Relocs that use pc_count are those that appear on a call
7369 insn, or certain REL relocs that can be generated via assembly.
7370 We want calls to protected symbols to resolve directly to the
7371 function rather than going via the plt. If people want
7372 function pointer comparisons to work as expected then they
7373 should avoid writing weird assembly. */
7374 if (SYMBOL_CALLS_LOCAL (info, h))
7375 {
7376 struct elf_dyn_relocs **pp;
7377
7378 for (pp = &eh->dyn_relocs; (p = *pp) != NULL;)
7379 {
7380 p->count -= p->pc_count;
7381 p->pc_count = 0;
7382 if (p->count == 0)
7383 *pp = p->next;
7384 else
7385 pp = &p->next;
7386 }
7387 }
7388
7389 /* Also discard relocs on undefined weak syms with non-default
7390 visibility. */
7391 if (eh->dyn_relocs != NULL && h->root.type == bfd_link_hash_undefweak)
7392 {
7393 if (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7394 eh->dyn_relocs = NULL;
7395
7396 /* Make sure undefined weak symbols are output as a dynamic
7397 symbol in PIEs. */
7398 else if (h->dynindx == -1
7399 && !h->forced_local
7400 && !bfd_elf_link_record_dynamic_symbol (info, h))
7401 return FALSE;
7402 }
7403
7404 }
7405 else if (ELIMINATE_COPY_RELOCS)
7406 {
7407 /* For the non-shared case, discard space for relocs against
7408 symbols which turn out to need copy relocs or are not
7409 dynamic. */
7410
7411 if (!h->non_got_ref
7412 && ((h->def_dynamic
7413 && !h->def_regular)
7414 || (htab->root.dynamic_sections_created
7415 && (h->root.type == bfd_link_hash_undefweak
7416 || h->root.type == bfd_link_hash_undefined))))
7417 {
7418 /* Make sure this symbol is output as a dynamic symbol.
7419 Undefined weak syms won't yet be marked as dynamic. */
7420 if (h->dynindx == -1
7421 && !h->forced_local
7422 && !bfd_elf_link_record_dynamic_symbol (info, h))
7423 return FALSE;
7424
7425 /* If that succeeded, we know we'll be keeping all the
7426 relocs. */
7427 if (h->dynindx != -1)
7428 goto keep;
7429 }
7430
7431 eh->dyn_relocs = NULL;
7432
7433 keep:;
7434 }
7435
7436 /* Finally, allocate space. */
7437 for (p = eh->dyn_relocs; p != NULL; p = p->next)
7438 {
7439 asection *sreloc;
7440
7441 sreloc = elf_section_data (p->sec)->sreloc;
7442
7443 BFD_ASSERT (sreloc != NULL);
7444
7445 sreloc->size += p->count * RELOC_SIZE (htab);
7446 }
7447
7448 return TRUE;
7449 }
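
/* A minimal sketch, for illustration only: example_got_bytes_for_type is a
   hypothetical helper, not used by this backend, that mirrors the GOT
   sizing rules in elfNN_aarch64_allocate_dynrelocs above.  A TLS
   descriptor (GOT_TLSDESC_GD) reserves a double slot in .got.plt, a
   traditional GD entry (GOT_TLS_GD) a double slot in .got, and IE and
   normal entries a single .got slot each; got_type is a mask, so the
   contributions accumulate.  */

static ATTRIBUTE_UNUSED bfd_vma
example_got_bytes_for_type (unsigned int got_type)
{
  bfd_vma bytes = 0;

  if (got_type & GOT_TLSDESC_GD)
    bytes += GOT_ENTRY_SIZE * 2;	/* Descriptor pair in .got.plt.  */
  if (got_type & GOT_TLS_GD)
    bytes += GOT_ENTRY_SIZE * 2;	/* tls_index pair in .got.  */
  if (got_type & GOT_TLS_IE)
    bytes += GOT_ENTRY_SIZE;		/* Single IE offset entry in .got.  */
  if (got_type & GOT_NORMAL)
    bytes += GOT_ENTRY_SIZE;		/* Single address entry in .got.  */

  return bytes;
}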
7450
7451 /* Allocate space in .plt, .got and associated reloc sections for
7452 ifunc dynamic relocs. */
7453
7454 static bfd_boolean
7455 elfNN_aarch64_allocate_ifunc_dynrelocs (struct elf_link_hash_entry *h,
7456 void *inf)
7457 {
7458 struct bfd_link_info *info;
7459 struct elf_aarch64_link_hash_table *htab;
7460 struct elf_aarch64_link_hash_entry *eh;
7461
7462 /* An example of a bfd_link_hash_indirect symbol is a versioned
7463 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7464 -> __gxx_personality_v0(bfd_link_hash_defined)
7465
7466 There is no need to process bfd_link_hash_indirect symbols here
7467 because we will also be presented with the concrete instance of
7468 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7469 called to copy all relevant data from the generic to the concrete
7470 symbol instance.
7471 */
7472 if (h->root.type == bfd_link_hash_indirect)
7473 return TRUE;
7474
7475 if (h->root.type == bfd_link_hash_warning)
7476 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7477
7478 info = (struct bfd_link_info *) inf;
7479 htab = elf_aarch64_hash_table (info);
7480
7481 eh = (struct elf_aarch64_link_hash_entry *) h;
7482
7483 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle it
7484 here if it is defined and referenced in a non-shared object. */
7485 if (h->type == STT_GNU_IFUNC
7486 && h->def_regular)
7487 return _bfd_elf_allocate_ifunc_dyn_relocs (info, h,
7488 &eh->dyn_relocs,
7489 htab->plt_entry_size,
7490 htab->plt_header_size,
7491 GOT_ENTRY_SIZE);
7492 return TRUE;
7493 }
7494
7495 /* Allocate space in .plt, .got and associated reloc sections for
7496 local dynamic relocs. */
7497
7498 static bfd_boolean
7499 elfNN_aarch64_allocate_local_dynrelocs (void **slot, void *inf)
7500 {
7501 struct elf_link_hash_entry *h
7502 = (struct elf_link_hash_entry *) *slot;
7503
7504 if (h->type != STT_GNU_IFUNC
7505 || !h->def_regular
7506 || !h->ref_regular
7507 || !h->forced_local
7508 || h->root.type != bfd_link_hash_defined)
7509 abort ();
7510
7511 return elfNN_aarch64_allocate_dynrelocs (h, inf);
7512 }
7513
7514 /* Allocate space in .plt, .got and associated reloc sections for
7515 local ifunc dynamic relocs. */
7516
7517 static bfd_boolean
7518 elfNN_aarch64_allocate_local_ifunc_dynrelocs (void **slot, void *inf)
7519 {
7520 struct elf_link_hash_entry *h
7521 = (struct elf_link_hash_entry *) *slot;
7522
7523 if (h->type != STT_GNU_IFUNC
7524 || !h->def_regular
7525 || !h->ref_regular
7526 || !h->forced_local
7527 || h->root.type != bfd_link_hash_defined)
7528 abort ();
7529
7530 return elfNN_aarch64_allocate_ifunc_dynrelocs (h, inf);
7531 }
7532
7533 /* This is the most important function of all. Innocuously named
7534 though! */
7535 static bfd_boolean
7536 elfNN_aarch64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED,
7537 struct bfd_link_info *info)
7538 {
7539 struct elf_aarch64_link_hash_table *htab;
7540 bfd *dynobj;
7541 asection *s;
7542 bfd_boolean relocs;
7543 bfd *ibfd;
7544
7545 htab = elf_aarch64_hash_table ((info));
7546 dynobj = htab->root.dynobj;
7547
7548 BFD_ASSERT (dynobj != NULL);
7549
7550 if (htab->root.dynamic_sections_created)
7551 {
7552 if (info->executable)
7553 {
7554 s = bfd_get_linker_section (dynobj, ".interp");
7555 if (s == NULL)
7556 abort ();
7557 s->size = sizeof ELF_DYNAMIC_INTERPRETER;
7558 s->contents = (unsigned char *) ELF_DYNAMIC_INTERPRETER;
7559 }
7560 }
7561
7562 /* Set up .got offsets for local syms, and space for local dynamic
7563 relocs. */
7564 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7565 {
7566 struct elf_aarch64_local_symbol *locals = NULL;
7567 Elf_Internal_Shdr *symtab_hdr;
7568 asection *srel;
7569 unsigned int i;
7570
7571 if (!is_aarch64_elf (ibfd))
7572 continue;
7573
7574 for (s = ibfd->sections; s != NULL; s = s->next)
7575 {
7576 struct elf_dyn_relocs *p;
7577
7578 for (p = (struct elf_dyn_relocs *)
7579 (elf_section_data (s)->local_dynrel); p != NULL; p = p->next)
7580 {
7581 if (!bfd_is_abs_section (p->sec)
7582 && bfd_is_abs_section (p->sec->output_section))
7583 {
7584 /* Input section has been discarded, either because
7585 it is a copy of a linkonce section or due to
7586 linker script /DISCARD/, so we'll be discarding
7587 the relocs too. */
7588 }
7589 else if (p->count != 0)
7590 {
7591 srel = elf_section_data (p->sec)->sreloc;
7592 srel->size += p->count * RELOC_SIZE (htab);
7593 if ((p->sec->output_section->flags & SEC_READONLY) != 0)
7594 info->flags |= DF_TEXTREL;
7595 }
7596 }
7597 }
7598
7599 locals = elf_aarch64_locals (ibfd);
7600 if (!locals)
7601 continue;
7602
7603 symtab_hdr = &elf_symtab_hdr (ibfd);
7604 srel = htab->root.srelgot;
7605 for (i = 0; i < symtab_hdr->sh_info; i++)
7606 {
7607 locals[i].got_offset = (bfd_vma) - 1;
7608 locals[i].tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7609 if (locals[i].got_refcount > 0)
7610 {
7611 unsigned got_type = locals[i].got_type;
7612 if (got_type & GOT_TLSDESC_GD)
7613 {
7614 locals[i].tlsdesc_got_jump_table_offset =
7615 (htab->root.sgotplt->size
7616 - aarch64_compute_jump_table_size (htab));
7617 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
7618 locals[i].got_offset = (bfd_vma) - 2;
7619 }
7620
7621 if (got_type & GOT_TLS_GD)
7622 {
7623 locals[i].got_offset = htab->root.sgot->size;
7624 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7625 }
7626
7627 if (got_type & GOT_TLS_IE
7628 || got_type & GOT_NORMAL)
7629 {
7630 locals[i].got_offset = htab->root.sgot->size;
7631 htab->root.sgot->size += GOT_ENTRY_SIZE;
7632 }
7633
7634 if (got_type == GOT_UNKNOWN)
7635 {
7636 }
7637
7638 if (info->shared)
7639 {
7640 if (got_type & GOT_TLSDESC_GD)
7641 {
7642 htab->root.srelplt->size += RELOC_SIZE (htab);
7643 /* Note RELOC_COUNT not incremented here! */
7644 htab->tlsdesc_plt = (bfd_vma) - 1;
7645 }
7646
7647 if (got_type & GOT_TLS_GD)
7648 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7649
7650 if (got_type & GOT_TLS_IE
7651 || got_type & GOT_NORMAL)
7652 htab->root.srelgot->size += RELOC_SIZE (htab);
7653 }
7654 }
7655 else
7656 {
7657 locals[i].got_refcount = (bfd_vma) - 1;
7658 }
7659 }
7660 }
7661
7662
7663 /* Allocate global sym .plt and .got entries, and space for global
7664 sym dynamic relocs. */
7665 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_dynrelocs,
7666 info);
7667
7668 /* Allocate global ifunc sym .plt and .got entries, and space for global
7669 ifunc sym dynamic relocs. */
7670 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_ifunc_dynrelocs,
7671 info);
7672
7673 /* Allocate .plt and .got entries, and space for local symbols. */
7674 htab_traverse (htab->loc_hash_table,
7675 elfNN_aarch64_allocate_local_dynrelocs,
7676 info);
7677
7678 /* Allocate .plt and .got entries, and space for local ifunc symbols. */
7679 htab_traverse (htab->loc_hash_table,
7680 elfNN_aarch64_allocate_local_ifunc_dynrelocs,
7681 info);
7682
7683 /* For every jump slot reserved in the sgotplt, reloc_count is
7684 incremented. However, when we reserve space for TLS descriptors,
7685 it's not incremented, so in order to compute the space reserved
7686 for them, it suffices to multiply the reloc count by the jump
7687 slot size. */
7688
7689 if (htab->root.srelplt)
7690 htab->sgotplt_jump_table_size = aarch64_compute_jump_table_size (htab);
7691
7692 if (htab->tlsdesc_plt)
7693 {
7694 if (htab->root.splt->size == 0)
7695 htab->root.splt->size += PLT_ENTRY_SIZE;
7696
7697 htab->tlsdesc_plt = htab->root.splt->size;
7698 htab->root.splt->size += PLT_TLSDESC_ENTRY_SIZE;
7699
7700 /* If we're not using lazy TLS relocations, don't generate the
7701 GOT entry required. */
7702 if (!(info->flags & DF_BIND_NOW))
7703 {
7704 htab->dt_tlsdesc_got = htab->root.sgot->size;
7705 htab->root.sgot->size += GOT_ENTRY_SIZE;
7706 }
7707 }
7708
7709 /* Init mapping symbol information to use later to distinguish between
7710 code and data while scanning for errata. */
7711 if (htab->fix_erratum_835769 || htab->fix_erratum_843419)
7712 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7713 {
7714 if (!is_aarch64_elf (ibfd))
7715 continue;
7716 bfd_elfNN_aarch64_init_maps (ibfd);
7717 }
7718
7719 /* We now have determined the sizes of the various dynamic sections.
7720 Allocate memory for them. */
7721 relocs = FALSE;
7722 for (s = dynobj->sections; s != NULL; s = s->next)
7723 {
7724 if ((s->flags & SEC_LINKER_CREATED) == 0)
7725 continue;
7726
7727 if (s == htab->root.splt
7728 || s == htab->root.sgot
7729 || s == htab->root.sgotplt
7730 || s == htab->root.iplt
7731 || s == htab->root.igotplt || s == htab->sdynbss)
7732 {
7733 /* Strip this section if we don't need it; see the
7734 comment below. */
7735 }
7736 else if (CONST_STRNEQ (bfd_get_section_name (dynobj, s), ".rela"))
7737 {
7738 if (s->size != 0 && s != htab->root.srelplt)
7739 relocs = TRUE;
7740
7741 /* We use the reloc_count field as a counter if we need
7742 to copy relocs into the output file. */
7743 if (s != htab->root.srelplt)
7744 s->reloc_count = 0;
7745 }
7746 else
7747 {
7748 /* It's not one of our sections, so don't allocate space. */
7749 continue;
7750 }
7751
7752 if (s->size == 0)
7753 {
7754 /* If we don't need this section, strip it from the
7755 output file. This is mostly to handle .rela.bss and
7756 .rela.plt. We must create both sections in
7757 create_dynamic_sections, because they must be created
7758 before the linker maps input sections to output
7759 sections. The linker does that before
7760 adjust_dynamic_symbol is called, and it is that
7761 function which decides whether anything needs to go
7762 into these sections. */
7763
7764 s->flags |= SEC_EXCLUDE;
7765 continue;
7766 }
7767
7768 if ((s->flags & SEC_HAS_CONTENTS) == 0)
7769 continue;
7770
7771 /* Allocate memory for the section contents. We use bfd_zalloc
7772 here in case unused entries are not reclaimed before the
7773 section's contents are written out. This should not happen,
7774 but this way if it does, we get a R_AARCH64_NONE reloc instead
7775 of garbage. */
7776 s->contents = (bfd_byte *) bfd_zalloc (dynobj, s->size);
7777 if (s->contents == NULL)
7778 return FALSE;
7779 }
7780
7781 if (htab->root.dynamic_sections_created)
7782 {
7783 /* Add some entries to the .dynamic section. We fill in the
7784 values later, in elfNN_aarch64_finish_dynamic_sections, but we
7785 must add the entries now so that we get the correct size for
7786 the .dynamic section. The DT_DEBUG entry is filled in by the
7787 dynamic linker and used by the debugger. */
7788 #define add_dynamic_entry(TAG, VAL) \
7789 _bfd_elf_add_dynamic_entry (info, TAG, VAL)
7790
7791 if (info->executable)
7792 {
7793 if (!add_dynamic_entry (DT_DEBUG, 0))
7794 return FALSE;
7795 }
7796
7797 if (htab->root.splt->size != 0)
7798 {
7799 if (!add_dynamic_entry (DT_PLTGOT, 0)
7800 || !add_dynamic_entry (DT_PLTRELSZ, 0)
7801 || !add_dynamic_entry (DT_PLTREL, DT_RELA)
7802 || !add_dynamic_entry (DT_JMPREL, 0))
7803 return FALSE;
7804
7805 if (htab->tlsdesc_plt
7806 && (!add_dynamic_entry (DT_TLSDESC_PLT, 0)
7807 || !add_dynamic_entry (DT_TLSDESC_GOT, 0)))
7808 return FALSE;
7809 }
7810
7811 if (relocs)
7812 {
7813 if (!add_dynamic_entry (DT_RELA, 0)
7814 || !add_dynamic_entry (DT_RELASZ, 0)
7815 || !add_dynamic_entry (DT_RELAENT, RELOC_SIZE (htab)))
7816 return FALSE;
7817
7818 /* If any dynamic relocs apply to a read-only section,
7819 then we need a DT_TEXTREL entry. */
7820 if ((info->flags & DF_TEXTREL) != 0)
7821 {
7822 if (!add_dynamic_entry (DT_TEXTREL, 0))
7823 return FALSE;
7824 }
7825 }
7826 }
7827 #undef add_dynamic_entry
7828
7829 return TRUE;
7830 }
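
/* A minimal sketch, for illustration only: example_small_plt_sizes is a
   hypothetical helper, not used by this backend, showing how the sections
   sized above scale with the number N of small-model PLT entries.  It
   follows the accounting in elfNN_aarch64_allocate_dynrelocs: one PLT0
   header plus one small entry per symbol in .plt, three reserved slots
   plus one jump slot per entry in .got.plt, and one RELA relocation per
   entry in .rela.plt.  */

static ATTRIBUTE_UNUSED void
example_small_plt_sizes (bfd_vma n,
			 struct elf_aarch64_link_hash_table *htab,
			 bfd_vma *plt_size, bfd_vma *gotplt_size,
			 bfd_vma *relplt_size)
{
  /* PLT0 header followed by one small-model entry per symbol.  */
  *plt_size = htab->plt_header_size + n * PLT_SMALL_ENTRY_SIZE;

  /* Slots [0], [1] and [2] of .got.plt are reserved for the dynamic
     linker; each PLT entry then owns one jump slot.  */
  *gotplt_size = GOT_ENTRY_SIZE * 3 + n * GOT_ENTRY_SIZE;

  /* One R_AARCH64_JUMP_SLOT (or IRELATIVE) relocation per entry.  */
  *relplt_size = n * RELOC_SIZE (htab);
}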
7831
7832 static inline void
7833 elf_aarch64_update_plt_entry (bfd *output_bfd,
7834 bfd_reloc_code_real_type r_type,
7835 bfd_byte *plt_entry, bfd_vma value)
7836 {
7837 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (r_type);
7838
7839 _bfd_aarch64_elf_put_addend (output_bfd, plt_entry, r_type, howto, value);
7840 }
7841
7842 static void
7843 elfNN_aarch64_create_small_pltn_entry (struct elf_link_hash_entry *h,
7844 struct elf_aarch64_link_hash_table
7845 *htab, bfd *output_bfd,
7846 struct bfd_link_info *info)
7847 {
7848 bfd_byte *plt_entry;
7849 bfd_vma plt_index;
7850 bfd_vma got_offset;
7851 bfd_vma gotplt_entry_address;
7852 bfd_vma plt_entry_address;
7853 Elf_Internal_Rela rela;
7854 bfd_byte *loc;
7855 asection *plt, *gotplt, *relplt;
7856
7857 /* When building a static executable, use .iplt, .igot.plt and
7858 .rela.iplt sections for STT_GNU_IFUNC symbols. */
7859 if (htab->root.splt != NULL)
7860 {
7861 plt = htab->root.splt;
7862 gotplt = htab->root.sgotplt;
7863 relplt = htab->root.srelplt;
7864 }
7865 else
7866 {
7867 plt = htab->root.iplt;
7868 gotplt = htab->root.igotplt;
7869 relplt = htab->root.irelplt;
7870 }
7871
7872 /* Get the index in the procedure linkage table which
7873 corresponds to this symbol. This is the index of this symbol
7874 in all the symbols for which we are making plt entries. The
7875 first entry in the procedure linkage table is reserved.
7876
7877 Get the offset into the .got table of the entry that
7878 corresponds to this function. Each .got entry is GOT_ENTRY_SIZE
7879 bytes. The first three are reserved for the dynamic linker.
7880
7881 For static executables, we don't reserve anything. */
7882
7883 if (plt == htab->root.splt)
7884 {
7885 plt_index = (h->plt.offset - htab->plt_header_size) / htab->plt_entry_size;
7886 got_offset = (plt_index + 3) * GOT_ENTRY_SIZE;
7887 }
7888 else
7889 {
7890 plt_index = h->plt.offset / htab->plt_entry_size;
7891 got_offset = plt_index * GOT_ENTRY_SIZE;
7892 }
7893
7894 plt_entry = plt->contents + h->plt.offset;
7895 plt_entry_address = plt->output_section->vma
7896 + plt->output_offset + h->plt.offset;
7897 gotplt_entry_address = gotplt->output_section->vma +
7898 gotplt->output_offset + got_offset;
7899
7900 /* Copy in the boiler-plate for the PLTn entry. */
7901 memcpy (plt_entry, elfNN_aarch64_small_plt_entry, PLT_SMALL_ENTRY_SIZE);
7902
7903 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
7904 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
7905 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
7906 plt_entry,
7907 PG (gotplt_entry_address) -
7908 PG (plt_entry_address));
7909
7910 /* Fill in the lo12 bits for the load from the pltgot. */
7911 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
7912 plt_entry + 4,
7913 PG_OFFSET (gotplt_entry_address));
7914
7915 /* Fill in the lo12 bits for the add from the pltgot entry. */
7916 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
7917 plt_entry + 8,
7918 PG_OFFSET (gotplt_entry_address));
7919
7920 /* All the GOTPLT entries are essentially initialized to PLT0. */
7921 bfd_put_NN (output_bfd,
7922 plt->output_section->vma + plt->output_offset,
7923 gotplt->contents + got_offset);
7924
7925 rela.r_offset = gotplt_entry_address;
7926
7927 if (h->dynindx == -1
7928 || ((info->executable
7929 || ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7930 && h->def_regular
7931 && h->type == STT_GNU_IFUNC))
7932 {
7933 /* If an STT_GNU_IFUNC symbol is locally defined, generate
7934 R_AARCH64_IRELATIVE instead of R_AARCH64_JUMP_SLOT. */
7935 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
7936 rela.r_addend = (h->root.u.def.value
7937 + h->root.u.def.section->output_section->vma
7938 + h->root.u.def.section->output_offset);
7939 }
7940 else
7941 {
7942 /* Fill in the entry in the .rela.plt section. */
7943 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (JUMP_SLOT));
7944 rela.r_addend = 0;
7945 }
7946
7947 /* Compute the relocation entry to use based on the PLT index and do
7948 not adjust reloc_count. The reloc_count has already been adjusted
7949 to account for this entry. */
7950 loc = relplt->contents + plt_index * RELOC_SIZE (htab);
7951 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
7952 }
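
/* A minimal worked sketch, for illustration only, of the page arithmetic
   used when filling in a PLTn entry above; the two addresses below are
   hypothetical.  The value handed to elf_aarch64_update_plt_entry for the
   ADRP is the page delta PG (S) - PG (P) (the howto then shifts it right
   by 12 to form the encoded immediate), and the LDR/ADD parts take the
   low twelve bits of the .got.plt entry address.  */

static ATTRIBUTE_UNUSED void
example_pltn_page_arithmetic (void)
{
  bfd_vma gotplt_entry_address = 0x412018;	/* Hypothetical.  */
  bfd_vma plt_entry_address = 0x400440;		/* Hypothetical.  */

  bfd_vma adrp_delta = PG (gotplt_entry_address) - PG (plt_entry_address);
  bfd_vma lo12 = PG_OFFSET (gotplt_entry_address);

  BFD_ASSERT (adrp_delta == 0x12000);
  BFD_ASSERT (lo12 == 0x18);
}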
7953
7954 /* Size sections even though they're not dynamic. We use it to set up
7955 _TLS_MODULE_BASE_, if needed. */
7956
7957 static bfd_boolean
7958 elfNN_aarch64_always_size_sections (bfd *output_bfd,
7959 struct bfd_link_info *info)
7960 {
7961 asection *tls_sec;
7962
7963 if (info->relocatable)
7964 return TRUE;
7965
7966 tls_sec = elf_hash_table (info)->tls_sec;
7967
7968 if (tls_sec)
7969 {
7970 struct elf_link_hash_entry *tlsbase;
7971
7972 tlsbase = elf_link_hash_lookup (elf_hash_table (info),
7973 "_TLS_MODULE_BASE_", TRUE, TRUE, FALSE);
7974
7975 if (tlsbase)
7976 {
7977 struct bfd_link_hash_entry *h = NULL;
7978 const struct elf_backend_data *bed =
7979 get_elf_backend_data (output_bfd);
7980
7981 if (!(_bfd_generic_link_add_one_symbol
7982 (info, output_bfd, "_TLS_MODULE_BASE_", BSF_LOCAL,
7983 tls_sec, 0, NULL, FALSE, bed->collect, &h)))
7984 return FALSE;
7985
7986 tlsbase->type = STT_TLS;
7987 tlsbase = (struct elf_link_hash_entry *) h;
7988 tlsbase->def_regular = 1;
7989 tlsbase->other = STV_HIDDEN;
7990 (*bed->elf_backend_hide_symbol) (info, tlsbase, TRUE);
7991 }
7992 }
7993
7994 return TRUE;
7995 }
7996
7997 /* Finish up dynamic symbol handling. We set the contents of various
7998 dynamic sections here. */
7999 static bfd_boolean
8000 elfNN_aarch64_finish_dynamic_symbol (bfd *output_bfd,
8001 struct bfd_link_info *info,
8002 struct elf_link_hash_entry *h,
8003 Elf_Internal_Sym *sym)
8004 {
8005 struct elf_aarch64_link_hash_table *htab;
8006 htab = elf_aarch64_hash_table (info);
8007
8008 if (h->plt.offset != (bfd_vma) - 1)
8009 {
8010 asection *plt, *gotplt, *relplt;
8011
8012 /* This symbol has an entry in the procedure linkage table. Set
8013 it up. */
8014
8015 /* When building a static executable, use .iplt, .igot.plt and
8016 .rela.iplt sections for STT_GNU_IFUNC symbols. */
8017 if (htab->root.splt != NULL)
8018 {
8019 plt = htab->root.splt;
8020 gotplt = htab->root.sgotplt;
8021 relplt = htab->root.srelplt;
8022 }
8023 else
8024 {
8025 plt = htab->root.iplt;
8026 gotplt = htab->root.igotplt;
8027 relplt = htab->root.irelplt;
8028 }
8029
8030 /* This symbol has an entry in the procedure linkage table. Set
8031 it up. */
8032 if ((h->dynindx == -1
8033 && !((h->forced_local || info->executable)
8034 && h->def_regular
8035 && h->type == STT_GNU_IFUNC))
8036 || plt == NULL
8037 || gotplt == NULL
8038 || relplt == NULL)
8039 abort ();
8040
8041 elfNN_aarch64_create_small_pltn_entry (h, htab, output_bfd, info);
8042 if (!h->def_regular)
8043 {
8044 /* Mark the symbol as undefined, rather than as defined in
8045 the .plt section. */
8046 sym->st_shndx = SHN_UNDEF;
8047 /* If the symbol is weak we need to clear the value.
8048 Otherwise, the PLT entry would provide a definition for
8049 the symbol even if the symbol wasn't defined anywhere,
8050 and so the symbol would never be NULL. Leave the value if
8051 there were any relocations where pointer equality matters
8052 (this is a clue for the dynamic linker, to make function
8053 pointer comparisons work between an application and shared
8054 library). */
8055 if (!h->ref_regular_nonweak || !h->pointer_equality_needed)
8056 sym->st_value = 0;
8057 }
8058 }
8059
8060 if (h->got.offset != (bfd_vma) - 1
8061 && elf_aarch64_hash_entry (h)->got_type == GOT_NORMAL)
8062 {
8063 Elf_Internal_Rela rela;
8064 bfd_byte *loc;
8065
8066 /* This symbol has an entry in the global offset table. Set it
8067 up. */
8068 if (htab->root.sgot == NULL || htab->root.srelgot == NULL)
8069 abort ();
8070
8071 rela.r_offset = (htab->root.sgot->output_section->vma
8072 + htab->root.sgot->output_offset
8073 + (h->got.offset & ~(bfd_vma) 1));
8074
8075 if (h->def_regular
8076 && h->type == STT_GNU_IFUNC)
8077 {
8078 if (info->shared)
8079 {
8080 /* Generate R_AARCH64_GLOB_DAT. */
8081 goto do_glob_dat;
8082 }
8083 else
8084 {
8085 asection *plt;
8086
8087 if (!h->pointer_equality_needed)
8088 abort ();
8089
8090 /* For non-shared object, we can't use .got.plt, which
8091 contains the real function address if we need pointer
8092 equality. We load the GOT entry with the PLT entry. */
8093 plt = htab->root.splt ? htab->root.splt : htab->root.iplt;
8094 bfd_put_NN (output_bfd, (plt->output_section->vma
8095 + plt->output_offset
8096 + h->plt.offset),
8097 htab->root.sgot->contents
8098 + (h->got.offset & ~(bfd_vma) 1));
8099 return TRUE;
8100 }
8101 }
8102 else if (info->shared && SYMBOL_REFERENCES_LOCAL (info, h))
8103 {
8104 if (!h->def_regular)
8105 return FALSE;
8106
8107 BFD_ASSERT ((h->got.offset & 1) != 0);
8108 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
8109 rela.r_addend = (h->root.u.def.value
8110 + h->root.u.def.section->output_section->vma
8111 + h->root.u.def.section->output_offset);
8112 }
8113 else
8114 {
8115 do_glob_dat:
8116 BFD_ASSERT ((h->got.offset & 1) == 0);
8117 bfd_put_NN (output_bfd, (bfd_vma) 0,
8118 htab->root.sgot->contents + h->got.offset);
8119 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (GLOB_DAT));
8120 rela.r_addend = 0;
8121 }
8122
8123 loc = htab->root.srelgot->contents;
8124 loc += htab->root.srelgot->reloc_count++ * RELOC_SIZE (htab);
8125 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8126 }
8127
8128 if (h->needs_copy)
8129 {
8130 Elf_Internal_Rela rela;
8131 bfd_byte *loc;
8132
8133 /* This symbol needs a copy reloc. Set it up. */
8134
8135 if (h->dynindx == -1
8136 || (h->root.type != bfd_link_hash_defined
8137 && h->root.type != bfd_link_hash_defweak)
8138 || htab->srelbss == NULL)
8139 abort ();
8140
8141 rela.r_offset = (h->root.u.def.value
8142 + h->root.u.def.section->output_section->vma
8143 + h->root.u.def.section->output_offset);
8144 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (COPY));
8145 rela.r_addend = 0;
8146 loc = htab->srelbss->contents;
8147 loc += htab->srelbss->reloc_count++ * RELOC_SIZE (htab);
8148 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8149 }
8150
8151 /* Mark _DYNAMIC and _GLOBAL_OFFSET_TABLE_ as absolute. SYM may
8152 be NULL for local symbols. */
8153 if (sym != NULL
8154 && (h == elf_hash_table (info)->hdynamic
8155 || h == elf_hash_table (info)->hgot))
8156 sym->st_shndx = SHN_ABS;
8157
8158 return TRUE;
8159 }
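
/* A minimal sketch, for illustration only: example_make_got_reloc is a
   hypothetical helper, not used by this backend, contrasting the two GOT
   relocations emitted above for a data symbol.  A symbol that resolves
   locally in a shared object gets an R_AARCH64_RELATIVE entry carrying
   the link-time value in the addend, while a preemptible symbol gets an
   R_AARCH64_GLOB_DAT entry against its dynamic symbol index with a zero
   addend.  */

static ATTRIBUTE_UNUSED Elf_Internal_Rela
example_make_got_reloc (bfd_boolean resolves_locally, long dynindx,
			bfd_vma got_entry_vma, bfd_vma link_time_value)
{
  Elf_Internal_Rela rela;

  rela.r_offset = got_entry_vma;
  if (resolves_locally)
    {
      rela.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
      rela.r_addend = link_time_value;
    }
  else
    {
      rela.r_info = ELFNN_R_INFO (dynindx, AARCH64_R (GLOB_DAT));
      rela.r_addend = 0;
    }

  return rela;
}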
8160
8161 /* Finish up local dynamic symbol handling. We set the contents of
8162 various dynamic sections here. */
8163
8164 static bfd_boolean
8165 elfNN_aarch64_finish_local_dynamic_symbol (void **slot, void *inf)
8166 {
8167 struct elf_link_hash_entry *h
8168 = (struct elf_link_hash_entry *) *slot;
8169 struct bfd_link_info *info
8170 = (struct bfd_link_info *) inf;
8171
8172 return elfNN_aarch64_finish_dynamic_symbol (info->output_bfd,
8173 info, h, NULL);
8174 }
8175
8176 static void
8177 elfNN_aarch64_init_small_plt0_entry (bfd *output_bfd ATTRIBUTE_UNUSED,
8178 struct elf_aarch64_link_hash_table
8179 *htab)
8180 {
8181 /* Fill in PLT0. Fixme:RR Note this doesn't distinguish between
8182 small and large PLTs and at the moment just generates
8183 the small PLT. */
8184
8185 /* PLT0 of the small PLT looks like this in ELF64 -
8186 stp x16, x30, [sp, #-16]! // Save the reloc and lr on stack.
8187 adrp x16, PLT_GOT + 16 // Get the page base of the GOTPLT
8188 ldr x17, [x16, #:lo12:PLT_GOT+16] // Load the address of the
8189 // symbol resolver
8190 add x16, x16, #:lo12:PLT_GOT+16 // Load the lo12 bits of the
8191 // GOTPLT entry for this.
8192 br x17
8193 PLT0 will be slightly different in ELF32 due to different got entry
8194 size.
8195 */
8196 bfd_vma plt_got_2nd_ent; /* Address of GOT[2]. */
8197 bfd_vma plt_base;
8198
8199
8200 memcpy (htab->root.splt->contents, elfNN_aarch64_small_plt0_entry,
8201 PLT_ENTRY_SIZE);
8202 elf_section_data (htab->root.splt->output_section)->this_hdr.sh_entsize =
8203 PLT_ENTRY_SIZE;
8204
8205 plt_got_2nd_ent = (htab->root.sgotplt->output_section->vma
8206 + htab->root.sgotplt->output_offset
8207 + GOT_ENTRY_SIZE * 2);
8208
8209 plt_base = htab->root.splt->output_section->vma +
8210 htab->root.splt->output_offset;
8211
8212 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
8213 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
8214 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8215 htab->root.splt->contents + 4,
8216 PG (plt_got_2nd_ent) - PG (plt_base + 4));
8217
8218 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
8219 htab->root.splt->contents + 8,
8220 PG_OFFSET (plt_got_2nd_ent));
8221
8222 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
8223 htab->root.splt->contents + 12,
8224 PG_OFFSET (plt_got_2nd_ent));
8225 }
8226
8227 static bfd_boolean
8228 elfNN_aarch64_finish_dynamic_sections (bfd *output_bfd,
8229 struct bfd_link_info *info)
8230 {
8231 struct elf_aarch64_link_hash_table *htab;
8232 bfd *dynobj;
8233 asection *sdyn;
8234
8235 htab = elf_aarch64_hash_table (info);
8236 dynobj = htab->root.dynobj;
8237 sdyn = bfd_get_linker_section (dynobj, ".dynamic");
8238
8239 if (htab->root.dynamic_sections_created)
8240 {
8241 ElfNN_External_Dyn *dyncon, *dynconend;
8242
8243 if (sdyn == NULL || htab->root.sgot == NULL)
8244 abort ();
8245
8246 dyncon = (ElfNN_External_Dyn *) sdyn->contents;
8247 dynconend = (ElfNN_External_Dyn *) (sdyn->contents + sdyn->size);
8248 for (; dyncon < dynconend; dyncon++)
8249 {
8250 Elf_Internal_Dyn dyn;
8251 asection *s;
8252
8253 bfd_elfNN_swap_dyn_in (dynobj, dyncon, &dyn);
8254
8255 switch (dyn.d_tag)
8256 {
8257 default:
8258 continue;
8259
8260 case DT_PLTGOT:
8261 s = htab->root.sgotplt;
8262 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset;
8263 break;
8264
8265 case DT_JMPREL:
8266 dyn.d_un.d_ptr = htab->root.srelplt->output_section->vma;
8267 break;
8268
8269 case DT_PLTRELSZ:
8270 s = htab->root.srelplt;
8271 dyn.d_un.d_val = s->size;
8272 break;
8273
8274 case DT_RELASZ:
8275 /* The procedure linkage table relocs (DT_JMPREL) should
8276 not be included in the overall relocs (DT_RELA).
8277 Therefore, we override the DT_RELASZ entry here to
8278 make it not include the JMPREL relocs. Since the
8279 linker script arranges for .rela.plt to follow all
8280 other relocation sections, we don't have to worry
8281 about changing the DT_RELA entry. */
8282 if (htab->root.srelplt != NULL)
8283 {
8284 s = htab->root.srelplt;
8285 dyn.d_un.d_val -= s->size;
8286 }
8287 break;
8288
8289 case DT_TLSDESC_PLT:
8290 s = htab->root.splt;
8291 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8292 + htab->tlsdesc_plt;
8293 break;
8294
8295 case DT_TLSDESC_GOT:
8296 s = htab->root.sgot;
8297 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8298 + htab->dt_tlsdesc_got;
8299 break;
8300 }
8301
8302 bfd_elfNN_swap_dyn_out (output_bfd, &dyn, dyncon);
8303 }
8304
8305 }
8306
8307 /* Fill in the special first entry in the procedure linkage table. */
8308 if (htab->root.splt && htab->root.splt->size > 0)
8309 {
8310 elfNN_aarch64_init_small_plt0_entry (output_bfd, htab);
8311
8312 elf_section_data (htab->root.splt->output_section)->
8313 this_hdr.sh_entsize = htab->plt_entry_size;
8314
8315
8316 if (htab->tlsdesc_plt)
8317 {
8318 bfd_put_NN (output_bfd, (bfd_vma) 0,
8319 htab->root.sgot->contents + htab->dt_tlsdesc_got);
8320
8321 memcpy (htab->root.splt->contents + htab->tlsdesc_plt,
8322 elfNN_aarch64_tlsdesc_small_plt_entry,
8323 sizeof (elfNN_aarch64_tlsdesc_small_plt_entry));
8324
8325 {
8326 bfd_vma adrp1_addr =
8327 htab->root.splt->output_section->vma
8328 + htab->root.splt->output_offset + htab->tlsdesc_plt + 4;
8329
8330 bfd_vma adrp2_addr = adrp1_addr + 4;
8331
8332 bfd_vma got_addr =
8333 htab->root.sgot->output_section->vma
8334 + htab->root.sgot->output_offset;
8335
8336 bfd_vma pltgot_addr =
8337 htab->root.sgotplt->output_section->vma
8338 + htab->root.sgotplt->output_offset;
8339
8340 bfd_vma dt_tlsdesc_got = got_addr + htab->dt_tlsdesc_got;
8341
8342 bfd_byte *plt_entry =
8343 htab->root.splt->contents + htab->tlsdesc_plt;
8344
8345 /* adrp x2, DT_TLSDESC_GOT */
8346 elf_aarch64_update_plt_entry (output_bfd,
8347 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8348 plt_entry + 4,
8349 (PG (dt_tlsdesc_got)
8350 - PG (adrp1_addr)));
8351
8352 /* adrp x3, 0 */
8353 elf_aarch64_update_plt_entry (output_bfd,
8354 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8355 plt_entry + 8,
8356 (PG (pltgot_addr)
8357 - PG (adrp2_addr)));
8358
8359 /* ldr x2, [x2, #0] */
8360 elf_aarch64_update_plt_entry (output_bfd,
8361 BFD_RELOC_AARCH64_LDSTNN_LO12,
8362 plt_entry + 12,
8363 PG_OFFSET (dt_tlsdesc_got));
8364
8365 /* add x3, x3, 0 */
8366 elf_aarch64_update_plt_entry (output_bfd,
8367 BFD_RELOC_AARCH64_ADD_LO12,
8368 plt_entry + 16,
8369 PG_OFFSET (pltgot_addr));
8370 }
8371 }
8372 }
8373
8374 if (htab->root.sgotplt)
8375 {
8376 if (bfd_is_abs_section (htab->root.sgotplt->output_section))
8377 {
8378 (*_bfd_error_handler)
8379 (_("discarded output section: `%A'"), htab->root.sgotplt);
8380 return FALSE;
8381 }
8382
8383 /* Fill in the first three entries in the global offset table. */
8384 if (htab->root.sgotplt->size > 0)
8385 {
8386 bfd_put_NN (output_bfd, (bfd_vma) 0, htab->root.sgotplt->contents);
8387
8388 /* Write GOT[1] and GOT[2], needed for the dynamic linker. */
8389 bfd_put_NN (output_bfd,
8390 (bfd_vma) 0,
8391 htab->root.sgotplt->contents + GOT_ENTRY_SIZE);
8392 bfd_put_NN (output_bfd,
8393 (bfd_vma) 0,
8394 htab->root.sgotplt->contents + GOT_ENTRY_SIZE * 2);
8395 }
8396
8397 if (htab->root.sgot)
8398 {
8399 if (htab->root.sgot->size > 0)
8400 {
8401 bfd_vma addr =
8402 sdyn ? sdyn->output_section->vma + sdyn->output_offset : 0;
8403 bfd_put_NN (output_bfd, addr, htab->root.sgot->contents);
8404 }
8405 }
8406
8407 elf_section_data (htab->root.sgotplt->output_section)->
8408 this_hdr.sh_entsize = GOT_ENTRY_SIZE;
8409 }
8410
8411 if (htab->root.sgot && htab->root.sgot->size > 0)
8412 elf_section_data (htab->root.sgot->output_section)->this_hdr.sh_entsize
8413 = GOT_ENTRY_SIZE;
8414
8415 /* Fill PLT and GOT entries for local STT_GNU_IFUNC symbols. */
8416 htab_traverse (htab->loc_hash_table,
8417 elfNN_aarch64_finish_local_dynamic_symbol,
8418 info);
8419
8420 return TRUE;
8421 }
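
/* A minimal sketch, for illustration only, of the DT_RELASZ adjustment
   made above; the byte counts are hypothetical.  If all RELA relocations
   together occupy 0x600 bytes and .rela.plt accounts for 0x180 of them,
   the emitted DT_RELASZ becomes 0x600 - 0x180 = 0x480, so that
   DT_RELA/DT_RELASZ and DT_JMPREL/DT_PLTRELSZ describe disjoint ranges.  */

static ATTRIBUTE_UNUSED bfd_vma
example_adjust_relasz (bfd_vma total_relasz, bfd_vma srelplt_size)
{
  /* e.g. 0x600 - 0x180 == 0x480.  */
  return total_relasz - srelplt_size;
}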
8422
8423 /* Return address for Ith PLT stub in section PLT, for relocation REL
8424 or (bfd_vma) -1 if it should not be included. */
8425
8426 static bfd_vma
8427 elfNN_aarch64_plt_sym_val (bfd_vma i, const asection *plt,
8428 const arelent *rel ATTRIBUTE_UNUSED)
8429 {
8430 return plt->vma + PLT_ENTRY_SIZE + i * PLT_SMALL_ENTRY_SIZE;
8431 }
8432
8433
8434 /* We use this so we can override certain functions
8435 (though currently we don't). */
8436
8437 const struct elf_size_info elfNN_aarch64_size_info =
8438 {
8439 sizeof (ElfNN_External_Ehdr),
8440 sizeof (ElfNN_External_Phdr),
8441 sizeof (ElfNN_External_Shdr),
8442 sizeof (ElfNN_External_Rel),
8443 sizeof (ElfNN_External_Rela),
8444 sizeof (ElfNN_External_Sym),
8445 sizeof (ElfNN_External_Dyn),
8446 sizeof (Elf_External_Note),
8447 4, /* Hash table entry size. */
8448 1, /* Internal relocs per external relocs. */
8449 ARCH_SIZE, /* Arch size. */
8450 LOG_FILE_ALIGN, /* Log_file_align. */
8451 ELFCLASSNN, EV_CURRENT,
8452 bfd_elfNN_write_out_phdrs,
8453 bfd_elfNN_write_shdrs_and_ehdr,
8454 bfd_elfNN_checksum_contents,
8455 bfd_elfNN_write_relocs,
8456 bfd_elfNN_swap_symbol_in,
8457 bfd_elfNN_swap_symbol_out,
8458 bfd_elfNN_slurp_reloc_table,
8459 bfd_elfNN_slurp_symbol_table,
8460 bfd_elfNN_swap_dyn_in,
8461 bfd_elfNN_swap_dyn_out,
8462 bfd_elfNN_swap_reloc_in,
8463 bfd_elfNN_swap_reloc_out,
8464 bfd_elfNN_swap_reloca_in,
8465 bfd_elfNN_swap_reloca_out
8466 };
8467
8468 #define ELF_ARCH bfd_arch_aarch64
8469 #define ELF_MACHINE_CODE EM_AARCH64
8470 #define ELF_MAXPAGESIZE 0x10000
8471 #define ELF_MINPAGESIZE 0x1000
8472 #define ELF_COMMONPAGESIZE 0x1000
8473
8474 #define bfd_elfNN_close_and_cleanup \
8475 elfNN_aarch64_close_and_cleanup
8476
8477 #define bfd_elfNN_bfd_free_cached_info \
8478 elfNN_aarch64_bfd_free_cached_info
8479
8480 #define bfd_elfNN_bfd_is_target_special_symbol \
8481 elfNN_aarch64_is_target_special_symbol
8482
8483 #define bfd_elfNN_bfd_link_hash_table_create \
8484 elfNN_aarch64_link_hash_table_create
8485
8486 #define bfd_elfNN_bfd_merge_private_bfd_data \
8487 elfNN_aarch64_merge_private_bfd_data
8488
8489 #define bfd_elfNN_bfd_print_private_bfd_data \
8490 elfNN_aarch64_print_private_bfd_data
8491
8492 #define bfd_elfNN_bfd_reloc_type_lookup \
8493 elfNN_aarch64_reloc_type_lookup
8494
8495 #define bfd_elfNN_bfd_reloc_name_lookup \
8496 elfNN_aarch64_reloc_name_lookup
8497
8498 #define bfd_elfNN_bfd_set_private_flags \
8499 elfNN_aarch64_set_private_flags
8500
8501 #define bfd_elfNN_find_inliner_info \
8502 elfNN_aarch64_find_inliner_info
8503
8504 #define bfd_elfNN_find_nearest_line \
8505 elfNN_aarch64_find_nearest_line
8506
8507 #define bfd_elfNN_mkobject \
8508 elfNN_aarch64_mkobject
8509
8510 #define bfd_elfNN_new_section_hook \
8511 elfNN_aarch64_new_section_hook
8512
8513 #define elf_backend_adjust_dynamic_symbol \
8514 elfNN_aarch64_adjust_dynamic_symbol
8515
8516 #define elf_backend_always_size_sections \
8517 elfNN_aarch64_always_size_sections
8518
8519 #define elf_backend_check_relocs \
8520 elfNN_aarch64_check_relocs
8521
8522 #define elf_backend_copy_indirect_symbol \
8523 elfNN_aarch64_copy_indirect_symbol
8524
8525 /* Create .dynbss, and .rela.bss sections in DYNOBJ, and set up shortcuts
8526 to them in our hash. */
8527 #define elf_backend_create_dynamic_sections \
8528 elfNN_aarch64_create_dynamic_sections
8529
8530 #define elf_backend_init_index_section \
8531 _bfd_elf_init_2_index_sections
8532
8533 #define elf_backend_finish_dynamic_sections \
8534 elfNN_aarch64_finish_dynamic_sections
8535
8536 #define elf_backend_finish_dynamic_symbol \
8537 elfNN_aarch64_finish_dynamic_symbol
8538
8539 #define elf_backend_gc_sweep_hook \
8540 elfNN_aarch64_gc_sweep_hook
8541
8542 #define elf_backend_object_p \
8543 elfNN_aarch64_object_p
8544
8545 #define elf_backend_output_arch_local_syms \
8546 elfNN_aarch64_output_arch_local_syms
8547
8548 #define elf_backend_plt_sym_val \
8549 elfNN_aarch64_plt_sym_val
8550
8551 #define elf_backend_post_process_headers \
8552 elfNN_aarch64_post_process_headers
8553
8554 #define elf_backend_relocate_section \
8555 elfNN_aarch64_relocate_section
8556
8557 #define elf_backend_reloc_type_class \
8558 elfNN_aarch64_reloc_type_class
8559
8560 #define elf_backend_section_from_shdr \
8561 elfNN_aarch64_section_from_shdr
8562
8563 #define elf_backend_size_dynamic_sections \
8564 elfNN_aarch64_size_dynamic_sections
8565
8566 #define elf_backend_size_info \
8567 elfNN_aarch64_size_info
8568
8569 #define elf_backend_write_section \
8570 elfNN_aarch64_write_section
8571
8572 #define elf_backend_can_refcount 1
8573 #define elf_backend_can_gc_sections 1
8574 #define elf_backend_plt_readonly 1
8575 #define elf_backend_want_got_plt 1
8576 #define elf_backend_want_plt_sym 0
8577 #define elf_backend_may_use_rel_p 0
8578 #define elf_backend_may_use_rela_p 1
8579 #define elf_backend_default_use_rela_p 1
8580 #define elf_backend_rela_normal 1
8581 #define elf_backend_got_header_size (GOT_ENTRY_SIZE * 3)
8582 #define elf_backend_default_execstack 0
8583
8584 #undef elf_backend_obj_attrs_section
8585 #define elf_backend_obj_attrs_section ".ARM.attributes"
8586
8587 #include "elfNN-target.h"