1 /* AArch64-specific support for NN-bit ELF.
2 Copyright (C) 2009-2015 Free Software Foundation, Inc.
3 Contributed by ARM Ltd.
4
5 This file is part of BFD, the Binary File Descriptor library.
6
7 This program is free software; you can redistribute it and/or modify
8 it under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3 of the License, or
10 (at your option) any later version.
11
12 This program is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 GNU General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with this program; see the file COPYING3. If not,
19 see <http://www.gnu.org/licenses/>. */
20
21 /* Notes on implementation:
22
23 Thread Local Store (TLS)
24
25 Overview:
26
27 The implementation currently supports both traditional TLS and TLS
28 descriptors, but only general dynamic (GD).
29
30 For traditional TLS the assembler will present us with code
31 fragments of the form:
32
33 adrp x0, :tlsgd:foo
34 R_AARCH64_TLSGD_ADR_PAGE21(foo)
35 add x0, :tlsgd_lo12:foo
36 R_AARCH64_TLSGD_ADD_LO12_NC(foo)
37 bl __tls_get_addr
38 nop
39
40 For TLS descriptors the assembler will present us with code
41 fragments of the form:
42
43 adrp x0, :tlsdesc:foo R_AARCH64_TLSDESC_ADR_PAGE21(foo)
44 ldr x1, [x0, #:tlsdesc_lo12:foo] R_AARCH64_TLSDESC_LD64_LO12(foo)
45 add x0, x0, #:tlsdesc_lo12:foo R_AARCH64_TLSDESC_ADD_LO12(foo)
46 .tlsdesccall foo
47 blr x1 R_AARCH64_TLSDESC_CALL(foo)
48
49 The relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} against foo
50 indicate that foo is thread local and should be accessed via the
 51    traditional TLS mechanism.
52
53 The relocations R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC}
54 against foo indicate that 'foo' is thread local and should be accessed
55 via a TLS descriptor mechanism.
56
57 The precise instruction sequence is only relevant from the
 58    perspective of linker relaxation, which is currently not implemented.
59
60 The static linker must detect that 'foo' is a TLS object and
61 allocate a double GOT entry. The GOT entry must be created for both
 62    global and local TLS symbols. Note that this is different from
 63    non-TLS local objects, which do not need a GOT entry.
64
65 In the traditional TLS mechanism, the double GOT entry is used to
66 provide the tls_index structure, containing module and offset
67 entries. The static linker places the relocation R_AARCH64_TLS_DTPMOD
 68    on the module entry. The loader will subsequently fix up this
69 relocation with the module identity.
70
71 For global traditional TLS symbols the static linker places an
72 R_AARCH64_TLS_DTPREL relocation on the offset entry. The loader
 73    will subsequently fix up the offset. For local TLS symbols the static
 74    linker fixes up the offset itself.
75
76 In the TLS descriptor mechanism the double GOT entry is used to
77 provide the descriptor. The static linker places the relocation
78 R_AARCH64_TLSDESC on the first GOT slot. The loader will
79 subsequently fix this up.
80
81 Implementation:
82
83 The handling of TLS symbols is implemented across a number of
84 different backend functions. The following is a top level view of
85 what processing is performed where.
86
87 The TLS implementation maintains state information for each TLS
88 symbol. The state information for local and global symbols is kept
89 in different places. Global symbols use generic BFD structures while
90 local symbols use backend specific structures that are allocated and
91 maintained entirely by the backend.
92
93 The flow:
94
95 elfNN_aarch64_check_relocs()
96
97 This function is invoked for each relocation.
98
99 The TLS relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} and
100 R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC} are
 101    spotted. The local symbol data structures are created once, when
 102    the first local symbol is seen.
103
104 The reference count for a symbol is incremented. The GOT type for
105 each symbol is marked as general dynamic.
106
107 elfNN_aarch64_allocate_dynrelocs ()
108
 109    For each global symbol with a positive reference count we allocate a
 110    double GOT slot. For a traditional TLS symbol we allocate space for
 111    two relocation entries on the GOT; for a TLS descriptor symbol we
 112    allocate space for one relocation on the slot. Record the GOT offset
113 for this symbol.
114
115 elfNN_aarch64_size_dynamic_sections ()
116
 117    Iterate over all input BFDs, look in the local symbol data structures
 118    constructed earlier for local TLS symbols, and allocate them double
 119    GOT slots along with space for a single GOT relocation. Update the
120 local symbol structure to record the GOT offset allocated.
121
122 elfNN_aarch64_relocate_section ()
123
124 Calls elfNN_aarch64_final_link_relocate ()
125
126 Emit the relevant TLS relocations against the GOT for each TLS
127 symbol. For local TLS symbols emit the GOT offset directly. The GOT
 128    relocations are emitted only once, the first time a TLS symbol is
129 encountered. The implementation uses the LSB of the GOT offset to
130 flag that the relevant GOT relocations for a symbol have been
131 emitted. All of the TLS code that uses the GOT offset needs to take
132 care to mask out this flag bit before using the offset.
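
     For example, code that consumes the recorded GOT offset is expected
     to do something along these lines (illustrative fragment only; the
     variable names are made up):

       bfd_vma off = got_offset_for_symbol;
       bfd_boolean relocs_emitted = (off & 1) != 0;
       off &= ~(bfd_vma) 1;	<- strip the flag bit before using the offset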
133
134 elfNN_aarch64_final_link_relocate ()
135
 136    Fix up the R_AARCH64_TLSGD_{ADR_PREL21, ADD_LO12_NC} relocations.  */
137
138 #include "sysdep.h"
139 #include "bfd.h"
140 #include "libiberty.h"
141 #include "libbfd.h"
142 #include "bfd_stdint.h"
143 #include "elf-bfd.h"
144 #include "bfdlink.h"
145 #include "objalloc.h"
146 #include "elf/aarch64.h"
147 #include "elfxx-aarch64.h"
148
149 #define ARCH_SIZE NN
150
151 #if ARCH_SIZE == 64
152 #define AARCH64_R(NAME) R_AARCH64_ ## NAME
153 #define AARCH64_R_STR(NAME) "R_AARCH64_" #NAME
154 #define HOWTO64(...) HOWTO (__VA_ARGS__)
155 #define HOWTO32(...) EMPTY_HOWTO (0)
156 #define LOG_FILE_ALIGN 3
157 #endif
158
159 #if ARCH_SIZE == 32
160 #define AARCH64_R(NAME) R_AARCH64_P32_ ## NAME
161 #define AARCH64_R_STR(NAME) "R_AARCH64_P32_" #NAME
162 #define HOWTO64(...) EMPTY_HOWTO (0)
163 #define HOWTO32(...) HOWTO (__VA_ARGS__)
164 #define LOG_FILE_ALIGN 2
165 #endif
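
/* NN is substituted with 64 or 32 when this file is used to build the
   64-bit and ILP32 AArch64 backends respectively; the definitions above
   select the matching relocation names, HOWTO variants and file
   alignment for each instantiation.  */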
166
167 #define IS_AARCH64_TLS_RELOC(R_TYPE) \
168 ((R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21 \
169 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PREL21 \
170 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC \
171 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G1 \
172 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC \
173 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21 \
174 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC \
175 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC \
176 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19 \
177 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12 \
178 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12 \
179 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC \
180 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2 \
181 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 \
182 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC \
183 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0 \
184 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC \
185 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPMOD \
186 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPREL \
187 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_TPREL \
188 || IS_AARCH64_TLSDESC_RELOC ((R_TYPE)))
189
190 #define IS_AARCH64_TLSDESC_RELOC(R_TYPE) \
191 ((R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD_PREL19 \
192 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21 \
193 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21 \
194 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC \
195 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC \
196 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC \
197 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G1 \
198 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G0_NC \
199 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LDR \
200 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD \
201 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_CALL \
202 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC)
203
204 #define ELIMINATE_COPY_RELOCS 0
205
 206 /* Return the size of a relocation entry.  HTAB is the bfd's
 207    elf_aarch64_link_hash_table.  */
208 #define RELOC_SIZE(HTAB) (sizeof (ElfNN_External_Rela))
209
210 /* GOT Entry size - 8 bytes in ELF64 and 4 bytes in ELF32. */
211 #define GOT_ENTRY_SIZE (ARCH_SIZE / 8)
212 #define PLT_ENTRY_SIZE (32)
213 #define PLT_SMALL_ENTRY_SIZE (16)
214 #define PLT_TLSDESC_ENTRY_SIZE (32)
215
216 /* Encoding of the nop instruction */
217 #define INSN_NOP 0xd503201f
218
219 #define aarch64_compute_jump_table_size(htab) \
220 (((htab)->root.srelplt == NULL) ? 0 \
221 : (htab)->root.srelplt->reloc_count * GOT_ENTRY_SIZE)
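
/* The expression above gives the size in bytes of the PLT jump table
   within the GOT: one GOT entry per srelplt relocation, or zero when no
   .rela.plt has been created.  */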
222
 223 /* The first entry in a procedure linkage table looks like this; if
 224    the distance between the PLTGOT and the PLT is < 4GB these PLT
 225    entries are used.  Note that the dynamic linker gets &PLTGOT[2]
 226    in x16 and needs to work out PLTGOT[1] by using an address of
 227    [x16,#-GOT_ENTRY_SIZE].  */
228 static const bfd_byte elfNN_aarch64_small_plt0_entry[PLT_ENTRY_SIZE] =
229 {
230 0xf0, 0x7b, 0xbf, 0xa9, /* stp x16, x30, [sp, #-16]! */
231 0x10, 0x00, 0x00, 0x90, /* adrp x16, (GOT+16) */
232 #if ARCH_SIZE == 64
233 0x11, 0x0A, 0x40, 0xf9, /* ldr x17, [x16, #PLT_GOT+0x10] */
234 0x10, 0x42, 0x00, 0x91, /* add x16, x16,#PLT_GOT+0x10 */
235 #else
236 0x11, 0x0A, 0x40, 0xb9, /* ldr w17, [x16, #PLT_GOT+0x8] */
237 0x10, 0x22, 0x00, 0x11, /* add w16, w16,#PLT_GOT+0x8 */
238 #endif
239 0x20, 0x02, 0x1f, 0xd6, /* br x17 */
240 0x1f, 0x20, 0x03, 0xd5, /* nop */
241 0x1f, 0x20, 0x03, 0xd5, /* nop */
242 0x1f, 0x20, 0x03, 0xd5, /* nop */
243 };
244
 245 /* A per-function entry in a procedure linkage table looks like this;
 246    if the distance between the PLTGOT and the PLT is < 4GB these PLT
 247    entries are used.  */
248 static const bfd_byte elfNN_aarch64_small_plt_entry[PLT_SMALL_ENTRY_SIZE] =
249 {
250 0x10, 0x00, 0x00, 0x90, /* adrp x16, PLTGOT + n * 8 */
251 #if ARCH_SIZE == 64
252 0x11, 0x02, 0x40, 0xf9, /* ldr x17, [x16, PLTGOT + n * 8] */
253 0x10, 0x02, 0x00, 0x91, /* add x16, x16, :lo12:PLTGOT + n * 8 */
254 #else
255 0x11, 0x02, 0x40, 0xb9, /* ldr w17, [x16, PLTGOT + n * 4] */
256 0x10, 0x02, 0x00, 0x11, /* add w16, w16, :lo12:PLTGOT + n * 4 */
257 #endif
258 0x20, 0x02, 0x1f, 0xd6, /* br x17. */
259 };
260
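/* The special PLT entry used for lazy TLS descriptor resolution.  As the
   instruction comments below show, it preserves x2 and x3, loads an
   address from the GOT into x2, materialises a GOT address in x3, and
   branches via x2; the zero immediates are patched when the dynamic
   sections are finished.  */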
261 static const bfd_byte
262 elfNN_aarch64_tlsdesc_small_plt_entry[PLT_TLSDESC_ENTRY_SIZE] =
263 {
264 0xe2, 0x0f, 0xbf, 0xa9, /* stp x2, x3, [sp, #-16]! */
265 0x02, 0x00, 0x00, 0x90, /* adrp x2, 0 */
266 0x03, 0x00, 0x00, 0x90, /* adrp x3, 0 */
267 #if ARCH_SIZE == 64
268 0x42, 0x00, 0x40, 0xf9, /* ldr x2, [x2, #0] */
269 0x63, 0x00, 0x00, 0x91, /* add x3, x3, 0 */
270 #else
271 0x42, 0x00, 0x40, 0xb9, /* ldr w2, [x2, #0] */
272 0x63, 0x00, 0x00, 0x11, /* add w3, w3, 0 */
273 #endif
274 0x40, 0x00, 0x1f, 0xd6, /* br x2 */
275 0x1f, 0x20, 0x03, 0xd5, /* nop */
276 0x1f, 0x20, 0x03, 0xd5, /* nop */
277 };
278
279 #define elf_info_to_howto elfNN_aarch64_info_to_howto
280 #define elf_info_to_howto_rel elfNN_aarch64_info_to_howto
281
282 #define AARCH64_ELF_ABI_VERSION 0
283
284 /* In case we're on a 32-bit machine, construct a 64-bit "-1" value. */
285 #define ALL_ONES (~ (bfd_vma) 0)
286
 287 /* Indexed by the bfd internal reloc enumerators.
288 Therefore, the table needs to be synced with BFD_RELOC_AARCH64_*
289 in reloc.c. */
290
291 static reloc_howto_type elfNN_aarch64_howto_table[] =
292 {
293 EMPTY_HOWTO (0),
294
295 /* Basic data relocations. */
296
297 #if ARCH_SIZE == 64
298 HOWTO (R_AARCH64_NULL, /* type */
299 0, /* rightshift */
300 3, /* size (0 = byte, 1 = short, 2 = long) */
301 0, /* bitsize */
302 FALSE, /* pc_relative */
303 0, /* bitpos */
304 complain_overflow_dont, /* complain_on_overflow */
305 bfd_elf_generic_reloc, /* special_function */
306 "R_AARCH64_NULL", /* name */
307 FALSE, /* partial_inplace */
308 0, /* src_mask */
309 0, /* dst_mask */
310 FALSE), /* pcrel_offset */
311 #else
312 HOWTO (R_AARCH64_NONE, /* type */
313 0, /* rightshift */
314 3, /* size (0 = byte, 1 = short, 2 = long) */
315 0, /* bitsize */
316 FALSE, /* pc_relative */
317 0, /* bitpos */
318 complain_overflow_dont, /* complain_on_overflow */
319 bfd_elf_generic_reloc, /* special_function */
320 "R_AARCH64_NONE", /* name */
321 FALSE, /* partial_inplace */
322 0, /* src_mask */
323 0, /* dst_mask */
324 FALSE), /* pcrel_offset */
325 #endif
326
327 /* .xword: (S+A) */
328 HOWTO64 (AARCH64_R (ABS64), /* type */
329 0, /* rightshift */
330 4, /* size (4 = long long) */
331 64, /* bitsize */
332 FALSE, /* pc_relative */
333 0, /* bitpos */
334 complain_overflow_unsigned, /* complain_on_overflow */
335 bfd_elf_generic_reloc, /* special_function */
336 AARCH64_R_STR (ABS64), /* name */
337 FALSE, /* partial_inplace */
338 ALL_ONES, /* src_mask */
339 ALL_ONES, /* dst_mask */
340 FALSE), /* pcrel_offset */
341
342 /* .word: (S+A) */
343 HOWTO (AARCH64_R (ABS32), /* type */
344 0, /* rightshift */
345 2, /* size (0 = byte, 1 = short, 2 = long) */
346 32, /* bitsize */
347 FALSE, /* pc_relative */
348 0, /* bitpos */
349 complain_overflow_unsigned, /* complain_on_overflow */
350 bfd_elf_generic_reloc, /* special_function */
351 AARCH64_R_STR (ABS32), /* name */
352 FALSE, /* partial_inplace */
353 0xffffffff, /* src_mask */
354 0xffffffff, /* dst_mask */
355 FALSE), /* pcrel_offset */
356
357 /* .half: (S+A) */
358 HOWTO (AARCH64_R (ABS16), /* type */
359 0, /* rightshift */
360 1, /* size (0 = byte, 1 = short, 2 = long) */
361 16, /* bitsize */
362 FALSE, /* pc_relative */
363 0, /* bitpos */
364 complain_overflow_unsigned, /* complain_on_overflow */
365 bfd_elf_generic_reloc, /* special_function */
366 AARCH64_R_STR (ABS16), /* name */
367 FALSE, /* partial_inplace */
368 0xffff, /* src_mask */
369 0xffff, /* dst_mask */
370 FALSE), /* pcrel_offset */
371
372 /* .xword: (S+A-P) */
373 HOWTO64 (AARCH64_R (PREL64), /* type */
374 0, /* rightshift */
375 4, /* size (4 = long long) */
376 64, /* bitsize */
377 TRUE, /* pc_relative */
378 0, /* bitpos */
379 complain_overflow_signed, /* complain_on_overflow */
380 bfd_elf_generic_reloc, /* special_function */
381 AARCH64_R_STR (PREL64), /* name */
382 FALSE, /* partial_inplace */
383 ALL_ONES, /* src_mask */
384 ALL_ONES, /* dst_mask */
385 TRUE), /* pcrel_offset */
386
387 /* .word: (S+A-P) */
388 HOWTO (AARCH64_R (PREL32), /* type */
389 0, /* rightshift */
390 2, /* size (0 = byte, 1 = short, 2 = long) */
391 32, /* bitsize */
392 TRUE, /* pc_relative */
393 0, /* bitpos */
394 complain_overflow_signed, /* complain_on_overflow */
395 bfd_elf_generic_reloc, /* special_function */
396 AARCH64_R_STR (PREL32), /* name */
397 FALSE, /* partial_inplace */
398 0xffffffff, /* src_mask */
399 0xffffffff, /* dst_mask */
400 TRUE), /* pcrel_offset */
401
402 /* .half: (S+A-P) */
403 HOWTO (AARCH64_R (PREL16), /* type */
404 0, /* rightshift */
405 1, /* size (0 = byte, 1 = short, 2 = long) */
406 16, /* bitsize */
407 TRUE, /* pc_relative */
408 0, /* bitpos */
409 complain_overflow_signed, /* complain_on_overflow */
410 bfd_elf_generic_reloc, /* special_function */
411 AARCH64_R_STR (PREL16), /* name */
412 FALSE, /* partial_inplace */
413 0xffff, /* src_mask */
414 0xffff, /* dst_mask */
415 TRUE), /* pcrel_offset */
416
417 /* Group relocations to create a 16, 32, 48 or 64 bit
418 unsigned data or abs address inline. */
419
420 /* MOVZ: ((S+A) >> 0) & 0xffff */
421 HOWTO (AARCH64_R (MOVW_UABS_G0), /* type */
422 0, /* rightshift */
423 2, /* size (0 = byte, 1 = short, 2 = long) */
424 16, /* bitsize */
425 FALSE, /* pc_relative */
426 0, /* bitpos */
427 complain_overflow_unsigned, /* complain_on_overflow */
428 bfd_elf_generic_reloc, /* special_function */
429 AARCH64_R_STR (MOVW_UABS_G0), /* name */
430 FALSE, /* partial_inplace */
431 0xffff, /* src_mask */
432 0xffff, /* dst_mask */
433 FALSE), /* pcrel_offset */
434
435 /* MOVK: ((S+A) >> 0) & 0xffff [no overflow check] */
436 HOWTO (AARCH64_R (MOVW_UABS_G0_NC), /* type */
437 0, /* rightshift */
438 2, /* size (0 = byte, 1 = short, 2 = long) */
439 16, /* bitsize */
440 FALSE, /* pc_relative */
441 0, /* bitpos */
442 complain_overflow_dont, /* complain_on_overflow */
443 bfd_elf_generic_reloc, /* special_function */
444 AARCH64_R_STR (MOVW_UABS_G0_NC), /* name */
445 FALSE, /* partial_inplace */
446 0xffff, /* src_mask */
447 0xffff, /* dst_mask */
448 FALSE), /* pcrel_offset */
449
450 /* MOVZ: ((S+A) >> 16) & 0xffff */
451 HOWTO (AARCH64_R (MOVW_UABS_G1), /* type */
452 16, /* rightshift */
453 2, /* size (0 = byte, 1 = short, 2 = long) */
454 16, /* bitsize */
455 FALSE, /* pc_relative */
456 0, /* bitpos */
457 complain_overflow_unsigned, /* complain_on_overflow */
458 bfd_elf_generic_reloc, /* special_function */
459 AARCH64_R_STR (MOVW_UABS_G1), /* name */
460 FALSE, /* partial_inplace */
461 0xffff, /* src_mask */
462 0xffff, /* dst_mask */
463 FALSE), /* pcrel_offset */
464
465 /* MOVK: ((S+A) >> 16) & 0xffff [no overflow check] */
466 HOWTO64 (AARCH64_R (MOVW_UABS_G1_NC), /* type */
467 16, /* rightshift */
468 2, /* size (0 = byte, 1 = short, 2 = long) */
469 16, /* bitsize */
470 FALSE, /* pc_relative */
471 0, /* bitpos */
472 complain_overflow_dont, /* complain_on_overflow */
473 bfd_elf_generic_reloc, /* special_function */
474 AARCH64_R_STR (MOVW_UABS_G1_NC), /* name */
475 FALSE, /* partial_inplace */
476 0xffff, /* src_mask */
477 0xffff, /* dst_mask */
478 FALSE), /* pcrel_offset */
479
480 /* MOVZ: ((S+A) >> 32) & 0xffff */
481 HOWTO64 (AARCH64_R (MOVW_UABS_G2), /* type */
482 32, /* rightshift */
483 2, /* size (0 = byte, 1 = short, 2 = long) */
484 16, /* bitsize */
485 FALSE, /* pc_relative */
486 0, /* bitpos */
487 complain_overflow_unsigned, /* complain_on_overflow */
488 bfd_elf_generic_reloc, /* special_function */
489 AARCH64_R_STR (MOVW_UABS_G2), /* name */
490 FALSE, /* partial_inplace */
491 0xffff, /* src_mask */
492 0xffff, /* dst_mask */
493 FALSE), /* pcrel_offset */
494
495 /* MOVK: ((S+A) >> 32) & 0xffff [no overflow check] */
496 HOWTO64 (AARCH64_R (MOVW_UABS_G2_NC), /* type */
497 32, /* rightshift */
498 2, /* size (0 = byte, 1 = short, 2 = long) */
499 16, /* bitsize */
500 FALSE, /* pc_relative */
501 0, /* bitpos */
502 complain_overflow_dont, /* complain_on_overflow */
503 bfd_elf_generic_reloc, /* special_function */
504 AARCH64_R_STR (MOVW_UABS_G2_NC), /* name */
505 FALSE, /* partial_inplace */
506 0xffff, /* src_mask */
507 0xffff, /* dst_mask */
508 FALSE), /* pcrel_offset */
509
510 /* MOVZ: ((S+A) >> 48) & 0xffff */
511 HOWTO64 (AARCH64_R (MOVW_UABS_G3), /* type */
512 48, /* rightshift */
513 2, /* size (0 = byte, 1 = short, 2 = long) */
514 16, /* bitsize */
515 FALSE, /* pc_relative */
516 0, /* bitpos */
517 complain_overflow_unsigned, /* complain_on_overflow */
518 bfd_elf_generic_reloc, /* special_function */
519 AARCH64_R_STR (MOVW_UABS_G3), /* name */
520 FALSE, /* partial_inplace */
521 0xffff, /* src_mask */
522 0xffff, /* dst_mask */
523 FALSE), /* pcrel_offset */
524
525 /* Group relocations to create high part of a 16, 32, 48 or 64 bit
526 signed data or abs address inline. Will change instruction
527 to MOVN or MOVZ depending on sign of calculated value. */
528
529 /* MOV[ZN]: ((S+A) >> 0) & 0xffff */
530 HOWTO (AARCH64_R (MOVW_SABS_G0), /* type */
531 0, /* rightshift */
532 2, /* size (0 = byte, 1 = short, 2 = long) */
533 16, /* bitsize */
534 FALSE, /* pc_relative */
535 0, /* bitpos */
536 complain_overflow_signed, /* complain_on_overflow */
537 bfd_elf_generic_reloc, /* special_function */
538 AARCH64_R_STR (MOVW_SABS_G0), /* name */
539 FALSE, /* partial_inplace */
540 0xffff, /* src_mask */
541 0xffff, /* dst_mask */
542 FALSE), /* pcrel_offset */
543
544 /* MOV[ZN]: ((S+A) >> 16) & 0xffff */
545 HOWTO64 (AARCH64_R (MOVW_SABS_G1), /* type */
546 16, /* rightshift */
547 2, /* size (0 = byte, 1 = short, 2 = long) */
548 16, /* bitsize */
549 FALSE, /* pc_relative */
550 0, /* bitpos */
551 complain_overflow_signed, /* complain_on_overflow */
552 bfd_elf_generic_reloc, /* special_function */
553 AARCH64_R_STR (MOVW_SABS_G1), /* name */
554 FALSE, /* partial_inplace */
555 0xffff, /* src_mask */
556 0xffff, /* dst_mask */
557 FALSE), /* pcrel_offset */
558
559 /* MOV[ZN]: ((S+A) >> 32) & 0xffff */
560 HOWTO64 (AARCH64_R (MOVW_SABS_G2), /* type */
561 32, /* rightshift */
562 2, /* size (0 = byte, 1 = short, 2 = long) */
563 16, /* bitsize */
564 FALSE, /* pc_relative */
565 0, /* bitpos */
566 complain_overflow_signed, /* complain_on_overflow */
567 bfd_elf_generic_reloc, /* special_function */
568 AARCH64_R_STR (MOVW_SABS_G2), /* name */
569 FALSE, /* partial_inplace */
570 0xffff, /* src_mask */
571 0xffff, /* dst_mask */
572 FALSE), /* pcrel_offset */
573
574 /* Relocations to generate 19, 21 and 33 bit PC-relative load/store
575 addresses: PG(x) is (x & ~0xfff). */
576
577 /* LD-lit: ((S+A-P) >> 2) & 0x7ffff */
578 HOWTO (AARCH64_R (LD_PREL_LO19), /* type */
579 2, /* rightshift */
580 2, /* size (0 = byte, 1 = short, 2 = long) */
581 19, /* bitsize */
582 TRUE, /* pc_relative */
583 0, /* bitpos */
584 complain_overflow_signed, /* complain_on_overflow */
585 bfd_elf_generic_reloc, /* special_function */
586 AARCH64_R_STR (LD_PREL_LO19), /* name */
587 FALSE, /* partial_inplace */
588 0x7ffff, /* src_mask */
589 0x7ffff, /* dst_mask */
590 TRUE), /* pcrel_offset */
591
592 /* ADR: (S+A-P) & 0x1fffff */
593 HOWTO (AARCH64_R (ADR_PREL_LO21), /* type */
594 0, /* rightshift */
595 2, /* size (0 = byte, 1 = short, 2 = long) */
596 21, /* bitsize */
597 TRUE, /* pc_relative */
598 0, /* bitpos */
599 complain_overflow_signed, /* complain_on_overflow */
600 bfd_elf_generic_reloc, /* special_function */
601 AARCH64_R_STR (ADR_PREL_LO21), /* name */
602 FALSE, /* partial_inplace */
603 0x1fffff, /* src_mask */
604 0x1fffff, /* dst_mask */
605 TRUE), /* pcrel_offset */
606
607 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
608 HOWTO (AARCH64_R (ADR_PREL_PG_HI21), /* type */
609 12, /* rightshift */
610 2, /* size (0 = byte, 1 = short, 2 = long) */
611 21, /* bitsize */
612 TRUE, /* pc_relative */
613 0, /* bitpos */
614 complain_overflow_signed, /* complain_on_overflow */
615 bfd_elf_generic_reloc, /* special_function */
616 AARCH64_R_STR (ADR_PREL_PG_HI21), /* name */
617 FALSE, /* partial_inplace */
618 0x1fffff, /* src_mask */
619 0x1fffff, /* dst_mask */
620 TRUE), /* pcrel_offset */
621
622 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff [no overflow check] */
623 HOWTO64 (AARCH64_R (ADR_PREL_PG_HI21_NC), /* type */
624 12, /* rightshift */
625 2, /* size (0 = byte, 1 = short, 2 = long) */
626 21, /* bitsize */
627 TRUE, /* pc_relative */
628 0, /* bitpos */
629 complain_overflow_dont, /* complain_on_overflow */
630 bfd_elf_generic_reloc, /* special_function */
631 AARCH64_R_STR (ADR_PREL_PG_HI21_NC), /* name */
632 FALSE, /* partial_inplace */
633 0x1fffff, /* src_mask */
634 0x1fffff, /* dst_mask */
635 TRUE), /* pcrel_offset */
636
637 /* ADD: (S+A) & 0xfff [no overflow check] */
638 HOWTO (AARCH64_R (ADD_ABS_LO12_NC), /* type */
639 0, /* rightshift */
640 2, /* size (0 = byte, 1 = short, 2 = long) */
641 12, /* bitsize */
642 FALSE, /* pc_relative */
643 10, /* bitpos */
644 complain_overflow_dont, /* complain_on_overflow */
645 bfd_elf_generic_reloc, /* special_function */
646 AARCH64_R_STR (ADD_ABS_LO12_NC), /* name */
647 FALSE, /* partial_inplace */
648 0x3ffc00, /* src_mask */
649 0x3ffc00, /* dst_mask */
650 FALSE), /* pcrel_offset */
651
652 /* LD/ST8: (S+A) & 0xfff */
653 HOWTO (AARCH64_R (LDST8_ABS_LO12_NC), /* type */
654 0, /* rightshift */
655 2, /* size (0 = byte, 1 = short, 2 = long) */
656 12, /* bitsize */
657 FALSE, /* pc_relative */
658 0, /* bitpos */
659 complain_overflow_dont, /* complain_on_overflow */
660 bfd_elf_generic_reloc, /* special_function */
661 AARCH64_R_STR (LDST8_ABS_LO12_NC), /* name */
662 FALSE, /* partial_inplace */
663 0xfff, /* src_mask */
664 0xfff, /* dst_mask */
665 FALSE), /* pcrel_offset */
666
667 /* Relocations for control-flow instructions. */
668
669 /* TBZ/NZ: ((S+A-P) >> 2) & 0x3fff */
670 HOWTO (AARCH64_R (TSTBR14), /* type */
671 2, /* rightshift */
672 2, /* size (0 = byte, 1 = short, 2 = long) */
673 14, /* bitsize */
674 TRUE, /* pc_relative */
675 0, /* bitpos */
676 complain_overflow_signed, /* complain_on_overflow */
677 bfd_elf_generic_reloc, /* special_function */
678 AARCH64_R_STR (TSTBR14), /* name */
679 FALSE, /* partial_inplace */
680 0x3fff, /* src_mask */
681 0x3fff, /* dst_mask */
682 TRUE), /* pcrel_offset */
683
684 /* B.cond: ((S+A-P) >> 2) & 0x7ffff */
685 HOWTO (AARCH64_R (CONDBR19), /* type */
686 2, /* rightshift */
687 2, /* size (0 = byte, 1 = short, 2 = long) */
688 19, /* bitsize */
689 TRUE, /* pc_relative */
690 0, /* bitpos */
691 complain_overflow_signed, /* complain_on_overflow */
692 bfd_elf_generic_reloc, /* special_function */
693 AARCH64_R_STR (CONDBR19), /* name */
694 FALSE, /* partial_inplace */
695 0x7ffff, /* src_mask */
696 0x7ffff, /* dst_mask */
697 TRUE), /* pcrel_offset */
698
699 /* B: ((S+A-P) >> 2) & 0x3ffffff */
700 HOWTO (AARCH64_R (JUMP26), /* type */
701 2, /* rightshift */
702 2, /* size (0 = byte, 1 = short, 2 = long) */
703 26, /* bitsize */
704 TRUE, /* pc_relative */
705 0, /* bitpos */
706 complain_overflow_signed, /* complain_on_overflow */
707 bfd_elf_generic_reloc, /* special_function */
708 AARCH64_R_STR (JUMP26), /* name */
709 FALSE, /* partial_inplace */
710 0x3ffffff, /* src_mask */
711 0x3ffffff, /* dst_mask */
712 TRUE), /* pcrel_offset */
713
714 /* BL: ((S+A-P) >> 2) & 0x3ffffff */
715 HOWTO (AARCH64_R (CALL26), /* type */
716 2, /* rightshift */
717 2, /* size (0 = byte, 1 = short, 2 = long) */
718 26, /* bitsize */
719 TRUE, /* pc_relative */
720 0, /* bitpos */
721 complain_overflow_signed, /* complain_on_overflow */
722 bfd_elf_generic_reloc, /* special_function */
723 AARCH64_R_STR (CALL26), /* name */
724 FALSE, /* partial_inplace */
725 0x3ffffff, /* src_mask */
726 0x3ffffff, /* dst_mask */
727 TRUE), /* pcrel_offset */
728
729 /* LD/ST16: (S+A) & 0xffe */
730 HOWTO (AARCH64_R (LDST16_ABS_LO12_NC), /* type */
731 1, /* rightshift */
732 2, /* size (0 = byte, 1 = short, 2 = long) */
733 12, /* bitsize */
734 FALSE, /* pc_relative */
735 0, /* bitpos */
736 complain_overflow_dont, /* complain_on_overflow */
737 bfd_elf_generic_reloc, /* special_function */
738 AARCH64_R_STR (LDST16_ABS_LO12_NC), /* name */
739 FALSE, /* partial_inplace */
740 0xffe, /* src_mask */
741 0xffe, /* dst_mask */
742 FALSE), /* pcrel_offset */
743
744 /* LD/ST32: (S+A) & 0xffc */
745 HOWTO (AARCH64_R (LDST32_ABS_LO12_NC), /* type */
746 2, /* rightshift */
747 2, /* size (0 = byte, 1 = short, 2 = long) */
748 12, /* bitsize */
749 FALSE, /* pc_relative */
750 0, /* bitpos */
751 complain_overflow_dont, /* complain_on_overflow */
752 bfd_elf_generic_reloc, /* special_function */
753 AARCH64_R_STR (LDST32_ABS_LO12_NC), /* name */
754 FALSE, /* partial_inplace */
755 0xffc, /* src_mask */
756 0xffc, /* dst_mask */
757 FALSE), /* pcrel_offset */
758
759 /* LD/ST64: (S+A) & 0xff8 */
760 HOWTO (AARCH64_R (LDST64_ABS_LO12_NC), /* type */
761 3, /* rightshift */
762 2, /* size (0 = byte, 1 = short, 2 = long) */
763 12, /* bitsize */
764 FALSE, /* pc_relative */
765 0, /* bitpos */
766 complain_overflow_dont, /* complain_on_overflow */
767 bfd_elf_generic_reloc, /* special_function */
768 AARCH64_R_STR (LDST64_ABS_LO12_NC), /* name */
769 FALSE, /* partial_inplace */
770 0xff8, /* src_mask */
771 0xff8, /* dst_mask */
772 FALSE), /* pcrel_offset */
773
774 /* LD/ST128: (S+A) & 0xff0 */
775 HOWTO (AARCH64_R (LDST128_ABS_LO12_NC), /* type */
776 4, /* rightshift */
777 2, /* size (0 = byte, 1 = short, 2 = long) */
778 12, /* bitsize */
779 FALSE, /* pc_relative */
780 0, /* bitpos */
781 complain_overflow_dont, /* complain_on_overflow */
782 bfd_elf_generic_reloc, /* special_function */
783 AARCH64_R_STR (LDST128_ABS_LO12_NC), /* name */
784 FALSE, /* partial_inplace */
785 0xff0, /* src_mask */
786 0xff0, /* dst_mask */
787 FALSE), /* pcrel_offset */
788
789 /* Set a load-literal immediate field to bits
790 0x1FFFFC of G(S)-P */
791 HOWTO (AARCH64_R (GOT_LD_PREL19), /* type */
792 2, /* rightshift */
793 2, /* size (0 = byte,1 = short,2 = long) */
794 19, /* bitsize */
795 TRUE, /* pc_relative */
796 0, /* bitpos */
797 complain_overflow_signed, /* complain_on_overflow */
798 bfd_elf_generic_reloc, /* special_function */
799 AARCH64_R_STR (GOT_LD_PREL19), /* name */
800 FALSE, /* partial_inplace */
801 0xffffe0, /* src_mask */
802 0xffffe0, /* dst_mask */
803 TRUE), /* pcrel_offset */
804
805 /* Get to the page for the GOT entry for the symbol
806 (G(S) - P) using an ADRP instruction. */
807 HOWTO (AARCH64_R (ADR_GOT_PAGE), /* type */
808 12, /* rightshift */
809 2, /* size (0 = byte, 1 = short, 2 = long) */
810 21, /* bitsize */
811 TRUE, /* pc_relative */
812 0, /* bitpos */
813 complain_overflow_dont, /* complain_on_overflow */
814 bfd_elf_generic_reloc, /* special_function */
815 AARCH64_R_STR (ADR_GOT_PAGE), /* name */
816 FALSE, /* partial_inplace */
817 0x1fffff, /* src_mask */
818 0x1fffff, /* dst_mask */
819 TRUE), /* pcrel_offset */
820
821 /* LD64: GOT offset G(S) & 0xff8 */
822 HOWTO64 (AARCH64_R (LD64_GOT_LO12_NC), /* type */
823 3, /* rightshift */
824 2, /* size (0 = byte, 1 = short, 2 = long) */
825 12, /* bitsize */
826 FALSE, /* pc_relative */
827 0, /* bitpos */
828 complain_overflow_dont, /* complain_on_overflow */
829 bfd_elf_generic_reloc, /* special_function */
830 AARCH64_R_STR (LD64_GOT_LO12_NC), /* name */
831 FALSE, /* partial_inplace */
832 0xff8, /* src_mask */
833 0xff8, /* dst_mask */
834 FALSE), /* pcrel_offset */
835
836 /* LD32: GOT offset G(S) & 0xffc */
837 HOWTO32 (AARCH64_R (LD32_GOT_LO12_NC), /* type */
838 2, /* rightshift */
839 2, /* size (0 = byte, 1 = short, 2 = long) */
840 12, /* bitsize */
841 FALSE, /* pc_relative */
842 0, /* bitpos */
843 complain_overflow_dont, /* complain_on_overflow */
844 bfd_elf_generic_reloc, /* special_function */
845 AARCH64_R_STR (LD32_GOT_LO12_NC), /* name */
846 FALSE, /* partial_inplace */
847 0xffc, /* src_mask */
848 0xffc, /* dst_mask */
849 FALSE), /* pcrel_offset */
850
851 /* Get to the page for the GOT entry for the symbol
852 (G(S) - P) using an ADRP instruction. */
853 HOWTO (AARCH64_R (TLSGD_ADR_PAGE21), /* type */
854 12, /* rightshift */
855 2, /* size (0 = byte, 1 = short, 2 = long) */
856 21, /* bitsize */
857 TRUE, /* pc_relative */
858 0, /* bitpos */
859 complain_overflow_dont, /* complain_on_overflow */
860 bfd_elf_generic_reloc, /* special_function */
861 AARCH64_R_STR (TLSGD_ADR_PAGE21), /* name */
862 FALSE, /* partial_inplace */
863 0x1fffff, /* src_mask */
864 0x1fffff, /* dst_mask */
865 TRUE), /* pcrel_offset */
866
867 HOWTO (AARCH64_R (TLSGD_ADR_PREL21), /* type */
868 0, /* rightshift */
869 2, /* size (0 = byte, 1 = short, 2 = long) */
870 21, /* bitsize */
871 TRUE, /* pc_relative */
872 0, /* bitpos */
873 complain_overflow_dont, /* complain_on_overflow */
874 bfd_elf_generic_reloc, /* special_function */
875 AARCH64_R_STR (TLSGD_ADR_PREL21), /* name */
876 FALSE, /* partial_inplace */
877 0x1fffff, /* src_mask */
878 0x1fffff, /* dst_mask */
879 TRUE), /* pcrel_offset */
880
881 /* ADD: GOT offset G(S) & 0xff8 [no overflow check] */
882 HOWTO (AARCH64_R (TLSGD_ADD_LO12_NC), /* type */
883 0, /* rightshift */
884 2, /* size (0 = byte, 1 = short, 2 = long) */
885 12, /* bitsize */
886 FALSE, /* pc_relative */
887 0, /* bitpos */
888 complain_overflow_dont, /* complain_on_overflow */
889 bfd_elf_generic_reloc, /* special_function */
890 AARCH64_R_STR (TLSGD_ADD_LO12_NC), /* name */
891 FALSE, /* partial_inplace */
892 0xfff, /* src_mask */
893 0xfff, /* dst_mask */
894 FALSE), /* pcrel_offset */
895
896 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G1), /* type */
897 16, /* rightshift */
898 2, /* size (0 = byte, 1 = short, 2 = long) */
899 16, /* bitsize */
900 FALSE, /* pc_relative */
901 0, /* bitpos */
902 complain_overflow_dont, /* complain_on_overflow */
903 bfd_elf_generic_reloc, /* special_function */
904 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G1), /* name */
905 FALSE, /* partial_inplace */
906 0xffff, /* src_mask */
907 0xffff, /* dst_mask */
908 FALSE), /* pcrel_offset */
909
910 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G0_NC), /* type */
911 0, /* rightshift */
912 2, /* size (0 = byte, 1 = short, 2 = long) */
913 16, /* bitsize */
914 FALSE, /* pc_relative */
915 0, /* bitpos */
916 complain_overflow_dont, /* complain_on_overflow */
917 bfd_elf_generic_reloc, /* special_function */
918 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G0_NC), /* name */
919 FALSE, /* partial_inplace */
920 0xffff, /* src_mask */
921 0xffff, /* dst_mask */
922 FALSE), /* pcrel_offset */
923
924 HOWTO (AARCH64_R (TLSIE_ADR_GOTTPREL_PAGE21), /* type */
925 12, /* rightshift */
926 2, /* size (0 = byte, 1 = short, 2 = long) */
927 21, /* bitsize */
928 FALSE, /* pc_relative */
929 0, /* bitpos */
930 complain_overflow_dont, /* complain_on_overflow */
931 bfd_elf_generic_reloc, /* special_function */
932 AARCH64_R_STR (TLSIE_ADR_GOTTPREL_PAGE21), /* name */
933 FALSE, /* partial_inplace */
934 0x1fffff, /* src_mask */
935 0x1fffff, /* dst_mask */
936 FALSE), /* pcrel_offset */
937
938 HOWTO64 (AARCH64_R (TLSIE_LD64_GOTTPREL_LO12_NC), /* type */
939 3, /* rightshift */
940 2, /* size (0 = byte, 1 = short, 2 = long) */
941 12, /* bitsize */
942 FALSE, /* pc_relative */
943 0, /* bitpos */
944 complain_overflow_dont, /* complain_on_overflow */
945 bfd_elf_generic_reloc, /* special_function */
946 AARCH64_R_STR (TLSIE_LD64_GOTTPREL_LO12_NC), /* name */
947 FALSE, /* partial_inplace */
948 0xff8, /* src_mask */
949 0xff8, /* dst_mask */
950 FALSE), /* pcrel_offset */
951
952 HOWTO32 (AARCH64_R (TLSIE_LD32_GOTTPREL_LO12_NC), /* type */
953 2, /* rightshift */
954 2, /* size (0 = byte, 1 = short, 2 = long) */
955 12, /* bitsize */
956 FALSE, /* pc_relative */
957 0, /* bitpos */
958 complain_overflow_dont, /* complain_on_overflow */
959 bfd_elf_generic_reloc, /* special_function */
960 AARCH64_R_STR (TLSIE_LD32_GOTTPREL_LO12_NC), /* name */
961 FALSE, /* partial_inplace */
962 0xffc, /* src_mask */
963 0xffc, /* dst_mask */
964 FALSE), /* pcrel_offset */
965
966 HOWTO (AARCH64_R (TLSIE_LD_GOTTPREL_PREL19), /* type */
967 2, /* rightshift */
968 2, /* size (0 = byte, 1 = short, 2 = long) */
969 19, /* bitsize */
970 FALSE, /* pc_relative */
971 0, /* bitpos */
972 complain_overflow_dont, /* complain_on_overflow */
973 bfd_elf_generic_reloc, /* special_function */
974 AARCH64_R_STR (TLSIE_LD_GOTTPREL_PREL19), /* name */
975 FALSE, /* partial_inplace */
976 0x1ffffc, /* src_mask */
977 0x1ffffc, /* dst_mask */
978 FALSE), /* pcrel_offset */
979
980 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G2), /* type */
981 32, /* rightshift */
982 2, /* size (0 = byte, 1 = short, 2 = long) */
983 16, /* bitsize */
984 FALSE, /* pc_relative */
985 0, /* bitpos */
986 complain_overflow_unsigned, /* complain_on_overflow */
987 bfd_elf_generic_reloc, /* special_function */
988 AARCH64_R_STR (TLSLE_MOVW_TPREL_G2), /* name */
989 FALSE, /* partial_inplace */
990 0xffff, /* src_mask */
991 0xffff, /* dst_mask */
992 FALSE), /* pcrel_offset */
993
994 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G1), /* type */
995 16, /* rightshift */
996 2, /* size (0 = byte, 1 = short, 2 = long) */
997 16, /* bitsize */
998 FALSE, /* pc_relative */
999 0, /* bitpos */
1000 complain_overflow_dont, /* complain_on_overflow */
1001 bfd_elf_generic_reloc, /* special_function */
1002 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1), /* name */
1003 FALSE, /* partial_inplace */
1004 0xffff, /* src_mask */
1005 0xffff, /* dst_mask */
1006 FALSE), /* pcrel_offset */
1007
1008 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G1_NC), /* type */
1009 16, /* rightshift */
1010 2, /* size (0 = byte, 1 = short, 2 = long) */
1011 16, /* bitsize */
1012 FALSE, /* pc_relative */
1013 0, /* bitpos */
1014 complain_overflow_dont, /* complain_on_overflow */
1015 bfd_elf_generic_reloc, /* special_function */
1016 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1_NC), /* name */
1017 FALSE, /* partial_inplace */
1018 0xffff, /* src_mask */
1019 0xffff, /* dst_mask */
1020 FALSE), /* pcrel_offset */
1021
1022 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0), /* type */
1023 0, /* rightshift */
1024 2, /* size (0 = byte, 1 = short, 2 = long) */
1025 16, /* bitsize */
1026 FALSE, /* pc_relative */
1027 0, /* bitpos */
1028 complain_overflow_dont, /* complain_on_overflow */
1029 bfd_elf_generic_reloc, /* special_function */
1030 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0), /* name */
1031 FALSE, /* partial_inplace */
1032 0xffff, /* src_mask */
1033 0xffff, /* dst_mask */
1034 FALSE), /* pcrel_offset */
1035
1036 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0_NC), /* type */
1037 0, /* rightshift */
1038 2, /* size (0 = byte, 1 = short, 2 = long) */
1039 16, /* bitsize */
1040 FALSE, /* pc_relative */
1041 0, /* bitpos */
1042 complain_overflow_dont, /* complain_on_overflow */
1043 bfd_elf_generic_reloc, /* special_function */
1044 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0_NC), /* name */
1045 FALSE, /* partial_inplace */
1046 0xffff, /* src_mask */
1047 0xffff, /* dst_mask */
1048 FALSE), /* pcrel_offset */
1049
1050 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_HI12), /* type */
1051 12, /* rightshift */
1052 2, /* size (0 = byte, 1 = short, 2 = long) */
1053 12, /* bitsize */
1054 FALSE, /* pc_relative */
1055 0, /* bitpos */
1056 complain_overflow_unsigned, /* complain_on_overflow */
1057 bfd_elf_generic_reloc, /* special_function */
1058 AARCH64_R_STR (TLSLE_ADD_TPREL_HI12), /* name */
1059 FALSE, /* partial_inplace */
1060 0xfff, /* src_mask */
1061 0xfff, /* dst_mask */
1062 FALSE), /* pcrel_offset */
1063
1064 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12), /* type */
1065 0, /* rightshift */
1066 2, /* size (0 = byte, 1 = short, 2 = long) */
1067 12, /* bitsize */
1068 FALSE, /* pc_relative */
1069 0, /* bitpos */
1070 complain_overflow_dont, /* complain_on_overflow */
1071 bfd_elf_generic_reloc, /* special_function */
1072 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12), /* name */
1073 FALSE, /* partial_inplace */
1074 0xfff, /* src_mask */
1075 0xfff, /* dst_mask */
1076 FALSE), /* pcrel_offset */
1077
1078 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12_NC), /* type */
1079 0, /* rightshift */
1080 2, /* size (0 = byte, 1 = short, 2 = long) */
1081 12, /* bitsize */
1082 FALSE, /* pc_relative */
1083 0, /* bitpos */
1084 complain_overflow_dont, /* complain_on_overflow */
1085 bfd_elf_generic_reloc, /* special_function */
1086 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12_NC), /* name */
1087 FALSE, /* partial_inplace */
1088 0xfff, /* src_mask */
1089 0xfff, /* dst_mask */
1090 FALSE), /* pcrel_offset */
1091
1092 HOWTO (AARCH64_R (TLSDESC_LD_PREL19), /* type */
1093 2, /* rightshift */
1094 2, /* size (0 = byte, 1 = short, 2 = long) */
1095 19, /* bitsize */
1096 TRUE, /* pc_relative */
1097 0, /* bitpos */
1098 complain_overflow_dont, /* complain_on_overflow */
1099 bfd_elf_generic_reloc, /* special_function */
1100 AARCH64_R_STR (TLSDESC_LD_PREL19), /* name */
1101 FALSE, /* partial_inplace */
1102 0x0ffffe0, /* src_mask */
1103 0x0ffffe0, /* dst_mask */
1104 TRUE), /* pcrel_offset */
1105
1106 HOWTO (AARCH64_R (TLSDESC_ADR_PREL21), /* type */
1107 0, /* rightshift */
1108 2, /* size (0 = byte, 1 = short, 2 = long) */
1109 21, /* bitsize */
1110 TRUE, /* pc_relative */
1111 0, /* bitpos */
1112 complain_overflow_dont, /* complain_on_overflow */
1113 bfd_elf_generic_reloc, /* special_function */
1114 AARCH64_R_STR (TLSDESC_ADR_PREL21), /* name */
1115 FALSE, /* partial_inplace */
1116 0x1fffff, /* src_mask */
1117 0x1fffff, /* dst_mask */
1118 TRUE), /* pcrel_offset */
1119
1120 /* Get to the page for the GOT entry for the symbol
1121 (G(S) - P) using an ADRP instruction. */
1122 HOWTO (AARCH64_R (TLSDESC_ADR_PAGE21), /* type */
1123 12, /* rightshift */
1124 2, /* size (0 = byte, 1 = short, 2 = long) */
1125 21, /* bitsize */
1126 TRUE, /* pc_relative */
1127 0, /* bitpos */
1128 complain_overflow_dont, /* complain_on_overflow */
1129 bfd_elf_generic_reloc, /* special_function */
1130 AARCH64_R_STR (TLSDESC_ADR_PAGE21), /* name */
1131 FALSE, /* partial_inplace */
1132 0x1fffff, /* src_mask */
1133 0x1fffff, /* dst_mask */
1134 TRUE), /* pcrel_offset */
1135
1136 /* LD64: GOT offset G(S) & 0xff8. */
1137 HOWTO64 (AARCH64_R (TLSDESC_LD64_LO12_NC), /* type */
1138 3, /* rightshift */
1139 2, /* size (0 = byte, 1 = short, 2 = long) */
1140 12, /* bitsize */
1141 FALSE, /* pc_relative */
1142 0, /* bitpos */
1143 complain_overflow_dont, /* complain_on_overflow */
1144 bfd_elf_generic_reloc, /* special_function */
1145 AARCH64_R_STR (TLSDESC_LD64_LO12_NC), /* name */
1146 FALSE, /* partial_inplace */
1147 0xff8, /* src_mask */
1148 0xff8, /* dst_mask */
1149 FALSE), /* pcrel_offset */
1150
1151 /* LD32: GOT offset G(S) & 0xffc. */
1152 HOWTO32 (AARCH64_R (TLSDESC_LD32_LO12_NC), /* type */
1153 2, /* rightshift */
1154 2, /* size (0 = byte, 1 = short, 2 = long) */
1155 12, /* bitsize */
1156 FALSE, /* pc_relative */
1157 0, /* bitpos */
1158 complain_overflow_dont, /* complain_on_overflow */
1159 bfd_elf_generic_reloc, /* special_function */
1160 AARCH64_R_STR (TLSDESC_LD32_LO12_NC), /* name */
1161 FALSE, /* partial_inplace */
1162 0xffc, /* src_mask */
1163 0xffc, /* dst_mask */
1164 FALSE), /* pcrel_offset */
1165
1166 /* ADD: GOT offset G(S) & 0xfff. */
1167 HOWTO (AARCH64_R (TLSDESC_ADD_LO12_NC), /* type */
1168 0, /* rightshift */
1169 2, /* size (0 = byte, 1 = short, 2 = long) */
1170 12, /* bitsize */
1171 FALSE, /* pc_relative */
1172 0, /* bitpos */
1173 complain_overflow_dont, /* complain_on_overflow */
1174 bfd_elf_generic_reloc, /* special_function */
1175 AARCH64_R_STR (TLSDESC_ADD_LO12_NC), /* name */
1176 FALSE, /* partial_inplace */
1177 0xfff, /* src_mask */
1178 0xfff, /* dst_mask */
1179 FALSE), /* pcrel_offset */
1180
1181 HOWTO64 (AARCH64_R (TLSDESC_OFF_G1), /* type */
1182 16, /* rightshift */
1183 2, /* size (0 = byte, 1 = short, 2 = long) */
1184 12, /* bitsize */
1185 FALSE, /* pc_relative */
1186 0, /* bitpos */
1187 complain_overflow_dont, /* complain_on_overflow */
1188 bfd_elf_generic_reloc, /* special_function */
1189 AARCH64_R_STR (TLSDESC_OFF_G1), /* name */
1190 FALSE, /* partial_inplace */
1191 0xffff, /* src_mask */
1192 0xffff, /* dst_mask */
1193 FALSE), /* pcrel_offset */
1194
1195 HOWTO64 (AARCH64_R (TLSDESC_OFF_G0_NC), /* type */
1196 0, /* rightshift */
1197 2, /* size (0 = byte, 1 = short, 2 = long) */
1198 12, /* bitsize */
1199 FALSE, /* pc_relative */
1200 0, /* bitpos */
1201 complain_overflow_dont, /* complain_on_overflow */
1202 bfd_elf_generic_reloc, /* special_function */
1203 AARCH64_R_STR (TLSDESC_OFF_G0_NC), /* name */
1204 FALSE, /* partial_inplace */
1205 0xffff, /* src_mask */
1206 0xffff, /* dst_mask */
1207 FALSE), /* pcrel_offset */
1208
1209 HOWTO64 (AARCH64_R (TLSDESC_LDR), /* type */
1210 0, /* rightshift */
1211 2, /* size (0 = byte, 1 = short, 2 = long) */
1212 12, /* bitsize */
1213 FALSE, /* pc_relative */
1214 0, /* bitpos */
1215 complain_overflow_dont, /* complain_on_overflow */
1216 bfd_elf_generic_reloc, /* special_function */
1217 AARCH64_R_STR (TLSDESC_LDR), /* name */
1218 FALSE, /* partial_inplace */
1219 0x0, /* src_mask */
1220 0x0, /* dst_mask */
1221 FALSE), /* pcrel_offset */
1222
1223 HOWTO64 (AARCH64_R (TLSDESC_ADD), /* type */
1224 0, /* rightshift */
1225 2, /* size (0 = byte, 1 = short, 2 = long) */
1226 12, /* bitsize */
1227 FALSE, /* pc_relative */
1228 0, /* bitpos */
1229 complain_overflow_dont, /* complain_on_overflow */
1230 bfd_elf_generic_reloc, /* special_function */
1231 AARCH64_R_STR (TLSDESC_ADD), /* name */
1232 FALSE, /* partial_inplace */
1233 0x0, /* src_mask */
1234 0x0, /* dst_mask */
1235 FALSE), /* pcrel_offset */
1236
1237 HOWTO (AARCH64_R (TLSDESC_CALL), /* type */
1238 0, /* rightshift */
1239 2, /* size (0 = byte, 1 = short, 2 = long) */
1240 0, /* bitsize */
1241 FALSE, /* pc_relative */
1242 0, /* bitpos */
1243 complain_overflow_dont, /* complain_on_overflow */
1244 bfd_elf_generic_reloc, /* special_function */
1245 AARCH64_R_STR (TLSDESC_CALL), /* name */
1246 FALSE, /* partial_inplace */
1247 0x0, /* src_mask */
1248 0x0, /* dst_mask */
1249 FALSE), /* pcrel_offset */
1250
1251 HOWTO (AARCH64_R (COPY), /* type */
1252 0, /* rightshift */
1253 2, /* size (0 = byte, 1 = short, 2 = long) */
1254 64, /* bitsize */
1255 FALSE, /* pc_relative */
1256 0, /* bitpos */
1257 complain_overflow_bitfield, /* complain_on_overflow */
1258 bfd_elf_generic_reloc, /* special_function */
1259 AARCH64_R_STR (COPY), /* name */
1260 TRUE, /* partial_inplace */
1261 0xffffffff, /* src_mask */
1262 0xffffffff, /* dst_mask */
1263 FALSE), /* pcrel_offset */
1264
1265 HOWTO (AARCH64_R (GLOB_DAT), /* type */
1266 0, /* rightshift */
1267 2, /* size (0 = byte, 1 = short, 2 = long) */
1268 64, /* bitsize */
1269 FALSE, /* pc_relative */
1270 0, /* bitpos */
1271 complain_overflow_bitfield, /* complain_on_overflow */
1272 bfd_elf_generic_reloc, /* special_function */
1273 AARCH64_R_STR (GLOB_DAT), /* name */
1274 TRUE, /* partial_inplace */
1275 0xffffffff, /* src_mask */
1276 0xffffffff, /* dst_mask */
1277 FALSE), /* pcrel_offset */
1278
1279 HOWTO (AARCH64_R (JUMP_SLOT), /* type */
1280 0, /* rightshift */
1281 2, /* size (0 = byte, 1 = short, 2 = long) */
1282 64, /* bitsize */
1283 FALSE, /* pc_relative */
1284 0, /* bitpos */
1285 complain_overflow_bitfield, /* complain_on_overflow */
1286 bfd_elf_generic_reloc, /* special_function */
1287 AARCH64_R_STR (JUMP_SLOT), /* name */
1288 TRUE, /* partial_inplace */
1289 0xffffffff, /* src_mask */
1290 0xffffffff, /* dst_mask */
1291 FALSE), /* pcrel_offset */
1292
1293 HOWTO (AARCH64_R (RELATIVE), /* type */
1294 0, /* rightshift */
1295 2, /* size (0 = byte, 1 = short, 2 = long) */
1296 64, /* bitsize */
1297 FALSE, /* pc_relative */
1298 0, /* bitpos */
1299 complain_overflow_bitfield, /* complain_on_overflow */
1300 bfd_elf_generic_reloc, /* special_function */
1301 AARCH64_R_STR (RELATIVE), /* name */
1302 TRUE, /* partial_inplace */
1303 ALL_ONES, /* src_mask */
1304 ALL_ONES, /* dst_mask */
1305 FALSE), /* pcrel_offset */
1306
1307 HOWTO (AARCH64_R (TLS_DTPMOD), /* type */
1308 0, /* rightshift */
1309 2, /* size (0 = byte, 1 = short, 2 = long) */
1310 64, /* bitsize */
1311 FALSE, /* pc_relative */
1312 0, /* bitpos */
1313 complain_overflow_dont, /* complain_on_overflow */
1314 bfd_elf_generic_reloc, /* special_function */
1315 #if ARCH_SIZE == 64
1316 AARCH64_R_STR (TLS_DTPMOD64), /* name */
1317 #else
1318 AARCH64_R_STR (TLS_DTPMOD), /* name */
1319 #endif
1320 FALSE, /* partial_inplace */
1321 0, /* src_mask */
1322 ALL_ONES, /* dst_mask */
1323 	 FALSE),		/* pcrel_offset */
1324
1325 HOWTO (AARCH64_R (TLS_DTPREL), /* type */
1326 0, /* rightshift */
1327 2, /* size (0 = byte, 1 = short, 2 = long) */
1328 64, /* bitsize */
1329 FALSE, /* pc_relative */
1330 0, /* bitpos */
1331 complain_overflow_dont, /* complain_on_overflow */
1332 bfd_elf_generic_reloc, /* special_function */
1333 #if ARCH_SIZE == 64
1334 AARCH64_R_STR (TLS_DTPREL64), /* name */
1335 #else
1336 AARCH64_R_STR (TLS_DTPREL), /* name */
1337 #endif
1338 FALSE, /* partial_inplace */
1339 0, /* src_mask */
1340 ALL_ONES, /* dst_mask */
1341 FALSE), /* pcrel_offset */
1342
1343 HOWTO (AARCH64_R (TLS_TPREL), /* type */
1344 0, /* rightshift */
1345 2, /* size (0 = byte, 1 = short, 2 = long) */
1346 64, /* bitsize */
1347 FALSE, /* pc_relative */
1348 0, /* bitpos */
1349 complain_overflow_dont, /* complain_on_overflow */
1350 bfd_elf_generic_reloc, /* special_function */
1351 #if ARCH_SIZE == 64
1352 AARCH64_R_STR (TLS_TPREL64), /* name */
1353 #else
1354 AARCH64_R_STR (TLS_TPREL), /* name */
1355 #endif
1356 FALSE, /* partial_inplace */
1357 0, /* src_mask */
1358 ALL_ONES, /* dst_mask */
1359 FALSE), /* pcrel_offset */
1360
1361 HOWTO (AARCH64_R (TLSDESC), /* type */
1362 0, /* rightshift */
1363 2, /* size (0 = byte, 1 = short, 2 = long) */
1364 64, /* bitsize */
1365 FALSE, /* pc_relative */
1366 0, /* bitpos */
1367 complain_overflow_dont, /* complain_on_overflow */
1368 bfd_elf_generic_reloc, /* special_function */
1369 AARCH64_R_STR (TLSDESC), /* name */
1370 FALSE, /* partial_inplace */
1371 0, /* src_mask */
1372 ALL_ONES, /* dst_mask */
1373 FALSE), /* pcrel_offset */
1374
1375 HOWTO (AARCH64_R (IRELATIVE), /* type */
1376 0, /* rightshift */
1377 2, /* size (0 = byte, 1 = short, 2 = long) */
1378 64, /* bitsize */
1379 FALSE, /* pc_relative */
1380 0, /* bitpos */
1381 complain_overflow_bitfield, /* complain_on_overflow */
1382 bfd_elf_generic_reloc, /* special_function */
1383 AARCH64_R_STR (IRELATIVE), /* name */
1384 FALSE, /* partial_inplace */
1385 0, /* src_mask */
1386 ALL_ONES, /* dst_mask */
1387 FALSE), /* pcrel_offset */
1388
1389 EMPTY_HOWTO (0),
1390 };
1391
1392 static reloc_howto_type elfNN_aarch64_howto_none =
1393 HOWTO (R_AARCH64_NONE, /* type */
1394 0, /* rightshift */
1395 3, /* size (0 = byte, 1 = short, 2 = long) */
1396 0, /* bitsize */
1397 FALSE, /* pc_relative */
1398 0, /* bitpos */
1399 complain_overflow_dont,/* complain_on_overflow */
1400 bfd_elf_generic_reloc, /* special_function */
1401 "R_AARCH64_NONE", /* name */
1402 FALSE, /* partial_inplace */
1403 0, /* src_mask */
1404 0, /* dst_mask */
1405 FALSE); /* pcrel_offset */
1406
1407 /* Given HOWTO, return the bfd internal relocation enumerator. */
1408
1409 static bfd_reloc_code_real_type
1410 elfNN_aarch64_bfd_reloc_from_howto (reloc_howto_type *howto)
1411 {
1412 const int size
1413 = (int) ARRAY_SIZE (elfNN_aarch64_howto_table);
1414 const ptrdiff_t offset
1415 = howto - elfNN_aarch64_howto_table;
1416
1417 if (offset > 0 && offset < size - 1)
1418 return BFD_RELOC_AARCH64_RELOC_START + offset;
1419
1420 if (howto == &elfNN_aarch64_howto_none)
1421 return BFD_RELOC_AARCH64_NONE;
1422
1423 return BFD_RELOC_AARCH64_RELOC_START;
1424 }
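
/* Note that the mapping above relies on elfNN_aarch64_howto_table being
   kept in the same order as the BFD_RELOC_AARCH64_* enumerators, so that
   (sketch):

     howto == &elfNN_aarch64_howto_table[i]
       implies  code == BFD_RELOC_AARCH64_RELOC_START + i  */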
1425
1426 /* Given R_TYPE, return the bfd internal relocation enumerator. */
1427
1428 static bfd_reloc_code_real_type
1429 elfNN_aarch64_bfd_reloc_from_type (unsigned int r_type)
1430 {
1431 static bfd_boolean initialized_p = FALSE;
1432 /* Indexed by R_TYPE, values are offsets in the howto_table. */
1433 static unsigned int offsets[R_AARCH64_end];
1434
1435 if (initialized_p == FALSE)
1436 {
1437 unsigned int i;
1438
1439 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1440 if (elfNN_aarch64_howto_table[i].type != 0)
1441 offsets[elfNN_aarch64_howto_table[i].type] = i;
1442
1443 initialized_p = TRUE;
1444 }
1445
1446 if (r_type == R_AARCH64_NONE || r_type == R_AARCH64_NULL)
1447 return BFD_RELOC_AARCH64_NONE;
1448
1449 /* PR 17512: file: b371e70a. */
1450 if (r_type >= R_AARCH64_end)
1451 {
1452 _bfd_error_handler (_("Invalid AArch64 reloc number: %d"), r_type);
1453 bfd_set_error (bfd_error_bad_value);
1454 return BFD_RELOC_AARCH64_NONE;
1455 }
1456
1457 return BFD_RELOC_AARCH64_RELOC_START + offsets[r_type];
1458 }
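
/* Typical use when processing an ELF relocation (illustrative sketch
   only; REL here is a hypothetical Elf_Internal_Rela pointer):

     unsigned int r_type = ELFNN_R_TYPE (rel->r_info);
     bfd_reloc_code_real_type code
       = elfNN_aarch64_bfd_reloc_from_type (r_type);  */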
1459
1460 struct elf_aarch64_reloc_map
1461 {
1462 bfd_reloc_code_real_type from;
1463 bfd_reloc_code_real_type to;
1464 };
1465
1466 /* Map bfd generic reloc to AArch64-specific reloc. */
1467 static const struct elf_aarch64_reloc_map elf_aarch64_reloc_map[] =
1468 {
1469 {BFD_RELOC_NONE, BFD_RELOC_AARCH64_NONE},
1470
1471 /* Basic data relocations. */
1472 {BFD_RELOC_CTOR, BFD_RELOC_AARCH64_NN},
1473 {BFD_RELOC_64, BFD_RELOC_AARCH64_64},
1474 {BFD_RELOC_32, BFD_RELOC_AARCH64_32},
1475 {BFD_RELOC_16, BFD_RELOC_AARCH64_16},
1476 {BFD_RELOC_64_PCREL, BFD_RELOC_AARCH64_64_PCREL},
1477 {BFD_RELOC_32_PCREL, BFD_RELOC_AARCH64_32_PCREL},
1478 {BFD_RELOC_16_PCREL, BFD_RELOC_AARCH64_16_PCREL},
1479 };
1480
1481 /* Given the bfd internal relocation enumerator in CODE, return the
1482 corresponding howto entry. */
1483
1484 static reloc_howto_type *
1485 elfNN_aarch64_howto_from_bfd_reloc (bfd_reloc_code_real_type code)
1486 {
1487 unsigned int i;
1488
1489 /* Convert bfd generic reloc to AArch64-specific reloc. */
1490 if (code < BFD_RELOC_AARCH64_RELOC_START
1491 || code > BFD_RELOC_AARCH64_RELOC_END)
1492 for (i = 0; i < ARRAY_SIZE (elf_aarch64_reloc_map); i++)
1493 if (elf_aarch64_reloc_map[i].from == code)
1494 {
1495 code = elf_aarch64_reloc_map[i].to;
1496 break;
1497 }
1498
1499 if (code > BFD_RELOC_AARCH64_RELOC_START
1500 && code < BFD_RELOC_AARCH64_RELOC_END)
1501 if (elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START].type)
1502 return &elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START];
1503
1504 if (code == BFD_RELOC_AARCH64_NONE)
1505 return &elfNN_aarch64_howto_none;
1506
1507 return NULL;
1508 }
1509
1510 static reloc_howto_type *
1511 elfNN_aarch64_howto_from_type (unsigned int r_type)
1512 {
1513 bfd_reloc_code_real_type val;
1514 reloc_howto_type *howto;
1515
1516 #if ARCH_SIZE == 32
1517 if (r_type > 256)
1518 {
1519 bfd_set_error (bfd_error_bad_value);
1520 return NULL;
1521 }
1522 #endif
1523
1524 if (r_type == R_AARCH64_NONE)
1525 return &elfNN_aarch64_howto_none;
1526
1527 val = elfNN_aarch64_bfd_reloc_from_type (r_type);
1528 howto = elfNN_aarch64_howto_from_bfd_reloc (val);
1529
1530 if (howto != NULL)
1531 return howto;
1532
1533 bfd_set_error (bfd_error_bad_value);
1534 return NULL;
1535 }
1536
1537 static void
1538 elfNN_aarch64_info_to_howto (bfd *abfd ATTRIBUTE_UNUSED, arelent *bfd_reloc,
1539 Elf_Internal_Rela *elf_reloc)
1540 {
1541 unsigned int r_type;
1542
1543 r_type = ELFNN_R_TYPE (elf_reloc->r_info);
1544 bfd_reloc->howto = elfNN_aarch64_howto_from_type (r_type);
1545 }
1546
1547 static reloc_howto_type *
1548 elfNN_aarch64_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1549 bfd_reloc_code_real_type code)
1550 {
1551 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (code);
1552
1553 if (howto != NULL)
1554 return howto;
1555
1556 bfd_set_error (bfd_error_bad_value);
1557 return NULL;
1558 }
1559
1560 static reloc_howto_type *
1561 elfNN_aarch64_reloc_name_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1562 const char *r_name)
1563 {
1564 unsigned int i;
1565
1566 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1567 if (elfNN_aarch64_howto_table[i].name != NULL
1568 && strcasecmp (elfNN_aarch64_howto_table[i].name, r_name) == 0)
1569 return &elfNN_aarch64_howto_table[i];
1570
1571 return NULL;
1572 }
1573
1574 #define TARGET_LITTLE_SYM aarch64_elfNN_le_vec
1575 #define TARGET_LITTLE_NAME "elfNN-littleaarch64"
1576 #define TARGET_BIG_SYM aarch64_elfNN_be_vec
1577 #define TARGET_BIG_NAME "elfNN-bigaarch64"
1578
1579 /* The linker script knows the section names for placement.
1580 The entry_names are used to do simple name mangling on the stubs.
1581 Given a function name, and its type, the stub can be found. The
1582    name can be changed. The only requirement is that the %s be present.  */
1583 #define STUB_ENTRY_NAME "__%s_veneer"
1584
1585 /* The name of the dynamic interpreter. This is put in the .interp
1586 section. */
1587 #define ELF_DYNAMIC_INTERPRETER "/lib/ld.so.1"
1588
1589 #define AARCH64_MAX_FWD_BRANCH_OFFSET \
1590 (((1 << 25) - 1) << 2)
1591 #define AARCH64_MAX_BWD_BRANCH_OFFSET \
1592 (-((1 << 25) << 2))
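/* These limits correspond to the +/-128MB reach of the B and BL
   instructions (a signed 26-bit immediate scaled by 4).  */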
1593
1594 #define AARCH64_MAX_ADRP_IMM ((1 << 20) - 1)
1595 #define AARCH64_MIN_ADRP_IMM (-(1 << 20))
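/* Likewise, the ADRP immediate is a signed 21-bit count of 4KB pages,
   giving a reach of +/-4GB.  */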
1596
1597 static int
1598 aarch64_valid_for_adrp_p (bfd_vma value, bfd_vma place)
1599 {
1600 bfd_signed_vma offset = (bfd_signed_vma) (PG (value) - PG (place)) >> 12;
1601 return offset <= AARCH64_MAX_ADRP_IMM && offset >= AARCH64_MIN_ADRP_IMM;
1602 }
1603
1604 static int
1605 aarch64_valid_branch_p (bfd_vma value, bfd_vma place)
1606 {
1607 bfd_signed_vma offset = (bfd_signed_vma) (value - place);
1608 return (offset <= AARCH64_MAX_FWD_BRANCH_OFFSET
1609 && offset >= AARCH64_MAX_BWD_BRANCH_OFFSET);
1610 }
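
/* Illustrative use of the two range checks above when selecting a stub
   (a simplified sketch of the decision made by the stub sizing code
   later in this file):

     if (!aarch64_valid_branch_p (destination, place))
       stub_type = aarch64_valid_for_adrp_p (destination, place)
		   ? aarch64_stub_adrp_branch : aarch64_stub_long_branch;  */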
1611
1612 static const uint32_t aarch64_adrp_branch_stub [] =
1613 {
1614 0x90000010, /* adrp ip0, X */
1615 /* R_AARCH64_ADR_HI21_PCREL(X) */
1616 0x91000210, /* add ip0, ip0, :lo12:X */
1617 /* R_AARCH64_ADD_ABS_LO12_NC(X) */
1618 0xd61f0200, /* br ip0 */
1619 };
1620
1621 static const uint32_t aarch64_long_branch_stub[] =
1622 {
1623 #if ARCH_SIZE == 64
1624 0x58000090, /* ldr ip0, 1f */
1625 #else
1626 0x18000090, /* ldr wip0, 1f */
1627 #endif
1628 0x10000011, /* adr ip1, #0 */
1629 0x8b110210, /* add ip0, ip0, ip1 */
1630 0xd61f0200, /* br ip0 */
1631 0x00000000, /* 1: .xword or .word
1632 R_AARCH64_PRELNN(X) + 12
1633 */
1634 0x00000000,
1635 };
1636
1637 static const uint32_t aarch64_erratum_835769_stub[] =
1638 {
1639 0x00000000, /* Placeholder for multiply accumulate. */
1640 0x14000000, /* b <label> */
1641 };
1642
1643 static const uint32_t aarch64_erratum_843419_stub[] =
1644 {
1645 0x00000000, /* Placeholder for LDR instruction. */
1646 0x14000000, /* b <label> */
1647 };
1648
1649 /* Section name for stubs is the associated section name plus this
1650 string. */
1651 #define STUB_SUFFIX ".stub"
1652
1653 enum elf_aarch64_stub_type
1654 {
1655 aarch64_stub_none,
1656 aarch64_stub_adrp_branch,
1657 aarch64_stub_long_branch,
1658 aarch64_stub_erratum_835769_veneer,
1659 aarch64_stub_erratum_843419_veneer,
1660 };
1661
1662 struct elf_aarch64_stub_hash_entry
1663 {
1664 /* Base hash table entry structure. */
1665 struct bfd_hash_entry root;
1666
1667 /* The stub section. */
1668 asection *stub_sec;
1669
1670 /* Offset within stub_sec of the beginning of this stub. */
1671 bfd_vma stub_offset;
1672
1673 /* Given the symbol's value and its section we can determine its final
1674 value when building the stubs (so the stub knows where to jump). */
1675 bfd_vma target_value;
1676 asection *target_section;
1677
1678 enum elf_aarch64_stub_type stub_type;
1679
1680 /* The symbol table entry, if any, that this was derived from. */
1681 struct elf_aarch64_link_hash_entry *h;
1682
1683 /* Destination symbol type.  */
1684 unsigned char st_type;
1685
1686 /* Where this stub is being called from, or, in the case of combined
1687 stub sections, the first input section in the group. */
1688 asection *id_sec;
1689
1690 /* The name for the local symbol at the start of this stub. The
1691 stub name in the hash table has to be unique; this does not, so
1692 it can be friendlier. */
1693 char *output_name;
1694
1695 /* The instruction which caused this stub to be generated (only valid for
1696 erratum 835769 workaround stubs at present). */
1697 uint32_t veneered_insn;
1698
1699 /* In an erratum 843419 workaround stub, the ADRP instruction offset. */
1700 bfd_vma adrp_offset;
1701 };
1702
1703 /* Used to build a map of a section. This is required for mixed-endian
1704 code/data. */
1705
1706 typedef struct elf_elf_section_map
1707 {
1708 bfd_vma vma;
1709 char type;
1710 }
1711 elf_aarch64_section_map;
1712
1713
1714 typedef struct _aarch64_elf_section_data
1715 {
1716 struct bfd_elf_section_data elf;
1717 unsigned int mapcount;
1718 unsigned int mapsize;
1719 elf_aarch64_section_map *map;
1720 }
1721 _aarch64_elf_section_data;
1722
1723 #define elf_aarch64_section_data(sec) \
1724 ((_aarch64_elf_section_data *) elf_section_data (sec))
1725
1726 /* The size of the thread control block which is defined to be two pointers. */
1727 #define TCB_SIZE ((ARCH_SIZE / 8) * 2)
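/* Editorial note: ARCH_SIZE is 64 for the elf64 backend and 32 for the
   32-bit (ILP32) backend, so TCB_SIZE evaluates to 16 or 8 bytes
   respectively -- two pointers, as stated above.  */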
1728
1729 struct elf_aarch64_local_symbol
1730 {
1731 unsigned int got_type;
1732 bfd_signed_vma got_refcount;
1733 bfd_vma got_offset;
1734
1735 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The
1736 offset is from the end of the jump table and reserved entries
1737 within the PLTGOT.
1738
1739 The magic value (bfd_vma) -1 indicates that an offset has not been
1740 allocated. */
1741 bfd_vma tlsdesc_got_jump_table_offset;
1742 };
1743
1744 struct elf_aarch64_obj_tdata
1745 {
1746 struct elf_obj_tdata root;
1747
1748 /* Local symbol descriptors.  */
1749 struct elf_aarch64_local_symbol *locals;
1750
1751 /* Zero to warn when linking objects with incompatible enum sizes. */
1752 int no_enum_size_warning;
1753
1754 /* Zero to warn when linking objects with incompatible wchar_t sizes. */
1755 int no_wchar_size_warning;
1756 };
1757
1758 #define elf_aarch64_tdata(bfd) \
1759 ((struct elf_aarch64_obj_tdata *) (bfd)->tdata.any)
1760
1761 #define elf_aarch64_locals(bfd) (elf_aarch64_tdata (bfd)->locals)
1762
1763 #define is_aarch64_elf(bfd) \
1764 (bfd_get_flavour (bfd) == bfd_target_elf_flavour \
1765 && elf_tdata (bfd) != NULL \
1766 && elf_object_id (bfd) == AARCH64_ELF_DATA)
1767
1768 static bfd_boolean
1769 elfNN_aarch64_mkobject (bfd *abfd)
1770 {
1771 return bfd_elf_allocate_object (abfd, sizeof (struct elf_aarch64_obj_tdata),
1772 AARCH64_ELF_DATA);
1773 }
1774
1775 #define elf_aarch64_hash_entry(ent) \
1776 ((struct elf_aarch64_link_hash_entry *)(ent))
1777
1778 #define GOT_UNKNOWN 0
1779 #define GOT_NORMAL 1
1780 #define GOT_TLS_GD 2
1781 #define GOT_TLS_IE 4
1782 #define GOT_TLSDESC_GD 8
1783
1784 #define GOT_TLS_GD_ANY_P(type) (((type) & GOT_TLS_GD) || ((type) & GOT_TLSDESC_GD))
1785
1786 /* AArch64 ELF linker hash entry. */
1787 struct elf_aarch64_link_hash_entry
1788 {
1789 struct elf_link_hash_entry root;
1790
1791 /* Track dynamic relocs copied for this symbol. */
1792 struct elf_dyn_relocs *dyn_relocs;
1793
1794 /* Since PLT entries have variable size, we need to record the
1795 index into .got.plt instead of recomputing it from the PLT
1796 offset. */
1797 bfd_signed_vma plt_got_offset;
1798
1799 /* Bit mask representing the type of GOT entry(s) if any required by
1800 this symbol. */
1801 unsigned int got_type;
1802
1803 /* A pointer to the most recently used stub hash entry against this
1804 symbol. */
1805 struct elf_aarch64_stub_hash_entry *stub_cache;
1806
1807 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The offset
1808 is from the end of the jump table and reserved entries within the PLTGOT.
1809
1810 The magic value (bfd_vma) -1 indicates that an offset has not
1811 been allocated. */
1812 bfd_vma tlsdesc_got_jump_table_offset;
1813 };
1814
1815 static unsigned int
1816 elfNN_aarch64_symbol_got_type (struct elf_link_hash_entry *h,
1817 bfd *abfd,
1818 unsigned long r_symndx)
1819 {
1820 if (h)
1821 return elf_aarch64_hash_entry (h)->got_type;
1822
1823 if (! elf_aarch64_locals (abfd))
1824 return GOT_UNKNOWN;
1825
1826 return elf_aarch64_locals (abfd)[r_symndx].got_type;
1827 }
1828
1829 /* Get the AArch64 elf linker hash table from a link_info structure. */
1830 #define elf_aarch64_hash_table(info) \
1831 ((struct elf_aarch64_link_hash_table *) ((info)->hash))
1832
1833 #define aarch64_stub_hash_lookup(table, string, create, copy) \
1834 ((struct elf_aarch64_stub_hash_entry *) \
1835 bfd_hash_lookup ((table), (string), (create), (copy)))
1836
1837 /* AArch64 ELF linker hash table. */
1838 struct elf_aarch64_link_hash_table
1839 {
1840 /* The main hash table. */
1841 struct elf_link_hash_table root;
1842
1843 /* Nonzero to force PIC branch veneers. */
1844 int pic_veneer;
1845
1846 /* Fix erratum 835769. */
1847 int fix_erratum_835769;
1848
1849 /* Fix erratum 843419. */
1850 int fix_erratum_843419;
1851
1852 /* Enable ADRP->ADR rewrite for erratum 843419 workaround. */
1853 int fix_erratum_843419_adr;
1854
1855 /* The number of bytes in the initial entry in the PLT. */
1856 bfd_size_type plt_header_size;
1857
1858 /* The number of bytes in the subsequent PLT entries. */
1859 bfd_size_type plt_entry_size;
1860
1861 /* Short-cuts to get to dynamic linker sections. */
1862 asection *sdynbss;
1863 asection *srelbss;
1864
1865 /* Small local sym cache. */
1866 struct sym_cache sym_cache;
1867
1868 /* For convenience in allocate_dynrelocs. */
1869 bfd *obfd;
1870
1871 /* The amount of space used by the reserved portion of the sgotplt
1872 section, plus whatever space is used by the jump slots. */
1873 bfd_vma sgotplt_jump_table_size;
1874
1875 /* The stub hash table. */
1876 struct bfd_hash_table stub_hash_table;
1877
1878 /* Linker stub bfd. */
1879 bfd *stub_bfd;
1880
1881 /* Linker call-backs. */
1882 asection *(*add_stub_section) (const char *, asection *);
1883 void (*layout_sections_again) (void);
1884
1885 /* Array to keep track of which stub sections have been created, and
1886 information on stub grouping. */
1887 struct map_stub
1888 {
1889 /* This is the section to which stubs in the group will be
1890 attached. */
1891 asection *link_sec;
1892 /* The stub section. */
1893 asection *stub_sec;
1894 } *stub_group;
1895
1896 /* Assorted information used by elfNN_aarch64_size_stubs. */
1897 unsigned int bfd_count;
1898 int top_index;
1899 asection **input_list;
1900
1901 /* The offset into splt of the PLT entry for the TLS descriptor
1902 resolver. Special values are 0, if not necessary (or not found
1903 to be necessary yet), and -1 if needed but not determined
1904 yet. */
1905 bfd_vma tlsdesc_plt;
1906
1907 /* The GOT offset for the lazy trampoline. Communicated to the
1908 loader via DT_TLSDESC_GOT. The magic value (bfd_vma) -1
1909 indicates an offset is not allocated. */
1910 bfd_vma dt_tlsdesc_got;
1911
1912 /* Used by local STT_GNU_IFUNC symbols. */
1913 htab_t loc_hash_table;
1914 void * loc_hash_memory;
1915 };
1916
1917 /* Create an entry in an AArch64 ELF linker hash table. */
1918
1919 static struct bfd_hash_entry *
1920 elfNN_aarch64_link_hash_newfunc (struct bfd_hash_entry *entry,
1921 struct bfd_hash_table *table,
1922 const char *string)
1923 {
1924 struct elf_aarch64_link_hash_entry *ret =
1925 (struct elf_aarch64_link_hash_entry *) entry;
1926
1927 /* Allocate the structure if it has not already been allocated by a
1928 subclass. */
1929 if (ret == NULL)
1930 ret = bfd_hash_allocate (table,
1931 sizeof (struct elf_aarch64_link_hash_entry));
1932 if (ret == NULL)
1933 return (struct bfd_hash_entry *) ret;
1934
1935 /* Call the allocation method of the superclass. */
1936 ret = ((struct elf_aarch64_link_hash_entry *)
1937 _bfd_elf_link_hash_newfunc ((struct bfd_hash_entry *) ret,
1938 table, string));
1939 if (ret != NULL)
1940 {
1941 ret->dyn_relocs = NULL;
1942 ret->got_type = GOT_UNKNOWN;
1943 ret->plt_got_offset = (bfd_vma) - 1;
1944 ret->stub_cache = NULL;
1945 ret->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
1946 }
1947
1948 return (struct bfd_hash_entry *) ret;
1949 }
1950
1951 /* Initialize an entry in the stub hash table. */
1952
1953 static struct bfd_hash_entry *
1954 stub_hash_newfunc (struct bfd_hash_entry *entry,
1955 struct bfd_hash_table *table, const char *string)
1956 {
1957 /* Allocate the structure if it has not already been allocated by a
1958 subclass. */
1959 if (entry == NULL)
1960 {
1961 entry = bfd_hash_allocate (table,
1962 sizeof (struct
1963 elf_aarch64_stub_hash_entry));
1964 if (entry == NULL)
1965 return entry;
1966 }
1967
1968 /* Call the allocation method of the superclass. */
1969 entry = bfd_hash_newfunc (entry, table, string);
1970 if (entry != NULL)
1971 {
1972 struct elf_aarch64_stub_hash_entry *eh;
1973
1974 /* Initialize the local fields. */
1975 eh = (struct elf_aarch64_stub_hash_entry *) entry;
1976 eh->adrp_offset = 0;
1977 eh->stub_sec = NULL;
1978 eh->stub_offset = 0;
1979 eh->target_value = 0;
1980 eh->target_section = NULL;
1981 eh->stub_type = aarch64_stub_none;
1982 eh->h = NULL;
1983 eh->id_sec = NULL;
1984 }
1985
1986 return entry;
1987 }
1988
1989 /* Compute a hash of a local hash entry. We use elf_link_hash_entry
1990 for local symbols so that we can handle local STT_GNU_IFUNC symbols
1991 as global symbols.  We reuse indx and dynstr_index for the local
1992 symbol hash since they aren't used by global symbols in this backend. */
1993
1994 static hashval_t
1995 elfNN_aarch64_local_htab_hash (const void *ptr)
1996 {
1997 struct elf_link_hash_entry *h
1998 = (struct elf_link_hash_entry *) ptr;
1999 return ELF_LOCAL_SYMBOL_HASH (h->indx, h->dynstr_index);
2000 }
2001
2002 /* Compare local hash entries. */
2003
2004 static int
2005 elfNN_aarch64_local_htab_eq (const void *ptr1, const void *ptr2)
2006 {
2007 struct elf_link_hash_entry *h1
2008 = (struct elf_link_hash_entry *) ptr1;
2009 struct elf_link_hash_entry *h2
2010 = (struct elf_link_hash_entry *) ptr2;
2011
2012 return h1->indx == h2->indx && h1->dynstr_index == h2->dynstr_index;
2013 }
2014
2015 /* Find and/or create a hash entry for a local symbol. */
2016
2017 static struct elf_link_hash_entry *
2018 elfNN_aarch64_get_local_sym_hash (struct elf_aarch64_link_hash_table *htab,
2019 bfd *abfd, const Elf_Internal_Rela *rel,
2020 bfd_boolean create)
2021 {
2022 struct elf_aarch64_link_hash_entry e, *ret;
2023 asection *sec = abfd->sections;
2024 hashval_t h = ELF_LOCAL_SYMBOL_HASH (sec->id,
2025 ELFNN_R_SYM (rel->r_info));
2026 void **slot;
2027
2028 e.root.indx = sec->id;
2029 e.root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2030 slot = htab_find_slot_with_hash (htab->loc_hash_table, &e, h,
2031 create ? INSERT : NO_INSERT);
2032
2033 if (!slot)
2034 return NULL;
2035
2036 if (*slot)
2037 {
2038 ret = (struct elf_aarch64_link_hash_entry *) *slot;
2039 return &ret->root;
2040 }
2041
2042 ret = (struct elf_aarch64_link_hash_entry *)
2043 objalloc_alloc ((struct objalloc *) htab->loc_hash_memory,
2044 sizeof (struct elf_aarch64_link_hash_entry));
2045 if (ret)
2046 {
2047 memset (ret, 0, sizeof (*ret));
2048 ret->root.indx = sec->id;
2049 ret->root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2050 ret->root.dynindx = -1;
2051 *slot = ret;
2052 }
2053 return &ret->root;
2054 }
2055
2056 /* Copy the extra info we tack onto an elf_link_hash_entry. */
2057
2058 static void
2059 elfNN_aarch64_copy_indirect_symbol (struct bfd_link_info *info,
2060 struct elf_link_hash_entry *dir,
2061 struct elf_link_hash_entry *ind)
2062 {
2063 struct elf_aarch64_link_hash_entry *edir, *eind;
2064
2065 edir = (struct elf_aarch64_link_hash_entry *) dir;
2066 eind = (struct elf_aarch64_link_hash_entry *) ind;
2067
2068 if (eind->dyn_relocs != NULL)
2069 {
2070 if (edir->dyn_relocs != NULL)
2071 {
2072 struct elf_dyn_relocs **pp;
2073 struct elf_dyn_relocs *p;
2074
2075 /* Add reloc counts against the indirect sym to the direct sym
2076 list. Merge any entries against the same section. */
2077 for (pp = &eind->dyn_relocs; (p = *pp) != NULL;)
2078 {
2079 struct elf_dyn_relocs *q;
2080
2081 for (q = edir->dyn_relocs; q != NULL; q = q->next)
2082 if (q->sec == p->sec)
2083 {
2084 q->pc_count += p->pc_count;
2085 q->count += p->count;
2086 *pp = p->next;
2087 break;
2088 }
2089 if (q == NULL)
2090 pp = &p->next;
2091 }
2092 *pp = edir->dyn_relocs;
2093 }
2094
2095 edir->dyn_relocs = eind->dyn_relocs;
2096 eind->dyn_relocs = NULL;
2097 }
2098
2099 if (ind->root.type == bfd_link_hash_indirect)
2100 {
2101 /* Copy over PLT info. */
2102 if (dir->got.refcount <= 0)
2103 {
2104 edir->got_type = eind->got_type;
2105 eind->got_type = GOT_UNKNOWN;
2106 }
2107 }
2108
2109 _bfd_elf_link_hash_copy_indirect (info, dir, ind);
2110 }
2111
2112 /* Destroy an AArch64 elf linker hash table. */
2113
2114 static void
2115 elfNN_aarch64_link_hash_table_free (bfd *obfd)
2116 {
2117 struct elf_aarch64_link_hash_table *ret
2118 = (struct elf_aarch64_link_hash_table *) obfd->link.hash;
2119
2120 if (ret->loc_hash_table)
2121 htab_delete (ret->loc_hash_table);
2122 if (ret->loc_hash_memory)
2123 objalloc_free ((struct objalloc *) ret->loc_hash_memory);
2124
2125 bfd_hash_table_free (&ret->stub_hash_table);
2126 _bfd_elf_link_hash_table_free (obfd);
2127 }
2128
2129 /* Create an AArch64 elf linker hash table. */
2130
2131 static struct bfd_link_hash_table *
2132 elfNN_aarch64_link_hash_table_create (bfd *abfd)
2133 {
2134 struct elf_aarch64_link_hash_table *ret;
2135 bfd_size_type amt = sizeof (struct elf_aarch64_link_hash_table);
2136
2137 ret = bfd_zmalloc (amt);
2138 if (ret == NULL)
2139 return NULL;
2140
2141 if (!_bfd_elf_link_hash_table_init
2142 (&ret->root, abfd, elfNN_aarch64_link_hash_newfunc,
2143 sizeof (struct elf_aarch64_link_hash_entry), AARCH64_ELF_DATA))
2144 {
2145 free (ret);
2146 return NULL;
2147 }
2148
2149 ret->plt_header_size = PLT_ENTRY_SIZE;
2150 ret->plt_entry_size = PLT_SMALL_ENTRY_SIZE;
2151 ret->obfd = abfd;
2152 ret->dt_tlsdesc_got = (bfd_vma) - 1;
2153
2154 if (!bfd_hash_table_init (&ret->stub_hash_table, stub_hash_newfunc,
2155 sizeof (struct elf_aarch64_stub_hash_entry)))
2156 {
2157 _bfd_elf_link_hash_table_free (abfd);
2158 return NULL;
2159 }
2160
2161 ret->loc_hash_table = htab_try_create (1024,
2162 elfNN_aarch64_local_htab_hash,
2163 elfNN_aarch64_local_htab_eq,
2164 NULL);
2165 ret->loc_hash_memory = objalloc_create ();
2166 if (!ret->loc_hash_table || !ret->loc_hash_memory)
2167 {
2168 elfNN_aarch64_link_hash_table_free (abfd);
2169 return NULL;
2170 }
2171 ret->root.root.hash_table_free = elfNN_aarch64_link_hash_table_free;
2172
2173 return &ret->root.root;
2174 }
2175
2176 static bfd_boolean
2177 aarch64_relocate (unsigned int r_type, bfd *input_bfd, asection *input_section,
2178 bfd_vma offset, bfd_vma value)
2179 {
2180 reloc_howto_type *howto;
2181 bfd_vma place;
2182
2183 howto = elfNN_aarch64_howto_from_type (r_type);
2184 place = (input_section->output_section->vma + input_section->output_offset
2185 + offset);
2186
2187 r_type = elfNN_aarch64_bfd_reloc_from_type (r_type);
2188 value = _bfd_aarch64_elf_resolve_relocation (r_type, place, value, 0, FALSE);
2189 return _bfd_aarch64_elf_put_addend (input_bfd,
2190 input_section->contents + offset, r_type,
2191 howto, value);
2192 }
2193
2194 static enum elf_aarch64_stub_type
2195 aarch64_select_branch_stub (bfd_vma value, bfd_vma place)
2196 {
2197 if (aarch64_valid_for_adrp_p (value, place))
2198 return aarch64_stub_adrp_branch;
2199 return aarch64_stub_long_branch;
2200 }
2201
2202 /* Determine the type of stub needed, if any, for a call. */
2203
2204 static enum elf_aarch64_stub_type
2205 aarch64_type_of_stub (struct bfd_link_info *info,
2206 asection *input_sec,
2207 const Elf_Internal_Rela *rel,
2208 unsigned char st_type,
2209 struct elf_aarch64_link_hash_entry *hash,
2210 bfd_vma destination)
2211 {
2212 bfd_vma location;
2213 bfd_signed_vma branch_offset;
2214 unsigned int r_type;
2215 struct elf_aarch64_link_hash_table *globals;
2216 enum elf_aarch64_stub_type stub_type = aarch64_stub_none;
2217 bfd_boolean via_plt_p;
2218
2219 if (st_type != STT_FUNC)
2220 return stub_type;
2221
2222 globals = elf_aarch64_hash_table (info);
2223 via_plt_p = (globals->root.splt != NULL && hash != NULL
2224 && hash->root.plt.offset != (bfd_vma) - 1);
2225
2226 if (via_plt_p)
2227 return stub_type;
2228
2229 /* Determine where the call point is. */
2230 location = (input_sec->output_offset
2231 + input_sec->output_section->vma + rel->r_offset);
2232
2233 branch_offset = (bfd_signed_vma) (destination - location);
2234
2235 r_type = ELFNN_R_TYPE (rel->r_info);
2236
2237 /* We don't want to redirect any old unconditional jump in this way,
2238 only one which is being used for a sibcall, where it is
2239 acceptable for the IP0 and IP1 registers to be clobbered. */
2240 if ((r_type == AARCH64_R (CALL26) || r_type == AARCH64_R (JUMP26))
2241 && (branch_offset > AARCH64_MAX_FWD_BRANCH_OFFSET
2242 || branch_offset < AARCH64_MAX_BWD_BRANCH_OFFSET))
2243 {
2244 stub_type = aarch64_stub_long_branch;
2245 }
2246
2247 return stub_type;
2248 }
2249
2250 /* Build a name for an entry in the stub hash table. */
2251
2252 static char *
2253 elfNN_aarch64_stub_name (const asection *input_section,
2254 const asection *sym_sec,
2255 const struct elf_aarch64_link_hash_entry *hash,
2256 const Elf_Internal_Rela *rel)
2257 {
2258 char *stub_name;
2259 bfd_size_type len;
2260
2261 if (hash)
2262 {
2263 len = 8 + 1 + strlen (hash->root.root.root.string) + 1 + 16 + 1;
2264 stub_name = bfd_malloc (len);
2265 if (stub_name != NULL)
2266 snprintf (stub_name, len, "%08x_%s+%" BFD_VMA_FMT "x",
2267 (unsigned int) input_section->id,
2268 hash->root.root.root.string,
2269 rel->r_addend);
2270 }
2271 else
2272 {
2273 len = 8 + 1 + 8 + 1 + 8 + 1 + 16 + 1;
2274 stub_name = bfd_malloc (len);
2275 if (stub_name != NULL)
2276 snprintf (stub_name, len, "%08x_%x:%x+%" BFD_VMA_FMT "x",
2277 (unsigned int) input_section->id,
2278 (unsigned int) sym_sec->id,
2279 (unsigned int) ELFNN_R_SYM (rel->r_info),
2280 rel->r_addend);
2281 }
2282
2283 return stub_name;
2284 }
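/* Editorial illustration (all values below are made up): a CALL26
   against a global symbol "foo" from the section with id 0x2a and a
   zero addend is named "0000002a_foo+0", while a reference to a local
   symbol is keyed on section and symbol indices instead, for example
   "0000002a_13:7+0".  */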
2285
2286 /* Look up an entry in the stub hash. Stub entries are cached because
2287 creating the stub name takes a bit of time. */
2288
2289 static struct elf_aarch64_stub_hash_entry *
2290 elfNN_aarch64_get_stub_entry (const asection *input_section,
2291 const asection *sym_sec,
2292 struct elf_link_hash_entry *hash,
2293 const Elf_Internal_Rela *rel,
2294 struct elf_aarch64_link_hash_table *htab)
2295 {
2296 struct elf_aarch64_stub_hash_entry *stub_entry;
2297 struct elf_aarch64_link_hash_entry *h =
2298 (struct elf_aarch64_link_hash_entry *) hash;
2299 const asection *id_sec;
2300
2301 if ((input_section->flags & SEC_CODE) == 0)
2302 return NULL;
2303
2304 /* If this input section is part of a group of sections sharing one
2305 stub section, then use the id of the first section in the group.
2306 Stub names need to include a section id, as there may well be
2307 more than one stub used to reach say, printf, and we need to
2308 distinguish between them. */
2309 id_sec = htab->stub_group[input_section->id].link_sec;
2310
2311 if (h != NULL && h->stub_cache != NULL
2312 && h->stub_cache->h == h && h->stub_cache->id_sec == id_sec)
2313 {
2314 stub_entry = h->stub_cache;
2315 }
2316 else
2317 {
2318 char *stub_name;
2319
2320 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, h, rel);
2321 if (stub_name == NULL)
2322 return NULL;
2323
2324 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table,
2325 stub_name, FALSE, FALSE);
2326 if (h != NULL)
2327 h->stub_cache = stub_entry;
2328
2329 free (stub_name);
2330 }
2331
2332 return stub_entry;
2333 }
2334
2335
2336 /* Create a stub section. */
2337
2338 static asection *
2339 _bfd_aarch64_create_stub_section (asection *section,
2340 struct elf_aarch64_link_hash_table *htab)
2341 {
2342 size_t namelen;
2343 bfd_size_type len;
2344 char *s_name;
2345
2346 namelen = strlen (section->name);
2347 len = namelen + sizeof (STUB_SUFFIX);
2348 s_name = bfd_alloc (htab->stub_bfd, len);
2349 if (s_name == NULL)
2350 return NULL;
2351
2352 memcpy (s_name, section->name, namelen);
2353 memcpy (s_name + namelen, STUB_SUFFIX, sizeof (STUB_SUFFIX));
2354 return (*htab->add_stub_section) (s_name, section);
2355 }
2356
2357
2358 /* Find or create a stub section for a link section.
2359
2360 Find or create the stub section used to collect stubs attached to
2361 the specified link section. */
2362
2363 static asection *
2364 _bfd_aarch64_get_stub_for_link_section (asection *link_section,
2365 struct elf_aarch64_link_hash_table *htab)
2366 {
2367 if (htab->stub_group[link_section->id].stub_sec == NULL)
2368 htab->stub_group[link_section->id].stub_sec
2369 = _bfd_aarch64_create_stub_section (link_section, htab);
2370 return htab->stub_group[link_section->id].stub_sec;
2371 }
2372
2373
2374 /* Find or create a stub section in the stub group for an input
2375 section. */
2376
2377 static asection *
2378 _bfd_aarch64_create_or_find_stub_sec (asection *section,
2379 struct elf_aarch64_link_hash_table *htab)
2380 {
2381 asection *link_sec = htab->stub_group[section->id].link_sec;
2382 return _bfd_aarch64_get_stub_for_link_section (link_sec, htab);
2383 }
2384
2385
2386 /* Add a new stub entry in the stub group associated with an input
2387 section to the stub hash. Not all fields of the new stub entry are
2388 initialised. */
2389
2390 static struct elf_aarch64_stub_hash_entry *
2391 _bfd_aarch64_add_stub_entry_in_group (const char *stub_name,
2392 asection *section,
2393 struct elf_aarch64_link_hash_table *htab)
2394 {
2395 asection *link_sec;
2396 asection *stub_sec;
2397 struct elf_aarch64_stub_hash_entry *stub_entry;
2398
2399 link_sec = htab->stub_group[section->id].link_sec;
2400 stub_sec = _bfd_aarch64_create_or_find_stub_sec (section, htab);
2401
2402 /* Enter this entry into the linker stub hash table. */
2403 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2404 TRUE, FALSE);
2405 if (stub_entry == NULL)
2406 {
2407 (*_bfd_error_handler) (_("%s: cannot create stub entry %s"),
2408 section->owner, stub_name);
2409 return NULL;
2410 }
2411
2412 stub_entry->stub_sec = stub_sec;
2413 stub_entry->stub_offset = 0;
2414 stub_entry->id_sec = link_sec;
2415
2416 return stub_entry;
2417 }
2418
2419 /* Add a new stub entry in the final stub section to the stub hash.
2420 Not all fields of the new stub entry are initialised. */
2421
2422 static struct elf_aarch64_stub_hash_entry *
2423 _bfd_aarch64_add_stub_entry_after (const char *stub_name,
2424 asection *link_section,
2425 struct elf_aarch64_link_hash_table *htab)
2426 {
2427 asection *stub_sec;
2428 struct elf_aarch64_stub_hash_entry *stub_entry;
2429
2430 stub_sec = _bfd_aarch64_get_stub_for_link_section (link_section, htab);
2431 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2432 TRUE, FALSE);
2433 if (stub_entry == NULL)
2434 {
2435 (*_bfd_error_handler) (_("cannot create stub entry %s"), stub_name);
2436 return NULL;
2437 }
2438
2439 stub_entry->stub_sec = stub_sec;
2440 stub_entry->stub_offset = 0;
2441 stub_entry->id_sec = link_section;
2442
2443 return stub_entry;
2444 }
2445
2446
2447 static bfd_boolean
2448 aarch64_build_one_stub (struct bfd_hash_entry *gen_entry,
2449 void *in_arg ATTRIBUTE_UNUSED)
2450 {
2451 struct elf_aarch64_stub_hash_entry *stub_entry;
2452 asection *stub_sec;
2453 bfd *stub_bfd;
2454 bfd_byte *loc;
2455 bfd_vma sym_value;
2456 bfd_vma veneered_insn_loc;
2457 bfd_vma veneer_entry_loc;
2458 bfd_signed_vma branch_offset = 0;
2459 unsigned int template_size;
2460 const uint32_t *template;
2461 unsigned int i;
2462
2463 /* Massage our args to the form they really have. */
2464 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2465
2466 stub_sec = stub_entry->stub_sec;
2467
2468 /* Make a note of the offset within the stubs for this entry. */
2469 stub_entry->stub_offset = stub_sec->size;
2470 loc = stub_sec->contents + stub_entry->stub_offset;
2471
2472 stub_bfd = stub_sec->owner;
2473
2474 /* This is the address of the stub destination. */
2475 sym_value = (stub_entry->target_value
2476 + stub_entry->target_section->output_offset
2477 + stub_entry->target_section->output_section->vma);
2478
2479 if (stub_entry->stub_type == aarch64_stub_long_branch)
2480 {
2481 bfd_vma place = (stub_entry->stub_offset + stub_sec->output_section->vma
2482 + stub_sec->output_offset);
2483
2484 /* See if we can relax the stub. */
2485 if (aarch64_valid_for_adrp_p (sym_value, place))
2486 stub_entry->stub_type = aarch64_select_branch_stub (sym_value, place);
2487 }
2488
2489 switch (stub_entry->stub_type)
2490 {
2491 case aarch64_stub_adrp_branch:
2492 template = aarch64_adrp_branch_stub;
2493 template_size = sizeof (aarch64_adrp_branch_stub);
2494 break;
2495 case aarch64_stub_long_branch:
2496 template = aarch64_long_branch_stub;
2497 template_size = sizeof (aarch64_long_branch_stub);
2498 break;
2499 case aarch64_stub_erratum_835769_veneer:
2500 template = aarch64_erratum_835769_stub;
2501 template_size = sizeof (aarch64_erratum_835769_stub);
2502 break;
2503 case aarch64_stub_erratum_843419_veneer:
2504 template = aarch64_erratum_843419_stub;
2505 template_size = sizeof (aarch64_erratum_843419_stub);
2506 break;
2507 default:
2508 abort ();
2509 }
2510
2511 for (i = 0; i < (template_size / sizeof template[0]); i++)
2512 {
2513 bfd_putl32 (template[i], loc);
2514 loc += 4;
2515 }
2516
2517 template_size = (template_size + 7) & ~7;
2518 stub_sec->size += template_size;
2519
2520 switch (stub_entry->stub_type)
2521 {
2522 case aarch64_stub_adrp_branch:
2523 if (aarch64_relocate (AARCH64_R (ADR_PREL_PG_HI21), stub_bfd, stub_sec,
2524 stub_entry->stub_offset, sym_value))
2525 /* The stub would not have been relaxed if the offset was out
2526 of range. */
2527 BFD_FAIL ();
2528
2529 if (aarch64_relocate (AARCH64_R (ADD_ABS_LO12_NC), stub_bfd, stub_sec,
2530 stub_entry->stub_offset + 4, sym_value))
2531 BFD_FAIL ();
2532 break;
2533
2534 case aarch64_stub_long_branch:
2535 /* The literal at offset 16 resolves PC-relative; bias the target by +12
2536 so adding the ADR result (literal address - 12) gives the destination. */
2537 if (aarch64_relocate (AARCH64_R (PRELNN), stub_bfd, stub_sec,
2538 stub_entry->stub_offset + 16, sym_value + 12))
2539 BFD_FAIL ();
2540 break;
2541
2542 case aarch64_stub_erratum_835769_veneer:
2543 veneered_insn_loc = stub_entry->target_section->output_section->vma
2544 + stub_entry->target_section->output_offset
2545 + stub_entry->target_value;
2546 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
2547 + stub_entry->stub_sec->output_offset
2548 + stub_entry->stub_offset;
2549 branch_offset = veneered_insn_loc - veneer_entry_loc;
2550 branch_offset >>= 2;
2551 branch_offset &= 0x3ffffff;
2552 bfd_putl32 (stub_entry->veneered_insn,
2553 stub_sec->contents + stub_entry->stub_offset);
2554 bfd_putl32 (template[1] | branch_offset,
2555 stub_sec->contents + stub_entry->stub_offset + 4);
2556 break;
2557
2558 case aarch64_stub_erratum_843419_veneer:
2559 if (aarch64_relocate (AARCH64_R (JUMP26), stub_bfd, stub_sec,
2560 stub_entry->stub_offset + 4, sym_value + 4))
2561 BFD_FAIL ();
2562 break;
2563
2564 default:
2565 abort ();
2566 }
2567
2568 return TRUE;
2569 }
2570
2571 /* As above, but don't actually build the stub. Just bump offset so
2572 we know stub section sizes. */
2573
2574 static bfd_boolean
2575 aarch64_size_one_stub (struct bfd_hash_entry *gen_entry,
2576 void *in_arg ATTRIBUTE_UNUSED)
2577 {
2578 struct elf_aarch64_stub_hash_entry *stub_entry;
2579 int size;
2580
2581 /* Massage our args to the form they really have. */
2582 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2583
2584 switch (stub_entry->stub_type)
2585 {
2586 case aarch64_stub_adrp_branch:
2587 size = sizeof (aarch64_adrp_branch_stub);
2588 break;
2589 case aarch64_stub_long_branch:
2590 size = sizeof (aarch64_long_branch_stub);
2591 break;
2592 case aarch64_stub_erratum_835769_veneer:
2593 size = sizeof (aarch64_erratum_835769_stub);
2594 break;
2595 case aarch64_stub_erratum_843419_veneer:
2596 size = sizeof (aarch64_erratum_843419_stub);
2597 break;
2598 default:
2599 abort ();
2600 }
2601
2602 size = (size + 7) & ~7;
2603 stub_entry->stub_sec->size += size;
2604 return TRUE;
2605 }
2606
2607 /* External entry points for sizing and building linker stubs. */
2608
2609 /* Set up various things so that we can make a list of input sections
2610 for each output section included in the link. Returns -1 on error,
2611 0 when no stubs will be needed, and 1 on success. */
2612
2613 int
2614 elfNN_aarch64_setup_section_lists (bfd *output_bfd,
2615 struct bfd_link_info *info)
2616 {
2617 bfd *input_bfd;
2618 unsigned int bfd_count;
2619 int top_id, top_index;
2620 asection *section;
2621 asection **input_list, **list;
2622 bfd_size_type amt;
2623 struct elf_aarch64_link_hash_table *htab =
2624 elf_aarch64_hash_table (info);
2625
2626 if (!is_elf_hash_table (htab))
2627 return 0;
2628
2629 /* Count the number of input BFDs and find the top input section id. */
2630 for (input_bfd = info->input_bfds, bfd_count = 0, top_id = 0;
2631 input_bfd != NULL; input_bfd = input_bfd->link.next)
2632 {
2633 bfd_count += 1;
2634 for (section = input_bfd->sections;
2635 section != NULL; section = section->next)
2636 {
2637 if (top_id < section->id)
2638 top_id = section->id;
2639 }
2640 }
2641 htab->bfd_count = bfd_count;
2642
2643 amt = sizeof (struct map_stub) * (top_id + 1);
2644 htab->stub_group = bfd_zmalloc (amt);
2645 if (htab->stub_group == NULL)
2646 return -1;
2647
2648 /* We can't use output_bfd->section_count here to find the top output
2649 section index as some sections may have been removed, and
2650 _bfd_strip_section_from_output doesn't renumber the indices. */
2651 for (section = output_bfd->sections, top_index = 0;
2652 section != NULL; section = section->next)
2653 {
2654 if (top_index < section->index)
2655 top_index = section->index;
2656 }
2657
2658 htab->top_index = top_index;
2659 amt = sizeof (asection *) * (top_index + 1);
2660 input_list = bfd_malloc (amt);
2661 htab->input_list = input_list;
2662 if (input_list == NULL)
2663 return -1;
2664
2665 /* For sections we aren't interested in, mark their entries with a
2666 value we can check later. */
2667 list = input_list + top_index;
2668 do
2669 *list = bfd_abs_section_ptr;
2670 while (list-- != input_list);
2671
2672 for (section = output_bfd->sections;
2673 section != NULL; section = section->next)
2674 {
2675 if ((section->flags & SEC_CODE) != 0)
2676 input_list[section->index] = NULL;
2677 }
2678
2679 return 1;
2680 }
2681
2682 /* Used by elfNN_aarch64_next_input_section and group_sections. */
2683 #define PREV_SEC(sec) (htab->stub_group[(sec)->id].link_sec)
2684
2685 /* The linker repeatedly calls this function for each input section,
2686 in the order that input sections are linked into output sections.
2687 Build lists of input sections to determine groupings between which
2688 we may insert linker stubs. */
2689
2690 void
2691 elfNN_aarch64_next_input_section (struct bfd_link_info *info, asection *isec)
2692 {
2693 struct elf_aarch64_link_hash_table *htab =
2694 elf_aarch64_hash_table (info);
2695
2696 if (isec->output_section->index <= htab->top_index)
2697 {
2698 asection **list = htab->input_list + isec->output_section->index;
2699
2700 if (*list != bfd_abs_section_ptr)
2701 {
2702 /* Steal the link_sec pointer for our list. */
2703 /* This happens to make the list in reverse order,
2704 which is what we want. */
2705 PREV_SEC (isec) = *list;
2706 *list = isec;
2707 }
2708 }
2709 }
2710
2711 /* See whether we can group stub sections together. Grouping stub
2712 sections may result in fewer stubs. More importantly, we need to
2713 put all .init* and .fini* stubs at the beginning of the .init or
2714 .fini output sections respectively, because glibc splits the
2715 _init and _fini functions into multiple parts. Putting a stub in
2716 the middle of a function is not a good idea. */
2717
2718 static void
2719 group_sections (struct elf_aarch64_link_hash_table *htab,
2720 bfd_size_type stub_group_size,
2721 bfd_boolean stubs_always_before_branch)
2722 {
2723 asection **list = htab->input_list + htab->top_index;
2724
2725 do
2726 {
2727 asection *tail = *list;
2728
2729 if (tail == bfd_abs_section_ptr)
2730 continue;
2731
2732 while (tail != NULL)
2733 {
2734 asection *curr;
2735 asection *prev;
2736 bfd_size_type total;
2737
2738 curr = tail;
2739 total = tail->size;
2740 while ((prev = PREV_SEC (curr)) != NULL
2741 && ((total += curr->output_offset - prev->output_offset)
2742 < stub_group_size))
2743 curr = prev;
2744
2745 /* OK, the size from the start of CURR to the end is less
2746 than stub_group_size and thus can be handled by one stub
2747 section. (Or the tail section is itself larger than
2748 stub_group_size, in which case we may be toast.)
2749 We should really be keeping track of the total size of
2750 stubs added here, as stubs contribute to the final output
2751 section size. */
2752 do
2753 {
2754 prev = PREV_SEC (tail);
2755 /* Set up this stub group. */
2756 htab->stub_group[tail->id].link_sec = curr;
2757 }
2758 while (tail != curr && (tail = prev) != NULL);
2759
2760 /* But wait, there's more! Input sections up to stub_group_size
2761 bytes before the stub section can be handled by it too. */
2762 if (!stubs_always_before_branch)
2763 {
2764 total = 0;
2765 while (prev != NULL
2766 && ((total += tail->output_offset - prev->output_offset)
2767 < stub_group_size))
2768 {
2769 tail = prev;
2770 prev = PREV_SEC (tail);
2771 htab->stub_group[tail->id].link_sec = curr;
2772 }
2773 }
2774 tail = prev;
2775 }
2776 }
2777 while (list-- != htab->input_list);
2778
2779 free (htab->input_list);
2780 }
2781
2782 #undef PREV_SEC
2783
2784 #define AARCH64_BITS(x, pos, n) (((x) >> (pos)) & ((1 << (n)) - 1))
2785
2786 #define AARCH64_RT(insn) AARCH64_BITS (insn, 0, 5)
2787 #define AARCH64_RT2(insn) AARCH64_BITS (insn, 10, 5)
2788 #define AARCH64_RA(insn) AARCH64_BITS (insn, 10, 5)
2789 #define AARCH64_RD(insn) AARCH64_BITS (insn, 0, 5)
2790 #define AARCH64_RN(insn) AARCH64_BITS (insn, 5, 5)
2791 #define AARCH64_RM(insn) AARCH64_BITS (insn, 16, 5)
2792
2793 #define AARCH64_MAC(insn) (((insn) & 0xff000000) == 0x9b000000)
2794 #define AARCH64_BIT(insn, n) AARCH64_BITS (insn, n, 1)
2795 #define AARCH64_OP31(insn) AARCH64_BITS (insn, 21, 3)
2796 #define AARCH64_ZR 0x1f
2797
2798 /* All ld/st ops. See C4-182 of the ARM ARM. The encoding space for
2799 LD_PCREL, LDST_RO, LDST_UI and LDST_UIMM covers prefetch ops. */
2800
2801 #define AARCH64_LD(insn) (AARCH64_BIT (insn, 22) == 1)
2802 #define AARCH64_LDST(insn) (((insn) & 0x0a000000) == 0x08000000)
2803 #define AARCH64_LDST_EX(insn) (((insn) & 0x3f000000) == 0x08000000)
2804 #define AARCH64_LDST_PCREL(insn) (((insn) & 0x3b000000) == 0x18000000)
2805 #define AARCH64_LDST_NAP(insn) (((insn) & 0x3b800000) == 0x28000000)
2806 #define AARCH64_LDSTP_PI(insn) (((insn) & 0x3b800000) == 0x28800000)
2807 #define AARCH64_LDSTP_O(insn) (((insn) & 0x3b800000) == 0x29000000)
2808 #define AARCH64_LDSTP_PRE(insn) (((insn) & 0x3b800000) == 0x29800000)
2809 #define AARCH64_LDST_UI(insn) (((insn) & 0x3b200c00) == 0x38000000)
2810 #define AARCH64_LDST_PIIMM(insn) (((insn) & 0x3b200c00) == 0x38000400)
2811 #define AARCH64_LDST_U(insn) (((insn) & 0x3b200c00) == 0x38000800)
2812 #define AARCH64_LDST_PREIMM(insn) (((insn) & 0x3b200c00) == 0x38000c00)
2813 #define AARCH64_LDST_RO(insn) (((insn) & 0x3b200c00) == 0x38200800)
2814 #define AARCH64_LDST_UIMM(insn) (((insn) & 0x3b000000) == 0x39000000)
2815 #define AARCH64_LDST_SIMD_M(insn) (((insn) & 0xbfbf0000) == 0x0c000000)
2816 #define AARCH64_LDST_SIMD_M_PI(insn) (((insn) & 0xbfa00000) == 0x0c800000)
2817 #define AARCH64_LDST_SIMD_S(insn) (((insn) & 0xbf9f0000) == 0x0d000000)
2818 #define AARCH64_LDST_SIMD_S_PI(insn) (((insn) & 0xbf800000) == 0x0d800000)
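/* Editorial example, hand-assembled for illustration: "ldr x0, [x1]"
   encodes as 0xf9400020; (0xf9400020 & 0x3b000000) == 0x39000000, so
   AARCH64_LDST_UIMM accepts it, as does the broader AARCH64_LDST test,
   while AARCH64_LDST_EX does not.  */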
2819
2820 /* Classify an INSN if it is indeed a load/store.
2821
2822 Return TRUE if INSN is a LD/ST instruction otherwise return FALSE.
2823
2824 For scalar LD/ST instructions PAIR is FALSE, RT is returned and RT2
2825 is set equal to RT.
2826
2827 For LD/ST pair instructions PAIR is TRUE, RT and RT2 are returned.
2828
2829 */
2830
2831 static bfd_boolean
2832 aarch64_mem_op_p (uint32_t insn, unsigned int *rt, unsigned int *rt2,
2833 bfd_boolean *pair, bfd_boolean *load)
2834 {
2835 uint32_t opcode;
2836 unsigned int r;
2837 uint32_t opc = 0;
2838 uint32_t v = 0;
2839 uint32_t opc_v = 0;
2840
2841 /* Bail out quickly if INSN doesn't fall into the load-store
2842 encoding space. */
2843 if (!AARCH64_LDST (insn))
2844 return FALSE;
2845
2846 *pair = FALSE;
2847 *load = FALSE;
2848 if (AARCH64_LDST_EX (insn))
2849 {
2850 *rt = AARCH64_RT (insn);
2851 *rt2 = *rt;
2852 if (AARCH64_BIT (insn, 21) == 1)
2853 {
2854 *pair = TRUE;
2855 *rt2 = AARCH64_RT2 (insn);
2856 }
2857 *load = AARCH64_LD (insn);
2858 return TRUE;
2859 }
2860 else if (AARCH64_LDST_NAP (insn)
2861 || AARCH64_LDSTP_PI (insn)
2862 || AARCH64_LDSTP_O (insn)
2863 || AARCH64_LDSTP_PRE (insn))
2864 {
2865 *pair = TRUE;
2866 *rt = AARCH64_RT (insn);
2867 *rt2 = AARCH64_RT2 (insn);
2868 *load = AARCH64_LD (insn);
2869 return TRUE;
2870 }
2871 else if (AARCH64_LDST_PCREL (insn)
2872 || AARCH64_LDST_UI (insn)
2873 || AARCH64_LDST_PIIMM (insn)
2874 || AARCH64_LDST_U (insn)
2875 || AARCH64_LDST_PREIMM (insn)
2876 || AARCH64_LDST_RO (insn)
2877 || AARCH64_LDST_UIMM (insn))
2878 {
2879 *rt = AARCH64_RT (insn);
2880 *rt2 = *rt;
2881 if (AARCH64_LDST_PCREL (insn))
2882 *load = TRUE;
2883 opc = AARCH64_BITS (insn, 22, 2);
2884 v = AARCH64_BIT (insn, 26);
2885 opc_v = opc | (v << 2);
2886 *load = (opc_v == 1 || opc_v == 2 || opc_v == 3
2887 || opc_v == 5 || opc_v == 7);
2888 return TRUE;
2889 }
2890 else if (AARCH64_LDST_SIMD_M (insn)
2891 || AARCH64_LDST_SIMD_M_PI (insn))
2892 {
2893 *rt = AARCH64_RT (insn);
2894 *load = AARCH64_BIT (insn, 22);
2895 opcode = (insn >> 12) & 0xf;
2896 switch (opcode)
2897 {
2898 case 0:
2899 case 2:
2900 *rt2 = *rt + 3;
2901 break;
2902
2903 case 4:
2904 case 6:
2905 *rt2 = *rt + 2;
2906 break;
2907
2908 case 7:
2909 *rt2 = *rt;
2910 break;
2911
2912 case 8:
2913 case 10:
2914 *rt2 = *rt + 1;
2915 break;
2916
2917 default:
2918 return FALSE;
2919 }
2920 return TRUE;
2921 }
2922 else if (AARCH64_LDST_SIMD_S (insn)
2923 || AARCH64_LDST_SIMD_S_PI (insn))
2924 {
2925 *rt = AARCH64_RT (insn);
2926 r = (insn >> 21) & 1;
2927 *load = AARCH64_BIT (insn, 22);
2928 opcode = (insn >> 13) & 0x7;
2929 switch (opcode)
2930 {
2931 case 0:
2932 case 2:
2933 case 4:
2934 *rt2 = *rt + r;
2935 break;
2936
2937 case 1:
2938 case 3:
2939 case 5:
2940 *rt2 = *rt + (r == 0 ? 2 : 3);
2941 break;
2942
2943 case 6:
2944 *rt2 = *rt + r;
2945 break;
2946
2947 case 7:
2948 *rt2 = *rt + (r == 0 ? 2 : 3);
2949 break;
2950
2951 default:
2952 return FALSE;
2953 }
2954 return TRUE;
2955 }
2956
2957 return FALSE;
2958 }
2959
2960 /* Return TRUE if INSN is multiply-accumulate. */
2961
2962 static bfd_boolean
2963 aarch64_mlxl_p (uint32_t insn)
2964 {
2965 uint32_t op31 = AARCH64_OP31 (insn);
2966
2967 if (AARCH64_MAC (insn)
2968 && (op31 == 0 || op31 == 1 || op31 == 5)
2969 /* Exclude MUL instructions which are encoded as a multiple accumulate
2970 with RA = XZR. */
2971 && AARCH64_RA (insn) != AARCH64_ZR)
2972 return TRUE;
2973
2974 return FALSE;
2975 }
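/* Editorial example, hand-assembled for illustration:
   "madd x0, x1, x2, x3" is 0x9b020c20; its top byte matches AARCH64_MAC,
   op31 is 0 and RA is x3, so aarch64_mlxl_p returns TRUE.  The alias
   "mul x0, x1, x2" is the same encoding with RA = XZR (0x9b027c20) and
   is deliberately rejected above.  */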
2976
2977 /* Some early revisions of the Cortex-A53 have an erratum (835769) whereby
2978 it is possible for a 64-bit multiply-accumulate instruction to generate an
2979 incorrect result.  The details are quite complex and hard to
2980 determine statically, since in some circumstances branches may intervene,
2981 but all affected cases end with a memory (load, store, or
2982 prefetch) instruction followed immediately by the multiply-accumulate
2983 operation. We employ a linker patching technique, by moving the potentially
2984 affected multiply-accumulate instruction into a patch region and replacing
2985 the original instruction with a branch to the patch. This function checks
2986 if INSN_1 is the memory operation followed by a multiply-accumulate
2987 operation (INSN_2). Return TRUE if an erratum sequence is found, FALSE
2988 if INSN_1 and INSN_2 are safe. */
2989
2990 static bfd_boolean
2991 aarch64_erratum_sequence (uint32_t insn_1, uint32_t insn_2)
2992 {
2993 uint32_t rt;
2994 uint32_t rt2;
2995 uint32_t rn;
2996 uint32_t rm;
2997 uint32_t ra;
2998 bfd_boolean pair;
2999 bfd_boolean load;
3000
3001 if (aarch64_mlxl_p (insn_2)
3002 && aarch64_mem_op_p (insn_1, &rt, &rt2, &pair, &load))
3003 {
3004 /* Any SIMD memory op is independent of the subsequent MLA
3005 by definition of the erratum. */
3006 if (AARCH64_BIT (insn_1, 26))
3007 return TRUE;
3008
3009 /* If not SIMD, check for integer memory ops and MLA relationship. */
3010 rn = AARCH64_RN (insn_2);
3011 ra = AARCH64_RA (insn_2);
3012 rm = AARCH64_RM (insn_2);
3013
3014 /* If this is a load and there's a true (RAW) dependency, we are safe
3015 and this is not an erratum sequence. */
3016 if (load &&
3017 (rt == rn || rt == rm || rt == ra
3018 || (pair && (rt2 == rn || rt2 == rm || rt2 == ra))))
3019 return FALSE;
3020
3021 /* We conservatively put out stubs for all other cases (including
3022 writebacks). */
3023 return TRUE;
3024 }
3025
3026 return FALSE;
3027 }
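/* Editorial example of the check above (registers chosen arbitrarily):
       ldr  x10, [x2]
       madd x0, x1, x3, x4
   is flagged, because the loaded register x10 feeds none of the
   multiply-accumulate operands, whereas
       ldr  x1, [x2]
       madd x0, x1, x3, x4
   is considered safe thanks to the RAW dependency through x1.  */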
3028
3029 /* Used to order a list of mapping symbols by address. */
3030
3031 static int
3032 elf_aarch64_compare_mapping (const void *a, const void *b)
3033 {
3034 const elf_aarch64_section_map *amap = (const elf_aarch64_section_map *) a;
3035 const elf_aarch64_section_map *bmap = (const elf_aarch64_section_map *) b;
3036
3037 if (amap->vma > bmap->vma)
3038 return 1;
3039 else if (amap->vma < bmap->vma)
3040 return -1;
3041 else if (amap->type > bmap->type)
3042 /* Ensure results do not depend on the host qsort for objects with
3043 multiple mapping symbols at the same address by sorting on type
3044 after vma. */
3045 return 1;
3046 else if (amap->type < bmap->type)
3047 return -1;
3048 else
3049 return 0;
3050 }
3051
3052
3053 static char *
3054 _bfd_aarch64_erratum_835769_stub_name (unsigned num_fixes)
3055 {
3056 char *stub_name = (char *) bfd_malloc
3057 (strlen ("__erratum_835769_veneer_") + 16);
3058 if (stub_name != NULL)
sprintf (stub_name, "__erratum_835769_veneer_%u", num_fixes);
3059 return stub_name;
3060 }
3061
3062 /* Scan for Cortex-A53 erratum 835769 sequence.
3063
3064 Return TRUE on a successful scan, FALSE on abnormal termination. */
3065
3066 static bfd_boolean
3067 _bfd_aarch64_erratum_835769_scan (bfd *input_bfd,
3068 struct bfd_link_info *info,
3069 unsigned int *num_fixes_p)
3070 {
3071 asection *section;
3072 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3073 unsigned int num_fixes = *num_fixes_p;
3074
3075 if (htab == NULL)
3076 return TRUE;
3077
3078 for (section = input_bfd->sections;
3079 section != NULL;
3080 section = section->next)
3081 {
3082 bfd_byte *contents = NULL;
3083 struct _aarch64_elf_section_data *sec_data;
3084 unsigned int span;
3085
3086 if (elf_section_type (section) != SHT_PROGBITS
3087 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3088 || (section->flags & SEC_EXCLUDE) != 0
3089 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3090 || (section->output_section == bfd_abs_section_ptr))
3091 continue;
3092
3093 if (elf_section_data (section)->this_hdr.contents != NULL)
3094 contents = elf_section_data (section)->this_hdr.contents;
3095 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3096 return FALSE;
3097
3098 sec_data = elf_aarch64_section_data (section);
3099
3100 qsort (sec_data->map, sec_data->mapcount,
3101 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3102
3103 for (span = 0; span < sec_data->mapcount; span++)
3104 {
3105 unsigned int span_start = sec_data->map[span].vma;
3106 unsigned int span_end = ((span == sec_data->mapcount - 1)
3107 ? sec_data->map[0].vma + section->size
3108 : sec_data->map[span + 1].vma);
3109 unsigned int i;
3110 char span_type = sec_data->map[span].type;
3111
3112 if (span_type == 'd')
3113 continue;
3114
3115 for (i = span_start; i + 4 < span_end; i += 4)
3116 {
3117 uint32_t insn_1 = bfd_getl32 (contents + i);
3118 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3119
3120 if (aarch64_erratum_sequence (insn_1, insn_2))
3121 {
3122 struct elf_aarch64_stub_hash_entry *stub_entry;
3123 char *stub_name = _bfd_aarch64_erratum_835769_stub_name (num_fixes);
3124 if (! stub_name)
3125 return FALSE;
3126
3127 stub_entry = _bfd_aarch64_add_stub_entry_in_group (stub_name,
3128 section,
3129 htab);
3130 if (! stub_entry)
3131 return FALSE;
3132
3133 stub_entry->stub_type = aarch64_stub_erratum_835769_veneer;
3134 stub_entry->target_section = section;
3135 stub_entry->target_value = i + 4;
3136 stub_entry->veneered_insn = insn_2;
3137 stub_entry->output_name = stub_name;
3138 num_fixes++;
3139 }
3140 }
3141 }
3142 if (elf_section_data (section)->this_hdr.contents == NULL)
3143 free (contents);
3144 }
3145
3146 *num_fixes_p = num_fixes;
3147
3148 return TRUE;
3149 }
3150
3151
3152 /* Test if instruction INSN is ADRP. */
3153
3154 static bfd_boolean
3155 _bfd_aarch64_adrp_p (uint32_t insn)
3156 {
3157 return ((insn & 0x9f000000) == 0x90000000);
3158 }
3159
3160
3161 /* Helper predicate to look for cortex-a53 erratum 843419 sequence 1. */
3162
3163 static bfd_boolean
3164 _bfd_aarch64_erratum_843419_sequence_p (uint32_t insn_1, uint32_t insn_2,
3165 uint32_t insn_3)
3166 {
3167 uint32_t rt;
3168 uint32_t rt2;
3169 bfd_boolean pair;
3170 bfd_boolean load;
3171
3172 return (aarch64_mem_op_p (insn_2, &rt, &rt2, &pair, &load)
3173 && (!pair
3174 || (pair && !load))
3175 && AARCH64_LDST_UIMM (insn_3)
3176 && AARCH64_RN (insn_3) == AARCH64_RD (insn_1));
3177 }
3178
3179
3180 /* Test for the presence of Cortex-A53 erratum 843419 instruction sequence.
3181
3182 Return TRUE if section CONTENTS at offset I contains one of the
3183 erratum 843419 sequences, otherwise return FALSE.  If a sequence is
3184 seen, set P_VENEER_I to the offset of the final LOAD/STORE
3185 instruction in the sequence.  */
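/* Editorial sketch of the shape being matched (illustrative only):

       adrp  xA, sym            in the last 8 bytes of a 4KB region,
                                i.e. VMA & 0xfff is 0xff8 or 0xffc
       ld/st ...                any load/store other than a load pair
       (one other instruction)  optionally present
       ld/st xB, [xA, #imm]     unsigned-immediate form whose base
                                register is the ADRP destination

   The re-use of the ADRP destination as the base of the final access is
   what _bfd_aarch64_erratum_843419_sequence_p tests for; the page-offset
   constraint is checked here.  */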
3187
3188 static bfd_boolean
3189 _bfd_aarch64_erratum_843419_p (bfd_byte *contents, bfd_vma vma,
3190 bfd_vma i, bfd_vma span_end,
3191 bfd_vma *p_veneer_i)
3192 {
3193 uint32_t insn_1 = bfd_getl32 (contents + i);
3194
3195 if (!_bfd_aarch64_adrp_p (insn_1))
3196 return FALSE;
3197
3198 if (span_end < i + 12)
3199 return FALSE;
3200
3201 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3202 uint32_t insn_3 = bfd_getl32 (contents + i + 8);
3203
3204 if ((vma & 0xfff) != 0xff8 && (vma & 0xfff) != 0xffc)
3205 return FALSE;
3206
3207 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_3))
3208 {
3209 *p_veneer_i = i + 8;
3210 return TRUE;
3211 }
3212
3213 if (span_end < i + 16)
3214 return FALSE;
3215
3216 uint32_t insn_4 = bfd_getl32 (contents + i + 12);
3217
3218 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_4))
3219 {
3220 *p_veneer_i = i + 12;
3221 return TRUE;
3222 }
3223
3224 return FALSE;
3225 }
3226
3227
3228 /* Resize all stub sections. */
3229
3230 static void
3231 _bfd_aarch64_resize_stubs (struct elf_aarch64_link_hash_table *htab)
3232 {
3233 asection *section;
3234
3235 /* OK, we've added some stubs. Find out the new size of the
3236 stub sections. */
3237 for (section = htab->stub_bfd->sections;
3238 section != NULL; section = section->next)
3239 {
3240 /* Ignore non-stub sections. */
3241 if (!strstr (section->name, STUB_SUFFIX))
3242 continue;
3243 section->size = 0;
3244 }
3245
3246 bfd_hash_traverse (&htab->stub_hash_table, aarch64_size_one_stub, htab);
3247
3248 for (section = htab->stub_bfd->sections;
3249 section != NULL; section = section->next)
3250 {
3251 if (!strstr (section->name, STUB_SUFFIX))
3252 continue;
3253
3254 if (section->size)
3255 section->size += 4;
3256
3257 /* Ensure all stub sections have a size which is a multiple of
3258 4096. This is important in order to ensure that the insertion
3259 of stub sections does not in itself move existing code around
3260 in such a way that new errata sequences are created. */
3261 if (htab->fix_erratum_843419)
3262 if (section->size)
3263 section->size = BFD_ALIGN (section->size, 0x1000);
3264 }
3265 }
3266
3267
3268 /* Construct an erratum 843419 workaround stub name.  */
3270
3271 static char *
3272 _bfd_aarch64_erratum_843419_stub_name (asection *input_section,
3273 bfd_vma offset)
3274 {
3275 const bfd_size_type len = 8 + 4 + 1 + 8 + 1 + 16 + 1;
3276 char *stub_name = bfd_malloc (len);
3277
3278 if (stub_name != NULL)
3279 snprintf (stub_name, len, "e843419@%04x_%08x_%" BFD_VMA_FMT "x",
3280 input_section->owner->id,
3281 input_section->id,
3282 offset);
3283 return stub_name;
3284 }
3285
3286 /* Build a stub_entry structure describing an 843419 fixup.
3287
3288 The stub_entry constructed is populated with the bit pattern INSN
3289 of the instruction located at LDST_OFFSET within input SECTION.
3290
3291 Returns TRUE on success. */
3292
3293 static bfd_boolean
3294 _bfd_aarch64_erratum_843419_fixup (uint32_t insn,
3295 bfd_vma adrp_offset,
3296 bfd_vma ldst_offset,
3297 asection *section,
3298 struct bfd_link_info *info)
3299 {
3300 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3301 char *stub_name;
3302 struct elf_aarch64_stub_hash_entry *stub_entry;
3303
3304 stub_name = _bfd_aarch64_erratum_843419_stub_name (section, ldst_offset);
3305 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
3306 FALSE, FALSE);
3307 if (stub_entry)
3308 {
3309 free (stub_name);
3310 return TRUE;
3311 }
3312
3313 /* We always place an 843419 workaround veneer in the stub section
3314 attached to the input section in which an erratum sequence has
3315 been found. This ensures that later in the link process (in
3316 elfNN_aarch64_write_section) when we copy the veneered
3317 instruction from the input section into the stub section the
3318 copied instruction will have had any relocations applied to it.
3319 If we placed workaround veneers in any other stub section then we
3320 could not assume that all relocations have been processed on the
3321 corresponding input section at the point we output the stub
3322 section.
3323 */
3324
3325 stub_entry = _bfd_aarch64_add_stub_entry_after (stub_name, section, htab);
3326 if (stub_entry == NULL)
3327 {
3328 free (stub_name);
3329 return FALSE;
3330 }
3331
3332 stub_entry->adrp_offset = adrp_offset;
3333 stub_entry->target_value = ldst_offset;
3334 stub_entry->target_section = section;
3335 stub_entry->stub_type = aarch64_stub_erratum_843419_veneer;
3336 stub_entry->veneered_insn = insn;
3337 stub_entry->output_name = stub_name;
3338
3339 return TRUE;
3340 }
3341
3342
3343 /* Scan an input section looking for the signature of erratum 843419.
3344
3345 Scans input SECTION in INPUT_BFD looking for erratum 843419
3346 signatures, for each signature found a stub_entry is created
3347 describing the location of the erratum for subsequent fixup.
3348
3349 Return TRUE on successful scan, FALSE on failure to scan.
3350 */
3351
3352 static bfd_boolean
3353 _bfd_aarch64_erratum_843419_scan (bfd *input_bfd, asection *section,
3354 struct bfd_link_info *info)
3355 {
3356 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3357
3358 if (htab == NULL)
3359 return TRUE;
3360
3361 if (elf_section_type (section) != SHT_PROGBITS
3362 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3363 || (section->flags & SEC_EXCLUDE) != 0
3364 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3365 || (section->output_section == bfd_abs_section_ptr))
3366 return TRUE;
3367
3368 do
3369 {
3370 bfd_byte *contents = NULL;
3371 struct _aarch64_elf_section_data *sec_data;
3372 unsigned int span;
3373
3374 if (elf_section_data (section)->this_hdr.contents != NULL)
3375 contents = elf_section_data (section)->this_hdr.contents;
3376 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3377 return FALSE;
3378
3379 sec_data = elf_aarch64_section_data (section);
3380
3381 qsort (sec_data->map, sec_data->mapcount,
3382 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3383
3384 for (span = 0; span < sec_data->mapcount; span++)
3385 {
3386 unsigned int span_start = sec_data->map[span].vma;
3387 unsigned int span_end = ((span == sec_data->mapcount - 1)
3388 ? sec_data->map[0].vma + section->size
3389 : sec_data->map[span + 1].vma);
3390 unsigned int i;
3391 char span_type = sec_data->map[span].type;
3392
3393 if (span_type == 'd')
3394 continue;
3395
3396 for (i = span_start; i + 8 < span_end; i += 4)
3397 {
3398 bfd_vma vma = (section->output_section->vma
3399 + section->output_offset
3400 + i);
3401 bfd_vma veneer_i;
3402
3403 if (_bfd_aarch64_erratum_843419_p
3404 (contents, vma, i, span_end, &veneer_i))
3405 {
3406 uint32_t insn = bfd_getl32 (contents + veneer_i);
3407
3408 if (!_bfd_aarch64_erratum_843419_fixup (insn, i, veneer_i,
3409 section, info))
3410 return FALSE;
3411 }
3412 }
3413 }
3414
3415 if (elf_section_data (section)->this_hdr.contents == NULL)
3416 free (contents);
3417 }
3418 while (0);
3419
3420 return TRUE;
3421 }
3422
3423
3424 /* Determine and set the size of the stub section for a final link.
3425
3426 The basic idea here is to examine all the relocations looking for
3427 PC-relative calls to a target that is unreachable with a "bl"
3428 instruction. */
3429
3430 bfd_boolean
3431 elfNN_aarch64_size_stubs (bfd *output_bfd,
3432 bfd *stub_bfd,
3433 struct bfd_link_info *info,
3434 bfd_signed_vma group_size,
3435 asection * (*add_stub_section) (const char *,
3436 asection *),
3437 void (*layout_sections_again) (void))
3438 {
3439 bfd_size_type stub_group_size;
3440 bfd_boolean stubs_always_before_branch;
3441 bfd_boolean stub_changed = FALSE;
3442 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3443 unsigned int num_erratum_835769_fixes = 0;
3444
3445 /* Propagate mach to stub bfd, because it may not have been
3446 finalized when we created stub_bfd. */
3447 bfd_set_arch_mach (stub_bfd, bfd_get_arch (output_bfd),
3448 bfd_get_mach (output_bfd));
3449
3450 /* Stash our params away. */
3451 htab->stub_bfd = stub_bfd;
3452 htab->add_stub_section = add_stub_section;
3453 htab->layout_sections_again = layout_sections_again;
3454 stubs_always_before_branch = group_size < 0;
3455 if (group_size < 0)
3456 stub_group_size = -group_size;
3457 else
3458 stub_group_size = group_size;
3459
3460 if (stub_group_size == 1)
3461 {
3462 /* Default values. */
3463 /* AArch64 branch range is +-128MB. The value used is 1MB less. */
3464 stub_group_size = 127 * 1024 * 1024;
3465 }
3466
3467 group_sections (htab, stub_group_size, stubs_always_before_branch);
3468
3469 (*htab->layout_sections_again) ();
3470
3471 if (htab->fix_erratum_835769)
3472 {
3473 bfd *input_bfd;
3474
3475 for (input_bfd = info->input_bfds;
3476 input_bfd != NULL; input_bfd = input_bfd->link.next)
3477 if (!_bfd_aarch64_erratum_835769_scan (input_bfd, info,
3478 &num_erratum_835769_fixes))
3479 return FALSE;
3480
3481 _bfd_aarch64_resize_stubs (htab);
3482 (*htab->layout_sections_again) ();
3483 }
3484
3485 if (htab->fix_erratum_843419)
3486 {
3487 bfd *input_bfd;
3488
3489 for (input_bfd = info->input_bfds;
3490 input_bfd != NULL;
3491 input_bfd = input_bfd->link.next)
3492 {
3493 asection *section;
3494
3495 for (section = input_bfd->sections;
3496 section != NULL;
3497 section = section->next)
3498 if (!_bfd_aarch64_erratum_843419_scan (input_bfd, section, info))
3499 return FALSE;
3500 }
3501
3502 _bfd_aarch64_resize_stubs (htab);
3503 (*htab->layout_sections_again) ();
3504 }
3505
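/* Iterate until a pass adds no new stubs.  Creating a stub grows the
   stub sections and can move other code, which in turn may push further
   branches out of range, so sizing has to be repeated until it
   converges.  */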
3506 while (1)
3507 {
3508 bfd *input_bfd;
3509
3510 for (input_bfd = info->input_bfds;
3511 input_bfd != NULL; input_bfd = input_bfd->link.next)
3512 {
3513 Elf_Internal_Shdr *symtab_hdr;
3514 asection *section;
3515 Elf_Internal_Sym *local_syms = NULL;
3516
3517 /* We'll need the symbol table in a second. */
3518 symtab_hdr = &elf_tdata (input_bfd)->symtab_hdr;
3519 if (symtab_hdr->sh_info == 0)
3520 continue;
3521
3522 /* Walk over each section attached to the input bfd. */
3523 for (section = input_bfd->sections;
3524 section != NULL; section = section->next)
3525 {
3526 Elf_Internal_Rela *internal_relocs, *irelaend, *irela;
3527
3528 /* If there aren't any relocs, then there's nothing more
3529 to do. */
3530 if ((section->flags & SEC_RELOC) == 0
3531 || section->reloc_count == 0
3532 || (section->flags & SEC_CODE) == 0)
3533 continue;
3534
3535 /* If this section is a link-once section that will be
3536 discarded, then don't create any stubs. */
3537 if (section->output_section == NULL
3538 || section->output_section->owner != output_bfd)
3539 continue;
3540
3541 /* Get the relocs. */
3542 internal_relocs
3543 = _bfd_elf_link_read_relocs (input_bfd, section, NULL,
3544 NULL, info->keep_memory);
3545 if (internal_relocs == NULL)
3546 goto error_ret_free_local;
3547
3548 /* Now examine each relocation. */
3549 irela = internal_relocs;
3550 irelaend = irela + section->reloc_count;
3551 for (; irela < irelaend; irela++)
3552 {
3553 unsigned int r_type, r_indx;
3554 enum elf_aarch64_stub_type stub_type;
3555 struct elf_aarch64_stub_hash_entry *stub_entry;
3556 asection *sym_sec;
3557 bfd_vma sym_value;
3558 bfd_vma destination;
3559 struct elf_aarch64_link_hash_entry *hash;
3560 const char *sym_name;
3561 char *stub_name;
3562 const asection *id_sec;
3563 unsigned char st_type;
3564 bfd_size_type len;
3565
3566 r_type = ELFNN_R_TYPE (irela->r_info);
3567 r_indx = ELFNN_R_SYM (irela->r_info);
3568
3569 if (r_type >= (unsigned int) R_AARCH64_end)
3570 {
3571 bfd_set_error (bfd_error_bad_value);
3572 error_ret_free_internal:
3573 if (elf_section_data (section)->relocs == NULL)
3574 free (internal_relocs);
3575 goto error_ret_free_local;
3576 }
3577
3578 /* Only look for stubs on unconditional branch and
3579 branch and link instructions. */
3580 if (r_type != (unsigned int) AARCH64_R (CALL26)
3581 && r_type != (unsigned int) AARCH64_R (JUMP26))
3582 continue;
3583
3584 /* Now determine the call target, its name, value,
3585 section. */
3586 sym_sec = NULL;
3587 sym_value = 0;
3588 destination = 0;
3589 hash = NULL;
3590 sym_name = NULL;
3591 if (r_indx < symtab_hdr->sh_info)
3592 {
3593 /* It's a local symbol. */
3594 Elf_Internal_Sym *sym;
3595 Elf_Internal_Shdr *hdr;
3596
3597 if (local_syms == NULL)
3598 {
3599 local_syms
3600 = (Elf_Internal_Sym *) symtab_hdr->contents;
3601 if (local_syms == NULL)
3602 local_syms
3603 = bfd_elf_get_elf_syms (input_bfd, symtab_hdr,
3604 symtab_hdr->sh_info, 0,
3605 NULL, NULL, NULL);
3606 if (local_syms == NULL)
3607 goto error_ret_free_internal;
3608 }
3609
3610 sym = local_syms + r_indx;
3611 hdr = elf_elfsections (input_bfd)[sym->st_shndx];
3612 sym_sec = hdr->bfd_section;
3613 if (!sym_sec)
3614 /* This is an undefined symbol. It can never
3615 be resolved. */
3616 continue;
3617
3618 if (ELF_ST_TYPE (sym->st_info) != STT_SECTION)
3619 sym_value = sym->st_value;
3620 destination = (sym_value + irela->r_addend
3621 + sym_sec->output_offset
3622 + sym_sec->output_section->vma);
3623 st_type = ELF_ST_TYPE (sym->st_info);
3624 sym_name
3625 = bfd_elf_string_from_elf_section (input_bfd,
3626 symtab_hdr->sh_link,
3627 sym->st_name);
3628 }
3629 else
3630 {
3631 int e_indx;
3632
3633 e_indx = r_indx - symtab_hdr->sh_info;
3634 hash = ((struct elf_aarch64_link_hash_entry *)
3635 elf_sym_hashes (input_bfd)[e_indx]);
3636
3637 while (hash->root.root.type == bfd_link_hash_indirect
3638 || hash->root.root.type == bfd_link_hash_warning)
3639 hash = ((struct elf_aarch64_link_hash_entry *)
3640 hash->root.root.u.i.link);
3641
3642 if (hash->root.root.type == bfd_link_hash_defined
3643 || hash->root.root.type == bfd_link_hash_defweak)
3644 {
3645 struct elf_aarch64_link_hash_table *globals =
3646 elf_aarch64_hash_table (info);
3647 sym_sec = hash->root.root.u.def.section;
3648 sym_value = hash->root.root.u.def.value;
3649 /* For a destination in a shared library,
3650 use the PLT stub as target address to
3651 decide whether a branch stub is
3652 needed. */
3653 if (globals->root.splt != NULL && hash != NULL
3654 && hash->root.plt.offset != (bfd_vma) - 1)
3655 {
3656 sym_sec = globals->root.splt;
3657 sym_value = hash->root.plt.offset;
3658 if (sym_sec->output_section != NULL)
3659 destination = (sym_value
3660 + sym_sec->output_offset
3661 + sym_sec->output_section->vma);
3663 }
3664 else if (sym_sec->output_section != NULL)
3665 destination = (sym_value + irela->r_addend
3666 + sym_sec->output_offset
3667 + sym_sec->output_section->vma);
3668 }
3669 else if (hash->root.root.type == bfd_link_hash_undefined
3670 || (hash->root.root.type
3671 == bfd_link_hash_undefweak))
3672 {
3673 /* For a shared library, use the PLT stub as
3674 target address to decide whether a long
3675 branch stub is needed.
3676 Undefined symbols that have no PLT entry cannot be handled; they are skipped. */
3677 struct elf_aarch64_link_hash_table *globals =
3678 elf_aarch64_hash_table (info);
3679
3680 if (globals->root.splt != NULL && hash != NULL
3681 && hash->root.plt.offset != (bfd_vma) - 1)
3682 {
3683 sym_sec = globals->root.splt;
3684 sym_value = hash->root.plt.offset;
3685 if (sym_sec->output_section != NULL)
3686 destination = (sym_value
3687 + sym_sec->output_offset
3688 + sym_sec->output_section->vma);
3690 }
3691 else
3692 continue;
3693 }
3694 else
3695 {
3696 bfd_set_error (bfd_error_bad_value);
3697 goto error_ret_free_internal;
3698 }
3699 st_type = ELF_ST_TYPE (hash->root.type);
3700 sym_name = hash->root.root.root.string;
3701 }
3702
3703 /* Determine what (if any) linker stub is needed. */
3704 stub_type = aarch64_type_of_stub
3705 (info, section, irela, st_type, hash, destination);
3706 if (stub_type == aarch64_stub_none)
3707 continue;
3708
3709 /* Support for grouping stub sections. */
3710 id_sec = htab->stub_group[section->id].link_sec;
3711
3712 /* Get the name of this stub. */
3713 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, hash,
3714 irela);
3715 if (!stub_name)
3716 goto error_ret_free_internal;
3717
3718 stub_entry =
3719 aarch64_stub_hash_lookup (&htab->stub_hash_table,
3720 stub_name, FALSE, FALSE);
3721 if (stub_entry != NULL)
3722 {
3723 /* The proper stub has already been created. */
3724 free (stub_name);
3725 continue;
3726 }
3727
3728 stub_entry = _bfd_aarch64_add_stub_entry_in_group
3729 (stub_name, section, htab);
3730 if (stub_entry == NULL)
3731 {
3732 free (stub_name);
3733 goto error_ret_free_internal;
3734 }
3735
3736 stub_entry->target_value = sym_value;
3737 stub_entry->target_section = sym_sec;
3738 stub_entry->stub_type = stub_type;
3739 stub_entry->h = hash;
3740 stub_entry->st_type = st_type;
3741
3742 if (sym_name == NULL)
3743 sym_name = "unnamed";
3744 len = sizeof (STUB_ENTRY_NAME) + strlen (sym_name);
3745 stub_entry->output_name = bfd_alloc (htab->stub_bfd, len);
3746 if (stub_entry->output_name == NULL)
3747 {
3748 free (stub_name);
3749 goto error_ret_free_internal;
3750 }
3751
3752 snprintf (stub_entry->output_name, len, STUB_ENTRY_NAME,
3753 sym_name);
3754
3755 stub_changed = TRUE;
3756 }
3757
3758 /* We're done with the internal relocs, free them. */
3759 if (elf_section_data (section)->relocs == NULL)
3760 free (internal_relocs);
3761 }
3762 }
3763
3764 if (!stub_changed)
3765 break;
3766
3767 _bfd_aarch64_resize_stubs (htab);
3768
3769 /* Ask the linker to do its stuff. */
3770 (*htab->layout_sections_again) ();
3771 stub_changed = FALSE;
3772 }
3773
3774 return TRUE;
3775
3776 error_ret_free_local:
3777 return FALSE;
3778 }
3779
3780 /* Build all the stubs associated with the current output file. The
3781 stubs are kept in a hash table attached to the main linker hash
3782 table. We also set up the .plt entries for statically linked PIC
3783 functions here. This function is called via aarch64_elf_finish in the
3784 linker. */
3785
3786 bfd_boolean
3787 elfNN_aarch64_build_stubs (struct bfd_link_info *info)
3788 {
3789 asection *stub_sec;
3790 struct bfd_hash_table *table;
3791 struct elf_aarch64_link_hash_table *htab;
3792
3793 htab = elf_aarch64_hash_table (info);
3794
3795 for (stub_sec = htab->stub_bfd->sections;
3796 stub_sec != NULL; stub_sec = stub_sec->next)
3797 {
3798 bfd_size_type size;
3799
3800 /* Ignore non-stub sections. */
3801 if (!strstr (stub_sec->name, STUB_SUFFIX))
3802 continue;
3803
3804 /* Allocate memory to hold the linker stubs. */
3805 size = stub_sec->size;
3806 stub_sec->contents = bfd_zalloc (htab->stub_bfd, size);
3807 if (stub_sec->contents == NULL && size != 0)
3808 return FALSE;
3809 stub_sec->size = 0;
3810
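/* Place an unconditional branch (opcode 0x14000000, word offset in the
   low 26 bits) at the very start of the stub section; SIZE >> 2 is the
   section size in instructions, so this presumably lets any fall-through
   execution skip over all of the veneers.  The size field is then reused
   as the fill pointer while the individual stubs are emitted.  */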
3811 bfd_putl32 (0x14000000 | (size >> 2), stub_sec->contents);
3812 stub_sec->size += 4;
3813 }
3814
3815 /* Build the stubs as directed by the stub hash table. */
3816 table = &htab->stub_hash_table;
3817 bfd_hash_traverse (table, aarch64_build_one_stub, info);
3818
3819 return TRUE;
3820 }
3821
3822
3823 /* Add an entry to the code/data map for section SEC. */
3824
3825 static void
3826 elfNN_aarch64_section_map_add (asection *sec, char type, bfd_vma vma)
3827 {
3828 struct _aarch64_elf_section_data *sec_data =
3829 elf_aarch64_section_data (sec);
3830 unsigned int newidx;
3831
3832 if (sec_data->map == NULL)
3833 {
3834 sec_data->map = bfd_malloc (sizeof (elf_aarch64_section_map));
3835 sec_data->mapcount = 0;
3836 sec_data->mapsize = 1;
3837 }
3838
3839 newidx = sec_data->mapcount++;
3840
3841 if (sec_data->mapcount > sec_data->mapsize)
3842 {
3843 sec_data->mapsize *= 2;
3844 sec_data->map = bfd_realloc_or_free
3845 (sec_data->map, sec_data->mapsize * sizeof (elf_aarch64_section_map));
3846 }
3847
3848 if (sec_data->map)
3849 {
3850 sec_data->map[newidx].vma = vma;
3851 sec_data->map[newidx].type = type;
3852 }
3853 }
3854
3855
3856 /* Initialise maps of insn/data for input BFDs. */
3857 void
3858 bfd_elfNN_aarch64_init_maps (bfd *abfd)
3859 {
3860 Elf_Internal_Sym *isymbuf;
3861 Elf_Internal_Shdr *hdr;
3862 unsigned int i, localsyms;
3863
3864 /* Make sure that we are dealing with an AArch64 elf binary. */
3865 if (!is_aarch64_elf (abfd))
3866 return;
3867
3868 if ((abfd->flags & DYNAMIC) != 0)
3869 return;
3870
3871 hdr = &elf_symtab_hdr (abfd);
3872 localsyms = hdr->sh_info;
3873
3874 /* Obtain a buffer full of symbols for this BFD. The hdr->sh_info field
3875 should contain the number of local symbols, which should come before any
3876 global symbols. Mapping symbols are always local. */
3877 isymbuf = bfd_elf_get_elf_syms (abfd, hdr, localsyms, 0, NULL, NULL, NULL);
3878
3879 /* No internal symbols read? Skip this BFD. */
3880 if (isymbuf == NULL)
3881 return;
3882
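/* Mapping symbols follow the AArch64 ELF convention: "$x" marks the
   start of a run of A64 code and "$d" the start of literal data, so
   name[1] ('x' or 'd') below is recorded as the span type consumed by
   the erratum scanners.  */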
3883 for (i = 0; i < localsyms; i++)
3884 {
3885 Elf_Internal_Sym *isym = &isymbuf[i];
3886 asection *sec = bfd_section_from_elf_index (abfd, isym->st_shndx);
3887 const char *name;
3888
3889 if (sec != NULL && ELF_ST_BIND (isym->st_info) == STB_LOCAL)
3890 {
3891 name = bfd_elf_string_from_elf_section (abfd,
3892 hdr->sh_link,
3893 isym->st_name);
3894
3895 if (bfd_is_aarch64_special_symbol_name
3896 (name, BFD_AARCH64_SPECIAL_SYM_TYPE_MAP))
3897 elfNN_aarch64_section_map_add (sec, name[1], isym->st_value);
3898 }
3899 }
3900 }
3901
3902 /* Set option values needed during linking. */
3903 void
3904 bfd_elfNN_aarch64_set_options (struct bfd *output_bfd,
3905 struct bfd_link_info *link_info,
3906 int no_enum_warn,
3907 int no_wchar_warn, int pic_veneer,
3908 int fix_erratum_835769,
3909 int fix_erratum_843419)
3910 {
3911 struct elf_aarch64_link_hash_table *globals;
3912
3913 globals = elf_aarch64_hash_table (link_info);
3914 globals->pic_veneer = pic_veneer;
3915 globals->fix_erratum_835769 = fix_erratum_835769;
3916 globals->fix_erratum_843419 = fix_erratum_843419;
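/* Prefer rewriting an offending ADRP into an ADR when the target turns
   out to be within ADR range; otherwise a branch to a veneer is used
   instead (see _bfd_aarch64_erratum_843419_branch_to_stub).  */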
3917 globals->fix_erratum_843419_adr = TRUE;
3918
3919 BFD_ASSERT (is_aarch64_elf (output_bfd));
3920 elf_aarch64_tdata (output_bfd)->no_enum_size_warning = no_enum_warn;
3921 elf_aarch64_tdata (output_bfd)->no_wchar_size_warning = no_wchar_warn;
3922 }
3923
3924 static bfd_vma
3925 aarch64_calculate_got_entry_vma (struct elf_link_hash_entry *h,
3926 struct elf_aarch64_link_hash_table
3927 *globals, struct bfd_link_info *info,
3928 bfd_vma value, bfd *output_bfd,
3929 bfd_boolean *unresolved_reloc_p)
3930 {
3931 bfd_vma off = (bfd_vma) - 1;
3932 asection *basegot = globals->root.sgot;
3933 bfd_boolean dyn = globals->root.dynamic_sections_created;
3934
3935 if (h != NULL)
3936 {
3937 BFD_ASSERT (basegot != NULL);
3938 off = h->got.offset;
3939 BFD_ASSERT (off != (bfd_vma) - 1);
3940 if (!WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, info->shared, h)
3941 || (info->shared
3942 && SYMBOL_REFERENCES_LOCAL (info, h))
3943 || (ELF_ST_VISIBILITY (h->other)
3944 && h->root.type == bfd_link_hash_undefweak))
3945 {
3946 /* This is actually a static link, or it is a -Bsymbolic link
3947 and the symbol is defined locally. We must initialize this
3948 entry in the global offset table. Since the offset must
3949 always be a multiple of 8 (4 in the case of ILP32), we use
3950 the least significant bit to record whether we have
3951 initialized it already.
3952 When doing a dynamic link, we create a .rel(a).got relocation
3953 entry to initialize the value. This is done in the
3954 finish_dynamic_symbol routine. */
3955 if ((off & 1) != 0)
3956 off &= ~1;
3957 else
3958 {
3959 bfd_put_NN (output_bfd, value, basegot->contents + off);
3960 h->got.offset |= 1;
3961 }
3962 }
3963 else
3964 *unresolved_reloc_p = FALSE;
3965
3966 off = off + basegot->output_section->vma + basegot->output_offset;
3967 }
3968
3969 return off;
3970 }
3971
3972 /* Change R_TYPE to a more efficient access model where possible,
3973 return the new reloc type. */
3974
3975 static bfd_reloc_code_real_type
3976 aarch64_tls_transition_without_check (bfd_reloc_code_real_type r_type,
3977 struct elf_link_hash_entry *h)
3978 {
3979 bfd_boolean is_local = h == NULL;
3980
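/* Broadly: a NULL hash entry means the symbol resolves locally, so GD
   and TLSDESC sequences can relax all the way to local-exec (TPREL)
   forms; otherwise they relax to initial-exec (GOTTPREL) forms.  */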
3981 switch (r_type)
3982 {
3983 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
3984 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
3985 return (is_local
3986 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
3987 : BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21);
3988
3989 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
3990 return (is_local
3991 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
3992 : r_type);
3993
3994 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
3995 return (is_local
3996 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
3997 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
3998
3999 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4000 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
4001 return (is_local
4002 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
4003 : BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC);
4004
4005 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4006 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 : r_type;
4007
4008 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
4009 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC : r_type;
4010
4011 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4012 return r_type;
4013
4014 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4015 return (is_local
4016 ? BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12
4017 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
4018
4019 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4020 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4021 /* Instructions with these relocations will become NOPs. */
4022 return BFD_RELOC_AARCH64_NONE;
4023
4024 default:
4025 break;
4026 }
4027
4028 return r_type;
4029 }
4030
4031 static unsigned int
4032 aarch64_reloc_got_type (bfd_reloc_code_real_type r_type)
4033 {
4034 switch (r_type)
4035 {
4036 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4037 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4038 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4039 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4040 return GOT_NORMAL;
4041
4042 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4043 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4044 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4045 return GOT_TLS_GD;
4046
4047 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4048 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4049 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4050 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4051 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
4052 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
4053 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4054 return GOT_TLSDESC_GD;
4055
4056 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4057 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4058 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4059 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4060 return GOT_TLS_IE;
4061
4062 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4063 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4064 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
4065 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
4066 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
4067 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
4068 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
4069 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
4070 return GOT_UNKNOWN;
4071
4072 default:
4073 break;
4074 }
4075 return GOT_UNKNOWN;
4076 }
4077
4078 static bfd_boolean
4079 aarch64_can_relax_tls (bfd *input_bfd,
4080 struct bfd_link_info *info,
4081 bfd_reloc_code_real_type r_type,
4082 struct elf_link_hash_entry *h,
4083 unsigned long r_symndx)
4084 {
4085 unsigned int symbol_got_type;
4086 unsigned int reloc_got_type;
4087
4088 if (! IS_AARCH64_TLS_RELOC (r_type))
4089 return FALSE;
4090
4091 symbol_got_type = elfNN_aarch64_symbol_got_type (h, input_bfd, r_symndx);
4092 reloc_got_type = aarch64_reloc_got_type (r_type);
4093
4094 if (symbol_got_type == GOT_TLS_IE && GOT_TLS_GD_ANY_P (reloc_got_type))
4095 return TRUE;
4096
4097 if (info->shared)
4098 return FALSE;
4099
4100 if (h && h->root.type == bfd_link_hash_undefweak)
4101 return FALSE;
4102
4103 return TRUE;
4104 }
4105
4106 /* Given the relocation code R_TYPE, return the relaxed bfd reloc
4107 enumerator. */
4108
4109 static bfd_reloc_code_real_type
4110 aarch64_tls_transition (bfd *input_bfd,
4111 struct bfd_link_info *info,
4112 unsigned int r_type,
4113 struct elf_link_hash_entry *h,
4114 unsigned long r_symndx)
4115 {
4116 bfd_reloc_code_real_type bfd_r_type
4117 = elfNN_aarch64_bfd_reloc_from_type (r_type);
4118
4119 if (! aarch64_can_relax_tls (input_bfd, info, bfd_r_type, h, r_symndx))
4120 return bfd_r_type;
4121
4122 return aarch64_tls_transition_without_check (bfd_r_type, h);
4123 }
4124
4125 /* Return the base VMA address which should be subtracted from real addresses
4126 when resolving R_AARCH64_TLS_DTPREL relocations. */
4127
4128 static bfd_vma
4129 dtpoff_base (struct bfd_link_info *info)
4130 {
4131 /* If tls_sec is NULL, we should have signalled an error already. */
4132 BFD_ASSERT (elf_hash_table (info)->tls_sec != NULL);
4133 return elf_hash_table (info)->tls_sec->vma;
4134 }
4135
4136 /* Return the base VMA address which should be subtracted from real addresses
4137 when resolving R_AARCH64_TLS_GOTTPREL64 relocations. */
4138
4139 static bfd_vma
4140 tpoff_base (struct bfd_link_info *info)
4141 {
4142 struct elf_link_hash_table *htab = elf_hash_table (info);
4143
4144 /* If tls_sec is NULL, we should have signalled an error already. */
4145 BFD_ASSERT (htab->tls_sec != NULL);
4146
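/* The AArch64 TLS layout places the first TLS block at TP + BASE, where
   BASE is TCB_SIZE rounded up to the TLS segment alignment.  Returning
   tls_sec->vma - BASE lets callers compute a symbol's TP offset as
   value - tpoff_base (info).  For example, if TCB_SIZE is 16 and the
   segment is 16-byte aligned, a symbol 8 bytes into the TLS segment
   resolves to TP + 24.  */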
4147 bfd_vma base = align_power ((bfd_vma) TCB_SIZE,
4148 htab->tls_sec->alignment_power);
4149 return htab->tls_sec->vma - base;
4150 }
4151
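/* The helpers below (symbol_got_offset_* and symbol_tlsdesc_got_offset_*)
   borrow the least significant bit of the recorded offset as an
   "already processed" flag; GOT offsets are multiples of the entry size,
   so the bit is otherwise unused.  The *_mark_p predicates test it and
   the plain accessors mask it off.  */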
4152 static bfd_vma *
4153 symbol_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4154 unsigned long r_symndx)
4155 {
4156 /* Return the address of the slot that records the GOT offset for the
4157 symbol referred to by H, or for the local symbol with index R_SYMNDX. */
4158 if (h != NULL)
4159 return &h->got.offset;
4160 else
4161 {
4162 /* local symbol */
4163 struct elf_aarch64_local_symbol *l;
4164
4165 l = elf_aarch64_locals (input_bfd);
4166 return &l[r_symndx].got_offset;
4167 }
4168 }
4169
4170 static void
4171 symbol_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4172 unsigned long r_symndx)
4173 {
4174 bfd_vma *p;
4175 p = symbol_got_offset_ref (input_bfd, h, r_symndx);
4176 *p |= 1;
4177 }
4178
4179 static int
4180 symbol_got_offset_mark_p (bfd *input_bfd, struct elf_link_hash_entry *h,
4181 unsigned long r_symndx)
4182 {
4183 bfd_vma value;
4184 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4185 return value & 1;
4186 }
4187
4188 static bfd_vma
4189 symbol_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4190 unsigned long r_symndx)
4191 {
4192 bfd_vma value;
4193 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4194 value &= ~1;
4195 return value;
4196 }
4197
4198 static bfd_vma *
4199 symbol_tlsdesc_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4200 unsigned long r_symndx)
4201 {
4202 /* Return the address of the slot that records the TLSDESC GOT offset
4203 for the symbol referred to by H, or for the local symbol with index R_SYMNDX. */
4204 if (h != NULL)
4205 {
4206 struct elf_aarch64_link_hash_entry *eh;
4207 eh = (struct elf_aarch64_link_hash_entry *) h;
4208 return &eh->tlsdesc_got_jump_table_offset;
4209 }
4210 else
4211 {
4212 /* local symbol */
4213 struct elf_aarch64_local_symbol *l;
4214
4215 l = elf_aarch64_locals (input_bfd);
4216 return &l[r_symndx].tlsdesc_got_jump_table_offset;
4217 }
4218 }
4219
4220 static void
4221 symbol_tlsdesc_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4222 unsigned long r_symndx)
4223 {
4224 bfd_vma *p;
4225 p = symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4226 *p |= 1;
4227 }
4228
4229 static int
4230 symbol_tlsdesc_got_offset_mark_p (bfd *input_bfd,
4231 struct elf_link_hash_entry *h,
4232 unsigned long r_symndx)
4233 {
4234 bfd_vma value;
4235 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4236 return value & 1;
4237 }
4238
4239 static bfd_vma
4240 symbol_tlsdesc_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4241 unsigned long r_symndx)
4242 {
4243 bfd_vma value;
4244 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4245 value &= ~1;
4246 return value;
4247 }
4248
4249 /* Data for make_branch_to_erratum_835769_stub () and _bfd_aarch64_erratum_843419_branch_to_stub (). */
4250
4251 struct erratum_835769_branch_to_stub_data
4252 {
4253 struct bfd_link_info *info;
4254 asection *output_section;
4255 bfd_byte *contents;
4256 };
4257
4258 /* Helper to insert branches to erratum 835769 stubs in the right
4259 places for a particular section. */
4260
4261 static bfd_boolean
4262 make_branch_to_erratum_835769_stub (struct bfd_hash_entry *gen_entry,
4263 void *in_arg)
4264 {
4265 struct elf_aarch64_stub_hash_entry *stub_entry;
4266 struct erratum_835769_branch_to_stub_data *data;
4267 bfd_byte *contents;
4268 unsigned long branch_insn = 0;
4269 bfd_vma veneered_insn_loc, veneer_entry_loc;
4270 bfd_signed_vma branch_offset;
4271 unsigned int target;
4272 bfd *abfd;
4273
4274 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4275 data = (struct erratum_835769_branch_to_stub_data *) in_arg;
4276
4277 if (stub_entry->target_section != data->output_section
4278 || stub_entry->stub_type != aarch64_stub_erratum_835769_veneer)
4279 return TRUE;
4280
4281 contents = data->contents;
4282 veneered_insn_loc = stub_entry->target_section->output_section->vma
4283 + stub_entry->target_section->output_offset
4284 + stub_entry->target_value;
4285 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4286 + stub_entry->stub_sec->output_offset
4287 + stub_entry->stub_offset;
4288 branch_offset = veneer_entry_loc - veneered_insn_loc;
4289
4290 abfd = stub_entry->target_section->owner;
4291 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4292 (*_bfd_error_handler)
4293 (_("%B: error: Erratum 835769 stub out "
4294 "of range (input file too large)"), abfd);
4295
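/* Overwrite the veneered instruction with an unconditional branch to the
   veneer: opcode 0x14000000 (B) with the signed word offset in the low
   26 bits.  */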
4296 target = stub_entry->target_value;
4297 branch_insn = 0x14000000;
4298 branch_offset >>= 2;
4299 branch_offset &= 0x3ffffff;
4300 branch_insn |= branch_offset;
4301 bfd_putl32 (branch_insn, &contents[target]);
4302
4303 return TRUE;
4304 }
4305
4306
4307 static bfd_boolean
4308 _bfd_aarch64_erratum_843419_branch_to_stub (struct bfd_hash_entry *gen_entry,
4309 void *in_arg)
4310 {
4311 struct elf_aarch64_stub_hash_entry *stub_entry
4312 = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4313 struct erratum_835769_branch_to_stub_data *data
4314 = (struct erratum_835769_branch_to_stub_data *) in_arg;
4315 struct bfd_link_info *info;
4316 struct elf_aarch64_link_hash_table *htab;
4317 bfd_byte *contents;
4318 asection *section;
4319 bfd *abfd;
4320 bfd_vma place;
4321 uint32_t insn;
4322
4323 info = data->info;
4324 contents = data->contents;
4325 section = data->output_section;
4326
4327 htab = elf_aarch64_hash_table (info);
4328
4329 if (stub_entry->target_section != section
4330 || stub_entry->stub_type != aarch64_stub_erratum_843419_veneer)
4331 return TRUE;
4332
4333 insn = bfd_getl32 (contents + stub_entry->target_value);
4334 bfd_putl32 (insn,
4335 stub_entry->stub_sec->contents + stub_entry->stub_offset);
4336
4337 place = (section->output_section->vma + section->output_offset
4338 + stub_entry->adrp_offset);
4339 insn = bfd_getl32 (contents + stub_entry->adrp_offset);
4340
4341 if ((insn & AARCH64_ADRP_OP_MASK) != AARCH64_ADRP_OP)
4342 abort ();
4343
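/* An ADRP at PLACE computes (PLACE & ~0xfff) + (imm << 12), while an ADR
   computes PLACE + imm.  To substitute an ADR producing the same address,
   the new immediate must be the decoded page offset minus the low twelve
   bits of PLACE, which is what IMM works out below; the 33-bit sign
   extension covers the full +/-4GB ADRP range.  */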
4344 bfd_signed_vma imm =
4345 (_bfd_aarch64_sign_extend
4346 ((bfd_vma) _bfd_aarch64_decode_adrp_imm (insn) << 12, 33)
4347 - (place & 0xfff));
4348
4349 if (htab->fix_erratum_843419_adr
4350 && (imm >= AARCH64_MIN_ADRP_IMM && imm <= AARCH64_MAX_ADRP_IMM))
4351 {
4352 insn = (_bfd_aarch64_reencode_adr_imm (AARCH64_ADR_OP, imm)
4353 | AARCH64_RT (insn));
4354 bfd_putl32 (insn, contents + stub_entry->adrp_offset);
4355 }
4356 else
4357 {
4358 bfd_vma veneered_insn_loc;
4359 bfd_vma veneer_entry_loc;
4360 bfd_signed_vma branch_offset;
4361 uint32_t branch_insn;
4362
4363 veneered_insn_loc = stub_entry->target_section->output_section->vma
4364 + stub_entry->target_section->output_offset
4365 + stub_entry->target_value;
4366 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4367 + stub_entry->stub_sec->output_offset
4368 + stub_entry->stub_offset;
4369 branch_offset = veneer_entry_loc - veneered_insn_loc;
4370
4371 abfd = stub_entry->target_section->owner;
4372 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4373 (*_bfd_error_handler)
4374 (_("%B: error: Erratum 843419 stub out "
4375 "of range (input file too large)"), abfd);
4376
4377 branch_insn = 0x14000000;
4378 branch_offset >>= 2;
4379 branch_offset &= 0x3ffffff;
4380 branch_insn |= branch_offset;
4381 bfd_putl32 (branch_insn, contents + stub_entry->target_value);
4382 }
4383 return TRUE;
4384 }
4385
4386
4387 static bfd_boolean
4388 elfNN_aarch64_write_section (bfd *output_bfd ATTRIBUTE_UNUSED,
4389 struct bfd_link_info *link_info,
4390 asection *sec,
4391 bfd_byte *contents)
4392
4393 {
4394 struct elf_aarch64_link_hash_table *globals =
4395 elf_aarch64_hash_table (link_info);
4396
4397 if (globals == NULL)
4398 return FALSE;
4399
4400 /* Fix code to point to erratum 835769 stubs. */
4401 if (globals->fix_erratum_835769)
4402 {
4403 struct erratum_835769_branch_to_stub_data data;
4404
4405 data.info = link_info;
4406 data.output_section = sec;
4407 data.contents = contents;
4408 bfd_hash_traverse (&globals->stub_hash_table,
4409 make_branch_to_erratum_835769_stub, &data);
4410 }
4411
4412 if (globals->fix_erratum_843419)
4413 {
4414 struct erratum_835769_branch_to_stub_data data;
4415
4416 data.info = link_info;
4417 data.output_section = sec;
4418 data.contents = contents;
4419 bfd_hash_traverse (&globals->stub_hash_table,
4420 _bfd_aarch64_erratum_843419_branch_to_stub, &data);
4421 }
4422
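/* Returning FALSE appears intentional: it tells the generic ELF code
   that it should still write the (now patched) CONTENTS to the output
   itself.  */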
4423 return FALSE;
4424 }
4425
4426 /* Perform a relocation as part of a final link. */
4427 static bfd_reloc_status_type
4428 elfNN_aarch64_final_link_relocate (reloc_howto_type *howto,
4429 bfd *input_bfd,
4430 bfd *output_bfd,
4431 asection *input_section,
4432 bfd_byte *contents,
4433 Elf_Internal_Rela *rel,
4434 bfd_vma value,
4435 struct bfd_link_info *info,
4436 asection *sym_sec,
4437 struct elf_link_hash_entry *h,
4438 bfd_boolean *unresolved_reloc_p,
4439 bfd_boolean save_addend,
4440 bfd_vma *saved_addend,
4441 Elf_Internal_Sym *sym)
4442 {
4443 Elf_Internal_Shdr *symtab_hdr;
4444 unsigned int r_type = howto->type;
4445 bfd_reloc_code_real_type bfd_r_type
4446 = elfNN_aarch64_bfd_reloc_from_howto (howto);
4447 bfd_reloc_code_real_type new_bfd_r_type;
4448 unsigned long r_symndx;
4449 bfd_byte *hit_data = contents + rel->r_offset;
4450 bfd_vma place;
4451 bfd_signed_vma signed_addend;
4452 struct elf_aarch64_link_hash_table *globals;
4453 bfd_boolean weak_undef_p;
4454
4455 globals = elf_aarch64_hash_table (info);
4456
4457 symtab_hdr = &elf_symtab_hdr (input_bfd);
4458
4459 BFD_ASSERT (is_aarch64_elf (input_bfd));
4460
4461 r_symndx = ELFNN_R_SYM (rel->r_info);
4462
4463 /* It is possible to have linker relaxations on some TLS access
4464 models. Update our information here. */
4465 new_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type, h, r_symndx);
4466 if (new_bfd_r_type != bfd_r_type)
4467 {
4468 bfd_r_type = new_bfd_r_type;
4469 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
4470 BFD_ASSERT (howto != NULL);
4471 r_type = howto->type;
4472 }
4473
4474 place = input_section->output_section->vma
4475 + input_section->output_offset + rel->r_offset;
4476
4477 /* Get addend, accumulating the addend for consecutive relocs
4478 which refer to the same offset. */
4479 signed_addend = saved_addend ? *saved_addend : 0;
4480 signed_addend += rel->r_addend;
4481
4482 weak_undef_p = (h ? h->root.type == bfd_link_hash_undefweak
4483 : bfd_is_und_section (sym_sec));
4484
4485 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle
4486 it here if it is defined in a non-shared object. */
4487 if (h != NULL
4488 && h->type == STT_GNU_IFUNC
4489 && h->def_regular)
4490 {
4491 asection *plt;
4492 const char *name;
4493 asection *base_got;
4494 bfd_vma off;
4495
4496 if ((input_section->flags & SEC_ALLOC) == 0
4497 || h->plt.offset == (bfd_vma) -1)
4498 abort ();
4499
4500 /* STT_GNU_IFUNC symbol must go through PLT. */
4501 plt = globals->root.splt ? globals->root.splt : globals->root.iplt;
4502 value = (plt->output_section->vma + plt->output_offset + h->plt.offset);
4503
4504 switch (bfd_r_type)
4505 {
4506 default:
4507 if (h->root.root.string)
4508 name = h->root.root.string;
4509 else
4510 name = bfd_elf_sym_name (input_bfd, symtab_hdr, sym,
4511 NULL);
4512 (*_bfd_error_handler)
4513 (_("%B: relocation %s against STT_GNU_IFUNC "
4514 "symbol `%s' isn't handled by %s"), input_bfd,
4515 howto->name, name, __FUNCTION__);
4516 bfd_set_error (bfd_error_bad_value);
4517 return FALSE;
4518
4519 case BFD_RELOC_AARCH64_NN:
4520 if (rel->r_addend != 0)
4521 {
4522 if (h->root.root.string)
4523 name = h->root.root.string;
4524 else
4525 name = bfd_elf_sym_name (input_bfd, symtab_hdr,
4526 sym, NULL);
4527 (*_bfd_error_handler)
4528 (_("%B: relocation %s against STT_GNU_IFUNC "
4529 "symbol `%s' has non-zero addend: %d"),
4530 input_bfd, howto->name, name, rel->r_addend);
4531 bfd_set_error (bfd_error_bad_value);
4532 return FALSE;
4533 }
4534
4535 /* Generate dynamic relocation only when there is a
4536 non-GOT reference in a shared object. */
4537 if (info->shared && h->non_got_ref)
4538 {
4539 Elf_Internal_Rela outrel;
4540 asection *sreloc;
4541
4542 /* Need a dynamic relocation to get the real function
4543 address. */
4544 outrel.r_offset = _bfd_elf_section_offset (output_bfd,
4545 info,
4546 input_section,
4547 rel->r_offset);
4548 if (outrel.r_offset == (bfd_vma) -1
4549 || outrel.r_offset == (bfd_vma) -2)
4550 abort ();
4551
4552 outrel.r_offset += (input_section->output_section->vma
4553 + input_section->output_offset);
4554
4555 if (h->dynindx == -1
4556 || h->forced_local
4557 || info->executable)
4558 {
4559 /* This symbol is resolved locally. */
4560 outrel.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
4561 outrel.r_addend = (h->root.u.def.value
4562 + h->root.u.def.section->output_section->vma
4563 + h->root.u.def.section->output_offset);
4564 }
4565 else
4566 {
4567 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4568 outrel.r_addend = 0;
4569 }
4570
4571 sreloc = globals->root.irelifunc;
4572 elf_append_rela (output_bfd, sreloc, &outrel);
4573
4574 /* If this reloc is against an external symbol, we
4575 do not want to fiddle with the addend. Otherwise,
4576 we need to include the symbol value so that it
4577 becomes an addend for the dynamic reloc. For an
4578 internal symbol, we have updated the addend. */
4579 return bfd_reloc_ok;
4580 }
4581 /* FALLTHROUGH */
4582 case BFD_RELOC_AARCH64_JUMP26:
4583 case BFD_RELOC_AARCH64_CALL26:
4584 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4585 signed_addend,
4586 weak_undef_p);
4587 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
4588 howto, value);
4589 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4590 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4591 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4592 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4593 base_got = globals->root.sgot;
4594 off = h->got.offset;
4595
4596 if (base_got == NULL)
4597 abort ();
4598
4599 if (off == (bfd_vma) -1)
4600 {
4601 bfd_vma plt_index;
4602
4603 /* We can't use h->got.offset here to save state, or
4604 even just remember the offset, as finish_dynamic_symbol
4605 would use that as offset into .got. */
4606
4607 if (globals->root.splt != NULL)
4608 {
4609 plt_index = ((h->plt.offset - globals->plt_header_size) /
4610 globals->plt_entry_size);
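/* The first three .got.plt slots are reserved (conventionally GOT[0]
   holds &_DYNAMIC and GOT[1]/GOT[2] are for the dynamic linker), so
   PLT-indexed GOT entries start at index 3, hence the +3 below.  */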
4611 off = (plt_index + 3) * GOT_ENTRY_SIZE;
4612 base_got = globals->root.sgotplt;
4613 }
4614 else
4615 {
4616 plt_index = h->plt.offset / globals->plt_entry_size;
4617 off = plt_index * GOT_ENTRY_SIZE;
4618 base_got = globals->root.igotplt;
4619 }
4620
4621 if (h->dynindx == -1
4622 || h->forced_local
4623 || info->symbolic)
4624 {
4625 /* This references the local definition. We must
4626 initialize this entry in the global offset table.
4627 Since the offset must always be a multiple of 8,
4628 we use the least significant bit to record
4629 whether we have initialized it already.
4630
4631 When doing a dynamic link, we create a .rela.got
4632 relocation entry to initialize the value. This
4633 is done in the finish_dynamic_symbol routine. */
4634 if ((off & 1) != 0)
4635 off &= ~1;
4636 else
4637 {
4638 bfd_put_NN (output_bfd, value,
4639 base_got->contents + off);
4640 /* Note that this is harmless as -1 | 1 still is -1. */
4641 h->got.offset |= 1;
4642 }
4643 }
4644 value = (base_got->output_section->vma
4645 + base_got->output_offset + off);
4646 }
4647 else
4648 value = aarch64_calculate_got_entry_vma (h, globals, info,
4649 value, output_bfd,
4650 unresolved_reloc_p);
4651 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4652 0, weak_undef_p);
4653 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type, howto, value);
4654 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4655 case BFD_RELOC_AARCH64_ADD_LO12:
4656 break;
4657 }
4658 }
4659
4660 switch (bfd_r_type)
4661 {
4662 case BFD_RELOC_AARCH64_NONE:
4663 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4664 *unresolved_reloc_p = FALSE;
4665 return bfd_reloc_ok;
4666
4667 case BFD_RELOC_AARCH64_NN:
4668
4669 /* When generating a shared object or relocatable executable, these
4670 relocations are copied into the output file to be resolved at
4671 run time. */
4672 if (((info->shared == TRUE) || globals->root.is_relocatable_executable)
4673 && (input_section->flags & SEC_ALLOC)
4674 && (h == NULL
4675 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
4676 || h->root.type != bfd_link_hash_undefweak))
4677 {
4678 Elf_Internal_Rela outrel;
4679 bfd_byte *loc;
4680 bfd_boolean skip, relocate;
4681 asection *sreloc;
4682
4683 *unresolved_reloc_p = FALSE;
4684
4685 skip = FALSE;
4686 relocate = FALSE;
4687
4688 outrel.r_addend = signed_addend;
4689 outrel.r_offset =
4690 _bfd_elf_section_offset (output_bfd, info, input_section,
4691 rel->r_offset);
4692 if (outrel.r_offset == (bfd_vma) - 1)
4693 skip = TRUE;
4694 else if (outrel.r_offset == (bfd_vma) - 2)
4695 {
4696 skip = TRUE;
4697 relocate = TRUE;
4698 }
4699
4700 outrel.r_offset += (input_section->output_section->vma
4701 + input_section->output_offset);
4702
4703 if (skip)
4704 memset (&outrel, 0, sizeof outrel);
4705 else if (h != NULL
4706 && h->dynindx != -1
4707 && (!info->shared || !SYMBOLIC_BIND (info, h) || !h->def_regular))
4708 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4709 else
4710 {
4711 int symbol;
4712
4713 /* On SVR4-ish systems, the dynamic loader cannot
4714 relocate the text and data segments independently,
4715 so the symbol does not matter. */
4716 symbol = 0;
4717 outrel.r_info = ELFNN_R_INFO (symbol, AARCH64_R (RELATIVE));
4718 outrel.r_addend += value;
4719 }
4720
4721 sreloc = elf_section_data (input_section)->sreloc;
4722 if (sreloc == NULL || sreloc->contents == NULL)
4723 return bfd_reloc_notsupported;
4724
4725 loc = sreloc->contents + sreloc->reloc_count++ * RELOC_SIZE (globals);
4726 bfd_elfNN_swap_reloca_out (output_bfd, &outrel, loc);
4727
4728 if (sreloc->reloc_count * RELOC_SIZE (globals) > sreloc->size)
4729 {
4730 /* Sanity check that we have previously allocated
4731 sufficient space in the relocation section for the
4732 number of relocations we actually want to emit. */
4733 abort ();
4734 }
4735
4736 /* If this reloc is against an external symbol, we do not want to
4737 fiddle with the addend. Otherwise, we need to include the symbol
4738 value so that it becomes an addend for the dynamic reloc. */
4739 if (!relocate)
4740 return bfd_reloc_ok;
4741
4742 return _bfd_final_link_relocate (howto, input_bfd, input_section,
4743 contents, rel->r_offset, value,
4744 signed_addend);
4745 }
4746 else
4747 value += signed_addend;
4748 break;
4749
4750 case BFD_RELOC_AARCH64_JUMP26:
4751 case BFD_RELOC_AARCH64_CALL26:
4752 {
4753 asection *splt = globals->root.splt;
4754 bfd_boolean via_plt_p =
4755 splt != NULL && h != NULL && h->plt.offset != (bfd_vma) - 1;
4756
4757 /* A call to an undefined weak symbol is converted to a jump to
4758 the next instruction unless a PLT entry will be created.
4759 The jump to the next instruction is optimized as a NOP.
4760 Do the same for local undefined symbols. */
4761 if (weak_undef_p && ! via_plt_p)
4762 {
4763 bfd_putl32 (INSN_NOP, hit_data);
4764 return bfd_reloc_ok;
4765 }
4766
4767 /* If the call goes through a PLT entry, make sure to
4768 check the distance to the PLT entry rather than to the symbol itself. */
4769 if (via_plt_p)
4770 {
4771 value = (splt->output_section->vma
4772 + splt->output_offset + h->plt.offset);
4773 *unresolved_reloc_p = FALSE;
4774 }
4775
4776 /* If the target symbol is global and marked as a function the
4777 relocation applies to a function call or a tail call. In this
4778 situation we can veneer out of range branches. The veneers
4779 use IP0 and IP1 and hence cannot be used for arbitrary out of range
4780 branches that occur within the body of a function. */
4781 if (h && h->type == STT_FUNC)
4782 {
4783 /* Check if a stub has to be inserted because the destination
4784 is too far away. */
4785 if (! aarch64_valid_branch_p (value, place))
4786 {
4787 /* The target is out of reach, so redirect the branch to
4788 the local stub for this function. */
4789 struct elf_aarch64_stub_hash_entry *stub_entry;
4790 stub_entry = elfNN_aarch64_get_stub_entry (input_section,
4791 sym_sec, h,
4792 rel, globals);
4793 if (stub_entry != NULL)
4794 value = (stub_entry->stub_offset
4795 + stub_entry->stub_sec->output_offset
4796 + stub_entry->stub_sec->output_section->vma);
4797 }
4798 }
4799 }
4800 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4801 signed_addend, weak_undef_p);
4802 break;
4803
4804 case BFD_RELOC_AARCH64_16:
4805 #if ARCH_SIZE == 64
4806 case BFD_RELOC_AARCH64_32:
4807 #endif
4808 case BFD_RELOC_AARCH64_ADD_LO12:
4809 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
4810 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4811 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
4812 case BFD_RELOC_AARCH64_BRANCH19:
4813 case BFD_RELOC_AARCH64_LD_LO19_PCREL:
4814 case BFD_RELOC_AARCH64_LDST8_LO12:
4815 case BFD_RELOC_AARCH64_LDST16_LO12:
4816 case BFD_RELOC_AARCH64_LDST32_LO12:
4817 case BFD_RELOC_AARCH64_LDST64_LO12:
4818 case BFD_RELOC_AARCH64_LDST128_LO12:
4819 case BFD_RELOC_AARCH64_MOVW_G0_S:
4820 case BFD_RELOC_AARCH64_MOVW_G1_S:
4821 case BFD_RELOC_AARCH64_MOVW_G2_S:
4822 case BFD_RELOC_AARCH64_MOVW_G0:
4823 case BFD_RELOC_AARCH64_MOVW_G0_NC:
4824 case BFD_RELOC_AARCH64_MOVW_G1:
4825 case BFD_RELOC_AARCH64_MOVW_G1_NC:
4826 case BFD_RELOC_AARCH64_MOVW_G2:
4827 case BFD_RELOC_AARCH64_MOVW_G2_NC:
4828 case BFD_RELOC_AARCH64_MOVW_G3:
4829 case BFD_RELOC_AARCH64_16_PCREL:
4830 case BFD_RELOC_AARCH64_32_PCREL:
4831 case BFD_RELOC_AARCH64_64_PCREL:
4832 case BFD_RELOC_AARCH64_TSTBR14:
4833 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4834 signed_addend, weak_undef_p);
4835 break;
4836
4837 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4838 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4839 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4840 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4841 if (globals->root.sgot == NULL)
4842 BFD_ASSERT (h != NULL);
4843
4844 if (h != NULL)
4845 {
4846 value = aarch64_calculate_got_entry_vma (h, globals, info, value,
4847 output_bfd,
4848 unresolved_reloc_p);
4849 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4850 0, weak_undef_p);
4851 }
4852 break;
4853
4854 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4855 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4856 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4857 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4858 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4859 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4860 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4861 if (globals->root.sgot == NULL)
4862 return bfd_reloc_notsupported;
4863
4864 value = (symbol_got_offset (input_bfd, h, r_symndx)
4865 + globals->root.sgot->output_section->vma
4866 + globals->root.sgot->output_offset);
4867
4868 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4869 0, weak_undef_p);
4870 *unresolved_reloc_p = FALSE;
4871 break;
4872
4873 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4874 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4875 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
4876 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
4877 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
4878 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
4879 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
4880 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
4881 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4882 signed_addend - tpoff_base (info),
4883 weak_undef_p);
4884 *unresolved_reloc_p = FALSE;
4885 break;
4886
4887 case BFD_RELOC_AARCH64_TLSDESC_ADD:
4888 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4889 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4890 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4891 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
4892 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
4893 case BFD_RELOC_AARCH64_TLSDESC_LDR:
4894 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4895 if (globals->root.sgot == NULL)
4896 return bfd_reloc_notsupported;
4897 value = (symbol_tlsdesc_got_offset (input_bfd, h, r_symndx)
4898 + globals->root.sgotplt->output_section->vma
4899 + globals->root.sgotplt->output_offset
4900 + globals->sgotplt_jump_table_size);
4901
4902 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4903 0, weak_undef_p);
4904 *unresolved_reloc_p = FALSE;
4905 break;
4906
4907 default:
4908 return bfd_reloc_notsupported;
4909 }
4910
4911 if (saved_addend)
4912 *saved_addend = value;
4913
4914 /* Only apply the final relocation in a sequence. */
4915 if (save_addend)
4916 return bfd_reloc_continue;
4917
4918 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
4919 howto, value);
4920 }
4921
4922 /* Handle TLS relaxations. Relaxing is possible for symbols that use
4923 R_AARCH64_TLSDESC_{ADR_PAGE21, LD64_LO12_NC, ADD_LO12_NC} during a static
4924 link.
4925
4926 Return bfd_reloc_ok if we're done, bfd_reloc_continue if the caller
4927 is to then call final_link_relocate. Return other values in the
4928 case of error. */
4929
4930 static bfd_reloc_status_type
4931 elfNN_aarch64_tls_relax (struct elf_aarch64_link_hash_table *globals,
4932 bfd *input_bfd, bfd_byte *contents,
4933 Elf_Internal_Rela *rel, struct elf_link_hash_entry *h)
4934 {
4935 bfd_boolean is_local = h == NULL;
4936 unsigned int r_type = ELFNN_R_TYPE (rel->r_info);
4937 unsigned long insn;
4938
4939 BFD_ASSERT (globals && input_bfd && contents && rel);
4940
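/* The literal instruction words stored below encode, with zero
   immediates to be filled in by the rewritten relocations:
     0xd2a00000  movz x0, #0, lsl #16
     0xf2800000  movk x0, #0
     0xd53bd041  mrs  x1, tpidr_el0
     0x91400020  add  x0, x1, #0, lsl #12
     0x91000000  add  x0, x0, #0
     0x8b000020  add  x0, x1, x0
     0x58000000  ldr  x0, <literal>
     0xf9400000  ldr  x0, [x0]
   Where the original destination register matters, the low five Rd bits
   of the old instruction are merged back in (insn & 0x1f).  */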
4941 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
4942 {
4943 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4944 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4945 if (is_local)
4946 {
4947 /* GD->LE relaxation:
4948 adrp x0, :tlsgd:var => movz x0, :tprel_g1:var
4949 or
4950 adrp x0, :tlsdesc:var => movz x0, :tprel_g1:var
4951 */
4952 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
4953 return bfd_reloc_continue;
4954 }
4955 else
4956 {
4957 /* GD->IE relaxation:
4958 adrp x0, :tlsgd:var => adrp x0, :gottprel:var
4959 or
4960 adrp x0, :tlsdesc:var => adrp x0, :gottprel:var
4961 */
4962 return bfd_reloc_continue;
4963 }
4964
4965 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4966 BFD_ASSERT (0);
4967 break;
4968
4969 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4970 if (is_local)
4971 {
4972 /* Tiny TLSDESC->LE relaxation:
4973 ldr x1, :tlsdesc:var => movz x0, #:tprel_g1:var
4974 adr x0, :tlsdesc:var => movk x0, #:tprel_g0_nc:var
4975 .tlsdesccall var
4976 blr x1 => nop
4977 */
4978 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
4979 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
4980
4981 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
4982 AARCH64_R (TLSLE_MOVW_TPREL_G0_NC));
4983 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
4984
4985 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
4986 bfd_putl32 (0xf2800000, contents + rel->r_offset + 4);
4987 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
4988 return bfd_reloc_continue;
4989 }
4990 else
4991 {
4992 /* Tiny TLSDESC->IE relaxation:
4993 ldr x1, :tlsdesc:var => ldr x0, :gottprel:var
4994 adr x0, :tlsdesc:var => nop
4995 .tlsdesccall var
4996 blr x1 => nop
4997 */
4998 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
4999 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
5000
5001 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5002 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5003
5004 bfd_putl32 (0x58000000, contents + rel->r_offset);
5005 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 4);
5006 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
5007 return bfd_reloc_continue;
5008 }
5009
5010 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5011 if (is_local)
5012 {
5013 /* Tiny GD->LE relaxation:
5014 adr x0, :tlsgd:var => mrs x1, tpidr_el0
5015 bl __tls_get_addr => add x0, x1, #:tprel_hi12:x, lsl #12
5016 nop => add x0, x0, #:tprel_lo12_nc:x
5017 */
5018
5019 /* First kill the tls_get_addr reloc on the bl instruction. */
5020 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5021
5022 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 0);
5023 bfd_putl32 (0x91400020, contents + rel->r_offset + 4);
5024 bfd_putl32 (0x91000000, contents + rel->r_offset + 8);
5025
5026 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5027 AARCH64_R (TLSLE_ADD_TPREL_LO12_NC));
5028 rel[1].r_offset = rel->r_offset + 8;
5029
5030 /* Move the current relocation to the second instruction in
5031 the sequence. */
5032 rel->r_offset += 4;
5033 rel->r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5034 AARCH64_R (TLSLE_ADD_TPREL_HI12));
5035 return bfd_reloc_continue;
5036 }
5037 else
5038 {
5039 /* Tiny GD->IE relaxation:
5040 adr x0, :tlsgd:var => ldr x0, :gottprel:var
5041 bl __tls_get_addr => mrs x1, tpidr_el0
5042 nop => add x0, x0, x1
5043 */
5044
5045 /* First kill the tls_get_addr reloc on the bl instruction. */
5046 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5047 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5048
5049 bfd_putl32 (0x58000000, contents + rel->r_offset);
5050 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5051 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5052 return bfd_reloc_continue;
5053 }
5054
5055 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5056 return bfd_reloc_continue;
5057
5058 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5059 if (is_local)
5060 {
5061 /* GD->LE relaxation:
5062 ldr xd, [x0, #:tlsdesc_lo12:var] => movk x0, :tprel_g0_nc:var
5063 */
5064 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5065 return bfd_reloc_continue;
5066 }
5067 else
5068 {
5069 /* GD->IE relaxation:
5070 ldr xd, [x0, #:tlsdesc_lo12:var] => ldr x0, [x0, #:gottprel_lo12:var]
5071 */
5072 insn = bfd_getl32 (contents + rel->r_offset);
5073 insn &= 0xffffffe0;
5074 bfd_putl32 (insn, contents + rel->r_offset);
5075 return bfd_reloc_continue;
5076 }
5077
5078 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5079 if (is_local)
5080 {
5081 /* GD->LE relaxation
5082 add x0, #:tlsgd_lo12:var => movk x0, :tprel_g0_nc:var
5083 bl __tls_get_addr => mrs x1, tpidr_el0
5084 nop => add x0, x1, x0
5085 */
5086
5087 /* First kill the tls_get_addr reloc on the bl instruction. */
5088 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5089 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5090
5091 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5092 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5093 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5094 return bfd_reloc_continue;
5095 }
5096 else
5097 {
5098 /* GD->IE relaxation
5099 ADD x0, #:tlsgd_lo12:var => ldr x0, [x0, #:gottprel_lo12:var]
5100 BL __tls_get_addr => mrs x1, tpidr_el0
5101 R_AARCH64_CALL26
5102 NOP => add x0, x1, x0
5103 */
5104
5105 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (CALL26));
5106
5107 /* Remove the relocation on the BL instruction. */
5108 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5109
5110 bfd_putl32 (0xf9400000, contents + rel->r_offset);
5111
5112 /* We choose to fixup the BL and NOP instructions using the
5113 offset from the second relocation to allow flexibility in
5114 scheduling instructions between the ADD and BL. */
5115 bfd_putl32 (0xd53bd041, contents + rel[1].r_offset);
5116 bfd_putl32 (0x8b000020, contents + rel[1].r_offset + 4);
5117 return bfd_reloc_continue;
5118 }
5119
5120 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5121 case BFD_RELOC_AARCH64_TLSDESC_CALL:
5122 /* GD->IE/LE relaxation:
5123 add x0, x0, #:tlsdesc_lo12:var => nop
5124 blr xd => nop
5125 */
5126 bfd_putl32 (INSN_NOP, contents + rel->r_offset);
5127 return bfd_reloc_ok;
5128
5129 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5130 /* IE->LE relaxation:
5131 adrp xd, :gottprel:var => movz xd, :tprel_g1:var
5132 */
5133 if (is_local)
5134 {
5135 insn = bfd_getl32 (contents + rel->r_offset);
5136 bfd_putl32 (0xd2a00000 | (insn & 0x1f), contents + rel->r_offset);
5137 }
5138 return bfd_reloc_continue;
5139
5140 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5141 /* IE->LE relaxation:
5142 ldr xd, [xm, #:gottprel_lo12:var] => movk xd, :tprel_g0_nc:var
5143 */
5144 if (is_local)
5145 {
5146 insn = bfd_getl32 (contents + rel->r_offset);
5147 bfd_putl32 (0xf2800000 | (insn & 0x1f), contents + rel->r_offset);
5148 }
5149 return bfd_reloc_continue;
5150
5151 default:
5152 return bfd_reloc_continue;
5153 }
5154
5155 return bfd_reloc_ok;
5156 }
5157
5158 /* Relocate an AArch64 ELF section. */
5159
5160 static bfd_boolean
5161 elfNN_aarch64_relocate_section (bfd *output_bfd,
5162 struct bfd_link_info *info,
5163 bfd *input_bfd,
5164 asection *input_section,
5165 bfd_byte *contents,
5166 Elf_Internal_Rela *relocs,
5167 Elf_Internal_Sym *local_syms,
5168 asection **local_sections)
5169 {
5170 Elf_Internal_Shdr *symtab_hdr;
5171 struct elf_link_hash_entry **sym_hashes;
5172 Elf_Internal_Rela *rel;
5173 Elf_Internal_Rela *relend;
5174 const char *name;
5175 struct elf_aarch64_link_hash_table *globals;
5176 bfd_boolean save_addend = FALSE;
5177 bfd_vma addend = 0;
5178
5179 globals = elf_aarch64_hash_table (info);
5180
5181 symtab_hdr = &elf_symtab_hdr (input_bfd);
5182 sym_hashes = elf_sym_hashes (input_bfd);
5183
5184 rel = relocs;
5185 relend = relocs + input_section->reloc_count;
5186 for (; rel < relend; rel++)
5187 {
5188 unsigned int r_type;
5189 bfd_reloc_code_real_type bfd_r_type;
5190 bfd_reloc_code_real_type relaxed_bfd_r_type;
5191 reloc_howto_type *howto;
5192 unsigned long r_symndx;
5193 Elf_Internal_Sym *sym;
5194 asection *sec;
5195 struct elf_link_hash_entry *h;
5196 bfd_vma relocation;
5197 bfd_reloc_status_type r;
5198 arelent bfd_reloc;
5199 char sym_type;
5200 bfd_boolean unresolved_reloc = FALSE;
5201 char *error_message = NULL;
5202
5203 r_symndx = ELFNN_R_SYM (rel->r_info);
5204 r_type = ELFNN_R_TYPE (rel->r_info);
5205
5206 bfd_reloc.howto = elfNN_aarch64_howto_from_type (r_type);
5207 howto = bfd_reloc.howto;
5208
5209 if (howto == NULL)
5210 {
5211 (*_bfd_error_handler)
5212 (_("%B: unrecognized relocation (0x%x) in section `%A'"),
5213 input_bfd, input_section, r_type);
5214 return FALSE;
5215 }
5216 bfd_r_type = elfNN_aarch64_bfd_reloc_from_howto (howto);
5217
5218 h = NULL;
5219 sym = NULL;
5220 sec = NULL;
5221
5222 if (r_symndx < symtab_hdr->sh_info)
5223 {
5224 sym = local_syms + r_symndx;
5225 sym_type = ELFNN_ST_TYPE (sym->st_info);
5226 sec = local_sections[r_symndx];
5227
5228 /* An object file might have a reference to a local
5229 undefined symbol. This is a daft object file, but we
5230 should at least do something about it. */
5231 if (r_type != R_AARCH64_NONE && r_type != R_AARCH64_NULL
5232 && bfd_is_und_section (sec)
5233 && ELF_ST_BIND (sym->st_info) != STB_WEAK)
5234 {
5235 if (!info->callbacks->undefined_symbol
5236 (info, bfd_elf_string_from_elf_section
5237 (input_bfd, symtab_hdr->sh_link, sym->st_name),
5238 input_bfd, input_section, rel->r_offset, TRUE))
5239 return FALSE;
5240 }
5241
5242 relocation = _bfd_elf_rela_local_sym (output_bfd, sym, &sec, rel);
5243
5244 /* Relocate against local STT_GNU_IFUNC symbol. */
5245 if (!info->relocatable
5246 && ELF_ST_TYPE (sym->st_info) == STT_GNU_IFUNC)
5247 {
5248 h = elfNN_aarch64_get_local_sym_hash (globals, input_bfd,
5249 rel, FALSE);
5250 if (h == NULL)
5251 abort ();
5252
5253 /* Set STT_GNU_IFUNC symbol value. */
5254 h->root.u.def.value = sym->st_value;
5255 h->root.u.def.section = sec;
5256 }
5257 }
5258 else
5259 {
5260 bfd_boolean warned, ignored;
5261
5262 RELOC_FOR_GLOBAL_SYMBOL (info, input_bfd, input_section, rel,
5263 r_symndx, symtab_hdr, sym_hashes,
5264 h, sec, relocation,
5265 unresolved_reloc, warned, ignored);
5266
5267 sym_type = h->type;
5268 }
5269
5270 if (sec != NULL && discarded_section (sec))
5271 RELOC_AGAINST_DISCARDED_SECTION (info, input_bfd, input_section,
5272 rel, 1, relend, howto, 0, contents);
5273
5274 if (info->relocatable)
5275 continue;
5276
5277 if (h != NULL)
5278 name = h->root.root.string;
5279 else
5280 {
5281 name = (bfd_elf_string_from_elf_section
5282 (input_bfd, symtab_hdr->sh_link, sym->st_name));
5283 if (name == NULL || *name == '\0')
5284 name = bfd_section_name (input_bfd, sec);
5285 }
5286
5287 if (r_symndx != 0
5288 && r_type != R_AARCH64_NONE
5289 && r_type != R_AARCH64_NULL
5290 && (h == NULL
5291 || h->root.type == bfd_link_hash_defined
5292 || h->root.type == bfd_link_hash_defweak)
5293 && IS_AARCH64_TLS_RELOC (bfd_r_type) != (sym_type == STT_TLS))
5294 {
5295 (*_bfd_error_handler)
5296 ((sym_type == STT_TLS
5297 ? _("%B(%A+0x%lx): %s used with TLS symbol %s")
5298 : _("%B(%A+0x%lx): %s used with non-TLS symbol %s")),
5299 input_bfd,
5300 input_section, (long) rel->r_offset, howto->name, name);
5301 }
5302
5303 /* We relax only if we can see that there can be a valid transition
5304 from one reloc type to another.
5305 We call elfNN_aarch64_final_link_relocate unless we're completely
5306 done, i.e., the relaxation produced the final output we want. */
5307
5308 relaxed_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type,
5309 h, r_symndx);
5310 if (relaxed_bfd_r_type != bfd_r_type)
5311 {
5312 bfd_r_type = relaxed_bfd_r_type;
5313 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
5314 BFD_ASSERT (howto != NULL);
5315 r_type = howto->type;
5316 r = elfNN_aarch64_tls_relax (globals, input_bfd, contents, rel, h);
5317 unresolved_reloc = 0;
5318 }
5319 else
5320 r = bfd_reloc_continue;
5321
5322 /* There may be multiple consecutive relocations for the
5323 same offset. In that case we are supposed to treat the
5324 output of each relocation as the addend for the next. */
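/* Roughly: if relocs R1 and R2 share the same r_offset, the value
   computed for R1 is carried forward in ADDEND (via save_addend) and
   fed into R2, instead of being written straight to the section.  */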
5325 if (rel + 1 < relend
5326 && rel->r_offset == rel[1].r_offset
5327 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NONE
5328 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NULL)
5329 save_addend = TRUE;
5330 else
5331 save_addend = FALSE;
5332
5333 if (r == bfd_reloc_continue)
5334 r = elfNN_aarch64_final_link_relocate (howto, input_bfd, output_bfd,
5335 input_section, contents, rel,
5336 relocation, info, sec,
5337 h, &unresolved_reloc,
5338 save_addend, &addend, sym);
5339
5340 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
5341 {
5342 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5343 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5344 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5345 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5346 {
5347 bfd_boolean need_relocs = FALSE;
5348 bfd_byte *loc;
5349 int indx;
5350 bfd_vma off;
5351
5352 off = symbol_got_offset (input_bfd, h, r_symndx);
5353 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5354
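/* The doubled GOT entry reserved for a GD access is filled in just
   below: GOT[off] holds the module id (or is covered by an
   AARCH64_R (TLS_DTPMOD) reloc) and GOT[off + GOT_ENTRY_SIZE] holds
   the dtp-relative offset (or is covered by AARCH64_R (TLS_DTPREL)).  */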
5355 need_relocs =
5356 (info->shared || indx != 0) &&
5357 (h == NULL
5358 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5359 || h->root.type != bfd_link_hash_undefweak);
5360
5361 BFD_ASSERT (globals->root.srelgot != NULL);
5362
5363 if (need_relocs)
5364 {
5365 Elf_Internal_Rela rela;
5366 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPMOD));
5367 rela.r_addend = 0;
5368 rela.r_offset = globals->root.sgot->output_section->vma +
5369 globals->root.sgot->output_offset + off;
5370
5371
5372 loc = globals->root.srelgot->contents;
5373 loc += globals->root.srelgot->reloc_count++
5374 * RELOC_SIZE (htab);
5375 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5376
5377 if (indx == 0)
5378 {
5379 bfd_put_NN (output_bfd,
5380 relocation - dtpoff_base (info),
5381 globals->root.sgot->contents + off
5382 + GOT_ENTRY_SIZE);
5383 }
5384 else
5385 {
5386 /* This TLS symbol is global. We emit a
5387 relocation to fixup the tls offset at load
5388 time. */
5389 rela.r_info =
5390 ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPREL));
5391 rela.r_addend = 0;
5392 rela.r_offset =
5393 (globals->root.sgot->output_section->vma
5394 + globals->root.sgot->output_offset + off
5395 + GOT_ENTRY_SIZE);
5396
5397 loc = globals->root.srelgot->contents;
5398 loc += globals->root.srelgot->reloc_count++
5399 * RELOC_SIZE (globals);
5400 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5401 bfd_put_NN (output_bfd, (bfd_vma) 0,
5402 globals->root.sgot->contents + off
5403 + GOT_ENTRY_SIZE);
5404 }
5405 }
5406 else
5407 {
5408 bfd_put_NN (output_bfd, (bfd_vma) 1,
5409 globals->root.sgot->contents + off);
5410 bfd_put_NN (output_bfd,
5411 relocation - dtpoff_base (info),
5412 globals->root.sgot->contents + off
5413 + GOT_ENTRY_SIZE);
5414 }
5415
5416 symbol_got_offset_mark (input_bfd, h, r_symndx);
5417 }
5418 break;
5419
5420 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5421 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5422 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5423 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5424 {
5425 bfd_boolean need_relocs = FALSE;
5426 bfd_byte *loc;
5427 int indx;
5428 bfd_vma off;
5429
5430 off = symbol_got_offset (input_bfd, h, r_symndx);
5431
5432 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5433
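/* An IE access needs only a single GOT slot: it either receives the
   tp-relative offset directly, or is covered by an
   AARCH64_R (TLS_TPREL) reloc emitted just below for the loader to
   resolve.  */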
5434 need_relocs =
5435 (info->shared || indx != 0) &&
5436 (h == NULL
5437 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5438 || h->root.type != bfd_link_hash_undefweak);
5439
5440 BFD_ASSERT (globals->root.srelgot != NULL);
5441
5442 if (need_relocs)
5443 {
5444 Elf_Internal_Rela rela;
5445
5446 if (indx == 0)
5447 rela.r_addend = relocation - dtpoff_base (info);
5448 else
5449 rela.r_addend = 0;
5450
5451 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_TPREL));
5452 rela.r_offset = globals->root.sgot->output_section->vma +
5453 globals->root.sgot->output_offset + off;
5454
5455 loc = globals->root.srelgot->contents;
5456 loc += globals->root.srelgot->reloc_count++
5457 * RELOC_SIZE (htab);
5458
5459 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5460
5461 bfd_put_NN (output_bfd, rela.r_addend,
5462 globals->root.sgot->contents + off);
5463 }
5464 else
5465 bfd_put_NN (output_bfd, relocation - tpoff_base (info),
5466 globals->root.sgot->contents + off);
5467
5468 symbol_got_offset_mark (input_bfd, h, r_symndx);
5469 }
5470 break;
5471
5472 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5473 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5474 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5475 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5476 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5477 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5478 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5479 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5480 break;
5481
5482 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5483 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5484 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5485 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5486 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5487 if (! symbol_tlsdesc_got_offset_mark_p (input_bfd, h, r_symndx))
5488 {
5489 bfd_boolean need_relocs = FALSE;
5490 int indx = h && h->dynindx != -1 ? h->dynindx : 0;
5491 bfd_vma off = symbol_tlsdesc_got_offset (input_bfd, h, r_symndx);
5492
5493 need_relocs = (h == NULL
5494 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5495 || h->root.type != bfd_link_hash_undefweak);
5496
5497 BFD_ASSERT (globals->root.srelgot != NULL);
5498 BFD_ASSERT (globals->root.sgot != NULL);
5499
5500 if (need_relocs)
5501 {
5502 bfd_byte *loc;
5503 Elf_Internal_Rela rela;
5504 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLSDESC));
5505
5506 rela.r_addend = 0;
5507 rela.r_offset = (globals->root.sgotplt->output_section->vma
5508 + globals->root.sgotplt->output_offset
5509 + off + globals->sgotplt_jump_table_size);
5510
5511 if (indx == 0)
5512 rela.r_addend = relocation - dtpoff_base (info);
5513
5514 /* Allocate the next available slot in the PLT reloc
5515 section to hold our R_AARCH64_TLSDESC; the next
5516 available slot is determined from reloc_count,
5517 which we step. Note that reloc_count was
5518 artificially moved down while allocating slots for
5519 the real PLT relocs, so that all of the PLT relocs
5520 fit above the initial reloc_count and the extra
5521 entries fit below. */
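/* Schematically, .rela.plt therefore ends up as the real PLT relocs
   (AARCH64_R (JUMP_SLOT)), placed by PLT index, followed by the
   R_AARCH64_TLSDESC entries appended here via reloc_count++.  */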
5522 loc = globals->root.srelplt->contents;
5523 loc += globals->root.srelplt->reloc_count++
5524 * RELOC_SIZE (globals);
5525
5526 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5527
5528 bfd_put_NN (output_bfd, (bfd_vma) 0,
5529 globals->root.sgotplt->contents + off +
5530 globals->sgotplt_jump_table_size);
5531 bfd_put_NN (output_bfd, (bfd_vma) 0,
5532 globals->root.sgotplt->contents + off +
5533 globals->sgotplt_jump_table_size +
5534 GOT_ENTRY_SIZE);
5535 }
5536
5537 symbol_tlsdesc_got_offset_mark (input_bfd, h, r_symndx);
5538 }
5539 break;
5540 default:
5541 break;
5542 }
5543
5544 if (!save_addend)
5545 addend = 0;
5546
5547
5548 /* Dynamic relocs are not propagated for SEC_DEBUGGING sections
5549 because such sections are not SEC_ALLOC and thus ld.so will
5550 not process them. */
5551 if (unresolved_reloc
5552 && !((input_section->flags & SEC_DEBUGGING) != 0
5553 && h->def_dynamic)
5554 && _bfd_elf_section_offset (output_bfd, info, input_section,
5555 rel->r_offset) != (bfd_vma) - 1)
5556 {
5557 (*_bfd_error_handler)
5558 (_
5559 ("%B(%A+0x%lx): unresolvable %s relocation against symbol `%s'"),
5560 input_bfd, input_section, (long) rel->r_offset, howto->name,
5561 h->root.root.string);
5562 return FALSE;
5563 }
5564
5565 if (r != bfd_reloc_ok && r != bfd_reloc_continue)
5566 {
5567 switch (r)
5568 {
5569 case bfd_reloc_overflow:
5570 /* If the overflowing reloc was to an undefined symbol,
5571 we have already printed one error message and there
5572 is no point complaining again. */
5573 if ((!h ||
5574 h->root.type != bfd_link_hash_undefined)
5575 && (!((*info->callbacks->reloc_overflow)
5576 (info, (h ? &h->root : NULL), name, howto->name,
5577 (bfd_vma) 0, input_bfd, input_section,
5578 rel->r_offset))))
5579 return FALSE;
5580 break;
5581
5582 case bfd_reloc_undefined:
5583 if (!((*info->callbacks->undefined_symbol)
5584 (info, name, input_bfd, input_section,
5585 rel->r_offset, TRUE)))
5586 return FALSE;
5587 break;
5588
5589 case bfd_reloc_outofrange:
5590 error_message = _("out of range");
5591 goto common_error;
5592
5593 case bfd_reloc_notsupported:
5594 error_message = _("unsupported relocation");
5595 goto common_error;
5596
5597 case bfd_reloc_dangerous:
5598 /* error_message should already be set. */
5599 goto common_error;
5600
5601 default:
5602 error_message = _("unknown error");
5603 /* Fall through. */
5604
5605 common_error:
5606 BFD_ASSERT (error_message != NULL);
5607 if (!((*info->callbacks->reloc_dangerous)
5608 (info, error_message, input_bfd, input_section,
5609 rel->r_offset)))
5610 return FALSE;
5611 break;
5612 }
5613 }
5614 }
5615
5616 return TRUE;
5617 }
5618
5619 /* Set the right machine number. */
5620
5621 static bfd_boolean
5622 elfNN_aarch64_object_p (bfd *abfd)
5623 {
5624 #if ARCH_SIZE == 32
5625 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64_ilp32);
5626 #else
5627 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64);
5628 #endif
5629 return TRUE;
5630 }
5631
5632 /* Function to keep AArch64 specific flags in the ELF header. */
5633
5634 static bfd_boolean
5635 elfNN_aarch64_set_private_flags (bfd *abfd, flagword flags)
5636 {
5637 if (elf_flags_init (abfd) && elf_elfheader (abfd)->e_flags != flags)
5638 {
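/* Nothing to do: if the flags were already initialised and differ,
   the new value is simply ignored here.  */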
5639 }
5640 else
5641 {
5642 elf_elfheader (abfd)->e_flags = flags;
5643 elf_flags_init (abfd) = TRUE;
5644 }
5645
5646 return TRUE;
5647 }
5648
5649 /* Merge backend specific data from an object file to the output
5650 object file when linking. */
5651
5652 static bfd_boolean
5653 elfNN_aarch64_merge_private_bfd_data (bfd *ibfd, bfd *obfd)
5654 {
5655 flagword out_flags;
5656 flagword in_flags;
5657 bfd_boolean flags_compatible = TRUE;
5658 asection *sec;
5659
5660 /* Check if we have the same endianness. */
5661 if (!_bfd_generic_verify_endian_match (ibfd, obfd))
5662 return FALSE;
5663
5664 if (!is_aarch64_elf (ibfd) || !is_aarch64_elf (obfd))
5665 return TRUE;
5666
5667 /* The input BFD must have had its flags initialised. */
5668 /* The following seems bogus to me -- The flags are initialized in
5669 the assembler but I don't think an elf_flags_init field is
5670 written into the object. */
5671 /* BFD_ASSERT (elf_flags_init (ibfd)); */
5672
5673 in_flags = elf_elfheader (ibfd)->e_flags;
5674 out_flags = elf_elfheader (obfd)->e_flags;
5675
5676 if (!elf_flags_init (obfd))
5677 {
5678 /* If the input is the default architecture and had the default
5679 flags then do not bother setting the flags for the output
5680 architecture, instead allow future merges to do this. If no
5681 future merges ever set these flags then they will retain their
5682 uninitialised values which, surprise surprise, correspond
5683 to the default values. */
5684 if (bfd_get_arch_info (ibfd)->the_default
5685 && elf_elfheader (ibfd)->e_flags == 0)
5686 return TRUE;
5687
5688 elf_flags_init (obfd) = TRUE;
5689 elf_elfheader (obfd)->e_flags = in_flags;
5690
5691 if (bfd_get_arch (obfd) == bfd_get_arch (ibfd)
5692 && bfd_get_arch_info (obfd)->the_default)
5693 return bfd_set_arch_mach (obfd, bfd_get_arch (ibfd),
5694 bfd_get_mach (ibfd));
5695
5696 return TRUE;
5697 }
5698
5699 /* Identical flags must be compatible. */
5700 if (in_flags == out_flags)
5701 return TRUE;
5702
5703 /* Check to see if the input BFD actually contains any sections. If
5704 not, its flags may not have been initialised either, but it
5705 cannot actually cause any incompatibility. Do not short-circuit
5706 dynamic objects; their section list may be emptied by
5707 elf_link_add_object_symbols.
5708
5709 Also check to see if there are no code sections in the input.
5710 In this case there is no need to check for code specific flags.
5711 XXX - do we need to worry about floating-point format compatibility
5712 in data sections ? */
5713 if (!(ibfd->flags & DYNAMIC))
5714 {
5715 bfd_boolean null_input_bfd = TRUE;
5716 bfd_boolean only_data_sections = TRUE;
5717
5718 for (sec = ibfd->sections; sec != NULL; sec = sec->next)
5719 {
5720 if ((bfd_get_section_flags (ibfd, sec)
5721 & (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5722 == (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5723 only_data_sections = FALSE;
5724
5725 null_input_bfd = FALSE;
5726 break;
5727 }
5728
5729 if (null_input_bfd || only_data_sections)
5730 return TRUE;
5731 }
5732
5733 return flags_compatible;
5734 }
5735
5736 /* Display the flags field. */
5737
5738 static bfd_boolean
5739 elfNN_aarch64_print_private_bfd_data (bfd *abfd, void *ptr)
5740 {
5741 FILE *file = (FILE *) ptr;
5742 unsigned long flags;
5743
5744 BFD_ASSERT (abfd != NULL && ptr != NULL);
5745
5746 /* Print normal ELF private data. */
5747 _bfd_elf_print_private_bfd_data (abfd, ptr);
5748
5749 flags = elf_elfheader (abfd)->e_flags;
5750 /* Ignore init flag - it may not be set, despite the flags field
5751 containing valid data. */
5752
5753 /* xgettext:c-format */
5754 fprintf (file, _("private flags = %lx:"), elf_elfheader (abfd)->e_flags);
5755
5756 if (flags)
5757 fprintf (file, _("<Unrecognised flag bits set>"));
5758
5759 fputc ('\n', file);
5760
5761 return TRUE;
5762 }
5763
5764 /* Update the got entry reference counts for the section being removed. */
5765
5766 static bfd_boolean
5767 elfNN_aarch64_gc_sweep_hook (bfd *abfd,
5768 struct bfd_link_info *info,
5769 asection *sec,
5770 const Elf_Internal_Rela * relocs)
5771 {
5772 struct elf_aarch64_link_hash_table *htab;
5773 Elf_Internal_Shdr *symtab_hdr;
5774 struct elf_link_hash_entry **sym_hashes;
5775 struct elf_aarch64_local_symbol *locals;
5776 const Elf_Internal_Rela *rel, *relend;
5777
5778 if (info->relocatable)
5779 return TRUE;
5780
5781 htab = elf_aarch64_hash_table (info);
5782
5783 if (htab == NULL)
5784 return FALSE;
5785
5786 elf_section_data (sec)->local_dynrel = NULL;
5787
5788 symtab_hdr = &elf_symtab_hdr (abfd);
5789 sym_hashes = elf_sym_hashes (abfd);
5790
5791 locals = elf_aarch64_locals (abfd);
5792
5793 relend = relocs + sec->reloc_count;
5794 for (rel = relocs; rel < relend; rel++)
5795 {
5796 unsigned long r_symndx;
5797 unsigned int r_type;
5798 struct elf_link_hash_entry *h = NULL;
5799
5800 r_symndx = ELFNN_R_SYM (rel->r_info);
5801
5802 if (r_symndx >= symtab_hdr->sh_info)
5803 {
5804
5805 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
5806 while (h->root.type == bfd_link_hash_indirect
5807 || h->root.type == bfd_link_hash_warning)
5808 h = (struct elf_link_hash_entry *) h->root.u.i.link;
5809 }
5810 else
5811 {
5812 Elf_Internal_Sym *isym;
5813
5814 /* A local symbol. */
5815 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
5816 abfd, r_symndx);
5817
5818 /* Check relocation against local STT_GNU_IFUNC symbol. */
5819 if (isym != NULL
5820 && ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
5821 {
5822 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel, FALSE);
5823 if (h == NULL)
5824 abort ();
5825 }
5826 }
5827
5828 if (h)
5829 {
5830 struct elf_aarch64_link_hash_entry *eh;
5831 struct elf_dyn_relocs **pp;
5832 struct elf_dyn_relocs *p;
5833
5834 eh = (struct elf_aarch64_link_hash_entry *) h;
5835
5836 for (pp = &eh->dyn_relocs; (p = *pp) != NULL; pp = &p->next)
5837 if (p->sec == sec)
5838 {
5839 /* Everything must go for SEC. */
5840 *pp = p->next;
5841 break;
5842 }
5843 }
5844
5845 r_type = ELFNN_R_TYPE (rel->r_info);
5846 switch (aarch64_tls_transition (abfd, info, r_type, h, r_symndx))
5847 {
5848 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
5849 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
5850 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
5851 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
5852 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5853 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5854 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5855 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
5856 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
5857 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5858 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5859 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5860 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5861 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5862 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
5863 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
5864 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5865 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5866 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5867 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5868 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5869 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5870 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5871 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5872 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5873 if (h != NULL)
5874 {
5875 if (h->got.refcount > 0)
5876 h->got.refcount -= 1;
5877
5878 if (h->type == STT_GNU_IFUNC)
5879 {
5880 if (h->plt.refcount > 0)
5881 h->plt.refcount -= 1;
5882 }
5883 }
5884 else if (locals != NULL)
5885 {
5886 if (locals[r_symndx].got_refcount > 0)
5887 locals[r_symndx].got_refcount -= 1;
5888 }
5889 break;
5890
5891 case BFD_RELOC_AARCH64_CALL26:
5892 case BFD_RELOC_AARCH64_JUMP26:
5893 /* If this is a local symbol then we resolve it
5894 directly without creating a PLT entry. */
5895 if (h == NULL)
5896 continue;
5897
5898 if (h->plt.refcount > 0)
5899 h->plt.refcount -= 1;
5900 break;
5901
5902 case BFD_RELOC_AARCH64_MOVW_G0_NC:
5903 case BFD_RELOC_AARCH64_MOVW_G1_NC:
5904 case BFD_RELOC_AARCH64_MOVW_G2_NC:
5905 case BFD_RELOC_AARCH64_MOVW_G3:
5906 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
5907 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
5908 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
5909 case BFD_RELOC_AARCH64_NN:
5910 if (h != NULL && info->executable)
5911 {
5912 if (h->plt.refcount > 0)
5913 h->plt.refcount -= 1;
5914 }
5915 break;
5916
5917 default:
5918 break;
5919 }
5920 }
5921
5922 return TRUE;
5923 }
5924
5925 /* Adjust a symbol defined by a dynamic object and referenced by a
5926 regular object. The current definition is in some section of the
5927 dynamic object, but we're not including those sections. We have to
5928 change the definition to something the rest of the link can
5929 understand. */
5930
5931 static bfd_boolean
5932 elfNN_aarch64_adjust_dynamic_symbol (struct bfd_link_info *info,
5933 struct elf_link_hash_entry *h)
5934 {
5935 struct elf_aarch64_link_hash_table *htab;
5936 asection *s;
5937
5938 /* If this is a function, put it in the procedure linkage table. We
5939 will fill in the contents of the procedure linkage table later,
5940 when we know the address of the .got section. */
5941 if (h->type == STT_FUNC || h->type == STT_GNU_IFUNC || h->needs_plt)
5942 {
5943 if (h->plt.refcount <= 0
5944 || (h->type != STT_GNU_IFUNC
5945 && (SYMBOL_CALLS_LOCAL (info, h)
5946 || (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT
5947 && h->root.type == bfd_link_hash_undefweak))))
5948 {
5949 /* This case can occur if we saw a CALL26 reloc in
5950 an input file, but the symbol wasn't referred to
5951 by a dynamic object, or all references were
5952 garbage collected. In that case we can resolve the
5953 branch directly and no PLT entry is needed. */
5954 h->plt.offset = (bfd_vma) - 1;
5955 h->needs_plt = 0;
5956 }
5957
5958 return TRUE;
5959 }
5960 else
5961 /* It's possible that we incorrectly decided a .plt reloc was
5962 needed for a PC-relative reloc to a non-function sym in
5963 check_relocs. We can't decide accurately between function and
5964 non-function syms in check_relocs; objects loaded later in
5965 the link may change h->type. So fix it now. */
5966 h->plt.offset = (bfd_vma) - 1;
5967
5968
5969 /* If this is a weak symbol, and there is a real definition, the
5970 processor independent code will have arranged for us to see the
5971 real definition first, and we can just use the same value. */
5972 if (h->u.weakdef != NULL)
5973 {
5974 BFD_ASSERT (h->u.weakdef->root.type == bfd_link_hash_defined
5975 || h->u.weakdef->root.type == bfd_link_hash_defweak);
5976 h->root.u.def.section = h->u.weakdef->root.u.def.section;
5977 h->root.u.def.value = h->u.weakdef->root.u.def.value;
5978 if (ELIMINATE_COPY_RELOCS || info->nocopyreloc)
5979 h->non_got_ref = h->u.weakdef->non_got_ref;
5980 return TRUE;
5981 }
5982
5983 /* If we are creating a shared library, we must presume that the
5984 only references to the symbol are via the global offset table.
5985 For such cases we need not do anything here; the relocations will
5986 be handled correctly by relocate_section. */
5987 if (info->shared)
5988 return TRUE;
5989
5990 /* If there are no references to this symbol that do not use the
5991 GOT, we don't need to generate a copy reloc. */
5992 if (!h->non_got_ref)
5993 return TRUE;
5994
5995 /* If -z nocopyreloc was given, we won't generate them either. */
5996 if (info->nocopyreloc)
5997 {
5998 h->non_got_ref = 0;
5999 return TRUE;
6000 }
6001
6002 /* We must allocate the symbol in our .dynbss section, which will
6003 become part of the .bss section of the executable. There will be
6004 an entry for this symbol in the .dynsym section. The dynamic
6005 object will contain position independent code, so all references
6006 from the dynamic object to this symbol will go through the global
6007 offset table. The dynamic linker will use the .dynsym entry to
6008 determine the address it must put in the global offset table, so
6009 both the dynamic object and the regular object will refer to the
6010 same memory location for the variable. */
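/* For instance (purely illustrative), a variable defined in a shared
   library but referenced directly from a non-PIC executable gets its
   storage duplicated in the executable's .dynbss, and the copy reloc
   emitted below tells the loader to copy the initial value there.  */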
6011
6012 htab = elf_aarch64_hash_table (info);
6013
6014 /* We must generate an R_AARCH64_COPY reloc to tell the dynamic linker
6015 to copy the initial value out of the dynamic object and into the
6016 runtime process image. */
6017 if ((h->root.u.def.section->flags & SEC_ALLOC) != 0 && h->size != 0)
6018 {
6019 htab->srelbss->size += RELOC_SIZE (htab);
6020 h->needs_copy = 1;
6021 }
6022
6023 s = htab->sdynbss;
6024
6025 return _bfd_elf_adjust_dynamic_copy (info, h, s);
6026
6027 }
6028
6029 static bfd_boolean
6030 elfNN_aarch64_allocate_local_symbols (bfd *abfd, unsigned number)
6031 {
6032 struct elf_aarch64_local_symbol *locals;
6033 locals = elf_aarch64_locals (abfd);
6034 if (locals == NULL)
6035 {
6036 locals = (struct elf_aarch64_local_symbol *)
6037 bfd_zalloc (abfd, number * sizeof (struct elf_aarch64_local_symbol));
6038 if (locals == NULL)
6039 return FALSE;
6040 elf_aarch64_locals (abfd) = locals;
6041 }
6042 return TRUE;
6043 }
6044
6045 /* Create the .got section to hold the global offset table. */
6046
6047 static bfd_boolean
6048 aarch64_elf_create_got_section (bfd *abfd, struct bfd_link_info *info)
6049 {
6050 const struct elf_backend_data *bed = get_elf_backend_data (abfd);
6051 flagword flags;
6052 asection *s;
6053 struct elf_link_hash_entry *h;
6054 struct elf_link_hash_table *htab = elf_hash_table (info);
6055
6056 /* This function may be called more than once. */
6057 s = bfd_get_linker_section (abfd, ".got");
6058 if (s != NULL)
6059 return TRUE;
6060
6061 flags = bed->dynamic_sec_flags;
6062
6063 s = bfd_make_section_anyway_with_flags (abfd,
6064 (bed->rela_plts_and_copies_p
6065 ? ".rela.got" : ".rel.got"),
6066 (bed->dynamic_sec_flags
6067 | SEC_READONLY));
6068 if (s == NULL
6069 || ! bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6070 return FALSE;
6071 htab->srelgot = s;
6072
6073 s = bfd_make_section_anyway_with_flags (abfd, ".got", flags);
6074 if (s == NULL
6075 || !bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6076 return FALSE;
6077 htab->sgot = s;
6078 htab->sgot->size += GOT_ENTRY_SIZE;
6079
6080 if (bed->want_got_sym)
6081 {
6082 /* Define the symbol _GLOBAL_OFFSET_TABLE_ at the start of the .got
6083 (or .got.plt) section. We don't do this in the linker script
6084 because we don't want to define the symbol if we are not creating
6085 a global offset table. */
6086 h = _bfd_elf_define_linkage_sym (abfd, info, s,
6087 "_GLOBAL_OFFSET_TABLE_");
6088 elf_hash_table (info)->hgot = h;
6089 if (h == NULL)
6090 return FALSE;
6091 }
6092
6093 if (bed->want_got_plt)
6094 {
6095 s = bfd_make_section_anyway_with_flags (abfd, ".got.plt", flags);
6096 if (s == NULL
6097 || !bfd_set_section_alignment (abfd, s,
6098 bed->s->log_file_align))
6099 return FALSE;
6100 htab->sgotplt = s;
6101 }
6102
6103 /* The first bit of the global offset table is the header. */
6104 s->size += bed->got_header_size;
6105
6106 return TRUE;
6107 }
6108
6109 /* Look through the relocs for a section during the first phase. */
6110
6111 static bfd_boolean
6112 elfNN_aarch64_check_relocs (bfd *abfd, struct bfd_link_info *info,
6113 asection *sec, const Elf_Internal_Rela *relocs)
6114 {
6115 Elf_Internal_Shdr *symtab_hdr;
6116 struct elf_link_hash_entry **sym_hashes;
6117 const Elf_Internal_Rela *rel;
6118 const Elf_Internal_Rela *rel_end;
6119 asection *sreloc;
6120
6121 struct elf_aarch64_link_hash_table *htab;
6122
6123 if (info->relocatable)
6124 return TRUE;
6125
6126 BFD_ASSERT (is_aarch64_elf (abfd));
6127
6128 htab = elf_aarch64_hash_table (info);
6129 sreloc = NULL;
6130
6131 symtab_hdr = &elf_symtab_hdr (abfd);
6132 sym_hashes = elf_sym_hashes (abfd);
6133
6134 rel_end = relocs + sec->reloc_count;
6135 for (rel = relocs; rel < rel_end; rel++)
6136 {
6137 struct elf_link_hash_entry *h;
6138 unsigned long r_symndx;
6139 unsigned int r_type;
6140 bfd_reloc_code_real_type bfd_r_type;
6141 Elf_Internal_Sym *isym;
6142
6143 r_symndx = ELFNN_R_SYM (rel->r_info);
6144 r_type = ELFNN_R_TYPE (rel->r_info);
6145
6146 if (r_symndx >= NUM_SHDR_ENTRIES (symtab_hdr))
6147 {
6148 (*_bfd_error_handler) (_("%B: bad symbol index: %d"), abfd,
6149 r_symndx);
6150 return FALSE;
6151 }
6152
6153 if (r_symndx < symtab_hdr->sh_info)
6154 {
6155 /* A local symbol. */
6156 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6157 abfd, r_symndx);
6158 if (isym == NULL)
6159 return FALSE;
6160
6161 /* Check relocation against local STT_GNU_IFUNC symbol. */
6162 if (ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
6163 {
6164 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel,
6165 TRUE);
6166 if (h == NULL)
6167 return FALSE;
6168
6169 /* Fake a STT_GNU_IFUNC symbol. */
6170 h->type = STT_GNU_IFUNC;
6171 h->def_regular = 1;
6172 h->ref_regular = 1;
6173 h->forced_local = 1;
6174 h->root.type = bfd_link_hash_defined;
6175 }
6176 else
6177 h = NULL;
6178 }
6179 else
6180 {
6181 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
6182 while (h->root.type == bfd_link_hash_indirect
6183 || h->root.type == bfd_link_hash_warning)
6184 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6185
6186 /* PR15323, ref flags aren't set for references in the same
6187 object. */
6188 h->root.non_ir_ref = 1;
6189 }
6190
6191 /* Could be done earlier, if h were already available. */
6192 bfd_r_type = aarch64_tls_transition (abfd, info, r_type, h, r_symndx);
6193
6194 if (h != NULL)
6195 {
6196 /* Create the ifunc sections for static executables. If we
6197 never see an indirect function symbol nor are we building
6198 a static executable, those sections will be empty and
6199 won't appear in the output. */
6200 switch (bfd_r_type)
6201 {
6202 default:
6203 break;
6204
6205 case BFD_RELOC_AARCH64_NN:
6206 case BFD_RELOC_AARCH64_CALL26:
6207 case BFD_RELOC_AARCH64_JUMP26:
6208 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6209 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6210 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6211 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6212 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6213 case BFD_RELOC_AARCH64_ADD_LO12:
6214 if (htab->root.dynobj == NULL)
6215 htab->root.dynobj = abfd;
6216 if (!_bfd_elf_create_ifunc_sections (htab->root.dynobj, info))
6217 return FALSE;
6218 break;
6219 }
6220
6221 /* It is referenced by a non-shared object. */
6222 h->ref_regular = 1;
6223 h->root.non_ir_ref = 1;
6224 }
6225
6226 switch (bfd_r_type)
6227 {
6228 case BFD_RELOC_AARCH64_NN:
6229
6230 /* We don't need to handle relocs into sections not going into
6231 the "real" output. */
6232 if ((sec->flags & SEC_ALLOC) == 0)
6233 break;
6234
6235 if (h != NULL)
6236 {
6237 if (!info->shared)
6238 h->non_got_ref = 1;
6239
6240 h->plt.refcount += 1;
6241 h->pointer_equality_needed = 1;
6242 }
6243
6244 /* No need to do anything if we're not creating a shared
6245 object. */
6246 if (! info->shared)
6247 break;
6248
6249 {
6250 struct elf_dyn_relocs *p;
6251 struct elf_dyn_relocs **head;
6252
6253 /* We must copy these reloc types into the output file.
6254 Create a reloc section in dynobj and make room for
6255 this reloc. */
6256 if (sreloc == NULL)
6257 {
6258 if (htab->root.dynobj == NULL)
6259 htab->root.dynobj = abfd;
6260
6261 sreloc = _bfd_elf_make_dynamic_reloc_section
6262 (sec, htab->root.dynobj, LOG_FILE_ALIGN, abfd, /*rela? */ TRUE);
6263
6264 if (sreloc == NULL)
6265 return FALSE;
6266 }
6267
6268 /* If this is a global symbol, we count the number of
6269 relocations we need for this symbol. */
6270 if (h != NULL)
6271 {
6272 struct elf_aarch64_link_hash_entry *eh;
6273 eh = (struct elf_aarch64_link_hash_entry *) h;
6274 head = &eh->dyn_relocs;
6275 }
6276 else
6277 {
6278 /* Track dynamic relocs needed for local syms too.
6279 We really need local syms available to do this
6280 easily. Oh well. */
6281
6282 asection *s;
6283 void **vpp;
6284
6285 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6286 abfd, r_symndx);
6287 if (isym == NULL)
6288 return FALSE;
6289
6290 s = bfd_section_from_elf_index (abfd, isym->st_shndx);
6291 if (s == NULL)
6292 s = sec;
6293
6294 /* Beware of type punned pointers vs strict aliasing
6295 rules. */
6296 vpp = &(elf_section_data (s)->local_dynrel);
6297 head = (struct elf_dyn_relocs **) vpp;
6298 }
6299
6300 p = *head;
6301 if (p == NULL || p->sec != sec)
6302 {
6303 bfd_size_type amt = sizeof *p;
6304 p = ((struct elf_dyn_relocs *)
6305 bfd_zalloc (htab->root.dynobj, amt));
6306 if (p == NULL)
6307 return FALSE;
6308 p->next = *head;
6309 *head = p;
6310 p->sec = sec;
6311 }
6312
6313 p->count += 1;
6314
6315 }
6316 break;
6317
6318 /* RR: We probably want to keep a consistency check that
6319 there are no dangling GOT_PAGE relocs. */
6320 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6321 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6322 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6323 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6324 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
6325 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
6326 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
6327 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
6328 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
6329 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
6330 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
6331 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
6332 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
6333 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
6334 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
6335 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
6336 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
6337 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
6338 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
6339 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
6340 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
6341 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
6342 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
6343 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
6344 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
6345 {
6346 unsigned got_type;
6347 unsigned old_got_type;
6348
6349 got_type = aarch64_reloc_got_type (bfd_r_type);
6350
6351 if (h)
6352 {
6353 h->got.refcount += 1;
6354 old_got_type = elf_aarch64_hash_entry (h)->got_type;
6355 }
6356 else
6357 {
6358 struct elf_aarch64_local_symbol *locals;
6359
6360 if (!elfNN_aarch64_allocate_local_symbols
6361 (abfd, symtab_hdr->sh_info))
6362 return FALSE;
6363
6364 locals = elf_aarch64_locals (abfd);
6365 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6366 locals[r_symndx].got_refcount += 1;
6367 old_got_type = locals[r_symndx].got_type;
6368 }
6369
6370 /* If a variable is accessed with both general dynamic TLS
6371 methods, two slots may be created. */
6372 if (GOT_TLS_GD_ANY_P (old_got_type) && GOT_TLS_GD_ANY_P (got_type))
6373 got_type |= old_got_type;
6374
6375 /* We will already have issued an error message if there
6376 is a TLS/non-TLS mismatch, based on the symbol type.
6377 So just combine any TLS types needed. */
6378 if (old_got_type != GOT_UNKNOWN && old_got_type != GOT_NORMAL
6379 && got_type != GOT_NORMAL)
6380 got_type |= old_got_type;
6381
6382 /* If the symbol is accessed by both IE and GD methods, we
6383 are able to relax. Turn off the GD flag, without
6384 messing up with any other kind of TLS types that may be
6385 involved. */
6386 if ((got_type & GOT_TLS_IE) && GOT_TLS_GD_ANY_P (got_type))
6387 got_type &= ~ (GOT_TLSDESC_GD | GOT_TLS_GD);
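/* E.g. a symbol accessed through both an IE (:gottprel:) sequence and
   a GD or TLSDESC sequence ends up here with just GOT_TLS_IE set.  */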
6388
6389 if (old_got_type != got_type)
6390 {
6391 if (h != NULL)
6392 elf_aarch64_hash_entry (h)->got_type = got_type;
6393 else
6394 {
6395 struct elf_aarch64_local_symbol *locals;
6396 locals = elf_aarch64_locals (abfd);
6397 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6398 locals[r_symndx].got_type = got_type;
6399 }
6400 }
6401
6402 if (htab->root.dynobj == NULL)
6403 htab->root.dynobj = abfd;
6404 if (! aarch64_elf_create_got_section (htab->root.dynobj, info))
6405 return FALSE;
6406 break;
6407 }
6408
6409 case BFD_RELOC_AARCH64_MOVW_G0_NC:
6410 case BFD_RELOC_AARCH64_MOVW_G1_NC:
6411 case BFD_RELOC_AARCH64_MOVW_G2_NC:
6412 case BFD_RELOC_AARCH64_MOVW_G3:
6413 if (info->shared)
6414 {
6415 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
6416 (*_bfd_error_handler)
6417 (_("%B: relocation %s against `%s' can not be used when making "
6418 "a shared object; recompile with -fPIC"),
6419 abfd, elfNN_aarch64_howto_table[howto_index].name,
6420 (h) ? h->root.root.string : "a local symbol");
6421 bfd_set_error (bfd_error_bad_value);
6422 return FALSE;
6423 }
6424
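/* Fall through.  */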
6425 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
6426 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6427 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
6428 if (h != NULL && info->executable)
6429 {
6430 /* If this reloc is in a read-only section, we might
6431 need a copy reloc. We can't check reliably at this
6432 stage whether the section is read-only, as input
6433 sections have not yet been mapped to output sections.
6434 Tentatively set the flag for now, and correct in
6435 adjust_dynamic_symbol. */
6436 h->non_got_ref = 1;
6437 h->plt.refcount += 1;
6438 h->pointer_equality_needed = 1;
6439 }
6440 /* FIXME: RR: we need to handle these in shared libraries,
6441 and essentially bomb out as these are non-PIC
6442 relocations in shared libraries. */
6443 break;
6444
6445 case BFD_RELOC_AARCH64_CALL26:
6446 case BFD_RELOC_AARCH64_JUMP26:
6447 /* If this is a local symbol then we resolve it
6448 directly without creating a PLT entry. */
6449 if (h == NULL)
6450 continue;
6451
6452 h->needs_plt = 1;
6453 if (h->plt.refcount <= 0)
6454 h->plt.refcount = 1;
6455 else
6456 h->plt.refcount += 1;
6457 break;
6458
6459 default:
6460 break;
6461 }
6462 }
6463
6464 return TRUE;
6465 }
6466
6467 /* Treat mapping symbols as special target symbols. */
6468
6469 static bfd_boolean
6470 elfNN_aarch64_is_target_special_symbol (bfd *abfd ATTRIBUTE_UNUSED,
6471 asymbol *sym)
6472 {
6473 return bfd_is_aarch64_special_symbol_name (sym->name,
6474 BFD_AARCH64_SPECIAL_SYM_TYPE_ANY);
6475 }
6476
6477 /* This is a copy of elf_find_function () from elf.c except that
6478 AArch64 mapping symbols are ignored when looking for function names. */
6479
6480 static bfd_boolean
6481 aarch64_elf_find_function (bfd *abfd ATTRIBUTE_UNUSED,
6482 asymbol **symbols,
6483 asection *section,
6484 bfd_vma offset,
6485 const char **filename_ptr,
6486 const char **functionname_ptr)
6487 {
6488 const char *filename = NULL;
6489 asymbol *func = NULL;
6490 bfd_vma low_func = 0;
6491 asymbol **p;
6492
6493 for (p = symbols; *p != NULL; p++)
6494 {
6495 elf_symbol_type *q;
6496
6497 q = (elf_symbol_type *) *p;
6498
6499 switch (ELF_ST_TYPE (q->internal_elf_sym.st_info))
6500 {
6501 default:
6502 break;
6503 case STT_FILE:
6504 filename = bfd_asymbol_name (&q->symbol);
6505 break;
6506 case STT_FUNC:
6507 case STT_NOTYPE:
6508 /* Skip mapping symbols. */
6509 if ((q->symbol.flags & BSF_LOCAL)
6510 && (bfd_is_aarch64_special_symbol_name
6511 (q->symbol.name, BFD_AARCH64_SPECIAL_SYM_TYPE_ANY)))
6512 continue;
6513 /* Fall through. */
6514 if (bfd_get_section (&q->symbol) == section
6515 && q->symbol.value >= low_func && q->symbol.value <= offset)
6516 {
6517 func = (asymbol *) q;
6518 low_func = q->symbol.value;
6519 }
6520 break;
6521 }
6522 }
6523
6524 if (func == NULL)
6525 return FALSE;
6526
6527 if (filename_ptr)
6528 *filename_ptr = filename;
6529 if (functionname_ptr)
6530 *functionname_ptr = bfd_asymbol_name (func);
6531
6532 return TRUE;
6533 }
6534
6535
6536 /* Find the nearest line to a particular section and offset, for error
6537 reporting. This code is a duplicate of the code in elf.c, except
6538 that it uses aarch64_elf_find_function. */
6539
6540 static bfd_boolean
6541 elfNN_aarch64_find_nearest_line (bfd *abfd,
6542 asymbol **symbols,
6543 asection *section,
6544 bfd_vma offset,
6545 const char **filename_ptr,
6546 const char **functionname_ptr,
6547 unsigned int *line_ptr,
6548 unsigned int *discriminator_ptr)
6549 {
6550 bfd_boolean found = FALSE;
6551
6552 if (_bfd_dwarf2_find_nearest_line (abfd, symbols, NULL, section, offset,
6553 filename_ptr, functionname_ptr,
6554 line_ptr, discriminator_ptr,
6555 dwarf_debug_sections, 0,
6556 &elf_tdata (abfd)->dwarf2_find_line_info))
6557 {
6558 if (!*functionname_ptr)
6559 aarch64_elf_find_function (abfd, symbols, section, offset,
6560 *filename_ptr ? NULL : filename_ptr,
6561 functionname_ptr);
6562
6563 return TRUE;
6564 }
6565
6566 /* Skip _bfd_dwarf1_find_nearest_line since no known AArch64
6567 toolchain uses DWARF1. */
6568
6569 if (!_bfd_stab_section_find_nearest_line (abfd, symbols, section, offset,
6570 &found, filename_ptr,
6571 functionname_ptr, line_ptr,
6572 &elf_tdata (abfd)->line_info))
6573 return FALSE;
6574
6575 if (found && (*functionname_ptr || *line_ptr))
6576 return TRUE;
6577
6578 if (symbols == NULL)
6579 return FALSE;
6580
6581 if (!aarch64_elf_find_function (abfd, symbols, section, offset,
6582 filename_ptr, functionname_ptr))
6583 return FALSE;
6584
6585 *line_ptr = 0;
6586 return TRUE;
6587 }
6588
6589 static bfd_boolean
6590 elfNN_aarch64_find_inliner_info (bfd *abfd,
6591 const char **filename_ptr,
6592 const char **functionname_ptr,
6593 unsigned int *line_ptr)
6594 {
6595 bfd_boolean found;
6596 found = _bfd_dwarf2_find_inliner_info
6597 (abfd, filename_ptr,
6598 functionname_ptr, line_ptr, &elf_tdata (abfd)->dwarf2_find_line_info);
6599 return found;
6600 }
6601
6602
6603 static void
6604 elfNN_aarch64_post_process_headers (bfd *abfd,
6605 struct bfd_link_info *link_info)
6606 {
6607 Elf_Internal_Ehdr *i_ehdrp; /* ELF file header, internal form. */
6608
6609 i_ehdrp = elf_elfheader (abfd);
6610 i_ehdrp->e_ident[EI_ABIVERSION] = AARCH64_ELF_ABI_VERSION;
6611
6612 _bfd_elf_post_process_headers (abfd, link_info);
6613 }
6614
6615 static enum elf_reloc_type_class
6616 elfNN_aarch64_reloc_type_class (const struct bfd_link_info *info ATTRIBUTE_UNUSED,
6617 const asection *rel_sec ATTRIBUTE_UNUSED,
6618 const Elf_Internal_Rela *rela)
6619 {
6620 switch ((int) ELFNN_R_TYPE (rela->r_info))
6621 {
6622 case AARCH64_R (RELATIVE):
6623 return reloc_class_relative;
6624 case AARCH64_R (JUMP_SLOT):
6625 return reloc_class_plt;
6626 case AARCH64_R (COPY):
6627 return reloc_class_copy;
6628 default:
6629 return reloc_class_normal;
6630 }
6631 }
6632
6633 /* Handle an AArch64 specific section when reading an object file. This is
6634 called when bfd_section_from_shdr finds a section with an unknown
6635 type. */
6636
6637 static bfd_boolean
6638 elfNN_aarch64_section_from_shdr (bfd *abfd,
6639 Elf_Internal_Shdr *hdr,
6640 const char *name, int shindex)
6641 {
6642 /* There ought to be a place to keep ELF backend specific flags, but
6643 at the moment there isn't one. We just keep track of the
6644 sections by their name, instead. Fortunately, the ABI gives
6645 names for all the AArch64 specific sections, so we will probably get
6646 away with this. */
6647 switch (hdr->sh_type)
6648 {
6649 case SHT_AARCH64_ATTRIBUTES:
6650 break;
6651
6652 default:
6653 return FALSE;
6654 }
6655
6656 if (!_bfd_elf_make_section_from_shdr (abfd, hdr, name, shindex))
6657 return FALSE;
6658
6659 return TRUE;
6660 }
6661
6662 /* A structure used to record a list of sections, independently
6663 of the next and prev fields in the asection structure. */
6664 typedef struct section_list
6665 {
6666 asection *sec;
6667 struct section_list *next;
6668 struct section_list *prev;
6669 }
6670 section_list;
6671
6672 /* Unfortunately we need to keep a list of sections for which
6673 an _aarch64_elf_section_data structure has been allocated. This
6674 is because it is possible for functions like elfNN_aarch64_write_section
6675 to be called on a section which has had an elf_data_structure
6676 allocated for it (and so the used_by_bfd field is valid) but
6677 for which the AArch64 extended version of this structure - the
6678 _aarch64_elf_section_data structure - has not been allocated. */
6679 static section_list *sections_with_aarch64_elf_section_data = NULL;
6680
6681 static void
6682 record_section_with_aarch64_elf_section_data (asection *sec)
6683 {
6684 struct section_list *entry;
6685
6686 entry = bfd_malloc (sizeof (*entry));
6687 if (entry == NULL)
6688 return;
6689 entry->sec = sec;
6690 entry->next = sections_with_aarch64_elf_section_data;
6691 entry->prev = NULL;
6692 if (entry->next != NULL)
6693 entry->next->prev = entry;
6694 sections_with_aarch64_elf_section_data = entry;
6695 }
6696
6697 static struct section_list *
6698 find_aarch64_elf_section_entry (asection *sec)
6699 {
6700 struct section_list *entry;
6701 static struct section_list *last_entry = NULL;
6702
6703 /* This is a short cut for the typical case where the sections are added
6704 to the sections_with_aarch64_elf_section_data list in forward order and
6705 then looked up here in backwards order. This makes a real difference
6706 to the ld-srec/sec64k.exp linker test. */
6707 entry = sections_with_aarch64_elf_section_data;
6708 if (last_entry != NULL)
6709 {
6710 if (last_entry->sec == sec)
6711 entry = last_entry;
6712 else if (last_entry->next != NULL && last_entry->next->sec == sec)
6713 entry = last_entry->next;
6714 }
6715
6716 for (; entry; entry = entry->next)
6717 if (entry->sec == sec)
6718 break;
6719
6720 if (entry)
6721 /* Record the entry prior to this one - it is the entry we are
6722 most likely to want to locate next time. Also this way if we
6723 have been called from
6724 unrecord_section_with_aarch64_elf_section_data () we will not
6725 be caching a pointer that is about to be freed. */
6726 last_entry = entry->prev;
6727
6728 return entry;
6729 }
6730
6731 static void
6732 unrecord_section_with_aarch64_elf_section_data (asection *sec)
6733 {
6734 struct section_list *entry;
6735
6736 entry = find_aarch64_elf_section_entry (sec);
6737
6738 if (entry)
6739 {
6740 if (entry->prev != NULL)
6741 entry->prev->next = entry->next;
6742 if (entry->next != NULL)
6743 entry->next->prev = entry->prev;
6744 if (entry == sections_with_aarch64_elf_section_data)
6745 sections_with_aarch64_elf_section_data = entry->next;
6746 free (entry);
6747 }
6748 }
6749
6750
6751 typedef struct
6752 {
6753 void *finfo;
6754 struct bfd_link_info *info;
6755 asection *sec;
6756 int sec_shndx;
6757 int (*func) (void *, const char *, Elf_Internal_Sym *,
6758 asection *, struct elf_link_hash_entry *);
6759 } output_arch_syminfo;
6760
6761 enum map_symbol_type
6762 {
6763 AARCH64_MAP_INSN,
6764 AARCH64_MAP_DATA
6765 };
6766
6767
6768 /* Output a single mapping symbol. */
6769
6770 static bfd_boolean
6771 elfNN_aarch64_output_map_sym (output_arch_syminfo *osi,
6772 enum map_symbol_type type, bfd_vma offset)
6773 {
6774 static const char *names[2] = { "$x", "$d" };
6775 Elf_Internal_Sym sym;
6776
6777 sym.st_value = (osi->sec->output_section->vma
6778 + osi->sec->output_offset + offset);
6779 sym.st_size = 0;
6780 sym.st_other = 0;
6781 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_NOTYPE);
6782 sym.st_shndx = osi->sec_shndx;
6783 return osi->func (osi->finfo, names[type], &sym, osi->sec, NULL) == 1;
6784 }
6785
6786
6787
6788 /* Output mapping symbols for PLT entries associated with H. */
6789
6790 static bfd_boolean
6791 elfNN_aarch64_output_plt_map (struct elf_link_hash_entry *h, void *inf)
6792 {
6793 output_arch_syminfo *osi = (output_arch_syminfo *) inf;
6794 bfd_vma addr;
6795
6796 if (h->root.type == bfd_link_hash_indirect)
6797 return TRUE;
6798
6799 if (h->root.type == bfd_link_hash_warning)
6800 /* When warning symbols are created, they **replace** the "real"
6801 entry in the hash table, thus we never get to see the real
6802 symbol in a hash traversal. So look at it now. */
6803 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6804
6805 if (h->plt.offset == (bfd_vma) - 1)
6806 return TRUE;
6807
6808 addr = h->plt.offset;
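/* 32 is the offset of the first real PLT entry (the PLT header size in
   the small model), so in effect a single $x mapping symbol is emitted
   there, covering the contiguous, all-code PLT entries that follow.  */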
6809 if (addr == 32)
6810 {
6811 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6812 return FALSE;
6813 }
6814 return TRUE;
6815 }
6816
6817
6818 /* Output a single local symbol for a generated stub. */
6819
6820 static bfd_boolean
6821 elfNN_aarch64_output_stub_sym (output_arch_syminfo *osi, const char *name,
6822 bfd_vma offset, bfd_vma size)
6823 {
6824 Elf_Internal_Sym sym;
6825
6826 sym.st_value = (osi->sec->output_section->vma
6827 + osi->sec->output_offset + offset);
6828 sym.st_size = size;
6829 sym.st_other = 0;
6830 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_FUNC);
6831 sym.st_shndx = osi->sec_shndx;
6832 return osi->func (osi->finfo, name, &sym, osi->sec, NULL) == 1;
6833 }
6834
6835 static bfd_boolean
6836 aarch64_map_one_stub (struct bfd_hash_entry *gen_entry, void *in_arg)
6837 {
6838 struct elf_aarch64_stub_hash_entry *stub_entry;
6839 asection *stub_sec;
6840 bfd_vma addr;
6841 char *stub_name;
6842 output_arch_syminfo *osi;
6843
6844 /* Massage our args to the form they really have. */
6845 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
6846 osi = (output_arch_syminfo *) in_arg;
6847
6848 stub_sec = stub_entry->stub_sec;
6849
6850 /* Ensure this stub is attached to the current section being
6851 processed. */
6852 if (stub_sec != osi->sec)
6853 return TRUE;
6854
6855 addr = (bfd_vma) stub_entry->stub_offset;
6856
6857 stub_name = stub_entry->output_name;
6858
6859 switch (stub_entry->stub_type)
6860 {
6861 case aarch64_stub_adrp_branch:
6862 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6863 sizeof (aarch64_adrp_branch_stub)))
6864 return FALSE;
6865 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6866 return FALSE;
6867 break;
6868 case aarch64_stub_long_branch:
6869 if (!elfNN_aarch64_output_stub_sym
6870 (osi, stub_name, addr, sizeof (aarch64_long_branch_stub)))
6871 return FALSE;
6872 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6873 return FALSE;
6874 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_DATA, addr + 16))
6875 return FALSE;
6876 break;
6877 case aarch64_stub_erratum_835769_veneer:
6878 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6879 sizeof (aarch64_erratum_835769_stub)))
6880 return FALSE;
6881 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6882 return FALSE;
6883 break;
6884 case aarch64_stub_erratum_843419_veneer:
6885 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6886 sizeof (aarch64_erratum_843419_stub)))
6887 return FALSE;
6888 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6889 return FALSE;
6890 break;
6891
6892 default:
6893 abort ();
6894 }
6895
6896 return TRUE;
6897 }
6898
6899 /* Output mapping symbols for linker generated sections. */
6900
6901 static bfd_boolean
6902 elfNN_aarch64_output_arch_local_syms (bfd *output_bfd,
6903 struct bfd_link_info *info,
6904 void *finfo,
6905 int (*func) (void *, const char *,
6906 Elf_Internal_Sym *,
6907 asection *,
6908 struct elf_link_hash_entry
6909 *))
6910 {
6911 output_arch_syminfo osi;
6912 struct elf_aarch64_link_hash_table *htab;
6913
6914 htab = elf_aarch64_hash_table (info);
6915
6916 osi.finfo = finfo;
6917 osi.info = info;
6918 osi.func = func;
6919
6920 /* Long calls stubs. */
6921 if (htab->stub_bfd && htab->stub_bfd->sections)
6922 {
6923 asection *stub_sec;
6924
6925 for (stub_sec = htab->stub_bfd->sections;
6926 stub_sec != NULL; stub_sec = stub_sec->next)
6927 {
6928 /* Ignore non-stub sections. */
6929 if (!strstr (stub_sec->name, STUB_SUFFIX))
6930 continue;
6931
6932 osi.sec = stub_sec;
6933
6934 osi.sec_shndx = _bfd_elf_section_from_bfd_section
6935 (output_bfd, osi.sec->output_section);
6936
6937 /* The first instruction in a stub is always a branch. */
6938 if (!elfNN_aarch64_output_map_sym (&osi, AARCH64_MAP_INSN, 0))
6939 return FALSE;
6940
6941 bfd_hash_traverse (&htab->stub_hash_table, aarch64_map_one_stub,
6942 &osi);
6943 }
6944 }
6945
6946 /* Finally, output mapping symbols for the PLT. */
6947 if (!htab->root.splt || htab->root.splt->size == 0)
6948 return TRUE;
6949
6950 /* For now we emit only minimal mapping symbols for the PLT. */
6951 osi.sec_shndx = _bfd_elf_section_from_bfd_section
6952 (output_bfd, htab->root.splt->output_section);
6953 osi.sec = htab->root.splt;
6954
6955 elf_link_hash_traverse (&htab->root, elfNN_aarch64_output_plt_map,
6956 (void *) &osi);
6957
6958 return TRUE;
6959
6960 }
6961
6962 /* Allocate target specific section data. */
6963
6964 static bfd_boolean
6965 elfNN_aarch64_new_section_hook (bfd *abfd, asection *sec)
6966 {
6967 if (!sec->used_by_bfd)
6968 {
6969 _aarch64_elf_section_data *sdata;
6970 bfd_size_type amt = sizeof (*sdata);
6971
6972 sdata = bfd_zalloc (abfd, amt);
6973 if (sdata == NULL)
6974 return FALSE;
6975 sec->used_by_bfd = sdata;
6976 }
6977
6978 record_section_with_aarch64_elf_section_data (sec);
6979
6980 return _bfd_elf_new_section_hook (abfd, sec);
6981 }
6982
6983
6984 static void
6985 unrecord_section_via_map_over_sections (bfd *abfd ATTRIBUTE_UNUSED,
6986 asection *sec,
6987 void *ignore ATTRIBUTE_UNUSED)
6988 {
6989 unrecord_section_with_aarch64_elf_section_data (sec);
6990 }
6991
6992 static bfd_boolean
6993 elfNN_aarch64_close_and_cleanup (bfd *abfd)
6994 {
6995 if (abfd->sections)
6996 bfd_map_over_sections (abfd,
6997 unrecord_section_via_map_over_sections, NULL);
6998
6999 return _bfd_elf_close_and_cleanup (abfd);
7000 }
7001
7002 static bfd_boolean
7003 elfNN_aarch64_bfd_free_cached_info (bfd *abfd)
7004 {
7005 if (abfd->sections)
7006 bfd_map_over_sections (abfd,
7007 unrecord_section_via_map_over_sections, NULL);
7008
7009 return _bfd_free_cached_info (abfd);
7010 }
7011
7012 /* Create dynamic sections. This is different from the ARM backend in that
7013 the got, plt, gotplt and their relocation sections are all created in the
7014 standard part of the bfd elf backend. */
7015
7016 static bfd_boolean
7017 elfNN_aarch64_create_dynamic_sections (bfd *dynobj,
7018 struct bfd_link_info *info)
7019 {
7020 struct elf_aarch64_link_hash_table *htab;
7021
7022 /* We need to create .got section. */
7023 if (!aarch64_elf_create_got_section (dynobj, info))
7024 return FALSE;
7025
7026 if (!_bfd_elf_create_dynamic_sections (dynobj, info))
7027 return FALSE;
7028
7029 htab = elf_aarch64_hash_table (info);
7030 htab->sdynbss = bfd_get_linker_section (dynobj, ".dynbss");
7031 if (!info->shared)
7032 htab->srelbss = bfd_get_linker_section (dynobj, ".rela.bss");
7033
7034 if (!htab->sdynbss || (!info->shared && !htab->srelbss))
7035 abort ();
7036
7037 return TRUE;
7038 }
7039
7040
7041 /* Allocate space in .plt, .got and associated reloc sections for
7042 dynamic relocs. */
7043
7044 static bfd_boolean
7045 elfNN_aarch64_allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf)
7046 {
7047 struct bfd_link_info *info;
7048 struct elf_aarch64_link_hash_table *htab;
7049 struct elf_aarch64_link_hash_entry *eh;
7050 struct elf_dyn_relocs *p;
7051
7052 /* An example of a bfd_link_hash_indirect symbol is a versioned
7053 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7054 -> __gxx_personality_v0(bfd_link_hash_defined)
7055
7056 There is no need to process bfd_link_hash_indirect symbols here
7057 because we will also be presented with the concrete instance of
7058 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7059 called to copy all relevant data from the generic to the concrete
7060 symbol instance.
7061 */
7062 if (h->root.type == bfd_link_hash_indirect)
7063 return TRUE;
7064
7065 if (h->root.type == bfd_link_hash_warning)
7066 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7067
7068 info = (struct bfd_link_info *) inf;
7069 htab = elf_aarch64_hash_table (info);
7070
7071 /* Since STT_GNU_IFUNC symbol must go through PLT, we handle it
7072 here if it is defined and referenced in a non-shared object. */
7073 if (h->type == STT_GNU_IFUNC
7074 && h->def_regular)
7075 return TRUE;
7076 else if (htab->root.dynamic_sections_created && h->plt.refcount > 0)
7077 {
7078 /* Make sure this symbol is output as a dynamic symbol.
7079 Undefined weak syms won't yet be marked as dynamic. */
7080 if (h->dynindx == -1 && !h->forced_local)
7081 {
7082 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7083 return FALSE;
7084 }
7085
7086 if (info->shared || WILL_CALL_FINISH_DYNAMIC_SYMBOL (1, 0, h))
7087 {
7088 asection *s = htab->root.splt;
7089
7090 /* If this is the first .plt entry, make room for the special
7091 first entry. */
7092 if (s->size == 0)
7093 s->size += htab->plt_header_size;
7094
7095 h->plt.offset = s->size;
7096
7097 /* If this symbol is not defined in a regular file, and we are
7098 not generating a shared library, then set the symbol to this
7099 location in the .plt. This is required to make function
7100 pointers compare as equal between the normal executable and
7101 the shared library. */
7102 if (!info->shared && !h->def_regular)
7103 {
7104 h->root.u.def.section = s;
7105 h->root.u.def.value = h->plt.offset;
7106 }
7107
7108 /* Make room for this entry. For now we only create the
7109 small model PLT entries. We later need to find a way
7110 of relaxing into these from the large model PLT entries. */
7111 s->size += PLT_SMALL_ENTRY_SIZE;
7112
7113 /* We also need to make an entry in the .got.plt section, which
7114 will be placed in the .got section by the linker script. */
7115 htab->root.sgotplt->size += GOT_ENTRY_SIZE;
7116
7117 /* We also need to make an entry in the .rela.plt section. */
7118 htab->root.srelplt->size += RELOC_SIZE (htab);
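/* In total, then, each symbol reaching this point costs one small-model
   PLT entry, one .got.plt slot and one .rela.plt entry.  */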
7119
7120 /* We need to ensure that all GOT entries that serve the PLT
7121 are consecutive with the special GOT slots [0], [1] and
7122 [2]. Any additional relocations, such as
7123 R_AARCH64_TLSDESC, must be placed after the PLT related
7124 entries. We abuse reloc_count such that during
7125 sizing we adjust reloc_count to indicate the number of
7126 PLT related reserved entries. In subsequent phases, when
7127 filling in the contents of the reloc entries, PLT related
7128 entries are placed by computing their PLT index (0
7129 .. reloc_count), while other non-PLT relocs are placed
7130 at the slot indicated by reloc_count, and reloc_count is
7131 then updated. */
7132
7133 htab->root.srelplt->reloc_count++;
7134 }
7135 else
7136 {
7137 h->plt.offset = (bfd_vma) - 1;
7138 h->needs_plt = 0;
7139 }
7140 }
7141 else
7142 {
7143 h->plt.offset = (bfd_vma) - 1;
7144 h->needs_plt = 0;
7145 }
7146
7147 eh = (struct elf_aarch64_link_hash_entry *) h;
7148 eh->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7149
7150 if (h->got.refcount > 0)
7151 {
7152 bfd_boolean dyn;
7153 unsigned got_type = elf_aarch64_hash_entry (h)->got_type;
7154
7155 h->got.offset = (bfd_vma) - 1;
7156
7157 dyn = htab->root.dynamic_sections_created;
7158
7159 /* Make sure this symbol is output as a dynamic symbol.
7160 Undefined weak syms won't yet be marked as dynamic. */
7161 if (dyn && h->dynindx == -1 && !h->forced_local)
7162 {
7163 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7164 return FALSE;
7165 }
7166
7167 if (got_type == GOT_UNKNOWN)
7168 {
7169 }
7170 else if (got_type == GOT_NORMAL)
7171 {
7172 h->got.offset = htab->root.sgot->size;
7173 htab->root.sgot->size += GOT_ENTRY_SIZE;
7174 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7175 || h->root.type != bfd_link_hash_undefweak)
7176 && (info->shared
7177 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7178 {
7179 htab->root.srelgot->size += RELOC_SIZE (htab);
7180 }
7181 }
7182 else
7183 {
7184 int indx;
7185 if (got_type & GOT_TLSDESC_GD)
7186 {
7187 eh->tlsdesc_got_jump_table_offset =
7188 (htab->root.sgotplt->size
7189 - aarch64_compute_jump_table_size (htab));
7190 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
7191 h->got.offset = (bfd_vma) - 2;
7192 }
7193
7194 if (got_type & GOT_TLS_GD)
7195 {
7196 h->got.offset = htab->root.sgot->size;
7197 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7198 }
7199
7200 if (got_type & GOT_TLS_IE)
7201 {
7202 h->got.offset = htab->root.sgot->size;
7203 htab->root.sgot->size += GOT_ENTRY_SIZE;
7204 }
7205
7206 indx = h && h->dynindx != -1 ? h->dynindx : 0;
7207 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7208 || h->root.type != bfd_link_hash_undefweak)
7209 && (info->shared
7210 || indx != 0
7211 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7212 {
7213 if (got_type & GOT_TLSDESC_GD)
7214 {
7215 htab->root.srelplt->size += RELOC_SIZE (htab);
7216 /* Note reloc_count not incremented here! We have
7217 already adjusted reloc_count for this relocation
7218 type. */
7219
7220 /* A TLSDESC PLT entry is now needed; its offset is not yet determined. */
7221 htab->tlsdesc_plt = (bfd_vma) - 1;
7222 }
7223
7224 if (got_type & GOT_TLS_GD)
7225 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7226
7227 if (got_type & GOT_TLS_IE)
7228 htab->root.srelgot->size += RELOC_SIZE (htab);
7229 }
7230 }
7231 }
7232 else
7233 {
7234 h->got.offset = (bfd_vma) - 1;
7235 }
7236
7237 if (eh->dyn_relocs == NULL)
7238 return TRUE;
7239
7240 /* In the shared -Bsymbolic case, discard space allocated for
7241 dynamic pc-relative relocs against symbols which turn out to be
7242 defined in regular objects. For the normal shared case, discard
7243 space for pc-relative relocs that have become local due to symbol
7244 visibility changes. */
7245
7246 if (info->shared)
7247 {
7248 /* Relocs that use pc_count are those that appear on a call
7249 insn, or certain REL relocs that can be generated via assembly.
7250 We want calls to protected symbols to resolve directly to the
7251 function rather than going via the plt. If people want
7252 function pointer comparisons to work as expected then they
7253 should avoid writing weird assembly. */
7254 if (SYMBOL_CALLS_LOCAL (info, h))
7255 {
7256 struct elf_dyn_relocs **pp;
7257
7258 for (pp = &eh->dyn_relocs; (p = *pp) != NULL;)
7259 {
7260 p->count -= p->pc_count;
7261 p->pc_count = 0;
7262 if (p->count == 0)
7263 *pp = p->next;
7264 else
7265 pp = &p->next;
7266 }
7267 }
7268
7269 /* Also discard relocs on undefined weak syms with non-default
7270 visibility. */
7271 if (eh->dyn_relocs != NULL && h->root.type == bfd_link_hash_undefweak)
7272 {
7273 if (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7274 eh->dyn_relocs = NULL;
7275
7276 /* Make sure undefined weak symbols are output as a dynamic
7277 symbol in PIEs. */
7278 else if (h->dynindx == -1
7279 && !h->forced_local
7280 && !bfd_elf_link_record_dynamic_symbol (info, h))
7281 return FALSE;
7282 }
7283
7284 }
7285 else if (ELIMINATE_COPY_RELOCS)
7286 {
7287 /* For the non-shared case, discard space for relocs against
7288 symbols which turn out to need copy relocs or are not
7289 dynamic. */
7290
7291 if (!h->non_got_ref
7292 && ((h->def_dynamic
7293 && !h->def_regular)
7294 || (htab->root.dynamic_sections_created
7295 && (h->root.type == bfd_link_hash_undefweak
7296 || h->root.type == bfd_link_hash_undefined))))
7297 {
7298 /* Make sure this symbol is output as a dynamic symbol.
7299 Undefined weak syms won't yet be marked as dynamic. */
7300 if (h->dynindx == -1
7301 && !h->forced_local
7302 && !bfd_elf_link_record_dynamic_symbol (info, h))
7303 return FALSE;
7304
7305 /* If that succeeded, we know we'll be keeping all the
7306 relocs. */
7307 if (h->dynindx != -1)
7308 goto keep;
7309 }
7310
7311 eh->dyn_relocs = NULL;
7312
7313 keep:;
7314 }
7315
7316 /* Finally, allocate space. */
7317 for (p = eh->dyn_relocs; p != NULL; p = p->next)
7318 {
7319 asection *sreloc;
7320
7321 sreloc = elf_section_data (p->sec)->sreloc;
7322
7323 BFD_ASSERT (sreloc != NULL);
7324
7325 sreloc->size += p->count * RELOC_SIZE (htab);
7326 }
7327
7328 return TRUE;
7329 }
7330
7331 /* Allocate space in .plt, .got and associated reloc sections for
7332 ifunc dynamic relocs. */
7333
7334 static bfd_boolean
7335 elfNN_aarch64_allocate_ifunc_dynrelocs (struct elf_link_hash_entry *h,
7336 void *inf)
7337 {
7338 struct bfd_link_info *info;
7339 struct elf_aarch64_link_hash_table *htab;
7340 struct elf_aarch64_link_hash_entry *eh;
7341
7342 /* An example of a bfd_link_hash_indirect symbol is a versioned
7343 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7344 -> __gxx_personality_v0(bfd_link_hash_defined)
7345
7346 There is no need to process bfd_link_hash_indirect symbols here
7347 because we will also be presented with the concrete instance of
7348 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7349 called to copy all relevant data from the generic to the concrete
7350 symbol instance.
7351 */
7352 if (h->root.type == bfd_link_hash_indirect)
7353 return TRUE;
7354
7355 if (h->root.type == bfd_link_hash_warning)
7356 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7357
7358 info = (struct bfd_link_info *) inf;
7359 htab = elf_aarch64_hash_table (info);
7360
7361 eh = (struct elf_aarch64_link_hash_entry *) h;
7362
7363 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle it
7364 here if it is defined and referenced in a non-shared object. */
7365 if (h->type == STT_GNU_IFUNC
7366 && h->def_regular)
7367 return _bfd_elf_allocate_ifunc_dyn_relocs (info, h,
7368 &eh->dyn_relocs,
7369 htab->plt_entry_size,
7370 htab->plt_header_size,
7371 GOT_ENTRY_SIZE);
7372 return TRUE;
7373 }
7374
7375 /* Allocate space in .plt, .got and associated reloc sections for
7376 local dynamic relocs. */
7377
7378 static bfd_boolean
7379 elfNN_aarch64_allocate_local_dynrelocs (void **slot, void *inf)
7380 {
7381 struct elf_link_hash_entry *h
7382 = (struct elf_link_hash_entry *) *slot;
7383
7384 if (h->type != STT_GNU_IFUNC
7385 || !h->def_regular
7386 || !h->ref_regular
7387 || !h->forced_local
7388 || h->root.type != bfd_link_hash_defined)
7389 abort ();
7390
7391 return elfNN_aarch64_allocate_dynrelocs (h, inf);
7392 }
7393
7394 /* Allocate space in .plt, .got and associated reloc sections for
7395 local ifunc dynamic relocs. */
7396
7397 static bfd_boolean
7398 elfNN_aarch64_allocate_local_ifunc_dynrelocs (void **slot, void *inf)
7399 {
7400 struct elf_link_hash_entry *h
7401 = (struct elf_link_hash_entry *) *slot;
7402
7403 if (h->type != STT_GNU_IFUNC
7404 || !h->def_regular
7405 || !h->ref_regular
7406 || !h->forced_local
7407 || h->root.type != bfd_link_hash_defined)
7408 abort ();
7409
7410 return elfNN_aarch64_allocate_ifunc_dynrelocs (h, inf);
7411 }
7412
7413 /* This is the most important function of all.  Innocuously named
7414 though!  */
7415 static bfd_boolean
7416 elfNN_aarch64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED,
7417 struct bfd_link_info *info)
7418 {
7419 struct elf_aarch64_link_hash_table *htab;
7420 bfd *dynobj;
7421 asection *s;
7422 bfd_boolean relocs;
7423 bfd *ibfd;
7424
7425 htab = elf_aarch64_hash_table ((info));
7426 dynobj = htab->root.dynobj;
7427
7428 BFD_ASSERT (dynobj != NULL);
7429
7430 if (htab->root.dynamic_sections_created)
7431 {
7432 if (info->executable)
7433 {
7434 s = bfd_get_linker_section (dynobj, ".interp");
7435 if (s == NULL)
7436 abort ();
7437 s->size = sizeof ELF_DYNAMIC_INTERPRETER;
7438 s->contents = (unsigned char *) ELF_DYNAMIC_INTERPRETER;
7439 }
7440 }
7441
7442 /* Set up .got offsets for local syms, and space for local dynamic
7443 relocs. */
7444 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7445 {
7446 struct elf_aarch64_local_symbol *locals = NULL;
7447 Elf_Internal_Shdr *symtab_hdr;
7448 asection *srel;
7449 unsigned int i;
7450
7451 if (!is_aarch64_elf (ibfd))
7452 continue;
7453
7454 for (s = ibfd->sections; s != NULL; s = s->next)
7455 {
7456 struct elf_dyn_relocs *p;
7457
7458 for (p = (struct elf_dyn_relocs *)
7459 (elf_section_data (s)->local_dynrel); p != NULL; p = p->next)
7460 {
7461 if (!bfd_is_abs_section (p->sec)
7462 && bfd_is_abs_section (p->sec->output_section))
7463 {
7464 /* Input section has been discarded, either because
7465 it is a copy of a linkonce section or due to
7466 linker script /DISCARD/, so we'll be discarding
7467 the relocs too. */
7468 }
7469 else if (p->count != 0)
7470 {
7471 srel = elf_section_data (p->sec)->sreloc;
7472 srel->size += p->count * RELOC_SIZE (htab);
7473 if ((p->sec->output_section->flags & SEC_READONLY) != 0)
7474 info->flags |= DF_TEXTREL;
7475 }
7476 }
7477 }
7478
7479 locals = elf_aarch64_locals (ibfd);
7480 if (!locals)
7481 continue;
7482
7483 symtab_hdr = &elf_symtab_hdr (ibfd);
7484 srel = htab->root.srelgot;
7485 for (i = 0; i < symtab_hdr->sh_info; i++)
7486 {
7487 locals[i].got_offset = (bfd_vma) - 1;
7488 locals[i].tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7489 if (locals[i].got_refcount > 0)
7490 {
7491 unsigned got_type = locals[i].got_type;
7492 if (got_type & GOT_TLSDESC_GD)
7493 {
7494 locals[i].tlsdesc_got_jump_table_offset =
7495 (htab->root.sgotplt->size
7496 - aarch64_compute_jump_table_size (htab));
7497 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
7498 locals[i].got_offset = (bfd_vma) - 2;
7499 }
7500
7501 if (got_type & GOT_TLS_GD)
7502 {
7503 locals[i].got_offset = htab->root.sgot->size;
7504 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7505 }
7506
7507 if (got_type & GOT_TLS_IE)
7508 {
7509 locals[i].got_offset = htab->root.sgot->size;
7510 htab->root.sgot->size += GOT_ENTRY_SIZE;
7511 }
7512
7513 if (got_type == GOT_UNKNOWN)
7514 {
7515 }
7516
7517 if (got_type == GOT_NORMAL)
7518 {
7519 }
7520
7521 if (info->shared)
7522 {
7523 if (got_type & GOT_TLSDESC_GD)
7524 {
7525 htab->root.srelplt->size += RELOC_SIZE (htab);
7526 /* Note reloc_count not incremented here! */
7527 htab->tlsdesc_plt = (bfd_vma) - 1;
7528 }
7529
7530 if (got_type & GOT_TLS_GD)
7531 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7532
7533 if (got_type & GOT_TLS_IE)
7534 htab->root.srelgot->size += RELOC_SIZE (htab);
7535 }
7536 }
7537 else
7538 {
7539 locals[i].got_refcount = (bfd_vma) - 1;
7540 }
7541 }
7542 }
7543
7544
7545 /* Allocate global sym .plt and .got entries, and space for global
7546 sym dynamic relocs. */
7547 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_dynrelocs,
7548 info);
7549
7550 /* Allocate global ifunc sym .plt and .got entries, and space for global
7551 ifunc sym dynamic relocs. */
7552 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_ifunc_dynrelocs,
7553 info);
7554
7555 /* Allocate .plt and .got entries, and space for local symbols. */
7556 htab_traverse (htab->loc_hash_table,
7557 elfNN_aarch64_allocate_local_dynrelocs,
7558 info);
7559
7560 /* Allocate .plt and .got entries, and space for local ifunc symbols. */
7561 htab_traverse (htab->loc_hash_table,
7562 elfNN_aarch64_allocate_local_ifunc_dynrelocs,
7563 info);
7564
7565 /* For every jump slot reserved in the sgotplt, reloc_count is
7566 incremented. However, when we reserve space for TLS descriptors,
7567 it's not incremented, so in order to compute the space reserved
7568 for them, it suffices to multiply the reloc count by the jump
7569 slot size. */
7570
7571 if (htab->root.srelplt)
7572 htab->sgotplt_jump_table_size = aarch64_compute_jump_table_size (htab);
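/* Editor's note (a sketch of the intent, not original text): given the
   convention described above, aarch64_compute_jump_table_size () is
   presumably .rela.plt's reloc_count multiplied by GOT_ENTRY_SIZE,
   i.e. the portion of .got.plt occupied by ordinary PLT jump slots,
   with the TLSDESC double slots accounted for separately.  */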
7573
7574 if (htab->tlsdesc_plt)
7575 {
7576 if (htab->root.splt->size == 0)
7577 htab->root.splt->size += PLT_ENTRY_SIZE;
7578
7579 htab->tlsdesc_plt = htab->root.splt->size;
7580 htab->root.splt->size += PLT_TLSDESC_ENTRY_SIZE;
7581
7582 /* If we're not using lazy TLS relocations, don't generate the
7583 GOT entry required. */
7584 if (!(info->flags & DF_BIND_NOW))
7585 {
7586 htab->dt_tlsdesc_got = htab->root.sgot->size;
7587 htab->root.sgot->size += GOT_ENTRY_SIZE;
7588 }
7589 }
7590
7591 /* Initialize mapping symbol information for later use in distinguishing
7592 between code and data while scanning for errata. */
7593 if (htab->fix_erratum_835769 || htab->fix_erratum_843419)
7594 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7595 {
7596 if (!is_aarch64_elf (ibfd))
7597 continue;
7598 bfd_elfNN_aarch64_init_maps (ibfd);
7599 }
7600
7601 /* We now have determined the sizes of the various dynamic sections.
7602 Allocate memory for them. */
7603 relocs = FALSE;
7604 for (s = dynobj->sections; s != NULL; s = s->next)
7605 {
7606 if ((s->flags & SEC_LINKER_CREATED) == 0)
7607 continue;
7608
7609 if (s == htab->root.splt
7610 || s == htab->root.sgot
7611 || s == htab->root.sgotplt
7612 || s == htab->root.iplt
7613 || s == htab->root.igotplt || s == htab->sdynbss)
7614 {
7615 /* Strip this section if we don't need it; see the
7616 comment below. */
7617 }
7618 else if (CONST_STRNEQ (bfd_get_section_name (dynobj, s), ".rela"))
7619 {
7620 if (s->size != 0 && s != htab->root.srelplt)
7621 relocs = TRUE;
7622
7623 /* We use the reloc_count field as a counter if we need
7624 to copy relocs into the output file. */
7625 if (s != htab->root.srelplt)
7626 s->reloc_count = 0;
7627 }
7628 else
7629 {
7630 /* It's not one of our sections, so don't allocate space. */
7631 continue;
7632 }
7633
7634 if (s->size == 0)
7635 {
7636 /* If we don't need this section, strip it from the
7637 output file. This is mostly to handle .rela.bss and
7638 .rela.plt. We must create both sections in
7639 create_dynamic_sections, because they must be created
7640 before the linker maps input sections to output
7641 sections. The linker does that before
7642 adjust_dynamic_symbol is called, and it is that
7643 function which decides whether anything needs to go
7644 into these sections. */
7645
7646 s->flags |= SEC_EXCLUDE;
7647 continue;
7648 }
7649
7650 if ((s->flags & SEC_HAS_CONTENTS) == 0)
7651 continue;
7652
7653 /* Allocate memory for the section contents. We use bfd_zalloc
7654 here in case unused entries are not reclaimed before the
7655 section's contents are written out. This should not happen,
7656 but this way if it does, we get a R_AARCH64_NONE reloc instead
7657 of garbage. */
7658 s->contents = (bfd_byte *) bfd_zalloc (dynobj, s->size);
7659 if (s->contents == NULL)
7660 return FALSE;
7661 }
7662
7663 if (htab->root.dynamic_sections_created)
7664 {
7665 /* Add some entries to the .dynamic section. We fill in the
7666 values later, in elfNN_aarch64_finish_dynamic_sections, but we
7667 must add the entries now so that we get the correct size for
7668 the .dynamic section. The DT_DEBUG entry is filled in by the
7669 dynamic linker and used by the debugger. */
7670 #define add_dynamic_entry(TAG, VAL) \
7671 _bfd_elf_add_dynamic_entry (info, TAG, VAL)
7672
7673 if (info->executable)
7674 {
7675 if (!add_dynamic_entry (DT_DEBUG, 0))
7676 return FALSE;
7677 }
7678
7679 if (htab->root.splt->size != 0)
7680 {
7681 if (!add_dynamic_entry (DT_PLTGOT, 0)
7682 || !add_dynamic_entry (DT_PLTRELSZ, 0)
7683 || !add_dynamic_entry (DT_PLTREL, DT_RELA)
7684 || !add_dynamic_entry (DT_JMPREL, 0))
7685 return FALSE;
7686
7687 if (htab->tlsdesc_plt
7688 && (!add_dynamic_entry (DT_TLSDESC_PLT, 0)
7689 || !add_dynamic_entry (DT_TLSDESC_GOT, 0)))
7690 return FALSE;
7691 }
7692
7693 if (relocs)
7694 {
7695 if (!add_dynamic_entry (DT_RELA, 0)
7696 || !add_dynamic_entry (DT_RELASZ, 0)
7697 || !add_dynamic_entry (DT_RELAENT, RELOC_SIZE (htab)))
7698 return FALSE;
7699
7700 /* If any dynamic relocs apply to a read-only section,
7701 then we need a DT_TEXTREL entry. */
7702 if ((info->flags & DF_TEXTREL) != 0)
7703 {
7704 if (!add_dynamic_entry (DT_TEXTREL, 0))
7705 return FALSE;
7706 }
7707 }
7708 }
7709 #undef add_dynamic_entry
7710
7711 return TRUE;
7712 }
7713
7714 static inline void
7715 elf_aarch64_update_plt_entry (bfd *output_bfd,
7716 bfd_reloc_code_real_type r_type,
7717 bfd_byte *plt_entry, bfd_vma value)
7718 {
7719 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (r_type);
7720
7721 _bfd_aarch64_elf_put_addend (output_bfd, plt_entry, r_type, howto, value);
7722 }
7723
7724 static void
7725 elfNN_aarch64_create_small_pltn_entry (struct elf_link_hash_entry *h,
7726 struct elf_aarch64_link_hash_table
7727 *htab, bfd *output_bfd,
7728 struct bfd_link_info *info)
7729 {
7730 bfd_byte *plt_entry;
7731 bfd_vma plt_index;
7732 bfd_vma got_offset;
7733 bfd_vma gotplt_entry_address;
7734 bfd_vma plt_entry_address;
7735 Elf_Internal_Rela rela;
7736 bfd_byte *loc;
7737 asection *plt, *gotplt, *relplt;
7738
7739 /* When building a static executable, use .iplt, .igot.plt and
7740 .rela.iplt sections for STT_GNU_IFUNC symbols. */
7741 if (htab->root.splt != NULL)
7742 {
7743 plt = htab->root.splt;
7744 gotplt = htab->root.sgotplt;
7745 relplt = htab->root.srelplt;
7746 }
7747 else
7748 {
7749 plt = htab->root.iplt;
7750 gotplt = htab->root.igotplt;
7751 relplt = htab->root.irelplt;
7752 }
7753
7754 /* Get the index in the procedure linkage table which
7755 corresponds to this symbol. This is the index of this symbol
7756 in all the symbols for which we are making plt entries. The
7757 first entry in the procedure linkage table is reserved.
7758
7759 Get the offset into the .got table of the entry that
7760 corresponds to this function. Each .got entry is GOT_ENTRY_SIZE
7761 bytes. The first three are reserved for the dynamic linker.
7762
7763 For static executables, we don't reserve anything. */
7764
7765 if (plt == htab->root.splt)
7766 {
7767 plt_index = (h->plt.offset - htab->plt_header_size) / htab->plt_entry_size;
7768 got_offset = (plt_index + 3) * GOT_ENTRY_SIZE;
7769 }
7770 else
7771 {
7772 plt_index = h->plt.offset / htab->plt_entry_size;
7773 got_offset = plt_index * GOT_ENTRY_SIZE;
7774 }
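/* Worked example (editor's addition; the concrete sizes are assumptions
   for illustration): in the dynamic ELF64 case, if h->plt.offset points
   at the second PLTn stub, i.e. plt_header_size + 1 * plt_entry_size,
   then plt_index == 1 and got_offset == (1 + 3) * 8 == 32, skipping the
   three .got.plt slots reserved for the dynamic linker.  */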
7775
7776 plt_entry = plt->contents + h->plt.offset;
7777 plt_entry_address = plt->output_section->vma
7778 + plt->output_offset + h->plt.offset;
7779 gotplt_entry_address = gotplt->output_section->vma +
7780 gotplt->output_offset + got_offset;
7781
7782 /* Copy in the boiler-plate for the PLTn entry. */
7783 memcpy (plt_entry, elfNN_aarch64_small_plt_entry, PLT_SMALL_ENTRY_SIZE);
7784
7785 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
7786 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
7787 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
7788 plt_entry,
7789 PG (gotplt_entry_address) -
7790 PG (plt_entry_address));
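/* Editor's note: PG and PG_OFFSET are defined earlier in this file; the
   assumption here is that PG (x) yields the 4KiB page base of X (low 12
   bits cleared) and PG_OFFSET (x) keeps only those low 12 bits, so the
   ADRP immediate above encodes the page delta between the GOTPLT slot
   and this PLT entry, while the LDR/ADD below use the in-page offset.  */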
7791
7792 /* Fill in the lo12 bits for the load from the pltgot. */
7793 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
7794 plt_entry + 4,
7795 PG_OFFSET (gotplt_entry_address));
7796
7797 /* Fill in the lo12 bits for the add from the pltgot entry. */
7798 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
7799 plt_entry + 8,
7800 PG_OFFSET (gotplt_entry_address));
7801
7802 /* All the GOTPLT entries are initialized to the address of PLT0. */
7803 bfd_put_NN (output_bfd,
7804 plt->output_section->vma + plt->output_offset,
7805 gotplt->contents + got_offset);
7806
7807 rela.r_offset = gotplt_entry_address;
7808
7809 if (h->dynindx == -1
7810 || ((info->executable
7811 || ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7812 && h->def_regular
7813 && h->type == STT_GNU_IFUNC))
7814 {
7815 /* If an STT_GNU_IFUNC symbol is locally defined, generate
7816 R_AARCH64_IRELATIVE instead of R_AARCH64_JUMP_SLOT. */
7817 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
7818 rela.r_addend = (h->root.u.def.value
7819 + h->root.u.def.section->output_section->vma
7820 + h->root.u.def.section->output_offset);
7821 }
7822 else
7823 {
7824 /* Fill in the entry in the .rela.plt section. */
7825 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (JUMP_SLOT));
7826 rela.r_addend = 0;
7827 }
7828
7829 /* Compute the relocation entry to be used based on the PLT index and do
7830 not adjust reloc_count. The reloc_count has already been adjusted
7831 to account for this entry. */
7832 loc = relplt->contents + plt_index * RELOC_SIZE (htab);
7833 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
7834 }
7835
7836 /* Size sections even though they're not dynamic. We use this hook to
7837 set up _TLS_MODULE_BASE_, if needed. */
7838
7839 static bfd_boolean
7840 elfNN_aarch64_always_size_sections (bfd *output_bfd,
7841 struct bfd_link_info *info)
7842 {
7843 asection *tls_sec;
7844
7845 if (info->relocatable)
7846 return TRUE;
7847
7848 tls_sec = elf_hash_table (info)->tls_sec;
7849
7850 if (tls_sec)
7851 {
7852 struct elf_link_hash_entry *tlsbase;
7853
7854 tlsbase = elf_link_hash_lookup (elf_hash_table (info),
7855 "_TLS_MODULE_BASE_", TRUE, TRUE, FALSE);
7856
7857 if (tlsbase)
7858 {
7859 struct bfd_link_hash_entry *h = NULL;
7860 const struct elf_backend_data *bed =
7861 get_elf_backend_data (output_bfd);
7862
7863 if (!(_bfd_generic_link_add_one_symbol
7864 (info, output_bfd, "_TLS_MODULE_BASE_", BSF_LOCAL,
7865 tls_sec, 0, NULL, FALSE, bed->collect, &h)))
7866 return FALSE;
7867
7868 tlsbase->type = STT_TLS;
7869 tlsbase = (struct elf_link_hash_entry *) h;
7870 tlsbase->def_regular = 1;
7871 tlsbase->other = STV_HIDDEN;
7872 (*bed->elf_backend_hide_symbol) (info, tlsbase, TRUE);
7873 }
7874 }
7875
7876 return TRUE;
7877 }
7878
7879 /* Finish up dynamic symbol handling. We set the contents of various
7880 dynamic sections here. */
7881 static bfd_boolean
7882 elfNN_aarch64_finish_dynamic_symbol (bfd *output_bfd,
7883 struct bfd_link_info *info,
7884 struct elf_link_hash_entry *h,
7885 Elf_Internal_Sym *sym)
7886 {
7887 struct elf_aarch64_link_hash_table *htab;
7888 htab = elf_aarch64_hash_table (info);
7889
7890 if (h->plt.offset != (bfd_vma) - 1)
7891 {
7892 asection *plt, *gotplt, *relplt;
7893
7894 /* This symbol has an entry in the procedure linkage table. Set
7895 it up. */
7896
7897 /* When building a static executable, use .iplt, .igot.plt and
7898 .rela.iplt sections for STT_GNU_IFUNC symbols. */
7899 if (htab->root.splt != NULL)
7900 {
7901 plt = htab->root.splt;
7902 gotplt = htab->root.sgotplt;
7903 relplt = htab->root.srelplt;
7904 }
7905 else
7906 {
7907 plt = htab->root.iplt;
7908 gotplt = htab->root.igotplt;
7909 relplt = htab->root.irelplt;
7910 }
7911
7912 /* This symbol has an entry in the procedure linkage table. Set
7913 it up. */
7914 if ((h->dynindx == -1
7915 && !((h->forced_local || info->executable)
7916 && h->def_regular
7917 && h->type == STT_GNU_IFUNC))
7918 || plt == NULL
7919 || gotplt == NULL
7920 || relplt == NULL)
7921 abort ();
7922
7923 elfNN_aarch64_create_small_pltn_entry (h, htab, output_bfd, info);
7924 if (!h->def_regular)
7925 {
7926 /* Mark the symbol as undefined, rather than as defined in
7927 the .plt section. */
7928 sym->st_shndx = SHN_UNDEF;
7929 /* If the symbol is weak we need to clear the value.
7930 Otherwise, the PLT entry would provide a definition for
7931 the symbol even if the symbol wasn't defined anywhere,
7932 and so the symbol would never be NULL. Leave the value if
7933 there were any relocations where pointer equality matters
7934 (this is a clue for the dynamic linker, to make function
7935 pointer comparisons work between an application and shared
7936 library). */
7937 if (!h->ref_regular_nonweak || !h->pointer_equality_needed)
7938 sym->st_value = 0;
7939 }
7940 }
7941
7942 if (h->got.offset != (bfd_vma) - 1
7943 && elf_aarch64_hash_entry (h)->got_type == GOT_NORMAL)
7944 {
7945 Elf_Internal_Rela rela;
7946 bfd_byte *loc;
7947
7948 /* This symbol has an entry in the global offset table. Set it
7949 up. */
7950 if (htab->root.sgot == NULL || htab->root.srelgot == NULL)
7951 abort ();
7952
7953 rela.r_offset = (htab->root.sgot->output_section->vma
7954 + htab->root.sgot->output_offset
7955 + (h->got.offset & ~(bfd_vma) 1));
7956
7957 if (h->def_regular
7958 && h->type == STT_GNU_IFUNC)
7959 {
7960 if (info->shared)
7961 {
7962 /* Generate R_AARCH64_GLOB_DAT. */
7963 goto do_glob_dat;
7964 }
7965 else
7966 {
7967 asection *plt;
7968
7969 if (!h->pointer_equality_needed)
7970 abort ();
7971
7972 /* For non-shared object, we can't use .got.plt, which
7973 contains the real function address if we need pointer
7974 equality. We load the GOT entry with the PLT entry. */
7975 plt = htab->root.splt ? htab->root.splt : htab->root.iplt;
7976 bfd_put_NN (output_bfd, (plt->output_section->vma
7977 + plt->output_offset
7978 + h->plt.offset),
7979 htab->root.sgot->contents
7980 + (h->got.offset & ~(bfd_vma) 1));
7981 return TRUE;
7982 }
7983 }
7984 else if (info->shared && SYMBOL_REFERENCES_LOCAL (info, h))
7985 {
7986 if (!h->def_regular)
7987 return FALSE;
7988
7989 BFD_ASSERT ((h->got.offset & 1) != 0);
7990 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
7991 rela.r_addend = (h->root.u.def.value
7992 + h->root.u.def.section->output_section->vma
7993 + h->root.u.def.section->output_offset);
7994 }
7995 else
7996 {
7997 do_glob_dat:
7998 BFD_ASSERT ((h->got.offset & 1) == 0);
7999 bfd_put_NN (output_bfd, (bfd_vma) 0,
8000 htab->root.sgot->contents + h->got.offset);
8001 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (GLOB_DAT));
8002 rela.r_addend = 0;
8003 }
8004
8005 loc = htab->root.srelgot->contents;
8006 loc += htab->root.srelgot->reloc_count++ * RELOC_SIZE (htab);
8007 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8008 }
8009
8010 if (h->needs_copy)
8011 {
8012 Elf_Internal_Rela rela;
8013 bfd_byte *loc;
8014
8015 /* This symbol needs a copy reloc. Set it up. */
8016
8017 if (h->dynindx == -1
8018 || (h->root.type != bfd_link_hash_defined
8019 && h->root.type != bfd_link_hash_defweak)
8020 || htab->srelbss == NULL)
8021 abort ();
8022
8023 rela.r_offset = (h->root.u.def.value
8024 + h->root.u.def.section->output_section->vma
8025 + h->root.u.def.section->output_offset);
8026 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (COPY));
8027 rela.r_addend = 0;
8028 loc = htab->srelbss->contents;
8029 loc += htab->srelbss->reloc_count++ * RELOC_SIZE (htab);
8030 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8031 }
8032
8033 /* Mark _DYNAMIC and _GLOBAL_OFFSET_TABLE_ as absolute. SYM may
8034 be NULL for local symbols. */
8035 if (sym != NULL
8036 && (h == elf_hash_table (info)->hdynamic
8037 || h == elf_hash_table (info)->hgot))
8038 sym->st_shndx = SHN_ABS;
8039
8040 return TRUE;
8041 }
8042
8043 /* Finish up local dynamic symbol handling. We set the contents of
8044 various dynamic sections here. */
8045
8046 static bfd_boolean
8047 elfNN_aarch64_finish_local_dynamic_symbol (void **slot, void *inf)
8048 {
8049 struct elf_link_hash_entry *h
8050 = (struct elf_link_hash_entry *) *slot;
8051 struct bfd_link_info *info
8052 = (struct bfd_link_info *) inf;
8053
8054 return elfNN_aarch64_finish_dynamic_symbol (info->output_bfd,
8055 info, h, NULL);
8056 }
8057
8058 static void
8059 elfNN_aarch64_init_small_plt0_entry (bfd *output_bfd ATTRIBUTE_UNUSED,
8060 struct elf_aarch64_link_hash_table
8061 *htab)
8062 {
8063 /* Fill in PLT0. Fixme:RR Note this doesn't distinguish between
8064 small and large PLTs and at the moment just generates
8065 the small PLT. */
8066
8067 /* PLT0 of the small PLT looks like this in ELF64 -
8068 stp x16, x30, [sp, #-16]! // Save the reloc and lr on stack.
8069 adrp x16, PLT_GOT + 16 // Get the page base of the GOTPLT
8070 ldr x17, [x16, #:lo12:PLT_GOT+16] // Load the address of the
8071 // symbol resolver
8072 add x16, x16, #:lo12:PLT_GOT+16 // Load the lo12 bits of the
8073 // GOTPLT entry for this.
8074 br x17
8075 PLT0 will be slightly different in ELF32 due to the different GOT
8076 entry size.
8077 */
8078 bfd_vma plt_got_2nd_ent; /* Address of GOT[2]. */
8079 bfd_vma plt_base;
8080
8081
8082 memcpy (htab->root.splt->contents, elfNN_aarch64_small_plt0_entry,
8083 PLT_ENTRY_SIZE);
8084 elf_section_data (htab->root.splt->output_section)->this_hdr.sh_entsize =
8085 PLT_ENTRY_SIZE;
8086
8087 plt_got_2nd_ent = (htab->root.sgotplt->output_section->vma
8088 + htab->root.sgotplt->output_offset
8089 + GOT_ENTRY_SIZE * 2);
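/* Editor's note: assuming GOT_ENTRY_SIZE is 8 for ELF64, this is
   PLT_GOT + 16, matching the "adrp x16, PLT_GOT + 16" in the PLT0
   sketch above; for ELF32 the offset scales down with the smaller
   GOT entry size.  */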
8090
8091 plt_base = htab->root.splt->output_section->vma +
8092 htab->root.splt->output_offset;
8093
8094 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
8095 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
8096 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8097 htab->root.splt->contents + 4,
8098 PG (plt_got_2nd_ent) - PG (plt_base + 4));
8099
8100 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
8101 htab->root.splt->contents + 8,
8102 PG_OFFSET (plt_got_2nd_ent));
8103
8104 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
8105 htab->root.splt->contents + 12,
8106 PG_OFFSET (plt_got_2nd_ent));
8107 }
8108
8109 static bfd_boolean
8110 elfNN_aarch64_finish_dynamic_sections (bfd *output_bfd,
8111 struct bfd_link_info *info)
8112 {
8113 struct elf_aarch64_link_hash_table *htab;
8114 bfd *dynobj;
8115 asection *sdyn;
8116
8117 htab = elf_aarch64_hash_table (info);
8118 dynobj = htab->root.dynobj;
8119 sdyn = bfd_get_linker_section (dynobj, ".dynamic");
8120
8121 if (htab->root.dynamic_sections_created)
8122 {
8123 ElfNN_External_Dyn *dyncon, *dynconend;
8124
8125 if (sdyn == NULL || htab->root.sgot == NULL)
8126 abort ();
8127
8128 dyncon = (ElfNN_External_Dyn *) sdyn->contents;
8129 dynconend = (ElfNN_External_Dyn *) (sdyn->contents + sdyn->size);
8130 for (; dyncon < dynconend; dyncon++)
8131 {
8132 Elf_Internal_Dyn dyn;
8133 asection *s;
8134
8135 bfd_elfNN_swap_dyn_in (dynobj, dyncon, &dyn);
8136
8137 switch (dyn.d_tag)
8138 {
8139 default:
8140 continue;
8141
8142 case DT_PLTGOT:
8143 s = htab->root.sgotplt;
8144 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset;
8145 break;
8146
8147 case DT_JMPREL:
8148 dyn.d_un.d_ptr = htab->root.srelplt->output_section->vma;
8149 break;
8150
8151 case DT_PLTRELSZ:
8152 s = htab->root.srelplt;
8153 dyn.d_un.d_val = s->size;
8154 break;
8155
8156 case DT_RELASZ:
8157 /* The procedure linkage table relocs (DT_JMPREL) should
8158 not be included in the overall relocs (DT_RELA).
8159 Therefore, we override the DT_RELASZ entry here to
8160 make it not include the JMPREL relocs. Since the
8161 linker script arranges for .rela.plt to follow all
8162 other relocation sections, we don't have to worry
8163 about changing the DT_RELA entry. */
8164 if (htab->root.srelplt != NULL)
8165 {
8166 s = htab->root.srelplt;
8167 dyn.d_un.d_val -= s->size;
8168 }
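/* Illustrative example (editor's addition; the numbers are made up):
   if DT_RELASZ was initially 0x300 bytes and .rela.plt occupies 0x60
   of those, the value written here becomes 0x2a0, and the JMPREL
   relocs are then described only by DT_PLTRELSZ above.  */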
8169 break;
8170
8171 case DT_TLSDESC_PLT:
8172 s = htab->root.splt;
8173 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8174 + htab->tlsdesc_plt;
8175 break;
8176
8177 case DT_TLSDESC_GOT:
8178 s = htab->root.sgot;
8179 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8180 + htab->dt_tlsdesc_got;
8181 break;
8182 }
8183
8184 bfd_elfNN_swap_dyn_out (output_bfd, &dyn, dyncon);
8185 }
8186
8187 }
8188
8189 /* Fill in the special first entry in the procedure linkage table. */
8190 if (htab->root.splt && htab->root.splt->size > 0)
8191 {
8192 elfNN_aarch64_init_small_plt0_entry (output_bfd, htab);
8193
8194 elf_section_data (htab->root.splt->output_section)->
8195 this_hdr.sh_entsize = htab->plt_entry_size;
8196
8197
8198 if (htab->tlsdesc_plt)
8199 {
8200 bfd_put_NN (output_bfd, (bfd_vma) 0,
8201 htab->root.sgot->contents + htab->dt_tlsdesc_got);
8202
8203 memcpy (htab->root.splt->contents + htab->tlsdesc_plt,
8204 elfNN_aarch64_tlsdesc_small_plt_entry,
8205 sizeof (elfNN_aarch64_tlsdesc_small_plt_entry));
8206
8207 {
8208 bfd_vma adrp1_addr =
8209 htab->root.splt->output_section->vma
8210 + htab->root.splt->output_offset + htab->tlsdesc_plt + 4;
8211
8212 bfd_vma adrp2_addr = adrp1_addr + 4;
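/* Editor's note: adrp1_addr and adrp2_addr are the run-time addresses
   of the two ADRP instructions patched below (at offsets 4 and 8 within
   the stub); since ADRP immediates are page-relative to the instruction
   itself, each target page is computed relative to the corresponding
   ADRP's own page.  */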
8213
8214 bfd_vma got_addr =
8215 htab->root.sgot->output_section->vma
8216 + htab->root.sgot->output_offset;
8217
8218 bfd_vma pltgot_addr =
8219 htab->root.sgotplt->output_section->vma
8220 + htab->root.sgotplt->output_offset;
8221
8222 bfd_vma dt_tlsdesc_got = got_addr + htab->dt_tlsdesc_got;
8223
8224 bfd_byte *plt_entry =
8225 htab->root.splt->contents + htab->tlsdesc_plt;
8226
8227 /* adrp x2, DT_TLSDESC_GOT */
8228 elf_aarch64_update_plt_entry (output_bfd,
8229 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8230 plt_entry + 4,
8231 (PG (dt_tlsdesc_got)
8232 - PG (adrp1_addr)));
8233
8234 /* adrp x3, 0 */
8235 elf_aarch64_update_plt_entry (output_bfd,
8236 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8237 plt_entry + 8,
8238 (PG (pltgot_addr)
8239 - PG (adrp2_addr)));
8240
8241 /* ldr x2, [x2, #0] */
8242 elf_aarch64_update_plt_entry (output_bfd,
8243 BFD_RELOC_AARCH64_LDSTNN_LO12,
8244 plt_entry + 12,
8245 PG_OFFSET (dt_tlsdesc_got));
8246
8247 /* add x3, x3, 0 */
8248 elf_aarch64_update_plt_entry (output_bfd,
8249 BFD_RELOC_AARCH64_ADD_LO12,
8250 plt_entry + 16,
8251 PG_OFFSET (pltgot_addr));
8252 }
8253 }
8254 }
8255
8256 if (htab->root.sgotplt)
8257 {
8258 if (bfd_is_abs_section (htab->root.sgotplt->output_section))
8259 {
8260 (*_bfd_error_handler)
8261 (_("discarded output section: `%A'"), htab->root.sgotplt);
8262 return FALSE;
8263 }
8264
8265 /* Fill in the first three entries in the global offset table. */
8266 if (htab->root.sgotplt->size > 0)
8267 {
8268 bfd_put_NN (output_bfd, (bfd_vma) 0, htab->root.sgotplt->contents);
8269
8270 /* Write GOT[1] and GOT[2], needed for the dynamic linker. */
8271 bfd_put_NN (output_bfd,
8272 (bfd_vma) 0,
8273 htab->root.sgotplt->contents + GOT_ENTRY_SIZE);
8274 bfd_put_NN (output_bfd,
8275 (bfd_vma) 0,
8276 htab->root.sgotplt->contents + GOT_ENTRY_SIZE * 2);
8277 }
8278
8279 if (htab->root.sgot)
8280 {
8281 if (htab->root.sgot->size > 0)
8282 {
8283 bfd_vma addr =
8284 sdyn ? sdyn->output_section->vma + sdyn->output_offset : 0;
8285 bfd_put_NN (output_bfd, addr, htab->root.sgot->contents);
8286 }
8287 }
8288
8289 elf_section_data (htab->root.sgotplt->output_section)->
8290 this_hdr.sh_entsize = GOT_ENTRY_SIZE;
8291 }
8292
8293 if (htab->root.sgot && htab->root.sgot->size > 0)
8294 elf_section_data (htab->root.sgot->output_section)->this_hdr.sh_entsize
8295 = GOT_ENTRY_SIZE;
8296
8297 /* Fill PLT and GOT entries for local STT_GNU_IFUNC symbols. */
8298 htab_traverse (htab->loc_hash_table,
8299 elfNN_aarch64_finish_local_dynamic_symbol,
8300 info);
8301
8302 return TRUE;
8303 }
8304
8305 /* Return the address of the Ith PLT stub in section PLT, for relocation
8306 REL, or (bfd_vma) -1 if it should not be included. */
8307
8308 static bfd_vma
8309 elfNN_aarch64_plt_sym_val (bfd_vma i, const asection *plt,
8310 const arelent *rel ATTRIBUTE_UNUSED)
8311 {
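  /* Editor's note: stub 0 therefore sits immediately after the reserved
     PLT0 header, at plt->vma + PLT_ENTRY_SIZE, and each subsequent stub
     follows at PLT_SMALL_ENTRY_SIZE byte intervals.  */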
8312 return plt->vma + PLT_ENTRY_SIZE + i * PLT_SMALL_ENTRY_SIZE;
8313 }
8314
8315
8316 /* We use this so we can override certain functions
8317 (though currently we don't). */
8318
8319 const struct elf_size_info elfNN_aarch64_size_info =
8320 {
8321 sizeof (ElfNN_External_Ehdr),
8322 sizeof (ElfNN_External_Phdr),
8323 sizeof (ElfNN_External_Shdr),
8324 sizeof (ElfNN_External_Rel),
8325 sizeof (ElfNN_External_Rela),
8326 sizeof (ElfNN_External_Sym),
8327 sizeof (ElfNN_External_Dyn),
8328 sizeof (Elf_External_Note),
8329 4, /* Hash table entry size. */
8330 1, /* Internal relocs per external relocs. */
8331 ARCH_SIZE, /* Arch size. */
8332 LOG_FILE_ALIGN, /* Log_file_align. */
8333 ELFCLASSNN, EV_CURRENT,
8334 bfd_elfNN_write_out_phdrs,
8335 bfd_elfNN_write_shdrs_and_ehdr,
8336 bfd_elfNN_checksum_contents,
8337 bfd_elfNN_write_relocs,
8338 bfd_elfNN_swap_symbol_in,
8339 bfd_elfNN_swap_symbol_out,
8340 bfd_elfNN_slurp_reloc_table,
8341 bfd_elfNN_slurp_symbol_table,
8342 bfd_elfNN_swap_dyn_in,
8343 bfd_elfNN_swap_dyn_out,
8344 bfd_elfNN_swap_reloc_in,
8345 bfd_elfNN_swap_reloc_out,
8346 bfd_elfNN_swap_reloca_in,
8347 bfd_elfNN_swap_reloca_out
8348 };
8349
8350 #define ELF_ARCH bfd_arch_aarch64
8351 #define ELF_MACHINE_CODE EM_AARCH64
8352 #define ELF_MAXPAGESIZE 0x10000
8353 #define ELF_MINPAGESIZE 0x1000
8354 #define ELF_COMMONPAGESIZE 0x1000
8355
8356 #define bfd_elfNN_close_and_cleanup \
8357 elfNN_aarch64_close_and_cleanup
8358
8359 #define bfd_elfNN_bfd_free_cached_info \
8360 elfNN_aarch64_bfd_free_cached_info
8361
8362 #define bfd_elfNN_bfd_is_target_special_symbol \
8363 elfNN_aarch64_is_target_special_symbol
8364
8365 #define bfd_elfNN_bfd_link_hash_table_create \
8366 elfNN_aarch64_link_hash_table_create
8367
8368 #define bfd_elfNN_bfd_merge_private_bfd_data \
8369 elfNN_aarch64_merge_private_bfd_data
8370
8371 #define bfd_elfNN_bfd_print_private_bfd_data \
8372 elfNN_aarch64_print_private_bfd_data
8373
8374 #define bfd_elfNN_bfd_reloc_type_lookup \
8375 elfNN_aarch64_reloc_type_lookup
8376
8377 #define bfd_elfNN_bfd_reloc_name_lookup \
8378 elfNN_aarch64_reloc_name_lookup
8379
8380 #define bfd_elfNN_bfd_set_private_flags \
8381 elfNN_aarch64_set_private_flags
8382
8383 #define bfd_elfNN_find_inliner_info \
8384 elfNN_aarch64_find_inliner_info
8385
8386 #define bfd_elfNN_find_nearest_line \
8387 elfNN_aarch64_find_nearest_line
8388
8389 #define bfd_elfNN_mkobject \
8390 elfNN_aarch64_mkobject
8391
8392 #define bfd_elfNN_new_section_hook \
8393 elfNN_aarch64_new_section_hook
8394
8395 #define elf_backend_adjust_dynamic_symbol \
8396 elfNN_aarch64_adjust_dynamic_symbol
8397
8398 #define elf_backend_always_size_sections \
8399 elfNN_aarch64_always_size_sections
8400
8401 #define elf_backend_check_relocs \
8402 elfNN_aarch64_check_relocs
8403
8404 #define elf_backend_copy_indirect_symbol \
8405 elfNN_aarch64_copy_indirect_symbol
8406
8407 /* Create .dynbss, and .rela.bss sections in DYNOBJ, and set up shortcuts
8408 to them in our hash. */
8409 #define elf_backend_create_dynamic_sections \
8410 elfNN_aarch64_create_dynamic_sections
8411
8412 #define elf_backend_init_index_section \
8413 _bfd_elf_init_2_index_sections
8414
8415 #define elf_backend_finish_dynamic_sections \
8416 elfNN_aarch64_finish_dynamic_sections
8417
8418 #define elf_backend_finish_dynamic_symbol \
8419 elfNN_aarch64_finish_dynamic_symbol
8420
8421 #define elf_backend_gc_sweep_hook \
8422 elfNN_aarch64_gc_sweep_hook
8423
8424 #define elf_backend_object_p \
8425 elfNN_aarch64_object_p
8426
8427 #define elf_backend_output_arch_local_syms \
8428 elfNN_aarch64_output_arch_local_syms
8429
8430 #define elf_backend_plt_sym_val \
8431 elfNN_aarch64_plt_sym_val
8432
8433 #define elf_backend_post_process_headers \
8434 elfNN_aarch64_post_process_headers
8435
8436 #define elf_backend_relocate_section \
8437 elfNN_aarch64_relocate_section
8438
8439 #define elf_backend_reloc_type_class \
8440 elfNN_aarch64_reloc_type_class
8441
8442 #define elf_backend_section_from_shdr \
8443 elfNN_aarch64_section_from_shdr
8444
8445 #define elf_backend_size_dynamic_sections \
8446 elfNN_aarch64_size_dynamic_sections
8447
8448 #define elf_backend_size_info \
8449 elfNN_aarch64_size_info
8450
8451 #define elf_backend_write_section \
8452 elfNN_aarch64_write_section
8453
8454 #define elf_backend_can_refcount 1
8455 #define elf_backend_can_gc_sections 1
8456 #define elf_backend_plt_readonly 1
8457 #define elf_backend_want_got_plt 1
8458 #define elf_backend_want_plt_sym 0
8459 #define elf_backend_may_use_rel_p 0
8460 #define elf_backend_may_use_rela_p 1
8461 #define elf_backend_default_use_rela_p 1
8462 #define elf_backend_rela_normal 1
8463 #define elf_backend_got_header_size (GOT_ENTRY_SIZE * 3)
8464 #define elf_backend_default_execstack 0
8465
8466 #undef elf_backend_obj_attrs_section
8467 #define elf_backend_obj_attrs_section ".ARM.attributes"
8468
8469 #include "elfNN-target.h"