[AArch64] GAS support BFD_RELOC_AARCH64_LD32_GOTPAGE_LO14
[deliverable/binutils-gdb.git] / bfd / elfnn-aarch64.c
1/* AArch64-specific support for NN-bit ELF.
2 Copyright (C) 2009-2015 Free Software Foundation, Inc.
3 Contributed by ARM Ltd.
4
5 This file is part of BFD, the Binary File Descriptor library.
6
7 This program is free software; you can redistribute it and/or modify
8 it under the terms of the GNU General Public License as published by
9 the Free Software Foundation; either version 3 of the License, or
10 (at your option) any later version.
11
12 This program is distributed in the hope that it will be useful,
13 but WITHOUT ANY WARRANTY; without even the implied warranty of
14 MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 GNU General Public License for more details.
16
17 You should have received a copy of the GNU General Public License
18 along with this program; see the file COPYING3. If not,
19 see <http://www.gnu.org/licenses/>. */
20
21/* Notes on implementation:
22
 23   Thread Local Storage (TLS)
24
25 Overview:
26
27 The implementation currently supports both traditional TLS and TLS
28 descriptors, but only general dynamic (GD).
29
30 For traditional TLS the assembler will present us with code
31 fragments of the form:
32
33 adrp x0, :tlsgd:foo
34 R_AARCH64_TLSGD_ADR_PAGE21(foo)
35 add x0, :tlsgd_lo12:foo
36 R_AARCH64_TLSGD_ADD_LO12_NC(foo)
37 bl __tls_get_addr
38 nop
39
40 For TLS descriptors the assembler will present us with code
41 fragments of the form:
42
43 adrp x0, :tlsdesc:foo R_AARCH64_TLSDESC_ADR_PAGE21(foo)
44 ldr x1, [x0, #:tlsdesc_lo12:foo] R_AARCH64_TLSDESC_LD64_LO12(foo)
45 add x0, x0, #:tlsdesc_lo12:foo R_AARCH64_TLSDESC_ADD_LO12(foo)
46 .tlsdesccall foo
47 blr x1 R_AARCH64_TLSDESC_CALL(foo)
48
49 The relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} against foo
50 indicate that foo is thread local and should be accessed via the
 51   traditional TLS mechanisms.
52
53 The relocations R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC}
54 against foo indicate that 'foo' is thread local and should be accessed
55 via a TLS descriptor mechanism.
56
57 The precise instruction sequence is only relevant from the
58 perspective of linker relaxation which is currently not implemented.
59
60 The static linker must detect that 'foo' is a TLS object and
61 allocate a double GOT entry. The GOT entry must be created for both
 62   global and local TLS symbols. Note that this is different from
 63   non-TLS local objects, which do not need a GOT entry.
64
65 In the traditional TLS mechanism, the double GOT entry is used to
66 provide the tls_index structure, containing module and offset
67 entries. The static linker places the relocation R_AARCH64_TLS_DTPMOD
 68   on the module entry. The loader will subsequently fix up this
69 relocation with the module identity.
70
71 For global traditional TLS symbols the static linker places an
72 R_AARCH64_TLS_DTPREL relocation on the offset entry. The loader
 73   will subsequently fix up the offset. For local TLS symbols the static
 74   linker fixes up the offset itself.
75
76 In the TLS descriptor mechanism the double GOT entry is used to
77 provide the descriptor. The static linker places the relocation
78 R_AARCH64_TLSDESC on the first GOT slot. The loader will
79 subsequently fix this up.
80
81 Implementation:
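   Illustrative layout (added commentary, not part of the original text;
   the slot indices are arbitrary examples):

     traditional TLS (tls_index pair)
       GOT[n]    R_AARCH64_TLS_DTPMOD placed here  -> module id
       GOT[n+1]  R_AARCH64_TLS_DTPREL placed here  -> offset within module
                 (for global symbols; local offsets are fixed statically)

     TLS descriptor
       GOT[n]    R_AARCH64_TLSDESC placed here     -> resolver function
       GOT[n+1]  (no relocation)                   -> resolver argument
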
82
83 The handling of TLS symbols is implemented across a number of
84 different backend functions. The following is a top level view of
85 what processing is performed where.
86
87 The TLS implementation maintains state information for each TLS
88 symbol. The state information for local and global symbols is kept
89 in different places. Global symbols use generic BFD structures while
90 local symbols use backend specific structures that are allocated and
91 maintained entirely by the backend.
92
93 The flow:
94
95 elfNN_aarch64_check_relocs()
96
97 This function is invoked for each relocation.
98
99 The TLS relocations R_AARCH64_TLSGD_{ADR_PREL21,ADD_LO12_NC} and
100 R_AARCH64_TLSDESC_{ADR_PAGE21,LD64_LO12_NC,ADD_LO12_NC} are
 101   spotted. The local symbol data structures are created once, when
 102   the first local symbol is seen.
103
104 The reference count for a symbol is incremented. The GOT type for
105 each symbol is marked as general dynamic.
106
107 elfNN_aarch64_allocate_dynrelocs ()
108
109 For each global with positive reference count we allocate a double
110 GOT slot. For a traditional TLS symbol we allocate space for two
 111   relocation entries on the GOT; for a TLS descriptor symbol we
112 allocate space for one relocation on the slot. Record the GOT offset
113 for this symbol.
114
115 elfNN_aarch64_size_dynamic_sections ()
116
 117   Iterate over all input BFDs, look in the local symbol data structures
118 constructed earlier for local TLS symbols and allocate them double
119 GOT slots along with space for a single GOT relocation. Update the
120 local symbol structure to record the GOT offset allocated.
121
122 elfNN_aarch64_relocate_section ()
123
124 Calls elfNN_aarch64_final_link_relocate ()
125
126 Emit the relevant TLS relocations against the GOT for each TLS
127 symbol. For local TLS symbols emit the GOT offset directly. The GOT
 128   relocations are emitted once, the first time a TLS symbol is
129 encountered. The implementation uses the LSB of the GOT offset to
130 flag that the relevant GOT relocations for a symbol have been
131 emitted. All of the TLS code that uses the GOT offset needs to take
132 care to mask out this flag bit before using the offset.
133
134 elfNN_aarch64_final_link_relocate ()
135
 136   Fix up the R_AARCH64_TLSGD_{ADR_PREL21, ADD_LO12_NC} relocations. */
137
138#include "sysdep.h"
139#include "bfd.h"
140#include "libiberty.h"
141#include "libbfd.h"
142#include "bfd_stdint.h"
143#include "elf-bfd.h"
144#include "bfdlink.h"
145#include "objalloc.h"
146#include "elf/aarch64.h"
147#include "elfxx-aarch64.h"
148
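/* Illustrative sketch, added commentary rather than part of the original
   file: the overview comment above describes recording "GOT relocations
   already emitted" in the least significant bit of a symbol's GOT offset.
   The hypothetical helpers below (names invented for this sketch only)
   show the masking convention that such code has to follow.  */

#define SKETCH_GOT_RELOCS_DONE_FLAG ((bfd_vma) 1)

/* Mark OFF as having had its GOT relocations emitted.  */
#define SKETCH_GOT_MARK_DONE(off) ((off) | SKETCH_GOT_RELOCS_DONE_FLAG)

/* Nonzero once the GOT relocations for this offset have been emitted.  */
#define SKETCH_GOT_DONE_P(off) (((off) & SKETCH_GOT_RELOCS_DONE_FLAG) != 0)

/* Strip the flag bit before using OFF as a real GOT offset.  */
#define SKETCH_GOT_OFFSET(off) ((off) & ~SKETCH_GOT_RELOCS_DONE_FLAG)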
149#define ARCH_SIZE NN
150
151#if ARCH_SIZE == 64
152#define AARCH64_R(NAME) R_AARCH64_ ## NAME
153#define AARCH64_R_STR(NAME) "R_AARCH64_" #NAME
154#define HOWTO64(...) HOWTO (__VA_ARGS__)
155#define HOWTO32(...) EMPTY_HOWTO (0)
156#define LOG_FILE_ALIGN 3
157#endif
158
159#if ARCH_SIZE == 32
160#define AARCH64_R(NAME) R_AARCH64_P32_ ## NAME
161#define AARCH64_R_STR(NAME) "R_AARCH64_P32_" #NAME
162#define HOWTO64(...) EMPTY_HOWTO (0)
163#define HOWTO32(...) HOWTO (__VA_ARGS__)
164#define LOG_FILE_ALIGN 2
165#endif
166
167#define IS_AARCH64_TLS_RELOC(R_TYPE) \
168 ((R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21 \
169 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADR_PREL21 \
170 || (R_TYPE) == BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC \
171 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G1 \
172 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_MOVW_GOTTPREL_G0_NC \
173 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21 \
174 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC \
175 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC \
176 || (R_TYPE) == BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19 \
177 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12 \
178 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12 \
179 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC \
180 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2 \
181 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 \
182 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC \
183 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0 \
184 || (R_TYPE) == BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC \
185 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPMOD \
186 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_DTPREL \
187 || (R_TYPE) == BFD_RELOC_AARCH64_TLS_TPREL \
188 || IS_AARCH64_TLSDESC_RELOC ((R_TYPE)))
189
190#define IS_AARCH64_TLSDESC_RELOC(R_TYPE) \
191 ((R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD_PREL19 \
192 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21 \
193 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21 \
194 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC \
195 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC \
196 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC \
197 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G1 \
198 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_OFF_G0_NC \
199 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_LDR \
200 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_ADD \
201 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC_CALL \
202 || (R_TYPE) == BFD_RELOC_AARCH64_TLSDESC)
203
204#define ELIMINATE_COPY_RELOCS 0
205
206/* Return size of a relocation entry. HTAB is the bfd's
207 elf_aarch64_link_hash_table. */
208#define RELOC_SIZE(HTAB) (sizeof (ElfNN_External_Rela))
209
210/* GOT Entry size - 8 bytes in ELF64 and 4 bytes in ELF32. */
211#define GOT_ENTRY_SIZE (ARCH_SIZE / 8)
212#define PLT_ENTRY_SIZE (32)
213#define PLT_SMALL_ENTRY_SIZE (16)
214#define PLT_TLSDESC_ENTRY_SIZE (32)
215
216/* Encoding of the nop instruction.  */
217#define INSN_NOP 0xd503201f
218
219#define aarch64_compute_jump_table_size(htab) \
220 (((htab)->root.srelplt == NULL) ? 0 \
221 : (htab)->root.srelplt->reloc_count * GOT_ENTRY_SIZE)
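/* Worked example (added for illustration): with ARCH_SIZE == 64 and a
   .rela.plt section holding 3 relocations, the macro above evaluates to
   3 * GOT_ENTRY_SIZE == 3 * 8 == 24 bytes; with no .rela.plt it is 0.  */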
222
223/* The first entry in a procedure linkage table looks like this;
224 it is used when the distance between the PLTGOT and the PLT
225 is < 4GB. Note that the dynamic linker gets &PLTGOT[2]
226 in x16 and needs to work out PLTGOT[1] by using an address of
227 [x16,#-GOT_ENTRY_SIZE]. */
228static const bfd_byte elfNN_aarch64_small_plt0_entry[PLT_ENTRY_SIZE] =
229{
230 0xf0, 0x7b, 0xbf, 0xa9, /* stp x16, x30, [sp, #-16]! */
231 0x10, 0x00, 0x00, 0x90, /* adrp x16, (GOT+16) */
232#if ARCH_SIZE == 64
233 0x11, 0x0A, 0x40, 0xf9, /* ldr x17, [x16, #PLT_GOT+0x10] */
234 0x10, 0x42, 0x00, 0x91, /* add x16, x16,#PLT_GOT+0x10 */
235#else
236 0x11, 0x0A, 0x40, 0xb9, /* ldr w17, [x16, #PLT_GOT+0x8] */
237 0x10, 0x22, 0x00, 0x11, /* add w16, w16,#PLT_GOT+0x8 */
238#endif
239 0x20, 0x02, 0x1f, 0xd6, /* br x17 */
240 0x1f, 0x20, 0x03, 0xd5, /* nop */
241 0x1f, 0x20, 0x03, 0xd5, /* nop */
242 0x1f, 0x20, 0x03, 0xd5, /* nop */
243};
244
245/* A per-function entry in a procedure linkage table looks like
246 this; it is used when the distance between the PLTGOT and the
247 PLT is < 4GB. */
248static const bfd_byte elfNN_aarch64_small_plt_entry[PLT_SMALL_ENTRY_SIZE] =
249{
250 0x10, 0x00, 0x00, 0x90, /* adrp x16, PLTGOT + n * 8 */
251#if ARCH_SIZE == 64
252 0x11, 0x02, 0x40, 0xf9, /* ldr x17, [x16, PLTGOT + n * 8] */
253 0x10, 0x02, 0x00, 0x91, /* add x16, x16, :lo12:PLTGOT + n * 8 */
254#else
255 0x11, 0x02, 0x40, 0xb9, /* ldr w17, [x16, PLTGOT + n * 4] */
256 0x10, 0x02, 0x00, 0x11, /* add w16, w16, :lo12:PLTGOT + n * 4 */
257#endif
258 0x20, 0x02, 0x1f, 0xd6, /* br x17. */
259};
260
261static const bfd_byte
262elfNN_aarch64_tlsdesc_small_plt_entry[PLT_TLSDESC_ENTRY_SIZE] =
263{
264 0xe2, 0x0f, 0xbf, 0xa9, /* stp x2, x3, [sp, #-16]! */
265 0x02, 0x00, 0x00, 0x90, /* adrp x2, 0 */
266 0x03, 0x00, 0x00, 0x90, /* adrp x3, 0 */
267#if ARCH_SIZE == 64
268 0x42, 0x00, 0x40, 0xf9, /* ldr x2, [x2, #0] */
269 0x63, 0x00, 0x00, 0x91, /* add x3, x3, 0 */
270#else
271 0x42, 0x00, 0x40, 0xb9, /* ldr w2, [x2, #0] */
272 0x63, 0x00, 0x00, 0x11, /* add w3, w3, 0 */
273#endif
274 0x40, 0x00, 0x1f, 0xd6, /* br x2 */
275 0x1f, 0x20, 0x03, 0xd5, /* nop */
276 0x1f, 0x20, 0x03, 0xd5, /* nop */
277};
278
279#define elf_info_to_howto elfNN_aarch64_info_to_howto
280#define elf_info_to_howto_rel elfNN_aarch64_info_to_howto
281
282#define AARCH64_ELF_ABI_VERSION 0
283
284/* In case we're on a 32-bit machine, construct a 64-bit "-1" value. */
285#define ALL_ONES (~ (bfd_vma) 0)
286
287/* Indexed by the bfd internal reloc enumerators.
288 Therefore, the table needs to be synced with BFD_RELOC_AARCH64_*
289 in reloc.c. */
290
291static reloc_howto_type elfNN_aarch64_howto_table[] =
292{
293 EMPTY_HOWTO (0),
294
295 /* Basic data relocations. */
296
297#if ARCH_SIZE == 64
298 HOWTO (R_AARCH64_NULL, /* type */
299 0, /* rightshift */
300 3, /* size (0 = byte, 1 = short, 2 = long) */
301 0, /* bitsize */
302 FALSE, /* pc_relative */
303 0, /* bitpos */
304 complain_overflow_dont, /* complain_on_overflow */
305 bfd_elf_generic_reloc, /* special_function */
306 "R_AARCH64_NULL", /* name */
307 FALSE, /* partial_inplace */
308 0, /* src_mask */
309 0, /* dst_mask */
310 FALSE), /* pcrel_offset */
311#else
312 HOWTO (R_AARCH64_NONE, /* type */
313 0, /* rightshift */
314 3, /* size (0 = byte, 1 = short, 2 = long) */
315 0, /* bitsize */
316 FALSE, /* pc_relative */
317 0, /* bitpos */
318 complain_overflow_dont, /* complain_on_overflow */
319 bfd_elf_generic_reloc, /* special_function */
320 "R_AARCH64_NONE", /* name */
321 FALSE, /* partial_inplace */
322 0, /* src_mask */
323 0, /* dst_mask */
324 FALSE), /* pcrel_offset */
325#endif
326
327 /* .xword: (S+A) */
328 HOWTO64 (AARCH64_R (ABS64), /* type */
329 0, /* rightshift */
330 4, /* size (4 = long long) */
331 64, /* bitsize */
332 FALSE, /* pc_relative */
333 0, /* bitpos */
334 complain_overflow_unsigned, /* complain_on_overflow */
335 bfd_elf_generic_reloc, /* special_function */
336 AARCH64_R_STR (ABS64), /* name */
337 FALSE, /* partial_inplace */
338 ALL_ONES, /* src_mask */
339 ALL_ONES, /* dst_mask */
340 FALSE), /* pcrel_offset */
341
342 /* .word: (S+A) */
343 HOWTO (AARCH64_R (ABS32), /* type */
344 0, /* rightshift */
345 2, /* size (0 = byte, 1 = short, 2 = long) */
346 32, /* bitsize */
347 FALSE, /* pc_relative */
348 0, /* bitpos */
349 complain_overflow_unsigned, /* complain_on_overflow */
350 bfd_elf_generic_reloc, /* special_function */
351 AARCH64_R_STR (ABS32), /* name */
352 FALSE, /* partial_inplace */
353 0xffffffff, /* src_mask */
354 0xffffffff, /* dst_mask */
355 FALSE), /* pcrel_offset */
356
357 /* .half: (S+A) */
358 HOWTO (AARCH64_R (ABS16), /* type */
359 0, /* rightshift */
360 1, /* size (0 = byte, 1 = short, 2 = long) */
361 16, /* bitsize */
362 FALSE, /* pc_relative */
363 0, /* bitpos */
364 complain_overflow_unsigned, /* complain_on_overflow */
365 bfd_elf_generic_reloc, /* special_function */
366 AARCH64_R_STR (ABS16), /* name */
367 FALSE, /* partial_inplace */
368 0xffff, /* src_mask */
369 0xffff, /* dst_mask */
370 FALSE), /* pcrel_offset */
371
372 /* .xword: (S+A-P) */
373 HOWTO64 (AARCH64_R (PREL64), /* type */
374 0, /* rightshift */
375 4, /* size (4 = long long) */
376 64, /* bitsize */
377 TRUE, /* pc_relative */
378 0, /* bitpos */
379 complain_overflow_signed, /* complain_on_overflow */
380 bfd_elf_generic_reloc, /* special_function */
381 AARCH64_R_STR (PREL64), /* name */
382 FALSE, /* partial_inplace */
383 ALL_ONES, /* src_mask */
384 ALL_ONES, /* dst_mask */
385 TRUE), /* pcrel_offset */
386
387 /* .word: (S+A-P) */
388 HOWTO (AARCH64_R (PREL32), /* type */
389 0, /* rightshift */
390 2, /* size (0 = byte, 1 = short, 2 = long) */
391 32, /* bitsize */
392 TRUE, /* pc_relative */
393 0, /* bitpos */
394 complain_overflow_signed, /* complain_on_overflow */
395 bfd_elf_generic_reloc, /* special_function */
396 AARCH64_R_STR (PREL32), /* name */
397 FALSE, /* partial_inplace */
398 0xffffffff, /* src_mask */
399 0xffffffff, /* dst_mask */
400 TRUE), /* pcrel_offset */
401
402 /* .half: (S+A-P) */
403 HOWTO (AARCH64_R (PREL16), /* type */
404 0, /* rightshift */
405 1, /* size (0 = byte, 1 = short, 2 = long) */
406 16, /* bitsize */
407 TRUE, /* pc_relative */
408 0, /* bitpos */
409 complain_overflow_signed, /* complain_on_overflow */
410 bfd_elf_generic_reloc, /* special_function */
411 AARCH64_R_STR (PREL16), /* name */
412 FALSE, /* partial_inplace */
413 0xffff, /* src_mask */
414 0xffff, /* dst_mask */
415 TRUE), /* pcrel_offset */
416
417 /* Group relocations to create a 16, 32, 48 or 64 bit
418 unsigned data or abs address inline. */
419
420 /* MOVZ: ((S+A) >> 0) & 0xffff */
421 HOWTO (AARCH64_R (MOVW_UABS_G0), /* type */
422 0, /* rightshift */
423 2, /* size (0 = byte, 1 = short, 2 = long) */
424 16, /* bitsize */
425 FALSE, /* pc_relative */
426 0, /* bitpos */
427 complain_overflow_unsigned, /* complain_on_overflow */
428 bfd_elf_generic_reloc, /* special_function */
429 AARCH64_R_STR (MOVW_UABS_G0), /* name */
430 FALSE, /* partial_inplace */
431 0xffff, /* src_mask */
432 0xffff, /* dst_mask */
433 FALSE), /* pcrel_offset */
434
435 /* MOVK: ((S+A) >> 0) & 0xffff [no overflow check] */
436 HOWTO (AARCH64_R (MOVW_UABS_G0_NC), /* type */
437 0, /* rightshift */
438 2, /* size (0 = byte, 1 = short, 2 = long) */
439 16, /* bitsize */
440 FALSE, /* pc_relative */
441 0, /* bitpos */
442 complain_overflow_dont, /* complain_on_overflow */
443 bfd_elf_generic_reloc, /* special_function */
444 AARCH64_R_STR (MOVW_UABS_G0_NC), /* name */
445 FALSE, /* partial_inplace */
446 0xffff, /* src_mask */
447 0xffff, /* dst_mask */
448 FALSE), /* pcrel_offset */
449
450 /* MOVZ: ((S+A) >> 16) & 0xffff */
451 HOWTO (AARCH64_R (MOVW_UABS_G1), /* type */
452 16, /* rightshift */
453 2, /* size (0 = byte, 1 = short, 2 = long) */
454 16, /* bitsize */
455 FALSE, /* pc_relative */
456 0, /* bitpos */
457 complain_overflow_unsigned, /* complain_on_overflow */
458 bfd_elf_generic_reloc, /* special_function */
459 AARCH64_R_STR (MOVW_UABS_G1), /* name */
460 FALSE, /* partial_inplace */
461 0xffff, /* src_mask */
462 0xffff, /* dst_mask */
463 FALSE), /* pcrel_offset */
464
465 /* MOVK: ((S+A) >> 16) & 0xffff [no overflow check] */
466 HOWTO64 (AARCH64_R (MOVW_UABS_G1_NC), /* type */
467 16, /* rightshift */
468 2, /* size (0 = byte, 1 = short, 2 = long) */
469 16, /* bitsize */
470 FALSE, /* pc_relative */
471 0, /* bitpos */
472 complain_overflow_dont, /* complain_on_overflow */
473 bfd_elf_generic_reloc, /* special_function */
474 AARCH64_R_STR (MOVW_UABS_G1_NC), /* name */
475 FALSE, /* partial_inplace */
476 0xffff, /* src_mask */
477 0xffff, /* dst_mask */
478 FALSE), /* pcrel_offset */
479
480 /* MOVZ: ((S+A) >> 32) & 0xffff */
481 HOWTO64 (AARCH64_R (MOVW_UABS_G2), /* type */
482 32, /* rightshift */
483 2, /* size (0 = byte, 1 = short, 2 = long) */
484 16, /* bitsize */
485 FALSE, /* pc_relative */
486 0, /* bitpos */
487 complain_overflow_unsigned, /* complain_on_overflow */
488 bfd_elf_generic_reloc, /* special_function */
489 AARCH64_R_STR (MOVW_UABS_G2), /* name */
490 FALSE, /* partial_inplace */
491 0xffff, /* src_mask */
492 0xffff, /* dst_mask */
493 FALSE), /* pcrel_offset */
494
495 /* MOVK: ((S+A) >> 32) & 0xffff [no overflow check] */
496 HOWTO64 (AARCH64_R (MOVW_UABS_G2_NC), /* type */
497 32, /* rightshift */
498 2, /* size (0 = byte, 1 = short, 2 = long) */
499 16, /* bitsize */
500 FALSE, /* pc_relative */
501 0, /* bitpos */
502 complain_overflow_dont, /* complain_on_overflow */
503 bfd_elf_generic_reloc, /* special_function */
504 AARCH64_R_STR (MOVW_UABS_G2_NC), /* name */
505 FALSE, /* partial_inplace */
506 0xffff, /* src_mask */
507 0xffff, /* dst_mask */
508 FALSE), /* pcrel_offset */
509
510 /* MOVZ: ((S+A) >> 48) & 0xffff */
511 HOWTO64 (AARCH64_R (MOVW_UABS_G3), /* type */
512 48, /* rightshift */
513 2, /* size (0 = byte, 1 = short, 2 = long) */
514 16, /* bitsize */
515 FALSE, /* pc_relative */
516 0, /* bitpos */
517 complain_overflow_unsigned, /* complain_on_overflow */
518 bfd_elf_generic_reloc, /* special_function */
519 AARCH64_R_STR (MOVW_UABS_G3), /* name */
520 FALSE, /* partial_inplace */
521 0xffff, /* src_mask */
522 0xffff, /* dst_mask */
523 FALSE), /* pcrel_offset */
524
525 /* Group relocations to create high part of a 16, 32, 48 or 64 bit
526 signed data or abs address inline. Will change instruction
527 to MOVN or MOVZ depending on sign of calculated value. */
528
529 /* MOV[ZN]: ((S+A) >> 0) & 0xffff */
530 HOWTO (AARCH64_R (MOVW_SABS_G0), /* type */
531 0, /* rightshift */
532 2, /* size (0 = byte, 1 = short, 2 = long) */
533 16, /* bitsize */
534 FALSE, /* pc_relative */
535 0, /* bitpos */
536 complain_overflow_signed, /* complain_on_overflow */
537 bfd_elf_generic_reloc, /* special_function */
538 AARCH64_R_STR (MOVW_SABS_G0), /* name */
539 FALSE, /* partial_inplace */
540 0xffff, /* src_mask */
541 0xffff, /* dst_mask */
542 FALSE), /* pcrel_offset */
543
544 /* MOV[ZN]: ((S+A) >> 16) & 0xffff */
545 HOWTO64 (AARCH64_R (MOVW_SABS_G1), /* type */
546 16, /* rightshift */
547 2, /* size (0 = byte, 1 = short, 2 = long) */
548 16, /* bitsize */
549 FALSE, /* pc_relative */
550 0, /* bitpos */
551 complain_overflow_signed, /* complain_on_overflow */
552 bfd_elf_generic_reloc, /* special_function */
553 AARCH64_R_STR (MOVW_SABS_G1), /* name */
554 FALSE, /* partial_inplace */
555 0xffff, /* src_mask */
556 0xffff, /* dst_mask */
557 FALSE), /* pcrel_offset */
558
559 /* MOV[ZN]: ((S+A) >> 32) & 0xffff */
560 HOWTO64 (AARCH64_R (MOVW_SABS_G2), /* type */
561 32, /* rightshift */
562 2, /* size (0 = byte, 1 = short, 2 = long) */
563 16, /* bitsize */
564 FALSE, /* pc_relative */
565 0, /* bitpos */
566 complain_overflow_signed, /* complain_on_overflow */
567 bfd_elf_generic_reloc, /* special_function */
568 AARCH64_R_STR (MOVW_SABS_G2), /* name */
569 FALSE, /* partial_inplace */
570 0xffff, /* src_mask */
571 0xffff, /* dst_mask */
572 FALSE), /* pcrel_offset */
573
574/* Relocations to generate 19, 21 and 33 bit PC-relative load/store
575 addresses: PG(x) is (x & ~0xfff). */
576
577 /* LD-lit: ((S+A-P) >> 2) & 0x7ffff */
578 HOWTO (AARCH64_R (LD_PREL_LO19), /* type */
579 2, /* rightshift */
580 2, /* size (0 = byte, 1 = short, 2 = long) */
581 19, /* bitsize */
582 TRUE, /* pc_relative */
583 0, /* bitpos */
584 complain_overflow_signed, /* complain_on_overflow */
585 bfd_elf_generic_reloc, /* special_function */
586 AARCH64_R_STR (LD_PREL_LO19), /* name */
587 FALSE, /* partial_inplace */
588 0x7ffff, /* src_mask */
589 0x7ffff, /* dst_mask */
590 TRUE), /* pcrel_offset */
591
592 /* ADR: (S+A-P) & 0x1fffff */
593 HOWTO (AARCH64_R (ADR_PREL_LO21), /* type */
594 0, /* rightshift */
595 2, /* size (0 = byte, 1 = short, 2 = long) */
596 21, /* bitsize */
597 TRUE, /* pc_relative */
598 0, /* bitpos */
599 complain_overflow_signed, /* complain_on_overflow */
600 bfd_elf_generic_reloc, /* special_function */
601 AARCH64_R_STR (ADR_PREL_LO21), /* name */
602 FALSE, /* partial_inplace */
603 0x1fffff, /* src_mask */
604 0x1fffff, /* dst_mask */
605 TRUE), /* pcrel_offset */
606
607 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
608 HOWTO (AARCH64_R (ADR_PREL_PG_HI21), /* type */
609 12, /* rightshift */
610 2, /* size (0 = byte, 1 = short, 2 = long) */
611 21, /* bitsize */
612 TRUE, /* pc_relative */
613 0, /* bitpos */
614 complain_overflow_signed, /* complain_on_overflow */
615 bfd_elf_generic_reloc, /* special_function */
616 AARCH64_R_STR (ADR_PREL_PG_HI21), /* name */
617 FALSE, /* partial_inplace */
618 0x1fffff, /* src_mask */
619 0x1fffff, /* dst_mask */
620 TRUE), /* pcrel_offset */
621
622 /* ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff [no overflow check] */
623 HOWTO64 (AARCH64_R (ADR_PREL_PG_HI21_NC), /* type */
624 12, /* rightshift */
625 2, /* size (0 = byte, 1 = short, 2 = long) */
626 21, /* bitsize */
627 TRUE, /* pc_relative */
628 0, /* bitpos */
629 complain_overflow_dont, /* complain_on_overflow */
630 bfd_elf_generic_reloc, /* special_function */
631 AARCH64_R_STR (ADR_PREL_PG_HI21_NC), /* name */
632 FALSE, /* partial_inplace */
633 0x1fffff, /* src_mask */
634 0x1fffff, /* dst_mask */
635 TRUE), /* pcrel_offset */
636
637 /* ADD: (S+A) & 0xfff [no overflow check] */
638 HOWTO (AARCH64_R (ADD_ABS_LO12_NC), /* type */
639 0, /* rightshift */
640 2, /* size (0 = byte, 1 = short, 2 = long) */
641 12, /* bitsize */
642 FALSE, /* pc_relative */
643 10, /* bitpos */
644 complain_overflow_dont, /* complain_on_overflow */
645 bfd_elf_generic_reloc, /* special_function */
646 AARCH64_R_STR (ADD_ABS_LO12_NC), /* name */
647 FALSE, /* partial_inplace */
648 0x3ffc00, /* src_mask */
649 0x3ffc00, /* dst_mask */
650 FALSE), /* pcrel_offset */
651
652 /* LD/ST8: (S+A) & 0xfff */
653 HOWTO (AARCH64_R (LDST8_ABS_LO12_NC), /* type */
654 0, /* rightshift */
655 2, /* size (0 = byte, 1 = short, 2 = long) */
656 12, /* bitsize */
657 FALSE, /* pc_relative */
658 0, /* bitpos */
659 complain_overflow_dont, /* complain_on_overflow */
660 bfd_elf_generic_reloc, /* special_function */
661 AARCH64_R_STR (LDST8_ABS_LO12_NC), /* name */
662 FALSE, /* partial_inplace */
663 0xfff, /* src_mask */
664 0xfff, /* dst_mask */
665 FALSE), /* pcrel_offset */
666
667 /* Relocations for control-flow instructions. */
668
669 /* TBZ/NZ: ((S+A-P) >> 2) & 0x3fff */
670 HOWTO (AARCH64_R (TSTBR14), /* type */
671 2, /* rightshift */
672 2, /* size (0 = byte, 1 = short, 2 = long) */
673 14, /* bitsize */
674 TRUE, /* pc_relative */
675 0, /* bitpos */
676 complain_overflow_signed, /* complain_on_overflow */
677 bfd_elf_generic_reloc, /* special_function */
678 AARCH64_R_STR (TSTBR14), /* name */
679 FALSE, /* partial_inplace */
680 0x3fff, /* src_mask */
681 0x3fff, /* dst_mask */
682 TRUE), /* pcrel_offset */
683
684 /* B.cond: ((S+A-P) >> 2) & 0x7ffff */
685 HOWTO (AARCH64_R (CONDBR19), /* type */
686 2, /* rightshift */
687 2, /* size (0 = byte, 1 = short, 2 = long) */
688 19, /* bitsize */
689 TRUE, /* pc_relative */
690 0, /* bitpos */
691 complain_overflow_signed, /* complain_on_overflow */
692 bfd_elf_generic_reloc, /* special_function */
693 AARCH64_R_STR (CONDBR19), /* name */
694 FALSE, /* partial_inplace */
695 0x7ffff, /* src_mask */
696 0x7ffff, /* dst_mask */
697 TRUE), /* pcrel_offset */
698
699 /* B: ((S+A-P) >> 2) & 0x3ffffff */
700 HOWTO (AARCH64_R (JUMP26), /* type */
701 2, /* rightshift */
702 2, /* size (0 = byte, 1 = short, 2 = long) */
703 26, /* bitsize */
704 TRUE, /* pc_relative */
705 0, /* bitpos */
706 complain_overflow_signed, /* complain_on_overflow */
707 bfd_elf_generic_reloc, /* special_function */
708 AARCH64_R_STR (JUMP26), /* name */
709 FALSE, /* partial_inplace */
710 0x3ffffff, /* src_mask */
711 0x3ffffff, /* dst_mask */
712 TRUE), /* pcrel_offset */
713
714 /* BL: ((S+A-P) >> 2) & 0x3ffffff */
715 HOWTO (AARCH64_R (CALL26), /* type */
716 2, /* rightshift */
717 2, /* size (0 = byte, 1 = short, 2 = long) */
718 26, /* bitsize */
719 TRUE, /* pc_relative */
720 0, /* bitpos */
721 complain_overflow_signed, /* complain_on_overflow */
722 bfd_elf_generic_reloc, /* special_function */
723 AARCH64_R_STR (CALL26), /* name */
724 FALSE, /* partial_inplace */
725 0x3ffffff, /* src_mask */
726 0x3ffffff, /* dst_mask */
727 TRUE), /* pcrel_offset */
728
729 /* LD/ST16: (S+A) & 0xffe */
730 HOWTO (AARCH64_R (LDST16_ABS_LO12_NC), /* type */
731 1, /* rightshift */
732 2, /* size (0 = byte, 1 = short, 2 = long) */
733 12, /* bitsize */
734 FALSE, /* pc_relative */
735 0, /* bitpos */
736 complain_overflow_dont, /* complain_on_overflow */
737 bfd_elf_generic_reloc, /* special_function */
738 AARCH64_R_STR (LDST16_ABS_LO12_NC), /* name */
739 FALSE, /* partial_inplace */
740 0xffe, /* src_mask */
741 0xffe, /* dst_mask */
742 FALSE), /* pcrel_offset */
743
744 /* LD/ST32: (S+A) & 0xffc */
745 HOWTO (AARCH64_R (LDST32_ABS_LO12_NC), /* type */
746 2, /* rightshift */
747 2, /* size (0 = byte, 1 = short, 2 = long) */
748 12, /* bitsize */
749 FALSE, /* pc_relative */
750 0, /* bitpos */
751 complain_overflow_dont, /* complain_on_overflow */
752 bfd_elf_generic_reloc, /* special_function */
753 AARCH64_R_STR (LDST32_ABS_LO12_NC), /* name */
754 FALSE, /* partial_inplace */
755 0xffc, /* src_mask */
756 0xffc, /* dst_mask */
757 FALSE), /* pcrel_offset */
758
759 /* LD/ST64: (S+A) & 0xff8 */
760 HOWTO (AARCH64_R (LDST64_ABS_LO12_NC), /* type */
761 3, /* rightshift */
762 2, /* size (0 = byte, 1 = short, 2 = long) */
763 12, /* bitsize */
764 FALSE, /* pc_relative */
765 0, /* bitpos */
766 complain_overflow_dont, /* complain_on_overflow */
767 bfd_elf_generic_reloc, /* special_function */
768 AARCH64_R_STR (LDST64_ABS_LO12_NC), /* name */
769 FALSE, /* partial_inplace */
770 0xff8, /* src_mask */
771 0xff8, /* dst_mask */
772 FALSE), /* pcrel_offset */
773
774 /* LD/ST128: (S+A) & 0xff0 */
775 HOWTO (AARCH64_R (LDST128_ABS_LO12_NC), /* type */
776 4, /* rightshift */
777 2, /* size (0 = byte, 1 = short, 2 = long) */
778 12, /* bitsize */
779 FALSE, /* pc_relative */
780 0, /* bitpos */
781 complain_overflow_dont, /* complain_on_overflow */
782 bfd_elf_generic_reloc, /* special_function */
783 AARCH64_R_STR (LDST128_ABS_LO12_NC), /* name */
784 FALSE, /* partial_inplace */
785 0xff0, /* src_mask */
786 0xff0, /* dst_mask */
787 FALSE), /* pcrel_offset */
788
789 /* Set a load-literal immediate field to bits
790 0x1FFFFC of G(S)-P */
791 HOWTO (AARCH64_R (GOT_LD_PREL19), /* type */
792 2, /* rightshift */
793 2, /* size (0 = byte,1 = short,2 = long) */
794 19, /* bitsize */
795 TRUE, /* pc_relative */
796 0, /* bitpos */
797 complain_overflow_signed, /* complain_on_overflow */
798 bfd_elf_generic_reloc, /* special_function */
799 AARCH64_R_STR (GOT_LD_PREL19), /* name */
800 FALSE, /* partial_inplace */
801 0xffffe0, /* src_mask */
802 0xffffe0, /* dst_mask */
803 TRUE), /* pcrel_offset */
804
805 /* Get to the page for the GOT entry for the symbol
806 (G(S) - P) using an ADRP instruction. */
807 HOWTO (AARCH64_R (ADR_GOT_PAGE), /* type */
808 12, /* rightshift */
809 2, /* size (0 = byte, 1 = short, 2 = long) */
810 21, /* bitsize */
811 TRUE, /* pc_relative */
812 0, /* bitpos */
813 complain_overflow_dont, /* complain_on_overflow */
814 bfd_elf_generic_reloc, /* special_function */
815 AARCH64_R_STR (ADR_GOT_PAGE), /* name */
816 FALSE, /* partial_inplace */
817 0x1fffff, /* src_mask */
818 0x1fffff, /* dst_mask */
819 TRUE), /* pcrel_offset */
820
821 /* LD64: GOT offset G(S) & 0xff8 */
822 HOWTO64 (AARCH64_R (LD64_GOT_LO12_NC), /* type */
823 3, /* rightshift */
824 2, /* size (0 = byte, 1 = short, 2 = long) */
825 12, /* bitsize */
826 FALSE, /* pc_relative */
827 0, /* bitpos */
828 complain_overflow_dont, /* complain_on_overflow */
829 bfd_elf_generic_reloc, /* special_function */
830 AARCH64_R_STR (LD64_GOT_LO12_NC), /* name */
831 FALSE, /* partial_inplace */
832 0xff8, /* src_mask */
833 0xff8, /* dst_mask */
834 FALSE), /* pcrel_offset */
835
836 /* LD32: GOT offset G(S) & 0xffc */
837 HOWTO32 (AARCH64_R (LD32_GOT_LO12_NC), /* type */
838 2, /* rightshift */
839 2, /* size (0 = byte, 1 = short, 2 = long) */
840 12, /* bitsize */
841 FALSE, /* pc_relative */
842 0, /* bitpos */
843 complain_overflow_dont, /* complain_on_overflow */
844 bfd_elf_generic_reloc, /* special_function */
845 AARCH64_R_STR (LD32_GOT_LO12_NC), /* name */
846 FALSE, /* partial_inplace */
847 0xffc, /* src_mask */
848 0xffc, /* dst_mask */
849 FALSE), /* pcrel_offset */
850
851 /* LD32: GOT offset to the page address of GOT table.
852 (G(S) - PAGE (_GLOBAL_OFFSET_TABLE_)) & 0x5ffc. */
853 HOWTO32 (AARCH64_R (LD32_GOTPAGE_LO14), /* type */
854 2, /* rightshift */
855 2, /* size (0 = byte, 1 = short, 2 = long) */
856 12, /* bitsize */
857 FALSE, /* pc_relative */
858 0, /* bitpos */
859 complain_overflow_unsigned, /* complain_on_overflow */
860 bfd_elf_generic_reloc, /* special_function */
861 AARCH64_R_STR (LD32_GOTPAGE_LO14), /* name */
862 FALSE, /* partial_inplace */
863 0x5ffc, /* src_mask */
864 0x5ffc, /* dst_mask */
865 FALSE), /* pcrel_offset */
866
867 /* LD64: GOT offset to the page address of GOT table.
868 (G(S) - PAGE (_GLOBAL_OFFSET_TABLE_)) & 0x7ff8. */
869 HOWTO64 (AARCH64_R (LD64_GOTPAGE_LO15), /* type */
870 3, /* rightshift */
871 2, /* size (0 = byte, 1 = short, 2 = long) */
872 12, /* bitsize */
873 FALSE, /* pc_relative */
874 0, /* bitpos */
875 complain_overflow_unsigned, /* complain_on_overflow */
876 bfd_elf_generic_reloc, /* special_function */
877 AARCH64_R_STR (LD64_GOTPAGE_LO15), /* name */
878 FALSE, /* partial_inplace */
879 0x7ff8, /* src_mask */
880 0x7ff8, /* dst_mask */
881 FALSE), /* pcrel_offset */
882
883 /* Get to the page for the GOT entry for the symbol
884 (G(S) - P) using an ADRP instruction. */
885 HOWTO (AARCH64_R (TLSGD_ADR_PAGE21), /* type */
886 12, /* rightshift */
887 2, /* size (0 = byte, 1 = short, 2 = long) */
888 21, /* bitsize */
889 TRUE, /* pc_relative */
890 0, /* bitpos */
891 complain_overflow_dont, /* complain_on_overflow */
892 bfd_elf_generic_reloc, /* special_function */
893 AARCH64_R_STR (TLSGD_ADR_PAGE21), /* name */
894 FALSE, /* partial_inplace */
895 0x1fffff, /* src_mask */
896 0x1fffff, /* dst_mask */
897 TRUE), /* pcrel_offset */
898
899 HOWTO (AARCH64_R (TLSGD_ADR_PREL21), /* type */
900 0, /* rightshift */
901 2, /* size (0 = byte, 1 = short, 2 = long) */
902 21, /* bitsize */
903 TRUE, /* pc_relative */
904 0, /* bitpos */
905 complain_overflow_dont, /* complain_on_overflow */
906 bfd_elf_generic_reloc, /* special_function */
907 AARCH64_R_STR (TLSGD_ADR_PREL21), /* name */
908 FALSE, /* partial_inplace */
909 0x1fffff, /* src_mask */
910 0x1fffff, /* dst_mask */
911 TRUE), /* pcrel_offset */
912
913 /* ADD: GOT offset G(S) & 0xff8 [no overflow check] */
914 HOWTO (AARCH64_R (TLSGD_ADD_LO12_NC), /* type */
915 0, /* rightshift */
916 2, /* size (0 = byte, 1 = short, 2 = long) */
917 12, /* bitsize */
918 FALSE, /* pc_relative */
919 0, /* bitpos */
920 complain_overflow_dont, /* complain_on_overflow */
921 bfd_elf_generic_reloc, /* special_function */
922 AARCH64_R_STR (TLSGD_ADD_LO12_NC), /* name */
923 FALSE, /* partial_inplace */
924 0xfff, /* src_mask */
925 0xfff, /* dst_mask */
926 FALSE), /* pcrel_offset */
927
928 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G1), /* type */
929 16, /* rightshift */
930 2, /* size (0 = byte, 1 = short, 2 = long) */
931 16, /* bitsize */
932 FALSE, /* pc_relative */
933 0, /* bitpos */
934 complain_overflow_dont, /* complain_on_overflow */
935 bfd_elf_generic_reloc, /* special_function */
936 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G1), /* name */
937 FALSE, /* partial_inplace */
938 0xffff, /* src_mask */
939 0xffff, /* dst_mask */
940 FALSE), /* pcrel_offset */
941
942 HOWTO64 (AARCH64_R (TLSIE_MOVW_GOTTPREL_G0_NC), /* type */
943 0, /* rightshift */
944 2, /* size (0 = byte, 1 = short, 2 = long) */
945 16, /* bitsize */
946 FALSE, /* pc_relative */
947 0, /* bitpos */
948 complain_overflow_dont, /* complain_on_overflow */
949 bfd_elf_generic_reloc, /* special_function */
950 AARCH64_R_STR (TLSIE_MOVW_GOTTPREL_G0_NC), /* name */
951 FALSE, /* partial_inplace */
952 0xffff, /* src_mask */
953 0xffff, /* dst_mask */
954 FALSE), /* pcrel_offset */
955
956 HOWTO (AARCH64_R (TLSIE_ADR_GOTTPREL_PAGE21), /* type */
957 12, /* rightshift */
958 2, /* size (0 = byte, 1 = short, 2 = long) */
959 21, /* bitsize */
960 FALSE, /* pc_relative */
961 0, /* bitpos */
962 complain_overflow_dont, /* complain_on_overflow */
963 bfd_elf_generic_reloc, /* special_function */
964 AARCH64_R_STR (TLSIE_ADR_GOTTPREL_PAGE21), /* name */
965 FALSE, /* partial_inplace */
966 0x1fffff, /* src_mask */
967 0x1fffff, /* dst_mask */
968 FALSE), /* pcrel_offset */
969
970 HOWTO64 (AARCH64_R (TLSIE_LD64_GOTTPREL_LO12_NC), /* type */
971 3, /* rightshift */
972 2, /* size (0 = byte, 1 = short, 2 = long) */
973 12, /* bitsize */
974 FALSE, /* pc_relative */
975 0, /* bitpos */
976 complain_overflow_dont, /* complain_on_overflow */
977 bfd_elf_generic_reloc, /* special_function */
978 AARCH64_R_STR (TLSIE_LD64_GOTTPREL_LO12_NC), /* name */
979 FALSE, /* partial_inplace */
980 0xff8, /* src_mask */
981 0xff8, /* dst_mask */
982 FALSE), /* pcrel_offset */
983
984 HOWTO32 (AARCH64_R (TLSIE_LD32_GOTTPREL_LO12_NC), /* type */
985 2, /* rightshift */
986 2, /* size (0 = byte, 1 = short, 2 = long) */
987 12, /* bitsize */
988 FALSE, /* pc_relative */
989 0, /* bitpos */
990 complain_overflow_dont, /* complain_on_overflow */
991 bfd_elf_generic_reloc, /* special_function */
992 AARCH64_R_STR (TLSIE_LD32_GOTTPREL_LO12_NC), /* name */
993 FALSE, /* partial_inplace */
994 0xffc, /* src_mask */
995 0xffc, /* dst_mask */
996 FALSE), /* pcrel_offset */
997
998 HOWTO (AARCH64_R (TLSIE_LD_GOTTPREL_PREL19), /* type */
999 2, /* rightshift */
1000 2, /* size (0 = byte, 1 = short, 2 = long) */
1001 19, /* bitsize */
1002 FALSE, /* pc_relative */
1003 0, /* bitpos */
1004 complain_overflow_dont, /* complain_on_overflow */
1005 bfd_elf_generic_reloc, /* special_function */
1006 AARCH64_R_STR (TLSIE_LD_GOTTPREL_PREL19), /* name */
1007 FALSE, /* partial_inplace */
1008 0x1ffffc, /* src_mask */
1009 0x1ffffc, /* dst_mask */
1010 FALSE), /* pcrel_offset */
1011
1012 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G2), /* type */
1013 32, /* rightshift */
1014 2, /* size (0 = byte, 1 = short, 2 = long) */
1015 16, /* bitsize */
1016 FALSE, /* pc_relative */
1017 0, /* bitpos */
1018 complain_overflow_unsigned, /* complain_on_overflow */
1019 bfd_elf_generic_reloc, /* special_function */
1020 AARCH64_R_STR (TLSLE_MOVW_TPREL_G2), /* name */
1021 FALSE, /* partial_inplace */
1022 0xffff, /* src_mask */
1023 0xffff, /* dst_mask */
1024 FALSE), /* pcrel_offset */
1025
1026 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G1), /* type */
1027 16, /* rightshift */
1028 2, /* size (0 = byte, 1 = short, 2 = long) */
1029 16, /* bitsize */
1030 FALSE, /* pc_relative */
1031 0, /* bitpos */
1032 complain_overflow_dont, /* complain_on_overflow */
1033 bfd_elf_generic_reloc, /* special_function */
1034 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1), /* name */
1035 FALSE, /* partial_inplace */
1036 0xffff, /* src_mask */
1037 0xffff, /* dst_mask */
1038 FALSE), /* pcrel_offset */
1039
1040 HOWTO64 (AARCH64_R (TLSLE_MOVW_TPREL_G1_NC), /* type */
1041 16, /* rightshift */
1042 2, /* size (0 = byte, 1 = short, 2 = long) */
1043 16, /* bitsize */
1044 FALSE, /* pc_relative */
1045 0, /* bitpos */
1046 complain_overflow_dont, /* complain_on_overflow */
1047 bfd_elf_generic_reloc, /* special_function */
1048 AARCH64_R_STR (TLSLE_MOVW_TPREL_G1_NC), /* name */
1049 FALSE, /* partial_inplace */
1050 0xffff, /* src_mask */
1051 0xffff, /* dst_mask */
1052 FALSE), /* pcrel_offset */
1053
1054 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0), /* type */
1055 0, /* rightshift */
1056 2, /* size (0 = byte, 1 = short, 2 = long) */
1057 16, /* bitsize */
1058 FALSE, /* pc_relative */
1059 0, /* bitpos */
1060 complain_overflow_dont, /* complain_on_overflow */
1061 bfd_elf_generic_reloc, /* special_function */
1062 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0), /* name */
1063 FALSE, /* partial_inplace */
1064 0xffff, /* src_mask */
1065 0xffff, /* dst_mask */
1066 FALSE), /* pcrel_offset */
1067
1068 HOWTO (AARCH64_R (TLSLE_MOVW_TPREL_G0_NC), /* type */
1069 0, /* rightshift */
1070 2, /* size (0 = byte, 1 = short, 2 = long) */
1071 16, /* bitsize */
1072 FALSE, /* pc_relative */
1073 0, /* bitpos */
1074 complain_overflow_dont, /* complain_on_overflow */
1075 bfd_elf_generic_reloc, /* special_function */
1076 AARCH64_R_STR (TLSLE_MOVW_TPREL_G0_NC), /* name */
1077 FALSE, /* partial_inplace */
1078 0xffff, /* src_mask */
1079 0xffff, /* dst_mask */
1080 FALSE), /* pcrel_offset */
1081
1082 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_HI12), /* type */
1083 12, /* rightshift */
1084 2, /* size (0 = byte, 1 = short, 2 = long) */
1085 12, /* bitsize */
1086 FALSE, /* pc_relative */
1087 0, /* bitpos */
1088 complain_overflow_unsigned, /* complain_on_overflow */
1089 bfd_elf_generic_reloc, /* special_function */
1090 AARCH64_R_STR (TLSLE_ADD_TPREL_HI12), /* name */
1091 FALSE, /* partial_inplace */
1092 0xfff, /* src_mask */
1093 0xfff, /* dst_mask */
1094 FALSE), /* pcrel_offset */
1095
1096 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12), /* type */
1097 0, /* rightshift */
1098 2, /* size (0 = byte, 1 = short, 2 = long) */
1099 12, /* bitsize */
1100 FALSE, /* pc_relative */
1101 0, /* bitpos */
1102 complain_overflow_unsigned, /* complain_on_overflow */
1103 bfd_elf_generic_reloc, /* special_function */
1104 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12), /* name */
1105 FALSE, /* partial_inplace */
1106 0xfff, /* src_mask */
1107 0xfff, /* dst_mask */
1108 FALSE), /* pcrel_offset */
1109
1110 HOWTO (AARCH64_R (TLSLE_ADD_TPREL_LO12_NC), /* type */
1111 0, /* rightshift */
1112 2, /* size (0 = byte, 1 = short, 2 = long) */
1113 12, /* bitsize */
1114 FALSE, /* pc_relative */
1115 0, /* bitpos */
1116 complain_overflow_dont, /* complain_on_overflow */
1117 bfd_elf_generic_reloc, /* special_function */
1118 AARCH64_R_STR (TLSLE_ADD_TPREL_LO12_NC), /* name */
1119 FALSE, /* partial_inplace */
1120 0xfff, /* src_mask */
1121 0xfff, /* dst_mask */
1122 FALSE), /* pcrel_offset */
1123
1124 HOWTO (AARCH64_R (TLSDESC_LD_PREL19), /* type */
1125 2, /* rightshift */
1126 2, /* size (0 = byte, 1 = short, 2 = long) */
1127 19, /* bitsize */
1128 TRUE, /* pc_relative */
1129 0, /* bitpos */
1130 complain_overflow_dont, /* complain_on_overflow */
1131 bfd_elf_generic_reloc, /* special_function */
1132 AARCH64_R_STR (TLSDESC_LD_PREL19), /* name */
1133 FALSE, /* partial_inplace */
1134 0x0ffffe0, /* src_mask */
1135 0x0ffffe0, /* dst_mask */
1136 TRUE), /* pcrel_offset */
1137
1138 HOWTO (AARCH64_R (TLSDESC_ADR_PREL21), /* type */
1139 0, /* rightshift */
1140 2, /* size (0 = byte, 1 = short, 2 = long) */
1141 21, /* bitsize */
1142 TRUE, /* pc_relative */
1143 0, /* bitpos */
1144 complain_overflow_dont, /* complain_on_overflow */
1145 bfd_elf_generic_reloc, /* special_function */
1146 AARCH64_R_STR (TLSDESC_ADR_PREL21), /* name */
1147 FALSE, /* partial_inplace */
1148 0x1fffff, /* src_mask */
1149 0x1fffff, /* dst_mask */
1150 TRUE), /* pcrel_offset */
1151
1152 /* Get to the page for the GOT entry for the symbol
1153 (G(S) - P) using an ADRP instruction. */
1154 HOWTO (AARCH64_R (TLSDESC_ADR_PAGE21), /* type */
1155 12, /* rightshift */
1156 2, /* size (0 = byte, 1 = short, 2 = long) */
1157 21, /* bitsize */
1158 TRUE, /* pc_relative */
1159 0, /* bitpos */
1160 complain_overflow_dont, /* complain_on_overflow */
1161 bfd_elf_generic_reloc, /* special_function */
1162 AARCH64_R_STR (TLSDESC_ADR_PAGE21), /* name */
1163 FALSE, /* partial_inplace */
1164 0x1fffff, /* src_mask */
1165 0x1fffff, /* dst_mask */
1166 TRUE), /* pcrel_offset */
1167
1168 /* LD64: GOT offset G(S) & 0xff8. */
1169 HOWTO64 (AARCH64_R (TLSDESC_LD64_LO12_NC), /* type */
1170 3, /* rightshift */
1171 2, /* size (0 = byte, 1 = short, 2 = long) */
1172 12, /* bitsize */
1173 FALSE, /* pc_relative */
1174 0, /* bitpos */
1175 complain_overflow_dont, /* complain_on_overflow */
1176 bfd_elf_generic_reloc, /* special_function */
1177 AARCH64_R_STR (TLSDESC_LD64_LO12_NC), /* name */
1178 FALSE, /* partial_inplace */
1179 0xff8, /* src_mask */
1180 0xff8, /* dst_mask */
1181 FALSE), /* pcrel_offset */
1182
1183 /* LD32: GOT offset G(S) & 0xffc. */
1184 HOWTO32 (AARCH64_R (TLSDESC_LD32_LO12_NC), /* type */
1185 2, /* rightshift */
1186 2, /* size (0 = byte, 1 = short, 2 = long) */
1187 12, /* bitsize */
1188 FALSE, /* pc_relative */
1189 0, /* bitpos */
1190 complain_overflow_dont, /* complain_on_overflow */
1191 bfd_elf_generic_reloc, /* special_function */
1192 AARCH64_R_STR (TLSDESC_LD32_LO12_NC), /* name */
1193 FALSE, /* partial_inplace */
1194 0xffc, /* src_mask */
1195 0xffc, /* dst_mask */
1196 FALSE), /* pcrel_offset */
1197
1198 /* ADD: GOT offset G(S) & 0xfff. */
1199 HOWTO (AARCH64_R (TLSDESC_ADD_LO12_NC), /* type */
1200 0, /* rightshift */
1201 2, /* size (0 = byte, 1 = short, 2 = long) */
1202 12, /* bitsize */
1203 FALSE, /* pc_relative */
1204 0, /* bitpos */
1205 complain_overflow_dont, /* complain_on_overflow */
1206 bfd_elf_generic_reloc, /* special_function */
1207 AARCH64_R_STR (TLSDESC_ADD_LO12_NC), /* name */
1208 FALSE, /* partial_inplace */
1209 0xfff, /* src_mask */
1210 0xfff, /* dst_mask */
1211 FALSE), /* pcrel_offset */
1212
1213 HOWTO64 (AARCH64_R (TLSDESC_OFF_G1), /* type */
1214 16, /* rightshift */
1215 2, /* size (0 = byte, 1 = short, 2 = long) */
1216 12, /* bitsize */
1217 FALSE, /* pc_relative */
1218 0, /* bitpos */
1219 complain_overflow_dont, /* complain_on_overflow */
1220 bfd_elf_generic_reloc, /* special_function */
1221 AARCH64_R_STR (TLSDESC_OFF_G1), /* name */
1222 FALSE, /* partial_inplace */
1223 0xffff, /* src_mask */
1224 0xffff, /* dst_mask */
1225 FALSE), /* pcrel_offset */
1226
1227 HOWTO64 (AARCH64_R (TLSDESC_OFF_G0_NC), /* type */
1228 0, /* rightshift */
1229 2, /* size (0 = byte, 1 = short, 2 = long) */
1230 12, /* bitsize */
1231 FALSE, /* pc_relative */
1232 0, /* bitpos */
1233 complain_overflow_dont, /* complain_on_overflow */
1234 bfd_elf_generic_reloc, /* special_function */
1235 AARCH64_R_STR (TLSDESC_OFF_G0_NC), /* name */
1236 FALSE, /* partial_inplace */
1237 0xffff, /* src_mask */
1238 0xffff, /* dst_mask */
1239 FALSE), /* pcrel_offset */
1240
1241 HOWTO64 (AARCH64_R (TLSDESC_LDR), /* type */
1242 0, /* rightshift */
1243 2, /* size (0 = byte, 1 = short, 2 = long) */
1244 12, /* bitsize */
1245 FALSE, /* pc_relative */
1246 0, /* bitpos */
1247 complain_overflow_dont, /* complain_on_overflow */
1248 bfd_elf_generic_reloc, /* special_function */
1249 AARCH64_R_STR (TLSDESC_LDR), /* name */
1250 FALSE, /* partial_inplace */
1251 0x0, /* src_mask */
1252 0x0, /* dst_mask */
1253 FALSE), /* pcrel_offset */
1254
1255 HOWTO64 (AARCH64_R (TLSDESC_ADD), /* type */
1256 0, /* rightshift */
1257 2, /* size (0 = byte, 1 = short, 2 = long) */
1258 12, /* bitsize */
1259 FALSE, /* pc_relative */
1260 0, /* bitpos */
1261 complain_overflow_dont, /* complain_on_overflow */
1262 bfd_elf_generic_reloc, /* special_function */
1263 AARCH64_R_STR (TLSDESC_ADD), /* name */
1264 FALSE, /* partial_inplace */
1265 0x0, /* src_mask */
1266 0x0, /* dst_mask */
1267 FALSE), /* pcrel_offset */
1268
1269 HOWTO (AARCH64_R (TLSDESC_CALL), /* type */
1270 0, /* rightshift */
1271 2, /* size (0 = byte, 1 = short, 2 = long) */
1272 0, /* bitsize */
1273 FALSE, /* pc_relative */
1274 0, /* bitpos */
1275 complain_overflow_dont, /* complain_on_overflow */
1276 bfd_elf_generic_reloc, /* special_function */
1277 AARCH64_R_STR (TLSDESC_CALL), /* name */
1278 FALSE, /* partial_inplace */
1279 0x0, /* src_mask */
1280 0x0, /* dst_mask */
1281 FALSE), /* pcrel_offset */
1282
1283 HOWTO (AARCH64_R (COPY), /* type */
1284 0, /* rightshift */
1285 2, /* size (0 = byte, 1 = short, 2 = long) */
1286 64, /* bitsize */
1287 FALSE, /* pc_relative */
1288 0, /* bitpos */
1289 complain_overflow_bitfield, /* complain_on_overflow */
1290 bfd_elf_generic_reloc, /* special_function */
1291 AARCH64_R_STR (COPY), /* name */
1292 TRUE, /* partial_inplace */
1293 0xffffffff, /* src_mask */
1294 0xffffffff, /* dst_mask */
1295 FALSE), /* pcrel_offset */
1296
1297 HOWTO (AARCH64_R (GLOB_DAT), /* type */
1298 0, /* rightshift */
1299 2, /* size (0 = byte, 1 = short, 2 = long) */
1300 64, /* bitsize */
1301 FALSE, /* pc_relative */
1302 0, /* bitpos */
1303 complain_overflow_bitfield, /* complain_on_overflow */
1304 bfd_elf_generic_reloc, /* special_function */
1305 AARCH64_R_STR (GLOB_DAT), /* name */
1306 TRUE, /* partial_inplace */
1307 0xffffffff, /* src_mask */
1308 0xffffffff, /* dst_mask */
1309 FALSE), /* pcrel_offset */
1310
1311 HOWTO (AARCH64_R (JUMP_SLOT), /* type */
1312 0, /* rightshift */
1313 2, /* size (0 = byte, 1 = short, 2 = long) */
1314 64, /* bitsize */
1315 FALSE, /* pc_relative */
1316 0, /* bitpos */
1317 complain_overflow_bitfield, /* complain_on_overflow */
1318 bfd_elf_generic_reloc, /* special_function */
1319 AARCH64_R_STR (JUMP_SLOT), /* name */
1320 TRUE, /* partial_inplace */
1321 0xffffffff, /* src_mask */
1322 0xffffffff, /* dst_mask */
1323 FALSE), /* pcrel_offset */
1324
1325 HOWTO (AARCH64_R (RELATIVE), /* type */
1326 0, /* rightshift */
1327 2, /* size (0 = byte, 1 = short, 2 = long) */
1328 64, /* bitsize */
1329 FALSE, /* pc_relative */
1330 0, /* bitpos */
1331 complain_overflow_bitfield, /* complain_on_overflow */
1332 bfd_elf_generic_reloc, /* special_function */
1333 AARCH64_R_STR (RELATIVE), /* name */
1334 TRUE, /* partial_inplace */
1335 ALL_ONES, /* src_mask */
1336 ALL_ONES, /* dst_mask */
1337 FALSE), /* pcrel_offset */
1338
1339 HOWTO (AARCH64_R (TLS_DTPMOD), /* type */
1340 0, /* rightshift */
1341 2, /* size (0 = byte, 1 = short, 2 = long) */
1342 64, /* bitsize */
1343 FALSE, /* pc_relative */
1344 0, /* bitpos */
1345 complain_overflow_dont, /* complain_on_overflow */
1346 bfd_elf_generic_reloc, /* special_function */
1347#if ARCH_SIZE == 64
1348 AARCH64_R_STR (TLS_DTPMOD64), /* name */
1349#else
1350 AARCH64_R_STR (TLS_DTPMOD), /* name */
1351#endif
1352 FALSE, /* partial_inplace */
1353 0, /* src_mask */
1354 ALL_ONES, /* dst_mask */
1355 FALSE), /* pcrel_offset */
1356
1357 HOWTO (AARCH64_R (TLS_DTPREL), /* type */
1358 0, /* rightshift */
1359 2, /* size (0 = byte, 1 = short, 2 = long) */
1360 64, /* bitsize */
1361 FALSE, /* pc_relative */
1362 0, /* bitpos */
1363 complain_overflow_dont, /* complain_on_overflow */
1364 bfd_elf_generic_reloc, /* special_function */
1365#if ARCH_SIZE == 64
1366 AARCH64_R_STR (TLS_DTPREL64), /* name */
1367#else
1368 AARCH64_R_STR (TLS_DTPREL), /* name */
1369#endif
1370 FALSE, /* partial_inplace */
1371 0, /* src_mask */
1372 ALL_ONES, /* dst_mask */
1373 FALSE), /* pcrel_offset */
1374
1375 HOWTO (AARCH64_R (TLS_TPREL), /* type */
1376 0, /* rightshift */
1377 2, /* size (0 = byte, 1 = short, 2 = long) */
1378 64, /* bitsize */
1379 FALSE, /* pc_relative */
1380 0, /* bitpos */
1381 complain_overflow_dont, /* complain_on_overflow */
1382 bfd_elf_generic_reloc, /* special_function */
1383#if ARCH_SIZE == 64
1384 AARCH64_R_STR (TLS_TPREL64), /* name */
1385#else
1386 AARCH64_R_STR (TLS_TPREL), /* name */
1387#endif
1388 FALSE, /* partial_inplace */
1389 0, /* src_mask */
1390 ALL_ONES, /* dst_mask */
1391 FALSE), /* pcrel_offset */
1392
1393 HOWTO (AARCH64_R (TLSDESC), /* type */
1394 0, /* rightshift */
1395 2, /* size (0 = byte, 1 = short, 2 = long) */
1396 64, /* bitsize */
1397 FALSE, /* pc_relative */
1398 0, /* bitpos */
1399 complain_overflow_dont, /* complain_on_overflow */
1400 bfd_elf_generic_reloc, /* special_function */
1401 AARCH64_R_STR (TLSDESC), /* name */
1402 FALSE, /* partial_inplace */
1403 0, /* src_mask */
1404 ALL_ONES, /* dst_mask */
1405 FALSE), /* pcrel_offset */
1406
1407 HOWTO (AARCH64_R (IRELATIVE), /* type */
1408 0, /* rightshift */
1409 2, /* size (0 = byte, 1 = short, 2 = long) */
1410 64, /* bitsize */
1411 FALSE, /* pc_relative */
1412 0, /* bitpos */
1413 complain_overflow_bitfield, /* complain_on_overflow */
1414 bfd_elf_generic_reloc, /* special_function */
1415 AARCH64_R_STR (IRELATIVE), /* name */
1416 FALSE, /* partial_inplace */
1417 0, /* src_mask */
1418 ALL_ONES, /* dst_mask */
1419 FALSE), /* pcrel_offset */
1420
1421 EMPTY_HOWTO (0),
1422};
1423
1424static reloc_howto_type elfNN_aarch64_howto_none =
1425 HOWTO (R_AARCH64_NONE, /* type */
1426 0, /* rightshift */
1427 3, /* size (0 = byte, 1 = short, 2 = long) */
1428 0, /* bitsize */
1429 FALSE, /* pc_relative */
1430 0, /* bitpos */
1431 complain_overflow_dont,/* complain_on_overflow */
1432 bfd_elf_generic_reloc, /* special_function */
1433 "R_AARCH64_NONE", /* name */
1434 FALSE, /* partial_inplace */
1435 0, /* src_mask */
1436 0, /* dst_mask */
1437 FALSE); /* pcrel_offset */
1438
1439/* Given HOWTO, return the bfd internal relocation enumerator. */
1440
1441static bfd_reloc_code_real_type
1442elfNN_aarch64_bfd_reloc_from_howto (reloc_howto_type *howto)
1443{
1444 const int size
1445 = (int) ARRAY_SIZE (elfNN_aarch64_howto_table);
1446 const ptrdiff_t offset
1447 = howto - elfNN_aarch64_howto_table;
1448
1449 if (offset > 0 && offset < size - 1)
1450 return BFD_RELOC_AARCH64_RELOC_START + offset;
1451
1452 if (howto == &elfNN_aarch64_howto_none)
1453 return BFD_RELOC_AARCH64_NONE;
1454
1455 return BFD_RELOC_AARCH64_RELOC_START;
1456}
1457
1458/* Given R_TYPE, return the bfd internal relocation enumerator. */
1459
1460static bfd_reloc_code_real_type
1461elfNN_aarch64_bfd_reloc_from_type (unsigned int r_type)
1462{
1463 static bfd_boolean initialized_p = FALSE;
1464 /* Indexed by R_TYPE, values are offsets in the howto_table. */
1465 static unsigned int offsets[R_AARCH64_end];
1466
1467 if (initialized_p == FALSE)
1468 {
1469 unsigned int i;
1470
1471 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1472 if (elfNN_aarch64_howto_table[i].type != 0)
1473 offsets[elfNN_aarch64_howto_table[i].type] = i;
1474
1475 initialized_p = TRUE;
1476 }
1477
1478 if (r_type == R_AARCH64_NONE || r_type == R_AARCH64_NULL)
1479 return BFD_RELOC_AARCH64_NONE;
1480
1481 /* PR 17512: file: b371e70a. */
1482 if (r_type >= R_AARCH64_end)
1483 {
1484 _bfd_error_handler (_("Invalid AArch64 reloc number: %d"), r_type);
1485 bfd_set_error (bfd_error_bad_value);
1486 return BFD_RELOC_AARCH64_NONE;
1487 }
1488
1489 return BFD_RELOC_AARCH64_RELOC_START + offsets[r_type];
1490}
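/* Added example (illustrative, not from the original sources): after the
   lazy initialization above, a lookup such as
   elfNN_aarch64_bfd_reloc_from_type (AARCH64_R (CALL26)) yields
   BFD_RELOC_AARCH64_RELOC_START + offsets[AARCH64_R (CALL26)], i.e. the
   bfd enumerator whose howto entry carries the name
   AARCH64_R_STR (CALL26).  */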
1491
1492struct elf_aarch64_reloc_map
1493{
1494 bfd_reloc_code_real_type from;
1495 bfd_reloc_code_real_type to;
1496};
1497
1498/* Map bfd generic reloc to AArch64-specific reloc. */
1499static const struct elf_aarch64_reloc_map elf_aarch64_reloc_map[] =
1500{
1501 {BFD_RELOC_NONE, BFD_RELOC_AARCH64_NONE},
1502
1503 /* Basic data relocations. */
1504 {BFD_RELOC_CTOR, BFD_RELOC_AARCH64_NN},
1505 {BFD_RELOC_64, BFD_RELOC_AARCH64_64},
1506 {BFD_RELOC_32, BFD_RELOC_AARCH64_32},
1507 {BFD_RELOC_16, BFD_RELOC_AARCH64_16},
1508 {BFD_RELOC_64_PCREL, BFD_RELOC_AARCH64_64_PCREL},
1509 {BFD_RELOC_32_PCREL, BFD_RELOC_AARCH64_32_PCREL},
1510 {BFD_RELOC_16_PCREL, BFD_RELOC_AARCH64_16_PCREL},
1511};
1512
1513/* Given the bfd internal relocation enumerator in CODE, return the
1514 corresponding howto entry. */
1515
1516static reloc_howto_type *
1517elfNN_aarch64_howto_from_bfd_reloc (bfd_reloc_code_real_type code)
1518{
1519 unsigned int i;
1520
1521 /* Convert bfd generic reloc to AArch64-specific reloc. */
1522 if (code < BFD_RELOC_AARCH64_RELOC_START
1523 || code > BFD_RELOC_AARCH64_RELOC_END)
1524 for (i = 0; i < ARRAY_SIZE (elf_aarch64_reloc_map); i++)
1525 if (elf_aarch64_reloc_map[i].from == code)
1526 {
1527 code = elf_aarch64_reloc_map[i].to;
1528 break;
1529 }
1530
1531 if (code > BFD_RELOC_AARCH64_RELOC_START
1532 && code < BFD_RELOC_AARCH64_RELOC_END)
1533 if (elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START].type)
1534 return &elfNN_aarch64_howto_table[code - BFD_RELOC_AARCH64_RELOC_START];
1535
1536 if (code == BFD_RELOC_AARCH64_NONE)
1537 return &elfNN_aarch64_howto_none;
1538
1539 return NULL;
1540}
1541
1542static reloc_howto_type *
1543elfNN_aarch64_howto_from_type (unsigned int r_type)
1544{
1545 bfd_reloc_code_real_type val;
1546 reloc_howto_type *howto;
1547
1548#if ARCH_SIZE == 32
1549 if (r_type > 256)
1550 {
1551 bfd_set_error (bfd_error_bad_value);
1552 return NULL;
1553 }
1554#endif
1555
1556 if (r_type == R_AARCH64_NONE)
1557 return &elfNN_aarch64_howto_none;
1558
1559 val = elfNN_aarch64_bfd_reloc_from_type (r_type);
1560 howto = elfNN_aarch64_howto_from_bfd_reloc (val);
1561
1562 if (howto != NULL)
1563 return howto;
1564
1565 bfd_set_error (bfd_error_bad_value);
1566 return NULL;
1567}
1568
1569static void
1570elfNN_aarch64_info_to_howto (bfd *abfd ATTRIBUTE_UNUSED, arelent *bfd_reloc,
1571 Elf_Internal_Rela *elf_reloc)
1572{
1573 unsigned int r_type;
1574
1575 r_type = ELFNN_R_TYPE (elf_reloc->r_info);
1576 bfd_reloc->howto = elfNN_aarch64_howto_from_type (r_type);
1577}
1578
1579static reloc_howto_type *
1580elfNN_aarch64_reloc_type_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1581 bfd_reloc_code_real_type code)
1582{
1583 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (code);
1584
1585 if (howto != NULL)
1586 return howto;
1587
1588 bfd_set_error (bfd_error_bad_value);
1589 return NULL;
1590}
1591
1592static reloc_howto_type *
1593elfNN_aarch64_reloc_name_lookup (bfd *abfd ATTRIBUTE_UNUSED,
1594 const char *r_name)
1595{
1596 unsigned int i;
1597
1598 for (i = 1; i < ARRAY_SIZE (elfNN_aarch64_howto_table) - 1; ++i)
1599 if (elfNN_aarch64_howto_table[i].name != NULL
1600 && strcasecmp (elfNN_aarch64_howto_table[i].name, r_name) == 0)
1601 return &elfNN_aarch64_howto_table[i];
1602
1603 return NULL;
1604}
1605
1606#define TARGET_LITTLE_SYM aarch64_elfNN_le_vec
1607#define TARGET_LITTLE_NAME "elfNN-littleaarch64"
1608#define TARGET_BIG_SYM aarch64_elfNN_be_vec
1609#define TARGET_BIG_NAME "elfNN-bigaarch64"
1610
1611/* The linker script knows the section names for placement.
1612 The entry_names are used to do simple name mangling on the stubs.
1613 Given a function name and its type, the stub can be found. The
1614 name can be changed. The only requirement is that the %s be present. */
1615#define STUB_ENTRY_NAME "__%s_veneer"
1616
1617/* The name of the dynamic interpreter. This is put in the .interp
1618 section. */
1619#define ELF_DYNAMIC_INTERPRETER "/lib/ld.so.1"
1620
1621#define AARCH64_MAX_FWD_BRANCH_OFFSET \
1622 (((1 << 25) - 1) << 2)
1623#define AARCH64_MAX_BWD_BRANCH_OFFSET \
1624 (-((1 << 25) << 2))
1625
1626#define AARCH64_MAX_ADRP_IMM ((1 << 20) - 1)
1627#define AARCH64_MIN_ADRP_IMM (-(1 << 20))
1628
1629static int
1630aarch64_valid_for_adrp_p (bfd_vma value, bfd_vma place)
1631{
1632 bfd_signed_vma offset = (bfd_signed_vma) (PG (value) - PG (place)) >> 12;
1633 return offset <= AARCH64_MAX_ADRP_IMM && offset >= AARCH64_MIN_ADRP_IMM;
1634}
1635
1636static int
1637aarch64_valid_branch_p (bfd_vma value, bfd_vma place)
1638{
1639 bfd_signed_vma offset = (bfd_signed_vma) (value - place);
1640 return (offset <= AARCH64_MAX_FWD_BRANCH_OFFSET
1641 && offset >= AARCH64_MAX_BWD_BRANCH_OFFSET);
1642}
1643
1644static const uint32_t aarch64_adrp_branch_stub[] =
1645{
1646 0x90000010, /* adrp ip0, X */
1647 /* R_AARCH64_ADR_HI21_PCREL(X) */
1648 0x91000210, /* add ip0, ip0, :lo12:X */
1649 /* R_AARCH64_ADD_ABS_LO12_NC(X) */
1650 0xd61f0200, /* br ip0 */
1651};
1652
1653static const uint32_t aarch64_long_branch_stub[] =
1654{
1655#if ARCH_SIZE == 64
1656 0x58000090, /* ldr ip0, 1f */
1657#else
1658 0x18000090, /* ldr wip0, 1f */
1659#endif
1660 0x10000011, /* adr ip1, #0 */
1661 0x8b110210, /* add ip0, ip0, ip1 */
1662 0xd61f0200, /* br ip0 */
1663 0x00000000, /* 1: .xword or .word
1664 R_AARCH64_PRELNN(X) + 12
1665 */
1666 0x00000000,
1667};
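/* A sketch of how the long branch stub above works (added for
   clarity): the trailing literal is relocated with R_AARCH64_PRELNN
   against the target plus 12, so it holds target - (literal address
   - 12), i.e. the target relative to the "adr ip1, #0" instruction.
   At run time the ldr fetches that delta, the adr yields its own
   address, the add forms the absolute target in ip0 and the br
   transfers control to it. */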
1668
1669static const uint32_t aarch64_erratum_835769_stub[] =
1670{
1671 0x00000000, /* Placeholder for multiply accumulate. */
1672 0x14000000, /* b <label> */
1673};
1674
1675static const uint32_t aarch64_erratum_843419_stub[] =
1676{
1677 0x00000000, /* Placeholder for LDR instruction. */
1678 0x14000000, /* b <label> */
1679};
1680
1681/* Section name for stubs is the associated section name plus this
1682 string. */
1683#define STUB_SUFFIX ".stub"
1684
1685enum elf_aarch64_stub_type
1686{
1687 aarch64_stub_none,
1688 aarch64_stub_adrp_branch,
1689 aarch64_stub_long_branch,
1690 aarch64_stub_erratum_835769_veneer,
1691 aarch64_stub_erratum_843419_veneer,
1692};
1693
1694struct elf_aarch64_stub_hash_entry
1695{
1696 /* Base hash table entry structure. */
1697 struct bfd_hash_entry root;
1698
1699 /* The stub section. */
1700 asection *stub_sec;
1701
1702 /* Offset within stub_sec of the beginning of this stub. */
1703 bfd_vma stub_offset;
1704
1705 /* Given the symbol's value and its section we can determine its final
1706 value when building the stubs (so the stub knows where to jump). */
1707 bfd_vma target_value;
1708 asection *target_section;
1709
1710 enum elf_aarch64_stub_type stub_type;
1711
1712 /* The symbol table entry, if any, that this was derived from. */
1713 struct elf_aarch64_link_hash_entry *h;
1714
1715 /* Destination symbol type. */
1716 unsigned char st_type;
1717
1718 /* Where this stub is being called from, or, in the case of combined
1719 stub sections, the first input section in the group. */
1720 asection *id_sec;
1721
1722 /* The name for the local symbol at the start of this stub. The
1723 stub name in the hash table has to be unique; this does not, so
1724 it can be friendlier. */
1725 char *output_name;
1726
1727 /* The instruction which caused this stub to be generated (only valid for
1728 erratum 835769 workaround stubs at present). */
1729 uint32_t veneered_insn;
1730
1731 /* In an erratum 843419 workaround stub, the ADRP instruction offset. */
1732 bfd_vma adrp_offset;
1733};
1734
1735/* Used to build a map of a section. This is required for mixed-endian
1736 code/data. */
1737
1738typedef struct elf_elf_section_map
1739{
1740 bfd_vma vma;
1741 char type;
1742}
1743elf_aarch64_section_map;
1744
1745
1746typedef struct _aarch64_elf_section_data
1747{
1748 struct bfd_elf_section_data elf;
1749 unsigned int mapcount;
1750 unsigned int mapsize;
1751 elf_aarch64_section_map *map;
1752}
1753_aarch64_elf_section_data;
1754
1755#define elf_aarch64_section_data(sec) \
1756 ((_aarch64_elf_section_data *) elf_section_data (sec))
1757
1758/* The size of the thread control block, which is defined to be two pointers. */
1759#define TCB_SIZE (ARCH_SIZE/8)*2
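/* For example: with ARCH_SIZE == 64 this is (64 / 8) * 2 = 16 bytes,
   and with ARCH_SIZE == 32 it is 8 bytes. */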
1760
1761struct elf_aarch64_local_symbol
1762{
1763 unsigned int got_type;
1764 bfd_signed_vma got_refcount;
1765 bfd_vma got_offset;
1766
1767 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The
1768 offset is from the end of the jump table and reserved entries
1769 within the PLTGOT.
1770
1771 The magic value (bfd_vma) -1 indicates that an offset has not been
1772 allocated. */
1773 bfd_vma tlsdesc_got_jump_table_offset;
1774};
1775
1776struct elf_aarch64_obj_tdata
1777{
1778 struct elf_obj_tdata root;
1779
1780 /* Local symbol descriptors. */
1781 struct elf_aarch64_local_symbol *locals;
1782
1783 /* Zero to warn when linking objects with incompatible enum sizes. */
1784 int no_enum_size_warning;
1785
1786 /* Zero to warn when linking objects with incompatible wchar_t sizes. */
1787 int no_wchar_size_warning;
1788};
1789
1790#define elf_aarch64_tdata(bfd) \
1791 ((struct elf_aarch64_obj_tdata *) (bfd)->tdata.any)
1792
1793#define elf_aarch64_locals(bfd) (elf_aarch64_tdata (bfd)->locals)
1794
1795#define is_aarch64_elf(bfd) \
1796 (bfd_get_flavour (bfd) == bfd_target_elf_flavour \
1797 && elf_tdata (bfd) != NULL \
1798 && elf_object_id (bfd) == AARCH64_ELF_DATA)
1799
1800static bfd_boolean
1801elfNN_aarch64_mkobject (bfd *abfd)
1802{
1803 return bfd_elf_allocate_object (abfd, sizeof (struct elf_aarch64_obj_tdata),
1804 AARCH64_ELF_DATA);
1805}
1806
1807#define elf_aarch64_hash_entry(ent) \
1808 ((struct elf_aarch64_link_hash_entry *)(ent))
1809
1810#define GOT_UNKNOWN 0
1811#define GOT_NORMAL 1
1812#define GOT_TLS_GD 2
1813#define GOT_TLS_IE 4
1814#define GOT_TLSDESC_GD 8
1815
1816#define GOT_TLS_GD_ANY_P(type) ((type & GOT_TLS_GD) || (type & GOT_TLSDESC_GD))
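/* Illustrative example: a symbol referenced through both a
   traditional GD sequence and a TLS descriptor sequence would carry
   got_type == (GOT_TLS_GD | GOT_TLSDESC_GD), for which
   GOT_TLS_GD_ANY_P is nonzero. */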
1817
1818/* AArch64 ELF linker hash entry. */
1819struct elf_aarch64_link_hash_entry
1820{
1821 struct elf_link_hash_entry root;
1822
1823 /* Track dynamic relocs copied for this symbol. */
1824 struct elf_dyn_relocs *dyn_relocs;
1825
1826 /* Since PLT entries have variable size, we need to record the
1827 index into .got.plt instead of recomputing it from the PLT
1828 offset. */
1829 bfd_signed_vma plt_got_offset;
1830
1831 /* Bit mask representing the type of GOT entry(s), if any, required by
1832 this symbol. */
1833 unsigned int got_type;
1834
1835 /* A pointer to the most recently used stub hash entry against this
1836 symbol. */
1837 struct elf_aarch64_stub_hash_entry *stub_cache;
1838
1839 /* Offset of the GOTPLT entry reserved for the TLS descriptor. The offset
1840 is from the end of the jump table and reserved entries within the PLTGOT.
1841
1842 The magic value (bfd_vma) -1 indicates that an offset has not
1843 been allocated. */
1844 bfd_vma tlsdesc_got_jump_table_offset;
1845};
1846
1847static unsigned int
1848elfNN_aarch64_symbol_got_type (struct elf_link_hash_entry *h,
1849 bfd *abfd,
1850 unsigned long r_symndx)
1851{
1852 if (h)
1853 return elf_aarch64_hash_entry (h)->got_type;
1854
1855 if (! elf_aarch64_locals (abfd))
1856 return GOT_UNKNOWN;
1857
1858 return elf_aarch64_locals (abfd)[r_symndx].got_type;
1859}
1860
1861/* Get the AArch64 elf linker hash table from a link_info structure. */
1862#define elf_aarch64_hash_table(info) \
1863 ((struct elf_aarch64_link_hash_table *) ((info)->hash))
1864
1865#define aarch64_stub_hash_lookup(table, string, create, copy) \
1866 ((struct elf_aarch64_stub_hash_entry *) \
1867 bfd_hash_lookup ((table), (string), (create), (copy)))
1868
1869/* AArch64 ELF linker hash table. */
1870struct elf_aarch64_link_hash_table
1871{
1872 /* The main hash table. */
1873 struct elf_link_hash_table root;
1874
1875 /* Nonzero to force PIC branch veneers. */
1876 int pic_veneer;
1877
1878 /* Fix erratum 835769. */
1879 int fix_erratum_835769;
1880
1881 /* Fix erratum 843419. */
1882 int fix_erratum_843419;
1883
1884 /* Enable ADRP->ADR rewrite for erratum 843419 workaround. */
1885 int fix_erratum_843419_adr;
1886
1887 /* The number of bytes in the initial entry in the PLT. */
1888 bfd_size_type plt_header_size;
1889
1890 /* The number of bytes in the subsequent PLT entries. */
1891 bfd_size_type plt_entry_size;
1892
1893 /* Short-cuts to get to dynamic linker sections. */
1894 asection *sdynbss;
1895 asection *srelbss;
1896
1897 /* Small local sym cache. */
1898 struct sym_cache sym_cache;
1899
1900 /* For convenience in allocate_dynrelocs. */
1901 bfd *obfd;
1902
1903 /* The amount of space used by the reserved portion of the sgotplt
1904 section, plus whatever space is used by the jump slots. */
1905 bfd_vma sgotplt_jump_table_size;
1906
1907 /* The stub hash table. */
1908 struct bfd_hash_table stub_hash_table;
1909
1910 /* Linker stub bfd. */
1911 bfd *stub_bfd;
1912
1913 /* Linker call-backs. */
1914 asection *(*add_stub_section) (const char *, asection *);
1915 void (*layout_sections_again) (void);
1916
1917 /* Array to keep track of which stub sections have been created, and
1918 information on stub grouping. */
1919 struct map_stub
1920 {
1921 /* This is the section to which stubs in the group will be
1922 attached. */
1923 asection *link_sec;
1924 /* The stub section. */
1925 asection *stub_sec;
1926 } *stub_group;
1927
1928 /* Assorted information used by elfNN_aarch64_size_stubs. */
1929 unsigned int bfd_count;
1930 int top_index;
1931 asection **input_list;
1932
1933 /* The offset into splt of the PLT entry for the TLS descriptor
1934 resolver. Special values are 0, if not necessary (or not found
1935 to be necessary yet), and -1 if needed but not determined
1936 yet. */
1937 bfd_vma tlsdesc_plt;
1938
1939 /* The GOT offset for the lazy trampoline. Communicated to the
1940 loader via DT_TLSDESC_GOT. The magic value (bfd_vma) -1
1941 indicates an offset is not allocated. */
1942 bfd_vma dt_tlsdesc_got;
1943
1944 /* Used by local STT_GNU_IFUNC symbols. */
1945 htab_t loc_hash_table;
1946 void * loc_hash_memory;
1947};
1948
1949/* Create an entry in an AArch64 ELF linker hash table. */
1950
1951static struct bfd_hash_entry *
1952elfNN_aarch64_link_hash_newfunc (struct bfd_hash_entry *entry,
1953 struct bfd_hash_table *table,
1954 const char *string)
1955{
1956 struct elf_aarch64_link_hash_entry *ret =
1957 (struct elf_aarch64_link_hash_entry *) entry;
1958
1959 /* Allocate the structure if it has not already been allocated by a
1960 subclass. */
1961 if (ret == NULL)
1962 ret = bfd_hash_allocate (table,
1963 sizeof (struct elf_aarch64_link_hash_entry));
1964 if (ret == NULL)
1965 return (struct bfd_hash_entry *) ret;
1966
1967 /* Call the allocation method of the superclass. */
1968 ret = ((struct elf_aarch64_link_hash_entry *)
1969 _bfd_elf_link_hash_newfunc ((struct bfd_hash_entry *) ret,
1970 table, string));
1971 if (ret != NULL)
1972 {
1973 ret->dyn_relocs = NULL;
1974 ret->got_type = GOT_UNKNOWN;
1975 ret->plt_got_offset = (bfd_vma) - 1;
1976 ret->stub_cache = NULL;
1977 ret->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
1978 }
1979
1980 return (struct bfd_hash_entry *) ret;
1981}
1982
1983/* Initialize an entry in the stub hash table. */
1984
1985static struct bfd_hash_entry *
1986stub_hash_newfunc (struct bfd_hash_entry *entry,
1987 struct bfd_hash_table *table, const char *string)
1988{
1989 /* Allocate the structure if it has not already been allocated by a
1990 subclass. */
1991 if (entry == NULL)
1992 {
1993 entry = bfd_hash_allocate (table,
1994 sizeof (struct
1995 elf_aarch64_stub_hash_entry));
1996 if (entry == NULL)
1997 return entry;
1998 }
1999
2000 /* Call the allocation method of the superclass. */
2001 entry = bfd_hash_newfunc (entry, table, string);
2002 if (entry != NULL)
2003 {
2004 struct elf_aarch64_stub_hash_entry *eh;
2005
2006 /* Initialize the local fields. */
2007 eh = (struct elf_aarch64_stub_hash_entry *) entry;
2008 eh->adrp_offset = 0;
2009 eh->stub_sec = NULL;
2010 eh->stub_offset = 0;
2011 eh->target_value = 0;
2012 eh->target_section = NULL;
2013 eh->stub_type = aarch64_stub_none;
2014 eh->h = NULL;
2015 eh->id_sec = NULL;
2016 }
2017
2018 return entry;
2019}
2020
2021/* Compute a hash of a local hash entry. We use elf_link_hash_entry
2022 for local symbols so that we can handle local STT_GNU_IFUNC symbols
2023 as global symbols. We reuse indx and dynstr_index for the local symbol
2024 hash since they aren't used by global symbols in this backend. */
2025
2026static hashval_t
2027elfNN_aarch64_local_htab_hash (const void *ptr)
2028{
2029 struct elf_link_hash_entry *h
2030 = (struct elf_link_hash_entry *) ptr;
2031 return ELF_LOCAL_SYMBOL_HASH (h->indx, h->dynstr_index);
2032}
2033
2034/* Compare local hash entries. */
2035
2036static int
2037elfNN_aarch64_local_htab_eq (const void *ptr1, const void *ptr2)
2038{
2039 struct elf_link_hash_entry *h1
2040 = (struct elf_link_hash_entry *) ptr1;
2041 struct elf_link_hash_entry *h2
2042 = (struct elf_link_hash_entry *) ptr2;
2043
2044 return h1->indx == h2->indx && h1->dynstr_index == h2->dynstr_index;
2045}
2046
2047/* Find and/or create a hash entry for a local symbol. */
2048
2049static struct elf_link_hash_entry *
2050elfNN_aarch64_get_local_sym_hash (struct elf_aarch64_link_hash_table *htab,
2051 bfd *abfd, const Elf_Internal_Rela *rel,
2052 bfd_boolean create)
2053{
2054 struct elf_aarch64_link_hash_entry e, *ret;
2055 asection *sec = abfd->sections;
2056 hashval_t h = ELF_LOCAL_SYMBOL_HASH (sec->id,
2057 ELFNN_R_SYM (rel->r_info));
2058 void **slot;
2059
2060 e.root.indx = sec->id;
2061 e.root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2062 slot = htab_find_slot_with_hash (htab->loc_hash_table, &e, h,
2063 create ? INSERT : NO_INSERT);
2064
2065 if (!slot)
2066 return NULL;
2067
2068 if (*slot)
2069 {
2070 ret = (struct elf_aarch64_link_hash_entry *) *slot;
2071 return &ret->root;
2072 }
2073
2074 ret = (struct elf_aarch64_link_hash_entry *)
2075 objalloc_alloc ((struct objalloc *) htab->loc_hash_memory,
2076 sizeof (struct elf_aarch64_link_hash_entry));
2077 if (ret)
2078 {
2079 memset (ret, 0, sizeof (*ret));
2080 ret->root.indx = sec->id;
2081 ret->root.dynstr_index = ELFNN_R_SYM (rel->r_info);
2082 ret->root.dynindx = -1;
2083 *slot = ret;
2084 }
2085 return &ret->root;
2086}
2087
2088/* Copy the extra info we tack onto an elf_link_hash_entry. */
2089
2090static void
2091elfNN_aarch64_copy_indirect_symbol (struct bfd_link_info *info,
2092 struct elf_link_hash_entry *dir,
2093 struct elf_link_hash_entry *ind)
2094{
2095 struct elf_aarch64_link_hash_entry *edir, *eind;
2096
2097 edir = (struct elf_aarch64_link_hash_entry *) dir;
2098 eind = (struct elf_aarch64_link_hash_entry *) ind;
2099
2100 if (eind->dyn_relocs != NULL)
2101 {
2102 if (edir->dyn_relocs != NULL)
2103 {
2104 struct elf_dyn_relocs **pp;
2105 struct elf_dyn_relocs *p;
2106
2107 /* Add reloc counts against the indirect sym to the direct sym
2108 list. Merge any entries against the same section. */
2109 for (pp = &eind->dyn_relocs; (p = *pp) != NULL;)
2110 {
2111 struct elf_dyn_relocs *q;
2112
2113 for (q = edir->dyn_relocs; q != NULL; q = q->next)
2114 if (q->sec == p->sec)
2115 {
2116 q->pc_count += p->pc_count;
2117 q->count += p->count;
2118 *pp = p->next;
2119 break;
2120 }
2121 if (q == NULL)
2122 pp = &p->next;
2123 }
2124 *pp = edir->dyn_relocs;
2125 }
2126
2127 edir->dyn_relocs = eind->dyn_relocs;
2128 eind->dyn_relocs = NULL;
2129 }
2130
2131 if (ind->root.type == bfd_link_hash_indirect)
2132 {
2133 /* Copy over GOT entry type info. */
2134 if (dir->got.refcount <= 0)
2135 {
2136 edir->got_type = eind->got_type;
2137 eind->got_type = GOT_UNKNOWN;
2138 }
2139 }
2140
2141 _bfd_elf_link_hash_copy_indirect (info, dir, ind);
2142}
2143
2144/* Destroy an AArch64 elf linker hash table. */
2145
2146static void
2147elfNN_aarch64_link_hash_table_free (bfd *obfd)
2148{
2149 struct elf_aarch64_link_hash_table *ret
2150 = (struct elf_aarch64_link_hash_table *) obfd->link.hash;
2151
2152 if (ret->loc_hash_table)
2153 htab_delete (ret->loc_hash_table);
2154 if (ret->loc_hash_memory)
2155 objalloc_free ((struct objalloc *) ret->loc_hash_memory);
2156
2157 bfd_hash_table_free (&ret->stub_hash_table);
2158 _bfd_elf_link_hash_table_free (obfd);
2159}
2160
2161/* Create an AArch64 elf linker hash table. */
2162
2163static struct bfd_link_hash_table *
2164elfNN_aarch64_link_hash_table_create (bfd *abfd)
2165{
2166 struct elf_aarch64_link_hash_table *ret;
2167 bfd_size_type amt = sizeof (struct elf_aarch64_link_hash_table);
2168
2169 ret = bfd_zmalloc (amt);
2170 if (ret == NULL)
2171 return NULL;
2172
2173 if (!_bfd_elf_link_hash_table_init
2174 (&ret->root, abfd, elfNN_aarch64_link_hash_newfunc,
2175 sizeof (struct elf_aarch64_link_hash_entry), AARCH64_ELF_DATA))
2176 {
2177 free (ret);
2178 return NULL;
2179 }
2180
2181 ret->plt_header_size = PLT_ENTRY_SIZE;
2182 ret->plt_entry_size = PLT_SMALL_ENTRY_SIZE;
2183 ret->obfd = abfd;
2184 ret->dt_tlsdesc_got = (bfd_vma) - 1;
2185
2186 if (!bfd_hash_table_init (&ret->stub_hash_table, stub_hash_newfunc,
2187 sizeof (struct elf_aarch64_stub_hash_entry)))
2188 {
2189 _bfd_elf_link_hash_table_free (abfd);
2190 return NULL;
2191 }
2192
2193 ret->loc_hash_table = htab_try_create (1024,
2194 elfNN_aarch64_local_htab_hash,
2195 elfNN_aarch64_local_htab_eq,
2196 NULL);
2197 ret->loc_hash_memory = objalloc_create ();
2198 if (!ret->loc_hash_table || !ret->loc_hash_memory)
2199 {
2200 elfNN_aarch64_link_hash_table_free (abfd);
2201 return NULL;
2202 }
2203 ret->root.root.hash_table_free = elfNN_aarch64_link_hash_table_free;
2204
2205 return &ret->root.root;
2206}
2207
2208static bfd_boolean
2209aarch64_relocate (unsigned int r_type, bfd *input_bfd, asection *input_section,
2210 bfd_vma offset, bfd_vma value)
2211{
2212 reloc_howto_type *howto;
2213 bfd_vma place;
2214
2215 howto = elfNN_aarch64_howto_from_type (r_type);
2216 place = (input_section->output_section->vma + input_section->output_offset
2217 + offset);
2218
2219 r_type = elfNN_aarch64_bfd_reloc_from_type (r_type);
2220 value = _bfd_aarch64_elf_resolve_relocation (r_type, place, value, 0, FALSE);
2221 return _bfd_aarch64_elf_put_addend (input_bfd,
2222 input_section->contents + offset, r_type,
2223 howto, value);
2224}
2225
2226static enum elf_aarch64_stub_type
2227aarch64_select_branch_stub (bfd_vma value, bfd_vma place)
2228{
2229 if (aarch64_valid_for_adrp_p (value, place))
2230 return aarch64_stub_adrp_branch;
2231 return aarch64_stub_long_branch;
2232}
2233
2234/* Determine the type of stub needed, if any, for a call. */
2235
2236static enum elf_aarch64_stub_type
2237aarch64_type_of_stub (struct bfd_link_info *info,
2238 asection *input_sec,
2239 const Elf_Internal_Rela *rel,
2240 unsigned char st_type,
2241 struct elf_aarch64_link_hash_entry *hash,
2242 bfd_vma destination)
2243{
2244 bfd_vma location;
2245 bfd_signed_vma branch_offset;
2246 unsigned int r_type;
2247 struct elf_aarch64_link_hash_table *globals;
2248 enum elf_aarch64_stub_type stub_type = aarch64_stub_none;
2249 bfd_boolean via_plt_p;
2250
2251 if (st_type != STT_FUNC)
2252 return stub_type;
2253
2254 globals = elf_aarch64_hash_table (info);
2255 via_plt_p = (globals->root.splt != NULL && hash != NULL
2256 && hash->root.plt.offset != (bfd_vma) - 1);
2257
2258 if (via_plt_p)
2259 return stub_type;
2260
2261 /* Determine where the call point is. */
2262 location = (input_sec->output_offset
2263 + input_sec->output_section->vma + rel->r_offset);
2264
2265 branch_offset = (bfd_signed_vma) (destination - location);
2266
2267 r_type = ELFNN_R_TYPE (rel->r_info);
2268
2269 /* We don't want to redirect any old unconditional jump in this way,
2270 only one which is being used for a sibcall, where it is
2271 acceptable for the IP0 and IP1 registers to be clobbered. */
2272 if ((r_type == AARCH64_R (CALL26) || r_type == AARCH64_R (JUMP26))
2273 && (branch_offset > AARCH64_MAX_FWD_BRANCH_OFFSET
2274 || branch_offset < AARCH64_MAX_BWD_BRANCH_OFFSET))
2275 {
2276 stub_type = aarch64_stub_long_branch;
2277 }
2278
2279 return stub_type;
2280}
2281
2282/* Build a name for an entry in the stub hash table. */
2283
2284static char *
2285elfNN_aarch64_stub_name (const asection *input_section,
2286 const asection *sym_sec,
2287 const struct elf_aarch64_link_hash_entry *hash,
2288 const Elf_Internal_Rela *rel)
2289{
2290 char *stub_name;
2291 bfd_size_type len;
2292
2293 if (hash)
2294 {
2295 len = 8 + 1 + strlen (hash->root.root.root.string) + 1 + 16 + 1;
2296 stub_name = bfd_malloc (len);
2297 if (stub_name != NULL)
2298 snprintf (stub_name, len, "%08x_%s+%" BFD_VMA_FMT "x",
2299 (unsigned int) input_section->id,
2300 hash->root.root.root.string,
2301 rel->r_addend);
2302 }
2303 else
2304 {
2305 len = 8 + 1 + 8 + 1 + 8 + 1 + 16 + 1;
2306 stub_name = bfd_malloc (len);
2307 if (stub_name != NULL)
2308 snprintf (stub_name, len, "%08x_%x:%x+%" BFD_VMA_FMT "x",
2309 (unsigned int) input_section->id,
2310 (unsigned int) sym_sec->id,
2311 (unsigned int) ELFNN_R_SYM (rel->r_info),
2312 rel->r_addend);
2313 }
2314
2315 return stub_name;
2316}
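/* Purely illustrative (the ids below are made up): a call to a global
   symbol "bar" from the section group with id 0x2a and addend 0
   yields a name such as "0000002a_bar+0", while a call to local
   symbol index 7 defined in the section with id 0x13 yields
   "0000002a_13:7+0". */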
2317
2318/* Look up an entry in the stub hash. Stub entries are cached because
2319 creating the stub name takes a bit of time. */
2320
2321static struct elf_aarch64_stub_hash_entry *
2322elfNN_aarch64_get_stub_entry (const asection *input_section,
2323 const asection *sym_sec,
2324 struct elf_link_hash_entry *hash,
2325 const Elf_Internal_Rela *rel,
2326 struct elf_aarch64_link_hash_table *htab)
2327{
2328 struct elf_aarch64_stub_hash_entry *stub_entry;
2329 struct elf_aarch64_link_hash_entry *h =
2330 (struct elf_aarch64_link_hash_entry *) hash;
2331 const asection *id_sec;
2332
2333 if ((input_section->flags & SEC_CODE) == 0)
2334 return NULL;
2335
2336 /* If this input section is part of a group of sections sharing one
2337 stub section, then use the id of the first section in the group.
2338 Stub names need to include a section id, as there may well be
2339 more than one stub used to reach say, printf, and we need to
2340 distinguish between them. */
2341 id_sec = htab->stub_group[input_section->id].link_sec;
2342
2343 if (h != NULL && h->stub_cache != NULL
2344 && h->stub_cache->h == h && h->stub_cache->id_sec == id_sec)
2345 {
2346 stub_entry = h->stub_cache;
2347 }
2348 else
2349 {
2350 char *stub_name;
2351
2352 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, h, rel);
2353 if (stub_name == NULL)
2354 return NULL;
2355
2356 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table,
2357 stub_name, FALSE, FALSE);
2358 if (h != NULL)
2359 h->stub_cache = stub_entry;
2360
2361 free (stub_name);
2362 }
2363
2364 return stub_entry;
2365}
2366
2367
2368/* Create a stub section. */
2369
2370static asection *
2371_bfd_aarch64_create_stub_section (asection *section,
2372 struct elf_aarch64_link_hash_table *htab)
2373{
2374 size_t namelen;
2375 bfd_size_type len;
2376 char *s_name;
2377
2378 namelen = strlen (section->name);
2379 len = namelen + sizeof (STUB_SUFFIX);
2380 s_name = bfd_alloc (htab->stub_bfd, len);
2381 if (s_name == NULL)
2382 return NULL;
2383
2384 memcpy (s_name, section->name, namelen);
2385 memcpy (s_name + namelen, STUB_SUFFIX, sizeof (STUB_SUFFIX));
2386 return (*htab->add_stub_section) (s_name, section);
2387}
2388
2389
2390/* Find or create a stub section for a link section.
2391
2392 Find or create the stub section used to collect stubs attached to
2393 the specified link section. */
2394
2395static asection *
2396_bfd_aarch64_get_stub_for_link_section (asection *link_section,
2397 struct elf_aarch64_link_hash_table *htab)
2398{
2399 if (htab->stub_group[link_section->id].stub_sec == NULL)
2400 htab->stub_group[link_section->id].stub_sec
2401 = _bfd_aarch64_create_stub_section (link_section, htab);
2402 return htab->stub_group[link_section->id].stub_sec;
2403}
2404
2405
2406/* Find or create a stub section in the stub group for an input
2407 section. */
2408
2409static asection *
2410_bfd_aarch64_create_or_find_stub_sec (asection *section,
2411 struct elf_aarch64_link_hash_table *htab)
2412{
2413 asection *link_sec = htab->stub_group[section->id].link_sec;
2414 return _bfd_aarch64_get_stub_for_link_section (link_sec, htab);
2415}
2416
2417
2418/* Add a new stub entry in the stub group associated with an input
2419 section to the stub hash. Not all fields of the new stub entry are
2420 initialised. */
2421
2422static struct elf_aarch64_stub_hash_entry *
2423_bfd_aarch64_add_stub_entry_in_group (const char *stub_name,
2424 asection *section,
2425 struct elf_aarch64_link_hash_table *htab)
2426{
2427 asection *link_sec;
2428 asection *stub_sec;
2429 struct elf_aarch64_stub_hash_entry *stub_entry;
2430
2431 link_sec = htab->stub_group[section->id].link_sec;
2432 stub_sec = _bfd_aarch64_create_or_find_stub_sec (section, htab);
2433
2434 /* Enter this entry into the linker stub hash table. */
2435 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2436 TRUE, FALSE);
2437 if (stub_entry == NULL)
2438 {
2439 (*_bfd_error_handler) (_("%s: cannot create stub entry %s"),
2440 section->owner, stub_name);
2441 return NULL;
2442 }
2443
2444 stub_entry->stub_sec = stub_sec;
2445 stub_entry->stub_offset = 0;
2446 stub_entry->id_sec = link_sec;
2447
2448 return stub_entry;
2449}
2450
2451/* Add a new stub entry in the final stub section to the stub hash.
2452 Not all fields of the new stub entry are initialised. */
2453
2454static struct elf_aarch64_stub_hash_entry *
2455_bfd_aarch64_add_stub_entry_after (const char *stub_name,
2456 asection *link_section,
2457 struct elf_aarch64_link_hash_table *htab)
2458{
2459 asection *stub_sec;
2460 struct elf_aarch64_stub_hash_entry *stub_entry;
2461
2462 stub_sec = _bfd_aarch64_get_stub_for_link_section (link_section, htab);
2463 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
2464 TRUE, FALSE);
2465 if (stub_entry == NULL)
2466 {
2467 (*_bfd_error_handler) (_("cannot create stub entry %s"), stub_name);
2468 return NULL;
2469 }
2470
2471 stub_entry->stub_sec = stub_sec;
2472 stub_entry->stub_offset = 0;
2473 stub_entry->id_sec = link_section;
2474
2475 return stub_entry;
2476}
2477
2478
2479static bfd_boolean
2480aarch64_build_one_stub (struct bfd_hash_entry *gen_entry,
2481 void *in_arg ATTRIBUTE_UNUSED)
2482{
2483 struct elf_aarch64_stub_hash_entry *stub_entry;
2484 asection *stub_sec;
2485 bfd *stub_bfd;
2486 bfd_byte *loc;
2487 bfd_vma sym_value;
2488 bfd_vma veneered_insn_loc;
2489 bfd_vma veneer_entry_loc;
2490 bfd_signed_vma branch_offset = 0;
2491 unsigned int template_size;
2492 const uint32_t *template;
2493 unsigned int i;
2494
2495 /* Massage our args to the form they really have. */
2496 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2497
2498 stub_sec = stub_entry->stub_sec;
2499
2500 /* Make a note of the offset within the stubs for this entry. */
2501 stub_entry->stub_offset = stub_sec->size;
2502 loc = stub_sec->contents + stub_entry->stub_offset;
2503
2504 stub_bfd = stub_sec->owner;
2505
2506 /* This is the address of the stub destination. */
2507 sym_value = (stub_entry->target_value
2508 + stub_entry->target_section->output_offset
2509 + stub_entry->target_section->output_section->vma);
2510
2511 if (stub_entry->stub_type == aarch64_stub_long_branch)
2512 {
2513 bfd_vma place = (stub_entry->stub_offset + stub_sec->output_section->vma
2514 + stub_sec->output_offset);
2515
2516 /* See if we can relax the stub. */
2517 if (aarch64_valid_for_adrp_p (sym_value, place))
2518 stub_entry->stub_type = aarch64_select_branch_stub (sym_value, place);
2519 }
2520
2521 switch (stub_entry->stub_type)
2522 {
2523 case aarch64_stub_adrp_branch:
2524 template = aarch64_adrp_branch_stub;
2525 template_size = sizeof (aarch64_adrp_branch_stub);
2526 break;
2527 case aarch64_stub_long_branch:
2528 template = aarch64_long_branch_stub;
2529 template_size = sizeof (aarch64_long_branch_stub);
2530 break;
2531 case aarch64_stub_erratum_835769_veneer:
2532 template = aarch64_erratum_835769_stub;
2533 template_size = sizeof (aarch64_erratum_835769_stub);
2534 break;
2535 case aarch64_stub_erratum_843419_veneer:
2536 template = aarch64_erratum_843419_stub;
2537 template_size = sizeof (aarch64_erratum_843419_stub);
2538 break;
2539 default:
2540 abort ();
2541 }
2542
2543 for (i = 0; i < (template_size / sizeof template[0]); i++)
2544 {
2545 bfd_putl32 (template[i], loc);
2546 loc += 4;
2547 }
2548
2549 template_size = (template_size + 7) & ~7;
2550 stub_sec->size += template_size;
2551
2552 switch (stub_entry->stub_type)
2553 {
2554 case aarch64_stub_adrp_branch:
2555 if (aarch64_relocate (AARCH64_R (ADR_PREL_PG_HI21), stub_bfd, stub_sec,
2556 stub_entry->stub_offset, sym_value))
2557 /* The stub would not have been relaxed if the offset was out
2558 of range. */
2559 BFD_FAIL ();
2560
2561 if (aarch64_relocate (AARCH64_R (ADD_ABS_LO12_NC), stub_bfd, stub_sec,
2562 stub_entry->stub_offset + 4, sym_value))
2563 BFD_FAIL ();
2564 break;
2565
2566 case aarch64_stub_long_branch:
2567 /* We want the value relative to the address 12 bytes back from the
2568 value itself. */
2569 if (aarch64_relocate (AARCH64_R (PRELNN), stub_bfd, stub_sec,
2570 stub_entry->stub_offset + 16, sym_value + 12))
2571 BFD_FAIL ();
2572 break;
2573
2574 case aarch64_stub_erratum_835769_veneer:
2575 veneered_insn_loc = stub_entry->target_section->output_section->vma
2576 + stub_entry->target_section->output_offset
2577 + stub_entry->target_value;
2578 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
2579 + stub_entry->stub_sec->output_offset
2580 + stub_entry->stub_offset;
2581 branch_offset = veneered_insn_loc - veneer_entry_loc;
2582 branch_offset >>= 2;
2583 branch_offset &= 0x3ffffff;
2584 bfd_putl32 (stub_entry->veneered_insn,
2585 stub_sec->contents + stub_entry->stub_offset);
2586 bfd_putl32 (template[1] | branch_offset,
2587 stub_sec->contents + stub_entry->stub_offset + 4);
2588 break;
2589
2590 case aarch64_stub_erratum_843419_veneer:
2591 if (aarch64_relocate (AARCH64_R (JUMP26), stub_bfd, stub_sec,
2592 stub_entry->stub_offset + 4, sym_value + 4))
2593 BFD_FAIL ();
2594 break;
2595
2596 default:
2597 abort ();
2598 }
2599
2600 return TRUE;
2601}
2602
2603/* As above, but don't actually build the stub. Just bump offset so
2604 we know stub section sizes. */
2605
2606static bfd_boolean
2607aarch64_size_one_stub (struct bfd_hash_entry *gen_entry,
2608 void *in_arg ATTRIBUTE_UNUSED)
2609{
2610 struct elf_aarch64_stub_hash_entry *stub_entry;
2611 int size;
2612
2613 /* Massage our args to the form they really have. */
2614 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
2615
2616 switch (stub_entry->stub_type)
2617 {
2618 case aarch64_stub_adrp_branch:
2619 size = sizeof (aarch64_adrp_branch_stub);
2620 break;
2621 case aarch64_stub_long_branch:
2622 size = sizeof (aarch64_long_branch_stub);
2623 break;
2624 case aarch64_stub_erratum_835769_veneer:
2625 size = sizeof (aarch64_erratum_835769_stub);
2626 break;
2627 case aarch64_stub_erratum_843419_veneer:
2628 size = sizeof (aarch64_erratum_843419_stub);
2629 break;
2630 default:
2631 abort ();
2632 }
2633
2634 size = (size + 7) & ~7;
2635 stub_entry->stub_sec->size += size;
2636 return TRUE;
2637}
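/* For example: sizeof (aarch64_adrp_branch_stub) is 12 bytes, which
   the rounding above pads to 16, while the 24-byte long-branch stub
   and the 8-byte erratum veneers are already multiples of 8. */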
2638
2639/* External entry points for sizing and building linker stubs. */
2640
2641/* Set up various things so that we can make a list of input sections
2642 for each output section included in the link. Returns -1 on error,
2643 0 when no stubs will be needed, and 1 on success. */
2644
2645int
2646elfNN_aarch64_setup_section_lists (bfd *output_bfd,
2647 struct bfd_link_info *info)
2648{
2649 bfd *input_bfd;
2650 unsigned int bfd_count;
2651 int top_id, top_index;
2652 asection *section;
2653 asection **input_list, **list;
2654 bfd_size_type amt;
2655 struct elf_aarch64_link_hash_table *htab =
2656 elf_aarch64_hash_table (info);
2657
2658 if (!is_elf_hash_table (htab))
2659 return 0;
2660
2661 /* Count the number of input BFDs and find the top input section id. */
2662 for (input_bfd = info->input_bfds, bfd_count = 0, top_id = 0;
2663 input_bfd != NULL; input_bfd = input_bfd->link.next)
2664 {
2665 bfd_count += 1;
2666 for (section = input_bfd->sections;
2667 section != NULL; section = section->next)
2668 {
2669 if (top_id < section->id)
2670 top_id = section->id;
2671 }
2672 }
2673 htab->bfd_count = bfd_count;
2674
2675 amt = sizeof (struct map_stub) * (top_id + 1);
2676 htab->stub_group = bfd_zmalloc (amt);
2677 if (htab->stub_group == NULL)
2678 return -1;
2679
2680 /* We can't use output_bfd->section_count here to find the top output
2681 section index as some sections may have been removed, and
2682 _bfd_strip_section_from_output doesn't renumber the indices. */
2683 for (section = output_bfd->sections, top_index = 0;
2684 section != NULL; section = section->next)
2685 {
2686 if (top_index < section->index)
2687 top_index = section->index;
2688 }
2689
2690 htab->top_index = top_index;
2691 amt = sizeof (asection *) * (top_index + 1);
2692 input_list = bfd_malloc (amt);
2693 htab->input_list = input_list;
2694 if (input_list == NULL)
2695 return -1;
2696
2697 /* For sections we aren't interested in, mark their entries with a
2698 value we can check later. */
2699 list = input_list + top_index;
2700 do
2701 *list = bfd_abs_section_ptr;
2702 while (list-- != input_list);
2703
2704 for (section = output_bfd->sections;
2705 section != NULL; section = section->next)
2706 {
2707 if ((section->flags & SEC_CODE) != 0)
2708 input_list[section->index] = NULL;
2709 }
2710
2711 return 1;
2712}
2713
2714/* Used by elfNN_aarch64_next_input_section and group_sections. */
2715#define PREV_SEC(sec) (htab->stub_group[(sec)->id].link_sec)
2716
2717/* The linker repeatedly calls this function for each input section,
2718 in the order that input sections are linked into output sections.
2719 Build lists of input sections to determine groupings between which
2720 we may insert linker stubs. */
2721
2722void
2723elfNN_aarch64_next_input_section (struct bfd_link_info *info, asection *isec)
2724{
2725 struct elf_aarch64_link_hash_table *htab =
2726 elf_aarch64_hash_table (info);
2727
2728 if (isec->output_section->index <= htab->top_index)
2729 {
2730 asection **list = htab->input_list + isec->output_section->index;
2731
2732 if (*list != bfd_abs_section_ptr)
2733 {
2734 /* Steal the link_sec pointer for our list. */
2735 /* This happens to make the list in reverse order,
2736 which is what we want. */
2737 PREV_SEC (isec) = *list;
2738 *list = isec;
2739 }
2740 }
2741}
2742
2743/* See whether we can group stub sections together. Grouping stub
2744 sections may result in fewer stubs. More importantly, we need to
2745 put all .init* and .fini* stubs at the beginning of the .init or
2746 .fini output sections respectively, because glibc splits the
2747 _init and _fini functions into multiple parts. Putting a stub in
2748 the middle of a function is not a good idea. */
2749
2750static void
2751group_sections (struct elf_aarch64_link_hash_table *htab,
2752 bfd_size_type stub_group_size,
2753 bfd_boolean stubs_always_before_branch)
2754{
2755 asection **list = htab->input_list + htab->top_index;
2756
2757 do
2758 {
2759 asection *tail = *list;
2760
2761 if (tail == bfd_abs_section_ptr)
2762 continue;
2763
2764 while (tail != NULL)
2765 {
2766 asection *curr;
2767 asection *prev;
2768 bfd_size_type total;
2769
2770 curr = tail;
2771 total = tail->size;
2772 while ((prev = PREV_SEC (curr)) != NULL
2773 && ((total += curr->output_offset - prev->output_offset)
2774 < stub_group_size))
2775 curr = prev;
2776
2777 /* OK, the size from the start of CURR to the end is less
2778 than stub_group_size and thus can be handled by one stub
2779 section. (Or the tail section is itself larger than
2780 stub_group_size, in which case we may be toast.)
2781 We should really be keeping track of the total size of
2782 stubs added here, as stubs contribute to the final output
2783 section size. */
2784 do
2785 {
2786 prev = PREV_SEC (tail);
2787 /* Set up this stub group. */
2788 htab->stub_group[tail->id].link_sec = curr;
2789 }
2790 while (tail != curr && (tail = prev) != NULL);
2791
2792 /* But wait, there's more! Input sections up to stub_group_size
2793 bytes before the stub section can be handled by it too. */
2794 if (!stubs_always_before_branch)
2795 {
2796 total = 0;
2797 while (prev != NULL
2798 && ((total += tail->output_offset - prev->output_offset)
2799 < stub_group_size))
2800 {
2801 tail = prev;
2802 prev = PREV_SEC (tail);
2803 htab->stub_group[tail->id].link_sec = curr;
2804 }
2805 }
2806 tail = prev;
2807 }
2808 }
2809 while (list-- != htab->input_list);
2810
2811 free (htab->input_list);
2812}
2813
2814#undef PREV_SEC
2815
2816#define AARCH64_BITS(x, pos, n) (((x) >> (pos)) & ((1 << (n)) - 1))
2817
2818#define AARCH64_RT(insn) AARCH64_BITS (insn, 0, 5)
2819#define AARCH64_RT2(insn) AARCH64_BITS (insn, 10, 5)
2820#define AARCH64_RA(insn) AARCH64_BITS (insn, 10, 5)
2821#define AARCH64_RD(insn) AARCH64_BITS (insn, 0, 5)
2822#define AARCH64_RN(insn) AARCH64_BITS (insn, 5, 5)
2823#define AARCH64_RM(insn) AARCH64_BITS (insn, 16, 5)
2824
2825#define AARCH64_MAC(insn) (((insn) & 0xff000000) == 0x9b000000)
2826#define AARCH64_BIT(insn, n) AARCH64_BITS (insn, n, 1)
2827#define AARCH64_OP31(insn) AARCH64_BITS (insn, 21, 3)
2828#define AARCH64_ZR 0x1f
2829
2830/* All ld/st ops. See C4-182 of the ARM ARM. The encoding space for
2831 LD_PCREL, LDST_RO, LDST_UI and LDST_UIMM covers prefetch ops. */
2832
2833#define AARCH64_LD(insn) (AARCH64_BIT (insn, 22) == 1)
2834#define AARCH64_LDST(insn) (((insn) & 0x0a000000) == 0x08000000)
2835#define AARCH64_LDST_EX(insn) (((insn) & 0x3f000000) == 0x08000000)
2836#define AARCH64_LDST_PCREL(insn) (((insn) & 0x3b000000) == 0x18000000)
2837#define AARCH64_LDST_NAP(insn) (((insn) & 0x3b800000) == 0x28000000)
2838#define AARCH64_LDSTP_PI(insn) (((insn) & 0x3b800000) == 0x28800000)
2839#define AARCH64_LDSTP_O(insn) (((insn) & 0x3b800000) == 0x29000000)
2840#define AARCH64_LDSTP_PRE(insn) (((insn) & 0x3b800000) == 0x29800000)
2841#define AARCH64_LDST_UI(insn) (((insn) & 0x3b200c00) == 0x38000000)
2842#define AARCH64_LDST_PIIMM(insn) (((insn) & 0x3b200c00) == 0x38000400)
2843#define AARCH64_LDST_U(insn) (((insn) & 0x3b200c00) == 0x38000800)
2844#define AARCH64_LDST_PREIMM(insn) (((insn) & 0x3b200c00) == 0x38000c00)
2845#define AARCH64_LDST_RO(insn) (((insn) & 0x3b200c00) == 0x38200800)
2846#define AARCH64_LDST_UIMM(insn) (((insn) & 0x3b000000) == 0x39000000)
2847#define AARCH64_LDST_SIMD_M(insn) (((insn) & 0xbfbf0000) == 0x0c000000)
2848#define AARCH64_LDST_SIMD_M_PI(insn) (((insn) & 0xbfa00000) == 0x0c800000)
2849#define AARCH64_LDST_SIMD_S(insn) (((insn) & 0xbf9f0000) == 0x0d000000)
2850#define AARCH64_LDST_SIMD_S_PI(insn) (((insn) & 0xbf800000) == 0x0d800000)
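/* Worked example for the classifiers above (illustrative only):
   "ldr x1, [x2, #8]" encodes as 0xf9400441; AARCH64_LDST and
   AARCH64_LDST_UIMM both match it, AARCH64_LD identifies it as a
   load, and AARCH64_RT / AARCH64_RN extract registers 1 and 2. */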
2851
2852/* Classify an INSN if it is indeed a load/store.
2853
2854 Return TRUE if INSN is a LD/ST instruction, otherwise return FALSE.
2855
2856 For scalar LD/ST instructions PAIR is FALSE, RT is returned and RT2
2857 is set equal to RT.
2858
2859 For LD/ST pair instructions PAIR is TRUE, RT and RT2 are returned.
2860
2861 */
2862
2863static bfd_boolean
2864aarch64_mem_op_p (uint32_t insn, unsigned int *rt, unsigned int *rt2,
2865 bfd_boolean *pair, bfd_boolean *load)
2866{
2867 uint32_t opcode;
2868 unsigned int r;
2869 uint32_t opc = 0;
2870 uint32_t v = 0;
2871 uint32_t opc_v = 0;
2872
2873 /* Bail out quickly if INSN doesn't fall into the load-store
2874 encoding space. */
2875 if (!AARCH64_LDST (insn))
2876 return FALSE;
2877
2878 *pair = FALSE;
2879 *load = FALSE;
2880 if (AARCH64_LDST_EX (insn))
2881 {
2882 *rt = AARCH64_RT (insn);
2883 *rt2 = *rt;
2884 if (AARCH64_BIT (insn, 21) == 1)
2885 {
2886 *pair = TRUE;
2887 *rt2 = AARCH64_RT2 (insn);
2888 }
2889 *load = AARCH64_LD (insn);
2890 return TRUE;
2891 }
2892 else if (AARCH64_LDST_NAP (insn)
2893 || AARCH64_LDSTP_PI (insn)
2894 || AARCH64_LDSTP_O (insn)
2895 || AARCH64_LDSTP_PRE (insn))
2896 {
2897 *pair = TRUE;
2898 *rt = AARCH64_RT (insn);
2899 *rt2 = AARCH64_RT2 (insn);
2900 *load = AARCH64_LD (insn);
2901 return TRUE;
2902 }
2903 else if (AARCH64_LDST_PCREL (insn)
2904 || AARCH64_LDST_UI (insn)
2905 || AARCH64_LDST_PIIMM (insn)
2906 || AARCH64_LDST_U (insn)
2907 || AARCH64_LDST_PREIMM (insn)
2908 || AARCH64_LDST_RO (insn)
2909 || AARCH64_LDST_UIMM (insn))
2910 {
2911 *rt = AARCH64_RT (insn);
2912 *rt2 = *rt;
2913 if (AARCH64_LDST_PCREL (insn))
2914 *load = TRUE;
2915 opc = AARCH64_BITS (insn, 22, 2);
2916 v = AARCH64_BIT (insn, 26);
2917 opc_v = opc | (v << 2);
2918 *load = (opc_v == 1 || opc_v == 2 || opc_v == 3
2919 || opc_v == 5 || opc_v == 7);
2920 return TRUE;
2921 }
2922 else if (AARCH64_LDST_SIMD_M (insn)
2923 || AARCH64_LDST_SIMD_M_PI (insn))
2924 {
2925 *rt = AARCH64_RT (insn);
2926 *load = AARCH64_BIT (insn, 22);
2927 opcode = (insn >> 12) & 0xf;
2928 switch (opcode)
2929 {
2930 case 0:
2931 case 2:
2932 *rt2 = *rt + 3;
2933 break;
2934
2935 case 4:
2936 case 6:
2937 *rt2 = *rt + 2;
2938 break;
2939
2940 case 7:
2941 *rt2 = *rt;
2942 break;
2943
2944 case 8:
2945 case 10:
2946 *rt2 = *rt + 1;
2947 break;
2948
2949 default:
2950 return FALSE;
2951 }
2952 return TRUE;
2953 }
2954 else if (AARCH64_LDST_SIMD_S (insn)
2955 || AARCH64_LDST_SIMD_S_PI (insn))
2956 {
2957 *rt = AARCH64_RT (insn);
2958 r = (insn >> 21) & 1;
2959 *load = AARCH64_BIT (insn, 22);
2960 opcode = (insn >> 13) & 0x7;
2961 switch (opcode)
2962 {
2963 case 0:
2964 case 2:
2965 case 4:
2966 *rt2 = *rt + r;
2967 break;
2968
2969 case 1:
2970 case 3:
2971 case 5:
2972 *rt2 = *rt + (r == 0 ? 2 : 3);
2973 break;
2974
2975 case 6:
2976 *rt2 = *rt + r;
2977 break;
2978
2979 case 7:
2980 *rt2 = *rt + (r == 0 ? 2 : 3);
2981 break;
2982
2983 default:
2984 return FALSE;
2985 }
2986 return TRUE;
2987 }
2988
2989 return FALSE;
2990}
2991
2992/* Return TRUE if INSN is multiply-accumulate. */
2993
2994static bfd_boolean
2995aarch64_mlxl_p (uint32_t insn)
2996{
2997 uint32_t op31 = AARCH64_OP31 (insn);
2998
2999 if (AARCH64_MAC (insn)
3000 && (op31 == 0 || op31 == 1 || op31 == 5)
3001 /* Exclude MUL instructions which are encoded as a multiply accumulate
3002 with RA = XZR. */
3003 && AARCH64_RA (insn) != AARCH64_ZR)
3004 return TRUE;
3005
3006 return FALSE;
3007}
3008
3009/* Some early revisions of the Cortex-A53 have an erratum (835769) whereby
3010 it is possible for a 64-bit multiply-accumulate instruction to generate an
3011 incorrect result. The details are quite complex and hard to
3012 determine statically, since branches in the code may exist in some
3013 circumstances, but all cases end with a memory (load, store, or
3014 prefetch) instruction followed immediately by the multiply-accumulate
3015 operation. We employ a linker patching technique, by moving the potentially
3016 affected multiply-accumulate instruction into a patch region and replacing
3017 the original instruction with a branch to the patch. This function checks
3018 if INSN_1 is the memory operation followed by a multiply-accumulate
3019 operation (INSN_2). Return TRUE if an erratum sequence is found, FALSE
3020 if INSN_1 and INSN_2 are safe. */
3021
3022static bfd_boolean
3023aarch64_erratum_sequence (uint32_t insn_1, uint32_t insn_2)
3024{
3025 uint32_t rt;
3026 uint32_t rt2;
3027 uint32_t rn;
3028 uint32_t rm;
3029 uint32_t ra;
3030 bfd_boolean pair;
3031 bfd_boolean load;
3032
3033 if (aarch64_mlxl_p (insn_2)
3034 && aarch64_mem_op_p (insn_1, &rt, &rt2, &pair, &load))
3035 {
3036 /* Any SIMD memory op is independent of the subsequent MLA
3037 by definition of the erratum. */
3038 if (AARCH64_BIT (insn_1, 26))
3039 return TRUE;
3040
3041 /* If not SIMD, check for integer memory ops and MLA relationship. */
3042 rn = AARCH64_RN (insn_2);
3043 ra = AARCH64_RA (insn_2);
3044 rm = AARCH64_RM (insn_2);
3045
3046 /* If this is a load and there's a true (RAW) dependency, we are safe
3047 and this is not an erratum sequence. */
3048 if (load &&
3049 (rt == rn || rt == rm || rt == ra
3050 || (pair && (rt2 == rn || rt2 == rm || rt2 == ra))))
3051 return FALSE;
3052
3053 /* We conservatively put out stubs for all other cases (including
3054 writebacks). */
3055 return TRUE;
3056 }
3057
3058 return FALSE;
3059}
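/* An illustrative pair of instructions (not taken from any real
   input) that the check above flags: "ldr x0, [x1]" (0xf9400020)
   followed by "madd x2, x3, x4, x5" (0x9b041462). The load writes
   x0, which is none of the Rn, Rm or Ra operands of the
   multiply-accumulate, so there is no true dependency and a veneer
   is emitted. Change the load to "ldr x3, [x1]" and the sequence is
   considered safe. */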
3060
3061/* Used to order a list of mapping symbols by address. */
3062
3063static int
3064elf_aarch64_compare_mapping (const void *a, const void *b)
3065{
3066 const elf_aarch64_section_map *amap = (const elf_aarch64_section_map *) a;
3067 const elf_aarch64_section_map *bmap = (const elf_aarch64_section_map *) b;
3068
3069 if (amap->vma > bmap->vma)
3070 return 1;
3071 else if (amap->vma < bmap->vma)
3072 return -1;
3073 else if (amap->type > bmap->type)
3074 /* Ensure results do not depend on the host qsort for objects with
3075 multiple mapping symbols at the same address by sorting on type
3076 after vma. */
3077 return 1;
3078 else if (amap->type < bmap->type)
3079 return -1;
3080 else
3081 return 0;
3082}
3083
3084
3085static char *
3086_bfd_aarch64_erratum_835769_stub_name (unsigned num_fixes)
3087{
3088 char *stub_name = (char *) bfd_malloc
3089 (strlen ("__erratum_835769_veneer_") + 16);
3090 if (stub_name != NULL)
    sprintf (stub_name, "__erratum_835769_veneer_%d", num_fixes);
3091 return stub_name;
3092}
3093
3094/* Scan for Cortex-A53 erratum 835769 sequence.
3095
3096 Return TRUE on a successful scan, FALSE on abnormal termination.
3097
3098static bfd_boolean
3099_bfd_aarch64_erratum_835769_scan (bfd *input_bfd,
3100 struct bfd_link_info *info,
3101 unsigned int *num_fixes_p)
3102{
3103 asection *section;
3104 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3105 unsigned int num_fixes = *num_fixes_p;
3106
3107 if (htab == NULL)
3108 return TRUE;
3109
3110 for (section = input_bfd->sections;
3111 section != NULL;
3112 section = section->next)
3113 {
3114 bfd_byte *contents = NULL;
3115 struct _aarch64_elf_section_data *sec_data;
3116 unsigned int span;
3117
3118 if (elf_section_type (section) != SHT_PROGBITS
3119 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3120 || (section->flags & SEC_EXCLUDE) != 0
3121 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3122 || (section->output_section == bfd_abs_section_ptr))
3123 continue;
3124
3125 if (elf_section_data (section)->this_hdr.contents != NULL)
3126 contents = elf_section_data (section)->this_hdr.contents;
3127 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3128 return FALSE;
3129
3130 sec_data = elf_aarch64_section_data (section);
3131
3132 qsort (sec_data->map, sec_data->mapcount,
3133 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3134
3135 for (span = 0; span < sec_data->mapcount; span++)
3136 {
3137 unsigned int span_start = sec_data->map[span].vma;
3138 unsigned int span_end = ((span == sec_data->mapcount - 1)
3139 ? sec_data->map[0].vma + section->size
3140 : sec_data->map[span + 1].vma);
3141 unsigned int i;
3142 char span_type = sec_data->map[span].type;
3143
3144 if (span_type == 'd')
3145 continue;
3146
3147 for (i = span_start; i + 4 < span_end; i += 4)
3148 {
3149 uint32_t insn_1 = bfd_getl32 (contents + i);
3150 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3151
3152 if (aarch64_erratum_sequence (insn_1, insn_2))
3153 {
3154 struct elf_aarch64_stub_hash_entry *stub_entry;
3155 char *stub_name = _bfd_aarch64_erratum_835769_stub_name (num_fixes);
3156 if (! stub_name)
3157 return FALSE;
3158
3159 stub_entry = _bfd_aarch64_add_stub_entry_in_group (stub_name,
3160 section,
3161 htab);
3162 if (! stub_entry)
3163 return FALSE;
3164
3165 stub_entry->stub_type = aarch64_stub_erratum_835769_veneer;
3166 stub_entry->target_section = section;
3167 stub_entry->target_value = i + 4;
3168 stub_entry->veneered_insn = insn_2;
3169 stub_entry->output_name = stub_name;
3170 num_fixes++;
3171 }
3172 }
3173 }
3174 if (elf_section_data (section)->this_hdr.contents == NULL)
3175 free (contents);
3176 }
3177
3178 *num_fixes_p = num_fixes;
3179
3180 return TRUE;
3181}
3182
3183
3184/* Test if instruction INSN is ADRP. */
3185
3186static bfd_boolean
3187_bfd_aarch64_adrp_p (uint32_t insn)
3188{
3189 return ((insn & 0x9f000000) == 0x90000000);
3190}
3191
3192
3193/* Helper predicate to look for Cortex-A53 erratum 843419 sequence 1. */
3194
3195static bfd_boolean
3196_bfd_aarch64_erratum_843419_sequence_p (uint32_t insn_1, uint32_t insn_2,
3197 uint32_t insn_3)
3198{
3199 uint32_t rt;
3200 uint32_t rt2;
3201 bfd_boolean pair;
3202 bfd_boolean load;
3203
3204 return (aarch64_mem_op_p (insn_2, &rt, &rt2, &pair, &load)
3205 && (!pair
3206 || (pair && !load))
3207 && AARCH64_LDST_UIMM (insn_3)
3208 && AARCH64_RN (insn_3) == AARCH64_RD (insn_1));
3209}
3210
3211
3212/* Test for the presence of Cortex-A53 erratum 843419 instruction sequence.
3213
3214 Return TRUE if section CONTENTS at offset I contains one of the
3215 erratum 843419 sequences, otherwise return FALSE. If a sequence is
3216 seen, set P_VENEER_I to the offset of the final LOAD/STORE
3217 instruction in the sequence.
3218 */
3219
3220static bfd_boolean
3221_bfd_aarch64_erratum_843419_p (bfd_byte *contents, bfd_vma vma,
3222 bfd_vma i, bfd_vma span_end,
3223 bfd_vma *p_veneer_i)
3224{
3225 uint32_t insn_1 = bfd_getl32 (contents + i);
3226
3227 if (!_bfd_aarch64_adrp_p (insn_1))
3228 return FALSE;
3229
3230 if (span_end < i + 12)
3231 return FALSE;
3232
3233 uint32_t insn_2 = bfd_getl32 (contents + i + 4);
3234 uint32_t insn_3 = bfd_getl32 (contents + i + 8);
3235
3236 if ((vma & 0xfff) != 0xff8 && (vma & 0xfff) != 0xffc)
3237 return FALSE;
3238
3239 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_3))
3240 {
3241 *p_veneer_i = i + 8;
3242 return TRUE;
3243 }
3244
3245 if (span_end < i + 16)
3246 return FALSE;
3247
3248 uint32_t insn_4 = bfd_getl32 (contents + i + 12);
3249
3250 if (_bfd_aarch64_erratum_843419_sequence_p (insn_1, insn_2, insn_4))
3251 {
3252 *p_veneer_i = i + 12;
3253 return TRUE;
3254 }
3255
3256 return FALSE;
3257}
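/* For illustration (hypothetical registers): with the ADRP itself at
   a page offset of 0xff8 or 0xffc, the sequence

     adrp x0, sym
     ldr x1, [x2]
     ldr x3, [x0, #8]

   matches the test above: the middle instruction here is a simple
   (non-pair) memory op and the final load uses the ADRP destination
   as its base register, so P_VENEER_I is set to the offset of that
   final load. The same holds with one unrelated instruction placed
   before the final load. */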
3258
3259
3260/* Resize all stub sections. */
3261
3262static void
3263_bfd_aarch64_resize_stubs (struct elf_aarch64_link_hash_table *htab)
3264{
3265 asection *section;
3266
3267 /* OK, we've added some stubs. Find out the new size of the
3268 stub sections. */
3269 for (section = htab->stub_bfd->sections;
3270 section != NULL; section = section->next)
3271 {
3272 /* Ignore non-stub sections. */
3273 if (!strstr (section->name, STUB_SUFFIX))
3274 continue;
3275 section->size = 0;
3276 }
3277
3278 bfd_hash_traverse (&htab->stub_hash_table, aarch64_size_one_stub, htab);
3279
3280 for (section = htab->stub_bfd->sections;
3281 section != NULL; section = section->next)
3282 {
3283 if (!strstr (section->name, STUB_SUFFIX))
3284 continue;
3285
3286 if (section->size)
3287 section->size += 4;
3288
3289 /* Ensure all stub sections have a size which is a multiple of
3290 4096. This is important in order to ensure that the insertion
3291 of stub sections does not in itself move existing code around
3292 in such a way that new errata sequences are created. */
3293 if (htab->fix_erratum_843419)
3294 if (section->size)
3295 section->size = BFD_ALIGN (section->size, 0x1000);
3296 }
3297}
3298
3299
3300/* Construct an erratum 843419 workaround stub name.
3301 */
3302
3303static char *
3304_bfd_aarch64_erratum_843419_stub_name (asection *input_section,
3305 bfd_vma offset)
3306{
3307 const bfd_size_type len = 8 + 4 + 1 + 8 + 1 + 16 + 1;
3308 char *stub_name = bfd_malloc (len);
3309
3310 if (stub_name != NULL)
3311 snprintf (stub_name, len, "e843419@%04x_%08x_%" BFD_VMA_FMT "x",
3312 input_section->owner->id,
3313 input_section->id,
3314 offset);
3315 return stub_name;
3316}
3317
3318/* Build a stub_entry structure describing an 843419 fixup.
3319
3320 The stub_entry constructed is populated with the bit pattern INSN
3321 of the instruction located at LDST_OFFSET within input SECTION.
3322
3323 Returns TRUE on success. */
3324
3325static bfd_boolean
3326_bfd_aarch64_erratum_843419_fixup (uint32_t insn,
3327 bfd_vma adrp_offset,
3328 bfd_vma ldst_offset,
3329 asection *section,
3330 struct bfd_link_info *info)
3331{
3332 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3333 char *stub_name;
3334 struct elf_aarch64_stub_hash_entry *stub_entry;
3335
3336 stub_name = _bfd_aarch64_erratum_843419_stub_name (section, ldst_offset);
3337 stub_entry = aarch64_stub_hash_lookup (&htab->stub_hash_table, stub_name,
3338 FALSE, FALSE);
3339 if (stub_entry)
3340 {
3341 free (stub_name);
3342 return TRUE;
3343 }
3344
3345 /* We always place an 843419 workaround veneer in the stub section
3346 attached to the input section in which an erratum sequence has
3347 been found. This ensures that later in the link process (in
3348 elfNN_aarch64_write_section) when we copy the veneered
3349 instruction from the input section into the stub section the
3350 copied instruction will have had any relocations applied to it.
3351 If we placed workaround veneers in any other stub section then we
3352 could not assume that all relocations have been processed on the
3353 corresponding input section at the point we output the stub
3354 section.
3355 */
3356
3357 stub_entry = _bfd_aarch64_add_stub_entry_after (stub_name, section, htab);
3358 if (stub_entry == NULL)
3359 {
3360 free (stub_name);
3361 return FALSE;
3362 }
3363
3364 stub_entry->adrp_offset = adrp_offset;
3365 stub_entry->target_value = ldst_offset;
3366 stub_entry->target_section = section;
3367 stub_entry->stub_type = aarch64_stub_erratum_843419_veneer;
3368 stub_entry->veneered_insn = insn;
3369 stub_entry->output_name = stub_name;
3370
3371 return TRUE;
3372}
3373
3374
3375/* Scan an input section looking for the signature of erratum 843419.
3376
3377 Scans input SECTION in INPUT_BFD looking for erratum 843419
3378 signatures, for each signature found a stub_entry is created
3379 describing the location of the erratum for subsequent fixup.
3380
3381 Return TRUE on successful scan, FALSE on failure to scan.
3382 */
3383
3384static bfd_boolean
3385_bfd_aarch64_erratum_843419_scan (bfd *input_bfd, asection *section,
3386 struct bfd_link_info *info)
3387{
3388 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3389
3390 if (htab == NULL)
3391 return TRUE;
3392
3393 if (elf_section_type (section) != SHT_PROGBITS
3394 || (elf_section_flags (section) & SHF_EXECINSTR) == 0
3395 || (section->flags & SEC_EXCLUDE) != 0
3396 || (section->sec_info_type == SEC_INFO_TYPE_JUST_SYMS)
3397 || (section->output_section == bfd_abs_section_ptr))
3398 return TRUE;
3399
3400 do
3401 {
3402 bfd_byte *contents = NULL;
3403 struct _aarch64_elf_section_data *sec_data;
3404 unsigned int span;
3405
3406 if (elf_section_data (section)->this_hdr.contents != NULL)
3407 contents = elf_section_data (section)->this_hdr.contents;
3408 else if (! bfd_malloc_and_get_section (input_bfd, section, &contents))
3409 return FALSE;
3410
3411 sec_data = elf_aarch64_section_data (section);
3412
3413 qsort (sec_data->map, sec_data->mapcount,
3414 sizeof (elf_aarch64_section_map), elf_aarch64_compare_mapping);
3415
3416 for (span = 0; span < sec_data->mapcount; span++)
3417 {
3418 unsigned int span_start = sec_data->map[span].vma;
3419 unsigned int span_end = ((span == sec_data->mapcount - 1)
3420 ? sec_data->map[0].vma + section->size
3421 : sec_data->map[span + 1].vma);
3422 unsigned int i;
3423 char span_type = sec_data->map[span].type;
3424
3425 if (span_type == 'd')
3426 continue;
3427
3428 for (i = span_start; i + 8 < span_end; i += 4)
3429 {
3430 bfd_vma vma = (section->output_section->vma
3431 + section->output_offset
3432 + i);
3433 bfd_vma veneer_i;
3434
3435 if (_bfd_aarch64_erratum_843419_p
3436 (contents, vma, i, span_end, &veneer_i))
3437 {
3438 uint32_t insn = bfd_getl32 (contents + veneer_i);
3439
3440 if (!_bfd_aarch64_erratum_843419_fixup (insn, i, veneer_i,
3441 section, info))
3442 return FALSE;
3443 }
3444 }
3445 }
3446
3447 if (elf_section_data (section)->this_hdr.contents == NULL)
3448 free (contents);
3449 }
3450 while (0);
3451
3452 return TRUE;
3453}
3454
3455
3456/* Determine and set the size of the stub section for a final link.
3457
3458 The basic idea here is to examine all the relocations looking for
3459 PC-relative calls to a target that is unreachable with a "bl"
3460 instruction. */
3461
3462bfd_boolean
3463elfNN_aarch64_size_stubs (bfd *output_bfd,
3464 bfd *stub_bfd,
3465 struct bfd_link_info *info,
3466 bfd_signed_vma group_size,
3467 asection * (*add_stub_section) (const char *,
3468 asection *),
3469 void (*layout_sections_again) (void))
3470{
3471 bfd_size_type stub_group_size;
3472 bfd_boolean stubs_always_before_branch;
3473 bfd_boolean stub_changed = FALSE;
3474 struct elf_aarch64_link_hash_table *htab = elf_aarch64_hash_table (info);
3475 unsigned int num_erratum_835769_fixes = 0;
3476
3477 /* Propagate mach to stub bfd, because it may not have been
3478 finalized when we created stub_bfd. */
3479 bfd_set_arch_mach (stub_bfd, bfd_get_arch (output_bfd),
3480 bfd_get_mach (output_bfd));
3481
3482 /* Stash our params away. */
3483 htab->stub_bfd = stub_bfd;
3484 htab->add_stub_section = add_stub_section;
3485 htab->layout_sections_again = layout_sections_again;
3486 stubs_always_before_branch = group_size < 0;
3487 if (group_size < 0)
3488 stub_group_size = -group_size;
3489 else
3490 stub_group_size = group_size;
3491
3492 if (stub_group_size == 1)
3493 {
3494 /* Default values. */
3495 /* AArch64 branch range is +-128MB. The value used is 1MB less. */
3496 stub_group_size = 127 * 1024 * 1024;
3497 }
3498
3499 group_sections (htab, stub_group_size, stubs_always_before_branch);
3500
3501 (*htab->layout_sections_again) ();
3502
3503 if (htab->fix_erratum_835769)
3504 {
3505 bfd *input_bfd;
3506
3507 for (input_bfd = info->input_bfds;
3508 input_bfd != NULL; input_bfd = input_bfd->link.next)
3509 if (!_bfd_aarch64_erratum_835769_scan (input_bfd, info,
3510 &num_erratum_835769_fixes))
3511 return FALSE;
3512
3513 _bfd_aarch64_resize_stubs (htab);
3514 (*htab->layout_sections_again) ();
3515 }
3516
3517 if (htab->fix_erratum_843419)
3518 {
3519 bfd *input_bfd;
3520
3521 for (input_bfd = info->input_bfds;
3522 input_bfd != NULL;
3523 input_bfd = input_bfd->link.next)
3524 {
3525 asection *section;
3526
3527 for (section = input_bfd->sections;
3528 section != NULL;
3529 section = section->next)
3530 if (!_bfd_aarch64_erratum_843419_scan (input_bfd, section, info))
3531 return FALSE;
3532 }
3533
3534 _bfd_aarch64_resize_stubs (htab);
3535 (*htab->layout_sections_again) ();
3536 }
3537
3538 while (1)
3539 {
3540 bfd *input_bfd;
3541
3542 for (input_bfd = info->input_bfds;
3543 input_bfd != NULL; input_bfd = input_bfd->link.next)
3544 {
3545 Elf_Internal_Shdr *symtab_hdr;
3546 asection *section;
3547 Elf_Internal_Sym *local_syms = NULL;
3548
3549 /* We'll need the symbol table in a second. */
3550 symtab_hdr = &elf_tdata (input_bfd)->symtab_hdr;
3551 if (symtab_hdr->sh_info == 0)
3552 continue;
3553
3554 /* Walk over each section attached to the input bfd. */
3555 for (section = input_bfd->sections;
3556 section != NULL; section = section->next)
3557 {
3558 Elf_Internal_Rela *internal_relocs, *irelaend, *irela;
3559
3560 /* If there aren't any relocs, then there's nothing more
3561 to do. */
3562 if ((section->flags & SEC_RELOC) == 0
3563 || section->reloc_count == 0
3564 || (section->flags & SEC_CODE) == 0)
3565 continue;
3566
3567 /* If this section is a link-once section that will be
3568 discarded, then don't create any stubs. */
3569 if (section->output_section == NULL
3570 || section->output_section->owner != output_bfd)
3571 continue;
3572
3573 /* Get the relocs. */
3574 internal_relocs
3575 = _bfd_elf_link_read_relocs (input_bfd, section, NULL,
3576 NULL, info->keep_memory);
3577 if (internal_relocs == NULL)
3578 goto error_ret_free_local;
3579
3580 /* Now examine each relocation. */
3581 irela = internal_relocs;
3582 irelaend = irela + section->reloc_count;
3583 for (; irela < irelaend; irela++)
3584 {
3585 unsigned int r_type, r_indx;
3586 enum elf_aarch64_stub_type stub_type;
3587 struct elf_aarch64_stub_hash_entry *stub_entry;
3588 asection *sym_sec;
3589 bfd_vma sym_value;
3590 bfd_vma destination;
3591 struct elf_aarch64_link_hash_entry *hash;
3592 const char *sym_name;
3593 char *stub_name;
3594 const asection *id_sec;
3595 unsigned char st_type;
3596 bfd_size_type len;
3597
3598 r_type = ELFNN_R_TYPE (irela->r_info);
3599 r_indx = ELFNN_R_SYM (irela->r_info);
3600
3601 if (r_type >= (unsigned int) R_AARCH64_end)
3602 {
3603 bfd_set_error (bfd_error_bad_value);
3604 error_ret_free_internal:
3605 if (elf_section_data (section)->relocs == NULL)
3606 free (internal_relocs);
3607 goto error_ret_free_local;
3608 }
3609
3610 /* Only look for stubs on unconditional branch and
3611 branch and link instructions. */
3612 if (r_type != (unsigned int) AARCH64_R (CALL26)
3613 && r_type != (unsigned int) AARCH64_R (JUMP26))
3614 continue;
3615
3616 /* Now determine the call target, its name, value,
3617 section. */
3618 sym_sec = NULL;
3619 sym_value = 0;
3620 destination = 0;
3621 hash = NULL;
3622 sym_name = NULL;
3623 if (r_indx < symtab_hdr->sh_info)
3624 {
3625 /* It's a local symbol. */
3626 Elf_Internal_Sym *sym;
3627 Elf_Internal_Shdr *hdr;
3628
3629 if (local_syms == NULL)
3630 {
3631 local_syms
3632 = (Elf_Internal_Sym *) symtab_hdr->contents;
3633 if (local_syms == NULL)
3634 local_syms
3635 = bfd_elf_get_elf_syms (input_bfd, symtab_hdr,
3636 symtab_hdr->sh_info, 0,
3637 NULL, NULL, NULL);
3638 if (local_syms == NULL)
3639 goto error_ret_free_internal;
3640 }
3641
3642 sym = local_syms + r_indx;
3643 hdr = elf_elfsections (input_bfd)[sym->st_shndx];
3644 sym_sec = hdr->bfd_section;
3645 if (!sym_sec)
3646 /* This is an undefined symbol. It can never
3647 be resolved. */
3648 continue;
3649
3650 if (ELF_ST_TYPE (sym->st_info) != STT_SECTION)
3651 sym_value = sym->st_value;
3652 destination = (sym_value + irela->r_addend
3653 + sym_sec->output_offset
3654 + sym_sec->output_section->vma);
3655 st_type = ELF_ST_TYPE (sym->st_info);
3656 sym_name
3657 = bfd_elf_string_from_elf_section (input_bfd,
3658 symtab_hdr->sh_link,
3659 sym->st_name);
3660 }
3661 else
3662 {
3663 int e_indx;
3664
3665 e_indx = r_indx - symtab_hdr->sh_info;
3666 hash = ((struct elf_aarch64_link_hash_entry *)
3667 elf_sym_hashes (input_bfd)[e_indx]);
3668
3669 while (hash->root.root.type == bfd_link_hash_indirect
3670 || hash->root.root.type == bfd_link_hash_warning)
3671 hash = ((struct elf_aarch64_link_hash_entry *)
3672 hash->root.root.u.i.link);
3673
3674 if (hash->root.root.type == bfd_link_hash_defined
3675 || hash->root.root.type == bfd_link_hash_defweak)
3676 {
3677 struct elf_aarch64_link_hash_table *globals =
3678 elf_aarch64_hash_table (info);
3679 sym_sec = hash->root.root.u.def.section;
3680 sym_value = hash->root.root.u.def.value;
3681 /* For a destination in a shared library,
3682 use the PLT stub as target address to
3683 decide whether a branch stub is
3684 needed. */
3685 if (globals->root.splt != NULL && hash != NULL
3686 && hash->root.plt.offset != (bfd_vma) - 1)
3687 {
3688 sym_sec = globals->root.splt;
3689 sym_value = hash->root.plt.offset;
3690 if (sym_sec->output_section != NULL)
3691 destination = (sym_value
3692 + sym_sec->output_offset
3693 +
3694 sym_sec->output_section->vma);
3695 }
3696 else if (sym_sec->output_section != NULL)
3697 destination = (sym_value + irela->r_addend
3698 + sym_sec->output_offset
3699 + sym_sec->output_section->vma);
3700 }
3701 else if (hash->root.root.type == bfd_link_hash_undefined
3702 || (hash->root.root.type
3703 == bfd_link_hash_undefweak))
3704 {
3705 /* For a shared library, use the PLT stub as
3706 target address to decide whether a long
3707 branch stub is needed.
3708 For absolute code, such branches cannot be handled. */
3709 struct elf_aarch64_link_hash_table *globals =
3710 elf_aarch64_hash_table (info);
3711
3712 if (globals->root.splt != NULL && hash != NULL
3713 && hash->root.plt.offset != (bfd_vma) - 1)
3714 {
3715 sym_sec = globals->root.splt;
3716 sym_value = hash->root.plt.offset;
3717 if (sym_sec->output_section != NULL)
3718 destination = (sym_value
3719 + sym_sec->output_offset
3720 +
3721 sym_sec->output_section->vma);
3722 }
3723 else
3724 continue;
3725 }
3726 else
3727 {
3728 bfd_set_error (bfd_error_bad_value);
3729 goto error_ret_free_internal;
3730 }
3731 st_type = ELF_ST_TYPE (hash->root.type);
3732 sym_name = hash->root.root.root.string;
3733 }
3734
3735 /* Determine what (if any) linker stub is needed. */
3736 stub_type = aarch64_type_of_stub
3737 (info, section, irela, st_type, hash, destination);
3738 if (stub_type == aarch64_stub_none)
3739 continue;
3740
3741 /* Support for grouping stub sections. */
3742 id_sec = htab->stub_group[section->id].link_sec;
3743
3744 /* Get the name of this stub. */
3745 stub_name = elfNN_aarch64_stub_name (id_sec, sym_sec, hash,
3746 irela);
3747 if (!stub_name)
3748 goto error_ret_free_internal;
3749
3750 stub_entry =
3751 aarch64_stub_hash_lookup (&htab->stub_hash_table,
3752 stub_name, FALSE, FALSE);
3753 if (stub_entry != NULL)
3754 {
3755 /* The proper stub has already been created. */
3756 free (stub_name);
3757 continue;
3758 }
3759
3760 stub_entry = _bfd_aarch64_add_stub_entry_in_group
3761 (stub_name, section, htab);
3762 if (stub_entry == NULL)
3763 {
3764 free (stub_name);
3765 goto error_ret_free_internal;
3766 }
3767
3768 stub_entry->target_value = sym_value;
3769 stub_entry->target_section = sym_sec;
3770 stub_entry->stub_type = stub_type;
3771 stub_entry->h = hash;
3772 stub_entry->st_type = st_type;
3773
3774 if (sym_name == NULL)
3775 sym_name = "unnamed";
3776 len = sizeof (STUB_ENTRY_NAME) + strlen (sym_name);
3777 stub_entry->output_name = bfd_alloc (htab->stub_bfd, len);
3778 if (stub_entry->output_name == NULL)
3779 {
3780 free (stub_name);
3781 goto error_ret_free_internal;
3782 }
3783
3784 snprintf (stub_entry->output_name, len, STUB_ENTRY_NAME,
3785 sym_name);
3786
3787 stub_changed = TRUE;
3788 }
3789
3790 /* We're done with the internal relocs, free them. */
3791 if (elf_section_data (section)->relocs == NULL)
3792 free (internal_relocs);
3793 }
3794 }
3795
3796 if (!stub_changed)
3797 break;
3798
3799 _bfd_aarch64_resize_stubs (htab);
3800
3801 /* Ask the linker to do its stuff. */
3802 (*htab->layout_sections_again) ();
3803 stub_changed = FALSE;
3804 }
3805
3806 return TRUE;
3807
3808error_ret_free_local:
3809 return FALSE;
3810}
3811
3812/* Build all the stubs associated with the current output file. The
3813 stubs are kept in a hash table attached to the main linker hash
3814 table. We also set up the .plt entries for statically linked PIC
3815 functions here. This function is called via aarch64_elf_finish in the
3816 linker. */
3817
3818bfd_boolean
3819elfNN_aarch64_build_stubs (struct bfd_link_info *info)
3820{
3821 asection *stub_sec;
3822 struct bfd_hash_table *table;
3823 struct elf_aarch64_link_hash_table *htab;
3824
3825 htab = elf_aarch64_hash_table (info);
3826
3827 for (stub_sec = htab->stub_bfd->sections;
3828 stub_sec != NULL; stub_sec = stub_sec->next)
3829 {
3830 bfd_size_type size;
3831
3832 /* Ignore non-stub sections. */
3833 if (!strstr (stub_sec->name, STUB_SUFFIX))
3834 continue;
3835
3836 /* Allocate memory to hold the linker stubs. */
3837 size = stub_sec->size;
3838 stub_sec->contents = bfd_zalloc (htab->stub_bfd, size);
3839 if (stub_sec->contents == NULL && size != 0)
3840 return FALSE;
3841 stub_sec->size = 0;
3842
3843 bfd_putl32 (0x14000000 | (size >> 2), stub_sec->contents);
3844 stub_sec->size += 4;
3845 }
3846
3847 /* Build the stubs as directed by the stub hash table. */
3848 table = &htab->stub_hash_table;
3849 bfd_hash_traverse (table, aarch64_build_one_stub, info);
3850
3851 return TRUE;
3852}
3853
3854
3855/* Add an entry to the code/data map for section SEC. */
3856
3857static void
3858elfNN_aarch64_section_map_add (asection *sec, char type, bfd_vma vma)
3859{
3860 struct _aarch64_elf_section_data *sec_data =
3861 elf_aarch64_section_data (sec);
3862 unsigned int newidx;
3863
3864 if (sec_data->map == NULL)
3865 {
3866 sec_data->map = bfd_malloc (sizeof (elf_aarch64_section_map));
3867 sec_data->mapcount = 0;
3868 sec_data->mapsize = 1;
3869 }
3870
3871 newidx = sec_data->mapcount++;
3872
3873 if (sec_data->mapcount > sec_data->mapsize)
3874 {
3875 sec_data->mapsize *= 2;
3876 sec_data->map = bfd_realloc_or_free
3877 (sec_data->map, sec_data->mapsize * sizeof (elf_aarch64_section_map));
3878 }
3879
3880 if (sec_data->map)
3881 {
3882 sec_data->map[newidx].vma = vma;
3883 sec_data->map[newidx].type = type;
3884 }
3885}
3886
3887
3888/* Initialise maps of insn/data for input BFDs. */
3889void
3890bfd_elfNN_aarch64_init_maps (bfd *abfd)
3891{
3892 Elf_Internal_Sym *isymbuf;
3893 Elf_Internal_Shdr *hdr;
3894 unsigned int i, localsyms;
3895
3896 /* Make sure that we are dealing with an AArch64 elf binary. */
3897 if (!is_aarch64_elf (abfd))
3898 return;
3899
3900 if ((abfd->flags & DYNAMIC) != 0)
3901 return;
3902
3903 hdr = &elf_symtab_hdr (abfd);
3904 localsyms = hdr->sh_info;
3905
3906 /* Obtain a buffer full of symbols for this BFD. The hdr->sh_info field
3907 should contain the number of local symbols, which should come before any
3908 global symbols. Mapping symbols are always local. */
3909 isymbuf = bfd_elf_get_elf_syms (abfd, hdr, localsyms, 0, NULL, NULL, NULL);
3910
3911 /* No internal symbols read? Skip this BFD. */
3912 if (isymbuf == NULL)
3913 return;
3914
3915 for (i = 0; i < localsyms; i++)
3916 {
3917 Elf_Internal_Sym *isym = &isymbuf[i];
3918 asection *sec = bfd_section_from_elf_index (abfd, isym->st_shndx);
3919 const char *name;
3920
3921 if (sec != NULL && ELF_ST_BIND (isym->st_info) == STB_LOCAL)
3922 {
3923 name = bfd_elf_string_from_elf_section (abfd,
3924 hdr->sh_link,
3925 isym->st_name);
3926
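 /* Mapping symbols are named "$x" (start of A64 code) or "$d" (start of
    data), so name[1] is the span type character that gets recorded in
    the section map.  */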
3927 if (bfd_is_aarch64_special_symbol_name
3928 (name, BFD_AARCH64_SPECIAL_SYM_TYPE_MAP))
3929 elfNN_aarch64_section_map_add (sec, name[1], isym->st_value);
3930 }
3931 }
3932}
3933
3934/* Set option values needed during linking. */
3935void
3936bfd_elfNN_aarch64_set_options (struct bfd *output_bfd,
3937 struct bfd_link_info *link_info,
3938 int no_enum_warn,
3939 int no_wchar_warn, int pic_veneer,
3940 int fix_erratum_835769,
3941 int fix_erratum_843419)
3942{
3943 struct elf_aarch64_link_hash_table *globals;
3944
3945 globals = elf_aarch64_hash_table (link_info);
3946 globals->pic_veneer = pic_veneer;
3947 globals->fix_erratum_835769 = fix_erratum_835769;
3948 globals->fix_erratum_843419 = fix_erratum_843419;
3949 globals->fix_erratum_843419_adr = TRUE;
3950
3951 BFD_ASSERT (is_aarch64_elf (output_bfd));
3952 elf_aarch64_tdata (output_bfd)->no_enum_size_warning = no_enum_warn;
3953 elf_aarch64_tdata (output_bfd)->no_wchar_size_warning = no_wchar_warn;
3954}
3955
3956static bfd_vma
3957aarch64_calculate_got_entry_vma (struct elf_link_hash_entry *h,
3958 struct elf_aarch64_link_hash_table
3959 *globals, struct bfd_link_info *info,
3960 bfd_vma value, bfd *output_bfd,
3961 bfd_boolean *unresolved_reloc_p)
3962{
3963 bfd_vma off = (bfd_vma) - 1;
3964 asection *basegot = globals->root.sgot;
3965 bfd_boolean dyn = globals->root.dynamic_sections_created;
3966
3967 if (h != NULL)
3968 {
3969 BFD_ASSERT (basegot != NULL);
3970 off = h->got.offset;
3971 BFD_ASSERT (off != (bfd_vma) - 1);
3972 if (!WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, info->shared, h)
3973 || (info->shared
3974 && SYMBOL_REFERENCES_LOCAL (info, h))
3975 || (ELF_ST_VISIBILITY (h->other)
3976 && h->root.type == bfd_link_hash_undefweak))
3977 {
3978 /* This is actually a static link, or it is a -Bsymbolic link
3979 and the symbol is defined locally. We must initialize this
3980 entry in the global offset table. Since the offset must
3981 always be a multiple of 8 (4 in the case of ILP32), we use
3982 the least significant bit to record whether we have
3983 initialized it already.
3984 When doing a dynamic link, we create a .rel(a).got relocation
3985 entry to initialize the value. This is done in the
3986 finish_dynamic_symbol routine. */
3987 if ((off & 1) != 0)
3988 off &= ~1;
3989 else
3990 {
3991 bfd_put_NN (output_bfd, value, basegot->contents + off);
3992 h->got.offset |= 1;
3993 }
3994 }
3995 else
3996 *unresolved_reloc_p = FALSE;
3997
3998 off = off + basegot->output_section->vma + basegot->output_offset;
3999 }
4000
4001 return off;
4002}
4003
4004/* Change R_TYPE to a more efficient access model where possible,
4005 return the new reloc type. */
4006
4007static bfd_reloc_code_real_type
4008aarch64_tls_transition_without_check (bfd_reloc_code_real_type r_type,
4009 struct elf_link_hash_entry *h)
4010{
4011 bfd_boolean is_local = h == NULL;
4012
4013 switch (r_type)
4014 {
4015 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4016 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4017 return (is_local
4018 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
4019 : BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21);
4020
4021 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4022 return (is_local
4023 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
4024 : r_type);
4025
4026 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4027 return (is_local
4028 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1
4029 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
4030
4031 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
4032 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4033 return (is_local
4034 ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC
4035 : BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC);
4036
4037 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4038 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1 : r_type;
4039
4040 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
4041 return is_local ? BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC : r_type;
4042
4043 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4044 return r_type;
4045
4046 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4047 return (is_local
4048 ? BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12
4049 : BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19);
4050
4051 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4052 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4053 /* Instructions with these relocations will become NOPs. */
4054 return BFD_RELOC_AARCH64_NONE;
4055
4056 default:
4057 break;
4058 }
4059
4060 return r_type;
4061}
4062
4063static unsigned int
4064aarch64_reloc_got_type (bfd_reloc_code_real_type r_type)
4065{
4066 switch (r_type)
4067 {
4068 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4069 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4070 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4071 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4072 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4073 return GOT_NORMAL;
4074
4075 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4076 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4077 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4078 return GOT_TLS_GD;
4079
4080 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
4081 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
4082 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
4083 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4084 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
4085 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
4086 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
4087 return GOT_TLSDESC_GD;
4088
4089 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4090 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4091 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4092 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4093 return GOT_TLS_IE;
4094
4095 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4096 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4097 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
4098 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
4099 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
4100 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
4101 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
4102 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
4103 return GOT_UNKNOWN;
4104
4105 default:
4106 break;
4107 }
4108 return GOT_UNKNOWN;
4109}
4110
4111static bfd_boolean
4112aarch64_can_relax_tls (bfd *input_bfd,
4113 struct bfd_link_info *info,
4114 bfd_reloc_code_real_type r_type,
4115 struct elf_link_hash_entry *h,
4116 unsigned long r_symndx)
4117{
4118 unsigned int symbol_got_type;
4119 unsigned int reloc_got_type;
4120
4121 if (! IS_AARCH64_TLS_RELOC (r_type))
4122 return FALSE;
4123
4124 symbol_got_type = elfNN_aarch64_symbol_got_type (h, input_bfd, r_symndx);
4125 reloc_got_type = aarch64_reloc_got_type (r_type);
4126
4127 if (symbol_got_type == GOT_TLS_IE && GOT_TLS_GD_ANY_P (reloc_got_type))
4128 return TRUE;
4129
4130 if (info->shared)
4131 return FALSE;
4132
4133 if (h && h->root.type == bfd_link_hash_undefweak)
4134 return FALSE;
4135
4136 return TRUE;
4137}
4138
4139/* Given the relocation code R_TYPE, return the relaxed bfd reloc
4140 enumerator. */
4141
4142static bfd_reloc_code_real_type
4143aarch64_tls_transition (bfd *input_bfd,
4144 struct bfd_link_info *info,
4145 unsigned int r_type,
4146 struct elf_link_hash_entry *h,
4147 unsigned long r_symndx)
4148{
4149 bfd_reloc_code_real_type bfd_r_type
4150 = elfNN_aarch64_bfd_reloc_from_type (r_type);
4151
4152 if (! aarch64_can_relax_tls (input_bfd, info, bfd_r_type, h, r_symndx))
4153 return bfd_r_type;
4154
4155 return aarch64_tls_transition_without_check (bfd_r_type, h);
4156}
4157
4158/* Return the base VMA address which should be subtracted from real addresses
4159 when resolving R_AARCH64_TLS_DTPREL relocation. */
4160
4161static bfd_vma
4162dtpoff_base (struct bfd_link_info *info)
4163{
4164 /* If tls_sec is NULL, we should have signalled an error already. */
4165 BFD_ASSERT (elf_hash_table (info)->tls_sec != NULL);
4166 return elf_hash_table (info)->tls_sec->vma;
4167}
4168
4169/* Return the base VMA address which should be subtracted from real addresses
4170 when resolving R_AARCH64_TLS_GOTTPREL64 relocations. */
4171
4172static bfd_vma
4173tpoff_base (struct bfd_link_info *info)
4174{
4175 struct elf_link_hash_table *htab = elf_hash_table (info);
4176
4177 /* If tls_sec is NULL, we should have signalled an error already. */
4178 BFD_ASSERT (htab->tls_sec != NULL);
4179
4180 bfd_vma base = align_power ((bfd_vma) TCB_SIZE,
4181 htab->tls_sec->alignment_power);
4182 return htab->tls_sec->vma - base;
4183}
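
/* As an illustration (assuming the usual AArch64 "variant 1" TLS layout,
   where the module's TLS block starts a suitably aligned TCB_SIZE bytes
   above the thread pointer): for a symbol S in .tdata/.tbss the TPREL
   value encoded by the TLSLE_* relocations works out as

       S.vma - tpoff_base (info)
         = align (TCB_SIZE, tls_sec->alignment_power)
           + (S.vma - tls_sec->vma).  */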
4184
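/* The helpers below return (a pointer to) the GOT offset recorded for a
   global or local symbol.  The least significant bit of the stored value
   is used as a flag meaning "this GOT entry has already been processed";
   the *_mark / *_mark_p variants set and test that bit, and the plain
   accessors strip it.  */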
4185static bfd_vma *
4186symbol_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4187 unsigned long r_symndx)
4188{
4189 /* Return a pointer to the GOT offset recorded for the symbol
4190 referred to by H (or, for a local symbol, by R_SYMNDX). */
4191 if (h != NULL)
4192 return &h->got.offset;
4193 else
4194 {
4195 /* local symbol */
4196 struct elf_aarch64_local_symbol *l;
4197
4198 l = elf_aarch64_locals (input_bfd);
4199 return &l[r_symndx].got_offset;
4200 }
4201}
4202
4203static void
4204symbol_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4205 unsigned long r_symndx)
4206{
4207 bfd_vma *p;
4208 p = symbol_got_offset_ref (input_bfd, h, r_symndx);
4209 *p |= 1;
4210}
4211
4212static int
4213symbol_got_offset_mark_p (bfd *input_bfd, struct elf_link_hash_entry *h,
4214 unsigned long r_symndx)
4215{
4216 bfd_vma value;
4217 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4218 return value & 1;
4219}
4220
4221static bfd_vma
4222symbol_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4223 unsigned long r_symndx)
4224{
4225 bfd_vma value;
4226 value = * symbol_got_offset_ref (input_bfd, h, r_symndx);
4227 value &= ~1;
4228 return value;
4229}
4230
4231static bfd_vma *
4232symbol_tlsdesc_got_offset_ref (bfd *input_bfd, struct elf_link_hash_entry *h,
4233 unsigned long r_symndx)
4234{
4235 /* Return a pointer to the TLSDESC GOT jump-table offset recorded for
4236 the symbol referred to by H (or, for a local symbol, by R_SYMNDX). */
4237 if (h != NULL)
4238 {
4239 struct elf_aarch64_link_hash_entry *eh;
4240 eh = (struct elf_aarch64_link_hash_entry *) h;
4241 return &eh->tlsdesc_got_jump_table_offset;
4242 }
4243 else
4244 {
4245 /* local symbol */
4246 struct elf_aarch64_local_symbol *l;
4247
4248 l = elf_aarch64_locals (input_bfd);
4249 return &l[r_symndx].tlsdesc_got_jump_table_offset;
4250 }
4251}
4252
4253static void
4254symbol_tlsdesc_got_offset_mark (bfd *input_bfd, struct elf_link_hash_entry *h,
4255 unsigned long r_symndx)
4256{
4257 bfd_vma *p;
4258 p = symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4259 *p |= 1;
4260}
4261
4262static int
4263symbol_tlsdesc_got_offset_mark_p (bfd *input_bfd,
4264 struct elf_link_hash_entry *h,
4265 unsigned long r_symndx)
4266{
4267 bfd_vma value;
4268 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4269 return value & 1;
4270}
4271
4272static bfd_vma
4273symbol_tlsdesc_got_offset (bfd *input_bfd, struct elf_link_hash_entry *h,
4274 unsigned long r_symndx)
4275{
4276 bfd_vma value;
4277 value = * symbol_tlsdesc_got_offset_ref (input_bfd, h, r_symndx);
4278 value &= ~1;
4279 return value;
4280}
4281
4282/* Data for make_branch_to_erratum_835769_stub(). */
4283
4284struct erratum_835769_branch_to_stub_data
4285{
4286 struct bfd_link_info *info;
4287 asection *output_section;
4288 bfd_byte *contents;
4289};
4290
4291/* Helper to insert branches to erratum 835769 stubs in the right
4292 places for a particular section. */
4293
4294static bfd_boolean
4295make_branch_to_erratum_835769_stub (struct bfd_hash_entry *gen_entry,
4296 void *in_arg)
4297{
4298 struct elf_aarch64_stub_hash_entry *stub_entry;
4299 struct erratum_835769_branch_to_stub_data *data;
4300 bfd_byte *contents;
4301 unsigned long branch_insn = 0;
4302 bfd_vma veneered_insn_loc, veneer_entry_loc;
4303 bfd_signed_vma branch_offset;
4304 unsigned int target;
4305 bfd *abfd;
4306
4307 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4308 data = (struct erratum_835769_branch_to_stub_data *) in_arg;
4309
4310 if (stub_entry->target_section != data->output_section
4311 || stub_entry->stub_type != aarch64_stub_erratum_835769_veneer)
4312 return TRUE;
4313
4314 contents = data->contents;
4315 veneered_insn_loc = stub_entry->target_section->output_section->vma
4316 + stub_entry->target_section->output_offset
4317 + stub_entry->target_value;
4318 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4319 + stub_entry->stub_sec->output_offset
4320 + stub_entry->stub_offset;
4321 branch_offset = veneer_entry_loc - veneered_insn_loc;
4322
4323 abfd = stub_entry->target_section->owner;
4324 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4325 (*_bfd_error_handler)
4326 (_("%B: error: Erratum 835769 stub out "
4327 "of range (input file too large)"), abfd);
4328
4329 target = stub_entry->target_value;
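 /* Replace the veneered instruction with an unconditional branch to the
    veneer: opcode 0x14000000 (B) with the word offset in the low 26
    bits.  */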
4330 branch_insn = 0x14000000;
4331 branch_offset >>= 2;
4332 branch_offset &= 0x3ffffff;
4333 branch_insn |= branch_offset;
4334 bfd_putl32 (branch_insn, &contents[target]);
4335
4336 return TRUE;
4337}
4338
4339
4340static bfd_boolean
4341_bfd_aarch64_erratum_843419_branch_to_stub (struct bfd_hash_entry *gen_entry,
4342 void *in_arg)
4343{
4344 struct elf_aarch64_stub_hash_entry *stub_entry
4345 = (struct elf_aarch64_stub_hash_entry *) gen_entry;
4346 struct erratum_835769_branch_to_stub_data *data
4347 = (struct erratum_835769_branch_to_stub_data *) in_arg;
4348 struct bfd_link_info *info;
4349 struct elf_aarch64_link_hash_table *htab;
4350 bfd_byte *contents;
4351 asection *section;
4352 bfd *abfd;
4353 bfd_vma place;
4354 uint32_t insn;
4355
4356 info = data->info;
4357 contents = data->contents;
4358 section = data->output_section;
4359
4360 htab = elf_aarch64_hash_table (info);
4361
4362 if (stub_entry->target_section != section
4363 || stub_entry->stub_type != aarch64_stub_erratum_843419_veneer)
4364 return TRUE;
4365
4366 insn = bfd_getl32 (contents + stub_entry->target_value);
4367 bfd_putl32 (insn,
4368 stub_entry->stub_sec->contents + stub_entry->stub_offset);
4369
4370 place = (section->output_section->vma + section->output_offset
4371 + stub_entry->adrp_offset);
4372 insn = bfd_getl32 (contents + stub_entry->adrp_offset);
4373
4374 if ((insn & AARCH64_ADRP_OP_MASK) != AARCH64_ADRP_OP)
4375 abort ();
4376
4377 bfd_signed_vma imm =
4378 (_bfd_aarch64_sign_extend
4379 ((bfd_vma) _bfd_aarch64_decode_adrp_imm (insn) << 12, 33)
4380 - (place & 0xfff));
4381
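 /* Two possible fixups: if the page-adjusted offset also fits the +/-1MB
    range of a plain ADR, rewrite the offending ADRP as an ADR in place;
    otherwise replace the veneered instruction with a branch to the stub
    (which was given a copy of that instruction above) and let the stub
    branch back.  */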
4382 if (htab->fix_erratum_843419_adr
4383 && (imm >= AARCH64_MIN_ADRP_IMM && imm <= AARCH64_MAX_ADRP_IMM))
4384 {
4385 insn = (_bfd_aarch64_reencode_adr_imm (AARCH64_ADR_OP, imm)
4386 | AARCH64_RT (insn));
4387 bfd_putl32 (insn, contents + stub_entry->adrp_offset);
4388 }
4389 else
4390 {
4391 bfd_vma veneered_insn_loc;
4392 bfd_vma veneer_entry_loc;
4393 bfd_signed_vma branch_offset;
4394 uint32_t branch_insn;
4395
4396 veneered_insn_loc = stub_entry->target_section->output_section->vma
4397 + stub_entry->target_section->output_offset
4398 + stub_entry->target_value;
4399 veneer_entry_loc = stub_entry->stub_sec->output_section->vma
4400 + stub_entry->stub_sec->output_offset
4401 + stub_entry->stub_offset;
4402 branch_offset = veneer_entry_loc - veneered_insn_loc;
4403
4404 abfd = stub_entry->target_section->owner;
4405 if (!aarch64_valid_branch_p (veneer_entry_loc, veneered_insn_loc))
4406 (*_bfd_error_handler)
4407 (_("%B: error: Erratum 843419 stub out "
4408 "of range (input file too large)"), abfd);
4409
4410 branch_insn = 0x14000000;
4411 branch_offset >>= 2;
4412 branch_offset &= 0x3ffffff;
4413 branch_insn |= branch_offset;
4414 bfd_putl32 (branch_insn, contents + stub_entry->target_value);
4415 }
4416 return TRUE;
4417}
4418
4419
4420static bfd_boolean
4421elfNN_aarch64_write_section (bfd *output_bfd ATTRIBUTE_UNUSED,
4422 struct bfd_link_info *link_info,
4423 asection *sec,
4424 bfd_byte *contents)
4425
4426{
4427 struct elf_aarch64_link_hash_table *globals =
4428 elf_aarch64_hash_table (link_info);
4429
4430 if (globals == NULL)
4431 return FALSE;
4432
4433 /* Fix code to point to erratum 835769 stubs. */
4434 if (globals->fix_erratum_835769)
4435 {
4436 struct erratum_835769_branch_to_stub_data data;
4437
4438 data.info = link_info;
4439 data.output_section = sec;
4440 data.contents = contents;
4441 bfd_hash_traverse (&globals->stub_hash_table,
4442 make_branch_to_erratum_835769_stub, &data);
4443 }
4444
4445 if (globals->fix_erratum_843419)
4446 {
4447 struct erratum_835769_branch_to_stub_data data;
4448
4449 data.info = link_info;
4450 data.output_section = sec;
4451 data.contents = contents;
4452 bfd_hash_traverse (&globals->stub_hash_table,
4453 _bfd_aarch64_erratum_843419_branch_to_stub, &data);
4454 }
4455
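 /* Return FALSE so that the generic ELF linker code still writes out the
    (possibly patched) section contents itself.  */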
4456 return FALSE;
4457}
4458
4459/* Perform a relocation as part of a final link. */
4460static bfd_reloc_status_type
4461elfNN_aarch64_final_link_relocate (reloc_howto_type *howto,
4462 bfd *input_bfd,
4463 bfd *output_bfd,
4464 asection *input_section,
4465 bfd_byte *contents,
4466 Elf_Internal_Rela *rel,
4467 bfd_vma value,
4468 struct bfd_link_info *info,
4469 asection *sym_sec,
4470 struct elf_link_hash_entry *h,
4471 bfd_boolean *unresolved_reloc_p,
4472 bfd_boolean save_addend,
4473 bfd_vma *saved_addend,
4474 Elf_Internal_Sym *sym)
4475{
4476 Elf_Internal_Shdr *symtab_hdr;
4477 unsigned int r_type = howto->type;
4478 bfd_reloc_code_real_type bfd_r_type
4479 = elfNN_aarch64_bfd_reloc_from_howto (howto);
4480 bfd_reloc_code_real_type new_bfd_r_type;
4481 unsigned long r_symndx;
4482 bfd_byte *hit_data = contents + rel->r_offset;
4483 bfd_vma place, off;
4484 bfd_signed_vma signed_addend;
4485 struct elf_aarch64_link_hash_table *globals;
4486 bfd_boolean weak_undef_p;
4487 asection *base_got;
4488
4489 globals = elf_aarch64_hash_table (info);
4490
4491 symtab_hdr = &elf_symtab_hdr (input_bfd);
4492
4493 BFD_ASSERT (is_aarch64_elf (input_bfd));
4494
4495 r_symndx = ELFNN_R_SYM (rel->r_info);
4496
4497 /* It is possible to have linker relaxations on some TLS access
4498 models. Update our information here. */
4499 new_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type, h, r_symndx);
4500 if (new_bfd_r_type != bfd_r_type)
4501 {
4502 bfd_r_type = new_bfd_r_type;
4503 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
4504 BFD_ASSERT (howto != NULL);
4505 r_type = howto->type;
4506 }
4507
4508 place = input_section->output_section->vma
4509 + input_section->output_offset + rel->r_offset;
4510
4511 /* Get addend, accumulating the addend for consecutive relocs
4512 which refer to the same offset. */
4513 signed_addend = saved_addend ? *saved_addend : 0;
4514 signed_addend += rel->r_addend;
4515
4516 weak_undef_p = (h ? h->root.type == bfd_link_hash_undefweak
4517 : bfd_is_und_section (sym_sec));
4518
4519 /* Since STT_GNU_IFUNC symbol must go through PLT, we handle
4520 it here if it is defined in a non-shared object. */
4521 if (h != NULL
4522 && h->type == STT_GNU_IFUNC
4523 && h->def_regular)
4524 {
4525 asection *plt;
4526 const char *name;
4527 bfd_vma addend = 0;
4528
4529 if ((input_section->flags & SEC_ALLOC) == 0
4530 || h->plt.offset == (bfd_vma) -1)
4531 abort ();
4532
4533 /* STT_GNU_IFUNC symbol must go through PLT. */
4534 plt = globals->root.splt ? globals->root.splt : globals->root.iplt;
4535 value = (plt->output_section->vma + plt->output_offset + h->plt.offset);
4536
4537 switch (bfd_r_type)
4538 {
4539 default:
4540 if (h->root.root.string)
4541 name = h->root.root.string;
4542 else
4543 name = bfd_elf_sym_name (input_bfd, symtab_hdr, sym,
4544 NULL);
4545 (*_bfd_error_handler)
4546 (_("%B: relocation %s against STT_GNU_IFUNC "
4547 "symbol `%s' isn't handled by %s"), input_bfd,
4548 howto->name, name, __FUNCTION__);
4549 bfd_set_error (bfd_error_bad_value);
4550 return FALSE;
4551
4552 case BFD_RELOC_AARCH64_NN:
4553 if (rel->r_addend != 0)
4554 {
4555 if (h->root.root.string)
4556 name = h->root.root.string;
4557 else
4558 name = bfd_elf_sym_name (input_bfd, symtab_hdr,
4559 sym, NULL);
4560 (*_bfd_error_handler)
4561 (_("%B: relocation %s against STT_GNU_IFUNC "
4562 "symbol `%s' has non-zero addend: %d"),
4563 input_bfd, howto->name, name, rel->r_addend);
4564 bfd_set_error (bfd_error_bad_value);
4565 return FALSE;
4566 }
4567
4568 /* Generate dynamic relocation only when there is a
4569 non-GOT reference in a shared object. */
4570 if (info->shared && h->non_got_ref)
4571 {
4572 Elf_Internal_Rela outrel;
4573 asection *sreloc;
4574
4575 /* Need a dynamic relocation to get the real function
4576 address. */
4577 outrel.r_offset = _bfd_elf_section_offset (output_bfd,
4578 info,
4579 input_section,
4580 rel->r_offset);
4581 if (outrel.r_offset == (bfd_vma) -1
4582 || outrel.r_offset == (bfd_vma) -2)
4583 abort ();
4584
4585 outrel.r_offset += (input_section->output_section->vma
4586 + input_section->output_offset);
4587
4588 if (h->dynindx == -1
4589 || h->forced_local
4590 || info->executable)
4591 {
4592 /* This symbol is resolved locally. */
4593 outrel.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
4594 outrel.r_addend = (h->root.u.def.value
4595 + h->root.u.def.section->output_section->vma
4596 + h->root.u.def.section->output_offset);
4597 }
4598 else
4599 {
4600 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4601 outrel.r_addend = 0;
4602 }
4603
4604 sreloc = globals->root.irelifunc;
4605 elf_append_rela (output_bfd, sreloc, &outrel);
4606
4607 /* If this reloc is against an external symbol, we
4608 do not want to fiddle with the addend. Otherwise,
4609 we need to include the symbol value so that it
4610 becomes an addend for the dynamic reloc. For an
4611 internal symbol, we have already updated the addend. */
4612 return bfd_reloc_ok;
4613 }
4614 /* FALLTHROUGH */
4615 case BFD_RELOC_AARCH64_CALL26:
4616 case BFD_RELOC_AARCH64_JUMP26:
4617 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4618 signed_addend,
4619 weak_undef_p);
4620 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
4621 howto, value);
4622 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4623 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4624 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4625 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4626 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4627 base_got = globals->root.sgot;
4628 off = h->got.offset;
4629
4630 if (base_got == NULL)
4631 abort ();
4632
4633 if (off == (bfd_vma) -1)
4634 {
4635 bfd_vma plt_index;
4636
4637 /* We can't use h->got.offset here to save state, or
4638 even just remember the offset, as finish_dynamic_symbol
4639 would use that as offset into .got. */
4640
4641 if (globals->root.splt != NULL)
4642 {
4643 plt_index = ((h->plt.offset - globals->plt_header_size) /
4644 globals->plt_entry_size);
4645 off = (plt_index + 3) * GOT_ENTRY_SIZE;
4646 base_got = globals->root.sgotplt;
4647 }
4648 else
4649 {
4650 plt_index = h->plt.offset / globals->plt_entry_size;
4651 off = plt_index * GOT_ENTRY_SIZE;
4652 base_got = globals->root.igotplt;
4653 }
4654
4655 if (h->dynindx == -1
4656 || h->forced_local
4657 || info->symbolic)
4658 {
4659 /* This references the local definition. We must
4660 initialize this entry in the global offset table.
4661 Since the offset must always be a multiple of 8,
4662 we use the least significant bit to record
4663 whether we have initialized it already.
4664
4665 When doing a dynamic link, we create a .rela.got
4666 relocation entry to initialize the value. This
4667 is done in the finish_dynamic_symbol routine. */
4668 if ((off & 1) != 0)
4669 off &= ~1;
4670 else
4671 {
4672 bfd_put_NN (output_bfd, value,
4673 base_got->contents + off);
4674 /* Note that this is harmless as -1 | 1 still is -1. */
4675 h->got.offset |= 1;
4676 }
4677 }
4678 value = (base_got->output_section->vma
4679 + base_got->output_offset + off);
4680 }
4681 else
4682 value = aarch64_calculate_got_entry_vma (h, globals, info,
4683 value, output_bfd,
4684 unresolved_reloc_p);
4685 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15)
4686 addend = (globals->root.sgot->output_section->vma
4687 + globals->root.sgot->output_offset);
4688 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4689 addend, weak_undef_p);
4690 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type, howto, value);
4691 case BFD_RELOC_AARCH64_ADD_LO12:
4692 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4693 break;
4694 }
4695 }
4696
4697 switch (bfd_r_type)
4698 {
4699 case BFD_RELOC_AARCH64_NONE:
4700 case BFD_RELOC_AARCH64_TLSDESC_CALL:
4701 *unresolved_reloc_p = FALSE;
4702 return bfd_reloc_ok;
4703
4704 case BFD_RELOC_AARCH64_NN:
4705
4706 /* When generating a shared object or relocatable executable, these
4707 relocations are copied into the output file to be resolved at
4708 run time. */
4709 if (((info->shared == TRUE) || globals->root.is_relocatable_executable)
4710 && (input_section->flags & SEC_ALLOC)
4711 && (h == NULL
4712 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
4713 || h->root.type != bfd_link_hash_undefweak))
4714 {
4715 Elf_Internal_Rela outrel;
4716 bfd_byte *loc;
4717 bfd_boolean skip, relocate;
4718 asection *sreloc;
4719
4720 *unresolved_reloc_p = FALSE;
4721
4722 skip = FALSE;
4723 relocate = FALSE;
4724
4725 outrel.r_addend = signed_addend;
4726 outrel.r_offset =
4727 _bfd_elf_section_offset (output_bfd, info, input_section,
4728 rel->r_offset);
4729 if (outrel.r_offset == (bfd_vma) - 1)
4730 skip = TRUE;
4731 else if (outrel.r_offset == (bfd_vma) - 2)
4732 {
4733 skip = TRUE;
4734 relocate = TRUE;
4735 }
4736
4737 outrel.r_offset += (input_section->output_section->vma
4738 + input_section->output_offset);
4739
4740 if (skip)
4741 memset (&outrel, 0, sizeof outrel);
4742 else if (h != NULL
4743 && h->dynindx != -1
4744 && (!info->shared || !SYMBOLIC_BIND (info, h) || !h->def_regular))
4745 outrel.r_info = ELFNN_R_INFO (h->dynindx, r_type);
4746 else
4747 {
4748 int symbol;
4749
4750 /* On SVR4-ish systems, the dynamic loader cannot
4751 relocate the text and data segments independently,
4752 so the symbol does not matter. */
4753 symbol = 0;
4754 outrel.r_info = ELFNN_R_INFO (symbol, AARCH64_R (RELATIVE));
4755 outrel.r_addend += value;
4756 }
4757
4758 sreloc = elf_section_data (input_section)->sreloc;
4759 if (sreloc == NULL || sreloc->contents == NULL)
4760 return bfd_reloc_notsupported;
4761
4762 loc = sreloc->contents + sreloc->reloc_count++ * RELOC_SIZE (globals);
4763 bfd_elfNN_swap_reloca_out (output_bfd, &outrel, loc);
4764
4765 if (sreloc->reloc_count * RELOC_SIZE (globals) > sreloc->size)
4766 {
4767 /* Sanity check that we have previously allocated
4768 sufficient space in the relocation section for the
4769 number of relocations we actually want to emit. */
4770 abort ();
4771 }
4772
4773 /* If this reloc is against an external symbol, we do not want to
4774 fiddle with the addend. Otherwise, we need to include the symbol
4775 value so that it becomes an addend for the dynamic reloc. */
4776 if (!relocate)
4777 return bfd_reloc_ok;
4778
4779 return _bfd_final_link_relocate (howto, input_bfd, input_section,
4780 contents, rel->r_offset, value,
4781 signed_addend);
4782 }
4783 else
4784 value += signed_addend;
4785 break;
4786
4787 case BFD_RELOC_AARCH64_CALL26:
4788 case BFD_RELOC_AARCH64_JUMP26:
4789 {
4790 asection *splt = globals->root.splt;
4791 bfd_boolean via_plt_p =
4792 splt != NULL && h != NULL && h->plt.offset != (bfd_vma) - 1;
4793
4794 /* A call to an undefined weak symbol is converted to a jump to
4795 the next instruction unless a PLT entry will be created.
4796 The jump to the next instruction is optimized as a NOP.
4797 Do the same for local undefined symbols. */
4798 if (weak_undef_p && ! via_plt_p)
4799 {
4800 bfd_putl32 (INSN_NOP, hit_data);
4801 return bfd_reloc_ok;
4802 }
4803
4804 /* If the call goes through a PLT entry, make sure to
4805 check distance to the right destination address. */
4806 if (via_plt_p)
4807 {
4808 value = (splt->output_section->vma
4809 + splt->output_offset + h->plt.offset);
4810 *unresolved_reloc_p = FALSE;
4811 }
4812
4813 /* If the target symbol is global and marked as a function, the
4814 relocation applies to a function call or a tail call.  In this
4815 situation we can veneer out-of-range branches.  The veneers
4816 use IP0 and IP1, hence they cannot be used for arbitrary
4817 out-of-range branches that occur within the body of a function. */
4818 if (h && h->type == STT_FUNC)
4819 {
4820 /* Check if a stub has to be inserted because the destination
4821 is too far away. */
4822 if (! aarch64_valid_branch_p (value, place))
4823 {
4824 /* The target is out of reach, so redirect the branch to
4825 the local stub for this function. */
4826 struct elf_aarch64_stub_hash_entry *stub_entry;
4827 stub_entry = elfNN_aarch64_get_stub_entry (input_section,
4828 sym_sec, h,
4829 rel, globals);
4830 if (stub_entry != NULL)
4831 value = (stub_entry->stub_offset
4832 + stub_entry->stub_sec->output_offset
4833 + stub_entry->stub_sec->output_section->vma);
4834 }
4835 }
4836 }
4837 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4838 signed_addend, weak_undef_p);
4839 break;
4840
4841 case BFD_RELOC_AARCH64_16_PCREL:
4842 case BFD_RELOC_AARCH64_32_PCREL:
4843 case BFD_RELOC_AARCH64_64_PCREL:
4844 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
4845 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
4846 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
4847 case BFD_RELOC_AARCH64_LD_LO19_PCREL:
4848 if (info->shared
4849 && (input_section->flags & SEC_ALLOC) != 0
4850 && (input_section->flags & SEC_READONLY) != 0
4851 && h != NULL
4852 && !h->def_regular)
4853 {
4854 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
4855
4856 (*_bfd_error_handler)
4857 (_("%B: relocation %s against external symbol `%s' can not be used"
4858 " when making a shared object; recompile with -fPIC"),
4859 input_bfd, elfNN_aarch64_howto_table[howto_index].name,
4860 h->root.root.string);
4861 bfd_set_error (bfd_error_bad_value);
4862 return FALSE;
4863 }
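 /* Fall through to the shared resolve-and-apply code below.  */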
4864
4865 case BFD_RELOC_AARCH64_16:
4866#if ARCH_SIZE == 64
4867 case BFD_RELOC_AARCH64_32:
4868#endif
4869 case BFD_RELOC_AARCH64_ADD_LO12:
4870 case BFD_RELOC_AARCH64_BRANCH19:
4871 case BFD_RELOC_AARCH64_LDST128_LO12:
4872 case BFD_RELOC_AARCH64_LDST16_LO12:
4873 case BFD_RELOC_AARCH64_LDST32_LO12:
4874 case BFD_RELOC_AARCH64_LDST64_LO12:
4875 case BFD_RELOC_AARCH64_LDST8_LO12:
4876 case BFD_RELOC_AARCH64_MOVW_G0:
4877 case BFD_RELOC_AARCH64_MOVW_G0_NC:
4878 case BFD_RELOC_AARCH64_MOVW_G0_S:
4879 case BFD_RELOC_AARCH64_MOVW_G1:
4880 case BFD_RELOC_AARCH64_MOVW_G1_NC:
4881 case BFD_RELOC_AARCH64_MOVW_G1_S:
4882 case BFD_RELOC_AARCH64_MOVW_G2:
4883 case BFD_RELOC_AARCH64_MOVW_G2_NC:
4884 case BFD_RELOC_AARCH64_MOVW_G2_S:
4885 case BFD_RELOC_AARCH64_MOVW_G3:
4886 case BFD_RELOC_AARCH64_TSTBR14:
4887 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4888 signed_addend, weak_undef_p);
4889 break;
4890
4891 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
4892 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
4893 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
4894 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
4895 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
4896 if (globals->root.sgot == NULL)
4897 BFD_ASSERT (h != NULL);
4898
4899 if (h != NULL)
4900 {
4901 bfd_vma addend = 0;
4902 value = aarch64_calculate_got_entry_vma (h, globals, info, value,
4903 output_bfd,
4904 unresolved_reloc_p);
4905 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15)
4906 addend = (globals->root.sgot->output_section->vma
4907 + globals->root.sgot->output_offset);
4908 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4909 addend, weak_undef_p);
4910 }
4911 else
4912 {
4913 bfd_vma addend = 0;
4914 struct elf_aarch64_local_symbol *locals
4915 = elf_aarch64_locals (input_bfd);
4916
4917 if (locals == NULL)
4918 {
4919 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
4920 (*_bfd_error_handler)
4921 (_("%B: Local symbol descriptor table is NULL when applying "
4922 "relocation %s against a local symbol"),
4923 input_bfd, elfNN_aarch64_howto_table[howto_index].name);
4924 abort ();
4925 }
4926
4927 off = symbol_got_offset (input_bfd, h, r_symndx);
4928 base_got = globals->root.sgot;
4929 bfd_vma got_entry_addr = (base_got->output_section->vma
4930 + base_got->output_offset + off);
4931
4932 if (!symbol_got_offset_mark_p (input_bfd, h, r_symndx))
4933 {
4934 bfd_put_64 (output_bfd, value, base_got->contents + off);
4935
4936 if (info->shared)
4937 {
4938 asection *s;
4939 Elf_Internal_Rela outrel;
4940
4941 /* For a local symbol, we have done the absolute relocation at
4942 the static linking stage.  For a shared library, we need to
4943 update the content of the GOT entry according to the shared
4944 object's load base address, so we need to generate an
4945 R_AARCH64_RELATIVE reloc for the dynamic linker. */
4946 s = globals->root.srelgot;
4947 if (s == NULL)
4948 abort ();
4949
4950 outrel.r_offset = got_entry_addr;
4951 outrel.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
4952 outrel.r_addend = value;
4953 elf_append_rela (output_bfd, s, &outrel);
4954 }
4955
4956 symbol_got_offset_mark (input_bfd, h, r_symndx);
4957 }
4958
4959 /* Update the relocation value to the GOT entry address, as we have
4960 transformed the direct data access into an indirect access through the GOT. */
4961 value = got_entry_addr;
4962
4963 if (bfd_r_type == BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15)
4964 addend = base_got->output_section->vma + base_got->output_offset;
4965
4966 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4967 addend, weak_undef_p);
4968 }
4969
4970 break;
4971
4972 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
4973 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
4974 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
4975 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
4976 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
4977 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
4978 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
4979 if (globals->root.sgot == NULL)
4980 return bfd_reloc_notsupported;
4981
4982 value = (symbol_got_offset (input_bfd, h, r_symndx)
4983 + globals->root.sgot->output_section->vma
4984 + globals->root.sgot->output_offset);
4985
4986 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
4987 0, weak_undef_p);
4988 *unresolved_reloc_p = FALSE;
4989 break;
4990
4991 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
4992 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
4993 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
4994 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
4995 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
4996 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
4997 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
4998 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
4999 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
5000 signed_addend - tpoff_base (info),
5001 weak_undef_p);
5002 *unresolved_reloc_p = FALSE;
5003 break;
5004
5005 case BFD_RELOC_AARCH64_TLSDESC_ADD:
5006 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5007 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5008 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5009 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
5010 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
5011 case BFD_RELOC_AARCH64_TLSDESC_LDR:
5012 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5013 if (globals->root.sgot == NULL)
5014 return bfd_reloc_notsupported;
5015 value = (symbol_tlsdesc_got_offset (input_bfd, h, r_symndx)
5016 + globals->root.sgotplt->output_section->vma
5017 + globals->root.sgotplt->output_offset
5018 + globals->sgotplt_jump_table_size);
5019
5020 value = _bfd_aarch64_elf_resolve_relocation (bfd_r_type, place, value,
5021 0, weak_undef_p);
5022 *unresolved_reloc_p = FALSE;
5023 break;
5024
5025 default:
5026 return bfd_reloc_notsupported;
5027 }
5028
5029 if (saved_addend)
5030 *saved_addend = value;
5031
5032 /* Only apply the final relocation in a sequence. */
5033 if (save_addend)
5034 return bfd_reloc_continue;
5035
5036 return _bfd_aarch64_elf_put_addend (input_bfd, hit_data, bfd_r_type,
5037 howto, value);
5038}
5039
5040/* Handle TLS relaxations. Relaxing is possible for symbols that use
5041 R_AARCH64_TLSDESC_{ADR_PAGE21, LD64_LO12_NC, ADD_LO12_NC} during a static
5042 link.
5043
5044 Return bfd_reloc_ok if we're done, bfd_reloc_continue if the caller
5045 is to then call final_link_relocate. Return other values in the
5046 case of error. */
5047
5048static bfd_reloc_status_type
5049elfNN_aarch64_tls_relax (struct elf_aarch64_link_hash_table *globals,
5050 bfd *input_bfd, bfd_byte *contents,
5051 Elf_Internal_Rela *rel, struct elf_link_hash_entry *h)
5052{
5053 bfd_boolean is_local = h == NULL;
5054 unsigned int r_type = ELFNN_R_TYPE (rel->r_info);
5055 unsigned long insn;
5056
5057 BFD_ASSERT (globals && input_bfd && contents && rel);
5058
5059 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
5060 {
5061 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5062 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5063 if (is_local)
5064 {
5065 /* GD->LE relaxation:
5066 adrp x0, :tlsgd:var => movz x0, :tprel_g1:var
5067 or
5068 adrp x0, :tlsdesc:var => movz x0, :tprel_g1:var
5069 */
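 /* 0xd2a00000 is "movz x0, #0x0, lsl #16"; the :tprel_g1: immediate is
    filled in when the relaxed relocation is applied.  */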
5070 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
5071 return bfd_reloc_continue;
5072 }
5073 else
5074 {
5075 /* GD->IE relaxation:
5076 adrp x0, :tlsgd:var => adrp x0, :gottprel:var
5077 or
5078 adrp x0, :tlsdesc:var => adrp x0, :gottprel:var
5079 */
5080 return bfd_reloc_continue;
5081 }
5082
5083 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5084 BFD_ASSERT (0);
5085 break;
5086
5087 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5088 if (is_local)
5089 {
5090 /* Tiny TLSDESC->LE relaxation:
5091 ldr x1, :tlsdesc:var => movz x0, #:tprel_g1:var
5092 adr x0, :tlsdesc:var => movk x0, #:tprel_g0_nc:var
5093 .tlsdesccall var
5094 blr x1 => nop
5095 */
5096 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
5097 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
5098
5099 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5100 AARCH64_R (TLSLE_MOVW_TPREL_G0_NC));
5101 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5102
5103 bfd_putl32 (0xd2a00000, contents + rel->r_offset);
5104 bfd_putl32 (0xf2800000, contents + rel->r_offset + 4);
5105 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
5106 return bfd_reloc_continue;
5107 }
5108 else
5109 {
5110 /* Tiny TLSDESC->IE relaxation:
5111 ldr x1, :tlsdesc:var => ldr x0, :gottprel:var
5112 adr x0, :tlsdesc:var => nop
5113 .tlsdesccall var
5114 blr x1 => nop
5115 */
5116 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (TLSDESC_ADR_PREL21));
5117 BFD_ASSERT (ELFNN_R_TYPE (rel[2].r_info) == AARCH64_R (TLSDESC_CALL));
5118
5119 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5120 rel[2].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5121
5122 bfd_putl32 (0x58000000, contents + rel->r_offset);
5123 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 4);
5124 bfd_putl32 (INSN_NOP, contents + rel->r_offset + 8);
5125 return bfd_reloc_continue;
5126 }
5127
5128 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5129 if (is_local)
5130 {
5131 /* Tiny GD->LE relaxation:
5132 adr x0, :tlsgd:var => mrs x1, tpidr_el0
5133 bl __tls_get_addr => add x0, x1, #:tprel_hi12:x, lsl #12
5134 nop => add x0, x0, #:tprel_lo12_nc:x
5135 */
5136
5137 /* First kill the tls_get_addr reloc on the bl instruction. */
5138 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5139
5140 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 0);
5141 bfd_putl32 (0x91400020, contents + rel->r_offset + 4);
5142 bfd_putl32 (0x91000000, contents + rel->r_offset + 8);
5143
5144 rel[1].r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5145 AARCH64_R (TLSLE_ADD_TPREL_LO12_NC));
5146 rel[1].r_offset = rel->r_offset + 8;
5147
5148 /* Move the current relocation to the second instruction in
5149 the sequence. */
5150 rel->r_offset += 4;
5151 rel->r_info = ELFNN_R_INFO (ELFNN_R_SYM (rel->r_info),
5152 AARCH64_R (TLSLE_ADD_TPREL_HI12));
5153 return bfd_reloc_continue;
5154 }
5155 else
5156 {
5157 /* Tiny GD->IE relaxation:
5158 adr x0, :tlsgd:var => ldr x0, :gottprel:var
5159 bl __tls_get_addr => mrs x1, tpidr_el0
5160 nop => add x0, x0, x1
5161 */
5162
5163 /* First kill the tls_get_addr reloc on the bl instruction. */
5164 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5165 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5166
5167 bfd_putl32 (0x58000000, contents + rel->r_offset);
5168 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5169 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5170 return bfd_reloc_continue;
5171 }
5172
5173 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5174 return bfd_reloc_continue;
5175
5176 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5177 if (is_local)
5178 {
5179 /* GD->LE relaxation:
5180 ldr xd, [x0, #:tlsdesc_lo12:var] => movk x0, :tprel_g0_nc:var
5181 */
5182 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5183 return bfd_reloc_continue;
5184 }
5185 else
5186 {
5187 /* GD->IE relaxation:
5188 ldr xd, [x0, #:tlsdesc_lo12:var] => ldr x0, [x0, #:gottprel_lo12:var]
5189 */
5190 insn = bfd_getl32 (contents + rel->r_offset);
5191 insn &= 0xffffffe0;
5192 bfd_putl32 (insn, contents + rel->r_offset);
5193 return bfd_reloc_continue;
5194 }
5195
5196 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5197 if (is_local)
5198 {
5199 /* GD->LE relaxation
5200 add x0, #:tlsgd_lo12:var => movk x0, :tprel_g0_nc:var
5201 bl __tls_get_addr => mrs x1, tpidr_el0
5202 nop => add x0, x1, x0
5203 */
5204
5205 /* First kill the tls_get_addr reloc on the bl instruction. */
5206 BFD_ASSERT (rel->r_offset + 4 == rel[1].r_offset);
5207 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5208
5209 bfd_putl32 (0xf2800000, contents + rel->r_offset);
5210 bfd_putl32 (0xd53bd041, contents + rel->r_offset + 4);
5211 bfd_putl32 (0x8b000020, contents + rel->r_offset + 8);
5212 return bfd_reloc_continue;
5213 }
5214 else
5215 {
5216 /* GD->IE relaxation
5217 ADD x0, #:tlsgd_lo12:var => ldr x0, [x0, #:gottprel_lo12:var]
5218 BL __tls_get_addr => mrs x1, tpidr_el0
5219 R_AARCH64_CALL26
5220 NOP => add x0, x1, x0
5221 */
5222
5223 BFD_ASSERT (ELFNN_R_TYPE (rel[1].r_info) == AARCH64_R (CALL26));
5224
5225 /* Remove the relocation on the BL instruction. */
5226 rel[1].r_info = ELFNN_R_INFO (STN_UNDEF, R_AARCH64_NONE);
5227
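 /* 0xf9400000 is "ldr x0, [x0]"; the :gottprel_lo12: offset is supplied
    by the relaxed relocation on this instruction.  */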
5228 bfd_putl32 (0xf9400000, contents + rel->r_offset);
5229
5230 /* We choose to fixup the BL and NOP instructions using the
5231 offset from the second relocation to allow flexibility in
5232 scheduling instructions between the ADD and BL. */
5233 bfd_putl32 (0xd53bd041, contents + rel[1].r_offset);
5234 bfd_putl32 (0x8b000020, contents + rel[1].r_offset + 4);
5235 return bfd_reloc_continue;
5236 }
5237
5238 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5239 case BFD_RELOC_AARCH64_TLSDESC_CALL:
5240 /* GD->IE/LE relaxation:
5241 add x0, x0, #:tlsdesc_lo12:var => nop
5242 blr xd => nop
5243 */
5244 bfd_putl32 (INSN_NOP, contents + rel->r_offset);
5245 return bfd_reloc_ok;
5246
5247 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5248 /* IE->LE relaxation:
5249 adrp xd, :gottprel:var => movz xd, :tprel_g1:var
5250 */
5251 if (is_local)
5252 {
5253 insn = bfd_getl32 (contents + rel->r_offset);
5254 bfd_putl32 (0xd2a00000 | (insn & 0x1f), contents + rel->r_offset);
5255 }
5256 return bfd_reloc_continue;
5257
5258 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5259 /* IE->LE relaxation:
5260 ldr xd, [xm, #:gottprel_lo12:var] => movk xd, :tprel_g0_nc:var
5261 */
5262 if (is_local)
5263 {
5264 insn = bfd_getl32 (contents + rel->r_offset);
5265 bfd_putl32 (0xf2800000 | (insn & 0x1f), contents + rel->r_offset);
5266 }
5267 return bfd_reloc_continue;
5268
5269 default:
5270 return bfd_reloc_continue;
5271 }
5272
5273 return bfd_reloc_ok;
5274}
5275
5276/* Relocate an AArch64 ELF section. */
5277
5278static bfd_boolean
5279elfNN_aarch64_relocate_section (bfd *output_bfd,
5280 struct bfd_link_info *info,
5281 bfd *input_bfd,
5282 asection *input_section,
5283 bfd_byte *contents,
5284 Elf_Internal_Rela *relocs,
5285 Elf_Internal_Sym *local_syms,
5286 asection **local_sections)
5287{
5288 Elf_Internal_Shdr *symtab_hdr;
5289 struct elf_link_hash_entry **sym_hashes;
5290 Elf_Internal_Rela *rel;
5291 Elf_Internal_Rela *relend;
5292 const char *name;
5293 struct elf_aarch64_link_hash_table *globals;
5294 bfd_boolean save_addend = FALSE;
5295 bfd_vma addend = 0;
5296
5297 globals = elf_aarch64_hash_table (info);
5298
5299 symtab_hdr = &elf_symtab_hdr (input_bfd);
5300 sym_hashes = elf_sym_hashes (input_bfd);
5301
5302 rel = relocs;
5303 relend = relocs + input_section->reloc_count;
5304 for (; rel < relend; rel++)
5305 {
5306 unsigned int r_type;
5307 bfd_reloc_code_real_type bfd_r_type;
5308 bfd_reloc_code_real_type relaxed_bfd_r_type;
5309 reloc_howto_type *howto;
5310 unsigned long r_symndx;
5311 Elf_Internal_Sym *sym;
5312 asection *sec;
5313 struct elf_link_hash_entry *h;
5314 bfd_vma relocation;
5315 bfd_reloc_status_type r;
5316 arelent bfd_reloc;
5317 char sym_type;
5318 bfd_boolean unresolved_reloc = FALSE;
5319 char *error_message = NULL;
5320
5321 r_symndx = ELFNN_R_SYM (rel->r_info);
5322 r_type = ELFNN_R_TYPE (rel->r_info);
5323
5324 bfd_reloc.howto = elfNN_aarch64_howto_from_type (r_type);
5325 howto = bfd_reloc.howto;
5326
5327 if (howto == NULL)
5328 {
5329 (*_bfd_error_handler)
5330 (_("%B: unrecognized relocation (0x%x) in section `%A'"),
5331 input_bfd, input_section, r_type);
5332 return FALSE;
5333 }
5334 bfd_r_type = elfNN_aarch64_bfd_reloc_from_howto (howto);
5335
5336 h = NULL;
5337 sym = NULL;
5338 sec = NULL;
5339
5340 if (r_symndx < symtab_hdr->sh_info)
5341 {
5342 sym = local_syms + r_symndx;
5343 sym_type = ELFNN_ST_TYPE (sym->st_info);
5344 sec = local_sections[r_symndx];
5345
5346 /* An object file might have a reference to a local
5347 undefined symbol. This is a daft object file, but we
5348 should at least do something about it. */
5349 if (r_type != R_AARCH64_NONE && r_type != R_AARCH64_NULL
5350 && bfd_is_und_section (sec)
5351 && ELF_ST_BIND (sym->st_info) != STB_WEAK)
5352 {
5353 if (!info->callbacks->undefined_symbol
5354 (info, bfd_elf_string_from_elf_section
5355 (input_bfd, symtab_hdr->sh_link, sym->st_name),
5356 input_bfd, input_section, rel->r_offset, TRUE))
5357 return FALSE;
5358 }
5359
5360 relocation = _bfd_elf_rela_local_sym (output_bfd, sym, &sec, rel);
5361
5362 /* Relocate against local STT_GNU_IFUNC symbol. */
5363 if (!info->relocatable
5364 && ELF_ST_TYPE (sym->st_info) == STT_GNU_IFUNC)
5365 {
5366 h = elfNN_aarch64_get_local_sym_hash (globals, input_bfd,
5367 rel, FALSE);
5368 if (h == NULL)
5369 abort ();
5370
5371 /* Set STT_GNU_IFUNC symbol value. */
5372 h->root.u.def.value = sym->st_value;
5373 h->root.u.def.section = sec;
5374 }
5375 }
5376 else
5377 {
5378 bfd_boolean warned, ignored;
5379
5380 RELOC_FOR_GLOBAL_SYMBOL (info, input_bfd, input_section, rel,
5381 r_symndx, symtab_hdr, sym_hashes,
5382 h, sec, relocation,
5383 unresolved_reloc, warned, ignored);
5384
5385 sym_type = h->type;
5386 }
5387
5388 if (sec != NULL && discarded_section (sec))
5389 RELOC_AGAINST_DISCARDED_SECTION (info, input_bfd, input_section,
5390 rel, 1, relend, howto, 0, contents);
5391
5392 if (info->relocatable)
5393 continue;
5394
5395 if (h != NULL)
5396 name = h->root.root.string;
5397 else
5398 {
5399 name = (bfd_elf_string_from_elf_section
5400 (input_bfd, symtab_hdr->sh_link, sym->st_name));
5401 if (name == NULL || *name == '\0')
5402 name = bfd_section_name (input_bfd, sec);
5403 }
5404
5405 if (r_symndx != 0
5406 && r_type != R_AARCH64_NONE
5407 && r_type != R_AARCH64_NULL
5408 && (h == NULL
5409 || h->root.type == bfd_link_hash_defined
5410 || h->root.type == bfd_link_hash_defweak)
5411 && IS_AARCH64_TLS_RELOC (bfd_r_type) != (sym_type == STT_TLS))
5412 {
5413 (*_bfd_error_handler)
5414 ((sym_type == STT_TLS
5415 ? _("%B(%A+0x%lx): %s used with TLS symbol %s")
5416 : _("%B(%A+0x%lx): %s used with non-TLS symbol %s")),
5417 input_bfd,
5418 input_section, (long) rel->r_offset, howto->name, name);
5419 }
5420
5421 /* We relax only if we can see that there is a valid transition
5422 from one reloc type to another.
5423 We call elfNN_aarch64_final_link_relocate unless we're completely
5424 done, i.e., the relaxation produced the final output we want. */
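 /* Roughly, as I understand aarch64_tls_transition: no relaxation is
 attempted when building a shared object; in a non-shared link a
 :tlsgd: or :tlsdesc: sequence is rewritten into the :tprel: (LE)
 forms for a local symbol and into the :gottprel: (IE) forms
 otherwise, matching the instruction rewrites shown in
 elfNN_aarch64_tls_relax above. */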
5425
5426 relaxed_bfd_r_type = aarch64_tls_transition (input_bfd, info, r_type,
5427 h, r_symndx);
5428 if (relaxed_bfd_r_type != bfd_r_type)
5429 {
5430 bfd_r_type = relaxed_bfd_r_type;
5431 howto = elfNN_aarch64_howto_from_bfd_reloc (bfd_r_type);
5432 BFD_ASSERT (howto != NULL);
5433 r_type = howto->type;
5434 r = elfNN_aarch64_tls_relax (globals, input_bfd, contents, rel, h);
5435 unresolved_reloc = 0;
5436 }
5437 else
5438 r = bfd_reloc_continue;
5439
5440 /* There may be multiple consecutive relocations for the
5441 same offset. In that case we are supposed to treat the
5442 output of each relocation as the addend for the next. */
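 /* A made-up illustration: if rel[0] and rel[1] both target offset O,
 save_addend is set for rel[0] so that the value it computes is
 carried over in ADDEND and treated as the addend when rel[1] is
 processed, rather than each relocation being resolved at O in
 isolation. */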
5443 if (rel + 1 < relend
5444 && rel->r_offset == rel[1].r_offset
5445 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NONE
5446 && ELFNN_R_TYPE (rel[1].r_info) != R_AARCH64_NULL)
5447 save_addend = TRUE;
5448 else
5449 save_addend = FALSE;
5450
5451 if (r == bfd_reloc_continue)
5452 r = elfNN_aarch64_final_link_relocate (howto, input_bfd, output_bfd,
5453 input_section, contents, rel,
5454 relocation, info, sec,
5455 h, &unresolved_reloc,
5456 save_addend, &addend, sym);
5457
5458 switch (elfNN_aarch64_bfd_reloc_from_type (r_type))
5459 {
5460 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5461 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5462 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5463 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5464 {
5465 bfd_boolean need_relocs = FALSE;
5466 bfd_byte *loc;
5467 int indx;
5468 bfd_vma off;
5469
5470 off = symbol_got_offset (input_bfd, h, r_symndx);
5471 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5472
5473 need_relocs =
5474 (info->shared || indx != 0) &&
5475 (h == NULL
5476 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5477 || h->root.type != bfd_link_hash_undefweak);
5478
5479 BFD_ASSERT (globals->root.srelgot != NULL);
5480
5481 if (need_relocs)
5482 {
5483 Elf_Internal_Rela rela;
5484 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPMOD));
5485 rela.r_addend = 0;
5486 rela.r_offset = globals->root.sgot->output_section->vma +
5487 globals->root.sgot->output_offset + off;
5488
5489
5490 loc = globals->root.srelgot->contents;
5491 loc += globals->root.srelgot->reloc_count++
5492 * RELOC_SIZE (htab);
5493 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5494
5495 if (indx == 0)
5496 {
5497 bfd_put_NN (output_bfd,
5498 relocation - dtpoff_base (info),
5499 globals->root.sgot->contents + off
5500 + GOT_ENTRY_SIZE);
5501 }
5502 else
5503 {
5504 /* This TLS symbol is global. We emit a
5505 relocation to fixup the tls offset at load
5506 time. */
5507 rela.r_info =
5508 ELFNN_R_INFO (indx, AARCH64_R (TLS_DTPREL));
5509 rela.r_addend = 0;
5510 rela.r_offset =
5511 (globals->root.sgot->output_section->vma
5512 + globals->root.sgot->output_offset + off
5513 + GOT_ENTRY_SIZE);
5514
5515 loc = globals->root.srelgot->contents;
5516 loc += globals->root.srelgot->reloc_count++
5517 * RELOC_SIZE (globals);
5518 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5519 bfd_put_NN (output_bfd, (bfd_vma) 0,
5520 globals->root.sgot->contents + off
5521 + GOT_ENTRY_SIZE);
5522 }
5523 }
5524 else
5525 {
5526 bfd_put_NN (output_bfd, (bfd_vma) 1,
5527 globals->root.sgot->contents + off);
5528 bfd_put_NN (output_bfd,
5529 relocation - dtpoff_base (info),
5530 globals->root.sgot->contents + off
5531 + GOT_ENTRY_SIZE);
5532 }
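 /* A summary sketch of what has just been written: the GD entry
 occupies two consecutive GOT words starting at OFF. The first
 word is the module id (R_AARCH64_TLS_DTPMOD when a dynamic reloc
 is needed, otherwise the constant 1, conventionally the main
 program's module id); the second word is the offset within that
 module's TLS block (R_AARCH64_TLS_DTPREL for a global symbol,
 otherwise relocation - dtpoff_base (info) written directly). */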
5533
5534 symbol_got_offset_mark (input_bfd, h, r_symndx);
5535 }
5536 break;
5537
5538 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5539 case BFD_RELOC_AARCH64_TLSIE_LDNN_GOTTPREL_LO12_NC:
5540 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5541 if (! symbol_got_offset_mark_p (input_bfd, h, r_symndx))
5542 {
5543 bfd_boolean need_relocs = FALSE;
5544 bfd_byte *loc;
5545 int indx;
5546 bfd_vma off;
5547
5548 off = symbol_got_offset (input_bfd, h, r_symndx);
5549
5550 indx = h && h->dynindx != -1 ? h->dynindx : 0;
5551
5552 need_relocs =
5553 (info->shared || indx != 0) &&
5554 (h == NULL
5555 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5556 || h->root.type != bfd_link_hash_undefweak);
5557
5558 BFD_ASSERT (globals->root.srelgot != NULL);
5559
5560 if (need_relocs)
5561 {
5562 Elf_Internal_Rela rela;
5563
5564 if (indx == 0)
5565 rela.r_addend = relocation - dtpoff_base (info);
5566 else
5567 rela.r_addend = 0;
5568
5569 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLS_TPREL));
5570 rela.r_offset = globals->root.sgot->output_section->vma +
5571 globals->root.sgot->output_offset + off;
5572
5573 loc = globals->root.srelgot->contents;
5574 loc += globals->root.srelgot->reloc_count++
5575 * RELOC_SIZE (htab);
5576
5577 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5578
5579 bfd_put_NN (output_bfd, rela.r_addend,
5580 globals->root.sgot->contents + off);
5581 }
5582 else
5583 bfd_put_NN (output_bfd, relocation - tpoff_base (info),
5584 globals->root.sgot->contents + off);
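 /* Unlike the GD case above, IE needs only a single GOT word: it
 holds the TP-relative offset of the variable, supplied either by
 an R_AARCH64_TLS_TPREL dynamic reloc or, when nothing dynamic is
 required, by relocation - tpoff_base (info) written directly. */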
5585
5586 symbol_got_offset_mark (input_bfd, h, r_symndx);
5587 }
5588 break;
5589
5590 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5591 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5592 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5593 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5594 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5595 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5596 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5597 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5598 break;
5599
5600 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5601 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5602 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5603 case BFD_RELOC_AARCH64_TLSDESC_LDNN_LO12_NC:
5604 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5605 if (! symbol_tlsdesc_got_offset_mark_p (input_bfd, h, r_symndx))
5606 {
5607 bfd_boolean need_relocs = FALSE;
5608 int indx = h && h->dynindx != -1 ? h->dynindx : 0;
5609 bfd_vma off = symbol_tlsdesc_got_offset (input_bfd, h, r_symndx);
5610
5611 need_relocs = (h == NULL
5612 || ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
5613 || h->root.type != bfd_link_hash_undefweak);
5614
5615 BFD_ASSERT (globals->root.srelgot != NULL);
5616 BFD_ASSERT (globals->root.sgot != NULL);
5617
5618 if (need_relocs)
5619 {
5620 bfd_byte *loc;
5621 Elf_Internal_Rela rela;
5622 rela.r_info = ELFNN_R_INFO (indx, AARCH64_R (TLSDESC));
5623
5624 rela.r_addend = 0;
5625 rela.r_offset = (globals->root.sgotplt->output_section->vma
5626 + globals->root.sgotplt->output_offset
5627 + off + globals->sgotplt_jump_table_size);
5628
5629 if (indx == 0)
5630 rela.r_addend = relocation - dtpoff_base (info);
5631
5632 /* Allocate the next available slot in the PLT reloc
5633 section to hold our R_AARCH64_TLSDESC; the next
5634 available slot is determined from reloc_count,
5635 which we step. Note that reloc_count was
5636 artificially moved down while allocating slots for
5637 real PLT relocs, such that all of the PLT relocs
5638 will fit above the initial reloc_count and the
5639 extra entries will fit below. */
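 /* An illustrative view of .rela.plt under this scheme: the slots
 for the PLT-related (JUMP_SLOT) relocs come first and were
 accounted for while sizing, so at this point reloc_count names
 the first free slot after them; each R_AARCH64_TLSDESC is dropped
 into that slot and reloc_count is stepped, appending the TLSDESC
 relocs after the PLT ones. */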
5640 loc = globals->root.srelplt->contents;
5641 loc += globals->root.srelplt->reloc_count++
5642 * RELOC_SIZE (globals);
5643
5644 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
5645
5646 bfd_put_NN (output_bfd, (bfd_vma) 0,
5647 globals->root.sgotplt->contents + off +
5648 globals->sgotplt_jump_table_size);
5649 bfd_put_NN (output_bfd, (bfd_vma) 0,
5650 globals->root.sgotplt->contents + off +
5651 globals->sgotplt_jump_table_size +
5652 GOT_ENTRY_SIZE);
5653 }
5654
5655 symbol_tlsdesc_got_offset_mark (input_bfd, h, r_symndx);
5656 }
5657 break;
5658 default:
5659 break;
5660 }
5661
5662 if (!save_addend)
5663 addend = 0;
5664
5665
5666 /* Dynamic relocs are not propagated for SEC_DEBUGGING sections
5667 because such sections are not SEC_ALLOC and thus ld.so will
5668 not process them. */
5669 if (unresolved_reloc
5670 && !((input_section->flags & SEC_DEBUGGING) != 0
5671 && h->def_dynamic)
5672 && _bfd_elf_section_offset (output_bfd, info, input_section,
5673 rel->r_offset) != (bfd_vma) - 1)
5674 {
5675 (*_bfd_error_handler)
5676 (_
5677 ("%B(%A+0x%lx): unresolvable %s relocation against symbol `%s'"),
5678 input_bfd, input_section, (long) rel->r_offset, howto->name,
5679 h->root.root.string);
5680 return FALSE;
5681 }
5682
5683 if (r != bfd_reloc_ok && r != bfd_reloc_continue)
5684 {
5685 switch (r)
5686 {
5687 case bfd_reloc_overflow:
5688 if (!(*info->callbacks->reloc_overflow)
5689 (info, (h ? &h->root : NULL), name, howto->name, (bfd_vma) 0,
5690 input_bfd, input_section, rel->r_offset))
5691 return FALSE;
5692 break;
5693
5694 case bfd_reloc_undefined:
5695 if (!((*info->callbacks->undefined_symbol)
5696 (info, name, input_bfd, input_section,
5697 rel->r_offset, TRUE)))
5698 return FALSE;
5699 break;
5700
5701 case bfd_reloc_outofrange:
5702 error_message = _("out of range");
5703 goto common_error;
5704
5705 case bfd_reloc_notsupported:
5706 error_message = _("unsupported relocation");
5707 goto common_error;
5708
5709 case bfd_reloc_dangerous:
5710 /* error_message should already be set. */
5711 goto common_error;
5712
5713 default:
5714 error_message = _("unknown error");
5715 /* Fall through. */
5716
5717 common_error:
5718 BFD_ASSERT (error_message != NULL);
5719 if (!((*info->callbacks->reloc_dangerous)
5720 (info, error_message, input_bfd, input_section,
5721 rel->r_offset)))
5722 return FALSE;
5723 break;
5724 }
5725 }
5726 }
5727
5728 return TRUE;
5729}
5730
5731/* Set the right machine number. */
5732
5733static bfd_boolean
5734elfNN_aarch64_object_p (bfd *abfd)
5735{
5736#if ARCH_SIZE == 32
5737 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64_ilp32);
5738#else
5739 bfd_default_set_arch_mach (abfd, bfd_arch_aarch64, bfd_mach_aarch64);
5740#endif
5741 return TRUE;
5742}
5743
5744/* Function to keep AArch64 specific flags in the ELF header. */
5745
5746static bfd_boolean
5747elfNN_aarch64_set_private_flags (bfd *abfd, flagword flags)
5748{
5749 if (elf_flags_init (abfd) && elf_elfheader (abfd)->e_flags != flags)
5750 {
5751 }
5752 else
5753 {
5754 elf_elfheader (abfd)->e_flags = flags;
5755 elf_flags_init (abfd) = TRUE;
5756 }
5757
5758 return TRUE;
5759}
5760
5761/* Merge backend specific data from an object file to the output
5762 object file when linking. */
5763
5764static bfd_boolean
5765elfNN_aarch64_merge_private_bfd_data (bfd *ibfd, bfd *obfd)
5766{
5767 flagword out_flags;
5768 flagword in_flags;
5769 bfd_boolean flags_compatible = TRUE;
5770 asection *sec;
5771
5772 /* Check if we have the same endianness. */
5773 if (!_bfd_generic_verify_endian_match (ibfd, obfd))
5774 return FALSE;
5775
5776 if (!is_aarch64_elf (ibfd) || !is_aarch64_elf (obfd))
5777 return TRUE;
5778
5779 /* The input BFD must have had its flags initialised. */
5780 /* The following seems bogus to me -- The flags are initialized in
5781 the assembler but I don't think an elf_flags_init field is
5782 written into the object. */
5783 /* BFD_ASSERT (elf_flags_init (ibfd)); */
5784
5785 in_flags = elf_elfheader (ibfd)->e_flags;
5786 out_flags = elf_elfheader (obfd)->e_flags;
5787
5788 if (!elf_flags_init (obfd))
5789 {
5790 /* If the input is the default architecture and had the default
5791 flags then do not bother setting the flags for the output
5792 architecture; instead allow future merges to do this. If no
5793 future merges ever set these flags then they will retain their
5794 uninitialised values, which, surprise surprise, correspond
5795 to the default values. */
5796 if (bfd_get_arch_info (ibfd)->the_default
5797 && elf_elfheader (ibfd)->e_flags == 0)
5798 return TRUE;
5799
5800 elf_flags_init (obfd) = TRUE;
5801 elf_elfheader (obfd)->e_flags = in_flags;
5802
5803 if (bfd_get_arch (obfd) == bfd_get_arch (ibfd)
5804 && bfd_get_arch_info (obfd)->the_default)
5805 return bfd_set_arch_mach (obfd, bfd_get_arch (ibfd),
5806 bfd_get_mach (ibfd));
5807
5808 return TRUE;
5809 }
5810
5811 /* Identical flags must be compatible. */
5812 if (in_flags == out_flags)
5813 return TRUE;
5814
5815 /* Check to see if the input BFD actually contains any sections. If
5816 not, its flags may not have been initialised either, but it
5817 cannot actually cause any incompatibility. Do not short-circuit
5818 dynamic objects; their section list may be emptied by
5819 elf_link_add_object_symbols.
5820
5821 Also check to see if there are no code sections in the input.
5822 In this case there is no need to check for code specific flags.
5823 XXX - do we need to worry about floating-point format compatibility
5824 in data sections ? */
5825 if (!(ibfd->flags & DYNAMIC))
5826 {
5827 bfd_boolean null_input_bfd = TRUE;
5828 bfd_boolean only_data_sections = TRUE;
5829
5830 for (sec = ibfd->sections; sec != NULL; sec = sec->next)
5831 {
5832 if ((bfd_get_section_flags (ibfd, sec)
5833 & (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5834 == (SEC_LOAD | SEC_CODE | SEC_HAS_CONTENTS))
5835 only_data_sections = FALSE;
5836
5837 null_input_bfd = FALSE;
5838 break;
5839 }
5840
5841 if (null_input_bfd || only_data_sections)
5842 return TRUE;
5843 }
5844
5845 return flags_compatible;
5846}
5847
5848/* Display the flags field. */
5849
5850static bfd_boolean
5851elfNN_aarch64_print_private_bfd_data (bfd *abfd, void *ptr)
5852{
5853 FILE *file = (FILE *) ptr;
5854 unsigned long flags;
5855
5856 BFD_ASSERT (abfd != NULL && ptr != NULL);
5857
5858 /* Print normal ELF private data. */
5859 _bfd_elf_print_private_bfd_data (abfd, ptr);
5860
5861 flags = elf_elfheader (abfd)->e_flags;
5862 /* Ignore init flag - it may not be set, despite the flags field
5863 containing valid data. */
5864
5865 /* xgettext:c-format */
5866 fprintf (file, _("private flags = %lx:"), elf_elfheader (abfd)->e_flags);
5867
5868 if (flags)
5869 fprintf (file, _("<Unrecognised flag bits set>"));
5870
5871 fputc ('\n', file);
5872
5873 return TRUE;
5874}
5875
5876/* Update the got entry reference counts for the section being removed. */
5877
5878static bfd_boolean
5879elfNN_aarch64_gc_sweep_hook (bfd *abfd,
5880 struct bfd_link_info *info,
5881 asection *sec,
5882 const Elf_Internal_Rela * relocs)
5883{
5884 struct elf_aarch64_link_hash_table *htab;
5885 Elf_Internal_Shdr *symtab_hdr;
5886 struct elf_link_hash_entry **sym_hashes;
5887 struct elf_aarch64_local_symbol *locals;
5888 const Elf_Internal_Rela *rel, *relend;
5889
5890 if (info->relocatable)
5891 return TRUE;
5892
5893 htab = elf_aarch64_hash_table (info);
5894
5895 if (htab == NULL)
5896 return FALSE;
5897
5898 elf_section_data (sec)->local_dynrel = NULL;
5899
5900 symtab_hdr = &elf_symtab_hdr (abfd);
5901 sym_hashes = elf_sym_hashes (abfd);
5902
5903 locals = elf_aarch64_locals (abfd);
5904
5905 relend = relocs + sec->reloc_count;
5906 for (rel = relocs; rel < relend; rel++)
5907 {
5908 unsigned long r_symndx;
5909 unsigned int r_type;
5910 struct elf_link_hash_entry *h = NULL;
5911
5912 r_symndx = ELFNN_R_SYM (rel->r_info);
5913
5914 if (r_symndx >= symtab_hdr->sh_info)
5915 {
5916
5917 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
5918 while (h->root.type == bfd_link_hash_indirect
5919 || h->root.type == bfd_link_hash_warning)
5920 h = (struct elf_link_hash_entry *) h->root.u.i.link;
5921 }
5922 else
5923 {
5924 Elf_Internal_Sym *isym;
5925
5926 /* A local symbol. */
5927 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
5928 abfd, r_symndx);
5929
5930 /* Check relocation against local STT_GNU_IFUNC symbol. */
5931 if (isym != NULL
5932 && ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
5933 {
5934 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel, FALSE);
5935 if (h == NULL)
5936 abort ();
5937 }
5938 }
5939
5940 if (h)
5941 {
5942 struct elf_aarch64_link_hash_entry *eh;
5943 struct elf_dyn_relocs **pp;
5944 struct elf_dyn_relocs *p;
5945
5946 eh = (struct elf_aarch64_link_hash_entry *) h;
5947
5948 for (pp = &eh->dyn_relocs; (p = *pp) != NULL; pp = &p->next)
5949 if (p->sec == sec)
5950 {
5951 /* Everything must go for SEC. */
5952 *pp = p->next;
5953 break;
5954 }
5955 }
5956
5957 r_type = ELFNN_R_TYPE (rel->r_info);
5958 switch (aarch64_tls_transition (abfd, info, r_type, h, r_symndx))
5959 {
5960 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
5961 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
5962 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
5963 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
5964 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
5965 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
5966 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
5967 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
5968 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
5969 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
5970 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
5971 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
5972 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
5973 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
5974 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
5975 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
5976 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
5977 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
5978 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
5979 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
5980 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
5981 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
5982 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
5983 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
5984 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
5985 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
5986 if (h != NULL)
5987 {
5988 if (h->got.refcount > 0)
5989 h->got.refcount -= 1;
5990
5991 if (h->type == STT_GNU_IFUNC)
5992 {
5993 if (h->plt.refcount > 0)
5994 h->plt.refcount -= 1;
5995 }
5996 }
5997 else if (locals != NULL)
5998 {
5999 if (locals[r_symndx].got_refcount > 0)
6000 locals[r_symndx].got_refcount -= 1;
6001 }
6002 break;
6003
6004 case BFD_RELOC_AARCH64_CALL26:
6005 case BFD_RELOC_AARCH64_JUMP26:
6006 /* If this is a local symbol then we resolve it
6007 directly without creating a PLT entry. */
6008 if (h == NULL)
6009 continue;
6010
6011 if (h->plt.refcount > 0)
6012 h->plt.refcount -= 1;
6013 break;
6014
6015 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
6016 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6017 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
6018 case BFD_RELOC_AARCH64_MOVW_G0_NC:
6019 case BFD_RELOC_AARCH64_MOVW_G1_NC:
6020 case BFD_RELOC_AARCH64_MOVW_G2_NC:
6021 case BFD_RELOC_AARCH64_MOVW_G3:
6022 case BFD_RELOC_AARCH64_NN:
6023 if (h != NULL && info->executable)
6024 {
6025 if (h->plt.refcount > 0)
6026 h->plt.refcount -= 1;
6027 }
6028 break;
6029
6030 default:
6031 break;
6032 }
6033 }
6034
6035 return TRUE;
6036}
6037
6038/* Adjust a symbol defined by a dynamic object and referenced by a
6039 regular object. The current definition is in some section of the
6040 dynamic object, but we're not including those sections. We have to
6041 change the definition to something the rest of the link can
6042 understand. */
6043
6044static bfd_boolean
6045elfNN_aarch64_adjust_dynamic_symbol (struct bfd_link_info *info,
6046 struct elf_link_hash_entry *h)
6047{
6048 struct elf_aarch64_link_hash_table *htab;
6049 asection *s;
6050
6051 /* If this is a function, put it in the procedure linkage table. We
6052 will fill in the contents of the procedure linkage table later,
6053 when we know the address of the .got section. */
6054 if (h->type == STT_FUNC || h->type == STT_GNU_IFUNC || h->needs_plt)
6055 {
6056 if (h->plt.refcount <= 0
6057 || (h->type != STT_GNU_IFUNC
6058 && (SYMBOL_CALLS_LOCAL (info, h)
6059 || (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT
6060 && h->root.type == bfd_link_hash_undefweak))))
6061 {
6062 /* This case can occur if we saw a CALL26 reloc in
6063 an input file, but the symbol wasn't referred to
6064 by a dynamic object or all references were
6065 garbage collected, in which case the call can be
6066 resolved directly and no PLT entry is needed. */
6067 h->plt.offset = (bfd_vma) - 1;
6068 h->needs_plt = 0;
6069 }
6070
6071 return TRUE;
6072 }
6073 else
6074 /* Otherwise, reset to -1. */
6075 h->plt.offset = (bfd_vma) - 1;
6076
6077
6078 /* If this is a weak symbol, and there is a real definition, the
6079 processor independent code will have arranged for us to see the
6080 real definition first, and we can just use the same value. */
6081 if (h->u.weakdef != NULL)
6082 {
6083 BFD_ASSERT (h->u.weakdef->root.type == bfd_link_hash_defined
6084 || h->u.weakdef->root.type == bfd_link_hash_defweak);
6085 h->root.u.def.section = h->u.weakdef->root.u.def.section;
6086 h->root.u.def.value = h->u.weakdef->root.u.def.value;
6087 if (ELIMINATE_COPY_RELOCS || info->nocopyreloc)
6088 h->non_got_ref = h->u.weakdef->non_got_ref;
6089 return TRUE;
6090 }
6091
6092 /* If we are creating a shared library, we must presume that the
6093 only references to the symbol are via the global offset table.
6094 For such cases we need not do anything here; the relocations will
6095 be handled correctly by relocate_section. */
6096 if (info->shared)
6097 return TRUE;
6098
6099 /* If there are no references to this symbol that do not use the
6100 GOT, we don't need to generate a copy reloc. */
6101 if (!h->non_got_ref)
6102 return TRUE;
6103
6104 /* If -z nocopyreloc was given, we won't generate them either. */
6105 if (info->nocopyreloc)
6106 {
6107 h->non_got_ref = 0;
6108 return TRUE;
6109 }
6110
6111 /* We must allocate the symbol in our .dynbss section, which will
6112 become part of the .bss section of the executable. There will be
6113 an entry for this symbol in the .dynsym section. The dynamic
6114 object will contain position independent code, so all references
6115 from the dynamic object to this symbol will go through the global
6116 offset table. The dynamic linker will use the .dynsym entry to
6117 determine the address it must put in the global offset table, so
6118 both the dynamic object and the regular object will refer to the
6119 same memory location for the variable. */
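 /* A small worked example of the situation described above (the
 names are of course made up): the executable contains
 "extern int bar; ... bar = 1;" while bar is defined in a shared
 library. The direct store needs bar at a link-time-constant
 address, so bar is given space in .dynbss together with an
 R_AARCH64_COPY reloc, and the shared library's own references
 reach the copied instance through its GOT. */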
6120
6121 htab = elf_aarch64_hash_table (info);
6122
6123 /* We must generate a R_AARCH64_COPY reloc to tell the dynamic linker
6124 to copy the initial value out of the dynamic object and into the
6125 runtime process image. */
6126 if ((h->root.u.def.section->flags & SEC_ALLOC) != 0 && h->size != 0)
6127 {
6128 htab->srelbss->size += RELOC_SIZE (htab);
6129 h->needs_copy = 1;
6130 }
6131
6132 s = htab->sdynbss;
6133
6134 return _bfd_elf_adjust_dynamic_copy (info, h, s);
6135
6136}
6137
6138static bfd_boolean
6139elfNN_aarch64_allocate_local_symbols (bfd *abfd, unsigned number)
6140{
6141 struct elf_aarch64_local_symbol *locals;
6142 locals = elf_aarch64_locals (abfd);
6143 if (locals == NULL)
6144 {
6145 locals = (struct elf_aarch64_local_symbol *)
6146 bfd_zalloc (abfd, number * sizeof (struct elf_aarch64_local_symbol));
6147 if (locals == NULL)
6148 return FALSE;
6149 elf_aarch64_locals (abfd) = locals;
6150 }
6151 return TRUE;
6152}
6153
6154/* Create the .got section to hold the global offset table. */
6155
6156static bfd_boolean
6157aarch64_elf_create_got_section (bfd *abfd, struct bfd_link_info *info)
6158{
6159 const struct elf_backend_data *bed = get_elf_backend_data (abfd);
6160 flagword flags;
6161 asection *s;
6162 struct elf_link_hash_entry *h;
6163 struct elf_link_hash_table *htab = elf_hash_table (info);
6164
6165 /* This function may be called more than once. */
6166 s = bfd_get_linker_section (abfd, ".got");
6167 if (s != NULL)
6168 return TRUE;
6169
6170 flags = bed->dynamic_sec_flags;
6171
6172 s = bfd_make_section_anyway_with_flags (abfd,
6173 (bed->rela_plts_and_copies_p
6174 ? ".rela.got" : ".rel.got"),
6175 (bed->dynamic_sec_flags
6176 | SEC_READONLY));
6177 if (s == NULL
6178 || ! bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6179 return FALSE;
6180 htab->srelgot = s;
6181
6182 s = bfd_make_section_anyway_with_flags (abfd, ".got", flags);
6183 if (s == NULL
6184 || !bfd_set_section_alignment (abfd, s, bed->s->log_file_align))
6185 return FALSE;
6186 htab->sgot = s;
6187 htab->sgot->size += GOT_ENTRY_SIZE;
6188
6189 if (bed->want_got_sym)
6190 {
6191 /* Define the symbol _GLOBAL_OFFSET_TABLE_ at the start of the .got
6192 (or .got.plt) section. We don't do this in the linker script
6193 because we don't want to define the symbol if we are not creating
6194 a global offset table. */
6195 h = _bfd_elf_define_linkage_sym (abfd, info, s,
6196 "_GLOBAL_OFFSET_TABLE_");
6197 elf_hash_table (info)->hgot = h;
6198 if (h == NULL)
6199 return FALSE;
6200 }
6201
6202 if (bed->want_got_plt)
6203 {
6204 s = bfd_make_section_anyway_with_flags (abfd, ".got.plt", flags);
6205 if (s == NULL
6206 || !bfd_set_section_alignment (abfd, s,
6207 bed->s->log_file_align))
6208 return FALSE;
6209 htab->sgotplt = s;
6210 }
6211
6212 /* The first bit of the global offset table is the header. */
6213 s->size += bed->got_header_size;
6214
6215 return TRUE;
6216}
6217
6218/* Look through the relocs for a section during the first phase. */
6219
6220static bfd_boolean
6221elfNN_aarch64_check_relocs (bfd *abfd, struct bfd_link_info *info,
6222 asection *sec, const Elf_Internal_Rela *relocs)
6223{
6224 Elf_Internal_Shdr *symtab_hdr;
6225 struct elf_link_hash_entry **sym_hashes;
6226 const Elf_Internal_Rela *rel;
6227 const Elf_Internal_Rela *rel_end;
6228 asection *sreloc;
6229
6230 struct elf_aarch64_link_hash_table *htab;
6231
6232 if (info->relocatable)
6233 return TRUE;
6234
6235 BFD_ASSERT (is_aarch64_elf (abfd));
6236
6237 htab = elf_aarch64_hash_table (info);
6238 sreloc = NULL;
6239
6240 symtab_hdr = &elf_symtab_hdr (abfd);
6241 sym_hashes = elf_sym_hashes (abfd);
6242
6243 rel_end = relocs + sec->reloc_count;
6244 for (rel = relocs; rel < rel_end; rel++)
6245 {
6246 struct elf_link_hash_entry *h;
6247 unsigned long r_symndx;
6248 unsigned int r_type;
6249 bfd_reloc_code_real_type bfd_r_type;
6250 Elf_Internal_Sym *isym;
6251
6252 r_symndx = ELFNN_R_SYM (rel->r_info);
6253 r_type = ELFNN_R_TYPE (rel->r_info);
6254
6255 if (r_symndx >= NUM_SHDR_ENTRIES (symtab_hdr))
6256 {
6257 (*_bfd_error_handler) (_("%B: bad symbol index: %d"), abfd,
6258 r_symndx);
6259 return FALSE;
6260 }
6261
6262 if (r_symndx < symtab_hdr->sh_info)
6263 {
6264 /* A local symbol. */
6265 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6266 abfd, r_symndx);
6267 if (isym == NULL)
6268 return FALSE;
6269
6270 /* Check relocation against local STT_GNU_IFUNC symbol. */
6271 if (ELF_ST_TYPE (isym->st_info) == STT_GNU_IFUNC)
6272 {
6273 h = elfNN_aarch64_get_local_sym_hash (htab, abfd, rel,
6274 TRUE);
6275 if (h == NULL)
6276 return FALSE;
6277
6278 /* Fake a STT_GNU_IFUNC symbol. */
6279 h->type = STT_GNU_IFUNC;
6280 h->def_regular = 1;
6281 h->ref_regular = 1;
6282 h->forced_local = 1;
6283 h->root.type = bfd_link_hash_defined;
6284 }
6285 else
6286 h = NULL;
6287 }
6288 else
6289 {
6290 h = sym_hashes[r_symndx - symtab_hdr->sh_info];
6291 while (h->root.type == bfd_link_hash_indirect
6292 || h->root.type == bfd_link_hash_warning)
6293 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6294
6295 /* PR15323, ref flags aren't set for references in the same
6296 object. */
6297 h->root.non_ir_ref = 1;
6298 }
6299
6300 /* Could be done earlier, if h were already available. */
6301 bfd_r_type = aarch64_tls_transition (abfd, info, r_type, h, r_symndx);
6302
6303 if (h != NULL)
6304 {
6305 /* Create the ifunc sections for static executables. If we
6306 never see an indirect function symbol nor we are building
6307 a static executable, those sections will be empty and
6308 won't appear in output. */
6309 switch (bfd_r_type)
6310 {
6311 default:
6312 break;
6313
6314 case BFD_RELOC_AARCH64_ADD_LO12:
6315 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6316 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6317 case BFD_RELOC_AARCH64_CALL26:
6318 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6319 case BFD_RELOC_AARCH64_JUMP26:
6320 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6321 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
6322 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6323 case BFD_RELOC_AARCH64_NN:
6324 if (htab->root.dynobj == NULL)
6325 htab->root.dynobj = abfd;
6326 if (!_bfd_elf_create_ifunc_sections (htab->root.dynobj, info))
6327 return FALSE;
6328 break;
6329 }
6330
6331 /* It is referenced by a non-shared object. */
6332 h->ref_regular = 1;
6333 h->root.non_ir_ref = 1;
6334 }
6335
6336 switch (bfd_r_type)
6337 {
6338 case BFD_RELOC_AARCH64_NN:
6339
6340 /* We don't need to handle relocs into sections not going into
6341 the "real" output. */
6342 if ((sec->flags & SEC_ALLOC) == 0)
6343 break;
6344
6345 if (h != NULL)
6346 {
6347 if (!info->shared)
6348 h->non_got_ref = 1;
6349
6350 h->plt.refcount += 1;
6351 h->pointer_equality_needed = 1;
6352 }
6353
6354 /* No need to do anything if we're not creating a shared
6355 object. */
6356 if (! info->shared)
6357 break;
6358
6359 {
6360 struct elf_dyn_relocs *p;
6361 struct elf_dyn_relocs **head;
6362
6363 /* We must copy these reloc types into the output file.
6364 Create a reloc section in dynobj and make room for
6365 this reloc. */
6366 if (sreloc == NULL)
6367 {
6368 if (htab->root.dynobj == NULL)
6369 htab->root.dynobj = abfd;
6370
6371 sreloc = _bfd_elf_make_dynamic_reloc_section
6372 (sec, htab->root.dynobj, LOG_FILE_ALIGN, abfd, /*rela? */ TRUE);
6373
6374 if (sreloc == NULL)
6375 return FALSE;
6376 }
6377
6378 /* If this is a global symbol, we count the number of
6379 relocations we need for this symbol. */
6380 if (h != NULL)
6381 {
6382 struct elf_aarch64_link_hash_entry *eh;
6383 eh = (struct elf_aarch64_link_hash_entry *) h;
6384 head = &eh->dyn_relocs;
6385 }
6386 else
6387 {
6388 /* Track dynamic relocs needed for local syms too.
6389 We really need local syms available to do this
6390 easily. Oh well. */
6391
6392 asection *s;
6393 void **vpp;
6394
6395 isym = bfd_sym_from_r_symndx (&htab->sym_cache,
6396 abfd, r_symndx);
6397 if (isym == NULL)
6398 return FALSE;
6399
6400 s = bfd_section_from_elf_index (abfd, isym->st_shndx);
6401 if (s == NULL)
6402 s = sec;
6403
6404 /* Beware of type punned pointers vs strict aliasing
6405 rules. */
6406 vpp = &(elf_section_data (s)->local_dynrel);
6407 head = (struct elf_dyn_relocs **) vpp;
6408 }
6409
6410 p = *head;
6411 if (p == NULL || p->sec != sec)
6412 {
6413 bfd_size_type amt = sizeof *p;
6414 p = ((struct elf_dyn_relocs *)
6415 bfd_zalloc (htab->root.dynobj, amt));
6416 if (p == NULL)
6417 return FALSE;
6418 p->next = *head;
6419 *head = p;
6420 p->sec = sec;
6421 }
6422
6423 p->count += 1;
6424
6425 }
6426 break;
6427
6428 /* RR: We probably want to keep a consistency check that
6429 there are no dangling GOT_PAGE relocs. */
6430 case BFD_RELOC_AARCH64_ADR_GOT_PAGE:
6431 case BFD_RELOC_AARCH64_GOT_LD_PREL19:
6432 case BFD_RELOC_AARCH64_LD32_GOT_LO12_NC:
6433 case BFD_RELOC_AARCH64_LD64_GOTPAGE_LO15:
6434 case BFD_RELOC_AARCH64_LD64_GOT_LO12_NC:
6435 case BFD_RELOC_AARCH64_TLSDESC_ADD_LO12_NC:
6436 case BFD_RELOC_AARCH64_TLSDESC_ADR_PAGE21:
6437 case BFD_RELOC_AARCH64_TLSDESC_ADR_PREL21:
6438 case BFD_RELOC_AARCH64_TLSDESC_LD32_LO12_NC:
6439 case BFD_RELOC_AARCH64_TLSDESC_LD64_LO12_NC:
6440 case BFD_RELOC_AARCH64_TLSDESC_LD_PREL19:
6441 case BFD_RELOC_AARCH64_TLSGD_ADD_LO12_NC:
6442 case BFD_RELOC_AARCH64_TLSGD_ADR_PAGE21:
6443 case BFD_RELOC_AARCH64_TLSGD_ADR_PREL21:
6444 case BFD_RELOC_AARCH64_TLSIE_ADR_GOTTPREL_PAGE21:
6445 case BFD_RELOC_AARCH64_TLSIE_LD32_GOTTPREL_LO12_NC:
6446 case BFD_RELOC_AARCH64_TLSIE_LD64_GOTTPREL_LO12_NC:
6447 case BFD_RELOC_AARCH64_TLSIE_LD_GOTTPREL_PREL19:
6448 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_HI12:
6449 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12:
6450 case BFD_RELOC_AARCH64_TLSLE_ADD_TPREL_LO12_NC:
6451 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0:
6452 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G0_NC:
6453 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1:
6454 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G1_NC:
6455 case BFD_RELOC_AARCH64_TLSLE_MOVW_TPREL_G2:
6456 {
6457 unsigned got_type;
6458 unsigned old_got_type;
6459
6460 got_type = aarch64_reloc_got_type (bfd_r_type);
6461
6462 if (h)
6463 {
6464 h->got.refcount += 1;
6465 old_got_type = elf_aarch64_hash_entry (h)->got_type;
6466 }
6467 else
6468 {
6469 struct elf_aarch64_local_symbol *locals;
6470
6471 if (!elfNN_aarch64_allocate_local_symbols
6472 (abfd, symtab_hdr->sh_info))
6473 return FALSE;
6474
6475 locals = elf_aarch64_locals (abfd);
6476 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6477 locals[r_symndx].got_refcount += 1;
6478 old_got_type = locals[r_symndx].got_type;
6479 }
6480
6481 /* If a variable is accessed with both general dynamic TLS
6482 methods, two slots may be created. */
6483 if (GOT_TLS_GD_ANY_P (old_got_type) && GOT_TLS_GD_ANY_P (got_type))
6484 got_type |= old_got_type;
6485
6486 /* We will already have issued an error message if there
6487 is a TLS/non-TLS mismatch, based on the symbol type.
6488 So just combine any TLS types needed. */
6489 if (old_got_type != GOT_UNKNOWN && old_got_type != GOT_NORMAL
6490 && got_type != GOT_NORMAL)
6491 got_type |= old_got_type;
6492
6493 /* If the symbol is accessed by both IE and GD methods, we
6494 are able to relax. Turn off the GD flag, without
6495 messing up with any other kind of TLS types that may be
6496 involved. */
6497 if ((got_type & GOT_TLS_IE) && GOT_TLS_GD_ANY_P (got_type))
6498 got_type &= ~ (GOT_TLSDESC_GD | GOT_TLS_GD);
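 /* Made-up example, as I understand the combination rules: if one
 translation unit reaches "var" with a :gottprel: (IE) sequence
 and another with a :tlsdesc: (GD) sequence, the combined got_type
 keeps GOT_TLS_IE and drops the GD bits, so only the IE GOT slot
 is allocated and the GD sequences are expected to be relaxed
 towards it. */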
6499
6500 if (old_got_type != got_type)
6501 {
6502 if (h != NULL)
6503 elf_aarch64_hash_entry (h)->got_type = got_type;
6504 else
6505 {
6506 struct elf_aarch64_local_symbol *locals;
6507 locals = elf_aarch64_locals (abfd);
6508 BFD_ASSERT (r_symndx < symtab_hdr->sh_info);
6509 locals[r_symndx].got_type = got_type;
6510 }
6511 }
6512
6513 if (htab->root.dynobj == NULL)
6514 htab->root.dynobj = abfd;
6515 if (! aarch64_elf_create_got_section (htab->root.dynobj, info))
6516 return FALSE;
6517 break;
6518 }
6519
6520 case BFD_RELOC_AARCH64_MOVW_G0_NC:
6521 case BFD_RELOC_AARCH64_MOVW_G1_NC:
6522 case BFD_RELOC_AARCH64_MOVW_G2_NC:
6523 case BFD_RELOC_AARCH64_MOVW_G3:
6524 if (info->shared)
6525 {
6526 int howto_index = bfd_r_type - BFD_RELOC_AARCH64_RELOC_START;
6527 (*_bfd_error_handler)
6528 (_("%B: relocation %s against `%s' can not be used when making "
6529 "a shared object; recompile with -fPIC"),
6530 abfd, elfNN_aarch64_howto_table[howto_index].name,
6531 (h) ? h->root.root.string : "a local symbol");
6532 bfd_set_error (bfd_error_bad_value);
6533 return FALSE;
6534 }
6535
6536 case BFD_RELOC_AARCH64_ADR_HI21_NC_PCREL:
6537 case BFD_RELOC_AARCH64_ADR_HI21_PCREL:
6538 case BFD_RELOC_AARCH64_ADR_LO21_PCREL:
6539 if (h != NULL && info->executable)
6540 {
6541 /* If this reloc is in a read-only section, we might
6542 need a copy reloc. We can't check reliably at this
6543 stage whether the section is read-only, as input
6544 sections have not yet been mapped to output sections.
6545 Tentatively set the flag for now, and correct in
6546 adjust_dynamic_symbol. */
6547 h->non_got_ref = 1;
6548 h->plt.refcount += 1;
6549 h->pointer_equality_needed = 1;
6550 }
6551 /* FIXME: RR: these still need to be handled in shared
6552 libraries; essentially we should bomb out, as they are
6553 non-PIC relocations in shared libraries. */
6554 break;
6555
6556 case BFD_RELOC_AARCH64_CALL26:
6557 case BFD_RELOC_AARCH64_JUMP26:
6558 /* If this is a local symbol then we resolve it
6559 directly without creating a PLT entry. */
6560 if (h == NULL)
6561 continue;
6562
6563 h->needs_plt = 1;
6564 if (h->plt.refcount <= 0)
6565 h->plt.refcount = 1;
6566 else
6567 h->plt.refcount += 1;
6568 break;
6569
6570 default:
6571 break;
6572 }
6573 }
6574
6575 return TRUE;
6576}
6577
6578/* Treat mapping symbols as special target symbols. */
6579
6580static bfd_boolean
6581elfNN_aarch64_is_target_special_symbol (bfd *abfd ATTRIBUTE_UNUSED,
6582 asymbol *sym)
6583{
6584 return bfd_is_aarch64_special_symbol_name (sym->name,
6585 BFD_AARCH64_SPECIAL_SYM_TYPE_ANY);
6586}
6587
6588/* This is a copy of elf_find_function () from elf.c except that
6589 AArch64 mapping symbols are ignored when looking for function names. */
6590
6591static bfd_boolean
6592aarch64_elf_find_function (bfd *abfd ATTRIBUTE_UNUSED,
6593 asymbol **symbols,
6594 asection *section,
6595 bfd_vma offset,
6596 const char **filename_ptr,
6597 const char **functionname_ptr)
6598{
6599 const char *filename = NULL;
6600 asymbol *func = NULL;
6601 bfd_vma low_func = 0;
6602 asymbol **p;
6603
6604 for (p = symbols; *p != NULL; p++)
6605 {
6606 elf_symbol_type *q;
6607
6608 q = (elf_symbol_type *) * p;
6609
6610 switch (ELF_ST_TYPE (q->internal_elf_sym.st_info))
6611 {
6612 default:
6613 break;
6614 case STT_FILE:
6615 filename = bfd_asymbol_name (&q->symbol);
6616 break;
6617 case STT_FUNC:
6618 case STT_NOTYPE:
6619 /* Skip mapping symbols. */
6620 if ((q->symbol.flags & BSF_LOCAL)
6621 && (bfd_is_aarch64_special_symbol_name
6622 (q->symbol.name, BFD_AARCH64_SPECIAL_SYM_TYPE_ANY)))
6623 continue;
6624 /* Fall through. */
6625 if (bfd_get_section (&q->symbol) == section
6626 && q->symbol.value >= low_func && q->symbol.value <= offset)
6627 {
6628 func = (asymbol *) q;
6629 low_func = q->symbol.value;
6630 }
6631 break;
6632 }
6633 }
6634
6635 if (func == NULL)
6636 return FALSE;
6637
6638 if (filename_ptr)
6639 *filename_ptr = filename;
6640 if (functionname_ptr)
6641 *functionname_ptr = bfd_asymbol_name (func);
6642
6643 return TRUE;
6644}
6645
6646
6647/* Find the nearest line to a particular section and offset, for error
6648 reporting. This code is a duplicate of the code in elf.c, except
6649 that it uses aarch64_elf_find_function. */
6650
6651static bfd_boolean
6652elfNN_aarch64_find_nearest_line (bfd *abfd,
6653 asymbol **symbols,
6654 asection *section,
6655 bfd_vma offset,
6656 const char **filename_ptr,
6657 const char **functionname_ptr,
6658 unsigned int *line_ptr,
6659 unsigned int *discriminator_ptr)
6660{
6661 bfd_boolean found = FALSE;
6662
6663 if (_bfd_dwarf2_find_nearest_line (abfd, symbols, NULL, section, offset,
6664 filename_ptr, functionname_ptr,
6665 line_ptr, discriminator_ptr,
6666 dwarf_debug_sections, 0,
6667 &elf_tdata (abfd)->dwarf2_find_line_info))
6668 {
6669 if (!*functionname_ptr)
6670 aarch64_elf_find_function (abfd, symbols, section, offset,
6671 *filename_ptr ? NULL : filename_ptr,
6672 functionname_ptr);
6673
6674 return TRUE;
6675 }
6676
6677 /* Skip _bfd_dwarf1_find_nearest_line since no known AArch64
6678 toolchain uses DWARF1. */
6679
6680 if (!_bfd_stab_section_find_nearest_line (abfd, symbols, section, offset,
6681 &found, filename_ptr,
6682 functionname_ptr, line_ptr,
6683 &elf_tdata (abfd)->line_info))
6684 return FALSE;
6685
6686 if (found && (*functionname_ptr || *line_ptr))
6687 return TRUE;
6688
6689 if (symbols == NULL)
6690 return FALSE;
6691
6692 if (!aarch64_elf_find_function (abfd, symbols, section, offset,
6693 filename_ptr, functionname_ptr))
6694 return FALSE;
6695
6696 *line_ptr = 0;
6697 return TRUE;
6698}
6699
6700static bfd_boolean
6701elfNN_aarch64_find_inliner_info (bfd *abfd,
6702 const char **filename_ptr,
6703 const char **functionname_ptr,
6704 unsigned int *line_ptr)
6705{
6706 bfd_boolean found;
6707 found = _bfd_dwarf2_find_inliner_info
6708 (abfd, filename_ptr,
6709 functionname_ptr, line_ptr, &elf_tdata (abfd)->dwarf2_find_line_info);
6710 return found;
6711}
6712
6713
6714static void
6715elfNN_aarch64_post_process_headers (bfd *abfd,
6716 struct bfd_link_info *link_info)
6717{
6718 Elf_Internal_Ehdr *i_ehdrp; /* ELF file header, internal form. */
6719
6720 i_ehdrp = elf_elfheader (abfd);
6721 i_ehdrp->e_ident[EI_ABIVERSION] = AARCH64_ELF_ABI_VERSION;
6722
6723 _bfd_elf_post_process_headers (abfd, link_info);
6724}
6725
6726static enum elf_reloc_type_class
6727elfNN_aarch64_reloc_type_class (const struct bfd_link_info *info ATTRIBUTE_UNUSED,
6728 const asection *rel_sec ATTRIBUTE_UNUSED,
6729 const Elf_Internal_Rela *rela)
6730{
6731 switch ((int) ELFNN_R_TYPE (rela->r_info))
6732 {
6733 case AARCH64_R (RELATIVE):
6734 return reloc_class_relative;
6735 case AARCH64_R (JUMP_SLOT):
6736 return reloc_class_plt;
6737 case AARCH64_R (COPY):
6738 return reloc_class_copy;
6739 default:
6740 return reloc_class_normal;
6741 }
6742}
6743
6744/* Handle an AArch64 specific section when reading an object file. This is
6745 called when bfd_section_from_shdr finds a section with an unknown
6746 type. */
6747
6748static bfd_boolean
6749elfNN_aarch64_section_from_shdr (bfd *abfd,
6750 Elf_Internal_Shdr *hdr,
6751 const char *name, int shindex)
6752{
6753 /* There ought to be a place to keep ELF backend specific flags, but
6754 at the moment there isn't one. We just keep track of the
6755 sections by their name, instead. Fortunately, the ABI gives
6756 names for all the AArch64 specific sections, so we will probably get
6757 away with this. */
6758 switch (hdr->sh_type)
6759 {
6760 case SHT_AARCH64_ATTRIBUTES:
6761 break;
6762
6763 default:
6764 return FALSE;
6765 }
6766
6767 if (!_bfd_elf_make_section_from_shdr (abfd, hdr, name, shindex))
6768 return FALSE;
6769
6770 return TRUE;
6771}
6772
6773/* A structure used to record a list of sections, independently
6774 of the next and prev fields in the asection structure. */
6775typedef struct section_list
6776{
6777 asection *sec;
6778 struct section_list *next;
6779 struct section_list *prev;
6780}
6781section_list;
6782
6783/* Unfortunately we need to keep a list of sections for which
6784 an _aarch64_elf_section_data structure has been allocated. This
6785 is because it is possible for functions like elfNN_aarch64_write_section
6786 to be called on a section which has had an elf_data_structure
6787 allocated for it (and so the used_by_bfd field is valid) but
6788 for which the AArch64 extended version of this structure - the
6789 _aarch64_elf_section_data structure - has not been allocated. */
6790static section_list *sections_with_aarch64_elf_section_data = NULL;
6791
6792static void
6793record_section_with_aarch64_elf_section_data (asection *sec)
6794{
6795 struct section_list *entry;
6796
6797 entry = bfd_malloc (sizeof (*entry));
6798 if (entry == NULL)
6799 return;
6800 entry->sec = sec;
6801 entry->next = sections_with_aarch64_elf_section_data;
6802 entry->prev = NULL;
6803 if (entry->next != NULL)
6804 entry->next->prev = entry;
6805 sections_with_aarch64_elf_section_data = entry;
6806}
6807
6808static struct section_list *
6809find_aarch64_elf_section_entry (asection *sec)
6810{
6811 struct section_list *entry;
6812 static struct section_list *last_entry = NULL;
6813
6814 /* This is a short cut for the typical case where the sections are added
6815 to the sections_with_aarch64_elf_section_data list in forward order and
6816 then looked up here in backwards order. This makes a real difference
6817 to the ld-srec/sec64k.exp linker test. */
6818 entry = sections_with_aarch64_elf_section_data;
6819 if (last_entry != NULL)
6820 {
6821 if (last_entry->sec == sec)
6822 entry = last_entry;
6823 else if (last_entry->next != NULL && last_entry->next->sec == sec)
6824 entry = last_entry->next;
6825 }
6826
6827 for (; entry; entry = entry->next)
6828 if (entry->sec == sec)
6829 break;
6830
6831 if (entry)
6832 /* Record the entry prior to this one - it is the entry we are
6833 most likely to want to locate next time. Also this way if we
6834 have been called from
6835 unrecord_section_with_aarch64_elf_section_data () we will not
6836 be caching a pointer that is about to be freed. */
6837 last_entry = entry->prev;
6838
6839 return entry;
6840}
6841
6842static void
6843unrecord_section_with_aarch64_elf_section_data (asection *sec)
6844{
6845 struct section_list *entry;
6846
6847 entry = find_aarch64_elf_section_entry (sec);
6848
6849 if (entry)
6850 {
6851 if (entry->prev != NULL)
6852 entry->prev->next = entry->next;
6853 if (entry->next != NULL)
6854 entry->next->prev = entry->prev;
6855 if (entry == sections_with_aarch64_elf_section_data)
6856 sections_with_aarch64_elf_section_data = entry->next;
6857 free (entry);
6858 }
6859}
6860
6861
6862typedef struct
6863{
6864 void *finfo;
6865 struct bfd_link_info *info;
6866 asection *sec;
6867 int sec_shndx;
6868 int (*func) (void *, const char *, Elf_Internal_Sym *,
6869 asection *, struct elf_link_hash_entry *);
6870} output_arch_syminfo;
6871
6872enum map_symbol_type
6873{
6874 AARCH64_MAP_INSN,
6875 AARCH64_MAP_DATA
6876};
6877
6878
6879/* Output a single mapping symbol. */
6880
6881static bfd_boolean
6882elfNN_aarch64_output_map_sym (output_arch_syminfo *osi,
6883 enum map_symbol_type type, bfd_vma offset)
6884{
6885 static const char *names[2] = { "$x", "$d" };
6886 Elf_Internal_Sym sym;
6887
6888 sym.st_value = (osi->sec->output_section->vma
6889 + osi->sec->output_offset + offset);
6890 sym.st_size = 0;
6891 sym.st_other = 0;
6892 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_NOTYPE);
6893 sym.st_shndx = osi->sec_shndx;
6894 return osi->func (osi->finfo, names[type], &sym, osi->sec, NULL) == 1;
6895}
6896
6897
6898
6899/* Output mapping symbols for PLT entries associated with H. */
6900
6901static bfd_boolean
6902elfNN_aarch64_output_plt_map (struct elf_link_hash_entry *h, void *inf)
6903{
6904 output_arch_syminfo *osi = (output_arch_syminfo *) inf;
6905 bfd_vma addr;
6906
6907 if (h->root.type == bfd_link_hash_indirect)
6908 return TRUE;
6909
6910 if (h->root.type == bfd_link_hash_warning)
6911 /* When warning symbols are created, they **replace** the "real"
6912 entry in the hash table, thus we never get to see the real
6913 symbol in a hash traversal. So look at it now. */
6914 h = (struct elf_link_hash_entry *) h->root.u.i.link;
6915
6916 if (h->plt.offset == (bfd_vma) - 1)
6917 return TRUE;
6918
6919 addr = h->plt.offset;
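 /* The check below relies on the PLT header (accounted for via
 htab->plt_header_size elsewhere in this file) being 32 bytes, so
 32 is the offset of the first real PLT entry; a single $x mapping
 symbol there covers the whole, code-only, PLT, which is presumably
 why nothing is emitted for later entries. */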
6920 if (addr == 32)
6921 {
6922 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6923 return FALSE;
6924 }
6925 return TRUE;
6926}
6927
6928
6929/* Output a single local symbol for a generated stub. */
6930
6931static bfd_boolean
6932elfNN_aarch64_output_stub_sym (output_arch_syminfo *osi, const char *name,
6933 bfd_vma offset, bfd_vma size)
6934{
6935 Elf_Internal_Sym sym;
6936
6937 sym.st_value = (osi->sec->output_section->vma
6938 + osi->sec->output_offset + offset);
6939 sym.st_size = size;
6940 sym.st_other = 0;
6941 sym.st_info = ELF_ST_INFO (STB_LOCAL, STT_FUNC);
6942 sym.st_shndx = osi->sec_shndx;
6943 return osi->func (osi->finfo, name, &sym, osi->sec, NULL) == 1;
6944}
6945
6946static bfd_boolean
6947aarch64_map_one_stub (struct bfd_hash_entry *gen_entry, void *in_arg)
6948{
6949 struct elf_aarch64_stub_hash_entry *stub_entry;
6950 asection *stub_sec;
6951 bfd_vma addr;
6952 char *stub_name;
6953 output_arch_syminfo *osi;
6954
6955 /* Massage our args to the form they really have. */
6956 stub_entry = (struct elf_aarch64_stub_hash_entry *) gen_entry;
6957 osi = (output_arch_syminfo *) in_arg;
6958
6959 stub_sec = stub_entry->stub_sec;
6960
6961 /* Ensure this stub is attached to the current section being
6962 processed. */
6963 if (stub_sec != osi->sec)
6964 return TRUE;
6965
6966 addr = (bfd_vma) stub_entry->stub_offset;
6967
6968 stub_name = stub_entry->output_name;
6969
6970 switch (stub_entry->stub_type)
6971 {
6972 case aarch64_stub_adrp_branch:
6973 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6974 sizeof (aarch64_adrp_branch_stub)))
6975 return FALSE;
6976 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6977 return FALSE;
6978 break;
6979 case aarch64_stub_long_branch:
6980 if (!elfNN_aarch64_output_stub_sym
6981 (osi, stub_name, addr, sizeof (aarch64_long_branch_stub)))
6982 return FALSE;
6983 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6984 return FALSE;
6985 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_DATA, addr + 16))
6986 return FALSE;
6987 break;
6988 case aarch64_stub_erratum_835769_veneer:
6989 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6990 sizeof (aarch64_erratum_835769_stub)))
6991 return FALSE;
6992 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
6993 return FALSE;
6994 break;
6995 case aarch64_stub_erratum_843419_veneer:
6996 if (!elfNN_aarch64_output_stub_sym (osi, stub_name, addr,
6997 sizeof (aarch64_erratum_843419_stub)))
6998 return FALSE;
6999 if (!elfNN_aarch64_output_map_sym (osi, AARCH64_MAP_INSN, addr))
7000 return FALSE;
7001 break;
7002
7003 default:
7004 abort ();
7005 }
7006
7007 return TRUE;
7008}
7009
7010/* Output mapping symbols for linker generated sections. */
7011
7012static bfd_boolean
7013elfNN_aarch64_output_arch_local_syms (bfd *output_bfd,
7014 struct bfd_link_info *info,
7015 void *finfo,
7016 int (*func) (void *, const char *,
7017 Elf_Internal_Sym *,
7018 asection *,
7019 struct elf_link_hash_entry
7020 *))
7021{
7022 output_arch_syminfo osi;
7023 struct elf_aarch64_link_hash_table *htab;
7024
7025 htab = elf_aarch64_hash_table (info);
7026
7027 osi.finfo = finfo;
7028 osi.info = info;
7029 osi.func = func;
7030
7031 /* Long call stubs. */
7032 if (htab->stub_bfd && htab->stub_bfd->sections)
7033 {
7034 asection *stub_sec;
7035
7036 for (stub_sec = htab->stub_bfd->sections;
7037 stub_sec != NULL; stub_sec = stub_sec->next)
7038 {
7039 /* Ignore non-stub sections. */
7040 if (!strstr (stub_sec->name, STUB_SUFFIX))
7041 continue;
7042
7043 osi.sec = stub_sec;
7044
7045 osi.sec_shndx = _bfd_elf_section_from_bfd_section
7046 (output_bfd, osi.sec->output_section);
7047
7048 /* The first instruction in a stub is always a branch. */
7049 if (!elfNN_aarch64_output_map_sym (&osi, AARCH64_MAP_INSN, 0))
7050 return FALSE;
7051
7052 bfd_hash_traverse (&htab->stub_hash_table, aarch64_map_one_stub,
7053 &osi);
7054 }
7055 }
7056
7057 /* Finally, output mapping symbols for the PLT. */
7058 if (!htab->root.splt || htab->root.splt->size == 0)
7059 return TRUE;
7060
7061 /* For now we live with only minimal mapping symbols for the PLT. */
7062 osi.sec_shndx = _bfd_elf_section_from_bfd_section
7063 (output_bfd, htab->root.splt->output_section);
7064 osi.sec = htab->root.splt;
7065
7066 elf_link_hash_traverse (&htab->root, elfNN_aarch64_output_plt_map,
7067 (void *) &osi);
7068
7069 return TRUE;
7070
7071}
7072
7073/* Allocate target specific section data. */
7074
7075static bfd_boolean
7076elfNN_aarch64_new_section_hook (bfd *abfd, asection *sec)
7077{
7078 if (!sec->used_by_bfd)
7079 {
7080 _aarch64_elf_section_data *sdata;
7081 bfd_size_type amt = sizeof (*sdata);
7082
7083 sdata = bfd_zalloc (abfd, amt);
7084 if (sdata == NULL)
7085 return FALSE;
7086 sec->used_by_bfd = sdata;
7087 }
7088
7089 record_section_with_aarch64_elf_section_data (sec);
7090
7091 return _bfd_elf_new_section_hook (abfd, sec);
7092}
7093
7094
7095static void
7096unrecord_section_via_map_over_sections (bfd *abfd ATTRIBUTE_UNUSED,
7097 asection *sec,
7098 void *ignore ATTRIBUTE_UNUSED)
7099{
7100 unrecord_section_with_aarch64_elf_section_data (sec);
7101}
7102
7103static bfd_boolean
7104elfNN_aarch64_close_and_cleanup (bfd *abfd)
7105{
7106 if (abfd->sections)
7107 bfd_map_over_sections (abfd,
7108 unrecord_section_via_map_over_sections, NULL);
7109
7110 return _bfd_elf_close_and_cleanup (abfd);
7111}
7112
7113static bfd_boolean
7114elfNN_aarch64_bfd_free_cached_info (bfd *abfd)
7115{
7116 if (abfd->sections)
7117 bfd_map_over_sections (abfd,
7118 unrecord_section_via_map_over_sections, NULL);
7119
7120 return _bfd_free_cached_info (abfd);
7121}
7122
7123/* Create dynamic sections. This is different from the ARM backend in that
7124 the got, plt, gotplt and their relocation sections are all created in the
7125 standard part of the bfd elf backend. */
7126
7127static bfd_boolean
7128elfNN_aarch64_create_dynamic_sections (bfd *dynobj,
7129 struct bfd_link_info *info)
7130{
7131 struct elf_aarch64_link_hash_table *htab;
7132
7133 /* We need to create .got section. */
7134 if (!aarch64_elf_create_got_section (dynobj, info))
7135 return FALSE;
7136
7137 if (!_bfd_elf_create_dynamic_sections (dynobj, info))
7138 return FALSE;
7139
7140 htab = elf_aarch64_hash_table (info);
7141 htab->sdynbss = bfd_get_linker_section (dynobj, ".dynbss");
7142 if (!info->shared)
7143 htab->srelbss = bfd_get_linker_section (dynobj, ".rela.bss");
7144
7145 if (!htab->sdynbss || (!info->shared && !htab->srelbss))
7146 abort ();
7147
7148 return TRUE;
7149}
7150
7151
7152/* Allocate space in .plt, .got and associated reloc sections for
7153 dynamic relocs. */
7154
7155static bfd_boolean
7156elfNN_aarch64_allocate_dynrelocs (struct elf_link_hash_entry *h, void *inf)
7157{
7158 struct bfd_link_info *info;
7159 struct elf_aarch64_link_hash_table *htab;
7160 struct elf_aarch64_link_hash_entry *eh;
7161 struct elf_dyn_relocs *p;
7162
7163 /* An example of a bfd_link_hash_indirect symbol is a versioned
7164 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7165 -> __gxx_personality_v0(bfd_link_hash_defined)
7166
7167 There is no need to process bfd_link_hash_indirect symbols here
7168 because we will also be presented with the concrete instance of
7169 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7170 called to copy all relevant data from the generic to the concrete
7171 symbol instance.
7172 */
7173 if (h->root.type == bfd_link_hash_indirect)
7174 return TRUE;
7175
7176 if (h->root.type == bfd_link_hash_warning)
7177 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7178
7179 info = (struct bfd_link_info *) inf;
7180 htab = elf_aarch64_hash_table (info);
7181
7182 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle it
7183 here if it is defined and referenced in a non-shared object. */
7184 if (h->type == STT_GNU_IFUNC
7185 && h->def_regular)
7186 return TRUE;
7187 else if (htab->root.dynamic_sections_created && h->plt.refcount > 0)
7188 {
7189 /* Make sure this symbol is output as a dynamic symbol.
7190 Undefined weak syms won't yet be marked as dynamic. */
7191 if (h->dynindx == -1 && !h->forced_local)
7192 {
7193 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7194 return FALSE;
7195 }
7196
7197 if (info->shared || WILL_CALL_FINISH_DYNAMIC_SYMBOL (1, 0, h))
7198 {
7199 asection *s = htab->root.splt;
7200
7201 /* If this is the first .plt entry, make room for the special
7202 first entry. */
7203 if (s->size == 0)
7204 s->size += htab->plt_header_size;
7205
7206 h->plt.offset = s->size;
7207
7208 /* If this symbol is not defined in a regular file, and we are
7209 not generating a shared library, then set the symbol to this
7210 location in the .plt. This is required to make function
7211 pointers compare as equal between the normal executable and
7212 the shared library. */
7213 if (!info->shared && !h->def_regular)
7214 {
7215 h->root.u.def.section = s;
7216 h->root.u.def.value = h->plt.offset;
7217 }
7218
7219 /* Make room for this entry. For now we only create the
7220 small model PLT entries. We later need to find a way
7221 of relaxing into these from the large model PLT entries. */
7222 s->size += PLT_SMALL_ENTRY_SIZE;
7223
7224 /* We also need to make an entry in the .got.plt section, which
7225 will be placed in the .got section by the linker script. */
7226 htab->root.sgotplt->size += GOT_ENTRY_SIZE;
7227
7228 /* We also need to make an entry in the .rela.plt section. */
7229 htab->root.srelplt->size += RELOC_SIZE (htab);
7230
7231 /* We need to ensure that all GOT entries that serve the PLT
7232 are consecutive with the special GOT slots [0] [1] and
7233 [2]. Any additional relocations, such as
7234 R_AARCH64_TLSDESC, must be placed after the PLT related
7235 entries. We abuse the reloc_count such that during
7236 sizing we adjust reloc_count to indicate the number of
7237 PLT related reserved entries. In subsequent phases when
7238 filling in the contents of the reloc entries, PLT related
7239 entries are placed by computing their PLT index (0
7240 .. reloc_count), while other non-PLT relocs are placed
7241 at the slot indicated by reloc_count, and reloc_count is
7242 updated. */
7243
7244 htab->root.srelplt->reloc_count++;
7245 }
7246 else
7247 {
7248 h->plt.offset = (bfd_vma) - 1;
7249 h->needs_plt = 0;
7250 }
7251 }
7252 else
7253 {
7254 h->plt.offset = (bfd_vma) - 1;
7255 h->needs_plt = 0;
7256 }
7257
7258 eh = (struct elf_aarch64_link_hash_entry *) h;
7259 eh->tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7260
7261 if (h->got.refcount > 0)
7262 {
7263 bfd_boolean dyn;
7264 unsigned got_type = elf_aarch64_hash_entry (h)->got_type;
7265
7266 h->got.offset = (bfd_vma) - 1;
7267
7268 dyn = htab->root.dynamic_sections_created;
7269
7270 /* Make sure this symbol is output as a dynamic symbol.
7271 Undefined weak syms won't yet be marked as dynamic. */
7272 if (dyn && h->dynindx == -1 && !h->forced_local)
7273 {
7274 if (!bfd_elf_link_record_dynamic_symbol (info, h))
7275 return FALSE;
7276 }
7277
7278 if (got_type == GOT_UNKNOWN)
7279 {
7280 }
7281 else if (got_type == GOT_NORMAL)
7282 {
7283 h->got.offset = htab->root.sgot->size;
7284 htab->root.sgot->size += GOT_ENTRY_SIZE;
7285 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7286 || h->root.type != bfd_link_hash_undefweak)
7287 && (info->shared
7288 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7289 {
7290 htab->root.srelgot->size += RELOC_SIZE (htab);
7291 }
7292 }
7293 else
7294 {
7295 int indx;
7296 if (got_type & GOT_TLSDESC_GD)
7297 {
7298 eh->tlsdesc_got_jump_table_offset =
7299 (htab->root.sgotplt->size
7300 - aarch64_compute_jump_table_size (htab));
7301 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
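/* Note: (bfd_vma) -2, as opposed to -1 (meaning no GOT entry at
   all), marks a symbol whose only GOT slots are the TLSDESC pair
   recorded in tlsdesc_got_jump_table_offset above; if GOT_TLS_GD
   or GOT_TLS_IE is also set, got.offset is overwritten with a
   real .got offset below.  */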
7302 h->got.offset = (bfd_vma) - 2;
7303 }
7304
7305 if (got_type & GOT_TLS_GD)
7306 {
7307 h->got.offset = htab->root.sgot->size;
7308 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7309 }
7310
7311 if (got_type & GOT_TLS_IE)
7312 {
7313 h->got.offset = htab->root.sgot->size;
7314 htab->root.sgot->size += GOT_ENTRY_SIZE;
7315 }
7316
7317 indx = h && h->dynindx != -1 ? h->dynindx : 0;
7318 if ((ELF_ST_VISIBILITY (h->other) == STV_DEFAULT
7319 || h->root.type != bfd_link_hash_undefweak)
7320 && (info->shared
7321 || indx != 0
7322 || WILL_CALL_FINISH_DYNAMIC_SYMBOL (dyn, 0, h)))
7323 {
7324 if (got_type & GOT_TLSDESC_GD)
7325 {
7326 htab->root.srelplt->size += RELOC_SIZE (htab);
7327 /* Note reloc_count not incremented here! We have
7328 already adjusted reloc_count for this relocation
7329 type. */
7330
7331 /* TLSDESC PLT is now needed, but not yet determined. */
7332 htab->tlsdesc_plt = (bfd_vma) - 1;
7333 }
7334
7335 if (got_type & GOT_TLS_GD)
7336 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7337
7338 if (got_type & GOT_TLS_IE)
7339 htab->root.srelgot->size += RELOC_SIZE (htab);
7340 }
7341 }
7342 }
7343 else
7344 {
7345 h->got.offset = (bfd_vma) - 1;
7346 }
7347
7348 if (eh->dyn_relocs == NULL)
7349 return TRUE;
7350
7351 /* In the shared -Bsymbolic case, discard space allocated for
7352 dynamic pc-relative relocs against symbols which turn out to be
7353 defined in regular objects. For the normal shared case, discard
7354 space for pc-relative relocs that have become local due to symbol
7355 visibility changes. */
7356
7357 if (info->shared)
7358 {
7359 /* Relocs that use pc_count are those that appear on a call
7360 insn, or certain REL relocs that can be generated via assembly.
7361 We want calls to protected symbols to resolve directly to the
7362 function rather than going via the plt. If people want
7363 function pointer comparisons to work as expected then they
7364 should avoid writing weird assembly. */
7365 if (SYMBOL_CALLS_LOCAL (info, h))
7366 {
7367 struct elf_dyn_relocs **pp;
7368
7369 for (pp = &eh->dyn_relocs; (p = *pp) != NULL;)
7370 {
7371 p->count -= p->pc_count;
7372 p->pc_count = 0;
7373 if (p->count == 0)
7374 *pp = p->next;
7375 else
7376 pp = &p->next;
7377 }
7378 }
7379
7380 /* Also discard relocs on undefined weak syms with non-default
7381 visibility. */
7382 if (eh->dyn_relocs != NULL && h->root.type == bfd_link_hash_undefweak)
7383 {
7384 if (ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7385 eh->dyn_relocs = NULL;
7386
7387 /* Make sure undefined weak symbols are output as a dynamic
7388 symbol in PIEs. */
7389 else if (h->dynindx == -1
7390 && !h->forced_local
7391 && !bfd_elf_link_record_dynamic_symbol (info, h))
7392 return FALSE;
7393 }
7394
7395 }
7396 else if (ELIMINATE_COPY_RELOCS)
7397 {
7398 /* For the non-shared case, discard space for relocs against
7399 symbols which turn out to need copy relocs or are not
7400 dynamic. */
7401
7402 if (!h->non_got_ref
7403 && ((h->def_dynamic
7404 && !h->def_regular)
7405 || (htab->root.dynamic_sections_created
7406 && (h->root.type == bfd_link_hash_undefweak
7407 || h->root.type == bfd_link_hash_undefined))))
7408 {
7409 /* Make sure this symbol is output as a dynamic symbol.
7410 Undefined weak syms won't yet be marked as dynamic. */
7411 if (h->dynindx == -1
7412 && !h->forced_local
7413 && !bfd_elf_link_record_dynamic_symbol (info, h))
7414 return FALSE;
7415
7416 /* If that succeeded, we know we'll be keeping all the
7417 relocs. */
7418 if (h->dynindx != -1)
7419 goto keep;
7420 }
7421
7422 eh->dyn_relocs = NULL;
7423
7424 keep:;
7425 }
7426
7427 /* Finally, allocate space. */
7428 for (p = eh->dyn_relocs; p != NULL; p = p->next)
7429 {
7430 asection *sreloc;
7431
7432 sreloc = elf_section_data (p->sec)->sreloc;
7433
7434 BFD_ASSERT (sreloc != NULL);
7435
7436 sreloc->size += p->count * RELOC_SIZE (htab);
7437 }
7438
7439 return TRUE;
7440}
7441
7442/* Allocate space in .plt, .got and associated reloc sections for
7443 ifunc dynamic relocs. */
7444
7445static bfd_boolean
7446elfNN_aarch64_allocate_ifunc_dynrelocs (struct elf_link_hash_entry *h,
7447 void *inf)
7448{
7449 struct bfd_link_info *info;
7450 struct elf_aarch64_link_hash_table *htab;
7451 struct elf_aarch64_link_hash_entry *eh;
7452
7453 /* An example of a bfd_link_hash_indirect symbol is a versioned
7454 symbol. For example: __gxx_personality_v0(bfd_link_hash_indirect)
7455 -> __gxx_personality_v0(bfd_link_hash_defined)
7456
7457 There is no need to process bfd_link_hash_indirect symbols here
7458 because we will also be presented with the concrete instance of
7459 the symbol and elfNN_aarch64_copy_indirect_symbol () will have been
7460 called to copy all relevant data from the generic to the concrete
7461 symbol instance.
7462 */
7463 if (h->root.type == bfd_link_hash_indirect)
7464 return TRUE;
7465
7466 if (h->root.type == bfd_link_hash_warning)
7467 h = (struct elf_link_hash_entry *) h->root.u.i.link;
7468
7469 info = (struct bfd_link_info *) inf;
7470 htab = elf_aarch64_hash_table (info);
7471
7472 eh = (struct elf_aarch64_link_hash_entry *) h;
7473
7474 /* Since an STT_GNU_IFUNC symbol must go through the PLT, we handle
7475 it here if it is defined and referenced in a non-shared object. */
7476 if (h->type == STT_GNU_IFUNC
7477 && h->def_regular)
7478 return _bfd_elf_allocate_ifunc_dyn_relocs (info, h,
7479 &eh->dyn_relocs,
7480 htab->plt_entry_size,
7481 htab->plt_header_size,
7482 GOT_ENTRY_SIZE);
7483 return TRUE;
7484}
7485
7486/* Allocate space in .plt, .got and associated reloc sections for
7487 local dynamic relocs. */
7488
7489static bfd_boolean
7490elfNN_aarch64_allocate_local_dynrelocs (void **slot, void *inf)
7491{
7492 struct elf_link_hash_entry *h
7493 = (struct elf_link_hash_entry *) *slot;
7494
7495 if (h->type != STT_GNU_IFUNC
7496 || !h->def_regular
7497 || !h->ref_regular
7498 || !h->forced_local
7499 || h->root.type != bfd_link_hash_defined)
7500 abort ();
7501
7502 return elfNN_aarch64_allocate_dynrelocs (h, inf);
7503}
7504
7505/* Allocate space in .plt, .got and associated reloc sections for
7506 local ifunc dynamic relocs. */
7507
7508static bfd_boolean
7509elfNN_aarch64_allocate_local_ifunc_dynrelocs (void **slot, void *inf)
7510{
7511 struct elf_link_hash_entry *h
7512 = (struct elf_link_hash_entry *) *slot;
7513
7514 if (h->type != STT_GNU_IFUNC
7515 || !h->def_regular
7516 || !h->ref_regular
7517 || !h->forced_local
7518 || h->root.type != bfd_link_hash_defined)
7519 abort ();
7520
7521 return elfNN_aarch64_allocate_ifunc_dynrelocs (h, inf);
7522}
7523
7524/* This is the most important function of all. Innocuously named
7525 though! */
7526static bfd_boolean
7527elfNN_aarch64_size_dynamic_sections (bfd *output_bfd ATTRIBUTE_UNUSED,
7528 struct bfd_link_info *info)
7529{
7530 struct elf_aarch64_link_hash_table *htab;
7531 bfd *dynobj;
7532 asection *s;
7533 bfd_boolean relocs;
7534 bfd *ibfd;
7535
7536 htab = elf_aarch64_hash_table ((info));
7537 dynobj = htab->root.dynobj;
7538
7539 BFD_ASSERT (dynobj != NULL);
7540
7541 if (htab->root.dynamic_sections_created)
7542 {
7543 if (info->executable)
7544 {
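/* The .interp section holds the name of the dynamic linker, taken
   from ELF_DYNAMIC_INTERPRETER; it is only emitted for executables.  */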
7545 s = bfd_get_linker_section (dynobj, ".interp");
7546 if (s == NULL)
7547 abort ();
7548 s->size = sizeof ELF_DYNAMIC_INTERPRETER;
7549 s->contents = (unsigned char *) ELF_DYNAMIC_INTERPRETER;
7550 }
7551 }
7552
7553 /* Set up .got offsets for local syms, and space for local dynamic
7554 relocs. */
7555 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7556 {
7557 struct elf_aarch64_local_symbol *locals = NULL;
7558 Elf_Internal_Shdr *symtab_hdr;
7559 asection *srel;
7560 unsigned int i;
7561
7562 if (!is_aarch64_elf (ibfd))
7563 continue;
7564
7565 for (s = ibfd->sections; s != NULL; s = s->next)
7566 {
7567 struct elf_dyn_relocs *p;
7568
7569 for (p = (struct elf_dyn_relocs *)
7570 (elf_section_data (s)->local_dynrel); p != NULL; p = p->next)
7571 {
7572 if (!bfd_is_abs_section (p->sec)
7573 && bfd_is_abs_section (p->sec->output_section))
7574 {
7575 /* Input section has been discarded, either because
7576 it is a copy of a linkonce section or due to
7577 linker script /DISCARD/, so we'll be discarding
7578 the relocs too. */
7579 }
7580 else if (p->count != 0)
7581 {
7582 srel = elf_section_data (p->sec)->sreloc;
7583 srel->size += p->count * RELOC_SIZE (htab);
7584 if ((p->sec->output_section->flags & SEC_READONLY) != 0)
7585 info->flags |= DF_TEXTREL;
7586 }
7587 }
7588 }
7589
7590 locals = elf_aarch64_locals (ibfd);
7591 if (!locals)
7592 continue;
7593
7594 symtab_hdr = &elf_symtab_hdr (ibfd);
7595 srel = htab->root.srelgot;
7596 for (i = 0; i < symtab_hdr->sh_info; i++)
7597 {
7598 locals[i].got_offset = (bfd_vma) - 1;
7599 locals[i].tlsdesc_got_jump_table_offset = (bfd_vma) - 1;
7600 if (locals[i].got_refcount > 0)
7601 {
7602 unsigned got_type = locals[i].got_type;
7603 if (got_type & GOT_TLSDESC_GD)
7604 {
7605 locals[i].tlsdesc_got_jump_table_offset =
7606 (htab->root.sgotplt->size
7607 - aarch64_compute_jump_table_size (htab));
7608 htab->root.sgotplt->size += GOT_ENTRY_SIZE * 2;
7609 locals[i].got_offset = (bfd_vma) - 2;
7610 }
7611
7612 if (got_type & GOT_TLS_GD)
7613 {
7614 locals[i].got_offset = htab->root.sgot->size;
7615 htab->root.sgot->size += GOT_ENTRY_SIZE * 2;
7616 }
7617
7618 if (got_type & GOT_TLS_IE
7619 || got_type & GOT_NORMAL)
7620 {
7621 locals[i].got_offset = htab->root.sgot->size;
7622 htab->root.sgot->size += GOT_ENTRY_SIZE;
7623 }
7624
7625 if (got_type == GOT_UNKNOWN)
7626 {
7627 }
7628
7629 if (info->shared)
7630 {
7631 if (got_type & GOT_TLSDESC_GD)
7632 {
7633 htab->root.srelplt->size += RELOC_SIZE (htab);
7634 /* Note RELOC_COUNT not incremented here! */
7635 htab->tlsdesc_plt = (bfd_vma) - 1;
7636 }
7637
7638 if (got_type & GOT_TLS_GD)
7639 htab->root.srelgot->size += RELOC_SIZE (htab) * 2;
7640
7641 if (got_type & GOT_TLS_IE
7642 || got_type & GOT_NORMAL)
7643 htab->root.srelgot->size += RELOC_SIZE (htab);
7644 }
7645 }
7646 else
7647 {
7648 locals[i].got_refcount = (bfd_vma) - 1;
7649 }
7650 }
7651 }
7652
7653
7654 /* Allocate global sym .plt and .got entries, and space for global
7655 sym dynamic relocs. */
7656 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_dynrelocs,
7657 info);
7658
7659 /* Allocate global ifunc sym .plt and .got entries, and space for global
7660 ifunc sym dynamic relocs. */
7661 elf_link_hash_traverse (&htab->root, elfNN_aarch64_allocate_ifunc_dynrelocs,
7662 info);
7663
7664 /* Allocate .plt and .got entries, and space for local symbols. */
7665 htab_traverse (htab->loc_hash_table,
7666 elfNN_aarch64_allocate_local_dynrelocs,
7667 info);
7668
7669 /* Allocate .plt and .got entries, and space for local ifunc symbols. */
7670 htab_traverse (htab->loc_hash_table,
7671 elfNN_aarch64_allocate_local_ifunc_dynrelocs,
7672 info);
7673
7674 /* For every jump slot reserved in the sgotplt, reloc_count is
7675 incremented. However, when we reserve space for TLS descriptors,
7676 it's not incremented, so multiplying the reloc count by the jump
7677 slot size gives only the space taken by PLT jump slots; whatever
7678 remains in .got.plt beyond that (and the header) is TLS descriptors. */
7679
7680 if (htab->root.srelplt)
7681 htab->sgotplt_jump_table_size = aarch64_compute_jump_table_size (htab);
7682
7683 if (htab->tlsdesc_plt)
7684 {
7685 if (htab->root.splt->size == 0)
7686 htab->root.splt->size += PLT_ENTRY_SIZE;
7687
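/* Record the offset of the TLSDESC resolver stub, which is placed
   after all the regular PLT entries sized so far, and reserve room
   for it.  */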
7688 htab->tlsdesc_plt = htab->root.splt->size;
7689 htab->root.splt->size += PLT_TLSDESC_ENTRY_SIZE;
7690
7691 /* If we're not using lazy TLS relocations, don't generate the
7692 GOT entry required. */
7693 if (!(info->flags & DF_BIND_NOW))
7694 {
7695 htab->dt_tlsdesc_got = htab->root.sgot->size;
7696 htab->root.sgot->size += GOT_ENTRY_SIZE;
7697 }
7698 }
7699
7700 /* Initialize mapping symbol information, used later to distinguish
7701 between code and data while scanning for errata. */
7702 if (htab->fix_erratum_835769 || htab->fix_erratum_843419)
7703 for (ibfd = info->input_bfds; ibfd != NULL; ibfd = ibfd->link.next)
7704 {
7705 if (!is_aarch64_elf (ibfd))
7706 continue;
7707 bfd_elfNN_aarch64_init_maps (ibfd);
7708 }
7709
7710 /* We have now determined the sizes of the various dynamic sections.
7711 Allocate memory for them. */
7712 relocs = FALSE;
7713 for (s = dynobj->sections; s != NULL; s = s->next)
7714 {
7715 if ((s->flags & SEC_LINKER_CREATED) == 0)
7716 continue;
7717
7718 if (s == htab->root.splt
7719 || s == htab->root.sgot
7720 || s == htab->root.sgotplt
7721 || s == htab->root.iplt
7722 || s == htab->root.igotplt || s == htab->sdynbss)
7723 {
7724 /* Strip this section if we don't need it; see the
7725 comment below. */
7726 }
7727 else if (CONST_STRNEQ (bfd_get_section_name (dynobj, s), ".rela"))
7728 {
7729 if (s->size != 0 && s != htab->root.srelplt)
7730 relocs = TRUE;
7731
7732 /* We use the reloc_count field as a counter if we need
7733 to copy relocs into the output file. */
7734 if (s != htab->root.srelplt)
7735 s->reloc_count = 0;
7736 }
7737 else
7738 {
7739 /* It's not one of our sections, so don't allocate space. */
7740 continue;
7741 }
7742
7743 if (s->size == 0)
7744 {
7745 /* If we don't need this section, strip it from the
7746 output file. This is mostly to handle .rela.bss and
7747 .rela.plt. We must create both sections in
7748 create_dynamic_sections, because they must be created
7749 before the linker maps input sections to output
7750 sections. The linker does that before
7751 adjust_dynamic_symbol is called, and it is that
7752 function which decides whether anything needs to go
7753 into these sections. */
7754
7755 s->flags |= SEC_EXCLUDE;
7756 continue;
7757 }
7758
7759 if ((s->flags & SEC_HAS_CONTENTS) == 0)
7760 continue;
7761
7762 /* Allocate memory for the section contents. We use bfd_zalloc
7763 here in case unused entries are not reclaimed before the
7764 section's contents are written out. This should not happen,
7765 but this way if it does, we get a R_AARCH64_NONE reloc instead
7766 of garbage. */
7767 s->contents = (bfd_byte *) bfd_zalloc (dynobj, s->size);
7768 if (s->contents == NULL)
7769 return FALSE;
7770 }
7771
7772 if (htab->root.dynamic_sections_created)
7773 {
7774 /* Add some entries to the .dynamic section. We fill in the
7775 values later, in elfNN_aarch64_finish_dynamic_sections, but we
7776 must add the entries now so that we get the correct size for
7777 the .dynamic section. The DT_DEBUG entry is filled in by the
7778 dynamic linker and used by the debugger. */
7779#define add_dynamic_entry(TAG, VAL) \
7780 _bfd_elf_add_dynamic_entry (info, TAG, VAL)
7781
7782 if (info->executable)
7783 {
7784 if (!add_dynamic_entry (DT_DEBUG, 0))
7785 return FALSE;
7786 }
7787
7788 if (htab->root.splt->size != 0)
7789 {
7790 if (!add_dynamic_entry (DT_PLTGOT, 0)
7791 || !add_dynamic_entry (DT_PLTRELSZ, 0)
7792 || !add_dynamic_entry (DT_PLTREL, DT_RELA)
7793 || !add_dynamic_entry (DT_JMPREL, 0))
7794 return FALSE;
7795
7796 if (htab->tlsdesc_plt
7797 && (!add_dynamic_entry (DT_TLSDESC_PLT, 0)
7798 || !add_dynamic_entry (DT_TLSDESC_GOT, 0)))
7799 return FALSE;
7800 }
7801
7802 if (relocs)
7803 {
7804 if (!add_dynamic_entry (DT_RELA, 0)
7805 || !add_dynamic_entry (DT_RELASZ, 0)
7806 || !add_dynamic_entry (DT_RELAENT, RELOC_SIZE (htab)))
7807 return FALSE;
7808
7809 /* If any dynamic relocs apply to a read-only section,
7810 then we need a DT_TEXTREL entry. */
7811 if ((info->flags & DF_TEXTREL) != 0)
7812 {
7813 if (!add_dynamic_entry (DT_TEXTREL, 0))
7814 return FALSE;
7815 }
7816 }
7817 }
7818#undef add_dynamic_entry
7819
7820 return TRUE;
7821}
7822
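/* Patch the instruction at PLT_ENTRY in place, applying VALUE to the
   instruction fields selected by the howto for R_TYPE.  Used below to
   fill in the ADRP/LDR/ADD fields of PLT and TLSDESC stub entries.  */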
7823static inline void
7824elf_aarch64_update_plt_entry (bfd *output_bfd,
7825 bfd_reloc_code_real_type r_type,
7826 bfd_byte *plt_entry, bfd_vma value)
7827{
7828 reloc_howto_type *howto = elfNN_aarch64_howto_from_bfd_reloc (r_type);
7829
7830 _bfd_aarch64_elf_put_addend (output_bfd, plt_entry, r_type, howto, value);
7831}
7832
7833static void
7834elfNN_aarch64_create_small_pltn_entry (struct elf_link_hash_entry *h,
7835 struct elf_aarch64_link_hash_table
7836 *htab, bfd *output_bfd,
7837 struct bfd_link_info *info)
7838{
7839 bfd_byte *plt_entry;
7840 bfd_vma plt_index;
7841 bfd_vma got_offset;
7842 bfd_vma gotplt_entry_address;
7843 bfd_vma plt_entry_address;
7844 Elf_Internal_Rela rela;
7845 bfd_byte *loc;
7846 asection *plt, *gotplt, *relplt;
7847
7848 /* When building a static executable, use .iplt, .igot.plt and
7849 .rela.iplt sections for STT_GNU_IFUNC symbols. */
7850 if (htab->root.splt != NULL)
7851 {
7852 plt = htab->root.splt;
7853 gotplt = htab->root.sgotplt;
7854 relplt = htab->root.srelplt;
7855 }
7856 else
7857 {
7858 plt = htab->root.iplt;
7859 gotplt = htab->root.igotplt;
7860 relplt = htab->root.irelplt;
7861 }
7862
7863 /* Get the index in the procedure linkage table which
7864 corresponds to this symbol. This is the index of this symbol
7865 in all the symbols for which we are making plt entries. The
7866 first entry in the procedure linkage table is reserved.
7867
7868 Get the offset into the .got table of the entry that
7869 corresponds to this function. Each .got entry is GOT_ENTRY_SIZE
7870 bytes. The first three are reserved for the dynamic linker.
7871
7872 For static executables, we don't reserve anything. */
7873
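/* For example, assuming the usual ELF64 small-PLT sizes (a 32-byte
   PLT0 header, 16-byte PLT entries and 8-byte GOT slots), a symbol
   with h->plt.offset == 48 gets plt_index == 1 and got_offset == 32,
   i.e. the second .got.plt slot after the three reserved ones.  */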
7874 if (plt == htab->root.splt)
7875 {
7876 plt_index = (h->plt.offset - htab->plt_header_size) / htab->plt_entry_size;
7877 got_offset = (plt_index + 3) * GOT_ENTRY_SIZE;
7878 }
7879 else
7880 {
7881 plt_index = h->plt.offset / htab->plt_entry_size;
7882 got_offset = plt_index * GOT_ENTRY_SIZE;
7883 }
7884
7885 plt_entry = plt->contents + h->plt.offset;
7886 plt_entry_address = plt->output_section->vma
7887 + plt->output_offset + h->plt.offset;
7888 gotplt_entry_address = gotplt->output_section->vma +
7889 gotplt->output_offset + got_offset;
7890
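/* The small model PLTn entry patched below has the form (ELF64):
     adrp x16, PLT_GOT + n * 8
     ldr  x17, [x16, #:lo12:PLT_GOT + n * 8]
     add  x16, x16, #:lo12:PLT_GOT + n * 8
     br   x17
   The ELF32 variant differs only in the width of the loaded value.  */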
7891 /* Copy in the boiler-plate for the PLTn entry. */
7892 memcpy (plt_entry, elfNN_aarch64_small_plt_entry, PLT_SMALL_ENTRY_SIZE);
7893
7894 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
7895 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
7896 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
7897 plt_entry,
7898 PG (gotplt_entry_address) -
7899 PG (plt_entry_address));
7900
7901 /* Fill in the lo12 bits for the load from the pltgot. */
7902 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
7903 plt_entry + 4,
7904 PG_OFFSET (gotplt_entry_address));
7905
7906 /* Fill in the lo12 bits for the add from the pltgot entry. */
7907 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
7908 plt_entry + 8,
7909 PG_OFFSET (gotplt_entry_address));
7910
7911 /* All the GOTPLT entries are essentially initialized to PLT0. */
7912 bfd_put_NN (output_bfd,
7913 plt->output_section->vma + plt->output_offset,
7914 gotplt->contents + got_offset);
7915
7916 rela.r_offset = gotplt_entry_address;
7917
7918 if (h->dynindx == -1
7919 || ((info->executable
7920 || ELF_ST_VISIBILITY (h->other) != STV_DEFAULT)
7921 && h->def_regular
7922 && h->type == STT_GNU_IFUNC))
7923 {
7924 /* If an STT_GNU_IFUNC symbol is locally defined, generate
7925 R_AARCH64_IRELATIVE instead of R_AARCH64_JUMP_SLOT. */
7926 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (IRELATIVE));
7927 rela.r_addend = (h->root.u.def.value
7928 + h->root.u.def.section->output_section->vma
7929 + h->root.u.def.section->output_offset);
7930 }
7931 else
7932 {
7933 /* Fill in the entry in the .rela.plt section. */
7934 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (JUMP_SLOT));
7935 rela.r_addend = 0;
7936 }
7937
7938 /* Compute the relocation entry to use based on the PLT index and do
7939 not adjust reloc_count. The reloc_count has already been adjusted
7940 to account for this entry. */
7941 loc = relplt->contents + plt_index * RELOC_SIZE (htab);
7942 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
7943}
7944
7945/* Size sections even though they're not dynamic. We use it to set up
7946 _TLS_MODULE_BASE_, if needed. */
7947
7948static bfd_boolean
7949elfNN_aarch64_always_size_sections (bfd *output_bfd,
7950 struct bfd_link_info *info)
7951{
7952 asection *tls_sec;
7953
7954 if (info->relocatable)
7955 return TRUE;
7956
7957 tls_sec = elf_hash_table (info)->tls_sec;
7958
7959 if (tls_sec)
7960 {
7961 struct elf_link_hash_entry *tlsbase;
7962
7963 tlsbase = elf_link_hash_lookup (elf_hash_table (info),
7964 "_TLS_MODULE_BASE_", TRUE, TRUE, FALSE);
7965
7966 if (tlsbase)
7967 {
7968 struct bfd_link_hash_entry *h = NULL;
7969 const struct elf_backend_data *bed =
7970 get_elf_backend_data (output_bfd);
7971
7972 if (!(_bfd_generic_link_add_one_symbol
7973 (info, output_bfd, "_TLS_MODULE_BASE_", BSF_LOCAL,
7974 tls_sec, 0, NULL, FALSE, bed->collect, &h)))
7975 return FALSE;
7976
7977 tlsbase->type = STT_TLS;
7978 tlsbase = (struct elf_link_hash_entry *) h;
7979 tlsbase->def_regular = 1;
7980 tlsbase->other = STV_HIDDEN;
7981 (*bed->elf_backend_hide_symbol) (info, tlsbase, TRUE);
7982 }
7983 }
7984
7985 return TRUE;
7986}
7987
7988/* Finish up dynamic symbol handling. We set the contents of various
7989 dynamic sections here. */
7990static bfd_boolean
7991elfNN_aarch64_finish_dynamic_symbol (bfd *output_bfd,
7992 struct bfd_link_info *info,
7993 struct elf_link_hash_entry *h,
7994 Elf_Internal_Sym *sym)
7995{
7996 struct elf_aarch64_link_hash_table *htab;
7997 htab = elf_aarch64_hash_table (info);
7998
7999 if (h->plt.offset != (bfd_vma) - 1)
8000 {
8001 asection *plt, *gotplt, *relplt;
8002
8003 /* This symbol has an entry in the procedure linkage table. Set
8004 it up. */
8005
8006 /* When building a static executable, use .iplt, .igot.plt and
8007 .rela.iplt sections for STT_GNU_IFUNC symbols. */
8008 if (htab->root.splt != NULL)
8009 {
8010 plt = htab->root.splt;
8011 gotplt = htab->root.sgotplt;
8012 relplt = htab->root.srelplt;
8013 }
8014 else
8015 {
8016 plt = htab->root.iplt;
8017 gotplt = htab->root.igotplt;
8018 relplt = htab->root.irelplt;
8019 }
8020
8021 /* This symbol has an entry in the procedure linkage table. Set
8022 it up. */
8023 if ((h->dynindx == -1
8024 && !((h->forced_local || info->executable)
8025 && h->def_regular
8026 && h->type == STT_GNU_IFUNC))
8027 || plt == NULL
8028 || gotplt == NULL
8029 || relplt == NULL)
8030 abort ();
8031
8032 elfNN_aarch64_create_small_pltn_entry (h, htab, output_bfd, info);
8033 if (!h->def_regular)
8034 {
8035 /* Mark the symbol as undefined, rather than as defined in
8036 the .plt section. */
8037 sym->st_shndx = SHN_UNDEF;
8038 /* If the symbol is weak we need to clear the value.
8039 Otherwise, the PLT entry would provide a definition for
8040 the symbol even if the symbol wasn't defined anywhere,
8041 and so the symbol would never be NULL. Leave the value if
8042 there were any relocations where pointer equality matters
8043 (this is a clue for the dynamic linker, to make function
8044 pointer comparisons work between an application and shared
8045 library). */
8046 if (!h->ref_regular_nonweak || !h->pointer_equality_needed)
8047 sym->st_value = 0;
8048 }
8049 }
8050
8051 if (h->got.offset != (bfd_vma) - 1
8052 && elf_aarch64_hash_entry (h)->got_type == GOT_NORMAL)
8053 {
8054 Elf_Internal_Rela rela;
8055 bfd_byte *loc;
8056
8057 /* This symbol has an entry in the global offset table. Set it
8058 up. */
8059 if (htab->root.sgot == NULL || htab->root.srelgot == NULL)
8060 abort ();
8061
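/* The low bit of h->got.offset is used by this backend as a flag
   (see the BFD_ASSERTs below); mask it off to recover the actual
   byte offset of the GOT entry.  */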
8062 rela.r_offset = (htab->root.sgot->output_section->vma
8063 + htab->root.sgot->output_offset
8064 + (h->got.offset & ~(bfd_vma) 1));
8065
8066 if (h->def_regular
8067 && h->type == STT_GNU_IFUNC)
8068 {
8069 if (info->shared)
8070 {
8071 /* Generate R_AARCH64_GLOB_DAT. */
8072 goto do_glob_dat;
8073 }
8074 else
8075 {
8076 asection *plt;
8077
8078 if (!h->pointer_equality_needed)
8079 abort ();
8080
8081 /* For a non-shared object we can't use .got.plt, which
8082 contains the real function address, when we need pointer
8083 equality; instead we load the GOT entry with the PLT entry address. */
8084 plt = htab->root.splt ? htab->root.splt : htab->root.iplt;
8085 bfd_put_NN (output_bfd, (plt->output_section->vma
8086 + plt->output_offset
8087 + h->plt.offset),
8088 htab->root.sgot->contents
8089 + (h->got.offset & ~(bfd_vma) 1));
8090 return TRUE;
8091 }
8092 }
8093 else if (info->shared && SYMBOL_REFERENCES_LOCAL (info, h))
8094 {
8095 if (!h->def_regular)
8096 return FALSE;
8097
8098 BFD_ASSERT ((h->got.offset & 1) != 0);
8099 rela.r_info = ELFNN_R_INFO (0, AARCH64_R (RELATIVE));
8100 rela.r_addend = (h->root.u.def.value
8101 + h->root.u.def.section->output_section->vma
8102 + h->root.u.def.section->output_offset);
8103 }
8104 else
8105 {
8106do_glob_dat:
8107 BFD_ASSERT ((h->got.offset & 1) == 0);
8108 bfd_put_NN (output_bfd, (bfd_vma) 0,
8109 htab->root.sgot->contents + h->got.offset);
8110 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (GLOB_DAT));
8111 rela.r_addend = 0;
8112 }
8113
8114 loc = htab->root.srelgot->contents;
8115 loc += htab->root.srelgot->reloc_count++ * RELOC_SIZE (htab);
8116 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8117 }
8118
8119 if (h->needs_copy)
8120 {
8121 Elf_Internal_Rela rela;
8122 bfd_byte *loc;
8123
8124 /* This symbol needs a copy reloc. Set it up. */
8125
8126 if (h->dynindx == -1
8127 || (h->root.type != bfd_link_hash_defined
8128 && h->root.type != bfd_link_hash_defweak)
8129 || htab->srelbss == NULL)
8130 abort ();
8131
8132 rela.r_offset = (h->root.u.def.value
8133 + h->root.u.def.section->output_section->vma
8134 + h->root.u.def.section->output_offset);
8135 rela.r_info = ELFNN_R_INFO (h->dynindx, AARCH64_R (COPY));
8136 rela.r_addend = 0;
8137 loc = htab->srelbss->contents;
8138 loc += htab->srelbss->reloc_count++ * RELOC_SIZE (htab);
8139 bfd_elfNN_swap_reloca_out (output_bfd, &rela, loc);
8140 }
8141
8142 /* Mark _DYNAMIC and _GLOBAL_OFFSET_TABLE_ as absolute. SYM may
8143 be NULL for local symbols. */
8144 if (sym != NULL
8145 && (h == elf_hash_table (info)->hdynamic
8146 || h == elf_hash_table (info)->hgot))
8147 sym->st_shndx = SHN_ABS;
8148
8149 return TRUE;
8150}
8151
8152/* Finish up local dynamic symbol handling. We set the contents of
8153 various dynamic sections here. */
8154
8155static bfd_boolean
8156elfNN_aarch64_finish_local_dynamic_symbol (void **slot, void *inf)
8157{
8158 struct elf_link_hash_entry *h
8159 = (struct elf_link_hash_entry *) *slot;
8160 struct bfd_link_info *info
8161 = (struct bfd_link_info *) inf;
8162
8163 return elfNN_aarch64_finish_dynamic_symbol (info->output_bfd,
8164 info, h, NULL);
8165}
8166
8167static void
8168elfNN_aarch64_init_small_plt0_entry (bfd *output_bfd ATTRIBUTE_UNUSED,
8169 struct elf_aarch64_link_hash_table
8170 *htab)
8171{
8172 /* Fill in PLT0. FIXME:RR Note this doesn't distinguish between
8173 small and large PLTs and at the moment just generates
8174 the small PLT. */
8175
8176 /* PLT0 of the small PLT looks like this in ELF64 -
8177 stp x16, x30, [sp, #-16]! // Save the reloc and lr on stack.
8178 adrp x16, PLT_GOT + 16 // Get the page base of the GOTPLT
8179 ldr x17, [x16, #:lo12:PLT_GOT+16] // Load the address of the
8180 // symbol resolver
8181 add x16, x16, #:lo12:PLT_GOT+16 // Load the lo12 bits of the
8182 // GOTPLT entry for this.
8183 br x17
8184 PLT0 will be slightly different in ELF32 due to the different GOT
8185 entry size.
8186 */
8187 bfd_vma plt_got_2nd_ent; /* Address of GOT[2]. */
8188 bfd_vma plt_base;
8189
8190
8191 memcpy (htab->root.splt->contents, elfNN_aarch64_small_plt0_entry,
8192 PLT_ENTRY_SIZE);
8193 elf_section_data (htab->root.splt->output_section)->this_hdr.sh_entsize =
8194 PLT_ENTRY_SIZE;
8195
8196 plt_got_2nd_ent = (htab->root.sgotplt->output_section->vma
8197 + htab->root.sgotplt->output_offset
8198 + GOT_ENTRY_SIZE * 2);
8199
8200 plt_base = htab->root.splt->output_section->vma +
8201 htab->root.splt->output_offset;
8202
8203 /* Fill in the top 21 bits for this: ADRP x16, PLT_GOT + n * 8.
8204 ADRP: ((PG(S+A)-PG(P)) >> 12) & 0x1fffff */
8205 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8206 htab->root.splt->contents + 4,
8207 PG (plt_got_2nd_ent) - PG (plt_base + 4));
8208
8209 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_LDSTNN_LO12,
8210 htab->root.splt->contents + 8,
8211 PG_OFFSET (plt_got_2nd_ent));
8212
8213 elf_aarch64_update_plt_entry (output_bfd, BFD_RELOC_AARCH64_ADD_LO12,
8214 htab->root.splt->contents + 12,
8215 PG_OFFSET (plt_got_2nd_ent));
8216}
8217
8218static bfd_boolean
8219elfNN_aarch64_finish_dynamic_sections (bfd *output_bfd,
8220 struct bfd_link_info *info)
8221{
8222 struct elf_aarch64_link_hash_table *htab;
8223 bfd *dynobj;
8224 asection *sdyn;
8225
8226 htab = elf_aarch64_hash_table (info);
8227 dynobj = htab->root.dynobj;
8228 sdyn = bfd_get_linker_section (dynobj, ".dynamic");
8229
8230 if (htab->root.dynamic_sections_created)
8231 {
8232 ElfNN_External_Dyn *dyncon, *dynconend;
8233
8234 if (sdyn == NULL || htab->root.sgot == NULL)
8235 abort ();
8236
8237 dyncon = (ElfNN_External_Dyn *) sdyn->contents;
8238 dynconend = (ElfNN_External_Dyn *) (sdyn->contents + sdyn->size);
8239 for (; dyncon < dynconend; dyncon++)
8240 {
8241 Elf_Internal_Dyn dyn;
8242 asection *s;
8243
8244 bfd_elfNN_swap_dyn_in (dynobj, dyncon, &dyn);
8245
8246 switch (dyn.d_tag)
8247 {
8248 default:
8249 continue;
8250
8251 case DT_PLTGOT:
8252 s = htab->root.sgotplt;
8253 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset;
8254 break;
8255
8256 case DT_JMPREL:
8257 dyn.d_un.d_ptr = htab->root.srelplt->output_section->vma;
8258 break;
8259
8260 case DT_PLTRELSZ:
8261 s = htab->root.srelplt;
8262 dyn.d_un.d_val = s->size;
8263 break;
8264
8265 case DT_RELASZ:
8266 /* The procedure linkage table relocs (DT_JMPREL) should
8267 not be included in the overall relocs (DT_RELA).
8268 Therefore, we override the DT_RELASZ entry here to
8269 make it not include the JMPREL relocs. Since the
8270 linker script arranges for .rela.plt to follow all
8271 other relocation sections, we don't have to worry
8272 about changing the DT_RELA entry. */
8273 if (htab->root.srelplt != NULL)
8274 {
8275 s = htab->root.srelplt;
8276 dyn.d_un.d_val -= s->size;
8277 }
8278 break;
8279
8280 case DT_TLSDESC_PLT:
8281 s = htab->root.splt;
8282 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8283 + htab->tlsdesc_plt;
8284 break;
8285
8286 case DT_TLSDESC_GOT:
8287 s = htab->root.sgot;
8288 dyn.d_un.d_ptr = s->output_section->vma + s->output_offset
8289 + htab->dt_tlsdesc_got;
8290 break;
8291 }
8292
8293 bfd_elfNN_swap_dyn_out (output_bfd, &dyn, dyncon);
8294 }
8295
8296 }
8297
8298 /* Fill in the special first entry in the procedure linkage table. */
8299 if (htab->root.splt && htab->root.splt->size > 0)
8300 {
8301 elfNN_aarch64_init_small_plt0_entry (output_bfd, htab);
8302
8303 elf_section_data (htab->root.splt->output_section)->
8304 this_hdr.sh_entsize = htab->plt_entry_size;
8305
8306
8307 if (htab->tlsdesc_plt)
8308 {
8309 bfd_put_NN (output_bfd, (bfd_vma) 0,
8310 htab->root.sgot->contents + htab->dt_tlsdesc_got);
8311
8312 memcpy (htab->root.splt->contents + htab->tlsdesc_plt,
8313 elfNN_aarch64_tlsdesc_small_plt_entry,
8314 sizeof (elfNN_aarch64_tlsdesc_small_plt_entry));
8315
8316 {
8317 bfd_vma adrp1_addr =
8318 htab->root.splt->output_section->vma
8319 + htab->root.splt->output_offset + htab->tlsdesc_plt + 4;
8320
8321 bfd_vma adrp2_addr = adrp1_addr + 4;
8322
8323 bfd_vma got_addr =
8324 htab->root.sgot->output_section->vma
8325 + htab->root.sgot->output_offset;
8326
8327 bfd_vma pltgot_addr =
8328 htab->root.sgotplt->output_section->vma
8329 + htab->root.sgotplt->output_offset;
8330
8331 bfd_vma dt_tlsdesc_got = got_addr + htab->dt_tlsdesc_got;
8332
8333 bfd_byte *plt_entry =
8334 htab->root.splt->contents + htab->tlsdesc_plt;
8335
8336 /* adrp x2, DT_TLSDESC_GOT */
8337 elf_aarch64_update_plt_entry (output_bfd,
8338 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8339 plt_entry + 4,
8340 (PG (dt_tlsdesc_got)
8341 - PG (adrp1_addr)));
8342
8343 /* adrp x3, 0 */
8344 elf_aarch64_update_plt_entry (output_bfd,
8345 BFD_RELOC_AARCH64_ADR_HI21_PCREL,
8346 plt_entry + 8,
8347 (PG (pltgot_addr)
8348 - PG (adrp2_addr)));
8349
8350 /* ldr x2, [x2, #0] */
8351 elf_aarch64_update_plt_entry (output_bfd,
8352 BFD_RELOC_AARCH64_LDSTNN_LO12,
8353 plt_entry + 12,
8354 PG_OFFSET (dt_tlsdesc_got));
8355
8356 /* add x3, x3, 0 */
8357 elf_aarch64_update_plt_entry (output_bfd,
8358 BFD_RELOC_AARCH64_ADD_LO12,
8359 plt_entry + 16,
8360 PG_OFFSET (pltgot_addr));
8361 }
8362 }
8363 }
8364
8365 if (htab->root.sgotplt)
8366 {
8367 if (bfd_is_abs_section (htab->root.sgotplt->output_section))
8368 {
8369 (*_bfd_error_handler)
8370 (_("discarded output section: `%A'"), htab->root.sgotplt);
8371 return FALSE;
8372 }
8373
8374 /* Fill in the first three entries in the global offset table. */
8375 if (htab->root.sgotplt->size > 0)
8376 {
8377 bfd_put_NN (output_bfd, (bfd_vma) 0, htab->root.sgotplt->contents);
8378
8379 /* Write GOT[1] and GOT[2], needed for the dynamic linker. */
8380 bfd_put_NN (output_bfd,
8381 (bfd_vma) 0,
8382 htab->root.sgotplt->contents + GOT_ENTRY_SIZE);
8383 bfd_put_NN (output_bfd,
8384 (bfd_vma) 0,
8385 htab->root.sgotplt->contents + GOT_ENTRY_SIZE * 2);
8386 }
8387
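/* The first entry of .got conventionally holds the address of the
   .dynamic section (zero when there is no dynamic section).  */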
8388 if (htab->root.sgot)
8389 {
8390 if (htab->root.sgot->size > 0)
8391 {
8392 bfd_vma addr =
8393 sdyn ? sdyn->output_section->vma + sdyn->output_offset : 0;
8394 bfd_put_NN (output_bfd, addr, htab->root.sgot->contents);
8395 }
8396 }
8397
8398 elf_section_data (htab->root.sgotplt->output_section)->
8399 this_hdr.sh_entsize = GOT_ENTRY_SIZE;
8400 }
8401
8402 if (htab->root.sgot && htab->root.sgot->size > 0)
8403 elf_section_data (htab->root.sgot->output_section)->this_hdr.sh_entsize
8404 = GOT_ENTRY_SIZE;
8405
8406 /* Fill PLT and GOT entries for local STT_GNU_IFUNC symbols. */
8407 htab_traverse (htab->loc_hash_table,
8408 elfNN_aarch64_finish_local_dynamic_symbol,
8409 info);
8410
8411 return TRUE;
8412}
8413
8414/* Return the address of the Ith PLT stub in section PLT, for relocation
8415 REL, or (bfd_vma) -1 if it should not be included. */
8416
8417static bfd_vma
8418elfNN_aarch64_plt_sym_val (bfd_vma i, const asection *plt,
8419 const arelent *rel ATTRIBUTE_UNUSED)
8420{
8421 return plt->vma + PLT_ENTRY_SIZE + i * PLT_SMALL_ENTRY_SIZE;
8422}
8423
8424
8425/* We use this so we can override certain functions
8426 (though currently we don't). */
8427
8428const struct elf_size_info elfNN_aarch64_size_info =
8429{
8430 sizeof (ElfNN_External_Ehdr),
8431 sizeof (ElfNN_External_Phdr),
8432 sizeof (ElfNN_External_Shdr),
8433 sizeof (ElfNN_External_Rel),
8434 sizeof (ElfNN_External_Rela),
8435 sizeof (ElfNN_External_Sym),
8436 sizeof (ElfNN_External_Dyn),
8437 sizeof (Elf_External_Note),
8438 4, /* Hash table entry size. */
8439 1, /* Internal relocs per external relocs. */
8440 ARCH_SIZE, /* Arch size. */
8441 LOG_FILE_ALIGN, /* Log_file_align. */
8442 ELFCLASSNN, EV_CURRENT,
8443 bfd_elfNN_write_out_phdrs,
8444 bfd_elfNN_write_shdrs_and_ehdr,
8445 bfd_elfNN_checksum_contents,
8446 bfd_elfNN_write_relocs,
8447 bfd_elfNN_swap_symbol_in,
8448 bfd_elfNN_swap_symbol_out,
8449 bfd_elfNN_slurp_reloc_table,
8450 bfd_elfNN_slurp_symbol_table,
8451 bfd_elfNN_swap_dyn_in,
8452 bfd_elfNN_swap_dyn_out,
8453 bfd_elfNN_swap_reloc_in,
8454 bfd_elfNN_swap_reloc_out,
8455 bfd_elfNN_swap_reloca_in,
8456 bfd_elfNN_swap_reloca_out
8457};
8458
8459#define ELF_ARCH bfd_arch_aarch64
8460#define ELF_MACHINE_CODE EM_AARCH64
8461#define ELF_MAXPAGESIZE 0x10000
8462#define ELF_MINPAGESIZE 0x1000
8463#define ELF_COMMONPAGESIZE 0x1000
8464
8465#define bfd_elfNN_close_and_cleanup \
8466 elfNN_aarch64_close_and_cleanup
8467
8468#define bfd_elfNN_bfd_free_cached_info \
8469 elfNN_aarch64_bfd_free_cached_info
8470
8471#define bfd_elfNN_bfd_is_target_special_symbol \
8472 elfNN_aarch64_is_target_special_symbol
8473
8474#define bfd_elfNN_bfd_link_hash_table_create \
8475 elfNN_aarch64_link_hash_table_create
8476
8477#define bfd_elfNN_bfd_merge_private_bfd_data \
8478 elfNN_aarch64_merge_private_bfd_data
8479
8480#define bfd_elfNN_bfd_print_private_bfd_data \
8481 elfNN_aarch64_print_private_bfd_data
8482
8483#define bfd_elfNN_bfd_reloc_type_lookup \
8484 elfNN_aarch64_reloc_type_lookup
8485
8486#define bfd_elfNN_bfd_reloc_name_lookup \
8487 elfNN_aarch64_reloc_name_lookup
8488
8489#define bfd_elfNN_bfd_set_private_flags \
8490 elfNN_aarch64_set_private_flags
8491
8492#define bfd_elfNN_find_inliner_info \
8493 elfNN_aarch64_find_inliner_info
8494
8495#define bfd_elfNN_find_nearest_line \
8496 elfNN_aarch64_find_nearest_line
8497
8498#define bfd_elfNN_mkobject \
8499 elfNN_aarch64_mkobject
8500
8501#define bfd_elfNN_new_section_hook \
8502 elfNN_aarch64_new_section_hook
8503
8504#define elf_backend_adjust_dynamic_symbol \
8505 elfNN_aarch64_adjust_dynamic_symbol
8506
8507#define elf_backend_always_size_sections \
8508 elfNN_aarch64_always_size_sections
8509
8510#define elf_backend_check_relocs \
8511 elfNN_aarch64_check_relocs
8512
8513#define elf_backend_copy_indirect_symbol \
8514 elfNN_aarch64_copy_indirect_symbol
8515
8516/* Create .dynbss, and .rela.bss sections in DYNOBJ, and set up shortcuts
8517 to them in our hash. */
8518#define elf_backend_create_dynamic_sections \
8519 elfNN_aarch64_create_dynamic_sections
8520
8521#define elf_backend_init_index_section \
8522 _bfd_elf_init_2_index_sections
8523
8524#define elf_backend_finish_dynamic_sections \
8525 elfNN_aarch64_finish_dynamic_sections
8526
8527#define elf_backend_finish_dynamic_symbol \
8528 elfNN_aarch64_finish_dynamic_symbol
8529
8530#define elf_backend_gc_sweep_hook \
8531 elfNN_aarch64_gc_sweep_hook
8532
8533#define elf_backend_object_p \
8534 elfNN_aarch64_object_p
8535
8536#define elf_backend_output_arch_local_syms \
8537 elfNN_aarch64_output_arch_local_syms
8538
8539#define elf_backend_plt_sym_val \
8540 elfNN_aarch64_plt_sym_val
8541
8542#define elf_backend_post_process_headers \
8543 elfNN_aarch64_post_process_headers
8544
8545#define elf_backend_relocate_section \
8546 elfNN_aarch64_relocate_section
8547
8548#define elf_backend_reloc_type_class \
8549 elfNN_aarch64_reloc_type_class
8550
8551#define elf_backend_section_from_shdr \
8552 elfNN_aarch64_section_from_shdr
8553
8554#define elf_backend_size_dynamic_sections \
8555 elfNN_aarch64_size_dynamic_sections
8556
8557#define elf_backend_size_info \
8558 elfNN_aarch64_size_info
8559
8560#define elf_backend_write_section \
8561 elfNN_aarch64_write_section
8562
8563#define elf_backend_can_refcount 1
8564#define elf_backend_can_gc_sections 1
8565#define elf_backend_plt_readonly 1
8566#define elf_backend_want_got_plt 1
8567#define elf_backend_want_plt_sym 0
8568#define elf_backend_may_use_rel_p 0
8569#define elf_backend_may_use_rela_p 1
8570#define elf_backend_default_use_rela_p 1
8571#define elf_backend_rela_normal 1
8572#define elf_backend_got_header_size (GOT_ENTRY_SIZE * 3)
8573#define elf_backend_default_execstack 0
8574
8575#undef elf_backend_obj_attrs_section
8576#define elf_backend_obj_attrs_section ".ARM.attributes"
8577
8578#include "elfNN-target.h"