// -*- mode:doc; -*-
// vim: set syntax=asciidoc:

[[configure]]
Details on Buildroot configuration
----------------------------------

All the configuration options in +make *config+ have a help text
providing details about the option. However, a number of topics
require additional explanation that cannot easily be covered in the
help text; these topics are covered in the following sections.

Cross-compilation toolchain
~~~~~~~~~~~~~~~~~~~~~~~~~~~

A compilation toolchain is the set of tools that allows you to compile
code for your system. It consists of a compiler (in our case, +gcc+),
binary utilities such as an assembler and a linker (in our case,
+binutils+) and a C standard library (for example
http://www.gnu.org/software/libc/libc.html[GNU Libc] or
http://www.uclibc.org/[uClibc]).

The system installed on your development station certainly already has
a compilation toolchain that you can use to compile an application
that runs on your system. If you're using a PC, your compilation
toolchain runs on an x86 processor and generates code for an x86
processor. Under most Linux systems, the compilation toolchain uses
the GNU libc (glibc) as the C standard library. This compilation
toolchain is called the "host compilation toolchain". The machine on
which it is running, and on which you're working, is called the "host
system" footnote:[This terminology differs from what is used by GNU
configure, where the host is the machine on which the application will
run (which is usually the same as target)].

The compilation toolchain is provided by your distribution, and
Buildroot has nothing to do with it (other than using it to build a
cross-compilation toolchain and other tools that are run on the
development host).

As said above, the compilation toolchain that comes with your system
runs on and generates code for the processor in your host system. As
your embedded system has a different processor, you need a
cross-compilation toolchain - a compilation toolchain that runs on
your _host system_ but generates code for your _target system_ (and
target processor). For example, if your host system uses x86 and your
target system uses ARM, the regular compilation toolchain on your host
runs on x86 and generates code for x86, while the cross-compilation
toolchain runs on x86 and generates code for ARM.
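
For illustration, here is how invoking both toolchains on the same
source file might look. The +arm-linux-gnueabi-+ prefix is only an
example; the actual prefix depends on the toolchain you use:

---------------------
$ gcc -o hello hello.c                     # host toolchain: produces x86 code
$ arm-linux-gnueabi-gcc -o hello hello.c   # cross toolchain: produces ARM code
---------------------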

Buildroot provides different solutions to build, or use existing
cross-compilation toolchains:

 * The *internal toolchain backend*, called +Buildroot toolchain+ in
   the configuration interface.

 * The *external toolchain backend*, called +External toolchain+ in
   the configuration interface.

 * The *Crosstool-NG toolchain backend*, called +Crosstool-NG
   toolchain+ in the configuration interface.

The choice between these three solutions is made using the +Toolchain
Type+ option in the +Toolchain+ menu. Once one solution has been
chosen, a number of configuration options appear; they are detailed in
the following sections.
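
As a rough sketch, this choice ends up in your +.config+ file as one
of the following options. The exact symbol names are assumptions that
may vary between Buildroot versions; use +make menuconfig+ rather
than editing +.config+ by hand:

---------------------
# Toolchain type: exactly one of these is set (names are illustrative)
BR2_TOOLCHAIN_BUILDROOT=y
# BR2_TOOLCHAIN_EXTERNAL is not set
# BR2_TOOLCHAIN_CTNG is not set
---------------------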

[[internal-toolchain-backend]]
Internal toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^

The _internal toolchain backend_ is the backend where Buildroot builds
a cross-compilation toolchain by itself, before building the userspace
applications and libraries for your target embedded system.

This backend is the historical backend of Buildroot, and was for a
long time limited to the use of the http://www.uclibc.org[uClibc C
library]. Support for the _eglibc_ C library was added in 2013 and is
at this point considered experimental. See the _External toolchain
backend_ and _Crosstool-NG toolchain backend_ for other solutions to
use _glibc_ or _eglibc_.

Once you have selected this backend, a number of options appear. The
most important ones allow you to:

 * Change the version of the Linux kernel headers used to build the
   toolchain. This item deserves a few explanations. In the process of
   building a cross-compilation toolchain, the C library is being
   built. This library provides the interface between userspace
   applications and the Linux kernel. In order to know how to "talk"
   to the Linux kernel, the C library needs to have access to the
   _Linux kernel headers_ (i.e. the +.h+ files from the kernel), which
   define the interface between userspace and the kernel (system
   calls, data structures, etc.). Since this interface is backward
   compatible, the version of the Linux kernel headers used to build
   your toolchain does not need to match _exactly_ the version of the
   Linux kernel you intend to run on your embedded system. It only
   needs to be equal to or older than the version of the Linux kernel
   you intend to run. If you use kernel headers that are more recent
   than the Linux kernel you run on your embedded system, then the C
   library might be using interfaces that are not provided by your
   Linux kernel.

 * Change the version and the configuration of the uClibc C library
   (if uClibc is selected). The default options are usually
   fine. However, if you really need to customize the configuration of
   your uClibc C library, you can pass a specific configuration file
   here. Alternatively, you can run the +make uclibc-menuconfig+
   command to get access to uClibc's configuration interface (see the
   example below). Note that all packages in Buildroot are tested
   against the default uClibc configuration bundled in Buildroot: if
   you deviate from this configuration by removing features from
   uClibc, some packages may no longer build.

 * Change the version of the GCC compiler and binutils.

 * Select a number of toolchain options (uClibc only): whether the
   toolchain should have largefile support (i.e. support for files
   larger than 2 GB on 32-bit systems), IPv6 support, RPC support
   (used mainly for NFS), wide-char support, locale support (for
   internationalization), C++ support and thread support. Depending on
   which options you choose, the number of userspace applications and
   libraries visible in the Buildroot menus will change: many
   applications and libraries require certain toolchain options to be
   enabled. Most packages display a comment indicating which toolchain
   options they require, when those options are not enabled.

It is worth noting that whenever one of those options is modified,
the entire toolchain and system must be rebuilt. See
xref:full-rebuild[].
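
As a quick sketch, a typical workflow with this backend, using the
+make uclibc-menuconfig+ command mentioned above, could look like
this:

---------------------
$ make menuconfig         # select "Buildroot toolchain" as the Toolchain type
$ make uclibc-menuconfig  # optionally customize the uClibc configuration
$ make                    # build the toolchain, then the target packages
---------------------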

Advantages of this backend:

* Well integrated with Buildroot
* Fast, only builds what's necessary

Drawbacks of this backend:

* Rebuilding the toolchain is needed when doing +make clean+, which
  takes time. If you're trying to reduce your build time, consider
  using the _External toolchain backend_.

[[external-toolchain-backend]]
External toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^

The _external toolchain backend_ allows you to use existing pre-built
cross-compilation toolchains. Buildroot knows about a number of
well-known cross-compilation toolchains (from
http://www.linaro.org[Linaro] for ARM,
http://www.mentor.com/embedded-software/sourcery-tools/sourcery-codebench/editions/lite-edition/[Sourcery
CodeBench] for ARM, x86, x86-64, PowerPC, MIPS and SuperH,
https://blackfin.uclinux.org/gf/project/toolchain[Blackfin toolchains
from ADI], http://git.xilinx.com/[Xilinx toolchains for Microblaze],
etc.) and is capable of downloading them automatically, or it can be
pointed to a custom toolchain, either available for download or
installed locally.

Then, you have three solutions to use an external toolchain:

* Use a predefined external toolchain profile, and let Buildroot
  download, extract and install the toolchain. Buildroot already knows
  about a few CodeSourcery, Linaro, Blackfin and Xilinx toolchains.
  Just select the toolchain profile in +Toolchain+ from the
  available ones. This is definitely the easiest solution.

* Use a predefined external toolchain profile, but instead of having
  Buildroot download and extract the toolchain, you can tell Buildroot
  where your toolchain is already installed on your system. Just
  select the toolchain profile in +Toolchain+ from the available
  ones, unselect +Download toolchain automatically+, and fill the
  +Toolchain path+ text entry with the path to your cross-compiling
  toolchain.

* Use a completely custom external toolchain. This is particularly
  useful for toolchains generated using crosstool-NG. To do this,
  select the +Custom toolchain+ solution in the +Toolchain+ list. You
  need to fill in the +Toolchain path+, +Toolchain prefix+ and
  +External toolchain C library+ options. Then, you have to tell
  Buildroot what your external toolchain supports. If your external
  toolchain uses the 'glibc' library, you only have to tell whether
  your toolchain supports C\+\+ and whether it has built-in RPC
  support. If your external toolchain uses the 'uClibc' library, then
  you have to tell Buildroot if it supports largefile, IPv6, RPC,
  wide-char, locale, program invocation, threads and C++ (see the
  sketch after this list). At the beginning of the execution,
  Buildroot will tell you if the selected options do not match the
  toolchain configuration.
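
As an illustration, here is roughly what the relevant part of a
+.config+ could look like for a custom external uClibc toolchain. The
option names are assumptions that may differ between Buildroot
versions, and the path and prefix are placeholders:

---------------------
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/my-toolchain"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-linux"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_UCLIBC=y
# declare what the toolchain supports (largefile, IPv6, etc.)
BR2_TOOLCHAIN_EXTERNAL_LARGEFILE=y
BR2_TOOLCHAIN_EXTERNAL_INET_IPV6=y
---------------------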

Our external toolchain support has been tested with toolchains from
CodeSourcery and Linaro, toolchains generated by
http://crosstool-ng.org[crosstool-NG], and toolchains generated by
Buildroot itself. In general, all toolchains that support the
'sysroot' feature should work. If not, do not hesitate to contact the
developers.

We do not support toolchains from Denx's
http://www.denx.de/wiki/DULG/ELDK[ELDK], for two reasons:

* The ELDK does not contain a pure toolchain (i.e. just the compiler,
  binutils, the C and C++ libraries), but a toolchain that comes with
  a very large set of pre-compiled libraries and programs. Therefore,
  Buildroot cannot import the 'sysroot' of the toolchain, as it would
  contain hundreds of megabytes of pre-compiled libraries that are
  normally built by Buildroot.

* The ELDK toolchains have a completely non-standard custom mechanism
  to handle multiple library variants. Instead of using the standard
  GCC 'multilib' mechanism, the ARM ELDK uses different symbolic links
  to the compiler to differentiate between library variants (for ARM
  soft-float and ARM VFP), and the PowerPC ELDK compiler uses a
  +CROSS_COMPILE+ environment variable. This non-standard behaviour
  makes it difficult to support ELDK in Buildroot.

We also do not support using the distribution toolchain (i.e. the
gcc/binutils/C library installed by your distribution) as the
toolchain to build software for the target. This is because your
distribution toolchain is not a "pure" toolchain (i.e. one with only
the C/C++ library), so we cannot import it properly into the Buildroot
build environment. So even if you are building a system for an x86 or
x86_64 target, you have to generate a cross-compilation toolchain with
Buildroot or crosstool-NG.

If you want to generate a custom toolchain for your project that can
be used as an external toolchain in Buildroot, our recommendation is
definitely to build it with http://crosstool-ng.org[crosstool-NG]. We
recommend building the toolchain separately from Buildroot, and then
_importing_ it into Buildroot using the external toolchain backend.

Advantages of this backend:

* Allows you to use well-known and well-tested cross-compilation
  toolchains.

* Avoids the build time of the cross-compilation toolchain, which is
  often very significant in the overall build time of an embedded
  Linux system.

* Not limited to uClibc: glibc and eglibc toolchains are supported.

Drawbacks of this backend:

* If your pre-built external toolchain has a bug, it may be hard to
  get a fix from the toolchain vendor, unless you built your external
  toolchain yourself using Crosstool-NG.

[[crosstool-ng-toolchain-backend]]
Crosstool-NG toolchain backend
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The _Crosstool-NG toolchain backend_ integrates the
http://crosstool-ng.org[Crosstool-NG] project with
Buildroot. Crosstool-NG is a highly-configurable, versatile and
well-maintained tool to build cross-compilation toolchains.

If you select the +Crosstool-NG toolchain+ option in +Toolchain Type+,
you will then be able to:

 * Choose which C library you want to use. Crosstool-NG supports the
   three most important C libraries used in Linux systems: glibc,
   eglibc and uClibc.

 * Choose a custom Crosstool-NG configuration file. Buildroot has its
   own default configuration file (one per C library choice), but you
   can provide your own. Another option is to run +make
   ctng-menuconfig+ to get access to the Crosstool-NG configuration
   interface. However, note that all Buildroot packages have only been
   tested with the default Crosstool-NG configurations.

 * Choose a number of toolchain options (rather limited if glibc or
   eglibc is used, more numerous if uClibc is used).

When you start the Buildroot build process, Buildroot will download
and install the Crosstool-NG tool, build and install its required
dependencies, and then run Crosstool-NG with the provided
configuration.
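
A minimal sketch of the workflow with this backend, using the +make
ctng-menuconfig+ command mentioned above:

---------------------
$ make menuconfig       # select "Crosstool-NG toolchain" as the Toolchain type
$ make ctng-menuconfig  # optionally adjust the Crosstool-NG configuration
$ make                  # fetch Crosstool-NG, build the toolchain, then the packages
---------------------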

Advantages of this backend:

* Not limited to uClibc: glibc and eglibc are supported.

* Vast possibilities of toolchain configuration.

Drawbacks of this backend:

* Crosstool-NG is not perfectly integrated with Buildroot. For
  example, Crosstool-NG has its own download infrastructure, not
  integrated with the one in Buildroot (for example a Buildroot +make
  source+ will not download all the source code tarballs needed by
  Crosstool-NG).

* The toolchain is completely rebuilt from scratch if you do a +make
  clean+.

/dev management
~~~~~~~~~~~~~~~

On a Linux system, the +/dev+ directory contains special files, called
_device files_, that allow userspace applications to access the
hardware devices managed by the Linux kernel. Without these _device
files_, your userspace applications would not be able to use the
hardware devices, even if they are properly recognized by the Linux
kernel.

Under +System configuration+, +/dev management+, Buildroot offers four
different solutions to handle the +/dev+ directory:

 * The first solution is *Static using device table*. This is the old
   classical way of handling device files in Linux. With this method,
   the device files are persistently stored in the root filesystem
   (i.e. they persist across reboots), and nothing automatically
   creates or removes those device files when hardware devices are
   added to or removed from the system. Buildroot therefore creates a
   standard set of device files using a _device table_, the default
   one being stored in +system/device_table_dev.txt+ in the Buildroot
   source code. This file is processed when Buildroot generates the
   final root filesystem image, and the _device files_ are therefore
   not visible in the +output/target+ directory. The
   +BR2_ROOTFS_STATIC_DEVICE_TABLE+ option allows you to change the
   default device table used by Buildroot, or to add an additional
   device table, so that additional _device files_ are created by
   Buildroot during the build. So, if you use this method and a
   _device file_ is missing in your system, you can for example create
   a +board/<yourcompany>/<yourproject>/device_table_dev.txt+ file
   that contains the description of your additional _device files_,
   and then set +BR2_ROOTFS_STATIC_DEVICE_TABLE+ to
   +system/device_table_dev.txt
   board/<yourcompany>/<yourproject>/device_table_dev.txt+ (see the
   sketch after this list). For more details about the format of the
   device table file, see xref:makedev-syntax[].

 * The second solution is *Dynamic using devtmpfs only*. _devtmpfs_ is
   a virtual filesystem inside the Linux kernel that was introduced in
   kernel 2.6.32 (if you use an older kernel, it is not possible to
   use this option). When mounted in +/dev+, this virtual filesystem
   will automatically make _device files_ appear and disappear as
   hardware devices are added to and removed from the system. This
   filesystem is not persistent across reboots: it is filled
   dynamically by the kernel. Using _devtmpfs_ requires the following
   kernel configuration options to be enabled: +CONFIG_DEVTMPFS+ and
   +CONFIG_DEVTMPFS_MOUNT+. When Buildroot is in charge of building
   the Linux kernel for your embedded device, it makes sure that those
   two options are enabled. However, if you build your Linux kernel
   outside of Buildroot, then it is your responsibility to enable
   those two options (if you fail to do so, your Buildroot system will
   not boot).

 * The third solution is *Dynamic using mdev*. This method also relies
   on the _devtmpfs_ virtual filesystem detailed above (so the
   requirement to have +CONFIG_DEVTMPFS+ and +CONFIG_DEVTMPFS_MOUNT+
   enabled in the kernel configuration still applies), but adds the
   +mdev+ userspace utility on top of it. +mdev+ is a program that is
   part of Busybox and that the kernel will call every time a device
   is added or removed. Thanks to the +/etc/mdev.conf+ configuration
   file, +mdev+ can be configured to, for example, set specific
   permissions or ownership on a device file, call a script or
   application whenever a device appears or disappears, etc.
   Basically, it allows _userspace_ to react to device addition and
   removal events. +mdev+ can for example be used to automatically
   load kernel modules when devices appear on the system. +mdev+ is
   also important if you have devices that require firmware, as it
   will be responsible for pushing the firmware contents to the
   kernel. +mdev+ is a lightweight implementation (with fewer
   features) of +udev+. For more details about +mdev+ and the syntax
   of its configuration file, see
   http://git.busybox.net/busybox/tree/docs/mdev.txt.

 * The fourth solution is *Dynamic using udev*. This method also
   relies on the _devtmpfs_ virtual filesystem detailed above, but
   adds the +udev+ userspace daemon on top of it. +udev+ is a daemon
   that runs in the background, and gets called by the kernel when a
   device is added to or removed from the system. It is a more
   heavyweight solution than +mdev+, but provides greater flexibility
   and is sometimes mandatory for some system components (systemd, for
   example). +udev+ is the mechanism used in most desktop Linux
   distributions. For more details about +udev+, see
   http://en.wikipedia.org/wiki/Udev.
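
To illustrate the *Static using device table* solution described in
the first item above, here is a sketch of what an additional device
table could contain. The device names, major/minor numbers and
permissions are examples only; see xref:makedev-syntax[] for the
authoritative syntax:

---------------------
# <name>    <type> <mode> <uid> <gid> <major> <minor> <start> <inc> <count>
/dev/ttyS0  c      666    0     0     4       64      0       0     -
/dev/fb0    c      640    0     0     29      0       0       0     -
---------------------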

The Buildroot developers' recommendation is to start with the *Dynamic
using devtmpfs only* solution, until you need userspace to be notified
when devices are added or removed, or need firmware to be loaded, in
which case *Dynamic using mdev* is usually a good solution.

init system
~~~~~~~~~~~

The _init_ program is the first userspace program started by the
kernel (it carries the PID number 1), and is responsible for starting
the userspace services and programs (for example: web server,
graphical applications, other network servers, etc.).

Buildroot allows you to use three different types of init systems,
which can be chosen from +System configuration+, +Init system+:

 * The first solution is *Busybox*. Among many other programs, Busybox
   has an implementation of a basic +init+ program, which is
   sufficient for most embedded systems. Enabling the
   +BR2_INIT_BUSYBOX+ option will ensure that Busybox builds and
   installs its +init+ program. This is the default solution in
   Buildroot. The Busybox +init+ program will read the +/etc/inittab+
   file at boot to know what to do. The syntax of this file can be
   found in http://git.busybox.net/busybox/tree/examples/inittab (note
   that the Busybox +inittab+ syntax is special: do not use random
   +inittab+ documentation from the Internet to learn about the
   Busybox one). The default +inittab+ in Buildroot is stored in
   +system/skeleton/etc/inittab+. Apart from mounting a few important
   filesystems, the main job of the default inittab is to start the
   +/etc/init.d/rcS+ shell script and start a +getty+ program, which
   provides a login prompt (see the sketch after this list).

 * The second solution is *systemV*. This solution uses the old
   traditional _sysvinit_ program, packaged in Buildroot in
   +package/sysvinit+. This was the solution used in most desktop
   Linux distributions, until they switched to more recent
   alternatives such as Upstart or systemd. +sysvinit+ also works with
   an +inittab+ file (which has a slightly different syntax than the
   one from Busybox). The default +inittab+ installed with this init
   solution is located in +package/sysvinit/inittab+.

 * The third solution is *systemd*. +systemd+ is the new generation
   init system for Linux. It does far more than traditional _init_
   programs: it has aggressive parallelization capabilities, uses
   socket and D-Bus activation for starting services, offers on-demand
   starting of daemons, keeps track of processes using Linux control
   groups, supports snapshotting and restoring of the system state,
   etc. +systemd+ will be useful on relatively complex embedded
   systems, for example ones requiring D-Bus and services
   communicating with each other. It is worth noting that +systemd+
   brings a fairly large number of big dependencies: +dbus+, +glib+
   and more. For more details about +systemd+, see
   http://www.freedesktop.org/wiki/Software/systemd.
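
To make the Busybox +inittab+ format more concrete, here is a minimal
sketch of such a file. The serial port name and baud rate are
assumptions for a typical board; the Busybox documentation linked
above is the authoritative reference:

---------------------
# Busybox inittab syntax: <id>::<action>:<process>
::sysinit:/etc/init.d/rcS
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
::ctrlaltdel:/sbin/reboot
::shutdown:/bin/umount -a -r
---------------------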

The solution recommended by Buildroot developers is to use the
*Busybox init* as it is sufficient for most embedded
systems. *systemd* can be used for more complex situations.