Chapter 4. The Build Process

Some CDL properties describe the consequences of manipulating configuration options. There are two main types of consequences. Typically enabling a configuration option results in one or more #define's in a configuration header file, and properties that affect this include define, define_proc and no_define. Enabling a configuration option can also affect the build process, primarily determining which files get built and added to the appropriate library. Properties related to the build process include compile and make. This chapter describes the whole build process, including details such as compiler flags and custom build steps.

Part of the overall design of the eCos component framework is that it can interact with a number of different build systems. The most obvious of these is GNU make: the component framework can generate one or more makefiles, and the user can then build the various packages simply by invoking make. However it should also be possible to build eCos by other means: the component framework can be queried about what is involved in building a given configuration, and this information can then be fed into the desired build system. Component writers should be aware of this possibility. Most packages will not be affected because the compile property can be used to provide all the required information, but care has to be taken when writing custom build steps.

4.1. Build Tree Generation

It is necessary to create an eCos configuration before anything can be built. With some tools such as the graphical configuration tool this configuration will be created in memory, and it is not essential to produce an ecos.ecc savefile first (although it is still very desirable to generate such a savefile at some point, to allow the configuration to be re-loaded later on). With other tools the savefile is generated first, for example using ecosconfig new, and then a build tree is generated using ecosconfig tree. The savefile contains all the information needed to recreate a configuration.

An eCos build actually involves three separate trees. The component repository acts as the source tree, and for application developers this should be considered a read-only resource. The build tree is where all intermediate files, especially object files, are created. The install tree is where the main library libtarget.a, the exported header files, and similar files end up. Following a successful build it is possible to take just the install tree and use it for developing an application: none of the files in the component repository or the build tree are needed for that. The build tree will be needed again only if the user changes the configuration. However the install tree does not contain copies of all of the documentation for the various packages, instead the documentation is kept only in the component repository.

By default the build tree, the install tree, and the ecos.ecc savefile all reside in the same directory tree. This is not a requirement: both the install tree and the savefile can be anywhere in the file system.

It is worth noting that the component framework does not separate the usual make and make install stages. A build always populates the install tree, and any make install step would be redundant.

The install tree will always begin with two directories, include for the exported header files and lib for the main library libtarget.a and other files such as the linker script. In addition there will be a subdirectory include/pkgconf containing the configuration header files, which are generated or updated at the same time the build tree is created or updated. More details of header file generation are given below. Additional include subdirectories such as sys and cyg/kernel will be created during the first build, when each package's exported header files are copied to the install tree. The install tree may also end up with additional subdirectories during a build, for example as a result of custom build steps.

The component framework does not define the structure of the build tree, and this may vary between build systems. It can be assumed that each package in the configuration will have its own directory in the build tree, and that this directory will be used for storing the package's object files and as the current directory for any build steps for that package. This avoids problems when custom build steps from different packages generate intermediate files which happen to have the same name.

Some build systems may allow application developers to copy a source file from the component repository to the build tree and edit the copy. This allows users to experiment with small changes, for example to add a couple of lines of debugging to a package, without having to modify the master copy in the component repository which could be shared by several projects or several people. Functionality such as this is transparent to component writers, and it is the responsibility of the build system to make sure that the right thing happens.

Note

There are some unresolved issues related to the build tree and install tree. Specifically, when updating an existing build or install tree, what should happen to unexpected files or directories? Suppose the user started with a configuration that included the math library, and the install tree contains header files include/math.h and include/sys/ieeefp.h. The user then removed the math library from the configuration and is updating the build tree. It is now desirable to remove these header files from the install tree, so that if any application code still attempts to use the math library this will fail at compile time rather than at link time. There will also be some object files in the existing libtarget.a library which are no longer appropriate, and there may be other files in the install tree as a result of custom build steps. The build tree will still contain a directory for the math library, which no longer serves any purpose.

However, it is also possible that some of the files in the build tree or the install tree were placed there by the user, in which case removing them automatically would be a bad idea.

At present the component framework does not keep track of exactly what should be present in the build and install trees, so it cannot readily determine which files or library members are obsolete and can safely be removed, and which ones are unexpected and need to be reported to the user. This will be addressed in a future release of the system.

4.2. Configuration Header File Generation

Configuration options can affect a build in two main ways. First, enabling a configuration option or other CDL entity can result in various files being built and added to a library, thus providing functionality to the application code. However this mechanism can only operate at a rather coarse grain, at the level of entire source files. Hence the component framework also generates configuration header files containing mainly C preprocessor #define directives. Package source code can then #include the appropriate header files and use #if, #ifdef and #ifndef directives to adapt accordingly. In this way configuration options can be used to enable or disable entire functions within a source file or just a single line, whichever is appropriate.

The configuration header files end up in the include/pkgconf subdirectory of the install tree. There will be one header file for the system as a whole, pkgconf/system.h, and there will be additional header files for each package, for example pkgconf/kernel.h. The header files are generated when creating or updating the build and install trees, which needs to happen after every change to the configuration.

The component framework processes each package in the configuration one at a time. The exact order in which the packages are processed is not defined, so the order in which #define's will end up in the global pkgconf/system.h header may vary. However for any given configuration the order should remain consistent until packages are added to or removed from the system. This avoids unnecessary changes to the global header file and hence unnecessary rebuilds of the packages and of application code because of header file dependency handling.

Within a given package the various components, options and interfaces will be processed in the order in which they were defined in the corresponding CDL scripts. Typically the data in the configuration headers consists only of a sequence of #define's so the order in which these are generated is irrelevant, but some properties such as define_proc can be used to add arbitrary data to a configuration header and hence there may be dependencies on the order. It should be noted that re-parenting an option below some other package has no effect on which header file will contain the corresponding #define: the preprocessor directives will always end up in the header file for the package that defines the option, or in the global configuration header.

There are six properties which affect the process of generating header files: define_header, no_define, define_format, define, if_define, and define_proc.

The define_header property can only occur in the body of a cdl_package command and specifies the name of the header file which should contain the package's configuration data, for example:

cdl_package <some_package> {
  …
  define_header xyzzy.h
}

Given such a define_header property the component framework will use the file pkgconf/xyzzy.h for the package's configuration data. If a package does not have a define_header property then a suitable file name is constructed from the package's name. This involves:

  1. All characters in the package name up to and including the first underscore are removed. For example CYGPKG_KERNEL is converted to KERNEL, and CYGPKG_HAL_ARM is converted to HAL_ARM.

  2. Any upper case letters in the resulting string will be converted to lower case, yielding e.g. kernel and hal_arm.

  3. A .h suffix is appended, yielding e.g. kernel.h and hal_arm.h.

Because of the naming restrictions on configuration options, this should result in a valid filename. There is a small possibility of a file name clash, for example CYGPKG_PLUGH and CYGPKG_plugh would both end up trying to use the same header file pkgconf/plugh.h, but the use of lower case letters for package names violates the naming conventions. It is not legal to use the define_header property to put the configuration data for several packages in a single header file: the resulting behaviour is undefined.

Once the name of the package's header file has been determined and the file has been opened, the various components, options and interfaces in the package will be processed starting with the package itself. The following steps are involved:

  1. If the current option or other CDL entity is inactive or disabled, the option is ignored for the purposes of header file generation. #define's are only generated for options that are both active and enabled.

  2. The next step is to generate a default #define for the current option. If this option has a no_define property then the default #define is suppressed, and processing continues for define, if_define and define_proc properties.

    1. The header file appropriate for the default #define is determined. For a cdl_package this will be pkgconf/system.h, for any other option this will be the package's own header file. The intention here is that packages and application code can always determine which packages are in the configuration by #include'ing pkgconf/system.h. The C preprocessor lacks any facilities for including a header file only if it exists, and taking appropriate action otherwise.

    2. For options with the flavors bool or none, a single #define will be generated. This takes the form:

      #define <option> 1

      For example:

      #define CYGFUN_LIBC_TIME_POSIX 1

      Package source code can check whether or not an option is active and enabled by using the #ifdef, #ifndef or #if defined(…) directives.

    3. For options with the flavors data or booldata, either one or two #define's will be generated. The first of these may be affected by a define_format property. If this property is not defined then the first #define will take the form:

      #define <option> <value>

      For example:

      #define CYGNUM_LIBC_ATEXIT_HANDLERS 32

      Package source code can examine this value using the #if directive, or by using the symbol in code such as:

      for (i = 0; i < CYGNUM_LIBC_ATEXIT_HANDLERS; i++) {
        …
      }

      It must be noted that the #define will be generated only if the corresponding option is both active and enabled. Options with the data flavor are always enabled but may not be active. Code like the above should be written only if it is known that the symbol will always be defined, for example if the corresponding source file will only get built if the containing component is active and enabled. Otherwise the use of additional #ifdef or similar directives will be necessary.

    4. If there is a define_format property then this controls how the option's value will appear in the header file. Given a format string such as %08x and a value 42, the component framework will execute the Tcl command format %08x 42 and the result will be used for the #define's value. It is the responsibility of the component writer to make sure that this Tcl command will be valid given the format string and the legal values for the option.
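
      By way of illustration, an option using define_format might look something like the following sketch (the option name and value are purely hypothetical):

      cdl_option CYGNUM_XYZZY_MASK {
        # hypothetical option, shown only to illustrate define_format
        flavor        data
        default_value 42
        define_format "0x%04x"
        …
      }

      With the default value the framework would execute the Tcl command format "0x%04x" 42, giving a first #define of the form #define CYGNUM_XYZZY_MASK 0x002a.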

    5. In addition a second #define may or may not be generated. This will take the form:

      #define <option>_<value>

      For example:

      #define CYGNUM_LIBC_ATEXIT_HANDLERS_32

      The #define will be generated only if it would result in a valid C preprocessor symbol. If the value is a string such as "/dev/ser0" then the #define would be suppressed. This second #define is not particularly useful for numerical data, but can be valuable in other circumstances. For example if the legal values for an option XXX_COLOR are red, green and blue then code like the following can be used:

      #ifdef XXX_COLOR_red
        …
      #endif
      #ifdef XXX_COLOR_green
        …
      #endif
      #ifdef XXX_COLOR_blue
        …
      #endif

      The expression syntax provided by the C preprocessor is limited to numerical data and cannot perform string comparisons. By generating two #define's in this way it is possible to work around this limitation of the C preprocessor. However some care has to be taken: if a component writer also defined a configuration option XXX_COLOR_green then there will be confusion. Since such a configuration option violates the naming conventions, the problem is unlikely to arise in practice.
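
      The corresponding option definition might look something like this (a sketch only, reusing the hypothetical XXX_COLOR names from above):

      cdl_option XXX_COLOR {
        # hypothetical option used only for this illustration
        flavor        data
        legal_values  { "red" "green" "blue" }
        default_value { "red" }
        …
      }

      If the option's current value is green then, in addition to the usual first #define, the header would contain #define XXX_COLOR_green and the tests shown above would behave as expected.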

  3. For some options it may be useful to generate one or more additional #define's or, in conjunction with the no_define property, to define a symbol with a name different from the option's name. This can be achieved with the define property, which takes the following form:

    define [-file=<filename>] [-format=<format>] <symbol>

    For example:

    define FOPEN_MAX

    This will result in something like:

    #define FOPEN_MAX 8
    #define FOPEN_MAX_8
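
    A typical use combines define with the no_define property, so that only the renamed symbol ends up in the configuration header. A sketch of what this might look like (the option name and default value are indicative only):

    cdl_option CYGNUM_LIBC_STDIO_FOPEN_MAX {
      # indicative only: a data-flavored option renamed to a standard symbol
      flavor        data
      default_value 8
      no_define
      define        FOPEN_MAX
      …
    }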

    The specified symbol must be a valid C preprocessor symbol. Normally the #define will end up in the same header file as the default one, in other words pkgconf/system.h in the case of a cdl_package, or the package's own header file for any other option. The -file option can be used to change this. At present the only legal value is system.h, for example:

    define -file=system.h <symbol>

    This will cause the #define to end up in the global configuration header rather than in the package's own header. Use of this facility should be avoided since it is very rarely necessary to make options globally visible.

    The define property takes another option, -format, to provide a format string.

    define -format=%08x <symbol>

    This should only be used for options with the data or booldata flavor, and has the same effect as the define_format property has on the default #define.

    define properties are processed in the same way as the default #define. For options with the bool or none flavors a single #define will be generated using the value 1. For options with the data or booldata flavors either one or two #define's will be generated.

  4. After processing all define properties, the component framework will look for any if_define properties. These take the following form:

    if_define [-file=<filename>] <symbol1> <symbol2>

    For example:

    if_define CYGSRC_KERNEL CYGDBG_USE_ASSERTS

    The following will be generated in the configuration header file:

    #ifdef CYGSRC_KERNEL
    # define CYGDBG_USE_ASSERTS
    #endif

    Typical kernel source code would begin with the following construct:

    #define CYGSRC_KERNEL 1
    #include <pkgconf/kernel.h>
    #include <cyg/infra/cyg_ass.h>

    The infrastructure header file cyg/infra/cyg_ass.h only checks for symbols such as CYGDBG_USE_ASSERTS, and has no special knowledge of the kernel or any other package. The if_define property will only affect code that defines the symbol CYGSRC_KERNEL, so typically only kernel source code. If the option is enabled then assertion support will be enabled for the kernel source code only. If the option is inactive or disabled then kernel assertions will be disabled. Assertions in other packages are not affected. Thus the if_define property allows control over assertions, tracing, and similar facilities at the level of individual packages, or at finer levels such as components or even single source files if desired.

    Note

    Current eCos packages do not yet make use of this facility. Instead there is a single global configuration option CYGDBG_USE_ASSERTS which is used to enable or disable assertions for all packages. This issue should be addressed in a future release of the system.

    As with the define property, the if_define property takes an option -file with a single legal value system.h. This allows the output to be redirected to pkgconf/system.h if and when necessary.

  5. The final property that is relevant to configuration header file generation is define_proc. This takes a single argument, a Tcl fragment that can add arbitrary data to the global header pkgconf/system.h and to the package's own header. When the define_proc script is invoked two variables will be set up to allow access to these headers: cdl_header will be a channel to the package's own header file, for example pkgconf/kernel.h; cdl_system_header will be a channel to pkgconf/system.h. A typical define_proc script will use the Tcl puts command to output data to one of these channels, for example:

    cdl_option <name> {
      …
      define_proc {
        puts $::cdl_header "#define XXX 1"
      }
    }

    Note

    In the current implementation the use of define_proc is limited because the Tcl script cannot access any of the configuration data. Therefore the script is limited to writing constant data to the configuration headers. This is a major limitation which will be addressed in a future release of the component framework.

Note

Generating C header files with #define's for the configuration data suffices for existing packages written in some combination of C, C++ and assembler. It can also be used in conjunction with some other languages, for example by first passing the source code through the C preprocessor and feeding the result into the appropriate compiler. In future versions of the component framework additional programming languages such as Java may be supported, and the configuration data may also be written to files in some format other than C preprocessor directives.

Note

At present there is no way for application or package source code to get hold of all the configuration details related to the current hardware. Instead that information is spread over various different configuration headers for the HAL and device driver packages, with some of the information going into pkgconf/system.h. It is possible that in some future release of the system there will be another global configuration header file pkgconf/hardware.h which either contains the configuration details for the various hardware-specific packages or which #include's all the hardware-specific configuration headers. The desirability and feasibility of such a scheme are still to be determined. To avoid future incompatibility problems as a result of any such changes, it is recommended that all hardware packages (in other words, packages containing the hardware property) use the define_header property to specify explicitly which configuration header should be generated.
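
For example, a hardware package might contain something along the following lines (a sketch only; the package name and header file name are illustrative):

cdl_package CYGPKG_HAL_ARM_PID {
  # illustrative platform HAL package
  hardware
  define_header hal_arm_pid.h
  …
}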

4.2.1. The system.h Header

Typically configuration header files are #include'd only by the package's source code at build time, or by a package's exported header files if the interface provided by the package may be affected by a configuration option. There should be no need for application code to know the details of individual configuration options, instead the configuration should specifically meet the needs of the application.

There are always exceptions. Application code may want to adapt to configuration options, for example to do different things for ROM and RAM booting systems, or when it is necessary to support several different target boards. This is especially true if the code in question is really re-usable library code which has not been converted to an eCos package, and hence cannot use any CDL facilities.

A major problem here is determining which packages are in the configuration: attempting to #include a header file such as pkgconf/net.h when it is not known for certain that that particular package is part of the configuration will result in compilation errors. The global header file pkgconf/system.h serves to provide such information, so application code can use techniques like the following:

#include <pkgconf/system.h>
#ifdef CYGPKG_NET
# include <pkgconf/net.h>
#endif

This will compile correctly irrespective of the eCos configuration, and subsequent code can use #ifdef or similar directives on CYGPKG_NET or any of the configuration options in that package.

In addition to determining whether or not a package is present, the global configuration header file can also be used to find out the specific version of a package that is being used. This can be useful if a more recent version exports additional functionality. It may also be necessary to adapt to incompatible changes in the exported interface or to changes in behaviour. For each package the configuration system will typically #define three symbols, for example for a V1.3.1 release:

#define CYGNUM_NET_VERSION_MAJOR 1
#define CYGNUM_NET_VERSION_MINOR 3
#define CYGNUM_NET_VERSION_RELEASE 1

There are a number of problems associated with such version #define's. The first restriction is that the package must follow the standard naming conventions, so the package name must be of the form xxxPKG_yyy. The three characters immediately preceding the first underscore must be PKG, and will be replaced with NUM when generating the version #define's. If a package does not follow the naming convention then no version #define's will be generated.

Assuming the package does follow the naming conventions, the configuration tools will always generate three version #define's for the major, minor, and release numbers. The symbol names are obtained from the package name by replacing PKG with NUM and appending _VERSION_MAJOR, _VERSION_MINOR and _VERSION_RELEASE. It is assumed that the resulting symbols will not clash with any configuration option names. The values for the #define's are determined by searching the version string for sequences of digits, optionally preceded by a minus sign. It is possible that some or all of the numbers are absent in any given version string, in which case -1 will be used in the #define. For example, given a version string of V1.12beta, the major version number is 1, the minor number is 12, and the release number is -1. Given a version string of beta all three numbers would be set to -1.

There is special case code for the version current, which typically corresponds to a development version obtained via anonymous CVS or similar means. The configuration system has special built-in knowledge of this version, and will assume it is more recent than any specific release number. The global configuration header defines a special symbol CYGNUM_VERSION_CURRENT, and this will be used as the major version number when version current of a package is used:

#define CYGNUM_VERSION_CURRENT 0x7fffff00
...
#define CYGNUM_INFRA_VERSION_MAJOR CYGNUM_VERSION_CURRENT
#define CYGNUM_INFRA_VERSION_MINOR -1
#define CYGNUM_INFRA_VERSION_RELEASE -1

The large number used for CYGNUM_VERSION_CURRENT should ensure that major version comparisons work as expected, while still allowing for a small amount of arithmetic in case that proves useful.

It should be noted that this implementation of version #define's will not cope with all version number schemes. However for many cases it should suffice.

4.3. Building eCos

The primary goal of an eCos build is to produce the library libtarget.a. A typical eCos build will also generate a number of other targets: extras.o, startup code vectors.o, and a linker script. Some packages may cause additional libraries or targets to be generated. The basic build process involves a number of different phases with corresponding priorities. There are a number of predefined priorities:

Priority  Action
0         Export header files
100       Process compile properties and most make_object custom build steps
200       Generate libraries
300       Process make custom build steps

Generation of the extras.o file, the startup code and the linker script actually happens via make custom build steps, typically defined in appropriate HAL packages. The component framework has no special knowledge of these targets.

By default custom build steps for a make_object property happen during the same phase as most compilations, but this can be changed using a -priority option. Similarly custom build steps for a make property happen at the end of a build, but this can also be changed with a -priority option. For example a priority of 50 can be used to run a custom build step between the header file export phase and the main compilation phase. Custom build steps are discussed in more detail below.

Some build systems may run several commands of the same priority in parallel. For example files listed in compile properties may get compiled in parallel, concurrently with make_object custom build steps with default priorities. Since most of the time for an eCos build involves processing compile properties, this allows builds to be speeded up on suitable host hardware. All build steps for a given phase will complete before the next phase is started.

4.3.1. Updating the Build Tree

Some build systems may involve a phase before the header files get exported, to update the build and install trees automatically when there has been a change to the configuration savefile ecos.ecc. This is useful mainly for application developers using the command line tools: it would allow users to create the build tree only once, and after any subsequent configuration changes the tree would be updated automatically by the build system. The facility would be analogous to the --enable-maintainer-mode option provided by the autoconf and automake programs. At present no eCos build system implements this functionality, but it is likely to be added in a future release.

4.3.2. Exporting Public Header Files

The first compulsory phase involves making sure that there is an up to date set of header files in the install tree. Each package can contain some number of header files defining the exported interface. Applications should only use exported functionality. A package can also contain some number of private header files which are only of interest to the implementation, and which should not be visible to application code. The various packages that go into a particular configuration can be spread all over the component repository. In theory it might be possible to make all the exported header files accessible by having a lengthy -I header file search path, but this would be inconvenient both for building eCos and for building applications. Instead all the relevant header files are copied to a single location, the include subdirectory of the install tree. The process involves the following:

  1. The install tree, for example /usr/local/ecos/install, and its include subdirectory /usr/local/ecos/install/include will typically be created when the build tree is generated or updated. At the same time configuration header files will be written to the pkgconf subdirectory, for example /usr/local/ecos/install/include/pkgconf, so that the configuration data is visible to all the packages and to application code that may wish to examine some of the configuration options.

  2. Each package in the configuration is examined for exported header files. The exact order in which the packages are processed is not defined, but should not matter.

    1. If the package has an include_files property then this lists all the exported header files:

      cdl_package <some_package> {
        …
        include_files header1.h header2.h
      }    

      If no arguments are given then the package does not export any header files.

      cdl_package <some_package> {
        …
        include_files
      }    

      The listed files may be in an include subdirectory within the package's hierarchy, or they may be relative to the package's toplevel directory. The include_files property is intended mainly for very simple packages. It can also be useful when converting existing code to an eCos package, to avoid rearranging the sources.

    2. If there is no include_files property then the component framework will look for an include subdirectory in the package, as per the layout conventions. All files, including those in subdirectories, will be treated as exported header files. For example, the math library package contains files include/math.h and include/sys/ieeefp.h, both of which will be exported to the install tree.

    3. As a last resort, if there is neither an include_files property nor an include subdirectory, the component framework will search the package's toplevel directory and all of its subdirectories for files with one of the following suffixes: .h, .hxx, .inl or .inc. All such files will be interpreted as exported header files.

      This last resort rule could cause confusion for packages which have no exported header files but which do contain one or more private header files. For example a typical device driver simply implements an existing interface rather than define a new one, so it does not need to export a header file. However it may still have one or more private header files. Such packages should use an include_files property with no arguments.

  3. If the package has one or more exported header files, the next step is to determine where the files should end up. By default all exported header files will just end up relative to the install tree's include subdirectory. For example the math library's math.h header would end up as /usr/local/ecos/install/include/math.h, and the sys/ieeefp.h header would end up as /usr/local/ecos/install/include/sys/ieeefp.h. This behaviour is correct for packages like the C library where the interface is defined by appropriate standards. For other packages this behaviour can lead to file name clashes, and the include_dir property should be used to avoid this:

    cdl_package CYGPKG_KERNEL {
      include_dir cyg/kernel
    }

    This means that the kernel's exported header file include/kapi.h should be copied to /usr/local/ecos/install/include/cyg/kernel/kapi.h, where it is very unlikely to clash with a header file from some other package.

  4. For typical application developers there will be little or no need for the installed header files to change after the first build. Changes will be necessary only if packages are added to or removed from the configuration. For component writers, the build system should detect changes to the master copy of the header file source code and update the installed copies automatically during the next build. The build system is expected to perform a header file dependency analysis, so any source files affected should get rebuilt as well.

  5. Some build systems may provide additional support for application developers who want to make minor changes to a package, especially for debugging purposes. A header file could be copied from the component repository (which for application developers is assumed to be a read-only resource) into the build tree and edited there. The build system would detect a more recent version of such a header file in the build tree and install it. Care would have to be taken to recover properly if the modified copy in the build tree is subsequently removed, in order to revert to the original behaviour.

  6. When updating the install tree's include subdirectory, the build system may also perform a clean-up operation. Specifically, it may check for any files which do not correspond to known exported header files and delete them.

Note

At present there is no defined support in the build system for defining custom build steps that generate exported header files. Any attempt to use the existing custom build step support may fall foul of unexpected header files being deleted automatically by the build system. This limitation will be addressed in a future release of the component framework, and may require changing the priority for exporting header files so that a custom build step can happen first.

4.3.3. Compiling

Once there are up to date copies of all the exported header files in the build tree, the main build can proceed. Most of this involves compiling source files listed in compile properties in the CDL scripts for the various packages, for example:

cdl_package CYGPKG_ERROR {
  display       "Common error code support"
  compile       strerror.cxx
  …
}

compile properties may appear in the body of a cdl_package, cdl_component, cdl_option or cdl_interface. If the option or other CDL entity is active and enabled, the property takes effect. If the option is inactive or disabled the property is ignored. It is possible for a compile property to list multiple source files, and it is also possible for a given CDL entity to contain multiple compile properties. The following three examples are equivalent:

cdl_option <some_option> {
  …
  compile file1.c file2.c file3.c
}
cdl_option <some_option> {
  …
  compile file1.c
  compile file2.c
  compile file3.c
}
cdl_option <some_option> {
  …
  compile file1.c file2.c
  compile file3.c
}

Packages that follow the directory layout conventions should have a subdirectory src, and the component framework will first look for the specified files there. Failing that it will look for the specified files relative to the package's root directory. For example if a package contains a source file strerror.cxx then the following two lines are equivalent:

compile strerror.cxx
compile src/strerror.cxx

In the first case the component framework will find the file immediately in the package's src subdirectory. In the second case the framework will first look for a file src/src/strerror.cxx, and then for src/strerror.cxx relative to the package's root directory. The result is the same.

The file names may be relative paths, allowing the source code to be split over multiple directories. For example if a package contains a file src/sync/mutex.cxx then the corresponding CDL entry would be:

compile sync/mutex.cxx

All the source files relevant to the current configuration will be identified when the build tree is generated or updated, and added to the appropriate makefile (or its equivalent for other build systems). The actual build will involve a rule of the form:

<object file> : <source file>
        $(CC) -c $(INCLUDE_PATH) $(CFLAGS) -o $@ $<

The component framework has built-in knowledge for processing source files written in C, C++ or assembler. These should have a .c, .cxx and .S suffix respectively. The current implementation has no simple mechanism for extending this with support for other languages or for alternative suffixes, but this should be addressed in a future release.

The compiler command that will be used is something like arm-eabi-gcc. This consists of a command prefix, in this case arm-eabi, and a specific command such as gcc. The command prefix will depend on the target architecture and is controlled by a configuration option in the appropriate HAL package. It will have a sensible default value for the current architecture, but users can modify this option when necessary. The command prefix cannot be changed on a per-package basis, since it is usually essential that all packages are built with a consistent set of tools.

The $(INCLUDE_PATH) header file search path consists of at least the following:

  1. The include directory in the install tree. This allows source files to access the various header files exported by all the packages in the configuration, and also the configuration header files.

  2. The current package's root directory. This ensures that all files in the package are accessible at build time.

  3. The current package's src subdirectory, if it is present. Generally all files to be compiled are located in or below this directory. Typically this is used to access private header files containing implementation details only.

The compiler flags $(CFLAGS) are determined in two steps. First the appropriate HAL package will provide a configuration option defining the global flags. Typically this includes flags that are needed for the target processor, for example -mcpu=arm9, various flags related to warnings, debugging and optimization, and flags such as -finit-priority which are needed by eCos itself. Users can modify the global flags option as required. In addition it is possible for existing flags to be removed from and new flags to be added to the current set on a per-package basis, again by means of user-modifiable configuration options. More details are given below.

Component writers can assume that the build system will perform full header file dependency analysis, including dependencies on configuration headers, but the exact means by which this happens is implementation-defined. Typical application developers are unlikely to modify exported or private header files, but configuration headers are likely to change as the configuration is changed to better meet the needs of the application. Full header file dependency analysis also makes things easier for the component writers themselves.

The current directory used during a compilation is an implementation detail of the build system. However it can be assumed that each package will have its own directory somewhere in the build tree, to prevent file name clashes, that this will be the current directory, and that intermediate object files will end up here.

4.3.4. Generating the Libraries

Once all the compile and make_object properties have been processed and the required object files have been built or rebuilt, these can be collected together in one or more libraries. The archiver will be the ar command corresponding to the current architecture, for example powerpc-eabi-ar. By default all of the object files will end up in a single library libtarget.a. This can be changed on a per-package basis using the library property in the body of the corresponding cdl_package command, for example:

cdl_package <SOME_PACKAGE> {
  …
  library  libSomePackage.a
}

However using different libraries for each package should be avoided. It makes things more difficult for application developers since they now have to link the application code with more libraries, and possibly even change this set of libraries when packages are added to or removed from the configuration. The use of a single library libtarget.a avoids any complications.

It is also possible to change the target library for individual files, using a -library option with the corresponding compile or make_object property. For example:

compile -library=libSomePackage.a hello.c
make_object -library=libSomePackage.a {
  …
}

Again this should be avoided because it makes application development more difficult. There is one special library which can be used freely, libextras.a, which is used to generate the extras.o file as described below.

The order in which object files end up in a library is not defined. Typically each library will be created directly in the install tree, since there is little point in generating a file in the build tree and then immediately copying it to the install tree.

4.3.5. The extras.o file

Package source files normally get compiled and then added to a library, by default libtarget.a, which is then linked with the application code. Because of the usual rules for linking with libraries, augmented by the use of link-time garbage collection, this means that code will only end up in the final executable if there is a direct or indirect reference to it in the application. Usually this is the desired behaviour: if the application does not make any use of say kernel message boxes, directly or indirectly, then that code should not end up in the final executable taking up valuable memory space.

In a few cases it is desirable for package code to end up in the final executable even if there are no direct or indirect references. For example, device driver functions are often not called directly. Instead the application will access the device via the string "/dev/xyzzy" and call the device functions indirectly. This will be impossible if the functions have been removed at link-time.

Another example involves static C++ objects. It is possible to have a static C++ object, preferably with a suitable constructor priority, where all of the interesting work happens as a side effect of running the constructor. For example a package might include a monitoring thread or a garbage collection thread created from inside such a constructor. Without a reference by the application to the static object the latter will never get linked in, and the package will not function as expected.

A third example would be copyright messages. A package vendor may want to insist that all products shipped using that package include a particular message in memory, even though many users of that package will object to such a restriction.

To meet requirements such as these the build system provides support for a file extras.o, which always gets linked with the application code via the linker script. Because it is an object file rather than a library everything in the file will be linked in. The extras.o file is generated at the end of a build from a library libextras.a, so packages can put functions and variables in suitable source files and add them to that library explicitly:

compile -library=libextras.a xyzzy.c
compile xyzzy_support.c

In this example xyzzy.o will end up in libextras.a, and hence in extras.o and in the final executable. xyzzy_support.o will end up in libtarget.a as usual, and is subject to linker garbage collection.

4.3.6. Compilers and Flags

Caution

Some of the details of compiler selection and compiler flags described below are subject to change in future revisions of the component framework, although every reasonable attempt will be made to avoid breaking backwards compatibility.

The build system needs to know what compiler to use, what compiler flags should be used for different stages of the build and so on. Much of this information will vary from target to target, although users should be able to override this when appropriate. There may also be a need for some packages to modify the compiler flags. All platform HAL packages should define a number of options with well-known names, along the following lines (any existing platform HAL package can be consulted for a complete example):

cdl_component CYGBLD_GLOBAL_OPTIONS {
  flavor  none
  parent  CYGPKG_NONE
  …
  cdl_option CYGBLD_GLOBAL_COMMAND_PREFIX {
    flavor  data
    default_value { "arm-eabi" }
    …
  }
  cdl_option CYGBLD_GLOBAL_CFLAGS {
    flavor  data
    default_value "-Wall -g -O2 …"
    …
  }
  cdl_option CYGBLD_GLOBAL_LDFLAGS {
    flavor  data
    default_value "-g -nostdlib -Wl,--gc-sections …"
    …
  }
}

The CYGBLD_GLOBAL_OPTIONS component serves to collect together all global build-related options. It has the flavor none since disabling all of these options would make it impossible to build anything and hence is not useful. It is parented immediately below the root of the configuration hierarchy, thus making sure that it is readily accessible in the graphical configuration tool and, for command line users, in the ecos.ecc save file.

Note

Currently the parent property lists a parent of CYGPKG_NONE, rather than an empty string. This could be unfortunate if there was ever a package with that name. The issue will be addressed in a future release of the component framework.

The option CYGBLD_GLOBAL_COMMAND_PREFIX defines which tools should be used for the current target. Typically this is determined by the processor on the target hardware. In some cases a given target board may be able to support several different processors, in which case the default_value expression could select a different toolchain depending on some other option that controls which particular processor is in use. CYGBLD_GLOBAL_COMMAND_PREFIX is modifiable rather than calculated, so users can override this when necessary.
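
For a board that can be fitted with more than one processor, the default_value expression might look something like the following sketch (the controlling option and command prefixes are hypothetical):

cdl_option CYGBLD_GLOBAL_COMMAND_PREFIX {
  # hypothetical example: pick a toolchain based on the processor fitted
  flavor        data
  default_value { (CYGHWR_HAL_XYZZY_VARIANT == "FOO") ? "foo-elf" : "bar-elf" }
  …
}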

Given a command prefix such as arm-eabi, all C source files will be compiled with arm-eabi-gcc, all C++ sources will be built using arm-eabi-g++, and arm-eabi-ar will be used to generate the library. This is in accordance with the usual naming conventions for GNU cross-compilers and similar tools. For the purposes of custom build steps, tokens such as $(CC) will be set to arm-eabi-gcc.

The next option, CYGBLD_GLOBAL_CFLAGS, is used to provide the initial value of $(CFLAGS). Some compiler flags such as -Wall and -g are likely to be used on all targets. Other flags such as -mcpu=arm7tdmi will be target-specific. Again this is a modifiable option, so the user can switch from say -O2 to -Os if desired. The option CYGBLD_GLOBAL_LDFLAGS serves the same purpose for $(LDFLAGS) and linking. It is used primarily when building test cases or possibly for some custom build steps, since building eCos itself generally involves building one or more libraries rather than executables.

Some packages may wish to add certain flags to the global set, or possibly remove some flags. This can be achieved by having appropriately named options in the package, for example:

cdl_component CYGPKG_KERNEL_OPTIONS {
  display "Kernel build options"
  flavor  none
  …
  cdl_option CYGPKG_KERNEL_CFLAGS_ADD {
    display "Additional compiler flags"
    flavor  data
    default_value { "" }
    …
  }
  cdl_option CYGPKG_KERNEL_CFLAGS_REMOVE {
    display "Suppressed compiler flags"
    flavor  data
    default_value { "" }
    …
  }
  cdl_option CYGPKG_KERNEL_LDFLAGS_ADD {
    display "Additional linker flags"
    flavor  data
    default_value { "" }
    …
  }
  cdl_option CYGPKG_KERNEL_LDFLAGS_REMOVE {
    display "Suppressed linker flags"
    flavor  data
    default_value { "" }
    …
  }
}

In this example the kernel does not modify the global compiler flags by default, but it is possible for the users to modify the options if desired. The value of $(CFLAGS) that is used for the compilations and custom build steps in a given package is determined as follows:

  1. Start with the global settings from CYGBLD_GLOBAL_CFLAGS, for example -g -O2.

  2. Remove any flags specified in the per-package CFLAGS_REMOVE option, if any. For example if -O2 should be removed for this package then $(CFLAGS) would now have a value of just -g.

  3. Then concatenate the flags specified by the per-package CFLAGS_ADD option, if any. For example if -Os should be added for the current package then the final value of $(CFLAGS) will be -g -Os.

$(LDFLAGS) is determined in much the same way.

Note

The way compiler flags are handled at present has numerous limitations that need to be addressed in a future release, although it should suffice for nearly all cases. For the time being custom build steps and in particular the make_object property can be used to work around the limitations.

Amongst the issues, there is a specific problem with package encapsulation. For example the math library imposes some stringent requirements on the compiler in order to guarantee exact IEEE behavior, and may need special flags on a per-architecture basis. One way of handling this is to have CYGPKG_LIBM_CFLAGS_ADD and CYGPKG_LIBM_CFLAGS_REMOVE default_value expressions which depend on the target architecture, but such expressions may have to updated for each new architecture. An alternative approach would allow the architectural HAL package to modify the default_value expressions for the math library, but this breaks encapsulation. A third approach would allow some architectural HAL packages to define one or more special options with well-known names, and the math library could check if these options were defined and adjust the default values appropriately. Other packages with floating point requirements could do the same. This approach also has scalability issues, in particular how many such categories of options would be needed? It is not yet clear how best to resolve such issues.

Note

When generating a build tree it would be desirable for the component framework to output details of the tools and compiler flags in a format that can be re-used for application builds, for example a makefile fragment. This would make it easier for application developers to use the same set of flags as were used for building eCos itself, thus avoiding some potential problems with incompatible compiler flags.

4.3.7. Custom Build Steps

Caution
  • Some of the details of custom build steps as described below are subject to change in future revisions of the component framework, although every reasonable attempt will be made to avoid breaking backwards compatibility.

  • The first line in the make and make_object blocks introduced below defines the make target and its dependencies, with the remaining lines specifying the commands. These commands will always be tab-indented in the resulting makefile, and as a result GNU make directives such as ifeq (...) are illegal because they cannot be indented. It is possible to work around this CDL deficiency in some instances using make's shell syntax support. For an example, see the eCos custom build step used to create the mk_romfs command within the CYGPKG_FS_ROM package located in $ECOS_REPOSITORY/packages/fs/rom/<version>/cdl/romfs.cdl.

For most packages simply listing one or more source files in a compile property is sufficient. These files will get built using the appropriate compiler and compiler flags and added to a library, which then gets linked with application code. A package that can be built in this way is likely to be more portable to different targets and build environments, since it avoids build-time dependencies. However some packages have special needs, and the component framework supports custom build steps to allow for these needs. There are two properties related to this, make and make_object, and both take the following form:

make {
  <target_filepath> : <dependency_filepath> …
    <command>
    ...
}

Although this may look like makefile syntax, and although some build environments will indeed involve generating makefiles and running make, this is not guaranteed. It is possible for the component framework to be integrated with some other build system, and custom build steps should be written with that possibility in mind. Each custom build step involves a target, some number of dependency files, and some number of commands. If the target is not up to date with respect to one or more of the dependencies then the commands need to be executed.

  1. Only one target can be specified. For a make_object property this target must be an object file. For a make property it can be any file. In both cases it must refer to a physical file, the use of phony targets is not supported. The target should not be an absolute path name. If the generated file needs to end up in the install tree then this can be achieved using a <PREFIX> token, for example:

    make {
      <PREFIX>/lib/mytarget : …
        ...
    }

    When the build tree is generated and the custom build step is added to the makefile (or whatever build system is used) <PREFIX> will be replaced with the absolute path to the install tree.

  2. All the dependencies must also refer to physical files, not to phony targets. These files may be in the source tree. The <PACKAGE> token can be used to indicate this: when the build tree is generated this token will be replaced with the absolute path to the package's root directory in the component repository, for example:

    make_object {
      xyzzy.o : <PACKAGE>/src/xyzzy.c
        …
    }

    Note

    The token <PACKAGE> can only be used in the dependencies list and must not be used in a target name as it refers to the package directory in the source repository.

    If the component repository was installed in /usr/local/ecos and this custom build step existed in version 1_5 of the kernel, <PACKAGE> would be replaced with /usr/local/ecos/packages/kernel/v1_5.

    Alternatively the dependencies may refer to files that are generated during the build. These may be object files resulting from compile properties or other make_object properties, or they may be other files resulting from a make property, for example:

    compile plugh.c
    make_object {
      xyzzy.o : plugh.o
        …
    }
  3. No other token or makefile variables may be used in the target or dependency file names. Also conditionals such as ifneq and similar makefile functionality must not be used.

  4. Similarly the list of commands must not use any makefile conditionals or similar functionality. A number of tokens can be used to provide access to target-specific or environmental data. Note that these tokens look like makefile variables, unlike the <PREFIX> and <PACKAGE> tokens mentioned earlier:

    Token              Purpose                                Example value
    $(AR)              the GNU archiver                       mips-tx39-elf-ar
    $(CC)              the GNU compiler                       sh-elf-gcc
    $(CFLAGS)          compiler flags                         -O2 -Wall
    $(COMMAND_PREFIX)  the triplet prefix                     mn10300-elf-
    $(INCLUDE_PATH)    header file search path                -I. -Isrc/misc
    $(LDFLAGS)         linker flags                           -nostdlib -Wl,-static
    $(OBJCOPY)         the objcopy utility                    arm-eabi-objcopy
    $(PREFIX)          location of the install tree           /home/fred/ecos-install
    $(REPOSITORY)      location of the component repository   /home/fred/ecos/packages

    In addition commands in a custom build step may refer to the target and the dependencies using $@, $<, $^ and $*, all of which behave as per GNU make syntax. The commands will execute in a suitable directory in the build tree.

  5. The current directory used during a custom build step is an implementation detail of the build system. However it can be assumed that each package will have its own directory somewhere in the build tree, to prevent file name clashes, and that this will be the current directory. In addition any object files generated as a result of compile properties will be located here as well, which is useful for custom build steps that depend on a .o file previously generated.

    Any temporary files created by a custom build step should be generated in the build tree (in or under the current directory). Such files should be given a .tmp file extension to ensure that they are deleted during a make clean or equivalent operation.

    If a package contains multiple custom build steps with the same priority, it is possible that these build steps will be run concurrently. Therefore these custom build steps must not accidentally use the same file names for intermediate files.
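
    For instance, a make custom build step that follows these guidelines might look something like the following sketch (the file names are illustrative only; the intermediate file is created in the package's own build directory and uses a .tmp extension so that it is cleaned up, with a name unlikely to clash with intermediates from other build steps):

    make {
      <PREFIX>/lib/xyzzy.img : xyzzy.o
        $(OBJCOPY) -O binary $< xyzzy_img.tmp
        cp xyzzy_img.tmp $@
    }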

  6. Care has to be taken to make sure that the commands in a custom build step will run on all host platforms, including Windows NT as well as Linux and other Unix systems. For example, all file paths should use forward slashes as the directory separator. It can be assumed that Windows users will have a full set of CygWin tools installed and available on the path. The GNU coding standards provide some useful guidelines for writing portable build rules.

  7. A custom build step must not make any assumptions concerning the version of another package. This enforces package encapsulation, preventing one package from accessing the internals of another.

  8. No assumptions should be made about the target platform, unless the package is inherently specific to that platform. Even then it is better to use the various tokens whenever possible, rather than hard-coding details such as the compiler. For example, consider a custom build step such as:

    arm-eabi-gcc -c -mcpu=arm7di -o $@ $<

    Even if this build step will only ever be invoked on ARM targets it can cause problems: the toolchain may have been installed using a prefix other than arm-eabi, and any changes the user makes to the compiler flags would not be reflected in the build step. The correct way to write this rule is:

    $(CC) -c $(CFLAGS) -o $@ $<

    Some commands such as the compiler, the archiver, and objcopy are required sufficiently often to warrant their own tokens, for example $(CC) and $(OBJCOPY). Other target-specific commands are needed only rarely and the $(COMMAND_PREFIX) token can be used to construct the appropriate command name, for example:

    $(COMMAND_PREFIX)size $< > $@
  9. Custom build steps should not be used to build host-side executables, even if those executables are needed to build parts of the target side code. Support for building host-side executables will be added in a future version of the component framework, although it will not necessarily involve these custom build steps.
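
Putting several of these rules together, a complete make_object custom build step might look something like the following minimal sketch. The source file xyzzy.c, the intermediate file xyzzy.tmp and the symbol rename are purely illustrative; the points to note are the <PACKAGE> token in the dependency, the use of only the documented tokens in the commands, and the .tmp extension for the intermediate file:

make_object {
  xyzzy.o : <PACKAGE>/src/xyzzy.c
    $(CC) -c $(INCLUDE_PATH) $(CFLAGS) -o xyzzy.tmp $<
    $(OBJCOPY) --redefine-sym plugh=xyzzy_plugh xyzzy.tmp $@
}

Because the intermediate file has a .tmp extension it will be deleted by a make clean or equivalent operation, and because the commands rely only on the tokens the rule does not hard-code any toolchain prefix or installation path.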

By default custom build steps defined in a make_object property have a priority of 100, which means that they will be executed in the same phase as compilations resulting from a compile property. It is possible to change the priority using a property option, for example:

make_object -priority 50 {
  …
}

Specifying a priority smaller than 100 means that the custom build step happens before the normal compilations. Priorities between 100 and 200 happen after the normal compilations but before the libraries are archived together. make_object properties should not specify a priority of 200 or higher.

Custom build steps defined in a make property have a default priority of 300, and so they will happen after the libraries have been built. Again this can be changed using a -priority property option.
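
For example, a HAL package might use a make property along the following lines to assemble its startup code and place the resulting object file directly in the install tree. This is only a sketch: the vectors.S file name is illustrative, and the real HAL packages should be consulted for the exact rules, which typically involve additional dependency-handling commands:

make {
  <PREFIX>/lib/vectors.o : <PACKAGE>/src/vectors.S
    $(CC) $(INCLUDE_PATH) $(CFLAGS) -c -o $@ $<
}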

4.3.8. Startup Code

Linking an application requires the application code, a linker script, the eCos library or libraries, the extras.o file, and some startup code. Depending on the target hardware and how the application gets booted, this startup code may do little more than branch to main(), or it may have to perform a considerable amount of hardware initialization. The startup code generally lives in a file vectors.o which is created by a custom build step in a HAL package. As far as application developers are concerned the existence of this file is largely transparent, since the linker script ensures that the file is part of the final executable.

This startup code is not generally of interest to component writers, only to HAL developers who are referred to one of the existing HAL packages for specific details. Other packages are not expected to modify the startup in any way. If a package needs some work performed early on during system initialization, before the application's main entry point gets invoked, this can be achieved using a static object with a suitable constructor priority.

[Note]Note

It is possible that the extras.o support, in conjunction with appropriate linker script directives, could be used to eliminate the need for a special startup file. The details are not yet clear.

4.3.9. The Linker Script

[Caution]Caution

This section is not finished, and the details are subject to change in a future release. Arguably linker script issues should be documented in the HAL documentation rather than in this guide.

Generating the linker script is the responsibility of the various HAL packages that are applicable to a given target. Developers of components other than HAL packages need not be concerned about what is involved. Developers of new HAL packages should use an existing HAL as a template.

[Note]Note

It may be desirable for some packages to have some control over the linker script, for example to add extra alignment details for a particular section. This can be risky because it can result in subtle portability problems, and the current component framework has no support for any such operations. The issue may be addressed in a future release.

4.4. Building Test Cases

[Caution]Caution

The support in the current implementation of the component framework for building and running test cases is limited, and should be enhanced considerably in a future version. Compatibility with the existing mechanisms described below will be maintained if possible, but this cannot be guaranteed.

Whenever possible packages should be shipped with one or more test cases. This allows users to check that all packages function correctly in their particular configuration and on their target, which may be custom hardware unavailable to the package developer. The component framework needs to provide a way of building such test cases. For example, if a makefile system is used then there could be a make tests target to build the test cases, or possibly a make check target to build and run the test cases and process all the results. Unfortunately there are various complications.

Not every test case will be applicable to every configuration. For example if the user has disabled the C library's CYGPKG_LIBC_STDIO component then there is no point in building or running any of the test cases for that component. This implies that test cases need to be associated with configuration options somehow. It is possible for the test case to use one or more #ifdef statements to check whether or not it is applicable in the current configuration, and compile to a null program when not applicable. This is inefficient because the test case will still get built and possibly run, even though it will not provide any useful information.

Many packages involve direct interaction with hardware, for example a serial line or an ethernet interface. In such cases it is only worthwhile building and running the test if there is suitable software running at the other end of the serial line or listening on the same ethernet segment, and that software would typically have to run on the host. Of course the serial line in question may be hooked up to a different piece of hardware which the application needs to talk to, so disconnecting it and then hooking it up to the host for running some tests may be undesirable. The decision as to whether or not to build the test depends not just on the eCos configuration but also on the hardware setup and the availability of suitable host software.

There are different kinds of tests, and it is not always desirable to run all of them. For example a package may contain a number of stress tests intended to run for long periods of time, possibly days or longer. Such tests should certainly be distinguished somehow from ordinary test cases so that users will not run them accidentally and wonder how long they should wait for a pass message before giving up. Stress tests may also have dependencies on the hardware configuration and on host software, for example a network stress test may require lots of ethernet packets.

In the current implementation of the component framework these issues are not yet addressed. Instead there is only very limited support for building test cases. Any package can define a calculated configuration option of the form CYGPKG_<package-name>_TESTS, whose value is a list of test cases. The calculated property can involve an expression so it is possible to adapt to a small number of configuration options, but this quickly becomes unwieldy. A typical example would be:

cdl_option CYGPKG_UITRON_TESTS {
  display "uITRON tests"
  flavor  data
  no_define
  calculated { "tests/test1 tests/test2 tests/test3 \
    tests/test4 tests/test5 tests/test6 tests/test7 \
    tests/test8 tests/test9 tests/testcxx tests/testcx2 \
    tests/testcx3 tests/testcx4 tests/testcx5 \
    tests/testcx6 tests/testcx7 tests/testcx8 \
    tests/testcx9 tests/testintr" }
  description   "
This option specifies the set of tests for the uITRON compatibility layer."
}
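
Where only a small amount of configuration dependence is needed, the calculated expression can include a conditional. The following hypothetical variant adds an extra test only when the C library's stdio component is enabled; the package and file names are purely illustrative:

cdl_option CYGPKG_XYZZY_TESTS {
  display "xyzzy tests"
  flavor  data
  no_define
  calculated { CYGPKG_LIBC_STDIO ? "tests/basic1 tests/stdio1"
                                 : "tests/basic1" }
  description   "
This option specifies the set of tests for the xyzzy package."
}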

An entry such as tests/test1 implies that there is a file tests/test1.c or tests/test1.cxx in the package's directory. The commands that will be used to build the test case will take the form:

$(CC) -c $(INCLUDE_PATH) $(CFLAGS) -o <build path>/test1.o \
  <source path>/tests/test1.c
$(CC) $(LDFLAGS) -o <install path>/tests/test1 <build path>/test1.o

The variables $(CC) and so on are determined in the same way as for custom build steps. The various paths and the current directory will depend on the exact build system being used, and are subject to change. As usual the sources in the component repository are treated as a read-only resource, intermediate files live in the build tree, and the desired executables should end up in the install tree.

Each test source file must be self-contained. It is not possible at present to build a little per-package library that can be used by the test cases, or to link together several object files to produce a single test executable. In some cases it may be possible to #include source code from a shared file in order to avoid unnecessary code replication. There is no support for manipulating compiler or linker flags for individual test cases: the flags that will be used for all files are $(CFLAGS) and $(LDFLAGS), as per custom build steps. Note that it is possible for a package to define options of the form CYGPKG_<PACKAGE-NAME>_LDFLAGS_ADD and CYGPKG_<PACKAGE-NAME>_LDFLAGS_REMOVE. These will affect test cases, but in the absence of custom build steps they will have no other effect on the build.
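
By way of illustration, such an option might look something like the following hypothetical sketch; the package name, display string and description are illustrative only, and the default value is simply the empty string:

cdl_option CYGPKG_XYZZY_LDFLAGS_ADD {
  display "Additional linker flags for tests"
  flavor  data
  no_define
  default_value { "" }
  description   "
This option allows additional flags to be passed to the linker when
building the xyzzy test executables."
}

A matching CYGPKG_XYZZY_LDFLAGS_REMOVE option, used to remove flags from the global set, would take the same form.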