Template metaprogramming

From Wikipedia, the free encyclopedia

Template metaprogramming (TMP) is a metaprogramming technique in which templates are used by a compiler to generate temporary source code, which is merged by the compiler with the rest of the source code and then compiled. The output of these templates includes compile-time constants, data structures, and complete functions. The use of templates can be thought of as compile-time polymorphism. The technique is used by a number of languages, the best-known being C++, but also Curl, D, and XL.

Template metaprogramming was, in a sense, discovered accidentally.[1][2]

Some other languages support similar, if not more powerful, compile-time facilities (such as Lisp macros), but those are outside the scope of this article.

Components of template metaprogramming

The use of templates as a metaprogramming technique requires two distinct operations: a template must be defined, and a defined template must be instantiated. The template definition describes the generic form of the generated source code, and the instantiation causes a specific set of source code to be generated from the generic form in the template.

Template metaprogramming is Turing-complete, meaning that any computation expressible by a computer program can be computed, in some form, by a template metaprogram.[3]

Templates are different from macros. A macro is a piece of code that is processed at compile time and either performs textual manipulation of the code to be compiled (e.g. C/C++ preprocessor macros) or manipulates the abstract syntax tree being produced by the compiler (e.g. Rust or Lisp macros). Textual macros are notably more independent of the syntax of the language being manipulated, as they merely change the in-memory text of the source code just before compilation.
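
As a brief illustration, a textual macro and a function template that compute the same value are processed very differently (a minimal sketch; the names are chosen only for this example):

// Preprocessor macro: the preprocessor substitutes text before compilation,
// so SQUARE(i + 1) becomes ((i + 1) * (i + 1)) with no notion of types.
#define SQUARE(x) ((x) * (x))

// Function template: the compiler instantiates a type-checked function,
// e.g. square<int> or square<double>, from this generic form.
template <typename T>
T square(T x) { return x * x; }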

Template metaprograms have no mutable variables: no variable can change value once it has been initialized. Template metaprogramming can therefore be seen as a form of functional programming. In fact, many template implementations implement flow control only through recursion, as seen in the example below.

Using template metaprogramming

Though the syntax of template metaprogramming is usually very different from that of the programming language it is used with, it has practical uses. Some common reasons to use templates are to implement generic programming (avoiding sections of code which are similar except for some minor variations) and to perform automatic compile-time optimization, such as doing something once at compile time rather than every time the program is run; for instance, the compiler can unroll loops to eliminate the jumps and loop-count decrements that would otherwise be executed every time the program runs.

Compile-time class generation

What exactly "programming at compile-time" means can be illustrated with an example of a factorial function, which in non-template C++ can be written using recursion as follows:

unsigned int factorial(unsigned int n) {
	return n == 0 ? 1 : n * factorial(n - 1); 
}

// Usage examples:
// factorial(0) would yield 1;
// factorial(4) would yield 24.

The code above will execute at run time to determine the factorial value of the literals 4 and 0. By using template metaprogramming and template specialization to provide the ending condition for the recursion, the factorials used in the program—ignoring any factorial not used—can be calculated at compile time by this code:

template <unsigned int n>
struct factorial {
	enum { value = n * factorial<n - 1>::value };
};

template <>
struct factorial<0> {
	enum { value = 1 };
};

// Usage examples:
// factorial<0>::value would yield 1;
// factorial<4>::value would yield 24.

The code above calculates the factorial value of the literals 4 and 0 at compile time and uses the results as if they were precalculated constants. To be able to use templates in this manner, the compiler must know the value of its parameters at compile time, which has the natural precondition that factorial<X>::value can only be used if X is known at compile time. In other words, X must be a constant literal or a constant expression.

In C++11 and C++20, respectively, constexpr and consteval were introduced to let the compiler execute code at compile time. Using constexpr or consteval, one can use the usual recursive factorial definition with the non-templated syntax.[4]
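
For example, a constexpr version of the factorial function above might look as follows (a minimal sketch):

constexpr unsigned int factorial(unsigned int n) {
	return n == 0 ? 1 : n * factorial(n - 1);
}

// Evaluated by the compiler when used in a constant expression:
static_assert(factorial(4) == 24, "computed at compile time");
// factorial(n) can still be called with a run-time value of n.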

Compile-time code optimization

The factorial example above is one example of compile-time code optimization in that all factorials used by the program are pre-compiled and injected as numeric constants at compilation, saving both run-time overhead and memory footprint. It is, however, a relatively minor optimization.

As another, more significant example of compile-time loop unrolling, template metaprogramming can be used to create length-n vector classes (where n is known at compile time). The benefit over a vector whose length is determined only at run time is that the loops can be unrolled, resulting in highly optimized code. As an example, consider the addition operator. A length-n vector addition might be written as

template <int length>
Vector<length>& Vector<length>::operator+=(const Vector<length>& rhs) 
{
    for (int i = 0; i < length; ++i)
        value[i] += rhs.value[i];
    return *this;
}
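
The Vector class template itself is not shown above; a minimal sketch of what its declaration might look like (the double element type is an assumption made only for illustration) is:

template <int length>
class Vector {
public:
    Vector<length>& operator+=(const Vector<length>& rhs);
private:
    double value[length]; // element type assumed for illustration
};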

When the compiler instantiates the function template defined above, the following code may be produced:[citation needed]

template <>
Vector<2>& Vector<2>::operator+=(const Vector<2>& rhs) 
{
    value[0] += rhs.value[0];
    value[1] += rhs.value[1];
    return *this;
}

The compiler's optimizer should be able to unroll the for loop because the template parameter length is a constant at compile time.

However, this may cause code bloat, since separate unrolled code will be generated for each N (vector size) the template is instantiated with.

Static polymorphism

Polymorphism is a common, standard programming facility whereby derived objects can be used through pointers or references to their base class, with the derived objects' overriding methods being the ones invoked, as in this code:

#include <iostream>

class Base
{
public:
    virtual void method() { std::cout << "Base"; }
    virtual ~Base() {}
};

class Derived : public Base
{
public:
    virtual void method() { std::cout << "Derived"; }
};

int main()
{
    Base *pBase = new Derived;
    pBase->method(); //outputs "Derived"
    delete pBase;
    return 0;
}

where all invocations of virtual methods will be those of the most-derived class. This dynamically polymorphic behaviour is (typically) obtained by the creation of virtual look-up tables for classes with virtual methods, tables that are traversed at run time to identify the method to be invoked. Thus, run-time polymorphism necessarily entails execution overhead (though on modern architectures the overhead is small).

However, in many cases the polymorphic behaviour needed is invariant and can be determined at compile time. Then the Curiously Recurring Template Pattern (CRTP) can be used to achieve static polymorphism, which is an imitation of polymorphism in programming code but which is resolved at compile time and thus does away with run-time virtual-table lookups. For example:

template <class Derived>
struct base
{
    void interface()
    {
         // ...
         static_cast<Derived*>(this)->implementation();
         // ...
    }
};

struct derived : base<derived>
{
     void implementation()
     {
         // ...
     }
};

Here the base class template takes advantage of the fact that member function bodies are not instantiated until they are used (by which time the derived class is a complete type), and it uses members of the derived class within its own member functions via a static_cast, thus generating at compile time an object composition with polymorphic characteristics. As an example of real-world usage, the CRTP is used in the Boost iterator library.[5]
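
A short usage sketch (not part of the example above) shows the static dispatch in action:

int main()
{
    derived d;
    d.interface(); // resolved at compile time; calls derived::implementation()
}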

Another similar use is the "Barton–Nackman trick", sometimes referred to as "restricted template expansion", where common functionality can be placed in a base class that is used not as a contract but as a necessary component to enforce conformant behaviour while minimising code redundancy.
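
A minimal sketch of the trick (the class and member names here are illustrative assumptions) defines operators as friend functions inside a class template, so that each instantiation injects non-template operators that are found by argument-dependent lookup:

template <typename T>
class equal_comparable
{
    friend bool operator==(const T& a, const T& b) { return a.equal_to(b); }
    friend bool operator!=(const T& a, const T& b) { return !a.equal_to(b); }
};

class value_type : private equal_comparable<value_type>
{
public:
    explicit value_type(int v) : v_(v) {}
    bool equal_to(const value_type& other) const { return v_ == other.v_; }
private:
    int v_;
};

// value_type a(1), b(1); the expression a == b uses the operator supplied
// by the equal_comparable base.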

Static table generation

The benefit of static tables is the replacement of "expensive" calculations with a simple array indexing operation (for examples, see lookup table). In C++, there exists more than one way to generate a static table at compile time. The following listing shows an example of creating a very simple table by using recursive structs and variadic templates. The table has a size of ten. Each value is the square of the index.

#include <iostream>
#include <array>

constexpr int TABLE_SIZE = 10;

/**
 * Variadic template for a recursive helper struct.
 */
template<int INDEX = 0, int ...D>
struct Helper : Helper<INDEX + 1, D..., INDEX * INDEX> { };

/**
 * Specialization of the template to end the recursion when the table size reaches TABLE_SIZE.
 */
template<int ...D>
struct Helper<TABLE_SIZE, D...> {
  static constexpr std::array<int, TABLE_SIZE> table = { D... };
};

constexpr std::array<int, TABLE_SIZE> table = Helper<>::table;

enum  {
  FOUR = table[2] // compile time use
};

int main() {
  for(int i=0; i < TABLE_SIZE; i++) {
    std::cout << table[i]  << std::endl; // run time use
  }
  std::cout << "FOUR: " << FOUR << std::endl;
}

The idea behind this is that the struct Helper recursively inherits from a struct with one more template argument (in this example calculated as INDEX * INDEX) until the specialization of the template ends the recursion at a size of 10 elements. The specialization simply uses the variable argument list as elements for the array. The compiler will produce code similar to the following (output from Clang invoked with -Xclang -ast-print -fsyntax-only).

template <int INDEX = 0, int ...D> struct Helper : Helper<INDEX + 1, D..., INDEX * INDEX> {
};
template<> struct Helper<0, <>> : Helper<0 + 1, 0 * 0> {
};
template<> struct Helper<1, <0>> : Helper<1 + 1, 0, 1 * 1> {
};
template<> struct Helper<2, <0, 1>> : Helper<2 + 1, 0, 1, 2 * 2> {
};
template<> struct Helper<3, <0, 1, 4>> : Helper<3 + 1, 0, 1, 4, 3 * 3> {
};
template<> struct Helper<4, <0, 1, 4, 9>> : Helper<4 + 1, 0, 1, 4, 9, 4 * 4> {
};
template<> struct Helper<5, <0, 1, 4, 9, 16>> : Helper<5 + 1, 0, 1, 4, 9, 16, 5 * 5> {
};
template<> struct Helper<6, <0, 1, 4, 9, 16, 25>> : Helper<6 + 1, 0, 1, 4, 9, 16, 25, 6 * 6> {
};
template<> struct Helper<7, <0, 1, 4, 9, 16, 25, 36>> : Helper<7 + 1, 0, 1, 4, 9, 16, 25, 36, 7 * 7> {
};
template<> struct Helper<8, <0, 1, 4, 9, 16, 25, 36, 49>> : Helper<8 + 1, 0, 1, 4, 9, 16, 25, 36, 49, 8 * 8> {
};
template<> struct Helper<9, <0, 1, 4, 9, 16, 25, 36, 49, 64>> : Helper<9 + 1, 0, 1, 4, 9, 16, 25, 36, 49, 64, 9 * 9> {
};
template<> struct Helper<10, <0, 1, 4, 9, 16, 25, 36, 49, 64, 81>> {
  static constexpr std::array<int, TABLE_SIZE> table = {0, 1, 4, 9, 16, 25, 36, 49, 64, 81};
};

Since C++17 this can be more readably written as:

#include <iostream>
#include <array>

constexpr int TABLE_SIZE = 10;

constexpr std::array<int, TABLE_SIZE> table = [] { // OR: constexpr auto table
  std::array<int, TABLE_SIZE> A = {};
  for (unsigned i = 0; i < TABLE_SIZE; i++) {
    A[i] = i * i;
  }
  return A;
}();

enum  {
  FOUR = table[2] // compile time use
};

int main() {
  for(int i=0; i < TABLE_SIZE; i++) {
    std::cout << table[i]  << std::endl; // run time use
  }
  std::cout << "FOUR: " << FOUR << std::endl;
}

To show a more sophisticated example, the code in the following listing has been extended to include a helper for value calculation (in preparation for more complicated computations), a table-specific offset, and a template argument for the type of the table values (e.g. uint8_t, uint16_t, ...).

#include <iostream>
#include <array>
#include <cstdint>

constexpr int TABLE_SIZE = 20;
constexpr int OFFSET = 12;

/**
 * Template to calculate a single table entry
 */
template <typename VALUETYPE, VALUETYPE OFFSET, VALUETYPE INDEX>
struct ValueHelper {
  static constexpr VALUETYPE value = OFFSET + INDEX * INDEX;
};

/**
 * Variadic template for a recursive helper struct.
 */
template<typename VALUETYPE, VALUETYPE OFFSET, int N = 0, VALUETYPE ...D>
struct Helper : Helper<VALUETYPE, OFFSET, N+1, D..., ValueHelper<VALUETYPE, OFFSET, N>::value> { };

/**
 * Specialization of the template to end the recursion when the table size reaches TABLE_SIZE.
 */
template<typename VALUETYPE, VALUETYPE OFFSET, VALUETYPE ...D>
struct Helper<VALUETYPE, OFFSET, TABLE_SIZE, D...> {
  static constexpr std::array<VALUETYPE, TABLE_SIZE> table = { D... };
};

constexpr std::array<uint16_t, TABLE_SIZE> table = Helper<uint16_t, OFFSET>::table;

int main() {
  for(int i = 0; i < TABLE_SIZE; i++) {
    std::cout << table[i] << std::endl;
  }
}

Using C++17, this could be written as follows:

#include <iostream>
#include <array>
#include <cstdint>

constexpr int TABLE_SIZE = 20;
constexpr int OFFSET = 12;

template<typename VALUETYPE, VALUETYPE OFFSET>
constexpr std::array<VALUETYPE, TABLE_SIZE> table = [] { // OR: constexpr auto table
  std::array<VALUETYPE, TABLE_SIZE> A = {};
  for (unsigned i = 0; i < TABLE_SIZE; i++) {
    A[i] = OFFSET + i * i;
  }
  return A;
}();

int main() {
  for(int i = 0; i < TABLE_SIZE; i++) {
    std::cout << table<uint16_t, OFFSET>[i] << std::endl;
  }
}

Benefits and drawbacks of template metaprogramming

Compile-time versus execution-time tradeoff
If a great deal of template metaprogramming is used, compilation becomes slower and the resulting binaries may be larger, in exchange for computation that no longer needs to be performed at run time.
Generic programming
Template metaprogramming allows the programmer to focus on architecture and delegate to the compiler the generation of any implementation required by client code. Thus, template metaprogramming can accomplish truly generic code, facilitating code minimization and better maintainability[citation needed].
Readability
With respect to C++, the syntax and idioms of template metaprogramming are esoteric compared to conventional C++ programming, and template metaprograms can be very difficult to understand.[6][7]

See also

References

  1. ^ Scott Meyers (12 May 2005). Effective C++: 55 Specific Ways to Improve Your Programs and Designs. Pearson Education. ISBN 978-0-13-270206-5.
  2. ^ See History of TMP on Wikibooks
  3. ^ Veldhuizen, Todd L. "C++ Templates are Turing Complete".
  4. ^ http://www.cprogramming.com/c++11/c++11-compile-time-processing-with-constexpr.html
  5. ^ http://www.boost.org/libs/iterator/doc/iterator_facade.html
  6. ^ Czarnecki, K.; O'Donnell, J.; Striegnitz, J.; Taha, Walid Mohamed (2004). "DSL Implementation in MetaOCaml, Template Haskell, and C++". University of Waterloo, University of Glasgow, Research Centre Julich, Rice University: "C++ Template Metaprogramming suffers from a number of limitations, including portability problems due to compiler limitations (although this has significantly improved in the last few years), lack of debugging support or IO during template instantiation, long compilation times, long and incomprehensible errors, poor readability of the code, and poor error reporting."
  7. ^ Sheard, Tim; Jones, Simon Peyton (2002). "Template Meta-programming for Haskell". ACM 1-58113-415-0/01/0009: "Robinson's provocative paper identifies C++ templates as a major, albeit accidental, success of the C++ language design. Despite the extremely baroque nature of template meta-programming, templates are used in fascinating ways that extend beyond the wildest dreams of the language designers. Perhaps surprisingly, in view of the fact that templates are functional programs, functional programmers have been slow to capitalize on C++'s success."