In May of 2013 I started to play with the idea of producing better abstractions for embedded systems programming through creative uses of C++ templates. My premise was simple: by using templates to achieve polymorphism at compile time it would be possible to compose loosely-coupled components with little or no runtime overhead.
For example, given an interface representing an SPI bus, a higher-level class can be written to allow access to a peripheral that uses an SPI bus, and then that wrapper class could be used against various platforms' built-in SPI buses as well as bit-bang SPI implementations with no changes.
Traditionally this sort of design is achieved in C++ using virtual methods, with the concrete implementation not selected and known until runtime. The extra indirection in virtual calls adds overhead that can add up in an embedded system, not to mention increased code size.
Compile-time polymorphism, on the other hand, allows the SPI implementation to be bound at compile time, and this in turn allows the compiler to produce much more optimal code tailored to a specific configuration. This can then be used in C++ as follows (with a hypothetical AVR SPI implementation):
    AvrSpiBus spi_bus(1);
    SomeSpiPeripheral<AvrSpiBus> some_peripheral(&spi_bus);
    some_peripheral.do_something();
The SomeSpiPeripheral class template is thus parameterized with the concrete SPI bus implementation it should use, and the resulting class will then call directly into the relevant methods, rather than indirecting through a vtable. Further, the methods of AvrSpiBus can potentially be inlined, resulting in tight code that interacts directly with the device control registers with no method calls whatsoever.
The apparent downside of this approach is that it results in generating a separate version of the entire class code for each distinct SPI implementation, but in most cases an embedded system will not have more than one SPI bus and so this seems like a good tradeoff.
My initial attempt at this technique was a C++ library called Alambre. I created the basic framework, including interfaces for GPIO and SPI, AVR-specific implementations of those, and a device driver for the MAX7219 LED display driver IC. I also prototyped a system for decoupled event handling using template specialization, with the goal that this could be used to model asynchronous code such as interrupts.
While this first attempt showed that the approach was plausible, the result was dissatisfying due to the unnecessarily verbose declarations required by C++ template syntax, and the cryptic error messages GCC produced whenever something went wrong.
Ultimately I felt that the concept was still compelling and it was just a matter of needing a more appropriate syntax, so I set about designing a new programming language that would enable compile-time binding with a more natural syntax.
The new language is Alamatic. Its compiler is not yet complete enough to produce runnable code but the basic language design is roughed out and I continue to iterate on the compiler as time allows.
Alamatic's syntax is quite heavily based on Python, but the compilation model is rather different: the language is statically typed and uses type inference to check the validity of the program without explicit type declarations. It uses a compile-time variant of duck typing to allow objects to be composed in various ways as long as they provide the necessary methods and attributes:
    import avr
    import someperipheral

    var spi_bus = avr.spi_bus(1)
    var some_peripheral = someperipheral.SomeSpiPeripheral(spi_bus)
Just as with the Alambre C++ example above, the compiler is able to recognize that spi_bus is specifically the AVR implementation and generate an appropriate specialization of SomeSpiPeripheral for some_peripheral, but now the types are inferred automatically. In Alamatic, all functions and classes act as templates and these are implicitly instantiated when used; the argument types (or, optionally, argument values) at the callsite act as the template's parameters.
Along with the compile-time polymorphism feature, this compilation model allows for other unusual features such as the ability to execute a subset of the language at compile time for metaprogramming. For example, it will be possible to instruct the compiler to read a data file from disk at compile time and transform it into a constant byte array for use at runtime, or even to instruct the compiler to produce a vtable to achieve traditional runtime polymorphism where needed.
Alamatic also introduces some more mundane features that C++ lacks, such as a true module system and Go-style class embedding.
I hope that Alamatic will one day become a great alternative to C++ for the open source and maker communities, making it easier to share and compose code and faster to create systems. It's still very much a work in progress at the time of writing, but I continue to move it along slowly as my spare time allows.