In the expr.c expression package, we maintain a pair of stacks - one for names, one for operators. As we parse the source code, we push names onto the name stack and operators onto the operator stack. When we encounter an operator in the source with lower priority than the operator on top of the operator stack, we begin reducing the stacks: pop the top operator and the number of names it needs, generate an expression node, and add that node to an output tree. We keep doing this until the operator on top of the stack no longer has higher priority than the one we just read.

(It’s a bit more complex than that really - parenthesised expressions, function calls, array indices - but that’s the basic operation of an operator-precedence expression parser).
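The reduce loop described above can be sketched as follows. This is a minimal stand-in, not the expr.c code: for clarity it pushes integer values rather than names, handles only the four binary operators, and ignores parentheses and whitespace.

```c
#include <stdio.h>
#include <ctype.h>

static int prec(char op) { return (op == '+' || op == '-') ? 1 : 2; }

static long apply(long a, long b, char op) {
    switch (op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        default:  return a / b;
    }
}

long eval(const char *s) {
    long vals[64]; int nv = 0;      // the "names" stack (here: values)
    char ops[64];  int no = 0;      // the operator stack
    while (*s) {
        if (isdigit((unsigned char)*s)) {
            long v = 0;
            while (isdigit((unsigned char)*s)) v = v * 10 + (*s++ - '0');
            vals[nv++] = v;
        } else {
            char op = *s++;
            // reduce while the top operator has priority >= the incoming one
            while (no > 0 && prec(ops[no - 1]) >= prec(op)) {
                long b = vals[--nv], a = vals[--nv];
                vals[nv++] = apply(a, b, ops[--no]);
            }
            ops[no++] = op;
        }
    }
    while (no > 0) {                // end of input: drain remaining operators
        long b = vals[--nv], a = vals[--nv];
        vals[nv++] = apply(a, b, ops[--no]);
    }
    return vals[0];
}
```

Note how `*` binds tighter than `+` purely because of the priority comparison in the reduce loop; no grammar rules are needed.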

So - simple question - how big should the name and operator stacks be?

And of course the answer is “it depends”.

In earlier versions of expr.c, the easy way out was taken - the stacks were a fixed size, and the error message "expression too complex" was given if their limits were exceeded. But that's just nasty.

So for teq, we have introduced a new useful building block, the Container.

typedef struct {
    char * name;
    uint32 max;         // max number of elements
    uint32 count;       // how many elements we have
    char ** data;       // array of pointers
} Container;

It’s a simple structure with a pointer to an array of pointers (data) and some metadata - count and max. max defines the current capacity of the data array; count is the number of pointers currently stored in it.

When a Container is created, the capacity is set to some default value, the array of pointers is malloc()’d with that number of entries, and data is pointed at the allocated array.

Containers should be accessed only through the provided accessor functions (see below). When a pointer is added to a Container, the obvious check is done that this isn’t beyond capacity. If it is, we realloc() the data array to a capacity that is some multiple of the current capacity, check the realloc worked, and replace the data pointer if so.

Thus, we get self-growing arrays of exactly the right size.
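The growth step can be sketched like this. This is a simplified stand-in, not the teq source: it grows by 50% when full, and leaves the old array untouched if realloc() fails (the extra guard for tiny capacities is our addition).

```c
#include <stdlib.h>

typedef struct {
    unsigned max, count;
    char **data;
} Vec;

// push a pointer, growing the array by 50% when it is full;
// returns 1 on success, 0 if the reallocation failed
int vecPush(Vec *v, char *p) {
    if (v->count >= v->max) {
        unsigned newmax = v->max + v->max / 2;
        if (newmax <= v->max) newmax = v->max + 1;  // guard tiny capacities
        void *nd = realloc(v->data, newmax * sizeof(char *));
        if (!nd) return 0;        // old data is still valid and unchanged
        v->data = nd;
        v->max = newmax;
    }
    v->data[v->count++] = p;
    return 1;
}
```

Growing by a multiple (rather than a fixed increment) keeps the amortized cost of a push constant, at the price of up to 50% slack space.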



//  container.h
//  teq
//
//  Created by Pete Wilson on 6/6/17.
//  Copyright © 2017 Kiva Design Groupe LLC. All rights reserved.

#ifndef container_h
#define container_h

typedef struct {
    char * name;
    uint32 max;         // max number of elements
    uint32 count;       // how many elements we have
    char ** data;       // array of pointers
} Container;

// containers are self-extending arrays of pointers

char * contNameAndVersion(void);
uint32 ContainerCount(Container * c);
uint32 allocateInContainer(Container * c);
Container * newContainer(uint32 count, char * name);
void zeroContainer(Container * c);
void printContainer(Container * c);
void * getPtrFromContainer(Container * c, uint64 n);    // get the n'th pointer; don't change the container
void addPtrToContainer(Container * c, void * p);        // add a pointer to the container
void pushPtrToContainer(Container * c, void * ptr);     // push a pointer onto a container (same as add)
void * pullPtrFromContainer(Container * c);             // remove and return the top pointer; NULL if the container is empty
void * getTopPtrInContainer(Container * c);             // return the top pointer without removing it; NULL if the container is empty

#endif /* container_h */



//  container.c
//  teq
//
//  Created by Pete Wilson on 6/6/17.
//  Copyright © 2017 Kiva Design Groupe LLC. All rights reserved.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define EXTERN extern
#include "types.h"
#include "container.h"

extern void allocError(uint64 s, char * msg);

static char * name_and_version = "containers 0.1v6 [September 23 2018]";

/*
    Provides a uniform way of creating an array of objects which can grow in size.

    0.1v6 [September 28 2018]
        - all Containers now hold just pointers. No 'ptrContainers'. Errors remain..
    0.1v5 [April 16 2018]
        - changed API for int pullValFromContainer(). returns 1 if there was something in the container, and writes the value removed to a pointer's variable
    0.1v4 [November 2017]
        - added pointer only containers (type Container)
    0.1v3 [August 14 2017]
        - added a set of functions to more efficiently push and pull 'word-sized' (64 bit) values to and from a Container
*/

// ------------------------- contNameAndVersion --------------

char * contNameAndVersion(void) {
    return name_and_version;
}

// ------------------ printContainer --------------

void printContainer(Container * c) {
    printf("\n\nContainer '%s':", c -> name);
    printf("\n\tcapacity  = %u", c -> max);
    printf("\n\tcount     = %u", c -> count);
    // print the pointers
    for (uint32 i = 0; i < c -> count; i++) {
        char * p = getPtrFromContainer(c, i);
        printf("\n\t\tptr %u: %p", i, p);
    }
}

// ----------------- zeroContainer ----------

void zeroContainer(Container * c) {
    c -> count = 0;
}

// ------------------ ContainerCount -----------------

uint32 ContainerCount(Container * c) {
    return c -> count;
}

// ----------- allocateInContainer ------------

uint32 allocateInContainer(Container * c) {
    // allocate space for one more pointer in the container, and return the index to it
    if (c -> count >= c -> max) {
        // we need to grow the container - we use 50%
        uint32 newmax = c -> max + (c -> max / 2);
        // reallocate the data
        void * newdata = realloc(c -> data, newmax * sizeof(char *));
        if (newdata) {
            c -> data = newdata;
            c -> max = newmax;
        }
        else {
            allocError(newmax * sizeof(char *), "reallocation of a container's data");
        }
    }
    // return the index to the current slot
    uint32 index = c -> count;
    // and increment the slot
    c -> count++;
    return index;
}

// -------------- getPtrFromContainer ---------

void * getPtrFromContainer(Container * c, uint64 n) {
    // Read the n'th pointer
    if (n >= c -> count) {
        printf("\n=== container error - asking for item %llu in container '%s' when count is %u", n, c -> name, c -> count);
        return NULL;    // safety - a NULL pointer should cause havoc
    }
    char * ptr = c -> data[n];
    return ptr;
}

// -------------- addPtrToContainer ---------

void addPtrToContainer(Container * c, void * p) {
    // Add the pointer p to the container
    uint32 n = allocateInContainer(c);       // find index at which to store our pointer
    c -> data[n] = p;                        // and store it
}

// ------------ newContainer --------------

Container * newContainer(uint32 count, char * name) {
    // create a new pointer container capable of holding up to 'count' pointers
    uint64 s = sizeof(Container);
    Container * c = malloc(s);
    if (c) {
        c -> name = strdup(name);
        c -> data = malloc((count + 1) * sizeof(void *));
        c -> max = count;
        c -> count = 0;
        if (c -> data == NULL) {
            allocError(count * sizeof(char *), "cannot allocate a pointer container's data array");
            free(c);        // don't leak the container itself
            c = NULL;
        }
    }
    else {
        allocError(s, "cannot allocate a pointer container");
    }
    return c;
}

// ------------ pushPtrToContainer --------------

void pushPtrToContainer(Container * c, void * ptr) {
    // push a pointer into a container. Same as add.
    addPtrToContainer(c, ptr);
}

// ------------ pullPtrFromContainer --------------

void * pullPtrFromContainer(Container * c) {
    // remove and return the top pointer; NULL if the container is empty
    if (c -> count > 0) {
        char * p = getPtrFromContainer(c, c -> count - 1);
        c -> count--;
        return p;
    }
    return NULL;
}

// ------------ getTopPtrInContainer --------------

void * getTopPtrInContainer(Container * c) {
    // return the top pointer without removing it; NULL if the container is empty
    uint64 n = c -> count;
    if (n == 0) {
        return NULL;
    }
    char * p = getPtrFromContainer(c, n - 1);
    return p;
}




We have just received our Planet Computers Gemini: an ARM-based computer with screen and keyboard, extremely reminiscent of a Psion Series 5 machine (and, indeed, with some shared people).

Details at


In short, a 10-core machine (A53s and A72s) with 4GB RAM and 64GB storage, expandable with up to a 256GB SD card, and quad-bootable.

Ours has the Community Edition of Sailfish OS installed; it’s a Linux with cellphone support (the Gemini is a cellphone, too) and it isn’t Android. If Sailfish turns out to be a Bad Idea, there’s always Debian, and apparently Ubuntu too, one day soonish.

Started a separate page on getting things done with the Gemini. The idea, obviously, is to have the thing be a useful portable demonstrator of the (eventual) Good Stuff on this site, which will mean porting llvm to the machine and building thereon…



Yep, had to rip up the instruction decode representation, but the new form is simpler to build and to walk. It’s essentially just a direct representation of a decode tree, with a tree of structures; in each structure, the level below is captured as an array of pointers to the decode nodes below. The bottom level, below which there are only instructions, holds a pointer to a single queue of instructions.

We capture the instructions as they arrive, and insert them into the single queue in field order (there’s always a ‘final opcode field’ to distinguish this instruction from that, and that’s the field we insertion-sort on).

Insertion sorting means duplicates are easily found.
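That insertion sort on a singly-linked queue can be sketched as below; the names and the struct layout are hypothetical, not taken from the teq source. Because the list is kept in key order, a duplicate is detected the moment the insertion point is found.

```c
#include <stddef.h>

// hypothetical instruction node, keyed by the 'final opcode field' value
typedef struct Insn {
    unsigned key;
    struct Insn *next;
} Insn;

// insert in ascending key order; returns 0 (and inserts nothing) on a duplicate key
int insertInsn(Insn **head, Insn *in) {
    Insn **p = head;
    while (*p && (*p)->key < in->key)
        p = &(*p)->next;                    // walk to the insertion point
    if (*p && (*p)->key == in->key)
        return 0;                           // duplicate decode found
    in->next = *p;
    *p = in;
    return 1;
}
```

Using a pointer-to-pointer walk avoids special-casing insertion at the head of the queue.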

Also looked at the error reporting, and tidied that up extensively. The whole thing seems much more robust now - trying it on an increasing number of architectures. In particular, we want an architecture whose instruction set is as close as possible to one-to-one with the abstract architecture implied by the triples - if they were instructions, what would they look like? This led to a new field type being needed - an ‘operand’, whose definition includes its base register. So now we can represent accumulator machines simply, too.

fields {
    // the name of a field, for assembler definition, is the name given here
    op8[0:7]       opcode;        // major opcode, unsigned value
    l_dest[9:15]   opndsel fptr srcdest;
    l_src1[16:23]  opndsel fptr src;
    l_src2[24:31]  opndsel fptr src;
}



I’m generating correctly-executable code for an arbitrarily large number of processors from an architecture description and ‘hand-assembled’ program (using generated field macros)

And now I begin to see the lacunae….

For example, in thinking about how to incrementally add instructions, I realized that there were no checks in place to check the uniqueness of instruction decode. Bah, humbug…

Now to think hard about whether to change the instruction encoding data structures; or to faff with a collection of instructions and discern the decode tree after the fact (as I did with the very first version of this stuff, dealing with instructions presented in more or less alphabetical order of mnemonics); or to work out how to insert things into the current structure.

This last seems most attractive at the moment.

It doesn’t fix the ‘uniqueness’ thingy, but that might fall out, or be amenable to a simple ’symbol table’ solution…

A Digression on an Aspect of Performance

teq (and simpleADL) are built from odds and sods of software I’ve had lying around for years.

One of these is the symbol package. The original version was essentially the example in Kernighan and Ritchie. This used a collection of queues - each a singly-linked list holding symbol data structures. To insert a new symbol, you created it, filled in the name, computed a hash, and used the hash to select which queue to put it in.

(The Second Edition of The C Programming Language has the example starting on page 143, under section 6.6, Table Lookup.)

A key operation in any compiler or equivalent is looking up a name. You’ve just read a token, and eventually you’ll need to know what it is (if it’s been defined). So you’ll look it up in the name table. The simplest organization is a long list (a single forward-chained queue) of all the names of interest. If this list is, at some point, 280 entries long, you’ll need to make on average 140 string comparisons. Since names vary from one character in length to 20 or more, these comparisons can take a long time.

A first step is to reduce the average work by any amount you like: instead of one queue, have N. This reduces the search to 140/N comparisons. How to choose what queue to use for a given name? Compute a small integer in the range 0..N-1 from the name.

K&R offer a simple function to compute a hash; here’s the code from the second edition:

/* hash: form hash value for string s */
unsigned hash(char * s)
{
    unsigned hashval;
    for (hashval = 0; *s != '\0'; s++)
        hashval = *s + 31 * hashval;
    return hashval % HASHSIZE;
}

We note that the % operator is relatively expensive on many processors, but we’ll ignore this because we don’t do it very often. But when looking up a name, we need to do a string compare between every entry in a queue and the name we’re looking up. This can be expensive. Luckily, the standard library has the strcmp() function, generally optimised as much as practical. But it still needs to look at every name in the queue until a match is found, or we reach the end of the queue, indicating no match.

strcmp() looks like this, in non-weird form:

/* ANSI sez:
 * The `strcmp' function compares the string pointed to by `s1' to the
 * string pointed to by `s2'.
 * The `strcmp' function returns an integer greater than, equal to, or less
 * than zero, according as the string pointed to by `s1' is greater than,
 * equal to, or less than the string pointed to by `s2'. []
 */
int strcmp(const char *s1, const char *s2) {
    for ( ; *s1 == *s2; s1++, s2++)
        if (*s1 == '\0')
            return 0;
    return ((*(unsigned char *)s1 < *(unsigned char *)s2) ? -1 : +1);
}


With long names, that’s a lot of comparisons. Is there any way we could speed it up?

Probably. We have used one form of (intended) speedup for a long time. The structure holding the name also includes the length of the string; not much point comparing the strings for equality if they’re different lengths, eh?

But that’s not all. Every name we encounter means we need to compute the hash. We could store the hash, and so avoid the string compare much more often. Naturally, we’d store the value computed as hashval in the hash() function, before it’s range-reduced.
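The two short-circuits can be sketched like this; the Symbol layout here is an assumption for illustration, not the actual symbol package. Both rejections are a single integer compare, so strcmp() only runs on entries that are very probably a match.

```c
#include <string.h>
#include <stddef.h>

// assumed symbol layout: stored length and full (un-reduced) hash
typedef struct Symbol {
    struct Symbol *next;
    unsigned hash;      // full hash, stored before range reduction
    size_t len;         // strlen(name), stored at creation
    char *name;
} Symbol;

// returns 1 if sym matches the probe, 0 otherwise
int symMatches(Symbol *sym, const char *name, size_t len, unsigned hash) {
    if (sym->hash != hash) return 0;    // cheap reject: hashes differ
    if (sym->len != len) return 0;      // cheap reject: lengths differ
    return strcmp(sym->name, name) == 0;
}
```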

Testing the hypothesis that these changes improve performance requires a fairly large experiment, because performance depends entirely on the names being looked up. Random names aren’t the same as English-language names; and these aren’t the same as names used in programs (‘not the same as’ meaning that the distributions of length and of characters in the names differ).

So we performed a basic reality test. We created an array of pointers to names, malloc()’d space for each name, and created each name using our uniform() function to choose successive letters (‘a’ to ‘z’, more or less), with a separate call to uniform() to choose the length of each name. We installed them all, in order of creation, in a hash table, then ran through the array in order, looking up each one.
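The generation step can be sketched like this; uniform() here is our own stand-in, assumed to return a uniformly distributed integer in [lo, hi], and the length range 1..20 is an assumption matching the lengths discussed above.

```c
#include <stdlib.h>
#include <string.h>

// stand-in for the uniform() function described in the text
static unsigned uniform(unsigned lo, unsigned hi) {
    return lo + (unsigned)(rand() % (hi - lo + 1));
}

// build 'count' random names: lengths 1..20, letters 'a'..'z'
char **makeNames(int count) {
    char **names = malloc(count * sizeof(char *));
    for (int i = 0; i < count; i++) {
        unsigned len = uniform(1, 20);
        names[i] = malloc(len + 1);
        for (unsigned j = 0; j < len; j++)
            names[i][j] = (char)uniform('a', 'z');
        names[i][len] = '\0';
    }
    return names;
}
```

(For the first experiment, one would additionally overwrite the first three characters of every name with the same randomly chosen letter.)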

We did two experiments. In the first, we chose a letter at random and used that for the first three characters of each name, a rough’n’ready nod to the fact that names are often ‘categorised’ thus. Results:

  • Using Raw, looking up 50,000 names in our symbol table took 1021692 microseconds
  • Using Length, looking up 50,000 names in our symbol table took 959724 microseconds
  • Using Hash, looking up 50,000 names in our symbol table took 901598 microseconds 

And then, with no pre-string, we got:

  • Using Raw, looking up 50,000 names in our symbol table took 1012510 microseconds
  • Using Length, looking up 50,000 names in our symbol table took 990427 microseconds
  • Using Hash, looking up 50,000 names in our symbol table took 1020117 microseconds 

Why isn’t hash better than length - or raw - for the random strings? Probably because the string compare can decide that the strings are different after looking at the first letter of the string. To do a proper test, we need a more realistic set of names.

So, yep. The thinking was good. But the effects are so small as to be unreliable.

From this we are reminded once again that optimizing without measuring is oft-times a Very Bad Idea...

teq (introduction)


Well, we’re about a year further on. simpleADL (sadl) has not been touched in all that time, because it’s a rather limited piece of work.

Instead, we’ve been working on teq, a new toolkit. 

Note to the unwary - we don’t think teq contains major intellectual breakthroughs - its purpose is threefold: to provide a useful tool for architecture exploration (and to show by example how to construct such things); to present (eventually) some novel architectural thoughts aimed at IoT; and to provide us with (another) useful and pleasurable activity.

teq is an architecture description language, with (hopefully) the shortfalls of sadl as a language and as a tool fixed. 

The project is just coming to life - we can consume architecture models, detect errors, provide useful error messages, and build the data structures which represent the architecture, and we are beginning to be able to generate the necessary .c and .h files. In addition to the architecture of a processor, teq will also be able to describe an implementation of an architecture (in the sense that the clock rate of an implementation, and the effects of the micro-architecture, will be specifiable) so that performance studies can be done.

teq will also be able to describe large multiprocessor systems, in which many processors operate concurrently and interact. 

Such systems will need software, so teq will also be a concurrent programming language. At the moment, we envisage this as being a slightly simplified C, with concurrency extensions more or less along the lines of occam (that is, channel-like concurrency, and static, block structured concurrency specifications) with aspects of go (that is, channels and dynamic concurrency) and a touch of csp (that is, perhaps no channels :-).

But writing code requires a compiler, so we’ve started working on how to automatically generate a compiler from the architecture and some extra information. At least initially, the compilers won’t be very good, but they’ll be of similar quality for all architectures, and so not all comparisons will be useless. We’re thinking of a structure which compiles code into a simple intermediate language, the ability to distribute code in that form, and a sort of just-in-time converter which turns this into per-architecture loadable formats. But these are early days, and undoubtedly there will be gotchas.

And such systems also need useful widgets like DMA engines and other independently-executing non-programmable engines. teq will allow these to be defined; the approach will be to specify the behaviour of the state machine in a version of the teq programming language, with the code being executed on an appropriate abstract architecture. And, clearly, system descriptions will contain both processors and engines.

Finally, teq will allow the definition of a limited set of interconnect topologies, perhaps limited to regular meshes with some number of layers.

As each phase of this Grand Project gets to apparent usability, we shall upload the source and executable (for Mac OS X only, for the time being) along with some documentation.

The license for the teq stuff is as follows:

Copyright (c) 2018, Kiva Design Groupe LLC All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.


The views and conclusions contained in the software and documentation are those of the authors and should not be interpreted as representing official policies, either expressed or implied, of anyone else.

Slowly getting there, perhaps


Just uploaded the 0.5 release of the simpleADL tools and associated documentation.

The toolkit itself seems to be able to do what it says on the can, although there are still no example programs which do any memory write operations. Shocking, really.

But this release adds some implementation stuff. Article 3 talks about how a processor is implemented, and presents time-tagging, a simple addition to an architectural simulation which provides quite useful performance estimation. It also discusses the need for caches, and exposes the key elements of the changes to the generated model for the r32 architecture needed to support cache simulation and time-tagging.

What’s time-tagging? It’s a simple technique - every resource in the system that can be written and read has an associated 64-bit time tag value. The interpreter has a notion of current time, stored in the variable now. We start off with every tag zero. When the interpreter performs an ‘add’ instruction, for example, it needs to read the two source registers and write the destination register, and the operation takes some time - for an add, there’s a latency of 1 clock. So the interpreter works out when the instruction can start executing - it’s the maximum of now and the time tags of the registers involved. It sets now to that result, then adds the latency of the operation to now and writes that into the time tag of the destination register.
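The bookkeeping for an add can be sketched as below. This is our own illustration, assuming a simple 32-register machine; the names now, tag, reg, and do_add are not taken from the simpleADL source.

```c
#include <stdint.h>

uint64_t now = 0;     // the interpreter's notion of current time
uint64_t tag[32];     // one time tag per register, all starting at zero
int64_t  reg[32];     // the architectural register values

// execute dest = src1 + src2 with a latency of 1 clock
void do_add(int dest, int src1, int src2) {
    uint64_t start = now;
    if (tag[src1] > start) start = tag[src1];   // wait for src1 to be ready
    if (tag[src2] > start) start = tag[src2];   // wait for src2 to be ready
    now = start;                                // instruction issues here
    reg[dest] = reg[src1] + reg[src2];          // the architectural effect
    tag[dest] = now + 1;                        // result is ready 1 clock later
}
```

Note that the architectural simulation is untouched; the tags ride alongside it, which is why the runtime cost is modest.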

When we have caches, each line has its own tag, and so does main memory. 

Function units need a tag if they’re not completely pipelined.

The whole scheme works quite well, without severely compromising the runtime performance of the interpreter.

Its weakness, of course, is that it doesn’t work so well when there are multiple agents - like multiple processors - making use of shared resources. But that’s a subject for the future.



At last, a somewhat tidied version of the simpleADL tools and documentation is published.

This release is still rather alpha-y, but seems to work with the two example architectures and their two example programs.

We’ve found problems with Xcode and symbolic links, and with make and symbolic links (macOS make doesn’t seem to follow the link back for the touch date of the original file, so doesn’t do what we’d like), and so we’ve fallen back on an odious install script which populates the various folders with necessary files, builds things, and installs them in /usr/local/bin.



Computer architecture has been annoyingly boring for a long time - on the whole, everybody’s doing RISC with a classical MMU and some form of cache coherence. (There are honourable exceptions - the Mill is worth reading about - but in general it’s all samey-samey.)

This is about to change as systems which need lots of performance increasingly adopt multicore systems, with core counts running into the tens, hundreds, or thousands. There’s a bunch of examples:

  • Rex Computing has a (4-way VLIW) multicore solution which eschews caches and uses interrupts and ‘remote writes’ for intercore communication. 
  • Adapteva has a similar approach.
  • Kalray likewise.

What these all seem to lack is a basic understanding that these things need to be programmed, and “thinking parallel” when all you have is a sequential programming language is hard. 

While there is progress being made (C++ atomics etc), in general current widely-used programming languages either don’t support parallelism, or do it in an under-the-counter way (such as providing a threads library or equivalent, along with locks and so forth). This generally makes writing correct, efficient, comprehensible code difficult.

An alternative approach is to notice that much “high performance computing” code is nested loops, and when you’re lucky, the iterations of the loops are independent, so that you can ignore identifying parallelism and let a compiler do it for you. Or, you can use an approach like OpenCL - then you identify those loops, rewrite your code as program + kernels (where the kernels capture the loops) and arrange to run the kernels on some collection of compute resource.

A more promising approach is to think of what the physics of this sort of parallel computing rewards and punishes. These multicore systems generally have a rectangular array of cores (each with some local memory) connected up with some form of mesh network - the network may provide one or more independent communications ‘planes’ to allow different sorts of data to proceed concurrently (examples - it can be helpful to have an ACK network separate from a data network, and if you’re moving cachelines-worth of data, it might also help to have an address network and a large data network).

This sort of system is capable of delivering data directly from one core into appropriate resources of another; you’re immediately tempted to think that an architecture which allows program to wait until some new data has arrived, and then process it, would be a natural fit. And so it seems - this style of programming is generally known as message-passing, and has the advantage that you don’t need locks or other shared-data-protecting structures. Also, by providing message queues/buffers in memory or registers, the architecture can speed up communication and synchronisation substantially while reducing power consumption at the same time.

But to include these features as an ordinary part of user mode program support you really want some new architecture. And the architecture tradeoffs for a very high core count system which leverages hardware message passing can be quite different from current mainstream practice.

And so there’s a new day dawning in processor/system architecture. Now all we need is a toolkit to go explore this new world.

As a first step in providing such a toolkit, we’re initiating a series of articles and software tools. The initial stuff provides an existence proof of how to define an architecture, and generate an executable model and an assembler automatically from that specification. The capabilities of this initial tool are exceedingly limited - despite the conversation above, it’s single processor only, for example. But most architects don’t use toolkits like this - they seem to work on squared paper or in text editors, and write plain text documents which seek to describe the architecture, and rely on tool makers to understand what’s written in the way they meant when constructing implementations, verification models and software tools. This inevitably leads to tears, so showing to a wider audience that this stuff needn’t be black magic seems a useful goal.

Look for the simpleADL toolkit to appear here quite soon.

Filling In


Slowly adding to the site. First addition, a few words about myself (under People).

© kiva design groupe • 2017