tinysys part two: overview of the OS

Multitasking on tinysys

I wanted the hardware to be able to run more than one task at a time (at the very least the device handlers plus one or more other tasks), which meant the CPU had to support some kind of timer interrupt mechanism. Luckily, RISC-V defines a range of interrupt control flags, registers and other facilities in the ISA to help us with this.

The OS running on tinysys is a preemptive OS, which means it can switch between running tasks to give each a fair share of CPU time. tinysys also has more than one CPU, so each CPU gets its own preemptive scheduler, its own timer interrupt handler, and its own list of tasks to run.

Scheduling on a single core

Let’s assume for a second we want to run two tasks simultaneously on the same core. A single core can’t truly do that, so the next best thing is to make it look like the two tasks are executing at the same time by rapidly switching between them.

However, if we were to switch between these tasks in a very fine-grained fashion, the overhead of switching would overwhelm the CPU and we’d end up with two very slowly running tasks. This is mainly because on each task switch we need to save the current registers “somewhere” and restore the next task’s registers, which is not a free operation.

The hardware design of tinysys helps us out here a bit. There is a memory region that is cheaper to access and doesn’t pollute the cache. It lies inside the FPGA fabric, is made up of 1024 x 32-bit words, and starts at address 0x80080000.

The contents of this memory are a series of words describing each task, plus some space for storing register contents, and look like this:

// Per task on a CPU
struct STask {
	uint32_t HART;          // which hardware thread this task runs on
	uint32_t runLength;     // desired time slice for this task
	enum ETaskState state;  // alive, terminated, etc.
	uint32_t exitCode;      // exit code reported when the task ends
	uint32_t regs[32];      // saved register file for task switches
	char name[16];          // human readable task name
};

// Per CPU
struct STaskContext {
	struct STask tasks[TASK_MAX];  // the task list for this CPU
	int32_t currentTask;           // index of the currently running task
	int32_t numTasks;              // number of live entries in the list
	int32_t debugFlags;            // debugging related flags
	int32_t kernelError;           // last kernel error, routed to the main CPU for display
	int32_t kernelErrorData[3];    // extra data accompanying the error
	int32_t hartID;                // which CPU this context belongs to
};

TASK_MAX defaults to 4 (for no particular reason apart from saving space). Each task takes up 40 words, so one CPU’s context consumes 168 of the 1024 words available. The rest of the unused space is treated as shared uncached memory that can be read or written without touching the cache, making it a good candidate for communicating task data as the user sees fit.
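As an illustration, locating a CPU’s context in that region could look like the sketch below; the back-to-back layout and the GetTaskContext() helper are my assumptions, not necessarily how the actual tinysys code does it:

#define SCRATCH_BASE 0x80080000UL // uncached scratch RAM in the FPGA fabric

// Hypothetical helper: assumes per-CPU contexts sit back to back at the
// start of the scratch region, 168 words apart.
static inline struct STaskContext *GetTaskContext(uint32_t hartid)
{
	return (struct STaskContext *)(SCRATCH_BASE + hartid * sizeof(struct STaskContext));
}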

The runLength field holds the default desired time slice for the given task. Once this time runs out, or if the task calls TaskYield(), the scheduler switches to the next active task in the list.
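As a quick usage sketch, a task that runs out of useful work can hand its slice back early instead of spinning until the timer fires (DoSomeWork() is a made-up placeholder here):

while (running)
{
	if (!DoSomeWork()) // hypothetical: returns zero when there's nothing left to do
		TaskYield();   // give up the rest of this time slice
}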

The scheduler ISR

The scheduler itself is installed at startup as an interrupt service routine. The OS therefore needs to first set up the ISR entry point, then set a default timeout for the initial firing of the timer interrupt, and finally enable timer interrupts.

This can be achieved with a short sequence. Here’s some pseudo-code to show the order of operations (for the actual code, please see the InstallIISR() function at rombase.c line 1081 on the git repo):

writecsr(mtvec, ISR_address);   // point the CPU at our ISR entry
uint64_t now = readtime();
uint64_t future = now + TWO_HUNDRED_FIFTY_MS_IN_TICKS;
settimecmp(future);             // set timecmphi/timecmplo
writecsr(mie, MIP_MTIP);        // enable timer interrupts (MTIE bit)
writecsr(mstatus, MSTATUS_MIE); // finally, enable machine interrupt handling
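For the curious, here’s roughly what could hide behind those helpers on RV32 with GCC inline assembly. The bit constants are the standard RISC-V ones, but the timecmplo/timecmphi CSR numbers below are placeholders; the actual tinysys values may differ:

#define MIP_MTIP    (1 << 7) // machine timer interrupt (same bit position as MIE.MTIE)
#define MSTATUS_MIE (1 << 3) // global machine interrupt enable

// Read the 64-bit counter on a 32-bit CPU; retry if the high word
// rolled over between the two halves.
static inline uint64_t readtime(void)
{
	uint32_t hi0, lo, hi1;
	do
	{
		asm volatile("csrr %0, timeh" : "=r"(hi0));
		asm volatile("csrr %0, time"  : "=r"(lo));
		asm volatile("csrr %0, timeh" : "=r"(hi1));
	} while (hi0 != hi1);
	return ((uint64_t)hi1 << 32) | lo;
}

// Arm the compare registers without a spurious interrupt in between:
// park the high word at all-ones before writing the low word.
// 0x800/0x801 stand in for the timecmplo/timecmphi CSR numbers.
static inline void settimecmp(uint64_t future)
{
	asm volatile("csrw 0x801, %0" :: "r"(0xFFFFFFFF));
	asm volatile("csrw 0x800, %0" :: "r"((uint32_t)future));
	asm volatile("csrw 0x801, %0" :: "r"((uint32_t)(future >> 32)));
}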

Handling the timer interrupt

Once the default initial timeout of 250ms expires, we’ll land in our ISR, with the mcause register set to the reason for the interrupt. At this point all we need to do is call the OS helper function TaskSwitchToNext(taskctx), where taskctx is the task context of the current CPU, which we can find by reading the mhartid register.

uint32_t runLength = TaskSwitchToNext(taskctx); // switch tasks, get the next slice length
uint64_t now = E32ReadTime();
uint64_t future = now + runLength;
E32SetTimeCompare(future); // arm the timer for the next task switch

Since we’re in the ISR and user code is no longer running, the task switch call above can grab all the registers saved on ISR entry (they’re stored in some user CSR land, more on that later), find the next alive task, restore all of its registers (again, into our user CSRs), set up when the next timer interrupt is supposed to fire, and return. That’s pretty much all there is to making a simple task switcher.

The ISR function itself has the following signature:

void __attribute__((aligned(16))) __attribute__((naked)) interrupt_service_routine()

This is different from the default interrupt(“machine”) attribute you might see elsewhere, since we wish to save our registers in a specific way. Tinysys has 4096 CSR words, and the range between CSR 0x8A0-0x8BF is reserved for ISRs so they can save all the incoming registers there.
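Since the routine is naked, register state has to be spilled by hand on entry. A sketch of what the prologue might look like (only the first few registers shown; the real routine covers the whole register file):

void __attribute__((aligned(16))) __attribute__((naked)) interrupt_service_routine()
{
	asm volatile(
		"csrw 0x8A0, ra \n" // stash incoming registers into the reserved CSR range
		"csrw 0x8A1, sp \n"
		"csrw 0x8A2, gp \n"
		// ...and so on for the remaining registers...
	);
	// ...dispatch on mcause, run the scheduler, restore registers, mret...
}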

There is a floating-point math unit on each CPU of tinysys, and by default the FPU would have its own set of registers. That makes task switching more expensive than it should be, therefore the extension ‘zfinx’ (floating point registers in integer registers) was used to make saving/restoring registers much quicker.

If we go back a bit, the TaskSwitchToNext() function now has to grab those registers saved in the CSRs and store them in the task area (remember that uncached memory area from earlier storing the task structures?). The registers could have been saved directly to this area to avoid a round trip, so I guess that’s something to clean up later.

The switch-to-next function is also where we decide whether a task should be removed from the list by checking its state; if it’s set to TS_TERMINATED we simply swap the end of the list onto this item and reduce the task count by one before returning, effectively deleting the task from our list.
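Putting the description above into code, the core of the function might look like this sketch; SaveRegistersFromCSRs()/RestoreRegistersToCSRs() are hypothetical names for the CSR round trip, and error handling is omitted:

uint32_t TaskSwitchToNext(struct STaskContext *ctx)
{
	// Store the interrupted task's registers from the ISR CSR area
	SaveRegistersFromCSRs(&ctx->tasks[ctx->currentTask]);

	// Advance to the next task, swap-removing terminated entries on the way
	int32_t next = (ctx->currentTask + 1) % ctx->numTasks;
	while (ctx->tasks[next].state == TS_TERMINATED && ctx->numTasks > 1)
	{
		ctx->tasks[next] = ctx->tasks[ctx->numTasks - 1]; // move list end onto this slot
		ctx->numTasks--;
		next %= ctx->numTasks; // wrap in case we removed the last slot
	}

	ctx->currentTask = next;
	RestoreRegistersToCSRs(&ctx->tasks[next]);
	return ctx->tasks[next].runLength; // caller uses this to arm the timer
}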

Scheduling on multiple cores

This should come as no surprise to anyone when I say ‘tinysys does not schedule across cores’.

The reason for that is simplicity: each core has its own ISR handler, which handles its individual timer interrupts and switches tasks, and interfering with that from one core while the other might not be aware just makes things a bit more difficult. However, if we ever wish to, say, implement task migration across the CPUs, this is something we’ll need to tackle in the future.

The ‘idle’ task

Each CPU on tinysys gets one of the two copies of an ‘idle’ task installed as the first entry in its task list. This task is responsible for handling (or routing) errors to the main CPU to be displayed, and, on the main core, for managing any pending device I/O.
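Something like this sketch captures its shape (the helper names are illustrative, not the actual tinysys functions):

void IdleTask(void)
{
	uint32_t hartid = E32ReadHartID(); // hypothetical mhartid reader
	while (1)
	{
		ReportKernelErrors();          // hypothetical: route/display kernel errors
		if (hartid == 0)
			HandleDeviceIO();          // hypothetical: drain pending device I/O on the main core
		TaskYield();                   // hand the rest of the slice to real work
	}
}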

There is also a second task installed on the main CPU called ‘cli’, which is essentially a command line interpreter that lets us enter, well, commands or executable names to run.

That makes one task per CPU plus one more on the main CPU, so at boot time tinysys is already running 3 tasks and waiting for more to arrive.

Boot time and waking up the secondary CPUs

The first CPU on the system, or in RISC-V terms our first HART (hardware thread), has the number 0 assigned to it. This is the main CPU mentioned above, and it is the first CPU alive and running at boot time. The other CPUs simply sit in a WFI (wait for interrupt) loop and do pretty much nothing until the main CPU wakes them up.

And that ‘wakeup’ mechanism is worth mentioning. Our startup code (the code in our rvcrt0.h file) contains a sequence of instructions that decides whether we’re ‘booting’ or being ‘reset’ by checking the mscratch register. If this register holds a non-zero value, execution is routed to that address; otherwise it takes the default path. The default path for the main CPU is to run the main() function (our entry point), while the secondary CPUs drop into the WFI infinite loop mentioned above.
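In C-flavored pseudo-code, that decision looks roughly like this (the real thing is startup assembly in rvcrt0.h, and readcsr() is the hypothetical counterpart of the writecsr() used earlier):

uint32_t resetvector = readcsr(mscratch);
if (resetvector != 0)
	((void (*)(void))resetvector)(); // warm reset: jump to the installed address
else if (readcsr(mhartid) == 0)
	main();                          // cold boot path for the main CPU
else
	while (1)
		asm volatile("wfi");         // secondary CPUs park until woken up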

If CPU#0 writes a function pointer into the memory mapped CSR register file of, say, CPU#1 (into its mscratch register) to point at a different startup function, and then writes 0x1 to a special CSR that ‘resets’ the core (CSR 0xFEE on tinysys), we can get the other cores to ‘wake up’ and run any code we desire. This is the mechanism main() uses to kick-start the OS: it installs one function for CPU#0 and another for CPU#1, then resets both.
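Here’s a sketch of that sequence from CPU#0’s point of view. Only CSR 0xFEE comes from the text; the CSR file base mapping and the WakeCore() helper are made up for illustration (mscratch is CSR 0x340 in the RISC-V spec):

// Hypothetical mapping of another core's CSR file into CPU#0's address space
#define CSR_FILE_BASE(hart)  (0x80090000UL + (hart) * 0x4000UL)
#define CSR_FILE(hart, csr)  (*(volatile uint32_t *)(CSR_FILE_BASE(hart) + ((csr) << 2)))

void WakeCore(uint32_t hartid, void (*entry)(void))
{
	CSR_FILE(hartid, 0x340) = (uint32_t)entry; // mscratch: startup code jumps here
	CSR_FILE(hartid, 0xFEE) = 0x1;             // pulse the reset CSR
}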

The reset is handled by the fetch unit in hardware: upon recognizing the reset signal, the CPU waits for all pending instructions to retire from the pipeline, then sets PC to the reset vector. As you can guess, this is why we check the mscratch register at startup, since the startup code will run again after main() resets the CPUs.

Next: OS / hardware interop and the kernel

The next part will cover, in a bit more detail, how the hardware takes the system from boot to a running OS, and will dig a bit into kernel functionality. Until then, switch to next task.