Vanguard (microkernel)

Vanguard is a discontinued experimental microkernel developed at Apple Computer in the research-oriented Apple Advanced Technology Group (ATG) in the early 1990s. Based on the V-System, Vanguard introduced standardized object identifiers and a unique message chaining system for improved performance. Vanguard was not used in any of Apple's commercial products. Development ended in 1993 when Ross Finlayson, the project's principal investigator, left Apple.

Basic concepts
Vanguard was generally very similar to the V-System, but added support for true object-oriented programming of the operating system. This meant that kernel and server interfaces were exported as objects, which could be inherited and extended in new code. This change had no visible effect on the running system; it was mainly a change to the source code that made programming easier.

For example, Vanguard had an input/output (I/O) class that was supported by several different servers, such as the networking and file servers, with which new applications could interact by importing the I/O interface and calling its methods. This also made writing new servers much easier, because they had a standard to program to and could share code more easily.
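The idea can be sketched in modern object-oriented terms. The following is an illustrative sketch, not Vanguard code; the class name, method names, and servers are assumptions made for the example.

```python
from abc import ABC, abstractmethod

class IOInterface(ABC):
    """Stand-in for an exported I/O class in the Vanguard style (names assumed)."""
    @abstractmethod
    def read(self, count: int) -> bytes: ...
    @abstractmethod
    def write(self, data: bytes) -> int: ...

class FileServer(IOInterface):
    """A server inherits the shared interface and supplies its own backend."""
    def __init__(self):
        self._buf = bytearray()
    def read(self, count: int) -> bytes:
        data = bytes(self._buf[:count])
        del self._buf[:count]
        return data
    def write(self, data: bytes) -> int:
        self._buf.extend(data)
        return len(data)

def copy_all(src: IOInterface, dst: IOInterface) -> None:
    """Client code programs against the interface, not a particular server."""
    while chunk := src.read(4096):
        dst.write(chunk)
```

Because `copy_all` only sees the shared interface, it works unchanged with any server that exports it, which is the code-sharing benefit the text describes.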

V messaging semantics
A key concept of almost all microkernels is to break one larger kernel down into a set of communicating servers. Instead of a single large program controlling all of the hardware of a computer system, the various duties are apportioned among smaller programs that are given rights to control different parts of the machine. For example, one server can be given control of the networking hardware, while another has the task of managing the hard disk drives. Another server would handle the file system, calling both of these lower-level servers. User applications ask for services by sending messages to these servers, using some form of inter-process communication (IPC), in contrast to asking the kernel to do this work via a system call (syscall) or trap.

Under V, the IPC system is conceptually modeled on remote procedure calls (RPC) from the client application's perspective. The client imported an interface definition file containing information about the calls supported by the kernel, or by other applications, and then used this definition to package up requests. When called, the kernel would immediately take over, examine the request, and pass the information off to the right handler, potentially within the kernel itself. Any results were then handed back through the kernel to the client.
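In outline, such a synchronous dispatch might look like the following sketch. The registry, service names, and handler are hypothetical, invented for illustration; they are not V's actual API.

```python
# Toy model of V-style synchronous IPC: the "kernel" examines each
# packaged request and routes it to the right handler, and the client
# blocks until the reply is handed back. All names are invented.

handlers = {}

def register(service, handler):
    handlers[service] = handler

def send(service, op, *args):
    # Kernel-side routing: look up the handler and call it synchronously.
    if service not in handlers:
        raise KeyError(f"no handler for {service}")
    return handlers[service](op, *args)

def file_server(op, *args):
    # A hypothetical server answering one request at a time.
    if op == "open":
        return {"fd": 3}          # illustrative reply
    raise ValueError(f"unknown op {op}")

register("files", file_server)
reply = send("files", "open", "/etc/motd")   # client blocks until the reply
```

From the client's point of view, `send` behaves like an ordinary procedure call, which is exactly the RPC-like illusion described above.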

The operation of the system as it appears to the client application is very similar to working with a normal monolithic kernel. Although the results passed back might come from a third-party handler, this was essentially invisible to the client. Servers handling these requests operated in a similar fashion to the clients, opening connections with the kernel to pass data. However, servers generally spawned new threads as required to handle longer-lasting requests. When these were handled and the responses posted, the thread could be de-allocated and the server could go back into a receive mode awaiting further requests.

In contrast, most microkernel systems are based on a model of asynchronous communication rather than synchronous procedure calls. The canonical microkernel system, Mach, modeled messages as I/O, which has several important side effects. Primary among these is that the normal task schedulers under Unix-like systems will block a client that is waiting on an I/O request, so the actions of pausing and restarting applications waiting on messages were already built into the underlying system. The downside to this approach is that the scheduler is fairly heavyweight; calling it was a serious performance bottleneck and led to extensive development efforts to improve its performance. Under the V-System model, the message-passing overhead is reduced because the process scheduler does not need to be consulted: there is no question as to what should run next, namely the server being called. The downside to the V approach is that it requires more work for the server if the response may take some time to produce.

Chaining
One major addition to the IPC system under Vanguard, in contrast to V, was the concept of message chains, allowing one message to be sent between several interacting servers in one round-trip. In theory, chaining could improve the performance of common multi-step operations.

Consider the case where a client application must read a file. Normally this would require one message to the kernel to find the file server, then three more messages to the file server: one to resolve the file name into an object id, another to open that id, then finally another to read the file. Using Vanguard's chaining, one message could be constructed by the client that contained all of these requests. The message would be sent to the kernel, and then passed off to the file server which would handle all three requests before finally returning data.

Much of the performance problem normally associated with microkernel systems is due to the context switches as messages are passed back and forth between applications. In the example above, running on a V system there would have to be a total of eight context switches: two for each request as the client switched to and from the kernel. In Vanguard, using a chain would reduce this to only three switches: one out of the client into the kernel, another from the kernel to the file server, and finally one from the server back to the client. In some cases the overhead of a context switch is greater than the time it takes to actually run the request, so Vanguard's chaining mechanism could result in real-world performance improvements.
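The context-switch arithmetic above can be checked with a small sketch; the accounting simply follows the description in the text.

```python
# Count context switches for the four-request file-read example:
# plain V messaging versus a single Vanguard chain.

def switches_unchained(num_requests: int) -> int:
    # Each request costs two switches: client -> kernel/server and back.
    return 2 * num_requests

def switches_chained() -> int:
    # One chain: client -> kernel, kernel -> file server, server -> client.
    return 3

requests = ["find file server", "resolve name", "open", "read"]
unchained = switches_unchained(len(requests))   # 8 switches
chained = switches_chained()                    # 3 switches
```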

Object naming
V had also introduced a simple distributed name service. The service stored well-known character names representing various objects in a distributed V system. Applications could ask the name server for an object by name and would be handed back an identifier that allowed them to interact with that object. The name service was not a separate server; it was managed by code in the kernel. Contrast this with the full name server under the operating system Spring, which not only knew about objects inside the system, but was also used by other servers on the system to translate their private names, for example, file names and IP addresses.

Under the V-System, objects in servers were referred to via an ad hoc private key of some sort, say a 32-bit integer. Clients would pass these keys to the servers to maintain a conversation about a specific task. For example, an application might ask the kernel for the file system and be handed a 32-bit key representing a program ID, then use that key to send a message to the file system asking it to open a file, which would result in a 64-bit key being handed back. The keys in this example were proprietary to the servers; there was no common key format used across the system.

This sort of name resolution was so common under V that the authors decided to make these keys first-class citizens under Vanguard. Instead of using whatever object IDs the servers happened to use, under Vanguard all servers were expected to understand and return a globally unique 128-bit key, the first 64 bits containing a server identifier and the second 64 bits identifying an object within that server. The server ID was maintained in the kernel, allowing it to hand the message off over the network if the server being referenced was on a remote machine. To the client this was invisible. It is unclear whether the IDs were assigned randomly to preclude successful guessing by ill-intentioned software.
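The described key layout can be sketched as a pair of packing helpers. Placing the server identifier in the high 64 bits is an assumption made for illustration; the source only states that the key combines the two 64-bit halves.

```python
# Pack and unpack a Vanguard-style 128-bit object key:
# 64 bits of server ID plus 64 bits of per-server object ID.
# The high/low ordering of the halves is assumed, not documented.

MASK64 = (1 << 64) - 1

def make_key(server_id: int, object_id: int) -> int:
    return ((server_id & MASK64) << 64) | (object_id & MASK64)

def server_of(key: int) -> int:
    return (key >> 64) & MASK64

def object_of(key: int) -> int:
    return key & MASK64

key = make_key(0x1234, 0xDEADBEEF)
```

A kernel routing a message would only need `server_of` to decide whether the target is local or on a remote machine; the object half is opaque to everyone but the owning server.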