Unlike many distributed systems, Chromium is configured in only one place. The configuration mothership is a network service that responds to configuration queries from Chromium nodes, so all of the configuration for an entire cluster run can be managed centrally. In addition, Chromium configuration is not specified in static files; it is specified programmatically, using the Python programming language. If you're not familiar with Python, don't worry: you can almost always create a new configuration script from an existing one without really understanding the semantics of the language. In WireGL, we were always writing homegrown parameterized scripts to generate configuration files; in Chromium, the script is the configuration file.
A Chromium configuration script builds a graph that describes how OpenGL commands flow from the application node(s) to the server node(s). Application and server nodes host SPUs, which process and filter the OpenGL command stream. The graph is a DAG (directed acyclic graph). To get a better feel for the layout of configuration graphs, take a look at the figures shown in the graphical configuration tool section.
Let's look at the crdemo.conf script that drove the Hello, World demo. You should be viewing this configuration script while reading the description below.
 6:  import sys
 7:  sys.path.append( '../server' )
 8:  from mothership import *
These three lines will appear at the top of every configuration script. Lines 6 and 7 are Python boilerplate that allow the interpreter to search another directory for imported modules. In this case, the mothership module loaded on line 8 is located in the cr/mothership/server directory. Line 8 imports all of the symbols from the mothership Python module. We say "from mothership import *" instead of "import mothership" so that we can refer to elements of the mothership module without explicitly qualifying their names. For example, later we will see references to SPU objects, which would be mothership.SPU objects had we not done our import this way.
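As a small illustration of the difference (a sketch only; it simply restates the import behavior described above):

    # Because of "from mothership import *" on line 8, names can be used directly:
    spu = SPU( 'render' )

    # Had we written "import mothership" instead, the same call would have to be
    # fully qualified:
    #   import mothership
    #   spu = mothership.SPU( 'render' )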
10:  if len(sys.argv) > 3 or len(sys.argv) < 2:
11:      print 'Usage: %s <demo> [spu]' % sys.argv[0]
12:      sys.exit(-1)
These lines make sure that the arguments to the script are correct. Recall that this script determines which program to run by its first argument, so "sys.argv" (analogous to the "argv" parameter to "main" in C/C++) must have at least two elements (including the name of the script). This script can take an optional third argument specifying the SPU to be loaded by the client node. Notice, by the way, that Python delimits blocks by indentation only; there are no curly braces or "endif"s to mark the end of things. Be extremely careful about mixing tabs and spaces (i.e., don't do it).
14:  demo = sys.argv[1]
Line 14 assigns the first script argument, the name of the program to be run, to "demo". In the Hello, World example, this would hold the value "fonttest".
16:  if len(sys.argv) == 3:
17:      clientspuname = sys.argv[2]
18:  else:
19:      clientspuname = 'pack'
Lines 16-19 figure out which SPU will be loaded by the application. If the user has specified three arguments to the script, then the third one will be the name of the SPU. Otherwise, it defaults to "pack".
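To make the argument handling concrete, here is how a typical invocation maps onto these variables (a hedged sketch; the command line follows the style of the Hello, World demo):

    # If the mothership is started as:   python crdemo.conf fonttest tilesort
    # then inside this script:
    #   sys.argv       == ['crdemo.conf', 'fonttest', 'tilesort']
    #   demo           == 'fonttest'
    #   clientspuname  == 'tilesort'
    #
    # Started as:   python crdemo.conf fonttest
    #   clientspuname  == 'pack'        # the default from line 19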
21:  server_spu = SPU( 'render' )
22:  client_spu = SPU( clientspuname )
Lines 21-22 create SPU objects. These are the plug-in modules that implement the OpenGL API. In the Hello, World example, we created two SPUs: a render SPU that dispatches OpenGL calls to the system's implementation, and a pack SPU that encodes its commands and sends them over the network verbatim (i.e., with [almost] no analysis). Each SPU in a system needs to have a separate SPU object created for it in the configuration script. Notice that we are using the client SPU name that was computed in lines 16-19.
24:  server_spu.Conf( 'window_geometry', [100, 100, 500, 500] )
Once the SPUs are created, they can be configured. In this case, the render SPU needs to know what kind of window to create. This directive tells the render SPU to create a window that is 500 pixels wide and 500 pixels high, and is shown at an offset of (100,100) from the upper-left corner of the screen. This is all the SPU configuration that happens in this script, because the client SPU needs no configuration. A complete list of all available configuration parameters for all provided SPUs is given in the "Configuration options for Provided SPUs" section.
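As a further illustration of the window_geometry option, the following would request a 640x480 window at the upper-left corner of the screen; the value is interpreted as [x, y, width, height], per the description above:

    # A 640x480 render SPU window at the upper-left corner of the screen:
    server_spu.Conf( 'window_geometry', [0, 0, 640, 480] )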
27:  server_node = CRNetworkNode( )
28:  server_node.AddSPU( server_spu )
Once all the SPUs have been defined and configured, it's time to describe the graph of nodes itself. The server node is created as an instance of a CRNetworkNode object. Notice that no parameters are given to the constructor for this object. There are two optional parameters that can be passed to the constructor: a "hostname" parameter indicating the name of the computer on which this node will be running, and a "port" parameter indicating what port it should listen on for clients. If no hostname is provided, the default is 'localhost'. The default port is 7000.
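For example, a server intended to run on a different machine and listen on a non-default port might be declared as follows (a sketch only: the hostname is hypothetical, and the constructor is assumed to accept the two parameters in the order described above):

    # Server node on the (hypothetical) host 'tile0', listening on port 7001:
    server_node = CRNetworkNode( 'tile0', 7001 )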
Each node can have a chain of SPUs attached to it. The order of the chain is determined by the order of calls to the "AddSPU" method. Here, the server node only has one SPU, so order is irrelevant.
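When a node does host more than one SPU, the order of the AddSPU calls is the order in which the OpenGL stream passes through them. Here is a minimal sketch of what lines 27-28 might look like if the server also hosted a print SPU (described at the end of this section; its placement ahead of the render SPU is an assumption made for illustration):

    # The stream visits SPUs in the order they were added: it is logged by the
    # print SPU first, then handed to the render SPU for drawing.
    print_spu = SPU( 'print' )
    server_node = CRNetworkNode( )
    server_node.AddSPU( print_spu )
    server_node.AddSPU( server_spu )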
30:  if (clientspuname == 'tilesort' ):
31:      server_node.AddTile( 0, 0, 500, 500 )
If the client SPU is the tilesort SPU (used for rendering to tiled displays), the tiling of the logical output space must be provided. This information is mandatory even if there is only one tile, since the absence of such a tile will make the server behave in slightly different ways (in particular, with respect to the glViewport and glScissor calls). Notice that the tiling information is associated with the server node, not the render SPU.
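For an actual tiled display, each tile gets its own AddTile call giving its offset and size in the logical output space. Below is a hedged sketch of one server hosting two side-by-side 500x500 tiles of a 1000x500 logical display (real tiled configurations typically spread the tiles across several server nodes):

    # Two 500x500 tiles covering a 1000x500 logical output space:
    server_node.AddTile(   0, 0, 500, 500 )   # left half
    server_node.AddTile( 500, 0, 500, 500 )   # right half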
33:  client_node = CRApplicationNode( )
34:  client_node.AddSPU( client_spu )
35:  client_spu.AddServer( server_node, 'tcpip' )
Now that the server node has been completely defined, we define the client node. Notice that lines 33 and 34 are almost identical to lines 27 and 28, except the client node is defined as an instance of a CRApplicationNode object. Line 35 adds the server node (defined on line 27) to the client SPU. Recall that the client SPU defaults to the pack SPU, or the tilesort SPU could be used instead. In fact, it is possible to make the client SPU the render SPU, in which case the server is not needed. In such a case, calling AddServer won't hurt anything, so we always create a client-server relationship, regardless of the client SPU being used. This is in contrast to lines 30 and 31, where we don't want to define a tiling if we're not using a tiling-aware client SPU.
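To make the render-only case concrete, a server-less configuration would reduce to something like the following sketch (an assumption based on the description above; no CRNetworkNode, AddServer call, or tiling is involved):

    # The application node renders locally; no server node exists at all.
    client_spu = SPU( 'render' )
    client_node = CRApplicationNode( )
    client_node.AddSPU( client_spu )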
37:  client_node.SetApplication( '%s/%s' % (crbindir, demo) )
38:  client_node.StartDir( crbindir )
So far, the node graph and SPU collection have been completely generic. Lines 37 and 38 bind a specific application to the client node. Line 37 tells the application faker running on the client node which application to run. Line 38 tells the application faker to change directories to "crbindir" before launching the application. This is an optional step, but it is useful for applications that need to find certain input files, such as textures or application configuration information. If this line is omitted, the specified application will run in whatever directory crappfaker was run from.
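For instance, an application that loads textures through relative paths could be started in its data directory instead (the directory shown is hypothetical):

    # Launch the demo binary from crbindir, but start it in a (hypothetical)
    # data directory so that relative texture paths resolve correctly:
    client_node.SetApplication( '%s/%s' % (crbindir, demo) )
    client_node.StartDir( '/home/me/demo-data' )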
40:  cr = CR()
41:  cr.MTU( 32*1024 );
42:  cr.AddNode( client_node )
43:  cr.AddNode( server_node )
44:  cr.Go()
The final 5 lines of the configuration script set everything in motion. Line 40 creates a CR object, which is the network-aware mothership that will respond to queries about the nodes it manages. On line 41, the MTU function sets the largest buffer size that is allowed to pass between a client and a server (in future versions of Chromium, this parameter may be specified in a different way). Lines 42 and 43 add the two nodes we have created to the mothership. Finally, the Go method is called on line 44, which will loop forever, answering configuration queries over the network.
Although this configuration script is quite simple, it exercises all of the features of the mothership scripting environment. An augmented version of this configuration file is available as crdemo_full.conf. This version adds two more SPUs, one at each node. These SPUs, called print SPUs, generate human-readable dumps of the OpenGL stream to log files for debugging or analysis. The reader should look at both crdemo_full.conf and crdemo.conf and make sure that the differences are clear.
An alternative to writing configuration scripts by hand is to generate them with the graphical configuration tool, which is documented separately.
Also see the autostart section for information about automatically starting application and crserver processes.