## August 13, 2009

### Dan Ghica on Software vs Hardware

#### Posted by John Baez

Here’s a quick note from LICS ‘09. I’m too ignorant to offer a well-balanced summary of the conference, even apart from the fact that I missed the first day! So I won’t even try.

But here’s something I learned.

I’m fascinated by how categories can be applied both to physics (where we can think of a morphism as a physical process and draw it as a Feynman diagram or a circuit diagram) and to computer science (where we can think of a morphism as a program that takes data of some type as input, and spits out data of some other type).

Clearly these two applications must meet somewhere when we try to implement programs as actual physical processes! But how, exactly?

Yesterday Dan Ghica gave a talk on ‘Applications of game semantics: from program analysis to hardware synthesis’. Unfortunately I missed that talk. But today he told me a bit about it. Among other things, he said that symmetric monoidal closed categories are a useful framework for the problem of converting programs written in a high-level language into hardware specifications.

The point is that hardware amounts to black boxes hooked up by wires, and that’s what symmetric monoidal closed categories are good for! An example is the category of ‘handshake circuits’ in Section 3 of his paper Geometry of synthesis: a structured approach to VLSI design.

The first section of his paper Function interface models for hardware compilation is also very interesting. For example:

The problem of synthesis of gate-level descriptions of digital circuits from behavioural specifications written in higher-level programming languages (hardware compilation) has been studied for a long time yet a definitive solution has not been forthcoming. The argument of this essay is mainly methodological, bringing a perspective that is informed by recent developments in programming-language theory. We argue that one of the major obstacles in the way of hardware compilation becoming a useful and mature technology is the lack of a well defined function interface model, i.e. a canonical way in which functions communicate with arguments. We discuss the consequences of this problem and propose a solution based on new developments in programming language theory.

On the one hand, ‘type theory’, profoundly related to category theory, has been widely adopted in the analysis and design of high-level languages. He quotes Robert Harper on this:

Over the last two decades type theory has emerged as the central organizing framework for the design and implementation of programming languages. Type theory (the study of type systems) provides the theoretical foundation for safe component integration. In the words of John Reynolds, “a type system is a syntactic discipline for enforcing levels of abstraction”. By syntactic we mean that the type system is part of the program, rather than purely in the mind of the implementer. By discipline we mean that type restrictions are enforced; ill-typed combinations are ruled out by the type system. By levels of abstraction we mean the clean separation between conceptually distinct data objects that may, in a particular program or compiler, have the same or similar representations. Implicit in this definition is an important principle: the power of a type system lies as much in what it precludes as what it allows. The most powerful type system of all is the one that rules out all programs. However, such a type system, while surely preventing safety violations, is hardly very useful. The goal of type system design is to increase expressiveness by admitting useful programs, but without compromising safety.

Particularly crucial is the ability to talk about ‘function types’ — this is where the closedness in symmetric monoidal categories is important:

Function interface models

Functions and related concepts (procedures, methods, subroutines, etc.) are the main mechanism for implementing abstraction in a programming language. The importance of using functional abstractions lies at the core of any good programming methodology and its benefits have long been established.

All three initial major modern programming languages (Fortran, Lisp, Cobol) provided support for functions, albeit sometimes in a clumsy or limited way. Of the three, the most advanced support for functions is found in Lisp. Not only does it support higher-order functions (functions that take functions as arguments) but it also introduced the new concept of a Foreign Function Interface: a way for a program written in Lisp to call functions written in another programming language. This idea was adopted by all subsequent programming languages that had mature compilers, under one name or another…

He seems to use the term ‘function interface model’ (FIM) to mean a setup for dealing with all the above issues involving functions. I would be glad to think of the theory of symmetric monoidal closed categories as a very basic ‘function interface model’.
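To make the ‘closedness’ concrete, here is a toy sketch in Python (my own illustration, not from Ghica’s papers): the bijection between morphisms $A \otimes B \to C$ and morphisms $A \to [B,C]$, where $[B,C]$ is the internal ‘function type’, is just currying.

```python
# Currying illustrates the 'closedness' of a symmetric monoidal closed
# category: morphisms A (x) B -> C correspond to morphisms A -> [B, C].
# (Toy sketch; all names here are invented for illustration.)

def curry(f):
    """Turn a map on pairs (A x B -> C) into a map A -> (B -> C)."""
    return lambda a: lambda b: f((a, b))

def uncurry(g):
    """The inverse direction: A -> (B -> C) becomes A x B -> C."""
    return lambda ab: g(ab[0])(ab[1])

add_pair = lambda ab: ab[0] + ab[1]   # a morphism Int (x) Int -> Int
add = curry(add_pair)                 # its transpose Int -> [Int, Int]

assert add(2)(3) == add_pair((2, 3)) == 5
assert uncurry(curry(add_pair))((4, 5)) == add_pair((4, 5))
```

The two assertions check that currying and uncurrying are mutually inverse, which is the content of the hom-set bijection.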

But on the other hand, it’s not obvious how all these abstract ideas play out at the level of hardware! The idea of ‘hardware compilation’ makes this problem more than merely theoretical. Niklaus Wirth explains what ‘hardware compilation’ means:

Automatically translating a program specified in a programming language into a digital circuit is an idea of long-standing interest. Thus far, the concept has appeared to be an uneconomical method of largely academic, but hardly practical, interest. It has therefore not been pursued with vigor and consequently has remained an idealist’s dream. With the increasing use of hardware description languages, however, it has become evident that hardware and software design share several traits.

They have traits in common, but also big differences. Where in software we build big programs out of ‘subroutines’ or ‘functions’, in hardware we build big circuits out of so-called ‘modules’. Here a ‘module’ is a fancy name for what we might also call a ‘black box’, with some wires going in, and some wires going out. The interesting thing is how ‘modules’ are less flexible than ‘functions’. Dan Ghica explains:

FIMs and hardware compilation

It is taken as a given that stored-program computers must offer well-defined function interface models (FIMs). It is inconceivable to design a modern operating system or compiler if this fundamental requirement is not met. On the other hand, in the world of hardware the concept of a FIM simply did not arise. The net-lists (boxes and wires) that are the underlying computational models of hardware languages are not structured in a way that suggests any obvious native FIM.

The abstraction mechanism common in hardware languages such as Verilog or VHDL, the module, is a form of placing a design inside a conceptual box with a specified interface. This does serve the purpose of hiding the implementation details of the module from its user, which is one of the main advantages of abstraction. However, a module is less powerful than functional abstraction as found in programming languages in several significant ways:

1. Modules are always first order. A module can only take data as input and not other modules; in contrast, functions can take other functions as input.

2. Modules must be explicitly instantiated. The hardware designer must manually keep track of the names of each module occurrence and its ports, which must be connected explicitly by wires to other elements of the design. In contrast, the run-time management of various instances of functions is managed by the OS (using activation records) in a way that is transparent to the programmer.

3. Sharing a module from various points in a design is not supported by hardware design languages. The designer must design ad hoc circuitry, such as multiplexers or de-multiplexers and ad hoc protocols to achieve this. The lack of language support for sharing makes this a delicate and error-prone task.
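To make the contrast vivid, here is a toy sketch in Python rather than an HDL (my own illustration; all the names are invented):

```python
# Higher-order: a function that takes a function as input. Point 1
# above says hardware 'modules' cannot do this directly.
def twice(f):
    return lambda x: f(f(x))

inc = lambda x: x + 1

# A hardware 'module' is first order: to get the effect of twice(inc),
# the designer must explicitly instantiate two copies of the inc
# circuit and wire them in series (points 2 and 3), roughly:
def inc_module_a(x): return x + 1    # first instance, named explicitly
def inc_module_b(x): return x + 1    # second instance, named explicitly
def twice_inc_netlist(x):            # explicit wiring of the two boxes
    return inc_module_b(inc_module_a(x))

assert twice(inc)(5) == twice_inc_netlist(5) == 7
```

The software version reuses one definition of `inc`; the ‘netlist’ version must duplicate and name each occurrence by hand, which is exactly the inflexibility Ghica describes.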

It would be nice to understand these differences between ‘modules’ and ‘functions’ in category-theoretic terms. I bet Dan Ghica already knows a lot about this, and probably there’s a lot more about it in his papers! But this is all I know right now.

Posted at August 13, 2009 1:37 AM UTC


### Re: Dan Ghica on Software vs Hardware

A non-mathematical comment: one thing you didn’t explicitly mention is that Field Programmable Gate Array (FPGA) technology is what makes hardware compilation real-world applicable. This technology allows a signal to a “meta-input” on a programmable gate to configure the gate to implement a particular (digital) input-output mapping, with such reprogramming of the entire chip taking many, many orders of magnitude longer than “standard” operations of the gates. So it’s not something you want to do often, but it does mean that you can perform partial evaluation (as really badly explained on this wikipedia link) and/or symbolic evaluation for some program use-context before programming the chip for use in this application for a while.

The upshot is that you can write programs that involve higher-order functions for these applications, provided enough of their argument structure and/or inputs is known that after partial evaluation all you’ve got is a first-order functional program (and its implicit data structures are not problematic ones that involve unbounded storage, like lists). I don’t have enough knowledge to know whether the most realistic attack on the practical problem is enhancing partial-evaluation technologies to transform higher-order functions into all first-order functions (which can then be relatively simply mapped to “modules”) or expanding the ways of representing modules so that they still reflect hardware limitations but can deal with more higher-level programming-language issues.
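Here is a toy Python sketch of that kind of partial evaluation (my illustration; the names are invented, and real hardware compilers are of course far more involved):

```python
from functools import partial

# A higher-order program: apply_filter is parameterised by a kernel
# *function*, so it cannot be mapped directly to a first-order module.
def apply_filter(kernel, signal):
    return [kernel(x) for x in signal]

threshold = lambda t: lambda x: 1 if x > t else 0

# Partial evaluation: once the functional argument (the kernel, with
# its parameter t = 10) is known, the program collapses to a
# first-order one that could be mapped to a fixed circuit.
urban_filter = partial(apply_filter, threshold(10))

assert urban_filter([5, 15, 8, 20]) == [0, 1, 0, 1]
```

After the `partial` step, `urban_filter` takes only plain data as input, which is the ‘first-order functional program’ the comment describes.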

Posted by: bane on August 13, 2009 12:46 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Bane wrote:

A non-mathematical comment: one thing you didn’t explicitly mention is that Field Programmable Gate Array (FPGA) technology is what makes hardware compilation real-world applicable.

Cool! I didn’t know about Field Programmable Gate Array technology. I’m definitely curious about the real-world applicability of the ideas I’m talking about here… but I’m very ignorant about it, and very confused about a lot of incredibly basic things.

For example, what’s the difference between ‘hardware compilation’ and designing an application-specific integrated circuit (ASIC) using a hardware description language? Hmm, maybe I can guess. I guess it’s called ‘hardware compilation’ when you write a program and it gets converted to a system of ‘modules’ that get implemented on programmable gates in real time, while otherwise it’s just called ‘ASIC design’ or something. Is that at all close?

How much have people actually done with FPGA technology? Is there some setup where you can write a little program in a high-level language, have some tool automatically convert it to a hardware description language, and then use FPGA technology to get the desired circuit to magically materialize on your chip? Or is this something people are just dreaming about? Or…?

Posted by: John Baez on August 13, 2009 11:35 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Warning: I know most of this from being in an office with someone working on this 10 years ago, so some of this might be superseded.

I don’t think that there’s a hard line between using “hardware compilation” to describe FPGA stuff and static ASIC design. It’s more of an economic thing: given how much it costs to set up a fabrication run, the need to sell the resulting chips at a price point giving a profit, and the relative cheapness of using older “mainstream CPUs”, there’s a limit on how much static hardware design needs to be done. Whilst computer-assisted circuit design is already important, it’s not clear-cut that the chip industry absolutely needs higher-level design techniques to continue. On the other hand, programs compiled down into FPGAs give you close to the speed of static ASICs with the advantage that it’s just programming, so if there were an easy “language” for human beings writing programs which can be compiled down relatively efficiently to hardware, it’s almost certain that companies could sell watches, cell-phones, PDAs, wearable computers, etc., with one or more FPGA processors. So it’s not any conceptual difference but rather an economic one.

This is both because being much closer to a gate-specific circuit is significantly faster and uses less power (though slower and more power-hungry than an ASIC), and because you can economically specialise things for the immediate task even more than you can with ASICs. As an example, a common operation in computer vision is to extract various features from an image, which basically amounts to computing the 2-D correlation of various feature matrices against the image matrix. Different features work better in urban scenes than countryside scenes. So rather than having an ASIC that takes as input both feature matrices and images that you could use anywhere, you could “partially evaluate” urban features into the feature-detection circuit and program it into your FPGA when you go into an urban area. When you go into the countryside you could reprogram it with the countryside-specialised feature-extractor circuit. This is one of the things that gets people, myself included, most excited about FPGA possibilities, but it’s also the most difficult, because converting programs and partial inputs into effective circuits is such a difficult theoretical and technical problem.
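Here is a toy Python sketch of that computer-vision example (my illustration; the kernel and sizes are invented, and a real system would run this on large images in hardware):

```python
# 2-D correlation of a 'feature' kernel against an image. Partially
# evaluating the kernel turns a two-input circuit into a smaller
# one-input circuit, which could be flashed onto an FPGA for, say,
# urban scenes and later replaced by a countryside-specialised one.

def correlate2d(kernel, image):
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            s = sum(kernel[a][b] * image[i + a][j + b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

edge = [[1, -1]]                  # a toy horizontal-edge 'feature'
def urban_detector(image):        # kernel partially evaluated in
    return correlate2d(edge, image)

assert urban_detector([[1, 2, 4]]) == [[-1, -2]]
```

The general `correlate2d` corresponds to the two-input ASIC in the comment; `urban_detector`, with `edge` baked in, corresponds to the specialised circuit you would program into the FPGA.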

As to how much they’re used: I think it’s standard practice when you’re designing an ASIC to test various prototype designs on an FPGA, and it sometimes makes economic sense to use FPGAs in the final product (as you don’t need to book fabrication time and can fix design bugs in the field). Other than that, my experience has been that you get the typical problem of the split between “core technology developers” and “higher level technology users”. All the chip engineering people want to bring in users, e.g., vision people. But, because it’s such a complicated task, the compilation technology is not as good as it needs to be to be effectively used, and after a few initial projects it ends up being labelled as “get involved in the future when it’s more mature”. (There’s an irony that the “using” fields like AI, computer vision, etc. have “users” of their own who take the same viewpoint!)

Ten years ago you could write a program in a restricted functional language and get it compiled down to an FPGA circuit automatically, but this was partly done using a lot of “primitive” circuit designs that were easy to connect together, and the resulting circuits tended to have a significant amount of redundant recomputation; the big research topic was finding better techniques to bring the results much closer to a hand-tuned design.

So my, very possibly out of date, understanding is that it’s similar to the very first motor cars: it works and there’s no doubt it will be made to work with less hassle, but we’re some unknown number of years before a Henry Ford makes motor transport accessible to people who don’t have an interest in their cars beyond using them.

Posted by: bane on August 14, 2009 11:16 AM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Thanks, bane, for your very nice explanation of the state of the art… as it was back then. It’s fascinating! I wonder how much it’s changed by now. I hope the dream of economically viable hardware compilation comes true — just because it’s such a cool idea.

Posted by: John Baez on August 15, 2009 6:18 AM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

I’ve played a bit with FPGAs. It’s very different from writing software - you almost have to invert the way you think. It’s fun to design and implement your own CPU, and at the simpler end I implemented a Nim-like combinatorial game with a bunch of LEDs and switches for I/O. Kits start at about \$40.

Posted by: Dan Piponi on August 15, 2009 4:25 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Dan Piponi wrote:

I’ve played a bit with FPGAs.

Wow! I’m such a theorist it would never occur to me to actually play around with Field Programmable Gate Arrays — I’d be just as inclined to play around with quarks or black holes. I was having fun playing around with the idea of FPGAs!

I really admire the hands-on approach, but I had it beaten out of me early in life, and retreated to theory.

Posted by: John Baez on August 16, 2009 12:08 AM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Well, the thing that attracted me to FPGA development was that you can call it electronics, but once it’s plugged into your PC you can do all of it from a keyboard.

But it’s also fun to start attaching other bits of hardware to your FPGAs too. I’d attach a black hole if I could get hold of one.

Posted by: Dan Piponi on August 22, 2009 4:50 AM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Unlike ASICs, FPGAs are often field programmable, i.e. you can solder a blank FPGA onto the circuit board and then program it. Some FPGAs can even be reprogrammed, which means that you don’t have to throw away the entire board, or perhaps even the entire machine, if you discover a bug in your FPGA code.

I worked a little with FPGAs for motor-control electronics a decade ago. This would be a rather typical application for FPGAs: low volumes make factory-programmed ASICs impractical, and you still don’t have to implement stuff in discrete logic. Incidentally, the brand name of this circuit was pASIC (programmable ASIC). It could neither be field programmed nor reprogrammed, though.

The first versions of the circuit were constructed as an electronic schematic, but later we switched to the hardware description language Verilog. An alternative would have been VHDL, but I have never tried that.

### Re: Dan Ghica on Software vs Hardware

This’ll be a bit of a ramble.

For example, what’s the difference between ‘hardware compilation’ and designing an application-specific integrated circuit (ASIC) using a hardware description language?

[Hypothetically, Hands-On] You use a hardware description language when you design for both ASICs and FPGAs. Hardware compilation takes the HDL [VHDL|Verilog; some FPGA vendors have toolchains that even let you use C to describe your hardware function, but it’s not a proper C program] description of your system, and generates a netlist [gatelist of logic circuits]. This is where ASIC design departs from FPGA design.

With an FPGA, you stick that netlist into your FPGA programmer and ‘flash’ the circuit. This creates in the device the logic gates that represent your hardware design. In ASIC design, you hand this off to the fab, who will produce your mask, and burn your chips for you. For a single chip, the former is MUCH cheaper than the latter. This drives the cost problem.

What kills you, costwise, is verification. If you made a mistake with your FPGA, you go back to your workbench, fix it, recompile, come back the next day and flash your device, then keep testing. If you discover a mistake in your ASIC design, you lose weeks and piles of dollars while you re-rig and re-run the fab.

So why not use FPGAs for everything? Well, we are using them more and more because the technology is growing fast. But you are picking a custom circuit solution because you have a performance challenge.

It needs to be fast, it needs to fit in a couple of square millimeters, it needs to … An ASIC can be made fast enough and fit enough, while an FPGA is a little slower, with a little less capacity, and a little bigger. Tolerances are tight. FPGAs are getting there as the manufacturing tech is scaling down like crazy. We are, lately, using FPGAs to prototype and verify functional blocks [modules?], then shipping final designs to fabs [where we can get economy of scale].

As an aside, flashing an FPGA might be as fast as a 10^-3 s process, which is 10^3 times longer than a lot of the ‘functions’ need to execute for the device to function. This is why dynamically reprogramming an FPGA is weird.

Posted by: David on August 31, 2009 8:50 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Another development in the abstract theory of circuits has long been pursued in a series of papers by R.F.C. Walters and his collaborators (particularly Katis and Sabadini). A sample paper is here.

Posted by: Todd Trimble on August 13, 2009 2:42 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

Thanks! I’ll have to read more of their stuff. A quick look at the paper you cited left me curious how much contact there is between this abstract work and various ‘practical’ approaches to circuits. It says “compact closed categories and Cartesian bicategories have been used as a model for circuit design”, and cites these papers as evidence:

• C. Brown and A. Jeffrey, Allegories of circuits, in Proc. Logical Foundations of Computer Science, St. Petersburg, 1991.
• G. Jones and M. Sheeran, Circuit design in Ruby, in Formal Methods for VLSI Design, North-Holland, 1990, pp. 13-70.

I should look at these!

I know a little about allegories, so I’m curious if the idea is that circuits are supposed to be morphisms in an allegory, and if so, what’s the meaning of the partial order on circuits $f: X \to Y$. Are they talking about boolean circuits, which are basically matrices taking values in the poset $\{F,T\}$? If so I can guess the partial order.
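To make my guess concrete, here is a toy Python sketch (my own speculation about the setup, not necessarily what Brown and Jeffrey do):

```python
# Guess at the structure: a boolean 'circuit' from X to Y as a matrix
# over {False, True}; composition is matrix multiplication with 'or'
# as sum and 'and' as product, and f <= g is entrywise implication.

def compose(f, g):
    """Compose boolean matrices: (g.f)[i][k] = OR_j f[i][j] AND g[j][k]."""
    return [[any(f[i][j] and g[j][k] for j in range(len(g)))
             for k in range(len(g[0]))]
            for i in range(len(f))]

def leq(f, g):
    """Entrywise implication: f <= g iff every f[i][j] implies g[i][j]."""
    return all((not a) or b
               for row_f, row_g in zip(f, g)
               for a, b in zip(row_f, row_g))

f = [[True, False], [False, True]]   # the identity relation on 2 points
g = [[True, True], [False, True]]    # a strictly larger relation

assert compose(f, g) == g            # composing with the identity
assert leq(f, g) and not leq(g, f)   # f sits below g in the poset
```

Here the partial order on morphisms $f: X \to Y$ is inclusion of relations, which is the ordering I would expect an allegory of boolean circuits to carry.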

I don’t know anything about Ruby, so I should probably learn a little about that too…

Posted by: John Baez on August 13, 2009 11:08 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

(BTW, Is this the line of thinking whence the Pi-calculus conversation arose?)

Anyways, some musing, inspired mostly by contemporary hardware design:

To achieve first-order treatment of modules, modules should be addressable. We’re used to thinking of computer memory as addressable and network terminals as having IP addresses. In full generality, module addresses will have to be arbitrarily long, but that’s OK. Of note, there shouldn’t be too much protocol distinction between a memory device and a function-computing device. But also, a module will need, in some way, to be able to deal with arbitrarily long addresses.

Another notion that occurred to me, in a related way but which I can’t trace anymore, is that it would at least be handy if modules are assumed to also act as proxies between modules.

A third thought: it’s sometimes not very helpful to think of (untyped) lambda terms as “functions” or morphisms in a computer-sciency sense: they’re more like function libraries. The function aspect of an untyped lambda term is more like an initialization routine that imports another library’s interface, sets up some private internals, and exports a new interface (address).

Posted by: anonymous coward on August 25, 2009 3:56 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

It depends what ‘arbitrarily long’ denotes. One of the issues with converting even relatively simple functional programs into hardware descriptions is that potentially unbounded data structures can’t be implicitly embedded into the system in a flat way. If ‘arbitrarily long’ means “settable, but fixed once a compilation begins”, then potentially yes (although it’s debatable whether arbitrarily long is necessary, given that you’ve got a fixed number of gates anyway).

The bigger issue is whether the best goal for constructing hardware compilation systems is for hardware “modules” (by which I mean a “region” of a chip, FPGA or ASIC) to be truly first-class entities, or whether it’ll be more effective to have a better understanding of what higher-order patterns in a programming language will lead to when they’re “unfolded” into something expressible as modules, and to inform the programmer as soon as possible when the translation will run into problems. In terms of category theory, I guess it’s trying to find a maximally useful substructure of “higher-order programming” which has a (computable, indeed quickly computable) mapping into the language of first-order modules.

Posted by: bane on August 25, 2009 6:26 PM | Permalink | Reply to this

### Re: Dan Ghica on Software vs Hardware

A.C. wrote:

(BTW, Is this the line of thinking whence the Pi-calculus conversation arose?)

Only in a rough sense. I’m working with Mike Stay and Paul-André Melliès on $n$-categories and computer science. My main contribution is a nearly complete ignorance of computer science, which has a certain liberating effect. I spent a couple years trying to understand the lambda calculus and various ‘quantum’ or ‘linear’ variants thereof; now I’m happily trying to bring 2-categories to bear on these subjects. Mike Stay has been trying to get me interested in the pi-calculus and especially the blue calculus for quite some time. I found them mysterious and unintuitive. But somehow going to LICS and talking to Dan Ghica made me more eager to learn about a broader range of category-theoretic-and-computer-sciency topics, including the pi-calculus but also the whole business of ‘software versus hardware’, ‘modules’, ‘circuits’, and other distantly related things like Eilenberg’s old work on ‘automata’. In fact, yesterday I went to the library and checked out a pile of books about automata and electrical circuits. Unfortunately UCR does not have The π-Calculus: A Theory of Mobile Processes or Communicating and Mobile Systems: the π-Calculus. I’ll have to get those via interlibrary loan.

Posted by: John Baez on August 26, 2009 9:58 PM | Permalink | Reply to this
