Sunday, March 02, 2008

Matters of Thought, Part 2

"To put it simply, then, all we have to do is construct a digital device, a computer capable of producing an informational model of absolutely anything in existence. Properly programmed, it will provide us with an exact simulation of the Highest Possible Level of Development, which we can then question, and obtain the Ultimate Answers!" --Stanislaw Lem, The Cyberiad

Witness the lowly field-effect transistor as it's used for logic: a source struggling to push electrical current over a parched channel, waiting for the gate to awaken and conjure electrons from deep in the silicon, that they may then flow to the drain and gratify the device's voltaic urges. A transistor has two inputs (a gate and a source) and one output (the drain), which produces a signal that depends on the state of the first two terminals. In computers, these inputs and outputs are wired to one another so as to create basic logic structures, whose function is to convert a pair of binary signals (1s or 0s, high or low voltage) into a single output signal according to one of the sixteen possible operations of boolean arithmetic. It seems ham-handed to use half a dozen of these little machines to run an elementary logical step (you actually need a fair amount of redundancy to guarantee that transistor logic structures are perfectly predictable), but thus are the rules of a Turing machine written into a real device. The practical and theoretical limits of transistor operation are an important concern as computer applications keep screaming for more, faster. The sheer number of devices in a processor implies a heating issue--enough current must be provided to power the next transistor down the line--and as devices get small and tightly packed, leakage currents and crosstalk become a problem, and dissipation of signal at interconnects becomes non-trivial. Even as engineers keep managing to out-think the lower size limits of fabrication, there may be a floor on performance. The price of fabrication increases as transistor sizes shrink, too, and Moore's law may ultimately fail based on what outlandish fabs people are actually willing to develop and pay for.
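
To make the "sixteen possible operations" concrete, here's a toy Python sketch of my own (nothing to do with real circuit design): every two-input boolean operation can be cobbled together from a single primitive, the NAND, which is more or less what those redundant transistor structures implement.

    # Toy illustration: all sixteen two-input boolean operations can be
    # composed from one universal primitive, here NAND.

    def nand(a, b):
        """0 only when both inputs are 1."""
        return 1 - (a & b)

    def not_(a):          # NOT from a single NAND
        return nand(a, a)

    def and_(a, b):       # AND = NOT(NAND)
        return not_(nand(a, b))

    def or_(a, b):        # OR via De Morgan: (NOT a) NAND (NOT b)
        return nand(not_(a), not_(b))

    def xor(a, b):        # XOR takes four NANDs
        c = nand(a, b)
        return nand(nand(a, c), nand(b, c))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", and_(a, b), or_(a, b), xor(a, b))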

For all I know, engineers will get single-molecule transistors into computers in my lifetime, but to keep the prophecy (self-)fulfilled, researchers are also looking for alternate computing schemes that can get around using transistors altogether, maybe something more naturally small. One group at Notre Dame has proposed equivalent logic structures using quantum dots [1]. These are solid crystals that are small enough (several millionths of a millimeter) that the electrons that normally spread across the whole crystal start to behave more like the electrons in atoms, confined to quantum states as defined by a "particle in a box," where the box is the crystal itself. A quantum dot can accommodate an extra electron, which tends to be localized on the crystal in a predictable energy state, but which is also allowed to move to an empty equivalent state in another dot by tunneling, provided the move is energetically favorable and the dots are close enough. When a cell of four equally spaced dots has two electrons to distribute between them, the charges arrange themselves so that they stay as far apart as possible, that is, at either pair of opposite corners (dots shaded black in Figure 1). Another cell can be placed nearby, close enough to feel that charge repulsion and arrange its own two electrons according to what its neighbor is doing, but just far enough away that tunneling between cells is prohibited.
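
Why the diagonal corners win is easy enough to check on the back of an envelope. A little Python of my own (not from the paper): enumerate the six ways to park two like charges on four corner dots and rank them by Coulomb repulsion, taken as 1/distance in arbitrary units.

    # Place two electrons on a square cell of four dots and rank the
    # configurations by Coulomb repulsion (~1/r, arbitrary units).
    from itertools import combinations
    from math import dist

    dots = [(0, 0), (1, 0), (0, 1), (1, 1)]   # corners of a unit cell

    configs = sorted((1.0 / dist(dots[i], dots[j]), (i, j))
                     for i, j in combinations(range(4), 2))
    for energy, pair in configs:
        print(f"dots {pair}: repulsion ~ {energy:.3f}")
    # The two diagonal placements tie for lowest energy: those are the
    # cell's two stable polarizations, i.e. its binary states.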

With two distinguishable cell configurations, and a means to arrange cells in 2-D space, it's possible to begin building logic structures. Two simple ones are shown in Figure 1: a wire, which can transfer the information imposed by its first cell all the way to the last cell in the chain, and an inverter, which will transfer the opposite information to its destination. In their paper, Porod et al. demonstrate how standard logic gates can be constructed from these sorts of devices, and suggest several ways to go about building them in real materials (quantum dots are real enough), all possible in principle.
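
As I understand the QCA literature, the workhorse gate in these schemes is a three-input majority vote, with AND and OR falling out as special cases when one input is pinned; here's a sketch with all the cell physics abstracted into a single function.

    # QCA's standard primitive: a central cell surrounded by three
    # driver cells settles into whichever polarization the majority
    # of its neighbors share.

    def majority(a, b, c):
        return 1 if a + b + c >= 2 else 0

    def and_(a, b):
        return majority(a, b, 0)   # pin one input low  -> AND

    def or_(a, b):
        return majority(a, b, 1)   # pin one input high -> OR

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", and_(a, b), "OR:", or_(a, b))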

[Figure 1: quantum-dot cells--a wire and an inverter. http://i219.photobucket.com/albums/cc74/Keifus/Slide1.jpg]

These quantum dot devices resemble cellular automata (CA). Strictly speaking, CA are computer models, simulations in which the state of individual squares on a grid changes based on the states of their nearest neighbors, according to a simple set of rules. The quantum dot array is an obvious analog (a CA processor), where the rules happen to be governed by physics (like charges repel, and tunneling is highly distance-dependent). CA are like mathematicians' drug philosophies, producing an acid haze of states that are deterministic but sometimes hard to predict, possessing, it sometimes seems, lives of their own. The Game of Life is a cellular automaton that was famous enough in the eighties to sneak into a science fiction novel or two. Go ahead and click the link to the game: it's pretty fun to play with for a few minutes.
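
If you don't feel like clicking, here's a minimal Life in plain Python--a glider crawling across a small wrapped grid--just to show how little the "simple set of rules" amounts to:

    # Conway's Game of Life: a live cell survives with 2 or 3 live
    # neighbors; a dead cell is born with exactly 3. That's it.

    def step(grid):
        rows, cols = len(grid), len(grid[0])
        new = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                n = sum(grid[(r + dr) % rows][(c + dc) % cols]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
                new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
        return new

    grid = [[0] * 8 for _ in range(8)]
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:   # a glider
        grid[r][c] = 1

    for _ in range(4):
        print("\n".join("".join(".#"[v] for v in row) for row in grid), "\n")
        grid = step(grid)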

More complicated cellular automata than Life allow more states and different rules. Stephen Wolfram (the guy who developed Mathematica) published a doorstop a couple of years ago proclaiming CA computing as A New Kind of Science. (This was controversial, evidently because however amazing it may be, it was light on citing others' work, and thin on peer review.) Even CA with two states and one dimension turn out to be weird, and several of them appear able to generate random numbers from non-random input. One thing that's interesting about studying cellular automata is that the emphasis is empirical. Even for stuff that's run with computer software, the approach is slanted toward understanding the "behavior" that the simple rules evolve, more than knowing the behavior and using it to predict outcomes. It's not the same animal as, say, differential equations, where even when the solution is broken up into cells for the purposes of computing (talking your standard numerical integration here), the rules are expected to correspond to something. CA may end up behaving like physical things, but the rules aren't necessarily little chunks of continuum mechanics.
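
The famous example is rule 30, which Wolfram reportedly used as the random number generator in Mathematica itself; a dozen lines of Python suffice to watch order turn into noise:

    # Rule 30: one dimension, two states. Each cell's next state
    # depends on (left, self, right); the eight possible neighborhoods
    # index the bits of the number 30 (00011110 in binary).

    def rule30_step(cells):
        n = len(cells)
        return [(30 >> (4 * cells[(i - 1) % n] + 2 * cells[i]
                        + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31
    cells[15] = 1                 # a single live cell: not random at all
    center = []
    for _ in range(16):
        print("".join(" #"[c] for c in cells))
        center.append(cells[15])
        cells = rule30_step(cells)
    print("center column:", "".join(map(str, center)))   # looks random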

Porod's group envisioned predictable logic structures with their CA processors, but they didn't have to. They could have laid out a large grid of cells, more like the Game of Life, and, for a given set of "input" values imposed on one bunch of squares, proposed to monitor the "output" of another group after the system achieved a static point or limit cycle. In other words, it's turning one binary string of numbers into another one, even if the means of getting from here to there is not always obvious. This sort of computation could be used for pattern recognition: for example, an image could be projected onto the surface, and a characteristic number generated that corresponded to the image, matched, ultimately, by the appropriate learning routine. It could be used for cryptography if any of those steps proved irreversible, as some of Wolfram's CA appear to be.
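
A cartoon of the idea in Python (my own toy, not anything Porod's group proposed): stamp an image onto the grid, let a simple local rule--a majority vote, here--run to a static point, and read off the settled state as a single characteristic number. A noisy copy of the pattern relaxes to the same number as the clean one, which is the germ of the pattern recognition.

    # Impose an "input" pattern, run a local majority rule to a static
    # point, and read the settled grid out as one characteristic number.

    def settle(grid, max_steps=50):
        n = len(grid)
        for _ in range(max_steps):
            new = [[1 if sum(grid[(r + i) % n][(c + j) % n]
                             for i in (-1, 0, 1) for j in (-1, 0, 1)) >= 5
                    else 0
                    for c in range(n)] for r in range(n)]
            if new == grid:           # static point reached
                break
            grid = new
        return grid

    def signature(image):
        final = settle([row[:] for row in image])
        return int("".join(str(b) for row in final for b in row), 2)

    clean = [[1 if 2 <= r <= 5 and 2 <= c <= 5 else 0 for c in range(8)]
             for r in range(8)]
    noisy = [row[:] for row in clean]
    noisy[3][3] = 0                   # poke a hole in the pattern
    print(signature(clean) == signature(noisy))   # True: same attractor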

[Figure 2: http://i219.photobucket.com/albums/cc74/Keifus/Slide2.jpg]

What's more, there's no reason for such a computer to be made up of little binary dot structures, and the nodes don't even have to have finite states; they can instead go analog. Researchers have put together or proposed this sort of thing using regular circuit elements as the nodes, liquid crystals, magnetic quantum dots, or acoustic pulses [2]. [Disclosure: I worked with one of these guys, and a lot of these points came up in his conversations a few years ago.] Almost any physics will do, provided the nodes are isolated, and there is some basic means of exchanging selected information with the nearest neighbors. For a given input, the system may rearrange itself to find the "solution" in response. Computing with neurons is like this too: the frequency states of quasi-periodic electrical oscillations [3] (or whatever, depending on the model) are updated based on disturbances from other nodes, or from the outside world. The DNA computation that I talked about before also strongly resembles processing with cellular automata, where the selection rules for neighbor interactions are highly specific. In a sense, this brand of computation lets the real world act more like itself, and avoiding the imposition of binary (or other mathematical) logic on it may lead to better models (or at least faster approximate ones). One may notice that the universe itself is a great big tangle of interactions, gradually approaching some limiting cycle (or not): is the universe the ultimate model of itself? Is it doing computations? Yeah sure, why not.
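
Here's about the simplest analog cartoon I can write down (mine again, not from the references): a chain of continuous-valued nodes, each relaxing toward the average of its nearest neighbors, with the input pinned at the ends. The settled profile is the "answer"--in this case the system just solves a discrete Laplace equation, but that's the point: it finds the solution by acting like itself.

    # Analog nodes relax toward the average of their neighbors; the
    # boundary values are the imposed input. The settled state solves
    # a discrete Laplace equation -- the physics does the arithmetic.

    N = 10
    x = [0.0] * N
    x[0], x[-1] = 1.0, -1.0           # input pinned at the ends

    for _ in range(2000):
        for i in range(1, N - 1):     # interior nodes relax
            x[i] = 0.5 * (x[i - 1] + x[i + 1])

    print([round(v, 2) for v in x])   # a straight ramp from +1 to -1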

Of course, just because you're in the business of turning numbers into other numbers doesn't mean that you're doing anything useful, and given a system whose innards aren't necessarily worth knowing in mathematical detail, it remains to look at your inputs and judge the fitness of the outputs. This can be automated as well, and these sorts of processes often use an evolutionary model, in which the best matches of one generation of inputs are selected, and then "mutated" to provide another generation to test against whatever real-world process, activity, or data you're trying to approximate. Simple (that is, non-evolutionary) rules for reacting to external data can be engineered as well, and if you've seen those walking robots, they're very cool. Of course, it's a big step up from here to take machine learning to include models that are self-aware. I don't think this says what consciousness is (for all the mess, it's still looking like a set of instructions under the hood, and it's as hard as ever to say how the gods or ghosts finally come out of the machine), but maybe it's getting a better handle on the fuzzy, approximate way that gray matter--or any matter at all--may actually process information.
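
A minimal evolutionary loop looks something like this (a generic sketch, not any particular group's method): score a population of candidate bit-strings against a stand-in target, keep the best performers, and mutate them to seed the next generation.

    # Generic evolutionary search: select the best matches of each
    # generation and mutate them to produce the next one.
    import random
    random.seed(1)

    TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

    def fitness(candidate):
        return sum(c == t for c, t in zip(candidate, TARGET))

    def mutate(candidate, rate=0.1):
        return [b ^ (random.random() < rate) for b in candidate]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]
    for gen in range(50):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break
        survivors = population[:5]                     # selection
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(15)]  # mutation
    population.sort(key=fitness, reverse=True)
    print("generation", gen, "best:", population[0])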

"[The computer] set forth the general theory of a posteriori deities, deities which had to be added to the Universe later by advanced civilizations, since, as everyone knows, Matter always comes first and no one, consequently, could have possibly thought in the very beginning."

[1] Porod et al., Int. J. Electronics, 86, 549, 1999. (Score! There's a copy online.)
[2] Harding et al., arXiv:cond-mat/0611462v1 [cond-mat.other], 2006.
[3] Rietman and Hillis, arXiv:cs/0611136v1 [cs.RO], 2006. (I only read half of this one. Already past deadline, you know.)
