
Computing

Thinking in Silicon

Microchips modeled on the brain may excel at tasks that baffle today’s computers.

Picture a person reading these words on a laptop in a coffee shop. The machine, made of metal, plastic, and silicon, consumes about 50 watts of power as it translates bits of information—a long string of 1s and 0s—into a pattern of dots on a screen. Meanwhile, inside that person’s skull, a gooey clump of proteins, salt, and water uses a fraction of that power not only to recognize those patterns as letters, words, and sentences but also to identify the song playing on the radio.

This computer chip, made by IBM in 2011, features components that serve as 256 neurons and 262,144 synapses.

Computers are incredibly inefficient at lots of tasks that are easy for even the simplest brains, such as recognizing images and navigating in unfamiliar spaces. Machines found in research labs or vast data centers can perform such tasks, but they are huge and energy-hungry, and they need specialized programming. Google recently made headlines with software that can reliably recognize cats and human faces in video clips, but this achievement required no fewer than 16,000 powerful processors.

A new breed of computer chips that operate more like the brain may be about to narrow the gulf between artificial and natural computation—between circuits that crunch through logical operations at blistering speed and a mechanism honed by evolution to process and act on sensory input from the real world. Advances in neuroscience and chip technology have made it practical to build devices that, on a small scale at least, process data the way a mammalian brain does. These “neuromorphic” chips may be the missing piece of many promising but unfinished projects in artificial intelligence, such as cars that drive themselves reliably in all conditions, and smartphones that act as competent conversational assistants.

“Modern computers are inherited from calculators, good for crunching numbers,” says Dharmendra Modha, a senior researcher at IBM Research in Almaden, California. “Brains evolved in the real world.” Modha leads one of two groups that have built computer chips with a basic architecture copied from the mammalian brain under a $100 million project called Synapse, funded by the Pentagon’s Defense Advanced Research Projects Agency.

The prototypes have already shown early sparks of intelligence, processing images very efficiently and gaining new skills in a way that resembles biological learning. IBM has created tools to let software engineers program these brain-inspired chips; the other prototype, at HRL Laboratories in Malibu, California, will soon be installed inside a tiny robotic aircraft, from which it will learn to recognize its surroundings.

The evolution of brain-inspired chips began in the early 1980s with Carver Mead, a professor at the California Institute of Technology and one of the fathers of modern computing. Mead had made his name by helping to develop a way of designing computer chips called very large scale integration, or VLSI, which enabled manufacturers to create much more complex microprocessors. This triggered explosive growth in computation power: computers looked set to become mainstream, even ubiquitous. But the industry seemed happy to build them around one blueprint, dating from 1945. The von Neumann architecture, named after the Hungarian-born mathematician John von Neumann, is designed to execute linear sequences of instructions. All today’s computers, from smartphones to supercomputers, have just two main components: a central processing unit, or CPU, to manipulate data, and a block of random access memory, or RAM, to store the data and the instructions on how to manipulate it. The CPU begins by fetching its first instruction from memory, followed by the data needed to execute it; after the instruction is performed, the result is sent back to memory and the cycle repeats. Even multicore chips that handle data in parallel are limited to just a few simultaneous linear processes.
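
To make that cycle concrete, here is a toy sketch of a von Neumann machine in Python. It is illustrative only: the instruction set is invented, and real processors add caches, pipelines, and multiple cores, but the fetch-execute-write-back loop is the same.

    # Illustrative von Neumann machine: one CPU, one flat memory holding
    # both instructions and data, executed strictly one step at a time.
    memory = {
        0: ("LOAD", 100),   # fetch the value at address 100 into the accumulator
        1: ("ADD", 101),    # add the value at address 101
        2: ("STORE", 102),  # write the result back to address 102
        3: ("HALT", None),
        100: 2, 101: 3, 102: 0,
    }

    def run(memory):
        pc, acc = 0, 0                      # program counter and accumulator
        while True:
            op, addr = memory[pc]           # fetch the next instruction...
            pc += 1
            if op == "LOAD":
                acc = memory[addr]          # ...then fetch the data it needs
            elif op == "ADD":
                acc += memory[addr]
            elif op == "STORE":
                memory[addr] = acc          # send the result back to memory
            elif op == "HALT":
                return memory

    run(memory)                             # memory[102] ends up holding 5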

That approach developed naturally from theoretical math and logic, where problems are solved with linear chains of reasoning. Yet it was unsuitable for processing and learning from large amounts of data, especially sensory input such as images or sound. It also came with built-in limitations: to make computers more powerful, the industry had tasked itself with building increasingly complex chips capable of carrying out sequential operations faster and faster, but this put engineers on course for major efficiency and cooling problems, because speedier chips produce more waste heat. Mead, now 79 and a professor emeritus, sensed even then that there could be a better way. “The more I thought about it, the more it felt awkward,” he says, sitting in the office he retains at Caltech. He began dreaming of chips that processed many instructions—perhaps millions—in parallel. Such a chip could accomplish new tasks, efficiently handling large quantities of unstructured information such as video or sound. It could be more compact and use power more efficiently, even if it were more specialized for particular kinds of tasks. Evidence that this was possible could be found flying, scampering, and walking all around. “The only examples we had of a massively parallel thing were in the brains of animals,” says Mead.

Brains compute in parallel as the electrically active cells inside them, called neurons, operate simultaneously and unceasingly. Bound into intricate networks by threadlike appendages, neurons influence one another’s electrical pulses via connections called synapses. When information flows through a brain, it processes data as a fusillade of spikes that spread through its neurons and synapses. You recognize the words in this paragraph, for example, thanks to a particular pattern of electrical activity in your brain triggered by input from your eyes. Crucially, neural hardware is also flexible: new input can cause synapses to adjust so as to give some neurons more or less influence over others, a process that underpins learning. In computing terms, it’s a massively parallel system that can reprogram itself.
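
A rough feel for that style of computation comes from a toy simulation. The sketch below uses standard leaky integrate-and-fire neurons, a simplification rather than a model of any particular brain circuit: every neuron is updated on every time step, and spikes spread through a weight matrix that stands in for the synapses.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100                                  # number of model neurons
    weights = rng.normal(0, 0.5, (n, n))     # synaptic strengths, all-to-all
    v = np.zeros(n)                          # membrane potentials
    threshold, leak = 1.0, 0.9

    for step in range(100):
        external = rng.random(n) < 0.05      # sparse outside input (sensory drive)
        spikes = v > threshold               # every neuron is checked in parallel
        v[spikes] = 0.0                      # neurons that spiked reset
        # each neuron leaks a little and integrates input from all the others
        v = leak * v + weights @ spikes + external
        # learning would go here: nudging `weights` based on recent activity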

Ironically, though he inspired the conventional designs that endure today, von Neumann had also sensed the potential of brain-inspired computing. In the unfinished book The Computer and the Brain, published in 1958, a year after his death, he marveled at the size, efficiency, and power of brains compared with computers. “Deeper mathematical study of the nervous system … may alter the way we look on mathematics and logic,” he argued. When Mead came to the same realization more than two decades later, he found that no one had tried making a computer inspired by the brain. “Nobody at that time was thinking, ‘How do I build one?’” says Mead. “We had no clue how it worked.”

Mead finally built his first neuromorphic chips, as he christened his brain-inspired devices, in the mid-1980s, after collaborating with neuroscientists to study how neurons process data. By operating ordinary transistors at unusually low voltages, he could arrange them into feedback networks that looked very different from collections of neurons but functioned in a similar way. He used that trick to emulate the data-processing circuits of the retina and cochlea, building chips that could detect the edges of objects or pick out features in an audio signal. But the chips were difficult to work with, and the effort was limited by chip-making technology. With neuromorphic computing still just a curiosity, Mead moved on to other projects. “It was harder than I thought going in,” he reflects. “A fly’s brain doesn’t look that complicated, but it does stuff that we to this day can’t do. That’s telling you something.”

Neurons Inside

IBM’s Almaden lab, near San Jose, sits close to but apart from Silicon Valley—perhaps the ideal location from which to rethink the computing industry’s foundations. Getting there involves driving to a magnolia-lined street at the city’s edge and climbing up two miles of curves. The lab sits amid 2,317 protected acres of rolling hills. Inside, researchers pace long, wide, quiet corridors and mull over problems. Here, Modha leads the larger of the two teams DARPA recruited to break the computing industry’s von Neumann dependency. The basic approach is similar to Mead’s: build silicon chips with elements that operate like neurons. But he has the benefit of advances in neuroscience and chip making. “Timing is everything; it wasn’t quite right for Carver,” says Modha, who has a habit of closing his eyes to think, breathe, and reflect before speaking.

IBM makes neuromorphic chips by using collections of 6,000 transistors to emulate the electrical spiking behavior of a neuron and then wiring those silicon neurons together. Modha’s strategy for combining them to build a brainlike system is inspired by studies on the cortex of the brain, the wrinkly outer layer. Although different parts of the cortex have different functions, such as controlling language or movement, they are all made up of so-called microcolumns, repeating clumps of 100 to 250 neurons. Modha unveiled his version of a microcolumn in 2011. A speck of silicon little bigger than a pinhead, it contained 256 silicon neurons and a block of memory that defined the properties of up to 262,144 synaptic connections between them. Programming those synapses correctly can create a network that processes and reacts to information much as the neurons of a real brain do.

Setting that chip to work on a problem involves programming a simulation of the chip on a conventional computer and then transferring the configuration to the real chip. In one experiment, the chip could recognize handwritten digits from 0 to 9, even predicting which number someone was starting to trace with a digital stylus. In another, the chip’s network was programmed to play a version of the video game Pong. In a third, it directed a small unmanned aerial vehicle to follow the double yellow line on the road approaching IBM’s lab. None of these feats are beyond the reach of conventional software, but they were achieved using a fraction of the code, power, and hardware that would normally be required.
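
The article does not detail IBM’s tool chain, but the workflow it describes, designing and testing a network in software and then handing the chip an equivalent configuration, can be sketched roughly as follows. The file format and the connectivity chosen here are hypothetical stand-ins, not IBM’s actual format.

    import json
    import numpy as np

    # Hypothetical offline step: decide which of the chip's synapses are active,
    # by whatever training or design process produced the desired behavior.
    neurons = 256
    connections = (np.random.rand(neurons, neurons) < 0.1).astype(int)
    config = {
        "neurons": neurons,
        "synapses": connections.tolist(),    # the pattern that defines behavior
    }

    # The simulator and the chip share this configuration, so a network tested
    # in software should behave the same way once loaded onto the hardware.
    with open("network_config.json", "w") as f:
        json.dump(config, f)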

Modha is testing early versions of a more complex chip, made from a grid of neurosynaptic cores tiled into a kind of rudimentary cortex—over a million neurons altogether. Last summer, IBM also announced a neuromorphic programming architecture based on modular blocks of code called corelets. The intention is for programmers to combine and tweak corelets from a preëxisting menu, to save them from wrestling with silicon synapses and neurons. Over 150 corelets have already been designed, for tasks ranging from recognizing people in videos to distinguishing the music of Beethoven and Bach.

Learning Machines

On another California hillside 300 miles to the south, the other part of DARPA’s project aims to make chips that mimic brains even more closely. HRL, which looks out over Malibu from the foothills of the Santa Monica Mountains, was founded by Hughes Aircraft and now operates as a joint venture of General Motors and Boeing. With a koi pond, palm trees, and banana plants, the entrance resembles a hotel from Hollywood’s golden era. It also boasts a plaque commemorating the first working laser, built in 1960 at what was then called Hughes Research Labs.

A microchip developed at HRL learns like a biological brain by strengthening or weakening synapse-like connections.

On a bench in a windowless lab, Narayan Srinivasa’s chip sits at the center of a tangle of wires. The activity of its 576 artificial neurons appears on a computer screen as a parade of spikes, an EEG for a silicon brain. The HRL chip has neurons and synapses much like IBM’s. But like the neurons in your own brain, those on HRL’s chip adjust their synaptic connections when exposed to new data. In other words, the chip learns through experience.

The HRL chip mimics two learning phenomena in brains. One is that neurons become more or less sensitive to signals from another neuron depending on how frequently those signals arrive. The other is more complex: a process believed to support learning and memory, known as spike-timing-dependent plasticity. This causes neurons to become more responsive to other neurons that have tended to closely match their own signaling activity in the past. If groups of neurons are working together constructively, the connections between them strengthen, while less useful connections fall dormant.
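
In its standard textbook form, which is an approximation of whatever a given chip implements in silicon, spike-timing-dependent plasticity strengthens a synapse when the sending neuron fires just before the receiving one and weakens it when the order is reversed, with the effect fading as the gap between spikes grows. A minimal sketch:

    import math

    def stdp_update(weight, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Return an updated synaptic weight.

        dt = t_post - t_pre, the gap (in ms) between the receiving neuron's
        spike and the sending neuron's spike. The constants are illustrative.
        """
        if dt > 0:       # pre fired before post: causal pairing, strengthen
            weight += a_plus * math.exp(-dt / tau)
        elif dt < 0:     # post fired before pre: anti-causal pairing, weaken
            weight -= a_minus * math.exp(dt / tau)
        return max(0.0, min(1.0, weight))    # keep the weight in a fixed range

    w = 0.5
    w = stdp_update(w, dt=5.0)     # tightly paired spikes: weight rises
    w = stdp_update(w, dt=-40.0)   # loosely anti-paired spikes: weight drops slightly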

Results from experiments with simulated versions of the chip are impressive. The chip played a virtual game of Pong, just as IBM’s chip did. But unlike IBM’s chip, HRL’s wasn’t programmed to play the game—only to move its paddle, sense the ball, and receive feedback that either rewarded a successful shot or punished a miss. A system of 120 neurons started out flailing, but within about five rounds it had become a skilled player. “You don’t program it,” Srinivasa says. “You just say ‘Good job,’ ‘Bad job,’ and it figures out what it should be doing.” If extra balls, paddles, or opponents are added, the network quickly adapts to the changes.
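
The article does not spell out HRL’s learning rule, but one common way to get this kind of trial-and-error behavior from a spiking network is reward-modulated plasticity: each synapse keeps a short-lived eligibility trace of its recent activity, and the global “good job” or “bad job” signal decides whether those traces become lasting changes. A hedged sketch of that idea, not necessarily HRL’s method:

    def apply_reward(weights, eligibility, reward, lr=0.05, decay=0.9):
        """Turn recent activity into learning only when feedback arrives.

        weights      - dict mapping synapse id -> strength
        eligibility  - dict mapping synapse id -> trace of recent spike pairings
        reward       - +1 for "good job", -1 for "bad job", 0 for no feedback
        """
        for synapse, trace in eligibility.items():
            weights[synapse] += lr * reward * trace   # reward reinforces recent
            eligibility[synapse] = decay * trace      # activity; punishment undoes it
        return weights, eligibility

    # after a rally in Pong: strengthen whatever activity led to the successful shot
    weights = {"in->hidden": 0.4, "hidden->paddle": 0.6}
    eligibility = {"in->hidden": 0.8, "hidden->paddle": 0.3}
    weights, eligibility = apply_reward(weights, eligibility, reward=+1)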

This approach might eventually let engineers create a robot that stumbles through a kind of “childhood,” figuring out how to move around and navigate. “You can’t capture the richness of all the things that happen in the real-world environment, so you should make the system deal with it directly,” says Srinivasa. Identical machines could then incorporate whatever the original one has learned. But leaving robots some ability to learn after that point could also be useful. That way they could adapt if damaged, or adjust their gait to different kinds of terrain.

The first real test of this vision for neuromorphic computing will come next summer, when the HRL chip is scheduled to escape its lab bench and take flight in a palm-sized aircraft with flapping wings, called a Snipe. As a human remotely pilots the craft through a series of rooms, the chip will take in data from the craft’s camera and other sensors. At some point the chip will be given a signal that means “Pay attention here.” The next time the Snipe visits that room, the chip should turn on a light to signal that it remembers. Performing this kind of recognition would normally require too much electrical and computing power for such a small craft.

Alien Intelligence

Despite the Synapse chips’ modest but significant successes, it is still unclear whether scaling up these chips will produce machines with more sophisticated brainlike faculties. And some critics doubt it will ever be possible for engineers to copy biology closely enough to capture these abilities.

IBM used this simulation of long-range neural pathways in a macaque monkey to guide the design of neuromorphic chips.

Neuroscientist Henry Markram, who discovered spike-timing-dependent plasticity, has attacked Modha’s work on networks of simulated neurons, saying their behavior is too simplistic. He believes that successfully emulating the brain’s faculties requires copying synapses down to the molecular scale; the behavior of neurons is influenced by the interactions of dozens of ion channels and thousands of proteins, he notes, and there are numerous types of synapses, all of which behave in nonlinear, or chaotic, ways. In Markram’s view, capturing the capabilities of a real brain would require scientists to incorporate all those features.

The DARPA teams counter that they don’t have to capture the full complexity of brains to get useful things done, and that successive generations of their chips can be expected to come closer to representing biology. HRL hopes to improve its chips by enabling the silicon neurons to regulate their own firing rate as those in brains do, and IBM is wiring the connections between cores on its latest neuromorphic chip in a new way, using insights from simulations of the connections between different regions of the cortex of a macaque.

Modha believes these connections could be important to higher-level brain functioning. Yet even after such improvements, these chips will still be far from the messy, complex reality of brains. It seems unlikely that microchips will ever match brains in fitting 10 billion synaptic connections into a single square centimeter, even though HRL is experimenting with a denser form of memory based on exotic devices known as memristors.

At the same time, neuromorphic designs are still far removed from most computers we have today. Perhaps it is better to recognize these chips as something entirely apart—a new, alien form of intelligence.

They may be alien, but IBM’s head of research strategy, Zachary Lemnios, predicts that we’ll want to get familiar with them soon enough. Many large businesses already feel the need for a new kind of computational intelligence, he says: “The traditional approach is to add more computational capability and stronger algorithms, but that just doesn’t scale, and we’re seeing that.” As examples, he cites Apple’s Siri personal assistant and Google’s self-driving cars. These technologies are not very sophisticated in how they understand the world around them, Lemnios says; Google’s cars rely heavily on preloaded map data to navigate, while Siri taps into distant cloud servers for voice recognition and language processing, causing noticeable delays.

Today the cutting edge of artificial-intelligence software is a discipline known as “deep learning,” embraced by Google and Facebook, among others. It involves using software to simulate networks of very basic neurons on normal computer architecture (see “10 Breakthrough Technologies: Deep Learning,” May/June 2013). But that approach, which produced Google’s cat-spotting software, relies on vast clusters of computers to run the simulated neural networks and feed them data. Neuromorphic machines should allow such faculties to be packaged into compact, efficient devices for situations in which it’s impractical to connect to a distant data center. IBM is already talking with clients interested in using neuromorphic systems. Security video processing and financial fraud prediction are at the front of the line, as both require complex learning and real-time pattern recognition.

Whenever and however neuromorphic chips are finally used, it will most likely be in collaboration with von Neumann machines. Numbers will still need to be crunched, and even in systems faced with problems such as analyzing images, it will be easier and more efficient to have a conventional computer in command. Neuromorphic chips could then be used for particular tasks, just as a brain relies on different regions specialized to perform different jobs.

As has usually been the case throughout the history of computing, the first such systems will probably be deployed in the service of the U.S. military. “It’s not mystical or magical,” Gill Pratt, who manages the Synapse project at DARPA, says of neuromorphic computing. “It’s an architectural difference that leads to a different trade-off between energy and performance.” Pratt says that UAVs, in particular, could use the approach. Neuromorphic chips could recognize landmarks or targets without the bulky data transfers and powerful conventional computers now needed to process imagery. “Rather than sending video of a bunch of guys, it would say, ‘There’s a person in each of these positions—it looks like they’re running,’” he says.

This vision of a new kind of computer chip is one that both Mead and von Neumann would surely recognize.

Computing

How Apple Could Fed-Proof Its Software Update System

The FBI’s demands on Apple have got security experts thinking about how to make it harder for the government to secretly coerce software companies.

Apple’s refusal to comply with a judge’s demand that it help the FBI unlock a terrorist’s iPhone has triggered a roiling debate about how much the U.S. government can or should demand of tech companies.

It is also leading some experts to question the trustworthiness of one of the bedrocks of modern computer security: the way companies like Apple distribute software updates to our devices. Efforts are now under way to figure out how a company could make it impossible for agencies like the FBI to secretly borrow the systems used for those updates.

The FBI is asking Apple for two things: to supply software that would disable protections against guessing the device’s passcode, and to validate or “sign” that software so that the phone will accept it.

That second demand has left some experts horrified. In order to prevent our laptops and phones from being tricked into downloading malicious software, companies carefully guard the encryption keys they use to sign updates. “Signing keys are some of the crown jewels of the tech industry,” says Joseph Lorenzo Hall, chief technologist at the Center for Democracy and Technology. “I don’t think anyone envisioned that the thing keeping malware at bay from very popular platforms like iOS could in itself be a weakness.”

Hall and others say that if Apple can be forced to sign updates for the FBI, then the government could use this tactic again and again, perhaps in secret.

Commandeering a company’s software update mechanism would be a very effective way for law enforcement or intelligence agencies to deploy surveillance software. Opponents of the FBI’s tactics say it’s likely that power would be abused, and that it would undermine trust in software updates needed to keep us safe against criminals. Chris Soghoian, principal technologist for the ACLU, likens the idea of the government manipulating software updates to subverting vaccination against disease.

Bryan Ford, an associate professor at the Swiss Federal Institute of Technology (EPFL) in Lausanne, thinks companies could defend themselves by giving up sole ownership of the keys needed to sign software updates.

Today companies might have one signing key, or several keys in the hands of a few trusted employees who must come together to sign a new update. Ford has developed a system that can create hundreds or thousands of signing keys intended to be distributed more widely, even to people at other companies or public-interest organizations such as the Electronic Frontier Foundation.

Under that model, when Apple created and signed a new update it would pause before distributing it to ask for additional “witness” signatures from other people it had granted keys to. Whether or not diverse witnesses provided their signatures would signal to the security community whether this was a routine update or something unusual, says Ford.
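
The cryptographic details are in Ford’s papers (his system compresses all the witness signatures into one compact cosignature), but the check a device would make reduces to something like the sketch below: install an update only if it carries the vendor’s signature plus valid signatures from some minimum number of independent witnesses. The verify function here is a placeholder for whatever signature scheme is actually used.

    def accept_update(update_bytes, vendor_sig, witness_sigs,
                      vendor_key, witness_keys, verify, threshold):
        """Decide whether a device should install an update.

        verify(key, message, signature) -> bool stands in for a real
        signature check (e.g., Ed25519); threshold is the minimum number
        of distinct witnesses that must have cosigned.
        """
        if not verify(vendor_key, update_bytes, vendor_sig):
            return False                     # not from the vendor at all
        valid_witnesses = {
            key for key, sig in witness_sigs.items()
            if key in witness_keys and verify(key, update_bytes, sig)
        }
        # a secretly coerced update would be missing most of these signatures
        return len(valid_witnesses) >= threshold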

“For companies like Apple that sell products globally, witnesses should even be spread out across many different countries,” he says. That would be a significant change in how tech companies operate—essentially enlisting outside parties to help with a core part of a business’s operations. But Ford says the Apple case may be troubling enough to make software companies consider additional security measures like his proposal.

“I hope they’ll want to do this to improve their own products’ security, to deter coercion attempts by governments or other actors around the world, and to fend off the fear from their international customers about risks of coerced back doors,” says Ford. He developed the system, called decentralized witness cosigning, with colleagues at EPFL and Yale. They released code for that system late last year and will formally present it at a leading academic security conference in May.

Joseph Bonneau, a postdoctoral researcher at Stanford and a technology fellow at EFF, says Ford’s idea makes sense. “It is a very nice improvement which makes it much harder to sign something like back-doored firmware in secret,” he says. Previous schemes to distribute signing keys have been designed to scale to around 10 cosigners, not as many as thousands, says Bonneau.

Seny Kamara, an associate professor at Brown University, agrees that Ford’s scheme is a good technical solution, and he says it could also be applied to securing the system that underpins Web addresses. However, convincing companies to adopt it could be challenging. “There remains a question as to whether the big companies would be comfortable having their software update mechanisms be dependent on third parties,” he says.

Ford’s is not the only idea that could make software updates more trustworthy. For example, researchers have proposed adapting a system championed by Google that makes it easier to detect attempts to abuse the security certificates used to secure Internet services.

Hall of CDT says that although right now no solution looks sure to gain traction with tech companies, it appears likely that some tactic eventually will. He points to how Google and other companies acted to shore up their infrastructure after disclosures about U.S. surveillance by erstwhile federal contractor Edward Snowden. “We had to reconsider our threat models and what we have to defend against,” says Hall. “I think you’ll see similar shifts here.”

Computing

The U.S. Government’s Internet Lifeline for the Poor Isn’t Much of One

The FCC subsidizes Internet and mobile access for millions of people in the U.S.—but its program stands little hope of helping bridge America’s “persistent digital divide.”

A new proposal to offer low-income Americans a monthly subsidy for broadband Internet access is just the latest reflection of the Federal Communications Commission’s concern that the country is facing a “persistent digital divide.” But given the relatively high cost of a decent Internet connection in the U.S., it’s hard to see the policy, which would pay $9.25 per month to eligible households, making much of a difference.

According to a blog post published on March 8 by FCC chairman Tom Wheeler and commissioner Mignon Clyburn, the subsidy is meant to help the more than 64 million Americans missing out on the benefits of broadband Internet access—mostly people who have low incomes or live in rural areas. But the average price of broadband in the U.S., now defined as 25 megabits per second for downloads and three megabits per second for uploads, is more than four times the $9.25 subsidy, so prospective new Internet users will still need to shell out a fair amount to connect.

In practice, the average connection speed in the U.S. is 11.9 megabits per second, and the minimum speed for fixed Internet under the new plan is 10 megabits per second for downloads and one megabit per second for uploads (or 500 megabytes of data at 3G speeds for mobile). But the average monthly cost in the U.S. for an unlimited fixed plan with download speeds of less than 10 megabits per second is still $33.12, substantially higher than the cost in many other countries.
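
In plain arithmetic, using the averages cited above (actual prices vary by market), the subsidy covers only a fraction of even that cheaper tier:

    subsidy = 9.25           # monthly Lifeline broadband subsidy, in dollars
    cheap_plan = 33.12       # average sub-10 Mbps unlimited fixed plan
    out_of_pocket = cheap_plan - subsidy     # about $23.87 still left to pay
    share_covered = subsidy / cheap_plan     # roughly 28 percent of the bill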

FCC chairman Tom Wheeler.

The FCC’s policy is, simply put, badly outdated. It is essentially a revamped version of a Reagan-era measure aimed at helping low-income households purchase landline telephone service. Called Lifeline, it has been updated to include mobile phones, and some 13 million households subscribe to the program. But it is politically controversial, and its opponents say it’s wasteful and mismanaged. Because of that, the FCC is in no position to attempt to increase the amount the policy pays. And $9.25 is probably not enough to narrow the digital divide very far.

(Read more: Washington Post, “America’s Broadband Improves, Cementing a ‘Persistent Digital Divide’”)
