Last week, Dharmendra Modha said goodbye to a computer some six years in the making: a set of 16 interconnected TrueNorth chips built to mimic the ultra-low-energy, highly parallel operation of the human brain.
On Thursday, a team from IBM Research-Almaden in California hopped in a car and drove the unit some 75 minutes north to the U.S. Department of Energy’s Lawrence Livermore National Laboratory. There, scientists and engineers will evaluate whether the technology could be a useful weapon in their computing arsenal.
It was a big moment for the IBM program, which devised the TrueNorth concept in 2010 and unveiled the first chip in 2014. Developed in collaboration with Cornell University, the TrueNorth chips use traditional digital components to implement a decidedly more brain-like behavior; each 5.4-billion-transistor chip can consume as little as 70 milliwatts (for more on how that could possibly work, see our 2014 story “How IBM Got Brainlike Efficiency From the TrueNorth Chip”).
Although these were not the first TrueNorth chips to ship, the array is notable, Modha says, because it integrates 16 chips onto a single board, allowing the company to demonstrate that it can “scale up” the approach to larger and larger systems. The entire 16-chip array can require as little as 2.5 watts (other systems, such as communications fabric, add some overhead to that).
Livermore, which has some of the world’s fastest supercomputers and signed a $1 million contract with IBM for the TrueNorth unit, will be exploring how this new technology might play a role in areas such as cybersecurity and physical simulation.
I was particularly excited to see exascale computing mentioned in the press release announcing the system. Perhaps the biggest looming question among high-performance computer makers is how we’ll reach the exascale—when machines are some 30 times as fast as the fastest supercomputer today—without also creating staggering (and probably infeasibly expensive) utility bills.
But as it turns out, chances are slim that we’ll be simulating nuclear weapons or designing tomorrow’s nuclear reactors on supercomputers composed entirely of chips modeled on the human brain. Although TrueNorth can, in principle, perform any computation, the speed and efficiency of such neuromorphic chips shine only in particular applications, such as pattern recognition. Traditional computers will still be with us, Modha says: “What we’re offering is a complementary architecture.”
Engineers are still sorting out the best way to build an exascale supercomputer, says Brian Van Essen, a computer scientist at Livermore’s Center for Applied Scientific Computing. Heterogeneous computing, which could mix different computing technologies such as CPUs, graphics processing units, FPGAs, and neuromorphic chips, “is definitely one potential path,” he says. But, he adds, “it’s not clear what the final system design is going to look like.”
Van Essen says one area Livermore hopes to explore with the TrueNorth chips is their potential role in large-scale simulation. “As we scale simulations and modeling [of] physical systems up to large sizes, sometimes the simulations can get into an area where the numerics get kind of garbled up,” he says.
He says a team is in the midst of evaluating whether machine learning can be used to detect problems before a simulation crashes and correct for the behavior. Van Essen says that if the approach looks promising, one could envision chips distributed throughout the system that would monitor the progress of a simulation. It would take a “nontrivial amount of horsepower to monitor the system,” Van Essen says, adding that it would be a good application for a low-power technology such as TrueNorth.