Analog computing predates digital computing but has long been overshadowed by the rapid development of the latter. While very powerful, digital computing is highly inefficient at perception-related tasks, and the problem grows increasingly significant as artificial intelligence advances rapidly and transistor scaling approaches its physical (and economic) limits. Since memristors were experimentally demonstrated by Hewlett Packard Labs in 2008, researchers have extensively explored their capability to store and process information in the analog domain.
Despite great promise shown in laboratory environments, matrix multiplication accelerators based on memristor crossbars (non-volatile resistive analog memory) have remained high-risk, particularly with respect to device performance and peripheral circuitry. In this talk, I will present our recent progress in tackling these challenges. First, we directly integrated CMOS with nanoscale memristors for fully on-chip read/write/compute demonstrations. We operate in a much lower power regime, program the devices with fine control, and have demonstrated a multi-layer convolutional neural network. Due to the intrinsically stochastic nature of memristor devices, unexpected computing errors still occur and are, in many cases, fatal to applications. To make the computing system tolerant of device defects, we explored two possible solutions. The first is in-situ training directly on the crossbar, which self-adapts to defects during the training process. The second is a novel analog error-correcting code that detects and corrects error outliers exceeding a predefined threshold. We expect the schemes introduced here to make analog computing more feasible.
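As background for the crossbar approach above, the following is a minimal numerical sketch of how a memristor crossbar performs matrix-vector multiplication in one step via Ohm's and Kirchhoff's laws. All values (conductances, voltages, noise level) are illustrative assumptions, not measurements from the work described:

```python
import numpy as np

# Hypothetical 3x4 crossbar: each cross-point stores a conductance G[i, j] (siemens).
G = np.array([
    [1.0e-6, 2.0e-6, 0.5e-6, 1.5e-6],
    [0.8e-6, 1.2e-6, 2.2e-6, 0.4e-6],
    [1.7e-6, 0.3e-6, 1.1e-6, 2.5e-6],
])

# Input vector encoded as voltages applied to the rows (volts).
V = np.array([0.2, 0.1, 0.3])

# Ohm's law: each cell passes current V[i] * G[i, j]; Kirchhoff's current law
# sums these along each column, so the column currents are the analog
# matrix-vector product, computed in a single step:
I = V @ G

# The intrinsic device stochasticity mentioned above can be crudely modeled
# as multiplicative conductance noise, which perturbs the computed result:
rng = np.random.default_rng(0)
G_noisy = G * (1 + 0.05 * rng.standard_normal(G.shape))
I_noisy = V @ G_noisy
```

This is what makes in-memory analog matrix multiplication O(1) in time for a stored matrix, and also why defect- and noise-tolerance schemes such as in-situ training or analog error correction are needed.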