In “Deep Learning Is Hitting a Wall,” Gary Marcus argues that deep learning is reaching its limits as a paradigm for artificial intelligence. Recent difficulties with self-driving cars and unreliable language model outputs illustrate the limitations of systems that cannot understand the world, or the meaning of the words they are parroting. Instead, Marcus advocates that the field of AI turn its attention to symbolic approaches. Such approaches would focus on encoding information about the world and deriving new information using simple operations:
What does “manipulating symbols” really mean? Ultimately, it means two things: having sets of symbols (essentially just patterns that stand for things) to represent information, and processing (manipulating) those symbols in a specific way, using something like algebra (or logic, or computer programs) to operate over those symbols.
This article is as interesting for its exposition of symbolic approaches (relatively unknown to me) as for its sociology of science. Embraced by early computer scientists such as von Neumann, symbolic approaches became dominant in the 1970s after intradisciplinary fighting between the symbolic camp and the neural network camp. Neural nets regained prominence in the 1980s, advanced by researchers who avoided the symbolic approach. The schism persists to this day: Marcus’s polemic is itself a testament to the rift. These are competing paradigms, though there are glimmers of a synthesis on the horizon.
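To make Marcus’s definition concrete, here is a minimal sketch of symbol manipulation in the sense he describes: facts are just patterns (strings) that stand for things, and a rule of inference operates over them mechanically, with no notion of what the symbols mean. The facts and rules are invented for illustration and are not from the article.

```python
# Symbols are plain strings; rules are (antecedent, consequent) pairs.
# Forward chaining applies rules over the symbols, algebra-style,
# until no new facts can be derived.

facts = {"socrates_is_a_man"}
rules = [("socrates_is_a_man", "socrates_is_mortal")]

def forward_chain(facts, rules):
    """Repeatedly apply if-then rules until the fact set stops growing."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

print(forward_chain(facts, rules))
```

The point of the toy example is that the derivation is purely formal: the program never needs to know what “mortal” means, only how to match and rewrite patterns.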
On Wednesday, the WHO declared COVID-19, the disease caused by the novel coronavirus, to be a global pandemic. The U.S. banned travel from Europe, the NBA suspended its season, and cities across the country implemented bans on large public gatherings. Will these efforts to curb the spread of the coronavirus be successful? How will we know?
To answer this question, we need to know how many cases of COVID-19 we should expect given current trends. Unfortunately, I haven’t seen many readily accessible forecasts that are updated in real time as new data comes in. I realize that things can escalate quickly, but I didn’t have much sense of how soon to expect that to happen. So I set out to produce some short-term projections myself to help calibrate my expectations (and maybe quell some uncertainty-induced anxiety at the same time).
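As a sketch of the kind of short-term projection described above, the snippet below fits exponential growth to a run of daily case counts and extrapolates a few days ahead. The case numbers are made up for illustration, not real data, and this is one simple modeling choice among many, not the method used for the projections discussed here.

```python
# Fit log(cases) = a + b*t by least squares, then extrapolate.
# The `cases` series is hypothetical, chosen to grow roughly 30%/day.
import math

cases = [100, 130, 170, 220, 290, 380]  # hypothetical daily totals

n = len(cases)
ts = list(range(n))
logs = [math.log(c) for c in cases]
t_mean = sum(ts) / n
y_mean = sum(logs) / n

# Slope b is the estimated daily growth rate on the log scale.
b = sum((t - t_mean) * (y - y_mean) for t, y in zip(ts, logs)) / \
    sum((t - t_mean) ** 2 for t in ts)
a = y_mean - b * t_mean

# Project three days past the last observation.
projection = [math.exp(a + b * t) for t in range(n, n + 3)]
print([round(p) for p in projection])
```

Even this crude fit conveys the key intuition: under exponential growth, a steady ~30% daily increase roughly doubles the caseload every two and a half days, which is why “things can escalate quickly.”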