The Malthusian Trap Song

Speed Read This
Posted on December 17, 2015

The Malthusian Trap is what happens to societies that reach the carrying capacity of their environment. For most of human history, humans have lived a Malthusian existence, constantly threatened by famine and disease and warfare. I wrote this for the 2015 MIT Winter Solstice. To the tune of Garden Song.

Dawn to dusk, row by row, better make this garden grow
Move your feet with a plow in tow, on a piece of fertile ground
Inch by inch, blow by blow, gonna take these crops you sow,
From the peasants down below, ‘till the wall comes tumbling down

Pulling weeds and picking stones, chasing dreams and breaking bones
Got no choice but to grow my own ‘cause the winter’s close at hand
Grain for grain, drought and pain, find my way through nature’s chain
Tune my body and my brain to competition’s demand.

Plant those rows straight and long, thicker than with prayer and song.
Emperors will make you strong if your skillset’s very rare.
Old crow watching hungrily, from his perch in yonder tree.
In my garden I’m as free as that feathered thief up there.

OpenAI Should Hold Off On Choosing Tactics

Speed Read This
Posted on December 14, 2015

OpenAI is a non-profit artificial intelligence research company founded this week, led by Ilya Sutskever as research director, with $1B of funding from Sam Altman, Greg Brockman, Elon Musk, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys, and YC Research. The launch was announced in this blog post on December 11, saying:

Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.

This is a noble and worthy mission. Right now, most AI research is done inside a few large for-profit corporations, and there is a considerable risk that the profit motive could lead them to pursue paths that are not good for humanity as a whole. For the sorts of AI research going on now, aimed at goals like medical advances and self-driving cars, the best way to benefit humanity is to be very open. Letting research accumulate in secret silos would delay these advances and cost lives. In the future, there are going to be decisions about which paths to follow, where some paths may be safer and some may be more profitable, and I personally am much more comfortable with a non-profit organization making those decisions than I would be with a for-profit corporation or a government doing the same.

There are predictions that in the medium- to long-term future, powerful AGIs could be very dangerous. Two main dangers have been tentatively identified, and there’s a lot of uncertainty around both. The first concern is that a powerful enough AI in the hands of bad actors would let them do much more damage than they could otherwise. For example, extremists could download an AI and ask it to help them design weapons or plan attacks. Others might use the same AI to help them design defenses and to find the extremists, but there’s no guarantee that these would cancel out; it could end up like computer security, where the balance of power strongly favors offense over defense. To give one particularly extreme example, suppose someone created a general-purpose question-answering system smart enough that, if asked, it could design a nuclear bomb requiring no exotic ingredients and provide simple instructions for building one. Letting everyone in the world download that AI and run it privately on their desktop computer would be predictably disastrous, and couldn’t be allowed. On the other hand, the balance could end up favoring defenders; in that case, widespread distribution would be less of a problem.

The second concern is the possibility of an AGI undergoing recursive self-improvement: if someone developed and trained an AI to the point where it could do further AI research by itself, then by repeatedly upgrading its ability to upgrade itself it could quickly become very powerful. This scenario is frightening because if the seed AI were even slightly flawed, theory suggests that the process of recursive self-improvement might greatly magnify the effects of the flaw, resulting in something that destroys humanity. Dealing with this is going to be really tricky, because on one hand we’ll want the entire research community to be able to hunt for those flaws, but on the other hand we don’t want anyone to take an AI and tell it, or let it, start a recursive self-improvement process before everyone’s sure it’s safe.

At this point, no one really knows whether recursive self-improvement is possible, nor what the interaction will be between AI-empowered bad actors and AI-empowered defenders. We’ll probably know more in a decade, and more research will certainly help. OpenAI’s founding statement seemed to strike a good balance: “as broadly and evenly distributed as is possible safely”, acknowledging both the importance of sharing the benefits of AI and also the possibility that safety concerns might force them to close up in the future. As OpenAI themselves put it:

AI systems today have impressive but narrow capabilities. It seems that we’ll keep whittling away at their constraints, and in the extreme case they will reach human performance on virtually every intellectual task. It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.

Yesterday, two days after that founding statement was published, it was edited. The new version reads:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

Since our research is free from financial obligations, we can better focus on a positive human impact. We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.

The word “safely” has been removed. There is no announcement or acknowledgement of the change, within that blog post or anywhere else, and no indication who made it or who knew about it.

I sort of understand why they’d do this. There’s a problem right now with ignorant press articles fearmongering over research that very clearly doesn’t pose any risk, and seizing on out-of-context discussions of long-term future issues to reinforce that. But those words were there for an important reason. When an organization states its mission in a founding statement, that has effects – both cultural and legal – lasting far into the future, and there’s going to come a time, probably in this century, when some OpenAI researcher is going to wonder whether their latest creation might be unsafe to publish. The modified statement says: publish no matter what. If there’s a really clear-cut danger, they might refuse to publish anyway, but this will be hard to defend in the face of ambiguity.

OpenAI is less than a week old, so it’s too early to criticize them or to praise them for anything they’ve done. Still, the direction they’ve indicated worries me – both because I doubt whether openness is going to be a safe strategy a decade from now, and because they don’t seem to have waited for the data to come in before baking openness into their organizational identity. OpenAI should be working with the AI safety community to figure out what strategies to pursue in the short and long term. They have a lot of flexibility by virtue of being a non-profit, and they shouldn’t throw that flexibility away. They’re going to need it.