Election Fraud and a Roll to Disbelieve

Posted on November 8, 2016

In 2000, the US presidential election between George Bush and Al Gore was swung by hacked voting machines. This fact can be verified by anyone, beyond a shadow of a doubt. In the close race between Bush and Gore, Volusia County reported a total of negative-16,022 votes for Al Gore[1]. Upon investigation, the negative vote count disappeared and was replaced with a normal-looking one, but not before the number reached the press. At the time, this was unexplainable. It was not until 2005 that security researcher Harri Hursti demonstrated a technique for hacking voting machines that involves sneaking memory cards with negative vote totals on them into the counting process[2]. The idea is that, by inserting a memory card with positive votes for one candidate and negative votes for another candidate, one can change the vote totals without messing up the turnout numbers, which would reveal the fraud. But if one is performing the Hursti hack, and messes up by putting the fake memory card into the process *in the wrong place*, then a county may accidentally report a negative vote total – because a memory card that was supposed to be used in a large precinct was used in a small precinct, without enough votes to steal. The machines used in Volusia County were in fact vulnerable to this technique, and this is what happened.

Because the margin in Florida was small, a “recount” was triggered; in reality, ballots were being viewed by humans for the first time. Bush and Gore argued before the Supreme Court, and the Supreme Court ruled 5-4 (along party lines) to stop the counting. Gore then conceded. It all happened too quickly for little things like election fraud to be noticed; the media narrative, rather than being about fraud, was about “hanging chads” and other innocent things.

Since then, forces within the Republican party have worked to promote a false narrative of vote fraud as something symmetric, something both parties do, by manufacturing false evidence of Democratic voter fraud. Like voting machine hacking, this becomes most obvious when there’s a mistake and the mistake becomes a scandal. In 2006, seven U.S. Attorneys were fired mid-term from the Department of Justice[3]. In 2008, the Inspector General determined[4] that these firings were motivated by the attorneys’ refusal to prosecute voter fraud cases against Democrats.

Over the past few weeks, Donald Trump has been loudly warning that Democrats would engage in election fraud to give the election to Hillary Clinton. This is a possible thing, but so is the reverse. Fortunately, there’s a standard strategy for determining whether election fraud took place, and which direction it went in: exit polls.

I don’t have access to exit poll data for today’s election. Neither do you, and neither do most of the news outlets that are reporting the results. But Nate Silver has this to say about them on his blog:

“One reason people find Trump’s competitive margins across a wide range of swing states so surprising is because exit polls showed Clinton beating her pre-election polls in most states, instead of underperforming them.”[5]

At the time I’m writing this, FiveThirtyEight says (on its sidebar) that Clinton and Trump are equally likely to win, based on states that have been officially called; other sources are strongly favoring Trump’s chances, based on preliminary counts.

I roll to disbelieve. Maybe Trump really did get more votes than expected, and more votes than indicated by the exit polls Silver was referring to. Or maybe he didn’t. One thing’s for certain, though: the American people will hear a definitive result before any of this is investigated.

(Cross-posted on Facebook)

A Problem With Negative Income Tax Proposals

Posted on October 13, 2016

I frequently hear it proposed that we should institute a negative income tax or similar form of guaranteed minimum income, funded by removing parts of the existing welfare and social services system. Simple economic analysis suggests that the welfare system is spectacularly wasteful, compared to simple cash transfers, and that tearing it down is obviously correct.

It isn’t.

Consider the food-stamp programs run by most states. People whose income is low enough receive funds whose use is restricted, so that they can be used to purchase food – but not to purchase anything else. Trading food stamps for regular dollars or inedible goods and services is illegal for both the buyer and the seller. A naive economic model suggests that this is bad: people know what they need better than bureaucrats do, and if someone would rather spend less on food and more on housing or car maintenance or something, they’re probably right about that being the right thing to do. So food-stamp dollars are worth less than regular dollars, and by giving people food-stamp dollars instead of regular dollars, value is destroyed. This is backed up by studies analyzing poor people’s purchases when given unrestricted funds; those purchases tend to be reasonable.

This is a Chesterton Fence. Tearing it down would be a terrible mistake, because it has a non-obvious purpose:

The reason you give poor people food-stamp dollars instead of regular dollars is because they’re resistant to debt collection, both legal and illegal, resistant to theft and resistant to scams.

There are organizations and economic forces that are precisely calibrated to take all of the money poor people have, but not more. If you give one person unrestricted money, that person is better off. But if you give everyone in a poor community extra money, then in the long run rents, drug prices, extortion demands and everything else will rise, compensating almost exactly.

We have a government which says to its poor people: you may be scammed out of your money, your access to every luxury, and even your home, by anyone who can trick you into signing the wrong piece of paper. But we draw the line at letting you be scammed out of your access to food.

A negative income tax would erase that line. I haven’t heard anyone raise this as a problem, which sets an upper bound on how carefully people (that I’ve read) have thought about the issue. This worries me greatly. Can this issue be patched? And, thinking about it from this angle, are there more issues lurking?

(Cross-posted on Facebook)

National Psyches, Expressed As Sports

Posted on July 26, 2016

It’s often said that sports are a substitute for warfare, an alternative place for people to channel their most destructive instincts. As someone with only benevolent instincts I’ve never felt the appeal, but the thought occurred to me that one might learn quite a bit about cultures’ psyches by paying attention to their choice of sports.

For example, consider football. Each team has a leader, who controls the goal and is allowed to break the rules. The rest of the players coordinate in a decentralized manner to set up shots at the other team’s leader, whose job is to survive the assassination attempts. Football is the most popular sport in Venezuela, Colombia, Nigeria, Brazil and Turkey. In the United States, however, the name refers to something else entirely.

In American football, each team’s leader is a general who sits on the sidelines and gives orders to a group of drug-enhanced supersoldiers, who were selected by the draft. One team’s goal is to take the “political football” and deliver it into the other team’s territory; the other team’s goal is to find the player with the football and detain him, so that the referee can see who’s really responsible. Recent studies have shown players suffering brain damage from repeated concussions, but no one knows what to do about this so things pretty much continue on as usual.

Not all sports represent warfare, though! Consider basketball, also popular in the United States. In basketball, two teams of mostly black people compete to see which is taller, thereby demonstrating their superior genes.

Um. I mean, um. Baseball! Look at baseball. In baseball, players approach one by one, and everyone works together to stop them from scoring home runs. A home run is a well-standardized euphemism for having sex. Other key players are the pitcher and catcher, which are well-standardized euphemisms for participants in gay sex. In fact, pretty much every baseball-related term is a euphemism for something sexual, at least according to UrbanDictionary. Whereas in Britain, they play a game that’s superficially the same except none of its terminology is euphemistic. This is because the British are uncultured barbarians.

I wish I could say that the historical shift in peoples’ choice of sports reflected the advancing enlightenment and universal culture. Instead, I will observe that in the de facto national sport of South Korea, which is Starcraft, one third of players are literally giant insects and the most exciting moments for spectators involve nuclear weapons.

What Google’s TPUs Mean for AI Timing and Safety

Posted on May 21, 2016

Last Wednesday, Google announced that AlphaGo was not powered by GPUs as everyone thought, but by Google’s own custom ASICs (application-specific integrated circuits), which they are calling “tensor processing units” (TPUs).

“We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning.”

“TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation.”

So, what does this mean for AGI timelines, and how does the existence of TPUs affect the outcome when AGI does come into existence?

The development of TPUs accelerated the timeline of AGI development. This is fairly straightforward; researchers can do more experiments with more computing power, and algorithms that previously stretched past the limits of available computing power became possible.

If your estimate of when there will be human-comparable or superintelligent AGI was based on the very high rate of progress in the past year, then this should make you expect AGI to arrive later, because it explains some of that progress with a one-time gain that can’t be repeated. If your timeline estimate was based on extrapolating Moore’s Law or the rate of progress excluding the past year, then this should make you expect AGI to arrive sooner.

Some people model AGI development as a race between capability and control, and want us to know more about how to control AGIs before they’re created. Under a model of differential technological development, the creation of TPUs could be bad if it accelerates progress in AI capability more than it accelerates progress in AI safety. I have mixed feelings about differential technological development as applied to AGI; while the safety/control research has a long way to go, humanity faces a lot of serious problems which AGI could solve. In this particular case, however, I think the differential technological advancement model is wrong in an interesting way.

Take the perspective of a few years ago, before Google invested in developing ASICs. Switching from GPUs to more specialized processors looks pretty inevitable; it’s a question of when it will happen, not whether it will. Whenever the transition happens, it creates a discontinuous jump in capability; Google’s announcement calls it “roughly equivalent to fast-forwarding technology about seven years into the future”. This is slight hyperbole, but if you take it at face value, it raises an interesting question: which seven years do you want to fast-forward over? Suppose the transition were delayed for a very long time, until AGI of near-human or greater-than-human intelligence was created or was about to be created. Under those circumstances, introducing specialized processors into the mix would be much riskier than it is now. A discontinuous increase in computational power could mean that AGI capability skips discontinuously over the region that contains the best opportunities to study an AGI and work on its safety.

In diagram form:

[Diagram: asics-or-no]

I don’t know whether this is what Google was thinking when they decided to invest in TPUs. (It probably wasn’t; gaining a competitive advantage is reason enough). But it does seem extremely important.

There are a few smaller strategic considerations that also point in the direction of TPUs being a good idea. GPU drivers are extremely complicated, and rumor has it that the code bases of both major GPU manufacturers are quite messy; starting from scratch in a context that doesn’t have to deal with games and legacy code can greatly improve reliability. When AGIs first come into existence, if they run on specialized hardware then the developers won’t be able to increase their power as rapidly by renting more computers, because availability of the specialized hardware will be more limited. Similarly, an AGI acting autonomously won’t be able to increase its power that way either. Datacenters full of AI-specific chips make monitoring easier by concentrating AI development into predictable locations.

Overall, I’d say Google’s TPUs are a very positive development from a safety standpoint.

Of course, there’s still the question of what the heck they actually are, beyond the fact that they’re specialized processors that train neural nets quickly. In all likelihood, many of the gains come from tricks they haven’t talked about publicly, but we can make some reasonable inferences from what they have said.

Training a neural net involves doing a lot of arithmetic with a very regular structure, like multiplying large matrices and tensors together. Algorithms for training neural nets parallelize extremely well; if you double the number of processors working on a neural net, you can finish the same task in half the time, or make your neural net bigger. Prior to 2008 or so, machine learning was mostly done on general-purpose CPUs — i.e., Intel and AMD’s x86 and x86_64 chips. Around 2008, GPUs started becoming less specific to graphics and more general-purpose, and today nearly all machine learning is done with general-purpose GPU computing (GPGPU). GPUs can perform operations like tensor multiplication more than an order of magnitude faster. Why’s that? Here’s a picture of an AMD Bulldozer CPU which illustrates the problem CPUs have. This is a four-core x86_64 CPU from late 2011.

[Image: AMD Bulldozer die shot, with the floating-point unit highlighted]

Highlighted in red, I’ve marked the floating point unit, which is the only part of the CPU that’s doing actual arithmetic when you use it to train a neural net. It is very small. This is typical of modern CPU architectures; the vast majority of the silicon and the power is spent dealing with control flow, instruction decoding and scheduling, and the memory hierarchy. If we could somehow get rid of that overhead, we could fill the whole chip with floating-point units.

This is exactly what a GPU is. GPUs only work well on computations with highly regular structure; they handle branches and other control flow poorly, they have comparatively simple instruction sets (hidden behind a driver, so they don’t have to maintain backwards compatibility), and they rely on predictable memory-access patterns to reduce the need for cache. They spend most of their energy and chip area on arithmetic units that take in very wide vectors of numbers and operate on all of them at once.
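To make that concrete, here’s a minimal sketch (in Python with numpy; the layer sizes are arbitrary and my own, not from Google or this post) of the forward pass of a single fully-connected neural-net layer. Essentially all of the work is one large matrix multiply plus an elementwise maximum; there are no branches, and the memory-access pattern is completely predictable, which is exactly the shape of computation those wide vector units are built for:

import numpy as np

# A toy fully-connected layer: a batch of 256 inputs, each with 1024
# features, projected to 1024 outputs.
batch, n_in, n_out = 256, 1024, 1024
x = np.random.randn(batch, n_in).astype(np.float32)   # input activations
W = np.random.randn(n_in, n_out).astype(np.float32)   # layer weights
b = np.zeros(n_out, dtype=np.float32)                 # biases

# Forward pass: one big matrix multiply, an add, and an elementwise max.
y = np.maximum(x @ W + b, 0.0)   # ReLU(xW + b)

print(y.shape)   # (256, 1024): roughly 268 million multiply-adds in one call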

But GPUs still retain a lot of computational flexibility that training a neural net doesn’t need. In particular, they work on numbers with varying numbers of digits, which requires duplicating a lot of the arithmetic circuitry. While Google has published very little about their TPUs, one thing they did mention is reduced computational precision.

As a point of comparison, take Nvidia’s most recent GPU architecture, Pascal.

“Each SM [streaming multiprocessor] in GP100 features 32 double precision (FP64) CUDA Cores, which is one-half the number of FP32 single precision CUDA Cores.”

“Using FP16 computation improves performance up to 2x compared to FP32 arithmetic, and similarly FP16 data transfers take less time than FP32 or FP64 transfers.”

Format   Bits of exponent   Bits of precision
FP16     5                  10
FP32     8                  23
FP64     11                 52
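To get a concrete feel for what those widths mean, here’s a minimal sketch (in Python with numpy; the snippet is my own illustration, not anything published by Google or Nvidia) of how much of a value each format actually keeps:

import numpy as np

# FP32 keeps 23 mantissa bits (roughly 7 decimal digits of precision);
# FP16 keeps 10 (roughly 3 decimal digits). For neural-net weights and
# activations, the coarser format is usually good enough.
pi32 = np.float32(np.pi)   # stored as roughly 3.1415927
pi16 = np.float16(np.pi)   # stored as roughly 3.1406

# Machine epsilon: the relative rounding error each format commits.
print(np.finfo(np.float32).eps)   # about 1.2e-07
print(np.finfo(np.float16).eps)   # about 9.8e-04
print(pi32, pi16)

Giving up that precision is what lets a chip spend fewer transistors (and less energy) per operation, which is exactly the trade-off Google’s announcement describes.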

So a significant fraction of an Nvidia GPU’s cores are FP64 cores, which are useless for deep learning. When it does FP16 operations, it uses an FP32 core in a special mode, which is almost certainly less efficient than using two purpose-built FP16 cores. A TPU can also omit hardware for unused operations like trigonometric functions and probably, for that matter, division. Does this add up to a full order of magnitude? I’m not really sure. But I’d love to see Google publish more details of their TPUs, so that the whole AI research community can make the same switch.

Suspicious Memes and Past Battles

Posted on May 17, 2016

When people organize to commit a crime, the legal term for this is a conspiracy. For example, if five people get together and plan a bank robbery with the intent to execute their plan, then each of them has committed the crime of “conspiracy to commit robbery”.

If you suspect that a crime has taken place, and it’s the sort of crime that would’ve involved multiple people, then this is a “conspiracy theory”, particularly if those people would’ve had to be powerful and well-connected. Until recently, everyone seemed to agree that conspiracy theories were exclusively the creations of idiots and schizophrenics. One of the more common, well known conspiracy theories is literally that “the CIA is causing my command hallucinations”. So if someone were to claim, for example, that the 2000 election was swung by hacked voting machines in Volusia County, or that Michael Hastings was murdered by one of the top-ranking military officials he singled out for criticism in Rolling Stone? Then the anticipation is that this would be followed by some incoherent rambling about aliens, the social expectation is that this person is to be shunned, and the practical effect is that those things are hard to say, hard to hear, and unlikely to be taken seriously.

Somehow, somewhere along the line, accusing governments and the powerful of crimes became associated with mental illness, disrespectability and raging idiocy. And when you pause and notice that, it is *incredibly creepy*. It’s certainly possible to imagine how such a meme could arise naturally, but it’s also a little too perfect — like it’s the radioactive waste of a propaganda war fought generations back.

As it turns out, the last few generations did have propaganda wars, and we know what they were: the counterculture, the civil rights movement, McCarthyism. Each side of each memetic conflict left its mark on our culture, and some of those marks are things we’d better scrub away. So now I’m wondering: what were they?

(Cross-posted to Facebook)

Introducing conservative-pad 0.4

Posted on March 24, 2016


There’s been some controversy lately in the Javascript world about micro-packages, version pinning, and the NPM package repository’s regulatory regime. For a serious take, I wrote a comment on the HN thread. There is a looming spectre threatening our entire software ecosystem, of which the deplorable decline of our dependency hygiene is but one example. The left has gone too far. That’s why I’m writing conservative-pad, a string-padding function free from the constraints of political correctness.

Here’s the original:

module.exports = leftpad;

function leftpad (str, len, ch) {
  str = String(str);
  var i = -1;
  if (!ch && ch !== 0) ch = ' ';
  len = len - str.length;
  while (++i < len) {
    str = ch + str;
  }
  return str;
}

This is terrible. You can expect that my replacement version will:

  • Be O(n). The original version uses Shlemiel the Painter's algorithm, which is mumble slur mumble.
  • Be written in LISP, to protect purity. These immigrant Javascript programmers are driving down wages and destroying our precious modularity.
  • Make web development great again.
  • Remove the unnecessary 'ch' parameter, a ridiculous piece of political correctness. It isn't our fault that string-padding is dominated by ' '; it won fair and square, and we shouldn't make exceptions to meritocracy just because someone wants a string-padding character that isn't white.
  • Compile with -Wall, because otherwise the bugs will keep coming back over the border.

Thank you.

Truth Coming Out of Her Well

Posted on February 28, 2016

There is a semi-famous picture used as a test image in computer graphics, the Lenna picture. It has an interesting history:

[Image: the Lenna test image]

Alexander Sawchuk estimates that it was in June or July of 1973 when he, then an assistant professor of electrical engineering at the USC Signal and Image Processing Institute (SIPI), along with a graduate student and the SIPI lab manager, was hurriedly searching the lab for a good image to scan for a colleague’s conference paper. They had tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s. They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy.

The engineers tore away the top third of the centerfold so they could wrap it around the drum of their Muirhead wirephoto scanner, which they had outfitted with analog-to-digital converters (one each for the red, green, and blue channels) and a Hewlett Packard 2100 minicomputer. The Muirhead had a fixed resolution of 100 lines per inch and the engineers wanted a 512 x 512 image, so they limited the scan to the top 5.12 inches of the picture, effectively cropping it at the subject’s shoulders.

Yes, for strictly technical engineering reasons, they tastefully cropped Lenna to the shoulders. This reminded me of a more recently semi-famous picture, which suffers from similar aspect-ratio and other technical problems. And so, ladies, gentlemen and other persons, I present: Truth Coming Out of Her Well to Politely Clarify for Mankind.

[Image: Truth Coming Out of Her Well to Politely Clarify for Mankind]

For use as a meme, this version has several significant advantages over the 1896 Jean-Léon Gérôme original. In particular:

  • It is square
  • The woman’s face is properly centered
  • There is a uniform area at the top to overlay text on

Here are some examples of how you might use it:

  • Nice headline / The actual study says the opposite
  • They didn't do a Bonferroni correction
  • That sounds pretty damning / Unless you read the original context

I claim no copyright over this image, and the original painting is old enough that its copyright has expired. So feel free to use this meme to ~~shame~~ politely clarify for mankind wherever you see fit!

EDIT: Rob Bensinger suggests cropping slightly differently. This version gets a closer view, but adds graininess. Whether graininess is a technical failing reminiscent of 1970s image processing or an appropriate signal of meme-iness is up for debate.

[Image: cropped version of Truth Coming Out of Her Well to Politely Clarify for Mankind]

Decentralized AGIs, or Singleton?

Posted on February 19, 2016

When predicting and planning for coming decades, we classify futures different ways based on what happens with artificial general intelligence. There could be a hard take-off, where soon after an AGI is created it self-improves to become extraordinarily powerful, or a soft take-off, where progress is more gradual. There could be a singleton – a single AGI, or a single group-with-AGI, which uses AGI to become much more powerful than everyone else, or things could be decentralized, with lots of AGIs or lots of groups and individuals that have AGIs.

The soft- vs hard-takeoff question is a matter of prediction; either there is a level of intelligence which enables rapid recursive self-improvement, or there isn’t, and we can study this question but we can’t do much to change the answer one way or the other. Whether AGI is decentralized or a singleton, however, can be a choice. If a team crosses the finish line and creates a working AGI, and they think decentralized control will lead to a better future, then they can share it with everyone. If multiple teams are close to finishing but they think a singleton will lead to a better future, then they can (we hope) join forces and cross the finish line together.

There are things to worry about and try to prepare for in singleton-AGI futures, and things to worry about and prepare for in decentralized-AGI futures, and these are quite different from each other. Which is better, and which will actually happen? I think a lot of people talking about AGI and AGI safety end up talking past each other, because they are imagining different answers to this question and envisioning different futures. So let’s consider two futures. Both will be good futures, where everything went right. One will be a singleton future, and the other will be a decentralized future.

Let’s look at a singleton future, starting with a version of that future in which everything went right. There are some who want to make – or for others to make – a single, very powerful AGI. They want to design it in such a way that it will respect everyone’s rights and preferences, be impossible for anyone to hijack, and be amazingly good at getting us what we want. In a world where this was executed perfectly, if I wanted something, then the AGI would help me get it. If two people wanted things that were incompatible, then somewhere in the AGI’s programming would be a rule which decides who wins. Philosophers have a lot to say about what that rule would be, and about how to resolve situations when people’s preferences are inconsistent or would change if they knew more. In the world where everything went right, all of those puzzles were solved conclusively, and the answers were programmed into the AGI. The theory of how intelligence works was built up and carefully verified, and all the AGI experts agreed that the AGI would do what all the philosophers and AGI experts together agreed was right. Then the AGI would take over the world, and everyone would be happy about it, at least in retrospect when they saw what happened next.

On the other hand, there are a lot of ways for this to go wrong. If someone were to say they’d built an AGI and they wanted to make it a singleton, we’d all be justifiably skeptical. For one thing, they could be lying, and building a different AGI to benefit only themselves, rather than to benefit everyone. But even the very best intentions aren’t necessarily enough. A major takeaway from MIRI and FHI’s research on the subject is that there’s a very real risk of trying to make something universally benevolent, but getting it disastrously wrong. This is an immensely difficult problem. Hence their emphasis on using formal math: when something is mathematically proven then it’s true, reducing the number of places a mistake could be made by one. There’s a social coordination problem, to make sure that whoever is first to create an AGI makes one that will benefit everyone; another social coordination problem, to make sure that people aren’t racing to be first-to-finish in a way that causes them to cut corners; and a whole lot of technical problems. Any one of these things could easily fail.

So how about a world with decentralized AGI – that is, one where everyone (or every company) has an AGI of their own, which they’ve configured to serve their own values? Again, we’ll start with the version in which everything goes right. First of all, in this world, there is no hard take-off, and especially no delayed hard take-off. If recursive self-improvement is a thing that can happen, then any balance of power is doomed to collapse and be replaced with a singleton as soon as one AGI manages to do it. And second, the set of other (non-AGI) technologies needs to work out in a particular way to make a stable power equilibrium possible. As an analogy, consider what would happen if every individual person had access to nuclear weapons. We would expect things to turn out very badly. Luckily, nuclear weapons require rare materials and difficult technologies, which makes it possible to restrict access to a small number of groups who have all more-or-less agreed to never use them. In a hypothetical alternate universe where anyone could make a nuclear weapon using only sand, controlling them would be impossible, and that hypothetical alternate universe would probably be doomed. Similarly, our decentralized-AGI world can’t have any technologies like the sand-nuke world’s, or it will collapse quickly as soon as AGIs get smart enough to independently rediscover the secret. Or alternatively, that world could build a coordination mechanism where everyone is monitored closely enough to make sure they aren’t pursuing any known or suspected dangerous technologies.

The problems in singleton-AGI world were mostly technical: the creators of the AGI might screw it up. In decentralized-AGI world, the problems mostly come from the shape of the technology landscape. We don’t know whether recursive self-improvement is possible, but if it is, then decentralized-AGI worlds aren’t likely to work out. We don’t know if making-nukes-from-sand is a possible sort of thing, but if anything like that is possible, then the bar for how good the world’s institutions will have to be to prevent disaster will be very high. These things are especially worrying because they aren’t things we can influence; they’re just facts about physics and its implications which we don’t know the answers to yet.

Suppose we make optimistic assumptions. Recursive self-improvement turns out not to be possible, the balance of technologies favors defense over offense, and our AGI representatives get together, form institutions, and enforce laws and agreements that prevent anything truly horrible from happening. There is still a problem. It’s the same problem that happens when humans get together and try to make institutions, laws and agreements. The problem is local incentives.

Any human with above room temperature IQ can design a utopia. The reason our current system isn’t a utopia is that it wasn’t designed by humans. Just as you can look at an arid terrain and determine what shape a river will one day take by assuming water will obey gravity, so you can look at a civilization and determine what shape its institutions will one day take by assuming people will obey incentives.

But that means that just as the shapes of rivers are not designed for beauty or navigation, but rather an artifact of randomly determined terrain, so institutions will not be designed for prosperity or justice, but rather an artifact of randomly determined initial conditions.

Meditations on Moloch by Scott Alexander

If we give everyone their own AGIs, then the way the future turns out depends on the landscape of incentives. That isn’t an easy thing to change, although it isn’t impossible. Nor is it an easy thing to predict, though some have certainly tried (for example, Robin Hanson’s The Age of Em). We can imagine nudging things in such a way that, as civilization flows downhill, it goes this way instead of that and ends up in a good future.

The problem is that, at the bottom of the hill as best I understand it, there are bad futures.

This isn’t something I can be confident in. Predicting the future is extremely hard, and where the far future is concerned, everything is uncertain. Maybe we could find a way to make having huge numbers of smarter-than-human AIs safe, and steer humanity from there to a good future. But for this sort of strategy, uncertainty is not our friend. If there were some reason to expect this sort of future to turn out well, or some strategy for making it turn out well, it would be subject to the same uncertainty as my belief that it will turn out badly; we would have to be uncertain in our belief that it would turn out well.

So, how do these two scenarios compare? To make a good future with a singleton AGI in it, humanity has to solve immensely difficult technical and social coordination problems, without making any mistakes. To make a good future with decentralized AGI in it, humanity has to… find out that, luckily, physics does not allow for recursive self-improvement or certain other classes of dangerous technologies.

I find the idea of building an AGI singleton intuitively unappealing and unaesthetic. It goes against my egalitarian instinct. It creates a single point of failure for all of humanity. On the other hand, giving everyone their own, decentralized AGIs is terrifying. Reckless. I can’t imagine any decentralized-AI scenarios that aren’t insanely risky gambles. So I favor humanity building a singleton, and AGI research being less than fully open.

A Series of Writing Exercises

Posted on January 27, 2016

Last Monday, my house hosted a Meta Monday on the subject of how to write about aversive things, particularly self-promotion. This spawned a discussion of how to avoid getting stuck on writing generally, and an interesting exercise. I’ve modified the description of the exercise slightly to refine it, so consider it untested, but I think this is worth doing at least once, especially if you struggle with writer’s block or want to write more.

The goal is to reliably summon and observe the feeling of writing, while definitely unblocked, on a series of subjects that starts out easy to write about and gets successively harder. It involves a facilitator, so it’s best done with a group or with a friend. Several people in the group thought of themselves as not being capable of prose-writing in general, but everyone who tried the exercise succeeded at it.

The facilitator explains the rules of the exercise, then picks a physical object in the room and sets a five-minute timer. For five minutes, everyone writes descriptions of the object. When the timer runs out, the facilitator asks a question about the object. When we did it, the object was a projector. Everyone’s goal is for their 5-minute description to contain the answer to the question. Since no one knows what the question’s going to be, the only way to have a high chance of answering the question is to mention everything. After five minutes were up, I pointed out the vent on the front of the projector, and asked: what’s this? Everyone’s description mentioned it, and everyone was able to write more or less continuously for the whole time without stopping. Success!

In round two of the exercise, everyone picked a project they had worked on. Same rules: after five minutes of writing, someone asks a question about your project – something basic and factual – and your goal is to have written the answer. Most people’s projects were software, so my question was “what programming language was it in”? Some other questions might have been “when did you start?” or “did you work with anyone?” This is very close to resume-writing, but because of the way it was presented, people in the room who were complaining about being stuck procrastinating and unable to do resume-writing were able to do it without any difficulty.

In round three, everyone picked a topic they’re interested in, and wrote about how they relate to that topic. This is similar to what one might write in a cover letter, but, continuing the format of the previous exercises, everyone optimized for answering all the basic factual questions. This one was harder; everyone was able to write something, but not everyone was able to go nonstop for the whole five minutes. Afterwards I asked “when did you first become interested in your topic?” and two-thirds of us had answered.

Several people observed that these exercises felt easy because they weren’t bullshit-y. So, for the fourth and final exercise, we ramped up the challenge level: Pick a villain from Game of Thrones (or some other fiction) and argue why they should rule. Someone else will independently pick a reason, and the goal is to have included that one. This one did, in fact, turn out to be more difficult; everyone managed to write something, but some of our pages were short.

My main takeaway from the exercise was the idea and mental motion of anticipating easy questions and trying to preemptively answer them – not just some of them, but all of them. This seems to work better for me than trying to answer a particular hard question, or to write without a mental representation of a questioner.

The Malthusian Trap Song

Posted on December 17, 2015

The Malthusian Trap is what happens to societies that reach the carrying capacity of their environment. For most of human history, humans have lived a Malthusian existence, constantly threatened by famine and disease and warfare. I wrote this for the 2015 MIT Winter Solstice. To the tune of Garden Song.

Dawn to dusk, row by row, better make this garden grow
Move your feet with a plow in tow, on a piece of fertile ground
Inch by inch, blow by blow, gonna take these crops you sow,
From the peasants down below, ‘till the wall comes tumbling down

Pulling weeds and picking stones, chasing dreams and breaking bones
Got no choice but to grow my own ‘cause the winter’s close at hand
Grain for grain, drought and pain, find my way through nature’s chain
Tune my body and my brain to competition’s demand.

Plant those rows straight and long, thicker than with prayer and song.
Emperors will make you strong if your skillset’s very rare.
Old crow watching hungrily, from his perch in yonder tree.
In my garden I’m as free as that feathered thief up there.