My OpenAPS G4 Case

Posted on April 13, 2018

I have type 1 diabetes, an autoimmune condition which requires that I constantly measure my blood sugar and use insulin to control it. The simplified model is: blood sugar goes up as a result of food and as a result of time, down as a result of insulin, and the goal is to keep it inside a narrow range. I have a Dexcom G4 continuous glucose sensor, which measures blood sugar every five minutes, and a Medtronic insulin pump, which gives insulin on command; but until recently, there was no way to make these talk to each other, so I had to make all insulin-delivery decisions by hand.

OpenAPS is an open source project which takes continuous glucose monitor data, and uses an insecure 900MHz radio protocol to command the pump. It works, and it works very well! One thing I found left a lot to be desired, though, was the case hardware: none of the options people had designed were both sufficiently protective and small enough to fit in my pocket. (While I was waiting for the case to be printed and to arrive, I broke the USB OTG connector off my Explorer Block by keeping it in my pocket with only an antistatic bag for protection, thus demonstrating that a good protective case is indeed necessary.) So I downloaded the trial version of Fusion 360, measured all the parts carefully, and designed one. This post documents the case, what parts it works with, and how to assemble an OpenAPS rig using it if you are using the right components (in particular, an Intel Edison and a Dexcom G4).

The rest of this post is aimed at people who use OpenAPS or are looking to get started with OpenAPS, who arrived here via a link from the “Get your rig hardware” section of the OpenAPS documentation. It is probably not as interesting to others.

Parts:

The end result is pocket sized (albeit just barely), and fairly close to the theoretical limit of how small it can be without exposing a connector to impact. It looks like this:
Photo of OpenAPS case+G4+battery fully assembled

Start with the case:
OpenAPS case for G4+Edison, disassembled

Mount the Intel Edison on the Explorer Block, and plug the USB OTG cable into the USB port labelled “OTG”. Thread a regular USB cable through the hole in the case and connect it to the Explorer Block, and put the Explorer Block in the case.
OpenAPS case with Explorer Block and USB cable

Plug in the G4. It should fit in the slot tightly.
Photo of OpenAPS case with the Dexcom G4 plugged in

Next, we’re going to guide the cable to prevent the USB port on the Explorer block from getting damaged if the cable gets pulled on. (This risk is not theoretical; before I built this case, I broke the connector off one Explorer Block that way). This will also give limited protection from moisture. Start by placing two pieces of electrical tape over the hole, one on each side of the cable:
Photo of OpenAPS case with the first strain-relief tape pieces in place

Add a horizontal piece of tape to guide the cable to the end, followed by a vertical piece to secure it in place. If you ever yank on the cable too hard, this will come off and absorb most of the force, protecting the rest of the rig from damage.
Photo of OpenAPS case with the strain-relief tape completed

Next, tape the top. Be careful not to cover too much of the speakers, or alerts might stop being audible. Finally, hook up the battery.
Photo of OpenAPS case+G4+battery fully assembled

One limitation of this setup is that, because the battery is connected via USB instead of via the 3.7V power connector on the Explorer Block, the OpenAPS software can’t measure the battery’s charge level and warn you when it’s low. This is unavoidable: if you power it via the 3.7V power connector instead, it can’t communicate with the Dexcom G4, because USB OTG requires 5V power and the Explorer Block doesn’t have the circuitry to step 3.7V up to 5V. I’ve addressed this by adding a software feature which measures uptime, and delivers an alert when it’s been running for a duration which suggests the battery is probably low; since the rate of drain is fairly consistent, and since I’m swapping two identical batteries and always charging them fully before connecting them, this is good enough. This feature will probably land in OpenAPS 0.7, or (if you must) you can use my fork.
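The check itself is simple. Here’s a minimal sketch of the idea in JavaScript, assuming a Linux system like the Edison; the 20-hour threshold and the console alert are placeholders, not the actual values or code from my fork:

// Minimal sketch of an uptime-based low-battery warning (not the actual
// OpenAPS code; the threshold and the alert mechanism are assumptions).
var fs = require('fs');

var BATTERY_HOURS = 20; // assumed runtime of one fully charged battery

function checkBatteryByUptime() {
  // On Linux, the first field of /proc/uptime is seconds since boot.
  var uptimeSeconds = parseFloat(fs.readFileSync('/proc/uptime', 'utf8').split(' ')[0]);
  var uptimeHours = uptimeSeconds / 3600;
  if (uptimeHours > BATTERY_HOURS) {
    // A real rig would raise a pump or phone alert rather than just logging.
    console.log('Rig up for ' + uptimeHours.toFixed(1) + ' hours; battery is probably low.');
  }
}

checkBatteryByUptime();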

This is the second revision. The first was about 10% bigger on each axis than it needed to be, and had sharp corners. If someone wanted to make a third revision, the one thing that could be improved would be to carve out a compartment on the bottom for two AAA batteries, for use as pump spares.

A Nutrition-Focused Review of the Impossible Burger

Posted on February 26, 2018

People in my circle have been talking a lot about the Impossible Burger, which is a plant-based meat substitute designed to imitate the flavor of, and to compete with, ground beef hamburgers. Discussion so far has largely focused on how the taste compares to beef hamburgers. This is my attempt to compare the Impossible Burger to beef on nutrition. I’m not comparing price, since availability is still limited and the current price is unlikely to remain stable over time.

This post was difficult to write, and doesn’t reach any clear conclusions about which is better. This isn’t because I think they’re equivalent, or close to equivalent; rather, I think it’s probably the case that the Impossible Burger is either significantly more or less healthy than beef, but am left with considerable uncertainty about which way the difference goes. This is due to a combination of uncertainty about the Impossible Burger’s actual chemical composition (some important details aren’t disclosed) and uncertainty about how healthy beef is (the quality of research in this area is quite poor). Ultimately I was forced to settle for identifying the relevant differences, flagging many of them as subject to uncertainty and/or controversy, and leaving it at that.

The Outside View

Both beef and the Impossible Burger fall in reference classes with possible health concerns. Beef is red meat; the Impossible Burger is a processed food. Red meat is correlated with bad health outcomes in observational studies (here’s a meta-analysis); unfortunately this is a study methodology with an extremely poor track record, and there isn’t really anything better to go on. Intervention studies exist, but none that are long-term enough to measure mortality, only indirect biomarkers, and they find no effect (meta-analysis).

Detailed Comparison

There are a number of differences between the Impossible Burger and beef, so to keep track of them all, here’s a nutrition-facts-panel-style table, followed by more detail on each of the rows where there’s a difference.

The Impossible Burger FAQ says that “bioavailable protein, iron, and fat content are comparable to conventional 80/20 ground beef”, but the fat content on the label matches more closely to 85/15 ground beef, so I’ll use that as the basis for comparison. (Beef is notated as percent-lean/percent-fat, and typically ranges from 93/7 to 70/30, with 80/20 or 85/15 most commonly used in hamburgers.)

Impossible Burger ingredients: Water, Textured Wheat Protein, Coconut Oil, Potato Protein, Natural Flavors, 2% or less of: Leghemoglobin (soy), Yeast Extract, Salt, Soy Protein Isolate, Konjac Gum, Xanthan Gum, Vitamin C, Thiamin (Vitamin B1), Zinc, Niacin, Vitamin B6, Riboflavin (Vitamin B2), and Vitamin B12.

Other information sources:

  • Impossible Burger nutrition facts
  • USDA data on 85/15 ground beef
  • USDA data on coconut oil
Nutrition Facts
Serving Size: 85g

                     Beef      Impossible   RDA        Favors
Calories             218       220          varies
Total Fat            13g       13g
Medium-chain         0g        7g*                     Impossible
Omega-3              42mg      0*
Omega-6              320mg     0*
Vaccenic Acid        0.55g     0*                      Impossible
Total Carbohydrate   0g        5g                      Beef
Protein              23.6g     20g                     Beef
Lysine               2.3g      ?            38mg/kg    Beef
Sodium               76mg      430mg                   controversial
Potassium            407mg     262mg        4700mg     Beef
Calcium              22mg      21mg
Iron                 2.9mg     3mg          18mg/8mg
Vitamin C            0mg       17mg**       90mg
Thiamin              .04mg     16.3mg       1.2mg
Riboflavin           0.19mg    0.2mg
Niacin               6.3mg     5mg
Vitamin B6           0.4mg     0.2mg        1.7mg
Folate               10mcg     57mcg        400mcg     Impossible
Vitamin B12          2.8mcg    2.2mcg       2.4mcg
Zinc                 6.6mg     3mg          8mg/11mg   Beef

* Estimated based on ingredients list, but probably pretty accurate
** Estimated based on ingredients list, with a large error margin
† A difference that’s significant but where it’s unclear or controversial which is better

    Fat

    The fat in the Impossible Burger comes from coconut oil, which is mostly composed of medium-chain triglycerides (MCTs). MCTs are commonly used by people trying to induce ketosis – that is, to make their body use fat for short-term energy needs (as opposed to making cell membranes out of it, storing it in the liver for use over a longer time horizon, or storing it in fat cells for long-term storage). This is probably a good thing, but it’s unusual, and puts the Impossible Burger in a distinctly different nutritional niche than beef burgers, and may interact strangely with unusual metabolisms.

    Beef contains both omega-3 and omega-6 fats, while the Impossible Burger contains neither. The standard line is that absolute quantity of omega-3 and omega-6 fats doesn’t matter, but ratio does. Unfortunately, there isn’t a clear answer to what the ratio or amounts are in typical ground beef; they depend on both the breed of cow and on what it ate (grass-fed beef leads to a more favorable omega-3 ratio), and the 3:6 ratios range from 1.8 to 13.6. The former is probably good, the latter is probably bad. Beef also contains vaccenic acid, which is a trans-fat. Trans fats as a category have been found to have detrimental health effects, but this was based on the distribution of trans fats that resulted from partial hydrogenation.

    Overall, the Impossible Burger’s fat is probably better than beef fat.

    Protein

    The main dietary purpose of beef, as eaten in practice, is to provide protein. With regards to protein, the nutrition facts panels of ground beef and the Impossible Burger look pretty similar; beef has more, but only slightly (23.6g vs 20g per 85g serving). However, not all protein is created equal; proteins are made up of amino acids in varying proportions, and if something contains more of the amino acids you don’t need but lacks the amino acids that you do, that’s not ideal.

    From the ingredients label, the protein in the Impossible Burger mainly comes from textured wheat protein and potato protein, with additional small amounts from leghemoglobin, yeast extract, and soy protein isolate. The proportions aren’t disclosed, nor is the overall amino acid profile. There’s a reasonably good justification for this; providing that information isn’t customary for processed foods, and their upstream ingredient suppliers don’t necessarily provide amino acid profile information either. Still, we can infer some things about the protein in the Impossible Burger based on the order of the ingredients list: at least 13g of textured wheat protein, between 2g and 7g of potato protein, at most 1.7g of leghemoglobin and yeast extract, and at most 430mg of soy protein isolate. The closest I could find to a public statement about the amino acid profile of the Impossible Burger is a tweet saying

    The #impossibleburger contains all essential and non-essential amino acids. The amount of bioavailable protein is comparable to beef.

This isn’t entirely reassuring. Suppose they had taken a large amount of bioavailable but incomplete protein, and mixed in a small amount of complete protein. That would make the statement true, but it would be nutritionally poor. As it turns out, potato protein is a complete protein and wheat protein isn’t, so this might in fact be what happened. Depending on what part of the plant you use, wheat has a Protein Digestibility Corrected Amino Acid Score somewhere between 0.25 and 0.53, as compared to 0.92 for beef. I checked out some producers of textured wheat protein, but they didn’t disclose their amino acid profile either.

    Leghemoglobin

    The component that gives beef much of its distinctive flavor is hemoglobin, which is a blood protein that transports oxygen in mammals. The Impossible Burger’s signature ingredient is leghemoglobin, a protein which provides a similar taste and color. Leghemoglobin was originally isolated from the roots of soybeans, and is produced using genetically modified yeast. Leghemoglobin is the Impossible Burger’s most controversial ingredient, largely because of publicity around Impossible Foods’ conversations with the FDA, which were publicized when an environmentalist organization, ETC Group, used a FOIA request to get them. This was then used as a platform to criticize FDA practices in general.

The FDA’s concerns, however, were specifically about allergenicity. With some searching, I was able to find one report of an allergic reaction possibly related to the Impossible Burger. I contacted the author of that post, and they reported that blood tests failed to identify any allergies, and the only time they’ve ever had an allergic reaction was the one time they tried the Impossible Burger. That is the only example of a possible allergic reaction to the Impossible Burger I could find; if there are others, they haven’t been written about publicly. And it’s somewhat speculative; it might’ve been a reaction to something else, since they (understandably) didn’t try it again. Overall, leghemoglobin doesn’t concern me very much; trying it doesn’t seem notably higher risk than trying any other new processed food, and if there does turn out to be an allergenicity problem, I expect we’ll find out (with prevalence numbers) soon enough.

    Miscellaneous

    The Impossible Burger lists “natural flavors” on the ingredients above the 2% mark, meaning at least 2% of it consists of undisclosed ingredients from that category. Those ingredients could potentially be bad.

    Beef contains creatine, which was found to have cognitive benefits when given as a supplement to vegetarians (but not to meat eaters).

The Impossible Burger contains 5g of carbohydrates (most likely as an impurity in the textured wheat protein). If it’s on a bun, this is pretty small. It does make the Impossible Burger less suitable for low-carb/ketogenic diets, though.

    Conclusion

    Overall, the Impossible Burger seems good enough from a nutrition perspective to be competitive with beef. However, I have reservations about aspects of its composition that aren’t publicly known, particularly the amino acid profile and what might be lurking under the “natural flavors” heading. Limited availability means I won’t be eating this in the short term, but once it’s sold in grocery stores and reaches a price not too much higher than ground beef, I’ll probably start eating it.

    Sanity Check the News

Posted on August 15, 2017

    Regarding what happened recently in Charlottesville –

    Actually, first, I’m going to put this filler paragraph here, so that people looking for quick emotional engagement can go read something else. This post is about subtleties, and I would prefer that anyone who wants to eschew subtlety in favor of emotion unsubscribe from my wall.

That out of the way, Alex Fields, the driver of the car that ran into a crowd, is by all accounts a racist and a scumbag. I fully expect that he will be hung out to dry, and I have no problem with this. However, given his prominence in the news cycle, I believe it’s important to be truthful and precise about exactly what he did. The story running in the news is that he committed premeditated mass-murder, by running into a crowd of counter-protesters with his car.

    I make a habit of fact-checking news stories, both political and non-political, and calling out any errors I see. I decided to check this one. Was it really murder, or was it merely vehicular manslaughter? This is important, because falsely believing that other people have escalated the level of violence in a conflict can *cause* the level of violence to escalate. So, how strong is the case against Alex Fields?

    I’ve looked at it from as many angles as I could find, and reconstructed the sequence of events as best I could. Using a stopwatch for time and Google Maps for distance, I (imprecisely) estimated the car’s speed. I sorted through YouTube’s terrible search to get non-duplicate videos. I watched the important moments of them frame-by-frame. Here’s the sequence of events:

1. Alex Fields is driving south towards a protest. There are pedestrians about 120ft ahead of him. His brake lights are on. The camera moves away from his car, and returns a second later.
    2. In 3.1s the car moves 105ft (about 24mph).
    3. Alex’s car is struck hard on the rear bumper by a stick or pole. Prior to this point he has not hit any pedestrians.
    4. About 50ft ahead of Alex’s car is a white convertible; in between, there are pedestrians.
    5. Alex’s car accelerates. Two pedestrians go over his hood, one is sandwiched between his car and the next, and several more on the sides are knocked over without being seriously hurt.
6. Alex’s car hits the convertible, and the convertible hits the car in front of it, a red minivan. The minivan goes into the intersection, hitting about ten pedestrians.
7. Alex Fields shifts into reverse and starts backing up.
8. About seven people charge the car, striking it with sticks, smashing the rear windshield and the front windshield on the passenger side.
9. The car backs into two of the people who were attacking it, one of whom seems to end up seriously hurt.

    I think the most likely explanation is that, after the first strike on his car, he panicked. I am fairly certain that this will be his defense in court, and, unless evidence is found which is not yet public, I don’t think his guilt can be proven even to a preponderance of evidence standard.

    Angles:
    From behind: https://www.youtube.com/watch?v=zeB2ZaUSa48
    From front: https://www.youtube.com/watch?v=r0guSatguSk
    People hitting the car after it crashes: https://www.youtube.com/watch?v=9mnywjPPDtU
    Drone view: https://www.youtube.com/watch?v=6QHXk2AB6kU

    (Cross-posted on Facebook)

    Election Fraud and a Roll to Disbelieve

Posted on November 8, 2016

    In 2000, the US presidential election between George Bush and Al Gore was swung by hacked voting machines. This fact can be verified by anyone, beyond a shadow of a doubt. In the close race between Bush and Gore, Volusia County reported a total of negative-16,022 votes for Al Gore[1]. Upon investigation, the negative vote count disappeared and was replaced with a normal-looking one, but not before the number reached the press. At the time, this was unexplainable. It was not until 2005 that security researcher Harri Hursti demonstrated a technique for hacking voting machines that involves sneaking memory cards with negative vote totals on them into the counting process[2]. The idea is that, by inserting a memory card with positive votes for one candidate and negative votes for another candidate, one can change the vote totals without messing up the turnout numbers, which would reveal the fraud. But if one is performing the Hursti hack, and messes up by putting the fake memory card into the process *in the wrong place*, then a county may accidentally report a negative vote total – because a memory card that was supposed to be used in a large precinct was used in a small precinct, without enough votes to steal. The machines used in Volusia County were in fact vulnerable to this technique, and this is what happened.
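To make the bookkeeping concrete, here is a toy illustration in JavaScript. The numbers are invented, and this is just the arithmetic described above, not code from or for any real voting system:

// Toy bookkeeping for the memory-card trick described above (invented numbers).
// Preloading offsetting totals changes the split without changing turnout.
var preload = { candidateA: 500, candidateB: -500 }; // nets to zero ballots
var actual  = { candidateA: 400, candidateB: 600 };  // 1000 real ballots cast

var reported = {
  candidateA: actual.candidateA + preload.candidateA, // 900
  candidateB: actual.candidateB + preload.candidateB  // 100
};
var turnout = reported.candidateA + reported.candidateB; // still 1000

// But use the same card in a precinct where candidateB gets fewer than 500
// real votes, and the reported total goes negative -- the Volusia anomaly.
console.log(reported, turnout);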

    Because the margin in Florida was small, a “recount” was triggered; in reality, ballots were being viewed by humans for the first time. Bush and Gore argued before the Supreme Court, and the Supreme Court ruled 5-4 (along party lines) to stop the counting. Gore then conceded. It all happened too quickly for little things like election fraud to be noticed; the media narrative, rather than being about fraud, was about “hanging chads” and other innocent things.

Since then, forces within the Republican party have worked to promote a false narrative of vote fraud as something symmetric, that both parties do, by manufacturing false evidence of Democratic voter fraud. Like voting machine hacking, this becomes most obvious when there’s a mistake and the mistake becomes a scandal. In 2006, seven attorneys were fired mid-term from the Department of Justice[3]. In 2008, the Inspector General determined[4] that these firings were motivated by refusal to prosecute voter fraud cases against Democrats.

Over the past few weeks, Donald Trump has been loudly warning that Democrats would engage in election fraud to give the election to Hillary Clinton. This is a possible thing, but so is the reverse. Fortunately, there’s a standard strategy for determining whether election fraud took place, and which direction it went in: exit polls.

I don’t have access to exit poll data for today’s election. Neither do you, and neither do most of the news outlets that’re reporting the results. But Nate Silver has this to say about them on his blog:

“One reason people find Trump’s competitive margins across a wide range of swing states so surprising is because exit polls showed Clinton beating her pre-election polls in most states, instead of underperforming them.”[5]

    At the time I’m writing this, FiveThirtyEight says (on its sidebar) that Clinton and Trump are equally likely to win, based on states that have been officially called; other sources are strongly favoring Trump’s chances, based on preliminary counts.

    I roll to disbelieve. Maybe Trump really did get more votes than expected, and more votes than indicated by the exit polls Silver was referring to. Or maybe he didn’t. One thing’s for certain, though: the American people will hear a definitive result before any of this is investigated.

    (Cross-posted on Facebook)

    A Problem With Negative Income Tax Proposals

Posted on October 13, 2016

    I frequently hear it proposed that we should institute a negative income tax or similar form of guaranteed minimum income, funded by removing parts of the existing welfare and social services system. Simple economic analysis suggests that the welfare system is spectacularly wasteful, compared to simple cash transfers, and that tearing it down is obviously correct.

    It isn’t.

Consider the food-stamp programs run by most states. People whose income is low enough receive funds whose use is restricted, so that they can be used to purchase food – but not to purchase anything else. Trading food stamps for regular dollars or inedible goods and services is illegal for both the buyer and the seller. A naive economic model suggests that this is bad: people know what they need better than bureaucrats do, and if someone would rather spend less on food and more on housing or car maintenance or something, they’re probably right about that being the right thing to do. So food-stamp dollars are worth less than regular dollars, and by giving people food-stamp dollars instead of regular dollars, value is destroyed. This is backed up by studies analyzing poor peoples’ purchases when given unrestricted funds; those purchases tend to be reasonable.

    This is a Chesterton Fence. Tearing it down would be a terrible mistake, because it has a non-obvious purpose:

    The reason you give poor people food-stamp dollars instead of regular dollars is because they’re resistant to debt collection, both legal and illegal, resistant to theft and resistant to scams.

    There are organizations and economic forces that are precisely calibrated to take all of the money poor people have, but not more. If you give one person unrestricted money, that person is better off. But if you give everyone in a poor community extra money, then in the long run rents, drug prices, extortion demands and everything else will rise, compensating almost exactly.

    We have a government which says to its poor people: you may be scammed out of your money, your access to every luxury, and even your home, by anyone who can trick you into signing the wrong piece of paper. But we draw the line at letting you be scammed out of your access to food.

    A negative income tax would erase that line. I haven’t heard anyone raise this as a problem, which sets an upper bound on how carefully people (that I’ve read) have thought about the issue. This worries me greatly. Can this issue be patched? And, thinking about it from this angle, are there more issues lurking?

    (Cross-posted on Facebook)

    National Psyches, Expressed As Sports

Posted on July 26, 2016

    It’s often said that sports are a substitute for warfare, an alternative place for people to channel their most destructive instincts. As someone with only benevolent instincts I’ve never felt the appeal, but the thought occurred to me that one might learn quite a bit about cultures’ psyches by paying attention to their choice of sports.

For example, consider football. Each team has a leader, who controls the goal and is allowed to break the rules. The rest of the players coordinate in a decentralized manner to set up shots at the other team’s leader, whose job is to survive the assassination attempts. Football is the most popular sport in Venezuela, Colombia, Nigeria, Brazil and Turkey. In the United States, however, the name refers to something else entirely.

    In American football, each team’s leader is a general who sits on the sidelines and gives orders to a group of drug-enhanced supersoldiers, who were selected by the draft. One team’s goal is to take the “political football” and deliver it into the other team’s territory; the other team’s goal is to find the player with the football and detain him, so that the referee can see who’s really responsible. Recent studies have shown players suffering brain damage from repeated concussions, but no one knows what to do about this so things pretty much continue on as usual.

    Not all sports represent warfare, though! Consider basketball, also popular in the United States. In basketball, two teams of mostly black people compete to see which is taller, thereby demonstrating their superior genes.

    Um. I mean, um. Baseball! Look at baseball. In baseball, players approach one by one, and everyone works together to stop them from scoring home runs. A home run is a well-standardized euphemism for having sex. Other key players are the pitcher and catcher, which are well-standardized euphemisms for participants in gay sex. In fact, pretty much every baseball-related term is a euphemism for something sexual, at least according to UrbanDictionary. Whereas in Britain, they play a game that’s superficially the same except none of its terminology is euphemistic. This is because the British are uncultured barbarians.

    I wish I could say that the historical shift in peoples’ choice of sports reflected the advancing enlightenment and universal culture. Instead, I will observe that in the de facto national sport of South Korea, which is Starcraft, one third of players are literally giant insects and the most exciting moments for spectators involve nuclear weapons.

    What Google’s TPUs Mean for AI Timing and Safety

Posted on May 21, 2016

    Last Wednesday, Google announced that AlphaGo was not powered by GPUs as everyone thought, but by Google’s own custom ASICs (application-specific integrated circuits), which they are calling “tensor processing units” (TPUs).

    We’ve been running TPUs inside our data centers for more than a year, and have found them to deliver an order of magnitude better-optimized performance per watt for machine learning.

    TPU is tailored to machine learning applications, allowing the chip to be more tolerant of reduced computational precision, which means it requires fewer transistors per operation.

    So, what does this mean for AGI timelines, and how does the existence of TPUs affect the outcome when AGI does come into existence?

The development of TPUs accelerated the timeline of AGI development. This is fairly straightforward; researchers can do more experiments with more computing power, and algorithms that had previously stretched past the limits of available computing power became possible.

    If your estimate of when there will be human-comparable or superintelligent AGI was based on the very high rate of progress in the past year, then this should make you expect AGI to arrive later, because it explains some of that progress with a one-time gain that can’t be repeated. If your timeline estimate was based on extrapolating Moore’s Law or the rate of progress excluding the past year, then this should make you expect AGI to arrive sooner.

    Some people model AGI development as a race between capability and control, and want us to know more about how to control AGIs before they’re created. Under a model of differential technological development, the creation of TPUs could be bad if it accelerates progress in AI capability more than it accelerates progress in AI safety. I have mixed feelings about differential technological development as applied to AGI; while the safety/control research has a long way to go, humanity faces a lot of serious problems which AGI could solve. In this particular case, however, I think the differential technological advancement model is wrong in an interesting way.

    Take the perspective of a few years ago, before Google invested in developing ASICs. Switching from GPUs to more specialized processors looks pretty inevitable; it’s a question of when it will happen, not whether it will. Whenever the transition happens, it creates a discontinuous jump in capability; Google’s announcement calls it “roughly equivalent to fast-forwarding technology about seven years into the future”. This is slight hyperbole, but if you take it at face value, it raises an interesting question: which seven years do you want to fast-forward over? Suppose the transition were delayed for a very long time, until AGI of near-human or greater-than-human intelligence was created or was about to be created. Under those circumstances, introducing specialized processors into the mix would be much riskier than it is now. A discontinuous increase in computational power could mean that AGI capability skips discontinuously over the region that contains the best opportunities to study an AGI and work on its safety.

    In diagram form:

[Diagram: capability over time, with ASICs introduced now vs. later]

    I don’t know whether this is what Google was thinking when they decided to invest in TPUs. (It probably wasn’t; gaining a competitive advantage is reason enough). But it does seem extremely important.

There are a few smaller strategic considerations that also point in the direction of TPUs being a good idea. GPU drivers are extremely complicated, and rumor has it that the code bases of both major GPU manufacturers are quite messy; starting from scratch in a context that doesn’t have to deal with games and legacy code can greatly improve reliability. When AGIs first come into existence, if they run on specialized hardware then the developers won’t be able to increase their power as rapidly by renting more computers, because availability of the specialized hardware will be more limited. Similarly, an AGI acting autonomously won’t be able to increase its power that way either. Datacenters full of AI-specific chips make monitoring easier by concentrating AI development into predictable locations.

    Overall, I’d say Google’s TPUs are a very positive development from a safety standpoint.

Of course, there’s still the question of what the heck they actually are, beyond the fact that they’re specialized processors that train neural nets quickly. In all likelihood, many of the gains come from tricks they haven’t talked about publicly, but we can make some reasonable inferences from what they have said.

    Training a neural net involves doing a lot of arithmetic with a very regular structure, like multiplying large matrices and tensors together. Algorithms for training neural nets parallelize extremely well; if you double the amount of processors working on a neural net, you can finish the same task in half the time, or make your neural net bigger. Prior to 2008 or so, machine learning was mostly done on general-purpose CPUs — ie, Intel and AMD’s x86 and x86_64 chips. Around 2008, GPUs started becoming less specific to graphics and more general purpose, and today nearly all machine learning is done with “general-purpose GPU” (GPGPU). GPUs can perform operations like tensor multiplication more than an order of magnitude faster. Why’s that? Here’s a picture of an AMD Bulldozer CPU which illustrates the problem CPUs have. This is a four-core x86_64 CPU from late 2011.

[Image: AMD Bulldozer die, with the floating point unit highlighted in red]

    (Image source)

    Highlighted in red, I’ve marked the floating point unit, which is the only part of the CPU that’s doing actual arithmetic when you use it to train a neural net. It is very small. This is typical of modern CPU architectures; the vast majority of the silicon and the power is spent dealing with control flow, instruction decoding and scheduling, and the memory hierarchy. If we could somehow get rid of that overhead, we could fill the whole chip with floating-point units.

This is exactly what a GPU is. GPUs only work on computations with highly regular structure; they can’t handle branches or other control flow, they have comparatively simple instruction sets (and hide that instruction set behind a driver so it doesn’t have to be backwards compatible), and they have predictable memory-access patterns to reduce the need for cache. They spend most of their energy and chip-area on arithmetic units that take in very wide vectors of numbers, and operate on all of them at once.
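To make the “regular structure” point concrete, here’s a toy example, sketched in JavaScript just for illustration: a naive matrix multiply. Every output cell is the same multiply-accumulate pattern over contiguous data, with no data-dependent branching, which is exactly the kind of work that maps well onto wide vector arithmetic units and poorly onto branch-heavy CPU machinery.

// Naive n-by-n matrix multiply over row-major flat arrays. The same
// multiply-accumulate runs for every output cell, with no branching.
function matmul(a, b, n) {
  var c = new Float32Array(n * n);
  for (var i = 0; i < n; i++) {
    for (var j = 0; j < n; j++) {
      var sum = 0;
      for (var k = 0; k < n; k++) {
        sum += a[i * n + k] * b[k * n + j];
      }
      c[i * n + j] = sum;
    }
  }
  return c;
}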

    But GPUs still retain a lot of computational flexibility that training a neural net doesn’t need. In particular, they work on numbers with varying numbers of digits, which requires duplicating a lot of the arithmetic circuitry. While Google has published very little about their TPUs, one thing they did mention is reduced computational precision.

    As a point of comparison, take Nvidia’s most recent GPU architecture, Pascal.

    Each SM [streaming multiprocessor] in GP100 features 32 double precision (FP64) CUDA Cores, which is one-half the number of FP32 single precision CUDA Cores.

    Using FP16 computation improves performance up to 2x compared to FP32 arithmetic, and similarly FP16 data transfers take less time than FP32 or FP64 transfers.

Format   Bits of exponent   Bits of precision
FP16     5                  10
FP32     8                  23
FP64     11                 52
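As a quick illustration of what fewer bits of precision means in practice, here’s a small JavaScript snippet; JavaScript can only demonstrate the FP64-to-FP32 step, since it has no built-in FP16 type.

// Math.fround rounds a 64-bit double to the nearest 32-bit float.
var x = 1 / 3;
console.log(x);              // 0.3333333333333333 (FP64: 52 bits of precision)
console.log(Math.fround(x)); // 0.3333333432674408 (FP32: 23 bits of precision)
// FP16, with only 10 bits of precision, loses even more -- usually fine for
// neural net weights and activations, not for most scientific computing.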

So a significant fraction of the cores on Nvidia’s GPUs are FP64 cores, which are useless for deep learning. When it does FP16 operations, the GPU uses an FP32 core in a special mode, which is almost certainly less efficient than using two purpose-built FP16 cores. A TPU can also omit hardware for unused operations like trigonometric functions and probably, for that matter, division. Does this add up to a full order of magnitude? I’m not really sure. But I’d love to see Google publish more details of their TPUs, so that the whole AI research community can make the same switch.

    Suspicious Memes and Past Battles

Posted on May 17, 2016

    When people organize to commit a crime, the legal term for this is a conspiracy. For example, if five people get together and plan a bank robbery with the intent to execute their plan, then each of them has committed the crime of “conspiracy to commit robbery”.

    If you suspect that a crime has taken place, and it’s the sort of crime that would’ve involved multiple people, then this is a “conspiracy theory”, particularly if those people would’ve had to be powerful and well-connected. Until recently, everyone seemed to agree that conspiracy theories were exclusively the creations of idiots and schizophrenics. One of the more common, well known conspiracy theories is literally that “the CIA is causing my command hallucinations”. So if someone were to claim, for example, that the 2000 election was swung by hacked voting machines in Volusia County, or that Michael Hastings was murdered by one of the top-ranking military officials he singled out for criticism in Rolling Stone? Then the anticipation is that this would be followed by some incoherent rambling about aliens, the social expectation is that this person is to be shunned, and the practical effect is that those things are hard to say, hard to hear, and unlikely to be taken seriously.

    Somehow, somewhere along the line, accusing governments and the powerful of crimes became associated with mental illness, disrespectability and raging idiocy. And when you pause and notice that, it is *incredibly creepy*. It’s certainly possible to imagine how such a meme could arise naturally, but it’s also a little too perfect — like it’s the radioactive waste of a propaganda war fought generations back.

    As it turns out, the last few generations did have propaganda wars, and we know what they were: the counterculture, the civil rights movement, McCarthyism. Each side of each memetic conflict left its mark on our culture, and some of those marks are things we’d better scrub away. So now I’m wondering: what were they?

    (Cross-posted to Facebook)

    Introducing conservative-pad 0.4

Posted on March 24, 2016


    There’s been some controversy lately in the Javascript world about micro-packages, version pinning, and the NPM package repository’s regulatory regime. For a serious take, I wrote a comment on the HN thread. There is a looming spectre threatening our entire software ecosystem, of which the deplorable decline of our dependency hygiene is but one example. The left has gone too far. That’s why I’m writing conservative-pad, a string-padding function free from the constraints of political correctness.

    Here’s the original:

module.exports = leftpad;

function leftpad (str, len, ch) {
  str = String(str);
  var i = -1;
  if (!ch && ch !== 0) ch = ' ';
  len = len - str.length;
  while (++i < len) {
    str = ch + str;
  }
  return str;
}

    This is terrible. You can expect that my replacement version will:

• Be O(n). The original version uses Shlemiel the Painter's algorithm, which is mumble slur mumble. (A sketch of the fix appears after this list.)
    • Be written in LISP, to protect purity. These immigrant Javascript programmers are driving down wages and destroying our precious modularity.
    • Make web development great again.
    • Remove the unnecessary 'ch' parameter, a ridiculous piece of political correctness. It isn't our fault that string-padding is dominated by ' '; it won fair and square, and we shouldn't make exceptions to meritocracy just because someone wants a string-padding character that isn't white.
    • Compile with -Wall, because otherwise the bugs will keep coming back over the border.
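For the record, here’s roughly what an O(n) version looks like in plain JavaScript; this is only a sketch, not the promised LISP rewrite. String.prototype.repeat builds the padding in one step instead of prepending one character per loop iteration.

// O(n) left-pad sketch: build the padding once with repeat() instead of
// repeatedly prepending (which can be quadratic with naive string handling).
function leftpadFast(str, len, ch) {
  str = String(str);
  if (!ch && ch !== 0) ch = ' ';
  var padLength = Math.max(0, len - str.length);
  return String(ch).repeat(padLength) + str;
}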

    Thank you.

    Truth Coming Out of Her Well

Posted on February 28, 2016

    There is a semi-famous picture used as a test image in computer graphics, the Lenna picture. It has an interesting history:

[Image: the Lenna test picture]

    Alexander Sawchuk estimates that it was in June or July of 1973 when he, then an assistant professor of electrical engineering at the USC Signal and Image Processing Institute (SIPI), along with a graduate student and the SIPI lab manager, was hurriedly searching the lab for a good image to scan for a colleague’s conference paper. They had tired of their stock of usual test images, dull stuff dating back to television standards work in the early 1960s. They wanted something glossy to ensure good output dynamic range, and they wanted a human face. Just then, somebody happened to walk in with a recent issue of Playboy.

    The engineers tore away the top third of the centerfold so they could wrap it around the drum of their Muirhead wirephoto scanner, which they had outfitted with analog-to-digital converters (one each for the red, green, and blue channels) and a Hewlett Packard 2100 minicomputer. The Muirhead had a fixed resolution of 100 lines per inch and the engineers wanted a 512 x 512 image, so they limited the scan to the top 5.12 inches of the picture, effectively cropping it at the subject’s shoulders.

    Yes, for strictly technical engineering reasons, they tastefully cropped Lenna to the shoulders. This reminded me of a more recently semi-famous picture, which suffers from similar aspect-ratio and other technical problems. And so, ladies, gentlemen and other persons, I present: Truth Coming Out of Her Well to Politely Clarify for Mankind.

[Image: Truth Coming Out of Her Well to Politely Clarify for Mankind]

    For use as a meme, this version has several significant advantages over the 1896 Jean-Léon Gérôme original. In particular:

    • It is square
    • The woman’s face is properly centered
    • There is a uniform area at the top to overlay text on

    Here are some examples of how you might use it:

  • “Nice headline” / “The actual study says the opposite”
  • “They didn’t do a Bonferroni correction”
  • “That sounds pretty damning” / “Unless you read the original context”

I claim no copyright over this image, and the original painting is old enough that its copyright has expired. So feel free to use this meme to ~~shame~~ politely clarify for mankind wherever you see fit!

    EDIT: Rob Bensinger suggests cropping slightly differently. This version gets a closer view, but adds graininess. Whether graininess is a technical failing reminiscent of 1970s image processing or an appropriate signal of meme-iness is up for debate.

[Image: the cropped version]