Luke Muehlhauser, of (but not on behalf of) Open Philanthropy, listed 12 tentative ideas for US AI policy. His attention is drawn to the topic in large part because of relatively widespread concern about existential risk: the chance that AI could literally cause the end of human society.
I want to draw attention to his third idea: “Track stocks and flows of cutting-edge chips, and license big clusters,” and suggest that this is more practical and effective than it might first sound.
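To make the idea concrete, tracking stocks and flows amounts to keeping a ledger of who holds how many cutting-edge chips, updated as shipments occur, with a licensing trigger once a holder's stock crosses some threshold. Here is a minimal sketch of that bookkeeping; the holder names and the threshold are illustrative assumptions, not part of any actual regulatory proposal.

```python
from collections import defaultdict

# Assumed cutoff for what counts as a "big cluster" -- purely illustrative.
LICENSE_THRESHOLD = 1000  # chips

class ChipRegistry:
    """Hypothetical stock-and-flow ledger for cutting-edge chips."""

    def __init__(self):
        self.stocks = defaultdict(int)  # holder -> chip count

    def record_flow(self, source, dest, count):
        """Record a shipment of `count` chips from source to dest."""
        self.stocks[source] -= count
        self.stocks[dest] += count

    def holders_needing_license(self):
        """Return holders whose stock exceeds the licensing threshold."""
        return [h for h, n in self.stocks.items() if n > LICENSE_THRESHOLD]

registry = ChipRegistry()
registry.record_flow("fab", "cloud-operator-A", 5000)
registry.record_flow("fab", "university-lab", 200)
print(registry.holders_needing_license())  # -> ['cloud-operator-A']
```

The point of the sketch is only that the bookkeeping itself is simple; the hard parts are reporting requirements and verification, which no data structure solves.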
Think about nuclear weapons. The hardest part of building a nuclear weapon is obtaining the special nuclear materials. There are only a limited number of sources of these materials, so controlling the supply provides a lot of leverage for preventing nuclear proliferation. This approach would not work as well for preventing chemical or biological weapons of mass destruction, but for nuclear weapons it is valuable.
Some classes of AI work require a great deal of computation, which is hard to do without specific hardware. There are barely a couple of dozen manufacturers of such chips, which is not terribly out of line with the number of producers of isotope separation hardware. (And tiny compared to the world’s 15,000 uranium mines.)
There are many bad things that AI can do (or, more properly, that can be done with AI) even without massive hardware. Still, some risks seem controllable in this manner.
Would we get further in thinking about mitigating AI risks if we started to think carefully about the individual types of risks rather than talking about them as a single amorphous blob? This is, of course, not a new idea; the IEEE 7000 series recognized more than a decade ago that the issues of using AI for credit ratings were rather different from those of autonomous weapon systems. Perhaps it is time for such distinctions to influence the public discourse.