A brilliant essay in The New Yorker on A.I. and intellectual debt, the output of an often-employed approach to discovery: answers first, explanations later. I was aware of, but had never thought through in detail, the implications of letting independent, isolated machine-learning models interact freely. This example is indeed eye-opening.
In 2011, a biologist named Michael Eisen found out, from one of his students, that the least-expensive copy of an otherwise unremarkable used book—“The Making of a Fly: The Genetics of Animal Design”—was available on Amazon for $1.7 million, plus $3.99 shipping. The second-cheapest copy cost $2.1 million. The respective sellers were well established, with thousands of positive reviews between them. When Eisen visited the book’s Amazon page several days in a row, he discovered that the prices were increasing continually, in a regular pattern. Seller A’s price was consistently 99.83 per cent that of Seller B; Seller B’s price was reset, every day, to 127.059 per cent of Seller A’s. Eisen surmised that Seller A had a copy of the book, and was seeking to undercut the next-cheapest price. Seller B, meanwhile, didn’t have a copy, and so priced the book higher; if someone purchased it, B could order it, on that customer’s behalf, from A.
Each seller’s presumed strategy was rational. It was the interaction of their algorithms that produced irrational results. The interaction of thousands of machine-learning systems in the wild promises to be much more unpredictable.
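The feedback loop Eisen described can be sketched in a few lines. This is a hypothetical reconstruction, not the sellers' actual software: the two ratios come from the article, while the starting price and the order of the daily updates are assumptions for illustration.

```python
# Presumed repricing rules (ratios from Eisen's observation).
A_RATIO = 0.9983    # Seller A undercuts: 99.83% of Seller B's price
B_RATIO = 1.27059   # Seller B marks up: 127.059% of Seller A's price

def simulate(start_price, days):
    """Run the daily repricing loop; return (price_a, price_b).

    The update order within a day is an assumption; either order
    produces the same compounding behavior.
    """
    price_a = price_b = start_price
    for _ in range(days):
        price_b = B_RATIO * price_a   # B reprices against A
        price_a = A_RATIO * price_b   # A undercuts the new B
    return price_a, price_b

price_a, price_b = simulate(35.0, 45)
```

Each round multiplies Seller A's price by 0.9983 × 1.27059 ≈ 1.268, roughly 27 per cent compound growth per day, so a book starting at an assumed $35 crosses the million-dollar mark in about a month and a half. Neither rule is irrational on its own; the runaway price is purely a product of their interaction.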
Yep, indeed. Maybe such interactions need more control? Maybe we have one more thing about A.I. to worry about?
Much of the timely criticism of artificial intelligence has rightly focussed on the ways in which it can go wrong: it can create or replicate bias; it can make mistakes; it can be put to evil ends. We should also worry, though, about what will happen when A.I. gets it right.