★★ God & Golem, Inc. — Norbert Wiener

2025/04/27

This is a very short book I read to get a sense of what Wiener is like before taking on one of his larger works. It certainly convinced me he is worth reading further.

The premise of the book, published in 1964, is to broadly survey the emerging technologies of the future, attitudes toward them, and some of their pitfalls. It explains what these technologies are, so it was likely written for a general audience.

The highlight for a reader now will be his discussion of the ‘learning machine’, what we now call artificial intelligence. His thoughts are extraordinarily grounded for the time. What I’ve seen of mid-to-late twentieth-century discourse on artificial intelligence is split between those who think we will make positronic brains and those who think such machines will hardly amount to any meaningful capability or importance. Wiener has no fantasy about the capabilities of a learning machine, and no aimless pondering on whether it can be conscious, when we hardly have an idea of what distinguishes our consciousness, if there is a distinction. He makes a good assessment of what problems these machines should and should not be used for, and of the trouble that could come from their indiscriminate use.

There is a part where he touches on ‘self-replicating machines’. Because of how he uses the word ‘machine’, I struggle to tell whether he means physical machines that self-replicate or biologically inspired computation. The former I don’t think exists, and probably never will; I also don’t think it would be useful. The latter was not yet a field. This is perhaps the only part I would leave out of the book, since it is not particularly interesting, useful, or tied to the rest of the book.

His third essay was good. Though it seemed directionless at first, it came together to a clear point by the end: an interesting idea about copying a system by learning its input/output behavior.

I’d like to quote some passages that I found the most interesting. Here, for context, he was discussing atomic warfare and how the concept of failsafes is not sufficient when dealing with such a massive yet ill-understood danger.

As engineering technique becomes more and more able to achieve human purposes, it must become more and more accustomed to formulate human purposes. In the past, a partial and inadequate view of human purpose has been relatively innocuous only because it has been accompanied by technical limitations that made it difficult for us to perform operations involving a careful evaluation of human purpose. This is only one of the many places where human impotence has hitherto shielded us from the full destructive impact of human folly.

Here is a passage that nicely puts a thought I have always had.

[Learning machines], help us they may, but at the cost of supreme demands upon our honesty and our intelligence. The world of the future will be an ever more demanding struggle against the limitations of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.