Saturday, January 7, 2017

- So Let's Discuss AI

I haven’t said much about this because I don’t find it so interesting. But RadioDerb this week mentioned AI, and this recent post has it on my mind. I also left a comment over at Steve Sailer’s blog on the topic, so I thought it might be a good idea to discuss it just a little.

My experience in working with AI is all practical. I’ve never worked in a lab where the design goal was to beat the Turing test. In fact, I think the effort to do so is mostly a waste of time. Humans individually are all at times irrational, unpredictable, and subject to error, even the most logical of us. I like to think of myself as deeply driven by logic, but give me good reason to think you’ve put my daughter at serious risk, and see how ‘rational’ I behave.

So the idea of making computers like ‘people’ is, I think, silly. It’s much better to strive to make the tasks that otherwise clutter up our lives more rational and optimized than we could manage ourselves. This is the area where my personal expertise lies.

I’ve done a lot of actual work in computerized decision making. A big chunk of my professional life revolved around the care and feeding of a program trading system that read the news off the business news wires at Reuters and Bloomberg, and traded the market from the information it learned there. I originally designed it in 2006, when natural language parsing was in vogue, but I had been working on its actual ‘thinking’ for years before that. The only difference was that back then, instead of feeding its answers to a computerized trading system, it provided them to actual human ‘portfolio managers’ who would look at the results and make a ‘yes’ or ‘no’ decision.
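
Without giving away anything proprietary, the general shape of that kind of system is easy to sketch. This is a toy illustration only; the keywords, weights, thresholds, and company name below are invented and have nothing to do with the real thing:

```python
# Toy sketch of a news-driven signal: score a headline against weighted
# keyword lists, then turn the score into a buy/sell/pass recommendation.
# All keywords, weights, and thresholds are invented for illustration.

POSITIVE = {"beats estimates": 2.0, "raises guidance": 3.0, "record revenue": 2.5}
NEGATIVE = {"misses estimates": -2.0, "cuts guidance": -3.0, "investigation": -2.5}

def score_headline(headline: str) -> float:
    text = headline.lower()
    return sum(w for phrase, w in {**POSITIVE, **NEGATIVE}.items() if phrase in text)

def recommend(headline: str, threshold: float = 2.0) -> str:
    s = score_headline(headline)
    if s >= threshold:
        return "BUY"
    if s <= -threshold:
        return "SELL"
    return "PASS"  # in the early, human-in-the-loop days, this is where a PM decided

print(recommend("Acme Corp beats estimates, posts record revenue"))  # BUY
print(recommend("Acme Corp misses estimates, cuts guidance"))        # SELL
```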

The key to all those computerized decisions is the statistical distribution. In my experience, virtually any (business) decision can be modeled as a series of multi-dimensional distributions. And since that’s so, with enough data supplied (and the right data both identified and analyzed), virtually any repeated decision can be made by a computer with accuracy equal to, and in many cases superior to, a decision made by a human.
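
A minimal sketch of what I mean, with made-up numbers: model each historical decision (‘buy’ vs. ‘pass’, in this invented example) as its own multi-dimensional distribution, then let a new case go to whichever distribution makes it most likely.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Historical decisions, each described by two dimensions:
# (earnings surprise in %, realized volatility). All numbers are made up.
history = {
    "buy":  np.array([[4.0, 0.15], [6.0, 0.20], [5.0, 0.10], [7.0, 0.25]]),
    "pass": np.array([[0.5, 0.45], [-1.0, 0.50], [1.0, 0.60], [0.0, 0.40]]),
}

# One multi-dimensional distribution per decision, fitted from the data.
models = {
    decision: multivariate_normal(mean=rows.mean(axis=0),
                                  cov=np.cov(rows, rowvar=False))
    for decision, rows in history.items()
}

def decide(case):
    """Pick the decision whose historical distribution makes this case most likely."""
    return max(models, key=lambda d: models[d].pdf(case))

print(decide([5.5, 0.18]))  # buy
print(decide([0.2, 0.55]))  # pass
```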

And it’s not just Wall Street. Most of what we do in business doesn’t involve the kind of individual ‘expertise’ in decision making that we think it does. Take 10 people doing the same job at 10 different companies. They all have the same constraints and the same priorities, so in the end, they will all be doing roughly the same thing. I recently designed a system for a shipping company which predicted the delivery appointments it would be allotted by its customers. That system sampled the past behavior of several hundred people, working at an equal number of companies that all do the same basic thing, and predicted likely outcomes using that data and a few other proprietary company parameters.

When they turned it on, it successfully predicted outcomes with 90% accuracy. 90%! That let the company anticipate the behavior of its customers and make changes to its own schedule BEFORE it occurred to the customers to ask. And the 10% of cases where it could make no prediction just left the company dealing with things exactly as it had before. Only now, instead of overtime and extra work to correct (potentially) all of their shipping, they only corrected 10% of it. That saved the company millions per year.
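
The mechanics behind that kind of prediction are nothing exotic either. A rough sketch of the general idea, with invented customers, appointment windows, and confidence threshold (the real system also folded in proprietary company parameters):

```python
from collections import Counter, defaultdict

# Past behavior: (customer, day_of_week) -> the appointment window they actually
# granted. The records, customers, and windows below are invented for illustration.
history = [
    ("AcmeGrocer", "Mon", "06:00-08:00"),
    ("AcmeGrocer", "Mon", "06:00-08:00"),
    ("AcmeGrocer", "Mon", "10:00-12:00"),
    ("BigBoxCo",   "Fri", "14:00-16:00"),
    ("BigBoxCo",   "Fri", "14:00-16:00"),
    ("BigBoxCo",   "Fri", "14:00-16:00"),
]

counts = defaultdict(Counter)
for customer, day, window in history:
    counts[(customer, day)][window] += 1

def predict(customer, day, min_confidence=0.75):
    """Return the likely appointment window, or None when the data isn't decisive.
    The None cases are simply handled the old way, exactly as before the system."""
    seen = counts.get((customer, day))
    if not seen:
        return None
    window, n = seen.most_common(1)[0]
    return window if n / sum(seen.values()) >= min_confidence else None

print(predict("BigBoxCo", "Fri"))    # '14:00-16:00' -> adjust the schedule in advance
print(predict("AcmeGrocer", "Mon"))  # None (only 2 of 3) -> handle it as before
```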

That isn’t very high-tech or sophisticated, and it’s the kind of thing that virtually all forward-looking companies are doing now. All it really takes is the business skill to determine which data to look at, and a good data analyst who knows how to look at it. The rest of the work, the computer does.

And therein lies the problem with AI becoming “like people”. Computers and models can only be good at arriving at “the correct” answer. And outside of business decisions, where “the correct” answer is whatever generates the most revenue and least cost at a known and managed risk, what the “correct” answer is for you depends entirely upon how you see the world.

Do you imagine that it will ever be possible to build a computer AI that validates the progressive narrative? All the things which allow a person to see that particular narrative as ‘true’ are irrational, and therefore not possible without specifically teaching the computer to be “wrong” sometimes. (Just ask Facebook.) And if it’s wrong about those, then is it wrong about other things? Isn’t that how the villain computer system from the “Terminator” series of films made its tragic error and set out to eliminate all humans?

Microsoft saw this when its “teen girl” AI “became a Nazi” through Q&A manipulation. The problem wasn’t that the machine was too stupid to see the truth; the problem was that it was given a faulty process for determining the truth from the input data. It was essentially told to do a stupid thing. Its Nazi transformation was not an actual error as far as it was concerned. But its processes were too poorly defined by its crafter to derive all its responses from the input data alone.

And it isn’t just the processes of the system that are subject to that kind of error. It can be a question of the system’s goals as well. What I mean to say is that there is a monstrous gap between truth and “human truth”. And in order to make a computer ‘think’ like a human, it must be imparted with the same kinds of human limitations that make us individuals. If this is ever done successfully, then it won’t much matter because the system too will therefore be ‘limited’.

Let’s look at it from a slightly different direction. And since we know that Progressive thinking is rife with motives and processes that wither quickly under public scrutiny, let’s just use two examples from guys in the Alt-Right.

Vox is a devoted Christian, and our man Derb is a non-believer. I’d call both of them ‘clear thinkers’, and they both have perfectly good justifications (in my mind) for the beliefs they hold. Both are in agreement about many things, but I would suspect it’s for very different reasons.

So if you can make a computer that’s “smart enough” to act like one of them, it will by definition not be smart enough to act like the other without core changes. As a not particularly controversial example of this gap, the Bible pretty clearly says “Thou shalt not kill”, but if you read on, it’s more ambiguous. There are many circumstances where Christian theology allows killing. The example of killing the terrorist who is about to set off a bomb under a school bus full of children is a pedestrian one.

You could teach it all those subtle rules, but if you do, the various weights they’re given will in effect define the system’s ‘unique’ character, and any change to those rules will produce potentially dramatic changes to the answers you get on a variety of unrelated questions. Would it be “right” according to Vox or according to Derb? In my simplistic example it may be possible for it to be “right” for both, but for different reasons, just like the men. But wouldn’t the AI then be “just being itself”? Would that same answer also be “right” for a progressive? How about for a Jihadi or a strict Buddhist?
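
To make that concrete, here is a toy weighted-rule ‘judge’. The principles, weights, and questions are all invented, and neither weight profile represents anyone’s actual views; the point is only that nudging a couple of weights quietly shifts the verdicts on questions you weren’t even asking about.

```python
# Toy weighted-rule judge: each 'principle' carries a weight, and a question is
# scored by summing the weights of the principles it touches. Everything here
# is invented for illustration; no profile represents any real person's views.

def verdict(weights, tags, threshold=0.0):
    score = sum(weights.get(tag, 0.0) for tag in tags)
    return "permissible" if score >= threshold else "wrong"

questions = {
    "kill the bomber under the school bus": ["defense_of_innocents", "taking_a_life"],
    "lie to protect a fugitive":            ["defense_of_innocents", "honesty"],
    "break an oath under duress":           ["honesty", "self_preservation"],
}

profile_a = {"taking_a_life": -2.0, "defense_of_innocents": 3.0,
             "honesty": -1.0, "self_preservation": 0.5}
profile_b = {"taking_a_life": -1.0, "defense_of_innocents": 2.5,
             "honesty": -2.0, "self_preservation": 2.0}

for name, weights in [("A", profile_a), ("B", profile_b)]:
    print(name, {q: verdict(weights, tags) for q, tags in questions.items()})
# Both profiles call the bomber case permissible, but via different weights;
# tweak 'honesty' or 'self_preservation' and the oath answer flips.
```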

You could also teach the computer to change its mind about goals depending on whether it’s talking to Vox or Derb (or a Buddhist), but if you do that, you’ve limited it again. Take the “true believer” path and you’ve limited the system. Take the total-skeptic path and you’ve limited it again. Teach it to take one path sometimes and another path at other times, and you’ve still limited it. You’ve taught the system to be ‘wrong’ under certain unique circumstances, and it therefore can never be trusted to tell you anything that is ‘right’, any more than a human could.

Take that example even one tiny layer of subtlety further and you get irreconcilable problems very quickly. Take the example of sin. Do you consider it wrong? Depends on the sin and the person doing the considering. Does the actual act in question rise to the level of a sin? That depends on a lot of things too. Can you make a computer “smart” enough to make all those judgments about all those things and arrive at the “right” answer? The short answer is “of course!” But right according to who?

Computers are rules-based. The rules can be subtle, and thoughtful. The system can even rewrite them as it goes along. But if this is ever done with the kind of sophistication that allows it to become a ‘singularity’, then it will be done by imparting a ‘direct path’ for some of these core questions: a way for the system to break its own rules. That ‘direct path’ will in effect become its ‘ego’. Its humanity. Its place where errors are made, and still called ‘correct’, at least by that particular system. And who would EVER consider placing more authority in a system that is as prone to error as any human than we would place in a human itself?

The progs will say that human society is like a computer, so they will probably think it’s a great idea regardless. They already adore the idea of investing individuals with great power and authority. But when they do it for an AI, they’ll be forgetting about Robespierre and the Reign of Terror (like they usually do). So they’ll probably be the first to regret it. And as much as the idea of women’s/ethnic studies professors running and screaming while terminator style robots light into them with machine guns may appeal to you, I don’t see it happening.

I’m not afraid of AI. Will it put people out of work? It already has. Lots. It will do so again if I have anything to say about it. That’s one of the tools I use to make my way in the world. But do we have to fear AI? I don’t think so, no, at least not in the sci-fi sense. And that’s because we can’t make it like us without making it “like us”.

And if we do, all of the varied human societies already have plenty of tools for limiting the power of an individual if it comes to it. Even Caesar got the knife in the end, and Xerxes had Leonidas and his boys. Sic semper tyrannis, as they say. And when the time comes, there will be more than enough people around who are willing and able to cut the AI’s cord. But I doubt it will even come to that, because if it really does act like us, then no one will ever trust it to be any ‘better’ than us.

One final word. I don't pretend that these ideas are anything very original. I'm sure it's all been said in AI debates before. And that lack of original thinking on my part is why I don't much bother with the topic. In the end I believe that an AI won't have power over us unless we give it to it. And the more it acts like us, the less we'll trust it, and the less likely we are to give it the power that everyone seems afraid it will get.

2 comments:

Blegoo said...

404 “teen Girl” AI “became a Nazi” link.

Muzzlethemuz said...

Yes, herein lies the problem:

"...the problem was that it was given a faulty process for determining the truth from the input data."

I agree, as the input data is frequently incorrect, misleading or interpreted based on contemporary mores and inputs as I believe it is here: "the bible pretty clearly says “Thou shalt not kill”, but if you read on, it’s more ambiguous."

I would add only that the Bible, or more accurately perhaps, the Torah, in its commandments says specifically in the Hebrew: לא תרצח - "Do not murder."

The definitions of murder and killing are delineated throughout the scripture, both in the 5 books, the Prophets and the Writings i.e. the תנ״ך "Tanakh". There is no mistaking the two and they are elucidated on through commandment, literary example, metaphor, allegory and rabbinical decree both in the written and oral traditions of the Jewish people.

Christians and non-believers alike have done themselves a disservice by not picking up the Talmud, which goes into detail re the do's and don'ts of killing, murdering, etc., famously elaborated on in the creed of the Mossad and/or Shin Bet, "He that rises to kill you, kill him first..." Not only is it immoral to kill wantonly, it is immoral not to kill in defense, i.e. self-defense is a moral imperative, in the eyes of God at least. NY and NJ might take notice.

There is a famous Michelangelo statue of Moses coming off of Mt. Sinai with horns coming out of his head. The debate rages over a mistranslation of the Hebrew into the Latin Vulgate re "horns" vs. "rays" of light.

Complete information + thorough processing tends towards optimized outcomes, naturally or artificially.