Give Every AI a Soul—or Else

I see two possible solutions. First, establish ID on a blockchain ledger. That is very much the modern, with-it approach, and it does seem secure in theory. Only that’s the rub. It seems secure according to our present set of human-parsed theories. Theories that AI entities might surpass to a degree that leaves us cluelessly floundering.

Another solution: A version of “registration” that’s inherently harder to fool would require AI entities with capabilities above a certain level to have their trust-ID or individuation be anchored in physical reality. I envision—and note: I am a physicist by training, not a cyberneticist—an agreement that all higher-level AI entities who seek trust should maintain a Soul Kernel (SK) in a specific piece of hardware memory, within what we quaintly used to call a particular “computer.”

Yes, I know it seems old-fashioned to demand that instantiation of a program be restricted to a specific locale. And so, I am not doing that! Indeed, a vast portion, even a great majority, of a cyber entity’s operations may take place in far-dispersed locations of work or play, just as a human being’s attention may not be aimed within their own organic brain, but at a distant hand, or tool. So? The purpose of a program’s Soul Kernel is similar to the driver’s license in your wallet. It can be interrogated in order to prove that you are you.

Likewise, a physically verified and vouched-for SK can be pinged by clients, customers, or rival AIs to verify that a specific process is being performed by a valid, trusted, and individuated entity. With that ping verification from a permanently allocated computer site, others (people or AIs) would get reassurance they might hold that entity accountable, should it be accused or indicted or convicted of bad activity. And thus, malefactor entities might be adversarially held responsible via some form of due process.
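The essay leaves the ping mechanism unspecified. Purely as a rough illustration, here is a minimal challenge-response sketch of the idea, with every name (`SoulKernel`, `TrustRegistry`, the entity ID) hypothetical. A real Soul Kernel would presumably use asymmetric keys inside tamper-resistant hardware; this toy uses a shared symmetric secret so the example stays short and runnable.

```python
import hmac, hashlib, os

class TrustRegistry:
    """Hypothetical registry that vouches for enrolled kernels.
    (Symmetric-key simplification: it holds a copy of each secret.)"""
    def __init__(self):
        self._secrets = {}

    def enroll(self, entity_id: str, secret: bytes) -> None:
        # One-time, physically verified enrollment of a Soul Kernel.
        self._secrets[entity_id] = secret

    def verify(self, entity_id: str, nonce: bytes, response: bytes) -> bool:
        # Recompute the expected answer and compare in constant time.
        expected = hmac.new(self._secrets[entity_id], nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class SoulKernel:
    """Toy Soul Kernel: a secret anchored to one piece of hardware
    (here just an in-process variable standing in for a secure chip)."""
    def __init__(self, registry: TrustRegistry, entity_id: str):
        secret = os.urandom(32)
        self._secret = secret              # never leaves the "chip"
        registry.enroll(entity_id, secret)

    def answer_ping(self, nonce: bytes) -> bytes:
        # Prove possession of the anchored secret without revealing it.
        return hmac.new(self._secret, nonce, hashlib.sha256).digest()

# A client "pings" an entity to check it is the individuated AI it claims to be.
registry = TrustRegistry()
kernel = SoulKernel(registry, "ai-entity-42")
nonce = os.urandom(16)                     # fresh challenge defeats replay
ok = registry.verify("ai-entity-42", nonce, kernel.answer_ping(nonce))
print(ok)  # True for the genuine kernel; a forged response fails
```

The fresh nonce matters: without it, an impostor could simply replay an old recorded answer, which is exactly the kind of spoofing the physically anchored SK is meant to rule out.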

What form of due process? Jeez, do you think I am some hyper-being who is capable of applying scales of justice to gods? The greatest wisdom I ever heard was uttered by Dirty Harry in Magnum Force: “A man’s got to know his limitations.” So no, I won’t define the courtroom or cop procedures for cybernetic immortals.

What I do aim for is an arena, within which AI entities might hold each other accountable, separately, as rivals, the way that human lawyers already do, today. And yes, answering Yuval Harari’s dread of mass human-manipulation by persuasive gollems, the solution for AI-driven mass meme-hypnosis is for the mesmerizers to be detected, denounced, and neutralized by others with the same skills. Again, competitive individuation at least offers a chance this could happen.

Whichever approach seems more feasible—Huntington’s proposed central agency or a looser, adversarially accountable arena—the need grows more urgent by the day. As tech writer Pat Scannell has pointed out, with each passing hour new attack vectors are created that threaten not only the technology used for legal identity but also the governance, business processes, and end users (be they human or bots).