I Asked an Algorithm to Optimize My Life. Here's What Happened

The longer I thought about which of my options was most preferable, the more uncomfortable I felt. How could I possibly measure the excitement of the new café against the comfort of a nap or the relief of making progress on those nagging applications? It seemed that these outcomes were utterly incomparable. Any estimate of their values would invariably fall short. And yet, the very definitions of “optimal” and “preferable” required that I compare them. 

11:45 am

Before I knew it, I’d spent half an hour thinking about my options. Any metric I imagined for preferability was flawed. Decisions made using measurements are doomed to overvalue factors that can be measured: salary over fulfillment in careers, quantity over quality in friendships. Unfortunately, we owe the richest moments of being human to emotions we can’t measure accurately. At least not yet.

What’s more, the options I gave myself for each decision were far more complex than those a computer scientist would offer an agent. These are generally along the lines of “step left,” “turn on this motor,” or “sell this stock,” basic actions that offer a more general set of possibilities for what the agent can achieve. Imagine if instead of giving myself a limited list of ways to spend free time, I repeatedly picked a specific muscle to move—I could theoretically go anywhere or do anything by coming up with a sequence of discrete motions! The tradeoff is that most combinations of very basic actions would be useless, and figuring out which would be useful would be harder. I certainly wouldn’t have known how to make data-driven decisions about muscle movement. Some combinations of basic actions can also lead an agent to harm, which is fine in a computer simulation but not in real life. What if the random number gods assigned me muscle movements for doing the splits?
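The tradeoff above can be sketched as a simple counting exercise. This is only an illustration, not anything from the original piece: the option lists, action names, and functions below are all hypothetical. A short menu of high-level options is easy to choose from, while a space of primitive "muscle-level" actions composes into vastly more plans, almost all of them useless.

```python
import random

# A hand-picked menu of high-level options: easy to reason about,
# but it bounds what the agent can ever do.
menu = ["visit the new cafe", "take a nap", "work on applications"]

# A primitive action space: tiny moves that compose into anything,
# at the cost of a combinatorially huge search space.
primitive_actions = ["flex", "extend", "rotate", "hold"]

def num_plans(actions, horizon):
    """Count the distinct fixed-length sequences of actions."""
    return len(actions) ** horizon

def random_plan(actions, horizon, seed=0):
    """Sample one plan uniformly; most sampled plans are useless."""
    rng = random.Random(seed)
    return [rng.choice(actions) for _ in range(horizon)]

print(num_plans(menu, 1))                # 3 -- one pick from the menu
print(num_plans(primitive_actions, 10))  # 1048576 -- 4**10 sequences
print(random_plan(primitive_actions, 5))
```

Even a ten-step plan over four primitive actions yields more than a million candidate sequences, which is why computer scientists usually hand agents the expressive-but-unsearchable primitive space only when they also have a way to evaluate plans automatically.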

Overall, AI delivers “exactly what we ask for—for better or for worse,” in the words of Janelle Shane. My algorithm couldn’t pave the way to a perfect life if I didn’t have a clear vision of what that life ought to look like. Articulating what “optimal” means is also difficult when you apply AI to real problems. To encourage intelligent-looking behavior, sometimes “optimal” is defined as “hard to distinguish from human performance.” This has helped produce text-generation models whose writing sounds impressively human, but these models also learn human flaws and human prejudices. We are left wondering what it means to be optimally fair, safe, and helpful when we manage, care for, and interact with other people, concerns that have puzzled humanity since long before the advent of the computer.

Finally, lunchtime came. Once again, I could use the structure of the day to make decisions for me. 

2:00 pm

A deadline was creeping up on me. Starting my writing assignment and finishing it quickly would be the optimal use of my time. However, no matter what I tried, I remained a slow writer. 

In general, I believe that having more of certain things—namely health, time, money, and energy—is always preferable. But we can lose a lot when we optimize for these four goals. Beyond paying in one to obtain another, there are compelling arguments that fixating on optimization can make people less connected to reality and unduly obsessed with control.
