AI’s Future Isn’t Fixed: Why Foundations Matter More Than Features

We were told it would be convenient, and… mostly, it has been. A proverbial “wolf in sheep’s clothing,” if you will. We trusted, or maybe we didn’t, but we leveraged the tools for convenience anyway.
Sign in with Google.
Your memories from this day.
An endless feed of content, curated just for us.

On the surface it feels harmless, maybe even sweet and caring. (The number one reason I share stories on social media is that I love getting “on this day” memories. It’s like having a personal archivist jotting down your story so you don’t have to.)

But underneath the convenience is the same hunger of that wolf. Jowls dripping with algorithmically slick saliva. And what it’s hungry for is us.

(We are the lambs. Chomp chomp.)

The danger isn’t just that AI hallucinates (although… also not great). The danger is that it continues to evolve in a way that benefits the machine of global, opportunistic extraction.

Let’s go back in time to see what lessons the past holds, shall we? The year: 1983. The date: January 1. Not sure about you, but I was just a twinkle in my mother’s eye, not set to be born for another two years.

The internet was imagined as open when it soft-launched on January 1, 1983. But over the years, it has been surveilled and sold back to us. What started as an unregulated playground has become a privatized profit machine.

Like crawfish dropped alive into a rolling boil, we don’t always recognize the violence until it’s too late. At first it looks like celebration, community, even sweetness—the pot steaming while friends gather round. But underneath, something is being cooked. And this time… it’s us.

I know, it sounds doom and gloom; I’m not doom and gloom. But I am a Scorpio moon. And that means I speak to the shadow side, too. If you’ve felt uneasy about how much access these tools gain through our use, or noticed the way convenience is edging into control… you’re not alone. Part of why I’m writing is to give us language for that unease, context for why it matters, and a way to think about what else could be possible.

We had no idea what was going to happen when the internet was given to us. With that 20/20 hindsight in mind, let’s come back to the present: 

Anyone telling you they know what AI will become, with certainty, is lying.

We call it “artificial intelligence,” but the name oversells it. It isn’t intelligent at all; it isn’t sentient or all-knowing. At the end of the day, it’s predictive modeling. Even Sam Altman admits we don’t fully understand how it works, only that it does. The math running it doesn’t care who we are, how we’re using it, or what we’ll get out of it (or what it will get out of us). But the systems it’s built inside do. And those systems are not neutral. They are extractive.

The people building AI don’t care about what matters most (in my humble opinion): privacy, consent, sovereignty, or human values.

You can feel it when the tool irons your wrinkled words into something generic and… wrong.

You can feel it when you anonymize data before dropping it into a model because you don’t trust where it will end up. 

You can feel it when you lean on AI as memory in postpartum fog, and realize you’ve handed over more of yourself than you meant to, without any way to call it back. 

The foundation “they’re” building AI on is not for humanity. It is for extraction, surveillance, and privatization—under the guise of convenience.

Shoshana Zuboff named it years ago in her book The Age of Surveillance Capitalism: “We are no longer the subjects of value realization … we are the objects from which raw materials are extracted … Predictions about our behavior are sold into behavioral futures markets.”

In other words: we’ve stopped being the customers. We’ve become the raw material. Our lives are mined, packaged, and sold as predictions about what we’ll do next. AI didn’t invent that model, but it’s certainly got the potential to carry it further, faster, deeper.

And please feel free to call me naive, but I believe we still have time to adjust. 

I like AI. In many ways, it has changed my life. But change is also neutral—it cuts both ways.

We feel that opportunity for change, and the risks that come with it, too. 

Recent studies give weight to our feelings: Gallup found only 1 in 3 Americans trust AI to make fair decisions. Three-quarters of Americans want stronger regulation. And a German study found that an AI model advised a man to ask for $400K while advising a woman with the same résumé to ask for $280K. Again: the math may be neutral, but the system is not.

This data doesn’t tell me, “well, people are just paranoid.” It tells me that people are smart, and they’ve seen this story before—we’re feeling the heat before the pot reaches a full boil.

We already feel the extraction, even if we don’t have the words for it yet. We feel it in the labor market, in the gaps of trust, in that creepy-crawly tingling up the spine whispering: this system wasn’t built with you in mind.

This isn’t working.

Because once foundations harden, surveillance, data ownership (or the lack thereof), and systemic bias aren’t just features of AI; they become the infrastructure. It’s what every convenience rests on. Every autofill. Every autocomplete. Every “personalized” feed.

And yet we keep accepting the bargain, because what else is there? Retype your story for the hundredth time, or let the tool remember it for you and own it, too. Both choices feel like a bad deal.

The real frontier isn’t faster output. It isn’t bigger models. It isn’t even “smarter” machines. The real frontier is whether we can imagine technology that doesn’t strip us down into raw material. Whether we can plant new foundations: continuity without surveillance, consent that actually travels with us, memory we own.

Until then, every “helpful” feature is just another wolf grabbing a fluffy woolen costume from the heap.

That’s the value of naming all this out loud. It gives us language for what’s been gnawing at us, context for why it keeps happening, and a place to practice imagining something different together.

Critique is only half the work. The other half is action: testing, listening, and imagining together. 

xo,
Brittany
