The lines between vision and execution are blurring thanks to technology, particularly generative artificial intelligence (‘GenAI’). Just about a month and a half ago, OpenAI released a new and powerful iteration of their product, ChatGPT: a multimodal model capable of disrupting almost everything you know about the visual arts. While it wasn’t the first of its kind, it certainly cements the trend even further.
Anything and everything that touches data (code, written text, pixels, images) will be affected, potentially decimated, by AI. This is not paranoia talking. This is very, very real. This “hype” or “bubble” is as real as it gets, and it will only get better; all evidence points to that. If you’ve been following this topic (and those adjacent to AI), you might be aware that this is neither an uncommon view nor a minority one.
There is no question about it. Pandora’s box is open. The genie is out of the bottle, and no, we are not ready for it. Probably. Most likely.
I am not a purist by any means, not on this, not on technology in general. My personal interests lie at the intersection of too many things; I care about too many things. It’s absurd, even. However, sometimes, like today, like in the last few years, it all gets tested. My allegiance as a human is tested, and is continuously being tested. It is exhausting to continuously seek morals and real goodness in a world that is perpetually in shortage of them. I feel strongly that too many things are wrong with the kind of world we are living in now. Incredibly powerful tools like AI can only make them worse. That being said, I don’t believe that we, as consumers, users and makers, are completely powerless in the face of this big change. It’s easy to think we are. (The dire state of the job market isn’t helping, either.)
I’m not writing this to give you my own optimistic view of the future; this goes far beyond that kind of black-and-white thinking. Rather, my hopes are much simpler than that. I challenge you to care. Whatever this thing is, it’s certainly not going away. While regulations, policies and political interventions can slow it down, they are not a long-term solution. They can deter the inevitable scientific innovations, but they are unlikely to stop them; that is the nature of these things. That also means the consequences of such events, as majestic and truly breathtaking as some of them may be, are also fated to happen. When that time comes, we’ll look back at this era and likely say that this, TODAY, felt like the turning point. Or something to that effect.
No matter what you feel about this, or which side of the spectrum you are on, one thing is crystal clear: much of our future will be shaped by this technology. It will shape the way we live, and work, and do pretty much everything else that matters. I can’t help but feel that we are living in an era of unforeseeable change, a seismic shift in the way we do all things, possibly forever.
The biggest question on my mind is this: knowing what we know now about AI and the shifts and trends that might come, what can we do with that information in a way that will benefit our collective futures? I ask this as a designer, a creative knowledge worker, a parent and a human.
There are probably a few good answers to that question. Perhaps I’m being a bit frank here when I say that the only wrong one is to ignore AI. “Choosing to ignore AI because it is scary” is not only a bad strategy; it is also a nihilistic one. We can opt not to use any of it, especially for the dreary marketing use cases that are as unremarkable as they are gimmicky¹. We just can’t ignore it and pretend it doesn’t exist.
To me, this is a very personal topic. It’s one of those big existential topics that are extremely difficult to manage and come to terms with. That’s true in reality, as it is in fiction. One of the best portrayals² on the small screen would have to be from Mad Men, a show about creativity and commerce and history and humans. Although it wasn’t a central theme of the show, the anxieties about early-stage technologies, and how the characters dealt with them, were scene stealers. I mean, who could forget Michael Ginsberg’s breakdown? It’s one of those scenes that lingered long after the show ended³.

The show honored human creativity at its absolute best in 20th-century America. It’s a cultural masterpiece, really. And it is quite telling that, even in the smallest of ways, technological fears found a way of creeping in, and getting under the skin of its most creative characters.
And now, they are getting under ours in similar fashion, at least under mine. Writing about it seems to help. So does being exposed to a small piece of it, in application, at least.
Fear is funny in that way. When you put a face to it, it becomes less mysterious. It becomes less abstract.
It becomes less… scary.
There is a place for fear, especially with things like these. But if it is the main thing occupying our thoughts about the future, it starts to become poisonous. Even a small dose of poison can seriously damage whatever possibility of a prosperous future we might have. Strategic thinking about AI and how it might affect us starts inward. It’s about time we recognize it, and learn to live with it, in the best way we know how.
I want to say many happy returns on your AI journey. May we all find our place in a world that is transformed by it. And in a future that is possible because of it.
Thank you for reading working title,
Nikki
1. Please, have some self-respect.
2. Technological anxiety was a heavy and prevalent theme in the latter part of the show. The height of it came in one of the last episodes, “The Monolith”: https://madmen.fandom.com/wiki/The_Monolith
3. 10 years ago today.