#81: I have some questions about AI.
On the IPs of AI-driven creative work, economics and that Helen Toner interview
Dear friends,
Welcome, friends, new and old subscribers, to my 81st post. As a fair warning, this is what I would consider a rabbit-hole entry. I went pretty wide with the scope of my writing today. If this is your thing, I would appreciate it if you could stick around. I think it’ll capture your curiosity at the very least.
We are halfway through the year, and boy is it going by fast. The pace of progress in technology is dizzying. If you work in tech, or closely with it, you can probably feel the burn already, like everyone else.
I recently attended a fantastic conference called ‘Designing with AI 2024’, produced by Rosenfeld Media. The overarching theme is a dead giveaway: Artificial Intelligence and what it means for Design (and the world in general). While it gave me a lot to think about, I still haven’t had much clarity on where exactly we are headed. From where I stand, I’m prepared to change my mind as many times as necessary just to arrive at the right kind of insight on this topic.
There are a lot of Lego pieces on the floor. Some are already pre-built, and you can totally see the end product just by looking at them. Others look a bit shapeless and oddly formed; it requires some heavy lifting from the imagination to get them to where they need to be. The rest are just scattered endlessly across the floor. They will serve their purpose later, when the main pieces are done.
The main pieces come first.
There are a lot of angles being discussed and argued all around. In my opinion, it really comes down to these three visions:
Artificial Intelligence as an augmentation tool for human capacity, intellect and creativity. There is already some research work being done to support this.
Artificial Intelligence as the last frontier, “humanity’s last invention”, and ultimately the cause of its demise, per Stephen Hawking himself. He said this almost a decade before ChatGPT was released to the public. This is the more common public perception of AI, with valid reasons, obviously.
A hybrid of both, with consequences (good and bad) being unevenly distributed across many different dimensions:
Location: some countries will reap the benefits of AI while others will be left far behind
Industries: AI can only dominate in select industries
Social class: Who will use and control AI? And who won’t?
Education: Access to AI literacy will heavily depend on education
Culture: Culture will play a significant role in AI adoption; in fact, I suspect this will be a huge barrier in some areas of the world. America, home to the world’s top AI research labs and companies, won’t have this problem, though.
Regulations and the law: Listen to Helen Toner1’s interviews and talks on this (here and here)
From the interview on The TED AI Show, on one of the reasons why AI is hard to regulate: “…it’s a moving target. So what the technology can do is different now than it was even two years ago, let alone five years ago, 10 years ago. And policymakers are not good at sort of agile policymaking2. They’re not like software developers.” - Helen Toner
Economics and funding:
How would AI alter and transform the world economy, possibly forever?
Obviously, there are a lot of incentives tied to the first vision; however, this is also true of the last two. Right now, we are somewhere in the middle, depending on which sources you believe. Design, as an industry, craft and profession, is struggling with that. It’s too easy to get caught in the current and lose your way, especially if you read too much news about these things. Adobe’s recent LLM- and data-related policy fiascos aren’t helping either. More recently, OpenAI’s CTO Mira Murati gave this arguably divisive quote in an interview at Dartmouth:
"Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place if the content that comes out of it [the existing jobs/people] isn't high quality. I really believe that using it as a tool for education and creativity will expand our intelligence and creativity and imagination." - Mira Murati, OpenAI CTO, video courtesy of Dartmouth Engineering/Youtube
There’s a lot to unpack here. I would actually advise you to watch the entire video, especially all the way to the Q&A. She adds more insight to her thoughts on this towards the end.
I paused and thought a lot about this and what it might mean for me, my profession and the industry’s future in general. Like many rational human beings, I would hope that if AI were here to stay, it would be for the good of all. For the prosperity of all. But then anyone who has read about or been exposed to this technology would understand that it’s more complicated than that, far more than a lot of us can handle. Instead of coming up with an underbaked stance on these heavy issues, I’m happy to share the top unanswered questions I have at the moment:
If AI will bring in more jobs to replace the ones it took, what would that look like? Is it really possible to keep retraining humans over and over again throughout their lifetimes just to keep up with the machines? Even in the creative field, that seems a bit of a stretch to ask, considering how high the burnout rate is.
Consider this chilling quote from the book 21 Lessons for the 21st Century by Yuval Noah Harari:
“The problem with all such new jobs, however, is that they will probably demand high levels of expertise, and will therefore not solve the problems of unemployed unskilled laborers. Creating new human jobs might prove easier than retraining humans to actually fill these jobs. During previous waves of automation, people could usually switch from one routine low-skill job to another. In 1920 a farm worker laid off due to the mechanization of agriculture could find a new job in a factory producing tractors. In 1980 an unemployed factory worker could start working as a cashier in a supermarket. Such occupational changes were feasible, because the move from farm to factory and from factory to supermarket required only limited retraining.”
“Despite the appearance of many new human jobs, we might nevertheless witness the rise of a new useless class. We might actually get the worst of both worlds, suffering simultaneously from high unemployment and a shortage of skilled labor. Many people might share the fate not of 19th-century wagon drivers, who switched to driving taxis, but of 19th-century horses, who were increasingly pushed out of the job market altogether.
In addition, no remaining human jobs will ever be safe from the threat of future automation, because machine learning and robotics will continue to improve. A 40-year-old unemployed Walmart cashier who through superhuman effort manages to reinvent herself as a drone pilot might have to reinvent herself again 10 years later, because by then the flying of drones may also have been automated. This volatility will also make it more difficult to organize unions or secure labor rights. Already today, many new jobs in advanced economies involve unprotected temporary work, freelancing, and one-time gigs. How do you unionize a profession that mushrooms and disappears within a decade?” - Thanks to Austin Rose for compiling these from the book itself
What is the economics of AI-driven creative work? Who will own the output and the IP (intellectual property)? How would ‘artists’3 be compensated in a way that is regulated, fair and ethical? Speaking of ethics, should it be left in the hands of big tech to decide, or should it be reinvented altogether, with proper oversight from the government, academia, the general public and select private companies?
Mira Murati mentioned quality as a (potential) factor in relevance4. Who gets to decide what that is, and how might that fit in the free market? Would AI lower the bar even further for extremely menial jobs, mostly found on sites like Fiverr, Upwork, Truelancer, etc.? If you can get an icon made for almost $05, what is the true incentive for paying its actual market value? What will ‘quality design work’ even look like from a consumer’s POV? Would that still matter, or, quite the opposite, would there be a surge in premium, 100% human-made products amidst all the profoundly soulless AI-’crafted’ ones?
Would more technology solve the problems it is currently creating? Can the good outweigh the bad with the help of future savior6 startups? Or should we just accept that this is a huge cultural and societal change, potentially the biggest in our lifetime, that is basically irreversible? A prime example of this: the rapid rise of issues from AI’s disruption of the educational sector, which I’ve seen plenty of stats about online.
I like asking questions because, in the face of complexity, it is perhaps one of the most productive things anyone can do. These are pretty big questions that I’m quite convinced no one has a satisfying enough answer to. It’s all blurry, and morally confusing.
In my attempt to understand how things work, I generally lean towards the incentives. I follow the incentives of the people (or companies) behind the decisions they make. Not that I agree with this worldview, nor do I actively support it. There’s a big difference between how I want the world to work versus how it actually works. I tend not to shy away from the latter, even when it remains a bitter pill to swallow most of the time.
I can’t imagine it being any different with AI, but I’d like to be proven wrong. Otherwise, it’s really going to be difficult to look forward to the future. When it comes to thinking about the future, optimism is a necessary lens to view it with. At least if you plan on living a relatively productive life as a human, whatever that means for you.
It is not that easy to do that with AI and its looming threats to society. I do think it needs to be part of the general conversation. The more people talking about AI, exchanging thoughts, ideas and whatnot, the better. We shouldn’t let things like this go by without a second thought, regardless of whether we’re in the tech industry or not. If it’s not obvious yet, it will have generational and lasting effects on all of us, our children after us, and our planet before us.
For good, and for bad.
Bilawal Sidhu (Interviewer): “What can we, as individuals, do when we encounter, use, or even discuss AI? Any recommendations?”
Helen Toner (Interviewee): “I think my biggest suggestion here is just not to be intimidated by the technology and not to be intimidated by technologists. Like, this is really a technology where we don't know what we're doing, the best experts in the world don't understand how it works. And so, I think just, you know, if you find it interesting, being interested. If you think of fun ways to use it, use them. If you're worried about it, feel free to be worried. Like, you know, I think the main thing is just feeling like you have a right to your own take on what you want to happen with the technology and no regulator, no, you know, CEO is ever going to have full visibility into all of the different ways that it's affecting, you know, millions and billions of people around the world. And so kind of, I don't know, trusting your own experience and exploring for yourself and seeing what you think is, I think, the main suggestion I would have.”
Thank you for reading working title,
Nikki
The absolute best piece of writing on AI I’ve read on the internet in months:
✨
If you have a spare moment, I’d love to hear your opinion on my newsletter. This will help me understand what to write about and curate better. I am also on Notes, where I post previews and premature ideas that fuel my writing streaks.
Helen Toner is an Australian researcher, the director of strategy at Georgetown’s Center for Security and Emerging Technology, and a former board member of OpenAI
Agile policymaking - the closest legitimate material I can find on this is from the UK Government site https://www.gov.uk/guidance/open-policy-making-toolkit/getting-started-with-open-policy-making
The definition of an artist in an AI-driven economy will change, as is with a lot of roles that we know of today
Presumably, it refers to the idea that the higher the quality of the art, the less likely it will be replaced by the robots
Not counting the computing power
The mission-driven kind, maybe? It’s 2024; it is terribly hard to decipher which ones are actually for real
Thanks for this synthesis! I try to stay abreast of perspectives and general information on the topic, and tend to be pretty skeptical. I look to folks like yourself, who are examining the different angles, to help me with that. What strikes me the most is the quote about AI making it harder to unionize when fields of work or job types are more rapidly changing. Maybe it will be harder, or maybe it will work if there are new approaches that adapt to changing times. If we know one thing, it’s that it’s harder for workers if people don’t step up to protect themselves and take back power, regardless of where the technology is at. The people building and investing in it are not incentivized by power sharing, that’s for damn sure.