Artificial Intelligence won't take our jobs yet, but things are going to get weird
We're still 50 years away from artificial intelligence domination
Welcome to this week’s edition of Second Guess. Did you know that January is the longest month of the year? That’s why I’m doing a countdown to salary day. Join the movement. We got 6 days left, people! #WAGMI
If you missed last week’s edition of this newsletter, catch up: I wrote about the myth of multiple income streams (or how side-hustle culture is killing us).
And now, to today’s story…
In 2016, Microsoft launched an AI personality on Twitter named “Tay”. They wanted the bot to engage in online conversations with Twitter users as a fun, interactive demo of Microsoft’s natural language processing technology. The experiment turned out to be one of the most unforgettable tech fails ever.
Within hours, internet trolls had gotten Tay to tweet the most offensive messages you could think of, ranging from “Hitler was right” to “I hate feminists and they should all die and burn in hell.” It was a hot mess and Microsoft quickly shut Tay down.
Fast-forward to December 2022, when I watched MKBHD’s YouTube video on AI, in which he revealed he’d used ChatGPT to generate the script, including the argument against AI. I was astounded when he said it and started pondering the existential threat AI poses to my daily bread, and how far the world has evolved that my career as a creative could be at such risk of extinction.
But just two weeks ago, netizens exposed tech publication CNET for quietly publishing dozens of AI-generated feature articles. CNET admitted that it was true, but claimed they were just experimenting. Whether the statement was just PR bullshit is up for debate, but one thing is clear: despite all the hype and FOMO around AI, it’s far from ready to take over the world.
CNET’s “experiment” failed and—colour me shocked—proved artificial intelligence isn’t any better at journalism than humans. In fact, it’s actually worse. Earlier this week, The Washington Post reported that CNET started adding lengthy correction notices to some of its AI-generated articles after Futurism, another tech publication, called out the stories for containing some “very dumb errors.”
Other tech sites like Bankrate have also admitted that their AI-generated articles have been riddled with silly errors since November and that they will be issuing corrections. Which brings me to my argument. TLDR: Despite the noise, AI is still not ready to take over our jobs.
Artificial intelligence has rapidly become more and more sophisticated over the last decade. We now use AI in our daily life to learn our preferences and serve us ads, suggest movies, recognise our faces and even complete our sentences in ways our lovers can’t dream of.
OpenAI’s DALL-E and ChatGPT took the world by storm in 2022 when these tools showed they could create entire complex pieces of art and content from simple prompts. So when news broke that CNET had been using AI to generate entire stories, the anxiety around its threat to journalists’ jobs was understandable. Robots could finally generate copy without needing salaries or bathroom breaks. “W” for capitalism, right?
Lol. Let’s be calming down.
The problem AI is trying to solve is harder than we think
The fundamental blocker to AI taking over is the difference in specificity between how humans carry out instructions vs how AI does.
I came across an analogy by AI expert Stuart Russell, who explained to TED-Ed the difference between asking a human to do something and giving the same thing as an objective to an artificial intelligence system.
When you ask a human to run an errand for you, say, to order food delivery, the human understands it’s not their life’s mission, and they don’t have to bring your food to you at all costs. With AI algorithms, it’s exactly that: we must give them a fixed objective and account for every specific scenario, or else they’ll make it their life’s mission even if they break everything else.
If you asked an AI system to get you food from a restaurant and it arrived just as the restaurant was closing, it could kill everyone at the restaurant just to get you your amala because, well, it made getting your food to you its primary purpose, per instructions. AI simply doesn’t have common sense, empathy, or the ability to understand its surroundings.
According to Russell, if you asked AI to fix the acidification of the oceans, it could cause a catalytic reaction that deacidifies the oceans very quickly and efficiently but also consumes a quarter of all the oxygen in the atmosphere, which would apparently cause all of us slow, unpleasant deaths.
In theory, saying, “Just be more careful about specifying the objective” is a simple enough fix, but what about the atmospheric oxygen? What about possible side effects of the reaction in the ocean that poison all the fish? Then you tell the AI not to kill the fish, which opens another possibility: well, what about the seaweed? Then you say, “Don’t do anything that might kill the seaweed”, and on and on and on.
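To make that whack-a-mole concrete, here’s a toy sketch in Python. It’s my own illustration with made-up actions and numbers, not anything from Russell’s talk: an agent that maximises one fixed objective will happily pick the most “efficient” option until every individual side effect has been explicitly forbidden.

```python
# Toy illustration (hypothetical actions and scores) of objective mis-specification.
# Each tuple: (acidity_reduced, oxygen_used, fish_killed, seaweed_killed)
ACTIONS = {
    "fast_catalysis":  (0.95, 0.25, 0.00, 0.00),  # quick, but eats a quarter of the oxygen
    "toxic_additive":  (0.90, 0.01, 0.80, 0.00),  # spares the oxygen, poisons the fish
    "seabed_dredging": (0.85, 0.01, 0.00, 0.90),  # spares the fish, rips out the seaweed
    "slow_buffering":  (0.50, 0.01, 0.00, 0.02),  # the "common sense" option a human would pick
}

def best_action(limits):
    """Maximise acidity reduced; ignore any side effect we haven't thought to cap."""
    allowed = [
        (name, scores) for name, scores in ACTIONS.items()
        if all(scores[i] <= cap for i, cap in limits.items())
    ]
    return max(allowed, key=lambda item: item[1][0])[0]

print(best_action({}))                           # fast_catalysis: efficient, goodbye oxygen
print(best_action({1: 0.05}))                    # cap oxygen use -> toxic_additive
print(best_action({1: 0.05, 2: 0.10}))           # also cap fish deaths -> seabed_dredging
print(best_action({1: 0.05, 2: 0.10, 3: 0.10}))  # only now does it pick slow_buffering
```

Each new cap only reveals the next side effect nobody listed; the sensible option gets picked only after every loophole has been closed by hand, which is exactly the “and on and on and on” Russell describes.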
We don’t have to go through these processes with humans because humans have intuition. Humans’ ability to empathise, ask questions, seek clarification and basically take initiative isn’t something we consider very sophisticated. We just know, but AI doesn’t. In 2023, AI will tell you wrong answers with confidence and won’t give a fuck, because it can’t think for itself.
In Microsoft’s Tay situation mentioned earlier, the problem wasn’t that the bot was immoral; it was that she was, well, amoral. She didn’t have any conception of right or wrong, and her utterances were just the output of a mindless statistical analysis of training data, with no ability to evaluate the ethical significance of her statements. Welp.
Even if an AI system were to work well enough 90% of the time but had instances of being deeply harmful or inaccurate the rest of the time, there’s “still a lot of homework left”, as Maximilian Gahntz tells TechCrunch, “before a company should make it widely available.”
The recently documented failures of generative AI tools like DALL-E and ChatGPT, whose output is still hindered by glaring problems, further buttress the fact that AI is a long way from taking over. And even on the business side, companies aren’t ready to roll out something that frequently generates messed-up stuff, because it’ll only drive their customers away.
The intellectual property problem
Alex Kantrowitz, the author of Big Technology, recently caught a Substack writer who used AI to “copy, remix, and publish” content stolen from his newsletter.
The Kantrowitz incident is particularly interesting given that most generative AI models crawl through mountains of publicly available information to generate their content or art. The result is that, in journalism and the creator economy, even the most authentic-looking AI-generated stories are essentially cut-and-join jobs that lack new findings or thoughtful creation. In essence, AI still can’t do research, ask questions, show empathy, apply common sense, or actually create new things, so no, it’s not taking over our jobs just yet.
I’m no programmer, but I remember when GitHub released Copilot and the entire dev community on Twitter was overrun with hysteria. Nobody has lost their job because of AI yet. In fact, Microsoft, GitHub and OpenAI are currently being sued in a class action lawsuit that accuses them of violating copyright law by letting Copilot regurgitate sections of licensed code without providing credit.
I’ll go out on a limb to argue that when AI art becomes more sophisticated, artists will reinvent themselves. Naturally created art will only become more valuable, just like how handmade products (e.g. shoes) are typically more valuable than mass-produced junk, but I digress.
TLDR: The doomsday chants about AI are a little too premature.
But things are about to get weird
While AI isn’t yet ready to take over, it still represents an important existential threat. For a long time, creatives and creators were fine with automation reducing the need for human labour and rendering millions of manual jobs redundant. But now, automation is making giant strides on its way to disrupt highly creative work once thought to be outside its reach.
I’m also concerned that we’re becoming more and more dependent on machines. These AI models feed on data to learn about us and assist us. And now, we’re all too willing to hand our information over to machines as long as they make our lives easier. We must then ask ourselves: how much assistance is too much assistance? We might be inadvertently fast-tracking the process of AI dependence even though we keep expressing hysteria about it taking over.
Humans evolve by teaching younger generations, an unbroken chain that goes back thousands of generations. Right now, there is a real danger of losing the incentive to teach the next generation how to understand the machines powering the world they’re being born into. What happens when the chain is broken? Are we getting to the point of giving the keys to our civilisation to AI? It’s not our generation I’m worried about; it’s our children’s.
We’re living in truly exciting times as AI is both fascinating and terrifying. While we can’t know when AI will come for our jobs, most experts peg it between 2045 and the end of the century.
Without a doubt, AI will continue to grow in leaps and bounds and will have to navigate ethical and practical concerns. But it’s still a long way away from replacing humans. Meantime, as artificial intelligence evolves, I’m cautiously optimistic that humans will evolve with it.
Have a lovely weekend and see you next Friday!