What if Seinfeld made an episode about AI?
Too many people write words; very few tell stories.
Communicating through tech abbreviations isn’t gonna help onboard anyone who’s trying to understand AI. My goal is to tell relatable stories where people can understand the impact of AI, and maybe gather the courage to play around a bit themselves. If there are any concepts you struggle to understand and would like to see contextualized, please let me know in the comments and I will write you a story.
I like to envision everything as if it were the start of a Seinfeld episode. Why? Cos it's simply fun and it's a great way to set the scene for any scenario.
Imagine Kramer busting through the door, going "JERRY! They're telling me my AI isn't responsible enough!"
Jerry: "Oh you mean the AI you created that could automate peoples taxes? Now who could possibly call that irresponsible?"
George barges in, "THE IRS WANTS TWO MILLION DOLLARS JERRY, TWO MILLION DOLLARS!"
Jerry: "See I don't know who's the bigger idiot here, the most irresponsible man I know, who lacks intelligence himself, launching an artificial intelligence for taxes. Or the guy who gave it his personal details."
Responsible AI
Terminator 2, Blade Runner, WarGames, Robocop. We 80s kids grew up on these movies, which all depict a dystopian future with the common theme of computers, or artificial intelligence, having gone too far. Naturally, with AI everywhere today and creeping ever closer into our lives, people are right to question its development and keep referring back to responsible AI. So how do we define it? Ethical decision-making, fairness, transparency and safety are the cornerstones to consider. Basically: who controls what, and in what way.
When discussing "responsible AI", it's important to envision the bad scenarios we could end up in unless data privacy within AI becomes more regulated. A common problem today is feeding sensitive company data into sites you don’t fully trust. There are multiple new free AI tools dropping every day, so how can you know who to trust? The short answer is you can’t. Rather, let AI tools inspire you, and then write your content yourself.
Look, I'm just as hyped as you are about the recent AI developments, and no one wants to be the buzzkill. But in this new paradigm being built in front of our eyes, aligning the first building blocks in a smart way will give us a better foundation to build on.
Newman runs in, out of breath: "YOU! You dumb rat!"
Jerry: "Who woke up tubby over here? I haven't seen you out of breath since your m&ms fell under the couch"
Newman: "Did you REALLY think Kramerica AI could replace mail? AMERICA RUNS ON MAIL!"
Jerry: "I haven't seen you run since..."
Newman: "Have your laughs, funny boy. AI will come for comedy soon too, mark my words"
Help or hinder?
Imagine AI development as a helium balloon. Once let go, the tiniest gust of wind will influence its direction, and it could fly anywhere outside our control. You could picture responsible AI as a rock tied to its string, keeping the balloon in place so it doesn't do anything crazy. But a better picture is a balloon flying straight up inside a tube, so it can't go on any unwanted detours and reaches the destination we set out for it. These metaphorical tubes are responsible AI regulations.
Elaine enters, silent and head hung low.
Jerry: "How was the date? Did sweet-talking Ricardo live up to the expectations?”
Elaine: "Well, turns out... This guy is an illiterate introvert who has been using ChatGPT to chat me up online, but he couldn't string two words together in person. NOT TWO WORDS! Is two words too much to ask from men these days?"
George: "Tell me about it, I talked to two women last week with ChatGPT, thought we hit it off in person but they've both ghosted me since."
Elaine: "Are you doing this too?!"
Jerry: "Elaine you have to understand, men like George never had a chance in life. With AI, he doesn't have to reveal who he really is until he's face to face without a screen. And frankly, that's the best shot he's ever had."
Kramer: "Sounds like you guys need to use AI more responsibly, cos this is sounding cuckoo."
George: "Yeah.... FOR TWO MILLION DOLLARS I'LL STOP!"
Don’t be like Kramer
So who's gonna be responsible for legislating responsible AI acts, and how could that be incentivized? The EU recently put forward a legislative proposal, but enforcing it will be very tough.
We all want to automate our work as much as possible, but this is an aspect of AI where a human very much needs to set the rules. I envision a near future where governments incentivize responsible AI use through audits and tax cuts. Logically, it makes sense: if you, as a corporation, choose not to exploit workers by replacing their jobs with AI, your bottom line will suffer but humanity will be protected, and that sacrifice should be rewarded with tax cuts. But good luck explaining that to governments busy pointing fingers at each other like the shootout scene in The Good, The Bad and The Ugly.
So what's the takeaway from all of this? Don't be like Kramer. Responsible AI regulations are looming, and if you spend energy making sure you're using AI responsibly right now, it will pay off down the line. Don't entangle your company by becoming reliant on AI services you don't know or understand. The old proverb never lies: if the product is free, then you are the product.