PODCAST OVERVIEW
Creative AI: What is happening at the nexus of AI and the creative process
Explore perspectives on using AI to fine-tune business and creative processes, based on our latest Generation AI podcast
August 21, 2023 • 6 minutes
We’ve reached a tipping point when it comes to artificial intelligence. Over the past few decades, AI has shifted from a promise to a democratized tool accessible to almost anyone with an internet connection.
While the hype is calming, adoption of the technology shows no signs of slowing. Many have raised concerns about these tools and their potential implications, but few loom as large as those surrounding creative work. Will AI devalue and automate human art and creative thinking?
According to our most recent guest on Generation AI, Vittorio Banfi, Co-Founder at Tailor, the answer is a resounding “no.” Instead, AI systems will be leveraged to empower creative people to focus on what they do best. But it may take time and practice to get there.
Breaking through the creative AI hype cycle
Artificial intelligence is far from new. However, since the rise of ChatGPT, AI has been all the rage across personal and professional applications. According to Vittorio, there are two types of hype, and AI fits squarely into one category.
“There’s the hype cycle around something that could work in the future, and there’s a hype cycle around something useful,” Vittorio says. “Generative AI has passed the tipping point to where it’s actually useful.”
In this case, the hype around generative AI tools was much less about the promise, which has existed for many years, and much more about the accessibility and usability of the applications. Having steadily improved over the years, AI is now widely accessible and offers applications that increase efficiency — it goes far beyond basic chat capabilities and machine-learning techniques.
While the hype cycle is slowing, Vittorio doesn’t expect it to dwindle completely because the technology is incredibly useful.
“When there’s a hype cycle around something that’s actually useful, the decreasing part of the curve is not as steep,” he says. “I expect some hype to decrease, but the use of AI isn’t going anywhere.”
A daily creative brief, brought to you by AI
As co-founder of Tailor, Vittorio has a bright outlook when it comes to artificial intelligence. While many risks exist, he believes the potential behind the technology will empower people beyond expectation.
Tailor utilizes AI to offer users a personalized daily brief based on their interests and habits, ultimately making the unruly sea of endless content navigable, even enjoyable.
Each day, Tailor provides a 5- to 10-minute summary of news, relevant content, and major publications related to your consumption habits and specified interests. From there, you can dive deeper, exploring each piece of content if you choose.
A rise in ethical and privacy concerns in personalized AI
When TikTok provides you with an ultra-customized feed based on your preferences, interests, and viewing habits, do you feel catered to? Or do you get an eerie feeling? AI is constantly being leveraged to pull you into the newest Netflix show in your field of interest, suggest new songs and artists in the same style for your playlists, and sell you items via personalized ads in your social media feeds.
After all, a custom experience is exactly what digital users are looking for, right? But how far is too far?
This blurry line between customization and personal privacy is one of the largest concerns regarding modern AI. With little to no government regulation currently enforced, it’s hard to tell exactly what data is being gathered and what it’s being used for.
“There’s a lot of questions around where is this data going and how is it being used, other than serving you content?” Vittorio says. “There’s always the balance between the trust that you have in the company.”
Guiding principles in establishing trust: An AI company’s perspective
With growing concerns over data privacy, usage, and ethics, transparency and honesty are crucial for AI companies and their users. According to Vittorio, building a foundation of trust should come first, for the benefit of both the organization and its users.
First, users should understand the limitations of artificial intelligence. For example, in systems like ChatGPT, users must accept a warning that the information provided may not always be accurate. Likewise, large language models (LLMs) rely solely on the data on which they are trained, meaning that any inconsistency or incorrect information in that training data will be presented as fact until corrected.
Tailor takes a clear approach to transparency, not only informing users exactly what their data is being used for but also providing clear links that allow users to trace information to its original source. Therefore, if the system provides information that raises red flags, users can find the origin and make a judgment call.
“We really explain to our users the things that are happening. They can always go back to where the information originated and make their own judgment call,” Vittorio says. “The AI is not opaque. It’s not trying to separate the user from reality; it’s trying to help them get where they need to go.”
The government’s role in overseeing and regulating AI
When it comes to AI, there are many moving pieces. While this makes the potential opportunities practically infinite, it also raises serious concerns regarding regulation. Many wonder whether AI and LLMs can even be properly controlled. But according to Vittorio, this complexity doesn’t relieve the government of its heavy responsibility for the future of AI.
When it comes to regulation, society plays a large role. Regulation must contend not only with constantly evolving technology and new systems released daily, but also with society’s understanding of the technology, the data involved, and the related laws.
“You need to understand regulation is released on the population, so the level of knowledge and understanding of AI tools people have plays a role,” Vittorio says. “ChatGPT helped people realize what AI can do. Now the conversation about regulations is different than if we were only talking about backend systems.”
While the government will have a role in ensuring AI is used ethically, it will also be vital to consider society’s understanding of and reaction to the technology. Even then, regulation will likely have widespread and unexpected consequences, such as thwarting further innovation, particularly among smaller companies with less political pull and investment capability.
The AI creativity dilemma
Vittorio has high hopes for the convergence of AI and transformational creativity. Where many see AI as a threat to human art, he believes tools can and will be built to streamline creative tasks, enabling creatives to focus on what they do best. He also predicts that AI may become a medium of sorts for creatives.
“I’ve always been fascinated by how creative people will leverage these types of technologies,” Vittorio says. “Can we build AI systems that actually empower creative people to focus on the work they do best?”
His fear for the future of creativity and AI is the exact opposite — a world where AI devalues human creativity.
“We need to build incentive structures so we don’t end up devaluing human creativity,” he says. “People like to see human creativity above all else — it’s human nature.”
While the threat of AI devaluing creativity and original ideas is valid, Vittorio believes it is the less likely of the two scenarios.