Wharton Webinar Series
AI Horizons
AI and Machine Creativity
Boston University information systems professor Dokyun Lee can’t wait to get his hands on an Apple Vision Pro.
The just-released $3,500 spatial computer allows users to navigate online content with their eyes, hands, and voice, effectively blending their physical and digital worlds through a futuristic headset.
“I’m looking forward to a text-to-world model just like Star Trek,” he said. “I want the holodeck.”
Perhaps he can compare notes with Yannick Exner, a doctoral researcher in digital marketing at the Technical University of Munich, who said his Vision Pro order was on the way.
Their fearless embrace of an AI-integrated future is evident in their research, which they shared during a webinar hosted by AI at Wharton to showcase emerging knowledge in the field. Streamed live on Feb. 2, the webinar focused on “AI and Machine Creativity.”
Wharton marketing professor Robert Meyer hosted the hourlong webinar that also featured Ankit Sisodia, a Purdue University marketing professor who specializes in the use of machine learning and AI for business applications.
The guest scholars offered glimpses into the profound ways in which AI is changing everything.
“What we learned is that people perceive AI-generated content and human-generated content similarly,” Exner said.
His co-authored paper, “The Power of Generative Marketing: Can Generative AI Reach Human-level Visual Marketing Content,” compared how people responded to human-made versus synthetic images. In the study, AI-generated images outperformed human-made images on perception, were comparable on social media engagement, and achieved significantly higher click-through rates.
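For a rough sense of how such a comparison might be run, the sketch below contrasts click-through rates for AI-generated and human-made images using a two-proportion z-test; the counts and the choice of test are illustrative assumptions, not the paper's actual analysis.

```python
# Toy comparison of click-through rates for AI-generated vs. human-made ad images.
# The counts are invented for illustration; the study's real design and test may differ.
from statsmodels.stats.proportion import proportions_ztest

clicks = [532, 448]             # clicks on AI-generated vs. human-made images
impressions = [10_000, 10_000]  # impressions shown in each condition

ctr_ai = clicks[0] / impressions[0]
ctr_human = clicks[1] / impressions[1]
stat, p_value = proportions_ztest(clicks, impressions)

print(f"CTR (AI): {ctr_ai:.2%}, CTR (human): {ctr_human:.2%}")
print(f"z = {stat:.2f}, p = {p_value:.4f}")
```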
Sisodia developed a method to discover and quantify human-interpretable visual characteristics directly from image data. His co-authored paper, “Generative Interpretable Visual Design,” explores how deep learning can surface meaningful product attributes and then use them to generate new products that resonate with consumers.
“We know that visual design matters for consumer choices,” he said.
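The core idea behind this approach, roughly, is to learn a latent space whose individual dimensions line up with human-interpretable visual attributes, and then vary those dimensions to generate new candidate designs. The toy sketch below illustrates such a latent traversal with an untrained stand-in decoder; the architecture, the number of dimensions, and the attribute name are assumptions for illustration, not the model from the paper.

```python
# Toy latent-traversal sketch: vary one interpretable latent dimension while holding
# the others fixed, and decode each point into an image. A real system would use a
# trained disentangled generative model (e.g., a beta-VAE); this decoder is untrained.
import torch
import torch.nn as nn

LATENT_DIM = 8  # assume 8 latent dimensions, each tied to a visual attribute

decoder = nn.Sequential(          # stand-in for a trained decoder network
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, 32 * 32 * 3),  # decode to a small 32x32 RGB "design"
    nn.Sigmoid(),
)

base_design = torch.zeros(LATENT_DIM)  # a reference product design in latent space
attribute_index = 2                    # pretend dimension 2 encodes, say, "boxiness"

designs = []
for value in torch.linspace(-2.0, 2.0, steps=5):  # sweep the attribute from low to high
    z = base_design.clone()
    z[attribute_index] = value
    with torch.no_grad():
        image = decoder(z).reshape(3, 32, 32)
    designs.append(image)

print(f"Generated {len(designs)} candidate designs, each {tuple(designs[0].shape)}")
```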
Lee’s co-authored paper, “Generative AI, Human Creativity, and Art,” examines the ability of text-to-image AI to help humans create high-quality digital art. Using a dataset of more than 4 million artworks by more than 50,000 users, the study found that text-to-image AI enhanced creative productivity by 25% and the art’s value by 50% over time. The key takeaway is that AI-assisted artists become more productive and their work is valued more highly: they generate more content ideas and more art that is favorably evaluated.
“It’s great as a brainstorming tool,” Lee said.
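For intuition about how such effects might be tallied, the toy calculation below compares artworks posted and favorites per view before and after adopting a text-to-image tool; the data, column names, and metric definitions are invented for illustration, and the paper's actual measurement and identification strategy are more involved.

```python
# Toy before/after comparison of creative productivity and artwork value around the
# adoption of a text-to-image tool. Data and metric definitions are illustrative only.
import pandas as pd

artworks = pd.DataFrame({
    "artist_id": [1, 1, 1, 2, 2, 2],
    "period":    ["pre", "post", "post", "pre", "pre", "post"],
    "favorites": [3, 9, 7, 5, 4, 12],
    "views":     [100, 150, 120, 90, 110, 160],
})

per_period = artworks.groupby("period").agg(
    artworks_posted=("artist_id", "size"),  # proxy for creative productivity
    favorites=("favorites", "sum"),
    views=("views", "sum"),
)
per_period["favorites_per_view"] = per_period["favorites"] / per_period["views"]

print(per_period[["artworks_posted", "favorites_per_view"]])
```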
The scholars agreed that AI has almost boundless potential to affect human creativity. As Meyer noted, today’s art is far better than the primitive drawings on cave walls. “Not that cave art was bad, but it was very limited because there were limited tools to produce it,” he said.
Exner wondered whether humanity will reach a point of diminishing returns with AI, because the models are built and trained on human knowledge. What happens if people start relying too heavily on the models and engage less in novel thinking?
Lee countered, “When photography came, art did not die. I feel like we’re going to have the same sort of impact, at least on the creative side. But I agree with Yannick in terms of what might happen to the knowledge base, where people might not be motivated to contribute, and the models don’t have anything to learn from after a while.”
Meyer said the notion of AI’s self-extinction would make a great sci-fi movie, but like the panelists, he’d rather hold out for the Apple Vision Pro. “I have it on my birthday list for a present my wife will get me, I hope,” he said with a smile.
Register here to attend “AI and Innovation” on March 1, moderated by Professor Kartik Hosanagar, faculty co-director.
– Angie Basiouny