Hovertext:
It's at its best when it interrupts a speech every twelve syllables.
A chef in Dubai used OpenAI's chatbot ChatGPT to come up with a pizza recipe — and as it turns out, it started selling like hotcakes.
Spartak Arutyunyan, head of menu development for the international pizza chain Dodo Pizza, told the BBC that the resulting recipe "was actually a huge hit, and it's still on the menu."
In a bid to reflect Dubai's culinary traditions, ChatGPT went all out, suggesting a wild mix of ingredients. The recipe includes "Arab shawarma chicken, Indian grilled paneer cheese, Middle Eastern Za'atar herbs, and tahini sauce," according to the report.
In simple terms, the chatbot produced a cross-cultural hodgepodge of flavors thrown together on a pizza rather than anything particularly original. Regardless, it seems to have struck a chord.
"As a chef, I wouldn't mix these ingredients ever on a pizza, but still, the mix of flavors was surprisingly good," Arutyunyan told the BBC.
Arutyunyan revealed that other ChatGPT recipes didn't make the cut, including a pizza topped with strawberries and pasta, and a pie featuring blueberries and breakfast cereal.
We've already come across several lazy attempts to cash in on the AI hype in the culinary world. Last year, for instance, a dubious "bespoke smoothie shop" called BetterBlends in San Francisco closed down after its hyper-personalized smoothie recipe business failed to catch on.
A taco shop in Dallas also dabbled with generative AI — with equally mixed results.
Velvet Taco culinary director Venecia Willis told the BBC that ChatGPT spat out some "funky combinations."
"I think AI is a great tool to use when you're in a bit of a creative slump, to get the brain going again — ‘that combination might actually work, let's try it,'" she said. "The AI can suggest something maybe I wouldn't have thought of."
Other experts were far more skeptical of the use of AI chatbots in the kitchen.
"If you can get ChatGPT to spit out something that looks like a recipe, then it's because there are recipes on the internet," outspoken AI critic and University of Washington linguistics professor Emily Bender told the BBC.
More on ChatGPT: Did AI Already Peak and Now It’s Getting Dumber?
The post Chef Admits His Smash Hit Pizza Was Invented by ChatGPT appeared first on Futurism.
The post Turn That Frown appeared first on The Perry Bible Fellowship.
Apropos of almost nothing, the CEO of an AI startup suggested that a bad actor could use a mysterious image to "attack" human brains the way researchers have done with neural networks.
This kerfuffle began when Florent Crivello, the CEO of the Lindy AI staffing startup, agreed with researcher Tomáš Daniš' take that "there is no evidence humans can't be adversarially attacked like neural networks can."
"There could be," the Germany-based AI researcher wrote on X-formerly-Twitter, "an artificially constructed sensory input that makes you go insane forever."
In his own post, the Lindy CEO referenced a 2015 Google study that found that overlaying subtle noise atop a photo of a panda would cause a neural network to misidentify it as a gibbon. To his mind, it seems "completely obvious" that such an attack — known in the AI world as an "adversarial example" — could be used on humans as well.
I’m always surprised to find that this isn’t completely obvious to everyone.
There’s precedent suggesting that that’s the case — like that Pokémon episode aired in the 90s in Japan where a pikachu attack caused the screen to flash red and blue at 12hz for 6s, causing at least… https://t.co/fD0L4DYTzL pic.twitter.com/RKBooH8qXO
— Flo Crivello (@Altimor) June 5, 2024
Crivello cited the 1997 "Pokémon" episode that induced seizures in audiences across Japan when a strong electric attack from Pikachu set off strobing blue and red lights. Though it was parodied on "The Simpsons" stateside, the episode itself never aired outside Japan over concerns it could harm viewers abroad as well. As Crivello notes, that incident suggests that "very simple" imagery like the flashing in the "Pokémon" episode could be used to hurt the unsuspecting.
"No reason in principle why humans couldn’t be vulnerable to the same kind of adversarial examples," he wrote, "making a model think that a panda is a gibbon with 99.7% confidence because some seemingly random noise was added to it."
For the most part, to be clear, the idea of a deadly sensory input has mostly been constrained to the realm of science fiction.
Back in 1988, science fiction writer David Langford published a chilling short story titled "BLIT" about an image that could drive people mad and even kill them. The in-universe theory behind the story is known as the "Berryman Logical Image Technique," or "BLIT" for short, in which near-future supercomputers accidentally create so-called "basilisk" images that the brain, through a mysterious process known as "Gödelian spoilers," cannot compute.
Told from the perspective of Robbo, a right-wing terrorist who uses the imagery for his group's racist ends, the story follows the young man as he posts one such basilisk, called "The Parrot," around an unnamed British town, wearing goggles that distort his vision so that it does not harm him the way it does those who view it with their naked eyes.
Robbo is eventually caught with the image and arrested. Although petty vandalism is the only charge on which he can be prosecuted for stenciling the dangerous graffiti, he eventually realizes that he's seen "The Parrot" enough times that his brain filters out the distortion from the goggles, and he dies in his jail cell before the night is through.
Indeed, in the comments of Crivello's post, at least one person made the leap to the "BLIT" story — which, if nothing else, shows how powerful that 30-something-year-old meme truly is.
Beyond being fodder for a nearly decade-old story on the popular r/NoSleep horror subreddit, it doesn't appear that anyone in the AI community or elsewhere has developed a "BLIT"-style adversarial attack of their own. We've reached out to Crivello to ask if he knows of any such research or any additional theories, but if he doesn't, we'll chalk it up to yet another instance of AI enthusiast yarn-spinning.
More on AI forecasts: OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity
The post CEO Suggests That Humans Could Be "Adversarially Attacked" Like Neural Networks appeared first on Futurism.