Showing results for tags 'generative ai'.
-
Using generative AI at work makes co-workers question your competence, study claims
Karlston posted a news in Technology News
Recently, Google released a report sharing findings from a study in which customer service agents used an optional conversational AI assistant. Google found the tool could significantly boost productivity, reporting that agents using the AI saw an average 14% increase in efficiency. By Google's calculations, this gain could save a full-time worker approximately 122 hours per year, surpassing Google's initial estimate. The study also noted the AI had a particularly large impact on lower-performing agents, helping them handle more difficult tasks and boosting their output by 35%, compared to a more modest 7% gain for higher performers.

However, these efficiency gains may come with a hidden social cost, according to a new study from Duke University, published recently in the Proceedings of the National Academy of Sciences. This research claims that despite AI's productivity benefits, using tools like ChatGPT, Claude, or Gemini might lead your coworkers and managers to view you as less competent.

The research, titled "Evidence of a social evaluation penalty for using AI," involved four experiments with over 4,400 participants. Researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business found a consistent pattern: employees who use AI tools tend to face negative judgments from colleagues and managers regarding their competence and motivation.

In the first experiment, participants imagined using either an AI tool or a standard dashboard-creation tool for a work task. Those in the AI group anticipated being seen as lazier, less competent, less diligent, and more replaceable. They also indicated they would be less willing to tell their managers or colleagues about their AI use. A supplemental study replicated these core findings, showing that participants in the AI condition expected significantly lower competence ratings from colleagues than those using a dashboard tool. In the study's rating scales, higher scores indicate a more positive outcome for "Disclosure," "Competence," and "Diligence," while higher scores for "Lazy" and "Replaceable" represent a more negative outcome.

The second experiment seemed to confirm these anxieties. When participants evaluated descriptions of employees, those who received help from AI were consistently rated as lazier, less competent, less diligent, less independent, and less self-assured than those who got similar help from non-AI sources or no help at all. Another supplemental study investigated whether these social penalties changed if AI use was described as common versus uncommon in the workplace. Interestingly, the perceived norm of AI use did not significantly alter the negative social evaluations, suggesting the penalty is quite robust.

The researchers also found that this bias can influence real-world business decisions. In a hiring simulation, managers who did not personally use AI frequently were less likely to hire candidates who reported regular AI tool use. Conversely, managers who were frequent AI users themselves showed a preference for AI-using candidates. This aligns with findings from another part of the study, indicating that the perception of laziness in an AI-using candidate is stronger among evaluators who themselves use AI less frequently. The final experiment identified perceived laziness as a primary driver of the negative evaluations.
However, this penalty could be lessened if the AI tool was clearly beneficial and appropriate for the specific task at hand. When AI use made obvious sense for the job, the negative perceptions were significantly reduced. For example, the study detailed that for manual tasks, using AI had a negative direct effect on perceived task fit, even beyond the laziness factor. In contrast, for digital tasks, where AI could be seen as more useful, AI use had a positive direct effect on perceived task fit, which helped to partially counteract the negative impact of perceived laziness.

Source

-
Reddit mods are fighting to keep AI slop off subreddits. They could use help.
Karlston posted a news in General News
Mods ask Reddit for tools as generative AI gets more popular and inconspicuous.

Like it or not, generative AI is carving out its place in the world. And some Reddit users are definitely in the "don't like it" category. While some subreddits openly welcome AI-generated images, videos, and text, others have responded to the growing trend by banning most or all posts made with the technology.

To better understand the reasoning and obstacles associated with these bans, Ars Technica spoke with moderators of subreddits that totally or partially ban generative AI. Almost all of these volunteers described moderating against generative AI as a time-consuming challenge they expect to get more difficult as time goes on. And most are hoping that Reddit will release a tool to help their efforts.

It's hard to know how much AI-generated content is actually on Reddit, and getting an estimate would be a large undertaking. Image library Freepik has analyzed the use of AI-generated content on social media but leaves Reddit out of its research because "it would take loads of time to manually comb through thousands of threads within the platform," spokesperson Bella Valentini told me. For its part, Reddit doesn't publicly disclose how many of its posts involve generative AI.

To be clear, we're not suggesting that Reddit has a large problem with generative AI use. By now, many subreddits seem to have agreed on their approach to AI-generated posts, and generative AI has not superseded the real, human voices that have made Reddit popular. Still, mods largely agree that generative AI will likely get more popular on Reddit over the next few years, making generative AI modding increasingly important to both moderators and general users. Generative AI's rising popularity has also had implications for Reddit the company, which in 2024 started licensing Reddit posts to train the large language models (LLMs) powering generative AI. (Note: All the moderators I spoke with for this story requested that I use their Reddit usernames instead of their real names due to privacy concerns.)

No generative AI allowed

When it comes to anti-generative-AI rules, numerous subreddits have zero-tolerance policies, while others permit posts that use generative AI if it's combined with human elements or is executed very well. These rules task mods with identifying posts that use generative AI and determining whether they fit the criteria to be permitted on the subreddit. Many subreddits have rules against posts made with generative AI because their mod teams or members consider such posts "low effort" or believe AI runs counter to the subreddit's mission of providing real human expertise and creations.

"At a basic level, generative AI removes the human element from the Internet; if we allowed it, then it would undermine the very point of r/AskHistorians, which is engagement with experts," the mods of r/AskHistorians told me in a collective statement. The subreddit's goal is to provide historical information, and its mods think generative AI could make information shared on the subreddit less accurate. "[Generative AI] is likely to hallucinate facts, generate non-existent references, or otherwise provide misleading content," the mods said. "Someone getting answers from an LLM can't respond to follow-ups because they aren't an expert. We have built a reputation as a reliable source of historical information, and the use of [generative AI], especially without oversight, puts that at risk."
Similarly, Halaku, a mod of r/wheeloftime, told me that the subreddit's mods banned generative AI because "we focus on genuine discussion." Halaku believes AI content can't facilitate "organic, genuine discussion" and "can drown out actual artwork being done by actual artists."

The r/lego subreddit banned AI-generated art because it caused confusion in online fan communities and retail stores selling Lego products, r/lego mod Mescad said. "People would see AI-generated art that looked like Lego on Instagram or Facebook and then go into the store to ask to buy it," they explained. "We decided that our community's dedication to authentic Lego products doesn't include AI-generated art."

Not all of Reddit is against generative AI, of course. Subreddits dedicated to the technology exist, and some general subreddits permit the use of generative AI in some or all forms. "When it comes to bans, I would rather focus on hate speech, Nazi salutes, and things that actually harm the subreddits," said 3rdusernameiveused, who moderates r/consoom and r/TeamBuilder25, which don't ban generative AI. "AI art does not do that... If I was going to ban [something] for 'moral' reasons, it probably won't be AI art."

"Overwhelmingly low-effort slop"

Some generative AI bans reflect concerns that people are not being properly compensated for the content they create, which is then fed into LLM training. Mod Mathgeek007 told me that r/DeadlockTheGame bans generative AI because its members consider it "a form of uncredited theft."

Other moderators simply think generative AI reduces the quality of a subreddit's content. "It often just doesn't look good... the art can often look subpar," Mathgeek007 said. Similarly, r/videos bans most AI-generated content because, according to its announcement, the videos are "annoying" and "just bad video" 99 percent of the time. An r/fakemon mod told me, "I can't think of anything more low-effort in terms of art creation than just typing words and having it generated for you."

Some moderators say generative AI helps people spam unwanted content on a subreddit, including posts that are irrelevant to the subreddit and posts that attack users. "[Generative AI] content is almost entirely posted for purely self-promotional/monetary reasons, and we as mods on Reddit are constantly dealing with abusive users just spamming their content without regard for the rules," r/videos mod Abrownn told me in an online interview.

A moderator of the r/wallpaper subreddit, which permits generative AI, disagrees. The mod told me that generative AI "provides new routes for novel content" in the subreddit and questioned concerns about generative AI stealing from human artists or offering lower-quality work, saying those problems aren't unique to generative AI.

Generative AI "wastes our time"

Many mods are confident in their ability to effectively identify posts that use generative AI. A bigger problem is how much time it takes to identify these posts and remove them. The r/AskHistorians mods, for example, noted that all bans on the subreddit (including bans unrelated to AI) have "an appeals process," and "making these assessments and reviewing AI appeals means we're spending a considerable amount of time on something we didn't have to worry about a few years ago."

Several other mods I spoke with agree. Mathgeek007, for example, named "fighting AI bros" as a common obstacle.
And for r/wheeloftime moderator Halaku, the biggest challenge in moderating against generative AI is "a generational one." "Some of the current generation don't have a problem with it being AI because content is content, and [they think] we're being elitist by arguing otherwise, and they want to argue about it," they said.

A couple of mods noted that it's less time-consuming to moderate subreddits that ban generative AI than those that allow posts using generative AI, depending on the context. "On subreddits where we allowed AI, I often take a bit longer time to actually go into each post where I feel like... it's been AI-generated to actually look at it and make a decision," explained N3DSdude, a mod of several subreddits with rules against generative AI, including r/DeadlockTheGame. MyarinTime, a moderator of r/lewdgames, which allows generative AI images, highlighted the challenge of distinguishing human-prompted generative AI content from AI-generated content prompted by a bot.

Mods expect things to get worse

Most mods told me it's pretty easy for them to detect posts made with generative AI, pointing to the distinct tone and favored phrases of AI-generated text. A few said that AI-generated video is harder to spot but still detectable. But as generative AI gets more advanced, moderators expect their work to get harder. In a joint statement, r/dune mods Blue_Three and Herbalhippie said, "AI used to have a problem making hands—i.e., too many fingers, etc.—but as time goes on, this is less and less of an issue." r/videos' Abrownn also wonders how easy it will be to detect AI-generated Reddit content "as AI tools advance and content becomes more lifelike."

Moderators currently use various methods to fight generative AI, but they're not perfect. r/AskHistorians mods, for example, use "AI detectors, which are unreliable, problematic, and sometimes require paid subscriptions, as well as our own ability to detect AI through experience and expertise," while N3DSdude pointed to tools like Quid and GPTZero.

To manage current and future work around blocking generative AI, most of the mods I spoke with said they'd like Reddit to release a proprietary tool to help them. "I've yet to see a reliable tool that can detect AI-generated video content," Abrownn said. "Even if we did have such a tool, we'd be putting hundreds of hours of content through the tool daily, which would get rather expensive rather quickly. And we're unpaid volunteer moderators, so we will be outgunned shortly when it comes to detecting this type of content at scale. We can only hope that Reddit will offer us a tool at some point in the near future that can help deal with this issue."

A Reddit spokesperson told me that the company is evaluating what such a tool could look like. But Reddit doesn't have a rule banning generative AI overall, and the spokesperson said the company doesn't want to release a tool that would hinder expression or creativity. For now, Reddit seems content to rely on moderators to remove AI-generated content when appropriate.

Making a generative AI Reddit tool wouldn't be easy

Reddit is handling the evolving concerns around generative AI as it has handled other content issues, including by leveraging AI and machine learning tools. Reddit's spokesperson said this includes testing tools that can identify AI-generated media, such as images of politicians.
But making a proprietary tool that allows moderators to detect AI-generated posts won't be easy, if it happens at all. The current tools for detecting generative AI are limited in their capabilities, and as generative AI advances, Reddit would need to provide tools more advanced than the AI detectors currently available. That would require a good deal of technical resources and would likely present notable economic challenges for the social media platform, which only became profitable last year. And as noted by r/videos moderator Abrownn, tools for detecting AI-generated video still have a long way to go, making a Reddit-specific system especially challenging to create.

But even with a hypothetical Reddit tool, moderators would still have their work cut out for them. And because Reddit's popularity is largely due to its content from real humans, that work is important. Since Reddit's inception, that has meant relying on moderators, which Reddit has said it intends to keep doing. As r/dune mods Blue_Three and Herbalhippie put it, it's in Reddit's "best interest that much/most content remains organic in nature." After all, Reddit's profitability has a lot to do with how much AI companies are willing to pay to access Reddit data. That value would likely decline if Reddit posts became largely AI-generated themselves.

But providing the technology to ensure that generative AI isn't abused on Reddit would be a large challenge. For now, volunteer laborers will continue to bear the brunt of generative AI moderation.

Advance Publications, which owns Ars Technica parent Condé Nast, is the largest shareholder of Reddit.

Source

-
From energy to resources, data centers have grown too greedy.

In 2025, AI and climate change, two of the biggest societal disruptors we're facing, will collide. The summer of 2024 broke the record for Earth's hottest day since data collection began, sparking widespread media coverage and public debate. This also happens to be the year that both Microsoft and Google, two of the leading big tech companies investing heavily in AI research and development, missed their climate targets. While this also made headlines and spurred indignation, AI's environmental impacts are still far from common knowledge.

In reality, AI's current "bigger is better" paradigm—epitomized by tech companies' pursuit of ever bigger, more powerful large language models that are presented as the solution to every problem—comes with very significant costs to the environment. These range from the colossal amounts of energy needed to power the data centers that run tools such as ChatGPT and Midjourney, to the millions of gallons of freshwater pumped through those data centers to keep them from overheating, to the tons of rare earth metals needed to build the hardware they contain.

Data centers already use 2 percent of electricity globally. In countries like Ireland, that figure goes up to one-fifth of the electricity generated, which prompted the Irish government to declare an effective moratorium on new data centers until 2028. While a lot of the energy used to power data centers is officially "carbon-neutral," this relies on mechanisms such as renewable energy credits, which technically offset the emissions incurred by generating the electricity but don't change the way it's generated. Places like Data Center Alley in Virginia are mostly powered by nonrenewable sources such as natural gas, and energy providers are delaying the retirement of coal power plants to keep up with the increased demands of technologies like AI.

Data centers are also slurping up huge amounts of freshwater from scarce aquifers, pitting local communities against data center providers in places ranging from Arizona to Spain. In Taiwan, the government chose to allocate precious water resources to chip manufacturing facilities to stay ahead of rising demand instead of letting local farmers use it to water their crops amid the worst drought the country had seen in more than a century.

My latest research shows that switching from older standard AI models—trained to do a single task such as question-answering—to the new generative models can use up to 30 times more energy to answer the exact same set of questions. The tech companies that are adding generative AI models to everything from search engines to text-processing software are also not disclosing the carbon cost of these changes—we still don't know how much energy is used during a conversation with ChatGPT or when generating an image with Google's Gemini.

Much of the discourse from Big Tech around AI's environmental impacts has followed two trajectories: either it's not really an issue (according to Bill Gates), or an energy breakthrough will come along and magically fix things (according to Sam Altman). What we really need is more transparency around AI's environmental impacts, by way of voluntary initiatives like the AI Energy Star project that I'm leading, which would help users compare the energy efficiency of AI models to make informed decisions.
I predict that in 2025, initiatives like these will start being enforced via legislation, from national governments to intergovernmental organizations like the United Nations. In 2025, with more research, public awareness, and regulation, we will finally start to grasp AI's environmental footprint and take the necessary actions to reduce it.

Source
-
Tim Cook confirms Apple’s generative AI features are coming ‘later this year’
Karlston posted a news in Technology News
The Apple CEO says his company is putting "tremendous time and effort" into integrating AI into its software platforms.

During Apple's quarterly earnings call on Thursday afternoon, CEO Tim Cook mentioned that the company is working on generative AI software features that will make their way to customers "later this year." That aligns with reporting from Bloomberg's Mark Gurman, who said recently that iOS 18 could be the "biggest" update in the operating system's history. Cook's teases—he mentioned generative AI several times, but never got specific—seem to confirm that we're in for a big release this fall.

"As we look ahead, we will continue to invest in these and other technologies that will shape the future. That includes artificial intelligence, where we continue to spend a tremendous amount of time and effort, and we're excited to share the details of our ongoing work in that space later this year," Cook said in his prepared remarks.

Analysts tried to press Cook for more details, but he didn't offer much. "Our M.O., if you will, has always been to do work and then talk about work, and not to get out in front of ourselves. And so we're going to hold that to this as well. But we have got some things that we're incredibly excited about, that we'll be talking about later this year."

AI software features ranging from advanced photo manipulation to word-processing enhancements have been a major selling point of smartphones from Google and Samsung in recent months. It's rare for Apple to telegraph its upcoming moves, so you can take this as a sign that the company has ambitious plans to integrate AI into its software platforms—iOS, iPadOS, and macOS—later this year.

"Let me just say that I think there's a huge opportunity for Apple with generative AI and with AI, without getting into many more details or getting out ahead of myself," Cook said to conclude the call.

Source

-
Generative AI could be a massive threat to search engines in the next two years
Karlston posted a news in Technology News
Generative AI will pose a serious threat to search engines in just the next two years, according to a forecast from the analyst firm Gartner. It said that search engine volume will fall by one quarter by 2026 due to the adoption of AI chatbots and other virtual agents. With the shift toward artificial intelligence and away from traditional search engines, Gartner says that companies will have to adjust their marketing channel strategies, according to Alan Antin, Vice President Analyst at Gartner.

The analyst firm also said that search engine algorithms will favour quality content to help offset the growing amount of AI-generated content. In addition, it's expected that watermarking will become more important as a means to highlight high-value content.

While Google will probably not like this prediction, it's important to note that it and many of the other search engine providers are also the main providers of generative AI services; Google has Gemini and Microsoft has Copilot. So while traditional search engine usage may decline in favour of AI, it will still be the likes of Google getting the traffic. However, it could mean that there will need to be some sort of replacement for the sponsored links that Google relies on for revenue.

Gartner didn't mention this in its forecast, but the generative AI revolution also means smaller players could become significant competitors to Google in search. Most will know of Microsoft's Copilot, which is essentially ChatGPT with web access, but there is also Perplexity, a startup that has attracted funding from Amazon founder Jeff Bezos and NVIDIA.

Source: Gartner

Source

-
AI used to be weird. Now 'sounds like a bot' is just shorthand for boring.

In 2018, a viral joke started going around the internet: scripts based on "making a bot watch 1,000 hours" of just about anything. The premise (concocted by comedian Keaton Patti) was that you could train an artificial intelligence model on vast quantities of Saw films, Hallmark specials, or Olive Garden commercials and get back a bizarre funhouse-mirror version with lines like "lasagna wings with extra Italy" or "her mouth is full of secret soup." The scripts almost certainly weren't actually written by a bot, but the joke conveyed a common cultural understanding: AI was weird.

Strange AI was everywhere a few years ago. AI Dungeon, a text adventure game genuinely powered by OpenAI's GPT-2 and GPT-3, touted its ability to produce deeply imagined stories about the inner life of a chair. The first well-known AI art tools, like Google's computer vision program Deep Dream, produced unabashedly bizarre Giger-esque nightmares. Perhaps the archetypal example was Janelle Shane's blog AI Weirdness, where Shane trained models to create physically impossible nuclear waste warnings or sublimely inedible recipes. "Made by a bot" was shorthand for a kind of free-associative, nonsensical surrealism—both because of the models' technical limitations and because they were more curiosities than commercial products. Lots of people had seen what "a bot" (actually or supposedly) produced. Fewer had used one. Even fewer had to worry about them in day-to-day life.

But soon, generative AI tools would explode in popularity. And as they have, the cultural shorthand of "chatbot" has changed dramatically—because AI is getting boring. "If you want to really hurt someone's feelings in the year 2023, just call them an AI," suggested Caroline Mimbs Nyce in The Atlantic last May. Nyce charted the rise of "AI" as a term of derision—referring to material that was "dull or uninspired, riddled with clichés and recycled ideas." The insult would reach new heights at the start of the Republican primary cycle in August, when former New Jersey governor Chris Christie dissed rival Vivek Ramaswamy as "a guy who sounds like ChatGPT." And with that, "AI"—as an aesthetic or as a cultural descriptor—stopped signifying weird and became pretty much just shorthand for mediocre.

Part of the shift stems from AI tools getting dramatically better. The surrealism of early generative work was partially a byproduct of its deep limitations. Early text models, for instance, had limited memory that made it tough to maintain narrative or even grammatical continuity. That produced the trademark dream logic of systems like early AI Dungeon, where stories drifted between settings, genres, and protagonists over the span of sentences. When director Oscar Sharp and researcher Ross Goodwin created the 2016 AI-written short film Sunspring, for instance, the bot they trained to make it couldn't even "learn" the patterns behind proper names—resulting in characters dubbed H, H2, and C. Its dialogue is technically correct but almost Borgesian in its oddity. "You should see the boys and shut up," H2 snaps during the film's opening scene, in which no boys have been mentioned. "I was the one who was going to be a hundred years old." Less than a decade later, a program like Sudowrite (built on OpenAI's GPT-3.5 and GPT-4 models) can spit out paragraphs of text that closely imitate clichéd genre prose.
But AI has also been pushed deliberately away from intriguing strangeness and toward banal interactions that often end up wasting humans' time and money. As companies fumble toward a profitable vision of generative artificial intelligence, AI tools are becoming big business by blossoming into the least interesting version of themselves.

AI is everywhere right now, including many places it fits poorly. Google and Microsoft are pitching it as a search engine—a tool whose core purpose is pointing users to facts and information—despite a deep-seated propensity to completely make things up. Media outlets have made some interesting attempts at leveraging AI's strengths, but it's most visible in low-quality spam that's neither informative nor (intentionally) entertaining, designed purely to lure visitors into loading a few ads. AI image generators have shifted from being seen as bespoke artistic experiments to alienating huge swathes of the creative community; they're now overwhelmingly associated with badly executed stock art and invasive pornographic deepfakes, dubbed the digital equivalent of "a fake Chanel bag."

And as the stakes around AI tools' safety have risen, guardrails and training seem to be making them less receptive to creatively unorthodox uses. In early 2023, Shane posted transcripts of ChatGPT refusing to play along with scenarios like being a squirrel or creating a dystopian sci-fi technology, delivering its now-trademark "I'm sorry, but as an AI language model" short-circuit. Shane had to resort to stage-setting with what she dubbed the "AI Weirdness hack": telling ChatGPT to imitate older versions of AI models producing funny responses for a blog about weird AI (a rough sketch of this prompt pattern appears below). The AI Weirdness hack has proven surprisingly adept at getting AI tools like BLOOM to shift from dull or human-replicating results to word-salad surrealism, an outcome Shane herself has found a little bit unsettling. "It is creepy to me," she mused in one post, "that the only reason this method gets BLOOM to generate weird designs is because I spent years seeding internet training data with lists of weird AI-generated text."

AI tools are still plenty capable of being funny, but it's most often due to their over-the-top performance of commercialized inanity. Witness, for instance, the "I apologize but I cannot fulfill this request" table-and-chair set on Amazon, whose selling points include being "crafted with materials" and "saving you valuable and effort." (You can pay a spammer nearly $2,000 for it, which is less amusing.) Or a sports-writing bot's detail-free recaps of matches, complete with odd phrases like "close encounter of the athletic kind." ChatGPT's absurdity is situational—reliant on real people doing painfully serious work with a tool they overestimate or fundamentally misunderstand.

It's possible we're simply in an awkward in-between phase for creative AI use. AI models are hitting the uncanny valley between "so bad it's good" and "good enough to be bad," and perhaps with time we'll see them become genuinely good, adept at remixing information in a way that feels fresh and unexpected. Maybe the schism between artists and AI developers will resolve, and we'll see more tools that amplify human idiosyncrasy instead of offering a lowest-common-denominator replacement for it. At the very least, it's still possible to guide AI tools into clever juxtaposition—like a biblical verse about removing a sandwich from a VCR or a hilariously overconfident evaluation of ChatGPT's art skills.
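For readers curious what the "AI Weirdness hack" looks like in practice, here is a minimal, purely illustrative sketch of the stage-setting prompt pattern described above, written against the OpenAI Python client. The model name and the exact wording of the instructions are assumptions for illustration, not Shane's actual prompts.

```python
# A minimal sketch of the "AI Weirdness hack" prompt pattern described above.
# Assumptions: the OpenAI Python client ("pip install openai"), an API key in
# the OPENAI_API_KEY environment variable, and an illustrative model name.
# The instruction wording is hypothetical, not Shane's actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {
            "role": "system",
            # Stage-setting: frame the chat as a blog post quoting an older,
            # less capable model, nudging the assistant toward surreal output
            # instead of polished, generic text.
            "content": (
                "You write posts for a blog about weird AI. Below, you quote "
                "the output of a small, old text-generation neural network "
                "that produces funny, surreal, slightly garbled results."
            ),
        },
        {
            "role": "user",
            "content": "The old neural net was asked to invent new paint "
                       "color names. It wrote:",
        },
    ],
)
print(response.choices[0].message.content)
```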
But for now, you probably won't want to read anything that sounds "like a bot."

Source
-
Despite the challenges, Tim Cook believes in generative AI
Karlston posted a news in Technology News
As Apple reports its Q3 2023 results, CEO Tim Cook is also highlighting the company's progress in the field of generative AI. According to Cook, Apple has been actively involved in research related to various AI technologies, including generative AI, for several years, and it aims to incorporate these advancements into its products to enhance people's lives.

This is a significant remark coming from Cook, who has avoided discussing generative AI in the past in favor of more general forms of AI and machine learning (ML). It indicates a shift in Apple's approach and highlights the company's growing interest in generative AI technologies. Although Cook welcomed AI's promise in May, he acknowledged that "issues need to be sorted."

"We've been doing research across a wide range of AI technologies, including generative AI, for years. We're going to continue investing and innovating and responsibly advancing our products with these technologies to help enrich people's lives." - Tim Cook, via Reuters

Apple & generative AI

Apple is reportedly using generative AI in a variety of ways to improve its products and services: for example, to develop new features for Siri and to create new augmented reality experiences. One of the most visible ways Apple is using generative AI is in its virtual assistant. Siri is constantly being updated with new features, many of them powered by generative AI, such as improvements to Siri's ability to understand natural language. This means Siri should better understand what you are saying, even if you are not speaking perfectly.

These kinds of generative AI efforts also give Apple an opportunity to create something "bigger." According to a recent patent, Apple is already working on a lip-reading Siri. If this project comes to life, generative AI will have an undeniable contribution.

Source

-
Disney’s Loki faces backlash over reported use of generative AI
Karlston posted a topic in Entertainment Exchange
A Loki season 2 poster has been linked to a stock image on Shutterstock that seemingly breaks the platform's licensing rules regarding AI-generated content.

Online designers are upset over what appears to be an AI-generated stock image in the poster for Loki's second season. Image: Disney / Marvel

A promotional poster for the second season of Loki on Disney Plus has sparked controversy amongst professional designers following claims that it was at least partially created using generative AI. Illustrator Katria Raden flagged the image on X (formerly Twitter) last week, claiming that the image of the spiraling clock in the background "is giving all the AI telltale signs, like things randomly turning into meaningless squiggles"—a reference to the artifacts sometimes left behind by AI image generators. The creative community is concerned that AI image generators are being trained on their work without consent and could be used to replace human artists. Disney previously received backlash regarding its use of generative AI in another Marvel series, Secret Invasion, despite the studio insisting that using AI tools didn't reduce roles for real designers on the project.

Visual errors like wonky lines, smudged lettering, and "meaningless squiggles" can be seen in the image, suggesting the background was created using generative AI. Image: Disney / Marvel / The Verge

Several X users (including Raden) noted that the background of the Loki artwork appears to have been pulled from an identical stock image on Shutterstock titled "Surreal Infinity Time Spiral Space Antique." According to @thepokeflutist, who purchased the stock image, it was published to Shutterstock this year—ruling out the possibility of it being too old to be AI-generated—and contains no embedded metadata to confirm how the image was created (a minimal sketch of this kind of metadata check appears below). Several AI image checkers that scanned the stock image also flagged it as AI-generated.

According to Shutterstock's contributor rules, AI-generated content is not permitted to be licensed on the platform unless it's created using Shutterstock's own AI image generator tool. That way, the widely used stock image site can prove IP ownership of all submitted content. Shutterstock says its AI-generated stock imagery—which is clearly labeled as such on the platform—is safe for commercial use, as it's trained on its own stock library. Shutterstock did not respond to The Verge when asked if the time-spiral image violates its own rules about AI-generated content, or to clarify what the company is doing to enforce such rules.

AI-generated stock imagery is a real issue for many creative professionals. As Raden notes: "licensing photos and illustrations on stock sites has been a way many hard-working artists have been earning a living. I don't think replacing them with generated imagery via tech built on mass exploitation and wage theft is any more ethical than replacing Disney's own employees."

Shutterstock doesn't label the image as AI-generated, but does promote it as a "top choice" that's in high demand. Image: Shutterstock / Svarun

Many of the other images uploaded by the same stock contributor also appear to be AI-generated, despite not being labeled as such. Image: Shutterstock / Svarun

Companies like Adobe and Getty are also promoting ways for AI-generated content to be commercially viable, but it's unclear if these platforms are any better than Shutterstock at moderating submissions that don't abide by their contributor rules.
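As an aside, the kind of embedded-metadata check described above is straightforward to reproduce. Here is a minimal, illustrative sketch using the Pillow library in Python; the filename is hypothetical, and an empty result only means the file carries no EXIF data, not proof of how the image was made.

```python
# A minimal sketch of checking an image for embedded EXIF metadata, as
# described above. Assumes Pillow ("pip install Pillow"); the filename is
# hypothetical. Missing EXIF data does not prove an image is AI-generated;
# it only means nothing is recorded in the file.
from PIL import Image
from PIL.ExifTags import TAGS

img = Image.open("time_spiral_stock_image.jpg")  # hypothetical file
exif = img.getexif()

if not exif:
    print("No embedded EXIF metadata found.")
else:
    # Map numeric EXIF tag IDs to human-readable names where possible.
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")
```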
The poster has been widely distributed across platforms like Apple's App Store since its release. Image: Apple / Disney / Marvel

It also isn't clear whether generative AI was used elsewhere by Disney to create the promotional material for Loki. Some X users have speculated that it may have been used on sections of the image like the miniaturized characters surrounding Tom Hiddleston's Loki, noting their awkward positioning. Disney has ignored our request to clarify whether AI was used in the Loki promotional art and to confirm whether the company had licensed the aforementioned Shutterstock image.

There's an argument here that, since the clock image used for Loki isn't labeled as AI-generated by Shutterstock, Disney might not be aware of its origins. Still, the errors present in the stock image would be easy for most graphic designers to spot, so the inclusion of random artifacts in the final poster isn't a good look for Disney's design or editing process.

The creative industry has become saturated with AI-powered tools like Adobe Firefly and Canva Magic Studio over the last year. These tools aim to make things easier for folks with limited design experience and are typically promoted to organizations that want to produce cheap art at scale. Stock images are often used by companies because they're fast, affordable, and accessible, reducing the need to hire experienced designers to make content from scratch. As AI-generated stock grows in popularity, it's easy to understand why creative professionals are concerned about the future of their industry.

Source