Generative AI: Concerns, Usage, Challenges, Opportunities and Sentiments
The field of deep generative modeling has grown rapidly in the last few years. With the availability of massive amounts of training data coupled with advances in scalable unsupervised learning paradigms, recent large-scale generative models show tremendous promise in synthesizing high-resolution images and text, as well as structured... However, we argue that current large-scale generative AI models exhibit several fundamental shortcomings that hinder their widespread adoption across domains. In this work, our objective is to identify these issues and highlight key unresolved challenges in modern generative AI paradigms that should be addressed to further enhance their capabilities, versatility, and reliability. By identifying these challenges, we aim to provide researchers with insights for exploring fruitful research directions, thus fostering the development of more robust and accessible generative AI solutions. The past few years have demonstrated the immense potential of large-scale generative models to create powerful AI tools capable of impacting society profoundly.
Large Language Models (LLMs) (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023; Rae et al., 2021) and their dialogue agents, such as ChatGPT (OpenAI, 2023) and Llama 3 (Grattafiori et al., 2024), have enabled the development of highly effective text generation systems that produce coherent, contextually relevant, and user-tailored outputs across a wide range of use cases. Similarly, advances in diffusion models (Sohl-Dickstein et al., 2015; Song et al., 2020; Ho et al., 2020) have led to breakthroughs in image synthesis tasks, such as large-scale text-to-image generation (Ramesh et al.,... These successes show that highly effective AI systems can be built using a relatively straightforward recipe: combining simple generative modeling paradigms (Larochelle & Murray, 2011; Sohl-Dickstein et al., 2015) with successful network architectures (Vaswani... The impact of generative AI has not been limited to text and image generation applications. It has fueled accelerated progress across a variety of research fields and practical applications, spanning from biology (Jumper et al., 2021) to weather forecasting (Ravuri et al., 2021), from code generation (Chen et al.,...
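As a concrete illustration of the diffusion paradigm mentioned above, the DDPM-style forward process (Ho et al., 2020) corrupts data with Gaussian noise in closed form; the generative model is then trained to reverse this corruption. The sketch below is a minimal NumPy illustration under stated assumptions, not any paper's implementation; the linear noise schedule and the toy 8x8 "image" are illustrative choices.

```python
import numpy as np

def forward_diffuse(x0, t, betas):
    """Sample x_t ~ q(x_t | x_0) in closed form for a DDPM-style
    forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps."""
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)[t]     # product of (1 - beta_s) up to step t
    eps = np.random.randn(*x0.shape)      # standard Gaussian noise
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

# Illustrative linear noise schedule over 1000 steps (an assumption, not a fixed choice).
betas = np.linspace(1e-4, 0.02, 1000)
x0 = np.random.rand(8, 8)                 # a toy "image"
x_noisy = forward_diffuse(x0, t=999, betas=betas)
# By the final step, alpha_bar is tiny, so x_t is close to pure Gaussian noise;
# the learned reverse process maps such noise back toward the data distribution.
```

The closed-form sampling is why training scales well: any timestep can be reached in one step, without simulating the full noise chain.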
Amidst the excitement and anticipation surrounding this new wave of Deep Generative Models (DGMs) (in this paper, we refer to Generative AI as a collection of large-scale DGMs and use the term DGM henceforth), it is easy to overlook the new set of challenges they introduce. Unlike many traditional machine learning models, DGMs generate outputs in very high-dimensional spaces, which introduces several technical complexities. These include significantly increased computational demands, a need for larger datasets to accurately capture the underlying data distribution, and challenges in effectively evaluating and interpreting the generated outputs. While significant progress has been made in improving interpretability and computational efficiency for traditional models (Marcinkevics & Vogt, 2020), these existing methods are frequently ill-suited for DGMs, at least in part because of... Consequently, there is a pressing need for the development of a new set of techniques and tools tailored to these models, particularly to enable efficient inference, interpretability, and quantization.
These challenges lead us to conclude that scaling up current paradigms is not, in isolation, the ultimate path towards a perfect generative model. While increasing model size and training data can enhance performance on benchmarks, it does little to address the fundamental shortcomings of DGMs, such as their inefficiency, lack of inclusivity, limited transparency, and barriers to... This work offers a collection of views and opinions from different communities about these key unresolved challenges in generative AI, with the ultimate goal of guiding future research toward what we perceive are the... Concretely, we discuss key challenges in (a) broadening the scope and adaptability of DGMs, i.e., their ability to robustly generalize across different domains and modalities (Section 2); (b) improving their efficiency and resource utilization,... This paper emerged as a result of the Dagstuhl Seminar on Challenges and Perspectives in Deep Generative Modeling (https://www.dagstuhl.de/23072) held in Spring 2023. By outlining a comprehensive roadmap of the current state and open challenges of generative AI, we hope to empower researchers and practitioners alike, fostering the development of generative AI models that are not only...
Generative Artificial Intelligence (Gen-AI) is a new advancement that has revolutionized Natural Language Processing (NLP) and Large Language Models (LLMs). This change impacts various aspects of life, stimulating progress in industry, education, and healthcare. This survey presents the potential applications of Gen-AI across various sectors, highlighting the risks and opportunities. Some of the most pressing challenges include ethical considerations, the rise of disinformation (including deepfakes), concerns over Intellectual Property (IP) rights, cybersecurity risks, and bias and discrimination. The survey also covers the fundamental models of Gen-AI, such as Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), and transformers.
These frameworks are extremely important in various sectors, including medical imaging, drug discovery, and personalized medicine, and offer valuable insights into the future of technological advancements in the scientific community. The study contributes substantially by exploring positive elements and addressing the challenges of adequately deploying Gen-AI models. Using these insights, we hope to provide comprehensive knowledge of the potential challenges and complexities associated with the widespread implementation of artificial intelligence technologies. Artificial Intelligence (AI) [1, 2] is a rapidly expanding domain of computer science that deals with all aspects of emulating cognitive functions to solve problems in the real world and develop computers that can... Often considered the oldest field of computer research, it is commonly referred to as machine intelligence [4] to differentiate it from human intelligence [5].
According to Tenenbaum et al. [6], the field is centered on cognitive and computer science. AI is currently receiving great attention because of the achievements made in Machine Learning (ML). Throughout the history of AI, there has always been a solid connection to explainability. In 1958, McCarthy described his Advice Taker as a “program with common sense” [7]. This was possibly the first time common-sense reasoning abilities were proposed as the cornerstone of AI.
Rather than only focusing on solving pattern recognition problems, artificial intelligence systems should be able to construct causal models of the world that support explanation and comprehension, according to recent research [8]. The continual advance of AI technology has led to the introduction of new LLMs such as GPT, PaLM, and Llama [2]. These models fall under the category of Gen-AI, showcasing significant progress in NLP capabilities. They employ neural architectures capable of processing labeled, unlabeled, or semi-supervised data via different learning methods. Adopting advanced transformer architectures characterized by encoder-decoder structures empowers LLMs to process different data modalities, including text, visual, and audio information. This versatility highlights how LLMs are key contributors to the ongoing wave of digital transformation [9].
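The transformer architectures mentioned above are built around scaled dot-product attention (Vaswani et al., 2017), in which every token's output is a weighted mixture of all tokens' values. The following is a minimal NumPy sketch of that single operation, not a full encoder-decoder; the toy shapes (3 tokens, embedding dimension 4) are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax over keys
    return weights @ V                               # weighted mixture of values

# Toy example: 3 tokens with embedding dimension 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(Q, K, V)   # shape (3, 4)
```

Because the same operation applies to any sequence of embedding vectors, the mechanism is modality-agnostic: text tokens, image patches, and audio frames can all be attended over once embedded.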
Posted by Isobel Bartlett, James Gikas, Ngo Suet Hon, Irakli Kupatadze and Mariana Shchotkina | Aug 20, 2025 | Computer Science

Generative Artificial Intelligence (GenAI) is increasingly reshaping a wide range of sectors, including business, healthcare and education, through its ability to generate personalised content and support complex tasks. This paper provides an overview of GenAI’s development from early neural networks to advanced transformer-based models, highlighting its rapid adoption following the release of ChatGPT in 2022. While the benefits of GenAI are substantial – enhancing efficiency, creativity and innovation – its accelerated deployment also raises pressing ethical, social and environmental concerns. These include high energy consumption, electronic waste, privacy breaches, algorithmic bias and the spread of misinformation. Psychological impacts, such as artificial intimacy and overreliance on AI for mental health support, further complicate its use.
The paper also considers GenAI’s potential to transform the future of work and support sustainability goals. Ultimately, it calls for a balanced approach to GenAI development – one that fosters innovation while ensuring transparency, fairness and long-term sustainability. Generative Artificial Intelligence (GenAI) represents a transformative breakthrough within the world of technology. GenAI focuses on generating new creative content – such as text, images, audio, code or video – something that extends far beyond the previous capabilities of existing AIs (Stryker et al., 2025). Unlike traditional AI models, which attempt to distinguish or predict categories within data, generative AI models learn the patterns and relationships within massive datasets and use this knowledge to produce original content in response... The emergence of GenAI has enabled people to train these models to learn complex subjects, including human language, programming, art and biochemistry, and apply this understanding to craft innovative outputs that mimic human creativity.
The most prevalent types of GenAI models include Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs) and modern language models such as Generative Pre-Trained Transformers (GPT) (Lawton, 2025). These models rely on machine learning, utilising neural networks to encode observed data structures in order to generate new, similar content (Njoroge, 2025). While today’s strong interest in GenAI, shared by both consumers and businesses alike, was sparked by the rise of ChatGPT in 2022 (Marr, 2023), the technology used by OpenAI to develop the GPT models... In the late 1980s to the 1990s, the AI field advanced with the development of Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks (Bernard, 2023). These networks were able to process sequential data, making them suitable for tasks like speech and language modelling (Marr, 2023). In 2014, this was enhanced with the advent of GANs, pitting two networks – a generator and a discriminator – against each other to generate higher-quality images and video (Stryker et al., 2025).
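The adversarial game described above can be made concrete through the original GAN objectives (Goodfellow et al., 2014): the discriminator is rewarded for separating real samples from generated ones, while the generator is rewarded for fooling it. A minimal sketch, assuming toy discriminator probabilities rather than an actual trained network, and using the common non-saturating generator loss:

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Discriminator maximizes log D(x) + log(1 - D(G(z)));
    we return the negated objective so it can be minimized."""
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))

def generator_loss(d_fake):
    """Non-saturating generator loss: maximize log D(G(z))."""
    eps = 1e-12
    return -np.mean(np.log(d_fake + eps))

# Toy scores: D's estimated probability that each sample is real.
d_real = np.array([0.9, 0.8, 0.95])   # on real data, ideally near 1
d_fake = np.array([0.1, 0.2, 0.05])   # on generated data, D wants these near 0
print(discriminator_loss(d_real, d_fake))  # small: D is currently winning
print(generator_loss(d_fake))              # large: G has strong incentive to improve
```

Training alternates gradient steps on the two losses, which is what "pitting two networks against each other" means in practice: each network's improvement raises the other's loss.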
Three years later, the transformer architecture was published, leading to more sophisticated developments in natural language processing, including OpenAI publishing their prototype for the GPT model (ibid.).

Authors: Daswin de Silva; Okyay Kaynak; Mona El-Ayoubi; Nishan Mills; Damminda Alahakoon; Milos Manic

Generative artificial intelligence (Generative AI) is transforming the way we live and work. Following several decades of artificial narrow intelligence, Generative AI is signaling a paradigm shift in the intelligence of machines, an increased generalization capability with increased accessibility and equity for nontechnical users. Large language models (LLMs) are leading this charge, specifically conversational interfaces, such as ChatGPT, Gemini, Claude, and Llama (large language model meta AI). Besides language and text, robust and effective Generative AI models have emerged for all other modalities of digital data: image, video, audio, code, and combinations thereof.
This article presents the opportunities and challenges of Generative AI in advancing industrial systems and technologies. The article begins with an introduction to Generative AI, which includes its rapid progression to state-of-the-art, the deep learning algorithms, large training datasets, and computing infrastructure used to build Generative AI models, as well... The contribution, value, and utility of Generative AI is presented in terms of its four capabilities of accelerating academic research, augmenting the learning and teaching experience, supporting industry practice, and increasing social impact. The article concludes with an expeditious message to the academic research and industry practitioner communities to invest time and effort in the training, adoption, and application of Generative AI, with consideration for AI literacy... As generative AI becomes increasingly integrated into research workflows, it brings both transformative potential and pressing challenges. This table highlights key benefits, such as enhanced productivity, personalization, and scientific discovery, alongside critical concerns, including data privacy, misinformation, and ethical use.
Understanding both dimensions is essential for researchers to engage responsibly with AI tools and contribute to the ongoing conversation about their role in academia and society.