Will the Technological Singularity Come Soon? Modeling the Dynamics of ...
We are currently in an era of escalating technological complexity and profound societal transformations, where artificial intelligence (AI) technologies exemplified by large language models (LLMs) have reignited discussions on the ‘Technological Singularity’. ‘Technological Singularity’ is a philosophical concept referring to an irreversible and profound transformation that occurs when AI capabilities surpass those of humans comprehensively. However, quantitative modeling and analysis of the historical evolution and future trends of AI technologies remain scarce, failing to substantiate the singularity hypothesis adequately. This paper hypothesizes that the development of AI technologies could be characterized by the superposition of multiple logistic growth processes. To explore this hypothesis, we propose a multi-logistic growth process model and validate it using two real-world datasets: AI Historical Statistics and Arxiv AI Papers. Our analysis of the AI Historical Statistics dataset assesses the effectiveness of the multi-logistic model and evaluates the current and future trends in AI technology development.
Additionally, cross-validation experiments on the Arxiv AI Paper, GPU Transistor, and Internet User datasets enhance the robustness of the conclusions derived from the AI Historical Statistics dataset. The experimental results reveal that around 2024 marks the fastest point of the current AI wave, and that deep learning-based AI technologies are projected to decline around 2035-2040 if no fundamental technological innovation emerges. Consequently, the technological singularity appears unlikely to arrive in the foreseeable future. We are in an era of technological explosion, where emerging technologies are proliferating at an unprecedented pace, profoundly impacting the global socio-economic landscape, industries, and cognitive paradigms. Among these technologies, Artificial Intelligence (AI) stands out as particularly transformative: it has had a strong impact on society, and its popularity has been rising since 1986 [1]. AI has a history spanning nearly 70 years, with its conceptual foundations laid at the Dartmouth Conference in 1956 [2].
Throughout this period, AI development has witnessed ‘three peaks and two troughs’, as shown in Fig 1, and we are presently in the third wave, characterized by the ‘Deep Learning’ era. Deep learning, a method adept at uncovering hidden patterns in large datasets and solving practical problems, has significantly influenced the global industrial chain. However, it has also inevitably been over-hyped by some media outlets and investors. It is therefore crucial to quantitatively model the historical development of AI technology and forecast its future trends. Such an approach allows us to comprehend the objective laws governing AI technology evolution and to evaluate its societal impact with greater rationality and composure. Since 2020, Large Language Models (LLMs) exemplified by the GPT series have emerged prominently [4], with the annual number of notable LLM releases growing explosively, as illustrated in Fig 2.
LLMs demonstrate remarkable capabilities in comprehending text, images, sounds, and even videos within the human domain, proficiently generating samples indistinguishable from ground truth [5, 6]. Notably, GPT-4 recently passed the medical license examination [7], prompting some researchers to speculate that it may have surpassed the ‘Turing Test’ [8]. These achievements have strengthened the public belief that the ‘technological singularity’ is drawing nearer. The technological singularity refers to the critical point at which the emergence of superintelligent AI drives an ‘intelligence explosion’, meaning that the development speed of artificial intelligence systems grows without bound. Nevertheless, as researchers in the AI community, it is imperative to recognize that we are still in the third wave of AI technology, nearing its zenith due to advancements such as LLMs.
Despite these strides, LLMs remain extensions of classic deep learning architectures such as Transformers [10] and BERT [11], lacking significant scientific theoretical breakthroughs. Moreover, they exhibit several unresolved limitations, such as hallucinations and high computational overhead [12, 13, 14]. They do not establish a complete understanding of the physical world but only mechanically summarize knowledge from massive data samples, rendering them less efficient at learning from sparse data. Reflecting on the history of AI development, discussions about the technological singularity have been persistent, recurring with each wave of AI advancement. As early as 1965, Good [15] posited that the AI singularity would likely arrive in the 20th century. Vinge [16] predicted that machines would surpass human intelligence between 2005 and 2030, while Yudkowsky [17] forecasted the arrival of the AI singularity in 2021.
Kurzweil [9] anticipated that human-level AI would emerge around 2029, with the singularity occurring in 2045. Conversely, other scholars have expressed skepticism about the imminence of the technological singularity. In 2017, an email survey of authors who had published papers at the NeurIPS and ICML conferences revealed that nearly half of the respondents doubted that the AI singularity would occur in the foreseeable future. In summary, experts hold diverse opinions on the future trajectory of AI technology. However, there remains a notable absence of widely accepted and effective quantitative methods to predict the future of AI; in particular, methods to model the historical development of AI and make reliable extrapolations about its future trends are lacking.
To address the aforementioned problems, we need an effective quantitative method, expressible mathematically, that characterizes the dynamics of AI technology development. As shown in Fig 2, we fit the cumulative number of LLM-related papers on the Arxiv website using both logistic and exponential growth processes. The logistic growth model aligns with past empirical predictions related to industrial or information technology development [19, 20], while the exponential growth model corresponds to the ‘technological singularity’ viewpoint [9]. In the experimental results, the logistic growth process exhibits a higher R-squared value than exponential growth, indicating a better fit to the real data points. Considering also the historical pattern of AI technology development characterized by ‘three peaks and two troughs’, it is evident that AI development cannot follow exponential growth indefinitely. Therefore, we hypothesize that the development dynamics of AI technology may be modeled as the superposition of multiple logistic growth processes.
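The comparison above can be sketched in a few lines of scipy. The snippet below fits a logistic and an exponential curve to synthetic S-shaped count data and compares their R-squared values; the data, parameter values, and starting guesses are illustrative assumptions, not the paper's actual Arxiv series or fitting setup.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    # S-curve saturating at carrying capacity K, with inflection at t0
    return K / (1.0 + np.exp(-r * (t - t0)))

def exponential(t, a, r):
    # unbounded growth, matching the 'singularity' viewpoint
    return a * np.exp(r * t)

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# synthetic cumulative counts following a noisy S-curve
# (a stand-in for the real Arxiv paper counts, which we do not have here)
t = np.arange(0.0, 12.0, 0.5)
rng = np.random.default_rng(0)
y = logistic(t, 1000.0, 1.2, 6.0) + rng.normal(0.0, 10.0, t.size)

p_log, _ = curve_fit(logistic, t, y, p0=[y.max(), 1.0, t.mean()], maxfev=10000)
p_exp, _ = curve_fit(exponential, t, y, p0=[1.0, 0.5],
                     bounds=([1e-6, 1e-6], [2000.0, 2.0]), maxfev=10000)

r2_log = r_squared(y, logistic(t, *p_log))
r2_exp = r_squared(y, exponential(t, *p_exp))
print(f"logistic R^2 = {r2_log:.4f}, exponential R^2 = {r2_exp:.4f}")
```

On S-shaped data the exponential model cannot reproduce the saturation phase, so its R-squared falls below the logistic fit, mirroring the argument above.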
Building on this hypothesis, we propose the multi-logistic model to fit the annual cumulative numbers of famous AI systems in the AI Historical Statistics dataset. Our model significantly outperforms other models, providing preliminary validation for our hypothesis. We then conduct a comprehensive analysis of the parameters and derivative characteristics of the proposed model to forecast future trends in AI technology. Additionally, we focus on the current third wave of AI, conducting cross-validation experiments using the AI Arxiv Paper dataset. The results from the cross-validation experiments align closely with our findings from the AI Historical Statistics dataset: the fastest point of the current AI wave is anticipated around 2024, but without further theoretical breakthroughs, deep learning-based AI technologies may begin to decline around 2035-2040.
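A superposition of logistic waves can be written as a sum of logistic terms, one per wave. The sketch below fits such a model to synthetic two-wave data with scipy; the number of waves, the parameter values, and the years are made-up assumptions for illustration, not the paper's fitted results.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_logistic(t, *params):
    # superposition of logistic waves; params is a flat sequence of
    # (K_i, r_i, t0_i) triples, one per hypothesized wave
    y = np.zeros_like(t, dtype=float)
    for K, r, t0 in zip(params[0::3], params[1::3], params[2::3]):
        y += K / (1.0 + np.exp(-r * (t - t0)))
    return y

# synthetic two-wave cumulative series, a stand-in for the real
# AI Historical Statistics counts (all values here are invented)
t = np.linspace(1956.0, 2024.0, 69)
rng = np.random.default_rng(1)
y = multi_logistic(t, 50.0, 0.3, 1985.0, 300.0, 0.25, 2015.0) \
    + rng.normal(0.0, 3.0, t.size)

# one (K, r, t0) starting triple per wave we expect to find
p0 = [60.0, 0.2, 1980.0, 250.0, 0.2, 2010.0]
popt, _ = curve_fit(multi_logistic, t, y, p0=p0, maxfev=20000)

# each wave's inflection year t0 is where that wave grows fastest
inflection_years = sorted(popt[2::3])
print(inflection_years)
```

The fitted t0 of the most recent wave plays the role of the "fastest point" discussed above, and each wave's saturation is what a decline in new-system counts would look like in this framing.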
The technological singularity, often simply called the singularity,[1] is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization.[2][3] According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing an explosive increase in intelligence. Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence could result in human extinction.[5][6] The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated. Prominent technologists and academics dispute the plausibility of a technological singularity and an associated artificial intelligence "explosion", including Paul Allen,[7] Jeff Hawkins,[8] John Holland, Jaron Lanier, Steven Pinker,[8] Theodore Modis,[9] Gordon Moore,[8] and Roger Penrose.[10] Stuart J. Russell and Peter Norvig observe that in the history of technology, improvement in a particular area tends to follow an S curve: it begins with accelerating improvement, then levels off without continuing upward. Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper "Computing Machinery and Intelligence" argued that a machine could, in theory, exhibit intelligent behavior equivalent to or indistinguishable from that of a human.[12] The Hungarian–American mathematician John von Neumann (1903–1957) is the first known person to discuss a coming "singularity" in technological progress.[14][15] Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race".
By one major metric, artificial general intelligence is much closer than you think.
In the world of artificial intelligence, the idea of “singularity” looms large. This slippery concept describes the moment AI exceeds human control and rapidly transforms society. The tricky thing about the AI singularity (and why it borrows terminology from black hole physics) is that it is enormously difficult to predict where it begins and nearly impossible to know what lies beyond this technological event horizon. However, some AI researchers are on the hunt for signs of reaching singularity, measured by AI progress approaching skills and abilities comparable to a human's. One such metric, defined by Translated, a Rome-based translation company, is an AI's ability to translate speech at the accuracy of a human. Language is one of the most difficult AI challenges, but a computer that could close that gap could theoretically show signs of Artificial General Intelligence (AGI).