A Far-Fetched Idea Comes True – On the History of AI
Artificial intelligence is currently delivering jaw-dropping results on highly complex tasks that, only a few years ago, hardly anyone expected to be performed by anything other than a highly trained human, if at all. In light of this, the main thing most people worry about today is AI algorithms becoming too powerful, threatening individual livelihoods or even mankind as a whole. Yet the rise of what is by now the most renowned subdomain of computer science has been far from seamless, which is reason enough to take a look back.
The first concepts of artificial beings endowed with humanlike intelligence date back to antiquity: a giant built from bronze that protected the isle of Crete made its way into Greek mythology, and automata capable of singing and dancing into Chinese lore. But it wasn't until 1941, when Konrad Zuse introduced the first programmable computer, the Z3, that a system emerged that was deemed to have the potential of one day hosting intelligent behaviour.
Recent findings in neurology had shown the brain to be a network of neurons that fire once they receive sufficient input. Coupled with Claude Shannon's newly founded information theory and Alan Turing's proof that any computation can be described digitally, this produced a broad consensus that constructing autonomous cognitive systems was feasible. It paved the way for the conception of the first artificial neural networks and, at the Dartmouth Conference in 1956, for artificial intelligence being established as a scientific field. A snowball effect set in, inaugurating what is nowadays referred to as the golden era of AI: researchers devised a multitude of algorithms, knowledge accumulated about which algorithm to select for which task, and computers kept getting faster. Optimism and expectations ran high at the time, which warranted the immense funding the field enjoyed.
By 1970, a machine with the intelligence of an average human was considered just within reach, a matter of merely three to eight years. But expectations and reality had already begun to diverge notably. The persistent shortage of computer memory, which made parameter-rich models impossible to build, proved an insurmountable obstacle and led to poor results on proof-of-concept tasks such as speech processing, logical reasoning and pattern recognition. Investors turned their backs on the research, and it subsequently took almost ten years for the AI spark to be ignited anew. The revival came when John Hopfield and David Rumelhart popularized neural-network learning techniques that enabled algorithms to learn from experience, and Edward Feigenbaum introduced the so-called expert system, designed to mimic the decision making of human experts and thereby provide novices with sophisticated solutions to their problems. This prompted Japan to launch a tremendous investment endeavour, the Fifth Generation Computer Project, which ran from 1982 to 1990 but once again fell short of its goal of taking artificial intelligence as a whole to the next level.
Free from public expectations, however, and despite the by now familiar lack of funding, AI prospered remarkably over the following decade, culminating in the defeat of chess world champion Garry Kasparov by IBM's Deep Blue in 1997 and the field finally gaining a firm foothold. The immense hardware improvements and new data sources that the beginning of the 21st century brought then opened entirely new doors and triggered an avalanche of state-of-the-art performances powered by machine learning. One that drew particular attention was the 2017 defeat of the world's top-ranked Go player by AlphaGo, a program developed by Google DeepMind; Go offers vastly more valid board positions than there are atoms in the entire universe.
Nowadays, there is hardly a scientific domain or big tech company that does not employ artificial intelligence to support its reasoning processes or to carry out dull yet somewhat demanding tasks. Everyday life is being noticeably automated, and even creative services are starting to receive support from it. The same evidently applies to us at Wandelbots: we incorporate machine-learning-driven algorithms into our product that infer the actual intention behind the noisy input a human path demonstration is bound to represent, making robot teaching even more seamless and further decreasing the time required to create complex processes.
I don’t know about you, but I’m psyched to see where the conscious development of this once downright surreal-seeming technology will take us a few steps further down the line.
“The new spring in AI is the most important development in computer science in my lifetime. Every month there are amazing new applications and transformative new techniques. But such powerful tools also bring new questions and responsibilities.”
Sergey Brin, Co-founder of Google
About the author
AI Working Student