Lately, artificial intelligence has become the hot topic in Silicon Valley and the broader tech scene. To people involved in that scene it feels like extraordinary momentum is building around the subject, with all sorts of companies building A.I. into the core of their business. There has also been a rise in A.I.-related university courses, which is sending a wave of extremely bright new talent into the job market. But this may not be a simple case of confirmation bias: interest in the topic has been on the rise since mid-2014.
The noise around the subject will undoubtedly increase, and for the layman it can all be very confusing. Depending on what you read, it is easy to believe that we are headed for an apocalyptic, Skynet-style obliteration at the hands of cold, calculating supercomputers, or that we are all going to live forever as purely digital entities in some kind of cloud-based artificial world. In other words, either The Terminator or The Matrix is imminently about to become disturbingly prophetic.
When I jumped on the A.I. bandwagon in late 2014, I knew almost nothing about it. Although I have been involved with web technologies for more than twenty years, I hold an English Literature degree and am more engaged with the business and creative possibilities of technology than with the science behind it. I was drawn to A.I. because of its positive potential, but when I read warnings from the likes of Stephen Hawking about the apocalyptic dangers lurking in our future, I naturally became as concerned as anybody else would.
So I did what I normally do when something worries me: I started learning about it so that I could understand it. More than a year's worth of constant reading, talking, listening, watching, tinkering and studying has led me to a pretty solid understanding of what it all means, and I want to spend the next few paragraphs sharing that knowledge in the hopes of enlightening anybody else who is curious but naively scared of this strange new world.
One thing I discovered was that A.I., as an industry term, has actually been around since 1956, and has had multiple booms and busts in that period. In the 1960s the A.I. industry was basking in a golden era of research, with Western governments, universities and big business throwing enormous amounts of money at the sector in the hope of building a brave new world. But in the mid-seventies, when it became apparent that A.I. was not delivering on its promise, the bubble burst and the funding dried up. In the 1980s, as computers became more widespread, another A.I. boom emerged, with similar levels of mind-boggling investment being poured into various enterprises. But, again, the sector failed to deliver and the inevitable bust followed.
To understand why these booms failed to stick, you first need to know what artificial intelligence actually is. The short answer to that (and believe me, there are very long answers out there) is that A.I. is a collection of overlapping technologies which broadly deal with how we use data to make a decision about something. It incorporates a variety of different disciplines and technologies (Big Data or the Internet of Things, anyone?) but the most important one is a concept called machine learning.
Machine learning essentially involves feeding computers large amounts of data and letting them analyse that data to extract patterns from which they can draw conclusions. You have probably seen this in action with face recognition technology (such as on Facebook or modern cameras and smartphones), where the computer can identify and frame human faces in photographs. To do this, the computers reference an enormous library of photos of people's faces, and have learned to spot the characteristics of a human face from shapes and colours averaged out across a dataset of many millions of different examples. The process is much the same for any application of machine learning, from fraud detection (analysing purchasing patterns from credit card purchase histories) to generative art (analysing patterns in paintings and randomly generating pictures using those learned patterns).
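To make that idea concrete, here is a deliberately tiny sketch of the "learn patterns from examples, then draw conclusions" loop in Python. It is nothing like the industrial face-recognition systems described above; the feature numbers and labels are entirely made up for illustration. It simply averages the feature vectors of labelled examples into one "pattern" per label, then classifies a new example by whichever learned pattern it sits closest to.

```python
def train(examples):
    """Learn one 'pattern' per label: the average of its feature vectors."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        total = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            total[i] += value
    return {label: [v / counts[label] for v in vec] for label, vec in sums.items()}


def classify(patterns, features):
    """Draw a conclusion: pick the label whose learned pattern is closest."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(patterns, key=lambda label: distance(patterns[label], features))


# Labelled training data: made-up [width, height] measurements of shapes.
training_data = [
    ([1.0, 1.1], "square"), ([0.9, 1.0], "square"),
    ([2.0, 0.5], "rectangle"), ([2.2, 0.4], "rectangle"),
]
patterns = train(training_data)
print(classify(patterns, [2.1, 0.45]))  # → rectangle
```

The point of the sketch is that nobody wrote a rule saying what a "square" is; the program averaged that out of the examples, which is the essence of the fraud-detection and face-recognition cases above, just at a vastly smaller scale.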