Artificial Intelligence has long captured the imaginations of pop culture auteurs. And so much of what was first imagined in books, movies and television has come to pass that we can be forgiven our dystopian fears.

But truly, robots are not coming to take our jobs. In fact, the artificial intelligence that is already expanding our economy is doing so because it eliminates the dull, dirty and dangerous work that people aren’t that great at anyway. AI takes dictation, makes appointments, calculates the most efficient delivery routes, all tasks that leave humans time for crucial interpersonal and creative work.

Think of it this way: artificial intelligence is less Rosie from The Jetsons and more your personal Roomba.

That’s because artificial intelligence as it has been imagined in science fiction so far is what computer scientists call general intelligence—machines capable of human-like intelligence, including complex problem-solving, socio-emotional engagement and creativity. Artificial general intelligence is, as of now, hypothetical.

The Evolution of AI

1950: Isaac Asimov publishes I, Robot, a collection of science fiction short stories that imagines the future of intelligent machines. It also lays out the three laws of robotics, the first being, “A robot may not injure a human being.”
1950: Alan Turing publishes the academic paper Computing Machinery and Intelligence, which proposes that machines could use existing information and logic to solve problems. But computers are not yet sophisticated enough to do the work.
1956: Scientist John McCarthy introduces the term “artificial intelligence” at the Dartmouth Summer Research Project on Artificial Intelligence.
1963: The U.S. Defense Department funds the MIT Artificial Intelligence lab, hoping AI might give it the edge in the Cold War.
1965: The MIT lab develops ELIZA, a program that simulates a conversation between man and machine but falls short of true understanding.
1965: Gordon Moore, later a co-founder of Intel, observes that the number of transistors on a computer chip doubles every year. Blue skies ahead for AI.
1968: 2001: A Space Odyssey features HAL 9000, an intelligent computer that is “foolproof and incapable of error.”
1970: AI scientist Marvin Minsky tells Life magazine that we will develop a computer with the general intelligence of a human being within three to eight years.
1973: Progress in artificial intelligence hits a wall, due to a lack of computing power. A prominent mathematician predicts machines will never outmatch human chess players. Funding is slashed. An AI winter ensues.
1980: The algorithmic toolkit expands. “Deep learning,” the concept of allowing computers to learn from experience, becomes popular. Advances in neuroscience fuel AI innovations.
1981: Businesses begin to imagine commercial applications for “narrow AI.” The first such system, called R1, begins operating at Digital Equipment Corporation, helping configure orders for new computing systems. By 1986 it is saving DEC $40 million a year.
1986: Carnegie Mellon builds the first autonomous car.
1997: IBM’s supercomputer Deep Blue beats chess champion Garry Kasparov, showing the power of AI when it takes on specific problems with clear sets of rules.
2002: Robotics company iRobot introduces the first mass-market in-home robot, the Roomba.
2006-2007: Google helps popularize cloud computing while Intel packs multiple processing cores onto single chips. The two advances, paired with an explosion of personal data, pave the way for great leaps forward in artificial intelligence.
2011: IBM is back at it with Watson, an artificial intelligence trained for three years to recognize patterns in questions and answers. It beats record-setting champion Ken Jennings on Jeopardy!
2014: Amazon introduces its Alexa personal assistant. Google follows in 2016, putting its “Hey Google” assistant on phones.

AI and its applications

What we’re working with, and what’s working for us, is narrow artificial intelligence. Narrow artificial intelligence—also known as “weak AI”—refers to programs that do very specific tasks.

Computer scientists start small by “training” algorithms to complete particular tasks, feeding them reams of “structured data”: labeled data that may include categories and numbers. The more data the algorithms are fed, and the more time they’re given to fine-tune their performance, the more effective they become. This process is known as machine learning.
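To make that concrete, here is a minimal sketch of training on structured, labeled data, written in Python with the scikit-learn library. The patient records and follow-up labels below are invented purely for illustration.

```python
# A minimal sketch of supervised machine learning on structured (labeled) data,
# using the scikit-learn library. The patient records below are invented.
from sklearn.tree import DecisionTreeClassifier

# Each row is one patient: [age, days since last cleaning, number of fillings]
patients = [
    [25, 180, 0],
    [40, 720, 3],
    [33, 365, 1],
    [62, 1100, 5],
]
# The labels the algorithm learns from: 1 = needed a follow-up visit, 0 = did not
needed_followup = [0, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(patients, needed_followup)        # the "learning" step

# Ask the trained model about a new, unseen patient
print(model.predict([[45, 800, 2]]))        # e.g. [1] -> likely needs a follow-up
```

The more rows like these the model sees, the better its guesses tend to become, which is the fed-more-data effect described above.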

More recently, computer scientists have refined the workings of neural networks. Based on the workings of the human brain, neural networks are able to tackle “unstructured data”—text, images, video and more—in a way that unearths connections without human instruction. Pinterest, the image-search site that allows people to create digital inspiration boards, recently told the Wall Street Journal that neural networks are driving 100 percent of its growth by drawing conclusions about what kinds of images people are most likely to pin.
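As a rough, simplified (and supervised) sketch of that idea, the Python snippet below feeds a handful of free-text notes into a small neural network via scikit-learn. The notes and labels are invented; real systems train on vastly more data.

```python
# A rough sketch: a small neural network learning from unstructured text.
# The notes and labels are invented; real systems use vastly more data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier

notes  = ["patient reports sharp tooth pain",
          "routine cleaning, no issues found",
          "swelling near lower molar",
          "six-month checkup, healthy gums"]
urgent = [1, 0, 1, 0]   # 1 = needs prompt attention

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(notes).toarray()   # turn raw text into numbers

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, urgent)

new_note = vectorizer.transform(["patient reports pain and swelling"]).toarray()
print(net.predict(new_note))   # e.g. [1] -> flagged as urgent
```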

Say what now?

A layperson’s guide to key AI terminology

Algorithm

A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer. A recipe is a kind of algorithm that human beings follow. In computer science, an algorithm is usually asked to solve a discrete problem, like sorting labeled information into groups.
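As a toy illustration of that last point, the short Python snippet below follows a fixed set of steps to sort labeled records into groups. The records themselves are made up.

```python
# A toy algorithm: follow fixed steps to sort labeled records into groups.
# The records are invented for illustration.
records = [
    ("cleaning", "routine"),
    ("root canal", "urgent"),
    ("checkup", "routine"),
    ("extraction", "urgent"),
]

groups = {}
for procedure, label in records:      # step 1: look at each record in turn
    groups.setdefault(label, [])      # step 2: create the group if it is new
    groups[label].append(procedure)   # step 3: file the record under its label

print(groups)   # {'routine': ['cleaning', 'checkup'], 'urgent': ['root canal', 'extraction']}
```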

Artificial Intelligence

The science and engineering of building computers that mimic the problem-solving and decision-making capabilities of the human mind.

Automatic Speech Recognition

A subset of NLP that transcribes human speech into data. A device records spoken words into a digitized audio file; artificial intelligence then maps the phonemes (the sounds we make) to words stored in its database.
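In very simplified form, that last matching step can be pictured as a lookup from phoneme sequences to known words, as in the Python toy below. Real systems use statistical models and far richer phonetic representations than this made-up notation.

```python
# A deliberately tiny caricature of the "match phonemes to words" step.
# The phoneme spellings and the word list are invented for illustration.
word_database = {
    ("T", "UW", "TH"): "tooth",
    ("K", "L", "IY", "N"): "clean",
    ("F", "IH", "L"): "fill",
}

heard_phonemes = ("T", "UW", "TH")                     # output of the audio front end
print(word_database.get(heard_phonemes, "<unknown>"))  # -> "tooth"
```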

Deep Learning

A subset of machine learning, deep learning eliminates some of the human intervention needed in simpler machine learning. A neural network with more than three layers, counting its input and output layers, can be considered a “deep learning” algorithm.
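For a sense of what “more than three layers” looks like in practice, here is a minimal sketch using the Keras API from TensorFlow. The layer sizes are arbitrary and the model is left untrained.

```python
# A minimal "deep" network: an input layer, several hidden layers, and an output,
# sketched with the Keras API. Layer sizes here are arbitrary.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),             # 10 input features
    tf.keras.layers.Dense(16, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(16, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(8, activation="relu"),    # hidden layer 3
    tf.keras.layers.Dense(1, activation="sigmoid"), # output: a single yes/no score
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```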

Machine learning

The use and development of computer systems that are able to learn and adapt without following explicit instructions, by using algorithms and statistical models to analyze and draw inferences from patterns in data.

Natural Language Generation

The branch of artificial intelligence that works to give computers the ability to produce language in a way that makes sense to human beings. Digital assistants like Alexa employ NLG to respond to our requests.

Natural Language Processing

The branch of artificial intelligence dedicated to giving computers the ability to break down and understand human language. NLP can be used to turn text into structured data, as in breaking down chart notes and filing them into a searchable database. It can also be used to help computers understand spoken commands.
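As a rough illustration of turning text into structured data, the Python sketch below pulls a couple of fields out of a free-text chart note with simple pattern matching. Real NLP systems use trained language models rather than hand-written patterns, and the note itself is fabricated.

```python
# A rough illustration of turning free text into structured data with simple
# pattern matching. Real NLP uses trained models; this note is fabricated.
import re

note = "Patient: Jane Doe. Tooth 14 shows decay. Recommend filling at next visit."

structured = {
    "patient": re.search(r"Patient:\s*([A-Za-z ]+)\.", note).group(1),
    "tooth":   int(re.search(r"Tooth\s+(\d+)", note).group(1)),
    "plan":    "filling" if "filling" in note.lower() else "none",
}
print(structured)  # {'patient': 'Jane Doe', 'tooth': 14, 'plan': 'filling'}
```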

Neural Network

A series of algorithms that aims to understand relationships in a set of data by filtering information through a series of “neurons.” Each neuron is programmed with its own bias and weights so that when it is given an information “input” it produces a distinct “output.” Working together, these neurons can recognize hidden patterns and correlations in raw data, cluster and classify it, usually without human intervention. Over time, the networks can learn and improve.
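A single artificial “neuron” of the kind described above can be written in a few lines of Python. The weights and bias below are set by hand just to show the mechanics; a real network learns them from data.

```python
# One artificial neuron: weighted inputs plus a bias, squashed into an output.
# The weights and bias are set by hand here; training would learn them instead.
import math

def neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))   # sigmoid: squash to a value between 0 and 1

# Two input signals, two hand-picked weights, one bias
print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))  # a number between 0 and 1
```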

Predictive Analytics

A statistical tool in which people use past and present data to predict future outcomes. AI can run predictive analytics without human intervention to forecast price fluctuations, tooth decay over time, or your estimated time of arrival during rush hour.
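A bare-bones version of that idea, fitting a line to past data and extending it forward, can be sketched in Python with numpy. The monthly patient counts are invented.

```python
# Bare-bones predictive analytics: fit a trend line to past data, extend it forward.
# The monthly patient counts are invented for illustration.
import numpy as np

months = np.array([1, 2, 3, 4, 5, 6])
patients_seen = np.array([210, 220, 228, 241, 250, 262])

slope, intercept = np.polyfit(months, patients_seen, deg=1)   # fit a straight line
forecast_month_7 = slope * 7 + intercept
print(round(forecast_month_7))   # a rough prediction for next month's volume
```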

Structured Data

Data that is organized into a particular format for storage, usually a database, so that the information can be more efficiently processed and analyzed. Data on the patients a practice has seen in the past six months, for example, may be organized into a spreadsheet. That data can be analyzed to predict trends like patient volume, typical diagnoses and supplies needed.
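In code, that kind of spreadsheet-style analysis often looks like the short pandas sketch below. The table and its columns are invented for illustration.

```python
# Structured data: rows and named columns, easy to summarize. The table is invented.
import pandas as pd

visits = pd.DataFrame({
    "month":     ["Jan", "Jan", "Feb", "Feb", "Feb", "Mar"],
    "diagnosis": ["caries", "gingivitis", "caries", "caries", "checkup", "gingivitis"],
})

print(visits.groupby("month").size())          # patient volume per month
print(visits["diagnosis"].value_counts())      # most common diagnoses
```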

Unstructured Data

Data that hasn’t been organized into a database or other format. It is usually text-heavy but might also include dates and numbers. Medical charts, dentists’ notes and emails are all examples of unstructured data.

Narrow AI is already automating some work tasks in a wide variety of fields, and could automate about 50% of work tasks across all sectors of employment, according to the consulting firm McKinsey. Very few jobs—about 5%—could be fully automated. In that gap lies the future of many industries. AI can lift productivity and catalyze growth, but to do so, workers across industries will have to learn new skills and polish existing ones.

In fact, healthcare is among the sectors primed for the most job growth. There, AI is being harnessed to recognize cancer cells, personalize diagnoses and find new applications for FDA-approved drugs. As AI takes over the tasks that computers do best, humans can shift to what they do best: applying expertise, communicating with empathy and managing people.

Demand for first-line providers, including doctors, nurses, health aides and pharmacists, will grow by 30% in the United States. But not all jobs in healthcare are predicted to grow. McKinsey forecasts that support roles like front-office workers will decline by 20%, in part because many front-office tasks can be automated.

In our next section, we’ll dive deep into just how this shift toward automation is changing healthcare and dentistry for the better.