Artificial Intelligence is not a Magic Box

If you haven’t heard of Artificial Intelligence by now, you’ve been living under a rock. AI is everywhere, touted as a solution to all of humanity’s problems. However, despite the sexy term and AI’s household-name status, it is still very much a science that requires deep expertise to use correctly. One cannot simply feed data into a magic AI box and have it spit out useful, meaningful, and actionable output.

Before explaining why AI is not a magic box, let’s clear up some of the confusion around these terms.

In short, artificial intelligence (AI) is a general term that covers any task related to making machines smart. Despite all of our technological advancement, there are still tasks in which machines cannot match human performance, such as understanding natural language. AI’s ultimate goal is to make machines understand and respond to these tasks the way humans do.

Machine learning (ML) is a subset of AI and, as the name suggests, includes systems that empower machines to learn. ML covers a range of mathematical methodologies that enable machines to learn from experience. The experience is provided by us (humans), but we don’t intervene in the learning process; machines learn by themselves.

Deep Learning (DL) is a subset of ML and is responsible for most of the revolutionary AI technologies we have today. DL models are very large mathematical models built from many smaller blocks based on artificial neural networks (ANNs). ANNs are algorithms inspired by the human brain, hence the name neural network.

Natural language processing (NLP) includes ML tasks that aim to enable computers and humans to communicate. NLP allows computers to comprehend human language. Modern NLP models make use of DL and other neural-network-based models to understand unstructured language content.

Now that you are clear on the definitions (we know you have memorized them all by now…), let’s take a step back and talk about how ML models learn from experience. We said the experience should be provided by us humans. In practice, this means that we should provide examples of inputs and expected outputs for the task the machine is learning. Using these examples, the machine learns how to make predictions for scenarios it hasn’t yet seen.
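To make "learning from examples" concrete, here is a toy sketch in Python: we hand the machine a few labeled input/output pairs, and it labels an input it has never seen by finding the most similar known example. This is a deliberately crude 1-nearest-neighbour illustration of the idea, not any specific production algorithm, and the example texts are invented.

```python
def similarity(a, b):
    """A crude similarity measure: count the words two short texts share."""
    return len(set(a.lower().split()) & set(b.lower().split()))


def predict(examples, new_input):
    """Label an unseen input with the label of its most similar example."""
    best_text, best_label = max(examples, key=lambda ex: similarity(ex[0], new_input))
    return best_label


# The "experience" we (humans) provide: inputs paired with expected outputs.
training_examples = [
    ("worker slipped on wet floor", "slip"),
    ("employee fell from ladder", "fall"),
    ("chemical splash on hands", "chemical exposure"),
]

# The machine now makes a prediction for a scenario it hasn't seen.
print(predict(training_examples, "visitor slipped on icy walkway"))  # → slip
```

Real ML systems replace the word-overlap measure with learned mathematical models, but the contract is the same: examples in, generalization out.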

You must have heard the phrase “Garbage in, garbage out”. Well, that is exactly the case in machine learning.

For a specific example, let’s talk about EIDA, one of the NLP tools our data science team is developing. EIDA, or EHS Incident Description Analyzer, is an AI agent that specializes in analyzing, comprehending, and understanding Occupational Health and Safety (OHS) incident descriptions. To train EIDA for this task, we give her lots of incident description examples along with the outcomes we expect to extract from them (such as incident type, nature of injury, body part affected, etc.).

It should be clear now why we believe no ML algorithm is magic. To have a smart machine, we need good data to train it with. The task of preparing this data for training ML models is often called labeling. It involves carefully tagging lots of examples with the correct terms, and it needs to be done by humans. Training EIDA requires multiple types of data labeling.
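To show what labeling looks like in practice, here is a hedged sketch of one labeled training example for a tool like EIDA. The fields (incident type, nature of injury, body part affected) come from the description above; the exact schema, keys, and values are illustrative assumptions, not EIDA's real data format.

```python
# One human-labeled training example (hypothetical schema and values).
labeled_example = {
    "description": "Operator cut left index finger while changing a saw blade.",
    "labels": {
        "incident_type": "laceration",       # filled in by a human labeler
        "nature_of_injury": "cut",           # filled in by a human labeler
        "body_part_affected": "finger",      # filled in by a human labeler
    },
}

# A labeler repeats this for thousands of descriptions; the model then
# learns to extract these fields from unseen descriptions on its own.
print(labeled_example["labels"]["body_part_affected"])  # → finger
```

The quality of these human-provided labels is exactly where "garbage in, garbage out" bites: mislabeled examples teach the model the wrong thing.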

The magic comes once this labeling is complete: our EIDA system can then predict when a certain type of incident might happen, using current incident data. We are not saying that we can predict the future, but we are not saying we can’t…