Computational linguistics: Can AI listen like a human?

Tom Shepherd

The history and future of natural language processing

We’ve taught computers to process information faster than humans. But when it comes to conversations, processing raw data isn’t enough.

People don’t always articulate every thought. Subtext matters. Will AI evolve to understand these nuances?

First, let’s look at where we came from to understand how AI linguistics will develop in the future. 

Conversation analytics has developed from merely recognizing words and strings of words to detecting meaning, analyzing sentiment, and recognizing semantics.

In the future, AI may help technology process nuanced conversations as easily as a computer can churn out math equations. 

What is computational linguistics?

Computational linguistics is the science of applying formal logic and mathematics to the understanding of natural language. 

It begins with the simplest meaningful component of any natural language: the word.

The first phase of computational linguistics: keyword detection 

Words carry meaning. Single words refer to the objects, actions, and properties of the natural world. Combine certain words and you begin to form meaning. 

You can classify words topically according to what they reference. Nouns represent objects or things. Verbs represent actions. Adjectives and adverbs represent the properties of things and actions. A variety of prepositions and conjunctions link these elements together. 

The mere presence of certain words together may be sufficient to convey meaning. As a result, computational linguistics often starts with detecting important keywords, a basic form of text analytics. 

Keyword detection models detect specific words or groups of words and use them to infer a text's meaning. Although words are a logical starting point for computational linguistics and language understanding, the value of words by themselves has proven extremely limited. 

Individual words, for example, can have many different meanings and even serve as different parts of speech in a sentence. 
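To make keyword detection concrete, here is a minimal sketch in Python: a few topics are mapped to hand-picked keyword sets, and an utterance is tagged with every topic whose keywords appear. The topics and keyword lists are illustrative, not an actual production lexicon.

```python
# Minimal keyword-detection sketch: map each topic to a keyword set
# and tag an utterance with every topic whose keywords appear.
# Topics and keywords are illustrative stand-ins.

TOPIC_KEYWORDS = {
    "billing": {"invoice", "charge", "refund", "bill"},
    "cancellation": {"cancel", "terminate", "close"},
}

def detect_topics(utterance: str) -> set[str]:
    """Return every topic whose keyword set intersects the utterance's tokens."""
    tokens = set(utterance.lower().split())
    return {topic for topic, kws in TOPIC_KEYWORDS.items() if tokens & kws}

print(detect_topics("I want to cancel my account and get a refund"))
```

Note how brittle this is: the model matches surface forms only, so "cancellation" in the utterance would be missed while "cancel" is caught, which is exactly the limitation the next sections address.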

Understanding word structure and context

The presence of other words, and their proximity and order, provide important context for understanding the meaning. Word order, or the coupling of different words, can alter the meanings of the individual words by providing a specific context for interpretation. 

For example, by itself the word “fly” may evoke a small insect or the act of flight, while “ball” may evoke a spherical object or a formal party. 

When put together as “fly ball”, the meaning changes to indicate the arcing trajectory of a baseball struck by a batter, rather than the act of flying or the ball itself. 

For this reason, Noam Chomsky and others developed formal rules to account for the influence of structure and context in understanding language. 

These rules help derive the meaning of a sentence by parsing and deconstructing it into simpler elements.

You can break a sentence down into a subject, verb, and object to get its basic, direct meaning. Part-of-speech tagging can help make sense of basic text data. 
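A toy version of this rule-based parsing can be sketched with a hand-written part-of-speech lexicon: tag each word, then read off the first noun as subject, the first verb as the action, and the second noun as the object. The lexicon and the rigid SVO rule are illustrative simplifications, not a real tagger.

```python
# Toy subject-verb-object extraction over a hand-written
# part-of-speech lexicon -- a sketch of rule-based parsing,
# not a production tagger.

LEXICON = {
    "the": "DET", "dog": "NOUN", "cat": "NOUN",
    "chased": "VERB", "saw": "VERB",
}

def svo(sentence: str):
    """Extract a (subject, verb, object) reading, or None if the pattern fails."""
    tagged = [(w, LEXICON.get(w.lower(), "UNK")) for w in sentence.split()]
    nouns = [w for w, t in tagged if t == "NOUN"]
    verbs = [w for w, t in tagged if t == "VERB"]
    # First noun = subject, first verb = action, second noun = object.
    if len(nouns) >= 2 and verbs:
        return {"subject": nouns[0], "verb": verbs[0], "object": nouns[1]}
    return None

print(svo("The dog chased the cat"))
```

Even this tiny example shows why such rules fall short as a general model: any sentence that deviates from the expected pattern, or uses a word outside the lexicon, defeats the parser.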

Analyzing word combinations represented a major advance in linguistics and computational linguistics. But it fell short as a general model for understanding language. 

As with the meanings of words, linguists found that context also influences the meanings of basic elements of a sentence. 

Linking abstract concepts to phrases 

Word combinations are part of complex relationships. Their meanings can change based on real-world knowledge – even when no one says a word related to the concept. 

Consider the sentence, “Abraham Lincoln ran for the U.S. House of Representatives in 1843.” 

Parsing this sentence gives us “Abraham Lincoln” as the actor or subject, “ran for” as the action, and “U.S. House of Representatives” as the object of that action. 

Yet, without knowledge of what “running for office” means in the broader sense, text analysis could interpret the sentence literally as meaning that Lincoln ran toward the building known as the “House of Representatives” rather than campaigning for political office.

It is only our knowledge of political campaigns, and of the language used to describe them, that allows us to understand such an utterance. 

The introduction of Schema Theory

The importance of general knowledge and abstract concepts became increasingly evident. Because of this, some linguists began to adopt Schema Theory.

Schema Theory introduces a collection of key concepts linked together by well-defined relationships. Such models allow artificial intelligence to make inferences by traversing the relationships between concepts to arrive at a logical interpretation. 

So, given the example above, the concepts of “Abraham Lincoln”, “ran for”, and “House of Representatives” are connected within the schema to concepts of “politics,” “political office,” and “campaigning.” 

These additional concepts are not part of the original utterance, but are assumed to be part of the reader’s general knowledge. 

The activation of these knowledge-based elements leads to the reinterpretation of “ran for” as a political campaign rather than a literal sprint toward a building. 
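Schema-style inference can be sketched as a small directed graph of concepts plus a traversal that checks whether one concept can reach another. The graph below is a hypothetical miniature of the Lincoln example; real schemas are vastly larger.

```python
# Sketch of schema-style inference: concepts linked by directed
# relations, with a breadth-first search that checks whether one
# concept activates another. The graph is a hypothetical miniature.
from collections import deque

SCHEMA = {
    "ran for": ["campaigning"],
    "campaigning": ["political office"],
    "House of Representatives": ["political office"],
    "political office": ["politics"],
}

def reaches(start: str, goal: str) -> bool:
    """Breadth-first traversal: can `start` reach `goal` through the schema?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in SCHEMA.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# "ran for" connects to "politics" via "campaigning", supporting the
# reinterpretation as a political campaign rather than a literal sprint.
print(reaches("ran for", "politics"))
```

Because both “ran for” and “House of Representatives” link into the political cluster, the schema supports the campaign reading over the literal one.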

While the use of Schema Theory marked another great advancement in both language theory and computational linguistics, it too fell short as a model of language understanding.

New models in conversation linguistics 

It quickly became apparent that the complexities of natural languages were beyond the limits of existing schemas. 

As a consequence, new models emerged.

Among these new learning models were Hidden Markov Models (HMMs), Bayesian network models, and Latent Semantic Analysis (LSA), to name a few. 

To better capture the immense complexity of natural language, statistical methods and machine learning drew upon large corpora of text to map out underlying semantic relationships. 

Machine learning models drew from a wide range of texts, so that linguists could analyze and quantify the statistical relationships between patterns of words and abstract concepts. These relationships became the building blocks of natural language understanding. 
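Latent Semantic Analysis, one of the models named above, can be sketched in a few lines: build a term-document count matrix from a corpus, then use a truncated singular value decomposition to embed terms in a low-dimensional “semantic” space where co-occurring terms cluster. The tiny corpus and the choice of two latent dimensions are illustrative only.

```python
# Tiny Latent Semantic Analysis sketch: a term-document count matrix
# reduced by truncated SVD to a low-dimensional semantic space.
# Corpus and dimensionality are illustrative stand-ins.
import numpy as np

docs = [
    "the batter hit a fly ball",
    "the pitcher threw the ball",
    "a fly landed on the window",
    "the insect crawled on the window",
]
terms = sorted({w for d in docs for w in d.split()})
index = {t: i for i, t in enumerate(terms)}

# Term-document matrix: X[i, j] counts term i in document j.
X = np.zeros((len(terms), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        X[index[w], j] += 1

# Truncated SVD: keep the top k latent "semantic" dimensions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
term_vecs = U[:, :k] * s[:k]  # each row embeds one term

def similarity(a: str, b: str) -> float:
    """Cosine similarity between two terms in the latent space."""
    u, v = term_vecs[index[a]], term_vecs[index[b]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Compare how "ball" relates to baseball terms vs. insect terms.
print(similarity("ball", "batter"), similarity("ball", "insect"))
```

The point of the reduction is that similarity is no longer about exact keyword overlap: terms that merely tend to appear in the same kinds of documents end up near each other in the latent space.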

The future of AI speech analytics 

A variety of tools for natural language processing (NLP) have sprung from this research. Modern NLP algorithms use artificial neural networks and deep learning, enabling far more complex language models and applications such as machine translation. 

These natural language processing algorithms capture abstract relationships between words and concepts by leveraging context, structure, and hidden knowledge. 

Now, the challenge is to capture context even better. To do this, we must introduce transformer layers into neural network models. 

Transformer layers, as used in Bidirectional Encoder Representations from Transformers (BERT), have been used to improve and refine deep learning language models. This hierarchical deep learning approach brings computational linguistics even closer to human performance.
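The core operation inside a transformer layer is scaled dot-product self-attention: every token's representation is updated as a weighted mix of all the others, which is how these models pull in context from the whole sentence. Here is a minimal NumPy sketch; the shapes and random weights are stand-ins, not a trained model like BERT.

```python
# Minimal scaled dot-product self-attention in NumPy -- the core
# operation inside transformer layers such as BERT's. Random weights
# stand in for a trained model.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns contextualized vectors and attention weights."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d = 4, 8  # e.g. four tokens, 8-dimensional embeddings
X = rng.normal(size=(seq_len, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = self_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)
```

Each row of `attn` says how much each token attends to every other token, so a word like “ball” can be represented differently depending on whether “fly” or “formal” appears nearby.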

Computers aren’t listening like humans yet, but we are getting closer. 

Conversational intelligence and customer experience 

When a customer calls a call center, we want to understand not just the words they exchange with an agent, but how their entire experience went. That way, we can help you serve your customers’ needs. 

That’s why we’re always evolving. We embrace state-of-the-art computational linguistics and use deep learning neural networks. 

We also add our own proprietary transformer layers, which capture not only local contextual influences, but context as it relates to the back-and-forth exchange of information. This lets us leverage context across multiple utterances and even entire conversations. 

As an example, imagine starting a book in the middle. For many books, you’d have a hard time following the story: even if every sentence made sense by itself, you’d need to understand the premise and the characters.

In the same way, we must analyze the entire back-and-forth of a conversation, not just individual utterances. 

We base our machine learning models on entire conversations. This way, they can detect and predict complex business metrics such as customer effort, churn, agent performance, and sales potential.

We couple this with exploratory AI, which makes it possible to quantify the optimal agent behaviors that improve performance on these diverse metrics. 

Our goal is not only to understand the voice of the customer, but to provide clear recommendations that improve customer experience, minimize churn, and maximize sales outcomes. At Tethr, we continue to push the boundaries of computational linguistics and machine learning. We develop new technologies to ensure that every customer’s voice is not only heard, but understood.

Learn more about Tethr