Concept-level sentiment analysis: The next level of understanding emotion in text feedback
If you work with text feedback data in any form – product reviews, support tickets, open-ended survey responses, to name a few – you’ve undoubtedly encountered sentiment analysis. You might even own or have built an analytics system that has a sentiment analysis tool. Standard in almost all text analytics products, sentiment analysis is the feature that tells us how people feel about things like our offerings, products, and brand.
What is sentiment analysis?
Sentiment analysis is technology that computationally determines whether text contains positive, negative, or neutral polarity. Sentiment analysis has existed since the early 2000s, when large quantities of opinionated data from the internet – reviews, forums, blogs, microblogs, and social media posts – suddenly became available, and the need to analyze them in an automated way became apparent. Today, organizations most often use sentiment analysis to track customer and employee satisfaction and to react swiftly to feedback.
How has the understanding of sentiment analysis changed over time?
Early days: Document-level sentiment analysis
In the early 2000s, both researchers and practitioners of sentiment analysis focused on the overall sentiment of an entire body of text or document, like an entire review or a single open-ended survey response. Such a traditional, document-level sentiment analysis system analyzed text and came up with an impression of its overall polarity: positive, negative, or neutral. The output of such a system could answer the question, "Overall, is this reviewer happy?"
Sometimes, this approach is just good enough. Consider the following example support ticket:
The text is short and focused on a single topic. If all documents in a body of text share similar characteristics, as is frequently the case for these types of datasets, it isn’t necessary to have an analytical tool that goes beyond helping users understand an overall impression of sentiment.
Today, organizations receive text feedback through more channels than ever before: support tickets, app store reviews, survey feedback, chat transcripts, employee satisfaction surveys … the list goes on. The old approach of only focusing on the overall sentiment of a complete body of text simply isn't detailed enough to handle this onslaught of information, let alone the nuanced sentiment it contains. Questions like "What are my customers happiest about?" and "What employee concerns do I need to address first?" require a more focused, granular solution.
Consider the following review of a banking app:
Keeping in mind that this is just one document, this review is quite nuanced: it contains multiple topics, or concepts, each expressing different polarities. The reviewer likes the app and compliments the ease of use and the design. But the narrative shifts from positive to negative: the user specifically singles out “notifications” as a feature of the app that they’re unhappy about. There are also concepts mentioned that don’t seem to have any sentiment associated with them at all, like “check my balance” or “make transfers”.
A document-level sentiment tool, which assigns an overall polarity to the text, would mark this as a positive review. From there, this kind of solution could extrapolate that all topics mentioned in the document, like "check my balance", "make transfers", "navigate", "design", and "notifications", are positive. And that conclusion would simply be false.
A document-level system would miss valuable feedback on the notifications not working correctly, and give any user or analyst working with this data the incorrect assumption that this particular reviewer is entirely happy with the product. These sorts of missed opportunities, multiplied over many documents within a dataset, would misinform any decisions made based on their analysis.
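One way to picture the difference: a document-level system returns a single label for the whole text, while a concept-level system returns one polarity per topic. The structure below is purely illustrative, with topics and polarities mirroring the banking-app review discussed above:

```python
# Document-level output: one label for the entire review.
document_level = "positive"

# Concept-level output: one polarity per topic mentioned in the review.
concept_level = {
    "navigate": "positive",
    "design": "positive",
    "notifications": "negative",
    "check my balance": "neutral",
    "make transfers": "neutral",
}

# The single document-level label hides the negative feedback that a
# concept-level breakdown surfaces immediately.
negative_topics = [t for t, p in concept_level.items() if p == "negative"]
# -> ["notifications"]
```

Only the concept-level output lets an analyst see that "notifications" needs attention despite the review being positive overall.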
With the amount of text feedback skyrocketing – as well as the added complexity of its nature – the science behind analyzing text has had to play catch-up. As a result, the focus in the sentiment analysis community has shifted from a document-level sentiment approach to one in which we assign a sentiment polarity to each concept within text individually: aspect-level, or, more generally, concept-level sentiment analysis.
How does concept-level sentiment work?
There are two main approaches to assessing polarity of a topic within text. One makes use of rules handcrafted by linguists or domain experts. The other has its roots in machine learning.
Rule-based approaches typically involve a sentiment lexicon, such as the AFINN word list. Sentiment lexicons are word lists that contain thousands of words such as "amazing" and "angry", together with a numerical score which shows how positive or negative a word is. These scores approximate an absolute polarity judgment for each word as closely as possible.
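A minimal sketch of how a lexicon-based scorer might work. The word list here is a toy, AFINN-style lexicon – the words and scores are illustrative, not actual AFINN entries:

```python
# Toy AFINN-style lexicon: each word maps to a score from -5 (very
# negative) to +5 (very positive). Entries are invented for illustration.
LEXICON = {
    "amazing": 4,
    "love": 3,
    "good": 2,
    "slow": -2,
    "angry": -3,
    "terrible": -4,
}

def lexicon_score(text):
    """Sum the lexicon scores of the words in `text`.

    A positive total suggests positive polarity, a negative total
    suggests negative polarity, and zero suggests neutral.
    """
    words = text.lower().split()
    return sum(LEXICON.get(word, 0) for word in words)

score = lexicon_score("the support team was amazing but the app is slow")
# 4 ("amazing") + -2 ("slow") = 2, i.e. mildly positive overall
```

Note that a scorer this simple still operates at the document level: it sums everything into one number, which is exactly the limitation the rules below try to address.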
But having a sentiment lexicon is not enough. In order to be able to target specific topics within text, researchers have to make a lot of decisions: they have to create rules about what constitutes a topic’s context and how the words within that context influence the polarity of the topic. For example, one sample rule to handle negation could look something like:
RULE: If you see “not” near a sentiment word and a topic, set the polarity of that topic to the opposite of the polarity of the sentiment word.
So, when examining a document such as:
… you’d conclude that since “imaginative”, which is a positive word, is preceded by “not”, the polarity of “game” would be set to negative.
Other rules could be based on more advanced natural language processing (NLP) techniques, like analyzing syntax or resolving which pronouns refer to which nouns.
Creating individual handcrafted rules is relatively easy. However, you’d need to come up with a ton of them to see acceptable performance from your system. Maintaining many rules is tedious and their interactions could be hard to predict. And even then, it’s impossible to come up with all the rules that govern how sentiment works – especially when faced with domain-specific datasets you’d encounter in practice at your organization.
Machine learning approaches
Unlike rule-based approaches, machine learning approaches don’t require defining explicit rules. Instead, machine learning (ML) models are presented with many examples of sentiment topics together with their contexts. Then the models come up with the rules themselves.
ML approaches, especially deep learning approaches, constitute the state of the art for many sentiment analysis tasks. Many of these models, like transformer models, try to model how the context of a word influences its polarity. Though this is similar to what rule-based approaches aim to achieve, the difference is that where rule-based approaches require human input, ML approaches do not. ML approaches do, however, require a lot of data annotated with sentiment judgments for each topic, and procuring good-quality data can be a difficult task.
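As a sketch of the underlying idea, here is a minimal Naive Bayes-style classifier – far simpler than the transformer models mentioned above – that learns word-sentiment associations from labeled examples rather than handcrafted rules. The training data is invented for illustration:

```python
import math
from collections import Counter

# Invented training data: (context around a topic, sentiment label).
TRAIN = [
    ("love the easy checkout", "positive"),
    ("great design and fast", "positive"),
    ("the app keeps crashing", "negative"),
    ("notifications are broken and annoying", "negative"),
]

def train(examples):
    """Count how often each word appears under each label; the model
    'learns' word-sentiment associations from data, not from rules."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the label whose word counts best explain `text`, using
    add-one smoothing so unseen words don't zero out a label."""
    scores = {}
    for label, counter in counts.items():
        total = sum(counter.values())
        vocab = len(counter)
        scores[label] = sum(
            math.log((counter[w] + 1) / (total + vocab))
            for w in text.lower().split()
        )
    return max(scores, key=scores.get)

model = train(TRAIN)
predict(model, "the design is great")          # -> "positive"
predict(model, "notifications keep crashing")  # -> "negative"
```

With only four examples this is a toy, but it shows the trade-off the paragraph above describes: no rules to write, at the cost of needing annotated data.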
Why concept-level sentiment is crucial to the modern organization
Global organizations analyze a diverse set of documents, among them open-ended survey responses, product reviews, support tickets, and social media posts. When working with such data types there’s a similar need for a granular sentiment analysis, especially in documents in which people are prompted to talk about as many aspects of a service or product as they see fit.
Voice of the customer
Concept-level sentiment analysis is critical for capturing and understanding the voice of your customers, or VoC. Product reviews are a common example: they rarely contain just one type of feedback, and it's important to tease apart the good from the bad. Getting a polarity for each of the topics in the following text enables an analysis of what works and what doesn't for your customers:
A traditional approach to sentiment analysis would pick up on the overall negativity of the review but it wouldn’t report on the fact that there was one aspect of the product – its size – that really resonated with the customer. It simply isn’t possible to address fixes while maintaining what people love about your offerings if your feedback analyses can’t get down to this level of granularity.
Voice of the employee
In voice of the employee (VoE) datasets, which usually consist of open-ended survey responses, we often observe that feedback is overwhelmingly positive and delivered in an upbeat way. It’s not that negative feedback isn’t also there … but it’s typically lost in a sea of positivity.
Consider the following example response for an HR survey:
The critical part of this response highlights a complaint around the bonuses. Traditional document-level sentiment analysis would miss this arguably most important part of the response, which could otherwise become an actionable insight for the business analyst examining this dataset.
Level up your sentiment knowledge
Ready to take your understanding of sentiment to the next level? Join us on May 13 as we take a deep dive into the science behind sentiment, and how concept-level sentiment analysis can give you a more nuanced understanding of the people you serve.