Event Recap: Argyle CX New York
Last Tuesday, I was delighted to once again speak at an Argyle Customer Experience (CX) Leadership forum, this time at the Convene conference center just off Times Square in New York City.
Here were my three biggest takeaways from the event:
CX’s impact is spreading throughout the organization. Even more so than at the Boston event last December, attendees in New York were often in departments adjacent to customer experience initiatives, not CX leaders themselves. I met product managers creating software to help CX teams; operations leaders with day-to-day contact center concerns; and marketing leaders managing their brand across customer touchpoints.
Data, data, everywhere. A common theme throughout the day was how to best use all the data available to these CX and CX-adjacent teams. It’s not just a problem for CIOs, Chief Data Officers, or data scientists and analyst teams. Practical business contributors want to use the data they already have to better satisfy customers, eliminate friction points, and improve efficiency.
Practicality over theory. The best conversations and presentations focused on practical ways for CX teams to take advantage of the resources already available to them. Technologies like recommendation engines and chatbots (often conflated with “AI”) seem theoretically promising — but with one or two exceptions, most CX practitioners reported few successes using them to drive measurable improvements.
My topic was “Maximizing the Customer Experience Through Data.” Jeff Parkinson, the Sr. VP of Customer Data at Dow Jones, moderated a panel that included Rob Poach of Wowza Media Systems, Brian Venuti of Luxottica, and myself. We discussed how digital transformation and the explosion of available data were affecting customer experience departments. We recommended broad strokes for translating that data into the holy grail of all CX groups — “actionable insights” — but also offered specific tactics for using digital customer support to drive business decisions. We gave examples of how teams were creating roles to deal with the influx of data, be they centralized analyst groups or decentralized “citizen data scientists.” Finally, we talked about major trends in using customer satisfaction and customer retention data to improve the digital customer experience going forward.
I compared the pre-digital transformation CX world — a world of reviewing call recordings, interviewing customers, and manually gathering qualitative data — to the explosion of information available in a post-digital transformation world. With ways to quantify everything from Average Handle Times to satisfaction ratings, from omnichannel usage to self-help deflections, measuring quantitative data has become easier than ever. Unfortunately, it has also become easier to lose sight of the human touch — the rich, qualitative insight that those original, manual methods of listening to the Voice of the Customer once provided.
That led me to talk about the “6-month voyage of discovery” that we’ve seen many companies undertake before they become Luminoso clients. Faced with the insurmountable task of manually consuming unstructured text data from support requests or survey verbatims, analyst teams try to build something themselves with Python scripts and open-source NLP software, or buy general-purpose CX software that includes limited text analytics capabilities. They quickly run into the traditional barriers of most machine learning systems: training them to automate the analysis of data either requires massive amounts of data to achieve even passable accuracy, or constant vigilance by a team of experts manually tuning the model.
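To make that DIY route concrete — this is an illustrative sketch, not any client’s actual code — a team’s first-pass Python script often amounts to little more than keyword counting over ticket text:

```python
import re
from collections import Counter

# Toy support-ticket verbatims standing in for real unstructured data.
tickets = [
    "App crashes when I upload a photo",
    "Billing page keeps crashing on upload",
    "Can't log in after the latest update",
    "Login fails since the update",
]

# A hand-maintained stopword list -- one of the things that needs
# constant manual tuning as new ticket language appears.
STOPWORDS = {"the", "a", "on", "i", "in", "when", "after",
             "since", "keeps", "my", "can't"}

def top_terms(texts, n=5):
    """Naive frequency count: no stemming, no synonyms, no context."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_terms(tickets))
```

Note that “crashes” and “crashing,” or “log in” and “login,” are counted as unrelated terms — exactly the kind of gap that pushes teams toward trained models, which in turn demand the training data or expert tuning described above.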
I also shared stories of Luminoso’s clients successfully using our QuickLearn technology to overcome these machine learning limitations. No longer do they need months of consulting or millions of examples to train a system. More importantly, by analyzing mixed sets of structured and unstructured data to identify score drivers, and by mining thousands of support tickets to spot emerging issues, they’re able to keep pace with the data that digital transformations have delivered to them.
As we closed the session, I saw a lot of audience heads nodding when I brought up the subject of bias in data. It’s not just that we can’t make intelligent decisions if we’ve got the wrong data. It’s that we often bring our own preconceptions to an analysis, hunting for evidence to support our hypotheses. Automating analysis can help by presenting a less subjective view of the data. But as we rush toward more intelligent automation, we risk exacerbating the problem: AI systems trained on biased data will favor incorrect results. Fortunately, practitioners aware of the problem can take steps to de-bias their own data, or rely on vendors and partners who do.
I’m always appreciative of any opportunity to share knowledge and experiences with other leaders in the Customer Experience industry. It’s one reason we’re sponsoring the Common Sense CX 2019 event in Chicago this June 3-5. Please reach out to us if you’d like an invitation to that event.
Author: Jeff Foley, VP of Marketing at Luminoso, has spent over 23 years working with CRM, CX, customer service, and natural language technologies.