Finding Product Insights from Missing Data

In Star Trek: Discovery S04E08, the Discovery crew realizes that an impossible puzzle they’ve been trying to solve about an alien threat is related to a mineral that an alien civilization is mining. This mineral, boronite, emits a distinctive radioactive signature on the ship’s sensors.

The crew doesn’t know where the alien tech is located, but they do know it is removing this material. By looking for nearby sectors missing a signal the material should be emitting, the team finds the alien technology even though the object itself never appeared on their sensors.

Whether you care about Star Trek: Discovery or not, understanding the way generative AI tools create results requires you to think about the absence of the signal you are seeking.

Asking “what should I be seeing that I’m not seeing?” is a good frame when you’re using AI to generate expected content. ChatGPT – and other generative AI – examine the tokens seen so far to predict the most likely next token to generate.

In simpler terms, this means that when you ask ChatGPT to be a partner in developing content by analyzing other data, you need to watch carefully to confirm that what you’re seeing is an effective analysis rather than a rote repetition of the next likely token.
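
You can observe this next-token machinery directly. Here’s a minimal sketch using the OpenAI Python SDK – assuming an API key in the environment, with a placeholder model name – that prints the candidate tokens the model weighed for its first generated token:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this meeting:"}],
    logprobs=True,    # return log probabilities for each generated token
    top_logprobs=5,   # include the five most likely alternatives
    max_tokens=16,
)

# The first generated token, with the alternatives the model considered.
first = response.choices[0].logprobs.content[0]
for candidate in first.top_logprobs:
    print(f"{candidate.token!r}: {candidate.logprob:.3f}")
```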

For example, when you prompt the AI to summarize text transcribed from a meeting this way:

Headline: Produce a summary headline indicating whether the customer wants to buy

You’ll likely get a generic response like:

Headline: Based on the customer’s engagement, they are likely to buy

Which is a lot less useful than the summary a human would write. When you stop seeing rote repetition of the prompt and start seeing distinct results for each summary, you’ll know you’re getting a better summation. (Hint: remove the text “Headline:”)

If you provide a more generalized, abstract prompt like:

When you summarize the call, add a brief and informative summary at the top to share with a sales leader

You’ll get results that vary quite a bit more and come closer to matching the way a person would summarize the results of a meeting, while adding the structure that generative AI composes easily.
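
To see the two approaches side by side, here’s a sketch that runs both prompts over the same transcript with the OpenAI Python SDK; the model name, transcript, and helper are placeholders:

```python
from openai import OpenAI

client = OpenAI()
transcript = "..."  # paste the meeting transcript here

# The rigid prompt tends to echo its own structure back.
rigid = "Headline: Produce a summary headline indicating whether the customer wants to buy"

# The abstract prompt leaves room for the model to summarize the way a person would.
abstract = ("When you summarize the call, add a brief and informative "
            "summary at the top to share with a sales leader")

def summarize(instruction: str, text: str) -> str:
    """Run one summarization instruction over a transcript (illustrative helper)."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

for prompt in (rigid, abstract):
    print(summarize(prompt, transcript))
    print("---")
```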

Building an Internal Meeting Summary Product

Recently I started working on a data product to analyze sales meetings. The twin goals were to produce a workable summary and to create a transcript of the meeting itself. While AI tools transcribe meetings easily these days, creating a useful meeting summary is more challenging than simply asking “summarize the meeting”.

Desired outcomes for this summary include:

  • top line summary – what is the main focus of the sales call

  • identifying objections – if the prospect had questions or strong objections, list them and any counters offered by the sales rep

  • summarizing the call – identify the main points of the call including any items raised by the prospect that haven’t been listed yet

Overall, we want to identify enough detail to provide value without over-prompting the system to give us answers that look exactly like the tokens we provided in the prompt. To do this, we ask for abstract concepts in the prompt (“identify any concerns raised by the prospect”) rather than asking for a specific callout of a feature or service.
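
Here’s one way those three outcomes might read as a single prompt – a sketch, not our exact production wording, with the asks kept abstract:

```python
# A sketch of a summary prompt covering the three desired outcomes.
# This is illustrative wording, not the exact prompt used in the product.
SUMMARY_PROMPT = """\
You will receive the transcript of a sales call.

1. Top line summary: in one or two sentences, state the main focus of the call.
2. Objections: if the prospect raised questions or strong objections, list them
   along with any counters offered by the sales rep.
3. Call summary: identify the main points of the call, including anything raised
   by the prospect that is not already listed above.

Describe any concerns raised by the prospect in your own words; do not assume
any particular feature or service was discussed.
"""
```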

The goal – as in the Star Trek analogy – is to find the absence of a signal where we expect one to be. We’re looking for the top items not already contained in the summary, and we’re also looking for a traditional summary that can inform someone without listening to the entire call.

To build our summary, why not ask a bot? As a starting point, I asked ChatGPT to produce a prompt for a generative AI to build a meeting summary, then refined it from there.

Moving beyond a “One-shot” Strategy

Here’s what we’ve found so far as we examine the results provided by AI in summarizing sales meetings:

  1. The summaries are often quite good! Although they don’t always convey the meaning you get from listening to a call or reading the transcript, they do a good job of capturing the basic elements of the call and calling out prospect needs.

  2. Too much prompting yields a boring summary. AI tools are so good at following the script (think of this as “Mad Libs” for computers) that if you provide a very structured prompt and don’t allow for “creativity” or variation in the result, you’ll receive back … exactly what you put in. When every customer is “enthusiastic” and “wants to buy”, you probably don’t have a realistic sales summary. (One lever here is the sampling temperature; see the sketch after this list.)

  3. Most AI models are not multimodal, yet … What’s the difference between an experienced sales manager hearing an introductory call and a newly ramping salesperson? The sales manager will always see and hear different things affecting the call, and may have a divergent opinion on whether that prospect can close. When AI analyzes text only, there’s a limit to the available context.
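
On point 2 above: one lever for reintroducing variation is the sampling temperature. A minimal sketch, again with placeholder names – near zero the output is close to deterministic, while higher values allow more varied summaries:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set
transcript = "..."  # the call transcript

for temperature in (0.0, 0.9):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system", "content": "Summarize this sales call for a sales leader."},
            {"role": "user", "content": transcript},
        ],
        temperature=temperature,  # 0.0 is near-deterministic; higher permits variation
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```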

How do you move beyond a first take at meeting summaries?

Here are the things to try next:

  • Carefully change prompts to limit the variables in the outcome

  • Experiment with new prompt versions as we learn more

  • See if extracting structured information from the transcripts will help us fine-tune summaries and make them more effective (a minimal sketch follows this list)
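
Here’s a sketch of what that structured extraction might look like, assuming a model that supports the OpenAI SDK’s JSON mode; the field names are hypothetical:

```python
import json

from openai import OpenAI

client = OpenAI()
transcript = "..."  # the call transcript

# Hypothetical fields; adjust to whatever the summaries actually need.
instruction = (
    "Extract JSON from this sales call transcript with keys: "
    "main_focus (string), objections (list of strings), "
    "next_steps (list of strings). Respond with JSON only."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; JSON mode requires a model that supports it
    messages=[
        {"role": "system", "content": instruction},
        {"role": "user", "content": transcript},
    ],
    response_format={"type": "json_object"},  # ask the API for well-formed JSON
)

data = json.loads(response.choices[0].message.content)
print(data["objections"])
```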

It’s a big step toward knowing more about the sales process.

What’s the takeaway? Prompting AI over unstructured data like call transcripts is a potential gold mine of insights for the sales process. Yet you should be cautious about letting generative features summarize too much; focus on identifying the outlier data presented by each prospect. In the end, you’ll still need to listen to calls, but it will be a lot easier to identify which ones need attention.
