Meltwater’s GenAI Lens gives you visibility into how your brand is being represented across the world’s leading generative AI platforms. It captures the responses generated by large language models (LLMs), showing not only what’s being said about your brand but also where that information is coming from. This level of transparency helps you stay informed, spot inaccuracies, and better understand the sources influencing AI-generated content.
GenAI Lens currently supports 90%+ of the most widely used LLMs, including ChatGPT, Gemini, Perplexity, Claude, Grok, and Deepseek. Meltwater is actively expanding coverage as new models enter the market. By monitoring these AI-generated responses, you can uncover early reputational risks, track brand visibility, and shape more data-informed communications strategies.
This article will cover:
Using GenAI Lens
Understanding Your Results
Use Cases
Using GenAI Lens
To use GenAI Lens, follow these steps:
Click Monitor in the left-hand navigation bar
Select GenAI Lens
Click Create a prompt
In Step 1, Choose a template or select Custom Prompt to create your own
Note: The total number of prompts available in your account, and how many you have used or saved, is displayed at the top of the workflow screen.
In Step 2, Customize Your Prompt. Make any adjustments to the existing Prompt Name or Prompt, or create your own if you selected Custom Prompt
Click Preview Prompt to preview your prompt in the right-hand slide-out panel.
For additional edits and fine-tuning, click Edit Prompt; otherwise, select Looks Good.
Note: Previews are generated using ChatGPT and display a sample of data to help you understand the type of response your prompt might produce. Results may still vary when using other models. Previews do not include emotion, key phrases, brands, products, people, or links.
In Step 3, Add Prompt Variations. Select up to 10 variations of your prompt.
Note: GenAI Lens automatically generates these variations to increase the depth and accuracy of insights and give you greater confidence in your results. Each variation can be previewed or edited using the icons on the right-hand side.
To add your prompt to a Folder, click Select Folder (optional)
Note: When the original prompt is added to a folder, all of its variations are stored in the same folder, keeping the prompt library organized as it grows.
Any selected variations will count toward your total prompt allocation.
If the number of prompts selected exceeds the total prompt allocation, you will not be able to create the prompt.
Select an existing folder or create a new one
Click Create Prompt
To view and manage all folders after a prompt has been created, select the Prompt drop-down on the GenAI Lens main page
Click Manage prompt folders
From here, you can:
Create a new folder by selecting Create Folder
Edit or delete an existing folder by selecting Edit Folder or Delete Folder
To move a prompt into an existing folder, select the Prompt drop-down
Click Move Prompt
Select the folder you want to move the prompt to
The results grid is customizable, and you can manage, pin, hide, and reorder columns. You can also sort results by the date column.
Select any kebab menu to:
Pin to the left
Pin to the right
Hide Column
Manage Columns
Understanding Your Results
Overview Tab
Results Grid View
The results grid view features eight columns. Each row shows the results for a prompt from a specific AI Assistant/LLM model, broken out by column. Click View more details to open the Prompt analysis slide-out panel.
The prompt analysis slide-out panel expands the details of each result in the grid.
Details include:
The prompt analysis slide-out panel also includes an Export button, which exports all of the above data in CSV format.
Results Word Cloud
The results word cloud includes:
Keywords
Organization
Product
Person
Location
Note: Word size in the cloud is dependent on the frequency of results. The more occurrences of the word, the larger it appears in the cloud.
Hover over a keyword to view the number of times it appears in your results.
Hover over the entity type in the legend to isolate those entity types. Click to exclude them from the cloud.
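GenAI Lens builds the word cloud for you, but as a rough illustration of the sizing rule described in the note above (word size scales with frequency), here is a minimal Python sketch. The keyword list and font-size range are purely hypothetical, not values used by the product.

```python
from collections import Counter

# Hypothetical key phrases pulled from GenAI Lens results.
keywords = ["sustainability", "pricing", "sustainability",
            "innovation", "pricing", "sustainability"]

counts = Counter(keywords)
min_size, max_size = 12, 48            # illustrative font sizes in points
top_frequency = counts.most_common(1)[0][1]

for word, freq in counts.most_common():
    # Scale each word in proportion to its frequency, mirroring how
    # more frequent terms render larger in the cloud.
    size = min_size + (max_size - min_size) * freq / top_frequency
    print(f"{word}: {freq} occurrence(s), font size ~{size:.0f}pt")
```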
Filters
The results grid and word cloud are both dynamic and reflect the filters applied at the top of your GenAI Lens results.
Filters include:
Date range
Responses from all models are collected daily. When you set up prompts, you can later filter results by a specific date or date range to view only the responses from that period. Each response includes its collection date, which also appears in a dedicated Day column in the results table.
Saved prompt
LLM/AI Assistant Model
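The filters above are applied directly in the UI; if you are working with a bulk export instead (see Bulk Exporting below), a comparable date-range and model filter can be reproduced in a few lines. A minimal sketch, assuming a CSV export with Date and Model name columns and ISO-style timestamps; the file name, date range, and model list are assumptions for illustration.

```python
import csv
from datetime import date, datetime

start, end = date(2024, 6, 1), date(2024, 6, 30)   # assumed date range
wanted_models = {"ChatGPT", "Gemini"}              # assumed model filter

with open("genai_lens_export.csv", newline="", encoding="utf-8") as f:  # assumed file name
    rows = list(csv.DictReader(f))

filtered = [
    row for row in rows
    if start <= datetime.fromisoformat(row["Date"]).date() <= end  # assumes ISO-style timestamps
    and row["Model name"] in wanted_models
]

print(f"{len(filtered)} responses match the date range and model filter")
```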
Bulk Exporting
Click Monitor in the left-hand navigation bar
Select GenAI Lens
In the Overview Tab, click Export
A pop-up will appear letting you know that your export is being prepared.
Once your export is ready, a confirmation pop-up will appear, allowing you to download it.
The exported file includes two tabs, Overview and Results, for easy access to both summary and detailed data.
Overview tab - Provides a high-level summary of the export, including the context of when and how the data was generated. This helps you quickly understand the scope of the dataset before diving into detailed results.
Date range — The time period selected when running the prompts.
Selected models — The AI models used to generate responses.
Prompts (folders, name, questions) — The list of prompts included in the export, showing their folder structure, titles, and associated questions.
Results tab - Contains the detailed output of each prompt. This tab is designed for deeper analysis, comparison, and sharing of specific responses.
ID — A unique identifier for each prompt-response entry
Model name — The specific AI model that generated the response
Prompt folder — The folder location where the prompt is organized in GenAI Lens
Prompt title — The saved name of the prompt
Prompt question — The query given to the model
Prompt response — The full text output returned by the model
Date — The timestamp of when the response was generated
Sentiment — The sentiment score (positive, negative, neutral) of the response
Key phrases — Extracted key phrases found in the response
Organizations and brands — Entities identified in the text that match companies, brands, or organizations
Products and people — Entities identified in the text that match product names or individuals
Cited links — Any URLs referenced within the model’s response
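As one example of the deeper analysis the Results tab is designed for, the sketch below tallies sentiment per model from an exported file. The column headers follow the list above but may be labelled slightly differently in your export; the file name and lowercase sentiment values are assumptions.

```python
import csv
from collections import Counter, defaultdict

sentiment_by_model = defaultdict(Counter)

with open("genai_lens_export.csv", newline="", encoding="utf-8") as f:  # assumed file name
    for row in csv.DictReader(f):
        # Tally positive/negative/neutral responses per model using the
        # Model name and Sentiment columns described above.
        sentiment_by_model[row["Model name"]][row["Sentiment"]] += 1

for model, counts in sentiment_by_model.items():
    total = sum(counts.values())
    positive_share = counts.get("positive", 0) / total * 100
    print(f"{model}: {dict(counts)} ({positive_share:.0f}% positive)")
```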
Trends Tab
The Trends Tab helps you track how conversations, narratives, and topics are changing over time—so you can spot what’s gaining traction and take action faster.
Click Monitor in the left-hand navigation
Select GenAI Lens
Click Trends
The Trends Tab consists of two visualizations, the Trends Graph and the Ranking Grid, each of which can track and rank trends across seven categories.
Trends Graph: Displays trends over time for your selected category (e.g., Brands)
Hover over any day on the graph to see the corresponding data for that day
Click on items in the legend to turn them on or off in the graph
Select one of the seven categories in the top toolbar
Click the Mentions drop-down in the top right-hand corner to switch between Mentions and Prevalence
Ranking Grid: Sits below the Trends Graph and features five columns:
Sources - shows the name relevant to the category selected (e.g., brands)
Total Mentions - how many times the brand, product, person, source, or link is referenced in the LLMs. The % shows the change compared to the previous time period.
Prevalence - how many mentions also contained references to products, people, or other entities. The % shows the change compared to the previous time period.
Prevalence Score - Prevalence expressed as a percentage of total mentions; it shows the share of mentions that also contained references to products, people, or other entities.
Chart Visibility - use these checkboxes to control what appears in the chart. Unchecking hides the source from the trends graph.
Note: Hover over Total Mentions, Prevalence, or Prevalence Score then click the arrow to sort in ascending or descending order.
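The Ranking Grid calculates these figures for you; as a worked illustration of how the three measures relate (Prevalence Score is Prevalence divided by Total Mentions, expressed as a percentage), here is a minimal sketch over hypothetical per-response records. The brand names and field names are invented for the example and do not reflect the product's internal data model.

```python
# Hypothetical per-response records: which brand was mentioned and whether the
# same response also referenced products, people, or other entities.
responses = [
    {"brand": "Acme", "has_other_entities": True},
    {"brand": "Acme", "has_other_entities": False},
    {"brand": "Acme", "has_other_entities": True},
    {"brand": "Globex", "has_other_entities": False},
]

for brand in sorted({r["brand"] for r in responses}):
    rows = [r for r in responses if r["brand"] == brand]
    total_mentions = len(rows)                                  # Total Mentions
    prevalence = sum(r["has_other_entities"] for r in rows)     # Prevalence
    prevalence_score = prevalence / total_mentions * 100        # Prevalence Score
    print(f"{brand}: {total_mentions} mentions, {prevalence} with other entities, "
          f"prevalence score {prevalence_score:.0f}%")
```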
The Trends Tab supports analysis across seven categories:
Brands – Track how your own brand and competitors are mentioned over time. Understand shifts in share of voice and reputation
Products – Monitor product-level visibility to see which offerings are gaining or losing traction in conversations
People – Identify individuals driving the narrative, including executives, influencers, or public figures linked to your industry
Locations – Discover geographic hotspots where mentions are increasing, helping you see where conversations are concentrated
Key Phrases – Surface emerging keywords and hashtags that signal new topics, themes, or market opportunities
Sources – Analyze which news outlets, websites, or publishers are most cited in relation to your brand or industry
Links – Track which URLs are appearing most frequently, revealing the specific articles, reports, or resources shaping perception
Use Cases
| Person | What GenAI Does for Them | KPIs Positively Impacted |
| --- | --- | --- |
| PR/Communications Leader | Brand Visibility Across GenAI tools – See how their brand is portrayed across major language models like ChatGPT and Gemini, offering a new layer of brand visibility they currently lack. Early Risk Detection – Identify and act on reputational risks or inaccuracies early, with enriched sentiment, emotion, and source data to mitigate crises before they escalate. Data-Driven Comms Strategy – Track sentiment, narrative trends, and competitive positioning over time to inform more targeted PR, media outreach, and content strategies grounded in AI-world intelligence. | Brand Sentiment Score; Media & Message Consistency Across Platforms; Crisis Response Time |
| Marketing | Brand Visibility Across GenAI tools – See how their brand is portrayed across major language models like ChatGPT and Gemini, offering a new layer of brand visibility they currently lack. Content and Campaign Optimization – Use customizable prompts to track campaign, product, or brand messaging and identify content gaps that impact performance in generative search environments. Competitive Intelligence – Monitor how competitors are positioned and how narratives shift over time, informing smarter content and positioning strategies based on real AI-world exposure. | Share of Voice in AI-generated Content; Campaign Message Alignment Across LLMs; Brand Visibility Score in Generative Environments |
💡 Tip
Need more help? Feel free to reach out to us via Live Chat or check out our Customer Community.