So What? Marketing Analytics and Insights Live
airs every Thursday at 1 pm EST.
You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!
In this episode of So What? The Trust Insights weekly livestream, you’ll learn how to optimize content for AI. You’ll discover how to transform existing content into various formats using AI tools. You’ll explore methods for repurposing content across different platforms such as podcasts, blogs, and social media, all optimized for AI consumption. You’ll uncover how to make your content semantically rich for improved performance and engagement.
Watch the video here:
Can’t see anything? Watch it on YouTube here.
In this episode you’ll learn:
- 9 steps for how to optimize content for AI
- Generative AI tips for building content that AI models love
- Why jargon matters to AI and your content
Transcript:
What follows is an AI-generated transcript. The transcript may contain errors and is not a substitute for listening to the episode.
Katie Robbert – 00:30
Well, hey everyone! Happy Thursday! Welcome to “So What?”, the Marketing Analytics and Insights live show. It looks like StreamYard has changed some of their settings, so we will have to fix that on a future episode because that blue is new, and clearly it’s distracting me now. Chris, John, welcome. Happy Thursday!
Christopher Penn – 00:50
Happy Thursday.
Katie Robbert – 00:51
Let me get back on track. Oh, and there was the high five. Last week, we started the first of a three-part series, SEO for AI, and covered more of the technical aspects of optimizing your content for AI. Generative AI is not a traditional search engine, but you can appear in the results when you query generative AI, and that's getting lumped under search engine traffic—organic traffic. So, we're still talking about SEO. This week, in part two, we're talking about how to optimize content for AI. If you missed it, there is a cheat sheet, a checklist, in our free Slack group, Analytics for Marketers.
Katie Robbert – 01:39
You can go to TrustInsights.ai/analytics-for-marketers to get the checklist of the three different parts of SEO for AI and what you need to do to optimize your content. So Chris, where should we start this week?
Christopher Penn – 01:55
This week, we should start with talking through a bit of things like the Video First Transmedia Content framework, which one of these days I’m going to come up with a better, more catchy name, because that’s a mouthful.
Katie Robbert – 02:11
It is. It is a mouthful. We talked about this on our podcast earlier this week. Again, quick plug: the Trust Insights podcast, TrustInsights.ai/TI-Podcast, where we talked about how there could be an AI version of it. Do you want to walk through the framework as it is?
Christopher Penn – 02:31
So, the original framework—this framework—believe it or not, began in 2008. Todd Defren, the owner of the PR agency where we used to work, came up with it back then. It was called the Content Atomization Framework, and the idea was: take big content, turn it into little content. The same thing is generally true here. However, we've said since the beginning of Trust Insights, back in 2018, that you want to start with video first because video contains the most information of any content format. So, if you are a marketer and you don't have time to make unique content for every possible channel, if you start with video, you can break that up and transform it into different formats. You can take a full video and turn it into video clips.
Christopher Penn – 03:24
Our colleague Kelsey does this with our shows and our podcasts, and you'll see those shorts on YouTube and Instagram and such. You can remove the audio from a video and turn that into an audio podcast of its own. You can then take that audio, use the AI tool of your choice, and turn it into a transcript. That can then be used in blogs, newsletters, ebooks, and so on. You can take still images from a video and turn those into images you can post on places like Instagram, Pinterest, etc. You can also, of course, incorporate those into longer-form static works. You can take the video, of course, and put it up as-is in places like YouTube or Vimeo and so on.
Christopher Penn – 04:10
If you have a platform like Twitch, it can be hosted there. If those things do well, you can turn them into webinars, then into proposals for conference sessions, and potentially end up on stage with a show that really resonated with people. I've had this happen relatively recently. The post I did on optimizing content for SEO for AI, which was in my personal newsletter, has become this series. So, it's taking one thing and turning it into another. That was the status quo for the last five years. What's changed is that instead of having to start with video, you can start anywhere in this. So, instead of it being a framework from beginning to end, left to right, it can almost be like a cycle. Like, this can turn into this.
Christopher Penn – 05:01
It's kind of like Pokémon evolutions, or the classical Chinese elements, where one element becomes the next in an ongoing circle. So today, I thought we would talk about how you would do that, because in our Analytics for Marketers Slack group, a number of our members responded to say they typically start with a newsletter and have to turn it into other content from there.
Katie Robbert – 05:24
I just want to make sure that, in addition to how we repurpose the content, we're also talking about optimizing the content. Because we know from last week's episode—which, if you missed it, you can find on our YouTube channel at TrustInsights.ai/YouTube, in the "So What?" playlist—the way that generative AI consumes content is different from how a search engine like Google or Bing consumes content.
Christopher Penn – 05:54
Exactly right. In general, you want your content to be semantically rich, meaning that it covers a topic thoroughly, and to a degree, you would want that for humans as well. But it can occasionally get overwhelming for humans. If you say, "Hey, take this 300-word blog post and make it 20,000 words," no one wants to read that. But you also want to make sure that it covers as many aspects of the topic as you possibly can, within reason. On a blog post, as an example, you might ask, "What else could be done with this?" So, should we dig into our first demo?
Katie Robbert – 06:36
Yeah, absolutely! Let's light it up. As Guy Fieri would say, "Bam!"
Christopher Penn – 06:44
I know, different chef. Let's start with something as simple as a services page on our website. What could we do to not only expand this or change formats, but also make it richer? One of the things we could do is say: this is a good page, and the intent of this page is to get a person to want to hire us. How could we use AI to do that better? The first thing we'd want to do is get the content of the page itself. So, we'll take this content here. We're going to put it inside of a prompt, and the prompt is going to be very straightforward. We're going to say, "Let's figure out what questions our ideal customer profile would have."
Christopher Penn – 07:38
You will need an ideal customer profile; we have one for Trust Insights. I'm going to go into Google's AI Studio. You can use the AI of your choice; it is not dependent on using this particular one. If you use ChatGPT, that's fine. If you use Claude, that's fine. I'm going to start by saying, "You are a CMI/Content Marketing World-level content marketing expert with a specialty in content creation, content generation, and content repurposing. Using our best practices guide and our ideal customer profile, anticipate five follow-up questions that our ICP would ask us after reading the provided content." I should change that to refer to the content we created. Then: "Order them by probability, most probable to least probable. Provide the question, the explanation, and the probability expressed as a percentage." So, this is part of the prompt.
Christopher Penn – 08:31
We're going to put this in. We're going to bring in, just at the bottom here, the Trust Insights ideal customer profile, because we want to have that in here. Let's see… yes, my content marketing best practices guide is also available. This is an evolution of the RAPPEL framework; if you're not familiar, go to TrustInsights.ai/rappel for our RAPPEL prompting framework. What's different now process-wise—but the same conceptually—is that instead of asking the model, "What do you know about this topic?", we use a tool like Gemini Deep Research to ask, "What do you know about this particular topic?"—in this case, content repurposing or content marketing—and "Build a guide." It comes up with a 21-page guide, and we're going to add that in as one of the ingredients as well.
Christopher Penn – 09:31
So we have our ICP, we have our original content, we have our guide, and we have our prompt. Let's see what it comes up with: five questions. Just to make sure, I'm going to copy and paste the prompt at the very end. The reason I'm doing this is because we've got a lot of content here, and we want to make sure the model does not forget what it's supposed to be doing. With many AI models, when you have very long prompts, it's a good idea to put the prompt at both the beginning and the end. It actually helps. There's a prompt engineering technique called RE2 that essentially says: just repeat yourself.
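If you want to script this step rather than paste by hand, here is a minimal sketch of assembling a long prompt with the instructions repeated at the beginning and the end, in the spirit of the RE2 technique Chris mentions. The file names and helper function are illustrative assumptions, not anything shown on the episode.

```python
# Minimal sketch: assemble a long prompt with the instructions repeated at the
# start and the end (the "repeat yourself" idea described above).
# File names below are hypothetical placeholders.
from pathlib import Path

INSTRUCTIONS = (
    "You are a content marketing expert with a specialty in content creation "
    "and repurposing. Using the best practices guide and the ideal customer "
    "profile provided, anticipate five follow-up questions our ICP would ask "
    "after reading the content. Order them from most to least probable, and "
    "give the question, an explanation, and the probability as a percentage."
)

def build_prompt(*paths: str) -> str:
    """Sandwich the reference documents between two copies of the instructions."""
    docs = "\n\n---\n\n".join(Path(p).read_text(encoding="utf-8") for p in paths)
    return f"{INSTRUCTIONS}\n\n{docs}\n\n{INSTRUCTIONS}"

prompt = build_prompt("icp.txt", "best_practices_guide.txt", "services_page.txt")
print(prompt[:500])  # paste the full prompt into the AI tool of your choice
```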
Katie Robbert – 10:10
One of the steps that we didn’t cover—but we’ve covered in previous shows, or you can reach out to us if you are interested in learning how to build your own—is the ideal customer profile. So again, you can catch that episode on our YouTube channel. If you want help constructing one, just reach out to us. The ideal customer profile essentially covers deeper insights than a typical persona. It gets into pain points, buying behaviors, what keeps them up at night, how they make decisions—that kind of thing.
Christopher Penn – 10:46
So, here are five follow-up questions for the digital customer journey analysis: "Can you tell me more about the specific digital marketing channels your machine learning model covers? We use a wide range of platforms, and I want to ensure it's relevant to our mix." So that's something that, on our page, we might want to add more information about. "Regarding marketing mix modeling, can you give examples of the types of marketing channels and tactics your model can analyze to try and optimize budget allocation?" Again, this is for a sales landing page, so these are all things that we would definitely want to address. We could also go over to our blog. Let's take just a blog post of ours. Here's a recent one that Kelsey has put up on the site. This is, Katie, from something that you wrote at one point.
Christopher Penn – 11:28
We're going to take this content, and I'm going to put it essentially right back into the same prompt. If I go to my storage locker here… let's go and put this in. Now we'll have it ask the same five questions of this blog post: "What would our ICP ask Katie after reading this post?" It says, "You mentioned this. Could you elaborate on how to practically apply the platform aspect? What are your recommended platforms for showcasing these human-centered testimonials? You emphasize preserving authentic language, including context. What are some practical guidelines for editing testimonials for clarity without losing authenticity? You advocate for testimonials as AI-proof. What are some best practices for actually creating effective video testimonials?" and so on. So, what we're doing here is taking a really good piece that Katie already wrote and asking, "Well, what else?"
Christopher Penn – 12:30
What else would our ICP care about that we could add more content to this?
Katie Robbert – 12:36
Got it. Because if I’m following correctly, we’ve started with the human audience version of the content. Now, in order for it to appeal to generative AI models, there needs to be a lot more content. So we need to expand upon the content in a way that isn’t just dumping keywords into it, but really done in a thoughtful way, very similar to the way Google’s search algorithm, at one point, was looking for content. It has to be helpful. It has to answer questions. It can’t just be word vomit, but it actually has to be useful to someone. AI systems, AI models are looking for that as well.
Katie Robbert – 13:24
If we know that there’s going to be a public version of this content, we should probably still consider the human audience. Make sure that it’s… maybe it’s frequently asked questions, maybe it’s expanding upon a certain landing page, or whatever it is. But we still need to make sure that we’re sort of balancing the two. I think that’s where it’s going to get tricky for people.
Christopher Penn – 13:43
Exactly right, which brings us to a nice segue, Katie: how to do those frequently asked questions. One of the most straightforward tools for this is Google's NotebookLM. NotebookLM, as many folks are familiar with—because everyone and their cousin uses it to make audio overviews, which is fine, there's nothing wrong with that—I'm going to take our content here. I'm going to save this as a PDF, which is just this article that Katie wrote, and I'm going to put it into NotebookLM. Let's see if it's saved in a location that was intelligent. Where did you go? Yes, there it is. So, I'll plop our PDF in. Once it's finished loading and it reads through Katie's writing, you'll see there are little chiclets alongside here: the study guide, the frequently asked questions, the briefing document. It did not enjoy that.
Christopher Penn – 14:40
All right, let's do a text-only version of it instead. I'll copy and paste the article text, remove the downloaded PDF, since it did not like that source, and add in just the plain text version. It appreciated that much more.
Katie Robbert – 15:11
Well good. I like that it has the shouting emoji as well.
Christopher Penn – 15:17
Exactly. One of the things that NotebookLM has built in—because it's intended as a study tool—is the ability to generate automated cards: things like timelines or briefing documents or study guides or frequently asked questions. Here we have the frequently asked questions, where it took this and spit out, essentially, "Here are the frequently asked questions of this article." So, I could take this verbatim and literally just glue it to the end of the post, as a way of having machines look at those questions and say, "Yeah, this reiterates what's in the content." Because what we're trying to do is increase the number of relevant tokens, AKA keywords, that a machine would see when consuming the article as a whole. So, if I glue on these extra seven questions, I'm basically restating the article.
Katie Robbert – 16:14
Which is really what frequently asked questions are. It’s information that is already available, but for some reason, people can’t find it. So if you stick it in FAQs, it’s like, “Oh, that’s the thing I was looking for.”
Christopher Penn – 16:32
Exactly. You stick that in there, and that would work. Now, here's something else that you could do with this. If you wanted to get fancy, you could take these frequently asked questions and put them in a tool like ElevenLabs, which is a text-to-speech voice interface. Or you could do it in Google's Text-to-Speech, or any platform that can read things aloud. You could create an audio version of just the frequently asked questions. Just by having that content, having this discussion, you would have a podcast. You could take that text and turn it into audio. Now, you might not want it read exactly as it's written; it's kind of awkward to have it just read a list of questions and answers aloud.
Christopher Penn – 17:27
What you could do, though, is send that back to Gemini. Let's start a new prompt here and say, "Convert this content into a speaking script for a narrator to read aloud in the style of an educational video. Restructure it so that it has a narrative flow that's easy for a listener to follow along with without needing to read it." It comes up with essentially the same information, but in an easier narrative flow: "In today's digital world, we're surrounded by information, marketing messages, product descriptions, interviews. It's everywhere online," and so on. Or we could use the original blog post as well, or a combination of the two and have them merged together.
Christopher Penn – 18:15
But what you see here is we now have essentially a voiceover script. This voiceover script can be handed to a tool—again, like ElevenLabs—that could process this and turn it into audio.
Katie Robbert – 18:31
Interesting.
Christopher Penn – 18:33
So, let's go ahead and take this. I'm going to copy… let's copy the markdown, because I see some interesting stuff in there. In this script, there are a lot of cues, and I don't really want a service reading those aloud; that's not going to help, so we're going to get rid of those. Now that we have this here, we don't need to have it declare every line, "The narrator says this," so we're going to remove that as well so that it's a nice, clean script. This is pretty decent. Let's remove the markdown asterisks. We don't need those; they just confuse text-to-speech models. We are at 7,000 characters now. Let's see… who do we want to use? Do we want to use ElevenLabs?
Christopher Penn – 19:26
Do you have a preference as to the kind of voice you want to have this read aloud, Katie?
Katie Robbert – 19:33
Not really. I mean, I’ll be honest… Sorry. Go ahead, John.
John Wall – 19:36
I just said Morgan Freeman.
Katie Robbert – 19:38
Well, sure.
Christopher Penn – 19:39
I don’t have Morgan Freeman. I could do Casey Kasem.
Katie Robbert – 19:42
No. I don't know a lot about the different voice platforms. If you want to do a quick run-through of them, then I'll know what my options are.
Christopher Penn – 19:54
There are a lot of text-to-speech platforms. There are voices built into your computer; macOS has built-in voices that can read content aloud. They sound fairly robotic, and you can tell without a shadow of a doubt that they are completely machine-generated. Then there are platforms like ElevenLabs. It is one of many, and they're pretty good. What makes them nice is that they are very user-friendly: you can just copy and paste text into them and be off and running. However, they're not the most natural-sounding, because there's no way to provide things like inflection. The model is going to guess at how a piece of text should be read aloud.
Christopher Penn – 20:50
Another option would be Google's own text-to-speech library. Google's is a lot more complicated to use because you have to use it through its API. However, it sounds the most natural; it sounds incredible when you listen to it. Let's go ahead and have ElevenLabs generate this speech. Rachel is the voice that's going to read this. I can't pipe the audio—I don't think—from here to there, so we'll have to put the audio samples in Analytics for Marketers. But we can come up with a few different options for how this would sound. Once you've got this audio, you can put it up as a podcast. I could even take your voice, Katie, with your permission, and have a version of you read this aloud.
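For readers who want to try the API route Chris alludes to, here is a minimal sketch using Google Cloud's Text-to-Speech Python client. It assumes the google-cloud-texttospeech package and configured application credentials; the voice name is simply one of Google's stock voices chosen for illustration, and very long scripts may need to be split into chunks.

```python
# Minimal sketch: synthesize a narration script with Google Cloud Text-to-Speech.
# Assumes `pip install google-cloud-texttospeech` and configured Google
# application credentials; the voice name and file names are illustrative.
from google.cloud import texttospeech

client = texttospeech.TextToSpeechClient()

with open("narration_script.txt", encoding="utf-8") as f:
    script = f.read()  # note: requests are size-limited, so long scripts may need chunking

response = client.synthesize_speech(
    input=texttospeech.SynthesisInput(text=script),
    voice=texttospeech.VoiceSelectionParams(
        language_code="en-US",
        name="en-US-Neural2-F",  # illustrative pick; list voices to choose your own
    ),
    audio_config=texttospeech.AudioConfig(
        audio_encoding=texttospeech.AudioEncoding.MP3
    ),
)

with open("narration.mp3", "wb") as out:
    out.write(response.audio_content)
```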
Katie Robbert – 21:53
If this was something we were going to do more consistently, I would probably opt for that so that it was me versus some generic Rachel person. Not to derail this conversation, but with these text-to-speech tools, what are you able to choose? You said it doesn’t do inflection, but can you choose different accents? For example, does it always default to the generic flat American accent, or can you actually do something that isn’t necessarily… Not that I would want this because it’s going to make me sound unintelligent, but could I do a Boston accent without having to program something in? Is that one of the options?
Christopher Penn – 22:50
If you had the audio, the training data, with the Boston accent, yes, you could. It has to be in the data that you give it. I just regenerated this with my friend Ruby's voice. She's English; you've met her at the Marketing AI Conference, Katie. She has a southern England accent, so it will read this in her accent. So, if we had Southie Katie on record, then yes, it absolutely could. Not gonna happen.
Katie Robbert – 23:22
But, good to know. We do enough audio and video that you have enough samples of my voice. So, if it were going to do text-to-speech of something that I’d written, I personally would want it done in my voice versus some other voice or accent. That’s something I would need to think about when I’m putting together my content strategy: is text-to-speech part of that? Who’s going to be representing the stuff that I write? Is it going to be me? Is it going to be someone else? Because if people are watching me on a video or listening to me on a podcast and then this pops up with a completely different voice, that could be confusing for my personal brand or the company’s brand.
Katie Robbert – 24:07
Those are just things that we want to consider long-term as we’re playing with these tools. John Wall, you’re the voice of Marketing Over Coffee. If someone else is reading all the frequently asked questions about Marketing Over Coffee, I could see how, for your audience, that might be a little jarring.
John Wall – 24:31
Yeah, the whole soundscape is a big part of when you get to professional-level stuff. You have to spend a lot of time working on that. So, picking the right voice is a critical thing. Although I still would love to have Southie Katie do some of our stuff. I think that would resonate.
Katie Robbert – 24:47
All right, we’ll see what we can do.
Christopher Penn – 24:52
Look, I was able to get the voice loaded. Let’s see if we can play this and see how it sounds.
Speaker 4 – 24:56
In today’s digital world, we’re surrounded by information: marketing messages, product descriptions, even reviews. It’s everywhere online. But as artificial intelligence becomes more and more sophisticated, blurring the lines between what’s real and what’s generated, something truly valuable is rising to the surface: genuine human connection.
Christopher Penn – 25:17
So that’s the generated audio from ElevenLabs.
John Wall – 25:23
The…
Christopher Penn – 25:24
The user interface permits 5,000 characters. If you use the API, which requires a bigger technical lift, it can handle essentially as much text as you can pay for.
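For reference, a minimal sketch of the API route might look something like this with plain requests. The endpoint shape reflects ElevenLabs' v1 text-to-speech REST API as commonly documented; the voice ID, model name, and file names are placeholders, so verify the details against their current documentation before relying on it.

```python
# Minimal sketch: send a narration script to ElevenLabs' text-to-speech REST API.
# The endpoint and fields reflect the v1 API as commonly documented; verify
# against current ElevenLabs docs. VOICE_ID and the API key are placeholders.
import os
import requests

VOICE_ID = "your-voice-id-here"  # e.g., a cloned and verified voice
url = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

payload = {
    "text": open("narration_script.txt", encoding="utf-8").read(),
    "model_id": "eleven_multilingual_v2",  # illustrative model choice
}
headers = {
    "xi-api-key": os.environ["ELEVENLABS_API_KEY"],
    "Content-Type": "application/json",
}

resp = requests.post(url, json=payload, headers=headers, timeout=120)
resp.raise_for_status()

with open("narration_elevenlabs.mp3", "wb") as f:
    f.write(resp.content)
```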
Katie Robbert – 25:38
It’s always whatever you can afford, isn’t it?
Christopher Penn – 25:41
Exactly. The other thing with ElevenLabs—which is a good thing—is that if you put in a sampled voice, the human being has to authenticate it live. They have to go into the setup wizard and say, “Yes, this is really me. Here’s a picture of me. Here’s me.” They turn on the video, and you have to speak aloud so that it matches the voice recordings with you, so that people can’t be doing deepfakes of you without your permission.
Katie Robbert – 26:10
I do like that. I do like that.
Christopher Penn – 26:14
So that takes text. We've gone from a blog post to an enhanced blog post, to frequently asked questions, to a speaking script, and now we've turned that into a podcast. Our next stop on our journey would be a tool to turn that into video. One of the easiest ones to use is a tool called Headliner. Headliner allows you to create what's called an audiogram. So, I'm going to choose this audiogram here, and we're going to take just a specific piece of audio. I'm going to upload just the audio; I don't want a transcript. Let's load up a template. I always like this one here. Let's change the text to… human testimonials.
Katie Robbert – 27:13
This question came about a few weeks ago when we were talking about different channels that people should be on. John, I think you’ve asked this question before: “What do I post on YouTube? What if I don’t create a lot of video?” I think that’s one of the misconceptions about YouTube as a channel. Yes, it’s a video channel, but you don’t have to sit in front of a camera and record hours and hours of video to have content to post. I think this is a great example of, if you’re not recording yourself, what else can you create that you can then put up and create a YouTube channel? Oh good, it should be ready by 1:42. That’s helpful.
Christopher Penn – 28:05
This is one of many options. You could, if you wanted to—it looks a little uncanny valley—take sampled video of yourself and, using a tool like HeyGen, load that in and train a video avatar that will speak like you. For anybody who knows you, they'll go, "That's not real," because it will not capture your mannerisms correctly, particularly if you're like me and you move around a lot in a video frame. It will do some very strange things. But if you can persuade yourself to sit perfectly still for two minutes and read a script aloud, it will do a decent job of simulating you. Then you could hand the script to that and have HeyGen generate the video of you speaking it aloud.
Katie Robbert – 28:54
I think that’s a good option for you, John. You don’t record a lot of video of yourself.
John Wall – 28:59
Yeah, only if I could get like a Max Headroom skin to put on top of it. That would be good. I would have to have a slightly fake video guy.
Katie Robbert – 29:07
I was going to say, don’t forget to blink during your sample.
John Wall – 29:11
That’s creepy after a while.
Christopher Penn – 29:17
There are also tools like Kling and Hailuo and Hunyuan and a bunch of different Chinese models that allow you to do very similar things, or to just create video in general. There are tools like Storybook IO or Visla, where you give it a script and it assembles a video using stock footage; if you pay for it, it authorizes the construction of the video. If we go to… I believe it's Storybook IO… nope, that's not it. Storybook AI. There we go: the AI Story generator. You can give it a script, and it will just make video; it can assemble video from stock footage. It looks kind of like an infomercial. If you're insufficiently specific in the script, in the prompt, it can definitely have that sort of late-night cable TV look.
Christopher Penn – 30:10
But to your point earlier, Katie, if you just don't want to be on screen, it's a valid option.
Katie Robbert – 30:19
These tools are fairly new. I remember a while back we had a client, and all we had for them were static images. They kept saying, “We want to see video,” but they weren’t providing video. We didn’t have a way to capture video. These tools, at the time, just were not readily available. Now, especially for social media platforms—YouTube, Instagram, TikTok—ones that are more video-forward, where you’d want to have something more engaging, it is going to be easier if you have a bunch of static images to stitch them together to make a video. Say you don’t have images, you have a PowerPoint presentation, or you have a bunch of charts and graphs, you can do the same thing.
Katie Robbert – 31:08
These apps are going to help you put something together that you can use as a stand-in for shooting a lot of video.
Christopher Penn – 31:17
Exactly. Another thing you should be thinking about, in addition to extending your content: first things first—and we talked about this last week—you want to make sure you are using well-structured content. For example, in this issue of the newsletter, you can see we're using heading two, heading three: HTML-based syntax that will show up correctly even without the style sheet. This is what it looks like without our style sheets, so it's clear to a machine what the layout is. Every image, for example, should have alt text so that we know what the image is, because that helps machines understand things as well.
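As a quick way to sanity-check the structure Chris describes, here is a small sketch that audits a page for its heading outline and missing alt text. It assumes the requests and beautifulsoup4 packages; the URL is a placeholder.

```python
# Minimal sketch: audit a page for the structural signals discussed here,
# heading hierarchy and image alt text. Assumes `pip install requests beautifulsoup4`;
# the URL is a placeholder.
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/your-post"  # placeholder
soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

print("Heading outline:")
for h in soup.find_all(["h1", "h2", "h3", "h4"]):
    print(f"  {h.name}: {h.get_text(strip=True)[:80]}")

missing_alt = [img.get("src") for img in soup.find_all("img") if not img.get("alt")]
print(f"\nImages missing alt text: {len(missing_alt)}")
for src in missing_alt:
    print(f"  {src}")
```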
Christopher Penn – 31:58
One of the things that you might want to do is take this kind of text and translate it into other languages. For example, let me go into AI Studio. I'll give it a prompt and say, "Take this newsletter, which is in HTML format, and convert it into Mandarin Chinese." It will correctly preserve the HTML structure, URLs, and such, but convert the rest of the content into Mandarin Chinese. Then you could put this up as a separate post on your website. Having that content in a different language might be useful.
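Here is a minimal sketch of how that translation step might look in code rather than in AI Studio's web interface, using Google's generative AI Python SDK. The SDK calls and model name reflect the google-generativeai package at the time of writing and should be treated as assumptions to verify against current docs.

```python
# Minimal sketch: translate an HTML newsletter into Mandarin Chinese while
# preserving tags and URLs, using the google-generativeai SDK. The model name
# and file names are illustrative assumptions.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")  # illustrative model choice

html = open("newsletter.html", encoding="utf-8").read()

prompt = (
    "Take this newsletter, which is in HTML format, and convert it into "
    "Mandarin Chinese. Preserve all HTML tags, attributes, and URLs exactly "
    "as they are; translate only the human-readable text.\n\n" + html
)

response = model.generate_content(prompt)

with open("newsletter_zh.html", "w", encoding="utf-8") as f:
    f.write(response.text)
```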
Katie Robbert – 32:46
Chris, how do you know it's correct? The reason I ask this… I'll tell you a brief anecdote. When I was working on substance abuse intake tools, developing them for clinicians, the versions we had were English, Spanish, Mandarin, and Cantonese. Nobody—literally nobody on my team—spoke Mandarin or Cantonese; I don't know why we chose those. Once we actually hired a proper translator, we found out that some of the things we thought were translated correctly were actually translated very not-safe-for-work. Find me in Analytics for Marketers if you want to know specifically what that was. So, how do you know… I don't speak Cantonese. How do I know that this is correct?
Christopher Penn – 33:44
The best practice is to have a native speaker of the language review it. That is the best practice. If you do not have that, then feed it into a different language model, perhaps not the one you used, and review the translation. For example, folks may have heard of DeepL. It's a professional translation tool that does a very good job. I can take the output from Gemini and read through it and see if it did something completely and totally wrong. In this case, it seems to have done a very good job. A good chunk of that is because, Katie, our newsletter is written to be straightforward; it doesn't have a lot of weird contortions in language.
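If you wanted to automate that second-opinion check, a minimal sketch with DeepL's official Python client might look like this. It assumes the deepl package and an API key; it only produces a back-translation for a human to eyeball, not a verdict on correctness.

```python
# Minimal sketch: back-translate the Chinese output with DeepL so a reviewer
# can eyeball it against the English original. Assumes `pip install deepl`
# and a DeepL API key; this is a spot-check aid, not a substitute for a
# native speaker's review. File name is a placeholder.
import os
import deepl

translator = deepl.Translator(os.environ["DEEPL_API_KEY"])

chinese_text = open("newsletter_zh.txt", encoding="utf-8").read()
back_translation = translator.translate_text(chinese_text, target_lang="EN-US")

print(back_translation.text)  # compare this against the original English copy
```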
Christopher Penn – 34:38
If you were doing this with things like fiction, where you had very specific narrative turns of phrase, it might fall down. But for our case, business content… if you look at this: "In this episode of In-Ear Insights, the Trust Insights podcast, Katie and Chris discuss optimizing your content strategy for the age of artificial intelligence." This is almost perfectly correct. There's a slight variance from the original, but not enough that anyone would go, "I don't understand what this means," or "Wow, that's offensive."
Katie Robbert – 35:14
I would make a recommendation, just a quick plug to make a data-driven decision, because, yes, you can translate into whatever language your little heart desires. But because there are so many different options, I would recommend going into your web analytics system—like Google Analytics 4—and seeing where your traffic is coming from geographically, so that you can start to narrow down: does it make sense to translate this into Cantonese? Are we getting a lot of traffic from a geographic area that speaks Cantonese? Chris has this exact analysis pulled up: English, Spanish, Indonesian, Korean, Chinese, Arabic, French, German. I would look at this and go, "So obviously things need to be in English. They also probably need to be grammatically correct, so that's on us. But now, Spanish is an interesting one."
Katie Robbert – 36:13
Having done translations in Spanish, there are a lot of different kinds of Spanish, so we would need to narrow that down even further. The same goes for each of these; there tends to be more than one way to speak a language. So this is a good start. If we suddenly saw a huge spike in Spanish, we would then want to drill down into which region so that we could get the correct version of Spanish.
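If you want to pull that breakdown yourself, here is a minimal sketch using the GA4 Data API's Python client. It assumes the google-analytics-data package, credentials with access to the property, and a placeholder property ID.

```python
# Minimal sketch: pull a language breakdown of sessions from GA4 so the
# translation decision is data-driven. Assumes `pip install google-analytics-data`,
# configured credentials, and a placeholder property ID.
from google.analytics.data_v1beta import BetaAnalyticsDataClient
from google.analytics.data_v1beta.types import (
    DateRange, Dimension, Metric, RunReportRequest,
)

client = BetaAnalyticsDataClient()
request = RunReportRequest(
    property="properties/123456789",  # placeholder GA4 property ID
    dimensions=[Dimension(name="language")],
    metrics=[Metric(name="sessions")],
    date_ranges=[DateRange(start_date="90daysAgo", end_date="today")],
    limit=10,
)

response = client.run_report(request)
for row in response.rows:
    print(f"{row.dimension_values[0].value}: {row.metric_values[0].value} sessions")
```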
Christopher Penn – 36:40
One thing I will say here—and this is now specifically about AI—is that it's a good idea for anyone who wants to influence AI to have a Chinese language version. Some of the best AI models in the world are coming out of China. If you have content in both English and Chinese, you're basically doubling your coverage. As their scrapers go out on the web, their tools—for obvious reasons—prefer Chinese language content. So, if you've got both, when someone queries these language models—because Chinese is broken down into tokens just like English is—you're essentially increasing the token count for relevance around your brand name.
Christopher Penn – 37:31
It’s a good idea—assuming that you follow the recommendations Katie laid out here to make sure you’re doing it safely—to have Chinese language content, regardless of the number of human beings who come to your website. Because we are trying to influence DeepSeek and Alibaba’s Tongyi Qianwen and Baidu’s ERNIE Bot—to a lesser degree, models like x01—all these Far Eastern state-of-the-art models, we want to be speaking in their language as well as our own.
Katie Robbert – 38:05
So John, how soon can we expect the multilingual versions of Marketing Over Coffee?
John Wall – 38:10
Right. Running it in every language, that would be one thing we would put up there. But I'm still struggling with the video. We've got video going up now, so I'm at least getting ahead of that. The one good thing, though, is we do have the language breakdowns. I do know the ten languages we would start with, so I at least have that going for me. But yeah, not there yet.
Christopher Penn – 38:33
To recap: number one, you should be using generative AI to extend your content, whatever form it’s in. What are the five questions your ideal customer is going to ask of any piece of content? Build that out. If you are clever, you might have a Claude project or a GPT or a Gemini gem with your writing style and background knowledge, and you can—with the article you have—infer a first draft of the extended version of the content. Second, use NotebookLM to generate things like frequently asked questions. Then use the model of your choice to extend that even further. Convert things into scripts that can be read aloud by text-to-speech tools so that you can have an audio version of a piece of text content.
Christopher Penn – 39:21
One of the things that I found personally is that about 30% of your audience would like to listen to your content. If you make it easy, it also allows you to be in Spotify, Apple Podcasts, Google Podcasts, and so on. From there, use any of the popular AI tools to turn your audio into video. Now you have content for YouTube, Wistia, and—depending on the length—LinkedIn. You could use a tool like Opus Clip to then take that long version, split it into short versions, and now you have Instagram Reels, Facebook Reels, TikToks, and such.
Christopher Penn – 39:55
Finally, take all of this in the language that you’re used to, and use the tools to follow the same process within languages that maybe you aren’t as fluent in, or that you don’t necessarily speak, but you have the tools to fact-check and generate. You take one piece of content—one video, one newsletter—and potentially have 50 or 60 pieces of content come out of it.
Katie Robbert – 40:23
I like it. It sounds like a lot of work, but like anything, if you build a process around it—perhaps even use something like the 5P framework to get yourself organized—you can determine your purpose, people, process, platform, and performance. Then it becomes part of your daily, weekly, monthly routine as you’re creating content. “Okay, I’m going to write the newsletter, but while I’m writing the newsletter, I’m also going to give it to this system and then do this.” It becomes part of what you do all the time. I already have some thoughts about some hopefully easy wins we can achieve with our newsletter—things we should have probably been doing all along.
Christopher Penn – 41:08
Exactly. That’s going to do it for this week’s show. Next time, we’ll cover part three, which is what you should be doing outside of your website for AI optimization. We will talk to you on the next one. Take care! Thanks for watching today! Be sure to subscribe to our show wherever you’re watching it. For more resources, and to learn more, check out the Trust Insights podcast at TrustInsights.ai/TI-Podcast, our weekly email newsletter at TrustInsights.ai. Got questions about what you saw in today’s episode? Join our free Analytics for Marketers Slack group at TrustInsights.ai/Analytics-for-Marketers. See you next time!
Need help with your marketing AI and analytics?
Get unique data, analysis, and perspectives on analytics, insights, machine learning, marketing, and AI in the weekly Trust Insights newsletter, INBOX INSIGHTS. Subscribe now for free; new issues every Wednesday!
Want to learn more about data, analytics, and insights? Subscribe to In-Ear Insights, the Trust Insights podcast, with new episodes every Wednesday.
Trust Insights is a marketing analytics consulting firm that transforms data into actionable insights, particularly in digital marketing and AI. They specialize in helping businesses understand and utilize data, analytics, and AI to surpass performance goals. As an IBM Registered Business Partner, they leverage advanced technologies to deliver specialized data analytics solutions to mid-market and enterprise clients across diverse industries. Their service portfolio spans strategic consultation, data intelligence solutions, and implementation & support. Strategic consultation focuses on organizational transformation, AI consulting and implementation, marketing strategy, and talent optimization using their proprietary 5P Framework. Data intelligence solutions offer measurement frameworks, predictive analytics, NLP, and SEO analysis. Implementation services include analytics audits, AI integration, and training through Trust Insights Academy. Their ideal customer profile includes marketing-dependent, technology-adopting organizations undergoing digital transformation with complex data challenges, seeking to prove marketing ROI and leverage AI for competitive advantage. Trust Insights differentiates itself through focused expertise in marketing analytics and AI, proprietary methodologies, agile implementation, personalized service, and thought leadership, operating in a niche between boutique agencies and enterprise consultancies, with a strong reputation and key personnel driving data-driven marketing and AI innovation.