
So What? The Prompt Engineering Life Cycle

So What? Marketing Analytics and Insights Live

airs every Thursday at 1 pm EST.

You can watch on YouTube Live. Be sure to subscribe and follow so you never miss an episode!

 

In this week’s episode of So What?, we cover what the prompt engineering life cycle is, how it relates to the SDLC, and best practices for navigating it!

Catch the replay here:

So What? The Prompt Engineering Life Cycle

 

In this episode you’ll learn: 

  • What the prompt engineering life cycle is
  • Similarities to the SDLC
  • Best practices for the prompt engineering life cycle

Upcoming Episodes:

  • TBD

 

Have a question or topic you’d like to see us cover? Reach out here: https://www.trustinsights.ai/resources/so-what-the-marketing-analytics-and-insights-show/

The following transcript is AI-generated and may not be entirely accurate:

Katie Robbert 0:30
Well, hey there everyone. Happy Thursday. Welcome to So What?, the marketing analytics and insights live show. I am Katie, joined by Chris and John.

John Wall 0:37
Hello guys!

Katie Robbert 0:48
One of these days we’ll get it right. This week we’re talking about the prompt engineering lifecycle. What’s that? You’ve never heard of the prompt engineering lifecycle? That’s because we just made it up. We’re talking about the prompt engineering lifecycle because it relates to the software development lifecycle. So we’re gonna go through what all of that means and how a framework like the prompt engineering lifecycle can help you write better prompts that will give you better output, make it more repeatable, more scalable, all that stuff that we really want generative AI to help us do. But in order to do that, we have to get organized. So Chris, with that, where would we like to start today?

Christopher Penn 1:34
I think you should start by explaining what these life cycles even are, especially for folks who are not hardcore techies and coders.

Katie Robbert 1:47
Absolutely. So, a little bit of background: I managed development teams for about a decade, and so I’m intimately familiar with the software development lifecycle. The software development lifecycle, or the SDLC, is basically a framework that anyone who does development is using, and it’s a basic set of steps that help you go from: I’m planning the project; I’m designing the project (this could be front-end and back-end development, or it could just be what my approach to this project is); then there is the action of doing the development. Once you’re done with development, you go into testing. You should go into testing. Not everybody does, Christopher, but you should go into testing. And then you actually deploy, once you’ve decided: yes, I have the output that I want to have, the product; I did the code; this is the thing that I want. It depends on what company you’re at, what team you’re on, or whether you’re waterfall or agile. Waterfall is basically a series of steps where you have to complete one phase before going on to the next; agile is more iterative. But basically, every development team has a version of the software development lifecycle. These are the basic steps; you can add more, you can take some away. But this is always going to be the same set of core steps in the software development lifecycle.
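
For readers who like to see the stages written out, here is a minimal sketch of those core SDLC phases in Python. The phase names come straight from Katie’s description; the function names and the waterfall-versus-agile loops are purely illustrative, not any team’s actual process.

```python
# A minimal sketch of the core SDLC phases described above.
# Phase names come from the episode; the structure is illustrative.

SDLC_PHASES = [
    "planning",     # what are we building and why?
    "design",       # front end, back end, or just the overall approach
    "development",  # actually doing the work
    "testing",      # you *should* test before you ship
    "deployment",   # release the thing you decided you wanted
]

def run_waterfall(phases: list[str]) -> None:
    """Waterfall: each phase must finish before the next one starts."""
    for phase in phases:
        print(f"Completing phase: {phase}")

def run_agile(phases: list[str], sprints: int = 3) -> None:
    """Agile: the same phases, repeated in short iterative cycles."""
    for sprint in range(1, sprints + 1):
        for phase in phases:
            print(f"Sprint {sprint}: {phase}")

if __name__ == "__main__":
    run_waterfall(SDLC_PHASES)
    run_agile(SDLC_PHASES)
```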

Christopher Penn 3:05
And so if we move on to the prompt lifecycle, how did we get here?

Katie Robbert 3:10
So how we got from the software development lifecycle, the SDLC, to the prompt engineering lifecycle is that we were having a conversation about how prompt engineering is akin to software development. So wouldn’t it make sense if prompt engineering followed a similar framework to the SDLC? And so I thought about it, and I started mapping the things that we know, the steps that we want people to take when doing prompt engineering, when approaching it, to the software development lifecycle. The first step is always planning, and this is a great opportunity to use the five P’s. The five P’s are purpose, people, process, platform, and performance. This is going to give you your basic set of requirements and get you on the right foot, on the right track, stay focused. And so once you have your five P’s, once you know your user stories, your purpose statement, how you’re going to measure everything, you can move on to actually developing the prompts themselves. And that’s where you can use our RACE framework, which is Role, Action, Context, and Execution. Role: who are you? Action: what are you doing? Context: here’s some background information. And Execution: this is all of the information you’re going to be giving to the generative AI tool that you’re using. So you could be saying something like: today, you’re going to be a B2B marketer. I need you to help me put together a content marketing plan. Here’s some background information on what we currently write about. Okay, let’s go ahead and do it. You can grab a free copy of this framework at trustinsights.ai/promptsheet. So that is the second phase; that’s your development. Once you get through the development, you probably have to go through some iterations. This is where it becomes similar to testing your code. The very first version is probably not the correct version, so you have to go through and work with the generative AI model. And this is where the PARE framework comes in; you can grab a copy of this, again a free download, at trustinsights.ai/parequestions. So the first thing you want to do is prime your model. You’ve given it some basic information; say, well, what do you know about this? What are some best practices? So in that example of a content marketing plan: what do you know about putting together content marketing plans? Then you would augment it and say, what questions do you have for me? Then you would refresh it and say, what did I forget to ask you? Because these tools, these large language models, have a wealth of information, but they don’t know how to tap into it unless you’re prompting them to do so. And then you would evaluate: did it fulfill the conditions of the prompt completely? And so that’s your testing. Did I ask everything? What else am I missing? You know, and you just keep going through; your testing is your iterative phase. And then once you’re satisfied that I have built a prompt that is going to do what I need it to do, then you deploy it. You start using it in your every day; you use it to build your content marketing plan, you use it to run code, whatever it is, and you put it into your prompt library. And then you have maintenance, which is: once a month, once a quarter, however often your data is changing, or your needs are changing, or your goals are changing, you go back and revisit the original prompt that you built and ask, does this still stand? So that is a very... I feel like I’ve been talking for way too long.
But in a nutshell, the software development lifecycle is how you approach software development. We’ve now adapted that to be prompt engineering focused: how you approach prompt engineering. The first question people probably ask is, well, why can’t I just open generative AI and start, you know, working on it? You absolutely can; you’re just likely not going to get the results you’re looking for. Same with software development: you can open, you know, a blank browser and start coding. But if you don’t have a sense of direction, you’re just going to be sort of floundering around; you don’t know if you’re getting the outcome. Same thing with driving a car: you can get in a car and start driving. But if you don’t have a destination, you’re going to waste gas, go sit in traffic, maybe run out of podcasts because you’ve listened to all of them. I don’t know, there’s a lot of things that could go wrong. So you really want to have a plan going in. I’m done.
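
To make the RACE structure concrete, here is a small sketch of how you might assemble a prompt from those four parts. The field names mirror the framework as Katie describes it; the function name and the example values are placeholders, not anything prescribed by the framework itself.

```python
# A small sketch of assembling a prompt with the RACE framework
# (Role, Action, Context, Execution) described above.
# The function name and example values are illustrative placeholders.

def build_race_prompt(role: str, action: str, context: str, execution: str) -> str:
    """Combine the four RACE components into a single prompt string."""
    return (
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Context: {context}\n"
        f"Execution: {execution}\n"
    )

prompt = build_race_prompt(
    role="Today, you are a B2B marketer.",
    action="Help me put together a content marketing plan.",
    context="Here is some background on what we currently write about: ...",
    execution="Produce a quarterly plan as a bulleted outline.",
)
print(prompt)
```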

Christopher Penn 7:34
No, I think that’s an excellent overview. Because most people think of prompt engineering and prompting as exactly what you said: open up ChatGPT or Gemini or Claude and just start typing. And there’s nothing wrong with that; that is totally fine. That is totally a great way to use basic generative AI. However, a lot of companies want to do more with these tools. They want to have them be public facing, they want to leverage the data that already exists, they want to deliver capabilities at scale. And there’s nothing scalable about a person copying and pasting into ChatGPT over and over again; that’s not a good use of someone’s time. There are now systems built on these tools that allow you to put together these kinds of utilities. So I figured, for fun today, we would explore what this might look like. And we’re going to do this in Google’s new Vertex AI. Vertex is their system. If you remember back from previous episodes, we did an episode on custom GPTs, and we built a custom GPT called Katie GPT that tries to imitate Katie. And that’s fun. We could use it within the ChatGPT interface as well. But what if we didn’t want to have it just be in OpenAI? What if we wanted to have it on our website? Or in our Slack group? Or any place? What if Katie GPT could be everywhere?

Katie Robbert 9:07
Katie GPT is coming for you.

Christopher Penn 9:09
Exactly, exactly. It can be. And so what we’d want to do is start thinking through the prompt engineering lifecycle, right? What is it supposed to do? What does the prompt look like? How do we test it? And how do we deploy it? The deployment and maintenance parts are the parts that are going to get really important, because if you’re using a tool like ChatGPT just as a consumer, talking to it and such, there really isn’t a lot of deployment and maintenance per se. You can always go back; you can say, hey, clarify what you said. When you’re using one of these systems and you’re actually deploying software, you don’t have the ability to ask follow-up questions anymore. You’ve got to get the prompt right on the first shot. And that’s what makes this lifecycle so important when you want to take your AI usage from personal and human and individual to scaled for the enterprise: you need to use a framework like this to make it work well. Alright, so let’s go. I’m gonna go ahead and leave Gemini, leave the comfort and safety of the interface we all know, and go into Google Vertex. You will notice here, this is very similar to the way the custom GPTs work in OpenAI, except it’s got Google’s design interface. It’s less pleasant to look at. Instead of calling this Katie GPT, we’re gonna go ahead and call it Agent Katie. I think that sounds more fun. And Agent Katie’s got to have a goal.

Katie Robbert 10:51
To take over the world?

John Wall 10:56
Do what we do every weekend, brain!

Katie Robbert 10:57
Thank you, John. Well, you know, to be realistic in this sense... okay, so you’re ready. So basically, it’s exactly that: to provide helpful, useful, and truthful information to the user about Trust Insights. That is a really good goal. And then, for those of you who didn’t get a chance to see our previous episode, you can go to trustinsights.ai/youtube and go to the So What? playlist, and you’ll see how we built Katie GPT. This is a lot of the same information that we had to give to that custom model, to give the background to say: when you are acting as Katie GPT, or, no, Agent Katie, this is who you are, these are the things you speak about, this is your mission, these are your values, this is your expertise. And so that’s the information that Chris is starting to fill in here. Thankfully, I know myself very well, and Chris and John know me, so they don’t even have to ask anymore.

Christopher Penn 12:02
Now we need to give some parameters to Agent Katie. And so this is where we might want to go to our consumer environment just to do some testing. We asked, for example, what do you know about Katie Robbert, CEO of Trust Insights? That could be one way of doing it; you can grab stuff from what it knows internally. Of course, we have Katie’s bio from the website, and you of course have lots and lots of writing samples. It says it doesn’t have enough information; that’s fine. What I did was I went and got the old Katie GPT instructions that we built. Now, at this point in our prompt engineering lifecycle, we’ve got more or less all the context, right? So here we have the Agent Katie instructions: these are the key values, background information on Katie’s writing techniques, background information about Katie’s writing style, an example of Katie’s writing, and so on and so forth. We feel like we’ve got a really good, fairly lengthy prompt. But if we go to the prompt engineering lifecycle, the question is, does this prompt satisfy? Is this prompt satisfactory? Or do we need to do some iteration on it, Katie?

Katie Robbert 13:23
I would say, I’m guessing that the answer is yes, probably. You know, I think that the core contextual information is correct. I remember having done this before, and there’s nothing that I would necessarily change. But having used Katie GPT in ChatGPT, I know that there are some limitations. So I would say that we definitely would want to iterate, and, you know, perhaps we go through and say this is round one: we give this information to the large language model and say, what questions do you have? This is where we could get into some of that PARE framework: what am I missing? What other information would be beneficial here?
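
As a sketch of what one of those iteration rounds might look like in practice, here is the Prime, Augment, Refresh, Evaluate sequence expressed as a simple loop. The `ask_model` function is a hypothetical placeholder for whatever chat interface or API you happen to be using; the questions themselves are the ones described in the episode.

```python
# A sketch of the PARE iteration loop (Prime, Augment, Refresh, Evaluate).
# `ask_model` is a hypothetical placeholder for whatever chat API you use.

def ask_model(message: str) -> str:
    """Placeholder: send a message to your LLM and return its reply."""
    raise NotImplementedError("Wire this up to your own chat interface.")

PARE_QUESTIONS = [
    "What do you know about this topic? What are some best practices?",  # Prime
    "What questions do you have for me?",                                # Augment
    "What did I forget to ask you?",                                     # Refresh
    "Did you fulfill the conditions of the prompt completely?",          # Evaluate
]

def refine_prompt(base_prompt: str) -> list[str]:
    """Run one PARE pass and collect the model's answers for human review."""
    answers = [ask_model(base_prompt)]
    for question in PARE_QUESTIONS:
        answers.append(ask_model(question))
    return answers
```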

Christopher Penn 14:08
And one of the things you said on our podcast episode about this this week was that these questions are actually useful for the human side of things as well as the machine side.

Katie Robbert 14:21
They absolutely are. I mean, I like to approach working with generative AI the same way I would if I were delegating, because that’s really what you’re doing: you’re delegating a task, whether it be to, you know, a large language model or a person. You’re asking them to perform a function the way that you’re describing it. A lot of times where we as humans run into issues with delegation is we’re not giving enough information; we are making assumptions about what the other person knows. So like, John, if I say to you, I need you to go ahead and bring in 10,000 more dollars, you’d probably naturally have a lot of questions. Like, well, what is the timeline? How am I approaching this? What do we have to sell? Who’s the audience? You know, what is okay to say yes to, to say no to? What are the price points? Is it one shot? Is it multiple projects? Those are natural questions that you should have. But if I’m giving you instruction saying, bring in 10,000 more dollars, and I’m not giving you any additional context, I’m not setting you up for success. It’s the same thing with a large language model. If you’re not giving the model the opportunity to ask questions and get further clarification, you’re not setting it up for success, and you’re not going to get the results that you’re after. So John, is that your list?

John Wall 15:41
$10,000 for the bag of money, perfect.

Christopher Penn 15:46
So we’re going to iterate, and one of the things we’re going to iterate on is the role, so we’re actually going to put that in as a role.

So now we have your bio from the website, right? So now this gives, again, a lot more useful information. So we’re going to take this whole great big honkin’ prompt here, and we’re going to put it in the instructions section, and you can see it will accept this much. This alone is pretty good; this is a decent start. However, one of the things that you’ll want to do with these agent-based systems is provide them even more information, and that might include more data. So we have to think back to the five P’s: what is the purpose of this thing? Because part of that is process. Like, should this tool just chit-chat, or should it have some of Katie’s knowledge?

Katie Robbert 16:53
And I would imagine that we want it to have some of Katie’s knowledge, my knowledge? Well, it’s getting weird. We would, I would, we would want this tool to have some of my knowledge. If I start talking to myself, talking about myself in the third person, we’re gonna have problems.

Christopher Penn 17:11
Exactly. So let’s go ahead and connect a data store. We’ll call this... let’s see, I already did this. You can connect your website. So I connected the Trust Insights website to this tool, and what it’s going to do, let me pull back up the Trust Insights data store, is reference our website. And essentially, as it’s being asked questions, as you’re interacting with Agent Katie, it’s going to say, I’m going to draw from the knowledge base that you’ve provided me from the Trust Insights website. However, you need to have a prompt here, right? You need to have some kind of prompt. So again, as you’re building this, this is why the prompt engineering lifecycle is so important, because now we’re talking about building prompts not for the end user, but even just to connect systems internally. So in this case, I might pull out the RACE framework, or something similar to it, and say: provide a description of this tool. This description is provided to the model as context and informs how it’s used. The Trust Insights website contains the combined writings of Katie Robbert, along with the rest of the Trust Insights team. Katie most frequently writes about data governance, process and project management, organizational behavior, executive leadership, strategy, thought leadership, and dogs. So now... well, to be fair, this is a prompt that’s guiding the tool on how to understand the data from the Trust Insights website when it’s being used inside of an agent. So if you talk about dogs a lot, we want that to be known. And I also want to set the confidence level. Vertex gives you a choice of up to five different levels of truthfulness, right? In most cases, you’ll probably want to set it to, like, medium. If you’re talking about protected information, like healthcare information, financial information, etc., you probably want to set it very high; you don’t want screw-ups when handling that kind of data. But there’s nothing on our public website that is so high risk that you would have to have 100% accuracy all the time, so low to medium would be fine. I’m gonna set it to medium, okay? You’ll also notice it does support things like unstructured documents or structured documents. I could feed in, this would be duplicative, but I could feed in just the newsletter posts that Katie has written, or I could feed in client decks. Maybe Katie has written some really stellar stuff in client decks, obviously scrubbing them of protected information. Anything that you have data access to, you’d want to provide here. And of course, for each of those things, you need prompts for them. So now we’re starting to see, we go from, oh, just talk to ChatGPT, to this is real software development, where we may not be writing lines of code, but this is 100% a piece of software.
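
Vertex handles the retrieval and grounding for you, but conceptually the flow looks something like the sketch below: pull relevant passages from an indexed data store, attach them to the prompt, and only answer when the retrieval confidence clears a threshold. The `search_datastore` and `ask_model` helpers are hypothetical placeholders, not Vertex AI API calls, and the threshold values are just illustrative.

```python
# Conceptual sketch of grounding an agent's answers in a website data store.
# `search_datastore` and `ask_model` are hypothetical placeholders,
# not actual Vertex AI API calls.

from dataclasses import dataclass

@dataclass
class Passage:
    text: str
    confidence: float  # retrieval confidence, 0.0 to 1.0

def search_datastore(query: str) -> list[Passage]:
    """Placeholder: search the indexed website content for relevant passages."""
    raise NotImplementedError

def ask_model(prompt: str) -> str:
    """Placeholder: call your LLM of choice."""
    raise NotImplementedError

def grounded_answer(question: str, threshold: float = 0.5) -> str:
    """Answer only from passages that clear the confidence threshold.

    A medium threshold tolerates some uncertainty; a higher threshold
    would suit regulated data like healthcare or financial information.
    """
    passages = [p for p in search_datastore(question) if p.confidence >= threshold]
    if not passages:
        return "I don't have enough information to answer that."
    context = "\n".join(p.text for p in passages)
    return ask_model(f"Using only this context:\n{context}\n\nAnswer: {question}")
```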

Katie Robbert 20:34
Well, and that’s why following a lifecycle or a process or a framework, whatever, you know, terminology makes you comfortable, is so important, because it gives you your guidelines. It’s your recipe, your set of instructions for: what do I do next? What did I miss? And so, you know, this was always a challenge with my old development team. They didn’t want to follow a process; they just wanted to do the thing. You know, it’s not unique to them; this is true of a lot of people, sometimes true of myself. They just want to do the thing. They don’t want to plan it; they just... they already know what they’re doing. Ten times out of ten, they had to go back and start from the beginning because of the lack of planning. You know, it also occurs to me, and John, I would love to get your perspective on this: in order to build these doppelganger large language models of an individual person, that person has to have some sort of a presence. You have to know enough information about them so that it’s useful. Like, do you think that there are, you know, people in executive roles that just, like, don’t have enough of a stance or a personality or just enough information about them in order to do this? Like, you know, are they just too vanilla?

John Wall 21:56
Oh, yeah, that’s definitely a problem. I mean, we see this all the time. It’s amazing to me still that there are, like, executives who don’t even exist, I mean, not even a LinkedIn profile, you know. There are people that are in these bureaucratic roles and don’t want anyone to ever contact them about anything and stay hidden, you know. And so, that’s the way it goes. The interesting thing with this is, you know, you can train it on anything. So you could have employees where you’re like, well, this person should be like this person, and you can grab other data. But yeah, this whole thing is pretty fascinating. I haven’t seen behind the scenes on this thing yet, so I’m very interested in how this comes together. But just, you know, for a quick plug on process too, because I’m always, you know, a huge fan of all this, and it always just comes down to the same thing. It’s like, yeah, if you’re just trying to do a proof of concept or get something done, one person can bang it out. But as soon as you want something that’s repeatable and across an organization, you’ve just got to have the structure there, as far as, you know, keeping track of what’s where and how many times it’s run and how the code has changed over time, even just for prompts, right? Like, you don’t want every person starting from zero every time they do this. There should be some organizational learning that gets applied that puts your whole team ahead of the mark, instead of having, you know, some teams doing a great job all the time and other teams just screwing up every time they go out there.

Christopher Penn 23:19
So are we ready to talk to agent Katie?

Katie Robbert 23:23
I guess it’s as ready as I’ll ever be.

Christopher Penn 23:25
Well, have we done our planning? Yes, right. We’ve got our initial prompts with the RACE framework. We’ve done some testing, right, asking the different tools, and during that refinement decided, okay, well, we should probably have Katie’s biography, etc. Now it’s time for deployment. So I’m gonna go ahead and hit the final... I’ve got one extra dash there... going to hit the Save button here. I’ve connected the Trust Insights website. And let’s just make sure that we are in good condition here. In fact, let me just refresh this to make 100% sure we’re good. Let’s see. I’m gonna say: you will be referencing Katie Robbert, CEO of Trust Insights. Here’s Katie’s biography as context. You will role play as Agent Katie, the digital doppelganger of Katie Robbert. Your name is Agent Katie; we want to be very clear about that. So let’s go ahead and say, okay.

Katie Robbert 24:39
Well, and even if I’m not ready, Agent Katie is ready. So thank you for the confidence.

Christopher Penn 24:44
And it says: Hi there. I’m Agent Katie, a digital doppelganger of Katie Robbert. How can I help you today?

Katie Robbert 24:53
That’s a loaded question, Agent Katie. John, how can Agent Katie help you today?

John Wall 24:58
Tell us about the software development lifecycle. Or better yet, tell us about the five P’s.

Katie Robbert 25:22
The five P’s are a framework I created to help organizations with digital transformation: purpose, people, process, platform, and performance. That’s the first one. And I feel like it’s always good when you’re starting with these kinds of models to really give them, like, those softball questions, because you want to test whether or not you got it right, so that the model can give you the right information. Like, if Agent Katie couldn’t correctly answer a basic question about the five P’s, we would know that we’ve done it incorrectly.
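
Those softball questions can double as a lightweight regression test once the agent is deployed. Here is a sketch of that idea: known-answer questions whose replies should mention a few expected keywords. The `ask_agent` helper is a hypothetical placeholder for however you query the deployed agent, and the test dictionary is just an example.

```python
# Sketch of a "softball question" regression check: known-answer questions
# whose replies should mention expected keywords.
# `ask_agent` is a hypothetical placeholder for querying the deployed agent.

def ask_agent(question: str) -> str:
    """Placeholder: send a question to the deployed agent and return its reply."""
    raise NotImplementedError

SOFTBALL_TESTS = {
    "What are the five P's?": ["purpose", "people", "process", "platform", "performance"],
}

def run_softball_tests() -> None:
    """Flag any known-answer question the agent gets wrong."""
    for question, expected in SOFTBALL_TESTS.items():
        reply = ask_agent(question).lower()
        missing = [word for word in expected if word not in reply]
        if missing:
            print(f"FAIL: {question!r} reply is missing {missing}")
        else:
            print(f"PASS: {question!r}")
```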

Christopher Penn 25:44
And I’d like to point out, I think this is a useful thing. The five P’s are not in the prompt.

John Wall 25:51
Yeah, that’s grabbing from the website, right?

Christopher Penn 25:54
It’s grabbing from the website.

John Wall 25:57
So now, what’s the idea with this? Can you actually get to a point where you deploy it over on the website from here?

Christopher Penn 26:02
Yep. So from here, you can do things like, if I were to do integrations, I can use the API, or I could connect it to Slack, I could connect it to Discord, I could connect it to WebEx; I can connect it to any of these services. On the settings side of things, I can set up logging, so I can look at the conversations people are having with Agent Katie and audit them and say, okay, well, clearly Agent Katie has gone off the rails and is handing out, you know, crazy advice. I can specify what model Agent Katie is going to run on; obviously, there are some models that are more capable than others. I can also add in a long list of things that I don’t want Agent Katie to say. Right? So I would put in here, for example, a long list of racial slurs; that’d be an obvious one. I would put in derogatory terminology, insulting words, profanity. All of that would go there, because even the real Katie is sometimes a very colorful speaker.
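
Vertex exposes that banned-phrases setting in the interface, but the underlying idea is simple enough to sketch as a post-processing guardrail: check the model’s reply against a denylist before it ever reaches the user. The list contents and helper names below are purely illustrative, not the platform’s actual mechanism.

```python
# A sketch of a simple output guardrail: a banned-phrase check applied to
# the model's reply before it reaches the user. The list and helper names
# are illustrative; Vertex provides its own banned-phrases setting.

BANNED_PHRASES = [
    # slurs, derogatory terms, profanity, etc. would be enumerated here
    "example banned phrase",
]

def violates_policy(reply: str) -> bool:
    """Return True if the reply contains any banned phrase (case-insensitive)."""
    lowered = reply.lower()
    return any(phrase.lower() in lowered for phrase in BANNED_PHRASES)

def safe_reply(reply: str) -> str:
    """Swap a policy-violating reply for a neutral fallback message."""
    if violates_policy(reply):
        return "I'm not able to respond to that."
    return reply
```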

Katie Robbert 27:04
That aged well. And that’s the thing: I love me some swear words, but I also know the time and place for when they’re appropriate. These models don’t understand nuance. They don’t understand appropriateness; they don’t understand, you know, those human cues of, like, the unspoken word or sarcasm. And so, even if you yourself aren’t someone who uses racial slurs or derogatory terminology, you have to specify not to do that in these models, because this is just a very small sliver of you as a person. This is not you completely. So even if you say, I would never say that, you haven’t told the model “I would never say that.” You have to go through each of those pieces.

Christopher Penn 27:55
So now I’m asking Agent Katie about my new startup: I’m gonna make a startup that critiques AI and music. And it asks: so what’s the purpose of your startup? What problem are you solving?

Katie Robbert 28:07
Gosh, it sounds like you’re just gonna get cut off. Well, you know, it actually strikes me as funny, because I feel like now you can just sort of gut check it, like, Katie would probably ask me this question, right? And the answer is yes.

Christopher Penn 28:21
Yes, exactly. So I say, I want to make a million dollars a year on YouTube views. And it says: That’s an ambitious goal. It’s important to have a clear purpose for your startup, but it’s also important to be realistic about your expectations, and making a million dollars a month on YouTube views is a very difficult goal to achieve. Okay, but I still say I can do it.

Katie Robbert 28:42
Ha, I actually expected you to respond with “but I want to...”

Christopher Penn 28:51
And it says, you know, if you’re serious about it, you need to create high-quality content that people want to watch. You also need to promote your channel and build a following. It will take a lot of hard work and dedication, but it’s possible to achieve your goal. Great. How can I do this by next month? But this is fun. I mean, this is huge. But if you think about it, this is part of deployment, right? This is QA, the QA process of how do I test this model to make sure that it is behaving as intended. Katie says it’s not realistic; you can’t do that by next month.

Katie Robbert 29:31
You know, it’s interesting, because those are not the exact terms I would use, but the sentiment of the message is something that I would say. And I think that that’s the important piece: we’ve given it some information, but I can say, looking at this, this is not truly my voice. It is not exactly how I would write something or respond to a question, but it’s not so off the mark that I’m like, whoa, that’s not me at all. And so I think it’s important for people to remember, as they’re creating these, that it is not actually truly a doppelganger; it’s just a good starting place. So I would imagine a really good use case for me, and this is how I’ve used Katie GPT, the original version, before, is: here’s an idea I want to write about; help me outline a post of what information I should include in this.

Christopher Penn 30:28
Exactly. So if you think about it, this is the whole reason that you want to be using both the prompt engineering lifecycle and, in many ways, also the software development lifecycle. Because this is software. You can see pretty clearly, this is something you have to deploy

Katie Robbert 30:51
And maintain

Christopher Penn 30:53
And maintain and troubleshoot and QA. And once the prompts are in here, you can’t fix them on the fly; you would have to rewrite the prompt and retest it in regular Gemini to do that. But this is a very clear example: yeah, you need to have the lifecycle in use so that you don’t create something absolutely horrendous.

Katie Robbert 31:19
Well, and I think that that’s really always the point that we come back to. You know, when someone’s like, well, why do I have to do my requirements gathering? So that when it goes wrong, you know what went wrong and where, and you can point to: these are the steps that I took. Or you can say, this is what went right; let’s replicate it multiple times. It’s really about scalability. The point that I made on the podcast when we were talking about this is: you don’t have to use a lifecycle, but you have to be okay with how much budget and resources you’re willing to waste to get it wrong. Using a framework, or lifecycle, whatever you want to call it, helps ensure that you’re doing fewer wasteful things, so that you have a purpose in mind, you have an outcome in mind, you have a process. It doesn’t have to be perfect the first time, but it’s going to be better than if you didn’t have a plan.

Christopher Penn 32:16
Exactly. And so now, if we wanted to, we could say, okay, let’s go ahead and get this integrated into Slack. And... well, I’m not going to do that right now.

Katie Robbert 32:28
Once it’s integrated into Slack, I can only imagine the off-the-wall things you guys are going to ask Agent Katie.

John Wall 32:38
Or Analytics for Marketers; we could talk to Agent Katie there.

Christopher Penn 32:41
I was actually going to say, that might be something worth trying out at some point. We will obviously test it with our own company Slack first, just in case it goes off the rails. But it might be an interesting thing. We joke in our Slack group sometimes that Katie will be stepping away for vacation and Katie GPT will be taking her place; it could actually be Agent Katie filling that role. So if you want to, at some point in the future, try this out, go to trustinsights.ai/analyticsformarketers.

Katie Robbert 33:16
So John, what do you think? Is this the future? Or is this just, you know, a novelty? Do you see companies actually building large language models in this way?

John Wall 33:28
Well, it is. It’s the next step of, you know, getting beyond just the prompt. I mean, we’ve talked about it before: yeah, I can do cool stuff, but ultimately it’s the same problem of somebody sitting there grinding 35 prompts a day through the machine, and that just doesn’t work. So this idea of having it baked into a platform, so, you know, it is live running software that you’re turning on and making work... but yeah, it opens up a whole other set of problems and questions, you know, keeping it on the rails, and is it trained correctly, and all this kind of stuff. But I don’t know, this is really the future of this stuff, you know, because, again, like every new tech, we apply it to our existing paradigms. You know, the example I always give is, there was a period of like seven years where there was a Yellow Pages of the internet. People were actually publishing a book of here’s where you go on the web, because that’s what we knew; that’s how phone numbers and other things worked. And it wasn’t until people got their heads around the idea of a search engine that that whole industry was erased. And so it’s going to be the same thing with generative AI. You know, right now we’re saying, oh, it’s gonna make cool pictures, and it’s going to write stuff for us. But there are going to be whole new applications that are, you know, completely new, and this is right on the edge of that. There are going to be some people that are going to make some bots that are doing some interesting stuff. I mean, you can already see it: think about the opportunities to train it against, like, financial data, and be able to examine what’s going on in the news and give financial advice, or all that kind of stuff. I mean, there are all kinds of opportunities here.

Christopher Penn 35:00
One of the things that we didn’t really talk about is that in this tool, one of the options is the OpenAPI protocol. This is a Google thing: the OpenAPI protocol and schema builder allow you to connect to external APIs. Now bear in mind, we built Agent Katie more or less as a chatbot, right? Something you could talk to. That is not all that this thing can do, because it really is: here’s the goal, here’s the prompt, and then here are the data stores. So you could, to John’s point, connect an OpenAPI connector to the NASDAQ, right, or the New York Stock Exchange, bring that data in, and instead of having a long bio about Katie, you would have a series of rules for trading. Say, like, these are the trading rules that I want you to adhere to: here’s a Bollinger Band, two standard deviations off the stock price; when it goes above two standard deviations, it’s a buy signal, and when it goes below two, it’s a sell signal. And then with the tooling, you build another API connector to buy and sell the stock. So at this point, you’re now saying: here’s the data flowing in, and I have an AI agent doing the work for me, based on the technicals I care about, to buy and sell stock. And it will cost you, obviously, what it takes to run this. But you’re now using the power of a generative model to evaluate a stock as the data comes in, then make a buy or sell call and push that out. You could use this internally at a company. Again, you could connect to a data store like an intranet, or to a database, and you could have this be a hiring or annual review machine that could collect all the data from, say, your internal knowledge bases, and say, now Agent Katie is going to be doing Chris’s annual review, and it does the first pass of that. And it connects to a messaging system and says, here’s our first draft of Chris’s review: fire Chris. This is agnostic, right? We use it as a chatbot, but that is not all this thing can do. Anything that you can do with generative AI, you can do with an agent. And now the agents are self-propelled.
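
As a sketch of the kind of trading rule Chris describes, here is the Bollinger Band check expressed in code, using the signals exactly as stated in the episode (above the upper band is treated as a buy, below the lower band as a sell). The ticker, price history, and order function are hypothetical placeholders, not a real exchange API, and this is an illustration rather than trading advice.

```python
# Sketch of the Bollinger Band rule as described in the episode:
# price above +2 standard deviations -> buy signal,
# price below -2 standard deviations -> sell signal.
# The price history, ticker, and order placement are hypothetical placeholders.

from statistics import mean, stdev

def bollinger_signal(prices: list[float], num_std: float = 2.0) -> str:
    """Return 'buy', 'sell', or 'hold' based on the latest price vs. the bands."""
    window = prices[:-1]   # history used to compute the bands
    latest = prices[-1]
    mid = mean(window)
    band = num_std * stdev(window)
    if latest > mid + band:
        return "buy"
    if latest < mid - band:
        return "sell"
    return "hold"

def place_order(symbol: str, side: str) -> None:
    """Placeholder for the second API connector that buys or sells the stock."""
    print(f"{side.upper()} order for {symbol}")

# Example: feed in a price history and act on the signal.
history = [101.2, 100.8, 102.1, 101.5, 103.0, 108.4]
signal = bollinger_signal(history)
if signal != "hold":
    place_order("EXAMPLE", signal)
```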

Katie Robbert 37:20
I would say, just don’t ask Agent Katie about stock trading. She knows very little, if anything, about it. She’s going to be your worst agent for that.

John Wall 37:31
Yeah, I think we’d let someone else, you know, do the first few rounds of that and let them burn their 401(k) to the ground before we start playing around with that kind of stuff.

Christopher Penn 37:41
Think about, though, the integrations with things like Discord and Slack. John, if you were a salesperson who was, like, trying to harvest a community, you could do a connector to a Slack bot, and say every time somebody mentions something, the bot responds and says, hey, you should talk to John Wall, and, you know, he’ll be in touch with you shortly. And then it pings you and says, you need to talk to this person right away.
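
A conceptual sketch of that mention-and-notify idea follows. The `post_message` and `notify` helpers and the watched-topics list are hypothetical placeholders, not the real Slack API, and, as Katie notes next, whether you should build this at all is a separate question.

```python
# Conceptual sketch of the Slack-bot idea discussed above: when a community
# message mentions a watched topic, reply in the channel and notify the
# salesperson. `post_message` and `notify` are hypothetical placeholders,
# not the real Slack API; the watched topics are examples.

WATCHED_TOPICS = ["attribution", "analytics audit", "marketing mix"]

def post_message(channel: str, text: str) -> None:
    """Placeholder: post a reply into the channel."""
    print(f"[{channel}] {text}")

def notify(person: str, text: str) -> None:
    """Placeholder: ping the salesperson directly."""
    print(f"(DM to {person}) {text}")

def handle_message(channel: str, author: str, text: str) -> None:
    """React when a community message mentions one of the watched topics."""
    if any(topic in text.lower() for topic in WATCHED_TOPICS):
        post_message(channel, "Hey, you should talk to John Wall. He'll be in touch shortly.")
        notify("John Wall", f"Talk to {author} in #{channel} right away.")
```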

Katie Robbert 38:07
And I think that that opens up a whole other conversation, which we can tackle on another show, of: just because you could doesn’t mean you should. And, you know, I mean, that’s sort of the quote, but, you know, now we’re starting to get into company values and ethics. And so for us, for example, we have very clearly stated in our Analytics for Marketers community that it’s not a sales tool. So for us to start to mine the data for sales purposes would be going against our own mission and values for that particular community. So all of those things, all of those human-generated things, still have to be in place first, before you start building these bots. Now I have one last question, one last question for Agent Katie. Wrapping up, go ahead. I want to know what she thinks about dogs, and that’s really the true test of whether or not we got this right.

Christopher Penn 39:13
Let’s see. It does not have knowledge from the website, because you didn’t put your dogs’ names in depth on the website.

John Wall 39:20
You have. Well, AI Katie has an AI dog named Buddy.

Katie Robbert 39:28
I mean, I really expected Agent Katie to say they are sweet baby angels sent from heaven and can do no wrong. And so that’s where I know I need to add more information about that particular topic, if that’s how I want to use this particular language model.

Christopher Penn 39:43
Yep, exactly. The other thing I will point out is, you will notice that there was no length limit on the prompt. It’s not saying, like, you know, you have this many characters and stuff. So there’s a lot of things that you could build in for handling different queries and situations. Again, that goes right back to the software development lifecycle and the prompt engineering lifecycle: it’s an iteration, constantly looking at these things and saying, well, how else, what else should it be able to say?

Katie Robbert 40:08
I think the big takeaway, if you take away nothing else from this episode: prompt engineering, if done correctly, is not a one-and-done. It’s not a, okay, I built my prompt, now I can just use it forever. Just like software development, just like any good project development, product development, you’re constantly looking for where you can improve. And that’s really the goal of using things like the software development lifecycle and the prompt engineering lifecycle: so that you are organized enough that you can make improvements without having to start over every single time. It allows you to be more iterative, versus, well, I did that thing a couple of months ago, and I guess I have to start looking at it again, but I have to start over.

Christopher Penn 40:56
Exactly. And I also think this is instructive, because there are some folks in the AI community who will say, you know, prompts don’t matter, prompt engineering skills don’t matter, you just have a conversation until you get the right answer. That is true for consumers, sure. But when you’re talking about enterprises, when you’re talking about deployments, when we’re talking about scaling AI, prompt engineering 100% matters. Getting it right matters, because you don’t get revisions in an individual conversation. And if you’re going to be having a million conversations an hour with the world from your website, you want it to be right.

Katie Robbert 41:32
It goes back to budget and resources: how much are you comfortable wasting in an effort to get it right? And I could guarantee anyone I would ask would say, well, I don’t want to waste anything. Great. Then use a framework, use a lifecycle. You will produce very little waste.

Christopher Penn 41:51
Exactly. Any final thoughts, John?

John Wall 41:54
This is it. I’m just waiting for this thing to bring me a fresh meal and clean the house for me; that will be good.

Katie Robbert 42:01
Agent Katie will not do that for you.

John Wall 42:03
That is not in her province. Atlas, however...

Christopher Penn 42:06
Atlas might, but that’s a topic for another show. Alright, folks, that’s gonna do it for this time. We’ll catch you on the next one. Thanks for watching today. Be sure to subscribe to our show wherever you’re watching it. For more resources and to learn more, check out the Trust Insights podcast at trustinsights.ai/tipodcast, and our weekly email newsletter at trustinsights.ai/newsletter. Got questions about what you saw in today’s episode? Join our free Analytics for Marketers Slack group at trustinsights.ai/analyticsformarketers. See you next time.

Transcribed by https://otter.ai
