Creativity and efficiency often seem at odds, but AI bridges that gap. So how exactly does AI work in content marketing, at agencies, and on behalf of clients?
In this recap of the Social Pulse: Agency Edition, hosted by Agorapulse’s chief storyteller, Mike Allton, Kristin Tynski shares how she seamlessly integrates AI into the fabric of her content marketing, SEO, and PR strategies. As the co-founder and senior VP of creative at Fractl, Kristin has harnessed AI to turn data into compelling content, automate mundane tasks, and ultimately drive remarkable results.
[Listen to the full episode below, or get the highlights of the Social Pulse: Agency Edition, powered by Agorapulse. Try it for free today.]
The New Way of Working With AI
Mike Allton: One of the things I talk about on that show often is the fact that I use Claude to prepare for each and every one of these podcasts. I’ve created a custom persona in a tool called Magai, which is basically a custom GPT that allows me to use Claude or whatever large language model I want.
I tell it who the guest is, I give it their LinkedIn, and I tell it what topics they want to talk about. It understands me, [and] it understands Agorapulse and the show format.
It starts to give me things like, “All right, great, based on everything I know, here’s what we could talk about with Kristin. Here are five topic ideas.” I pick one, and this is the cool part: it gives me 10 title ideas for the show based on that topic. I pick one, and then it generates the questions, the interview description, and of course the bio.
While I did a lot of work to get there, I’m reading what Claude ultimately thinks.
Kristin Tynski: And that’s what you just described: a new way of working. People are starting to adapt to it and realize that the way they used to frame a problem and go about solving it in their minds is much different now. You would have a question, then you might go to Google, then you might test or try some things, iterate, and maybe eventually find the solution.
But with AI, that process can be shortcut considerably.
The real skill set is understanding what these models are capable of and how to prompt them in the right ways to get the sorts of outputs that you want, and then also thinking critically about the steps you need to take to accomplish a specific task. I’m sure what we’ll talk about a lot today is agents and pipelines and how you can use AI in discrete, specific steps to create something much better than you could get with just a simple prompt and response.
What you just described as your process of preparing for this interview, that entire process where you said, “I did this and this and this,” and had this conversation with the AI, could be partially or wholly automated. It just becomes a single input and an output: the input is a guest on the show, and the output is every asset you need. All of the questions, all the interview prep work, whatever else. From my position, what I’m seeing happening is that the capabilities of AI at this point enable agents like that, but well beyond that as well.
What we’re seeing is AI eating every task humans can do over the next three to five years. I’m sure we’ll get into the specifics of the sorts of things that can be automated, but that’s the framework I’m thinking of it within … [This] is a … fundamentally transformational time, and everything will be disrupted. Every task and every process will be automated, partially at first and then fully, over the next two to three years.
Talk about AI agents
Kristin Tynski: I think of agentic AI as the next step or stage in the evolution of how we’re integrating AI into our workflows. All an agent is, is some sort of generative model, usually a large language model, placed in some kind of step-by-step process that maybe loops or iterates but has a specific input and a specific set of outputs.
In terms of marketing, an agent could handle any specific task, from doing research for a content piece to an entire pipeline that does the research, the writing, the iteration, the social media post creation, and the dissemination of it.
You could have a very simple agent, or you could have a very complex agent pipeline that does a lot of things at once. You could also think in terms of modules: one agentic module does one task, another agentic module does another task, and so on. Then perhaps you tie them together, put those tasks into more complex pipelines, or have overarching LLMs manage specific agents doing specific types of task work to create more complex systems.
So, if you extrapolate on that, you can imagine that once you have all of these agents that can do all of the specific tasks of an agency or a specific marketing activity, then you can have it managed by an LLM on top of all of that and have almost the entire process of an agency fully automated.
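To make that modular picture concrete, here is a minimal sketch in Python. The module names and the call_llm() helper are hypothetical stand-ins for whatever model API and task breakdown an agency actually uses; the point is simply that narrow agents become functions and a thin manager layer chains them together.

```python
# Minimal sketch of agentic modules plus a manager layer (hypothetical names).
# call_llm() is a placeholder for whichever LLM provider's API you use.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError("Wire up your LLM client here")

# Each "agentic module" wraps one narrow task.
def research_module(topic: str) -> str:
    return call_llm(f"List key facts, statistics, and sources about: {topic}")

def drafting_module(research: str) -> str:
    return call_llm(f"Write a draft article using only this research:\n{research}")

def social_module(article: str) -> str:
    return call_llm(f"Write three social posts promoting this article:\n{article}")

# An overarching "manager" chains the modules into a larger pipeline.
def manager(topic: str) -> dict:
    research = research_module(topic)
    article = drafting_module(research)
    posts = social_module(article)
    return {"research": research, "article": article, "social_posts": posts}
```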
Don’t miss out on transcript highlights of every episode of Agency Edition.
How are you seeing some changes trickle through to content marketing?
Kristin Tynski: Every type of content creation is being influenced. Obviously, first was text, with GPT-3 or even earlier. We started to see disruption with that. But almost every writer is now using it to some extent.
I think it’s driving the cost of at least a certain type of content to zero over time, which makes it even more important to consider: How do we create even more interesting, advanced, and thorough content using agentic processes that are doing more than one thing?
It’s not just, “Hey, write me an article, Gemini.”
It’s: Do all of this deep research, distill it, refine it, iterate on it, come up with a thesis, then write the article, then distill it, refine it.
You could imagine a looped process with several different agents with different responsibilities, all contributing to the quality of something much larger than just an article.
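As one way to picture that loop, here is a hedged sketch: a writer agent drafts, a critic agent scores and comments, and the loop repeats until the critic is satisfied or a maximum number of passes is reached. Everything here, including the function names, the 1-to-10 scoring convention, and the call_llm() placeholder, is illustrative rather than Fractl's actual implementation.

```python
# Illustrative draft -> critique -> revise loop with two cooperating agents.
import re

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire up your LLM client here")

def write_draft(thesis: str, notes: str) -> str:
    return call_llm(f"Write an article arguing: {thesis}\nGround it in these notes:\n{notes}")

def critique(draft: str) -> tuple[int, str]:
    """Ask a 'critic' agent for a 1-10 score plus concrete revision notes."""
    reply = call_llm(
        "Score this draft 1-10 on clarity, sourcing, and originality, "
        "then list specific fixes. Start your reply with the number.\n\n" + draft
    )
    match = re.search(r"\d+", reply)          # pull the first number out of the reply
    return (int(match.group()) if match else 0), reply

def refine(thesis: str, notes: str, max_passes: int = 3, target: int = 8) -> str:
    draft = write_draft(thesis, notes)
    for _ in range(max_passes):
        score, feedback = critique(draft)
        if score >= target:
            break
        draft = call_llm(f"Revise this draft per the feedback.\nFeedback:\n{feedback}\n\nDraft:\n{draft}")
    return draft
```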
Mike Allton: That’s the way I’m using AI every single day. You talked about the showrunner process that I mentioned; at some point I might not even have to be in that loop the way I am today.
Like I said, it’s a custom GPT, or a custom set of instructions, where we’re going back and forth and I have predefined steps. But at some point, you’d think I could program the AI to know what would be good topics, how to properly format things and hype somebody up, and all these kinds of things, to make sure that the output at each step is exactly what we want and then take it to the next step.
Kristin Tynski: I actually think that would be pretty easy to do. You probably need a connection to LinkedIn or Bluesky or whatever the social network is, so you can put in your guest’s social media handle and then scrape that content around the guest to understand who they are, have an LLM distill it, and maybe do some additional LLM work to expand on the profile a bit.
So, if you got my LinkedIn profile, you could probably understand or estimate some other things about me. You could do data enhancement with LLMs and then have it write a report for you, and that could be one asset that’s generated. Have it create different templates for different social media posts associated with that. All of it together could be done pretty easily. You just need those several data sources with APIs, and then you stitch it together with the large language models.
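A rough sketch of what that guest-prep pipeline could look like follows. Every helper here is hypothetical: fetch_profile_text() stands in for whatever scraping or API access you have to LinkedIn or Bluesky, and call_llm() for your model of choice.

```python
# Sketch of an automated guest-prep pipeline (all helper names hypothetical).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire up your LLM client here")

def fetch_profile_text(handle: str) -> str:
    """Stand-in for scraping or an official API call to LinkedIn/Bluesky."""
    raise NotImplementedError("Use your social platform's API or scraper here")

def prep_guest(handle: str, show_format: str) -> dict:
    raw = fetch_profile_text(handle)
    # 1. Distill the raw profile into a concise summary.
    summary = call_llm(f"Summarize who this person is and what they do:\n{raw}")
    # 2. Enhance: infer likely expertise and interview angles beyond the profile.
    enriched = call_llm(f"Based on this summary, infer likely areas of expertise and good interview angles:\n{summary}")
    # 3. Produce the assets: prep report, questions, and social post templates.
    report = call_llm(f"Write an interview prep report for this guest, for a show formatted as: {show_format}\n{enriched}")
    questions = call_llm(f"Write 10 interview questions based on this report:\n{report}")
    social = call_llm(f"Write promo post templates (LinkedIn, X, Instagram) announcing this guest:\n{report}")
    return {"report": report, "questions": questions, "social_posts": social}
```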
What led you to start to bring AI into your actual content agency?
Kristin Tynski: I’m not sure exactly. I find it difficult to understand why an agency owner would be keeping AI at arm’s length.
I think that’s … an existential risk. If you’re doing that, if you’re scared of AI and you’re not doing everything you can to understand it and how it integrates into your processes and how it’s changing our industry, you’re going to get left behind.
So, for me, I’ve been thinking about and worrying about the implications of generative AI since GPT-2, experimenting with it to try to understand what its capabilities are and what they’re going to be over the next 10 years, and then trying to think strategically about how to best leverage those tools in a way that isn’t rendered moot because some new model or tool comes out that basically does everything for you.
That’s sort of the environment that we’re operating in now because things are happening so quickly.
There’s a real risk of building something that’s not worth building because it’s just going to be done better by something new, you know, two weeks from now.
How is AI helping you with content ideas and other forms of content marketing?
Kristin Tynski: My job at Fractl now is to figure out how AI can be applied to our processes and to understand its implications for the industry and how things are going to change. What I’ve been working on for the last few years is trying to automate internal Fractl processes so that our team can use them and that I can understand how they could be put together in larger agentic pipelines that do bigger, more complex things.
Starting from ideation, which we do a ton of at Fractl since we’re a data journalism content creator on behalf of brands that also does PR promotion. But it starts with ideation. We have a giant corpus of all the ideas we’ve come up with over the last 10-plus years: thousands and thousands of ideas, each with metadata associated with it, peer review of the quality of the idea, whether it was selected, and a ton of other stuff.
Using all that, we put it into a well-formatted data set that we could use to fine-tune GPT-4, and then use that fine-tuned model in a pipeline for doing ideation. We have an internal tool built around a fine-tuned model that acts as a Fractl ideator, based on all of our training data from the thousands of ideas we’ve come up with. It generates hundreds of ideas based on an input topic, then goes through several refinement cycles where different agent LLMs evaluate the ideas across a bunch of criteria we’ve defined at Fractl, score them, sort them, and make a recommendation. We select the ones that are shortlisted, and then it builds out research in what we call production cards, which are essentially deep research on exactly how the campaign would be executed: what data sources would be used, what the complexities are, along with timelines, estimates, and things like that.
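Here is a hedged sketch of the general shape of such a pipeline. It assumes a fine-tuned model behind a generate_ideas() helper and evaluator agents behind score_idea(); the criteria list and the "production card" prompt are placeholders, not Fractl's real rubric.

```python
# Illustrative ideation pipeline: generate -> score -> sort -> shortlist -> brief.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire up your LLM client (or fine-tuned model) here")

def generate_ideas(topic: str, n: int = 100) -> list[str]:
    """Stand-in for calling a fine-tuned ideation model for n candidate ideas."""
    reply = call_llm(f"Generate {n} distinct data-journalism campaign ideas about: {topic}")
    return [line.strip("- ") for line in reply.splitlines() if line.strip()]

CRITERIA = ["newsworthiness", "data availability", "brand fit", "emotional hook"]  # placeholder rubric

def score_idea(idea: str) -> float:
    """Average an evaluator agent's 1-10 scores across the criteria."""
    scores = []
    for criterion in CRITERIA:
        reply = call_llm(f"Score 1-10 for {criterion}. Reply with the number only.\nIdea: {idea}")
        scores.append(float(reply.strip()))
    return sum(scores) / len(scores)

def shortlist(topic: str, keep: int = 5) -> list[dict]:
    ideas = generate_ideas(topic)
    ranked = sorted(ideas, key=score_idea, reverse=True)[:keep]
    # Build a "production card"-style brief for each shortlisted idea.
    return [
        {"idea": i, "production_card": call_llm(
            f"Outline data sources, methodology, timeline, and risks for: {i}")}
        for i in ranked
    ]
```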
All of these things were pieces that were very difficult or very time-consuming to estimate for each idea we were coming up with for clients, because we would come up with dozens of ideas a day. So that sort of thing saves a huge amount of time. And then, of course, there’s the content creation work.
So right now, because we do data journalism for the most part, we’re not creating prompt-and-response AI content, which I wouldn’t recommend anyone do. We’re using AI within the framework of creating something of larger value. Even where it’s totally managed by AI (and I have created a few pipelines that do that sort of thing), it’s research, iteration, refinement, more research, iteration, refinement, to get to a much better final product that’s well-sourced and cited and has information grounded in a source of truth because it leveraged Google’s search results or something like that.
Content creation is a huge piece of it, and ideation is a huge piece of it. And then PR—which is the other half of our business—takes all the content and the newsworthy data journalism that we’ve done on behalf of brands and pushes that out to journalists, particularly top-tier journalists at major publications, to try to get them to pick those stories up.
The best way to do that is to do it in a high-touch way. Journalists get hundreds of emails a day, and breaking through that noise is the hard part. The way you do that is by having a really personalized subject line and a really deep understanding of the sorts of things they cover, the beats they cover, and what they write about.
We’ve also created automated processes for doing that. We have databases of journalists and the content that we’re pitching to them, and a pipeline where that content is analyzed, all the newsworthy and noteworthy things that could be pitched are pulled out of it, and pitch lists are built automatically based on that. Then, for each of those specific pitch targets, each journalist, a custom pitch is written based on the content being pitched to them, what they’ve written about previously, and what we know about them from their social profiles and other sources.
That way we can be sure that when we do this automated outreach, it’s high touch, it applies to the things they care about, and therefore it has a much higher success rate than some mass-blast approach that’s not targeted. Beyond that, there are literally hundreds, perhaps thousands, of other automations that exist within the marketing, advertising, and PR space, and a lot of them I’m playing with in different ways. There’s a ton related to social media, and a ton related to management, the overarching processes of managing multiple agents. The rabbit hole is super deep.
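A minimal sketch of that matching-and-personalization step, assuming you already have a journalist database and some way to pull their recent coverage; every helper and field name here is hypothetical.

```python
# Sketch of automated, personalized pitch generation (hypothetical helpers).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire up your LLM client here")

def extract_findings(campaign_content: str) -> str:
    """Pull the newsworthy, pitchable findings out of a finished campaign."""
    return call_llm(f"List the most newsworthy findings in this piece:\n{campaign_content}")

def build_pitch(journalist: dict, findings: str) -> dict:
    """journalist is assumed to look like {'name', 'beat', 'recent_articles'}."""
    pitch = call_llm(
        f"Write a short, personalized pitch email to {journalist['name']}, who covers "
        f"{journalist['beat']} and recently wrote: {journalist['recent_articles']}.\n"
        f"Pitch only the findings relevant to their beat:\n{findings}\n"
        "Include a specific, personalized subject line."
    )
    return {"journalist": journalist["name"], "pitch": pitch}

def build_pitch_list(campaign_content: str, journalists: list[dict]) -> list[dict]:
    findings = extract_findings(campaign_content)
    return [build_pitch(j, findings) for j in journalists]
```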
How has incorporating AI impacted or improved effectiveness—particularly for content marketing?
Kristin Tynski: I think it’s improved the quality of our work substantially because we can do a lot of things that weren’t feasible before.
Doing a lot more initial research, coming up with many more ideas, and being able to vet those ideas at scale. It enables us to find those hidden gems we might have missed before, because now you can be thorough enough and comprehensive enough to find really, truly good ideas on almost every occasion.
And then there’s the daily task work: a lot of it can be automated now, and we’ve written small automations, or small agentic frameworks, for automating all sorts of different things. There are so many things that we’ve tried. Maybe you can ask more specifically if there’s a particular task set you wanted to know about.
If you go to my GitHub, I’ve written about 25 different [things] over the last couple of years, a lot of automation tasks that go into some of these different things you could do. But they’re still just the tip of the iceberg.
Check out Mike’s other podcast all about AI!
Challenges When Implementing AI
Kristin Tynski: I think the hardest challenges are programming-related. I didn’t start learning to program until maybe three years ago, and I’m self-taught, so there’s been a learning curve for me.
Obviously, having an LLM to help you is amazing—which I didn’t have for the first couple of years—but now that I have it, I feel like I’ve been able to level up my skill set considerably. The challenges were around learning how to program with an LLM most effectively and understanding the limitations of the models and the systems we currently have for managing and working with them.
So, Cursor is a programming IDE that incorporates large language models. There are a lot of different ways you could go about programming with a large language model, and some of them are efficient and some are super-inefficient. So I’ve gone back and forth and tested a lot of different ways of working.
I think I’ve finally found the most efficient way for me, at least at this point, but the biggest learning curve [has been] the process of figuring out those methodologies and also learning what these models are truly capable of and figuring out how to talk to them in the right ways and push them in the right ways.
How do you balance AI with human creativity?
Kristin Tynski: Well, I would say it starts when you’re creating a pipeline. [With] any sort of agentic automated content creation pipeline, you need to have in mind exactly what you want the output to look like, and that end result needs to be considerably better than what’s generally possible with a large language model on its own. If 95 percent of people are just going to type a prompt into the chat interface of GPT or whatever the default thing is, they’re all going to get the same sort of output.
To be competitive, you need to do something more than that. So it’s not just a prompt and response. It’s a prompt, and then that prompt goes out and does something: collects research or data from other sources or APIs, does something with that data, synthesizes it, manipulates it, understands it in some way, processes it, and presents or displays it. It does more than just ask the large language model for one input-output. And that may change. These new models, like o1 and o3, when they come out, are doing all of this work behind the scenes, all this thinking work that needs to be done.
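As a small illustration of that difference, here is a hedged sketch of “prompt plus retrieval” rather than bare prompt and response: fetch real data from an external API first, then ask the model to synthesize only from it. The endpoint URL and the call_llm() helper are placeholders.

```python
# Sketch: ground the model on fetched data instead of a bare prompt/response.
import json
import urllib.request

def call_llm(prompt: str) -> str:
    raise NotImplementedError("Wire up your LLM client here")

def fetch_data(api_url: str) -> dict:
    """Pull source-of-truth data from some external API (placeholder URL)."""
    with urllib.request.urlopen(api_url) as resp:
        return json.load(resp)

def grounded_summary(question: str, api_url: str) -> str:
    data = fetch_data(api_url)
    return call_llm(
        "Answer the question using ONLY the data below, and cite the fields you used.\n"
        f"Question: {question}\n"
        f"Data: {json.dumps(data)[:8000]}"  # truncate to keep the prompt manageable
    )

# Example (hypothetical endpoint):
# print(grounded_summary("Which city saw the biggest change?", "https://example.com/api/stats"))
```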
I don’t know what’s going to happen. I’m not sure exactly how much of a human in the loop is going to be needed beyond being the orchestrator: determining what the inputs and outputs are, deciding what a satisfactory output looks like, and making sure that whatever that pipeline is creating is something of unique value that other people aren’t creating.
Mike Allton: I think the key lesson there is spending sufficient time initially to make sure that whatever that process is, whatever the prompt or system is, it aligns with what you’re trying to accomplish as a brand, as an agency, or on behalf of your clients. It’s something I’ve talked about on my other show: having custom GPTs and instructions trained on my voice, my target audience, my goals, and all the assets I’ve got, so I have an AI chief of staff I can talk to who knows everything it needs to know about me. And we can have legitimate conversations.
One other conversation I had on that other show was with Mitch Jackson, an attorney, and we talked at length about AI and the law and copyright concerns and that sort of thing.
I’m curious about what you think when it comes to ethics and ethical considerations about using AI-generated content and AI for content creation.
Kristin Tynski: It’s a great question. I think a lot of it is still up in the air. I mean, I worry a lot about the dead internet theory, which if you haven’t heard of that, is this idea that the internet’s going to get clogged up with so much AI-generated junk that it will be impossible to parse and become useless. I think that is starting to happen.
And so I think there are considerations you should have when you’re creating agentic pipelines about what the output is and the value that output creates. Is it creating just generic slop that’s not worth much to anyone but maybe has a profit motive for you?
Maybe don’t do that. If it can create real true value in a new, unique way that’s leveraging large language models in a way that hasn’t been leveraged before, do that.
In terms of copyrights and plagiarism and all of that sort of thing, I think it’s important to check your work, make sure that you’re using real sources of truth in content creation, and you’re not just relying on a large language model to use its weights to determine if something’s true or not.
You should have an input that’s a source of true information, which is then synthesized by the large language model, rather than relying purely on the large language model’s knowledge. And then: What are you doing with it? What is the goal of it? Are you trying to do something positive for society? Are you trying to influence something in a negative or positive way? The power associated with these tools, the force-multiplying effect, is a greater responsibility than I think we’ve ever had as marketers.
It’s important to think about when you create automated pipelines, what the implications are of those, and what happens if they’re scaled.
How are you currently measuring whether the content you’re creating is impacting business in a positive way?
Kristin Tynski: Well, because we’re not typically creating content entirely with AI, it’s not just an input and a content output. We’re using it in the process of creating larger, more sophisticated data journalism work. It has impacted our clients in that we’re able to do more sophisticated things: we can do deeper analysis, run more complex statistical testing, gather data faster because we can write scrapers and interact with APIs faster, come up with more ideas, refine those ideas faster, and write better briefs. Every aspect that I’ve talked about contributes to a better end result.
Thanks for reading the highlights from this episode on AI in content marketing. Don’t forget to find the Social Pulse Podcast: Agency Edition on Apple, and drop us a review. We’d love to know what you think. Don’t miss other editions of the Social Pulse Podcast like the Retail Edition, Hospitality Edition, and B2B Edition.