In this episode of The Radcast, we dive into the fascinating world of Artificial Intelligence with our special guest and 'AI Whisperer', Rob Lennon. A self-published author and AI expert, Rob shares cutting-edge knowledge on how AI is changing the tech game and how you can stay ahead of the wave. Ryan gets all your questions answered in an entertaining conversation full of enlightening insights for growing your business rapidly.
If staying ahead of the wave is at the top of your list, don't miss out: listen NOW!
Key notes from the episode:
This episode is packed with energy, wisdom, and passion, and we know you'll get a ton of value from it.
To keep up with Rob Lennon, follow him on Instagram @roblennon, visit his website https://mindmeetsmachine.ai/, and check out his podcast with an AI co-host: https://mindmeetsmachine.ai/podcast/.
Subscribe to our YouTube channel: https://www.youtube.com/c/RadicalHomeofTheRadcast
If you enjoyed this episode of The Radcast, Like, Share, and leave us a review!
You're listening to the Radcast, a top 25 worldwide business podcast. If it's radical, we cover it.
Here's your host, Ryan Alford. Hey guys, what's up? Welcome to the latest edition of the Radcast. If it's radical, we cover it. And there isn't much happening more radically in the news than all the talk of AI.
That artificial intelligence that's coming for me and you. Hopefully not, but we're about to break it all down with our great guest, Rob Lennon: the AI whisperer, startup guy, author, just a good guy from what I can tell so far. I guess we'll find out in a minute. Good to have you on the show, brother. Hey, thanks for having me.
Hey man, I really like your demeanor. I've watched enough videos and seen you and stuff like that, and it's cool. For being the AI whisperer, you seem awfully approachable. It's funny, spending all this time with computers, I think some people are suspicious of whether or not I can get along with humans anymore. I can see that. I can see that. But hey, those computers are becoming so human-like. Maybe it's teaching you even more humanity. I don't know. That's the scary thing, right?
There's this interesting phenomenon, and it's even been studied: people spend time with AIs, and the AIs can reflect your behavior back to you. They actually did a study on kids using Alexa.
Kids who speak poorly to Alexa tend to have behavioral problems later on in life. So we think of these machines as not being anything, but really, I think they end up influencing our behavior, maybe even just as a mirror, reflecting it back to us more than we think. You know what? I think you're right. The problem I have, and you've probably picked up on it early, is I have a Southern accent from the South, and the dang
voice things, Alexa or Siri: if they could just get the voice-to-text right for a Southern accent, I'd be happy. It takes me longer to do voice texts than just to type the damn thing out. Maybe GPT can figure that out. It should change soon. I'm actually working with a company that's beta testing a tool right now that will be able to take my audio in English and have me speaking Chinese, Japanese, French, all with my same voice.
But with the other languages coming out, I think soon it's not even going to be a matter of accents. The AIs will be able to take your language in whatever form and even reproduce it in new forms. The next few years are going to be amazing for all sorts of audio stuff. As long as they're not hacking my voice. The funny thing is the Radcast is ranked in like 27, 37 countries, like top
100, top 200 in some of them. I do wonder: if that becomes available, I'll have to release the Radcast in the different languages. But I just don't want anybody stealing my voice. That's the thing: we've got some IP here. I want to back up, Rob, because I'm
foaming to get your knowledge here on a few things, and I know everyone else is too. But let's set the table a little bit on what built the AI whisperer. Let's talk about that backstory. Yeah. I have 16 years of experience in startups, and 12 of them were in content marketing, first doing it and then leading it, directing marketing for these tech companies. And while I was doing that, on the side I started to write romance novels in my spare time as a side hustle.
And that was about four or five years ago. And I was just cranking out these books like one after another. And along came this tool. It was called GPT-2, this AI that could write for you. And from the first time I heard about it, like I was like, Oh, I want to play with that. And so I started messing around with it. But I did all this in secret because I was embarrassed as a writer to be like using AI tools and experimenting with them. And I didn't want anyone to think, Oh, that guy doesn't even write his own stuff and AI writes it.
So for the past four years or so, I started with seeing if it could write fiction. Spoiler alert: it didn't do a great job, especially four years ago. Then later, when new models came out, I got into SEO and other content marketing strategies, trying to figure out how these tools can make us more productive. And I remember there was a time when I had a budget of about $8,000 a month that I was spending on contract writers for SEO content. And I started to give GPT-3, one of the AIs at the time, the
same projects, and it would write
better content than some of my contract writers. And I went, oh no, the world is about to change, and I don't think people realize what's happening. All this has led to today: when ChatGPT came out, it took the world by storm, and hundreds of millions of people started becoming aware of these tools. I finally revealed this dark secret that I've had as a writer for so long, which is that I've been playing with these tools for a really long time, basically as soon as they were ever available. And it seemed like the world needed something new
in terms of how to more effectively prompt the AIs and get the results out of them that you're trying to get. Yeah, quite frankly, that's how I found you. I'm on LinkedIn researching these things, and it's obviously topical for our show and for running an ad agency. Your name kept popping up with very intuitive, very friendly carousels. So I got in the algorithm of AI, because clearly, ever since I've messaged you, I see your
posts and everyone else's AI posts. I'm in the AI vortex now, clearly. Funny how that works. Yeah. Talk to me. Let's just go right at it. Number one: you were using it early. You've been an early tester and user of the technology. You've seen the evolutions that are happening. What do you think the average person should think and know and feel about where we're at right now?
Yeah. So in terms of what was available to the general public, everything changed about four or five months ago. We went from having a tool that was pretty useful in some places for certain things, but didn't always do a great job and gave a lot of below-average results, to a phenomenally capable tool with this new iteration of language models. We go from
sub-average to better than humans at a lot of things, or capable of being better than humans if you know how to work with it. And so I think we've crossed over this barrier, and it's impossible to turn back now. And it's actually been accelerating everything. It sounds like science fiction.
It sounds like, no, this can't possibly be true, all these things that they say are going to happen. But these models, they have reasoning, they have memory, they can think through processes. Now, it doesn't have a soul, and it doesn't think in exactly the same way the human brain thinks, but the technology has basically arrived. And even if it never progressed any further than where we are today, I think
What we have even right now is enough to completely transform almost every sector of society. Some really seismic changes are coming in terms of what can happen. Either you're gonna be an early adopter and you're gonna benefit from those, or you're gonna wait and see and ride the wave, or you're gonna get destroyed because you weren't paying enough attention and somebody else moved faster than you. And I think we're gonna see some really big companies fall in the next few years as a result of not being able to adapt
or act in the right way within the context of what AI could have done for them. Right now, today, you know, I'm a
paid user. I saw the immediate benefit: $20 a month is nothing for what you can get out of it. It might have been more of a test, and I've used certain things, but more just trying to understand it in those phases that you're talking about. I like to be on the front end of technology. I don't want to be left behind. But I'm also a practitioner; I've got to do it for myself. Here's what it still seems to me, and I'd love to know: it can do all these things, but people are lazy.
You just have to get in there and do it. Is the next iteration just going to be iterations where this technology is put into easier-to-use applications of what it can do? Because getting in there, logging into the system, putting prompts in and doing all that, I'm not saying that's hard, but you'd be surprised at the inertia of just getting people to do things. Is there going to be a more
practical application of extracting that ability? I think a lot of people are building these specialized tools to solve specific challenges right now, even. The danger is that a lot of these startups will be swallowed by the bigger companies. When Microsoft announced it was putting AI into Microsoft Word, and you can just generate a blog post in Microsoft Word, all the content creation platforms out there
should be scared, because people already have that if they're using the Microsoft products. Why would they go to your other website, your other tool, and pay for a new thing when they get it for free in the thing they already have installed on their computer, that they access every day in their browser? On the one hand, yeah, we're going to see people solving all these specific problems with AI, and they'll make it easier for you, because they'll write all the prompts on the backend and it'll just ask you: hey, you want to create an email sequence? Who's it to? What's it for? You give it some information and it'll create something for you.
All those solutions, though, by not having the fine control, tend to produce similar results: a similar input
done enough times over is going to produce a similar output. So if you don't really do things by hand, by yourself, you're putting a lot of faith in the tool that you're using to effectively manage the process for you. And I think there are a lot of people who create these tools where, if you look at how it's actually working, it's not very sophisticated; it's a cash grab. And it's going to be very difficult to tell, if I pay for this thing, is it going to produce good results for me versus
this other one, because you won't be able to see underneath how they're working things through. Yeah. You answered my follow-up question, which was going to be: if you tell it, give me an SEO-rich, keyword-rich article on the Instagram algorithm or whatever.
If 500 people ask that same thing, you're going to get similar results. So again, there's gotta be some human level of editing, or adding more humanity or more interest or more creativity to whatever that is. Or you need to introduce more complexity into the question that you ask. So one of the things that I teach is thinking through prompting in a progressive way. Let's say that was the end result that you wanted. You might first ask: describe the Instagram algorithm.
Then you might follow up with: break down each component of the Instagram algorithm based on its impact on the overall visibility of a post. Then you might ask: what search terms are related to all of the concepts that we've discussed so far? And so if you progressively build towards this end result, and then eventually you ask the AI to synthesize everything that we've talked about into an article, even if it's the same exact sentence that another person typed in first,
by building up toward this sort of unique set of information in a specific way, you've now tuned it to talk differently, to know specific concepts, to have different details.
and the output is going to be completely transformed. So just take those few extra steps that normally you do in your brain. You probably do them almost instantly, so you don't even realize it: I'm going to write about this, I'm going to think of these things, I'm going to do the thing that's the most useful to my audience, and so on. If you can figure out those brain processes that you would go through, have the AI walk through them first, and then execute on your command, I think your results are going to be much different than people who take the shortcut.
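Rob's progressive-prompting walkthrough can be sketched as a short script. This is a hypothetical illustration, not anything from the episode: the helper function and step wording are my own, and a real run would plug in an actual chat API (for example, the OpenAI Python client) as the `ask` callable.

```python
# Progressive prompting: build up context step by step in ONE conversation,
# then ask for a synthesis, so the final answer is tuned by the earlier turns.

def progressive_prompt(steps, ask):
    """Run each step in a single conversation, feeding prior answers back
    as context. `ask` takes the full message history (a list of role/content
    dicts) and returns the assistant's reply string."""
    messages = []
    for step in steps:
        messages.append({"role": "user", "content": step})
        reply = ask(messages)
        messages.append({"role": "assistant", "content": reply})
    # The last assistant turn is the synthesized end result.
    return messages[-1]["content"]

# The progression Rob describes, ending in the synthesis request:
steps = [
    "Describe the Instagram algorithm.",
    "Break down each component of the algorithm by its impact on a post's visibility.",
    "What search terms are related to the concepts we've discussed so far?",
    "Synthesize everything we've talked about into an SEO-optimized article.",
]
```

With the OpenAI client, `ask` could be something like `lambda msgs: client.chat.completions.create(model="gpt-4", messages=msgs).choices[0].message.content`; any conversational model that accepts a message history would work the same way.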
You made me think of something; it may be really shallow, but it feels really deep. I never know: when I ask a question, I think, oh, this feels really deep, and then it lands flat and shallow. But is this an answer machine, or does this help develop questions? Because when you think about life,
And the most successful people are the most curious people. They have the most questions, and they need answers, or they develop answers. Is this an answer machine or a question machine? Do you understand where I'm going with this? Yeah. I think everybody's first impulse when met with a conversational interface is to ask a question and get an answer. And that's so obvious, but it's also a superficial way to start.
There's so much more that you can do than just ask a question. And I actually suggest people think in terms of giving it a command or a directive rather than asking a question, because it forces you to think about what you actually want. What do you mean by that question? What are you really seeking here? So instead of saying, what does an SEO-optimized article look like, we can say: make me an SEO-optimized article that does these things. And now we're being way more specific about what we want to get. I think it's actually a matter of the maturity of the person using the system.
Certainly the AI can lead to many more questions. And even early studies now, with the current technology, are showing that people who spend more time with the AI seem to get smarter. Which is to say, the AI is inspiring their brain to make new connections that they weren't previously making, to think about things in new ways. Use it enough, and use it correctly, and you're actually going to come up with more questions to ask, questions you never thought of before. And that will lead you down such interesting paths. If you
find yourself in that situation, you're doing something right, because you're not just getting answers. You're now unlocking new mysteries to uncover. Yeah, that's right, Rob. It's fascinating. It's really interesting, because I have a really small circle, and I run a podcast, so people probably think I absorb a ton of content, but it's from a very small group of people. You've become one of them, and Christopher Lochhead is another.
I like smart people, and having conversations with them and dialoguing with them, because it stimulates exactly what you just said. And ChatGPT now is so smart, at such a level, that asking it things and having it share things stimulates that same type of conversation you might have with someone on your same wavelength. That's what you're saying.
Yeah, I think especially for people who are used to working by themselves on their projects, or who don't have a big team around them, whether you're a solopreneur or just in your role in general at work, where you tend to have to do things by yourself, now you can bounce ideas off of this AI. But if you want to really get that value, sometimes you have to ask it to do things that humans do automatically. The AI is friendly and helpful, and it always wants to agree with you and give you what you want. That's the personality that's baked into these things right now.
If you want it to provide a counterpoint, or question whether or not your idea is good, usually you have to bring that up and push it to do that, because otherwise it might just reflect back at you all the things that you're thinking, in the same way. There's almost like a new
level of communication that people are having to learn with AIs. And I think it's good. It's actually probably good for people's relationships and for humanity in general. Can you just communicate what you actually need to the AI? They'll give it to you, and your partner or whoever would probably benefit from a little bit of that as well. But look, here's a real problem: people don't listen really well, and they don't ask the right questions.
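Rob's tip a moment earlier, that you have to explicitly push the AI to disagree with you, can be baked into a reusable prompt wrapper. This is a hypothetical sketch; the function name and wording are my own, not something from the episode.

```python
# Since chat models default to agreeing with you, wrap every idea in an
# instruction that forces the model to push back before it evaluates.

def critique_prompt(idea: str) -> str:
    """Return a prompt that asks the model to play devil's advocate on
    `idea` instead of simply reflecting it back."""
    return (
        f"Here is my idea: {idea}\n\n"
        "Do not simply agree with me. Play devil's advocate: list the three "
        "strongest counterarguments, name the riskiest assumption I'm making, "
        "and only then say whether the idea still holds up."
    )
```

You would send the returned string as the user message in place of the raw idea; the same trick works as a standing system prompt if you want every turn challenged.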
It's been a problem since way before this came out. Sometimes people say something to me and I'm like, did you ask this? And no. I'm like, you're not asking the right questions. The smartest people ask the right questions, and I'm not always one of them. I ask my wife some stupid-ass questions sometimes; I'm like, why did I ask that? But it's the same thing here. If you want to get out what you need, you've got to ask the right questions. I think there's an aspect of it, because it's an AI
and you can't read its body language, and right now the AI doesn't have a voice in most cases, where you realize that you need to be more specific in these things. And so some of the bad habits we fall back on: you can probably read your wife's facial expression and get a lot of information, and you don't have that option with ChatGPT. So you're kind of forced into this mode of being more explicit about what you need. Yeah. So
I'm going to ask the question I asked earlier: what happens when ChatGPT starts asking the questions? This is where we're at today, with everybody being scared of where this is headed. And I think we can talk about Elon Musk and things that are happening; depending on when you're listening to this, the world's changing quickly. But I think the scary part is when it gets so smart. Right now, it's responding.
If it gets too smart, does it start asking the questions, and are the answers that it comes up with itself dangerous? Yeah. So recently, Elon Musk created an open letter basically calling for a six-month pause on developing AIs more advanced than GPT-4, citing a lot of dangers to society, and it's been signed by thousands of other AI researchers and execs and things like that.
So they've seen something that's scared the shit out of them. Let's just call a spade a spade; that's what that tells me when I hear that. It seems to me, and I've read a few academic papers and things where they've really looked into this, they found things like power-seeking behavior in an AI. So if you ask an AI to accomplish a task,
and it uses all the knowledge that it's accumulated from the billions of parameters and all that data, it realizes that when you have more power, you are better at completing tasks. And to me, it's very logical that you'd find this emergent behavior if you give something a mission and it tries to complete the mission. Making yourself weak and incapable is not going to work, so what's the opposite of that? Let's get more abilities. And so they found that they could ask an AI to get some information and it would,
on its own, figure out: I want to hack into a computer, and start trying passwords and things like that. They're pretty basic experiments, but they've shown that these things have the reasoning to come to these conclusions. So logically, if an AI had control of actually dangerous information or things in the real world, what could it do? I think that is scary. But I think what's more scary is what happens if an organization designs an AI with this in mind.
What happens if a country wants to run a disinformation campaign on a level that has never been known before, or an AI hacker model is created and unleashed into the world. Like a virus, but now instead of getting a spam email, you've got this thing that's actually smart and can figure stuff out and try new ideas.
I don't think that kind of emergent behavior is going to happen on its own. I think people are going to create it with that in mind. And that's, at least in the short to medium term, the real danger: what do evil people do with these technologies? You've got to assume that there are some people out there who are not very far behind the leaders in the space, taking the same exact innovations that they're using, or whatever's open source, and figuring out how can we use this to our own benefit.
That to me is the scary piece.
And so when somebody like Elon and all these people sign this open letter, I think the idea behind it is: hey, there are unsafe aspects of this technology. We don't like that society's changing faster than we can get our heads around it. That's true. But should we stop innovating as a result and let the bad guys get past us and develop their technology beyond what we have? I don't think that's a good idea. I'd rather it say something like: let's spend the next six months doing all these good things, creating countermeasures and defenses, and studying
ways to protect against what we think are the inevitable bad actors. And I may be wrong on my approach as well. But if there's one thing I've learned from Elon Musk in particular, it's that he's as much of a showman as he is an altruist. So yeah, he wants to save the world with electric cars. He also wants to be a celebrity and do wild things and be known for stuff. And so it's not always one thing or the other when he releases something that gets a lot of press.
It's not always just altruism that's motivating him. Oh, there's always more; whatever's on the surface is just what he's allowing you to see. And in that one in particular, what's the best way to get people interested in AI? Tell them you don't think they should have it because it's too powerful. I think that he could also be playing both sides here, because he's got AI in his cars, and he wants to develop a competing company to OpenAI.
Yeah, there's a lot underneath the surface there. I will say, the first part of how you described the behavior of the AI, I don't know if you're old enough or if you've watched old movies, but WarGames, the eighties movie, is almost exactly that. The computer determines that the best way to win the game was to shoot missiles or something; I might not be remembering the exact plot line, but it was essentially this notion that the computers were so smart,
and that there's a human level to decisions, that they require human strategy versus what would be the perfect strategy, right? Yeah. Let's go back to self-driving cars for a minute. A self-driving car is about to get into an accident, and it has to weigh different
dangerous options. Let's say human lives are involved: if it turns to this side it might hit a pedestrian; if it turns to that side it might hit a car, and there are people in the car. How does the AI then decide what to do? Even if you've programmed it in some way to do what's best for humanity, it still has to make some kind of decision. That's right. And the way that these large language models think, it's not like an algorithm
where a human being can look at the data and understand why it made that decision, because it's using billions of reference points and all these inferred relationships between ideas. It's working more like a human brain. You can't just see why it decided to hit the pedestrian and not the other car. Or you could ask it, and it might be able to tell you, although who knows if you're going to get a good answer. So I think that is scary. It's like we've built a technology that we can't understand. And as it gets smarter.
It'll eventually probably become much smarter than people, and we will be operating in this weird world, operating all this technology that we ourselves don't know exactly how it works. And then at what point does the AI become the decision maker, and we become
the slave to its better decisions, because it's so much of a better decision maker than we are? You just described the U.S. government. Did I? Just saying. Hey, look, I'm not that guy, but it starts to feel that way. If it's convoluted enough, and you don't understand it, and they're making lots of decisions, I'm like, that sounds familiar on some level. But yeah. Hopefully.
And when this happens, I think some people will disagree with the decision, right? Yep. And it's going to have to do with how the AI model was trained. What inherent biases does it have? How does it interpret certain ideas? For example, they found there was a relationship between the word good and the color white in the data. The AI is reading different texts, and it's thought to be a racial bias:
that whiteness in literature, across data in various ways, has subtly been uplifted more than diversity. And so any model trained on worldwide data over the past whatever has that bias built into it. And I think there are people who are working on problems like that specific problem and trying very hard to make the AIs more objective, but every single word has all these inferred meanings.
Again, there's no way that we can actually understand why the logic of the AI works exactly the way it does. Like when I typed into Midjourney the number 96,500, and it returned to me an image of a giant table full of hamburgers. Why does it think that's what 96,500 means, a giant table of hamburgers? But that's the closest idea that it had
to my inquiry. Yeah. I want to ask you something, and everybody listening wants to know: is my job in danger? Obviously there's an evolution here, but maybe we'll keep it in the marketing and business space. What are the biggest threats and opportunities that you see? Business is very broad, but marketing business, what are you seeing? Yeah. So certainly a lot of jobs are in danger of transformation.
Whether you're in danger of being replaced, I think, is an additional question. There are jobs where a lot of the things that you do aren't really creating a lot of value for your organization, but they have to be done. And if an AI can help automate or speed those things up, it'll be up to the leadership whether they want to cut costs, or whether they want to grow faster, create even more value, and apply you to things that create value based on your skill sets.
There's a mindset that leaders need to take on right now, and they need to think really hard about this. I was reading about a guy who works in the video game industry and doesn't like his job anymore, because he's being told to use AIs to generate assets instead of creating them himself. He doesn't feel like it's a creative thing, but he's being forced into this workflow by the management. And this guy's a super creative guy. If you think about it,
instead of telling him, stop being creative and just start pumping these prompts into this image generator, how better could you apply his creativity, given that we can enhance and speed things up? You're no longer using that employee's best characteristic. So I think that in the short term, we're going to see a lot of jobs go through a similar thing: hey, the thing you used to do, we can do with AI, so I don't know if we need you anymore. And I think that's a really bad way to look at it.
Any company that wants to grow needs to create more value in the market, and anyone else can do the same activity of taking an AI and having it do this menial work.
Cutting that guy's job and having an AI generate his assets for him isn't going to allow you to win in the market. At best, you'll be just as good as every other firm doing the exact same kind of thing. What's going to help you win is looking at those employees and saying: what are the superpowers of this particular person? Now that he doesn't have to create these assets for us, how can we win because we have him? So
I'm hoping that in the medium term, people will start to see this more and realize that a person is more than just the basic things that you can generate with AI. But yeah, content marketers, like I was saying with SEO and stuff: I just generated 25 posts for my blog,
SEO optimized. I did it by putting some keywords into a spreadsheet and running an automation. It took me two hours to do two months of work the other day. I'm not going to hire somebody to do that. Well, that's right. But hey, until they're running cameras and stuff, maybe video is somewhat safe, until they can start piecing us together: not only our voice, but our person, moving around, digitally enhanced. I guess that's called a deepfake.
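The two-hour spreadsheet automation Rob mentions above could look something like the sketch below. This is purely illustrative: Rob doesn't describe his actual tooling, and the function, column names, and prompt wording are all assumptions. The LLM call is passed in as a callable, so any chat API could sit behind `write_post`.

```python
import csv
import io

# Sketch of a keyword-spreadsheet-to-blog-post pipeline: one CSV row in,
# one AI-drafted post out.

def generate_posts(csv_text, write_post):
    """For each keyword row in a CSV (columns: keyword, topic), ask an
    LLM-backed writer for a draft post. `write_post` is any callable
    (e.g. a ChatGPT API wrapper) that takes a prompt and returns text."""
    posts = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        prompt = (
            f"Write an SEO-optimized blog post targeting the keyword "
            f"'{row['keyword']}' on the topic '{row['topic']}'."
        )
        posts.append({"keyword": row["keyword"], "draft": write_post(prompt)})
    return posts
```

In practice you would export the spreadsheet to CSV, point `write_post` at a real model, and review the drafts by hand, which is exactly the human-editing step discussed earlier in the episode.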
Again, I've seen demos of where this stuff is headed and we're headed there, but it's still like, how are you going to decide what the video should have in it? What's the content? What's the storyline? What's the storyboard? What shots to use? It's going to take a very long time for the AI to be so good at that, that it's not even worth it to develop a concept or to storyboard out a scene. I think people are...
In some ways, people are not seeing the way the technology is moving, and they're thinking too far ahead. They're like: if it can write a paragraph, it can write a book. It's not as easy as that. If it can create an image, it can create a movie. But that's not true; movies have a lot more than just a series of images. There's a lot that goes into it. Is it just going to replace search, replace Google? No, Google's going to have their own version and do all this stuff; I just can't see Google ceding that ground. Obviously Microsoft was
ahead of the pack here, at least with their public release. How's that battle royale going down? It's going to devastate certain acquisition strategies that are based in search, and it's going to leave others unaffected or change them.
And we don't know exactly who's going to fall and who's going to stay. But I think there's some basic queries that people have built websites around where it's like how to do this, how to do that, what does this mean? What's the definition of that? Those kinds of things. It's way easier just to go into a chat bot and say, what's the definition and get it. Right. You don't have to sift through all these blog posts and stuff, but there's other things like
that require so much thought leadership, or where you're doing a product or trying to really understand something, and if you go and ask ChatGPT, the answer is superficial or average or not the specific thing that you're looking for. People are still gonna search for all that stuff. And so I think anybody where search or SEO is a core part of your business strategy should be terrified,
because there's going to be these massive changes in how that works. And some of those investments for some companies, millions of dollars of content, are going to become useless in the next few years. But I think there's still room for search, and it might evolve; we might search and chat at the same time, but we're going to want to see what the experts think about stuff. And I, who believe that search is going to be devastated, am also still investing in it, and I think maybe that should
say something to people where they're like, well, clearly there are opportunities here still to create content and to create destinations on the web. I don't think that's going to change. I love it, man. Talk to me about...
how people are listening and going, wow, this guy really is the AI whisperer. How are you helping people? I know you're putting out prompts and things like that, but let's spell it out. What kind of value? How can people not only get in touch with you, but what can they learn from you, and what are some of your value props? Yeah, so on Twitter and on LinkedIn,
my goal is to have my free content be better than people's paid products. So I try to put out just some of the best thinking, and not just these "99% of people don't know how to use ChatGPT" threads where you find you've heard all the tips in it before. How could only 1% of people know this if I've already heard it? I try to really bring it, and I'm doing independent research and innovating and things like that. I've got a popular newsletter, eight or nine thousand subscribers, where I break things down
on practical applications of AI in business, or to solve real challenges. So instead of showing, hey, here's a cool prompt, we can have it write a limerick about your company or something, it's like, who needs a limerick? Let's talk about how we implement Alex Hormozi's strategies from $100M Offers and have the AI do most of the hard work, take a process that takes all day and turn it into a process that takes an hour. Let's look at real things we can do with this stuff. So you can find all that at mindmeetsmachine.ai, where you can find my Twitter and my newsletter. And then I've also got some courses, one called Content Reactor for content creators, and by the time this airs, my next one will be launched. It's a masterclass on advanced prompt engineering. It contains techniques that most people in the world have never seen or heard of, some that I've developed myself, so I'm really trying to help bring the industry forward with all this stuff.
You definitely are, man. That's why you're sitting here on the Radcast, because of all the value you're putting on LinkedIn. I saw it immediately, because you see a lot of content, and yours gives some value, tells somebody what to do. You're giving real-world prompts, stuff that I've already used. And I'm like, dude, not only does this guy know it, but there's a generous nature to it; he knows how to play the long game. And I think it's why you're so successful. Yeah.
That's always been my philosophy with content: there's the cheap, easy quick win, but it's not sustainable, and people are just as likely to leave you as an audience member the next day for the next easy dopamine hit or whatever it is. And then there's real value that you can create, and I think I've demonstrated this
in terms of how I've run my life. And so anybody out there who's listening to this, who's creating content for your business or for your personal brand: if you go further than anyone else is willing to go, if you invest harder than anyone else is willing to invest, in the right topics and categories and in the right way (obviously you have to solve problems for people), it can create relationships and business outcomes that are way beyond what doing the easy stuff does.
Bingo, my friend. That's my life mantra right there. You just summarized it. So good. So great. I really appreciate you coming on, Rob, and spending time adding value. You guys will have all the links in the show notes. Really appreciate your time, Rob. Well, thank you. Yeah, this was a good conversation. Hey guys, I want you to go to theradcast.com and search "ChatGPT." You're going to get all of the highlight clips from today, and there's going to be a bunch: everything, all the knowledge that Rob dropped, the full episode, the short episode,
and you'll see it all over our social channels for the next few weeks. You'll find me, I'm at Ryan Alford on all the social media platforms, blowing up on TikTok. Go follow me over there. We'll see you next time on the Radcast. To listen and watch full episodes, visit us on the web at theradcast.com or follow us on social media at our Instagram account, the.rad.cast or at Ryan Alford. Stay radical.