Will the rest of the world force the United States to slow down AI development? I'm not sure. But that's one of the things that we're going to be talking about today on Everyday AI. This is your daily podcast, livestream, and newsletter bringing you the latest news, tools, and trends in everything AI, so everyday people like you and me can keep up with it and actually use what's going on to help us in our lives and careers. Now, not as always, but as often, we have our guest host, Brandon. Brandon, what's going on? How are you doing?
I'm doing well. How are you doing? Hopefully everyone's having a good day, and hopefully you have as nice weather as we're having in Chicago today.
The Chicago weather, you have to really fight through the winters, and I think you get rewarded. You get rewarded with four months of just absolute beautiful weather. So we got that this weekend.
So if you are joining us live, please leave us a comment. We kind of billed this episode as an ask-us-anything. We haven't had one of these in a while, so I want to make sure that if you are tuning in live, you can get answers to your questions. Brandon and I and a couple others on our team are spending a lot of time every single month to keep up with all these newest developments and what's going on in the world of AI. And we want to help you. That's ultimately what this is about. So if you have any questions or comments about anything going on, please leave us a comment.
AI Regulation Discussed at G7 Summit
So let's get going. We're going to run down some of the newest developments in AI news and what's going on. So, what we led the show with today, Brandon, this should be interesting to see. The G7, that's the world leaders from some of the biggest countries: the United States, Canada, members of the European Union. They met in Hiroshima, Japan, on Friday and over the weekend, and one of the biggest things they were talking about was AI. And I think the rest of the world, whether it's for better or worse, is pushing for AI regulation a little harder than we are in the US. A little bit more later on why I think that's happening. But one of the biggest things they talked about at this G7 summit in Hiroshima is making AI more trustworthy, and whether there are regulations that the G7 world leaders can agree to. Brandon, what do you think this means, if anything?
Yeah, I mean, I think this is important. I think this is definitely a big step for not only the United States, but the rest of the world, because if we can maybe create some sort of general standards, even though each country is going to adopt their own thing, hopefully it helps us to just get it further along. I know as we're pushing out news in our newsletters, we've put a couple of stories out about China, as an example, already putting regulations in place for AI. So there's other countries that are already ahead of the game, and it feels like the US is kind of in this waiting-game spot.
Yeah, exactly. PJ, thank you for your comment. So, PJ is asking, what tools are available to check that your AI-generated content is accepted by Google and is not plagiarism? So many people are asking about that. Brandon, we actually did, I won't say a fun study, because it was actually extremely time-consuming and probably a little frustrating. PJ, I'll come with the hot take: it doesn't exist, because there's no way to consistently tell if content is AI-generated.
So there's three or four kind of big-name companies that say they offer the service to detect if your content is generated by AI, so by ChatGPT, or by Jasper, or Copy.ai. And a lot of these platforms also, as a quote-unquote feature or benefit, say, hey, it won't be plagiarized. So those are two different things, right? Plagiarism is obviously when you're lifting content verbatim that already exists on the internet. So are there tools that can help make sure that your AI content is not plagiarized? 100%, and that's accurate. But where the line starts to blur is now they're saying, hey, there's AI content detectors. And Brandon, I'd say at least with our testing, not a chance.
Yeah, they do work, but they're not consistent. So there are a few that can let you know if it feels that the content is written by AI, but from our testing, and maybe that's something that we can talk about in the future and put out some of those examples, I do think that there's not one that's going to give you exactly the right answer all the time.
Yeah, that's a great question. So for plagiarism alone, yes, there are those tools, and those work very well, because obviously they're checking whatever the system may be producing against exact copies to see if that already exists. So on the plagiarism side, I think it's fairly accurate. You either have to use a service, and I don't believe ChatGPT has this built in. I'd have to check the plugins section, because new plugins are being released all the time, but I don't think ChatGPT has this built in. Other services do. But fantastic question.
US Slow to Regulate AI as US Tech Stocks Surge
So, getting back to some of these other things that are going on. We talked about the rest of the world kind of pushing for regulation, and the US not really being on the forefront. Obviously, we talked on the show last week about OpenAI CEO Sam Altman testifying before Congress. I think, at least in the United States, I don't see any major regulation happening soon, emphasis on major. And here's one of the reasons why. There was a report that just came out this morning from Reuters, looking at the stock market and the S&P 500, one of the major indexes in the US. It showed the index is up about 10% this year, 9% specifically. But what they found is that if you take away the AI and tech stocks, the S&P 500 is down 1%. So if you want to know, at least in my kind of hot-take opinion, why the US is seemingly dragging its feet on AI regulation, there's the answer. At least that's what I think.
I mean, it does make sense. And I could see this background fear maybe forming of AI going to dictate basically everything for the US, which is probably what's going to happen. And so I feel like if you don't have a good handle on how you're going to regulate it, how it's going to come into play for various reasons, I can see why you'd be hesitant, especially with the power that it's already driving for multiple industries in terms of funding. It's the number one thing that everybody wants to get into. So it makes sense.
Yeah. And obviously the S&P and Nasdaq, I think, are doing a little bit better than the Dow Jones, as an example. But take away those tech stocks, the AI companies or companies investing heavily in AI: if the US government were to come in and just regulate hard, the economic effects that would have on the US would be mind-blowing. Look at all of the biggest companies that are helping fuel this positive growth for the United States. Your Microsoft, your Alphabet (Google), your Meta (Facebook), IBM, Nvidia, these biggest companies that are up 30, 50% over the year and driving this growth. If you come down too hard on AI regulation, it will really start to cut this growth off at the knees. Brandon, what do you think?
Yeah, I agree. Every major company is getting in the game right now. And so it's like taking away the future of these companies, basically. And so I think that would be interesting, because if you take away AI, what else are we working on in terms of future tech advancements? I don't know. I'm sure there's something out there, but this is really the wave right now.
Google's New Text to Speech and Audio
Yeah. So speaking of tech advancements, let's go to our next story. This is also pretty new. I actually had to search quite a bit to find this. Google released a new kind of text-to-audio, text-to-music, and text-to-speech platform just this morning called SoundStorm. We've talked about this space on the show before, and we linked it in the daily newsletter. So if you haven't subscribed to that yet, go to youreverydayai.com and sign up for the daily newsletter.
So Google had announced MusicLM before, which was strictly text-to-music; you could type in a little prompt and get a 15-to-20-second little music piece out of it. Now they've announced SoundStorm, and it's going to be faster. It's going to allow text-to-speech, text-to-audio, text-to-music, and it's going to let you generate a little more. But I think the interesting thing here, Brandon, as I play a sample, or try to play a sample, is dialogue. All right, we're going to try this. I haven't presented audio live yet on Everyday AI, so hopefully this works. We're going to give this a test here. I'm going to hit the play button, and what you should hear is some dialogue. Let's take a listen. It's 20 seconds.
Voice 1 [00:11:07]:
Did you hear about Google's paper on Soundstorm?
Voice 2 [00:11:10]:
No, I must have missed it. What's it about?
Voice 1 [00:11:13]:
Well, it's a parallel decoder for efficient audio generation. It can even be used to generate dialogues.
Voice 2 [00:11:20]:
Voice 1 [00:11:22]:
Yeah. Like, this one was generated by Soundstorm.
Brandon, what are your thoughts?
I know on one hand, someone's listening to this going, this is scary, but to me, I think that's pretty cool, actually. Just the cadence, the pauses, the general reactions that they created. It sounds very natural. It doesn't sound like any AI generation was going on there. The pauses were normal. Yeah, that was good. And I think the way that you'll be able to start to capture emotion in text-to-dialogue is going to be interesting, just because that's one of the last pieces missing to make it sound more natural. So that was really good.
Yeah, I think that's crazy, wild, when I listened to that this morning. So I didn't fully read it. Let me actually just share my screen again here. I didn't fully read this part right here. I just saw the big play button and I hit play, right? So I didn't even know until the end that it was actually AI-generated. I kind of thought, oh, maybe this is, but I didn't read the full thing. And then when they said, oh, this was generated by SoundStorm, I was like...
Whoa, yeah, that's really good. I know. We've personally used other services in the past, and this one definitely has to be the best so far. So this is definitely going to be kind of an industry breakthrough for text-to-dialogue, man.
So, yeah, one of the big companies in that space is called ElevenLabs. They're probably one of the top two or three, at least, maybe number one. But Brandon, something we haven't even talked about, maybe we save it for another episode: what happens when the big companies, the Metas, the Googles, the Microsofts, start developing these other models, like text-to-speech or video, right?
So Runway, the company, Runway ML, has been running away with the text-to-video sector. All these different generative AI companies that started small and grew so big: what happens now when Google releases its text-to-video? Or what happens now that we have SoundStorm and we have this text-to-speech? I don't know. What do you think?
Yeah, I don't know. As much as we've been talking about how Google's kind of hesitant, sitting in the background, maybe they're doing it for a reason. And I would not be surprised if, when they come out with their own services, they just blow everything out of the water, and everything that we thought was the new gold standard is just thrown to the back. So it'll be interesting to see if they produce the same kind of quality as that text-to-speech or text-to-dialogue. That'll be interesting.
Challenges of Addressing Biases in AI Algorithms
Yeah. Sayetta, so we have a comment first. Thank you. I know you're always interacting with us on LinkedIn, Sayetta, so thank you for that. She has a question: how can we identify and address biases in AI algorithms to ensure fair and unbiased outcomes across various domains such as hiring, criminal justice, and loan approvals? Wow, that's a huge one. Where do we even begin on that one? I'll start with some background for people who aren't following AI development as closely as the rest of us.
So AI is nothing new, right? We talked about this, Brandon: it's technically been around for something like 60 years, and AI has been used in commercial development across different sectors pretty consistently since the 80s. So, Sayetta, to get to the second half, loan approvals: I know that the banking industry has been heavily using AI and deep learning for decades, so they are already using that in loan approval processes. Criminal justice, I'm not sure. That would be interesting to take a look at. Maybe we'll have to do a little research on that.
Hiring is another fantastic one, and I think out of the three that you asked about, hiring, criminal justice, and loan approvals, I'd say a lot of the current or more recent trends in AI are being applied most in hiring, in HR, in talent acquisition. But the biases are extremely important. We've mentioned this on the show once or twice before, and I think it's worth a listen: Lex Fridman had a, not super long because his podcasts are kind of long, but about a two-and-a-half-hour talk with Sam Altman, the CEO of OpenAI, and they talked a lot about biases and how they train models. I'd love to pretend I'm an expert, or that we're experts, in AI algorithms and biases, but we're not. I think a good place to start is to hear the CEO of probably one of the biggest companies in the AI space talk about biases, because it's real. Brandon, what do you think? I know we don't have the answers necessarily, but what do you see with bias playing into all this, and what can you do about it?
Something that you just made me think about is right now it seems that we're basing a lot of these tools that we are creating with, let's just say, three language models. For example, you have the big players, you have OpenAI, you have Bard, you have the Bing Chat. And so I'm wondering, let's just say right now those get published as what their biases are or how they work. We kind of know what to expect depending on what's released.
But then what happens down the line? Do we get so advanced to where every company is just able to create their own language model at a certain point? Or is that part of maybe what the US needs to figure out is like, okay, we're only going to base it off of these four major companies? And by doing so, to get back to the question, maybe we're able to understand what biases or what's in place to understand how these are working for being used, for hiring process, for loan approvals, criminal justice, if we get to a point where we're using AI for those things. So it'll be interesting to see, are we just talking about those major companies or is everyone going to have their own kind of customized AI in the future?
How Will AI Impact Employment and Inequality?
Yeah, for sure. And just a reminder, if you are tuning in live, this is kind of the ask-us-anything session of Everyday AI. So if you are listening on the podcast, make sure to check out the livestream sometime. We're taking your questions, and Sayetta has another fantastic question. Wow, for a Monday morning, you're making us work for it, but I love it. So Sayetta asks: what are the potential socioeconomic impacts of AI on employment, income distribution, and economic inequality? And how can we mitigate any negative consequences? Wow. Another fantastic question.
And again, I will reference that same interview, but I'll pull out something that Sam Altman actually said. I'm not saying this is one of the reasons why we started Everyday AI, but it definitely played into it. When I first listened to that interview, one little nugget that I don't think people picked up on in that two-and-a-half-hour interview is that Sam Altman said there will be economic shock. I think he was probably referencing the US specifically, but I do think that some of this will happen throughout the world. And real quickly, with my background: I've spent thousands of hours creating content, doing photo editing, video, audio. I've been doing some sort of marketing and communications professionally for 20 years. And AI is better than me at all of it. Right.
So what does that mean? Unemployment? Sayetta, I always say follow the money. These same companies in the US that are driving our economic growth, what's happening? Number one, they're laying off thousands of people. Number two, they're investing tens of billions, with a B. These same companies driving our economic growth are investing tens of billions of dollars into AI and also laying off thousands of employees. So what does that tell you? I think there is going to be some kind of economic shock. What that means, I'm not sure. I don't want to be the AI doomsday guy saying AI is going to take every job or every single job. But I do believe, and this isn't just my background and being able to see all of this, especially generative AI and what it can do: so many big companies are now laying off thousands of people, and they are open enough to say this is because of AI. Brandon, I know there's a lot to unpack there, but what are your thoughts on Sayetta's question?
Yeah, I mean, it's again, a really great question. Honestly, I don't know how much more I can add in terms of just like, what's going on right now to what Jordan mentioned, but I guess just personal opinion. Yeah, I do think that a specific example that's already coming to mind is the writer strike that's still going on.
And so you're going to have a lot of these situations like the writers' strike. If no one's familiar, right now in Hollywood, the Writers Guild of America is striking for various reasons, one of them being that a lot of these production companies are starting to use AI, and the writers just want to make sure that they have a job secured for the future. So they want to create standards for when to use AI and how to use it with the writers' work.
And so that's just one of the examples of kind of just like, people trying to, I guess, mitigate negative consequences, but we'll see how it turns out. And so I really think this question is great, but I feel like there's so many kind of variables up in the air to kind of know exactly how we'd be able to maybe avoid those negative consequences right now.
Yeah, it's definitely going to be, I'd say, a roller coaster of emotions for a lot of people, if I'm being honest, because some sectors and industries are going to be hit faster where it makes more sense. Brandon's example was a great one, the Writers Guild. People are training large language models to write scripts, to write TV scripts, to write movie scripts. Storyboarding is another big piece that goes into that industry, the Writers Guild, the strike, all of that. And there are so many of these tools now that can be leveraged in those environments, in the Hollywood scene, movies, TV, all of that.
But I do think other sectors are going to start to get hit. And it's the big companies, right? Like I always say, if you want to know what's happening, follow the money. Money is getting poured into AI, and those same companies that are pouring money into AI are laying off thousands of people and attributing it to AI. So, yeah, it's going to be interesting. But that's one of the reasons why we started the show, because we want everyone listening. We want you to really be able to keep up with everything that's going on and actually use it. The newsletter is such a fantastic resource, right? I do think hopefully the show here or the podcast will help keep you informed. But the newsletter is actually where we share those practical next steps to say, hey, we shared about this, or we had a guest on about this. Now, if you want to do something about it, here's how.
China Using AI to Bring Loved Ones Back From the Dead
So, I know we're a little over. We have one more news piece to wrap up, but if you have any other questions, please drop them. So this last one, Brandon, it's a little weird. It's kind of cool, but also very weird. There's a story in Business Insider, which we'll link in the newsletter, essentially saying China is using AI to raise the dead, to give people a last shot to say goodbye to their loved ones. There are a lot of smaller companies out there in China that are specifically marketing themselves as: hey, we're going to take your loved one, a loved one that's passed, and we're going to use all this technology to help you, after the fact, kind of bring them back to life, so you can have conversations with them, so you can still talk to them.
So there are different applications, whether it's text or audio. Kind of crazy, because obviously you can do this now and replicate your voice with different AI technology, but that's if you're training it, right? Like a live person training it. Brandon, what are your initial thoughts on this? Is it creepy or cool?
I don't know. I think it's both. I feel like this just comes down to personal preference. I could see a lot of people benefiting from this, just for whatever reason. I know a lot of people end up in situations where they wish they said something or they wish something went differently. So I think it gives people opportunity for closure. But at the same time, I could see other people being very disturbed by it. So I'm in between. I think it's cool and a little creepy at the same time.
Yeah. I don't know how to feel about this one. And again, this goes into everything with generative AI, how fast it's being developed, and, in the US, the lack of regulation, at least right now. There's misinformation as well, right? When a famous person passes away, there are always rumors swirling on the Internet and all these different things. I can see people, especially when someone famous passes, or maybe they don't, using this technology, especially when someone's voice is so readily available out there on the Internet, in TV interviews, movies, whatever. I can see people using this technology for misinformation: if someone has passed, creating something in their voice and saying, oh, I'm not actually dead, I'm doing this, this, and this. And there are already schemes, right? There are already schemes where people are using this technology to say, oh, this is your grandson, I'm stuck in a different country. It's like all those email scams that have been around for decades. People are using this kind of voice cloning for those reasons. So I don't know.
That's true. Yeah, I was thinking about that as you started. That is a good example of how people could use this negatively. So that'd be interesting.
Human Rights in AI
Yeah. And Sayetta, that kind of goes to answer your last question. I think we'll actually maybe create an episode in the future about human rights in AI, because that one would take a very long time to dig into, so we'll have to get to that. But I do think, Sayetta, that human rights specifically requires a deep dive, because there are so many implications, especially with that piece right there with China: being able to clone people's voices, being able to clone them as people. Right? Like, you can generate your own digital avatar. If you're watching us, I promise this is actually me and Brandon in real life, but we could clone our likeness to speak for us, to have video present for us. So the human rights piece, I definitely want to tackle a little bit more.
Tristan, what's up? How are you doing? So, Tristan, commenting on that story out of China, is saying: I think it's a great way to have loved ones who've passed on still be able to read books. Oh gosh, yeah, that's a good idea. And then saying, yeah, of course there are going to be bad actors. But yeah, I do think that's a great idea.
And some toy companies will have something like that: if you have somebody in your life who's passing, they can record their voice on a stuffed animal or a children's book, right? So that's been, I guess I hate to call it a market for products, but that's been out there. But I do think so many people pass unexpectedly or don't take advantage of that while the person is still living.
So, Tristan, I do think that's a good point. That will be great kind of after the fact for so many people that had someone pass on years ago or they never did do those kind of like voice recordings. Brandon, I do think that what Tristan brings up is a pretty good use case.
No, I agree. I like that a lot. I think that would be an awesome idea.
Yeah, like when people actually use it for good. But that's what this gets to. There's all this technology, and I think of it like the Internet, but times ten. Was the Internet used for good? Absolutely. Was it used to create chaos in society and help push war and terror and bullying? Yes. Right.
So there's two sides to the coin, just like there was with the Internet. With the invention of the Internet, it connected people, it brought people together, it helped us research diseases much faster and all these great things, but there was an ugly side to the Internet and letting the world talk to each other as well.
So I do think Tristan, kind of to your point, I think that's a great, fantastic use case, being able to take someone who passed years ago and being able to use their voice for that. But yeah, I do think on the flip side, like what you said, there's going to be people who ruin it for sure.
Brandon, as we wrap it up, looking over the past week and what's to come, what are you going to be keeping your eye on? We obviously have a lot of really cool guests lined up for the rest of the week, but what are you going to be paying attention to in AI this week, or what do you think people should be paying attention to?
Definitely continue keeping an eye on Google just because of the event that they had last week where they just announced a ton of things. We went over some of that last week, so feel free to go check out those episodes. But I think with them releasing this new text to dialogue, I'm curious, like, if they have any other things just in the background that they're waiting to just release. So Google will be one.
And then maybe also Meta. I think Meta has been quiet. Besides their huge announcement about their new chip, I'd be curious to see what else comes out about that, or if there's anything else that they have going on that maybe we don't know about. So those are my two personal companies of interest that I'm going to be taking a look at. Otherwise, I feel like every day there's something new that I don't expect.
Oh, there is, but that is what we're here for. So, with that, Brandon, thank you. Thank you for jumping on this morning. Also, go to youreverydayai.com, like Brandon talked about. We're spending hours every single day to help everyday people understand what's going on. And we actually spend a lot of time on the newsletter, the second half specifically, to say, hey, here's what went on in the world of AI today, and here's how you can actually use it.
So when we talk about economic uncertainty and job security and all of those things, that's one of the main things that we're trying to do with this newsletter. Not just to help you keep your job and grow in your career, but also to grow your business. The flip side of this is that there's so much potential to do things in minutes or seconds that used to take hours or days or weeks. So there's so much potential to use all of these updates, all this new technology, for you and your business. So make sure to check it out at youreverydayai.com. With that, we hope to see you back tomorrow and every day at Everyday AI. Thanks, guys.