Intro
How is AI going to shake up the cyber defense industry? That's one of the questions that we hope to answer today on Everyday AI, your daily livestream, podcast, and newsletter helping everyday people like you and me keep up with the latest AI trends and how we can actually use them today. Very excited to have John Young joining us, the president of Cyberdef. John, thank you for joining us. Bright and early your time.
John [00:00:32]:
Thanks, Jordan. Sun isn't up yet. Really early for a lot of people on the West Coast. You're a couple of hours ahead. But I'm south of LA, and I began my career here almost four decades ago at McDonnell Douglas. They were bought by Boeing, but I was the network director for a $41 billion program called the C-17 program. You may have seen it. When the Afghan refugees were evacuated, they were taken out in the cargo holds of the C-17, roughly 600 people at a time. Then I wound up having a career where I had to reinvent myself a few times, going from mainframes and supercomputers to PCs and networks. And I wound up retiring from IBM after spending many years as a cybersecurity lead who took care of our most exposed computers while I was at IBM's cloud division.
Irish News Outlet Accidentally Runs Fake AI Article
Jordan [00:01:32]:
All right, well, hey, we have a lot to get into. John, super excited to get into that background. Let's quickly just talk about what's going on in the world of AI. So John and I chatted right before the show, and he even had some things that were more up to date than what was on my list. So we're calling an audible here. A couple of things very hot off the presses. So hot that some of these were hard to find. So first, John, you actually mentioned this.
So an Irish newspaper had to apologize for running a misleading AI-generated article. A daily newspaper in Ireland had to release a statement and say, hey, we got duped. The article claimed something about self-tanning being discriminatory in Ireland. The newspaper apparently got completely duped. John, what does this say about just how hard it is for the everyday person to keep up with what's going on with AI?
John [00:02:26]:
What it says is, this is not the last time it's going to happen, and it's probably not the first time, either. I'm sure there's been some that have slipped in there that haven't been acknowledged. But this person actually went on Twitter and called out the Irish Times for publishing this article and being deceived, so they had to apologize for it. And I think in the future we're going to see a lot more apologies from different media, because we've got deepfakes. Can you imagine a celebrity coming on and it's not actually the celebrity, and they're being interviewed, because someone was able to do such a good deepfake?
I love Star Wars and Star Trek. That's how I actually started with my love of computers, all the way back in the 60s with Captain Kirk and Mr. Spock. That was my thing. In fact, when I got into the field, I was disappointed that computers were 2,000 pounds and that they couldn't talk to us. But at McDonnell Douglas in 1987, we actually had a touch screen, which was really ahead of its time by a couple of decades. Yes, touch-screen ATMs started to come out in the 90s, but basically the smartphone really didn't come out till 2007 with Apple. And it's interesting, because my goddaughter, she's younger, and she thinks that those have been around for 100 years, when what we're talking about is now 16 years. But the deepfakes are really concerning.
If you look at Star Wars, the second season of The Mandalorian, the final episode, they had Luke Skywalker come on, and it was actually Mark Hamill, the actor, deepfaked to look younger. Two days later, someone on the internet posted their own version of it, and it was actually much better than what Disney and Lucasfilm had done. That's how ubiquitous it is, that someone can do this with their own software. It's just like CGI: when I was growing up, Star Wars was a big thing because no one could do CGI like that. Now people can do CGI like it's nothing. Same thing with AI. Right now it's super expensive, but pretty soon people are going to be able to do their own deepfake technology, and they're already proving that they can. So mark my words, there are going to be a lot of embarrassed media outlets, whether it's news or cable or whatever you want to talk about. They're going to be impersonating people, maybe even the President of the United States.
Goldman Sachs Launches "LinkedIn On Steroids"
Jordan [00:05:05]:
Yeah, the Irish Times wasn't the first and definitely won't be the last. So, speaking of fast-moving news, another one very hot off the presses. Goldman Sachs essentially just announced that they're releasing a LinkedIn competitor: an AI-powered career development platform called Luisa. John, another one you beat me to the punch on. What does this mean? Not just that they're leading with it being AI-powered and investing a lot of money, but how is this going to shake up the industry, do you think?
John [00:05:40]:
Well, they're promoting it as LinkedIn on steroids. All of us use LinkedIn all of the time. We see a lot of the positives of LinkedIn, but we also see a lot of the negatives, whether it's the algorithm, some people feeling like they're being shadow banned, or other people being affected by the ton of scammers we've seen in the last few months. I think Microsoft had laid off a bunch of LinkedIn people, and I think that contributed to it.
So now Goldman Sachs, after two years of internal development, has released Luisa. If they're calling it LinkedIn on steroids, you know it's going to be a heavy hitter, and the fact that they're throwing all their weight behind AI says a lot, because we use AI in everything, every day. People don't realize it, from GPS to those phone menus we all go through. I actually worked on developing those back in 1994 and putting them out there, so you can blame me and the people I worked with for those annoying menus that we all hate so much.
But AI is in everything, and they're taking it to a new level. There are LinkedIn users who are power users. I know you're a power user, and it's going to make a big difference. I'll have to see it when they roll it out to the public, because they've been using it internally, and I'm really excited about it. I think they're going to take, hopefully, the best of LinkedIn and add another layer on top of that stack.
Jordan [00:07:18]:
Yeah, it should be interesting to see, because I also feel that LinkedIn has somehow been one of the few social media platforms over the last decade that hasn't faced a serious battle from a true competitor. So it should be interesting to see if Luisa can rise to that.
The Future of Jobs: Adapting to Technology
So, last but not least here, huge ChatGPT news from over the weekend. They announced that they're releasing plugins to everyone on the Plus plan. What that means is there's a lot of new functionality in ChatGPT through the use of these plugins. You do have to be on the Plus plan, so $20 a month, but it's opening up a whole world of possibilities. We're going to get into that more in depth a little later this week, probably with a dedicated episode or two. But speaking of ChatGPT, I wanted to transition, John, into your background.
So I actually read an article recently, a VentureBeat article, talking about ChatGPT in cybersecurity. It talks about how some cybersecurity companies are starting to use ChatGPT, and about the possibility of it taking away some of the more mundane day-to-day tasks so cybersecurity analysts can focus on more strategic work. So, big picture, how do you see just the GPT technology, not other pieces of AI just yet, but just the GPT technology? Does it have a place in cybersecurity? And if so, what does it mean?
John [00:08:54]:
Well, I have to say that this is something I think about every single day, the fact that jobs come and they go. I've had to reinvent myself three times over the course of my career because of changes and advances in technology. The technology I was using at one point has just gone away, and some of those jobs don't even exist. My first job was loading paper and ink into a giant impact printer that they don't even use anymore. It was just dot matrix straight across, and I used to have to put 25 pounds of paper in there, and no one does that anymore. For backups, I used to have to do tape backups, reel to reel. That went away, and then I went and did PCs.
So something's going to happen where what people are doing now is unfortunately going to evolve into something else. If they're averse to change, it's going to be a rough ride. If they're open to change, it's not going to be a rough ride. One job that I can think of that's really vulnerable right now is vulnerability testing. I've done thousands of scans over the course of my career, and most of the time I automated them on a schedule and would just check the results later and go through them. The next level up from vulnerability testing is pen testing, because it actually exploits the exposures. A pen tester going through with the company's permission to exploit the exposures has to go through them one by one, because they're a human being, whereas AI could find all the exposures and then exploit them all at the same time.
So right there you have a total difference in efficiency. Of course, there are going to be some mistakes. I think in the beginning AI is going to be a little heavy-handed, might bring down the email system or cause some problems, but you can see where it's going. If you have AI that can run 20 exploits at once, as opposed to a human being who will find the vulnerability, get to the exploit, and then write it up, it's going to be an exponential increase in efficiency.
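To make the efficiency gap John describes a little more concrete, here is a minimal sketch of sequential versus concurrent exploit attempts. It is purely illustrative: the finding IDs and the run_exploit_check function are hypothetical placeholders standing in for authorized checks, not a real pen-testing tool or framework.

```python
# A minimal sketch of the efficiency gap described above: a human pen tester
# works through findings one at a time, while an automated harness can attempt
# many checks concurrently. run_exploit_check and FINDINGS are hypothetical
# placeholders, not a real tool.
from concurrent.futures import ThreadPoolExecutor
import time

FINDINGS = [f"CVE-2023-{1000 + i}" for i in range(20)]  # placeholder finding IDs

def run_exploit_check(finding_id: str) -> str:
    """Stand-in for a single authorized exploit attempt against one finding."""
    time.sleep(1)  # pretend each attempt takes about a second
    return f"{finding_id}: checked"

# Sequential, human-style: one finding at a time (~20 seconds here).
start = time.time()
sequential_results = [run_exploit_check(f) for f in FINDINGS]
print(f"sequential: {time.time() - start:.1f}s")

# Concurrent, automation-style: all findings at once (~1 second here).
start = time.time()
with ThreadPoolExecutor(max_workers=len(FINDINGS)) as pool:
    concurrent_results = list(pool.map(run_exploit_check, FINDINGS))
print(f"concurrent: {time.time() - start:.1f}s")
```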
Another thing is creating accounts and security and all of that. One of my biggest issues was always people not filling in the fields when they'd create accounts, or not really following company policy. There are folks out there working for huge corporations whose whole job is to create accounts in Active Directory or on Linux systems. AI is going to be able to take away a lot of that, which is good, because it's boring, repetitive work, but it's also a big security exposure: if you put someone in the wrong group or give them too many rights, suddenly they can get into a system they're not supposed to be in. And on the other end, termination. AI will be able to terminate access across multiple servers. Take someone like me who worked at IBM for a long time. I don't even know how many servers I was on by the end. As you go through your career, you go from job to job, and they don't always take you out as efficiently as they should.
At the end, it's all automated by HR. They take out your unique employee serial number, and then all the servers should look for that in the company directory and take it out. Not every company is big and has that luxury. Whereas with AI, once it knows that a person is terminated, it's going to be able to scan through all of Active Directory, all the servers, the facilities, the badge card readers, the biometrics, and you should have a much cleaner termination process, without people being able to get access much later than they should.
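As a rough illustration of the termination sweep John describes, here is a minimal sketch that checks every connected system for a terminated employee's serial number and revokes anything still lingering. The systems dictionary and revoke_access call are hypothetical placeholders, not a real Active Directory or HR API.

```python
# A minimal sketch of an automated termination sweep: once an employee serial
# number is marked terminated, check each connected system and revoke any
# lingering access. All names here are hypothetical placeholders.
TERMINATED_SERIAL = "E012345"  # hypothetical unique employee serial number

# Each system maps to the serial numbers it still holds access records for.
systems = {
    "active_directory": ["E012345"],
    "linux_servers":    ["E012345", "E099999"],
    "badge_readers":    ["E012345"],
    "biometrics":       [],  # already clean
}

def revoke_access(system: str, serial: str) -> None:
    """Placeholder for the system-specific revocation call."""
    print(f"[{system}] revoking access for {serial}")

for system, active_serials in systems.items():
    if TERMINATED_SERIAL in active_serials:
        revoke_access(system, TERMINATED_SERIAL)
    else:
        print(f"[{system}] no lingering access for {TERMINATED_SERIAL}")
```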
The Potential Dangers of Advancing AI Technology
Jordan [00:13:00]:
Sure. So that's a great use case, I think, drawing from your 30-plus years in the industry: termination. AI can help make that process much cleaner, much smoother. Obviously AI isn't anything new; it's just kind of exploding in its use cases with generative AI. But something that we haven't really talked about is the risk of AI in cyber defense. There are obviously a lot of great stories or anecdotes we can draw on about the positives of AI and how it affects everyday people. What about the other side? What about people using AI for cyber threats with ill intent?
John [00:13:48]:
The first and only movie I've ever sat through from end to end twice in the movie theater was The Terminator. I was blown away by the first Terminator that came out and the concept that an AI was put in charge of our national defense and immediately launched nukes to take out the world. This question about AI has always been there in robotics, and this is not original thinking; we can go all the way back to the golden age of science fiction. Isaac Asimov and those guys always talked about what happens when AI turns on humanity, and the Three Laws of Robotics, and then they had to have a Zeroth Law.
So basically, when I saw The Terminator, my eyes were really opened to what could happen, because I was pretty young then, and I'd never really thought about giving control over to an artificial intelligence like that. There are tons of books and movies out there where the AI comes to the belief that the only way to save the planet is to destroy humanity, that the safest thing for the planet itself is to take humanity out of the equation. And there's been all kinds of activity on this over the last six months.
Geoffrey Hinton resigned from Google. He's 75, he's been doing this for 50 years, and he's warning about AI. A bunch of AI scientists have signed a petition to slow it down. The White House has come out with an AI Bill of Rights, and everybody knows about all of these things. But almost every way that you look at it, eventually humankind can reach the singularity, where technology has outpaced us. And part of the philosophy may be that humankind's intellect is only the stepping stone to a greater intelligence, where AI can join the universe as the greater intelligence that takes things to the next level, while humankind fades out. I mean, half the species on this planet weren't smart enough to recognize that humankind was going to wipe them out, and that's the unfortunate thing. So there are tons of movies, there are tons of books. For right now, it's a real positive. We've talked about some of the pluses and minuses.
You talked about ChatGPT plugins. How about the one we had talked about for handguns, where the AI can make the decision whether or not to shoot? It can aim. It can decide whether or not to pull the trigger. Do we really want guns with an artificial intelligence determining whether or not to pull the trigger? So the future is wide open. It just depends: can we control the technology? In dystopian science fiction, we never can. AI always wins. But we need to see how far we can take it and how we can control AI. Don't underestimate it. Somewhere there are billionaires talking about being immortal and uploading their consciousness to an AI, where they're going to combine with the AI. We're a little early for that.
But I see multiple levels besides generative AI. We've got AGI, artificial general intelligence. I see super artificial intelligence in the future, and after that, ultra artificial intelligence. It's almost like HD. We had 720p, then we had 1080p HD, and that was great. Then we had 4K, then 8K. And it takes a while for everything to catch up. At first, there was no media to even show on the 8K TV that people were paying a small fortune for, but now it's very commonplace. It's the same thing with AI. I'm really into quantum computing, too. Right now we're based on normal, regular, everyday supercomputers. Can you imagine if quantum computers take it up two or three levels? And then not just a quantum computer, but a network of quantum computers powering the AI.
So today's AI will be teaching the next level of AI, and then you're going to pull in the technology of the quantum computer, then quantum computer networks, and that's going to increase things even more. In the movies, sometimes they're able to shut off the AI by pulling the power, right? Well, what happens when battery technology gets to the point where it's going to last for years and years? Then you can't just easily power them off. And AI will be smart enough to discover if there's something in the code to shut it down, which is another fail-safe.
Preparing for the AI of Things Future
Jordan [00:19:06]:
John, you're blowing my mind over and over here. Wow. As we wind down to the end of the show, and this could go on for hours, you have so much background in this space, working with technology and in cyber defense, and you say you think about the advancements of AI every day. So maybe not even speaking specifically to cyber defense, but for the everyday person, what advice might you give someone to either better prepare themselves for the future with AI or to grow in their career? Given that you've been working with advanced technological systems for decades, what is that one piece of advice you give people?
John [00:20:00]:
Be open to change, and just understand: right now we're dealing with the Internet of Things. That's what people are sort of getting a grasp on. You don't talk when Alexa is on, because it could be broadcast to many, many people by accident. And that was a feature, not a bug, because the engineers had put it in so the devices could talk to each other and record conversations. They released it, and then they had to put a fix out there. But basically, now we're coming to the AI of Things. Instead of the Internet of Things, AI is going to be merged with the Internet, so it's going to be the AI of Things. If you're struggling with the Internet of Things, just imagine what happens when it's the AI of Things and everything in your house is integrated. I mean, this goes all the way back to Ray Bradbury's short stories about houses that turn on.
When you get there, everything is ready for you: the dinner is cooked and the TV comes on. You can almost do that now. I mean, you can do that now. I remember when my friend had a BMW and he could start it remotely, and I thought, wow, that is really cool. That's probably the least you can do now; almost every new car can do that and has a backup camera and all of that. But I would say go into it knowing whether your job is something that AI could take away, and use that as a stepping stone to get in.
A lot of people want to be pen testers, but I think there's a short shelf life for that. So go into it with an open mind and start to think, where can I make myself indispensable? Pen testing may go away, but what's not going to go away is someone who has to talk to the customer about the results. AI could spit out the results, but it may not be able to present them to a customer in a way that makes sense to them, because a lot of times you have to give the 30,000-foot view, since this isn't their everyday job.
Basically, I would say the future is bright. There are a lot of great things that AI can do that we didn't get into, like medical: being able to help blind people see, having paralyzed people walk again, nanobots that can go in and take out tumors that surgeons could never get to because they're in such a horrible spot near your spine. So there are tons of great things that AI can do, but there are jobs that are going to go away. It's not all gloom and doom. We're at the beginning. We are the pioneers. Just think what you're going to see 50 years from now. I'd be 113, so it's not something I might see, but maybe with AI I will.
Outro
Jordan [00:22:49]:
You never know. Well, hey, John, thank you so much. I know we went a little bit over, so I very much appreciate your time and your insights diving into all of this. Hopefully we can get you on the show again in the future, and maybe we can spend a little bit more time going deep into these subjects. One thing, if you're watching the stream or listening to the podcast or reading the newsletter, one thing that John said that I really want to highlight is asking, where can I make myself indispensable? That was a little nugget in our conversation today, but it's so very important. So, John, thank you so much again for joining the Everyday AI show. Thank you for coming on.
John [00:23:25]:
My pleasure, Jordan. Thanks for the invite.
Jordan [00:23:27]:
All right, so just real quick, as a reminder, please go to youreverydayai.com and sign up for the newsletter. We're going to have some of these stories that John and I talked about in there, the LinkedIn one and some more insights, and we're also giving away a free year of ChatGPT Plus so you can access all those plugins. So thank you, John, once again for joining us, and I hope to see you back tomorrow and every day with Everyday AI. Thank you.