Resources
Join the discussion: Ask John and Jordan questions about governing AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Overview
Artificial Intelligence (AI) has emerged as an indispensable tool in the modern business landscape, offering a myriad of opportunities for organizations to optimize operations, enhance customer experiences, and accelerate growth. However, as AI becomes more pervasive, effective governance and responsible use of this transformative technology become crucial. In this article, we delve into the critical factors surrounding AI governance and the steps businesses can take to navigate this complex landscape.
Defining the Audience and Setting Goals:
Before implementing AI, it is imperative for businesses to define their target audience and align AI initiatives with specific goals. By doing so, organizations can effectively tailor AI solutions to meet customer needs and achieve desired outcomes.
Embracing Continuous Research and Adaptability:
The rapid evolution of AI necessitates a commitment to continuous research and staying abreast of the latest advancements. By fostering a culture of innovation and adaptability, businesses can leverage new opportunities and ensure that their AI strategies remain relevant and effective.
Inclusive Decision-Making:
When it comes to AI, decision-making should involve a diverse range of stakeholders. Engaging employees at all levels, from frontline workers to senior management, can provide invaluable perspectives and insights. This inclusive approach fosters acceptance, reduces resistance, and encourages a collective sense of responsibility.
Optimizing Resources and Addressing Job Fears:
While AI demonstrates immense potential for increased productivity and cost-efficiency, it is natural for employees to worry about job displacement. To address these concerns, organizations should focus on effectively reallocating resources and providing opportunities for upskilling and reskilling. By leveraging AI to streamline processes, businesses can create new roles and emphasize the value of human expertise.
Conclusion:
AI governance is no longer a distant concern; it is a necessity for businesses to harness the true potential of this groundbreaking technology. By defining objectives, charting a roadmap, involving diverse stakeholders, and optimizing resources, organizations can implement AI in an accountable and ethically responsible manner. Through careful AI governance, businesses can strike the delicate balance between reaping the benefits of AI and safeguarding against potential risks, paving the way for a successful AI-driven future.
Topics Covered
Importance of defining audience and customers before introducing AI (Defining Audience and Customers)
Creating a roadmap for AI implementation, setting goals and objectives (Creating a Roadmap)
Emphasizing the need for continuous research in AI implementation (Continuous Research)
Involving employees at all levels for diverse perspectives on AI (Employee Involvement)
Potential benefits of using AI, including better revenue and optimized resources (Benefits of AI)
Addressing job replacement fears and resource readjustment (Job Replacement and Resource Adjustments)
Recommendations for companies new to AI (Recommendations for Newcomers)
- Research and assemble a team
- Consider AI principles and one applicable aspect
Acknowledging the benefits of AI for companies that have not considered it (Benefits for Non-AI Companies)
Not all companies using generative AI, seeking recommendations (Generative AI and Recommendations)
Importance of taking an AI class and John's agreement (Importance of AI Classes)
Educators' polarization on AI detection in education space (Educators' Polarization)
Use of ChatGPT by students in writing papers and issues with professors (Use of ChatGPT in Education)
Concerns about students becoming dependent on AI tools in education (Concerns in Education)
Importance of roadmaps and governance in AI implementation (Importance of Roadmaps and Governance)
Starting the process of mitigating risk in AI and the importance of education (Mitigating Risk in AI)
Balancing the benefits of AI with ethical frameworks and policies (Balancing Benefits and Ethical Frameworks)
Podcast Transcript
Jordan [00:00:17]:
How do we even govern AI? Can we? Is it just the Wild West, and we're all just living in this current reality? That's one of the things that we're gonna talk about today on Everyday AI. This is your daily livestream where we bring on expert guests so you can ask them questions, along with a podcast on Apple, Spotify, and everywhere else, and a free daily newsletter. So make sure you check that out at youreverydayai.com.
Daily AI news
So before we talk about how we can even govern what's going on in the world of AI, let's talk about what's going on in the world of AI, because there's a lot. Let's start at the top, because this actually has to do with governance a little. US senators had their first-ever classified Senate briefing on AI yesterday. They kind of hinted that there may be legislation in months, but this was a pretty big meeting: the Director of National Intelligence, the Secretary of Defense, and others coming together with US senators to hold the first official discussion on the governance of AI. My hot take, y'all: whatever the US Senate comes up with, I don't think it's gonna work. You know, if you've seen some members of Congress ask questions of tech leaders, it's not good. So we'll hope for the best, but we'll see what actually happens.
The next piece: an Indian CEO is facing huge criticism after he kind of went viral on Twitter, talking about how he laid off 90% of his support staff in favor of an AI support bot. Okay, what's the big deal? Well, turns out he was also selling that support bot. So he took a bunch of heat on that one.
Another news story worth talking about: it's not just Hollywood writers fearing AI. A story that we're gonna be sharing in the newsletter talks about how the Writers' Guild of Great Britain also fears being replaced; AI is a worry for them as well. And last but not least, I wouldn't say this one is fun; this is actually kind of scary. The VC billionaire Marc Andreessen laid out two scenarios for AI. Andreessen is, you know, a very well known person in the tech space, and he says AI will do one of two things: either eliminate the need for labor in a best-case scenario, or lead to Chinese world domination. So I hope there's an option C. You know? I don't know if I, or people at least here in the US, like either of those two options. But let's talk a little bit more about it, because we do need to figure out what to do with AI. How can we control it? How can we govern it? Should we? And for that, don't worry, you're not just gonna hear me rambling on. I have a guest today who is going to help answer some of those questions. So let's bring our guest on. We have joining us live, and I'm very excited about this, John Chiappetta, the Principal and CEO of HG Technical Consulting.
John [00:03:33]:
John, thank you so much for joining us. Jordan, thank you for having me. Looking forward to it.
Jordan [00:03:39]:
Alright. So before we dive into the specifics, John, give everyone a little bit of your background, you know, because it's not just in the consulting space; you're also out teaching AI as well.
Intro to John Chiappetta
John [00:03:59]:
Yep. Currently, I'm working at Harper College in Palatine as an adjunct instructor, working with the students there and teaching them an introductory level of AI. Along with other teachers there, we've put together a curriculum that I think is gonna be probably the best in the Northwest suburbs, to be honest. I haven't heard of it from many of the other schools, and they're really wanting to get this type of education out there. The students that I've taught already get excited after the class. They wanna have more. Their hunger just starts to build up, and then it's like, oh look, the class is over, and they're like, oh, we want more classes. So, yeah, I'm doing that. And in fact, we're having a meeting in a couple of weeks to kind of firm up what our fall semester is gonna look like. So Harper's doing a lot, and I'm glad I'm part of it. Yeah, that's exciting. But also, you know, your background. So, just so we can set the context a little bit, you've worked with big companies, right? Yeah. The first 10 years, I was with the Quick Rose company, then Demo Craft for the next 5 years after that. Then I went off to do my own consulting, and in the consulting I worked in the food business and at companies like Allstate, Ameritech at the time, another company called Wheels. Blue Cross Blue Shield is another one that I worked with, also Medline, Hyatt Hotels, and then the American Board of Medical Specialties, which kinda governs the doctors that have specialties like pediatrics, cardiology, things like that. And all of that was basically helping them get along with this technology; whatever challenges they had, we tried to resolve. It's been a lot of fun. Learned a lot.
Jordan [00:05:41]:
Yeah. Wow. I just wanted to set the stage, because we do have people tuning in from all across the US, but also all over the world. What I wanted to say is: John knows his stuff. Right? He not only has a background working with large and medium-sized organizations, but he's also teaching AI as well. So with that setup, John, we have the ball on the tee, so now you can smash this one out of the park. When it comes to AI, it seems like the Wild West. Right? At least here in the US, there are no official laws, rules, or regulations. So how should companies start to govern AI internally? Should they?
John [00:06:24]:
Oh, absolutely. I mean, to me, the thing that companies today want to do the most is mitigate risk. They want to ensure that there are policies and procedures in place, and to ensure that their private information, their intellectual property, doesn't get distributed and copied. Okay? And so with that, you've got this governance approach that we should be taking. That actually came up in one of the classes, because students brought up: well, AI is doing good here, but AI can also do bad things too. How do we balance that? With the right, proper governance and policies in place, Jordan, that can happen. And there are characteristics of that governance as well. You need to be transparent. That's number 1. Right now, you need to explain what your systems are doing when you're working with AI. You don't wanna be like, sorry, you can't see that; you need to understand it and release it. Being transparent is number 1. You need to be held accountable. And that to me is, like, clear lines of understanding and identifying responsibility for what you're doing. And this last one, there's more, but ethical frameworks need to be put in place. So what does that mean? It really prioritizes the rights, the fairness, and the social well-being of what this AI product is doing. Focus on those top 3, and you're on your way to forming your own governance.
Jordan [00:07:54]:
Yeah. You made it sound so easy, John. You know? Hey, it's as easy as this. Right? So I wanna unpack that. But before we do, just as a reminder, if you are tuning in live, please drop any questions that you have about governance in AI for John. We do have a couple of people joining us, so I just wanted to give them a shout-out. Scott, I'll definitely send you over the PPP information. Rasafa, thank you for joining us. So if you do have any questions, please toss them up. That's what we're here for. So, John, we went over a lot of things in that first response: mitigating risk, intellectual property, balancing the good and bad, ethical frameworks. Right? So much to unpack there. But maybe let's start at the top, because I think one thing that is causing a lot of headlines is when people or companies aren't mitigating risk, or they don't have any safeguards in place. And then you see, oh, you know, this person uploaded a sensitive document or sensitive information into ChatGPT, or this person submitted false, you know, hallucinations to the court. So how do you even start that process? Because mitigating risk is huge; as much good as there is, there can be just as much bad. So how do companies start that? I mean, the first and foremost thing is education. I think, Jordan, the companies really need to understand what's going on, if they want to pivot
John [00:09:21]:
in terms of what they're doing with their company, and understand what's coming at them. AI is not only coming at them; we're gonna have to deal with it as well. But the idea is that education and understanding are the best things to start with.
Jordan [00:09:43]:
Yeah. So speaking of understanding, let's pivot, but I do wanna come back. Let's talk about kind of what you're doing right now at Harper College. Okay. You know, there aren't a lot, I don't think, anyways, and drop a comment if I'm wrong, of colleges and universities that are fully embracing AI. So let's talk about that quick, and then we'll come back to the business side. So what does that look like right now, you know, kind of teaching the AI class at Harper?
Examples of AI in the classroom
John [00:10:18]:
It's exciting. It really is. Harper is really putting in the time. They're getting the funding to get these classes in place, they're educating the educators as well about this whole subject, and they're taking pride in making sure that it's communicated out to the corporate space too. Students are actually filling up these classes. I've been teaching there for about a good 10 years now, and I think this is something everyone is interested in, so classes fill up really quick. And they're excited when they get in, because they wanna know: what are some examples? What are some things that AI is doing? And in the class, we show them examples live. We actually get them to be interactive with some of the things that are presented. So it's, like, learning what machine learning is like. Can we do that? Can we create a process where we can train a model? It's kinda cool when they say, wow, this is really something, you know? And they did it on their own. It's not that I told them what to do; they just followed the instructions, and they could see it on their own. So training the model, I think, is one of the biggest things. You use a certain tool and show it a pen or a pencil, and once you train the model, if you show the model the pen or the pencil, it'll tell you which one it is. They get a kick out of that. And then they start unleashing ideas on their own. So it's really kinda cool.
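The pen-versus-pencil demo John describes is a basic supervised classifier: you show the model labeled examples, it learns a summary of each class, and it then labels new objects. He doesn't name the classroom tool, so as a rough illustration only, here is a minimal nearest-centroid classifier in plain Python; the two features (length in centimeters, and whether the object has ink) are made up for the sketch:

```python
# Toy version of the "train a model" exercise: each object is reduced to
# two hand-picked features, and "training" just averages the feature
# vectors per label. Prediction picks the closest label centroid.

def train(examples):
    """examples: list of (features, label). Returns label -> centroid."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Return the label whose centroid is closest (squared Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

# Training data: (length_cm, has_ink) for a few pens and pencils.
training = [
    ([14.0, 1.0], "pen"),
    ([13.5, 1.0], "pen"),
    ([17.5, 0.0], "pencil"),
    ([18.0, 0.0], "pencil"),
]
model = train(training)
print(predict(model, [14.2, 1.0]))   # a new pen-like object -> "pen"
print(predict(model, [17.8, 0.0]))   # a new pencil-like object -> "pencil"
```

Real classroom tools train on images rather than hand-picked numbers, but the loop is the same: show labeled examples, fit, then classify something new.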
Jordan [00:11:50]:
Yeah. So speaking of that, you know, you talked about the outputs of this course in AI at Harper. And maybe students are quicker to pick up, learn, and make adjustments than businesses. Right? Because a lot of times, especially medium and large businesses, it can take them a while to self-govern. They can't just be like a student, learn something new, and put it into practice, because it's not that easy. So going back to what we started talking about, ethical frameworks and businesses: if someone out there listening is maybe a small to medium-sized business owner, maybe a director at a larger business, what can they do to address that ethical piece in using AI in their business?
John [00:12:41]:
Yeah. I mean, one of the things they need to be aware of is that AI also brings a certain level of bias. Okay? And that's something they need to be aware of: how do you handle that bias within that tool? We talk about that in the class we teach at Harper. The other thing is also the marginalized communities. You know, how are we impacting those areas for the better and not for the worse? That awareness level is the key: making sure that companies are putting some forward thinking toward that, and investing in making sure they can have a framework set up that's gonna not only protect their assets, but also protect whatever products they're gonna be producing in the future. And security is huge with AI. The ability that you could actually tell AI to go do something, and it'll figure it out in, like, nanoseconds, is just driving all the security chiefs at companies crazy. Yeah. Speaking of that, a great question about security
Jordan [00:13:46]:
from Bronwyn. So Bronwyn is asking: would it be a good time to learn cybersecurity? And not just that, but how should companies be looking at cybersecurity as well in this new age of AI, deepfakes, all of this?
Cybersecurity is crucial for risk assessments
John [00:14:03]:
Talk a little bit about that. Yeah. I think that's a key point, because one of the things we need to do on the governance side is risk assessments. And without cybersecurity, we really won't be able to assess what type of threats we have, learning from the history that we've collected. AI is great at collecting data, large amounts of data that we can then take a look at, and the information that we're gathering will be identified quickly. The ability to detect a threat or to remediate a threat can then be done in that instant as well. We're gonna have, again, that balance of good and bad out there, and we need to be able to stay on top of that. Part of a governance program would be risk assessment, with cybersecurity as the major link supporting it. And I think Bronwyn's right in terms of learning cybersecurity. Not a bad idea at all.
Jordan [00:15:10]:
Yeah. I think there are plenty of other industries that are far worse to get into. With cybersecurity, there's gonna be a lot of security and growth in that field. So, you know, John, we did open the show talking a little bit about, at least here in the US, what legislators are trying to do. I think a lot of companies are kind of playing the wait-and-see game, saying, okay, is the US government gonna do something? Is my state or local government? Are there gonna be any other laws and rules and regulations? Do you think business owners, directors, managers should be waiting on some sort of authority to say, hey, here's how to use it and here's how not to? Or should we at least be developing best practices in our own businesses?
John [00:16:00]:
I think the latter, Jordan, honestly, because of the speed that technology moves. If you wait, you're gonna be left in the dust, in my opinion. Be proactive. Okay? Understand what's happening. Get a type of assessment workshop put together and get that going. If you wait, you're not gonna learn as much as if you are actively involved.
Jordan [00:16:28]:
Yeah. No, that's a great point. And I agree with you, John, because I think so many companies are just waiting. And, you know, it doesn't take long to at least get a baseline. So kinda like what Yaddie is saying right here in this comment: she feels it's good to start with the AI principles, and even if they're not fully baked, you can build on them and work on them. So another question, actually, from Yaddie. We're doing a little bit of a turn here, John. She's asking: in the education space, why are educators polarized on AI detection in general? You know, I don't know if that's something that you deal with a lot, John, but obviously students are using ChatGPT to write papers. It might not be as problematic in the classes that you teach, but have you seen this at all in education? You know, professors, teachers really pushing off AI, even with these content detectors?
Professors want students to learn, not rely on GPT
John [00:17:27]:
Yeah. I've got a lot of friends in the academia space, some of them tenured professors at local colleges here as well. And what they're really looking at is: how do we get the students to learn, period? If a student goes into ChatGPT and says, hey, write me a paper on this, great, they've done it, they've submitted it. The chances are the professors should know the student well enough to see whether or not that's the language they're used to seeing; all of a sudden, you know, you've got structured sentences or other things they may not have used in past papers. Okay? And on top of that, there are tools to check ChatGPT; they can go see what percentage was ChatGPT-generated or not. So the instructors and the professors, the educators, are looking to see: are the students learning? That's their biggest reward. And if students are fostering a dependency on a tool, you know, I could see where they're going back and forth, but they make sure the students know: okay, you wanna use ChatGPT? We can find out if you did. Did you really wanna take that chance? Yeah.
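The "language they're used to seeing" check John describes is essentially stylometry: compare a new submission's surface statistics against a student's earlier writing and flag large deviations. AI-text detection is notoriously unreliable, and nothing like this proves anything, but as a rough, hypothetical illustration of the idea (the two features and the 50% tolerance are invented for this sketch, not taken from any real detector):

```python
# Crude stylometric comparison: does a new text deviate sharply from a
# student's historical averages on simple surface measures?
import re

def style_stats(text):
    """Return (average words per sentence, type-token ratio)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    avg_len = len(words) / max(len(sentences), 1)
    ttr = len(set(words)) / max(len(words), 1)   # vocabulary diversity
    return avg_len, ttr

def looks_unusual(past_texts, new_text, tolerance=0.5):
    """Flag if either measure deviates more than `tolerance` (relative)
    from the student's historical average."""
    past = [style_stats(t) for t in past_texts]
    base_len = sum(p[0] for p in past) / len(past)
    base_ttr = sum(p[1] for p in past) / len(past)
    new_len, new_ttr = style_stats(new_text)
    return (abs(new_len - base_len) / base_len > tolerance
            or abs(new_ttr - base_ttr) / base_ttr > tolerance)

# Short, simple past papers versus a suddenly elaborate submission.
past_papers = [
    "I like dogs. Dogs are fun. We play.",
    "Cats sleep. They purr. I watch.",
]
new_submission = ("Furthermore, the multifaceted implications of this "
                  "phenomenon necessitate a comprehensive analysis "
                  "encompassing numerous interdependent variables and "
                  "considerations.")
print(looks_unusual(past_papers, new_submission))   # -> True
```

Commercial detectors use much richer signals (and still produce false positives), which is part of why educators are polarized on relying on them.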
Jordan [00:18:37]:
And I have hot takes on this, but I'll save them, because if you are listening to this and you wanna know more, we actually did a full episode on this yesterday with Kelsey Behringer, the CEO of Packback. So if you wanna know more about that, make sure to check that out. So, John, I have another question for you, and it looks like we might have 1 or 2 more from the audience here as we wrap. There's so much, right? I started the show asking: is working with AI right now kind of the Wild West? And it can be. You know, it can take a while to get those best practices, to get steam, to get momentum. But where do companies start? Where would you recommend? What is that first step, even when we talk about, okay, get guidelines? Where? How would you recommend that, when it comes to governance? What is that first step that most companies need to take?
Defining audiences and conducting research to govern AI
John [00:19:34]:
I think, first of all, define who your audience is, who your customers are. That's number 1. Number 2, go in and say: what product do we want to introduce AI to? Alright? So put together kind of a roadmap of what you wanna be able to do. But the biggest point of this is: do your research. Okay? With AI, there's always something out there to take a look at. Get the right group together, from the senior level all the way down to what I would consider the soldiers, the boots-on-the-ground type of employees, that can be involved. Diversity between those levels is gonna give you the best answer on how a company should approach AI, how companies should use AI to foster, well, better revenue, better margins, and potentially take a look at the resources that they have as well. And I think what's not being talked about, Jordan, is how we readjust the resources we gain by having this AI. There's a lot of fear out there that AI is gonna replace people. If you look at it that way, it'll happen. But if you look at it the other way, how do we readjust our responsibilities as employees of the company, that's a key factor in how they approach it too. So how do we get started? Do your research, get a team together,
Jordan [00:20:54]:
focus on one aspect of your company that AI can help with, and see where it goes from there. Yeah. What about companies that haven't even thought about AI? Right? I think even here on the show, we always talk and assume that most companies are using some sort of AI, even if we're not going down the route that you're going through with your students: training models, machine learning, deep learning, all that. We assume all companies are using some sort of generative AI, but that's not always the case. Right? Some companies don't even fully understand it or know it. For those companies, John, what would you recommend? Is it better to talk about governance before you even dip your toe into the usage? Or do companies first need to see those applications, like what you said, identify the product? Do companies need to go through those steps first, or just put some sort of baseline governance in place?
Roadmap to governance of AI
John [00:21:55]:
I mean, to me, I'd say go take a class. Have a workshop at your company. Okay? Honestly, because to me, the more they can see and visualize what it can do, the more it will spark ideas in their employees and in their management, hopefully, making sure that they can say: well, do we wanna pursue this or not? Are we ready to pursue this? Okay? Because it's gonna take that type of organization to realize what's at stake. Now you've taken the workshop. You've got some ideas, and you're just trying to understand it. Then, when you wanna formulate that plan to go forward and make that roadmap happen for real, let's talk about governance. So governance isn't gonna drive whether a company does AI or not. Governance is going to protect a company that's made a decision that they wanna go forward with AI.
Jordan [00:22:43]:
Yeah. That's such great advice, you know, talking about the roadmap and the governance, which I think are two fundamentally important steps for any company to take. So, John, we went all over the place today, from risk mitigation to students learning machine learning to ethical frameworks. We talked about a little of everything. I can't thank you enough for joining the show and for letting all of our listeners know some of the basics. So thank you for joining us and imparting your wisdom. And thanks for having me, Jordan. Looking forward to a lot more. Thank you so much. Alright. So as a reminder, if that was too much to keep up with, don't worry; that was a lot. We are gonna be breaking all of this down in our free daily newsletter, so make sure you go to youreverydayai.com and sign up for that. Reply back and let us know what you liked, what you didn't, what you learned. And also, thank you for tuning in. We're going on 50-plus shows, and I'm excited to see some of the other guests we have coming up. So with that, thank you for joining us, and we hope to see you back tomorrow and every day with Everyday AI. Thanks.