AI in Education: Teaching, Hiring Juniors, and Human Judgment -- with Shahid Khalil

If you're tired of arguing with strangers on the internet, try talking with one of them in real life.

Welcome to Back in America, the podcast.

Thank you so much for being here with us today on Back in America. Tell me, is our education ready for AI? It is in certain areas, Dan, where people are understanding that this is not just a tool for us to use as part of education; it is an educator, and it is almost a co-student. So is education ready for AI if they use it as part of their education itself? The teachers have to be using AI to teach better at a hyper-personalized level for languages, algorithms, et cetera. That's one area. Then there's the other area of, is the school ready to treat and teach students to use AI as a colleague, as a co-equal coder with them? There the answer is no. We don't quite know how to teach people to use AI as a companion.

What you'll see is people using AI for minimal kinds of stuff, but in the real world, when we are hiring in the business world today, we don't hire one guy who knows 50 things. We hire 50 people who know 500 things, and then we have to coordinate their work. We use AI to do that coordination, but education currently, of course, is well behind because it doesn't know how to do that any better than the business world does. All right. We're going to come back to that. Let me try to introduce you. When I did some research, I learned that you were an award-winning fractional CTO, and you're going to tell us what it is, and a CISO. You've got a hands-on approach to technology. You've got over 45 years of engineering experience. You've built secure

compliance systems for complex regulatory industries, from medical devices to digital health for the federal government, among others. You blend deep technical skills with a real grasp of how the market and regulation shape innovation. Yeah. Number one, as a fractional CTO, my background is as a computer scientist. I was trained to write code. But as a fractional CTO, what I've spent the last couple of decades on is coming in, almost like parachuting in as a special-ops guy, to assist people who may already have a CTO as well, but they need a coach for that CTO, or they might need a CTO because the CTO is in transition. So the chief technology officer in product-oriented companies is literally the chief technical person there who makes

decisions, et cetera. Then I've also done work as a chief information security officer, or a CISO, which of course sounds exactly like what it is: software needs to be secured, infrastructure needs to be secured. Now, I like to say that I'm hands-on because a lot of CTOs at my skill or experience level (I'm at 35-plus years now since I got my degree, and I've worked on lots of different projects) no longer code. The tools are coming out fast and furiously. And if we're going to instruct younger people, whether it's from an educational perspective or a training perspective at work, you have to know something about what you're training them on in order to do that training.

And so that's why I like to say that I'm hands-on. Tell me, what put you toward this kind of high-stakes engineering, working for highly regulated companies? Is there an attraction for you to these kinds of industries? Yeah, the first attraction for guys like me is impact, right? I like to say I've worked on medical devices that I know save lives. For me, I'm an entrepreneur, I work long, long hours and long days, and at the end of a long day, I don't want to say I did a fantastic job on a new cartoon creation app. For me, I like to say that when I finished the job, it helped somebody save a life, get a job done, et cetera. So it's mostly about impact, and doing so in a heavily regulated industry allows me to use my special skillset,

which is I'm one of those few engineers that can speak to humans as well as machines. And that's not common. And so in order to do all of this really highly technical work in regulated environments, you have to understand the law. So I'm part lawyer. I have to read the law, understand the regulations, know what compliance means, et cetera. But there are a lot of people who can do one or the other, not both. The best amongst us now, at this level of skill, have to understand a bigger world. And we have to understand that the code that we generate from AI has to live in the real world, where you can accidentally spend more money. You can accidentally kill somebody. You can put this code into a chip

or a microchip that might be in an airplane that could fall out of the sky. It's really important. That's why I get up in the morning: I'm close to retirement age, but I don't see myself retiring anytime soon. So you spoke about interpreting the code that comes from a machine and keeping your humanity. Take me back to a time when you were building something and you faced a bug, the system crashed, and how it really impacted how you see human judgment versus machine logic. Yeah, that's such a great question because it takes me back to the early 2000s, late 90s. I worked on a Class III, very large medical device in a blood bank. With blood banks, obviously, one person gives blood, and you could be the one receiving it

because you got into a car accident and you're at the hospital and you need the blood. So it's a super critical thing that blood be taken properly, be tracked properly, be tested properly, et cetera. So I worked on this blood banking software, and there were multiple times over the years where we saw bugs but didn't understand what would happen if we didn't fix them. A lot of times, when you're working on a business app and you've got a bug, the wrong color shows up on the screen. Here, when a bug occurs, somebody's not going to get their blood, or they might get the wrong blood type. You're supposed to get O negative, but you got AB positive. That's death. That's not a small thing that happens there. So having worked in these environments where I understand that the bug,

when it could impact a life, that's where your humanity comes in. That's where you spend the extra 40 hours that weekend trying to get through and making sure that this code can get out the next Monday. And that's really where, because I've lived through that, I've worked on probably a dozen or so medical devices, and I know the code that I wrote had bugs that probably accidentally harmed people. And that stays with you. I can actually remember two or three cases, which I can't talk about publicly, where yes, it did. And we know that people were harmed. Medical devices do harm all the time, just like planes do fall out of the sky when things go wrong. But that's where your humanity comes in: you see that every decision that you make,

if AI is going to generate the code for you and a human doesn't watch, the AI doesn't care whether it's being used to save a child's life or draw a cartoon figure. It's just generating code. But you and I know that one requires a high amount of reliability, a high amount of safety checks, et cetera, versus the other. And that's where we have to come in and recognize that it doesn't matter how much AI comes in. If the target of the software that we are building is other human beings, you need us in the middle to verify that the AI doesn't harm the other human. So you talk about AI. When did you realize that AI was not just another tool, but something that would change the way we learn, the way we work, the way we code? When was that?

Yeah, some of us worked with AI very early on. So as we know, late 2022 is when ChatGPT was released. But that version of ChatGPT was available in open communities a few years earlier. When we used to use that before ChatGPT came out, you had to wire a bunch of stuff. You had to write a lot of code, you had to write a lot of prompts and put those in and then wire it all up. And so that never gave us a sense of the impact that this could have. But when ChatGPT came out, with that exact software behind a direct interface, like a human to human, like you and I are talking right now, the very first time that I realized it was probably the second or third month after ChatGPT was out. I'd already been using it for a while. I signed up for it literally the

weekend it came out. So I'd been using it for code and other things for a while, so I knew what it could do. But the one really cool thing that I ended up doing is, so I have family members that have chronic conditions like diabetes and cancer, et cetera. When I started asking it, giving it a little bit of context, for family members who needed medical advice, that's when the light bulb went off: I was getting real information that a real doctor would give me, that I could make actionable. That's how I knew that it's now a colleague. That's not a tool that I use. It's just like having friends who are doctors. Before ChatGPT, I had oncologist friends, I had cardiologist friends. Whenever I had a problem, I called my

cardiology colleague, my oncology colleague, and they would help out. They would brainstorm, they would discuss and say, did you try this paper? Did you read that paper? In essence now, since the first time I realized this, 90% of that I do with the AI first, and then I bring in the human colleague after that, and things get even better, because the humans can now take obvious stuff and make it unobvious, or the humans can say, oh, wow, I never thought about that. You know what? I just saw a patient two days ago where we did do this. We didn't even think about it. Let me go back and do it with that patient as well. So it became crystal clear that the AI is your colleague, it's your friend, it's a worker. How did you feel at the time when that

light bulb came on? At first I was like, no way, this can't be. It has to be a fluke. Because I know enough about the technology of generative AI to be able to say, I know it's going to generate English because that's what it's been trained on. But when it put together the words in the order that it did, that is not just what it was trained to do, because there's more reinforcement learning, more training that happens on top of that, which means there's a bank of humans at OpenAI, at Anthropic, et cetera, as we know, tens of thousands, that are reading and rewriting and updating these responses. But what it made me feel is, does this obviate me? Am I no longer necessary? And then of course, in a few days you find out,

of course not, because it's incomplete. It doesn't know how to prompt itself. It doesn't know how to execute on things. It doesn't know how to be empathetic and talk to one patient versus another if you don't set the prompts properly. So that's when I knew, oh, I'm not going anywhere. There's plenty of opportunity here. I want to come back to education. So we see today that a lot of companies are letting AI do the groundwork of debugging, boilerplate, all that kind of stuff. What does it mean for a young student, freshly out of school, looking for an entry-level job? Yeah. It's going to be tougher and tougher for those entry-level positions, but not in the obvious way. The obvious way is that companies are not looking for the junior guy; they want the mid or

senior who can help operate AI. That's the obvious part. The unobvious part is that, in the minds of the ones who are hiring this way, the juniors don't know enough to drive the AI faster in their environment. Said another way, if you're really good at what you do, the AI makes you a hundred times better. But if you're bad at what you do, and of course all younger people are going to be bad at coding because they don't have enough expertise yet and you've got to build that expertise, if you're bad at coding, you're a hundred times worse. It doesn't mean that if you're bad at coding, you get good because you have AI. The first thing you have to realize as a young person is you don't know shit. You don't

know what you don't know. And so you pretend that you do know, and you put that prompt in, and then you get stuff out. Code is going to come out the other end. You just don't know whether it's good or bad. So if I'm a leader who has the opportunity to hire five seniors and five mid-levels versus five entry-levels, just for my own sanity, so that I don't have to check the work of those younger people, I need to hire the mid or upper end. That's the normal case. Even without AI, that was the case. If I had a choice, I would go get a senior. I often did not have a choice. I needed 10 people, and I knew I could only get two seniors. I could get three mid-levels and five juniors. So this is what you, if you're being educated today

and coming out of college, need to recognize. In the old days, I might have needed 10 people, five of them juniors. I had to take you because I had no other choice. Now, if I have two seniors and one mid, I may not need the rest of them. Mentally speaking, I may decide that and say, I don't need the rest of them. It's not quite true, but that's what I'm going to think. And so what do I do? I just let my seniors continue and my mid-levels continue. Now, what I don't realize is that if I don't build my juniors now, how are they going to become that mid? When my senior leaves, the mid is going to go up to become a senior, but I've got this whole area which is going to be completely empty. If I'm an idiot as a senior executive, I will stop hiring junior people. Only if I'm an idiot.

If I'm not an idiot, I have to get those juniors to be able to shadow the seniors and the mids, just like in the old days. And so this is what the education establishment, Comp Sci education, has to be teaching the juniors and the fresh guys coming out of college: to be so useful with AI that it would be silly not to use them in combination with the seniors and the mids. And senior executives and senior people like me are making a mistake right now. We're going to pay for it in two years or five years or 10 years, because we're not building that community of juniors able to become a mid, which then becomes a senior.

Is that what you call the adapter? Someone who guides AI? So you could think of them as context engineers. Really, you don't need to guide the AI per se. In general, what you're saying is correct, but guiding the AI really means telling it the background of what it needs to know so that it can generate what you're asking it to generate. If you go to ChatGPT in a fresh account today and you say, write me a function that can sort a linked list, and that's all you say, it will most likely pick Python because it's a popular language. And it will do so in a way that is fairly advanced, because it'll use traditional linked list algorithms, and you'll get a function. Now, you were supposed to tell it that this is to

be in assembler or Pascal or C or Ada or whatever. You didn't give it that background, which is what was lacking, right? You didn't give it the context. Now, in the modern era, I would rather get a junior guy who understands context engineering, so that I could give him or her the extended language-specific things that I need, than a mid-level who has the skills but cannot prompt an AI. As an example, say you are a designer, HTML, CSS, et cetera, and you're fresh out of college. You don't know all the advanced ways of doing that. But boy, do you know how to use Lovable. Lovable is a popular design tool, as we all know. If you know Lovable and you don't have super high-level skills in HTML or CSS, you're still more valuable as a junior guy who knows Lovable, who can take my words or my

prompts, put them into the tooling, and get something out that I can then give downstream to someone else, than a senior guy who has been writing HTML and CSS for years but doesn't even know Lovable exists. Now you have to ask yourself, which one do you want? Do you want the junior guy who knows Lovable exists? Not only that, but they can drive Lovable better than everybody else. Or do you want the senior guy who doesn't know Lovable exists? I'm taking the junior one any day of the week and twice on Sunday, for sure.
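The linked-list prompt mentioned a moment ago is easy to make concrete. As a sketch of my own (not code from the conversation), an unconstrained "write me a function that can sort a linked list" typically yields something like the following Python: a classic merge sort over a singly linked list, with the language, the algorithm, and the node shape all chosen by the model because the prompt never specified them.

```python
# A sketch of what an unconstrained "sort a linked list" prompt tends to
# produce: Python, with a textbook merge sort over a singly linked list.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def sort_linked_list(head):
    """Sort a singly linked list with merge sort; returns the new head."""
    if head is None or head.next is None:
        return head
    # Split the list in two using slow/fast pointers.
    slow, fast = head, head.next
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
    mid, slow.next = slow.next, None
    left, right = sort_linked_list(head), sort_linked_list(mid)
    # Merge the two sorted halves behind a dummy head.
    dummy = tail = Node(None)
    while left and right:
        if left.value <= right.value:
            tail.next, left = left, left.next
        else:
            tail.next, right = right, right.next
        tail = tail.next
    tail.next = left or right
    return dummy.next

def from_list(values):
    """Build a linked list from a Python list (test helper)."""
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

def to_list(head):
    """Flatten a linked list back into a Python list (test helper)."""
    out = []
    while head:
        out.append(head.value)
        head = head.next
    return out
```

If the surrounding system is actually written in C or Ada, this perfectly reasonable default is useless; only the context you supply steers the model away from it.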

And that brings me to a natural question about the skills, right? So being curious, being innovative is what we talked about using those new tools. But what are the human skills? You talked about empathy earlier on, and I believe that properly trained AI can be extremely empathetic. So what are the skills you are looking for as a CTO when you look at those fresh graduates on the market? Yeah, three main things. Thing one is, when you deal with AI, how much do you use AI to help you with AI? Now this sounds very weird and very meta, but the best prompters in the world, like myself, when I work with AI, the very first question I ask when I teach it context or need a task done is, what do you need from me in order to do your job? Give me the prompts that you would

have me give to you, because when you give me the prompt, I understand it more. So look, I have 35 years of programming experience. You could give me almost any problem, and with a little bit of help from books (obviously, we're all old and we're going to have to go back to reference books and things like that), I would be able to write your function, write your code, et cetera. But what I wouldn't be able to do, if I hadn't already done six, seven years of this work, is know how to organize my thoughts and provide the context to an AI. So that's thing one. If you can understand this, tell your hiring manager or the guy interviewing you and say, look, I don't know everything, but I know what Marissa Mayer said at Google a long time ago: it's not what you

know, it's what you can find out. Now it is not what you know, nor what you can find out, but what you can prompt through context. So if you extend that, thing one is: can you talk to the computer better than the next guy that they're trying to hire? If the answer is yes, prove it. Write the article, show your prompts, explain how you do things, go onto GitHub, put up project after project, get everything that you're doing with AI public, because that's how you're going to be able to prove to me that you know what to do. That's thing one. Thing two then is, do you have the basic skills? Suppose I said, give me a circular linked list versus a unidirectional linked list, give me a directed graph versus an undirected graph. If you're going to say, oh,

I know how to tell AI what to do, and you don't know the difference between a directed graph and an undirected graph, it doesn't really matter how well you communicate in English, because you don't have that skill set. Kids coming out today not only have to have the basics, but they've got to be stronger in those basics than ever, because the AI is going to generate code in the way that you ask it to. So if you tell it to create type-safe code, it creates type-safe code. If you say, give me unsafe code, it gives you unsafe code. So you have to know what to ask for. And so the first problem is, can you communicate well with a machine, which is not the same as communicating with humans? It is different, and we know that.
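The directed-versus-undirected distinction used here as a litmus test fits in a few lines. This is an illustrative sketch of my own, not something from the conversation: an adjacency-list builder where the entire difference between the two graph types is whether the reverse edge gets added.

```python
from collections import defaultdict

def build_graph(edges, directed=True):
    """Build an adjacency-list graph from (u, v) pairs.

    In a directed graph, the edge (u, v) only goes u -> v; in an
    undirected graph, the same pair also adds the reverse edge v -> u.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        if not directed:  # the one-line difference the question probes
            adj[v].add(u)
    return adj

edges = [("a", "b"), ("b", "c")]
digraph = build_graph(edges, directed=True)
graph = build_graph(edges, directed=False)
# "b" is reachable from "a" in both graphs, but "a" is reachable
# from "b" only in the undirected one.
```

A candidate who can prompt fluently but cannot explain why the reverse edge matters is exactly the gap the second skill is probing.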

But two, if you say you know how to communicate with the machine but you don't have the basic skills in computer science, understanding the algorithms and when to use one versus another, you're not very useful. The first skill is not very useful without the second. But the third thing is: say I'm really good at what I do. I'm not great at the algorithms and things like that, but I understand how to talk to humans to get requirements input and understand what their expectations are. And I will use that to coordinate and work with a mid-level engineer to fill in with AI, so that the two of us, a junior plus a mid, where the mid knows more about the computer science part but the junior knows how to talk to other humans, cover it together. If you can seal that gap in all three of those

things, or if you can give me two of those three (the first one is a must, right? but the other two I'm willing to negotiate on depending on how good you are with the tooling), you're valuable. Now, the world is hard for younger people. I know it's always been hard for young people. When I was young, it wasn't like the world was easy, but at least I was only competing with other human beings. Today, kids are not just competing with other human beings; they're competing with the AI colleagues and the mids and seniors who are five, 10, 15, 25 times more productive than they were just five years ago. You said you had grandkids. As a parent of students, what would be your fears when it comes to AI, on the whole?

Yeah, my biggest fears are that people our age and those that are in the job market today, we don't understand the benefits of AI and how operating it as an additional colleague is more valuable than trying to replace the humans that we have. That's my biggest fear. Of course, it's understandable that this stuff is two or three years old. It's not like we've lived with it for decades. So the fear is when you think it can do too much, it is going to harm your environment. You won't build the human skills that are necessary to operate complex machinery, complex equipment, et cetera. If you say that, okay, I'm treating it as a colleague, that every new hire that I have deserves a proper twin of their AI, well-trained,

designed to work with the person that comes in, that's okay. Now, my hope is, so I have a couple of granddaughters, one is six, the other is two, turning three this November. The biggest hope that I have is that their teachers can do what I do. When they come over, I often play with them with AI. So instead of playing a traditional Donkey Kong game on their Nintendo Switch or whatever, they do that with me. Of course, they still watch TV and play those games too. But when they spend time with me, we are on AI. So each session might be, hey, let's invent a game together and let's play together with AI. Or when they're coloring, they're not coloring Disney characters. They're coloring pictures that we ask the AI to create: coloring books,

stories, et cetera, with their parents, with their grandparents, with their siblings, with their cousins. So the stuff that they do is still hyper-creative, but it's personalized to them. So the stories that the AI creates with me for the two-year-old are great for a two-year-old. With the six-year-old, it escalates the games that we play, because it knows the request and response. If you see the power of this, what I'm really excited about is, could teachers recognize, within the next months, years, however long it takes, that each student in their class can have a personalized teacher that can operate at their speed, at their level, at whatever pace they want to go, and that the teacher in the classroom, whether it's for a six-year-old or

an 18-year-old, doesn't really matter, that the teacher in the classroom is a proctor or an organizer or an orchestrator of all the other teachers that every single student has in their environment. Now I'm just imagining the level of intelligence that the students could have if each one of them had a bank of personalized teachers teaching them math, science, biology, et cetera, at their level, at the pace that they want to go, instead of being bored out of their mind and going into different places in their head, because they're really good at biology but their biology class is too far behind, or they're really good at math but the class is too far behind. You see that part? So that's really exciting. So we are getting almost at the end of this

interview, and we've got parents, we've got teachers listening to us today. If you have one thing you would like to leave parents and teachers with, what would it be? Yeah, just think about the AI as a personal third parent, for example. So parents, good parents, will spend time with their children, right? And they'll spend time doing homework, et cetera, et cetera. When you are busy as a parent, often you're not able to give the kid attention on the things they need, but the AI knows where they need attention, right? Because you can actually give it the prompts and say, my kid got this homework. I don't have time to do it with them. Could you help with it? It'll do it with them. Teachers, the one thing you can start doing is start to see which students

could be the ones that are your voice to AI. And like in my case, when I spent time with my six-year-old, she, as soon as she saw me chatting with the AI, she started giving it prompts. In her mind, it didn't even occur to her that this was an amazing piece of technology that she didn't have when she was born. It's like she started talking to it as if it was always with her in her entire life. And she's, oh, ask it to do this. Oh, this is a better idea. Let's do this. This is a six-year-old now. Imagine what an eight-year-old or a 14 or an 18-year-old could do if we trusted them. And so teachers today are scared of AI just like everybody else is in many different circumstances. But what I would recommend for teachers or parents to do is adopt it, but you sit there with AI. Do

not leave your kids with their AI on their own. In our case, when our kids were young, we only had computers in a central area where we could see them. Do that. Let them use AI, but be there with them. But don't be afraid of it. There are universities out there that are preventing kids from using AI to turn in their homework. It's the dumbest thing in the world. How are you going to be able to teach them that it's a companion or a colleague when you say you can't talk to this massively useful technology? So don't be afraid of it. Work with your kids, in class and outside, whenever you can, in an interactive way, so you can see: am I building their prompting skills, or am I accidentally making them dumber? Because they're not going to look at

anything on their own. They're not improving their prompting skills. They're not saying, I learned this in my environment. Let me put it in. Instead, they're just waiting for information to come out. That is a horrible situation that we have to try to prevent. Thank you so much. So this podcast is called Back in America. And in the podcast, we look at what makes America. We look at the culture, the identities and the values. And if you are familiar with the podcast, I always end with one question, which is what is America to you? To me, America is... So I've been here since I was four years old. I started elementary school here. The number one thing America is to me is opportunity, opportunity, opportunity. In this era,

so having seen lots of other countries, having been flying around for my entire life, what we see is that even today, lots of other countries have caught up in many other areas, but they haven't exactly caught up with the sheer amount of opportunity available to do anything that you want to do. And now with AI, it's easier in one way, because you could do whatever you wanted to do, but it's the tyranny of choice. You can literally now do whatever you want to do. And there's an old Scientific American article, if you don't know what I'm talking about: in 2004, a paper came out called "The Tyranny of Choice," about how having choice, like we do in America, is wonderful at the surface level. But when you

have a lot of choice, your life is actually much harder than when you don't have any choices at all. So read that paper, understand that it's about opportunity, and start selecting your opportunities wisely. Shahid, thank you so much for your time today. It was a pleasure to speak with you. Yeah, I had fun. Thanks. We'll do it again.
