Buccino Leadership Institute


In the Lead is a conversation with industry leaders on key trends and leadership challenges. In this issue, we spoke with Polly Mitchell-Guthrie, VP of Industry Outreach and Thought Leadership at Kinaxis. Here we discuss how humans and AI can work together to achieve more comprehensive outcomes.

Ruchin Kansal: What is AI, what is generative AI, and will it take away our jobs?

Polly Mitchell-Guthrie: Yes, these questions are certainly top of mind for many people these days. Yesterday, somebody was at my house installing blinds, and we were talking about AI. He is a golfer and said that the range of motion that a human has far exceeds that of any robotics we can build. I thought it was an interesting perspective.

In general, I like to start off by saying that AI is the science of how computers can mimic humans. A subset of AI is machine learning, which is most of what we see these days, despite the advent of generative AI. Machine learning means computers learn from data to predict something. If we know what has been done in the past, we can learn from the data and predict what will happen in the future.
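
[Editor's note: to make that concrete, here is a minimal sketch of machine learning in the sense Mitchell-Guthrie describes — fit a model on past data, then use it to predict the future. It uses scikit-learn, and the demand numbers are hypothetical, purely illustrative.]

    # Learn from past data, predict the future: a toy demand forecast.
    from sklearn.linear_model import LinearRegression

    weeks = [[1], [2], [3], [4]]     # feature: week number (hypothetical)
    demand = [100, 120, 135, 160]    # target: demand observed in the past

    model = LinearRegression()
    model.fit(weeks, demand)         # the "learning from data" step

    print(model.predict([[5]]))      # predict demand for week 5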

What’s so different about generative AI is captured in the word “generative.” What I would like to point out is that it is still grounded in predictive modeling.

ChatGPT, the most well-known example of generative AI, is a probabilistic sentence completion machine. What I mean by that is if you input a question, something like “Can you give me a song about the supply chain in the voice of Bruce Springsteen?,” it will come up with something. It’s predicting something that hasn’t been done before, but it’s going to do it probabilistically. It will come up with a song that could likely be in Bruce’s voice.
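
[Editor's note: one way to see "probabilistic sentence completion" is a toy bigram model. It counts which words follow which in a training text, then generates by sampling a probable next word, one word at a time. This is a drastic simplification of ChatGPT, and the corpus is invented, but the generate-by-predicting idea is the same.]

    # A toy probabilistic sentence completion machine (bigram model).
    import random
    from collections import defaultdict

    corpus = "the supply chain moves goods the supply chain moves fast".split()

    # Count which words follow which in the training text.
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    word, sentence = "the", ["the"]
    for _ in range(5):
        candidates = following.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # sample in proportion to observed frequency
        sentence.append(word)

    print(" ".join(sentence))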

And I say “likely” because one of the things people say about generative AI is that it lies. For example, when it comes up with the song in Bruce’s voice, even though there might not ever have been such a song, what AI is saying is that probabilistically, this is a song Bruce Springsteen could have written, even if he never wrote it. Those are called hallucinations, a technical term for when it is coming up with something that could have existed but never did.

But to us humans, it feels like a lie. Because how can you say that? How can AI say Ruchin wrote a paper that you never actually wrote? Or you spoke at a conference when you never actually did? That behavior can be deceiving.

And in terms of whether it will take away our jobs, I find people have polarized thinking. Some are excited about what AI could do and want to pursue those possibilities, and some are worried it’s going to take away jobs. What I like to say about AI is that it still lacks what I call the three C’s: context, collaboration and conscience.

AI cannot create context from nothing. For example, AI can see a fork, knife, plate and glass on your table, but it won’t know that this thing is called a table setting. In my world of supply chain, it can tell you that something happened, but may not have the context of why it happened, the organizational history, etc.

Second, it cannot collaborate. You and I could come up with great ideas that we might not be able to come up with individually, because of the richness of our complementary skills, thinking, experiences and diverse perspectives. AI cannot work in this way collaboratively.

Finally, AI does not have a conscience. It does not know right from wrong, and that's one of the things that I'm sure we'll talk about later. It just comes up with an answer based on probabilities. And that's where humans come in, to say this is an action we should take.

Given that it lacks those three C's, what generative AI is going to do is increase our productivity. It will allow us to do more things, different things, and automate the obvious things that are not really the best use of our time, giving us the chance to focus on the things that are higher value.

There are two examples I like to give on this. When ATMs were first installed, people worried that there would be mass layoffs of bank tellers, because people were used to going into the bank to get cash or make a deposit, and now an ATM could do that. In fact, what banks did was open more branches and hire more tellers, because tellers could now do things that they could not do before: cross-sell and up-sell.

The second example is radiology. They’ve been saying for years that radiologists would be going away as a profession because of predictive models. One of the things that AI is good at is image recognition. For a long time, it’s been able to recognize an image, classify it and predict whether it is a tumor or a fracture, etc., more accurately than humans.

But some economists did a study and found that there are 30 tasks in the radiology workflow, and only two of those have to do with the actual image classification. The other 28 involve steps that a machine cannot do. So, we may be automating the prediction tasks, but humans will remain in the workflow and continue to matter.

RK: That is an optimistic view on AI’s impact on jobs. With that in mind, how can leaders alleviate the concerns that are out there regarding AI and create secure and innovative cultures?

PMG: It’s a great question and certainly a leadership challenge. One of the things I’d like to point out is that so much of what leads to job dissatisfaction is the enormous amount of time spent on tedious tasks that are mind-numbing and neither creative nor inspiring. With AI, we can automate what is not worth a person’s time.

What I really want to do as a leader is encourage people to see that AI will help take the tedious stuff off their plates and give them a chance to focus on what is important. And then redevelop them and help them build the skills that they need in the contemporary world. That’s one way to think about what leaders need to do — to help employees see that AI is going to give them a chance to be more creative and operate at the top of their license.

RK: I was going to say top of the license, and you caught me right there. That is certainly a positive implication of AI, and I think we as leaders must do a better job of communicating that. What I also hear in conversations today is that historically, organizations were built on workflows, and that in the future, organizations will be built on data. Is that true?

PMG: I don’t imagine workflow is going to go away. However, our workflow is going to change. Another important point to make about AI is that it will complement and augment humans, and not replace them.

What that means is that we are going to automate workflows, flag exceptions and focus on solving those exceptions. Which ones we would flag will depend on the nature of the business we are in. We might care about a minor variation in one and allow for greater latitude in another. So, we set the parameters. We are in charge. That’s one thing.
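
[Editor's note: a rough sketch of what "automate the workflow, flag exceptions, set the parameters" can look like in code. The SKUs, numbers and 10 percent tolerance are all hypothetical; the point is that a human chooses the threshold.]

    # Automate the routine, escalate only the exceptions.
    TOLERANCE = 0.10  # the human-set parameter: flag deviations above 10%

    forecast = {"SKU-1": 100, "SKU-2": 250, "SKU-3": 80}
    actual   = {"SKU-1": 104, "SKU-2": 310, "SKU-3": 81}

    for sku, f in forecast.items():
        deviation = abs(actual[sku] - f) / f
        if deviation > TOLERANCE:
            print(f"{sku}: deviation {deviation:.0%} exceeds tolerance, human review")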

Research conducted by the University of Chicago and the University of Pennsylvania found that people are unwilling to trust AI or machines if they don't have a sense of control over them. In one test, if people thought a forecast had been done by a machine or a model, they didn't trust it. But if it was done by a human and had a mistake in it, they would be willing to say, "I can understand why this happened."

Further, if people were given the ability to set some parameters, they felt like they still had their hands on the wheel, even though they might have been on cruise control. They were more willing to trust the machine in that instance. So, we must give people a chance to see that they're still in control overall.

Secondly, human control is important for validation and monitoring. These models will make mistakes that don’t make sense. When COVID-19 happened, mathematical models had to go out the window, because suddenly history was no longer a good predictor of the future. We had to have humans who had history and experience, who had domain expertise, who knew what they were doing. We had to validate and monitor.

I would say that we must be prepared to change the workflow, be adaptable, build new skills, seek new opportunities, and I’d say, we should think big. I’d like to think about what we can do differently that we couldn’t do before because we were so busy spending time, say, updating lead times.

The new workflow is humans and models working together in a complementary fashion, where models can do a lot on their own, and humans monitor and validate.

RK: You brought up two interesting concepts. One, think differently and think big, and two, humans and robots come closer together as a team. How do leaders harness this and create a positive culture?

PMG: I’ll step back for a second and put generative AI in context. We think language is something that is uniquely human. When we see AI-generated output, what’s underneath it is a mathematical model that can produce language or even images and voice. When we see what it can do, we’re amazed and may feel like, oh my gosh, it’s taking over the world.

I'd liken it to a magic trick in the sense that a magician can pull a rabbit out of a hat, but only under carefully controlled circumstances with a specific sequence of activities. A magician is doing things in a particular order and misdirecting your attention. If you were to see that same magician out at a restaurant that night, go up to her table and say, "Can you now pull a rabbit out of the hat?," she couldn't do it without her tools, the sequence, the proper setting.

Therefore, what I say to leaders is that they need to do three important things in the age of generative AI: one, ask the right questions; two, direct the right attention; and three, exercise the right judgment.

In terms of asking the right question, what is important is to understand that the questions you ask make a difference. There's a skill in writing what's called the right prompt. The question you type into the chat box is a prompt. If you write a vague, amorphous prompt, you're going to get a vague, unhelpful answer. However, if you write a specific prompt and start iterating, you're going to get better answers.
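
[Editor's note: for illustration, here is that difference in practice. Both prompts are invented, and any chat model would do; iterating means taking the answer, tightening the ask and re-prompting.]

    # A vague prompt invites a vague answer...
    vague_prompt = "Tell me about supply chain risk."

    # ...while a specific prompt gives the model constraints to work with.
    specific_prompt = (
        "List the three biggest supply chain risks for a mid-size electronics "
        "manufacturer this year, one sentence each, with one mitigation for each."
    )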

Similarly, a leader’s role is to ask the right questions about what’s going on in the organization. Who needs to be involved? Who are we not thinking about? Or what are we not thinking about related to what is going on here? What have we done before? Where are we going? Those are all examples of asking good questions. A leader’s role also is not to assume you have all the answers and to engage a broader team that can collectively do more together.

Second is directing the right attention. As I said before, generative AI can misdirect your attention. It is easy to have our attention distracted. We all know, in this age of smartphones, FOMO [fear of missing out], etc., it can be hard to hold our attention, let alone hold it on the right things.

I often tell my teams to remember that these days, our challenge is to say no to the things that are not worth our time. The question is, what’s the best use of your time? How do you operate at the top of your license? It can be hard to say no, so directing the right attention is saying what is most worthy of our attention.

How do I lead an organization to have my people focused on the important, not just the urgent? What’s the most value that we can add? What is the most aligned with our strategy?

So, asking the right questions, directing the right attention, and then exercising judgment are all things that the machines cannot do. Machines cannot look for bias on their own. They cannot weigh the complex trade-offs that don't fit into a mathematical model: what's the right thing to do in a situation, with a strategy, with an opportunity, with a particular challenge. Judgment is a unique human capacity. And that's why leaders need to exercise that judgment.

RK: So, if leaders can ask the right questions, direct the right attention and exercise judgment, it sounds like AI can help us become a lot more efficient and enable us to make smarter decisions. How do you make room for empathy and human connection in all of this?

PMG: As we look at what AI models can do, automate or compute, we still want these models to complement humans. And if we’re going to have the models complement humans, we need to help the models do the best they can, and that involves data scientists who can help tune them. I know you’ve told me you’re into cars. Tuning a model is just like tuning a car, you know. We tune a model to do the best it can do.

We also need to tune humans. What humans need is compassion, leadership and vision. And empathy, which means saying, “I understand that you are here to do a job, but you’re also a human who’s got a lot of other things going on in your life. As a leader, I need to pay attention to when there’s a challenge going on in your life, or what opportunities I see for you. What are you best at? What really excites you and motivates you the most?”

Empathy is about recognizing that and helping my employees perform at the top of their license. My company partnered with an organization headquartered in the U.K. called boom! — The Global Community for Women in Supply Chain. They conducted a survey last year on what supply chain practitioners need to thrive and survive. One of the top things respondents asked for was compassionate leaders.

Compassionate leaders are the ones that can recognize that I am more than a data point. That my child is sick today. My father just died. My spouse is undergoing chemotherapy for cancer. Or that I’m no longer excited and motivated at work, or this project has been long and hard. Or that I feel overwhelmed. Compassionate leaders pay attention to these things to help humans become high performing. A high performing human is best for everyone because everyone feels most satisfied.

I’m not talking about pushing beyond limits. I’m saying most of us feel excited when we are at the top of our license. When we are getting joy out of our work or what we’re doing, we are going to give our best to the organization and be the happiest ourselves.

RK: It was so well said. It is a leader’s job to unleash that passion and the potential for their teams to work at the top of their license. My question is, what do you foresee as the skills that will be needed to perform at the top of the license in the age of AI?

PMG: Absolutely. Technical skills matter, but what is critical to maximizing their efficacy is our other skills.

The ability to communicate rises to the top. A physician wrote an interesting article about how he was having trouble with the family members of a patient because there was a particular treatment that the patient needed, but the family was concerned that it wasn’t right.

So, the physician asked ChatGPT, "How do I communicate what the family of the patient needs to do right now?" The answer gave him some words to use that he hadn't thought of before. He was able to communicate the message in an empathetic and humane way, literally showing the machine's output to the family as the prompt for the discussion.

So that’s about communication. How do I communicate a difficult message and tell a story? How do I take an AI output and explain it to a business leader who may not have the mathematical training that I do? All these are important communication skills.

Critical thinking skills are equally important. They are about monitoring the bias in machine-generated output and saying, "I'm not going to trust the results on their own." It is saying, "I'm going to use my critical thinking skills when something doesn't seem right."

And of course, judgment. I spoke about the importance of exercising judgment earlier. You can’t ask ChatGPT, “What should my business strategy be?” It’s a judgment call.

Change management is another critical skill needed moving forward, particularly in this age where change is happening so fast. We need to reflect on that and on what we need to do to adapt our organizations. Even if you're not a change-management expert yourself, you need to know when your team, a situation, a task force or a group needs change. You need to know who can help you figure out how to adapt.

Business acumen is also important, whether you’re a physician, a supply chain practitioner, or a marketing leader. Understanding the business you are in, what’s happening in your business or organization, and what’s the context around it. And then based on that, how to use judgment and make a good decision.

RK: I fully agree with you. We know that these skills are a better predictor of long-term success, and we must double down on building them. And that brings me to the last question of our discussion. Are there ethical issues that we should be concerned about as we adapt to AI? Also, I was watching The AI Dilemma. They said that 50 percent of AI researchers believe there is a 10 percent chance that AI could disrupt humanity as we know it. Then they ask: if 50 percent of airplane engineers told you there was a 10 percent chance you would die on a flight, would you get on that plane? But with AI, it feels like we are just embracing it without fully understanding it. What do you think of that?

PMG: I think the concerns about AI fall into four categories. One that you mentioned is the alignment problem: the notion that AI could suddenly act on its own and work against what is in the best interest of humans, and therefore needs to be regulated. I think the most honest answer to any of this is I don't know. I can't tell you for sure that's not going to happen. I can tell you, based on the readings I've done and the people I talk to, that at the present time and in the foreseeable future, I don't see the alignment problem as a grave concern.

A second concern is arming bad actors. We must monitor and pay attention to that, to what could go wrong. I’ve heard examples of generative AI mimicking voice and bad actors using it for ransom. But bad actors have been around since the dawn of time, and I don’t think that’s a new threat. We’re just giving bad actors new tools. We certainly need to pay attention to that.

Third is jobs. I believe that there will be net job creation, but there will be some whose jobs are more impacted, and there will be some who will be out of work. So, we have to think as a society about how we invest in job training, reskilling and restructuring.

Professors at the University of Toronto — Ajay Agrawal, Joshua Gans and Avi Goldfarb — have written a fascinating book, Power and Prediction: The Disruptive Economics of Artificial Intelligence. They point out that it has typically taken new technologies, even electricity, decades to be adopted. I think the pace of change with AI in many ways will be slower than people think. I don't think professors are going to lose their jobs overnight.

The fourth, and the one that I think is going to have the biggest and most widespread impact and therefore needs the most attention, is bias and misinformation. I’ll give two examples.

ChatGPT was asked to write performance reviews for an engineer, a female engineer, an African American engineer and an Indian engineer. Given just that little information, it was remarkable how biased the results were. If that's all the model is given, you would assume they would all get the same review, but in fact the results were quite differentiated. Women were rated more harshly. People of color were rated more harshly. This reflects the bias we have in society. It's well known that this bias exists, but it's remarkable that it comes out of a mathematical model given so little information. We have to monitor and pay attention to that kind of bias.

A second example I'll give is from image recognition, an earlier precursor of generative AI. Joy Buolamwini, a leading researcher in AI, particularly in image recognition, conducted the Gender Shades Project, and what she showed is that AI's classification capabilities were far worse on women than on men, and on people of color versus white people.

So literally, if you take a brown-skinned person versus a black-skinned person, the darker the skin, the worse the recognition was. Only 65.3 percent of the time could it correctly recognize an African American female face, whereas for a white male it was 99.02 percent accurate. Part of the challenge they found is that these models were trained by majority white male teams, so the human bias was built into the model.
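
[Editor's note: a minimal sketch of the kind of monitoring and validation she goes on to call for — compute a classifier's accuracy per demographic group and compare. The evaluation records below are hypothetical, not Gender Shades data.]

    # Surface accuracy gaps across demographic groups.
    from collections import defaultdict

    # (was the prediction correct?, subject's group) from a hypothetical test set
    results = [
        (True, "lighter-skinned male"), (True, "lighter-skinned male"),
        (True, "lighter-skinned male"), (True, "darker-skinned female"),
        (False, "darker-skinned female"), (False, "darker-skinned female"),
    ]

    totals, correct = defaultdict(int), defaultdict(int)
    for ok, group in results:
        totals[group] += 1
        correct[group] += ok

    for group in totals:
        print(f"{group}: {correct[group] / totals[group]:.0%} accurate")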

I think that kind of bias is going to be sometimes widespread and sometimes subtle. We need to recognize that it is a problem in the first place and care about the problem. We need to have diverse teams building AI models, and we need to have monitoring and validation. That is where we as humans will need to play a critical role. That's where critical thinking, judgment skills, compassion and empathy are all essential in being able to recognize, identify and act on that kind of bias, and on misinformation.

RK: That was a good framing of the four key risks that AI presents. We need to have a deliberate, measured approach to addressing each of those. Thank you so much for your time and for your unique insights.

PMG: Thank you for taking the time to interview me and for the invitation in the first place.


This article originally appeared in the Fall 2023 issue of In the Lead magazine, from Buccino Leadership Institute. The biannual magazine focuses on leadership perspectives from the field of health care, with content that is curated from leaders across the industry who share lessons learned from real-world experiences.


For more information, please contact:

  • Ruchin Kansal
  • (973) 275-2528
