Having immersed myself in AI research for several years, beginning during my time at Columbia Business School, I have witnessed firsthand the escalating influence of AI across education, work dynamics, business landscapes, and societal structures.

Eager to articulate and share the multifaceted impacts of AI, I sought to craft a narrative intertwining the perspectives of a recent MBA graduate and a seasoned MBA professor. My goal is to inspire others to develop a deeper understanding of AI’s implications, not only for their careers but for other dimensions of life as well.

This story centers on an interview I conducted with Columbia Business School Professor Dan Wang (my Strategy professor), in which he shares his expert, nuanced views on the impact of AI on business, work, and education. Professor Wang is the Lambert Family Associate Professor of Social Enterprise and Faculty Co-Director of The Tamer Center for Social Enterprise, and a 2018 Poets&Quants 40 Under 40 honoree. His deep expertise in technology strategy makes him an invaluable source of insight.

And while AI could write a generic version of this article, I bring to it my own take, shaped by a diverse and unconventional set of experiences and research – which I hope makes this story more compelling and valuable.

ACTING & INNOVATION

As a recent Columbia MBA graduate, my immersion into AI research and strategy has been surprising and enlightening. Transitioning from a pre-MBA career as a television actor, I never anticipated that my passion for storytelling, forging human connection, and building relationships would intersect so profoundly with the rapidly evolving domain of emerging technology. Yet, having navigated both the arts and the technology industry, I have found my experience to be remarkably relevant.

Actors are wildly imaginative. They envision worlds other people can’t see…yet – and through words and emotion they paint pictures that open audiences’ minds to possibilities, connecting with people deeply through stories.

The core skills required of actors are creativity, collaboration, deep empathy, out-of-the-box thinking, and the ability to connect – to get in front of people, challenge them to think critically, and provoke them to open their minds to ideas that contradict their own.

HOW THIS RELATES TO AI

Those concepts are directly linked to innovation and strategic foresight – spearheading initiatives, confronting complex situations, building products that don’t yet exist, and then creating impactful messaging that resonates and connects with target audiences.

AI’s ability to automate jobs and replace core skills is already present and gaining ground as we speak. And while thinking deeply about this is enough to send anyone into an existential crisis, there are core skills unique to the human experience that matter more than anything AI will ever do.

And the skills that I believe will be most important for workers in the next decade are the skills that make us uniquely human – collaboration, connection, cooperation, critical thinking, and the infusion of personal life experiences into work.

CHATGPT & COLUMBIA BUSINESS SCHOOL

When ChatGPT was released in 2022, while I was still a student at Columbia Business School, I watched as professors began experimenting with its integration into the curriculum. At first, even though I was permitted to use ChatGPT for homework or during an exam (as long as I disclosed where and how I used it), I felt like I was cheating.

However, after spending years interviewing over 600 CEOs, executives, and MBAs at public technology companies about their views on the implications, benefits, and risks of emerging technologies like AI, I determined that using it (when allowed) enhances, rather than replaces, critical thinking. I see the potential for AI to improve the education system drastically and to move us away from memorization and traditional classroom settings – if ethical researchers and experts lead the way.

I have continuously sought expert opinions in an effort to learn from different perspectives. And on this topic, the person who comes to mind first is a Columbia Business School professor I greatly admire – whose strategy class was my first building block of confidence.

I took Professor Dan Wang’s Strategy course in 2021, and his class was unlike any other I had ever experienced. The class focuses on helping students understand frameworks for approaching strategic decision-making. Prior to every class, we read a business case and wrote a response to a specific strategic question. Then, during class, we engaged in discussion and debate about our responses and views.

This discussion required a high level of engagement, presence, and the willingness both to challenge others on their ideas and to be challenged on your own. Every class, I felt like I was exercising the muscle of cognitive flexibility – the willingness to change my mind – while developing more confidence in asserting my own opinions when I knew others would disagree.

In an effort to enhance the class even further and keep up with the pace of innovation, last year Professor Wang spearheaded an AI integration into his Tech Strategy course – prompting students to use AI tools as a discussion partner and adversary, a research analyst, and a design collaborator, and using AI to create simulations. His thinking behind the integrations, and the conclusions he drew from the experiment, have remarkable depth. Let’s dive in.

Jane Bernhard, MBA: How do you envision AI’s role in the classroom? Do you think of it as a potential threat to critical thinking or rather as a complementary tool that enhances students’ abilities to question, analyze, and engage with confrontation and challenges? Or perhaps it embodies aspects of both perspectives?

Professor Dan Wang: It’s not clear yet what the role of AI in the classroom is, largely because we haven’t collected enough data about experiences across different types of settings.

My experience is an intentional integration of AI, as opposed to a peripheral integration of AI, or a compliance-driven implementation of AI.

If we conceive of the traditional classroom as a format that’s primarily predicated on broadcast, which is a single instructor using their knowledge and their ability to communicate broadly to help students understand concepts – then (in this case) the role of AI for student learning is not clear. The traditional classroom is one that doesn’t involve a ton of interaction. It involves a sequence of unidirectional communication and subsequent reflection.

Now, for some settings and subjects, that might work. Sequential learning and sequential teaching can be effective, but those are also the modes of classroom instruction that AI could disrupt, in that it interrupts the very intention of those learning and teaching styles.

Essential AI integration in the classroom

The way that I thought about AI is to envision scenarios that are practical, in which AI is not just a form of augmentation, but a necessity. One way in which I used AI in the classroom is as a conversational partner. This is useful because it gives students the ability to get immediate feedback that is tailored to their own thoughts on how to prepare for a class. Now, my class doesn’t focus on the instruction of fairly fixed material. It is a discussion-based class in which subjectivity and diversity of opinion is the very point.

You have to learn to not only use your words to try to convince others, but also be open to being convinced as well. And that’s where the learning happens – when you encounter disagreements and tension and work through it.

This is how high stakes decisions are made in the real world – through dissent and reconciliation.

So how do we then help students use a tool to get in the mode of practicing this skill? A conversation partner fulfills this role and responsibility. Having an antagonist there to coach them through opposing viewpoints evokes different ideas and prepares them more effectively for debate in the classroom.
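(For readers curious what such a debate-partner setup might look like in practice, here is a minimal sketch using the OpenAI Python client. The model name, prompt wording, and case example are my own illustrative assumptions – not Professor Wang’s actual classroom tooling.)

```python
# A minimal sketch of an AI debate partner, assuming the OpenAI Python
# client (pip install openai). Model choice and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The system prompt casts the AI as the antagonist described above: it
# argues the strongest opposing view of whatever position the student takes.
messages = [
    {
        "role": "system",
        "content": (
            "You are a debate partner preparing an MBA student for a case "
            "discussion. Whatever position the student takes, argue the "
            "strongest opposing viewpoint and press them to defend their logic."
        ),
    },
    {
        "role": "user",
        "content": (
            "My position: the company in our case should keep licensing "
            "content rather than invest heavily in originals. Challenge me."
        ),
    },
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```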

Using AI created richer discussions

The way that I saw this come through in the classroom is that when we would get into a discussion that I had asked them to prepare for using an AI conversational partner, students would bring up conversations they remembered having the night before with their AI conversation partner. In a debate, the students would have already anticipated the opposing viewpoint, because they had heard it before.

So, it creates this much deeper discussion. It also makes students more sympathetic when they go into the discussion because they’re aware that their way of thinking is not the dominant way of thinking.

This creates greater cognitive flexibility, which I think is an essential skill for anyone who wants to get an MBA and ascend to a leadership position: you have to consider a whole universe of different opinions and somehow make a decision out of them. You have to be attuned to the fact that you can also be convinced about things.

This may not work for every classroom setting 

The very clear boundary condition for all of this is that it works for this particular mode of learning. I will be the first to admit that I don’t have experience using AI to teach a programming class or statistics.

That being said, I have taught statistics classes and data science classes in the past – and my experience is that they involve less discussion than the strategy classes I teach at CBS. The discussion in classes that focus on quantitative skills and methods is mainly Q&A about facts and answers. Could an AI do that? Yes – and the risk is less to the student and more to the instructor.

So there’s a lot of potential market opportunity for that type of AI coaching for more hard skills, and many of the ventures that I’ve seen in the AI space have focused on that.

Jane Bernhard, Columbia MBA: As teachers figure out how to integrate AI into the curriculum, students are also trying to understand what skills are going to be needed in the next five to ten years. 

In my mind, having just finished business school where so much of the work is quantitative, I wonder what will happen once AI can build a financial model just as well as a person. The student or worker would need to be able to understand it deeply to write better prompts, but the question of whether they need to know every detail of how to build it remains unanswered.

Do you feel like there are certain skills or expertise that students should be cultivating like cognitive flexibility, the ability to pivot, or the ability to understand new technologies because their industry might be disrupted in a way that they didn’t foresee?

Professor Dan Wang: Yes. The first part is about what kinds of essential skills maximize the output that you get from using an AI tool – knowing what to do with AI to be effective, or more effective, in your job, role, or ambition. The second part is: how do the dissemination and proliferation of AI ultimately change the rules and norms of work? That’s a different question.

The first one is ever evolving. When ChatGPT was first released to the public, people thought the new skill was definitely going to be prompt engineering.

But as it turns out, how good you are at writing prompts really depends on your background and domain expertise. And that’s something that’s deeply human. If you’re a writer using AI to help you craft a murder mystery novel, your experience writing murder mysteries will lead you to better prompts.

Maximizing AI output: The role of expertise

If you’re looking to have AI help you as a management consultant put together a slide deck for a recommendation for a client, a more experienced management consultant will come up with better prompts than a neophyte management consultant.

So although I agree that there are baseline approaches to writing prompts based on certain types of language, ultimately getting the most out of AI depends on your domain. There’s a case to be made that human-based training, domain expertise, specialization, and depth really matter.

That’s why one of the most productive AI use cases so far is programming: the best developers immediately see how they can 10x their productivity because they know exactly what they’re looking for. A new programmer can also improve with AI, but their domain expertise is lacking. So you can see that specialization still matters.

The emergence of more generalist responsibilities

AI tools will also enable all of us to become more generalist. Not only does AI make those who are highly specialized even more productive in their roles and goals, but if you know nothing about a topic, AI enables you to do something of substance in it.

For example, product managers at tech companies typically have a multifunctional role. They have to know about engineering, data, strategy, marketing, management, project management, and they have to put all that together.

Product management is therefore a fairly generalist role. What could happen in the future is that product managers could be more effective, largely because the use of AI tools will make them less reliant on all of the different functions that they have to manage.

The real kind of skill that becomes privileged in the medium or long term is the ability to synthesize, make connections across different domains, and draw complementarities out of different skills.

Organizations will also become flatter because there’s not a need for so many specialized functional silos. Even large companies might begin to look more like startups, and with the recent restructuring of large tech enterprises, we have already seen this in action.

Jane Bernhard, MBA: On the topic of disruption, how do you think about the transformation of different industries and the likelihood of certain functions or industries being impacted more than others? For example, I come from the television industry, and I have talked to a lot of artists who are anxious – rightfully so – about AI. If AI can soon write better scripts, and write them faster, will society as a whole place more value on the fact that a human created the content, or do people just want content, even if an AI made it?

Professor Dan Wang: Disruption is prevalent, but it will be slower than we expect. There’s almost certainly replacement already occurring. These functions that involve fairly basic applications of creative writing are already being supplanted.

At the same time I think there are many market and non-market forces that will prevent the transformation from being as dire and rapid as we think it’s going to be.

The first thing that is going to hold some of this back is a dispute about copyright (already manifested in the New York Times’ lawsuit against OpenAI).

Those types of legal disputes are only going to accelerate, driven by forces as powerful as, if not more powerful than, the New York Times. They will have to be considered in court, and I think that will hold back some of the application. You can already see that companies are beginning to build this into their strategy.

Apple’s strategy

I’ll give you a really good example. OpenAI basically scraped everything. Nobody cared at the time. They had been doing this for the past several years, and all of a sudden, now that there’s a powerful tool, people care.

Apple hasn’t released an LLM yet, but they’re almost certainly working on it. One of the really interesting approaches is that Apple is venturing to get ahead of legal issues by not just scraping the web or using data with ambiguous provenance. Instead, Apple is paying for the legal right to license data to train their models, giving the company legal cover.

That approach is also slower. There are almost certainly trade-offs that they’re considering with this approach as well. There’s also differentiation which creates competition. Competition and differentiation also slow down the process of displacing human functions.

Human originality achieves a premium

The last part of this has to do with strategy. I think that, as a collective, we consumers will become far more savvy in our consumption patterns as well. We’ll become used to seeing content that is AI generated.

We won’t think about it that way, but it does create an opportunity for differentiation through authenticity. That authenticity might have a basis in human originality.

One other version of this future – and it can be concurrent as well – is that human talent and human originality actually achieve a premium, more than they do right now.

There is certainly risk, there’s certainly concern, but at the same time, there’s also so much uncertainty about the upside as well that I don’t think it’s fair to have a conversation just about the risk without considering what a transformation could look like.

Skill-biased technological change

The other interesting comparison to make is the advent of digital spreadsheet software. You can read articles in the news from the 1980s and 1990s about how it would kill the accounting industry and financial services. People thought there would be no jobs. Of course, that didn’t happen. If anything, it actually created greater productive potential.

I think if you were to place a bet on what AI is going to do today, it’s very similar to what prior general purpose technologies in human history – electricity, the internet, and software – have done to alter the economy. The term for this is skill-biased technological change, which means that highly skilled professions and roles benefit the most from technological advancements.

With the advent of AI, certainly everyone can benefit, but there is a vision in which it creates greater inequality in terms of skills. The haves will get more out of this than the have-nots. That’s certainly a concern as well when it comes to social implications.

Jane Bernhard, MBA: Regarding societal implications, I think deeply about the human experience and what makes us distinct from an AI. What immediately comes to mind is the ability to communicate in a room with a person, understand their emotions, connect, and empathize.

With AI making certain skills and functions obsolete, the skills that would be vital, to me, are interpersonal communication, empathy, understanding how different people think, and being able to adapt, as opposed to just one technical specialty. For example, when AI inevitably gets to the point where it can code better than a person, the skills required of people would be interpretation and a deeper understanding of how to use it to make decisions – which means less focus on small details and more focus on strategies for implementation.

Do you agree or disagree?

Professor Dan Wang: I agree with everything that you said. What you articulated is, how can you replicate the tacit elements of togetherness? How can you replicate the feeling of being in a room or feeling pressure or putting yourself out there with an unpopular opinion, right? How is an AI supposed to make you learn from that experience?

So I agree that in terms of this tacit kind of sensation, it would be very hard to imagine AI in its current state being able to replicate that. Not ruling it out, of course.

A surprising AI case study

But on this interesting point, we had a session in our class this past year that was focused on chatbots. Half the session was about the economics of chatbots. The other half was about how good chatbots are. This was a really interactive part of the class. The case that we discussed was about a company trying to integrate a chatbot into its sales funnel and replace salespeople with chatbots.

Chatbot technology – conversational AI, as it’s now known – has been around for a really long time. Almost every customer service portal of any major consumer-facing company uses chatbots.

What I wanted the students to see is just how sophisticated chatbots have become. So we had two exercises in class. We used a custom AI that we built and also an off-the-shelf one that you can download for free, called Pi. It’s a coach, mentor, and therapist type of AI. It uses OpenAI’s Whisper, which means that you can actually talk to it.

I asked the students to try out all of this. I asked them to treat the career-coach version as if it were their own recruiting situation. I told them it would only work if they bought into it. And they did. The overwhelming response was that they could not believe we did not have this during their first-year recruiting.

Then we did another exercise using a function on Pi that just lets you vent to it. The students said it was a pretty empathetic experience. I knew this was going to happen, because research on this topic has long found the same result.
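(To make the voice interaction concrete: the pattern Professor Wang describes – speech in, coaching reply out – can be sketched roughly as below, using the OpenAI Python client. This is not Pi’s actual implementation; the file name, model choices, and prompt are illustrative assumptions.)

```python
# A rough sketch of a voice-driven coaching chatbot, assuming the OpenAI
# Python client. Not Pi's implementation; models and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Transcribe the student's spoken question with Whisper.
with open("student_question.mp3", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio
    )

# 2. Feed the transcript to a chat model playing the career-coach role.
reply = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are a supportive career coach helping an MBA "
                       "student prepare for recruiting interviews.",
        },
        {"role": "user", "content": transcript.text},
    ],
)
print(reply.choices[0].message.content)
```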

Data on AI, openness, and honesty

We had a debate about whether, if you’re using a chatbot as a company, you should disclose to the customer or client that you’re using AI as a chatbot.

Half the students said, yes, you should disclose. The other half said it doesn’t matter – what matters is the effectiveness. But there was a really interesting point that only a few of the students picked up on.

Specifically, there is research about why a company should disclose to a customer that they are talking to an AI. It’s because when you know that you’re talking to an AI, you tend to be more truthful and open with it.

The reason is that if you’re talking to another human, and you know it’s another human, you care about how you are perceived and how you are judged. You care about saying the wrong thing.

With an AI, you don’t care. And so the conversation, in a weird way, actually gets deeper. That blew my mind, because I knew that research. And then I saw the students do it in front of me.

So there’s this interesting case to be made, that we might be able to achieve a more mutually empathetic conversation with an AI. That is a sentence that is weird to say out loud because empathy is a deeply human quality.

The ironic part of this is that the thing that holds people back from using AI is a lack of trust. But then there’s all this research that says that in conversations with an AI partner, there are greater levels of trust. It’s almost like once you cross the threshold of using AI, you become totally different.

Jane Bernhard, MBA: The idea of an AI being perceived as more empathetic is concerning, interesting, and surprising. What it really says to me is that this is an extremely complex and nuanced issue – it is not black and white.

When I think about these issues and the research on them, I find the innovation inspiring and exciting – it is fascinating to watch the world shift. At the same time, I worry about the detrimental impact on certain industries and how the technology could be used maliciously.

When you think about your level of optimism or pessimism with AI, how do you feel about it? Do you feel excited? Do you feel nervous? 

Professor Dan Wang: Let me talk about this generally. I think disruption and transformation are inevitable. But I’m also optimistic. I’m constantly seeking novel ways of helping students and also learning myself too.

I’m into taking risks. I’m into learning from failure. So that’s the bias I bring to this. At the same time, the really interesting takeaway was that this is a technology strategy class of students who have selected into the class, have an interest in technology, and are young, tech-savvy people from fairly economically secure backgrounds. They have a lot of autonomy and freedom, and they are highly educated, so they are the best poised to adopt the technology.

And still, if you look at the numbers, not many students have really integrated this as a part of their lives. What that said to me was that it’s going to be slow.

AI integration will not happen overnight

There are many ossified forces in our society and organizations that work against the adoption of technologies like AI. One is just the very human trait of resistance to change, or resistance to sudden change.

You can try to impose AI software across an entire organization, but it is going to take a long time for people to adopt it. And it’s a huge organizational and management problem, not a technology problem. It’s going to take time.

There’s the policy angle as well. This is definitely a strategic area for countries like the US, China, and the UK. There will be regulatory control over this in a way that did not really exist for Web3 technologies. For AI, the government will be involved. There will be regulations that will alter the course of its development, most likely in a more thoughtful direction, but also a slower one.

The last thing that creates a lot of resistance is simply the fact that most organizations in the world are not set up to be transformed in any way in general, much less the dramatic transformation of AI. They’re just bureaucracies.

Anything that changes is subjected to revision, rules, and review. So rather than a wholesale transformation or disruption, things are going to change – but they will change slowly.

All of this also creates a misperception of what AI is. ChatGPT, for example, opened up this whole world of possibilities because it was intuitive.

AI is more than a chatbot

But it also created an anchor that’s not necessarily a good thing: the idea that AI is chatting.

I don’t think that’s right at all. The basis of AI is prediction and generativity. ChatGPT was the mix of those two in a really nice, user-friendly format.

But AI is actually built into all different kinds of functions. With Microsoft’s integration of AI through Copilot, I think you’re beginning to see how interactions with AI go beyond just instructional chatting.

It might be really intuitive to have a little assistance in Microsoft Word or Excel or PowerPoint, which points in the direction I think AI is going to go: AI will be integrated into all different kinds of functions, probably to the extent that it’ll be imperceptible.

Making slides will just become easier. A presentation that might have taken 10 hours before might only take three hours, but you won’t notice that reduction in time happening. It’ll just be software features. It just knows.

AI integration into daily life

I think AI will create the biggest impact in ways that we’re not directly communicating with it.

In fact, if this transformation follows the same pattern as prior moments of enormous technological change, we probably won’t even use the term AI – much as we no longer dwell on terms like electricity or the internet.

We don’t think of the internet as a separate thing. It’s just part of our life. I see that as the transformation that we’re headed toward with AI. We won’t need to use the term AI. It’s just going to be our lives.

AND THERE YOU HAVE IT

I hope you found this interview as interesting and thoughtful as I did. While there are many inherent risks and benefits to AI, what is clear to me is that the impact is not just one or the other. It is nuanced, complex, and multifaceted.

My personal takeaway is to continue building the skills that well-researched experts like Professor Wang advocate – cognitive flexibility, adaptability, comfort with confrontation, and critical thinking. I encourage everyone to research and speak with people in their industries about the impact AI is having and will have – so that you can focus on building skills that will be relevant long term.

And as someone who has always loved contemplating the realm of the impossible, I envision the yet to be realized potential, while also urging companies not to unleash monster products and services without cages.

Lastly, my biggest hope is that technology forces people to amplify the best parts of what makes us human – which to me are empathy and connection.

If you thought this was interesting and want to talk more, just give me a shout on LinkedIn!

 

This article was originally published by Poets & Quants.