
The Stanford Economist Studying A.I.’s Jobs Impact Is ‘Mindfully Optimistic’


The Stanford economist argues A.I. rewards those who actively use it to create value, not those who fear or ignore it.

Earlier this month, a student walked into Erik Brynjolfsson’s office hours with a blunt fear: that she might never find a job in the A.I. era. She worried her generation could be left behind entirely. Brynjolfsson, a Stanford economist who studies technology and labor, doesn’t share that outlook. He calls himself “mindfully” optimistic.

“I told her that, in many ways, this is a great time to be alive,” Brynjolfsson told Observer. “People who are using the tools are going to be able to seize the opportunity—and people who don’t use the tools are going to have a lot harder time keeping up.”

The tension between new technologies’ disruption and opportunity sits at the center of his work. Brynjolfsson leads the Stanford Digital Economy Lab and is a senior fellow at the Institute for Human-Centered AI, led by pioneer A.I. researcher Fei-Fei Li. He has spent decades studying how technology reshapes jobs.

His recent research suggests A.I. is already reducing opportunities for early-career workers. Even so, he remains an advocate for hands-on experimentation. He’s particularly enthusiastic about “vibe-coding,” for example, where users generate code with natural language prompts. Brynjolfsson said he often vibe-codes himself late into the night. “This technology is not just for a small, elite group of people who have to be highly trained anymore,” he said.

For Brynjolfsson, the real risk isn’t A.I. itself, but how people respond to it. Both blind optimism and outright fear, he argues, can lead to passivity. “That’s the worst thing we could do.”

Observer spoke with Brynjolfsson about how individuals can adapt, stay competitive, and make the most of A.I.’s rapid rise. The following conversation has been edited for length and clarity.

How did you first become interested in the intersection of A.I. and the economy, and what has it been like to see the field gain so much attention in recent years?

As a kid, I was always reading science fiction and Isaac Asimov, and that was just enthralling for me. I remember standing in line for the first Star Wars movie. And then in college, I studied math and computer science as well as economics. Just as I graduated, I, along with a college roommate, taught a pair of courses on A.I. at the Harvard Extension School.

Then I went to get my Ph.D. at MIT, and I tried to do both at the same time: economics and A.I. That’s one of the reasons I went to MIT, because it had really good people in both those areas. So really all my papers, my research from when I was in my 20s onward, have been about how A.I. and digital technologies are changing the economy. That said, I was kind of by myself for a long time. Not a lot of other people were working on this, and I just kept plugging away, and now it’s really exploding.

I could kind of always see this coming. Actually, for one of my first projects as a Ph.D. student, my advisor asked me to plot the growth of information technology in different industries, and I plotted and started seeing these exponential curves back then.

As a professor, you have a front-row seat to how young people are navigating A.I. Are most students worried about their professional prospects?

I probably get way more people of the other type coming to me with business ideas and saying, “Hey, I want to start this company.” I mean, I’m at Stanford, so maybe it’s not a totally representative group.

I think there are a lot of people who are very much leaning into this and excited about it. There are also a lot of people who are really worried, and a lot of people are both at the same time, which is not crazy because it is a very turbulent time.

Outside of experimenting with vibe-coding, you’ve also created an A.I. avatar of yourself for classes, right?

That’s true. When I teach my Stanford class, the students all use A.I. to do their homework, which is fine. Obviously, that’s part of the experience, but I want them to make sure that they actually think about what they’re writing and submitting and not just take my question, cut and paste it over to the A.I. and then hand it in.

I cold-call on them in class, and that helps a little bit, but I have around 80 students, so I can’t cold-call on everybody. So we built this “Erik” avatar. After they do their homework, I ask them to have a five- or ten-minute conversation with the avatar, on camera and with voice. The avatar has read their homework and asks them questions about it, like: “Why did you say this? And what about this other counterargument? And have you considered this possibility?”

It gives them a chance to dive deeper into the topic and explore different aspects of it. And also, to be frank, sort of verifies that they’ve really engaged with the material, as opposed to just cutting and pasting.

A recent study of yours examined the impact of A.I. on entry-level roles. What do you see ahead for young professionals?

I think an under-reported aspect of that study is that the outcomes depended heavily on how people were using the technology. In one section of the paper, we show that people who used the technology mainly to automate tasks, eliminating some of the things they were already doing, tended to see falling employment. Whereas the folks who used it to augment their work and learn how to do new things, expanding the set of things they were creating value on, actually had growing employment.

The story is more complicated than just “A.I. eliminates jobs.” When you use A.I. to augment your work, it can create more demand for people and increase productivity, because people who are able to do new things and create value in new ways are going to be a lot more in demand, and employers want those people.

What’s your advice for someone trying to get up to speed on A.I. tools?

It used to be that you would go to K-12, maybe university or even graduate school, and then you’d be done. And then the next 40 years, you use that material in your job. Now, I think that the pace of change is so fast. Everybody needs to be learning all the time. People who are taking my [MasterClass] course, it doesn’t matter if they’re 18 years old or 50 years old or 67 years old, they should all be learning new material because there are new opportunities.

Some of the tools that we’re teaching in the class weren’t available six months or a year ago, so there’s a big opportunity. A year from now, there’s going to be some additional tools, and people can layer those on top of what they’ve learned. Fortunately, I think it’s kind of fun to learn about these tools. But that should be part of your daily process, or at least your periodic process, to do some kind of learning.

Is it harder to convince companies than individuals that using A.I. to augment work is better than fully automating jobs?

It is harder. It’s great to be more efficient, and I’m not opposed to that. The problem I have is that I think there’s a huge overemphasis on that approach and not enough attention on creating more value.

We want people to be value creators, and they should put some of it on their own shoulders and go to their boss and say, “Hey, here’s this new way of creating value. I figured out how to do a new kind of marketing campaign using these tools, or these new kinds of graphics, or there are new ways of reaching customers through social media.”

People have to be proactive. One of the things I’ve started saying is that you shouldn’t think of A.I. as just artificial intelligence. You should think of it as amplifying intention: if you have a lot of ideas and intention about how you can do things better, A.I. will amplify that; A.I. will allow you to be 10x more effective. But you have to have that agency to begin with to really make it happen.

You’ve described yourself as a “mindful optimist” when it comes to A.I. What does that mean?

“Mindful” is an important part of that. I run into a lot of people in Silicon Valley who tend to fall into two camps. One is the unconditional optimists, and I won’t say their names, but there are people who just say, “Hey, it’s always worked out in the past. It’s going to work out this time. Just chill. We don’t have to worry about it.” Then there’s another group that thinks the opposite: “Oh no, this is really bad. A lot of people are going to be hurt, and we’re in big trouble.”

I think those two camps aren’t opposites; they’re both making the same mistake, which is that they are implicitly assuming A.I. is going to do something to us, and we’re just sort of passive. That’s the wrong mindset, in my view. We need to think about how A.I. is a tool, and we get to use that tool to change the world. The question isn’t what A.I. is going to do to us; it’s what we’re going to do with A.I. That’s where the mindful part comes in.

I don’t think either future is inevitable. I think it depends a lot on our choices, and so it’s a kind of call to arms.

© Observer