A piece of advice if you’re meeting with Lisa Su: Wear sneakers.
Su, the leader of AMD, moves fast these days, though I suspect that’s always been the case. Her company’s chips underpin the artificial intelligence that’s changing the world at breakneck speeds. To hear Su and literally everyone else in semiconductors talk about it, the US is in an AI race with China—and the rules keep changing. The Trump administration has once again shifted its stance on what kind of chips can and can’t be shipped to China, with the latest decree being that the US will take a 15 percent cut of AMD and Nvidia chip sales to China. Meanwhile, on the home front, Su has claimed that AMD’s newest AI chips can outperform Nvidia’s—part of her strategy to keep eroding Nvidia’s dominance in the market.
So, yeah: Be ready to keep up.
Under Lisa Su, the stalwart American semiconductor company has reasserted itself as a force in the age of AI. “Reasserted” doesn’t do it justice: Su took a struggling AMD and executed a 10-year turnaround that has been, as one economist put it, nothing short of remarkable. Since 2014, when Su took over as CEO, AMD’s market cap has risen from around $2 billion to nearly $300 billion.
Aside from her well-known bona fides, Su herself—what drives her, what inspires her, what irritates her, where her politics lie—is less known. This is what I was hoping to learn when I visited AMD’s offices and labs in the hills of Austin, Texas, on a day in late June when the wind seemed to do little more than push heat around.
Our conversation kicked off with China, which accounts for nearly a quarter of AMD’s business. She betrayed no anxiety. Su now travels frequently to Washington, DC, to grease the wheels. “We’ve come to realize that export controls are a bit of a fact of life,” she told me, “just given how critical the chips that we make are.” In other words, it’s precisely because AMD’s chips are so darn important—to national security, to national economies—that they’re now at the heart of modern statecraft.
Another thing I learned about Su: She plays the long game. Politics is a cakewalk compared to what she’s managed to pull off professionally.
Su was born in Taiwan in 1969 and raised in Queens, New York. Her father worked for the city as a statistician; her mother was an accountant who became an entrepreneur in her mid-forties. Su earned a doctoral degree in electrical engineering from MIT, then went on to stints at Texas Instruments, IBM, and Freescale Semiconductor, where she served in executive roles. After joining AMD in 2012, she quickly rose to COO. As Su tells it, six months in, the chairman of the board called her and said, “It’s time, Lisa.” Su’s response: “Really? That seems kinda quick.”
As CEO, Su smartly steered AMD toward the high-performance computing market. She embraced chiplets, a modular approach to building chips that has paid off enormously. She impressed the industry by launching the world’s first 7-nanometer data center GPUs. More recently, she doubled AMD’s data center revenue in two years. And she has struck deals with juggernauts like OpenAI, Meta, Google, and a couple of Elon Musk’s companies. During a keynote speech at AMD’s annual event this June, OpenAI CEO Sam Altman trotted onto the stage to hug it out with Su.
These are all impressive data points. Yet AMD is still a fraction of the size of its most formidable competitor, the $4.4 trillion Nvidia. Comparisons between the two companies are inevitable—especially since Su is distant cousins with Nvidia CEO Jensen Huang. (I was warned that Su hates being asked about this. I asked anyway.)
During my visit to Austin, Su led me on a tour of AMD’s test labs, where rows of server racks are rigged up to be put through the extremes. Engineers straightened up when Su paused at their stations. One of them surprised her with a celebratory cake in the shape of AMD’s EPYC Venice processor. Su appeared genuinely delighted. She posed for a photograph, then moved on to the next row in the lab, striding with purpose in white Prada sneakers. Along the way I fired questions at her, raising my voice to be heard above the din. What do models like DeepSeek mean for her business? Will AMD build its own LLMs? What drives Lisa Su?
Lisa Su in the AMD lab in Austin, Texas.
Photograph: Linda Liepina
Later on, Su invited me to join her in her chauffeured car. (Alas, it was not one of her Porsches, which have license plates bearing the names of her favorite AMD chips.) The noise was gone; the walls were down. During the nearly 30-minute drive to the next lab, Su pressed me on my thoughts on AI, called out my skepticism, and shared why powering the AI revolution—particularly around health care—is personal to her.
What is something that you would hope that the Trump administration—but also the general public—would understand about the AI accelerators that you’re making?
That we, as tech companies, benefit from more users. And limiting the number of users in our ecosystem is actually bad, not just for AMD but really for the US. Because there will be alternatives out there. The idea that somehow, if we don’t ship chips to the rest of the world, AI progress is going to stop—it’s not going to stop. AI progress is going to continue to develop, and we’d rather they develop on us than on someone else.
There have been incentives recently to bring more chipmaking back to the US. What’s the most complicated part of bringing that manufacturing back?
We absolutely should bring manufacturing back to the US. Absolutely, 100 percent.
Why is that?
Because it is such a critical part of national security and economic interests. We had an ice storm here in the central Texas area a couple of years ago. For days nothing could move around. A couple of fabs around here were shut down. For good business practice, you want diversity.
To bring manufacturing back will take time. But it’s doable. The thinking went from “Oh, you can’t do leading-edge manufacturing in the US” to today, when TSMC’s Arizona fab is running some of our latest server processors, and it’s looking really good. So it can be done. It is more expensive, and that’s OK, too. I think it requires a change of mentality, that you don’t always go for the lowest-cost thing.
Lisa Su holds a MI355 chip.
Photograph: Linda Liepina
When you became president and CEO of AMD in 2014, did you think that folks like yourself would be expected to weigh in so much on what’s going on in terms of geopolitical and social issues? Do you feel more pressure to participate in conversations with the current administration?
Well, it certainly has changed. I wouldn’t say that we’re political or I’m political. You won’t see me weighing in on general social issues, because I don’t necessarily think that that’s where my value-add is. But when it comes to technology policy and where semiconductors are in the world, yes, we have to participate. And I wouldn’t call it increasing pressure. I would call it an increasing responsibility to do so, because we want the rules to be written right.
You’re one of the most prominent women leaders in technology, in semiconductors. Why don’t you think you add value?
Maybe value-add is not the right way to state it. It’s more like, my personal opinion might be interesting, but frankly, it’s much more important that we get the policy correct, based on what I consider facts.
Your least favorite question these days is probably “When?” Meaning, when might AMD surpass Nvidia in the AI GPU market? My guess is that 10 years ago people might’ve laughed at the idea, and now they might be more willing to entertain it.
When I started as CEO, people would ask me, “Why are you taking this job?” and I would be very confused. I was like, “Are you kidding me?” To me, it was the best. We were a company in an industry that really matters that had been underperforming for a while, and I had a chance to take this team and do something which I felt was important. If you asked me, “What do I want to be when I grow up?” it was not “I want to be a CEO,” it was “I want to work on something that matters.”
At that time, everyone was comparing us to Intel, and we would have to defend that. And my comments to the team are, “Look, we know what we can do, and let’s just show the world what we can do.” And so I mean, this notion of when, I don’t love this. I know the media loves this. It’s always A versus B, is that right? That’s what you guys—
I wouldn’t say it’s A versus B. I mean, I think AMD is more nuanced in some ways, because you still have so many clients and customers who are reliant on x86 and CPUs. And you have this fast-growing data center business, which you grew from $6 billion in 2022 to $12.6 billion last year. But Nvidia is the big question right now.
Look, my point is I don’t necessarily want to be compared against Intel or Nvidia. My vision, our vision, has always been, “There’s no one-size-fits-all.” You need the best in the data center. You need the best CPUs. You need the best AI accelerators. You need the best in your personal computing. We have this portfolio that is quite broad, and the market is humongous. It’s more than $500 billion over the next three to four years. We have plenty of opportunities.
So, Sam Altman was at your AMD AI event in June, and you talked about working with OpenAI. You’re working with Meta. You’re working with Elon Musk’s companies, Tesla and xAI. But the reality with AMD is that companies like that love their Nvidia GPUs, and you’re an also-ran. Do you want to get to the point where you’re like, “No, we are the primary partner”?
Of course. That’s where we are today in CPUs. So if you were to ask many of those same companies, I think they would say that AMD is their strategic CPU partner. And absolutely, we expect to be there in AI as well. But I’m not impatient with this.
You don’t want to put a time stamp on it.
Look at it this way: When I first joined AMD in 2012, Microsoft was just an early partner for us in gaming. Over the past 10-plus years we’ve built a lot of trust, and now we’re cocreating with them, so Microsoft just announced they’re using AMD not only for their next-generation Xbox consoles but across their entire cloud.
Our work with Meta has been absolutely the same way. I remember the first conversation I had with Meta, and I said, “Just give me a shot. I don’t have to tell you that I’m going to be the best in the world. I know that I’m going to prove that to you. I’m going to be your best partner. Not just your best technology partner but also your best partner in helping you get your technology infrastructure together.” And that’s what we do.
At the AMD AI event in June you talked a fair amount about DeepSeek, the large language model that came out of China and reportedly cost a lot less money and computing power to train. How are AI models like that changing the way you’re thinking about computing power?
Well, it’s another example of just how much workloads have been changing in AI. People were previously all focused on large-scale training, and now with reasoning models and fine-tuning, the industry has transitioned to a place where inference-type computing is actually growing faster. It’s why you have to have very flexible hardware.
So it’s not changing the way we’re thinking about it, because we always thought inference was going to be more important. I guess that means we were right in betting on that. We’ve optimized for memory capacity and other key things that are important for inference computing.
I’m going to bring up Nvidia again: Nvidia has been training some of its own models and offers a framework, NeMo, for developers to build their own generative AI models. How seriously have you thought about AMD training its own models?
We are training our own AI models. We have an AI models team. But we’re not training our models for the sake of competing with big model builders. We train them to learn from them. The more we dog-food our own stuff, the more we learn, and then we can accelerate our building.
Analysts and even some of your customers have told me that AMD is incredibly customer-oriented. On the other hand, folks who are deep in the technical weeds say that ROCm, the set of software tools for programming AMD’s hardware, still isn’t as good as Nvidia’s CUDA. What specific steps do you plan to take to lure in more developers?
Yeah, I mean, I agree that the software is the most critical layer, because it’s what developers see. And when you think about ROCm and CUDA, it’s not that one is better than the other. It’s that CUDA has been around for a long time. People have gotten used to a certain ecosystem, and so we’re actually teaching them a different ecosystem. That’s the way to think of ROCm.
What I hear is that it’s not just entrenchment, though. Developers say, “The compilers don’t work as well” or “The performance libraries aren’t as good” or “We want more portability.”
Sure. But the reason for that is because, hey, things are done a certain way in Nvidia’s CUDA, and they would like to have it done a very similar way in ROCm. I have not yet met an AI customer that we haven’t been able to get working, performant, and all of that stuff. We don’t necessarily have all the libraries yet. There are lots of, let’s call it special kernels, that have been written. So to your question of, What are we doing? I mean, we’re running faster. That’s the best answer: We are running faster.
I learned very early that you don’t have to agree with criticism, but you have to understand it’s a perspective. And then you decide what you’re going to do on top of it. There’s still a lot to do. And we’re hiring like crazy. We’re acquiring, and we’re listening to developers. And I think what we’ve seen is you can actually make progress pretty quickly.
What do you make of the Meta Superintelligence Lab and Mark Zuckerberg reportedly offering AI talent people up to nine figures to go work for Meta? What does that kind of compensation do to hiring in the Valley?
I can’t say I have direct experience with it, frankly. I think competition for talent is fierce. I am a believer, though, that money is important, but frankly, it’s not necessarily the most important thing when you’re attracting talent. I think it’s important to be in the zip code [of those numbers], but then it’s super-important to have people who really believe in the mission of what you’re trying to do.
I think people have done relatively well here, because the stock’s done OK. But from a recruitment standpoint, it’s always like, “Do you want to be part of our mission?” Because the ride is really what we’re trying to attract people to. It’s the ride of, “Look, if you want to come do important technology, make an impact, you’re not just a cog in the wheel, but you’re actually someone who’s going to drive the future of our road map, then you want to be at AMD.”
Do you envision ever offering anyone a nine-figure compensation package to come build out your software ecosystem?
I don’t think so.
Because you’d have to answer to shareholders or make a case to your board?
Because it’s not really about one person in our world. I mean, it’s really about great people, don’t get me wrong—we have some incredible people. We acquired Nod.ai, and Anush Elangovan, who was the CEO of that, has now become the head of our software ecosystem work. He’s just absolutely phenomenal with his passion. He’ll go after every single person who has an issue with ROCm. And I’m like, “Anush, how do you do that?” So that’s what I mean. We’re looking for people who have that type of passion for the work that we do, and there are lots of those people around.
What is “superintelligence” to you?
I think the idea that AI can make all of us superintelligent is a wonderful vision, and we’re still in the very early innings of how to do that.
One of the areas that I’m most personally passionate about is health care, because I have had experience with the health care system, and I think it should be much, much better than it is today. We should be able to cure these diseases. We shouldn’t have to do trial and error like we sometimes do. This is a perfect use case for AI. Being able to stitch all those pieces together to go from drug discovery to therapeutics to inpatient care, all of that is ripe for—let’s call it transformation. I don’t know if you call that “superintelligence.”
There’s this idea floating around that AI is going to be so smart that it’s going to eventually be able to delete humanity. How do you think about those kinds of predictions? Do you believe in AGI?
I do believe in AGI, but I don’t believe in the idea that AI will be smarter than people. I also am not a big doomsday person or believer, either. Look, I mean, technology is great, but technology is as great as the people who build it and create it and channel it in the right direction. So I find those conversations a little bit esoteric. And our focus is, “The tech is good, but it’s not great yet. How do we make it great?”
How do you measure “great”?
I think this idea of having AI solve real hard problems is when it gets great. And we talk about agents as one of the next big things. I think agents right now are doing, let’s call it, relatively more of the mundane tasks of the world.
Lisa Su outside of AMD’s headquarters.
Photograph: Linda Liepina
Putting things in your shopping cart.
Right. I think there’s two directions AI goes. One is pure productivity, you know, how do I remove some of, let’s call it the menial work that people do, so that they can work on more interesting things? That’s one aspect of it, and we’re using that.
But the other aspect of it is when AI can solve really, really hard problems. It can take what would’ve taken us 10 years to figure out and do that in six months. I think about a world where it normally takes us three years to design a chip, and what does that look like if I could do that in six months?
Does humanity just not keep up at some point, though?
I don’t know. I would bet on humanity being OK.
Technology can be a little bit overwhelming now.
Well, but Lauren, I think that’s the point. When technology is good enough, you don’t have to think about it. Today, you still have to think about when you go and ask—what’s your favorite? Do you use ChatGPT or Grok?
I use ChatGPT, yeah. Not every day, but—
Often?
Often enough. I mean, I’m obligated to test these things.
Yeah, but you still have to make sure, “Hey, did it give me the right answer?”
Oh, absolutely. I mean, as a journalist in particular. I don’t use it for my writing in any capacity. We draw a very hard line between using it for learning how to cook a steak versus using it for our journalism.
But you use it for research.
Sometimes, but the hallucinations are concerning.
But that’s the point: It’s not good enough yet. At some point, it’s going to be good enough. You’d want to be able to take your AI at face value.
You mentioned health care. When we’re older and infirm, will the generation of folks who are treating us be ChatGPT doctors?
I would like there to be a generation of folks treating us who have the vast amount of data that ChatGPT will have, so that they’re better informed to make diagnoses.
When you think about AI philosophically, is it like the internet all over again? Is it more comparable to Linux, like it’s going to be some operating system that runs on everything that we own? Is it electricity? Is it fire? I think Sundar Pichai has compared AI to fire, in terms of how transformative it is.
The internet is not a bad comparison, but I think AI is much more than the internet. Because, if you think about it, the internet was a lot about moving traffic. AI is more about something foundational in terms of productivity. Sometimes people compare it to the Industrial Revolution, and that’s not a bad comparison, actually.
With other revolutions we weren’t overwhelmed so much with thinking about what was true and real and what was not.
You can choose two ways to think about it. One is you try to hold back on AI because it could be dangerous, or you try to go as fast as you can but put the right lens on the information. I’m a big believer in the second camp. And as a result I don’t believe in these cases where you’re not going to need lots and lots of people. Because in the end, people are the judge of what truth is. We’re still hiring more and more engineers, because they’re the final arbiters of our engineering.
I am hopeful that humanity will figure it out.
And it will be so much better. It will be like the internet is to us today, which is you just take it for granted. We shouldn’t evaluate the technology based on this point in time. We should evaluate it on the slope of what we’re going to be capable of doing. We’re going to get these things right. But we may have a few bumps in the road.
You seem a little concerned about AI. Are you just playing devil’s advocate?
I tend to think the people who stand to benefit the most from these technologies are the ones who have the luxury of being a little bit more optimistic about it and who are hyping it up. There’s that famous William Gibson line: The future is already here, it’s just not evenly distributed. Even with the advancements in medicine, we’re going to see biases emerge that lead to people getting denied health care or insurance coverage. We’ve already seen this.
Health care, for me, is quite personal, because my mom was quite ill. For a while. And so I got to watch her journey going through that. And I realized, like, it doesn’t matter who you are. You can’t guarantee the best health care, because it’s really an art right now. It’s not a science. And I believe it should be a science.
Why do you think it’s an art?
The body is a very complex system. So you have specialists, like a heart specialist or a kidney specialist. But there are not that many generalists that can pull it all together. And that, to me, is a travesty. I’m like, come on—this is solvable.
That’s what we do in tech, right? We take complex systems and put them together, and we make them work. But we’re often only looking at one aspect of health, and it’s my firm belief that if we can use technology to help pull all of that expertise together, we’ll be able to treat people better. I watched it firsthand. So, anyways, in my next life when I have time to do something other than this—
You’ll be a doctor?
I won’t be a doctor, but I hope to be someone who can help bridge the divide and use technology for what it’s actually capable of.
You could do that in this lifetime, too.
I have a few things to do right now.
Was your mother able to recover?
No, unfortunately.
I’m sorry.
But you know what I mean? I just realized that—wow.
You realized, it could happen to any of us.
Yes, but it’s about the quality of care, even with the best doctors.
About a year ago my own mother had some really serious health issues, and she ended up in the ICU on a ventilator. Doctors kept coming in and looking at scans and couldn’t figure out what was going on. And I was sitting in the hospital thinking, I’m so deep in the world of reporting on AI, where people are touting incredible medical advancements, but we can’t tell what’s happening from these scans?
So you know. You know exactly what I mean. It’s infuriating for me to think my mom was in the ICU for 60 days, and people said, “Nobody walks out of that.” They were like, “She’s not going to be able to do it.” And I was like, “Yes, she is. I know she is.” I wasn’t the one qualified to make those calls, right? And she did. She survived another two years after that.
You talk often about resilience. How do you personally—not the company, but you—stay resilient? Is it Starbucks? I noticed you had one at your AI event.
Yes, this is a passion tea lemonade, and it does a lot for me. I’m not a big caffeine drinker, so this and a bit of exercise does it for me. I go in waves [with Starbucks]. Sometimes I have a lot, and sometimes I cut myself off.
Same. What’s your preferred form of exercise?
I like to box. I have a trainer that comes to the house, and he lets me hit him. Well, not him, but mitts.
How long have you been doing that?
I don’t know. Seven, eight years, something like that.
So two to three years after you became CEO, you were like, “I need to work some stuff out here.”
Yes.
How much do you sleep per night?
Five, six hours. Six is a really good number. On weekends, I might be seven.
What impresses you most as a leader? When you have your first meeting with someone, what impresses you?
Passion for what they’re doing. Because I think that stays with you through good times and bad times. Things are always going to go wrong, but if you’re truly passionate about what you do, then I think you shine.
What irritates you?
What irritates me?
Yes.
Well, I can’t say people who ask me about Jensen being my cousin?
You can totally say that.
To tell you the truth, it doesn’t really irritate me, but it’s more like, “Really? Is that the most important thing we have to talk about?”
What is it you feel like people don’t really know about you, that you want them to understand about you?
I feel like people know me. No? Well … I get up every day because I believe our products can change the world, and they can make the world a better place. So there’s always noise—this, that, export controls, whatever. Those are noise.
So is it that you’re a supreme optimist?
I don’t think of myself that way, but I’m probably a supreme technology optimist. I’m actually quite pragmatic. So I’m a pragmatic supreme technology optimist. How does that sound?
It sounds like ChatGPT generated it.
That was not a programmed response.
How would I describe myself? I do believe tech has the opportunity to change so much of how we experience life in a very, very positive way. So in that case, I am a supreme technology optimist. But I’m pragmatic in how you get there. And how you get there is every day, step by step. We learn, we listen, we adjust. We apply what we learn. That’s just what we do.