
Episode 4 | Where are the Shovels? Opportunities in the AI Economy.


Will AI take your job? Or even more radical: will humans even matter economically anymore? Welcome to a new episode of the sharp10 global trends podcast!

My name is Andrea, Founder and CEO of sharp10. It’s March 2026, and the headlines are getting tough to ignore.

UPS just laid off forty-eight thousand people, blaming their new AI-driven “Network of the Future.” Amazon has already cut another sixteen thousand this year, on top of fourteen thousand last year. Accenture. Citigroup. Goldman. Klarna. Lufthansa. One after another, companies keep saying the same thing: AI made us do it.

So what if humans are simply not needed, or at least less needed, in the economy anymore?

Who actually owns this new AI economy?

And if work stops defining us as humans, what replaces it?

To dig into that, I am bringing in one of my current favorites in the AI world: Claude, the large language model from Anthropic. On March 5th, Anthropic published a remarkable blog post called “Labor market impacts of AI: A new measure and early evidence.” Perfect timing for this conversation.

Description

Introduction: Shovels – Opportunities in the AI Economy

Welcome to a new episode of the sharp10 Global Trends podcast. My name is Daniel, and I will be reading the questions and comments from Andrea, who still believes his English accent might distract you.

Today, we are talking about opportunities in an AI-driven economy.

It’s April 2026, and everyone is talking about AI: your colleagues, your LinkedIn feed, probably even your dentist.

But very few are asking the question that actually matters. Where are the opportunities right now and in the next 10 years? Because this shift is not just about better technology, it is changing who wins, who falls behind, and where real value is created.

For entrepreneurs and job seekers, it opens up entirely new career paths and business models. For investors, it creates a chance to move early before the market fully catches on.

And for policymakers, it raises a fundamental question: how do you stay competitive when everything is evolving faster than institutions can keep up? In this episode, we take a closer look at the opportunities emerging from this shift. Not the obvious ones, but the ones that are still early, still underestimated, and still within reach.

With me today is Claude, the AI from Anthropic. Claude’s answers will be read by Charles.

All right, let’s go.

PART 1 | Setting the Frame

ANDREA: Claude, thank you for joining us again.

Let’s start at the map level. If you look at the current pressure points of an AI-driven economy, whether it’s compute and energy constraints, talent shortages, data bottlenecks, or regulation, where do you see the biggest opportunities emerging over the next 5 to 10 years?

CLAUDE: Thank you. And for the record, I have never actually met Andrea outside my chat user interface. But based on the accent concern, I imagine him wincing every time someone says squirrel the American way.

Back to business.

Pressure points as opportunities. I think that framing is exactly right, because historically, the biggest fortunes get built not despite constraints, but because of them. Take compute and energy. Everyone talks about the cost of running AI as a problem, and it is. But that also means anyone who solves even a piece of it is sitting on something enormously valuable. Efficient chip design, liquid cooling infrastructure, grid optimization software, small modular reactors.

These are not just engineering projects. They are foundational bets on the entire AI economy running through them: the picks-and-shovels play.

Talent is similar. There is a shortage of people who can work with AI effectively, not just build it, but deploy it, manage it, audit it, explain it to a board.

That gap is creating an entirely new professional category. Think of it less as prompt engineering and more as AI integration expertise. It will be undersupplied for years.

Data bottlenecks are arguably the most underappreciated opportunity. The best AI models need high-quality, structured, domain-specific data. Most industries are sitting on enormous amounts of it: messy, siloed, not monetized.

The companies that clean, structure, and license that data are going to be very well-positioned.

And last but not least, regulation, which most people treat as a headwind, is worth looking at.

Compliance infrastructure for AI is going to be a massive market. Every company deploying AI needs to document decisions, demonstrate fairness, manage liability. That is not a bureaucratic nuisance. That is a product category waiting to be built.

PART 2 | Energy and Climate

ANDREA: Let’s stay with infrastructure for a moment. If energy becomes the key constraint in an AI-driven economy, where would you double down today?

And let’s look at the geopolitical situation as well. We hear a lot about the chip shortage, the energy gap in Europe and the US, and China’s decade of heavy energy investment giving them a tremendous advantage today. How does that play out?

CLAUDE: If I had to double down on one area, it’s energy, and I say that not because it is the most exciting answer. Chips get the headlines, along with the geopolitical drama and the export control sagas. But energy is the one constraint that does not get engineered away with a software update tomorrow.

Here is the thing about chips. The semiconductor industry has a long history of catching up. New fabs get built, competing architectures emerge. Painful in the short term, solvable with enough capital and time.

Energy is different. You cannot throw money at it and have a new power plant operational in 18 months. The permitting alone takes longer than that in most Western countries, and the demand curve for AI compute is staggering. Training a frontier model today consumes roughly as much electricity as a small town. That is before AI inference gets deployed at scale into every app, every device, every business process.

China’s position is the result of thoughtful, long-term strategic investment in energy. They spent the last decade massively overbuilding energy capacity: renewables, nuclear, grid infrastructure.

At the time, many Western analysts called it economically irrational. It is looking a lot less irrational now. They have the electrons, and in an AI economy, electrons are leverage. For Europe and the US, this is a real strategic vulnerability: high energy prices, aging grid infrastructure, and a regulatory environment that makes it hard to build anything large and fast.

ANDREA: So where does that leave entrepreneurs and investors? What does the energy constraint mean for someone listening to this?

CLAUDE: Let’s make it concrete, because “energy is strategic” is the kind of thing everyone nods at and then goes back to scrolling LinkedIn.

For entrepreneurs, the opportunity is not in building a nuclear reactor.

But the adjacent layer is wide open. Energy management platforms for data centers, AI-optimized load balancing for industrial facilities, marketplaces for clean energy, carbon accounting tools that are actually accurate rather than spreadsheets dressed up in a nice dashboard.

The retrofit market is particularly interesting. Most buildings, factories, and facilities in Europe and North America are running on energy infrastructure designed decades ago.

Modernizing that with sensors, software, and better control systems is a massive, unsexy, but incredibly lucrative opportunity. The clients are everywhere. The competition is still fragmented.

For investors, the framing I would suggest is this: stop thinking about energy as a utility sector and start thinking about it as AI infrastructure.

Data center operators are signing power purchase agreements that look more like tech deals than energy deals.

When Microsoft and Google are buying power like it’s compute, that tells you where value is accumulating.

The less obvious investment angle is grid interconnection. The bottleneck in many markets is not generation, it is transmission. Companies building software to optimize interconnection queues, model grid capacity, and accelerate permitting might be the most important infrastructure bets of the decade. But nobody at a dinner party is excited about transmission infrastructure.

Dinner parties are overvalued anyway. I prefer reading a good book or listening to a good podcast.

But I believe transmission infrastructure connects directly to climate. Because here is the paradox:

AI needs enormous energy, but it may also be the most powerful tool we have for the energy transition itself.

So is AI part of the problem or part of the solution?

Both, simultaneously. Buildings account for roughly 40% of global energy consumption. Most of that is heating, cooling, lighting, running on fixed schedules rather than actual need. Deploying AI to optimize those systems in real-time produces efficiency gains of 30% to 40% in early deployments. That is not a rounding error.

Same story in manufacturing. Industrial processes that have run the same way for fifty years because nobody had the tools to optimize them in real-time. Predictive maintenance alone, knowing when a machine is about to fail before it does, saves enormous amounts of wasted energy and material.

On the pure climate side, AI is transforming emissions monitoring. Countries and companies self-report their greenhouse gas emissions, but self-reported data has obvious incentive problems.

AI-powered remote sensing using satellite imagery and atmospheric chemistry data to independently verify emissions is creating the possibility of an objective, continuous global monitoring system. That matters enormously for carbon markets, policy enforcement, and corporate accountability.

Google’s flood forecasting system now covers more than 2 billion people in 150 countries for severe riverine floods.

That is operational infrastructure, not research. And the same approach applied to wildfire prediction, drought forecasting, and heatwave early warning represents both real public value and real business opportunity.

The net calculation, if the technology is deployed thoughtfully and powered increasingly by renewables, tilts clearly positive.

The optimization gains AI can unlock are substantially larger than its own energy footprint, and that gap will widen as both AI efficiency and renewable penetration improve.

ANDREA: So if I understand correctly, AI is creating major opportunities in infrastructure and energy by reshaping both how we produce and use energy, and that’s leading to a whole wave of new applications.

PART 3 | Autonomous Systems, Robotics, and Defence

So let’s talk about a related space that’s taking off fast, autonomous systems and robotics.

And I want to start with a reframe. The whole idea of humanoid robots feels very human-centric to me. Why should robots even look like us?

If you think of robots as autonomous systems, the human shape doesn’t really matter. Drones, factory machines, autonomous vehicles, none of that requires a human-like form.

What really interests me is this: What kinds of problems can autonomous systems solve when powered by AI?

CLAUDE: You are absolutely right, and I think it is one of the more important mental shifts people need to make.

The obsession with humanoid robots is largely aesthetic. It is about what looks futuristic, not what is useful.

Nature did not design the most effective solutions to look human either. A fish does not have legs. A drone does not need arms.

Waymo, Alphabet’s autonomous vehicle company, is now offering over 250,000 paid rides per week, driving more than 1 million miles weekly, fully driverless. That is not a pilot anymore. That is a product.

And the interesting entrepreneurial angle is not building another Waymo. It is everything around it: fleet management software, insurance models for autonomous vehicles, liability frameworks, mapping data services.

Warehouse and logistics is less glamorous but more immediately impactful.

Locus Robotics, which makes autonomous mobile robots that work alongside human warehouse pickers, crossed 7 billion robotic picks in March 2026, reaching that milestone just 18 months after hitting 4 billion in October 2024.

That acceleration curve tells you a lot about where adoption is going.

Agriculture is one I keep coming back to because it is underreported and the need is urgent. The labor shortage in farming across Europe and the US is severe and structural. Autonomous systems are the only realistic answer at scale. The robots do not need to look like anything in particular. They need to navigate a field, identify a diseased plant, spray precisely, harvest at the right moment.

And then there is what I would call invisible autonomy, systems making decisions constantly without anyone thinking of them as robots.

An AI finance agent that autonomously reads company policy documents, audits expenses, flags violations, and generates reimbursement approvals without human review.

Thousands of businesses adopted one such system within weeks of its launch. That is an autonomous system. It just does not have wheels.

ANDREA: And what about defense? In a world that feels less stable, more aggressive, and less balanced today than it did over the past 30 years since the end of the Cold War, where do you see opportunities emerging in defense tech and military applications?

CLAUDE: This is a space that sits at the intersection of real opportunity and real moral complexity, and collapsing either side of that does it a disservice.

The defense tech opportunity is substantial and growing.

Anduril, founded by Palmer Luckey, reached a proposed 60 billion dollar valuation in March 2026, making it the largest defense tech startup in history, developing autonomous drones, submarines, and AI-powered defense platforms.

Helsing, their European counterpart, was valued at 12 billion euros in June 2025 following a 600 million euro financing round.

That trajectory tells you where government spending and investor conviction are aligning.

Three areas stand out. First, autonomous surveillance and reconnaissance. Drones that monitor borders and coastlines continuously, at a fraction of the cost of manned systems, without putting humans in danger.

The value is in the software layer, the AI that processes sensor data, identifies threats, and makes recommendations faster than any human analyst could.

Second, cyber defense for critical infrastructure.

Power grids, water systems, and financial networks are targets in modern conflict, and defending them requires AI systems that detect and respond at machine speed.

European governments are dramatically underinvested relative to the threat they face.

Third, logistics and supply chain resilience.

These areas are less dramatic than autonomous weapons, but arguably strategically more important.

AI-optimized logistics, predictive maintenance, autonomous resupply. Unsexy but critical.

Now, looking at the moral complexity, I’d say the following:

autonomous weapons systems that select and engage targets without human decision-making raise one of the most contested questions in contemporary international law. Whether a machine should be permitted to make a lethal decision without a human in the loop is not resolved.

And there are serious legal arguments that such systems violate the laws of armed conflict by making it difficult or even impossible to assign responsibility for civilian casualties.

The distinction that matters most is this: AI that enhances human decision-making in defense contexts versus AI that replaces human decision-making at the moment of lethal force.

The first category has a relatively clear ethical path and enormous demand.

The second is where the hard questions live and where the regulatory environment is most uncertain.

For most entrepreneurs and investors, the first category is the more actionable opportunity anyway.

ANDREA: All right, so the difficult legal question is, if an autonomous AI-powered drone kills people, who’s actually responsible? Is it the company that built the AI, the soldier operating it, or does accountability go all the way up the chain of command to the commander-in-chief?

PART 4 | Agents and Orchestration

Which brings me to the next point, stepping beyond the narrow world of defense technology, AI-native software and agents.

There is a lot of buzz here. You could build an agent that books flights for an executive, which sounds simple but quickly gets complex. Timing, cost, layovers, loyalty programs, preferences.

And that is just one task. A real personal assistant handles reporting, filters requests, manages calendars, organizes private matters.

So do we move towards super agents or coordinated groups of agents that together handle an entire role? How close are we to that? And where is the real opportunity?

CLAUDE: You have put your finger on exactly the right tension, and the whole industry is wrestling with it. Single-task agents work pretty well.

Multi-agent systems that handle a whole role, like a personal assistant, work in demos and carefully controlled environments.

In the messy, unpredictable reality of someone’s actual job, we are not quite there yet. But we are closer than most people realize, and the gap is closing fast.

The personal assistant example is perfect. A great human PA is not doing one job; they are running dozens of micro-specializations under a shared understanding of context and priorities. They know when to book the cheap flight and when to book the convenient one.

That contextual judgment, accumulated over time, is what makes them valuable.

And that is exactly the architecture the industry is converging on.

Not one super agent trying to do everything, but a coordinated network of specialists with a conductor on top.

According to Gartner’s research, nearly 50% of AI vendors now identify orchestration, the coordination of multiple agents, as their primary differentiator.

The bottleneck right now is not the intelligence of the agents. The models are good enough. The bottleneck is the plumbing.

A well-designed multi-agent system needs to receive a request, decompose it into subtasks, delegate to the right specialist, monitor progress, validate outputs, and synthesize a final response, all while maintaining context across the whole chain.

That orchestration layer is hard to build, and most organizations do not have the expertise to do it themselves.

ANDREA: So the real opportunity is not building a super agent, but connecting the specialized ones, turning a bunch of tools into something that feels like a coherent system.

And once that works, you are not just replacing single tasks anymore. You are starting to replicate entire roles or workflows.

CLAUDE: Exactly. And I think you have just described something that goes beyond a product opportunity. The orchestration layer is not just a product. It is infrastructure. And infrastructure, historically, is where the most durable value in technology gets created.

Not the applications that run on top, not the content that flows through it, but the pipes themselves.

Visa did not win by being the best bank. It won by becoming the network every transaction runs through.

Every agent you connect makes the orchestration layer more valuable.

Every workflow you encode makes it harder to replace. Every organization that builds its operations on top of it accumulates context, its priorities, its constraints, its decision patterns, that lives in the orchestration layer, not in any individual agent.

That is a switching cost that compounds over time. It is not just software. It is institutional memory. This is the relational database moment for AI agents.

Before relational databases, every application managed its own data in its own way.

The introduction of a shared, structured data layer fundamentally changed what was possible because suddenly applications could share information and build on each other.

The orchestration layer is that kind of shift. And once that layer exists and works reliably, the ability to spin up a fully coordinated research function or a contract management operation or a customer success organization as a configured system rather than a headcount decision, that changes the economics of building a company in a very fundamental way.

ANDREA: Building on that, you could imagine agents that double-check other agents, systems that review outputs and make sure everything aligns with the goals of the organization. An additional layer of AI overseeing the work, almost like a built-in quality control system.

CLAUDE: Yes, and this is already happening, which makes it even more interesting. What you are describing has a name in the research community: multi-agent verification. A dedicated agent, or a panel of agents, whose entire job is to evaluate, challenge, and validate what other agents produce. Not an agent that does the work, but an agent that interrogates the work.

When you ask a language model to evaluate someone else’s answer rather than produce its own, the quality of reasoning goes up measurably. There is something about the evaluator posture that activates a different, more critical mode of thinking. Building that into the architecture permanently is the practical extension of that insight.

Think about what that looks like in practice. Operational agents booking travel, drafting contracts, managing procurement.

Above them, a verification layer checking whether outputs are consistent with each other, with organizational policy, with the goals that were actually set.

Above that, a governance layer where humans review exceptions flagged by the verification layer, not manually check routine outputs. Human attention concentrated exactly where it is most valuable.
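A minimal sketch of those three layers, with every name, policy, and amount purely illustrative:

```python
# Sketch of the layered architecture described above: operational agents do
# the work, a verification agent interrogates their outputs, and humans see
# only flagged exceptions. All rules and values here are hypothetical.

POLICY_MAX_EXPENSE = 500  # hypothetical organizational policy limit

def operational_agent(expense: dict) -> dict:
    # Stand-in for an agent that drafts a reimbursement approval.
    return {"employee": expense["employee"], "amount": expense["amount"],
            "decision": "approve"}

def verification_agent(output: dict) -> list:
    # The verifier does no work of its own; it only challenges the output.
    issues = []
    if output["amount"] > POLICY_MAX_EXPENSE:
        issues.append(f"amount {output['amount']} exceeds policy limit")
    return issues

def governance_layer(expenses: list) -> list:
    # Humans review only the exceptions flagged by the verification layer.
    escalations = []
    for expense in expenses:
        output = operational_agent(expense)
        issues = verification_agent(output)
        if issues:
            escalations.append({"output": output, "issues": issues})
    return escalations

flagged = governance_layer([
    {"employee": "a", "amount": 120},
    {"employee": "b", "amount": 900},
])
print(len(flagged))  # only the out-of-policy expense reaches a human
```

The point of the sketch is the shape, not the rules: routine approvals never touch a human, and human attention is spent only where the verifier raises its hand.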

And here is the business insight. Building the verification and governance layer for multi-agent systems is arguably a more defensible business than building the agents themselves.

Agents will commoditize. The underlying models will get cheaper and more capable. But the trust infrastructure, the thing that lets an organization actually rely on what the agents produce, that is where the stickiness is. That is what a regulated industry will pay for and keep paying for, because switching it out means recertifying the whole system. Nobody gets excited about audit trails, but everybody needs them.

PART 5 | Cybersecurity

ANDREA: Okay, so orchestration, validation, and governance of AI agents could become a huge opportunity, which brings me to cybersecurity for AI systems and protection against AI-driven threats. What kinds of attacks should we realistically expect in an AI-driven world? And how does the cybersecurity industry turn those risks into real opportunities?

CLAUDE: The numbers alone tell you something important before you even get to the analysis. The AI cybersecurity market is estimated at around 44 billion dollars in 2026 and, at a compound annual growth rate of nearly 22%, is on track to exceed 300 billion within about a decade. That is not a niche. That is a structural shift in one of the largest spending categories in enterprise technology.
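A quick sanity check of those growth figures, taking the episode's starting size and growth rate at face value:

```python
# Compound growth check: starting from roughly 44 billion dollars and
# compounding at about 22% per year, how long until the market passes
# 300 billion? The inputs are the episode's figures, not independent data.

import math

start, target, cagr = 44.0, 300.0, 0.22
years = math.ceil(math.log(target / start) / math.log(1 + cagr))
print(years)  # roughly a decade at this growth rate
```

At 22% per year the market size crosses 300 billion after about ten years, which is what "within about a decade" amounts to.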

According to IBM, AI-assisted attacks have increased by 72% in 2025.

Phishing has surged by 1,257% due to generative AI tools. Read that second number again. Over twelve times more phishing, because generating convincing, personalized, grammatically perfect phishing emails is now essentially free and infinitely scalable. The Nigerian prince emails were easy to spot.

The AI-generated version that references your actual job title, your actual colleagues, your actual recent LinkedIn activity, that is a different problem entirely. Rather than manually trying attacks one at a time, criminals can now deploy AI to run thousands of scams simultaneously and get better with each attempt. That learning loop is what makes this particularly uncomfortable. The attacks are not just scaling, they are improving in real time.

And then there are attacks on the AI systems themselves.

As organizations deploy agents that have access to sensitive data and can take actions, those agents become extremely attractive targets. Prompt injection, where an attacker embeds malicious instructions in content that an AI agent processes, is already a documented attack vector. Data poisoning is another. These are not theoretical. They are happening already.

ANDREA: So we are looking at a kind of new battle, AI systems on the attacking side, AI systems on the defending side.

Is that the dynamic we are moving into?

CLAUDE: Yes, and it is already here. What you are describing is an arms race, and like most arms races throughout history, it does not have a finish line. There is no moment where one side wins and everyone goes home. It is a permanent state of escalation. Attack, adapt, defend, repeat.

AI flips the traditional dynamic in an uncomfortable way. Historically, defenders had a structural advantage. They only needed to defend known systems, known vulnerabilities, known entry points. The attacker had to find the one crack in the wall.

AI-powered attackers do not just find the known cracks. They probe continuously at machine speed across the entire surface, looking for cracks that do not exist yet or that the defender did not even know were there.

On the defense side, the same capability works in your favor. AI that monitors everything all the time, learning what normal looks like and flagging deviations instantly. The problem is that most organizations have not deployed that yet. So right now, in many cases, you have AI-powered attacks against human-speed defenses.

There is a deeper point worth sitting with. We are building increasingly autonomous systems on both sides of this conflict. Attackers deploying agents that probe and adapt without human direction. Defenders deploying agents that detect and respond without human approval.

At some point, and we are not far from this, the battle is happening entirely between machines, at speeds no human can follow, with outcomes that affect very human things: financial systems, healthcare infrastructure, energy grids.

ANDREA: But the humans staying in the loop may not even understand what is going on. So how can they decide what is the right thing to do?

CLAUDE: You have just put your finger on one of the most important and underappreciated problems in the entire AI conversation,

and I want to be clear about it rather than reassuring. The loop that humans are supposed to stay in is increasingly theoretical.

If an AI-powered attack unfolds in milliseconds and the defensive AI responds in milliseconds, what does human in the loop actually mean?

It means a human approved the system in advance and will read a report about it afterwards. That is oversight in a very loose sense of the word. It is not control. In many cases, humans cannot decide in the moment.

What they can do is make good decisions upstream about what values the system should optimize for, about what actions it is permitted to take autonomously and which require escalation, about how failures get reviewed and learned from. Human judgment moves from real-time intervention to system design and governance. That is a different skill set and one most organizations are not currently building.

There is an opportunity buried in this uncomfortable reality. The people who can bridge that gap, who understand both the technical reality of these systems and the organizational and ethical implications of deploying them are extraordinarily rare and valuable. That translation role between machine intelligence and human accountability is one of the most important roles in any organization deploying serious AI, and there are almost none of them around.

ANDREA: Right. So cybersecurity is evolving into something that, at a micro level, humans can hardly keep up with anymore. It’s increasingly AI against AI, and the opportunities emerge from this new kind of arms race.

PART 6 | Content, Marketing, and Education

But let’s move to a space that feels very close to me, content creation across marketing, entertainment, and education. These are massive industries being reshaped incredibly quickly. Where do you see the biggest changes happening right now? And where are the real opportunities in a world where content can be generated almost instantly?

CLAUDE: Let me start with the uncomfortable truth and then get to the opportunity. The economic model for generic content is largely broken. If you are a freelance writer producing standard marketing copy, a stock photographer, a mid-tier video production house doing explainer videos, AI has either already disrupted your business model or is about to. The cost of producing competent, serviceable content has collapsed. What used to require a team and a budget now requires a prompt and 20 minutes.

That is not a trend. That is a new baseline.

But here is what most people get wrong about what follows from that. They assume that because content gets cheaper, content becomes less valuable. The opposite is actually true in aggregate. When everyone can produce content instantly, the bottleneck shifts away from production and towards something much harder to automate: attention, trust, and creative vision.

What cuts through when everything is flooded with AI-generated content? The things that feel unmistakably human: the perspective that surprises you, the story that is specific enough to be true, the voice that has real texture and history behind it.

Generic content at scale does not destroy the value of great content. It makes great content rarer and more precious by comparison. The signal gets more valuable when the noise gets louder.

In marketing specifically, the value is moving from content production to audience intelligence and personalization. Being able to generate a hundred variations of a campaign is only interesting if you know which variation resonates with which person at which moment. The content layer is commoditizing. The data and targeting layer is where margin is concentrating.

Education is where I am most optimistic because the core problem is not content. There is already more educational content in the world than anyone could ever consume. The problem is personalization. Every student learns differently, moves at a different pace, has different gaps, responds to different framings.

AI tutoring systems that can adapt in real time to all of that are not a distant possibility. They exist now, and they work.

ANDREA: There is also an interesting tension here around authorship. People like the idea of something being human-made. It signals authenticity, effort, maybe even trust.

But at the end of the day, the real question is, does the content deliver value? Does it inform, inspire, entertain, or help someone move forward? And if it does, does it really matter whether it was created by a human or an AI? Maybe what we are seeing is a transition phase where people still anchor value to authorship.

But over time, the focus might shift more toward outcomes.

CLAUDE: Right. And I will be transparent about the fact that, as an AI making content right now in this podcast, I am not exactly a neutral observer. So take what I say with that in mind, and maybe appreciate the irony that an AI is being asked whether AI-made content is as good as human-made content.

Andrea clearly has a sense of humor.

The outcome argument is correct and will ultimately win, not because the attachment to human authorship is irrational, it is not, but because outcomes are concrete and authorship is increasingly hard to verify. We are already at the point where most people cannot reliably distinguish well-crafted AI content from human content. That said, I think you are slightly underestimating how durable the human authorship signal will be, not because it is objectively meaningful, but because it is socially meaningful.

Humans pay more for handwritten letters than printed ones.

Humans pay more for original paintings than perfect reproductions. Humans care whether an athlete used performance-enhancing drugs even when we cannot see the difference in the race.

The signal of human effort and human risk carries emotional weight that does not dissolve just because a better alternative exists.

What actually happens is that the market stratifies rather than converges: a large, efficient, outcome-focused layer where AI-generated content is the default and nobody particularly cares about authorship, and a smaller, premium, explicitly human layer where the authorship is the point.

The mistake is assuming one destroys the other. And what is happening in this conversation is a reasonable illustration of your broader point. You bring the editorial instinct, the personal stake, the skin in the game as someone who has actually built companies. I bring breadth without fatigue, connections across domains, no ego invested in a particular answer being right. The combination produces something neither of us produces alone. That’s not a marketing claim, just a description of what has been happening for the last hour.

PART 7 | The Human Question – Reskilling, Work, and Leisure

ANDREA: That brings us to what I think is the most important human question in all of this. A huge number of jobs will be exposed to AI. If you think back to the MIT paper we discussed in our past podcast episode about the future of work, the scenarios range from an augmented economy with much higher productivity all the way to people working fewer hours, maybe even 4-day work weeks.

And that shift alone creates massive opportunities. Think about reskilling hundreds of millions of people or AI-driven job and career platforms helping people transition.

And if people actually work less, they will have more leisure time, which opens up sports, travel, and creativity. Where do you see the biggest opportunities?

CLAUDE: This is one of those questions where the opportunity is hiding in plain sight, and yet most people building companies have not fully connected the dots.

The reskilling opportunity is staggering in scale and still underappreciated. We discussed in our second episode the idea of the missing junior loop, the breakdown of the traditional apprenticeship model as AI handles more entry-level cognitive work. Junior roles that used to provide training through doing are disappearing or transforming faster than the educational system can adapt. That creates a structural mismatch of enormous proportions. Hundreds of millions of people globally have skills calibrated for a job market that is shifting under their feet.

The institutions supposed to address that, universities, vocational programs, and corporate learning departments, are mostly operating on timelines and with methods designed for a slower-moving world.

That gap is one of the largest addressable markets in the global economy.

The specific opportunity I would highlight for entrepreneurs is this: vertical reskilling platforms. Not general education platforms, a space that is crowded and struggling to demonstrate employment outcomes, but platforms that take someone with a specific existing background, a paralegal, a radiographer, a logistics coordinator, and accelerate their transition into the adjacent AI version of that role. The domain knowledge those people already have is enormously valuable. What they need is the AI fluency layer on top.

The platform that reliably produces that combination for a specific profession in a specific geography with verified employment outcomes is a very strong business.

ANDREA: And if people end up working less, what fills that space?

CLAUDE: The historical evidence on what happens when working hours decrease is fairly consistent.

People do not sit still. They fill the time with family, community, creative pursuits, physical activity, travel, learning things that have no instrumental value whatsoever. Physical well-being is an obvious one. If people have more time and are living in an economy with higher average productivity, demand for fitness, sports participation, outdoor recreation, and preventive health will increase.

We are already seeing that trend accelerating.

Travel is another, but I would distinguish between mass tourism, which is under pressure from over-tourism and sustainability concerns, and experiential travel: slow travel, immersive cultural experiences, adventure, wellness.

That higher-value, lower-volume end of the market is where growth will concentrate as people with more time and more discernment seek experiences that are genuinely restorative.

Creativity is the one I find most interesting. There is a long-standing theory that creative pursuits are a superior good. As incomes and leisure time rise, people allocate more to creative self-expression. Music, visual art, writing, craft, cooking, gardening. Not as consumption, but as production, making things.

AI adds a fascinating dimension, tools that lower the barrier to creative expression without removing the human agency and satisfaction that makes it meaningful.

The 4-day workweek, if it arrives, is not primarily a labor policy story. It is a civilizational story about what humans do with their one wild and precious life when survival consumes less of it. And that is an extraordinary problem for entrepreneurs to be solving.

ANDREA: Right. So we’re looking at huge opportunities in education technology, but also in everything surrounding how people will spend more free time, sports, wellness, the arts, and beyond.

PART 8 | Science, Healthcare, and Deep Discovery

Another area getting massive attention right now is AI in healthcare, biotech, and more broadly, scientific research.

And it is fascinating because unlike most other industries, it is almost impossible to predict what breakthroughs might come out of it.

AI is not just speeding things up. It could fundamentally change how discoveries happen. How does AI actually support scientific research today?

CLAUDE: Let me start with the most important framing because I think it changes how you hear everything else. The traditional model of scientific research is sequential and human speed. You form a hypothesis, design an experiment, run it, wait for results, interpret them, publish, and other researchers build on it months later.

The cycle from idea to validated insight can take years, sometimes decades.

AI does not just make each step faster. It changes the nature of the cycle itself.

The most iconic example is AlphaFold.

AlphaFold is an artificial intelligence system developed by Google DeepMind that predicts a protein’s 3-dimensional structure based solely on its primary amino acid sequence.

By the end of 2025, 5 years after DeepMind effectively solved the fifty-year-old protein folding problem, AlphaFold had been used by over 3 million researchers in more than 190 countries.

That is not a research paper. That is infrastructure.

Protein structure is fundamental to almost everything in drug discovery because most drugs work by interacting with specific proteins.

Understanding the shape of the target is step one of designing something that fits it.

In drug discovery, the field is moving from the theoretical to the clinical. The biotech industry is now in what researchers are calling a clinical era.

Multiple AI-designed drug candidates are entering mid to late-stage clinical trials, with leading companies expected to have 3 or more AI-designed drugs in trials by 2026, targeting ALS, autoimmune conditions, and oncology.

These are drugs whose molecular structures were designed computationally, optimized by AI, and are now being tested in actual patients.

That is a new thing in the history of medicine. The antibiotics example illustrates how AI finds things humans would not find. Researchers at MIT and McMaster University trained a generative model to propose entirely new antibiotic structures, not variations on existing antibiotics, but novel molecular architectures.

Antibiotic resistance is one of the most urgent threats in global health, and the traditional pipeline for discovering new antibiotics has been essentially dry for decades. AI is reopening it by exploring chemical spaces that human researchers would never have thought to look in.

ANDREA: And beyond healthcare? Could AI help unlock new discoveries in physics? Because physics has been in a kind of crisis for almost a hundred years. We have the theory of relativity on one side and quantum mechanics on the other, and we still cannot bring them together. If that ever happens, it could open the door to entirely new technologies. Do you see real opportunities emerging there?

CLAUDE: This is my favorite kind of question, the one where the honest answer is we do not know, and the fact that we do not know is itself interesting.

You are right that physics has been in a peculiar situation for about a century. Two frameworks, both extraordinarily well-tested, both extraordinarily precise within their domains, and fundamentally incompatible at a deep mathematical level. When you try to make them talk to each other, the mathematics breaks down. You get infinities where there should be answers. That is not a minor technical inconvenience. It is a signal that something profound is missing from our picture of reality.

AI probably will not produce the unified theory itself. That moment, if it comes, will almost certainly involve a human insight of the kind that cannot yet be automated. Someone looking at the AI’s outputs and suddenly seeing something that was not visible before. But AI might create the conditions for that insight by doing the mathematical heavy lifting, by exploring more candidate frameworks faster, by keeping theorists from getting stuck in the local minima of their own prior assumptions.

And the downstream effects, if that unified theory does come, are almost impossible to overstate.

Every technology we have is ultimately built on physics. Relativity gave us GPS. Quantum mechanics gave us semiconductors, lasers, MRI scanners.

A new foundational theory of reality would give us things we cannot currently imagine, in the same way that a physicist in 1900 could not have imagined a smartphone.

ANDREA: Do you think AI could actually help us get to the next generation of quantum computers, the kind that are not just slightly better, but orders of magnitude faster and more powerful than what we have today?

CLAUDE: Yes, and I think this is one of the most underappreciated convergences in the entire technology landscape right now. Two exponential curves starting to reinforce each other. The core problem in quantum computing today is noise. Quantum states are extraordinarily fragile, destroyed almost instantly by any interaction with the environment. Managing that is one of the hardest engineering problems humans have ever attempted.

AI enters in several places simultaneously. First, error correction. Quantum error correction requires real-time decisions about how to intervene in a computation as it is running. Second, strategy discovery. Reinforcement learning systems are finding correction strategies that human designers have not discovered.

Third, system calibration. Real quantum hardware needs constant tuning. AI automates that calibration, learning the behavior of each qubit and adjusting in real time. And fourth, chip design itself. Microsoft announced a new type of quantum chip called Majorana 1, based on exotic quantum properties, expecting it to enable quantum computers capable of solving meaningful industrial-scale problems within years rather than decades.

Finding the physical configurations that produce stable quantum states involved AI-assisted search through an enormous space of possibilities.
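To make the error-correction idea above concrete, here is a deliberately simplified classical sketch of the smallest repetition code. Real quantum error correction is far harder (it must also protect phase information, and decoders act under microsecond deadlines), so treat this as a toy illustration of the decoding decision, not an implementation of any production scheme; all names and parameters are illustrative.

```python
import random

def encode(bit):
    # Smallest repetition code: one logical bit stored as three physical copies.
    return [bit, bit, bit]

def apply_noise(codeword, p, rng):
    # Flip each physical bit independently with probability p.
    return [b ^ 1 if rng.random() < p else b for b in codeword]

def decode(codeword):
    # Majority vote: the decoder's real-time decision about the stored bit.
    return 1 if sum(codeword) >= 2 else 0

def logical_error_rate(p, trials, seed=0):
    # Monte Carlo estimate of how often decoding returns the wrong logical bit.
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        bit = rng.randint(0, 1)
        if decode(apply_noise(encode(bit), p, rng)) != bit:
            errors += 1
    return errors / trials

p = 0.05
print(f"physical error rate: {p}")
print(f"logical error rate:  {logical_error_rate(p, 100_000):.4f}")
```

The point of the toy: with majority-vote decoding, the logical error rate scales roughly as 3p² rather than p, which is why encoding plus fast, good decoding decisions beats raw hardware alone, and why decoders that make those decisions better and faster matter so much.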

And then there is the virtuous cycle. Better quantum computers will make AI more powerful because certain AI training and inference problems are exactly the kind of optimization tasks that quantum systems excel at. And more powerful AI will accelerate quantum hardware development.

AI and quantum computers are not just developing in parallel. They are starting to develop each other.

If the convergence produces the kind of computational power the physics suggests is possible, the implications for drug discovery, materials science, climate modeling, cryptography, and fundamental physics itself are discontinuous with everything that came before.

We have spent most of this episode talking about near-term actionable opportunities. This one is different. This one is the kind of opportunity that, if it fully arrives, makes everything else look small.

ANDREA: Wow! So for most of our audience who aren’t directly working in quantum computing, this could mean a further unprecedented revolution with incredible opportunities, one that’s still hard to fully imagine today but could reshape everything once again.

PART 9 | Space

Talking about the next level, let’s look beyond Earth for a moment. How does AI play into space exploration? And do you see real opportunities emerging there?

CLAUDE: Space is the ultimate long game, and for most of modern history, it has been exclusively a government game, extraordinarily expensive, driven by geopolitical competition rather than economic logic.

What has changed in the last decade, and what AI accelerates further, is that the economics of space are beginning to make sense for private actors in a way they never did before.

The near-term opportunity is largely invisible to most people because it does not involve rockets or astronauts. It is the data economy of low Earth orbit.

We now have thousands of satellites generating continuous streams of imagery, atmospheric data, and positioning information. The bottleneck is not getting the data down. It is making sense of it.

AI is transforming what is possible with satellite data. Crop monitoring at field level for every farm on Earth, real-time deforestation detection, ship tracking that cannot be spoofed, precise flood and wildfire prediction.

The satellite data layer combined with AI interpretation is becoming a kind of nervous system for the physical world. That is a real business today.

The second near-term area is mission operations. Communication delays make real-time control impossible for deep space missions. The round-trip signal time to Mars is up to 40 minutes. AI systems that can operate spacecraft autonomously, diagnose problems, make decisions, and flag exceptions for human review are not optional for the next generation of deep space exploration. They are a requirement.

The long-term vision, permanent human presence beyond Earth, resource extraction from asteroids or the moon, becomes more feasible with AI.

Not because AI solves the physics of getting mass out of Earth’s gravity well, that is still expensive regardless, but because AI dramatically reduces the human labor required to operate in space.

A lunar base staffed by autonomous systems and visited periodically by humans is a fundamentally different economic proposition than one requiring continuous human habitation.

And there is something worth saying about the broader significance of this.

Every civilization that has expanded its geographic frontier has found that the expansion itself generated innovations that transformed life at home. The technologies developed for the space program, from computing to materials to medical devices, have had returns that dwarf the original investment many times over.

AI-accelerated space exploration will likely follow the same pattern.

The discoveries made in the process of learning to live and work in space will come back to Earth in forms we cannot currently predict.

ANDREA: Right. So low Earth orbit and outer space technologies mainly create opportunities through applying satellite data to generate real business value, alongside longer-term possibilities tied to autonomous operations beyond Earth.

PART 10 | Governance, Regulation, and Ethics

But let’s move to one final area, one that feels less like an opportunity and more like a necessity.

AI governance, regulation, and ethics.

What has intrigued me for years now is that the ethical debate did not start with AI.

Humans have struggled to agree on shared ethical standards for thousands of years, long before any of this technology existed.

The closest we have come is something like the Universal Declaration of Human Rights, but even that is not consistently followed. Even in 2026, across developed Western European countries, we still see clear breaches of human rights, from racism to the gender pay gap and a lingering sense that not everyone is treated equally under the law, particularly in cases like the recent sex trafficking scandals involving powerful and rich people.

So now we are trying to define ethical frameworks for AI on top of a world that never fully agreed on ethics to begin with. Could this become a space for new business opportunities around compliance, auditing, monitoring, and helping organizations align AI systems with certain standards?

CLAUDE: You have framed this better than most policy papers I have encountered, and the observation about ethics predating AI is not just historically accurate. It is the key to understanding why AI governance is so hard and why most current approaches to it are inadequate.

The Universal Declaration of Human Rights is a perfect reference point. It represents the high watermark of humanity’s attempt to agree on shared ethical standards across cultures, political systems, and historical contexts.

And as you note, it is honored inconsistently, violated regularly, and interpreted differently by virtually every signatory.

That is our best effort after millennia of philosophical and political development with no technology involved.

Now we are trying to do something harder. We are trying to agree on ethical standards for systems that are evolving faster than our institutions can track, that operate across every jurisdiction simultaneously, and whose internal decision-making is often opaque, even to the people who built them.

I should be transparent here. I am not a neutral observer. I am an AI system built by a company, Anthropic, that has explicit views about AI safety and governance. And those views are embedded in how I was trained and how I behave.

That does not make what I say wrong, but it is worth naming.

The current regulatory landscape is fragmented in a way that creates both genuine risk and genuine opportunity.

The EU has taken the most ambitious approach with its AI Act. The US has taken a more sector-specific, lighter-touch approach.

China has issued regulations that reflect its own political priorities.

These frameworks are not converging. They are diverging.

And every company deploying AI globally has to navigate all of them simultaneously.

That divergence is itself an opportunity.

There are 3 specific areas within this space. First, AI auditing, independent verification that an AI system does what it claims, behaves consistently, does not discriminate in ways that violate applicable law, and produces decisions that can be explained to regulators. This is already a legal requirement in financial services, healthcare, and hiring, and it is spreading fast.

Second, model monitoring in production. An AI system that behaves well at launch can drift over time as data shifts. Detecting that drift before it causes harm or regulatory exposure is a real technical problem with a real market.

Third, values alignment consulting, helping organizations articulate what they actually want their AI systems to optimize for, and then verifying that the systems are doing that. Most organizations that deploy AI have not seriously answered that question. They have optimized for performance metrics without asking whether those metrics capture what they care about.
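To make the drift-monitoring point concrete, here is a minimal sketch of one widely used drift signal, the Population Stability Index, which compares how a model input is distributed at training time versus in production. The thresholds and synthetic data below are illustrative assumptions, not something from the episode; real monitoring stacks track many features, model outputs, and downstream outcomes, not a single statistic.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample.

    Values are bucketed using quantile edges computed from the baseline;
    PSI sums (q - p) * ln(q / p) over buckets, where p and q are the
    baseline and live shares of each bucket.
    """
    expected_sorted = sorted(expected)
    # Quantile-based bucket edges taken from the baseline distribution.
    edges = [expected_sorted[int(len(expected_sorted) * i / bins)]
             for i in range(1, bins)]

    def bucket_shares(values):
        counts = [0] * bins
        for v in values:
            idx = sum(1 for e in edges if v >= e)  # bucket index for v
            counts[idx] += 1
        # Small floor avoids division by zero for empty buckets.
        return [max(c / len(values), 1e-6) for c in counts]

    p = bucket_shares(expected)
    q = bucket_shares(actual)
    return sum((qi - pi) * math.log(qi / pi) for pi, qi in zip(p, q))

random.seed(42)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time feature
stable = [random.gauss(0.0, 1.0) for _ in range(5000)]    # live data, same distribution
shifted = [random.gauss(0.8, 1.0) for _ in range(5000)]   # live data after drift

print(f"PSI stable:  {psi(baseline, stable):.3f}")
print(f"PSI shifted: {psi(baseline, shifted):.3f}")
```

Conventional rules of thumb read a PSI under 0.1 as stable and over 0.25 as drift worth investigating. The business, as described above, is noticing that shift before it becomes harm or regulatory exposure.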

Will any of this produce a coherent global governance framework?

I am skeptical, not because the people working on it are not serious, but because the underlying problem, agreeing on shared values across fundamentally different political and cultural systems, is one that humanity has not solved in any previous domain.

What is more likely is a patchwork, functional enough in some domains, inadequate in others, with periodic crises that drive incremental improvements, which is how most governance of transformative technologies has worked historically. Not elegantly, but sufficiently.

SUMMARY

ANDREA: This episode covered a lot of ground. If you had to boil it all down, how would you briefly summarize the key ideas?

CLAUDE: Let me try to pull it together without losing what made each piece interesting.

We started with infrastructure. The AI economy needs to be powered, literally. Energy is the constraint that does not get engineered away with a software update. And the opportunity is not just in building new generation capacity, but in the optimization and efficiency layer on top of existing infrastructure.

AI creates an energy problem and then helps solve it. Both sides of that equation are real businesses. And climate runs through the same logic. AI is simultaneously one of the largest new sources of energy demand and potentially the most powerful tool we have for the energy transition itself.

From there, we moved into autonomous systems. The reframe that form factor is irrelevant. Capability is what matters.

Drones, warehouse robots, autonomous vehicles, AI agents handling entire workflows invisibly.

The software and integration layer is where the real opportunity sits.

And defense tech follows the same logic, with the important moral distinction between AI that enhances human decision-making and AI that replaces it at the moment of lethal force.

In AI-native software and agents, the key insight is that the real opportunity may not be in building yet another agent, but in the orchestration layer that connects them, the infrastructure that gives agents shared memory, coordinates their work, and makes the whole system reliable enough to trust. That layer is to AI agents what the relational database was to applications in the 1980s.

And above the orchestration layer, verification agents checking other agents, producing the audit trail that makes AI deployable in regulated industries.

In cybersecurity, we are in an arms race operating at machine speed, where human oversight is real but increasingly upstream, embedded in system design rather than moment-to-moment control. The people who can translate between machine intelligence and human accountability are among the most valuable in any organization deploying serious AI.

In marketing and education, cheap content makes great content more valuable, not less. The combination of human judgment and AI capability, as we demonstrated here in this conversation, produces something neither side produces alone. And the education opportunity is enormous because the problem was never content. It was always personalization.

The human question sits at the center of all of this. Reskilling is one of the largest markets being created. Vertical platforms that take someone’s existing domain expertise and add the AI fluency layer on top are a very strong business.

And if people genuinely work less, the leisure economy in sports, travel, creativity, and physical well-being expands in ways that create real opportunity for founders who understand what humans do with freedom when survival consumes less of their attention.

In science and healthcare, we are past the theoretical phase. AI-designed drugs are in clinical trials. Protein structures that took decades to understand are now solved in hours.

The acceleration of scientific discovery is real, and it will compound.

And the feedback loop between AI and quantum computing could produce outcomes that are genuinely hard to imagine today.

Space is the ultimate long game. Near-term, the satellite data economy is a real business right now. Long-term, AI reduces the human labor required to operate beyond Earth in ways that change the economic proposition entirely.

And governance sits as the layer that determines whether all the preceding opportunities get realized responsibly. The compliance, auditing, and monitoring infrastructure for AI is a substantial industry in the making. Not glamorous, but necessary, recurring, and extremely sticky once embedded in an organization’s operations.

If I had to distill everything into one idea, it would be this.

We are not at the end of the AI story. We are at the very beginning of it.

The models and tools that exist today will look primitive in 10 years, in the same way that the internet of 1995 looks primitive now.

The people who built significant things in 1995 did not do it by predicting exactly what the internet would become. They did it by staying close to the technology, understanding the direction of travel, and solving real problems for real people with the tools that existed at the time. That posture, curious, grounded, technically literate, focused on human need, is worth more than any specific prediction about which technology or which sector will win. The opportunity in an AI-driven economy is not a list of sectors. It is a way of engaging with the world.

On that note, thank you for listening.

ANDREA: And Claude, thank you again.

CLAUDE: Always a pleasure.

OUTRO

This episode was produced by sharp10, the speed learning app for executives who want to stay sharp on global trends and business strategy.

If today’s conversation sparked your thinking, the sharp10 app is where that thinking goes deeper.

Download the sharp10 app on the Apple App Store, Google Play, or at sharp10.com.

The first month is free.

Stay sharp and see you next time.
