
How Operational Efficiency Reshaped the Entire Tech Industry

My take on the real signals behind tech's mass layoffs — and why AI is just the latest chapter

Years in the Making. One Headline Gets the Credit.

You must have seen the headlines. In February 2026, Block cut 40% of its workforce. Just days earlier, Amazon slashed 16,000 employees — weeks after announcing record revenue of $716.9 billion for 2025. Record profits. Record layoffs. Same quarter.

Every headline blamed AI, but what if I told you AI is just fuel thrown on a fire that’s been burning for years — long before the first prompt was written? It might sound contrarian, but hear me out…

When you’re passionate about truly building software and have lived through multiple Gartner hype cycles, you start seeing the patterns. They are always the same, and they always have the same driver behind them: revenue.

Artificial Growth just before Artificial Intelligence.

I’ll take you back just a few years, to when we all thought the world was ending and locked ourselves in our homes. We humans are quite adaptable and an unstoppable force - so what did we do? We bought all the toilet paper off the shelves and got online. This had several side effects: accelerated adoption of remote work and education, a lot of time in front of computer screens, big revenue opportunities for the already established tech companies and, oh well… global toilet paper and chip shortages.

The revenue was pouring in, interest rates were near zero, and Wall Street was rewarding growth above everything else. So we hired. Like… a lot. It felt like a second dot-com boom and a great opportunity to cash in big! Between the end of 2019 and the 2022 peak, Meta nearly doubled its workforce from 44,942 to 87,314 (+94%). Amazon added 810,000 employees in just two years, growing from 798,000 to 1,608,000 (+102%). Google grew from 118,899 to 190,234 (+60%). Microsoft expanded from 144,000 to 221,000 (+54%).

Unfortunately, most of us know that 10x the engineers don’t ship 10 times as fast. So in reality what we did was increase expenses, balloon operational complexity, and become less efficient — all for the sake of some fast money. We knew it wasn’t sustainable. We knew it was going to hurt the industry. We went with it anyway.

Operational Efficiency: A Journey Through Time.

Big companies have always been obsessed with revenue rather than efficiency. They need to make their shareholders happy and they rarely care if you have full test coverage or your Kafka cluster has hot spots.

As a result, they pushed passionate people like you and me to come up with solutions, and those solutions led us to this point in time. For the sake of this article, let’s call them signals. Signals we missed because we were too busy making them rich.

What they didn’t realize is that the solutions we built to survive their chaos were quietly doing something more valuable than any feature we shipped.

When I was an architect, my boss told me something that stuck in my head to this day: “A dollar saved from costs is better than a dollar gained from revenue.” It took me some time to fully understand why. Revenue is a bet — you build a product, you invest months of engineering time, and you hope the market rewards you. It might exceed expectations. It might flop. Cost savings are certain. They hit the bottom line immediately, they’re not subject to market risk, and they’re not diluted by taxes, commissions, or cost of goods. A dollar saved is worth more than a dollar earned — every executive knows this, and it’s the quiet principle behind every signal I’m about to walk you through.
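The arithmetic behind that principle is easy to sketch. The margin figure below is hypothetical, picked only to show the mechanics, not taken from any real company:

```python
# Illustrative sketch of "a dollar saved beats a dollar earned".
# NET_MARGIN is a made-up assumption, not a real company's figure.

NET_MARGIN = 0.15  # assume 15 cents of every revenue dollar survives as profit

def profit_from_revenue(extra_revenue: float, net_margin: float = NET_MARGIN) -> float:
    """New revenue is diluted by cost of goods, taxes, and commissions."""
    return extra_revenue * net_margin

def profit_from_savings(cost_saved: float) -> float:
    """A saved cost hits the bottom line in full (ignoring tax effects)."""
    return cost_saved

# Adding $1 of profit takes ~$6.67 of new revenue at a 15% margin,
# but only $1 of cost savings.
print(profit_from_revenue(1.0))            # 0.15
print(profit_from_savings(1.0))            # 1.0
print(round(1.0 / NET_MARGIN, 2))          # 6.67
```

Under these assumptions a dollar of savings is worth roughly as much to the bottom line as six to seven dollars of new revenue — which is exactly why cost cuts move a P&L faster than product bets.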

2014-2016: Full-stack replaces specialists — While the concept of the full-stack engineer emerged much earlier, the mass adoption of JavaScript and Node.js, which proved the same language could run on both ends just as well, cemented the role in the market. The cost argument was brutal. Every handoff between a frontend and a backend engineer meant a ticket, a meeting, a PR review, a miscommunication, a delay. Multiply that across 10 teams and you’re burning man-hours on coordination, not code. And that’s before you account for the human factor: the ego, the territory, and the “that’s not my responsibility” conversations. Companies looked at that overhead and started posting “Full-stack engineer” instead of separate roles.

2013-2016: DevOps and Automation — In 2013, “The Phoenix Project” put into words what every ops engineer already felt: the wall between development and operations was killing delivery speed. Infrastructure as Code tools like Terraform, Ansible, and Chef started replacing manual server provisioning. CI/CD pipelines turned deployments from orchestrated rituals into a thumbs up from your team lead. The business case was much the same as full-stack: remove the handoff, remove the waste. But DevOps went deeper. It wasn’t just about merging two roles. It was about removing the human from the pipeline so people could build without breaking things.

2015-2018: Public Cloud Mass Adoption — In the pursuit of shorter time to market, public cloud was supposed to be the Ryanair of infrastructure — pay only for what you use, scale up when you need it, scale down when you don’t. Ship fast with pre-built managed services and “serverless infrastructure” that required no time to provision and were ready to go. The pitch was compelling: no more data centers, no more capacity planning, no more hardware procurement cycles — just a few quick clicks in a console. And it wasn’t really a choice — the industry adopted it as the standard. If you weren’t using it, you weren’t one of the cool kids anymore. Job postings required it, clients expected it, investors asked about it. But as adoption grew, the pendulum swung the other way, and at some point you practically needed a PhD in cloud billing to understand what you were paying for. That complexity went against efficiency, so we gave birth to FinOps.

2019-2021: FinOps and Cost Control — By 2025, the industry was spending $723 billion on cloud and wasting roughly a third of it. Executives started to question this cool new toy, which sometimes accounted for a quarter of their operational expenses. Lack of long-term commitments, over-provisioned resources, forgotten hard drives and excessive backups, badly routed traffic — there were a million and one reasons behind these numbers. We spent hours and hours analyzing and inventing new practices and guidelines to protect us from that bottomless invoice. FinOps turned cloud spending from an engineering afterthought into a financial discipline — the same “dollar saved is better than a dollar earned” principle, applied to your AWS bill.
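The core FinOps exercise is exactly this kind of waste accounting. A minimal sketch, with an entirely hypothetical resource inventory and prices (not real billing data or any provider’s API), might look like:

```python
# FinOps-style sketch: turning a raw resource inventory into a waste report.
# Every resource name, cost, and utilization figure below is invented
# purely for illustration.

inventory = [
    # (resource, monthly_cost_usd, avg_utilization)
    ("api-server-fleet",    4200.0, 0.18),  # over-provisioned compute
    ("forgotten-volume-1",    80.0, 0.00),  # unattached disk, pure waste
    ("staging-db-replica",   950.0, 0.05),  # idle most of the month
    ("checkout-service",   10000.0, 0.72),  # actually earning its keep
]

UTILIZATION_FLOOR = 0.30  # flag anything below 30% average utilization

def wasted_spend(items):
    """Estimate monthly spend on under-utilized resources."""
    return sum(cost for _, cost, util in items if util < UTILIZATION_FLOOR)

total = sum(cost for _, cost, _ in inventory)
waste = wasted_spend(inventory)
print(f"${waste:,.0f} of ${total:,.0f}/month flagged ({waste / total:.0%})")
```

Real FinOps tooling layers tagging, commitment discounts, and anomaly detection on top, but the discipline starts with making this number visible at all.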

2020-2022: Platform Engineering and DORA — What platform engineering actually did was redistribute accountability and remove the ticket queue from the infrastructure team. The idea was sound: infrastructure knowledge shouldn’t be locked inside a single team. It should be common knowledge. But what started as a cultural shift quickly ran into reality. As public cloud providers became quite complicated, the engineers inheriting these responsibilities didn’t always have the expertise or interest, and even if they did, the tooling kept abstracting that knowledge further away from them. You can hand someone a Terraform module, but that doesn’t mean they understand what it provisions or why. We had hired a lot of engineers without ever checking whether they knew infrastructure, but platform engineering demanded end-to-end thinking across application, infrastructure, and delivery.

That kind of experience takes years to build and a genuine passion for the stack, and from what I’ve seen those people are rare. So companies ended up with platform responsibilities scattered across people who were never quite equipped for them — and that’s where DORA came in. DORA gave leadership the scoreboard they’d been missing: deployment frequency, lead time, failure rate, recovery time. You can’t optimize what you can’t measure, and now everyone had the numbers to see what was actually working and what wasn’t. Yet again, we invented new problems just to fix them ourselves.

2022-2024: The return of the monolith (not really, no…) — After years of splitting everything into microservices and cloud-native adoption, the industry had a collective hangover. What was supposed to give us independence and scalability gave us distributed debugging nightmares and made DevOps engineers probably the most important figures in engineering. Another hype cycle began as people started calling out the operational complexity, and monolith architectures became trendy again. You know what… LLM has entered the chat. That comeback would be cool if it weren’t for the context window, but I’ll tell you about that a little later…

I’m sure there are many more signals that went over my head. But as you can see, we can summarize this in a few words: Roles & Responsibilities consolidation based on accountability, regardless of technical stack. Automation, which not only reduces context switching and work interruption but also reduces the chances of human error. And better Cost Control — making every dollar of infrastructure spend visible and accountable.

This is creating a horizontal cut across the industry — one driven by business needs, not technical silos. For decades we’ve been building vertically. The signals are telling us to think horizontally.

AI Made Us Faster. It Didn’t Make Us Smarter.

Let’s give credit where it’s due — AI coding tools genuinely changed how we work. It abstracted away the language specifics but left us working with the same design patterns and concepts. It granted us speed but forced us to think more about what we’re building and less about how we’re building it - something that the good engineers were already doing.

Here’s the thing though — garbage in, garbage out. Give AI to an engineer with solid architecture and domain knowledge and you might finally get that 10x engineer everyone has been talking about since the 60s. Give it to someone who was already making a mess — and you’ve just handed a machine gun to a madman.

If your requirements were vague before AI, speed isn’t going to save you — you’re just shipping garbage faster. That’s why some older methodologies around using product design requirements as the foundation of your codebase are making a comeback. Remember Test Driven Development — that old practice the dogmatists swore by and the rest of us made fun of? Well, it’s back from the dead. Understand the specifications, write the test first, define what “correct” looks like, and then let AI generate the code. It’s the fastest feedback loop you can give it — and the only honest one.
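That loop can be made concrete. Take a hypothetical spec, say “a cart total applies a 10% discount over $100” (the function and rule below are invented for illustration): you encode the spec as tests before any implementation exists, then let AI generate code until they pass.

```python
# Test-first sketch of the loop described above. cart_total and its
# discount rule are hypothetical, invented purely for illustration.

def cart_total(prices: list[float]) -> float:
    """Sum the cart; apply a 10% discount when the subtotal exceeds 100."""
    subtotal = sum(prices)
    return round(subtotal * 0.9, 2) if subtotal > 100 else round(subtotal, 2)

# These tests are the specification written first: they define what
# "correct" looks like before any implementation is generated.
def test_no_discount_at_or_below_100():
    assert cart_total([40.0, 60.0]) == 100.0

def test_discount_above_100():
    assert cart_total([80.0, 30.0]) == 99.0  # 110 * 0.9

def test_empty_cart():
    assert cart_total([]) == 0.0

test_no_discount_at_or_below_100()
test_discount_above_100()
test_empty_cart()
```

The tests are small, unambiguous, and machine-checkable — which is precisely what makes them a better prompt than a paragraph of vague requirements.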

That’s why engineers should care. Engineers should care about the domain, the product requirements, about the architecture and constraints. AI is a multiplier and multipliers work in both directions. The ones who will thrive are those who master the context — keeping it in the efficient token window, feeding the right constraints, the right architecture, the right domain knowledge into these models.

These models impose rules and constraints on us, but in the name of efficiency we will build around them. Remember when monolith architecture was just making a comeback? Well - nope. Models don’t perform well with a million-line codebase in their context window, so we will keep building microservices, not because we want to but because they are a good fit for modern LLMs. Sure, the good news is that we can outsource the operational complexity to AI, but this part of the architecture is no longer our choice to make.
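The constraint is easy to put numbers on. Assuming roughly 10 tokens per line of code and a 200K-token context window (both rough, illustrative figures, not measurements of any specific model), a million-line monolith simply doesn’t fit, while a typical microservice does:

```python
# Back-of-the-envelope sketch of the context-window constraint.
# Both constants are rough assumptions, not measurements of any model.

TOKENS_PER_LINE = 10       # rough average for source code
CONTEXT_WINDOW = 200_000   # illustrative modern LLM context size

def fits_in_context(lines_of_code: int) -> bool:
    """Can a codebase of this size fit entirely in one context window?"""
    return lines_of_code * TOKENS_PER_LINE <= CONTEXT_WINDOW

print(fits_in_context(1_000_000))  # monolith: False (needs ~10M tokens)
print(fits_in_context(15_000))     # single microservice: True
```

Retrieval and repo-indexing tools narrow the gap, but a service small enough to fit whole in the window stays the path of least resistance.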

Road to a Trillion: Same Goal, Different Price.

Apple. For those of you who know me, you’ll know that despite being a total nerd I admit the practicality of all Apple devices, and I use the entire ecosystem because “it simply works”. I started using them way back when Steve Jobs was at his peak - I had the first iPhone and I was amazed by the product quality and details. Funny enough, it was not his genius that turned Apple into a multi-trillion-dollar company. It was Tim Cook. Tim Cook is well known for his operational efficiency, and for how he sucked the air and creativity out of the brand yet more than tripled the valuation in just a few years. He slashed inventory from 30 days to 6 days, made the iPhone boxes small enough to fit 70% more units per shipping pallet, and turned Apple into the first company to hit $1 trillion, $2 trillion, $3 trillion, and $4 trillion — all without a single mass layoff.

Meta. After an all-time high in 2021, Meta’s stock crashed 76% to $88 in October 2022 — its lowest since 2016. They had to act fast. So what did they do? Well, it turns out firing 21,000 people can send your stock from $88 to $796. What followed was Meta’s Year of Efficiency. Revenue per employee soared, the stock recovered 194%, Wall Street rewarded lean operations, and other companies took note.

What both companies proved — through very different means — became an industry standard. A playbook that everybody will follow. A shortcut teleportation device to a good valuation and some happy shareholders. We built it and gave them the tools.

Am I the Next Sacrifice in the Name of the Shareholders?

Full-stack engineers were replacing specialists in 2014. DevOps was automating the pipeline in 2013. Cloud was supposed to save us money and didn’t. FinOps was invented to clean up the mess. Platform engineering tried to solve one issue and created a few others. The monolith came back (and not really). Every single one of these signals reduced the need for headcount — years before anyone typed a prompt.

And here’s the part nobody talks about: there is not a single multi-trillion dollar company built by AI. Not one. The AI companies themselves are burning billions. OpenAI lost $5 billion in 2024. The companies that actually reached trillions — Apple, Meta, Microsoft — got there through operational efficiency. Smaller boxes. Fewer managers. Leaner teams. Better pipelines. Not better prompts.

But here’s what genuinely worries me. If we keep cutting the engineers who actually understand how systems work — the ones with deep architectural knowledge, the ones who’ve debugged distributed systems at 3 AM, the ones who know why something should be built a certain way — then who’s left to prompt? AI won’t own the outage. It won’t own the deadline. It won’t own the decision. It needs someone who does. And if we fire all those people in the name of efficiency, we’re not automating intelligence. We’re automating ignorance.

Understanding the pattern is the first step to not repeating it. The engineers who care about craft, who refuse to cut corners, who treat their domain knowledge as non-negotiable — they’re not the cost to be optimized. They’re the reason any of this works. And in an industry that keeps sacrificing people for stock prices, holding onto your moral values isn’t idealism — it’s survival.

AI didn’t cause this. Revenue did. The same driver it’s always been. The same pattern it’s always followed. We just gave it a new name.

So next time you see a headline that says “Some Company cuts 10,000 jobs due to AI” — remember: the fire was already burning. AI is just the latest thing we’re throwing on it.