I rewrote my resume a few weeks ago. Not because anything on it was wrong, but because I wanted to tailor it for a specific type of role. In the past, that meant the usual ordeal: find the PDF, realize you can't edit a PDF, upload it to Google Drive, let it convert into a Google Doc that mangles your formatting, spend half an hour wrestling margins and bullet points back into place, add the new content, realize the spacing is off, fix it, export it back to a PDF, and hope nothing shifted in the conversion. Thirty minutes of friction for what amounts to moving a few sentences around.
This time I pasted the old resume into Claude, told it what I wanted to change, reviewed the output, made a couple of corrections, and was done in five minutes. The result was cleaner, better formatted, and more precisely targeted to the roles I was applying for than anything I would have produced fighting with Google Docs.
And I thought: if I'm doing this, everyone is doing this. Not because I'm a trend forecaster. Because the tools are free, they're obvious, and the incentive is overwhelming. Which raises a question that doesn't have a comfortable answer: if the document that's supposed to represent you is now largely produced by a machine, what is it actually representing?
the arms race
The numbers bear it out. No single survey gives a clean percentage, but synthesizing recent data from Resume Genius, Insight Global, and others points to roughly 30 to 50 percent of applications being AI-assisted in some form, with the share almost certainly higher in tech and white-collar roles. Resume Genius found that 74 percent of hiring managers say they've encountered AI-generated content in applications. The tools are everywhere. The usage is not subtle.
But here's the other side: 97 percent of large companies, including nearly all of the Fortune 500, use applicant tracking systems to process incoming resumes. Industry estimates, many originating from ATS vendors like Jobscan, suggest that roughly 75 percent of resumes are filtered out before a human ever sees them. The exact number is hard to pin down independently, but the direction is not in dispute. The screening is automated. The writing is automated. By the time a human actually looks at a resume, most of the decisions have already been made by software.
And it's about to get worse. As AI reshapes workforces, companies are trimming headcount. Fewer available jobs and more displaced workers mean more applications per opening, which means more pressure on automated screening, which means more reason for candidates to optimize with AI. Dario Amodei, Anthropic's CEO, has estimated that AI could push unemployment to 10 to 20 percent within the next few years. Even if those projections prove aggressive, the direction of pressure is clear: more applicants per role, with both sides leaning harder on automation to cope.
The latest response to this problem is almost too perfect. Some companies are now buying AI tools to detect AI-written resumes. The vendors claim 98 to 99 percent accuracy. Independent academic studies tell a different story: an SSRN evaluation of five popular detectors found they "often misclassify human writing as AI and vice versa." A separate study benchmarked GPTZero, QuillBot, and Polygraf.AI against their own marketing claims and found substantial error rates, especially on short or edited text. Resumes are both short and edited, exactly the kind of text the detectors handle worst. And most HR teams aren't even using these tools yet.
So the loop closes. AI writes the resume. AI screens the resume. AI tries to detect that the first AI wrote it. And at every step, the accuracy is questionable.
the rational move
I don't lose sleep over the fact that AI helped write my resume. I read everything it produced. I verified that nothing was missing, nothing was exaggerated, and everything was represented accurately. The content is mine. The ideas are mine. The work history is mine. AI made the formatting cleaner and the language tighter than I would have managed in a Google Doc at midnight.
This is not the same as turning in a college essay you didn't write. A resume is not an exercise in original thought. It's a structured summary of what you've done, formatted for a machine that will parse it into fields and score it against a job description. Spending thirty minutes hand-crafting that document when AI produces a better version in thirty seconds is not integrity. It's inefficiency.
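To make "score it against a job description" concrete, here's a minimal sketch of the kind of keyword matching an ATS-style filter might run. The function, the keyword list, and the cutoff are my own illustration, not any vendor's actual algorithm; real products layer on parsing, synonym lists, and weighting, but the core move is roughly this.

```python
def keyword_score(resume_text: str, job_keywords: list[str]) -> float:
    """Fraction of job-posting keywords that appear in the resume text.

    Deliberately crude, which is the point: a substring match can't tell
    whether you led the Kubernetes migration or sat near someone who did.
    """
    text = resume_text.lower()
    hits = sum(1 for kw in job_keywords if kw.lower() in text)
    return hits / len(job_keywords) if job_keywords else 0.0


# Hypothetical posting keywords and a one-line resume excerpt.
keywords = ["python", "kubernetes", "terraform", "ci/cd", "mentoring"]
resume = "Led a platform team; built CI/CD pipelines on Kubernetes and Terraform."
print(f"match: {keyword_score(resume, keywords):.0%}")
# prints "match: 60%"; whether a human ever sees it depends on an arbitrary cutoff
```

The incentive this creates is exactly the one described above: write for the matcher, not the reader.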
But I should be honest about something. My careful, verify-everything approach is not what's breaking the system. The applicant floods that recruiters describe, hundreds of applications in hours for a single role, aren't coming from people who spend five minutes reviewing AI output for accuracy. They're coming from candidates using AI to generate heavily embellished resumes and auto-apply tools to blast them across hundreds of openings. The same tools that let me save twenty-five minutes on formatting let someone else fabricate an entire work history in the same amount of time. The system can't tell the difference.
That's the uncomfortable part. Each individual person using AI on their resume is making a rational decision. If 97 percent of large employers screen with software, optimizing for that software is common sense. But when everyone optimizes, the signal that resumes were supposed to carry collapses. Each person is making the right call. The sum of those right calls is destroying the system. The people who should worry are not the candidates. They're the people who designed a hiring process where the best strategy for getting hired is to have a machine write the thing that another machine is going to read.
what the other side sees
Katie Tanner, an HR consultant in Utah, posted a remote tech role requiring three years of experience. She expected interest. She got 400 applications in 12 hours and 600 within 24. LinkedIn reported a 45 percent rise in applications year over year, with the platform processing roughly 11,000 applications every minute globally. Hung Lee, a former recruiter and curator of the Recruiting Brainfood newsletter, called it an "applicant tsunami."
Robert Half's 2026 survey found that 67 percent of HR leaders say AI-generated applications are slowing the hiring process, with 20 percent reporting delays of more than two weeks. Sixty-five percent of hiring managers say AI-enhanced resumes make skills harder to verify. The applications look polished. They sound similar. And recruiters are spending more time, not less, trying to figure out who can actually do the job.
The efficiency gains that ATS promised have reversed. Companies adopted automated screening to reduce workload. Candidates adopted AI writing in response. Now both sides are burning more time than before the automation existed.
And the filters have a deeper problem than speed. They're biased. A 2021 Harvard Business School and Accenture study estimated that automated screening excluded over 27 million workers who were fully capable of doing the work, filtered out by blunt rules around degree requirements, employment gaps, and tenure thresholds. That was before the current wave of AI-generated applications compounded the problem. A 2024 study reported by the employment law firm Fisher Phillips tested popular AI resume screening tools in controlled conditions and found they preferred resumes associated with white male candidates roughly 85 percent of the time, a finding consistent with broader research on algorithmic bias in hiring. The system is not just inefficient. It systematically favors some candidates and excludes others based on criteria that have nothing to do with ability. And the people who benefit most from that arrangement have the least incentive to change it.
skills-first is a press release
There is an obvious answer to all of this, and companies love to talk about it. Skills-based hiring. Evaluate candidates on what they can do, not what's on a document. Eighty-five percent of companies claim to practice it. Google dropped degree requirements. IBM launched "New Collar" pathways. Apple and Microsoft followed.
But a Burning Glass Institute analysis found that fewer than 1 in 700 actual hires were affected by companies dropping degree requirements. The press releases moved faster than the hiring practices. Most companies still collect resumes. Most still screen them with ATS. The front door hasn't changed, even if the sign above it has.
The problem is that the entire infrastructure is built around the resume. Applicant tracking systems expect a document. Recruiters are trained to read documents. Job boards are designed around uploading documents. Switching to skills-based evaluation means rebuilding the pipeline, not just announcing a policy. And rebuilding is expensive, disruptive, and risky. It's also a measurement problem: no one has figured out how to evaluate skills at resume-screening scale. Work samples don't work for 600 applicants. Custom assessments require job-specific design. So everyone keeps feeding resumes into the same machine and hoping the output improves.
The signal-to-noise ratio is already collapsing. A resume can tell you that someone listed a skill. It cannot tell you whether they actually understand it. It cannot tell you whether the project they described was something they led or something they watched. A parser sees a keyword. A conversation reveals whether there's depth behind it. The resume was never a great signal. It was just the only scalable one. And now that both sides have automated it, it's not even that.
the question
We are in a system where AI writes a document, AI screens that document, AI tries to detect whether the first AI wrote it, and a human shows up somewhere at the end to interview the five people who made it through. Both sides know the process is broken. Both sides keep participating because no one wants to be the first to abandon a system that everyone else still uses.
That's the core of it. This is a coordination problem, not a technology problem. The resume persists not because it works, but because replacing it requires everyone to move at once, and no one will. The format is simple, it's universal, and until recently, it scaled. The thing that made it scale, automation, is the same thing that's now destroying its signal. But the infrastructure is entrenched, the alternatives are unproven at scale, and the cost of switching is high enough that most organizations will keep running the broken version until it becomes more expensive than the alternative.
If you're a candidate, stop feeling guilty about using AI on your resume. You're not cheating. You're responding rationally to a system that stopped being human a long time ago. And if you're hiring, stop pretending the resume tells you what you need to know. It tells you what a model can generate in thirty seconds.
I opened this essay with a question: if the document that's supposed to represent you is now largely produced by a machine, what is it actually representing? After looking at the data, I think the answer is: not what either side thinks. The resume no longer reliably represents the candidate's abilities, their depth, or their fit. It represents their ability to optimize for a machine. The resume has become a simulation of a hiring process, a familiar shape that both sides keep filling out because no one has agreed on what comes next. The system will hold until the cost of maintaining it exceeds the cost of replacing it. Based on what recruiters are already reporting, that math is getting close.