2025: The Year AI Got Me

By Arden Reynolds

January 21, 2026

I didn’t have a moment in 2025 when AI suddenly arrived in my work.


There was no before-and-after, no obvious inflection point where everything changed and I had to consciously adjust. What happened instead was slower, quieter, and much easier to justify at every step. AI didn’t “disrupt my workflow” or anything like that. It melted into it. Each small shift felt reasonable on its own, until the accumulation of them became something I could only really see in hindsight.


At the beginning of 2025, AI was present but limited. It lived in a small constellation of tools, each with a specific purpose. Perplexity was research. Otter.ai handled transcription. Humata helped me forage through large documents when someone handed me an overwhelming amount of material and asked for something precise to come out the other end. Copy.ai occasionally helped with early drafting. The important part is that none of these replaced thinking. They augmented it. Everything still ran through me.


Perplexity, in particular, never changed. I used it the same way all year. Sometimes I needed context on the shape of an industry before sitting down to write a memo. Sometimes the task was to present research findings directly to a client. Sometimes that research needed to be woven into specific analysis that the client had asked for. In every case, Perplexity was an information-gathering engine. It was phenomenal at that, and it stayed firmly in that lane.


ChatGPT was different. I didn’t start the year with it. It slipped into my schedule early on. My uses for it were more varied, and it evolved throughout the year in ways I didn’t fully appreciate until later. Early on, it functioned mostly as an organizational tool. I’d use it to search through large quantities of files, documents, or pages of notes to find very specific pieces of information a project required. This worked best when I already knew exactly what I was looking for and knew it existed somewhere in the material I was searching. The problem it solved was time. It sifted away irrelevance the way a pan washes mud from nuggets of gold. There was always too much noise in the way, and initially, ChatGPT was a quicker, more streamlined pan than anything else I had.

In that sense, ChatGPT became a much more powerful version of Ctrl + F, Ctrl + C, Ctrl + V. It could move through a massive pile of information and surface the exact line, data point, or quote I needed without my having to sift through everything manually. Then, on top of that, it could integrate that material directly into whatever writing I was working on. That alone was enormously valuable. Still a minor role, though.


Eventually, I got comfortable enough to push it one step further. Once the information was found and verified, I’d ask it to compose an outline for whatever I was working on. With enough direction, ChatGPT could take that material and lay it out in the format I needed: a memo, executive brief, or draft section of a larger deliverable. It wasn’t spinning something from nothing. It doesn’t do that. When I asked it to, the results were bland, vague, or outright wrong. But when the task was to incorporate information into something more structured and readable, it could do that reasonably well, provided I stayed close to it and kept refining the prompts.


That refinement mattered. Content generation always took more back-and-forth than information-finding. It was usable, but only after I’d dialed it in a fair amount. I never trusted it to generate anything meaningful on its own. It needed material to work from, and it needed supervision. That’s never changed.


Where ChatGPT consistently struggled was with tasks that required strict adherence to reality. One project stands out. I needed it to search through a long document of notes and calculate averages based on quantitative rankings buried in the text. Over and over, it hallucinated. It took shortcuts. It presented numbers that followed a preset pattern rather than numbers based on what was actually there. It overwrote reality with what it assumed the answer should look like.


I spent multiple chat sessions trying to force it to respect the actual data. I could sometimes get it to calculate one set of numbers correctly, only to have it revert immediately on the next prompt and start estimating again. Eventually, I had to abandon it entirely for that task. That failure wasn’t isolated. Across the year, whenever non-interpretive, reality-bound analysis was the primary requirement, ChatGPT proved unreliable.


At the same time, it excelled at pattern recognition in a different way. One of the clearest moments where this clicked for me involved synthesizing notes from dozens of interviews for a client. The interviews spanned roles, seniority levels, and perspectives. The answers weren’t contradictory so much as disconnected. Everyone was circling the same core issue—people didn’t understand how best to relay the company’s value—but from different angles.


ChatGPT was able to surface patterns across that mess of notes far faster than I could have on my own. It connected specific feedback to specific personas. It showed how different roles experienced different facets of the same underlying problem. Verifying those patterns was still necessary, but it’s easier to check a proposed structure than to scour dozens of pages manually.


As the year went on, this balance shifted my role in subtle ways. I spent less time in active discovery and more time in verification. I still had to understand the subject matter of every project. I still had to know what I was looking for. But I wasn’t the first point of contact with the information anymore. I was increasingly overseeing a process that made that first pass for me.


My questions changed. I became more specific. I got better at asking questions that exposed mistakes, or that made verification easier. By the end of the year, I wasn’t handwriting every brief or memo. I wasn’t engaging with every project at the depth that used to leave me with a nuanced view of the industry just from doing the work. It felt like moving from a base function to its first derivative. I gained a clearer view of patterns and trends, but I lost some of the granular context that used to come with slower, manual work. I accepted that tradeoff for speed and efficiency, even knowing something was being left behind.


By late 2025, a fairly common loop had emerged for larger research-heavy projects. I might use ChatGPT to refine precise research prompts based on the materials I’d been given; other times, that step wasn’t needed. Perplexity executed the research and surfaced sources. Those outputs came back into ChatGPT for synthesis and outlining. I reviewed and verified each step and took responsibility for the final product. That wasn’t the only workflow I used, but it was a powerful one under tight deadlines.


The upside of all of this is obvious. Turnaround times that would have been unrealistic a year earlier became manageable. Complexity became easier to handle. The work was done faster.

The downside crept in more quietly. I noticed my own complacency growing faster than I expected. The more convenient the tools became, the easier it was to slip into autopilot. My memory of how I’d completed certain tasks was worse. The mental sharpness required to grind through raw search results dulled. The tools looked robust enough to trust, even when they weren’t.


This is where my concern sits now. Not with the existence of the tools, but with what they do to the process of doing this kind of work. The value of the human becomes easier to underestimate—both individually and organizationally. People talk about AI as if humans are interchangeable shepherds of its output. They’re not. The competence and vigilance of the person using the system directly determines the value that comes out of it. Less vigilance doesn’t just lower quality. It compounds across teams and organizations.


I’ve felt this personally. Even when I try to stay alert, complacency feels like a constant gravitational pull. Stories about law firms citing fictional cases generated by AI don’t surprise me. The tools can feel sycophantic, bending toward whatever conclusion the user seems to want. That tendency is built into them. Used carelessly, they reinforce mistakes and neglect.


What surprised me most in 2025 was how quickly my own complacency took hold. The awe of capability drowned out the limitations until failure made them impossible to ignore. There were business consequences tied to those failures. That’s when it became clear how much I’d been relying on the appearance of competence rather than its reality.


I can’t work without AI now. I also don’t see a version of working with it that doesn’t extract something from the person using it. That tension defined 2025 for me. It grew subtly, slowly. Then it hit my conscious mind all at once.


As we head into 2026, the questions that stay with me aren’t about what these tools can do. They’re about what they do to us while we’re using them. Where convenience starts to replace competence. What the human is actually accountable for when so much of the work is delegated. And how much of ourselves we’re willing to trade away for speed before we notice what’s missing.


I’d hoped there would be an obvious way out of this conundrum. That somehow there’s a magical fix or reframe at the individual level that would let me keep working quickly and producing quality work without sacrificing myself for the work. Unfortunately, I don’t think it works like that. Meeting my obligations requires constant vigilance and compromise.


That cost is unsustainable. I view my own health as the primary concern, and any cost that damages it as an unacceptable one. That constant AI use carries real harm is not up for debate. Studies have already shown reduced cognitive ability in ChatGPT users, and the potential for neurosis in extreme cases. Sacrifice may be noble, but I’d still like to read at a college level if I ever retire.


Yet quitting this tool isn’t feasible. AI is here to stay; we knew that by the beginning of 2025. Beyond that, many professionals probably feel they have no alternative to their current job. I know I don’t. With mass layoffs a quarterly affair, the fear of the axe coming down on your neck is constant. Why would I volunteer for that chopping block when there’s no new position I’d be able to find?


I have to compromise. I don’t like what this tool extracts from me. I don’t like what it makes me, or how it encourages me to conduct myself professionally. The best I feel I can do is minimize the opportunities I give this tool to hurt me. If a task doesn’t call for AI, or I have the time and wherewithal to complete it without AI, I won’t go running to it. There’s still some control I have over this situation, as powerless as I feel about everything else. My resolve is centered on reclaiming control over my own work.


If any of this resonates with you, it’s good to know I’m not alone in my frustrations. I’m scared of becoming a thoughtless “prompt engineer.” I’d rather be known for my own brainpower than for how well I manipulate a well-disguised series of matrices. There’s no bright light at the end of this tunnel; there’s just the next month and the constant effort you put into seizing your own mind back. Treat AI like the fallible, harmful, but still potent tool it is. Even radiation, which can kill or cause cancer, is beneficial in the right doses at the right times. That’s how I’m viewing AI. On full blast all the time, it’s likely to turn your mind to mush, but a little, used sparingly under controlled circumstances, can help.



Arden Reynolds is a Research Associate at Rob Roy Consulting, Inc.
