
When Your Users Code With AI, What Does DevRel Actually Teach?

Tutorials are the thing AI writes best. So if Copilot already produces the boilerplate, what is left for developer relations to actually teach? I think the answer changes the job entirely.


A friend of mine runs DevRel at a mid-size API company. We were on a call last week, and he said something that stuck: "I have no idea what content to produce anymore."

His team built their reputation on tutorials. Videos, written walkthroughs, the polished "How to set up auth in 10 minutes" stuff. Easily a million views and reads across their channels. And then, sometime in the last 18 months, the numbers stopped growing. Not because people lost interest in the product. Because nobody needs the tutorials anymore.

They sit in Cursor or Copilot, type "add auth using SDK X", and get working code back. The tutorial gets read by the model, not by them.

So what does DevRel actually teach now?

The thing AI does well is also the thing tutorials do

Step-by-step instructional content is exactly the format LLMs were trained on. Millions of Stack Overflow answers, README files, dev.to posts, official docs. When you ask Claude how to do something concrete and well-documented, it composes those patterns better than most humans can.

The numbers say this is not a fringe behaviour anymore. GitHub's 2024 Octoverse report showed AI tools are now part of mainstream developer workflows across every major language ecosystem. Stack Overflow's 2024 Developer Survey found 76% of developers using or planning to use AI tools, with 72% of professional developers favourable or very favourable toward them. Stack Overflow itself reported a meaningful drop in question volume over the same period, a drop its moderators correlate with people asking AI first.

Reading is still happening. The reader changed.

If you produce content that competes with what an LLM can already generate from your own docs, you are competing with the model on its strongest ground. That is not a fight I would pick.

What AI cannot do (yet)

It cannot tell me why.

Why does this SDK use long-polling instead of websockets? Why is rate-limiting bucketed per token rather than per IP? Why did your team kill the v1 schema even though plenty of customers still relied on it? Why is the recommended pattern actually the recommended pattern, and what happens when you ignore it?

These are all things models can guess at. They do, sometimes confidently. But guesses are not the same as the answer from someone who was in the room when the decision was made.

When someone integrating my work has Copilot do the wiring, the value of human-produced content shifts upward in the stack. They do not need my "getting started." They need:

  • The reasoning behind the API design
  • The failure modes the SDK does not surface clearly
  • The patterns that look fine but break at scale
  • Honest comparisons with the alternative tools they are also evaluating
  • Direct access to a human when they hit something weird

That last one matters more than people credit. A Discord ping to an actual maintainer is something an LLM cannot reproduce, and the bar for what counts as "good DevRel" in 2026 is increasingly about whether that ping gets answered.

Mental models, not muscle memory

Honestly, I do this myself. When I am picking up a new framework, I no longer want a 40-minute video walking me through npm install to "hello world." I want a 10-minute talk where the creator explains the three or four ideas that make the framework feel different from the others.

That is teachable content AI does not replace. It is opinionated, contextual, and it is the layer above the API surface. The mental model. The taste. The specific failure mode the maintainer fixed last year because someone hit it in production.

A good example I keep coming back to is the way the HTMX team writes essays alongside their docs. The docs explain syntax. The essays explain worldview. The essays are what convert people, and they are also exactly the thing an LLM has the hardest time replicating from a corpus, because they do not exist in the corpus until the maintainer writes them.

This is part of why I think DevRel is more important now, not less. The job just stopped being "make people aware of feature X." It is now: build the explanation no model could produce on its own.

What this means for the day-to-day

A few things are worth saying out loud, because the implications for how you spend your week are real.

Stop measuring tutorial views as a primary KPI. They are going to keep dropping, and the drop is not your fault. Measure depth signals instead. Time-on-page for design rationale posts. GitHub Discussions activity. Office-hours attendance. Things that imply the human stayed for the part the AI cannot give them.

Move some content into the AI's diet on purpose. Make sure your docs are clean, structured, and machine-readable, because the AI is now your most prolific reader. If your docs are messy, every Copilot autocomplete in your ecosystem gets quietly worse. There is a real first-mover advantage here for teams who treat their docs as input data, not just output.
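One concrete way to feed the model's diet is to put the "why" where the model actually reads: docstrings and reference docs, not just blog posts. Here is a minimal, entirely hypothetical sketch; the SDK, its names, and the stated rationale are invented for illustration, not taken from any real product:

```python
from dataclasses import dataclass

# Hypothetical SDK fragment. The point is the docstring: it carries the
# design rationale, so any AI tool that ingests these docs can surface
# the "why" alongside the autocompleted "how".

@dataclass
class Session:
    token: str
    poll_interval: float


def create_session(token: str, *, poll_interval: float = 2.0) -> Session:
    """Create an API session that fetches events via long-polling.

    Why long-polling instead of websockets: state the real reason here
    (e.g. "our edge proxies terminate idle websockets after 30s").

    Why the floor on poll_interval: rate limits are bucketed per token,
    so polling faster than the bucket refills only produces 429s.
    """
    if poll_interval < 0.5:
        # Invented constraint, to show rationale living next to the check
        # it explains rather than in a tutorial nobody reads anymore.
        raise ValueError(
            "poll_interval below 0.5s exceeds the per-token rate bucket"
        )
    return Session(token=token, poll_interval=poll_interval)
```

The docstring is the part a Copilot completion can echo back to a user; a tutorial paragraph making the same point almost never is.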

Be more present, not more polished. The trend I keep seeing is that the DevRel folks who are doing well right now are not the ones with the slickest content. They are the ones who answer Discord messages within an hour, who jump into community calls, who write the messy "here is the post-mortem of what we got wrong" essays.

The job changed from explaining how to build with your product to explaining why your product was built that way, and being available when someone needs the human in the loop.

If your DevRel strategy still assumes humans will sit through your tutorials, it is quietly aging out. The good news is the new job is more interesting. The bad news is it is harder to fake, because models are getting really good at the part that used to pad the calendar.

I am still figuring out what mine should look like. Honestly, I think most of us are.