<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>TCDEV Blog</title>
  <subtitle>Practical notes on AI, engineering, and developer tooling.</subtitle>
  <link href="https://www.tcdev.de/en/blog/feed.xml" rel="self" type="application/atom+xml" />
  <link href="https://www.tcdev.de/en/blog/" rel="alternate" type="text/html" />
  <id>https://www.tcdev.de/en/blog/</id>
  <updated>2026-05-03T00:00:00Z</updated>
  <author>
    <name>TCDEV Blog</name>
  </author>
  <entry>
    <title>When Your Users Code With AI, What Does DevRel Actually Teach?</title>
    <link href="https://www.tcdev.de/blog/when-your-users-code-with-ai-what-does-devrel-teach/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/when-your-users-code-with-ai-what-does-devrel-teach/</id>
    <updated>2026-05-03T00:00:00Z</updated>
    <summary>Tutorials are the thing AI writes best. So if Copilot already produces the boilerplate, what is left for developer relations to actually teach? I think the answer changes the job entirely.</summary>
    <content type="html">&lt;p&gt;A friend of mine runs DevRel at a mid-size API company. We were on a call last week, and he said something that stuck: &lt;em&gt;&amp;quot;I have no idea what content to produce anymore.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;His team built their reputation on tutorials. Videos, written walkthroughs, the polished &amp;quot;How to set up auth in 10 minutes&amp;quot; stuff. Easily a million views and reads across their channels. And then, sometime in the last 18 months, the numbers stopped growing. Not because people lost interest in the product. Because nobody needs the tutorials anymore.&lt;/p&gt;
&lt;p&gt;They sit in Cursor or Copilot, type &lt;em&gt;&amp;quot;add auth using SDK X&amp;quot;&lt;/em&gt;, and get working code back. The tutorial gets read by the model, not by them.&lt;/p&gt;
&lt;p&gt;So what does DevRel actually teach now?&lt;/p&gt;
&lt;h2&gt;The thing AI does well is also the thing tutorials do&lt;/h2&gt;
&lt;p&gt;Step-by-step instructional content is exactly the format LLMs were trained on. Millions of Stack Overflow answers, README files, dev.to posts, official docs. When you ask Claude how to do something concrete and well-documented, it composes those patterns better than most humans can.&lt;/p&gt;
&lt;p&gt;The numbers say this is not a fringe behaviour anymore. &lt;a href=&quot;https://github.blog/news-insights/octoverse/octoverse-2024/&quot;&gt;GitHub&#39;s 2024 Octoverse report&lt;/a&gt; showed AI tools are now part of mainstream developer workflow across every major language ecosystem. &lt;a href=&quot;https://survey.stackoverflow.co/2024/ai&quot;&gt;Stack Overflow&#39;s 2024 Developer Survey&lt;/a&gt; found 76% of developers using or planning to use AI tools, with 72% of professional developers favourable or very favourable toward them. Stack Overflow itself reported a &lt;a href=&quot;https://meta.stackoverflow.com/questions/425049/&quot;&gt;meaningful drop in question volume&lt;/a&gt; over the same period, which the moderators correlate with people asking AI first.&lt;/p&gt;
&lt;p&gt;Reading is still happening. The reader changed.&lt;/p&gt;
&lt;p&gt;If you produce content that competes with what an LLM can already generate from your own docs, you are competing with the model on its strongest ground. That is not a fight I would pick.&lt;/p&gt;
&lt;h2&gt;What AI cannot do (yet)&lt;/h2&gt;
&lt;p&gt;It cannot tell me &lt;em&gt;why&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;Why does this SDK use long-polling instead of WebSockets? Why is rate-limiting bucketed per token rather than per IP? Why did your team kill the v1 schema even though plenty of customers still relied on it? Why is the recommended pattern actually the recommended pattern, and what happens when you ignore it?&lt;/p&gt;
&lt;p&gt;These are all things models can guess at. They do, sometimes confidently. But guesses are not the same as the answer from someone who was in the room when the decision was made.&lt;/p&gt;
&lt;p&gt;When someone integrating my work has Copilot do the wiring, the value of human-produced content shifts upward in the stack. They do not need my &amp;quot;getting started.&amp;quot; They need:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The reasoning behind the API design&lt;/li&gt;
&lt;li&gt;The failure modes the SDK does not surface clearly&lt;/li&gt;
&lt;li&gt;The patterns that look fine but break at scale&lt;/li&gt;
&lt;li&gt;Honest comparisons with the alternative tools they are also evaluating&lt;/li&gt;
&lt;li&gt;Direct access to a human when they hit something weird&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That last one matters more than people credit. A Discord ping to an actual maintainer is something an LLM cannot reproduce, and the bar for what counts as &amp;quot;good DevRel&amp;quot; in 2026 is increasingly about whether that ping gets answered.&lt;/p&gt;
&lt;h2&gt;Mental models, not muscle memory&lt;/h2&gt;
&lt;p&gt;Honestly, I do this myself. When I am picking up a new framework, I no longer want a 40-minute video walking me through &lt;code&gt;npm install&lt;/code&gt; to &amp;quot;hello world.&amp;quot; I want a 10-minute talk where the creator explains the three or four ideas that make the framework feel different from the others.&lt;/p&gt;
&lt;p&gt;That is teachable content AI does not replace. It is opinionated, contextual, and it is the layer above the API surface. The mental model. The taste. The specific failure mode the maintainer fixed last year because someone hit it in production.&lt;/p&gt;
&lt;p&gt;A good example I keep coming back to is the way the &lt;a href=&quot;https://htmx.org/essays/&quot;&gt;HTMX team&lt;/a&gt; writes essays alongside their docs. The docs explain syntax. The essays explain worldview. The essays are what convert people, and they are also exactly the thing an LLM has the hardest time replicating from a corpus, because they do not exist in the corpus until the maintainer writes them.&lt;/p&gt;
&lt;p&gt;This is part of why I think DevRel is more important now, not less. The job just stopped being &amp;quot;make people aware of feature X.&amp;quot; It is now: &lt;em&gt;build the explanation no model could produce on its own.&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;What this means for the day-to-day&lt;/h2&gt;
&lt;p&gt;A few things are worth saying out loud, because the implications for how you spend your week are real.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stop measuring tutorial views as a primary KPI.&lt;/strong&gt; They are going to keep dropping, and the drop is not your fault. Measure depth signals instead. Time-on-page for design rationale posts. GitHub Discussions activity. Office-hours attendance. Things that imply the human stayed for the part the AI cannot give them.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Move some content into the AI&#39;s diet on purpose.&lt;/strong&gt; Make sure your docs are clean, structured, and machine-readable, because the AI is now your most prolific reader. If your docs are messy, every Copilot autocomplete in your ecosystem gets quietly worse. There is a real first-mover advantage here for teams who treat their docs as input data, not just output.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Be more present, not more polished.&lt;/strong&gt; The trend I keep seeing is that the DevRel folks who are doing well right now are not the ones with the slickest content. They are the ones who answer Discord messages within an hour, who jump into community calls, who write the messy &amp;quot;here is the post-mortem of what we got wrong&amp;quot; essays.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The job changed from explaining how to build with your product to explaining why your product was built that way, and being available when someone needs the human in the loop.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If your DevRel strategy still assumes humans will sit through your tutorials, it is quietly aging out. The good news is the new job is more interesting. The bad news is it is harder to fake, because models are getting really good at the part that used to pad the calendar.&lt;/p&gt;
&lt;p&gt;I am still figuring out what mine should look like. Honestly, I think most of us are.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="devrel" />
    <category term="ai" />
    <category term="developer-experience" />
    <category term="community" />
  </entry>
  <entry>
    <title>AI Answers and the Trust Problem in Developer Communities</title>
    <link href="https://www.tcdev.de/blog/ai-answers-and-the-trust-problem-in-dev-communities/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/ai-answers-and-the-trust-problem-in-dev-communities/</id>
    <updated>2026-04-30T00:00:00Z</updated>
    <summary>Stack Overflow banned AI answers, then partnered with OpenAI, then watched its question volume keep falling. The unresolved tension underneath that mess is the real story for anyone running a developer community right now.</summary>
    <content type="html">&lt;p&gt;A few weeks ago I was lurking in a Discord for a tool I use occasionally. Someone asked a fairly specific question about a config edge case. Two minutes later, a confident-sounding answer appeared from a regular member. The answer had the right shape. Right code style. Plausible API references. Half of it was wrong.&lt;/p&gt;
&lt;p&gt;Nobody caught it for almost a day. The person who asked actually thanked the responder, went off, presumably wasted a couple of hours, and came back annoyed.&lt;/p&gt;
&lt;p&gt;I do not know for sure that the answer was AI-generated. I am pretty sure though. And the reason I am pretty sure is that this is the new failure mode in developer communities, and once you start looking for it, it is everywhere.&lt;/p&gt;
&lt;h2&gt;The Stack Overflow story is the obvious one&lt;/h2&gt;
&lt;p&gt;In December 2022, Stack Overflow &lt;a href=&quot;https://meta.stackoverflow.com/questions/421831/temporary-policy-generative-ai-e-g-chatgpt-is-banned&quot;&gt;issued a temporary ban&lt;/a&gt; on ChatGPT-generated answers. The reasoning was direct: the answers had a high error rate, but they looked correct, which made them harder to moderate than just bad answers from humans. Plausible-but-wrong is more dangerous than obviously-wrong.&lt;/p&gt;
&lt;p&gt;That ban became permanent policy and then, in May 2024, Stack Overflow &lt;a href=&quot;https://stackoverflow.co/company/press/archive/openai-partnership&quot;&gt;announced a partnership with OpenAI&lt;/a&gt; to feed Stack Overflow content into ChatGPT. So the platform that banned AI-generated answers started licensing its human-generated answers to train the AI generating those answers. A lot of users were not thrilled. Some &lt;a href=&quot;https://meta.stackexchange.com/questions/399695/&quot;&gt;deleted their highest-voted answers in protest&lt;/a&gt;, and a number got suspended for it.&lt;/p&gt;
&lt;p&gt;Meanwhile the actual question volume on Stack Overflow has &lt;a href=&quot;https://gist.github.com/hopeseekr/cd2058e71d01deca5bae9f4e5a555440&quot;&gt;continued to decline&lt;/a&gt;, with various analyses pointing at the obvious cause. People ask the AI first.&lt;/p&gt;
&lt;p&gt;That sequence of events is messy in the telling, but the underlying tension is simple. Communities exist because humans contribute knowledge to other humans. The moment the contribution might not be human, &lt;em&gt;and might not even be correct&lt;/em&gt;, the trust contract breaks. And once trust breaks in a community, the contributors leave first. Then the answer quality drops further. Then the readers leave too. The doom loop is well documented.&lt;/p&gt;
&lt;h2&gt;This is not a Stack Overflow problem, it is everyone&#39;s problem&lt;/h2&gt;
&lt;p&gt;Run a Discord. Run a Discourse. Run a subreddit. Run a Slack workspace for your open source project. The same dynamic is showing up everywhere, just less visibly.&lt;/p&gt;
&lt;p&gt;I have seen it in:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A Discord where a couple of &amp;quot;helpful&amp;quot; members were clearly piping every question through ChatGPT and pasting the response, complete with hallucinated method names&lt;/li&gt;
&lt;li&gt;A Discourse forum where a moderator noticed answers in a niche subforum suddenly getting more polished but less correct&lt;/li&gt;
&lt;li&gt;A subreddit where the moderators had to add a rule about AI-generated submissions because low-effort posts were drowning out the genuine ones&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The pattern is always the same. AI lowers the cost of producing a plausible answer to roughly zero. Plausibility is the thing that historically gated who got upvoted, replied to, or believed. And now the gate does not work.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.commonroom.io/blog/what-is-developer-community/&quot;&gt;Common Room&#39;s community research&lt;/a&gt; and the &lt;a href=&quot;https://orbitmodel.com/&quot;&gt;Orbit model&#39;s writing on community health&lt;/a&gt; both point at the same underlying signal: communities live or die on the trust relationships between members. AI-generated content does not just add noise. It poisons the signal that lets a community function in the first place.&lt;/p&gt;
&lt;h2&gt;The honest unresolved tension&lt;/h2&gt;
&lt;p&gt;Here is the part nobody has a clean answer to, and the reason I think this is the conversation worth having:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Faster answers are good. Reliable answers are good. Right now you can have one or the other, not both.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;If your Discord allows AI-generated responses, your average response time goes down and your accuracy gets noisier. If you ban them, you slow the community down and probably can&#39;t enforce the ban anyway, because plausibility is exactly what makes detection hard.&lt;/p&gt;
&lt;p&gt;There are some directions that look promising, but none of them are settled.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Disclosure norms.&lt;/strong&gt; Some communities have adopted a culture of marking AI-assisted answers explicitly. &lt;em&gt;&amp;quot;Claude says this, I have not verified it.&amp;quot;&lt;/em&gt; This works in small, healthy communities. It scales poorly because the bad actors are exactly the ones who will not disclose.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Reputation gating.&lt;/strong&gt; Make the cost of contributing high enough that low-effort AI dumps are filtered out by the contribution friction itself. This is basically what Stack Overflow&#39;s reputation system was designed to do, before AI shifted the cost equation.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Human-only spaces.&lt;/strong&gt; A few projects I have seen are explicitly carving out invitation-only or paid spaces for &amp;quot;verified human&amp;quot; conversation. There is a real argument for this, and it is also depressing as hell, because what we are saying is that open developer community is becoming a thing you have to pay to access.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;AI as triage, not as answer.&lt;/strong&gt; The most workable pattern I have seen is using AI to summarize, route, and tag, while keeping the actual answer human. The model is doing the boring work, the human is providing the credibility. This is roughly what the better-run community teams I know are doing now.&lt;/p&gt;
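&lt;p&gt;In case it helps to see the shape of that pattern, here is a toy routing sketch. The keyword rules stand in for the model, and the channel names are invented; the point is that the bot tags and routes but never posts an answer itself:&lt;/p&gt;

```python
# Hypothetical triage bot for the "AI as triage, not as answer" pattern.
# Keyword rules stand in for a real model call; channel names are invented.

ROUTES = {
    "auth": "#help-auth",
    "billing": "#help-billing",
}

def triage(question):
    """Tag and route a question; leave the actual answer to a human."""
    text = question.lower()
    for keyword, channel in ROUTES.items():
        if keyword in text:
            return {"route": channel, "answered_by_ai": False,
                    "summary": f"tagged {keyword!r}, awaiting human reply"}
    # Nothing matched: fall through to the general channel, still human-answered.
    return {"route": "#help-general", "answered_by_ai": False,
            "summary": "no tag matched, awaiting human reply"}

ticket = triage("Getting a 401 from the auth endpoint after rotating keys")
# ticket["route"] == "#help-auth"; the human still writes the reply.
```

&lt;p&gt;The model does the boring classification work; credibility stays attached to a person.&lt;/p&gt;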
&lt;blockquote&gt;
&lt;p&gt;The trust contract that makes a community function was always implicit. AI made it the most important thing to be explicit about. Communities that name the contract directly, and enforce it, will probably make it through. The ones that pretend nothing has changed will not.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;I do not have this figured out for the communities I am part of. Honestly, I think most community managers are flying blind right now, and the tooling has not caught up.&lt;/p&gt;
&lt;p&gt;The thing I am sure of is that &amp;quot;we will deal with it later&amp;quot; is not a strategy. The trust loss is happening now, quietly, in the gap between when a wrong answer gets posted and when somebody finally calls it out. Every gap like that is a small withdrawal from the community&#39;s credibility account.&lt;/p&gt;
&lt;p&gt;If you run a developer community of any kind, this is the conversation worth having with your members in the next month or two. Not in a year.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="community" />
    <category term="devrel" />
    <category term="ai" />
    <category term="trust" />
  </entry>
  <entry>
    <title>Let Your LLM Think in English</title>
    <link href="https://www.tcdev.de/blog/let-your-llm-think-in-english/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/let-your-llm-think-in-english/</id>
    <updated>2026-04-27T00:00:00Z</updated>
    <summary>Reliable RAG and tool calling usually need one stable working language. Keep English inside the model loop, then localize for users at the edges.</summary>
    <content type="html">&lt;p&gt;You ship a chatbot for your German team. The UI is German. The source docs are partly German, partly English. The tools behind the assistant expect English enum values, English function descriptions, English product names, English everything.&lt;/p&gt;
&lt;p&gt;Then someone asks a perfectly normal question in German, the model picks the right tool, passes one argument in German instead of English, and the whole thing quietly falls apart.&lt;/p&gt;
&lt;p&gt;I&#39;ve seen versions of this enough times now that I no longer think of it as a translation issue. &lt;strong&gt;It is an execution issue.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;My current default is simple: let users speak whatever language they want, but let the LLM do its retrieval, reasoning, and tool work in English whenever reliability actually matters. Then localize the answer outside that loop.&lt;/p&gt;
&lt;p&gt;That sounds slightly heretical at first. It is also, in my opinion, the most practical thing you can do right now.&lt;/p&gt;
&lt;h2&gt;The research is starting to say this out loud&lt;/h2&gt;
&lt;p&gt;The numbers are not subtle anymore.&lt;/p&gt;
&lt;p&gt;In the &lt;a href=&quot;https://aclanthology.org/2025.findings-emnlp.1099/&quot;&gt;MASSIVE-Agents benchmark&lt;/a&gt;, researchers evaluated multilingual function calling across 52 languages, 47,020 samples, and 21 models. The best average score across all languages was just 34.05%. English reached 57.37%. Amharic dropped to 6.81%.&lt;/p&gt;
&lt;p&gt;That is not a small quality wobble. That is a reliability cliff.&lt;/p&gt;
&lt;p&gt;Then there is &lt;a href=&quot;https://arxiv.org/abs/2601.05366&quot;&gt;Lost in Execution&lt;/a&gt;, which gets even closer to the real systems problem. The paper shows that many multilingual tool-calling failures happen &lt;strong&gt;after the model already understood the intent and selected the correct tool&lt;/strong&gt;. The dominant issue was parameter value language mismatch. In plain English, the model knew what to do, but expressed the executable bits in the user&#39;s language instead of the interface language, so the call failed anyway.&lt;/p&gt;
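&lt;p&gt;A guard for exactly this failure is cheap to add. The sketch below is illustrative only: the schema, the tool fields, and the German values are invented, not any specific SDK&#39;s API. It validates tool-call arguments against the English interface contract and normalizes known local-language values before anything executes:&lt;/p&gt;

```python
# Hypothetical pre-execution guard for parameter value language mismatch.
# Field names, enums, and the glossary are invented for illustration.

ALLOWED = {
    "status": {"open", "closed", "pending"},    # English interface contract
    "priority": {"low", "medium", "high"},
}

# A real system would source this mapping from its localization glossary.
NORMALIZE = {
    "offen": "open", "geschlossen": "closed", "ausstehend": "pending",
    "niedrig": "low", "mittel": "medium", "hoch": "high",
}

def validate_args(args):
    """Return (fixed_args, warnings); reject values the API cannot accept."""
    fixed, warnings = {}, []
    for key, value in args.items():
        allowed = ALLOWED.get(key)
        if allowed is None or value in allowed:
            fixed[key] = value
            continue
        mapped = NORMALIZE.get(value)
        if mapped in allowed:
            fixed[key] = mapped
            warnings.append(f"normalized {key}: {value!r} -&gt; {mapped!r}")
        else:
            raise ValueError(f"unmappable value for {key}: {value!r}")
    return fixed, warnings

# The model understood the intent but answered in the user's language:
args, notes = validate_args({"status": "offen", "priority": "high"})
# args == {"status": "open", "priority": "high"}
```

&lt;p&gt;It does not make the model more multilingual. It just stops a correct intent from failing at the last step.&lt;/p&gt;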
&lt;p&gt;And this is not limited to tool calling. In &lt;a href=&quot;https://aclanthology.org/2024.naacl-short.46/&quot;&gt;Do Multilingual Language Models Think Better in English?&lt;/a&gt;, Etxaniz and colleagues found that self-translation into English consistently beat direct non-English inference across five tasks. Their phrasing is refreshingly blunt: models are &amp;quot;unable to leverage their full multilingual potential when prompted in non-English languages.&amp;quot;&lt;/p&gt;
&lt;p&gt;So yes, multilingual models are impressive. But if your bar is not &amp;quot;sounds pretty good&amp;quot; and is instead &amp;quot;must behave correctly in production,&amp;quot; English still looks like the safer operating language remarkably often.&lt;/p&gt;
&lt;h2&gt;Why RAG breaks in the same place&lt;/h2&gt;
&lt;p&gt;People usually hear this argument and think of agents first. Function calling, structured output, API execution, that kind of thing.&lt;/p&gt;
&lt;p&gt;RAG has the same weakness, just one layer earlier.&lt;/p&gt;
&lt;p&gt;If your retrieval layer has to match a user&#39;s local phrasing against content written in mixed languages, with inconsistent terminology, translated product names, and half-localized taxonomy labels, you create more chances for the system to drift before generation even starts. Honestly, this is where a lot of &amp;quot;the model is unreliable&amp;quot; complaints come from. The model may be fine. The content interface is not.&lt;/p&gt;
&lt;p&gt;I would rather normalize early.&lt;/p&gt;
&lt;p&gt;Translate the question into English. Retrieve against an English canonical corpus. Let the model reason over one stable terminology layer. Generate an answer draft in English if needed. Then translate or localize the final response for the user.&lt;/p&gt;
&lt;p&gt;That gives you one place where naming stays stable:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;one canonical document title&lt;/li&gt;
&lt;li&gt;one canonical product vocabulary&lt;/li&gt;
&lt;li&gt;one canonical tool schema&lt;/li&gt;
&lt;li&gt;one canonical set of retrieval labels&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can still support every user language on the outside. You just stop asking the core execution path to be perfectly multilingual at every step.&lt;/p&gt;
&lt;h2&gt;This is not anti-localization&lt;/h2&gt;
&lt;p&gt;Quite the opposite.&lt;/p&gt;
&lt;p&gt;Bad multilingual AI architecture usually hurts local users first. They get the nice localized interface, then the hidden English-centric system underneath behaves inconsistently and makes them pay the price.&lt;/p&gt;
&lt;p&gt;Proper localization means being honest about where language should flex and where it should not.&lt;/p&gt;
&lt;p&gt;For me, the split looks like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Localize the UI, prompts, help text, onboarding, and final answers.&lt;/li&gt;
&lt;li&gt;Localize the source content people read directly when that content needs to exist in-market.&lt;/li&gt;
&lt;li&gt;Keep internal tool definitions, canonical identifiers, retrieval labels, and reasoning pivots in English if that is the most stable layer.&lt;/li&gt;
&lt;li&gt;Add explicit post-processing or human review where a localized output has legal, regulatory, or contractual weight.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That last point matters more than teams like to admit. If the model is talking to a human, localization is a user experience decision. If the model is talking to another system, language is an interface contract.&lt;/p&gt;
&lt;p&gt;Those are not the same thing.&lt;/p&gt;
&lt;h2&gt;The architecture I trust most right now&lt;/h2&gt;
&lt;p&gt;This is the version I would bet on today for multilingual AI products:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;User asks in their language.&lt;/li&gt;
&lt;li&gt;System translates or normalizes the request into English.&lt;/li&gt;
&lt;li&gt;Retrieval, reasoning, ranking, and tool calls happen against English canonical data.&lt;/li&gt;
&lt;li&gt;Final answer is localized back into the user&#39;s language.&lt;/li&gt;
&lt;li&gt;High-risk outputs get an extra validation step before they leave the system.&lt;/li&gt;
&lt;/ol&gt;
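&lt;p&gt;To make the shape concrete, here is a deliberately tiny sketch of those steps. The dictionary &amp;quot;translator&amp;quot; and the keyword retriever are stand-ins for real MT and embedding services, and every function name and corpus entry is an invented assumption:&lt;/p&gt;

```python
# Minimal sketch of the pipeline: local question in, English execution
# path in the middle, localized answer out. All names are assumptions.

GLOSSARY_DE_EN = {"rechnung": "invoice", "stornieren": "cancel"}
GLOSSARY_EN_DE = {"invoice": "Rechnung", "cancel": "stornieren"}

# English canonical corpus: one stable title, one stable vocabulary.
CORPUS = {
    "cancel-invoice": "To cancel an invoice, call POST /invoices/{id}/void.",
    "create-invoice": "Create invoices with POST /invoices.",
}

def normalize_to_english(text):
    # Step 2: pivot the query terms into the corpus language.
    words = text.lower().strip("?").split()
    return [GLOSSARY_DE_EN.get(w, w) for w in words]

def retrieve(terms):
    # Step 3: retrieval (and any tool calls) run against English canon only.
    def score(doc):
        return sum(1 for t in terms if t in doc.lower())
    return max(CORPUS.values(), key=score)

def localize(answer_en, lang):
    # Step 4: swap canonical terms back for the user-facing reply.
    if lang == "en":
        return answer_en
    out = answer_en
    for en, de in GLOSSARY_EN_DE.items():
        out = out.replace(en, de)
    return out

terms = normalize_to_english("Wie kann ich eine Rechnung stornieren?")
answer = localize(retrieve(terms), "de")
# The executable middle stayed English; only the edges changed language.
```

&lt;p&gt;A production system would swap every stub for a real service and add the step-five validation gate, but the division of labor stays the same.&lt;/p&gt;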
&lt;p&gt;It is not philosophically pure. It is operationally sane.&lt;/p&gt;
&lt;p&gt;The nice thing is that recent research points in the same direction. &lt;a href=&quot;https://arxiv.org/abs/2601.05366&quot;&gt;Lost in Execution&lt;/a&gt; found that pre-translation of user queries generally reduced language mismatch errors better than post-hoc fixes, even if it still did not fully recover English-level performance. That matches what many builders already suspect in practice. If you wait until the end to clean up multilingual inconsistency, you are usually too late.&lt;/p&gt;
&lt;p&gt;And yes, there are exceptions. If you are building for low-resource languages, domain-specific language, or culturally dependent phrasing, translating everything into English can introduce drift. The paper above explicitly warns about that. So do not turn this into dogma.&lt;/p&gt;
&lt;p&gt;But as a default for enterprise copilots, internal assistants, multilingual RAG, and tool-using agents, I think the rule holds surprisingly well.&lt;/p&gt;
&lt;h2&gt;What this means in practice&lt;/h2&gt;
&lt;p&gt;This is exactly why I care so much about canonical content structure.&lt;/p&gt;
&lt;p&gt;If your knowledge base has one clean source layer, stable terminology, and controlled localization on top, AI gets easier to trust. If every language version drifts independently inside the execution path, you are asking the model to improvise where your system should be precise.&lt;/p&gt;
&lt;p&gt;This platform&#39;s whole approach is built around separating those concerns cleanly. Keep a canonical core. Localize deliberately. Track where variants exist. Do not pretend every layer of the stack should be equally multilingual just because the UI is.&lt;/p&gt;
&lt;p&gt;I used to think the best multilingual AI experience meant &amp;quot;do everything in the user&#39;s language.&amp;quot; I do not think that anymore. Not for systems that have to retrieve the right paragraph, choose the right tool, and return something you can trust.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;The practical rule is simple: users should stay local, but the LLM&#39;s execution path should stay stable. Right now, that usually means English in the middle and localization at the edges.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That will change over time. I hope it changes quickly. But if you are shipping today and reliability matters more than aesthetics, I would let the model think in English and let your product speak the user&#39;s language.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="multilingual" />
    <category term="developer-experience" />
    <category term="knowledge-management" />
  </entry>
  <entry>
    <title>Claude Design and the One-Person Creative Agency</title>
    <link href="https://www.tcdev.de/blog/claude-design-the-one-person-creative-agency/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/claude-design-the-one-person-creative-agency/</id>
    <updated>2026-04-18T00:00:00Z</updated>
    <summary>Anthropic just shipped design, prototyping, and presentation tools inside Claude. Combined with Code and Cowork, one person now has a full creative agency on their laptop.</summary>
    <content type="html">&lt;p&gt;Yesterday Anthropic &lt;a href=&quot;https://www.anthropic.com/news/claude-design-anthropic-labs&quot;&gt;launched Claude Design&lt;/a&gt;, a new product from their Anthropic Labs team that lets you create designs, prototypes, presentations, and marketing collateral by talking to Claude. And I sat there looking at the announcement thinking: okay, so now one person with a Claude subscription genuinely has most of what a small creative agency offers. Design. Code. Automation. Presentations. Brand consistency. All inside the same ecosystem.&lt;/p&gt;
&lt;p&gt;That&#39;s a wild sentence to write in 2026. But I don&#39;t think it is an exaggeration.&lt;/p&gt;
&lt;h2&gt;What Claude Design actually does&lt;/h2&gt;
&lt;p&gt;The short version: you describe what you need, and Claude builds a first version. Then you refine through conversation, inline comments, direct edits, or custom sliders that Claude generates for you. It is powered by &lt;a href=&quot;https://www.anthropic.com/news/claude-opus-4-7&quot;&gt;Opus 4.7&lt;/a&gt; and it is surprisingly good at maintaining visual consistency.&lt;/p&gt;
&lt;p&gt;But the feature that caught my attention is the onboarding. During setup, Claude reads your codebase and existing design files to build a design system for your team. Colors, typography, components. Every project after that automatically follows your brand. You can import images, documents (DOCX, PPTX, XLSX), or point it at your codebase directly. There&#39;s a web capture tool that grabs elements from your live website so prototypes look like the real product.&lt;/p&gt;
&lt;p&gt;Have a Figma mockup you want to iterate on? Export it, drop it into Claude Design, and start a conversation about what to change. Or just capture your existing website and say &amp;quot;make the hero section bigger and add a testimonial carousel.&amp;quot; That kind of thing.&lt;/p&gt;
&lt;p&gt;The testimonials from the announcement are telling. Brilliant&#39;s Senior Product Designer said their most complex pages, which took &lt;a href=&quot;https://www.anthropic.com/news/claude-design-anthropic-labs&quot;&gt;20+ prompts to recreate in other tools, only required 2 prompts in Claude Design&lt;/a&gt;. Datadog&#39;s Product Manager described going from a rough idea to a working prototype before anyone leaves the room, and said what used to take a week of briefs, mockups, and review rounds now happens in a single conversation.&lt;/p&gt;
&lt;p&gt;A week of back-and-forth, collapsed into one conversation. Think about that for a second.&lt;/p&gt;
&lt;h2&gt;The stack that changes everything&lt;/h2&gt;
&lt;p&gt;Here is where it gets interesting. Claude Design does not exist in isolation. Anthropic now has three products that, combined, cover an absurd amount of ground:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Claude Code&lt;/strong&gt;: Write, review, and ship actual software. &lt;a href=&quot;https://www.anthropic.com/news/anthropic-raises-30-billion-series-g-funding-380-billion-post-money-valuation&quot;&gt;$2.5 billion in run-rate revenue&lt;/a&gt; as of February, responsible for an estimated 4% of all public GitHub commits worldwide.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Claude Cowork&lt;/strong&gt;: Automate knowledge work. Research, analysis, document processing, recurring tasks from your desktop.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Claude Design&lt;/strong&gt;: Create visual work. Prototypes, presentations, marketing collateral, brand-consistent assets.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;And they hand off to each other. When a design is ready to build, Claude packages everything into a handoff bundle you can pass to Claude Code with a single instruction. Design to production in one flow. No Jira ticket. No handoff meeting. No &amp;quot;can you send me the specs in Slack.&amp;quot;&lt;/p&gt;
&lt;p&gt;I keep thinking about Sam Altman&#39;s prediction from early 2024 about &lt;a href=&quot;https://every.to/napkin-math/the-one-person-billion-dollar-company&quot;&gt;the one-person billion-dollar company&lt;/a&gt;. At the time it sounded aspirational, maybe a little hyperbolic. The tools were not there yet. You could generate text and images, sure, but the gap between generating things and shipping real products was enormous.&lt;/p&gt;
&lt;p&gt;That gap is shrinking fast.&lt;/p&gt;
&lt;h2&gt;What I would have given for this two months ago&lt;/h2&gt;
&lt;p&gt;When I built this platform, the marketing site was one of the most tedious parts. Not the code. The code was fine, Claude handles HTML and CSS like a champion. But the visual design decisions? The hero layout, the pricing page cards, the feature comparison tables? I described things in words, Claude produced something, and then I spent hours going back and forth trying to adjust spacing, colors, typography. All through text prompts. No visual feedback loop.&lt;/p&gt;
&lt;p&gt;With Claude Design, that workflow becomes: &amp;quot;here is my website&amp;quot; (web capture), &amp;quot;redesign the pricing section to emphasize the team plan&amp;quot; (conversation), tweak the green accent color with a slider, approve, hand off to Claude Code for implementation. I estimate that would have saved me an entire weekend.&lt;/p&gt;
&lt;p&gt;(Honestly, I&#39;m a little annoyed it did not launch two months earlier.)&lt;/p&gt;
&lt;p&gt;And the Canva export is clever. &lt;a href=&quot;https://backlinko.com/canva-users&quot;&gt;Canva has 220 million active users&lt;/a&gt; and $3 billion in annualized revenue. Anthropic is not trying to replace Canva. They&#39;re trying to be the place where ideas start before landing in Canva for final polish and distribution. That is a smart positioning play. You generate the creative in Claude Design, then export to Canva where your marketing team picks it up. Or export to PPTX for that investor deck. Or export as standalone HTML for a landing page.&lt;/p&gt;
&lt;h2&gt;The &amp;quot;just add tools&amp;quot; multiplier&lt;/h2&gt;
&lt;p&gt;This is the pattern I keep seeing across the AI space in 2026. The base model gets smarter, sure, but the real productivity jump comes from connecting tools together. Claude by itself is a very smart text generator. Claude with Code is a software engineer. Claude with Cowork is a research analyst. Claude with Design is a creative director. Claude with all three? That&#39;s a small agency.&lt;/p&gt;
&lt;p&gt;And the connections keep growing. Anthropic said they will add more integrations over the coming weeks. MCP servers already let you connect Claude to external tools and data sources. The ecosystem is building itself.&lt;/p&gt;
&lt;p&gt;For solo founders, freelancers, and small teams, this changes the calculus completely. You don&#39;t need a designer on retainer to produce professional-looking decks and prototypes. You don&#39;t need a separate frontend developer to turn mockups into code. You don&#39;t need a project manager to coordinate the handoff between design and engineering because there is no handoff. It is one continuous conversation.&lt;/p&gt;
&lt;p&gt;I&#39;m not saying designers are obsolete. Far from it. Brilliant and Datadog both described their &lt;em&gt;designers&lt;/em&gt; using Claude Design. The tool makes good designers faster and gives everyone else access to competent visual output. That&#39;s a different thing from replacing people.&lt;/p&gt;
&lt;h2&gt;What this means for documentation products&lt;/h2&gt;
&lt;p&gt;This one I&#39;m watching closely. In documentation products, visual quality matters. Entry pages need to look good. Quick-start guides need clear diagrams. Marketing docs need brand consistency across languages and teams.&lt;/p&gt;
&lt;p&gt;A world where every team member can generate on-brand visual documentation, hand it off to a translation engine, and distribute it in seven languages without touching Figma or Photoshop or InDesign? That&#39;s exactly the workflow gap many documentation tools still have.&lt;/p&gt;
&lt;h2&gt;The part where it gets weirdly expensive&lt;/h2&gt;
&lt;p&gt;So here is the thing nobody is talking about yet. I created a brand new Anthropic account specifically to try Claude Design. Fresh account, no history, no prior usage. I imported one small Figma file and generated a single asset. That was it. Two operations. And I was out of credits.&lt;/p&gt;
&lt;p&gt;On a brand new account. With a fresh allocation.&lt;/p&gt;
&lt;p&gt;I don&#39;t know what the token math looks like on Anthropic&#39;s side when Opus 4.7 is doing visual generation, but whatever it is, it burns through credits at a pace that makes the &amp;quot;one-person creative agency&amp;quot; pitch feel a lot more expensive than expected. If importing a small Figma mockup and producing one image eats your entire budget, the economics of using this as your daily design tool don&#39;t work yet.&lt;/p&gt;
&lt;p&gt;To be fair, this is a research preview and pricing will probably change. But right now there&#39;s a meaningful gap between the promise (replace your design workflow) and the reality (you might run out of credits before lunch). The productivity gains are real. Whether the credits-per-output ratio makes it practical for regular use is a different question entirely.&lt;/p&gt;
&lt;h2&gt;The honest caveat&lt;/h2&gt;
&lt;p&gt;Claude Design is in research preview. It will have rough edges. The testimonials are from design teams at well-funded companies with established design systems. Your mileage will vary, especially if you are starting from scratch with no brand guidelines to feed it.&lt;/p&gt;
&lt;p&gt;But the trajectory is clear. Eighteen months ago you needed a designer, a developer, and a project manager to go from concept to shipped landing page. Today a single person with a Claude subscription can do a surprisingly credible version of that same workflow in an afternoon.&lt;/p&gt;
&lt;p&gt;We are not at the one-person billion-dollar company yet. But the one-person creative agency? I think we just arrived.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="developer-experience" />
    <category term="collaboration" />
  </entry>
  <entry>
    <title>One API Key, Many Tenants: Isolating DeepL Translations in a Multi-Tenant SaaS</title>
    <link href="https://www.tcdev.de/blog/one-api-key-many-tenants-deepl-isolation/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/one-api-key-many-tenants-deepl-isolation/</id>
    <updated>2026-04-18T00:00:00Z</updated>
    <summary>How to use a single DeepL API key across many tenants without glossaries, style rules, or cached translations leaking between customers. Here&#39;s the approach I use for Rasepi.</summary>
    <content type="html">&lt;p&gt;Every time I explain the translation architecture behind Rasepi to another developer, I get the same question: &amp;quot;Wait — all your tenants share one DeepL API key? How do you keep their glossaries and style rules from leaking into each other?&amp;quot;&lt;/p&gt;
&lt;p&gt;It&#39;s a fair question. And the answer involves more design work than you&#39;d expect.&lt;/p&gt;
&lt;p&gt;I wrote about the &lt;a href=&quot;https://www.tcdev.de/en/blog/inside-the-translation-engine-glossaries-style-rules-and-smart-retranslation/&quot;&gt;full translation pipeline&lt;/a&gt; in a previous post — the block-level hashing, the orchestrator, the whole flow from document save to translated output. This post zooms into a specific sub-problem: how you take a third-party API that has no concept of tenants and build proper tenant isolation on top of it.&lt;/p&gt;
&lt;h2&gt;DeepL doesn&#39;t know about your customers&lt;/h2&gt;
&lt;p&gt;DeepL&#39;s API authenticates with a single API key. Everything created under that key — glossaries, style rule lists, translation history — belongs to the same account. There&#39;s no concept of &amp;quot;this glossary belongs to Customer A&amp;quot; on DeepL&#39;s side.&lt;/p&gt;
&lt;p&gt;When you call &lt;code&gt;GET /v2/glossaries&lt;/code&gt;, you get &lt;em&gt;all&lt;/em&gt; glossaries from &lt;em&gt;all&lt;/em&gt; tenants. When you create a style rule list, it lives in the same namespace as everything else. The API is flat.&lt;/p&gt;
&lt;p&gt;For a self-hosted product where every customer runs their own instance with their own DeepL key, that&#39;s fine. For a multi-tenant SaaS where you manage the infrastructure? You need an isolation layer, and you need to build it yourself.&lt;/p&gt;
&lt;h2&gt;The database is the source of truth&lt;/h2&gt;
&lt;p&gt;My core design decision here: &lt;strong&gt;the database owns all glossary content and style rule configuration. DeepL is a runtime execution target, nothing more.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Every &lt;code&gt;TenantGlossary&lt;/code&gt; and &lt;code&gt;TenantStyleRuleList&lt;/code&gt; entity implements &lt;code&gt;ITenantScoped&lt;/code&gt;, which means EF Core global query filters automatically scope all reads to the current tenant. A query for glossaries in Tenant A&#39;s request context will never return Tenant B&#39;s entries. This is the same isolation pattern I use everywhere in Rasepi, enforced at the ORM level — I didn&#39;t build anything special for translations specifically.&lt;/p&gt;
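&lt;p&gt;As a rough sketch, the wiring looks like this. The names here (&lt;code&gt;ITenantProvider&lt;/code&gt;, &lt;code&gt;AppDbContext&lt;/code&gt;) are illustrative, not Rasepi&#39;s actual types; the pattern itself is standard EF Core:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Illustrative sketch: every ITenantScoped entity gets the same global filter.
public interface ITenantScoped
{
    Guid TenantId { get; set; }
}

public class AppDbContext : DbContext
{
    private readonly ITenantProvider _tenantProvider;

    public AppDbContext(DbContextOptions&amp;lt;AppDbContext&amp;gt; options,
        ITenantProvider tenantProvider) : base(options)
        =&amp;gt; _tenantProvider = tenantProvider;

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every query against TenantGlossaries is scoped to the current
        // tenant. A forgotten WHERE clause cannot leak another tenant&#39;s rows,
        // because the filter is applied by the ORM, not by the caller.
        modelBuilder.Entity&amp;lt;TenantGlossary&amp;gt;()
            .HasQueryFilter(g =&amp;gt; g.TenantId == _tenantProvider.CurrentTenantId);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The important property: the filter lives in one place and applies to every read, so isolation does not depend on each individual query remembering to check the tenant.&lt;/p&gt;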
&lt;p&gt;Here&#39;s what makes this interesting. When a tenant edits a glossary term, I do not immediately call DeepL. I update the database row and set &lt;code&gt;IsDirty = true&lt;/code&gt;. That&#39;s it. The actual DeepL glossary gets created (or recreated) lazily, right before the next translation needs it.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;string?&amp;gt; GetOrSyncDeepLGlossaryIdAsync(
    string sourceLanguage, string targetLanguage)
{
    var glossary = await _db.TenantGlossaries
        .Include(g =&amp;gt; g.Entries)
        .FirstOrDefaultAsync(g =&amp;gt;
            g.SourceLanguage == sourceLanguage &amp;amp;&amp;amp;
            g.TargetLanguage == targetLanguage);

    // Guard both cases: no glossary for this language pair, or an empty one.
    if (glossary is null || glossary.Entries.Count == 0) return null;

    if (!glossary.IsDirty &amp;amp;&amp;amp; glossary.DeepLGlossaryId is not null)
        return glossary.DeepLGlossaryId;

    // Dirty: delete old, create new
    if (glossary.DeepLGlossaryId is not null)
        await _deepL.DeleteGlossaryAsync(glossary.DeepLGlossaryId);

    var entries = glossary.Entries
        .ToDictionary(e =&amp;gt; e.SourceTerm, e =&amp;gt; e.TargetTerm);

    var created = await _deepL.CreateGlossaryAsync(
        $&amp;quot;tenant-{glossary.Id}&amp;quot;,
        glossary.SourceLanguage,
        glossary.TargetLanguage,
        entries);

    glossary.DeepLGlossaryId = created.GlossaryId;
    glossary.IsDirty = false;
    glossary.LastSyncedAt = DateTime.UtcNow;
    await _db.SaveChangesAsync();

    return glossary.DeepLGlossaryId;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The query filter on &lt;code&gt;TenantGlossaries&lt;/code&gt; does the isolation. The &lt;code&gt;IsDirty&lt;/code&gt; flag does the lazy sync. The naming convention (&lt;code&gt;tenant-{glossary.Id}&lt;/code&gt;) exists only for debugging in the DeepL dashboard — it has no functional purpose in the code.&lt;/p&gt;
&lt;p&gt;Why lazy? Because &lt;a href=&quot;https://developers.deepl.com/docs/api-reference/glossaries&quot;&gt;DeepL v2 glossaries are immutable&lt;/a&gt;. You cannot edit them. Any change means delete and recreate. If a team imports a CSV with 200 terms and then fixes a typo in one entry, I don&#39;t want to delete and recreate the DeepL glossary twice. I just set &lt;code&gt;IsDirty&lt;/code&gt; both times, and the single recreate happens when the next translation runs. Batching for free.&lt;/p&gt;
&lt;h2&gt;Style rules: same pattern, different API&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;https://developers.deepl.com/docs/api-reference/translate/openapi-spec-for-text-translation&quot;&gt;DeepL&#39;s style rules&lt;/a&gt; are newer (v3 API) and actually mutable, which is nicer. You can update configured rules in place with &lt;code&gt;PUT /v3/style_rules/{style_id}/configured_rules&lt;/code&gt;, and custom instructions can be individually added or removed.&lt;/p&gt;
&lt;p&gt;I still use the same &lt;code&gt;IsDirty&lt;/code&gt; pattern though, mostly for consistency. A &lt;code&gt;TenantStyleRuleList&lt;/code&gt; has a &lt;code&gt;DeepLStyleId&lt;/code&gt; that maps to DeepL&#39;s runtime identifier, plus &lt;code&gt;ConfiguredRulesJson&lt;/code&gt; for the formatting rules and a collection of &lt;code&gt;TenantCustomInstruction&lt;/code&gt; entries for free-text translation directives.&lt;/p&gt;
&lt;p&gt;The real power is in those custom instructions. Each one is a plain-language directive, up to 300 characters, that shapes how DeepL translates. Some real examples I&#39;ve seen work well:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Always use &#39;Sie&#39; form, never &#39;du&#39;&amp;quot;&lt;/em&gt; — for formal German contexts&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Translate &#39;deployment&#39; as &#39;Bereitstellung&#39;, never &#39;Deployment&#39;&amp;quot;&lt;/em&gt; — context-dependent terms that go beyond simple glossary mappings&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Use British English spelling (colour, organisation, licence)&amp;quot;&lt;/em&gt; — when translating between English variants&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Put currency symbols after the numeric amount&amp;quot;&lt;/em&gt; — European formatting conventions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each tenant can configure completely different instructions per target language, all behind the same API key. The isolation comes from the fact that every translation call includes only the &lt;code&gt;glossary_id&lt;/code&gt; and &lt;code&gt;style_id&lt;/code&gt; belonging to the requesting tenant. Other tenants&#39; DeepL resources are never referenced — they&#39;re not even queried.&lt;/p&gt;
&lt;h2&gt;The translation call: everything composes&lt;/h2&gt;
&lt;p&gt;When the orchestrator translates a block, it assembles all tenant-specific settings into a single request:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var glossaryId = await _glossaryService
    .GetOrSyncDeepLGlossaryIdAsync(sourceLang, targetLang);
var styleId = await _styleRuleService
    .GetOrSyncStyleIdAsync(targetLang);
var formality = langConfig.Formality ?? &amp;quot;default&amp;quot;;

var options = new TranslationOptions
{
    GlossaryId = glossaryId,
    StyleId = styleId,
    Formality = formality,
    Context = documentContext,
    ModelType = styleId != null ? &amp;quot;quality_optimized&amp;quot; : null
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Every parameter here is tenant-scoped. The &lt;code&gt;glossaryId&lt;/code&gt; was resolved through a tenant-filtered query. The &lt;code&gt;styleId&lt;/code&gt; was resolved the same way. The &lt;code&gt;formality&lt;/code&gt; comes from &lt;code&gt;TenantLanguageConfig&lt;/code&gt;, also tenant-scoped. Even the &lt;code&gt;context&lt;/code&gt; — surrounding paragraphs sent to improve translation quality, which DeepL doesn&#39;t bill for — comes from the tenant&#39;s own document.&lt;/p&gt;
&lt;p&gt;One thing worth noting: when &lt;code&gt;style_id&lt;/code&gt; is set, DeepL automatically uses their &lt;code&gt;quality_optimized&lt;/code&gt; model. You can&#39;t combine style rules with &lt;code&gt;latency_optimized&lt;/code&gt;. That&#39;s a DeepL constraint, but honestly a reasonable trade-off. If you&#39;re investing in custom style rules, you probably want the best quality output anyway.&lt;/p&gt;
&lt;h2&gt;Block-level caching: your database as translation memory&lt;/h2&gt;
&lt;p&gt;I don&#39;t call DeepL for blocks that haven&#39;t changed. The caching mechanism is the &lt;code&gt;TranslationBlock&lt;/code&gt; table itself.&lt;/p&gt;
&lt;p&gt;Every source &lt;code&gt;EntryBlock&lt;/code&gt; has a &lt;code&gt;ContentHash&lt;/code&gt; — a SHA256 of its semantic content, with metadata attributes like &lt;code&gt;blockId&lt;/code&gt; and &lt;code&gt;deleted&lt;/code&gt; stripped out. Every &lt;code&gt;TranslationBlock&lt;/code&gt; stores the &lt;code&gt;SourceContentHash&lt;/code&gt; that was current when the translation was made. When the source block changes, its hash changes. The orchestrator compares hashes and only queues blocks with mismatches.&lt;/p&gt;
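&lt;p&gt;One way such a hash could be computed (the attribute names follow the ones above, but the stripping logic here is a simplified illustration, not the real normalization):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.RegularExpressions;

public static class BlockHasher
{
    // Sketch: hash only the semantic content of a block. Metadata attributes
    // like blockId and deleted are stripped first so they never invalidate
    // the cache. The regex is a simplification of the real normalization.
    public static string ComputeContentHash(string blockHtml)
    {
        var semantic = Regex.Replace(
            blockHtml, &amp;quot; (blockId|deleted)=\&amp;quot;[^\&amp;quot;]*\&amp;quot;&amp;quot;, &amp;quot;&amp;quot;);

        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(semantic));
        return Convert.ToHexString(bytes);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because the hash ignores those attributes, toggling a block&#39;s metadata never triggers a retranslation; only an actual content edit does.&lt;/p&gt;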
&lt;p&gt;The decision tree for each block:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Hash matches, translation exists&lt;/strong&gt; → skip (cached, up-to-date)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hash changed, machine-translated, not locked&lt;/strong&gt; → retranslate automatically&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hash changed, human-edited or locked&lt;/strong&gt; → mark as Stale, do not overwrite&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That third case matters a lot. If a translator manually refined a paragraph, I don&#39;t want to blow it away just because the English source changed. Flag it as stale so the team knows it needs review, but leave the translated text intact.&lt;/p&gt;
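&lt;p&gt;In code, the per-block decision reduces to a few lines. The property and status names here are illustrative, not the actual Rasepi entities:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Sketch of the three-way decision from the list above.
if (translation.SourceContentHash == source.ContentHash)
    return;                                   // 1. up to date: skip, no API call

if (translation.IsMachineTranslated &amp;amp;&amp;amp; !translation.IsLocked)
    queue.Enqueue(translation);               // 2. safe to retranslate automatically
else
    translation.Status = TranslationStatus.Stale; // 3. flag for review, keep the text
&lt;/code&gt;&lt;/pre&gt;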
&lt;p&gt;The practical result: editing one paragraph in a 30-paragraph document triggers exactly one DeepL API call (one batch, one block). The other 29 paragraphs across all languages are already cached. They don&#39;t cost anything.&lt;/p&gt;
&lt;h2&gt;Why not give each tenant their own key?&lt;/h2&gt;
&lt;p&gt;I considered it. Give each tenant their own DeepL API key, eliminate the isolation problem entirely.&lt;/p&gt;
&lt;p&gt;Three reasons I didn&#39;t go that route:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Billing complexity.&lt;/strong&gt; Every tenant would need their own DeepL subscription or a way to provision sub-accounts. DeepL doesn&#39;t offer multi-tenant key management natively. Managing that onboarding flow is more overhead than building an isolation layer.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cost efficiency.&lt;/strong&gt; Shared infrastructure means shared volume. Aggregate usage gets better pricing than dozens of individual small accounts.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Operational simplicity.&lt;/strong&gt; One key to rotate, one quota to monitor, one integration to maintain. That&#39;s genuinely valuable.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The trade-off is that you need the isolation layer I described. But if you already have tenant-scoped EF Core queries for everything else in your system — which you should — adding it to glossaries and style rules is straightforward. You&#39;re applying an existing pattern, not inventing a new one.&lt;/p&gt;
&lt;h2&gt;What actually isolates what&lt;/h2&gt;
&lt;p&gt;To summarize the guarantees I rely on:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Glossary entries&lt;/strong&gt; are stored in &lt;code&gt;TenantGlossary&lt;/code&gt; (implements &lt;code&gt;ITenantScoped&lt;/code&gt;), filtered by EF Core global query filters. DeepL glossary IDs are opaque references that only get resolved within tenant context.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Style rules and custom instructions&lt;/strong&gt; follow the same pattern through &lt;code&gt;TenantStyleRuleList&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translated content&lt;/strong&gt; lives in &lt;code&gt;TranslationBlock&lt;/code&gt;, scoped via its parent &lt;code&gt;Entry&lt;/code&gt; → &lt;code&gt;Hub&lt;/code&gt; chain, which is also tenant-scoped.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;The &lt;code&gt;SaveChanges&lt;/code&gt; guard&lt;/strong&gt; sets &lt;code&gt;TenantId&lt;/code&gt; automatically on new entities and throws on cross-tenant writes.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;No &lt;code&gt;IgnoreQueryFilters()&lt;/code&gt;&lt;/strong&gt; in production code.&lt;/li&gt;
&lt;/ul&gt;
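&lt;p&gt;The &lt;code&gt;SaveChanges&lt;/code&gt; guard is worth sketching, because it is the write-side backstop for everything else. Again, the names are illustrative rather than Rasepi&#39;s actual code:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Illustrative override: stamp TenantId on inserts, refuse cross-tenant writes.
public override int SaveChanges()
{
    var tenantId = _tenantProvider.CurrentTenantId;

    foreach (var entry in ChangeTracker.Entries&amp;lt;ITenantScoped&amp;gt;())
    {
        if (entry.State == EntityState.Added)
            entry.Entity.TenantId = tenantId;  // new rows always get the caller&#39;s tenant
        else if (entry.Entity.TenantId != tenantId)
            throw new InvalidOperationException(
                &amp;quot;Cross-tenant write blocked.&amp;quot;); // fail loudly, never silently
    }

    return base.SaveChanges();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Query filters protect reads; this guard protects writes. Together they mean a bug in application code degrades to an exception, not a data leak.&lt;/p&gt;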
&lt;blockquote&gt;
&lt;p&gt;DeepL sees resource IDs. My application sees tenant-scoped entities. The mapping between them never crosses tenant boundaries because the query that resolves the mapping is physically incapable of returning another tenant&#39;s data.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you&#39;re building a multi-tenant SaaS on top of third-party APIs that weren&#39;t designed for multi-tenancy — and there are a lot of them — this approach works well. Treat the external API as a stateless execution engine. Keep all configuration in your own tenant-scoped database. Sync lazily. And never trust external resource listings for isolation, because those listings are flat.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="architecture" />
    <category term="translations" />
    <category term="multilingual" />
  </entry>
  <entry>
    <title>Tokens Burned Is the New Lines of Code</title>
    <link href="https://www.tcdev.de/blog/tokens-burned-is-the-new-lines-of-code/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/tokens-burned-is-the-new-lines-of-code/</id>
    <updated>2026-04-13T00:00:00Z</updated>
    <summary>Measuring AI adoption by token spend is the same mistake we made with lines of code in the 90s. Same flaw, new dashboard, much higher stakes.</summary>
    <content type="html">&lt;p&gt;My LinkedIn feed has been full of it for weeks. My X timeline too. People posting token spend screenshots like they&#39;re progress reports. Startup founders bragging they spent $16k on Claude Code last month and are aiming for $60k next. Leaderboards. Rankings. Titles like &amp;quot;Token Legend&amp;quot; and &amp;quot;AI God.&amp;quot;&lt;/p&gt;
&lt;p&gt;And then last week, it hit critical mass. Forbes &lt;a href=&quot;https://www.forbes.com/sites/richardnieva/2026/03/31/the-ai-gods-spending-as-much-as-they-can-on-ai-tokens/&quot;&gt;reported on the &amp;quot;tokenmaxxing&amp;quot; movement&lt;/a&gt; sweeping Silicon Valley, where companies compete to see who burns the most AI tokens. Jensen Huang went on the All-In podcast and said: &lt;em&gt;&amp;quot;That $500,000 engineer, at the end of the year, I&#39;m going to ask him, &#39;How much did you spend in tokens?&#39; If that person says &#39;$5,000&#39; I will go ape-something else. If that $500,000 engineer did not consume at least $250,000 worth of tokens, I am going to be deeply alarmed.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Then &lt;a href=&quot;https://fortune.com/2026/04/09/meta-killed-employee-ai-token-dashboard/&quot;&gt;Fortune reported&lt;/a&gt; that a Meta employee had built an internal leaderboard called &amp;quot;Claudeonomics&amp;quot; tracking token consumption across the company&#39;s 85,000+ staff. Top users got titles. In a 30-day window, total usage hit 60 trillion tokens. The top individual user averaged 281 billion. Mark Zuckerberg didn&#39;t even crack the top 250. Meta CTO Andrew Bosworth, meanwhile, was publicly saying his best engineer was spending his salary equivalent in tokens but running &amp;quot;5x to 10x more productive.&amp;quot; &amp;quot;It&#39;s like, this is easy money,&amp;quot; Bosworth said. &amp;quot;No limit.&amp;quot;&lt;/p&gt;
&lt;p&gt;I&#39;ve been in software long enough to recognize what&#39;s happening here. This is &amp;quot;lines of code&amp;quot; with a much higher price tag.&lt;/p&gt;
&lt;h2&gt;We&#39;ve been here before&lt;/h2&gt;
&lt;p&gt;In 2003, Martin Fowler wrote &lt;a href=&quot;https://martinfowler.com/bliki/CannotMeasureProductivity.html&quot;&gt;a short piece on why software productivity cannot be measured&lt;/a&gt; that should probably be required reading for every technical executive. His argument on lines of code was precise:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&amp;quot;One of my biggest irritations are studies of productivity based on lines of code. Any good developer knows that they can code the same stuff with huge variations in lines of code.&amp;quot;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;The problem is obvious once you say it out loud. LOC measures activity, not output. Two developers can build the same feature: one writes 1,200 lines, the other writes 80. The concise one probably built a better system. Under a LOC regime, the verbose one looks more productive.&lt;/p&gt;
&lt;p&gt;Teams evaluated on LOC responded rationally. They wrote more lines. They copy-pasted rather than abstracting. They avoided refactoring because deleting code would hurt their numbers. The metric shaped behavior, but not toward better software. More code. Worse systems.&lt;/p&gt;
&lt;p&gt;Then in 2023, McKinsey published a piece claiming to have cracked objective developer productivity measurement. &lt;a href=&quot;https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity&quot;&gt;Gergely Orosz and Kent Beck&#39;s thorough response&lt;/a&gt; pointed out the same flaw: nearly every McKinsey metric was measuring effort and output, not outcomes. Kent Beck recounted watching Facebook&#39;s internal developer sentiment surveys devolve from useful feedback into managers negotiating with engineers for higher scores. That&#39;s what happens when you incentivize a proxy metric. The number improves. The thing you actually cared about does not.&lt;/p&gt;
&lt;p&gt;You&#39;d think we would have learned. We haven&#39;t.&lt;/p&gt;
&lt;h2&gt;Same mistake, different unit&lt;/h2&gt;
&lt;p&gt;The seductive logic of tokenmaxxing runs like this. Token consumption = AI usage. More AI usage = teams are using AI. Therefore, high token spend = high AI adoption = good.&lt;/p&gt;
&lt;p&gt;It is precisely as flawed as measuring lines of code, just with a billing dashboard instead of a commit graph. And to be fair to the Forbes article, Sendbird&#39;s CEO John Kim basically said exactly that: &amp;quot;We&#39;ve seen this movie before.&amp;quot; He was referring to the 1990s and 2000s LOC culture. The real indicator, he noted, is how much AI-generated code actually makes it into production. Token spending &amp;quot;is more of a conversation starter.&amp;quot; I agree with that. It becomes a problem when the conversation starter gets promoted to the headline KPI.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;GitHub&#39;s 2024 developer survey&lt;/a&gt; found that 97% of enterprise developers had used AI coding tools at work at some point. Meaningful organizational adoption, though, required clear policies, workflows, and measurable outcomes tied to actual business results. Not just usage. Not just consumption.&lt;/p&gt;
&lt;p&gt;Boris Cherny, the engineer behind Claude Code, &lt;a href=&quot;https://x.com/bcherny/status/2004626064187031831&quot;&gt;publicly shared&lt;/a&gt; that he didn&#39;t open an IDE at all during one month of work, with Opus 4.5 writing around 200 PRs. That&#39;s impressive. But what makes it impressive is not the tokens those 200 PRs consumed. It&#39;s that they were 200 real merged contributions with working software on the other end.&lt;/p&gt;
&lt;p&gt;The value is in the outcome. Tokens are the energy that got you there, nothing more.&lt;/p&gt;
&lt;h2&gt;When the metric becomes the target&lt;/h2&gt;
&lt;p&gt;There&#39;s a principle called Goodhart&#39;s Law: when a measure becomes a target, it ceases to be a good measure. The history of software development is basically a museum of Goodhart&#39;s Law in action.&lt;/p&gt;
&lt;p&gt;Tracking tokens as an AI adoption KPI sets up the exact same dynamic. Engineering teams measured on token consumption will consume more tokens. That&#39;s just how incentives work. Want to look more productive? Run a few more agentic loops. Let the model reason at length before generating output. Wrap every task in an orchestration layer that calls four tools where one would do. Token spend goes up. Value delivered does not.&lt;/p&gt;
&lt;p&gt;Actually, the Claudeonomics story proved this almost immediately. Fortune noted that &amp;quot;some employees have put AI agents to work for hours to maximize their token usage.&amp;quot; There it is. Goodhart&#39;s Law executing in real time, inside a company that&#39;s supposed to be at the frontier of AI-driven productivity. The leaderboard had been up for maybe a few weeks before it was shut down, and employees were already gaming it by running agents in loops. The metric was three weeks old and it had already stopped measuring what it was supposed to measure.&lt;/p&gt;
&lt;p&gt;Any developer reading this can probably think of five ways to inflate token usage metrics at no benefit to anyone. I won&#39;t list them. But if I can think of five, so can the engineers being measured on this.&lt;/p&gt;
&lt;p&gt;Andrej Karpathy described &lt;a href=&quot;https://x.com/karpathy/status/2004607146781278521&quot;&gt;the current moment in software engineering&lt;/a&gt; as a &amp;quot;magnitude 9 earthquake&amp;quot; for the profession. He&#39;s right. But earthquakes don&#39;t get measured in the electricity consumed. They get measured in what moved.&lt;/p&gt;
&lt;h2&gt;The documentation version of this problem&lt;/h2&gt;
&lt;p&gt;This isn&#39;t only a problem for engineering teams. The same dynamic shows up in knowledge management too.&lt;/p&gt;
&lt;p&gt;&amp;quot;We published 400 documents this quarter&amp;quot; is a number that sounds good in a slide deck. It has nothing to say about whether those documents are accurate, whether anyone read them, or whether the information in them is still true six months later. You can hit that number with AI and no thinking whatsoever. Token-assisted noise published at scale.&lt;/p&gt;
&lt;p&gt;The honest metric is harder to collect but much more useful: what percentage of your knowledge base actually reflects how your systems work today? How many people reached a correct answer using your documentation? How many tried, failed, and ended up asking someone on Slack instead?&lt;/p&gt;
&lt;p&gt;Those questions don&#39;t have pretty dashboards yet. They require actual thought about what you want documentation to do for your organization. Forced expiry dates, for example, exist so teams have to reckon with whether content is still valid, rather than letting it silently decay behind a high page-count metric.&lt;/p&gt;
&lt;h2&gt;What to track instead&lt;/h2&gt;
&lt;p&gt;The honest answer to &amp;quot;is our AI investment paying off?&amp;quot; cannot be read from a billing dashboard.&lt;/p&gt;
&lt;p&gt;You can approximate it with better questions: are cycle times improving? Is the ratio of features shipped to bugs reported trending in the right direction? Are engineers reporting they spend more time on judgment-heavy work and less on typing? Is your documentation staying current instead of accumulating like sediment?&lt;/p&gt;
&lt;p&gt;These are harder to pull from an API. They require thinking about what output you actually want from your teams, which, admittedly, is the harder work. But they&#39;re the questions that matter, because they&#39;re about outcomes rather than inputs.&lt;/p&gt;
&lt;p&gt;Token spend tells you how much compute you bought. Whether that compute became something useful is an entirely separate question. Companies that don&#39;t maintain that distinction are going to build very expensive dashboards that show them almost nothing.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We spent years optimizing the wrong metric for developer productivity. We have maybe one quarter before the same mistake gets baked into every AI adoption report in the enterprise. The window to avoid this is open, but it won&#39;t stay that way.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="developer-experience" />
    <category term="knowledge-management" />
  </entry>
  <entry>
    <title>The AI Divide Is Splitting Your Team in Half</title>
    <link href="https://www.tcdev.de/blog/the-ai-divide-is-splitting-your-team-in-half/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/the-ai-divide-is-splitting-your-team-in-half/</id>
    <updated>2026-04-10T00:00:00Z</updated>
    <summary>Half your team is building the future with AI. The other half thinks it&#39;s a fad. The gap between them is becoming the biggest competitive risk most companies don&#39;t see.</summary>
    <content type="html">&lt;p&gt;I was on a call last week with a friend of mine who told me about one of their customers, a logistics company. A team lead there had a planning meeting where two of her people had built an entire scenario model using AI before the meeting even started. Forecasts, risk breakdowns, three alternative approaches. The other four people on the team showed up with the same slide deck format they&#39;ve been using for two years. Same structure, same manual process, same timeline estimates.&lt;/p&gt;
&lt;p&gt;The meeting went sideways fast. The AI-assisted pair couldn&#39;t understand why the others hadn&#39;t done basic prep that &amp;quot;takes five minutes now.&amp;quot; The others felt ambushed, like the rules of the game changed and nobody told them. The team lead spent the rest of the day doing damage control.&lt;/p&gt;
&lt;p&gt;That story doesn&#39;t surprise me anymore. I&#39;ve been hearing versions of it for months.&lt;/p&gt;
&lt;h2&gt;The gap is measurable now&lt;/h2&gt;
&lt;p&gt;This isn&#39;t just vibes. Microsoft&#39;s &lt;a href=&quot;https://www.microsoft.com/en-us/worklab/work-trend-index/2025-the-year-the-frontier-firm-is-born&quot;&gt;2025 Work Trend Index&lt;/a&gt;, a survey of 31,000 workers across 31 countries, found that 67% of leaders are familiar with AI agents, compared to just 40% of employees. Leaders are far more likely to see AI as a career accelerator (79% vs. 67% of employees), and they&#39;re saving more time with it, too. Nearly a third of leaders say AI saves them over an hour every single day.&lt;/p&gt;
&lt;p&gt;But here&#39;s the part that really stuck with me: when asked how they see AI, 52% of respondents said they treat it as a command-based tool. Give it an instruction, get a result. Only 46% described it as a thought partner, something you have a back-and-forth with.&lt;/p&gt;
&lt;p&gt;That&#39;s not a small difference. That&#39;s two fundamentally different relationships with the same technology. And those two groups are sitting in the same meetings, working on the same projects, supposedly moving in the same direction.&lt;/p&gt;
&lt;h2&gt;Two speeds, one team&lt;/h2&gt;
&lt;p&gt;The practical consequence is that teams are now operating at two completely different speeds. The people who&#39;ve integrated AI into their daily work don&#39;t just produce faster. They think differently. They approach problems differently. They arrive at meetings with work that used to take a week done in an afternoon.&lt;/p&gt;
&lt;p&gt;And the people who haven&#39;t adopted AI (or who&#39;ve tried it once, found it underwhelming, and moved on) are doing genuinely solid work. I want to be clear about that. It&#39;s not that they&#39;re bad at their jobs. It&#39;s that the ceiling of what&#39;s possible has moved, and they&#39;re working under the old one.&lt;/p&gt;
&lt;p&gt;A &lt;a href=&quot;https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5188231&quot;&gt;Harvard study on generative AI in teams&lt;/a&gt; found something remarkable: a single individual with AI outperforms an entire team without it. But a team where everyone uses AI outperforms them all. The implication is brutal. Mixed adoption doesn&#39;t give you a middle ground. It gives you friction.&lt;/p&gt;
&lt;p&gt;I saw this firsthand at a workshop I ran last month. The participants who used AI regularly were finishing exercises in half the time, then getting frustrated waiting for the rest. The participants who didn&#39;t use AI felt rushed and, honestly, a bit humiliated. Nobody intended that outcome. It just happened because the speed gap is that large now.&lt;/p&gt;
&lt;h2&gt;The competitive advantage nobody talks about&lt;/h2&gt;
&lt;p&gt;Here&#39;s where it gets really consequential. McKinsey&#39;s &lt;a href=&quot;https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai&quot;&gt;State of AI 2025 survey&lt;/a&gt; found that 88% of organizations are using AI in at least one function. Sounds great, right? But nearly two-thirds are still stuck in experimentation and pilot phases. Only about a third have begun scaling AI across their business. And the companies that have scaled, the ones McKinsey calls &amp;quot;high performers&amp;quot;? They represent roughly 6% of respondents.&lt;/p&gt;
&lt;p&gt;That 6% is pulling away from everyone else at a speed that I think most people underestimate.&lt;/p&gt;
&lt;p&gt;High performers are three times more likely to have fundamentally redesigned their workflows around AI. They&#39;re three times more likely to have senior leaders actively championing and role-modeling AI use. Three-quarters of them are scaling or have already scaled AI across their organization, compared to one-third of everyone else.&lt;/p&gt;
&lt;p&gt;Microsoft&#39;s data tells a similar story. Companies they call &amp;quot;Frontier Firms&amp;quot; (those with org-wide AI deployment and advanced maturity) report dramatically different outcomes. 71% of Frontier Firm leaders say their company is thriving, compared to 39% of workers globally. 55% say they can take on more work, versus 25% globally. And they&#39;re less afraid of AI taking their jobs, not more.&lt;/p&gt;
&lt;p&gt;The gap between these companies and everyone else isn&#39;t narrowing. It&#39;s accelerating.&lt;/p&gt;
&lt;h2&gt;This is a people problem disguised as a technology problem&lt;/h2&gt;
&lt;p&gt;The temptation is to solve this with tools. Roll out Copilot, buy some licenses, send a company-wide email about AI resources. Done.&lt;/p&gt;
&lt;p&gt;But the actual challenge is cultural. It&#39;s the team lead on that call trying to hold together a group where half the people feel supercharged and the other half feel left behind. It&#39;s the manager who has to explain to a 20-year veteran that their workflow, the one they perfected over a decade, might not be the best approach anymore. It&#39;s the junior employee who&#39;s quietly using AI to produce senior-level work and doesn&#39;t know whether to be proud or worried about political fallout.&lt;/p&gt;
&lt;p&gt;Microsoft found that 47% of leaders list upskilling existing employees as a top workforce strategy. That&#39;s encouraging, I guess. But upskilling only works if people actually want to learn. And right now, a meaningful chunk of the workforce has decided that AI is either not relevant to them, not reliable, or not worth the effort. Some of them might be right about specific tools. But the broader trajectory isn&#39;t optional (I say that as someone who&#39;s been skeptical of plenty of tech hype cycles over the years, and this one feels different).&lt;/p&gt;
&lt;h2&gt;Where this is heading&lt;/h2&gt;
&lt;p&gt;I don&#39;t think the divide goes away. I think it widens. The people who adopt AI will keep getting faster, keep producing more, keep raising the bar for what &amp;quot;normal output&amp;quot; looks like. The people who don&#39;t will feel increasing pressure, whether from management, from peers, or just from the ambient reality that their colleagues are doing things they can&#39;t.&lt;/p&gt;
&lt;p&gt;Companies that figure out how to bring their whole team along, not just the enthusiasts, will have a genuine advantage. And that advantage compounds. Every month of organizational AI fluency is a month your competitors spend arguing about whether to buy ChatGPT licenses.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The biggest competitive advantage in the AI era won&#39;t be which model you use. It will be whether your entire team actually uses it.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That logistics team I mentioned? My friend told me the team lead booked a two-day internal workshop. Not &amp;quot;here&#39;s how to prompt.&amp;quot; More like &amp;quot;here&#39;s how this changes the way we plan together.&amp;quot; The skeptics needed to see what was possible in the context of &lt;em&gt;their&lt;/em&gt; work, not in some generic demo with a made-up scenario. And the enthusiasts needed to learn patience. To bring people along instead of running ahead.&lt;/p&gt;
&lt;p&gt;That feels like the job right now. Not just adopting AI. Closing the gap. Before it closes you.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="collaboration" />
    <category term="knowledge-management" />
  </entry>
  <entry>
    <title>Build vs Buy Reimagined: What It Actually Means in 2026</title>
    <link href="https://www.tcdev.de/blog/build-vs-buy-reimagined-what-it-means-in-2026/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/build-vs-buy-reimagined-what-it-means-in-2026/</id>
    <updated>2026-04-07T00:00:00Z</updated>
    <summary>The cost of building just collapsed. So what does that mean for every SaaS company betting their business on &#39;you don&#39;t have to build it yourself&#39;?</summary>
    <content type="html">&lt;p&gt;Last week I watched a junior developer on our team spin up a working CRUD app with auth, database migrations, and a halfway decent UI in about 90 minutes. With Copilot. From scratch.&lt;/p&gt;
&lt;p&gt;Five years ago, that same task would have taken a week. Maybe two if you count the yak-shaving around deployment configs and OAuth flows. And that shift, that compression of building time from days to hours, is quietly dismantling one of the oldest questions in software: should we build or should we buy?&lt;/p&gt;
&lt;h2&gt;The Old Framing is Dead&lt;/h2&gt;
&lt;p&gt;For decades, &amp;quot;build vs buy&amp;quot; was a cost calculation. You&#39;d estimate how many developer-months it would take to build a thing, multiply by loaded salary, add some buffer for maintenance, and compare it to the annual license fee of whatever SaaS product did roughly the same thing. If the SaaS was cheaper, you bought. If your requirements were weird enough, you built.&lt;/p&gt;
&lt;p&gt;That framing assumed building was expensive. And it was. But &lt;a href=&quot;https://github.blog/ai-and-ml/generative-ai/how-ai-is-reshaping-developer-choice-and-octoverse-data-proves-it/&quot;&gt;according to GitHub&#39;s Octoverse 2025 data&lt;/a&gt;, AI-assisted development now produces a 20 to 30 percent increase in throughput. Eighty percent of new developers on GitHub use Copilot within their first week. Over 1.1 million public repositories already integrate LLM SDKs. Building got dramatically cheaper, almost overnight.&lt;/p&gt;
&lt;p&gt;So the question isn&#39;t really &amp;quot;build vs buy&amp;quot; anymore. It is something more like: what are you actually paying for when you buy SaaS?&lt;/p&gt;
&lt;h2&gt;The New Calculus&lt;/h2&gt;
&lt;p&gt;Here&#39;s what I think most SaaS founders (myself included, honestly) don&#39;t want to hear: if your entire value proposition is &amp;quot;we saved you from building it,&amp;quot; you&#39;re in trouble. Because that moat just got a lot shallower.&lt;/p&gt;
&lt;p&gt;When a team can prototype a functional internal tool in a day, the bar for what justifies a monthly subscription goes way up. You do not just need to be better than what they could build. You need to be better than what they could build &lt;em&gt;with AI helping them&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.gartner.com/en/newsroom/press-releases/2026-04-02-gartner-expects-most-enterprises-to-abandon-assistive-ai-for-outcome-focused-workflow-by-2028&quot;&gt;Gartner predicted in April 2026&lt;/a&gt; that by 2028, over half of all enterprises will stop paying for assistive intelligence and favor platforms that commit to workflow results. Even more stark: they expect that by 2030, software companies layering bolt-on AI over legacy applications rather than redesigning for agentic execution will face margin compression of up to 80%.&lt;/p&gt;
&lt;p&gt;Eighty percent. That is not a rounding error.&lt;/p&gt;
&lt;h2&gt;So What Actually Survives?&lt;/h2&gt;
&lt;p&gt;I have been thinking about this a lot, partly because I&#39;m building this and I need to be honest with myself about where our value sits. And I think the answer comes down to three things that are genuinely hard to replicate with a weekend coding sprint, no matter how good your AI assistant is.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Domain depth you cannot fake.&lt;/strong&gt; Anyone can build a text editor. Building a translation system that tracks content changes at the paragraph level, detects stale translations through content hashing, and handles structural adaptation across languages? That takes years of domain knowledge baked into architecture. The AI can help you write the code faster, but it cannot tell you &lt;em&gt;what&lt;/em&gt; to build.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Someone still has to run and maintain it.&lt;/strong&gt; Here is the thing about building: it is fun. Maintaining? Not fun at all. Handling edge cases in multi-tenant permission systems, keeping up with browser quirks, managing database migrations across versions, patching CVEs at 2am, dealing with that one PDF export bug that only shows up in Safari. AI makes the initial build faster, sure. But &lt;a href=&quot;https://www.forrester.com/press-newsroom/forrester-three-years-into-genai-enterprises-are-still-chasing-its-true-transformative-value/&quot;&gt;Forrester&#39;s April 2026 research&lt;/a&gt; shows most enterprises still cannot turn AI adoption into measurable impact, partly because the hard part was never writing code. It is keeping the thing running, updated, and working correctly for years. The build is the easy part. It&#39;s the uptime, on-call rotations, and incremental fixes that actually cost you.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trust, security, and data privacy.&lt;/strong&gt; This one is underrated. When you build something yourself, &lt;em&gt;you&lt;/em&gt; own security. You&#39;re responsible for encryption at rest, audit logging, penetration testing, GDPR compliance, SOC 2, and the next regulation nobody has heard of yet. A good SaaS vendor has an entire team whose only job is making sure your data does not end up somewhere it should not be. For most companies, that is not a cost they want to carry internally. And honestly, most internal tools I have seen do not even have proper access controls, let alone a security audit trail.&lt;/p&gt;
&lt;h2&gt;The Composable Middle Ground&lt;/h2&gt;
&lt;p&gt;What is interesting is that the answer increasingly is not &amp;quot;build&amp;quot; &lt;em&gt;or&lt;/em&gt; &amp;quot;buy.&amp;quot; It is compose. Pick the SaaS tools that do hard things well, expose good APIs, and let you build around them.&lt;/p&gt;
&lt;p&gt;This is why plugin architectures matter so much right now (and yes, this is exactly what we&#39;ve been investing in with this platform&#39;s plugin system). The SaaS products that will thrive are the ones that say: &amp;quot;We handle the hard, domain-specific core. You customize everything else.&amp;quot; Not &amp;quot;here&#39;s our monolith, take it or leave it.&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.forrester.com/press-newsroom/forrester-three-years-into-genai-enterprises-are-still-chasing-its-true-transformative-value/&quot;&gt;Forrester&#39;s April 2026 report&lt;/a&gt; found that most enterprises are still struggling to turn AI adoption into measurable business impact. High adopters are 47% more likely to work with consulting partners to prepare their data and systems. The message is clear: raw building capability is not the bottleneck. Knowing what to build, and having the infrastructure to support it, that&#39;s the actual constraint.&lt;/p&gt;
&lt;h2&gt;What This Means for SaaS&lt;/h2&gt;
&lt;p&gt;If you&#39;re running a SaaS company in 2026, I think there are a few uncomfortable truths:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Your &amp;quot;we&#39;ll save you time&amp;quot; pitch is weaker than ever.&lt;/strong&gt; Time savings was the classic SaaS sell. But when AI lifts developer throughput by 20 to 30 percent, the &amp;quot;time saved&amp;quot; number in your ROI spreadsheet shrinks accordingly. You need a different story.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Features are table stakes, outcomes are the product.&lt;/strong&gt; Nobody cares that you have 47 integrations. They care that their documentation stays fresh, their translations stay accurate, their team actually uses the tool. Gartner&#39;s language about &amp;quot;outcome-focused workflow&amp;quot; is not just analyst jargon. It is where the market is heading.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Openness beats lockdown.&lt;/strong&gt; The instinct to close your platform and make switching hard is understandable. But Gartner explicitly warned that &amp;quot;legacy SaaS providers that attempt to close systems of record risk being bypassed by orchestration layers enterprises trust more.&amp;quot; Ouch.&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The Honest Version&lt;/h2&gt;
&lt;p&gt;I will be blunt about where I land on this. Build vs buy was never really about the technology. It was always about trust. Do I trust this vendor to understand my problem deeply enough that their solution will be better than what I could cobble together?&lt;/p&gt;
&lt;p&gt;In 2026, &amp;quot;cobble together&amp;quot; got a massive upgrade. So the trust bar went up too.&lt;/p&gt;
&lt;p&gt;For documentation vendors, that means you can&#39;t just be a tool that happens to support translations. You have to be deeply good at the hard problems: block-level translation tracking, content freshness enforcement, and multi-tenant complexity. Good enough that replacing your product is genuinely painful even with the best AI tools in the world.&lt;/p&gt;
&lt;p&gt;That is the new build vs buy. Not &amp;quot;can you build it?&amp;quot; but &amp;quot;should you spend your energy building it when someone else has already solved the hard parts?&amp;quot;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The question was never really about cost. It was about where you want to spend your attention. And in a world where building is cheap, attention is the only scarce resource left.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="developer-experience" />
    <category term="knowledge-management" />
  </entry>
  <entry>
    <title>Three Weeks, One App: What AI Can Build For You and What It Absolutely Cannot</title>
    <link href="https://www.tcdev.de/blog/three-weeks-one-app-what-claude-cant-do-yet/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/three-weeks-one-app-what-claude-cant-do-yet/</id>
    <updated>2026-04-05T00:00:00Z</updated>
    <summary>I built a full SaaS product, marketing site, developer docs, and blog in three weeks with Claude. Here&#39;s the honest breakdown of where AI shines and where you&#39;re completely on your own.</summary>
    <content type="html">&lt;p&gt;Three weeks ago I had a .NET backend with maybe 40% of the services wired up, a half-finished Vue frontend, and a vague plan. Today this platform has a block-level translation engine with glossary management and style rules, a freshness scoring system with expiry templates and review workflows, AI-powered semantic search with RAG, a full plugin SDK with action guards and event pipelines, collaborative real-time editing, a complete marketing website with pricing pages, a developer documentation portal, a blog with 14 posts, automated translations into 7 languages, and a waitlist form that actually sends emails.&lt;/p&gt;
&lt;p&gt;I did not do this alone. I had Claude running in VS Code for hours every evening, and sometimes all day, and it was genuinely transformative for the parts it could help with. But there&#39;s a chasm between &amp;quot;building an app&amp;quot; and &amp;quot;having something you could actually sell to another human being,&amp;quot; and that chasm is filled with setup pages, manual configuration, email deliverability settings, and DNS records. Claude just can&#39;t talk to all these services yet.&lt;/p&gt;
&lt;p&gt;People rarely talk about that part.&lt;/p&gt;
&lt;h2&gt;Could I Have Done This Without AI?&lt;/h2&gt;
&lt;p&gt;Look, I have over 30 years of experience building software. Could I have built all of this without Claude? Probably. But not in three weeks. Not even close. The AI accelerated everything that involves typing code into files, and that is a massive part of any project.&lt;/p&gt;
&lt;p&gt;But here&#39;s the thing people miss when they talk about &amp;quot;vibe coding&amp;quot; and building entire apps with AI: &lt;strong&gt;you still need to know what you&#39;re doing.&lt;/strong&gt; Claude can tell you every single step required to deploy a Cloudflare Worker with a D1 database. It can walk you through OpenIddict configuration. It can explain DNS records and SPF setup. The problem is that its knowledge is often outdated. Platforms update their dashboards, move settings around, deprecate features, rename things. And Claude doesn&#39;t know.&lt;/p&gt;
&lt;p&gt;And I didn&#39;t use just one AI. ChatGPT occasionally knew more about specific services, especially when Claude&#39;s training data was a few months behind on a particular platform&#39;s documentation. Some days I had both open side by side, cross-referencing their suggestions against what I was actually seeing in the dashboard.&lt;/p&gt;
&lt;p&gt;And then there&#39;s Codex. I used OpenAI&#39;s Codex frequently to analyze the codebase from the outside. Not to write code, but to review it. Having a different agent look at code that Claude wrote catches things that Claude itself is blind to. It&#39;s like having a second pair of eyes on a pull request, except both reviewers are AI and neither of them gets offended. I&#39;d point Codex at a service layer and ask &amp;quot;what&#39;s wrong with this?&amp;quot; and it would find issues that Claude had confidently introduced three sessions ago. Different models have different blind spots, and running them against each other produces genuinely better results than trusting any single one.&lt;/p&gt;
&lt;p&gt;But the deeper point is this: to have a sellable app, you absolutely need to know how hosting works. How domains work. How code signing certificates work. How databases work. How email deliverability works. How OAuth2 flows actually function, not just the code that implements them. Can you build an app without that knowledge? Sure. Will it get you anywhere? Likely not. You&#39;ll have something that runs on localhost and impresses nobody outside your own machine.&lt;/p&gt;
&lt;h2&gt;The 80% That Felt Like Magic&lt;/h2&gt;
&lt;p&gt;Let me be clear about what worked, because it genuinely worked well. For churning out service interfaces, implementing CRUD controllers, writing EF Core configurations, and building Vue components, Claude is absurdly fast.&lt;/p&gt;
&lt;p&gt;Here&#39;s an example. When I needed to add the glossary management system, I described the requirement: tenant-scoped glossaries, CSV import/export, individual term CRUD, and a sync mechanism with DeepL&#39;s glossary API. Claude produced the entity models, the service interface and implementation, the controller with proper authorization attributes, and the Pinia store. All in maybe 20 minutes. Would have taken me most of a day to write all that by hand.&lt;/p&gt;
&lt;p&gt;The translation engine was similar. The block-level architecture with SHA256 content hashing, the staleness detection, the orchestrator that coordinates between services. Claude understood the pattern after I explained it once and then replicated it consistently across dozens of files. The freshness scoring system, the review workflows, the expiry notification pipeline. Service after service, wired up and working.&lt;/p&gt;
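&lt;p&gt;To make the staleness mechanic concrete, here&#39;s a minimal sketch of the idea. The names (&lt;code&gt;findStaleBlocks&lt;/code&gt;, &lt;code&gt;TranslationRecord&lt;/code&gt;) are hypothetical, not the platform&#39;s actual API, and the real system is .NET; this is TypeScript purely to illustrate the hash-compare pattern:&lt;/p&gt;

```typescript
// Sketch of block-level staleness detection via content hashing.
// All names here are illustrative, not the product's real API.
import { createHash } from "node:crypto";

interface Block { id: string; text: string; }
interface TranslationRecord { blockId: string; sourceHash: string; }

// Hash a block's source text; each stored translation remembers the hash
// of the source text it was translated from.
function hashBlock(text: string): string {
  return createHash("sha256").update(text, "utf8").digest("hex");
}

// A translation is stale when the block's current hash no longer matches
// the hash recorded at translation time (or when no record exists at all).
function findStaleBlocks(blocks: Block[], records: TranslationRecord[]): string[] {
  const recorded = new Map(
    records.map(r => [r.blockId, r.sourceHash] as [string, string])
  );
  return blocks
    .filter(b => recorded.get(b.id) !== hashBlock(b.text))
    .map(b => b.id);
}
```

&lt;p&gt;The appeal of this pattern is that editing one paragraph only flags that paragraph&#39;s translations, not the whole document.&lt;/p&gt;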
&lt;p&gt;For the marketing site, Claude built entire HTML pages from descriptions. &amp;quot;A pricing page with a free tier, a team tier, and an enterprise tier. Dark background. Use the green accent.&amp;quot; And it just... produced one. Including responsive breakpoints and hover states.&lt;/p&gt;
&lt;p&gt;That&#39;s the magic part. It is real.&lt;/p&gt;
&lt;h2&gt;Taming the Machine&lt;/h2&gt;
&lt;p&gt;But it&#39;s not like I just typed &amp;quot;build me an app&amp;quot; and walked away. Working with Claude is its own skill, and I spent the first few days doing it badly.&lt;/p&gt;
&lt;p&gt;The initial output is always... fine. Technically correct, reasonably structured, but generic. Claude writes code the way it writes prose: competent, predictable, and deeply average. Left to its own devices, it&#39;ll produce the same controller structure every framework tutorial uses. The same service pattern. The same component layout. It works, but it&#39;s not &lt;em&gt;yours&lt;/em&gt;.&lt;/p&gt;
&lt;p&gt;So you start to train it. Not formally, not with fine-tuning, but through repetition and correction. &amp;quot;No, I want the service interface separate from the implementation.&amp;quot; &amp;quot;Always use this authorization attribute pattern.&amp;quot; &amp;quot;The tenant context comes from middleware, not from the request body.&amp;quot; Turn after turn after turn. Some days I felt like I was pair programming with a very enthusiastic junior who keeps forgetting what we decided yesterday.&lt;/p&gt;
&lt;p&gt;And then something clicks. After enough corrections, after enough examples in the codebase for it to read, Claude starts getting it right on the first try. It picks up your naming conventions. It knows where you put your DTOs. It follows your error handling pattern without being asked. That transition from &amp;quot;annoying&amp;quot; to &amp;quot;productive&amp;quot; took maybe four or five days of consistent work.&lt;/p&gt;
&lt;p&gt;The blog posts were a similar story. Claude&#39;s default writing voice is instantly recognizable. That polished, slightly distant, perfectly structured style that reads like every AI-generated blog post you&#39;ve ever seen. I went through multiple rounds building a style guide, feeding it examples of how I actually write, pointing out every &amp;quot;it&#39;s worth noting&amp;quot; and every em dash (seriously, the em dash addiction is real). Eventually I built a whole skill file, a set of instructions that Claude loads before writing anything for this platform&#39;s blog.&lt;/p&gt;
&lt;p&gt;This post, for the record, is Claude. With my input, my corrections, my direction. I described what I wanted to say, pointed it at the style guide, and then spent time going back and forth until the voice felt right. That&#39;s the actual workflow. Not &amp;quot;AI writes it&amp;quot; and not &amp;quot;I write it.&amp;quot; It&#39;s a conversation that produces something neither of us would have written alone.&lt;/p&gt;
&lt;p&gt;I also built custom instructions for the codebase itself. A copilot-instructions file that explains the architecture, the translation system, the tenant isolation rules, the coding conventions. Claude reads this at the start of every session, and the difference is night and day. Without it, Claude guesses. With it, Claude knows.&lt;/p&gt;
&lt;p&gt;The point is: the productivity gains are real, but they&#39;re not free. You invest time upfront teaching the AI how you work, and that investment pays off over weeks. Skip that step and you&#39;ll spend more time fixing Claude&#39;s output than you would have spent writing the code yourself.&lt;/p&gt;
&lt;h2&gt;When the Machine Fights You&lt;/h2&gt;
&lt;p&gt;I don&#39;t want to paint too rosy a picture here. For every session where Claude nailed a complex service implementation in 20 minutes, there was another session where it drove me up the wall.&lt;/p&gt;
&lt;p&gt;The worst habit is re-introducing bugs that were fixed days ago. You spend an evening tracking down a race condition in the SignalR hub, you fix it, you move on. Three days later Claude is editing a nearby file and quietly puts the old broken pattern back. Not maliciously, obviously. It just doesn&#39;t remember. Every session starts fresh, and if the fix wasn&#39;t obvious from the code alone, Claude will happily revert to whatever pattern its training data prefers. I learned to write very explicit comments above tricky fixes. Not for future developers. For future Claude.&lt;/p&gt;
&lt;p&gt;Then there&#39;s the circling. You ask Claude to fix a failing test. It changes something. The test still fails. It changes something else. Still fails. It reverts the first change and tries a third thing. Then it combines the first and third changes. Then it goes back to the second approach but with a slight variation. Thirty minutes later you&#39;ve watched it try nine permutations of the same wrong idea and not once did it stop to reconsider whether the whole approach was off. I&#39;ve had sessions where I finally said &amp;quot;stop, let me look at this&amp;quot; and found the actual issue in about two minutes. It was a one-line fix. Claude had spent half an hour rearranging deck chairs.&lt;/p&gt;
&lt;p&gt;And the confidence. Claude never says &amp;quot;I&#39;m not sure about this.&amp;quot; It presents every suggestion with the same calm authority, whether it&#39;s a perfect solution or complete nonsense. After a while you develop an instinct for when it&#39;s guessing, but early on I wasted real time implementing suggestions that sounded reasonable and turned out to be hallucinated API methods or deprecated configuration patterns.&lt;/p&gt;
&lt;p&gt;This is exactly why I started using Codex to review Claude&#39;s output. And it&#39;s why the experience argument matters so much. A junior developer wouldn&#39;t catch these regressions. They wouldn&#39;t recognize the circling. They&#39;d trust the confident hallucination. Thirty years of knowing what correct code looks like is the difference between AI as a productivity multiplier and AI as a very fast way to create technical debt.&lt;/p&gt;
&lt;h2&gt;Then You Need to Actually Deploy the Thing&lt;/h2&gt;
&lt;p&gt;Here is where the story changes.&lt;/p&gt;
&lt;p&gt;You have a working application on localhost. Beautiful. Now put it on the internet. Make it send emails. Let people sign up. Accept payments eventually. Protect it from bots. Give it a domain name that resolves correctly.&lt;/p&gt;
&lt;p&gt;Claude cannot help you with any of this. Not really.&lt;/p&gt;
&lt;p&gt;I don&#39;t mean it produces bad suggestions. I mean it fundamentally cannot interact with the systems you need to configure. And the configuration is where you spend your time, not writing code.&lt;/p&gt;
&lt;h3&gt;Cloudflare: A Case Study in &amp;quot;Figure It Out Yourself&amp;quot;&lt;/h3&gt;
&lt;p&gt;This platform&#39;s marketing site runs on Cloudflare Pages. The waitlist API is a Cloudflare Worker with a D1 database. Sounds straightforward until you actually have to set it up.&lt;/p&gt;
&lt;p&gt;Claude has never seen your Cloudflare dashboard. It can tell you &amp;quot;add a CNAME record&amp;quot; but it cannot tell you which of the 14 tabs contains the DNS settings for your particular domain. D1 database bindings need a specific database ID in your &lt;code&gt;wrangler.toml&lt;/code&gt;. Environment secrets go through &lt;code&gt;wrangler secret put&lt;/code&gt;. CORS has to match your actual deployed origins, not localhost. Turnstile needs keys from yet another dashboard section.&lt;/p&gt;
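&lt;p&gt;For orientation, the shape of that binding looks roughly like this. Every name and ID below is a placeholder, not my actual configuration:&lt;/p&gt;

```toml
# Illustrative wrangler.toml sketch; names and IDs are placeholders.
name = "waitlist-api"
main = "src/index.ts"
compatibility_date = "2026-01-01"

[[d1_databases]]
binding = "DB"                       # exposed to the Worker as env.DB
database_name = "waitlist"
database_id = "your-d1-database-id"  # from `wrangler d1 create` or the dashboard
```

&lt;p&gt;Secrets never go in this file; they get pushed separately with &lt;code&gt;wrangler secret put&lt;/code&gt;.&lt;/p&gt;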
&lt;p&gt;I spent almost an entire day getting the Worker to correctly verify Turnstile tokens, accept form submissions, store them in D1, and send confirmation emails. Claude helped me write the Worker code itself. But the deployment, the wrangler configuration, the secret management, the DNS propagation debugging? That was all me.&lt;/p&gt;
&lt;h3&gt;OAuth2: The Configuration Labyrinth&lt;/h3&gt;
&lt;p&gt;Authentication is the best example of the gap between &amp;quot;code&amp;quot; and &amp;quot;product.&amp;quot;&lt;/p&gt;
&lt;p&gt;Claude can absolutely write you an OAuth2 integration. It knows the OIDC spec, it can produce middleware, it understands JWT claims. For our dev environment I have a &lt;code&gt;DevAuthHandler&lt;/code&gt; that mints tokens with &lt;code&gt;tenant_id&lt;/code&gt; and &lt;code&gt;sub&lt;/code&gt; claims from a simple bearer string pattern. Claude wrote that in minutes.&lt;/p&gt;
&lt;p&gt;But production auth means OpenIddict, and OpenIddict means figuring out &lt;code&gt;sub&lt;/code&gt; claims, &lt;code&gt;tenant_id&lt;/code&gt; claims, callback URLs, JavaScript origins, logout URIs, and all the other shenanigans that come with a real identity setup. And that&#39;s before you even get to the external providers.&lt;/p&gt;
&lt;p&gt;Because your users want to log in with Google, Microsoft, or GitHub. And Claude can&#39;t log into any of those developer consoles for you. It cannot:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Create an OAuth application in the Google Cloud Console and generate a client ID and secret&lt;/li&gt;
&lt;li&gt;Register an app in the Microsoft Entra portal and configure the redirect URIs&lt;/li&gt;
&lt;li&gt;Set up a GitHub OAuth App and grab the credentials&lt;/li&gt;
&lt;li&gt;Configure each provider&#39;s callback URLs for every environment you run&lt;/li&gt;
&lt;li&gt;Wire up the correct scopes, consent screens, and token endpoints&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each provider has its own developer portal, its own terminology, its own flow for generating credentials. Google calls it a &amp;quot;consent screen.&amp;quot; Microsoft calls it &amp;quot;app registrations.&amp;quot; GitHub calls it &amp;quot;OAuth Apps&amp;quot; (not to be confused with &amp;quot;GitHub Apps,&amp;quot; which are a different thing entirely). And every single one of them requires you to manually copy a client ID and secret into your configuration.&lt;/p&gt;
&lt;p&gt;Claude can write the OpenIddict server configuration, the external provider middleware, the claim transformation logic. But the actual credential generation, the portal navigation, the environment-specific URL setup? That&#39;s all you, in a browser, clicking through dashboards.&lt;/p&gt;
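&lt;p&gt;To make that hand-off concrete, here is a minimal sketch (the config shape is my own, not OpenIddict&#39;s actual schema) of where those manually generated credentials end up, plus a check for which values are still waiting on a dashboard visit:&lt;/p&gt;

```python
# Hypothetical config shape: the values marked "..." are exactly the things
# you must copy by hand out of each provider's developer portal.
providers = {
    "google": {"client_id": "...", "client_secret": "...",
               "redirect_uri": "https://app.example.com/signin-google"},
    "microsoft": {"client_id": "...", "client_secret": "...",
                  "redirect_uri": "https://app.example.com/signin-microsoft"},
    "github": {"client_id": "...", "client_secret": "...",
               "redirect_uri": "https://app.example.com/signin-github"},
}

REQUIRED = ("client_id", "client_secret", "redirect_uri")

def missing_credentials(cfg: dict) -> list:
    """Return (provider, key) pairs that still hold a placeholder."""
    gaps = []
    for name, entry in cfg.items():
        for key in REQUIRED:
            if entry.get(key, "") in ("", "..."):
                gaps.append((name, key))
    return gaps

gaps = missing_credentials(providers)
```

&lt;p&gt;The code is trivial. Filling in those six strings is the part that costs an afternoon of portal navigation.&lt;/p&gt;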
&lt;h3&gt;Email: It&#39;s Never Just &amp;quot;Send an Email&amp;quot;&lt;/h3&gt;
&lt;p&gt;The code to send an email via the Resend API is about 15 lines. Claude wrote it without issue. But making emails actually arrive in someone&#39;s inbox? That requires a verified sending domain, DNS records for SPF, DKIM, and DMARC, waiting for propagation, and then testing deliverability because Gmail and Outlook have their own opinions about whether your domain is trustworthy.&lt;/p&gt;
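&lt;p&gt;For reference, a minimal Python sketch of that call (the endpoint and payload shape follow Resend&#39;s public API; the key and addresses here are placeholders):&lt;/p&gt;

```python
import json
import urllib.request

def send_email(api_key: str, sender: str, to: str, subject: str, text: str):
    """Build the Resend send-email request: roughly the '15 lines' in question.
    Returns the request object; urlopen(req) would actually send it."""
    body = json.dumps({"from": sender, "to": [to],
                       "subject": subject, "text": text})
    req = urllib.request.Request(
        "https://api.resend.com/emails",
        data=body.encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    return req

req = send_email("re_dummy_key", "hello@example.com",
                 "user@example.com", "Welcome", "Hi there")
```

&lt;p&gt;Fifteen lines, as promised. The deliverability work that makes this message actually land in an inbox is the other ninety percent.&lt;/p&gt;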
&lt;p&gt;And designing an email template that doesn&#39;t look terrible in every email client. Outlook on Windows still uses the Word rendering engine in 2026. Let that sink in.&lt;/p&gt;
&lt;h2&gt;The Full List of Things I Did Without AI&lt;/h2&gt;
&lt;p&gt;Looking back at three weeks of work, I started keeping a rough mental tally of what Claude built versus what I configured by hand. The &amp;quot;by hand&amp;quot; list is longer than I expected:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Cloud Infrastructure:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Cloudflare Pages project setup and custom domain configuration&lt;/li&gt;
&lt;li&gt;Cloudflare Worker deployment and D1 database provisioning&lt;/li&gt;
&lt;li&gt;DNS records for marketing site, API, and email sending&lt;/li&gt;
&lt;li&gt;SSL/TLS certificate configuration (mostly automatic, but debugging when it&#39;s not is painful)&lt;/li&gt;
&lt;li&gt;Build pipeline configuration for the blog (Eleventy + translation + OG image generation)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Authentication &amp;amp; Security:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Google, Microsoft, and GitHub OAuth app registration and credential generation&lt;/li&gt;
&lt;li&gt;OpenIddict configuration with correct claims, callback URLs, JS origins, and logout URIs&lt;/li&gt;
&lt;li&gt;Turnstile bot protection setup (site keys, secret keys, dashboard config)&lt;/li&gt;
&lt;li&gt;CORS policy configuration between frontend, API, and Worker origins&lt;/li&gt;
&lt;/ul&gt;
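&lt;p&gt;The CORS item in that list is deceptively small. The logic itself is a few lines; here is a hedged sketch (the origins are placeholders for the frontend, API, and Worker mentioned above):&lt;/p&gt;

```python
# Hypothetical allow-list: frontend, .NET backend, and Cloudflare Worker
# each need to agree on who may call whom.
ALLOWED_ORIGINS = {
    "https://www.example.com",     # marketing / frontend
    "https://api.example.com",     # .NET backend
    "https://worker.example.com",  # Cloudflare Worker
}

def cors_headers(request_origin: str) -> dict:
    """Echo the origin back only if explicitly allowed.
    A wildcard would be simpler, but breaks credentialed requests."""
    if request_origin not in ALLOWED_ORIGINS:
        return {}
    return {
        "Access-Control-Allow-Origin": request_origin,
        "Access-Control-Allow-Credentials": "true",
        "Vary": "Origin",
    }

ok = cors_headers("https://www.example.com")
blocked = cors_headers("https://evil.example.net")
```

&lt;p&gt;What takes the time is not this function. It is discovering, one deployed environment at a time, which origin string you got wrong.&lt;/p&gt;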
&lt;p&gt;&lt;strong&gt;Email:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Resend account and API key setup&lt;/li&gt;
&lt;li&gt;SPF, DKIM, DMARC DNS records&lt;/li&gt;
&lt;li&gt;Email deliverability testing and troubleshooting&lt;/li&gt;
&lt;li&gt;Template testing across email clients&lt;/li&gt;
&lt;/ul&gt;
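&lt;p&gt;For the curious, the SPF and DMARC records above are just TXT strings with a rigid grammar. A small illustrative sketch (the include domain and report address are placeholders; your provider gives you the real values, and DKIM keys come from the provider too):&lt;/p&gt;

```python
def spf_record(include_domain: str) -> str:
    """Authorize the provider's servers; ~all soft-fails everything else."""
    return f"v=spf1 include:{include_domain} ~all"

def dmarc_record(policy: str, report_addr: str) -> str:
    """Build a DMARC TXT value; p= must be none, quarantine, or reject."""
    assert policy in ("none", "quarantine", "reject")
    return f"v=DMARC1; p={policy}; rua=mailto:{report_addr}"

spf = spf_record("mailer.example.com")  # use the include your provider specifies
dmarc = dmarc_record("quarantine", "dmarc-reports@example.com")
```

&lt;p&gt;Two one-liners. The hours go into pasting them into the right DNS zone, waiting for propagation, and rerunning deliverability tests.&lt;/p&gt;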
&lt;p&gt;&lt;strong&gt;Third-Party Integrations:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DeepL API account and key management&lt;/li&gt;
&lt;li&gt;Google Analytics setup with cookie consent integration&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Azure Hosting:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Azure App Service setup and configuration for the .NET backend&lt;/li&gt;
&lt;li&gt;Azure SQL database provisioning, firewall rules, and connection strings&lt;/li&gt;
&lt;li&gt;Azure Cache for Redis setup and connection configuration&lt;/li&gt;
&lt;li&gt;Azure OpenAI resource provisioning for embeddings and RAG&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Docker configuration for the .NET backend&lt;/li&gt;
&lt;li&gt;Environment variable management across three different deployment targets&lt;/li&gt;
&lt;li&gt;Database connection strings for different environments&lt;/li&gt;
&lt;/ul&gt;
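&lt;p&gt;The environment variable item deserves a sketch of its own. This is my own framing, not any particular tool: per-target overrides layered on shared defaults, with the real deployed environment always winning:&lt;/p&gt;

```python
import os

# Hypothetical layering for "three different deployment targets".
DEFAULTS = {"LOG_LEVEL": "info"}
TARGETS = {
    "local":      {"DB_CONN": "Server=localhost;Database=app"},
    "staging":    {"DB_CONN": "Server=staging.db;Database=app",
                   "LOG_LEVEL": "debug"},
    "production": {"DB_CONN": "Server=prod.db;Database=app",
                   "LOG_LEVEL": "warn"},
}

def resolve_env(target: str) -> dict:
    """Merge defaults, then target overrides, then real environment
    variables, so a deployed secret beats anything checked into the repo."""
    merged = dict(DEFAULTS)
    merged.update(TARGETS[target])
    for key in merged:
        merged[key] = os.environ.get(key, merged[key])
    return merged

cfg = resolve_env("staging")
```

&lt;p&gt;The layering is simple. Keeping three dashboards&#39; worth of actual values in sync with it is the manual part.&lt;/p&gt;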
&lt;p&gt;And honestly I&#39;m probably forgetting a few things. Every third-party service has its own dashboard, its own credential model, its own documentation quality (varying wildly), and its own quirks.&lt;/p&gt;
&lt;h2&gt;Why This Matters More Than People Think&lt;/h2&gt;
&lt;p&gt;Here&#39;s the dimension that gets lost in every conversation about AI-assisted development: &lt;strong&gt;AI tools have zero context about your infrastructure.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Your codebase lives in files that an AI can read. Your Cloudflare configuration does not. Your Google OAuth app settings do not. Your DNS records do not. Your Resend domain verification status does not. The entire operational surface area of a real product is invisible to AI tools, and that surface area is enormous.&lt;/p&gt;
&lt;p&gt;Writing code is the easy part of software engineering, and it&#39;s getting easier by the day. The hard parts are what you do with that code. Operating it, understanding it when something breaks at 2am, extending it when requirements change, and governing it across its entire lifecycle. AI makes the easy part faster. It does nothing for the hard part.&lt;/p&gt;
&lt;h2&gt;The Marketing Site Deserves Its Own Section&lt;/h2&gt;
&lt;p&gt;I built the entire marketing site for this platform in roughly four days. Homepage, pricing page, signup and contact forms with bot protection, privacy policy, four feature deep-dive pages. Claude did probably 70% of the HTML/CSS.&lt;/p&gt;
&lt;p&gt;But then I needed it to actually exist on the internet. The blog runs on Eleventy with an 8-step build pipeline: translate posts via DeepL, build the site, translate static HTML pages, copy shared assets, generate OG images from SVGs, generate audio versions, manage audio manifests, produce a multilingual sitemap. Claude helped write pieces of that pipeline, but getting it all to work together with the right file paths and the right Cloudflare Pages deployment settings took a full day of trial and error.&lt;/p&gt;
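&lt;p&gt;Those eight steps can be sketched as an explicit, ordered runner; the step names match the prose above, while the runner itself is my own framing, not the actual build script:&lt;/p&gt;

```python
# The 8-step pipeline from the post, as ordered stages.
PIPELINE = [
    "translate posts via DeepL",
    "build the Eleventy site",
    "translate static HTML pages",
    "copy shared assets",
    "generate OG images from SVGs",
    "generate audio versions",
    "manage audio manifests",
    "produce multilingual sitemap",
]

def run_pipeline(steps, runner) -> list:
    """Run steps in order, stopping at the first failure so a broken
    translation step never ships a half-built site."""
    completed = []
    for step in steps:
        if not runner(step):
            break
        completed.append(step)
    return completed

done = run_pipeline(PIPELINE, lambda step: True)  # stub runner: all succeed
```

&lt;p&gt;Fail-fast ordering matters here: each stage consumes the previous stage&#39;s output, so a silent mid-pipeline failure produces a deploy with the wrong file paths, which is exactly the trial-and-error day described above.&lt;/p&gt;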
&lt;p&gt;And the developer documentation site? That&#39;s a separate Cloudflare Pages project with its own domain, its own build config, and its own deployment triggers. Another dashboard, another set of environment variables, another round of DNS.&lt;/p&gt;
&lt;h2&gt;The Pattern I Keep Seeing&lt;/h2&gt;
&lt;p&gt;For any given feature, Claude handles about &lt;strong&gt;80% of the work by volume&lt;/strong&gt;. Lines of code, files created, problems solved. But the remaining 20% is entirely manual configuration work: clicking through web dashboards, copying keys between services, debugging integration issues that only show up in deployed environments.&lt;/p&gt;
&lt;p&gt;And that 20% takes at least as long as the other 80%. Sometimes longer.&lt;/p&gt;
&lt;p&gt;But here&#39;s the thing that changed compared to how solo development used to work: in the past, you were either writing code or doing config. Never both. If you spent a day setting up Stripe webhooks and testing payment flows in their dashboard, that was a day you wrote zero application code. Your project just stopped moving forward on one front while you worked on the other.&lt;/p&gt;
&lt;p&gt;With Claude, that&#39;s no longer true. While I was deep in the Stripe dashboard figuring out webhook endpoints and event types, Claude was building out the next service interface. While I was clicking through Google&#39;s OAuth consent screen setup for the third time because I got the scopes wrong, Claude was writing Vue components. My head was in configuration land, but the codebase kept growing. That&#39;s genuinely new. A solo developer can now move on two fronts at once, and that might be the biggest practical difference AI makes for small teams.&lt;/p&gt;
&lt;p&gt;That said, when you&#39;re writing code with AI help, you&#39;re in a tight feedback loop. Write, test, fix, iterate. When you&#39;re debugging why your Cloudflare Worker returns CORS errors only in production, you&#39;re staring at dashboard screenshots, reading community forum posts, and trying random configuration changes hoping one of them sticks.&lt;/p&gt;
&lt;h2&gt;What Needs to Change&lt;/h2&gt;
&lt;p&gt;I do not think this is a permanent limitation. The missing piece is obvious: AI tools need to be able to interact with third-party service APIs and dashboards. Not just write code that calls them, but actually configure them.&lt;/p&gt;
&lt;p&gt;Some of this is starting to happen. MCP (Model Context Protocol) servers for various services are popping up. Anthropic is clearly thinking about tool use as a first-class concept. But we&#39;re nowhere near the point where I could say &amp;quot;set up my Cloudflare Worker with a D1 database, configure the custom domain, and add Turnstile protection&amp;quot; and have it actually happen.&lt;/p&gt;
&lt;p&gt;Until then, the honest story of building a product with AI is this: &lt;strong&gt;AI is an incredible accelerator for writing application code. But a sellable product is only about half application code.&lt;/strong&gt; The other half is infrastructure, third-party integrations, deployment pipelines, email deliverability, domain configuration, and security setup. And for all of that, you&#39;re on your own.&lt;/p&gt;
&lt;p&gt;(This is, incidentally, one of the reasons I&#39;m building this as a hosted platform and not just shipping open-source code. Getting documentation software to run is not that hard. Getting it to run reliably, with proper auth and email and hosting? That&#39;s the product.)&lt;/p&gt;
&lt;h2&gt;If You&#39;re About to Try This&lt;/h2&gt;
&lt;p&gt;A few practical things I learned that might save you time:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Start with the infrastructure, not the code.&lt;/strong&gt; Set up your hosting, your auth provider, your email service, and your custom domains first. Get a &amp;quot;hello world&amp;quot; deployed to production before you write a single line of real application code. The number of problems that only surface in deployed environments is depressing.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Keep a credentials doc.&lt;/strong&gt; You will have API keys, client IDs, callback URLs, database IDs, and secret keys scattered across 8 different dashboards. I use a local encrypted file. You can use 1Password or whatever. Just have a single place.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Budget twice as much time for &amp;quot;the last mile&amp;quot; as you think.&lt;/strong&gt; If Claude helps you build the feature in 2 hours, budget another 2 hours minimum for deploying it, configuring the integrations, and testing in production.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Accept that some days will be all dashboard work.&lt;/strong&gt; There were full days where I wrote essentially zero code but made critical progress: registering OAuth apps across three providers, setting up email, debugging DNS. Those days feel less productive but they&#39;re not.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Use multiple agents against each other.&lt;/strong&gt; Don&#39;t just use one AI. Have Claude write the code, then point Codex or ChatGPT at it and ask what&#39;s wrong. Different models catch different things. It sounds redundant, but it&#39;s the closest thing to a code review you&#39;ll get as a solo developer.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Three weeks is still wildly fast for what I built. I&#39;m not complaining about Claude. It let a single developer build something that would normally take a small team the better part of a year. But the story being told in the AI hype cycle (prompt, code, ship, done) is missing the entire middle section where you make it real.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The app is the easy part. Making it real is the job.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="developer-experience" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Stop Firing People Because AI Exists</title>
    <link href="https://www.tcdev.de/blog/stop-firing-people-because-ai-exists/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/stop-firing-people-because-ai-exists/</id>
    <updated>2026-04-04T00:00:00Z</updated>
    <summary>One person with AI can do the work of ten. But did anyone stop to ask what happens to that one person? Or what happens if you keep the ten?</summary>
    <content type="html">&lt;p&gt;A friend of mine runs content for a mid-size SaaS company. Last year, her team was eight people. Writers, editors, a localisation specialist, someone who handled the knowledge base. Good team, solid output. Then the CEO attended a conference, came back fired up about AI, and within three months the team was down to three. The reasoning? &lt;em&gt;&amp;quot;With AI tools, three people can produce what eight used to.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;And technically, that&#39;s true. The three remaining people are producing roughly the same volume. Blog posts, help docs, product updates, internal comms. The numbers look fine on a dashboard.&lt;/p&gt;
&lt;p&gt;But my friend hasn&#39;t slept properly in months. She&#39;s context-switching between writing, editing, prompt engineering, QA-ing AI output, managing translations, and doing all the strategic work that used to be shared across the team. Her two remaining colleagues are in the same boat. One of them is already looking for another job.&lt;/p&gt;
&lt;p&gt;The company saved five salaries. It&#39;s also slowly losing the three people who actually know how things work.&lt;/p&gt;
&lt;h2&gt;The math that looks right but isn&#39;t&lt;/h2&gt;
&lt;p&gt;Here&#39;s the pitch that&#39;s been making the rounds in boardrooms since ChatGPT went mainstream: one person with AI can now do the work of ten. And if that&#39;s the case, why keep ten?&lt;/p&gt;
&lt;p&gt;It&#39;s a compelling argument. Simple. Clean. Fits on a slide.&lt;/p&gt;
&lt;p&gt;It&#39;s also dangerously incomplete.&lt;/p&gt;
&lt;p&gt;Yes, AI can compress tasks. According to &lt;a href=&quot;https://www.microsoft.com/en-us/worklab/work-trend-index/ai-at-work-is-here-now-comes-the-hard-part&quot;&gt;Microsoft&#39;s 2024 Work Trend Index&lt;/a&gt;, 90% of AI users at work say the tools help them save time. The heaviest Microsoft Teams users summarised eight hours of meetings using Copilot in a single month. That&#39;s a full workday reclaimed just from meeting summaries. And 85% say AI helps them focus on their most important work.&lt;/p&gt;
&lt;p&gt;Those are real numbers. The productivity gains are not imaginary.&lt;/p&gt;
&lt;p&gt;But here&#39;s what the &amp;quot;fire nine people&amp;quot; crowd never talks about: the person who remains doesn&#39;t just absorb the output. They absorb the cognitive load, the context, the decision-making, the coordination, the quality assurance, and every bit of institutional knowledge that walked out the door with those nine former colleagues.&lt;/p&gt;
&lt;h2&gt;Mental load is not a spreadsheet&lt;/h2&gt;
&lt;p&gt;There&#39;s a concept in psychology called cognitive load theory. It describes the total amount of mental effort being used in working memory at any given time. And every time you ask one person to do the thinking that five people used to share, you&#39;re not saving effort. You&#39;re concentrating it.&lt;/p&gt;
&lt;p&gt;I think about this a lot when people tell me AI makes workers &amp;quot;10x more productive.&amp;quot; Productive at what? Producing more words? Shipping more tickets? Generating more slide decks? Sure. But the actual hard part of knowledge work has never been the producing. It&#39;s the thinking. Deciding what to produce. Understanding context. Making judgment calls. Knowing when something is wrong even when it looks right on the surface.&lt;/p&gt;
&lt;p&gt;AI doesn&#39;t do that for you. AI gives you a first draft, and now you need to be smart enough to evaluate it, experienced enough to catch the subtle errors, and present enough to notice when the output is confidently wrong. (If you&#39;ve ever watched someone ship an AI-generated internal doc without reading it, you know exactly what I mean.)&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.gallup.com/workplace/659279/global-engagement-falls-second-time-2009.aspx&quot;&gt;Gallup&#39;s 2025 State of the Global Workplace&lt;/a&gt; report found that global employee engagement fell to 21% in 2024, down from 23% the year before. That drop cost the world economy an estimated $438 billion in lost productivity. Manager engagement dropped even harder, from 30% to 27%. Female managers saw a seven-point decline. Managers under 35 dropped five points.&lt;/p&gt;
&lt;p&gt;These are the people who are supposed to be leading AI adoption. And they&#39;re burning out.&lt;/p&gt;
&lt;h2&gt;The amplification argument&lt;/h2&gt;
&lt;p&gt;Let me offer a different way to think about this.&lt;/p&gt;
&lt;p&gt;If one person with AI can do the work of ten, then ten people with AI can do the work of a hundred.&lt;/p&gt;
&lt;p&gt;Read that again. Because this is the part that almost nobody is talking about, and it&#39;s the part that should keep every CEO awake at night. Not because it&#39;s scary, but because it&#39;s an enormous opportunity that most companies are throwing away.&lt;/p&gt;
&lt;p&gt;The companies laying off half their workforce because &amp;quot;AI can handle it&amp;quot; are not being efficient. They&#39;re being short-sighted. They&#39;re optimising for a quarterly headcount number while their competitors figure out what happens when you give powerful tools to a full team of motivated, experienced people.&lt;/p&gt;
&lt;p&gt;I saw this play out at a startup event in Zurich last month. Two companies in the same space. Roughly the same size, same market. Company A had cut their content team from twelve to four. Company B had kept all twelve and given them AI tools plus training. Guess which one was producing multilingual content in six languages, running experiments with new formats, and shipping weekly product updates to their knowledge base? (It wasn&#39;t Company A.)&lt;/p&gt;
&lt;h2&gt;What actually happens when you cut&lt;/h2&gt;
&lt;p&gt;Let me walk through what happens in practice when you replace a team of ten with one or two &amp;quot;AI-enhanced&amp;quot; super-workers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Week one feels great.&lt;/strong&gt; The remaining people are energised. They have new tools. They&#39;re producing a lot. Leadership is thrilled. The dashboard numbers look incredible relative to headcount.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Month two, the cracks appear.&lt;/strong&gt; The one person responsible for documentation discovers that AI-generated content needs serious review. Not light editing. Deep review. Because the AI doesn&#39;t know your product nuances, your customer context, or the three things you changed last week that invalidated half of what was written. The review work alone eats the time that was &amp;quot;saved&amp;quot; by generating content faster.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Month four, institutional knowledge gaps emerge.&lt;/strong&gt; Remember those eight people you let go? They didn&#39;t just write content. They had relationships with product managers. They understood customer pain points from years of support ticket patterns. They knew which documentation topics generated the most questions. That knowledge is gone. The AI certainly doesn&#39;t have it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Month six, you&#39;re hiring contractors.&lt;/strong&gt; Because the remaining people are overwhelmed, quality has dropped, and someone finally noticed that the knowledge base hasn&#39;t been properly updated in weeks. But contractors don&#39;t have context either, so you&#39;re paying more per hour for worse results.&lt;/p&gt;
&lt;p&gt;I&#39;m not making this up. I&#39;ve watched this pattern repeat at three different companies in the last year alone.&lt;/p&gt;
&lt;h2&gt;The data says keep your people (and train them)&lt;/h2&gt;
&lt;p&gt;The &lt;a href=&quot;https://www.weforum.org/publications/the-future-of-jobs-report-2025/digest/&quot;&gt;World Economic Forum&#39;s Future of Jobs Report 2025&lt;/a&gt; asked over 1,000 global employers about their workforce plans. The numbers tell an interesting story. Yes, 40% of employers plan to reduce staff where AI automates tasks. But 85% plan to upskill their existing workforce. And 70% expect to hire people with new skills, not fewer people.&lt;/p&gt;
&lt;p&gt;The report projects net job growth of 78 million by 2030. That&#39;s after accounting for the 92 million displaced roles. The world isn&#39;t moving toward fewer workers. It&#39;s moving toward differently skilled workers.&lt;/p&gt;
&lt;p&gt;And here&#39;s the one that should give every &amp;quot;let&#39;s cut headcount&amp;quot; CEO pause: 64% of employers identified supporting employee health and well-being as a key strategy for talent availability. Not &amp;quot;reduce costs.&amp;quot; Not &amp;quot;automate everything.&amp;quot; &lt;strong&gt;Support well-being.&lt;/strong&gt; Because companies that burn through their people don&#39;t get to hire the good ones later.&lt;/p&gt;
&lt;p&gt;Meanwhile, a &lt;a href=&quot;https://www.bcg.com/publications/2023/how-people-create-and-destroy-value-with-gen-ai&quot;&gt;BCG and Harvard Business School study&lt;/a&gt; found that when teams used AI for creative tasks, around 90% improved their performance, with output quality rising 40% above control groups. But the study also found something that should make every leader uncomfortable: the diversity of ideas among AI-assisted groups dropped by 41%.&lt;/p&gt;
&lt;p&gt;Think about what that means. You fire seven people from your ten-person team. The three who remain use AI to produce the same volume. But the range of ideas, perspectives, and approaches shrinks by nearly half. Your output looks productive but gradually becomes homogeneous. And nobody notices until a competitor ships something genuinely creative and you can&#39;t figure out why your team isn&#39;t doing the same.&lt;/p&gt;
&lt;h2&gt;The mental load nobody budgets for&lt;/h2&gt;
&lt;p&gt;Microsoft&#39;s survey found that 68% of people struggle with the pace and volume of work, and 46% feel burned out. And this was the state of affairs &lt;em&gt;before&lt;/em&gt; you told them they&#39;re now doing the jobs of their three former teammates.&lt;/p&gt;
&lt;p&gt;Here&#39;s something that doesn&#39;t show up in productivity dashboards: the cognitive cost of being the last line of defence. When you&#39;re the only person reviewing AI output, you don&#39;t get to have an off day. When you&#39;re the sole owner of the knowledge base, every support question lands on your desk. When there&#39;s nobody to bounce ideas off because the team was &amp;quot;right-sized,&amp;quot; every decision is yours alone.&lt;/p&gt;
&lt;p&gt;I&#39;ve been building this platform partly because I&#39;ve seen this problem up close. When documentation teams shrink, the knowledge doesn&#39;t shrink with them. The amount of content that needs to exist, stay current, and be accurate across languages doesn&#39;t decrease just because there are fewer people maintaining it. If anything, it grows (this is exactly the problem I&#39;m building this to solve, by the way, with features like forced expiry dates and block-level translations that make smaller teams genuinely more effective rather than more overwhelmed).&lt;/p&gt;
&lt;p&gt;But even the best tools don&#39;t fix a fundamentally broken staffing decision. You can&#39;t automate away the need for human judgment, context, and care. You can only make those humans more effective.&lt;/p&gt;
&lt;h2&gt;What smart companies actually do&lt;/h2&gt;
&lt;p&gt;The most impressive thing about the Microsoft data is what the &amp;quot;AI power users&amp;quot; look like. These are people who use AI multiple times a day and save over 30 minutes a day. They&#39;re 68% more likely to experiment with different ways of using AI. They don&#39;t just generate more output. They redesign how work happens.&lt;/p&gt;
&lt;p&gt;And here&#39;s the kicker: they exist within organisations that invest in them. AI power users are 61% more likely to hear from their CEO about the importance of AI at work. They&#39;re 53% more likely to receive encouragement from leadership to rethink their entire function. They get tailored training, not just a ChatGPT login.&lt;/p&gt;
&lt;p&gt;In other words, the most productive AI workers aren&#39;t lone survivors of a layoff. They&#39;re members of supported, invested-in teams.&lt;/p&gt;
&lt;p&gt;Let me contrast that with what I see at companies that took the &amp;quot;cut headcount&amp;quot; route. Their remaining employees aren&#39;t power users. They&#39;re overwhelmed generalists desperately trying to keep things running. They don&#39;t have time to experiment with AI because they&#39;re too busy using it for survival. There&#39;s no rethinking the function because the function is just... them, alone, doing everything.&lt;/p&gt;
&lt;h2&gt;The knowledge problem nobody mentions&lt;/h2&gt;
&lt;p&gt;There&#39;s one more thing. And I don&#39;t hear anyone talking about it, which is odd because it should be obvious.&lt;/p&gt;
&lt;p&gt;When you fire experienced knowledge workers, the knowledge leaves with them. It does not stay in the building. It&#39;s not in the wiki. It&#39;s not in the AI. It&#39;s in the heads of the people who built the processes, understood the edge cases, and knew which customers cared about which details.&lt;/p&gt;
&lt;p&gt;You know what happens when you have great AI tools and no institutional knowledge? You get beautifully formatted, confidently delivered, completely wrong information. At scale.&lt;/p&gt;
&lt;p&gt;I talked to a head of documentation at a fintech company last month (she didn&#39;t want to be named, which tells you something). After their team was cut from six to two, they started relying heavily on AI to maintain their developer docs. Within four months, they noticed a spike in support tickets. The docs looked fine. They were well-written, up to date on the surface. But they contained subtle errors that only someone with deep product knowledge would have caught. An API parameter description that was technically correct but practically misleading. A migration guide that missed a step everyone on the old team just knew about. Little things that AI can&#39;t know because AI doesn&#39;t attend your standups, doesn&#39;t read your Slack threads, doesn&#39;t hear the frustrated &amp;quot;oh, that doc is wrong again&amp;quot; from the support engineer at the coffee machine.&lt;/p&gt;
&lt;h2&gt;The real question&lt;/h2&gt;
&lt;p&gt;So here&#39;s what I think the conversation should actually be about.&lt;/p&gt;
&lt;p&gt;Not: &lt;em&gt;&amp;quot;How many people can we cut now that we have AI?&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;But: &lt;em&gt;&amp;quot;What becomes possible when we give AI to everyone we already have?&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Your ten-person documentation team with AI tools doesn&#39;t become redundant. It becomes a team that can maintain content in twelve languages instead of two. That can keep every piece of content current with automated freshness checks. That can experiment with new formats, run A/B tests on help content, build interactive guides, and still have time to think strategically about what customers actually need.&lt;/p&gt;
&lt;p&gt;Your ten-person marketing team with AI doesn&#39;t become five people doing the same work with more stress. It becomes ten people who can personalise campaigns at a scale that was previously impossible, test more creative variations, respond faster to market changes, and still have the cognitive bandwidth to come up with genuinely original ideas that the AI never would have generated.&lt;/p&gt;
&lt;p&gt;That&#39;s not a cost. That&#39;s an investment with a return that compounds.&lt;/p&gt;
&lt;h2&gt;Where this ends up&lt;/h2&gt;
&lt;p&gt;The companies that win the next five years won&#39;t be the ones who cut the most heads. They&#39;ll be the ones who figured out how to make their existing teams genuinely more capable.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The question isn&#39;t whether one person can do the work of ten. The question is what happens when all ten can do the work of a hundred.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;If you&#39;re a leader reading this, I&#39;d ask you one thing. Before you approve that next round of &amp;quot;AI-enabled restructuring,&amp;quot; talk to the people who stayed after the last one. Ask them how they&#39;re doing. Ask them what they&#39;ve stopped doing because there&#39;s no time. Ask them what&#39;s falling through the cracks.&lt;/p&gt;
&lt;p&gt;And then imagine what they could accomplish if, instead of carrying the load alone, they had a full team and the best tools available.&lt;/p&gt;
&lt;p&gt;That&#39;s not a fantasy. For the companies willing to invest in their people instead of replacing them, that&#39;s the next twelve months.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="knowledge-management" />
    <category term="collaboration" />
  </entry>
  <entry>
    <title>Readers and Writers Are in Different Mental Modes. Why Does Every Tool Give Them the Same UI?</title>
    <link href="https://www.tcdev.de/blog/readers-and-writers-need-different-interfaces/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/readers-and-writers-need-different-interfaces/</id>
    <updated>2026-04-03T00:00:00Z</updated>
    <summary>Documentation platforms force readers, writers, and AI into one interface. But consuming knowledge and creating it are cognitively different tasks. this platform separates them.</summary>
    <content type="html">&lt;p&gt;Open Confluence right now and find a document you need to read. What do you see?&lt;/p&gt;
&lt;p&gt;A toolbar. Edit buttons. Comment boxes. Page history links. A sidebar full of navigation you don&#39;t need. Breadcrumbs. Metadata fields. Permission indicators. An entire authoring interface wrapped around the text you came here to read.&lt;/p&gt;
&lt;p&gt;Now think about what you actually wanted: the answer to a question, or the next three steps in a process, or a policy you need to reference before a meeting in ten minutes.&lt;/p&gt;
&lt;p&gt;You came to consume. The interface assumed you came to create.&lt;/p&gt;
&lt;p&gt;This is the default in almost every documentation platform. Confluence, Notion, SharePoint, GitBook, Nuclino, Slite. They all present the same environment to readers and writers. The page is the page. Everyone gets the same view, give or take a few permission-gated buttons.&lt;/p&gt;
&lt;p&gt;It feels normal because we&#39;ve never had anything else. But it&#39;s a design decision, not a law of nature. And it&#39;s the wrong one.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/readers-writers-ui.svg&quot; alt=&quot;The same interface for reading and writing creates cognitive overhead&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Reading and writing are not the same cognitive task&lt;/h2&gt;
&lt;p&gt;This isn&#39;t a UI preference. It&#39;s a fundamental difference in how the brain works.&lt;/p&gt;
&lt;p&gt;When you write, you&#39;re in generative mode. You&#39;re constructing, organising, deciding what to include and what to leave out. You need tools: formatting options, structure controls, media embedding, metadata fields, version history, collaboration features. The interface should give you power and flexibility.&lt;/p&gt;
&lt;p&gt;When you read, you&#39;re in receptive mode. You&#39;re scanning, filtering, extracting what&#39;s relevant, and trying to move on. You need clarity: clean typography, focused layout, minimal distraction. The interface should get out of the way.&lt;/p&gt;
&lt;p&gt;Cognitive psychology has a clear framework for this. &lt;a href=&quot;https://www.instructionaldesign.org/theories/cognitive-load/&quot;&gt;Cognitive Load Theory&lt;/a&gt;, developed by John Sweller in the late 1980s, distinguishes between intrinsic load (the difficulty of the material itself), germane load (the effort of learning and integrating), and extraneous load (everything the environment adds that doesn&#39;t help). Every toolbar, sidebar, and edit button visible to a reader is extraneous load. It doesn&#39;t help them understand the content. It actively competes for attention.&lt;/p&gt;
&lt;p&gt;Research by &lt;a href=&quot;https://doi.org/10.1207/S15326985EP3801_6&quot;&gt;Mayer and Moreno (2003)&lt;/a&gt; on multimedia learning demonstrated that reducing extraneous elements improves both comprehension and retention. Their coherence principle is direct: &lt;em&gt;people learn better when extraneous material is excluded rather than included.&lt;/em&gt; A documentation interface that shows authoring controls to readers is violating this principle on every single page load.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The reader doesn&#39;t need to see the writer&#39;s tools. Showing them anyway isn&#39;t neutral. It&#39;s actively harmful to comprehension.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;How current platforms handle this (they mostly don&#39;t)&lt;/h2&gt;
&lt;p&gt;Let&#39;s look at what exists.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Confluence&lt;/strong&gt; has a read mode and an edit mode, but the read mode is still surrounded by the platform&#39;s navigation, metadata, and page tree. The editing toolbar disappears when you&#39;re not editing, but the mental frame of &amp;quot;this is an editable wiki page&amp;quot; never fully goes away. Every reader sees the &amp;quot;Edit&amp;quot; button. The page whispers: &lt;em&gt;you could change this.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Notion&lt;/strong&gt; is worse in this regard. Its core design philosophy is that everything is always editable. Click anywhere and you&#39;re typing. That&#39;s brilliant for writers. It&#39;s terrible for readers who just want to absorb content without the anxiety of accidentally modifying something. Notion&#39;s own &lt;a href=&quot;https://www.notion.com/templates&quot;&gt;template gallery&lt;/a&gt; shows this: every template is a workspace, not a publication.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;SharePoint&lt;/strong&gt; technically supports different page layouts for viewing and editing, but the overall experience is still corporate intranet. Readers feel like they&#39;re inside an enterprise tool, not reading a document optimised for understanding.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;GitBook&lt;/strong&gt; comes closest to a reading-first experience, with its clean documentation-style output. But even there, the reader experience serves the assumption that the reader is a developer looking at technical docs. It&#39;s not designed for the general knowledge consumer.&lt;/p&gt;
&lt;p&gt;None of these platforms treat reading as a fundamentally different activity from writing. They treat it as writing with the toolbar hidden.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/readers-writers-current-tools.svg&quot; alt=&quot;Current tools: one interface, all audiences&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The cost of a single interface&lt;/h2&gt;
&lt;p&gt;This isn&#39;t just an aesthetics problem. It has measurable consequences.&lt;/p&gt;
&lt;h3&gt;Information overload reduces comprehension&lt;/h3&gt;
&lt;p&gt;A &lt;a href=&quot;https://doi.org/10.1086/209336&quot;&gt;study published in the Journal of Consumer Research&lt;/a&gt; found that information overload leads to poorer decision quality, with the effect increasing as the ratio of irrelevant to relevant information grows. A documentation page with visible authoring controls, navigation trees, and metadata fields increases that ratio for every reader who isn&#39;t there to write.&lt;/p&gt;
&lt;h3&gt;Context switching has a real cost&lt;/h3&gt;
&lt;p&gt;When an interface signals &amp;quot;you can edit this,&amp;quot; it activates a different cognitive frame than &amp;quot;read this.&amp;quot; &lt;a href=&quot;https://www.ics.uci.edu/~gmark/&quot;&gt;Gloria Mark&#39;s research at UC Irvine&lt;/a&gt; on attention and multitasking found that it takes an average of 23 minutes and 15 seconds to fully refocus after a context switch. A reader who momentarily considers editing (even to fix a typo) has been pulled out of reading mode. That&#39;s not a hypothetical. Anyone who has used Notion knows the experience of clicking to select text and accidentally starting to type.&lt;/p&gt;
&lt;h3&gt;Readers and writers have different needs from the same content&lt;/h3&gt;
&lt;p&gt;A writer needs to see structure, formatting markers, block types, metadata, and collaboration signals. They need the full machinery.&lt;/p&gt;
&lt;p&gt;A reader needs to see clean text, clear hierarchy, and the fastest path to the information they&#39;re looking for. They need the content, not the machinery.&lt;/p&gt;
&lt;p&gt;Serving both from the same interface means neither gets an experience optimised for what they&#39;re actually doing.&lt;/p&gt;
&lt;h2&gt;And then there&#39;s the third audience: AI&lt;/h2&gt;
&lt;p&gt;This is where it gets complicated, and where existing platforms are completely unprepared.&lt;/p&gt;
&lt;p&gt;Documentation in 2026 has three distinct consumers, not two:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Writers&lt;/strong&gt; who create and maintain content&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Readers&lt;/strong&gt; who consume content visually&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AI systems&lt;/strong&gt; that retrieve, parse, and synthesise content programmatically&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each of these audiences needs a fundamentally different interface to the same underlying content.&lt;/p&gt;
&lt;p&gt;Writers need rich editing tools, collaboration features, and structural controls. Readers need clean, focused presentation with minimal distraction. AI needs structured, machine-parseable output with explicit metadata: freshness signals, classification labels, block-level addressing, and clean semantic markup.&lt;/p&gt;
&lt;p&gt;As we discussed in &lt;a href=&quot;https://www.tcdev.de/blog/builders-not-developers-how-claude-changed-devrel/&quot;&gt;Builders, Not Developers&lt;/a&gt;, AI intermediaries are already the dominant consumer of documentation for a growing share of knowledge workers. &lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;GitHub&#39;s 2024 developer survey&lt;/a&gt; found 97% of enterprise developers have used AI coding tools. By 2026, &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;84% of developers use AI tools regularly&lt;/a&gt;, with 41% of all code being AI-generated.&lt;/p&gt;
&lt;p&gt;These AI systems don&#39;t care about your sidebar or your toolbar. They need clean data. And a platform that conflates the reader view with the writer view is also conflating the AI-consumable surface with the human authoring surface. That&#39;s three mismatches in one interface.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/readers-writers-three-audiences.svg&quot; alt=&quot;Three audiences, three different needs&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;How this approach separates the experiences&lt;/h2&gt;
&lt;p&gt;This platform is built around the principle that creating content and consuming content are different activities that deserve different interfaces.&lt;/p&gt;
&lt;h3&gt;The writer&#39;s environment&lt;/h3&gt;
&lt;p&gt;When you&#39;re writing in this platform, you get a full authoring environment. Rich text editing with TipTap, block-level controls, translation status indicators, expiry management, collaboration tools, content structure views, and everything else a writer needs to create and maintain high-quality documentation.&lt;/p&gt;
&lt;p&gt;The writer sees the machinery because they need the machinery.&lt;/p&gt;
&lt;!-- Screenshot: this platform writing environment --&gt;
&lt;h3&gt;The reader&#39;s environment&lt;/h3&gt;
&lt;p&gt;When someone consumes a document on this platform, they see a clean, focused reading experience. No editing chrome. No toolbars. No &amp;quot;you could modify this&amp;quot; signals. Just the content, presented in a layout optimised for comprehension and scanning.&lt;/p&gt;
&lt;p&gt;The reader doesn&#39;t see the edit button because they&#39;re not here to edit. They&#39;re here to learn something, follow a process, or find an answer. The interface respects that intent.&lt;/p&gt;
&lt;!-- Screenshot: this platform reading experience --&gt;
&lt;h3&gt;The AI surface&lt;/h3&gt;
&lt;p&gt;For AI consumers, this platform exposes content through structured APIs with full metadata. Every block carries its freshness score, translation status, content hash, and classification labels. AI systems can query content at the block level, filter by freshness, exclude stale or draft material, and retrieve exactly the structured data they need.&lt;/p&gt;
&lt;p&gt;No scraping a wiki page and hoping for the best. The AI gets a purpose-built interface, just like the reader and the writer do.&lt;/p&gt;
&lt;!-- Screenshot: this platform AI surface / API --&gt;
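&lt;p&gt;To make the idea concrete, here is a small sketch of how an AI consumer might filter block-level content before citing it. The field names (&lt;code&gt;freshness&lt;/code&gt;, &lt;code&gt;status&lt;/code&gt;) are illustrative, not this platform&#39;s actual API schema:&lt;/p&gt;

```python
# Hypothetical block-level payload; field names are illustrative,
# not this platform's real API schema.
blocks = [
    {"id": "blk-001", "text": "Rotate API keys quarterly.",
     "freshness": 0.92, "status": "reviewed", "lang": "en"},
    {"id": "blk-002", "text": "Use the legacy v1 endpoint.",
     "freshness": 0.31, "status": "stale", "lang": "en"},
]

def citable(blocks, min_freshness=0.7):
    """Keep only blocks an AI consumer should trust enough to cite."""
    return [b for b in blocks
            if b["status"] == "reviewed" and b["freshness"] >= min_freshness]

print([b["id"] for b in citable(blocks)])  # ['blk-001']
```

&lt;p&gt;The point is not the two-line filter; it&#39;s that the metadata exists to be filtered on at all. A scraped wiki page offers nothing equivalent.&lt;/p&gt;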
&lt;h2&gt;One content layer, three interfaces&lt;/h2&gt;
&lt;p&gt;The important thing is that we&#39;re not maintaining three copies of the content. This isn&#39;t the five-copies-of-onboarding problem we discussed in &lt;a href=&quot;https://www.tcdev.de/blog/stop-maintaining-five-copies-of-the-same-document/&quot;&gt;Stop Maintaining Five Copies of the Same Document&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;It&#39;s one content layer, stored as structured blocks, served through three different views optimised for three different audiences.&lt;/p&gt;
&lt;p&gt;The writer edits blocks. The reader sees assembled, styled content. The AI queries structured data with metadata. Same blocks. Same source of truth. Different presentation layer for each consumer.&lt;/p&gt;
&lt;p&gt;This is only possible because of the block-level architecture. Each piece of content is an individually addressable unit with its own metadata. You can present those blocks differently depending on who&#39;s asking for them:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Audience&lt;/th&gt;
&lt;th&gt;Needs&lt;/th&gt;
&lt;th&gt;Gets&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Writer&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Formatting, structure, collaboration, metadata&lt;/td&gt;
&lt;td&gt;Full authoring environment with block-level controls&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reader&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Clean text, clear hierarchy, fast scanning&lt;/td&gt;
&lt;td&gt;Focused reading view, no editing chrome&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;AI&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Structured data, freshness scores, classification&lt;/td&gt;
&lt;td&gt;Block-level API with full metadata&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
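&lt;p&gt;In code, &amp;quot;one content layer, three interfaces&amp;quot; is just three projections of the same record. This is a minimal sketch with invented field names, not this platform&#39;s implementation:&lt;/p&gt;

```python
# One stored block, three presentation layers (illustrative sketch;
# field names are invented for the example).
block = {"id": "blk-7", "type": "paragraph",
         "text": "Restart the service after config changes.",
         "freshness": 0.88, "labels": ["ops"]}

def writer_view(b):
    # Writers see the full record: structure, metadata, the machinery.
    return b

def reader_view(b):
    # Readers see only the content, stripped of everything else.
    return b["text"]

def ai_view(b):
    # AI consumers get structured data plus the trust metadata.
    return {"id": b["id"], "text": b["text"],
            "freshness": b["freshness"], "labels": b["labels"]}
```

&lt;p&gt;Same block, same source of truth; each audience gets only what its mode of consumption needs.&lt;/p&gt;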
&lt;h2&gt;Why this matters more than it looks&lt;/h2&gt;
&lt;p&gt;You might read this and think: &amp;quot;It&#39;s just UI. Different views of the same thing. How important can it be?&amp;quot;&lt;/p&gt;
&lt;p&gt;Very important, it turns out.&lt;/p&gt;
&lt;h3&gt;Reader trust&lt;/h3&gt;
&lt;p&gt;People trust content that looks published. When a page looks like a wiki that anyone can edit, readers unconsciously discount it. When the same content is presented in a clean, publication-quality reading view, it carries more authority. This isn&#39;t irrational. It&#39;s a signal that someone took the presentation seriously, which implies they took the content seriously too.&lt;/p&gt;
&lt;p&gt;Nielsen Norman Group has studied this extensively. Their &lt;a href=&quot;https://www.nngroup.com/articles/trust-signals-content/&quot;&gt;research on content credibility&lt;/a&gt; shows that design quality and presentation are among the strongest signals users rely on to assess content trustworthiness. A cluttered editor view actively undermines the credibility of the content it displays.&lt;/p&gt;
&lt;h3&gt;Writer productivity&lt;/h3&gt;
&lt;p&gt;Writers who work in a dedicated authoring environment don&#39;t have to context-switch between &amp;quot;am I reading or am I writing?&amp;quot; The tools are there because they&#39;re supposed to be there, not because the interface couldn&#39;t decide who was looking at it.&lt;/p&gt;
&lt;h3&gt;AI reliability&lt;/h3&gt;
&lt;p&gt;When AI systems have a purpose-built surface with structured metadata, they can make better decisions about what to retrieve and what to exclude. They can check freshness scores before including a block in an answer. They can respect classification labels. They can filter by language, status, or audience. None of that is possible when the AI is scraping the same HTML page that was designed for human readers.&lt;/p&gt;
&lt;h2&gt;The mental model shift&lt;/h2&gt;
&lt;p&gt;The fundamental assumption of most documentation platforms is: &lt;em&gt;the page is the unit, and everyone interacts with the page.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;This platform&#39;s assumption is different: &lt;em&gt;the block is the unit, and different audiences interact with blocks through purpose-built surfaces.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That sounds like a small architectural distinction. It&#39;s not. It&#39;s the difference between a tool that accidentally shows content to AI systems and one that deliberately serves them. Between a writing environment that happens to be readable and a reading experience that was designed from scratch. Between one good-enough interface and three great ones.&lt;/p&gt;
&lt;p&gt;Documentation is no longer just written and read. It&#39;s written, read, queried, translated, scored, classified, and served to AI systems at scale. A single interface can&#39;t optimise for all of that, and pretending it can is how we ended up with wikis that nobody wants to read and AI assistants pulling answers from pages that were never designed to be machine-consumed.&lt;/p&gt;
&lt;p&gt;Readers and writers are in different mental modes. AI is in a different mode entirely. The interface should reflect that.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ux" />
    <category term="documentation" />
    <category term="knowledge-management" />
  </entry>
  <entry>
    <title>The State of Docs in 2026: Five Trends That Will Define the Next Era</title>
    <link href="https://www.tcdev.de/blog/the-state-of-docs-in-2026/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/the-state-of-docs-in-2026/</id>
    <updated>2026-04-03T00:00:00Z</updated>
    <summary>AI readership is up 500%. Notion shipped 21,000 agents. Confluence got Rovo. GitBook published the State of Docs. Five trends from across the industry that tell us where documentation is heading.</summary>
    <content type="html">&lt;p&gt;Every few months I block out a morning to just read. Not this platform code, not GitHub issues. Competitor blogs, industry reports, keynote announcements, developer surveys. Whatever shipped in the last quarter that touches documentation, knowledge management, or AI-assisted workflows.&lt;/p&gt;
&lt;p&gt;I did that last week, and the picture that emerged was sharper than I expected. Not because any single announcement was groundbreaking, but because five separate trends are converging, and when you line them up, they paint a very clear picture of what documentation platforms will need to do in the next two years.&lt;/p&gt;
&lt;p&gt;Here&#39;s what I found.&lt;/p&gt;
&lt;h2&gt;1. AI is the primary reader now. Not humans.&lt;/h2&gt;
&lt;p&gt;GitBook published a striking number in their &lt;a href=&quot;https://www.gitbook.com/blog/ai-docs-data-2025&quot;&gt;AI docs data report&lt;/a&gt;: AI readership of documentation increased over 500% in 2025. Five hundred percent. That&#39;s not a rounding error.&lt;/p&gt;
&lt;p&gt;Meanwhile, Stack Overflow&#39;s &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;2024 Developer Survey&lt;/a&gt; showed that 61% of developers spend more than 30 minutes a day searching for answers. But how they search has shifted. GitHub&#39;s own survey found &lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;97% of enterprise developers&lt;/a&gt; have used AI coding tools. By 2026, &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;84% of developers&lt;/a&gt; use AI tools daily, with 41% of code now AI-generated. These people aren&#39;t navigating your wiki sidebar. They&#39;re asking Claude or Copilot, and the AI is reading your docs on their behalf.&lt;/p&gt;
&lt;p&gt;The implication is hard to overstate. Your most frequent documentation consumer is no longer a person with a browser tab open. It&#39;s a language model making retrieval calls. And that model has no ability to squint at a page and think &amp;quot;hmm, this looks outdated.&amp;quot;&lt;/p&gt;
&lt;p&gt;GitBook spotted this early and responded with their &lt;a href=&quot;https://www.gitbook.com/blog/state-of-docs-2026&quot;&gt;State of Docs 2026 report&lt;/a&gt; and a push toward machine-readable formats. They also shipped &lt;a href=&quot;https://www.gitbook.com/blog/skill-md&quot;&gt;skill.md&lt;/a&gt;, a convention for structuring product information specifically for AI agents. Google went further with their &lt;a href=&quot;https://blog.google/innovation-and-ai/technology/developers-tools/gemini-api-docsmcp-agent-skills/&quot;&gt;Gemini API Docs MCP&lt;/a&gt;, which connects coding agents to current documentation via the Model Context Protocol. Their reasoning was explicit: agents generate outdated code because their training data has a cutoff date. The MCP fix brought their eval pass rate to 96.3%.&lt;/p&gt;
&lt;p&gt;So the first trend is settled. AI is the primary reader. The platforms that treat this as a core design constraint, not a feature to add later, will have a structural advantage.&lt;/p&gt;
&lt;h2&gt;2. Freshness and trust metadata are becoming mandatory&lt;/h2&gt;
&lt;p&gt;Anthropic interviewed &lt;a href=&quot;https://www.anthropic.com/81k-interviews&quot;&gt;81,000 Claude users&lt;/a&gt; in December 2025 and published the results in March 2026. It&#39;s the largest qualitative study of AI users ever conducted (159 countries, 70 languages). The single most-cited concern? Unreliability. 27% of respondents named it as their top worry, and 79% of those people had experienced it firsthand.&lt;/p&gt;
&lt;p&gt;That number should keep every documentation team up at night.&lt;/p&gt;
&lt;p&gt;When AI answers are unreliable, the problem isn&#39;t always the model. Often the model is faithfully reproducing what it found in a stale document. The model didn&#39;t hallucinate. Your docs were just wrong, and nobody flagged them.&lt;/p&gt;
&lt;p&gt;Stack Overflow&#39;s data reinforces this from a different angle: &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;81% of developers&lt;/a&gt; expect AI to be more integrated in how they document code in the coming year. If 81% of your users are feeding docs to AI, and 27% of AI users say unreliability is the biggest issue, you have a trust problem that no amount of prompt engineering fixes. The fix is at the source.&lt;/p&gt;
&lt;p&gt;This is why freshness metadata matters. Not &amp;quot;last edited&amp;quot; timestamps (those tell you when someone touched the file, not whether the content is still accurate). Real freshness: review status, link health, translation alignment, readership signals, content drift detection. Metadata that a machine can read and use to decide whether a document is safe to cite.&lt;/p&gt;
&lt;p&gt;I keep coming back to a simple framing. Your documentation needs a credit score. Not a timestamp. A credit score. (We&#39;ve been building exactly this with this platform&#39;s &lt;a href=&quot;https://www.tcdev.de/features/freshness&quot;&gt;freshness scoring system&lt;/a&gt;, and honestly, seeing the industry data only makes me more convinced it&#39;s the right call.)&lt;/p&gt;
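&lt;p&gt;To illustrate the &amp;quot;credit score&amp;quot; framing, here is a toy weighted blend of trust signals. The weights and signal names are made up for the example; they are not this platform&#39;s actual scoring formula:&lt;/p&gt;

```python
# Toy freshness score: a weighted blend of trust signals, each in [0, 1].
# Weights and signal names are invented for illustration.
WEIGHTS = {"review": 0.4, "links": 0.2, "translation": 0.2, "readership": 0.2}

def freshness_score(signals):
    """Blend per-signal health into a single 0-100 score."""
    raw = sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)
    return round(raw * 100)

doc = {"review": 1.0, "links": 0.5, "translation": 0.8, "readership": 0.9}
print(freshness_score(doc))  # 84: recently reviewed, but some links are broken
```

&lt;p&gt;A single number like this is something a machine can threshold on, which a &amp;quot;last edited&amp;quot; timestamp never was.&lt;/p&gt;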
&lt;h2&gt;3. Translation is moving from &amp;quot;project&amp;quot; to &amp;quot;pipeline&amp;quot;&lt;/h2&gt;
&lt;p&gt;DeepL published a piece in February called &lt;a href=&quot;https://www.deepl.com/en/blog/six-translation-transformations&quot;&gt;&amp;quot;The 6 Translation Transformations Global Businesses Can&#39;t Afford to Miss&amp;quot;&lt;/a&gt;. Their argument: translation is becoming a continuous operating challenge, not a batch project you do quarterly.&lt;/p&gt;
&lt;p&gt;That tracks with everything I see.&lt;/p&gt;
&lt;p&gt;The old model was straightforward. Write in English. When you have budget, hire a translator or run it through a service. Get the translations back. Upload them. Done until next time. The problem is that &amp;quot;next time&amp;quot; comes faster and faster when your product ships weekly and your docs update constantly. By the time the German version is back from review, the English source has already changed twice.&lt;/p&gt;
&lt;p&gt;DeepL&#39;s own &lt;a href=&quot;https://www.deepl.com/customization-hub&quot;&gt;Customization Hub&lt;/a&gt; now offers glossaries, style rules, and formality settings, which is great. But if those tools live outside your documentation platform, you&#39;re managing a translation toolchain: editor, export, translate, review, reimport, repeat. Every step is a chance for drift.&lt;/p&gt;
&lt;p&gt;Notion has no native multilingual support at all. Confluence offers it through marketplace plugins. GitBook &lt;a href=&quot;https://www.gitbook.com/blog/new-in-gitbook-august-2025&quot;&gt;added auto-translate in August 2025&lt;/a&gt;, which is a step, but it operates at the page level.&lt;/p&gt;
&lt;p&gt;The real shift is from page-level to block-level. When you track translations at the paragraph level, you only retranslate what actually changed. A typical edit touches maybe two paragraphs out of forty. That&#39;s 95% less translation work. (This is the core of this platform&#39;s translation architecture and, honestly, the thing I&#39;m most proud of in the product. But even setting us aside, the industry direction is clear: continuous, incremental, embedded translation is where this is heading.)&lt;/p&gt;
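&lt;p&gt;The mechanics of &amp;quot;only retranslate what changed&amp;quot; can be sketched with a content hash per block. This is a simplified illustration, not this platform&#39;s actual pipeline:&lt;/p&gt;

```python
import hashlib

def block_hash(text):
    """Stable fingerprint of a block's source text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def needs_retranslation(source_blocks, translated_hashes):
    """Ids of blocks whose source changed since they were last translated."""
    return [b["id"] for b in source_blocks
            if translated_hashes.get(b["id"]) != block_hash(b["text"])]

source = [{"id": "b1", "text": "Install the CLI."},
          {"id": "b2", "text": "Run the interactive setup wizard."}]
# Hashes recorded when the last translation was produced; b2 has since changed.
translated = {"b1": block_hash("Install the CLI."),
              "b2": block_hash("Run the setup wizard.")}

print(needs_retranslation(source, translated))  # ['b2']
```

&lt;p&gt;Only the changed block goes back through the translation pipeline; everything else keeps its existing translation.&lt;/p&gt;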
&lt;h2&gt;4. AI agents need structured content, not wiki pages&lt;/h2&gt;
&lt;p&gt;This one crystallised for me when Notion announced &lt;a href=&quot;https://www.notion.com/blog/introducing-custom-agents&quot;&gt;Custom Agents&lt;/a&gt; in February. 21,000 agents built during early access. Agents that answer questions from knowledge bases, route tasks, compile status reports. Ramp alone has over 300 agents.&lt;/p&gt;
&lt;p&gt;Atlassian went in a similar direction. &lt;a href=&quot;https://www.atlassian.com/blog/confluence/create-and-edit-with-rovo&quot;&gt;Rovo AI in Confluence&lt;/a&gt; pulls context from across Atlassian and third-party apps to generate content. Their pitch: &amp;quot;context-rich, high-quality content grounded in your team&#39;s existing work.&amp;quot;&lt;/p&gt;
&lt;p&gt;And then Anthropic shipped &lt;a href=&quot;https://www.anthropic.com/news/claude-opus-4-6&quot;&gt;agent teams in Claude Code&lt;/a&gt;, where multiple AI agents coordinate autonomously on complex tasks. Opus 4.6 scores 76% on the 8-needle 1M MRCR benchmark (up from 18.5% for the previous model), meaning it can actually retrieve information buried deep in massive document sets without losing track.&lt;/p&gt;
&lt;p&gt;All three companies are building agents that consume documentation. None of them have solved the quality-of-source problem.&lt;/p&gt;
&lt;p&gt;Notion&#39;s Custom Agents documentation explicitly acknowledges the &lt;a href=&quot;https://www.notion.com/blog/introducing-custom-agents&quot;&gt;prompt injection risk&lt;/a&gt; when agents read untrusted content. Atlassian&#39;s Rovo grabs whatever it finds in your Confluence. If that content is three months stale, Rovo doesn&#39;t know. It builds on it anyway.&lt;/p&gt;
&lt;p&gt;For agents to work reliably, they need more than pages of text. They need structured content with stable identifiers, explicit freshness signals, clear classification metadata, and the ability to distinguish &amp;quot;this is current and reviewed&amp;quot; from &amp;quot;this exists but nobody&#39;s touched it in a year.&amp;quot; Wiki pages don&#39;t provide that. Structured block-level content with trust metadata does.&lt;/p&gt;
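&lt;p&gt;For an agent, that distinction can be mechanical. Here is a sketch, with invented fields, of assembling agent context from only current, reviewed blocks while keeping stable ids for citation:&lt;/p&gt;

```python
# Sketch: build an agent's context from trustworthy blocks only,
# keeping stable ids so answers can cite their sources.
# Fields ("status", "freshness") are illustrative, not a real API.
def agent_context(blocks, min_freshness=0.6):
    usable = [b for b in blocks
              if b["status"] == "reviewed" and b["freshness"] >= min_freshness]
    usable.sort(key=lambda b: b["freshness"], reverse=True)
    return [{"cite": b["id"], "text": b["text"]} for b in usable]

kb = [{"id": "a1", "text": "Deploys run nightly.", "status": "reviewed", "freshness": 0.9},
      {"id": "a2", "text": "Deploys run weekly.", "status": "draft", "freshness": 0.9},
      {"id": "a3", "text": "Use VPN for access.", "status": "reviewed", "freshness": 0.4}]

print(agent_context(kb))  # only a1 survives: a2 is a draft, a3 is stale
```

&lt;p&gt;None of this filtering is possible when the only thing an agent can see is a rendered wiki page.&lt;/p&gt;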
&lt;h2&gt;5. Open source and self-hosting are making a comeback&lt;/h2&gt;
&lt;p&gt;This last one is more of a gut feeling backed by data than a single announcement.&lt;/p&gt;
&lt;p&gt;GitBook &lt;a href=&quot;https://www.gitbook.com/blog/free-open-source-documentation&quot;&gt;open-sourced their published documentation&lt;/a&gt; in late 2024 and launched an OSS fund. Their reasoning: open source projects deserve free, high-quality documentation tooling. But the move also signals something broader.&lt;/p&gt;
&lt;p&gt;Notion is cloud-only. No self-hosted option. Confluence Data Center exists but requires a license. When your documentation platform holds your most sensitive operational knowledge (incident playbooks, compliance procedures, architecture decisions), the question of &amp;quot;who controls this data?&amp;quot; is not abstract.&lt;/p&gt;
&lt;p&gt;Anthropic&#39;s &lt;a href=&quot;https://www.anthropic.com/news/claude-is-a-space-to-think&quot;&gt;&amp;quot;Claude is a space to think&amp;quot;&lt;/a&gt; post from February made an interesting argument about trust and business models. Their core claim: advertising incentives are incompatible with a genuinely helpful AI assistant. They chose to stay ad-free so users can trust the tool.&lt;/p&gt;
&lt;p&gt;I think there&#39;s a parallel for documentation platforms. If your docs system is closed-source and cloud-only, you can&#39;t verify what it feeds to AI. You can&#39;t audit the freshness calculations. You can&#39;t ensure your data stays in your control. For teams that are deploying AI assistants on top of their knowledge base (and increasingly, everyone is doing this), auditability matters.&lt;/p&gt;
&lt;p&gt;This is not a polemic about open source being morally superior. Closed-source products can absolutely be trustworthy. But when you&#39;re building AI-powered workflows on top of your internal documentation, the ability to inspect and verify the system is a practical advantage. For us, MIT licensing this platform wasn&#39;t an afterthought. It was a design decision rooted in the same logic: documentation infrastructure should be auditable.&lt;/p&gt;
&lt;h2&gt;What these five trends mean together&lt;/h2&gt;
&lt;p&gt;Individually, each of these trends is manageable. AI reads your docs? Okay, add some machine-readable metadata. Freshness matters? Fine, add review dates. Translation needs to be continuous? Sure, integrate DeepL. Agents need structure? Fair, improve your content model. Sovereignty matters? Great, offer a self-hosted option.&lt;/p&gt;
&lt;p&gt;But taken together, they describe a platform that looks fundamentally different from what most teams are using today.&lt;/p&gt;
&lt;p&gt;The gap is architectural. These aren&#39;t five features you bolt on. They&#39;re five assumptions that need to be baked into the foundation. How content is stored (block-level, not page-level). How trust is modelled (freshness scores, not timestamps). How translation works (incremental, embedded, per-paragraph). How AI agents access content (structured APIs with metadata, not page scrapes). How data is controlled (open, auditable, self-hostable).&lt;/p&gt;
&lt;p&gt;No established platform was designed around all five of these simultaneously. Some are adding them piece by piece. GitBook is moving fastest on the AI readability front. Notion is building agent infrastructure. Atlassian has enterprise distribution.&lt;/p&gt;
&lt;p&gt;But designing for all five from day one? That&#39;s the advantage of starting fresh when the ground shifts.&lt;/p&gt;
&lt;p&gt;I realise I&#39;m biased here. I built this platform specifically because I saw these trends converging and wanted a platform that assumed all of them from the start. Block-level translation, forced expiry, freshness scoring, structured AI-ready content, open source. It&#39;s the thesis of the whole project.&lt;/p&gt;
&lt;p&gt;But even if we didn&#39;t exist, I think any honest reading of what happened in the first quarter of 2026 points in the same direction. Documentation is becoming infrastructure. And infrastructure has different requirements than wiki pages.&lt;/p&gt;
&lt;p&gt;The teams that figure this out first won&#39;t just have better docs. They&#39;ll have more reliable AI agents, lower translation costs, fewer compliance surprises, and knowledge bases that actually stay trustworthy over time.&lt;/p&gt;
&lt;p&gt;That&#39;s the state of docs in 2026. The question isn&#39;t whether these trends are real. It&#39;s whether your platform was designed for them.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Five trends. One architectural question: was your documentation platform designed for 2026, or is it still serving assumptions from 2016?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;Sources: &lt;a href=&quot;https://www.gitbook.com/blog/ai-docs-data-2025&quot;&gt;GitBook AI docs data report&lt;/a&gt;, &lt;a href=&quot;https://www.gitbook.com/blog/state-of-docs-2026&quot;&gt;GitBook State of Docs 2026&lt;/a&gt;, &lt;a href=&quot;https://www.gitbook.com/blog/skill-md&quot;&gt;GitBook skill.md&lt;/a&gt;, &lt;a href=&quot;https://blog.google/innovation-and-ai/technology/developers-tools/gemini-api-docsmcp-agent-skills/&quot;&gt;Google Gemini API Docs MCP&lt;/a&gt;, &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;Stack Overflow 2024 Developer Survey&lt;/a&gt;, &lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;GitHub 2024 developer survey&lt;/a&gt;, &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;Index.dev developer productivity statistics&lt;/a&gt;, &lt;a href=&quot;https://www.anthropic.com/81k-interviews&quot;&gt;Anthropic &amp;quot;What 81,000 People Want from AI&amp;quot;&lt;/a&gt;, &lt;a href=&quot;https://www.anthropic.com/news/claude-is-a-space-to-think&quot;&gt;Anthropic &amp;quot;Claude is a space to think&amp;quot;&lt;/a&gt;, &lt;a href=&quot;https://www.anthropic.com/news/claude-opus-4-6&quot;&gt;Claude Opus 4.6&lt;/a&gt;, &lt;a href=&quot;https://www.notion.com/blog/introducing-custom-agents&quot;&gt;Notion Custom Agents&lt;/a&gt;, &lt;a href=&quot;https://www.atlassian.com/blog/confluence/create-and-edit-with-rovo&quot;&gt;Atlassian Rovo in Confluence&lt;/a&gt;, &lt;a href=&quot;https://www.deepl.com/en/blog/six-translation-transformations&quot;&gt;DeepL &amp;quot;6 Translation Transformations&amp;quot;&lt;/a&gt;, &lt;a href=&quot;https://www.deepl.com/customization-hub&quot;&gt;DeepL Customization Hub&lt;/a&gt;, &lt;a href=&quot;https://www.gitbook.com/blog/free-open-source-documentation&quot;&gt;GitBook open source documentation&lt;/a&gt;, &lt;a 
href=&quot;https://www.gitbook.com/blog/new-in-gitbook-august-2025&quot;&gt;GitBook auto-translate&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="documentation" />
    <category term="platforms" />
  </entry>
  <entry>
    <title>Builders, Not Developers: How Claude Changed Who Your Docs Are For</title>
    <link href="https://www.tcdev.de/blog/builders-not-developers-how-claude-changed-devrel/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/builders-not-developers-how-claude-changed-devrel/</id>
    <updated>2026-04-02T00:00:00Z</updated>
    <summary>The person integrating your API no longer reads your docs. They sit in Claude and describe what they want. Developer relations, API documentation, and the whole getting-started funnel need to be rethought for this new reality.</summary>
    <content type="html">&lt;p&gt;There is a person right now, somewhere, integrating your API. They&#39;re not on your documentation site. They haven&#39;t opened your getting-started guide. They have never seen your interactive playground or your carefully designed sidebar navigation.&lt;/p&gt;
&lt;p&gt;They&#39;re sitting in Claude. Or Copilot. Or Cursor. They typed something like &lt;em&gt;&amp;quot;integrate the Stripe billing API with my Next.js app using the app router&amp;quot;&lt;/em&gt; and waited for working code to come back. The AI read your docs on their behalf. It found the relevant endpoints, understood the authentication flow, picked the right SDK methods, and produced an implementation.&lt;/p&gt;
&lt;p&gt;Two weeks ago at Start Summit Hackathon in St. Gallen, I watched this happen in real time. I was talking with a group of CS students and a couple of early-stage startup founders about how they approach new APIs, and every single one of them described the same workflow: paste the problem into an AI, get code back, iterate from there. One of the students laughed when I asked if she&#39;d read the docs. &amp;quot;Why would I? Claude reads them for me.&amp;quot;&lt;/p&gt;
&lt;p&gt;The person never visited your site. They may never visit your site. And this is increasingly just how software gets built.&lt;/p&gt;
&lt;h2&gt;The core shift&lt;/h2&gt;
&lt;p&gt;Documentation now has two fundamentally different consumers: humans who read it and AI assistants that read it on behalf of builders. Most documentation is optimised exclusively for humans. The AI is already the dominant reader.&lt;/p&gt;
&lt;p&gt;This changes everything downstream:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Freshness is now a reliability issue.&lt;/strong&gt; When an AI serves stale content, the builder has no way to detect the problem. The damage scales silently.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;&amp;quot;Developer&amp;quot; is too narrow a word.&lt;/strong&gt; Product managers, designers, and analysts are shipping software through AI assistants, often without ever reading a line of documentation themselves.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Machine-readable structure matters more than visual design.&lt;/strong&gt; Clean markdown, self-contained blocks, and explicit metadata are what allow AI to represent your product accurately.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Format requirements have split.&lt;/strong&gt; Human readers need narrative. AI intermediaries need structured, parseable specs. You need to serve both.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The rest of this post unpacks how we got here, what this means for DevRel, and what you can do about it right now.&lt;/p&gt;
&lt;h2&gt;The journey nobody planned for&lt;/h2&gt;
&lt;p&gt;For a long time, developer relations followed a well-understood path. You wrote comprehensive documentation. You published quickstart guides. You gave conference talks. You maintained a presence on Stack Overflow. You made your API reference searchable, your SDKs idiomatic, your error messages helpful.&lt;/p&gt;
&lt;p&gt;That path assumed the developer would read your content. Navigate your structure. Follow your steps.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;GitHub&#39;s 2024 developer survey&lt;/a&gt; found that 97% of enterprise developers have used AI coding tools at some point. &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;Stack Overflow&#39;s annual survey&lt;/a&gt; showed 76% of all developers are using or planning to use AI tools, with 62% of professionals actively using them day to day. By 2026, that number &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;climbed to 84%&lt;/a&gt;, with 41% of all code now AI-generated and 51% of professional developers using AI tools daily. Those numbers aren&#39;t slowing down.&lt;/p&gt;
&lt;p&gt;The new journey looks different. Someone describes what they want in natural language. An AI assistant reads the documentation, finds the relevant sections, and generates the integration. The builder reviews the output, maybe refines the prompt, maybe asks a follow-up. Minutes, not hours.&lt;/p&gt;
&lt;p&gt;The getting-started funnel that DevRel teams spent years perfecting? It&#39;s being bypassed. Not because it was bad. The entry point just moved.&lt;/p&gt;
&lt;h2&gt;Two consumers, one set of docs&lt;/h2&gt;
&lt;p&gt;Documentation now has two fundamentally different audiences.&lt;/p&gt;
&lt;p&gt;The first is the human reader. This person still exists. They show up for architecture decisions, edge case debugging, compliance review, and conceptual understanding. They want narrative explanations, well-organised reference material, and clear reasoning about trade-offs.&lt;/p&gt;
&lt;p&gt;The second is the AI intermediary. It reads your documentation on behalf of a builder. It does not care about your sidebar. It does not appreciate your visual design. It needs structured, machine-parseable content: clean markdown, consistent formatting, explicit specifications it can reason about without ambiguity.&lt;/p&gt;
&lt;p&gt;Almost every documentation site today is optimised exclusively for the first audience. The second audience is already the dominant consumer.&lt;/p&gt;
&lt;p&gt;Jeremy Howard identified this tension when he &lt;a href=&quot;https://llmstxt.org/&quot;&gt;proposed the /llms.txt standard&lt;/a&gt; in 2024. His observation was precise: &lt;em&gt;&amp;quot;Large language models increasingly rely on website information, but face a critical limitation: context windows are too small to handle most websites in their entirety.&amp;quot;&lt;/em&gt; The proposal is simple. A curated markdown file at &lt;code&gt;/llms.txt&lt;/code&gt; that gives AI models a structured overview of your product and links to the most important resources. FastHTML, Anthropic&#39;s own docs, and a &lt;a href=&quot;https://llmstxt.site/&quot;&gt;growing directory of projects&lt;/a&gt; now ship one.&lt;/p&gt;
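&lt;p&gt;Concretely, a minimal &lt;code&gt;/llms.txt&lt;/code&gt; is just curated markdown. Here is a sketch following the layout the proposal describes; the product name and links are hypothetical:&lt;/p&gt;

```markdown
# Acme Billing API

> Hypothetical example: a one-paragraph summary an LLM can read first,
> followed by curated links to the most important machine-readable docs.

## Docs

- [Quickstart](https://example.com/docs/quickstart.md): minimal integration walkthrough
- [API reference](https://example.com/docs/reference.md): endpoints, auth, error codes

## Optional

- [Changelog](https://example.com/docs/changelog.md): recent breaking changes
```

&lt;p&gt;The H1 name, blockquote summary, and H2 link sections are the structure the proposal defines; everything else is up to you.&lt;/p&gt;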
&lt;p&gt;It is a useful convention. But it is also a symptom of a deeper problem. The real issue is not format. It is that most documentation was never designed with machine consumption in mind.&lt;/p&gt;
&lt;h2&gt;The builder is not cutting corners&lt;/h2&gt;
&lt;p&gt;There&#39;s a temptation to look at the person who prompts Claude instead of reading docs and conclude they&#39;re taking shortcuts. That they don&#39;t really understand what&#39;s happening in the code. That they&#39;re somehow a lesser kind of developer.&lt;/p&gt;
&lt;p&gt;I&#39;ve had this conversation enough times now to know that&#39;s usually wrong.&lt;/p&gt;
&lt;p&gt;Many of these builders are senior engineers making deliberate efficiency choices. They understand the code, they just don&#39;t want to navigate four pages of documentation to find the three lines they actually need. They&#39;ve learned that an AI assistant can extract those lines faster than they can scan for them, so they delegate the reading. (Honestly, I do this myself. I can&#39;t remember the last time I read a getting-started guide top to bottom.)&lt;/p&gt;
&lt;p&gt;Anthropic recognised this pattern when they built the &lt;a href=&quot;https://modelcontextprotocol.io/introduction&quot;&gt;Model Context Protocol&lt;/a&gt;. MCP is now supported by Claude, ChatGPT, VS Code, Cursor, and others. It&#39;s explicitly designed so AI assistants can reach into external systems, pull context, and act on it. The specification describes it as providing &lt;em&gt;&amp;quot;access to an ecosystem of data sources, tools and apps which will enhance capabilities and improve the end-user experience.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Read that carefully. It&#39;s infrastructure language, not convenience language. The builders using these tools aren&#39;t avoiding work. They&#39;re working through a new layer, and your documentation is part of that layer whether you designed it to be or not.&lt;/p&gt;
&lt;p&gt;The numbers back this up. Claude alone now handles &lt;a href=&quot;https://www.incremys.com/en/resources/blog/claude-statistics&quot;&gt;25 billion API calls per month&lt;/a&gt;, with 30 million monthly active users across 159 countries. &lt;a href=&quot;https://www.incremys.com/en/resources/blog/claude-statistics&quot;&gt;70% of Fortune 100 companies&lt;/a&gt; use Claude. According to a Menlo Ventures survey, Anthropic holds &lt;a href=&quot;https://fortune.com/2025/12/02/how-anthropics-safety-first-approach-won-over-big-business-and-how-its-own-engineers-are-using-its-claude-ai/&quot;&gt;32% of enterprise AI market share by model usage&lt;/a&gt;, ahead of OpenAI at 25%. An HSBC research report puts that even higher: 40% by total AI spending. These aren&#39;t experimental tools. They&#39;re primary infrastructure.&lt;/p&gt;
&lt;h2&gt;Developer relations was built for a different era&lt;/h2&gt;
&lt;p&gt;If your DevRel strategy was designed before 2023, it was designed for a world where developers read docs directly. That world hasn&#39;t disappeared, but it&#39;s no longer the dominant interaction pattern for a growing share of builders.&lt;/p&gt;
&lt;p&gt;This changes the calculus on several long-standing DevRel activities.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Conference talks.&lt;/strong&gt; A 45-minute presentation at a developer conference reaches a room of a few hundred people. A well-structured &lt;code&gt;/llms.txt&lt;/code&gt; file and clean machine-readable documentation reach every builder who asks any AI assistant about your product, continuously, at any time. The talk is a one-time event. The machine-readable docs compound. I&#39;m not saying conferences are worthless (I literally just came back from one), but the leverage equation has shifted.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Getting-started guides.&lt;/strong&gt; The classic five-step quickstart tutorial is increasingly a formality. The builder doesn&#39;t follow steps. They describe what they want and expect the AI to produce the integration. If the API is well-documented in a machine-friendly format, the AI handles the getting-started experience more efficiently than any tutorial could. What tutorials should become instead is conceptual material: explaining why you&#39;d choose approach A over approach B. The AI can generate the implementation. It&#39;s much less reliable at explaining the trade-offs.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stack Overflow.&lt;/strong&gt; Their own survey data showed that &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;84% of developers&lt;/a&gt; use technical documentation directly, with 90% of those relying on docs within API and SDK packages. But the way they &lt;em&gt;access&lt;/em&gt; those docs is increasingly through an AI layer, not a browser tab. The questions that still reach Stack Overflow tend to be the hard ones. Edge cases, production debugging, things that require nuance. Valuable, sure. But no longer where the volume is.&lt;/p&gt;
&lt;h2&gt;When the AI reads your docs, freshness becomes critical&lt;/h2&gt;
&lt;p&gt;Here is the part that most teams have not thought through.&lt;/p&gt;
&lt;p&gt;When a human reads a documentation page, they can apply judgement. They might notice the screenshots look old, or that a comment at the bottom says the process changed. They can squint at it and think &amp;quot;this feels outdated.&amp;quot;&lt;/p&gt;
&lt;p&gt;An AI assistant can&#39;t do any of that. It reads the text, processes it as fact, and generates an answer with full confidence. If the documentation describes a deprecated endpoint, the AI will cheerfully recommend integrating with it. If the documentation references infrastructure that was replaced six months ago, the AI will describe the old setup as current. No hesitation.&lt;/p&gt;
&lt;p&gt;And here&#39;s the thing that makes this worse than it sounds: &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;66% of developers&lt;/a&gt; already say the biggest problem with AI tools is that they give results that are &amp;quot;almost right but not quite.&amp;quot; Stale documentation feeds directly into that problem. The AI isn&#39;t hallucinating. It&#39;s faithfully reproducing outdated content, and there&#39;s no way for the builder to tell the difference.&lt;/p&gt;
&lt;p&gt;The builder trusts the AI. The AI trusts the documentation. If the documentation is stale, that trust chain delivers a confidently wrong answer.&lt;/p&gt;
&lt;p&gt;This was always a problem, obviously. Stale content has always confused people. But the damage was contained because human readers could sometimes catch it. AI intermediaries can&#39;t. They amplify stale content by serving it at scale, with authority, to people who have no reason to doubt it.&lt;/p&gt;
&lt;p&gt;Freshness isn&#39;t a content quality issue anymore. It&#39;s a reliability issue for every AI-powered workflow that touches your docs.&lt;/p&gt;
&lt;h2&gt;The word &amp;quot;developer&amp;quot; is too narrow&lt;/h2&gt;
&lt;p&gt;The people building software in 2026 don&#39;t all identify as developers. Some are designers who prompt Claude to build a working prototype. Some are product managers who use Cursor to ship internal tools. Some are data analysts who describe a data pipeline in natural language and let an agent assemble it. At Start Summit, half the hackathon teams had members with zero programming background who were shipping working software by the end of the weekend.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://ramp.com/&quot;&gt;Ramp&lt;/a&gt; is a useful example. The fintech company went from a $5.8B valuation in 2023 to &lt;a href=&quot;https://techcrunch.com/2025/11/17/ramp-hits-32b-valuation-just-three-months-after-hitting-22-5b/&quot;&gt;$32B by late 2025&lt;/a&gt;, crossing $1B in annualised revenue along the way. One of the fastest-growing startups in history. A widely discussed part of their approach: product managers building features directly with AI tools instead of waiting in an engineering backlog. PMs at Ramp do not just write specs. They ship code. The AI handles the implementation. The PM handles the intent.&lt;/p&gt;
&lt;p&gt;Not a shortcut. A new operating model, and it&#39;s working at a scale that makes it really hard to dismiss as an experiment.&lt;/p&gt;
&lt;p&gt;Anthropic&#39;s own internal study is revealing here. When they &lt;a href=&quot;https://fortune.com/2025/12/02/how-anthropics-safety-first-approach-won-over-big-business-and-how-its-own-engineers-are-using-its-claude-ai/&quot;&gt;surveyed 132 of their own engineers&lt;/a&gt; about how they use Claude, the engineers reported using it for about 60% of their work tasks. The most common uses? Debugging existing code, understanding what parts of the codebase were doing, and implementing new features. The engineers said they tend to hand Claude tasks that are &amp;quot;not complex, repetitive, where code quality isn&#39;t critical.&amp;quot; And 27% of the work they now do with Claude simply wouldn&#39;t have been done at all before.&lt;/p&gt;
&lt;p&gt;That&#39;s Anthropic&#39;s own team. The people who built the model are using it as a documentation reader, a codebase navigator, and a first-draft generator. Everyone else is doing the same, just with your docs instead of theirs.&lt;/p&gt;
&lt;p&gt;Anthropic has been deliberate about calling this the &amp;quot;builder&amp;quot; persona. Their tools are designed not just for professional software engineers but for anyone who can describe what they want to build. When Claude can scaffold a full-stack application from a Figma design via MCP, the traditional line between &amp;quot;developer&amp;quot; and &amp;quot;non-developer&amp;quot; dissolves.&lt;/p&gt;
&lt;p&gt;This has real implications for anyone who maintains documentation or cares about developer experience. Your audience is no longer limited to people who know what a REST endpoint is. It includes anyone whose AI assistant might interact with your product. The PM at Ramp who ships a feature using your API? Probably never reading your documentation directly. Their AI agent absolutely will.&lt;/p&gt;
&lt;h2&gt;What this means for documentation&lt;/h2&gt;
&lt;p&gt;If documentation now serves two audiences, human readers and AI intermediaries, it needs to work for both. Sounds obvious. In practice, almost nobody does it.&lt;/p&gt;
&lt;p&gt;Here&#39;s what I think actually matters:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Machine-readable formats alongside human-readable ones.&lt;/strong&gt; If your API docs are a beautifully rendered HTML page that an LLM has to scrape and parse, the AI is working harder than it should. Ship the raw OpenAPI spec alongside the rendered version. Ship clean markdown. Make the specifications accessible without requiring the AI to interpret page layout.&lt;/p&gt;
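&lt;p&gt;As a sketch of what &amp;quot;ship the spec itself&amp;quot; means: a fragment of a hypothetical OpenAPI file, served raw at a stable URL next to the rendered reference, so the AI never has to interpret page layout:&lt;/p&gt;

```yaml
# Hypothetical fragment of /openapi.yaml, served as plain text so an AI
# can read endpoints and auth directly instead of scraping rendered HTML.
openapi: 3.1.0
info:
  title: Example API
  version: 2.0.0
paths:
  /v2/invoices:
    get:
      summary: List invoices
      security:
        - bearerAuth: []
      responses:
        "200":
          description: Paginated list of invoices
```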
&lt;p&gt;&lt;strong&gt;Block-level structure instead of page-level narrative.&lt;/strong&gt; AI assistants do not consume documentation page by page. They extract relevant sections. A document with clear headings, self-contained paragraphs, and explicit block-level semantics is dramatically more useful to an AI than a flowing narrative that requires reading the entire page for context.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Trust signals that machines can read.&lt;/strong&gt; When was this document last reviewed? Is this still current? Has the content been flagged? These signals need to exist in a form the AI can access, not just as visual cues on a web page. A freshness score, an expiry status, a review date, these are the metadata that allow an AI to decide whether a document is safe to use as a source.&lt;/p&gt;
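&lt;p&gt;A sketch of what such signals could look like as document frontmatter. The field names here are illustrative, not a standard:&lt;/p&gt;

```yaml
# Hypothetical frontmatter a crawler or AI assistant could check
# before treating the page as a trustworthy source.
last_reviewed: 2026-03-15
review_cycle_days: 90
status: current   # current | needs-review | deprecated
owner: docs-platform-team
```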
&lt;p&gt;&lt;strong&gt;Freshness as a prerequisite, not a feature.&lt;/strong&gt; When an AI assistant serves a builder a confident answer based on a deprecated endpoint, the damage is worse than a 404. The builder builds on it. Ships it. Then it breaks in production, and nobody knows why until someone traces it back to documentation that should have been updated months ago. Every document that an AI might reference needs a mechanism to prove it&#39;s still current. (This is, full disclosure, exactly the problem I&#39;m building this platform to solve. Forced expiry on documentation blocks so stale content can&#39;t hide.)&lt;/p&gt;
&lt;h2&gt;Getting started: audit your current docs&lt;/h2&gt;
&lt;p&gt;If you&#39;ve read this far and you&#39;re thinking &amp;quot;okay, but what do I actually do on Monday,&amp;quot; here are four concrete things you can check this week.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;1. Test your docs through an AI.&lt;/strong&gt; Open Claude or ChatGPT and ask it to integrate your product in a realistic scenario. Don&#39;t use your internal knowledge. Just look at what the AI produces. Is it correct? Is it current? Is it using the right endpoints, the right SDK version, the right auth flow? If the AI gets it wrong, that&#39;s what builders are getting right now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;2. Check for stale content.&lt;/strong&gt; Pick your five most-visited documentation pages and ask: when was this last reviewed? Does it still describe the current state of the product? If you can&#39;t answer that confidently, neither can an AI. This is the single highest-leverage fix for most teams.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;3. Ship machine-readable formats.&lt;/strong&gt; If you don&#39;t have a &lt;code&gt;/llms.txt&lt;/code&gt; file, create one. If your API reference is only available as rendered HTML, export the raw OpenAPI spec and make it accessible. If your docs are in a CMS that doesn&#39;t output clean markdown, that&#39;s a problem worth solving now.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;4. Add review dates and freshness metadata.&lt;/strong&gt; Even something simple helps: a &lt;code&gt;last-reviewed&lt;/code&gt; field in your content management system, or a mandatory review cycle for high-traffic pages. This gives both humans and AI a signal about whether content is trustworthy. Tools like this platform can &lt;a href=&quot;https://www.tcdev.de/features/freshness&quot;&gt;automate this with forced expiry at the block level&lt;/a&gt;, but even a manual process is better than nothing.&lt;/p&gt;
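&lt;p&gt;Even the manual version is easy to automate a little. A minimal sketch in Python, assuming each page carries an ISO-formatted last-reviewed date; the field shape and the 180-day threshold are assumptions, not a standard:&lt;/p&gt;

```python
from datetime import date

REVIEW_LIMIT_DAYS = 180  # assumed policy: flag anything unreviewed for ~6 months

def stale_pages(pages, today):
    """Return page IDs whose last-reviewed date is older than the limit.

    `pages` maps a page ID to an ISO date string, e.g. {"auth": "2025-01-10"}.
    """
    flagged = []
    for page_id, reviewed in pages.items():
        days_old = (today - date.fromisoformat(reviewed)).days
        if days_old > REVIEW_LIMIT_DAYS:
            flagged.append(page_id)
    return sorted(flagged)

# "auth" was last reviewed over a year ago, "billing" last month
print(stale_pages({"auth": "2025-01-10", "billing": "2026-03-01"},
                  date(2026, 4, 1)))  # → ['auth']
```

&lt;p&gt;Run something like this in CI and the review backlog stops being invisible.&lt;/p&gt;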
&lt;h2&gt;The quiet shift in how products are represented&lt;/h2&gt;
&lt;p&gt;There is a broader consequence of all this that is worth stating directly.&lt;/p&gt;
&lt;p&gt;Your documentation is no longer just a reference manual for developers. It&#39;s the source material that AI assistants use to represent your product to the world. When a builder asks Claude how to use your product, Claude&#39;s answer is shaped by whatever it can find and parse from your docs.&lt;/p&gt;
&lt;p&gt;Good docs, good answer. Outdated, ambiguous, locked inside HTML that&#39;s hard for a model to parse? Worse answer, or an incorrect one. Simple as that.&lt;/p&gt;
&lt;p&gt;The quality of the AI&#39;s answer about your product is now a direct proxy for your developer experience. Most companies aren&#39;t treating it that way yet.&lt;/p&gt;
&lt;p&gt;The teams that are ahead on this (Stripe, Vercel, Cloudflare, Anthropic themselves) treat AI readability as a first-class concern. A foundational requirement that shapes how documentation gets written, structured, and maintained. Not a backlog item for next quarter.&lt;/p&gt;
&lt;p&gt;The builder sitting in Claude right now, describing what they want to build, expecting working code in minutes. They may never visit a documentation site again. But the AI that serves them will. Constantly.&lt;/p&gt;
&lt;p&gt;That AI is now your most frequent reader. The question is whether your docs are ready for it.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The best developer experience strategy in 2026 is not a conference talk or a quickstart guide. It is making sure the AI gets it right.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This post references publicly available research and product documentation. Statistics are drawn from &lt;a href=&quot;https://github.blog/news-insights/research/survey-ai-wave-grows/&quot;&gt;GitHub&#39;s 2024 developer survey&lt;/a&gt;, the &lt;a href=&quot;https://survey.stackoverflow.co/2024/&quot;&gt;Stack Overflow 2024 Developer Survey&lt;/a&gt;, &lt;a href=&quot;https://www.index.dev/blog/developer-productivity-statistics-with-ai-tools&quot;&gt;Index.dev&#39;s 2026 developer productivity report&lt;/a&gt;, &lt;a href=&quot;https://www.incremys.com/en/resources/blog/claude-statistics&quot;&gt;Incremys Claude statistics&lt;/a&gt;, and &lt;a href=&quot;https://fortune.com/2025/12/02/how-anthropics-safety-first-approach-won-over-big-business-and-how-its-own-engineers-are-using-its-claude-ai/&quot;&gt;Fortune&#39;s reporting on Anthropic&lt;/a&gt;. The /llms.txt specification is maintained at &lt;a href=&quot;https://llmstxt.org/&quot;&gt;llmstxt.org&lt;/a&gt;. The Model Context Protocol is documented at &lt;a href=&quot;https://modelcontextprotocol.io/&quot;&gt;modelcontextprotocol.io&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="documentation" />
    <category term="developer-experience" />
  </entry>
  <entry>
    <title>How This Translation Approach Actually Works, And Why It Sounds Like Your Team</title>
    <link href="https://www.tcdev.de/blog/how-rasepi-translations-work-and-why-they-sound-like-your-team/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/how-rasepi-translations-work-and-why-they-sound-like-your-team/</id>
    <updated>2026-03-31T00:00:00Z</updated>
    <summary>This platform doesn&#39;t just translate your documentation into other languages. It learns your terminology, matches your tone, and lets every language version live its own life. Here&#39;s how.</summary>
    <content type="html">&lt;p&gt;If you&#39;ve ever run a document through Google Translate, or honestly any translation tool, you know the result. Technically correct. Tonally wrong. Your product is suddenly called something different. Your team&#39;s internal shorthand disappears. Formal &amp;quot;you&amp;quot; where your company uses informal, or the other way around.&lt;/p&gt;
&lt;p&gt;The output is translated, but it doesn&#39;t sound like you.&lt;/p&gt;
&lt;p&gt;That&#39;s what I built this platform&#39;s translation system to fix. Not &amp;quot;can we translate documentation&amp;quot; (every tool can do that now) but &amp;quot;can we translate it so it actually sounds like our team wrote it.&amp;quot;&lt;/p&gt;
&lt;p&gt;The answer is yes. And it doesn&#39;t require a team of professional translators to get there.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/natural-translations.svg&quot; alt=&quot;Translations that sound like your team&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;Only what changed gets translated&lt;/h2&gt;
&lt;p&gt;Most documentation platforms translate entire pages. You change one sentence and the whole document goes off for retranslation. Every language, every paragraph, whether it changed or not.&lt;/p&gt;
&lt;p&gt;This platform works differently. It tracks every paragraph individually. When you edit one section of a 20-section document, only that one section gets retranslated. The other 19, across all languages, stay exactly as they were.&lt;/p&gt;
&lt;p&gt;This means two things:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Your translation costs drop dramatically.&lt;/strong&gt; We&#39;re talking 94% less for typical edits. Most updates touch one or two sections, not the whole page.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translations you already reviewed stay stable.&lt;/strong&gt; If your German team approved a translation last week, editing an unrelated paragraph in English won&#39;t touch their approved text.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;The system knows what changed because every paragraph has a unique identity and a content fingerprint. When the fingerprint changes, that specific paragraph is flagged for retranslation. Nothing else.&lt;/p&gt;
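&lt;p&gt;The mechanism is simple enough to sketch. This is an illustration of the idea, not this platform&#39;s actual implementation:&lt;/p&gt;

```python
import hashlib

def fingerprint(text):
    """Content fingerprint for one paragraph: hash of its normalised text."""
    normalised = " ".join(text.split())  # ignore whitespace-only edits
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def changed_paragraphs(old_doc, new_doc):
    """Given two versions of a document, each a dict of paragraph ID to text,
    return the IDs whose fingerprint changed and thus need retranslation."""
    flagged = []
    for pid, text in new_doc.items():
        if pid in old_doc and fingerprint(old_doc[pid]) != fingerprint(text):
            flagged.append(pid)
    return sorted(flagged)

old = {"p1": "Welcome to the API.", "p2": "Auth uses OAuth 2.0."}
new = {"p1": "Welcome to the API.", "p2": "Auth uses OAuth 2.1."}
print(changed_paragraphs(old, new))  # → ['p2']
```

&lt;p&gt;Only &lt;code&gt;p2&lt;/code&gt; gets queued for retranslation; &lt;code&gt;p1&lt;/code&gt;, and every already-approved translation of it, stays untouched.&lt;/p&gt;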
&lt;h2&gt;Your glossary, your terminology&lt;/h2&gt;
&lt;p&gt;Here&#39;s where it gets interesting.&lt;/p&gt;
&lt;p&gt;Every company has its own vocabulary. &amp;quot;Sprint Review&amp;quot; might stay as &amp;quot;Sprint Review&amp;quot; in your German docs because your Berlin team uses the English term. Or it might become &amp;quot;Sprint-Überprüfung&amp;quot; because your Munich team prefers the German version. &amp;quot;Knowledge Base&amp;quot; might be &amp;quot;Wissensdatenbank&amp;quot; or &amp;quot;Knowledge Base&amp;quot; or something entirely different your team coined internally.&lt;/p&gt;
&lt;p&gt;This platform lets you build a glossary for each language. Basically a list of terms and their approved translations. When a paragraph is translated, the system checks your glossary first. Every term in your list gets translated exactly the way you defined it. Every time. Across every document.&lt;/p&gt;
&lt;p&gt;You can manage your glossary directly in this platform:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Add terms one by one&lt;/strong&gt; as you notice inconsistencies&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Import a CSV&lt;/strong&gt; if you already have a terminology list from another system&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Export your glossary&lt;/strong&gt; to share with external translators or other tools&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The glossary works per language pair. Your English-to-German glossary is separate from your English-to-French glossary. This matters because the same English term might need different treatment in different languages. &amp;quot;Sprint Review&amp;quot; might stay English in German but get translated in Japanese.&lt;/p&gt;
&lt;p&gt;When you update your glossary, the change takes effect the next time any paragraph is translated into that language. No need to retranslate everything manually. The next natural edit cycle picks it up.&lt;/p&gt;
&lt;h2&gt;Style rules: making translations sound like you wrote them&lt;/h2&gt;
&lt;p&gt;Glossaries handle individual words. But a translation can use all the right terms and still feel off. Wrong tone. Dates in the wrong format. Numbers with the wrong separator. Currency symbols in the wrong place.&lt;/p&gt;
&lt;p&gt;That&#39;s what style rules are for.&lt;/p&gt;
&lt;p&gt;For each language, you can set up a collection of rules that control how translations are shaped:&lt;/p&gt;
&lt;h3&gt;Formatting conventions&lt;/h3&gt;
&lt;p&gt;These are the details that make a document feel native rather than &amp;quot;obviously translated from English&amp;quot;:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Date and time formats.&lt;/strong&gt; 24-hour clock for German, AM/PM for English, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Number formatting.&lt;/strong&gt; Comma as decimal separator in German (3,14 instead of 3.14), period for thousands&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Punctuation rules.&lt;/strong&gt; Academic degree formatting, quotation mark styles, and other regional conventions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You pick the conventions that match your company&#39;s standards. This platform applies them to every translation in that language, across every document.&lt;/p&gt;
&lt;h3&gt;Custom instructions&lt;/h3&gt;
&lt;p&gt;This is where things get really powerful. Custom instructions are plain-language directives that tell the translation engine how to handle your content. You write them in normal sentences, and the engine follows them.&lt;/p&gt;
&lt;p&gt;Some examples:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Use a friendly, diplomatic tone&amp;quot;&lt;/em&gt; for a company that wants approachable documentation&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Always use the formal &#39;Sie&#39; form, never &#39;du&#39;&amp;quot;&lt;/em&gt; for professional German communication&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Use British English spelling: colour, organisation, licence&amp;quot;&lt;/em&gt; when your English-speaking audience is UK-based&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;Put currency symbols after the numeric amount&amp;quot;&lt;/em&gt; to match European conventions&lt;/li&gt;
&lt;li&gt;&lt;em&gt;&amp;quot;When describing API endpoints, use imperative mood&amp;quot;&lt;/em&gt; for technical docs that should feel direct&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;You can add up to 200 custom instructions per language. They work alongside your glossary and formatting rules, and the translation engine considers all of them together on every translation.&lt;/p&gt;
&lt;h3&gt;Formality&lt;/h3&gt;
&lt;p&gt;German has &amp;quot;du&amp;quot; and &amp;quot;Sie.&amp;quot; French has &amp;quot;tu&amp;quot; and &amp;quot;vous.&amp;quot; Japanese has multiple levels of politeness. Even languages without obvious formal/informal pronouns have tonal differences that matter.&lt;/p&gt;
&lt;p&gt;This platform lets you set the formality level for each language. Once configured, every translated paragraph matches that tone. If your company addresses readers formally in French (&amp;quot;vous&amp;quot;) but informally in German (&amp;quot;du&amp;quot;), that&#39;s exactly what every translation will do.&lt;/p&gt;
&lt;h3&gt;It all works together&lt;/h3&gt;
&lt;p&gt;Here&#39;s what matters: glossary terms, formatting conventions, custom instructions, and formality settings all apply to every translation at the same time. You don&#39;t pick one or the other. You set them all up once, and every paragraph that gets translated goes through the same set of rules.&lt;/p&gt;
&lt;p&gt;The result is translations that read like someone on your local team wrote them. Not like a machine that translated each sentence without knowing anything about your company.&lt;/p&gt;
&lt;h2&gt;Each language can have its own content&lt;/h2&gt;
&lt;p&gt;This is the feature that surprises people the most.&lt;/p&gt;
&lt;p&gt;In this platform, a translated document isn&#39;t a locked copy of the original. Each language version can have content that only exists in that language.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Why does this matter?&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;Because different markets need different things:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Your German documentation might need a DSGVO (GDPR) compliance section that doesn&#39;t apply to the US version&lt;/li&gt;
&lt;li&gt;Your Japanese team might need a note about local tooling nobody else uses&lt;/li&gt;
&lt;li&gt;Your Brazilian office might need context about regional tax regulations&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In most translation tools, adding content to one language version means it gets overwritten the next time someone retranslates from English. Teams figure this out fast and stop adding local content. They create shadow docs in Notion or Slack or somewhere else, and now you have two systems that nobody fully trusts.&lt;/p&gt;
&lt;p&gt;In this platform, unique content is flagged as belonging to that language. It&#39;s never overwritten by retranslation. It&#39;s never deleted when the English source changes. It lives alongside the translated content as a natural part of the document.&lt;/p&gt;
&lt;p&gt;Same goes for structure. If your Japanese translators prefer numbered lists where the English version uses bullets (a common convention in Japanese technical writing), they can change the format. This platform preserves that choice across future updates.&lt;/p&gt;
&lt;p&gt;Every language version is a first-class document, not a read-only mirror.&lt;/p&gt;
&lt;h2&gt;Automatic and human: they work together&lt;/h2&gt;
&lt;p&gt;This platform doesn&#39;t force you to choose between machine translation and human translation. It supports both, and it knows the difference.&lt;/p&gt;
&lt;p&gt;When a paragraph is machine-translated and the source changes, this platform retranslates it automatically. No human intervention needed. The glossary and style rules keep things consistent.&lt;/p&gt;
&lt;p&gt;When a paragraph has been manually edited by a human translator, maybe they rewrote it for cultural nuance or added context a machine wouldn&#39;t catch, this platform respects that work. If the source changes, the system flags the paragraph as needing review but &lt;strong&gt;never silently overwrites human edits&lt;/strong&gt;. The translator sees what changed in the source and decides how to update their version.&lt;/p&gt;
&lt;p&gt;This means your translation quality improves over time. Machine translation handles the bulk. Human translators focus on the paragraphs that need a human touch. And neither one steps on the other&#39;s work.&lt;/p&gt;
&lt;h2&gt;Two modes: always current or translate on demand&lt;/h2&gt;
&lt;p&gt;For each language, you choose when translations happen:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Always translate.&lt;/strong&gt; Every time someone saves the source document, changed paragraphs are retranslated immediately. Best for your most important languages where readers expect up-to-the-minute accuracy.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translate when viewed.&lt;/strong&gt; Changed paragraphs are flagged but not translated until someone actually opens the document in that language. Great for languages that are used less frequently. No wasted translation costs on content nobody is reading.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Both modes use the same glossary, the same style rules, the same quality. The only difference is timing.&lt;/p&gt;
&lt;h2&gt;What this looks like in practice&lt;/h2&gt;
&lt;p&gt;Say you run a company with teams in London, Munich, Paris, and Tokyo. Your documentation is written in English.&lt;/p&gt;
&lt;p&gt;A product manager in London updates the deployment guide. One section about a new CI/CD step.&lt;/p&gt;
&lt;p&gt;Here&#39;s what happens:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;German (always translate).&lt;/strong&gt; The changed section is retranslated within seconds. &amp;quot;Sprint Review&amp;quot; becomes &amp;quot;Sprint-Überprüfung&amp;quot; because that&#39;s in your glossary. Formal &amp;quot;Sie&amp;quot; because that&#39;s your formality setting. Dates in 24-hour format because that&#39;s your configured rule. The custom instruction &amp;quot;use a direct, imperative tone&amp;quot; shapes the phrasing. The DSGVO section the Munich team added? Untouched.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;French (always translate).&lt;/strong&gt; Same section, retranslated immediately. &amp;quot;Vous&amp;quot; formality. French glossary terms applied. Currency symbols after the number per your custom instruction. The rest of the document stays exactly as the Paris office last reviewed it.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Japanese (translate when viewed).&lt;/strong&gt; The changed section is flagged as stale. When someone in Tokyo opens the document, it&#39;s translated on the fly. Their custom numbered-list formatting is preserved. Their local tooling note stays in place.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;One edit. Three languages updated. Zero full-document retranslations. Consistent terminology, consistent tone, and every team&#39;s local additions left untouched.&lt;/p&gt;
&lt;h2&gt;Speaking of language quality&lt;/h2&gt;
&lt;p&gt;The translation engine behind all of this is DeepL, the same technology that powers this platform&#39;s &lt;strong&gt;Talk to Docs&lt;/strong&gt; feature. You can speak to your documentation and get answers out loud. DeepL Voice handles the spoken interaction, which means the same terminology consistency, style rules, and language quality you get in written translations carries over to voice conversations too. Your glossary terms and custom instructions sound right whether your team is reading or listening.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Translations that sound like your team aren&#39;t a luxury. For companies operating across languages, they&#39;re the difference between documentation people trust and documentation people work around. Glossaries, style rules, custom instructions, smart retranslation, and per-language unique content make that possible. Automatically, from day one.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Your documentation should sound like your team in every language. Not like a machine. Not like a different company. Like you.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#multilingual&quot;&gt;See multilingual publishing in action →&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="multilingual" />
    <category term="translation" />
    <category term="deepl" />
  </entry>
  <entry>
    <title>Inside the Translation Engine: Glossaries, Style Rules, and Smart Retranslation</title>
    <link href="https://www.tcdev.de/blog/inside-the-translation-engine-glossaries-style-rules-and-smart-retranslation/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/inside-the-translation-engine-glossaries-style-rules-and-smart-retranslation/</id>
    <updated>2026-03-31T00:00:00Z</updated>
    <summary>A deep technical walkthrough of how this platform&#39;s translation pipeline actually works: glossary resolution, DeepL style rules and custom instructions, content hashing, and the integration that ties it all together.</summary>
    <content type="html">&lt;p&gt;Our &lt;a href=&quot;https://www.tcdev.de/en/blog/how-plugin-guardrail-and-pipeline-systems-work/&quot;&gt;previous architecture post&lt;/a&gt; covered plugins, action guards, and the pipeline system. This one goes deeper into the translation engine, the part I think makes this platform fundamentally different from every other docs platform.&lt;/p&gt;
&lt;p&gt;Not the marketing pitch about translating paragraphs instead of pages. The actual code. How glossaries are resolved per tenant, how DeepL&#39;s style rules and custom instructions shape every translation, how content hashing drives stale detection, and how the orchestrator decides which blocks to retranslate.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/translation-engine-deep-dive.svg&quot; alt=&quot;Translation engine: glossaries, style rules, and smart retranslation&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The translation pipeline&lt;/h2&gt;
&lt;p&gt;When a user saves a document, the system doesn&#39;t just retranslate everything. It runs a pretty specific sequence:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Parse the TipTap JSON into individual blocks&lt;/li&gt;
&lt;li&gt;Compare content hashes to detect which blocks actually changed&lt;/li&gt;
&lt;li&gt;For changed blocks, resolve the tenant&#39;s glossary and style rule list for the language pair&lt;/li&gt;
&lt;li&gt;Apply style rules, custom instructions, and formality from tenant configuration&lt;/li&gt;
&lt;li&gt;Send only changed blocks to DeepL&lt;/li&gt;
&lt;li&gt;Update translation blocks and sync content hashes&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each step is its own service with its own interface. That matters because any step can be swapped out for something else: a different translation provider, a different hashing algorithm, a different glossary source.&lt;/p&gt;
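&lt;p&gt;Step 2 deserves a quick illustration, since the hashing itself isn&#39;t shown anywhere below. Here&#39;s a minimal sketch of how stale detection could work, assuming SHA-256 over the serialized block JSON (the algorithm choice and the &lt;code&gt;ComputeContentHash&lt;/code&gt; name are illustrative assumptions, not the actual implementation):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;using System.Security.Cryptography;
using System.Text;

// Sketch: hash the serialized block content. A changed hash means
// the block is stale and needs retranslation.
public static string ComputeContentHash(string blockJson)
{
    var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(blockJson));
    return Convert.ToHexString(bytes);
}

// Step 2 of the pipeline: only blocks whose hash changed go to DeepL.
var changedBlockIds = blocks
    .Where(b =&amp;gt; ComputeContentHash(b.ContentJson) != b.ContentHash)
    .Select(b =&amp;gt; b.Id)
    .ToList();
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Whatever the real hash function is, the properties that matter are the same: cheap to compute on save, cheap to compare, and deterministic across serialization.&lt;/p&gt;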
&lt;h2&gt;Glossary resolution: tenant-scoped, DeepL-synced&lt;/h2&gt;
&lt;p&gt;DeepL glossaries have a constraint most people don&#39;t know about: &lt;strong&gt;they&#39;re immutable.&lt;/strong&gt; You can&#39;t edit a DeepL glossary. Any change means deleting the old one and creating a new one.&lt;/p&gt;
&lt;p&gt;This platform handles this by treating the database as the source of truth and DeepL glossaries as throwaway runtime artifacts. The &lt;code&gt;TenantGlossary&lt;/code&gt; entity stores everything locally:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public class TenantGlossary : ITenantScoped
{
    public Guid Id { get; set; }
    public Guid TenantId { get; set; }
    public string Name { get; set; }
    public string SourceLanguage { get; set; }     // e.g. &amp;quot;en&amp;quot;
    public string TargetLanguage { get; set; }     // e.g. &amp;quot;de&amp;quot;
    public string? DeepLGlossaryId { get; set; }   // Runtime DeepL ID
    public DateTime? LastSyncedAt { get; set; }
    public bool IsDirty { get; set; } = true;      // Triggers re-sync
    public ICollection&amp;lt;TenantGlossaryEntry&amp;gt; Entries { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When a user adds a glossary entry, say mapping &amp;quot;Sprint Review&amp;quot; to &amp;quot;Sprint-Überprüfung&amp;quot; for EN→DE, the database record updates immediately and &lt;code&gt;IsDirty&lt;/code&gt; gets set to &lt;code&gt;true&lt;/code&gt;. The DeepL glossary isn&#39;t recreated right then. It gets recreated lazily, the next time a translation actually needs it.&lt;/p&gt;
&lt;h3&gt;The sync flow&lt;/h3&gt;
&lt;p&gt;Before every translation call, the system resolves the glossary:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;string?&amp;gt; GetOrSyncDeepLGlossaryIdAsync(
    string sourceLanguage, string targetLanguage,
    CancellationToken ct = default)
{
    var glossary = await _db.TenantGlossaries
        .Include(g =&amp;gt; g.Entries)
        .FirstOrDefaultAsync(g =&amp;gt;
            g.SourceLanguage == sourceLanguage &amp;amp;&amp;amp;
            g.TargetLanguage == targetLanguage, ct);

    if (glossary is null || glossary.Entries.Count == 0)
        return null;

    if (!glossary.IsDirty &amp;amp;&amp;amp; glossary.DeepLGlossaryId is not null)
        return glossary.DeepLGlossaryId;

    // Dirty - delete old, create new
    if (glossary.DeepLGlossaryId is not null)
        await _deepL.DeleteGlossaryAsync(glossary.DeepLGlossaryId);

    var entries = glossary.Entries
        .ToDictionary(e =&amp;gt; e.SourceTerm, e =&amp;gt; e.TargetTerm);

    var deepLGlossary = await _deepL.CreateGlossaryAsync(
        $&amp;quot;tenant-{glossary.Id}&amp;quot;,
        glossary.SourceLanguage,
        glossary.TargetLanguage,
        entries);

    glossary.DeepLGlossaryId = deepLGlossary.GlossaryId;
    glossary.IsDirty = false;
    glossary.LastSyncedAt = DateTime.UtcNow;
    await _db.SaveChangesAsync(ct);

    return glossary.DeepLGlossaryId;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Three things worth noting here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Lazy sync.&lt;/strong&gt; We only hit the DeepL API when a translation is actually needed. Editing glossary entries in bulk doesn&#39;t trigger dozens of API calls.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tenant isolation.&lt;/strong&gt; The query runs through EF global query filters, so &lt;code&gt;TenantGlossaries&lt;/code&gt; is automatically scoped. Tenant A&#39;s glossary entries never leak into Tenant B&#39;s translations.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;One glossary per language pair.&lt;/strong&gt; DeepL enforces this anyway. One EN→DE glossary, one EN→FR glossary, and so on. The &lt;code&gt;(SourceLanguage, TargetLanguage)&lt;/code&gt; pair is unique per tenant.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Glossary entries&lt;/h3&gt;
&lt;p&gt;Individual entries are just term mappings:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public class TenantGlossaryEntry
{
    public Guid Id { get; set; }
    public Guid GlossaryId { get; set; }
    public string SourceTerm { get; set; }   // e.g. &amp;quot;Sprint Review&amp;quot;
    public string TargetTerm { get; set; }   // e.g. &amp;quot;Sprint-Überprüfung&amp;quot;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The API gives you full CRUD plus CSV import/export for bulk management:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;POST   /admin/glossaries                         Create glossary
POST   /admin/glossaries/{id}/entries            Add term
PUT    /admin/glossaries/{id}/entries/{entryId}  Update term
DELETE /admin/glossaries/{id}/entries/{entryId}  Remove term
POST   /admin/glossaries/{id}/import             Import CSV
GET    /admin/glossaries/{id}/export             Export CSV
POST   /admin/glossaries/{id}/sync               Force DeepL sync
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;CSV import is super useful for teams migrating from existing translation memory systems. Export your terms, clean them up, import into this platform, and the next translation run uses the new glossary automatically.&lt;/p&gt;
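&lt;p&gt;For reference, the import format is a plain two-column term mapping, something like this (the exact header row is an assumption; check the import dialog for the canonical format):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;source_term,target_term
Sprint Review,Sprint-Überprüfung
deployment,Bereitstellung
&lt;/code&gt;&lt;/pre&gt;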
&lt;h2&gt;Style rules, custom instructions, and formality&lt;/h2&gt;
&lt;p&gt;Glossaries handle terminology. But terminology is only half of it. A translation can use all the right words and still sound wrong. Wrong tone, wrong date format, wrong punctuation conventions.&lt;/p&gt;
&lt;p&gt;DeepL&#39;s &lt;strong&gt;Style Rules API&lt;/strong&gt; (v3) solves this. You can create reusable style rule lists that combine two types of controls:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Configured rules&lt;/strong&gt;, predefined formatting conventions for dates, times, punctuation, numbers, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Custom instructions&lt;/strong&gt;, free-text directives that shape tone, phrasing, and domain-specific conventions&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;This platform creates and manages these per tenant, per target language. The &lt;code&gt;TenantStyleRuleList&lt;/code&gt; entity stores the DeepL &lt;code&gt;style_id&lt;/code&gt; alongside the tenant&#39;s configured rules and custom instructions:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public class TenantStyleRuleList : ITenantScoped
{
    public Guid Id { get; set; }
    public Guid TenantId { get; set; }
    public string Name { get; set; }
    public string TargetLanguage { get; set; }      // e.g. &amp;quot;de&amp;quot;
    public string? DeepLStyleId { get; set; }       // Runtime DeepL style_id
    public string ConfiguredRulesJson { get; set; }  // Serialized configured rules
    public bool IsDirty { get; set; } = true;
    public DateTime? LastSyncedAt { get; set; }
    public ICollection&amp;lt;TenantCustomInstruction&amp;gt; CustomInstructions { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Creating style rule lists&lt;/h3&gt;
&lt;p&gt;When an admin sets up translation rules for German, this platform calls DeepL&#39;s v3 API to create the style rule list. Here&#39;s what that looks like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;string&amp;gt; CreateOrSyncStyleRuleListAsync(
    TenantStyleRuleList ruleList, CancellationToken ct = default)
{
    if (!ruleList.IsDirty &amp;amp;&amp;amp; ruleList.DeepLStyleId is not null)
        return ruleList.DeepLStyleId;

    // DeepL style rule lists are mutable - we can update in place
    if (ruleList.DeepLStyleId is not null)
    {
        // Replace configured rules on existing list
        await _httpClient.PutAsJsonAsync(
            $&amp;quot;v3/style_rules/{ruleList.DeepLStyleId}/configured_rules&amp;quot;,
            JsonSerializer.Deserialize&amp;lt;JsonElement&amp;gt;(ruleList.ConfiguredRulesJson),
            ct);

        // Sync custom instructions
        await SyncCustomInstructionsAsync(ruleList, ct);

        ruleList.IsDirty = false;
        ruleList.LastSyncedAt = DateTime.UtcNow;
        return ruleList.DeepLStyleId;
    }

    // Create new style rule list
    var payload = new
    {
        name = $&amp;quot;tenant-{ruleList.TenantId}-{ruleList.TargetLanguage}&amp;quot;,
        language = ruleList.TargetLanguage,
        configured_rules = JsonSerializer.Deserialize&amp;lt;JsonElement&amp;gt;(
            ruleList.ConfiguredRulesJson),
        custom_instructions = ruleList.CustomInstructions.Select(ci =&amp;gt; new
        {
            label = ci.Label,
            prompt = ci.Prompt,
            source_language = ci.SourceLanguage
        })
    };

    var response = await _httpClient.PostAsJsonAsync(&amp;quot;v3/style_rules&amp;quot;, payload, ct);
    var result = await response.Content.ReadFromJsonAsync&amp;lt;StyleRuleResponse&amp;gt;(ct);

    ruleList.DeepLStyleId = result.StyleId;
    ruleList.IsDirty = false;
    ruleList.LastSyncedAt = DateTime.UtcNow;
    await _db.SaveChangesAsync(ct);

    return ruleList.DeepLStyleId;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Unlike glossaries, DeepL style rule lists are &lt;strong&gt;mutable&lt;/strong&gt;. You can replace configured rules in place with &lt;code&gt;PUT /v3/style_rules/{style_id}/configured_rules&lt;/code&gt;, and custom instructions can be individually added, updated, or deleted. Much friendlier for iterative refinement.&lt;/p&gt;
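&lt;p&gt;In practice that means an admin tweaking one instruction doesn&#39;t rebuild anything. A sketch, using the &lt;code&gt;IDeepLService&lt;/code&gt; wrapper shown later in this post (variable names here are illustrative):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Replace a single instruction on an existing style rule list.
// No delete-and-recreate dance like glossaries require.
await _deepL.DeleteCustomInstructionAsync(styleId, oldInstructionId);
await _deepL.AddCustomInstructionAsync(
    styleId,
    label: &amp;quot;Tone instruction&amp;quot;,
    prompt: &amp;quot;Use a friendly, diplomatic tone&amp;quot;,
    sourceLanguage: null);
&lt;/code&gt;&lt;/pre&gt;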
&lt;h3&gt;What configured rules look like&lt;/h3&gt;
&lt;p&gt;Configured rules cover formatting conventions that vary by language or company preference. Things like:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;dates_and_times&amp;quot;: {
    &amp;quot;time_format&amp;quot;: &amp;quot;use_24_hour_clock&amp;quot;,
    &amp;quot;calendar_era&amp;quot;: &amp;quot;use_bc_and_ad&amp;quot;
  },
  &amp;quot;punctuation&amp;quot;: {
    &amp;quot;periods_in_academic_degrees&amp;quot;: &amp;quot;do_not_use&amp;quot;
  },
  &amp;quot;numbers&amp;quot;: {
    &amp;quot;decimal_separator&amp;quot;: &amp;quot;use_comma&amp;quot;
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;These sound trivial, but they compound fast. A German document that uses AM/PM time format and period-separated decimals just reads as &amp;quot;translated from English&amp;quot; to a German reader. Setting &lt;code&gt;use_24_hour_clock&lt;/code&gt; and &lt;code&gt;use_comma&lt;/code&gt; for decimal separators across all German translations eliminates that immediately.&lt;/p&gt;
&lt;h3&gt;Custom instructions: this is the real power&lt;/h3&gt;
&lt;p&gt;Custom instructions are free-text directives, up to 200 per style rule list, each up to 300 characters. You basically tell DeepL how to shape the translation in plain language:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public class TenantCustomInstruction
{
    public Guid Id { get; set; }
    public Guid StyleRuleListId { get; set; }
    public string Label { get; set; }              // e.g. &amp;quot;Tone instruction&amp;quot;
    public string Prompt { get; set; }             // e.g. &amp;quot;Use a friendly, diplomatic tone&amp;quot;
    public string? SourceLanguage { get; set; }    // Optional source lang filter
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Real examples from our tenants:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;Use a friendly, diplomatic tone&amp;quot;&lt;/code&gt; for a startup that wants approachable docs&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;Always use &#39;Sie&#39; form, never &#39;du&#39;&amp;quot;&lt;/code&gt; for a German law firm&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;Translate &#39;deployment&#39; as &#39;Bereitstellung&#39;, never &#39;Deployment&#39;&amp;quot;&lt;/code&gt; for terms that need context-dependent handling beyond simple glossary mapping&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;Use British English spelling (colour, organisation, licence)&amp;quot;&lt;/code&gt; for UK-based companies translating between English variants&lt;/li&gt;
&lt;li&gt;&lt;code&gt;&amp;quot;Put currency symbols after the numeric amount&amp;quot;&lt;/code&gt; to match European conventions&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Custom instructions are really powerful for domain-specific conventions that don&#39;t fit in glossary entries. A glossary maps one term to another. A custom instruction can say &amp;quot;when translating API docs, use imperative mood instead of passive voice.&amp;quot; That&#39;s a completely different kind of control.&lt;/p&gt;
&lt;h3&gt;Formality&lt;/h3&gt;
&lt;p&gt;DeepL&#39;s &lt;code&gt;formality&lt;/code&gt; parameter (&lt;code&gt;default&lt;/code&gt;, &lt;code&gt;more&lt;/code&gt;, &lt;code&gt;less&lt;/code&gt;, &lt;code&gt;prefer_more&lt;/code&gt;, &lt;code&gt;prefer_less&lt;/code&gt;) is still available as a separate control alongside style rules. German &amp;quot;du&amp;quot; versus &amp;quot;Sie&amp;quot;, French &amp;quot;tu&amp;quot; versus &amp;quot;vous&amp;quot;, Japanese politeness levels. These are set per tenant language through &lt;code&gt;TenantLanguageConfig&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public class TenantLanguageConfig : ITenantScoped
{
    public string LanguageCode { get; set; }
    public string DisplayName { get; set; }
    public bool IsEnabled { get; set; }
    public TranslationTrigger Trigger { get; set; }
    public string? Formality { get; set; }         // &amp;quot;more&amp;quot;, &amp;quot;less&amp;quot;, &amp;quot;prefer_more&amp;quot;, etc.
    public string? StyleRuleListId { get; set; }   // Links to TenantStyleRuleList
    public string? TranslationProvider { get; set; }
    public int SortOrder { get; set; }
    public bool IsDefault { get; set; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Formality, style rules, and glossaries all compose. A single translation call can carry all three:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var glossaryId = await GetOrSyncDeepLGlossaryIdAsync(sourceLang, targetLang, ct);
var styleId = await GetOrSyncStyleRuleListAsync(targetLang, ct);
var formality = tenantLanguageConfig.Formality ?? &amp;quot;default&amp;quot;;

// Build the v2/translate request payload
var payload = new
{
    text = new[] { blockContent },
    source_lang = NormalizeLanguageCode(sourceLang),
    target_lang = NormalizeLanguageCode(targetLang),
    glossary_id = glossaryId,
    style_id = styleId,
    formality = formality,
    preserve_formatting = true,
    context = surroundingContext,  // Adjacent blocks, not billed
    model_type = &amp;quot;quality_optimized&amp;quot;
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Two things worth noting here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;The &lt;code&gt;context&lt;/code&gt; parameter.&lt;/strong&gt; We pass adjacent blocks as context to improve translation quality. DeepL uses this to resolve ambiguity but doesn&#39;t translate or bill for it. A paragraph about &amp;quot;cells&amp;quot; translates differently when the surrounding context is a biology document versus a spreadsheet manual.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Model selection.&lt;/strong&gt; Any request with &lt;code&gt;style_id&lt;/code&gt; or &lt;code&gt;custom_instructions&lt;/code&gt; automatically uses DeepL&#39;s &lt;code&gt;quality_optimized&lt;/code&gt; model. This is the highest quality tier. You can&#39;t combine these with &lt;code&gt;latency_optimized&lt;/code&gt;, and that&#39;s a deliberate constraint by DeepL. Style customisation needs the full model.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;Why this matters more than you&#39;d think&lt;/h3&gt;
&lt;p&gt;Picture a company writing internal docs in German with informal &amp;quot;du&amp;quot; that suddenly switches to formal &amp;quot;Sie&amp;quot; in a translated section. Looks inconsistent at best, unprofessional at worst. Formality handles that. But formality alone won&#39;t catch a document that uses AM/PM timestamps when your German office uses 24-hour format, or that puts the currency symbol before the number instead of after.&lt;/p&gt;
&lt;p&gt;All of these layered together (style rules, custom instructions, formality, glossaries) produce translations that read like someone on your team wrote them. Not like output from a machine that doesn&#39;t know your company exists.&lt;/p&gt;
&lt;h2&gt;The DeepL service layer&lt;/h2&gt;
&lt;p&gt;All DeepL communication goes through &lt;code&gt;IDeepLService&lt;/code&gt;. It wraps the official DeepL .NET SDK and handles v3 API calls for style rules:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public interface IDeepLService
{
    // Text translation (v2)
    Task&amp;lt;TextResult&amp;gt; TranslateTextAsync(
        string text, string sourceLanguage, string targetLanguage,
        string? options = null);

    Task&amp;lt;TextResult[]&amp;gt; TranslateTextBatchAsync(
        IEnumerable&amp;lt;string&amp;gt; texts, string sourceLanguage,
        string targetLanguage, string? options = null);

    // Glossary management (v2)
    Task&amp;lt;GlossaryInfo&amp;gt; CreateGlossaryAsync(
        string name, string sourceLang, string targetLang,
        Dictionary&amp;lt;string, string&amp;gt; entries);
    Task DeleteGlossaryAsync(string glossaryId);
    Task&amp;lt;GlossaryInfo&amp;gt; GetGlossaryAsync(string glossaryId);
    Task&amp;lt;GlossaryInfo[]&amp;gt; ListGlossariesAsync();
    Task&amp;lt;Dictionary&amp;lt;string, string&amp;gt;&amp;gt; GetGlossaryEntriesAsync(
        string glossaryId);

    // Style rules (v3)
    Task&amp;lt;StyleRuleResponse&amp;gt; CreateStyleRuleListAsync(
        string name, string language,
        JsonElement configuredRules,
        IEnumerable&amp;lt;CustomInstructionRequest&amp;gt; customInstructions);
    Task ReplaceConfiguredRulesAsync(
        string styleId, JsonElement configuredRules);
    Task&amp;lt;CustomInstructionResponse&amp;gt; AddCustomInstructionAsync(
        string styleId, string label, string prompt,
        string? sourceLanguage = null);
    Task DeleteCustomInstructionAsync(
        string styleId, string instructionId);
    Task DeleteStyleRuleListAsync(string styleId);

    // Usage tracking
    Task&amp;lt;Usage&amp;gt; GetUsageAsync();
    Task&amp;lt;Language[]&amp;gt; GetSourceLanguagesAsync();
    Task&amp;lt;Language[]&amp;gt; GetTargetLanguagesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The implementation handles language code normalisation. DeepL requires &lt;code&gt;EN-US&lt;/code&gt; or &lt;code&gt;EN-GB&lt;/code&gt; instead of bare &lt;code&gt;en&lt;/code&gt;, and &lt;code&gt;PT-PT&lt;/code&gt; or &lt;code&gt;PT-BR&lt;/code&gt; instead of &lt;code&gt;pt&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;private static string NormalizeLanguageCode(string code)
    =&amp;gt; code.ToLower() switch
    {
        &amp;quot;en&amp;quot; =&amp;gt; &amp;quot;EN-US&amp;quot;,
        &amp;quot;pt&amp;quot; =&amp;gt; &amp;quot;PT-PT&amp;quot;,
        _ =&amp;gt; code.ToUpper()
    };
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Batch translation uses 50-item chunking to stay within DeepL&#39;s API limits while maximising throughput:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;TranslationBatchResult&amp;gt; TranslateBatchAsync(
    Dictionary&amp;lt;string, string&amp;gt; texts,
    string sourceLanguage, string targetLanguage)
{
    var translations = new Dictionary&amp;lt;string, string&amp;gt;();
    long totalChars = 0;

    foreach (var chunk in texts.Chunk(50))
    {
        var results = await _deepL.TranslateTextBatchAsync(
            chunk.Select(kv =&amp;gt; kv.Value),
            sourceLanguage, targetLanguage);

        for (int i = 0; i &amp;lt; chunk.Length; i++)
        {
            translations[chunk[i].Key] = results[i].Text;
            totalChars += chunk[i].Value.Length;
        }
    }

    return new TranslationBatchResult
    {
        Translations = translations,
        BilledCharacters = totalChars
    };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Because we only send stale blocks, not entire documents, a typical translation batch for a single edit contains 1-3 blocks instead of 40+. That&#39;s where the 94% cost reduction comes from.&lt;/p&gt;
&lt;h2&gt;The translation orchestrator&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;TranslationOrchestrator&lt;/code&gt; decides what to do with each block when the source document changes. Let&#39;s walk through the decision tree:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task OrchestrateTranslationAsync(
    Guid entryId, List&amp;lt;Guid&amp;gt; changedBlockIds,
    CancellationToken ct = default)
{
    var entry = await _db.Entries
        .FirstOrDefaultAsync(e =&amp;gt; e.Id == entryId, ct);

    var translations = await _db.EntryTranslations
        .Where(t =&amp;gt; t.EntryId == entryId)
        .ToListAsync(ct);

    foreach (var translation in translations)
    {
        var langConfig = await GetLanguageConfigAsync(
            translation.Language, ct);

        var translationBlocks = await _db.TranslationBlocks
            .Where(tb =&amp;gt; changedBlockIds.Contains(tb.SourceBlockId)
                      &amp;amp;&amp;amp; tb.Language == translation.Language)
            .ToListAsync(ct);

        foreach (var block in translationBlocks)
        {
            if (block.IsLocked || block.TranslatedById is not null)
            {
                // Human-edited or locked - mark stale, don&#39;t overwrite
                block.Status = TranslationStatus.Stale;
            }
            else if (langConfig.Trigger == TranslationTrigger.AlwaysTranslate)
            {
                // Machine-translated, auto mode - retranslate now
                await RetranslateBlockAsync(block, translation.Language, ct);
            }
            else
            {
                // TranslateOnFirstVisit - mark stale, translate later
                block.Status = TranslationStatus.Stale;
            }
        }
    }

    await _db.SaveChangesAsync(ct);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The key bit: &lt;strong&gt;human-edited blocks are never automatically overwritten.&lt;/strong&gt; If a translator manually adjusted a block, maybe adding cultural context or rewording for clarity, the system respects that work. It marks the block as stale so the translator knows the source changed, but it won&#39;t silently replace their edits.&lt;/p&gt;
&lt;p&gt;Machine-translated blocks with &lt;code&gt;AlwaysTranslate&lt;/code&gt; enabled are retranslated immediately. Machine-translated blocks with &lt;code&gt;TranslateOnFirstVisit&lt;/code&gt; are marked stale and translated when someone actually opens the document in that language.&lt;/p&gt;
&lt;h2&gt;Translation triggers: when translations happen&lt;/h2&gt;
&lt;p&gt;Each language has a &lt;code&gt;TranslationTrigger&lt;/code&gt; that controls timing:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public enum TranslationTrigger
{
    AlwaysTranslate,         // Translate on every save
    TranslateOnFirstVisit    // Translate when first opened in that language
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;code&gt;AlwaysTranslate&lt;/code&gt; is useful for high-priority languages where you want translations to be immediately current. French for a company with a large Paris office. German for a company with headquarters in Munich.&lt;/p&gt;
&lt;p&gt;&lt;code&gt;TranslateOnFirstVisit&lt;/code&gt; is useful for languages that are occasionally needed but not worth the API cost of keeping perfectly current at all times. When someone opens the document in that language, stale blocks are translated on the fly.&lt;/p&gt;
&lt;p&gt;Both modes use the same glossary resolution, the same formality settings, and the same content hashing. The only difference is timing.&lt;/p&gt;
&lt;h2&gt;Unique content and structure adaptation&lt;/h2&gt;
&lt;p&gt;This is where the architecture really pays off beyond just translation.&lt;/p&gt;
&lt;p&gt;When a German translator adds a DSGVO compliance section that doesn&#39;t exist in English, they add it as a new block in the German version. That block has no &lt;code&gt;SourceBlockId&lt;/code&gt; and is flagged as unique content. The system never sends it for retranslation because there&#39;s no source to translate from. It only exists in German.&lt;/p&gt;
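&lt;p&gt;As a sketch (reusing the &lt;code&gt;TranslationBlock&lt;/code&gt; shape from the example below; the exact field values here are illustrative assumptions), a unique German block is simply one with no source reference:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// DSGVO section added directly in the German version.
// With no SourceBlockId there is nothing to retranslate from,
// so the orchestrator skips this block entirely.
var dsgvoBlock = new TranslationBlock
{
    SourceBlockId = null,   // no source block: unique content
    Language = &amp;quot;de&amp;quot;,
    Status = TranslationStatus.UpToDate,
};
&lt;/code&gt;&lt;/pre&gt;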
&lt;p&gt;When a Japanese translator changes a bullet list to a numbered list (a common convention in Japanese technical writing), the block&#39;s &lt;code&gt;IsStructureAdapted&lt;/code&gt; flag preserves this across future retranslation cycles:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var translation = new TranslationBlock
{
    SourceBlockId = sourceBlockId,
    Language = targetLanguage,
    BlockType = translatedBlockType,
    SourceBlockType = sourceBlock.BlockType,
    IsStructureAdapted = translatedBlockType != sourceBlock.BlockType,
    StructureAdaptationNotes = &amp;quot;Numbered list preferred in JP technical docs&amp;quot;,
    SourceContentHash = sourceBlock.ContentHash,
    Status = TranslationStatus.UpToDate,
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;IsNoTranslate&lt;/code&gt; flag handles content that should be copied verbatim: code blocks, URLs, product names, mathematical notation. The translation provider skips these entirely.&lt;/p&gt;
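&lt;p&gt;A minimal sketch of how that skip might look inside the provider loop (method and property names beyond &lt;code&gt;IsNoTranslate&lt;/code&gt; are assumptions for illustration, not the platform&#39;s actual API):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;foreach (var block in blocksToTranslate)
{
    if (block.IsNoTranslate)
    {
        // Code, URLs, product names: copied verbatim, no API call.
        block.TranslatedContent = block.SourceContent;
        block.Status = TranslationStatus.UpToDate;
        continue;
    }

    block.TranslatedContent = await TranslateAsync(block, ct);
}
&lt;/code&gt;&lt;/pre&gt;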
&lt;h2&gt;Putting it all together&lt;/h2&gt;
&lt;p&gt;Let&#39;s walk through the full flow. A user in London edits a paragraph in the English source document, and your Munich office has German set to &lt;code&gt;AlwaysTranslate&lt;/code&gt;:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;User saves.&lt;/strong&gt; TipTap sends JSON to the API.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Block extraction and change detection.&lt;/strong&gt; &lt;code&gt;CreateBlocksFromDocumentAsync&lt;/code&gt; parses JSON, recalculates content hashes, and compares old and new hashes to identify which blocks actually changed.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Orchestrator runs.&lt;/strong&gt; Finds the German &lt;code&gt;EntryTranslation&lt;/code&gt;, checks the German block. It&#39;s machine-translated, not locked, not human-edited, so it&#39;s eligible for retranslation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translation config loaded.&lt;/strong&gt; Glossary ID resolved via &lt;code&gt;GetOrSyncDeepLGlossaryIdAsync(&amp;quot;en&amp;quot;, &amp;quot;de&amp;quot;)&lt;/code&gt;, style rules via &lt;code&gt;GetOrSyncStyleRuleListAsync(&amp;quot;de&amp;quot;)&lt;/code&gt;, formality set to &amp;quot;more&amp;quot; (formal &amp;quot;Sie&amp;quot;), adjacent blocks passed as context for disambiguation.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DeepL call.&lt;/strong&gt; Single block sent with glossary ID, style ID, formality, and context.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Block updated.&lt;/strong&gt; Translated content stored, &lt;code&gt;SourceContentHash&lt;/code&gt; synced, status set to &lt;code&gt;UpToDate&lt;/code&gt;. One block translated instead of all 40. The other 39? Untouched.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Meanwhile, your Tokyo office has Japanese set to &lt;code&gt;TranslateOnFirstVisit&lt;/code&gt;. The same edit marks the Japanese translation block as &lt;code&gt;Stale&lt;/code&gt;. When someone in Tokyo opens the document, steps 4-6 happen on the fly. Their structure adaptation (numbered list) is preserved. Their unique blocks stay exactly where they are.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;I think the translation engine is the part of this platform that delivers the most visible value. Translations that use your terminology, follow your formatting conventions, obey your custom instructions, match your tone, respect your translators&#39; work, and cost a fraction of what full-document retranslation would. The architecture makes all of that automatic and stays out of the way when humans want to take over.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The same DeepL engine that powers written translations also powers Talk to Docs, our conversational documentation interface, with DeepL Voice handling the spoken interaction. Same glossaries, same style rules, same formality, same consistency. Whether your team reads documentation or talks to it, the language quality is identical.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://developers.tcdev.de/&quot;&gt;Explore the translation API →&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="architecture" />
    <category term="translation" />
    <category term="deepl" />
  </entry>
  <entry>
    <title>Stop Maintaining Five Copies of the Same Document</title>
    <link href="https://www.tcdev.de/blog/stop-maintaining-five-copies-of-the-same-document/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/stop-maintaining-five-copies-of-the-same-document/</id>
    <updated>2026-03-31T00:00:00Z</updated>
    <summary>Most companies have onboarding_germany, onboarding_japan, onboarding_brazil. In this platform, it&#39;s just &#39;Onboarding&#39;. One document. Shared steps translated, local steps per language. No more copies drifting apart.</summary>
    <content type="html">&lt;p&gt;Open your company wiki right now and search for &amp;quot;onboarding.&amp;quot; How many results do you get?&lt;/p&gt;
&lt;p&gt;If you&#39;re a global company, I&#39;m guessing it&#39;s not one. It&#39;s probably something like this:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;Onboarding Guide (EN)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Onboarding Guide - Germany&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Onboarding Guide - Japan&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Onboarding LATAM (draft)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;Onboarding - New (DO NOT USE OLD ONE)&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Five documents. All covering roughly the same thing. All slightly different. All maintained by different people on different schedules. Some current, some three months behind, one that nobody is sure about anymore.&lt;/p&gt;
&lt;p&gt;This is what happens when your documentation platform can&#39;t handle multilingual content properly. You end up copying the whole document for every market, and each copy slowly drifts away from the others.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/one-document-all-languages.svg&quot; alt=&quot;One document, every language&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The copy-and-localise trap&lt;/h2&gt;
&lt;p&gt;It starts innocently enough. You have a great onboarding guide in English. The Berlin office needs it in German, so someone copies it, translates it, and adds the Germany-specific bits: DSGVO training, Betriebsrat information, local health insurance enrollment.&lt;/p&gt;
&lt;p&gt;Then Tokyo needs one. Copy again. Translate. Add the Japan-specific stuff: hanko registration, commuter pass process, office etiquette guide.&lt;/p&gt;
&lt;p&gt;São Paulo is next. Same thing. Copy, translate, add local content about CLT requirements, meal vouchers, and tax documents.&lt;/p&gt;
&lt;p&gt;Now you have four documents. The English original gets updated regularly. The German version was updated last quarter. The Japanese version... someone thinks Tanaka-san updated it in October. The Brazilian version was created by a contractor who left, and nobody has touched it since.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Every copy is a maintenance burden.&lt;/strong&gt; And every one of them contains a mix of shared content (the stuff that&#39;s the same everywhere) and local content (the stuff specific to that market). But the platform doesn&#39;t know the difference. It&#39;s all just text on a page.&lt;/p&gt;
&lt;p&gt;So when someone updates the security policy section in the English original, nobody updates the other four. Or worse, someone updates the German one but not the Japanese one. Now you have five documents that all say slightly different things about the same company policy.&lt;/p&gt;
&lt;h2&gt;The real problem: shared and local content are mixed together&lt;/h2&gt;
&lt;p&gt;The thing is, most of these documents are 70-80% identical. The onboarding steps, the tools setup, the security policies, the company values section, the &amp;quot;who to contact&amp;quot; list. That&#39;s all the same regardless of whether you&#39;re in Berlin, Tokyo, or São Paulo.&lt;/p&gt;
&lt;p&gt;The local stuff is maybe 20-30% of the document. Specific compliance requirements, local benefits, regional processes, team contacts for that office.&lt;/p&gt;
&lt;p&gt;But when everything lives in one big flat document per language, there&#39;s no way to tell which parts are shared and which are local. An update to the shared content means manually checking and updating every copy. Which nobody does consistently. Which is why your copies drift.&lt;/p&gt;
&lt;h2&gt;One document. That&#39;s it.&lt;/h2&gt;
&lt;p&gt;In this platform, the onboarding guide is one document. Not one per language. One.&lt;/p&gt;
&lt;p&gt;The shared content, the 70-80% that&#39;s the same everywhere, is written once in English and automatically translated into every language your team uses. When someone updates the security policy section in English, it&#39;s retranslated in German, Japanese, Portuguese, and French within seconds. No manual copying. No &amp;quot;someone should update the other versions.&amp;quot;&lt;/p&gt;
&lt;p&gt;The local content lives in its respective language version. The DSGVO training section exists only in the German version. The hanko process exists only in the Japanese version. The CLT requirements exist only in the Portuguese version. These sections are flagged as unique content: they belong to that language and are never overwritten by retranslation.&lt;/p&gt;
&lt;p&gt;We covered exactly how this works in an earlier post about this translation approach. The short version: each paragraph has its own identity. Shared paragraphs are translated and tracked. Unique paragraphs belong to their language and nothing else touches them.&lt;/p&gt;
&lt;p&gt;The result? Your wiki search for &amp;quot;onboarding&amp;quot; returns one result. Just &amp;quot;Onboarding.&amp;quot; Open it in English, you see the English version with all shared content. Open it in German, you see the same shared content in German plus the Germany-specific sections. Open it in Japanese, same shared content in Japanese plus the Japan-specific sections.&lt;/p&gt;
&lt;p&gt;One document. Not five. Not five documents slowly rotting at different speeds.&lt;/p&gt;
&lt;h2&gt;What this actually changes&lt;/h2&gt;
&lt;p&gt;This isn&#39;t just tidier. It fundamentally changes how your documentation works across offices.&lt;/p&gt;
&lt;h3&gt;Updates actually reach everyone&lt;/h3&gt;
&lt;p&gt;When you update the shared part of the onboarding guide, it&#39;s updated in every language. Not eventually, not after someone remembers to do it. Automatically. The paragraph you changed is retranslated. Everything else stays exactly where it was.&lt;/p&gt;
&lt;p&gt;This means your Tokyo office is reading the same company policy as your London office. Not the version from six months ago that nobody got around to updating.&lt;/p&gt;
&lt;h3&gt;Local teams own their local content&lt;/h3&gt;
&lt;p&gt;Your Munich team can add a section about the local gym discount without worrying that it&#39;ll get wiped out by the next English update. Their unique content is theirs. It stays in the German version, untouched by any changes to the English source.&lt;/p&gt;
&lt;p&gt;Same for every other office. Local content is genuinely local. It doesn&#39;t interfere with shared content, and shared content doesn&#39;t interfere with it.&lt;/p&gt;
&lt;h3&gt;New hires get the right information&lt;/h3&gt;
&lt;p&gt;A new hire in São Paulo opens the onboarding guide and sees everything they need. The shared sections (tools, security, values) are in Portuguese. The Brazil-specific sections (CLT, tax docs, meal vouchers) are right there alongside them. One document, everything in their language, nothing missing, nothing outdated.&lt;/p&gt;
&lt;p&gt;They don&#39;t need to know that three other offices have different local sections. They just see their version. Clean and complete.&lt;/p&gt;
&lt;h3&gt;Your page count drops&lt;/h3&gt;
&lt;p&gt;This is the simple math. If you have 50 key documents and you maintain them in 5 languages with the copy-and-localise approach, you have 250 documents. In this platform, you have 50. Each with language versions that share common content and maintain their own local sections.&lt;/p&gt;
&lt;p&gt;250 documents to maintain versus 50. That&#39;s 200 pages of maintenance overhead that just disappears.&lt;/p&gt;
&lt;h2&gt;It&#39;s not just onboarding&lt;/h2&gt;
&lt;p&gt;Onboarding is the obvious example because every global company has this problem. But the same pattern shows up everywhere:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Deployment guides.&lt;/strong&gt; Core steps are the same, but the Berlin team uses a local staging server and Tokyo has a different approval process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compliance documentation.&lt;/strong&gt; GDPR section for Europe, LGPD for Brazil, APPI for Japan. All in the same doc, each appearing only where it&#39;s relevant.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Benefits and HR policies.&lt;/strong&gt; The parental leave policy is different in every country. The company values are the same everywhere.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Customer-facing help docs.&lt;/strong&gt; The product works the same everywhere, but payment methods, support hours, and regional regulations vary.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Every one of these is a document that most companies maintain as separate copies per market. And every one of them could be a single document with shared and local content.&lt;/p&gt;
&lt;h2&gt;The compound effect&lt;/h2&gt;
&lt;p&gt;Here&#39;s where it gets real. A company with 200 documents across 4 markets isn&#39;t maintaining 200 docs. They&#39;re maintaining 800. But they&#39;re staffed for 200. So what actually happens is:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The English versions are current&lt;/li&gt;
&lt;li&gt;The German versions are mostly current&lt;/li&gt;
&lt;li&gt;The French versions are behind&lt;/li&gt;
&lt;li&gt;The Japanese versions are a question mark&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Sound familiar?&lt;/p&gt;
&lt;p&gt;In this platform, they maintain 200 documents. The shared content is automatically translated. The local content is added by local teams. Every version is as current as the English one, plus whatever local additions the regional team has made.&lt;/p&gt;
&lt;p&gt;The translation costs are lower too. When you update one paragraph in English, only that paragraph gets retranslated across all languages. Not the whole document, not all 200 documents. Just the paragraph that actually changed. I wrote about that approach in detail, including the glossary and style rules that make translated content sound natural.&lt;/p&gt;
&lt;h2&gt;A quick gut check&lt;/h2&gt;
&lt;p&gt;If you&#39;re running a global team, ask yourself:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;How many duplicate documents do you have?&lt;/strong&gt; Search for the same topic and count the language-specific copies.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How current are the non-English versions?&lt;/strong&gt; Check the last-edited date on your German, French, or Japanese docs. How far behind are they?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Do local teams add content to their versions?&lt;/strong&gt; Or have they given up because it gets overwritten?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;How long does onboarding take in non-English offices?&lt;/strong&gt; If it&#39;s longer, chances are the documentation isn&#39;t serving them properly.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If the answers make you uncomfortable, you&#39;re not alone. Most companies don&#39;t realise how much overhead they&#39;ve created until they actually count the copies.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Documentation should scale with your company, not multiply. Every copy you maintain is a copy that can fall behind, confuse a new hire, or contradict the version someone else is reading. One document per topic, with shared content translated and local content where it belongs, is how documentation should work in a global company.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Your wiki shouldn&#39;t need five copies of the same document. One is enough. Shared steps translated, local steps per language. That&#39;s it.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#multilingual&quot;&gt;See multilingual publishing in action →&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="multilingual" />
    <category term="documentation" />
    <category term="localisation" />
  </entry>
  <entry>
    <title>The Business Case for Block-Level Localisation</title>
    <link href="https://www.tcdev.de/blog/why-multilingual-knowledge-is-the-key-to-business-success/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/why-multilingual-knowledge-is-the-key-to-business-success/</id>
    <updated>2026-03-24T00:00:00Z</updated>
    <summary>Global teams don&#39;t just need translations. They need knowledge that works in every market, with each language carrying its own structure. Block-level localisation makes that practical.</summary>
    <content type="html">&lt;p&gt;There&#39;s a pattern in every company that operates across borders. The English documentation is solid. The German version is three months behind. The Japanese version was translated once, by a contractor, and nobody has touched it since. The Brazilian Portuguese version doesn&#39;t exist yet, even though São Paulo is the fastest-growing office.&lt;/p&gt;
&lt;p&gt;Everyone agrees this is a problem. Nobody has a good solution. Until now, localisation has been treated as a project, a one-time effort you budget for, execute, and then quietly neglect until the next big overhaul.&lt;/p&gt;
&lt;p&gt;That approach is broken. Here&#39;s why, and what I think actually works.&lt;/p&gt;
&lt;h2&gt;Translation isn&#39;t localisation&lt;/h2&gt;
&lt;p&gt;Let&#39;s get the terminology straight. Translation is taking text in one language and producing equivalent text in another. Localisation is making knowledge work in a specific market. They overlap, but they&#39;re not the same thing.&lt;/p&gt;
&lt;p&gt;A translated document reads correctly. A localised document reads naturally. It accounts for cultural context, regional regulations, local tooling, and the way people in that market actually work.&lt;/p&gt;
&lt;p&gt;This distinction matters because most documentation platforms treat localisation as a translation task. You write in English, press a button, and get output in French. Done. Except it&#39;s not done, because:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The French team has a different deployment process that the English doc doesn&#39;t cover&lt;/li&gt;
&lt;li&gt;German compliance requirements add an extra approval step that doesn&#39;t exist elsewhere&lt;/li&gt;
&lt;li&gt;The Japanese office uses a different internal tool for the same workflow&lt;/li&gt;
&lt;li&gt;Brazilian Portuguese readers need context about local tax rules that aren&#39;t relevant anywhere else&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;A straight translation of the English doc is technically correct in all of these cases, and practically useless in all of them too.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;The problem with document-level translation&lt;/h2&gt;
&lt;p&gt;Traditional localisation works at the document level. You have an English document. You translate the entire thing into German. When the English version changes, you send the entire thing for retranslation. This creates three problems:&lt;/p&gt;
&lt;h3&gt;1. It&#39;s expensive&lt;/h3&gt;
&lt;p&gt;If your onboarding guide has 15 sections and you change one paragraph, you&#39;re paying to retranslate all 15 sections. Multiply that by 8 languages and every edit becomes a budget conversation.&lt;/p&gt;
&lt;h3&gt;2. It&#39;s slow&lt;/h3&gt;
&lt;p&gt;Sending complete documents for translation takes time. Even with modern machine translation, the review cycle for a full document is significantly longer than reviewing a single changed section. Teams in other languages are always running behind.&lt;/p&gt;
&lt;h3&gt;3. It doesn&#39;t support unique content&lt;/h3&gt;
&lt;p&gt;This is the real killer. If the German version needs an extra section about DSGVO compliance, where does it go? In a document-level translation system, any content added to the German version gets overwritten the next time someone retranslates from English. The German team learns fast: don&#39;t add anything, because it&#39;ll be wiped out.&lt;/p&gt;
&lt;h2&gt;Block-level localisation: a different architecture&lt;/h2&gt;
&lt;p&gt;This platform doesn&#39;t translate documents. It translates blocks: individual paragraphs, headings, and sections, each tracked independently with its own identity and content hash.&lt;/p&gt;
&lt;p&gt;Here&#39;s what this means in practice:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When you edit a single paragraph in English&lt;/strong&gt;, this platform detects which block changed by comparing SHA-256 content hashes. Only that one block is sent for translation via DeepL. The other 14 blocks in the document stay exactly as they were. Your translation cost drops by up to 94%.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When the German translator needs to add a DSGVO section&lt;/strong&gt;, they add it as a new block in the German version. That block exists only in German. It doesn&#39;t affect the English source. It doesn&#39;t get overwritten when English changes. It&#39;s flagged as unique content so everyone knows it&#39;s intentional.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;When the Japanese version needs a different structure&lt;/strong&gt;, say, a numbered list instead of bullet points because that&#39;s the convention in Japanese technical writing, the translator can change the block type. The system tracks this as a &amp;quot;structure adaptation&amp;quot; and preserves it across future updates.&lt;/p&gt;
&lt;p&gt;Each language version becomes a first-class document, not a shadow copy.&lt;/p&gt;
&lt;h2&gt;How it works, technically&lt;/h2&gt;
&lt;p&gt;Every block in this platform has:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;A UUID&lt;/strong&gt; that persists across all edits and translations&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A content hash&lt;/strong&gt; (SHA-256) that changes when the text changes&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A position index&lt;/strong&gt; so blocks stay in the right order&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;A soft-delete flag&lt;/strong&gt; so removing a block in English doesn&#39;t break alignment in other languages&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When a translation block is created, it stores the source block&#39;s content hash. On every save, the system compares hashes. If they match, the translation is current. If they don&#39;t, the translation is marked as stale, and only that specific block needs attention.&lt;/p&gt;
&lt;p&gt;This is the mechanism behind the 94% cost reduction. Most edits change one or two sections. The rest of the document, across all languages, stays untouched.&lt;/p&gt;
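&lt;p&gt;The staleness check itself is tiny; roughly (a sketch, with names assumed from the examples in this series):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// On every save: compare the hash stored at translation time
// against the source block&#39;s current hash. A mismatch means
// only this block needs attention.
var isCurrent = translationBlock.SourceContentHash == sourceBlock.ContentHash;
translationBlock.Status = isCurrent
    ? TranslationStatus.UpToDate
    : TranslationStatus.Stale;
&lt;/code&gt;&lt;/pre&gt;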
&lt;h2&gt;Unique content per language&lt;/h2&gt;
&lt;p&gt;This is where things get genuinely different from any other platform.&lt;/p&gt;
&lt;p&gt;In this platform, each language version can contain:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Translated blocks.&lt;/strong&gt; Direct translations of the source language, tracked for staleness&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Unique blocks.&lt;/strong&gt; Content that exists only in that language, added by the local team&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Structure-adapted blocks.&lt;/strong&gt; Same source content, different formatting or block type&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A single document might look like this across languages:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Block&lt;/th&gt;
&lt;th&gt;English (source)&lt;/th&gt;
&lt;th&gt;German&lt;/th&gt;
&lt;th&gt;Japanese&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;Introduction&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;td&gt;Setup steps&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;td&gt;Structure adapted (numbered list)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;3&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;DSGVO compliance (unique)&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;Deployment&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;-&lt;/td&gt;
&lt;td&gt;Local tooling note (unique)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;Troubleshooting&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;td&gt;Translated&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Every team gets exactly the documentation they need. No compromises. No workarounds. No one-size-fits-all limitations.&lt;/p&gt;
&lt;h2&gt;Freshness tracking across languages&lt;/h2&gt;
&lt;p&gt;Each language version tracks its own freshness independently. The English source might score 94 (recently reviewed, all links valid, high readership). The French version might score 71 (two stale blocks, one broken link specific to the French content). The Japanese version might score 88 (all translations current, but readership is declining).&lt;/p&gt;
&lt;p&gt;This per-language freshness tracking means:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;You know exactly which languages need attention&lt;/li&gt;
&lt;li&gt;Stale translations are surfaced automatically, not discovered by accident&lt;/li&gt;
&lt;li&gt;AI tools can factor in language-specific freshness when serving answers&lt;/li&gt;
&lt;li&gt;Dashboards show content health broken down by language, not just by document&lt;/li&gt;
&lt;/ul&gt;
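&lt;p&gt;The scoring formula isn&#39;t spelled out here, but conceptually it&#39;s a weighted blend of those signals. A purely hypothetical sketch, with invented weights and signal names:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Hypothetical: weights and inputs are illustrative only.
// Each input is normalised to the range 0.0 to 1.0.
double FreshnessScore(double reviewRecency, double linkHealth,
                      double translationCurrency, double readership)
{
    return 100 * (0.35 * reviewRecency        // 1.0 = just reviewed
                + 0.25 * linkHealth           // share of links that resolve
                + 0.25 * translationCurrency  // share of blocks not stale
                + 0.15 * readership);         // normalised view trend
}
&lt;/code&gt;&lt;/pre&gt;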
&lt;h2&gt;The business case&lt;/h2&gt;
&lt;p&gt;Companies that operate across languages face a simple reality: your documentation is either an asset or a liability in every market you serve.&lt;/p&gt;
&lt;p&gt;When your Berlin team is working from a German translation that&#39;s three months behind the English source, they&#39;re making decisions based on outdated information. When your Tokyo office can&#39;t add local context to shared docs because the translation system would overwrite it, they stop using the wiki and create their own shadow documentation. When your São Paulo team doesn&#39;t have docs in Portuguese at all, onboarding takes twice as long.&lt;/p&gt;
&lt;p&gt;The cost isn&#39;t just translation fees. It&#39;s:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Slower onboarding&lt;/strong&gt; in non-English markets&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Duplicated effort&lt;/strong&gt; as teams maintain parallel documentation in local tools&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Knowledge silos&lt;/strong&gt; that form when the official wiki doesn&#39;t serve everyone&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Compliance risk&lt;/strong&gt; when region-specific requirements aren&#39;t captured&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Lost trust&lt;/strong&gt; in the documentation system itself&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Block-level localisation solves all of these, not by making translation cheaper (though it does), but by making every language version a living, maintained, trustworthy document.&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;If you&#39;re running a multilingual team on any documentation platform today, here&#39;s a quick gut check:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Pick your most important document.&lt;/strong&gt; Check it in every language. Is each version current?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Ask your non-English teams:&lt;/strong&gt; do they trust the translated docs? Do they use them?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Look for shadow documentation.&lt;/strong&gt; Are teams maintaining local wikis, Notion pages, or Slack pinned messages because the official docs don&#39;t serve them?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Calculate your translation spend.&lt;/strong&gt; How much are you paying per update, and how much of that is retranslating content that didn&#39;t change?&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;If the answers are uncomfortable, you&#39;re not alone. Most companies don&#39;t discover the gap until it causes a real problem: a compliance issue, a botched deployment, a new hire who spent two weeks following outdated instructions.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Multilingual knowledge isn&#39;t a nice-to-have. For any company that operates across borders, it&#39;s the foundation of how teams align, make decisions, and ship. The question is whether your documentation platform treats it that way.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Every language deserves to be a first-class citizen in your knowledge base. Not a copy. Not a shadow. A real, maintained, trusted document.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That is what this approach delivers: block-level translation, unique content per language, independent freshness tracking, and major translation cost reductions. All automatic. All from day one.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#multilingual&quot;&gt;See multilingual publishing in action →&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="multilingual" />
    <category term="localisation" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Content Freshness, Part 2: Beyond Expiry Dates</title>
    <link href="https://www.tcdev.de/blog/expiry-dates-are-just-not-enough/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/expiry-dates-are-just-not-enough/</id>
    <updated>2026-03-18T00:00:00Z</updated>
    <summary>Expiry dates solve accountability. But a document can go stale in a hundred ways between reviews. Part 2 explains how continuous freshness monitoring fills the gap.</summary>
    <content type="html">&lt;p&gt;&lt;em&gt;This is Part 2 of our content freshness series. &lt;a href=&quot;https://www.tcdev.de/en/blog/why-freshness-matters-more-than-ever/&quot;&gt;Part 1&lt;/a&gt; covers why freshness matters and what it actually means. This post picks up where it left off: why expiry dates alone aren&#39;t enough, and what continuous monitoring looks like.&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Let&#39;s say you do the responsible thing. Every document in your wiki gets a review date. Six months from creation, maybe twelve for stable reference material. When the date arrives, the owner gets a notification: review this or it gets flagged.&lt;/p&gt;
&lt;p&gt;That&#39;s better than what most companies do. Most companies do nothing. The doc sits there, slowly decaying, and nobody notices until someone follows the instructions and something breaks.&lt;/p&gt;
&lt;p&gt;But here&#39;s the uncomfortable truth: &lt;strong&gt;expiry dates are necessary but completely insufficient.&lt;/strong&gt; I&#39;ve seen documents go dangerously stale days after their last review, and a review date won&#39;t catch it.&lt;/p&gt;
&lt;h2&gt;What expiry dates actually solve&lt;/h2&gt;
&lt;p&gt;Expiry dates solve the accountability problem. They answer the question: &lt;em&gt;&amp;quot;Who is responsible for confirming this is still accurate, and when?&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That&#39;s genuinely valuable. Without it, documentation enters what we call the ownership void, a state where everyone assumes someone else is maintaining it, so nobody does. Setting a review date assigns a single person a single obligation on a specific date. Simple. Clear. Effective.&lt;/p&gt;
&lt;p&gt;Here&#39;s what expiry dates look like in practice:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A document is created with a review date 90 days out&lt;/li&gt;
&lt;li&gt;14 days before expiry, the owner gets notified&lt;/li&gt;
&lt;li&gt;On the expiry date, the document is flagged as &amp;quot;needs review&amp;quot;&lt;/li&gt;
&lt;li&gt;The owner reviews, confirms it&#39;s still accurate, and extends the date&lt;/li&gt;
&lt;li&gt;Or they update it, or reassign it, or archive it&lt;/li&gt;
&lt;/ul&gt;
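&lt;p&gt;The lifecycle above fits in a few lines. This is an illustrative sketch, not the platform&#39;s actual implementation; the 90-day and 14-day windows come from the list, while the function and state names are assumptions.&lt;/p&gt;

```python
import operator
from datetime import date, timedelta

# Sketch of the expiry lifecycle above. The 90-day and 14-day windows come
# from the post; the function and state names are illustrative assumptions.
NOTIFY_WINDOW = timedelta(days=14)

def expiry_status(review_date, today):
    """Classify a document's expiry state on a given day."""
    if operator.ge(today, review_date):
        return "needs_review"            # flagged on the expiry date
    if operator.ge(today, review_date - NOTIFY_WINDOW):
        return "notify_owner"            # heads-up 14 days before expiry
    return "ok"

review = date(2026, 1, 1) + timedelta(days=90)   # review date 90 days out
print(expiry_status(review, date(2026, 1, 15)))  # ok
print(expiry_status(review, date(2026, 3, 20)))  # notify_owner
print(expiry_status(review, date(2026, 4, 2)))   # needs_review
```

&lt;p&gt;The review itself, extending the date, updating, reassigning, or archiving, is the human part the schedule exists to trigger.&lt;/p&gt;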
&lt;p&gt;This is a solid system. It catches the slow decay, the doc that nobody has thought about in a year. It creates a regular cadence of review. It makes ownership visible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;But it has a blind spot the size of a continent.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What expiry dates miss&lt;/h2&gt;
&lt;p&gt;Between review dates, a document lives in a black box. You reviewed it on January 15. The next review is April 15. On February 3, any of these things could happen:&lt;/p&gt;
&lt;h3&gt;Links break silently&lt;/h3&gt;
&lt;p&gt;An external URL you referenced returns a 404. An internal link points to a document that was archived. A code repository was renamed and every GitHub link in your doc is now dead. Your document still looks fine. The expiry date isn&#39;t for another two months. Nobody knows the links are broken.&lt;/p&gt;
&lt;h3&gt;Related content changes&lt;/h3&gt;
&lt;p&gt;You wrote a deployment guide that references your architecture document. In February, someone completely rewrites the architecture doc. New patterns, new infrastructure, new conventions. Your deployment guide still references the old architecture. It&#39;s not technically wrong yet, but it&#39;s drifting. By the time your review date arrives, the gap might be significant.&lt;/p&gt;
&lt;h3&gt;Readership drops to zero&lt;/h3&gt;
&lt;p&gt;Your document used to be read by 40 people a month. Then a process changed and nobody needs it anymore, but nobody archived it either. It sits in search results, taking up space, occasionally confusing a new hire who doesn&#39;t know it&#39;s irrelevant. The expiry date doesn&#39;t care about readership. It&#39;ll ping the owner on schedule regardless.&lt;/p&gt;
&lt;h3&gt;Translations fall behind&lt;/h3&gt;
&lt;p&gt;The English source was updated on February 10. The French, German, and Japanese translations are now stale. But the expiry date on those translated versions isn&#39;t until May. For three months, non-English teams are reading outdated content and don&#39;t know it.&lt;/p&gt;
&lt;h3&gt;Readers flag problems&lt;/h3&gt;
&lt;p&gt;A reader leaves a comment: &amp;quot;Step 3 doesn&#39;t work anymore, the CLI flag was deprecated.&amp;quot; That comment sits there. The expiry date is still weeks away. The next person who reads the doc might not see the comment. The one after that definitely won&#39;t.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Expiry is a scheduled checkpoint. These are unscheduled events. The gap between the two is where stale documentation does the most damage.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Freshness: continuous monitoring&lt;/h2&gt;
&lt;p&gt;Freshness scoring fills the gap that expiry dates leave open. Instead of checking a document&#39;s health once every 90 days, freshness tracks it continuously. Every day, in the background, without anyone needing to do anything.&lt;/p&gt;
&lt;p&gt;Here&#39;s how it works in this platform:&lt;/p&gt;
&lt;p&gt;Every document gets a live freshness score from 0 to 100, calculated from multiple signals:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Signal&lt;/th&gt;
&lt;th&gt;What it detects&lt;/th&gt;
&lt;th&gt;Why it matters&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Link health&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Broken, redirected, or unreachable URLs&lt;/td&gt;
&lt;td&gt;Broken links erode trust and waste time&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Review status&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Whether the doc has been reviewed on schedule&lt;/td&gt;
&lt;td&gt;The baseline accountability check&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Readership trends&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Whether anyone is actually reading this&lt;/td&gt;
&lt;td&gt;Low readership suggests the doc may be irrelevant&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Edit recency&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;When the doc was last modified vs. related content&lt;/td&gt;
&lt;td&gt;Detects drift relative to the surrounding knowledge base&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Translation alignment&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Whether all language versions are current&lt;/td&gt;
&lt;td&gt;Stale translations mean teams in other markets work from old info&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Reader flags&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Whether readers have reported issues&lt;/td&gt;
&lt;td&gt;Crowdsourced staleness detection&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;strong&gt;Cross-references&lt;/strong&gt;&lt;/td&gt;
&lt;td&gt;Whether documents this one links to are themselves stale&lt;/td&gt;
&lt;td&gt;Staleness is contagious&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Each signal contributes to the overall score. A document can lose freshness points for a broken link today, even though its review date isn&#39;t for weeks. That&#39;s the whole point.&lt;/p&gt;
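&lt;p&gt;As a sketch, the composite score is just a weighted combination of per-signal health. The weights below are illustrative assumptions; the platform&#39;s real formula is not published.&lt;/p&gt;

```python
# Illustrative weights for the seven signals in the table; the platform's
# real formula is not published, so treat these numbers as assumptions.
WEIGHTS = {
    "link_health": 0.20,
    "review_status": 0.25,
    "readership": 0.15,
    "edit_recency": 0.15,
    "translation_alignment": 0.15,
    "reader_flags": 0.05,
    "cross_references": 0.05,
}

def freshness_score(signals):
    """Combine per-signal health values (each 0..1) into a 0..100 score."""
    total = sum(WEIGHTS[name] * signals.get(name, 1.0) for name in WEIGHTS)
    return round(100 * total)

# A doc with some broken links and softening readership, everything else healthy:
doc = {"link_health": 0.67, "review_status": 1.0, "readership": 0.9}
print(freshness_score(doc))  # 92
```

&lt;p&gt;The useful property is that any single signal can move the score today, without waiting for a review date.&lt;/p&gt;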
&lt;h2&gt;How the two work together&lt;/h2&gt;
&lt;p&gt;Expiry and freshness aren&#39;t competing approaches. They&#39;re complementary layers:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Expiry dates&lt;/strong&gt; are the governance layer. They create a regular cadence of human review. Someone has to look at this document on a schedule and confirm it&#39;s still accurate. This catches the things automation can&#39;t: whether the &lt;em&gt;content&lt;/em&gt; is still correct, whether the advice is still sound, whether the process it describes still reflects reality.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Freshness scoring&lt;/strong&gt; is the monitoring layer. It catches everything between review dates: the broken links, the translation drift, the abandoned documents, the contextual decay that happens when the world moves and a document doesn&#39;t.&lt;/p&gt;
&lt;p&gt;Together they create a system where:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;Every document is reviewed by a human on a regular schedule (expiry)&lt;/li&gt;
&lt;li&gt;Between reviews, automated signals catch problems as they happen (freshness)&lt;/li&gt;
&lt;li&gt;Both systems feed into a single trust score that everyone can see&lt;/li&gt;
&lt;li&gt;That score affects how the document ranks in search and whether AI tools use it as a source&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The scoring impact&lt;/h2&gt;
&lt;p&gt;Here&#39;s where it gets practical. In this platform, a document&#39;s freshness score directly affects its visibility:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Score 80–100:&lt;/strong&gt; Full visibility. Appears normally in search results. Eligible as a source for AI answers. No flags.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Score 50–79:&lt;/strong&gt; Reduced visibility. Appears in search with a staleness indicator. AI tools may deprioritise it as a source. Owner is notified.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Score below 50:&lt;/strong&gt; Flagged. Pushed down in search results significantly. Excluded from AI answers entirely. Owner receives urgent notification.&lt;/li&gt;
&lt;/ul&gt;
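&lt;p&gt;The three tiers above amount to a simple lookup, sketched here with the thresholds from the list:&lt;/p&gt;

```python
import operator

# The three visibility tiers from the list above, as a lookup function.
def visibility(score):
    if operator.ge(score, 80):
        return "full"      # normal ranking, eligible for AI answers
    if operator.ge(score, 50):
        return "reduced"   # staleness indicator, owner notified
    return "flagged"       # pushed down in search, excluded from AI answers

print(visibility(92))  # full
print(visibility(74))  # reduced
print(visibility(43))  # flagged
```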
&lt;p&gt;This creates a feedback loop. When a document&#39;s score drops, the owner is pushed to fix it, not because an arbitrary date arrived, but because something actually changed. The broken link, the stale translation, the declining readership, these are real signals that demand attention now, not in six weeks.&lt;/p&gt;
&lt;h2&gt;A practical example&lt;/h2&gt;
&lt;p&gt;Let&#39;s walk through a scenario:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 1:&lt;/strong&gt; Your &amp;quot;Incident Response Playbook&amp;quot; scores 92. It was reviewed two weeks ago, all links are valid, readership is high, and all four language versions are current.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 8:&lt;/strong&gt; Someone restructures the engineering status page. Three URLs in the playbook now redirect. Freshness score drops to 78. The owner gets a notification: &amp;quot;3 broken links detected.&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 10:&lt;/strong&gt; The owner fixes the links. Score rebounds to 89.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 15:&lt;/strong&gt; The English version is updated with a new escalation path. The French and German translations are now stale (content hash mismatch). Score drops to 74.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 17:&lt;/strong&gt; The translations are updated. Score returns to 91.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;March 20:&lt;/strong&gt; Readership data shows the Japanese version hasn&#39;t been accessed in 30 days. Score dips to 86. A subtle signal, but tracked.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;April 1:&lt;/strong&gt; The scheduled review date arrives. The owner reviews the content, confirms it&#39;s accurate, extends the expiry to July 1. Score stays at 86 because the readership signal is still present.&lt;/p&gt;
&lt;p&gt;At no point did the team wait for a review date to catch a problem. The freshness system flagged issues within days. The review date provided the governance checkpoint. Both layers doing their job.&lt;/p&gt;
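&lt;p&gt;The &amp;quot;content hash mismatch&amp;quot; from March 15 is worth a closer look, since it&#39;s the cheapest of these checks. A minimal sketch, assuming each translation stores a hash of the source blocks it was produced from (the names here are illustrative, not a real API):&lt;/p&gt;

```python
import hashlib

# Sketch of the "content hash mismatch" check from March 15: each translation
# stores a hash of the source blocks it was translated from, so a later source
# edit is detectable without diffing any text. Names are illustrative.
def block_hash(blocks):
    return hashlib.sha256("\n".join(blocks).encode("utf-8")).hexdigest()

source_blocks = ["Page the on-call engineer.", "Open an incident channel."]
french = {"source_hash": block_hash(source_blocks)}  # captured at translation time

# The English source gains a new escalation path...
source_blocks.append("Escalate to the duty manager after 15 minutes.")

stale = french["source_hash"] != block_hash(source_blocks)
print(stale)  # True: the translation no longer matches its source
```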
&lt;h2&gt;Why &amp;quot;just set a review date&amp;quot; isn&#39;t enough anymore&lt;/h2&gt;
&lt;p&gt;Five years ago, expiry dates might have been sufficient. Documentation was read by people, and people can exercise judgement. If a doc looked a bit off, they&#39;d ask around.&lt;/p&gt;
&lt;p&gt;Today, documentation is infrastructure. It feeds AI tools, onboarding automation, compliance systems, and search engines that serve results without context. These systems don&#39;t exercise judgement. They consume content as-is and redistribute it at scale.&lt;/p&gt;
&lt;p&gt;A document with broken links and stale translations that still has three weeks until its review date can do a lot of damage in those three weeks, especially if an AI assistant is confidently serving answers based on it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Expiry dates are the minimum viable approach to documentation governance. Freshness scoring is what you need when documentation is consumed by systems that can&#39;t think for themselves.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;Getting started&lt;/h2&gt;
&lt;p&gt;If you already have expiry dates on your documents (good for you, seriously, most teams don&#39;t even do that), here&#39;s how to layer on freshness:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Start tracking links.&lt;/strong&gt; Run a broken link check across your top 50 documents. The number will probably surprise you.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Check translation alignment.&lt;/strong&gt; If you have multilingual docs, compare last-edit dates between the source and translations. How many are more than a month behind?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Look at readership.&lt;/strong&gt; Which documents get zero traffic? Are they still needed, or should they be archived?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Talk to your AI team.&lt;/strong&gt; If you have an internal AI assistant, ask what documents it&#39;s sourcing from. Then check the freshness of those documents.&lt;/li&gt;
&lt;/ol&gt;
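&lt;p&gt;Step 1 doesn&#39;t need tooling to get started. A minimal link check looks something like this; the status lookup is injected here so the sketch runs offline, but in practice it would be an HTTP HEAD request per URL:&lt;/p&gt;

```python
import re

# A minimal version of step 1: pull URLs out of a document and report the
# ones that fail. `fetch_status` is injected so the check is testable
# offline; in practice it would issue an HTTP HEAD request per URL.
URL_RE = re.compile(r"https?://[^\s\"')]+")

def broken_links(text, fetch_status):
    """Return the URLs in `text` whose status is not 200."""
    return [url for url in URL_RE.findall(text) if fetch_status(url) != 200]

doc = "See https://example.com/guide and https://example.com/old-page for details."
statuses = {"https://example.com/guide": 200, "https://example.com/old-page": 404}
print(broken_links(doc, statuses.get))  # ['https://example.com/old-page']
```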
&lt;p&gt;You&#39;ll likely find that your technically-not-expired documents have plenty of problems that expiry dates will never catch.&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;Expiry dates tell you if someone has checked a document recently. Freshness tells you if the document is actually healthy right now. One is a calendar event. The other is a living signal.&lt;/p&gt;
&lt;p&gt;You need both. But if you only have expiry dates, you&#39;re flying blind between checkpoints.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;A document doesn&#39;t go stale on its review date. It goes stale the moment something changes and nobody notices. Freshness scoring notices.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This platform combines mandatory expiry dates with continuous freshness monitoring. Every document earns its trust score, or loses it, in real time. No waiting, no blind spots, no surprises at review time.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#freshness&quot;&gt;See how freshness scoring works →&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is Part 2 of a two-part series. If you haven&#39;t read it yet, start with &lt;a href=&quot;https://www.tcdev.de/en/blog/why-freshness-matters-more-than-ever/&quot;&gt;Part 1: The Metric Your Team Isn&#39;t Tracking&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="freshness" />
    <category term="expiry" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Content Freshness, Part 1: The Metric Your Team Isn&#39;t Tracking</title>
    <link href="https://www.tcdev.de/blog/why-freshness-matters-more-than-ever/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/why-freshness-matters-more-than-ever/</id>
    <updated>2026-03-16T00:00:00Z</updated>
    <summary>Your documentation might be technically correct today. But in six months, who checks? Freshness is about to become the most important signal in your knowledge base.</summary>
    <content type="html">&lt;p&gt;There&#39;s a moment every engineering team has experienced. Someone finds a document on the internal wiki, follows the instructions, and something breaks. They message the channel: &lt;em&gt;&amp;quot;Is this still accurate?&amp;quot;&lt;/em&gt; Nobody knows. The person who wrote it left eight months ago. The doc says it was last edited in 2024.&lt;/p&gt;
&lt;p&gt;This is the freshness problem. And it&#39;s getting worse.&lt;/p&gt;
&lt;h2&gt;The old contract is breaking down&lt;/h2&gt;
&lt;p&gt;For a long time, documentation had an implicit contract: someone writes it, everyone trusts it, and occasionally someone updates it. Maybe. That contract worked, barely, when docs were consumed only by people who could apply judgement. If a setup guide looked a bit off, a senior engineer would just adapt on the fly.&lt;/p&gt;
&lt;p&gt;But that world is over. Today your documentation isn&#39;t just read by humans. It&#39;s consumed by AI tools, internal chatbots, onboarding automation, and search systems that treat every word as equivalent truth. An AI assistant doesn&#39;t squint at a doc and think &lt;em&gt;&amp;quot;hmm, this looks a bit dated.&amp;quot;&lt;/em&gt; It reads the text, processes it as fact, and serves it with full confidence.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Stale documentation plus AI equals confidently wrong answers at scale.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What freshness actually means&lt;/h2&gt;
&lt;p&gt;Freshness isn&#39;t just &amp;quot;when was this last edited.&amp;quot; A doc could be edited yesterday and still reference a deprecated API. True freshness is a composite signal:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Review status.&lt;/strong&gt; Has someone explicitly confirmed this is still accurate?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Link health.&lt;/strong&gt; Are the URLs inside the doc still resolving?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Readership.&lt;/strong&gt; Is anyone actually using this, or has it been abandoned?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Contextual drift.&lt;/strong&gt; Have related documents changed while this one stayed the same?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translation alignment.&lt;/strong&gt; If this exists in five languages, are all of them up to date?&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Community signals.&lt;/strong&gt; Have readers flagged this as outdated?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Each of these tells you something different. Together, they give you a trust score: a single number that represents how much confidence you should place in a piece of content right now.&lt;/p&gt;
&lt;h2&gt;Why this matters now, specifically&lt;/h2&gt;
&lt;p&gt;Three things have converged to make freshness urgent:&lt;/p&gt;
&lt;h3&gt;1. AI is consuming your knowledge base&lt;/h3&gt;
&lt;p&gt;Whether you&#39;ve deployed an internal RAG system, use Copilot in your IDE, or have an AI assistant answering questions from your docs, the quality of the source material directly determines the quality of the output. Garbage in, garbage out has never been more literal.&lt;/p&gt;
&lt;p&gt;When a developer asks your AI assistant &amp;quot;how do I deploy to staging?&amp;quot; and it answers using a two-year-old runbook that references infrastructure you&#39;ve since migrated, the cost isn&#39;t just a wrong answer. It&#39;s lost trust in the entire system.&lt;/p&gt;
&lt;h3&gt;2. Teams are more distributed than ever&lt;/h3&gt;
&lt;p&gt;A team in Berlin, another in São Paulo, a third in Tokyo. All reading the same documentation, often in different languages. When the English source goes stale, every translation built on top of it goes stale too, but nobody notices because the translations are maintained separately, if at all.&lt;/p&gt;
&lt;h3&gt;3. Compliance and audit pressure is increasing&lt;/h3&gt;
&lt;p&gt;Regulated industries are starting to ask: &amp;quot;Can you prove this documentation was current at the time it was referenced?&amp;quot; If your answer is &amp;quot;well, someone probably checked it,&amp;quot; that&#39;s not going to hold up.&lt;/p&gt;
&lt;h2&gt;What a freshness-first approach looks like&lt;/h2&gt;
&lt;p&gt;The core idea is simple: &lt;strong&gt;every document must continuously earn the right to be trusted.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;This means:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Mandatory review dates.&lt;/strong&gt; Every document gets an expiry date when it&#39;s created. No exceptions. When the date arrives, the owner is notified, and the document is flagged until someone explicitly re-approves it.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Automated health monitoring.&lt;/strong&gt; In the background, the system continuously checks for broken links, readership trends, and contextual changes. These signals feed into a live score that updates without anyone having to do anything.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Freshness affects visibility.&lt;/strong&gt; This is the key mechanism. A high-scoring document surfaces to the top of search results and is eligible to be used as a source for AI answers. A low-scoring document drops in ranking. If it falls below a threshold, it&#39;s excluded from AI answers entirely.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Transparency.&lt;/strong&gt; Everyone can see why a document scored the way it did. Broken links, overdue review, low readership, the signals are visible, not hidden in a backend report nobody reads.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h2&gt;The cost of doing nothing&lt;/h2&gt;
&lt;p&gt;Here&#39;s what happens when you don&#39;t track freshness:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;New hires follow outdated onboarding docs and spend their first week confused&lt;/li&gt;
&lt;li&gt;AI tools serve wrong answers and nobody understands why&lt;/li&gt;
&lt;li&gt;Compliance docs silently go stale and create audit risk&lt;/li&gt;
&lt;li&gt;Translations drift out of sync and teams in different regions work from different versions of reality&lt;/li&gt;
&lt;li&gt;Engineers stop trusting the wiki entirely and fall back to Slack messages, which creates its own knowledge silo&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The compound cost of stale documentation is enormous, but it&#39;s invisible until something breaks.&lt;/p&gt;
&lt;h2&gt;A practical starting point&lt;/h2&gt;
&lt;p&gt;You don&#39;t need to overhaul everything at once. Start with these:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Audit your top 20 most-read documents.&lt;/strong&gt; When were they last reviewed? Are the links still valid? Is the content still accurate?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Set review dates.&lt;/strong&gt; Even if you do nothing else, putting a &amp;quot;review by&amp;quot; date on every document creates accountability.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Track what your AI tools are sourcing.&lt;/strong&gt; If you have an internal AI assistant, look at what documents it&#39;s pulling from. Are those documents current?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Make freshness visible.&lt;/strong&gt; Put the score where people can see it, next to the document title, in search results, in the sidebar. Visibility creates pressure to maintain.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;hr /&gt;
&lt;p&gt;Documentation freshness isn&#39;t a feature. It&#39;s a fundamental shift in how we think about organisational knowledge. In a world where AI tools consume and redistribute your docs at scale, the question isn&#39;t whether you can afford to care about freshness. It&#39;s whether you can afford not to.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Every document should have to prove it&#39;s still worth trusting. Not once. Continuously.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That is the direction modern documentation platforms should move toward: freshness as a foundation, not an afterthought. Review enforcement, live health scoring, freshness-weighted search, and AI answers that only use sources you can trust.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#freshness&quot;&gt;See how it works →&lt;/a&gt;&lt;/p&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is Part 1 of a two-part series. In &lt;a href=&quot;https://www.tcdev.de/en/blog/expiry-dates-are-just-not-enough/&quot;&gt;Part 2: Beyond Expiry Dates&lt;/a&gt;, we explore how continuous freshness monitoring fills the gaps that review dates leave open.&lt;/em&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="freshness" />
    <category term="ai" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Teach Your AI to Ignore Stale Documentation</title>
    <link href="https://www.tcdev.de/blog/ai-just-fetches-everything-stop-that/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/ai-just-fetches-everything-stop-that/</id>
    <updated>2026-03-12T00:00:00Z</updated>
    <summary>Your AI assistant treats a document reviewed last week the same as one nobody has touched in two years. Content governance fixes that.</summary>
    <content type="html">&lt;p&gt;Here&#39;s what happens when you deploy an AI assistant on top of your internal knowledge base:&lt;/p&gt;
&lt;p&gt;A new engineer asks: &amp;quot;How do I set up the staging environment?&amp;quot;&lt;/p&gt;
&lt;p&gt;The AI searches your documentation, finds three relevant documents, synthesises an answer, and presents it with confidence. The engineer follows the instructions. The first two steps work. Step three references a CLI tool that was deprecated six months ago. Step four describes an infrastructure setup that was replaced during a migration nobody documented.&lt;/p&gt;
&lt;p&gt;The engineer is stuck. They message the team channel. Someone says: &amp;quot;Oh, that doc is really old.&amp;quot; The AI didn&#39;t know that. It can&#39;t know that. It just fetched everything it found and presented it as truth.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;This is the default behaviour of every RAG system, every AI search tool, and every LLM-powered assistant you&#39;ve ever used on internal docs. They fetch everything. They don&#39;t discriminate. They can&#39;t tell fresh from stale.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;And it&#39;s destroying trust in AI tools faster than those tools can build it.&lt;/p&gt;
&lt;h2&gt;Why AI assistants are blind to quality&lt;/h2&gt;
&lt;p&gt;Large language models and retrieval-augmented generation (RAG) systems work by finding text that&#39;s semantically relevant to a query, then using that text to generate an answer. The relevance matching is usually excellent. Vector search and embeddings are genuinely good at finding content that relates to a question.&lt;/p&gt;
&lt;p&gt;But relevance isn&#39;t the same as reliability.&lt;/p&gt;
&lt;p&gt;A document written in 2023 about your Kubernetes deployment process is highly relevant to the question &amp;quot;how do I deploy to production?&amp;quot; It&#39;s also completely wrong if you migrated to a different platform in 2024. The AI sees relevant text. It doesn&#39;t see a document that&#39;s 18 months out of date with broken links and zero readership.&lt;/p&gt;
&lt;p&gt;Most AI systems have exactly one ranking signal: &lt;strong&gt;semantic similarity to the query.&lt;/strong&gt; That&#39;s it. They don&#39;t check:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;When was this document last reviewed?&lt;/li&gt;
&lt;li&gt;Are the links inside it still valid?&lt;/li&gt;
&lt;li&gt;Is anyone actually reading this document?&lt;/li&gt;
&lt;li&gt;Has the content been flagged by readers as outdated?&lt;/li&gt;
&lt;li&gt;Is this a draft, an archived page, or a current document?&lt;/li&gt;
&lt;li&gt;If this exists in multiple languages, are the translations current?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Without these signals, the AI is doing keyword matching with extra steps. Impressive keyword matching, yes, but fundamentally incapable of telling you whether the answer it&#39;s giving is based on content you can trust.&lt;/p&gt;
&lt;h2&gt;The confidence problem&lt;/h2&gt;
&lt;p&gt;This wouldn&#39;t be as dangerous if AI tools presented uncertain answers with appropriate caveats. But they don&#39;t. That&#39;s not how LLMs work. They generate fluent, confident text regardless of whether the source material is current or ancient.&lt;/p&gt;
&lt;p&gt;A human reading a wiki article might notice it looks dated. The page layout is old. The screenshots show a UI that no longer exists. There&#39;s a comment at the bottom saying &amp;quot;this is outdated.&amp;quot; A human can apply judgement.&lt;/p&gt;
&lt;p&gt;An AI can&#39;t. It reads the text, processes it as equivalent to any other text, and generates an answer that sounds authoritative. The user, especially a new hire who doesn&#39;t know what the current process looks like, has no reason to doubt it.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;The more confident the AI sounds, the more damage stale source material does.&lt;/strong&gt;&lt;/p&gt;
&lt;h2&gt;What the AI actually needs&lt;/h2&gt;
&lt;p&gt;For an AI assistant to give trustworthy answers from your knowledge base, it needs more than text and embeddings. It needs metadata that tells it which documents are worth using as sources. Specifically:&lt;/p&gt;
&lt;h3&gt;1. Freshness score&lt;/h3&gt;
&lt;p&gt;A numeric signal that represents how healthy a document is right now. Not when it was last edited, that&#39;s just one input. A true freshness score combines review status, link health, readership, translation alignment, and contextual drift into a single number.&lt;/p&gt;
&lt;p&gt;When a document scores above a threshold (say, 70 out of 100), it&#39;s eligible to be used as a source for AI answers. Below that threshold, it&#39;s excluded. No exceptions.&lt;/p&gt;
&lt;p&gt;This single mechanism eliminates the most dangerous class of AI errors: confidently wrong answers based on stale sources.&lt;/p&gt;
&lt;h3&gt;2. Expiry status&lt;/h3&gt;
&lt;p&gt;Is this document currently within its review window, or has it expired without re-approval? An expired document should be heavily deprioritised or excluded entirely, regardless of how relevant its content might be to the query.&lt;/p&gt;
&lt;p&gt;In this platform, expired documents are flagged and their freshness scores drop automatically. An AI system querying the knowledge base can see this status and act on it.&lt;/p&gt;
&lt;h3&gt;3. Classification labels&lt;/h3&gt;
&lt;p&gt;Not every document serves the same purpose. A draft shouldn&#39;t be used as a source. An archived document shouldn&#39;t appear in AI answers. An internal-only document shouldn&#39;t surface in queries from external-facing tools.&lt;/p&gt;
&lt;p&gt;Classification labels give the AI context about what kind of document it&#39;s looking at:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Published.&lt;/strong&gt; Current, approved, safe to use&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Draft.&lt;/strong&gt; Work in progress, should not be cited&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Under review.&lt;/strong&gt; Expiry triggered, awaiting re-approval&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Archived.&lt;/strong&gt; No longer active, kept for reference only&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Internal&lt;/strong&gt; / &lt;strong&gt;External.&lt;/strong&gt; Controls visibility scope&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When an AI assistant processes a query, it can filter by classification before even looking at content relevance. A draft document that perfectly matches the query should never be served as an answer.&lt;/p&gt;
&lt;h3&gt;4. Language-level signals&lt;/h3&gt;
&lt;p&gt;If your knowledge base is multilingual, the AI needs to know whether the version it&#39;s pulling from is current. A French translation that&#39;s three months behind the English source is technically relevant in French, but the information might be outdated.&lt;/p&gt;
&lt;p&gt;This platform tracks freshness at the language level. Each translation has its own score based on whether its source blocks have changed since the translation was last updated. An AI querying the French knowledge base can see that the French version of a document is stale and either:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fall back to the English source (which is current)&lt;/li&gt;
&lt;li&gt;Include a caveat that the French version may be outdated&lt;/li&gt;
&lt;li&gt;Exclude the document entirely&lt;/li&gt;
&lt;/ul&gt;
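&lt;p&gt;The first two options reduce to a short decision function. This is a sketch under assumed field names, not the platform&#39;s API; the third option (exclude entirely) would simply return nothing:&lt;/p&gt;

```python
# A sketch of the first two fallback options above; the third (exclude the
# document entirely) would simply return nothing. Field names are assumptions.
def answer_source(doc, lang):
    """Pick which language version an assistant should cite."""
    if lang not in doc["stale_langs"]:
        return doc["versions"][lang], None
    if doc["source_lang"] not in doc["stale_langs"]:
        # Option 1: fall back to the current source language
        return doc["versions"][doc["source_lang"]], None
    # Option 2: serve the stale version with an explicit caveat
    return doc["versions"][lang], "This translation may be outdated."

doc = {
    "source_lang": "en",
    "stale_langs": ["fr"],
    "versions": {"en": "Current English text", "fr": "Texte en retard"},
}
print(answer_source(doc, "fr"))  # falls back to the English source
```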
&lt;h3&gt;5. Reader signals&lt;/h3&gt;
&lt;p&gt;If multiple readers have flagged a document as outdated, that signal should reduce the document&#39;s weight in AI responses. Crowdsourced quality signals are noisy, but they&#39;re valuable, especially when combined with other freshness metrics.&lt;/p&gt;
&lt;h2&gt;How this works in practice&lt;/h2&gt;
&lt;p&gt;Let&#39;s walk through what happens when an AI assistant queries a knowledge base on this platform:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Query:&lt;/strong&gt; &amp;quot;What&#39;s our process for handling a P1 incident at 2am?&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 1: Retrieval with filtering.&lt;/strong&gt; The system searches for semantically relevant documents. Before ranking, it filters out:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Documents with freshness score below the threshold&lt;/li&gt;
&lt;li&gt;Expired documents that haven&#39;t been re-approved&lt;/li&gt;
&lt;li&gt;Drafts and archived content&lt;/li&gt;
&lt;li&gt;Documents whose language version is stale (if the query is in a non-primary language)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;Step 2: Freshness-weighted ranking.&lt;/strong&gt; Among the remaining documents, those with higher freshness scores rank higher. A document scoring 94 outranks one scoring 72, even if the lower-scoring document has slightly higher semantic similarity.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 3: Answer generation.&lt;/strong&gt; The AI generates an answer from the filtered, freshness-ranked sources. Every source is cited with its freshness score visible.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Step 4: Staleness warnings.&lt;/strong&gt; If the best available source has a borderline freshness score, the AI includes a caveat: &lt;em&gt;&amp;quot;Note: The primary source for this answer was last reviewed 60 days ago. You may want to verify with the team.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;Compare this to the default behaviour: find relevant text, generate a confident answer, hope for the best.&lt;/p&gt;
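&lt;p&gt;Steps 1 and 2 can be sketched as a filter-then-rank pass. Field names, statuses, and the threshold are illustrative assumptions, not this platform&#39;s actual schema:&lt;/p&gt;

```typescript
interface Doc {
  id: string;
  similarity: number; // semantic relevance to the query
  freshness: number;  // live freshness score, 0-100
  status: string;     // "published", "draft", or "archived"
}

const docs: Doc[] = [
  { id: "incident-runbook-v2", similarity: 0.88, freshness: 94, status: "published" },
  { id: "escalation-notes",    similarity: 0.91, freshness: 72, status: "published" },
  { id: "incident-draft",      similarity: 0.95, freshness: 96, status: "draft" },
  { id: "old-oncall-guide",    similarity: 0.90, freshness: 41, status: "published" },
];

const THRESHOLD = 70;

// Step 1: filter before ranking, so drafts and stale documents drop out.
// Step 2: rank the survivors by freshness first, semantic similarity second.
const ranked = docs
  .filter((d) => d.status === "published")
  .filter((d) => d.freshness >= THRESHOLD)
  .sort((a, b) => b.freshness - a.freshness || b.similarity - a.similarity);

console.log(ranked.map((d) => d.id)); // → ["incident-runbook-v2", "escalation-notes"]
```

&lt;p&gt;Note that the most semantically similar document (the draft) never reaches ranking at all, and the 94-scored document outranks the 72-scored one despite matching the query slightly less well.&lt;/p&gt;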
&lt;h2&gt;What happens when you don&#39;t do this&lt;/h2&gt;
&lt;p&gt;The consequences of AI systems operating on unfiltered knowledge bases are predictable and expensive:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;New hire confusion.&lt;/strong&gt; The most common AI use case for internal docs is onboarding. New hires, by definition, don&#39;t know what&#39;s current and what&#39;s stale. They trust the AI. The AI trusts everything. Stale docs get served with confidence.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Compliance exposure.&lt;/strong&gt; If your AI assistant provides guidance on regulatory processes using outdated documents, the advice might not just be wrong, it might be non-compliant. &amp;quot;The AI told me to&amp;quot; doesn&#39;t hold up in an audit.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Erosion of trust.&lt;/strong&gt; Every time the AI gives a wrong answer, users trust it a little less. After three or four bad experiences, they stop using it. The investment in AI tooling delivers no value because the underlying content wasn&#39;t trustworthy.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Shadow knowledge.&lt;/strong&gt; When people lose trust in the official knowledge base (and the AI built on top of it), they create their own: Slack messages, personal notes, tribal knowledge shared in meetings. The fragmentation that the wiki was supposed to prevent happens anyway, just differently.&lt;/p&gt;
&lt;h2&gt;The fix is at the source, not at the model&lt;/h2&gt;
&lt;p&gt;There&#39;s a temptation to solve this at the AI layer: better prompts, more sophisticated RAG pipelines, fine-tuned models that can somehow detect staleness from text alone. This is the wrong approach.&lt;/p&gt;
&lt;p&gt;The fix is at the source. If your documents carry rich, accurate metadata about their current state (freshness score, expiry status, classification, language alignment, reader signals) then any AI system can use that metadata to make better decisions. You don&#39;t need a smarter model. You need smarter documents.&lt;/p&gt;
&lt;p&gt;This is what a freshness-first knowledge system provides:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Every document has a live freshness score&lt;/strong&gt; that updates continuously based on link health, readership, review status, and more&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Every document has an expiry date&lt;/strong&gt; that triggers review when it arrives&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Every document has a classification&lt;/strong&gt; (published, draft, under review, archived)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Every language version has its own freshness signal&lt;/strong&gt; so stale translations are detected independently&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Reader flags and cross-reference tracking&lt;/strong&gt; add additional quality signals&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When an AI system queries a knowledge base that carries this metadata, all of this context is available. The AI doesn&#39;t have to guess whether a document is trustworthy. The document tells it.&lt;/p&gt;
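&lt;p&gt;As a concrete illustration, the per-document metadata might look like this. Field names and values are hypothetical, not this platform&#39;s actual schema:&lt;/p&gt;

```typescript
// Hypothetical trust metadata carried by a single document.
const meta = {
  freshnessScore: 86,              // live score, updates continuously
  expiresOn: "2026-09-01",         // triggers review when the date arrives
  classification: "published",     // published, draft, under review, or archived
  languageFreshness: { en: 97, fr: 54 }, // per-language scores; fr lags its source
  readerStaleFlags: 2,             // readers who flagged the doc as outdated
};

// A retrieval layer can turn the metadata into a simple trust gate:
const trustworthy = [
  meta.classification === "published",
  meta.freshnessScore >= 70,
  5 > meta.readerStaleFlags, // fewer than five reader flags
].every(Boolean);

console.log(trustworthy); // → true
```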
&lt;h2&gt;A practical starting point&lt;/h2&gt;
&lt;p&gt;If you have an AI assistant running on your knowledge base today, you can start assessing the problem in 30 minutes:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Ask your AI assistant 10 questions you know the answers to.&lt;/strong&gt; Note which answers use stale sources. You&#39;ll probably find at least 2-3 out of 10 are based on outdated content.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Check the source documents.&lt;/strong&gt; For each answer the AI gave, look at the source document. When was it last reviewed? Are the links valid? Would you trust it if you read it yourself?&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Look for the worst case.&lt;/strong&gt; Find your oldest, most neglected document that still appears in search results. Ask the AI a question that would surface it. Does the AI use it? It almost certainly does.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Estimate the impact.&lt;/strong&gt; How many queries per day does your AI assistant handle? If 20-30% of answers are based on stale content, what&#39;s the cost in terms of wasted time, wrong decisions, and lost trust?&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
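&lt;p&gt;For step 4, a rough back-of-envelope with purely illustrative numbers looks like this:&lt;/p&gt;

```typescript
// All three inputs are assumptions; substitute your own measurements.
const queriesPerDay = 200;          // assistant query volume
const staleFraction = 0.25;         // the 20-30% range from the checklist above
const minutesLostPerBadAnswer = 10; // assumed verification and rework time

const hoursLostPerDay =
  (queriesPerDay * staleFraction * minutesLostPerBadAnswer) / 60;

console.log(hoursLostPerDay.toFixed(1)); // → 8.3
```

&lt;p&gt;Even with conservative inputs, that is roughly a full working day lost every day, before counting wrong decisions and lost trust.&lt;/p&gt;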
&lt;hr /&gt;
&lt;p&gt;AI assistants are only as good as the content they&#39;re built on. Right now, most of them treat every document in your knowledge base as equally valid. They fetch everything, from the doc that was reviewed yesterday to the one nobody has touched in two years, and present it all with the same confidence.&lt;/p&gt;
&lt;p&gt;That&#39;s not a model problem. It&#39;s a data quality problem. And the solution is straightforward: give your documents metadata that tells AI tools what to trust.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Your AI assistant shouldn&#39;t sound confident about an answer sourced from a document nobody has reviewed in 18 months. With the right signals, it won&#39;t.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;This platform makes every document carry its own trust score: freshness, expiry status, classification, language alignment. AI tools query the knowledge base and get not just content, but context. Trusted sources surface. Stale ones don&#39;t. That&#39;s how AI-powered documentation should work.&lt;/p&gt;
&lt;p&gt;&lt;a href=&quot;https://www.tcdev.de/#talk-to-docs&quot;&gt;See how this platform works with AI tools →&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="freshness" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Talking to Documents Feels Better Than Reading Them</title>
    <link href="https://www.tcdev.de/blog/why-talking-to-documents-feels-better-than-reading/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/why-talking-to-documents-feels-better-than-reading/</id>
    <updated>2026-03-10T00:00:00Z</updated>
    <summary>Reading is powerful, but effortful. Conversation is older, faster, and more natural. Speaking to information often feels mentally lighter than scanning pages of text.</summary>
    <content type="html">&lt;p&gt;There&#39;s a reason people say &lt;em&gt;&amp;quot;let&#39;s talk it through&amp;quot;&lt;/em&gt; when something is complex.&lt;/p&gt;
&lt;p&gt;When we&#39;re trying to understand a new idea, solve a problem, or recall a process under pressure, conversation often feels easier than reading. Not because reading is bad. Reading is one of the most powerful tools humans have ever developed. But reading is a learned skill layered on top of something much older: speech.&lt;/p&gt;
&lt;p&gt;We&#39;re talkers long before we&#39;re readers.&lt;/p&gt;
&lt;p&gt;That matters more than people realise, especially now that more of the world&#39;s knowledge lives inside documents, wikis, PDFs, and long internal pages that nobody wants to open unless they absolutely have to.&lt;/p&gt;
&lt;h2&gt;Reading is learned. Conversation is native.&lt;/h2&gt;
&lt;p&gt;Human beings spoke for a very long time before they wrote anything down. Children learn to understand spoken language naturally. Reading takes explicit instruction, repetition, and years of practice.&lt;/p&gt;
&lt;p&gt;Even for highly literate adults, reading is still a more deliberate act than listening or speaking. It asks for visual focus, continuous attention, working memory, and interpretation of structure on the page. You&#39;re decoding symbols, parsing sentences, building context, and deciding what matters.&lt;/p&gt;
&lt;p&gt;Conversation works differently. When information is delivered in a spoken, interactive form, the brain gets a different experience:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;It feels sequential rather than visually overwhelming&lt;/li&gt;
&lt;li&gt;It provides immediate feedback and clarification&lt;/li&gt;
&lt;li&gt;It reduces the need to scan and filter large blocks of text&lt;/li&gt;
&lt;li&gt;It mirrors how people already ask for help in real life&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That last point matters a lot. Under uncertainty, most people don&#39;t instinctively want to read 1,500 words. They want to ask: &lt;em&gt;&amp;quot;What do I do next?&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;h2&gt;Talking lowers cognitive friction&lt;/h2&gt;
&lt;p&gt;A document is static. It contains everything at once.&lt;/p&gt;
&lt;p&gt;That sounds useful, and often it is. But it also creates friction. A page full of headings, callouts, links, notes, examples, and edge cases forces the reader to decide what to ignore. That&#39;s cognitively expensive.&lt;/p&gt;
&lt;p&gt;When you talk to an information system, you usually get the opposite experience: relevance first, detail second.&lt;/p&gt;
&lt;p&gt;You ask one question. You get one answer. Then you ask a follow-up.&lt;/p&gt;
&lt;p&gt;That interaction pattern reduces mental overhead in a few important ways:&lt;/p&gt;
&lt;h3&gt;1. It narrows the problem space&lt;/h3&gt;
&lt;p&gt;A full document presents the whole landscape. A conversation presents the next useful step.&lt;/p&gt;
&lt;p&gt;When someone asks, &lt;em&gt;&amp;quot;How do I onboard a new engineer?&amp;quot;&lt;/em&gt; they usually don&#39;t want the entire handbook immediately. They want orientation. Conversation lets them begin small and expand only when needed.&lt;/p&gt;
&lt;h3&gt;2. It preserves working memory&lt;/h3&gt;
&lt;p&gt;Reading requires you to hold multiple things in your head while looking for the relevant part. Spoken or conversational interaction externalises that effort. The system does more of the filtering for you.&lt;/p&gt;
&lt;h3&gt;3. It feels socially familiar&lt;/h3&gt;
&lt;p&gt;Humans are deeply adapted to back-and-forth exchange. We ask. Someone answers. We refine. They clarify. That loop is one of the oldest forms of learning we have.&lt;/p&gt;
&lt;p&gt;Even when the &amp;quot;someone&amp;quot; is a system, the structure still feels natural.&lt;/p&gt;
&lt;h2&gt;Reading isn&#39;t passive. That&#39;s exactly the point.&lt;/h2&gt;
&lt;p&gt;One reason talking can feel easier is that reading isn&#39;t as effortless as people assume. Skilled readers make it look effortless, but the process is highly active.&lt;/p&gt;
&lt;p&gt;To read well, you have to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;identify structure&lt;/li&gt;
&lt;li&gt;infer importance&lt;/li&gt;
&lt;li&gt;resolve ambiguity&lt;/li&gt;
&lt;li&gt;keep context in memory&lt;/li&gt;
&lt;li&gt;connect one section to another&lt;/li&gt;
&lt;li&gt;decide when to skim and when to slow down&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That&#39;s real cognitive work.&lt;/p&gt;
&lt;p&gt;In many situations, that work is worthwhile. Deep reading helps with nuance, precision, and long-form understanding. But in other situations, especially when someone is tired, stressed, overloaded, or just trying to get unstuck, talking is often the mentally lighter option.&lt;/p&gt;
&lt;p&gt;This is especially true in the workplace, where people aren&#39;t usually approaching documentation in ideal conditions. They are:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;mid-task&lt;/li&gt;
&lt;li&gt;interrupted&lt;/li&gt;
&lt;li&gt;context-switching&lt;/li&gt;
&lt;li&gt;trying to solve something quickly&lt;/li&gt;
&lt;li&gt;often already slightly frustrated&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In that state, &lt;em&gt;conversational access&lt;/em&gt; to information can feel dramatically better than page-first access.&lt;/p&gt;
&lt;h2&gt;Speaking changes the relationship with information&lt;/h2&gt;
&lt;p&gt;There&#39;s also an emotional dimension here.&lt;/p&gt;
&lt;p&gt;Documents can feel formal and distant. They imply: &lt;em&gt;read all of this, understand it correctly, and don&#39;t miss anything important.&lt;/em&gt; That can be useful for reference material, but it can also create hesitation.&lt;/p&gt;
&lt;p&gt;Conversation feels permissive. You can be vague. You can ask badly. You can admit confusion. You can say, &lt;em&gt;&amp;quot;I don&#39;t really know what I&#39;m looking for, but I need the thing about access requests.&amp;quot;&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;That matters because people often avoid documentation not because they dislike information, but because they dislike the effort and uncertainty involved in finding the right part of it.&lt;/p&gt;
&lt;p&gt;Talking reduces that barrier.&lt;/p&gt;
&lt;h2&gt;Why this matters now&lt;/h2&gt;
&lt;p&gt;For a long time, documents had to be read because there was no practical alternative. Search helped people find pages, but it did not change the interaction model. You still had to open the page, scan it, and extract what you needed.&lt;/p&gt;
&lt;p&gt;That&#39;s changing.&lt;/p&gt;
&lt;p&gt;As interfaces become more conversational, people are increasingly expecting information to respond rather than simply exist. They want to ask for what they need in plain language and receive something shaped to the moment.&lt;/p&gt;
&lt;p&gt;This doesn&#39;t make reading obsolete. It changes its role.&lt;/p&gt;
&lt;p&gt;Reading becomes the deep layer. Conversation becomes the access layer.&lt;/p&gt;
&lt;p&gt;The best systems will support both:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;talk when you need orientation or speed&lt;/li&gt;
&lt;li&gt;read when you need depth, verification, or full context&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;The risk of oversimplifying&lt;/h2&gt;
&lt;p&gt;There&#39;s one important caveat: talking to information only feels better if the answers are reliable.&lt;/p&gt;
&lt;p&gt;If a conversational interface gives partial, misleading, or overly confident answers, the experience becomes worse than reading because it removes the user&#39;s ability to inspect the source material directly.&lt;/p&gt;
&lt;p&gt;So the future isn&#39;t &amp;quot;replace all documents with voice.&amp;quot; The future is giving people a more human way to access documents without losing the depth and precision that written knowledge provides.&lt;/p&gt;
&lt;p&gt;That balance matters. Conversation is easier, but documents still carry the durable structure, detail, and accountability that organisations need.&lt;/p&gt;
&lt;h2&gt;A more human interface to knowledge&lt;/h2&gt;
&lt;p&gt;The deeper point is simple: people don&#39;t naturally think in pages. They think in questions, stories, fragments, and dialogue.&lt;/p&gt;
&lt;p&gt;We ask:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;What does this mean?&lt;/li&gt;
&lt;li&gt;What do I do first?&lt;/li&gt;
&lt;li&gt;What&#39;s the important part?&lt;/li&gt;
&lt;li&gt;Can you explain that differently?&lt;/li&gt;
&lt;li&gt;What changed?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Those are conversational moves, not reading moves.&lt;/p&gt;
&lt;p&gt;So when talking to information feels mentally easier than reading it, that&#39;s not a sign of intellectual laziness. It&#39;s usually a sign that the interface matches the way the brain prefers to approach uncertainty.&lt;/p&gt;
&lt;p&gt;Reading remains essential. But as an entry point to knowledge, conversation often feels better because it&#39;s closer to what we are by nature.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We are not readers first. We are talkers first. The most intuitive knowledge systems will remember that.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="voice" />
    <category term="knowledge" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Documentation Platforms Built for Another Era</title>
    <link href="https://www.tcdev.de/blog/why-confluence-and-notion-are-struggling-in-the-ai-era/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/why-confluence-and-notion-are-struggling-in-the-ai-era/</id>
    <updated>2026-03-08T00:00:00Z</updated>
    <summary>Confluence and Notion were built for a pre-AI model of documentation. They can evolve, but established platforms carry structural baggage. Newer systems can design for AI from day one.</summary>
    <content type="html">&lt;p&gt;Confluence and Notion are not bad products. That needs to be said clearly at the start.&lt;/p&gt;
&lt;p&gt;They succeeded for good reasons. Confluence became the &lt;a href=&quot;https://www.atlassian.com/software/confluence&quot;&gt;default home for internal documentation&lt;/a&gt; in many companies because it gave teams a central place to write, organise, and share knowledge. Notion &lt;a href=&quot;https://www.notion.com/about&quot;&gt;won people over&lt;/a&gt; with flexibility, cleaner writing experiences, and a more modern feeling product surface.&lt;/p&gt;
&lt;p&gt;Both platforms solved real problems in the era they were built for.&lt;/p&gt;
&lt;p&gt;The issue now is that the world around them has changed faster than their foundations.&lt;/p&gt;
&lt;p&gt;We are no longer in a world where documentation just needs to be written, stored, and searched. We are in a world where documentation is increasingly expected to be:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;machine-readable&lt;/li&gt;
&lt;li&gt;freshness-aware&lt;/li&gt;
&lt;li&gt;safe for AI retrieval&lt;/li&gt;
&lt;li&gt;structured enough for automation&lt;/li&gt;
&lt;li&gt;dynamic across languages and audiences&lt;/li&gt;
&lt;li&gt;continuously trustworthy, not just available&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That is a different bar.&lt;/p&gt;
&lt;h2&gt;They were built for a pre-AI model of knowledge&lt;/h2&gt;
&lt;p&gt;Traditional documentation platforms were designed around a simple assumption: if the page exists and is searchable, the problem is mostly solved.&lt;/p&gt;
&lt;p&gt;That was good enough when the main user was a human opening a wiki, skimming the page, and applying judgement. In that model, the platform&#39;s job was to make authoring and navigation easier.&lt;/p&gt;
&lt;p&gt;AI changes the job description.&lt;/p&gt;
&lt;p&gt;Now the platform is not just storing knowledge for people. It is producing source material for systems that retrieve, rank, summarise, and answer questions automatically.&lt;/p&gt;
&lt;p&gt;That introduces new requirements that older architectures did not prioritise:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Which content is trustworthy right now?&lt;/li&gt;
&lt;li&gt;Which pages are stale but still searchable?&lt;/li&gt;
&lt;li&gt;Which sections changed recently?&lt;/li&gt;
&lt;li&gt;Which language version is current?&lt;/li&gt;
&lt;li&gt;Which content is draft, archived, region-specific, or low-confidence?&lt;/li&gt;
&lt;li&gt;Which documents should be excluded from AI answers entirely?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;A platform that was not built around these questions has to retrofit them. That is always harder than designing for them from the start.&lt;/p&gt;
&lt;h2&gt;Legacy strength becomes legacy drag&lt;/h2&gt;
&lt;p&gt;Established products have advantages: distribution, ecosystem, brand, customer familiarity, integrations, and teams that know how to ship. But those same strengths can slow structural change.&lt;/p&gt;
&lt;p&gt;Why? Because mature platforms carry commitments.&lt;/p&gt;
&lt;p&gt;They have:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;years of accumulated product decisions&lt;/li&gt;
&lt;li&gt;huge installed bases with existing workflows&lt;/li&gt;
&lt;li&gt;expectations around backward compatibility&lt;/li&gt;
&lt;li&gt;plugins and extensions depending on old behaviour&lt;/li&gt;
&lt;li&gt;data models optimised for yesterday&#39;s use cases&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;When a platform like Confluence or Notion wants to add a genuinely new capability, it often has to fit that capability around the existing system rather than through it.&lt;/p&gt;
&lt;p&gt;That is the challenge of incumbency: you are not just building the future, you are dragging the past with you.&lt;/p&gt;
&lt;h2&gt;Adding AI features is not the same as becoming AI-native&lt;/h2&gt;
&lt;p&gt;A lot of established platforms are now layering AI on top. Summaries. Writing assistance. Search improvements. Q&amp;amp;A interfaces. Confluence has &lt;a href=&quot;https://www.atlassian.com/platform/intelligence&quot;&gt;Atlassian Intelligence&lt;/a&gt;, Notion shipped &lt;a href=&quot;https://www.notion.com/product/ai&quot;&gt;Notion AI&lt;/a&gt;, and GitBook added &lt;a href=&quot;https://docs.gitbook.com/product-tour/searching-your-content/gitbook-ai&quot;&gt;AI-powered search&lt;/a&gt;. These are useful features. Some of them are good.&lt;/p&gt;
&lt;p&gt;But there is a meaningful difference between:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;adding AI features to a documentation product&lt;/li&gt;
&lt;li&gt;building a documentation product whose core architecture assumes AI consumption from day one&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The first approach often leads to assistive features around the edges. The second changes the foundation.&lt;/p&gt;
&lt;p&gt;An AI-native knowledge platform asks different design questions from the start:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;how should documents be structured so systems can reason about them safely?&lt;/li&gt;
&lt;li&gt;how should trust be represented?&lt;/li&gt;
&lt;li&gt;what metadata must be first-class, not optional?&lt;/li&gt;
&lt;li&gt;how should stale content degrade in visibility?&lt;/li&gt;
&lt;li&gt;how should answers be restricted when the underlying sources are weak?&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Those are architectural questions, not feature questions.&lt;/p&gt;
&lt;h2&gt;Fresh platforms have a temporary advantage&lt;/h2&gt;
&lt;p&gt;This is where newer platforms can win, at least for a while.&lt;/p&gt;
&lt;p&gt;A new platform has the freedom to design around today&#39;s constraints instead of yesterday&#39;s habits. It does not have to preserve a decade of assumptions about what a document is or how a wiki should behave. It can make different choices early:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;treating freshness as a first-class concept&lt;/li&gt;
&lt;li&gt;making source trust visible to both humans and machines&lt;/li&gt;
&lt;li&gt;storing richer metadata about content state&lt;/li&gt;
&lt;li&gt;building multilingual workflows into the core model instead of bolting them on&lt;/li&gt;
&lt;li&gt;deciding that search and AI retrieval should rank by trust, not just relevance&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;That freedom matters.&lt;/p&gt;
&lt;p&gt;In technology, incumbents are often strongest during stable periods. New entrants are often strongest when the model itself is shifting.&lt;/p&gt;
&lt;p&gt;The AI era is one of those shifts.&lt;/p&gt;
&lt;h2&gt;Why this is especially hard for Confluence&lt;/h2&gt;
&lt;p&gt;Confluence is powerful, but it comes from an older worldview. It was built around &lt;a href=&quot;https://support.atlassian.com/confluence-cloud/docs/use-spaces-to-organize-your-work/&quot;&gt;team spaces, pages, hierarchical navigation&lt;/a&gt;, and a &lt;a href=&quot;https://marketplace.atlassian.com/&quot;&gt;plugin-rich enterprise model&lt;/a&gt;. Those choices made sense. They still make sense for many organisations.&lt;/p&gt;
&lt;p&gt;But they also mean the product is carrying a lot of complexity. Enterprise platforms rarely get to reinvent themselves cleanly. They have to negotiate with their own history.&lt;/p&gt;
&lt;p&gt;That makes modernisation slower. Not impossible. Just slower.&lt;/p&gt;
&lt;p&gt;When AI-era requirements call for cleaner metadata, more explicit trust modelling, or more opinionated content governance, a system built for maximal flexibility through years of extensions can struggle to move cohesively.&lt;/p&gt;
&lt;h2&gt;Why this is especially tricky for Notion&lt;/h2&gt;
&lt;p&gt;Notion has a different problem. It feels newer, lighter, and more flexible. But flexibility can also work against it.&lt;/p&gt;
&lt;p&gt;Notion&#39;s strength is that &lt;a href=&quot;https://www.notion.com/product&quot;&gt;almost anything can become a page, a database, a note, a lightweight doc, or a collaborative space&lt;/a&gt;. That flexibility is great for teams. It is less great when you need strong guarantees about what content means, what state it is in, and whether it should be used as a trusted source by an AI system.&lt;/p&gt;
&lt;p&gt;The more free-form a platform is, the harder it is to impose reliable semantics later.&lt;/p&gt;
&lt;p&gt;AI systems thrive on structure, explicit metadata, and confidence signals. Flexible general-purpose workspaces often need a lot of interpretation before their content is safe for that kind of use.&lt;/p&gt;
&lt;h2&gt;None of this means they are doomed&lt;/h2&gt;
&lt;p&gt;It would be lazy analysis to say Confluence and Notion cannot adapt. Of course they can.&lt;/p&gt;
&lt;p&gt;They have smart teams, significant resources, and strong incentives. They will ship more AI capabilities. They will improve retrieval, authoring assistance, summaries, governance, and structured workflows. Over time, they may close a lot of the gap.&lt;/p&gt;
&lt;p&gt;But timing matters.&lt;/p&gt;
&lt;p&gt;When a shift like this happens, the advantage often belongs to whoever is willing to rebuild assumptions fastest. Newer platforms can move with more coherence because they are not retrofitting as much. That gives them a window.&lt;/p&gt;
&lt;p&gt;It may not be a permanent window. But it is real.&lt;/p&gt;
&lt;h2&gt;The next phase of documentation platforms&lt;/h2&gt;
&lt;p&gt;The next generation of documentation tools will likely be judged less by how well they let people write pages and more by how well they manage knowledge as a trusted system.&lt;/p&gt;
&lt;p&gt;That means the winners will probably do five things well:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;They will model trust explicitly.&lt;/li&gt;
&lt;li&gt;They will distinguish current knowledge from stale knowledge.&lt;/li&gt;
&lt;li&gt;They will handle AI retrieval as a core product surface, not an add-on.&lt;/li&gt;
&lt;li&gt;They will support multilingual and audience-specific knowledge without fragmentation.&lt;/li&gt;
&lt;li&gt;They will give teams stronger control over what information is surfaced, to whom, and under what conditions.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;That is a different category from the classic wiki.&lt;/p&gt;
&lt;h2&gt;Why fresh starts matter&lt;/h2&gt;
&lt;p&gt;There are moments in software when a clean-sheet product has an advantage not because incumbents are incompetent, but because history is expensive.&lt;/p&gt;
&lt;p&gt;This is one of those moments.&lt;/p&gt;
&lt;p&gt;A new platform can decide, from day one, that documents are not just pages. They are active sources for humans, agents, search systems, and AI assistants. That assumption changes everything downstream.&lt;/p&gt;
&lt;p&gt;Confluence and Notion can get there. But the path is longer because they have to transform systems that were optimised for another era.&lt;/p&gt;
&lt;p&gt;That transformation takes time. In the meantime, newer platforms have room to define what modern knowledge infrastructure should look like.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;The biggest advantage of a fresh platform is not novelty. It is freedom from old assumptions at exactly the moment those assumptions stop working.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;hr /&gt;
&lt;p&gt;&lt;em&gt;This is a perspective piece. Claims about competitor products are based on publicly available product documentation and announcements as of March 2026. We have genuine respect for both Confluence and Notion — they are excellent products that serve millions of teams well.&lt;/em&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="ai" />
    <category term="platforms" />
    <category term="documentation" />
  </entry>
  <entry>
    <title>Inside the Architecture: Plugins, Action Guards, and Pipelines</title>
    <link href="https://www.tcdev.de/blog/how-plugin-guardrail-and-pipeline-systems-work/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/how-plugin-guardrail-and-pipeline-systems-work/</id>
    <updated>2026-03-06T00:00:00Z</updated>
    <summary>A deep technical walkthrough of how this platform&#39;s plugin system, action guard pipeline, and block-level translation engine actually work, with real code from the codebase.</summary>
    <content type="html">&lt;p&gt;Most documentation platforms talk about &amp;quot;extensibility&amp;quot; the way airlines talk about &amp;quot;legroom.&amp;quot; Technically present, practically disappointing. I wanted this platform&#39;s architecture to be genuinely extensible without becoming unpredictable, so we built three interlocking systems: &lt;strong&gt;plugins&lt;/strong&gt; for capability, &lt;strong&gt;action guards&lt;/strong&gt; for control, and &lt;strong&gt;pipelines&lt;/strong&gt; for deterministic execution.&lt;/p&gt;
&lt;p&gt;This post walks through how each one works in our actual codebase.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/architecture-pipeline.svg&quot; alt=&quot;this platform architecture: Plugins, Guards, and Pipelines working together&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;The plugin system: modular by design&lt;/h2&gt;
&lt;p&gt;Every plugin in this platform implements &lt;code&gt;IPluginModule&lt;/code&gt;, a single interface that declares what the plugin is, what services it needs, and what routes it exposes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public interface IPluginModule
{
    PluginManifest Manifest { get; }
    void RegisterServices(IServiceCollection services);
    void MapRoutes(IEndpointRouteBuilder routes);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;PluginManifest&lt;/code&gt; is pure data. It describes the plugin without executing anything:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class PluginManifest
{
    public required string Id { get; init; }
    public required string Name { get; init; }
    public required string Version { get; init; }
    public string Description { get; init; }
    public string Category { get; init; }
    public IReadOnlyDictionary&amp;lt;string, string&amp;gt; UiContributions { get; init; }
    public bool HasSettings { get; init; }
    public bool HasEndpoints { get; init; }
    public IReadOnlyList&amp;lt;string&amp;gt; Dependencies { get; init; }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice &lt;code&gt;UiContributions&lt;/code&gt;. That dictionary maps frontend extension points to component names, so the Vue frontend knows which UI components each plugin contributes (a toolbar button, a sidebar panel, a settings page).&lt;/p&gt;
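&lt;p&gt;On the frontend side, resolving an extension point could look roughly like this. This is a hypothetical sketch, not the actual Vue integration, and the &lt;code&gt;RulesStatusPanel&lt;/code&gt; name is invented for the example:&lt;/p&gt;

```typescript
// Given plugin manifests, collect the component names registered for one
// extension point. Types are loosened to keep the sketch short.
function componentsFor(manifests: any[], extensionPoint: string): string[] {
  return manifests
    .filter((m) => extensionPoint in m.uiContributions)
    .map((m) => m.uiContributions[extensionPoint]);
}

const manifests = [
  { id: "workflow", uiContributions: { "entry.toolbar.publish": "WorkflowPublishButton" } },
  { id: "rules",    uiContributions: { "entry.sidebar.status": "RulesStatusPanel" } },
];

console.log(componentsFor(manifests, "entry.toolbar.publish")); // → ["WorkflowPublishButton"]
```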
&lt;h3&gt;Registration is one line per plugin&lt;/h3&gt;
&lt;p&gt;At startup, we register plugins through a fluent API:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var pluginRegistry = new PluginRegistry();

pluginRegistry
    .AddPlugin&amp;lt;WorkflowPluginModule&amp;gt;(builder.Services)
    .AddPlugin&amp;lt;RulesPluginModule&amp;gt;(builder.Services)
    .AddPlugin&amp;lt;RetentionPluginModule&amp;gt;(builder.Services)
    .AddPlugin&amp;lt;ClassificationPluginModule&amp;gt;(builder.Services);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Each call instantiates the module, stores it in the registry, and calls &lt;code&gt;RegisterServices()&lt;/code&gt; to wire up its dependencies. After the app builds, a single line maps all plugin routes:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;app.MapPluginRoutes(pluginRegistry);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Under the hood, each plugin gets a scoped route group at &lt;code&gt;/plugins/{pluginId}/&lt;/code&gt; with authorization automatically applied.&lt;/p&gt;
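&lt;p&gt;A minimal sketch of what that mapping does (simplified; the registry&#39;s &lt;code&gt;Modules&lt;/code&gt; property is an assumption):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public static void MapPluginRoutes(
    this IEndpointRouteBuilder app, PluginRegistry registry)
{
    foreach (var module in registry.Modules)
    {
        // Every plugin maps into its own authorized route group.
        var group = app.MapGroup($&amp;quot;/plugins/{module.Manifest.Id}&amp;quot;)
                       .RequireAuthorization();
        module.MapRoutes(group);
    }
}
&lt;/code&gt;&lt;/pre&gt;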
&lt;h3&gt;Real example: the Workflow plugin&lt;/h3&gt;
&lt;p&gt;Here&#39;s what a real plugin looks like, using the Workflow &amp;amp; Approvals module as an example:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class WorkflowPluginModule : IPluginModule
{
    public const string PluginId = &amp;quot;workflow&amp;quot;;

    public PluginManifest Manifest { get; } = new()
    {
        Id = PluginId,
        Name = &amp;quot;Workflow &amp;amp; Approvals&amp;quot;,
        Version = &amp;quot;1.0.0&amp;quot;,
        Description = &amp;quot;Adds approval workflows to entry publishing.&amp;quot;,
        Category = &amp;quot;Workflow&amp;quot;,
        HasSettings = true,
        HasEndpoints = true,
        UiContributions = new Dictionary&amp;lt;string, string&amp;gt;
        {
            [&amp;quot;entry.toolbar.publish&amp;quot;] = &amp;quot;WorkflowPublishButton&amp;quot;,
            [&amp;quot;entry.sidebar.status&amp;quot;]  = &amp;quot;WorkflowStatusPanel&amp;quot;,
            [&amp;quot;hub.admin.settings&amp;quot;]    = &amp;quot;WorkflowHubSettings&amp;quot;,
        }
    };

    public void RegisterServices(IServiceCollection services)
    {
        services.AddScoped&amp;lt;IWorkflowService, WorkflowService&amp;gt;();
        services.AddScoped&amp;lt;IActionGuard, WorkflowPublishGuard&amp;gt;();
    }

    public void MapRoutes(IEndpointRouteBuilder routes)
    {
        WorkflowEndpoints.Map(routes);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The core platform never references &lt;code&gt;WorkflowService&lt;/code&gt; or &lt;code&gt;WorkflowPublishGuard&lt;/code&gt; directly; it discovers them through the DI container. That&#39;s the key to zero coupling: the core app never touches plugin code.&lt;/p&gt;
&lt;h2&gt;Action guards: the control layer&lt;/h2&gt;
&lt;p&gt;Plugins add capability. Action guards decide whether that capability, or any core action, is allowed to proceed. They run inline in the request path, intercepting operations before execution.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/action-guard-flow.svg&quot; alt=&quot;Action guard evaluation flow&quot; /&gt;&lt;/p&gt;
&lt;p&gt;The interface is deliberately minimal:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public interface IActionGuard
{
    string PluginId { get; }
    string? ActionName { get; }  // null means guard ALL actions

    Task&amp;lt;ActionGuardResult&amp;gt; EvaluateAsync(
        ActionGuardContext context,
        IServiceProvider services,
        CancellationToken ct = default);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When &lt;code&gt;ActionName&lt;/code&gt; is &lt;code&gt;null&lt;/code&gt;, the guard runs for every action. When it&#39;s set to something like &lt;code&gt;&amp;quot;Entry.Publish&amp;quot;&lt;/code&gt;, it only intercepts that specific action.&lt;/p&gt;
&lt;h3&gt;The context and result contracts&lt;/h3&gt;
&lt;p&gt;Every guard receives a typed context with the action name, tenant, user, entity, and a property bag:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed record ActionGuardContext(
    string ActionName,
    Guid TenantId,
    Guid UserId,
    Guid EntityId,
    IReadOnlyDictionary&amp;lt;string, object?&amp;gt; Properties)
{
    public T? Get&amp;lt;T&amp;gt;(string key) =&amp;gt;
        Properties.TryGetValue(key, out var v) &amp;amp;&amp;amp; v is T typed
            ? typed : default;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And every guard returns a predictable result: allow, deny, or allow-with-modifications:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed record ActionGuardResult
{
    public bool IsAllowed { get; init; }
    public string? ReasonCode { get; init; }
    public string? Message { get; init; }
    public IReadOnlyDictionary&amp;lt;string, object?&amp;gt;? Modifications { get; init; }

    public static ActionGuardResult Allow() =&amp;gt;
        new() { IsAllowed = true };

    public static ActionGuardResult Deny(
        string reasonCode, string message) =&amp;gt;
        new() { IsAllowed = false, ReasonCode = reasonCode, Message = message };
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;Modifications&lt;/code&gt; field is important. A guard can approve an action but rewrite part of the content (for example, redacting secrets before publish).&lt;/p&gt;
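&lt;p&gt;As a toy illustration of that path (this guard does not exist in the real codebase, and the &lt;code&gt;ContentJson&lt;/code&gt; property key is made up):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class SecretRedactionGuard : IActionGuard
{
    public string PluginId =&amp;gt; &amp;quot;rules&amp;quot;;
    public string? ActionName =&amp;gt; ActionNames.Entry.Publish;

    public Task&amp;lt;ActionGuardResult&amp;gt; EvaluateAsync(
        ActionGuardContext context,
        IServiceProvider services,
        CancellationToken ct = default)
    {
        var content = context.Get&amp;lt;string&amp;gt;(&amp;quot;ContentJson&amp;quot;);

        // Toy stand-in for a real secret scanner.
        var redacted = content?.Replace(&amp;quot;sk_live_&amp;quot;, &amp;quot;sk_***&amp;quot;);

        // Allow the publish, but hand the rewritten content back.
        return Task.FromResult(new ActionGuardResult
        {
            IsAllowed = true,
            Modifications = new Dictionary&amp;lt;string, object?&amp;gt;
            {
                [&amp;quot;ContentJson&amp;quot;] = redacted,
            },
        });
    }
}
&lt;/code&gt;&lt;/pre&gt;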
&lt;h3&gt;Canonical action names&lt;/h3&gt;
&lt;p&gt;We define all interceptable actions as string constants so there&#39;s zero ambiguity about what a guard can target:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public static class ActionNames
{
    public static class Entry
    {
        public const string Create  = &amp;quot;Entry.Create&amp;quot;;
        public const string Save    = &amp;quot;Entry.Save&amp;quot;;
        public const string Publish = &amp;quot;Entry.Publish&amp;quot;;
        public const string Delete  = &amp;quot;Entry.Delete&amp;quot;;
        public const string Archive = &amp;quot;Entry.Archive&amp;quot;;
        public const string Renew   = &amp;quot;Entry.Renew&amp;quot;;
    }

    public static class Hub
    {
        public const string Create = &amp;quot;Hub.Create&amp;quot;;
        public const string Delete = &amp;quot;Hub.Delete&amp;quot;;
        public const string TransferOwnership = &amp;quot;Hub.TransferOwnership&amp;quot;;
    }

    public static class Translation
    {
        public const string Create  = &amp;quot;Translation.Create&amp;quot;;
        public const string Publish = &amp;quot;Translation.Publish&amp;quot;;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Real example: blocking publish without approval&lt;/h3&gt;
&lt;p&gt;The Workflow plugin registers a guard that intercepts &lt;code&gt;Entry.Publish&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class WorkflowPublishGuard : IActionGuard
{
    public string PluginId =&amp;gt; WorkflowPluginModule.PluginId;
    public string? ActionName =&amp;gt; ActionNames.Entry.Publish;

    public async Task&amp;lt;ActionGuardResult&amp;gt; EvaluateAsync(
        ActionGuardContext context,
        IServiceProvider services,
        CancellationToken ct = default)
    {
        var db = services.GetRequiredService&amp;lt;RasepiDbContext&amp;gt;();
        var entry = await db.Entries
            .AsNoTracking()
            .FirstOrDefaultAsync(e =&amp;gt; e.Id == context.EntityId, ct);

        if (entry is null)
            return ActionGuardResult.Allow();

        var workflowService = services.GetRequiredService&amp;lt;IWorkflowService&amp;gt;();
        var check = await workflowService
            .CheckPublishAllowedAsync(entry.Id, entry.HubId);

        if (check.IsAllowed)
            return ActionGuardResult.Allow();

        return ActionGuardResult.Deny(
            &amp;quot;workflow.approval_required&amp;quot;,
            check.Message ?? &amp;quot;Approval required before publishing.&amp;quot;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The core platform knows nothing about approval workflows. It just calls &lt;code&gt;Entry.Publish&lt;/code&gt; through the pipeline, and the guard blocks it if the workflow hasn&#39;t been completed.&lt;/p&gt;
&lt;h2&gt;The action pipeline: where everything converges&lt;/h2&gt;
&lt;p&gt;The &lt;code&gt;ActionPipeline&lt;/code&gt; is the single execution path for all guarded operations. It resolves which guards apply, evaluates them, and either blocks or executes the action.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class ActionPipeline : IActionPipeline
{
    public async Task&amp;lt;ActionPipelineResult&amp;gt; ExecuteAsync(
        string actionName,
        ActionGuardContext context,
        Func&amp;lt;Task&amp;gt; action,
        CancellationToken ct = default)
    {
        var result = await EvaluateAsync(actionName, context, ct);
        if (!result.IsAllowed) return result;

        await action();  // All guards passed — execute

        return result;   // Return modifications for caller
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;EvaluateAsync&lt;/code&gt; method does the heavy lifting:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;ActionPipelineResult&amp;gt; EvaluateAsync(
    string actionName,
    ActionGuardContext context,
    CancellationToken ct = default)
{
    // 1. Which plugins are enabled for this tenant?
    var enabledPlugins = await _resolver.GetEnabledPluginIdsAsync();

    // 2. Which guards match this action?
    var applicable = _guards
        .Where(g =&amp;gt; enabledPlugins.Contains(g.PluginId))
        .Where(g =&amp;gt; g.ActionName == null || g.ActionName == actionName)
        .ToList();

    // 3. Evaluate each guard
    var denials = new List&amp;lt;ActionGuardResult&amp;gt;();
    var modifications = new List&amp;lt;ActionGuardResult&amp;gt;();

    foreach (var guard in applicable)
    {
        try
        {
            var guardResult = await guard.EvaluateAsync(context, _services, ct);
            if (!guardResult.IsAllowed)
                denials.Add(guardResult);
            else if (guardResult.Modifications?.Count &amp;gt; 0)
                modifications.Add(guardResult);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, &amp;quot;Guard threw. Treating as Allow.&amp;quot;);
        }
    }

    // 4. Any denial blocks the whole action
    if (denials.Count &amp;gt; 0)
        return ActionPipelineResult.Blocked(denials);

    return modifications.Count &amp;gt; 0
        ? ActionPipelineResult.Allowed(modifications)
        : ActionPipelineResult.Allowed();
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Three important design decisions here:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Per-tenant resolution.&lt;/strong&gt; The &lt;code&gt;TenantPluginResolver&lt;/code&gt; checks which plugins each tenant has installed and enabled. A guard for a disabled plugin never runs.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;All-must-pass.&lt;/strong&gt; If any guard denies, the action is blocked. This is a deliberate security stance.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Guard errors fail open.&lt;/strong&gt; If a guard throws an exception, it&#39;s logged and treated as &lt;code&gt;Allow()&lt;/code&gt;. This prevents a broken plugin from locking the entire platform.&lt;/li&gt;
&lt;/ol&gt;
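&lt;p&gt;From the caller&#39;s side, a guarded operation is a single pipeline call. A sketch (the surrounding service fields and the denial-handling details are assumed):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var context = new ActionGuardContext(
    ActionNames.Entry.Publish, tenantId, userId, entryId,
    new Dictionary&amp;lt;string, object?&amp;gt;());

var result = await _pipeline.ExecuteAsync(
    ActionNames.Entry.Publish, context,
    () =&amp;gt; PublishEntryAsync(entryId));

if (!result.IsAllowed)
{
    // Surface the guard&#39;s ReasonCode and Message to the UI.
}
&lt;/code&gt;&lt;/pre&gt;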
&lt;h3&gt;Per-tenant plugin resolution&lt;/h3&gt;
&lt;p&gt;The resolver queries the &lt;code&gt;TenantPluginInstallations&lt;/code&gt; table (automatically scoped to the current tenant by EF global query filters):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class TenantPluginResolver : ITenantPluginResolver
{
    public async Task&amp;lt;IReadOnlySet&amp;lt;string&amp;gt;&amp;gt; GetEnabledPluginIdsAsync(
        CancellationToken ct = default)
    {
        if (_cache is not null) return _cache;

        var ids = await _db.TenantPluginInstallations
            .Where(i =&amp;gt; i.IsEnabled)
            .Select(i =&amp;gt; i.PluginId)
            .ToListAsync(ct);

        _cache = ids.ToHashSet();
        return _cache;
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Event-driven side effects&lt;/h2&gt;
&lt;p&gt;Actions are synchronous. Side effects aren&#39;t. After an action completes, the service publishes a domain event:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;await _eventPublisher.PublishAsync(
    EventNames.Entry.Created, entry.Id, new { entry.OriginalLanguage });
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Events are enqueued to an in-memory channel and processed by a background &lt;code&gt;EventConsumerWorker&lt;/code&gt;. The worker routes events to multiple systems:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Activity tracking.&lt;/strong&gt; Logs who did what, when&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Translation billing.&lt;/strong&gt; Tracks costs per provider&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plugin event handlers.&lt;/strong&gt; Any plugin can subscribe to domain events&lt;/li&gt;
&lt;/ul&gt;
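&lt;p&gt;The publisher side is a thin wrapper over &lt;code&gt;System.Threading.Channels&lt;/code&gt;. A sketch (the event record shape here is illustrative, not the real one):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed record DomainEvent(
    string Name, Guid EntityId, string PayloadJson);

public sealed class ChannelEventPublisher : IEventPublisher
{
    private readonly Channel&amp;lt;DomainEvent&amp;gt; _channel =
        Channel.CreateUnbounded&amp;lt;DomainEvent&amp;gt;();

    // The background worker drains this reader.
    public ChannelReader&amp;lt;DomainEvent&amp;gt; Reader =&amp;gt; _channel.Reader;

    public async Task PublishAsync(
        string name, Guid entityId, object? payload) =&amp;gt;
        await _channel.Writer.WriteAsync(new DomainEvent(
            name, entityId, JsonSerializer.Serialize(payload)));
}
&lt;/code&gt;&lt;/pre&gt;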
&lt;p&gt;Plugin event handlers implement &lt;code&gt;IPluginEventHandler&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public interface IPluginEventHandler
{
    string PluginId { get; }
    IReadOnlyList&amp;lt;string&amp;gt; SubscribedEvents { get; }

    Task HandleAsync(
        string eventName, Guid entityId,
        Guid? tenantId, Guid? userId,
        string payloadJson, IServiceProvider services,
        CancellationToken ct = default);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The worker only invokes handlers whose plugin is enabled for the tenant. This means plugin A&#39;s side effects never leak into a tenant that only has plugin B installed.&lt;/p&gt;
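&lt;p&gt;A concrete handler is a few lines. This hypothetical one reacts to new entries (the &lt;code&gt;StartApprovalAsync&lt;/code&gt; call is made up for the sketch):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class WorkflowEventHandler : IPluginEventHandler
{
    public string PluginId =&amp;gt; WorkflowPluginModule.PluginId;

    public IReadOnlyList&amp;lt;string&amp;gt; SubscribedEvents { get; } =
        new[] { EventNames.Entry.Created };

    public async Task HandleAsync(
        string eventName, Guid entityId,
        Guid? tenantId, Guid? userId,
        string payloadJson, IServiceProvider services,
        CancellationToken ct = default)
    {
        // E.g. open an approval request for the new entry.
        var workflow = services.GetRequiredService&amp;lt;IWorkflowService&amp;gt;();
        await workflow.StartApprovalAsync(entityId, ct);
    }
}
&lt;/code&gt;&lt;/pre&gt;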
&lt;h2&gt;The block-level translation engine&lt;/h2&gt;
&lt;p&gt;This is where the architecture pays off most visibly.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/block-translation.svg&quot; alt=&quot;Block-level translation: only changed blocks get retranslated&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Traditional platforms translate entire documents. We translate individual &lt;strong&gt;blocks&lt;/strong&gt;: paragraphs, headings, list items. When a user edits one paragraph in a 50-block document, only that paragraph needs retranslation. That&#39;s the source of our 94% cost savings.&lt;/p&gt;
&lt;h3&gt;How blocks are created from TipTap JSON&lt;/h3&gt;
&lt;p&gt;When a user saves a document, the TipTap editor sends JSON like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;{
  &amp;quot;type&amp;quot;: &amp;quot;doc&amp;quot;,
  &amp;quot;content&amp;quot;: [
    {
      &amp;quot;type&amp;quot;: &amp;quot;paragraph&amp;quot;,
      &amp;quot;attrs&amp;quot;: { &amp;quot;blockId&amp;quot;: &amp;quot;a1b2c3d4-...&amp;quot; },
      &amp;quot;content&amp;quot;: [{ &amp;quot;type&amp;quot;: &amp;quot;text&amp;quot;, &amp;quot;text&amp;quot;: &amp;quot;Hello world&amp;quot; }]
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The &lt;code&gt;BlockTranslationService&lt;/code&gt; parses this JSON and creates individual &lt;code&gt;EntryBlock&lt;/code&gt; records:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task&amp;lt;List&amp;lt;EntryBlock&amp;gt;&amp;gt; CreateBlocksFromDocumentAsync(
    Guid entryId, string language, string contentJson,
    int version, Guid userId)
{
    var doc = JsonDocument.Parse(contentJson);
    var content = doc.RootElement.GetProperty(&amp;quot;content&amp;quot;);

    var blocks = new List&amp;lt;EntryBlock&amp;gt;();
    int position = 0;
    foreach (var node in content.EnumerateArray())
    {
        var blockType = node.GetProperty(&amp;quot;type&amp;quot;).GetString();
        var blockJson = JsonSerializer.Serialize(node);

        // Strip metadata attrs before hashing
        var hashInput = StripBlockMetaAttrs(blockJson);

        var block = new EntryBlock
        {
            Id = ExtractOrGenerateBlockId(node),
            EntryId = entryId,
            Language = language,
            Position = position++,
            BlockType = blockType,
            ContentJson = blockJson,
            ContentHash = CalculateContentHash(hashInput),
            IsNoTranslate = ExtractNoTranslateFlag(node),
            Version = version,
        };

        blocks.Add(block);
        _context.EntryBlocks.Add(block);
    }

    await _context.SaveChangesAsync();
    return blocks;
}
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;SHA256 hashing for stale detection&lt;/h3&gt;
&lt;p&gt;The content hash is the core of stale detection. We hash the block content (after stripping metadata attributes like &lt;code&gt;blockId&lt;/code&gt; and &lt;code&gt;deleted&lt;/code&gt;) using SHA256:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;private string CalculateContentHash(string content)
{
    using var sha256 = SHA256.Create();
    var hashBytes = sha256.ComputeHash(Encoding.UTF8.GetBytes(content));
    return Convert.ToHexString(hashBytes);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When a source block changes, its hash changes. The system then compares every translation block&#39;s &lt;code&gt;SourceContentHash&lt;/code&gt; to the current source hash, and mismatches are marked &lt;code&gt;Stale&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task MarkTranslationsAsStaleAsync(List&amp;lt;Guid&amp;gt; changedBlockIds)
{
    var affected = await _context.TranslationBlocks
        .Where(t =&amp;gt; changedBlockIds.Contains(t.SourceBlockId))
        .ToListAsync();

    foreach (var translation in affected)
    {
        translation.Status = TranslationStatus.Stale;
        translation.UpdatedAt = DateTime.UtcNow;
    }

    await _context.SaveChangesAsync();
}
&lt;/code&gt;&lt;/pre&gt;
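&lt;p&gt;The &lt;code&gt;changedBlockIds&lt;/code&gt; fed into that method come from comparing hashes at save time. A sketch (assuming &lt;code&gt;newBlocks&lt;/code&gt; is the list produced by the parsing step above):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;// Compare the freshly computed hashes against the stored ones;
// only mismatched or new blocks need retranslation.
var oldHashes = await _context.EntryBlocks
    .Where(b =&amp;gt; b.EntryId == entryId &amp;amp;&amp;amp; b.Language == language)
    .ToDictionaryAsync(b =&amp;gt; b.Id, b =&amp;gt; b.ContentHash);

var changedBlockIds = newBlocks
    .Where(b =&amp;gt; !oldHashes.TryGetValue(b.Id, out var h) || h != b.ContentHash)
    .Select(b =&amp;gt; b.Id)
    .ToList();

await MarkTranslationsAsStaleAsync(changedBlockIds);
&lt;/code&gt;&lt;/pre&gt;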
&lt;h3&gt;Structure adaptation&lt;/h3&gt;
&lt;p&gt;Translators can change block types across languages. An English bullet list might become a German numbered list to match local convention. The system tracks this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var translation = new TranslationBlock
{
    SourceBlockId = sourceBlockId,
    Language = targetLanguage,
    BlockType = translatedBlockType,
    SourceBlockType = sourceBlock.BlockType,
    IsStructureAdapted = translatedBlockType != sourceBlock.BlockType,
    SourceContentHash = sourceBlock.ContentHash,
    Status = TranslationStatus.UpToDate,
};
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;Translation providers as plugins&lt;/h3&gt;
&lt;p&gt;External translation services (DeepL, Google Translate, etc.) plug in through &lt;code&gt;ITranslationProviderPlugin&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public interface ITranslationProviderPlugin : IRasepiPlugin
{
    string[] GetSupportedLanguages();

    Task&amp;lt;string&amp;gt; TranslateAsync(
        string text, string sourceLanguage, string targetLanguage);

    Task&amp;lt;TranslationBatchResult&amp;gt; TranslateBatchAsync(
        Dictionary&amp;lt;string, string&amp;gt; texts,
        string sourceLanguage, string targetLanguage);
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The batch method receives a dictionary of block IDs to content, translates them all, and returns the translations with a billed character count. Because we only send stale blocks, not the entire document, costs stay minimal.&lt;/p&gt;
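&lt;p&gt;Putting that together, a retranslation pass might collect only the stale blocks and send them as one batch. A sketch; provider selection is elided and the &lt;code&gt;GetPlainText&lt;/code&gt; helper is made up:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;var stale = await _context.TranslationBlocks
    .Where(t =&amp;gt; t.Language == targetLanguage
             &amp;amp;&amp;amp; t.Status == TranslationStatus.Stale)
    .ToListAsync();

// Block id -&amp;gt; source text, pulled from the source blocks.
var texts = stale.ToDictionary(
    t =&amp;gt; t.SourceBlockId.ToString(),
    t =&amp;gt; GetPlainText(t.SourceBlockId));  // hypothetical helper

var batch = await provider.TranslateBatchAsync(
    texts, sourceLanguage, targetLanguage);
&lt;/code&gt;&lt;/pre&gt;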
&lt;h2&gt;Tenant isolation: the invisible safety net&lt;/h2&gt;
&lt;p&gt;Every system described above runs inside strict tenant isolation.&lt;/p&gt;
&lt;p&gt;The &lt;code&gt;TenantContextMiddleware&lt;/code&gt; resolves the tenant from the JWT on every request and verifies membership:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public async Task InvokeAsync(
    HttpContext context, TenantContext tenantContext, RasepiDbContext db)
{
    var tenantIdClaim = context.User.FindFirstValue(&amp;quot;tenant_id&amp;quot;);
    var userIdClaim = context.User.FindFirstValue(ClaimTypes.NameIdentifier);

    // Populate scoped context
    tenantContext.TenantId = Guid.Parse(tenantIdClaim);
    tenantContext.UserId = Guid.Parse(userIdClaim);

    // Verify membership — fail closed
    var membership = await db.TenantMemberships
        .Where(m =&amp;gt; m.TenantId == tenantContext.TenantId
                  &amp;amp;&amp;amp; m.UserId == tenantContext.UserId)
        .FirstOrDefaultAsync();

    if (membership == null)
    {
        context.Response.StatusCode = 401;
        return;  // No membership = no access
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Entity Framework global query filters ensure that even if a developer forgets to filter by tenant, the database layer does it automatically:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;modelBuilder.Entity&amp;lt;Hub&amp;gt;()
    .HasQueryFilter(h =&amp;gt; h.TenantId == _tenantContext.TenantId);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The result: &lt;code&gt;db.Hubs.ToListAsync()&lt;/code&gt; always returns only the current tenant&#39;s hubs. Data leaks require actively bypassing the query filter, which is banned in our codebase.&lt;/p&gt;
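&lt;p&gt;For the filter to see the current tenant, the &lt;code&gt;DbContext&lt;/code&gt; takes the scoped &lt;code&gt;TenantContext&lt;/code&gt; through DI. A simplified sketch, not the full context:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-csharp&quot;&gt;public sealed class RasepiDbContext : DbContext
{
    private readonly TenantContext _tenantContext;

    public RasepiDbContext(
        DbContextOptions&amp;lt;RasepiDbContext&amp;gt; options,
        TenantContext tenantContext) : base(options)
    {
        _tenantContext = tenantContext;
    }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Every tenant-owned entity gets the same filter.
        modelBuilder.Entity&amp;lt;Hub&amp;gt;()
            .HasQueryFilter(h =&amp;gt; h.TenantId == _tenantContext.TenantId);
    }
}
&lt;/code&gt;&lt;/pre&gt;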
&lt;h2&gt;The full picture&lt;/h2&gt;
&lt;p&gt;When a user clicks &amp;quot;Publish&amp;quot; on an entry, here&#39;s what happens:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;Request enters.&lt;/strong&gt; Authentication validates the JWT, &lt;code&gt;TenantContextMiddleware&lt;/code&gt; resolves and verifies the tenant.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Controller calls pipeline.&lt;/strong&gt; &lt;code&gt;IActionPipeline.ExecuteAsync(&amp;quot;Entry.Publish&amp;quot;, context, action)&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Pipeline resolves guards.&lt;/strong&gt; Queries which plugins the tenant has enabled, selects applicable guards.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Guards evaluate.&lt;/strong&gt; The Workflow guard checks for approvals, the Retention guard checks for policy, the Rules guard validates content. All pass? The entry is published.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Events fire.&lt;/strong&gt; &lt;code&gt;Entry.Published&lt;/code&gt; event is enqueued. A background worker logs activity, updates translation billing, and calls plugin event handlers.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Block translations checked.&lt;/strong&gt; Stale blocks are identified for retranslation.&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;Each layer does its job. No layer reaches into another. That&#39;s the architecture.&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;We didn&#39;t build this because extensibility is trendy. We built it because a documentation platform that can&#39;t adapt to each team&#39;s workflow will eventually be replaced by one that can. And a platform that adapts without guardrails will eventually break something that matters.&lt;/p&gt;
&lt;/blockquote&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="architecture" />
    <category term="plugins" />
    <category term="ai" />
  </entry>
  <entry>
    <title>Dev Tunnels vs NGrok</title>
    <link href="https://www.tcdev.de/blog/dev-tunnels-in-visual-studio/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/dev-tunnels-in-visual-studio/</id>
    <updated>2024-02-29T00:00:00Z</updated>
    <summary>Can Visual Studio dev-tunnels replace Ngrok yet?</summary>
    <content type="html">&lt;p&gt;&lt;/p&gt;
&lt;figure class=&quot;image&quot;&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/dev-tunnels-in-visual-studio-inline-1.gif&quot; alt=&quot;Source: https://devblogs.microsoft.com/visualstudio/dev-tunnels-in-visual-studio-for-asp-net-core-projects/&quot; style=&quot;max-width: 100%;&quot; /&gt;
&lt;figcaption&gt;Taken from https://devblogs.microsoft.com/visualstudio/dev-tunnels-in-visual-studio-for-asp-net-core-projects/&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;table border=&quot;1&quot; style=&quot;border-collapse: collapse;&quot;&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;&lt;/th&gt;
&lt;th&gt;Visual Studio Dev Tunnels&lt;/th&gt;
&lt;th&gt;NGrok&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;&lt;td&gt;SSL/HTTPS&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Tunnels&lt;/td&gt;&lt;td&gt;unlimited&lt;/td&gt;&lt;td&gt;1 per license, 2 per agent (more for paid)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;TCP connections&lt;/td&gt;&lt;td&gt;?&lt;/td&gt;&lt;td&gt;up to 100&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Bandwidth restriction&lt;/td&gt;&lt;td&gt;? (likely unlimited)&lt;/td&gt;&lt;td&gt;✔ 1 GB / month&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Maximum uptime&lt;/td&gt;&lt;td&gt;unlimited&lt;/td&gt;&lt;td&gt;2 hours (free)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Authentication&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;td&gt;✔ (paid only, 50 MAU)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Request log&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Replay requests&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Inspect requests&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Random URLs&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Custom URLs&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;td&gt;✔ (paid only)&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Durable URLs&lt;/td&gt;&lt;td&gt;✔ (might not be reliable)&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td&gt;Automatic start/stop&lt;/td&gt;&lt;td&gt;✔&lt;/td&gt;&lt;td&gt;❌&lt;/td&gt;&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;(All information as of writing this article in January 2023; the table is subject to change.)&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="TCDev" />
  </entry>
  <entry>
    <title>Visual Studio 2022 Preview 3 is here!</title>
    <link href="https://www.tcdev.de/blog/visual-studio-2022-preview-3-adds-some-great-features/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/visual-studio-2022-preview-3-adds-some-great-features/</id>
    <updated>2023-01-21T00:00:00Z</updated>
    <summary>Read about some of the new features shipped in Visual Studio 2022 Preview 3</summary>
    <content type="html">&lt;p&gt;Visual Studio 2022 Preview 3 was released a few days ago and is packed with quite a few nice updates now (including Preview 1 + 2 features).&amp;nbsp;&lt;br /&gt;The new release comes with quite some handy features and shows yet again that Microsoft is actually listening to the community! You can find all community suggestions that&lt;br /&gt;made it into the release &lt;a href=&quot;https://developercommunity.visualstudio.com/VisualStudio?q=%5BFixed+In%3A+Visual+Studio+2022+version+17.5%5D&amp;amp;ftype=idea&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;! Working with API&#39;s got some huge quality-of-life upgrades but also some handy tools for developers working on integrations,&lt;br /&gt;plugins, or mobile apps last but not least you now have a nice integrated markdown editor for those pesky readme files!&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Let&#39;s have a look at what I think are the most useful ones, in no specific order:&lt;/p&gt;
&lt;h1&gt;Integrated Markdown editor (&lt;strong&gt;Community suggestion!&lt;/strong&gt;)&lt;/h1&gt;
&lt;p&gt;Although it was added in Preview 2, the Markdown editor is now available for everyone and enabled by default. You can now easily and conveniently edit those readme.md files, or any other documentation you might have, inside Visual Studio!&lt;br /&gt;&lt;br /&gt;A nice feature on top: spell checking works in Markdown as well :)&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-1.webp&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;h1&gt;DevTunnels&lt;/h1&gt;
&lt;p&gt;Those of you who have been working on a plugin, integration, MS Teams extension, or similar might know the need for a tunnel to your locally running Visual Studio.&lt;br /&gt;In the past I&#39;ve been using ngrok for that, which does its job quite well but is an external tool: you have to set it up, keep it running, and so on.&lt;br /&gt;Now Visual Studio comes with an integrated way of achieving the same, the so-called &quot;DevTunnels&quot;.&lt;br /&gt;After configuring a tunnel, it starts and stops with your application, so you don&#39;t have to think about it. It even comes with built-in authentication!&lt;/p&gt;
&lt;p&gt;Let&#39;s have a closer look.&lt;br /&gt;&lt;br /&gt;After enabling DevTunnels in the preview options, you&#39;ll find a new entry in the context menu:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-2.png&quot; alt=&quot;undefined&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;From here you can manage your tunnels easily. Adding a new tunnel is quite simple: you can give it a name so you can recognize it later, and you can configure authentication and persistence options. According to the official preview docs, the persistent URL should stay for the lifetime of the tunnel, but I have not tried that yet.&lt;br /&gt;&lt;br /&gt;The &quot;Access&quot; setting is also quite handy: you can set it to public (not recommended), to private (which only allows access for yourself), or to organizational (which allows anyone in your tenant to access it).&lt;br /&gt;&lt;br /&gt;Organizational, however, only works with an M365 account and does not work with GitHub.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-3.png&quot; alt=&quot;undefined&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;After setting up the tunnel you get a new window showing its current state; it also lists the URLs that have been generated. You can only remove the tunnel here, everything else is tied to your project: when you start the project, the tunnel starts automatically as well, no manual steps involved!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-4.webp&quot; alt=&quot;undefined&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For me this is a huge game changer: I no longer have to deal with ngrok or make sure it&#39;s started, and it secures my work on top. Nicely done!&lt;/p&gt;
&lt;h1&gt;Colorized Tabs (RegEx based!)&lt;/h1&gt;
&lt;p&gt;One of the new features I like the most, as it really helps me bring more structure to my work, is the colorized tabs feature. It is not completely new but got quite an update:&lt;br /&gt;you can now colorize (and group) tabs based on regular expressions! This allows you, for example, to give all your controllers a specific color, or to separate models from controllers and JS from C# files.&lt;br /&gt;It&#39;s really powerful and up to you! Another nice touch: the configuration lives in a .txt file, which means you can have different settings per repository you&#39;re working on!&lt;br /&gt;&lt;br /&gt;It&#39;s simple to enable:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-5.png&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;After enabling that there&#39;s a new &quot;Configure Regex&quot; option which leads to this:&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-6.webp&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;For myself, I separated extensions, controllers, and middleware, as that&#39;s what I&#39;m constantly working on for my APIGenerator:&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-7.png&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;^.*Extension\.cs$
^.*Controller\.cs$
^.*Middleware\.cs$&lt;/code&gt;&lt;/pre&gt;
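&lt;p&gt;If you want to sanity-check patterns like these before putting them in the config file, you can test them quickly outside Visual Studio. A small sketch in JavaScript, whose regex syntax matches the flavor used here for such simple patterns (the file names are made up for illustration):&lt;/p&gt;

```javascript
// The three tab-colorization patterns from above, tested against some
// hypothetical file names to confirm each matches what it should.
const patterns = [/^.*Extension\.cs$/, /^.*Controller\.cs$/, /^.*Middleware\.cs$/];
const files = ["JsonExtension.cs", "PersonController.cs", "AuthMiddleware.cs", "Person.cs"];
for (const file of files) {
  const group = patterns.findIndex((p) => p.test(file));
  console.log(file, group === -1 ? "no group" : "group " + group);
}
```

&lt;p&gt;A file that matches no pattern (like Person.cs above) should simply keep the default tab color.&lt;/p&gt;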
&lt;h1&gt;ASP.NET Output in the integrated terminal (yay!)&lt;/h1&gt;
&lt;p&gt;When working with WebAPI projects you often had tons of console windows flying around, at least always one.&lt;br /&gt;This has changed: Visual Studio no longer opens a new console window but shows the output of your app directly in the integrated terminal. When you have multiple projects, each gets its own terminal window and you can easily switch between them.&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-8.webp&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;h1&gt;WebAPI Endpoint Explorer&lt;/h1&gt;
&lt;p&gt;One nice feature I didn&#39;t even know about, as it&#39;s not listed in the official preview docs, is the new Endpoint Explorer. (Thanks Hassan for sharing!)&lt;br /&gt;The new view parses your controllers as you code and lets you navigate all your endpoints directly from here.&lt;br /&gt;Clicking an endpoint brings you to the corresponding function, and the view gives a nice overview.&lt;br /&gt;&lt;br /&gt;I don&#39;t know what the plans are for this; to be really useful it definitely needs a few more features (like testing those endpoints!).&lt;br /&gt;But it&#39;s a first step, and if you combine it with the .http files explained further below it&#39;s a handy addition and shows what might come in the future, so stay tuned!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-9.gif&quot; alt=&quot;undefined&quot; style=&quot;max-width: 60%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;br /&gt;(Thanks &lt;a href=&quot;https://twitter.com/HassanRezkHabib&quot; rel=&quot;follow&quot;&gt;Hassan Habib&lt;/a&gt; for the gif!)&lt;/p&gt;
&lt;h1&gt;.HTTP Files for easy REST call testing&lt;/h1&gt;
&lt;p&gt;It&#39;s been around for a while but is not well known, so I&#39;m including it here. Visual Studio 2022 now has a lovely feature to quickly test your live endpoints or just try things out.&lt;br /&gt;&lt;br /&gt;If you create a .http file, you can perform API calls directly from that file, as seen in the gif below.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-10.gif&quot; style=&quot;max-width: 60%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;(Thanks &lt;a href=&quot;https://twitter.com/HassanRezkHabib&quot; rel=&quot;follow&quot;&gt;Hassan Habib&lt;/a&gt; for the gif!)&lt;/p&gt;
&lt;p&gt;The HTTP calls in here are really simple to write: a method followed by an endpoint.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;# &amp;lt;METHOD&amp;gt; &amp;lt;Endpoint&amp;gt;, for example:
GET https://api.github.com/xxxx&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;If you need headers, or are testing a POST/PUT call with a body, add them on the lines below (with a blank line before the body):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;POST https://api.github.com/xxxx
Content-Type: application/json

{
  &quot;username&quot;: &quot;testuser&quot;,
  &quot;password&quot;: &quot;testpassword&quot;
}&lt;/code&gt;&lt;/pre&gt;
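&lt;p&gt;The format is deliberately simple: a method line, optional headers, then a blank line and the body. To make that structure explicit, here is a minimal, purely illustrative parser in JavaScript (this is not how Visual Studio does it, just a sketch of the format):&lt;/p&gt;

```javascript
// Minimal, purely illustrative parser for a single .http request:
// line 1 is "METHOD URL", then headers, then a blank line, then the body.
function parseHttpRequest(text) {
  const lines = text.trim().split("\n");
  const [method, url] = lines[0].split(" ");
  const headers = {};
  let i = 1;
  while (i !== lines.length) {
    const line = lines[i].trim();
    i += 1;
    if (line === "") break; // blank line separates headers from body
    const colon = line.indexOf(":");
    headers[line.slice(0, colon).trim()] = line.slice(colon + 1).trim();
  }
  return { method, url, headers, body: lines.slice(i).join("\n") };
}

const sample = [
  "POST https://api.github.com/xxxx",
  "Content-Type: application/json",
  "",
  "{ \"username\": \"testuser\", \"password\": \"testpassword\" }",
].join("\n");
console.log(parseHttpRequest(sample).method); // "POST"
```

&lt;p&gt;That is roughly the mental model to keep in mind when a request in a .http file doesn&#39;t behave as expected: check the method line, the headers, and the blank line before the body.&lt;/p&gt;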
&lt;p&gt;A handy little feature that makes your life a bit easier. I&#39;ll cover it in more detail in a dedicated article soon.&lt;/p&gt;
&lt;h1&gt;Colorized brace pairs are here! (C++ only for now)&lt;/h1&gt;
&lt;p&gt;Nothing too special for me personally, but an honorary mention goes to the new colorization feature for braces: different nesting levels and scopes can now have different colors, making code navigation quite a bit simpler. I&#39;ll properly test it once it&#39;s available for C#, which has at least been announced.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-11.webp&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;h2&gt;All-In-One Search results&lt;/h2&gt;
&lt;p&gt;Visual Studio now lets you search for whatever you want, in your code and in Visual Studio functionality as well! Simply type what you&#39;re looking for and you get all matching results. Search for something in your code and it navigates you right to it; search for a VS window or option and it takes you straight to the options dialog or opens the window you&#39;ve been asking for!&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/visual-studio-2022-preview-3-adds-some-great-features-inline-12.webp&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;There are a ton more new features I did not cover, such as the added and improved spell checking (fix those comments!), improvements when working with container images, quickly adding new files, and sticky scroll when editing code files. All I can say is: VS keeps getting better and better every day!&lt;br /&gt;&lt;br /&gt;I&#39;ll cover some topics in detail later, so stay tuned!&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term=".NET" />
    <category term="TCDev" />
    <category term="Visual Studio" />
  </entry>
  <entry>
    <title>Adaptive Cards, what else can you use them for?</title>
    <link href="https://www.tcdev.de/blog/adaptive-cards-what-else-can-you-use-them-for/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/adaptive-cards-what-else-can-you-use-them-for/</id>
    <updated>2022-08-30T00:00:00Z</updated>
    <summary>Many people think Adaptive Cards is something Microsoft invented to be used in MS Teams or Power Platform and the alike and while that is totally true it&#39;s only half the picture. Adaptive Cards and the technology behind, including the templating engine, can be used for so much more!</summary>
    <content type="html">&lt;div class=&quot;is it iu iv iw&quot;&gt;
&lt;div class=&quot;&quot;&gt;
&lt;h1 id=&quot;7735&quot; class=&quot;pw-post-title ix iy iz bo ja jb jc jd je jf jg jh ji jj jk jl jm jn jo jp jq jr js jt ju jv gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Cards, what else can you use them for?&lt;/h1&gt;
&lt;/div&gt;
&lt;p id=&quot;0185&quot; class=&quot;pw-post-body-paragraph jw jx iz jy b jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;As usual, if you don&amp;rsquo;t know what Adaptive Cards are you probably want to read a bit about them first. The easiest start is by just going to&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a class=&quot;au ku&quot; href=&quot;http://www.adaptivecards.io/&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;www.adaptivecards.io&lt;/a&gt;, play a bit with the designer there, have a look at the samples and get a feeling what all this is about. Just come back here whenever you&amp;rsquo;re done and don&amp;rsquo;t forget me :)&lt;/p&gt;
&lt;p id=&quot;ad30&quot; class=&quot;pw-post-body-paragraph jw jx iz jy b jz ka kb kc kd ke kf kg kh ki kj kk kl km kn ko kp kq kr ks kt is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Many people think Adaptive Cards is something Microsoft invented to be used in MS Teams or Power Platform and the alike and while that is totally true it&#39;s only half the picture. Adaptive Cards and the technology behind, including the templating engine, can be used for so much more!&lt;/p&gt;
&lt;/div&gt;
&lt;hr /&gt;
&lt;div&gt;
&lt;p&gt;Let&#39;s have a closer look at what the website says:&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Adaptive Cards are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into native UI that automatically adapts to its surroundings.&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;That, to me, doesn&amp;rsquo;t say anything about a specific Microsoft product. No! You can use them in your apps, your websites, your mobile apps; basically wherever you like. And everything you need to do so is included in the library itself on GitHub, NPM, or NuGet (https://www.github.com/microsoft/adaptivecards).&lt;/p&gt;
&lt;p&gt;In the last year, as part of my work for Teamwork.com, we used Adaptive Cards for various things. Our MS Teams app (available in beta, just ask us) uses cards for various notifications and UI pieces.&lt;/p&gt;
&lt;p&gt;The Visual Studio Code extension is fully made with cards and was showcased during Build and MS Ignite last year. If you want to have a look, here it is:&lt;br /&gt;&lt;a href=&quot;https://github.com/Teamwork/vscode-projects&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/Teamwork/vscode-projects&lt;/a&gt;&lt;br /&gt;&lt;a href=&quot;https://marketplace.visualstudio.com/items?itemName=Teamwork.twp&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://marketplace.visualstudio.com/items?itemName=Teamwork.twp&lt;/a&gt;&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;Isn&amp;rsquo;t that still a Microsoft product?&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;So&amp;hellip; yeah, someone said to me at Ignite: that&#39;s great, but VS Code is still a Microsoft product. And since VS Code is Electron-based, maybe that&#39;s true; I get that. However, I have more to show you :)&lt;/p&gt;
&lt;p&gt;This is a screenshot of one of our upcoming desktop apps, written in Electron and Vue.js. We used Adaptive Cards here to show some file details and to allow adding comments on files.&lt;/p&gt;
&lt;figure class=&quot;paragraph-image&quot;&gt;
&lt;img alt=&quot;&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptive-cards-what-else-can-you-use-them-for-inline-1.jpg&quot; width=&quot;700&quot; height=&quot;455&quot; role=&quot;presentation&quot; /&gt;
&lt;figcaption&gt;Teamwork Document Editor V2 using Adaptive Cards for a few features&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;Again, like VS Code, this is obviously an Electron app, so it&#39;s the same argument. Before someone objects, let&#39;s get a bit crazier :)&lt;/p&gt;
&lt;/div&gt;
&lt;hr /&gt;
&lt;div&gt;
&lt;h1&gt;Why not build a (somewhat) full app UI with basically just Adaptive Cards?&lt;/h1&gt;
&lt;p&gt;Yeah, that sounds crazy; some people might not even think it&#39;s doable. But here we are, have a look at these two screenshots:&lt;/p&gt;
&lt;figure class=&quot;paragraph-image&quot;&gt;
&lt;img alt=&quot;&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptive-cards-what-else-can-you-use-them-for-inline-2.png&quot; width=&quot;700&quot; height=&quot;233&quot; role=&quot;presentation&quot; /&gt;
&lt;figcaption&gt;Person listing, made with just Bootstrap and Cards&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;figure class=&quot;paragraph-image&quot;&gt;
&lt;img alt=&quot;&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptive-cards-what-else-can-you-use-them-for-inline-3.png&quot; width=&quot;700&quot; height=&quot;490&quot; role=&quot;presentation&quot; /&gt;
&lt;figcaption&gt;Adaptive Card used to grab person details&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p&gt;In roughly one hour I built the example app you can see in the screenshots. The only things this app uses are Bootstrap, Vue.js, and Adaptive Cards. There&amp;rsquo;s no input field or anything in the code itself; Vue.js is only used for routing and data/state management, Bootstrap for grid and positioning.&lt;/p&gt;
&lt;p&gt;The app is mainly made of two cards: one for the list view and one for edit/create.&lt;/p&gt;
&lt;p&gt;List card:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp/blob/master/AdaptiveCardsVuePeople/src/assets/personCard.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp/blob/master/AdaptiveCardsVuePeople/src/assets/personCard.json&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Edit card:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp/blob/master/AdaptiveCardsVuePeople/src/assets/personCreate.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp/blob/master/AdaptiveCardsVuePeople/src/assets/personCreate.json&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;We can use the same card for both edit and create because the templating engine omits empty values and just shows the placeholder: when showing the create modal we send no data, and the templating engine automatically shows the placeholder instead of a value.&lt;/p&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;{
    &quot;type&quot;: &quot;Input.Text&quot;,
    &quot;placeholder&quot;: &quot;Person full name&quot;,
    &quot;value&quot;: &quot;{$root.displayName}&quot;,
    &quot;id&quot;: &quot;name&quot;
},&lt;/code&gt;&lt;/pre&gt;
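&lt;p&gt;For illustration, here is a tiny stand-in for that templating step in JavaScript. It only handles flat {$root.property} bindings and is not the real adaptivecards-templating library, but it shows the behavior described above: bound values replace the binding, and missing data makes the value disappear so the placeholder shows:&lt;/p&gt;

```javascript
// Simplified stand-in for the Adaptive Cards templating step: a bound
// value replaces the {$root.prop} binding; with no data, the "value"
// property is dropped so the input falls back to its placeholder.
function expandCard(element, data) {
  const result = { ...element };
  for (const [key, value] of Object.entries(result)) {
    if (typeof value !== "string") continue;
    const match = value.match(/^\{\$root\.(\w+)\}$/);
    if (!match) continue;
    const bound = data ? data[match[1]] : undefined;
    if (bound !== undefined) {
      result[key] = bound;
    } else {
      delete result[key]; // placeholder will show instead
    }
  }
  return result;
}

const input = {
  type: "Input.Text",
  placeholder: "Person full name",
  value: "{$root.displayName}",
  id: "name",
};
console.log(expandCard(input, { displayName: "Ada" }).value); // "Ada"
console.log("value" in expandCard(input, undefined)); // false
```

&lt;p&gt;In the real app, the expanded JSON is then handed to the Adaptive Cards renderer, which turns it into native UI.&lt;/p&gt;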
&lt;p&gt;But let&#39;s not just talk about it; demos are the best proof. Have a look at the full app, available and working online:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://deejaytc.github.io/AdaptiveCardsPeopleApp/&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://deejaytc.github.io/AdaptiveCardsPeopleApp/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Full source available here:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a href=&quot;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/AdaptiveCardsPeopleApp&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Now, why am I telling you about this?&lt;/h2&gt;
&lt;p&gt;Well, as said in the beginning, Adaptive Cards and everything related to them are a lot more than just a technology used in Microsoft products. They are highly flexible and can be used for a ton of things. This article should simply show that you can do more with cards than you might have thought until now.&lt;/p&gt;
&lt;p&gt;Just start thinking: what else can I use this for? I&amp;rsquo;m sure you&amp;rsquo;ll find a few great things.&lt;/p&gt;
&lt;p&gt;Let me know what you use cards for today, and let&#39;s talk!&lt;/p&gt;
&lt;/div&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="AdaptiveCards" />
    <category term="Guide" />
  </entry>
  <entry>
    <title>CRUD API&#39;s in an instant</title>
    <link href="https://www.tcdev.de/blog/crud-apis-in-an-instant/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/crud-apis-in-an-instant/</id>
    <updated>2022-03-31T00:00:00Z</updated>
    <summary>Generating fully working CRUD API&#39;s in seconds!</summary>
    <content type="html">&lt;h2&gt;Want an easy way to build a CRUD API?&lt;/h2&gt;
&lt;p&gt;Building an API with .NET can be quite time-consuming, as there is a lot of boilerplate code you usually have to deal with: controllers, Entity Framework, models, routes, and whatever else comes to mind. However, it doesn&#39;t have to be like this; it can actually be done lightning fast!&lt;/p&gt;
&lt;p&gt;What if someone told you that this snippet, just a class, is already a fully working API?&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;    [Api(&quot;/courses&quot;)]
    public class Course : IObjectBase&amp;lt;int&amp;gt;
    {
        public int Id { get; set; }
        public List&amp;lt;Student&amp;gt; Students { get; set; }
        public Teacher Teacher { get; set; }
        public List&amp;lt;DateTime&amp;gt; Schedule { get; set; }
    }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Yes, this is the FULL code you have to write to get a working CRUD API with everything handled out of the box: database, routing, caching, OpenAPI definition... it&#39;s all there, based on just this class.&lt;/p&gt;
&lt;p&gt;The same can be done with pure JSON as well!&lt;/p&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;  {
    &quot;name&quot;: &quot;Car&quot;,
    &quot;route&quot;: &quot;/cars&quot;,
    &quot;idType&quot;: &quot;int&quot;,
    &quot;Fields&quot;: [
      {
        &quot;name&quot;: &quot;Name&quot;,
        &quot;type&quot;: &quot;String&quot;
      },
      {
        &quot;name&quot;: &quot;Description&quot;,
        &quot;type&quot;: &quot;String&quot;
      },
      {
        &quot;name&quot;: &quot;Year&quot;,
        &quot;type&quot;: &quot;int&quot;
      },
      {
        &quot;name&quot;: &quot;Make&quot;,
        &quot;type&quot;: &quot;virtual Make&quot;
      },
      {
        &quot;name&quot;: &quot;MakeId&quot;,
        &quot;type&quot;: &quot;int&quot;
      }
    ]
  }&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;How is this done? What is this magic?&lt;/h2&gt;
&lt;p&gt;Creating APIs this fast and simply is possible with my open-source project &quot;APIGenerator&quot;. You can find it on GitHub here -&amp;gt; https://github.com/DeeJayTC/net-dynamic-api&lt;br /&gt;A few more details can also be found here -&amp;gt; https://www.tcdev.de/instant-crud-apis-with-net&lt;br /&gt;&lt;br /&gt;APIGenerator turns any class or JSON definition that follows its schema into a full-blown API with everything working out of the box.&lt;br /&gt;It can be used with any database supported by Entity Framework.&lt;/p&gt;
&lt;h2&gt;Getting Started&lt;/h2&gt;
&lt;p&gt;Start a new WebAPI or WebApp project with .NET 6.&lt;/p&gt;
&lt;p&gt;Download the package:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;dotnet add package TCDev.ApiGenerator&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;APIGenerator comes with a few optional dependencies, of which you need at least one of the database packages to use the API properly.&lt;/p&gt;
&lt;p&gt;Install either of these for the matching database:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;TCDev.APIGenerator.Data.SQL
TCDev.APIGenerator.Data.SQLite
TCDev.APIGenerator.Data.Postgres
TCDev.APIGenerator.Data.InMemory&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It is also highly recommended to add the OData package:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;dotnet add package TCDev.APIGenerator.OData&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Setting things up&lt;/h2&gt;
&lt;p&gt;Add the library in Program.cs (or Startup.cs if you&#39;re using the old style!):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;builder.Services.AddApiGeneratorServices()
                .AddConfig(NameOfRootNodeInAppSettings);
// or
builder.Services.AddApiGeneratorServices()
                .AddConfig(new ApiGeneratorConfig() { ... });
// or
builder.Services.AddApiGeneratorServices()
                .AddConfig();&lt;/code&gt;&lt;/pre&gt;
&lt;h2&gt;Creating your first API&lt;/h2&gt;
&lt;p&gt;The first and most important thing to note is that you need to implement the &lt;strong&gt;IObjectBase&amp;lt;T&amp;gt;&lt;/strong&gt; interface for your primary key. It lets you select the type of primary key (int, string or Guid) you want to use, and it is also used to generate the API endpoints later on. This is the minimum requirement.&lt;/p&gt;
&lt;p&gt;Let&#39;s write a sample class:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public class Person : IObjectBase&amp;lt;int&amp;gt;
{
   public int Id { get; set; }
   public string Name { get; set; }
   public DateTime DateOfBirth { get; set; }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Now, this is enough for the underlying system to generate the database tables for this class, and you can theoretically query and store data for it. It is not an API endpoint yet, though.&lt;/p&gt;
&lt;p&gt;To have a class appear in Swagger with all the needed CRUD endpoints generated, you have to actually mark it as an API. This is quite easy thanks to our Api attribute.&lt;/p&gt;
&lt;p&gt;Let&#39;s extend the class further to work as an API:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;[Api(&quot;/people&quot;)]
public class Person : IObjectBase&amp;lt;int&amp;gt;
{
   public int Id { get; set; }
   public string Name { get; set; }
   public DateTime DateOfBirth { get; set; }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;See how we only added that attribute? Yes, this is enough to turn the class into a full API! If you start your app now, you should be greeted by Swagger with a fully working API.&lt;/p&gt;
&lt;p&gt;For more details and guides, jump straight to the docs: &lt;a href=&quot;https://www.tcdev.de/&quot; rel=&quot;follow&quot;&gt;https://www.tcdev.de&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Let me know what you think. Any questions? Just jump into the comments below!&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="Guide" />
    <category term=".NET" />
  </entry>
  <entry>
    <title>Generic controllers in .NET Core</title>
    <link href="https://www.tcdev.de/blog/generic-controllers-in-net-core/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/generic-controllers-in-net-core/</id>
    <updated>2022-03-31T00:00:00Z</updated>
<summary>Controllers are often very similar to each other; here&#39;s a generic approach to cut the duplication.</summary>
    <content type="html">&lt;p&gt;In many many repositories you can find tons of controllers, completely similar code with the only difference of serving different types.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Here&#39;s one approach to fix this. If you just want the full code, skip the article and look here -&amp;gt; &lt;a href=&quot;https://github.com/DeeJayTC/samples&quot; title=&quot;https://github.com/DeeJayTC/samples&quot; rel=&quot;follow&quot;&gt;https://github.com/DeeJayTC/samples&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Using one generic controller for all the types in your project&lt;/h3&gt;
&lt;p&gt;The first step is to create a generic controller. This is really simple: just implement a controller as usual and add a type parameter T to make it generic.&amp;nbsp;&lt;br /&gt;We also need a common interface for all the classes; I named it IObjectBase. It is used to make sure all classes share the same Id property.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;The interface:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// [Key] comes from System.ComponentModel.DataAnnotations
public interface IObjectBase&amp;lt;TId&amp;gt;
{
   [Key]
   TId Id { get; set; }
}&lt;/code&gt;&lt;/pre&gt;
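&lt;p&gt;As an illustration (the Product class here is a made-up example, not part of the sample repo), any model then just implements the interface with its key type:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Hypothetical example model; any POCO works the same way
public class Product : IObjectBase&amp;lt;int&amp;gt;
{
   public int Id { get; set; }      // satisfies IObjectBase&amp;lt;int&amp;gt;
   public string Name { get; set; }
   public decimal Price { get; set; }
}&lt;/code&gt;&lt;/pre&gt;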
&lt;p&gt;The Controller:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;[Route(&quot;api/[controller]&quot;)]
[Produces(&quot;application/json&quot;)]
public class GenericController&amp;lt;T, TId&amp;gt; : Controller
   where T : class, IObjectBase&amp;lt;TId&amp;gt;
{
   private readonly GenericDbContext db;

   public GenericController(GenericDbContext context)
   {
      this.db = context;
   }

   [HttpGet]
   public IQueryable&amp;lt;T&amp;gt; Get()
   {
      return this.db.Set&amp;lt;T&amp;gt;();
   }

   // ...excluded for brevity, full sample in repo
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We also add a DbContext, pretty much as usual, similar to this:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public class GenericDbContext : DbContext
{
   public static IModel StaticModel { get; } = BuildStaticModel();

   public DbSet&amp;lt;Something&amp;gt; Somethings { get; set; }
   public DbSet&amp;lt;SomeOtherThing&amp;gt; OtherThing { get; set; }

   protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
   {
      if (!optionsBuilder.IsConfigured) optionsBuilder.UseInMemoryDatabase(&quot;ApplicationDb&quot;);
   }

   protected override void OnModelCreating(ModelBuilder builder)
   {
      base.OnModelCreating(builder);
   }

   private static IModel BuildStaticModel()
   {
      using var dbContext = new GenericDbContext();
      return dbContext.Model;
   }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Don&#39;t forget to add the DBContext to your startup file!&lt;/p&gt;
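&lt;p&gt;One common way to do this (a standard EF Core registration, not code from the sample repo) looks like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Program.cs - register the context so it can be injected into the generic controller
builder.Services.AddDbContext&amp;lt;GenericDbContext&amp;gt;(options =&amp;gt;
   options.UseInMemoryDatabase(&quot;ApplicationDb&quot;));&lt;/code&gt;&lt;/pre&gt;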
&lt;h3&gt;Let&#39;s put things together&lt;/h3&gt;
&lt;p&gt;To tell .NET Core that we want to add additional controllers and routes, we need to change the AddMvc call a bit:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;builder.Services.AddMvc(o =&amp;gt;
      o.Conventions.Add(new GenericControllerRouteConvention()))
       .ConfigureApplicationPartManager(m =&amp;gt; m.FeatureProviders.Add(
          new GenericTypeControllerFeatureProvider(new[] {  Assembly.GetEntryAssembly().FullName}))
);&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The ApplicationPartManager and FeatureProvider allows you to add new controllers at runtime ( &lt;a href=&quot;https://docs.microsoft.com/en-us/aspnet/core/mvc/advanced/app-parts?view=aspnetcore-6.0#:~:text=Feature%20providers%20work%20with%20application,common%20functionality%20between%20multiple%20apps.&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;See here&lt;/a&gt; )&lt;/p&gt;
&lt;p&gt;In the feature provider we need a way to find all the classes we want to expose as controllers; here we again use our interface. This could also be done with a custom attribute or anything similar that&#39;s shared by all classes that should get a controller.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Here&#39;s a sample:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;public void PopulateFeature(IEnumerable&amp;lt;ApplicationPart&amp;gt; parts, ControllerFeature feature)
{
   foreach (var assembly in this.Assemblies)
   {
      var loadedAssembly = Assembly.Load(assembly);
      var customClasses = loadedAssembly.GetExportedTypes()
         .Where(x =&amp;gt; x.IsAssignableTo(typeof(IObjectBase)) &amp;amp;&amp;amp; x.Name != nameof(IObjectBase));

      foreach (var candidate in customClasses)
      {
         // Ignore BaseController itself
         if (candidate.FullName != null &amp;amp;&amp;amp; candidate.FullName.Contains(&quot;BaseController&quot;)) continue;

         // Generate type info for our runtime controller, assign class as T
         var propertyType = candidate.GetProperty(&quot;Id&quot;)?.PropertyType;
         if (propertyType == null) continue;
         var typeInfo = typeof(GenericController&amp;lt;,&amp;gt;).MakeGenericType(candidate, propertyType)
            .GetTypeInfo();

         // Finally add the new controller via the feature provider
         feature.Controllers.Add(typeInfo);
      }
   }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;Last but not least, we need to make attribute routing work. This can be done quite easily as well: there&#39;s an interface called IControllerModelConvention.&amp;nbsp;&lt;br /&gt;We can use it to apply route conventions to all GenericController instances.&amp;nbsp;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   public void Apply(ControllerModel controller)
   {
      if (controller.ControllerType.IsGenericType)
      {
         var genericType = controller.ControllerType.GenericTypeArguments[0];
         controller.ControllerName = genericType.Name;
         controller.Selectors.Add(new SelectorModel
         {
            AttributeRouteModel = new AttributeRouteModel(new RouteAttribute($&quot;/{genericType.Name}&quot;))
         });
      }
   }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h3&gt;Final Words&lt;/h3&gt;
&lt;p&gt;Implementing things like this allows you to have a shared controller for all types that don&#39;t need any type-specific work. You can still add a normal controller for special cases, and everything else, Swagger for example, keeps working as usual.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Just check the sample here -&amp;gt;&lt;a href=&quot;https://github.com/DeeJayTC/samples/tree/main/GenericControllers&quot; rel=&quot;follow&quot;&gt; https://github.com/DeeJayTC/samples/tree/main/GenericControllers&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term=".NET" />
    <category term="Guide" />
  </entry>
  <entry>
    <title>OpenSource is not free software!</title>
    <link href="https://www.tcdev.de/blog/opensource-is-not-free-software/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/opensource-is-not-free-software/</id>
    <updated>2022-03-31T00:00:00Z</updated>
<summary>Many people think open source means free software. That&#39;s sort of true, but depending on your point of view it isn&#39;t true at all.</summary>
    <content type="html">&lt;p&gt;The last days, during MVP Summit (yay #MVPBuzz!) there where often topics covered related to OpenSource in many ways. And there was one line you could find in all of them: The problem with OpenSource developers and missing gratitude for their work.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;These days, without open source, GitHub and all these lovely, so to say &quot;free&quot; tools, many startups wouldn&#39;t be where they are today. Even many bigger enterprises would probably struggle. Think how many devices, apps and real people were affected by the recent Log4j problem. Log4j is the de-facto default logging library in Java and was even ported to other ecosystems, again as open source, by the community. With the recent bug it was actually easier to ask &quot;who is not affected&quot;, since there are millions using the library. If all of them just gave back $5 to the developers, there would be no issue at all fixing these bugs, maintaining the libraries properly and so on. However, to my knowledge, very few companies actually gave something back to the maintainers. IdentityServer4, if you remember, was another similar story: the maintainers got some funding, but by far not enough to pay their bills, despite their work, again, being used by countless companies all around the world.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;This and similar stories can be told for countless open source projects.&lt;/p&gt;
&lt;h3 style=&quot;text-align: center;&quot;&gt;So lets have a look at things&amp;nbsp;&lt;/h3&gt;
&lt;p&gt;According to Gartner, worldwide IT spending was projected to total $4.2 trillion in 2021. Yes, trillion. Companies, private people, institutions: everyone pays a crazy amount of money for software and services. It&#39;s not that people don&#39;t pay for software; it&#39;s the opposite. Crazy amounts of money are paid for pieces of software, sometimes built in a few hours and not even worth the money. Software development is pricey, really pricey, not to say expensive. If you look at an average SaaS product from start to finish and the money it took, you often end up with several hundred thousand, sometimes millions.&lt;br /&gt;&lt;br /&gt;When you look at startups, it&#39;s even crazier. Some startups, whether they succeed or not, receive huge funding from VCs. Hasura just announced they received $100M in funding. That&#39;s not because someone is just throwing out money, no! It&#39;s because what they built actually has a value, a worth and a price tag! Building software is time consuming and takes a lot of effort, special skills and dedication.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;When it comes to SaaS, apps, libraries, things you pay for, this is the norm everyone knows about: software has a price tag, I have to pay to use it, the makers spent money to build it.&amp;nbsp;&lt;br /&gt;And don&#39;t even think that every SaaS company is highly profitable. Many have margins of less than 10%, some even less than 5%. For many startups it takes years to actually become profitable!&lt;/p&gt;
&lt;p&gt;After all, software has a price tag.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;em&gt;But open source is free? We can just use open source and don&#39;t have to pay or wait, its just sitting there for everyone to take!&lt;/em&gt;&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h3 style=&quot;text-align: center;&quot;&gt;Open Source is not free software!&lt;/h3&gt;
&lt;p&gt;Yes, you can use open source libraries and apps for free, if you look at it that simply. Yes, OSS is free. However, that&#39;s not even half the picture.&lt;br /&gt;&lt;br /&gt;Think of it from another point of view: &lt;em&gt;&lt;span style=&quot;text-decoration: underline;&quot;&gt;open source developers, project owners and maintainers all give away literal money for free!&amp;nbsp;&lt;/span&gt;&lt;/em&gt;&lt;br /&gt;&lt;br /&gt;On average these days, one hour of freelance work runs around 80-150&amp;euro;, some way above that, some lower. Software agencies usually charge around 400-500 minimum per day. Yet open source people don&#39;t charge anything. They literally donate what in many, many cases is their own free time to the world. To me, many of them are heroes in their own right. Without open source, many of these agencies would easily have to charge double that, if not more!&lt;br /&gt;&lt;br /&gt;Very few companies value open source enough to let their developers work on projects during working hours; countless lines of open source code are written in the evenings, on weekends and even on vacation, just because someone is dedicated to building something. (It&#39;s actually 22:15 here while I&#39;m writing this!)&lt;/p&gt;
&lt;p&gt;Many projects have one thing in common: they want to help. Help make other developers&#39; lives better, help solve problems others might have, help educate others, and help other developers grow and start a career in tech.&amp;nbsp;By saying open source is free, and by acting like many people do, you completely devalue the effort, dedication and energy anyone working on open source brings to the table!&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;However, it does not have to be like this!&lt;/p&gt;
&lt;h3 style=&quot;text-align: center;&quot;&gt;Just try to give back!!&lt;/h3&gt;
&lt;p&gt;Just think about the last open source project you used. Which OSS libraries does your SaaS product rely on? Which SDKs really give you an easier time at work? Which blog author, content creator or maintainer helped you solve an issue? Then think about how much time you would need if their work didn&#39;t exist. Could you have built your SaaS app the way you did without open source? Just be nice and kind, and give at least something back.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Giving back to open source doesn&#39;t always have to be money. If you can&#39;t afford to pay, there&#39;s other stuff you can do!&amp;nbsp;Sometimes even sending a tweet, &quot;Hey, we&#39;re using that awesome open source library xx&quot;, already helps: it raises awareness, and maybe someone else can support them monetarily.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If you can afford it, Github has a really great way to give back to developers, projects and maintainers.&lt;/p&gt;
&lt;h3 style=&quot;text-align: center;&quot;&gt;Github Sponsors&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;GitHub Sponsors&amp;nbsp;&lt;/span&gt;&lt;b&gt;allows the developer community to financially support the people and organizations who design, build, and maintain the open source projects they depend on, directly on GitHub&lt;/b&gt;&lt;span&gt;. That is, you can directly support all the projects, developers and communities you are building your own work on. And you should make use of this!&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;When you go to Github and visit the projects you&#39;re using in your product just have a look at the Sponsors button. You might see something like this in the navigation:&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/opensource-is-not-free-software-inline-1.webp&quot; /&gt;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;span&gt;By using the sponsor functionality you can directly donate to the project owner and give back and just show you&#39;re actually valueing what they&#39;re doing.&amp;nbsp;&lt;/span&gt;If the project you want to support doesn&#39;t have sponsoring enabled, just reach out to them kindly, tell them you want to support and ask them how you can donate. You can point them to this blog article if you want to: &lt;a href=&quot;https://github.blog/2020-03-24-getting-started-with-github-sponsors/&quot; title=&quot;https://github.blog/2020-03-24-getting-started-with-github-sponsors/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://github.blog/2020-03-24-getting-started-with-github-sponsors/&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;If the project has the sponsor button but none of the tiers match what you want to give them, just kindly reach out and ask them to enable &quot;custom funding&quot;. By doing that you can just pay whatever you like.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h3 style=&quot;text-align: center;&quot;&gt;Final words&lt;/h3&gt;
&lt;p&gt;I know countless people have written about this topic already, but it can&#39;t be said often enough. Start to value open source work, start to actually show you&#39;re grateful for what these awesome people do, and start supporting the projects you love. Now... I&#39;ll wait here. Just go and sponsor someone!&lt;/p&gt;
&lt;p&gt;Jokes aside, the open source community is important for every software company on this planet. It&#39;s the responsibility of all of us to make sure people keep publishing things, working on open source and growing the community, as otherwise things would become pretty tricky quite soon.&lt;/p&gt;
&lt;p&gt;Greets...and go sponsor someone!&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="TCDev" />
  </entry>
  <entry>
    <title>TCDev API Generator - Getting Started</title>
    <link href="https://www.tcdev.de/blog/tcdev-api-generator-getting-started/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/tcdev-api-generator-getting-started/</id>
    <updated>2022-03-27T00:00:00Z</updated>
    <summary>Here&#39;s a small getting started guide for my API Generator Project</summary>
    <content type="html">&lt;p&gt;Hey Folks, as I received a few questions I decided to write a quick getting started guide ...and prolly should give that baby a proper name at some point!&lt;/p&gt;
&lt;p&gt;The current state of the project generates a fully working CRUD API from just a model class; eventually it will evolve into a full &quot;database direct to API&quot; project, similar to Hasura and other options.&lt;br /&gt;&lt;br /&gt;You can find the docs... in desperate need of an update... here -&amp;gt; &lt;a href=&quot;https://www.tcdev.de/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://www.tcdev.de&lt;/a&gt;&lt;/p&gt;
&lt;h3&gt;Getting Started&lt;/h3&gt;
&lt;p&gt;Start either a new WebAPI or WebApp project with .NET 6&lt;/p&gt;
&lt;p&gt;Download the package via nuget:&lt;/p&gt;
&lt;pre class=&quot;language-shell&quot;&gt;&lt;code&gt;dotnet add package TCDev.ApiGenerator&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;Add the library to program.cs (or startup.cs if you&#39;re using the old way!)&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;builder.Services.AddApiGeneratorServices()
                .AddConfig(NameOfRootNodeInAppSettings)
                // or
                .AddConfig(new ApiGeneratorConfig() { ... })
                // or
                .AddConfig()&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Note: Assembly.GetExecutingAssembly() can be overridden; just use the assembly where you plan to add your models!&lt;/p&gt;
&lt;h3&gt;Optional Step: Migrations&lt;/h3&gt;
&lt;p&gt;When using SQLite or SQL Server you can enable automatic migrations. This is useful for development, but it shouldn&#39;t be used once you&#39;re done, and especially not with an&lt;br /&gt;existing database!&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;app.UseApiGenerator();
app.UseAutomaticAPIMigrations(true);&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h3&gt;Build your first API&lt;/h3&gt;
&lt;p&gt;Building your first API is as simple as just adding a class to your project, something along these lines:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   [Api(&quot;/people&quot;, ApiMethodsToGenerate.All )]
   public class Person :  IObjectBase&amp;lt;Guid&amp;gt;
   {
      public string Name { get; set; }
      public DateTime Date { get; set; }
      public string Description { get; set; }
      public int Age { get; set; }
      public Guid Id { get; set; }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The only requirement is that you implement the IObjectBase&amp;lt;TId&amp;gt; interface. This tells the library which type your primary key has and is used for various things. This requirement might change, but current versions require it.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;To turn your class into an API, just add the ApiAttribute as seen above and set the route to whatever you want. (It needs to start with a leading /.)&lt;br /&gt;The second parameter controls which API methods should be generated.&amp;nbsp;&lt;/p&gt;
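&lt;p&gt;For instance, a read-only endpoint could be sketched like this (a hypothetical example: only the ApiMethodsToGenerate.All value appears in this guide, so the per-method flag value is an assumption):&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Hypothetical sketch: expose only read operations for this model
[Api(&quot;/countries&quot;, ApiMethodsToGenerate.Get)]
public class Country : IObjectBase&amp;lt;int&amp;gt;
{
   public int Id { get; set; }
   public string Name { get; set; }
}&lt;/code&gt;&lt;/pre&gt;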
&lt;p&gt;&lt;br /&gt;Just start your project now and you&#39;ll be greeted by the Swagger docs for your API. You can start using it; it should already work :)&lt;/p&gt;
&lt;h3&gt;Configure the Database&lt;/h3&gt;
&lt;p&gt;By default the project uses an in-memory database provider, good for development, but you&#39;ll probably want to change that quickly. Nothing easier than that :)&lt;br /&gt;Add this to your appsettings:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-json&quot;&gt;&lt;code&gt;  &quot;Api&quot;: {
    &quot;Database&quot;: {
      &quot;DatabaseType&quot;: &quot;SQL&quot;
    }
  }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And additionally the connection string:&lt;/p&gt;
&lt;pre class=&quot;language-json&quot;&gt;&lt;code&gt;  &quot;ConnectionStrings&quot;: {
    &quot;ApiGeneratorDatabase&quot;: &quot;Server=localhost;database=tcdev_dev_222;user=sa;password=Password!23;&quot;
  },&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Make sure the name is &quot;ApiGeneratorDatabase&quot; as this is a requirement right now.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If you start your project again it should now use your database and should have it automatically created.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If you want to have migrations automatically applied just add this to startup:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;app.UseApiGenerator();
app.UseAutomaticAPIMigrations(true);&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;That&#39;s everything you &quot;have&quot; to do...but there&#39;s more you can do.&lt;/p&gt;
&lt;h3&gt;Configure your API even further&lt;/h3&gt;
&lt;p&gt;The library comes with two helper base classes you can use to keep things cleaner: &quot;Trackable&quot; and &quot;SoftDeletable&quot;.&amp;nbsp;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Trackable adds two new fields to the database, &quot;CreatedAt&quot; and &quot;UpdatedAt&quot;, and handles them automatically.&amp;nbsp;&lt;/li&gt;
&lt;li&gt;SoftDeletable probably does what you think it does: items are not deleted but just &quot;marked&quot; as deleted.&lt;/li&gt;
&lt;/ul&gt;
&lt;h4&gt;Customize behaviour&lt;/h4&gt;
&lt;p&gt;You can add various interfaces to your class to inject custom functionality, currently called hooks. For every method there&#39;s a Before and an After hook, similar to this:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   [Api(&quot;/people&quot;, ApiMethodsToGenerate.All )]
   public class Person : Trackable, 
      IObjectBase&amp;lt;Guid&amp;gt;,
      IBeforeUpdate&amp;lt;Person&amp;gt;, // Before Update Hook
      IBeforeDelete&amp;lt;Person&amp;gt;, // BeforeDelete Hook
   {&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Implementing the interface allows you to intercept what&#39;s happening and add custom functionality, like in this example:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      public Task&amp;lt;Person&amp;gt; BeforeUpdate(Person newPerson, Person oldPerson)
      {
         newPerson.Age = 333;
         return Task.FromResult(newPerson);
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;...more to come&lt;/p&gt;
&lt;h3&gt;Last but not least, customize database layout&lt;/h3&gt;
&lt;p&gt;To customize the table and how your model looks in the database you can use classic EntityFramework functionality.&amp;nbsp;&lt;br /&gt;Just add the IEntityTypeConfiguration interface and you can use all the EntityFramework options like here:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   [Api(&quot;/people&quot;, ApiMethodsToGenerate.All )]
   public class Person : Trackable, 
      IObjectBase&amp;lt;Guid&amp;gt;,
      IEntityTypeConfiguration&amp;lt;Person&amp;gt; // Configure Table Options yourself
   {
      public void Configure(EntityTypeBuilder&amp;lt;Person&amp;gt; builder)
      {
         builder.ToTable(&quot;MyFancyTableName&quot;);
         //....all the other EF Core Options
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Way more to come... add issues and discussions on GitHub! See you soon.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term=".NET" />
    <category term="Guide" />
  </entry>
  <entry>
    <title>Be the one who wrote the posts and gave the talks!</title>
    <link href="https://www.tcdev.de/blog/be-the-one-who-wrote-the-posts-and-gave-the-talks/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/be-the-one-who-wrote-the-posts-and-gave-the-talks/</id>
    <updated>2022-03-26T00:00:00Z</updated>
    <summary>A developer career is more than just writing code, #SharingIsCaring and giving back supports other developers!</summary>
    <content type="html">&lt;p&gt;Hey visitor,&lt;br /&gt;this is something, not about code as such but something I realized about myself the last months and pretty sure there&#39;s more of you who think similar.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;I&#39;ve written countless lines of code, caused countless bugs, fixed them, and caused new ones, but all of that ends up as experience. Some people ask &quot;how do you know so much?&quot; and the only answer is&lt;br /&gt;trial and error, practice, and just never stopping to learn. However, you can only learn if there are actually people around to share their experience with you. Yes, you can learn everything the hard way yourself, but many, many people wouldn&#39;t work in tech today if it wasn&#39;t for great tutorials, tutors and just an epic community all around the world.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;There are people constantly trying to help other developers, some really famous and some below the radar, but all doing what they can.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;I planned to write about this quite a while ago but only now found the missing inspiration I needed.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Scott Hanselman, who&#39;s pretty much the most famous face of Microsoft development, published a video on TikTok. While on its own it&#39;s great advice for many people and definitely worth a watch, it&#39;s the final quote of the video I liked most, which is something I&#39;ve thought about for quite a long while. Watch it here -&amp;gt; &lt;a href=&quot;https://www.tiktok.com/@shanselman/video/7079548362235628843?_t=8QyPTjv5gOh&amp;amp;_r=1&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Scott @ TikTok&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;He finished the video with this quote:&lt;/p&gt;
&lt;p&gt;&lt;em&gt;&quot;At the end of your career you don&#39;t want to be the one who read all the books, seen all the talks and used all the libraries. You wanna be the person who wrote the books, wrote the libraries and gave the talks&quot;&lt;/em&gt;&lt;/p&gt;
&lt;h3&gt;So, what&#39;s so great about that?&lt;/h3&gt;
&lt;p&gt;I have a whole career in development ahead of me, probably a bit more than 20 years to go, and I thought about how to spend those years to actually create something valuable. Working on code is great and enjoyable, and I&#39;d definitely continue to do it, but I&#39;m just not the typical developer anymore. I don&#39;t enjoy coding 8 hours straight for a customer; I&#39;ve done that and enjoyed it, but realized there has to be more I can do.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Working in tech, for your employer, your own projects and whatever else, is after all one thing: experience. Errors you made, problems you fixed and, yes, the success you had finishing that project. There&#39;s a never-ending stream of new folks joining tech, and while they can often code properly and build great things, they all lack experience. A 20-year-old might be an amazing programmer, but he just doesn&#39;t have a ton of experience about what can go wrong and what to look for.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;At some point in my career I realized that helping other developers is actually a lot more enjoyable than working on code myself, for various reasons. First of all, helping as such is already great; making someone&#39;s life easier feels good, but you&#39;re also creating way more value than by writing code yourself. Second, the feeling of having achieved something is way stronger when you wrote a library compared to just another finished project for the next customer, and the next....&lt;/p&gt;
&lt;p&gt;So, if you&#39;re the one who gave the talk, it only took you an hour, maybe a few hours to prepare, but that&#39;s it. What you did by giving it is multiply your own experience, and based on that you helped create way more things than you&#39;d ever be able to by just coding. Plus, you made other people&#39;s work day a bit more enjoyable!&lt;/p&gt;
&lt;h3&gt;Let&#39;s dig a bit deeper&lt;/h3&gt;
&lt;p&gt;If you read a lot of books, use great libraries and watch all the sessions you can, you&#39;re probably a pretty good developer. Especially in my mid-20s, that&#39;s what I did to learn, practice and enhance my skills. That&#39;s what everyone does, and that&#39;s really great. However, there have to be people who actually create the libraries, books and talks, as otherwise there won&#39;t be any content left at some point and young folks won&#39;t be able to get up and running.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;At some point in your career you need to think about what YOU can do for your community, how you can actually multiply your experience, help others to learn from your career, help them to accept making their own mistakes and to properly learn.&amp;nbsp;&lt;/p&gt;
&lt;h3&gt;Time is the most precious resource we have.&amp;nbsp;&lt;/h3&gt;
&lt;p&gt;Time is something you can&#39;t increase. You can hire 10 devs, but you can&#39;t get 10 additional hours. So, let&#39;s take a look at a small example to explain this.&amp;nbsp;&lt;br /&gt;Let&#39;s say it takes you 10 hours to build something, and it also takes 10 hours to prepare a blog post and a small library for it. When you work on the project, you finish it; you have a project finished and done. That&#39;s great, nothing bad about it.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Now, if you had spent the same time building a library or writing a blog post, you might not have finished the project, BUT you actually did way more than that: you enabled other developers to do what you can do. And by that, instead of finishing that one project, you helped finish countless other projects as well.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;10 hours spent writing a book might help 1000 other developers get their job done faster and easier, and you actually achieved way more than you would have by just consuming and coding yourself.&amp;nbsp;&lt;/p&gt;
&lt;h3&gt;Be the one who wrote the books, gave the talks and built the libraries&lt;/h3&gt;
&lt;p&gt;That said, at some point in your career you have to decide: do you want to just retire at 65 having built a couple of possibly great projects, but that&#39;s about it, or do you want to have enabled other developers to continue, grow and build even more projects you helped them build?&amp;nbsp;&lt;/p&gt;
&lt;p&gt;If you&#39;re the one who wrote the books, gave the talks and built the libraries, you have way more to look back on at the end of your career. You&#39;ll be a lot more satisfied with what you achieved, and you actually left way more in the community than you would have by just writing code.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Code gets old and will probably be thrown away at some point, and all you did is gone a few years later. If, however, you enabled other developers to become better at their job, your work will probably still matter years after you finished your career.&amp;nbsp;&lt;/p&gt;
&lt;h3&gt;You&#39;re not good at writing or talking?&lt;/h3&gt;
&lt;p&gt;You might not enjoy talking in front of people, and your writing skills might not be perfect...who cares? Mine aren&#39;t, but it&#39;s all about the content. As long as people can understand the point you&#39;re trying to make or the content you want to deliver, it&#39;s definitely good enough and worth sharing. Don&#39;t be afraid...just start sharing! You probably can&#39;t do everything anyway; some people help by writing great libraries and don&#39;t want to talk publicly, but that&#39;s totally fine, you&#39;re still helping others.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;So all that&#39;s left to say: think about how you can help other developers and what you learned in your career that you want to share. Everything you know can possibly help other developers, so just start writing and don&#39;t be afraid to make mistakes doing it. You will make mistakes...you will learn from them, and you will tell others about the problems you had. That&#39;s a really great thing!&lt;/p&gt;
&lt;p&gt;So...what are you waiting for? Go and get some stuff written down!&lt;/p&gt;
&lt;p&gt;Also: thanks, Scott :)&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="Guide" />
    <category term="TCDev" />
  </entry>
  <entry>
    <title>The new TCDev</title>
    <link href="https://www.tcdev.de/blog/the-new-tcdev/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/the-new-tcdev/</id>
    <updated>2022-03-25T00:00:00Z</updated>
    <summary>A full guide to how this page is made, the stack I&#39;m using, and more</summary>
    <content type="html">&lt;h3&gt;Welcome to the new TCDev.de!&amp;nbsp;&lt;/h3&gt;
&lt;p&gt;My page was offline for quite some time and desperately needed an update. I thought I&#39;d do something new this time, something I hadn&#39;t tried yet. My community page &lt;a href=&quot;https://www.madewithcards.io/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://www.madewithcards.io&lt;/a&gt; had been up and running for quite a while, but the internal blog module was a bit of a mess and really needed to be replaced. I figured I didn&#39;t want a separate blog module for both pages, and I also didn&#39;t want to use Wordpress or anything similar. So I decided to give ButterCMS a try, as a headless CMS was pretty much what I needed to achieve what I wanted: a shared blog between both pages with the features I wanted to have.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The new TCDev.de is built with&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Vue + Vuetify (Vue 2, as Vuetify doesn&#39;t fully support Vue 3 yet!)&lt;/li&gt;
&lt;li&gt;ButterCMS for content&lt;/li&gt;
&lt;li&gt;Github + Cloudflare Pages for hosting&lt;/li&gt;
&lt;li&gt;Prerender.io for SEO improvements&lt;/li&gt;
&lt;li&gt;.NET Core WebAPI for some API things I&#39;m not talking about yet :)&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h3&gt;First of all, what is ButterCMS? What is a headless CMS?&lt;/h3&gt;
&lt;p&gt;&lt;span&gt;ButterCMS is an API-based or &amp;ldquo;headless&amp;rdquo; CMS. Technically it&#39;s a content management system as you might know it, like Wordpress, but it&#39;s fully API-based. Headless means there&#39;s no &quot;head&quot;, aka frontend, for the CMS; everything you do is purely accessible through their API. In my case, exactly what I wanted. And a big plus: ButterCMS has a free non-profit offering. You have to ask them for it and share a backlink on your page, but hey, that&#39;s a fair deal for awesome free functionality like this!&amp;nbsp;&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;Starting with ButterCMS is quick and easy: &lt;a href=&quot;https://www.tcdev.de/join/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;sign up&lt;/a&gt;, take your API key, install the Vue SDK and you&#39;re pretty much done. The onboarding flow already gives you the first snippet you can use directly in the framework you&#39;re working with. Yes, they give you snippets exactly for what you&#39;re using; they offer tons of frameworks to choose from:&lt;br /&gt;&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/the-new-tcdev-inline-1.png&quot; style=&quot;max-width: 100%;&quot; alt=&quot;ButterCMS&#39;s description for many JS Frameworks&quot; /&gt;&lt;br /&gt;&lt;br /&gt;In my case I&#39;m working with Vue. Getting started with ButterCMS really takes seconds; by just following the onboarding guide you already have a working blog. For me, things were a bit different: I wanted more than a normal blog.&amp;nbsp;&lt;/p&gt;
&lt;h4&gt;Blog posts with references and further links.&amp;nbsp;&lt;/h4&gt;
&lt;p&gt;For the blog I had in mind, I wanted a section in the side pane linking the tech pieces used in the post. For example, when writing about Vue or AdaptiveCards, I wanted to link these in the resources section. ButterCMS handles that pretty well with references; however, these don&#39;t exist on the pre-made blog pages. Luckily you can add custom page types, which perfectly did the trick for me. &lt;a href=&quot;https://www.tcdev.de/kb/how-to-build-a-custom-blog-page&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Read more about custom page types here&lt;/a&gt;. Blog posts in ButterCMS are technically the same as custom pages, just a pre-made page type, so I built one that pretty much resembles the blog post type and additionally adds the fields I needed.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/the-new-tcdev-inline-2.png&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;In my new &quot;CustomBlog&quot; page type I added a few references such as &quot;Stack&quot;, &quot;Category&quot;, &quot;External Author&quot; and a few more. These references are just custom collection types, another feature of ButterCMS: collections where you define the fields, add items and use them in your posts and pages. As simple as that.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/the-new-tcdev-inline-3.png&quot; style=&quot;max-width: 100%; display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;After all, despite needing a custom setup, things were still quick and smooth in ButterCMS. The UI is really handy, you never feel lost, and setting it all up was done quickly.&amp;nbsp;&lt;/p&gt;
&lt;h3&gt;Sharing a blog between two pages&lt;/h3&gt;
&lt;p&gt;As said before, I wanted to share the blog between both my MadeWithCards page and TCDev.de. Luckily, as both are Vue pages, I only had to write the template once and share it with both sources. There&#39;s only one difference: MadeWithCards is supposed to only show content either tagged with AdaptiveCards or in the AdaptiveCards category I added earlier. Thanks again to the lovely ButterCMS SDK, this was also a matter of seconds to implement.&amp;nbsp;&lt;br /&gt;This is the code I&#39;m using to fetch my posts; after fetching, I group the posts by year for display purposes.&amp;nbsp;&lt;/p&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;     
     butter.page
        .list(&#39;customblog&#39;,params)
        .then((res) =&amp;gt; {
          const postGroups = []
          res.data.data.forEach((post) =&amp;gt; {
            const group = postGroups.find((x) =&amp;gt; x.name === moment(post.fields.published).format(&#39;YYYY&#39;))

            if (!group) postGroups.push({
              name: moment(post.fields.published).format(&#39;YYYY&#39;),
              posts: [post]
            })
            else group.posts.push(post)
          })
          // sort the year groups, newest first, for display
          this.posts = postGroups.sort((a, b) =&amp;gt; a.name.localeCompare(b.name)).reverse()
        })&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;The only thing that&#39;s different on MadeWithCards is the &quot;params&quot; part.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;While TCDev is using these params:&lt;/p&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;      const params = {
        page: 1,
        page_size: 25,
        exclude_body: true,
        &#39;filter.tags.slug&#39;: this.currentTag
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;which use a custom tag filter based on the currently selected tags on the page, MadeWithCards has the tag and category hardcoded:&lt;/p&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;      const params = {
        page: 1,
        page_size: 25,
        exclude_body: true,
        &#39;filter.tags.slug&#39;: &#39;adaptive-cards&#39;,
        &#39;filter.category.slug&#39;: &#39;adaptive-cards&#39;,
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;As you can see in these two examples, the aforementioned collections and all the other fields I manually added to my page type are all filterable and sortable!&lt;/p&gt;
&lt;h3&gt;Custom &quot;Preview&quot; protection&lt;/h3&gt;
&lt;p&gt;With ButterCMS you can easily preview a page while you&#39;re working on it and see it on your own site; Butter has nice functionality for this. For me this wasn&#39;t enough. ButterCMS just applies a &quot;preview=1&quot; parameter, and you&#39;re supposed to check this in your code and then display the preview. I wanted something at least a bit harder to guess.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;My custom blog page type has a field &quot;preview-code&quot; which is just a random number. When previewing the page, the &quot;&amp;amp;preview&quot; param must match what&#39;s returned from the ButterCMS API, otherwise the page can not be previewed. While this is not perfect, it&#39;s still good enough for me.&amp;nbsp;&lt;/p&gt;
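&lt;p&gt;To make the idea concrete, here&#39;s a minimal sketch of such a check, assuming the post&#39;s fields and the parsed query string are available as plain objects. The field and parameter names follow the description above; the function name is made up.&lt;/p&gt;

```javascript
// Hypothetical sketch: gate the preview behind the random "preview-code"
// field stored in the ButterCMS page type. Not the actual site code.
function canPreview(page, query) {
  // No preview param at all: never show the draft.
  if (query.preview === undefined) return false
  // Compare as strings, since query params always arrive as strings.
  return String(query.preview) === String(page.fields['preview-code'])
}
```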
&lt;h3&gt;Including &quot;external&quot; posts&lt;/h3&gt;
&lt;p&gt;From time to time I want to add references to posts from other authors, especially on MadeWithCards. I don&#39;t want to copy their content into a new post, but I also don&#39;t want to use any RSS feed, as I want to manually pick which posts to add. Achieving this was pretty fast, again thanks to ButterCMS.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;My custom blog type has 2 fields, &quot;isExternal&quot; and &quot;externalUrl&quot;, besides the reference to an author. Whenever I have an external post, I add a new post, leave it completely empty and just fill out the title and the externalXX fields, besides maybe adding a preview image. By doing this I can render external and &quot;internal&quot; posts differently in Vue. While my own posts have a dedicated page to show the content and details, external posts are opened in a new tab using the externalUrl. Viewers can identify whether it&#39;s my own or a shared post by checking the author and a huge &quot;external&quot; banner on the images.&amp;nbsp;&lt;/p&gt;
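&lt;p&gt;As a rough sketch, the render decision can look like this. The field names isExternal and externalUrl come from the page type described above, while the route and function name are made up for illustration.&lt;/p&gt;

```javascript
// Hypothetical sketch: decide where a post card should link to.
// External posts open the original author's page in a new tab,
// own posts go to the dedicated detail page on this site.
function postLink(post) {
  if (post.fields.isExternal) {
    return { href: post.fields.externalUrl, target: '_blank' }
  }
  // '/blog/' is an assumed route, adjust to your router setup.
  return { href: '/blog/' + post.slug, target: '_self' }
}
```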
&lt;h3&gt;Getting the page deployed and hosted.&amp;nbsp;&lt;/h3&gt;
&lt;p&gt;I wanted to keep things as simple as I can: no hassle with any sort of webserver or cloud hosting, just my static Vue page with the dynamic content from ButterCMS. Luckily there&#39;s a really easy way to do this: Github and &lt;a href=&quot;https://pages.cloudflare.com/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Cloudflare Pages&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Cloudflare Pages pretty much takes your Github code, deploys it to one of their servers and gives you a URL for it. Fully automatic, and setup only takes a couple of minutes. After the first deploy, every commit to the selected branch automatically re-deploys your code. CI/CD made simple :)&lt;br /&gt;&lt;br /&gt;I won&#39;t go into the details here as the &lt;a href=&quot;https://developers.cloudflare.com/pages/framework-guides/deploy-anything/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;documentation&lt;/a&gt; is already pretty good, just give it a read!&lt;/p&gt;
&lt;h3&gt;Improving SEO&lt;/h3&gt;
&lt;p&gt;As you might know, SPAs are a bit tricky when it comes to crawlers. Most of the content requires JavaScript to be executed and the page fully rendered before it appears. There are various ways to handle this: server-side rendering, pre-rendering for crawlers and more. I decided to go the pre-rendering route, but instead of doing all of this myself I chose to use Prerender.io. It&#39;s free for a set amount of cached pages and also plays nicely with Cloudflare using a Worker.&amp;nbsp; &lt;a href=&quot;https://docs.prerender.io/docs/24-cloudflare&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Using Cloudflare with Prerender.io&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;With all this in place I was done for now and have my new page online. The rest is adding content, and I&#39;ll add tons more.&lt;/p&gt;
&lt;p&gt;Stay tuned!&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="TCDev" />
  </entry>
  <entry>
    <title>#SharingIsCaring!</title>
    <link href="https://www.tcdev.de/blog/sharingiscaring/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/sharingiscaring/</id>
    <updated>2022-03-18T00:00:00Z</updated>
    <summary>I just uploaded a bunch of stuff to Github because #SharingFeelsGood!</summary>
    <content type="html">&lt;h1&gt;#SharingIsCaring and #SharingFeelsGood....&lt;/h1&gt;
&lt;p&gt;The last few days i uploaded various things to Github, mostly older stuff but also pretty new unfinished work in progress things.&amp;nbsp;&lt;br /&gt;While cleaning up my harddrive i decided that it doesn&#39;t help when the code is just flying around on my end. I needed to back it up anyway,&lt;br /&gt;a public github repo is great place to back up code...its secure and also helps other people :)&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;https://github.com/DeeJayTC/dotnet-utils&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/dotnet-utils&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Is a collection of various things: extension methods, useful functions and other stuff I collected through the years. Taken from various older repositories, plus some new stuff. Made available with no license at all. Just take whatever you want; leaving a star would be great though :)&lt;/p&gt;
&lt;h2&gt;&lt;a href=&quot;https://github.com/DeeJayTC/cloudstorage-wrapper&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/cloudstorage-wrapper&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Is a microservice I made while working for Teamwork, still being used for integrations today. It basically wraps the Dropbox, Onedrive and Sharepoint APIs and makes them available under one common API schema. If you&#39;re a SaaS app and want to build an integration with any of them, this is for you: you only have to do things once and can support all three. I might add Google Drive and Box.com at some point.&amp;nbsp;&lt;/p&gt;
&lt;h2&gt;&lt;br /&gt;&lt;a href=&quot;https://github.com/DeeJayTC/net-dynamic-api&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/net-dynamic-api&lt;/a&gt;&lt;/h2&gt;
&lt;p&gt;Is just a pet project, but it might be useful for some once it&#39;s more evolved.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;It basically turns this:&lt;br /&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/sharingiscaring-inline-1.png&quot; alt=&quot;undefined&quot; /&gt;&lt;/p&gt;
&lt;p&gt;Into a fully working CRUD API with ODATA enabled&lt;/p&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Feel free to browse and use the stuff, leave a comment if you can!&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="TCDev" />
    <category term=".NET" />
    <category term="Open Source" />
  </entry>
  <entry>
    <title>Instant CRUD APIs with .NET</title>
    <link href="https://www.tcdev.de/blog/instant-crud-apis-with-net/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/instant-crud-apis-with-net/</id>
    <updated>2022-03-10T00:00:00Z</updated>
    <summary>How to create fully working ODATA CRUD APIs from just classes.</summary>
    <content type="html">&lt;p&gt;These days there&#39;s a rising demand to reduce boilerplate code in apps, more and more tools are appearing reducing backend code even further, such as Hasura&#39;s automatic creation of GraphQL APIs.&amp;nbsp;&lt;br /&gt;Here&#39;s a somewhat different approach to this.&lt;/p&gt;
&lt;p&gt;&lt;br /&gt;What if someone says that this is the &lt;span style=&quot;text-decoration: underline;&quot;&gt;&lt;strong&gt;full code&lt;/strong&gt;&lt;/span&gt; needed for a fully working CRUD API with ODATA filter options enabled?&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   [GeneratedController(&quot;/people&quot;)]
   public class Person : Trackable, IObjectBase&amp;lt;Guid&amp;gt;
   {
      public string Name { get; set; }
      public DateTime Date { get; set; }
      public string Description { get; set; }
      public int Age { get; set; }
      public IEnumerable&amp;lt;PersonLink&amp;gt; Links { get; set; }
      public Guid Id { get; set; }
   }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;Sounds a bit crazy, but yes, it&#39;s already done and available...sort of. There&#39;s no Nuget package yet, but you can go to &lt;a href=&quot;https://github.com/DeeJayTC/net-dynamic-api&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Github &lt;/a&gt;and grab the code if you want to.&amp;nbsp;&lt;br /&gt;An API created like this has everything done automatically: all CRUD endpoints, full routing, even authorization is working, besides various things like caching and even webhooks once implemented.&amp;nbsp;&lt;br /&gt;A lot of this is still in early alpha though.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;The class will generate these routes:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;table border=&quot;1&quot; style=&quot;border-collapse: collapse; width: 18.1471%; height: 38px;&quot;&gt;
&lt;tbody&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 8.00205%; height: 19px;&quot;&gt;&lt;strong&gt;Method&lt;/strong&gt;&lt;/td&gt;
&lt;td style=&quot;width: 8.08337%; height: 19px;&quot;&gt;&lt;strong&gt;Endpoint&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr style=&quot;height: 19px;&quot;&gt;
&lt;td style=&quot;width: 8.00205%; height: 19px;&quot;&gt;GET&lt;/td&gt;
&lt;td style=&quot;width: 8.08337%; height: 19px;&quot;&gt;/people&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 8.00205%;&quot;&gt;GET&lt;/td&gt;
&lt;td style=&quot;width: 8.08337%;&quot;&gt;/people/{id}&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 8.00205%;&quot;&gt;POST&lt;/td&gt;
&lt;td style=&quot;width: 8.08337%;&quot;&gt;/people&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style=&quot;width: 8.00205%;&quot;&gt;DELETE&lt;/td&gt;
&lt;td style=&quot;width: 8.08337%;&quot;&gt;/people/{id}&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;Besides that, you&#39;ll have the well-known ODATA filter options like $filter, $select etc.&lt;/p&gt;
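&lt;p&gt;For illustration, a client could build such queries like this; the host, route and function name are placeholders made up for this sketch, not taken from the library.&lt;/p&gt;

```javascript
// Hypothetical sketch: compose an ODATA query URL for a generated endpoint.
// URLSearchParams handles the encoding of the option values.
function buildODataUrl(base, options) {
  const params = new URLSearchParams()
  if (options.filter) params.set('$filter', options.filter)
  if (options.select) params.set('$select', options.select)
  const qs = params.toString()
  return qs ? base + '?' + qs : base
}
```

For example, `buildODataUrl('https://localhost/people', { filter: 'Age gt 30' })` yields a `/people` URL with the `$filter` option encoded in the query string.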
&lt;h3&gt;But let me tell you a bit about how this is done.&amp;nbsp;&lt;/h3&gt;
&lt;p&gt;The whole API generator is built with EntityFramework Core, .NET Core and the Microsoft ODATA libraries. That&#39;s pretty much it.&amp;nbsp;&lt;br /&gt;The first step to achieve what I wanted was to get the EFCore pieces done: have the DbContext generated at runtime, fully working, with all possible options available.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;After digging into things a bit, I figured that this is actually quite easy.&amp;nbsp;&lt;br /&gt;EntityFramework offers various ways to generate the DbContext and DB schema. You&#39;re probably familiar with the usual OnModelCreating and ModelBuilder approaches.&amp;nbsp;&lt;br /&gt;Something like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      protected override void OnModelCreating(ModelBuilder builder)
      {
         builder.Entity&amp;lt;Person&amp;gt;().HasKey(&quot;ID&quot;);
         base.OnModelCreating(builder);
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;In my case, I don&#39;t know any of the classes at design time as the code is not in my library but in the code of whoever uses my tools. I needed a different approach. Luckily EntityFramework offers two really nice options here.&amp;nbsp;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.modelbuilder.applyconfigurationsfromassembly?view=efcore-6.0&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;ApplyConfigurationsFromAssembly&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Allows you to extract the builder code and OnModelCreating into any class implementing IEntityTypeConfiguration&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/dotnet/api/microsoft.entityframeworkcore.metadata.builders.entitytypebuilder-1?view=efcore-6.0&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;builder.Entity(type)&lt;/a&gt;
&lt;ul&gt;
&lt;li&gt;Allows you to add an entity to the builder of any type, with reflection we can extract types from the calling assembly and use these here&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;/p&gt;
&lt;p&gt;By using these two options I was able to rewrite OnModelCreating to this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      protected override void OnModelCreating(ModelBuilder builder)
      {
         // Add all types T using IEntityTypeConfiguration
         builder.ApplyConfigurationsFromAssembly(Assembly.GetEntryAssembly());

         // Add all other types (auto mode)
         var customTypes = Assembly.GetEntryAssembly().GetExportedTypes()
            .Where(x =&amp;gt; x.GetCustomAttributes&amp;lt;GeneratedControllerAttribute&amp;gt;().Any());
         foreach (var customType in customTypes.Where(x =&amp;gt; x.GetInterface(&quot;IEntityTypeConfiguration`1&quot;) == null))
            builder.Entity(customType);

         base.OnModelCreating(builder);
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;So what are we doing here?&amp;nbsp;&lt;br /&gt;First of all we take all types implementing IEntityTypeConfiguration and apply their entity configuration. Whatever class uses this only needs to implement the &quot;Configure&quot; method.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;        public void Configure(EntityTypeBuilder&amp;lt;Person&amp;gt; builder)
        {
           //default stuff if nothing special
        }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This gets called from the OnModelCreating function and is handled automatically by EntityFramework. When you work with EntityFramework normally, you often don&#39;t want to configure the entities yourself and just let EF do its magic. This is also possible, just a bit more tricky: using reflection, we grab all types that do NOT implement IEntityTypeConfiguration and simply add them to EF Core. This is what happens here:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;         // Add all other types (auto mode)
         var customTypes = Assembly.GetEntryAssembly().GetExportedTypes()
            .Where(x =&amp;gt; x.GetCustomAttributes&amp;lt;GeneratedControllerAttribute&amp;gt;().Any());
         foreach (var customType in customTypes.Where(x =&amp;gt; x.GetInterface(&quot;IEntityTypeConfiguration`1&quot;) == null))
            builder.Entity(customType);&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You might notice there&#39;s a clause limiting the results to types carrying a &quot;GeneratedControllerAttribute&quot;. This is what my library uses to identify the actual classes in play; otherwise we would not be able to limit the results to just the classes we really want. The GeneratedControllerAttribute is further used to configure the output of the generated API, here&#39;s the definition:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      /// &amp;lt;summary&amp;gt;
      ///    Attribute defining an auto generated controller for the class
      /// &amp;lt;/summary&amp;gt;
      /// &amp;lt;param name=&quot;route&quot;&amp;gt;The full base route for the class, i.e. /myclass/&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;requiredReadClaims&quot;&amp;gt;Claims required for read access&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;requiredWriteClaims&quot;&amp;gt;Claims required for write access&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;requiredRolesRead&quot;&amp;gt;Roles required for read access&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;requiredRolesWrite&quot;&amp;gt;Roles required for write access&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;fireEvents&quot;&amp;gt;Whether events are fired for changes&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;authorize&quot;&amp;gt;Whether the endpoints require authorization&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;cache&quot;&amp;gt;Whether responses are cached&amp;lt;/param&amp;gt;
      /// &amp;lt;param name=&quot;cacheDuration&quot;&amp;gt;How long responses are cached&amp;lt;/param&amp;gt;
      public GeneratedControllerAttribute(
         string route,
         string[] requiredReadClaims = null,
         string[] requiredWriteClaims = null,
         string[] requiredRolesRead = null,
         string[] requiredRolesWrite = null,
         bool fireEvents = false,
         bool authorize = true,
         bool cache = false,
         int cacheDuration = 50000)
      {&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;Once I was able to find all the types I wanted to use, it was just a matter of adding them as entities to EF Core. That&#39;s about it for the DbContext part.&amp;nbsp;&lt;br /&gt;The rest was actually far easier than I expected, using dependency injection and generic types.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;I created two classes, a generic repository and a generic controller:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   public interface IGenericRespository&amp;lt;T, TEntityId&amp;gt; : IDisposable
   {
      IQueryable&amp;lt;T&amp;gt; Get();

      T Get(TEntityId id);

      Task&amp;lt;T&amp;gt; GetAsync(TEntityId id);

      void Create(T record);

      void Update(T record);

      void Delete(TEntityId id);

      int Save();

      Task&amp;lt;int&amp;gt; SaveAsync();
   }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Accessing the data in EF is rather simple; here&#39;s the Get implementation:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      public TEntity Get(TEntityId id)
      {
         // TEntityId is generic, so compare keys via ToString
         return Get().SingleOrDefault(e =&amp;gt; e.Id.ToString() == id.ToString());
      }&lt;/code&gt;&lt;/pre&gt;
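&lt;p&gt;To make the contract above concrete without a database, here is a minimal in-memory sketch of the repository. This is illustrative only: the InMemoryRepository name and the List-based store are my assumptions for the sketch, the real implementation is backed by an EF Core DbContext.&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for the article&#39;s key interface
public interface IObjectBase&amp;lt;TId&amp;gt; { TId Id { get; set; } }

// In-memory sketch of the generic repository idea, not the EF Core version
public class InMemoryRepository&amp;lt;T, TEntityId&amp;gt; where T : class, IObjectBase&amp;lt;TEntityId&amp;gt;
{
   private readonly List&amp;lt;T&amp;gt; _store = new List&amp;lt;T&amp;gt;();

   public IQueryable&amp;lt;T&amp;gt; Get() =&amp;gt; _store.AsQueryable();

   // Same ToString key comparison as the EF version, since TEntityId is generic
   public T Get(TEntityId id) =&amp;gt;
      Get().SingleOrDefault(e =&amp;gt; e.Id.ToString() == id.ToString());

   public void Create(T record) =&amp;gt; _store.Add(record);

   public void Delete(TEntityId id)
   {
      var existing = Get(id);
      if (existing != null) _store.Remove(existing);
   }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The point of the generic constraint is the same as in the real repository: because T implements IObjectBase, the repository can reason about the primary key without knowing the concrete entity type.&lt;/p&gt;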
&lt;p&gt;Posting the full controller here would be quite long, so I&#39;ll only post the ctor and the class declaration as such:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   [Route(&quot;api/[controller]&quot;)]
   [Produces(&quot;application/json&quot;)]
   public class GenericController&amp;lt;T, TEntityId&amp;gt; : ODataController
      where T : class,
      IObjectBase&amp;lt;TEntityId&amp;gt;
   {
      public GenericController(IAuthorizationService authorizationService, IGenericRespository&amp;lt;T, TEntityId&amp;gt; repository)
      {
         _repository = repository;
         _authorizationService = authorizationService;

         ConfigureController();
      }&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;What the hell does IObjectBase&amp;lt;TEntityId&amp;gt; actually do here?&lt;/h3&gt;
&lt;p&gt;For many things in here you need to know what the primary key of the class is; EntityFramework needs it and so does the controller. As I wanted to allow people to use whatever key type they like, I added a simple interface:&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;   public interface IObjectBase&amp;lt;TId&amp;gt;
   {
      [Key]
      [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
      TId Id { get; set; }
   }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;By using this, people can choose whatever primary key type (string, int or Guid) they want, and I know what the type is and have an easier time actually using it.&amp;nbsp;&lt;/p&gt;
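&lt;p&gt;For example, a consumer of the library might declare an entity like this. The Person class and the /people/ route are hypothetical, and the attribute and interface are reduced to minimal stand-ins here so the snippet stands alone:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;using System;

// Stand-in for the article&#39;s GeneratedControllerAttribute (route only)
[AttributeUsage(AttributeTargets.Class)]
public class GeneratedControllerAttribute : Attribute
{
   public string Route { get; }
   public GeneratedControllerAttribute(string route) { Route = route; }
}

// Stand-in for the article&#39;s IObjectBase interface
public interface IObjectBase&amp;lt;TId&amp;gt; { TId Id { get; set; } }

// Hypothetical consumer entity: Guid primary key, auto generated controller
[GeneratedController(&quot;/people/&quot;)]
public class Person : IObjectBase&amp;lt;Guid&amp;gt;
{
   public Guid Id { get; set; }
   public string Name { get; set; }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With just this declaration, the reflection code in OnModelCreating finds the class via its attribute, and the key type (Guid here) is known through the interface.&lt;/p&gt;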
&lt;h3&gt;Putting it all together&lt;/h3&gt;
&lt;p&gt;So now that we have all the moving parts, or at least the basics, we need to actually put things together. How can we actually use our generic controller? This is where .NET Core Application Parts and Feature Providers come in; read more &lt;a href=&quot;https://docs.microsoft.com/en-us/aspnet/core/mvc/advanced/app-parts?view=aspnetcore-6.0&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;By writing our own ApplicationPart we can easily do what we&#39;re trying to achieve. I wrote an extension method on IMvcBuilder to initialize everything during app startup. Yes, right now it only works during startup and is not fully dynamic, but that&#39;s step 2.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;For initialization you need to pass your feature provider to the AddMvc method, similar to this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;         services.AddMvc(o =&amp;gt;
               o.Conventions.Add(new GenericControllerRouteConvention()))
                  .ConfigureApplicationPartManager(m =&amp;gt;
                     m.FeatureProviders.Add(new GenericTypeControllerFeatureProvider(new[] {assembly.FullName})));&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;br /&gt;The GenericControllerRouteConvention is one of the main parts that allow the routing engine to work for dynamically generated controllers. While we only have one &quot;Generic Controller&quot;, an instance is added for each type we register with the app, and the router needs to know which route each controller is listening on. This is how the RouteConvention looks right now; again we&#39;re using our GeneratedControllerAttribute here to configure the routing behaviour, controller name etc. This is important, as this part is later also used by the Swagger implementation to create the OpenAPI spec.&amp;nbsp;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      public void Apply(ControllerModel controller)
      {
         if (controller.ControllerType.IsGenericType)
         {
            var genericType = controller.ControllerType.GenericTypeArguments[0];
            var customNameAttribute = genericType.GetCustomAttribute&amp;lt;GeneratedControllerAttribute&amp;gt;();
            controller.ControllerName = genericType.Name;

            if (customNameAttribute?.Route != null)
            {
               if (controller.Selectors.Count &amp;gt; 0)
               {
                  var currentSelector = controller.Selectors[0];
                  currentSelector.AttributeRouteModel = new AttributeRouteModel(new RouteAttribute(customNameAttribute.Route));
               }
               else
               {
                  controller.Selectors.Add(new SelectorModel
                  {
                     AttributeRouteModel = new AttributeRouteModel(new RouteAttribute(customNameAttribute.Route))
                  });
               }
            }
         }
      }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The RouteConvention is not the only part; the second part is the actual FeatureProvider I had to implement. The provider is the magic part that actually goes through the assembly, fetches the types with our attribute and adds a controller &quot;feature&quot; for each. (Note: we exclude BaseController here as that&#39;s our generic one, which we don&#39;t need twice.)&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;      public void PopulateFeature(IEnumerable&amp;lt;ApplicationPart&amp;gt; parts, ControllerFeature feature)
      {
         foreach (var assembly in Assemblies)
         {
            var loadedAssembly = Assembly.Load(assembly);
            var customClasses = loadedAssembly.GetExportedTypes().Where(x =&amp;gt; x.GetCustomAttributes&amp;lt;GeneratedControllerAttribute&amp;gt;().Any());

            foreach (var candidate in customClasses)
            {
               // Ignore BaseController itself
               if (candidate.FullName != null &amp;amp;&amp;amp; candidate.FullName.Contains(&quot;BaseController&quot;)) continue;

               // Generate type info for our runtime controller, assign class as T
               var propertyType = candidate.GetProperty(&quot;Id&quot;)?.PropertyType;
               if (propertyType == null) continue;
               var typeInfo = typeof(GenericController&amp;lt;,&amp;gt;).MakeGenericType(candidate, propertyType).GetTypeInfo();

               // Finally add the new controller via the FeatureProvider
               feature.Controllers.Add(typeInfo);
            }
         }
      }&lt;/code&gt;&lt;/pre&gt;
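&lt;p&gt;One piece the snippets above leave out is wiring the generic repository into dependency injection so the controller&#39;s constructor can be resolved. An open-generic registration along these lines should do it; note that the concrete GenericRespository class name is my assumption based on the interface shown earlier:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// In ConfigureServices: one open-generic registration covers every entity type
services.AddScoped(typeof(IGenericRespository&amp;lt;,&amp;gt;), typeof(GenericRespository&amp;lt;,&amp;gt;));&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;With this in place, the container can close the generic pair for whatever T and TEntityId the FeatureProvider produced.&lt;/p&gt;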
&lt;p&gt;And that&#39;s about it for now. With all these parts combined and added, you get a working API as soon as you run the app.&amp;nbsp;&lt;br /&gt;You can find a small sample app demonstrating the functionality here: &lt;a href=&quot;https://github.com/DeeJayTC/net-dynamic-api/tree/main/sample/ApiGeneratorSampleApp&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;https://github.com/DeeJayTC/net-dynamic-api/tree/main/sample/ApiGeneratorSampleApp&lt;/a&gt;&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term=".NET" />
    <category term="TCDev" />
  </entry>
  <entry>
    <title>MadeWithCards Updates February 22</title>
    <link href="https://www.tcdev.de/blog/madewithcards-updates-february-22/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/madewithcards-updates-february-22/</id>
    <updated>2022-02-15T00:00:00Z</updated>
    <summary>MadeWithCards now has a lot more content and quite a few more updates!</summary>
    <content type="html">&lt;h3&gt;New Updates and even more content!&lt;/h3&gt;
&lt;p&gt;For quite a while I didn&#39;t do any updates to the page; the blog was practically non-existent and we only had the community calls from the Adaptive Cards team.&amp;nbsp;&lt;br /&gt;This changes now! MadeWithCards was always about content related to Adaptive Cards. While the getting started sections and the cards themselves were really helpful already, the blog and media sections were severely lacking. From now on I&#39;ll frequently add new videos and blog posts from the community, so you&#39;ll find a lot more content on the page.&lt;/p&gt;
&lt;h3&gt;The new &quot;Media&quot; Channel&lt;/h3&gt;
&lt;p&gt;Our new Media area now lists not only Adaptive Cards Community Calls but all Community Calls with Adaptive Cards related content. I have also started to add, and will continue adding, videos and guides made by the community. This is first of all a community project, so: more community content for you!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/madewithcards-updates-february-22-inline-1.png&quot; alt=&quot;undefined&quot; width=&quot;732&quot; height=&quot;294&quot; style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;The &quot;new&quot; Blog&lt;/h3&gt;
&lt;p&gt;Besides having my own articles as before, the blog will now become a large collection of blog posts, guides, news and updates from various community members. Everything is hand-picked to keep a certain quality level. If you want your posts to be added here, join the community Discord and let me know, or send a mail to tim@madewithcards.io.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;I&#39;ll continue adding new posts as I see them, so coming back frequently now makes a lot more sense!&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/madewithcards-updates-february-22-inline-2.png&quot; width=&quot;793&quot; height=&quot;387&quot; style=&quot;display: block; margin-left: auto; margin-right: auto;&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;Join the community discord!&lt;/h3&gt;
&lt;p&gt;As a small idea to try out, I created a Discord server, &quot;MadeWithCards&quot;.&amp;nbsp;&lt;a href=&quot;https://discord.gg/pAbVu2jNA9&quot; title=&quot;join community discord&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Join here!&lt;/a&gt;&amp;nbsp;&lt;/p&gt;
&lt;p&gt;I&#39;ve heard a couple of times that it&#39;s quite hard to find people to help with Adaptive Cards. Yes, you can read all the posts and watch the videos, but sometimes you have a specific question. Stack Overflow is already a good source, but sometimes you want a more detailed or direct answer. That&#39;s the idea behind the Discord server.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Discord has a much lower entry barrier than, say, the AdaptiveCards TAP, and is free to join for everyone.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Let&#39;s try and create a nice and lovely community here!&lt;br /&gt;&lt;br /&gt;&lt;/p&gt;
&lt;h3&gt;A note at the end&lt;/h3&gt;
&lt;p&gt;EVERYTHING I do, this page, the AdaptiveCards Studio extension, my libraries, all of it, is and will always be completely free: no fees and no ads on any of my websites. Won&#39;t happen...never! That said, if you really like my work and think it&#39;s worth a few bucks, consider using GitHub Sponsors &lt;a href=&quot;https://github.com/sponsors/DeeJayTC/&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;here&lt;/a&gt;. You obviously don&#39;t have to...but if you want to, it&#39;s much appreciated and I&#39;ll make sure to list you as a sponsor on the page!&lt;/p&gt;
&lt;p&gt;Greetings&lt;br /&gt;Tim&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="MadeWithCards" />
  </entry>
  <entry>
    <title>.NET App Settings explained</title>
    <link href="https://www.tcdev.de/blog/net-app-settings-explained/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/net-app-settings-explained/</id>
    <updated>2021-11-25T00:00:00Z</updated>
    <summary>Quite often I see questions about how app settings work locally, in Azure and in Docker; here&#39;s a small explanation</summary>
    <content type="html">&lt;p&gt;In the last days I often saw questions related to appsettings. Thats something many people still have concerns about or are unsure how to handle things properly.&amp;nbsp;&lt;br /&gt;This is my approach to do that&lt;/p&gt;
&lt;h3&gt;Let&#39;s walk through it:&lt;/h3&gt;
&lt;p&gt;This is a piece of code I usually have in my apps, responsible for loading the app configuration.&amp;nbsp;&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;Configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile(&quot;appsettings.json&quot;)
    .AddJsonFile($&quot;appsettings.{env.Name}.json&quot;)
    .AddJsonFile(&quot;secrets.json&quot;)
    .AddAzureKeyVault()           // key vault goes here
    .AddEnvironmentVariables()    // should always be the last
    .Build();&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The first two parts are pretty self-explanatory: you just tell your app where to look for the files so you don&#39;t have to use long paths. In this case we say that all JSON files are in the &quot;CurrentDirectory&quot;, which is the base directory of your app.&amp;nbsp;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AddJsonFile(&quot;appsettings.json&quot;)&lt;br /&gt;&lt;/strong&gt;appsettings.json should always only include settings that are valid for all app instances, no matter if development or production.&amp;nbsp;&lt;br /&gt;This is where you put generic settings, something that applies to every developer and all people working with your code.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AddJsonFile(&quot;appsettings.{env.Name}.json&quot;)&lt;br /&gt;&lt;/strong&gt;This is where some of the magic of configuration happens. You might have noticed that you have more than one app settings file in your app, usually something like this:&lt;br /&gt;&quot;appsettings.development.json&quot;&lt;br /&gt;&quot;appsettings.production.json&quot;&lt;br /&gt;&quot;appsettings.staging.json&quot;&lt;br /&gt;These files get loaded depending on the environment you&#39;re currently in and override the previously loaded config where there are matching keys.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;&lt;br /&gt;Let&#39;s say you have this in your appsettings.json
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;  &quot;Logging&quot;: {
    &quot;LogLevel&quot;: {
      &quot;Default&quot;: &quot;Information&quot;,
      &quot;Microsoft&quot;: &quot;Warning&quot;,
      &quot;Microsoft.Hosting.Lifetime&quot;: &quot;Information&quot;
    }
  },&lt;/code&gt;&lt;/pre&gt;
&lt;br /&gt;These are general settings that should apply if not overwritten anywhere else. Now, in the development settings we could have something like this:&lt;br /&gt;&lt;br /&gt;
&lt;pre class=&quot;language-javascript&quot;&gt;&lt;code&gt;  &quot;Logging&quot;: {
    &quot;LogLevel&quot;: {
      &quot;Default&quot;: &quot;Verbose&quot;
    }
  }&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This means that while you&#39;re in development, the log settings are way more verbose; you overwrite the normal app settings by doing this.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;Keep in mind that this only works because the order in which configuration files are loaded matters: each key gets overwritten by the same key if it is loaded later.&amp;nbsp;&lt;br /&gt;That&#39;s also why environment variables should come last, but we&#39;ll get to that.&amp;nbsp;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AddJsonFile(&quot;secrets.json&quot;)&lt;br /&gt;&lt;/strong&gt;The secrets.json is what I usually use to store local development secrets; nothing too private, but things that might be personal per developer. Best not to include this file in your repository; just add it to .gitignore. This is where your devs can add their own settings without colliding with anyone else&#39;s.&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;AddAzureKeyVault&lt;br /&gt;&lt;/strong&gt;The Azure Key Vault is where private keys, settings, URLs and other confidential values can be stored for your production apps. Usually you don&#39;t have a key vault locally, so this setting only applies to production. As it comes late in the chain, it overrides all prior app settings. When hosting an app on Azure, you should use this for client secrets, client IDs, database connection strings and whatever else should stay secret.&lt;br /&gt;&lt;br /&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/azure/azure-app-configuration/use-key-vault-references-dotnet-core?tabs=core5x&quot; rel=&quot;follow noopener&quot; target=&quot;_blank&quot;&gt;Here&#39;s a guide taken from Microsoft Docs&lt;/a&gt;&lt;br /&gt;&lt;br /&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;AddEnvironmentVariables()&lt;br /&gt;&lt;/strong&gt;The last bit in the picture is environment variables. You might see it differently, but in my case I always want the env vars to override everything else, mostly for containerization reasons. Env vars are what&#39;s primarily used to configure applications inside Docker, and Helm charts and Kubernetes also play a big role here. When deploying containers you often work with deployment scripts and set things on a more global level, very often using environment variables in e.g. docker-compose files.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;It happened often in the past that such settings did not have any effect, leading to confusion, because some appsetting was overwriting the environment variables. That&#39;s why I decided these are always loaded last: if there is a specific environment variable, it is the setting in effect and beats all the others.&amp;nbsp;&lt;/li&gt;
&lt;/ul&gt;
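&lt;p&gt;The override behaviour described above boils down to &quot;later sources win, key by key&quot;. Here is a small sketch of that merge logic; it simulates the idea with plain dictionaries and is not how ConfigurationBuilder is actually implemented:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;using System.Collections.Generic;

public static class ConfigDemo
{
   // Each later layer overrides matching keys from earlier layers,
   // mirroring the provider order in the snippet above.
   public static Dictionary&amp;lt;string, string&amp;gt; Merge(params Dictionary&amp;lt;string, string&amp;gt;[] layers)
   {
      var result = new Dictionary&amp;lt;string, string&amp;gt;();
      foreach (var layer in layers)          // earliest source first
         foreach (var pair in layer)
            result[pair.Key] = pair.Value;   // later layers win
      return result;
   }
}&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Merging appsettings.json (Default = Information), appsettings.development.json (Default = Verbose) and an environment variable (Default = Warning) in that order yields Warning, the environment variable&#39;s value.&lt;/p&gt;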
&lt;h3&gt;A word on Github Repositories and Settings&lt;/h3&gt;
&lt;p&gt;If you ask what&#39;s actually supposed to be pushed to the repository, the answer is quite simple.&amp;nbsp;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Don&#39;t push:
&lt;ul&gt;
&lt;li&gt;Anything containing private keys or otherwise confidential data (like secrets.json, docker-compose files with secrets in them... anything similar)&lt;/li&gt;
&lt;li&gt;Anything that&#39;s for a specific person, i.e. personal developer settings&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Do push:
&lt;ul&gt;
&lt;li&gt;The default appsettings.json&lt;/li&gt;
&lt;li&gt;Anything that only contains app configuration, like logging settings or URLs you don&#39;t have to keep secret&lt;/li&gt;
&lt;li&gt;Anything no one can do any harm with.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ul&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="Guide" />
    <category term=".NET" />
  </entry>
  <entry>
    <title>The new MadeWithCards.io</title>
    <link href="https://www.tcdev.de/blog/the-new-madewithcardsio/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/the-new-madewithcardsio/</id>
    <updated>2021-09-15T00:00:00Z</updated>
    <summary>A brand new updated MadeWithCards.io was released</summary>
    <content type="html">&lt;div&gt;
&lt;div&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Hey Everyone! We&#39;re back in new shape, better than ever! :)&lt;/h2&gt;
&lt;p id=&quot;a826&quot; data-selectable-paragraph=&quot;&quot;&gt;As you might have noticed, you&#39;re reading this on a brand new page :) Our old page was built pretty fast and lacked some substantial features we needed to improve it and add new things, so we had to rebuild quite a few parts from scratch. This took quite a while to accomplish and you did not hear from us for a while...that&#39;s done now :)&amp;nbsp;&lt;br /&gt;&lt;br /&gt;We now have a brand new www.madewithcards.io page, with lots of features to be added soon. While there is still some stuff left to work on, we didn&#39;t want to wait any longer to show the community our work. There&#39;s some tuning to do, our blog isn&#39;t finished yet and various features behind the scenes need some love as well, but it&#39;s by far better than the old page and a bunch of stuff is already available!&amp;nbsp;&lt;br /&gt;&lt;br /&gt;There&#39;s a ton of new features added and more to be added soon.&amp;nbsp;&lt;/p&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Customize the page :)&lt;/h2&gt;
&lt;p&gt;A more gimmicky feature, but you can now decide if you want the page in light or dark mode...just use the cogs icon on the right to choose. You can also change the default card layout from here.&amp;nbsp;&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/the-new-madewithcardsio-inline-1.png&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;A brand new getting started section!&lt;/h2&gt;
&lt;p&gt;We extended the page with a brand new getting started section. It is being filled more and more and will help people get started with AdaptiveCards. We list Microsoft Learn content and recommended blog posts besides various documentation links to help you find what you need!&lt;br /&gt;Starting with AdaptiveCards was never easier ;)&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;&lt;img src=&quot;https://www.tcdev.de/blog/img/legacy/the-new-madewithcardsio-inline-2.png&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;div&gt;
&lt;div&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Our brand new AdaptiveCards API!&lt;/h2&gt;
&lt;p&gt;The old page had the cards more or less hardcoded in the page&#39;s own database; the new MadeWithCards.io uses our public cards database. That&#39;s right, you can get all the cards on the page, and all cards we&#39;ll add, via&amp;nbsp;https://api.madewithcards.io. This also means that in the future all public cards can be used in BotFramework, VSCode or your Power Automate flows, making shared cards even more useful for the community.&lt;/p&gt;
&lt;p&gt;While the API is already online, it can only be used by selected users at the moment. We plan to make the API generally available soon.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;A note on cards: currently not all the old cards are online on the page, because we&#39;re transitioning to the new system. You will find all cards, plus a bunch of new ones, in the cards section soon.&amp;nbsp;&lt;/p&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Your own cards!&lt;/h2&gt;
&lt;p&gt;Once our API is publicly available you will also be able to log in to the page, add your own cards (even private ones!), make your own cards public and edit your cards from VSCode or the Designer. This will allow you to re-use cards everywhere you like, from Power Automate to Teams apps and more. Yes, even cards for SAP or Cisco.&amp;nbsp;&lt;br /&gt;&lt;br /&gt;With the help of the community we plan to build a huge section of AdaptiveCard templates for common data sources, such as Github.com, Asana, Teamwork, Wrike, the Microsoft Graph APIs and a lot more. We&#39;re always looking for cards for &quot;public&quot; APIs. Really, any template built for data from any SaaS product.&amp;nbsp;&lt;/p&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Various Extensions and Libraries&lt;/h2&gt;
&lt;p&gt;Easily use one of our cards from Power Automate, use cards from the API within BotFramework, or build self-contained cards using Stencil...these are just a few of the things we&#39;re about to release pretty soon.&amp;nbsp;&lt;/p&gt;
&lt;p&gt;Oh...did you know you can embed AdaptiveCards in Medium.com blog posts soon? :)&lt;/p&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;AdaptiveCards Studio!&lt;/h2&gt;
&lt;p&gt;AdaptiveCards Studio will receive quite a few updates as well; besides loading and editing cards directly from within VSCode, you can also write cards in YAML format and convert them to JSON later. (Thanks to our friends at&amp;nbsp;https://www.asseco.com)&lt;/p&gt;
&lt;h2 id=&quot;f627&quot; data-selectable-paragraph=&quot;&quot;&gt;Now...why all this?!&lt;/h2&gt;
&lt;p&gt;AdaptiveCards are reusable, templated and can easily be loaded from any source. We still see quite a few people hardcode cards in C# or BotFramework; we want to ease AdaptiveCards development and help people make their cards reusable, shareable and future proof :)&lt;/p&gt;
&lt;p&gt;Want to know more? Leave us a message!&lt;/p&gt;
&lt;p&gt;Stay tuned!&lt;/p&gt;
&lt;p&gt;Tim&lt;/p&gt;
&lt;/div&gt;
&lt;/div&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="MadeWithCards" />
  </entry>
  <entry>
    <title>AC Templating  is a game changer!</title>
    <link href="https://www.tcdev.de/blog/ac-templating-is-a-game-changer/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/ac-templating-is-a-game-changer/</id>
    <updated>2021-07-01T00:00:00Z</updated>
    <summary>Simply explained templating is data-binding onto JSON strings.</summary>
    <content type="html">&lt;div&gt;
&lt;h1 id=&quot;427c&quot; data-selectable-paragraph=&quot;&quot;&gt;Why templating for Adaptive Cards is a game-changer.&lt;/h1&gt;
&lt;/div&gt;
&lt;p id=&quot;907a&quot; data-selectable-paragraph=&quot;&quot;&gt;If you never heard of what Adaptive Cards are, it might be a good idea to learn a few things about them before we continue. Also on a separate note, some of the things covered in this post are what we&amp;rsquo;ve been talking about in the Microsoft Ignite session just recently.&lt;/p&gt;
&lt;p id=&quot;eff8&quot; data-selectable-paragraph=&quot;&quot;&gt;If you prefer watching the session before reading, you can watch the recording here: &lt;a href=&quot;https://myignite.techcommunity.microsoft.com/sessions/81641?source=sessions&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://myignite.techcommunity.microsoft.com/sessions/81641&lt;/a&gt;&lt;/p&gt;
&lt;h1 id=&quot;5ccd&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Cards&lt;/h1&gt;
&lt;p id=&quot;3253&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Cards&amp;nbsp;are platform-agnostic snippets of UI, authored in JSON, that apps and services can openly exchange. When delivered to a specific app, the JSON is transformed into a native UI that automatically adapts to its surroundings. Cards are supported by various Microsoft products but are rendered anywhere you like even in your own apps.&lt;/p&gt;
&lt;p id=&quot;5868&quot; data-selectable-paragraph=&quot;&quot;&gt;Read more here: www.adaptivecards.io&lt;/p&gt;
&lt;h1 id=&quot;eb69&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Card Templating&lt;/h1&gt;
&lt;p id=&quot;c9c9&quot; data-selectable-paragraph=&quot;&quot;&gt;After talking about what Adaptive Cards are, let&amp;rsquo;s talk about the main topic of this post.&lt;/p&gt;
&lt;p id=&quot;81af&quot; data-selectable-paragraph=&quot;&quot;&gt;Templating. What is that all about?&lt;br /&gt;Simply explained, templating is data-binding onto JSON strings. It&#39;s not even Adaptive Cards specific as such: it just lets you bind any data, be it JSON formatted or a specific object instance, onto your JSON string. It is similar to what a string replace would do, just far more reliable and convenient, with support for functions and arrays.&lt;/p&gt;
&lt;p id=&quot;070f&quot; data-selectable-paragraph=&quot;&quot;&gt;Basic data binding&lt;br /&gt;Within your JSON file you can use placeholders like {person.firstName}, or even access multiple levels with {person.address.street} or {person.contact[2].phone}. The templating library is capable of running functions over your data, can repeat JSON structures based on arrays, and a few more things.&lt;/p&gt;
&lt;p id=&quot;5878&quot; data-selectable-paragraph=&quot;&quot;&gt;You can easily access your data using these placeholders to start with:&lt;/p&gt;
&lt;pre&gt;&lt;span id=&quot;6e1f&quot; data-selectable-paragraph=&quot;&quot;&gt;{&lt;br /&gt;&lt;/span&gt;&lt;span id=&quot;6e1f&quot; data-selectable-paragraph=&quot;&quot;&gt; &lt;/span&gt;&lt;span id=&quot;6e1f&quot; data-selectable-paragraph=&quot;&quot;&gt;&quot;{&amp;lt;property&amp;gt;}&quot;: &quot;Implicitly binds to `$data.&amp;lt;property&amp;gt;`&quot;,&lt;br /&gt; &quot;$data&quot;: &quot;The current data object&quot;,&lt;br /&gt; &quot;$root&quot;: &quot;The root data object.&quot;,&lt;br /&gt; &quot;$index&quot;: &quot;The current index when iterating&quot;,&lt;br /&gt; &quot;$host&quot;: &quot;Access properties of the host *(not working yet)*&quot;&lt;br /&gt;}&lt;/span&gt;&lt;/pre&gt;
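To get a feel for what the engine does with those placeholders, here is a minimal, hand-written sketch of the binding idea. This is illustration only, not the real templating library (which also supports functions, conditions, and iteration):

```javascript
// Minimal sketch of placeholder binding, as in {person.firstName} or
// {person.contact[2].phone}. Hand-written illustration, NOT the real
// adaptivecards-templating library.
function lookup(data, path) {
  // "person.contact[2].phone" -> ["person", "contact", "2", "phone"]
  const parts = path.replace(/\[(\d+)\]/g, ".$1").split(".");
  return parts.reduce((obj, key) => (obj == null ? undefined : obj[key]), data);
}

function expand(templateString, data) {
  // Replace every {path} placeholder with the value found in the data;
  // unresolvable placeholders are left untouched.
  return templateString.replace(/\{([\w$.[\]]+)\}/g, (match, path) => {
    const value = lookup(data, path);
    return value === undefined ? match : String(value);
  });
}

const data = {
  person: {
    firstName: "Micky",
    lastName: "Mouse",
    contact: [{ phone: "111" }, { phone: "222" }, { phone: "333" }],
  },
};

console.log(expand('"text": "{person.firstName} {person.lastName}, {person.contact[2].phone}"', data));
// "text": "Micky Mouse, 333"
```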
&lt;p id=&quot;7de2&quot; data-selectable-paragraph=&quot;&quot;&gt;Iterations and Conditions&lt;br /&gt;All this gets even more interesting when we add iterations and conditions to the mix. Let&amp;rsquo;s assume we have the following data returned from an API or database:&lt;/p&gt;
&lt;pre&gt;&lt;span id=&quot;e072&quot; data-selectable-paragraph=&quot;&quot;&gt;{&lt;br /&gt; &quot;title&quot;: &quot;My list of people:&quot;,&lt;br /&gt; &quot;count&quot;: 4,&lt;br /&gt; &quot;people&quot;: [{&lt;br /&gt;   &quot;firstName&quot;: &quot;Micky&quot;,&lt;br /&gt;   &quot;lastName&quot;: &quot;Mouse&quot;,&lt;br /&gt;   &quot;age&quot;: 44&lt;br /&gt;  },&lt;br /&gt;  {&lt;br /&gt;   &quot;firstName&quot;: &quot;Donald&quot;,&lt;br /&gt;   &quot;lastName&quot;: &quot;Duck&quot;,&lt;br /&gt;   &quot;age&quot;: 12&lt;br /&gt;  },&lt;br /&gt;  {&lt;br /&gt;   &quot;firstName&quot;: &quot;Harry&quot;,&lt;br /&gt;   &quot;lastName&quot;: &quot;Potter&quot;,&lt;br /&gt;   &quot;age&quot;: 18&lt;br /&gt;  },&lt;br /&gt;  {&lt;br /&gt;   &quot;firstName&quot;: &quot;Matt&quot;,&lt;br /&gt;   &quot;lastName&quot;: &quot;Hidinger&quot;,&lt;br /&gt;   &quot;age&quot;: &quot;28&quot;&lt;br /&gt;  }&lt;br /&gt; ]&lt;br /&gt;}&lt;/span&gt;&lt;/pre&gt;
&lt;p id=&quot;d5b5&quot; data-selectable-paragraph=&quot;&quot;&gt;Now we want to bind this onto the following JSON template:&lt;/p&gt;
&lt;pre&gt;&lt;span id=&quot;58f3&quot; data-selectable-paragraph=&quot;&quot;&gt;{&lt;br /&gt;    &quot;type&quot;: &quot;AdaptiveCard&quot;,&lt;br /&gt;    &quot;body&quot;: [&lt;br /&gt;        {&lt;br /&gt;            &quot;type&quot;: &quot;TextBlock&quot;,&lt;br /&gt;            &quot;size&quot;: &quot;Medium&quot;,&lt;br /&gt;            &quot;weight&quot;: &quot;Bolder&quot;,&lt;br /&gt;            &quot;text&quot;: &quot;{title}&quot;&lt;br /&gt;        },&lt;br /&gt;        {&lt;br /&gt;            &quot;type&quot;: &quot;FactSet&quot;,&lt;br /&gt;            &quot;facts&quot;: [&lt;br /&gt;                {&lt;br /&gt;                  &quot;$data&quot;: &quot;{people}&quot;,&lt;br /&gt;                  &quot;$when&quot;: &quot;{$index.age &amp;gt; 12}&quot;,&lt;br /&gt;                  &quot;title&quot;: &quot;{$index.firstName} {$index.lastName}&quot;,&lt;br /&gt;                  &quot;value&quot;: &quot;{$index.age}&quot;&lt;br /&gt;                }&lt;br /&gt;            ]&lt;br /&gt;        }&lt;br /&gt;    ],&lt;br /&gt;    &quot;$schema&quot;: &quot;&lt;a href=&quot;http://adaptivecards.io/schemas/adaptive-card.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;http://adaptivecards.io/schemas/adaptive-card.json&lt;/a&gt;&quot;,&lt;br /&gt;    &quot;version&quot;: &quot;1.0&quot;&lt;br /&gt;}&lt;/span&gt;&lt;/pre&gt;
&lt;p id=&quot;bd71&quot; data-selectable-paragraph=&quot;&quot;&gt;As you can see, we access properties from the data, like the title, but also firstName and lastName from the people array. The $when condition makes sure we only render people who are older than 12.&lt;br /&gt;After transforming the data, the resulting JSON will be this:&lt;/p&gt;
&lt;pre&gt;&lt;span id=&quot;5309&quot; data-selectable-paragraph=&quot;&quot;&gt;{&lt;br /&gt; &quot;type&quot;: &quot;AdaptiveCard&quot;,&lt;br /&gt; &quot;body&quot;: [{&lt;br /&gt;   &quot;type&quot;: &quot;TextBlock&quot;,&lt;br /&gt;   &quot;size&quot;: &quot;Medium&quot;,&lt;br /&gt;   &quot;weight&quot;: &quot;Bolder&quot;,&lt;br /&gt;   &quot;text&quot;: &quot;My list of people&quot;&lt;br /&gt;  },&lt;br /&gt;  {&lt;br /&gt;   &quot;type&quot;: &quot;FactSet&quot;,&lt;br /&gt;   &quot;facts&quot;: [{&lt;br /&gt;     &quot;title&quot;: &quot;Micky Mouse&quot;,&lt;br /&gt;     &quot;value&quot;: &quot;44&quot;&lt;br /&gt;    },&lt;br /&gt;    {&lt;br /&gt;     &quot;title&quot;: &quot;Harry Potter&quot;,&lt;br /&gt;     &quot;value&quot;: &quot;18&quot;&lt;br /&gt;    },&lt;br /&gt;    {&lt;br /&gt;     &quot;title&quot;: &quot;Matt Hidinger&quot;,&lt;br /&gt;     &quot;value&quot;: &quot;28&quot;&lt;br /&gt;    }&lt;br /&gt;   ]&lt;br /&gt;  }&lt;br /&gt; ],&lt;br /&gt; &quot;$schema&quot;: &quot;&lt;a href=&quot;http://adaptivecards.io/schemas/adaptive-card.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;http://adaptivecards.io/schemas/adaptive-card.json&lt;/a&gt;&quot;,&lt;br /&gt; &quot;version&quot;: &quot;1.0&quot;&lt;br /&gt;}&lt;/span&gt;&lt;/pre&gt;
&lt;p id=&quot;6493&quot; data-selectable-paragraph=&quot;&quot;&gt;The transformation skipped Donald because his age is not above 12, but added a fact for each person who met the condition.&lt;/p&gt;
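The $data/$when expansion of the FactSet can be pictured as a filter-and-map over the array. Here is a hand-written sketch of the equivalent logic (assumed semantics for illustration, not the templating library itself):

```javascript
// Sketch of what "$data": "{people}" plus "$when": "{$index.age > 12}"
// does for the FactSet: repeat the fact per array element, skipping
// elements that fail the condition. Hand-written illustration only.
function expandFacts(people) {
  return people
    .filter((p) => Number(p.age) > 12) // the "$when" condition
    .map((p) => ({
      title: `${p.firstName} ${p.lastName}`, // "{$index.firstName} {$index.lastName}"
      value: String(p.age), // "{$index.age}"
    }));
}

const people = [
  { firstName: "Micky", lastName: "Mouse", age: 44 },
  { firstName: "Donald", lastName: "Duck", age: 12 },
  { firstName: "Harry", lastName: "Potter", age: 18 },
  { firstName: "Matt", lastName: "Hidinger", age: "28" },
];

console.log(expandFacts(people));
// Donald (age 12) is filtered out; the other three become facts.
```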
&lt;p id=&quot;2d15&quot; data-selectable-paragraph=&quot;&quot;&gt;The final adaptive card would look like this:&lt;/p&gt;
&lt;figure class=&quot;aht ahu ahv ahw sp aie mg mh paragraph-image&quot;&gt;
&lt;div class=&quot;mg mh aid&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;dm wi aif&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/ac-templating-is-a-game-changer-inline-1.png&quot; width=&quot;431&quot; height=&quot;141&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;figcaption class=&quot;ri ig sh mg mh aig aih cf b gd ge afh&quot; data-selectable-paragraph=&quot;&quot;&gt;Finally transformed and rendered card.&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;00b9&quot; data-selectable-paragraph=&quot;&quot;&gt;These are just examples, there&amp;rsquo;s a lot more you can do.&lt;br /&gt;You can find all the available functionality here:&lt;br /&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/adaptive-cards/templating/&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://docs.microsoft.com/en-us/adaptive-cards/templating/&lt;/a&gt;&lt;br /&gt;and&lt;br /&gt;&lt;a href=&quot;https://docs.microsoft.com/en-us/adaptive-cards/templating/language&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://docs.microsoft.com/en-us/adaptive-cards/templating/language&lt;/a&gt;&lt;/p&gt;
&lt;h1 id=&quot;a655&quot; data-selectable-paragraph=&quot;&quot;&gt;Why is that useful now?&lt;/h1&gt;
&lt;p id=&quot;a55e&quot; data-selectable-paragraph=&quot;&quot;&gt;Think about how you usually generate JSON to send to an API, or how you would normally generate JSON output for your own API. Most likely you have things like classes and serialization in mind, right? While some people have tried different approaches, serialization is usually still the most convenient way. However, generating JSON in your code leads to a couple of issues: it lives in your codebase, it has to be compiled, and so on. Changes require new releases, and you often repeat yourself in various places or apps.&lt;/p&gt;
&lt;p id=&quot;8076&quot; data-selectable-paragraph=&quot;&quot;&gt;This is especially true when I hear people talk about Adaptive Cards and how they use them. Even some of the Microsoft examples on how to use the Bot Framework for MS Teams say you should do something like this:&lt;/p&gt;
&lt;pre&gt;&lt;span id=&quot;fd18&quot; data-selectable-paragraph=&quot;&quot;&gt;// Create card&lt;br /&gt;    var card = new AdaptiveCard(new AdaptiveSchemaVersion(1, 0))&lt;br /&gt;    {&lt;br /&gt;        // Use LINQ to turn the choices into submit actions&lt;br /&gt;        Actions = choices.Select(choice =&amp;gt; new AdaptiveSubmitAction&lt;br /&gt;        {&lt;br /&gt;            Title = choice,&lt;br /&gt;            Data = choice,  // This will be a string&lt;br /&gt;        }).ToList&amp;lt;AdaptiveAction&amp;gt;(),&lt;br /&gt;    };&lt;/span&gt;&lt;/pre&gt;
&lt;p id=&quot;1147&quot; data-selectable-paragraph=&quot;&quot;&gt;Now think about what happens when you want to change anything on your card, add something, or even allow someone like your customer to edit this card? Practically impossible.&lt;/p&gt;
&lt;p id=&quot;7a9e&quot; data-selectable-paragraph=&quot;&quot;&gt;With Adaptive Cards Templating, the data is completely separate from the card layout. This means your card template can live anywhere: in your code, in a database, or even on a completely remote server fetched by URL.&lt;/p&gt;
&lt;p id=&quot;6140&quot; data-selectable-paragraph=&quot;&quot;&gt;To render the card you just bind your data onto your template using the template library.&lt;/p&gt;
&lt;p id=&quot;803b&quot; data-selectable-paragraph=&quot;&quot;&gt;This, for example, is one of the templates we use in our VS Code app:&amp;nbsp;&lt;a href=&quot;https://templates.adaptivecards.io/teamwork.com/projects/task.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://templates.adaptivecards.io/teamwork.com/projects/task.json&lt;/a&gt;&lt;/p&gt;
&lt;p id=&quot;4e0c&quot; data-selectable-paragraph=&quot;&quot;&gt;If you&amp;rsquo;ve seen code for Adaptive Cards before, it should look familiar; however, there is no data in it, just a lot of placeholders as described above. This template is loaded at runtime when people use our extension.&lt;/p&gt;
&lt;figure class=&quot;aht ahu ahv ahw sp aie mg mh paragraph-image&quot;&gt;
&lt;div role=&quot;button&quot; class=&quot;aij aik cj ail dm aim&quot; tabindex=&quot;0&quot;&gt;
&lt;div class=&quot;mg mh aii&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;dm wi aif&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/ac-templating-is-a-game-changer-inline-2.png&quot; width=&quot;700&quot; height=&quot;448&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;ri ig sh mg mh aig aih cf b gd ge afh&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Card rendered in VS Code ( JS / Typescript)&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;8e56&quot; data-selectable-paragraph=&quot;&quot;&gt;The extension is open-source and available on GitHub&amp;nbsp;&lt;a href=&quot;https://github.com/Teamwork/vscode-projects&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/Teamwork/vscode-projects&lt;/a&gt;&amp;nbsp;in case you want to have a look at how we did it.&lt;/p&gt;
&lt;p id=&quot;4e7b&quot; data-selectable-paragraph=&quot;&quot;&gt;This gives us quite a few interesting options. Let&#39;s say we want to change something in the template: we do not have to do a new release, and we don&amp;rsquo;t make any code changes. We update the template and it&#39;s changed for all customers using the VS Code extension straight away.&lt;/p&gt;
&lt;p id=&quot;36ca&quot; data-selectable-paragraph=&quot;&quot;&gt;An additional advantage is that you can re-use the same template in various places and can even share it with other people to allow them to use it if you want to. It is just a template, no private data in it.&lt;/p&gt;
&lt;p id=&quot;3c38&quot; data-selectable-paragraph=&quot;&quot;&gt;It&#39;s not released yet, but we also use the same template in our Visual Studio Pro extension, again the very template from the link above.&lt;/p&gt;
&lt;figure class=&quot;aht ahu ahv ahw sp aie mg mh paragraph-image&quot;&gt;
&lt;div role=&quot;button&quot; class=&quot;aij aik cj ail dm aim&quot; tabindex=&quot;0&quot;&gt;
&lt;div class=&quot;mg mh ain&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;dm wi aif&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/ac-templating-is-a-game-changer-inline-3.png&quot; width=&quot;700&quot; height=&quot;450&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;ri ig sh mg mh aig aih cf b gd ge afh&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Card in Visual Studio Pro ( WPF )&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h1 id=&quot;9b13&quot; data-selectable-paragraph=&quot;&quot;&gt;Template Repository and sharing templates&lt;/h1&gt;
&lt;p id=&quot;74dd&quot; data-selectable-paragraph=&quot;&quot;&gt;I mentioned templates fetched from a remote URL and also &amp;ldquo;sharing&amp;rdquo; of templates earlier. Yes, that&#39;s becoming a thing with templating.&lt;/p&gt;
&lt;p id=&quot;9198&quot; data-selectable-paragraph=&quot;&quot;&gt;As templates do not contain any data, you can absolutely share them with other people. This is especially interesting for common data such as GitHub issues, weather data, financial reports, or data from Microsoft&amp;rsquo;s Graph API.&lt;br /&gt;If a template works for one person, it surely works for more; if one person invested the time to build a template for, let&amp;rsquo;s say, GitHub, there may be other people who would use that template for something they work on.&lt;/p&gt;
&lt;p id=&quot;bdde&quot; data-selectable-paragraph=&quot;&quot;&gt;We are working on a GitHub integration for one of our products, and we wrote an Adaptive Card template for it:&amp;nbsp;&lt;a href=&quot;https://templates.adaptivecards.io/github.com/issue_webhook.json&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://templates.adaptivecards.io/github.com/issue_webhook.json&lt;/a&gt;&lt;/p&gt;
&lt;figure class=&quot;aht ahu ahv ahw sp aie mg mh paragraph-image&quot;&gt;
&lt;div role=&quot;button&quot; class=&quot;aij aik cj ail dm aim&quot; tabindex=&quot;0&quot;&gt;
&lt;div class=&quot;mg mh aii&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;dm wi aif&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/ac-templating-is-a-game-changer-inline-4.png&quot; width=&quot;700&quot; height=&quot;432&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;ri ig sh mg mh aig aih cf b gd ge afh&quot; data-selectable-paragraph=&quot;&quot;&gt;Adaptive Card showing Github Issue within Teamwork Projects ( Knockout / Javascript )&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;4e3c&quot; data-selectable-paragraph=&quot;&quot;&gt;As you might have noticed earlier, the links are all based on&amp;nbsp;&lt;a href=&quot;https://templates.adaptivecards.io/&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://templates.adaptivecards.io&lt;/a&gt;&lt;/p&gt;
&lt;p id=&quot;d7b4&quot; data-selectable-paragraph=&quot;&quot;&gt;The developers behind Adaptive Cards have created a proof-of-concept repository (similar to npm or Docker Hub) where anyone can upload templates and make them available. They are working to create templates for commonly used data, like all OData types for MS Graph, but also things like flight reports. We added our own templates to that repository to allow anyone who wants to work with Teamwork tasks to use our template. We also shared our GitHub template, which might get some changes sooner or later.&lt;/p&gt;
&lt;p id=&quot;6484&quot; data-selectable-paragraph=&quot;&quot;&gt;If you have a template or suggested changes, you can go ahead and add a PR to the repository, it&#39;s open-source!&lt;/p&gt;
&lt;p id=&quot;7ed1&quot; data-selectable-paragraph=&quot;&quot;&gt;&lt;a href=&quot;https://github.com/microsoft/adaptivecards-templates/tree/master/templates&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/microsoft/adaptivecards-templates&lt;/a&gt;&lt;/p&gt;
&lt;h1 id=&quot;a6a3&quot; data-selectable-paragraph=&quot;&quot;&gt;Wrapping things up&lt;/h1&gt;
&lt;p id=&quot;ab6c&quot; data-selectable-paragraph=&quot;&quot;&gt;In this post, you heard about Adaptive Cards themselves, templating, and even the template repository. There is a lot more you can do with these pieces, but I want to leave that to your imagination for now. I&amp;rsquo;ll try to cover more topics and examples in later posts.&lt;/p&gt;
&lt;p id=&quot;be97&quot; data-selectable-paragraph=&quot;&quot;&gt;In case you have any questions about all this, feel free to reach out here in the comments or on Twitter: @TimCadenbach&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="AdaptiveCards" />
  </entry>
  <entry>
    <title>AdaptiveCards just got a ton better</title>
    <link href="https://www.tcdev.de/blog/adaptivecards-just-got-a-ton-better/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/adaptivecards-just-got-a-ton-better/</id>
    <updated>2021-01-05T00:00:00Z</updated>
    <summary>With AdaptiveCards 1.3 (also 2.1.0) cards got a relatively simple but really powerful change.</summary>
    <content type="html">&lt;div&gt;
&lt;div&gt;
&lt;p id=&quot;171f&quot;&gt;With AdaptiveCards 1.3 (also 2.1.0) cards got a relatively simple but really powerful change: all input fields now have a new property called &amp;ldquo;label&amp;rdquo;.&lt;/p&gt;
&lt;/div&gt;
&lt;p id=&quot;67e5&quot; data-selectable-paragraph=&quot;&quot;&gt;Yea&amp;hellip; a new field label, but how does this make cards simpler?&lt;br /&gt;Pretty simple. Take this card, made with AdaptiveCards 1.2:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&quot;language-json hljs&quot;&gt;{
    &quot;type&quot;: &quot;AdaptiveCard&quot;,
    &quot;$schema&quot;: &quot;http://adaptivecards.io/schemas/adaptive-card.json&quot;,
    &quot;version&quot;: &quot;1.2&quot;,
    &quot;body&quot;: [
        {
            &quot;type&quot;: &quot;TextBlock&quot;,
            &quot;text&quot;: &quot;Your Family Name&quot;
        },
        {
            &quot;type&quot;: &quot;Input.Text&quot;,
            &quot;placeholder&quot;: &quot;Your family name...&quot;,
            &quot;id&quot;: &quot;familyname&quot;
        },
        {
            &quot;type&quot;: &quot;TextBlock&quot;,
            &quot;text&quot;: &quot;Your Name&quot;
        },
        {
            &quot;type&quot;: &quot;Input.Text&quot;,
            &quot;placeholder&quot;: &quot;Your first name...&quot;,
            &quot;id&quot;: &quot;firstname&quot;
        },
        {
            &quot;type&quot;: &quot;TextBlock&quot;,
            &quot;text&quot;: &quot;Date of Birth&quot;
        },
        {
            &quot;type&quot;: &quot;Input.Date&quot;,
            &quot;id&quot;: &quot;birthdate&quot;
        },
        {
            &quot;type&quot;: &quot;TextBlock&quot;,
            &quot;text&quot;: &quot;Email&quot;
        },
        {
            &quot;type&quot;: &quot;Input.Text&quot;,
            &quot;placeholder&quot;: &quot;email@domain.com&quot;,
            &quot;id&quot;: &quot;email&quot;
        },
        {
            &quot;type&quot;: &quot;TextBlock&quot;,
            &quot;text&quot;: &quot;Password&quot;
        },
        {
            &quot;type&quot;: &quot;Input.Text&quot;,
            &quot;placeholder&quot;: &quot;Placeholder text&quot;,
            &quot;id&quot;: &quot;pwd&quot;
        }
    ],
    &quot;actions&quot;: [
        {
            &quot;type&quot;: &quot;Action.Submit&quot;,
            &quot;title&quot;: &quot;Sign Up&quot;
        }
    ]
}&lt;/code&gt;&lt;/pre&gt;
&lt;p id=&quot;8c1c&quot; data-selectable-paragraph=&quot;&quot;&gt;Notice how I said &amp;ldquo;simple&amp;rdquo; and you still had to scroll all the way through the long JSON code? In fact, this card is pretty simple when we look at the rendered result:&lt;/p&gt;
&lt;figure class=&quot;xm xn xo xp uy xq ao ap paragraph-image&quot;&gt;
&lt;div class=&quot;ao ap xt&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;ae nt xu&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptivecards-just-got-a-ton-better-inline-1.png&quot; width=&quot;435&quot; height=&quot;419&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;figcaption class=&quot;xv ew uh ao ap xw xx gu b hw hr hs&quot; data-selectable-paragraph=&quot;&quot;&gt;Rendered Card&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;6425&quot; data-selectable-paragraph=&quot;&quot;&gt;However, to build the card we had to do a lot of things, and for every input we needed two Adaptive Card controls: an Input.Text and a TextBlock.&lt;/p&gt;
&lt;p id=&quot;a0b7&quot; data-selectable-paragraph=&quot;&quot;&gt;Now, let&amp;rsquo;s compare things and have a look at the exact same card, made with Adaptive Cards 1.3:&lt;/p&gt;
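Rebuilt with 1.3 label properties, the same sign-up card looks roughly like this (a reconstruction based on the rendered card, so minor details may differ from the original gist):

```json
{
    "type": "AdaptiveCard",
    "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
    "version": "1.3",
    "body": [
        {
            "type": "Input.Text",
            "label": "Your Family Name",
            "placeholder": "Your family name...",
            "isRequired": true,
            "errorMessage": "This field is required",
            "id": "familyname"
        },
        {
            "type": "Input.Text",
            "label": "Your Name",
            "placeholder": "Your first name...",
            "id": "firstname"
        },
        {
            "type": "Input.Date",
            "label": "Date of Birth",
            "isRequired": true,
            "errorMessage": "This field is required",
            "id": "birthdate"
        },
        {
            "type": "Input.Text",
            "label": "Email",
            "placeholder": "email@domain.com",
            "id": "email"
        },
        {
            "type": "Input.Text",
            "label": "Password",
            "id": "pwd"
        }
    ],
    "actions": [
        {
            "type": "Action.Submit",
            "title": "Sign Up"
        }
    ]
}
```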
&lt;p id=&quot;030e&quot; data-selectable-paragraph=&quot;&quot;&gt;Wow&amp;hellip; that&amp;rsquo;s a lot less JSON than what we had before, isn&amp;rsquo;t it? In fact, it&amp;rsquo;s half the number of controls needed to get the exact same result as before.&lt;/p&gt;
&lt;figure class=&quot;xm xn xo xp uy xq ao ap paragraph-image&quot;&gt;
&lt;div class=&quot;ao ap xy&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;ae nt xu&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptivecards-just-got-a-ton-better-inline-2.png&quot; width=&quot;437&quot; height=&quot;408&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;/figure&gt;
&lt;p id=&quot;9b31&quot; data-selectable-paragraph=&quot;&quot;&gt;With AdaptiveCards 1.3, every input has a &amp;ldquo;label&amp;rdquo; property. This property automatically renders the text label above the input, so you no longer have to do that yourself. You still can if you want to, though. But this effectively means that in many cases you only need half the controls to get the same card. That&amp;rsquo;s already great, isn&amp;rsquo;t it?&lt;/p&gt;
&lt;p id=&quot;e598&quot; data-selectable-paragraph=&quot;&quot;&gt;But that&amp;rsquo;s not all of the good news!&lt;/p&gt;
&lt;/div&gt;
&lt;div role=&quot;separator&quot;&gt;&lt;/div&gt;
&lt;div&gt;
&lt;h1 id=&quot;95e9&quot; data-selectable-paragraph=&quot;&quot;&gt;Client Side Input Validation!&lt;/h1&gt;
&lt;p id=&quot;ed64&quot; data-selectable-paragraph=&quot;&quot;&gt;Yes, you&amp;rsquo;ve read that right. With AdaptiveCards 1.3 there is now input validation on all input fields. You can make sure people only enter valid emails, dates within a specific range, and so on; pretty much anything you can express with min/max or a regex can be validated.&lt;/p&gt;
&lt;p id=&quot;bc23&quot; data-selectable-paragraph=&quot;&quot;&gt;Using the input validation is pretty simple. Take a look at this part for example:&lt;/p&gt;
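An Input.Text with regex validation looks along these lines. The regex here is my own example matching the rules described, not necessarily the exact one from the card:

```json
{
    "type": "Input.Text",
    "id": "pwd",
    "label": "Password",
    "isRequired": true,
    "regex": "^(?=.*[a-z])(?=.*[A-Z])(?=.*[^a-zA-Z0-9]).{8,}$",
    "errorMessage": "Minimum 8 characters, with at least one lower-case, one upper-case and one special character."
}
```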
&lt;p id=&quot;e8ac&quot; data-selectable-paragraph=&quot;&quot;&gt;A simple &amp;ldquo;Input.Text&amp;rdquo;, but it is validated against a regex making sure the password entered follows specific rules. In this case: one lower-case character, one upper-case, one special character, and a minimum of 8 characters.&lt;/p&gt;
&lt;figure class=&quot;xm xn xo xp uy xq ao ap paragraph-image&quot;&gt;
&lt;div class=&quot;ao ap yx&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;ae nt xu&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/adaptivecards-just-got-a-ton-better-inline-3.png&quot; width=&quot;424&quot; height=&quot;450&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;figcaption class=&quot;xv ew uh ao ap xw xx gu b hw hr hs&quot; data-selectable-paragraph=&quot;&quot;&gt;AdaptiveCard showing validation errors&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;b5c9&quot; data-selectable-paragraph=&quot;&quot;&gt;Additionally you can add a custom error message when validation fails. You can try the actual working card here:&amp;nbsp;&lt;a href=&quot;https://www.madewithcards.io/cards/simple-signup-form-with-validation&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;simple-signup-form-with-validation&lt;/a&gt;&lt;/p&gt;
&lt;h1 id=&quot;fb8b&quot; data-selectable-paragraph=&quot;&quot;&gt;So what exactly can I do with this?&lt;/h1&gt;
&lt;p id=&quot;5947&quot; data-selectable-paragraph=&quot;&quot;&gt;Well, you can validate all inputs a user does on any given AdaptiveCard, some examples but not limited to are:&lt;/p&gt;
&lt;ul&gt;
&lt;li id=&quot;51b0&quot; data-selectable-paragraph=&quot;&quot;&gt;Validating text fields based on a regex; anything you can express with a regex, you can validate.&lt;/li&gt;
&lt;li id=&quot;c022&quot; data-selectable-paragraph=&quot;&quot;&gt;Validating dates, e.g. requiring a min or max date value on fields&lt;/li&gt;
&lt;li id=&quot;0e04&quot; data-selectable-paragraph=&quot;&quot;&gt;Numeric limitations; max number 1000? Go for it.&lt;/li&gt;
&lt;li id=&quot;59ac&quot; data-selectable-paragraph=&quot;&quot;&gt;Enforcing specific formats. You want the user to input something in a specific format like emails, zip codes, or generally something like &amp;ldquo;no spaces&amp;rdquo;? You can do this now as well.&lt;/li&gt;
&lt;li id=&quot;d654&quot; data-selectable-paragraph=&quot;&quot;&gt;and much more&amp;hellip;&lt;/li&gt;
&lt;/ul&gt;
&lt;p id=&quot;994c&quot; data-selectable-paragraph=&quot;&quot;&gt;All validation happens on the client side, before the card is submitted to the server.&lt;/p&gt;
&lt;p id=&quot;ca95&quot; data-selectable-paragraph=&quot;&quot;&gt;That said, have a look at&amp;nbsp;&lt;a href=&quot;http://www.madewithcards.io/cards&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;www.madewithcards.io/cards&lt;/a&gt;; we have some 1.3 and input validation samples there as well.&lt;/p&gt;
&lt;p id=&quot;f036&quot; data-selectable-paragraph=&quot;&quot;&gt;You can get the new AdaptiveCards version from NuGet or npm as usual, but you should consider installing 2.1.0 (or 2.0.0), a new version that is fully backwards compatible and includes all the new features.&lt;/p&gt;
&lt;/div&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term="AdaptiveCards" />
  </entry>
  <entry>
    <title>How to use the Teamwork Projects SDK</title>
    <link href="https://www.tcdev.de/blog/how-to-use-the-teamwork-projects-sdk/" rel="alternate" type="text/html" />
    <id>https://www.tcdev.de/blog/how-to-use-the-teamwork-projects-sdk/</id>
    <updated>2020-02-12T00:00:00Z</updated>
    <summary>Recently, a new version of the Teamwork .Net SDK was released and I’d like to give you a few examples on how this can be used to make development for Teamwork Projects a lot easier when working with .Net.</summary>
    <content type="html">&lt;div class=&quot;&quot;&gt;
&lt;h1 id=&quot;4c54&quot; class=&quot;pw-post-title ix iy iz bo ja jb jc jd je jf jg jh ji jj jk jl jm jn jo jp jq jr js jt ju jv gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Using the Teamwork Projects SDK&lt;/h1&gt;
&lt;/div&gt;
&lt;figure class=&quot;gj gl jx jy jz ka gf gg paragraph-image&quot;&gt;
&lt;div role=&quot;button&quot; class=&quot;kb kc ct kd ea ke&quot; tabindex=&quot;0&quot;&gt;
&lt;div class=&quot;gf gg jw&quot;&gt;&lt;img alt=&quot;&quot; class=&quot;ea kf kg&quot; src=&quot;https://www.tcdev.de/blog/img/legacy/how-to-use-the-teamwork-projects-sdk-inline-1.png&quot; width=&quot;700&quot; height=&quot;279&quot; role=&quot;presentation&quot; /&gt;&lt;/div&gt;
&lt;/div&gt;
&lt;/figure&gt;
&lt;p id=&quot;8e2b&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Recently, a&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a class=&quot;au lf&quot; href=&quot;https://github.com/Teamwork/dotnet&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;new version of the Teamwork .Net SDK&lt;/a&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;was released and I&amp;rsquo;d like to give you a few examples on how this can be used to make development for Teamwork Projects a lot easier when working with .Net.&lt;/p&gt;
&lt;p id=&quot;033c&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;To begin with, we need to add the library. As it&amp;rsquo;s currently still listed as a pre-release on NuGet, we need to add the -IncludePrerelease parameter:&lt;/p&gt;
&lt;blockquote class=&quot;lg lh li&quot;&gt;
&lt;p id=&quot;d31b&quot; class=&quot;kh ki lj kj b kk kl km kn ko kp kq kr lk kt ku kv ll kx ky kz lm lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;&lt;em class=&quot;iz&quot;&gt;install-package Teamwork -IncludePrerelease&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p id=&quot;768a&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;As of writing this, there are .NET Core and .NET 4.6+ versions available.&lt;/p&gt;
&lt;p id=&quot;4075&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Once we have the library loaded, we need to get an authentication token to be used with the API, and also the base URL of the installation. This part can only partly be done with the SDK.&lt;/p&gt;
&lt;h2 id=&quot;da33&quot; class=&quot;ln lo iz bo lp lq lr ls lt lu lv lw lx ks ly lz ma kw mb mc md la me mf mg mh gz&quot; data-selectable-paragraph=&quot;&quot;&gt;Getting an Access Token to be used with the client&lt;/h2&gt;
&lt;p id=&quot;0e52&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk mi km kn ko mj kq kr ks mk ku kv kw ml ky kz la mm lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;You need to understand how&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a class=&quot;au lf&quot; href=&quot;https://developer.teamwork.com/projects/authentication-questions/how-to-authenticate-via-app-login-flow&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;Teamwork&amp;rsquo;s App Login flow&lt;/a&gt;&lt;span&gt;&amp;nbsp;&lt;/span&gt;works. Once you have that configured and are ready to receive the callback from the login flow, the SDK offers a handy function that makes parsing the callback and fetching the actual access token a lot easier:&lt;/p&gt;
&lt;figure class=&quot;mn mo mp mq gr ka&quot;&gt;
&lt;div class=&quot;m l ct&quot;&gt;
&lt;div class=&quot;wf ms l&quot;&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;using Teamwork;
using Teamwork.Shared;
using Teamwork.Shared.Common;

private async Task&amp;lt;Teamwork.Client&amp;gt; HandleOauthAuthentication(string code, string state)
{
    var response = await LoginFlow.TeamworkLoginFlow.GetLoginDataAsync(code);

    // Use the response to initialize a new instance of the Teamwork API client
    return Client.GetTeamworkClient(
        pDomain: response.AccountData.Account.URL,
        pApiKey: response.TokenData.AccessToken,
        pUseOauth: true);
}&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;mt ej gh gf gg mu mv bo b bp bq fc&quot;&gt;Handling the callback from the App Login flow and getting a new client instance&lt;/figcaption&gt;
&lt;/figure&gt;
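&lt;p&gt;The handler above needs to be reachable at the redirect URL you configured for the App Login flow. As a rough sketch, assuming an ASP.NET Core Web API controller (the route and the reuse of the &lt;code&gt;HandleOauthAuthentication&lt;/code&gt; method above are illustrative choices, not prescribed by the SDK), wiring it up could look like this:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

// Teamwork redirects the browser here after the user logs in
[HttpGet(&quot;oauth/callback&quot;)]
public async Task&amp;lt;IActionResult&amp;gt; OauthCallback(string code, string state)
{
    // Hand the code from the login-flow redirect to the helper above;
    // it exchanges the code for an access token and returns a ready-to-use client
    var client = await HandleOauthAuthentication(code, state);

    // Store the client (or its token) for the current user as needed
    return Ok();
}&lt;/code&gt;&lt;/pre&gt;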
&lt;p id=&quot;69ba&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;The instance of the Teamwork API client is everything you need to start working with the API and actual data.&lt;/p&gt;
&lt;figure class=&quot;mn mo mp mq gr ka&quot;&gt;
&lt;div class=&quot;m l ct&quot;&gt;
&lt;div class=&quot;wg ms l&quot;&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Get all people in the installation
var people = await client.Projects.People.GetPeopleAsync();

// Get status updates by people in the installation
var latestStatusMessages = await client.Projects.People.GetPeopleStatusAsync();

// Find a person by email
var personByMail = await client.Projects.People.GetPersonByMailAsync(&quot;max@teamwork.com&quot;);

// Get all active projects (pass the optional flag to return only starred ones)
var projects = await client.Projects.Projects.GetAllProjectsAsync(onlyStarredProjects);

// Get a specific project and its details by id
var project = client.Projects.Projects.GetProject(theId);
// You can choose to include tasks, milestones, people etc. here

// Get all boards of a project
var boards = await client.Projects.Boards.GetProjectBoardssAsync(theId);

// Return time tracking totals for a given project and user
var totals = client.Projects.Time.GetTotals_Project(project, userId);

// Search for anything in Projects
// first param is the search term
// second is the type we are searching for
// available types are task, project, comment, message, notebook, person
var results = client.Projects.Projects.Search(&quot;Whatever we want to search for&quot;, &quot;task&quot;);&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;mt ej gh gf gg mu mv bo b bp bq fc&quot;&gt;Examples of how to fetch data from Teamwork Projects using the SDK&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;a81a&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;We can also add or modify all items in Teamwork Projects, in a similarly easy way to fetching data. Let&amp;rsquo;s have a look at how we would add a new task. To add a task we need a project id and, optionally, a task list; both can be retrieved with the example calls above.&lt;/p&gt;
&lt;figure class=&quot;mn mo mp mq gr ka&quot;&gt;
&lt;div class=&quot;m l ct&quot;&gt;
&lt;div class=&quot;wh ms l&quot;&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Create a new TodoItem instance
var newTask = new TodoItem() {
    Description = &quot;This is a new task we want to add&quot;,
    Content = &quot;The Title for my new task&quot;
};

// And add it as a task to a project
// the task list id can be left blank; the task will then be added to an &quot;Inbox&quot; task list
// we can also add it as a subtask and assign the parent task
var result = await client.Projects.Projects.AddTodoItem(
    pTodoItem: newTask,
    pProjectId: TheProjectId,
    pTaskListId: TheTaskListId,
    pIsSubTask: IsSubTask,
    pParentTask: ParentTask);

// If we want to, we can complete our newly created task right away;
// the result contains the task id of the newly created task
var ok = await client.Projects.Tasks.CompleteTask(result.Id);&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;figcaption class=&quot;mt ej gh gf gg mu mv bo b bp bq fc&quot;&gt;Add a new task&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;p id=&quot;577e&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;These are pretty much the basics of what you can do with the SDK. There&amp;rsquo;s a lot more, though, and the SDK is constantly updated, so it&amp;rsquo;s always worth upgrading when a new version comes out.&lt;/p&gt;
&lt;p id=&quot;38c3&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;A few more examples:&lt;/p&gt;
&lt;pre class=&quot;language-csharp&quot;&gt;&lt;code&gt;// Let's just like a comment
var liked = await client.Projects.Reactions.LikeItem(&quot;comment&quot;, &quot;commentId&quot;);

// We can even send messages on Teamwork Chat!
var sent = await client.Projects.Chat.SendMessage(&quot;the message i want to send&quot;, &quot;the RoomId&quot;);

// Update your status
var updated = await client.Projects.Me.AddNewStatusMessage(&quot;Gone Fishin&quot;);

// Or a more complex example: create a company, add a person and
// finally create a project for the new company

// First add the company
var company = new Company() {
    Name = &quot;MyNewCompany&quot;
};
var companyId = await client.Projects.Companies.AddCompany(company);

// Now add a person and assign them to the newly created company
var newPerson = new Person() {
    EmailAddress = &quot;max@teamwork.com&quot;,
    FirstName = &quot;max&quot;,
    LastName = &quot;miller&quot;,
    CompanyId = companyId
};
var personOk = await client.Projects.People.AddPerson(newPerson);

// Finally add a project for the newly added company
var projectToCreate = new Project() {
    Name = &quot;My New project&quot;,
    CompanyId = companyId
};
var projectOk = await client.Projects.Projects.AddProject(projectToCreate);&lt;/code&gt;&lt;/pre&gt;
&lt;p id=&quot;f68b&quot; class=&quot;pw-post-body-paragraph kh ki iz kj b kk kl km kn ko kp kq kr ks kt ku kv kw kx ky kz la lb lc ld le is gz&quot; data-selectable-paragraph=&quot;&quot;&gt;The SDK is available on GitHub:&lt;span&gt;&amp;nbsp;&lt;/span&gt;&lt;a class=&quot;au lf&quot; href=&quot;https://github.com/Teamwork/dotnet&quot; rel=&quot;noopener ugc nofollow&quot; target=&quot;_blank&quot;&gt;https://github.com/Teamwork/dotnet&lt;/a&gt;.&lt;br /&gt;If you want to add something, request a change, or just report an issue, feel free to open it on GitHub. The developers answer fast and love to help.&lt;/p&gt;
</content>
    <author><name>Tim Cadenbach</name></author>
    <category term=".NET" />
    <category term="Guide" />
    <category term="Teamwork" />
  </entry>
</feed>
