Back in 2018, I worked as a technical writer at a crypto wallet startup, and I was constantly worried about being replaced by AI. It was still the early days of machine learning, but it was already creeping into our jobs - slowly, steadily, and unmistakably. We all knew that technical writers might soon go the way of the dinosaurs.
Fast forward seven years: we still have tech writers in some companies, but far fewer. Developers are now center stage when it comes to producing docs, READMEs, and tutorials, and a reasonable way to simplify and speed up doc production is, of course, to ask LLMs for help.
There’s nothing wrong with using LLMs for technical writing, but as with everything, it can be done with one prompt and result in unreadable AI slop, or it can be done right.
That’s what we’ll focus on in this article: how to use AI effectively for different technical writing purposes - whether you’re a developer creating product or project documentation, working with MCP servers, optimizing docs for LLMs, or preparing a hackathon submission.
This isn’t going to be another piece of prompt-engineering advice - by the end of 2025, we’ve all got black belts in prompt engineering. So here, we’re going one step further.
Where we need tech writing
Roll up your sleeves and turn to this article when you’re working on one of these:
- Documentation: Onboarding guides, API references, release notes, FAQs, etc. - any clear docs that are supposed to cut the time to first “Hello World” and reduce dev support load.
- GitHub repos with READMEs: A README is your project’s landing page for devs - the first impression and project orientation.
- Hackathon submissions: You have limited words to sell an idea - clarity and a concise pitch win prizes.
- Grant applications: When applying for grants, you often need to explain not just what you built but why it matters for the ecosystem. It’s not an obvious example of technical writing, but a good grant application follows the same core tech writing principles.
- Tutorials: Step-by-step explanations that assume no previous knowledge but simultaneously don’t overexplain.
- AI Integrations (MCP) for docs: Knowing how to feed docs into AI tools allows developers to query your project right from their code editor.
- Optimizing docs for LLMs: If you want LLMs to find your docs, learn to optimize for them, not just for SEO.
Core principles of technical writing
Connecting a great style guide to your AI agent is, of course, a good start. But good technical writing isn’t about style - it’s about results. The reader should leave your document having achieved their goal, i.e. the reason they opened your docs in the first place.
Following the core tech writing principles - by feeding them to your Custom GPT and by double-checking the AI-generated text yourself - will greatly improve the chances of your docs being effective and worth reading.
1. Audience-first
Always think about the reader and their goal. Why are they reading this page? How can you help them achieve what they’re here for in as few words and as clearly as possible?
You can start by defining who you’re writing for:
- Beginners need hand-holding and context.
- Advanced developers need precision and examples.
- Investors/hackathon judges need clarity on value and outcomes.
When writing for developers, there’s one rule of thumb: can a developer understand your product faster through the docs than by experimenting with the code? Answering this question will help you cut the unnecessary details and concentrate on delivering the best dev experience.
Then you need to take into account why the reader opened this particular page. What should they achieve? Here we’re looking at what each tech writing genre is supposed to deliver. For example:
- Guides and manuals answer the questions “How do I do X?” and “What is X, and why?”. They should be detailed and focused on examples and use cases.
- Troubleshooting docs and FAQs should answer the question “How do I fix X?”. They should focus on one specific problem.
- Release notes should answer the question “What changed in version X?”. They should be brief and focused only on this particular version.
A good practice to understand what the reader needs is to describe a user story, just like a product manager would do: “As a [user], I want to [goal], so that [benefit]”. For example:
“As an Ethereum developer, I want to deploy my smart contracts on Neon, so that I can connect to Solana programs from my Solidity contracts”.
Imagining this kind of reader request helps you define the documentation type and structure, and write the content as concisely as possible so the reader can achieve their goal.
When you turn a user story into instructions, start writing from the end - from the reader’s benefit or end goal. This helps the reader understand whether they’re in the right place. It’s a good practice both for opening a guide and for individual sentences, for example: “To add a custom token to MetaMask, click Import tokens and paste a token contract address.”
2. Discoverability
The discoverability principle is usually broken down into two parts:
- Searchability (external discoverability)
- Scannability (internal discoverability)
We’ll talk about how to improve the searchability of your docs in the “Optimizing your docs for LLMs” section below.
As for scannability, this is the principle LLMs follow best when producing written content. They use text structure as a design tool: headings, bullet points, code blocks, and tables.
What you should watch out for is AI’s tendency to repeat itself and include similar information across different sections.
For example, if you’re writing a project description for a hackathon, you might want to highlight the impact your project has on the ecosystem. However, if you simply prompt AI to describe your hackathon project, it might reiterate the impact across several different sections, such as Use Cases, Hackathon Impact, and Ecosystem Impact, which it may even invent on its own.
A good rule of thumb is to define the document structure yourself, then ask your Custom GPT to strictly follow it, or generate each section separately to maintain focus and avoid repetition.
3. Minimalism
Minimalism isn’t the same as a low character count. It’s about writing only what’s essential for the reader to achieve their goal, or saying as much as possible in as few words as possible.
Cut the fluff! What is fluff? It’s text with low information value, such as repetitions, adjectives (especially sentiment-heavy ones), buzzwords, filler words, or irrelevant details.
However, know where to stop when cutting: at some point, your sentences become unclear and lack the minimal context needed for understanding.
There’s a concept in linguistics, called valency, that describes how a verb connects only to the words it needs - its required arguments - to form a complete and meaningful sentence. For example, “sleep” needs just a subject (“He sleeps”), while “give” needs three elements (“He gave her a book”).
The same principle applies to writing: every word in your documentation should have a clear syntactic and semantic function that “satisfies its valency.” If a word doesn’t connect meaningfully to the rest of the sentence or help the reader understand or act, it’s a candidate for removal.
Rule of thumb: If a sentence doesn’t help the reader do something, cut it. If it becomes unclear or ambiguous, add the necessary context (satisfy its valency).
4. Accessibility
Keep in mind that a person reading your docs can easily be a non-native speaker who’s probably working under a deadline and feeling a bit stressed.
But also remember that sometimes it’s not a person at all, but an AI agent tasked with building something based on the information in your docs.
Neither of them is browsing your documentation to appreciate your writing style or wordsmithing skills. Your docs should also be easy to localize and machine-translate.
So, keep it simple:
- Use active verbs and short, simple sentences.
- Stick to positive sentences instead of negative ones (“to reconnect, reset your IP address” instead of “reconnecting without resetting an IP address won’t be possible”).
- Avoid adjectives, especially emotional ones (“revolutionary,” “next-gen,” etc.).
- Pick one style guide (Google’s developer style guide is a great start) and teach your AI agent to stick to it. The style guide will govern the tone of voice, specifics of what is and isn’t fluff, capitalization, abbreviations, punctuation, highlighting, and more.
Building a Custom GPT
Custom GPTs can serve as your personal tech writing assistant when configured properly. The goal isn’t just to automate writing, but to teach your model how to think like a tech writer.
Start by embedding the technical writing principles described earlier - audience-first, discoverability, minimalism, and accessibility - directly into your GPT’s system instructions. You can even prompt the model to create structured schemas for each documentation type (e.g., README, API reference, tutorial, grant proposal). These schemas act as blueprints the GPT can follow consistently.
Don’t just prompt - pre-train by context.
Before giving writing tasks, feed your GPT examples of what “good” looks like. Upload a mix of excellent READMEs, your team’s existing docs, or award-winning hackathon submissions. The model will absorb tone, structure, and formatting conventions, improving output consistency.
Setup checklist:
- Define structure rules for different documentation types (e.g., all hackathon submissions follow: Title → 1-liner → Value Prop → Demo Flow). A good idea is to give your GPT a few examples for each format and ask it to “abstract” a reusable structure. This helps the model understand pattern logic, not just formatting.
- Don’t just import a style guide as a link - convert it into bullet-based instructions with “DO” and “DON’T” examples; LLMs respond best to concrete patterns. Use it to define what counts as fluff and to keep your tone of voice and structure consistent (a minimal sketch of such instructions follows this checklist).
- Upload existing project descriptions, READMEs, and repos with code into the GPT’s Knowledge. Include both good and bad examples, labelling them clearly (good, bad). Models learn boundaries faster when they see contrasts.
- Enable data analysis for format conversion (JSON → Markdown, table generation, sanity checks). You can also ask the model to “cross-check” examples, e.g., to verify if API parameters in the text match the schema you’ve uploaded.
- Integrate tools and API schemas (like Context7, which fetches up-to-date code samples, or a code interpreter). Include API schemas in JSON/YAML format and ask the GPT to reference them when writing examples - this prevents hallucinations and keeps code accurate.
- Work on content section-by-section: introduction, architecture, usage, etc. This modular approach makes it easier to maintain quality and consistency across all documentation.
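To make the checklist concrete, here is a minimal sketch of what the instruction block of such a Custom GPT could look like. The wording, the DO/DON’T pair, and the hackathon template are illustrative assumptions drawn from the principles above, not a fixed format:

```text
You are a technical writing assistant for [project name].

Apply these principles to every draft:
- Audience-first: state who the page is for and what they should achieve after reading it.
- Minimalism: cut repetitions, sentiment-heavy adjectives, buzzwords, and filler words.
- Accessibility: use active verbs and short, positive sentences; assume non-native readers.
- Follow the Google developer documentation style guide for capitalization, punctuation, and UI terms.

DO: "To add a custom token to MetaMask, click Import tokens and paste the token contract address."
DON'T: "Our revolutionary next-gen wallet makes adding tokens incredibly easy."

For hackathon submissions, follow this structure strictly, one section each:
Title → 1-liner → Value Prop → Demo Flow

If information is missing, ask for it instead of inventing details.
```

Adjust or extend the DO/DON’T pairs per documentation type - contrastive examples like these tend to shape the output more than abstract rules.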
Optimizing your docs for LLMs
We used to optimize documentation for SEO, adding H1/H2 tags, alt text, and keywords so people could find it easily in a search engine. Now, we need to optimize for LLMs so that AI agents can find, read, and understand our docs just as easily. Your documentation can (and should) be discoverable and readable by AI.
Here’s how to make that happen:
- Add `llms.txt` at your domain root - think `robots.txt`, but for AI crawlers. It tells LLMs which parts of your site they’re allowed to index. Example: https://docs.neonevm.org/llms.txt
- Tell AI providers you exist. Submit your docs, sitemap, and `llms.txt` to OpenAI, Anthropic, and Perplexity. This helps them crawl your site and surface your docs as trusted sources in AI search results.
- If using Docusaurus, automate the process. Plugins like `docusaurus-plugin-llms-txt` (or a simple build script) can automatically generate and deploy your `llms.txt` file every time you update your docs.
This ensures your project’s knowledge is indexed and queryable by LLMs and dev tools.
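For reference, here’s roughly what a minimal llms.txt can look like, following the emerging llms.txt convention (an H1 title, a short blockquote summary, then sections of links). The project name and URLs below are placeholders, not the actual Neon file:

```markdown
# Example Project

> Developer platform for deploying Solidity contracts. Start with the quickstart, then the API reference.

## Docs

- [Quickstart](https://docs.example.org/quickstart.md): deploy your first contract in 10 minutes
- [API reference](https://docs.example.org/api.md): endpoints, parameters, and error codes

## Optional

- [Release notes](https://docs.example.org/releases.md)
```

And if you’d rather generate the file than maintain it by hand, a small build step is enough. Below is a minimal sketch, assuming a Docusaurus-style docs/ folder of Markdown files and a static/ folder served from the site root; run it before each deploy (e.g., with tsx):

```typescript
// scripts/generate-llms-txt.ts
// Sketch: build an llms.txt index from the Markdown files in ./docs.
import { readdirSync, readFileSync, writeFileSync, statSync } from "node:fs";
import { join } from "node:path";

const SITE_URL = "https://docs.example.org"; // placeholder: your docs domain
const DOCS_DIR = "docs";

// Recursively collect every Markdown file under the docs folder.
function collectMarkdown(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) return collectMarkdown(path);
    return /\.mdx?$/.test(name) ? [path] : [];
  });
}

// One "- [Title](URL)" line per page, using the first H1 as the title.
const links = collectMarkdown(DOCS_DIR).map((path) => {
  const firstHeading = readFileSync(path, "utf8")
    .split("\n")
    .find((line) => line.startsWith("# "));
  const title = firstHeading ? firstHeading.slice(2).trim() : path;
  const slug = path.replace(/^docs[\\/]/, "").replace(/\.mdx?$/, "");
  return `- [${title}](${SITE_URL}/${slug})`;
});

const llmsTxt = [
  "# Example Project",
  "",
  "> Documentation index for LLM crawlers.",
  "",
  "## Docs",
  ...links,
  "",
].join("\n");

// Docusaurus copies everything in static/ to the site root at build time.
writeFileSync("static/llms.txt", llmsTxt);
console.log(`Wrote static/llms.txt with ${links.length} links`);
```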
Feeding docs into MCP servers
After making your documentation discoverable to LLMs, the next step is to make it usable inside AI tools - where developers actually build. That’s where MCP servers come in.
MCP (Model Context Protocol) servers, such as Context7, allow AI assistants to access and serve real-time, structured documentation directly from your repository. This turns your project’s docs into a live, queryable knowledge source for developers using AI-powered environments.
Once integrated, developers can query your project’s API, contracts, or codebase directly from within their editors or chat-based assistants, getting accurate, up-to-date answers pulled straight from your official docs.
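For illustration, here’s roughly what plugging an MCP server into an editor looks like from the developer’s side. Most MCP-enabled clients accept a JSON config along these lines; the entry below assumes Context7’s published @upstash/context7-mcp package, so verify the exact command and fields against Context7’s own README:

```json
{
  "mcpServers": {
    "context7": {
      "command": "npx",
      "args": ["-y", "@upstash/context7-mcp"]
    }
  }
}
```

After restarting the client, a prompt that references the server (Context7 suggests adding “use context7”) pulls current docs and code samples into the assistant’s context instead of relying on stale training data.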
This is especially important for vibe coders - those who may not fully understand all the technical instructions and rely heavily on what an AI agent builds for them. By serving the correct codebase and documentation through MCP, you not only reduce the tech support load but also empower builders to self-serve and troubleshoot independently.
AI doesn’t replace technical writers - it amplifies everyone’s ability to communicate clearly, if used wisely. The skill is in knowing what to ask, what to keep, and what to cut.
Remember that AI models don’t think - they predict the probability of each next word. They can structure and rephrase ideas brilliantly, but only humans can ensure truth, clarity, and usefulness.
When using AI for writing:
- Focus on reader outcomes, not word count.
- Check for hallucinations and teach your custom GPT well.
- Cut the fluff - every word should fulfil its function and help the reader do something.
Used right, AI becomes less of a writing shortcut and more of a thinking partner - one that helps you document better, teach faster, and give builders the clarity they need to build confidently.
Resources
Here are some awesome tech writing materials to explore next:
- Context7 - MCP doc integration
- dspy.ai tutorial: llms.txt generation
- FusionAuth: Using LLMs for Docs
- Prompting Guide: RAG for Docs
- GitHub Education: AI for Tech Writing
- Brno University of Technology: Fundamentals of Tech Writing Course
Workshop and practice
Watch our workshop on tech writing with AI and submit a practice exercise to get our free review and advice on how to improve your hackathon submission README.
If you have any questions or want our help with tech writing, feel free to ask us anything in the developers chat on Discord.