As a #software developer, if you have to add or change a parameter on an API method, and just above/below that method there is a [pydoc/javadoc/doxygen/etc.] comment with the list of parameters, then adding a line of documentation for the added/modified parameter is a task that usually takes between 10 seconds and 10 minutes.
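As a minimal sketch (the function and parameter names are made up for illustration), documenting a newly added parameter in a Sphinx-style Python docstring is literally one extra line:

```python
def create_user(name: str, email: str, notify: bool = False) -> dict:
    """Create a new user record.

    :param name: Full name of the user.
    :param email: Contact e-mail address.
    :param notify: Newly added flag; send a welcome e-mail when True.
    :return: The created user record as a dictionary.
    """
    return {"name": name, "email": email, "notified": notify}
```

Tools like Sphinx, pydoc, javadoc or doxygen can then pick up that line without any extra work from the developer.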
You can even add a pre-commit hook, or a test, that verifies that every parameter of your methods is documented and that the documentation matches the type. So you don't even need to define a new process: you just can't commit undocumented changes, by design.
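Such a check can be a few lines of Python. This is a sketch, not a full solution (it only parses Sphinx-style `:param:` lines and ignores types), but it shows the idea: compare the function signature against the docstring and fail if anything is undocumented.

```python
import inspect


def documented_params(func):
    """Collect the parameter names mentioned as ':param name:' in the docstring."""
    doc = inspect.getdoc(func) or ""
    params = set()
    for line in doc.splitlines():
        stripped = line.strip()
        if stripped.startswith(":param "):
            # ':param width: ...' -> 'param width' -> 'width'
            params.add(stripped.split(":")[1].removeprefix("param").strip())
    return params


def assert_fully_documented(func):
    """Fail (e.g. in a pre-commit hook or test suite) on undocumented parameters."""
    signature_params = set(inspect.signature(func).parameters)
    missing = signature_params - documented_params(func)
    assert not missing, f"{func.__name__} has undocumented parameters: {missing}"


def resize(width, height):
    """Resize an image.

    :param width: Target width in pixels.
    :param height: Target height in pixels.
    """


assert_fully_documented(resize)  # passes: every parameter is documented
```

Wire `assert_fully_documented` over all public functions of your modules into a pre-commit hook or a CI test, and undocumented changes simply can't land.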
If you have a #Github / #Gitlab / etc. pipeline that also regenerates the docs (e.g. via Sphinx, dynamic swagger.json generation, or even a dumb Markdown exporter), then the documentation task is, by definition, completed when the coding task is completed.
The developer pushes the code, the parameters of the API call are changed, and the new documentation is automatically generated. And you can even export it in multiple formats or to multiple sources (and MAKE SURE that all of your sources support import from open formats, or at least via API).
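Even the "dumb Markdown exporter" mentioned above fits in a dozen lines. This is a toy sketch (module and function names are hypothetical, and a real pipeline would write the output to a file or docs site):

```python
import inspect


def module_to_markdown(module):
    """Render every public function's signature and docstring as Markdown."""
    lines = [f"# {module.__name__}\n"]
    for name, func in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip private helpers
        lines.append(f"## `{name}{inspect.signature(func)}`\n")
        lines.append(inspect.getdoc(func) or "*Undocumented.*")
        lines.append("")
    return "\n".join(lines)
```

A CI job that runs this over your modules on every push keeps the exported docs in lockstep with the code, with no human in the loop.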
And everyone wins: developers who browse the code and end-users who browse the API/SDK documentation all have access to the same information, and it's in everybody's interest to keep it up-to-date.
Take instead the case where the developer changes the code, but the documentation of the change falls on the technical writer, the PM, or the TL. Those people will have to carve out time in their schedules to implement the docs change (this is usually a release blocker). If they are the only gatekeepers of the docs, then they will also be the bottlenecks. Documentation tasks get separated from development tasks, and futile process overhead is added. And if documentation tasks are shifted to PMs, TLs and seniors, then you're stealing bandwidth from the people you're paying the most.
Which of the two approaches do you prefer?
Invest in #automation pipelines that build all the documentation of your project from its source code, or from Markdown/RST/asciidoc text files contained in the same repo. No exceptions.
Everything else will just lead to worse documentation, unmaintained stuff, more overhead and more wasted money. No documentation should live only on Confluence, on external cloud services, or on a Gitlab repo visible only to engineers, unless it's automatically exported. This is the kind of fragmentation that creates information barriers among stakeholders, process overhead, and eventually worse documentation for all the involved parties.
Please stay away from documentation platforms that don't make it easy to import and export data, and prefer those that do so over open formats. If a documentation platform supports import/export via Markdown/RST/asciidoc/swagger.json, then integrating it with your pipelines is just a matter of copying files. If it's only via API, then you'll have to invest more engineering resources to implement connectors for those APIs.