Developing Documentation¶
As an open ecosystem project, we encourage community feedback and involvement. Docs can be updated by merge requests against the GitLab repository, either from a private fork/tree or directly against the main tree.
A few notes about consistency:
- Digital Rebar is the name of the parent project and can be abbreviated DR.
- Digital Rebar Provision or DR Provision or DRP can be used to reference this part of the project.
- API docs are generated from the Go files via the swagger annotations in the godoc comments. Please update them there.
- CLI docs are generated from the CLI source files via the Cobra command structures; the tools generate those docs. Please update them there.
Updating the Docs¶
Editing Docs¶
- Check out/clone the Digital Rebar Provision repo from GitLab
- Modify the doc(s) as appropriate
- Verify the modifications render correctly and fix any errors/warnings
- Create a branch
- Submit a merge request for your changes (a minimal workflow sketch follows this list)
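A minimal sketch of that workflow, assuming a fork-based flow; the repository URL, branch name, and file path below are placeholders rather than real values:
git clone <your-fork-of-the-drp-docs-repo> drp-docs   # placeholder URL
cd drp-docs
git checkout -b my-doc-fix                            # placeholder branch name
# edit the doc(s), then build locally to verify (see Building docs locally with docker)
git add core/src/some-doc.md                          # placeholder path
git commit -m "Describe your doc change"
git push origin my-doc-fix
# finally, open a merge request in GitLab against the RackN doc repo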
Build and Review the Changes¶
Use the Building docs locally with docker instructions below to build and view your edits.
Merge Request¶
Once your changes are ready, commit them to your local git repo on a branch. You should then push that branch to GitLab and open a merge request against the RackN doc repo. RackN employees may create branches and merge requests directly in the docs repo.
The RackN team will review the merge request and provide feedback; once the feedback is addressed, they will merge the request into the doc tree and publish an update.
Building docs locally with docker¶
The docs can be built partially or fully. A full build will pull in the content packs and other repo pieces.
The core tool is mkdocs. The build scripts and Dockerfile contain the process to get mkdocs and the required plugins and configure them appropriately.
Prerequisites¶
- Docker/Podman container environment
- This repo checked out
Summary¶
- in the docs dir: make container
- in the docs dir: make setup
- for a full build, in the docs dir: ./setup_full.sh
- in the docs/core dir: make setup
- in the docs/core dir: make dirtywatch
- open a browser to http://127.0.0.1:8000/
Build local container¶
Note: This command needs to be run in the root folder, one level above core/
Add the GITHUB token, found in the mkdocs github token entry in the 1Password RackN Engineering vault, to an mkdocs.secret file. Then run:
make container
or build the image directly with docker or podman:
docker build --build-arg GH_TOKEN=$(cat ~/mkdocs.secret) -t squidfunk/mkdocs-material .
podman build --build-arg GH_TOKEN=$(cat ~/mkdocs.secret) -t squidfunk/mkdocs-material .
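A minimal sketch of creating the secret file, assuming the ~/mkdocs.secret path used by the docker/podman commands above:
# paste the token from the 1Password entry when prompted, then restrict access to the file
read -r -s GH_TOKEN && printf '%s' "$GH_TOKEN" > ~/mkdocs.secret
chmod 600 ~/mkdocs.secret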
Setup container¶
Note: This command needs to be run in the root folder (one level above core/) AND from inside core/.
This step generates all the necessary files and the link_map. Ensure that you have d2 installed; you can run tools/get-d2.sh to install it.
Once that is done, run
make setup
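A quick sketch of the d2 check, assuming get-d2.sh takes no arguments and is run from the repo root (neither is confirmed here):
# install d2 only if it is not already on the PATH
command -v d2 >/dev/null 2>&1 || ./tools/get-d2.sh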
Watch Docs¶
Note: Be sure to run this command in core/.
This does NOT use mike, so versioning will not behave correctly. It will also render slowly because any change rebuilds all the docs.
make watch
or
docker run --rm -it -p 8000:8000 -v ${PWD}:/docs rackn/mkdocs-material
podman run --rm -it -p 8000:8000 -v ${PWD}:/docs docker.io/squidfunk/mkdocs-material:latest
Consider using --dirtyreload to render only files that have changed. There may be rendering issues; notably, navigation will likely break on files that have been changed.
make dirtywatch
or
docker run --rm -it -p 8000:8000 -v ${PWD}:/docs rackn/mkdocs-material serve --dirtyreload -a 0.0.0.0:8000
podman run --rm -it -p 8000:8000 -v ${PWD}:/docs docker.io/squidfunk/mkdocs-material:latest serve --dirtyreload -a 0.0.0.0:8000
Build Docs¶
Note: Be sure to run this command in core/.
make build
or
docker run --rm -it -v ${PWD}:/docs --entrypoint=mike rackn/mkdocs-material build
podman run --rm -it -v ${PWD}:/docs --entrypoint=mike docker.io/squidfunk/mkdocs-material:latest build
This puts the built docs into the public directory as the dev version.
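If you want to review the static output without the watch targets, one option (an assumption on my part, not a documented step; requires Python 3.7+) is to serve the public directory locally:
# serve the built site from the public directory on port 8000
python3 -m http.server 8000 --directory public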
Troubleshooting¶
If you get PermissionError: [Errno 13] Permission denied: '/docs/mkdocs.yml' and you are running podman with SELinux, allow container content sharing.
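The specific command is not included above; a common approach (an assumption, not confirmed by this doc) is to relabel the volume for container use, either via the :Z mount suffix or by relabeling the checkout:
# option 1: add the :Z SELinux label to the volume mount (shown here on the watch command)
podman run --rm -it -p 8000:8000 -v ${PWD}:/docs:Z docker.io/squidfunk/mkdocs-material:latest
# option 2: relabel the checkout so containers may read it
sudo chcon -Rt container_file_t ${PWD}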
Developing Documentation Content¶
The documentation is split into 2 sites.
- core - this becomes the docs.rackn.io site
- refs - this becomes the refs.rackn.io site
The two sites are currently required because the search engine and its index are too large and bulky to hold everything in one site. References work across the sites.
Within the core site, there are five sections with two helpers.
- Home - The landing page; it points at the rest with some information about the docs and the layout.
- Getting Started - Initial installation, upgrading, and scaling. This includes some different types of environments for DRP.
- Architecture Guide - Information about the architecture and feature descriptions.
- Developer Guide - Information about how to use APIs and extend RackN content to extend the basic features.
- Operator Guide - Information about how to install DRP, configure DRP, and integrate with external systems. Additionally, tutorials and how-to documents for configuring features within the product.
- Resources - Reference lookup information, e.g. object model definitions, release information, ...
- Tags - A set of indexes by tag and scope.
Additionally, the documentation attempts to describe the system from five views:
- Deploying / Configuring / Operating DRP - This is often covered in Deployment sections.
- Discovery - This contains information about using DRP to discover hardware, inventory, and classify. This is often covered in Discovery sections.
- Provisioning Hardware, OS, and Application - This contains information about operating and configuring hardware, deploying an operating system, and configuring and installing an application. Application also refers to platforms like OpenShift.
Additional topics such as clustering, batching, and auditing are covered across the major sections, either within Provisioning or in their own sections.
These docs are in markdown with some extensions.
Command Execution¶
The output of commands can be placed into documents by wrapping your command in 5 less-than and greater-than symbols.
The command will be executed and the output injected at that spot at build time. The tools and commands need to be on the path and set up by the setup scripts in .gitlab-ci.yml and the Dockerfile.
Fields¶
Getting field info from the current drpcli is an example of using Command Execution:
## Fields
| Field | Definition |
| -------------|----------- |
< < < < < drpcli subnets fieldinfo | jq '. | to_entries| .[] | .key+"|"+(.value | gsub("\n"; "<br/>"))' -r > > > > >
Developing Knowledge Base Articles¶
Please see the Contributing to KB Articles document for information on developing knowledge base articles.
Hints and Tips for Content Packs and Plugin Providers¶
Here are some tips for building and writing documentation for Content Packs and Plugin Providers.
These documentation files are generated by the build process and pulled into the docs when the docs are built. These plugin files are stored in two sets of locations.
Content Pack Documentation¶
For a content pack, you will need to do the following steps to get the documentation files from the content pack. For this example, we will assume that your content pack is in the directory example. Only the last step differs from your probable normal test procedure. This also assumes that drpcli is in your path.
cd example
drpcli contents bundle ../example.yaml
mkdir -p ../rackn-base-docs/local
drpcli contents document-md ../example.yaml ../rackn-base-docs/local || :
At this point, you have all the content docs in the rackn-base-docs/local directory. You can copy the files into place in your doc tree.
# Copy the base content file into place
cp rackn-base-docs/local/core/docs/developers/contents/* $MY_DOC_TREE/core/src/operators/deployment
# Copy the refs files into place
cp -r rackn-base-docs/local/refs/docs/* $MY_DOC_TREE/refs/src
You can then build either the core or refs trees.
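For example, to rebuild the core tree, reuse the Build Docs steps above ($MY_DOC_TREE is the same placeholder used in the copy commands):
# rebuild the core site from the core/ directory (see Build Docs)
cd $MY_DOC_TREE/core
make build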
Plugin Provider Documentation¶
For a plugin provider, you will need to use the tools/build-one.sh command. Once you have completed editing the content section of your Plugin Provider, you will need to build it. Using example again, you would do the following:
# Assumes that you have built example in the cmds/example directory
mkdir -p ../rackn-base-docs/local
drpcli contents document-md cmds/example/content.yaml rackn-base-docs/local || :
At this point, you have all the content docs in the rackn-base-docs/local directory. You can copy the files into place in your doc tree.
# Copy the base content file into place
cp rackn-base-docs/local/core/docs/developers/contents/* $MY_DOC_TREE/core/src/operators/deployment
# Copy the refs files into place
cp -r rackn-base-docs/local/refs/docs/* $MY_DOC_TREE/refs/src
You can then build either the core or refs trees.
Header Section Levels¶
The file ._Documentation.meta, inside a content pack or the content portion of a plugin provider, should be in Markdown format. The build tools will automatically bundle the content pieces into a build product file. This file will be uploaded to an Amazon S3 bucket when the build completes.
The documentation tools will add yaml headers for each element in the content pack.
The base content pack/plugin looks like this:
---
title: Eikon Image Deploy
tags:
- reference
- developer
- content
---
# Eikon Image Deploy {#rs_content_eikon}
Within the ._Documentation.meta file, section separations must follow this hierarchy because the tools add pieces to the top so the page consolidates and shows in the table of contents correctly.
- # - Reserved for the Title of the content pack or plugin provider
- ## - Next level down; all new sections in ._Documentation.meta should start at this level
- ### - Next level down, within the higher sections
- #### - Next level down, within the higher sections
- ##### - Next level down, within the higher sections
The goal of the ._Documentation.meta insert is that it can add a descriptive set of information at the highest level and then start creating sub-sections as needed. The build process will append second-level (-------------) sections for all the included object types within the content.
Here is an example of a ._Documentation.meta file in the example content package:
This is the main descriptive section.
## SubSection1
### SubSection1Sub1
### SubSection1Sub2
## SubSection2
### SubSection2Sub1
Each object will also have its own markdown file. It will get a header like the following:
---
title: packet-console
tags:
- reference
- developer
- profile
- packet-ipmi
---
# packet-console {#rs_profiles_packet-console}
Profile contains the required kernel console settings for active serial consoles in Packet
The same header schema is used as for the main documentation.