Flexibility and modularity for the win! (and also an extension ecosystem)

I came here from the “Allow Node classes to be set #41” GitHub issue and from the biggest topics about ids and headers.

I use commonmark a lot, I wrote commonmark-helpers to make AST handling easier, and I have thought a lot about markdown extensibility. So here I will tell you what I ended up with.
Quotes are picked up from various sources.

> I actually think the opposite. If you would be adding automatically generated ids to the standard specification, then there absolutely must be a clear specification of how that’s going to work.

It can be implemented in various ways, so it probably belongs in extensions-land.

> I think that this feature, just like tables in markdown, is useful but should be considered an extension of standard markdown.


If a parser provides an easy way to customize the renderer, adding ids as needed is not a big problem.

Right now parsers are not able to do this. But the possibility to add custom properties to AST nodes would fix the problem, because then any renderer could handle them in any way it wants. For example, an HTML renderer could take the custom node properties id and className and transform them into id and class attributes on HTML DOM nodes.
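As a toy sketch of that idea (plain objects standing in for real commonmark AST nodes; none of this is the actual commonmark API), a renderer could pick up those custom properties like this:

```js
// Hypothetical HTML renderer fragment: turn custom node properties
// `id` and `className` into `id` and `class` attributes.
const renderAttrs = (node) => {
  let attrs = '';
  if (node.id) attrs += ` id="${node.id}"`;
  if (node.className) attrs += ` class="${node.className}"`;
  return attrs;
};

const renderNode = (node) => {
  // Map toy node types to HTML tags.
  const tag = { Header: `h${node.level || 1}`, Paragraph: 'p' }[node.type];
  return `<${tag}${renderAttrs(node)}>${node.literal}</${tag}>`;
};

// A header node annotated by some earlier AST-processing step:
const header = { type: 'Header', level: 1, literal: 'AVE commonmark', id: 'ave-commonmark' };
renderNode(header); // → '<h1 id="ave-commonmark">AVE commonmark</h1>'
```

The point is that the renderer does not need to know *who* set id or className; any processing step can attach them and any renderer can consume them.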

> The spec just tells you what to parse as a header, what its contents are, and what level it is. It’s up to the writer to determine exactly how to render it in a given format.

If the spec can be a bit more flexible, it will help developers add useful “metadata” to nodes while processing the AST. I wrote a small package, commonmark-helpers 0.4, to handle the commonmark AST more easily; it now supports processing nodes. So right now we can do simple stuff like uppercasing literals. It is a good base for a lot of plugins, but it’s not enough for extra stuff, e.g. adding ids to headings. And I’m pretty sure that processing the AST is the reference implementation’s responsibility, not the renderer’s. So basically I want to do something like this:

```js
import { html, text, isHeader, matchProcess } from 'commonmark-helpers';
import unidecode from 'unidecode';

const addIDs = (node) => {
  if (isHeader(node)) {
    node.id = unidecode(text(node).replace(/\s/gim, '-').toLowerCase());
  }
};

html(matchProcess(`# AVE commonmark\n\npls add this feature`, addIDs));
// <h1 id="ave-commonmark">AVE commonmark</h1>\n
// <p>pls add this feature</p>\n
```

My proposal is not only about ids for headers. Another case for extensibility is processing paragraphs like these:

T> This is some tip.

W> This is some warning.

With a more flexible spec it would be possible to check whether the first characters are T> or W>, change the current node’s type from ‘Paragraph’ to ‘BlockQuote’, and add a class such as tip for T> and warning for W>.
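A minimal sketch of such a transform, again over plain objects rather than real commonmark nodes (the function and marker names are my own invention):

```js
// Map the leading marker to the class it should produce.
const MARKERS = { 'T>': 'tip', 'W>': 'warning' };

// Hypothetical node transform: retype a marked paragraph as a
// blockquote, attach the matching class, and strip the marker.
const asideize = (node) => {
  if (node.type !== 'Paragraph') return node;
  const className = MARKERS[node.literal.slice(0, 2)];
  if (!className) return node;
  return {
    ...node,
    type: 'BlockQuote',
    className,
    literal: node.literal.slice(2).trim(),
  };
};

asideize({ type: 'Paragraph', literal: 'T> This is some tip.' });
// → { type: 'BlockQuote', literal: 'This is some tip.', className: 'tip' }
```

With custom node properties allowed in the AST, a plugin like this would need no cooperation from the parser at all.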

> The spec just tells you what to parse as a header, what its contents are, and what level it is. It’s up to the writer to determine exactly how to render it in a given format.

One more point for moving AST-node handling into the spec is that the resulting AST can be used by various renderers, so they need exactly the same metadata to work correctly. For example, if we are able to operate on AST nodes, then HTML renderers and potential PDF (or whatever) renderers will all know about the nodes’ metadata, and each can handle it in its own way.

More flexible spec ⇒ more possibilities for extensions.

Tools are great for simplicity, modularity, and community. commonmark has a community on this forum, and it can have modularity for extensions in this way:

```js
import { text, isEmph, isStrong, matchProcess, matchProcessList } from 'commonmark-helpers';

const up = node => { if (node.literal) { node.literal = node.literal.toUpperCase(); } };
const procEmph   = (node, proc) => { if (isEmph(node)) { matchProcess(node, proc); } };
const procStrong = (node, proc) => { if (isStrong(node)) { matchProcess(node, proc); } };

text(matchProcessList(`_emph_ and **strong**`,
  node => procEmph(node, up),
  node => procStrong(node, up)
)); // EMPH and STRONG
```

With a more flexible spec, commonmark will have not only a community and an AST parser but also a great ecosystem, like postcss and gulp have now. For example, we would have various transforms for handling headers and other great extensions like sidenotes and other smart stuff. Just imagine being able to handle your markdown like this: html(transform(input, headers, sidenotes, capsForSomething)). It would cover most of your needs, and it would be easy to create new transform plugins.
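The transform idea can be sketched as plain function composition, assuming each plugin is just a function from AST to AST (the plugin names here are illustrative, not an existing API):

```js
// Compose plugins left to right over an AST.
const transform = (ast, ...plugins) =>
  plugins.reduce((tree, plugin) => plugin(tree), ast);

// Toy "AST" (an array of plain nodes) and two toy plugins:
const upperHeaders = (nodes) =>
  nodes.map(n => n.type === 'Header' ? { ...n, literal: n.literal.toUpperCase() } : n);
const addIds = (nodes) =>
  nodes.map(n => n.type === 'Header' ? { ...n, id: n.literal.toLowerCase().replace(/\s+/g, '-') } : n);

const input = [{ type: 'Header', literal: 'Ave commonmark' }];
transform(input, addIds, upperHeaders);
// → [{ type: 'Header', literal: 'AVE COMMONMARK', id: 'ave-commonmark' }]
```

Each plugin stays tiny and testable on its own, which is exactly what made the postcss and gulp ecosystems grow.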

Flexibility and modularity for the win! (and ecosystem and community)

> The priority now is to get the core spec finalized, and there’s still plenty to do there. After that we can think more about extensions.

There are no estimates for finalizing, no structured roadmap, and no room to contribute. It’s kinda sad.
How can we finalize something without a roadmap and tests that show we have implemented all the features we want in 1.0?

Anyway, with this approach we don’t need to wait for 1.0. We can already have extensions if the reference spec becomes more flexible, as in my proposed approach.

@jgm @codinghorror what do you think about having commonmark ecosystem?