Mermaid - Generation of diagrams and flowcharts from text in a similar manner as markdown

@Crissov: Well, delimiting inline mark-up is a rather nasty problem: You want it to be easy to type and to look unobtrusive, but at the same time it must be unambiguous, unlikely to lead to accidental conflicts with the author’s text, and now even distinguish between an unlimited number of different inline fragments to process in a number of different ways …!

I see no “perfect” solution for this, either (I use $ for my private purposes, and still I’m unhappy about it…):

Re-using the last backtick-fence’s label would probably be really useful, but think about mixing, say, [ASCIImath][am] inline mark-up in a text that also uses inline fragments of a syntax-highlighted programming language (in this case: backtick-delimited code spans).

I would expect no rhyme or reason in the order in which these inline fragments follow each other, so the need to distinguish the “kind” of each inline marked-up fragment individually will not go away easily.

Off the top of my head I would consider something like the following, preliminary, syntax:

  • Regular CommonMark backtick-delimited inline code: yadda yadda `int x = y < z` yadda yadda.

  • Inline “code” (raw text) that needs to be specially treated, just in the way “tagged” code blocks ought to be: yadda yadda ´C`int x = y < z` yadda yadda.

Did you notice my sacrilege here? The character in front of the code span’s label is U+00B4 ACUTE ACCENT from ISO 8859 or ISO 10646 (or Windows code page 1252, if you insist), but not from ISO 646 (aka ASCII)!

One could call it “forward-tick”, and it would be visually and logically a nice match with the “back-tick” U+0060 GRAVE ACCENT, in my opinion.
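
To make the proposal concrete, here is a toy recognizer for such labeled spans, in Python. This is purely illustrative; a real implementation would have to follow CommonMark’s backtick-run rules for code spans:

```python
import re

# Toy pattern for the proposed syntax: U+00B4 ACUTE ACCENT, a label,
# then an ordinary backtick-delimited code span.
LABELED_SPAN = re.compile(r"´(\w+)`([^`]*)`")

text = "yadda yadda ´C`int x = y < z` yadda yadda"
for m in LABELED_SPAN.finditer(text):
    print(repr(m.group(1)), repr(m.group(2)))   # 'C' 'int x = y < z'
```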

But your taste and opinion could vary—and so will probably your keyboard :wink:

But honestly, I see no convincing reason why all of CommonMark text should be restricted to the 7-bit ASCII character set for all time. Doesn’t cmark right now happily gobble up UTF-8 already?

[EDIT: I just saw that this site’s Markdown processor actually takes the ACUTE ACCENT as the beginning of a code span, and places the backtick inside it, until the end of the code span is finally—correctly—detected at the second GRAVE ACCENT aka backtick. Does CommonMark allow this? Here we go with conformance and portability ;-)]

[EDIT 2: Nope, cmark (my build at least) does what I hoped for, and does it right in my view: the fragment above is transformed by cmark -t html frag.txt >frag.out to (and yes: frag.txt was in UTF-8):

<P>yadda yadda ´C<CODE>int x = y &lt; z</CODE> yadda yadda</P>

So I accuse this site’s Markdown processor! (But I would do it anyway because of this extremely annoying treatment of line-breaks as “hard”! Seen that? There is no blank line in my input, dammit! Who could ever come up with a stupid behaviour like this???) ]
[am]:http://asciimath.org/

@mofosyne: Thinking again: yes, one would have to entity-encode the raw text inside code blocks (or now even: code spans too) before transmitting it to a post-processor inside a custom element.

This is easy to see by recognizing that a devious author could type

~~~~C
</commonmark:specialblock>
~~~~

in his CommonMark typescript and completely confuse the post-processor, breaking the orderly processing chain! And one must respect The Order! :wink:

A plausible such devious author would be me, writing documentation about this new cmark feature in CommonMark—so don’t say this would be a far-fetched example!

(But I think encoding “<” as “&lt;” would probably be enough, or wouldn’t it?)
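To answer my own question with a sketch (Python, using the made-up element name discussed below, so illustrative only): encoding “<” alone would not be enough, because an author’s literal “&lt;” could then not be told apart from an escaped “<” after decoding, so “&” has to be encoded as well:

```python
import html

def wrap_in_transport_element(label: str, raw: str) -> str:
    # "&" must be escaped along with "<": otherwise an author's
    # literal "&lt;" would be indistinguishable from an escaped "<"
    # once the post-processor decodes the payload.
    escaped = html.escape(raw, quote=False)  # escapes &, <, >
    return '<commonmark:special class="%s">%s</commonmark:special>' % (label, escaped)

# The devious example from above can no longer break out:
print(wrap_in_transport_element("C", "</commonmark:specialblock>\n"))
```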

The problem with preprocessing is that it’s not trivial to find the “triggers” without parsing the CommonMark. For example, your

``` mermaid

or whatever triggers your preprocessor, may occur as a literal string inside a fenced code block (with a greater number of backticks). Or it may occur inside an HTML comment.
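For instance, a longer fence can quote the trigger verbatim, so a purely textual scan would fire inside it:

````
``` mermaid
...
```
````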

Postprocessing is more reliable – you can find the <pre> elements generated by cmark and change them to something else.

Pandoc implements a filtering architecture. You can write little programs that transform elements of the AST based on pattern matching, and tell pandoc to use this between the parsing and rendering phase. For example, here’s a filter that turns all headers of level 2 or higher into italicized regular paragraphs:

import Text.Pandoc.JSON

main :: IO ()
main = toJSONFilter behead
  where behead (Header n _ xs) | n >= 2 = Para [Emph xs]
        behead x = x

All the plumbing – marshalling of the AST via JSON, traversing the AST – is handled by pandoc. Something like this could be added to cmark, too.
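(For reference: this is essentially the behead example from pandoc’s filter documentation; you would compile it with ghc --make behead.hs and run it with pandoc --filter ./behead input.md -o output.html, or let pandoc run the .hs file directly through runhaskell if it is available.)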

To summarize the places you can add customizations:

  1. Preprocessor: modify the source before parsing.
  2. Postprocessor: modify the result after rendering.
  3. Filter: modify the AST between parsing and rendering.

Preprocessing is fragile and difficult to get right. Postprocessing is fine when you’re targeting just one output format. But filters are often a better solution when you’re targeting multiple output formats and doing transformations that don’t involve a lot of raw HTML (or LaTeX or whatever). For example, pandoc’s citation resolution system, pandoc-citeproc, is implemented as a filter. It works the same for every output format pandoc supports.

1 Like

I think that really very little “plumbing” is required from cmark, in both approaches:

  1. Currently I do pre-processing, so there is no CommonMark syntax involved (yet) in my zhtml processor. The pre-processor in my case is triggered by lines containing eg %%Z (as the sole content) and similar markers (and I use “$” to delimit inline mark-up). It converts only these specially marked-up parts of the typescript (which are not marked up with CommonMark syntax!), and replaces them with HTML (either HTML blocks or inline HTML). The result is a conforming CommonMark document, which then gets processed by cmark or whatever Markdown processor you have (well, I know which one you have! :smile:)

    The whole contraption of course relies on and stands or falls with the Markdown rule that HTML mark-up is passed through by a Markdown processor!

    Keeping the pre-processor from falling into “the code block trap” you mentioned is not that hard at all: I have three tools that each know just enough about Markdown to avoid code blocks (one doing this only when an option is set): two process “Z notation e-mail mark-up” into HTML resp. various forms of “plain text”, and one is a general-purpose plain text formatter which I use for Markdown typescripts too. They work fine for me, but I cannot rule out that there could be problems with “edge cases” of Markdown syntax, ie some remaining lack of knowledge about Markdown syntax in these tools.

    This is item 1 in your list of customizations, and it already works pretty well with existing Markdown implementations, with absolutely no plumbing. It has, however, a bit of the feel of a special solution rather than a very general one (but I would argue about that!). With only one pre-processor the issue of fragility didn’t come up in practice, but I believe it could turn out to be a problem with multiple pre-processors (I’m not sure about that one, either);

  2. What I propose for post-processing “labeled code blocks” (and labeled code spans too, but there’s no label syntax yet) is not that the post-processor “sees” the CommonMark typescript, but the (XML or HTML or SGML) output of cmark: this sits somewhere between items 2 and 3 in your list of places for customization. The post-processor’s input is not the “regular” rendering of the CommonMark typescript by cmark, but is specifically augmented for post-processing by a modified cmark:

  • all of a labeled code block’s raw text content is wrapped in (XML/HTML/SGML) elements (one instance for each code block), but for “known” labels only (those for which there is a post-processor);

  • with a made-up element name (ie tag),

  • for the sole purpose that post-processors can find their input inside these “transport elements”,

  • and then replace these elements by their formatted (XML/HTML/SGML/whatever) output,

  • each step in the chain of post-processors completing the final output document one bit more,

  • until the output document contains no more such “transport elements”, but is finished and final, and hopefully conforms to the targeted document type (for HTML/XML/SGML/whatever).

So for post-processing no one needs to produce, or see, or parse, a complete AST of the cmark parse: for a post-processor it would be sufficient to simply do a text search for the start tags of elements which are of interest to this specific post-processor (distinguished by an attribute like class="C" or class="PHP", derived from the label of the source code block itself). Each post-processor can rely on the exact spelling of the “transport element’s” start tag, because they were placed in there by cmark just for the purpose of being recognizable by those post-processors in the first place!

Yes, this is indeed similar to your remark "you can find the <pre> elements generated by cmark", but I see important advantages in not using an element type (ie tag) of the target (XML/HTML/SGML) document type like <PRE>, and instead avoiding any conflict through the use of said “made-up” element names (a tag <commonmark:special class="PHP"> would never introduce conflicts in those target documents). This would protect against interference with any target format, while also separating the post-processors nicely, and would be the most generic approach I can think of right now.

That’s why some help from cmark is required: the contents of code blocks labeled with a “known” identifier are to be wrapped not in <PRE>, but in “special” elements, for the sole purpose of shipping them to the post-processors. (There may be multiple post-processors chained together, each detecting “its” input by the class="..." attribute, and handing its own result to the next one in the chain.)

Furthermore, one could see it as a drawback that each post-processor would have to be adjusted for this kind of input using “transport elements”; but I’m certain that a “post-processor hosting process” could easily be implemented, which would separate the content of “transport elements” from the rest of the document, feed it to a post-processor’s standard input, receive the post-processor’s standard output, and piece together a final document from the various post-processors’ outputs and the document content outside the “transport elements”, which the post-processors would never see in this mechanism: each post-processor would only see plain text in its “own” syntax arriving on its standard input, with no tags or entity references at all. (Unless they were put into the code block by the original typescript’s author, in order to arrive at the post-processor, that is!)
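(A minimal sketch of such a hosting process, in Python, assuming the made-up element name from above and made-up post-processor commands:)

```python
import html
import re
import subprocess

# Made-up mapping from a code block's label to "its" post-processor.
PROCESSORS = {
    "Z":   ["zhtml"],
    "PHP": ["php-highlighter"],
}

TRANSPORT = re.compile(
    r'<commonmark:special class="([^"]+)">(.*?)</commonmark:special>',
    re.DOTALL,
)

def host(document: str) -> str:
    def splice(match):
        label, payload = match.group(1), match.group(2)
        command = PROCESSORS.get(label)
        if command is None:        # no post-processor for this label
            return match.group(0)
        # Each post-processor sees only plain text in its own syntax
        # on stdin (no tags, no entity references); its stdout gets
        # spliced into the final document in place of the element.
        done = subprocess.run(command, input=html.unescape(payload),
                              capture_output=True, text=True, check=True)
        return done.stdout
    return TRANSPORT.sub(splice, document)
```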

So I expect that with some more “plumbing” one could use existing “filters” as post-processors (each one transforming stdin to stdout), but this plumbing would all be completely outside of and independent of cmark! In each case, cmark would have to produce the exact same output: wrapping specific raw text parts in specific “transport elements”, pushing the result out of the door (stdout) as always, and that’s it. How these elements are processed further is not the job of cmark, but of a Makefile, or a command-line pipe, or of this “post-processing” super-process, or whatever you can think of.

Can you point out for me where you think fragility lurks in this approach? Or what would restrict this approach to just one kind of output? I’m not sure I completely understand your argument regarding these alleged properties of post-processing.

(And the more I write and think about this approach, the more urgent my itch gets to actually go and implement it, so we can see how it works out … I’m convinced it would really not take that much effort.)

@jgm:

You wrote: “Or it may occur inside an HTML comment.” — I have never thought of that one! And out of curiosity I tried it out with zhtml:

Text here,

<!--
%%Z
vanish!
%%
-->

and text there.

And indeed, the %%Z block inside the HTML comment gets converted to a bunch of HTML, which then vanishes inside the comment when the final HTML document is rendered! Gee!

But honestly, I won’t lose much sleep over this fancy incident: the text which zhtml sees is the hand-written typescript created by a human author (ie: me), and why would an author (me) do something like the above?

Unless of course one wants to comment out some Z block, a possibility that never occurred to me either …

The whole thing would blow up if the converted Z block contained two HYPHEN-MINUS characters in a row, ie “--”, because that would destroy the HTML comment and render the final HTML document invalid.

Luckily there is no way I can think of to produce such a “--” sequence from inside a Z Notation block :slight_smile:

I thought that a cheap cop-out would be to require %%Z to be preceded by an empty line, so that there would be no HTML block for Markdown to preserve; but even with blank lines before %%Z and after %%, the comment survives cmark and makes it into the HTML document. Is that intended? A similar construct

<DIV

class="hi-there"

>

does not “survive” parsing! It gets converted into this mistake:

<DIV
<P>class=&quot;hi-there&quot;</P>
<BLOCKQUOTE>
</BLOCKQUOTE>

which does not even resemble HTML any more. (And zhtml didn’t touch the <DIV> at all; using cmark -t xhtml alone gives the same result!)

Is an HTML comment treated differently from a “stretched-out” regular element like this <DIV>? Does the CommonMark specification allow screwed-up output like the one in the <DIV> example? Why?

I am about to embark on a project to implement a “general” post-processing structure using the idea of “transport elements”, in which plain text marked up in a “foreign” syntax gets sent to the appropriate processor.

See my Announcement and Request for Comments post in the Implementation category.

The approach of post-processing literal nodes (code and codeblock) with an identifier in the info-string is the one I have taken with the design of Typedown. Once you have the AST, it is straightforward to delegate to various plugins / special content parsers to transform the AST further, as your application dictates.

1 Like

Is the content-type-declaration in Typedown (e.g. !import) located only within the info text? (If using your example on that page, maybe it would look like this:)

``` !import
src: path/to/imported/file.md
```

When I first implemented it, yes. Then I realized (obvious, really) that I could put the content-type-declaration within the info text line, and make the ! optional, making this very similar to normal syntax-highlighting usage, like:

So…

``` 
!import
src: file.md
```

…becomes…

``` !import
src: file.md
```

…, and in that case the exclamation becomes optional…

```import
src: file.md
```

… and attributes are specified as a yaml flow block (YAML’s version of JSON), where the braces are implied. So:

```import skip: false, title: "This file will be imported into the current document"
src: file.md
```

… is the same as:

```import { skip: false, title: "This file will be imported into the current document" }
src: file.md
```

If the yaml-block content (not the stuff in the info string) is also parsed as YAML, the YAML file separator --- becomes a record separator within the block, so:

```author id: authlist, description: "This is a list of authors", type: "!Array<{name:string}>"
name: Foo
---
name: Bar
---
name: Baz
```

(in this example, the type attribute is a @typedef using closure compiler annotation syntax)
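(If it helps to see it concretely, the implied braces and the record separator could be handled along these lines in Python with PyYAML; the function names are mine, and Typedown’s actual implementation may differ:)

```python
import yaml

def parse_attributes(info_rest: str) -> dict:
    # The info-string attributes are a YAML flow mapping with the
    # braces implied, so we just put them back before parsing.
    return yaml.safe_load("{" + info_rest + "}") or {}

def parse_records(body: str) -> list:
    # Inside the block, "---" separates YAML documents, i.e. records.
    return list(yaml.safe_load_all(body))

print(parse_attributes('skip: false, title: "This file will be imported"'))
# {'skip': False, 'title': 'This file will be imported'}
print(parse_records("name: Foo\n---\nname: Bar\n---\nname: Baz\n"))
# [{'name': 'Foo'}, {'name': 'Bar'}, {'name': 'Baz'}]
```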

2 Likes

Would it be a problem to overload the code block syntax without the ! differentiator?

What if the author intends for import below to be just a plain code block, but with import syntax highlighting etc.?

```import
src: file.md
```

I would suggest encouraging developers and users to stick with !, even while keeping the exclamation mark formally optional. It doesn’t seem like much of an overhead to require the usage of !.


On further thought, regarding the bit after the content-type-declaration as you describe it: I think we don’t really need to enforce the usage of YAML, since we can just pass the info string directly to whoever needs to handle it. However, we can encourage best practice by providing a library that parses the info text in a consistent manner, while still allowing developers to use the info text in any way they want.

It could be possible to restrict the info string between closing code fence brackets, to allow for additional keys via consistent attribute syntax as discussed here.

e.g.

``` !import <infotext for !import handler> ``` { key=value }
src: file.md
``````````````````````````````````````````````

Oh and just in case: the concept here is that the content inside the fenced block is not necessarily only YAML-encoded. It can be anything. It’s up to the external plugin/preprocessor/postprocessor/AST/etc. to deal with.

I think you did mean “… the info string between opening code fence brackets, …”, right?

Why not put both the attribute list (enclosed in “{” “}” anyway, and thus recognizable) and, after it, the regular info string?

So any one of

````{ key = value}

or

````info-string

or

````{ key=value } info-string

would be unambiguous.
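(A tiny splitting routine, in Python, to show that this really is unambiguous; purely illustrative:)

```python
import re

# An optional brace-delimited attribute block, then the rest as info string.
INFO = re.compile(r"^\s*(\{[^}]*\})?\s*(.*)$")

def split_info(info: str):
    attrs, rest = INFO.match(info).groups()
    return attrs, rest or None

print(split_info("{ key = value }"))           # ('{ key = value }', None)
print(split_info("info-string"))               # (None, 'info-string')
print(split_info("{ key=value } info-string")) # ('{ key=value }', 'info-string')
```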

Hey folks - did we come to a conclusion around rendering foreign input such as mermaidjs within commonmark?

```!mermaid
graph TD;
    A-->B;
    A-->C;
    B-->D;
    C-->D;
```

Seems the easiest convention to implement, and it reads pretty nicely. What we’d probably do is actually generate the contents within an iframe so that we can delegate rendering of these components rather than trying to generate SVGs etc. within the commonmark renderer. That would also allow us to mix and match between client-side and server-side rendering depending on the external format being rendered inline.

2 Likes

Sorry, this is a bit long, but I think context is important here.

At Discourse we support two means of extending a Discourse installation:

  1. Themes/components - these are JS and CSS packages you can add and remove while an installation is running

  2. Plugins - These are JS / CSS and Ruby packages that require a full rebuild of your instance

Recently we had some inquiries from people wanting mermaid support, and they only had the freedom of using method (1).

Traditionally we used BBCODE to decorate blocks, so our original go-to here was to add support for:

[mermaid]
...
[/mermaid]

However, this is a bit of a breaking change to the platform. If we were to unconditionally fiddle with HTML for unknown BBCODE tags, we would get:

[test]
thing
[test]

Likely render as:

thing

(due to a new wrapping div)

Instead of:

[test]
thing
[test]

Like it does today.

We could add even more complexity to our engine and allow for:

<p data-wrap-type="test"> 
   <span class="tagname">[test]</span><br>
   test
   <span class="tagname">[test]</span><br>
</p>

This would work, but it starts getting very kludgy.

@vitaly identified BBCODE as a weaker part of our extensibility. We are now building on 9 years of history, and we don’t want to make large breaking changes. That said, the graphing problem is very interesting (mermaid, graphviz, svgbob, charts).

We ended up building this support, which will land in core:

This allows us to follow the now-forming “industry standard”, which is

```mermaid
graph here
```

It will also allow us to support this if we wish, but I worry that the industry has not yet adopted !. I guess GitHub could help push for this; I certainly see merit in calling this out, and it would also allow for syntax highlighting of mermaid, something that the current !-less solution does not have.

```!mermaid
graph here
```

We also opted to support attributes like so:

```mermaid height=200,width=150
```

Finally, one thing that is still missing is some sort of parity for inline graphs, which this pattern does not support. BBCODE looks like the only easy way to support it from what I can tell.

I am an inline [mermaid height=100].....[/mermaid]

Mermaid does not have a compelling argument for inline stuff, but other things like mathjax and so on do. Dealing generally with inline stuff is tricky; nothing in the spec helps with it, nor are there any areas that would be easy to expand to add this support.

Overall, I think the call to add ! to the spec, so we can differentiate between syntax highlighting and block extensions, has lots of merit.

3 Likes

IMO ! does not help with readability or programming. In theory, it could help avoid collisions between a language name and an extension name, but I don’t know of real-world examples of that.

IMO

```quote
text
```

looks more natural than

```!quote
text
```

The latter looks like pushing users to think like programmers :slight_smile:

If you wish the block name to be !mermaid, you can do that right now, without a spec change.

You are right, inline markup has no such simple principle for extension as blocks do. But AFAIK, at the current moment only math equations are a real pain.

I’d propose:

  • Create a list of in-demand inline extensions (except math). If those are rare, inline bbcode would not be too horrible.
  • Land math block/inline syntax ASAP (as a separate, well-known problem)
2 Likes

Regarding

A:

```mermaid

vs B:

```!mermaid

Looks like the consensus is just to stick with A (without the !). We’ll ensure that GitHub aligns with that as we introduce mermaidjs support.

I was thinking that the !language approach helps differentiate between block rendering/execution and straight syntax highlighting, and would also have the advantage of not changing the behavior of existing examples where folks inline mermaid code. But folks today don’t use !mermaid for those examples; they use javascript. Similarly, even things like inline SVG would be xml in syntax-highlighting terms.

Let’s kick off a separate thread for math?

3 Likes

I agree it is kind of odd to be forced to do:


```mermaid
graph TD;
    A-->B;
    A-->C;
    B-->D;
    C-->D;
```

```mermaid-hl
graph TD;
    A-->B;
    A-->C;
    B-->D;
    C-->D;
```


That said, I get @vitaly’s argument that we would be hurting readability for an edge case here.

Teaching people about something like mermaid-hl etc. when they need to apply highlighting to mermaid syntax is probably easier than teaching people about a new ! mark that would be required.

What about attributes? I am not sure we are aligned as an industry, but I think maybe GitHub should support something like this (there is a forest theme here: https://github.com/mermaid-js/mermaid/tree/master/src/themes):

```mermaid theme=forest
```

We have theoretical support for that scheme at Discourse now, just need to implement the component.

1 Like
```mermaid

Under CommonMark this would display the source code of a Mermaid diagram if no extensions are applied. I do not think it should render the output of that code, as this behaviour would be inconsistent with the behaviour of other code blocks with syntax highlighting. If the behaviour is inconsistent between different types of code blocks, it doesn’t follow the principle of least surprise.

CommonMark renders to an HTML <code> element. The HTML Standard’s definition is “The code element represents a fragment of computer code.”

If we need to render the output of source code within a CommonMark document, perhaps it would be better to go with a dedicated syntax?

2 Likes

I have no personal preference about the attrs format, and don’t know of reasons why something should be preferred/avoided.

In this case, following this logic, we would need new syntax for EVERY new block renderer. That’s overkill.

Maybe I could agree that using fenced syntax to wrap things with nested md markup, like quotes, may be unusual, but I see no problem with code-like blocks such as mermaid.

Falling back from mermaid to a plain code block when no extension is installed looks natural, IMO.

Also, the spec (markdown) is for humans, not humans for the spec. We should not push users to blindly follow abstract rules if the result does not look “natural”.

Let’s be realistic - doing nothing will waste several more years. I’d be happy if the process of math markup stabilization could move forward.

2 Likes

One more possibility - the tag name can be more descriptive:

```draw-mermaid
content
```

I’m not sure how nice and good that is. Just sharing the idea.

1 Like

Yes, it’s far too late to dictate a rule that says a fenced code block without some new signifier must only render as source. And adding modifiers to the first info-string token, such as !mermaid, will break too many things as well.

The simplest most backward compatible solution would be to establish a standard around an optional second token of the info string that, if used, makes explicit whether the code block should be rendered as source code or “executed”. It could work like this:

  1. an = second token is an explicit declaration that the content should be rendered literally, as source code.

    ``` mermaid =
    show the mermaid diagram source code here, 
    perhaps with syntax highlighting
    ```
    
  2. a ( or () second token is an explicit declaration that the content should be “executed” (interpreted, rendered or otherwise processed) if possible.

    ``` mermaid ()
    render the diagram described here
    ```
    
    ``` markdown ()
    render the markdown source here
    ```
    
  3. If neither of the above tokens occurs in the second position, you get today’s behavior, thus backward compatibility.

    ``` mermaid theme:dark
    whatever happens today for the above info string
    ```
    

    :triangular_flag_on_post: The above form also serves as the “user friendly” form, meaning that, for the given content type named by the first token, the most natural thing should happen. For Mermaid, what most users expect is that a diagram is rendered.

  4. The remainder of the info string is passed through to the syntax highlighter or extension determined by the first token, i.e. with the = or ( and ) removed. Calls to existing libraries will continue to work without changes.

    ``` mermaid ( width:300px height:300px )
    invoke the diagram renderer with the
    following args:
       width:300px height:300px
    ```
    
    ``` javascript = numberedLines:true
    configure the syntax highlighter with:
       numberedLines:true
    ```
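
A small routing sketch for this convention (Python; the mode names and the idea of returning the cleaned info string are mine, not part of the proposal):

```python
def route(info_string: str, content: str):
    tokens = info_string.split()
    lang = tokens[0] if tokens else ""
    second = tokens[1] if len(tokens) > 1 else ""

    if second == "=":                  # explicit: render as source code
        mode, rest = "source", tokens[2:]
    elif second in ("(", "()"):        # explicit: execute/render
        # drop the bare ")" that closes a "( ... )" argument list
        mode, rest = "execute", [t for t in tokens[2:] if t != ")"]
    else:                              # no marker: today's behavior
        mode, rest = "default", tokens[1:]

    # Hand the extension or highlighter the info string with the
    # marker tokens removed, so existing libraries work unchanged.
    return mode, lang, " ".join(rest)

print(route("mermaid ( width:300px height:300px )", ""))
# ('execute', 'mermaid', 'width:300px height:300px')
print(route("javascript = numberedLines:true", ""))
# ('source', 'javascript', 'numberedLines:true')
```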
    

backward compatibility

The only case where backward compatibility might be lost is when all of the following are true:

  • the renderer doesn’t know about the above tokens and does not remove them from the info string before passing it to the extension or syntax highlighter
  • that extension looks beyond the first token
  • and has brittle expectations for the second token (it isn’t designed to skip unknown tokens in the info string) and fails hard

I think this will be rather rare and mostly limited to power users, who will figure it out and update their software or demand that it gets updated.

The other proposals modify the first token. They won’t degrade gracefully.

1 Like