Mermaid - Generation of diagrams and flowcharts from text in a similar manner as markdown

I didn’t say anything about alt (or rather title), which isn’t such a bad idea, but it should go inside quotation marks, and I’d leave out the curly-braces part, of course.

``` !mermaid "A Demo Diagram" .mermaid #mermaid something else

<mermaid title="A Demo Diagram" class="mermaid" id="mermaid">…</mermaid>

I don’t think it is necessary to handle syntax extensions like Mermaid by extending a CommonMark processor like cmark at all. I use a regular CommonMark (resp. Markdown) syntax processor in a very similar scenario without difficulties, and I think my approach can be used in a very general way.

I am currently using (a slightly extended clone of) cmark to generate HTML documents, not directly from CommonMark-formatted input, but from original “extended”-syntax input after a pre-processing step.

For example, I generate this HTML from that input.

The cmark input is not the hand-written CommonMark text, but the output of another tool of mine which acts as a pre-processor: it reads the original text input (written in the “extended” syntax) and replaces parts of the hand-written text with HTML mark-up, while leaving the rest (which is all regular CommonMark text) alone.

This output is then fed to cmark, and cmark ignores and passes through the inserted HTML (as the CommonMark and Markdown specifications require), while doing its job on the CommonMark text. The output from this step is the final HTML document.

In a Makefile, the two steps look like this (cm2html outputs a whole HTML document, including the document type declaration and the <HEAD>, but is in all other respects the CommonMark parser):

$(HTML): $(MARKDOWN)
        zhtml -aUm $(MARKDOWN) >"%TEMP%\$(TMPNAME)"
        cm2html "%TEMP%\$(TMPNAME)" >$(HTML)

This works quite nicely and allows one to “extend” the CommonMark syntax with another syntax, which in my case is very similar in spirit, namely the “e-mail mark-up” of the Z notation, as defined in ISO/IEC 13568:2000.
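To illustrate the shape of this step, here is a minimal sketch of such a pre-processor in Python – not my actual zhtml, and the z_to_html stand-in is purely hypothetical: it replaces %%Z … %% paragraphs with an HTML block and passes everything else through untouched, relying on the Markdown rule that raw HTML survives the Markdown processor:

import html
import sys

def z_to_html(lines):
    # Hypothetical stand-in for a real Z-notation converter.
    return '<div class="Z"><pre>' + html.escape("\n".join(lines)) + "</pre></div>"

def preprocess(text):
    out, z_block = [], None
    for line in text.splitlines():
        if z_block is None:
            if line.strip() == "%%Z":      # start marker of a Z paragraph
                z_block = []
            else:
                out.append(line)           # plain CommonMark: pass through untouched
        elif line.strip() == "%%":         # end marker: emit the HTML replacement
            out.append(z_to_html(z_block))
            z_block = None
        else:
            z_block.append(line)
    return "\n".join(out) + "\n"

if __name__ == "__main__":
    sys.stdout.write(preprocess(sys.stdin.read()))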

I’m sure that other “extended” syntaxes could be accommodated using the same approach, which is by the way very much like the time-honored use of pre-processors for troff, and that the same can be done for Mermaid.

(While this answer was written with cmark in mind, nothing in this process is specific to CommonMark, and it works just as well with other Markdown processors.)

Using a preprocessor is certainly an option (and it can be an effective solution for extensions that are local to a specific website). But it would quickly get unwieldy once you implement more than one preprocessor filter, due to syntax collisions.

Plus, it’s not obvious in the original source code that it is Z notation. Hence it’s safer for the interpreter to have an “island” of known non-Markdown content that self-specifies which extensions can safely be applied to it, in a portable manner.

Yes, placing “foreign” syntax in kind-of code-block boxes tagged with an indicator to signal how to process them: if I understand you correctly, that would be your approach, and I think it could work. Taking my use case, that would mean mark-up like this:

... in normal _CommonMark_ text here, but now:

~~~~Z
%%Z
X == Y %x Z
%%
~~~~

Back in _CommonMark_

where cmark would somehow be configured to pass the content of the ~~~~Z block through an external converter like my zhtml, right?

I’ll give you that this would be a relatively clean way to have one main processor, cmark, and one or more subordinate processors for “foreign” syntaxes, like zhtml.

But I see two drawbacks:

  1. It is actually more verbose to enforce “double” mark-up for the Z notation paragraphs in this way, while ^%%Z$ is (one of several) perfectly recognizable markers for such paragraphs already (and a preprocessor could certainly recognize ~~~~Z too!);

  2. It does not provide a solution for in-line “foreign” syntaxes, which the pre-processor approach naturally does – I use $ to delimit in-line Z notation.

Furthermore

  • I don’t see how this would get unwieldy when combining several pre-processors, as long as each preprocessor leaves HTML-marked-up stuff alone (as they already do);

  • whether it is obvious in my example where Z notation begins and ends is somewhat a matter of taste; the standard (which I mentioned, and which specifies this Z mark-up) at least prescribes delimiters like %%Z (and others) to delimit various kinds of paragraphs in Z notation, which I do not find hard to perceive. Remember that the exact same pre-processor approach would work just as well with ^~~~~Z / ^~~~~ delimiters too: it’s just the pre-processor that needs to recognize them.

So no, I’m not really convinced of the merits of modifying/extending cmark – and hence CommonMark syntax itself – in some way. But using some special kind of code block (and inline code spans too?) for this purpose would be a generic approach, yes.

But I do see the advantage in portability if each block of “foreign” syntax is enclosed in a tagged code block: if nothing can be done with the content, cmark could just treat it as an ordinary code block – if that’s what you’re aiming at.

Wouldn’t that be exactly the way to implement syntax highlighting in the existing notation for blocks of code tagged with a language identifier like PHP or C++?


I see your point on how preprocessors can still be of use.

Well, at the very least you can see this as a general good-practice outline for preprocessor handling, such that there is a known way for CommonMark to gracefully degrade the content if it is not handled by a preprocessor (or extension, etc.).

But these are just implementation issues. The core thing is really just to encourage portability and graceful degradation of extensions. So for me, at the very least, there needs to be agreement on how a preprocessor should be expected to locate its content, in a best-practice manner. Using your example, but with my concept, it would look like this (though I would encourage using ``` only for falling back to plain code, to reduce preprocessor complexity):

... in normal _CommonMark_ text here, but now:

~~~~!Z
%%Z
X == Y %x Z
%%
~~~~

Back in _CommonMark_

The ! is to make it clear to the preprocessor that the block is actually for it to process, and not just a Z code snippet for some Z tutorial.

The benefit of coming to an agreement is threefold: reduced preprocessor complexity, increased portability, and graceful degradation.


How the various implementations handle processing the extension is up to each implementation. Not our problem. What is more important is making sure that the cruft of other languages’ syntax doesn’t infect CommonMark and force us to adopt hacks in a few years (because by then everyone would already be using such a hypothetical extension).


Elaborated: Fenced Block Types, Generic Extension/Webcomponents, and fallback handling

There could be a convention that the info string from a backtick fence also applies to all following inline backtick code spans. In many cases, authors are dealing with just one language, but you could of course encounter spans of HTML, CSS, JS, PHP, RX and SQL syntax, for instance, within the same paragraph – some of which may be sufficiently (even if not perfectly) covered by language auto-detection.
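(A sketch of how a renderer might track that convention – the event model here is purely illustrative, not an existing API:)

def label_code_spans(events):
    # The most recent fence's info string also labels following code spans.
    current = None
    for kind, text in events:
        if kind == "fence":                # a fenced block sets the language
            current = text.split()[0] if text.split() else None
            yield (kind, text, current)
        elif kind == "span":               # inline code inherits the last label
            yield (kind, text, current)
        else:
            yield (kind, text, None)

events = [("fence", "sql"), ("para", "then run"), ("span", "SELECT 1;")]
print(list(label_code_spans(events)))
# [('fence', 'sql', 'sql'), ('para', 'then run', None), ('span', 'SELECT 1;', 'sql')]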


Yes, having a mechanism by which “tagged” code blocks could be processed by more-or-less “external” tools, in a common and general way, would be a good thing; I agree on that. There are several comments I would like to make on the details.

The ! to mark “external syntax” would be fine with me too. But what would a label on a code block be good for if it had no consequences? Just for documentation? And if it did have consequences (other than in the current cmark, if I remember correctly!) – how do these consequences happen if not through an “external” processor of the kind we’re talking about, be it for syntax highlighting in the HTML output or for a completely new and different mark-up syntax?


Thinking about how cmark could find the appropriate processor for the tag, and how this “external” processor would be invoked (a plug-in architecture? using the standard C library’s system()? both? how does the “external” processor’s output come back into the cmark process, which is still in the middle of processing an input document?), I felt that this all seemed too complex and brittle to implement, at least using the standard C library only.

And considering that it is a reference implementation after all, I would strongly prefer cmark to be “just” a standard C program, even if the sources are currently half-way between C95 and C99. (Requiring <stdbool.h> for example, but no longer requiring C99’s declaration-is-a-statement syntax.) And IMO it should not depend on dlopen() on U*IX or LoadLibrary() on Windows, for example.


But I now think that both of us (or rather, both of our approaches and preferences) could have our cake and eat it too, with a more general and simpler implementation strategy:

Instead of invoking an “external” processor to translate the content of such a “tagged” code block, wouldn’t it be much easier, cleaner, and more robust if cmark simply output the whole content of said code block wrapped inside an SGML/HTML/XML element, say with a special class attribute, or even a configurable tag name? This would be trivially easy to implement in cmark, I guess.

It would certainly be no problem at all to create a new formatting tool, or adapt an existing one like mine, or say a processor for Mermaid, to scan its input text for just these special elements (resp. tags) and replace them with the processed PCDATA content of the element itself – would it? I can even imagine factoring this switch between “copy text outside these elements” and “replace these elements with their processed content” into a kind of post-processor infrastructure library, or post-processor-applying tool – completely independent of cmark, of course.

Because these elements would exist solely for the communication between cmark and an “external” processor (a post-processor this time …), no SGML/HTML/XML document type definition needs to be constructed or modified, as long as the tag in use does not conflict with the target document’s DTD (or XML schema, or whatever). This is very simple to guarantee by inventing and using a tag in a made-up namespace like <commonmark:specialblock class="Z">, to continue our “Z notation” example. Remember that no one but a post-processor would actually see these elements.

And for a post-processor, just filtering these elements would not even require an XML parser; a simple text search would suffice: we can, after all, rely on the exact spelling of these tags and their attributes.

These elements would be my answer to the question of how (post-)processors are expected to locate “their” content to process.

In case the author tagged his code block with a completely nonsensical label, for which no processor ever existed, much less is available in the processing chain, one could restrict cmark's wrapping of code block content in such an element to a known, given list of code block labels, and for all other labels fall back on the current behaviour as the default – here is the graceful degradation for you!

This would again obviate the distinction, made with a ! or similar in the author’s written text, between code blocks to be processed by an “external” processor in another syntax and code blocks as we all know and use them already – and here my desire not to change CommonMark’s or cmark's behaviour in any substantial way is satisfied.

I’m not sure it would be a good idea not to entity-encode the unformatted content of these code blocks (in order to spare the “external” processor the reversal of this): I’d much rather have cmark output a valid XML/SGML/HTML document. Mechanically reversing the &lt; and &amp; entity references would again be all the “external” processor would have to know and do regarding its input text stream, while just copying all the rest of the input – outside of these elements – to its output without any processing.
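To make the division of labour concrete, here is a minimal sketch in Python of everything such a post-processor would need to know about its input – the transport element follows the example above, and render_z is a hypothetical stand-in for a real Z formatter:

import html
import re
import sys

# Matches the made-up transport element proposed above.
TRANSPORT = re.compile(
    r'<commonmark:specialblock class="Z">(.*?)</commonmark:specialblock>',
    re.DOTALL)

def render_z(source):
    # Hypothetical stand-in for a real Z-notation formatter.
    return '<span class="Z">' + html.escape(source) + "</span>"

def postprocess(doc):
    # Reverse the &lt;/&amp; encoding cmark applied, convert the payload,
    # and splice the result back in; everything else is copied verbatim.
    return TRANSPORT.sub(lambda m: render_z(html.unescape(m.group(1))), doc)

if __name__ == "__main__":
    sys.stdout.write(postprocess(sys.stdin.read()))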


I would argue that this approach would

  • make it easy to modify cmark and

  • make it efficient to implement “external” processors

in a transparent and robust manner, in order to

  • have these “tagged” code blocks processed in whichever way you like,

while at the same time being completely compatible with the existing CommonMark specification, practice, and “feel”.

What would you think about this approach? And what did you have in mind regarding the question of how an “external” processor would get invoked, and so on?

[Sorry if this was again a very long post, but there are a lot of details to iron out …]


(The whole topic of pre-processing in the style I use now is independent of this design of “tagged” code blocks, needs no adaptation in cmark, and is IMO a matter of taste: as long as cmark keeps “supporting” it, I think we can regard it as off-topic now, or rather as a nifty little trick I would recommend, but which you are free to dislike and dismiss.)

@Crissov: Well, delimiting inline mark-up is a rather nasty problem: you want it to be easy to type and to look unobtrusive, but at the same time it must be unambiguous, unlikely to lead to accidental conflicts with the author’s text, and now even able to distinguish between an unlimited number of different inline fragments to process in a number of different ways …!

I see no “perfect” solution for this either (I use $ for my private purposes, and I’m still unhappy about it …):

Re-using the last backtick fence’s label would probably be really useful, but think about mixing, say, [ASCIImath][am] in-line mark-up in a text together with, say, syntax highlighting of a programming language that is also used in in-line fragments (in this case, backtick-delimited code spans).

I would expect no rhyme or reason in the order in which these in-line fragments follow each other, so the need to distinguish the “kind” of each in-line marked-up fragment individually will not go away easily.

Off the top of my head, I would consider the following preliminary, maybe, something-kind-of-like syntax:

  • Regular CommonMark backtick-delimited inline code: yadda yadda `int x = y < z` yadda yadda.

  • Inline “code” (raw text) that needs to be specially treated, just the way “tagged” code blocks ought to be: yadda yadda ´C`int x = y < z` yadda yadda.

Did you notice my sacrilege here? The character in front of the code span’s label is U+00B4 ACUTE ACCENT from ISO 8859 or ISO 10646 (or Windows code page 1252, if you insist), but not from ISO 646 (aka ASCII)!

One could call it “forward-tick”, and it would be visually and logically a nice match with the “back-tick” U+0060 GRAVE ACCENT, in my opinion.

But your taste and opinion could vary—and so will probably your keyboard :wink:

But honestly, I see no convincing reason why all CommonMark text should be restricted to the 7-bit ASCII character set for all time. Doesn’t cmark already happily gobble up UTF-8?

[EDIT: I just saw that this site’s Markdown processor actually takes the ACUTE ACCENT as the beginning of a code span, and places the backtick inside it, until the end of the code span is finally – correctly – detected at the second GRAVE ACCENT aka backtick. Does CommonMark allow this? Here we go with conformance and portability ;-)]

[EDIT 2: Nope, cmark (my build at least) does what I hoped for and does it right, in my view: the fragment above is transformed by cmark -t html frag.txt >frag.out to (and yes, frag.txt was in UTF-8):

<P>yadda yadda ´C<CODE>int x = y &lt; z</CODE> yadda yadda</P>

So I accuse this site’s Markdown processor! (But I would do it anyway because of this extremely annoying treatment of line-breaks
as “hard”! Seen that? There is no blank line in my input, dammit! Who could ever come up with a stupid behaviour like this???) ]
[am]: http://asciimath.org/

@mofosyne: Thinking again: yes, one would have to entity-encode the raw text inside code blocks (or now even code spans too) before transmitting it to a post-processor inside a custom element.

This is easy to see by recognizing that a devious author could type

~~~~C
</commonmark:specialblock>
~~~~

in his CommonMark typescript and completely confuse the post-processor, breaking the orderly processing chain! And one must respect The Order! :wink:

A plausible such devious author would be me, writing documentation about this new cmark feature in CommonMark—so don’t say this would be a far-fetched example!

(But I think encoding “<” as “&lt;” would probably be enough, or wouldn’t it?)

The problem with preprocessing is that it’s not trivial to find the “triggers” without parsing the CommonMark. For example, your

``` mermaid

or whatever triggers your preprocessor, may occur as a literal string inside a fenced code block (with a greater number of backticks). Or it may occur inside an HTML comment.

Postprocessing is more reliable – you can find the <pre> elements generated by cmark and change them to something else.

Pandoc implements a filtering architecture. You can write little programs that transform elements of the AST based on pattern matching, and tell pandoc to use this between the parsing and rendering phase. For example, here’s a filter that turns all headers of level 2 or higher into italicized regular paragraphs:

import Text.Pandoc.JSON

main :: IO ()
main = toJSONFilter behead
  where behead (Header n _ xs) | n >= 2 = Para [Emph xs]
        behead x = x

All the plumbing – marshalling of the AST via JSON, traversing the AST – is handled by pandoc. Something like this could be added to cmark, too.
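The same filter can also be written in Python with the third-party panflute package, which does the JSON marshalling for you (a sketch, equivalent to the Haskell above):

import panflute as pf

def behead(elem, doc):
    # Demote any header of level 2 or more to an italicized paragraph.
    if isinstance(elem, pf.Header) and elem.level >= 2:
        return pf.Para(pf.Emph(*elem.content))

if __name__ == "__main__":
    pf.run_filter(behead)

Saved as behead.py, it would be run with pandoc --filter ./behead.py input.md.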

To summarize the places you can add customizations:

  1. Preprocessor: modify the source before parsing.
  2. Postprocessor: modify the result after rendering.
  3. Filter: modify the AST between parsing and rendering.

Preprocessing is fragile and difficult to get right. Postprocessing is fine when you’re targeting just one output format. But filters are often a better solution when you’re targeting multiple output formats and doing transformations that don’t involve a lot of raw HTML (or LaTeX or whatever). For example, pandoc’s citation resolution system, pandoc-citeproc, is implemented as a filter. It works the same for every output format pandoc supports.


I think that really very little “plumbing” is required from cmark, in both approaches:

  1. Currently I do pre-processing, so there is no CommonMark syntax involved (yet) in my zhtml processor. The pre-processor in my case is triggered by lines containing e.g. %%Z (as the sole content) and similar markers (and I use “$” to delimit in-line mark-up). It converts only these specially marked-up parts of the typescript (which are not marked up with CommonMark syntax!) and replaces them with HTML (either HTML blocks or inline HTML). The result is a conforming CommonMark document, which then gets processed by cmark or whichever Markdown processor you have (well, I know which one you have! :smile:)

    The whole contraption of course relies on and stands or falls with the Markdown rule that HTML mark-up is passed through by a Markdown processor!

    Avoiding the pre-processor falling into “the code block trap” you mentioned is not that hard at all: I have three tools that each know just enough about Markdown to avoid code blocks (one doing this only when an option is set): two process “Z notation e-mail mark-up” into HTML resp. various forms of “plain text”, and one is a general-purpose plain-text formatter which I use for Markdown typescripts too. They work fine for me, but I cannot rule out that there could be problems with “edge cases” of Markdown syntax, i.e. some remaining lack of knowledge about Markdown syntax in these tools.

    This is item 1 in your list of customizations, and it already works pretty well with existing Markdown implementations, with absolutely no plumbing. It has, however, a bit of the feeling of a special solution rather than a very general one (though I would argue about that!), and with only one pre-processor the issue of fragility didn’t come up in practice, but I believe it could turn out to be a problem with multiple pre-processors (I’m not sure about that one, either);

  2. What I propose for post-processing “labeled code blocks” (and labeled code spans too, but there’s no label syntax for those yet) is not that the post-processor “sees” the CommonMark typescript, but rather the (XML or HTML or SGML) output of cmark: this is somewhere between items 2 and 3 in your list of places for customization; the post-processor’s input is not the “regular” rendering of the CommonMark typescript by cmark, but is specifically augmented for post-processing by a modified cmark:

  • each labeled code block’s raw text content is wrapped in an (XML/HTML/SGML) element (one instance per code block), but for “known” labels only (those for which there is a post-processor);

  • with a made-up element name (i.e. tag),

  • for the sole purpose that post-processors can find their input inside these “transport elements”,

  • and then replace these elements by their formatted (XML/HTML/SGML/whatever) output,

  • completing the final output document a bit more with each step in the chain of post-processors,

  • until the output document contains no more such “transport elements”, but is finished and final, and hopefully conforms to the targeted document type (for HTML/XML/SGML/whatever).

So for post-processing, no one needs to produce, see, or parse a complete AST of the cmark parse: for a post-processor it would be sufficient to simply do a text search for the start tags of the elements which are of interest to that specific post-processor (distinguished by an attribute like class="C" or class="PHP", derived from the label of the source code block itself). Each post-processor can rely on the exact spelling of the “transport element’s” start tag, because it was placed there by cmark for the very purpose of being recognizable by those post-processors in the first place!

Yes, this is indeed similar to your remark “you can find the <pre> elements generated by cmark”, but I see important advantages in not using an element type (i.e. tag) of the target (XML/HTML/SGML) document type like <PRE>, and instead avoiding any conflict through the use of said “made-up” element names (a tag <commonmark:special class="PHP"> would never introduce conflicts in those target documents). This would protect from interference with any target format, while also separating the post-processors nicely, and would be the most generic approach I can think of right now.

That’s why some help from cmark is required: the contents of code blocks labeled with a “known” identifier are to be wrapped not in <PRE>, but in “special” elements, for the sole purpose of shipping them to the post-processors. (Multiple post-processors may be chained together, each detecting “its” input by the class="..." attribute and handing its own result to the next one in the chain.)

Furthermore, one could see it as a drawback that each post-processor would have to be adjusted for this kind of input using “transport elements”; but I’m certain that a “post-processor hosting process” could easily be implemented, which would separate the content of “transport elements” from the rest of the document, feed it to a post-processor’s standard input, receive the post-processor’s standard output, and piece together a final document from the various post-processors’ outputs and the document content outside of the “transport elements”, which the post-processors would never see in this mechanism: each post-processor would only see plain text in its “own” syntax arriving at its standard input, with no tags or entity references at all. (Unless they were put into the code block by the original typescript’s author, in order to arrive at the post-processor, that is!)

So I expect that with some more “plumbing” one could use existing “filters” as post-processors (each one transforming stdin to stdout), but this plumbing would all be completely outside of and independent of cmark! In each case, cmark would have to produce the exact same output: wrapping specific raw text parts into specific “transport elements”, pushing the result out of the door (stdout) as always, and that’s it. How these elements are processed further is not the job of cmark, but of a Makefile, or a command-line pipe, or of this “post-processing” super-process, or whatever you can think of.
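As a thought experiment, here is a minimal sketch of such a “post-processing” super-process in Python – the element name follows my example above, while the label table and the filter commands are made up for illustration:

import html
import re
import subprocess
import sys

# Hypothetical table mapping code-block labels to stdin/stdout filters.
FILTERS = {"Z": ["zhtml"], "mermaid": ["mermaid-render"]}

ELEMENT = re.compile(
    r'<commonmark:special class="([^"]+)">(.*?)</commonmark:special>',
    re.DOTALL)

def dispatch(match):
    label, payload = match.group(1), html.unescape(match.group(2))
    cmd = FILTERS.get(label)
    if cmd is None:
        # Unknown label: degrade gracefully to an ordinary code block.
        return "<pre><code>" + html.escape(payload) + "</code></pre>"
    done = subprocess.run(cmd, input=payload, capture_output=True,
                          text=True, check=True)
    return done.stdout  # the filter's output replaces the transport element

if __name__ == "__main__":
    sys.stdout.write(ELEMENT.sub(dispatch, sys.stdin.read()))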

Can you point out for me where you think fragility lurks in this approach? Or what would restrict this approach to just one kind of output? I’m not sure I completely understand your argument regarding these alleged properties of post-processing.

(And the more I write and think about this approach, the more urgent my itch gets to actually go and implement it, so we can see how it works out … I’m convinced it would really not take that much effort.)

@jgm:

You wrote: “Or it may occur inside an HTML comment.” – I had never thought of that one! And out of curiosity I tried it out with zhtml:

Text here,

<!--
%%Z
vanish!
%%
-->

and text there.

And indeed, the %%Z block inside the HTML comment gets converted to a bunch of HTML, which then vanishes inside the comment when the final HTML document is rendered! Gee!

But honestly, I won’t lose much sleep over this fancy incident: the text which zhtml sees is the hand-written typescript created by a human author (i.e. me), and why would an author (me) do something like the above?

Unless, of course, one wants to comment out some Z block – a possibility that had never occurred to me either …

The whole thing would blow up if the converted Z block contained two HYPHEN-MINUS characters in a row, i.e. “--”, because that would destroy the HTML comment and render the final HTML document invalid.

Luckily there is no way I can think of to produce such a “--” sequence from inside a Z Notation block :slight_smile:

I thought that a cheap cop-out would be to require %%Z to be preceded by an empty line, so there would be no HTML block for Markdown to preserve; but even with blank lines before %%Z and after %%, the comment survives cmark and makes it into the HTML document. Is that intended? A similar construct

<DIV

class="hi-there"

>

does not “survive” parsing! It gets converted into this mistake:

<DIV
<P>class=&quot;hi-there&quot;</P>
<BLOCKQUOTE>
</BLOCKQUOTE>

which does not even resemble HTML any more. (And zhtml didn’t touch the <DIV> at all; using cmark -t xhtml alone gives the same result!)

Is an HTML comment treated differently than a “stretched-out” regular element like this <DIV>? Does the CommonMark specification allow screwed-up output like the one in the <DIV> example? Why?

I am about to embark on a project to implement a “general” post-processing structure using the idea of “transport elements”, in which plain text marked up in a “foreign” syntax gets sent to the appropriate processor.

See my Announcement and Request for Comments post in the Implementation category.

The approach of post-processing literal nodes (code and code block) with an identifier in the info string is the one I have taken in the design of Typedown. Once you have the AST, it is straightforward to delegate to various plugins / special content parsers to transform the AST further, as your application dictates.


Is the content-type declaration in Typedown (e.g. !import) located only within the info text? (Using your example from that page, maybe it would look like this:)

``` !import
src: path/to/imported/file.md
```

When I first implemented it, yes. Then I realized (it’s obvious, really) that I could put the content-type declaration within the info-text line, and make the ! optional, making this very similar to normal syntax-highlighting usage, like:

So…

``` 
!import
src: file.md
```

…becomes…

``` !import
src: file.md
```

…, and in that case the exclamation becomes optional…

```import
src: file.md
```

… and attributes are specified as a YAML flow block (YAML’s version of JSON), where the braces are implied. So:

```import skip: false, title: "This file will be imported into the current document"
src: file.md
```

… is the same as:

```import { skip: false, title: "This file will be imported into the current document" }
src: file.md
```

If the block content (not the stuff in the info string) is also parsed as YAML, the YAML file separator --- becomes a record separator within the block, so:

```author id: authlist, description: "This is a list of authors", type: "!Array<{name:string}>"
name: Foo
---
name: Bar
---
name: Baz
```

(in this example, the type attribute is a @typedef using closure compiler annotation syntax)
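(A sketch of how this parsing could be done in Python with the PyYAML package – whether Typedown implements it exactly this way is an assumption:)

import yaml  # PyYAML

def parse_info(info):
    # Split off the content type, then read the rest as a YAML flow
    # mapping with the braces implied.
    head, _, rest = info.partition(" ")
    attrs = yaml.safe_load("{" + rest + "}") if rest.strip() else {}
    return head, attrs

def parse_records(body):
    # "---" acts as a record separator, so each record is its own document.
    return list(yaml.safe_load_all(body))

print(parse_info('author id: authlist, description: "A list of authors"'))
# ('author', {'id': 'authlist', 'description': 'A list of authors'})
print(parse_records("name: Foo\n---\nname: Bar\n---\nname: Baz"))
# [{'name': 'Foo'}, {'name': 'Bar'}, {'name': 'Baz'}]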


Would it be a problem to overload the code block syntax without the ! differentiator?

What if the author intends the import below to be just a plain code block, but with import syntax highlighting etc.?

```import
src: file.md
```

I would suggest encouraging developers and users to stick to !, and keeping the exclamation-mark-less form as an optional feature. It doesn’t seem like much of an overhead to require the usage of !.


On further thought, regarding the bit after the content-type declaration as you describe it: I think we don’t really need to enforce the usage of YAML, since we can just pass the info string directly to whoever needs to handle it. However, we can encourage best practices by providing a library that parses the info text in a consistent manner, while still allowing developers to use the info text in any way they want.

It could also be possible to allow a restricted info string on the closing code fence, to provide additional keys via the consistent attribute syntax discussed here.

e.g.

``` !import <infotext for !import handler>
src: file.md
``` { key=value }

Oh, and just in case: the concept here is that the content inside the fenced block is not necessarily YAML-encoded. It can be anything. It’s up to the external plugin/preprocessor/postprocessor/AST transform/etc. to deal with it.

I think you meant “… the info string between opening code fence brackets …”, right?

Why not accept both the attribute list (enclosed in “{ … }” anyway, and thus recognizable) and, after it, the regular info string?

So any one of

````{ key = value }

or

````info-string

or

````{ key=value } info-string

would be unambiguous.
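(A sketch in Python of how an implementation might split such a fence line; the grammar is just the informal one above:)

import re

# An optional "{...}" attribute list first, then an optional info string;
# the braces make the attribute list recognizable.
FENCE = re.compile(r'^\s*(\{[^}]*\})?\s*(.*?)\s*$')

def split_fence_line(line):
    m = FENCE.match(line)
    return (m.group(1) or "", m.group(2))

print(split_fence_line("{ key = value }"))           # ('{ key = value }', '')
print(split_fence_line("info-string"))               # ('', 'info-string')
print(split_fence_line("{ key=value } info-string")) # ('{ key=value }', 'info-string')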

Hey folks – did we come to a conclusion around rendering foreign input such as mermaidjs within CommonMark?

```!mermaid
graph TD;
    A-->B;
    A-->C;
    B-->D;
    C-->D;
```

Seems the easiest convention to implement, and it reads pretty nicely. What we’d probably do is actually generate the contents within an iframe, so that we can delegate rendering of these components rather than trying to generate SVGs etc. within the CommonMark renderer. That would also allow us to mix and match between client-side and server-side rendering, depending on the external format being rendered inline.
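(A sketch of such a renderer hook in Python; the iframe endpoint and attribute names are made up for illustration:)

import html

def render_foreign_block(kind, source):
    # Instead of converting the diagram source ourselves, hand it to a
    # sandboxed iframe whose page (one that loads e.g. mermaid.js) renders
    # it client-side. The /render path is hypothetical.
    return ('<iframe sandbox="allow-scripts" src="/render/{0}" '
            'data-source="{1}"></iframe>').format(
        html.escape(kind), html.escape(source, quote=True))

print(render_foreign_block("mermaid", "graph TD;\nA-->B;"))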


Sorry, this is a bit long, but I think context is important here.

At Discourse we support two means of extending a Discourse installation:

  1. Themes/components - these are JS and CSS packages you can add and remove while an installation is running

  2. Plugins - These are JS / CSS and Ruby packages that require a full rebuild of your instance

Recently we had some inquiries from people who wanted mermaid support, and they only had the freedom of using method (1).

Traditionally we used BBCODE to decorate blocks, so our original go-to here was to add support for:

[mermaid]
...
[/mermaid]

However, this is a bit of a breaking change to the platform. If we were to unconditionally fiddle with the HTML for unknown BBCODE tags, then:

[test]
thing
[test]

would likely render as:

thing

(due to a new wrapping div)

Instead of:

[test]
thing
[test]

Like it does today.

We could add even more complexity to our engine and allow for:

<p data-wrap-type="test"> 
   <span class="tagname">[test]</span><br>
   test
   <span class="tagname">[test]</span><br>
</p>

This would work, but it starts getting very kludgy.

@vitaly identified BBCODE as a weaker part of our extensibility. We are now building on 9 years of history, and we don’t want to make large breaking changes. That said, the graphing problem is very interesting (mermaid, graphviz, svgbob, charts).

We ended up building this support, which will land in core:

This allows us to follow the now-forming “industry standard”, which is:

```mermaid
graph here
```

It would also allow us to support the following, if we wish. But I worry that the industry has not yet adopted !; I guess GitHub could help push for this. I certainly see merit in calling this out, and it would also allow for syntax highlighting of mermaid source, something that the current !-less solution does not have.

```!mermaid
graph here
```

We also opted to support attributes like so:

```mermaid height=200,width=150
```

Finally, one thing that is still missing is some sort of parity for inline graphs, which this pattern does not support. BBCODE looks like the only easy way to support that, from what I can tell:

I am an inline [mermaid height=100].....[/mermaid]

Mermaid does not have a compelling argument for inline content, but other things like MathJax and so on do. Dealing with inline content in general is tricky: nothing in the spec helps with it, nor are there any areas that would be easy to expand to add this support.

Overall, I think the call to add ! to the spec, so we can differentiate between syntax highlighting and block extensions, has lots of merit.
