Mermaid - Generation of diagrams and flowcharts from text in a similar manner as markdown

That sounds rather fair. I can imagine the spec having a section recommending trigger words that should always be provided, e.g. `diagram`, like:

``` dot diagram

And if `diagram` is not in the info string, the plugin would simply ignore the block and let it fall back to code-block mode.

By convention, then, the plugin would not activate without seeing the `diagram` keyword in the info string (other keywords are possible, as it’s only a recommendation, not a standard).
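As a sketch of that convention (hypothetical plugin logic, not any existing API), the dispatch could look like this:

```python
# Hypothetical dispatch logic for the convention described above: a diagram
# plugin only activates when the reserved keyword "diagram" appears in the
# fence's info string; anything else falls back to ordinary code-block mode.

def dispatch_fence(info_string: str) -> str:
    words = info_string.split()
    if "diagram" in words[1:]:       # e.g. "dot diagram"
        return "render:" + words[0]  # hand the block to the diagram renderer
    return "code"                    # plain code-block fallback
```

So a ``` dot diagram fence would be rendered, while a plain ``` dot fence stays an ordinary code block.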

I cannot see, and don’t want to see, CM adopting English trigger words.

However, there’s syntax precedent for embedding instead of more-or-less verbatim rendering: the exclamation mark in image syntax vs. normal hyperlinks. I could see extensions adopting this character for the info string, e.g.:

``` dot!


```! dot

Those are totally different graph types, though. As an example of syntax, it’s interesting but not very closely related.

I totally agree with embracing the exclamation point as a generic marker of embedding content in commonmark, and this aligns with what I suggested elsewhere to disambiguate image link syntax and convert alt text into a visible caption.


Keep in mind that Mermaid, Viz.js (Graphviz), et al. are among those things that should work fine today as embedded HTML, with not much more syntax burden than fenced code blocks would require.

Whether that translates into usable graphing behavior when exporting to PDF or ePub, e.g., from a given implementation may be a different, more difficult, story.

If web components become a thing, and we use ! as the trigger character for code blocks, then we could just make it so that:

``` !mermaid A Demo Diagram { .diagramStyle id=demoChart } 
graph TD;
<mermaid class=diagramStyle id=demoChart alt="A Demo Diagram" >
    graph TD;

Essentially a no-markdown island. Of course, you could also have a filter in front of the AST to catch and modify the behaviour before it defaults back to webcomponent mode.
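A rough sketch of such a filter, assuming the { .class id=value } attribute syntax from the example above (the parsing rules here are guesses for illustration, not an existing implementation):

```python
import re

# Illustrative AST-filter step: a fenced block whose info string starts with
# "!" is emitted as a web-component island instead of a <pre><code> block.
# The "!tag alt text { .class #id key=value }" grammar is an assumption taken
# from the example above, not a real cmark API.

def fence_to_island(info: str, body: str) -> str:
    m = re.match(r"!(\w+)\s*([^{]*?)\s*(?:\{\s*(.*?)\s*\})?\s*$", info)
    if not m:
        return f"<pre><code>{body}</code></pre>"   # fallback: plain code block
    tag, alt, attrs = m.group(1), m.group(2).strip(), m.group(3) or ""
    parts = [tag]
    for a in attrs.split():
        if a.startswith("."):
            parts.append(f'class="{a[1:]}"')       # .foo -> class="foo"
        elif a.startswith("#"):
            parts.append(f'id="{a[1:]}"')          # #foo -> id="foo"
        else:
            parts.append(a)                        # key=value passed through
    if alt:
        parts.append(f'alt="{alt}"')
    return f"<{' '.join(parts)}>\n{body}\n</{tag}>"
```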

Using Concepts from:

  • Consistent attribute syntax
  • The alt field is strongly encouraged by the W3C for accessibility reasons, so the info text should always be included in the alt field to encourage its usage (e.g. so blind users can understand what the diagram is supposed to show).

Note: until official Mermaid supports web components, if the above approach is adopted you will need to type this:

``` !div { .mermaid } 

To render as in this official example:

<div class="mermaid"> 

Alternatively, you can have an official commonmark-mermaid plugin that captures the web component and outputs the correct HTML for the Mermaid engine (at least until it switches to web components).

  • Another approach is to say that “.dot” and “#dot” in the info text mean a div with class=dot or id=dot respectively (e.g. .mermaid is equivalent to !div { .mermaid }).

If people like this idea and it has not been discussed before, I could probably paste this into its own thread. Or maybe slip it into the “no-markdown island” thread, if relevant.

I don’t like the generic attribute syntax with curly braces at all, but I could live with special-casing strings introduced by a dot (.) or hash (#) in places that don’t get rendered. The exact treatment depends on the output format; the HTML attributes id and class are obviously the default case. Other candidates besides the info string of a start fence are the location and reference parts of links, but that’s not as compatible with existing implementations, e.g.:

[text](location .class #id)

[text][reference .class #id]
  [reference]: location

  [reference]: location .class #id

So I presume you mean something like this, Crissov?

``` !mermaid A Demo Diagram { key=value } 

``` ! .mermaid A Demo Diagram { key=value } 

``` ! #mermaid A Demo Diagram { key=value } 

``` ! #mermaid .mermaid A Demo Diagram { key=value } 

``` !mermaid #mermaid .mermaid A Demo Diagram { key=value } 
<mermaid key=value alt="A Demo Diagram" >...</mermaid>
<div class=mermaid key=value alt="A Demo Diagram" >...</div>
<div id=mermaid key=value alt="A Demo Diagram" >...</div>
<div class=mermaid id=mermaid key=value alt="A Demo Diagram" >...</div>
<mermaid class=mermaid id=mermaid key=value alt="A Demo Diagram" >...</mermaid>

In most cases the generic attribute syntax can be omitted if you do not need to set any values in HTML (or plugin/extension settings). (Basically, I know it’s ugly, but it helps avoid overloading the most common CommonMark syntax by moving the cruft to an optional {}.)

On the question of compatibility. This is what it would generally look like (in current commonmark):

<pre><code class="language-!mermaid">...</code></pre>
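A post-processor could therefore upgrade that output after the fact; a minimal sketch (the language-! class prefix follows the compatibility example above, and the rewrite rule is illustrative):

```python
import re

# Hedged sketch: rewrite stock cmark output for "!"-tagged fences, such as
# <pre><code class="language-!mermaid">...</code></pre>, into a bare
# web-component element. A real post-processor would also decode the
# entity-encoded body; this only shows the structural rewrite.

def upgrade_islands(html: str) -> str:
    pattern = re.compile(
        r'<pre><code class="language-!(\w+)">(.*?)</code></pre>',
        re.DOTALL,
    )
    return pattern.sub(r"<\1>\2</\1>", html)
```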

Edit: Added ! .mermaid to .mermaid and others, to account for the case that people want to style “code blocks” rather than styling web-components.

I didn’t say anything about alt or rather title, which isn’t such a bad idea but should go inside quotation marks, and I’d leave out the curly braces part, of course.

``` !mermaid "A Demo Diagram" .mermaid #mermaid something else

<mermaid title="A Demo Diagram" class="mermaid" id="mermaid">…</mermaid>

I don’t think it is necessary to handle syntax extensions like Mermaid by extending a CommonMark processor like cmark at all. I use a regular CommonMark (resp. Markdown) syntax processor in a very similar scenario without difficulties, and I think my approach can be used in a very general way.

I am currently using (a slightly extended clone of) cmark to generate HTML documents, but not directly from CommonMark formatted input, but from original “extended” syntax input after a pre-processing step.

For example, I generate this HTML from that input.

The cmark input is not the hand-written CommonMark text, but the output of another tool of mine which acts as a pre-processor: It reads the original text input (written in the “extended” syntax) and replaces parts of the hand-written text with HTML mark-up, while leaving the rest alone (which is all the regular CommonMark text).

This output is then fed to cmark, and cmark ignores and passes through the inserted HTML (as the CommonMark and Markdown specifications require), while doing its job on the CommonMark text. The output from this step is the final HTML document.

In a Makefile, the two steps look like this (cm2html outputs a whole HTML document, including the document type declaration and the <HEAD>, but is in all other respects the CommonMark parser):

        zhtml -aUm $(MARKDOWN) >"%TEMP%\$(TMPNAME)"
        cm2html "%TEMP%\$(TMPNAME)" >$(HTML)

This works quite nicely and allows one to “extend” the CommonMark syntax with another syntax, which in my case is very similar in spirit, namely the “e-mail mark-up” of the Z notation, as defined in ISO/IEC 13568:2000.

I’m sure that other “extended” syntaxes could be accommodated using the same approach, which is by the way very much like the time-honored use of pre-processors for troff, and that the same can be done for Mermaid.

(While this answer was written with cmark in mind, nothing in this process is specific to CommonMark, and works just as well with other Markdown processors.)
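For concreteness, the pre-processing step could be sketched like this, where render_z is a stand-in for a real converter such as zhtml, and the %%Z / %% delimiters are the ones mentioned above:

```python
import re

# Sketch of the pre-processor pattern described above: paragraphs delimited by
# %%Z ... %% lines are replaced with HTML before the text reaches a CommonMark
# processor, which passes raw HTML through untouched. render_z() is a trivial
# stand-in for a real converter such as zhtml.

def render_z(z_source: str) -> str:
    return "<div class='Z'>" + z_source.strip() + "</div>"

def preprocess(text: str) -> str:
    block = re.compile(r"^%%Z\n(.*?)^%%\n", re.DOTALL | re.MULTILINE)
    return block.sub(lambda m: render_z(m.group(1)) + "\n", text)
```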

Using a preprocessor is certainly an option (and can be an effective solution for extensions local to a specific website). But it would quickly get unwieldy once you implement more than one preprocessor filter, due to syntax collisions.

Plus, it’s not obvious in the original source that it is Z notation. Hence it’s safer for the interpreter to have an “island” of known non-Markdown content that self-specifies which extensions can safely be applied to it, in a portable manner.

Yes, placing “foreign” syntax in kind-of code block boxes tagged with an indicator to signal how to process them: if I understand you correctly, that would be your approach, and I think it could work. Taking my use case, that would mean a mark-up like this:

... in normal _CommonMark_ text here, but now:

X == Y %x Z

Back in _CommonMark_

where cmark would somehow be configured to pass the content of the ~~~~Z block through an external converter like my zhtml, right?

I’ll give you that this would be a relatively clean way to have one main processor, cmark, and one or more subordinate processors for “foreign” syntaxes, like zhtml.

But I see two drawbacks:

  1. It is actually more verbose to enforce “double” mark-up for the Z notation paragraphs in this way, while ^%%Z$ is (one of several) perfectly recognizable markers for such paragraphs already (and a preprocessor could certainly recognize ~~~~Z too!);

  2. It does not provide a solution for in-line “foreign” syntaxes, which the pre-processor approach naturally does – I use $ to delimit in-line Z notation.


  • I don’t see how this would get unwieldy if combining several pre-processors, as long as each preprocessor leaves HTML marked-up stuff alone (as they do already);

  • whether it is obvious in my example where Z notation begins and ends is somewhat a matter of taste; the standard (which I mentioned and which specifies this Z mark-up) at least prescribes delimiters like %%Z (and others) to delimit various kinds of paragraphs in Z Notation, which I do not find hard to perceive. Remember that the exact same pre-processor approach would work just as well with ^~~~~Z / ^~~~~ delimiters too: it’s just the pre-processor that needs to recognize them.

So no, I’m not really convinced of the merits of modifying/extending cmark – and hence, CommonMark syntax itself, in some way. But using some special kind of code blocks (and inline code spans too?) for this purpose would be a generic approach, yes.

But I do see the advantage in portability, if each block of “foreign” syntax is enclosed in a tagged code block: if nothing can be done with the content, cmark could just treat it as an ordinary code block – if that’s what you’re aiming at.

Wouldn’t that be exactly the way to implement syntax highlighting in the existing notation for blocks of code, tagged with a language identifier like PHP, or C++?


I see your point on how preprocessors can still be of use.

Well, at the very least you can see this as a general good-practice outline for preprocessor handling, such that there is a known way for CommonMark to gracefully degrade the content if it is not handled by a preprocessor (or extension, etc.).

But these are just implementation issues. The core thing is really just to encourage portability and graceful degradation of extensions. So for me, at the very least, there needs to be agreement on how a preprocessor should be expected to locate its content, in a best-practice manner. Using your example with my concept, it would look like this (though I would encourage using ``` only for falling back to code, to reduce preprocessor complexity):

... in normal _CommonMark_ text here, but now:

X == Y %x Z

Back in _CommonMark_

The ! is to make it clear to the preprocessor that the block is actually for it to process, and not just a Z code snippet in some Z tutorial.

The benefit of coming to an agreement is threefold: reduced preprocessor complexity, increased portability and graceful degradation.

How each implementation handles processing the extension is up to the implementation; that’s not our problem. What matters more is making sure the cruft of other languages’ syntax doesn’t infect CommonMark and force hacks onto it in a few years (because by then everyone would already be using some such hypothetical extension).

Elaborated: Fenced Block Types, Generic Extension/Webcomponents, and fallback handling

There could be a convention that the info string from a backtick fence also applies to all following inline backtick code spans. In many cases, authors are dealing with just one language, but you could of course encounter spans of HTML, CSS, JS, PHP, RX and SQL syntax, for instance, within the same paragraph. Some of which may be sufficiently – even if not perfectly – covered by auto-detection of languages.


Yes, having a mechanism by which “tagged” code blocks could be processed by more-or-less “external” tools, in a common and general way, would be a good thing, I agree on that. There are several comments I would like to give on the details.

The ! to mark “external syntax” would be fine with me too. But what would a label on a code block be good for if it had no consequences? Just for documentation? And if it would have consequences (other than in the current cmark, if I remember correctly!)—how do these consequences happen if not by an “external” processor of the kind we’re talking about, be it for syntax highlighting in the HTML output, or for a complete new and other mark-up syntax?

Thinking about how cmark could find the appropriate processor for the tag, and how this “external” processor would be invoked (A plug-in architecture? Using the standard C library’s system()? Both? How does the “external” processor’s output come back into the cmark process—which is still in the middle of processing an input document?), I felt that this all seemed too complex and brittle to implement, at least using the standard C library only.

And considering that it is a reference implementation after all, I would strongly prefer cmark to remain “just” a standard C program, even if the sources are currently half-way between C95 and C99 (requiring <stdbool.h>, for example, but no longer requiring C99’s declaration-is-a-statement syntax). And IMO it should not depend on <dlopen.h> on U*IX or on LoadLibrary() on Windows, for example.

But I now think that both of us (or rather: both of our approaches and preferences) could have our cakes and eat it, with a more general and simpler implementation strategy:

Instead of invoking an “external” processor in order to translate the content of such a “tagged” code block, wouldn’t it be much easier, cleaner and more robust if cmark simply output the whole content of said code block, wrapped inside an SGML/HTML/XML element, say with a special class attribute, or even a configurable tag name? This would be trivially easy to implement in cmark, I guess.

It would certainly be no problem at all to create a new formatting tool, or adapt an existing one like mine, or say a processor for Mermaid, to scan its input text for just these special elements (resp. tags), and then replace the elements with the processed PCDATA content of the element itself—would it? I can even imagine factoring this switch between “copy text outside these elements” and “replace these elements with their processed content” into a kind of post-processor infrastructure library, or post-processor-applying tool—completely independent of cmark, of course.

Because these elements would solely exist for the communication between cmark and an “external” processor (a post-processor this time …), no SGML/HTML/XML document type definition needs to be constructed or modified, as long as the tag in use does not conflict with the target document’s DTD (or XML schema, or whatever). This is very simple to guarantee by inventing and using a tag in a made-up namespace like <commonmark:specialblock class="Z">, to continue our “Z notation” example. Remember that no one but a post-processor would actually see these elements.

And for a post-processor to just filter these elements would not even require an XML parser, a simple text search would suffice: we can, after all, rely on the exact spelling of these tags and their attributes.
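A minimal sketch of that text search (the tag spelling is the made-up example from above, not an existing cmark feature):

```python
# Sketch of the "simple text search" idea: because cmark itself would emit
# the transport elements with a fixed, known spelling, a post-processor can
# split on them literally, with no XML parser. The tag name and class are the
# made-up <commonmark:specialblock> example from the discussion.

START = '<commonmark:specialblock class="Z">'
END = '</commonmark:specialblock>'

def process_transport(html: str, convert) -> str:
    out = []
    pos = 0
    while True:
        i = html.find(START, pos)
        if i < 0:
            out.append(html[pos:])           # no more transport elements
            return "".join(out)
        j = html.find(END, i)
        out.append(html[pos:i])                      # copy text outside
        out.append(convert(html[i + len(START):j]))  # replace element body
        pos = j + len(END)
```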

These elements would be my answer to how (post-)processors are expected to locate “their” content to process.

In case the author tagged his code block with a completely nonsensical label, for which no processor ever existed, let alone is available in the processing chain, one could restrict cmark's behaviour so that code block content is wrapped in such an element only for a known, given list of code block labels, falling back on the current behaviour as the default for all other labels—here is the graceful degradation for you!

Which would again obviate the distinction made by using a ! or similar in the author’s written text between code blocks to be processed by an “external” processor in another syntax, and code blocks as we all know and use them already—here is my desire to not change CommonMark nor cmark's behaviour in a substantial way satisfied.

I’m not sure if it would be a good idea to not entity-encode the unformatted content of these code blocks (in order to spare the “external” processor the reversal of this): I’d much rather have cmark output a valid XML/SGML/HTML document. Sheepishly replacing the &lt; and &amp; entity references would again be all the “external” processor would have to know and achieve regarding its input text stream, while just copying all the rest of the input—outside of these elements—to its output without any processing.
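The reversal is indeed small; a sketch, assuming cmark encodes only <, > and & inside code block content:

```python
# Minimal entity decoding a post-processor would need, assuming cmark
# entity-encodes only <, > and & inside code block content (an assumption;
# check the actual processor's output).

def decode_entities(s: str) -> str:
    # Order matters: &amp; must come last, so that an author's literal
    # "&lt;" (encoded as "&amp;lt;") round-trips back to "&lt;", not "<".
    return s.replace("&lt;", "<").replace("&gt;", ">").replace("&amp;", "&")
```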

I would argue that this approach would

  • allow one to easily modify cmark and

  • to efficiently implement “external” processors

in a transparent and robust manner, in order to

  • have these “tagged” code blocks processed in whichever way you like,

while at the same time being completely compatible with the existing CommonMark specification, practice and “feel”.

What would you think about this approach? And what was on your mind regarding the question how an “external” processor would get invoked and so on?

[Sorry if this was again a very long post, but there are a lot of details to iron out …]

(The whole topic of pre-processing in the style I do now is independent of this design of “tagged” code blocks, needs no adaptation in cmark, and is IMO a matter of taste: as long as cmark keeps “supporting” it, I think we can regard it as off-topic now, or rather: as a nifty little trick I would recommend, but you are free to dislike and dismiss.)

@Crissov: Well, delimiting inline mark-up is a rather nasty problem: you want it to be easy to type and to look unobtrusive, but at the same time it must be unambiguous, unlikely to lead to accidental conflicts with the author’s text, and now even distinguish between an unlimited number of different inline fragments to process in a number of different ways …!

I see no “perfect” solution for this, either (I use $ for my private purposes, and still I’m unhappy about it…):

Re-using the last backtick fence’s label would probably be really useful, but think about mixing, say, [ASCIImath][am] in-line mark-up in a text with, say, syntax highlighting of a programming language that is also used in in-line fragments (in this case: backtick-delimited code spans).

I would expect no rule or rhyme in which order these in-line fragments would follow each other, so the need to distinguish the “kind” of each in-line marked-up fragment individually will not go away easily.

Off the top of my head I would consider the following, preliminary, maybe-something-kind-of-like syntax:

  • Regular CommonMark backtick-delimited inline code: yadda yadda `int x = y < z` yadda yadda.

  • Inline “code” (raw text) that needs to be specially treated, just in the way “tagged” code blocks ought to be: yadda yadda ´C`int x = y < z` yadda yadda.

Did you notice my sacrilege here? The character in front of the code span’s label is U+00B4 ACUTE ACCENT from ISO 8859 or ISO 10646 (or Windows code page 1252, if you insist), but not from ISO 646 (aka ASCII)!

One could call it “forward-tick”, and it would be visually and logically a nice match with the “back-tick” U+0060 GRAVE ACCENT, in my opinion.

But your taste and opinion could vary—and so will probably your keyboard :wink:

But honestly, I see no convincing reason why all of CommonMark text should be restricted to the 7-bit ASCII character set for all time. Doesn’t cmark right now happily gobble up UTF-8 already?

[EDIT: I just saw that this site’s Markdown processor actually takes the ACUTE ACCENT as the beginning of a code span, and places the backtick inside it, until the end of the code span is finally—correctly—detected at the second GRAVE ACCENT aka backtick. Does CommonMark allow this? Here we go with conformance and portability ;-)]

[EDIT 2: Nope, cmark (my build at least) does what I hoped for and does it right, in my view: the fragment above is transformed by cmark -t html frag.txt >frag.out to (and yes: frag.txt was in UTF-8):

<P>yadda yadda ´C<CODE>int x = y &lt; z</CODE> yadda yadda</P>

So I accuse this site’s Markdown processor! (But I would do it anyway because of this extremely annoying treatment of line-breaks
as “hard”! Seen that? There is no blank line in my input, dammit! Who could ever come up with a stupid behaviour like this???) ]

@mofosyne: Thinking again: yes, one would have to entity-encode the raw text inside code blocks (or now even: code spans too) before transmitting it to a post-processor inside a custom element.

This is easy to see by recognizing that a devious author could type


in his CommonMark typescript and completely confuse the post-processor, breaking the orderly processing chain! And one must respect The Order! :wink:

A plausible such devious author would be me, writing documentation about this new cmark feature in CommonMark—so don’t say this would be a far-fetched example!

(But I think encoding “<” as “&lt;” would probably be enough, or wouldn’t it?)

The problem with preprocessing is that it’s not trivial to find the “triggers” without parsing the CommonMark. For example, your

``` mermaid

or whatever triggers your preprocessor, may occur as a literal string inside a fenced code block (with a greater number of backticks). Or it may occur inside an HTML comment.

Postprocessing is more reliable – you can find the <pre> elements generated by cmark and change them to something else.

Pandoc implements a filtering architecture. You can write little programs that transform elements of the AST based on pattern matching, and tell pandoc to use this between the parsing and rendering phase. For example, here’s a filter that turns all headers of level 2 or higher into italicized regular paragraphs:

import Text.Pandoc.JSON

main :: IO ()
main = toJSONFilter behead
  where behead (Header n _ xs) | n >= 2 = Para [Emph xs]
        behead x = x

All the plumbing – marshalling of the AST via JSON, traversing the AST – is handled by pandoc. Something like this could be added to cmark, too.

To summarize the places you can add customizations:

  1. Preprocessor: modify the source before parsing.
  2. Postprocessor: modify the result after rendering.
  3. Filter: modify the AST between parsing and rendering.

Preprocessing is fragile and difficult to get right. Postprocessing is fine when you’re targeting just one output format. But filters are often a better solution when you’re targeting multiple output formats and doing transformations that don’t involve a lot of raw HTML (or LaTeX or whatever). For example, pandoc’s citation resolution system, pandoc-citeproc, is implemented as a filter. It works the same for every output format pandoc supports.


I think that really very little “plumbing” is required from cmark, in both approaches:

  1. Currently I do pre-processing, so there is no CommonMark syntax involved (yet) in my zhtml processor. The pre-processor in my case is triggered by lines containing e.g. %%Z (as the sole content) and similar markers (and I use “$” to delimit in-line mark-up). It converts only these specially marked-up parts of the typescript (the parts not marked up with CommonMark syntax!), and replaces them with HTML (either HTML blocks or inline HTML). The result is a conforming CommonMark document, which then gets processed by cmark or whatever Markdown processor you have (well, I know which one you have! :smile:)

    The whole contraption of course relies on and stands or falls with the Markdown rule that HTML mark-up is passed through by a Markdown processor!

    Avoiding that the pre-processor falls into “the code block trap” you mentioned is not that hard at all: I have three tools that each know just enough about Markdown to avoid code blocks (one doing this only when an option is set): two process “Z notation e-mail mark-up” into HTML resp. various forms of “plain text”, and one is a general-purpose plain-text formatter which I use for Markdown typescripts too. They work fine for me, but I cannot rule out that there could be problems with “edge cases” of Markdown syntax, i.e. some remaining lack of knowledge about Markdown syntax in these tools.

    This is item 1 in your list of customizations, and it already works pretty well with existing Markdown implementations, with absolutely no plumbing. It has, however, a bit of the feeling of a special solution rather than a very general one (though I would argue about that!); with only one pre-processor the issue of fragility didn’t come up in practice, but I believe it could turn out to be a problem with multiple pre-processors (I’m not sure about that one, either);

  2. What I propose for post-processing “labeled code blocks” (and labeled code spans, too, but there’s no label syntax yet) is not that the post-processor “sees” the CommonMark typescript, but the (XML or HTML or SGML) output of cmark: this is somewhere between your listed places for customizations item 2 and 3: the post-processor’s input is not the “regular” rendering of the CommonMark typescript by cmark, but is specifically augmented for post-processing by a modified cmark:

  • all the labeled code block’s raw text content is wrapped in (XML/HTML/SGML) elements (one instance for each code block), but for “known” labels only (for which there is a post-processor);

  • with a made-up element name (ie tag),

  • for the sole purpose that post-processors can find their input inside these “transport elements”,

  • and then replace these elements by their formatted (XML/HTML/SGML/whatever) output,

  • completing the final output document each step in the chain of post-processors one bit more,

  • until the output document contains no more such “transport elements”, but is finished and final, and hopefully conforms to the targeted document type (for HTML/XML/SGML/whatever).

So for post-processing no one needs to produce, or see, or parse, a complete AST of the cmark parse: for a post-processor it would be sufficient to simply do a text search for the start tags of elements which are of interest to this specific post-processor (distinguished by an attribute like class="C" or class="PHP", derived from the label of the source code block itself). Each post-processor can rely on the exact spelling of the “transport element’s” start tag, because they were placed in there by cmark just for the purpose of being recognizable by those post-processors in the first place!

Yes, this is indeed similar to your remark "you can find the <pre> elements generated by cmark", but I see important advantages in not using an element type (ie tag) of the target (XML/HTML/SGML) document type like <PRE>, but instead avoiding any conflict through use of said “made-up” element names (a tag <commonmark:special class="PHP"> would never introduce conflicts in those target documents). This would protect from interference with any target format, while also separating the post-processors nicely, and would be the most generic approach I can think of right now.

That’s why some help by cmark is required: the code block contents for blocks labeled with a “known” identifier are to be wrapped not in <PRE>, but in “special” elements, for the one purpose of shipping them to the post-processors. (There may be multiple chained together, each detecting “it’s” input by the class="..." attribute, and handing it’s own result to the next one in the chain.)

Furthermore, one could see it as a drawback that each post-processor would have to be adjusted for this kind of input using “transport elements”; but I’m certain that a “post-processor hosting process” could easily be implemented, which would separate the content of “transport elements” from the rest of the document, feed it to the post-processor’s standard input, receive the post-processor’s standard output, and piece together a final document from the various post-processors’ outputs and the document content outside of the “transport elements”, which the post-processors would never see in this mechanism: each post-processor would only see plain text in its “own” syntax arriving at its standard input, with no tags or entity references at all. (Unless they were put into the code block by the original typescript’s author, in order to arrive at the post-processor, that is!)

So I expect that with some more “plumbing” one could use existing “filters” as post-processors (each one transforming stdin to stdout), but this plumbing would be completely outside of and independent of cmark! In each case, cmark would have to produce the exact same output: wrapping specific raw-text parts into specific “transport elements”, pushing the result out of the door (stdout) as always, and that’s it. How these elements are processed further is not the job of cmark, but of a Makefile, or a command-line pipe, or of this “post-processing” super-process, or whatever you can think of.
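Such a hosting process could be sketched as follows, assuming fixed transport-element spellings and an arbitrary stdin-to-stdout command as the filter (the tag names and command are illustrative):

```python
import subprocess

# Sketch of the "post-processor hosting process": split the cmark output on
# transport elements, pipe each element body through an external filter's
# stdin/stdout, and splice the results back around the untouched text.
# The start/end tag spellings are assumed to be fixed, as proposed above.

def run_filter(cmd, text: str) -> str:
    return subprocess.run(cmd, input=text, capture_output=True,
                          text=True, check=True).stdout

def host(html: str, start: str, end: str, cmd) -> str:
    pieces = []
    pos = 0
    while (i := html.find(start, pos)) >= 0:
        j = html.find(end, i)
        pieces.append(html[pos:i])                        # text outside
        pieces.append(run_filter(cmd, html[i + len(start):j]))
        pos = j + len(end)
    pieces.append(html[pos:])
    return "".join(pieces)
```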

Can you point out for me where you think in this approach lurks fragility? Or what would restrict this approach to just one kind of output? I’m not sure I understand your argument completely regarding these alleged properties of post-processing.

(And the more I write and think about this approach, the more urgent gets my itch to actually go and implement it, so we can see how it works out … I’m convinced it would really take not that much effort.)


You wrote: “Or it may occur inside an HTML comment.” — I had never thought of that one! And out of curiosity I tried it out with zhtml:

Text here,


and text there.

And indeed, the %%Z block inside the HTML comment gets converted to a bunch of HTML, which then vanishes inside the comment when the final HTML document is rendered! Gee!

But honestly, I won’t lose much sleep over this fancy incident: the text which zhtml sees is the hand-written typescript created by a human author (i.e.: me), and why would an author (me) do something like the above?

Unless, of course, one wants to comment out some Z block, a possibility that never occurred to me either …

The whole thing would blow up if the converted Z block would contain two HYPHEN-MINUS characters in a row, ie “--”, because that would destroy the HTML comment and render the final HTML document invalid.

Luckily there is no way I can think of to produce such a “--” sequence from inside a Z Notation block :slight_smile:

I thought that a cheap cop-out would be to require %%Z to be preceded by an empty line: so there would be no HTML block for Markdown to preserve, but even with blank lines before %%Z and after %% the comment survives cmark and makes it into the HTML document. Is that intended? A similar construct




does not “survive” parsing! It gets converted into this mistake:


which does not even resemble HTML any more. (And zhtml didn’t touch the <DIV> at all, using cmark -t xhtml only gives the same result!)

Is a HTML comment treated differently than a “stretched-out” regular element like this <DIV>? Does the CommonMark specification allow a screwed output like the one for the <DIV> example? Why?