A little while ago, I created my own fork of commonmark.js in order to experiment with ways to customize the output of HtmlRenderer. After a short discussion with @jgm on commonmark.js issue #6, I went back to the drawing board. I’d now like to bring the discussion before the main community here since I’ve synced my latest revisions to GitHub this evening.
I’d like to gather some feedback on my implementation and whether or not I might be able to help work towards an agreeable solution for how to handle custom output. I thought it best to start by coding first and asking questions later so that everyone has a good reference for what I’m trying to do and also to have something to experiment with and review independently. Suggestions are welcome!
Here’s a quick rundown of the API so far, but the documentation provides much more detail. I’m afraid I can’t link to it all because I’m a new user here, but each class is documented in individual .md files in the top level of my fork.
Renderer - the base class from which all other renderers now derive
HtmlRenderer - the standard HTML renderer which now accepts functions to modify the behavior of each node type
XmlRenderer - the standard XML renderer which also now accepts handler functions like HtmlRenderer
HtmlSyntaxRenderer - a new renderer that parses raw HTML and handles all input as generic HTML objects
Below is a quick example. It performs the simple task of converting emphasized text to uppercase, and it does so using two different APIs.
// commonmark = require('commonmark.min.js');

// Simple Markdown and parser to test with
var markdown = 'Hello *world*!';
var parser = new commonmark.Parser();

// Emphasis flag
var inEmph = false;

// HtmlRenderer implementation
var htmlrenderer = new commonmark.HtmlRenderer({
    Emph: function(node, entering) {
        inEmph = entering;
        return this._Emph(node, entering);
    },
    Text: function(node, entering) {
        var text = this._Text(node, entering);
        return inEmph ? text.toUpperCase() : text;
    }
});

// HtmlSyntaxRenderer implementation
var htmlsyntaxrenderer = new commonmark.HtmlSyntaxRenderer({
    String: function(str) {
        var text = this.escape(str, false);
        return inEmph ? text.toUpperCase() : text;
    },
    Tag: function(tag) {
        if (tag.name === 'em') {
            inEmph = tag.entering;
        }
        if (tag.entering) {
            // Build the output in a separate variable so the `tag` argument isn't shadowed
            var out = '<' + tag.name;
            for (var attribute in tag.attributes) {
                out += ' ' + attribute + '="' + this.escape(tag.attributes[attribute], true) + '"';
            }
            return out + '>'; // no self-closing slashes
        } else {
            return '</' + tag.name + '>';
        }
    }
});

// Generate HTML
console.log('HtmlRenderer:\n' + htmlrenderer.render(parser.parse(markdown)));
console.log('HtmlSyntaxRenderer:\n' + htmlsyntaxrenderer.render(parser.parse(markdown)));
/**** OUTPUT ****
HtmlRenderer:
<p>Hello <em>WORLD</em>!</p>
HtmlSyntaxRenderer:
<p>Hello <em>WORLD</em>!</p>
****/
What are some of the sample use cases where custom output is needed? In general I like your idea, but some people will probably say to just render to HTML and change the output with CSS or XSLT.
A feature like this makes it possible to add id or class attributes to certain elements, to add markup to plain text (for example, turning @usernames into links), or to whitelist the markup that you wish to allow in the output. Those are just a few examples; I'm sure others can think of more creative applications for it.
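To make one of those concrete, here's a rough sketch of the @username idea using the same handler API as my example above (it reuses the parser from that example; the /users/ profile URL is just a placeholder, and it assumes the fork's default _Text handler behaves as shown):

```js
// Sketch only: assumes the fork's HtmlRenderer handler API shown above,
// and a made-up /users/ URL scheme for profile links.
var mentionRenderer = new commonmark.HtmlRenderer({
    Text: function(node, entering) {
        var text = this._Text(node, entering); // default escaped text output
        // Turn @mentions found in plain text into links
        return text.replace(/@(\w+)/g, '<a href="/users/$1">@$1</a>');
    }
});

console.log(mentionRenderer.render(parser.parse('Thanks, @EmptyStar!')));
// => <p>Thanks, <a href="/users/EmptyStar">@EmptyStar</a>!</p>
```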
It’s true that XSLT is meant for output transformations, but it isn’t a direct replacement for this API. I was going to write a huge analysis of the differences, but the fact is, XSLT is powerful and is very capable of complex transformation tasks. The only real downsides I can think of are that its input must be XML (XmlRenderer might work well for this), and it would likely degrade performance more than simple JavaScript function overrides. Sticking to pure JavaScript might also make the code more portable on its own between browsers and servers (Node.js, Java Nashorn, etc.); you would only have to maintain one code base for every environment.
In my own opinion, XSLT is also cumbersome, both on its own and as part of the bigger picture. Implementing complex operations in XSLT requires lots of boilerplate code and can be very difficult without extension functions in some cases. Using XSLT also complicates the technology stack, since you have to manage JavaScript, XSLT APIs, XSLT documents, plus any extensions coupled to your XSLT.
Feel free to disagree, but I'd personally rather deal with a renderer API such as the one I've created than add XSLT to the mix.
I agree with you in regards to XSLT, but I brought it up as a common objection. If you want to succeed in this direction without forking (i.e. have it accepted back into the main codebase as a pull request), you'll need to first explain the use cases better, and how they are better than or different from other approaches.
@Vitaly, re “For real needs syntax customizations required too.”
I agree, a good approach to customizing is currently my biggest barrier to deploying CommonMark: I can't use it as a drop-in replacement for creating books (à la LeanPub, or an iOS wiki-like client like Trunk Notes) without adding new functionality, which I'd like to do in a standard, modular way.
A certain amount of customization can be done without any changes in the parser, by establishing certain conventions for your application and manipulating the AST.
For example, fenced code blocks with a certain info string can be processed and replaced by other content. You might use this to include declarative diagrams (graphviz, tikz, mermaid, etc.).
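As a minimal sketch of that approach using commonmark.js's AST walker (this assumes a recent commonmark.js with lower-case node type names; renderDot is a stand-in for whatever actually renders the diagram):

```js
var commonmark = require('commonmark');

var parser = new commonmark.Parser();
var source = 'Some text.\n\n```dot\ndigraph { a -> b }\n```\n';
var doc = parser.parse(source);

// Replace "dot" code blocks with raw HTML blocks containing the rendered diagram.
var walker = doc.walker();
var event, node;
while ((event = walker.next())) {
    node = event.node;
    if (event.entering && node.type === 'code_block' && node.info === 'dot') {
        var html = new commonmark.Node('html_block');
        html.literal = renderDot(node.literal); // stand-in: dot source -> SVG markup
        node.insertBefore(html);
        node.unlink();
    }
}

var result = new commonmark.HtmlRenderer().render(doc);
```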
Link syntax can be overridden too. Here’s a real-world example from my pandoc scripting tutorial. Another example: in gitit I treat links with empty contents as wikilinks.
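And a compact sketch of the wikilink idea, reading "empty" as an empty link destination (e.g. [Front Page]()); the /wiki/ URL scheme here is made up:

```js
// Continuing with a parsed document `doc` as in the previous sketch.
var walker = doc.walker();
var event, node;
while ((event = walker.next())) {
    node = event.node;
    if (event.entering && node.type === 'link' && node.destination === '') {
        var label = node.firstChild ? node.firstChild.literal : '';
        node.destination = '/wiki/' + encodeURIComponent(label);
    }
}
```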
Special sections in books, like epigraphs or appendices, could be handled by intercepting headers with specific titles and treating them specially.
I don’t want to deny that some extensions to core CommonMark would be very useful. But you might consider whether some of your needs could be met by overloading normal Markdown constructs. (This has the advantage of degrading gracefully, too, when your document is rendered with an ordinary Markdown parser.)
There are probably a lot of customizations better off implemented in the parser instead of only in the renderer. I started with the rendering code because the corresponding issue on GitHub indicated the desire to customize the HTML renderer in particular, and it seemed like a useful and interesting endeavor that I could help with.
This might be a good time to ask exactly what kind of customization we’d like to have in commonmark.js. Is there a desire for an API similar to markdown-it? Should there be a larger focus on parser customizations? Or should commonmark.js strictly be a reference for how to parse and render Markdown according to the CommonMark spec? I admit to being an alien in the CommonMark (and GitHub) community with no good sense of direction.
I’ll continue to tinker with the code in my own fork, and I’d have no objections if my changes are destined to stay there. But if there are any specific interfaces we want to see implemented in regards to customization, I’ll try to steer myself towards those goals. Part of the reason I’m posting here is to fuel the conversation on the subject of customization in commonmark.js so that everyone (myself included) has a better idea of what the project needs or doesn’t need.
In the meantime, I’ll do some reading through the other topics here to educate myself more on general opinions.
@EmptyStar, I think the technical questions are all well understood. There is only one problem: time. It's not possible to do everything at once, even if you had 24 hours a day for it.
@vitaly, I hope I didn’t give the impression of being upset or frustrated! I’m totally chill, even if a bit overzealous.
I know I have much to learn, and as I said, I’m going to read up more on discussions that have already taken place. I’ll also keep plugging away on my own fork, and I’ll use it to experiment with initiatives and ideas that I come across. At the very least, I can provide something else to refer to when discussions about customization arise.
I ran into this for a presentation I'm putting together.
It uses reveal-ck as the top-level framework, which in turn leverages html-pipeline underneath.
There are filters to parse the markdown, but then you add extra filters on top of that:
convert --- to mark the break between two slides; basically, convert <hr> to <section> (a rough sketch in commonmark.js terms appears at the end of this post).
convert graphviz dot to svg.
convert text to the appropriate <abbr>.
So I added a filter to convert code blocks:
```dot
digraph X {
  A -> B
}
```
I want a CAR.
*[:car:]: CAR
I have also added this (in a different implementation) to Jekyll.
I notice the extensions I want/write tend to be parser-centric, but maybe that is because I control the layout template. Just adding a CSS class in the parser lets me customize the output with HTML/JS. The fact that Markdown passes HTML tags through makes this easier, too.
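In commonmark.js terms, the slide-splitting filter I mentioned above might look roughly like this (post-processing the rendered HTML rather than touching the parser; nesting and attributes are glossed over):

```js
var commonmark = require('commonmark');

var parser = new commonmark.Parser();
var renderer = new commonmark.HtmlRenderer();

function renderSlides(markdown) {
    var html = renderer.render(parser.parse(markdown));
    return html
        .split(/<hr\s*\/?>\n?/)  // each --- (thematic break) starts a new slide
        .map(function(slide) { return '<section>\n' + slide + '</section>'; })
        .join('\n');
}

console.log(renderSlides('# Slide one\n\n---\n\n# Slide two'));
```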
I am encountering the same problem trying to create a renderer that produces CommonMark(down). The use case is quite simple (though there are many): producing a README document from multiple Markdown documents, occasionally including file contents or the output of executed programs. I use this to keep the examples in my README in sync with the state of the repository.
Whilst I can certainly create a Renderer that does all the basics, if I want to output the document as close as possible to the original (I imagine only changes to whitespace might occur) then the Parser would need to be modified. Some examples:
Determining the type of hard line break used (backslash or double spaces)
Rendering link references ([link]: http://example.com)
Determining the type of character used for emph, strong etc.
I am sure there are more I haven’t run into but as far as I can tell the AST does not expose enough information for me to achieve a lot of these tasks.
My personal feeling is that a round trip back to Markdown should be possible. I previously ran into the same issue with marked in my initial implementation (which works) and had to fork its source.
I would really like to avoid forking commonmark.js and instead contribute to a Parser and Renderer design that allows rendering back to CommonMark (as well as HTML, XML, PDF, TXT, man, etc.).
So I just wanted to see if @jgm thought this problem was worth solving and to continue this discussion on what I think is an important topic.
Of course, even if the spec doesn’t say that this information needs to be preserved (and in my view it probably shouldn’t), our implementations could preserve it.
It would be quite easy to modify commonmark.js to keep track of what character was used for emphasis, how many backticks were used in a backtick code block, what type of hard line break was used, and whether a fenced or indented code block was used (in cases where the info string is empty).
Keeping track of whether a link or image was inline or reference (and what label was used for the reference) would also be relatively easy. The tricky part would be knowing where the reference definition goes, since this isn't tracked in the AST. Putting them all at the end would be easy, though.
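Purely as an illustration of what "keeping track" could mean (every property name below is hypothetical, not part of commonmark.js today), a CommonMark renderer could read such properties and fall back to canonical defaults when they're absent:

```js
// Hypothetical "source flavor" properties a modified parser might record on nodes.
function emphDelimiter(node) {
    return node._delimChar || '*';       // '*' or '_'
}

function codeFence(node) {
    var ch = node._fenceChar || '`';     // '`' or '~'
    var len = node._fenceLength || 3;
    return new Array(len + 1).join(ch);  // e.g. '```' or '~~~~'
}

function hardBreak(node) {
    return node._breakStyle === 'spaces' ? '  \n' : '\\\n';
}
```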
About your use case: you don’t need a CommonMark renderer for that, unless the source code blocks sometimes occur in indented contexts (blockquotes or lists). All you’d need is a preprocessor or template engine to insert the contents of the source files into specially marked code blocks.
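For example, a tiny preprocessor along these lines would do (the include: convention is invented here; any unambiguous marker works):

```js
var fs = require('fs');

// Replace empty code fences marked "include:<path>" with the file's contents
// before handing the Markdown to the parser.
function preprocess(markdown) {
    return markdown.replace(/```include:(\S+)\n```/g, function(match, path) {
        var contents = fs.readFileSync(path, 'utf8');
        if (!/\n$/.test(contents)) {
            contents += '\n';
        }
        return '```\n' + contents + '```';
    });
}
```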