The result of parsing is a token stream that will be passed to the renderer to generate HTML content.
These tokens can themselves be parsed again to generate more tokens (for example, a `list` token can be divided into multiple `inline` tokens).
An `env` object can be used alongside tokens to inject external variables into your parsers and renderers.
Each chain (`core`, `block`, and `inline`) uses an independent `state` object when parsing data, so that each parsing operation is independent and can be disabled on the fly.
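For example, a custom `core` rule can use `env` to pass data out of a parse; here is a minimal sketch (the `wordCount` key is invented for illustration):

```js
// A minimal sketch: every rule in the `core` chain receives the same state,
// and `state.env` is the object passed into md.render() / md.parse().
const md = require('markdown-it')();

md.core.ruler.push('count_words', function (state) {
  // Invented example key: stash a word count on the shared env object.
  state.env.wordCount = state.src.split(/\s+/).filter(Boolean).length;
});

const env = {};
md.render('# Hello *world*', env);
console.log(env.wordCount); // => 3
```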
## Token stream
Instead of a traditional AST, we use a more low-level data representation -- *tokens*.
The difference is simple:
- Tokens are a simple sequence (an array).
- Opening and closing tags are separate.
- There are special token objects, "inline containers", that have nested tokens. These are sequences with inline markup, such as bold, italic, text, etc.
See the [`Token`](https://github.com/markdown-it/markdown-it/blob/master/lib/token.js) class
for details about each token's content.
In total, a token stream is:
- On the top level -- an array of paired or single "block" tokens:
  - open/close for headers, lists, blockquotes, paragraphs, etc.
  - code blocks, fenced blocks, horizontal rules, html blocks, inline containers
- [Live demo](https://markdown-it.github.io/) - type your text and click the `debug` tab (a programmatic sketch of the same output follows below).
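A minimal sketch of inspecting that stream programmatically (the token types and fields shown are the standard ones; the input text is arbitrary):

```js
// A minimal sketch: inspecting the token stream produced by the parser.
const md = require('markdown-it')();

const tokens = md.parse('# Hi\n\nSome *emphasis*.', {});

// Top level: paired block tokens plus "inline" containers, e.g.:
//   heading_open    (tag: 'h1', nesting: 1)
//   inline          (content: 'Hi', children: [...nested inline tokens])
//   heading_close   (tag: 'h1', nesting: -1)
//   paragraph_open / inline / paragraph_close
tokens.forEach(function (token) {
  console.log(token.type, token.tag, token.nesting);
});

// Inline containers carry the nested inline token stream in `children`:
console.log(tokens[4].children.map(function (t) { return t.type; }));
// => [ 'text', 'em_open', 'text', 'em_close', 'text' ]
```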
## Rules
Rules are functions that do the "magic" with parser `state` objects. A rule is associated with one or more *chains* and is unique. For instance, a `blockquote` token is associated with the `blockquote`, `paragraph`, `heading`, and `list` chains.
Rules are managed by name via [`Ruler`](https://markdown-it.github.io/markdown-it/#Ruler) instances and can be enabled and disabled from [`MarkdownIt`](https://markdown-it.github.io/markdown-it/#MarkdownIt)'s methods.
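For example, a minimal sketch of toggling a couple of built-in rules by name:

```js
// A minimal sketch: toggling built-in rules by name.
const md = require('markdown-it')();

md.disable(['image', 'strikethrough']); // skip these inline rules during parsing
md.enable('image');                     // turn `image` back on later

console.log(md.render('~~struck~~ text'));
// strikethrough stays disabled, so the tildes are rendered literally
```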
Note that some rules have a `validation mode` -- in this mode, rules do not
modify the token stream and only look ahead for the end of a token. It's an
important design principle -- the token stream is "write only" during the `block` and `inline` parse stages.
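A minimal sketch of how this convention looks in a custom block rule (the rule name `marked_line` and its `!` marker are invented for illustration):

```js
const md = require('markdown-it')();

// Hypothetical block rule: a line starting with `!` becomes a <mark> element.
md.block.ruler.before('paragraph', 'marked_line', function (state, startLine, endLine, silent) {
  const pos = state.bMarks[startLine] + state.tShift[startLine];
  if (state.src.charCodeAt(pos) !== 0x21 /* ! */) return false;

  // Validation mode: report that a token starts here,
  // but do not touch the token stream.
  if (silent) return true;

  const token = state.push('marked_line', 'mark', 0);
  token.content = state.getLines(startLine, startLine + 1, 0, false).slice(1).trim();
  token.map = [startLine, startLine + 1];
  state.line = startLine + 1;
  return true;
});

md.renderer.rules.marked_line = function (tokens, idx) {
  return '<mark>' + md.utils.escapeHtml(tokens[idx].content) + '</mark>\n';
};

console.log(md.render('! important line'));
// => <mark>important line</mark>
```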
Parsers are designed to keep rules independent of each other. You can safely enable/disable them or
add new ones. There are no universal recipes for how to create new rules -- the design of
distributed state machines with good data isolation is a tricky business. However, you
can investigate existing rules & plugins to see possible approaches.
In complex cases you can try to ask for help in the [issue tracker](https://github.com/markdown-it/markdown-it/issues).
The condition is very simple -- it should be clear from your ticket that you studied the docs, sources,
and tried to do something yourself. We never refuse to help real developers.
## Renderer
After the token stream is generated, it's passed to a [`Renderer`](https://markdown-it.github.io/markdown-it/#Renderer).
It then iterates through all the tokens, passing each to a rule with the same name as its token type.
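As a quick, minimal sketch (using the built-in `fence` rule name, which handles fenced code blocks), replacing one such rule looks like this:

```js
// A minimal sketch: overriding the renderer rule for fenced code blocks.
const md = require('markdown-it')();

md.renderer.rules.fence = function (tokens, idx, options, env, self) {
  const token = tokens[idx];
  return '<pre class="my-fence"><code>' +
         md.utils.escapeHtml(token.content) +
         '</code></pre>\n';
};

console.log(md.render('~~~\nhello()\n~~~'));
```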
Renderer rules are located in `md.renderer.rules[name]` and are simple functions