Imagine the HTML: `<a></b>`.

Parser-wise, how would you design the AST architecture, considering cases like the above?
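To see why `<a></b>` is awkward for a strictly tree-shaped AST, consider plain stack-based tag pairing: the closing `</b>` never matches the opening `<a>`. Below is a minimal, hypothetical sketch (not codsen-parser's actual implementation) that pairs tags with a stack and reports mismatches instead of throwing:

```javascript
// Hypothetical sketch, not codsen-parser's actual code: pair opening and
// closing tags using a stack, collecting mismatches instead of throwing.
function pairTags(tokens) {
  const stack = [];
  const issues = [];
  for (const t of tokens) {
    if (t.type !== "tag") continue;
    if (!t.closing) {
      // opening tag: remember it until its closing counterpart arrives
      stack.push(t);
    } else {
      // closing tag: it should match the most recent opener
      const opener = stack.pop();
      if (!opener || opener.tagName !== t.tagName) {
        issues.push({
          expected: opener ? opener.tagName : null,
          got: t.tagName,
          at: t.start,
        });
      }
    }
  }
  return issues;
}

// `<a></b>` expressed as a simplified token stream:
const tokens = [
  { type: "tag", tagName: "a", closing: false, start: 0 },
  { type: "tag", tagName: "b", closing: true, start: 3 },
];
const issues = pairTags(tokens);
console.log(issues);
// → [ { expected: "a", got: "b", at: 3 } ]
```

A forgiving parser has to decide what the tree looks like at exactly this point: nest `</b>` under `<a>`, hoist it, or record it as an error node with source positions.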
Read the article: `codsen-parser` vs. `hyntax`.

```js
import { strict as assert } from "assert";
import { tokenizer } from "codsen-tokenizer";

const gathered = [];

// it operates from a callback, like Array.prototype.forEach()
tokenizer(`<td nowrap>`, {
  tagCb: (obj) => {
    gathered.push(obj);
  },
});

assert.deepEqual(gathered, [
  {
    type: "tag",
    start: 0,
    end: 11,
    value: "<td nowrap>",
    tagNameStartsAt: 1,
    tagNameEndsAt: 3,
    tagName: "td",
    recognised: true,
    closing: false,
    void: false,
    pureHTML: true,
    kind: null,
    attribs: [
      {
        attribName: "nowrap",
        attribNameRecognised: true,
        attribNameStartsAt: 4,
        attribNameEndsAt: 10,
        attribOpeningQuoteAt: null,
        attribClosingQuoteAt: null,
        attribValueRaw: null,
        attribValue: [],
        attribValueStartsAt: null,
        attribValueEndsAt: null,
        attribStarts: 4,
        attribEnds: 10,
        attribLeft: 2,
      },
    ],
  },
]);
```
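Notice that every field in the token above is position-indexed into the source string. A quick check in plain JavaScript (independent of the library) confirms that the offsets in the output line up with the input:

```javascript
const source = "<td nowrap>";

// tagName occupies [1, 3) — tagNameStartsAt/tagNameEndsAt above
console.log(source.slice(1, 3)); // → "td"

// the attribute name occupies [4, 10) — attribStarts/attribEnds above
console.log(source.slice(4, 10)); // → "nowrap"

// the whole token spans [0, 11) — start/end above
console.log(source.slice(0, 11) === source); // → true
```

Keeping only index ranges (rather than copied substrings) is what lets downstream tools patch the original source precisely.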
See `codsen-tokenizer` in the monorepo on GitHub.

To report bugs or request features or assistance, raise an issue on GitHub.

Any code contributions are welcome! All pull requests will be dealt with promptly.
Copyright © 2010–2021 Roy Revelt and other contributors