Low-level tokenization mode for Glow #206
Comments
Sounds like a legit idea. I was planning to implement clearer parsing/tokenization and rendering phases, because there is a need for more customized highlighting per language. Sorry this answer took so long; my mind has been occupied with the upcoming design system, but I'm planning to make a round of updates to the Glow and Nuekit internals before launching it. Thanks
@fabiospampinato there is a public `parseRow` method that, as of the most recent commit, also understands inline comments. It returns an array of tokens in the following format:

```js
[
  { start: 0, end: 1, tag: "i", re: /[^\w \u2022]/g },
  { start: 11, end: 18, tag: "em", re: /'[^']*'|"[^"]*"/g, is_string: true },
  ...
]
```

where `start` is the start index and `end` is the end index in the input string. Hope this helps. Note that this method only understands individual rows, so it has no clue about multi-line comments.
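For anyone wiring this up, here is a minimal sketch (not part of Glow itself) of turning one row's tokens back into markup. It assumes the token shape described above, with tokens sorted by `start` and non-overlapping, and it ignores HTML escaping for brevity:

```javascript
// Render a single row of code from parseRow-style tokens.
// Gaps between tokens are emitted as plain text.
function tokensToHtml(row, tokens) {
  let html = '';
  let pos = 0;
  for (const { start, end, tag } of tokens) {
    if (start > pos) html += row.slice(pos, start); // uncovered gap
    html += `<${tag}>${row.slice(start, end)}</${tag}>`;
    pos = end;
  }
  return html + row.slice(pos); // trailing plain text, if any
}
```

With this, any stretch of the input not covered by a token (e.g. between indexes 1 and 11 in the example) simply falls through unstyled.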
Nice, thanks 👍 Do the tokens cover the entire input string? E.g. what should happen in that example between indexes 1 and 11?
@tipiirai the new function is not exported from the entrypoint, could you fix this? |
@tipiirai the tokenization seems a bit wrong. With this code:

```js
import { parseRow } from 'nue-glow/src/glow.js';

const code = "import shiki from 'shiki';";
const lang = "js";
const tokens = parseRow(code, lang);
```

I get the following tokens:

```js
{ start: 0,  end: 6,  tag: 'strong', re: /\b(null|true|false|undefined|import|from|async|aw…l|until|next|bool|ns|defn|puts|require|each)\b/gi }
{ start: 13, end: 17, tag: 'strong', re: /\b(null|true|false|undefined|import|from|async|aw…l|until|next|bool|ns|defn|puts|require|each)\b/gi }
{ start: 18, end: 25, tag: 'em', re: /'[^']*'|"[^"]*"/g, is_string: true }
{ start: 18, end: 19, tag: 'i', re: /[^\w •]/g }
{ start: 24, end: 25, tag: 'i', re: /[^\w •]/g }
{ start: 25, end: 26, tag: 'i', re: /[^\w •]/g }
```

These are problematic: you can spot right away that there are 3 tokens each wrapping a single character, but the input string ends with only one such character. If I explicitly slice those ranges off the input string I get this array:

```js
['import', 'from', "'shiki'", "'", "'", ';']
```

So basically there are two tokens for the string's apostrophes that shouldn't exist 😢
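A quick way to surface this kind of bug is to scan the token list for overlapping ranges. This little helper (purely illustrative, not a Glow API) flags exactly the two spurious apostrophe tokens in the output above, since they fall inside the string token's range:

```javascript
// Return every token whose range overlaps an earlier token's range.
// A correct tokenization should produce an empty result.
function findOverlaps(tokens) {
  const sorted = [...tokens].sort((a, b) => a.start - b.start);
  const overlaps = [];
  let maxEnd = -1;
  for (const t of sorted) {
    if (t.start < maxEnd) overlaps.push(t); // starts before a previous token ends
    maxEnd = Math.max(maxEnd, t.end);
  }
  return overlaps;
}

// Offsets copied from the parseRow output reported above.
const reported = [
  { start: 0, end: 6 }, { start: 13, end: 17 }, { start: 18, end: 25 },
  { start: 18, end: 19 }, { start: 24, end: 25 }, { start: 25, end: 26 },
];
```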
I just released the "convenient" highlighter/tokenizer on top of Glow that I had in mind: https://twitter.com/fabiospampinato/status/1762965155841773879

Generally, FWIW, I really like this approach, and if more effort could be put into refining the syntax highlighter I think it could actually be pretty decent for a lot of use cases. Some areas that IMO would be nice if they could be improved:
IMO with relatively few tweaks it would be closer to the quality that TextMate can achieve, in a lot more cases. Example comparison I got, with Glow on the left and TextMate on the right. Code I used for the example:

```js
import shiki from 'shiki';

// Some example code
shiki
  .getHighlighter({
    theme: 'nord',
    langs: ['js'],
  })
  .then(highlighter => {
    const code = highlighter.codeToHtml(`console.log('shiki');`, { lang: 'js' })
    document.getElementById('output').innerHTML = code
  });
```
I've been thinking about ways of removing Oniguruma from my bundles. It's needed for handling TextMate grammars, which are commonly used for syntax highlighting, and Glow seems like a very interesting way to do that, and more.
I'm building a new experimental tiny code editor for the web, and I'd be interested in wiring it up with Glow. For that I'm not interested in emitting HTML; rather, I'd need the syntax highlighter to return me a list of tokens, which would basically tell me what color to use for each range.
Is there any interest in adding a low-level tokenization function like that?
Ideally the function would be a bit lower-level than that, letting one explicitly ask the syntax highlighter for tokens line by line, so that the main thread is never blocked for long; but for a lot of use cases something simpler should suffice.
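The line-by-line API suggested above could be sketched as a generator, so the caller controls how much work happens per tick (e.g. by pulling a few lines per idle callback). Everything here is hypothetical: `tokenizeRow` stands in for a Glow-level function and just marks quoted strings, purely so the sketch is self-contained:

```javascript
// Stand-in for a per-row tokenizer (hypothetical, not a Glow API):
// marks single- or double-quoted strings with an 'em' tag.
function tokenizeRow(row) {
  const tokens = [];
  for (const m of row.matchAll(/'[^']*'|"[^"]*"/g)) {
    tokens.push({ start: m.index, end: m.index + m[0].length, tag: 'em' });
  }
  return tokens;
}

// Lazy line-by-line tokenization: each `next()` call does one line of work,
// so the caller can yield back to the event loop between lines.
function* tokenizeLines(code) {
  for (const row of code.split('\n')) {
    yield tokenizeRow(row);
  }
}
```

A caller could then drain the generator inside `requestIdleCallback` or a `setTimeout(0)` loop, processing a handful of lines per slice.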