Brush cost gauges #45

Closed
opened 2024-08-24 12:03:54 +02:00 by liquidev · 2 comments
liquidev commented 2024-08-24 12:03:54 +02:00 (Migrated from github.com)

Once we have a live brush preview #36, it would be nice to display some gauges to the user to visualize the cost of their brush and how close they are to the VM's limits. Right now if a brush runs out of fuel, or recurs too far, or exhausts any resource, it will fail suddenly with an error message. Monitoring these limited resources actively with gauges would be a much nicer experience.

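One gauge-friendly way to frame this is to meter each limited resource against its cap and expose the ratio for display, instead of only failing at the cap. A minimal sketch of fuel metering (the names `Vm`, `step`, and `fuel_gauge` are hypothetical, not rkgk's actual API):

```python
class FuelExhausted(Exception):
    """Raised when the fuel budget runs out (the current hard-failure behavior)."""


class Vm:
    """Toy interpreter loop that burns one unit of fuel per instruction."""

    def __init__(self, fuel_limit):
        self.fuel_limit = fuel_limit
        self.fuel_used = 0

    def step(self):
        # The same counter that enforces the limit also feeds the gauge.
        if self.fuel_used >= self.fuel_limit:
            raise FuelExhausted("out of fuel")
        self.fuel_used += 1

    def fuel_gauge(self):
        """Fraction of the fuel budget consumed, for display as a gauge."""
        return self.fuel_used / self.fuel_limit


vm = Vm(fuel_limit=1000)
for _ in range(250):
    vm.step()
# vm.fuel_gauge() is now 0.25 - a quarter of the budget spent.
```

The point is that the counters needed for the gauges already exist for enforcement; the UI only needs a way to read them after each preview render.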
liquidev commented 2024-09-05 13:45:57 +02:00 (Migrated from github.com)

I've been thinking over which costs are the most useful from the user perspective, and here's an overview.

We consider the case where the brush grows organically rather than being pasted in all at once.
Therefore we're not concerned with the case where earlier phases fail and prevent later phases from executing.

I think the following metrics should be displayed to the user:

  • Syntax nodes - to help measure how big your code is against the limits
  • Fuel - to help measure how long your code runs against the limits
  • Refs - to help measure how many heavy values your code allocates
  • Bulk memory - to help measure how much memory it uses on bulk data - lists, pixmaps, etc.
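The four user-facing metrics above could be reported together as a single snapshot of fill fractions. A sketch, with purely illustrative limits and field names (not rkgk's real capacities):

```python
# Illustrative limits - placeholders, not rkgk's actual VM configuration.
LIMITS = {
    "syntax_nodes": 65536,   # how big your code is
    "fuel": 100_000,         # how long your code runs
    "refs": 2048,            # how many heavy values it allocates
    "bulk_memory": 1 << 20,  # bytes of bulk data: lists, pixmaps, etc.
}


def gauges(usage, limits=LIMITS):
    """Map each metric to its fill fraction in [0, 1] for the gauge widgets."""
    return {k: min(usage.get(k, 0) / limits[k], 1.0) for k in limits}


usage = {"syntax_nodes": 1200, "fuel": 50_000, "refs": 10, "bulk_memory": 4096}
g = gauges(usage)
# g["fuel"] is 0.5: half the fuel budget is spent.
```

Clamping to 1.0 keeps the display sane in the failure case, where usage briefly equals or exceeds the limit before the error is shown.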

The following metrics may be useful, but we won't show them because it's kind of hard to communicate what they're about, and they're hard to implement efficiently. Your script will simply fail if any of these are exhausted - which most likely will happen due to a bug.

  • Stack usage
  • Call stack depth
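For completeness, exhausting one of these looks like a plain hard failure rather than a gauge: a bound checked on every call (or push), with an error when it's crossed. A sketch with a made-up limit:

```python
# Illustrative cap - rkgk's real call-depth limit may differ.
MAX_CALL_DEPTH = 256


class CallDepthExceeded(Exception):
    """The sudden error the user sees today - most likely caused by a bug
    such as unbounded recursion, which is why a gauge wouldn't help much."""


def enter_call(depth):
    """Checked on every function call; there is nothing gradual to display."""
    if depth + 1 > MAX_CALL_DEPTH:
        raise CallDepthExceeded(f"call depth limit {MAX_CALL_DEPTH} exceeded")
    return depth + 1


depth = 0
for _ in range(MAX_CALL_DEPTH):
    depth = enter_call(depth)
# The next enter_call(depth) would raise CallDepthExceeded.
```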

The following metrics will not be displayed until exhausted, because they mostly describe the same thing: source code size. I presume out of all code size-related metrics, AST nodes will reach their limit first.

  • Source code length - this one is obvious, but I don't think it makes sense to show it. You'll most likely run into parser events or AST nodes first.
  • Chunks - not modifiable by the user right now, so we won't show it.
  • Tokens - you'll run into the AST size limit first, because each token must be wrapped in an AST node. While it's possible to write `+` 65536 times, I believe the parser would emit an error node for every one of those, which would mean parser events get exhausted first.
  • Parser events - this one's easy to run into, because there are always more parser events than AST nodes - for each AST node, we emit 1 or 3 parser events. However, I don't like how unintuitive it is, so we may raise the limit above the AST capacity such that parser event exhaustion can never occur. Parser events are very cheap anyway (1 byte per event).
  • Bytecode length - I'd need to measure, but bytecode is much more compact than any of the above, so I presume it usually ends up being much smaller than the source code, or any related things.
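The "raise it above the AST capacity" idea from the parser-events bullet can be checked with simple arithmetic: if each AST node produces at most 3 events, sizing the event buffer at 3x the node capacity guarantees nodes always run out first. A sketch with hypothetical capacities:

```python
# Hypothetical capacities - placeholders, not rkgk's actual limits.
MAX_AST_NODES = 65536
EVENTS_PER_NODE_MAX = 3  # per the comment: each node emits 1 or 3 events

# Size the event buffer so AST nodes always exhaust first.
MAX_PARSER_EVENTS = MAX_AST_NODES * EVENTS_PER_NODE_MAX


def events_can_exhaust_first(nodes_cap, events_cap, per_node_max):
    """True if the event buffer can fill before the node pool does."""
    return events_cap < nodes_cap * per_node_max


# With the 3x sizing above, parser events can never be the first limit hit.
assert not events_can_exhaust_first(
    MAX_AST_NODES, MAX_PARSER_EVENTS, EVENTS_PER_NODE_MAX
)
```

At 1 byte per event, the 3x headroom costs only `MAX_AST_NODES * 3` bytes, which supports the "very cheap anyway" point.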

It doesn't feel very useful to monitor the following metrics proactively:

  • Defs - when you run out... well, 256 is a lot of definitions, don't you think? Are you generating your code by any chance?

The following metrics are related to the current renderer, which I plan to rework into something much simpler - such that these probably aren't going to be needed.

  • Pixmap stack
  • Transform stack

I'm still debating bulk memory. It'd be nice to merge it into refs somehow, but I don't see how we could do that meaningfully...

liquidex added this to the Backlog project 2024-09-13 20:24:03 +02:00
Owner

I implemented an MVP of this for further feedback during future playtests.

Reference: liquidex/rkgk#45