2022-05-03 16:17:32 KipIngram: I'm curious, what's your register allocation in your forth?
2022-05-03 16:18:13 I'm in the planning stages of one and i've run out (none available for future use....)
2022-05-03 16:33:19 Oops, I realised I had EBP
2022-05-03 18:23:56 %define rrBB r15 ; System base address; NEXT lives here
2022-05-03 18:23:57 %define rrHB r14 ; Designates active task block
2022-05-03 18:23:59 %define rrIP r13 ; Instruction pointer
2022-05-03 18:24:02 %define rrRP r12 ; Return stack pointer
2022-05-03 18:24:03 %define rrSP r11 ; Data stack pointer
2022-05-03 18:24:05 %define rrFRAME r10 ; Frame pointer for stack access
2022-05-03 18:24:07 %define rrW r9 ; Vector record pointer
2022-05-03 18:24:09 %define rrTMP r8 ; Scratch register
2022-05-03 18:24:11 %define rrTOS rcx ; Cached Top Of data Stack item
2022-05-03 18:24:13 %define rrA rsi ; "A" address register
2022-05-03 18:24:15 %define rrB rdi ; "B" address register
2022-05-03 18:24:17 %define rrEXP rbp ; Longjmp pointer for {| & |}
2022-05-03 18:24:19 %define rrTICK rbx ; background operation counter
2022-05-03 18:24:48 BB = Body Base, HB = Header Base. rrEXP - "exception" frame register - that's how I do something akin to setjmp/longjmp.
2022-05-03 18:25:04 Most of the rest should be self-explanatory, but I can answer questions.
2022-05-03 18:25:19 See, I'm 64-bit, so I get 16 regs.
2022-05-03 18:25:45 I couldn't do this system (this way) in 32-bit or 16-bit.
2022-05-03 18:25:51 Not enough regs.
2022-05-03 18:27:33 rrTICK just decrements once each pass through NEXT. When it hits zero, I go run a vectored word, reload it, and continue. Sort of a timer tick interrupt.
2022-05-03 18:27:46 Vector is null for now.
2022-05-03 18:27:58 Well, not null. Sorry. Points to an empty word.
2022-05-03 18:28:31 Right now I have the tick load set to 1024*1024, so I only do anything other than the dec and a non-taken jump once in a million NEXTs.
2022-05-03 18:30:46 I'd like to put in an advertising plug for the stack frame thing - it pretty much just totally eliminates any "serious amount" of stack noodling.
2022-05-03 18:31:34 If I can get what I need with one OVER or so, then I don't bother - but my code just doesn't have any "runs" of stack juggling anymore.
2022-05-03 18:32:51 And the side benefit (not the intended purpose, but a "perk") is that it lets you drop unknown amounts of leftovers from the stack. That comes in really handy sometimes when a word contains multiple returns.
2022-05-03 18:33:11 That don't all have to leave the same stack layout, if the caller calls from inside a frame.
2022-05-03 18:33:31 That shortens code too.
2022-05-03 18:33:43 "They" don't...
2022-05-03 18:34:19 { and } do a frame. The pattern is { ...code ... }
2022-05-03 18:34:37 } puts SP back where it was in {, and then drops additional items.
2022-05-03 18:36:51 I haven't experimented with whether a negative will work, but offhand I can't think of any reason it wouldn't.
2022-05-03 18:37:25 Usually, though, I use it to drop incoming parameters, and possibly use the deepest parameter slot for a result.
2022-05-03 18:39:07 eris: Also, I think that the numbered registers, r8-r15, may involve a REX byte in the instruction. So it's possible that it would be slightly faster if I had used old registers for the stuff NEXT and docol need. But I doubt it's a very big difference.
2022-05-03 18:39:48 There's also the option of using rsi for rIP, and using a LODS? instruction in NEXT. But I *think* I timed that once and found it to be slightly slower, contrary to what intuition would say.
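(A sketch, not from the log: a portable high-level model of the { ... } frame words Kip describes above. His { and } are primitives working off the rrFRAME register; here a VARIABLE and DEPTH stand in for it, and the "drop additional items" part of } is left out since the log doesn't show its exact syntax.)

    variable frame                        \ stands in for the rrFRAME register
    create saved-frames 8 cells allot     \ small save stack so { ... } can nest
    variable sf   saved-frames sf !
    : {   ( -- )                          \ open a frame: remember the current depth
       frame @ sf @ !   1 cells sf +!   depth frame ! ;
    : }   ( ... -- )                      \ close it: discard everything pushed since {
       begin depth frame @ > while drop repeat
       1 cells negate sf +!   sf @ @ frame ! ;

With that, a word with several exit paths can leave any amount of junk above the frame and the closing } still cleans it up, which is the "drop unknown amounts of leftovers" benefit mentioned above.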
2022-05-03 18:40:08 oh wow, that's a lot of registers
2022-05-03 18:40:44 huh, i thought that LODS for NEXT would be faster
2022-05-03 18:41:01 guess i was right to put A and B in ESI and EDI
2022-05-03 18:53:28 Maybe I should measure it again. I might have fouled it up somehow - it was years ago.
2022-05-03 18:53:43 Because yeah - I'd have thought the same.
2022-05-03 18:54:28 Right now, since adding the tick register, I only have rax and rdx left that don't have any "continuous use."
2022-05-03 18:54:46 r8 and r9, though, don't store anything long term - they're just scratch.
2022-05-03 18:55:00 And I'm not doing anything with rsp
2022-05-03 18:55:19 I just leave it the way Linux hands it to me.
2022-05-03 18:55:45 ahh
2022-05-03 18:55:49 The Forth I'm working on right now: RSP for return stack, RBP for data stack, RBX user area pointer, RAX top-of-stack
2022-05-03 18:56:01 I'm doing 32-bit so i could potentially run on older computers
2022-05-03 18:56:03 And currently I've not changed memory maps or anything
2022-05-03 18:56:13 Yeah, I don't have a user area pointer. :-|
2022-05-03 18:56:25 I've got EAX as a scratch because it tends to get thrashed by instructions
2022-05-03 18:56:33 What I'm probably going to need, though, is a "task block pointer," when I start wanting multi-threading.
2022-05-03 18:56:45 That's a user area really
2022-05-03 18:56:51 That's what it's for
2022-05-03 18:56:57 Yeah, it is similar.
2022-05-03 18:57:08 I've just used it as a globals pointer so far, I've not actually implemented any multitasking stuff
2022-05-03 18:57:19 Each thread would have its own value of return and data stack pointers.
2022-05-03 18:57:26 Yup
2022-05-03 18:57:48 I'm just not sure that's enough - I may need one more.
2022-05-03 18:57:58 A fixed one.
2022-05-03 18:58:12 I'll see what I run into when I need to use that last free register
2022-05-03 18:58:16 If I want the task to have anything "per task" other than the stacks.
2022-05-03 18:58:27 I don't understand why you've used so many registers
2022-05-03 18:58:30 I mean, they'd need to have their own sp0 and rp0 too.
2022-05-03 18:58:44 Well, each one speeds things up.
2022-05-03 18:58:57 I mean eris[m]
2022-05-03 18:59:01 But yeah - I was conscious of how thoroughly I've "given up" Forth's native "fast context switching."
2022-05-03 18:59:03 Oh.
2022-05-03 18:59:20 Context switching isn't fast any more when you're using nearly all the regs.
2022-05-03 18:59:25 veltas: I'm on 32-bit
2022-05-03 18:59:27 I'm on subroutine threaded as well so I guess RIP is my PC
2022-05-03 18:59:30 I only have 7 general purpose registers
2022-05-03 18:59:40 Yeah.
2022-05-03 18:59:50 I've used 6
2022-05-03 18:59:51 eris[m]: Yeah so don't use them
2022-05-03 19:00:00 ?
2022-05-03 19:00:01 what?
2022-05-03 19:00:09 Save them for CODE usage
2022-05-03 19:00:47 there's a designated scratch register already
2022-05-03 19:01:00 plus, the two address registers are freely clobberable
2022-05-03 19:01:10 a whole three registers to work with :D
2022-05-03 19:01:29 What threading model?
2022-05-03 19:01:40 debating
2022-05-03 19:01:48 TOS is sort of 'free' because the assumption is you'll be using that anyway
2022-05-03 19:02:02 wym TOS is free?
2022-05-03 19:02:07 ah
2022-05-03 19:02:08 yea
2022-05-03 19:02:08 i guess
2022-05-03 19:02:17 Do you want to try and match Starting FORTH's model?
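(A sketch of the per-task "user area" veltas describes above, in the classic USER-variable style; UP here is an ordinary variable standing in for the register, RBX in his system, that a task switch would reload.)

    variable up                            \ base address of the current task's user area
    variable user-offset   0 user-offset !
    : user   ( "name" -- )                 \ define a per-task cell at the next free offset
       create user-offset @ ,  1 cells user-offset +!
       does> @ up @ + ;                    \ run time: this cell's offset + current task's base
    user sp0    user rp0    user base

Switching tasks then amounts to pointing UP at another task's area; sp0, rp0, base and the rest automatically refer to that task's copies.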
2022-05-03 19:02:23 why do you think that?
2022-05-03 19:02:27 Or FIG Forth?
2022-05-03 19:02:30 i've never read starting forth
2022-05-03 19:02:47 i skipped straight to thinking forth
2022-05-03 19:02:49 :P
2022-05-03 19:03:05 i don't really know about models other than indirect threading
2022-05-03 19:03:12 i'd be interested to know :)
2022-05-03 19:03:35 i've done a lil forth on the J1 CPU, which doesn't use any threading model
2022-05-03 19:03:36 because
2022-05-03 19:03:37 well
2022-05-03 19:03:42 Have you read the Moving FORTH articles?
2022-05-03 19:03:47 nope
2022-05-03 19:04:06 i have a copy of thinking forth in front of me
2022-05-03 19:04:11 That's a good place to read about other models, although opinions on performance might be a little outdated
2022-05-03 19:04:22 The articles
2022-05-03 19:05:19 do you have any links?
2022-05-03 19:05:30 last time i tried to look i don't think I came up with anything conclusive
2022-05-03 19:05:43 https://www.bradrodriguez.com/papers/moving1.htm
2022-05-03 19:06:19 That first article explains what the different threading models are
2022-05-03 19:07:04 The later articles explain performant register allocations for different models on different older CPUs, and how stuff like DOES> works
2022-05-03 19:07:58 oh
2022-05-03 19:08:14 I think I remember Chuck Moore talking approvingly of subroutine threading
2022-05-03 19:08:18 haha, i already knew about these
2022-05-03 19:08:23 other than token-threading
2022-05-03 19:08:27 veltas: that's on his CPUs
2022-05-03 19:08:31 where subroutines are low cost
2022-05-03 19:08:34 Indirect threading is the simplest model
2022-05-03 19:08:54 It's the classic model
2022-05-03 19:08:57 on traditional CPUs, yea
2022-05-03 19:09:10 Old and new it's the simplest
2022-05-03 19:09:22 On newer CPUs it's very space-inefficient
2022-05-03 19:09:30 by traditional i mean non-forth
2022-05-03 19:09:49 I'm not responding to what you said about his CPUs
2022-05-03 19:09:57 ah
2022-05-03 19:10:19 Token threading IMO actually makes a lot of sense today on e.g. x86-32 and x86-64
2022-05-03 19:13:46 Subroutine threading is theoretically the fastest, if you optimise. If you don't optimise colon definitions then apparently direct threading is faster on modern CPUs, although I've not tested this
2022-05-03 19:15:06 I've heard the same thing - haven't tested it either.
2022-05-03 19:17:12 optimising incurs complexity
2022-05-03 19:17:45 I think that supporting very limited inlining, tail calls, constant folding... that stuff can be quite simple
2022-05-03 19:17:52 And can probably give you a lot
2022-05-03 19:18:59 constant folding?
2022-05-03 19:19:00 you mean [ ]
2022-05-03 19:19:41 No I mean automatically doing constant folding without making the programmer do it with `[ ... ] literal`
2022-05-03 19:20:51 Logic like... `1 2 rshift` can be converted to a constant because `rshift` is a pure word, it takes 2 arguments, so we can just execute it on those numbers right now
2022-05-03 19:21:10 'pure' i.e. does not care about global state, does not affect global state
2022-05-03 19:22:10 see, now this requires tracking purity
2022-05-03 19:22:30 Yep... or at least having a static flag for purity on some basic words
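(For reference, the manual version alluded to above, where the programmer does the folding with [ ... ] literal; the automatic scheme being discussed would just do the equivalent without being asked.)

    : mask8   ( x -- x' )   [ 1 8 lshift 1- ] literal and ;   \ 255 is computed once, at compile time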
2022-05-03 19:22:43 You don't need to 'track' it per se although that would be better
2022-05-03 19:23:01 you keep adding conditionals to the compile process :p
2022-05-03 19:23:07 I'm also trying to generate code that does some simple register allocation instead of using the stack etc
2022-05-03 19:23:19 Peep-hole optimisations
2022-05-03 19:23:57 Better code generation if e.g. I'm doing `2 +` I'll generate `ADD RAX, 2` rather than putting 2 on stack / in register
2022-05-03 19:24:19 i've always enjoyed the directness of forth :akaneshrug:
2022-05-03 19:24:35 Yeah as I said it's not very 'forthy'
2022-05-03 19:24:43 i forgot i wasn't on a matrix channel
2022-05-03 19:24:43 oh my
2022-05-03 19:24:44 hope that shrug isn't too killer
2022-05-03 19:24:45 I'm in two minds about it, but I think there's a place for it
2022-05-03 19:25:02 Not the shrug, the optimisation
2022-05-03 19:25:20 yea
2022-05-03 19:25:35 I certainly know what you mean
2022-05-03 19:25:44 i'm still getting beaten up by machine forth's word-boundaries....
2022-05-03 19:31:57 What's cool is when stuff like ROT and SWAP are entirely optimised out, when putting stack stuff into registers
2022-05-03 19:32:25 The actual movement just changes which registers the operation starts with
2022-05-03 19:34:07 My dog is snoring so loud
2022-05-03 19:42:31 I'm not familiar with :akaneshrug:
2022-05-03 19:44:02 I've never figured out how to do that optimization cleanly - whenever I've considered more than one stack item in registers, I've wound up deciding it would force me to have multiple versions of each primitive - one for each possible register permutation.
2022-05-03 19:44:22 If you had that, the compiler could keep up with the current permutation and compile the right one.
2022-05-03 19:44:27 Sketching JSON interface https://pastebin.com/raw/FDuRQLmg
2022-05-03 19:44:51 Oh, is :akaneshrug: something that permutes the stack in some general purpose way?
2022-05-03 19:45:29 I think it generates a reaction image in some clients
2022-05-03 19:45:43 Like a graphical emoticon substitution
2022-05-03 19:45:52 Ah.
2022-05-03 19:46:00 But what do you think of the JSON idea I pasted
2022-05-03 19:47:22 I'm sort of sketching out the interface for a JSON library in Forth, because I said something the other day about how it's done 'wrong' everywhere and I think I can make it nice in Forth
2022-05-03 19:47:55 No time to do that now though, need to sleep soon
2022-05-03 19:50:04 I think the aim is to use exceptions to handle failed parse attempts
2022-05-03 19:51:07 And JSON-SEARCH will iterate trying the first word until it works, using the second word in-between to skip stuff that won't parse, until neither parses
2022-05-03 19:51:31 Anyway I'll produce actual working code and then maybe people will care :(
2022-05-03 19:57:20 I need to scroll back up and see it - sorry. I spent four hours this afternoon in a dentist's chair, so I'm behind some.
2022-05-03 19:57:26 How long ago did you post it?
2022-05-03 19:57:35 https://pastebin.com/raw/FDuRQLmg
2022-05-03 19:57:35 Oh - right there.
2022-05-03 19:57:48 I see - sorry. Lemme look...
2022-05-03 19:57:48 Should be in an optician's chair! :P
2022-05-03 19:57:57 :-) Nice...
2022-05-03 19:58:25 I'll get actual code working and then share some nice demo rather than just this
2022-05-03 19:58:31 This is just design work
2022-05-03 19:58:36 Anyway time to sleep
2022-05-03 19:59:52 Rest well.
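(Going back to the automatic constant folding discussed above: a toy sketch of one way a compiler could do it. Nothing here is from veltas's actual code; lit,, pure2,, and the two-deep literal buffer are invented for illustration. The idea is that literals are held back instead of compiled, and a word marked pure with a 2-in/1-out effect gets executed at compile time whenever both its arguments are already known.)

    variable lit#                          \ how many pending compile-time literals we hold
    create lit-vals 2 cells allot          \ their values, oldest first

    : flush-lits   ( -- )                  \ give up folding: compile the pending literals
       lit# @ 0 ?do  lit-vals i cells + @  postpone literal  loop  0 lit# ! ;

    : lit,   ( x -- )                      \ record a literal instead of compiling it yet
       lit# @ 2 = if flush-lits then
       lit-vals lit# @ cells + !   1 lit# +! ;

    : pure2,   ( xt -- )                   \ compile a pure 2-in 1-out word, folding if possible
       lit# @ 2 = if
          lit-vals @  lit-vals cell+ @  rot execute   \ both arguments known: run it now
          0 lit# !  lit,                              \ the result becomes a pending literal
       else
          flush-lits  compile,
       then ;

The outer compiler would route numbers through lit, and pure words through pure2,; fed `1 2 rshift` it never compiles the two pushes and the shift, just (eventually) a single literal.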
2022-05-03 20:07:52 eris[m]: that J1 cpu being the excamera J1?
2022-05-03 20:09:22 found the whole 'unencoded' instruction idea intriguing
2022-05-03 20:11:20 was playing around in logisim evolution a few moons ago and got quite a ways into implementing fcpu16 as a boolean sequential circuit
2022-05-03 20:13:00 in the terminology of Philip Koopman's book Stack Computers: The New Wave, fcpu16 is a modified canonical dual stack machine
2022-05-03 20:14:45 KipIngram, why would you need multiple versions of the primitive? you could construct machine code on the fly with whatever register happened to be needed
2022-05-03 20:15:58 but I borrowed a trick from the excamera J1 of having all the possible alu operations performed on TOS and NOS and then just muxing the output from them later in the instruction cycle when the op is actually known
2022-05-03 20:18:14 this means that while an instruction is being fetched, the aforesaid alu ops are also being performed
2022-05-03 20:20:08 MrMobius: That was me in an indirect threaded mindset. Sure, if you're building your code as you go, then it would be different.
2022-05-03 20:21:24 Sounds like the path you're on would involve almost not even really having a "stack" (you still might), but really it's the compiler keeping up with where everything is, and your stack diddling would get handled then, at compile time. Just update the compiler's "map."
2022-05-03 20:22:05 But won't you sooner or later need to call a word you've already defined, and it will expect the data in a certain arrangement? In that case the compiler would have to shuffle everything around for you at runtime, right?
2022-05-03 20:22:40 Once you've actually defined a word, that word exists as code, and it operates on the data in specific locations.
2022-05-03 20:23:35 So if I want to have all the machine code pre-existing, then I have to have either versions for each data arrangement that might come along, or some mechanism to do rearrangements automagically for me.
2022-05-03 20:25:03 Simple example - say I can say 4 3 - and get 1. If I then want 3 4 swap - to give me 1, that swap has to actually be done, somewhere.
2022-05-03 20:25:33 On the other hand, if there were two versions of - and the compiler could tell which one to call, then it could avoid having any run-time cost for swap.
2022-05-03 20:25:46 Swap would just update the compiler's knowledge of where stuff was at that point.
2022-05-03 20:26:27 I read somewhere a write-up the GForth guys did, where they studied the payoff of caching various numbers of stack items.
2022-05-03 20:26:40 One gave a big payoff, of course, but more only gave a very small payoff.
2022-05-03 20:27:50 If the 4 and the 3 in my example were intermediate results of other blocks of code, then really the best answer is for me to arrange those blocks so the items wind up in the right order when it's time for -.
2022-05-03 20:32:02 KipIngram, ah, I see what you mean. I was thinking about the native code compiling Forths like mecrisp which spills the register to the stack for words that need it
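(A toy rendering of the idea Kip describes just above: keep two versions of a primitive and let the compiler's picture of the top two stack cells pick between them, so SWAP becomes compile-time bookkeeping. All names here are invented, and this only tracks whether the top two cells are exchanged.)

    : -rev   ( a b -- b-a )   swap - ;             \ subtract with operands exchanged
    variable swapped?   0 swapped? !               \ compiler's map: top two cells exchanged?
    : cswap   ( -- )   swapped? @ 0= swapped? ! ; immediate    \ "swap", bookkeeping only
    : c-   ( -- )   swapped? @ if postpone -rev else postpone - then
                    0 swapped? ! ; immediate
    : demo   ( -- n )   3 4 cswap c- ;             \ compiles -rev; runs as 4 3 -, giving 1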
2022-05-03 20:32:30 like : FOO IF 5 THEN ; would unbalance your stack so you have to spill on stuff like that
2022-05-03 20:34:09 seems like you could mix indirect and machine code
2022-05-03 20:34:50 i thought about keeping the last few words in ram before writing any to the dictionary to do some peepholing
2022-05-03 21:37:40 hey
2022-05-03 22:02:25 Right - unbalanced code segments are a problem for that kind of automated optimization too - makes it hard for the compiler to know what to expect.
2022-05-03 22:03:16 The FIG Forth FIND word had variant return - if it failed, it returned just a false flag, but if it succeeded it returned several items.
2022-05-03 22:03:27 It was cumbersome.
2022-05-03 22:03:41 I just return a CFA or 0 on fail.
2022-05-03 22:04:12 But then you have to deal with both 1) recognizing truth and 2) using the value, which also isn't particularly easy.
2022-05-03 22:04:21 ?DUP will fix it.
2022-05-03 22:05:00 But then it itself suffers from the same issue.
2022-05-03 22:05:24 I think ?DUP is my only primitive that does that.
2022-05-03 22:05:38 Has variant return stack possibilities, that is.
2022-05-03 22:06:06 It's fine for you the programmer - you're conscious of it, and it's why you chose that word.
2022-05-03 22:06:13 Hard for "analyzing" source.
2022-05-03 22:08:25 Hi tabemann.
2022-05-03 22:09:31 It makes you wonder if perhaps a "flag" makes sense, instead of returning flags on the stack.
2022-05-03 22:10:05 That doesn't actually solve find's problem, though.
2022-05-03 22:10:40 How does functional programming do conditionals?
2022-05-03 22:10:41 the thing is that in many cases branches and unbalanced returns are only a problem for a very smart compiler, because a normal compiler would have to dump registers anyways
2022-05-03 22:10:56 i.e. it would only really be an issue for a multipass compiler
2022-05-03 22:12:05 in Haskell there is special syntax for if then else, but in reality if-then-else is just an ordinary function underneath it all aside from the special syntax (and that normally you can't pass it around as a closure)
2022-05-03 22:12:20 I think unbalanced code happens rarely enough that dumping registers probably isn't much of a hit
2022-05-03 22:12:27 let foo = if bar then baz else quux
2022-05-03 22:12:28 The context I thought about it in was when I was pondering that "typed" Forth extension that would let me do things like Matlab. One of the things I wanted was for the compiler to be aware of what the types of the stack elements were, and to use that pattern of types at the top of the stack to assist word search.
2022-05-03 22:12:46 Which would let you overload words - the same name could do different things depending on the stack's type pattern.
2022-05-03 22:13:11 But to make that fly the compiler has to be able to track what the stack type pattern is going to be as it compiles a new word.
2022-05-03 22:13:37 that is definitely doable with a more sophisticated compiler
2022-05-03 22:13:46 Variant effect paths can mean more than one such type pattern could be associated with a particular point in the code.
2022-05-03 22:13:51 KipIngram, I like typed stacks too. the HP calculators did that. you could do something like "Foo" 5 + and get "Foo5"
2022-05-03 22:13:52 to do overloading you'd probably want something like type classes
2022-05-03 22:13:59 I buy that - I wasn't thinking at a very sophisticated level.
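(The "CFA or 0" convention and the ?DUP idiom from the exchange above, spelled out. LOOKUP is a hypothetical word with the effect ( c-addr u -- xt | 0 ); Kip's actual FIND variant isn't shown in the log.)

    : try-word   ( c-addr u -- )
       lookup ?dup if  execute  else  ." not found" cr  then ;
    \ without ?DUP the failure branch needs an explicit DROP:
    \    lookup dup if execute else drop ." not found" cr then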
2022-05-03 22:14:09 Definitely was still thinking in terms of "one pass forward."
2022-05-03 22:14:38 you could just check the type at runtime which is slow but will always work
2022-05-03 22:14:48 were I to reinvent forth from scratch I'd not use traditional Forth control constructs
2022-05-03 22:14:53 I'd do it more like PostScript
2022-05-03 22:14:54 But actually, how? If there actually CAN be more than one type pattern at some point in the code, how does it choose? Does the compiler have to bifurcate that into multiple flows that get chosen at run-time somehow?
2022-05-03 22:15:09 I actually don't have those constructs in mine.
2022-05-03 22:15:16 I use conditional returns.
2022-05-03 22:15:21 KipIngram, signal a syntax error
2022-05-03 22:15:25 From which I can build anything I need.
2022-05-03 22:15:32 Oh, I see. Just "not allowed."
2022-05-03 22:15:59 if you're using dynamic typing though
2022-05-03 22:16:08 KipIngram, sure. isn't that how python et al work? don't figure anything out at compilation. just wait until the calculation is executed to check anything
2022-05-03 22:16:09 you could do a CLOS or Julia-style multiple dispatch
2022-05-03 22:16:10 MrMobius: Yes, I was definitely trying for a "compile time only" smart compiler.
2022-05-03 22:16:15 ahh
2022-05-03 22:16:17 I wanted the compiled code to be entirely standard.
2022-05-03 22:16:22 that would be neat
2022-05-03 22:16:43 I think you just said the thing I was trying to say.
2022-05-03 22:16:49 You just said it properly. :-)
2022-05-03 22:17:25 I don't think you can do this really with standard Forth
2022-05-03 22:17:58 you either need a Haskell-like system with type inference and type classes or a CLOS or Julia-like system with dynamic typing and multiple inference
2022-05-03 22:18:53 s/multiple inference/multiple dispatch/
2022-05-03 22:19:03 both'd be neat for sure
2022-05-03 22:19:03 If the conditions along a path through the source can be different at runtime, then that becomes multiple paths through the run-time.
2022-05-03 22:19:09 With a "selector" somewhere.
2022-05-03 22:20:04 if you're using static typing, you need to be able to resolve all the possible paths so as to ensure that the stack depth and the types of everything on the stack stay consistent
2022-05-03 22:20:12 i.e. you need a type inference algorithm
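(An aside on the "conditional returns" Kip mentions a few lines up: in standard Forth the same style can be modelled with an immediate word that compiles IF EXIT THEN into the caller. ?EXIT and ABS' are my names; his primitives aren't named in the log.)

    : ?exit   ( f -- )   postpone if  postpone exit  postpone then ; immediate
    : abs'    ( n -- u )   dup 0< 0= ?exit  negate ;   \ return early when already non-negative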
2022-05-03 22:20:37 if you're using dynamic typing, you don't have to worry about such things
2022-05-03 22:20:51 then you just have to worry about dispatch
2022-05-03 22:21:14 you can do a hybrid approach though
2022-05-03 22:21:23 where on paper you use dynamic typing
2022-05-03 22:21:36 but you try to statically resolve as many types as possible at compile time
2022-05-03 22:21:51 so as to resolve all the dispatches
2022-05-03 22:21:55 that you can
2022-05-03 22:22:01 and leave the rest to dynamic typing
2022-05-03 22:22:13 that way you get the flexibility of dynamic typing but the speed of static typing
2022-05-03 22:22:22 I think Julia does something like this IIRC
2022-05-03 22:24:08 one way to extend this to normal Forth
2022-05-03 22:24:20 would be to keep multiple copies of each word internally
2022-05-03 22:24:29 with different levels of specialization
2022-05-03 22:24:47 one copy for each possible specialization of a word based upon the words inside it
2022-05-03 22:24:56 and one copy for pure dynamic typing as a catch-all
2022-05-03 22:25:10 and then behind the scenes use this for multiple dispatch
2022-05-03 22:25:22 but completely hidden from the user
2022-05-03 22:25:31 well, this wouldn't be normal Forth per se
2022-05-03 22:25:35 because it'd be typed
2022-05-03 22:26:01 but it would be without explicitly declaring type signatures and possible dispatch modes
2022-05-03 22:28:31 Right - I follow.
2022-05-03 22:29:28 I think I see the broad brushstrokes pretty well - it makes sense.
2022-05-03 22:31:42 of course, this requires a very smart compiler
2022-05-03 22:32:00 not in the sense of being smart w.r.t. peephole optimization
2022-05-03 22:32:27 but in the sense of having a very smart type inference engine
2022-05-03 22:32:41 especially if you have things like parameterized types
2022-05-03 22:37:14 That would be something like arrays of different sizes?
2022-05-03 22:39:46 more like arrays of different element types
2022-05-03 22:40:08 I was thinking in a very simpleminded way - word headers would describe the expected and produced type patterns.
2022-05-03 22:40:10 arrays of different sizes would require in practice dependent types
2022-05-03 22:40:16 The compiler would just... "compile those."
2022-05-03 22:40:25 And store the result in each new word's header.
2022-05-03 22:40:49 so you plan on making the ( foo -- bar ) into actual syntax?
2022-05-03 22:41:10 That was a way that seemed plausible.
2022-05-03 22:41:29 I also have a stack frame mechanism in my existing Forth. I have words that will index into the stack from the "frame register."
2022-05-03 22:41:47 I'd also thought about possibly tying that to stack comments so those words could be referenced by names given in the comments.
2022-05-03 22:42:01 Again, it would be the same compile time result - just a compiler service.
2022-05-03 22:42:13 I use s0, s1, s2, ... right now.
2022-05-03 22:42:34 I've done no work on either of these fancy compiler things, though.
2022-05-03 22:44:05 Either one makes code more "verbose."
2022-05-03 22:44:45 Initially I had s0, etc. index from the stack pointer. But that was kind of a nightmare - I had to keep pencil and paper by me all the time and keep up with how those targets moved as the stack changed.
2022-05-03 22:45:00 Being able to establish a fixed frame makes it much easier to work with.
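(Finally, a sketch of frame-relative access words in the style of the s0, s1, s2 Kip mentions above, built on the FRAME variable from the { } sketch near the top of the log. The numbering direction is an assumption: here s0 is the cell that was on top of the stack when { ran, s1 the one under it, and so on; these only fetch, they don't store.)

    : s0   ( -- x )   depth frame @ -      pick ;
    : s1   ( -- x )   depth frame @ - 1+   pick ;
    : s2   ( -- x )   depth frame @ - 2 +  pick ;

Because the indices are taken relative to the frame rather than the live stack pointer, pushing and popping inside the frame doesn't move the slots around, which is the "fixed frame" convenience described in the last few messages.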