2023-01-31 12:17:24 what is it called when code is “compressed” but when it is run the “decompression” happens alongside execution of the code?
2023-01-31 12:17:54 that is, the process that the program evolves into is the decompressed form
2023-01-31 12:19:28 an example of such is, say, LIT_1, which pushes the literal one onto the data stack
2023-01-31 12:20:37 : LIT_1 1 ; which in colon-definition form is actually (LIT) 1
2023-01-31 12:21:53 when writing Forth code I often see a common subsequence repeated, which I then factor out into its own word
2023-01-31 12:22:16 mainly to save program space
2023-01-31 12:26:29 Don't know, but it reminds me of how .so's work (they are loaded on first use when dynamically linked in an ELF) and JITs (they compile code when it is run for the first time, or after multiple iterations, or optimise after multiple iterations, etc.)
2023-01-31 12:27:36 not what I mean; there is no decompression into memory, it is just how the process of the program behaves
2023-01-31 12:28:57 It's probably not given enough credit just how good value LuaJIT is compared to JavaScript JITs with millions (billions?) in paid corporate effort behind them
2023-01-31 12:29:50 Are you planning on doing runtime inlining?
2023-01-31 12:30:07 I am talking about the conceptual process from comp sci here, not a Unix or OS process.
2023-01-31 12:35:11 Pretty sure you can demonstrate empirically that the new Wikipedia isn't "more accessible"
2023-01-31 12:35:55 my runtime here is pretty much bare metal: a sequential boolean logic circuit implementing a variation of the canonical dual-stack machine from Philip Koopman's book Stack Computers: The New Wave. This circuit is then being performed as a Secure Multi-Party Computation
2023-01-31 12:37:28 so program and work RAM are at a premium, as they say. So no inlining to get around CPU pipeline stalls, as there is no CPU pipeline
2023-01-31 14:46:23 Zarutian_iPad: I don't know the technical term, but just a couple of weeks ago I linked an article on a Huffman-encoded instruction set and a live decoder meant to be run "at runtime."
2023-01-31 14:46:45 I was briefly enamored of the idea, but pretty quickly decided I wasn't that interested after all.
2023-01-31 14:55:46 veltas: I'm actually toying with ideas for some automatic inlining at the bottom layers of this system.
2023-01-31 14:56:22 I'm specifically intending this for situations where I have plenty of RAM, so it seems worth it to me.
2023-01-31 14:57:34 I haven't quite decided whether to do it "generally" or just inside loops. It's still a pretty fuzzy notion for me.
2023-01-31 15:20:04 I've just been a little more sensitive about the overhead of NEXT the last few days, since I'm in the process of plotting that all out for a new system.
2023-01-31 15:21:12 I keep getting tempted to compromise things I find fairly elegant, in the name of performance. So I decided maybe I'd look into writing things the way they really "want" to be written and see if a bit of inlining would "compensate."
2023-01-31 15:22:11 For performance, those CFA pointers are wanting to get promoted to 64 bits, but I really don't WANT to invest that extra space across hundreds of words and I don't really WANT to split the CFAs and PFAs into two tables.
2023-01-31 15:23:25 you could use truncated pointers whose msbits are zero or a constant
2023-01-31 15:24:22 Sure, and I probably will truncate them to 32 bits if I go this way. It just puts more work into NEXT, which is THE MOST CRITICAL code in the entire system.
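To make the factoring point from the 12:19-12:22 messages concrete, here is a minimal Forth sketch (the counter variables and the name bump! are invented for illustration): a recurring subsequence is pulled into its own word, so each use compiles to a single cell of threaded code instead of repeating the whole sequence.

    variable a-counter   variable b-counter

    \ Unfactored, every counter-bumping definition would compile the whole
    \ sequence  dup @ 1+ swap !  into its body.  Factored out, each use
    \ costs one cell of threaded code instead:
    : bump!  ( addr -- )  dup @ 1+ swap ! ;

    : count-a  a-counter bump! ;
    : count-b  b-counter bump! ;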
2023-01-31 15:25:17 If I use the 64-bit CFA fields and the split tables, then NEXT can just be lodsw; jmp [r15+8*rax].
2023-01-31 15:25:40 (neglecting the counter decrement bit, which is either there or not in any case - separate issue).
2023-01-31 15:26:40 For the 32-bit merged table, I'm looking at something like xor rax, rax; lodsw; shl rax, 3; add rax, r15; jmp rax.
2023-01-31 15:27:46 There are a number of "positions" I could pick in the performance / elegance spectrum. At least three, maybe four.
2023-01-31 15:30:20 Anyway, the only memory space that will be constrained at all in this design is the table space; everything else has a 4GB range.
2023-01-31 15:30:32 Well, actually not table SPACE - more like table size in entries.
2023-01-31 15:30:47 And I don't really consider support for 64k words to be very limiting.
2023-01-31 15:34:00 given ANS racks up a whopping 359 words...
2023-01-31 15:56:03 Yeah, and even GForth is barely over a thousand.
2023-01-31 15:57:11 At any rate, if I ever ran into trouble it would be easy to rebuild the system with the xts promoted to 32 bits.
2023-01-31 16:23:32 Old GForth maybe; I bet new GForth is way more than 1000
2023-01-31 16:26:21 echo words | gforth | perl -nle '$w++ while m/(\S+)/g}{print $w' --> 1864
2023-01-31 16:26:36 Which gforth is that?
2023-01-31 16:26:45 0.7.3
2023-01-31 16:26:50 That's the old one
2023-01-31 16:56:13 That doesn't run on my system.
2023-01-31 16:56:42 I've got 0.7.3 as well.
2023-01-31 16:57:08 I think I scraped my estimate off of a web page on GForth. A glossary.
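As an aside, the word count can also be taken from inside Forth rather than with the shell pipeline above; this is only a sketch, assuming the Forth-2012 word traverse-wordlist is available (Gforth provides it) and using an invented helper name:

    \ Visit every name token in a wordlist and count the visits.
    : count-nt  ( n nt -- n' true )  drop 1+ true ;
    : count-words  ( wid -- n )  0 swap ['] count-nt swap traverse-wordlist ;
    forth-wordlist count-words .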