2022-04-19 09:13:11 Mecrisp-Stellaris are there any benchmark figures of this?
2022-04-19 09:15:49 crc, you around?
2022-04-19 09:23:36 yes
2022-04-19 09:35:52 it's a quiet morning :)
2022-04-19 10:10:21 Morning guys.
2022-04-19 10:15:45 So I did have some further thoughts on last night's discussion. I think the notion of trying to write a Forth that is a "top performer" right out of the box is a thoroughly worthy goal. Doing that would start with choosing the highest performance threading architecture, which would absolutely be code threading. Then you'd face including a bunch of optimizing capabilities in the compiler (deciding when to
2022-04-19 10:15:47 inline primitives being one of them). And so on. I think if someone tackles such a project it would be worth tackling it completely, and doing all those things. I actually hope there is such a Forth, and if there's not that there will be. It's worth doing.
2022-04-19 10:16:17 It's just not what I chose to do - I'm trying to build something that is a blend of "pretty good performance," a high level of elegance, and fairly easy portability.
2022-04-19 10:17:15 I haven't "defined" those criteria in any particularly formal way - I'm just doing what feels right to me each step.
2022-04-19 10:18:59 When I first started this on MacOS, that system required me to make the code relocatable in a particular way. And that steals a little bit of performance right there (since I wanted an indirect threaded system). Now that I've moved to Linux I could potentially remove those bits and speed it up a little, but at this point it's fairly baked in and I haven't decided if I'm going to do that or not.
2022-04-19 10:19:54 That bit of things makes my NEXT five instructions instead of three, but they're very fast instructions compared to the three that would still be there.
2022-04-19 10:20:20 I attempted to measure the difference the other day, and couldn't - it was lost in the noise that background OS operations cause in the test durations.
2022-04-19 10:21:06 That was comforting - at least the two NEXT versions don't have a 5/3 duration ratio - it's much smaller than that.
2022-04-19 10:22:33 Those two "extra" instructions are both register/register adds, whereas the other three all fetch data from memory (in one case it's an immediate operand).
2022-04-19 10:38:08 I have the feeling that I am barking up the wrong tree. Forth is beautiful to program in. But, it does not fit the unix ecosystem.
2022-04-19 10:38:19 every fetch and store needs to be sanitized.
2022-04-19 10:38:34 it is just too much burden for any programming language to carry.
2022-04-19 10:38:44 a pure forth system is a different matter.
2022-04-19 10:39:49 I am not sure how pure forth systems would work when hosting, etc., or providing access to unprivileged users
2022-04-19 10:40:29 they wouldnt
2022-04-19 10:40:34 and thats a great thing
2022-04-19 10:40:46 how?
2022-04-19 10:40:54 why is it a great thing?
2022-04-19 10:40:55 personal computers dont need a notion of users
2022-04-19 10:41:08 its something you own
2022-04-19 10:41:20 you should have power
2022-04-19 10:41:26 in this world, every computer interacts with others.
2022-04-19 10:41:36 forth gives you full power over everything
2022-04-19 10:41:46 doesnt have to
2022-04-19 10:42:00 my 3ds rarely if ever connects to the internet :)
2022-04-19 10:43:59 Is there a way to program in forth but get optimized assembly at run time? an interpreter that is smart enough to run the optimized asm at runtime?
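For readers unfamiliar with the NEXT discussed above, here is a minimal sketch in C of an indirect-threaded inner interpreter. It is purely illustrative, not the poster's actual assembly: a real NEXT jumps rather than calls, and the relocation mentioned above would add a base register to each memory reference.

    /* Minimal indirect-threaded inner interpreter sketch (illustrative only). */
    #include <stdio.h>

    typedef void (*prim_t)(void);      /* machine-level routine                */
    typedef prim_t *xt_t;              /* execution token -> code field        */

    static xt_t *ip;                   /* interpreter pointer into threaded list */

    /* NEXT: fetch the token at ip, advance ip, indirect through the code
       field, and run the routine. (A real Forth jumps instead of calling.) */
    static void next(void) {
        xt_t w = *ip++;
        (*w)();
    }

    /* two toy primitives */
    static void prim_hello(void) { printf("hello "); next(); }
    static void prim_bye(void)   { printf("bye\n"); }   /* ends the thread */

    static prim_t hello_cf = prim_hello;   /* code fields */
    static prim_t bye_cf   = prim_bye;

    int main(void) {
        xt_t thread[] = { &hello_cf, &hello_cf, &bye_cf };  /* a "threaded definition" */
        ip = thread;
        next();
        return 0;
    }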
2022-04-19 10:44:24 something that python/haskell/perl interpreters do.
2022-04-19 10:44:32 For me, I like having the option to interconnect with other machines, but am increasingly working offline
2022-04-19 10:47:00 optimizing forth compilers? mecrisp on the embedded side, not sure on hosted forths. iforth or vfx perhaps.
2022-04-19 10:54:24 haskell isnt interpreted
2022-04-19 10:54:39 youre thinking of a JIT
2022-04-19 10:54:47 if you want it at runtime
2022-04-19 10:56:18 yes, a jit or some such sort of mechanism.
2022-04-19 11:00:34 does the iforth author hang out here?
2022-04-19 11:01:57 Not that I'm aware of
2022-04-19 11:03:03 he's on the comp.lang.forth newsgroup
2022-04-19 11:07:08 for something listing a jit, https://github.com/anse1/firmforth might be interesting (GPLv3)
2022-04-19 11:22:33 it is interesting that these benchmarks http://home.iae.nl/users/mhx/monsterbench.html do not have the C run times.
2022-04-19 11:22:45 crc, thanks.
2022-04-19 11:27:07 I don't see C runtimes as relevant unless you are trying to compete with C for performance
2022-04-19 11:32:13 joe9: What is the task you're trying to get to run fast? Sorry if you said - I stepped away for a little bit.
2022-04-19 11:32:54 Also, I did do a little timing measurement on interpretation speeds, for my system and for GForth.
2022-04-19 11:33:19 GForth whipped the pants off me - I decided that they're using some kind of hash algorithm rather than a linked list search to find words in the dictionary.
2022-04-19 11:33:28 The point being that GForth interprets awfully fast.
2022-04-19 11:33:57 KipIngram: I am exploring a forth on plan9.
2022-04-19 11:34:02 The GForth guys have spent a good bit of time optimizing performance of all types, so if fast is what you're after it's at least a starting point.
2022-04-19 11:34:59 I was in the same ballpark as GForth on compiled code speed, though, and in fact in some cases my system offered words that let me write things in ways that ran faster than the best GForth implementation.
2022-04-19 11:35:06 and I cannot justify a forth program that runs 10 times slower than a C program no matter how elegant it is.
2022-04-19 11:35:17 For a straight heads-up comparison GForth beat me by about 10%.
2022-04-19 11:35:25 that is cool.
2022-04-19 11:35:30 But when I exploited my conditional returns and so on, I beat it by about 15%.
2022-04-19 11:35:38 That was just on simple empty loop timing.
2022-04-19 11:36:26 I timed something like this:
2022-04-19 11:36:46 I think pforth is more portable than gforth.
2022-04-19 11:36:59 does gforth come with an interpreter?
2022-04-19 11:37:01 : down begin 1- dup 0= if rdrop exit then again ;
2022-04-19 11:37:19 Sure - when you run gforth you're in the interpreter.
2022-04-19 11:37:27 And it can certainly load source from files and so on.
2022-04-19 11:37:42 So if I emulated that structure as closely as I could, I was about 10% slower.
2022-04-19 11:37:47 But my system allows this:
2022-04-19 11:37:57 : down 1- .0=; me ;
2022-04-19 11:38:05 which "does" the same basic thing.
2022-04-19 11:38:18 And I won then, just because I had a lot fewer words in the loop.
2022-04-19 11:38:54 That loop contains two NEXT executions.
2022-04-19 11:39:02 https://www.complang.tuwien.ac.at/forth/performance.html from here, even gforth is a few times slower than C.
2022-04-19 11:39:05 Why is that?
2022-04-19 11:39:21 I am talking about run-time performance.
2022-04-19 11:39:31 The other one has more, and an extra jump too.
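For reference, the kind of empty-loop timing compared above can be approximated in C with clock_gettime. This harness is only a sketch (the iteration count is arbitrary) and is not the benchmark actually used in the discussion.

    /* Rough empty-countdown timing harness (a sketch, not the actual test used above). */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        const long n = 100000000L;          /* iteration count: adjust to taste */
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        volatile long i = n;                /* volatile so the loop isn't optimized away */
        while (i) i = i - 1;                /* the "1- dup 0=" countdown, in C */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%ld iterations in %.3f s (%.2f ns/iter)\n", n, secs, secs / n * 1e9);
        return 0;
    }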
2022-04-19 11:39:34 once it is all compiled, it should be running at the same speed as C, correct?
2022-04-19 11:40:02 Well, yes - I just don't have any expectation that any Forth is going to compete with well-optimized C.
2022-04-19 11:40:23 I generally assume it will be a factor slower - anything from 2 to 4 or so.
2022-04-19 11:40:27 why not? when we are using the same C compiler?
2022-04-19 11:40:41 the output would be similar asm, correct?
2022-04-19 11:40:45 It just has to do with how thoroughly the code is optimized.
2022-04-19 11:41:05 The common C compilers do a LOT of optimization that's been slowly engineered into them over a period of many, many years.
2022-04-19 11:41:37 We had an article link in here a week or two ago, title of the article was something like "C Is Not a Low-Level Language."
2022-04-19 11:41:47 That article talked a lot about how smart the C compiler is these days.
2022-04-19 11:42:06 When C first appeared, it was a VERY direct map onto the computer architecture (PDP-11, primarily).
2022-04-19 11:42:07 sure, but, if we are using the same C compilers to generate the underlying asm, why would it cause a slowdown?
2022-04-19 11:42:12 It was invented to be exactly that.
2022-04-19 11:42:46 But processors have changed, and yet C has attempted to keep legacy code running, and running as fast as possible, and that has involved doing a lot of fancy mapping of things onto the new processor features.
2022-04-19 11:43:11 The processors we run on now are in no way the simple structure that your C code implies they are.
2022-04-19 11:43:55 The Forth language structure allows for some optimizations, but there's just no guarantee that it allows for the same / the same amount of optimizations that C does.
2022-04-19 11:43:58 for something like this, https://share.credativ.com/~ase/firm-postgres-jit-forth.pdf . Why is it slower, or not close to C?
2022-04-19 11:44:24 Or maybe it does, but the C compiler has just received tons more attention than Forth has. A LOT of man hours of work are baked into the C compiler.
2022-04-19 11:45:24 When you write down code, in any language, you are trying to specify some particular result. Your system is free to mangle the literal code you write in any way it wants to, so long as it guarantees the right answer pops out at the end.
2022-04-19 11:45:30 And C does exactly that.
2022-04-19 11:45:30 what I am asking is why the JITs would not be using the underlying C implementation?
2022-04-19 11:45:57 GForth may do some too, but like I said, it doesn't represent as many man hours of cleverness as the C compiler.
2022-04-19 11:46:30 the forth jit would be using the same C compiler, correct?
2022-04-19 11:46:35 The language you write down is the input to an optimization process - the executable code is the output. Different language in - different "possibilities" for optimization.
2022-04-19 11:46:57 Different possibilities, and different realities as to how much of the potential optimization has been exploited.
2022-04-19 11:47:07 is it because of all the stack push/pop's?
2022-04-19 11:47:26 c generated using forth as input is likely to be differently structured from what a c compiler is designed to compile optimally
2022-04-19 11:47:32 I mean, you're right in a sense - I get your point. Why can't the Forth compiler just produce exactly the same optimal code the C compiler does?
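One way to make the "differently structured input" point concrete: a literal, stack-by-stack translation of the countdown word into C (a hypothetical shape for Forth-to-C JIT output, not taken from the firmforth or PostgreSQL-JIT material linked above) keeps every intermediate value on an explicit in-memory stack, while the C a human would write keeps the counter in a local that the optimizer can hold in a register. The optimizer may or may not see through the first form.

    /* Hypothetical illustration: the same countdown expressed two ways. */
    #include <stdio.h>

    /* 1) Naive stack-machine translation: each Forth word becomes an
          operation on an explicit data stack in memory. */
    static long stack[64];
    static int  sp = 0;

    static void lit(long x)     { stack[sp++] = x; }
    static void one_minus(void) { stack[sp - 1] -= 1; }
    static void dup_(void)      { stack[sp] = stack[sp - 1]; sp++; }
    static long pop_(void)      { return stack[--sp]; }

    long down_translated(long n) {
        lit(n);
        for (;;) {                    /* BEGIN ... AGAIN */
            one_minus();              /* 1-        */
            dup_();                   /* dup       */
            if (pop_() == 0) break;   /* 0= IF EXIT */
        }
        return pop_();
    }

    /* 2) What a C programmer would write directly (assumes n > 0,
          matching the Forth loop's behavior). */
    long down_native(long n) {
        do { n = n - 1; } while (n != 0);
        return n;
    }

    int main(void) {
        printf("%ld %ld\n", down_translated(10), down_native(10));
        return 0;
    }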
2022-04-19 11:48:01 I think the answer is a combination of 1) the input constrains the possibilities, and 2) the effort that has been invested in the tool defines how much of the possible optimization has been achieved.
2022-04-19 11:48:23 I can't guess how the actual difference should be allocated to those two categories.
2022-04-19 11:51:52 I just use c or assembly (or other languages if it makes sense) where a task needs more performance (or has specialized processing requirements) than I get from Forth. For me, the time & health investment to make a compiler that generates faster code isn't worth it.
2022-04-19 11:52:06 So imagine you had some piece of writing saying the same thing in two different languages (say Greek and French, for example), and you had one translator translate the Greek to English and another translate the French to English. You'll get two English versions, and they will almost certainly not be the same.
2022-04-19 11:52:27 Some of the difference might have to do with Greek and French language possibilities, and some of it to do with the skill level of the translators.
2022-04-19 11:52:32 It's the same thing, really.
2022-04-19 11:52:46 C -> x86 machine code, Forth -> x86 machine code.
2022-04-19 11:53:37 What it's NOT like is having a decimal number and a hex number (the same number) and converting both to binary - you damn well better get the same bit pattern in both cases.
2022-04-19 11:53:42 It's just not that "precise."
2022-04-19 13:14:28 hi, I was just wondering what different approaches are for FORTH on multi-core embedded systems, if you want to be able to utilize the other core(s)
2022-04-19 13:16:02 in particular, I was wondering if it would make sense to somehow treat the other core like a slave CPU, say for only handling certain kinds of operations (math co-processor), like was sometimes done with multi-cpu computers back in the 70s
2022-04-19 13:17:45 in my mind right now is the RP2040 target, which has two cores. The current mecrisp forth implementation has support for only one core, but they were hoping to add support for the other core later
2022-04-19 13:17:53 in some fashion
2022-04-19 13:39:07 I think that would be a fine way to use a core, at least in some situations. Back when I first got into parallelism (like the mid 1980's) a lot of the focus was on very fine grain parallelism - having parallel resources work on the same basic instruction stream.
2022-04-19 13:39:40 But processors really didn't evolve in a way to pursue that; they're mostly good at having the cores doing completely separated work (and by separated I mean including not sharing cache lines).
2022-04-19 13:40:20 If you used core A to set up data that core B would then operate on, I think even if those addresses were in core B's cache to start with you'd flush them when you wrote to those addresses using core A.
2022-04-19 13:40:34 So the first access by B would have to re-load his cache.
2022-04-19 13:40:44 And wherever B left the results would then not be in A's cache.
2022-04-19 13:41:07 So cache coherency keeps you from really having two cores ping-pong data back and forth at absolute top speed.
2022-04-19 13:41:49 So what modern cores do well is serve as parts of a pipeline, in which each member pays one cache load cost to bring a block of data in and then does a bunch of work on that data, then hands it off to the next step in the chain.
2022-04-19 13:42:05 You want the ratio of local work to handoff volume to be as high as possible to get full cache benefits.
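A minimal sketch of the coarse-grained pipeline shape described above, using two POSIX threads handing off one block at a time: each hand-off is per block, not per item, so a cache reload is amortized over the whole block. The block size, single shared buffer, and summing workload are illustrative choices, not anything from mecrisp or the RP2040 discussion. Compile with cc -pthread.

    /* Coarse-grained pipeline sketch: producer fills a block, consumer
       processes it, repeating NUM_BLOCKS times in strict alternation. */
    #include <pthread.h>
    #include <stdio.h>

    #define BLOCK_SIZE 4096
    #define NUM_BLOCKS 8

    static long block[BLOCK_SIZE];
    static int  block_full = 0;          /* 1 when the block is ready for the consumer */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

    static void *producer(void *arg) {
        (void)arg;
        for (int b = 0; b < NUM_BLOCKS; b++) {
            pthread_mutex_lock(&lock);
            while (block_full) pthread_cond_wait(&cond, &lock);
            for (long i = 0; i < BLOCK_SIZE; i++)   /* stage A: fill the block */
                block[i] = b * BLOCK_SIZE + i;
            block_full = 1;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        long *sum = arg;
        for (int b = 0; b < NUM_BLOCKS; b++) {
            pthread_mutex_lock(&lock);
            while (!block_full) pthread_cond_wait(&cond, &lock);
            for (long i = 0; i < BLOCK_SIZE; i++)   /* stage B: a batch of work */
                *sum += block[i];
            block_full = 0;
            pthread_cond_signal(&cond);
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t pt, ct;
        long sum = 0;
        pthread_create(&pt, NULL, producer, NULL);
        pthread_create(&ct, NULL, consumer, &sum);
        pthread_join(pt, NULL);
        pthread_join(ct, NULL);
        printf("sum = %ld\n", sum);
        return 0;
    }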
2022-04-19 13:42:24 See the wikipedia article on "flow based programming" - that's the basic model that multi-core cpus map well onto.
2022-04-19 13:42:55 That's a completely different kind of parallelism from the sort that I first studied way back when.
2022-04-19 13:43:05 Quite "coarse grained."
2022-04-19 14:15:58 KipIngram: do you have opinions/thoughts on how to handle stacks and all that? Task system...?
2022-04-19 16:59:31 I don't know that I have any reason to think my ideas on that are particularly "good." But yeah, I've given some thought to how I'd organize it.
2022-04-19 17:00:51 I see tasks belonging to a single process as sharing that process's dictionary, but each having their own RAM block that will contain the stacks and some "task-specific variables," and finally a RAM section where the stack pointers can be stored while the task is sleeping.
2022-04-19 17:01:29 Most of the registers I'd just push onto the stack, and I could even push one stack pointer onto the other stack - but that last stack pointer has to be "remembered" somewhere so you can undo it all.
2022-04-19 17:02:12 Ready-to-run tasks would be organized in a ring list; tasks waiting for something might be stored somewhere else, in a structure specific to that resource.
2022-04-19 17:02:45 Might have one of those rings for each active core.
2022-04-19 20:51:01 lispmacs[work], I've got a Forth which supports both cores on the RP2040
2022-04-19 20:51:26 https://github.com/tabemann/zeptoforth
2022-04-19 20:56:08 tabemann: Nice; what was your basic use strategy? What status does the second core have when it's not doing any work for you?
2022-04-19 21:07:04 basically it's SMP
2022-04-19 21:07:04 I'm about to do a little work on mine. Right now my keyboard input is blocking. I'm thinking about switching it to non-blocking and putting in a polling loop with sleep periods in between polls. A first step toward enabling the main core to work on background tasks if they're available. Later on I'll set things up so I can specify how many trips through NEXT a thread makes before giving other things a shot,
2022-04-19 21:07:06 but for this first step I'll have background tasks be "cooperative." Eventually I'll get to supporting additional cores, but... baby steps.
2022-04-19 21:07:16 as soon as you start a task on the second core the second core is booted
2022-04-19 21:07:34 once it's booted the two cores are basically equal except that the second core can't reboot the system or erase flash
2022-04-19 21:07:56 I see.
2022-04-19 21:08:22 zeptoforth is preemptively multitasking, and serial IO is blocking, but only blocks the task doing the IO
2022-04-19 21:08:53 That makes sense.
2022-04-19 21:09:24 note that there are words that detect whether serial IO is ready or not
2022-04-19 21:09:29 specifically key? and emit?
2022-04-19 21:09:35 Yes.
2022-04-19 21:09:38 with them one can effectively do non-blocking serial IO
2022-04-19 21:11:34 I don't think Linux gives me a way to ask whether keyboard input is available. I'll have to implement that myself, by doing a non-blocking keyboard read. If it fails, no input is ready. But if it gives the character back I'll have to buffer it and indicate that keyboard input is available, then provide that buffered character when the call for the data is made.
2022-04-19 21:11:45 That's not the only way I could do it, but it's a way.
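A sketch of that non-blocking "read ahead and buffer" approach on Linux, assuming the terminal has already been put in raw mode elsewhere; the names key_query/key and the 2 ms poll interval are illustrative, not the actual implementation being described.

    /* Sketch of KEY? / KEY over a non-blocking stdin with a one-byte
       pushback buffer (illustrative names; assumes raw terminal mode). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static int pending = -1;             /* buffered byte, or -1 if none */

    void kbd_init(void) {
        int flags = fcntl(STDIN_FILENO, F_GETFL, 0);
        fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
    }

    /* key? : nonzero if a byte is available (reads and buffers it) */
    int key_query(void) {
        unsigned char c;
        if (pending >= 0) return 1;
        if (read(STDIN_FILENO, &c, 1) == 1) { pending = c; return 1; }
        return 0;                        /* read failed: nothing ready yet */
    }

    /* key : poll with short sleeps until a byte is available, then return it */
    int key(void) {
        struct timespec nap = { 0, 2000000L };   /* ~2 ms between polls */
        while (!key_query())
            nanosleep(&nap, NULL);
        int c = pending;
        pending = -1;
        return c;
    }

    int main(void) {
        kbd_init();
        printf("got byte %d\n", key());
        return 0;
    }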
2022-04-19 21:11:49 it does
2022-04-19 21:11:58 you can set fds to be nonblocking
2022-04-19 21:12:26 and then you buffer your input
2022-04-19 21:12:26 Yeah, that's the plan. But if a keystroke is ready, how can you find out it's ready without taking it?
2022-04-19 21:12:37 Ok - that's exactly what I'm thinking about.
2022-04-19 21:16:14 note that a polling loop has problems with high baud rates, if characters are received faster than the period between polls
2022-04-19 21:16:44 what I do with zeptoforth is process serial IO with an interrupt handler, which fills an Rx buffer and empties a Tx buffer
2022-04-19 21:16:55 Yeah - I did some serial handling a LONG time ago.
2022-04-19 21:17:20 In my current situation the OS will handle the delicate stuff; my part should be fairly easy.
2022-04-19 21:17:41 yeah, I'm working on bare metal myself
2022-04-19 21:17:58 I also want to use this as a way to deal properly with the extended sequence keys (escape sequences).
2022-04-19 21:18:00 zeptoforth is the OS
2022-04-19 21:18:35 what I've done with that in places such as my line editor is to keep a secondary buffer which keys can be read from and returned to
2022-04-19 21:18:39 Basic idea will be that if it's a multi-byte key, they should all be ready more or less at once - if I get Escape then I check again right away - if nothing else is ready, then it's 'just escape'.
2022-04-19 21:18:46 at least one character in my case
2022-04-19 21:18:50 But if there's more, it's a sequence key.
2022-04-19 21:19:05 so if a character does not form part of a valid escape sequence it will be saved for later parsing
2022-04-19 21:19:24 Yes.
2022-04-19 21:19:56 It's quite annoying that they used a byte that has its own key on the keyboard as the prefix character for those sequences.
2022-04-19 21:20:27 It would all be a lot simpler if the lead-off byte guaranteed it was a sequence.
2022-04-19 21:20:49 yeah
2022-04-19 21:21:28 another thing to do is to count the number of ticks from when the escape is received
2022-04-19 21:21:43 so if there's a delay over a threshold, you'll know it's a real escape keypress
2022-04-19 21:22:07 Right.
2022-04-19 21:22:09 because escape sequences should be quite fast
2022-04-19 21:22:14 That's exactly what I had in mind.
2022-04-19 21:22:40 I think that should work well enough to be more or less completely reliable.
2022-04-19 21:23:19 But it does mean I need to be checking the keyboard quite frequently, so that I actually know when the escape arrived.
2022-04-19 21:25:00 do you have software timer interrupts?
2022-04-19 21:26:07 so when you get an escape you go into a tight loop where you poll the input for a few milliseconds until the software timer expires
2022-04-19 21:26:23 if it expires first you know you've received a real escape key
2022-04-19 21:26:31 No, I figured I'd just sleep for a couple of ms before re-checking.
2022-04-19 21:26:53 if you get another key before it expires and it's a valid escape sequence, you'll know you have an escape sequence
2022-04-19 21:27:08 Right.
2022-04-19 21:27:43 These APL characters I've been learning about the last few days - those are mostly multi-byte sequences.
2022-04-19 21:27:58 But generally not with escape - some other >127 lead-off byte.
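The "wait a few milliseconds after Escape" idea can be expressed with poll() and a short timeout on stdin: if nothing more arrives within the threshold it was a lone Escape keypress, otherwise the sequence continues. This is only a sketch, assuming raw terminal mode; the 25 ms threshold is an arbitrary illustrative choice.

    /* Sketch: distinguishing a lone ESC keypress from the start of an
       escape sequence by waiting briefly for a follow-up byte. */
    #include <poll.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Returns 1 if another byte arrives on stdin within timeout_ms, 0 if not. */
    int byte_follows(int timeout_ms) {
        struct pollfd pfd = { .fd = STDIN_FILENO, .events = POLLIN };
        return poll(&pfd, 1, timeout_ms) > 0;
    }

    /* Call after reading 0x1B: returns 0 for a lone Escape, or the next
       byte of the sequence (e.g. '[' for CSI sequences such as arrow keys). */
    int after_escape(void) {
        unsigned char c;
        if (!byte_follows(25)) return 0;          /* real Escape keypress */
        if (read(STDIN_FILENO, &c, 1) != 1) return 0;
        return c;                                 /* sequence continues */
    }

    int main(void) {
        unsigned char c;
        if (read(STDIN_FILENO, &c, 1) == 1 && c == 0x1B)
            puts(after_escape() ? "escape sequence" : "lone Escape");
        return 0;
    }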
2022-04-19 21:28:55 UTF-8 can be handled separately from escape characters
2022-04-19 21:29:31 because unlike escapes, with a UTF-8 encoding the >127 characters are not valid when input independently, so if you get them unexpectedly you know you've got corrupt data
2022-04-19 21:40:58 okay, now I can re-add my docs for DEFER...
2022-04-19 21:55:44 Yes - those are actually a lot simpler.
2022-04-19 21:56:17 Almost all of them (the APL chars) are three-byte codes that start with the same byte (196, I think).
2022-04-19 21:56:32 A few of them are two bytes, but they all start with a different character.
2022-04-19 21:57:03 to me the hard part with UTF-8 is handling an input buffer containing them in which characters can be deleted
2022-04-19 21:57:03 So the input routine just checks the first byte and immediately knows how many bytes it is.
2022-04-19 21:57:17 I'm thinking I may return all of those in the same 64-bit response.
2022-04-19 21:57:35 so if you want to delete one character, you have to parse the UTF-8 to determine how many bytes to delete
2022-04-19 21:57:42 Right.
2022-04-19 21:57:53 of course, if you internally convert your characters to UTF-32 you don't have this problem
2022-04-19 21:58:23 I can enter those symbols now, but some parts of my system treat them as three chars not only in the buffer but on the screen too.
2022-04-19 21:58:34 The cursor positioning stuff in EXPECT doesn't get that right currently.
2022-04-19 21:59:19 if you are using a large system I'd suggest using UTF-32 for your internal state
2022-04-19 21:59:25 it'll save a lot of headaches
2022-04-19 21:59:43 but if you are using a small system (i.e. embedded) that's really not an option
2022-04-19 21:59:45 It's kind of fun - if I type the character I see it, but the cursor shows up two spaces too far along. And then if I backspace the EXPECT routine removes the third byte and moves the cursor left one.
2022-04-19 21:59:59 But then the routine re-draws the screen, and the symbol is no longer right.
2022-04-19 22:00:16 All of that makes total sense though and is a result of EXPECT insisting that all bytes are independent.
2022-04-19 22:00:24 yeah
2022-04-19 22:01:26 Packing these things into one 64-bit return is pleasing to me, because it keeps KEY a thing that returns a single result in all circumstances.
2022-04-19 22:01:51 The application code is going to have to be aware of this stuff regardless of whether it gets the bytes one at a time or together.
2022-04-19 22:02:58 converting it to UTF-32 would do that
2022-04-19 22:03:04 The low order byte would be the first byte received, and then higher bytes as needed would be filled in.
2022-04-19 22:03:06 i.e. KEY returns a code point
2022-04-19 22:03:21 you could have two layers
2022-04-19 22:03:48 a parsing layer, and a layer which receives code points and escapes
2022-04-19 22:03:50 I was blissfully ignoring all of this until I got interested in APL. :-)
2022-04-19 22:05:08 But that ignoring meant just not using things like cursor keys and so on, which was a bit of a deficiency.
2022-04-19 22:05:42 But with blocking keyboard i/o there just really wasn't a good way to handle it.
2022-04-19 22:06:51 well you've got 64 bits to work with
2022-04-19 22:06:54 eight bytes
2022-04-19 22:07:01 Yes - more than enough.
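A sketch of packing a whole UTF-8 sequence into one 64-bit KEY-style result, first byte in the low-order byte as described above. The first byte's standard length rule tells you how many continuation bytes to collect; key_byte() stands in for a byte-at-a-time read (here just getchar, so the example is runnable on its own).

    /* Sketch: return a whole UTF-8 sequence as one 64-bit value,
       first byte in the low-order byte. */
    #include <stdint.h>
    #include <stdio.h>

    static int key_byte(void) { return getchar(); }   /* stand-in byte-level KEY */

    static int utf8_len(unsigned char b) {
        if (b < 0x80) return 1;                /* 0xxxxxxx : ASCII        */
        if ((b & 0xE0) == 0xC0) return 2;      /* 110xxxxx : 2-byte lead  */
        if ((b & 0xF0) == 0xE0) return 3;      /* 1110xxxx : 3-byte lead  */
        if ((b & 0xF8) == 0xF0) return 4;      /* 11110xxx : 4-byte lead  */
        return 1;                              /* stray continuation byte */
    }

    uint64_t key_packed(void) {
        unsigned char b = (unsigned char)key_byte();
        uint64_t result = b;
        int len = utf8_len(b);
        for (int i = 1; i < len; i++)          /* fill higher bytes in order */
            result |= (uint64_t)(unsigned char)key_byte() << (8 * i);
        return result;
    }

    int main(void) {
        printf("packed key: 0x%llx\n", (unsigned long long)key_packed());
        return 0;
    }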
2022-04-19 22:07:10 if you are outputting Unicode code points you don't need all that many bytes
2022-04-19 22:07:29 so you've still got bits to spare to represent cursor keys and the like
2022-04-19 22:08:02 You know, when I put a three byte APL char in an input buffer (like in EXPECT) and then I get a backspace, you can't tell by looking at the LAST byte that it was an APL char. Handling backspace will mean looking back far enough to guarantee that the last byte is a loner.
2022-04-19 22:08:09 like devote the lower 32 bits to code points and the upper 32 bits to non-Unicode keys
2022-04-19 22:08:55 you can tell where a character starts in UTF-8 by searching from the last byte backwards
2022-04-19 22:09:03 because the first byte is always recognizable
2022-04-19 22:09:09 So I'll need to look at the last three bytes and see if they're together, then look at the last two bytes. And then I know.
2022-04-19 22:09:30 Yes, but you can't base that on just the last byte.
2022-04-19 22:09:39 yes
2022-04-19 22:09:54 Or can you? I printed all these codes out the other day, but I don't have that in front of me.
2022-04-19 22:10:06 Are all of the bytes >127?
2022-04-19 22:10:09 the last byte doesn't tell you where the first byte is
2022-04-19 22:10:23 but it can tell you if it's multibyte or not, obviously
2022-04-19 22:10:34 But does the last byte tell you whether it's a multi or a single?
2022-04-19 22:10:40 yes
2022-04-19 22:10:46 That's nice - ok good.
2022-04-19 22:10:56 That'll make things a little simpler.
2022-04-19 22:11:20 in a multibyte the high bit is set to 1 for all the bytes, whereas for a single byte it's always 0
2022-04-19 22:11:56 Nice.
2022-04-19 22:12:06 https://en.wikipedia.org/wiki/UTF-8#Encoding
2022-04-19 22:12:18 the first byte is always recognizable, and always tells you the total number of bytes
2022-04-19 22:12:58 So I had written some of my editor before I got into APL. But fortunately I'd already decided I wanted to support colors and attributes and stuff, so I already had planned on severing buffer bytes consumed from screen characters consumed.
2022-04-19 22:13:56 And had already recognized that there would be things that would require me to scan a line of text from the beginning, to properly find the right column for the cursor and so on.
2022-04-19 22:14:27 yeah, I wrote a block editor for zeptoforth and had much of the same
2022-04-19 22:24:42 I'm designing this one for future file system compatibility - I might have the on-screen text spread across a pair of partially filled blocks.
2022-04-19 22:25:22 But... it's barely begun yet.
2022-04-19 22:30:23 I'm using an array of pointers to where each line begins in the disk blocks. Each one can potentially be in a different block, but I don't expect to ever exploit that to more than a pair of blocks. The lines would have to be super long to cause more than one block boundary to be onscreen at the same time.
2022-04-19 22:31:41 The idea is for the screen window to be scrollable in the file both horizontally and vertically.
2022-04-19 22:32:52 mine is just traditional blocks except I count from block 0 rather than block 1
2022-04-19 22:33:19 I thought about that, but decided to keep the custom that BLK=0 indicated keyboard input.
2022-04-19 22:34:01 Yours is really "sleeker."
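The backspace case comes down to the continuation-byte test mentioned above: bytes of the form 10xxxxxx never start a character, so you step back over them until you hit a lead byte (or an ASCII byte), and that many bytes get deleted. A sketch, with an illustrative sample buffer:

    /* Sketch: how many bytes to remove from the end of a buffer to delete
       one UTF-8 character (for backspace in an EXPECT-style line editor). */
    #include <stddef.h>
    #include <stdio.h>

    size_t utf8_last_char_len(const unsigned char *buf, size_t len) {
        size_t n = 0;
        while (n < len) {
            n++;
            unsigned char b = buf[len - n];
            if ((b & 0xC0) != 0x80)      /* not a 10xxxxxx continuation byte: */
                break;                   /* this is the start of the character */
        }
        return n;                        /* bytes to drop from the end */
    }

    int main(void) {
        /* "abc" followed by U+2308 (left ceiling), a three-byte sequence */
        const unsigned char line[] = "abc\xE2\x8C\x88";
        size_t len = sizeof line - 1;
        printf("delete %zu byte(s)\n", utf8_last_char_len(line, len));
        return 0;
    }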
2022-04-19 22:34:31 I never bothered to implement BLK
2022-04-19 22:35:01 rather I just copy strings from the block one line at a time into a buffer then EVALUATE the buffer
2022-04-19 22:35:10 When I first started writing Forths I wrote them quite traditionally, and parts of it have "stuck."
2022-04-19 22:35:26 Whereas other parts have changed quite a bit.
2022-04-19 22:36:15 zeptoforth isn't really a traditional Forth in some ways, as it uses quotations pretty heavily, it uses xt's for exceptions rather than fixed integers, and it uses a module system on top of wordlists rather than using wordlists directly (except when bootstrapping)
2022-04-19 22:36:16 Ah. That works.
2022-04-19 22:44:28 It really wouldn't have been hard to have had BLK = -1 indicate keyboard input.
2022-04-19 22:44:56 And that would have simplified some stuff in BLOCK.
2022-04-19 22:45:05 A little at least.
2022-04-19 22:45:36 yeah
2022-04-19 22:52:43 I've fancied at times that I'd like to be able to do things like Linux piping in my Forth. But I've not really figured out how that will even work. Linux's stdin/stdout/stderr is just kind of "different" from how Forth handles I/O. I feel like there must be a smooth solution, but I haven't really pinned it down yet.
2022-04-19 22:53:33 I know I want a command history eventually, and that likely will involve echoing the input lines to disk as a starting point.
2022-04-19 22:54:48 I consider bash command history to be a little deficient in some ways, actually. If you trick it out the right way you can go back and change lines you've already issued.
2022-04-19 22:54:57 I just don't think that should be possible.
2022-04-19 22:55:01 It's a HISTORY.
2022-04-19 22:55:49 Of course, if it's on disk it's editable, so in that sense it would be subject to change. But I don't want to be able to do it from the normal command line itself.
2022-04-19 23:27:00 KipIngram: you might take inspiration from the "genius" part of pipes (beyond simple composition),
2022-04-19 23:27:26 in the old, memory-limited unices pipes were essential because you'd only produce 4k of output, then write it to the pipe
2022-04-19 23:27:42 which the scheduler would know to use to directly schedule the read end
2022-04-19 23:28:27 so you in effect got coroutine-like control flow out of decoupled and "semantically clean" IPC
2022-04-19 23:34:37 Oh, that is interesting.
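A small illustration of that last point: because a pipe's kernel buffer is bounded, a writer that gets ahead blocks in write() until the reader drains the pipe, which is what produces the coroutine-like alternation described above. Purely illustrative, not tied to any particular Forth.

    /* Sketch: the bounded pipe buffer forces the writer and reader to take
       turns, giving coroutine-like flow between the two processes. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        char buf[4096];

        if (pipe(fds) != 0) return 1;

        if (fork() == 0) {                       /* child: producer */
            close(fds[0]);
            memset(buf, 'x', sizeof(buf));
            for (int i = 0; i < 64; i++)
                if (write(fds[1], buf, sizeof(buf)) < 0)  /* blocks when the pipe is full */
                    break;
            close(fds[1]);
            _exit(0);
        }

        close(fds[1]);                           /* parent: consumer */
        long total = 0;
        ssize_t n;
        while ((n = read(fds[0], buf, sizeof(buf))) > 0)
            total += n;                          /* reading lets the writer resume */
        close(fds[0]);
        wait(NULL);
        printf("received %ld bytes\n", total);
        return 0;
    }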