2023-08-03 03:44:55 More than that, there are only countably many computable questions in general
2023-08-03 03:45:10 Although computable means different things here probably
2023-08-03 03:45:24 Generally 'computable' things are countable
2023-08-03 03:45:38 Countable is quite big and 'should be enough for anyone' though
2023-08-03 03:45:56 It's already bigger than the entire universe and everything in it
2023-08-03 03:46:37 ACTION puts away his Cantor's Dustpan
2023-08-03 03:46:46 veltas: "questions"?
2023-08-03 03:48:49 veltas: wasn't it that algorithms / "solutions" are countable, but "problems" aren't?
2023-08-03 07:14:04 Man, Forth keeps getting hate over in ##asm. Not really sure why.
2023-08-03 07:22:27 DKordic: Now, that p-adic stuff is something I picked up from Wildberger videos that I thought was utterly fascinating.
2023-08-03 07:22:46 Controversial or not, the guy has some worthwhile things that he teaches.
2023-08-03 07:23:29 I couldn't walk anyone through it right now - it's been a while. But that was good material.
2023-08-03 07:25:35 One thing I do remember is that when working with triangles he just avoids talking about side lengths, because that leads straight to irrational numbers. Just take a right triangle with the sides next to the right angle of length 1 - the diagonal is length sqrt(2), which is irrational.
2023-08-03 07:25:48 Instead he quantifies those things by the area of a square.
2023-08-03 07:25:56 So those areas would be 1, 1, and 2.
2023-08-03 07:25:59 Did some p-adic thing go viral recently? I studied them briefly back in undergrad, when they were obscure enough that even number theory junkies didn't necessarily know about them.
2023-08-03 07:26:10 Keep hearing people casually bring them up now.
2023-08-03 07:26:20 And that defines the thing as well as the lengths do.
2023-08-03 07:26:41 Not that I know of. I only know them from Wildberger's coverage of them.
2023-08-03 07:27:09 I just stumbled across his "Wild Trigonometry" videos once, watched a bunch of them, and then went from there through some of his other stuff.
2023-08-03 07:28:28 KipIngram: You might like this series on Ridiculously Huge Numbers: https://onion.tube/playlist?list=PL3A50BB9C34AB36B3
2023-08-03 07:28:52 Goes way past Graham's number and even TREE(3).
2023-08-03 07:29:06 Fun - I'll take a look.
2023-08-03 07:29:38 I've seen Quora questions pop up on Wildberger. Typically questions like "is he a genius or a crackpot?" As though those are the only two possibilities.
2023-08-03 07:30:06 It's pretty fascinating how something as basic as wanting to "count really high" quickly starts to poke at logical foundations.
2023-08-03 07:30:14 Lol. I'll have to check out Wildberger.
2023-08-03 07:30:27 I find him to be pretty reasonable. He doesn't try to claim modern math doesn't "work" for anything. I think he'd concede that for an engineer or someone else who just needs to "get answers" everything holds together pretty well.
2023-08-03 07:30:45 He's just concerned that there's a lack of "total rigor" in the "pure math" sense.
2023-08-03 07:31:00 And I agree with him that the goal of pure math is absolute, unquestionable rigor.
2023-08-03 07:32:23 The math guys have a problem, though - I think people have shown, using Gödel-type arguments, that it's IMPOSSIBLE to have a math system that's both consistent and COMPLETE at the same time. So there just may not be a way to have total rigor that lets you work with EVERYTHING.
2023-08-03 07:32:42 Well, math whitepapers are mostly intuition pumps for people on the inside. If you jump into proof checkers (Lean, Metamath, Coq, etc.), then there's arguably a lot more rigor.
2023-08-03 07:33:37 I think the idea is that once you get to a system that's powerful enough to express arithmetic, then there are guaranteed to be true statements you can't prove are true.
2023-08-03 07:33:49 They're generally very obscure, remote statements, though.
2023-08-03 07:34:08 I.e., not statements that you or I would be likely to need for any practical work.
2023-08-03 07:34:44 You're touching on what are usually called "natural" examples of independent statements.
2023-08-03 07:34:59 So if you're building a system for practical work, you can probably prove it's all correct.
2023-08-03 07:35:30 Though it's somewhat of an abuse of "natural", stuff like Goodstein's Theorem.
2023-08-03 07:35:34 Okay. I'm bad about not knowing all the formal terminology of stuff I think about. :-(
2023-08-03 07:35:57 On these "out in the fringes" topics I tend to "peruse" rather than study deeply.
2023-08-03 07:36:44 I'm interested in a lot of things, but usually not so much that I muster up the energy to try to "master" the area.
2023-08-03 07:37:41 Automatic proof systems - I find them quite interesting, but definitely don't really know how to "do" any of it.
2023-08-03 07:38:30 It seems like a powerful idea and I think it probably has a lot of value. But the stuff I actually do, I wind up just trying to do in a way such that I can look at it and personally convince myself that it's right.
2023-08-03 07:39:19 Part of why I like Forth and its advocacy of short definitions. Short = small number of words, and I can just hold the whole thing in my head and "see" that it's going to work.
2023-08-03 07:43:03 Riviera: Computable questions are questions whose answers can be computed with an algorithm
2023-08-03 07:43:36 i have a very simple understanding of math things :\ there is "reverse mathematics", which aims to find the minimal axiom set needed to prove given theorems; smaller is better lol
2023-08-03 07:45:47 I've built up a fair bit of ability in what I'll call "continuous" math. Extensions of calculus. Differential equations, tensor stuff, etc. I did that deliberately in grad school because I knew that I wanted to study higher physics later on in life, on my own.
2023-08-03 07:46:06 But I discovered later that other parts of math were important in physics too, and I neglected some areas.
2023-08-03 07:46:22 Group theory, topology, that kind of thing. I'm sadly weak on those fronts.
2023-08-03 07:46:42 I've gotten a little better, but not having gotten any formal training makes it hard.
2023-08-03 07:46:48 xelxebar: What sort of hate is Forth getting in ##asm?
2023-08-03 07:47:30 Hunh. I would expect Forth hate in #c, but I'd think the ##asm guys would be a little more open-minded.
2023-08-03 07:47:54 Generally on IRC I don't expect anything at all
2023-08-03 07:48:10 Good point.
2023-08-03 07:48:14 It's a pleasant surprise when people have anything good to say
2023-08-03 07:48:24 I think to some extent it's a human trait to like finding things to hate on.
2023-08-03 07:48:37 Yes, and I'm guilty of it
2023-08-03 07:48:41 KipIngram: Just gonna drop this at you: https://us.metamath.org/. Proof system, but one that's simple enough that pretty much anyone can write an implementation and verify the proof database.
2023-08-03 07:48:45 And the net has given us a perfect vehicle for it.
2023-08-03 07:48:49 Safe, anonymous, etc.
2023-08-03 07:48:56 I'm not anonymous
2023-08-03 07:49:01 Would be *super* cool to have a Forth checker.
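(For the curious, this is roughly what handing a statement to one of those proof checkers looks like. A minimal sketch in Lean 4 syntax, purely illustrative and unrelated to Metamath's own notation:)

    -- Two tiny machine-checked proofs (Lean 4, illustrative only).
    example : 2 + 2 = 4 := rfl                -- true by computation
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b                        -- reuse a library lemma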
2023-08-03 07:49:24 I'm very critical of the Forth standard, of the new C standards, of Rust fans.....
2023-08-03 07:49:24 Well, me neither - I smile at the clever nicks and stuff I run across, but I've always just been KipIngram pretty much everywhere online.
2023-08-03 07:49:36 veltas: Oh, just that it's insane. Nothing particularly substantive.
2023-08-03 07:49:40 And it wouldn't be hard to "find me" for anyone who invested any effort.
2023-08-03 07:49:52 xelxebar: It is a bit insane, one of the reasons I like it
2023-08-03 07:50:04 Beauty is in the eye of the beholder
2023-08-03 07:50:05 It's insanely sane.
2023-08-03 07:50:39 Came up because I was asking for help with my hand-disassembled code from SmithForth. Trying to sanity check it.
2023-08-03 07:50:42 Sanity is in the eye of the huge monster outside my house right now
2023-08-03 07:51:26 xelxebar: Yeah, I thought it would be that
2023-08-03 07:51:27 xelxebar: Did you take a look at PlanckForth?
2023-08-03 07:52:29 KipIngram: Yes, I did! Didn't go through it in detail yet. Looks authored by a Japanese guy, so I might try reaching out to him. Vague chance we live close by.
2023-08-03 07:53:58 veltas: I mean, I *did* hand-check that my code assembles to exactly the same bytes as SmithForth, modulo memory locations.
2023-08-03 07:54:38 But mine segfaults somewhere in a sea of generated code/data.
2023-08-03 07:55:27 Interestingly enough, the SmithForth guy seems to have some signature *quirks* in his hand-written opcodes that no assembler is likely to output naturally.
2023-08-03 07:56:00 Like a null REX byte, or slightly quirky encodings of certain instructions.
2023-08-03 07:56:22 Had to go out of my way to fix those up in my code to produce the right thing.
2023-08-03 07:59:53 Heh. Last night I made a remark in Reddit's #embedded that I tried to avoid Microsoft if at all possible. That earned me a downvote from someone.
2023-08-03 08:00:55 xelxebar: There used to be a freely available assembler (this is way, way back), and the guy said that he had wired in certain encoding quirks that would let him prove in court that a binary had been assembled with his tool.
2023-08-03 08:01:10 He was happy for people to use it non-commercially, but said he'd come after commercial users who hadn't paid.
2023-08-03 08:02:09 It was a lot easier to use than MASM.
2023-08-03 08:03:12 That's cute. Steganographic assemblers.
2023-08-03 08:05:30 Later on there appeared a plethora of options, but at the time I wasn't aware of any other than MASM and his.
2023-08-03 08:05:44 Which could have just been ignorance on my part.
2023-08-03 08:06:36 Anyway, someone told me that I should bail on Segger Embedded Studio and use VSCode, but a little scratching led me to associate VSCode with Microsoft.
2023-08-03 08:06:51 It looks like it's also available open source, though.
2023-08-03 08:07:24 And someone also makes VSCodium, which is binaries built from the VSCode source (so you don't have to build it yourself if you don't want to).
2023-08-03 08:07:36 So it looks like you don't HAVE to have MS in your loop if you don't want to.
2023-08-03 08:07:54 I'm pretty happy at the moment with the Maxim SDK, though; it's "working."
2023-08-03 08:08:26 I assume, though, that the Maxim SDK won't do my SAMD51 boards.
2023-08-03 08:08:33 Whereas I think Segger will.
2023-08-03 08:08:45 Haven't tried it yet, though.
2023-08-03 08:09:17 I'm really more interested in the MAX32655 than the SAMD51. It has a better assortment of peripherals.
2023-08-03 08:10:47 I want to get clean code together that brings up the 32655 just enough for the UART to work, and get a Forth built around that. The rest of the peripherals I'll handle in Forth.
2023-08-03 08:12:29 I'm going to use that F18A-like approach, because I'll only have 128kB of RAM, and I think it looks like the most compact of all the approaches I've considered.
2023-08-03 08:19:41 KipIngram: the A86 & A386 assemblers had fingerprinting in the encodings; http://eji.com/a86/
2023-08-03 08:19:53 A86 - I think that was it.
2023-08-03 08:20:03 Tickles my memory the right way.
2023-08-03 08:20:49 I don't remember exactly when that memory is from, but it was a LONG time ago.
2023-08-03 08:21:08 I want to think it was first-wife era.
2023-08-03 08:27:54 I used A86 in my early x86 assembly, later switching to NASM (when I began 32-bit stuff), then FASM
2023-08-03 08:30:37 I've recently used nasm; toyed a little with fasm some years ago.
2023-08-03 08:30:53 fasm seemed to have a nice "cleanliness" to it.
2023-08-03 08:32:05 indeed it does
2023-08-03 08:33:56 Heh. So that little hello world binary I made with the Maxim SDK, that I've proven will drag-and-drop program onto the board, is 43788 bytes.
2023-08-03 08:34:21 Probably could find the kitchen sink in there somewhere if I went looking.
2023-08-03 08:35:47 fasm2 cuts deep
2023-08-03 08:43:54 :P
2023-08-03 08:44:27 Yeeessss! My hand-written disassembly compiles!
2023-08-03 08:44:38 s/compiles/runs/
2023-08-03 08:44:40 no, it's not fasmg that i'm talking about, it's fasm2, which is the sequel to fasm
2023-08-03 08:44:44 which runs on top of fasmg
2023-08-03 08:44:47 i have an obsession, don't i
2023-08-03 08:56:38 I've not used fasmg/fasm2, just the original
2023-08-03 08:58:37 xelxebar: Nice.
2023-08-03 08:59:04 drakonis: We're entitled to the occasional obsession.
2023-08-03 09:00:05 fasmg had the goal of opening the thing up to non-x86 architectures?
2023-08-03 09:00:40 So fasmg+ = fasm2 ?
2023-08-03 09:02:27 it's actually "fasmg+ = fasm2"
2023-08-03 09:02:36 https://board.flatassembler.net/topic.php?t=22855
2023-08-03 09:03:06 you can also get the latest iteration by cloning the repository
2023-08-03 09:03:20 or download the zip without any history
2023-08-03 09:03:49 but in a manner of speaking, yes.
2023-08-03 09:25:09 Also looks like my assembly is byte-for-byte the same as the original SmithForth impl, modulo memory addresses.
2023-08-03 09:25:20 Had to hand-check that, though.
2023-08-03 09:49:42 xelxebar: I think what you've just done was a marvelous exercise. There's no way you didn't profit from that.
2023-08-03 09:52:28 So, for this little 32655 Forth, on the one hand I want the simplest thing I can possibly write. But I don't want to wind up boxed in. I definitely want the ability to run multiple threads. That implies some things about memory management, about how the console gets connected to those threads, etc.
2023-08-03 09:52:49 So I want "simple + minimal extensions" that prevent that boxing in.
2023-08-03 09:55:05 As far as I/O goes, I think that means that I/O at the process level is just consuming and filling buffers. Then some sort of I/O driver handles the other side of that buffering process.
2023-08-03 09:55:44 If you're reading a buffer, you just don't care how it's getting filled.
2023-08-03 09:58:46 so these buffers can be filled/emptied by interrupt service routines?
2023-08-03 09:58:55 And that's nice, because it detaches the Forth itself from the hardware.
2023-08-03 09:59:05 Well, by some other thread.
2023-08-03 09:59:15 Possibly interrupt based, possibly not - not sure yet.
2023-08-03 09:59:38 But if you try to read your buffer and nothing is there, you block, and eventually some other thread will put something in there and you'll get woken back up.
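(A rough Forth sketch of that "process-level I/O is just buffers" idea. All names here are hypothetical, and the polling loop stands in for the real thread blocking and wakeup being described:)

    \ Hypothetical receive buffer: a producer (ISR or driver thread) fills it,
    \ the consuming thread drains it and never touches the hardware directly.
    \ Overflow handling omitted; BEGIN..REPEAT polls where a real system would
    \ block this thread and let the scheduler run something else.
    64 constant rx-size
    create rx-buf  rx-size allot
    variable rx-head  0 rx-head !      \ next slot the producer writes
    variable rx-tail  0 rx-tail !      \ next slot the consumer reads

    : rx-empty?  ( -- f )  rx-head @ rx-tail @ = ;

    : rx-put  ( c -- )                 \ producer side, e.g. the UART ISR
      rx-head @ rx-buf + c!
      rx-head @ 1+ rx-size mod rx-head ! ;

    : rx-get  ( -- c )                 \ consumer side, what KEY would use
      begin rx-empty? while ( block/yield here ) repeat
      rx-tail @ rx-buf + c@
      rx-tail @ 1+ rx-size mod rx-tail ! ;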
2023-08-03 10:00:00 I think to what extent I use interrupts is flexible.
2023-08-03 10:00:47 As far as just getting the data into the buffer goes, this chip supports tying a DMA controller to the UART, so all that could just happen with no software involvement. But somehow you'd have to get woken, so...
2023-08-03 10:01:15 I think I'm unlikely to try anything that fancy initially.
2023-08-03 10:01:43 most such DMA/UART combos have vectored interrupts for that
2023-08-03 10:02:43 Yeah. I imagine an interrupt would be the logical way to tend to the thread management side of things.
2023-08-03 10:03:05 I like interrupts - I'm sure eventually I'll incorporate them in some reasonable way.
2023-08-03 10:03:17 basically a "DMA done!" kind of deal
2023-08-03 10:03:27 But I definitely believe that ISRs should do... "very little."
2023-08-03 10:03:39 Just the minimum touch possible.
2023-08-03 10:04:57 re multithreading: that is usually how thread context switches occur
2023-08-03 10:05:21 Yeah.
2023-08-03 10:06:53 heck, a scheduler in a system I read about set up a round of runslices of threads. Basically a linked list of thread control blocks whose last item was the scheduler itself
2023-08-03 10:07:05 Once the "idea" of treating the console as a pair of buffers is in place, then that leads to treating all I/O that way. Maybe I've got an A/D sampling some signal - that would just go through another buffer.
2023-08-03 10:07:43 Yeah, that sounds reasonable.
2023-08-03 10:08:16 And I found that Pike paper on communicating synchronous processes pretty compelling - I want the ability to write applications along those lines without having to do backflips.
2023-08-03 10:09:08 On the other hand, on this little device it's not going to be Erlang - I'm not going to be trying to run a million threads.
2023-08-03 10:09:26 Just being able to efficiently support "a few" will get me a long way.
2023-08-03 10:09:35 that meant the thread context switching done by the runslice timer ISR is just saving the current thread's context, moving to the next thread control block, and restoring from that
2023-08-03 10:10:01 Yes. Save the regs, change the "thread register," load the regs, continue.
2023-08-03 10:10:39 Just as cutthroat simple as I can make it.
2023-08-03 10:11:08 A ring of threads that are "currently able to run," and then other lists holding threads that are blocked for some reason.
2023-08-03 10:13:02 one thing I have not come across in the thread priority literature is varying the runslice length, as opposed to the usual scheduling frequency
2023-08-03 10:15:56 Yeah, I'm not sure how much "cleverness" to attempt there. It would be easy enough to let each thread have its own value for how long to run before yielding up the processor. And of course, that "ring" could be made a priority list of some kind. Lots of possibilities there.
2023-08-03 10:16:07 I see those as second-layer decisions, though.
2023-08-03 11:16:14 I still think the right approach here is that each thread has an associated block of memory. That block may have only stacks and some minimal set of variables. Or it might have stacks, those variables plus others, and room for a private dictionary, and so on. Just depends on what all I intend for that thread to do.
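(Sketching the kind of per-thread block just described; the fields and their order are purely illustrative, not any particular system's layout:)

    \ Illustrative per-thread RAM block (made-up layout, comments only):
    \
    \   +----------------------------+  <- block base
    \   | per-thread user variables  |
    \   | return stack               |
    \   | data stack                 |
    \   | private dictionary space   |  <- optional, only if this thread compiles
    \   +----------------------------+  <- block end
    \
    \ Killing the thread then amounts to handing this one block back.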
2023-08-03 11:19:14 So it will be completely possible to start a thread, compile some code, run it, and then stop and release that RAM block.
2023-08-03 11:19:51 I actually think I want all vocabularies to go into distinct RAM blocks, whether they are associated with a thread or not.
2023-08-03 11:20:23 So it would be possible for my main process to make a vocabulary, compile code, run it, and then discard it all.
2023-08-03 11:21:20 Up to me to make sure I no longer have that vocabulary in my search path etc. before I discard it.
2023-08-03 12:04:37 So, the basic dictionary acts like a (contiguous) linked list. Word retrieval seems like the overhead would be horrendous.
2023-08-03 12:05:10 Is that not really an issue in practice? Like, you just write new, faster words if you need them?
2023-08-03 12:05:58 Or do some of your Forths have prefix trees or hash tables or whatnot optimizations?
2023-08-03 12:06:10 Forth, somehow, is generally very fast
2023-08-03 12:30:03 It's not really an issue in terms of you as a human watching it happen.
2023-08-03 12:30:21 As thrig says, it's generally plenty fast to keep you happy in real time.
2023-08-03 12:31:06 However, it IS an O(N) process as typically implemented, and that is "bad" in some sense - computer scientists usually don't like O(N). Modern languages use hash-based techniques for this kind of thing and can get O(1) amortized.
2023-08-03 12:31:16 Some Forth implementations (like GForth) use hash tables.
2023-08-03 12:31:25 Much, much faster.
2023-08-03 12:31:56 I'm pretty sure I'll use hashing with table doubling for future desktop implementations (where I have way more RAM than I need).
2023-08-03 12:32:08 I don't know if I'll try to do that on this 32655 with its limited RAM, though.
2023-08-03 12:32:25 Limited RAM means a limited number of words, and that O(N) cost just won't ever be too bad.
2023-08-03 12:32:36 And you can't beat a linked list for "simple and straightforward."
2023-08-03 12:32:58 And remember, this is only while compiling. Once your definitions are compiled you don't have to look the components up anymore.
2023-08-03 12:33:08 That's the whole point of having a compiled form.
2023-08-03 12:41:49 Makes sense. So in practice you just write new words that amalgamate the large block of functionality you want.
2023-08-03 12:42:49 Yes.
2023-08-03 12:43:28 I think if you do implement a hash + table-doubling dictionary, then you certainly ought to offer dictionary support for general use too ("dictionary" in the sense of a data structure, like in Python).
2023-08-03 12:43:42 But that brings in the need for some kind of general memory management, and Forth usually doesn't have that.
2023-08-03 12:43:57 So a bit of planning and work is involved in taking that step.
2023-08-03 12:44:35 But yeah, as far as writing apps goes, you just think about the job you're trying to do, "invent a language" for talking about your solution process, and create a lexicon of words that let you speak that language to your Forth system.
2023-08-03 12:44:59 Factor into short definitions in some appropriate way.
2023-08-03 12:45:21 I.e., don't write Forth definitions that fill up whole screens.
2023-08-03 12:45:44 DSL design is definitely a high-level skill.
2023-08-03 12:46:17 What does DSL stand for there?
2023-08-03 12:46:25 Domain Specific Language
2023-08-03 12:46:37 Ah. I knew that. Somewhere down there. :-)
2023-08-03 12:46:43 :)
2023-08-03 12:46:46 Yes, it's a thoughtful process.
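(Circling back to the dictionary-lookup discussion above, the O(N) search is just a walk down the link chain. A rough sketch: LATEST and >LINK are hypothetical accessors, while NAME>STRING and COMPARE are the standard words:)

    \ Rough shape of the linked-list dictionary search discussed above.
    \ latest holds the most recent entry; >link gives an entry's link field.
    : find-name-sketch  ( c-addr u -- nt | 0 )
      latest @
      begin  dup  while                  \ walk entries until the 0 link
        >r  2dup  r@ name>string  compare 0=  if
          2drop  r> exit                 \ names match: return this entry
        then
        r>  >link @                      \ otherwise follow the link
      repeat
      nip nip ;                          \ end of chain: not found, leave 0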
2023-08-03 12:46:56 Nitty gritty question:
2023-08-03 12:47:13 And it really should involve people who know that domain well.
2023-08-03 12:47:38 So SmithForth word headers start with the address of their first instruction.
2023-08-03 12:47:49 Ok.
2023-08-03 12:47:58 Header layout is interesting.
2023-08-03 12:48:00 Thus, execing the word involves that little indirection.
2023-08-03 12:48:09 Yes.
2023-08-03 12:48:44 In the earliest Forths definitions normally followed headers immediately, and the layout was typically like this:
2023-08-03 12:48:50
2023-08-03 12:48:54
2023-08-03 12:48:57
2023-08-03 12:49:00
2023-08-03 12:49:05
2023-08-03 12:49:09
2023-08-03 12:49:11 ...
2023-08-03 12:49:21 There are a number of issues with that.
2023-08-03 12:49:42 In my system that's "active now," headers are remote from definitions, and headers look like this:
2023-08-03 12:49:51
2023-08-03 12:49:54
2023-08-03 12:49:58
2023-08-03 12:50:00
2023-08-03 12:50:03
2023-08-03 12:50:27 That way I never need to traverse the name string while moving around in the header, and the definition pointer is optional.
2023-08-03 12:50:32 When execing a word, naively I'd just `call (%rcx)` or whatever, where %rcx points to the beginning of the header.
2023-08-03 12:50:40 I "point to" the field.
2023-08-03 12:51:06 one issue with the former is that the link field and code pointer are not at a fixed offset from where the previous link field points to
2023-08-03 12:51:07 That would be direct threading, which means you actually find CODE there in the header.
2023-08-03 12:51:15 Indirect threading adds another level of indirection there.
2023-08-03 12:51:25 Right.
2023-08-03 12:51:33 The second example solves that too.
2023-08-03 12:51:54 However, SmithForth decides to set aside 8 bytes at the end of memory, compile a call instruction + return at that location, and then call that newly compiled call.
2023-08-03 12:52:03 Zarutian_iPad: Either you or MrMobius recommended that latter format.
2023-08-03 12:52:14 I went to some trouble to switch to it, because it just made such good sense.
2023-08-03 12:52:17 And for the life of me I can't grok why SmithForth would be doing things that way.
2023-08-03 12:52:56 Well, first of all, I object to using call at all, unless you're subroutine threading, in which case that's just the way to do it.
2023-08-03 12:53:03 In a threaded system I would use jumps.
2023-08-03 12:53:12 KipIngram: makes FIND and co. rather fast
2023-08-03 12:54:02 xelxebar: Do you know whether SmithForth is direct or indirect threaded?
2023-08-03 12:54:33 If it's got actual machine code in the headers I'd say that means it's direct threaded.
2023-08-03 12:54:45 I don't know what direct and indirect mean in this context.
2023-08-03 12:54:51 because comparing the length of the name field before starting a comparison of the name means that you can skip that dictionary entry
2023-08-03 12:55:02 And the reason they use call there is so that the address of the header is on the processor stack.
2023-08-03 12:55:14 Where you can grab it and use it to access other nearby data.
2023-08-03 12:55:39 But this is not how I do it, so I'm likely not the best person to comment on it.
2023-08-03 12:55:48 Yeah, dictionary entry layout is | Code pointer | Link | Flags/Name Length | Code |
2023-08-03 12:56:18 What you're calling | Code | there is a list of addresses?
2023-08-03 12:56:32 That's "the definition"?
2023-08-03 12:56:33 Oh, no, straight up assembly.
2023-08-03 12:56:53 Are you looking at a : definition word, or at a primitive
2023-08-03 12:56:55 ?
2023-08-03 12:57:09 If there's machine code there, that seems like a primitive.
2023-08-03 12:57:23 xelxebar: so for : definitions you have a jsr or call docol at the front?
2023-08-03 12:57:30 Does | code pointer | point at that code?
2023-08-03 12:57:59 Well, compiling something like : Foo A B C ; would put 3 call instructions inside Foo's Code block. That's what I mean.
2023-08-03 12:58:16 That sounds like code threading.
2023-08-03 12:58:21 subroutine threading
2023-08-03 12:58:46 If you get an actual call instruction for each item.
2023-08-03 12:58:59 In a direct or indirect threaded system you'd just get a list of addresses.
2023-08-03 12:59:11 And a little block of code somewhere else would pick through that list.
2023-08-03 12:59:26 KipIngram: what kind of threading is the one I asked about?
2023-08-03 12:59:51 Ah, so on x86 that would let you skip the 3-byte call instruction overhead for each word.
2023-08-03 12:59:56 Ok, let's start by saying foo is a colon definition.
2023-08-03 13:00:07 If that definition has literal machine code, that's code threading.
2023-08-03 13:00:17 If it's a list of addresses, it's either direct or indirect threading.
2023-08-03 13:00:28 If those address items point to machine code, it's direct threading.
2023-08-03 13:00:41 If they point to an address field, which points to machine code, that's indirect threading.
2023-08-03 13:00:59 Ah, okay. Crystal clear.
2023-08-03 13:01:10 right, direct threading was the thing I was talking about
2023-08-03 13:01:22 What's the tradeoff where you'd want the extra indirection in indirect?
2023-08-03 13:01:29 I think direct threading is generally more popular these days.
2023-08-03 13:01:41 I've always felt indirect threading has the highest level of "elegance," though.
2023-08-03 13:01:58 And I don't think it's *guaranteed* to be lower performance the way people seem to think it is.
2023-08-03 13:02:25 Where does the extra level of indirection come from?
2023-08-03 13:02:39 Very often the addresses in a definition point to a jump or call instruction, in which case you're really still getting that extra level of indirection.
2023-08-03 13:02:44 Just in a different way.
2023-08-03 13:03:24 But if they point to full copies of docol at every location, then yeah, one could argue that might be faster.
2023-08-03 13:03:48 I need to reread Moving Forth.
2023-08-03 13:04:06 I think, though, we're arguing over a very small performance difference, and one should really profile their code and optimize using assembly in the critical parts.
2023-08-03 13:04:15 In which case the threading mode really just doesn't matter.
2023-08-03 13:04:26 For performance, I mean.
2023-08-03 13:05:04 I think it's one of those things that has become a bit of a religious argument and doesn't really need to be.
2023-08-03 13:05:11 However you do it, it's Forth so it's good.
2023-08-03 13:05:33 Huh! Didn't realize this was a core debated topic!
2023-08-03 13:08:33 Oh yes. :-)
2023-08-03 13:08:36 Then there is the ISA design debate.
2023-08-03 13:08:59 BTW, I think the reason the code field originally came last in the header is so that you could leave it out of primitives.
2023-08-03 13:09:10 Well, no.
2023-08-03 13:09:15 That's not what I meant.
2023-08-03 13:09:25 I guess in a primitive it's still there and just points to the next address.
2023-08-03 13:09:51 In direct threaded systems you could leave it out, and just start the primitive right there.
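(To make those distinctions concrete, here is roughly what the compiled body of : FOO A B C ; looks like under each model. Illustrative layouts only, not SmithForth's or any particular system's:)

    \ : FOO  A B C ;  under the three threading models (illustrative):
    \
    \ subroutine (code) threading:
    \     CALL <A's code>   CALL <B's code>   CALL <C's code>   RET
    \     the body IS machine code; no inner interpreter needed
    \
    \ direct threading:
    \     | addr of A's code | addr of B's code | addr of C's code | addr of EXIT's code |
    \     each cell points straight at machine code; NEXT jumps through the list
    \
    \ indirect threading:
    \     | addr of A's code field | addr of B's code field | ... |
    \     each cell points at a code field, which in turn holds the address of
    \     machine code (DOCOL for colon definitions, the primitive's own code otherwise)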
2023-08-03 13:10:06 also like in eForth, the names and links part of the dictionary lives separate from the code
2023-08-03 13:10:21 Yeah, in all of my recent ones too.
2023-08-03 13:11:00 In my current one I have name, link, and a pair of pointers.
2023-08-03 13:11:08 A code pointer and an "implementation" pointer.
2023-08-03 13:11:22 another benefit of direct threading is that you can have different versions of docol and even bytecode interpretation
2023-08-03 13:11:25 For : defs, the code pointer points to docol and the implementation pointer points to the definition.
2023-08-03 13:11:41 For primitives, the code pointer just points to the primitive code, and there is no implementation pointer.
2023-08-03 13:11:55 That's why I've got it at one boundary of the header, so I can leave it out if I want to.
2023-08-03 13:42:31 Hey, I was recently in here asking about other related languages, and hope it's not unwelcome that I drop in now to link a just-born lemmy community for concatenative programming generally: https://programming.dev/c/concatenative
2023-08-03 13:45:38 concats? are those like cats but covinning?
2023-08-03 13:46:00 convinning*
2023-08-03 13:46:59 Hi AndyAndyBoBandy. I personally welcome links that are even remotely related to our topic here.
2023-08-03 14:20:16 ah lemmy, eh?
2023-08-03 14:20:23 this should be good
2023-08-03 14:23:09 WITHIN CELLS
2023-08-03 14:25:33 padded cells for this one
2023-08-03 14:25:48 cells...
2023-08-03 14:27:32 Yeah, we've been watching the new Dresden sub that showed up on lemmy.
2023-08-03 14:27:37 Not much activity there yet.
2023-08-03 14:32:59 : INTERLINKED WITHIN CELLS ; INTERLINKED
2023-08-03 14:33:13 Sorry, I just thought it was funny that WITHIN and CELLS are words in standard Forth
2023-08-03 20:47:12 Just found out about mecrisp: https://mecrisp.sourceforge.net/. Looks to be a nicely optimized Forth.
2023-08-03 20:49:51 haven't tried that one, i tried (flashed and toggled some leds..) flashforth and esp32forth