2022-11-22 00:00:19 no idea about collisions in a hash table
2022-11-22 00:00:37 but I'll go kill some noob players instead of learning what they are
2022-11-22 00:00:38 XD
2022-11-22 00:00:46 see you KipIngram, take care
2022-11-22 07:45:08 KipIngram: It's been pointed out that x86 is quite easy to read in octal
2022-11-22 07:45:17 Like it was almost designed for it
2022-11-22 07:45:22 Because there are a lot of 3-bit fields
2022-11-22 07:53:58 huh
2022-11-22 08:01:49 Quite odd
2022-11-22 10:30:49 Interesting. Maybe a couple of the designers were just nice and octal savvy.
2022-11-22 10:31:02 And got into a design rhythm.
2022-11-22 10:31:48 Well, I got that appliance script into "final bash." It's about 200 lines, and is ready to put on the machine and start shaking down.
2022-11-22 10:32:15 No doubt it has some mistakes, but I hope not too many; I've "mentally walked it" several times now.
2022-11-22 11:01:03 I never really got heavy enough into octal to get good at it. By the time I started doing a lot of stuff most stuff was hex, so I got good at that.
2022-11-22 11:01:41 And hex has always struck me as slightly more "elegant," just because not only is 16 a power of two, but 4 is as well.
2022-11-22 11:03:19 So hex digits never have to "straddle" across wider words.
2022-11-22 11:03:43 Well, "commonly encountered" wider words.
2022-11-22 11:49:19 KipIngram: now it takes 2 seconds instead of 4 to sum from 0 to 10000000 (note it's 7 zeroes)
2022-11-22 11:49:32 still no way I'll get near to your forth :/
2022-11-22 11:50:08 it's amazing that your forth can sum a list of the first million numbers in 0.02 seconds
2022-11-22 11:50:27 mine takes 0.25 now
2022-11-22 11:50:47 but meh, that's what you get by writing in assembly
2022-11-22 11:51:25 [ 0 1 1000000 range [ i + ] compile do.list . ] benchmark . => 500000500000 250
2022-11-22 11:52:00 with one more zero => 50000005000000 2226
2022-11-22 11:52:06 2.2 seconds
2022-11-22 11:52:16 without compile it's 4 instead
2022-11-22 11:52:56 compile just converts the [ i + ] into a list of functions, so it removes the interpreter searching
2022-11-22 11:53:29 I have to make it able to compile recursive words or it will loop forever
2022-11-22 11:54:34 I'll memoize what is compiling so it will know a word is already compiled or being compiled; that should also speed up the process a bit when compiling a whole package
2022-11-22 11:54:44 I shouldn't call it compile, but meh
2022-11-22 12:17:30 can't solve the recursive problem easily :/
2022-11-22 12:33:19 But... if you think about what assembly instructions it takes to do that, it's not an unreasonable number. I'm basically running those assembly instructions, with a quite thin layer of overhead on top to string them together. But that's exactly what Forth does - it lets you stitch together primitive code sequences with a very controlled and understood amount of overhead.
2022-11-22 12:33:50 The "glue" that guides you from the end of one primitive's "working instructions" (the ones you need for your application) to the first instructions of the next one we generally call "next" in Forth.
2022-11-22 12:34:04 And I've timed my "next" - it's about 1.3 nanoseconds.
2022-11-22 12:34:40 Getting out of one colon definition and into the next one requires slightly more overhead, but not that much. Still a very small number of nanoseconds.
2022-11-22 12:35:02 So if you take nanoseconds and multiply by a million, you have milliseconds. And I'm using 20 of those.
2022-11-22 12:35:24 And 20 probably isn't awfully different from the number of primitives in those definitions I showed that I'm using to do this.
2022-11-22 12:35:35 So it's really right in line with expectations.
2022-11-22 12:36:18 A standard Forth that looks stuff up in a linked-list dictionary is kind of slow when it's *interpreting*. But when you're running compiled code, all of that stuff is out of the way and you're just blazing through your bits of code as fast as you can.
2022-11-22 12:36:43 So I don't think I've done anything particularly spectacular - it's just how a properly written forth *is*.
2022-11-22 12:38:17 By the way, if you look back at the code I shared last night you'll see that I have a DROP right after each UET.
2022-11-22 12:38:38 UET returns two cells. The one on top of the stack (that I'm dropping) is the unix epoch time seconds.
2022-11-22 12:38:48 The one right under that is microseconds within the second.
2022-11-22 12:39:24 So the first time I ran that measure word I got a negative time result, because the seconds rolled over. I could have just added a million, but I just ran it again and the second time it all happened within the same second.
2022-11-22 12:39:40 So basically uet returns unix epoch time with microsecond resolution.
2022-11-22 12:40:15 If anyone is interested, this is the code; it's a primitive:
2022-11-22 12:40:17 code "uet", uet
2022-11-22 12:40:19 mpush rsi, rdi
2022-11-22 12:40:21 sub rrSP, 16
2022-11-22 12:40:23 putd rrSP+8, rrTOS
2022-11-22 12:40:25 mov rax, 96
2022-11-22 12:40:27 mov rdi, rrSP
2022-11-22 12:40:29 decd rdi
2022-11-22 12:40:31 xor rsi, rsi
2022-11-22 12:40:33 sys
2022-11-22 12:40:35 getd rrTOS, rrSP-8
2022-11-22 12:40:37 mpop rdi, rsi
2022-11-22 12:40:39 next
2022-11-22 12:40:45 mpush, putd, decd, getd, and mpop (and next) are all macros.
2022-11-22 12:41:15 Well, sys is too.
2022-11-22 12:41:36 It saves a bunch of registers and runs an OS syscall, using the syscall instruction.
2022-11-22 12:42:31 When I add a new primitive (or definition, actually), it all goes in one spot like that in the nasm source. "code" is also a macro that builds a primitive header and adds it to the linked list being constructed.
2022-11-22 12:42:52 The details are all "under the hood," and this implementation is the first time I've actually accomplished that in a "complete" way.
2022-11-22 12:42:59 nasm has a nice macro facility.
2022-11-22 12:51:40 I was even able to just re-write one of those macros when I wanted to change the header layout. I then had to chase down a few dependencies where the head of the list was referenced from outside, and the words that actually navigate a header, but it wasn't that bad to get it all to gel.
2022-11-22 12:53:43 with assembly there's no additional overhead, only what you write
2022-11-22 12:54:58 But Forth's operation imposes some overhead, over what you'd get if you hand coded your entire task in assembly.
2022-11-22 12:55:18 btw I fixed the recursive problem, but meh
2022-11-22 12:55:20 You have to get around from primitive to primitive, and from :-def to :-def, somehow.
2022-11-22 12:55:50 a recursive word will need the presence of the memoized results to execute xD
2022-11-22 12:55:55 One of Forth's strengths is that it makes that overhead about as minimal as it can be, given that it's there at all.
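A minimal C sketch of the threading model being discussed above, for readers who haven't seen one: a compiled definition is a list of code-field addresses, and "next" is just the fetch-advance-jump step at the bottom. This is only an illustration, not the nasm implementation shown above (which, judging by names like rrSP and rrTOS, keeps the stack pointer and top-of-stack in registers, so NEXT is only a couple of instructions); all names below are made up. For reference, syscall number 96 in the uet code is gettimeofday on x86-64 Linux, which is where the seconds/microseconds pair comes from.

    /* Illustrative indirect-threaded inner interpreter in C. */
    #include <stdio.h>
    #include <stdint.h>

    typedef void (*prim_t)(void);

    static intptr_t stack[64];     /* data stack */
    static int      sp = -1;
    static prim_t **ip;            /* walks a list of code-field addresses */

    static void lit(void)  { stack[++sp] = (intptr_t)*ip++; }  /* in-line literal */
    static void plus(void) { intptr_t n = stack[sp--]; stack[sp] += n; }
    static void dot(void)  { printf("%ld\n", (long)stack[sp--]); }
    static void bye(void)  { ip = NULL; }

    /* Code fields: each holds the address of the machine code to run. */
    static prim_t cf_lit = lit, cf_plus = plus, cf_dot = dot, cf_bye = bye;

    int main(void)
    {
        /* A compiled definition for "3 4 + ." - just a list of addresses. */
        prim_t *thread[] = { &cf_lit, (prim_t *)3, &cf_lit, (prim_t *)4,
                             &cf_plus, &cf_dot, &cf_bye };
        ip = thread;
        while (ip) {               /* this loop body is NEXT: fetch the    */
            prim_t *w = *ip++;     /* next code-field address, advance IP, */
            (*w)();                /* and jump through it                  */
        }
        return 0;                  /* prints 7 */
    }

That fixed fetch-and-dispatch step is the only per-primitive overhead, which is the reasoning above: nanoseconds per word, so a million-element sum lands in milliseconds.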
2022-11-22 12:56:26 what I like about forth is you can start at 0 and keep adding colon definitions
2022-11-22 12:56:47 and they're not as expensive as a function call so you can abuse them
2022-11-22 12:56:58 Another trick you can play is to have your system able to automatically build a new primitive, formed by stringing together a sequence of existing primitives. Then you only get one NEXT at the end of all of it, instead of the numerous NEXT calls you otherwise had.
2022-11-22 12:57:21 Only makes sense to do that in nested / multi-nested loops. In your performance critical code.
2022-11-22 12:57:25 I'd like to do that if I had a proper forth
2022-11-22 12:57:44 It's pretty easy; the system can recognize where NEXT starts at the end of a word.
2022-11-22 12:57:54 What comes before that is what you copy to the new place.
2022-11-22 12:58:02 Then just stick one copy of NEXT on the end of the whole thing.
2022-11-22 12:58:19 I suppose I'll always end up wanting to do a proper forth
2022-11-22 12:58:43 You should do one someday. It's a totally "within reach" task.
2022-11-22 12:58:57 yeah but it should be in assembly
2022-11-22 12:59:55 Yes, I totally agree.
2022-11-22 13:00:02 That's within reach too - nasm is a good tool.
2022-11-22 13:00:39 Jesus, it's been so long since I've done significant work on mine.
2022-11-22 13:00:50 It is so so close to being "ready to develop with."
2022-11-22 13:01:14 But where I really want to get it is able to rebuild itself, and then I'll stop using nasm altogether.
2022-11-22 13:01:44 in my case as I'm using js, I just need to add words, but I want to get this fake compile stuff right
2022-11-22 13:02:02 I've planned for that as I wrote it, and I don't think it's that far away, but so far I just haven't buckled down and done it.
2022-11-22 13:02:22 Well, it's important to do what you want to and feel good about.
2022-11-22 13:02:25 what are you going to do with your forth?
2022-11-22 13:02:43 with my not-so-forth the first thing I'll do is start making the browser version
2022-11-22 13:02:56 I mean I have a "core" that works both in node and the browser
2022-11-22 13:03:11 Play with it. Try things out in it. And maybe, if I do what I tell myself I want to, use it to write software for gadgets I build on down the road.
2022-11-22 13:03:15 I can have 2 separate files, one for node stuff and another for the browser
2022-11-22 13:03:45 but the browser stuff is more fun, so I'll start there xD
2022-11-22 13:03:52 like the dom, the canvas, etc
2022-11-22 13:04:08 I can make . be attached to a dom element so when you print you print in that element
2022-11-22 13:04:18 or even to the canvas
2022-11-22 13:04:40 also the events are nice to play with
2022-11-22 13:04:51 One "build project" I have that I maybe will do is to build a "from scratch" computer system, using FPGA-based processors of my own design.
2022-11-22 13:05:00 :0
2022-11-22 13:05:04 In that case I want this Forth to be the OS.
2022-11-22 13:05:31 so both the hardware and the software will be assembled by you?
2022-11-22 13:05:36 Yes.
2022-11-22 13:05:50 That kind of hardware design is what my actual education was all about.
2022-11-22 13:05:56 I could die in peace if I reach that goal
2022-11-22 13:05:56 Core skill.
2022-11-22 13:06:11 The software stuff is just stuff I've picked up along the way.
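On the point earlier in this exchange that colon definitions are cheap and that getting into and out of one costs only slightly more than NEXT: the sketch below extends the one above with colon-definition nesting. It is illustrative only, with invented names; entering a definition (docol) pushes the instruction pointer on a return stack and leaving it (exit) pops it, and that push/pop is the whole extra cost of a ":" call.

    /* Illustrative C model of colon-definition nesting on top of NEXT. */
    #include <stdio.h>
    #include <stdint.h>

    typedef struct word word_t;
    typedef void (*code_t)(word_t *self);

    struct word {              /* execution part of a dictionary entry    */
        code_t   code;         /* code field: what to run                 */
        word_t **body;         /* parameter field: thread, for colon defs */
    };

    static intptr_t ds[64]; static int dsp = -1;   /* data stack   */
    static word_t **rs[64]; static int rsp = -1;   /* return stack */
    static word_t **ip;

    static void docol(word_t *w) { rs[++rsp] = ip; ip = w->body; }
    static void exit_(word_t *w) { (void)w; ip = rs[rsp--]; }
    static void lit  (word_t *w) { (void)w; ds[++dsp] = (intptr_t)*ip++; }
    static void plus (word_t *w) { (void)w; intptr_t n = ds[dsp--]; ds[dsp] += n; }
    static void dot  (word_t *w) { (void)w; printf("%ld\n", (long)ds[dsp--]); }
    static void bye  (word_t *w) { (void)w; ip = NULL; }

    static word_t W_LIT = {lit, 0}, W_PLUS = {plus, 0}, W_DOT = {dot, 0},
                  W_EXIT = {exit_, 0}, W_BYE = {bye, 0}, W_ADD3 = {docol, 0};

    int main(void)
    {
        /* : add3  3 + ;   - its body is a list of addresses like any other */
        word_t *add3_body[] = { &W_LIT, (word_t *)3, &W_PLUS, &W_EXIT };
        word_t *top[]       = { &W_LIT, (word_t *)4, &W_ADD3, &W_DOT, &W_BYE };
        W_ADD3.body = add3_body;
        ip = top;
        while (ip) {           /* the same NEXT dispatch serves primitives */
            word_t *w = *ip++; /* and colon definitions alike              */
            w->code(w);
        }
        return 0;              /* prints 7 */
    }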
2022-11-22 13:06:46 one of my secret and biggest dreams is to do some sort of umpc similar to a gameboy
2022-11-22 13:06:48 But I tend to niggle into things pretty deep when I study them, so while I'm not a "seasoned computer science professional," I may know a little more than some hardware guys who hack out software.
2022-11-22 13:06:58 embedded systems always caught my attention
2022-11-22 13:07:17 but never learned anything about hardware or electronics
2022-11-22 13:07:24 Yeah - I did embedded systems for a living for years.
2022-11-22 13:07:26 It's fun.
2022-11-22 13:07:35 I love the "total control" you can have over the whole thing.
2022-11-22 13:08:04 I always wanted to know both, hardware and software, and be able to do the stuff you're going to do some day xD
2022-11-22 13:08:10 And Forth is just PERFECT for embedded work.
2022-11-22 13:08:18 That's probably a big reason I'm so keen on it.
2022-11-22 13:08:38 After all that's the kind of work Chuck was doing when he conceived it.
2022-11-22 13:08:48 but not an operating system, just some hardware and software mixed together
2022-11-22 13:08:57 and then see how it burns
2022-11-22 13:09:09 Yeah - that's what would be involved for most of my gadget plans. Just embedded software.
2022-11-22 13:09:19 But if I'm going to build a full on computer, it will need an OS.
2022-11-22 13:09:58 well with forth it seems like it can be easier than other ways
2022-11-22 13:10:00 I'd just feel smug thinking I had a computer that I could be confident had no back doors whatsoever.
2022-11-22 13:10:11 just let forth be the os
2022-11-22 13:10:33 Yes - that's the idea.
2022-11-22 13:10:41 But there are certain things OSes need to be able to do.
2022-11-22 13:10:50 File system, process management, etc.
2022-11-22 13:11:15 I want to build everything. I want to build the storage out of flash memory chips - not just wire an SSD into it.
2022-11-22 13:11:24 Build the graphics hardware.
2022-11-22 13:11:37 Obviously I don't expect to rival off-the-shelf capabilities.
2022-11-22 13:11:50 FPGA just can't keep up with full custom silicon.
2022-11-22 13:11:59 That has man-centuries of design in it.
2022-11-22 13:12:24 So my processor won't be as fast as an x86. So I will use an array of them.
2022-11-22 13:12:30 it will take years I guess
2022-11-22 13:12:31 To get it "fast enough" to do useful work.
2022-11-22 13:12:38 Yeah, likely.
2022-11-22 13:12:45 Well, likely I'll never finish.
2022-11-22 13:12:54 I don't have a stellar track record of finishing projects.
2022-11-22 13:13:01 as long as you enjoy the process it's fine
2022-11-22 13:13:03 Except at work, where I always finished.
2022-11-22 13:13:15 That's the idea - it's fun to me.
2022-11-22 13:13:33 I sketched out processor notes some years ago.
2022-11-22 13:13:38 In a fair bit of detail.
2022-11-22 13:13:40 I'll go to eat, see you KipIngram
2022-11-22 13:13:46 It'll run Forth natively, of course.
2022-11-22 13:22:18 KipIngram: I do think octal is easier, but hex fits 8-bit bytes better, it's true
2022-11-22 13:42:04 I think the only advantage octal has is that it uses only things we're accustomed to thinking of as "digits." But I don't recall it taking very long for me to get as comfy with A-F as I was with 0-9.
2022-11-22 13:48:08 That processor I once thought about was a Harvard architecture thing, with separate fetch and execute units. The fetch unit dug down through the definition structure until it found opcodes, and just streamed those to the execute unit. Other than that, the only "novel" thing about it was its method for dealing with conditional jumps. When the fetch unit came to such a jump, it would stall if it didn't know what
2022-11-22 13:48:10 to do. But there was a facility that would let the execute unit do the decision calculation for a conditional jump, and report the result to the fetch unit, BEFORE the execute unit reached the decision point. That's not always possible, but sometimes is - it depends on the application.
2022-11-22 13:48:48 The idea was that hopefully you could tell the fetch unit what to do well enough before you ran out of already-fetched opcodes to let it find more and get you re-stocked.
2022-11-22 13:48:59 And if not... well, you get a stall.
2022-11-22 13:49:06 So it's an opportunistic thing.
2022-11-22 13:49:59 One case it would work really well in would be walking a linked list or something like that. You could check the next pointer for null as the first thing you did in the loop, and send that result on over to fetch, and THEN do whatever application processing of that node you needed to do.
2022-11-22 13:53:03 It also had a way of "micro-looping" over the last small group of opcodes it's executed.
2022-11-22 14:46:57 hi, I've an issue : Array Create ... Does> ... ; I compile this word to flash
2022-11-22 14:47:20 then I want to create an array and keep it in flash, so : Array myArray
2022-11-22 14:47:24 Wrong address or data for writing flash !
2022-11-22 14:49:50 I could compile it to ram, but then how to reference it from a word compiled to flash??
2022-11-22 14:56:07 How do you plan to write into this array if it's in flash?
2022-11-22 14:57:10 It probably expects it to be writable with !, and maybe that doesn't work to flash. If you put the array in RAM, then you'd just need for your code in flash to know its address.
2022-11-22 14:58:36 aw yeah! thanks
2022-11-22 15:14:15 how to use buffer: to make an array?
2022-11-22 15:16:31 ok i found it
2022-11-22 16:47:10 we have a couple arduino uno boards, and I was wondering, anyone have a forth that can load onto the arduino without wiping out its bootloader?
2022-11-22 17:01:30 jim: A C forth I guess
2022-11-22 17:09:19 What are you going to use the forth for?
2022-11-22 17:10:39 iirc forth-on-arduino is hard due to the split between code and data memory.
2022-11-22 17:11:06 Not a problem in a simple C forth
2022-11-22 17:11:23 yeah, you can build a pseudo-VM.
2022-11-22 17:13:43 Don't even need to go that way
2022-11-22 17:14:23 Just a classical threaded forth will do, or direct threaded
2022-11-22 17:14:42 Or tokenized (would that be considered a pseudo-VM?)
2022-11-22 17:14:44 Yeah - anything other than code-threaded.
2022-11-22 17:27:37 that'd be considered a pseudo-VM.
2022-11-22 17:35:31 Well, you could call it that. But it's not quite the same. The key idea in indirect and direct threading is that the things that comprise your program are *addresses*, not just symbols you interpret.
2022-11-22 17:36:52 So your program is a list of addresses that point to the code you need to run to carry those operations out.
2022-11-22 17:37:38 But people do talk about "the Forth vm" - I use that phrase myself.
2022-11-22 17:37:52 It fits the broad category, but it's a distinct style of doing it.
2022-11-22 17:38:33 You definitely can think of the addresses of primitive words in those programs as "instructions" of a sort.
2022-11-22 17:38:45 Instructions of a stack machine.
2022-11-22 17:39:24 Addresses of higher level definitions, though, aren't quite the same - they're addresses of things you created.
2022-11-22 17:44:42 IMO, it's a virtual machine. the primary difference between traditional VMs and "the forth VM" is its API.
2022-11-22 17:44:58 subprograms are their own units, you don't load things as a single blob.
2022-11-22 17:45:52 whether you use addresses or tokens just depends on your ability to jump around in data memory, though honestly now that I think about it you don't really need that unless you're directly assembling subprograms into machine code in that data memory.
2022-11-22 17:46:26 that's kind of the primary powerhouse. you have this central state, the dictionary, that you can use to store subprograms or dynamic data.
2022-11-22 17:47:03 it's neat.
2022-11-22 17:48:42 compare it to something like Lua or a traditional register machine. your API is "load and run this thing which has all of its code in a single giant blob". whereas with forths and forth-likes, you're dropped into an interactive shell and asked to build up (or load) small pieces of code/data which can be re-used/referenced.
2022-11-22 17:49:04 Well, like I said, it's part of the broad class.
2022-11-22 17:49:14 I regard it as a rather unique part, though.
2022-11-22 17:49:31 it's the direction other VMs need to go.
2022-11-22 17:49:50 I would agree with that, definitely.
2022-11-22 17:50:54 Of course, many Forths offer assembly capability, which gives you direct access to the underlying hardware - that bypasses the intervening layer completely. So it's not really a "pure" vm.
2022-11-22 17:51:06 The real point of a vm is to render you independent of the underlying hardware.
2022-11-22 17:51:37 (which has nothing to do with whether other vms go that way or not)
2022-11-22 17:51:41 Separate thought.
2022-11-22 17:52:02 it's more about structure.
2022-11-22 17:52:20 if you don't have the assembly capability, you can still use addresses or tokenized forths with well-codified primitives.
2022-11-22 17:52:38 In some sense C is a VM. It lets you program on a "machine that has C as its language."
2022-11-22 17:52:39 don't see compiled programs as blobs, but as components of a more descriptive data structure.
2022-11-22 17:53:05 But we usually don't think of it that way because of the way we've split things apart into a compile / execute pattern.
2022-11-22 17:53:44 We think of that as a "translation." Which it is, of course.
2022-11-22 17:54:26 Honestly though our processors are VMs these days.
2022-11-22 17:54:34 We don't really program "the hardware" anymore.
2022-11-22 17:54:51 Got a whole thick layer of microcode there, implementing a virtual processor for us.
2022-11-22 17:59:08 One of the ways I'm interested in improving the Forth I'm gradually working on is to add an extra layer in its structure that you very much could think of as a VM - it would be a virtual instruction layer that sat between my primitive definitions and the actual machine code. Implemented via macros in the assembler, so it wouldn't really do anything at run time. It would operate when I assembled the source.
2022-11-22 17:59:10 And the idea is to have a set of primitive definitions that will create correct functionality on either x86 or ARM.
2022-11-22 17:59:20 Without giving up any significant amount of performance.
2022-11-22 18:00:00 It would make the primitive implementations portable across platforms.
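The "virtual instruction layer" just described is assembler macros that expand to real machine instructions at assembly time. The sketch below is only a C-preprocessor analogue of that idea, with invented names (CACHED_TOS, vpush, vpop): primitives are written against a handful of virtual stack ops, and retargeting means rewriting that small layer, not the primitives.

    /* Illustrative only: two interchangeable "backends" behind one
       virtual-op layer.  The primitives below never change when the
       backend does. */
    #include <stdio.h>
    #include <stdint.h>

    #define CACHED_TOS 1    /* flip to 0 to "retarget"; primitives untouched */

    static intptr_t mem[64];
    static int      sp = -1;

    #if CACHED_TOS          /* backend 1: top of stack cached in a "register" */
    static intptr_t tos;
    static void     vpush(intptr_t x) { mem[++sp] = tos; tos = x; }
    static intptr_t vpop(void)        { intptr_t x = tos; tos = mem[sp--]; return x; }
    #else                   /* backend 2: everything lives in memory */
    static void     vpush(intptr_t x) { mem[++sp] = x; }
    static intptr_t vpop(void)        { return mem[sp--]; }
    #endif

    /* Primitives written purely against the virtual ops. */
    static void add(void) { intptr_t n = vpop(); vpush(vpop() + n); }
    static void dot(void) { printf("%ld\n", (long)vpop()); }

    int main(void) { vpush(3); vpush(4); add(); dot(); return 0; }   /* prints 7 */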
2022-11-22 18:00:18 I tend to have a lot of primitives - I don't want to have to port all of them when I move platforms.
2022-11-22 18:00:28 I don't like thinking of Forth as belonging to particular machine primitives. addresses and such.
2022-11-22 18:01:06 I'm hoping to have maybe 15-20 percent as much effort in porting that virtual layer to a new platform as I would porting all of the primitives.
2022-11-22 18:01:20 Yeah, that's what I'm trying to dodge around here.
2022-11-22 18:01:58 Ultimately you have to embrace your processor SOMEWHERE, if you're going to wind up with something that runs. But I'd like to compress that part to the smallest amount of work that is possible, without just completely toasting performance.
2022-11-22 18:02:32 embrace the concept of tokenized forths. :P
2022-11-22 18:02:33 I'd like to be able to regard everything else (the primitives and the built in definitions) as fully portable.
2022-11-22 18:02:40 No thanks. :-)
2022-11-22 18:02:51 that's the only portable thing.
2022-11-22 18:02:52 Besides, that's just another way of doing the same thing.
2022-11-22 18:02:58 You'd still have to write your token interpreter.
2022-11-22 18:03:13 yes. but that can take different forms for different languages.
2022-11-22 18:03:19 You can't make it run without dealing with the hardware somewhere.
2022-11-22 18:03:31 Well, unless you let someone else deal with the hardware.
2022-11-22 18:03:34 you can stop short of the primitives you implement.
2022-11-22 18:03:54 I think that's what I'm saying. I want my primitives to be portable.
2022-11-22 18:04:01 rrrrright. so tokenized forth.
2022-11-22 18:04:04 otherwise you're using addresses.
2022-11-22 18:04:08 I want the non-portable layer, that has to be re-written, a notch below the primitives.
2022-11-22 18:04:10 which jump to subroutines.
2022-11-22 18:04:15 which means serializing your state isn't possible.
2022-11-22 18:04:18 And I want it to all happen at build time, not run-time.
2022-11-22 18:04:35 I prefer addresses.
2022-11-22 18:04:41 Your SOURCE is serializable.
2022-11-22 18:04:48 That's the portable form of your code.
2022-11-22 18:04:53 meh.
2022-11-22 18:04:56 I disagree.
2022-11-22 18:05:03 That's fine.
2022-11-22 18:05:06 It's allowed.
2022-11-22 18:05:12 thank you for allowing that.
2022-11-22 18:05:17 oh master of discussions. :P
2022-11-22 18:05:24 My background is in embedded work, and I've always been more prone to embrace performance over portability.
2022-11-22 18:05:48 :-) :-)
2022-11-22 18:05:56 Well, I really meant it bilaterally.
2022-11-22 18:06:03 WE are allowed to disagree with one another.
2022-11-22 18:06:17 Not trying to act like you needed my permission or anything.
2022-11-22 18:07:01 I'm sure there are plenty of situations in the world where each perspective is superior to the other.
2022-11-22 18:07:22 World's not really a one-stop-shop place.
2022-11-22 18:13:16 Some of these ideas blur a little when you start thinking about a hardware implementation of a dual stack machine. In that case what we call "primitives" are suddenly much more like any old machine's instructions. They're just bit patterns that get directly interpreted by the hardware.
2022-11-22 18:13:40 And that's not too different from a tokenized/byte code system, which just has bit patterns interpreted by software.
2022-11-22 18:14:12 Your first layer of definitions are just sequences of those instructions - no addresses involved.
2022-11-22 18:14:33 Only when you get another layer up do you have "addresses."
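A minimal sketch of the token-threaded approach argued for above (names invented, illustrative only): compiled definitions are arrays of small integers that index a fixed primitive table, so the compiled code contains no addresses at all. That is also why tokenizing was suggested earlier for targets like the AVR-based Arduino, where code and data live in separate memories: the token lists are pure data and can sit in RAM or EEPROM, and only the interpreter itself lives in code memory.

    /* Illustrative token-threaded interpreter in C. */
    #include <stdio.h>
    #include <stdint.h>

    enum { T_LIT, T_ADD, T_DOT, T_BYE };   /* the token set */

    static intptr_t ds[64];
    static int      dsp = -1;

    static int run(const uint8_t *pc)
    {
        for (;;) {
            switch (*pc++) {                /* the interpreter is the only   */
            case T_LIT: ds[++dsp] = *pc++;  /* part that knows what a token  */
                        break;              /* number means                  */
            case T_ADD: dsp--; ds[dsp] += ds[dsp + 1]; break;
            case T_DOT: printf("%ld\n", (long)ds[dsp--]); break;
            case T_BYE: return 0;
            }
        }
    }

    int main(void)
    {
        /* "3 4 + ." compiled to tokens - just bytes, nothing is an address */
        static const uint8_t prog[] = { T_LIT, 3, T_LIT, 4, T_ADD, T_DOT, T_BYE };
        return run(prog);                   /* prints 7 */
    }

The trade-off debated above falls out of this shape: the token list serializes trivially and never needs patching for a new machine, but every step pays for the table lookup that an address-threaded system avoids.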
2022-11-22 18:15:20 veltas, hi. sorry for delay...
2022-11-22 18:15:53 I think the thing that most makes me like Forth is that it just stays the same, no matter how far you build up the functionality. Near the hardware, far from the hardware, etc. - definitions are always just lists of addresses, worked through in exactly the same way.
2022-11-22 18:16:16 The syntax (if you can call it that) of using highly abstract words is EXACTLY the same as the syntax for using primitives.
2022-11-22 18:16:23 One pervasive model.
2022-11-22 18:16:24 jim: No worries, IRC is designed for high latency conversation
2022-11-22 18:16:39 Or at least that's what everyone seems to use it for
2022-11-22 18:17:33 yeah :) so, I think we want to discover the opcodes of the arduino
2022-11-22 18:20:04 Might be difficult based on what decay said re modifying code on arduino
2022-11-22 18:20:27 I don't know anything about it
2022-11-22 18:20:52 I guess search for stuff on self-modifying code on arduino
2022-11-22 18:21:01 it isn't possible.
2022-11-22 18:21:06 you can't jump to data memory.
2022-11-22 18:21:08 I don't think forth solves any problems for you here but it is nice to have forth
2022-11-22 18:21:54 In that case, you will need to generate opcodes to try and keep uploading different variations... whatever you do forth probably won't help
2022-11-22 18:22:24 or you can do token threaded.
2022-11-22 18:22:45 that is always possible.
2022-11-22 18:24:18 Yes but it doesn't approach their problem at all
2022-11-22 18:24:24 what's their problem?
2022-11-22 18:24:32 oh.
2022-11-22 18:26:13 jim: https://forth-standard.org/systems
2022-11-22 18:26:15 take your pick.
2022-11-22 18:35:05 decay, veltas, thanks, I'll take a look
2022-11-22 18:44:51 You know, it seems like direct threading would be problematic in such an environment too, veltas.
2022-11-22 18:45:22 For example, the code you jump to for a variable on a direct threaded system is generally immediately adjacent to the storage space for that variable.
2022-11-22 18:46:00 You could change how you did the extra layer of indirection needed in a way that dodged calling it "indirect threading," I guess, but it seems like that extra indirection has to be there somehow.
2022-11-22 18:46:19 A variable would have to really be a constant, that gave you a RAM address.
2022-11-22 18:46:37 Or else you'd need indirect threading. Either way would solve it.
2022-11-22 18:47:19 In fact, a colon definition would have to be a constant that gave you an address in RAM of your definition, with slightly different code to handle it.
2022-11-22 18:47:41 PFAs couldn't be intermingled with CFAs.
2022-11-22 18:54:14 Yeah - I don't see anything easy. Direct threading on a system with rigid data/code separation seems problematic to me.
2022-11-22 18:54:33 Indirect threading, on the other hand, would be graceful.
2022-11-22 19:01:45 I have a layer of indirection on both my CFA (qualifying it as indirect threaded) and on the PFA - I added that one in order to separate headers and implementations.
2022-11-22 19:02:25 In my runtime, if a word ends without a ;, execution just proceeds into the next definition - there's no header sitting in the way like there would be on a traditional system.
2022-11-22 19:04:06 : is immediate in my system; I don't have to ; out of compile mode in order to define a new header.
2022-11-22 19:04:54 oh damn!
2022-11-22 19:05:00 Took that idea from Chuck, though - one of his incarnations along the way had that feature.
2022-11-22 19:05:06 What's up?
2022-11-22 19:05:32 do you separate out the headers into a different allocation area?
2022-11-22 19:06:06 I did in the system before this one, yes - but this time I just have them in different portions of a single allocated block.
2022-11-22 19:06:27 That's fairly trivial to change around, so I don't feel that committed to it.
2022-11-22 19:06:56 I have registers that point at each spot, so where they're located is pretty arbitrary.
2022-11-22 19:07:23 I could move all my headers en masse, change the register, and it would work fine.
2022-11-22 19:07:40 Same with the bodies.
2022-11-22 19:08:18 Oh, the other benefit of having that indirection on the PFA is that every word becomes "vectorable"; you can change where that points later on, and it changes every instance of the word, even already compiled ones.
2022-11-22 19:08:33 So I don't have to have a special class of words with that trait.
2022-11-22 19:09:10 seems like your cache is gonna be thrashed hard
2022-11-22 19:09:48 go write a forth for brainfuck.
2022-11-22 19:09:57 My whole body block fits in my cache.
2022-11-22 19:10:20 Headers probably would too, but I usually get concerned about such things only re: runtime performance.
2022-11-22 19:11:06 If it weren't for the fact the OS is sitting there taking up a bunch of my resources, I think what actually would happen is that the whole system would get pulled into each core's cache, and it would fly.
2022-11-22 19:11:14 Thrashing would depend on my write patterns.
2022-11-22 19:12:51 I had what I thought was a nice idea once for threading on the system, but then I realized one day that it was really bad for cache performance. I was trying to implement really fine-grain parallelism, like used to be the focus of most parallel processing papers back in the 80's.
2022-11-22 19:13:42 But it tends to work better these days to have coarser multi-processing, where your data flows smoothly through a pipeline, and different cores handle different stages. So each core gets it, does its thing, and then is done with it and it moves on down the pipeline.
2022-11-22 19:14:29 What I actually had in mind was sharing frame register values across cores, so they could cooperate without me having to copy the data around.
2022-11-22 19:14:36 But then they'd just be stepping on each other's toes.
2022-11-22 19:15:04 It went from "exciting idea" to "terrible idea" in an eyeblink.
2022-11-22 19:15:38 I've kind of given up on that fine-grain parallel stuff these days.
2022-11-22 19:15:52 Our tech just isn't good for that, for the most part.
2022-11-22 19:46:12 Wow - the Wikipedia article on parallel computing doesn't even mention dataflow processing, except via a reference to another article.
2022-11-22 19:46:28 When I first started nosing around in this stuff dataflow was kind of the hot tamale.
2022-11-22 19:46:35 Folks were really excited about it.
2022-11-22 19:46:46 But I think it turns out to be problematic on the hardware front.
2022-11-22 19:47:08 It's about as fine-grain as it gets.
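A short illustration of the "every word becomes vectorable" point made earlier in this exchange (the indirection on the PFA): if compiled code refers to a word only through one extra pointer, repointing that single pointer changes the behaviour of every call site ever compiled, with no recompilation. This is plain illustrative C with invented names, not the actual Forth.

    /* Illustrative "revectoring" through one level of indirection. */
    #include <stdio.h>

    typedef struct { void (*body)(void); } word_t;   /* stand-in for a header */
                                                     /* with an indirect PFA  */
    static void greet_v1(void) { printf("hello\n"); }
    static void greet_v2(void) { printf("howdy\n"); }

    static word_t greet = { greet_v1 };

    /* "compiled code": call sites hold &greet, never greet.body directly */
    static word_t *thread[] = { &greet, &greet };

    int main(void)
    {
        for (int i = 0; i < 2; i++) thread[i]->body();   /* hello, hello */
        greet.body = greet_v2;                           /* revector the word */
        for (int i = 0; i < 2; i++) thread[i]->body();   /* howdy, howdy */
        return 0;
    }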