2023-10-12 00:19:55 Nice. There's a whole suite of examples one level up from there, and they all seem to be working.
2023-10-12 00:40:28 Ok, all of them except one executed correctly.
2023-10-12 04:02:32 KipIngram: The functional advantage of alloca() is that the allocations are automatically cleaned up on return from the function, and on longjmp()
2023-10-12 04:02:42 The performance advantage is that the allocations are faster
2023-10-12 04:03:25 The disadvantage is that it doesn't (necessarily) check errors, can cause a stack overflow very easily, and is probably abusable in a security context
2023-10-12 04:04:23 The ability to "just get a small allocation" that doesn't need any thought to freeing, provided you know you won't overflow anything, is super helpful and can produce much neater code sometimes
2023-10-12 04:05:44 The performance advantage is probably useful in some contexts but I've not given that much thought; that's the less interesting aspect.
2023-10-12 04:05:55 It's worth pointing out your alloca() allocation will be fast, and smaller allocations have a chance of already being in the cache hierarchy
2023-10-12 04:10:23 The closest thing in Forth is HERE (or PAD, if you have a PAD with allocation support).
2023-10-12 04:11:56 I suppose you can achieve this in Forth with a variable or register to act as a 'base pointer' for the return stack; then you add things to the return stack and EXIT and it will all clean up automatically
2023-10-12 04:12:06 But this breaks rdrop exit features
2023-10-12 04:12:12 Depends what you want
2023-10-12 04:13:13 Personally I think rdrop exit stuff is to be avoided on modern computers because it messes up return branch prediction for STC
2023-10-12 09:20:50 Yes, it would break rdrop exit (which is basically the same as my conditional returns), but when I code I'm sensitive to that - for example, I can't return from within a stack frame. I have to close the frame before I return.
I haven't found keeping that straight to be a problem.
2023-10-12 09:21:02 Just "clean coding" keeps me away from trying to do such things.
2023-10-12 09:22:38 I'm sure there are some technical arguments against the conditional returns, but I'm just sort of hooked on them at this point. They 'work for me' - my coding work became more fun when I started using them. My definitions got shorter. Everything about my results was more pleasing to me.
2023-10-12 09:23:12 My attitude toward the speed of my system is that it's "plenty fast." If I need top-drawer performance in some profiler-identified part of an application, I'll optimize it by hand.
2023-10-12 09:24:29 I don't actually see how conditional returns would affect branch prediction in my system. The branch is *inside* the conditional return word - at the time it's processed I haven't actually done the conditional return yet.
2023-10-12 09:29:16 I suspect there's a good bit of detail in a Forth architecture that might not make it easy for the hardware to predict where we're going to be going. Forth does a lot of calculated jumps (register jumps). I have no idea how good the hardware is at grokking that, but they're unavoidable in Forth, so we're kind of stuck with them.
2023-10-12 09:31:20 In theory logic could recognize a "jmp <reg>" and then start fetching from <reg> as soon as it had a final value, but I don't know if modern processors actually try to do that.
2023-10-12 09:39:16 It's a fairly intricate situation - in a direct- or indirect-threaded Forth, control flow at the Forth level really has nothing to do with the stream of code executed. All of that is captured in stuff the processor treats as *data*. The processor's own stream of execution is just to jump from primitive to primitive in a way that's really kind of a "level down" from the Forth program's flow.
2023-10-12 09:39:49 In a code-threaded system that's not the case - the control flow would be more like a normal processor application.
2023-10-12 09:40:53 Also, in a "real" Forth system (by that I mean a system where Forth is truly running the show), most Forth systems are small enough that all of your machine code is almost guaranteed to get into the cache and stay there.
2023-10-12 09:41:41 In an OS context (running Forth on a Linux or Windows system) all bets are off - there's no telling how other system activity will treat the cache. But if the machine belongs to Forth it's going to run from cache almost certainly.
2023-10-12 09:42:14 The big thing I think one would need to watch out for would be having multiple cores clobber one another's cache lines.
2023-10-12 09:43:07 Having write-hot data interspersed with your machine code would be a bad idea.
2023-10-12 09:49:19 This is one of the reasons I wish OSes would offer a way for you to "allocate" a core *completely* to some task - in a way that would cause the OS to completely unload everything from it other than that particular specified task.
2023-10-12 09:49:32 Then you'd know that your program owned the cache in that core.
2023-10-12 09:50:12 I think most people never even give that kind of thing a thought, though - they just trust the system to "handle it."
2023-10-12 10:22:52 Interesting math fact - known as the "Ham Sandwich Theorem."
2023-10-12 10:23:39 Imagine a stack of three layers (like the bread/ham/bread layers of a sandwich). No matter how you arrange those layers, it is always possible to cut the stack with a plane such that you equally divide all three layers.
2023-10-12 10:54:54 This guy has quite a few interesting videos:
2023-10-12 10:54:56 https://www.youtube.com/@SebastianLague
2023-10-12 10:55:15 Lots of "simulation" type stuff.
2023-10-12 10:55:26 Erosion modeling, fluid flow, etc.
2023-10-12 10:58:20 good old navier stokes
2023-10-12 10:59:35 Yeah, though this guy's looking for "visually satisfying" results and doesn't necessarily do all of the calculations in a rigorously precise way.
2023-10-12 10:59:47 He's happy with "approximations that still look good."
2023-10-12 11:00:01 He's approaching it from a game perspective rather than a science perspective.
2023-10-12 11:00:26 Generally if it winds up looking good it's probably at least a *reasonable* approximation of the real deal, though.
2023-10-12 11:02:04 the real deal requires brute forcing, or pretending the glacier does not move so you can set one term to 0 and then solve ...
2023-10-12 11:02:24 The fluid modeling one was the first one I watched. He started out running it on his CPU, with like 500 particles he was letting move around. Eventually, though, he moved it to a compute shader and upped the particle count to something quite large, like 100,000 or so, and by the time he was done it looked quite good.
2023-10-12 11:03:08 One amusing bit, though, was that he was bothered by the fact that the particles at the surface and along the walls "bunched up" into a more tightly packed layer - he figured he had some sort of bug.
2023-10-12 11:03:19 I had to smile - he was just seeing emergent surface tension.
2023-10-12 11:03:27 It's supposed to do that.
2023-10-12 11:03:43 No particles on the other side, so the dynamics are altered.
2023-10-12 11:05:34 this is the Coding Adventures guy?
2023-10-12 11:05:45 Yeah.
2023-10-12 11:06:00 I watched almost all of those last night - it's a good series.
2023-10-12 11:15:58 This one's on "complex behavior from simple rules":
2023-10-12 11:16:00 https://www.youtube.com/watch?v=kzwT3wQWAHE&list=PLFt_AvWsXl0d88e0fH4d3vmjPTbhbVzV4&index=2
2023-10-12 11:16:22 Amazing how "alive" some of those results look.
2023-10-12 15:17:37 I bought Ms. Rather's Forth workbook. Seems a bit basic, but I'm still optimistic the exercises will help me be more fluent in the language. Anyone here worked through it before?
2023-10-12 22:01:11 Hey, it looks like in addition to the SPIR-V intermediate language there is also one for Nvidia called PTX.
2023-10-12 22:01:26 https://www.llvm.org/devmtg/2011-11/Holewinski_PTXBackend.pdf
2023-10-12 22:37:06 You know, I'll tell you something I find spectacularly uninteresting: all the hoopla over "biggest numbers," like TREE(3). Seriously - who cares? Numbers go on forever - there are going to be big ones.
2023-10-12 22:37:16 I just don't see what makes it noteworthy.