2023-09-01 04:07:54 There are many situations where you still get basically 1980s UDP access
2023-09-01 04:08:20 Unfortunately not many websites work well on a really bad mobile network
2023-09-01 04:08:43 (I know that's TCP but you know I know that)
2023-09-01 04:09:40 The point is: do more in that first packet; it shouldn't all be crap, or I'll sit staring at a blank screen for no good reason
2023-09-01 05:52:04 crc: thanks, very interesting, your library? you mean you wrote your Forth version with some of your libraries built into it?
2023-09-01 05:52:42 KipIngram, i guess that Chuck coded a very primitive form of a hash table for that
2023-09-01 07:09:28 rendar: yes; I use a Forth I've written, along with my own library of code to extend the core in various ways.
2023-09-01 07:17:27 crc, awesome, in which language did you write your Forth?
2023-09-01 07:17:55 one question: is there a word to clear the entire stack? gForth doesn't seem to have 'clear'
2023-09-01 07:46:55 I'm not aware of a standard word to clear the stack. In my system I use 'reset' for this purpose.
2023-09-01 07:48:10 My system is written in assembly, running on a virtual machine (with implementations in numerous languages)
2023-09-01 07:57:03 rendar: In old FIG Forth, QUIT would clear the return stack and send you back to the command interpreter loop, data stack intact.
2023-09-01 07:57:13 Then WARM would do the same but also clear the data stack.
2023-09-01 07:57:25 And COLD was a full restart from scratch.
2023-09-01 07:57:34 As crc noted, though, those aren't really "standard."
2023-09-01 07:58:14 You can often manually clear the stack via SP0 @ SP!.
2023-09-01 08:12:42 Knowing the standard, COLD would become BRRRRR-CHILLY
2023-09-01 08:13:27 rendar: The standard word to clear the stack is ABORT
2023-09-01 08:14:11 In gforth 0.7.3 this also displays a backtrace... but in newer versions it doesn't, because the standard has set aside ABORT to be both an 'error' condition and a stack-clearing mechanism
2023-09-01 08:15:50 The standard description of ABORT is "Empty the data stack and perform the function of QUIT, which includes emptying the return stack, without displaying a message."
2023-09-01 08:16:13 So gforth 0.7.3 kind of butchers that
2023-09-01 08:19:54 A 'standard' stack-clearing word: : EMPTY-STACK BEGIN DEPTH WHILE DROP REPEAT ;
2023-09-01 08:20:38 But that would be slow; pretty much all Forths can do something like SP0 @ SP! ... just LOCATE ABORT or SEE ABORT to find out how
2023-09-01 08:21:28 i did EMPTY and REMPTY to empty my stacks :-)
2023-09-01 09:19:45 veltas: Yes, COLD is actually a little tricky to implement and I'm not sure there is a good way that's "fully efficient." I do it by keeping the OS-loaded initial program image untouched - I allocate a new RAM block, copy the image into it, and execute there. Then I wired in a little trickery that lets me jump back to that initial image if I want to - the new RAM block is deallocated and I start all over.
2023-09-01 09:19:47 So it truly is *COLD* in every respect.
2023-09-01 09:19:55 But that's a block of RAM sitting there doing nothing most of the time.
2023-09-01 09:20:57 ABORT is what I would call WARM.
2023-09-01 09:21:07 : warm sp0 @ sp!
2023-09-01 09:21:35 : quit begin rp0 @ rp! query interpret prompt again ;
2023-09-01 09:22:40 dave0: Note that your REMPTY can't be a colon definition unless you jump through some hoops.
2023-09-01 09:23:15 That is, it couldn't just empty the return stack and return, because you just threw away the return address.
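(For reference, a minimal sketch of how the non-standard reset words discussed above typically fit together, assuming a fig-Forth style system where SP0 is a variable holding the empty data-stack address; the spellings of SP0 and SP! and the names CLEAR and WARM vary between Forths and are only illustrative here.)

    \ System-specific but instant: restore the saved empty-stack pointer.
    \ The portable-but-slow alternative is the EMPTY-STACK loop shown above.
    : CLEAR ( i*x -- )  SP0 @ SP! ;
    \ fig-Forth flavour of WARM: clear the data stack, then re-enter the interpreter via QUIT.
    : WARM  ( i*x -- )  CLEAR QUIT ;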
2023-09-01 09:23:23 You could do this:
2023-09-01 09:23:41 : rempty r> rp0 @ rp! >r ;
2023-09-01 09:46:49 I think COLD is a bit pointless on an OS, would probably use exec
2023-09-01 09:49:05 If *you're* the OS you can keep track of what needs resetting a bit more easily
2023-09-01 09:49:07 Yes.
2023-09-01 09:49:39 Out of interest.... when do you use COLD?
2023-09-01 09:49:49 I rarely do.
2023-09-01 09:49:51 When you want to just reset the environment after screwing it up?
2023-09-01 09:50:21 More often I just BYE Enter.
2023-09-01 09:50:47 I don't actually crash my system much anymore, though, ever since I put in code to catch segfault signals and run error recovery.
2023-09-01 09:51:00 That was not easy to do, btw.
2023-09-01 09:51:01 I just tried it and gforth 0.7.3 COLD doesn't work very well
2023-09-01 09:51:19 I remember you doing it
2023-09-01 09:51:25 And will probably lean on your expertise at some point
2023-09-01 09:51:39 Like I said, I think to make it work *right* you have to be "wasteful."
2023-09-01 09:51:58 That's what COLD means to me though
2023-09-01 09:52:20 Yeah, I'll be happy to help. The idea is straightforward enough, but it just gets into some kind of hairy system-dependent stuff.
2023-09-01 09:52:37 Regarding handling segfaults?
2023-09-01 09:52:49 It seems to be documented rather poorly online, probably BECAUSE there's no system-independent "standard way" of doing it.
2023-09-01 09:52:54 Yes.
2023-09-01 09:52:59 The hard thing is probably interfacing with the signal handler without the C standard library
2023-09-01 09:53:25 Yes, I think that's probably a good way of saying it.
2023-09-01 09:53:28 I think I remember you just stepping through it in gdb
2023-09-01 09:53:45 Or dumping everything to see what was going on
2023-09-01 09:54:07 I did. Basically I had to find where my return address and so on were in a stack full of stuff and tinker with them, so that when things returned I went where I wanted to go.
2023-09-01 09:54:21 Yeah I seem to remember helping with this
2023-09-01 09:54:51 There was a struct that you can access from one of the signal handler arguments or something
2023-09-01 09:55:02 With stuff like RIP
2023-09-01 09:55:12 Basically you have to a) clear the fault, so that the system actually returns to you instead of bailing, and then b) arrange things so you return to the error handler.
2023-09-01 09:55:21 Right.
2023-09-01 09:55:45 Clearing the fault is obvious, and in my case would probably be to simulate a THROW or just load RIP with ABORT
2023-09-01 09:55:48 That struct is platform-dependent, though.
2023-09-01 09:55:58 Of course
2023-09-01 09:56:12 I just looked at it until I could figure out where the important pieces were.
2023-09-01 09:57:18 I remember one part of it that was tedious: making sure I didn't wind up with the SYSTEM stack imbalanced. I don't actually use rsp for anything, but it was important to make sure the signal handling process didn't leave junk on it etc.
2023-09-01 09:57:45 I can't remember for sure, but I may have dealt with that by just forcibly restoring it to an earlier captured value.
2023-09-01 09:58:42 It made a huge difference in the usability of the system, though - I don't think I really realized how often I'd segfaulted until I didn't anymore.
2023-09-01 09:59:13 It's REALLY easy to foul up Forth code you're developing in a way that will lead to a segfault-inducing condition.
2023-09-01 10:00:11 It's the use of illegal addresses that's the big issue.
I catch divide-by-zero type signals too, but that just doesn't happen nearly as often.
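(The signal-handler plumbing itself is platform-specific and can't really be sketched portably, but the Forth-side recovery it hands off to is often just a CATCH around whatever the outer loop EXECUTEs. A rough sketch, assuming the handler converts the fault into a THROW; SAFE-EXECUTE is a made-up name and SP0 is the same non-standard empty-stack variable as above.)

    \ Run an xt; if it THROWs (or a fault is converted into a THROW), report and recover.
    : SAFE-EXECUTE ( i*x xt -- j*x )
       CATCH ?DUP IF
          CR ." caught error " .     \ print the throw code
          SP0 @ SP!                  \ discard whatever junk the failed word left behind
       THEN ;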
2023-09-01 10:00:14 Yeah, like >IN 0 !
2023-09-01 10:00:17 whoops
2023-09-01 10:00:34 You mean 0 >IN !?
2023-09-01 10:00:53 As an example of messing up Forth code to cause a segfault
2023-09-01 10:00:54 The thing is, you CAN do that successfully if you arrange things right.
2023-09-01 10:01:23 I generally want accesses of the zero page to segfault on modern architectures
2023-09-01 10:01:26 I've written some simple interpreter 'loops' that way, mostly just to prove I could, but also to time various interpreter-related functions.
2023-09-01 10:01:42 Time dictionary searches, etc.
2023-09-01 10:02:23 That's how I found out GForth kicks my Forth's ass on compile speed. ;-(
2023-09-01 10:02:34 Doesn't matter though
2023-09-01 10:02:42 I compared pretty favorably with it on execution speed, but not compile.
2023-09-01 10:02:53 It's Forth, I don't care, I just want a small codebase that I can understand, and gforth doesn't fit that bill
2023-09-01 10:02:56 We later found it stated outright, though, that GForth does some kind of hashing.
2023-09-01 10:03:25 I feel the same. I don't really "use" GForth except for occasional checks on things.
2023-09-01 10:03:25 Yeah, there's a hash table (not from the ground up, it's sort of stapled on, so it's not the lightest hash table ever)
2023-09-01 10:03:52 I decided that such a stapled-on hash table would be a reasonable thing to do with the wealth of RAM we have on desktop systems.
2023-09-01 10:03:55 I use gforth 0.7.3 as a mostly ANS Forth I can get on most distros
2023-09-01 10:03:58 Why not use it for something?
2023-09-01 10:04:23 Without licensing issues... because SwiftForth is cool but only free for non-commercial use
2023-09-01 10:04:24 My thinking was you start out with your linked list and an empty hash table. Then populate it as you work.
2023-09-01 10:04:35 So you'd mostly only need to search your linked list for any given word once.
2023-09-01 10:05:20 My approach is to use PACKAGE
2023-09-01 10:05:35 To avoid polluting with loads of write-and-forget intermediate words
2023-09-01 10:05:44 I know you've got :. (or whatever it's called)
2023-09-01 10:06:36 .:, yeah.
2023-09-01 10:06:39 I like it. :-)
2023-09-01 10:07:34 It's neater but I just prefer working in my ANS / Forth Inc world
2023-09-01 10:07:50 This new go-round I'm going back to the idea of putting those .: headers in transient memory, so that I fully get rid of them when I .wipe.
2023-09-01 10:07:58 I have been consulting the PolyForth and SwiftForth manuals a lot for design ideas
2023-09-01 10:08:08 And inlining them?
2023-09-01 10:08:36 There's a lot of clever in those.
2023-09-01 10:08:38 I respectfully disagree with some of it but mostly it's good
2023-09-01 10:09:04 Like their block-copying function takes src dst n
2023-09-01 10:09:09 Which is crap really
2023-09-01 10:09:22 I'm enticed by Liz Rather's statement that implementing the compiler using a separate loop, instead of having conditionals in a shared interpreter/compiler loop, is a good thing.
2023-09-01 10:09:32 But apparently non-standard in some way.
2023-09-01 10:09:34 That's mostly used manually and I don't want to sit there counting the difference on my fingers; I want to give a dumb range and have it do that for me
2023-09-01 10:09:56 BLOCKS it's called
2023-09-01 10:10:03 Which isn't a good name
2023-09-01 10:10:19 You have to actually think to get good names, though.
2023-09-01 10:10:26 Lot of folks don't enjoy doing that.
:-)
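(A toy sketch of the "stapled-on" lookup-cache idea from a few lines up: an empty hash table that gets populated as words are found, so the slow linked-list search mostly happens once per name. It leans on FIND-NAME and NAME>STRING as provided by gforth / Forth-2012; the slot count and all the word names here are made up, and a real version would need to invalidate entries when words are redefined or forgotten.)

    64 CONSTANT #SLOTS                                          \ power of two so a mask works
    CREATE SLOTS #SLOTS CELLS ALLOT  SLOTS #SLOTS CELLS ERASE   \ one nt (or 0) per slot

    : NAME-HASH ( c-addr u -- h )    0 SWAP 0 ?DO OVER I + C@ + 31 * LOOP NIP ;
    : SLOT      ( c-addr u -- addr ) NAME-HASH #SLOTS 1- AND CELLS SLOTS + ;

    : CACHED-FIND ( c-addr u -- nt | 0 )
       2DUP SLOT >R                        \ remember this name's cache slot
       R@ @ IF                             \ slot occupied: is it the same name?
          2DUP R@ @ NAME>STRING COMPARE 0= IF
             2DROP R> @ EXIT               \ cache hit
          THEN
       THEN
       2DUP FIND-NAME                      \ cache miss: normal (slow) dictionary search
       DUP IF DUP R@ ! THEN                \ found: remember it for next time
       R> DROP NIP NIP ;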
2023-09-01 10:10:38 I've called mine BCOPY but I think maybe BMOVE would be better, because I check the copy order to avoid clobbering overlapping blocks
2023-09-01 10:10:56 Not the most inventive name, I know
2023-09-01 10:15:31 A large fraction of my names are eventually .wiped - I tend not to think about those "as much." I still try to be a little sensible about them, but I can operate in a more "local context" so to speak.
2023-09-01 10:19:01 I suppose a difference between .: and PACKAGE is that mine aren't in the list at all
2023-09-01 10:19:22 My 'local' words are in a different vocab so it doesn't impact lookup speed
2023-09-01 10:19:39 Which is why I compared this with hashing, because for me PACKAGE does the same job
2023-09-01 10:19:41 I've done it a couple of different ways.
2023-09-01 10:19:59 I don't really care about having lots of irrelevant words out there because Forth '''has lexical scoping''' (waiting for someone to hit me)
2023-09-01 10:20:14 Last time I put the words into the linked list but in a separate range of memory. Then .wipe unlinked them and deallocated the RAM.
2023-09-01 10:20:23 Yeah that works too
2023-09-01 10:20:41 In my "current" one, though, they're not in separate RAM. I just unlink them, so they're no longer in the search path, but the memory remains occupied.
2023-09-01 10:20:52 That does have some advantages for things like decompiling.
2023-09-01 10:21:10 But this next time I'm going back to a full discard ability.
2023-09-01 10:21:25 Yeah, generally the discard should be optional
2023-09-01 10:21:35 And this Python tool I'm writing now avoids putting them in the image at all.
2023-09-01 10:21:44 The downside of PACKAGE is it complicates decompiling a bit ... although I'm inclined to think decompiling is a bit pointless
2023-09-01 10:22:02 I think I agree. It's not like I do it very often.
2023-09-01 10:22:09 It's just a cute thing to say you're capable of.
2023-09-01 10:22:17 A bragging point of sorts.
2023-09-01 10:23:14 Yeah, it's definitely a bragging thing
2023-09-01 10:23:24 It's pointless for disk-image systems where the source is available
2023-09-01 10:23:43 Right.
2023-09-01 10:23:45 Well, really the point is to remind yourself of things you defined at the terminal
2023-09-01 10:24:33 A command history can do that too.
2023-09-01 10:25:22 It's definitely a poverty feature
2023-09-01 10:25:45 My last system had a command history that used its own RAM region - I think in the future I'll wire that into the block system.
2023-09-01 10:25:53 Just keep a history on disk.
2023-09-01 10:26:30 Yeah, I'd probably write to a range of blocks cyclically
2023-09-01 10:27:08 Yep, exactly.
2023-09-01 10:27:31 And maybe have the first blank line mark the end of the list, so I can update without writing out a "start line" continuously
2023-09-01 10:27:42 Just WIPE a block when you advance to the next block
2023-09-01 10:27:55 I haven't settled on a format yet.
2023-09-01 10:28:14 Needs to be easy to "back up" through.
2023-09-01 10:28:28 And ideally editing those blocks and looking at them would yield something "readable."
2023-09-01 10:28:47 Not sure what to do with long lines
2023-09-01 10:29:00 Generally I don't type them. :-)
2023-09-01 10:29:17 But I'm kind of moving away from fixed-length lines lately.
2023-09-01 10:43:04 You know, this vm-based thing I'm working on is going to raise some interesting distributed system aspects.
I've got the prospect of a system running on my notebook, communicating with a system running on some other apparatus, and they'll be *binary compatible* at the vm level. The possibility of squirting bits of code from one to the other, for actual execution, exists.
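(A sketch of the overlap-safe block-range copy mentioned above: copy blocks first..last to dst onward, choosing the copy order so an overlapping destination doesn't clobber source blocks that haven't been copied yet. BLOCK, BUFFER, UPDATE and MOVE are standard block-wordset words; the 1024-byte block size, the scratch buffer, and the names BCOPY1, BMOVE and OFFSET are assumptions for illustration.)

    CREATE SCRATCH 1024 ALLOT        \ staging area, one block long
    VARIABLE OFFSET                  \ dst - first, i.e. how far the range is being moved

    : BCOPY1 ( src dst -- )          \ copy a single block via the scratch buffer
       SWAP BLOCK SCRATCH 1024 MOVE  \ read the source block, stash its contents
       BUFFER SCRATCH SWAP 1024 MOVE \ claim a buffer for dst, fill it
       UPDATE ;                      \ mark it dirty so it gets written back

    : BMOVE ( first last dst -- )    \ copy blocks first..last to dst onward; assumes first <= last
       2 PICK - OFFSET !
       OFFSET @ 0> IF                \ destination above source: copy the last block first
          DO I I OFFSET @ + BCOPY1 -1 +LOOP
       ELSE                          \ otherwise front-to-back is safe
          1+ SWAP DO I I OFFSET @ + BCOPY1 LOOP
       THEN ;

For example, 10 19 15 BMOVE would copy blocks 10-19 up to 15-24 without the overlapping middle eating itself; a FLUSH afterwards forces the buffers out to disk.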
2023-09-01 10:44:29 let's say I'm writing a Forth stack with a C++ std::vector<> or some similar data structure. Do i have to keep track of the stack-top position, or do i simply use vector.size()-1? the problem is that i have to compute that -1 every time i want to access the top of the stack
2023-09-01 10:44:33 what is your advice?
2023-09-01 10:44:54 I'd have a pointer / index for the top.
2023-09-01 10:45:06 ok
2023-09-01 10:45:08 You don't want to have to "find" the top every time.
2023-09-01 10:45:19 yes, right
2023-09-01 10:46:16 I'd probably make it an int array and store the top pointer in array[0].
2023-09-01 10:49:46 Yeah, stacks work better growing down
2023-09-01 10:50:05 Actually not sure if that's what you're saying, but I stand by it
2023-09-01 10:50:15 sp[0] is top of stack, sp[1] is second, etc
2023-09-01 10:50:25 i cannot have that
2023-09-01 10:50:26 So it's the reverse of how you write it in a stack comment
2023-09-01 10:50:53 Well, most processors' hardware stack grows down. But really I don't think the direction matters. I always put my return and data stacks in one region, growing toward each other from the ends.
2023-09-01 10:50:55 i'm planning a stack with std::vector<> of indefinite size, i.e. you can always push, std::vector will reallocate the stack with a new size
2023-09-01 10:51:04 So, one grows up, one grows down, unavoidably.
2023-09-01 10:51:17 I usually grow the data stack down and the return stack up.
2023-09-01 10:51:36 But it feels mostly like "habit" to me.
2023-09-01 10:52:13 It's good to initialize that region to some fixed pattern, so you can look later and see how far your stacks have touched.
2023-09-01 10:53:01 When I checked mine I found that both of them had gotten to about 17-18 deep.
2023-09-01 10:53:38 Well, you're pigeonholing yourself by using C++ and std::vector
2023-09-01 10:53:55 Also, I like how "the system" and "my code" share the return stack - I ascertained that whenever I called execute on an interpreted word, there was only one system item on the return stack.
2023-09-01 10:54:11 So my words didn't have to run "on top of" a big stack of system return addresses.
2023-09-01 10:54:24 Fixed stacks are quite normal and not limiting in practice
2023-09-01 10:54:56 Yeah. Chuck gets quite extreme about that, with his little tiny stacks.
2023-09-01 10:56:55 I think 32 is considered spacious
2023-09-01 10:57:13 I tend to reserve ~60
2023-09-01 10:57:14 that seems right to me - like I said, I never saw mine go over 17-18.
2023-09-01 10:57:20 So 32 is the "logical number" there.
2023-09-01 10:57:36 60 just to avoid it being an issue unless stuff is properly broken
2023-09-01 10:57:50 I think even on the ZX Spectrum I had like 60
2023-09-01 10:57:53 I usually use a 4kB block, so that's 256 elements each. Overkill.
2023-09-01 10:58:17 But in the upcoming system I want to make that adjustable - when I create a thread I'll define its stack allocation.
2023-09-01 10:58:32 I can picture some threads using quite small regions.
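(A small sketch of the fixed-pattern trick mentioned above for seeing how deep a stack region has actually been used. PAINT-STACK, HIGH-WATER, and CANARY are made-up names; it assumes cells of at least 32 bits, and a real value on the stack that happens to equal the pattern would be miscounted.)

    HEX DEADBEEF DECIMAL CONSTANT CANARY     \ any unlikely bit pattern will do

    : PAINT-STACK ( addr ncells -- )         \ fill a stack region with the pattern
       0 ?DO CANARY OVER ! CELL+ LOOP DROP ;

    : HIGH-WATER ( addr ncells -- n )        \ count cells that no longer hold the pattern
       0 SWAP 0 ?DO
          OVER @ CANARY <> IF 1+ THEN
          SWAP CELL+ SWAP
       LOOP NIP ;

Paint a thread's stack region when you create it; HIGH-WATER later tells you the deepest the stack ever got, which is how you arrive at numbers like the 17-18 cells mentioned above.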
2023-09-01 11:01:03 Yeah, that's normal
2023-09-01 11:01:08 To pick the stack size for a new thread
2023-09-01 11:01:15 PolyForth does it anyway
2023-09-01 11:06:56 I picture thinking about how deep I want the stacks to be and whether or not I want the thread to have a private vocabulary / console access etc. All of these will feed into the allocated RAM region.
2023-09-01 11:07:42 A "session" will be a thread with ample stacks, an application-sized space for a vocabulary, and a console.
2023-09-01 11:08:39 I'll have some way of connecting the actual console to whichever one of those I want.
2023-09-01 11:09:43 Even if a session is running on some remote gadget, I figure I'll actually maintain its screen image on my notebook. The session output will just tell me how to modify that.
2023-09-01 11:10:08 Doesn't make sense to me to be trying to send whole screen re-writes over from the gadget.
2023-09-01 11:10:36 Besides, my notebook is where I have plenty of RAM for those images.
2023-09-01 11:18:22 So as far as the gadget is concerned, a "console" will just be a pair of pipes.
2023-09-01 11:18:40 Or as far as a thread is concerned, that is.
2023-09-01 11:54:10 why does the 'rot' instruction historically affect only the top 3 elements and not the entire stack?
2023-09-01 11:59:56 Efficiency. You shouldn't be interested in something that's super deep in your stack. When you rot, you're changing all three of those stack elements, and that has a computational cost.
2023-09-01 12:00:28 There is a word ROLL that takes a stack parameter and basically "rots" all the way down to there. It's heavily frowned upon because the deeper you go, the more it costs.
2023-09-01 12:00:48 rot was just defined as a "less than ideal but still sometimes useful" operation.
2023-09-01 12:01:10 Any given Forth word should only be "interested in" the top few items of the stack.
2023-09-01 12:01:24 The fewer the better, really.
2023-09-01 12:02:01 there's also PICK, which takes a parameter and lets you fetch the item that's that deep in the stack. You can go as far down as you want. It's also frowned upon.
2023-09-01 12:02:36 I don't include PICK and ROLL in my designs, though I do have some words that let me go "somewhat" deeper than is completely standard.
2023-09-01 12:02:59 I try not to use them, but once in a while I find myself in a situation where it's just the cleanest way out. I usually arrange things so that I can reach 5 or 6 deep.
2023-09-01 12:03:44 I use that in EXPECT. Particularly when it's wrapped in QUERY and you have a command history in play, you just wind up with quite a few moving parts in EXPECT.
2023-09-01 12:05:16 This is another area Chuck is quite extreme on. He advises you to never be using more than two items on the stack - work on your code structure until you don't need to violate that. And as I noted earlier, he might have stacks just 4-8 deep anyway.
2023-09-01 12:05:27 I mean not capable of going deeper.
2023-09-01 12:06:00 Anyway, try not to think of the stack as some big fancy data structure. Think of it as a momentary scratchpad.
2023-09-01 12:16:55 The reason you have items deeper down on the stack is that they're related to activities happening up your call tree - by the time they become of real interest again they'll be back near the top.
2023-09-01 12:21:38 PICK doesn't really share the terrible cost impact of ROLL, because no matter how deep you PICK from you're still just fetching one value.
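(To make the cost argument concrete, here is a portable if slow sketch of ROLL built from standard words - the name MY-ROLL is only to avoid colliding with a system-provided ROLL. Each level of depth adds another shuffle, so the work grows linearly with u, which is exactly why deep ROLLs are frowned upon; PICK, by contrast, is a single indexed fetch no matter the depth.)

    \ 0 ROLL is a no-op, 1 ROLL is SWAP, 2 ROLL is ROT, and so on.
    : MY-ROLL ( xu xu-1 ... x0 u -- xu-1 ... x0 xu )
       DUP 0= IF DROP EXIT THEN    \ depth 0: nothing to move
       SWAP >R                     \ set the top item aside
       1- RECURSE                  \ roll the remaining u-1 items
       R> SWAP ;                   \ bring the saved item back, keeping the rolled item on top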
2023-09-01 12:21:55 So often you'll see systems that offer PICK but don't offer ROLL.
2023-09-01 12:33:03 KipIngram, ok that's right, but a complete rotation can be implemented in O(1) if we use a deque instead of a stack
2023-09-01 12:37:14 Surely you don't mean that I could load up an arbitrary-depth stack and then fully rotate it an arbitrary number of times and have every one of them be O(1)? I realize that if you filled up your entire stack region you could then just move a pointer around it, but what if it wasn't fully loaded?
2023-09-01 12:37:22 Sooner or later it would catch up to you, I think.
2023-09-01 12:37:50 And while that might address aspects of the efficiency concern, it doesn't address the fact that you just shouldn't write code that NEEDS something from dozens of cells deep.
2023-09-01 12:38:04 no, i meant that with a big stack, 0..n items, you can push_front() the 0th item and push_back() at the top of the stack in O(1) with a deque
2023-09-01 12:38:24 yes, i agree on that
2023-09-01 12:38:44 What if I wanted to rotate half the stack? Could that be done in O(1)?
2023-09-01 12:38:56 no, not that
2023-09-01 12:39:26 also some micro CPUs can't do such fancy things
2023-09-01 12:39:27 Well, I suppose if you implemented your stack as a doubly linked list you could do that in O(1).
2023-09-01 12:39:50 but if my understanding is correct, Forth is often implemented very close to the hardware, so on CPUs like x86 it uses the real machine stack, which is not a deque
2023-09-01 12:39:57 Yes.
2023-09-01 12:40:19 Generally you have some smallish fixed range of RAM that is your "stack space."
2023-09-01 12:40:35 yeah
2023-09-01 12:40:46 which can't grow, it's fixed, so you're subject to stack overflows
2023-09-01 12:41:14 I don't think anything comes even close to rivaling Forth in those "close to the hardware" aspects.
2023-09-01 12:41:45 And that kind of thing has been most of my career - embedded systems and so on.
2023-09-01 12:42:21 Not to mention that my first "serious" calculator - and the first thing I ever "programmed" - was an HP calculator, so I was doomed to love Forth from the start.
2023-09-01 13:00:12 This will be a three-day weekend for us here in the US. Monday is "Labor Day."
2023-09-01 16:26:31 rendar: It's true Forth's stack usually can't grow beyond a certain size, but as veltas and I noted earlier, it's extremely rare to need it to.
2023-09-01 16:27:04 Like I said, you just aren't supposed to use the stack as some kind of a sophisticated data structure.
2023-09-01 16:27:41 256 items/cells is usually quite big enough
2023-09-01 16:27:57 that's actually very large.
2023-09-01 16:28:11 Like I said earlier, mine never seems to use more than 17 or 18.
2023-09-01 16:29:33 7400-series up/down counters are either 4-bit or 8-bit and I like to use those to drive the address input of the RAM memories used for the stacks
2023-09-01 16:31:10 :-) Blast from the past there...
2023-09-01 16:33:50 I blame Ben Eater and the rule of thumb I learned that one's logic design should be at least theoretically buildable using 7400-series logic, parallel SRAM/EEPROM/flash/FRAM and such
2023-09-01 16:34:55 it also helps with making the parts multi-sourceable
2023-09-01 16:45:27 What I like about it is that it forces you to understand what your circuit's actually *doing*.
2023-09-01 16:46:13 Instead of just describing some high-level result you want and letting some algorithm churn out a logic design for you.
2023-09-01 16:49:36 I'm perfectly happy, though, doing without board-level propagation delays between components and so on.
2023-09-01 19:14:02 Oh, wow. The DM-42 has PICK and also has UNPICK.
2023-09-01 19:14:41 NOSE UNPICK
2023-09-01 19:17:24 One thing I regard as a shortcoming - it doesn't have an nth root facility. Instead you compute the 1/n power. Ok, fine. But 1/3 can't be stored exactly. So if you take, say, the cube root of -8, you're doing -8^(1/3), and since that's not a number that can be stored exactly, it decides the result needs to be complex. It returns the complex number with magnitude 2 and angle 60 degrees.
2023-09-01 19:17:39 Which IS a cube root of -8. But it's not the one you "want" - you want -2.
2023-09-01 19:18:11 And if you take that cube root it gives you and cube it, you get back -8 + i*10^-32 or something like that.
2023-09-01 19:18:27 Which is also "right," given where it's gotten with things.
2023-09-01 19:19:26 You know, since I'm just constantly taking cube roots...
2023-09-01 19:21:43 An nth root function could do better - it would see that the 3 was an integer and could know to prefer the real solution.
2023-09-01 19:22:03 If I ever do make a calculator like I talk about, I'll have to keep that in mind.
2023-09-01 19:22:52 I've also thought about taking an approach that stored values as rational numbers. Then it would just have the 1/3 explicitly stored and could do the same kind of reasoning.
2023-09-01 19:25:43 Python is smarter - it gives -2 for -8**(1/3)
2023-09-01 19:26:05 Octave too.
2023-09-01 19:48:57 Hmmm. I wonder how well that would actually work in practice? Handling reals as rationals instead of using a floating-point format?
2023-09-01 19:49:31 Seems like you'd want a good fast greatest common divisor function, so you could reduce your rational numbers to their simplest form.
2023-09-01 20:39:20 you're a bit stuck on the DM-42 since it's a workalike of a calculator from the 80s
2023-09-01 20:39:46 I'm sure they could have done arbitrary roots, but if that wasn't already on the original keypad, there's no place to put it
2023-09-01 20:45:19 Sure - other calculators do it, so they could have too.
2023-09-01 20:46:53 The DM-42 is somewhat based on the HP-42S, but even more than that it's a deployment of Free42, and uses that quadruple-precision Intel floating-point library.
2023-09-01 20:46:58 34-digit precision throughout.
2023-09-01 20:50:45 Just got a book delivered today. "Elementary Particles and the Laws of Physics." Feynman and Steven Weinberg. In 1986 they both gave keynote addresses at the Dirac Memorial Lectures at Cambridge - the book contains those addresses. They're both excellent.
2023-09-01 20:51:32 Feynman explains the inevitability of anti-particles and some key features of the spin-statistics theorem. Weinberg discusses likely features of whatever "final laws" we arrive at one of these days.
2023-09-01 20:52:43 feynman is always good
2023-09-01 20:53:03 That's the truth.
2023-09-01 22:04:20 So the "Solver" on the DM-42 is pretty straightforward. You just program your equation, but at the beginning of the program you identify the variables using an MVAR entry. So for A*X^2 + B*X + C = 0, you'd have MVAR A, MVAR B, MVAR C, and MVAR X. Then RCL X, X^2, RCL A, *, RCL X, RCL B, *, +, RCL C, +, RTN.
2023-09-01 22:05:09 You give that program a name, and when you start the solver you select that. You get a menu with A B C X in it.
You can enter a number and hit that button to set a variable, and if you hit a button without entering a number it will solve for that variable.
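(Circling back to the rational-number idea above: reducing a fraction to lowest terms is just Euclid's algorithm, which is a one-liner in Forth. GCD and REDUCE are made-up names here, and the sketch assumes a non-zero denominator.)

    : GCD    ( a b -- n )  ABS SWAP ABS BEGIN DUP WHILE TUCK MOD REPEAT DROP ;
    : REDUCE ( num den -- num' den' )  2DUP GCD TUCK / >R / R> ;
    \ Example: 6 8 REDUCE leaves 3 4 on the stack, i.e. 6/8 reduced to 3/4.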
2023-09-01 22:06:35 Kind of neat how it solves for anything regardless of how you enter the equation.
2023-09-01 22:08:32 I wonder if it uses Newton's method or something for that. For simple equations you could do algebraic manipulations, but if the equation got complicated at all that would fall over.
2023-09-01 22:11:08 can you stick an IF statement in there, for example?
2023-09-01 22:11:38 or other arbitrary things that hinder algebraic solving
2023-09-01 22:26:20 Oh, that's a good notion. I'm guessing that you can - I think it almost has to be numerical in order to be sufficiently generic.
2023-09-01 22:26:36 I mean, you could stick a LN in there and it would break algebra too.
2023-09-01 22:41:38 MrMobius: A conditional in there would let us do piecewise functions. I'll have to play with it later and see how robust it is on things like that.
2023-09-01 22:43:01 Also, I tested it on x^2+4x+4=0, and in that case both roots are -2. I wonder, if I pick one with two distinct roots and let it find one, what I then do to get the other. What I'd like to do is just hit the X button a second time - I'd like for it to cycle around the roots.
2023-09-01 22:43:02 KipIngram: if you're interested in solvers, a guy named Nasir has done a bunch of presentations at HHC that are online
2023-09-01 22:43:16 Oh neat - I am interested.
2023-09-01 22:43:48 Hey - I just accidentally typed _ when I meant - and it suddenly occurs to me we never talk about using _ as any kind of an operator in Forth.
2023-09-01 22:44:00 That's an available single-char name - a precious resource.
2023-09-01 22:45:14 solvers were also studied in early Lisp and get talked up in "Paradigms of AI Programming"
2023-09-01 22:46:27 Neat. Then you can level up and start talking about field solvers. Finite-element algorithms and so on.
2023-09-01 22:46:45 Guess that's getting a bit beyond the calculator scope, though.
2023-09-01 22:47:47 But they're really interesting algorithms.
2023-09-01 22:57:11 There's a nifty step in the derivation of the numerics where they use integration by parts to reduce the maximum order of differentiation by one. Gets rid of the second derivatives.
2023-09-01 22:58:31 Then on each little element you have an independent multi-variable polynomial approximation for your field, with the constraint that they all glue together at the boundaries where they meet.
2023-09-01 22:58:42 You solve for the unknown polynomial coefficients.
2023-09-01 23:01:28 Well, it usually winds up getting cast in terms of solving for the field values at the nodal points, but it boils down to solving for those coefficients.
2023-09-01 23:14:10 MrMobius: Got any links to any of that Nasir stuff? I searched YouTube for "Nasir HHC solver" but didn't get anything that looks promising at all.
2023-09-01 23:56:25 KipIngram: https://www.youtube.com/@hpcalc/videos
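(On the "it almost has to be numerical" point above: a purely numerical root-finder doesn't care whether the programmed function contains an IF or an LN, which is why a generic solver can handle them. Whatever Free42's solver actually uses is more sophisticated, but plain bisection already shows the idea. This sketch uses the standard floating-point word set and assumes a separate floating-point stack, as in gforth; BISECT, XLO, XHI and F3 are made-up names, and it assumes the supplied function changes sign between the two starting points.)

    FVARIABLE XLO  FVARIABLE XHI              \ current bracketing interval

    : BISECT ( xt -- ) ( F: lo hi -- root )   \ xt is a word with signature ( F: x -- fx )
       XHI F! XLO F!
       40 0 DO                                \ 40 halvings: plenty of convergence
          XLO F@ XHI F@ F+ 2e F/              \ midpoint
          FDUP DUP EXECUTE                    \ f(mid)
          XLO F@ DUP EXECUTE                  \ f(lo)
          F* F0< IF XHI F! ELSE XLO F! THEN   \ keep the half where the sign changes
       LOOP
       DROP  XLO F@ XHI F@ F+ 2e F/ ;

    \ Example: the real cube root of -8, as the root of x^3 + 8 on [-3, 0]:
    : F3 ( F: x -- fx )  FDUP FDUP F* F* 8e F+ ;
    ' F3 -3e 0e BISECT F.                     \ prints -2. (to within rounding)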