2024-06-17 13:06:03 Heh, actually need to read individual bits from the scale.
2024-06-17 13:06:09 Never did that before (as a web dev)
2024-06-17 14:10:21 Is Don Hopkins in here?
2024-06-17 14:35:37 Opinions on the pattern: R> 2>R
2024-06-17 14:36:36 The object of this is to tuck something on the return stack
2024-06-17 14:36:57 While keeping the present top of return stack
2024-06-17 14:38:15 Ran into this while attempting division of a triple-cell value, while implementing M*/ using UM* and UM/MOD
2024-06-17 14:42:15 Which is easier than it sounds, and deserves an article, but you run into some hairy stack operations on the way down
2024-06-17 15:25:03 veltas: seems like it's missing a swap
2024-06-17 15:29:20 Bearing in mind that 2>R is equivalent to SWAP >R >R
2024-06-17 15:30:01 Maybe I should just write R> SWAP >R >R
2024-06-17 15:30:43 It's clearer
2024-06-17 15:32:04 does 2>r always have an implicit swap? if so, i overlooked that.
2024-06-17 15:33:31 Yeah, the idea being it preserves the order of two items, and so 2R@ works, etc
2024-06-17 15:34:02 Technically there's nothing stopping you from not swapping there, but then e.g. 2R@ couldn't be implemented using 2@ or whatever normal order you'd expect
2024-06-17 15:34:13 Unless your return stack grows upwards
2024-06-17 15:34:19 Many ways to skin a cat
2024-06-17 15:36:20 yeah i assumed the swap occurred in 2r@
2024-06-17 15:39:30 The real difference is whether R@ gives you the high- or low-order cell
2024-06-17 15:39:47 I'd expect R@, as it's the top, to give the high-order cell
2024-06-17 15:53:40 Is there no way to tell a serial port to flush everything and start from scratch?
2024-06-17 15:56:00 veltas: i expect r@ to give me the top of the return stack
2024-06-17 15:56:57 a few months ago i was asking in here about the order of double-cell values and never got a clear answer. when reading the various docs (ans, fig-79, etc.) i was never able to find clear consistency
2024-06-17 15:58:21 i've seen some people write double literals like "1234 5678" where 5678 (TOS) is the low-order cell, and then i've seen people extend a single-cell value to double-cell with just "0" which suggests the 0 (TOS) is the high-order cell
2024-06-17 15:59:22 zelgomer: The ANS and later standards have the high part on top, and I think most classic code does
2024-06-17 15:59:31 So DROP can truncate
2024-06-17 16:00:36 The impact of this though is that in a normal Forth you get a weird mixed endianness on little-endian systems for doubles
2024-06-17 16:01:22 I'm not surprised there's variation
2024-06-17 16:14:59 The PDP-11 had mixed endian
2024-06-17 16:21:47 I don't really mind it, it's hacky but so is Forth
2024-06-17 17:05:51 veltas: R> 2>R seems clever to me. If those words are in your dictionary it's likely the tidiest way to do a return stack TUCK.
2024-06-17 17:06:56 I do see that SWAP >R >R is more "certain" - I don't know offhand if the order of doubles on the stack is "standardized." If I wrote it I'd tend to follow the processor's endianess.
2024-06-17 17:07:35 endianness
2024-06-17 17:23:42 But that's just me being a hardware guy, I guess. It would just feel "more consistent."
2024-06-17 17:25:15 Yeah Forth just has loads of tactical inconsistency
2024-06-17 17:25:36 It's one thing being inconsistent because of a lack of understanding, but classic Forth has a lot of conscious inconsistency
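[A minimal sketch of the R> 2>R "return stack tuck" discussed above; RTUCK-DEMO is a hypothetical name, and the pattern is shown inline because a separate colon definition's own return address would get in the way of R> on most systems:]

    : RTUCK-DEMO ( -- )
       10 >R          \ return stack now holds: 10
       99 R> 2>R      \ pop the 10, push 99 underneath it: R: 99 10
       R> . R> . ;    \ prints 10 99 -- old top kept on top, 99 tucked under
    RTUCK-DEMO

[Since 2>R is defined as SWAP >R >R, the spellings R> 2>R and R> SWAP >R >R are interchangeable here; the longer one just makes the swap explicit.]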
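[A hedged illustration of the ANS ordering zelgomer asked about: with the high-order cell on top, zero extension is just pushing a literal 0, sign extension is one test, and truncation is one DROP. MY-S>D is a made-up name to avoid shadowing the built-in S>D, and the exact .S display varies by system:]

    : MY-S>D ( n -- d )  DUP 0< ;   \ high cell is all sign bits
    -1 MY-S>D .S                    \ typically shows: -1 -1  (low cell, high cell on top)
    2DROP
    12345 MY-S>D DROP .             \ DROP truncates the double back to 12345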
2024-06-17 17:52:15 the order in which double precision values appear on the stack should be defined and not relate to the endianness of HW imo. the stack is like the order of parameters to a c function or the order of return values from a function (if c could return multiple values). in other words, i view it as a forth vm or calling convention type of thing.
2024-06-17 17:53:22 not even a calling convention, actually, because that's at the abi layer. it's more of an api signature convention.
2024-06-17 18:26:43 ill give you a calling convention
2024-06-17 18:36:44 zelgomer: It's true what you're saying
2024-06-17 18:36:58 It's just a shame IMO they didn't choose my preferred endianness but what can you do
2024-06-17 18:37:06 The name itself refers to a pointless argument
2024-06-17 18:38:14 Who even uses doubles that much? I have. Higher precision on 8/16-bit CPUs is cool.
2024-06-17 18:38:45 That's pretty much the only reason, I don't think I 'need' 64-bit for very much.
2024-06-17 19:01:49 i'm not to the point where i've used it yet, but tbh, smooth double- and multi-precision arithmetic is one of the features that appeals to me about forth. i actually have needed to do fixed-point 64-bit (32.32) arithmetic in c before and it's incredibly janky.
2024-06-17 19:02:19 throwing away the high half of the multiplication result is one of c's greatest oversights imo
2024-06-17 19:03:39 so to do it in c you're pretty much forced to rely on implementation extensions -- either cast everything to __int128 and hope the compiler does the right thing, or write the routine in assembly where it can never be inlined
2024-06-17 19:04:04 well, i guess it can be inlined if you use the inline __asm stuff.
2024-06-17 19:42:25 Yeah not having mixed mult/div and carry/borrow bits is super annoying in 'portable assembly'
2024-06-17 20:05:39 I think if I was doing a 32-bit system I'd be pretty likely to support doubles. Given a 64-bit system, though, I never quite saw the point. I use decimal points to indicate floating-point values instead.
2024-06-17 20:09:44 floats suck
2024-06-17 20:13:19 Floating point holy war
2024-06-17 20:14:32 some numbers are more subnormal than others?
2024-06-17 20:14:37 lol
2024-06-17 20:30:44 :-) I'm not too worked up either way about the details. I just like scientific computing, and floating point is kind of important to it.
2024-06-17 20:35:54 Can measure the distance to Mars in micrometres with a 64-bit integer
2024-06-17 20:36:19 the only reason floats exist is because of bad programmers who can't identify an appropriate range for the given problem. you actually /lose/ precision with a float of the same size.
2024-06-17 20:36:45 Now I'm not saying you should use it for that, but it is interesting to know, to gauge the precision
2024-06-17 20:37:05 of 64-bit, which is pretty incredibly large
2024-06-17 20:38:51 my example above was actually something like that. i was playing around with some astronomical calculations and it was frustrating me that rounding errors were different depending on distance from the origin. when you're dealing with spatial coordinates like that, life is so much simpler if every position is treated equally, and the easiest way to achieve that is with fixed point.
2024-06-17 20:46:54 Yeah, I'm quite interested in fixed-point possibilities too.
2024-06-17 20:47:22 And there are also some aspects of projective geometry that look like they might dovetail really nicely with fixed-point work.
2024-06-17 20:47:51 I think if a good way could be found to do that kind of work with fixed point, without it feeling painful, that would be great.
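[A hedged sketch of the "smooth" fixed-point arithmetic being praised, assuming a 32-bit cell and a 16.16 format; the names FX*, FX/ and FX. are made up for illustration. The point is that */ keeps the full double-cell intermediate product, which is exactly what C's * throws away:]

    65536 CONSTANT FX-SCALE                     \ 2^16: sixteen fraction bits
    : FX*  ( a b -- a*b )  FX-SCALE */ ;        \ double-cell product, then rescale
    : FX/  ( a b -- a/b )  FX-SCALE SWAP */ ;   \ a*2^16/b, no overflow in the middle
    : FX.  ( a -- )  FX-SCALE / . ;             \ crude: integer part only, positive values

    65536 131072 FX* FX.   \ 1.0 * 2.0 -> prints 2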
2024-06-17 20:49:35 I was working on a timer at work that ran around let's say 10ns per tick, and stored in a 64-bit integer to prevent overflow issues.
2024-06-17 20:49:39 I thought I'd check how long it would run for sanity: over 5 millennia
2024-06-17 20:52:08 What's also interesting is the 32-bit limit: a bit under a minute
2024-06-17 20:52:46 10ns is on the scale of how quickly a computer does what I do in a minute
2024-06-17 20:53:16 veltas: Y7K bug?
2024-06-17 20:53:17 So it takes a computer a minute to do what would take me over 5000 years
2024-06-17 20:53:18 I hope you've put a note in your diary to update to a 128-bit counter in plenty of time :P
2024-06-17 20:54:01 Just crazy we have machines that can do 5000 years of work in a minute
2024-06-17 20:55:45 This is an MCU by the way
2024-06-17 20:57:29 lol i fixed a 32-bit overflow bug at $dayjob once and my commit message was something like "fix 32-bit rollover by increasing mask to 64-bits" (it was code that was ported from a 32-bit platform to 64-bit and they had used a hard-coded ffffffff mask). when the customer service rep saw that, he asked me, "are you sure this actually fixes the problem, or does it only double the amount of time before it happens again?"
2024-06-17 20:58:10 i was like ... first, no, it actually handles the rollover properly so it will never happen. but even if it didn't, sit down, let me explain to you the concept of exponential growth.
2024-06-17 20:58:55 or just retire before 2038
2024-06-17 21:50:57 Yeah, 64 bits gets you a long long way.
2024-06-17 21:51:43 And yes, it's insane the progress we've made in computing in the last 50 years or so.
2024-06-17 22:52:09 zelgomer: I think floats exist because that's how we've always done scientific/engineering calculations, with significant figures, standard form, etc.
2024-06-17 22:52:23 It's natural to work that way with a lot of multiplications etc
2024-06-17 22:52:34 Less natural for addition, fixed point is better at that
2024-06-17 22:53:17 I mean we were doing floating point right out the gate with computers, in the 40's
2024-06-17 22:55:05 Babbage talked about floating point
2024-06-17 22:57:05 Apparently 1938 was the first
2024-06-17 23:02:51 https://people.eecs.berkeley.edu/~wkahan/SIAMjvnl.pdf
2024-06-17 23:03:54 Interesting paper critical about floating point
2024-06-17 23:10:22 psh babbage. what does that guy know.
2024-06-17 23:10:22 probably never even used forth
2024-06-17 23:18:37 Well that paper isn't critical of floating point, but it does raise some interesting considerations with floating point
2024-06-17 23:19:11 It mentions that Von Neumann didn't like floating point
2024-06-17 23:26:51 Floating point works well 99.00000000000001421085% of the time
2024-06-17 23:29:56 catastrophic cancellation can be fun
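[The arithmetic above checks out: 2^32 ticks x 10 ns is about 42.9 s, "a bit under a minute", and 2^64 ticks x 10 ns is about 1.8e11 s, roughly 5,850 years. As for the rollover fix, a hedged sketch of the usual technique, not necessarily what was done at $dayjob: keep timestamps as raw counter values and compute elapsed time as a modular difference, which stays correct across wraparound as long as the interval itself fits in a cell:]

    : ELAPSED ( t-start t-now -- ticks )
       SWAP - ;   \ Forth's - wraps modulo the cell size, so this survives rollover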
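[And a hedged demonstration of the catastrophic cancellation mentioned at the end, assuming a Forth with the optional floating-point wordset and IEEE binary64 floats; exact output formatting varies by system:]

    1e16 1e0 F+      \ 1e16 + 1: the added 1 is below the rounding step (ulp = 2 here)
    1e16 F-  F.      \ subtracting the big term back prints 0 (round-to-even), not 1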