2022-07-12 00:05:54 boustrophedon - you mean bootstrapping the numbers from quantities of marks?
2022-07-12 00:06:43 it's named after ox plowing, so the words go one direction, then on the next line the other, etc
2022-07-12 00:07:16 oh, yeah - I see. That would be... unfun. THAT habit is driven in pretty deep.
2022-07-12 00:07:48 boustrophedon byte order would presumably switch between big and little endian between words
2022-07-12 00:08:00 Ugh.
2022-07-12 00:08:41 I tell myself little endian is more sensible, but that could just be because I have so much more experience with it.
2022-07-12 00:10:14 http://bear.ces.cwru.edu/eecs_314/endian_ien-137.html
2022-07-12 00:12:05 Oh, nice. Never saw this before.
2022-07-12 00:14:17 OTOH, that publication date
2022-07-12 00:24:19 That was fun.
2022-07-12 00:26:37 Of course we're faced with the issue in x86_64 and many others that strings are stored so that low address to high is left to right, but numbers are stored so that low address to high is right to left.
2022-07-12 10:13:01 KipIngram: I thought x86_64 was little endian
2022-07-12 10:13:25 lsb at lowest memory address
2022-07-12 10:17:49 Um, that's what I think of it as. Is it not?
2022-07-12 10:18:21 But that's my point. Strings sort on the earlier characters in RAM - numbers sort on later bytes in RAM, the more significant bytes.
2022-07-12 10:20:09 So *strings* are kind of stored "Big Endian."
2022-07-12 10:20:25 oh, for sorting purposes... I see
2022-07-12 10:20:29 ACTION must reboot
2022-07-12 10:23:17 pretty sure the article points out that little endians don't manage consistency
2022-07-12 10:36:10 Yes, it did.
2022-07-12 10:37:06 The thing is, strings don't really have "significance" in the same sense that numbers do. They have a *chronological* order, but not really a "significance order" unless we arbitrarily impose one.
2022-07-12 10:37:15 Which we do when we sort them.
2022-07-12 10:37:46 Numbers have no "chronology," but a very clear "significance" order.
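[Editor's illustration, not from the log: the point that strings sort on their earliest bytes in RAM while little-endian integers sort on their latest bytes can be shown with Python's struct module.]

```python
import struct

# Pack two integers in little-endian byte order, as x86_64 stores them.
a = struct.pack("<I", 255)   # b'\xff\x00\x00\x00'
b = struct.pack("<I", 256)   # b'\x00\x01\x00\x00'

# Comparing raw bytes left to right (the way strings sort) gives the
# WRONG numeric order: 255's first byte (0xff) compares greater.
print(a > b)                 # True, even though 255 < 256

# Big-endian packing puts the most significant byte first, so a plain
# byte comparison agrees with numeric order - "strings are Big Endian."
a_be = struct.pack(">I", 255)
b_be = struct.pack(">I", 256)
print(a_be < b_be)           # True
```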
2022-07-12 10:37:52 little endian†
2022-07-12 10:37:57 †except when it is not. some restrictions may apply.
2022-07-12 10:38:23 That was a good write-up; I enjoyed reading that last night.
2022-07-12 10:38:34 I like running across little "classical pieces" like that.
2022-07-12 10:38:55 Kind of like that "what every programmer should know about floating point" paper that's out there.
2022-07-12 10:40:15 float i = 0.00001; float j = 10000.0; while (j < 10001.0) { j += i; } // space heater
2022-07-12 12:22:48 :space-heater (-) .0.00001 .10000.0 [ f:over f:+ f:dup .10001.0 f:lt? ] while ;
2022-07-12 12:25:16 takes 18.94s on my openbsd dev box w
2022-07-12 12:31:14 Why do you have two decimal points in your floats?
2022-07-12 12:31:28 Is the first one required to specify that it's a float?
2022-07-12 12:33:15 I wrote it so that the presence of e or a . anywhere causes it to be a float. But then later I changed that e to the non-standard ^ to avoid conflict with the hex digit e.
2022-07-12 12:33:35 And yes, I realize that might be confusing to people not expecting it.
2022-07-12 12:34:05 yes, the leading . is a sigil that tells retro that the token is a floating point value
2022-07-12 12:34:13 Got it.
2022-07-12 12:34:22 Kind of had to be that, for your code to match thrig's.
2022-07-12 12:34:36 So, that's about 100,000 additions?
2022-07-12 12:35:06 it's an infinite loop in C (for some compilers, maybe)
2022-07-12 12:35:15 19 seconds seems like a long time for that, but I don't have actual f:+, over, and lt in my system yet so I can't actually test it.
2022-07-12 12:35:42 Oh, that's enough to snarl up a single precision float?
2022-07-12 12:35:54 I usually use doubles, so I'm not well-calibrated on the single limits.
2022-07-12 12:36:13 doubles will need embiggened numbers to get a float to drown
2022-07-12 12:36:17 But yeah - if there aren't enough bits there to represent a different number, what can it do?
2022-07-12 12:36:23 Yeah.
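[Editor's illustration: why the C loop above never terminates in single precision. Near 10000.0 the spacing between adjacent float values (one ulp, 2**-10 ≈ 0.00098) is much larger than the 0.00001 increment, so each addition rounds back to the same value. A quick Python sketch that rounds through 32-bit precision with struct:]

```python
import struct

def f32(x):
    """Round a Python float (a double) through IEEE 754 single precision."""
    return struct.unpack("f", struct.pack("f", x))[0]

j = f32(10000.0)     # exactly representable in float32
i = f32(0.00001)     # tiny compared to the ulp at 10000.0

# One ulp at 10000.0 is 2**-10 = 0.0009765625, far more than twice i,
# so j + i rounds right back to j: the loop can never advance.
print(f32(j + i) == j)   # True: the addition is a no-op in float32
```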
2022-07-12 12:36:38 The funny thing is it doesn't fail in a predictable way, which is more evidence of the problem.
2022-07-12 12:37:00 thrig: on my openbsd box it's an endless loop with C (if compiled w/o optimizations) or a segfault with -O1 or -O2
2022-07-12 12:37:27 Ah; those flags cause it to detect the fault.
2022-07-12 12:37:34 wait, how could that segfault???
2022-07-12 12:37:38 KipIngram, my forth isn't particularly fast :)
2022-07-12 12:37:55 MrMobius: I assume it's segfaulting deliberately.
2022-07-12 12:38:00 Signal fault.
2022-07-12 12:38:08 Not literally a memory address fault.
2022-07-12 12:38:21 Signal thrown by the floating point unit. I'm guessing.
2022-07-12 12:38:24 OpenBSD will abort a process if uninit numbers are used and various other conditions
2022-07-12 12:38:26 You get exactly 100,000 additions on a calculator. The whole world should run on BCD floats.
2022-07-12 12:39:02 There are well-defined advantages to binary,
2022-07-12 12:39:05 https://asciinema.org/a/bK1C5n65zK7AS7w8LnRt1GiV2 shows what I see
2022-07-12 12:39:14 which are enumerated in that paper I mentioned earlier.
2022-07-12 12:39:29 Binary can be more accurate.
2022-07-12 12:39:46 But I understand why you say that.
2022-07-12 12:39:56 Ya, binary floats are faster and an FPU requires fewer transistors, but you'd better understand what you're doing.
2022-07-12 12:40:00 It's annoying when we can write a number down exactly but a computer can't represent it.
2022-07-12 12:40:08 Yes, you'd better.
2022-07-12 12:40:18 in my new system, I'm not adding floating point
2022-07-12 12:40:24 The computer can represent it, just not in binary float :P
2022-07-12 12:40:31 :-)
2022-07-12 12:40:32 Fair.
2022-07-12 12:41:07 Norman Wildberger advocates for a "rational number" representation, both in formal math and in computers.
2022-07-12 12:41:20 since any irrational number is one that can't actually be worked with.
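[Editor's illustration of both points above - the calculator-style exactness of decimal arithmetic and the round-off-free rationals - using Python's standard decimal and fractions modules:]

```python
from decimal import Decimal
from fractions import Fraction

# Decimal arithmetic behaves like a BCD calculator: 100,000 additions
# of 0.00001 land exactly on 10001, with no rounding drift.
j = Decimal("10000.0")
step = Decimal("0.00001")
for _ in range(100_000):
    j += step
print(j == Decimal("10001.0"))   # True

# Binary floats can't even represent 0.1 exactly:
print(0.1 + 0.2 == 0.3)          # False

# Rationals carry the divisor along, so division never rounds:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```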
2022-07-12 12:41:33 Someone gave a presentation at a calculator conference about the history of FPUs. Homebrew people were grafting calculator chips onto the bus pre-x87.
2022-07-12 12:41:36 I think he's kind of extreme and pedantic, but he also has a point.
2022-07-12 12:42:53 The infinite precision stuff is really cool, where you save the divisor among other strategies so you never have a round-off error from dividing.
2022-07-12 12:42:59 Isn't a measure of BCD support required by the standard?
2022-07-12 12:43:18 IEEE Standard?
2022-07-12 12:44:15 Iirc, there was an 854, then decimal FP got added as part of regular 754.
2022-07-12 12:46:55 Yes - I think I meant 754. It's the only one I knew about.
2022-07-12 12:47:39 So, it seems there's some agreement with you out there about the necessity of BCD support.
2022-07-12 12:47:56 Makes sense to me - after all, it's how we think, it's how calculators work, etc.
2022-07-12 12:48:01 Kind of matters.
2022-07-12 13:39:08 imode: That bootstrap.fs you linked last night *is* actually fascinating. I'm having quite a nice time picking through it.
2022-07-12 13:39:23 I picked over that thing until I couldn't anymore.
2022-07-12 13:39:27 the bootstrapping is elegant.
2022-07-12 13:43:08 Yeah, it's very entertaining.
2022-07-12 13:43:29 I'm not really drilling down hard on it; more just trying to get a feel for the "flow of things."
2022-07-12 14:17:59 imode: Is the source of the initial binary in this git repo somewhere, or when he says "hand written" does he mean he actually WROTE THE BINARY?
2022-07-12 14:18:11 he actually wrote the binary.
2022-07-12 14:18:14 Does one get this thing just as a slug of bits?
2022-07-12 14:18:15 Wow.
2022-07-12 14:18:32 That must have been... tedious.
2022-07-12 20:17:34 I didn't realize there's yet another new Star Trek series.
2022-07-12 20:17:51 Just started, apparently - one season in the bag.
2022-07-12 21:17:45 I recently learned more about floating point and honestly it's pretty nice.
2022-07-12 21:17:59 The issue with fixed point arithmetic is that you restrict the range too much. With floating point you can reason about and bound the worst-case error.
2022-07-12 21:18:38 Of course it depends on the application. If I wanted to carry out an optimization problem I would use floating point, but if I were doing financial calculations, then fixed point.
2022-07-12 21:20:10 there's a reason Knuth uses circle-+ and not + in his articles
2022-07-12 21:20:18 Neural networks don't even need that much precision in the mantissa; in fact, more recent research is looking into how to reduce the number of bits needed (people are switching from 32 to 16 bits, and this paper https://proceedings.neurips.cc/paper/2018/file/335d3d1cd7ef05ec77714a215134914c-Paper.pdf shows how to do deep learning with 8).
2022-07-12 21:20:19 siraben: https://iquilezles.org/articles/floatingbar/
2022-07-12 21:20:31 integer neural nets are also a thing.
2022-07-12 21:20:58 imode: oh yeah, there's also arbitrary precision arithmetic, where you represent real numbers as converging Cauchy sequences
2022-07-12 21:21:07 if you really wanted to be exact (but give up performance)
2022-07-12 21:21:10 yeah but floating rationals are neat.
2022-07-12 21:21:16 check the article.
2022-07-12 21:21:36 nice, I'll look through it
2022-07-12 21:22:29 thrig: what articles are you referring to?
2022-07-12 21:23:07 the ones where Knuth uses circle-+ because computer-+ ain't mathematics +
2022-07-12 21:23:17 Right.
2022-07-12 21:23:56 I wonder what aspects of a Forth implementation can be improved to work well on modern hardware. The dictionary is traditionally implemented as a linked list, but that's not good for cache reasons. Do people use some sort of hashing?
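[Editor's sketch of one answer to the hashing question; the class and method names here are invented for illustration, not taken from any real Forth. A hash table keyed by word name, where each bucket keeps definitions newest-first, preserves the shadowing semantics of a linked-list dictionary while making lookup O(1) on average:]

```python
class Dictionary:
    """Hashed Forth-style dictionary: average O(1) lookup instead of
    walking a linked list, while keeping redefinition shadowing."""

    def __init__(self):
        self.words = {}   # name -> list of definitions, newest first

    def define(self, name, body):
        self.words.setdefault(name, []).insert(0, body)

    def find(self, name):
        defs = self.words.get(name)
        return defs[0] if defs else None

    def forget(self, name):
        # Drop only the most recent definition, exposing the older one.
        defs = self.words.get(name)
        if defs:
            defs.pop(0)

d = Dictionary()
d.define("dup", "original dup")
d.define("dup", "redefined dup")
print(d.find("dup"))   # "redefined dup" shadows the original
d.forget("dup")
print(d.find("dup"))   # back to "original dup"
```

(A real FORGET must also discard every word defined after the target, which a plain hash table doesn't track; that's one reason classic Forths liked the linked list.)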
2022-07-12 21:25:20 because in maths you can do craaaazy things like 66666 66666 + and get a different answer than 2260
2022-07-12 21:25:48 133332
2022-07-12 21:26:35 there's a list of implementation types to start from at http://www.bradrodriguez.com/papers/moving1.htm
2022-07-12 21:26:50 Yeah I read through that several years ago
2022-07-12 21:26:58 was very helpful for https://github.com/siraben/ti84-forth
2022-07-12 21:27:49 2260 = 133332 - 131072
2022-07-12 21:28:56 2^18 = 262144
2022-07-12 21:29:53 17 bit integers with 1 sign bit, then what are the rest of the bits?
2022-07-12 22:44:44 siraben: You *can* do anything you want with fixed point, but you have to understand what you're doing well enough to manage the scaling yourself.
2022-07-12 22:45:08 Floating point lets you have, um, more ignorance about that aspect and still get reasonable results.
2022-07-12 22:45:17 But you can also foul yourself up with it in various ways.
2022-07-12 22:45:36 javascript going funky past 2**53 or so
2022-07-12 22:48:25 imode: You know that planckforth is basically a bytecode interpreter that takes its bytes from the input stream. I was thinking earlier about how easy such a thing would be to write.
2022-07-12 22:48:43 Well, maybe not the way its author did it, but you could do it all kinds of ways.
2022-07-12 22:48:53 Just a jump table, basically.
2022-07-12 22:49:45 yep.
2022-07-12 23:59:17 Decided to read The Wheel of Time this evening. I'm a couple of chapters in and I think it's going to hook me fairly well. I've heard a lot of good things, but just hadn't ever gotten around to it sooner.
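[Editor's sketch of the "just a jump table" idea for a planckforth-style interpreter. The opcode set here is invented for illustration; planckforth's actual codes and dispatch differ:]

```python
def run(program):
    """Tiny stack machine: each input character indexes a table of primitives."""
    stack = []

    def push1(): stack.append(1)
    def add():   stack.append(stack.pop() + stack.pop())
    def dup():   stack.append(stack[-1])

    # The "jump table": one entry per opcode.
    table = {"1": push1, "+": add, "d": dup}

    for op in program:   # opcodes come straight from the input stream
        table[op]()      # dispatch is a single table lookup
    return stack

print(run("11+d+"))   # [4]: 1 1 + -> 2, dup -> 2 2, + -> 4
```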