2022-04-02 03:53:46 > but, actors + messaging isn't that good at the low level
2022-04-02 03:53:54 eris[m]: read https://dl.acm.org/doi/pdf/10.1145/3212477.3212479?download=true (specifically pg11) ?
2022-04-02 03:54:45 I think the real "hard barrier" is that ~nobody considers DRAM accesses to be IO
2022-04-02 03:54:51 the GA144 does
2022-04-02 03:57:14 and by my reading of pg11 there, at the level of "the implementation of the erlang-like apps language" that might be the right view for an "array of simple cores" chip
2022-04-02 06:39:13 remexre: no, haven't
2022-04-02 11:22:46 Chuck's philosophy, that you should be willing to consider completely new hardware when pursuing a problem solution, is a good one - and if your game is building an embedded system to solve a problem it's the *right* one. In that arena you're building a THING, and all the doors are open. But most software companies don't operate in that arena. They are building a pattern of bits, with the idea that customers
2022-04-02 11:22:49 can load it onto a machine they *already have* to solve a problem. The hardware that's out there is the hardware that's out there - they *must* target that. Further, the hardware that's "out there" has slight differences - targeting the solution too specifically limits the market size. So portability matters in an economic way. Chuck's thinking just can't be applied completely when that's your operating
2022-04-02 11:22:51 domain.
2022-04-02 11:23:04 I think it can still be applied - but not all the way into hardware selection.
2022-04-02 11:23:56 If you try to "insist" that you can start with a completely blank sheet of paper, you may not wind up doing the right job.
2022-04-02 11:24:24 The ideas are good and we should keep them in mind. But we also have to live in the "real world."
2022-04-02 11:25:08 Chuck needed *one* chip design system. A seat for himself. If my company markets IC design tools, those tools have to run on the existing population of computers in the world.
2022-04-02 11:25:24 Unless I choose to sell complete workstations, hardware included.
2022-04-02 11:26:46 Taking that same reasoning down a level, individual software engineers have to operate within the guidelines laid down by their employer. They can campaign for change, but in the end they have to do what they're told.
2022-04-02 11:27:04 So their tool selection and their "style decisions" may be constrained and outside of their control.
2022-04-02 11:27:34 A true implementation of Chuck's ideas, industry wide, would require a LOT of high level collaboration across the whole industry.
2022-04-02 11:27:37 There's an inertia here.
2022-04-02 11:28:29 And what I mentioned yesterday - the "dependence loop" between how new hardware is designed and how new software is designed - that causes an inertia as well. All these things become "economic" pressure to "not change."
2022-04-02 11:33:03 I read a neat history of the Transputer yesterday. It honestly sounds like those guys were on the right track technologically - 40 years ago. A lot of their ideas are showing up again in the latest processors today. But 40 years ago the whole industry structure just wasn't amenable to the penetration of those ideas. Maybe this time it will be, but I guess we'll have to see.
2022-04-02 11:33:44 What may happen instead is that we may figure out how to use graphene / carbon nanotubes as our substrate and launch another factor of 100 march forward with the same basic architecture we've been using for decades.
2022-04-02 11:34:18 I think the key is that graphene provides for better heat extraction and thus you can drive the clock rate up - I saw a forecast of 400 GHz cores in one YouTube video.
2022-04-02 11:34:45 So we can keep doing exactly what we've built the economy to do for another 15-25 years.
2022-04-02 11:35:18 talking about hw, have you seen this BitGrid idea?
2022-04-02 11:35:19 Moore's law has meant that we could keep getting bigger and faster without having to get smarter.
2022-04-02 11:36:40 BitGrid looks like something worth learning about.
2022-04-02 11:36:52 And remexre - I'm reading that link you posted too.
2022-04-02 11:36:52 see http://bitgrid.blogspot.com/
2022-04-02 11:36:57 Thanks.
2022-04-02 11:37:45 I wonder how it fits into Forth or vice versa
2022-04-02 11:38:27 especially if we take a page from Xilinx and support partial reconfiguration
2022-04-02 11:38:47 remexre: this is a good paper.
2022-04-02 11:41:18 This line is interesting:
2022-04-02 11:41:21 The cache is, as its name implies, hidden from the
2022-04-02 11:41:23 programmer and so is not visible to C.
2022-04-02 11:41:33 Specifically, the "as the name implies" part.
2022-04-02 11:41:44 That name implies that because that's how we use that word.
2022-04-02 11:41:57 There's nothing about the word "cache" that necessarily means "hidden."
2022-04-02 11:42:11 It implies hidden because we KNOW that a memory cache is hidden.
2022-04-02 11:42:47 Another perfectly viable path we could have taken over the years would have been to make the layers of RAM explicitly visible and programmer managed.
2022-04-02 11:43:00 That would have put work on the programmer but would have simplified the hardware dramatically.
2022-04-02 11:43:09 And we still could have called that a "cache."
2022-04-02 11:43:55 I read about one interesting processor once that straddled that line. Every RAM cell had two "spots" in the address space - with one bit in the address different.
2022-04-02 11:43:59 let's say we start small with a 16x16 BitGrid and place it into, say, the excamera J1: one side is connected to TOS, another to NOS, yet another to the address lines of a memory, and the last to the data lines of the same memory
2022-04-02 11:44:06 If you used one, it would work as normal and cache the cell.
2022-04-02 11:44:13 If you used the other, it would bypass the cache.
2022-04-02 11:44:23 So that design made the cache "semi-visible" to the programmer.
2022-04-02 11:45:32 a bit like one of the ideas that went into the Itanium?
2022-04-02 11:50:15 Zarutian_HTC: I haven't caught up yet - I'm still reading remexre's link, which is one of the best reads I've come across in a while.
2022-04-02 11:50:28 He's really shining the light onto a lot of things.
2022-04-02 11:50:38 He = the paper author.
2022-04-02 12:04:24 Ok, finished that - going to get a cup of coffee and then read about BitGrid. Thanks again, remexre - that was a great read.
2022-04-02 12:18:33 Zarutian_HTC: is there another link? That one seems to be a guy talking about using BitGrid for something. I didn't get a Wikipedia hit on it.
2022-04-02 12:18:39 Is there a description anywhere?
2022-04-02 12:19:39 nope, the guy who is talking about it is the author/inventor
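A minimal Python sketch of the "semi-visible cache" idea mentioned a bit earlier (every RAM cell reachable at two addresses that differ in one bit, where one alias goes through the cache and the other bypasses it). All names, sizes, and the bit position are invented for illustration, not taken from the processor being remembered.

# Toy model: bit 16 of the address (hypothetical) selects the "uncached" alias.
BYPASS_BIT = 1 << 16
ADDR_MASK  = BYPASS_BIT - 1          # low bits address the actual RAM cell

class ToyMemory:
    def __init__(self, size=1 << 16):
        self.ram = [0] * size
        self.cache = {}              # addr -> cached value

    def read(self, addr):
        phys = addr & ADDR_MASK
        if addr & BYPASS_BIT:        # uncached alias: go straight to RAM
            return self.ram[phys]
        if phys not in self.cache:   # cached alias: fill on miss
            self.cache[phys] = self.ram[phys]
        return self.cache[phys]

    def write(self, addr, value):
        phys = addr & ADDR_MASK
        self.ram[phys] = value
        if not (addr & BYPASS_BIT):
            self.cache[phys] = value # keep the cached copy coherent

mem = ToyMemory()
mem.write(0x1234, 42)                 # via the cached alias
print(mem.read(0x1234))               # 42, served from the cache
print(mem.read(0x1234 | BYPASS_BIT))  # 42, read straight from RAM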
2022-04-02 12:19:59 That "C is not low level" article makes it pretty clear that the legacy of C code is keeping us from migrating to better processor architectures. The pressure is there to "run existing C code fast." Looks like it was the same with Fortran - C didn't displace Fortran from its niche because Fortran better expresses things that let the computations be done fast.
2022-04-02 12:20:19 Ok - cool; I'll try to "digest" it.
2022-04-02 12:20:28 http://bitgrid.blogspot.com/2005/03/bitgrid-in-25-words-or-less.html is pretty short
2022-04-02 12:22:44 as I understand the concept, BitGrid is a systolic array, like a von Neumann neighbourhood cellular automaton but where each cell has its own ruleset of what to tell its neighbours specifically
2022-04-02 12:23:43 basically each cell is a LUT of NEWS to NEWS bit values
2022-04-02 12:24:04 NEWS standing for North East West South
2022-04-02 12:24:42 and afaict this is all async
2022-04-02 12:25:24 that is, no clock domains like in many FPGAs or some CPLDs
2022-04-02 12:26:27 I found other blog entries on the same site that are getting me there.
2022-04-02 12:27:52 This is somewhat like "smart RAM" - where computing capabilities are embedded with every cell of RAM.
2022-04-02 12:28:15 found this BitGrid, damn, two and a half years ago
2022-04-02 12:28:33 BitGrid thing*
2022-04-02 12:28:53 It does seem like the plummeting cost of transistors should be pushing us in this direction.
2022-04-02 12:31:03 I drew a schematic mockup of one cell and noted that to keep the density I would need to use serial, shift-register-like access for the config bits, where those config lines would snake through the block in a Hilbert curve
2022-04-02 12:32:24 indeed! and memristors implemented in the metal layers might also help with this
2022-04-02 12:34:40 one possible benefit of such a device is mega mass manufacturing, or if we restrict ourselves to, say, a half-micron process, easy and cheap manufacturing
2022-04-02 12:35:24 So, all of these ideas - the bitgrid, the GA144, etc. - are just variations along a spectrum - different amounts of "capability" in the individual cores, but always the same kind of "uniform core array" idea.
2022-04-02 12:35:37 basically I am keeping Opensource Ecology in mind
2022-04-02 12:36:47 hmm... I think that GA144 cores are a bit more inflexible than this
2022-04-02 12:36:49 In the GA144, and in the Transputer, cores power up waiting to receive instructions from a neighbor, and that's how they wind up loading their app-specific code. That seems to imply a minimum level of core capability - going below that threshold would mean having some other way of configuring the cells.
2022-04-02 12:37:18 see my post above ;-)
2022-04-02 12:37:19 I.e., they have to at least have the ability to establish their functionality in the array.
2022-04-02 12:37:25 Sorry - I missed some stuff.
2022-04-02 12:37:28 :-)
2022-04-02 12:37:46 still thinking on the programming model for massive parallelism
2022-04-02 12:38:52 basically a BitGrid block would be configured like you use a universal serial/parallel register via SPI
2022-04-02 12:39:22 one issue I foresee is routing logic
2022-04-02 12:40:23 eris[m]: you mean routing and placement logic?
2022-04-02 12:41:36 yea
2022-04-02 12:42:40 well, it could be anything from an engineering intern plopping down a 'macro cell' or such to emergent-behaviour ant optimization
2022-04-02 12:44:11 just a PhD-degree-granting question about software
2022-04-02 12:45:03 think of something like gcc code or SymbiFlow placement optimizers
2022-04-02 12:54:18 re that J1 integration example above: let's say it is a 20x20 grid instead so that we have a bit of room in placement plus room for a few control signals in and out
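A minimal Python sketch of the BitGrid cell model as described above: each cell is a 16-entry lookup table mapping its four neighbour input bits (N, E, W, S) to four output bits, one driven toward each neighbour. The blog describes the fabric as asynchronous; this toy steps every cell in lockstep purely to show the data flow, and all class/method names are invented.

N, E, W, S = 0, 1, 2, 3   # bit positions within a cell's input/output nibble

class BitGrid:
    def __init__(self, rows, cols):
        self.rows, self.cols = rows, cols
        # each cell's LUT: 16 entries of 4-bit outputs, default "output all zeros"
        self.lut = [[[0] * 16 for _ in range(cols)] for _ in range(rows)]
        self.out = [[0] * cols for _ in range(rows)]   # current 4-bit output nibble

    def inputs(self, r, c):
        # Gather the 4 input bits of cell (r, c) from its neighbours' outputs.
        def bit(rr, cc, side):
            if 0 <= rr < self.rows and 0 <= cc < self.cols:
                return (self.out[rr][cc] >> side) & 1
            return 0                                    # grid edge reads as 0
        n = bit(r - 1, c, S)   # my north input is my north neighbour's south output
        e = bit(r, c + 1, W)
        w = bit(r, c - 1, E)
        s = bit(r + 1, c, N)
        return (n << N) | (e << E) | (w << W) | (s << S)

    def step(self):
        # One lockstep update of every cell.
        self.out = [[self.lut[r][c][self.inputs(r, c)] for c in range(self.cols)]
                    for r in range(self.rows)]

# Example: configure cell (0, 0) to copy whatever arrives on its west input
# to its east output, i.e. a 1-bit wire segment, then propagate one step.
g = BitGrid(4, 4)
g.lut[0][0] = [((i >> W) & 1) << E for i in range(16)]
g.step()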
2022-04-02 13:00:18 I think config being in-band vs. out-of-band is a key question. In band, the only really viable option is to chain the info through. But you might also have a somewhat slow narrow bus that every cell listens to, and you could configure them over it, and then turn them loose. That would be the "out of band."
2022-04-02 13:01:18 Then you could communicate with cells during program operation, over that out-of-band link, as a debugging maneuver.
2022-04-02 13:02:45 The out of band would be like JTAG.
2022-04-02 13:03:16 depends on the complexity of each cell though
2022-04-02 13:05:10 the SPI Hilbert-curve bus as described above could be used to clock out the current inputs of each cell for debugging/monitoring purposes
2022-04-02 13:05:58 Chuck would probably view JTAG as unnecessary overhead.
2022-04-02 13:06:05 And well, maybe it is.
2022-04-02 13:06:27 Sure - if the cells are smart enough then it can all be in-band.
2022-04-02 13:06:53 well JTAG is quite handy and established for QA in manufacturing
2022-04-02 13:07:07 And I guess EVENTUALLY JTAG would start to get slow. You might have millions, or tens of millions, of cells.
2022-04-02 13:07:39 I guess at some point we'd have to stop thinking in terms of individual cells and start thinking of "distributions of functionality."
2022-04-02 13:07:47 I think the idea behind BitGrid is to have each cell pretty simple
2022-04-02 13:07:53 Yeah.
2022-04-02 13:08:08 Since it's... BITgrid.
2022-04-02 13:08:30 That may not be "literal," but it surely implies "very simple."
2022-04-02 13:09:25 and if you have, say, BitGrid blocks in an FPGA or CPLD bus routing fabric then it might be fast enough
2022-04-02 13:10:13 for JTAG-like debugging and config
2022-04-02 13:12:27 and if memristors or FRAM are used to keep the config bits then the config speed doesn't have to be ludicrously fast
2022-04-02 13:14:03 I imagine that one would use two BitGrid cells to make an SR latch, a few more for a D latch, and you get the idea
2022-04-02 13:15:27 heh, just thought of this: a device with BitGrid blocks and these lossy, inexact analog matrix-multiply blocks
2022-04-02 13:16:34 like the Veritasium guy talked about in a recent youtube vid of his
2022-04-02 13:19:56 That guy makes good videos. I've enjoyed him a lot.
2022-04-02 13:20:05 And he's got a nice personality.
2022-04-02 13:20:23 There are a couple of those fellows that seem to have good content but their personality just grates on me.
2022-04-02 13:21:27 Joe Scott - somehow I just can't watch his stuff.
2022-04-02 13:21:33 I take the blame.
2022-04-02 13:53:10 Hmmm. The bitgrid cell looks an awful lot like an FPGA LUT.
2022-04-02 13:53:36 The four neighbor bits address the LUT, and the four output bits are sent back to the neighbors.
2022-04-02 13:53:55 It should be fairly easy to map that structure onto a Xilinx (at least) FPGA.
2022-04-02 13:54:37 Reading here:
2022-04-02 13:54:39 https://bitgrid.blogspot.com/search?q=bitgrid
2022-04-02 13:54:53 Or, wait - here:
2022-04-02 13:54:56 https://bitgrid.blogspot.com/2005/03/bitgrid-story.html
2022-04-02 13:55:54 It looks like a huge LUT-based combinational circuit; any sequential behavior would be "configured" into it.
2022-04-02 13:56:13 At least no clocked behavior has been mentioned yet.
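On the two remarks above - that a couple of cells could be configured as an SR latch, and that any sequential behavior would be "configured" into a purely combinational LUT fabric - here is a standalone Python sketch of how state falls out of feedback between two NOR-shaped lookup functions. It is not tied to any particular BitGrid cell layout; the iteration count is just "enough steps to settle."

def nor(a, b):
    return 0 if (a or b) else 1

def sr_latch(s, r, q=0, nq=1, settle=4):
    # Iterate the two cross-coupled cells a few times until the outputs settle.
    for _ in range(settle):
        q, nq = nor(r, nq), nor(s, q)
    return q, nq

q, nq = sr_latch(s=1, r=0)               # set
print(q, nq)                             # 1 0
q, nq = sr_latch(s=0, r=0, q=q, nq=nq)   # hold: the state is remembered
print(q, nq)                             # 1 0
q, nq = sr_latch(s=0, r=1, q=q, nq=nq)   # reset
print(q, nq)                             # 0 1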
2022-04-02 14:02:39 I like his idea of making the entire configuration space (the LUT tables) show up as RAM to the host. I don't know how easy that would be to implement, but at least RAM already provides such a structure - this might be as easy as taking a RAM chip and "embedding" the neighbor links into it.
2022-04-02 14:03:08 Maybe there's a single signal that starts and stops the flow of information across the grid. Stop it, load the RAM, start it.
2022-04-02 14:03:29 That's definitely "pleasing" compared to the process FPGAs use today for configuration, and the opacity of the meaning of the configuration bit stream.
2022-04-02 14:04:35 This has long been one of my big beefs with FPGA vendors. They want to make their tool chain a profit center, so they make the bitstream structure proprietary. If those were open and documented we'd have a plethora of public domain tools for working with FPGAs. And bitgrid's "configuration architecture" is about as simple as it could possibly get - easy to understand in a very short time.
2022-04-02 14:04:58 Functionally it looks like a "sea of gates" to me.
2022-04-02 14:07:58 another thing, if we have BitGrid blocks and routing boxes betwixt them that allow for accessing the config mem port of each, then you could have radical partial reconfiguration where some of the BitGrid blocks are used as BitGrid and some are used as memory, all depending on the application
2022-04-02 14:09:39 Yes - I just posted a comment on that page I linked suggesting the very thing.
2022-04-02 14:10:18 I've long thought "on the fly reconfiguration" of FPGAs has a lot of potential. The way typical config works, though, you have to send a full bitstream. His "random access" configuration is a very powerful idea.
2022-04-02 14:10:46 Making one of these might be as simple as taking an existing RAM chip design, "spreading out" things just a tad, and dropping in the new neighbor-to-neighbor connections.
2022-04-02 14:10:58 And having a way to turn the "flow" on and off.
2022-04-02 14:11:33 The point being that we *already have* the configuration circuitry - it's "just a RAM chip."
2022-04-02 14:12:02 in this case SRAM; DRAM works quite differently
2022-04-02 14:12:22 I probably mentioned on the fly reconfig to technical sales critters that visited from FPGA companies as far back as 2000-2001.
2022-04-02 14:12:40 Yes - true.
2022-04-02 14:13:10 Well, I guess it's true. DRAM is still RAM. It's a structure that maintains set values in an array of cells.
2022-04-02 14:13:23 That "spreading out" difficulty might differ. I don't know enough about them.
2022-04-02 14:13:52 Bitgrid is just *using* those values for an additional purpose.
2022-04-02 14:13:52 here is an idea: apply partial evaluation to a BitGrid config/virtual-circuit whenever that makes sense
2022-04-02 14:14:43 Yeah - huge possibilities. Any form of interaction between the host and the grid that "reshaped" the configuration - gradually or radically - could be useful.
2022-04-02 14:14:44 a DRAM cell is basically one FET transistor and one capacitor
2022-04-02 14:15:15 Yes, but you could still use that value in a bitgrid, couldn't you? So long as it wasn't draining the charge.
2022-04-02 14:15:22 while the canonical SRAM cell uses six transistors iirc
2022-04-02 14:15:34 Right - it's some kind of little latch.
2022-04-02 14:15:56 RS latch iirc
2022-04-02 14:16:09 But if each cell just went to a couple of MOSFET gates, that shouldn't make the DRAM stop working.
2022-04-02 14:16:21 Reset/Set is what the RS stands for
2022-04-02 14:16:29 :-) Yes.
2022-04-02 14:16:46 Hardware guy over here.
2022-04-02 14:17:06 Old enough to have learned back when they still talked about such things.
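A small Python sketch of the "configuration space shows up as RAM" and random-access partial reconfiguration ideas discussed above: every cell's 16-entry LUT occupies a fixed slot in one flat array, so the host can patch a single cell by writing a few bytes while a run/stop flag pauses the grid. The layout, sizes, and names here are invented for illustration, not taken from the BitGrid posts.

ROWS, COLS, LUT_SIZE = 16, 16, 16

config = bytearray(ROWS * COLS * LUT_SIZE)   # the whole "bitstream", as plain RAM
running = False                              # the single start/stop signal

def lut_base(row, col):
    # Byte offset of cell (row, col)'s LUT inside the config RAM.
    return (row * COLS + col) * LUT_SIZE

def reconfigure_cell(row, col, table):
    # Replace one cell's LUT without touching anything else.
    global running
    assert len(table) == LUT_SIZE
    running = False                          # stop the flow of data
    base = lut_base(row, col)
    config[base:base + LUT_SIZE] = bytes(table)
    running = True                           # and let it run again

# Patch cell (3, 7) to pass its west input straight through to its east output.
reconfigure_cell(3, 7, [((i >> 2) & 1) << 1 for i in range(LUT_SIZE)])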
2022-04-02 14:18:50 but if you don't mind going down in (re)config speed you could use memristors in a voltage divider setup per bit cell and use four 16-in/1-out muxes (one per side of the BitGrid cell)
2022-04-02 14:19:12 oh, these things are still taught
2022-04-02 14:19:41 Well, good.
2022-04-02 14:19:53 and never underestimate how useful the 7400 series is for glue logic
2022-04-02 14:20:12 Sometimes it just feels like the old ideas (like assembly) are downplayed, but I haven't really been exposed to the university curricula in a long time.
2022-04-02 14:20:50 seen a one-gate NAND in a SOT23-5 package from that series
2022-04-02 14:20:56 I remember when I could walk into my local Barnes & Noble (back when they just sold books) and find whole shelves covered with books on assembly language, processor architecture, etc.
2022-04-02 14:21:12 Then a lot of that just went away, and it was all web design, CSS, Ruby on Rails, etc.
2022-04-02 14:21:22 The hard core nitty gritty books faded from view.
2022-04-02 14:21:47 well many of those older books are now often available as free PDFs
2022-04-02 14:21:54 You betcha.
2022-04-02 14:22:37 Overall info is a lot more accessible than it used to be, if you look hard enough.
2022-04-02 14:23:02 and I do not miss the old component datasheet books one bit
2022-04-02 14:23:33 :-) I loved them, though. I felt like they were a source of enormous power.
2022-04-02 14:23:48 though doing a parametric search is a bit of a skill to learn
2022-04-02 14:23:54 Cultivated my databook library pretty carefully back in the mid-1980s.
2022-04-02 14:24:34 I remember reading a Motorola databook for entertainment - I didn't work in that field (communications, primarily), but the array of little gadgets they had available was just fascinating.
2022-04-02 14:25:05 That was the period when we were "on our way" to digital / software radio, but "not quite yet," for performance reasons.
2022-04-02 14:25:15 not when you misread a table because the diode you were looking for shared it with its fifteen different variants
2022-04-02 14:25:47 Well, that's certainly true. And of course datasheets sometimes had *errors*, and those could foul you up big time.
2022-04-02 14:26:49 And yes - quality of presentation definitely "varied."
2022-04-02 14:27:01 yeah, errata that were just put in the back of the book, unbound, because the books were only reprinted every three years
2022-04-02 14:28:17 talking about good presentation, I do love how Philips do their service and troubleshooting PDFs
2022-04-02 14:29:31 talk about using internal hyperlinks for easier use (page number and section references still included in case you printed them out)
2022-04-02 14:30:55 which reminds me of a somewhat hilarious anecdote
2022-04-02 14:32:21 the story is that a huge free-standing, slightly curved flat-panel telly was misbehaving but only doing so at the owner's residence
2022-04-02 14:33:12 so a technician was sent back with the telly to figure out wtf was different
2022-04-02 14:33:23 This is fun already.
2022-04-02 14:34:43 so, the telly is in its place but it's back far enough from the wall to provide easy access
2022-04-02 14:36:25 to ease his work the tech had a headlamp and had the brilliant idea of video conferencing with the greybeards back at the shop
2022-04-02 14:37:30 to keep his hands free he strapped his smartwatch, which had a camera, to the headband of the torch
2022-04-02 14:38:08 pretty good idea that
2022-04-02 14:38:58 so the guy has been there for an hour and a half probing test points
2022-04-02 14:39:46 when the misbehaviour manifests
2022-04-02 14:41:29 turned out that because the refreshments fridge was on the same electrical circuit as the telly, plus an intelligent light dimmer, there was some choppiness in the AC power the telly got
2022-04-02 14:43:59 which got the controller in the switch-mode power supply a bit confused, which produced an intermittent dip in the 5 V rail, which tripped some of the protection circuitry in the 3.3 V regulators
2022-04-02 14:44:19 the fix? a small UPS for the telly
2022-04-02 14:46:00 nota bene this was among the first of these big curved flat-screen tellies
2022-04-02 14:51:03 That's great. And whether or not it was a "design fault" really depends on how large that noise was - I guess it had a "range" that the AC was supposed to stay in.
2022-04-02 14:51:37 I just think it's great that the company was willing to go to that much trouble to figure it out.
2022-04-02 14:51:53 When I moved into this house I'm living in now, I was a Sprint PCS customer.
2022-04-02 14:52:25 I could stand in the street in front of my house and get a good signal. As I walked up the driveway it would fade to next to nothing, and the phone would barely work.
2022-04-02 14:53:00 I called Sprint about it, but some low level person looked at a map and said "You're in an area of good service." And that was the end of it as far as they were concerned. I told them to send a guy out to measure, but they wouldn't.
2022-04-02 14:53:18 I wound up having to pay all the early termination fees and switched to Verizon.
2022-04-02 14:53:48 But for some reason, my house was sitting right in a dead spot, and they just *would not hear it* - it was an area of good service because their map said so.
2022-04-02 14:54:39 That's a great story - on the TV.
2022-04-02 15:03:28 This is an interesting article:
2022-04-02 15:03:30 https://tenthousandmeters.com/blog/python-behind-the-scenes-10-how-python-dictionaries-work/
2022-04-02 15:04:02 I'm still generally exploring ways of increasing compilation speed. Nothing I feel any need to do at the moment, but it just became a topic of entry for me.
2022-04-02 15:04:11 Ugh. topic of interest
2022-04-02 15:09:14 I was thinking about ways of achieving that speedup that didn't involve embedding (potentially large amounts of) information in the source stream itself. But so far those always run into the same "name re-use not supported" trouble that Chuck's original scheme did.
2022-04-02 15:09:38 The benefit of embedding the address in the source is that different invocations of a word can point to different instances of the word in the dictionary.
2022-04-02 15:10:03 Any attempt to "centralize" the address info (hash table, etc.) tends to conflict with that.
2022-04-02 15:10:20 Because after all, the source words are the same strings.
2022-04-02 15:10:46 It's the dictionary structure that tells us which definition to use in each place - bypassing that search loses that info.
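A toy Python illustration of the point just above: because a Forth dictionary is searched at the moment each word is compiled, two uses of the same name can legitimately bind to two different definitions, while a single name-to-address hash table can only remember one of them. Everything here is a simplified stand-in, not any particular Forth's implementation.

dictionary = []            # list of (name, python_function), newest last

def define(name, fn):
    dictionary.append((name, fn))

def find(name):
    # Linear search from the newest definition backwards, like FIND.
    for n, fn in reversed(dictionary):
        if n == name:
            return fn
    raise KeyError(name)

def compile_call(name):
    # "Compile" a call by capturing the address (here: the function) right now.
    fn = find(name)
    return lambda: fn()

define("greet", lambda: print("hello from the first GREET"))
first_use = compile_call("greet")     # binds to the first definition

define("greet", lambda: print("hello from the second GREET"))
second_use = compile_call("greet")    # binds to the redefinition

first_use()    # still the first GREET - the old instance is not lost
second_use()   # the second GREET

# A central name -> address hash table would have overwritten the entry for
# "greet", and the information about which instance first_use meant is gone.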
2022-04-02 15:17:08 One insight that occurred to me last night is that separating headers from bodies means that changing the *implementation* of a word or words doesn't change any of the header offsets.
2022-04-02 15:17:17 That might be useful in some optimization.
2022-04-02 15:18:34 It does, though, change where all *following* headers would need to point, if you moved the code following the change.
2022-04-02 15:19:04 But in my system I could redefine a word at any time by just adding a new definition for it and updating its header - without moving anything else.
2022-04-02 15:19:25 Then when I was all done developing I could do a final re-compile of everything to get the most compact thing.
2022-04-02 15:20:53 Squeeze out all the abandoned definitions, that is.
2022-04-02 15:21:34 Maybe I could use :: to mean "redefine an existing word."
2022-04-02 15:46:42 Oh, there's another application for "find in CURRENT." If you're redefining a word, you'd want to redefine the one that's defined in the CURRENT vocabulary, since that's where "new definitions" go.
2022-04-02 15:47:33 Some Forths automatically search CURRENT (often first) in FIND, but mine doesn't - it searches only the CONTEXT list (I call it PATH instead of CONTEXT).
2022-04-02 15:48:22 In previous implementations CONTEXT/PATH was a stack of vocabulary pointers - this time each vocabulary word has a cell to point to another one, and PATH is implemented as a linked list.
2022-04-02 15:48:52 That automatically bypasses the possibility of having a vocabulary show up twice in CONTEXT - in the stack approach I had to check for that to avoid it.
2022-04-02 15:50:11 It occurred to me the other day I'm going to have to add a second link field - in addition to a dynamically changing PATH list I need a permanent list of all vocabularies, because each one will have to be whacked back when I FORGET. Not just the ones currently on PATH.
2022-04-02 15:54:52 The alternative to a second link pointer would be to have FORGET traverse the entire dictionary looking for vocabularies - that's not very appealing.
2022-04-02 15:59:57 ACTION is somewhat fond of tries
2022-04-02 16:00:58 for instance there is a trie used in one Scheme implementation for interning symbol strings
2022-04-02 16:03:28 which meant that each node in that trie was potentially used as the symbol id, as the symbols were usually mostly compared object-identity-wise
2022-04-02 16:04:32 to facilitate printing these symbols out again, each trie node had a pointer to its parent
2022-04-02 16:05:28 meant that the reconstructed symbol string needed to be built in prepend order
2022-04-02 16:14:41 Tries are interesting. All of the binary-search related data structures, actually. I'm really intrigued by ropes at the moment. For dictionary representation, though, they all pose difficulties for vocabulary implementation, unless you make each vocabulary be its own trie / tree / etc.
2022-04-02 16:17:18 In a system where you mean to run any of multiple "applications" it makes sense to compile just the source needed for that application, which is easy enough to do. But you might have multiple applications resident at a time, and then you'd like to avoid re-loading source that's already been loaded. Sort of like Python module imports - you only need to import a module once.
2022-04-02 16:18:08 I've wondered if perhaps each block or file of source should define a word that can be detected to confirm the source has already been compiled. Then the first thing a block might do when it's loaded is ensure it's not already loaded.
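A short Python sketch of the "already loaded" guard idea just above: each source block defines a marker, and the loader checks for that marker before loading the block again, much like a Python module import. The block names, the marker naming convention, and the use of exec as a stand-in for compiling are all invented for illustration.

loaded_markers = set()        # stands in for marker words in the dictionary
source_blocks = {
    "matrix-lib": 'print("compiling matrix-lib ...")',
    "app":        'print("compiling app ...")',
}

def load(block_name):
    marker = block_name + "-loaded"
    if marker in loaded_markers:          # marker already defined: skip reload
        return
    exec(source_blocks[block_name])       # stand-in for compiling the block
    loaded_markers.add(marker)            # define the marker at the end

load("matrix-lib")   # compiles
load("app")          # compiles
load("matrix-lib")   # silently skipped - already resident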
2022-04-02 16:20:06 I guess that's a step toward "libraries," which we're taught to think of as bad things, though.
2022-04-02 16:21:18 But you know, sometimes it doesn't seem so bad - surely there are some things, like say floating point matrices, which are "widely applicable without tuning" to many problems.
2022-04-02 16:21:32 It becomes bad if it's taken too far, which is common.
2022-04-02 16:22:04 When you start adjusting how you write your application based on wanting to take advantage of this or that library, you're moving across that line.
2022-04-02 16:46:44 Anyway, my bet is that the ability to recompile existing words, and have the new definition replace the old one in all compiled instances, will remove the need for any particularly lightning fast compile capability. One would just adopt a workflow where most recompiles were iterative and just replaced definitions of a handful of words. That will be "fast enough." Then when you're happy with everything,
2022-04-02 16:46:46 move those new definitions into the original spots and do a full recompile then, which would basically be a form of "garbage collection."
2022-04-02 16:47:13 That one might take a few seconds, but you'd only be doing it once.
2022-04-02 16:51:11 Oh, by the way - I incorporated ANSI colors into my system output a couple of days ago. My typing is white. System prompts are green, error messages are red, and "normal interpreted output" is blue. It's kind of pleasing to look at on-screen.
2022-04-02 17:37:23 In case anyone is interested, this is the 64-bit hex pattern for pi: 0x400921FB54442D18.
2022-04-02 17:37:39 That gives you the 3 and then 18 correct decimal digits.
2022-04-02 17:39:59 I did just realize that, robust as my number conversion routine is, it doesn't check for overflow. That value there is what results from typing in 3.141592653589793238. But if you add the next decimal digit, which is a 4, it still gives you a value but it's nowhere close to pi.
2022-04-02 17:40:38 Probably wouldn't be too hard to add a check for that.
2022-04-02 17:49:21 someone check me: to generate two random bits with a single throw of a single six-sided die: one bit for whether the number is odd or even, and one bit on if the number is 1-3 and off if it is 4-6
2022-04-02 17:51:08 Hmmmm.
2022-04-02 17:51:19 Those feel coupled to me, but I'm not sure if it matters.
2022-04-02 17:51:42 Say it's 1-3. Then the probability of even is just 1/3, whereas if it's 4-6 the probability of even is 2/3.
2022-04-02 17:51:55 But like I said, it may not matter - I'm trying to think it out.
2022-04-02 17:52:55 damn you are right
2022-04-02 17:53:30 The problem here is that 2 and 3 are relatively prime.
2022-04-02 17:54:20 So there's no way to slice the 6 cases up into 4 "sections."
2022-04-02 17:55:56 The reason it may not matter is because I think a sequence of such rolls would give you a stream of two bits where each bit showed 50% 0, 50% 1.
2022-04-02 17:56:19 But there would still be that internal correlation.
2022-04-02 17:57:04 Whether that's good enough or not really depends on what you're doing with the bits.
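A quick Python enumeration of the die-to-two-bits scheme discussed above: bit 0 is the parity of the roll, bit 1 says whether the roll is in 1-3 or 4-6. It confirms that each bit alone is perfectly balanced while the pair is not uniform over the four combinations, which is the correlation noted above.

from collections import Counter
from fractions import Fraction

pairs = Counter()
for roll in range(1, 7):
    parity = roll & 1                 # 1 for odd rolls
    low_half = 1 if roll <= 3 else 0  # 1 for rolls 1-3
    pairs[(parity, low_half)] += 1

for combo, count in sorted(pairs.items()):
    print(combo, Fraction(count, 6))
# (0, 0) 1/3   (0, 1) 1/6   (1, 0) 1/6   (1, 1) 1/3  -- not 1/4 each

print(sum(c for (p, _), c in pairs.items() if p) / 6)  # P(parity bit = 1) = 0.5
print(sum(c for (_, l), c in pairs.items() if l) / 6)  # P(range bit = 1)  = 0.5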
2022-04-02 18:02:46 Heh heh heh...
2022-04-02 18:02:48 3.141592653589793238 ok
2022-04-02 18:02:51 . 400921FB54442D18 ok
2022-04-02 18:02:53 3.1415926535897932384 3.1415926535897932384: arithmetic overflow
2022-04-02 18:07:46 That just took slipping a "jo oflow" instruction into +, -, and *.
2022-04-02 18:08:32 A more pleasing behavior, in the floating point case, would be to just ignore further digits, but to do that would have been considerably more invasive and would have slowed those primitives down more substantially and also required other handling.
2022-04-02 18:19:24 Ah, crap.
2022-04-02 18:19:28 It's not quite that simple.
2022-04-02 18:20:09 Because now I can't enter 64-bit patterns that would be negative without actually entering the negative value.
2022-04-02 18:20:25 x:8000000000000000, for example, causes overflow.
2022-04-02 18:20:37 Which is strictly correct, but it still seems undesirable to me.
2022-04-02 18:21:02 I'll have to think about that - having it caught when I inadvertently run into it seems valuable.
2022-04-02 18:24:34 I guess the "most correct" solution would be to have both signed and unsigned multiply and add instructions and use one or the other based on input, so x:8000000000000000 would throw an error but ux:8000000000000000 wouldn't.
2022-04-02 18:24:40 But that feels tedious.
2022-04-02 18:25:08 I don't think I've *ever* typed such a number directly into a Forth.
2022-04-02 18:25:40 I mean, for some real purpose, other than testing things.
2022-04-02 18:36:04 I seem to have misunderstood something all these years.
2022-04-02 18:36:30 I thought the valid range of signed integers was -(maxint+1) to +maxint.
2022-04-02 18:36:42 In my system maxint is 0x7FFFFFFFFFFFFFFF.
2022-04-02 18:37:14 But if I enter that, negate it, and subtract one, it overflows.
2022-04-02 18:37:32 Apparently it doesn't allow that extreme negative number. Have I actually been wrong all this time?
2022-04-02 18:37:41 Is it really -maxint to +maxint?
2022-04-02 18:39:33 If I use logic operations to get x8000... onto the stack, I can print that out with u., but not with .
2022-04-02 18:40:34 . sees the negative sign bit, prints the - and attempts to negate the number, which fails. That makes sense. But it surprised me that that 1- operation before failed.
2022-04-02 18:53:37 Ok, here:
2022-04-02 18:53:40 https://docs.microsoft.com/en-us/cpp/cpp/integer-limits?view=msvc-170
2022-04-02 18:54:00 in all cases it documents ranges of -(maxint+1) .. +(maxint).
2022-04-02 18:54:25 So apparently what I'm doing isn't sufficient - looks like proper handling requires considering the sign bit as well.
2022-04-02 18:54:38 I'm not sure I care - I don't want to keep piling code into those primitives.
2022-04-02 18:54:51 How often am I going to need that one number -(maxint+1)?
2022-04-02 21:47:20 Ok, I ripped all that out. I did a little testing and C doesn't seem to detect any of that stuff. And I didn't even EXPECT such protection when dealing with integers directly; it was an unpleasant "Oh, yeah." moment when it showed up. I'll just have to be careful about entering any highly precise floating point numbers - there probably won't be many such things.
2022-04-02 21:47:33 Now I'm starting to roll my sleeves up on the editor.
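A short Python sketch of the two's-complement detail discussed above: on 64 bits the signed range really is -(maxint + 1) .. +maxint, and negating the most negative value overflows back onto itself. Python integers are unbounded, so the 64-bit behavior is modeled here with an explicit mask; the helper names are invented.

MASK   = (1 << 64) - 1
MAXINT = 0x7FFFFFFFFFFFFFFF          # largest positive signed 64-bit value
MININT = -(MAXINT + 1)               # bit pattern 0x8000000000000000

def to_signed(bits):
    # Interpret a 64-bit pattern as a signed two's-complement value.
    return bits - (1 << 64) if bits & (1 << 63) else bits

def negate64(x):
    # Two's-complement negation confined to 64 bits.
    return to_signed((-x) & MASK)

print(hex(MININT & MASK))              # 0x8000000000000000
print(negate64(MAXINT) == MININT + 1)  # True: -maxint is representable
print(negate64(MININT) == MININT)      # True: negating MININT overflows to itself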