2023-06-04 10:09:51 drakonis: I found the Dresden series good right from the jump. I particularly liked the last few paragraphs of that first book - it just left me definitely wanting more. However, a lot of people regard the first two books as "lesser" than what comes after. I think it's a fairly simple thing - the writer had "less to work with" at that point. By that third book he had established a "world" and a slate
2023-06-04 10:09:53 of characters, and it's true that the "big plot" that stretches across all of the books really gets rolling in that third book.
2023-06-04 10:10:59 I have found, though, on multiple re-reads, that there are little "Easter eggs" in the first books that you notice once you "have the lay of the land," so to speak. Something that just seemed like a few words flying by the first time you read Storm Front - later you read those same words and you're like "Oh - OH! That's important!"
2023-06-04 10:12:22 It's a world that "keeps on giving," even after you've read through it all before.
2023-06-04 10:12:30 Kosh
2023-06-04 10:12:51 Oh, I loved Kosh. He was practically the best part of Babylon 5.
2023-06-04 10:13:42 because Kosh says things that will make sense 2 seasons later
2023-06-04 10:13:49 Yes.
2023-06-04 10:14:41 Slightly different, though - you kind of knew that the things he said were important, even if you didn't know how, the first time you heard them. Some of these Dresden easter eggs just skate right by you the first time. You have no context into which you can fit them.
2023-06-04 10:17:41 I feel almost silly saying it out loud, but I've read the Dresden novels seven times over the years. And I swear to god, on that LAST read of Storm Front I was reading along and a character said something that I'd read six times previously, and suddenly something "clicked" and I nearly fell over.
2023-06-04 10:18:01 I just hadn't seen until then that it was a CLUE.
2023-06-04 10:19:20 I've got a fairly involved theory about the things in the series that are still mysteries. It's very specific and carries a lot of details, so it's unlikely that it's COMPLETELY right. Some others on the reddit buy into it - others don't. We'll just have to wait and see.
2023-06-04 10:22:00 That clue I snagged that last time through supported the theory quite well, which is why it excited me so much.
2023-06-04 10:27:07 Ok, so dictionary. String table, hash table. For the bulk of operations I only need a string pointer and an index into the hash table entries. For FORGET, though, I have to be able to recover the hash table entries in reverse creation order. I could add a link field to them for that purpose.
2023-06-04 10:28:05 Or I could just find them. I could implement FORGET by walking back through the string table and hashing each string to find its hash entry.
2023-06-04 10:28:59 Even if a string is used more than once in the dictionary, I want the entry that points to that last copy of the string.
2023-06-04 10:29:33 I'm a little torn over which way to go on that - I find the extra baggage in the hash entries a little offensive, but it does seem like the most "straightforward" way to get the job done.
2023-06-04 10:30:23 Including it makes the hash table entry size "not a power of two," which isn't the end of the world, but it seems a bit less than perfect to me.
2023-06-04 10:34:00 So hey, what do you guys know about how various systems handle forgetting content that has manipulated the vocabulary situation?
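A minimal C sketch of the two recovery options just described - an explicit creation-order link versus re-hashing names out of the string table. Everything here (names, layout, the djb2 hash) is a hypothetical illustration, not any particular system's code:

    #include <stdint.h>
    #include <stddef.h>

    #define TABLE_SIZE 256                  /* power of two, open addressing */

    typedef struct {
        const char *name;       /* pointer into the string table */
        void       *body;       /* whatever the word's entry points at */
        int32_t     prev;       /* Option 1: index of the previously created
                                   entry (-1 if none) -- the extra "baggage"
                                   that breaks the power-of-two entry size */
    } hash_entry;

    hash_entry table[TABLE_SIZE];
    int32_t    newest = -1;     /* head of the creation-order chain */

    static uint32_t hash(const char *s) {
        uint32_t h = 5381;                  /* djb2, purely illustrative */
        while (*s) h = h * 33 + (uint8_t)*s++;
        return h & (TABLE_SIZE - 1);
    }

    /* Option 2 needs no link field: FORGET walks the string table backward
       and re-hashes each name to locate its entry.  Comparing the name
       POINTER rather than the text picks out the entry for that last copy
       of the string, even when the same name appears more than once. */
    static hash_entry *find_entry(const char *name_ptr) {
        uint32_t i = hash(name_ptr);
        while (table[i].name != NULL) {
            if (table[i].name == name_ptr)
                return &table[i];
            i = (i + 1) & (TABLE_SIZE - 1); /* linear probe */
        }
        return NULL;                        /* not present */
    }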
Consider this:
2023-06-04 10:34:10 vocabulary alpha alpha definitions
2023-06-04 10:34:19 Well, wait.
2023-06-04 10:34:22 : foo ... ;
2023-06-04 10:34:30 vocabulary alpha alpha definitions
2023-06-04 10:34:35 : bar ... ;
2023-06-04 10:34:40 forget foo
2023-06-04 10:35:11 That forget needs to undo the addition of alpha to CONTEXT and CURRENT.
2023-06-04 10:35:25 Any insight into how this typically gets handled?
2023-06-04 10:35:48 I.e., there's more to it than merely cutting back the dictionary and resetting HERE.
2023-06-04 10:36:28 The same thing can make error recovery difficult if your goal is to totally undo what was executed on the line up to the error.
2023-06-04 10:36:38 It's one of the reasons I went to a full snapshot.
2023-06-04 10:37:09 A snapshot approach would handle forget too, but it makes forget pretty expensive.
2023-06-04 10:37:29 And it would also cause forget to revert your stack, your variable values, etc.
2023-06-04 10:38:43 Seems like this encourages the use of a marker-based approach, where the marker would save the CONTEXT and CURRENT states.
2023-06-04 11:08:32 Hmmm. My .wipe process has shared traits with forget.
2023-06-04 11:08:38 It is removing headers.
2023-06-04 11:09:30 The difference is that I require that .wipe operate only on the first vocabulary in the search order. Changing vocabularies and then .wipe-ing isn't allowed.
2023-06-04 11:13:08 And it does just walk back through the recent history.
2023-06-04 11:13:27 No mechanism for a "block" .wipe.
2023-06-04 11:14:10 It actually walks the whole vocabulary. I suppose I could add an "opening" for it, but it's so fast as it is.
2023-06-04 11:16:13 Some of you have discussed "local definitions" - definitions only visible within some other definition. I do see how that's all very nice and "modern," but really my .: with .wipe accomplishes exactly the same thing, and has the advantage that .: words are visible for re-use by multiple : definitions.
2023-06-04 11:16:33 And once I .wipe they're gone - nothing else below sees them.
2023-06-04 11:16:56 And I don't like lengthening my definitions, which is precisely what a local definition does.
2023-06-04 11:21:36 I think I'll just swallow this link field in the hash table entries. It makes everything easy, and I don't think I've fully envisioned all the difficulties that fancy vocabulary maneuvering could create for any other way of recovering those items.
2023-06-04 11:22:15 It's a thorny problem and I'm almost certainly not seeing all of it at the moment. But with that link field in there it all becomes completely straightforward.
2023-06-04 11:23:09 Given how heavily I use .: definitions, though, it does mean that I'm typically going to have a pretty large number of these "delete me" marked entries in the hash table, which won't go away until the table doubles.
2023-06-04 11:24:25 It might be worth trying to avoid those - I think the way to do that would be, instead of marking a hash entry "delete me," to follow the rest of that probe list and move any subsequent items back a notch to fill the hole (see the sketch below).
2023-06-04 11:24:57 There should never be more than a couple of those anyway.
2023-06-04 11:25:24 "Delete me" items are baggage, and in my case I'll have a lot of them.
2023-06-04 13:06:21 ACTION notes https://github.com/jefftranter/6502/tree/master/asm
2023-06-04 13:10:42 Amazing the "longevity" the 6502 has had.
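A sketch of that "fill the hole" idea for a linear-probing table, continuing the toy table above - this is standard backward-shift deletion (Knuth vol. 3, Algorithm R), with hypothetical names. One caveat worth noting: moving entries between slots would invalidate creation-order links that store slot numbers, so this combines cleanly only with the no-link-field scheme, or with links that hold something stable instead:

    /* Delete table[slot] without leaving a "delete me" tombstone.  Scan
       forward through the occupied cluster; any entry whose home position
       means it could have been displaced past the hole is pulled back
       into it, and the hole follows the moved entry. */
    static void delete_entry(uint32_t slot) {
        uint32_t hole = slot;
        uint32_t i = (slot + 1) & (TABLE_SIZE - 1);
        while (table[i].name != NULL) {
            uint32_t home = hash(table[i].name);
            /* Move back only if the hole lies cyclically between the
               entry's home slot and its current position. */
            if (((i - home) & (TABLE_SIZE - 1)) >=
                ((i - hole) & (TABLE_SIZE - 1))) {
                table[hole] = table[i];
                hole = i;
            }
            i = (i + 1) & (TABLE_SIZE - 1);
        }
        table[hole].name = NULL;    /* the final hole is now truly empty */
    }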
:-)
2023-06-04 13:10:59 and probably will have
2023-06-04 13:11:20 I honestly regard the 6809 as a better processor, but while it's "still kicking about" too, it doesn't seem as much so as the 6502.
2023-06-04 13:13:51 If I'm going to implement these hash-related functions I really should arrange to use them for my block buffer selection too. Right now I "hash" block numbers, but it's a damn crude hash and there's only one buffer a given block can go to. If it's occupied, then that occupant gets ejected.
2023-06-04 13:14:14 It's FAST, though.
2023-06-04 13:14:33 The thing regarding the 6502, Intel 8088, Radon CISC, and others is their complexity relative to, say, the RTX2010, the canonical dual stack machine from Philip Koopman's book Stack Computers: The New Wave, or the Excamera J1
2023-06-04 13:15:36 but I do understand why, as both (EEP)ROM and RAM were at a premium back then
2023-06-04 13:15:36 I think register machines are just in general more complex than stack machines.
2023-06-04 13:15:45 Right.
2023-06-04 13:16:03 That extra complexity does promote certain efficiencies.
2023-06-04 13:16:57 I just wrote a bunch of code to "load" an image from blocks. It does all that allocating and copying and relocating and so on, but it wound up being very terse code precisely because I had all those registers available to just "dedicate" to specific purposes.
2023-06-04 13:17:16 They're like global variables in that code, and there are "enough" of them to think of them that way.
2023-06-04 13:17:27 yeah, one trick I saw in the Excamera J1 is that the TOS and NOS register outputs directly feed into the various op units, which means that you pretty much only need a result mux to select which operation to "perform"
2023-06-04 13:17:41 So all the quantities I needed for any operation were just right there at hand, immediately accessible.
2023-06-04 13:18:20 Right. The only aspect of that that bothers me is energy-related - you're paying energy for all of those results, and then throwing away all of them but one.
2023-06-04 13:18:43 the main issue I have with register machines is the complexity of the operand selection and destination selection circuitry
2023-06-04 13:19:05 It would be interesting to look at how the GA144, for example, accomplishes the equivalent operations.
2023-06-04 13:19:20 Since Chuck was horribly "energy conscious" in the design of that thing.
2023-06-04 13:19:38 If I had to GUESS I'd predict that it only fires the specifically requested operation each time.
2023-06-04 13:20:08 Otherwise that's just a glaring inefficiency that I suspect would have deeply bothered him.
2023-06-04 13:20:14 note that there are usually fewer than sixteen kinds of ops, so the energy wasted is not that much
2023-06-04 13:20:29 It's 'tolerable'.
2023-06-04 13:20:33 But it's there.
2023-06-04 13:20:57 and register renaming, out-of-order execution and such are way more costly
2023-06-04 13:20:59 In an async system like the GA144 all you'd need to do is mask the inputs to the various functional logic chains, and only let the inputs through to the one you wanted.
2023-06-04 13:21:46 So it would just be a layer of buffers there at the ALU inputs.
2023-06-04 13:22:05 I like how the inter-compute-node ports work for sync purposes
2023-06-04 13:22:19 in the GA144 that is
2023-06-04 13:22:35 Of course, those buffers would cost energy, so it would take some analysis to decide which way won.
2023-06-04 13:22:53 Yes, that is really cool.
2023-06-04 13:23:03 The whole "execute a port" thing.
2023-06-04 13:23:13 It's damn clever end to end.
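A toy C model of that J1-style datapath described above: TOS and NOS fan out to every op unit at once, and a single result mux picks the winner. The op encoding below is made up for illustration and is not the actual J1 instruction format:

    #include <stdint.h>

    typedef uint16_t cell;              /* the J1 is a 16-bit machine */

    static cell alu(cell tos, cell nos, unsigned op) {
        /* Every "unit" computes unconditionally, as in the hardware... */
        cell r_add = (cell)(nos + tos);
        cell r_and = nos & tos;
        cell r_or  = nos | tos;
        cell r_xor = nos ^ tos;
        cell r_inv = (cell)~tos;
        cell r_eq  = (nos == tos) ? (cell)0xFFFF : 0;

        /* ...and the result mux selects one, discarding the rest --
           which is exactly the energy concern raised above. */
        switch (op & 7) {
            case 0: return tos;
            case 1: return r_add;
            case 2: return r_and;
            case 3: return r_or;
            case 4: return r_xor;
            case 5: return r_inv;
            case 6: return r_eq;
            default: return nos;
        }
    }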
2023-06-04 13:23:17 A blocking read or write basically halts the clock of that node until the other end has written to or read from the port
2023-06-04 13:23:24 And such an entirely different mindset is required for programming it.
2023-06-04 13:23:36 I see it as more like designing a circuit than writing a procedure.
2023-06-04 13:24:11 as I have said before, I think the GA144 and the FlowBasedProgramming paradigm fit nicely together
2023-06-04 13:24:12 Well, there isn't a clock.
2023-06-04 13:24:23 But yeah, it's "like that."
2023-06-04 13:24:39 the IP increment signal or what have you
2023-06-04 13:24:44 The ports just know whether they're full or empty.
2023-06-04 13:24:58 When you read one, you sit there until there's data to read.
2023-06-04 13:26:37 iirc you can read a 'register' that gives the status of all the ports, so you can refrain from reading/writing if you need non-blocking behavior
2023-06-04 13:31:03 Right.
2023-06-04 13:31:25 I can't remember - does writing block until someone reads it?
2023-06-04 13:31:33 Or will the port just hold the data until it's consumed?
2023-06-04 13:31:57 If the former, then it's a lot like communicating sequential processes.
2023-06-04 13:32:22 depends. I think each port can have a non-blocking address and a blocking one.
2023-06-04 13:32:36 That makes sense.
2023-06-04 13:33:26 I read about a processor a couple of years ago that had RAM mapped into multiple places in the address space. Depending on which one you used, you could control whether the data was cached or not.
2023-06-04 13:33:28 I liked it.
2023-06-04 13:34:03 I like it when the programmer has more control over resources.
2023-06-04 13:43:10 I'd rather dispense with hardware caching and just have a small but fast RAM per compute node, and use DRAM as a sort of fastish disk
2023-06-04 13:46:34 Yeah, that's the ultimate in giving the programmer control.
2023-06-04 13:47:05 "total transparency."
2023-06-04 13:47:14 Here are your capabilities - knock yourself out.
2023-06-04 13:47:15 and it gets rid of power-hungry and die-space-hungry complex circuitry
2023-06-04 13:47:34 Yes indeed.
2023-06-04 13:47:42 Instruction re-ordering.
2023-06-04 13:47:54 You could just ask the programmer to write the damn instructions in the right order to start with.
2023-06-04 13:48:10 Just make it blazing fast when it's used the right way.
2023-06-04 13:48:32 And use that logic space to offer more actually usable resources.
2023-06-04 13:48:36 Or more cores.
2023-06-04 13:49:26 The "automating" of stuff - that's a slippery slope, and modern micros have slipped way, WAY down it.
2023-06-04 13:49:45 one issue with modern nanometre-scale processes is that an IC will cook itself if all the circuits on it are used at once
2023-06-04 13:49:50 All to make the code monkey model continue to work.
2023-06-04 13:50:00 Yeah.
2023-06-04 13:50:14 That's what I hear the advantage of graphene is over silicon as a substrate.
2023-06-04 13:50:24 Better thermal performance -> much faster allowed clock speed.
2023-06-04 13:50:43 I saw one YouTube video talking about the possibility of 500 GHz cores, graphene-based.
2023-06-04 13:51:03 That's a 100x performance boost, without any change in design at all - just a change of material.
2023-06-04 13:51:17 Kind of shocking to even consider.
2023-06-04 13:51:42 Well, change of material and scale.
2023-06-04 13:52:20 0.5 terahertz cores won't ever be on the horizon, because the clock pulse propagation distances would be absurdly small. Well, unless they stop insisting on a single clock domain or just a few.
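Circling back to the port discussion above, a toy software model of a one-word full/empty port, written with pthreads. The GA144's ports are asynchronous hardware, and this sketch models only the "hold the data until it's consumed" variant of the two behaviors asked about (the writer blocks if the slot is still full, rather than waiting for its own word to be read). All names are hypothetical:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint32_t        data;
        bool            full;       /* the port knows full vs. empty */
        pthread_mutex_t lock;
        pthread_cond_t  changed;
    } port;

    /* one example port instance */
    static port p = { 0, false,
                      PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER };

    /* Writer blocks while the port still holds unconsumed data. */
    static void port_write(port *pt, uint32_t v) {
        pthread_mutex_lock(&pt->lock);
        while (pt->full)
            pthread_cond_wait(&pt->changed, &pt->lock);
        pt->data = v;
        pt->full = true;
        pthread_cond_signal(&pt->changed);
        pthread_mutex_unlock(&pt->lock);
    }

    /* Reader "sits there until there's data to read". */
    static uint32_t port_read(port *pt) {
        pthread_mutex_lock(&pt->lock);
        while (!pt->full)
            pthread_cond_wait(&pt->changed, &pt->lock);
        uint32_t v = pt->data;
        pt->full = false;
        pthread_cond_signal(&pt->changed);
        pthread_mutex_unlock(&pt->lock);
        return v;
    }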
2023-06-04 13:53:55 Then there is of course optronics, which uses photonic links for long buses and possibly computation based on light interference grating systems.
2023-06-04 13:55:06 a graphene base substrate isn't as amenable to doping as silicon is, iirc
2023-06-04 13:55:11 Yeah, that was why I added scale. That pesky relativistic limit...
2023-06-04 13:56:02 But an async design gets rid of the clock domain problem.
2023-06-04 13:56:32 a sheet of graphene is basically a hex grid of carbon atoms covalently bonded to three neighbours
2023-06-04 13:57:18 as there is one valence electron already available, graphene is rather conductive
2023-06-04 13:59:24 so to get similar n-p junctions in graphene you might have to dope the material a lot with boron (which is missing one valence electron) to get p regions, and with an element that has an extra valence electron to get n regions
2023-06-04 14:01:11 which means the thermal conductivity of the thing might go down and the crystalline structural integrity might go down also
2023-06-04 14:02:02 then there is the issue of manufacturability of these devices
2023-06-04 14:02:57 just the ultraviolet interference-based patterning and the resist cycle might not work
2023-06-04 14:04:32 might be possible to use DNA-brick-origami site-specific enzyme-esque catalytic doping, but that is years off from what I can tell
2023-06-04 21:15:32 https://dacvs.neocities.org/SF/ beautiful.