2024-05-15 00:19:08 Yes, for sure.
2024-05-15 00:19:23 I wish we'd done microprocessors altogether differently.
2024-05-15 00:19:37 I understand how the economics drove us the way we went - it's not "surprising."
2024-05-15 00:20:38 But I kind of wonder where we might be if the processor designers had just done whatever they could to offer up more processing power, without regard for legacy software. Yes, it would have meant rewriting some software. But the main thing driving that was trying to have fast single cores instead of multiple cores, and eventually we got to multiple cores anyway.
2024-05-15 00:20:53 What if we'd just swallowed that pill way back in the beginning and kept cores simple?
2024-05-15 00:21:07 We'd likely have dodged Spectre/Meltdown, for one thing.
2024-05-15 00:21:17 That was totally born out of excessive complexity.
2024-05-15 00:22:13 On some processors all that fancy stuff designed to speed up individual cores takes up more logic than the compute core itself does.
2024-05-15 00:23:27 And then things like hypervisor support, which is right in what you were referring to - it won't surprise me if we get more layers like that in the future.
2024-05-15 14:05:16 KipIngram: I think speculative execution security issues were inevitable; security and performance are always at odds
2024-05-15 14:27:32 Yes, I agree. The speculative execution guys just failed to recognize that by failing to revert *all* of the effects of the aborted execution (i.e., effects on the cache), they'd left an opening.
2024-05-15 14:28:55 Even if they'd purged any cache lines the speculative execution loaded, they'd still have *ejected* something from the cache, and that might be enough for an exploit too - I don't know for sure.
2024-05-15 14:30:10 I suppose they could have just not let *speculative* instructions modify the cache at all. Just let it get loaded later, via a non-speculative instruction. But that would have been less performant.
2024-05-15 14:30:55 And in some cases there might not BE other instructions that would load that cache line, in which case it would never get loaded and your performance would suck.
2024-05-15 14:32:50 A couple of years ago I saw info on some particular processor that I found interesting. It had RAM show up in multiple places in the address space. Accessing it in one range had no cache effects, while accessing it in the other would bring the read data into the cache. It gave the programmer a measure of explicit control.
2024-05-15 14:33:38 But I'm sure a primary goal of automated cache management in the first place was to boost the performance of legacy code that would not be re-compiled; requiring programmer oversight of how the cache got used would have defeated that whole point.
2024-05-15 14:33:46 It needed to be ENTIRELY transparent.
2024-05-15 14:34:30 Having the programmer in charge of such things has always appealed to me, though.
2024-05-15 14:35:52 One outcome of "programmer unawareness" of cache action is false sharing - if your programmer was conscious of cache lines, at the source code level, then he or she could think about avoiding such things.
2024-05-15 14:39:27 Another thing they could have done was let the "ring level" affect the MEANING of instructions. Maybe in less authorized rings privileged instructions should just act as nops.
2024-05-15 14:40:28 Unless I'm overlooking something, that would solve the problem completely.
2024-05-15 14:40:50 Yeah, you can run that instruction. It's just not going to do anything.
2024-05-15 14:53:02 Seems like just having the ring level be part of how you address the microcode storage would make that easy.
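
The "opening" discussed above comes down to a timing difference: a load that hits in the cache is much faster than one that misses, so whatever speculation leaves in (or pushes out of) the cache can be observed afterward simply by timing loads. Below is a minimal sketch of that measurement, assuming an x86 machine and the gcc/clang intrinsics __rdtscp, _mm_clflush and _mm_mfence; the exact cycle counts, and how reliable a single sample is, will vary from machine to machine.

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>          /* __rdtscp, _mm_clflush, _mm_mfence */

    static uint8_t probe[64];

    /* Time a single load; a cache hit is far cheaper than a miss. */
    static uint64_t time_one_access(volatile uint8_t *p)
    {
        unsigned aux;
        uint64_t start = __rdtscp(&aux);
        (void)*p;                    /* the load being timed */
        uint64_t end = __rdtscp(&aux);
        return end - start;
    }

    int main(void)
    {
        _mm_clflush(probe);          /* start with the line out of the cache */
        _mm_mfence();
        uint64_t cold = time_one_access(probe);

        (void)*(volatile uint8_t *)probe;   /* touch it so it is cached now */
        _mm_mfence();
        uint64_t warm = time_one_access(probe);

        printf("cold ~%llu cycles, warm ~%llu cycles\n",
               (unsigned long long)cold, (unsigned long long)warm);
        return 0;
    }

Run a few times, the "cold" number should come out noticeably larger than the "warm" one; that gap is the side channel.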
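
On the false-sharing point: if two threads hammer on variables that happen to share one cache line, every write by one core invalidates that line in the other core's cache, even though the threads never touch each other's data. A programmer who can see the line size at the source level can pad the layout so each hot variable owns its own line. A small sketch follows, assuming POSIX threads and a 64-byte line (the 64 is an assumption and should be checked for the real target); compile with -pthread, and removing the pad member typically makes it run noticeably slower.

    #include <pthread.h>
    #include <stdio.h>

    #define CACHE_LINE 64            /* assumed line size; check the target */
    #define ITERATIONS 50000000L

    /* Padding keeps each counter on its own cache line, so the two threads
     * stop invalidating each other's line on every increment. */
    struct counter {
        volatile long value;
        char pad[CACHE_LINE - sizeof(long)];
    };

    static _Alignas(CACHE_LINE) struct counter counters[2];

    static void *bump(void *arg)
    {
        struct counter *c = arg;
        for (long i = 0; i < ITERATIONS; i++)
            c->value++;
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        pthread_create(&t[0], NULL, bump, &counters[0]);
        pthread_create(&t[1], NULL, bump, &counters[1]);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        printf("%ld %ld\n", counters[0].value, counters[1].value);
        return 0;
    }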
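
For the dual-mapping idea, the Raspberry Pi Pico mentioned just below is a concrete case: the RP2040 exposes its flash (XIP) window at several address aliases, one that goes through the small XIP cache and one that bypasses it, so the cached-or-not choice is made per access simply by which base address you read through. A sketch of what that looks like from C; the two base addresses are recalled from the RP2040 address map and should be treated as assumptions to verify against the datasheet or the pico-sdk's addressmap.h.

    #include <stdint.h>

    /* Assumed RP2040 XIP (flash) aliases: 0x10000000 is the cached alias,
     * 0x13000000 the no-cache/no-allocate alias.  Verify before relying
     * on these numbers. */
    #define FLASH_CACHED_BASE    0x10000000u
    #define FLASH_UNCACHED_BASE  0x13000000u

    /* Same storage, two behaviours: the first read may allocate a cache
     * line, the second never touches the cache at all. */
    static inline uint32_t read_cached(uint32_t offset)
    {
        return *(volatile uint32_t *)(FLASH_CACHED_BASE + offset);
    }

    static inline uint32_t read_uncached(uint32_t offset)
    {
        return *(volatile uint32_t *)(FLASH_UNCACHED_BASE + offset);
    }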
2024-05-15 15:05:58 the raspberry pi pico has memory mapped to four different address ranges so you can decide if you want caching
2024-05-15 15:42:24 Nice. I think that's a great feature.
2024-05-15 16:09:15 I guess when we went from a 32-bit address space to a 64-bit address space a lot of possibilities opened up.
2024-05-15 16:47:01 ya true though the pico is still 32 bit
2024-05-15 16:58:04 Ok. So what if it has 4GB of RAM? How do they still put the RAM in more than one place?
2024-05-15 16:58:25 Maybe it just can't have that much?
2024-05-15 17:00:05 IA-32 had extensions for embiggened memory access
2024-05-15 17:00:37 embiggened :-)
2024-05-15 17:00:43 I like it. That should be a word.
2024-05-15 17:01:43 The old 386 had an "unreal" mode, where it could get at memory outside the old 1 MB limitation. I think the processors we have now still boot up behaving like a 386.
2024-05-15 17:01:54 it is! https://www.oed.com/search/dictionary/?scope=Entries&q=embiggen
2024-05-15 17:01:59 Backward compatibility has brought us quite a messy process.
2024-05-15 17:02:38 (besides the obvious and unresolved debate over what, exactly, a word is, and when a word becomes a word, etc)
2024-05-15 17:04:04 Yeah. "Purist" linguists have always "condemned" how the "young people are ruining the language." You can find such statements going back hundreds of years and across all kinds of languages.
2024-05-15 17:04:30 But in the end, languages evolve in a very messy, grass-roots way.
2024-05-15 17:04:46 And you pretty much can't stop that process.
2024-05-15 17:04:48 and we know how Romans pronounced things from writers making fun of kids these days
2024-05-15 17:04:59 Yep.
2024-05-15 17:05:19 get off my banded iron formations, etc
2024-05-15 17:05:21 People love to find some group to "lord it over."
2024-05-15 17:16:55 yeah, those idiots
2024-05-15 17:17:00 good thing we're not like that
2024-05-15 17:17:17 ;)
2024-05-15 17:30:36 :-)
2024-05-15 17:36:34 KipIngram: the pico is the microcontroller variant, the one with 264k of ram
2024-05-15 17:37:59 there are some higher tier mcus with "tightly" coupled memory or whatever they call it
2024-05-15 17:38:23 where that 64k or however much is faster than the rest. it's a neat idea
2024-05-15 17:39:32 also sticking some code in ram to avoid flash wait states when you have no cache is cool, but I can't imagine chopping big routines into tiny pieces and putting only some into ram
2024-05-15 17:40:40 I think that's why the hardware cache is so appealing. you don't have to figure out what part of the function to excise to get better performance
2024-05-15 17:42:47 True.
2024-05-15 17:43:03 Was it the 6502 that had that "fast 256 bytes"?
2024-05-15 17:43:07 Or was that Z80?
2024-05-15 17:44:35 6502 had a zero page
2024-05-15 17:45:17 The first 256 bytes could be addressed with a single byte
2024-05-15 17:46:02 kind of like 256 registers
2024-05-15 17:49:55 Right.
2024-05-15 19:42:43 say you have to perform a series of calculations, and you want to comment the steps, in order to arrive at a value that you want to give to "constant", how do you format it?
2024-05-15 19:44:05 i could put each step on its own line, followed by a comment, and then the last line "constant foo"
2024-05-15 19:44:36 but then it all kind of runs together, i wonder if i should indent something. but then what do i indent: the calculations or the constant def?