2023-06-27 03:46:53 KipIngram: I recommend doing the simplest version of what he suggests, because it's "good enough" for basic usage. It's impossible to totally avoid fragmentation, and coalescing logic is simpler when you greedily coalesce on every free
2023-06-27 03:47:12 And don't have different strategies for smaller and larger allocs, because you probably won't have many tiny allocs anyway!
2023-06-27 03:48:50 Well, actually you can limit fragmentation one way, but you won't like it. Have a different mmap for each size range in increasing powers of 2. Then the worst case is 50% fragmentation
2023-06-27 06:58:27 Yeah, some degree of fragmentation is just going to happen, unless you write your system so that you can move stuff around at any time.
2023-06-27 06:58:39 I've toyed with that idea but have never arrived at a really workable scheme.
2023-06-27 06:58:57 It could be done, but it brings with it performance penalties I'm not willing to pay.
2023-06-27 07:00:20 That's why only the root system / Forth vocabulary will be "imageable" in this system I'm planning now. I can create it in a way that will allow me to stick it anywhere in memory I want and adjust it to work at that location. But all the rest of the stuff will need to be loaded from source, so that it gets built in a way that works wherever it happens to land that particular time.
2023-06-27 07:04:55 And yes - the way I learned boundary tag was to greedily coalesce anywhere it was possible - I think Lea's "heuristics" around that were exactly that - not really an "algorithm" anymore but more a set of rules of thumb based on actual experience.
2023-06-27 07:05:37 Regarding small allocs, I think once you have one you're likely to have more - you are probably running some data structure you designed that involves those small allocs, and chances are your program's going to do a bunch of them.
2023-06-27 07:06:15 I suspect larger allocs will be much more sporadic.
2023-06-27 07:09:50 It strikes me that the optimal way to do such small allocs would be to allocate a larger block when you start up such an allocation, and allocate your small items from that using an efficient fixed-size allocator. Then free the pool when you're done.
2023-06-27 13:26:24 I think that process could be made transparent - you'd just have a list of available alloc sizes, and the first time a request got made it would see that it had no resources for that size and would allocate a larger block to serve as that size's pool. Eventually it would detect that the last items in the pool had been freed, at which time it'd free the larger block.
2023-06-27 13:26:49 And it would use a fast fixed allocator for those small requests.
2023-06-27 13:27:12 All you need is to know that WITHIN A POOL all the requests will be the same size.
2023-06-27 13:28:27 As long as one pool block is enough, that seems entirely straightforward. If you exhausted that larger block, though, you'd need to get another one, and I haven't thought through all the details of tracking the state of multiple pool blocks.
2023-06-27 13:29:32 If certain alignment requirements were imposed then I think it would probably work out ok.
2023-06-27 22:49:21 test
2023-06-27 22:49:50 didn't work
2023-06-27 23:16:46 haha
2023-06-27 23:16:51 looks like the matrix bridge is back
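A minimal sketch of the boundary-tag layout and the greedy coalesce-on-every-free discussed at 03:46 and 07:04, assuming each block stores its size (with a low "in use" bit) in both a header and a footer, and that sentinel in-use blocks cap both ends of the arena. All names here are illustrative, not Lea's actual layout, and a real allocator would also maintain a free list:

```c
#include <stddef.h>

/* Each block begins and ends with a tag: size in bytes, low bit = in use. */
typedef struct {
    size_t size_and_used;
} tag_t;

#define USED        ((size_t)1)
#define SIZE(t)     ((t)->size_and_used & ~USED)
#define IS_USED(t)  ((t)->size_and_used & USED)

static tag_t *footer_of(tag_t *hdr)
{
    return (tag_t *)((char *)hdr + SIZE(hdr) - sizeof(tag_t));
}

static void set_block(tag_t *hdr, size_t size, int used)
{
    hdr->size_and_used = size | (used ? USED : 0);
    footer_of(hdr)->size_and_used = hdr->size_and_used;  /* mirror in the footer */
}

/* Greedy coalescing on free: always absorb a free neighbour on either side,
 * so the arena never holds two adjacent free blocks. */
void coalescing_free(tag_t *hdr)
{
    size_t size = SIZE(hdr);

    tag_t *next = (tag_t *)((char *)hdr + size);
    if (!IS_USED(next))
        size += SIZE(next);                      /* absorb the block after us */

    tag_t *prev_footer = (tag_t *)((char *)hdr - sizeof(tag_t));
    if (!IS_USED(prev_footer)) {
        size += SIZE(prev_footer);
        hdr = (tag_t *)((char *)hdr - SIZE(prev_footer));  /* merged block starts earlier */
    }

    set_block(hdr, size, 0);
    /* A real allocator would also splice the merged block into its free list here. */
}
```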
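On the power-of-two size classes mentioned at 03:48: rounding every request up to the next power of two bounds internal fragmentation at just under 50%, since the worst case is a request one byte past a class boundary (e.g. 2^k + 1 bytes served from a 2^(k+1) block). The helper name and the 16-byte minimum class below are assumptions:

```c
#include <stddef.h>

/* Round a request up to its power-of-two size class (hypothetical helper).
 * A request of (2^k)+1 bytes lands in the 2^(k+1) class and wastes just
 * under half the block, which is the "worst case 50% fragmentation" figure. */
static size_t size_class(size_t n)
{
    size_t c = 16;          /* smallest class; an assumption, not from the log */
    while (c < n)
        c <<= 1;
    return c;
}
```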
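The transparent per-size pool described from 07:09 through 13:28 could look roughly like this: one large block per size class, created lazily on the first request, carved into equal cells threaded onto a free list, and released when the last cell comes back. Names, the 4 KiB pool size, the use of malloc rather than mmap, and the single-pool-per-size simplification (the "one pool block is enough" case from 13:28) are all assumptions:

```c
#include <stdlib.h>

#define POOL_BYTES 4096     /* size of the large backing block; an assumption */

typedef struct pool {
    void   *block;          /* the one large allocation backing this pool        */
    void   *free_list;      /* singly linked list threaded through free cells    */
    size_t  cell_size;      /* every request in this pool is the same size       */
    size_t  live;           /* cells handed out and not yet freed                */
} pool_t;

static void pool_fill(pool_t *p)
{
    p->block = malloc(POOL_BYTES);              /* could be mmap in the real thing */
    if (p->block == NULL)
        return;                                 /* out of memory; alloc returns NULL */
    size_t n = POOL_BYTES / p->cell_size;
    char *c = p->block;
    for (size_t i = 0; i + 1 < n; i++)          /* thread the cells into a free list */
        *(void **)(c + i * p->cell_size) = c + (i + 1) * p->cell_size;
    *(void **)(c + (n - 1) * p->cell_size) = NULL;
    p->free_list = c;
}

void *pool_alloc(pool_t *p)
{
    if (p->block == NULL)
        pool_fill(p);                           /* first request creates the pool */
    void *cell = p->free_list;
    if (cell == NULL)
        return NULL;                            /* single-pool case only */
    p->free_list = *(void **)cell;
    p->live++;
    return cell;
}

void pool_free(pool_t *p, void *cell)
{
    *(void **)cell = p->free_list;              /* push the cell back on the list */
    p->free_list = cell;
    if (--p->live == 0) {                       /* last item returned: drop the block */
        free(p->block);
        p->block = NULL;
        p->free_list = NULL;
    }
}
```

A zero-initialized pool_t per size class with cell_size filled in is assumed; cell_size needs to be at least sizeof(void *) and pointer-aligned so the free-list link fits in a free cell, which matches the alignment point at 13:29:32.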