2022-02-26 12:12:28 I'll ask here. is there a document that explains how IO is performed?
2022-02-26 12:12:47 like the entire process
2022-02-26 12:38:58 in what?
2022-02-26 12:39:51 computers?
2022-02-26 12:39:55 a specific forth?
2022-02-26 12:39:59 what type of IO?
2022-02-26 12:40:08 what OS, what forth?
2022-02-26 12:40:10 ya gotta give us more bud
2022-02-26 13:46:37 I was just wondering, how the hell does an IO operation happen
2022-02-26 13:46:39 like
2022-02-26 13:46:41 why is it blocking
2022-02-26 13:47:07 I'm missing basic fundamentals, anyone have a book around
2022-02-26 14:15:52 afaik in modern linux the only reason why e.g. read() is blocking is "posix says so"
2022-02-26 14:16:20 like, beyond O_NONBLOCK, there's io_uring
2022-02-26 14:16:39 and we're edging nearer towards the future where you can do IO from eBPF every day
2022-02-26 16:29:50 IO doesn't have to be blocking
2022-02-26 16:30:00 it's blocking because you wait for the result, generally
2022-02-26 16:31:26 a language like C doesn't have something like futures/promises, or some other way to work on values that aren't there yet
2022-02-26 16:32:18 without a mechanism to deal with deferred results, you have no choice but to wait on IO
2022-02-26 16:33:17 O_NONBLOCK is a funny little thing
2022-02-26 16:33:27 you effectively have to 'poll' it
2022-02-26 16:33:35 If data is not available when you call read, then the system call will fail with a return value of -1 and errno is set to EAGAIN.
2022-02-26 17:34:28 I've always liked the idea, in cases that might be addressed by blocking I/O, of separating the initiation of the operation and the loading of the results. Then the program has control, and can attempt to do it intelligently so that it can do other useful work while the slow stuff is happening.
2022-02-26 17:35:17 The "initiate" word would return immediately, you'd do whatever else you wanted to do, and then you *might* have the "load result" word block if the thing still isn't ready.
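[The O_NONBLOCK behavior described at 16:33:35 can be sketched in a few lines. This is a Python illustration of the same mechanism (Python raises BlockingIOError where C's read() would return -1 with errno set to EAGAIN); the pipe is just a convenient fd to demonstrate with.]

```python
import errno
import os

# A read on a non-blocking fd with no data available fails immediately
# with EAGAIN instead of blocking -- so you effectively have to poll it.
r, w = os.pipe()
os.set_blocking(r, False)   # equivalent to setting O_NONBLOCK on the fd

try:
    os.read(r, 64)          # nothing written yet: no data available
except BlockingIOError as e:
    print("read would block, errno =", errno.errorcode[e.errno])  # EAGAIN

os.write(w, b"hello")
print(os.read(r, 64))       # data is ready now, so the read succeeds
```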
Or you might just say "be sure you wait long enough" and have that word give you whatever reads in when you run it.
2022-02-26 17:35:33 In either case, responsibility is shifted to some extent to the programmer.
2022-02-26 17:35:55 Forth in hardware can be so fast that even @ might work this way.
2022-02-26 17:36:38 Maybe you have (@ and @) or something. Just depends on relative speed of your stack-only operations and your RAM.
2022-02-26 17:37:35 In an FPGA design I considered once, even (@ @) with nothing in between would have eliminated the need to have @ have a wait state in it.
2022-02-26 17:38:27 And that @ would have been the only primitive that needed more than one cycle to operate, so doing it that way streamlined the whole design by letting every instruction have single-cycle operation.
2022-02-26 17:38:35 Efficiencies like that "cascade."
2022-02-26 17:39:18 An execution unit that didn't need to be able to insert a wait state selectively was just simpler than one that did.
2022-02-26 17:52:56 KipIngram: this is the idea of 'futures' + await in traditional languages
2022-02-26 17:54:23 you have a function that returns a 'future'
2022-02-26 17:55:04 you can do limited things with the future
2022-02-26 17:55:39 and you can get the actual value of the future using 'await'
2022-02-26 17:56:21 what await does is it blocks
2022-02-26 17:56:30 until the value is ready
2022-02-26 17:56:48 but if you don't call await, it doesn't block
2022-02-26 18:01:55 That seems like a nice way to formalize the idea.
2022-02-26 18:02:47 I like the idea of having some limited capabilities, as possible.
2022-02-26 18:33:39 For the (@ ... @) juggle, though, the length of the delay probably doesn't justify much overhead (like reading in a temporary item). I figured that "in between" might just be used for a stack op or two or something like that.
2022-02-26 18:34:00 yea
2022-02-26 18:34:37 Though I don't really have any idea how often it might actually work out.
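[The 'futures' + await pattern described at 17:52:56 can be sketched like this. A Python sketch using the standard-library concurrent.futures (asyncio's `await` is the same idea); `slow_io` is a made-up stand-in for a blocking read.]

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_io():
    time.sleep(0.1)          # stand-in for a slow read
    return b"data"

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(slow_io)    # "initiate": returns a future at once
    busy = sum(range(1000))       # do other useful work while the IO runs
    value = fut.result()          # "await": blocks only until the value is ready

print(value)  # b'data'
```

[Note how the blocking is moved to the point where the value is actually needed: if you never call result(), nothing blocks.]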
It might be that all the work done in the short period prior to reading RAM would be on getting the address ready. It'd be kind of case-by-case.
2022-02-26 18:35:41 could be different with memory mapped IO
2022-02-26 18:36:13 I also had some ideas during that same bit of work for separating conditional jumps into two pieces, so that whenever possible the decision could be computed early by the execution unit and fed back to the fetch unit before the fetch unit reached the actual jump point.
2022-02-26 18:36:31 That too would only really be feasible in some cases.
2022-02-26 18:36:40 But one case it would work for would be traversing a linked list.
2022-02-26 18:37:31 You could test the link pointer and report over to the fetcher whether it was end of list or not before you actually processed the contents of that list item.
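[A software analog of the idea at 18:37:31: read the link pointer first, so the "end of list?" decision is known before the item's contents are processed -- the way the execution unit would report the branch decision to the fetcher early. The Node class and the summing loop are made up for the illustration.]

```python
class Node:
    def __init__(self, value, nxt=None):
        self.value = value
        self.next = nxt

head = Node(1, Node(2, Node(3)))

total = 0
node = head
while node is not None:
    nxt = node.next       # test the link pointer up front: the "jump or
                          # not" decision is available early
    total += node.value   # ...before the item's contents are processed
    node = nxt

print(total)  # 6
```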