2023-07-01 01:16:56 How do you prefer getting the third item on the stack? I use 3 PICK usually.
2023-07-01 01:20:00 careful you might be off by 1 there ... : dup 0 pick ; : over 1 pick ; : third 2 pick ;
2023-07-01 01:20:20 the word `third` isn't standard but I saw it somewhere
2023-07-01 01:22:57 Ah I don't think my implementation has THIRD support.
2023-07-01 02:20:22 The way I think about PICK is 'n PICK' gets the nth cell on the stack, with zero indexing
2023-07-01 02:23:23 KipIngram: I might do e.g. test(arr) { foreach(arr, ()(element){ putf("%d\n", element); }); }
2023-07-01 02:23:39 How would it be shorter or easier if I had to name that function?
2023-07-01 02:24:08 And I think quotations in Forth look nice and are quite succinct where they appear
2023-07-01 02:25:01 My *only* hesitation about using them is that I like old-school Forth, which has immediate control words, not quotations, and so I prefer using 'immediate control words' for things
2023-07-01 02:25:11 It's more consistent with IF .. THEN etc
2023-07-01 02:25:50 Of course there are dialects like retro where it *is* consistent because everything's done with quotations
2023-07-01 05:30:47 emoji seems to work in st now
2023-07-01 05:32:11 I guess that bug in ... libxft or something has been fixed
2023-07-01 05:32:46 I can't remember exactly; some random font-related library was doing something wrong and st refused to do a workaround, so st would just crash on emoji for a while
2023-07-01 09:03:06 amberishere: I have a notion of "stack frames" in my Forth. I would do this:
2023-07-01 09:03:32 : foo { s2 @ 0 } ;
2023-07-01 09:04:17 In my next one I'm thinking of supporting HP calculator "naming," where the top four items of the stack are x, y, z, t.
2023-07-01 09:04:48 So x would be a synonym for dup, y a synonym for over, and then z and t would be 2 pick and 3 pick respectively.
2023-07-01 09:05:05 I'd also support x<>z and x<>t and a variety of other things.
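The zero-indexed PICK convention discussed above can be modeled in a few lines of Python. This is a hypothetical sketch of the semantics only (top of stack at the end of the list), not any particular Forth's internals:

```python
def pick(stack, n):
    """n PICK: push a copy of the nth cell, counting from 0 at the top.
    So 0 PICK is DUP, 1 PICK is OVER, and 2 PICK fetches the third item."""
    stack.append(stack[-1 - n])

s = [30, 20, 10]   # 10 is the top of the stack
pick(s, 2)         # "third": copies the third cell to the top
# s is now [30, 20, 10, 30]
```

This makes the off-by-one caution concrete: "3 PICK" would reach the *fourth* cell under this zero-indexed convention.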
2023-07-01 09:05:25 There's a lot in HP calculator land that would port directly to simple primitives.
2023-07-01 09:08:11 veltas: Well, that's not really Forth. And I'm not necessarily claiming that non-nesting would result in "less typing" - just that the outer definition is longer with a nested function sitting inside it than it would be with them separated.
2023-07-01 09:08:49 The "entity" representing the outer function will spread out over more of your "page" with the nested definition in it, and thus will be more for your "mind's eye" to take in and juggle.
2023-07-01 09:09:32 I'm not really trying to make a rigorous objective argument here - more just noting what seems easier for me personally to process mentally. Personal opinion here.
2023-07-01 09:10:51 Plus if you stick a nested function inside another one, then when you compile that you have to jump around your nested function. Not a terribly big deal, but it just doesn't appeal to me much.
2023-07-01 09:13:09 Also, I wasn't really thinking about anonymous functions here - just NESTED functions, which I figured could still have a name if you wanted them to. There the nesting would be part of your namespace management, and I feel perfectly good about my own way of managing my namespace (the .: .wipe thing).
2023-07-01 09:20:19 Out of these three options:
2023-07-01 09:20:21 https://pastebin.com/T8bJrg8t
2023-07-01 09:20:29 I just find the last one more "consumable."
2023-07-01 09:20:53 And it's easier to compile.
2023-07-01 09:28:32 Anyway, I guess I've just evolved a possibly peculiar personal style for this stuff. Just found what appeals to me, that's all.
2023-07-01 09:41:32 KipIngram: Have you tried using stuff like C++'s algorithm library? Or something with a lot of function passing?
2023-07-01 09:42:09 I have done it a bit and find it convenient sometimes, although I do wonder how readable it is to others
2023-07-01 09:42:10 Here's something I find myself considering from time to time that's non-standard and would likely be frowned upon by some. Consider the generic situation of invoking some function (in Forth). The function has some parameters - we put those on the stack and call the function. And it produces some results. Let's say M parameters and N results.
2023-07-01 09:42:32 Well, those M parameters are sometimes still sitting there after you calculate your results.
2023-07-01 09:42:49 In some situations you might be able to eliminate them as part of calculating the results, but in other cases not.
2023-07-01 09:43:09 So I get tempted to have a generic way of cutting those parameter items out of the stack, since they may no longer be needed.
2023-07-01 09:43:31 A way of "sinking" the N results down and slicing the parameters away, in one primitive operation.
2023-07-01 09:43:48 No, I haven't done much of that.
2023-07-01 09:44:07 Honestly I've done more Python programming (especially in the last few years) than C.
2023-07-01 09:44:16 I haven't done much C since I did my gcc-based Forth.
2023-07-01 09:44:33 And that thing was ugly as mud.
2023-07-01 09:46:15 That removal of parameters is really kind of related to array slicing.
2023-07-01 09:46:24 Which is also a useful operation to have sometimes.
2023-07-01 09:46:37 What do you think of this kind of stuff? https://www.cppstories.com/2014/12/top-5-beautiful-c-std-algorithms/
2023-07-01 09:47:48 Um, it looks like something I'd need to give a little study, but it's not "offensive at first glance."
2023-07-01 09:48:03 But I'm processing it with my "C eye," not my "Forth eye."
2023-07-01 09:48:24 I really really don't like for my Forth to "look like" C.
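The "sinking" primitive mused about earlier - removing the M parameter cells from under the N results in one operation - could be sketched like this in Python. The name `sink` and the whole interface are hypothetical, invented for illustration:

```python
def sink(stack, m, n):
    """Hypothetical primitive: delete the m parameter cells that sit
    beneath the top n result cells, sliding the results down."""
    results = stack[len(stack) - n:]   # copy off the n results
    del stack[len(stack) - n - m:]     # cut both results and params away
    stack.extend(results)              # put just the results back

s = ['p1', 'p2', 'p3', 'r1', 'r2']    # 3 params still under 2 results
sink(s, 3, 2)
# s is now ['r1', 'r2']
```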
2023-07-01 09:48:44 Yeah otherwise what's the point
2023-07-01 09:48:47 Multiple lines with indentation is just not a place I go in Forth.
2023-07-01 09:49:06 Well, you see a lot of Forth that looks just like that if you nose around.
2023-07-01 09:49:22 Which I suppose is just people with C experience sticking their toe into Forth.
2023-07-01 09:49:26 I've written Forth like that, and the more Forth I write the less it looks like that
2023-07-01 09:49:35 ^ Same for me.
2023-07-01 09:49:58 One reason I am moving to shorter definitions is that it empowers the IDE-style work of Forth
2023-07-01 09:50:06 I think we unavoidably get "indoctrinated" by whatever our heavily used tools are.
2023-07-01 09:50:12 You can LOCATE stuff more easily and use it almost like a line editor
2023-07-01 09:50:23 Without actually having any text editor at all
2023-07-01 09:50:25 Yes.
2023-07-01 09:50:45 My current editor is an ever so slightly glorified line editor and I'm liking it just fine.
2023-07-01 09:51:04 I would PREFER a bit more "screen style mobility," and will probably write that eventually, but this is getting me there.
2023-07-01 09:51:06 And the other reason is it makes it easier to understand stack effects; words are the 'parentheses' of Forth in a sense
2023-07-01 09:51:19 Yeah.
2023-07-01 09:51:48 Like if I want to add something under the stack while doing something else, if I refactor it so that this 'push' happens in a different word it's more visually obvious and easier to follow
2023-07-01 09:51:52 It's just easier for me to look at a short definition and "just tell" that it's going to work right.
2023-07-01 09:52:36 The fancy "proofs of correctness" are really interesting, but ultimately my main way of getting my code right is to just look at it and "tell" that it's right.
2023-07-01 09:52:56 And the longer the definition is, the more likely I am to stumble on that.
2023-07-01 09:52:58 I think testing-as-you-write is important too
2023-07-01 09:53:24 Just running each word one-by-one ... I mean don't bother if it all works, but when it doesn't work that's the first thing I do to troubleshoot
2023-07-01 09:53:46 I think so too, though usually I don't do that "totally." That is, I usually wind up testing a little block of a "few" lines rather than one line at a time.
2023-07-01 09:54:02 If your words are mostly 'pure' it's easier; you can just test everything on the stack simply at the prompt
2023-07-01 09:54:17 These conditional return words encourage me to spread control structures up and down the call stack, so usually I'll have five or six little short definitions that form an "interlocked unit."
2023-07-01 09:54:21 And that's what I test.
2023-07-01 09:54:35 Yeah that is probably more efficient, to test 'overall' words, especially with how factored your code is
2023-07-01 09:54:58 In theory I could test the low-level definitions on their own, but it's just easiest to have the next couple of levels up create the test conditions for the bottom layer.
2023-07-01 09:55:09 Rather than go through the tedium of manually crafting an input stack.
2023-07-01 09:55:56 But... I *could* do that manually if I wanted to.
2023-07-01 09:56:13 Okay, well, one of the big features of the C++ 'algorithms' library is that it relies heavily on 'iterators', which are almost like an abstract I/O or 'pointer' mechanism. An iterator can be created to fit almost any data structure; some iterators are expected to have more abilities than others
2023-07-01 09:56:27 Do you think there's anything like that in your Forth, and should there be?
2023-07-01 09:56:59 It's something I've thought about - whether some kind of I/O abstraction is proper for Forth, given that we don't 'assume' an OS, and stuff like piping and redirecting is obviously helpful in things like UNIX
2023-07-01 09:57:18 It's easy to achieve, just vectorise TYPE and KEY or something
2023-07-01 09:57:26 Not currently, but I do think I will wind up with something like that. This general plan I have for using rope data structures to manage arrays of things calls for the "iterator" concept - an array's iterator would know how to move from one rope chunk to the next transparently.
2023-07-01 09:58:11 For example, I might have a very long string, like if I read an entire file into one string.
2023-07-01 09:58:23 And it would be made up of a bunch of 256-byte "rope items."
2023-07-01 09:58:53 So iterating within one of those is just 1+, but eventually you come to the end of the chunk and have to walk the rope structure a bit to get to the next one.
2023-07-01 09:59:03 That needs to be hidden away under the hood.
2023-07-01 09:59:31 So if I happen to KNOW that a string is short and fits in one chunk, I could use 1+ just like I would with a standard string.
2023-07-01 09:59:46 But if it's possibly longer I'd need to use the iterator.
2023-07-01 10:00:24 My memory management thinking has evolved a bit since I thought up that idea, so we'll see if it stays in the plan.
2023-07-01 10:01:00 Anyway, though, yes, the general capability you're talking about seems like a good idea to me.
2023-07-01 10:04:23 Once I get to the Octave/Matlab type stuff I talk about, I feel like there has to be a way to have "an array" item on the stack that could be of almost any "type." And I certainly don't want to have to write a fleet of type-specific words for every one and manually keep track of which one I need to call.
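An iterator that hides the rope-chunk boundaries, as described above, is easy to sketch as a Python generator. The 256-byte chunk layout is just suggested with short strings here; this models only the "walk transparently from chunk to chunk" behavior:

```python
def rope_chars(chunks):
    """Yield the characters of a 'rope' stored as a list of chunks,
    moving from one chunk to the next transparently - the caller
    never sees a chunk boundary."""
    for chunk in chunks:
        yield from chunk

rope = ["hello, ", "wor", "ld"]        # one logical string, three chunks
text = "".join(rope_chars(rope))
# text == "hello, world"
```

Within a chunk the step is the analogue of 1+; the generator is what "walks the rope structure a bit" at each boundary.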
2023-07-01 10:05:14 More than just iterators, though - if I have two things on the stack that support the notion of "addition," say, then + needs to add them, full stop. No muss, no fuss.
2023-07-01 10:05:58 Be they integers, floats, strings, matrices, whatever.
2023-07-01 10:06:55 matter, anti-matter,
2023-07-01 10:07:07 That's kind of an abomination from the traditional Forth *implementation* perspective, but from a "usage" perspective it's perfectly natural.
2023-07-01 10:07:21 thrig: Certainly puts the 'dispatch' in dynamic dispatch
2023-07-01 10:08:21 there might be a muss afterwards
2023-07-01 10:09:47 Should tell Catholics how dangerous mass is, given Einstein's equation
2023-07-01 10:09:55 My feeling is that I get there somehow by having the type of the items on the stack influence the word search process.
2023-07-01 10:17:26 One of the most powerful features of a programming language is some kind of stateful function. A closure, or just a function pointer + data pointer pair will do. In Forth that would be xt + adr
2023-07-01 10:17:39 There are so many things you can represent or abstract with that
2023-07-01 10:18:03 Yeah.
2023-07-01 10:18:15 And in Forth, C, etc. you can do this with minimal code. No templates necessary. Not necessarily the *fastest* alternative, but the abstractions aren't paid for in a pound of fleshy .text
2023-07-01 10:19:30 And an iterator is one example of this: you have a function that gets the 'next' item or tells you there are none left, and the state/pointer would contain a reference to the 'thing' being iterated over, and the current iteration position
2023-07-01 10:20:17 If you think about it, that's kind of what a Forth program is - a sequence of those things. Your code is a sequence of xt's and the data pointer is the sequence of stack pointer values.
2023-07-01 10:25:43 I still have that slide deck we had here one day, of the fifo-based rather than lifo-based Forth. I'm still "curious" whether there's anything useful there.
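The type-directed '+' described above - one word whose behavior is selected by the types of the top two cells - can be sketched with a dispatch table in Python. The table, its entries, and the `plus` interface are purely illustrative, not Kip's actual type-influenced word-search design:

```python
# Dispatch table: the pair of operand types selects the implementation.
ADD = {
    (int, int): lambda a, b: a + b,
    (str, str): lambda a, b: a + b,                                # concatenate
    (tuple, tuple): lambda a, b: tuple(x + y for x, y in zip(a, b)),  # vectors
}

def plus(stack):
    """One '+' word for every type: pop two cells, look up the
    implementation by their runtime types, push the result."""
    b, a = stack.pop(), stack.pop()
    stack.append(ADD[type(a), type(b)](a, b))

s = [2, 3]
plus(s)              # s == [5]
s = ["foo", "bar"]
plus(s)              # s == ["foobar"]
```

In a real Forth the lookup would happen in the dictionary search rather than at the top of a single definition, but the table captures the idea: same word, behavior chosen by operand type.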
2023-07-01 10:26:13 Someone would have to explain why I'd want that; to me it sounds obviously bad
2023-07-01 10:27:29 The whole point of a stack is you can abstract and do high-level work. A *queue* (fifo) can't do that, surely
2023-07-01 10:28:20 I guess with a queue the data becomes more 'message' than 'object'? But then where do the objects go? You need objects (numbers/buffers/etc) eventually....
2023-07-01 10:32:13 I know - I'm not sure about it at all. Just curious.
2023-07-01 10:32:28 I'm interested in the slides if you have them
2023-07-01 10:32:39 And yeah - it is more like "messaging."
2023-07-01 10:32:49 Which is interesting in its own right, for multi-thread work.
2023-07-01 10:33:31 I was thinking, though, about that "communicating sequential processes" thing. Any way you slice that, it comes down to some threads writing to RAM and other threads reading / monitoring.
2023-07-01 10:33:45 And that seems to have obviously bad cache implications if your threads are on different cores.
2023-07-01 10:34:02 Every time the cell gets written, everyone else has to read it from slow RAM the next time.
2023-07-01 10:34:12 No that's not true
2023-07-01 10:34:23 How so?
2023-07-01 10:34:35 Coherency operates in L2 cache
2023-07-01 10:34:38 When core A writes a location, doesn't it invalidate that cache line in other cores?
2023-07-01 10:34:47 Ok, fair enough.
2023-07-01 10:34:51 That's true.
2023-07-01 10:34:51 And even in L1 I think, but that's where the overhead is more obvious
2023-07-01 10:35:05 There is an overhead but it's not "RAM read" overhead
2023-07-01 10:35:08 But it doesn't remain in the fastest place.
2023-07-01 10:35:16 Unless you have different clusters/CPUs involved
2023-07-01 10:35:18 You're right - I was making it worse than it is.
2023-07-01 10:35:30 But '''*cores*''' share L2
2023-07-01 10:35:45 And it's unavoidable at any rate - if the cores are going to interact at all.
2023-07-01 10:36:01 What I should say rather is that L2 is shared between cores
2023-07-01 10:36:08 Yeah.
2023-07-01 10:36:15 L1 is capable of coherency but with a more measurable performance impact
2023-07-01 10:39:14 I'm still thinking about that stuff - however I do this next system, I want it to be "easy" to have threads interact with one another. Whatever the mechanism is, I want it to be smooth and easy.
2023-07-01 10:39:46 I want using threads as components of my coding to be clean and friendly.
2023-07-01 10:40:57 I'm not sure I like the standard "fork" process, where you just get a clone of your initial thread. Then parent and child have to do something conditional to "identify" themselves and part ways.
2023-07-01 10:41:16 Seems more natural to me to just have a word that starts a new thread and pass it an XT.
2023-07-01 10:41:20 The 'obvious' thing is to provide data structures that are thread-safe, e.g. some kind of thread-safe queue, and constructs like mutexes for when that's not enough
2023-07-01 10:41:26 I think - not sure yet.
2023-07-01 10:41:34 Yes.
2023-07-01 10:41:45 All supported natively.
2023-07-01 10:42:24 This is why I'm inclined toward having VARIABLE be something that's per-thread.
2023-07-01 10:42:34 Allocate space in the thread's own block of RAM.
2023-07-01 10:42:40 And otherwise base what you're doing on the existing Forth software-thread work, because a lot of stuff like USER variables, local dictionaries, terminal threads vs lighter threads is still applicable
2023-07-01 10:42:45 And then have some other defining word for shared memory items.
2023-07-01 10:43:02 Yeah.
2023-07-01 10:43:41 I don't know if I want per-thread dictionaries - if I give a thread a private dictionary I'm likely to call that a "process" instead.
2023-07-01 10:43:56 Yeah that would be a 'terminal thread' as I called it
2023-07-01 10:44:00 A heavier thread
2023-07-01 10:44:08 And if it has a console I might call it a "session."
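The per-thread VARIABLE idea above - each thread's cell living in its own block of RAM, like classic USER variables - corresponds to what most languages call thread-local storage. A Python sketch (the worker and its arguments are invented for illustration):

```python
import threading

# Each thread sees its own private 'var.value', much like a per-thread
# VARIABLE / USER variable; writes in one thread are invisible to others.
var = threading.local()

def worker(n, results):
    var.value = n               # write to this thread's own cell
    results.append(var.value)   # read it back; no other thread interferes

results = []
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# sorted(results) == [0, 1, 2, 3]
```

A separate defining word for deliberately shared cells would then be the explicit opt-in to cross-thread visibility.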
2023-07-01 10:44:23 But you're right, those are three distinct "levels" a thread can operate at.
2023-07-01 10:44:29 That makes sense to me
2023-07-01 10:45:10 But in any case, the THREADS will all be the same kind of thing - they'll just either have or not have these particular resources.
2023-07-01 10:45:18 And a "thread-safe queue" structure on most OS's back in the day was just a 'pipe'
2023-07-01 10:45:40 Again it all comes back to abstracting I/O
2023-07-01 10:45:55 Right. And I see distinguishing between synchronous (where the "pipe" is just one cell) vs. asynchronous, where it's a longer thing with room to buffer stuff.
2023-07-01 10:46:29 But the only difference there is the capacity of the pipe.
2023-07-01 10:46:45 A better model is the ability to flush the pipe, rather than saying the buffer can only be one cell
2023-07-01 10:46:58 Having a larger buffer is useful even if you want the ability to synchronise
2023-07-01 10:47:02 Could be. I need to give it more thought.
2023-07-01 10:47:30 Only having one cell would maximise cross-thread communication costs
2023-07-01 10:47:51 And it would likely be more performance-efficient to not be emptying the pipe at the same time I fill it - rather let the producer thread load it up, and then hand the whole thing off to the consumer thread so that it can "own" that RAM until it's done emptying it.
2023-07-01 10:48:27 In which case it's not quite a "pipe" anymore - more just like passing an array.
2023-07-01 10:48:45 I don't agree with that; let the consumer choose when it's got enough data to be useful
2023-07-01 10:48:49 Anyway, I'm not at all sure of all the details there.
2023-07-01 10:49:00 If not it can always yield or ask for more cells first and be suspended
2023-07-01 10:49:01 Thoughts are very vague at this point.
2023-07-01 10:49:08 It's not an area I'm very experienced in.
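The buffered "pipe" being debated - producer loads it up, consumer drains it when it chooses, with blocking when full or empty - is exactly what a bounded thread-safe queue provides. A Python sketch; the capacity, item count, and `None` sentinel are arbitrary choices for illustration:

```python
import queue
import threading

pipe = queue.Queue(maxsize=8)   # bounded: producer blocks when it's full

def producer():
    for i in range(20):
        pipe.put(i)             # suspends the producer if the consumer lags
    pipe.put(None)              # sentinel marking end of stream

received = []
t = threading.Thread(target=producer)
t.start()
while True:
    item = pipe.get()           # suspends the consumer when the pipe is empty
    if item is None:
        break
    received.append(item)
t.join()
# received == list(range(20))
```

Setting `maxsize=1` gives the fully synchronous one-cell rendezvous; a larger capacity lets the producer run ahead, which is the cost/latency trade-off discussed above.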
2023-07-01 10:50:09 In my opinion you can start with software threads and upgrade to preemptive or parallel work incrementally
2023-07-01 10:50:48 Yeah, that seems reasonable.
2023-07-01 10:51:24 I still don't plan on "interrupting" threads, though - I'm going to put the thread switching in at the docol level.
2023-07-01 10:51:36 With a counter in docol.
2023-07-01 10:51:50 That's still preemptive in a sense
2023-07-01 10:52:02 It is, at the "Forth" level.
2023-07-01 10:52:05 Yeah
2023-07-01 10:52:27 I'd initially planned on putting it in NEXT, but it just "fits" so much more naturally in docol.
2023-07-01 10:52:43 In my particular threading model, at least.
2023-07-01 10:53:49 docol saves the current IP and loads IP up with the called value, and then decrements the counter. If it's zero, then you just jump back up a few lines - load RAX with the "preemption handler XT," reset the counter, and then just fall into docol again.
2023-07-01 10:53:53 It all just works out.
2023-07-01 10:54:10 You save the called IP, and re-route to the preemption handler.
2023-07-01 10:54:23 Then eventually a ; return will send you back to that called IP.
2023-07-01 10:54:43 After you've done a bunch of other stuff.
2023-07-01 10:55:08 Yeah, it allows you to consider each 'word' of primitives a mutex, and reduces the overhead of preemption significantly
2023-07-01 10:55:11 So, basically preemption just "inserts a call".
2023-07-01 10:55:19 Exactly.
2023-07-01 10:55:34 And there's no advantage to having NEXT do the check, because it's still not 'true blue-blooded' preemption; it can still get stuck in CODE
2023-07-01 10:55:46 I figure I'll do some thousands or maybe tens of thousands of docols before preempting, but that's not final yet.
2023-07-01 10:56:13 Gotta find that sweet spot, where it's low-overhead but yet feels "smooth."
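The docol-counter scheme above - decrement on every colon call, and when the counter hits zero "insert a call" to the preemption handler - can be sketched abstractly in Python. QUANTUM, the handler, and the word loop are all invented for illustration; the real mechanism lives in assembly inside docol:

```python
QUANTUM = 3        # colon calls between preemption points (illustrative)
counter = QUANTUM
switch_log = []    # records when the 'handler' ran

def preempt_handler(word):
    switch_log.append(word)    # stand-in for switching to another thread

def docol(word):
    """Models entering one colon definition: count down, and at zero
    reset the counter and route through the preemption handler, which
    returns and lets execution resume - the 'inserted call'."""
    global counter
    counter -= 1
    if counter == 0:
        counter = QUANTUM
        preempt_handler(word)

for word in range(7):          # execute seven colon words
    docol(word)
# switch_log == [2, 5]: the handler ran on the 3rd and 6th colon calls
```

Because the check only happens on colon entry, straight-line CODE words can never be interrupted mid-flight, which is exactly the "each primitive word is a mutex" property noted above.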
2023-07-01 10:56:22 And if you have multiple cores running in parallel you can recover a stuck thread in brutal fashion if required
2023-07-01 10:56:39 As long as the terminal doesn't get stuck
2023-07-01 10:57:02 Yeah, a lot of "operating systems 101" kind of crawls in the door here.
2023-07-01 10:59:28 I definitely think it's best to focus first on designing achievable things that you understand, rather than worrying too much about how parallel will work. Most computers didn't even have parallel computing for the longest time anyway, and they were smooth and responsive
2023-07-01 10:59:56 Parallel is never necessary for smooth computing; it's only needed when you have multiple compute resources and need to squeeze out the most juice for a computation
2023-07-01 11:00:26 Your I/O is already inherently 'parallel' and the main thread can do all the required scheduling and updates to keep it all ticking over
2023-07-01 11:01:00 I agree. Get things working well with just one core first. I mean, I want to at least give some thought to how multiple cores will come in, but I agree it needs to work perfectly on just one, and that's the right starting point.
2023-07-01 11:01:57 It really isn't useful for anything other than simulation and heavy computational work; it's especially useless to a Forth system
2023-07-01 11:02:14 It's no wonder that most 'scripting' languages don't support parallelism at all
2023-07-01 11:02:50 Not least of all JavaScript (if it supports parallelism today please don't hit me; for the longest time it didn't, and I assume it's still not used today by most websites)
2023-07-01 11:03:14 which runs everything
2023-07-01 11:04:00 But at the same time, we do consider parallelism a necessary feature for a modern OS, and we consider Forth to be an OS, so as a matter of pride we want to consider it
2023-07-01 11:07:33 It's funny, that's a big difference between C and Forth. Because C is "compile and run" it's necessarily *not* an OS. It can only ever 'implement' an OS
2023-07-01 11:07:54 Forth is compile and interpret and whatever, so you can write everything in Forth and never leave it
2023-07-01 11:08:09 It can be your language, IDE, editor, OS, application
2023-07-01 11:39:26 I don't know - I could certainly see putting multiple cores to work doing some heavy numerical processing involving a matrix. Though really that should use the GPU.
2023-07-01 11:39:38 But multiple cores could do it faster than one.
2023-07-01 11:41:34 I do agree though that there are many very common tasks that don't really "promote" parallelism.
2023-07-01 11:42:32 And I certainly might want to have two sessions "doing stuff" at the same time - wouldn't want the one I'm looking at to be the only one running.
2023-07-01 11:42:49 But in that case it could still be one core, just time-shared.
2023-07-01 11:43:05 But if they were doing heavy tasks, then that would be faster on separate cores too.
2023-07-01 12:14:11 A nice thing about giving each process its own RAM region for its private vocabulary is that more than one of them can compile at the same time.
2023-07-01 12:14:26 And also when I'm done I can just throw it away.
2023-07-01 14:09:42 I admit that I am unlikely to ever compile enough code in one go to actually notice it getting sped up by multiple cores.
2023-07-01 14:13:07 gamedevs meanwhile report that the Unreal engine might take 45 minutes to compile on a gonzo compile rig
2023-07-01 14:17:47 oddly enough, they also tend to use lua for in-game scripting
2023-07-01 14:23:51 I am amusing myself by thinking about how to implement multiplication in boolean logic and I might have noticed something nifty
2023-07-01 14:25:13 there are two input numbers, multiplicand and times
2023-07-01 14:27:09 for every set bit in times one can left-shift the multiplicand by the bit_index of that bit
2023-07-01 14:27:40 minus one that is
2023-07-01 14:28:13 no, not if one is using zero-indexing where the lsb is at bit_index 0
2023-07-01 14:28:53 then one just adds all these shifted multiplicands together
2023-07-01 14:30:45 as the bit_index is a constant per bit of times, the left bitshift is just literal shifting of "wires"
2023-07-01 14:32:13 and selecting between the shifted multiplicand or zero can be done via a 2-input muxer
2023-07-01 14:35:22 so the overall latency or tau of this depends mostly on the adders
2023-07-01 15:03:04 Um, yeah, that's essentially what multiplication is.
2023-07-01 15:03:26 We do the same thing when we multiply decimal numbers longhand - each line of the partial products gets shifted over an additional digit.
2023-07-01 15:03:55 Maybe I'm missing something, but it seemed like you just described the standard process.
2023-07-01 15:04:21 Multiplication really is just a bunch of adds.
2023-07-01 15:04:26 Properly shifted.
2023-07-01 15:05:17 still, 2 + 2 really is 7
2023-07-01 15:06:01 You can do it sequentially with a standard two-input adder and a shift register too. Just takes you N cycles for N-bit numbers.
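The scheme described above - for every set bit of `times`, add in the multiplicand left-shifted by that bit's index - is ordinary shift-and-add multiplication. A Python sketch of the same dataflow the circuit would hardwire:

```python
def shift_add_mul(multiplicand, times):
    """Sum the multiplicand shifted left by the index of every set bit
    in times. In the circuit each shift is free (just wiring), and a
    2-input mux selects the shifted value or zero per bit; the adder
    chain dominates the latency."""
    total = 0
    for bit_index in range(times.bit_length()):
        if (times >> bit_index) & 1:            # the mux select line
            total += multiplicand << bit_index  # hardwired shift + adder
    return total

# shift_add_mul(13, 11) == 143
```

This also shows the zero-indexing point: the lsb sits at bit_index 0, so no "minus one" correction is needed.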
2023-07-01 16:14:13 KipIngram: indeed, but what I described is the boolean logic circuit and not a sequential program
2023-07-01 16:15:40 with sixteen bits we need sixteen adders, muxers, and hardwired left shifts
2023-07-01 16:20:41 which usually takes less time than performing ( m t -- t*m ) DUP 0= IF 2DROP 0 EXIT THEN 0 SWAP 0 DO OVER + LOOP NIP ; \ I think
2023-07-01 16:28:56 Yes - I got that.
2023-07-01 16:29:18 it would certainly be faster than doing it sequentially.
2023-07-01 16:30:00 Bet you could configure a Bitgrid to do it. :-)
2023-07-01 16:30:29 You might have to use taxicab geometry for those shifts, though.
2023-07-01 16:30:46 taxicab?
2023-07-01 16:30:58 90-degree turns only - no diagonals.
2023-07-01 16:31:05 right
2023-07-01 16:32:51 I suppose in theory you could build a Bitgrid that was connected to the diagonal neighbors too. 8-in, 8-out instead of 4/4.
2023-07-01 16:33:30 Moore neighbourhood instead of just von Neumann
2023-07-01 16:33:35 You could also build a 3D Bitgrid, couldn't you? Don't really know how you'd fab that with current tech, but the geometry would exist.
2023-07-01 16:34:15 how much do you know about integrated circuit fabrication?
2023-07-01 16:34:47 Only the basics. A little about how transistors get laid out on a plane of material.
2023-07-01 16:35:00 But not really anything about that "vertical" dimension.
2023-07-01 16:35:21 basically an IC is made in layers
2023-07-01 16:35:25 I know that one plane is a set of a few layers - is it 5? I don't remember exactly.
2023-07-01 16:35:49 And by putting those layers down in the right places, we create transistor geometry.
2023-07-01 16:36:09 the earliest ones were just 5 I think
2023-07-01 16:36:19 So I suppose there's no particular reason you couldn't just keep adding layers. In theory.
2023-07-01 16:36:58 That could raise the probability of a defect in any given device, same as making its area larger does.
2023-07-01 16:37:24 I've been mulling over how to use DNA brick origami, oligonucleotides, and peptide nucleic acids
2023-07-01 16:37:29 More content per device, less yield overall.
2023-07-01 16:37:36 to pattern ICs
2023-07-01 16:39:15 Silicon crystal defects are more likely over a bigger area, and the layer processing could also be likely to add defects the more layers are put on
2023-07-01 16:41:15 Right.
2023-07-01 16:42:07 I've got a book on all that that I've perused a couple of times over the years. It's fairly old, though, so it doesn't have the latest and greatest. And "perused" was the right word - I didn't "study" it the way I would have if I'd been taking a course.
2023-07-01 16:42:26 Just wanted some familiarity with the basic ideas.
2023-07-01 16:43:01 I had a course in school on semiconductor physics that covered the really low-level stuff, and I'm good with logic design. Just wanted to fill in the gap in between a bit.
2023-07-01 16:43:16 the issue with "studying" for a course is that you might not integrate that knowledge quite fully
2023-07-01 16:44:01 Yeah, that's true, and some people learn those little "silos" and never knit them together into a big picture.
2023-07-01 16:44:03 post-exam garbage collection time!
2023-07-01 16:44:10 :-)
2023-07-01 16:44:26 I think I did that to some extent as an undergraduate, but graduate school was good for me.
2023-07-01 16:44:44 this is why I like brilliant.org so much
2023-07-01 16:44:54 Or maybe I just accumulated enough pieces that they finally started "meeting up."
2023-07-01 16:45:23 it promotes deeper understanding and not just memorization
2023-07-01 16:45:56 Good - that's really the best way to start. Then you appreciate the deeper stuff more when you get around to learning it.
2023-07-01 16:46:20 knew a guy who had a near-photographic memory. He floundered if he ever got a problem that needed such understanding
2023-07-01 16:46:57 Borges had a story about someone who remembered everything (it was not a good thing)
2023-07-01 16:48:27 "Funes the Memorious" in English
2023-07-01 16:52:15 been watching discussions regarding these weak AI things that have come out over the last year or so
2023-07-01 16:52:33 Yeah, that seems almost like a detriment. My main way of retaining knowledge is to fit it into a big picture. Then it's easy because I have a "structure" to house it all in.
2023-07-01 16:52:46 Otherwise it WOULD just be sheer memorization, and I actually kind of hate that.
2023-07-01 16:52:58 and a kitchen drawer full of vi keybindings
2023-07-01 16:53:38 invariably someone brings up stories about golems and their dangers
2023-07-01 16:53:54 For a while I was on a chess jag, many years ago. Loved the "principles," seeing how they got put into practice, and so on. I got better. Then I came to the point where in order to take the "next step" I needed to just learn all the opening patterns so I didn't screw the pooch early on in a way the other guy knew exactly how to exploit, because he'd memorized how to exploit that exact mistake.
2023-07-01 16:53:59 I just... stopped.
2023-07-01 16:54:03 All the fun went out of it.
2023-07-01 16:54:21 try chess960
2023-07-01 16:55:18 Oh, hmmm. That would solve that problem, wouldn't it? :-)
2023-07-01 16:55:28 what I find fascinating about these stories is what conclusions peeps draw based on some unstated assumptions
2023-07-01 16:57:36 oh, chess with some starting randomness. Neat! I think it works against how mechanical chess feels.
2023-07-01 16:58:19 there is also shogi, which avoids the dwindling endgame of chess (you can drop pieces you capture back in against the opponent)
2023-07-01 16:58:23 Yeah, that would scramble all of those "opening sequences."
2023-07-01 16:58:42 golems in these stories are quite literal. Pretty much robots & computers in a way.
2023-07-01 17:00:19 one of the assumptions the folks, and the characters that construct the golem, make is that they know what they mean
2023-07-01 17:01:16 they being the folks and characters and not the golem - but it is also an assumption that the golem knows what is meant in its instructions
2023-07-01 17:01:29 Zarutian_iPad: I think we're doing some neat stuff with AI, but the idea that we're on our way to the "singularity" is madness in my opinion. What we're doing is a lot more crude and hackish than the people doing it think.
2023-07-01 17:01:59 The "creative spark" just isn't something that can be rendered into an algorithm.
2023-07-01 17:02:05 meanwhile in The Cyberiad the golem ends up chasing and trying to murder its creator
2023-07-01 17:02:41 a large language model is like three or four cortical columns in the speech understanding and generation region of human brains
2023-07-01 17:03:51 iirc there are at least thousands of such columns in these regions
2023-07-01 17:04:51 KipIngram: "creative spark"? I think that is a ?super? emergent phenomenon
2023-07-01 17:06:22 heck, if one watches GDC and such vids on procedurally generated content in games then one sees how bloody hard it is to get something that isn't utter garbage
2023-07-01 17:08:57 re golems and robotics in particular: I detest open-loop control. Which was preferred because sensors and compute were so "expensive"
2023-07-01 17:09:12 generally there's a sweet spot somewhere between too much order and too much randomness
2023-07-01 17:12:34 for instance in pcg, using wave function collapse to make a map/layout of a dungeon can result in too much randomness and not enough overall structure at larger scales
2023-07-01 17:12:45 I'm mostly an idealist these days. I think consciousness is "fundamental." The thing is, though, my sense is that the question is never provable one way or the other - I think you can claim it either way and the logic will hold together, and none of it is verifiable.
2023-07-01 17:13:15 hence I have an idea of combining Rooms and Mazes with wave function collapse
2023-07-01 17:13:22 A main take-away of mine from my... well, decades now, of studying physics is that I see no way consciousness CAN emerge from physics.
2023-07-01 17:13:42 Physical behavior that seems conscious can, of course; I'm talking mostly about self-awareness.
2023-07-01 17:14:19 All the physics we invoke when we talk about how our brains and bodies work - none of it NEEDS self-awareness.
2023-07-01 17:14:51 KipIngram: me? I just want to make a robot that could keep a small child safe for a day or so.
2023-07-01 17:15:03 So why the hell are we self-aware?
2023-07-01 17:15:14 velcro suit and a velcro wall would be cheaper
2023-07-01 17:15:15 And you very well might be able to.
2023-07-01 17:15:31 I have no qualms about the notion that our ability to MIMIC consciousness will continue to improve.
2023-07-01 17:15:41 That's totally "doable."
2023-07-01 17:16:18 KipIngram: here is an exercise for you: think of a screw, say a 5mm machine screw, zinc-plated, with a Phillips 1 head
2023-07-01 17:16:19 There's no telling how good we'll get, and you just know that sex bots will lead the way.
2023-07-01 16:16:39 Lotta profit there for whoever gets good first.
2023-07-01 17:17:01 probably
2023-07-01 17:17:57 re that screw, let's say it is part of a machine
2023-07-01 17:18:34 now ask the Toyota seven whys regarding that screw.
2023-07-01 17:18:41 Anyway, ultimately everything that happens in a robot or an AI is just math. I just don't see any way actual sensations and feelings can come from that.
2023-07-01 17:18:47 But modeling them is fine.
2023-07-01 17:19:09 "I"-ness - that's not math.
2023-07-01 17:20:16 Check out Bernardo Katrup's ideas. I think he may be at least partly onto something.
2023-07-01 17:20:26 got that?
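The "Rooms and Mazes plus wave function collapse" idea mentioned above can be sketched. Here is a minimal, illustrative Python take on just the Rooms-and-Mazes half (after Bob Nystrom's dungeon algorithm): scatter non-overlapping rooms first for large-scale structure, then fill the leftover space with maze corridors. All names and parameters are made up for illustration, and the step that connects rooms to corridors (and the WFC layer) is omitted.

```python
import random

W, H = 21, 21                         # odd dimensions for odd-cell alignment
grid = [['#'] * W for _ in range(H)]  # start as solid wall
rng = random.Random(1)                # fixed seed: deterministic layout

# 1. Rooms: odd-sized, odd-aligned rectangles; an attempt that would
#    overlap an existing room (or touch it) is simply skipped.
for _ in range(30):
    w, h = rng.randrange(3, 8, 2), rng.randrange(3, 8, 2)
    x, y = rng.randrange(1, W - w, 2), rng.randrange(1, H - h, 2)
    if all(grid[j][i] == '#' for j in range(y - 1, y + h + 1)
                             for i in range(x - 1, x + w + 1)):
        for j in range(y, y + h):
            for i in range(x, x + w):
                grid[j][i] = '.'

# 2. Mazes: recursive-backtracker corridors grown through every odd cell
#    the rooms didn't claim, giving fine-grained structure around them.
def carve(x, y):
    grid[y][x] = '.'
    dirs = [(2, 0), (-2, 0), (0, 2), (0, -2)]
    rng.shuffle(dirs)
    for dx, dy in dirs:
        nx, ny = x + dx, y + dy
        if 0 < nx < W and 0 < ny < H and grid[ny][nx] == '#':
            grid[(y + ny) // 2][(x + nx) // 2] = '.'  # open the wall between
            carve(nx, ny)

for y in range(1, H, 2):
    for x in range(1, W, 2):
        if grid[y][x] == '#':
            carve(x, y)

print('\n'.join(''.join(row) for row in grid))
```

The rooms give the "overall structure at larger scales" that raw WFC lacks; one could imagine swapping the maze phase for a WFC pass over the remaining cells, which seems to be the combination being proposed.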
now ask the same but regarding consciousness
2023-07-01 17:20:28 Kastrup
2023-07-01 17:21:08 Well, I guess my whole point is that I don't think there IS a physical explanation for self-awareness.
2023-07-01 17:22:22 maybe it emerges, like a man in a rubber suit intent on destroying a model Tokyo
2023-07-01 17:22:26 Now, if someone publishes an actual explanation that makes sense in the future, I may have to reconsider.
2023-07-01 17:22:39 I wouldn't say math, as that is too abstract and mostly what mathematicians wank on about, but information processing
2023-07-01 17:22:46 But the argument that "It has to be physical ultimately because physical is all there is" just does not fly for me.
2023-07-01 17:22:51 That's just hand-waving and blind faith.
2023-07-01 17:22:58 No different at all from religious faith.
2023-07-01 17:23:18 It's incredibly arrogant of us to be that sure we know everything that can be.
2023-07-01 17:24:02 why would animals do information processing? because it is an evolutionary advantage
2023-07-01 17:24:17 humans are often quite lazy thinkers
2023-07-01 17:24:30 wrong skin color? heave a brick at them!
2023-07-01 17:24:40 no, I do not think we know everything there is to be known
2023-07-01 17:25:27 regarding self-awareness, there are levels to it
2023-07-01 17:26:39 I hold forth that animals such as wolves certainly have quite a lot of self-awareness due to their pack-hunting ability
2023-07-01 17:27:40 or social structure
2023-07-01 17:27:48 and I hold forth that self-awareness is a fucking smooth and long gradient
2023-07-01 17:27:56 Oh yeah - I think animals are definitely self-aware.
2023-07-01 17:28:10 It's very hard to decide how far down the tree of life that goes, actually.
2023-07-01 17:28:27 Some people (scientists - serious ones) argue that plants are conscious.
2023-07-01 17:28:36 thrig: exactly
2023-07-01 17:28:39 They don't think a brain and nervous system is actually required.
2023-07-01 17:28:54 I don't know enough about what they argue to have an opinion.
2023-07-01 17:29:55 KipIngram: perhaps, but at what time base? Plants live at a lower time rate than us.
2023-07-01 17:30:10 We have cats and dogs in the house, though, and you could never get me to believe they are plenty aware.
2023-07-01 17:30:13 aren't
2023-07-01 17:30:34 God, I hate it when I make typos like that, that just completely reverse my meaning.
2023-07-01 17:30:42 How many of them do I fail to notice and correct?
2023-07-01 17:30:58 but I think most plants are only a few flights of stairs above, say, a bimetallic thermostat
2023-07-01 17:31:12 Yeah, that's a good point and I'd have no idea.
2023-07-01 17:31:36 I do not know. This is why I never liked contractions
2023-07-01 17:32:29 https://www.theguardian.com/science/2022/apr/06/fungi-electrical-impulses-human-language-study
2023-07-01 17:33:01 so consciousness and self-awareness are quite intertwined, if I have read the philosophical and cognitive science stuff correctly
2023-07-01 17:34:38 but they are not the same thing
2023-07-01 17:35:26 for instance you can be hyperfocused and conscious when doing something but not be very self-aware
2023-07-01 17:35:57 often called flow state, or being absorbed in something
2023-07-01 17:37:32 the other direction is like being aware of your tongue in your mouth, that a chore is awaiting, and that you are hopeful about the future
2023-07-01 17:39:20 Yeah, that's why I tend to zero in on self-awareness. I think it better "captures" the mysterious part. Consciousness is more subject to semantics, I think. But I have a sense of what I'm referring to with self-awareness.
2023-07-01 17:39:31 Descartes - "I think therefore I am" - kind of stuff.
2023-07-01 17:39:39 Just "knowing that we are here."
2023-07-01 17:39:54 The light's on.
2023-07-01 17:41:51 I hold forth that that kind of thing, Descartes-style thinking/cognition, is at the opposite end of the aforesaid gradient from the thermostat
2023-07-01 17:42:03 I'm not sure science is even able to talk about this - science is all about connecting up and explaining cause/effect relationships, and I think there's something "original" in our conscious minds. Something "un-caused."
2023-07-01 17:42:28 And that may just put it outside science's sandbox. But like I said, if someone comes forth with something brilliant, I'm willing to listen.
2023-07-01 17:42:56 KipIngram: what the Ghost in the Shell manga/anime calls ghostline?
2023-07-01 17:43:50 modern science has something of a materialistic and reductive bent
2023-07-01 17:44:55 probably, but getting an actually useful Clippy/Cortana is worth this weak AI stuff.
2023-07-01 17:49:05 I'm not familiar with that franchise.
2023-07-01 17:49:19 I've said before the observer is more real than the material; just look at how wave function collapse works
2023-07-01 17:49:32 that Microsoft helper
2023-07-01 17:49:50 I don't think materialism is the Occam's razor choice for quantum physics
2023-07-01 17:50:13 I think it's dangerous to try to draw big philosophical conclusions from basic quantum mechanics.
2023-07-01 17:50:22 https://knowyourmeme.com/memes/clippy
2023-07-01 17:50:41 It's just a non-relativistic approximation anyway, and the whole scheme doesn't "work" unless you have a quantum/classical boundary somewhere in your analysis.
2023-07-01 17:50:50 oh, you mean Ghost in the Shell? I highly recommend it
2023-07-01 17:50:57 "Collapse" is just the movement of information across that boundary, which is an artificial concept anyway.
2023-07-01 17:51:32 I'll check it out.
2023-07-01 17:51:43 Ghost in the Shell is good
2023-07-01 17:51:47 Collapse discards information from your analysis.
2023-07-01 17:52:01 But actually all quantum processes are reversible and therefore don't lose any information, ever.
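The "collapse as discarded information" picture can be made concrete with a two-qubit toy model, sketched here in plain Python (an illustration under the standard textbook treatment, not anyone's quoted code): a reversible interaction entangles a system qubit with an environment qubit, and tracing out (ignoring) the environment erases the system's off-diagonal coherence, which is exactly the "collapse".

```python
from math import sqrt

# System qubit in the superposition (|0> + |1>)/sqrt(2); one environment
# qubit starting in |0>.  Basis order: |00>, |01>, |10>, |11>,
# written |system, environment>; amplitudes are real here.
state = [1 / sqrt(2), 0.0, 1 / sqrt(2), 0.0]

# The "measurement" interaction: |s, e> -> |s, e xor s> (a CNOT).
# It is unitary, i.e. fully reversible - nothing is lost at the
# level of the joint system-plus-environment.
def interact(psi):
    return [psi[0], psi[1], psi[3], psi[2]]

state = interact(state)      # now (|00> + |11>)/sqrt(2): entangled

# Joint density matrix rho[a][b] = psi_a * psi_b  (still a pure state).
rho = [[state[a] * state[b] for b in range(4)] for a in range(4)]

# "Collapse": ignore the environment by tracing over its degrees of
# freedom:  rho_sys[i][j] = sum over e of rho[(i,e)][(j,e)].
def flat(s, e):
    return 2 * s + e

rho_sys = [[sum(rho[flat(i, e)][flat(j, e)] for e in range(2))
            for j in range(2)] for i in range(2)]

print([[round(v, 3) for v in row] for row in rho_sys])
# -> [[0.5, 0.0], [0.0, 0.5]]: the off-diagonal coherence of the
# superposition is gone, though the joint evolution discarded nothing.
```

The information is still present in the joint state (it remains pure); it only looks lost because the analysis threw away the environment's degrees of freedom, which is the point being made above.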
2023-07-01 17:52:27 What happens is that some of the information that was in the quantum state passes over that boundary, and disperses into degrees of freedom of the environment that you are IGNORING.
2023-07-01 17:52:36 So in your analytical model you just throw that information away.
2023-07-01 17:52:40 That's what "collapse" is.
2023-07-01 17:52:58 But all that information is actually still out there - it's just impossible for you to gather it all up in any useful way again.
2023-07-01 17:53:59 My point is more about the interesting role of observation in that physics; I just think it cuts against the grain of materialism
2023-07-01 17:54:44 And when you make a measurement, what you're actually doing is bringing about an interaction between the tiny quantum thing you're studying (which might be an electron or something) and your huge measurement instrument - it shouldn't surprise anyone that the instrument wins that contest.
2023-07-01 17:55:11 If it's something that you can read a result off of with your eyes, then it absolutely dwarfs the quantum system you're studying with it.
2023-07-01 20:45:43 Sorry, I am extremely stupid. Why does " a " C@ " 2 " C! compile just fine but " a " C@ " 22 " C! not?
2023-07-01 20:47:09 do not know. Which Forth is it you are using?
2023-07-01 20:47:40 I am using bigforth, I'll paste the exact error message, give me a minute or two
2023-07-01 20:49:27 https://paste2.org/hHvdLbj8
2023-07-01 20:50:29 this tells me very little
2023-07-01 20:51:08 other than you are running this on an x86 32-bit system
2023-07-01 20:51:33 what does the documentation for C@ and C! say?
2023-07-01 20:53:26 It works in a separate out-of-project test, so it must be something else that's unrelated. I can't share source, so I'll have to figure it out on my own. Thanks for your time
2023-07-01 20:54:01 you wrote this code?
2023-07-01 20:54:49 because in most Forths C@ has the effect of fetching a char from a particular given address
2023-07-01 20:55:30 its stack diagram is ( addr -- char )
2023-07-01 20:57:29 a " string" most often gets compiled as a sequence of bytes preceded by the xt of the word (") and the length of the string
2023-07-01 20:58:08 then during execution the (") word puts ( addr length ) onto the stack
2023-07-01 21:00:54 where addr points to the first byte of the string and length is the string's length
2023-07-01 21:06:07 Thanks for the info. I'm going to lie down and get some sleep; I'll have another look at it tomorrow. It's most likely something very obvious
2023-07-01 21:06:45 it usually is
2023-07-01 21:47:39 Wouldn't his " a " most likely leave the address of a string on the stack?
2023-07-01 21:48:16 So C@ is just fetching from that address. Then he's storing whatever that fetches to the address of the second string?
2023-07-01 21:48:28 I'm just guessing about "
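The guesses above about what " a " C@ " 2 " C! would do can be made concrete. Below is a toy Python model assuming, as in the last message, that " ... " leaves just the string's address on the stack; the names and memory layout are invented for illustration and are not bigforth's actual internals.

```python
# Hypothetical model of a Forth data stack and byte-addressed memory.
memory = bytearray(32)
stack = []
here = 0                       # next free byte of string space

def quote(s):
    """Model of " text " : lay the bytes down, push the start address."""
    global here
    addr = here
    memory[addr:addr + len(s)] = s.encode()
    here += len(s)
    stack.append(addr)

def c_fetch():                 # C@ ( addr -- char )
    stack.append(memory[stack.pop()])

def c_store():                 # C! ( char addr -- )
    addr = stack.pop()
    memory[addr] = stack.pop()

# " a " C@ " 2 " C!
quote("a"); c_fetch()          # stack: [97], the char code of 'a'
quote("2"); c_store()          # store 97 over the '2'
print(chr(memory[1]))          # -> a  (second string's first byte overwritten)
```

Under the ( addr length ) convention described at 20:58, C@ would instead pop the length and fetch from that address, so which convention bigforth's " actually uses decides whether the original snippet even means what its author intended.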