2023-09-25 04:56:57 this shows how slow my lang is
2023-09-25 04:57:00 https://pastebin.com/raw/kWZDdhqb
2023-09-25 05:06:19 wasn't the time much higher with the other implementations?
2023-09-25 05:06:55 i don't remember, but it takes 7 seconds to sum the numbers from 0 to one million
2023-09-25 05:07:09 1 if cheating
2023-09-25 05:10:06 ooh
2023-09-25 05:25:29 dave0: why oh xd it's extremely slow
2023-09-25 05:26:02 but it's not like i care about performance now
2023-09-25 05:26:46 i care about being able to write programs with it, which is a hard task
2023-09-25 05:30:46 my woefully unfinished forth takes 0.16 seconds
2023-09-25 05:31:17 : sum 0 0 begin dup 1000000 < while dup >r + r> 1+ repeat drop . ;
2023-09-25 05:32:15 vms14: mine's a lot more code than yours
2023-09-25 05:32:34 oh and i'm off-by-one
2023-09-25 05:32:55 yeah, but look at the speed difference xd
2023-09-25 05:33:07 0.16 vs 7
2023-09-25 05:33:27 KipIngram forth does not even tick
2023-09-25 05:33:31 xd
2023-09-25 05:34:28 i will have to compile somehow at the end
2023-09-25 05:34:50 for now i have to keep defining the language and refining some aspects
2023-09-25 05:35:02 threaded code is interesting
2023-09-25 05:35:10 i want tco
2023-09-25 05:35:32 Chuck's colorforth has a word -; for tail calls
2023-09-25 05:35:36 but i want it to be natural in a way like : foo some code foo ;
2023-09-25 05:35:42 It turns the previous call into a jmp
2023-09-25 05:36:20 with a return stack it's done
2023-09-25 05:36:33 if not, i have to make workarounds, which are not nice
2023-09-25 05:36:40 my BRANCH could do it
2023-09-25 05:36:50 it takes an absolute address
2023-09-25 05:37:45 you could use an immediate word to lay down a BRANCH and then tick ' the word... maybe with >body (which i haven't written)
2023-09-25 05:37:54 also i want to define the lang in itself as much as possible
2023-09-25 05:38:23 in a way i only have to write a tiny core to port it
2023-09-25 05:38:50 and the rest is defined in the lang itself, which should work as long as the core does
2023-09-25 05:39:24 Are you generating native code?
2023-09-25 05:39:34 not yet
2023-09-25 05:39:51 i don't have any kind of compilation
2023-09-25 05:40:19 concatenating machine code sequences is easy for Forth
2023-09-25 05:40:20 : tail-call postpone branch recurse ; immediate this would nearly do it
2023-09-25 05:40:29 i had a fake one which was just to return closures with decisions and some values taken at "compile time"
2023-09-25 05:40:53 it halves the time it takes for that benchmark
2023-09-25 05:41:07 cause it avoids the work of the interpreter
2023-09-25 05:41:13 So you're not using threaded code at the moment?
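For reference, a minimal sketch of the benchmark above in standard Forth, with the off-by-one fixed by using a counted loop; the name sum-to-million is made up here, and the printed result assumes 64-bit cells:

    \ sum the integers 0 through 1000000 inclusive with a counted loop
    : sum-to-million ( -- n )  0  1000001 0 ?do  i +  loop ;
    sum-to-million .   \ prints 500000500000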
2023-09-25 05:41:31 no, colon words are just a list of words/objects
2023-09-25 05:42:02 and my goal is to generate code instead of threading
2023-09-25 05:42:13 GeDaMo: the language is written in perl
2023-09-25 05:42:22 and has more to do with lisp than with forth
2023-09-25 05:42:32 Ah, right
2023-09-25 05:43:00 but my goal is to have some sort of half interpreter half transpiler
2023-09-25 05:43:20 if for example i can generate perl code, the repl is a transpiler generating perl code + eval
2023-09-25 05:43:42 "Linear Logic and Permutation Stacks--The Forth Shall Be First" https://web.archive.org/web/20200112152842/http://home.pipeline.com/~hbaker1/ForthStack.html
2023-09-25 05:43:53 i can generate functions from code, so i have words as a code string and as a function
2023-09-25 05:44:16 when compiling it will use the string, unless it's immediate, then the function
2023-09-25 05:44:37 but some words need to be immediate and i have to learn about the whole thing
2023-09-25 05:45:24 also i wonder about adding a stack to the target lang or not
2023-09-25 05:55:18 i wonder if i'll ever compile in any of its meanings
2023-09-25 05:55:35 i want to automate c ffi in a similar way gforth does
2023-09-25 05:56:00 https://www.rosettacode.org/wiki/Draw_a_pixel#Forth
2023-09-25 07:53:10 dave0, vms14: You'll probably find that to get tail calls really right you will need to have a state variable that you maintain for ; to consult.
2023-09-25 07:53:21 I've tried to do it without that a few times and can never quite get it to work.
2023-09-25 07:53:50 It's hard to do by just inspecting recently compiled code.
2023-09-25 07:54:01 There's always the possibility of that being a literal or string data.
2023-09-25 07:54:45 Having an IF ... THEN at the end of a definition can also trip it up.
2023-09-25 07:55:02 : foo ... ;
2023-09-25 07:55:16 : bar ... ... foo ; <- tail optimizable.
2023-09-25 07:55:31 : bar ... IF ... foo THEN ; <- not tail optimizable.
2023-09-25 07:55:54 But the last of the code before ; looks the same in both cases.
2023-09-25 07:56:26 But if you set a state variable each time you compile a foo-like word, and have words like THEN clear it...
2023-09-25 07:56:37 Then when you handle ; you can just check that to know what to do.
2023-09-25 07:57:18 KipIngram: so you try to find the tail call as a word in the code?
2023-09-25 07:57:33 it's impossible for me
2023-09-25 07:57:41 My ; is immediate, and its normal operation is to compile (;)
2023-09-25 07:57:52 this is mainly why forths have a word named recurse instead i suppose
2023-09-25 07:58:08 So I just have ; check that state var - if it's set, I know I just compiled something I can tail optimize, so I do.
2023-09-25 07:58:31 Tail call is more general than recursion.
2023-09-25 07:58:42 Recursion is just when it happens to be the word you are defining.
2023-09-25 07:59:01 and the reason some forths have recurse is because they may not make words visible until the definition is FINISHED.
2023-09-25 07:59:07 : foo ... foo ;
2023-09-25 07:59:11 right
2023-09-25 07:59:24 they don't exist yet
2023-09-25 07:59:26 ^ foo has to be "findable" for that to work.
2023-09-25 07:59:29 they can.
2023-09-25 07:59:39 They do in my system - headers are available as soon as they exist.
2023-09-25 08:00:14 mine just does not give a fuck about the contents xd
2023-09-25 08:00:22 : oh my cat is nice ;
2023-09-25 08:00:25 But nonetheless I have a word called ME that does a recursion.
2023-09-25 08:00:31 And I have conditional forms of it too.
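As a toy illustration of what is at stake here (standard Forth, not KipIngram's state-variable mechanism): a self call in tail position written with RECURSE still consumes a return-stack cell per level in a system without tail-call optimization, while the loop form does not. Note that the recursive version is exactly the IF ... word THEN ; shape mentioned above that a naive check of the code just before ; cannot recognize.

    \ plain recursion: each level pushes a return address, so a large enough n
    \ will typically overflow the return stack without tail-call optimization
    : countdown-rec  ( n -- )  ?dup if 1- recurse then ;
    \ the loop form runs in constant return-stack space - the effect a
    \ tail-call-optimizing ; (or colorforth's -;) is meant to give you
    : countdown-loop ( n -- )  begin ?dup while 1- repeat ;
    1000000 countdown-loop   \ fine; 1000000 countdown-rec will usually abort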
2023-09-25 08:00:51 yeah, you're not actually compiling anything.
2023-09-25 08:01:35 So : foo ... foo ; would work fine in my system.
2023-09-25 08:01:43 But : foo ... me ; does too.
2023-09-25 08:02:12 me is immediate; it checks the dictionary for the latest name and compiles a jump to it.
2023-09-25 08:02:41 I thought about calling it SELF but ME is shorter.
2023-09-25 08:03:55 Anyway, tail optimization turns out to be a thing that is more delicate than you might initially think.
2023-09-25 08:04:20 i need a stack
2023-09-25 08:06:09 You do - you hang out in a Forth channel. :-)
2023-09-25 08:07:03 without it the only way i see is to turn the code into a loop
2023-09-25 08:07:28 but it is explicit and i dislike it
2023-09-25 08:07:34 For recursion? That's the whole point of tail optimization - to avoid consuming your return stack resources unnecessarily.
2023-09-25 08:08:23 yes but you have to mark a word as recursive to provide the tco
2023-09-25 08:08:45 and it's no longer a colon word, but some sort of builtin
2023-09-25 08:08:53 A compiler can keep up with it without your explicit assistance.
2023-09-25 08:09:03 a compiler xd
2023-09-25 08:09:29 and a return stack
2023-09-25 08:21:00 dave0:
2023-09-25 08:21:02 : usec uet 1000000 * + ; ok
2023-09-25 08:21:04 : iter swap over + swap 1- .0>me ; ok
2023-09-25 08:21:06 : measure usec 0 1000000 iter drop . usec swap - . ; ok
2023-09-25 08:21:08 measure 500000500000 17433 ok
2023-09-25 08:21:12 17.4 msec.
2023-09-25 08:22:02 KipIngram: nice!
2023-09-25 08:22:21 And that includes some overhead - I could, with a bit more effort, have pushed the usec calls in right next to iter.
2023-09-25 08:24:14 what does `uet` stand for? something something time?
2023-09-25 08:24:30 Unix epoch time.
2023-09-25 08:24:35 aah
2023-09-25 08:24:59 It returns it as a pair of cells - : uet ( -- usec sec) ;
2023-09-25 08:25:20 is it a system call?
2023-09-25 08:25:26 It does a system call.
2023-09-25 08:25:34 it's a primitive in my system.
2023-09-25 08:25:52 cool
2023-09-25 08:26:21 It could directly return a single cell - I think it's arranged that way as legacy from 32-bit systems.
2023-09-25 08:27:24 there's like 5 different ways to get the time on unix
2023-09-25 08:27:26 I wind up defining usec every time i want to use it - it's more convenient.
2023-09-25 08:40:41 Anyway, my advice re: tail opt is to just bite the bullet and use a state var rather than trying to be "clever."
2023-09-25 08:42:33 Basically whenever you compile a word in any way (immediate, non-immediate, whatever) see if it's a non-immediate : def word. If it is, set the var. If it's not, clear it. Then have ; use that to decide whether to compile (;) or render that latest call into a jump.
2023-09-25 08:44:21 i only have a today word
2023-09-25 08:44:30 ?
2023-09-25 08:44:35 today word?
2023-09-25 08:44:46 (`Mon Sep 25 12:44:34 2023`)
2023-09-25 08:44:55 Oh - gotcha.
2023-09-25 08:45:04 date
2023-09-25 08:45:06 just for printing the http date header xd
2023-09-25 08:45:30 but i haven't decided what the time word will be
2023-09-25 08:46:36 "Date: " . today . cr
2023-09-25 08:47:02 Sure. I haven't had any need for that; I don't do any http: stuff.
2023-09-25 08:47:17 I added uet to facilitate timing activities.
2023-09-25 08:47:21 i should be using frameworks for that
2023-09-25 08:47:28 or servers
2023-09-25 08:48:12 KipIngram: timing activities like a benchmark?
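A rough analogue of the iter/measure exchange above for anyone without uet or the conditional me: the loop form does the same running sum, and the timing part assumes gforth, whose utime word returns the Unix epoch time in microseconds as a double cell (d>s then fits it in a single cell on a 64-bit build).

    \ ( sum n -- sum' )  add n, n-1, ... 1 into sum with an explicit loop
    : iter ( sum n -- sum' )  begin ?dup while  tuck + swap 1-  repeat ;
    : measure ( -- )
       utime d>s                  \ start time in microseconds
       0 1000000 iter .           \ prints 500000500000
       utime d>s swap - . ." usec" ;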
2023-09-25 08:48:54 i still need to find a better way to execute async code than using threads
2023-09-25 08:49:34 Yes - measure how long words take, etc.
2023-09-25 09:30:38 vms14: What do you dislike about threads?
2023-09-25 09:31:21 race conditions and shared data
2023-09-25 09:31:23 I view threads as the logical way to take advantage of having multiple cores. I know they can run on a single core too, but you need more than one execution stream if you want to be able to use more than one core.
2023-09-25 09:31:32 Yes, you have to know what you're doing.
2023-09-25 09:31:43 but mainly the fact they are an extremely discouraged thing in perl
2023-09-25 09:31:58 also, i guess there are better alternatives
2023-09-25 09:31:59 But I've dealt with that kind of thing all along, in designing digital circuits, so I guess it doesn't "scare me."
2023-09-25 09:32:12 like event driven programming for example
2023-09-25 09:32:16 I don't mean to downplay the issue - it's certainly something you need to "get right."
2023-09-25 09:32:38 a thread is not a proper abstraction for async code
2023-09-25 09:32:46 But ultimately you'll still HAVE threads right? You're talking more about how you have your threads communicate.
2023-09-25 09:32:47 it's just code that runs in a system thread
2023-09-25 09:33:31 When you say "events" you're talking about a message passing architecture, right?
2023-09-25 09:33:56 in a general way, any other alternative to threads to execute code in an asynchronous way
2023-09-25 09:34:32 Ok. We're just using the word a little differently.
2023-09-25 09:34:49 To me "threads" is anything that involves multiple code streams.
2023-09-25 09:34:58 And you can't support multiple cores without having that.
2023-09-25 09:35:21 But there are many ways you can have them communicate and synchronize.
2023-09-25 09:35:45 And shared memory - yeah, that is definitely an "optional" thing.
2023-09-25 09:35:57 So if you prefer to avoid it to avoid the problems that can come with it, I get that.
2023-09-25 09:36:06 It can also bring in cache efficiency issues.
2023-09-25 09:36:21 So there are definitely some pitfalls associated with shared memory.
2023-09-25 09:36:27 You can take advantage of shared memory as an optimization but it's best to avoid multiple threads reading/writing the same memory at the same time
2023-09-25 09:36:53 A thing to keep in mind, though, is that you can get cache issues around "shared memory" without actually sharing memory.
2023-09-25 09:37:04 It's enough to share cache lines, even without "real" overlap.
2023-09-25 09:37:16 Yeah
2023-09-25 09:37:49 vms14: They call that "false sharing," and there are tools that can help you detect it in your applications.
2023-09-25 09:37:56 At least if you're using c.
2023-09-25 09:39:18 vms14: As far as race conditions and deadlock go, I don't think message passing is a universal panacea for that - you can still have race conditions even with message passing.
2023-09-25 09:39:35 but isn't all this a workaround?
2023-09-25 09:39:53 reflecting how threads aren't actually what we need
2023-09-25 09:42:16 The hardware has multiple cores which are essentially threads
2023-09-25 09:42:22 ^ that.
2023-09-25 09:42:34 You have to have threads to fully utilize your hardware in modern processors.
2023-09-25 09:43:04 And you will - you might find a way to bury it under the layers you're using, but if you're capitalizing on your cores you will somehow have threads in there.
2023-09-25 09:43:19 You may be able to map other abstractions onto the hardware threads
2023-09-25 09:43:21 you just might not have them be "visible" to you.
2023-09-25 09:43:27 Right.
2023-09-25 09:43:45 And those abstractions could offer protection against race conditions etc.
2023-09-25 09:44:16 Erlang provides share-nothing threads; they communicate via message queues
2023-09-25 09:44:24 vms14: I think you're just looking at things from a different "altitude," so to speak.
2023-09-25 09:45:00 I think Erlang is a good model - it's one of the things I intend to try to take cues from.
2023-09-25 09:45:20 Me too
2023-09-25 09:45:20 i just don't know yet exactly what that means.
2023-09-25 09:45:36 I also want whole array / table operations
2023-09-25 09:46:01 Yeah - I plan to raid APL for that.
2023-09-25 09:47:40 There are many array languages to steal from :P
2023-09-25 09:47:58 APL is just the one I've looked into most deeply.
2023-09-25 09:48:35 I want to wind up with a system I can use for scientific modeling, so some kind of support along those lines is a must.
2023-09-25 09:49:35 At the moment I'm "imagining" that my target is an OS for an RPN calculator.
2023-09-25 09:49:43 Seems like a good way to "scope" the whole thing.
2023-09-25 09:56:35 I found a new way to disregard idiotic comments on mainstream/social media: treat them like AI generated spewage.
2023-09-25 10:09:57 the abstractions on top of threads are what i want
2023-09-25 10:10:15 but i have to decide how
2023-09-25 10:10:20 and learn first xd
2023-09-25 10:12:22 i don't want to have to create a thread and manage the resources, etc
2023-09-25 10:12:40 i want some kind of system that abstracts all this stuff
2023-09-25 10:13:32 i just want to execute code in an asynchronous way
2023-09-25 10:13:44 for now when i want to do that, i spawn a thread
2023-09-25 10:14:20 i'll add some sort of workers list, where you give code and the number of threads to spawn with that code, or something like that
2023-09-25 10:14:28 but it's not nice
2023-09-25 10:16:25 and i still think i should be able to find a way to execute async code without threads
2023-09-25 10:17:06 for some reason non blocking io with one thread seems to perform much better than blocking io with threads
2023-09-25 10:18:11 That makes sense. I'm on the other end of the spectrum, though - I will need some mechanism for creating threads and so on. Any abstractions that wind up covering that I'll have to implement myself.
2023-09-25 10:18:18 I have no idea how i'll do all those things yet.
2023-09-25 10:18:45 But I want to be able to make threads simply and cheaply.
2023-09-25 10:19:16 I want that to be a very "economical" operation.
2023-09-25 10:19:44 Just provide a bit of memory and the address of the code to run, and... go.
2023-09-25 10:20:28 but you want to implement the real thing
2023-09-25 10:20:35 with context switching and the like
2023-09-25 10:20:36 But then I also want to be able to equip threads with a private dictionary, console access, etc. as optional features.
2023-09-25 10:20:44 Yes indeed. The real thing.
2023-09-25 10:20:55 i think context switching is what i don't like about threads
2023-09-25 10:21:04 I've already got the "guts" for that in place on my system.
2023-09-25 10:21:56 i'd always think about a main loop that partitions the code
2023-09-25 10:22:27 Every time I pass through "docol" (i.e., enter a new : definition) I decrement a register. The idea is that when it hits zero I'll branch off and that's where a thread swap will occur.
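A very loose sketch of the time-multiplexing idea just described, at a toy level (cooperative only: no preemption, no docol counting, no stack switching, and all the task names are made up): each "task" is just an xt that does one slice of work and returns, and the ring runs them round-robin.

    : task-a ." A " ;   : task-b ." B " ;   : task-c ." C " ;
    create ring  ' task-a ,  ' task-b ,  ' task-c ,
    3 constant #tasks
    : round    ( -- )   #tasks 0 do  ring i cells + @ execute  loop ;
    : run-ring ( n -- )  0 ?do round loop ;   \ 2 run-ring prints A B C A B C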
2023-09-25 10:22:30 in lisp using macros i did something similar to fake async
2023-09-25 10:22:54 you give it code and it returns a function that executes one statement every time you call it
2023-09-25 10:23:32 then you can run that code in pieces by just calling that function
2023-09-25 10:23:33 The idea is that for each core I'll have a "ring" of threads that are time multiplexing on that core.
2023-09-25 10:24:10 If a thread needs to wait for something, I'll remove it from its ring and park it in some other queue.
2023-09-25 10:25:02 In your lingo that would mean waiting for some "event" that would imply the thread was ready to execute again.
2023-09-25 10:25:44 Which could just be as simple as waiting for particular partner threads to arrive at that same point - maybe i want to take a collection of threads and periodically "sync" them.
2023-09-25 10:26:09 So each one would block when it got to that "gate" - only when all of them were there would I turn them all loose again.
2023-09-25 10:26:48 Or it might be waiting for a disk block to be RAM resident, etc.
2023-09-25 10:27:08 Waiting for a keystroke, whatever.
2023-09-25 10:34:44 vms14: I'm sure that by the time I have my "uppermost levels" working I'll have various layers of abstraction in place on top of these lower levels.
2023-09-25 10:34:50 But I have to build it all up, so...
2023-09-25 10:35:51 i would not try to implement threads
2023-09-25 10:36:01 but look for something else instead
2023-09-25 10:36:19 You have more options, I think. I don't, given the level I'm coming at this from.
2023-09-25 13:24:47 I've got several threads in play currently. First of all, I'm going to model the lowest level of the system on Chuck's F18A architecture. The F18A packs opcodes into cells, and I plan a "vm layer" that operates that way.
2023-09-25 13:25:26 That's where the compactness comes in - that virtual instruction set is designed primarily for compactness, while still running with speed similar to my previous designs.
2023-09-25 13:26:03 Then I'm planning this thread/vocabulary/directory/whatever layer, which is how I'll organize various applications and so on.
2023-09-25 13:26:43 Kubernetes layer, political layer, investor layer,
2023-09-25 13:27:02 And then we come to the actual structures for numerical computation. I've become a big fan of geometric algebra, which is a math system that lets you mix scalars, vectors, and higher-order quantities together into what are called "multivectors."
2023-09-25 13:27:30 Oh, I'm sorry - I thought I was in another channel talking to a specific person.
2023-09-25 13:27:38 thrig: you just described the stereotypical startup
2023-09-25 13:27:44 Didn't mean to whack all you guys with this again - I've already been through most of it.
2023-09-25 13:27:57 not to worry
2023-09-25 13:28:18 reviews help it stick in memory (however poorly some of us make use of said memories)
2023-09-25 13:28:44 Anyway, then finally I'm pondering that projective geometry layer. Want to see if I can mix all this stuff together.
2023-09-25 14:02:42 Another thing I've mentioned but only briefly is that I'm interested in trying to work up a block-resident symbol storage approach, so that I don't take up too much of my RAM storing header-type information.
2023-09-25 14:03:00 The payoff would be to be able to use my available RAM almost completely for actual code and data.
2023-09-25 14:03:53 Looking at structuring it as a hash table.
Making it block resident would reduce performance some, since I'd have to swap stuff in and out, but algorithmically I'd go for O(1) performance.
2023-09-25 14:13:16 Gotta love "data presentation choices."
2023-09-25 14:13:19 https://media.nature.com/lw767/magazine-assets/d41586-023-02995-7/d41586-023-02995-7_26075536.png?as=webp
2023-09-25 14:13:54 Notice how the y axis there is not 0 based.
2023-09-25 14:14:08 Which of course makes the gap up at the top as prominent as possible.
2023-09-25 14:19:36 There is no doubt about it having been a hot summer, though - it's noticeably hotter here in the Houston area than "typical" summers in my memory. It's actually nicely cooler today, but we've had a LOT of really hot days this time.
2023-09-25 14:20:49 talking about warmth, it has been pleasantly warmer here today than usual for this season of the year
2023-09-25 14:22:31 usually, at least in my memory, it was annoyingly cold. That is, the kind that makes one go 'I wish it was a degree or so warmer'
2023-09-25 14:26:14 Heh. Too bad I couldn't have shipped you guys some hot air occasionally.
2023-09-25 18:52:04 Holy cow - it's pouring.
2023-09-25 18:52:16 Glad I took the dogs out a few minutes ago.
2023-09-25 18:55:08 a cow with holes in it would pour
2023-09-25 20:39:08 thrig: po(o|u)r cow!
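Returning to the block-resident symbol table mentioned earlier: a toy sketch of the hashing side of that idea, not KipIngram's actual design (the bucket count and the name name-hash are made up). Each bucket could live in its own block, so a lookup would hash the name, fetch that one block, and search only its bucket - which is where the expected O(1) behavior comes from.

    64 constant #buckets
    : name-hash ( c-addr u -- bucket )
       0 -rot  over + swap       \ ( 0 addr+u addr ) for the ?do range
       ?do  31 * i c@ +  loop    \ simple multiplicative hash over the name
       #buckets 1- and ;         \ works because #buckets is a power of two
    s" dup" name-hash .          \ prints the bucket for the name "dup"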