2025-10-03 11:49:24 KipIngram: Well tonight I'm going to treat myself to a couple hours of programming
2025-10-03 11:49:28 Can't wait
2025-10-03 11:49:38 Might get some more in a few months
2025-10-03 13:29:02 :-) That's about the rate I've had in recent times.
2025-10-03 13:29:07 Hope you have fun with it.
2025-10-03 13:37:43 KipIngram, you're retired now tho :)
2025-10-03 13:39:34 I'm currently struggling with an AI to get it to write some Lua code the way I want, because I don't know Lua and don't want to learn more than the minimum to finish my LSP so I can finish my Forth project
2025-10-03 13:40:03 At the moment I'm working down from the top of the interpreter, but really only have the first little bit done. I'm filling in low-level bits as they come up - the idea is to first have just the part required for the interpreter to run. I've got QUIT done, but still have INTERPRET, the prompt, and the looping word to go, and I've got QUERY and EXPECT done, but still have EDIT. I structure that
2025-10-03 13:40:04 while I love simple Forth coding, I'm not enjoying Lua much
2025-10-03 13:40:04 so that EDIT is the real workhorse and can edit an already existing buffer in memory. So I have : EXPECT OVER OFF 0 EDIT DROP ; The OVER OFF clears the buffer, and the 0 at the end sets the initial column position. EDIT can start anywhere in an existing string and returns the keystroke that caused it to terminate. That's usually "enter," but can also be cursor up or down keystrokes. Makes it useful for writing screen editors.
2025-10-03 13:40:40 nice work!
2025-10-03 13:40:53 tpnix: Right - hopefully I can pick up some steam now. This Forth will get used in a lot of the further things I want to do, so it's sort of first in line.
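A minimal Python sketch of the EXPECT/EDIT factoring described above (all names and key codes here are my assumptions, not KipIngram's actual code): EDIT works in place on an existing buffer starting at a given column and hands back the keystroke that terminated it; EXPECT just clears the buffer, runs EDIT from column 0, and drops the key.

```python
# Hypothetical key codes; in the real system these would come from the
# terminal layer. KEY_UP/KEY_DOWN model the cursor keys that can also
# terminate EDIT, which is what makes it reusable in a screen editor.
KEY_ENTER, KEY_UP, KEY_DOWN = 10, 256, 257

def edit(buf, col, keys):
    """Edit `buf` in place starting at `col`; return the terminating key."""
    for key in keys:
        if key in (KEY_ENTER, KEY_UP, KEY_DOWN):
            return key          # terminator is handed back to the caller
        if col < len(buf):
            buf[col] = key      # overwrite within an existing string
        else:
            buf.append(key)     # or extend at the end
        col += 1
    return KEY_ENTER

def expect(buf, keys):
    """EXPECT = clear the buffer (OVER OFF), EDIT from column 0, DROP the key."""
    buf.clear()
    edit(buf, 0, keys)          # the returned keystroke is simply discarded
```

Because `edit` can start mid-buffer and reports *why* it stopped, a screen editor can call it per line and use the returned up/down key to move between lines, which matches the factoring described in the log.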
2025-10-03 13:40:58 I'm a tech, not a programmer, so I'll never write a Forth
2025-10-03 13:41:22 but I rely on Forth to make embedded devices
2025-10-03 13:41:27 I quite like Lua
2025-10-03 13:41:31 A lot of people in here do, I think
2025-10-03 13:42:10 veltas, I don't mind Lua either, I don't dislike it, but it's only a stepping stone for me
2025-10-03 13:42:39 it's Forth I'm building my tooling for
2025-10-03 13:43:29 for sure, Lua is dead easy to read, I don't have any trouble with it, the doc is good
2025-10-03 13:44:24 I have already written a neovim pop-up database searcher with Lua, but needed AI help to do it
2025-10-03 13:50:46 Last time I did this, I had written EDIT to handle ASCII only, and later decided to modify it to handle UTF-8. The result was a surprisingly large amount of code overall. I'm really hoping that by designing for UTF-8 to begin with I can tidy it up and maybe reduce the size a little. So I plan to give that a lot of "thinking time."
2025-10-03 13:51:55 This system is byte-coded but will fetch by 64-bit cells. Definitions are cell-aligned, so I'm trying to keep as many of them as possible one-cell definitions.
2025-10-03 13:53:54 I also have a way to "call" any of the 16 cells just below the IP with a single byte, so I'm trying to structure the thing to exploit that as much as possible.
2025-10-03 13:54:08 So far I have, but I haven't written very much yet, so we'll see.
2025-10-03 14:01:09 I'm also doing it so that returns can resume a partially exhausted instruction word. The whole thing is loosely based on the F18A, but with some tweaks. The F18A couldn't return to the middle of a cell, for example. That requires putting two items on the return stack on calls, but I don't think it'll create much of a performance penalty because both of them will usually be in the same cache line.
2025-10-03 14:01:45 I'm expecting it to be compact as hell.
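A toy sketch of the "two items on the return stack" idea (names and representation are my assumptions, not the actual design): a call pushes both the address of the current cell and the not-yet-executed remainder of that instruction word, so a return can resume mid-cell instead of at the next cell boundary, which the F18A could not do.

```python
# rstack holds (cell_address, leftover_bytes) pairs. The two items are
# pushed together, so in a real machine they would normally land in the
# same cache line, keeping the cost of the extra item low.

def call(rstack, ip, acc, target):
    """Call `target`: save where we were AND what was left of the cell."""
    rstack.append((ip, acc))
    return target, 0            # begin at target with a fresh (empty) cell

def ret(rstack):
    """Return: resume the partially exhausted instruction word."""
    ip, acc = rstack.pop()      # the leftover bytes pick up where they stopped
    return ip, acc
```

With only the cell address saved (the F18A scheme), the leftover bytes in `acc` would be lost and execution could only resume at the next full cell.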
2025-10-03 14:03:44 And portable, except for the layer that implements the virtual instructions.
2025-10-03 14:06:50 Considered doing a tokenised forth?
2025-10-03 14:11:54 Yeah, I guess you could use that word. A byte-coded vm of my own design. It supports a nice threading method - each vm instruction concludes with mov bl, al ; sar rax, 8 ; jmp [ + 8*rbx].
2025-10-03 14:12:10 The upper seven bytes of rbx will just be kept 0 the whole time.
2025-10-03 14:13:00 The nice thing about it is that no slots are wasted for moving to the next cell - I can use all eight. Eventually, when they're used up, I'll get either 0x00 or 0xFF as the opcode, and both of those table entries will just point to a bit of code that reloads rax with lodsq.
2025-10-03 14:13:24 Well, the difference is that a tokenised forth is implemented in assembly with a next routine that looks up a table of fixed words that make your 'ops' and allows either indirect or direct threading, whereas a VM can be implemented in anything and typically doesn't allow inlining code or direct threading
2025-10-03 14:13:54 Ok, then no - not tokenized. This Forth itself will be code threaded.
2025-10-03 14:14:06 Just using vm code.
2025-10-03 14:14:16 bytecode threaded, let's call it
2025-10-03 14:14:22 Works for me.
2025-10-03 14:14:24 That should be a thing
2025-10-03 14:14:54 And any instruction can treat the remainder of the cell as a literal - when the instruction executes, the literal will be sitting fully formed in rax.
2025-10-03 14:15:12 All shifted right and sign-extended properly.
2025-10-03 14:18:38 I used to think byte code machines would be slow, but I think this may wind up as fast, or nearly so, as anything I've previously written. Time will tell.
2025-10-03 14:21:15 Should be faster than Python anyway
2025-10-03 14:21:38 Although actually I think Python's getting faster(?)
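A rough Python model (assumptions mine; the real thing is x86 assembly) of the dispatch just described: peel the low byte off a 64-bit cell as the opcode (mov bl, al), arithmetic-shift the rest down by 8 (sar rax, 8), and dispatch through a table (jmp via rbx). An exhausted cell leaves 0x00 (positive residue) or 0xFF (negative residue) as the opcode, and those table slots just reload the next cell, standing in for the lodsq.

```python
MASK64 = (1 << 64) - 1

def sar64(x, n):
    """Arithmetic shift right on a 64-bit value, like x86 `sar`."""
    if x & (1 << 63):
        x -= 1 << 64            # reinterpret as signed before shifting
    return (x >> n) & MASK64

def run(cells, handlers):
    """Fetch 64-bit cells and dispatch one byte opcode at a time."""
    ip = 0
    acc = cells[ip]; ip += 1    # initial lodsq-style reload
    while True:
        op = acc & 0xFF         # mov bl, al
        acc = sar64(acc, 8)     # sar rax, 8
        if op in (0x00, 0xFF):  # cell exhausted: reload the next cell
            if ip >= len(cells):
                return
            acc = cells[ip]; ip += 1
            continue
        # jmp [table + 8*rbx]; the remainder of the cell sits in acc,
        # already shifted down and sign-extended, available as a literal.
        handlers[op](acc)
```

Note no opcode slot is spent on "advance to the next cell": all eight bytes are usable, and the handlers here merely observe the literal, whereas a real literal-consuming op would also discard the rest of `acc`.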
2025-10-03 14:22:37 I haven't kept track, but historically when I've timed my Forths they've been like 30%-ish of the C I wrote to compare to, and some large factor faster than Python. I can't recall exactly what the factor was, something like a 10x or 30x sort of thing.
2025-10-03 14:22:46 through JIT-ing, but that's just lipstick on a pig. Being faster than Python is a really low bar
2025-10-03 14:22:56 Yeah.
2025-10-03 14:23:18 Python's really best when it's glue code over package calls that do the heavy lifting.
2025-10-03 14:23:51 Or used for user interface stuff where you're waiting for the slowpoke human anyway.
2025-10-03 14:25:09 Benchmarks from the JIT version seem quite bad actually
2025-10-03 14:25:21 3.13 that is, I can't speak for PyPy
2025-10-03 14:26:55 I'd rather write bad C and occasionally optimise C than bad Python and have to rewrite in C constantly
2025-10-03 14:27:44 I wrote a lot of Python at work over the years. I just didn't sweat the performance and was primarily interested in the rich package library.
2025-10-03 14:28:00 And now I've handed all that over to a couple of kids I trained before I left.
2025-10-03 14:28:10 python is good if you want a small program now and don't really care about how fast it is
2025-10-03 14:28:10 Given your primitives are all cozy with the CPU you're alright; Python includes a lot of primitives and runtime features that aren't cozy at all, and I suspect that's where a lot of the performance issues are
2025-10-03 14:28:25 amby: Exactly right.
2025-10-03 14:28:42 It just very often felt like the fastest path to delivering my results.
2025-10-03 14:28:51 i wouldn't wanna write anything more than 100 lines or so though
2025-10-03 14:29:09 And since they always wanted them yesterday, that wound up being the priority.
2025-10-03 14:29:40 I used to buy into the general idea that only a small part of your application is your bottleneck, hence Python is OK.
You tell yourself: if the need arises, I can rewrite my bottleneck in C. The fact of the matter is, it never happens and you just beef up your servers, contributing to sprawling inefficiencies all over our data centers
2025-10-03 14:29:45 My longest ones were several hundred lines, like 700-800, but there were only a couple of those.
2025-10-03 14:31:06 sometimes the bottleneck is easy to identify, a single hot loop. Yay, easy. But a lot of the time, the bottleneck is a web of complex code and there's no straightforward way to proceed, so you don't
2025-10-03 14:31:31 and in the end, that's just sloppiness eating joules
2025-10-03 14:31:35 at a massive scale
2025-10-03 14:31:35 And my most involved one was targeted at synchronizing a pair of systems that collaboratively tested an SSD - it had two connections to it and was intended for use in high-availability systems. So all it had to do was keep up with the job flow, which was by its nature a slow process.
2025-10-03 14:32:28 It "got the job done" (TM), which was good enough for me.
2025-10-03 14:42:45 vdupras: Also the portion that is the 'hot part' gets proportionally smaller if your glue language is slower, so that 80% part in C becomes like 30% in Python
2025-10-03 15:37:37 https://packages.debian.org/bullseye/zlib1g vs https://packages.debian.org/bullseye/zstd
2025-10-03 15:37:41 Who would win
2025-10-03 15:38:19 I feel like a lot of people would reach for zstd, but look how bloated it is in comparison, C++ etc
2025-10-03 15:38:26 But who cares, I guess it doesn't matter
2025-10-03 20:43:56 forthBot: LOAD ini.fth
2025-10-03 20:43:56 File ini.fth with MOON loaded
2025-10-03 20:44:05 forthBot: EURO
2025-10-03 20:44:05 41 1 38 45 46 12 11
2025-10-03 20:44:16 if you want to win veltas
2025-10-03 20:52:13 forthBot: deal
2025-10-03 20:52:13 Error: Unknown word: deal
2025-10-03 20:52:32 forthBot: bet deal
2025-10-03 20:52:32 Error: Unknown word: bet
2025-10-03 20:53:12 forthBot: LOAD bj.fth
2025-10-03 20:53:12 Error: Error: LOAD: Cannot open file 'bj.fth'
2025-10-03 22:44:05 Environment for cleobuli_ inactive, freeing...
2025-10-03 22:53:12 Environment for lispmacs[work] inactive, freeing...