00:54:36
##forth
<KipIngram>
veltas: No, unfortunately not. I was working entirely in assembly.
02:10:06
##forth
<pgimeno>
oh wow, my interpreter is already interpreting and compiling, and it can even already do [ ] LITERAL yay!
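(For readers following along: inside a colon definition, `[` switches to interpret state, `]` switches back to compile state, and `LITERAL` compiles the value just computed, so `: five [ 2 3 + ] LITERAL ;` does the addition once, at compile time. A toy Python model of just that mechanism; the layout is illustrative, not how any particular Forth stores its code:)

```python
# Toy model of [ ] LITERAL: compile the body of a colon definition, with
# [ ... ] evaluated immediately and LITERAL compiling the computed value.
def compile_body(source):
    stack, code, compiling = [], [], True
    for word in source.split():
        if word == "[":
            compiling = False              # [ : drop to interpret state
        elif word == "]":
            compiling = True               # ] : resume compiling
        elif word == "literal":            # immediate word: runs now
            code.append(("push", stack.pop()))
        elif not compiling:                # interpret state: execute now
            if word == "+":
                stack.append(stack.pop() + stack.pop())
            else:
                stack.append(int(word))
        else:                              # compile state: compile a call
            code.append(("call", word))
    return code

print(compile_body("[ 2 3 + ] literal"))   # [('push', 5)]
```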
02:12:28
##forth
<zelgomer>
congrats!
02:16:35
##forth
<KipIngram>
pgimeno: Congratulations!
02:16:42
##forth
<KipIngram>
Always a great feeling!
02:17:09
##forth
<pgimeno>
thanks! yes it is :)
05:10:06
##forth
<Seabass_>
pgimeno: hey nice!
09:35:45
##forth
<veltas>
Nice job pgimeno
09:36:02
##forth
<veltas>
KipIngram: Technically I'm working entirely in assembly :P
12:13:37
##forth
<pgimeno>
thanks ^.^
12:13:57
##forth
<pgimeno>
is there a standard name for "branch if not zero"?
12:14:31
##forth
<pgimeno>
the Jupiter disassembly ROM uses ?branch but I wonder if that's standard
12:15:08
##forth
<pgimeno>
and to me it sounds like the other way around, like ?DUP duplicates if not zero
12:16:51
##forth
<veltas>
Might call it -branch
12:17:05
##forth
<veltas>
- before a word often means 'not' or 'invert'
12:35:23
##forth
<xentrac>
NXP appnote AN12245 says that a typical power consumption number for the chip at 528MHz is 38mA (126mW), so you can probably hit submilliwatt consumption whenever the duty cycle is below about 0.8%, which seems pretty reachable for an interactive Forth system; it would work out to about 6 MIPS, roughly 386 performance
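(A quick sanity check of those figures, assuming a 3.3 V supply; that voltage is an assumption on my part, the appnote may quote a different rail:)

```python
# Back-of-envelope check of the figures above: 38 mA at 528 MHz, assumed
# 3.3 V supply (an assumption -- the appnote may measure a different rail).
current_a = 0.038
volts = 3.3
active_w = current_a * volts          # ~0.125 W, i.e. the quoted ~126 mW
duty = 0.001 / active_w               # duty cycle that averages out to 1 mW
cycles_per_s = 528e6 * duty           # ~4.2 M CPU cycles/s at that duty cycle
print(f"{active_w * 1000:.0f} mW active, {duty:.2%} duty, "
      f"{cycles_per_s / 1e6:.1f} M cycles/s")
```

This gives roughly 4.2 M cycles per second; the "about 6 MIPS" figure presumably assumes more than one instruction per cycle on the dual-issue core.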
12:35:34
##forth
<xentrac>
pgimeno: I like the term "colonoscopy"
12:36:00
##forth
<pgimeno>
well, it allows you to look inside the colon (definition)
12:36:59
##forth
<pgimeno>
veltas: thanks, I like that
12:38:04
##forth
<xentrac>
skvery: Mecrisp is proprietary
12:38:41
##forth
<xentrac>
pgimeno: congratulations!
12:39:03
##forth
<xentrac>
?branch is a pretty common word for a conditional branch
12:39:04
##forth
<pgimeno>
thanks ^.^
12:39:25
##forth
<pgimeno>
xentrac: but it suggests branching if true
12:40:06
##forth
<xentrac>
it does
12:40:18
##forth
<veltas>
It's worth realising that branch and jump don't mean the same thing in the Forth context
12:40:27
##forth
<veltas>
branch means "either jump or don't jump"
12:40:36
##forth
<veltas>
That's a *branch* in the program
12:40:43
##forth
<veltas>
Where flow splits
12:40:51
##forth
<pgimeno>
oh wait, I wrote it wrong, the Jupiter uses branch if zero
12:41:08
##forth
<veltas>
That's how it's usually implemented, it jumps if the tos is zero
12:41:17
##forth
<veltas>
Matches IF/WHILE/UNTIL
12:41:47
##forth
<pgimeno>
that's exactly what it's used for
12:43:42
##forth
<pgimeno>
so, ?dup means dup if not zero, therefore ?branch suggests branch if not zero too - so the question is what name to use for branch if zero. veltas' suggestion of -branch sounds adequate.
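(Since pgimeno's interpreter is Python-hosted, here is a minimal sketch of the primitive under discussion: a branch-if-zero, called 0branch here though the name varies as the thread shows, driving what IF ... ELSE ... THEN compiles to. The structure is illustrative only:)

```python
# Threaded-code sketch: 0branch jumps when TOS is zero, which is exactly
# what IF/WHILE/UNTIL compile to.  Code is a flat list; targets are indices.
def run(code, stack):
    ip = 0
    while ip < len(code):
        op = code[ip]; ip += 1
        if op == "0branch":            # conditional: branch if TOS is zero
            target = code[ip]; ip += 1
            if stack.pop() == 0:
                ip = target
        elif op == "branch":           # unconditional jump
            ip = code[ip]
        elif op == "lit":              # push an inline literal
            stack.append(code[ip]); ip += 1
    return stack

# ": t IF 111 ELSE 222 THEN ;" compiles to roughly this:
code = ["0branch", 6, "lit", 111, "branch", 8, "lit", 222]
print(run(code, [-1]), run(code, [0]))   # [111] [222]
```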
12:44:03
##forth
<pgimeno>
It's an internal name, so not a big deal, but exposed in the Python interpreter.
12:52:09
##forth
<veltas>
I would call it BRANCH but ?BRANCH works too
12:52:17
##forth
<veltas>
Also please read what I wrote about branch vs jump
12:52:46
##forth
<veltas>
You're getting confused here because you are treating them the same, but like many things in forth (e.g. IF .. THEN) the meaning is a bit weird
12:53:25
##forth
<veltas>
BRANCH is true regardless of the condition, it's a branch either way, because it forks the control flow graph
12:56:51
##forth
<skvery>
If branch if not zero == ?BRANCH~0 then branch if 0 == ?BRANCH0
13:14:25
##forth
<crc>
xentrac: there are other factors like the display, which are likely to significantly increase power use, even if I clock down the CPU on the Teensy.
13:24:30
##forth
<crc>
Once I have the display hooked up, I'll try to do some experiments to see how changing the CPU speed affects the overall power usage
13:31:49
##forth
<veltas>
skvery: It's always a branch, whether or not the jump happens
13:34:41
##forth
<veltas>
In fig Forth the jump is called BRANCH though, and the branch is called 0BRANCH
13:47:28
##forth
<veltas>
And I think in Forth Inc stuff it's called if (vs IF)
13:47:54
##forth
<veltas>
0BRANCH probably makes the most sense; if fig does it, it's fair
13:48:16
##forth
<veltas>
I suppose everything I've said about the meaning of branch vs jump is nonsense
14:01:49
##forth
<pgimeno>
I finally settled on brfalse, as I feel it's the most intuitive. It's internal anyway, not exposed to the user, so it doesn't matter much. I like 0branch for another project I'm planning.
14:29:30
##forth
<zelgomer>
fwiw, i always call them branch (unconditional) and ?branch (conditional). yes, ?branch seems like it's negated from ?dup, but you can think of it like this: ?branch executes the code that immediately follows it when the predicate is non-zero, and in that sense it's similar to ?dup
14:43:08
##forth
<Seabass_>
brfalse makes sense, I called it “BIZ” and then changed it back to just BRANCH
15:02:55
##forth
<KipIngram>
pgimeno: I mentioned the other day that I favor short symbolic names, and also mentioned my use of conditional returns. A typical name in that set is 0=; which means return if TOS is zero. I have a version with that name that consumes the stack item and also one called .0=; that does not. That pattern extends in an obvious way to other conditional cases. I've never thought of a really excellent
15:02:57
##forth
<KipIngram>
symbol that indicates "jump" rather than return, though.
15:03:39
##forth
<KipIngram>
A little arrow might be nice for that, but the closest thing on the keyboard that's obvious is > and <, and those just never quite resonated for me.
15:04:19
##forth
<KipIngram>
brfalse seems perfectly fine to me.
15:05:03
##forth
<GeDaMo>
^
15:05:06
##forth
<pgimeno>
well, you could go the APL way and make a specialized keyboard with specialized symbols :D
15:05:16
##forth
<GeDaMo>
I don't think ^ is used in standard Forth
15:05:39
##forth
<KipIngram>
I do have that support enabled in my system; I do like those symbols and plan to use them, but haven't done an implementation that uses them yet.
15:06:06
##forth
<KipIngram>
GeDaMo: I like ^ for xor, but using it in other names would still be allowed of course.
15:07:09
##forth
<KipIngram>
In my systems the only place I ever jump is back to the start of the latest definition - I have a word called me that does that, and there are conditional versions. 0=me etc. It was intended to be a shorter word along the lines of self in OO systems.
15:07:41
##forth
<KipIngram>
But that's no good at all for general branching to anywhere.
15:08:05
##forth
<pgimeno>
in linux you can type an arrow → with ralt + i
15:09:00
##forth
<GeDaMo>
←↓→↑
15:09:35
##forth
<GeDaMo>
AltGr+yui and AltGr+shift+u
15:12:59
##forth
<user51>
Not that I know too much about it, but if threaded code supposedly has better density, does it have any relation to data compression?
15:13:21
##forth
<user51>
Don't know if that's the right question, but it's the first thing that I thought about.
15:24:52
##forth
<pgimeno>
I wouldn't regard code compactness as compression of any sort, just like I wouldn't call ARM Thumb code a form of compression; it's closer to a size optimization
15:28:24
##forth
<pgimeno>
but then I might be misunderstanding your question
15:37:53
##forth
<zelgomer>
token threading is kind of similar to color palette compression
15:38:43
##forth
<zelgomer>
i don't know whether that qualifies as compression in this context
15:38:46
##forth
<crc>
user51: not that I'm aware of. At least for direct
15:38:46
##forth
<crc>
& indirect threading, it's better for density in the sense that you just have a list of addresses, but it's not really compressing anything.
15:45:51
##forth
<xentrac>
crc: yeah, it depends a lot on the display. generally you can't save that much by just scaling down the CPU speed, but you can save a lot by powering the CPU down 99% of the time
15:48:35
##forth
<KipIngram>
user51: A big part of Forth compactness is implicit addressing. Instructions don't have to say where the data to operate on is - it's always "on the stack."
15:49:30
##forth
<KipIngram>
Of course that sometimes means you have to dick around with the stack before running your desired instruction. You try to minimize that, but it is a factor.
15:51:30
##forth
<KipIngram>
I don't think pure direct and indirect threading really offer actual compression opportunities. An address is an address - they're all the same size. On the other hand, you could imagine a token threading system where frequently used operations had smaller tokens than infrequently used ones.
15:51:40
##forth
<KipIngram>
That would feel like actual compression to me.
15:52:04
##forth
<KipIngram>
Would probably run a bit slower.
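(The frequency-based token sizing described above can be sketched as a two-tier encoding: 1-byte tokens for the 255 most common words, and an escape byte plus 2-byte index for the rest. This is a hypothetical scheme shown only to make the comparison concrete, not any existing Forth's:)

```python
from collections import Counter

# Two-tier token encoding: the 255 most common words get a 1-byte token;
# anything rarer is escape byte 0xFF plus a 2-byte index.  A hypothetical,
# Huffman-like scheme for illustration only.
def encode(words):
    ranked = [w for w, _ in Counter(words).most_common()]
    short = {w: i for i, w in enumerate(ranked[:255])}
    long_ = {w: i for i, w in enumerate(ranked[255:])}
    out = bytearray()
    for w in words:
        if w in short:
            out.append(short[w])
        else:
            out.append(0xFF)
            out += long_[w].to_bytes(2, "little")
    return bytes(out)
```

Against fixed 2-byte address cells, a program dominated by a few hot words shrinks by nearly half, at the cost of the slower dispatch mentioned above.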
15:54:06
##forth
<veltas>
I disagree, I think threading models as an alternative to full machine code can reduce size, but it matters how big the addresses are
15:54:37
##forth
<veltas>
I found out the other day that this was one of the motivations for rewriting FORTRAN compilers to use threaded code around 1970
15:54:37
##forth
<KipIngram>
Well, yes, I agree - I just don't feel like you're deploying actual "compression tech."
15:55:09
##forth
<veltas>
Also I would say it compresses it, I just would clarify it's not a generic compression algorithm
15:55:13
##forth
<veltas>
But that's just me
15:55:30
##forth
<veltas>
Maybe I'd say it's a "compact representation"
15:55:41
##forth
<KipIngram>
No, no - I agree with that. A list of addresses is going to be smaller than a list of subroutine calls with those addresses embedded in them.
15:57:03
##forth
<KipIngram>
If you have different forms of call instructions that take the address in different ways (like relative calls), then that might make a difference, if your code is "local call heavy."
15:57:23
##forth
<KipIngram>
I'm putting that sort of idea into my next system, because I think the way I code is very local call heavy.
15:57:29
##forth
<veltas>
But x86 has what like 24-bit offsets, so it's no longer an advantage?
15:57:44
##forth
<veltas>
Even more so on AMD64
15:57:51
##forth
<veltas>
x86-32 I mean
15:58:08
##forth
<thrig>
IA-32 some call it
15:58:27
##forth
<KipIngram>
Yeah, if your calls let you reduce the size of the addresses then it's no longer really an apples to apples comparison.
15:58:44
##forth
<veltas>
That's the case for most 32-bit and larger CPUs
15:58:55
##forth
<KipIngram>
But the smaller addresses are more limited, and if you did the extra work in your inner interpreter you could use those smaller address sizes too.
15:58:59
##forth
<veltas>
But in the pre-32-bit era it made sense
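(The size trade-off under discussion works out roughly like this, using the common encodings: 3-byte Z80 `CALL nn`, 5-byte x86-64 `call rel32`, and cell-sized threaded entries. Real code also contains literals and branches, so these are only headline numbers:)

```python
# Headline size of a definition that is just n calls to other words:
# threaded list of address cells vs native call instructions.
def definition_bytes(n):
    return {
        "z80_threaded": 2 * n,     # 16-bit address cells
        "z80_native": 3 * n,       # CALL nn is 3 bytes
        "x64_threaded": 8 * n,     # 64-bit address cells
        "x64_native": 5 * n,       # call rel32 is 5 bytes
    }

print(definition_bytes(100))
```

On a 16-bit machine the threaded list wins; with 64-bit cells against 5-byte relative calls it loses, which is the point made above. Token threading, at a byte or two per word, wins in both cases.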
16:01:37
##forth
<veltas>
Absolutely, but if the motivation was to save space then the fact the CPU arch is smarter about encoding lots of calls makes it a bit redundant
16:02:08
##forth
<veltas>
I said yesterday I think only token and STC make 'sense' on new CPUs, but everyone can do what they want
16:03:43
##forth
<KipIngram>
I used to think token threading would just kill speed, but I no longer believe it has to be too much of an impact. I'm quite liking the token threading idea these days.
16:04:25
##forth
<KipIngram>
What changed my mind about it was recognizing that you only have to actually fetch a cell once every few instructions - the most common "next" is just shifting through the already fetched tokens.
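(That cheap NEXT can be modelled like this; a sketch in which a Python int stands in for the fetched cell, and token 0 is a stop code I made up for the example:)

```python
# Byte-token inner loop that fetches one 32-bit cell of code, then shifts
# tokens out of it -- memory is touched once per four tokens, and the
# common-case NEXT is just a shift and a mask.
def run_tokens(mem, table, ip=0):
    cell = count = 0
    while True:
        if count == 0:                     # the occasional, expensive NEXT
            cell = int.from_bytes(mem[ip:ip + 4], "little")
            ip += 4
            count = 4
        token = cell & 0xFF                # the common, cheap NEXT
        cell >>= 8
        count -= 1
        if token == 0:                     # 0 = stop (made-up convention)
            return
        table[token]()

trace = []
table = {1: lambda: trace.append("dup"), 2: lambda: trace.append("+")}
run_tokens(bytes([1, 2, 1, 0]), table)
print(trace)                               # ['dup', '+', 'dup']
```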
16:04:35
##forth
<veltas>
Well it's faster than bytecode (without JIT)
16:04:41
##forth
<veltas>
I think(?)
16:05:04
##forth
<KipIngram>
I guess my use of "token threading" here is more or less bytecode. Byte tokens is what I'm planning.
16:05:34
##forth
<veltas>
The difference is bytecode doesn't call/inline code to read the next op
16:05:40
##forth
<KipIngram>
I was initially thinking about smaller tokens for code density, but this "local call" idea encouraged me to increase the token count, and pretty quickly the "naturalness" of bytes won out.
16:05:43
##forth
<veltas>
That's what makes it 'threaded', IMO
16:06:26
##forth
<KipIngram>
Honestly what I'm picturing almost isn't "threaded" anymore - it's more just a particular vm instruction set.
16:06:42
##forth
<KipIngram>
That will have call instructions of a couple of forms.
16:06:44
##forth
<veltas>
Bytecode that isn't threaded has an interpreter loop
16:07:03
##forth
<KipIngram>
It'll just have byte codes that are shorthand for local calls of various distances.
16:07:03
##forth
<veltas>
(or it's JIT -- in which case it will leave us in the dust)
16:08:09
##forth
<KipIngram>
I've never really done a system that tried to push any particular optimization (size, speed, etc.) to its limit - I'm always going for some envisioned "balance of things" that happens to appeal to me at that moment.
16:08:31
##forth
<KipIngram>
Fast "enough," compact "enough," ...
16:08:55
##forth
<KipIngram>
Portability is of more interest to me this time than it ever has been before.
16:09:09
##forth
<KipIngram>
I really want the only non-portable part to be the actual vm implementation layer.
16:09:21
##forth
<KipIngram>
And platform specific hardware interface code, of course.
16:12:22
##forth
<veltas>
I'm also interested in going more portable
16:16:13
##forth
<veltas>
Want to get SDL and OpenGL working in my Forth, in a way where I could port it to a more native interface if desired
16:16:16
##forth
<veltas>
But I probably never will
16:16:26
##forth
<veltas>
SDL is good enough
16:28:33
##forth
<veltas>
I guess it makes sense if you can load .so's dynamically to read interface stuff using DWARF data
16:28:48
##forth
<veltas>
But not everything you need is always in there, e.g. #define's
16:29:05
##forth
<veltas>
Saves parsing arbitrary C headers though
16:33:42
##forth
<veltas>
Wow DWARF literally has a stack machine inside
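(For the curious: DWARF location expressions really are little stack-machine programs. The opcode values below are the real ones from the spec, DW_OP_dup = 0x12, DW_OP_plus = 0x22, DW_OP_lit0..lit31 = 0x30..0x4f; the evaluator itself is a toy covering only those:)

```python
# Toy evaluator for a tiny subset of DWARF expression opcodes, just to show
# the stack machine.  Real consumers handle ~170 ops, registers, and memory.
DW_OP_DUP, DW_OP_PLUS, DW_OP_LIT0 = 0x12, 0x22, 0x30

def eval_dwarf(expr):
    stack = []
    for op in expr:
        if DW_OP_LIT0 <= op <= DW_OP_LIT0 + 31:
            stack.append(op - DW_OP_LIT0)      # DW_OP_lit0..DW_OP_lit31
        elif op == DW_OP_DUP:
            stack.append(stack[-1])
        elif op == DW_OP_PLUS:
            stack.append(stack.pop() + stack.pop())
        else:
            raise NotImplementedError(hex(op))
    return stack[-1]

print(eval_dwarf(bytes([0x35, 0x12, 0x22])))   # lit5 dup plus -> 10
```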
16:59:08
##forth
<xentrac>
so do TrueType and terminfo
17:24:17
##forth
<veltas>
Apparently it's not used by most compilers
17:52:33
##forth
<lispmacs[work]>
hey all, I was just curious if anyone here does any projects with analog computing
17:52:47
##forth
<xentrac>
also pickle
17:52:59
##forth
<xentrac>
it's true that most compilers don't use a stack-based IR these days
18:20:23
##forth
<KipIngram>
xentrac: I'm quite interested in analog computing. I haven't done anything with it yet, but I've long had it in mind that I'd like to try my hand at building a good analog computer at some point.
18:21:03
##forth
<KipIngram>
I'm not really an "analog whiz kid," especially compared to digital, but I know my way around to some extent.
19:09:57
##forth
<veltas>
xentrac: I mean DWARF's stack ops are mostly unused in practice, apparently
19:10:28
##forth
<thrig>
ELF also has a bit of complexity in it (and, thus, security issues)
19:10:33
##forth
<veltas>
I think they're necessary to describe addresses on exotic platforms maybe? But not necessary usually? Not 100% sure but that seems like the case
19:11:34
##forth
<veltas>
Yeah formats tend to be over-engineered; I should know, I've written at least one
19:12:55
##forth
<veltas>
The best binary formats have some random agreed ahead-of-time numbers, a checksum, and a load of fixed offsets I only try to load on my 2003 Pentium 4
21:16:33
##forth
<Seabass_>
veltas: no way, I’m writing a Forth interpreter and planning to add some built-in words for SDL
21:16:43
##forth
<Seabass_>
My goal is to make a little tool that I can write Snake in
21:26:26
##forth
<veltas>
Seabass_: Nice, good luck with that
21:26:55
##forth
<veltas>
It's quite achievable, I just don't get a lot of time and have probably made it harder than it needs to be
21:31:27
##forth
<MrMobius>
I recently made an sdl2 app that wraps the graphics and input functions for the new calculator I got so I can test C code there instead of reflashing every time
21:31:44
##forth
<MrMobius>
so I think a Forth will be one thing I do on there eventually
21:34:41
##forth
<Seabass_>
veltas: it has been surprisingly easy so far, forth seems very intuitive to implement (I figured out how to implement functions and branching without looking at a reference)
21:34:58
##forth
<Seabass_>
But that was over the break and now that I’m back at work my brain is gonna be fried 24/7 again
21:50:26
##forth
<veltas>
The first Forth I wrote was zenv, which I think I had a bit of a false start with, read Moving Forth a little, and then got back on it
21:50:49
##forth
<veltas>
I can't remember what I was hung up on at the time, maybe if I find an old in-progress repo I can remember
21:51:01
##forth
<veltas>
That's a Z80 Forth for the ZX Spectrum
21:51:31
##forth
<veltas>
And I joined this IRC channel to ask for help with that maybe, not sure
21:52:03
##forth
<veltas>
It's either token threaded or direct threaded depending on build flags
22:00:12
##forth
<KipIngram>
My first Forth was quite awful. It worked, but I had no knowledge of the internals at the time, and my way of getting there was awfully complex. Wrote that on a TRS-80 Color Computer (6809 processor). It was in assembly, though.
22:03:30
##forth
<veltas>
That's what I would have ended up with, if it wasn't for the resources available today
22:05:06
##forth
<veltas>
TRS-80, luxury!
22:05:11
##forth
<veltas>
How much RAM did you have?
22:06:06
##forth
<veltas>
Oh I see TRS-80 Color Computer is a different thing, crazy
22:26:42
##forth
<KipIngram>
It had 16k when I bought it, but I upgraded it to 64k.
22:27:04
##forth
<KipIngram>
I used the bottom 32k for RAM and the top 32k as a 32-page "mass storage."
22:27:21
##forth
<KipIngram>
I'd read the whole mass storage in off of cassette, do my work, and write it back out.
22:27:30
##forth
<KipIngram>
I had no disk for my CoCo until later.
22:29:17
##forth
<KipIngram>
I loved the 6809 architecture. The assembly programming class I took used it as the teaching platform - that's why I got a CoCo to start with.
22:29:48
##forth
<KipIngram>
The engineering college had a big room with like 50 of the things in it for general student use, but I wanted to be able to work on my stuff at home.
22:45:11
##forth
<KipIngram>
I still had to go to the lab for some of it, because those systems all had a bay off to the side we could plug circuits we'd designed into. Mine didn't have that.
22:45:30
##forth
<KipIngram>
It was a setup the people at the lab had designed.
22:45:42
##forth
<KipIngram>
(probably graduate student labor)
23:05:33
##forth
<MrMobius>
how did that work? letting the magic smoke out of the coco data bus seems worse than doing it just to something on the breadboard
23:08:44
##forth
<MrMobius>
I've also wondered about building something that could withstand common wiring errors, but not sure how you would do that