2023-09-10 00:01:59 KipIngram: Regarding demotion of numbers to smaller types, there's something called Comparison Tolerance that determines the delta for numbers to be considered equal.
2023-09-10 00:04:54 For picturing multidimensional arrays, it's... challenging... to try direct visualisation, but thinking about Rank and Shape often helps a lot.
2023-09-10 00:06:52 That said, the vast majority of APL code just deals with scalars, vectors, and matrices. It's completely reasonable to bootstrap your intuitions on the primitives with these lower-dimensional domains.
2023-09-10 00:07:39 I *really* hope you get into APL. Would love to see how you marry Forth and APL to get the best of both.
2023-09-10 00:14:11 KipIngram: Possibly other languages' fault here
2023-09-10 00:15:29 Confused meaning of 'dimension'
2023-09-10 00:16:17 An NxN matrix is two-dimensional, but can represent something in N-dimensional space, or NxN dimensions...
2023-09-10 00:17:14 http://clhs.lisp.se/Body/f_ar_d_1.htm#array-dimensions
2023-09-10 00:21:33 The examples on that page are bad
2023-09-10 00:22:08 Meh, the meaning is pretty obvious here, IMHO, so I think it's fine. But, yeah, 'rank' is the "correct" term, both in math and APL.
2023-09-10 00:22:24 The example pretty much says "array-dimensions gives you the array dimensions"
2023-09-10 00:22:42 CLHS also has ARRAY-RANK (don't think I've ever used that, have used dimensions a lot)
2023-09-10 00:24:25 In essence I believe it's reasonable to define dimension and rank this way, and it's consistent, and 'rank' means a few different things in maths already anyway
2023-09-10 00:25:09 But it needs definition, it can't be assumed, and certainly in C talk 'dimension' can mean either of those things depending on who you ask
2023-09-10 00:25:36 I like to use DIM as my "array length" macro name because it's short and people seem to know what I mean when I use it
2023-09-10 00:26:01 But I also describe x[3][3] as a "two dimensional array" so it is what it is
2023-09-10 00:33:56 Scary lights outside :(
2023-09-10 00:38:58 Think it's some laser show, really horrid light pollution
2023-09-10 00:40:47 xelxebar: I was just pointing out that rank means something very different in linear algebra.
2023-09-10 00:41:22 Has anyone here seen the Nishimura comet?
2023-09-10 00:42:14 Yes, x[3][3] is a two dimensional array, and could have rank anywhere from 1 to 3. Or maybe 0 too - what do they say the rank of a zero matrix is in linear algebra?
2023-09-10 00:42:32 I mean, it has to do with the number of linear relationships expressed in the matrix, and in the zero matrix there aren't any.
2023-09-10 00:42:53 Zero I'd guess
2023-09-10 00:42:59 i think it should be.
2023-09-10 00:43:17 Don't get hung up on it
2023-09-10 00:43:19 And otherwise (non-zero matrix) you'd have at least rank 1 and perhaps more.
2023-09-10 00:43:47 Apparently we will get best viewing on Tuesday
2023-09-10 00:43:56 However, they also use the word rank to talk about tensors.
2023-09-10 00:44:19 A tensor of rank 2 has two indices. So that's similar to how APL uses it.
2023-09-10 00:46:11 KipIngram: Definitely. Rank has different meanings in APL and linalg.
2023-09-10 00:48:32 For the latter, it's probably easier to think in terms of null space vs rank. Since an all-0 matrix sends everything to 0, the space spanned by its output vectors has dimension zero.
2023-09-10 00:49:38 Good point about tensors. Luckily (?) in APL we don't have to distinguish between upper and lower indices :P
2023-09-10 00:52:33 Yes, that sounds right to me. Rank plus nullity is always dimension, right?
2023-09-10 00:53:44 Oh, I may want to support covariant and contravariant ideas in whatever I come up with. It's kind of a necessary concept for various parts of physics.
2023-09-10 00:54:40 They dodge having to teach you anything about that in early physics by sticking really close to Cartesian coordinate systems, where the contravariant and covariant components are the same.
2023-09-10 00:55:59 It's still pretty easy to at least motivate the ideas though. Consider a velocity vector. Say you want to switch from feet as your length unit to inches. You have to multiply all of your velocity components by 12. You made the unit smaller, the components got bigger. That's contravariant (contrary).
2023-09-10 00:56:33 Now consider your vector represents a power density. When you go from watts per square foot to watts per square inch, you have to DIVIDE your components by 144.
2023-09-10 00:56:38 That's covariant.
2023-09-10 00:56:58 So both of them are "vectors," but they behave entirely differently under coordinate system changes.
2023-09-10 00:58:14 So when you go from feet to inches, some of your vector components get bigger and some get smaller. That has to be kept track of properly when you're trying to do things "with full generality."
2023-09-10 02:31:08 You know, this 32655 chip has 128k of RAM but 512k of flash. It occurred to me that in this design I'm looking at I actually could place my headers in flash and block in needed bits. Then my RAM could be solid code/data.
2023-09-10 02:48:30 xelxebar, where does the 0 cell come from in this APL?
2023-09-10 02:48:33 5 5⍴⎕A
2023-09-10 02:48:35 ┌─────┬─┐
2023-09-10 02:48:37 │ABCDE│0│
2023-09-10 02:48:39 │FGHIJ│ │
2023-09-10 02:48:41 │KLMNO│ │
2023-09-10 02:48:43 │PQRST│ │
2023-09-10 02:48:45 │UVWXY│ │
2023-09-10 02:48:47 └─────┴─┘
2023-09-10 02:51:47 I expected just the left box.
2023-09-10 02:52:36 Oh. Weird. Did it again:
2023-09-10 02:52:38 5 5 ⍴⎕A
2023-09-10 02:52:40 ABCDE
2023-09-10 02:52:42 FGHIJ
2023-09-10 02:52:44 KLMNO
2023-09-10 02:52:46 PQRST
2023-09-10 02:52:48 UVWXY
2023-09-10 02:52:50 That is what I expected.
2023-09-10 02:52:52 Don't know what happened the first time.
2023-09-10 02:55:59 So I'm definitely going to assign some entire groups of opcodes to the same call instruction, so that I can use it as a "shortened opcode." The normal opcode parser will pick up some of the bits of the offset as "opcode bits," but those will be don't-cares - it'll go to the same handling code regardless. The idea is to get a very compact format for making "local" calls (which will characterize most calls that
2023-09-10 02:56:01 appear just as a result of factoring).
2023-09-10 02:56:32 Won't be able to use that to call a built-in word that's likely far away, but "helper" words will tend to be close by.
2023-09-10 03:00:51 I see what happened with the above APL. Just before the first one I'd typed this:
2023-09-10 03:00:53 ⎕0
2023-09-10 03:00:55 ⎕:
2023-09-10 03:01:09 I thought that was just a failure, but it looks like it's prompting for something.
2023-09-10 03:01:39 I duplicated that, only did the ⎕0 twice, and got a "double boxed" result.
2023-09-10 03:03:02 mmm tofu
2023-09-10 03:24:54 Anyway, this is going to make the system even more compact. The helper definitions will have no permanent headers and I'll be able to call helpers within 256 bytes of the calling instruction using basically just an opcode-sized field.
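Here is a rough sketch in Python of the "shortened opcode" local-call idea described above. The specific layout is a guess, not the actual design: 8-bit opcodes, the top quarter of the opcode space (0xC0-0xFF) reserved so every byte in that range dispatches to the same handler, and an offset counted in 4-byte cells so a single byte reaches roughly 256 bytes back.

    # Toy model of a byte-coded VM where a whole block of opcode values shares
    # one "local call" handler.  The bits that would normally distinguish one
    # opcode from another are don't-cares: the handler re-reads the opcode byte
    # itself as a short backward offset to a nearby helper definition.

    MEM = bytearray(1024)        # toy code space
    ip = 0                       # instruction pointer
    rstack = []                  # return stack

    def op_nop():
        pass

    def op_local_call():
        global ip
        opcode = MEM[ip - 1]             # the byte we just fetched
        offset_cells = opcode & 0x3F     # low 6 bits = backward offset in cells
        rstack.append(ip)               # save the return address
        ip -= 4 * offset_cells          # jump to a helper defined shortly before

    # 256-entry dispatch table: the top quarter all points at the same handler.
    TABLE = [op_nop] * 0xC0 + [op_local_call] * 0x40

    def step():
        global ip
        opcode = MEM[ip]
        ip += 1
        TABLE[opcode]()

Swapping in a different TABLE (or re-pointing a range of entries) is all it takes to change which byte values count as short calls.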
2023-09-10 03:25:56 And also any word that I've placed in the first 256 bytes of RAM, so whichever ones are the most commonly used that I choose to put there (those would be non-helper, generic Forth definitions).
2023-09-10 03:27:24 In that video where the guy presents the DB48X project (deployment of an HP48-type system on the DM42), his first attempt was to just port the software over. But it was too big, so he had to re-implement and try to make it more compact. I think this system I'm cooking up is compact enough that I could make a run at a similar type of project.
2023-09-10 03:27:53 Not that I'd go just try to copy what he's done, but I'd be able to get similar or superior complexity into the available space of the device.
2023-09-10 03:51:37 Oh, this is good...
2023-09-10 03:51:39 https://www.youtube.com/watch?v=VO2A6I3Woos
2023-09-10 04:09:41 My first look at it wasn't quite like his (his is better of course). What I saw was this:
2023-09-10 04:10:01 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 + 1/7 - 1/8 ...
2023-09-10 04:10:46 Now, note that 1-1/2 = 1/2, so replace the first two terms with 1/2. Then note that 1/3-1/4 = 4/12-3/12 = 1/12, so replace the next two terms with +1/12.
2023-09-10 04:11:06 1/5-1/6 = 6/30-5/30 = 1/30; replace those terms with +1/30.
2023-09-10 04:11:32 And so on. Each time the denominator increases by the previous increase plus eight. So we wind up with
2023-09-10 04:12:12 1/2 + 1/12 + 1/30 + 1/56 + 1/90 + 1/128 + ...
2023-09-10 04:12:16 That's as far as I got.
2023-09-10 04:12:43 Anyway, turns out it converges to ln(2); he demonstrates it visually in the video.
2023-09-10 04:13:47 Anyway, notice that the denominator DELTAS are 10, 18, 26, 34, ...
2023-09-10 04:14:18 BTW, I think my 128 denominator is wrong - just did some arithmetic in my head wrong.
2023-09-10 04:14:27 Or maybe the pattern fails if I keep pushing it out.
2023-09-10 04:41:04 Actually he demonstrates that the series is equal to many different things - turns out all of those are "cheats." I'm pretty sure it actually diverges.
2023-09-10 04:41:58 You can get false sums by monkeying with which items of the series you take at any given step. But the process I outlined up there doesn't cheat - it takes the terms in their logical order, as they come. I coded it up in Python and the result just kept getting bigger.
2023-09-10 06:42:47 TEXAS BEAT ALABAMA. 34-24. Yee HAAAAAAAAA!!!!!
2023-09-10 06:43:20 I'm a big fan of ANYONE that beats Alabama, but in this case it's extra good because I went to Texas.
2023-09-10 08:34:01 So, there are 255 functions in my DM42 "catalog."
2023-09-10 08:35:49 xelxebar: One fundamental "friction" between Forth and APL is APL's monadic vs. dyadic functions (i.e., cases where the same function or operator works either way, and sometimes does very different things).
2023-09-10 08:36:09 You don't necessarily have only one object on the stack when you want to do a unary / monadic operation.
2023-09-10 08:36:46 If I have a sharp separation between "program code" and "algebraic expression" then it's not that much of a problem. But it is problematic if I am attempting any more thoroughgoing fusion.
2023-09-10 08:39:07 It's easy enough to solve if algebraic expressions are "contained." You'd just assume an "empty slate" when you started at the right. ⍺ and ⍵ would give you the left and right operands, and you'd just go from there. It would be like putting a "bubble" around that sequence of activities.
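A tiny illustration of the "bubble" idea just sketched, with made-up names rather than anything from an existing system: an expression is a self-contained function of ⍺ (left operand) and ⍵ (right operand), and the surrounding stack machine decides what those get bound to.

    # Minimal sketch: an "algebraic expression" is a closed function of alpha
    # and omega.  The only way data gets in is through those two names, a named
    # variable, or a literal; here the outer machine binds them from the top
    # two stack items.

    stack = []

    def apply_dyadic(expr):
        """Pop omega then alpha off the stack, evaluate the expression, push the result."""
        omega = stack.pop()
        alpha = stack.pop()
        stack.append(expr(alpha, omega))

    # An expression stored in a "variable", roughly in the spirit of foo ← {1+⍺×⍵}
    foo = lambda alpha, omega: 1 + alpha * omega

    stack.extend([6, 7])
    apply_dyadic(foo)
    print(stack)        # [43]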
2023-09-10 08:39:37 The only way for data to get in would be via those symbols ⍺ and ⍵.
2023-09-10 08:39:50 Or via a named variable or a literal.
2023-09-10 08:41:14 You'd be able to store an algebraic expression in a variable. That's more or less what APL does with foo ← dfn
2023-09-10 08:42:40 So, if you want to use items on the stack, you'll have to reference them via ⍺ / ⍵.
2023-09-10 09:01:03 KipIngram: Exactly. You're pretty much describing lexical scope in dfns, IIUC.
2023-09-10 09:01:40 Though, honestly, I'm not super bullish about marrying languages across such a stark boundary.
2023-09-10 09:04:04 It's part of my beef with regular expressions. I love them, but most languages have an extremely narrow-band interaction with the regex language itself, which causes lots of pain around its use in practice.
2023-09-10 09:06:19 sed, awk, and sam with its structural regular expressions are kind of best in class in this regard, IMHO.
2023-09-10 09:08:32 Basically, I think there's gotta be a way to marry the bare-metal feel and implementation simplicity of Forth with the ergonomics of APL.
2023-09-10 14:30:01 i'm trying to think of the best way to provide method calls
2023-09-10 14:30:55 i have . as a prefix for a method call, which will take an object from the stack
2023-09-10 14:31:16 .some_name
2023-09-10 14:32:01 but for now it does not take any arguments and always pushes the return value on the stack
2023-09-10 14:33:02 i need some way to give arguments, but i don't want to always expect an argument list or have two alternate syntaxes for when you give it arguments or not
2023-09-10 14:33:30 cause i also need an alternative syntax for when you want to call a method, but do not want any result on the stack
2023-09-10 14:34:20 i think i'll use - for method calls that do not put anything on the stack
2023-09-10 14:34:46 but still i need a way to push arguments to that method call
2023-09-10 14:35:25 in the past i had 4 variations of syntax
2023-09-10 14:35:43 push the return or not, give arguments or not
2023-09-10 14:36:14 i don't like that and already have .method and -method
2023-09-10 14:36:39 - won't return any value on the stack while . does
2023-09-10 14:36:54 but still need to give arguments when needed
2023-09-10 14:37:36 vms14: I'm looking at making a subset of my vm instructions operate in a type-dependent way (type of top stack item).
2023-09-10 14:37:47 for now the only idea i have is to have some kind of alternative stack to push arguments there
2023-09-10 14:37:58 At least I'll have that possibility as something I can "activate."
2023-09-10 14:38:05 and this stack gets reset at every method call
2023-09-10 14:38:19 but it looks like it can cause trouble
2023-09-10 14:38:26 Alternative stacks always wind up requiring some way to specify when to use the alternative instead of the main.
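A sketch of the argument-array scheme being floated here, with Python standing in for the Perl host and all names invented for illustration: one shared argument array, a word that moves a value onto it from the data stack, and .method / -method calls that consume it and then reset it.

    # The object a method is called on comes from the data stack; any extra
    # arguments come from a single shared array that is cleared after every call.

    stack = []      # data stack
    args = []       # the "alternative stack" holding method arguments

    def arg():
        """Move one item from the data stack onto the argument array."""
        args.append(stack.pop())

    def dot_method(name):
        """.method -- call the method and push its return value."""
        obj = stack.pop()
        result = getattr(obj, name)(*args)
        args.clear()                 # reset after every method call
        stack.append(result)

    def dash_method(name):
        """-method -- same call, but discard the return value."""
        obj = stack.pop()
        getattr(obj, name)(*args)
        args.clear()

    # Example: push an argument, then the object, then call .count on it.
    stack.append("a"); arg()
    stack.append("banana")
    dot_method("count")
    print(stack.pop())               # 3

The "globally accessible" worry from the discussion is visible here: anything can append to args between the push and the call.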
2023-09-10 14:38:37 since at any moment anyone can push arguments there
2023-09-10 14:39:12 KipIngram: in my case the idea is to have some word that pushes an argument from the stack, to that alternative stack
2023-09-10 14:39:40 you never have to retrieve from it, it's just an array for method calls
2023-09-10 14:40:02 when you call the method like .something
2023-09-10 14:40:18 if there's something on that array it will be given to the method call
2023-09-10 14:40:32 and after execution it will be reset
2023-09-10 14:40:47 but it's a bit weird
2023-09-10 14:41:01 and anyone can push arguments there
2023-09-10 14:41:19 i have no more ideas for avoiding the 4 syntax variations
2023-09-10 14:41:35 this way i have .method and -method
2023-09-10 14:41:51 and if they need arguments i'd have a word to push them
2023-09-10 14:42:15 but it would be globally accessible
2023-09-10 14:42:42 KipIngram: about activate
2023-09-10 14:42:54 there could also be a flag
2023-09-10 14:42:56 but meh
2023-09-10 14:43:18 what i did in the past was to append a * on the method call name
2023-09-10 14:43:25 like .method*
2023-09-10 14:43:35 this means it expects arguments
2023-09-10 14:43:39 Actually it will involve changing the table entries for those instructions. Or having a pointer to that table and changing it.
2023-09-10 14:43:52 So that it points to a table with the changed pointers.
2023-09-10 14:44:37 hmm
2023-09-10 14:44:59 i think i'll stay with the * suffix
2023-09-10 14:45:22 .method -method .method* -method*
2023-09-10 14:45:31 i kind of dislike that
2023-09-10 14:45:54 but a globally accessible alternative array looks like a worse idea
2023-09-10 14:49:24 i need a way to specify those 4 things
2023-09-10 14:49:43 that way i can even create perl objects with this
2023-09-10 14:50:01 'IO::Socket .new
2023-09-10 15:11:27 The array approach is just because of my overall architecture - I'm implementing my vm via an instruction table. When I'm in "native Forth" mode, the entries for those instructions (the ones that actually depend on the kind of data on the stack) will point to the usual basic functions. When I want to switch to the "type sensitive" mode, I'll just change that. I'll probably have two tables - one with the
2023-09-10 15:11:29 optimally efficient native instructions and one with those entries able to look at the stack item's type and then use that to make a choice as to what to do.
2023-09-10 15:11:48 But this is totally dependent on how I'm putting my system together - I wasn't implying that it would be the 'best way for you'.
2023-09-10 15:19:20 I don't plan to rewrite a bunch of table entries every time I change modes - I'll just have two tables and will switch the whole table out. Most of the entries will be the same, but the ones for these "non-agnostic" instructions will be altered.
2023-09-10 15:19:58 For example, DROP will be DROP - DROP doesn't care what kind of data is represented by the stack item it's dropping.
2023-09-10 15:20:38 But + for example will do something different for strings vs. numeric arrays vs. integers etc.
2023-09-10 15:28:38 hmm
2023-09-10 15:29:08 i think i don't want to mess with multiple dispatch
2023-09-10 15:29:33 not for now at least, it will always remain open
2023-09-10 15:29:51 some time ago i was joking about perl oop
2023-09-10 15:30:12 and i was told it's more like a framework to create your own oop system
2023-09-10 15:30:16 Yeah - I'm only considering it because there's already a dispatch table in my design. In the type-sensitive mode, it will be easy to use the top stack item's type to do a second dispatch for an instruction.
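A sketch of the two-table arrangement described above, not the actual implementation: the same opcode slot points at a plain integer add in one table and at a type-inspecting add in the other, and switching modes just swaps which table the inner interpreter indexes.

    # Two dispatch tables over the same opcode set.  Agnostic instructions like
    # DROP appear unchanged in both; "non-agnostic" ones like + get a second,
    # type-based dispatch in the type-sensitive table.

    stack = []

    def drop():
        stack.pop()

    def add_native():
        # Plain mode: assumes two integers on the stack.
        b, a = stack.pop(), stack.pop()
        stack.append(a + b)

    def add_typed():
        # Type-sensitive mode: look at the operands and pick a behaviour.
        b, a = stack.pop(), stack.pop()
        if isinstance(a, list) and isinstance(b, list):
            stack.append([x + y for x, y in zip(a, b)])   # element-wise add
        elif isinstance(a, str) or isinstance(b, str):
            stack.append(str(a) + str(b))                 # string concatenation
        else:
            stack.append(a + b)

    OP_DROP, OP_ADD = 0, 1
    TABLE_NATIVE = {OP_DROP: drop, OP_ADD: add_native}
    TABLE_TYPED  = {OP_DROP: drop, OP_ADD: add_typed}

    # Switching modes is just pointing at the other table.
    table = TABLE_TYPED
    stack.extend([[1, 2, 3], [10, 20, 30]])
    table[OP_ADD]()
    print(stack)                      # [[11, 22, 33]]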
2023-09-10 15:30:49 i see it's kind of true
2023-09-10 15:31:04 See, those non-agnostic instructions would otherwise become useless.
2023-09-10 15:31:22 A piece of code that adds two integers on the stack does nothing for you if you don't HAVE two integers on the stack.
2023-09-10 15:31:38 i do have interest in the common lisp object system
2023-09-10 15:31:47 So I figure I may as well arrange to have those instructions do something else that IS useful.
2023-09-10 15:32:00 CLOS and the metaobject protocol
2023-09-10 15:32:14 And meanwhile I'll still get the compactness benefits of my instructions.
2023-09-10 15:32:26 it's mainly made with hash tables
2023-09-10 15:32:46 but for now i just want to blend with perl oop
2023-09-10 15:32:54 Instead of having a matrix/matrix add or multiply require a call that consumes a whole cell, I'll still be able to do it with an instruction.
2023-09-10 15:33:48 And there are quite a few of these non-agnostic instructions. All the basic arithmetic instructions, the boolean logic instructions, etc.
2023-09-10 15:34:15 So for types that don't need a huge number of methods, this covers the whole dispatch situation.
2023-09-10 15:34:26 i ended up with emacs and evil mode
2023-09-10 15:34:40 i feel fine for now
2023-09-10 15:34:52 feel evil
2023-09-10 15:35:54 :-) What's evil about it?
2023-09-10 15:36:55 APL doesn't require spaces the same way Forth does.
2023-09-10 15:38:01 For example, that string from yesterday, 5 5⍴⎕A, parses as 5 5 ⍴ ⎕A.
2023-09-10 15:38:50 So, two things going on there. First, it adds a separation after the second 5, and also ⍴ is recognized as a token.
2023-09-10 15:39:25 I once thought about adding a list of tokens to Forth (usually single characters) that will be recognized as soon as they're seen, regardless of whether they're space-delimited or not.
2023-09-10 15:39:50 That list would initially be empty, so you'd get standard behavior. But you could populate it if you chose to.
2023-09-10 15:40:35 And for a while I've been thinking that Forth really ought to offer a way to recognize "complex literals." For example, "I am a string" is a perfectly fine string literal.
2023-09-10 15:40:42 But we can't use it in Forth.
2023-09-10 15:41:26 We have special-case code (i.e., code that doesn't just look for dictionary matches) to handle numeric literals, but we pay no mind to other types of literals.
2023-09-10 15:41:34 I think strings are just as fundamental as numbers are.
2023-09-10 15:42:11 Given that it's Forth, I think the right approach would be for the programmer to be able to TELL Forth what things were going to be literals.
2023-09-10 15:42:28 We've got all this control over the code we execute, but no control over how our input is processed.
2023-09-10 15:42:36 It's a fairly small box we've been put in.
2023-09-10 15:50:48 It's a kind of open-ended issue (in terms of the programmer being able to DEFINE literal formats), but at the most basic level I feel like literal strings should exist.
2023-09-10 15:51:15 I think the reason Forth has never gone there is because it doesn't really offer a good place to put such strings anyway.
2023-09-10 15:52:06 The thing is, though, now that I've got this two-layer system in mind, with "basic Forth and then type-sensitive mode," this could also be something that comes with the more powerful mode.
2023-09-10 15:52:23 Keeping the "basic mode" pure Forth.
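One way the "tell Forth what counts as a literal" idea could look, sketched with invented names (this is not any existing Forth's API): the outer interpreter keeps a list of literal recognizers that it tries before giving up on a token, starting with the usual number recognizer, and the programmer can append a string recognizer (or anything else) to it.

    # A recognizer either claims a token (returning a value) or declines
    # (returning None).  Extending RECOGNIZERS is how the programmer takes
    # control of how input is processed, the same way new words extend the
    # dictionary.  Collecting a quoted string past spaces would really require
    # the recognizer to drive the input scanner; here the tokens are pre-split.

    def recognize_number(token):
        try:
            return int(token)
        except ValueError:
            return None

    def recognize_string(token):
        if len(token) >= 2 and token[0] == '"' and token[-1] == '"':
            return token[1:-1]
        return None

    RECOGNIZERS = [recognize_number]          # standard behaviour
    RECOGNIZERS.append(recognize_string)      # opt in to string literals

    DICTIONARY = {"dup": lambda s: s.append(s[-1])}

    def interpret(tokens, stack):
        for tok in tokens:
            if tok in DICTIONARY:
                DICTIONARY[tok](stack)
                continue
            for rec in RECOGNIZERS:
                value = rec(tok)
                if value is not None:
                    stack.append(value)
                    break
            else:
                raise ValueError("undefined word: " + tok)
        return stack

    print(interpret(['"I am a string"', "dup", "42"], []))
    # ['I am a string', 'I am a string', 42]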
2023-09-10 15:54:50 A simple way to get closer to the "spaces not absolutely required" functionality would be to match partial strings. Say you're parsing characters out of the input, and after a few characters you see that what you've got so far has a match in the dictionary. Keep parsing - as long as you still match, greedily take characters. But if you pick up a character that doesn't match, then take everything before
2023-09-10 15:54:51 that as a word.
2023-09-10 15:56:41 So in that string above, we'd parse the first 5 when we saw the space. Then we'd get the second 5, and by itself it represents something valid. Then we see the ⍴ and 5⍴ isn't in the dictionary, so we take the second 5 then.
2023-09-10 15:57:08 And for the next item, we have ⍴ and that has a match, but when we see ⎕ the match is broken, so we take the ⍴.
2023-09-10 15:57:38 I've barely thought about this - there may be some example case that shows it as no good. Haven't thought of it yet, though.
2023-09-10 16:30:28 In that book he wrote, Chuck proposed something at least kind of along these lines. His notion was that you'd parse out a space-delimited word, and search the dictionary for it. If the search failed, then you'd drop a character from the end and search again. Repeat. If you eventually found a word, you'd execute it and the characters you had to drop to get there would be treated as parameters of some kind.
2023-09-10 16:31:23 Not really the same thing, but similar in the sense that it's a departure from rigorous space delimiting.
2023-09-10 16:46:56 The way I was looking at doing this was modeled along the RPL lines. In RPL, an "algebraic expression" is delimited by single-quotes.
2023-09-10 16:47:13 "program code" is delimited by those << ... >> symbols.
2023-09-10 16:47:31 Anyway, they're different types - they can be processed with different parsers.
2023-09-10 17:19:52 xelxebar: I'd been thinking that the part of this system that does 'algebraic expressions' is where the APL type stuff would come in. However, I think the primary purpose of algebraic expression support in RPL is to allow the solver to work with equations expressed in that form. For example, you might put in 'a*x^2+b*x+c=0' and then you could aim the solver at that - it would offer you up a, b, c, and x as
2023-09-10 17:19:54 the variables and you could set any three of them and solve for the fourth.
2023-09-10 17:20:37 I'm not sure an APL expression would allow that much flexibility. Doesn't mean APL has no value in the system - I'm just not sure if it actually goes in "like that."
2023-09-10 17:20:49 It feels to me like the real benefit of it is the access to array processing.
2023-09-10 17:22:01 So maybe these are different things. Once an array type is supported, it just offers a new set of things one can do with the arrays.
2023-09-10 17:37:58 xelxebar: How would you want to see a multi-dimensional "array editor / browser" work on a calculator?
2023-09-10 17:38:58 For 2D matrices, the DM42 just offers you up/down/left/right cursor motion keys, and it looks like you can "go to" a particular position too.
2023-09-10 17:39:59 For a general dimensionality, though, I think you'd need some more generic type of motion system.
2023-09-10 17:40:44 I doubt anyone would have a lot of interest in ENTERING higher than 2D arrays, but you might generate them via calculations and then want to inspect them.
2023-09-10 17:43:11 I want to be able to work with tensors - I've had that as a target for a while. That's going to impose an extra requirement on the representation, above and beyond what APL needs, because each axis can be contravariant or covariant.
2023-09-10 17:43:57 For example, in general relativity you've got this bad boy:
2023-09-10 17:43:59 https://i.ytimg.com/vi/-Il2FrmJtcQ/maxresdefault.jpg
2023-09-10 18:05:19 Heh - this is fun:
2023-09-10 18:05:21 https://www.google.com/maps/d/viewer?mid=1EVEViVHIuS8nXzhz66b7rEwwRqo&ll=41.901883714102524%2C-87.5588755657483&z=9
2023-09-10 18:05:40 someone painted up a Google Map of the Chicago area with all the pertinent Dresden Files landmarks.
2023-09-10 18:10:09 KipIngram: not evil at all, it feels good cause now i have both hotkeys
2023-09-10 18:10:23 i can use the one i prefer at any moment
2023-09-10 18:10:42 say to save with ctrl+x ctrl+s or :w
2023-09-10 18:10:57 it's cool
2023-09-10 18:24:01 xelxebar: Looks like ?N gives me a random integer from the integers up to N, and ?0 gives me a random number between 0 and 1. What if I want to fill a vector or a matrix with random numbers from 0 to 1?
2023-09-10 18:24:33 3 4⍴?0 gives me 12 copies of the same random number.
2023-09-10 18:47:14 Ah, that works.
2023-09-10 18:47:17 x←3 4⍴(12?1000000000)×0.000000001
2023-09-10 18:47:19 x
2023-09-10 18:47:21 0.182629632 0.217457735 0.422122593 0.184416736
2023-09-10 18:47:23 0.42848243 0.886052393 0.303773321 0.353070613
2023-09-10 18:47:25 0.400099606 0.192657316 0.265074796 0.966209742
2023-09-10 18:47:59 Slightly annoying, though, to have ?0 give a result like I want but have to do those backflips to get an array of them.
2023-09-10 19:17:53 I'm starting to see how Monte Carlo analyses would work well in APL. Start out with a "model" - the math relationships capturing your system dynamics. Then expand that over a vector of random numbers that get used to nudge one of your model parameters. So you get a copy of the analysis all along that new dimension. Then peel the results of interest out of it.
2023-09-10 19:30:30 what RNG does it use?
2023-09-10 19:31:35 Oh, I saw something about that a few minutes ago. A fairly standard one, I think, though I did see articles implying that it's less than perfect. Apparently if you interpret the sequential items that come out of it as N-tuples, then your N-tuples wind up lying in hyperplanes instead of being distributed the way you'd like.
2023-09-10 19:31:49 The advice was to shuffle your samples before grouping them into N-tuples.
2023-09-10 19:32:29 Multiplicative congruential generator.
2023-09-10 19:33:38 x(n+1) = 7^5 * x(n) mod (2^31-1) is in this one paper I'm looking at.
2023-09-10 19:36:54 Oh, nice. After adjusting the real-time clock calibration on my calculator, it's now stayed within about three seconds of the right time since the last time I set it.
2023-09-10 23:13:41 You know, I have a feeling there were heated discussions among the planners of APL over starting arrays at 0 or 1. And in the end no one would give in, so you can do either one.
2023-09-10 23:28:13 KipIngram: The standard idiom is something like ?3 4⍴0. ? automatically lifts to higher arrays, so you just fill an array with the upper bound first.
2023-09-10 23:30:57 Yeah, ⎕IO lets you set the Index Origin and is agreed to be mildly annoying. APL papers usually start out declaring their ⎕IO and ⎕ML convention.
2023-09-10 23:33:05 What's ⎕ML?
2023-09-10 23:34:33 Oh, nice - that's much better than my random matrix snip.
2023-09-10 23:36:39 Oh, I found ⎕ML.
2023-09-10 23:39:09 That's almost more of a disruption than ⎕IO.
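The recurrence quoted above is the classic Lehmer "minimal standard" generator (a = 7^5 = 16807, m = 2^31 - 1). No claim here about which RNG the APL implementation actually ships; this is just the generator from the paper, sketched so the hyperplane remark is easy to play with.

    # Multiplicative congruential generator: x(n+1) = 16807 * x(n) mod (2^31 - 1).
    # Dividing the state by the modulus gives a float in (0, 1).

    M = 2**31 - 1       # 2147483647, a Mersenne prime
    A = 7**5            # 16807

    def lehmer(seed=1):
        x = seed
        while True:
            x = (A * x) % M
            yield x / M

    gen = lehmer(seed=42)
    print([next(gen) for _ in range(6)])

    # Taking successive outputs of a linear congruential generator as N-tuples
    # puts the points on a limited number of hyperplanes (Marsaglia's classic
    # observation), which is why the advice above is to shuffle before pairing.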
2023-09-10 23:46:21 KipIngram: Yeah, Dyalog APL does have historical quirks. In practice, though, almost everyone just uses the default ⎕ML these days. It was originally there to help people migrating from other APL systems.
2023-09-10 23:57:45 Well, the first non-calculator programming I ever did was FORTRAN, and arrays started with index 1. So I got used to that, and then later when I ran into C it seemed odd. These days, though, it seems entirely normal. Hard to overlook the fact that N bits hold 0 through 2^N-1, and not 1 through 2^N.
2023-09-10 23:57:53 hold