@symbolics @weekend_editor @rzeta0 In #PicoLisp probably the shortest way is with 'cnt':
: (cnt t '(a b c d))
-> 4
('t' is a function which always returns 'T')
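(To illustrate: 'cnt' counts the items for which the given function returns non-NIL, so passing a real predicate filters while counting. A minimal sketch, assuming the built-in 'num?':)
: (cnt num? '(1 a 2 b 3))
-> 3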
@simon_brooke Testing bignums in a REPL is noisy due to the huge outputs. In #PicoLisp I then do things like
: (bench (fact 10000)) T
0.163 sec
-> T
because only the result of the last expression is printed
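(For context: 'fact' is not a built-in; a plain recursive definition along these lines would do. This is my sketch, not necessarily the definition used above:)
: (de fact (N)
   (if (=0 N) 1 (* N (fact (dec N)))) )
-> fact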
@Regenaxer @borkdude @vindarel Thanks! Right, so my comparable in-REPL times for an iterative factorial of 1000 are:
#PicoLisp: (bench (apply * (range 1 1000)))
0.000 sec
#Clojure: user=> (time (apply *' (range 1 1000)))
"Elapsed time: 2.428199 msecs"
#SBCL: CL-USER[1]: (time (apply #'* (alexandria:iota 1000 :step 1)))
Evaluation took:
0.000 seconds of real time
0.000015 seconds of total run time (0.000000 user, 0.000015 system)
100.00% CPU
45,990 processor cycles
0 bytes consed
@simon_brooke @borkdude @vindarel It is the #PicoLisp 'bench' function:
$ pil +
: (bench (do 9999999 (* 3 4)))
0.209 sec
-> 12
@borkdude @vindarel @Regenaxer #PicoLisp doesn't have macros, by design. Its `time` function returns the time of day. So while there may be a way of timing a computation in the REPL, I've not found it yet.
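(For what it's worth, one rough way to time an expression by hand is via the built-in 'usec', which reports microseconds since interpreter startup. A sketch, not a recommendation:)
: (let Start (usec)
   (do 9999999 (* 3 4))             # the work being timed
   (prinl (- (usec) Start) " usec") )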
@borkdude @vindarel @Regenaxer Again, this is true and fair. I haven't yet learned enough #PicoLisp to do a comparison timing in the REPL.
What's interesting (to me) is that PicoLisp is also doing recursive computations at very high speeds. I need to explore further but it's an *extremely* impressive system, and I'm amazed I wasn't aware of it before today.
@vindarel That's true. The startup time issue is particularly harsh on #clojure, and @borkdude's #Babashka would probably do a lot better.
But (a) this is very rough timing, and (b) startup time is some sort of proxy for the compactness of the runtime system; and
(c) the thing that's still astounding me is that #PicoLisp is (sort of) an interpreter, while all the others execute compiled code, which bloody well should be faster!
@simon_brooke I would also say that bignums implemented internally in cells are the way to go. I did so in #PicoLisp too. #lisp
@bahmanm @philsplace Yes, definitely. But in this case I was interested in #picolisp, because some of its constructs remind me of Perl, e.g. defining functions with a "variable number of evaluated arguments", using the @ sign and the (next) function (see the sketch below).
In any case, I immediately feel at home in PicoLisp.
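(A minimal sketch of such a variadic definition; 'sum' and 'S' are just illustrative names:)
: (de sum @                 # '@' = any number of evaluated arguments
   (let S 0
      (while (args)         # more arguments left?
         (inc 'S (next)) )  # consume the next one
      S ) )
-> sum
: (sum 1 2 3 4)
-> 10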
While looking into how viable it is to write Android applications in something other than Java, Kotlin or <insert web technology>, I found a neat #picolisp blog explaining Pilbox and adjacent topics: