Writing a SNES assembler compiler/disassembler - Day 2

blogs.perl.org

Published by Sylvain Colinet on Saturday 04 December 2021 19:54

First look at generating grammars

This will be very short even though it took me a lot of time to figure out this part.

In my ASM65816Grammar.rakumod I manually wrote the Number and Addressing grammars, but obviously for the instructions it's not really possible.

General ASM grammar

First let's focus on parsing something simple.

The basic gist of what you can write in an asm file is very short

lda $42 clc adc #3 cmp #0005:beq $4855 ; if $42 + 3 is 5 branch to $4855

You have one instruction per line, or you can have multiple instructions separated with a :, and ; is used to mark a comment.

I don't handle labels for now since I just want to dumbly generate a grammar for all the valid instructions from my instruction list and see if that works well.

The final grammar looks like this:

grammar GrammarASM65816 is InstructionGrammar is export {
    token TOP { <thing>+ }
    token thing {
        || <asm-comment>
        || <instruction-line> <.ws> <.asm-comment>*
        || <instruction> <.ws> <.asm-comment>*
    }
    token instruction-line { <instruction> (<.ws> ':' <.ws> <instruction>)+ }
    token asm-comment { ';' .* }
}

Today I learned

Don't use <.ws>*: <.ws> already has a quantifier, and adding a * makes Raku go on forever. Maybe there should be a warning about that?

InstructionGrammar

Yes, I like to compose my grammar from multiple pieces since I plan to support 2-3 ASM grammars: my own for my Slang, and one compatible with xkas/Asar, a tool used in romhacking (this includes instructions to specify where to inject code in a ROM).

Before generating each token, let's manually write 1-2 tokens to see if that works. I use a proto token for instruction since I will not really care about individual instructions in the Action class.

grammar InstructionGrammar is Addressing {
    proto token instruction {*}
    token instruction:sym<LDA> { :i "LDA" <.ws> <word> }
    token instruction:sym<RTL> { :i "RTL" }
}

The :i adverb makes the token ignore the case.

Let's run this to see if that works:

$ raku -I lib -e 'use ASM65816Grammar; say GrammarASM65816.parse("RTL")'
「RTL」
 thing => 「RTL」
  instruction => 「RTL」

$ raku -I lib -e 'use ASM65816Grammar; say GrammarASM65816.parse("RTL:lda 42;piko")'
「RTL:lda 42;piko」
 thing => 「RTL:lda 42;piko」
  instruction-line => 「RTL:lda 42」
   instruction => 「RTL」
   0 => 「:lda 42」
    instruction => 「lda 42」
     word => 「42」

We don't see the ;piko since it's not captured in the grammar. Also my grammar for word is not really right: 42 is not a word but a byte. This is fine for now :)

Generating the instruction tokens

From this advent post https://perl6advent.wordpress.com/2015/12/08/day-8-grammars-generating-grammars/ that generates a Raku grammar from a BNF grammar, it looks possible. We need to use ^add_method and EVAL to add our tokens.

Let's do only simple instructions like TXA that do not take an argument.

sub gen-instru {
    for @ASM65816::instructions -> $instruct {
        my $token-name = $instruct.inst ~ '-' ~ $instruct.addressing.Str;
        if $instruct.addressing == IMPLIED {
            InstructionGrammar.^add_method("instruction:sym<{ $instruct.inst }>",
                EVAL "my token instruction:sym<{ $instruct.inst }>" ~ '{:i "' ~ $instruct.inst ~ '"}');
        }
    }
}

And....

$ raku -I lib -e 'use ASM65816Grammar; say GrammarASM65816.parse("TXA")'
TOP
|  thing
|  |  asm-comment
|  |  * FAIL
|  |  instruction-line
|  |  |  instruction
|  |  |  * FAIL
|  |  * FAIL
|  |  instruction
|  |  * FAIL
|  * FAIL
* FAIL
Nil

I added Grammar::Tracer to get more details, and it does not work. You can probably tell me, "But wait, in the article they create a new grammar and compose it at the end, maybe that's why it does not work?"

I don't really know; let's have a look at whether both grammars have the right method:

# After the Grammar definitions
say "Instruction Grammar - Name : ", $_.name, " Method :", $_ if $_.name ~~ /TXA/ for InstructionGrammar.^methods;
say "Grammar ASM65816 - Name : ", $_.name, " Method :", $_ if $_.name ~~ /TXA/ for GrammarASM65816.^methods;

$ raku -I lib -e 'use ASM65816Grammar; say GrammarASM65816.parse("TXA")'
Instruction Grammar - Name : instruction:sym<TXA> Method :token instruction:sym<TXA>{:i "TXA"}
Grammar ASM65816 - Name : instruction:sym<TXA> Method :token instruction:sym<TXA>{:i "TXA"}

So yes, the tokens are here, but something is probably missing for them to be real tokens.

So for now I opted to generate the lines defining the instruction tokens and copy/paste them into the file. It's not really great if I want to tweak the whole grammar.

(ccclxx) 12 great CPAN modules released last week

Niceperl

Published by Unknown on Saturday 04 December 2021 17:28

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Alien::Build - Build external dependencies for use in CPAN
    • Version: 2.46 on 2021-11-30
    • Votes: 19
    • Previous version: 2.45 was 1 month, 2 days before
  2. App::Netdisco - An open source web-based network management tool.
    • Version: 2.052000 on 2021-11-30
    • Votes: 13
    • Previous version: 2.051005 was 5 days before
  3. Crypt::JWT - JSON Web Token
    • Version: 0.034 on 2021-11-28
    • Votes: 18
    • Previous version: 0.033 was 6 months, 27 days before
  4. LWP - The World-Wide Web library for Perl
    • Version: 6.59 on 2021-12-02
    • Votes: 153
    • Previous version: 6.58 was 1 month, 8 days before
  5. MCE - Many-Core Engine for Perl providing parallel processing capabilities
    • Version: 1.876 on 2021-12-02
    • Votes: 89
    • Previous version: 1.875 was 16 days before
  6. MCE::Shared - MCE extension for sharing data supporting threads and processes
    • Version: 1.875 on 2021-12-03
    • Votes: 14
    • Previous version: 1.873 was 1 year, 4 months, 2 days before
  7. Minion::Backend::mysql - MySQL backend
    • Version: 0.30 on 2021-11-30
    • Votes: 12
    • Previous version: 0.29 was 5 months, 23 days before
  8. Mojo::SQLite - A tiny Mojolicious wrapper for SQLite
    • Version: 3.008 on 2021-11-28
    • Votes: 24
    • Previous version: 3.007 was 3 months, 26 days before
  9. Selenium::Remote::Driver - Perl Client for Selenium Remote Driver
    • Version: 1.46 on 2021-12-04
    • Votes: 44
    • Previous version: 1.45 was 1 month, 14 days before
  10. Sys::Virt - libvirt Perl API
    • Version: v7.10.0 on 2021-12-01
    • Votes: 15
    • Previous version: v7.9.0 was 29 days before
  11. Test2::Suite - Distribution with a rich set of tools built upon the Test2 framework.
    • Version: 0.000144 on 2021-12-03
    • Votes: 31
    • Previous version: 0.000142 was 18 days before
  12. Yancy - The Best Web Framework Deserves the Best CMS
    • Version: 1.085 on 2021-12-04
    • Votes: 42
    • Previous version: 1.084 was 1 month, 1 day before

(cdxciii) metacpan weekly report - Mojolicious

Niceperl

Published by Unknown on Saturday 04 December 2021 17:26

This is the weekly favourites list of CPAN distributions. Votes count: 26

Week's winner: Mojolicious (+2)

Build date: 2021/12/04 16:26:06 GMT


Clicked for first time:


Increasing its reputation:

100 Languages Speedrun: Episode 15: Awk

dev.to #perl

Published by Tomasz Wegrzanowski on Saturday 04 December 2021 14:49

Awk is basically a proto-Perl. As Perl was one of the most influential languages of all time (JavaScript, Ruby, and PHP are all Perl's direct descendants), Awk is indirectly quite historically important.

There hasn't been any good reason to use Awk for decades now. As I keep saying over and over, if you write anything nontrivial, just use a real programming language like Ruby, Python, or Perl. But it's still interesting for historical reasons, so let's check what coding was like back in the 1980s.

Hello, World!

Awk scripts are a series of pattern { command } rules, where pattern is most often a regular expression. The command runs for every input line that matches the pattern.

Here's one way to say Hello, World! in Awk:

#!/usr/bin/awk -f

/./ { print "Hello, " $1 "!" }
$ seq 1 5 | ./hello.awk
Hello, 1!
Hello, 2!
Hello, 3!
Hello, 4!
Hello, 5!
$ ./hello.awk
World
Hello, World!
Bob Ross
Hello, Bob!

So any line that contains non-whitespace characters will result in a hello. String concatenation is done by just putting a few strings next to each other. "Hello, " $1 "!" is what would be "Hello, " + $1 + "!" or "Hello, " . $1 . "!" or such in a more reasonable language.

Each line is $0, and it's also automatically split into words, so $1 means the first word of the currently processed line, $2 means the second word, etc. Those special variables are used for the regular expression's first, second, etc. match in Perl, Ruby, and some other languages, and I think that's where they came from.

Sum numbers from STDIN

There are some other patterns like BEGIN and END to do before and after processing lines. Here's a simple Awk program for adding all numbers, one per line:

#!/usr/bin/awk -f

BEGIN { x = 0 }
/[0-9]+/ { x += $1 }
END { print x }

Which works like this:

$ seq 10 20 | ./sum.awk
165

Awk has pre-Perl regular expressions, so things like \d don't work. That's another reason why it's better to use something more modern.

Awk's BEGIN { } and END { } blocks are still present in Perl, Ruby, and some other languages.
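
For comparison, here is a rough Perl one-liner equivalent of the sum script above (just a sketch, assuming one number per line on STDIN, to illustrate the shared END idea):

$ seq 10 20 | perl -ne '$x += $_ if /[0-9]+/; END { print "$x\n" }'
165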

FizzBuzz with regexp

One way to do the FizzBuzz is to reuse our regular expressions from episode 7. At first you might think each command block would just do { print "FizzBuzz" } or such, but then all the other blocks would match too (a number divisible by 15 is also divisible by 3 and 5, etc.). An easy way out is to modify the $0 variable (the current line). We print it at the end.

#!/usr/bin/awk -f

/^(([0369]*[147]([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|[0369]*[258])(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|([147][0369]*[258]|[0369]?))*(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*[258][0369]*|[147][0369]*)|([0369]*[147]([258][0369]*[147]|[0369])*[258][0369]*|[0369]*))0$/ { $0="FizzBuzz" }
/^(([0369]*[147]([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|[0369]*[258])(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|([147][0369]*[258]|[0369]?))*(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*([258][0369]*[147]|[0369]?)|([147][0369]*[147]|[258]))|([0369]*[147]([258][0369]*[147]|[0369])*([258][0369]*[147]|[0369]?)|[0369]*[147]))5$/ { $0="FizzBuzz" }
/^.*[05]$/ { $0="Buzz" }
/^(([0369]*[147]([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|[0369]*[258])(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*([258][0369]*[258]|[147])|([147][0369]*[258]|[0369]?))*(([147][0369]*[147]|[258])([258][0369]*[147]|[0369])*[258][0369]*|[147][0369]*)|([0369]*[147]([258][0369]*[147]|[0369])*[258][0369]*|[0369]*))$/ { $0="Fizz" }
/./ { print $1 }

To use it:

$ seq 1 20 | ./fizzbuzz.awk
1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz

FizzBuzz

A less ridiculous version would be this:

#!/usr/bin/awk -f

$0 % 15 == 0 { print "FizzBuzz"; next }
$0 % 5 == 0 { print "Buzz"; next }
$0 % 3 == 0 { print "Fizz"; next }
{ print }

Any expression can be used as a pattern. next prevents all other pattern checks for the current line.

File output

Awk makes it really easy to print to files. This script sorts the input into odd.txt and even.txt:

#!/usr/bin/awk -f

/[13579]$/ { print >"odd.txt" }
/[02468]$/ { print >"even.txt" }

Like in the shell, > means overwrite the file, and >> means append. But while it might look like it will keep reopening and overwriting the file so you only see the last line, each file will be opened just once:

$ seq 20 30 | ./file_output.awk
$ cat odd.txt
21
23
25
27
29
$ cat even.txt
20
22
24
26
28
30

And print without arguments is the same as print $0.

Pipe output

Even nicer, we can do similar redirection with pipes:

#!/usr/bin/awk -f

/[13579]$/ { print | "tac" }

Which matches all the lines with odd numbers and sends them to the tac program to print them in reverse order.

$ seq 10 30 | ./reverse_odds.awk
29
27
25
23
21
19
17
15
13
11

Fibonacci

Awk has normal function definitions. There's no distinction between number and string variables. If we put a command block without a pattern, it will match every line.

#!/usr/bin/awk -f

function fib(n) {
  if (n <= 2) {
    return 1;
  } else {
    return fib(n - 1) + fib(n - 2);
  }
}

{ print fib($1) }

Which does:

$ seq 1 20 | ./fib.awk
1
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
2584
4181
6765

Rolling Dice

Awk has some trouble with command line arguments - it normally treats them as files to open. This code only works because we don't actually have any per-line patterns.

#!/usr/bin/awk -f

BEGIN {
  for(i=0; i<ARGV[2]; i++) {
    print int(rand() * ARGV[1]);
  }
}

We can use it to roll 5 100-sided dice:

$ ./dice.awk 100 5
84
39
78
79
91

Tally

Awk has associative arrays (nowadays usually called hashes or dictionaries).

#!/usr/bin/awk -f

{ tally[$0]++ }

END {
  for(n in tally) {
    print n, tally[n]
  }
}

Awk has no way to print regular arrays or associative arrays. If you try to print tally it will give you an error. It's another feature of modern programming languages that has roots in times of Awk, but is now done in much better ways.

$ ./dice.awk 6 100 | ./tally.awk
2 17
3 18
4 17
5 22
0 13
1 13
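
For comparison, the same tally is a one-liner in a language like Perl, using a hash in exactly the same way (a sketch; the output has the same format as above, just sorted by face):

$ ./dice.awk 6 100 | perl -ne 'chomp; $tally{$_}++; END { print "$_ $tally{$_}\n" for sort keys %tally }'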

Should you use Awk?

No.

Special purpose languages have their place, but what Awk is doing - processing text files - is no longer "special purpose". Pretty much every modern language excels at processing text files and matching regular expressions, and handles everything Awk does a lot better.

Awk made a lot of sense back when it originated, as C was godawful at text processing, and the Unix shell was godawful at writing any kind of structured program, so Awk was addressing an obvious need. In modern times, where every programmer is familiar with a language like Ruby, Python, Perl, or pretty much anything else that can process text, there's no place for Awk.

The language also definitely shows its age. Its regular expression engine is bad. It doesn't have a console.log equivalent. It can't handle common text formats like CSV or JSON. It doesn't have sufficient Unicode capabilities. And so on. It does quite decently on conciseness, but only if you write exactly the kind of programs it likes - common requirements like parsing command line arguments will not work too well.

Awk is mainly of historical relevance, but it's not completely dead yet. If you work with a lot of Unix shell scripts, short Awk programs will occasionally be used there. I don't approve of this at all (seriously, just use a real programming language like Ruby, Python, or Perl), but it might be useful to learn the basics of Awk so you can read such shell code.

Code

All code examples for the series will be in this repository.

Code for the Awk episode is available here.

Geizhals Preisvergleich sponsors the German Perl/Raku Workshop 2022

blogs.perl.org

Published by Max Maischein on Saturday 04 December 2021 14:33

In 2022, the German Perl/Raku Workshop will take place in Leipzig. We are very happy to announce that long-time Perl supporter Geizhals Preisvergleich is sponsoring the workshop.

Geizhals Preisvergleich is one of the largest product and price comparison platforms in the German-speaking world.

Our company has been proving itself for 20 years now in a highly competitive, dynamic industry.

We attribute our success to the uncompromising implementation of our company philosophy: transparency, objective and user-oriented information, and the quality of our content.

Geizhals Preisvergleich is a company affiliated with the Heise Medien Gruppe.

Geizhals is hiring

Weekly Challenge 141

dev.to #perl

Published by Simon Green on Saturday 04 December 2021 10:39

Challenge, My solutions

TASK #1 › Number Divisors

Task

Write a script to find lowest 10 positive integers having exactly 8 divisors.

My solutions

The main part of this challenge is calculating the number of divisors a given number has. This is one of those cases where you can over-optimize the code for no real benefit. My function divisors assumes that a number has two divisors (1 and itself) and then goes from 2 to half the number to see if there are any other divisors. A special case is made for one, which only has one divisor.

Once that part is done, the main function has a list called solutions and sets number to 1. It then increments number and adds it to the solutions list if it has eight divisors, continuing until we have 10 numbers.

The Perl solution is a transliteration of the Python code.
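
A minimal Perl sketch of that approach (my own illustration, not necessarily identical to the actual ch-1.pl) could look like this:

#!/usr/bin/perl
use strict;
use warnings;

# Count divisors by trial division: assume 1 and the number itself,
# then test every candidate from 2 up to half the number.
sub divisors {
    my ($n) = @_;
    return 1 if $n == 1;    # special case: 1 has a single divisor
    my $count = 2;
    for my $d ( 2 .. int( $n / 2 ) ) {
        $count++ if $n % $d == 0;
    }
    return $count;
}

my @solutions;
my $number = 1;
while ( @solutions < 10 ) {
    $number++;
    push @solutions, $number if divisors($number) == 8;
}
print join( ', ', @solutions ), "\n";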

Examples

$ ./ch-1.py
24, 30, 40, 42, 54, 56, 66, 70, 78, 88

$ ./ch-1.pl
24, 30, 40, 42, 54, 56, 66, 70, 78, 88

TASK #2 › Like Numbers

Task

You are given positive integers, $m and $n.

Write a script to find total count of integers created using the digits of $m which is also divisible by $n.

Repeating of digits are not allowed. Order/Sequence of digits can’t be altered. You are only allowed to use (n-1) digits at the most. For example, 432 is not acceptable integer created using the digits of 1234. Also for 1234, you can only have integers having no more than three digits.

My solutions

In Python I use itertools' combinations function to generate a list of all combinations. I then use a for loop to count the number of combinations that are divisible by n.

Perl doesn't have a built-in combination function, so I took a slightly different tack with this task. I basically used a binary switch to generate a list of combinations using map { substr( $m, $_, 1 ) } grep { $i & 2**$_ } ( 0 .. $l - 1 ), where $l is the length of the first number ($m) and $i is between 1 and 2**$l - 2.
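
Here is a hedged Perl sketch of that bitmask idea (again my own illustration, not necessarily the actual ch-2.pl):

#!/usr/bin/perl
use strict;
use warnings;

my ( $m, $n ) = @ARGV;
my $l     = length $m;
my $count = 0;

# Each $i selects a subset of digit positions; stopping at 2**$l - 2
# excludes the full number, since at most $l - 1 digits are allowed.
for my $i ( 1 .. 2**$l - 2 ) {
    my $candidate = join '',
        map  { substr( $m, $_, 1 ) }
        grep { $i & 2**$_ } ( 0 .. $l - 1 );
    $count++ if $candidate % $n == 0;
}

print "$count\n";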

Examples

$ ./ch-2.py 1234 2
9

$ ./ch-2.py 768 4
3

$ ./ch-2.pl 1234 2
9

$ ./ch-2.pl 768 4
3

Grant report: Optimizations building upon the new Raku dispatch mechanism

Perl Foundation News

Published by Matthias Bloch on Saturday 04 December 2021 05:27

Jonathan writes:


My latest grant proposal was recently approved. I had the possibility to dedicate quality time to Raku work at the start of November, but knew I would be tied up with some other work in the latter part of the month. Thus I optimistically forged ahead with some grant work while I could, crossing my fingers for approval. This report covers what got done.

The main completed task was to reorganize and streamline return and stack unwind handling. My work on the new dispatch mechanism introduced a new callstack layout. This created an opportunity to simplify the way we handle stack unwinding - that is, removing frames either because we are returning or because of an exception. Since this happens for every single non-inlined block or routine that we call, savings here have an impact on all but the most micro of micro-benchmarks. For example, a recursive Fibonacci benchmark (written recursively to frustrate inlining) showed a 5% improvement from this work. The work also eliminated some malloc calls in favor of callstack allocation in a number of situations, and resulted in overall simpler and smaller code in MoarVM. Faster and simpler is certainly welcome.

I also did some optimization on frame invocation, primarily by splitting the specialized and unspecialized callframe setup paths, which allowed for eliminating a number of branches that the C compiler was not able to. This new factoring also revealed an opportunity to fold two memset calls into one, which was also a welcome saving. This was worth a further 3% off the recursive Fibonacci benchmark. (To give a picture of how Raku compares with Perl in this benchmark, Raku runs it in around two thirds of the time, despite the fact that it has to cope with the potential upgrade of Int to a big integer.)

A central goal for the grant as a whole is to make progress on escape analysis. When I worked on this previously, a particular challenge was the reliance on attribute containers vivifying (getting allocated) upon first touch. This was not so much an optimization as a means to determine if an attribute had been initialized, for the purpose of running defaults. Unfortunately, however, it greatly complicates the escape analysis of object graphs at creation time, and makes all attribute access a little more costly. (One could also get occasionally surprised by the fact that reading an attribute during a constructor would count as initializing it too.) Thus, I started working on a new approach, based upon container descriptors, which are also the mechanism used in array and hash element auto-vivification. The work in progress is currently a pull request, which needs further work to analyze why it causes regressions in a small number of modules; this new approach does, however, already pass the specification test suite.

I also did some design work for a faster and simpler way to handle LEAVE blocks. Today they carry quite some performance overhead: we are unable to ever inline them, and needing to support them imposes a small, but non-zero, cost on the exit of every callframe, regardless of whether it has a LEAVE block. The new design I have worked on should fix all of these issues, and I hope to implement it during December. Even if LEAVE is rarely directly used, it plays an important part in ensuring locks are reliably released, and so is used implicitly in many programs.

Finally, I also tracked down and fixed a bug in the intersection of dispatch resumption and inlining.

Time spent on the grant: 30 hours 52 minutes
Time remaining on the grant: 169 hours 8 minutes

Writing a SNES assembler compiler/disassembler - Day 1

blogs.perl.org

Published by Sylvain Colinet on Saturday 04 December 2021 01:02

Writing a SNES assembler compiler/disassembler

Why? Because I can. More seriously, I have a project where I need to inject new SNES code into a running game, and I want to express this new code directly in my Raku component (a webserver service). I want a special sub that returns SNES bytecode but whose body contains SNES assembler.

I already tried injecting a Slang in Raku, like writing my $byte-code = SNES lda $42; sta $54; rtl;, but it's rather tricky and I will probably just have an additional Slang with its own grammar in a dedicated file.

use SNES-ASM;

sub unlock-door (%door-id) {
    lda #%door-id
    sta $12
    jmp $4565
    rtl
}

And later in code, I can just do my $unlock-bytecode = SNES::unlock-door(42)

I could just write a custom grammar and have an existing library (libasar) generate the bytecode for me. But since I will write the first part of an assembler (parsing and validating code) anyway, why not write a complete assembler?

A byte on the Snes ASM

The SNES only has one accumulator (A) and 2 index registers. Most instructions work on these 3 storage spaces (in 8 or 16 bit mode).

lda loads a value into A, sta stores the value of A into an address. Numbers can be written like 42 or, more commonly, with a $ prefix to indicate a hexadecimal value, like $20. A word is 2 bytes long, a long is 3 bytes long.

Generating instructions

Since I don't want to type the whole instruction set and its associated bytecode by hand, I will use the table I refer to when I write SNES code (sometimes I question my sanity).

With Gumbo and the XML module, I generate a list of instructions from https://wiki.superfamicom.org/65816-reference

@instructions.push(Instruction.new(:inst("ADC"), :addressing(DP-INDEXED-INDIRECT-X), :description("Add With Carry"), :byte(0x61), :alias("")));
@instructions.push(Instruction.new(:inst("ADC"), :addressing(STACK-RELATIVE), :description("Add With Carry"), :byte(0x63), :alias("")));
@instructions.push(Instruction.new(:inst("ADC"), :addressing(DIRECT-PAGE), :description("Add With Carry"), :byte(0x65), :alias("")));

This table is not complete because some instructions are ambiguous in their normal form. Something like ldx 42 could be compiled differently depending on whether you encode 42 as a word or a byte, so I will need to add some stuff later.

Addressing is a generated Enum

Everything is put in an ASM65816.rakumod file.

Rant Time - HashSet

When generating this, the addressing part was put in a Set since I want to generate an enumeration from it.

Why does a mutable Set have to be a Hash and not just a regular Array? It makes sense if you look at how to implement this, since in a Hash each key is unique, but I don't get why this has to be exposed this way to the user. Having to write for $myset.keys -> $entry { do stuff } feels so wrong and dumb.

Addressing the Addressing

An instruction is basically something like <keyword> <addressing>. Addressing is what you are trying to affect with the instruction. Some examples:

  • Nothing : rtl
  • Constant/Immediate : lda #42 put 42 in A
  • Address/Absolute : lda $4545 put the value of the address $4545 in A
  • Indirect : lda ($4545) put the value of the address pointed by $4545
  • Indexed X : lda $42, X put the value of the address $42 plus the value of the X register in A

You probably saw DP/Direct Page in the example from the instruction table. Direct Page is a special range of addresses that is basically the beginning of the RAM (WRAM) of the SNES.

We can already write the grammar for all the addressing modes and what I called Number (byte, word, dp, etc.):

grammar Number {
    token byte { | '$' <xdigit> ** 1..2 | \d+ <?{ $/ < 0x100 }> }
    token word { | '$' <xdigit> ** 3..4 | \d+ <?{ $/ < 0x10000 }> }
    token long { | '$' <xdigit> ** 1..6 | \d+ <?{ $/ < 0x1000000 }> }
    token bank { <byte> }
    token dp { <byte> }
    token pc-relative { | '$' <xdigit> ** 1..4 | \d+ <?{ $/ < 0x10000 }> }
    token pc-relative-long { | '$' <xdigit> ** 4..6 | \d+ <?{ $/ < 0x1000000 }> }
};

I will probably rename these because they are basically what I need to encode after the instruction opcode.

This is part of the Addressing grammar. Absolute is a word, since an address that is < $100 is Direct Page.

grammar Addressing is Number {
    token ABSOLUTE { <word> }
    token ABSOLUTE-INDEXED-INDIRECT { '(' <word> ',' 'X' ')' }
    token ABSOLUTE-INDEXED-X { <word> ',' 'X' }
    token ABSOLUTE-INDIRECT { '(' <word> ')' }
    token ABSOLUTE-LONG { <long> }
    token ACCUMULATOR { 'A' }
    token DP-INDIRECT-LONG { '[' <dp> ']' }
    token DP-INDIRECT-LONG-INDEXED-Y { '[' <dp> ']' ',' 'Y' }
    token DIRECT-PAGE { <dp> }
    token IMMEDIATE { | '#' <word> | '#' <byte> }
    token IMMEDIATE-BYTE { '#' <byte> }
    token IMMEDIATE-WORD { '#' <word> }
    token PROGRAM-COUNTER-RELATIVE { <pc-relative> }
    ....
}

Developing A Game Engine with Perl: Part 7 - Fork

dev.to #perl

Published by Shawn Holland on Friday 03 December 2021 23:34

Pssssst... I DO NOT KNOW WHAT I AM DOING.

If you want to start reading from the beginning, check out the first article in this series.

In our last post I talked about how ANSI Game Engine is a colourful telnet server. We left off needing to fork the engine's telnet server.

Player 2 has joined the game!

Time to level up our telnet server and make it multi-player with some knify forky.


I've added the strftime identifier from Perl's POSIX module to help with timestamping the output. The setsid identifier is for starting a new session and group ID for each forked process, a.k.a. the child process. :sys_wait_h is for returning without waiting after the child process has exited, using the WNOHANG flag when calling waitpid(). This provides a non-blocking wait for all pending zombie children.

Zombie Attack!!!

You see, when a process dies (exits), it becomes a zombie and needs to be reaped. This will be done when our parent process calls waitpid after receiving a CHLD signal, indicating the child has stopped or terminated.

Ok, I hope that will give you enough information to work with while dissecting the code:

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket::INET;
use POSIX qw(setsid);
use POSIX qw(strftime);
use POSIX ":sys_wait_h";

sub timestamp {
    my $epoc_seconds = time();
    my $time = strftime "%H:%M:%S", localtime($epoc_seconds);
    my $date = strftime "%m/%d/%Y", localtime;
    my $return = $date . " " . $time;
    return ($return);
}

sub logmsg { print timestamp . " -> $0 -> PID:$$: @_ \n" }
logmsg "Begin";

my $socket = new IO::Socket::INET (
    LocalHost => '192.168.1.15',
    LocalPort => '27777',
    Proto => 'tcp',
    Listen => SOMAXCONN,
    ReuseAddr => 1
);

my $waitedpid = 0;
my $player_data;
my $player_socket;

sub REAPER {
    local $!;   # don't let waitpid() overwrite current error
    logmsg "Ending Player's Game";
    while ((my $pid = waitpid(-1, WNOHANG)) > 0 && WIFEXITED($?)) {
        logmsg "Closed Game ID:$pid : WaitPid:$waitedpid : " . ($? ? " with exit $?" : "");
    }
    $SIG{CHLD} = \&REAPER;  # loathe SysV
}
#if we get the CHLD signal call REAPER sub
$SIG{CHLD} = \&REAPER;

logmsg "Ready and waiting for connection";
while(1)
{
    next unless $player_socket = $socket->accept();
    logmsg ("Incomming Connection");
    logmsg ("Spawning Player A Game");
    my $pid = fork();

    next if $pid; #NEXT if $pid exists (parent)

    #As Child
    setsid();
    my $proc = $$;

    logmsg ("Game ID:$proc -> Ready");

    # get information about a newly connected player
    my $player_address = $player_socket->peerhost();
    my $player_port    = $player_socket->peerport();
    logmsg "Game ID:$proc -> Connection from $player_address:$player_port";

    my $response = "Welcome Player: $player_address:$player_port. Press any key to disconnect.";
    $player_socket->send($response);

    while ($player_socket->connected()) {
        $player_socket->recv($player_data, 1024);
            if ($player_data) {
                logmsg "Player Disconnecting $player_address : $player_port";
                $socket->close();
                logmsg "Player Disconnected";
                last;
            }
    }
    last;
}
exit;

Running this code and connecting with two players via SyncTERM, our client of choice, shows the following:

localhost:~/ANSIGameEngine # perl forking_telnet_server.pl 
12/03/2021 18:16:58 -> forking_telnet_server.pl -> PID:15978: Begin 
12/03/2021 18:16:58 -> forking_telnet_server.pl -> PID:15978: Ready and waiting for connection 
12/03/2021 18:17:04 -> forking_telnet_server.pl -> PID:15978: Incomming Connection 
12/03/2021 18:17:04 -> forking_telnet_server.pl -> PID:15978: Spawning Player A Game 
12/03/2021 18:17:04 -> forking_telnet_server.pl -> PID:15979: Game ID:15979 -> Ready 
12/03/2021 18:17:04 -> forking_telnet_server.pl -> PID:15979: Game ID:15979 -> Connection from 192.168.1.9:33422 
12/03/2021 18:17:08 -> forking_telnet_server.pl -> PID:15978: Incomming Connection 
12/03/2021 18:17:08 -> forking_telnet_server.pl -> PID:15978: Spawning Player A Game 
12/03/2021 18:17:08 -> forking_telnet_server.pl -> PID:15980: Game ID:15980 -> Ready 
12/03/2021 18:17:08 -> forking_telnet_server.pl -> PID:15980: Game ID:15980 -> Connection from 192.168.1.9:33428 
12/03/2021 18:17:11 -> forking_telnet_server.pl -> PID:15979: Player Disconnecting 192.168.1.9 : 33422 
12/03/2021 18:17:11 -> forking_telnet_server.pl -> PID:15979: Player Disconnected 
12/03/2021 18:17:11 -> forking_telnet_server.pl -> PID:15978: Ending Player's Game 
12/03/2021 18:17:11 -> forking_telnet_server.pl -> PID:15978: Closed Game ID:15979 : WaitPid:0 :  
12/03/2021 18:17:13 -> forking_telnet_server.pl -> PID:15980: Player Disconnecting 192.168.1.9 : 33428 
12/03/2021 18:17:13 -> forking_telnet_server.pl -> PID:15980: Player Disconnected 
12/03/2021 18:17:13 -> forking_telnet_server.pl -> PID:15978: Ending Player's Game 
12/03/2021 18:17:13 -> forking_telnet_server.pl -> PID:15978: Closed Game ID:15980 : WaitPid:0 :  


How it all works

The main (parent) process that accepts new incoming telnet requests is PID:15978 in the above example. After it sets up the listen server, it waits for a connection request and creates a forked (child) process when a new player connects. The code distinguishes the parent (the main waiting telnet server) process from the child (player) process by the value fork() returns. The parent process receives the child's (player) PID as the return value of fork(), so it loops back up and waits for another player to connect. The child (player) process receives a value of 0 from fork(), so it continues downward in the code. In Perl, if ($pid) does NOT evaluate to true when $pid is 0 or undef, which is what the child (player) process receives as the return value of fork(). We give the child (player) process a new session, record its PID ($$) and wait for the player to press any key. When the player presses a key the socket is closed and the child (player) process exits and becomes a zombie. This is when the parent (main) process receives the CHLD signal ($SIG{CHLD}) and calls REAPER.
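
As a minimal illustration of that parent/child split (just a sketch, not the engine's code), the return value of fork() can be checked like this:

#!/usr/bin/perl
use strict;
use warnings;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid) {
    # Parent: fork() returned the child's PID, so this branch runs.
    print "parent $$ spawned child $pid\n";
}
else {
    # Child: fork() returned 0, so we fall through to the child-only code.
    print "child $$ running\n";
    exit;
}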

How about you?

Have you worked with fork before? Have you unleashed a zombie apocalypse forgetting to reap? Comment about your experience, I'd love to hear your stories.

If you have any suggestions or comments please share constructively. Also please visit our social media pages for lots of fun videos and pictures showing the game engine in action.

ANSI Game Engine on Instagram
ANSI Game Engine on Facebook

Prev << Part 6 - A Colourful Telnet Server
Next >> Coming Soon

Cheers!
Shawn

Developing A Game Engine with Perl: Part 5 - 32bit -> 64bit & Perl's Storable

blogs.perl.org

Published by Shawn Holland on Thursday 02 December 2021 08:42

If you haven't heard already... I DO NOT KNOW WHAT I AM DOING.

If you want to start reading from the beginning, check out the first article in this series.

Continuing on from the last article, let's talk about changing system architecture and how that can affect Perl code, specifically Storable.

  • Mouse Input Support
  • Hardware Failure
  • Server Upgrade
  • UEFI vs OpenSuSE Installer
  • 32bit -> 64bit & Perl's Storable

After the old server had HDD failures and I finally managed to upgrade to the new server, I eagerly began getting all the services and software installed to test the engine on the new hardware. I was quickly met with an error I had never seen before.

Byte order is not compatible at /usr/lib/perl5/5.34.0/x86_64-linux-thread-multi/Storable.pm

After doing some quick reading, I came to understand that Perl uses architecture-specific ways to save content to files when using Storable, specifically if you use lock_store and store. These are part of Perl's core system and what I use throughout the engine for working with the file structure.

I had to carefully re-read the perldocs to discover that you can avoid architecture incompatibility by simply using nstore and lock_nstore. The method you use for retrieving the stored files doesn't matter; only the method used when storing the data into files does.

I tried to find ways of being able to convert the stored files from 32bit architecture to 64bit, but ultimately the only real option was to use the old server to re-store the files with lock_nstore.

Luckily the old PC was still working, and I was able to modify the code and re-store all the files. The change and usage was simple:

I had to change my use statement from:
use Storable qw(lock_store lock_retrieve);
to
use Storable qw(lock_nstore lock_retrieve)

And to re-store I changed my code from this:
lock_store($data_ref, $dir_location);
to
lock_nstore($data_ref, $dir_location);
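
For illustration, a minimal, self-contained round trip with the network-order functions might look like this (the path and data below are made up for the example):

#!/usr/bin/perl
use strict;
use warnings;
use Storable qw(lock_nstore lock_retrieve);

my $dir_location = '/tmp/player_state.stor';           # hypothetical path
my $data_ref     = { name => 'Player1', hp => 100 };   # hypothetical data

# lock_nstore writes in network byte order, so the file can later be
# read back with lock_retrieve on a machine with a different byte order.
lock_nstore( $data_ref, $dir_location );

my $restored = lock_retrieve($dir_location);
print "$restored->{name} has $restored->{hp} hp\n";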

Have you ever had to write code for architecture compatibility? Or fix code migrated from one architecture to another? What do you prefer for storing data to files in Perl?

If you have any suggestions or comments please share constructively. Also please visit our social media pages for lots of fun videos and pictures showing the game engine in action.

ANSI Game Engine on Instagram
ANSI Game Engine on Facebook

Prev << Part 4 - UEFI vs OpenSuSE Installer
Next >> Part 6 - A Colourful Telnet Server

Cheers!
Shawn

Developing A Game Engine with Perl: Part 6 - A Colourful Telnet Server

blogs.perl.org

Published by Shawn Holland on Friday 03 December 2021 06:25

I'll stop reminding you that... I DO NOT KNOW WHAT I AM DOING.

If you want to start reading from the beginning, check out the first article in this series.

What is ANSI Game Engine?

Well, at its core, ANSI Game Engine is a very colourful and interactive telnet server.

Why telnet!?

I know, right!? I still ask myself the same question today, but at this point, I'm kinda committed. Initially, it all came down to my decision NOT to code the client side. I looked into it, and it just wasn't something I wanted to do. And honestly, it seemed too far outside my comfort level with Perl. I feel much more comfortable with, and interested in, working with server side code, as may be true for other Perl devs out there. That's where Perl is quite prolific. Since I was going with ANSI graphics, they are best known these days, for me at least, for being used in telnet/terminal clients and BBSs, which are mainly telnet based now. That being said, it wasn't long searching Google before I came across SyncTERM. SyncTERM is, in my opinion, the best available choice for cross-platform rendering of ANSI graphics over telnet. I've tried many different clients on Mac, Windows, and Linux. SyncTERM works the most consistently across these platforms, it's been around for a long time, and it is still actively being developed. So telnet it is!

Telnet.... Fork me...

The engine has to be multiplayer. Depending on the specific game requirements, this does not necessarily mean players interacting with each other, but at least having multiple players connected at the same time to the same server. This is a minimum initial requirement. To me, this is a great opportunity to work with Fork. I really like the idea of creating multiple processes with fork that can interact with each other. This is an area I have little experience with and a great opportunity to learn.

Have you heard of Perl's CPAN?

If you are a Perl dev, obviously yes. If you code in another language, perhaps you've heard of it. Comprehensive Perl Archive Network, or CPAN, is Perl's repository of code modules. If you want to do something, chances are someone has already made a module for that. As of writing this article, CPAN currently has 208,034 Perl modules written by 14,179 authors. It's absolutely amazing!

This is where I found the modules to develop a telnet server. A forking telnet server, can you believe it? Fork yah! Okokok.... I'll stop making that joke, I promise!

Let's get to coding!

The first step was to get a telnet server that could accept multiple simultaneous connections. I read up on how to make a telnet server with Perl, and from that reading I quickly learned about IO::Socket::INET. It wasn't long before I had working code for a telnet server!

use strict;
use warnings;
use IO::Socket::INET;
print "\nBEGIN\n";

print "Setting up listen socket\n";
my $socket = new IO::Socket::INET (
    LocalHost => '192.168.1.15',
    LocalPort => '27777',
    Proto => 'tcp',
    Listen => SOMAXCONN,
    ReuseAddr => 1
);

my $player_socket;
my $player_data;

print "Waiting for connection ...\n";
while(1) {

    next unless $player_socket = $socket->accept();
    print "Incomming Connection!\n";

    my $player_address = $player_socket->peerhost();
    my $player_port    = $player_socket->peerport();

    my $response = "Player Connection Info: $player_address : $player_port. ";
    print "$response\n";

    $response .= "Press Any Key To Disconnect...\n";
    $player_socket->send($response);

    print "Waiting for player to press a key and disconnect...\n";
    while ($player_socket->connected()) {

        $player_socket->recv($player_data, 1024);
        if ($player_data) {
            print "Player Disconnecting $player_address : $player_port\n";
            $socket->close();
            print "Player Disconnected\n";
            last;
        }

    }
    last;
}
print "Good Bye!\n";
exit;

Running this code and connecting with SyncTERM shows:

localhost:~ # perl telnet_server.pl 

BEGIN
Setting up listen socket
Waiting for connection ...
Incomming Connection!
Player Connection Info: 192.168.1.9 : 55414. 
Waiting for player to press a key and disconnect...
Player Disconnecting 192.168.1.9 : 55414
Player Disconnected
Good Bye!


I read my Linux OpenSuSE server's man page for listen(2), and the best I can understand is that the value for SOMAXCONN in Listen => SOMAXCONN will allow me to have up to 4096 connections!? I think this is correct. It says the value can be found in /proc/sys/net/core/somaxconn and doing a cat of that file shows 4096.

localhost:~ # cat /proc/sys/net/core/somaxconn 
4096

Now we just need to fork!

This process turned out to be a bit more difficult to understand (for me), and took a few more lines of code, than the telnet portion did. I'll pick up from here in the next article.

Have you done any telnet programming before? If so, what modules have you found useful? (CPAN or otherwise)

If you have any suggestions or comments please share constructively. Also please visit our social media pages for lots of fun videos and pictures showing the game engine in action.

ANSI Game Engine on Instagram
ANSI Game Engine on Facebook

Prev << Part 5 - 32bit -> 64bit & Perl's Storable
Next >> Coming Soon

Cheers!
Shawn

Advent of Code has started again and I'm again publishing my solutions to GitHub (and probably also here). In the last two years I created one git repo for each year, but this year I changed my mind and want to have one repo containing a dir for each year.

Now I could just copy all the files into the new repo, but that would lose the commit history. Which is unacceptable!

The status quo ante

.
├── advent2019
│   ├── 01_1.pl
│   └── 01_2.pl
└── advent2020
    ├── 01_1.pl
    └── 01_2.pl

What I want

.
└─── advent_of_code
    ├── 2019
    │   ├── 01_1.pl
    │   └── 01_2.pl
    └── 2020
        ├── 01_1.pl
        └── 01_2.pl

I know that git allows you to rewrite history and have once munged a repo using git filter-branch (to completely delete some files and commits). I of course forgot the details, but after a little bit of searching and trial and error I got it working!

First, prepare the old repo

To prevent merge conflicts later, I first move the code in the old repo from the root dir into a new dir, e.g. 2019.

So this:

.
└─── advent2019
    ├── 01_1.pl
    └── 01_2.pl

Should look like this:

.
└─── advent2019
    └─── 2019
        ├── 01_1.pl
        └── 01_2.pl

I found this gist very helpful, and adapted it to my needs:

cd advent2019
mkdir 2019
git filter-branch --tree-filter 'mkdir -p /tmp/ad; mv * /tmp/ad; mkdir 2019; mv /tmp/ad/* 2019/' --tag-name-filter cat --prune-empty -- --all

This uses git filter-branch to move the old code to a temp dir (mv * /tmp/ad) and then move it back to the new location (mv /tmp/ad/* 2019/), using some git magic to keep the history.

While I was at it, I also removed some of the AdventOfCode input files which I had committed (but which we should not commit):

git filter-branch -f --tree-filter 'rm -f 2019/*.data' HEAD

Second, import the repo

Now I can merge the old repo into my new unified repo. This time this StackOverflow comment pointed me in the right direction. The basic idea is to add the old repo as a new remote, fetch the commits, and then merge them using --allow-unrelated-histories

cd advent_of_code
git remote add old19 ../advent2019/
git fetch old19
git merge --allow-unrelated-histories old19/master
git remote remove old19

And then push, and we're done and have a nice unified repo:

~/perl/advent_of_code$ tree
.
├── 2019
│   ├── 01_1.pl
│   ├── 01_2.pl
├── 2020
│   ├── 01_1.pl
│   ├── 01_2.pl
│   ├── 01_2_golf.pl
├── 2021
│   ├── 01_1.pl
│   ├── 01_1_golf.pl
│   └── 01_2.pl
└── README.md

And the history was preserved:

~/perl/advent_of_code$ git log 2019/24_1.pl
commit dbaf0bc1df645bda61ec5cf7e623d478a179947e
Author: Thomas Klausner <domm@plix.at>
Date:   Fri Dec 27 10:10:10 2019 +0100

    no need to keep a map around, the rating is unique per map

Win!

Next steps: Fix all links pointing to the old repos to point to the new location, and maybe archive / delete the github repos (and/or have them redirect to the new unified repo)

List of new CPAN distributions – Nov 2021

Perlancar

Published by perlancar on Wednesday 01 December 2021 00:31

dist author first_version latest_version abstract
API-Mathpix ERIAM 0.01 0.01 Use the API of Mathpix
Acme-CPANModulesBundle-Import-DAVECROSS PERLANCAR 0.001 0.001 Acme::CPANModules::* created from DAVECROSS's posts
Acme-MarkdownTest NEILB 0.01 0.02 test module to see how markdown is handled
Alien-Meson ZMUGHAL 0.01 0.01 Find or build meson build tool
Alien-Ninja ZMUGHAL 0.01 0.02 Find or build Ninja build tool
Alien-SWIProlog ZMUGHAL 0.001 0.001 Alien package for the SWI-Prolog Prolog interpreter
Alien-libgpiod SACAVILIA v0.1 v0.1 Find or build libgpiod (Linux GPIO character device library)
AltaPay_Try MAFEMBRAC v1.0.1 1.0.1
App-CPANChangesUtils PERLANCAR 0.074 0.074 Parse CPAN Changes file
App-Images-To-DjVu SKIM 0.01 0.01 Base class and script for composing DjVu files from images.
App-URIInfoUtils PERLANCAR 0.001 0.002 Utilities related to URI::Info
App-ansiexpand UTASHIRO 0.99 0.9901 ANSI sequence aware tab expand/unexpand command
App-geoCancerDiagnosticDatasetsRetriever AALAMEER 1 1 GEO Cancer Diagnostic Datasets Retriever is a bioinformatics tool for cancer diagnostic dataset retrieval from the GEO website.
App-renlikewd PERLANCAR 0.001 0.001 REName a file so it becomes LIKE the current (Working) Directory's name
App-timeput TULAMILI 0.060 0.078
App-tjoin TULAMILI 0.060 0.061
Bencher-Scenario-BinarySearch PERLANCAR 0.001 0.003 Benchmark binary searching
Bencher-Scenario-BinarySearch-File PERLANCAR 0.001 0.001 Benchmark binary searching sorted lines from a file
Bencher-Scenarios-URI-Info PERLANCAR 0.001 0.002 Scenarios to benchmark URI::Info
Commons-Link SKIM 0.01 0.05 TODO.
Cookie JDEGUEST v0.1.1 v0.1.5 Cookie Object
Cookies JDEGUEST v0.1.0 v0.1.0 Cookies API for Server & Client
DBIx-DBFlow JLMARTIN 0.03 0.03 Database development helpers
DBIx-DBFlow-Runtime JLMARTIN 0.03 0.03 Runtime utilities for DBIx::DBFlow
DBIx-Spreadsheet CORION 0.01 0.02 Query a spreadsheet with SQL
Dancer2-Plugin-ControllerAutoload GILS 0.01 0.01 Autoload controllers
Data-Sah-DefaultValue PERLANCAR 0.001 0.001 Default-value rules for Data::Sah
Data-TableData-Rank PERLANCAR 0.001 0.001 Add a rank column to a table
Device-Chip-Adapter-Gpiod SACAVILIA v0.1.0 v0.1.0 Device::Chip::Adapter implementation for Linux GPIO character devices
Dyn SANKO 0.01 0.02 dyncall Backed FFI
File-Find-Rule-DjVu SKIM 0.01 0.01 Common rules for searching DjVu files.
File-Syslogger VVELOX v0.0.1 v0.0.2 Use POE to tail a file and read new lines into syslog.
File-Temp-VarOptions PERLANCAR 0.001 0.001 Like File::Temp, but allowing to set options with variables
Future-Workflow PEVANS 0.01 0.01 asynchronous workflow management using futures
GitHubDDL MACOPY 0.01 0.01 GitDDL compatibility database migration utility when hosted on GitHub
Hash-Util-Exists-Tiny AAHAZRED 0.01 0.04 Some hash helper functions related to perl's exists function.
HealthCheck-Diagnostic-RemoteHealth GSG v0.1.0 v0.1.0 Get results from an HTTP HealthCheck
IURL-XS BRDUCH v0.1.0 v0.1.0 parsing URLs with zero-copy and no mallocs
Iterator-Breathe GENE 0.0100 0.0201 Iterate a counter up and back
JSON-Schema-Modern-Document-OpenAPI ETHER 0.001 0.005 One JSON Schema document
Math-Ryu SISYPHUS 0.01 0.02 perl interface to the ryu C library.
Module-Features-Set PERLANCAR 0.001 0.003 Features of modules that generate text tables
MojoX-HTTP-Async CDN 0.01 0.08 simple package to execute multiple parallel requests to the same host
Mojolicious-Plugin-YamlRoutes HDELGADO 0.01 0.01 Generates routes from a yaml file
Net-RFC3161-Timestamp AKHUETTEL 0.010 0.020 Utility functions to request RFC3161 timestamps
PERLSDK MAFEMBRAC v1.0.1 v1.0.1
PensioAPI MAFEMBRAC 1.0.1 1.0.1 Integrate AltaPay gateway with Perl projects.
Pensio MAFEMBRAC 1.0.1 1.0.1 Integrate AltaPay gateway with Perl projects.
RT-Extension-ReplyWithMail AVERKIOS 0.01 0.01 RT-Extension-ReplyWithMail Extension
RoleBundle-TinyCommons-Iterator PERLANCAR 0.004 0.004 Iterator roles
RoleBundle-TinyCommons-Tree PERLANCAR 0.129 0.129 Roles related to object tree
SelectPdf SELECTPDF v1.2.0 v1.4.0 Get the result of an asynchronous call.
Task-Kramerius SKIM 0.01 0.01 Install modules for Kramerius system.
Test-Expander JSF v1.0.0 v1.0.5 Expansion of test functionalities that appear to be frequently used while testing.
Test-Mock-Object OVID 0.1 0.1 Dead-simple mocking
Tree-RB-XS NERDVANA 0.00_01 0.05 Red/Black Tree object with the compatible API to Tree::RB
URI-Info PERLANCAR 0.001 0.002 Extract various information from a URI (URL)
URI-Router SYBER v0.1.0 v0.1.3 highest performance powerful URI router (URI path to value lookup) for HTTP frameworks
WWW-KeePassHttp PETERCJ 0.010 0.020 Interface with KeePass PasswordSafe through the KeePassHttp plugin
WebService-WsScreenshot SYMKAT 0.001 0.001 API client For ws-screenshot
WebService-YTSearch GENE 0.0100 0.0302 Search YouTube
require-relative BARNEY 1.0.0 1.0.0 like require for relative paths

Stats

Number of new CPAN distributions this period: 62

Number of authors releasing new CPAN distributions this period: 36

Authors by number of new CPAN distributions this period:

No Author Distributions
1 PERLANCAR 14
2 MAFEMBRAC 4
3 SKIM 4
4 ZMUGHAL 3
5 TULAMILI 2
6 GENE 2
7 JLMARTIN 2
8 SACAVILIA 2
9 JDEGUEST 2
10 AVERKIOS 1
11 SYMKAT 1
12 GSG 1
13 CORION 1
14 GILS 1
15 ETHER 1
16 MACOPY 1
17 AAHAZRED 1
18 NERDVANA 1
19 ERIAM 1
20 HDELGADO 1
21 BARNEY 1
22 VVELOX 1
23 SYBER 1
24 JSF 1
25 PETERCJ 1
26 OVID 1
27 UTASHIRO 1
28 SANKO 1
29 PEVANS 1
30 SISYPHUS 1
31 SELECTPDF 1
32 BRDUCH 1
33 CDN 1
34 AKHUETTEL 1
35 NEILB 1
36 AALAMEER 1

#540 - 24 Pull Requests

Perl Weekly

Published on Monday 29 November 2021 10:00

Hi there

Did you participate in Hacktoberfest 2021?

If not, then you have one more chance to participate in the global event 24 Pull Requests. Participation is simple: you just have to submit one pull request each day between 1st Dec 2021 and 24th Dec 2021. It is one of my favourite events. I have participated in it every year since 2015, when Neil Bowers introduced me to it, except last year, when I was going through a rough patch and, unfortunately, had to skip it. My best performance was in 2018, when I contributed 125 pull requests in the first 24 days of December. If I am not mistaken, the 24 Pull Requests event is a little more relaxed than Hacktoberfest. You don't have to submit a pull request every day; instead, you can submit in bulk one day and then take a break. However, make sure you have submitted at least 24 pull requests by the end of play on 24th December.

Happy Hacking !!!

Do you follow the much-discussed Perl project Corinna?

The project is run by Curtis and his dedicated team of experts. I came across a blog post by Curtis talking about "Why is Perl not putting Moose in the core?" There is a bold statement in the post in response to the question "What’s an Object in Perl?" and it is "Perl knows nothing about OOP. It doesn’t know what objects are."

Curtis doesn't say things without a reason. Please read the full blog post for more information. I am one of the biggest followers of Curtis and never miss his posts.
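
To make that statement concrete, here is a minimal sketch (my own, not taken from Curtis's post) of what Perl's built-in object support boils down to: bless simply tags a reference with a package name, and a method call is just a sub looked up in that package.

use strict;
use warnings;

# A "class" is only a package; an "object" is only a blessed reference.
package Counter;
sub new  { my ($class) = @_; return bless { count => 0 }, $class }
sub tick { my ($self)  = @_; return ++$self->{count} }

package main;
my $c = Counter->new;
print $c->tick, "\n";   # 1
print ref($c), "\n";    # Counter -- still just a blessed hash reference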

If you remember, I mentioned Advent Calendars in the editorial of the weekly newsletter edition #536. Ever since then, I have noticed lots of activity in the Perl and Raku communities about Advent Calendars. I am confident this year we are going to have another fun-filled Advent Calendar.

Last but not least, The Weekly Challenge (also known as Perl Weekly Challenge) is going to complete 1000 days on 18th December 2021. To celebrate the day, the team is running a FREE T-shirt campaign. If you are interested in knowing about it then please check out the blog post.

Stay safe and avoid the new COVID-19 variant Omicron.

Dancer2: Install Perl, editor, Dancer2

Perl Maven

Published by Gabor Szabo on Monday 29 November 2021 09:30

Part of the Dancer2 video course available both to Pro subscribers and attendees of the Perl Dancer course on Leanpub.

The examples used here are from The Weekly Challenge problem statement and demonstrate the working solution.

Part 1

You are given two decimal-coded binary numbers, $a and $b. Write a script to simulate the addition of the given binary numbers.

Solution


use strict;
use warnings;
sub add_binary{
    my($x, $y) = @_;
    my $sum = ""; 
    my @a = reverse(split(//, $x));            
    my @b = reverse(split(//, $y));            
    if(@b > @a){
        my @c = @b;
        @b = @a;
        @a = @c;   
    } 
    my $carry = 0; 
    for(my $d = 0; $d <= @a - 1; $d++){ 
        my $d0 = $a[$d]; 
        my $d1 = $b[$d];
        if($d1){
            $sum = "0$sum", $carry = 0 if $d0 == 1 && $d1 == 1 && $carry == 1;  
            $sum = "1$sum", $carry = 0 if $d0 == 1 && $d1 == 0 && $carry == 0; 
            $sum = "0$sum", $carry = 1 if $d0 == 1 && $d1 == 1 && $carry == 0; 
            $sum = "0$sum", $carry = 1 if $d0 == 0 && $d1 == 1 && $carry == 1; 
            $sum = "0$sum", $carry = 0 if $d0 == 0 && $d1 == 0 && $carry == 0; 
            $sum = "1$sum", $carry = 0 if $d0 == 0 && $d1 == 0 && $carry == 1; 
            $sum = "0$sum", $carry = 1 if $d0 == 1 && $d1 == 0 && $carry == 1; 
            $sum = "1$sum", $carry = 0 if $d0 == 0 && $d1 == 1 && $carry == 0; 
        } 
        else{
            $sum = "0$sum", $carry = 1, next if $d0 == 1 && $carry == 1;  
            $sum = "1$sum", $carry = 0, next if $d0 == 0 && $carry == 1;  
            $sum = "0$sum", $carry = 0, next if $d0 == 0 && $carry == 0;  
            $sum = "1$sum", $carry = 0, next if $d0 == 1 && $carry == 0;  
        }  
    } 
    $sum = "$carry$sum" if $carry == 1;  
    return $sum; 
}

MAIN:{
    print add_binary(11, 1) . "\n"; 
    print add_binary(101, 1) . "\n"; 
    print add_binary(100, 11) . "\n"; 
}

Sample Run


$ perl perl/ch-1.pl
100
110
111

Notes

I have an unusual fondness for Perl's right-hand (postfix) conditional. But that is pretty obvious from the way I wrote this, right?
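
For comparison, a much more compact cross-check is possible (this is my own sketch, not part of the solution above): let oct() parse the inputs as binary and sprintf format the sum back.

use strict;
use warnings;

# Treat the arguments as binary strings, convert with oct("0b..."),
# add the integers, and render the result back in binary.
sub add_binary_via_oct{
    my($x, $y) = @_;
    return sprintf "%b", oct("0b$x") + oct("0b$y");
}

print add_binary_via_oct(11, 1) . "\n";   # 100
print add_binary_via_oct(101, 1) . "\n";  # 110
print add_binary_via_oct(100, 11) . "\n"; # 111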

Part 2

You are given 3 positive integers, $i, $j and $k. Write a script to print the $kth element in the sorted multiplication table of $i and $j.

Solution


use strict;
use warnings;
sub nth_from_table{
    my($i, $j, $k) = @_;
    my @table;
    for my $x (1 .. $i){
        for my $y (1 .. $j){
            push @table, $x * $y; 
        }  
    }  
    return (sort {$a <=> $b} @table)[$k - 1];   
} 

MAIN:{
    print nth_from_table(2, 3, 4) . "\n";  
    print nth_from_table(3, 3, 6) . "\n";  
} 

Sample Run


$ perl perl/ch-2.pl 
3
4

Notes

Full Disclosure: At first I wanted to do this in some convoluted way for fun. After experimenting with, like, nested maps for a few minutes I lost all interest in "fun" and just went with a couple of for loops!
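
For the curious, a nested-map version along the lines alluded to above might look like this (my own sketch, not the author's abandoned code):

use strict;
use warnings;

# Build the $i x $j multiplication table with nested maps, sort it
# numerically, and take the $k-th element (1-based).
sub nth_from_table{
    my($i, $j, $k) = @_;
    return (sort { $a <=> $b } map { my $x = $_; map { $x * $_ } 1 .. $j } 1 .. $i)[$k - 1];
}

print nth_from_table(2, 3, 4) . "\n"; # 3
print nth_from_table(3, 3, 6) . "\n"; # 4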

References

Challenge 140

Maintaining Perl 5 Core (Tony Cook): October 2021

Perl Foundation News

Published by Jason A. Crome on Sunday 28 November 2021 17:01

Approximately 25 tickets were reviewed, and 3 patches were applied.

```
[Hours] [Activity]
0.83 #12957 review discussion (mentioned in #19182), review sdbm source, comment
1.27 #18958 look over code #18958 review code and comment #18958 comment
1.59 #19020 research - should the shell be trapping in this case? haven't found anything appropriate, also look at dash source #19020 more dash source, comment
2.74 #19121 research and comment #19121 re-work #19121 consider alternative for the delimiter, looking over code #19121 look at a :crlf issue - is it a problem elsewhere cpan #133233
0.30 #19122 apply to blead, perldelta
1.27 #19124 debugging
0.30 #19166 review
0.45 #19171 review and briefly comment #19171 review latest changes and apply to blead
1.22 #19172 review and a couple of brief comments
0.30 #19174 review discussion, testing, research and comment
2.42 #19182 review, comments #19182 review changes, comment #19182 review, testing, brief comment
0.43 #19184 review, consider if feature regen needs some re-work #19184 briefly review
0.60 #19186 review, research and comment
0.62 #19187 review and approve
0.63 #19190 review, research, briefly comment
0.91 #19191 review and apply, also supply patch upstream and update customizations #19191 comment
0.43 #19194 review and approve
2.18 #19197 review, research (who wrote that test? it was me) and comment #19197 review new commits
9.56 #19198 debugging and comment #19198 debugging, work on a fix, simple testing, comment #19198 recheck, testing, try to find good place to add regression test, minor perl5db.t re-work #19198 minor cleanup and document DebugWrap in perl5db.t, write test, test the test with and without the fix, testing #19198 fix global destruction issue from fix, debug another threads only failure and work out a fix #19198 polish, documentation, more testing, update PR
0.92 review coverity reported defects, create PR #19200
1.42 :utf8 more on re-working fill/readdelim
0.65 :utf8 more readdelim re-work
1.58 :utf8 work on re-working fill
0.17 github notifications
1.07 security list: research, testing, comment

33.86 hours total
```

(ccclxix) 12 great CPAN modules released last week

Niceperl

Published by Unknown on Saturday 27 November 2021 17:56

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Netdisco - An open source web-based network management tool.
    • Version: 2.051005 on 2021-11-25
    • Votes: 13
    • Previous version: 2.051001 was 11 days before
  2. App::perlbrew - Manage perl installations in your $HOME
    • Version: 0.93 on 2021-11-22
    • Votes: 167
    • Previous version: 0.92 was 7 months, 7 days before
  3. App::Sqitch - Sensible database change management
    • Version: v1.2.0 on 2021-11-20
    • Votes: 38
    • Previous version: v1.1.0 was 1 year, 6 months before
  4. Code::TidyAll - Engine for tidyall, your all-in-one code tidier and validator
    • Version: 0.80 on 2021-11-26
    • Votes: 36
    • Previous version: 0.78 was 1 year, 7 months, 1 day before
  5. CPAN - query, download and build perl modules from CPAN sites
    • Version: 2.29 on 2021-11-23
    • Votes: 33
    • Previous version: 2.28 was 1 year, 5 months, 10 days before
  6. CPANPLUS - Ameliorated interface to the CPAN
    • Version: 0.9914 on 2021-11-23
    • Votes: 12
    • Previous version: 0.9912 was 3 months, 2 days before
  7. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20211120 on 2021-11-21
    • Votes: 37
    • Previous version: 5.20211020 was 1 month before
  8. Mojo::mysql - Mojolicious and Async MySQL/MariaDB
    • Version: 1.25 on 2021-11-22
    • Votes: 35
    • Previous version: 1.24 was 1 month, 17 days before
  9. Mojolicious::Plugin::OpenAPI - OpenAPI / Swagger plugin for Mojolicious
    • Version: 5.02 on 2021-11-21
    • Votes: 41
    • Previous version: 5.01 was 1 day before
  10. Object::Pad - a simple syntax for lexical slot-based objects
    • Version: 0.58 on 2021-11-25
    • Votes: 22
    • Previous version: 0.57 was 7 days before
  11. RapidApp - Turnkey ajaxy webapps
    • Version: 1.3500 on 2021-11-22
    • Votes: 29
    • Previous version: 1.3404 was 2 months, 5 days before
  12. Try::Tiny - Minimal try/catch with proper preservation of $@
    • Version: 0.31 on 2021-11-23
    • Votes: 168
    • Previous version: 0.30 was 3 years, 11 months, 2 days before

(cdxcii) metacpan weekly report - DBD::Pg

Niceperl

Published by Unknown on Saturday 27 November 2021 17:54

This is the weekly favourites list of CPAN distributions. Votes count: 24

Week's winner: DBD::Pg (+2)

Build date: 2021/11/27 16:53:53 GMT


Clicked for first time:


Increasing its reputation:

#539 - Farewell to David H. Adler

Perl Weekly

Published on Monday 22 November 2021 10:00

Hi

Really sad news: David H. Adler, known to many as DHA, has passed away. I've included a number of links where people who can express themselves way better than I do say goodbye to him.

In other news, the Ephemeral Miniconf took place this week. You can read about it below.

Lastly, The Weekly Challenge celebrates its first 1000 days. Congratulations to Mohammad Anwar, my co-editor of the Perl Weekly for keeping up with this project!

Enjoy your week!

Jort Sort the First Five Long Primes

RabbitFarm Perl

Published on Sunday 21 November 2021 16:34

The examples used here are from The Weekly Challenge problem statement and demonstrate the working solution.

Part 1

You are given a list of numbers. Write a script to implement JortSort. It should return true/false depending on whether the given list of numbers is already sorted.

Solution


use strict;
use warnings;
use boolean;

sub jort_sort{
    for(my $i=0; $i < @_ - 1; $i++){
        return false if $_[$i + 1] < $_[$i];  
    }  
    return true;
}

MAIN:{
    print jort_sort(1, 2, 3, 4, 5) . "\n";
    print jort_sort(1, 3, 2, 4, 5) . "\n";
}

Sample Run


$ perl perl/ch-1.pl
1
0

Notes

Apparently Jort Sort is a joke sort started by somebody in the JavaScript community. I didn't find it all that funny, but the code to implement it only took a quick minute.
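
In the true JortSort spirit, an alternative sketch (mine, not the solution above) sorts a copy of the list and checks whether it matches the original:

use strict;
use warnings;

# JortSort: the list is "sorted" if sorting it changes nothing.
sub jort_sort{
    my @sorted = sort { $a <=> $b } @_;
    return 0 + ("@_" eq "@sorted");
}

print jort_sort(1, 2, 3, 4, 5) . "\n"; # 1
print jort_sort(1, 3, 2, 4, 5) . "\n"; # 0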

Part 2

Write a script to generate the first 5 Long Primes.

Solution


use strict;
use warnings;
use boolean;
use LWP::UserAgent;
use constant PRIME_URL => "http://primes.utm.edu/lists/small/100000.txt";

sub get_primes{
    my @primes;  
    my $ua = LWP::UserAgent->new(
        ssl_opts => {verify_hostname => 0}
    );
    my $response = $ua->get(PRIME_URL);
    my @lines = split(/\n/,$response->decoded_content);
    foreach my $line (@lines){
        my @p = split(/\s+/, $line);
        unless(@p < 10){
            push @primes, @p[1..(@p - 1)]; 
        }  
    }
    return @primes; 
}

sub divide{
    my($n, $d) = @_; 
    my @remainders;
    my $q = (int($n / $d)) . ".";
    my $r = $n % $d; 
    push @remainders, $r; 
    my @a;
    for (0 .. $d){
        $q .= int($r*10 / $d);  
        $r = $r*10 % $d;
        @a = grep { $remainders[$_] == $r } (0 .. @remainders - 1);
        last if(@a); 
        push @remainders, $r; 
    }
    my $r_i = $a[0];
    my $i = index($q, ".");
    my $decimal_part = substr($q, $i+1); 
    return substr($q, 0, $i + 1) . substr($decimal_part, 0, $r_i) . "(" . substr($q, $i + $r_i + 1) . ")";  
}   

sub long_primes_five{
    my @long_primes;
    my @primes = get_primes();
    do{
        my $prime = shift @primes;    
        my $max_repetend = $prime - 1; 
        # 2 and 5 divide 10, so their reciprocals never repeat; skip them
        my $repeats = ($prime != 2 && $prime != 5);
        if($repeats){
            my $x = divide(1, $prime); 
            $x =~ m/\((\d+)\)/;
            my $repetend = $1;
            push @long_primes, [$prime, $x] if length($repetend) == $prime - 1;   
        }
    }while(@long_primes < 5);
    return @long_primes;
}

MAIN:{
    for my $p (long_primes_five()){
        print $p->[0] . "\t" . $p->[1] . "\n";
    }
}

Sample Run


$ perl perl/ch-2.pl 
7       0.(142857)
17      0.(0588235294117647)
19      0.(052631578947368421)
23      0.(0434782608695652173913)
29      0.(0344827586206896551724137931)

Notes

This second part of the challenge was much more fun! Maybe my favorite part was that it largely re-used code from Challenge 106 and also Challenge 015. Here we grab a list of pre-computed primes and then check each one for the desired property. After we find five, as required, we're done.
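
A self-contained cross-check is also possible without downloading a prime list (my own sketch, not the approach above): a prime p other than 2 and 5 is a long prime exactly when the multiplicative order of 10 modulo p is p - 1, i.e. when the decimal expansion of 1/p has a repetend of length p - 1.

use strict;
use warnings;

# A prime p (other than 2 and 5) is "long" when the multiplicative order
# of 10 modulo p is p - 1.
sub is_long_prime{
    my($p) = @_;
    return 0 if $p == 2 || $p == 5;
    for my $d (2 .. int(sqrt($p))){      # crude trial-division primality test
        return 0 if $p % $d == 0;
    }
    my($r, $order) = (10 % $p, 1);
    while($r != 1){
        $r = ($r * 10) % $p;
        $order++;
    }
    return $order == $p - 1;
}

my @long_primes;
for(my $n = 2; @long_primes < 5; $n++){
    push @long_primes, $n if is_long_prime($n);
}
print "@long_primes\n"; # 7 17 19 23 29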

References

Jort Sort

Long Prime

Challenge 139

Perl Tk: Keyboard and Mouse events

Perl Maven

Published by Gabor Szabo on Sunday 21 November 2021 09:00

Showing how to bind keyboard and mouse events to functions in a Perl Tk based application.

Grants: November 2021 Votes

Perl Foundation News

Published by Jason A. Crome on Tuesday 16 November 2021 20:00


The Grants Committee has concluded voting on the November 2021 round. One grant request was submitted:

Grant Proposal: Optimizations building upon the new Raku dispatch mechanism (USD 12,000)

VOTING RESULTS: Approved. 4 YES votes (14 points), 0 NO votes, 1 ABSTAIN

We accept proposals throughout the year; our next round of review will begin in January. You can submit proposals at any time.

If you want to help with funding and increase our budget, please visit our donations page. We sincerely appreciate all the donors who make the grant program possible. If you donate, please take advantage of your employer's matching donation program.

As always, thanks to our donors, both large and small, who support this program to give back to the community.

#538 - Are you a Perl blogger?

Perl Weekly

Published on Monday 15 November 2021 10:00

Hi there

Going back to the editorial of last week's edition of Perl Weekly, it is still there in my head, raising many questions. It is a universal fact that Perl is no longer the favourite language among the youths in college these days. I remember when I was first introduced to Perl in 1999: it was the most popular language. Being a Perl fan, I feel it is my responsibility to bring Perl back to the discussion table. One way to do that is to talk about Perl-based projects and share them with the users of other languages. I have noticed that not as many people blog about Perl as they used to in the past. Talking about blogs, I would like to thank The Weekly Challenge team for spreading the power of Perl. For the record, in the last 138 weeks, the team has created and shared over 2000 blog posts. I used to blog regularly in the past, but nowadays I hardly find time. Having said that, I still write a monthly report blog post where I talk about Perl and my other projects. I find making a live video less time consuming than writing a blog post. I made some and then I stopped that as well, sorry. You can take a look at my YouTube Channel if you fancy. I also came across a series of videos by Gabor Szabo teaching Perl. It is a great source for someone new to Perl to explore the language through live action.

I remember Dave Cross wrote a blog post talking about something that not many people would speak about. I personally agree with him fully and believe knowing more than one programming language helps you as a programmer. I can tell from my own experience: I picked up Raku last year while participating in the weekly challenge. Whatever little I know about Raku to date, I can tell it has helped me a lot. I use that knowledge to write better Perl programs. Recently, I started writing Python as well - just the basics for now. So the question is, does knowing Raku or Python make me any less of a Perl programmer? I am still a Perl programmer and all of my thought processes still follow Perl's rules. Perl is my first love and will remain so for the rest of my life. Even today, after more than 22 years, I find features that I never knew existed before, especially in the field of regex. I would like to give credit to Abigail for sharing the magic of regex in Perl as part of his weekly contributions.

While talking about blogs, how can I miss those who are doing great service on a regular basis? I would like to thank Flavio Poletti for blogging every day for months now. Also there is another name, Mark Gardner, who writes a weekly blog post. Similarly we have weekly stats by Miguel Prz.

Enjoy the rest of the newsletter.

Perl Tk: The label, button, and entry widgets

Perl Maven

Published by Gabor Szabo on Monday 15 November 2021 08:00

Volunteers needed for Perl dev room at FOSDEM (online)

Perl Foundation News

Published by Nic Evans on Thursday 11 November 2021 16:20

FOSDEM will run online on 5-6 February 2022 and is calling for participation. The Marketing Committee would like to run a Perl dev room (basically a talk track), but needs volunteers and speakers to assist.

Talks need to be prerecorded and uploaded prior to the conference; they are then streamed to participants, with Q&A after each.

Before we request the room from FOSDEM and call for speakers, we need volunteers to assist with both organizing and supporting speakers.

We need 2-3 people who can meet regularly and spend time planning it: encouraging responses to the call for papers, helping speakers record their talks as needed, and supporting the management of our dev room on the day, including the collation of recordings, etc.

If you can commit to support, we can submit a proposal for the dev room (deadline is 23:59 UTC, Monday 15 November).

Please contact nic@perlfoundation.org by 12 noon UTC, Monday 15 November if you can help.

Grant update: Persistent Data Structures for Raku, Daniel Sockwell

Perl Foundation News

Published by Moritz Lenz on Tuesday 09 November 2021 13:05

Daniel Sockwell has provided two updates to his Grant: Persistent Data Structures for Raku.

The first update was already submitted in October, and not posted due to my own error. Enjoy the thorough description!


Raku Persistent Data Structures Grant Report for September 2021

In September, I spent 28 hours on implementing Raku persistent data structures. That's a bit under the 10 hours/week I'd estimated I'd spend, but I was still able to make a good start on the implementation.

The main reason I didn't spend the full 40 hours on the grant – as well as the explanation for why you're getting this update now instead of closer to the beginning of the month – is that I decided to devote a large chunk of my Raku time to two other issues, both of which are directly related to implementing persistent data structures. (Though not so related that I'm counting time spent on them towards the grant. Which is good, because doing so would mean that I had well exceeded my ~40 hours/month budget!)

I'm going to tell you a bit about these two projects and how each relates to Raku persistent data structures, and then I'll describe the progress I made during the 28 hours I'm actually counting.

What even is immutability?

The biggest selling point of persistent data structures is, of course, that they are immutable while also being fairly cheap to copy. But to deliver on that promise, it's important to have a very clear idea of what we mean by "immutability".

I don't mean in some sort of fancy philosophical sense; I'm asking the very practical question "When we describe the List in my $list = (1, 2, 3) as immutable, what does that mean? What different behavior should I expect when I know that the List is immutable?" (You could say "semantic difference" instead of "different behavior" if you want to sound a bit more technical.)

Raku has some very clear answers to parts of this question but is also slightly confused (to say nothing of confusing) on some others. And, as it turns out, some of the less-clear areas of the question end up being pretty significant for the sort of immutability that gives persistent data structures so much of their power.

Let's start with an area where Raku is clear: the distinction between immutable values and immutable bindings. This is well documented (and isn't what I spent time on; this is still background), but here's a recap:

Values

When we describe the list (1, 2, 3), the string codesections, or the number 42 as immutable, we are saying that they are immutable values. That is, there's no way to take (1, 2, 3) and change the 2 into a 0 while still having the same list we started with – we can create a new list with those values, but that's not at all the same thing. Contrast that with the Array of my @a = [1, 2, 3]: here, the code @a[1] = 0 literally changes the existing Array.

Strings and numbers in Raku are immutable values in exactly the way Lists are (and Arrays aren't). That is, there's no way to transform a string or a number; all you can do is to return a modified version of the string or number. In contrast, C, C++, Rust, and many other languages allow you to mutate strings in-place; they essentially treat strings as arrays of characters or codepoints and let you modify that array in exactly the same way Rakoons modify an Array. (For C's char* strings, you're literally dealing with character arrays; for other languages that's more of an (over) simplification). I'm pretty sure that no programming language is crazy enough to have mutable numbers (math would get weird) but I'd love to learn that I'm wrong about that.

Note that I'm discussing the semantics of immutable values that Raku (the language) provides, not implementation details of how any particular compiler stores those values. So, while Lists and Strs provide immutable semantics, there's nothing preventing an implementation from storing different values in a more complex way. Put differently, when you say my $a = 'foo' x 500; my $b = $a ~ ' and bar', you conceptually have two separate and immutable strings, but implementations are free to let those strings share (or even mutate) some of their backing data. And, indeed, MoarVM uses a really amazing system for storing strings that shares memory in a way that's not that different from the way the data structures I'm building do.

Bindings

Entirely separate from the question of mutable versus immutable values, programming languages can also constrain the mutability of variable bindings. That is: once the programmer has defined a variable as pointing at a particular value, can they later change it to point at a different value?

One non-Raku example that many programmers may be familiar with is JavaScript's let and const, especially when used with arrays or objects. In JavaScript, const enforces immutable bindings – you can't say const a = [1, 2, 3]; a = [4, 5, 6]. But it does nothing to make the values immutable – and, in js, arrays are always mutable. This means that you can validly write const a = [1, 2, 3]; a[1] = 99 to mutate the value that a points to. (This distinction, and the lack of immutable arrays in JavaScript, leads to perennial debates about the advisability of using const in js.)

Raku gives us access to the semantics of both mutable and immutable bindings, and lets us combine either with mutable or immutable values. Thus, we can write my @a = [1, 2, 3] – a mutable binding to a mutable value. Or my $l = (1, 2, 3) – a mutable binding to an immutable value (we can't change the value of the List, but we can make $l point to a different value altogether). Or we can write constant @a = [1, 2, 3] – an immutable binding to a mutable value (@a will always point to that exact Array, but we can freely mutate that Array with code like @a[1] = 99). Or, finally, we can write constant $l = (1, 2, 3) – an immutable binding to an immutable value ($l will always point to the same List, and that List will always contain exactly the values 1, 2, and 3).

This last category – an immutable binding to an immutable value – best fits many people's intuitions about what the terms "immutable" or "constant" mean – that variable always gives me the same value, no matter what. In addition to being more intuitive, this last category is notable for how much it can simplify reasoning about code. There's something very clarifying about knowing the value of a variable without needing to track whether that value has changed; just like the related idea of using referentially transparent functions, programming with variables that are immutably bound to immutable values can greatly simplify complex codebases.

How immutable values and binding fit with persistent data structures

The distinction between the mutability of values and bindings matters for the data structures I'm implementing in two ways.

First, because the data structures I'm creating are immutable values, we'll be able to use them in that wonderful fourth category – immutable bindings to immutable values – if and only if Raku lets us bind to them immutably. So it's really important to be clear on what options Raku provides for immutable bindings.

I've already shown one way that Raku supports immutable bindings: constant $l = (1, 2, 3). But constants are evaluated at compile time and (even though Raku's definition of "compile time" is pretty expansive) the options for using constants are inherently limited. So what other options does Raku provide?

Well, before the beginning of September, I thought that Raku provided three other options for immutable bindings: so-called sigilless variables (like my \l = (1, 2, 3)); function parameters that aren't declared as rw or copy (like sub fn($l) {…}); and lists of variables that are destructured with the := operator (which share the same semantics as parameters). Now, in mid-October, I still think that's the case, but I admit that the situation is more complex than I first thought and, at a minimum, existing compilers don't fully implement those semantics.

(Note: some people don't like the term "sigilless variable", since the semantics I just described mean that they don't really vary. But it's the term the docs use, so I'm going to stick with it for this post.)

I first realized something odd was going on when a comment in this docs.raku.org issue pointed out that, while you can't reassign a sigilless variable, you can sometimes re-bind it: my Int \x = 2; x := 4 changes the value bound to x. At first I thought that this issue only applied when certain type constraints were applied, but I soon realized that it applied whenever any type is mentioned, including Any or Mu. Moreover, it's not just sigilless variables that can be rebound – parameters can too: sub f(Any $a) { $a := 42; $a }; f 0 returns 42. In the last step to make this pervasive, the @ and % sigils add implicit type constraints, which means that any parameter with either of those sigils can be rebound. (And since destructuring with := shares the semantics of parameter binding, all of that applies there as well.)

Given that sigilless variables and parameters both allowed rebinding, all of the above led me to wonder whether Raku is trying to provide immutable bindings at all. It turns out that there isn't a Roast specification test that's 100% on point. And, when I asked on StackOverflow, I got mixed opinions about whether the current behavior (allowing rebinding) was intentional or a bug.

If Raku does pervasively allow rebinding, that would be pretty bad news for the usefulness of the data structures I'm implementing. They'd still have some value – more than enough to be worthwhile. But it'd mean that we pretty much can never get to the immutable-bindings-to-immutable-values promised land that can have such a simplifying effect on code.

Given that impact, I decided to research the issue pretty thoroughly. After doing so, I've concluded that Raku is designed to provide immutable bindings after all, and I produced a pull request that both explains my logic and provides a patch that gives Raku those semantics. That PR hasn't been merged yet (I still need to respond to at least one change request about what error to throw, and I'd like to see if anyone else weighs in). But I'm optimistic that it will be and that fixing that bug will make immutable bindings to immutable values much more practical in Raku.

What's on the inside

I mentioned earlier that the values-versus-binding distinction is important for the work I'm doing in two ways. The second way it matters is because it gives us the right framework to talk about "shallow" immutability (also sometimes called interior mutability).

Let's look at a slightly different constant: constant $l = (1, 2, $). What can we say about $l? Well, the variable $l is immutably bound to a List, and the List is comprised of three values: 1, 2, and … a scalar. But a scalar is a container – specifically, a mutable container. Thus, despite the immutability of $l's binding and the immutability of Lists, we can still make changes: $l[2] = 99 is valid code.

That change is allowed because Lists are only shallowly immutable. Or, to say the same thing in different words, Lists are an immutable type that allows for interior mutability.

One of the goals I mentioned in the initial grant proposal for the persistent data structures is to provide deeply immutable data types. I hope that the discussion so far gives some context to what I meant by that. But it's a point I'll return to in just a minute.

Meanwhile, in another thread altogether…

The second related-but-distinct task that has occupied a good deal of my time recently involves concurrency and parallelism (which are famously not the same thing). One of the key selling points of immutable data – especially deeply immutable data – is that it can be more easily shared across threads. A lot of the headache of dealing with parallel code (locks, mutexes, atomics, cache misses, deadlocks, livelocks, etc.) comes from ways to prevent different threads from modifying data at the same time. But different threads will never modify (deeply) immutable data at the same time – because nothing will ever mutate that data at any time. I'm very optimistic that the data structures I'm working on will have significant application for anyone writing multithreaded Raku.

So I've been slightly concerned that I hadn't yet written any non-trivial parallel and concurrent code in Raku. Not that concerned – I've written quite a bit of concurrent and parallel Rust code and concurrent JavaScript (JS is single-threaded, so it's inherently non-parallel). And I'd gotten to know Raku's concurrency model by watching some excellent conference talks and by reading the docs/other resources. Still, though, I was keeping my eye out for a medium-scale project that would let me confirm my understanding of Raku's approach to concurrency.

In late September, I found the perfect project. Someone on the r/rakulang subreddit asked if there was a good way to recreate a JavaScript exercise in Raku. That exercise involved rendering balls to an HTML5 canvas in a browser – a task that's pretty challenging with a version of Raku running on MoarVM. (It would obviously be much easier using the JavaScript backend, but that wouldn't currently be my first choice for someone new to Raku.) So, I decided to write a simple Cro server that would let someone write Raku code to display balls on an HTML canvas.

Specifically, I decided on the following architecture: A Cro server that would listen for commands from the user's Raku code, generate a set of balls to display, and then send a JSON representation of those balls to any connected clients via WebSockets. Oh, and then do that again with updated ball positions 16 milliseconds later – with the goal of maintaining a frame rate of 60 fps. It's this last bit that would normally be a bit of an odd choice: WebSockets are typically used to communicate with a remote server, and thus it'd typically be a questionable choice to stream updates 60 times per second. But the program I had in mind would run locally, which removes network latency from consideration. And, besides, I figured that this design would provide a good trial by fire for my understanding of Raku's concurrency model: if my program can stream balls to dozens of simultaneously connected clients at 60 fps, I must have a pretty decent understanding of how Raku handles parallelism and concurrency.

You can see the program I ended up with on GitHub. I'm pretty happy with how it turned out: in not much more than 100 lines of Raku, the program sets up exactly the architecture described above plus an Erlang-style error recovery strategy for resetting to a known-good state when user input doesn't produce valid output.

But getting there took longer than I expected it to and, in the middle, had me seriously questioning whether I actually understood Raku's concurrency model well enough to be implementing new data structures that'd fit with it. One of the first things anyone learns about programming is that "it is not a compiler error. It is never a compiler error." In other words, even if some issue seems to be caused by a bug in the compiler/runtime/etc, it's far more likely to be caused by a bug in your own code.

And when writing that module, I kept running into some crashes that made me think that I'd either run into the mythical compiler bug or – more likely – was seriously misunderstanding how to write concurrent code. After spending quite a bit of time double-checking my code, I determined that, as expected, I had not run into a compiler bug. No, I'd run into two of them.

In fairness, neither was a compiler bug; they were both issues with the MoarVM runtime. Moreover, both were fixed extremely quickly, and the second never even made it into production – I only ran into it because I was using a pre-release build to try out Rakudo's new dispatch mechanism. The whole point of running pre-release builds is to help catch edge-case bugs like these, and I'm certainly not complaining – but it turns out that streaming 60 fps to dozens of clients was a good stress test for the new dispatch system and not just for my understanding of Raku's concurrency model.

Lessons for persistent data structures

Once I realized that I wasn't fundamentally misunderstanding concurrency in Raku, implementing Learn::Raku::With taught me a couple of lessons with important (though subtle) implications for the persistent data structures I'm building.

"Concurrency control" ≠ "concurrency"

The first lesson I learned is that I'd slightly misunderstood what's going on in code like this (adapted from an example in the docs):

```raku
# A simple stock ticker for Apple and Google stock
react {
    whenever $stock-info-for-APPL { say "Apple's stock price is now: $_" }
    whenever $stock-info-for-GOOG { say "Google's stock price is now: $_" }
}
```

My previous (incorrect) reading of that example was pretty much just an English version of the code:

> Whenever we get an update about either Apple or Google's stock, print that update. We don't know
> when these updates could come, but we want to react to them the instant they do, whether that
> means reacting to the Apple one first, reacting to the Google one first, or reacting to both at
> the same time.

That's so close to correct, but it goes wrong in the very last 7 words: we are guaranteed not to react to both updates at the same time. This is a consequence of Raku's run-to-completion semantics: the code inside a whenever block is guaranteed to finish executing (that is, to "run to completion") before the code in any other whenever block can start. This wouldn't matter much for the simple example above, but here are two modified versions in which it would:

```raku
# Let's store our results in a Map instead of printing them
react {
    my Map $prices = Map.new;
    whenever $stock-info-for-APPL { $prices = %(|$prices, :APPL($_)) }
    whenever $stock-info-for-GOOG { $prices = %(|$prices, :GOOG($_)) }
}
```

Under the incorrect understanding, we could've been in both whenever blocks at the same time, and this would create a race condition – we might lose a price update if two came in at the same time. However, because the concurrency control Raku provides prevents concurrent access here, this is actually 100% fine: we'll be in exactly one whenever block at a time. Raku's semantics protect us from any race condition.

Here's an example where Raku's semantics would have a less ideal impact on naive code:

```raku
# Now with buy/sell recommendations!
react {
    whenever $stock-info-for-APPL {
        say "Apple's stock price is now: $_";
        say "Rating: " ~ slow-fn-to-calculate-recomendation('APPL', $_);
    }
    whenever $stock-info-for-GOOG {
        say "Google's stock price is now: $_";
        say "Rating: " ~ slow-fn-to-calculate-recomendation('GOOG', $_);
    }
}
```

This code would not behave as the author might hope, due to those same run-to-completion semantics. Specifically, while slow-fn-to-calculate-recomendation is running, other updates would be blocked. If you don't want this behavior, Raku offers several ways to schedule work outside of the run-to-completion guarantee of the whenever block; Raku's semantics aren't going to stop you from doing anything here, but it's important to understand the behavior they provide by default.

(It may be helpful to compare Raku's semantics to JavaScript, which guarantees that every function will run to completion before any other work begins – a model I've contrasted with Raku's before.)

So, how does all of this impact persistent data structures? Well, some of the performance optimizations for persistent data structures involve temporarily suspending the immutability guarantee when there's provably only one copy of the data (or part of the data). And whether that's provably the case or not depends crucially on whether other sections of the program can have concurrent access to the data. So being clear on all of the above is pretty essential for safe implementations of the data structures I'm working on.

Under pressure

I mentioned that implementing Learn::Raku::With helped me learn two lessons, but the second was really a reminder of a Raku concurrency feature that I already knew about – and a good example of how important that feature can be. The feature I'm talking about is the way Raku manages backpressure.

Raku's high level concurrency tools are carefully designed with backpressure in mind. Quoting directly from the 6guts blog post that I've already linked to:

> Another interesting problem for any system processing asynchronous messages is that of
> backpressure. In short, how do we make a source of messages emit them at a rate no greater than
> that of the processing logic? The general principle with Supply is that the sender of a message
> pays the cost of its processing. So, if I have $source.map(&foo).tap(&bar), then whatever emits at
> the source pays the cost of the map of that message along with the processing done by the tap callback.

Backpressure is extremely helpful because it can allow a program to handle arbitrary amounts of input without requiring a proportionally arbitrary (i.e., infinite) amount of memory. Of course, given Raku's commitment to flexibility and programmer control, it shouldn't come as any surprise that Raku also makes it easy to opt out of this automatic backpressure when doing so makes sense.

Learn::Raku::With presents a good example of a situation where it makes sense to opt out of Raku's default backpressure semantics. As I mentioned above, Learn::Raku::With generates a stream of frames and transmits those frames to all connected clients. If we applied backpressure to this system in the normal way, then transmitting the frames to the clients would exert backpressure on generating the frames – that is, we'd be guaranteed not to generate frames any faster than we can transmit them.

This would have some benefits – most notably, we'd never need to buffer generated frames and our memory use would never grow. But, in the Learn::Raku::With context, it'd have one very notable flaw as well: the normal backpressure system would mean that any time we can't transmit frames to clients at 60 fps, the actual speed of Learn::Raku::With's balls would get reduced. This obviously isn't ideal – while 60 fps is a nice goal to shoot for, 45 fps is an absolutely fine frame rate, and it'd be much better to drop 1 out of 4 frames and display the balls at normal speed in 45 fps than to keep every frame but display the balls in slow motion.

Thus, Learn::Raku::With pairs a Supply with a Channel to allow it to drop frames instead of generating frames more slowly – but that, of course, means that it needs to store at least a few frames at least temporarily.

But storing extra frames, in turn, means both that the program will take up more memory and, crucially, that there will be more work for the garbage collector to do. And that work will take some time – time that necessarily cannot be spent transmitting those frames to the client. But since the whole reason we need to store those extra frames was that we're generating them faster than we can ship them off to the client, it's possible for the GC-pause-induced extra delay to result in more extra frames. Which can then result in more garbage to collect, and thus more delay, and thus more frames.

I'm sure you can see where that sort of infinite loop heads and it's certainly not to anywhere good. In fact, a previous version of Learn::Raku::With could get itself into exactly that sort of situation pretty easily, consuming ever-more RAM and operating ever more slowly (I think I measured it as hitting at least 50 GB). The current version is much more resilient, but the risk remains.

All of this is relevant to persistent data structures because one of the main benefits of these data structures is that the power of structural sharing makes copies vastly less expensive – verging on free, in fact. This benefit would go a long way towards avoiding the sort of issues I outlined above. I'd always known that these inexpensive copies are a key benefit of the persistent data structures I'm building, but I'd previously focused on this as a way to reduce the memory footprint of Raku programs. After building Learn::Raku::With, however, I recognize that structural sharing also reduces GC pressure and thus improves the speed at which Raku programs can execute – which can avoid the sort of death spirals described above. This means that having persistent data structures will not only make many Raku programs more memory efficient but will also make certain kinds of Raku programs much more feasible to write without spiraling into a need for ever-increasing amounts of RAM.

Implementation progress

So, having spent a few thousand words explaining things related to but not directly covered by the grant, let me say at least a bit about the implementation work itself. In the nature of implementation work, it's not nearly as interesting as more exploratory work and I don't have very much to say – I've very much stuck to the plan in my grant proposal.

Specifically, I started by implementing the persistent version of the List (still open to bikeshedding on the name!). I have made good progress on implementing a persistent List with the structural sharing features described in my proposal but without the bit-shifting indexing (I'm instead currently using the simplified indexing scheme that involves converting the key to a different base and calculating the relevant indices manually). This simplified indexing scheme will allow me to test the soundness of the list before switching to the bit-shifting index (which will be key to the performance of the persistent list).

Additionally, I've written a set of tests to confirm the correctness of my implementation so far, and a (much smaller and still preliminary) set of tests that measure the performance of deep copies of built-in Lists and Arrays, which should provide a nice baseline to compare the persistent versions against.

Conclusion

Over the course of September, I made a good start on implementing persistent data structures for Raku. This included significant, though limited, progress on actually implementing the persistent List and much more time-consuming work on auxiliary projects (not counted as part of my grant hours) that have given me a much better foundation for the remaining work. The downside of this auxiliary work is that it took up considerable time that I might have devoted to the actual implementation, both in September and in October. In fact, given the amount of time I've already spent on these research/side projects in October, it's possible that I won't have much or any grant progress to report at the end of this month. Nevertheless, I'm glad I invested the effort into those projects since I believe both left me significantly better equipped to carry out the remainder of the grant. I look forward to putting these lessons into practice as I continue to implement persistent data structures for Raku.


Raku Persistent Data Structures Grant Report for October 2021

The expectation I mentioned in my September grant report came true in October: I had significantly less time to devote to work on this grant in October. This was due in large part to the conceptual work I previously reported on, which stretched well into October.

Accordingly, I was only able to spend 5 hours on the grant in October; I used that time to continue work on the persistent List. Specifically, I nearly completed the basic API (without the bit-shifting optimizations). I was also able to add some initial test coverage. Given that progress, I am now nearing the point where I'll be able to share the WIP code publicly.

On the conceptual front (i.e., work that's relevant to the persistent data structures but not relevant enough that I'm counting it towards the hours funded by this grant), I've also made a fair bit of progress. Most significantly, I was able to complete the work to add fully immutable binding to Rakudo. Thanks to some helpful feedback from Vadim Belman, I also added significantly more detailed error messages, which should help explain the nature of these bindings to Rakoons who aren't as used to programming in an immutable style. As discussed in my last grant report, being able to immutably bind values to their names will mean that the persistent data structures implemented for this grant will be truly immutable – and thereby significantly easier to reason about. This work was merged in rakudo#4536 and should be part of the 2021.11 Rakudo release.

Additionally, I've been putting some thought into how Raku's notion of value types fits with the persistent data structures I'm implementing. As I mentioned last month, one of the core motivations behind this grant proposal is to provide deeply immutable types, so it's clear that the types I'm implementing will be value types and thus will return a ValueObjAt from their WHICH method. But what should the persistent types do when the user adds a non-value type? I can see three options:

  1. Throw an error/require the user to create a value type to pass in. This is basically the approach taken by Elizabeth Mattijsen's ValueType module.
  2. Recursively copy each non-value type into an equivalent value type when it's added to the persistent type ("copy-on-read"). This is straightforward for some types (an Array becomes a List) but might involve more metaprogramming trickery for others (especially user classes).
  3. Store the non-value type if we can prove that no other part of the code has access to it, and only make a copy if we need to hand it out to someone (basically copy-on-write, with some slight tweaks).

Each of these has different pros and cons. In particular, the first is the easiest to implement, the most verbose, and gives the user the most control over/awareness of the performance costs. Conversely, the last is the most "magical" and the only one that risks introducing correctness bugs. Additionally, there could be ways to combine different aspects of these three approaches – for example, one combination of 1. and 2. would be to copy simple/built in types but to throw an error when passed a user type with a mutable field. I'm currently leaning towards the copy-on-read option #2 as a reasonable middle ground, but it's definitely something I'd be interested in hearing other thoughts on.

November is already off to a good start, and I'm optimistic that I'll be able to make significant progress on this grant before the end-of-year business (and exciting things like the Raku Advent Calendar!) start to eat into my coding time.

#537 - How do you spend your free time?

Perl Weekly

Published on Monday 08 November 2021 10:00

Hi

The Perl Weekly newsletter is more than 10 years old, and over those years we have witnessed a huge decline in the number and diversity of posts. There are still a few people who write about Perl regularly, and Mohammad's The Weekly Challenge generates many posts, but in general we see fewer and fewer posts.

I wonder: do those people who used to write about Perl now write about other technologies, or have they just stopped? How come so few people write about Perl? Are there no new things in Perl? Are there no new problems that one solves in Perl?

I know that writing about stuff is not the same thing as writing the stuff, but as many other people who write or teach will attest, the best way to improve your knowledge is to explain it: to explain it to a live audience (at a Meetup) or to write about it in a blog post.

So how do you spend your free time? Do you write more code? Do you try to explain what you wrote, how you wrote it, and why? Or do you spend your time on some other activities? I, for example, started to read a lot more than I did in recent years. And sure, I'll even share my recommendations with you. (See below.)

In any case enjoy your week!

Perl Tk: The label, button, and entry widgets

Perl Maven

Published by Gabor Szabo on Sunday 07 November 2021 23:00