We're happy to have Abigail present at the German Perl Workshop 2026!
The talk, Sharding a database, twice, is aimed at everyone and will be held in English.
There comes a time in the life of a database when it takes too many resources (be it disk space, number of I/O transactions, or something else) to be handled by a single box.
Sharding, where data is distributed over several identically shaped databases, is one technique to solve this.
For a high volume database I used to work with, we hit this limit about a dozen
years ago. Then we hit the limit again two years ago.
In this talk, we will first discuss how we initially switched our systems to make use of a sharded database, without any significant downtime.
PAGI 0.001017 is on CPAN. The headline feature is HTTP/2 support in PAGI::Server, built on the nghttp2 C library and validated against h2spec's conformance suite. HTTP/2 is marked experimental in this release -- the protocol works, the compliance numbers are solid, and we want production feedback before dropping that label.
This post focuses on why h2c (cleartext HTTP/2) between your reverse proxy and backend matters, and how PAGI::Server implements it.
The Quick Version
# Install the HTTP/2 dependency
cpanm Net::HTTP2::nghttp2
# Run with HTTP/2 over TLS
pagi-server --http2 --ssl-cert cert.pem --ssl-key key.pem app.pl
# Run with cleartext HTTP/2 (h2c) -- behind a proxy
pagi-server --http2 app.pl
Your app code doesn't change. HTTP/2 is a transport concern handled entirely by the server. The same async handlers that serve HTTP/1.1 serve HTTP/2 without modification.
"I Have nginx in Front, Why Do I Care?"
If nginx terminates TLS and speaks HTTP/2 to clients, why does the backend need HTTP/2 too?
The answer is h2c -- cleartext HTTP/2 between your proxy and your application server. No TLS overhead, but all of HTTP/2's protocol benefits on the internal hop: stream multiplexing over a single TCP connection, HPACK header compression (especially effective for repetitive internal headers like auth tokens and tracing IDs), and per-stream flow control so a slow response on one stream doesn't block others.
The practical wins: fewer TCP connections between proxy and backend (one multiplexed h2c connection replaces a pool of HTTP/1.1 connections), less file descriptor and kernel memory pressure, and no TIME_WAIT churn from connection recycling.
Where h2c Matters
gRPC requires HTTP/2 -- it doesn't work over HTTP/1.1 at all. If you're building gRPC services, h2c is mandatory.
API gateway fan-out is where multiplexing shines. When your gateway fans out to 10 backend services per request, h2c means 1-2 connections per backend instead of a pool of 50-100.
Service mesh environments (Envoy/Istio sidecars) default to HTTP/2 between services. A backend that speaks h2c natively means one less protocol translation.
A Note on Proxies
Not all proxies handle h2c equally:
- Envoy has the best h2c upstream support, with full multiplexing.
- Caddy makes it trivial: reverse_proxy h2c://localhost:8080
- nginx supports h2c via grpc_pass for gRPC workloads, but its generic proxy_pass doesn't support proxy_http_version 2.0.
For full multiplexing to backends, Envoy or Caddy are better choices than nginx today.
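For context, the Caddy directive mentioned above lives inside an ordinary site block; a minimal sketch of a full Caddyfile (the site name and backend port are assumptions):

```shell
# Minimal sketch of a Caddyfile using the h2c upstream scheme;
# "example.com" and port 8080 are placeholder assumptions
cat > Caddyfile <<'EOF'
example.com {
    reverse_proxy h2c://localhost:8080
}
EOF
cat Caddyfile
```

With this, Caddy terminates TLS for browsers and speaks cleartext HTTP/2 to the backend on the internal hop.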
HTTP/2 Over TLS -- No Proxy Required
h2c isn't the only mode. PAGI::Server also does full HTTP/2 over TLS with ALPN negotiation:
pagi-server --http2 --ssl-cert cert.pem --ssl-key key.pem app.pl
This is useful when you don't want the overhead or complexity of a reverse proxy -- internal tools, admin dashboards, development servers, or any app where the traffic doesn't justify a separate proxy layer. Browsers get HTTP/2 directly, with TLS, no nginx required.
What PAGI::Server Does
Dual-Mode Protocol Detection
With TLS, PAGI::Server uses ALPN negotiation during the handshake -- advertising h2 and http/1.1, letting the client choose. The protocol is decided before the first byte of application data.
Without TLS (h2c mode), PAGI::Server inspects the first 24 bytes of each connection for the HTTP/2 client connection preface. If it matches, the connection upgrades to HTTP/2. If not, it falls through to HTTP/1.1. Both protocols coexist on the same port, same worker -- no configuration needed beyond --http2.
Either way, HTTP/1.1 clients are still served normally. The server handles both protocols on the same port.
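The h2c detection hinges on the client connection preface, which RFC 7540 fixes at exactly 24 octets; a quick sketch (assuming a standard shell with printf, and curl for the commented commands):

```shell
# The HTTP/2 client connection preface is a fixed 24-octet string
# (RFC 7540, section 3.5); this is what the server sniffs for:
printf 'PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n' | wc -c    # prints 24

# Against a running server (hypothetical address), each path can be
# exercised with curl:
#   curl --http2-prior-knowledge http://localhost:8080/   # h2c: sends the preface
#   curl --http1.1 http://localhost:8080/                 # plain HTTP/1.1, same port
```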
WebSocket over HTTP/2 (RFC 8441)
Most HTTP/2 implementations skip this. PAGI::Server supports the Extended CONNECT protocol from RFC 8441, which tunnels WebSocket connections over HTTP/2 streams. Multiple WebSocket connections multiplex over a single TCP connection instead of requiring one TCP connection each.
Compliance
Built on nghttp2 (the same C library behind curl, Firefox, and Apache's mod_http2). PAGI::Server passes 137 of 146 h2spec conformance tests (93.8%). The 9 remaining failures are in nghttp2 itself and shared with every server that uses it. Load tested with h2load at 60,000 requests across 50 concurrent connections with no data loss or protocol violations.
Full test-by-test results are published: HTTP/2 Compliance Results.
Multi-Worker and Tunable
HTTP/2 works in multi-worker prefork mode. Each worker independently handles HTTP/2 sessions:
pagi-server --http2 --workers 4 app.pl
Protocol settings are exposed for environments that need fine-tuning:
my $server = PAGI::Server->new(
    app   => $app,
    http2 => 1,
    h2_max_concurrent_streams => 50,     # default: 100
    h2_initial_window_size    => 131072, # default: 65535
    h2_max_frame_size         => 32768,  # default: 16384
    h2_max_header_list_size   => 32768,  # default: 65536
);
Most deployments won't need to touch these. The defaults follow the RFC recommendations.
Context in the Perl Ecosystem
Perl has had HTTP/2 libraries on CPAN (Protocol::HTTP2, Net::HTTP2), but application servers haven't integrated them with validated compliance testing. PAGI::Server is the first to publish h2spec results and ship h2c with automatic protocol detection alongside HTTP/1.1. If you're currently running Starman, Twiggy, or Hypnotoad, none of them offer HTTP/2.
What Else Is in 0.001017
The rest of the release is operational improvements:
- Worker heartbeat monitoring -- the parent process detects workers with blocked event loops and replaces them via SIGKILL + respawn. Default 50s timeout. Only triggers on true event loop starvation; async handlers using await are unaffected.
- Custom access log format -- format strings with atoms like %a (address), %s (status), %D (duration).
- TLS performance fix -- shared SSL context via SSL_reuse_ctx eliminates per-connection CA bundle parsing. 26x throughput improvement at 8+ concurrent TLS connections.
- SSE wire format fix -- now handles CRLF, LF, and bare CR line endings per the SSE specification.
- Multi-worker fixes -- shutdown escalation, parameter pass-through, and various stability improvements.
Getting Started
# Install PAGI
cpanm PAGI
# Install HTTP/2 support (optional)
cpanm Net::HTTP2::nghttp2
# Run your app with HTTP/2
pagi-server --http2 app.pl
Links
- PAGI 0.001017 on CPAN
- HTTP/2 Compliance Results
- PAGI on GitHub
- h2spec -- HTTP/2 conformance testing tool
It's been a while since I commented on a Weekly Challenge solution, but here we are at week 360. Such a useful number. So divisible, so circular. It deserves twenty minutes.
Task 2: Word Sorter
The task
You are given a sentence. Write a script to order words in the given
sentence alphabetically but keep the words themselves unchanged.
# Example 1
# Input:  $str = "The quick brown fox"
# Output: "brown fox quick The"

# Example 2
# Input:  $str = "Hello World! How are you?"
# Output: "are Hello How World! you?"

# Example 3
# Input:  $str = "Hello"
# Output: "Hello"

# Example 4
# Input:  $str = "Hello, World! How are you?"
# Output: "are Hello, How World! you?"

# Example 5
# Input:  $str = "I have 2 apples and 3 bananas!"
# Output: "2 3 and apples bananas! have I"
The thoughts
This should be quite simple: split the words, sort them, put them back together. The sort should be case-insensitive.
join " ", sort { lc($a) cmp lc($b) } split(" ", $str);
Creeping doubt #1
Is converting to lowercase with lc the right way to do case-insensitive comparisons? Not really. Perl has the fc -- fold case -- function to take care of subtleties in Unicode. We won't see those in simple ASCII text, but for the full rabbit hole, start with the documentation of fc.
Creeping doubt #2
Doing the case conversion inside the sort means that we will invoke that every time there's a string comparison, which will be quite redundant. We could (probably?) speed it up by pre-calculating the conversions once.
The solution
sub sorter($str)
{
return join " ",
map { $_->[0] }
sort { $a->[1] cmp $b->[1] }
map { [$_, fc($_)] }
split(" ", $str);
}
This solution uses the idiom known as the Schwartzian transform. Every word gets turned into a pair [original_word, case_folded_word]. That list of pairs gets sorted, and then we select the original words out of the sorted pairs. This is best read bottom-up.
- split(" ", $str) -- turns the string into a list of words, where words are loosely defined by being whitespace-separated.
- map { [$_, fc($_)] } -- every word turns into a pair: the original, and its case-folded variant. The result is a list of array references.
- sort { $a->[1] cmp $b->[1] } -- sort by the case-folded versions. The result is still a list of array references.
- map { $_->[0] } -- select the original word from each pair.
- join " " -- return a single string, where the words are separated by one space.
Does it blend?
A quick benchmark shows that it is indeed faster to pre-calculate the case folding. This example used a text string of about 60 words.
             Rate  oneline  pre_lc
oneline   32258/s       --    -45%
pre_lc    58140/s      80%      --
My intuition says that when the strings are much shorter, the overhead of the transform might not offset the gains in the sort, but as is so often true, my intuition is crap. This is the result for a string of five words:
oneline   470588/s      --    -59%
pre_lc   1142857/s    143%      --
cpan/IO-Compress - Update to version 2.217
2.217 1 February 2026
* Update release date in README
Sun Feb 1 11:02:46 2026 +0000
ce5ff6ea860443bc27ca180993852eb6ef1a63e8
* Refresh zipdetails from https://github.com/pmqs/zipdetails
Sun Feb 1 10:58:21 2026 +0000
0fa7ba236438fb2e022f9f2bb92caac25f075cc5
* Delete GZIP environment variable before running interop tests Fixes #24
Sat Jan 31 12:22:30 2026 +0000
6304ffa76606030467900b17de088504ce026a0d
* Update version to 2.217
Sat Jan 31 11:36:36 2026 +0000
6405df64a348fd953a0ddebb30422d20e2c1bd9f
cpan/Compress-Raw-Bzip2 - Update to version 2.217
2.217 31 January 2026
* Update version to 2.217
Sat Jan 31 11:52:54 2026 +0000
773773f59082a73134212a2d63368683027122ed
cpan/Compress-Raw-Zlib - Update to version 2.218
2.218 3 February 2026
* Update version to 2.218
Tue Feb 3 10:29:39 2026 +0000
ad08cfc854823793fb1fa4f6f5e2dda644f4c581
* Fix for regression of #34
Tue Feb 3 10:17:49 2026 +0000
4133174c92633fb7d8562f514746d08baf667ad3
embed.pl: Create long_names.c and populate it

This replaces the defective method previously used to automatically generate long name (prefixed with "Perl_") synonyms for macros that have an implicit thread context. That method had been in place since 5.43.2. Leon Timmermans pointed out the flaw (https://github.com/Perl/perl5/pull/23458).

The long name synonyms have the thread context passed explicitly. The method worked by discarding the explicit parameter, relying on the implicit one. This is valid if and only if the explicit value matches the implicit value. This is always the case when the caller is from inside the perl core. When the macro is not visible outside core, we know it can only be called from inside, so the method is retained in that case.

Otherwise, it still is quite likely to be the case, but not always. In an application that contains more than one embedded perl instance, that application could be intermixing calls to the instances, using the long name forms, and the explicit thread context might not match the implicit one.

The solution adopted here that works for this case (and I can't think of another way) is to create an actual function for each long name synonym. The function merely calls its respective macro. The embedding application calls the function with the correct thread, and the wrapped macro gets that properly.

Perl has long automatically generated short name macros for functions listed in embed.fnc. A consequence of this commit is that it is now possible to go the other direction: to start with a short name macro, and automatically generate a long name function for it. This means we can swap implementations at will, without affecting any calling source code.

That doesn't work for fancy macros that use the C preprocessor language for things, like the '#' and '##' operators, or expanding a single argument to a list, such as 'STR_WITH_LEN' does. And the behavior isn't precisely synonymous if the macro evaluates an argument more than once, and is called with that argument being an expression with side effects.
embed.pl: White space only

Indent, in preparation for the next commit, which will place these lines in a new block.
… or maybe some more ;)
- 00:00 Introduction
- 01:30 OSDC Perl, mention last week
- 03:00 Nikolai Shaplov NATARAJ, one of our guests, author of Lingua-StarDict-Writer on GitLab.
- 04:30 Nikolai explaining his goals about the security memory leak in Net::SSLeay
- 05:58 What we did earlier. (Low-hanging fruit.)
- 07:00 Let's take a look at the repository of Net::SSLeay
- 08:00 Try to understand what happens in the repository.
- 09:15 A bit of explanation about adopting a module. (co-maintainer, unauthorized uploads)
- 11:00 PAUSE
- 15:30 Check the "river" status of the distribution. (reverse dependencies)
- 17:20 You can CC me in your correspondence.
- 18:45 Ask people to review your pull requests.
- 21:30 Mention the issue with DBIx::Class and how to take over a module.
- 23:50 A bit about the OSDC Perl page.
- 24:55 CPAN Dashboard and how to add yourself to it.
- 27:40 Show the issues I opened asking authors if they are interested in setting up GitHub Actions.
- 29:25 Start working on Dancer-Template-Mason
- 30:00 Clone it.
- 31:15 perl-tester Docker image.
- 33:30 Installing the dependencies in the Docker container.
- 34:40 Create the GitHub Workflow file. Add to git. Push it out to GitHub.
- 40:55 First failure in the CI, which is unclear.
- 42:30 Verifying the problem locally.
- 43:10 Open an issue.
- 58:25 Can you talk about dzil and Dist::Zilla?
- 1:02:25 We get back to working on the CI.
- 1:03:25 Add --notest to make installations run faster.
- 1:05:30 Add the git configuration to the CI workflow.
- 1:06:32 Is it safe to use --notest when installing dependencies?
- 1:11:05 git rebase, squashing the commits into one commit.
- 1:13:35 git push --force
- 1:14:10 Send the pull request.
I've published version 0.28 of App::Test::Generator, the black-box test case generator. I focused on tightening SchemaExtractor’s handling of accessor methods and making the generated schemas more honest and testable. I fixed cases where getter/setter and combined getset routines were being missed, added targeted tests to lock in correct detection of getset accessors, and clarified output typing so weak scalar inference no longer masquerades as a real type. I added explicit 'isa' coverage, ensuring that object expectations are captured and that generated tests correctly fail when passed the wrong object type.
I would like to use a Perl one-liner to modify numeric values in a text file. My data are stored in a text file:
0, 0, (1.263566e+02, -5.062154e+02)
0, 1, (1.069488e+02, -1.636887e+02)
0, 2, (-2.281294e-01, -7.787449e-01)
0, 3, (5.492424e+00, -4.145492e+01)
0, 4, (-7.961223e-01, 2.740912e+01)
These are complex numbers with their respective i and j coordinates: i, j, (real, imag). I would like to modify the coordinates, to shift them from zero-based to one-based indexing. In other words I would like to add one to each i and each j. I can correctly capture the i and j, but I'm struggling to treat them as numbers not as strings. This is the one-liner I'm using:
perl -p -i.bak -w -e 's/^(\d+), (\d+)/$1+1, $2+1/' complex.txt
How do I tell Perl to treat $1 and $2 as numbers?
My expected output would be:
1, 1, (1.263566e+02, -5.062154e+02)
1, 2, (1.069488e+02, -1.636887e+02)
1, 3, (-2.281294e-01, -7.787449e-01)
1, 4, (5.492424e+00, -4.145492e+01)
1, 5, (-7.961223e-01, 2.740912e+01)
In Perl under Unix (HP-UX and Linux) I start instances of a program and redirect their outputs to files. I fork without problem, close the old STDOUT and STDERR, open a file to use as STDOUT, then assign STDOUT to STDERR. In the Perl child instance, if I write:
print "This goes to STDOUT\n";
print STDERR "This goes to STDERR\n";
both lines go in the file redirection target. I use exec to start instances of another program whose output will be redirected to that file.
A test script in Bash periodically produces output to the standard output, sleeps, produces output to the error output, sleeps, and loops infinitely. When I call it using the command line I see all output. If I redirect the output on the command line to different files (one for STDOUT, one for STDERR) the outputs get written to the different files.
When I exec my Perl script into that test script, after redirecting the outputs in Perl, I only see the lines the script produces to STDOUT, not the lines to STDERR. *STDERR = *STDOUT works for Perl itself but not for any program it becomes after exec. How do I solve this?
Perl code:
#!/bin/perl
$plc = 666;
$pid = fork;
if ($pid == -1) {
    die "ERREUR: Le fork a échoué !";
}
if ($pid) {
    # We are in the PARENT process. $pid is the PID of the child that was created.
    print "Fork réussi: PID de l'enfant: $pid\n";
    exit 0;
} else {
    print "Gaga ? ($plc)\n";
    close *STDOUT;     # Close the file descriptor for standard output.
    open *STDOUT, '>', 'stdout.txt' or die "Gaga n'a pu ouvrir le fichier stdout.txt en écriture !"; # Reopen it on the specified file.
    close *STDERR;     # Close standard error.
    *STDERR = *STDOUT; # And assign it the same file as standard output.
    print "Gaga sur stdout.\n";
    print STDERR "Gaga sur stderr.\n";
    exec {'zombieWriter'} ('zombieWriter', $plc);
}
zombieWriter script:
#!/bin/bash
while [[ 1 ]];
do
    date
    echo "Je suis un zombie ($1) et c'est ma joie !"
    sleep 2
    date >&2
    echo "Je suis un zombie ($1) qui écrit en erreur..." >&2
    sleep 3
done
The output in the stdout.txt file:
Gaga sur stdout.
Gaga sur stderr.
jeu. 05 févr. 2026 12:15:25 CET
Je suis un zombie (666) et c'est ma joie !
jeu. 05 févr. 2026 12:15:30 CET
Je suis un zombie (666) et c'est ma joie !
jeu. 05 févr. 2026 12:15:35 CET
Je suis un zombie (666) et c'est ma joie !
File redirection works in Perl but not for the shell script it execs into. If I do not explicitly close *STDERR before *STDERR = *STDOUT, the lines the zombieWriter script outputs to the error output show on my shell.
How do I get a true redirection before I exec into that script?
I'm trying to write some Perl code in which a hash reference is defined at the package level, along with some functions that can work with that hash reference.
The following code works
package test {
    sub run {
        my $h1 = {
            e => "elder",
            h => "hazel",
            i => "ivy",
            m => "maple",
            o => "oregano",
            s => "sycamore",
            y => "yarrow",
        };
        test_ref_01 ($h1);
    }

    sub test_ref_01 {
        my ($h1) = $_[0];
        print "\n";
        print $h1;
        print "\n";
        print ref ($h1);
        print "\n";
        print ref (\$h1);
        print "\n";
        # print $h1{e};
        # print "\n";
        print $h1->{e};
        print "\n";
    }
}
but this code doesn't; it gives an error.
package test {
    my $h1 = {
        e => "elder",
        h => "hazel",
        i => "ivy",
        m => "maple",
        o => "oregano",
        s => "sycamore",
        y => "yarrow",
    };

    sub test_ref_01 {
        print "\n";
        print $h1;
        print "\n";
        print ref ($h1);
        print "\n";
        print ref (\$h1);
        print "\n";
        # print $h1{e};
        # print "\n";
        print $h1->{e};
        print "\n";
    }
}
What I'm trying to accomplish is:
- define a hash reference ($h1) at the package level
- have each of the subroutines defined in the package work with the same hash reference ($h1).
Sydney Perl continues regular meetings with our next in February
Please join us on Tuesday 24th Feb 2026 at Organic Trader Pty Ltd.
Unit 13/809-821 Botany Road Rosebery
6:30pm to 9pm.
Chances are folks will head to a nearby Pub afterward.
I will talk about my 5 years working at Meta Platforms and 6 months at Amazon, specifically contrasting their engineering cultures, and generally discussing what Google calls an SRE culture, contrasting my experiences at Big Tech with "Middle Tech".
Getting there:
Come in the front door marked "Konnect", then take the first door on the right and go up the stairs to Level 1.
Mascot station + a 20-minute walk, or the 358 bus to Gardener's Road if you don't want to walk so far.
Or Waterloo Metro station + the 309 bus to Botany Road after Harcourt Parade.
We have a Signal group chat which we use to co-ordinate travel assistance on the day. For example, if you are lost or need a pick up from the station when it's raining etc. Reach out and someone will add you.
Join the email list!
The email list is very low volume and the place to get these updates (I sometimes forget to post them here).
Make sure to add the "from" address sydney-pm@pm.org to a custom filter and your allow-lists (and similar) to maximize the chances that Google/Microsoft/etc. don't discard the messages. A plug here for Fastmail, the Australian-native and Perl-ish email provider, which is very popular and plays well.
Have you ever been working on someone else's Perl code or perhaps your own from 25 years ago and wondered what the formatting style should be?
I looked around and did not see anything, and since I have had the idea for a decade, I started trying to piece something together. I decided to use perltidy itself, of course. It's not production ready; heck, it may not even be formatted to perltidy's own perltidyrc!
However, it's done enough to share the idea and see if there is any other interest out there. Please fork it and hack away. I have also opened an issue with perltidy to share:
Perl::Tidy::StyleDetector
https://github.com/tur-tle/perltidy
https://github.com/tur-tle/perltidy/blob/detect-format/STYLE_DETECTOR_README.md
As announced two weeks ago, I'm starting a series on dev.to about "Beautiful Perl features". The intent is to reach people outside of the Perl community and try to convince them that Perl is not the dreadful language they imagine, by showing them facts, not opinions.
The first two posts are now online:
- https://dev.to/damil/beautiful-perl-features-introduction-to-the-series-b6a
- https://dev.to/damil/beautiful-perl-feature-blocks-2o4
On this occasion I also started a new hashtag, #beautifulperl -- do not hesitate to reuse it in your own publications whenever appropriate.
Of course I'm happy to get feedback, either through online comments here or on dev.to, or through private email.
Many thanks to the reviewers who helped me to polish this material!
Best regards, Laurent
Initial set posted on our Reddit page (https://www.reddit.com/r/perlcommunity/). Please direct all comments there.
As we did last time, we are giving early access to members of our low-volume mailing list. You may join by going to https://perlcommunity.org/science/#mailing_list.
You should be planning to attend the Summer PCC 2026 (Austin, TX / Virtual), which, as consistently promised, will take place on July 3rd and 4th. A CFP and paper call will be available soon.
Videos from this past Winter PCC (December 17–18) will be made available after our Summer PCC.
Calls to Action:
- join our mailing list
- think about submitting a Perl 5 talk or SPJ paper candidate to the Summer PCC in Austin, TX
- visit The Perl Community YouTube Channel for 2024 PCC videos
- consider supporting the Science Perl Journal; Issue #1 is still available at Barnes & Noble: https://science.perlcommunity.org/spj
2026 is sure to be as productive and busy for The Perl Community+ as 2024 and 2025.
Cheers, Brett Estrade (OODLER)
+The Perl Community is a 501(c)(3) organization based in Austin, Texas, USA. It is dedicated to the advancement of Perl 5 through its committees, including AI Perl, Perl::Types, and the Science Perl Committees, as well as publications like the Science Perl Journal. (Registered DOI Prefix: 10.63971)

My first interaction with Mojo and WebSocket. I have documented my experience in this post: https://theweeklychallenge.org/blog/mojo-with-websocket
- App::rdapper - a command-line RDAP client.
  - Version: 1.23 on 2026-02-02, with 21 votes
  - Previous CPAN version: 1.22 was 3 days before
  - Author: GBROWN
- Beam::Wire - Lightweight Dependency Injection Container
  - Version: 1.030 on 2026-02-04, with 19 votes
  - Previous CPAN version: 1.029 was 1 day before
  - Author: PREACTION
- BerkeleyDB - Perl extension for Berkeley DB version 2, 3, 4, 5 or 6
  - Version: 0.67 on 2026-02-01, with 14 votes
  - Previous CPAN version: 0.66 was 1 year, 3 months, 18 days before
  - Author: PMQS
- Data::Alias - Comprehensive set of aliasing operations
  - Version: 1.29 on 2026-02-02, with 19 votes
  - Previous CPAN version: 1.28 was 3 years, 1 month, 12 days before
  - Author: XMATH
- Image::ExifTool - Read and write meta information
  - Version: 13.50 on 2026-02-07, with 44 votes
  - Previous CPAN version: 13.44 was 1 month, 22 days before
  - Author: EXIFTOOL
- IO::Compress - IO Interface to compressed data files/buffers
  - Version: 2.217 on 2026-02-01, with 19 votes
  - Previous CPAN version: 2.216 was 1 day before
  - Author: PMQS
- Perl::Tidy - indent and reformat perl scripts
  - Version: 20260204 on 2026-02-03, with 147 votes
  - Previous CPAN version: 20260109 was 25 days before
  - Author: SHANCOCK
- Sisimai - Mail Analyzing Interface for bounce mails.
  - Version: v5.6.0 on 2026-02-02, with 81 votes
  - Previous CPAN version: v5.5.0 was 1 month, 28 days before
  - Author: AKXLIX
- SPVM - The SPVM Language
  - Version: 0.990127 on 2026-02-04, with 36 votes
  - Previous CPAN version: 0.990126 was before
  - Author: KIMOTO
- Term::Choose - Choose items from a list interactively.
  - Version: 1.780 on 2026-02-04, with 15 votes
  - Previous CPAN version: 1.779 was 2 days before
  - Author: KUERBIS
This is the weekly favourites list of CPAN distributions. Votes count: 61
Week's winner: XS::JIT (+5)
Build date: 2026/02/07 20:47:56 GMT
Clicked for first time:
- Ancient - Post-Apocalyptic Perl
- App::CPANTS::Lint - front-end to Module::CPANTS::Analyse
- Claude::Agent - Perl SDK for the Claude Agent SDK
- Dancer2::Plugin::OpenAPI - create OpenAPI documentation of your application
- Meow - Object Orientation
- Net::Z3950::ZOOM - Perl extension for invoking the ZOOM-C API.
Increasing its reputation:
- AnyEvent (+1=168)
- App::ccdiff (+1=3)
- App::Software::License (+1=3)
- Class::XSAccessor (+1=29)
- Class::XSConstructor (+3=8)
- Const::Fast (+1=38)
- CPAN::Digger (+1=4)
- CPAN::Uploader (+1=25)
- CryptX (+1=53)
- DBIx::Class::Async (+1=2)
- Devel::Cover::Report::Coveralls (+1=19)
- Dist::Zilla (+1=188)
- Excel::ValueReader::XLSX (+1=2)
- Excel::ValueWriter::XLSX (+1=3)
- File::HomeDir (+1=35)
- File::Tail (+1=8)
- Hypersonic (+3=3)
- Image::PHash (+1=3)
- IO::Async (+1=80)
- Marlin (+4=11)
- MetaCPAN::Client (+1=26)
- Mojo::Redis (+1=21)
- Mojolicious (+1=510)
- Moos (+1=6)
- MooseX::XSConstructor (+1=3)
- MooX::Singleton (+1=6)
- MooX::XSConstructor (+1=3)
- Net::Daemon (+1=3)
- Net::Libwebsockets (+1=4)
- Net::Server (+1=34)
- ODF::lpOD (+1=4)
- PAGI (+1=7)
- Parallel::ForkManager (+1=102)
- PathTools (+1=84)
- perl (+1=442)
- Plack::Middleware::ProofOfWork (+2=2)
- Regexp::Grammars (+1=39)
- Reply (+1=62)
- Scalar::List::Utils (+1=184)
- Sub::HandlesVia (+1=10)
- Sub::StrictDecl (+1=3)
- Sys::Statistics::Linux (+1=4)
- XS::JIT (+5=5)
Beautiful Perl series
This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.
BLOCK: sequence of statements
Today's topic is the construct named BLOCK in the perl documentation: a sequence of statements enclosed in curly brackets {}. Both the concept and the syntax are common to many programming languages - they can also be found in languages of C heritage like Java, JavaScript and C++, among others; but Perl differs in various respects - read on to dig into the details.
BLOCKs in Perl may be used:
- as part of a compound statement, after an initial control flow construct like if, while, foreach, etc.;
- as the body of a sub declaration (subroutine or function);
- at any location where a single statement is expected. Obviously it would also be possible to just insert a plain sequence of statements, but enclosing them in a BLOCK has the advantage of creating a new delimited lexical scope, so that the effect of inner declarations is guaranteed to end when control flow exits the BLOCK. Details about lexical scopes are discussed below;
- as part of a do expression, so that the whole BLOCK becomes a value that can be inserted within a more complex expression. This may be convenient for clarity of thought in some algorithms, and also for avoiding a subroutine call when efficiency is at stake.
All modern programming languages have constructs equivalent to usages 1 and 2, because these are crucial for structuring algorithms and for handling complexity in programming. Usage 3 is less common, and usage 4 is quite particular to Perl. The next chapters will cover these various aspects in more depth.
Lexical scope
In all usage situations listed above, a Perl BLOCK always opens a new lexical scope, i.e. a portion of code that delimits the effect of inner declarations. Things that can be temporarily declared inside a lexical scope are:
- lexical variables, temporarily binding a variable name to a memory location on the stack - declared through the keywords my or state;
- lexical pragmata, temporarily importing semantics into the current BLOCK - introduced through the keywords use or no;
- (beginning with Perl 5.18) lexical subroutines, only accessible within the scope - also declared through the keywords my or state.
When a BLOCK is used as part of a compound statement (if, foreach, etc.), the initial clause before the BLOCK is already part of the lexical scope, so that variables declared in that clause can then be used within the BLOCK:
foreach my $member (@list) {
work_with($member); # here $member can be used
}
say $member; # ERROR : $member is no longer in scope
This is also true when a compound statement has several clauses and therefore several BLOCKs, like if (...) {...} elsif {...} else {...}. Further examples, together with detailed explanations, can be found in perlsub.
Declarations of variables, pragmata or subroutines have the following facts in common:
- they take effect starting from the statement after the declaration. Technically declarations can occur anywhere in the BLOCK; the most common usage is to put them at the beginning, but there is no obligation to do so;
- they may temporarily shadow the effect of declarations in higher scopes;
- their effect ends when control flow exits the BLOCK, whatever the exit cause may be (normal end of the block, explicit exit through instructions like return, next or goto, or exit because an exception was encountered).
Declarations in lexical scopes have effects both at compile-time (the Perl interpreter temporarily alters its parsing rules) and at runtime (the interpreter temporarily allocates or releases resources). For example in the following snippet:
{
    my $db_handle = DBI->connect(@db_connection_args);
    my @large_array = $db_handle->selectall_array($some_sql, @some_bind_values);
    open my $file_handle, ">", $output_file or die "could not open $output_file: $!";
    print $file_handle formatted_output($_) foreach @large_array;
}
the interpreter knows at compile-time that the variables $db_handle, @large_array and $file_handle are allowed within the BLOCK, but not outside of it, so it can check statically for typos or other misuses of variable names; then at runtime the interpreter will dynamically allocate and release memory and external handles when control flow crosses the BLOCK. This is quite similar to what happens in a statically typed language like Java. By contrast, Python, often grouped in the same family as Perl because it is also a dynamically-typed language, does not have the same treatment of lexical scopes.
Lexical scopes in Python are not like Perl
In Python, there is no generally available construct as versatile as a Perl BLOCK. Subsequences of statements are expressed through indentation, but this is only allowed as part of a function definition or as part of a compound statement. A compound statement must always start with a keyword (if, for, while, etc.) that opens the header clause and is followed by a suite:
if is_success():
summary = summarize_results()
report_to_user(summary)
cleanup_resources()
A 'suite' in Python is not to be confused with a 'block'. The official documentation is very careful about the distinction, but many informal texts in the Python literature confuse the two; yet the difference is quite important because:
- a Python block, "a piece of Python program text that is executed as a unit", only occurs within a module, a function body or a class definition (https://docs.python.org/3/reference/executionmodel.html#structure-of-a-program);
- a Python suite, "a group of statements controlled by a clause", occurs whenever a clause in a compound statement expects to be followed by some instructions.
Blocks and suites look similar, because they are both expressed as indented sequences of statements; but the difference is that a 'block' opens a new lexical scope, while a 'suite' does not. In particular, this means that variables declared in a compound statement are still available after the statement has ended, which is quite surprising for programmers coming from a C-like culture (including Perl and Java). So for example
for i in [1, 2, 3]:
pass
print(i)
is valid Python code and prints 3. This example is taken from https://eli.thegreenplace.net/2015/the-scope-of-index-variables-in-pythons-for-loops/ which does a very good job at explaining why Python in this respect works differently from many other languages.
Lexical scope vs dynamic scope
In addition to traditional lexical scoping, Perl also has another construct named dynamic scoping, introduced through the keyword local. Dynamic scoping is a vestige from Perl 1, but still very useful in some specific use cases; it will be discussed in a future post in this series. For the moment, let us just say that in all common situations, lexical scoping is the most appropriate mechanism for working with variables guaranteed not to interfere with the global state of the program.
Lexical variables
Lexical variables in Perl are introduced with the my keyword. Several variables, possibly of different types, can be declared and initialized in one single statement:
my @table = ([qw/x y z/], [1, 2, 3], [9, 8, 7]);
my ($nb_rows, $nb_cols) = (scalar @table, scalar $table[0]->@*);
my (@db_connection_args, %select_args);
Variable initialization and list destructuring
Lexical variables can only be used starting from the statement after the declaration. Therefore it is illegal in Perl to write something like:
my ($x, $y, $z) = (123, $x + 1, $x + 2);
because at the point where expression $x + 1 is encountered, variable $x is not yet in scope. By contrast, Java would accept int x = 123, y = x + 1, z = x + 2, or JavaScript would accept let x = 123, y = x + 1, z = x + 2, because in those languages variable initializations occur in sequence, while Perl starts by evaluating the complete list on the right-hand side of the assignment, and then distributes the values into variables on the left-hand side.
Python is like Perl: it does not accept x, y = 123, x + 1 because x is not defined in the right-hand side. Both Perl and Python had from the start the notion of "destructuring" a list into several individual variables. Other languages adopted similar features much later:
- JavaScript has an advanced mechanism of destructuring since ES6 (2015), that can be applied not only to lists, but also to objects (records). For destructuring a list, the variables must be put in square brackets on the left-hand side of the assignment, in order to avoid ambiguity with sequences of ordinary assignments:
let x = 123, y = x + 1, z = x + 2; // sequence of assignments
let [a, b, c] = [123, 124, 125]; // list destructuring
- The Amber project for Java recently introduced several mechanisms for pattern matching, which is quite close to the idea of destructuring. However, for the moment it can only be used for destructuring records, not yet lists.
Coming back to Perl, list destructuring has always been part of common idioms, notably for:
- extracting items from command-line arguments
my ($user, $password, @others) = @ARGV;
- extracting items from the argument list to a subroutine
my ($height, $width, $depth) = @_;
- swapping variables
($x, $y) = ($y, $x);
Shadowing variables from higher scopes
Like in other languages, a lexical variable in Perl can shadow another variable of the same name at a higher lexical scope. However, the shadowing effect only starts at the statement after the declaration of the variable. As a result, the shadowed value can still be used in the initializing expression:
my $x = 987;
{ my ($x, $y) = (123, $x + 1);
say "inner scope, x is $x and y is $y"; # "inner scope, x is 123 and y is 988"
}
say "outer scope, x is $x"; # "outer scope, x is 987"
Now let us see how other dynamically typed languages handle variable shadowing.
Shadowing variables in Python
Python has no explicit variable declarations; instead, any assignment instruction implicitly declares the target of the assignment to be a lexical variable in the current lexical scope.
def foo():
x = 123 # declares lexical variable x
y = 456 # declares lexical variable y
x = 789 # assigns a new value to existing variable x
Since the intent to declare is not explicitly stated by the programmer, the interpreter is of little help for detecting errors that would be identified as typos in other languages. In the example above, one could suspect that the intent was to declare a z variable instead of assigning a new value to x.
If an assignment occurs in the middle of a lexical scope, the corresponding variable is nevertheless treated as being lexical from the very beginning of the scope. As a consequence, newcomers to Python can easily be surprised by an UnboundLocalError, which can be shortly demonstrated by this example from the official documentation:
x = 10
def foo():
print(x)
x += 1
foo()
Here the assignment x += 1 implicitly declares x to be a lexical variable for the whole body of the foo() function, even if the assignment comes at the end. In this situation the print() statement raises an exception because at this point lexical variable x is not bound to a value. By contrast, if the assignment is commented out
x = 10
def foo():
print(x)
# x += 1
foo()
the program happily prints 10, because here x is no longer interpreted as a lexical variable, but as the global x.
Python statements global and nonlocal can instruct the parser that some specific variables should not be declared in the current lexical scope, but should instead be taken from the global module scope or, in the case of nested functions or classes, from the next higher scope. So in this respect, Python programming is just the opposite of Perl or Java: instead of explicitly declaring lexical variables, one must explicitly declare the variables that are not lexical. Furthermore, since such declarations apply to the whole current lexical scope, independently of the place where they are inserted, it is an exclusive choice: any use of a given variable name must come either from the current lexical scope, or from a higher scope, or from the global scope. Therefore it is not possible, as in Perl, to use the value of a global x in the initialization expression for a local lexical x.
Shadowing variables in JavaScript
The historical construct for declaring lexical variables in JavaScript was the var keyword, which is still present in the language. The behaviour of var is quite similar to Python lexical variables: variables appear to exist even before they are declared (which is called hoisting in JavaScript); they are scoped by functions or modules, not by blocks, so they still hold values after exiting from the block; and the interpreter does not complain if a variable is declared twice. For all these reasons, var is now considered obsolete, superseded since ES6 (2015) by the keywords const (for variables that do not change after initialization) and let (for mutable variables).
These new constructs indeed brought more safety to the usage of lexical variables in JavaScript: such variables can no longer be used after exiting from the block, and redeclarations raise syntax errors. Yet one subtlety remains: the shadowing effect of a variable declared with let does not start at the location of the declaration, but at the beginning of the enclosing block. This is no longer called "hoisting", but it still means that from the beginning of the block that variable name shadows any variable with the same name in higher scopes; accessing it before the declaration throws a ReferenceError. The region between the start of the block and the declaration is called the temporal dead zone in JavaScript literature.
Shadowing is prohibited in Java
Java has no ambiguity with shadowing ... because it has a more radical approach: it raises a compile-time error when a local variable is declared in an inner block with a name already in use at a higher scope in the same method! The following snippet
public class ScopeDemo {
public static void main(String[] args) {
int x = 987;
{
int x = 123, y = x + 1, z = x + 2;
System.out.println("here x is " + x + " and y is " + y);
}
System.out.println("here x is " + x);
}
}
yields:
ScopeDemo.java:6: error: variable x is already defined in method main(String[])
int x = 123, y = x + 1, z = x + 2;
^
1 error
error: compilation failed
Lexical pragmata
In Perl, lexical scopes are not only used to control the lifetime of lexical variables: they are also used for lexical pragmata that temporarily alter the behaviour of the interpreter, either by adding some semantics (through the keyword use) or by removing some semantics (through the keyword no). Here is an example of one very common idiom:
use strict;
use warnings;
foreach my $user (get_users_from_database()) {
no warnings 'uninitialized';
my $body = "Dear $user->{firstname} $user->{lastname}, bla bla bla";
...
}
At the beginning of the program, the warnings pragma is activated, because this is general good practice: the interpreter can then detect suspect situations and warn about them. But when working with a $user record from the database, some fields might be undef, which is acceptable and no reason to issue a warning - so within that BLOCK the interpreter is instructed to treat undefined data as empty strings, without complaining.
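A minimal self-contained sketch of this behaviour (the %user hash and the __WARN__ handler that counts warnings are illustrative additions, not part of the original example):

```perl
use strict;
use warnings;

my %user = (firstname => 'Ada');    # lastname is intentionally missing

my @warnings;
local $SIG{__WARN__} = sub { push @warnings, @_ };   # capture warnings instead of printing them

{
    no warnings 'uninitialized';
    my $body = "Dear $user{firstname} $user{lastname}";   # no warning inside this BLOCK
}
my $body = "Dear $user{firstname} $user{lastname}";       # warns: uninitialized value

print scalar(@warnings), " warning(s) issued\n";   # 1 warning(s) issued
```

The same interpolation of an undef field is silent inside the BLOCK and warns outside of it.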
In a similar vein, it is sometimes necessary to relax the checks performed by use strict, in particular on the subject of symbolic references. This check forbids programmatic insertion of new subroutines into the symbol table of a module, a sound preventive measure; yet this feature is powerful and very useful in some specific situations - so when needed, one can temporarily disable the check:
foreach my $method_name (@list_of_names) {
no strict 'refs';
*{$method_name} = generate_closure_for($method_name);
}
This technique is used quite extensively, for example, in the Object-Relational Mapping module DBIx::DataModel for generating methods that implement navigation from one table to another related table. Its source code demonstrates these usage patterns.
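A minimal runnable sketch of the same idiom, using a hypothetical Widget package and method names:

```perl
use strict;
use warnings;

# Install a getter method for each name into the Widget symbol table.
foreach my $method_name (qw/color size/) {
    no strict 'refs';    # temporarily allow symbolic references to globs
    *{"Widget::$method_name"} = sub { my $self = shift; $self->{$method_name} };
}

my $widget = bless { color => 'red', size => 42 }, 'Widget';
print $widget->color, "\n";   # red
print $widget->size,  "\n";   # 42
```

Each closure captures its own $method_name lexical, so every generated method reads the right hash field; the no strict 'refs' relaxation is confined to the loop's BLOCK.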
Some pragmata can also reinforce controls instead of alleviating them. One very good example is the autovivification module, which changes the default behaviour of Perl on implicit creation of intermediate references:
my $tree; # at this point, $tree is undef
$tree->{foo}{bar}[1] = 99; # no error; now $tree is {foo => {bar => [undef, 99]}}
Autovivification can be very handy, but it can be dangerous too. If we want to be on the safe side, we can write
{ no autovivification qw/fetch store/;
my $tree; # at this point, $tree is undef
$tree->{foo}{bar}[1] = 99; # ERROR: Can't vivify reference
}
As with lexical variables, lexical pragmata can be nested, the innermost use or no declaration temporarily shadowing previous declarations for the same pragma.
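A small sketch of this nesting behaviour (the $SIG{__WARN__} handler is only there to count warnings):

```perl
use strict;
use warnings;

my @warnings;
local $SIG{__WARN__} = sub { push @warnings, @_ };   # capture warnings instead of printing them

my $x;    # undef
{
    no warnings 'uninitialized';
    my $a = "x is $x";                   # suppressed by the enclosing 'no warnings'
    {
        use warnings 'uninitialized';    # innermost declaration wins in this BLOCK
        my $b = "x is $x";               # warns again
    }
    my $c = "x is $x";                   # suppressed again after exiting the inner BLOCK
}
print scalar(@warnings), "\n";   # 1
```

Only the interpolation inside the innermost BLOCK warns, showing that each use/no declaration shadows the outer one until its BLOCK ends.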
Other examples of lexical pragmata include bigint, which transparently transforms all arithmetic operations to work with instances of Math::BigInt; or the incredible Regexp::Grammars module that adds grammatical parsing features to Perl regexes. The perlpragma documentation explains how module authors can implement new lexical pragmata.
'do': transform a BLOCK into an expression
In all examples seen so far, BLOCKs were treated as statements; but thanks to the do BLOCK construct, it is also possible to insert a BLOCK anywhere in an expression. The value from the last instruction in the BLOCK is then processed by operators in the expression. This is very convenient for performing a small computation in-place, either for clarity or for efficiency reasons.
The first example is a cheap version of XML entity encoding, slightly adapted from my module Excel::ValueWriter::XLSX:
my %ENTITY_TABLE = ( '<' => '&lt;', '>' => '&gt;', '&' => '&amp;' );
my $entity_regex = do {my $chars = join "", keys %ENTITY_TABLE; qr/[$chars]/};
...
$text =~ s/($entity_regex)/$ENTITY_TABLE{$1}/g; # encode entity characters in $text
The second example is from the cousin module Excel::ValueReader::XLSX. Here we are parsing the content of a table in an Excel sheet, and the data is returned either in the form of a list of arrayrefs (plain values), or in the form of a list of hashrefs (column name => value), depending on an option given by the caller:
my $row = $args{want_records} ? do {my %r; @r{@{$args{columns}}} = @$vals; \%r}
: $vals;
If the caller wants records, the do block performs a hash slice assignment into a lexical hash variable to create a new record on the fly.
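A self-contained sketch of this idiom, with hypothetical column names and values:

```perl
use strict;
use warnings;

# Hypothetical column names and one row of values
my @columns      = qw/id name price/;
my $vals         = [1, 'widget', 9.99];
my $want_records = 1;

# The do BLOCK builds a hashref in-place via a hash slice assignment
my $row = $want_records ? do { my %r; @r{@columns} = @$vals; \%r }
                        : $vals;

print "$row->{name} costs $row->{price}\n";   # widget costs 9.99
```

The lexical %r exists only inside the do BLOCK; only the reference to it escapes as the BLOCK's value.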
Wrapping up
Thanks to BLOCKs, lexical scoping can be introduced very flexibly almost anywhere in Perl code. The semantics of lexical variables and lexical pragmata cleanly define that the lexical effect starts at the next statement after the declaration, and that it ends at exit from the block, without any of the surprises that we have seen in some other languages. The shadowing effect of lexical variables in inner scopes is easily understandable and consistent across all higher scopes, including the enclosing lexical scopes and the global module scope.
What a beautiful language design!
The next post will be about dynamic scoping through the local keyword - another, complementary way for temporarily changing the behaviour of the interpreter.
About the cover image
The picture is an excerpt from the initial movement of Verdi's Requiem, at a place where Verdi shadows several characteristics of the movement: for a short while, the orchestra stays still, leaving the choir a cappella, with a different speed, different tonality and different dynamics; then after this parenthesis, all parameters come back to their initial state, come prima as stated in the score.
Expressiveness of the Perl programming language
The collection of features in the Perl programming language is quite unique. Some of these features may seem surprising or even distasteful to people coming from other languages; yet their combination provides a fantastic toolbox for expressiveness in programming, where you can write not only effective, but also beautiful code.
True enough, expressivity can also be used for other purposes. Many years ago, a common entertainment game in the Perl community was the obfuscated Perl contest, where participants tried to exploit the most arcane parts of the language for producing illegible, yet working programs -- in the same spirit as today's brainrot videos! People were excited by the creativity potential of Perl. Other productions from that time also include Perl golf, Perl poetry and Perl haikus. But apart from fun, if your goals are conciseness, clarity of algorithms and long-term readability, you can use Perl too, with considerable benefits, because the code can be organized so that it closely reflects your thoughts. This is what I hope to show in this series.
Perl vs other languages: Compare facts, not opinions
Of course using the word "beautiful" in the title is opinionated ... but that is mainly to draw the reader's attention!
This series will try to focus on factual evidence, considering in turn various aspects of the Perl programming language and comparing those to similar or dissimilar constructs in other languages (mainly Python, Javascript and Java). This approach tries to stay away from the vast corpus of opinionated debates about Perl that can be found in many essays, posts or tweets. In most cases such discussions start with a general opinion and then use a few examples to illustrate the argument: under such conditions it is very unlikely that readers will be able to judge for themselves. A more profound comparison of languages requires time, requires details, and requires broad coverage; so here my intention is to go bottom-up, looking at many features of different granularity, so that you can decide for yourself if you find them indeed "beautiful".
What about Raku?
Perl has a sister language now named Raku, formerly known as Perl6. Raku took a very long time to emerge, after a huge participative design effort that carefully discussed weaknesses of Perl and weighed the pros and cons of mechanisms to be incorporated into the new language. Since it was a fresh start, without backwards compatibility constraints, it was possible to freely choose what to keep and what to change.
The result is more than beautiful, it is awesome and brings Perl's spirit to another dimension ... but unfortunately Raku still occupies a niche much smaller than Perl's. Raku really deserves a larger audience, and has very qualified advocates for that; but that's not the purpose of the present series, which focuses on Perl5.
Technical notes about Perl fragments in the series
Upcoming posts will discuss various Perl features, in no particular order; some topics are small details, others are fundamental mechanisms.
Code fragments will be written in modern Perl, namely version 5.42.0, of course with pragmata strict, warnings and utf8 always activated (even if not repeated in every code fragment). Subroutine declarations will always use signatures. Object-oriented programming will mainly use core classes, or sometimes Moose when more sophisticated mechanisms are needed. Yes, this is Perl, there are several ways to do objects!
About the author
After 10 years in academia doing research in the field of theory of programming languages, I spent the rest of my career in public administration, using Perl for more than 25 years for a large number of tasks ranging from small, one-shot migration scripts to large enterprise applications or complex data analysis tools. The majority of these applications are still in use today, maintained by a team (contrary to the saying that "Perl code is write-only"), and so rich in features that their replacement by other technology will probably not happen for a while. I have also authored 36 modules published on the Comprehensive Perl Archive Network (CPAN).
My knowledge of Java and of Python is mostly theoretical, not consolidated by practical experience, so I apologize in advance if I write inaccurate statements about these languages -- please correct me if this is the case.
About the cover image
The picture shows the initial pages of Johann Sebastian Bach's motet Singet dem Herrn ein neues Lied. In the second half of the eighteenth century, Bach's style went largely out of fashion because it was considered too complex, with an "excess of art that altered the beauty of the music". Common taste at that time preferred the galant style, with simpler melodies and reduced polyphony. Bach remained appreciated by a small circle of connoisseurs, though.
In 1789 when Mozart - already in his maturity - first heard the motet "Singet dem Herrn", he showed great enthusiasm, eagerly asking "what is this" and keen to study "something new at last from which he could learn". After this episode it took another 50 years before Bach's music started a slow revival, finally leading to his current recognition as one of the greatest composers of all times.
So Bach is here to remind us that notions of beauty and expressivity, and their relations with complexity or simplicity, may evolve over time!
Acknowledgements
Many thanks to Boyd Duffee, Matthew O. Persico and Marc Perry who took time to review this material before publication.

Dave writes:
During January, I finished working on another tranche of ExtUtils::ParseXS fixups, this time focussing on:
- adding and rewording warning and error messages, and adding new tests for them;
- improving test coverage: all XS keywords have tests now;
- reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format, splitting a large test file, modernising tests;
- refactoring and improving the length(str) pseudo-parameter implementation.
I also started work on my annual "get './TEST -deparse' working again" campaign. This option runs all the test suite files through a round trip in the deparser before running them. Over the course of the year we invariably accumulate new breakage; sometimes this involves fixing Deparse.pm, and sometimes just back-listing the test file as it is now tickling an already known issue in the deparser.
I also worked on a couple of bugs.
Summary:
- 0:53 GH #13878 COW speedup lost after e8c6a474
- 4:05 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules to build
- 12:14 fix up Deparse breakage
- 26:12 improve Extutils::ParseXS
Total:
- 43:24 (HH::MM)

Tony writes:
```
[Hours] [Activity]

2026/01/05 Monday
 0.23  #24055 review, research and approve with comment
 0.08  #24054 review and approve
 0.12  #24052 review and comment
 1.13  #24044 review, research and approve with comment
 0.37  #24043 review and approve
 0.78  #23918 rebase, testing, push and mark ready for review
 1.58  #24001 fix related call_sv() issue, testing
 0.65  #24001 testing, debugging
4.94

2026/01/06 Tuesday
 0.90  #24034 review and comment
 1.12  #24024 research and follow-up
 1.40  #24001 debug crash in init_debugger() and work up a fix, testing
 0.08  #24001 re-check, push for CI
3.50

2026/01/07 Wednesday
 0.15  #24034 review updates and approve
 0.55  #23641 review and comment
 0.23  #23961 review and approve
 0.28  #23988 review and approve with comment
 0.62  #24001 check CI results and open PR 24060
 0.50  #24059 review and comments
 0.82  #24024 work on a test and the fix, testing
 0.27  #24024 add perldelta, testing, make PR 24061
 1.13  #24040 work on a test and a fix, testing, investigate other possible similar problems
4.55

2026/01/08 Thursday
 0.35  #24024 minor fixes, comment, get approval, apply to blead
 0.72  #24040 rebase, perldelta, find a simplification, testing and re-push
 0.18  #24053 review and approve
 0.77  #24063 review, research, testing and comment
 1.47  #24040 look at goto too, look for other similar issues and open 24064, fix for goto (PVOP), testing and push for CI
 0.32  #24040 check CI results, make PR 24065
 0.28  #24050 review and comment
4.09

2026/01/09 Friday
 0.20  #24059 review updates and comment
0.20

2026/01/12 Monday
 0.32  #24059 review updates and approve
 0.35  #24066 review and approve
 0.22  #24040 rebase and clean up whitespace, apply to blead
 1.00  #23966 rebase, testing (expected issues from the #23885 merge but I guess I got the magic docs right)
 1.05  #24062 review up to ‘ParseXS: tidy up INCLUDE error messages’
 0.25  #24069 review and comment
 0.62  #24071 part review and comment
3.81

2026/01/13 Tuesday
 1.70  #24070 research and comment
 0.23  #24069 review and comment
 0.42  #24071 more review
 1.02  #24071 more review, comments
3.37

2026/01/14 Wednesday
 0.23  #23918 minor fix
 0.18  #24069 review updates and approve
 0.32  #24077 review and comments
 0.53  #24073 review, research and comment
 0.87  #24075 review, research and comment
 0.45  #24071 benchmarking and comment
 1.47  #24019 debugging, brief comment on work so far
4.05

2026/01/15 Thursday
 0.37  #24019 debugging and comment on cause of win32 issues
 1.02  #24077 review, follow-up
 0.08  #24079 review and approve
 0.25  #24076 review and approve
 0.73  #24062 more review up to ‘ParseXS: refactor: don't set $_ in Param::parse()’
 2.03  #24062 more review to ‘ParseXS: refactor: 001-basic.t: add TODO flag’ and comments
4.48

2026/01/19 Monday
 1.12  maint-votes, vote apply/testing one of the commits
 0.43  github notifications
 0.08  #24079 review updates and comment
 0.70  #24075 research and approve
 0.57  #24063 research, try to break it, comment
 1.43  #24062 more review to ‘ParseXS: add basic tests for PREINIT keyword’
4.33

2026/01/20 Tuesday
 2.23  #24078 review, testing, comments
 0.85  #24098 review, research and comment
 1.00  #24062 more review up to ‘ParseXS: 001-basic.t: add more ellipsis tests’
4.08

2026/01/21 Wednesday
 0.82  #23995 research and follow-up
 0.25  #22125 follow-up
 0.23  #24056 research, comment
 0.67  #24103 review, research and approve
 1.45  #24062 more review to end, comment
3.42

2026/01/22 Thursday
 0.10  #24079 review update and approve
 0.08  #24106 review and approve
 0.10  #24096 review and approve
 0.08  #24094 review and approve
 0.82  #24080 review, research and comments
 0.08  #24081 review and approve
 0.75  #24082 review, testing, comment
 1.15  #23918 rebase #23966, testing and apply to blead, start on string APIs
3.16

2026/01/27 Tuesday
 0.35  #23956 fix perldelta issues
 1.27  #22125 remove debugging detritus, research and comment
 1.57  #24080 debugging into SvOOK and PVIO
 0.67  #24080 more debugging, comment
 0.25  #24120 review and approve
 1.03  #23984 review, research and approve
5.14

2026/01/28 Wednesday
 0.37  #24080 follow-up
 0.10  #24128 review and apply to blead
 0.70  #24105 review, look at changes needed
 0.15  #23956 check CI results and apply to blead
 0.10  #22125 check CI results and apply to blead
 0.13  #4106 rebase PR 23262 and testing
 0.53  #24001 rebase PR 24060 and testing
 0.57  #24129 review and comments
 0.28  #24127 review and approve
 0.10  #24124 review and approve
 0.20  #24123 review and approve with comment
3.23

2026/01/29 Thursday
 0.27  #23262 minor change suggested by xenu, testing, push for CI
 0.22  #24060 comment
 1.63  #24082 review, testing, comments
 0.43  #24130 review, check some side issues, approve
 0.12  #24077 review updates and approve
 0.08  #24121 review and approve
 0.08  #24122 review and comment
 0.30  #24119 review and approve
3.13

Which I calculate is 59.48 hours.

Approximately 57 tickets were reviewed or worked on, and 6 patches were applied.
```

Paul writes:
This month I managed to finish off a few refalias-related issues, as well as lending some time to help BooK further progress the implementation of PPC0014.
- 1 = Clear pad after multivar foreach
- https://github.com/Perl/perl5/pull/240
- 3 = Fix B::Concise output for OP_MULTIPARAM
- https://github.com/Perl/perl5/pull/24066
- 6 = Implement multivariable foreach on refalias
- https://github.com/Perl/perl5/pull/24094
- 1 = SVf_AMAGIC flag tidying (as yet unmerged)
- https://github.com/Perl/perl5/pull/24129
- 2.5 = Mentoring BooK towards implementing PPC0014
- 2 = Various github code reviews
Total: 15.5 hours
My focus for February will now be to try to get both attributes-v2
and magic-v2 branches in a state where they can be reviewed, and at
least the first parts merged in time for 5.43.9, and hence 5.44, giving
us a good base to build further feature ideas on top of.
I wanted to install Perl::LanguageServer so, following the author's instructions, I ran
sudo apt install build-essential libanyevent-perl libclass-refresh-perl libcompiler-lexer-perl \
libdata-dump-perl libio-aio-perl libjson-perl libmoose-perl libpadwalker-perl \
libscalar-list-utils-perl libcoro-perl
sudo cpan Perl::LanguageServer
But that didn't work:
[Error - 3:03:14 PM] Connection to server is erroring. Shutting down server.
Can't locate Perl/LanguageServer.pm in @INC (you may need to install the Perl::LanguageServer module) (@INC entries checked: /etc/perl /usr/local/lib/x86_64-linux-gnu/perl/5.40.1 /usr/local/share/perl/5.40.1 /usr/lib/x86_64-linux-gnu/perl5/5.40 /usr/share/perl5 /usr/lib/x86_64-linux-gnu/perl-base /usr/lib/x86_64-linux-gnu/perl/5.40 /usr/share/perl/5.40 /usr/local/lib/site_perl).
BEGIN failed--compilation aborted.
Upon further investigation, we find that Hash::SafeKeys is not installing properly; when cpan Hash::SafeKeys is run, the error fatal error: crypt.h: No such file or directory is issued:
Loading internal logger. Log::Log4perl recommended for better logging
Reading '/root/.cpan/Metadata'
Database was generated on Sat, 07 Feb 2026 10:41:02 GMT
Running install for module 'Hash::SafeKeys'
CPAN: Digest::SHA loaded ok (v6.04)
CPAN: Compress::Zlib loaded ok (v2.212)
Checksum for /root/.cpan/sources/authors/id/M/MO/MOB/Hash-SafeKeys-0.04.tar.gz ok
'YAML' not installed, will not store persistent state
CPAN: CPAN::Meta::Requirements loaded ok (v2.143)
CPAN: Parse::CPAN::Meta loaded ok (v2.150010)
CPAN: CPAN::Meta loaded ok (v2.150010)
CPAN: Module::CoreList loaded ok (v5.20250118_40)
Configuring M/MO/MOB/Hash-SafeKeys-0.04.tar.gz with Makefile.PL
Checking if your kit is complete...
Looks good
Generating a Unix-style Makefile
Writing Makefile for Hash::SafeKeys
Writing MYMETA.yml and MYMETA.json
MOB/Hash-SafeKeys-0.04.tar.gz
/usr/bin/perl Makefile.PL INSTALLDIRS=site -- OK
Running make for M/MO/MOB/Hash-SafeKeys-0.04.tar.gz
cp lib/Hash/SafeKeys.pm blib/lib/Hash/SafeKeys.pm
Running Mkbootstrap for SafeKeys ()
chmod 644 "SafeKeys.bs"
"/usr/bin/perl" -MExtUtils::Command::MM -e 'cp_nonempty' -- SafeKeys.bs blib/arch/auto/Hash/SafeKeys/SafeKeys.bs 644
"/usr/bin/perl" "/usr/share/perl/5.40/ExtUtils/xsubpp" -typemap '/usr/share/perl/5.40/ExtUtils/typemap' SafeKeys.xs > SafeKeys.xsc
mv SafeKeys.xsc SafeKeys.c
x86_64-linux-gnu-gcc -c -D_REENTRANT -D_GNU_SOURCE -DDEBIAN -fwrapv -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -O2 -g -DVERSION=\"0.04\" -DXS_VERSION=\"0.04\" -fPIC "-I/usr/lib/x86_64-linux-gnu/perl/5.40/CORE" SafeKeys.c
In file included from /usr/lib/x86_64-linux-gnu/perl/5.40/CORE/op.h:700,
from /usr/lib/x86_64-linux-gnu/perl/5.40/CORE/perl.h:4549,
from SafeKeys.xs:2:
/usr/lib/x86_64-linux-gnu/perl/5.40/CORE/reentr.h:126:16: fatal error: crypt.h: No such file or directory
126 | # include <crypt.h>
| ^~~~~~~~~
compilation terminated.
make: *** [Makefile:337: SafeKeys.o] Error 1
MOB/Hash-SafeKeys-0.04.tar.gz
/usr/bin/make -- NOT OK
Any ideas on how to fix this?
German text below.
2025 was a tough year for The Perl and Raku Foundation (TPRF). Funds were sorely needed. The community grants program had been paused due to budget constraints and we were in danger of needing to pause the Perl 5 core maintenance grants. Fastmail stepped up with a USD 10,000 donation and helped TPRF to continue to support Perl 5 core maintenance. Ricardo Signes explains why Fastmail helped keep this very important work on track.
Perl has served us quite well since Fastmail’s inception. We’ve built up a large code base that has continued to work, grow, and improve over twenty years. We’ve stuck with Perl because Perl stuck with us: it kept working and growing and improving, and very rarely did those improvements require us to stop the world and adapt to onerous changes. We know that kind of stability is, in part, a function of the developers of Perl, whose time is spent figuring out how to make Perl better without also making it worse. The money we give toward those efforts is well-spent, because it keeps the improvements coming and the language reliable.
— Ricardo Signes, Director & Chief Developer Experience Officer, Fastmail
One of the reasons that you don’t hear about Perl in the headlines is its reliability. Upgrading your Perl from one version to the next? That can be a very boring deployment. Your code worked before and it continues to “just work” after the upgrade. You don’t need to rant about short deprecation cycles, performance degradation or dependencies which no longer install. The Perl 5 core maintainers take great care to ensure that you don’t have to care very much about upgrading your Perl. Backwards compatibility is top of mind. If your deployment is boring, it’s because a lot of care and attention has been given to this matter by the people who love Perl and love to work on it.
As we moved to secure TPRF’s 2025 budget, we reached out to organizations which rely on Perl. A number of these companies immediately offered to help. Fastmail has already been a supporter of TPRF for quite some time. In addition to this much needed donation, Fastmail has been providing rock solid free email hosting to the foundation for many years.
While Fastmail’s donation has been allocated towards Perl 5 Core maintenance, TPRF is now in the position to re-open the community grants program, funding it with USD 10,000 for 2026. There is also an opportunity to increase the community grants funding if sponsor participation increases. As we begin our 2026 fundraising, we are looking to cast a wider net and bring more sponsor organizations on board to help support healthy Perl and Raku ecosystems.
Maybe your organization will be the one to help us double our community grants budget in 2026. To become a sponsor, contact: olaf@perlfoundation.org
“Perl is my cast-iron pan - reliable, versatile, durable, and continues to be ever so useful.” TPRC 2026 brings together a community that embodies all of these qualities, and we’re looking for sponsors to help make this special gathering possible.
About the Conference
The Perl and Raku Conference 2026 is a community-organized gathering of developers, enthusiasts, and industry professionals. It takes place from June 26-28, 2026, in Greenville, South Carolina. The conference will feature an intimate, single-track format that promises high sponsor visibility. We expect approximately 80 participants, with some staying in town for the shoulder days (June 25-29) and a Monday workshop.
Why Sponsor?
- Give back to the language and communities which have already given so much to you
- Connect with the developers and craftspeople who build your tools – the ones that are built to last
- Help to ensure that The Perl and Raku Foundation can continue to fund Perl 5 core maintenance and Community Grants
Sponsorship Tiers
Platinum Sponsor ($6,000)
- Only 1 sponsorship is available at this level
- Premium logo placement on conference website
- This donation qualifies your organization to be a Bronze Level Sponsor of The Perl and Raku Foundation
- 5-minute speaking slot during opening ceremony
- 2 complimentary conference passes
- Priority choice of rollup banner placement
- Logo prominently displayed on conference badges
- First choice of major named sponsorship (Conference Dinner, T-shirts, or Swag Bags)
- Logo on main stage backdrop and conference banners
- Social media promotion
- All benefits of lower tiers
Gold Sponsor ($4,000)
- Logo on all conference materials
- One complimentary conference pass
- Rollup banner on display
- Choice of named sponsorship (Lunch or Snacks)
- Logo on backdrop and banners
- Dedicated social media recognition
- All benefits of lower tiers
Silver Sponsor ($2,000)
- Logo on conference website
- Logo on backdrop and banners
- Choice of smaller named sponsorship (Beverage Bars)
- Social media mention
- All benefits of lower tier
Bronze Sponsor ($1,000)
- Name/logo on conference website
- Name/logo on backdrop and banners
All Sponsors Receive
- Logo/name in Update::Daily conference newsletter sidebar
- Opportunity to provide materials for conference swag bags
- Recognition during opening and closing ceremonies
- Listed on conference website sponsor page
- Mentioned in conference social media
Named Sponsorship Opportunities
Exclusive naming rights available for:
- Conference Dinner ($2,000) - Signage on tables and buffet
- Conference Swag Bags ($1,500) - Logo on bags
- Conference T-Shirts ($1,500) - Logo on sleeve
- Lunches ($1,500) - Signage at pickup and on menu tickets
- Snacks ($1,000) - Signage at snack bar
- Update::Daily Printing ($200) - Logo on masthead
About The Perl and Raku Foundation
Proceeds beyond conference expenses support The Perl and Raku Foundation, a non-profit organization dedicated to advancing the Perl and Raku programming languages through open source development, education, and community building.
Contact Information
For more information on how to become a sponsor, please contact: olaf@perlfoundation.org
Weekly Challenge 359
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
Task 1: Digital Root
Task
You are given a positive integer, $int.
Write a function that calculates the additive persistence of a positive integer and also returns the digital root.
- Digital root is the recursive sum of all digits in a number until a single digit is obtained.
- Additive persistence is the number of times you need to sum the digits to reach a single digit.
My solution
Starting this week, I'm going to turn off VS Code Copilot when completing the challenge. As AI takes over the task of code completion, it's still a good exercise to do things by hand so that I don't depend on it.
For this task, I create two variables. The first is called persistence and starts at 0. The second variable is called digital_root and starts with the supplied integer. This is called number in Python, as int is a built-in name.
I then have a loop that continues until digital_root is a single digit. Breaking down the line:
- `str(digital_root)` will convert the `digital_root` integer to a string
- `d for d in` will iterate over each character (a single digit)
- `map(int, ...)` will convert each digit back to an integer
- `sum(...)` will add the single digits
I also increment the persistence value by one. I end by returning the two values. The main function is responsible for displaying the text as per the examples.
def get_digital_root(number: int) -> tuple[int, int]:
if number <= 0:
raise ValueError("You must provide a positive integer")
persistence = 0
digital_root = number
while len(str(digital_root)) > 1:
digital_root = sum(map(int, (d for d in str(digital_root))))
persistence += 1
return persistence, digital_root
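As a side note, the digital root also has a well-known closed form (a consequence of casting out nines), which makes a handy cross-check for the iterative code above. This sketch is mine, not part of the challenge solution:

```python
def digital_root_closed_form(n: int) -> int:
    """Digital root without iteration: for n > 0, dr(n) = 1 + (n - 1) % 9."""
    if n <= 0:
        raise ValueError("You must provide a positive integer")
    return 1 + (n - 1) % 9

# Cross-check against the iterative digit-summing approach
for n in (38, 7, 999, 1999999999, 101010):
    root = n
    while root > 9:
        root = sum(int(d) for d in str(root))
    assert root == digital_root_closed_form(n)
```

Note that the closed form only yields the digital root; the additive persistence still requires the iteration.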
The Perl solution follows the same logic. Perl doesn't make a distinction between strings and integers (yes, there are some exceptions to this rule), but it does require the split function to break the digital_root value into individual digits.
use v5.36;   # enables 'say' and subroutine signatures
use List::Util 'sum';

sub main ($int) {
    my $persistence  = 0;
    my $digital_root = $int;

    # Keep iterating until we have a single digit
    while ( length($digital_root) > 1 ) {
        $digital_root = sum( split( //, $digital_root ) );
        $persistence++;
    }

    say "Persistence = $persistence";
    say "Digital Root = $digital_root";
}
Examples
$ ./ch-1.py 38
Persistence = 2
Digital Root = 2
$ ./ch-1.py 7
Persistence = 0
Digital Root = 7
$ ./ch-1.py 999
Persistence = 2
Digital Root = 9
$ ./ch-1.py 1999999999
Persistence = 3
Digital Root = 1
$ ./ch-1.py 101010
Persistence = 1
Digital Root = 3
Task 2: String Reduction
Task
You are given a word containing only alphabetic characters.
Write a function that repeatedly removes adjacent duplicate characters from a string until no adjacent duplicates remain and return the final word.
My solution
This is a duplicate (albeit slightly re-worded) of the first task in week 340. Therefore I copied and pasted that code and renamed the function. See my original blog post on how this was solved.
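The week 340 code isn't reproduced here, but for readers who don't want to follow the link, a minimal stack-based sketch of the same idea (the function name is mine) looks like this:

```python
def reduce_string(word: str) -> str:
    """Repeatedly remove adjacent duplicate pairs.

    A stack gives the same result as repeated scanning: each character
    either cancels the character on top of the stack (an adjacent
    duplicate) or is pushed on top of it.
    """
    stack = []
    for ch in word:
        if stack and stack[-1] == ch:
            stack.pop()  # adjacent duplicate: both characters cancel
        else:
            stack.append(ch)
    return "".join(stack)
```

A single pass is enough, because cancelling a pair exposes the previous character for further cancellation.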
Examples
$ ./ch-2.py aabbccdd
""
$ ./ch-2.py abccba
""
$ ./ch-2.py abcdef
"abcdef"
$ ./ch-2.py aabbaeaccdd
"aea"
$ ./ch-2.py mississippi
"m"
- 00:00 Introduction to OSDC
- 01:30 Introducing myself Perl Maven, Perl Weekly
- 02:10 The earlier issues.
- 03:10 How to select a project to contribute to?
- 04:50 Chat on OSDC Zulip
- 06:45 How to select a Perl project?
- 09:20 CPAN::Digger
- 10:10 Modules that don't have a link to their VCS.
- 13:00 Missing CI - GitHub Actions or GitLab Pipeline and Travis-CI.
- 14:00 Look at Term-ANSIEncode by Richard Kelsch - How to find the repository of this project?
- 15:38 Switching to look at Common-CodingTools by mistake.
- 16:30 How does MetaCPAN know where the repository is?
- 17:52 Clone the repository.
- 18:15 Use the szabgab/perl Docker container.
- 22:10 Run `perl Makefile.PL`, install dependencies, run `make` and `make distdir`.
- 23:40 See the generated `META.json` file.
- 24:05 Edit the `Makefile.PL`.
- 24:55 Explaining my method of cloning first (calling it `origin`) and forking later, calling that `fork`.
- 27:00 Really edit `Makefile.PL`, add the `META_MERGE` section, and verify the generated `META.json` file.
- 29:00 Create a branch locally. Commit the change.
- 30:10 Create a fork on GitHub.
- 31:45 Add the `fork` as a remote repository and push the branch to it.
- 33:20 Linking to the PR on the OSDC Perl report page.
- 35:00 Planning to add `.gitignore` and maybe setting up GitHub Actions.
- 36:00 Start from the `main` branch, create the `.gitignore` file.
- 39:00 Run the tests locally. Set up GitHub Actions to run the tests on every push.
- 44:00 Editing the GHA configuration file.
- 48:30 Commit, push to the fork, check the results of GitHub Action in my fork on GitHub.
- 51:45 Look at the version of the perldocker/perl-tester Docker image.
- 54:40 Update list of Perl versions in the CI. See the results on GitHub.
- 55:30 Show the version number of perl.
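The GitHub Actions workflow created in the video is not reproduced in these notes. A minimal configuration in the spirit of what is described, using the perldocker/perl-tester image mentioned at 51:45, might look like this (the image tag and step names are my assumptions, not the exact file from the video):

```yaml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    container: perldocker/perl-tester:latest
    steps:
      - uses: actions/checkout@v4
      # Install the distribution's dependencies, then run the test suite
      - name: Install dependencies
        run: cpanm --installdeps --notest .
      - name: Run tests
        run: prove -l t
```

Pinning the image tag to a specific Perl version (and listing several versions in a matrix) is what the "Update list of Perl versions in the CI" step at 54:40 refers to.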
- App::ccdiff - Colored Character Diff
  - Version: 0.35 on 2026-01-25, with 20 votes
  - Previous CPAN version: 0.34 was 1 year, 23 days before
  - Author: HMBRAND
- App::rdapper - a command-line RDAP client
  - Version: 1.22 on 2026-01-29, with 21 votes
  - Previous CPAN version: 1.21 was 1 day before
  - Author: GBROWN
- App::SpeedTest - Command line interface to speedtest.net
  - Version: 0.31 on 2026-01-25, with 32 votes
  - Previous CPAN version: 0.30 was 1 year, 18 days before
  - Author: HMBRAND
- CPAN::Meta - the distribution metadata for a CPAN dist
  - Version: 2.150012 on 2026-01-25, with 39 votes
  - Previous CPAN version: 2.150011 was 3 days before
  - Author: RJBS
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260129.001 on 2026-01-29, with 25 votes
  - Previous CPAN version: 20260125.001 was 3 days before
  - Author: BRIANDFOY
- Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
  - Version: 9.0.0 on 2026-01-25, with 17 votes
  - Previous CPAN version: 8.1.0 was 1 month, 29 days before
  - Author: CUKEBOT
- Dancer - lightweight yet powerful web application framework
  - Version: 1.3522 on 2026-01-26, with 149 votes
  - Previous CPAN version: 1.3521 was 2 years, 11 months, 18 days before
  - Author: BIGPRESH
- Dist::Zilla - distribution builder; installer not included!
  - Version: 6.037 on 2026-01-25, with 188 votes
  - Previous CPAN version: 6.036 was 2 months, 15 days before
  - Author: RJBS
- Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
  - Version: v30.0.0 on 2026-01-28, with 20 votes
  - Previous CPAN version: v29.0.1 was 3 months, 12 days before
  - Author: CHEVALIER
- IO::Compress - IO Interface to compressed data files/buffers
  - Version: 2.216 on 2026-01-30, with 19 votes
  - Previous CPAN version: 2.215
  - Author: PMQS
- MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
  - Version: 2.038000 on 2026-01-29, with 27 votes
  - Previous CPAN version: 2.037000 was 22 days before
  - Author: MICKEY
- Net::Server - Extensible Perl internet server
  - Version: 2.016 on 2026-01-28, with 34 votes
  - Previous CPAN version: 2.015 was 5 days before
  - Author: BBB
- SPVM - The SPVM Language
  - Version: 0.990124 on 2026-01-31, with 36 votes
  - Previous CPAN version: 0.990123
  - Author: KIMOTO
- UV - Perl interface to libuv
  - Version: 2.002 on 2026-01-28, with 14 votes
  - Previous CPAN version: 2.001 was 22 days before
  - Author: PEVANS
Most projects are started by a single person in that person's GitHub repository. Later, for various reasons, people establish GitHub organizations and move the projects there. Sometimes the organization contains a set of sub-projects related to a central project (e.g. a web framework and its extensions or database access). Sometimes it is a collection of projects related to a single topic (e.g. testing or IDE support). Sometimes it is just a random collection of projects where people band together in the hope that no project will be left behind (e.g. the CPAN Authors organization).
Organizations make it easier to have multiple maintainers, thus ensuring the continuity of a project, but it might also mean that none of the members really feel the urge to continue working on something.
In any case, I tried to collect all the Perl-related GitHub organizations.
Hopefully in ABC order...
- Beyond grep - It is mostly for `ack`, a better `grep` developed by Andy Lester. No public members. 4 repositories.
- Catalyst - Catalyst is a web framework. Its runtime and various extensions are maintained in this organization. 10 members and 39 repositories.
- cpan-authors - A place for CPAN authors to collaborate more easily. 4 members and 9 repositories.
- davorg cpan - Organisation for maintaining Dave Cross's CPAN modules. No public members and 47 repositories.
- ExifTool - No public members. 3 repositories.
- Foswiki - The Foswiki and related projects. 13 members and 649 repositories.
- gitpan - An archive of CPAN modules. 2 members and 5k+ read-only repositories.
- Kelp framework - a web development framework. 1 member and 12 repositories.
- MetaCPAN - The source of the MetaCPAN site. 9 members and 56 repositories.
- Mojolicious - Some Perl and many JavaScript projects. 5 members and 29 repositories.
- Moose - Moose, `MooseX-*`, Moo, etc. 11 members and 69 repositories.
- Netdisco - Netdisco and SNMP-Info projects. 10 members and 14 repositories.
- PadreIDE - Padre, the Perl IDE. 13 members and 102 repositories.
- Paracamelus - No public members. 2 repositories.
- Perl - Perl 5 itself, Docker images, etc. 20 members, 8 repositories.
- Perl Actions - GitHub Actions to be used in workflows. 5 members and 9 repositories.
- Perl Advent Calendar - including the source of perl.com. 3 members and 8 repositories.
- Perl Bitcoin - Perl Bitcoin Toolchain Collective. 3 members and 7 repositories.
- Perl Toolchain Gang - ExtUtils::MakeMaker, Module::Build, etc. 27 members and 41 repositories.
- Perl Tools Team - source of planetperl, perl-ads etc. No public members. 6 repositories.
- Perl Dancer - Dancer, Dancer2, many plugins. 30 members and 79 repositories.
- perltidy - Only for Perl::Tidy. No public members. 1 repository.
- perl5-dbi - DBI, several `DBD::*` modules, and some related modules. 7 members and 15 repositories.
- perl.org - also cpan.org and perldoc.perl.org. 3 members and 7 repositories.
- Perl5 - DBIx-Class and DBIx-Class-Historic. No public members. 2 repositories.
- perl5-utils - List::MoreUtils, File::ShareDir etc. 2 members and 22 repositories.
- Perl-Critic - PPI, Perl::Critic and related. 7 members and 5 repositories.
- perl-ide - Perl Development Environments. 26 members and 13 repositories.
- perl-pod - Pod::Simple, Test::Pod. 1 member and 4 repositories.
- PkgConfig - 1 member, 1 repository.
- plack - psgi-specs, Plack and a few middlewares. 5 members and 7 repositories.
- RexOps - Rex, Rexify and related projects. 1 member and 46 repositories.
- Sqitch - Sqitch for sensible database change management and related projects. No public members. 16 repositories.
- StrawberryPerl - The Perl distribution for MS Windows. 4 members and 10 repositories.
- Test-More - Test::Builder, Test::Simple, Test2 etc. 4 members, 27 repositories.
- The Enlightened Perl Organisation - Task::Kensho and `Task::Kensho::*`. 1 member and 1 repository.
- Thunderhorse Framework - a modern web development framework. No public members. 4 repositories.
- Webmin - Webmin is a web-based system administration tool for Unix-like servers. 5 repositories.
Companies
These are not Perl-specific GitHub organizations, but some of their repositories are in Perl.
- cPanel - Open Source Software provided by cPanel. 1 member and 22 repositories.
- DuckDuckGo - The search engine. 19 members and 122 repositories.
- Fastmail - Open-source software developed at Fastmail. 4 members and 38 repositories.
- RotherOSS - Otobo (OTRS fork). No public members. 47 repositories.
I've gone through the Custom Data Labels documentation carefully, and reduced down to a simple example:
#!/usr/bin/perl
use strict;
use warnings;
use Excel::Writer::XLSX;
my $workbook = Excel::Writer::XLSX->new( 'chart_custom_labels.xlsx' );
my $worksheet = $workbook->add_worksheet();
# Chart data
my $data = [
[ 'Cat', 'Dog', 'Pig' ],
[ 10, 40, 50 ],
];
$worksheet->write( 'A1', $data );
# Custom labels
my $custom_labels = [
{ value => 'Jan' },
{ value => 'Feb' },
{ value => 'Mar' },
];
my $chart = $workbook->add_chart( type => 'column' );
# Configure the series with custom string data labels
$chart->add_series(
categories => '=Sheet1!$A$1:$A$3',
values => '=Sheet1!$B$1:$B$3',
data_labels => {
value => 1,
custom => $custom_labels,
},
);
$workbook->close();
I expected this to apply labels of "Jan", "Feb", and "Mar" to the graph. However, the labels I get are just the values I would have gotten from value => 1 even if I had not included the custom labels line, i.e. 10, 40, 50:
[image: chart showing the value data labels 10, 40, 50 instead of the custom labels]
I've also tried removing the value => 1 line but keeping the custom line, and that results in no labels at all. And I've tried a different approach where I keep the value => 1 line but use the delete property of the custom property to remove some labels. That also did not work, and just kept the values as labels.
Is this functionality broken or am I missing something?
Environment details:
Cygwin
Perl v5.40.3
Excel::Writer::XLSX 1.03
If you don't like autovivification, or simply would like to make sure your code does not accidentally alter a hash, the Hash::Util module is for you.
You can lock_hash a hash, and later unlock_hash it if you'd like to make some changes.
In this example you can see three different actions commented out. Each one would raise an exception if executed on a locked hash. After we unlock the hash we can execute those actions again.
I tried this both in perl 5.40 and 5.42.
use strict;
use warnings;
use feature 'say';
use Hash::Util qw(lock_hash unlock_hash);
use Data::Dumper qw(Dumper);
my %person = (
fname => "Foo",
lname => "Bar",
);
lock_hash(%person);
print Dumper \%person;
print "$person{fname} $person{lname}\n";
say "fname exists ", exists $person{fname};
say "language exists ", exists $person{language};
# $person{fname} = "Peti"; # Modification of a read-only value attempted
# delete $person{lname}; # Attempt to delete readonly key 'lname' from a restricted hash
# $person{language} = "Perl"; # Attempt to access disallowed key 'language' in a restricted hash
unlock_hash(%person);
$person{fname} = "Peti"; # works now that the hash is unlocked
delete $person{lname}; # works now
$person{language} = "Perl"; # works now
print Dumper \%person;
$VAR1 = {
'lname' => 'Bar',
'fname' => 'Foo'
};
Foo Bar
fname exists 1
language exists
$VAR1 = {
'language' => 'Perl',
'fname' => 'Peti'
};
My name is Alex. Over the last few years I’ve implemented several versions of Raku’s documentation format (Synopsis 26 / Raku’s Pod) in Perl and JavaScript.
At an early stage, I shared the idea of creating a lightweight version of Raku’s Pod with Damian Conway, the original author of the Synopsis 26 documentation specification (S26). He was supportive of the concept and offered several valuable insights that helped shape the vision of what later became Podlite.
Today, Podlite is a small block-based markup language that is easy to read as plain text, simple to parse, and flexible enough to be used everywhere — in code, notes, technical documents, long-form writing, and even full documentation systems.
This article is an introduction for the Perl community — what Podlite is, how it looks, how you can already use it in Perl via a source filter, and what’s coming next.
The Block Structure of Podlite
One of the core ideas behind Podlite is its consistent block-based structure. Every meaningful element of a document — a heading, a paragraph, a list item, a table, a code block, a callout — is represented as a block. This makes documents both readable for humans and predictable for tools.
Podlite supports three interchangeable block styles: delimited, paragraph, and abbreviated.
Abbreviated blocks (=BLOCK)
This is the most compact form.
A block starts with = followed by the block name.
=head1 Installation Guide
=item Perl 5.8 or newer
=para This tool automates the process.
- ends on the next directive or a blank line
- best used for simple one-line blocks
- cannot include configuration options (attributes)
Paragraph blocks (=for BLOCK)
Use this form when you want a multi-line block or need attributes.
=for code :lang<perl>
say "Hello from Podlite!";
- ends when a blank line appears
- can include complex content
- allows attributes such as `:lang`, `:id`, `:caption`, `:nested`, …
Delimited blocks (=begin BLOCK … =end BLOCK)
The most expressive form. Useful for large sections, nested blocks, or structures that require clarity.
=begin nested :notify<important>
Make sure you have administrator privileges.
=end nested
- explicit start and end markers
- perfect for code, lists, tables, notifications, markdown, formulas
- can contain other blocks, including nested ones
These block styles differ in syntax convenience, but all produce the same internal structure.
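As an illustration of that interchangeability, here is the same code block written in all three styles, a sketch based on the examples above (assuming `code` as the block name, per the specification):

```
=code say "Hello from Podlite!";

=for code
say "Hello from Podlite!";

=begin code
say "Hello from Podlite!";
=end code
```

All three parse to the same `code` block in the document model; only the surface syntax differs.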
[image: three block styles producing the same internal structure]
Regardless of which syntax you choose:
- all three forms represent the same block type
- attributes apply the same way (`:lang`, `:caption`, `:id`, …)
- tools and renderers treat them uniformly
- nested blocks work identically
- you can freely mix styles inside a document
Example: Comparing POD and Podlite
Let’s see how the same document looks in traditional POD versus Podlite:
[image: the same document in traditional POD and in Podlite, side by side]
Each block has clear boundaries, so you don’t need blank lines between them. This makes your documentation more compact and easier to read. This is one of the reasons Podlite remains compact yet powerful: the syntax stays flexible, while the underlying document model stays clean and consistent.
This Podlite example renders as shown in the following screenshot:
[image: the rendered Podlite example]
Inside the Podlite Specification 1.0
One important point about Podlite is that it is first and foremost a specification. It does not belong to any particular programming language, platform, or tooling ecosystem. The specification defines the document model, syntax rules, and semantics.
From the Podlite 1.0 specification, notable features include:
- headings (`=head1`, `=head2`, …)
- lists and definition lists, including task lists
- tables (simple and advanced)
- CSV-backed tables
- callouts / notifications (`=nested :notify<tip|warning|important|note|caution>`)
- table of contents (`=toc`)
- includes (`=include`)
- embedded data (`=data`)
- pictures (`=picture` and inline `P<>`)
- formulas (`=formula` and inline `F<>`)
- user defined blocks and markup codes
- Markdown integration
The =markdown block is part of the standard block set defined by the Podlite Specification 1.0.
This means Markdown is not an add-on or optional plugin — it is a fully integrated, first-class component of the language.
Markdown content becomes part of Podlite’s unified document structure, and its headings merge naturally with Podlite headings inside the TOC and document outline.
Below is a screenshot showing how Markdown inside Perl is rendered in the in-development VS Code extension, demonstrating both the block structure and live preview:
[image: a Markdown block inside Perl source rendered in the VS Code extension]
Using Podlite in Perl via the source filter
To make Podlite directly usable in Perl code, there is a module on CPAN: Podlite — Use Podlite markup language in Perl programs
A minimal example could look like this:
use Podlite; # enable Podlite blocks inside Perl
=head1 Quick Example
=begin markdown
Podlite can live inside your Perl programs.
=end markdown
print "Podlite active\n";
Roadmap: what’s next for Podlite
Podlite continues to grow, and the Specification 1.0 is only the beginning. Several areas are already in active development, and more will evolve with community feedback.
Some of the things currently planned or in progress:
- CLI tools
- command-line utilities for converting Podlite to HTML, PDF, man pages, etc.
- improve pipelines for building documentation sites from Podlite sources
- VS Code integration
- Ecosystem growth
- develop comprehensive documentation and tutorials
- community-driven block types and conventions
Try Podlite and share feedback
If this resonates with you, I’d be very happy to hear from you:
- ideas for useful block types
- suggestions for tools or integrations
- feedback on the syntax and specification
https://github.com/podlite/podlite-specs/discussions
Even small contributions — a comment, a GitHub star, or trying an early tool — help shape the future of the specification and encourage further development.
Useful links:
- CPAN: https://metacpan.org/pod/Podlite
- GitHub: https://github.com/podlite
- Specification
- Project site: https://podlite.org
- Roadmap: https://podlite.org/#Roadmap
Thanks for reading, Alex
- App::Greple - extensible grep with lexical expression and region handling
  - Version: 10.03 on 2026-01-19, with 56 votes
  - Previous CPAN version: 10.02 was 10 days before
  - Author: UTASHIRO
- Beam::Wire - Lightweight Dependency Injection Container
  - Version: 1.028 on 2026-01-21, with 19 votes
  - Previous CPAN version: 1.027 was 1 month, 15 days before
  - Author: PREACTION
- CPAN::Meta - the distribution metadata for a CPAN dist
  - Version: 2.150011 on 2026-01-22, with 39 votes
  - Previous CPAN version: 2.150010 was 9 years, 5 months, 4 days before
  - Author: RJBS
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260120.004 on 2026-01-20, with 25 votes
  - Previous CPAN version: 20260120.002
  - Author: BRIANDFOY
- DateTime::Format::Natural - Parse informal natural language date/time strings
  - Version: 1.24 on 2026-01-18, with 19 votes
  - Previous CPAN version: 1.23_03 was 5 days before
  - Author: SCHUBIGER
- EV - perl interface to libev, a high performance full-featured event loop
  - Version: 4.37 on 2026-01-22, with 50 votes
  - Previous CPAN version: 4.36 was 4 months, 2 days before
  - Author: MLEHMANN
- Git::Repository - Perl interface to Git repositories
  - Version: 1.326 on 2026-01-18, with 27 votes
  - Previous CPAN version: 1.325 was 4 years, 7 months, 17 days before
  - Author: BOOK
- IO::Async - Asynchronous event-driven programming
  - Version: 0.805 on 2026-01-19, with 80 votes
  - Previous CPAN version: 0.804 was 8 months, 26 days before
  - Author: PEVANS
- Mac::PropertyList - work with Mac plists at a low level
  - Version: 1.606 on 2026-01-20, with 13 votes
  - Previous CPAN version: 1.605 was 5 months, 11 days before
  - Author: BRIANDFOY
- Module::CoreList - what modules shipped with versions of perl
  - Version: 5.20260119 on 2026-01-19, with 44 votes
  - Previous CPAN version: 5.20251220 was 29 days before
  - Author: BINGOS
- Net::Server - Extensible Perl internet server
  - Version: 2.015 on 2026-01-22, with 33 votes
  - Previous CPAN version: 2.014 was 2 years, 10 months, 7 days before
  - Author: BBB
- Net::SSH::Perl - Perl client Interface to SSH
  - Version: 2.144 on 2026-01-23, with 20 votes
  - Previous CPAN version: 2.144 was 8 days before
  - Author: BDFOY
- Release::Checklist - A QA checklist for CPAN releases
  - Version: 0.19 on 2026-01-25, with 16 votes
  - Previous CPAN version: 0.18 was 1 month, 15 days before
  - Author: HMBRAND
- Spreadsheet::Read - Meta-Wrapper for reading spreadsheet data
  - Version: 0.95 on 2026-01-25, with 31 votes
  - Previous CPAN version: 0.94 was 1 month, 15 days before
  - Author: HMBRAND
- SPVM - The SPVM Language
  - Version: 0.990117 on 2026-01-24, with 36 votes
  - Previous CPAN version: 0.990116
  - Author: KIMOTO
- utf8::all - turn on Unicode - all of it
  - Version: 0.026 on 2026-01-18, with 31 votes
  - Previous CPAN version: 0.025 was 1 day before
  - Author: HAYOBAAN
This is the weekly favourites list of CPAN distributions. Votes count: 36
Week's winner: Marlin (+3)
Build date: 2026/01/25 12:53:03 GMT
Clicked for first time:
- App::CPANTS::Lint - front-end to Module::CPANTS::Analyse
- DBIx::Class::Async - Asynchronous database operations for DBIx::Class
- Doubly - Thread-safe doubly linked list
- Mooish::Base - importer for Mooish classes
- Pod::Github - Make beautiful Markdown readmes from your POD
- Pod::Markdown::Githubert - convert POD to Github-flavored Markdown
Increasing its reputation:
- Class::Closure (+1=2)
- Class::DBI (+1=11)
- Class::Slot (+1=2)
- Class::XSConstructor (+2=5)
- CPAN::Changes (+1=33)
- Devel::NYTProf (+1=197)
- Faker (+1=13)
- Future::AsyncAwait (+1=51)
- List::AllUtils (+1=32)
- Marlin (+3=7)
- MCP (+1=8)
- Module::Starter (+1=35)
- Mooish::AttributeBuilder (+1=2)
- MooseX::XSConstructor (+1=2)
- PAGI (+2=6)
- Pod::Coverage (+1=15)
- Pod::Markdown (+1=34)
- Pod::Markdown::Github (+1=7)
- SDL3 (+1=3)
- Test::Kwalitee (+1=8)
- Test::Pod (+1=23)
- Test::Pod::Coverage (+1=24)
- Test::Spelling (+1=14)
- utf8::all (+1=31)
- Venus (+1=8)
- YAML::LibYAML (+1=60)
Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions
See OSDC Perl
- 00:00 Working with Peter Nilsson
- 00:01 Find a module to add GitHub Actions to. Go to CPAN::Digger recent.
- 00:10 Found Tree-STR
- 01:20 Bug in CPAN Digger that shows a GitHub link even if it is broken.
- 01:30 Search for the module name on GitHub.
- 02:25 Verify that the name of the module author is the owner of the GitHub repository.
- 03:25 Edit the Makefile.PL.
- 04:05 Edit the file, fork the repository.
- 05:40 Send the Pull-Request.
- 06:30 Back to CPAN Digger recent to find a module without GitHub Actions.
- 07:20 Add file / Fork repository gives us "unexpected error".
- 07:45 Direct fork works.
- 08:00 Create the `.github/workflows/ci.yml` file.
- 09:00 Example CI yaml file: copy it and edit it.
- 14:25 Look at a GitLab CI file for a few seconds.
- 14:58 Commit - change the branch and add a description!
- 17:31 Check if the GitHub Action works properly.
- 18:17 There is a warning while the tests are running.
- 21:20 Opening an issue.
- 21:48 Opening the PR (on the wrong repository).
- 22:30 Linking to the output of a CI?
- 23:40 Looking at the file to see the source of the warning.
- 25:25 Assigning an issue? In an open source project?
- 27:15 Edit the already created issue.
- 28:30 Use the Preview!
- 29:20 Sending the Pull-Request to the project owner.
- 31:25 Switching to Jonathan
- 33:10 CPAN Digger recent
- 34:00 Net-SSH-Perl of BDFOY - Testing a networking module is hard and Jonathan is using Windows.
- 35:13 Frequency of updates of CPAN Digger.
- 36:00 Looking at our notes to find the GitHub account of the module author LNATION.
- 38:10 Look at the modules of LNATION on MetaCPAN
- 38:47 Found JSON::Lines
- 39:42 Install the dependencies, run the tests, generate test coverage.
- 40:32 Cygwin?
- 42:45 Add GitHub Action, copying it from the previous PR.
- 43:54 `META.yml` should not be committed as it is a generated file.
- 48:25 I am looking for sponsors!
- 48:50 Create a branch that reflects what we do.
- 51:38 Commit the changes.
- 53:10 Fork the project on GitHub and set up the git remote locally.
- 55:05 `git push -u fork add-ci`
- 57:44 Sending the Pull-Request.
- 59:10 The 7 dwarfs and Snow White. My hope is to have 100 people sending these PRs.
- 1:01:30 Feedback.
- 1:02:10 Did you think this was useful?
- 1:02:55 Would you be willing to tell people you know that you did this and you will do it again?
- 1:03:17 You can put this on your resume. It means you know how to do it.
- 1:04:16 ... and Zoom suddenly closed the recording...
Announcing the Perl Toolchain Summit 2026!
The organizers have been working behind the scenes since last September, and today I’m happy to announce that the 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.
This post is brought to you by Simplelists, a group email and mailing list service provider, and a recurring sponsor of the Perl Toolchain Summit.

Started in 2008 as the Perl QA Hackathon in Oslo, the Perl Toolchain Summit is an annual event that brings together the key developers working on the Perl toolchain. Each year (except for 2020-2022), the event moves from country to country all over Europe, organised by local teams of volunteers. The surplus money from previous summits helps fund the next one.
Since 2023, the organizing team has been formally split between a “global” team and a “local” team (although this setup had been used informally before).
The global team is made up of veteran PTS organizers, who deal with invitations, finding sponsors, paying bills and communications. They are Laurent Boivin (ELBEHO), Philippe Bruhat (BOOK), Thibault Duponchelle (CONTRA), Tina Müller (TINITA) and Breno de Oliveira (GARU), supported by Les Mongueurs de Perl’s bank account.
The local team members for this year have organized several events in Vienna (including the Perl QA Hackathon 2010!) and deal with finding the venue, the hotel, the catering and welcoming our attendees in Vienna in April. They are Alexander Hartmaier (ABRAXXA), Thomas Klausner (DOMM), Maroš Kollár (MAROS), Michael Kröll and Helmut Wollmersdorfer (WOLLMERS).
The developers who maintain CPAN and associated tools and services are all volunteers, scattered across the globe. This event is the one time in the year when they can get together.
The summit provides dedicated time to work on the critical systems and tools, with all the right people in the same room. The attendees hammer out solutions to thorny problems and discuss new ideas to keep the toolchain moving forward. This year, about 40 people have been invited, with 35 participants expected to join us in Vienna.
If you want to find out more about the work being done at the Toolchain Summit, and hear from the teams and people involved, you can listen to several episodes of The Underbar podcast, which were recorded during the 2025 edition in Leipzig, Germany:
Given the important nature of the attendees’ work and their volunteer status, we try to pay for most expenses (travel, lodging, food, etc.) through sponsorship. If you’re interested in helping sponsor the summit, please get in touch with the global team at pts2026@perltoolchainsummit.org.
Simplelists has been sponsoring the Perl Toolchain Summit for several years now. We are very grateful for their continued support.
Simplelists is proud to sponsor the 2026 Perl Toolchain Summit, as Perl forms the core of our technology stack. We are grateful that we can rely on the robust and comprehensive Perl ecosystem, from the core of Perl itself to a whole myriad of CPAN modules. We are glad that the PTS continues its unsung work, ensuring that Simplelists can continue to rely on these many tools.
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with a local development…

Dave writes:
During December, I fixed assorted bugs, and started work on another tranche of ExtUtils::ParseXS fixups, this time focussing on:
adding and rewording warning and error messages, and adding new tests for them;
improving test coverage: all XS keywords have tests now;
reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format; splitting a large test file; modernising tests;
refactoring and improving the length(str) pseudo-parameter implementation.
By the end of this report period, that work was about half finished; it is currently finished and being reviewed.
Summary:
* 10:25 GH #16197 re eval stack unwinding
* 1:39 GH #23903 BBC: bleadperl breaks ETHER/Package-Stash-XS-0.30.tar.gz
* 0:09 GH #23986 Perl_rpp_popfree_to(SV sp**) questionable design
* 3:02 fix Pod::Html stderr noise
* 27:47 improve ExtUtils::ParseXS
* 1:47 modernise perlxs.pod
Total: 44:49 (HH:MM)

Tony writes:
```
[Hours] [Activity]

2025/12/01 Monday
 0.23 memEQ cast discussion with khw
 0.42 #23965 testing, review and comment
 2.03 #23885 review, testing, comments
 0.08 #23970 review and approve
 0.13 #23971 review and approve
 0.08 #23965 follow-up
 2.97

2025/12/02 Tuesday
 0.73 #23969 research and comment
 0.30 #23974 review and approve
 0.87 #23975 review and comment
 0.38 #23975 review reply and approve
 0.25 #23976 review, research and approve
 0.43 #23977 review, research and approve
 1.20 #23918 try to produce expected bug and succeed
 4.16

2025/12/03 Wednesday
 0.35 #23883 check updates and approve with comment
 0.72 #23979 review, try to trigger the messages and approve
 0.33 #23968 review, research and approve
 0.25 #23961 review and comment
 2.42 #23918 fix handling of context, testing, push to update,
      comment on overload handling plans, start on it
 4.07

2025/12/04 Thursday
 2.05 #23980 review, comment and approve, fix group_end()
      decorator and make PR 23983
 0.25 #23982 review, research and approve
 1.30 #23918 test for skipping numeric overload, and fix, start
      on force overload
 3.60

2025/12/05 Friday
 0.63 #23980 comment
 0.63

2025/12/08 Monday
 0.90 #23984 review and comment
 0.13 #23988 review and comment
 2.03 #23918 work on force overload implementation
 1.45 #23918 testing, docs
 4.51

2025/12/09 Tuesday
 0.32 github notifications
 1.23 #23918 add more tests
 0.30 #23992 review
 0.47 #23993 research, testing and comment
 0.58 #23993 review and comment
 2.90

2025/12/10 Wednesday
 0.72 #23992 review updates, testing and comment
 1.22 #23782 review (and some #23885 discussion in irc)
 1.35 look into Jim’s freebsd core dump, reproduce and find
      cause, email him and briefly comment in irc, more 23885
      discussion and approve 23885
 3.29

2025/12/11 Thursday
 0.33 #23997 comment
 1.08 #23995 research and comment
 0.47 #23998 review and approve
 1.15 #23918 cleanup
 3.03

2025/11/15 Saturday
 0.20 #23998 review updates and approve
 0.53 #23975 review comment, research and follow-up
 1.25 #24002 review discussion, debugging and comment
 0.28 #23993 comment
 0.67 #23918 commit cleanup
 0.20 #24002 follow-up
 0.65 #23975 research and follow-up
 3.78

2025/12/16 Tuesday
 0.40 #23997 review, comment, approve
 0.37 #23988 review and comment
 0.95 #24001 debugging and comment
 0.27 #24006 review and comment
 0.23 #24004 review and nothing to say
 1.27 #23918 more cleanup, documentation
 3.49

2025/12/17 Wednesday
 0.32 #24008 testing, debugging and comment
 0.08 #24006 review update and approve
 0.60 #23795 quick re-check and approve
 1.02 #23918 more fixes, address each PR comment and push for CI
 0.75 #23956 work on a test and a fix, push for CI
 0.93 #24001 write a test, and a fix, testing
 0.67 #24001 write an inverted test too, commit message and push for CI
 0.17 #23956 perldelta
 0.08 #23956 check CI results, make PR 24010
 0.15 #24001 perldelta and make PR 24011
 4.77

2025/12/18 Thursday
 0.27 #24001 rebase, local testing, push for CI
 1.15 #24012 research
 0.50 #23995 testing and comment
 0.08 #24001 check CI results and apply to blead
 2.00

Which I calculate is 43.2 hours.

Approximately 32 tickets were reviewed or worked on, and 1 patch
was applied.
```


