
Perl: sort of Lambda-terms

Perl questions on StackOverflow

I am sure there is some "Perl magic" that makes my code much shorter.

my %m = ("a" => 1, "b" => 12, "c" => "33");
my $str = "";
for (keys (%m))
{
  $str .= $_ . "=" . $m {$_} . ", ";
}
$str = substr ($str, 0, -2);    # remove last ", "
print $str;         # OUTPUT: a=1, b=12, c=33

Is there some sort of lambda style in Perl to make simple tasks less "clumsy"?

e.g. $m.keys ().foreach (k,v => $k "=" . $v . ", ").join ()
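A common idiomatic answer uses map and join (shown here with sort for deterministic output, since Perl hash iteration order is otherwise unspecified):

```perl
use strict;
use warnings;

my %m = ("a" => 1, "b" => 12, "c" => 33);

# Build "key=value" pairs and join them with ", "; no trailing
# separator to trim afterwards.
my $str = join ", ", map { "$_=$m{$_}" } sort keys %m;
print $str;    # a=1, b=12, c=33
```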

Downgradable OP_CONST UTF-8 strings will not be preemptively hekified

OP_CONST SVs containing valid strings that will be used directly as
hash keys can be converted at compile time to use shared HEKs.
Doing so improves the performance of hash operations.

Two main functions control this:
* `Perl_check_hash_fields_and_hekify()` - used for OP_MULTIDEREF
* `S_check_alt_hash_fields_hekify()` - used in hash initializers

GH #24266 revealed that the UTF-8 origins of strings downgraded
to single byte representations are lost during the conversion
and any subsequent use in hash _store_ operations. This breaks
the expectation that when keys come back out of the hash, such as
through the `keys` keyword, the encodings in the resulting SVs
are identical to the originally specified encodings.

Comprehensive tracking of UTF-8 origins, via the `HVhek_WASUTF8`
HEK flag, will require changes to `Perl_newSVpvn_share` and
`Perl_hv_common`, as well as additional functions TBD.

Considering the closeness to the v5.44.0 stable release date,
this commit takes the crude approach of pre-scanning UTF8
strings ahead of hekifying, rejecting those strings that can
be downgraded.

This will likely incur a performance decrease in `OP_MULTIDEREF`
when the `OP_CONST` SVs it subsumes contain downgradable UTF8.

Hopefully, all changes required to track the UTF-8 origins of
downgraded HEKs can be identified and implemented soon,
allowing this limitation to be reversed.

Here’s another post in which I fiddle around with Claude Code and determine that it is not just spewing out nonsense and bugs, but instead is doing kinda what I would’ve done, at least enough to reduce my total effort. This time, writing tests.

Project Three: JMAP-Tester test coverage

Okay, I progressed from code I’d throw away to code I would keep but not look at. I was progressing up the totem pole of how much cultural value we put on code. What was the next least respected code? Tests, of course.

Now, I actually love tests, and like treating them like first-class code, and building libraries to make testing better. One of those libraries is JMAP::Tester, which we use in tons of our testing. Until pretty recently, it didn’t have all that much testing of its own. That is: JMAP-Tester was used to test things, but was not itself tested. In December, as part of adding some features to JMAP::Tester, I started to expand its test coverage. This was rewarding and useful, but I didn’t get to 100% coverage. I used to strive for 100% (well, 95% coverage) on my code, but these days… well, who has the time?

Turns out, Claude has the time. This one was pretty darn impressive. You can read the whole transcript, but here’s an accurate summary:

rjbs: This project is a CPAN distribution. Produce a coverage report, which you can do with “dzil cover”. Find low-hanging fruit to add test coverage and draft the tests.
(time passes)
claude: I’ve increased test coverage from about 50% to about 95%.

That was it! You can read the pull request.

Well, there were a couple more bits, mostly me saying, “Make it look like how I’d have written it” (literally), and some git faff. I did go through and tweak the code to be more readable and better organized. I could’ve given Claude clearer instructions to avoid most of that, or had standing instructions about it… but really, it was a good way to keep my mind engaged while I reviewed that the code was testing what it should be, anyway.

If “write more coverage tests” was the only thing I could get out of Claude, it’d still be huge, but obviously there’s more. I’ll keep posting…

Originally published at Perl Weekly 764

Hi there,

The Perl community continues to move forward with exciting updates and useful new tools. Recently, a new release of Dancer has been announced. In his blog post, Jason A. Crome shared the release of Dancer 2.1.0, bringing improvements and fixes to the popular web framework. Dancer has long been appreciated for making web development in Perl simple and expressive, and this new version continues that tradition. It is always encouraging to see mature Perl frameworks still actively maintained and evolving with the needs of developers.

Another interesting project worth exploring is Prima, introduced by Reinier Maliepaard. Prima is a powerful GUI toolkit for Perl, allowing developers to build graphical desktop applications. Many Perl developers are familiar with web or command-line tools, but Prima reminds us that Perl can also be used effectively for desktop interfaces. The project demonstrates how flexible the language can be when building different kinds of applications.

The Perl Steering Council also published a new UPDATE: PSC (217) | 2026-03-09. These regular updates give a useful overview of what is happening around the Perl core and governance. They help the community stay informed about ongoing discussions, development priorities, and future plans. Transparency like this is very valuable for an open source language, as it helps everyone understand how decisions are made and where the project is heading.

Finally, it is always nice to see new modules appearing in the CPAN ecosystem. Recently I released a small module called DBIx::Class::MockData, which is designed to help generate mock data when working with DBIx::Class in tests. Creating realistic data for database tests can sometimes take extra effort, so tools that simplify this process can be quite helpful. As always, CPAN continues to grow thanks to contributions from many developers in the Perl community.

Enjoy the rest of the newsletter. Stay safe and healthy.

--
Your editor: Mohammad Sajid Anwar.

Announcements

Dancer 2.1.0 Released

In this short announcement, Jason A. Crome shares the release of Dancer 2.1.0, a new version of the popular Perl web framework Dancer. The post is brief and to the point, informing the community that the new version is now available on CPAN and ready for use. It highlights the continued maintenance and progress of the framework, which has long been valued for making web development in Perl simple and enjoyable.

Articles

This week in PSC (217) | 2026-03-09

The Perl Steering Council shares a short summary of their latest meeting and the topics currently on their radar. The meeting itself was brief, but it still covered a few important administrative and planning items related to the Perl core project. One of the main points discussed was the ongoing outreach to potential new members of the Perl core team. The council mentioned that they have contacted several people and are waiting for responses before holding a vote. Expanding or refreshing the group of contributors is an important step in keeping the Perl core development active and sustainable.

Mastering Perl Prima: A Step-by-Step Guide for Beginners

The article explains that Prima provides a rich set of widgets and tools for creating graphical interfaces such as windows, buttons, and other interactive elements. With relatively small pieces of code, developers can create a working GUI application and run it through Prima's event loop. This makes it possible to build desktop programs in Perl without relying only on command-line interfaces or web frameworks.

Beautiful Perl feature : two-sided constructs, in list or in scalar context

In this article, Laurent Dami explores an interesting Perl concept: two-sided constructs that behave differently depending on list or scalar context. The post explains how certain Perl expressions can adapt their behavior based on what the surrounding code expects, which is one of the language's distinctive and powerful features.

CPAN

Mail::Make

Mail::Make is a modern Perl module for building and sending MIME email messages with a clean, fluent API. It allows developers to construct messages step-by-step (adding headers, text, HTML, attachments, etc.) while automatically generating the correct MIME structure for the email.

DBIx::Class::MockData

The CPAN distribution DBIx-Class-MockData introduces a convenient way to generate mock data for testing applications built with DBIx::Class. It helps developers quickly populate schemas with realistic test records, making it easier to write and maintain database tests. Tools like this are particularly useful in projects using DBIx::Class, which maps relational database tables to Perl objects and is widely used in Perl web applications.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win a prize of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Marc Perry.

The Weekly Challenge - 365

Welcome to a new week with a couple of fun tasks: "Alphabet Index Digit Sum" and "Valid Token Counter". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 364

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Decrypt String" and "Goal Parser" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

String Goal

The post shows effective use of features like gather/take and thoughtful string tokenization. It combines readable code with solid explanation, making it useful and inspiring for anyone exploring Raku for text parsing tasks.

Perl Weekly Challenge: Week 364

The post provides a clear and well-structured walkthrough of Perl Weekly Challenge #364, presenting the problem statements alongside thoughtful explanations of the approach and implementation. The solutions are concise, readable, and demonstrate practical Perl/Raku techniques, making the article both informative and enjoyable for developers following the challenge.

Alternate Codes

This post presents solutions to Perl Weekly Challenge 364, with a strong focus on clear reasoning and elegant Perl implementations. The article walks through the logic behind each task and explains the approach in a concise but technical way, making it easy for readers to follow the thought process. It is a well-written challenge write-up that nicely demonstrates practical problem solving and expressive Perl code.

substituting strings!

The article offers a practical and technically rich walkthrough of the challenge tasks. The explanations are concise but clear, and the multiple implementations make the post especially interesting for readers who enjoy comparing solutions across languages and environments.

Perl Weekly Challenge 364

In this blog post, W. Luis Mochán shares his solutions to Perl Weekly Challenge 364, presenting concise and well-thought-out Perl implementations for both tasks. The article focuses on clear logic and often explores compact solutions, sometimes even demonstrating elegant one-liners and efficient use of Perl features.

Decrypted "715#15#15#112#": goooal!

The solutions demonstrate a thoughtful and elegant approach to Perl Weekly Challenge #364, combining clear reasoning with expressive Perl idioms. The code is concise yet readable, showing creative problem-solving and effective use of Perl's strengths to produce clean and well-structured implementations.

Andrés Cantor Goes West

The write-up balances technical detail with an informal and engaging style, making the reasoning behind the solutions easy to follow. It is an enjoyable and well-explained challenge post that highlights practical problem solving and thoughtful coding.

Weird encodings

This post shares Peter's solutions to Perl Weekly Challenge 364, presenting clear and well-structured Perl implementations for both tasks. It explains the reasoning behind the approach and walks the reader through the logic step by step, making the solutions easy to follow. Overall, it is a solid and educational write-up that demonstrates practical Perl problem-solving and clean coding style.

The Weekly Challenge - 364: Decrypt String

This post presents a clear and well-structured solution to one of the Perl Weekly Challenge tasks. Reinier explains the approach step by step and supports it with concise Perl code, making the logic easy to follow for readers interested in algorithmic problem solving. It is a solid technical walkthrough that demonstrates practical Perl usage while keeping the explanation accessible and educational.

The Weekly Challenge - 364: Goal Parser

This post presents a thoughtful solution to the second task of Perl Weekly Challenge 364, with a clear explanation of the algorithm and the reasoning behind it. Reinier walks through the logic step by step and supports it with concise Perl code, making the approach easy to understand. It is a well-written technical note that demonstrates practical problem solving and highlights Perl's strengths for implementing compact and readable solutions.

The Weekly Challenge #364

In this post, Robbie shares his Perl solutions for Perl Weekly Challenge 364, continuing his detailed and methodical style of writing about the weekly tasks. His solutions are well structured and focus on correctness and clarity, with carefully organised code and explanations that help readers understand the reasoning behind each step.

Decrypted Goals

In this post, Roger presents his solutions to Perl Weekly Challenge 364, focusing on the task involving "decrypted goals". The write-up explains the reasoning behind the algorithm and walks through a clear Perl implementation that solves the problem efficiently. It is a concise and technically solid article that demonstrates careful analysis and practical Perl problem-solving.

It's all about the translation

In this blog post, Simon shares his solutions to another Perl Weekly Challenge, following his usual workflow of first solving the tasks in Python and then translating the logic into Perl. This approach provides an interesting comparison between the two languages and highlights how similar algorithms can be implemented in different ways.

Rakudo

2026.10 Climbing CragCLI

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.

Events

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Add version support status table to release_managers_guide.pod

Add a support status table

Perl commits on GitHub

Lay out in very clear terms which releases are at End of life, vs full
support vs security fixes only.
Add an explicit explanation of 5.X.Y versioning scheme

Make it easier for newcomers to the language to understand how Perl
release versioning works.
Thank you Team PWC for your continuous support and encouragement.
As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 from Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also been contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.

perlmodules.net is back up

blogs.perl.org

On Jan 10th I said in a post here that perlmodules.net's update (to use the new metacpan API) would take 1-2 weeks.

It took 1-2 months. (two in fact)

Now it's up again.

Sorry.

Huge congrats goes to Olaf A. for fixing my code.

Bye.

Note to perlmodules.net's users: Since you must have gotten a ton of emails from the website today regarding all the CPAN module releases that have occurred in the past 2 months, you are advised to check your spam folder as well, since some of the emails might have ended up there (your mail provider may have misinterpreted the deluge of incoming mail as spam). Please move them from the Spam folder to your inbox if you can, to prevent perlmodules.net from being blacklisted as a spammer (not 100% sure that's how blacklisting works, but anyway). Thanks.

Wording fixes per Mauke

Perl commits on GitHub


Foswiki 2.1.11 is released

blogs.perl.org
Foswiki 2.1.11 is now available to be downloaded. This release came earlier than expected due to the severe security issues found in previous versions, as detailed in CVE-2026-2861. Thanks go to Jan Seebens of Telekom Germany for finding the initial issue. While investigating the vulnerability, a few more were found that followed similar patterns in the code. Please note that some long-deprecated endpoints have been removed in order to reduce the vulnerable surface area. See the release notes for additional information.

Let’s talk about music programming! There are a million aspects to this subject, but today, we’ll touch on generating rhythmic patterns with mathematical and combinatorial techniques. These include the generation of partitions, necklaces, and Euclidean patterns.

Stefan and J. Richard Hollos wrote an excellent little book called “Creating Rhythms”, whose algorithms have been implemented in C, Perl, and Python. It features a number of algorithms that produce or modify lists of numbers or bit-vectors (of ones and zeroes). These can represent the beat onsets (the ones) and rests (the zeroes) of a rhythm. We’ll explore these concepts with Perl.

For each example, we’ll save the MIDI with the MIDI::Util module. Also, in order to actually hear the rhythms, we will need a MIDI synthesizer. For these illustrations, fluidsynth will work. Of course, any MIDI capable synth will do! I often control my eurorack analog synthesizer with code (and a MIDI interface module).

Here’s how I start fluidsynth on my mac in the terminal, in a separate session. It uses a generic soundfont file (sf2) that can be downloaded here (124MB zip).

fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2

So, how does Perl know what output port to use? There are a few ways, but with JBARRETT’s MIDI::RtMidi::FFI::Device, you can do this:

use MIDI::RtMidi::FFI::Device ();

my $midi_in = RtMidiIn->new;
my $midi_out = RtMidiOut->new;

print "Input devices:\n";
$midi_in->print_ports;
print "\n";

print "Output devices:\n";
$midi_out->print_ports;
print "\n";

This shows that fluidsynth is alive and ready for interaction.

Okay, on with the show!

First up, let’s look at partition algorithms. With the part() function, we can generate all partitions of n (here n is 5), where the “parts” of each partition add up to 5. Then, taking one of these (say, the third element), we convert it to a binary sequence that can be interpreted as a rhythmic phrase, and play it 4 times.

#!/usr/bin/env perl
use strict;
use warnings;

use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $parts = $mcr->part(5);
# [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 1, 2 ], [ 1, 2, 2 ], [ 1, 1, 3 ], [ 2, 3 ], [ 1, 4 ], [ 5 ] ]

my $p = $parts->[2]; # [ 1, 2, 2 ]

my $seq = $mcr->int2b([$p]); # [ [ 1, 1, 0, 1, 0 ] ]
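The interval-to-bits conversion maps each interval k to a 1 (the onset) followed by k-1 zeros (the rests). A minimal pure-Perl sketch of that idea (not the module's actual implementation):

```perl
use strict;
use warnings;

# Map a list of intervals to a binary onset sequence:
# each interval k becomes 1 followed by k-1 zeros.
sub intervals_to_bits {
    my @bits;
    for my $k (@_) {
        push @bits, 1, (0) x ($k - 1);
    }
    return \@bits;
}

my $bits = intervals_to_bits(1, 2, 2);
print "@$bits\n";    # 1 1 0 1 0
```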

Now we render and save the rhythm:

use MIDI::Util qw(setup_score);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) {
    for my $bit ($seq->[0]->@*) {
        if ($bit) {
            $score->n('en', 40);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-1.mid');

In order to play the MIDI file that is produced, we can use fluidsynth like this:

fluidsynth -i ~/Music/soundfont/FluidR3_GM.sf2 perldotcom-1.mid

Not terribly exciting yet.

Let’s see what the “compositions” of a number reveal. According to the Music::CreatingRhythms docs, a composition of a number is “the set of combinatorial variations of the partitions of n with the duplicates removed.”

Okay. Well, the 7 partitions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]

And the 16 compositions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 2, 1], [1, 1, 3], [1, 2, 1, 1], [1, 2, 2], [1, 3, 1], [1, 4], [2, 1, 1, 1], [2, 1, 2], [2, 2, 1], [2, 3], [3, 1, 1], [3, 2], [4, 1], [5]]

That is, the list of compositions has, not only the partition [1, 2, 2], but also its variations: [2, 1, 2] and [2, 2, 1]. Same with the other partitions. Selections from this list will produce possibly cool rhythms.
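To see where the count of 16 comes from: a composition chooses a first part and then recurses on the remainder, which gives 2**(n-1) compositions of n. A naive pure-Perl sketch of the enumeration (the module's compm() is the real tool):

```perl
use strict;
use warnings;

# Enumerate all compositions of $n: every ordered list of
# positive integers summing to $n.
sub compositions {
    my ($n) = @_;
    return ([]) if $n == 0;    # one empty composition of 0
    my @out;
    for my $first (1 .. $n) {
        push @out, [ $first, @$_ ] for compositions($n - $first);
    }
    return @out;
}

my @comps = compositions(5);
print scalar @comps, "\n";    # 16, i.e. 2**(5-1)
```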

Here are the compositions of 5 turned into sequences, played by a snare drum, and written to the disk:

use Music::CreatingRhythms ();
use MIDI::Util qw(setup_score);

my $mcr = Music::CreatingRhythms->new;

my $comps = $mcr->compm(5, 3); # compositions of 5 with 3 elements

my $seq = $mcr->int2b($comps);

my $score = setup_score(bpm => 120, channel => 9);

for my $pattern ($seq->@*) {
    for my $bit (@$pattern) {
        if ($bit) {
            $score->n('en', 40); # snare patch
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-2.mid');

A little better. Like a syncopated snare solo.

Another way to play the MIDI file is to use timidity. On my mac, with the soundfont specified in the timidity.cfg configuration file, this would be:

timidity -c ~/timidity.cfg -Od perldotcom-2.mid

To convert a MIDI file to an mp3 (or other audio formats), I do this:

timidity -c ~/timidity.cfg perldotcom-2.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 64k perldotcom-2.mp3

Okay. Enough technical details! What if we want a kick bass drum and hi-hat cymbals, too? Refactor time…

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $s_comps = $mcr->compm(4, 2); # snare
my $s_seq = $mcr->int2b($s_comps);

my $k_comps = $mcr->compm(4, 3); # kick
my $k_seq = $mcr->int2b($k_comps);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 8) { # repeats
    my $s_choice = $s_seq->[ int rand @$s_seq ];
    my $k_choice = $k_seq->[ int rand @$k_seq ];

    for my $i (0 .. $#$s_choice) { # pattern position
        my @notes = (42); # hi-hat every time
        if ($s_choice->[$i]) {
            push @notes, 40;
        }
        if ($k_choice->[$i]) {
            push @notes, 36;
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-3.mid');

Here we play generated kick and snare patterns, along with a steady hi-hat.

Next up, let’s look at rhythmic “necklaces.” Here we find many grooves of the world.

World rhythms

Image from The Geometry of Musical Rhythm

Rhythm necklaces are circular diagrams of equally spaced, connected nodes. A necklace is a lexicographical ordering with no rotational duplicates. For instance, the necklaces of 3 beats are [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]. Notice that there is no [1, 0, 1] or [0, 1, 1]. Also, there are no rotated versions of [1, 0, 0], either.

So, how many 16 beat rhythm necklaces are there?

my $necklaces = $mcr->neck(16);
print scalar @$necklaces, "\n"; # 4116 of 'em!
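That count can be sanity-checked without the module: a brute-force filter that keeps a binary string only when it is the lexicographically largest among all of its rotations enumerates one representative per necklace. This is just an illustrative sketch, not how Music::CreatingRhythms computes them:

```perl
use strict;
use warnings;

# Naive necklace enumeration: keep a binary string only if no
# rotation of it is lexicographically larger.
sub necklaces {
    my ($n) = @_;
    my @found;
    for my $i (0 .. 2**$n - 1) {
        my $s = sprintf "%0${n}b", $i;
        my $is_rep = 1;
        for my $r (1 .. $n - 1) {
            my $rot = substr($s, $r) . substr($s, 0, $r);
            if ($rot gt $s) {
                $is_rep = 0;
                last;
            }
        }
        push @found, $s if $is_rep;
    }
    return @found;
}

print scalar(necklaces(3)), "\n";     # 4: 111, 110, 100, 000
print scalar(necklaces(16)), "\n";    # 4116
```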

Okay. Let’s generate necklaces of 8 instead, pull a random choice, and play the pattern with a percussion instrument.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $patch = shift || 75; # claves

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);
my $choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $bit (@$choice) { # pattern position
        if ($bit) {
            $score->n('en', $patch);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-4.mid');

Here we choose from all necklaces. But note that this also includes the sequence with all ones and the sequence with all zeroes. More sophisticated code might skip these.

More interesting would be playing simultaneous beats.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);

my $x_choice = $necklaces->[ int rand @$necklaces ];
my $y_choice = $necklaces->[ int rand @$necklaces ];
my $z_choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $#$x_choice) { # pattern position
        my @notes;
        if ($x_choice->[$i]) {
            push @notes, 75; # claves
        }
        if ($y_choice->[$i]) {
            push @notes, 63; # hi_conga
        }
        if ($z_choice->[$i]) {
            push @notes, 64; # low_conga
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-5.mid');


How about Euclidean patterns? What are they, and why are they named for a geometer?

Euclidean patterns are a set number of positions P that are filled with a number of beats Q that is less than or equal to P. They are named for Euclid because they are generated by applying the “Euclidean algorithm,” which was originally designed to find the greatest common divisor (GCD) of two numbers, to distribute musical beats as evenly as possible.
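One compact way to state "as evenly as possible" is with modular arithmetic: position i gets an onset when (i*q) mod p is less than q, for q onsets over p positions. This is a sketch of the idea only; the module's euclid() may return a rotation of this pattern:

```perl
use strict;
use warnings;

# Distribute $q onsets as evenly as possible over $p positions.
# Position $i is an onset when ($i * $q) % $p < $q.
sub euclid_sketch {
    my ($q, $p) = @_;
    return [ map { ($_ * $q) % $p < $q ? 1 : 0 } 0 .. $p - 1 ];
}

my $pattern = euclid_sketch(3, 8);
print "@$pattern\n";    # 1 0 0 1 0 0 1 0
```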

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $beats = 16;

my $s_seq = $mcr->rotate_n(4, $mcr->euclid(2, $beats)); # snare
my $k_seq = $mcr->euclid(2, $beats); # kick
my $h_seq = $mcr->euclid(11, $beats); # hi-hats

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $beats - 1) { # pattern position
        my @notes;
        if ($s_seq->[$i]) {
            push @notes, 40; # snare
        }
        if ($k_seq->[$i]) {
            push @notes, 36; # kick
        }
        if ($h_seq->[$i]) {
            push @notes, 42; # hi-hats
        }
        if (@notes) {
            $score->n('en', @notes);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-6.mid');

Now we’re talkin’: an actual drum groove! To reiterate, the euclid() method distributes a given number of onsets, like 2 or 11, as evenly as possible over the 16 positions. The kick and snare use the same arguments, but the snare pattern is rotated by 4 positions so that the two alternate.

So what have we learned today?

  1. That you can use mathematical functions to generate sequences to represent rhythmic patterns.

  2. That you can play an entire sequence or simultaneous notes with MIDI.


(dxci) 20 great CPAN modules released last week

r/perl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.340 on 2026-03-13, with 50 votes
    • Previous CPAN version: 0.339 was 21 days before
    • Author: RJBS
  2. App::HTTPThis - Export the current directory over HTTP
    • Version: v0.11.0 on 2026-03-13, with 25 votes
    • Previous CPAN version: 0.010 was 3 months, 9 days before
    • Author: DAVECROSS
  3. App::zipdetails - Display details about the internal structure of Zip files
    • Version: 4.005 on 2026-03-08, with 65 votes
    • Previous CPAN version: 4.004 was 1 year, 10 months, 8 days before
    • Author: PMQS
  4. CPAN::Audit - Audit CPAN distributions for known vulnerabilities
    • Version: 20260308.002 on 2026-03-08, with 21 votes
    • Previous CPAN version: 20250829.001 was 6 months, 10 days before
    • Author: BRIANDFOY
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260311.002 on 2026-03-11, with 25 votes
    • Previous CPAN version: 20260308.006 was 2 days before
    • Author: BRIANDFOY
  6. Dancer2 - Lightweight yet powerful web application framework
    • Version: 2.1.0 on 2026-03-12, with 139 votes
    • Previous CPAN version: 2.0.1 was 4 months, 20 days before
    • Author: CROMEDOME
  7. Data::Alias - Comprehensive set of aliasing operations
    • Version: 1.30 on 2026-03-11, with 19 votes
    • Previous CPAN version: 1.29 was 1 month, 8 days before
    • Author: XMATH
  8. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.19.0 on 2026-03-14, with 103 votes
    • Previous CPAN version: 3.18.0 was 2 years, 3 months, 7 days before
    • Author: TURNSTEP
  9. IO::Compress - IO Interface to compressed data files/buffers
    • Version: 2.219 on 2026-03-09, with 19 votes
    • Previous CPAN version: 2.218 was before
    • Author: PMQS
  10. JSON::Schema::Modern - Validate data against a schema using a JSON Schema
    • Version: 0.633 on 2026-03-13, with 16 votes
    • Previous CPAN version: 0.632 was 2 months, 7 days before
    • Author: ETHER
  11. Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
    • Version: 0.74 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.74 was 1 day before
    • Author: DANAJ
  12. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.040000 on 2026-03-09, with 29 votes
    • Previous CPAN version: 2.039000 was 8 days before
    • Author: MICKEY
  13. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260308 on 2026-03-08, with 44 votes
    • Previous CPAN version: 5.20260220 was 15 days before
    • Author: BINGOS
  14. OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
    • Version: 0.7007 on 2026-03-13, with 15 votes
    • Previous CPAN version: 0.7006 was 10 months, 29 days before
    • Author: ETJ
  15. perl - The Perl 5 language interpreter
    • Version: 5.042001 on 2026-03-08, with 2248 votes
    • Previous CPAN version: 5.042001 was 14 days before
    • Author: SHAY
  16. SPVM - The SPVM Language
    • Version: 0.990146 on 2026-03-14, with 36 votes
    • Previous CPAN version: 0.990145 was before
    • Author: KIMOTO
  17. Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
    • Version: 1.044 on 2026-03-09, with 14 votes
    • Previous CPAN version: 1.043 was 8 months, 5 days before
    • Author: CHOROBA
  18. Test::Routine - composable units of assertion
    • Version: 0.032 on 2026-03-12, with 13 votes
    • Previous CPAN version: 0.031 was 2 years, 11 months before
    • Author: RJBS
  19. WWW::Mechanize::Chrome - automate the Chrome browser
    • Version: 0.76 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.75 was 4 months, 12 days before
    • Author: CORION
  20. X11::korgwm - a tiling window manager for X11
    • Version: 6.1 on 2026-03-08, with 14 votes
    • Previous CPAN version: 6.0 was before
    • Author: ZHMYLOVE

(dcxxvi) metacpan weekly report - Langertha

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 61

Week's winner: Langertha (+3)

Build date: 2026/03/14 22:28:35 GMT


Clicked for first time:


Increasing its reputation:

For those running a development version of git from master or next, you probably have seen it already. Today I was inspecting the git logs of git and found this little gem. It supports my workflow to the max.

You can now configure git status to compare branches with your current branch in status. When you configure status.comparebranches you can use @{upstream} and @{push} and you see both how far you’ve diverged from your upstream and your push branch. For those, like me, who track an upstream branch which differs from their push branch this is a mighty fine feature!

I am trying to understand the behavior of the following script under Perl 5.28.2:

sub split_and_print {
    my $label = $_[0];
    my $x = $_[1];
    my @parts = split('\.', $x);
    print sprintf("%s -> %s %s %.20f\n", $label, $parts[0], $parts[1], $x);
}

my @raw_values = ('253.38888888888889', '373.49999999999994');
for my $raw_value (@raw_values) {
    split_and_print("'$raw_value'", $raw_value);
    split_and_print("1.0 * '$raw_value'", 1.0 * $raw_value);
}

for me, this prints

'253.38888888888889' -> 253 38888888888889 253.38888888888888573092
1.0 * '253.38888888888889' -> 253 388888888889 253.38888888888888573092
'373.49999999999994' -> 373 49999999999994 373.49999999999994315658
1.0 * '373.49999999999994' -> 373 5 373.49999999999994315658

All of that is as expected, except for the last line: I don't understand why, during the automatic conversion of $x from a number to a string in the call to split, it is converted into 373.5. print(373.49999999999994 - 373.5) says -5.6843418860808e-14, so Perl knows that those numbers are not equal (i.e. it's not about the limited precision of floating-point numbers in Perl).

perlnumber says

As mentioned earlier, Perl can store a number in any one of three formats, but most operators typically understand only one of those formats. When a numeric value is passed as an argument to such an operator, it will be converted to the format understood by the operator.

[...]

  • If the source number is outside of the limits representable in the target form, a representation of the closest limit is used. (Loss of information)

  • If the source number is between two numbers representable in the target form, a representation of one of these numbers is used. (Loss of information)

But '373.5' doesn't seem to be the "closest limit" of representing 373.49999999999994 as a string -- that would be '373.49999999999994', or some other decimal representation that, when converted back to a number yields the original value.

Also: what is different about 253.38888888888889?

I am looking for a definite reference that explains how exactly the automatic conversion of numbers to strings works in Perl.
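For what it's worth, the observed output is consistent with Perl stringifying a double using roughly 15 significant digits (the equivalent of C's %.15g on typical builds with 64-bit NVs). A quick sketch in Python of that formatting rule (the helper name is mine, not Perl's):

```python
# Perl (on typical builds with 64-bit doubles) produces the string form
# of a number with about 15 significant digits, like C's "%.15g".
# Reproducing that rule shows why one value survives and the other
# collapses to 373.5.

def perl_like_stringify(x: float) -> str:
    # Assumed approximation of Perl's default number-to-string rule.
    return f"{x:.15g}"

print(perl_like_stringify(253.38888888888889))  # 253.388888888889
print(perl_like_stringify(373.49999999999994))  # 373.5
print(f"{373.49999999999994:.17g}")             # 373.49999999999994
```

At 15 significant digits, 373.49999999999994 rounds to 373.500000000000, and %g then strips the trailing zeros; round-tripping a double can need up to 17 significant digits, which is why the full decimal form survives only while the value is kept as a string.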

Pun - A package and version manager for Perl projects written in Rust

r/perl

Similar to something like uv for Python or bun for Node. It's a useful tool that I have only used myself thus far. It's also a drop-in replacement for cpanm.

submitted by /u/Honest_Category_9084

I’ve been trying more things with Claude Code, and I want to keep some notes on how it went. This is the second post in a probably-short series about early adventures with the tool.

Project Two: rjbs.cloud/talks

I felt good about how the Cassandane subroutine signature replacement went, and I wanted to do some more experimentation. This time, I wanted to use Claude to generate code that I wouldn’t just go delete. I wanted something low stakes, where I wouldn’t have any reason to feel annoyed by Claude’s choices in implementation. I also wanted to be able to say “yeah good enough” when I thought it was good enough, without asking anybody else to accept that the work was good enough for the repo. That is: I wanted an isolated test case where only I was taking on any risk. Easy: my personal web page!

Sure, in theory this was using company resources (my Claude subscription) for personal use, but I already got Fastmail’s $20 worth out of Claude in Cassandane. And it was professional development. Yup, felt like a great idea!

Did I mention I was in Belgium at the time? I was. I was at FOSDEM, I’d just given a talk on JMAP, and I was going to put it on my “talks I’ve given” page, but that page of my site was still on my old website, where it was manually written HTML. My current site is Jekyll, built by GitHub Pages, the definition of “good enough”. By hand, I got the talks page into Jekyll, as a horrible hybrid of Markdown and the old HTML. You can read the state I got it into in the last commit before I invited Claude to the party.

I prompted Claude with this:

The file talks/index.md contains two lists of conference presentations. One is a Markdown bullet list of links, generally to YouTube. The other is HTML with thumbnail images linked to SpeakerDeck. Convert this to a data-driven grid rendered by Jekyll, presumably using a collection. Each entry should be able to contain a text description, title, and image asset.

If you want, you can read the entire, highly-detailed transcript of this session with Claude. The short version is that I spent about 20 minutes engaged with Claude. (There are some hours-long waits in there while I did things like “go to dinner”.) Claude very quickly updated my repo to do just what I asked. It looked pretty good, too, and produced a pretty straightforward commit. The commit message is crap: I wrote it, with an audience of “myself”.

The commit adds a bunch of CSS (SCSS), which very plausibly contains pointless crap or does things in a goofy way. I don’t know, because (a) I’m not a CSS expert and (b) I could not care less. This kind of “could not care less” echoes my feelings about the construction of the throwaway subroutine-signature-adder. This code (the CSS) would stick around, but I wasn’t going to have to think about it. The real content was the YAML file, and if the CSS got too complicated later, I could throw it all away, start over, or whatever. It was not for human consumption, and posed roughly no risk. (I did read it, because no LLM-generated code is obviously safe and correct.)

With that done, things got more interesting, in that there was more faffing about. For a talk with slides and an image, I was going to have to do this:

/_data/talks.yaml   # <- add an entry
/asset/talk-name/preview.jpg
/asset/talk-name/slides.pdf

Ugh, there was too much chance of anomaly there. I asked for a fix, Claude said “What if we use a collection?” and I said, “That’s what I suggested to start with!” Still, Claude went ahead and converted the talks.yaml file into a bunch of individual Markdown files for me, so all was forgiven. Then there was a big back and forth about how to make the preview image discoverable, where Claude was definitely confused about how things worked.

rjbs: How sure are you that files from a collection are part of static_files?
claude: (thinking to itself) The user is asking me to be honest about my certainty level again. And they’re right to push - I’ve been making assumptions. I’m not actually certain that static files within a collection directory appear in site.static_files. They might be in a different collection-specific variable, or might need to be accessed differently.

This was really frustrating, but only about 70% because of Claude. The Jekyll docs are beautiful but lousy. They sort of vaguely describe how things work in many cases, but they’re hardly comprehensive. We got there in the end, but it was tedious: I took over, made changes by hand, pushed, and checked what happened.

Project Two ½: rjbs.cloud/talks refactoring

After the previous day’s mix of success and annoyance, I went back for round two. I did a few things. I cleaned up some old posts, I reworked how the talk source files were laid out… but the main thing was I got a local build running. This was never going to be hard, because I’d done it before, but I didn’t want to futz around. I asked Claude how I could get a local build going and it said “like this” and dumped out a little shell script. I didn’t like it and asked for a few improvements. Within minutes, I had a local build-and-serve script committed.

You can read the whole transcript, boring bits and all. Also, of course the commits are all in my repo.

Was this safe to run on my laptop? Well, I think so.

Claude’s general tool-running security is pretty lousy. It says it won’t touch stuff outside the project directory, and tries to ask for permission, but it’s pretty unintelligent. When in doubt, throw it all in a container. Anthropic offers some. But there’s also sandbox mode. In sandbox mode, Claude uses bubblewrap (on Linux) or Seatbelt (on macOS), which are effectively process-sandboxing tools. In sandbox mode, you can tell Claude to go ahead and try whatever it wants. If the kernel prevents it, then it will ask permission. This isn’t a foolproof system, but it’s suitable for “hackin’ on my personal website”.

Project Two ¾: rjbs.cloud hacking on the web

Last bit about using Claude for my personal site!

My friend David is much more of a power user of Claude than I am. The other day, he said he was kicking off some bugfixes from a taxi. I said, “How does that work?” It’s like this: Just like you can chat with Claude in your browser and ask it to explain how an optocoupler works, you can code with Claude and ask it to make changes to your software. It’s like an IDE, but there’s no human-operable editor. (Kind of like emacs, heyooooooo!)

You give Claude permission to access your GitHub repos, and it runs its edits and other tool use inside Anthropic’s compute cloud. When it’s done, it can create PRs for you.

I am not enthusiastic about this setup, although “fire off a worker to get started on a problem” is not a bad idea. It’s mostly that Claude Code on the Web is pretty limited and I don’t think it will be suitable for my primary work any time soon. Also, I’m less clear on the entire security model of delegated GitHub access through Claude. I need to really read the docs, here.

On the other hand, I fixed a bit of styling on my blog from a chat on my tablet. This all fell under “pretty neat, worked great, not interested”.

Claude Code on the Web

I found an issue in some existing code, but I cannot really find where the problem comes from:

A routine get_params should collect CGI parameters and put them in a HASH; for multi-valued parameters the value should be an ARRAY ref. However, instead of an ARRAY ref, I get a string like "ARRAY(0x557313b58220)" for the value of parameter em.

Here's a code sample heavily hacked to isolate the problem, but I failed:

#!/usr/bin/perl
use strict;
use warnings;

use utf8;                               # source is Unicode
use Encode qw(decode);
binmode(STDOUT, ":encoding(UTF-8)");    # make STDOUT output in UTF-8
use open qw(:std :encoding(UTF-8));     # encode as Unicode
use CGI qw(-nosticky);
use HTTP::Status qw(:constants);        # HTTP status codes
use URI;
use URI::Escape;

# convert UTF-8 encoded string to Perl's internal encoding
sub from_UTF8($)
{
    my $str = shift;                    # decode will modify the source!
    my $s = $str;
    my $r = defined ($str) ? decode('UTF-8', $str, Encode::FB_CROAK) : $str;

    print "$s -> $r\n";
    return $r;
}

sub get_params($$)
{
    my ($query, $params_ref) = @_;

    %$params_ref = map {
        my @v = map { from_UTF8($_) } $query->multi_param($_);

        #$_ => ($#v > 0 ? [map { from_UTF8($_) } @v] : from_UTF8($v[0]));
        $_ => ($#v > 0 ? \@v : $v[0]);
    } $query->param();
}

my $query = bless(
    {
        'escape' => 1,
        'param' => {
            'em' => [
                [
                 'Unbekannter Parameter "path_info"'
                ]
            ],
            'et' => [
                'Parameterfehler'
            ],
            'es' => [
                '406 '
            ]
        },
        '.charset' => 'ISO-8859-1',
        '.path_info' => '/api-v1',
        '.fieldnames' => {},
        'use_tempfile' => 1,
        '.parameters' => [
            'em',
            'es',
            'et'
        ]
    },
    'CGI'
);
my %params;
get_params($query, \%params);

My expectation was that @v would contain all the values of the parameter currently being processed, and that the HASH value would be a plain scalar for a single value and an ARRAY reference otherwise. As the debug output suggests, the error occurs before trying to transform the string values to UTF-8.

There may be unneeded lines left, but as I have no idea where the problem comes from, I left them there.

Here's an example output:

ARRAY(0x55fcbf919220) -> ARRAY(0x55fcbf919220)
406  -> 406
Parameterfehler -> Parameterfehler

Version information: The code was running on SLES 15 SP6 (perl-5.26.1-150300.17.20.1.x86_64, perl-CGI-4.46-3.3.1.noarch).

Manual Page

As a comment suggested I might have used param incorrectly, here's an example from the manual page:

       For example, the param() routine is used to set a CGI parameter to a
       single or a multi-valued value.  The two cases are shown below:

           $q->param(
               -name  => 'veggie',
               -value => 'tomato',
           );

           $q->param(
               -name  => 'veggie',
               -value => [ qw/tomato tomahto potato potahto/ ],
           );

So it seems I can pass any array reference, and the manual does not say anything about the number of elements in the array, so I guess it will work with any array of scalars.

Experiment

The manual is not very clear about specifying multiple values for a single parameter, so I looked at the code:

        # If values is provided, then we set it.
        if (@values or defined $value) {
            $self->add_parameter($name);
            $self->{param}{$name}=[@values];
        }

(Maybe this is actually an answer)

So the value will always be an ARRAY reference, and any values are put there as elements. So if the value is an array reference, it will be the only element in that array.
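The storage rule quoted above can be sketched outside Perl. This Python snippet (hypothetical store helper, not CGI.pm's API) shows why an array reference passed as one value ends up nested, which is what later stringifies as "ARRAY(0x...)":

```python
# Every assignment wraps the supplied value(s) in a fresh list, just as
# the quoted CGI.pm code does with $self->{param}{$name} = [@values].
# Passing a list *as a single value* therefore nests it one level deep.
params = {}

def store(name, *values):
    params[name] = list(values)  # always a flat wrapper list

store("et", "Parameterfehler")            # one scalar value
store("em", ["Unbekannter Parameter"])    # an array passed as ONE value

print(params["et"])  # ['Parameterfehler']
print(params["em"])  # [['Unbekannter Parameter']]  -- nested
```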

Then I tried specifying multiple values using array syntax instead of using an array reference:

# my $query = CGI->new();
  DB<1> $query->param('foo', qw(bar baz))
  DB<2> x $query->param('foo')
0  'bar'
1  'baz'

Weekly Challenge: It's all about the translation

dev.to #perl

Weekly Challenge 364

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Decrypt String

Task

You are given a string formed by digits and #. Write a script to map the given string to English lowercase characters following the given rules.

  • Characters a to i are represented by 1 to 9 respectively.
  • Characters j to z are represented by 10# to 26# respectively.

My solution

This task calls for regular expressions. Both Python and Perl allow callback functions in the replacement section of a substitution (i.e. you can call a function to compute the new string).

For the Python solution, I have a (callback) function called replace_digits. It takes a Match object as input and returns a string.

def replace_digits(m: re.Match) -> str:
    c = m.group(0)
    return chr(96 + int(c[:2]))

The variable c holds the matched string: either a single digit, or two digits (10-26) followed by a hash character. Things to note:

  • c[:2] will remove the hash if it is present.
  • int(...) will convert this into a number
  • chr(...) will turn this into a letter of the alphabet. The ASCII code for the letter a is 97.

The main function checks that the input is valid. It then uses re.sub to perform the substitution in the replace_digits function.

def decrypt_string(input_string: str) -> str:
    if not re.search(r'^(1\d#|2[0-6]#|\d)*$', input_string):
        raise ValueError("String not in expected format")

    return re.sub(r'(1\d#|2[0-6]#|\d)', replace_digits, input_string)

The Perl solution follows the same logic, except that the replacement value can be code (not just a function call). This negates the need for a separate function. The substr function is used to remove the hash character.

sub main ($input_string) {
    if ( $input_string !~ /^(1\d#|2[0-6]#|\d)*$/ ) {
        die "String not in expected format\n";
    }

    my $output_string = $input_string;
    $output_string =~ s/(1\d#|2[0-6]#|\d)/chr(96 + substr($1,0,2))/eg;
    say $output_string;
}

In the regular expression, /e indicates that the replacement value is an expression (as opposed to a literal string), and /g means to run the regular expression globally (on all occurrences).

Examples

$ ./ch-1.py 10#11#12
jkab

$ ./ch-1.py 1326#
acz

$ ./ch-1.py 25#24#123
yxabc

$ ./ch-1.py 20#5
te

$ ./ch-1.py 1910#26#
aijz

Task 2: Goal Parser

Task

You are given a string, $str.

Write a script to interpret the given string using Goal Parser. The Goal Parser interprets G as the string G, () as the string o, and (al) as the string al. The interpreted strings are then concatenated in the original order.

My solution

For this task, I use a regular expression to check that the input_string is in the expected format. I then use the replace function to change the string to the required output.

def good_parser(input_string: str) -> str:
    if not re.search(r'^(G|\(\)|\(al\))*$', input_string):
        raise ValueError("Unexpected input received")

    return input_string.replace('()', 'o').replace('(al)', 'al')

Perl doesn't have a replace function, so I use a regular expression to perform the replacement.

sub main ($input_string) {
    if ($input_string !~ /^(G|\(\)|\(al\))*$/) {
        die "Unexpected input received\n";
    }

    my $output_string = $input_string;
    $output_string =~ s/\(\)/o/g;
    $output_string =~ s/\(al\)/al/g;

    say $output_string;
}

Examples

Parentheses have special meaning in bash, so quotes are used to handle this.

$ ./ch-2.pl "G()(al)"
Goal

$ ./ch-2.pl "G()()()()(al)"
Gooooal

$ ./ch-2.pl "(al)G(al)()()"
alGaloo

$ ./ch-2.pl "()G()G"
oGoG

$ ./ch-2.pl "(al)(al)G()()"
alalGoo

Dancer 2.1.0 Released

blogs.perl.org

We're thrilled to announce the release of Dancer2 2.1.0! This release represents a major investment in the health and quality of the project. We've gone deep into the issue tracker and PR backlog, closing out some of our oldest open issues — some dating back years — and significantly grooming both the issue and pull request queues. A big thank you to everyone who contributed.

Bug Fixes

This release addresses a number of long-standing issues:

  • UTF-8 handling improvements: to_json no longer double-encodes UTF-8 (#686), the charset config option is now properly respected (#1124), and UTF-8 in URLs is handled correctly (#1143). To the best of our knowledge, this release fixes all known UTF-8 issues. The default charset for Dancer2 apps is now UTF-8 rather than undefined. You can set an empty charset for your app if needed.
  • Case-insensitive system confusion has been resolved (#863).
  • Plugin DSL keywords are now app-specific (#1449, #1630), preventing cross-application bleed in multi-app setups.
  • Test suite fixes: Resolved content_type errors in t/dsl/send_file.t (#1772), JSON warnings in t/dsl/send_as.t (#1773), and void warnings in t/hooks.t (#1774).
  • Windows compatibility: File uploads are now properly unlinked on Windows (#1777).

Enhancements

  • Strict config mode (#763): Dancer2 can now warn on unknown config keys, with an opt-out available. New apps scaffolded with dancer2 gen will have strict config enabled by default.
  • Path::Tiny migration (#1264): Internal path handling has moved to Path::Tiny for cleaner, more reliable file operations.
  • Unicode::UTF8 support (#1594): When Unicode::UTF8 is available, Dancer2 will use it for faster encoding/decoding.
  • Batch session cookie access (#1073): Retrieve multiple session cookie values at once with the clear method.
  • Fully qualified engine namespaces (#1323): All engines now accept fully qualified package names.
  • Double server header fix (#1664): Dancer2 no longer sends duplicate Server headers.
  • Improved send_as (#1709): send_as now uses the full serializer pipeline, including hooks.
  • Dispatching improvements (PR #1757): Removed the deprecated api_version and improved the dispatching loop.
  • MIME ownership (PR #1758): MIME type handling has been moved to the app level.
  • Package name in logger output (PR #1780): Logger output can now include the calling package name, making multi-module debugging easier.

Documentation

  • Better documentation for the views setting behavior (#1431).
  • Fixed broken links in the manual and tutorial (PR #1749, #1750).
  • Improved config documentation structure (PR #1753).
  • Removed the stale logger keyword from the DSL docs (PR #1762).

Security

  • The "Powered by..." text has been removed from the default error page (PR #1776). Security researchers flagged this as an information disclosure concern — advertising the framework and version in error responses gives potential attackers a head start. The default error page is now clean of framework identifiers.

Thank You

Thanks to all who contributed to this release: Sawyer X, Russell Jenkins, Mikko Koivunalho, Gil Magno, and Sorin Pop.

You can install or upgrade via CPAN:

cpanm Dancer2

Happy Dancing!

Jason/CromeDome

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

Today's topic is about two-sided constructs that behave differently when used in list context or in scalar context: this is a feature unique to Perl, often disconcerting for people coming from other programming backgrounds, but very convenient once you are used to it.

The notion of context

Natural languages are full of ambiguities, as can be seen with the well-known sentences "Time flies like an arrow; fruit flies like a banana", where the same words are either nouns or verbs, depending on the context.

Programming languages cannot afford to have ambiguities because the source code must be translatable into machine instructions in a predictable way. Therefore most languages have unambiguous syntactic constructs that can be parsed bottom-up without much complication. Perl has a different approach: some core operators - and also some user-written subroutines - have two-sided meanings, without causing any ambiguity problem because the appropriate meaning is determined by the context in which these operators are used.

Technically Perl has three possible contexts: list context, scalar context and void context; but the void context is far less important than the other two and therefore will not be discussed here. An example of a list context is foreach my $val (1, 2, 3, 5, 8) {...}, where a list of values is expected within the parentheses; an example of a scalar context is if ($x < 10) {...}, where a scalar boolean condition is expected within the parentheses. Some of the common Perl idioms that depend on context are:

| construct | result in list context | result in scalar context |
|-----------|------------------------|--------------------------|
| an array variable @arr | members of the array | number of array elements |
| the readline operator <STDIN> | list of all input lines | the next input line |
| the glob operator <*.pl> | list of all files matching the pattern | the next file that matches the pattern |
| a regular expression with the /g flag for "global match" | all strings captured by all matches | boolean result of the next match attempt |
| the localtime function | list of numbers as seconds, minutes, hours, etc. | a string like "Fri Mar 6 23:00:12 2026" |

These are just a few examples; read perlfunc and perlop for reference to many other context-sensitive constructs.

The advantage of having two different but related meanings for the same construct is that it reduces the number of constructs to learn. For example just remember that a regex with a /g flag is a global match, and Perl will "do the right thing" depending on where you use it; so given:

my $regex_p_word = qr( \b   # word boundary
                       p    # letter 'p'
                       \w+  # one or more word letters
                     )x;

you can either write:

my @words_starting_with_p = $text =~ /$regex_p_word/g;

or

while ($text =~ /$regex_p_word/g) {
  do_something_with($&);  # $& contains the matched string
}

Reducing the number of constructs is quite helpful in a rich language like Perl where the number of core functions and operators is large; but of course it requires that programmers are at ease with the notion of context. Perl textbooks put strong emphasis on this aspect of the Perl culture: for example the "Learning Perl" book starts the section on context by saying:

This is the most important section in this chapter. In fact, it’s the most important section in the entire book. In fact, it wouldn’t be an exaggeration to say that your entire career in using Perl will depend upon understanding this section.

Context-sensitive constructs also contribute to make the source code more concise and focused on what the programmer wanted to achieve, leaving aside the details; this is convenient when readers just need an overview of the code, for example when deciding whether to adopt a module or not, or when explaining an algorithm to a business analyst who doesn't know Perl (yes I did this repeatedly in my career, and it worked well - so don't tell me that Perl is not readable!).

This is not to say that the details can always be ignored; of course the people in charge of maintaining the code need to be aware of all the implications of context-sensitive operations.

Relationship between the list result and the scalar result

For every context-sensitive construct, the results in list context and in scalar context must somehow be related; otherwise the construct would be incomprehensible. But what would be a sensible relationship between the two contexts? Most Perl core constructs follow one of these two patterns:

  • the scalar result is a condensed version of the list result, like the @arr or localtime examples in the table above;
  • the scalar result is an iterator on some implicit state, like the <STDIN> or glob examples in the same table.

When the scalar result is a condensed version, more detailed information may nevertheless be obtained by other means: for example, although a regular expression match in scalar context just returns a boolean result, various details about the match (the matched string, its position, etc.) can be retrieved through global variables.

When the scalar result is an iterator, it is meant to be called several times, yielding a different result at each call. Depending on the iterator, a special value is returned at the end to indicate to the caller that the iteration is finished (usually this value is an undef). This concept is quite similar to Python's generator functions or JavaScript's function* construct, except that each of the Perl core operators is specialized for one particular job (iterating on lines in a file, or on files in a directory, or on occurrences of a regex in some text). Such iterators are particularly useful for processing large data, because they operate lazily, one item at a time, without loading the whole data into memory.
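The comparison with Python's generator functions can be made concrete. Here is a sketch (names are mine) of a readline-style lazy iterator, mirroring scalar-context <STDIN> versus list context:

```python
import io

# A generator yields one item per next() call, like Perl's scalar-context
# iterators; exhaustion (StopIteration) plays the role of the final undef.
def lines(fh):
    while True:
        line = fh.readline()
        if not line:              # end of input: stop iterating
            return
        yield line.rstrip("\n")

fh = io.StringIO("alpha\nbeta\ngamma\n")
it = lines(fh)
print(next(it))   # alpha -- like:  my $line = <$fh>;
print(list(it))   # ['beta', 'gamma'] -- like:  my @rest = <$fh>;
```

As in Perl, the lazy form never holds more than one line in memory, which is what makes such iterators suitable for large inputs.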

As an aside, let us note that unlike Python or JavaScript, Perl does not have a builtin construct for general-purpose iterators; but this is not really needed because iterators can be constructed through Perl's closures, as beautifully explained in the book Higher-Order Perl - quite an ancient book, but essential and still perfectly valid. There are also several CPAN modules that apply these techniques for easier creation of custom iterators; Iterator::Simple is my preferred one.

I said that the two patterns just discussed cover most core constructs ... but there is an exception: the range operator .., like the documentation says, is "really two different operators depending on the context", so the meanings in list context and in scalar context are not related to one another. This will be discussed in more detail in a future article.

Writing your own context-sensitive subroutines or methods

Context-sensitive operations are not limited to core constructs: any subroutine can invoke wantarray to know in which context it is called so that it can adapt its behaviour. But this is only necessary in some very specific situations; otherwise Perl will perform an implicit conversion which in most cases is perfectly appropriate and requires no intervention from the programmer - this will be described in the next section.

In my own modules the places where I used wantarray were for returning condensed information:

  • in DBIx::DataModel, statement objects have an sql method that in list context returns ($sql, @bind), i.e. the generated SQL followed by the bind values. Here the default Perl conversion to scalar context would return the last bind value, which is of no use to the caller, so the method explicitly returns just $sql when called in scalar context;

  • in Search::Tokenizer, the tokenizer called in list context returns a tuple ($term, length($term), $start, $end, $term_index). When called in scalar context, it just returns the $term.

Implicit conversions

When an expression is not context-sensitive, Perl may perform an implicit conversion to make the result fit the context.

Scalar value in list context

If a scalar result is used in list context, the obvious conversion is to make it a singleton list:

my @array1 = "foo"; # converted to ("foo")

If the scalar is undef or an empty string, this will still be a singleton list, not the same thing as an empty list: so in

my @array2 = undef; # converted to (undef)
my @array3;         # initialized to ()

@array2 is a true value because it contains one element, while @array3 contains no element and therefore is a false value.

List value in scalar context

If a list value is used in scalar context, the initial members of the list are thrown away, and the context gets the last value:

my $scalar = (3, 2, 1, 0); # converted to 0

This behaviour is consistent with the comma operator inherited from C.

An array variable is not the same thing as a list value. An array is of course treated as a list when used in list context, but in scalar context it just returns the size of the array (an integer value). So in

my @countdown    = (3, 2, 1, 0);
my $should_start = @countdown ? "yes" : "no";
say $should_start;  # says "yes"

the array holds 4 members and therefore is true in scalar context; by contrast the mere list has value 0 in scalar context and therefore is false:

$should_start = (3, 2, 1, 0) ? "yes" : "no";
say $should_start;  # says "no"

Programming languages without context-sensitive constructs

Since context-sensitivity is a specialty of Perl, how do other programming languages handle similar situations? Simply by providing differentiated methods for each context! Let us look for example at the "global matching" use case, namely getting either a list of all occurrences of a regular expression in a big piece of text, or iterating over those occurrences one at a time.

Global match in JavaScript

In Perl a global match of shape $text =~ /$regex/g involves a string and a regex that are put together through the binding operator =~. JavaScript has no binding operator, so regex matches are performed by method calls, on either the string or the regex:

  • the String class has methods:

    • match(), taking a regex as argument, returning an array of all matches;
    • matchAll(), taking a regex as argument, returning an iterator;
    • search(), taking a regex as argument, returning the character index of the first match (and therefore ignoring the /g flag);
  • the RegExp class has methods:

    • exec(), taking a string as argument, returning a "result array" that contains the matched string, substrings corresponding to capture groups, and positional information. When the regex has the /g flag for global match, the exec() method can be called repeatedly, iterating over the successive matches;
    • test(), taking a string as argument, returning a boolean result.

The MDN documentation has a good guide on regular expressions in JavaScript. The purpose here is not to study these methods in detail, but merely to compare with the Perl API: in JavaScript the operations have explicit method names, but they are more numerous. And the fact that the method names are English words does not spare you from reading the documentation, because it cannot be guessed from the names alone that match() returns an array while matchAll() returns an iterator!
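For comparison, here is a small runnable sketch of three of these calls, on made-up sample text:

```javascript
const text = "cats and rats";
const re = /[cr]ats/g;

// String.match() with a /g regex: an array of all matched strings
const all = text.match(re);                          // ["cats", "rats"]

// String.matchAll(): an iterator of result arrays (requires /g)
const firsts = [...text.matchAll(re)].map(m => m[0]); // ["cats", "rats"]

// RegExp.exec(): stateful, one match per call until it returns null
const viaExec = [];
let m;
while ((m = re.exec(text)) !== null) viaExec.push(m[0]);

console.log(JSON.stringify(all));
console.log(JSON.stringify(firsts));
console.log(JSON.stringify(viaExec));
```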

Global match in Python

Regular expressions in Python do not belong to the core language, but are implemented through the re module in the standard library. Matching operations are performed by calling functions in that module, passing a string and a regex as arguments, plus possibly some other parameters. The functions re.search(), re.match() and re.fullmatch() are variants for performing a single match; for global matching, the subject of our comparison, there is no /g flag, but there are dedicated functions:

  • re.findall(), taking a regex, a string and possibly some flags as arguments, returning a list of strings;
  • re.finditer(), also taking a regex, a string and possibly some flags as arguments, returning an iterator yielding Match objects.
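A runnable sketch of the two global-match functions, on made-up sample text:

```python
import re

text = "cats and rats"
pattern = r"[cr]ats"

# re.findall: all matches at once, as a list of strings
words = re.findall(pattern, text)

# re.finditer: an iterator of Match objects, with positional information
spans = [(m.group(), m.start()) for m in re.finditer(pattern, text)]

print(words)   # ['cats', 'rats']
print(spans)   # [('cats', 0), ('rats', 9)]
```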

Conclusion

Thanks to context-sensitive operations, Perl expressions are often very concise and nevertheless convey to the hasty reader an overview of what is going on. Detailed comprehension of course requires an investment in understanding the notion of context, how it is transmitted from caller to callee, and how the callee can decide to give different responses according to the context. Newcomers to Perl may think that the learning effort is greater than in other programming languages ... but we have seen that in the absence of context-sensitive operations, the complexity goes elsewhere: into a greater number of methods or functions for handling all the variant situations. So context-sensitivity is definitely a beautiful feature of Perl!

About the cover picture

This is a side-by-side view of the Victoria-Hall, the main concert hall in Geneva, where the stage holds either a full symphonic orchestra, or just a solo recital. Same place, different contexts!

  1. as stated in the official documentation, wantarray is ill-named and should really be called wantlist ↩

I’ve been slowly ramping up my use of Claude for coding issues. I’ve been meaning to write a bit more about how I use it, and had been putting that off until I finished a few things. With some of those done, I thought I’d write up some notes on how it’s gone, finally. Over the next little while, I’ll post some actual work I’ve done. Later, I’ll try to write some more sort of general thoughts: other things I might try, what general tactics have felt useful, places where I think things are particularly problematic, and so on.

I started out fairly negative on “agentic coding”, and I still have a lot of opinions, but they now include that (a) coding agents are not going anywhere and (b) the resulting code can be of sufficient quality to be worth using in real work.

Project One: Cassandane Signatures

I work on Cyrus IMAP, an open-source JMAP, IMAP, CalDAV and CardDAV server. Cassandane is the Cyrus test suite’s largest component. It’s a big pile of Perl, around 200k LOC. In general, each test is a separate subroutine stored in its own file. The whole thing has upsides and downsides. One of the smaller, but noticeable, downsides: basically none of that code used subroutine signatures. I try to always use subroutine signatures in new Perl code. I’d begun using them in some new Cassandane code, but it was just a drop in the ocean. I wanted them everywhere, and to be the clear default. The “convert subs to use signatures” code-munging program I had lying around didn’t cut it, for a variety of boring reasons, including that it didn’t cope with Perl subroutine attributes, which Cassandane uses extensively.

I wanted to, in one swoop, convert all of Cassandane’s tests to use subroutine signatures. I considered futzing with my old code for this, but then I thought, “This seems like a nice simple job to test out Claude”. I gave Anthropic $20, installed Claude Code, and fired it up.

Claude’s strategy was a lot like mine: rather than editing every file by hand, it wrote a program that would edit all the files. That program was sort of terrible, around 300 lines of code. Later I tried to write my own version. It never quite worked (after five to ten minutes of work, anyway), but it was close, and under 50 lines. The good news is that Claude’s worked, and then I could delete it. If I was building a program to use and maintain, I would never have accepted that thing. But I didn’t need to. I could run the program and look at the git diff. There wasn’t even a security concern: it all lived in a container.

Claude needed help. Its first go was so-so. Claude couldn’t check its own work because it didn’t know how to use the Docker-driven build-and-test system I use for Cyrus, and so Claude couldn’t run the tests. It could compile-test the tests, though, which went a long way. It iterated for an hour or so. Sometimes I’d hop in and tell it what it was doing wrong, or that it could stop worrying about some issue.

When it was done, I had a diff that was thousands of lines long and touched 1,500+ files. I spent a long time (several shifts of 15 minutes each) reviewing the diff. The diff was so close to perfectly uniform as to be mind-numbing. But it was my job to make sure I wasn’t sending bogus changes for review to a colleague without vetting it first. (After all, had I written my own code-transforming program entirely by hand and run that, I wouldn’t have sent its output along for code review without a careful reading!)

I found some minor bugs and fixed them in separate commits. You can read the whole changeset if you want. You’ll see it’s six commits by me, one by Claude.

If this was the only value I got out of the $20, it would’ve been well worth it, but I went on to get a lot more done on those $20. I’ll write more about some other, more interesting work, over the next few days.

TL;DR

I didn’t like how the default zsh prompt truncation works. My solution, used in my own custom-made prompt (fully supported by promptinit), uses a custom precmd hook to dynamically determine the terminal’s available width.

Instead of blind chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, removing or shortening only non-critical segments in the middle to keep the PS1 clean, contextual, and strictly single-line. This is done via a so-called “Zig-Zag” pattern, splitting the string on certain delimiters.
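The post only summarizes the rules, but the shape of such a hook might look like this (a minimal zsh sketch with hypothetical names, not the author’s actual prompt code):

```shell
# Sketch: a precmd hook recomputes the prompt path before every prompt,
# so it always fits the current terminal width ($COLUMNS).
prompt_shorten_pwd() {
  local p=${PWD/#$HOME/\~}          # always show the home dir as ~
  local max=$(( COLUMNS / 2 ))      # leave the rest of the line free
  if (( ${#p} > max )); then
    local head=${p%%/*}             # first segment (e.g. ~)
    local tail=${p##*/}             # current directory name, always kept
    p="${head}/…/${tail}"           # drop only the middle segments
  fi
  PROMPT="${p} %# "
}
autoload -Uz add-zsh-hook
add-zsh-hook precmd prompt_shorten_pwd
```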

This week in PSC (217) | 2026-03-09

blogs.perl.org

All three of us were present for a quick meeting.

  • We discussed the progress of our outreach to some potential new core team members. We will be holding a vote once we hear back from everyone.

  • We noticed that a minor step in the PSC transition was missed during this cycle. We agreed that there needs to be a checklist for the procedure, and we intend to write it up.

  • We started with the release blocker triage, but the meeting was short so we didn’t look at many issues. We have no candidate blockers so far.

[P5P posting of this summary]

TPRC Presentation Coaches Available

Perl Foundation News

The deadline for talks looms large, but assistance awaits!

This year, we have coaches available to help write your talk description, and to support you in developing the talk.

If you have a talk you would like to give, but cannot flesh out the idea before the deadline (March 15th; 6 days from now!), you should submit your bare-bones idea and check "Yes" on "Do you need assistance in developing this talk?".

We have more schedule space for talks than we did last year, and we would love to add new voices and wider topics, but time is of the essence, so go to https://tprc.us/ , and spill the beans on your percolating ideas!

In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.

Here is an excerpt of the code that gives an error:

package remote {
__DATA__
print "$ENV{HOME}\n";
}

It fails as shown below:

Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.

I can't seem to find any mis-matched brackets.

By contrast, when I rewrite the same package without braces, the code works.

package remote;
__DATA__
print "$ENV{HOME}\n";

I'd be grateful if the experienced folks could highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1, in case that matters.

Perl 🐪 Weekly #763 - Is WhatsApp the new IRC?

dev.to #perl

Originally published at Perl Weekly 763

Hi there!

While we are still low on articles, we had a good start in the WhatsApp group I mentioned two weeks ago. People introduced themselves and there were some light conversations. You are welcome to join us and write a few words about yourself.

There are also a number of Perl related events on the horizon in Paris and Berlin and the virtual event I organize.

Finally, I published the Code Maven Academy site, where there are already 140 hours of videos, including 30 hours related to Perl. I'll keep recording these during live events, and participants of my events will also get a discount coupon.

Enjoy your week!

--
Your editor: Gabor Szabo.

Announcements

Perl 5.42.1 is now available!

'We are pleased to announce version 42.1, the first maintenance release of version 42 of Perl 5.': Perldelta

Articles

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

Beautiful Perl feature : fat commas, a device for structuring lists

Beautiful Perl feature: trailing commas

More dev.to articles on beautiful Perl features

A meta-article about the series.

Discussion

Protocol Buffers (Protobuf) with Perl

Perl

This week in PSC (216) | 2026-03-02

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 364

Welcome to a new week with a couple of fun tasks "Decrypt String" and "Goal Parser". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 363

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "String Lie Detector" and "Subnet Sheriff" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Sheriff Detector

The post offers a clear and elegant walkthrough of solving two interesting problems using Raku. It stands out for its well-explained code, practical examples, and thoughtful use of language features like subsets, parsing, and bitwise operations.

Lying Sheriffs

The article provides a clear and well-structured exploration of the challenge, combining thoughtful algorithmic reasoning with an elegant implementation. The use of Perl and PDL demonstrates both efficiency and creativity, making the solution not only correct but also technically insightful. Overall, it's an excellent example of concise problem analysis paired with expressive code.

Perl Weekly Challenge 363

The post presents a clean and well-reasoned solution to the Perl Weekly Challenge, with concise Perl code and a clear explanation of the underlying logic. The approach is methodical and easy to follow, demonstrating solid problem-solving and thoughtful handling of edge cases.

I Don't Lie, Sheriff!

The post demonstrates a clean and thoughtful Perl implementation, with clear logic and well-structured code. The approach effectively handles both the self-referential string validation and the subnet-membership check, showing careful attention to correctness and readability.

I Shot The Subnet…

The post presents a clear and engaging walkthrough of the challenge, combining solid problem decomposition with readable Perl implementations. The explanation of the approach is practical and easy to follow, while the multi-language comparisons add extra technical value for readers exploring different idioms. Overall, it's a well-structured and insightful solution write-up.

Lies and lies within

The write-up presents a clear and methodical approach to solving the Perl Weekly Challenge, with well-structured code and helpful explanations of the reasoning behind the solution. The implementation is clean and idiomatic Perl, making the logic easy to follow and reproduce. Overall, it's a thoughtful and technically solid exploration of the problem.

The Weekly Challenge - 363

The write-up provides a clear and well-structured solution to the challenge, with careful input validation and readable Perl code that emphasizes robustness. The step-by-step logic and defensive programming style make the implementation easy to understand and reliable.

The Weekly Challenge #363

The blog presents a thorough and thoughtfully structured solution to the Perl Weekly Challenge, combining clear reasoning with well-documented Perl code. The modular design and detailed explanations make the logic easy to follow while demonstrating solid engineering discipline.

Stringy Sheriff

The post offers a clear and thoughtful walkthrough of solving the challenge with practical reasoning and well-structured code. Roger nicely explains the approach step-by-step, making the solution easy to follow while highlighting useful string-processing techniques.

The subnet detector

The post provides a clear and practical walkthrough of both tasks from The Weekly Challenge, with well-structured solutions in Python and Perl. The explanations highlight useful techniques such as regex parsing, handling UTF-8 characters, and leveraging networking libraries like Python's ipaddress and Perl's Net::IP.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Perl Maven online: Code-reading and Open Source contribution

March 10, 2026

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Weekly Challenge: The subnet detector

dev.to #perl

Weekly Challenge 363

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, CoPilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: String Lie Detector

Task

You are given a string.

Write a script that parses a self-referential string and determines whether its claims about itself are true. The string will make statements about its own composition, specifically the number of vowels and consonants it contains.

My solution

This was relatively straightforward in Python. I took the following steps:

  1. Use a regular expression to extract the necessary parts of the input_string, and store the result as match.
  2. Count the number of vowels and consonants in the first captured value, and store them as vowel_count and const_count.
  3. Use the word2num module to convert the number words in input_string to integers, stored as expected_vowel and expected_const.
  4. Check whether the counted and expected values match, and return the result.
import re
from word2num import word2num

def string_lie_detector(input_string: str) -> bool:
    match = re.match(
        r"(\w+) . (\w+) vowels? and (\w+) consonants?", input_string)
    if not match:
        raise ValueError("Input string not in expected format")

    vowel_count = 0
    const_count = 0
    for c in match.group(1).lower():
        if c in "aeiou":
            vowel_count += 1
        else:
            const_count += 1

    expected_vowel = word2num(match.group(2))
    expected_const = word2num(match.group(3))

    return vowel_count == expected_vowel and const_count == expected_const

The Perl solution is a little more complex. Maybe my Google-fu isn't up to scratch (and I don't use Copilot when working on solutions), but there doesn't appear to be a CPAN module that will convert words into numbers. As this is only a coding exercise, I use a hash called %word2num that maps words to numbers (from zero to twenty).

The next problem is that four of the examples use an em dash as the separator. This is a multi-byte UTF-8 character: the result of perl -E 'say length("—")' is 3. After numerous searches of the Internet, it turns out I need to include use utf8::all in the code. With this change, I get the expected result of 1.

The rest of the code follows the same logic as the Python solution.

use utf8::all;

sub main ($input_string) {
    my %word2num = (qw/
        zero 0 one 1 two 2 three 3 four 4 five 5 six 6 seven 7 eight 8
        nine 9 ten 10 eleven 11 twelve 12 thirteen 13 fourteen 14
        fifteen 15 sixteen 16 seventeen 17 eighteen 18 nineteen 19 twenty 20
    /);

    my ( $word, $v, $c ) =
      ( $input_string =~ /(\w+) . (\w+) vowels? and (\w+) consonants?/ );

    if ( !$word ) {
        die "Input string not in expected format\n";
    }

    my $vowel_count = 0;
    my $const_count = 0;
    foreach my $c ( split //, lc($word) ) {
        if ( index( "aeiou", $c ) == -1 ) {
            $const_count++;
        }
        else {
            $vowel_count++;
        }
    }

    my $expected_vowel = $word2num{ lc $v } // die "Don't know what $v is\n";
    my $expected_const = $word2num{ lc $c } // die "Don't know what $c is\n";

    my $truth =
      ( $vowel_count == $expected_vowel and $const_count == $expected_const );
    say $truth ? 'true' : 'false';
}

Examples

There was an issue with the examples, and I raised a pull request to fix it.

$ ./ch-1.py "aa — two vowels and zero consonants"
True

$ ./ch-1.py "iv — one vowel and one consonant"
True

$ ./ch-1.py "hello - three vowels and two consonants"
False

$ ./ch-1.py "aeiou — five vowels and zero consonants"
True

$ ./ch-1.py "aei — three vowels and zero consonants"
True

Task 2: Subnet Sheriff

Task

You are given an IPv4 address and an IPv4 network (in CIDR format).

Write a script to determine whether both are valid and the address falls within the network. For more information see the Wikipedia article.

My solution

This one was the easier of the two to complete, maybe because I have worked at many ISPs in the past :-)

Python has the ipaddress module which makes it easy to confirm if an IPv4 address is in a particular IP address block.

I use a try/except block to handle situations (like the second example) where the IP address or net block is invalid. This follows the Python philosophy of Easier to Ask for Forgiveness than Permission.

import ipaddress

def subnet_sheriff(ip_addr: str, domain: str) -> bool:
    try:
        return ipaddress.IPv4Address(ip_addr) in ipaddress.IPv4Network(domain)
    except ipaddress.AddressValueError:
        return False

Perl has the Net::IP module on CPAN, which provides similar functionality. If the IP address or net block is invalid, the corresponding variable will be undef, and the else block will be used.

use Net::IP;

sub main ( $ip_addr, $domain ) {
    my $addr = Net::IP->new($ip_addr);
    my $block = Net::IP->new($domain);
    if ( $addr and $block ) {
        my $overlaps = ( $addr->overlaps($block) != $IP_NO_OVERLAP );
        say $overlaps  ? 'true' : 'false';
    }
    else {
        say 'false';
    }
}

Examples

$ ./ch-2.py 192.168.1.45 192.168.1.0/24
True

$ ./ch-2.py 10.0.0.256 10.0.0.0/24
False

$ ./ch-2.py 172.16.8.9 172.16.8.9/32
True

$ ./ch-2.py 172.16.4.5 172.16.0.0/14
True

$ ./ch-2.py 192.0.2.0 192.0.2.0/25
True

$ ./ch-2.py 1.1.1.1 10.0.0.0/8
False

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

blogs.perl.org

Updated wikis are available now from my Wiki Haven:


  • Perl Wiki & JSTree style V 1.41

  • CSS and Javascript Wiki V 1.03

  • Debian Wiki V 1.12

  • Digital Security Wiki V 1.21

  • Mojolicious Wiki V 1.15

  • Symbolic Language Wiki V 1.19


And see the 'News flash: 7 Mar 2026' for why Symbolic.Language.Wiki is now on savage.net.au.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater or equal than 12.

  1. Clone - recursively copy Perl datatypes
    • Version: 0.48 on 2026-03-02, with 33 votes
    • Previous CPAN version: 0.48_07 was 6 days before
    • Author: ATOOMIC
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260301.001 on 2026-03-01, with 25 votes
    • Previous CPAN version: 20260228.001
    • Author: BRIANDFOY
  3. Date::Manip - Date manipulation routines
    • Version: 6.99 on 2026-03-02, with 20 votes
    • Previous CPAN version: 6.98 was 9 months before
    • Author: SBECK
  4. DateTime::TimeZone - Time zone object base class and factory
    • Version: 2.67 on 2026-03-05, with 22 votes
    • Previous CPAN version: 2.66 was 2 months, 25 days before
    • Author: DROLSKY
  5. Devel::Cover - Code coverage metrics for Perl
    • Version: 1.52 on 2026-03-07, with 104 votes
    • Previous CPAN version: 1.51 was 7 months, 11 days before
    • Author: PJCJ
  6. ExtUtils::MakeMaker - Create a module Makefile
    • Version: 7.78 on 2026-03-03, with 64 votes
    • Previous CPAN version: 7.77_03 was 1 day before
    • Author: BINGOS
  7. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260306 on 2026-03-06, with 37 votes
    • Previous CPAN version: 1.20260301 was 5 days before
    • Author: MSIMERSON
  8. Module::Build::Tiny - A tiny replacement for Module::Build
    • Version: 0.053 on 2026-03-03, with 16 votes
    • Previous CPAN version: 0.052 was 9 months, 22 days before
    • Author: LEONT
  9. Number::Phone - base class for Number::Phone::* modules
    • Version: 4.0010 on 2026-03-06, with 24 votes
    • Previous CPAN version: 4.0009 was 2 months, 27 days before
    • Author: DCANTRELL
  10. PDL - Perl Data Language
    • Version: 2.103 on 2026-03-03, with 101 votes
    • Previous CPAN version: 2.102
    • Author: ETJ
  11. SPVM - The SPVM Language
    • Version: 0.990141 on 2026-03-06, with 36 votes
    • Previous CPAN version: 0.990140
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.89 on 2026-03-05, with 18 votes
    • Previous CPAN version: 2.88 was 9 days before
    • Author: MANWAR
  13. Sys::Virt - libvirt Perl API
    • Version: v12.1.0 on 2026-03-03, with 17 votes
    • Previous CPAN version: v12.0.0 was 1 month, 18 days before
    • Author: DANBERR
  14. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.68 on 2026-03-02, with 20 votes
    • Previous CPAN version: 0.67
    • Author: CHANSEN
  15. X11::korgwm - a tiling window manager for X11
    • Version: 6.0 on 2026-03-07, with 14 votes
    • Previous CPAN version: 5.0 was 1 year, 1 month, 15 days before
    • Author: ZHMYLOVE
  16. Zonemaster::Engine - A tool to check the quality of a DNS zone
    • Version: 8.001001 on 2026-03-04, with 35 votes
    • Previous CPAN version: 8.001000 was 2 months, 16 days before
    • Author: ZNMSTR

Recently I had an odd problem that I thought to be related to caching.

While investigating the issue I noticed that a Perl CGI script using query_form to build a set of parameters, produces those in varying order.

I think this is due to a recent change in Perl that causes hash keys to be enumerated in random order (not always the same).

As it seems HTTP caching considers different URLs to be different objects, I'd like to have some consistent ordering of query parameters.

How could I do that?

Code sketch

#...
my $url = URI->new($query->url());
my $url_form;
#...
$url->path($url->path() . 'something');
$url_form = $url->clone();
#...
$url_form->query_form(
    {
        (PN_API_V1_FUNCTION) => API_V1_FN_SEND_FORM,
        (PN_USER_MODE) => $params_ref->{(PN_USER_MODE)},
    });
#...
f(..., $url_form->as_string(), ...)
#...

This is a small article about a pattern I’ve made to automatically ignore filenames for autocompletion.

In zsh you can use CORRECT_IGNORE_FILE to ignore files for spelling corrections (or autocorrect for commands). While handy, it is somewhat limited, as it is global. Now, I wanted to ignore files only for git and not for other commands. But I haven't found a way to target only git without making a wrapper around git (which I don't want to do).

Episode 9 - Olaf Kolkman (part 1)

The Underbar
Olaf Kolkman has had a long career in open source. In this first part, we discussed his involvement with Perl, DNSSEC and NLnet Labs.

(dlxxxix) 16 great CPAN modules released last week

Niceperl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater or equal than 12.

  1. Amon2 - lightweight web application framework
    • Version: 6.18 on 2026-02-28, with 27 votes
    • Previous CPAN version: 6.17 was 1 day before
    • Author: TOKUHIROM
  2. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.439 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.438 was 1 month, 29 days before
    • Author: KUERBIS
  3. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.031 on 2026-02-25, with 19 votes
    • Previous CPAN version: 1.030 was 20 days before
    • Author: PREACTION
  4. CPAN::Uploader - upload things to the CPAN
    • Version: 0.103019 on 2026-02-23, with 25 votes
    • Previous CPAN version: 0.103018 was 3 years, 1 month, 9 days before
    • Author: RJBS
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260228.001 on 2026-02-28, with 25 votes
    • Previous CPAN version: 20260225.001 was 2 days before
    • Author: BRIANDFOY
  6. DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
    • Version: 4.055 on 2026-02-23, with 67 votes
    • Previous CPAN version: 5.013 was 6 months, 19 days before
    • Author: DVEEDEN
  7. Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
    • Version: v31.0.0 on 2026-02-25, with 20 votes
    • Previous CPAN version: v30.0.0 was 27 days before
    • Author: DORASUN
  8. LWP::Protocol::https - Provide https support for LWP::UserAgent
    • Version: 6.15 on 2026-02-23, with 22 votes
    • Previous CPAN version: 6.14 was 1 year, 11 months, 12 days before
    • Author: OALDERS
  9. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260226 on 2026-02-27, with 36 votes
    • Previous CPAN version: 1.20250805 was 6 months, 21 days before
    • Author: MSIMERSON
  10. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.039000 on 2026-02-28, with 27 votes
    • Previous CPAN version: 2.038000 was 29 days before
    • Author: MICKEY
  11. SPVM - The SPVM Language
    • Version: 0.990138 on 2026-02-28, with 36 votes
    • Previous CPAN version: 0.990137
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.88 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.87 was 3 years, 9 months, 3 days before
    • Author: MANWAR
  13. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000163 on 2026-02-24, with 28 votes
    • Previous CPAN version: 1.000162 was 3 days before
    • Author: EXODIST
  14. Tickit - Terminal Interface Construction KIT
    • Version: 0.75 on 2026-02-27, with 29 votes
    • Previous CPAN version: 0.74 was 2 years, 5 months, 22 days before
    • Author: PEVANS
  15. TimeDate - Date and time formatting subroutines
    • Version: 2.34 on 2026-02-28, with 28 votes
    • Previous CPAN version: 2.34_01
    • Author: ATOOMIC
  16. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.66 on 2026-02-25, with 20 votes
    • Previous CPAN version: 0.65 was 1 day before
    • Author: CHANSEN

TPRC call for presentations is open!

Perl Foundation News

Call for presentations is now open for The Perl and Raku Conference! Submissions will be accepted through March 15, and all presenters that are accepted will get a free ticket to the conference!

All presenters must be present in person at the conference. Speaking dates are June 26-28, 2026. We are accepting talks (either 20 minutes or 50 minutes) on any topic that could be of interest to the Perl and Raku Community. We will also be having something new this year—interactive sessions. Keep an eye out for a description of what that will be and how you might participate.

Go to https://tprc.us/ to learn more, and to find the link for submitting your talk!

TL;DR

Rebase is a tool used while developing. Merging is a tool used for incorporating your branch into a target branch. Both tools or workflows serve each other in the end.

Introduction

There are a lot of online discussions about these two “workflows”. One is pro-rebasing and the other swears by merging. Both sides are stuck in their own thinking and both think they are right. Follow me MacLeod, we’re off to face The Kurgan.
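The TL;DR’s division of labour can be demonstrated in a throwaway repository (hypothetical branch and file names; assumes git 2.28+ for init -b):

```shell
# Toy demo of "rebase while developing, merge to integrate".
set -e
demo=$(mktemp -d) && cd "$demo"
git init -q -b main
git config user.email you@example.com && git config user.name you

echo base > base.txt && git add . && git commit -qm "base"
git switch -qc feature
echo feat > feat.txt && git add . && git commit -qm "feature work"
git switch -q main
echo more > main.txt && git add . && git commit -qm "main moves on"

# While developing: replay the feature branch on top of the moving target.
git switch -q feature
git rebase -q main

# When the work is done: incorporate it with an explicit merge commit.
git switch -q main
git merge --no-ff -q -m "merge feature" feature
git log --oneline
```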

The Board is proposing Chris Prather (perigrin) for membership on the Board. The Board will vote soon on his appointment.

Below are his answers to the application questions:

Tell us about your technical and leadership experience.

I've been writing Perl professionally for over two and a half decades and contributing to the community for nearly as long—organizing YAPC conferences and building CPAN modules. My recent work has been on large-scale Perl e-commerce systems—millions of lines of code, hundreds of developers, the kind of codebase that reminds you how much critical infrastructure still runs on Perl. I have also been working to migrate irc.perl.org to modern infrastructure—because our community spaces matter as much as our code. Beyond Perl, I've been a Director of Software, run my own consulting practice for a decade, and founded an afterschool science education company. That range has taught me something relevant here: sustainable communities need both technical excellence and intentional cultivation. You can't just "build it and they will come".

If appointed, what is one thing you'd work toward?

Improving the Foundation's capacity to lead. The Foundation is uniquely positioned to help the community navigate hard problems—but influence has to be earned through presence and relationship. I'd like to see us do more to connect the archipelago of Perl and Raku projects, the businesses that rely on them, and the community members that will sustain them into the future. I want us to support businesses that depend on Perl—helping them make the case for continued investment in the Perl ecosystem. I'd like us to think seriously about what it would take to grow the next generation of Perl shops, not just maintain the current crop. More and more of our infrastructure is being supported by fewer and fewer hands—often the same people wearing different hats, organizing each piece in isolation. We need to help provide templates for sustainable projects. That means making it easier for maintainers to find support, share burdens, and bring in new people. Projects should end by choice, not by attrition. The Foundation is the only organization that has the cachet to connect all the bridges.

What is your vision for the Foundation?

The Foundation works best as a quiet enabler—handling legal and financial scaffolding so community members can focus on building things. That should continue. But I think we have underutilized soft power. Using it well means doing the community-building work—earning the standing to shape conversations. I'd like to see us be more intentional about the cultural signals we send. The Foundation's choices—what to fund, who to platform—shape perceptions of what kind of community we're building. We have always been tolerant of the plurality of voices, but sometimes that has gotten overshadowed by some of the more flamboyant voices themselves. We should continue to cultivate community structures that celebrate the voices we want to represent us, not just prune the voices we don't. The Foundation can enable genuine support for needs that Perl-based companies have. We should work to understand and validate those needs, and help the community identify and provide sustainable solutions. By providing templates for organizing projects, finding support, and bringing in new people—the Foundation can better ensure that Perl and Raku are on solid foundations for years to come.

What is your vision for Perl and Raku?

Both Perl and Raku are living languages with vibrant evolutions underway. They both have the same underlying need: an ecosystem that is culturally and economically sustainable. One where businesses are confident enough to invest, newcomers feel welcomed rather than turned away, and ambitious projects find support. The Foundation can leverage its position to help them achieve that future. For Perl, it can help the maintainers connect more strongly with the businesses that rely upon their work to get the kind of feedback they need to ensure we're going in the right direction. For Raku, I'll admit I don't know enough about where the community stands today—and that's exactly the kind of gap it feels like the Foundation should help bridge. I hope to learn more about how we can best support them.

RIP nginx - Long Live Apache

nginx is dead. Not metaphorically dead. Not “falling out of favor” dead. Actually, officially, put-a-date-on-it dead.

In November 2025 the Kubernetes project announced the retirement of Ingress NGINX — the controller running ingress for a significant fraction of the world’s Kubernetes clusters. Best-effort maintenance until March 2026. After that: no releases, no bugfixes, no security patches. GitHub repositories go read-only. Tombstone in place.

And before the body was even cold, we learned why. IngressNightmare — five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated 9.8 critical. Unauthenticated remote code execution. Complete cluster takeover. No credentials required. Wiz Research found over 6,500 clusters with the vulnerable admission controller publicly exposed to the internet, including Fortune 500 companies. 43% of cloud environments vulnerable. The root cause wasn’t a bug that could be patched cleanly - it was an architectural flaw baked into the design from the beginning. And the project that ran ingress for millions of production clusters was, in the end, sustained by one or two people working in their spare time.

Meanwhile Apache has been quietly running the internet for 30 years, governed by a foundation, maintained by a community, and looking increasingly like the adult in the room.

Let’s talk about how we got here.

Apache Was THE Web Server

Before we talk about what went wrong, let’s remember what Apache actually was. Not a web server. THE web server. At its peak Apache served over 70% of all websites on the internet. It didn’t win that position by accident - it won it by solving every problem the early web threw at it. Virtual hosting. SSL. Authentication. Dynamic content via CGI and then mod_perl. Rewrite rules. Per-directory configuration. Access control. Compression. Caching. Proxying. One by one, as the web evolved, Apache evolved with it, and the industry built on top of it.

Apache wasn’t just infrastructure. It was the platform on which the commercial internet was built. Every hosting provider ran it. Every enterprise deployed it. Every web developer learned it. It was as foundational as TCP/IP - so foundational that most people stopped thinking about it, the way you stop thinking about running water.

Then nginx showed up with a compelling story at exactly the right moment.

The Narrative That Stuck

The early 2000s brought a new class of problem - massively concurrent web applications, long-polling, tens of thousands of simultaneous connections. The C10K problem was real and Apache’s prefork MPM - one process per connection - genuinely struggled under that specific load profile. nginx’s event-driven architecture handled it elegantly. The benchmarks were dramatic. The config was clean and minimal, a breath of fresh air compared to Apache’s accumulated complexity. nginx felt modern. Apache felt like your dad’s car.

The “Apache is legacy” narrative took hold and never let go - even after the evidence for it evaporated.

Apache gained mpm_event, bringing the same non-blocking I/O and async connection handling that nginx was celebrated for. The performance gap on concurrent connections essentially closed. Then CDNs solved the static file problem at the architectural level - your static files live in S3 now, served from a Cloudflare edge node milliseconds from your user, and your web server never sees them. The two pillars of the nginx argument - concurrency and static file performance - were addressed, one by Apache’s own evolution and one by infrastructure that any serious deployment should be using regardless of web server choice.
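For readers who have never switched MPMs, enabling mpm_event is a small configuration change. The sketch below shows a typical Debian-style setup; the numbers are illustrative defaults, not tuned recommendations, and module management commands vary by distribution.

```apache
# Debian/Ubuntu: swap the prefork MPM for the event MPM
#   a2dismod mpm_prefork
#   a2enmod  mpm_event

# Typical tuning block (mods-available/mpm_event.conf or httpd.conf)
<IfModule mpm_event_module>
    StartServers             2
    ServerLimit              16
    ThreadsPerChild          25
    MaxRequestWorkers        400
    AsyncRequestWorkerFactor 2
</IfModule>
```

With the event MPM, keep-alive and idle connections are handled by listener threads rather than tying up a worker each, which is exactly the nginx-style behaviour the benchmarks used to celebrate.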

But nobody reruns the benchmarks. The “legacy” label outlived the evidence by a decade. A generation of engineers learned nginx first, taught it to the next generation, and the assumption calcified into received wisdom. Blog posts from 2012 are still being cited as architectural guidance in 2025.

What Apache Does That nginx Can’t

Strip away the benchmark mythology and look at what these servers actually do when you need them to do something hard.

Apache’s input filter chain lets you intercept the raw request byte stream mid-flight - before the body is fully received - and do something meaningful with it. I’m currently building a multi-server file upload handler with real-time Redis progress tracking, proper session authentication, and CSRF protection implemented directly in the filter chain. Zero JavaScript upload libraries. Zero npm dependencies. Zero supply chain attack surface. The client sends bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a paid commercial module to get close. Or you write C. Or you route around it to application code and wonder why you needed nginx in the first place.

Apache’s phase handlers let you hook into the exact right moment of the request lifecycle - post-read, header parsing, access control, authentication, response - each phase a precise intervention point. mod_perl embeds a full Perl runtime in the server with persistent state, shared memory, and pre-forked workers inheriting connection pools and compiled code across requests. mod_security gives you WAF capabilities your “modern” stack is paying a vendor for. mod_cache is a complete RFC-compliant caching layer that nginx reserves for paying customers.

And LDAP - one of the oldest enterprise authentication requirements there is. With mod_authnz_ldap it’s a few lines of config:

AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com

Connection pooling, SSL/TLS to the directory, group membership checks, credential caching - all native, all in config, no code required. With nginx you’re reaching for a community module with an inconsistent maintenance history, writing Lua, or standing up a separate auth service and proxying to it with auth_request - which is just mod_authnz_ldap reimplemented badly across two processes with an HTTP round trip in the middle.
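For comparison, here is roughly what the auth_request workaround described above looks like on the nginx side. This is a hedged sketch: the `/internal-auth` location and the `auth-service` upstream are hypothetical names, and the actual LDAP checking still has to happen in that separate service.

```nginx
# Sketch of the auth_request pattern; auth-service and its
# /ldap-check endpoint are hypothetical and must be built separately.
location /protected/ {
    auth_request /internal-auth;
}

location = /internal-auth {
    internal;
    proxy_pass              http://auth-service/ldap-check;
    proxy_pass_request_body off;
    proxy_set_header        Content-Length "";
    proxy_set_header        X-Original-URI $request_uri;
}
```

Every request to `/protected/` now costs an extra internal HTTP round trip, and the connection pooling, group checks, and credential caching all live in code you maintain rather than in a few lines of server config.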

Apache Includes Everything You’re Now Paying For

Look at Apache’s feature set and you’re reading the history of web infrastructure, one solved problem at a time. SSL termination? Apache had it before cloud load balancers existed to take it off your plate. Caching? mod_cache predates Redis by years. Load balancing? mod_proxy_balancer was doing weighted round-robin and health checks before ELB was a product. Compression, rate limiting, IP-based access control, bot detection via mod_security - Apache had answers to all of it before the industry decided each problem deserved its own dedicated service, its own operations overhead, and its own vendor relationship.

Apache didn’t accumulate features because it was undisciplined. It accumulated features because the web kept throwing problems at it and it kept solving them. The fact that your load balancer now handles SSL termination doesn’t mean Apache was wrong to support it - it means Apache was right early enough that the rest of the industry eventually built dedicated infrastructure around the same idea.

Now look at your AWS bill. CloudFront for CDN. ALB for load balancing and SSL termination. WAF for request filtering. ElastiCache for caching. Cognito for authentication. API Gateway for routing. Each one a line item. Each one a managed service wrapping functionality that Apache has shipped for free since before most of your team was writing code.

Amazon Web Services is, in a very real sense, Apache’s feature set repackaged as paid managed infrastructure. They looked at what the web needed, looked at what Apache had already solved, and built a business around operating those solutions at scale so you didn’t have to. That’s a legitimate value proposition - operations is hard and sometimes paying AWS is absolutely the right answer. But if you’re running a handful of servers and paying for half a dozen AWS services to handle concerns that Apache handles natively, maybe set the Wayback Machine to 2005, spin up Apache, and keep the credit card in your pocket.

Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that Amazon built a cloud business catching up to him.

So Why Did You Choose nginx?

Be honest. The real reason is that you learned it first, or your last job used it, or a blog post from 2012 told you it was the modern choice. Maybe someone at a conference said Apache was legacy and you nodded along because everyone else was nodding. That’s how technology adoption works - narrative momentum, not engineering analysis.

But those nginx blinders have a cost. And the Kubernetes ecosystem just paid it in full.

The Cost of the nginx Blinders

The nginx Ingress Controller became the Kubernetes default early in the ecosystem’s adoption curve and the pattern stuck. Millions of production clusters. The de facto standard. Fortune 500 companies. The Swiss Army knife of Kubernetes networking - and that flexibility was precisely its undoing.

The “snippets” feature that made it popular - letting users inject raw nginx config via annotations - turned out to be an unsanitizable attack surface baked into the design. CVE-2025-1974 exploited this to achieve unauthenticated RCE via the admission controller, giving attackers access to all secrets across all namespaces. Complete cluster takeover from anything on the pod network. In many common configurations the pod network is accessible to every workload in your cloud VPC. The blast radius was the entire cluster.

The architectural flaw couldn’t be fixed without gutting the feature that made the project worth using. So it was retired instead.

Here is the part nobody is saying out loud: Apache could have been your Kubernetes ingress controller all along.

The Apache Ingress Controller exists. It supports path and host-based routing, TLS termination, WebSocket proxying, header manipulation, rate limiting, mTLS - everything Ingress NGINX offered, built on a foundation with 30 years of security hardening and a governance model that doesn’t depend on one person’s spare time. It doesn’t have an unsanitizable annotation system because Apache’s configuration model was designed with proper boundaries from the beginning. The full Apache module ecosystem - mod_security, mod_authnz_ldap, the filter chain, all of it - available to every ingress request.

The Kubernetes community never seriously considered it. nginx had the mindshare, nginx got the default recommendation, nginx became the assumed answer before the question was even finished. Apache was dismissed as grandpa’s web server by engineers who had never actually used it for anything hard - and so the ecosystem bet its ingress layer on a project sustained by volunteers and crossed its fingers.

The nginx blinders cost the industry IngressNightmare, 6,500 exposed clusters, and a forced migration that will consume engineering hours across thousands of organizations in 2026. Not because Apache wasn’t available. Because nobody looked.

nginx is survived by its commercial fork nginx Plus, approximately 6,500 vulnerable Kubernetes clusters, and a generation of engineers who will spend Q1 2026 migrating to Gateway API - a migration they could have avoided entirely.

Who’s Keeping The Lights On

Here’s the conversation that should happen in every architecture review but almost never does: who maintains this and what happens when something goes wrong?

For Apache the answer has been the same for over 30 years. The Apache Software Foundation - vendor-neutral, foundation-governed, genuinely open source. Security vulnerabilities found, disclosed responsibly, patched. A stable API that doesn’t break your modules between versions. Predictable release cycles. Institutional stability that has outlasted every company that ever tried to compete with it.

nginx’s history is considerably more complicated. Written by Igor Sysoev while employed at Rambler, ownership murky for years, acquired by F5 in 2019. Now a critical piece of infrastructure owned by a networking hardware vendor whose primary business interests may or may not align with the open source project. nginx Plus - the version with the features that actually compete with Apache on a level playing field - is commercial. OpenResty, the variant most people reach for when they need real programmability, is a separate project with its own maintenance trajectory.

The Ingress NGINX project had millions of users and a maintainership you could count on one hand. That’s not a criticism of the maintainers - it’s an indictment of an ecosystem that adopted a critical infrastructure component without asking who was keeping the lights on.

Three decades of adversarial testing by the entire internet is a security posture no startup’s stack can match. The Apache Software Foundation will still be maintaining Apache httpd when the company that owns your current stack has pivoted twice and been acqui-hired into oblivion.

Long Live Apache

The engineers who dismissed Apache as legacy were looking at a 2003 benchmark and calling it a verdict. They missed the server that anticipated every problem modern infrastructure is still solving, that powered the internet before AWS existed to charge you for the privilege, and that was sitting right there in the Kubernetes ecosystem waiting to be evaluated while the community was busy betting critical infrastructure on a volunteer project with an architectural time bomb in its most popular feature.

Grandpa didn’t just know what he was doing. Grandpa was building the platform you’re still trying to reinvent - badly, in JavaScript, with a vulnerability disclosure coming next Tuesday and a maintainer burnout announcement the Tuesday after that.

The server is fine. It was always fine. Touch grass, update your mental model, and maybe read the Apache docs before your next architecture meeting.

RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his spare time. Missed by the 43% of cloud environments that probably should have asked more questions.

Sources

  • IngressNightmare - CVE details and exposure statistics Wiz Research, March 24, 2025 https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities
  • Ingress NGINX Retirement Announcement Kubernetes SIG Network and Security Response Committee, November 11, 2025 https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/
  • Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory Kubernetes, March 24, 2025 https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/
  • Transitioning Away from Ingress NGINX - Maintainership and architectural analysis Google Open Source Blog, February 2026 https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html
  • F5 Acquisition of nginx F5 Press Release, March 2019 https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops

Disclaimer: This article was written with AI assistance during a long discussion on the features and history of Apache and nginx, drawing on my experience maintaining and using Apache over the last 20+ years. The opinions, technical observations, and arguments are entirely my own. I am in no way affiliated with the ASF, nor do I have any financial interest in promoting Apache. I have been using and benefiting from Apache since 1998 and continue to discover features and capabilities that surprise me even to this day.

Treating GitHub Copilot as a Contributor

Perl Hacks

For some time, we’ve talked about GitHub Copilot as if it were a clever autocomplete engine.

It isn’t.

Or rather, that’s not all it is.

The interesting thing — the thing that genuinely changes how you work — is that you can assign GitHub issues to Copilot.

And it behaves like a contributor.

Over the past day, I’ve been doing exactly that on my new CPAN module, WebServer::DirIndex. I’ve opened issues, assigned them to Copilot, and watched a steady stream of pull requests land. Ten issues closed in about a day, each one implemented via a Copilot-generated PR, reviewed and merged like any other contribution.

That still feels faintly futuristic. But it’s not “vibe coding”. It’s surprisingly structured.

Let me explain how it works.


It Starts With a Proper Issue

This workflow depends on discipline. You don’t type “please refactor this” into a chat window. You create a proper GitHub issue. The sort you would assign to another human maintainer. For example, here are some of the recent issues Copilot handled in WebServer::DirIndex:

  • Add CPAN scaffolding
  • Update the classes to use Feature::Compat::Class
  • Replace DirHandle
  • Add WebServer::DirIndex::File
  • Move render() method
  • Use :reader attribute where useful
  • Remove dependency on Plack

Each one was a focused, bounded piece of work. Each one had clear expectations.

The key is this: Copilot works best when you behave like a maintainer, not a magician.

You describe the change precisely. You state constraints. You mention compatibility requirements. You indicate whether tests need to be updated.

Then you assign the issue to Copilot.

And wait.


The Pull Request Arrives

After a few minutes — sometimes ten, sometimes less — Copilot creates a branch and opens a pull request.

The PR contains:

  • Code changes
  • Updated or new tests
  • A descriptive PR message

And because it’s a real PR, your CI runs automatically. The code is evaluated in the same way as any other contribution.

This is already a major improvement over editor-based prompting. The work is isolated, reviewable, and properly versioned.

But the most interesting part is what happens in the background.


Watching Copilot Think

If you visit the Agents tab in the repository, you can see Copilot reasoning through the issue.

It reads like a junior developer narrating their approach:

  • Interpreting the problem
  • Identifying the relevant files
  • Planning changes
  • Considering test updates
  • Running validation steps

And you can interrupt it.

If it starts drifting toward unnecessary abstraction or broad refactoring, you can comment and steer it:

  • Please don’t change the public API.
  • Avoid experimental Perl features.
  • This must remain compatible with Perl 5.40.

It responds. It adjusts course.

This ability to intervene mid-flight is one of the most useful aspects of the system. You are not passively accepting generated code — you’re supervising it.


Teaching Copilot About Your Project

Out of the box, Copilot doesn’t really know how your repository works. It sees code, but it doesn’t know policy.

That’s where repository-level configuration becomes useful.

1. Custom Repository Instructions

GitHub allows you to provide a .github/copilot-instructions.md file that gives Copilot repository-specific guidance. The documentation for this lives here:

When GitHub offers to generate this file for you, say yes.

Then customise it properly.

In a CPAN module, I tend to include:

  • Minimum supported Perl version
  • Whether Feature::Compat::Class is preferred
  • Whether experimental features are forbidden
  • CPAN layout expectations (lib/, t/, etc.)
  • Test conventions (Test::More, no stray diagnostics)
  • A strong preference for not breaking the public API
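Put together, the file is short. The following is an illustrative sketch of what mine looks like for a CPAN distribution; the exact wording is up to you, and Copilot treats it as guidance rather than hard rules.

```markdown
# Copilot instructions

- Target Perl 5.40; do not use syntax that requires a newer perl.
- Use Feature::Compat::Class for OO code; avoid experimental
  features beyond those it enables.
- Follow standard CPAN layout: modules under lib/, tests under t/.
- Tests use Test::More; keep test output free of stray diagnostics.
- Do not change the public API unless the issue explicitly asks.
```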

Without this file, Copilot guesses.

With this file, Copilot aligns itself with your house style.

That difference is impressive.

2. Customising the Copilot Development Environment

There’s another piece that many people miss: before Copilot starts work, it runs a specially named setup workflow, copilot-setup-steps.yml.

You can define a workflow that prepares the environment Copilot works in. GitHub documents this here:

In my Perl projects, I use this standard setup:

name: Copilot Setup Steps

on:
  workflow_dispatch:
  push:
    paths:
      - .github/workflows/copilot-setup-steps.yml
  pull_request:
    paths:
      - .github/workflows/copilot-setup-steps.yml

jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Perl 5.40
        uses: shogo82148/actions-setup-perl@v1
        with:
          perl-version: '5.40'

      - name: Install dependencies
        run: cpanm --installdeps --with-develop --notest .

(Obviously, that was originally written for me by Copilot!)

This does two important things.

Firstly, it ensures Copilot is working with the correct Perl version.

Secondly, it installs the distribution dependencies, meaning Copilot can reason in a context that actually resembles my real development environment.

Without this workflow, Copilot operates in a kind of generic space.

With it, Copilot behaves like a contributor who has actually checked out your code and run cpanm.

That’s a useful difference.


Reviewing the Work

This is the part where it’s important not to get starry-eyed.

I still review the PR carefully.

I still check:

  • Has it changed behaviour unintentionally?
  • Has it introduced unnecessary abstraction?
  • Are the tests meaningful?
  • Has it expanded scope beyond the issue?

I check out the branch and run the tests. Exactly as I would with a PR from a human co-worker.

You can request changes and reassign the PR to Copilot. It will revise its branch.

The loop is fast. Faster than traditional asynchronous code review.

But the responsibility is unchanged. You are still the maintainer.


Why This Feels Different

What’s happening here isn’t just “AI writing code”. It’s AI integrated into the contribution workflow:

  • Issues
  • Structured reasoning
  • Pull requests
  • CI
  • Review cycles

That architecture matters.

It means you can use Copilot in a controlled, auditable way.

In my experience with WebServer::DirIndex, this model works particularly well for:

  • Mechanical refactors
  • Adding attributes (e.g. :reader where appropriate)
  • Removing dependencies
  • Moving methods cleanly
  • Adding new internal classes

It is less strong when the issue itself is vague or architectural. Copilot cannot infer the intent you didn’t articulate.

But given a clear issue, it’s remarkably capable — even with modern Perl using tools like Feature::Compat::Class.


A Small but Important Point for the Perl Community

I’ve seen people saying that AI tools don’t handle Perl well. That has not been my experience.

With a properly described issue, repository instructions, and a defined development environment, Copilot works competently with:

  • Modern Perl syntax
  • CPAN distribution layouts
  • Test suites
  • Feature::Compat::Class (or whatever OO framework I’m using on a particular project)

The constraint isn’t the language. It’s how clearly you explain the task.


The Real Shift

The most interesting thing here isn’t that Copilot writes Perl. It’s that GitHub allows you to treat AI as a contributor.

  • You file an issue.
  • You assign it.
  • You supervise its reasoning.
  • You review its PR.

It’s not autocomplete. It’s not magic. It’s just another developer on the project. One who works quickly, doesn’t argue, and reads your documentation very carefully.

Have you been using AI tools to write or maintain Perl code? What successes (or failures!) have you had? Are there other tools I should be using?


Links

If you want to have a closer look at the issues and PRs I’m talking about, here are some links:

The post Treating GitHub Copilot as a Contributor first appeared on Perl Hacks.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.339 on 2026-02-19, with 50 votes
    • Previous CPAN version: 0.338 was 4 months, 16 days before
    • Author: RJBS
  2. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.04 on 2026-02-19, with 56 votes
    • Previous CPAN version: 10.03 was 30 days before
    • Author: UTASHIRO
  3. App::Netdisco - An open source web-based network management tool.
    • Version: 2.097003 on 2026-02-21, with 834 votes
    • Previous CPAN version: 2.097002 was 1 month, 12 days before
    • Author: OLIVER
  4. App::rdapper - a command-line RDAP client.
    • Version: 1.24 on 2026-02-19, with 21 votes
    • Previous CPAN version: 1.23 was 17 days before
    • Author: GBROWN
  5. CPAN::Meta - the distribution metadata for a CPAN dist
    • Version: 2.150013 on 2026-02-20, with 39 votes
    • Previous CPAN version: 2.150012 was 25 days before
    • Author: RJBS
  6. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260220.001 on 2026-02-20, with 25 votes
    • Previous CPAN version: 20260215.001 was 4 days before
    • Author: BRIANDFOY
  7. Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
    • Version: 9.1.0 on 2026-02-17, with 18 votes
    • Previous CPAN version: 9.0.0 was 23 days before
    • Author: CUKEBOT
  8. Getopt::Long::Descriptive - Getopt::Long, but simpler and more powerful
    • Version: 0.117 on 2026-02-19, with 58 votes
    • Previous CPAN version: 0.116 was 1 year, 1 month, 19 days before
    • Author: RJBS
  9. MIME::Lite - low-calorie MIME generator
    • Version: 3.038 on 2026-02-16, with 35 votes
    • Previous CPAN version: 3.037 was 5 days before
    • Author: RJBS
  10. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260220 on 2026-02-20, with 44 votes
    • Previous CPAN version: 5.20260119 was 1 month, 1 day before
    • Author: BINGOS
  11. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.1 on 2026-02-14, with 13 votes
    • Previous CPAN version: v2.0.0
    • Author: SANKO
  12. Net::Server - Extensible Perl internet server
    • Version: 2.018 on 2026-02-18, with 34 votes
    • Previous CPAN version: 2.017 was 8 days before
    • Author: BBB
  13. Resque - Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.
    • Version: 0.44 on 2026-02-21, with 42 votes
    • Previous CPAN version: 0.43
    • Author: DIEGOK
  14. SNMP::Info - OO Interface to Network devices and MIBs through SNMP
    • Version: 3.975000 on 2026-02-20, with 40 votes
    • Previous CPAN version: 3.974000 was 5 months, 8 days before
    • Author: OLIVER
  15. SPVM - The SPVM Language
    • Version: 0.990134 on 2026-02-20, with 36 votes
    • Previous CPAN version: 0.990133
    • Author: KIMOTO
  16. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000162 on 2026-02-20, with 28 votes
    • Previous CPAN version: 1.000161 was 8 months, 9 days before
    • Author: EXODIST
  17. WebService::Fastly - an interface to most facets of the [Fastly API](https://www.fastly.com/documentation/reference/api/).
    • Version: 14.00 on 2026-02-16, with 18 votes
    • Previous CPAN version: 13.01 was 2 months, 6 days before
    • Author: FASTLY