Add SAVEt_PADSV; fixes memory leak in pp_methstart

The previous commit here (6b450442) intended to fix `pp_methstart` to
allow refaliasing operations on fields. The way it did this had the
unintended side-effect of no longer undoing the `SvREFCNT_inc` applied
to each field, and thus caused every method call to leak SV references
to every field it captured.

This change fixes that by introducing a new savestack operation,
`SAVEt_PADSV`. This operation saves the current SV in a pad slot, then
on scope exit restores it back by first decrementing the reference count
of *whatever SV* is now found in that pad slot. This was the original
bug in `pp_methstart` that refaliasing into fields highlighted and the
previous commit intended to fix.

In addition to being a bugfix, this new savestack operation is more
efficient than the previous code, as it combines into a single operation
the two behaviours which previously required two separate ones
(SAVEt_SPTR + SAVEt_CLEARSV).

Originally published at Perl Weekly 763

Hi there!

While we are still low on articles, we had a good start in the WhatsApp group I mentioned two weeks ago. People introduced themselves and there were some light conversations. You are welcome to join us and write a few words about yourself.

There are also a number of Perl-related events on the horizon in Paris and Berlin, as well as the virtual event I organize.

Finally, I published the Code Maven Academy site, where there are already 140 hours of videos, including 30 hours related to Perl. I'll keep recording these during live events, and participants of my events will also get a discount coupon.

Enjoy your week!

--
Your editor: Gabor Szabo.

Announcements

Perl 5.42.1 is now available!

'We are pleased to announce version 42.1, the first maintenance release of version 42 of Perl 5.': Perldelta

Articles

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

Beautiful Perl feature : fat commas, a device for structuring lists

Beautiful Perl feature: trailing commas

More dev.to articles on beautiful Perl features

A meta-article about the series.

Discussion

Protocol Buffers (Protobuf) with Perl

Perl

This week in PSC (216) | 2026-03-02

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 364

Welcome to a new week with a couple of fun tasks "Decrypt String" and "Goal Parser". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 363

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "String Lie Detector" and "Subnet Sheriff" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Sheriff Detector

The post offers a clear and elegant walkthrough of solving two interesting problems using Raku. It stands out for its well-explained code, practical examples, and thoughtful use of language features like subsets, parsing, and bitwise operations.

Lying Sheriffs

The article provides a clear and well-structured exploration of the challenge, combining thoughtful algorithmic reasoning with an elegant implementation. The use of Perl and PDL demonstrates both efficiency and creativity, making the solution not only correct but also technically insightful. Overall, it's an excellent example of concise problem analysis paired with expressive code.

Perl Weekly Challenge 363

The post presents a clean and well-reasoned solution to the Perl Weekly Challenge, with concise Perl code and a clear explanation of the underlying logic. The approach is methodical and easy to follow, demonstrating solid problem-solving and thoughtful handling of edge cases.

I Don't Lie, Sheriff!

The post demonstrates a clean and thoughtful Perl implementation, with clear logic and well-structured code. The approach effectively handles both the self-referential string validation and the subnet-membership check, showing careful attention to correctness and readability.

I Shot The Subnet


The post presents a clear and engaging walkthrough of the challenge, combining solid problem decomposition with readable Perl implementations. The explanation of the approach is practical and easy to follow, while the multi-language comparisons add extra technical value for readers exploring different idioms. Overall, it's a well-structured and insightful solution write-up.

Lies and lies within

The write-up presents a clear and methodical approach to solving the Perl Weekly Challenge, with well-structured code and helpful explanations of the reasoning behind the solution. The implementation is clean and idiomatic Perl, making the logic easy to follow and reproduce. Overall, it's a thoughtful and technically solid exploration of the problem.

The Weekly Challenge - 363

The write-up provides a clear and well-structured solution to the challenge, with careful input validation and readable Perl code that emphasizes robustness. The step-by-step logic and defensive programming style make the implementation easy to understand and reliable.

The Weekly Challenge #363

The blog presents a thorough and thoughtfully structured solution to the Perl Weekly Challenge, combining clear reasoning with well-documented Perl code. The modular design and detailed explanations make the logic easy to follow while demonstrating solid engineering discipline.

Stringy Sheriff

The post offers a clear and thoughtful walkthrough of solving the challenge with practical reasoning and well-structured code. Roger nicely explains the approach step-by-step, making the solution easy to follow while highlighting useful string-processing techniques.

The subnet detector

The post provides a clear and practical walkthrough of both tasks from The Weekly Challenge, with well-structured solutions in Python and Perl. The explanations highlight useful techniques such as regex parsing, handling UTF-8 characters, and leveraging networking libraries like Python's ipaddress and Perl's Net::IP.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Perl Maven online: Code-reading and Open Source contribution

March 10, 2026

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Thank you Team PWC for your continuous support and encouragement.
Welcome to Week #364 of The Weekly Challenge.

Perl 5.42.1 is now available!

r/perl

Import perl5421delta.pod

Perl commits on GitHub
Update Module-CoreList with data for 5.42.1

Tick off 5.42.1

Perl commits on GitHub

Add epigraph for 5.42.1

Perl commits on GitHub

Weekly Challenge: The subnet detector

dev.to #perl

Weekly Challenge 363

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, CoPilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: String Lie Detector

Task

You are given a string.

Write a script that parses a self-referential string and determines whether its claims about itself are true. The string will make statements about its own composition, specifically the number of vowels and consonants it contains.

My solution

This was relatively straightforward in Python. I took the following steps:

  1. Use a regular expression to extract the necessary parts of the input_string, and store this as the value match.
  2. Count the number of vowels and consonants in the first value and store them as vowel_count and const_count.
  3. Use the word2num module to convert the numbers in input_string to integers, stored as expected_vowel and expected_const.
  4. Compare the counted and expected values, and return the result.

import re
from word2num import word2num

def string_lie_detector(input_string: str) -> bool:
    match = re.match(
        r"(\w+) . (\w+) vowels? and (\w+) consonants?", input_string)
    if not match:
        raise ValueError("Input string not in expected format")

    vowel_count = 0
    const_count = 0
    for c in match.group(1).lower():
        if c in "aeiou":
            vowel_count += 1
        else:
            const_count += 1

    expected_vowel = word2num(match.group(2))
    expected_const = word2num(match.group(3))

    return vowel_count == expected_vowel and const_count == expected_const

The Perl solution is a little more complex. Maybe my Google-fu isn't up to scratch (and I don't use Copilot when working on solutions), but there doesn't appear to be a CPAN module that will convert words into numbers. As this is only a coding exercise, I have a hash called %word2num that maps words to numbers (from zero to twenty).

The next problem is that four of the examples use a long dash as the separator. This is a UTF-8 character. The result of perl -E 'say length("—")' is 3. After numerous searches of the Internet, it turns out I need to include use utf8::all in the code. With this change, I get the expected result of 1.

The rest of the code follows the same logic as the Python solution.

use utf8::all;

sub main ($input_string) {
    my %word2num = (qw/
        zero 0 one 1 two 2 three 3 four 4 five 5 six 6 seven 7 eight 8
        nine 9 ten 10 eleven 11 twelve 12 thirteen 13 fourteen 14
        fifteen 15 sixteen 16 seventeen 17 eighteen 18 nineteen 19 twenty 20
    /);

    my ( $word, $v, $c ) =
      ( $input_string =~ /(\w+) . (\w+) vowels? and (\w+) consonants?/ );

    if ( !$word ) {
        die "Input string not in expected format\n";
    }

    my $vowel_count = 0;
    my $const_count = 0;
    foreach my $c ( split //, lc($word) ) {
        if ( index( "aeiou", $c ) == -1 ) {
            $const_count++;
        }
        else {
            $vowel_count++;
        }
    }

    my $expected_vowel = $word2num{ lc $v } // die "Don't know what $v is\n";
    my $expected_const = $word2num{ lc $c } // die "Don't know what $c is\n";

    my $truth =
      ( $vowel_count == $expected_vowel and $const_count == $expected_const );
    say $truth ? 'true' : 'false';
}

Examples

There was an issue with the examples, and I raised a pull request to fix it.

$ ./ch-1.py "aa — two vowels and zero consonants"
True

$ ./ch-1.py "iv — one vowel and one consonant"
True

$ ./ch-1.py "hello - three vowels and two consonants"
False

$ ./ch-1.py "aeiou — five vowels and zero consonants"
True

$ ./ch-1.py "aei — three vowels and zero consonants"
True

Task 2: Subnet Sheriff

Task

You are given an IPv4 address and an IPv4 network (in CIDR format).

Write a script to determine whether both are valid and the address falls within the network. For more information see the Wikipedia article.

My solution

This one was the easier of the two to complete. Maybe because I have worked at many ISPs in the past :-)

Python has the ipaddress module which makes it easy to confirm if an IPv4 address is in a particular IP address block.

I use a try/except block to handle situations (like the second example) where the IP address or net block is invalid. This follows the Python philosophy of Easier to Ask for Forgiveness than Permission.

import ipaddress

def subnet_sheriff(ip_addr: str, domain: str) -> bool:
    try:
        return ipaddress.IPv4Address(ip_addr) in ipaddress.IPv4Network(domain)
    except ipaddress.AddressValueError:
        return False

Perl has the Net::IP module on CPAN, which provides similar functionality. If the IP address or net block is invalid, the variable will be undef, and the else block will be used.

use Net::IP;

sub main ( $ip_addr, $domain ) {
    my $addr = Net::IP->new($ip_addr);
    my $block = Net::IP->new($domain);
    if ( $addr and $block ) {
        my $overlaps = ( $addr->overlaps($block) != $IP_NO_OVERLAP );
        say $overlaps  ? 'true' : 'false';
    }
    else {
        say 'false';
    }
}

Examples

$ ./ch-2.py 192.168.1.45 192.168.1.0/24
True

$ ./ch-2.py 10.0.0.256 10.0.0.0/24
False

$ ./ch-2.py 172.16.8.9 172.16.8.9/32
True

$ ./ch-2.py 172.16.4.5 172.16.0.0/14
True

$ ./ch-2.py 192.0.2.0 192.0.2.0/25
True

$ ./ch-2.py 1.1.1.1 10.0.0.0/8
False

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

blogs.perl.org

Updated wikis are available now from my Wiki Haven:


  • Perl Wiki & JSTree style V 1.41

  • CSS and Javascript Wiki V 1.03

  • Debian Wiki V 1.12

  • Digital Security Wiki V 1.21

  • Mojolicious Wiki V 1.15

  • Symbolic Language Wiki V 1.19


And see the 'News flash: 7 Mar 2026' for why Symbolic.Language.Wiki is now on savage.net.au.

As you know, The Weekly Challenge focuses primarily on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also been contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Clone - recursively copy Perl datatypes
    • Version: 0.48 on 2026-03-02, with 33 votes
    • Previous CPAN version: 0.48_07 was 6 days before
    • Author: ATOOMIC
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260301.001 on 2026-03-01, with 25 votes
    • Previous CPAN version: 20260228.001
    • Author: BRIANDFOY
  3. Date::Manip - Date manipulation routines
    • Version: 6.99 on 2026-03-02, with 20 votes
    • Previous CPAN version: 6.98 was 9 months before
    • Author: SBECK
  4. DateTime::TimeZone - Time zone object base class and factory
    • Version: 2.67 on 2026-03-05, with 22 votes
    • Previous CPAN version: 2.66 was 2 months, 25 days before
    • Author: DROLSKY
  5. Devel::Cover - Code coverage metrics for Perl
    • Version: 1.52 on 2026-03-07, with 104 votes
    • Previous CPAN version: 1.51 was 7 months, 11 days before
    • Author: PJCJ
  6. ExtUtils::MakeMaker - Create a module Makefile
    • Version: 7.78 on 2026-03-03, with 64 votes
    • Previous CPAN version: 7.77_03 was 1 day before
    • Author: BINGOS
  7. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260306 on 2026-03-06, with 37 votes
    • Previous CPAN version: 1.20260301 was 5 days before
    • Author: MSIMERSON
  8. Module::Build::Tiny - A tiny replacement for Module::Build
    • Version: 0.053 on 2026-03-03, with 16 votes
    • Previous CPAN version: 0.052 was 9 months, 22 days before
    • Author: LEONT
  9. Number::Phone - base class for Number::Phone::* modules
    • Version: 4.0010 on 2026-03-06, with 24 votes
    • Previous CPAN version: 4.0009 was 2 months, 27 days before
    • Author: DCANTRELL
  10. PDL - Perl Data Language
    • Version: 2.103 on 2026-03-03, with 101 votes
    • Previous CPAN version: 2.102
    • Author: ETJ
  11. SPVM - The SPVM Language
    • Version: 0.990141 on 2026-03-06, with 36 votes
    • Previous CPAN version: 0.990140
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.89 on 2026-03-05, with 18 votes
    • Previous CPAN version: 2.88 was 9 days before
    • Author: MANWAR
  13. Sys::Virt - libvirt Perl API
    • Version: v12.1.0 on 2026-03-03, with 17 votes
    • Previous CPAN version: v12.0.0 was 1 month, 18 days before
    • Author: DANBERR
  14. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.68 on 2026-03-02, with 20 votes
    • Previous CPAN version: 0.67
    • Author: CHANSEN
  15. X11::korgwm - a tiling window manager for X11
    • Version: 6.0 on 2026-03-07, with 14 votes
    • Previous CPAN version: 5.0 was 1 year, 1 month, 15 days before
    • Author: ZHMYLOVE
  16. Zonemaster::Engine - A tool to check the quality of a DNS zone
    • Version: 8.001001 on 2026-03-04, with 35 votes
    • Previous CPAN version: 8.001000 was 2 months, 16 days before
    • Author: ZNMSTR

Protocol Buffers (Protobuf) with Perl

blogs.perl.org

I'm hoping to reach anyone using Protocol Buffers in Perl, soliciting their experiences and best practices.

A Googler started soliciting help on an official set of bindings just this year, which is great!

There is Google::ProtocolBuffers, which is now 10 years old - I suspect it is not a good choice.

Google::ProtocolBuffers::Dynamic seems like the best choice right now, but won't compile for me on Debian Trixie. I haven't dug too deep into why, but upb seems to be on a branch, is old, and fails to compile.

A colleague recently, just for fun, had Claude create a pure-Perl library that passes all the tests, and dropped it on GitHub.

I ask as I have been experimenting with protoconf, which builds on protocol buffers.

Related: I do like Thrift a lot, and it seems to be maintained, which is nice, but it seems to have failed to gain traction.


Recently I had an odd problem that I thought was related to caching.

While investigating the issue, I noticed that a Perl CGI script using query_form to build a set of parameters produces them in varying order.

I think this is due to Perl's hash key randomization, which causes hash keys to be enumerated in a different order on each run.

As HTTP caching considers different URLs to be different objects, I'd like to have some consistent ordering of the query parameters.

How could I do that?

Code sketch

#...
my $url = URI->new($query->url());
my $url_form;
#...
$url->path($url->path() . 'something');
$url_form = $url->clone();
#...
$url_form->query_form(
    {
        (PN_API_V1_FUNCTION) => API_V1_FN_SEND_FORM,
        (PN_USER_MODE) => $params_ref->{(PN_USER_MODE)},
    });
#...
f(..., $url_form->as_string(), ...)
#...
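One answer is simply to impose an order yourself. Here is a minimal sketch of the idea in Python; in Perl the equivalent would be passing query_form a key-sorted list of key/value pairs instead of a hash, since query_form preserves the order of a pair list:

```python
# Sketch: produce a deterministic query string by sorting the
# parameters before encoding. The same idea applies to Perl's URI:
# pass query_form a sorted list of key/value pairs instead of a
# hash (whose traversal order varies between runs).
from urllib.parse import urlencode

def stable_query(params: dict) -> str:
    # sorted() fixes the pair order, so equal dicts always encode
    # to the same string -- cache-friendly URLs
    return urlencode(sorted(params.items()))

print(stable_query({"b": 2, "a": 1, "c": 3}))  # a=1&b=2&c=3
```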

The Underbar, episode 9: Olaf Kolkman (part 1)

r/perl

The Underbar, episode 7: CPAN Security Group

r/perl

This is a small article about a pattern I’ve made to automatically ignore filenames for autocompletion.

In zsh you can use CORRECT_IGNORE_FILE to ignore files for spelling corrections (or autocorrect for commands). While handy, it is somewhat limited, as it is global. I wanted to ignore files only for git and not for other commands, but I haven't found a way to target only git without making a wrapper around git (which I don't want to do).

The "Beautiful Perl features" series on dev.to continues!

Since my last announcement, the following articles have been added:

I'm still hoping to attract interest from people from other programming cultures, but so far most comments came from people already in the Perl community. Let's see what the future brings us!

The more I investigate various programming features, the more I'm impressed by the Perl vision: the initial design and later evolution into Perl 5 were incredibly innovative and coherent. Raku is even more impressive, but that's another story. Regarding Perl, I am tired of reading comments on so many platforms that the language is "ugly" and "write-only" -- this is not true! If this dev.to series can help to reverse the trend, I will be happy :-)

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

Today's topic is a Perl construct called fat comma, which is quite different from the trailing commas discussed in the last post.

Fat comma: an introduction

A fat comma in Perl is a construct that doesn't involve a typographic comma! Visually it consists of an expression followed by an arrow sign => and another expression. This is used in many contexts, the most common being for initializing hashes or for passing named parameters to subroutines:

my %rect = (x      => 12,
            y      => 34,
            width  => 20,
            height => 10);
draw_shape(kind   => 'rect', 
           coords => \%rect,
           color  => 'green');

A fat comma is semantically equivalent to a comma; the only difference with a regular comma is purely syntactic: if the left-hand side is a string that begins with a letter or underscore and is composed only of letters, digits and underscores, then that string doesn't need to be enclosed in quotes. The example above took advantage of this feature, but it could as well have been written:

my %rect = ('x'      => 12,
            'y'      => 34,
            'width'  => 20,
            'height' => 10);
draw_shape('kind'   => 'rect', 
           'coords' => \%rect,
           'color'  => 'green');

or even:

my %rect = ('x', 12, 'y', 34, 'width', 20, 'height', 10);
draw_shape('kind', 'rect', 'coords', \%rect, 'color', 'green');

This last variant has exactly the same technical meaning, but clearly it does not convey the same impression to the reader; so the fat comma is mainly a device for improving code readability.

More general usage

Since Perl does not impose many constraints, fat commas can be used in many other ways than just initializing hashes or passing named parameters to subroutine calls:

  1. they can appear at any place where a list is expected;
  2. they need not be only in pairs: triplets, quadruplets, etc. are allowed;
  3. mixtures of fat commas and regular commas are allowed (and even frequent);
  4. the expression on the left-hand side of a fat comma need not be a string - it can be any value.

Most of these points are excellently illustrated in a collection of examples designed in 2017 by Sinan Ünür. Here is an excerpt from his answer to a StackOverflow question asking when to use fat comma if not for hashes:

Any time you want to automatically quote a bareword to the left of a fat comma:

system ls => '-lh';

or

my $x = [ a => [ 1, 2 ], b => [ 3, 4 ] ];

Any time you think it makes the code easier to see

join ', ' => @data;

Any time you want to say "into":

bless { value => 5 } => $class;

In short, => is a comma, plain and simple. You can use it anywhere you can use a comma. E.g.:

my $z = f($x) => g($y); # invoke f($x) (for its side effects) and g($y)
                        # assign the result of g($y) to $z

Fat commas for domain-specific languages

A number of CPAN modules took advantage of fat commas for designing domain-specific languages (DSLs), exploiting the fact that fat commas can be used liberally for other purposes than just expressing pairs.

Moose

Attribute declarations

Moose is the most well-known object-oriented framework for Perl; it also influenced several competing frameworks. Here is a short excerpt from the synopsis, showing a class declaration:

package Point;
use Moose;

has 'x' => (isa => 'Int', is => 'rw', required => 1);
has 'y' => (isa => 'Int', is => 'rw', required => 1);

This is an example where the fat comma does not introduce a pair of values, but rather a longer list in which the first element (the attribute name) is deliberately emphasized. Technically this x attribute could have been declared as:

  has('x', 'isa', 'Int', 'is', 'rw', 'required', 1);

with exactly the same result, but much less readability. Observe that in addition to the fat comma, the recommended Moose syntax also takes advantage here of two other Perl features, namely:

  1. the fact that a subroutine can be treated like a list operator, without parentheses around the arguments: so the call has 'x' => ... is technically equivalent to has('x' => ...).
  2. the fact that a list within another list is flattened, so the parentheses in 'x' => (isa => 'Int', ...) are technically not necessary; they are present just for stylistic preference.

You may have noticed that the single quotes around the attribute name are technically unnecessary: the x attribute name could go unquoted in

  has x => (isa => 'Int', is => 'rw', required => 1);

Here again it's a matter of stylistic preference; in this context I suppose that the Moose authors wanted to emphasize the difference between the subroutine name has and the string x passed as first argument.

Subtype declarations

Another domain-specific language in Moose is for declaring types. The cookbook has this example:

use Moose::Util::TypeConstraints;
use Locale::US;

my $STATES = Locale::US->new;

subtype 'USState'
    => as Str
    => where {
           (    exists $STATES->{code2state}{ uc($_) }
             || exists $STATES->{state2code}{ uc($_) } );
       };

Here again, fat commas and subroutine calls expressed as list operators were cleverly combined to form an expressive DSL for declaring Moose types.

Mojo

Mojolicious is one of the major Web frameworks for Perl. It uses a domain-specific language for declaring the routes supported by the Web application; here are some excerpts from the documentation:

my $route = $r->get('/:foo');
my $route = $r->get('/:foo' => sub ($c) {...});
my $route = $r->get('/:foo' => sub ($c) {...} => 'name');
my $route = $r->get('/:foo' => {foo => 'bar'} => sub ($c) {...});
my $route = $r->get('/:foo' => [foo => qr/\w+/] => sub ($c) {...});
my $route = $r->get('/:foo' => (agent => qr/Firefox/) => sub ($c) {...});
...
my $route = $r->any(['GET', 'POST'] => '/:foo' => sub ($c) {...});

Through these many variants we see a flexible language for declaring routes, where fat commas are used to visually convey some idea of structure within the lists of arguments. Observe that lines 3 and following are not pairs, but triplets, and that the last line has an arrayref (not a string!) to the left of the fat comma.

A word of caution about the quoting mechanism

Let's repeat the syntactic rule: if the left-hand side of the fat comma is a string that begins with a letter or underscore and is composed only of letters, digits and underscores, then that string doesn't need to be enclosed in quotes. We have seen numerous examples above that relied on this rule for more elegance and readability. One has to be careful, however: builtin functions or user-defined subroutines could inadvertently be interpreted as strings instead of the intended subroutine calls. For example, consider this snippet:

use constant foo => "tac";

sub build_hash {
 return {shift => 123, foo => 456, toe => 789};
}

my $h = build_hash('tic');

One could easily expect that the value of $h is {tic => 123, tac => 456, toe => 789} ... but actually the result is {foo => 456, shift => 123, toe => 789}, because both shift and foo were interpreted here as mere strings instead of subroutine calls. The ambiguity can be resolved easily, either by putting an empty argument list after the subroutine calls, or by enclosing them in parentheses:

sub build_hash {
 return {shift() => 123, foo() => 456, toe => 789};
 # or: return {(shift) => 123, (foo) => 456, toe => 789};
}

Some people would perhaps argue that the Perl interpreter should automatically detect that shift or foo are subroutine names ... but that would introduce too much fragility. The interpreter would then be dependent on the list of builtin Perl functions, and also be dependent on the list of symbols declared at that point in the code; future evolutions on either side could easily break the behaviour. So Perl's design, which blindly applies the syntactic rule formulated above, is much wiser.

Similar constructs in other languages

To my knowledge, no other programming language has a general-purpose comma operator comparable to Perl's fat comma. What is quite common, however, is to have specific syntax for hashes (or "objects" or "dictionaries" or "records", as they are called in other languages), and sometimes specific syntax for named parameters in subroutine calls or method calls. This section explores some of these directions.

JavaScript

JavaScript Objects

The equivalent of a Perl hash is called "object" in JavaScript; it is initialized as follows (example copied from the MDN documentation):

const obj = {
  property1:    value1, // property name may be an identifier
  2:            value2, // or a number
  "property n": value3, // or a string
};

Here the syntax is : instead of =>. Like in Perl, any quoted string can be used as a property name, or a number, or an unquoted string if that string can be parsed as an identifier. What is not allowed, however, is to use an expression on the left-hand side: {(2+2): value} or {compute_name(): value} are syntax errors. The workaround for using expressions as property names is to first create the object, and then assign properties to it:

const obj           = {};
obj[2+2]            = value1;
obj[compute_name()] = value2;
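Worth noting: since ES2015, JavaScript also supports computed property names, which let an expression appear in the key position of an object literal by wrapping it in square brackets:

```javascript
// Computed property names (ES2015): the bracketed expression is
// evaluated and its result becomes the key.
function computeName() {
  return "dynamic";
}

const obj = {
  [2 + 2]: "four",          // key is the string "4"
  [computeName()]: "value", // key is "dynamic"
};
```

So the assign-after-creation workaround is mainly needed when targeting pre-ES2015 engines.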

Named parameters

JavaScript has no direct support for passing named parameters to subroutines; however there is of course an indirect way, which is to pass an object to the function:

function show_user(u) {
  return `${u.firstname} ${u.lastname} has id ${u.id}`;
}
console.log(show_user({id: 123, firstname:"John", lastname:"Doe"})); 

Recent versions of JavaScript have a more sophisticated way of exploiting the object received as a parameter: rather than grabbing successive properties from the object, the receiving function can instead use object destructuring to extract the values into local lexical variables:

function show_user_v2({firstname, lastname, id}) {
  return `${firstname} ${lastname} has id ${id}`;
}
console.log(show_user_v2({id: 123, firstname:"John", lastname:"Doe"})); 

This technique can go even further by supplying default values to the lexical variables - an advanced technique described in the MDN documentation.
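A brief sketch of that technique (show_user_v3 is a hypothetical variant, not from the MDN page): defaults in the destructuring pattern fill in whatever properties the caller omits, and a trailing = {} lets the argument be omitted entirely.

```javascript
// Destructured parameters with per-property defaults; the "= {}"
// default lets the function be called with no argument at all.
function show_user_v3({firstname = "Anonymous", lastname = "N.", id = 0} = {}) {
  return `${firstname} ${lastname} has id ${id}`;
}
```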

Python

Dictionaries

In Python the closest equivalent of a Perl hash is called a "dictionary". Like in JavaScript, dictionaries are initialized with a list of keys and values separated by :, enclosed in curly braces:

point = {'x': 34, 'y': -1}

But unlike in JavaScript or Perl, keys on the left of the : separator are not quoted automatically: they are just ordinary expressions. This requires more typing from the programmer, but makes it possible to use operators or function calls, as in this example:

def double (x):
    return x * 2

obj = {
    'hello' + 'world': 11,
    234:               'foobar',
    double(3):         'doubled',
    }

print(obj) # prints: {'helloworld': 11, 234: 'foobar', 6: 'doubled'}

Keyword arguments

In Python, named parameters are called keyword arguments. The syntax is different from dictionary initializers: the symbol = is used to connect keywords to their values:

draw_line(x1=12, y1=-3, x2=55, y2=66)

Here the left-hand side does not need to be quoted; but it must obey the syntax rules for identifiers, which means for example that strings containing spaces are not eligible.

The syntax of keyword arguments is clearly distinct from the syntax of dictionaries. The two can be combined, however: a dictionary can be unpacked into a list of key-value pairs and passed as arguments to a function:

points = {'x1':1, 'y1':2, 'x2':3, 'y2':4}
draw_line(**points)

But unlike in Perl or JavaScript, if the dictionary contains keys other than those expected by the function, an exception is raised ("got an unexpected keyword argument"). This is beneficial for defensive programming, since the interpreter exerts more control, but to the detriment of flexibility: a dictionary received from an external source (for example a config file or an HTTP request) must be filtered before it can be flattened and passed to the called function.
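
That filtering step can be sketched in Python with the standard-library inspect module, which reports the parameter names a function declares. (The draw_line body below is a stand-in written for this sketch, not code from the article.)

```python
import inspect

def draw_line(x1, y1, x2, y2):
    # Stand-in for the article's draw_line; it just echoes its arguments.
    return (x1, y1, x2, y2)

# A dictionary from an external source, carrying an unexpected key.
config = {'x1': 1, 'y1': 2, 'x2': 3, 'y2': 4, 'color': 'red'}

# draw_line(**config) would raise TypeError ("unexpected keyword argument
# 'color'"), so keep only the keys matching the declared parameters:
allowed = inspect.signature(draw_line).parameters
filtered = {k: v for k, v in config.items() if k in allowed}

print(draw_line(**filtered))  # (1, 2, 3, 4)
```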

PHP

PHP uses the => notation for key-value pairs in associative arrays, like in Perl, but without the automatic quoting feature. Therefore keys must be enclosed in double quotes or single quotes, like in Python.

In addition, PHP also uses the same notation => for anonymous functions, like in JavaScript, except that the fn keyword must also be present.

Here is an example where the two features are combined:

$array1 = ["foo" => "bar", 
           "fun" => fn($x) => fn($y) => $x+$y,
          ];

This is an associative array (like a Perl hash) where the key foo is associated with the value "bar", and the key fun is associated with a function that returns another function. So beware when visually parsing a => in a PHP program!
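
For comparison, here is a sketch of the same shape in Python (not from the article): a dictionary whose fun entry holds a function that returns another function, i.e. a curried adder.

```python
array1 = {
    "foo": "bar",
    # Calling "fun" with x returns a new function that closes over x.
    "fun": lambda x: (lambda y: x + y),
}

print(array1["foo"])      # bar
add = array1["fun"](3)    # binds x = 3 in a closure
print(add(4))             # 7
```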

Wrapping up

The Perl construct of fat commas is very simple, with coherent syntax and semantics, and applicable in a wide range of situations. It helps to write readable code by allowing the programmer to structure lists and emphasize some relations between values in the list. This capability is often used to design domain-specific sublanguages within Perl. A beautiful construct indeed!

About the cover picture

The picture shows the coupling mechanism on an old pipe organ. The French word for this is "accouplement", which in other contexts also means "mating"!

When the mechanism is activated, notes played on the lower keyboard also trigger the notes on the upper keyboard ... which bears some resemblance to the bindings in programming that were discussed in this article.

  1. Since v5.38 some object-oriented features are also implemented in the Perl core; but CPAN object-oriented frameworks like Moose are still heavily used. ↩

  2. See perlsyn: "Declaring a subroutine allows a subroutine name to be used as if it were a list operator from that point forward in the program". ↩

  3. The notation => is also present in JavaScript, but with a meaning totally different from Perl: it is used for arrow function expressions, a compact alternative to traditional function expressions. ↩

In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.

Here is an excerpt of the code that gives an error:

package remote {
__DATA__
print "$ENV{HOME}\n";
}

It fails as shown below:

Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.

I can't seem to find any mis-matched brackets.

By contrast, when I rewrite the same package without the braces, the code works:

package remote;
__DATA__
print "$ENV{HOME}\n";

I'd be grateful, if the experienced folks can highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1 in case that matters.

This week in PSC (216) | 2026-03-02

blogs.perl.org

After two weeks of various people absent on various travels, we got the band back together for a fairly administrivial meeting.

  • Contentious changes freeze has set in, so we will now be monitoring for release blockers. We took stock of the currently open issues and PRs from this release cycle and will be triaging for blockers in the coming weeks.

  • We need release managers for two more dev releases and the final. Paul will probably take over one of the dev releases. We are figuring out the rest.

[P5P posting of this summary]

I am working on a Windows Installshield MSI installer that has Strawberry Perl as a prerequisite. This has existed for several years; I'm trying to modify it to install the latest version of Perl. In the Prerequisite Editor, the command I have specified runs a .bat file that uninstalls previous versions of Perl and runs the latest installer using msiexec.exe. The installation completes successfully, but Installshield reports that the batch command file returned an error. How can I find out what is going on, or at least suppress the Installshield message? This is Installshield 2019, by the way. Here is the batch file:

@echo on
MsiExec.exe /passive /X{0BE917CD-6CE8-1014-9C0C-9680A6A774DD} 
MsiExec.exe /passive /X{8075BCC9-804A-1014-97A8-A0999374D9D1}
MsiExec.exe /norestart /i strawberry-perl-5.42.0.1-64bit.msi  /qb /Le "c:\perl_install.log" 
echo Exit Code is %errorlevel% >> c:\perl_install.log

I don't get the output from the echo command in the log file, either. Strangely, I get a string of Kanji.

Perl đŸȘ Weekly #762 - Perl with MetaCPAN

dev.to #perl

Originally published at Perl Weekly 762

Hi there,

If there's one thing that keeps impressing me in our community, it's the dedication of people like Olaf Alders. Week after week, Olaf keeps refining MetaCPAN, polishing small details and improving the user experience. It's not always flashy work, but when you use MetaCPAN, you can feel it - everything feels smoother, faster, and more reliable. That kind of steady, thoughtful dedication really inspires me.

Speaking of inspiration, Dave Cross recently shared a neat little trick that I think many of us can use. He showed how your README file can be turned into a static website - yes, the README you already have for your module! The article on Dev.to is called "Your README is already a website" and it's a fun, practical reminder that sometimes the tools we already have can do more than we think. I love seeing simple ideas like that applied in clever ways.

On my side of things, I've been quietly wrestling with some longstanding issues in DBIx::Class. Not only that, but DBIx::Class::Async shares the same quirks, so it's been double the fun. I managed to fix a few problems already, but some issues are tied to PostgreSQL-specific behavior. That turned out to be tricky because most of the existing tests run on SQLite - easy to spin up, but not the same as PostgreSQL. I stumbled upon Test::PostgreSQL, which looked perfect, and then found Test::DBIx::Class, which integrates smoothly with it. I thought, "Great! Just write a use case and done." Ha! Not quite. My Ubuntu 24.04 setup didn't want to play nicely with Test::PostgreSQL - some socket permission issues blocked me entirely. I decided to tinker and ended up creating Test::PostgreSQL::v2, which worked like a charm. Then came integrating it with Test::DBIx::Class, which needed a trait for my new module. After a little more work, I added Test::DBIx::Class::SchemaManager::Trait::Testpostgresqlv2, and voila - now anyone using Test::DBIx::Class with PostgreSQL can benefit. For me, that meant I finally had a reliable way to reproduce the issue and verify the fix, like in this unit test for DBIx::Class::Async v0.64: t/156-resultset-inflate-datetime.t. Feels good to see it all working.

While I was in tinkering mode, I also revisited Test::Spelling. I always get tripped up on British vs. American English in POD, and I wanted a unit test that could work across all my modules. Initially, I had to manually add stopwords per module - tedious! So I upgraded Test::Spelling and created Test::Spelling::Stopwords. Now I can generate stopwords automatically, and the same script works for every module. It's been a real time-saver, and I'm even using it in DBIx::Class::Async here: t/spell-pod.t

It's funny how small tools and little tweaks can make such a difference. Between Olaf's continuous improvements, Dave's clever README trick, and the testing adventures I've had, I feel reminded of what makes our Perl community special - curiosity, persistence, and a little bit of playful tinkering.

Enjoy the rest of the newsletter and stay safe & healthy.

--
Your editor: Mohammad Sajid Anwar.

Announcements

TPRC call for presentations is open!

The Perl & Raku Foundation has opened its call for presentations for TPRC 2026, inviting submissions of 20 or 50 minute talks on topics of interest to the Perl and Raku communities. Accepted speakers will receive complimentary conference tickets, with sessions scheduled for June 26–28 in Greenville, SC—an excellent opportunity to share insights and help shape this year's technical programme.

Board Proposes Chris Prather (perigrin) for Membership

The Perl Foundation's board has put forward Chris Prather (perigrin) as a candidate for board membership, highlighting his decades of professional Perl experience and long‑standing community contributions. His vision emphasises strengthening the Foundation's role in uniting Perl and Raku projects, supporting maintainers, and fostering sustainable ecosystem growth.

Articles

Cloud VM Performance / Price comparison 2026

This post offers a timely, data‑driven benchmark of CPU performance versus cost across 7 major cloud providers and 44 VM families, using Perl‑based tooling for reproducible results. The concise summary and practical Docker‑ready benchmark suite make it a valuable reference for developers and architects seeking real‑world insights into cloud compute value.

Automatic cross-platform testing: part 7: 32 bit, again

David Cantrell's latest on automatic cross‑platform testing tackles the perennial challenge of running CI on 32‑bit environments using modern GitHub Actions, showing how to assemble a unified workflow across Unix‑like systems while handling 32‑bit builds. He walks through clever tricks for downloading artifacts and even building a 32‑bit Perl with 32‑bit integers for more thorough testing. It’s a practical, hands‑on guide for anyone keen to broaden test coverage beyond the usual 64‑bit platforms.

Podcast

The Underbar, episode 9: Olaf Kolkman (part 1)

Olaf Kolkman has had a long career in networking and Open Source that led him to be working on Internet Technology, Policy and Advocacy at the Internet Society. In September 2025, we had a long conversation with him. In this first part, we discussed his involvement with Perl, DNSSEC and NLnet Labs.

CPAN

DBD::Mock::Session::GenerateFixtures v1.03

The latest release includes automatic mock data generation for transactional database interactions. That means it's much easier to capture and replay sequences that involve BEGIN WORK, COMMIT, ROLLBACK, and even nested try/catch logic.

App::Test::Generator v0.29

The latest release introduces mutation testing alongside a sleek HTML mutation dashboard, making it easier to see which lines of code survived mutations and where your tests could miss mistakes. Instead of just coverage numbers, you can now ask, "Would a mistake here be caught?" The dashboard highlights affected lines, provides helpful tooltips, and allows detailed per-line inspection.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 363

Welcome to a new week with a couple of fun tasks "String Lie Detector" and "Subnet Sheriff". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 362

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Echo Chamber" and "Spellbound Sorting" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

TWC362

This blog post delivers clean, idiomatic Perl solutions to both parts of TWC 362, with clear logic in the echo_chamber looping and a well-structured number-to-words sorting implementation. The use of Perl's core functions keeps the code readable and efficient, making it a helpful reference for Perl Weekly Challenge enthusiasts.

Spellbound Echo

This post offers a clear and idiomatic Raku solution to the 'Echo Chamber' challenge, showcasing concise use of core language features like map, substr, and the repetition operator. The explanation is practical and easy to follow, making it a great example of writing expressive, efficient Raku code.

Echo Chamber

This write-up for PWC 362 gives a thoughtful and practical exploration of multiple Perl approaches to the 'Echo Chamber' string transformation problem. Bob clearly explains regex, list-mapping, and string-building techniques, offering insights into Perl's expressive power and performance trade-offs. It's a solid, engaging read with useful benchmarks for comparison.

Perl Weekly Challenge: Week 362

This Week 362 post on Braincells.com presents clear, idiomatic Perl solutions to the 'Echo Chamber' and spell-sorting tasks, with concise logic leveraging Perl's core functions for string repetition and custom sorting. The explanations walk through the problem and implementation cleanly, making it accessible even for those new to the weekly challenges. It's a solid technical write-up that showcases effective Perl problem-solving.

Lingua to the rescue!

This write-up on Perl Weekly Challenge 362: Lingua to the rescue! gives a clear and practical set of Perl and Raku solutions, especially for the 'Echo Chamber' string task using Raku's expressive constructs and Perl's repetition operator. The post balances readability with technical depth, making it engaging and informative for developers exploring language features.

Perl Weekly Challenge 362

This post delivers clear, idiomatic Perl solutions to both tasks of Perl Weekly Challenge 362, using expressive constructs like map‑based repetition for Echo Chamber and a well‑structured Schwartzian sort with language‑specific converters for Spellbound Sorting. The explanations make the logic easy to follow and showcase Perl's strengths in string and list processing.

What Sort of Echo?

This post gives a straightforward and well-structured Perl implementation for both parts of Perl Weekly Challenge 362, cleanly illustrating string expansion and English-word sorting logic. The code leverages familiar Perl idioms like map and split for clarity and effectiveness, making it easy to follow for readers interested in Perl string and list processing.

You Have No Choice

This write-up offers clear, practical multi-language solutions to the Perl Weekly Challenge 362 tasks, with nicely explained approaches in Raku, Perl, Python, and Elixir that make the logic easy to follow. Packy balances straightforward implementations with thoughtful commentary, making it a technically solid and engaging post for challenge enthusiasts.

Echo and wordy numbers

This challenge page from Peter presents the Perl Weekly Challenge 362 tasks with clear problem statements for both 'Echo Chamber' and 'Spellbound Sorting'. It provides a solid foundation for exploring string manipulation and sorting by word form, making it a useful resource for practicing concise algorithm design in Perl.

Echo Chamber

This Weekly Challenge 362 post offers a clean, beginner-friendly Perl implementation of the 'Echo Chamber' task, contrasting a straightforward loop approach with a more declarative map-and-join variant. The explanations highlight readable coding practices and clarify the benefits of each style, making it both instructive and approachable for Perl programmers.

Spellbound Sorting

This PWC 362 Part 2 post presents a clear and efficient Perl solution for sorting numbers by their spelled-out word forms using a classic Schwartzian Transform. The explanation shows thoughtful use of Lingua::Any::Numbers for multilingual support and highlights how to avoid repeated conversions for better performance. It's a technically solid and instructive example of Perl's data-processing strengths.

The Weekly Challenge #362

This post presents well‑thought‑out Perl solutions to the Perl Weekly Challenge 362 problems with clear logic and use of idiomatic Perl constructs. The code is structured for readability and correctness, making it a valuable example for anyone exploring challenge‑style problem solving in Perl.

Spellbound Echo

This post delivers a clear and well‑explained exploration of The Weekly Challenge 362 tasks, walking through character‑duplication and spelled‑number sorting logic with readable examples. The author balances practical code with thoughtful commentary, offering valuable insights into expressive string and list manipulation techniques.

The one liners

This post delivers clean, one‑liner solutions in both Python and Perl for the Weekly Challenge 362 tasks, showing concise use of enumeration and string repetition for Echo Chamber and leveraging language‑specific libraries for Spellbound Sorting. Simon's examples and side‑by‑side language comparison make the logic easy to grasp and technically satisfying.

Rakudo

2026.08 Positional Adverbs

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Perl Maven online: Code-reading and Open Source contribution

March 10, 2026

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Thank you Team PWC for your continuous support and encouragement.
Welcome to Week #363 of The Weekly Challenge.

Weekly Challenge: The one liners

dev.to #perl

Weekly Challenge 362

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, CoPilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Echo Chamber

Task

You are given a string containing lowercase letters.

Write a script to transform the string based on the index position of each character (starting from 0). For each character at position i, repeat it i + 1 times.

My solution

Both of this week's solutions are one-liners in Python. For this task, the function is

def echo_chamber(input_string: str) -> str:
    return ''.join(
        letter * pos
        for pos, letter in enumerate(input_string, start=1)
    )

Multiplying a string (letter) by an integer (pos) will repeat the string the specified number of times.

The enumerate function in Python returns the index and the value from an iterable (in this case the characters in a string). A little-used feature of this function is the start parameter, which starts counting at a value other than the default of zero.
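
Both features can be seen in isolation in this small sketch:

```python
# String repetition: multiplying a string by an integer repeats it.
print('ab' * 3)                         # ababab

# enumerate with a start value other than the default of zero:
print(list(enumerate('abc', start=1)))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```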

The Perl solution is a little more verbose, as I build the string a letter at a time. By using ++$cnt, the value is incremented before it is evaluated (as opposed to $cnt++). The x operator will repeat the string the required number of times.

sub main ($input_string) {
    my $output_string = '';

    my $cnt = 0;
    foreach my $letter ( split //, $input_string ) {
        $output_string .= $letter x ++$cnt;
    }

    say $output_string;
}

Examples

$ ./ch-1.py abca
abbcccaaaa

$ ./ch-1.py xyz
xyyzzz

$ ./ch-1.py code
coodddeeee

$ ./ch-1.py hello
heelllllllooooo

$ ./ch-1.py a
a

Task 2: Spellbound Sorting

Task

You are given an array of integers.

Write a script to return them in alphabetical order, in any language of your choosing. Default language is English.

My solution

Thankfully, the perfectly round wheels have already been invented for converting integers into strings. To make this task a little easier on myself, I'm only using English.

Python has the num2words module. The sorted function has the key parameter to determine how the integers should be sorted.

from num2words import num2words

def spellbound_sorting(ints: list[int]) -> list[int]:
    return sorted(ints, key=num2words)

Perl has the Lingua::EN::Numbers module. The sort function in Perl also has built-in features to determine the sort order.

use Lingua::EN::Numbers 'num2en';

sub main (@ints) {
    my @sorted_ints = sort { num2en($a) cmp num2en($b) } @ints;
    say join( ", ", @sorted_ints );
}

Examples

$ ./ch-2.py 6 7 8 9 10
[8, 9, 7, 6, 10]

$ ./ch-2.py -3 0 1000 99
[-3, 99, 1000, 0]

$ ./ch-2.py 1 2 3 4 5
[5, 4, 1, 3, 2]

$ ./ch-2.py 0 -1 -2 -3 -4
[-4, -1, -3, -2, 0]

$ ./ch-2.py 100 101 102
[100, 101, 102]

Cloud VM Performance / Price comparison 2026

blogs.perl.org

If you are using Cloud VMs you might want to check out a CPU performance and price comparison across 7 providers, 44 VM families.

The main benchmark suite used was the Perl-based Benchmark::DKbench. If you want to try it out yourself, on a machine with Docker just run:

docker run -it --rm dkechag/dkbench

I have a script that starts like this:


use strict;
use feature 'say';
use warnings FATAL => 'all';
use autodie ':default';
use Term::ANSIColor;
use Cwd 'getcwd';
use SimpleFlow qw(task say2);
use Getopt::ArgParse;
use File::Basename;
use POSIX 'strftime';
use File::Temp 'tempfile';
use Scalar::Util qw(looks_like_number);
use List::Util qw(min max sum);

which runs fine in Perl 5.42.

I am attempting to create a standalone executable on Linux that includes the Perl interpreter and all relevant modules, so that no additional installation is required.

I've been using PAR::Packer (the pp command), but it's not working:
pp -I /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/ -o x.pepPriML x.pepPriML.pl

which outputs:

Built-in function 'builtin::blessed' is experimental at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/overload.pm line 103.
Perl v5.40.0 required--this is only v5.38.2, stopped at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/builtin.pm line 3.
BEGIN failed--compilation aborted at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/builtin.pm line 3.
Compilation failed in require at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/File/Copy.pm line 14.
BEGIN failed--compilation aborted at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/File/Copy.pm line 14.
Compilation failed in require at /usr/share/perl5/Archive/Zip/Archive.pm line 9.
BEGIN failed--compilation aborted at /usr/share/perl5/Archive/Zip/Archive.pm line 9.
Compilation failed in require at /usr/share/perl5/Archive/Zip.pm line 316.
Compilation failed in require at -e line 236.
Failed to execute temporary parl (class PAR::StrippedPARL::Static) '/tmp/parl1X6c': $?=65280 at /usr/share/perl5/PAR/StrippedPARL/Base.pm line 77, <DATA> line 1.
/usr/bin/pp: Failed to extract a parl from 'PAR::StrippedPARL::Static' to file '/tmp/parlHjpr_2o' at /usr/share/perl5/PAR/Packer.pm line 1216, <DATA> line 1.

Even when I specify Perl 5.38.2 within the script, it cannot compile with pp.

How can I create a standalone executable to have all libraries included with the Perl version?

Episode 9 - Olaf Kolkman (part 1)

The Underbar
Olaf Kolkman has had a long career in open source. In this first part, we discussed his involvement with Perl, DNSSEC and NLnet Labs.

(dlxxxix) 16 great CPAN modules released last week

Niceperl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Amon2 - lightweight web application framework
    • Version: 6.18 on 2026-02-28, with 27 votes
    • Previous CPAN version: 6.17 was 1 day before
    • Author: TOKUHIROM
  2. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.439 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.438 was 1 month, 29 days before
    • Author: KUERBIS
  3. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.031 on 2026-02-25, with 19 votes
    • Previous CPAN version: 1.030 was 20 days before
    • Author: PREACTION
  4. CPAN::Uploader - upload things to the CPAN
    • Version: 0.103019 on 2026-02-23, with 25 votes
    • Previous CPAN version: 0.103018 was 3 years, 1 month, 9 days before
    • Author: RJBS
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260228.001 on 2026-02-28, with 25 votes
    • Previous CPAN version: 20260225.001 was 2 days before
    • Author: BRIANDFOY
  6. DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
    • Version: 4.055 on 2026-02-23, with 67 votes
    • Previous CPAN version: 5.013 was 6 months, 19 days before
    • Author: DVEEDEN
  7. Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
    • Version: v31.0.0 on 2026-02-25, with 20 votes
    • Previous CPAN version: v30.0.0 was 27 days before
    • Author: DORASUN
  8. LWP::Protocol::https - Provide https support for LWP::UserAgent
    • Version: 6.15 on 2026-02-23, with 22 votes
    • Previous CPAN version: 6.14 was 1 year, 11 months, 12 days before
    • Author: OALDERS
  9. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260226 on 2026-02-27, with 36 votes
    • Previous CPAN version: 1.20250805 was 6 months, 21 days before
    • Author: MSIMERSON
  10. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.039000 on 2026-02-28, with 27 votes
    • Previous CPAN version: 2.038000 was 29 days before
    • Author: MICKEY
  11. SPVM - The SPVM Language
    • Version: 0.990138 on 2026-02-28, with 36 votes
    • Previous CPAN version: 0.990137
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.88 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.87 was 3 years, 9 months, 3 days before
    • Author: MANWAR
  13. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000163 on 2026-02-24, with 28 votes
    • Previous CPAN version: 1.000162 was 3 days before
    • Author: EXODIST
  14. Tickit - Terminal Interface Construction KIT
    • Version: 0.75 on 2026-02-27, with 29 votes
    • Previous CPAN version: 0.74 was 2 years, 5 months, 22 days before
    • Author: PEVANS
  15. TimeDate - Date and time formatting subroutines
    • Version: 2.34 on 2026-02-28, with 28 votes
    • Previous CPAN version: 2.34_01
    • Author: ATOOMIC
  16. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.66 on 2026-02-25, with 20 votes
    • Previous CPAN version: 0.65 was 1 day before
    • Author: CHANSEN

TPRC call for presentations is open!

Perl Foundation News

Call for presentations is now open for The Perl and Raku Conference! Submissions will be accepted through March 15, and all presenters that are accepted will get a free ticket to the conference!

All presenters must be present in person at the conference. Speaking dates are June 26-28, 2026. We are accepting talks (either 20 minutes or 50 minutes) on any topic that could be of interest to the Perl and Raku Community. We will also be having something new this year—interactive sessions. Keep an eye out for a description of what that will be and how you might participate.

Go to https://tprc.us/ to learn more, and to find the link for submitting your talk!

Perl: sort of Lambda-terms

Perl questions on StackOverflow

I am sure there is some "Perl magic" that makes my code much shorter.

my %m = ("a" => 1, "b" => 12, "c" => "33");
my $str = "";
for (keys (%m))
{
  $str .= $_ . "=" . $m {$_} . ", ";
}
$str = substr ($str, 0, -2);    # remove last ", "
print $str;         # OUTPUT: a=1, b=12, c=33

Is there some sort of lambda style in Perl to make simple tasks not that "clumsy"?

e.g. $m.keys ().foreach (k,v => $k "=" . $v . ", ").join ()
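
As a point of comparison only (this is Python, not Perl, but it shows the one-liner shape the poster is reaching for; in Perl the usual idiom combines join with map over the keys), and with sorted added because hash key order is unspecified:

```python
m = {"a": 1, "b": 12, "c": "33"}

# Build "key=value" pairs and join them with ", " in one expression.
s = ", ".join(f"{k}={v}" for k, v in sorted(m.items()))
print(s)  # a=1, b=12, c=33
```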

Introduction

Rebase vs merge, which is better? There are a lot of online discussions about these two “workflows”. One camp is pro-rebasing and the other swears by merging. Both sides are stuck in their own thinking and both think they are right. So which is it? In my opinion: neither, or both. Before you get defensive, hear me out. I’ll try to explain this in the following post.

TL;DR

Rebase is a tool used while developing. Merging is a tool used for incorporating your branch into a target branch. Both tools or workflows serve each other in the end.

The Board is proposing Chris Prather (perigrin) for membership on the Board. The Board will vote soon on his appointment.

Below are his answers to the application questions:

Tell us about your technical and leadership experience.

I've been writing Perl professionally for over two and a half decades and contributing to the community for nearly as long—organizing YAPC conferences and building CPAN modules. My recent work has been on large-scale Perl e-commerce systems—millions of lines of code, hundreds of developers, the kind of codebase that reminds you how much critical infrastructure still runs on Perl. I have also been working to migrate irc.perl.org to modern infrastructure—because our community spaces matter as much as our code. Beyond Perl, I've been a Director of Software, run my own consulting practice for a decade, and founded an afterschool science education company. That range has taught me something relevant here: sustainable communities need both technical excellence and intentional cultivation. You can't just "build it and they will come".

If appointed, what is one thing you'd work toward?

Improving the Foundation's capacity to lead. The Foundation is uniquely positioned to help the community navigate hard problems—but influence has to be earned through presence and relationship. I'd like to see us do more to connect the archipelago of Perl and Raku projects, the businesses that rely on them, and the community members that will sustain them into the future. I want us to support businesses that depend on Perl—helping them make the case for continued investment in the Perl ecosystem. I'd like us to think seriously about what it would take to grow the next generation of Perl shops, not just maintain the current crop. More and more of our infrastructure is being supported by fewer and fewer hands—often the same people wearing different hats, organizing each piece in isolation. We need to help provide templates for sustainable projects. That means making it easier for maintainers to find support, share burdens, and bring in new people. Projects should end by choice, not by attrition. The Foundation is the only organization that has the cachet to connect all the bridges.

What is your vision for the Foundation?

The Foundation works best as a quiet enabler—handling legal and financial scaffolding so community members can focus on building things. That should continue. But I think we have underutilized soft power. Using it well means doing the community-building work—earning the standing to shape conversations. I'd like to see us be more intentional about the cultural signals we send. The Foundation's choices—what to fund, who to platform—shape perceptions of what kind of community we're building. We have always been tolerant of the plurality of voices, but sometimes that has gotten overshadowed by some of the more flamboyant voices themselves. We should continue to cultivate community structures that celebrate the voices we want to represent us, not just prune the voices we don't. The Foundation can enable genuine support for needs that Perl-based companies have. We should work to understand and validate those needs, and help the community identify and provide sustainable solutions. By providing templates for organizing projects, finding support, and bringing in new people—the Foundation can better ensure that Perl and Raku are on solid foundations for years to come.

What is your vision for Perl and Raku?

Both Perl and Raku are living languages with vibrant evolutions underway. They both have the same underlying need: an ecosystem that is culturally and economically sustainable. One where businesses are confident enough to invest, newcomers feel welcomed rather than turned away, and ambitious projects find support. The Foundation can leverage its position to help them achieve that future. For Perl, it can help the maintainers connect more strongly with the businesses that rely upon their work to get the kind of feedback they need to ensure we're going in the right direction. For Raku, I'll admit I don't know enough about where the community stands today—and that's exactly the kind of gap it feels like the Foundation should help bridge. I hope to learn more about how we can best support them.

RIP nginx - Long Live Apache

nginx is dead. Not metaphorically dead. Not “falling out of favor” dead. Actually, officially, put-a-date-on-it dead.

In November 2025 the Kubernetes project announced the retirement of Ingress NGINX — the controller running ingress for a significant fraction of the world’s Kubernetes clusters. Best-effort maintenance until March 2026. After that: no releases, no bugfixes, no security patches. GitHub repositories go read-only. Tombstone in place.

And before the body was even cold, we learned why. IngressNightmare — five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated 9.8 critical. Unauthenticated remote code execution. Complete cluster takeover. No credentials required. Wiz Research found over 6,500 clusters with the vulnerable admission controller publicly exposed to the internet, including Fortune 500 companies. 43% of cloud environments vulnerable. The root cause wasn’t a bug that could be patched cleanly - it was an architectural flaw baked into the design from the beginning. And the project that ran ingress for millions of production clusters was, in the end, sustained by one or two people working in their spare time.

Meanwhile Apache has been quietly running the internet for 30 years, governed by a foundation, maintained by a community, and looking increasingly like the adult in the room.

Let’s talk about how we got here.

Apache Was THE Web Server

Before we talk about what went wrong, let’s remember what Apache actually was. Not a web server. THE web server. At its peak Apache served over 70% of all websites on the internet. It didn’t win that position by accident - it won it by solving every problem the early web threw at it. Virtual hosting. SSL. Authentication. Dynamic content via CGI and then mod_perl. Rewrite rules. Per-directory configuration. Access control. Compression. Caching. Proxying. One by one, as the web evolved, Apache evolved with it, and the industry built on top of it.

Apache wasn’t just infrastructure. It was the platform on which the commercial internet was built. Every hosting provider ran it. Every enterprise deployed it. Every web developer learned it. It was as foundational as TCP/IP - so foundational that most people stopped thinking about it, the way you stop thinking about running water.

Then nginx showed up with a compelling story at exactly the right moment.

The Narrative That Stuck

The early 2000s brought a new class of problem - massively concurrent web applications, long-polling, tens of thousands of simultaneous connections. The C10K problem was real and Apache’s prefork MPM - one process per connection - genuinely struggled under that specific load profile. nginx’s event-driven architecture handled it elegantly. The benchmarks were dramatic. The config was clean and minimal, a breath of fresh air compared to Apache’s accumulated complexity. nginx felt modern. Apache felt like your dad’s car.

The “Apache is legacy” narrative took hold and never let go - even after the evidence for it evaporated.

Apache gained mpm_event, bringing the same non-blocking I/O and async connection handling that nginx was celebrated for. The performance gap on concurrent connections essentially closed. Then CDNs solved the static file problem at the architectural level - your static files live in S3 now, served from a Cloudflare edge node milliseconds from your user, and your web server never sees them. The two pillars of the nginx argument - concurrency and static file performance - were addressed, one by Apache’s own evolution and one by infrastructure that any serious deployment should be using regardless of web server choice.
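For reference, switching to the event MPM and tuning it is only a few lines of configuration (the `a2enmod` helpers are Debian/Ubuntu-specific, and the values below are illustrative defaults, not recommendations):

```apache
# On Debian/Ubuntu: a2dismod mpm_prefork && a2enmod mpm_event
<IfModule mpm_event_module>
    StartServers              2
    MinSpareThreads          25
    MaxSpareThreads          75
    ThreadsPerChild          25
    MaxRequestWorkers       150
</IfModule>
```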

But nobody reruns the benchmarks. The “legacy” label outlived the evidence by a decade. A generation of engineers learned nginx first, taught it to the next generation, and the assumption calcified into received wisdom. Blog posts from 2012 are still being cited as architectural guidance in 2025.

What Apache Does That nginx Can’t

Strip away the benchmark mythology and look at what these servers actually do when you need them to do something hard.

Apache’s input filter chain lets you intercept the raw request byte stream mid-flight - before the body is fully received - and do something meaningful with it. I’m currently building a multi-server file upload handler with real-time Redis progress tracking, proper session authentication, and CSRF protection implemented directly in the filter chain. Zero JavaScript upload libraries. Zero npm dependencies. Zero supply chain attack surface. The client sends bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a paid commercial module to get close. Or you write C. Or you route around it to application code and wonder why you needed nginx in the first place.
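The shape of that filter-chain approach can be sketched with mod_perl2's Apache2::Filter streaming API. This is an untested illustration only - the package name, buffer size, and the Redis comment are mine, not the code from the project described above:

```perl
# Hypothetical sketch of a request input filter (mod_perl2).
package My::UploadMeter;

use strict;
use warnings;

use Apache2::Filter ();
use Apache2::Const -compile => qw(OK);

use constant BUFSIZE => 8192;

sub handler {
    my $f = shift;

    # Count body bytes as they stream in, before the response
    # handler ever sees them.
    while ($f->read(my $buf, BUFSIZE)) {
        $f->ctx(($f->ctx // 0) + length $buf);
        # A real implementation might publish $f->ctx to Redis here,
        # keyed by a session ID, so a progress endpoint can report it.
        $f->print($buf);
    }
    return Apache2::Const::OK;
}

1;

# Wired up in httpd.conf with something like:
#   PerlInputFilterHandler My::UploadMeter
```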

Apache’s phase handlers let you hook into the exact right moment of the request lifecycle - post-read, header parsing, access control, authentication, response - each phase a precise intervention point. mod_perl embeds a full Perl runtime in the server with persistent state, shared memory, and pre-forked workers inheriting connection pools and compiled code across requests. mod_security gives you WAF capabilities your “modern” stack is paying a vendor for. mod_cache is a complete RFC-compliant caching layer that nginx reserves for paying customers.

And LDAP - one of the oldest enterprise authentication requirements there is. With mod_authnz_ldap it’s a few lines of config:

AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com

Connection pooling, SSL/TLS to the directory, group membership checks, credential caching - all native, all in config, no code required. With nginx you’re reaching for a community module with an inconsistent maintenance history, writing Lua, or standing up a separate auth service and proxying to it with auth_request - which is just mod_authnz_ldap reimplemented badly across two processes with an HTTP round trip in the middle.
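For comparison, the auth_request arrangement described above looks roughly like this - a hypothetical sketch, with the separate auth service's address invented for illustration:

```nginx
# Every request to /protected/ triggers an internal subrequest,
# which is proxied to a separate authentication service.
location = /_auth {
    internal;
    proxy_pass http://127.0.0.1:9000/check;   # hypothetical LDAP auth shim
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}

location /protected/ {
    auth_request /_auth;
}
```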

Apache Includes Everything You’re Now Paying For

Look at Apache’s feature set and you’re reading the history of web infrastructure, one solved problem at a time. SSL termination? Apache had it before cloud load balancers existed to take it off your plate. Caching? mod_cache predates Redis by years. Load balancing? mod_proxy_balancer was doing weighted round-robin and health checks before ELB was a product. Compression, rate limiting, IP-based access control, bot detection via mod_security - Apache had answers to all of it before the industry decided each problem deserved its own dedicated service, its own operations overhead, and its own vendor relationship.

Apache didn’t accumulate features because it was undisciplined. It accumulated features because the web kept throwing problems at it and it kept solving them. The fact that your load balancer now handles SSL termination doesn’t mean Apache was wrong to support it - it means Apache was right early enough that the rest of the industry eventually built dedicated infrastructure around the same idea.

Now look at your AWS bill. CloudFront for CDN. ALB for load balancing and SSL termination. WAF for request filtering. ElastiCache for caching. Cognito for authentication. API Gateway for routing. Each one a line item. Each one a managed service wrapping functionality that Apache has shipped for free since before most of your team was writing code.

Amazon Web Services is, in a very real sense, Apache’s feature set repackaged as paid managed infrastructure. They looked at what the web needed, looked at what Apache had already solved, and built a business around operating those solutions at scale so you didn’t have to. That’s a legitimate value proposition - operations is hard and sometimes paying AWS is absolutely the right answer. But if you’re running a handful of servers and paying for half a dozen AWS services to handle concerns that Apache handles natively, maybe set the Wayback Machine to 2005, spin up Apache, and keep the credit card in your pocket.

Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that Amazon built a cloud business catching up to him.

So Why Did You Choose nginx?

Be honest. The real reason is that you learned it first, or your last job used it, or a blog post from 2012 told you it was the modern choice. Maybe someone at a conference said Apache was legacy and you nodded along because everyone else was nodding. That’s how technology adoption works - narrative momentum, not engineering analysis.

But those nginx blinders have a cost. And the Kubernetes ecosystem just paid it in full.

The Cost of the nginx Blinders

The nginx Ingress Controller became the Kubernetes default early in the ecosystem’s adoption curve and the pattern stuck. Millions of production clusters. The de facto standard. Fortune 500 companies. The Swiss Army knife of Kubernetes networking - and that flexibility was precisely its undoing.

The “snippets” feature that made it popular - letting users inject raw nginx config via annotations - turned out to be an unsanitizable attack surface baked into the design. CVE-2025-1974 exploited this to achieve unauthenticated RCE via the admission controller, giving attackers access to all secrets across all namespaces. Complete cluster takeover from anything on the pod network. In many common configurations the pod network is accessible to every workload in your cloud VPC. The blast radius was the entire cluster.
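Concretely, the snippet annotations pass raw nginx directives straight into the generated configuration. A benign illustration (resource names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app            # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      # arbitrary nginx directives are injected verbatim here
      more_set_headers "X-Example: 1";
```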

The architectural flaw couldn’t be fixed without gutting the feature that made the project worth using. So it was retired instead.

Here is the part nobody is saying out loud: Apache could have been your Kubernetes ingress controller all along.

The Apache Ingress Controller exists. It supports path and host-based routing, TLS termination, WebSocket proxying, header manipulation, rate limiting, mTLS - everything Ingress NGINX offered, built on a foundation with 30 years of security hardening and a governance model that doesn’t depend on one person’s spare time. It doesn’t have an unsanitizable annotation system because Apache’s configuration model was designed with proper boundaries from the beginning. The full Apache module ecosystem - mod_security, mod_authnz_ldap, the filter chain, all of it - available to every ingress request.

The Kubernetes community never seriously considered it. nginx had the mindshare, nginx got the default recommendation, nginx became the assumed answer before the question was even finished. Apache was dismissed as grandpa’s web server by engineers who had never actually used it for anything hard - and so the ecosystem bet its ingress layer on a project sustained by volunteers and crossed its fingers.

The nginx blinders cost the industry IngressNightmare, 6,500 exposed clusters, and a forced migration that will consume engineering hours across thousands of organizations in 2026. Not because Apache wasn’t available. Because nobody looked.

nginx is survived by its commercial fork nginx Plus, approximately 6,500 vulnerable Kubernetes clusters, and a generation of engineers who will spend Q1 2026 migrating to Gateway API - a migration they could have avoided entirely.

Who’s Keeping The Lights On

Here’s the conversation that should happen in every architecture review but almost never does: who maintains this and what happens when something goes wrong?

For Apache the answer has been the same for over 30 years. The Apache Software Foundation - vendor-neutral, foundation-governed, genuinely open source. Security vulnerabilities found, disclosed responsibly, patched. A stable API that doesn’t break your modules between versions. Predictable release cycles. Institutional stability that has outlasted every company that ever tried to compete with it.

nginx’s history is considerably more complicated. Written by Igor Sysoev while employed at Rambler, ownership murky for years, acquired by F5 in 2019. Now a critical piece of infrastructure owned by a networking hardware vendor whose primary business interests may or may not align with the open source project. nginx Plus - the version with the features that actually compete with Apache on a level playing field - is commercial. OpenResty, the variant most people reach for when they need real programmability, is a separate project with its own maintenance trajectory.

The Ingress NGINX project had millions of users and a maintainership you could count on one hand. That’s not a criticism of the maintainers - it’s an indictment of an ecosystem that adopted a critical infrastructure component without asking who was keeping the lights on.

Three decades of adversarial testing by the entire internet is a security posture no startup’s stack can match. The Apache Software Foundation will still be maintaining Apache httpd when the company that owns your current stack has pivoted twice and been acqui-hired into oblivion.

Long Live Apache

The engineers who dismissed Apache as legacy were looking at a 2003 benchmark and calling it a verdict. They missed the server that anticipated every problem modern infrastructure is still solving, that powered the internet before AWS existed to charge you for the privilege, and that was sitting right there in the Kubernetes ecosystem waiting to be evaluated while the community was busy betting critical infrastructure on a volunteer project with an architectural time bomb in its most popular feature.

Grandpa didn’t just know what he was doing. Grandpa was building the platform you’re still trying to reinvent - badly, in JavaScript, with a vulnerability disclosure coming next Tuesday and a maintainer burnout announcement the Tuesday after that.

The server is fine. It was always fine. Touch grass, update your mental model, and maybe read the Apache docs before your next architecture meeting.

RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his spare time. Missed by the 43% of cloud environments that probably should have asked more questions.

Sources

  • IngressNightmare - CVE details and exposure statistics Wiz Research, March 24, 2025 https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities
  • Ingress NGINX Retirement Announcement Kubernetes SIG Network and Security Response Committee, November 11, 2025 https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/
  • Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory Kubernetes, March 24, 2025 https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/
  • Transitioning Away from Ingress NGINX - Maintainership and architectural analysis Google Open Source Blog, February 2026 https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html
  • F5 Acquisition of nginx F5 Press Release, March 2019 https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops

Disclaimer: This article was written with AI assistance during a long discussion on the features and history of Apache and nginx, drawing on my experience maintaining and using Apache over the last 20+ years. The opinions, technical observations, and arguments are entirely my own. I am in no way affiliated with the ASF, nor do I have any financial interest in promoting Apache. I have been using and benefiting from Apache since 1998 and continue to discover features and capabilities that surprise me even to this day.

Treating GitHub Copilot as a Contributor

Perl Hacks

For some time, we’ve talked about GitHub Copilot as if it were a clever autocomplete engine.

It isn’t.

Or rather, that’s not all it is.

The interesting thing — the thing that genuinely changes how you work — is that you can assign GitHub issues to Copilot.

And it behaves like a contributor.

Over the past day, I’ve been doing exactly that on my new CPAN module, WebServer::DirIndex. I’ve opened issues, assigned them to Copilot, and watched a steady stream of pull requests land. Ten issues closed in about a day, each one implemented via a Copilot-generated PR, reviewed and merged like any other contribution.

That still feels faintly futuristic. But it’s not “vibe coding”. It’s surprisingly structured.

Let me explain how it works.


It Starts With a Proper Issue

This workflow depends on discipline. You don’t type “please refactor this” into a chat window. You create a proper GitHub issue. The sort you would assign to another human maintainer. For example, here are some of the recent issues Copilot handled in WebServer::DirIndex:

  • Add CPAN scaffolding
  • Update the classes to use Feature::Compat::Class
  • Replace DirHandle
  • Add WebServer::DirIndex::File
  • Move render() method
  • Use :reader attribute where useful
  • Remove dependency on Plack

Each one was a focused, bounded piece of work. Each one had clear expectations.

The key is this: Copilot works best when you behave like a maintainer, not a magician.

You describe the change precisely. You state constraints. You mention compatibility requirements. You indicate whether tests need to be updated.

Then you assign the issue to Copilot.

And wait.


The Pull Request Arrives

After a few minutes — sometimes ten, sometimes less — Copilot creates a branch and opens a pull request.

The PR contains:

  • Code changes
  • Updated or new tests
  • A descriptive PR message

And because it’s a real PR, your CI runs automatically. The code is evaluated in the same way as any other contribution.

This is already a major improvement over editor-based prompting. The work is isolated, reviewable, and properly versioned.

But the most interesting part is what happens in the background.


Watching Copilot Think

If you visit the Agents tab in the repository, you can see Copilot reasoning through the issue.

It reads like a junior developer narrating their approach:

  • Interpreting the problem
  • Identifying the relevant files
  • Planning changes
  • Considering test updates
  • Running validation steps

And you can interrupt it.

If it starts drifting toward unnecessary abstraction or broad refactoring, you can comment and steer it:

  • Please don’t change the public API.
  • Avoid experimental Perl features.
  • This must remain compatible with Perl 5.40.

It responds. It adjusts course.

This ability to intervene mid-flight is one of the most useful aspects of the system. You are not passively accepting generated code — you’re supervising it.


Teaching Copilot About Your Project

Out of the box, Copilot doesn’t really know how your repository works. It sees code, but it doesn’t know policy.

That’s where repository-level configuration becomes useful.

1. Custom Repository Instructions

GitHub allows you to provide a .github/copilot-instructions.md file that gives Copilot repository-specific guidance. The documentation for this lives here:

When GitHub offers to generate this file for you, say yes.

Then customise it properly.

In a CPAN module, I tend to include:

  • Minimum supported Perl version
  • Whether Feature::Compat::Class is preferred
  • Whether experimental features are forbidden
  • CPAN layout expectations (lib/, t/, etc.)
  • Test conventions (Test::More, no stray diagnostics)
  • A strong preference for not breaking the public API
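As a concrete illustration, those bullet points might translate into a file like this - a hypothetical sketch, not the actual file from WebServer::DirIndex:

```markdown
# Copilot instructions for this repository

- This is a CPAN distribution: code lives in lib/, tests in t/.
- Minimum supported Perl version: 5.40. Do not raise it.
- Use Feature::Compat::Class for OO code; avoid experimental features.
- Tests use Test::More; keep test output clean (no stray diagnostics).
- Do not change the public API without an explicit instruction.
```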

Without this file, Copilot guesses.

With this file, Copilot aligns itself with your house style.

That difference is impressive.

2. Customising the Copilot Development Environment

There’s another piece that many people miss: you can customise the environment Copilot’s coding agent works in, via a special workflow file called copilot-setup-steps.yml.

You can define a workflow that prepares the environment Copilot works in. GitHub documents this here:

In my Perl projects, I use this standard setup:

name: Copilot Setup Steps

on:
  workflow_dispatch:
  push:
    paths:
      - .github/workflows/copilot-setup-steps.yml
  pull_request:
    paths:
      - .github/workflows/copilot-setup-steps.yml

jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Perl 5.40
        uses: shogo82148/actions-setup-perl@v1
        with:
          perl-version: '5.40'

      - name: Install dependencies
        run: cpanm --installdeps --with-develop --notest .

(Obviously, that was originally written for me by Copilot!)

This does two important things.

Firstly, it ensures Copilot is working with the correct Perl version.

Secondly, it installs the distribution dependencies, meaning Copilot can reason in a context that actually resembles my real development environment.

Without this workflow, Copilot operates in a kind of generic space.

With it, Copilot behaves like a contributor who has actually checked out your code and run cpanm.

That’s a useful difference.


Reviewing the Work

This is the part where it’s important not to get starry-eyed.

I still review the PR carefully.

I still check:

  • Has it changed behaviour unintentionally?
  • Has it introduced unnecessary abstraction?
  • Are the tests meaningful?
  • Has it expanded scope beyond the issue?

I check out the branch and run the tests. Exactly as I would with a PR from a human co-worker.

You can request changes and reassign the PR to Copilot. It will revise its branch.

The loop is fast. Faster than traditional asynchronous code review.

But the responsibility is unchanged. You are still the maintainer.


Why This Feels Different

What’s happening here isn’t just “AI writing code”. It’s AI integrated into the contribution workflow:

  • Issues
  • Structured reasoning
  • Pull requests
  • CI
  • Review cycles

That architecture matters.

It means you can use Copilot in a controlled, auditable way.

In my experience with WebServer::DirIndex, this model works particularly well for:

  • Mechanical refactors
  • Adding attributes (e.g. :reader where appropriate)
  • Removing dependencies
  • Moving methods cleanly
  • Adding new internal classes

It is less strong when the issue itself is vague or architectural. Copilot cannot infer the intent you didn’t articulate.

But given a clear issue, it’s remarkably capable — even with modern Perl using tools like Feature::Compat::Class.


A Small but Important Point for the Perl Community

I’ve seen people saying that AI tools don’t handle Perl well. That has not been my experience.

With a properly described issue, repository instructions, and a defined development environment, Copilot works competently with:

  • Modern Perl syntax
  • CPAN distribution layouts
  • Test suites
  • Feature::Compat::Class (or whatever OO framework I’m using on a particular project)

The constraint isn’t the language. It’s how clearly you explain the task.


The Real Shift

The most interesting thing here isn’t that Copilot writes Perl. It’s that GitHub allows you to treat AI as a contributor.

  • You file an issue.
  • You assign it.
  • You supervise its reasoning.
  • You review its PR.

It’s not autocomplete. It’s not magic. It’s just another developer on the project. One who works quickly, doesn’t argue, and reads your documentation very carefully.

Have you been using AI tools to write or maintain Perl code? What successes (or failures!) have you had? Are there other tools I should be using?


Links

If you want to have a closer look at the issues and PRs I’m talking about, here are some links:

The post Treating GitHub Copilot as a Contributor first appeared on Perl Hacks.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.339 on 2026-02-19, with 50 votes
    • Previous CPAN version: 0.338 was 4 months, 16 days before
    • Author: RJBS
  2. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.04 on 2026-02-19, with 56 votes
    • Previous CPAN version: 10.03 was 30 days before
    • Author: UTASHIRO
  3. App::Netdisco - An open source web-based network management tool.
    • Version: 2.097003 on 2026-02-21, with 834 votes
    • Previous CPAN version: 2.097002 was 1 month, 12 days before
    • Author: OLIVER
  4. App::rdapper - a command-line RDAP client.
    • Version: 1.24 on 2026-02-19, with 21 votes
    • Previous CPAN version: 1.23 was 17 days before
    • Author: GBROWN
  5. CPAN::Meta - the distribution metadata for a CPAN dist
    • Version: 2.150013 on 2026-02-20, with 39 votes
    • Previous CPAN version: 2.150012 was 25 days before
    • Author: RJBS
  6. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260220.001 on 2026-02-20, with 25 votes
    • Previous CPAN version: 20260215.001 was 4 days before
    • Author: BRIANDFOY
  7. Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
    • Version: 9.1.0 on 2026-02-17, with 18 votes
    • Previous CPAN version: 9.0.0 was 23 days before
    • Author: CUKEBOT
  8. Getopt::Long::Descriptive - Getopt::Long, but simpler and more powerful
    • Version: 0.117 on 2026-02-19, with 58 votes
    • Previous CPAN version: 0.116 was 1 year, 1 month, 19 days before
    • Author: RJBS
  9. MIME::Lite - low-calorie MIME generator
    • Version: 3.038 on 2026-02-16, with 35 votes
    • Previous CPAN version: 3.037 was 5 days before
    • Author: RJBS
  10. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260220 on 2026-02-20, with 44 votes
    • Previous CPAN version: 5.20260119 was 1 month, 1 day before
    • Author: BINGOS
  11. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.1 on 2026-02-14, with 13 votes
    • Previous CPAN version: v2.0.0
    • Author: SANKO
  12. Net::Server - Extensible Perl internet server
    • Version: 2.018 on 2026-02-18, with 34 votes
    • Previous CPAN version: 2.017 was 8 days before
    • Author: BBB
  13. Resque - Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.
    • Version: 0.44 on 2026-02-21, with 42 votes
    • Previous CPAN version: 0.43
    • Author: DIEGOK
  14. SNMP::Info - OO Interface to Network devices and MIBs through SNMP
    • Version: 3.975000 on 2026-02-20, with 40 votes
    • Previous CPAN version: 3.974000 was 5 months, 8 days before
    • Author: OLIVER
  15. SPVM - The SPVM Language
    • Version: 0.990134 on 2026-02-20, with 36 votes
    • Previous CPAN version: 0.990133
    • Author: KIMOTO
  16. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000162 on 2026-02-20, with 28 votes
    • Previous CPAN version: 1.000161 was 8 months, 9 days before
    • Author: EXODIST
  17. WebService::Fastly - an interface to most facets of the [Fastly API](https://www.fastly.com/documentation/reference/api/).
    • Version: 14.00 on 2026-02-16, with 18 votes
    • Previous CPAN version: 13.01 was 2 months, 6 days before
    • Author: FASTLY

This is the weekly favourites list of CPAN distributions. Votes count: 53

Week's winner: Linux::Event::Fork (+2)

Build date: 2026/02/21 21:48:43 GMT



Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Data::ObjectDriver - Simple, transparent data interface, with caching
    • Version: 0.27 on 2026-02-13, with 16 votes
    • Previous CPAN version: 0.26 was 3 months, 27 days before
    • Author: SIXAPART
  2. DateTime::Format::Natural - Parse informal natural language date/time strings
    • Version: 1.25 on 2026-02-13, with 19 votes
    • Previous CPAN version: 1.24_01 was 1 day before
    • Author: SCHUBIGER
  3. Devel::Size - Perl extension for finding the memory usage of Perl variables
    • Version: 0.86 on 2026-02-10, with 22 votes
    • Previous CPAN version: 0.86 was 1 day before
    • Author: NWCLARK
  4. Marlin - 🐟 pretty fast class builder with most Moo/Moose features 🐟
    • Version: 0.023001 on 2026-02-14, with 12 votes
    • Previous CPAN version: 0.023000 was 7 days before
    • Author: TOBYINK
  5. MIME::Lite - low-calorie MIME generator
    • Version: 3.037 on 2026-02-11, with 35 votes
    • Previous CPAN version: 3.036 was 1 day before
    • Author: RJBS
  6. MIME::Body - Tools to manipulate MIME messages
    • Version: 5.517 on 2026-02-11, with 15 votes
    • Previous CPAN version: 5.516
    • Author: DSKOLL
  7. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.0 on 2026-02-13, with 13 votes
    • Previous CPAN version: 0.052 was 15 years, 10 months before
    • Author: SANKO
  8. Net::Server - Extensible Perl internet server
    • Version: 2.017 on 2026-02-09, with 34 votes
    • Previous CPAN version: 2.016 was 12 days before
    • Author: BBB
  9. Protocol::HTTP2 - HTTP/2 protocol implementation (RFC 7540)
    • Version: 1.12 on 2026-02-14, with 27 votes
    • Previous CPAN version: 1.11 was 1 year, 8 months, 25 days before
    • Author: CRUX
  10. SPVM - The SPVM Language
    • Version: 0.990130 on 2026-02-13, with 36 votes
    • Previous CPAN version: 0.990129 was 1 day before
    • Author: KIMOTO

Join us for TPRC 2026 in Greenville, SC!

Perl Foundation News

We are pleased to announce the dates of our next Perl and Raku Conference, to be held in Greenville, SC on June 26-28, 2026.  The venue is the same as last year, but we are expanding the conference to 3 days of talks/presentations across the weekend.  One or more classes will be scheduled for Monday the 29th as well. The hackathon will be running continuously from June 25 through June 29—so if you can come early or stay late, there will be opportunities for involvement with other members of the community.

Mark your calendars and save the dates!

Our website, https://www.tprc.us/, has more details, including links to reserve your hotel room and to register for the conference at the early-bird price. Watch for more updates as plans are finalized.

Our theme for 2026 is “Perl is my cast iron pan”. Perl is reliable, versatile, durable, and continues to be ever so useful! Just like your favorite cast iron pan! Raku might map to tempered steel: also quite reliable and useful, and with some very attractive updates!

We hope to see you in June!

  • 00:00 Introduction

  • 01:30 OSDC Perl, mention last week

  • 03:00 Nikolai Shaplov NATARAJ, one of our guests author of Lingua-StarDict-Writer on GitLab.

  • 04:30 Nikolai explaining his goals about security memory leak in Net::SSLeay

  • 05:58 What we did earlier. (Low hanging fruits.)

  • 07:00 Let's take a look at the repository of Net::SSLeay

  • 08:00 Trying to understand what happens in the repository.

  • 09:15 A bit of explanation about adopting a module. (co-maintainer, unauthorized uploads)

  • 11:00 PAUSE

  • 15:30 Check the "river" status of the distribution. (reverse dependency)

  • 17:20 You can CC-me in your correspondence.

  • 18:45 Ask people to review your pull-requests.

  • 21:30 Mention the issue with DBIx::Class and how to take over a module.

  • 23:50 A bit about the OSDC Perl page.

  • 24:55 CPAN Dashboard and how to add yourself to it.

  • 27:40 Show the issues I opened asking author if they are interested in setting up GitHub Actions.

  • 29:25 Start working on Dancer-Template-Mason

  • 30:00 clone it

  • 31:15 perl-tester Docker image.

  • 33:30 Installing the dependencies in the Docker container

  • 34:40 Create the GitHub Workflow file. Add to git. Push it out to GitHub.

  • 40:55 First failure in the CI which is unclear.

  • 42:30 Verifying the problem locally.

  • 43:10 Open an issue.

  • 58:25 Can you talk about dzil and Dist::Zilla?

  • 1:02:25 We get back to working in the CI.

  • 1:03:25 Add --notest to make installations run faster.

  • 1:05:30 Add the git configuration to the CI workflow.

  • 1:06:32 Is it safe to use --notest when installing dependencies?

  • 1:11:05 git rebase squashing the commits into one commit

  • 1:13:35 git push --force

  • 1:14:10 Send the pull-request.

Answer

I use xscreensaver, and to forbid user switching there I use:

! in .Xresources
xscreensaver.splash: false
! Set to nothing makes user switching not possible
*.newLoginCommand:

LightDM supports .d override directories; by default they aren’t created on Debian, but upstream documents them clearly. In other words: /etc/lightdm/lightdm.conf.d/ is read.

That means you need to drop in a file, /etc/lightdm/lightdm.conf.d/10-local-overrides.conf, and add the content: