
Announcing Mail::Make: a modern, fluent MIME email builder for Perl, with OpenPGP and S/MIME support

Hi everyone,

After a lot of time spent on this, I am happy to share my new module with you all: Mail::Make.

It is a clean, production-grade MIME email builder for Perl, designed around a fluent interface, streaming serialisation, and first-class support for secure email via OpenPGP (RFC 3156) and S/MIME (RFC 5751).


Why write another email builder?

Perl's existing options (MIME::Lite, Email::MIME, MIME::Entity) are mature but were designed in an earlier era: they require multiple steps to assemble a message, rely on deprecated patterns, or lack built-in delivery and cryptographic signing.

Mail::Make tries to fill that gap:

  • Fluent, chainable API: build and send a message in one expression.
  • Automatic MIME structure: the right multipart/* wrapper is chosen for you based on the parts you add; no manual nesting required.
  • Streaming serialisation: message bodies flow through an encoder pipeline (base64, quoted-printable) to a filehandle without accumulating the full message in memory, which is important for large attachments.
  • Built-in SMTP delivery via Net::SMTP, with STARTTLS, SMTPS (port 465), and SASL authentication (PLAIN / LOGIN) out of the box.
  • OpenPGP signing and encryption (RFC 3156) via gpg / gpg2 and IPC::Run: detached ASCII-armoured signatures, encrypted payloads, sign-then-encrypt, and keyserver auto-fetch.
  • S/MIME signing and encryption (RFC 5751) via Crypt::SMIME: detached signatures (multipart/signed), enveloped encryption (application/pkcs7-mime), and sign-then-encrypt.
  • Proper RFC 2047 encoding of non-ASCII display names and subjects.
  • Ergonomic header handling: a custom module (MM::Table) mirrors the API of the Apache module APR::Table, giving case-insensitive access to mail headers.
  • Minimal dependencies: core Perl modules plus a handful of well-maintained CPAN modules; no XS required for the base functionality.

Basic usage

use Mail::Make;

my $mail = Mail::Make->new
    ->from( 'jack@example.com' )
    ->to( 'alice@example.com' )
    ->subject( 'Hello Alice' )
    ->plain( "Hi Alice,\n\nThis is a test.\n" );

$mail->smtpsend(
    Host     => 'smtp.example.com',
    Port     => 587,
    StartTLS => 1,
    Username => 'jack@example.com',
    Password => 'secret',
);

Plain text + HTML alternative + attachment: the correct multipart/* structure is assembled automatically:

Mail::Make->new
    ->from( 'jack@example.com' )
    ->to( 'alice@example.com' )
    ->subject( 'Report' )
    ->plain( "Please find the report attached.\n" )
    ->html( '<p>Please find the report <b>attached</b>.</p>' )
    ->attach( '/path/to/report.pdf' )
    ->smtpsend( Host => 'smtp.example.com' );

OpenPGP - RFC 3156

Requires a working gpg or gpg2 installation and IPC::Run.

# Detached signature: multipart/signed
my $signed = $mail->gpg_sign(
    KeyId      => '35ADBC3AF8355E845139D8965F3C0261CDB2E752',
    Passphrase => sub { MyKeyring::get('gpg') },
) || die $mail->error;
$signed->smtpsend( %smtp_opts );

# Encryption: multipart/encrypted
my $encrypted = $mail->gpg_encrypt(
    Recipients => [ 'alice@example.com' ],
    KeyServer  => 'keys.openpgp.org',
    AutoFetch  => 1,
) || die $mail->error;

# Sign then encrypt
my $protected = $mail->gpg_sign_encrypt(
    KeyId      => '35ADBC3AF8355E845139D8965F3C0261CDB2E752',
    Passphrase => 'secret',
    Recipients => [ 'alice@example.com' ],
) || die $mail->error;

I have confirmed that all three variants verify correctly in Thunderbird.


S/MIME - RFC 5751

Requires Crypt::SMIME (XS, wraps OpenSSL libcrypto). Certificates and keys are supplied as PEM strings or file paths.

# Detached signature: multipart/signed
my $signed = $mail->smime_sign(
    Cert   => '/path/to/my.cert.pem',
    Key    => '/path/to/my.key.pem',
    CACert => '/path/to/ca.crt',
) || die $mail->error;
$signed->smtpsend( %smtp_opts );

# Encryption: application/pkcs7-mime
my $encrypted = $mail->smime_encrypt(
    RecipientCert => '/path/to/recipient.cert.pem',
) || die $mail->error;

# Sign then encrypt
my $protected = $mail->smime_sign_encrypt(
    Cert          => '/path/to/my.cert.pem',
    Key           => '/path/to/my.key.pem',
    RecipientCert => '/path/to/recipient.cert.pem',
) || die $mail->error;

I have verified that this works in Thunderbird as well. Note that Crypt::SMIME loads the full message into memory, which is fine for typical email but worth knowing about for very large attachments. A future v0.2.0 may add an openssl smime backend for streaming.


Streaming encoder pipeline

The body serialisation is built around a Mail::Make::Stream pipeline: each encoder (base64, quoted-printable) reads from an upstream source and writes to a downstream sink without materialising the full encoded body in memory. Temporary files are used automatically when a body exceeds a configurable threshold (max_body_in_memory_size).
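Mail::Make::Stream's internals are not shown here, but the chunked-encoding idea behind such a pipeline can be sketched with plain MIME::Base64. The subroutine name and chunk size below are illustrative assumptions, not the module's actual API:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use MIME::Base64 qw(encode_base64);

# Hypothetical sketch: stream a body from one filehandle to another in
# base64, one chunk at a time, without holding the whole body in memory.
# Reading in multiples of 57 bytes keeps every chunk boundary aligned
# with a 76-character base64 output line, so chunks concatenate cleanly.
sub stream_base64 {
    my ($in, $out) = @_;
    my $chunk_size = 57 * 1024;            # must be a multiple of 57
    while (read($in, my $buf, $chunk_size)) {
        print {$out} encode_base64($buf);  # encode_base64 adds line breaks
    }
}

# Usage: encode a scalar through in-memory filehandles
open my $in,  '<', \"Hello, streaming world!" or die $!;
open my $out, '>', \my $encoded              or die $!;
stream_base64($in, $out);
print $encoded;
```

The 57-byte alignment is the standard trick for incremental base64; the same shape works for quoted-printable provided soft line breaks are handled at chunk boundaries.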


Companion App

I have also developed a handy companion command-line app, App::mailmake, built on Mail::Make. Some examples:

  • Plain-text message

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Hello" --plain "Hi Bob." \
        --smtp-host mail.example.com

  • HTML + plain text (alternative) with attachment

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Report" \
        --plain-file body.txt --html-file body.html \
        --attach report.pdf \
        --smtp-host mail.example.com --smtp-port 587 --smtp-starttls \
        --smtp-user alice@example.com --smtp-password secret

  • Print the raw RFC 2822 message instead of sending

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Test" --plain "Test" --print

  • OpenPGP detached signature

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Signed" --plain "Signed message." \
        --gpg-sign --gpg-key-id FINGERPRINT \
        --smtp-host mail.example.com

  • OpenPGP sign + encrypt

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Secret" --plain "Encrypted message." \
        --gpg-sign --gpg-encrypt \
        --gpg-key-id FINGERPRINT --gpg-passphrase secret \
        --smtp-host mail.example.com

  • S/MIME signature

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Signed" --plain "Signed message." \
        --smime-sign \
        --smime-cert /path/to/my.cert.pem \
        --smime-key /path/to/my.key.pem \
        --smime-ca-cert /path/to/ca.crt \
        --smtp-host mail.example.com

  • S/MIME sign + encrypt

    mailmake --from alice@example.com --to bob@example.com \
        --subject "Secret" --plain "Encrypted." \
        --smime-sign --smime-encrypt \
        --smime-cert /path/to/my.cert.pem \
        --smime-key /path/to/my.key.pem \
        --smime-recipient-cert /path/to/recipient.cert.pem \
        --smtp-host mail.example.com


Documentation & test suite

The distribution ships with:

  • Full POD for every public method across all modules.
  • A complete unit test suite covering headers, bodies, streams, entity assembly, multipart structure, and SMTP delivery (mock and live).
  • Live test scripts for OpenPGP (t/94_gpg_live.t) and S/MIME (t/95_smime_live.t) that send real messages and verify delivery.
  • A command line utility mailmake to create, sign, and send mail.

What is next?

  • S/MIME streaming backend (openssl smime + IPC::Run) for large messages.

Feedback is very welcome, especially if you test the OpenPGP or S/MIME paths with a mail client other than Thunderbird!

Thanks for reading, and I hope this is useful to our Perl community!

submitted by /u/jacktokyo
cpan/ExtUtils-MakeMaker - Update to version 7.78

7.78    Tue  3 Mar 20:21:53 GMT 2026

    No changes since v7.77_03

7.77_03 Mon  2 Mar 17:32:54 GMT 2026

    Macosx fixes:
    - Unbreak Perl builds

7.77_02 Wed 20 Aug 11:00:32 BST 2025

    Core fixes:
    - Do not copy args when using PERL_MM_SHEBANG=relocatable

7.77_01 Mon 28 Jul 18:46:15 BST 2025

    Enhancements:
    - Support 'class' VERSIONs, like 'package'

    Core fixes:
    - Disable XS prototypes by default

    Test fixes:
    - Make macros portably in basic.t
    - Can't test embedded newlines on VMS in oneliner.t
    - Use LIBS not LDFROM to link against a library in 02-xsdynamic.t

I found an issue in some existing code, but I cannot work out where the problem comes from:

A routine get_params should collect CGI parameters into a hash; for multi-valued parameters the value should be an array reference. However, instead of an array reference I get a string like "ARRAY(0x557313b58220)" as the value of the parameter em.

Here's a code sample, heavily cut down in an attempt to isolate the problem (without success):

#!/usr/bin/perl
use strict;
use warnings;

use utf8;                               # source is Unicode
use Encode qw(decode);
binmode(STDOUT, ":encoding(UTF-8)");    # make STDOUT output in UTF-8
use open qw(:std :encoding(UTF-8));     # encode as Unicode
use CGI qw(-nosticky);
use HTTP::Status qw(:constants);        # HTTP status codes
use URI;
use URI::Escape;

# convert UTF-8 encoded string to Perl's internal encoding
sub from_UTF8($)
{
    my $str = shift;                    # decode will modify the source!
    my $s = $str;
    my $r = defined ($str) ? decode('UTF-8', $str, Encode::FB_CROAK) : $str;

    print "$s -> $r\n";
    return $r;
}

sub get_params($$)
{
    my ($query, $params_ref) = @_;

    %$params_ref = map {
        my @v = map { from_UTF8($_) } $query->multi_param($_);

        #$_ => ($#v > 0 ? [map { from_UTF8($_) } @v] : from_UTF8($v[0]));
        $_ => ($#v > 0 ? \@v : $v[0]);
    } $query->param();
}

my $query = bless(
    {
        'escape' => 1,
        'param' => {
            'em' => [
                [
                 'Unbekannter Parameter "path_info"'
                ]
            ],
            'et' => [
                'Parameterfehler'
            ],
            'es' => [
                '406 '
            ]
        },
        '.charset' => 'ISO-8859-1',
        '.path_info' => '/api-v1',
        '.fieldnames' => {},
        'use_tempfile' => 1,
        '.parameters' => [
            'em',
            'es',
            'et'
        ]
    },
    'CGI'
);
my %params;
get_params($query, \%params);

My expectation was that @v would contain all the values of the parameter currently being processed, and that the hash value would be a scalar for a single value and an array reference otherwise. As the debug output suggests, the error occurs before the string values are converted from UTF-8.

There may be unneeded lines left, but as I have no idea where the problem comes from, I left them there.

Here's an example output:

ARRAY(0x55fcbf919220) -> ARRAY(0x55fcbf919220)
406  -> 406
Parameterfehler -> Parameterfehler

Version information: The code was running on SLES 15 SP6 (perl-5.26.1-150300.17.20.1.x86_64, perl-CGI-4.46-3.3.1.noarch).

Manual Page

As a comment suggested I might have used param incorrectly, here's an example from the manual page:

       For example, the param() routine is used to set a CGI parameter to a
       single or a multi-valued value.  The two cases are shown below:

           $q->param(
               -name  => 'veggie',
               -value => 'tomato',
           );

           $q->param(
               -name  => 'veggie',
               -value => [ qw/tomato tomahto potato potahto/ ],
           );

So it seems I can pass any array reference; the manual says nothing about the number of elements in the array, so I guess it will work with any array of scalars.

Experiment

The manual is not very clear about specifying multiple values for a single parameter, so I looked at the code:

        # If values is provided, then we set it.
        if (@values or defined $value) {
            $self->add_parameter($name);
            $self->{param}{$name}=[@values];
        }

(Maybe this is actually an answer)

So the value will always be an ARRAY reference, and any values are put there as elements. So if the value is an array reference, it will be the only element in that array.
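This also explains the "ARRAY(0x...)" string from the original question: the stored value for em was itself an array reference, and a reference used as a string stringifies to its type and address. A minimal sketch of that behaviour:

```perl
use strict;
use warnings;

# A reference interpolated into a string (or otherwise used as a
# string) stringifies to its type and address.  That is what happened
# to the 'em' parameter: its stored value was itself an array
# reference, so multi_param() returned the reference, and decode()
# simply passed the stringified form through.
my $ref = [ 'Unbekannter Parameter "path_info"' ];
my $str = "$ref";                       # e.g. "ARRAY(0x557313b58220)"
print "$str\n";
print "looks stringified\n" if $str =~ /\AARRAY\(0x[0-9a-f]+\)\z/;
```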

Then I tried specifying multiple values using array syntax instead of using an array reference:

# my $query = CGI->new();
  DB<1> $query->param('foo', qw(bar baz))
  DB<2> x $query->param('foo')
0  'bar'
1  'baz'

Weekly Challenge: It's all about the translation

dev.to #perl

Weekly Challenge 364

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Decrypt String

Task

You are given a string formed by digits and #. Write a script to map the given string to English lowercase characters following the given rules.

  • Characters a to i are represented by 1 to 9 respectively.
  • Characters j to z are represented by 10# to 26# respectively.

My solution

This task calls for regular expressions. Both Python and Perl allow callback functions in the replacement section (i.e. you can call a function to compute the new string).

For the Python solution, I have a (callback) function called replace_digits. It takes a Match object as input and returns a string.

def replace_digits(m: re.Match) -> str:
    c = m.group(0)
    return chr(96 + int(c[:2]))

The variable c holds the matched string: either a single digit, or two digits (10-26) followed by a hash character. Things to note:

  • c[:2] will remove the hash if it is present.
  • int(...) will convert this into a number
  • chr(...) will turn this into a letter of the alphabet. The ASCII code for the letter a is 97.

The main function checks that the input is valid. It then uses re.sub to perform the substitution in the replace_digits function.

def decrypt_string(input_string: str) -> str:
    if not re.search(r'^(1\d#|2[0-6]#|\d)*$', input_string):
        raise ValueError("String not in expected format")

    return re.sub(r'(1\d#|2[0-6]#|\d)', replace_digits, input_string)

The Perl solution follows the same logic, except that the replacement value can be code (not just a function call). This negates the need for a separate function. The substr function is used to remove the hash character.

sub main ($input_string) {
    if ( $input_string !~ /^(1\d#|2[0-6]#|\d)*$/ ) {
        die "String not in expected format\n";
    }

    my $output_string = $input_string;
    $output_string =~ s/(1\d#|2[0-6]#|\d)/chr(96 + substr($1,0,2))/eg;
    say $output_string;
}

In the regular expression, /e indicates that the replacement value is an expression (as opposed to a literal string), and /g means to run the regular expression globally (on all occurrences).

Examples

$ ./ch-1.py 10#11#12
jkab

$ ./ch-1.py 1326#
acz

$ ./ch-1.py 25#24#123
yxabc

$ ./ch-1.py 20#5
te

$ ./ch-1.py 1910#26#
aijz

Task 2: Goal Parser

Task

You are given a string, $str.

Write a script to interpret the given string using Goal Parser. The Goal Parser interprets G as the string G, () as the string o, and (al) as the string al. The interpreted strings are then concatenated in the original order.

My solution

For this task, I use a regular expression to check that the input_string is in the expected format. I then use the replace function to change the string to the required output.

def good_parser(input_string: str) -> str:
    if not re.search(r'^(G|\(\)|\(al\))*$', input_string):
        raise ValueError("Unexpected input received")

    return input_string.replace('()', 'o').replace('(al)', 'al')

Perl doesn't have a replace function, so I use a regular expression to perform the replacement.

sub main ($input_string) {
    if ($input_string !~ /^(G|\(\)|\(al\))*$/) {
        die "Unexpected input received\n";
    }

    my $output_string = $input_string;
    $output_string =~ s/\(\)/o/g;
    $output_string =~ s/\(al\)/al/g;

    say $output_string;
}

Examples

Parentheses have special meaning in bash, so quotes are used to handle this.

$ ./ch-2.pl "G()(al)"
Goal

$ ./ch-2.pl "G()()()()(al)"
Gooooal

$ ./ch-2.pl "(al)G(al)()()"
alGaloo

$ ./ch-2.pl "()G()G"
oGoG

$ ./ch-2.pl "(al)(al)G()()"
alalGoo

Dancer 2.1.0 Released

blogs.perl.org

We're thrilled to announce the release of Dancer2 2.1.0! This release represents a major investment in the health and quality of the project. We've gone deep into the issue tracker and PR backlog, closing out some of our oldest open issues — some dating back years — and significantly grooming both the issue and pull request queues. A big thank you to everyone who contributed.

Bug Fixes

This release addresses a number of long-standing issues:

  • UTF-8 handling improvements: to_json no longer double-encodes UTF-8 (#686), the charset config option is now properly respected (#1124), and UTF-8 in URLs is handled correctly (#1143). To the best of our knowledge, this release fixes all known UTF-8 issues. The default charset for Dancer2 apps is now UTF-8 rather than undefined. You can set an empty charset for your app if needed.
  • Case-insensitive system confusion has been resolved (#863).
  • Plugin DSL keywords are now app-specific (#1449, #1630), preventing cross-application bleed in multi-app setups.
  • Test suite fixes: Resolved content_type errors in t/dsl/send_file.t (#1772), JSON warnings in t/dsl/send_as.t (#1773), and void warnings in t/hooks.t (#1774).
  • Windows compatibility: File uploads are now properly unlinked on Windows (#1777).

Enhancements

  • Strict config mode (#763): Dancer2 can now warn on unknown config keys, with an opt-out available. New apps scaffolded with dancer2 gen will have strict config enabled by default.
  • Path::Tiny migration (#1264): Internal path handling has moved to Path::Tiny for cleaner, more reliable file operations.
  • Unicode::UTF8 support (#1594): When Unicode::UTF8 is available, Dancer2 will use it for faster encoding/decoding.
  • Batch session cookie access (#1073): Retrieve multiple session cookie values at once with the clear method.
  • Fully qualified engine namespaces (#1323): All engines now accept fully qualified package names.
  • Double server header fix (#1664): Dancer2 no longer sends duplicate Server headers.
  • Improved send_as (#1709): send_as now uses the full serializer pipeline, including hooks.
  • Dispatching improvements (PR #1757): Removed the deprecated api_version and improved the dispatching loop.
  • MIME ownership (PR #1758): MIME type handling has been moved to the app level.
  • Package name in logger output (PR #1780): Logger output can now include the calling package name, making multi-module debugging easier.

Documentation

  • Better documentation for the views setting behavior (#1431).
  • Fixed broken links in the manual and tutorial (PR #1749, #1750).
  • Improved config documentation structure (PR #1753).
  • Removed the stale logger keyword from the DSL docs (PR #1762).

Security

  • The "Powered by..." text has been removed from the default error page (PR #1776). Security researchers flagged this as an information disclosure concern — advertising the framework and version in error responses gives potential attackers a head start. The default error page is now clean of framework identifiers.

Thank You

Thanks to all who contributed to this release: Sawyer X, Russell Jenkins, Mikko Koivunalho, Gil Magno, and Sorin Pop.

You can install or upgrade via CPAN:

cpanm Dancer2

Happy Dancing!

Jason/CromeDome

cpan/IO-Compress - Update to version 2.219

  2.219 9 March 2026

    * Fix a few  typos
      Mon Mar 9 12:03:18 2026 +0000
      132941df2afe808142ca525f361d18ebb3a7f882

    * Squash repeated semicolons
      Mon Mar 9 12:02:15 2026 +0000
      3cd732119e823688a26ba7fbf3a99bd0be18bc88

    * Make dependent version checking consistent and update module to version 2.219. Fixes #70
      Mon Mar 9 11:51:37 2026 +0000
      b81179d2db053da540d77af4a61ec449b91fca6e

  2.218 8 March 2026
    * Refresh zipdetails to version 4.005 Sourced from https://github.com/pmqs/zipdetails
      Sun Mar 8 15:07:58 2026 +0000
      13528f57cffecebdddc2f04e5d6dc7590b0acd3e

    * version 2.218
      Sun Mar 8 14:28:31 2026 +0000
      3e0ff70d9dd75749a3c03efb666af8d7adff2a19

    * Add SECURITY.md,  Fixes #69
      Sun Mar 8 14:03:41 2026 +0000
      b95e8a32051aa3d7d69e7e60acf3eb7c0c274a7b
      (Note: Not included in Perl core distribution.)

    * fix spelling typo
      Tue Feb 24 19:47:01 2026 +0000
      ecd61fa10ef1d1b9bc30662bfccf68d75118103a

    * Refresh Changes file
      Sun Feb 1 11:04:49 2026 +0000
      c9f81969de01d90104f7abbaafe09608af92bbf1

    * Update release date in README
      Sun Feb 1 11:02:46 2026 +0000
      ce5ff6ea860443bc27ca180993852eb6ef1a63e8

    * Refresh zipdetails from https://github.com/pmqs/zipdetails
      Sun Feb 1 10:58:21 2026 +0000
      0fa7ba236438fb2e022f9f2bb92caac25f075cc5

IO-Compress: Don't synch SECURITY.md into core

Following precedent set for Config-Perl-V.

cpan/Compress-Raw-Bzip2 - Update to version 2.218

  2.218 8 March 2026

      * Version 2.218
        Sun Mar 8 13:47:17 2026 +0000
        be6054d7ed1536eec4e5cf04117f251fb4389d59

      * Add SECURITY.md Fixes #18
        Sun Mar 8 12:14:23 2026 +0000
        69f9bdebf9fd8b1c5d132e1a423fb7477aa4b347
        (Note: Not included in Perl core distribution.)

      * fix spelling typo
        Tue Feb 24 19:49:03 2026 +0000
        d8b7ef4eb1507575598430c5e23e2cba95ca410e

Compress-Raw-Bzip2: Don't synch SECURITY.md into core

Following precedent set for Config-Perl-V.

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

Today's topic is about two-sided constructs that behave differently when used in list context or in scalar context: this is a feature unique to Perl, often disconcerting for people coming from other programming backgrounds, but very convenient once you are used to it.

The notion of context

Natural languages are full of ambiguities, as can be seen with the well-known sentences "Time flies like an arrow; fruit flies like a banana", where the same words are either nouns or verbs, depending on the context.

Programming languages cannot afford to have ambiguities because the source code must be translatable into machine instructions in a predictable way. Therefore most languages have unambiguous syntactic constructs that can be parsed bottom-up without much complication. Perl has a different approach: some core operators - and also some user-written subroutines - have two-sided meanings, without causing any ambiguity problem because the appropriate meaning is determined by the context in which these operators are used.

Technically Perl has three possible contexts: list context, scalar context and void context; but the void context is far less important than the other two and therefore will not be discussed here. An example of a list context is foreach my $val (1, 2, 3, 5, 8) {...}, where a list of values is expected within the parentheses; an example of a scalar context is if ($x < 10) {...}, where a scalar boolean condition is expected within the parentheses. Some of the common Perl idioms that depend on context are:

  construct                                 | result in list context                  | result in scalar context
  ------------------------------------------+-----------------------------------------+------------------------------------------
  an array variable @arr                    | members of the array                    | number of array elements
  the readline operator <STDIN>             | list of all input lines                 | the next input line
  the glob operator <*.pl>                  | list of all files matching the pattern  | the next file that matches the pattern
  a regex with the /g flag ("global match") | all strings captured by all matches     | boolean result of the next match attempt
  the localtime function                    | seconds, minutes, hours, etc.           | a string like "Fri Mar 6 23:00:12 2026"

These are just a few examples; read perlfunc and perlop for reference to many other context-sensitive constructs.
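Two rows of the table above, the array variable and localtime, can be demonstrated directly:

```perl
use strict;
use warnings;
use feature 'say';

my @arr = ('a', 'b', 'c');

my @copy  = @arr;            # list context: the members themselves
my $count = @arr;            # scalar context: the number of elements
say $count;                  # 3

my @parts = localtime(0);    # list context: (sec, min, hour, mday, mon, ...)
my $stamp = localtime(0);    # scalar context: a human-readable string
say scalar @parts;           # 9
say $stamp;                  # e.g. "Thu Jan  1 00:00:00 1970" (timezone-dependent)
```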

The advantage of having two different but related meanings for the same construct is that it reduces the number of constructs to learn. For example just remember that a regex with a /g flag is a global match, and Perl will "do the right thing" depending on where you use it; so given:

my $regex_p_word = qr( \b   # word boundary
                       p    # letter 'p'
                       \w+  # one or more word letters
                     )x;

you can either write:

my @words_starting_with_p = $text =~ /$regex_p_word/g;

or

while ($text =~ /$regex_p_word/g) {
  do_something_with($&);  # $& contains the matched string
}

Reducing the number of constructs is quite helpful in a rich language like Perl where the number of core functions and operators is large; but of course it requires that programmers are at ease with the notion of context. Perl textbooks put strong emphasis on this aspect of the Perl culture: for example the "Learning Perl" book starts the section on context by saying:

This is the most important section in this chapter. In fact, it’s the most important section in the entire book. In fact, it wouldn’t be an exaggeration to say that your entire career in using Perl will depend upon understanding this section.

Context-sensitive constructs also help make the source code more concise and focused on what the programmer wants to achieve, leaving aside the details. This is convenient when readers just need an overview of the code, for example when deciding whether to adopt a module, or when explaining an algorithm to a business analyst who doesn't know Perl (yes, I did this repeatedly in my career, and it worked well - so don't tell me that Perl is not readable!).

This is not to say that the details can always be ignored; of course the people in charge of maintaining the code need to be aware of all the implications of context-sensitive operations.

Relationship between the list result and the scalar result

For every context-sensitive construct, the results in list context and in scalar context must somehow be related; otherwise it would be incomprehensible. But what would be a sensible relationship between the two contexts? Most Perl core constructs are built along one of those two patterns:

  • the scalar result is a condensed version of the list result, like the @arr or localtime examples in the table above;
  • the scalar result is an iterator on some implicit state, like the <STDIN> or glob examples in the same table.

When the scalar result is a condensed version, more detailed information may nevertheless be obtained by other means: for example, although a regular expression match in scalar context just returns a boolean result, various details about the match (the matched string, its position, etc.) can be retrieved through global variables.

When the scalar result is an iterator, it is meant to be called several times, yielding a different result at each call. Depending on the iterator, a special value is returned at the end to indicate to the caller that the iteration is finished (usually this value is an undef). This concept is quite similar to Python's generator functions or JavaScript's function* construct, except that each of the Perl core operators is specialized for one particular job (iterating on lines in a file, or on files in a directory, or on occurrences of a regex in some text). Such iterators are particularly useful for processing large data, because they operate lazily, one item at a time, without loading the whole data into memory.
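The readline operator illustrates this iterator pattern nicely; an in-memory filehandle makes it easy to try without a real file:

```perl
use strict;
use warnings;

# The readline operator in scalar context is an iterator: each call
# returns the next line, and undef marks the end of the input.
open my $fh, '<', \"one\ntwo\nthree\n" or die $!;
my $count = 0;
while (defined(my $line = <$fh>)) {
    $count++;
    print $line;
}
print "$count lines\n";
```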

As an aside, let us note that unlike Python or JavaScript, Perl does not have a builtin construct for general-purpose iterators; but this is not really needed because iterators can be constructed through Perl's closures, as beautifully explained in the book Higher-Order Perl - quite an ancient book, but essential and still perfectly valid. There are also several CPAN modules that apply these techniques for easier creation of custom iterators; Iterator::Simple is my preferred one.
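As a taste of the closure technique from Higher-Order Perl, here is a minimal hand-rolled iterator (make_iterator is an illustrative name, not part of any module):

```perl
use strict;
use warnings;

# A general-purpose iterator built from a closure: the anonymous sub
# captures @items and an index, returns the next item on each call,
# and undef once the items are exhausted.
sub make_iterator {
    my @items = @_;
    my $i = 0;
    return sub { $i < @items ? $items[$i++] : undef };
}

my $next = make_iterator(qw(alpha beta gamma));
while (defined(my $item = $next->())) {
    print "$item\n";
}
```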

I said that the two patterns just discussed cover most core constructs ... but there is an exception: the range operator .., like the documentation says, is "really two different operators depending on the context", so the meanings in list context and in scalar context are not related to one another. This will be discussed in more detail in a future article.
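As a quick preview of that difference, a sketch of .. in both contexts; the scalar-context meaning shown here is the flip-flop operator:

```perl
use strict;
use warnings;

# List context: .. builds the list of consecutive values.
my @digits = (3 .. 7);                       # (3, 4, 5, 6, 7)
print "@digits\n";

# Scalar context: .. is the flip-flop operator.  It becomes true when
# the left operand first matches and stays true until the right
# operand matches (inclusive), which is handy for extracting a block
# of lines between two markers.
my @lines = ("intro", "BEGIN", "body", "END", "outro");
my @kept;
for (@lines) {
    push @kept, $_ if /BEGIN/ .. /END/;
}
print "@kept\n";                             # BEGIN body END
```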

Writing your own context-sensitive subroutines or methods

Context-sensitive operations are not limited to core constructs: any subroutine can invoke wantarray to know in which context it is called, so that it can adapt its behaviour. But this is only necessary in some very specific situations; otherwise Perl performs an implicit conversion which in most cases is perfectly appropriate and requires no intervention from the programmer - this is described in the next section.

In my own modules the places where I used wantarray were for returning condensed information:

  • in DBIx::DataModel, statement objects have an sql method that in list context returns ($sql, @bind), i.e. the generated SQL followed by the bind values. Here the default Perl conversion to scalar context would return the last bind value, which is of no use to the caller, so the method explicitly returns just $sql when called in scalar context;

  • in Search::Tokenizer, the tokenizer called in list context returns a tuple ($term, length($term), $start, $end, $term_index). When called in scalar context, it just returns the $term.
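A context-sensitive subroutine along those lines can be sketched with wantarray; the subroutine name and SQL below are illustrative, not the actual DBIx::DataModel code:

```perl
use strict;
use warnings;

# Hypothetical sketch: in list context return the SQL string followed
# by its bind values; in scalar context return just the SQL string,
# because the default conversion (the last bind value) would be
# useless to the caller.
sub sql_and_binds {
    my ($sql, @bind) = ('SELECT * FROM t WHERE id = ?', 42);
    return wantarray ? ($sql, @bind) : $sql;
}

my ($sql, @bind) = sql_and_binds();   # list context: SQL + bind values
my $just_sql     = sql_and_binds();   # scalar context: SQL only
print "$just_sql\n";
print "bind: @bind\n";
```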

Implicit conversions

When an expression is not context-sensitive, Perl may perform an implicit conversion to make the result fit the context.

Scalar value in list context

If a scalar result is used in list context, the obvious conversion is to make it a singleton list:

my @array1 = "foo"; # converted to ("foo")

If the scalar is undef or an empty string, this will still be a singleton list, not the same thing as an empty list: so in

my @array2 = undef; # converted to (undef)
my @array3;         # initialized to ()

@array2 is a true value because it contains one element, while @array3 contains no element and therefore is a false value.

List value in scalar context

If a list value is used in scalar context, the initial members of the list are thrown away and only the last value is kept:

my $scalar = (3, 2, 1, 0); # converted to 0

This behaviour is consistent with the comma operator inherited from C.

An array variable is not the same thing as a list value. An array is of course treated as a list when used in list context, but in scalar context it just returns the size of the array (an integer value). So in

my @countdown    = (3, 2, 1, 0);
my $should_start = @countdown ? "yes" : "no";
say $should_start;  # says "yes"

the array holds 4 members and therefore is true in scalar context; by contrast the mere list has value 0 in scalar context and therefore is false:

$should_start = (3, 2, 1, 0) ? "yes" : "no";
say $should_start;  # says "no"

Programming languages without context-sensitive constructs

Since context-sensitivity is a specialty of Perl, how do other programming languages handle similar situations? Simply by providing differentiated methods for each context! Let us look for example at the "global matching" use case, namely getting either a list of all occurrences of a regular expression in a big piece of text, or iterating over those occurrences one at a time.

Global match in JavaScript

In Perl a global match of the shape $text =~ /$regex/g involves a string and a regex bound together by the binding operator =~. JavaScript has no binding operator, so regex matches are performed by method calls on either the string or the regex:

  • the String class has methods:

    • match(), taking a regex as argument, returning an array of all matches;
    • matchAll(), taking a regex as argument, returning an iterator;
    • search(), taking a regex as argument, returning the character index of the first match (and therefore ignoring the /g flag);
  • the RegExp class has methods:

    • exec(), taking a string as argument, returning a "result array" that contains the matched string, substrings corresponding to capture groups, and positional information. When the regex has the /g flag for global match, the exec() method can be called repeatedly, iterating over the successive matches;
    • test(), taking a string as argument, returning a boolean result.

The MDN documentation has a good guide on regular expressions in JavaScript. The purpose here is not to study these methods in detail, but merely to compare with the Perl API: in JavaScript the operations have explicit method names, but they are more numerous. The fact that method names are English words does not dispense with reading the documentation, because it cannot be guessed from the method names alone that match() returns an array and matchAll() returns an iterator!

Global match in Python

Regular expressions in Python do not belong to the core language, but are implemented through the re module in the standard library. Matching operations are performed by calling functions in that module, passing a string and a regex as arguments, plus possibly some other parameters. Functions re.search(), re.match() and re.fullmatch() are variants for performing a single match; for global match, which is the subject of our comparison, there is no /g flag, but there are specific methods:

  • re.findall(), taking a regex, a string and possibly some flags as arguments, returning a list of strings;
  • re.finditer(), also taking a regex, a string and possibly some flags as arguments, returning an iterator yielding Match objects.
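To make the comparison concrete, here is a minimal sketch (not from the article) of how Perl covers both of these operations with a single context-sensitive construct: the same /g match returns all matches in list context, and iterates one match at a time in scalar context:

```perl
use strict;
use warnings;
use feature 'say';

my $text = "cat bat rat";

# List context: all matches at once, like Python's re.findall()
my @all = $text =~ /(\w+at)/g;
say "@all";    # cat bat rat

# Scalar context: one match per call, like re.finditer() or JS exec();
# pos() remembers where the previous match left off
while ( $text =~ /(\w+at)/g ) {
    say "found $1";
}
```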

Conclusion

Thanks to context-sensitive operations, Perl expressions are often very concise and nevertheless convey to the hasty reader an overview of what is going on. Detailed comprehension of course requires an investment in understanding the notion of context, how it is transmitted from caller to callee, and how the callee can decide to give different responses according to the context. Newcomers to Perl may think that the learning effort is greater than in other programming languages ... but we have seen that in the absence of context-sensitive operations, the complexity goes elsewhere, into a greater number of methods or subroutines for handling all the variant situations. So context-sensitivity is definitely a beautiful feature of Perl!
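As a minimal sketch of the callee side mentioned above, a subroutine can consult wantarray to decide what to return in each context (the subroutine name here is invented for illustration):

```perl
use strict;
use warnings;
use feature 'say';

# Context-sensitive subroutine: all matching lines in list context,
# just their count in scalar context (wantarray is true in list context).
sub todo_lines {
    my @found = grep { /TODO/ } @_;
    return wantarray ? @found : scalar @found;
}

my @input = ( "TODO: fix bug", "all done", "TODO: write tests" );
my @lines = todo_lines(@input);    # list context: the lines themselves
my $count = todo_lines(@input);    # scalar context: 2
say "$count lines: @lines";
```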

About the cover picture

This is a side-by-side view of the Victoria-Hall, the main concert hall in Geneva, where the stage holds either a full symphonic orchestra, or just a solo recital. Same place, different contexts!

  1. as stated in the official documentation, wantarray is ill-named and should really be called wantlist ↩


As a beginner I have one or maybe a couple questions about Perl .. ?

r/perl

Hi sub!

Being new to programming languages, I have been looking at Perl, as its stable legacy appeals to me; however, I have a few questions...

As a beginner, there doesn't seem to be one framework that stands out as best for newbies. Is this right, or am I looking in the wrong places?

To pair with that, the tutorials that take you through everything, like those for Rails or Symfony, don't seem to be there. Why is this?

If the language is to be kept alive, I feel beginners are vital to it, so why such a lack of resources for them?

I mean no disrespect, nor am I trying to start arguments. I'm just confused about the way things are.

Thank you.

submitted by /u/Salt_Photograph_1891

I’ve been slowly ramping up my use of Claude for coding issues. I’ve been meaning to write a bit more about how I use it, and had been putting that off until I finished a few things. With some of those done, I thought I’d finally write up some notes on how it’s gone. Over the next little while, I’ll post some actual work I’ve done. Later, I’ll try to write some more general thoughts: other things I might try, what general tactics have felt useful, places where I think things are particularly problematic, and so on.

I started out fairly negative on “agentic coding”, and I still have a lot of opinions, but they now include that (a) coding agents are not going anywhere and (b) the resulting code can be of sufficient quality to be worth using in real work.

Project One: Cassandane Signatures

I work on Cyrus IMAP, an open-source JMAP, IMAP, CalDAV and CardDAV server. Cassandane is the Cyrus test suite’s largest component. It’s a big pile of Perl, around 200k LOC. In general, each test is a separate subroutine stored in its own file. The whole thing has upsides and downsides. One of the smaller, but noticeable, downsides: basically none of that code used subroutine signatures. I try to always use subroutine signatures in new Perl code. I’d begun using them in some new Cassandane code, but it was just a drop in the ocean. I wanted them everywhere, and to be the clear default. The existing “convert subs to use signatures” code munging program I had lying around didn’t cut it, for a variety of boring reasons, including that it didn’t cope with Perl subroutine attributes, which Cassandane uses extensively.
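For readers who haven't used the feature, here is a minimal before/after sketch of what such a conversion does (the subroutine names are invented, not taken from Cassandane):

```perl
use v5.36;    # subroutine signatures are stable from Perl 5.36 on

# Before: the traditional style, unpacking @_ by hand
sub greet_old {
    my ( $name, $greeting ) = @_;
    $greeting //= 'Hello';
    return "$greeting, $name!";
}

# After: a signature declares the parameters (and a default) directly
sub greet ( $name, $greeting = 'Hello' ) {
    return "$greeting, $name!";
}

say greet_old('Cyrus');        # Hello, Cyrus!
say greet( 'Cyrus', 'Hi' );    # Hi, Cyrus!
```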

I wanted to, in one swoop, convert all of Cassandane’s tests to use subroutine signatures. I considered futzing with my old code for this, but then I thought, “This seems like a nice simple job to test out Claude”. I gave Anthropic $20, installed Claude Code, and fired it up.

Claude’s strategy was a lot like mine: rather than go edit every file, it wrote a program that would edit all the files. It was sort of terrible, around 300 lines of code. Later I tried to write my own version. It never quite worked (after five to ten minutes of work, anyway), but it was close, and under 50 lines. But the good news is that Claude’s worked, and then I could delete it. If I were building a program to use and maintain, I would never have accepted that thing. But I didn’t need to. I could run the program and look at the git diff. There wasn’t even a security concern. It all lived in a container.

Claude needed help. Its first go was so-so. Claude couldn’t check its own work because it didn’t know how to use the Docker-driven build-and-test system I use for Cyrus, and so Claude couldn’t run the tests. It could compile-test the tests, though, which went a long way. It iterated for an hour or so. Sometimes I’d hop in and tell it what it was doing wrong, or that it could stop worrying about some issue.

When it was done, I had a diff that was thousands of lines long and touched 1,500+ files. I spent a long time (several shifts of 15 minutes each) reviewing the diff. The diff was so close to perfectly uniform as to be mind-numbing. But it was my job to make sure I wasn’t sending bogus changes to a colleague for review without vetting them first. (After all, had I written my own code-transforming program entirely by hand and run that, I wouldn’t have sent its output along for code review without a careful reading!)

I found some minor bugs and fixed them in separate commits. You can read the whole changeset if you want. You’ll see it’s six commits by me, one by Claude.

If this was the only value I got out of the $20, it would’ve been well worth it, but I went on to get a lot more done on those $20. I’ll write more about some other, more interesting work, over the next few days.


TL;DR

I didn’t like how the default zsh prompt truncation works. My solution, used in my own custom-made prompt (fully supported by promptinit), uses a custom precmd hook to dynamically determine the terminal’s available width.

Instead of blind chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, only removing or shortening non-critical segments in the middle to keep the PS1 clean, contextual, and perfectly single-line. This is done via a so-called “Zig-Zag” pattern, or string splitting on certain delimiters.

This week in PSC (217) | 2026-03-09

blogs.perl.org

All three of us were present for a quick meeting.

  • We discussed the progress of our outreach to some potential new core team members. We will be holding a vote once we hear back from everyone.

  • We noticed that a minor step in the PSC transition was missed during this cycle. We agreed that there needs to be a checklist for the procedure, and we intend to write it up.

  • We started with the release blocker triage, but the meeting was short so we didn’t look at many issues. We have no candidate blockers so far.

[P5P posting of this summary]

TPRC Presentation Coaches Available

Perl Foundation News

The deadline for talks looms large, but assistance awaits!

This year, we have coaches available to help write your talk description, and to support you in developing the talk.

If you have a talk you would like to give, but cannot flesh out the idea before the deadline (March 15th; 6 days from now!), you should submit your bare-bones idea and check "Yes" on "Do you need assistance in developing this talk?".

We have more schedule space for talks than we did last year, and we would love to add new voices and wider topics, but time is of the essence, so go to https://tprc.us/ , and spill the beans on your percolating ideas!

In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.

Here is an excerpt of the code that gives the error:

package remote {
__DATA__
print "$ENV{HOME}\n";
}

which produces the error shown below:

Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.

I can't seem to find any mis-matched brackets.

By contrast, when I rewrite the same package without braces, the code works.

package remote;
__DATA__
print "$ENV{HOME}\n";

I'd be grateful if the experienced folks could highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1, in case that matters.

Perl 🐪 Weekly #763 - Is WhatsApp the new IRC?

dev.to #perl

Originally published at Perl Weekly 763

Hi there!

While we are still low on articles, we had a good start in the WhatsApp group I mentioned two weeks ago. People introduced themselves and there were some light conversations. You are welcome to join us and write a few words about yourself.

There are also a number of Perl related events on the horizon in Paris and Berlin and the virtual event I organize.

Finally I published the Code Maven Academy site where there are already 140 hours of videos including 30 hours related to Perl. I'll keep recording these during live events and participants of my events will also get a discount coupon.

Enjoy your week!

--
Your editor: Gabor Szabo.

Announcements

Perl 5.42.1 is now available!

'We are pleased to announce version 42.1, the first maintenance release of version 42 of Perl 5.': Perldelta

Articles

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

Beautiful Perl feature: fat commas, a device for structuring lists

Beautiful Perl feature: trailing commas

More dev.to articles on beautiful Perl features

A meta-article about the series.

Discussion

Protocol Buffers (Protobuf) with Perl

Perl

This week in PSC (216) | 2026-03-02

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 364

Welcome to a new week with a couple of fun tasks, "Decrypt String" and "Goal Parser". If you are new to the weekly challenge, why not join us and have fun every week? For more information, please read the FAQ.

RECAP - The Weekly Challenge - 363

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "String Lie Detector" and "Subnet Sheriff" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Sheriff Detector

The post offers a clear and elegant walkthrough of solving two interesting problems using Raku. It stands out for its well-explained code, practical examples, and thoughtful use of language features like subsets, parsing, and bitwise operations.

Lying Sheriffs

The article provides a clear and well-structured exploration of the challenge, combining thoughtful algorithmic reasoning with an elegant implementation. The use of Perl and PDL demonstrates both efficiency and creativity, making the solution not only correct but also technically insightful. Overall, it's an excellent example of concise problem analysis paired with expressive code.

Perl Weekly Challenge 363

The post presents a clean and well-reasoned solution to the Perl Weekly Challenge, with concise Perl code and a clear explanation of the underlying logic. The approach is methodical and easy to follow, demonstrating solid problem-solving and thoughtful handling of edge cases.

I Don't Lie, Sheriff!

The post demonstrates a clean and thoughtful Perl implementation, with clear logic and well-structured code. The approach effectively handles both the self-referential string validation and the subnet-membership check, showing careful attention to correctness and readability.

I Shot The Subnet…

The post presents a clear and engaging walkthrough of the challenge, combining solid problem decomposition with readable Perl implementations. The explanation of the approach is practical and easy to follow, while the multi-language comparisons add extra technical value for readers exploring different idioms. Overall, it's a well-structured and insightful solution write-up.

Lies and lies within

The write-up presents a clear and methodical approach to solving the Perl Weekly Challenge, with well-structured code and helpful explanations of the reasoning behind the solution. The implementation is clean and idiomatic Perl, making the logic easy to follow and reproduce. Overall, it's a thoughtful and technically solid exploration of the problem.

The Weekly Challenge - 363

The write-up provides a clear and well-structured solution to the challenge, with careful input validation and readable Perl code that emphasizes robustness. The step-by-step logic and defensive programming style make the implementation easy to understand and reliable.

The Weekly Challenge #363

The blog presents a thorough and thoughtfully structured solution to the Perl Weekly Challenge, combining clear reasoning with well-documented Perl code. The modular design and detailed explanations make the logic easy to follow while demonstrating solid engineering discipline.

Stringy Sheriff

The post offers a clear and thoughtful walkthrough of solving the challenge with practical reasoning and well-structured code. Roger nicely explains the approach step-by-step, making the solution easy to follow while highlighting useful string-processing techniques.

The subnet detector

The post provides a clear and practical walkthrough of both tasks from The Weekly Challenge, with well-structured solutions in Python and Perl. The explanations highlight useful techniques such as regex parsing, handling UTF-8 characters, and leveraging networking libraries like Python's ipaddress and Perl's Net::IP.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Perl Maven online: Code-reading and Open Source contribution

March 10, 2026

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Thank you Team PWC for your continuous support and encouragement.
Welcome to the Week #364 of The Weekly Challenge.

Weekly Challenge: The subnet detector

dev.to #perl

Weekly Challenge 363

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, CoPilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: String Lie Detector

Task

You are given a string.

Write a script that parses a self-referential string and determines whether its claims about itself are true. The string will make statements about its own composition, specifically the number of vowels and consonants it contains.

My solution

This was relatively straight forward in Python. I took the following steps:

  1. Use a regular expression to extract the necessary parts of the input_string, and store the result as match.
  2. Count the number of vowels and consonants in the first captured value, storing them as vowel_count and const_count.
  3. Use the word2num module to convert the numbers in input_string to integers, stored as expected_vowel and expected_const.
  4. Check whether the counted and expected values match, and return the result.
import re
from word2num import word2num

def string_lie_detector(input_string: str) -> bool:
    match = re.match(
        r"(\w+) . (\w+) vowels? and (\w+) consonants?", input_string)
    if not match:
        raise ValueError("Input string not in expected format")

    vowel_count = 0
    const_count = 0
    for c in match.group(1).lower():
        if c in "aeiou":
            vowel_count += 1
        else:
            const_count += 1

    expected_vowel = word2num(match.group(2))
    expected_const = word2num(match.group(3))

    return vowel_count == expected_vowel and const_count == expected_const

The Perl solution is a little more complex. Maybe my Google-fu isn't up to scratch (and I don't use Copilot when working on solutions), but there doesn't appear to be a CPAN module that will convert words into numbers. As this is a coding exercise only, I have a hash called %word2num that maps words to numbers (from zero to twenty).

The next problem is that four of the examples use a long dash as the separator. This is a UTF-8 character: the result of perl -E 'say length("—")' is 3. After numerous searches of the Internet, it turns out I need to include use utf8::all in the code. With this change, I get the expected result of 1.

The rest of the code follows the same logic as the Python solution.

use v5.36;       # enables 'say' and subroutine signatures
use utf8::all;

sub main ($input_string) {
    my %word2num = (qw/
        zero 0 one 1 two 2 three 3 four 4 five 5 six 6 seven 7 eight 8
        nine 9 ten 10 eleven 11 twelve 12 thirteen 13 fourteen 14
        fifteen 15 sixteen 16 seventeen 17 eighteen 18 nineteen 19 twenty 20
    /);

    my ( $word, $v, $c ) =
      ( $input_string =~ /(\w+) . (\w+) vowels? and (\w+) consonants?/ );

    if ( !$word ) {
        die "Input string not in expected format\n";
    }

    my $vowel_count = 0;
    my $const_count = 0;
    foreach my $c ( split //, lc($word) ) {
        if ( index( "aeiou", $c ) == -1 ) {
            $const_count++;
        }
        else {
            $vowel_count++;
        }
    }

    my $expected_vowel = $word2num{ lc $v } // die "Don't know what $v is\n";
    my $expected_const = $word2num{ lc $c } // die "Don't know what $c is\n";

    my $truth =
      ( $vowel_count == $expected_vowel and $const_count == $expected_const );
    say $truth ? 'true' : 'false';
}

Examples

There was an issue with the examples, and I raised a pull request to fix it.

$ ./ch-1.py "aa — two vowels and zero consonants"
True

$ ./ch-1.py "iv — one vowel and one consonant"
True

$ ./ch-1.py "hello - three vowels and two consonants"
False

$ ./ch-1.py "aeiou — five vowels and zero consonants"
True

$ ./ch-1.py "aei — three vowels and zero consonants"
True

Task 2: Subnet Sheriff

Task

You are given an IPv4 address and an IPv4 network (in CIDR format).

Write a script to determine whether both are valid and the address falls within the network. For more information see the Wikipedia article.

My solution

This one was the easier of the two to complete. Maybe because I have worked at many ISPs in the past :-)

Python has the ipaddress module which makes it easy to confirm if an IPv4 address is in a particular IP address block.

I use a try/except block to handle situations (like the second example) where the IP address or net block is invalid. This follows the Python philosophy of Easier to Ask for Forgiveness than Permission.

import ipaddress

def subnet_sheriff(ip_addr: str, domain: str) -> bool:
    try:
        return ipaddress.IPv4Address(ip_addr) in ipaddress.IPv4Network(domain)
    except ipaddress.AddressValueError:
        return False

Perl has the Net::IP module on CPAN, which provides similar functionality. If the IP address or net block is invalid, the corresponding variable will be undef, and the else block will be used.

use v5.36;       # enables 'say' and subroutine signatures
use Net::IP;

sub main ( $ip_addr, $domain ) {
    my $addr = Net::IP->new($ip_addr);
    my $block = Net::IP->new($domain);
    if ( $addr and $block ) {
        my $overlaps = ( $addr->overlaps($block) != $IP_NO_OVERLAP );
        say $overlaps  ? 'true' : 'false';
    }
    else {
        say 'false';
    }
}

Examples

$ ./ch-2.py 192.168.1.45 192.168.1.0/24
True

$ ./ch-2.py 10.0.0.256 10.0.0.0/24
False

$ ./ch-2.py 172.16.8.9 172.16.8.9/32
True

$ ./ch-2.py 172.16.4.5 172.16.0.0/14
True

$ ./ch-2.py 192.0.2.0 192.0.2.0/25
True

$ ./ch-2.py 1.1.1.1 10.0.0.0/8
False

ANNOUNCE: Perl.Wiki & JSTree V 1.41, etc

blogs.perl.org

Updated wikis are available now from my Wiki Haven:


  • Perl Wiki & JSTree style V 1.41

  • CSS and Javascript Wiki V 1.03

  • Debian Wiki V 1.12

  • Digital Security Wiki V 1.21

  • Mojolicious Wiki V 1.15

  • Symbolic Language Wiki V 1.19


And see the 'News flash: 7 Mar 2026' for why Symbolic.Language.Wiki is now on savage.net.au.

As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also been contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Clone - recursively copy Perl datatypes
    • Version: 0.48 on 2026-03-02, with 33 votes
    • Previous CPAN version: 0.48_07 was 6 days before
    • Author: ATOOMIC
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260301.001 on 2026-03-01, with 25 votes
    • Previous CPAN version: 20260228.001
    • Author: BRIANDFOY
  3. Date::Manip - Date manipulation routines
    • Version: 6.99 on 2026-03-02, with 20 votes
    • Previous CPAN version: 6.98 was 9 months before
    • Author: SBECK
  4. DateTime::TimeZone - Time zone object base class and factory
    • Version: 2.67 on 2026-03-05, with 22 votes
    • Previous CPAN version: 2.66 was 2 months, 25 days before
    • Author: DROLSKY
  5. Devel::Cover - Code coverage metrics for Perl
    • Version: 1.52 on 2026-03-07, with 104 votes
    • Previous CPAN version: 1.51 was 7 months, 11 days before
    • Author: PJCJ
  6. ExtUtils::MakeMaker - Create a module Makefile
    • Version: 7.78 on 2026-03-03, with 64 votes
    • Previous CPAN version: 7.77_03 was 1 day before
    • Author: BINGOS
  7. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260306 on 2026-03-06, with 37 votes
    • Previous CPAN version: 1.20260301 was 5 days before
    • Author: MSIMERSON
  8. Module::Build::Tiny - A tiny replacement for Module::Build
    • Version: 0.053 on 2026-03-03, with 16 votes
    • Previous CPAN version: 0.052 was 9 months, 22 days before
    • Author: LEONT
  9. Number::Phone - base class for Number::Phone::* modules
    • Version: 4.0010 on 2026-03-06, with 24 votes
    • Previous CPAN version: 4.0009 was 2 months, 27 days before
    • Author: DCANTRELL
  10. PDL - Perl Data Language
    • Version: 2.103 on 2026-03-03, with 101 votes
    • Previous CPAN version: 2.102
    • Author: ETJ
  11. SPVM - The SPVM Language
    • Version: 0.990141 on 2026-03-06, with 36 votes
    • Previous CPAN version: 0.990140
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.89 on 2026-03-05, with 18 votes
    • Previous CPAN version: 2.88 was 9 days before
    • Author: MANWAR
  13. Sys::Virt - libvirt Perl API
    • Version: v12.1.0 on 2026-03-03, with 17 votes
    • Previous CPAN version: v12.0.0 was 1 month, 18 days before
    • Author: DANBERR
  14. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.68 on 2026-03-02, with 20 votes
    • Previous CPAN version: 0.67
    • Author: CHANSEN
  15. X11::korgwm - a tiling window manager for X11
    • Version: 6.0 on 2026-03-07, with 14 votes
    • Previous CPAN version: 5.0 was 1 year, 1 month, 15 days before
    • Author: ZHMYLOVE
  16. Zonemaster::Engine - A tool to check the quality of a DNS zone
    • Version: 8.001001 on 2026-03-04, with 35 votes
    • Previous CPAN version: 8.001000 was 2 months, 16 days before
    • Author: ZNMSTR

Protocol Buffers (Protobuf) with Perl

blogs.perl.org

I'm hoping to reach anyone using Protocol Buffers in Perl, soliciting their experiences and best practices.

A Googler is soliciting help on an official set of bindings just this year, which is great!

There seems to be Google::ProtocolBuffers, which is now 10 years old; I suspect it is not a good choice.

Google::ProtocolBuffers::Dynamic seems like the best choice right now, but won't compile for me on Debian Trixie. I haven't gone too deep into why, but upb seems to be on a branch, is old, and fails to compile.

A colleague recently, just for fun, had Claude create a pure-Perl library that passes all the tests, and dropped it on GitHub.

I ask as I have been experimenting with protoconf, which builds on Protocol Buffers.

Relatedly, I do like Thrift a lot, and it seems to be maintained, which is nice, but it appears to have failed to gain traction.


Recently I had an odd problem that I thought to be related to caching.

While investigating the issue I noticed that a Perl CGI script using query_form to build a set of parameters produces those parameters in varying order.

I think this is due to a recent change in Perl that causes hash keys to be enumerated in random order (not always the same).

As it seems HTTP caching considers different URLs to be different objects, I'd like to have some consistent ordering of query parameters.

How could I do that?

Code sketch

#...
my $url = URI->new($query->url());
my $url_form;
#...
$url->path($url->path() . 'something');
$url_form = $url->clone();
#...
$url_form->query_form(
    {
        (PN_API_V1_FUNCTION) => API_V1_FN_SEND_FORM,
        (PN_USER_MODE) => $params_ref->{(PN_USER_MODE)},
    });
#...
f(..., $url_form->as_string(), ...)
#...
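One possible fix (a sketch, not part of the original question): URI's query_form also accepts a plain key/value list, whose order it preserves, whereas a hash reference hands the keys over in Perl's randomized hash order. Sorting the pairs yourself therefore yields a stable URL; the parameter names below are invented placeholders:

```perl
use strict;
use warnings;
use URI;

# Parameters assembled in a hash, as in the original sketch
my %params = ( zeta => 1, alpha => 2, mode => 'send' );

my $url = URI->new('http://example.com/api');

# Pass an ordered list instead of a hashref: query_form keeps list order
$url->query_form( map { $_ => $params{$_} } sort keys %params );

print $url->as_string, "\n";   # http://example.com/api?alpha=2&mode=send&zeta=1
```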

This is a small article about a pattern I’ve made to automatically ignore filenames for autocompletion.

In the zshell you can use CORRECT_IGNORE_FILE to ignore files for spelling corrections (or autocorrect for commands). While handy, it is somewhat limited, as it is global. Now, I wanted to ignore files only for git and not for other commands, but I haven't found a way to target only git without having to make a wrapper around git (which I don't want to do).

The "Beautiful Perl features" series on dev.to continues!

Since my last announcement, the following articles have been added:

I'm still hoping to attract interest from people from other programming cultures, but so far most comments came from people already in the Perl community. Let's see what the future brings us!

The more I investigate various programming features, the more I'm impressed by the Perl vision: the initial design and later evolution into Perl 5 were incredibly innovative and coherent. Raku is even more impressive, but that's another story. Regarding Perl, I am tired of reading comments on so many platforms that the language is "ugly" and "write-only" -- this is not true! If this dev.to series can help to reverse the trend, I will be happy :-)

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

Today's topic is a Perl construct called fat comma, which is quite different from the trailing commas discussed in the last post.

Fat comma: an introduction

A fat comma in Perl is a construct that doesn't involve a typographic comma! Visually it consists of an expression followed by an arrow sign => and another expression. This is used in many contexts, the most common being for initializing hashes or for passing named parameters to subroutines:

my %rect = (x      => 12,
            y      => 34,
            width  => 20,
            height => 10);
draw_shape(kind   => 'rect', 
           coords => \%rect,
           color  => 'green');

A fat comma is semantically equivalent to a comma; the only difference with a regular comma is purely syntactic: if the left-hand side is a string that begins with a letter or underscore and is composed only of letters, digits and underscores, then that string doesn't need to be enclosed in quotes. The example above took advantage of this feature, but it could as well have been written:

my %rect = ('x'      => 12,
            'y'      => 34,
            'width'  => 20,
            'height' => 10);
draw_shape('kind'   => 'rect', 
           'coords' => \%rect,
           'color'  => 'green');

or even:

my %rect = ('x', 12, 'y', 34, 'width', 20, 'height', 10);
draw_shape('kind', 'rect', 'coords', \%rect, 'color', 'green');

This last variant has exactly the same technical meaning, but clearly it does not convey the same impression to the reader; so the fat comma is mainly a device for improving code readability.

More general usage

Since Perl does not impose many constraints, fat commas can be used in many other ways than just initializing hashes or passing named parameters to subroutine calls:

  1. they can appear at any place where a list is expected;
  2. they need not be only in pairs: triplets, quadruplets, etc. are allowed;
  3. mixtures of fat commas and regular commas are allowed (and even frequent);
  4. the expression on the left-hand side of a fat comma need not be a string - it can be any value.

Most of these points are excellently illustrated in a collection of examples designed in 2017 by Sinan Ünür. Here is an excerpt from his answer to a StackOverflow question asking when to use fat comma if not for hashes:

Any time you want to automatically quote a bareword to the left of a fat comma:

system ls => '-lh';

or

my $x = [ a => [ 1, 2 ], b => [ 3, 4 ] ];

Any time you think it makes the code easier to see

join ', ' => @data;

Any time you want to say "into":

bless { value => 5 } => $class;

In short, => is a comma, plain and simple. You can use it anywhere you can use a comma. E.g.:

my $z = f($x) => g($y); # invoke f($x) (for its side effects) and g($y)
                        # assign the result of g($y) to $z

Fat commas for domain-specific languages

A number of CPAN modules took advantage of fat commas for designing domain-specific languages (DSLs), exploiting the fact that fat commas can be used liberally for other purposes than just expressing pairs.

Moose

Attribute declarations

Moose is the most well-known object-oriented framework for Perl; it also influenced several competing frameworks1. Here is a short excerpt from the synopsis, showing a class declaration:

package Point;
use Moose;

has 'x' => (isa => 'Int', is => 'rw', required => 1);
has 'y' => (isa => 'Int', is => 'rw', required => 1);

This is an example where the fat comma does not introduce a pair of values, but rather a longer list in which the first element (the attribute name) is deliberately emphasized. Technically this x attribute could have been declared as:

  has('x', 'isa', 'Int', 'is', 'rw', 'required', 1);

with exactly the same result, but much less readability. Observe that in addition to the fat comma, the recommended Moose syntax also takes advantage here of two other Perl features, namely:

  1. the fact that a subroutine can be treated like a list operator2, without parentheses around the arguments: so the call has 'x' => ... is technically equivalent to has('x' => ...).
  2. the fact that a list within another list is flattened, so the parentheses in 'x' => (isa => 'Int', ...) are technically not necessary; they are present just as a stylistic preference.

You may have noticed that the single quotes around the attribute name are technically unnecessary: the x attribute name could go unquoted in

  has x => (isa => 'Int', is => 'rw', required => 1);

Here again it's a matter of stylistic preference; in this context I suppose that the Moose authors wanted to emphasize the difference between the subroutine name has and the string x passed as first argument.

Subtype declarations

Another domain-specific language in Moose is for declaring types. The cookbook has this example:

use Moose::Util::TypeConstraints;
use Locale::US;

my $STATES = Locale::US->new;

subtype 'USState'
    => as Str
    => where {
           (    exists $STATES->{code2state}{ uc($_) }
             || exists $STATES->{state2code}{ uc($_) } );
       };

Here again, fat commas and subroutine calls expressed as list operators were cleverly combined to form an expressive DSL for declaring Moose types.

Mojo

Mojolicious is one of the major Web frameworks for Perl. It uses a domain-specific language for declaring the routes supported by the Web application; here are some excerpts from the documentation:

my $route = $r->get('/:foo');
my $route = $r->get('/:foo' => sub ($c) {...});
my $route = $r->get('/:foo' => sub ($c) {...} => 'name');
my $route = $r->get('/:foo' => {foo => 'bar'} => sub ($c) {...});
my $route = $r->get('/:foo' => [foo => qr/\w+/] => sub ($c) {...});
my $route = $r->get('/:foo' => (agent => qr/Firefox/) => sub ($c) {...});
...
my $route = $r->any(['GET', 'POST'] => '/:foo' => sub ($c) {...});

Through these many variants we see a flexible language for declaring routes, where fat commas are used to visually convey some idea of structure within the lists of arguments. Observe that the third and following variants are not pairs but triplets, and that the last line has an arrayref (not a string!) to the left of the fat comma.

A word of caution about the quoting mechanism

Let's repeat the syntactic rule: if the left-hand side of the fat comma is a string that begins with a letter or underscore and is composed only of letters, digits and underscores, then that string doesn't need to be enclosed in quotes. We have seen numerous examples above that relied on this rule for more elegance and readability. One has to be careful, however, because names of built-in functions or user-defined subroutines can inadvertently be interpreted as strings instead of the intended subroutine calls. For example, consider this snippet:

use constant foo => "tac";

sub build_hash {
    return {shift => 123, foo => 456, toe => 789};
}

my $h = build_hash('tic');

One could easily expect that the value of $h is {tic => 123, tac => 456, toe => 789} ... but actually the result is {foo => 456, shift => 123, toe => 789}, because both shift and foo were interpreted here as mere strings instead of subroutine calls. The ambiguity can be resolved easily, either by putting an empty argument list after the subroutine calls, or by enclosing them in parentheses:

sub build_hash {
    return {shift() => 123, foo() => 456, toe => 789};
    # or: return {(shift) => 123, (foo) => 456, toe => 789};
}

Some people would perhaps argue that the Perl interpreter should automatically detect that shift or foo are subroutine names ... but that would introduce too much fragility. The interpreter would then depend on the list of builtin Perl functions, and also on the set of symbols declared at that point in the code; future evolutions on either side could easily break the behaviour. So Perl's design, which blindly applies the syntactic rule formulated above, is much wiser.

Similar constructs in other languages

To my knowledge, no other programming language has a general-purpose comma operator comparable to Perl's fat comma. What is quite common, however, is to have specific syntax for hashes (or "objects" or "dictionaries" or "records", as they are called in other languages), and sometimes specific syntax for named parameters in subroutine calls or method calls. This section explores some of these directions.

JavaScript

JavaScript Objects

The equivalent of a Perl hash is called "object" in JavaScript; it is initialized as follows (example copied from the MDN documentation):

const obj = {
  property1:    value1, // property name may be an identifier
  2:            value2, // or a number
  "property n": value3, // or a string
};

Here the syntax is : instead of =>3. Like in Perl, any quoted string can be used as a property name, or a number, or an unquoted string if that string can be parsed as an identifier. A bare expression is not allowed on the left-hand side: {(2+2): value} or {compute_name(): value} are syntax errors. Since ES2015, however, an expression enclosed in square brackets is accepted as a computed property name, as in {[2+2]: value}; another option is to first create the object, and then assign properties to it:

const obj           = {};
obj[2+2]            = value1;
obj[compute_name()] = value2;
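The bracketed computed-property-name syntax introduced in ES2015 covers both cases in a single object literal; here is a minimal sketch, reusing the compute_name function name from the text as a stand-in:

```javascript
// Computed property names (ES2015+): an expression in square
// brackets is evaluated, and its result becomes the key.
function compute_name() {
  return "dynamic";
}

const obj = {
  [2 + 2]: "four",           // key is the string "4"
  [compute_name()]: "value", // key is "dynamic"
};

console.log(obj["4"]);     // "four"
console.log(obj.dynamic);  // "value"
```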

Named parameters

JavaScript has no direct support for passing named parameters to subroutines; however there is of course an indirect way, which is to pass an object to the function:

function show_user(u) {
  return `${u.firstname} ${u.lastname} has id ${u.id}`;
}
console.log(show_user({id: 123, firstname:"John", lastname:"Doe"})); 

Recent versions of JavaScript have a more sophisticated way of exploiting the object received as parameter: rather than grabbing successive properties from the object, the receiving function can instead use object destructuring to extract the values into local lexical variables:

function show_user_v2({firstname, lastname, id}) {
  return `${firstname} ${lastname} has id ${id}`;
}
console.log(show_user_v2({id: 123, firstname:"John", lastname:"Doe"})); 

This technique can go even further by supplying default values to the lexical variables - an advanced technique described in the MDN documentation.
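As a sketch of that idea (the show_user_v3 name and the fallback values are invented for illustration), each default kicks in whenever the corresponding property is missing from the argument object:

```javascript
// Destructured parameters with defaults: each missing property
// falls back to the value given after '='; the trailing '= {}'
// even allows calling the function with no argument at all.
function show_user_v3({firstname = "Anonymous", lastname = "?", id = 0} = {}) {
  return `${firstname} ${lastname} has id ${id}`;
}

console.log(show_user_v3({id: 123, firstname: "John", lastname: "Doe"}));
// "John Doe has id 123"
console.log(show_user_v3({id: 456}));
// "Anonymous ? has id 456"
```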

Python

Dictionaries

In Python the closest equivalent of a Perl hash is called a "dictionary". Like in JavaScript, dictionaries are initialized with a list of keys and values separated by :, enclosed in curly braces:

point = {'x': 34, 'y': -1}

But unlike in JavaScript or Perl, keys on the left of the : separator are not quoted automatically: they are just ordinary expressions. This requires more typing from the programmer, but makes it possible to use operators or function calls, like in this example:

def double(x):
    return x * 2

obj = {
    'hello' + 'world': 11,
    234:               'foobar',
    double(3):         'doubled',
    }

print(obj) # prints : {'helloworld': 11, 234: 'foobar', 6: 'doubled'}

Keyword arguments

In Python, named parameters are called keyword arguments. The syntax is different from dictionary initializers: the symbol = is used to connect keywords to their values:

draw_line(x1=12, y1=-3, x2=55, y2=66)

Here the left-hand side does not need to be quoted; but it must obey the syntax rules for identifiers, which means for example that strings containing spaces are not eligible.

Functions with keyword arguments are clearly a different construct from dictionaries. They can be combined, however: a dictionary can be unpacked into a list of key-value pairs to be passed as arguments to a function:

points = {'x1':1, 'y1':2, 'x2':3, 'y2':4}
draw_line(**points)

But unlike in Perl or JavaScript, if the dictionary contains keys other than those expected by the function, an exception is raised ("got an unexpected keyword argument"). This is beneficial for defensive programming, where the interpreter exerts more control, but comes at the cost of flexibility: a dictionary received from an external source (for example a config file or an HTTP request) must be filtered before it can be unpacked and passed to the called function.

PHP

PHP uses the => notation for key-value pairs in associative arrays, like in Perl, but without the automatic quoting feature. Therefore keys must be enclosed in double quotes or single quotes, like in Python.

In addition, PHP also uses the same notation => for anonymous functions, like in JavaScript, except that the fn keyword must also be present.

Here is an example where the two features are combined:

$array1 = ["foo" => "bar", 
           "fun" => fn($x) => fn($y) => $x+$y,
          ];

This is an associative array (like a Perl hash) where the key foo is associated with the value "bar", and the key fun is associated with a function that returns another function. So beware when visually parsing a => in a PHP program!

Wrapping up

The Perl construct of fat commas is very simple, with coherent syntax and semantics, and applicable in a wide range of situations. It helps to write readable code by allowing the programmer to structure lists and emphasize some relations between values in the list. This capability is often used to design domain-specific sublanguages within Perl. A beautiful construct indeed!

About the cover picture

The picture shows the coupling mechanism on an old pipe organ. The French word for this is "accouplement", which in other contexts also means "mating"!

When the mechanism is activated, notes played on the lower keyboard also trigger the notes on the upper keyboard ... which bears some resemblance to the bindings in programming that were discussed in this article.

  1. since v5.38 some object-oriented features are also implemented in Perl core; but CPAN object-oriented frameworks like Moose are still heavily used. ↩

  2. See perlsyn: "Declaring a subroutine allows a subroutine name to be used as if it were a list operator from that point forward in the program". ↩

  3. the notation => is also present in JavaScript, but with a meaning totally different from Perl: it is used for arrow function expressions, a compact alternative to traditional function expressions. ↩

I am working on a Windows Installshield MSI installer that has Strawberry Perl as a prerequisite. This has existed for several years; I'm trying to modify it to install the latest version of Perl. In the Prerequisite Editor, the command I have specified runs a .bat file that uninstalls previous versions of Perl and runs the latest installer using msiexec.exe. The installation completes successfully, but Installshield reports that the batch command file returned an error. How can I find out what is going on, or at least suppress the Installshield message? This is Installshield 2019, by the way. Here is the batch file:

@echo on
MsiExec.exe /passive /X{0BE917CD-6CE8-1014-9C0C-9680A6A774DD} 
MsiExec.exe /passive /X{8075BCC9-804A-1014-97A8-A0999374D9D1}
MsiExec.exe /norestart /i strawberry-perl-5.42.0.1-64bit.msi  /qb /Le "c:\perl_install.log" 
echo Exit Code is %errorlevel% >> c:\perl_install.log

I don't get the output from the echo command in the log file, either. Strangely, I get a string of Kanji.

Thank you Team PWC for your continuous support and encouragement.
Welcome to the Week #363 of The Weekly Challenge.

I have a script that starts like this:

use strict;
use feature 'say';
use warnings FATAL => 'all';
use autodie ':default';
use Term::ANSIColor;
use Cwd 'getcwd';
use SimpleFlow qw(task say2);
use Getopt::ArgParse;
use File::Basename;
use POSIX 'strftime';
use File::Temp 'tempfile';
use Scalar::Util qw(looks_like_number);
use List::Util qw(min max sum);

which runs fine in Perl 5.42

I am attempting to create a standalone executable on Linux that bundles the Perl interpreter and all relevant modules, so that no additional installation is required.

I've been using PAR::Packer (the pp command), but it's not working:

pp -I /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/ -o x.pepPriML x.pepPriML.pl

which outputs:

Built-in function 'builtin::blessed' is experimental at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/overload.pm line 103.
Perl v5.40.0 required--this is only v5.38.2, stopped at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/builtin.pm line 3.
BEGIN failed--compilation aborted at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/builtin.pm line 3.
Compilation failed in require at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/File/Copy.pm line 14.
BEGIN failed--compilation aborted at /home/con/perl5/perlbrew/perls/perl-5.42.0/lib/5.42.0/File/Copy.pm line 14.
Compilation failed in require at /usr/share/perl5/Archive/Zip/Archive.pm line 9.
BEGIN failed--compilation aborted at /usr/share/perl5/Archive/Zip/Archive.pm line 9.
Compilation failed in require at /usr/share/perl5/Archive/Zip.pm line 316.
Compilation failed in require at -e line 236.
Failed to execute temporary parl (class PAR::StrippedPARL::Static) '/tmp/parl1X6c': $?=65280 at /usr/share/perl5/PAR/StrippedPARL/Base.pm line 77, <DATA> line 1.
/usr/bin/pp: Failed to extract a parl from 'PAR::StrippedPARL::Static' to file '/tmp/parlHjpr_2o' at /usr/share/perl5/PAR/Packer.pm line 1216, <DATA> line 1.

Even when I specify perl 5.0382 within the script, pp still cannot compile it.

How can I create a standalone executable to have all libraries included with the Perl version?

Episode 9 - Olaf Kolkman (part 1)

The Underbar
Olaf Kolkman has had a long career in open source. In this first part, we discussed his involvement with Perl, DNSSEC and NLnet Labs.

(dlxxxix) 16 great CPAN modules released last week

Niceperl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Amon2 - lightweight web application framework
    • Version: 6.18 on 2026-02-28, with 27 votes
    • Previous CPAN version: 6.17 was 1 day before
    • Author: TOKUHIROM
  2. App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
    • Version: 2.439 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.438 was 1 month, 29 days before
    • Author: KUERBIS
  3. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.031 on 2026-02-25, with 19 votes
    • Previous CPAN version: 1.030 was 20 days before
    • Author: PREACTION
  4. CPAN::Uploader - upload things to the CPAN
    • Version: 0.103019 on 2026-02-23, with 25 votes
    • Previous CPAN version: 0.103018 was 3 years, 1 month, 9 days before
    • Author: RJBS
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260228.001 on 2026-02-28, with 25 votes
    • Previous CPAN version: 20260225.001 was 2 days before
    • Author: BRIANDFOY
  6. DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
    • Version: 4.055 on 2026-02-23, with 67 votes
    • Previous CPAN version: 5.013 was 6 months, 19 days before
    • Author: DVEEDEN
  7. Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
    • Version: v31.0.0 on 2026-02-25, with 20 votes
    • Previous CPAN version: v30.0.0 was 27 days before
    • Author: DORASUN
  8. LWP::Protocol::https - Provide https support for LWP::UserAgent
    • Version: 6.15 on 2026-02-23, with 22 votes
    • Previous CPAN version: 6.14 was 1 year, 11 months, 12 days before
    • Author: OALDERS
  9. Mail::DMARC - Perl implementation of DMARC
    • Version: 1.20260226 on 2026-02-27, with 36 votes
    • Previous CPAN version: 1.20250805 was 6 months, 21 days before
    • Author: MSIMERSON
  10. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.039000 on 2026-02-28, with 27 votes
    • Previous CPAN version: 2.038000 was 29 days before
    • Author: MICKEY
  11. SPVM - The SPVM Language
    • Version: 0.990138 on 2026-02-28, with 36 votes
    • Previous CPAN version: 0.990137 was before
    • Author: KIMOTO
  12. SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
    • Version: 2.88 on 2026-02-23, with 18 votes
    • Previous CPAN version: 2.87 was 3 years, 9 months, 3 days before
    • Author: MANWAR
  13. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000163 on 2026-02-24, with 28 votes
    • Previous CPAN version: 1.000162 was 3 days before
    • Author: EXODIST
  14. Tickit - Terminal Interface Construction KIT
    • Version: 0.75 on 2026-02-27, with 29 votes
    • Previous CPAN version: 0.74 was 2 years, 5 months, 22 days before
    • Author: PEVANS
  15. TimeDate - Date and time formatting subroutines
    • Version: 2.34 on 2026-02-28, with 28 votes
    • Previous CPAN version: 2.34_01
    • Author: ATOOMIC
  16. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.66 on 2026-02-25, with 20 votes
    • Previous CPAN version: 0.65 was 1 day before
    • Author: CHANSEN

TPRC call for presentations is open!

Perl Foundation News

The call for presentations is now open for The Perl and Raku Conference! Submissions will be accepted through March 15, and all presenters who are accepted will get a free ticket to the conference!

All presenters must be present in person at the conference. Speaking dates are June 26-28, 2026. We are accepting talks (either 20 minutes or 50 minutes) on any topic that could be of interest to the Perl and Raku Community. We will also be having something new this year—interactive sessions. Keep an eye out for a description of what that will be and how you might participate.

Go to https://tprc.us/ to learn more, and to find the link for submitting your talk!

Introduction

Rebase vs merge, which is better? There are a lot of online discussions about these two “workflows”. One camp is pro-rebase and the other swears by merging. Both sides are stuck in their own thinking, and both think they are right. So which is it? In my opinion: neither, or both. Before you get defensive, hear me out. I'll try to explain this in the following post.

TL;DR

Rebase is a tool used while developing. Merging is a tool used for incorporating your branch into a target branch. The two complement each other in the end.

The Board is proposing Chris Prather (perigrin) for membership on the Board. The Board will vote soon on his appointment.

Below are his answers to the application questions:

Tell us about your technical and leadership experience.

I've been writing Perl professionally for over two and a half decades and contributing to the community for nearly as long—organizing YAPC conferences and building CPAN modules. My recent work has been on large-scale Perl e-commerce systems—millions of lines of code, hundreds of developers, the kind of codebase that reminds you how much critical infrastructure still runs on Perl. I have also been working to migrate irc.perl.org to modern infrastructure—because our community spaces matter as much as our code. Beyond Perl, I've been a Director of Software, run my own consulting practice for a decade, and founded an afterschool science education company. That range has taught me something relevant here: sustainable communities need both technical excellence and intentional cultivation. You can't just "build it and they will come".

If appointed, what is one thing you'd work toward?

Improving the Foundation's capacity to lead. The Foundation is uniquely positioned to help the community navigate hard problems—but influence has to be earned through presence and relationship. I'd like to see us do more to connect the archipelago of Perl and Raku projects, the businesses that rely on them, and the community members that will sustain them into the future. I want us to support businesses that depend on Perl—helping them make the case for continued investment in the Perl ecosystem. I'd like us to think seriously about what it would take to grow the next generation of Perl shops, not just maintain the current crop. More and more of our infrastructure is being supported by fewer and fewer hands—often the same people wearing different hats, organizing each piece in isolation. We need to help provide templates for sustainable projects. That means making it easier for maintainers to find support, share burdens, and bring in new people. Projects should end by choice, not by attrition. The Foundation is the only organization that has the cachet to connect all the bridges.

What is your vision for the Foundation?

The Foundation works best as a quiet enabler—handling legal and financial scaffolding so community members can focus on building things. That should continue. But I think we have underutilized soft power. Using it well means doing the community-building work—earning the standing to shape conversations. I'd like to see us be more intentional about the cultural signals we send. The Foundation's choices—what to fund, who to platform—shape perceptions of what kind of community we're building. We have always been tolerant of the plurality of voices, but sometimes that has gotten overshadowed by some of the more flamboyant voices themselves. We should continue to cultivate community structures that celebrate the voices we want to represent us, not just prune the voices we don't. The Foundation can enable genuine support for needs that Perl-based companies have. We should work to understand and validate those needs, and help the community identify and provide sustainable solutions. By providing templates for organizing projects, finding support, and bringing in new people—the Foundation can better ensure that Perl and Raku are on solid foundations for years to come.

What is your vision for Perl and Raku?

Both Perl and Raku are living languages with vibrant evolutions underway. They both have the same underlying need: an ecosystem that is culturally and economically sustainable. One where businesses are confident enough to invest, newcomers feel welcomed rather than turned away, and ambitious projects find support. The Foundation can leverage its position to help them achieve that future. For Perl, it can help the maintainers connect more strongly with the businesses that rely upon their work to get the kind of feedback they need to ensure we're going in the right direction. For Raku, I'll admit I don't know enough about where the community stands today—and that's exactly the kind of gap it feels like the Foundation should help bridge. I hope to learn more about how we can best support them.

RIP nginx - Long Live Apache

nginx is dead. Not metaphorically dead. Not “falling out of favor” dead. Actually, officially, put-a-date-on-it dead.

In November 2025 the Kubernetes project announced the retirement of Ingress NGINX — the controller running ingress for a significant fraction of the world’s Kubernetes clusters. Best-effort maintenance until March 2026. After that: no releases, no bugfixes, no security patches. GitHub repositories go read-only. Tombstone in place.

And before the body was even cold, we learned why. IngressNightmare — five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated 9.8 critical. Unauthenticated remote code execution. Complete cluster takeover. No credentials required. Wiz Research found over 6,500 clusters with the vulnerable admission controller publicly exposed to the internet, including Fortune 500 companies. 43% of cloud environments vulnerable. The root cause wasn’t a bug that could be patched cleanly - it was an architectural flaw baked into the design from the beginning. And the project that ran ingress for millions of production clusters was, in the end, sustained by one or two people working in their spare time.

Meanwhile Apache has been quietly running the internet for 30 years, governed by a foundation, maintained by a community, and looking increasingly like the adult in the room.

Let’s talk about how we got here.

Apache Was THE Web Server

Before we talk about what went wrong, let’s remember what Apache actually was. Not a web server. THE web server. At its peak Apache served over 70% of all websites on the internet. It didn’t win that position by accident - it won it by solving every problem the early web threw at it. Virtual hosting. SSL. Authentication. Dynamic content via CGI and then mod_perl. Rewrite rules. Per-directory configuration. Access control. Compression. Caching. Proxying. One by one, as the web evolved, Apache evolved with it, and the industry built on top of it.

Apache wasn’t just infrastructure. It was the platform on which the commercial internet was built. Every hosting provider ran it. Every enterprise deployed it. Every web developer learned it. It was as foundational as TCP/IP - so foundational that most people stopped thinking about it, the way you stop thinking about running water.

Then nginx showed up with a compelling story at exactly the right moment.

The Narrative That Stuck

The early 2000s brought a new class of problem - massively concurrent web applications, long-polling, tens of thousands of simultaneous connections. The C10K problem was real and Apache’s prefork MPM - one process per connection - genuinely struggled under that specific load profile. nginx’s event-driven architecture handled it elegantly. The benchmarks were dramatic. The config was clean and minimal, a breath of fresh air compared to Apache’s accumulated complexity. nginx felt modern. Apache felt like your dad’s car.

The “Apache is legacy” narrative took hold and never let go - even after the evidence for it evaporated.

Apache gained mpm_event, bringing the same non-blocking I/O and async connection handling that nginx was celebrated for. The performance gap on concurrent connections essentially closed. Then CDNs solved the static file problem at the architectural level - your static files live in S3 now, served from a Cloudflare edge node milliseconds from your user, and your web server never sees them. The two pillars of the nginx argument - concurrency and static file performance - were addressed, one by Apache’s own evolution and one by infrastructure that any serious deployment should be using regardless of web server choice.

But nobody reruns the benchmarks. The “legacy” label outlived the evidence by a decade. A generation of engineers learned nginx first, taught it to the next generation, and the assumption calcified into received wisdom. Blog posts from 2012 are still being cited as architectural guidance in 2025.

What Apache Does That nginx Can’t

Strip away the benchmark mythology and look at what these servers actually do when you need them to do something hard.

Apache’s input filter chain lets you intercept the raw request byte stream mid-flight - before the body is fully received - and do something meaningful with it. I’m currently building a multi-server file upload handler with real-time Redis progress tracking, proper session authentication, and CSRF protection implemented directly in the filter chain. Zero JavaScript upload libraries. Zero npm dependencies. Zero supply chain attack surface. The client sends bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a paid commercial module to get close. Or you write C. Or you route around it to application code and wonder why you needed nginx in the first place.

Apache’s phase handlers let you hook into the exact right moment of the request lifecycle - post-read, header parsing, access control, authentication, response - each phase a precise intervention point. mod_perl embeds a full Perl runtime in the server with persistent state, shared memory, and pre-forked workers inheriting connection pools and compiled code across requests. mod_security gives you WAF capabilities your “modern” stack is paying a vendor for. mod_cache is a complete RFC-compliant caching layer that nginx reserves for paying customers.

And LDAP - one of the oldest enterprise authentication requirements there is. With mod_authnz_ldap it’s a few lines of config:

AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com

Connection pooling, SSL/TLS to the directory, group membership checks, credential caching - all native, all in config, no code required. With nginx you’re reaching for a community module with an inconsistent maintenance history, writing Lua, or standing up a separate auth service and proxying to it with auth_request - which is just mod_authnz_ldap reimplemented badly across two processes with an HTTP round trip in the middle.

Apache Includes Everything You’re Now Paying For

Look at Apache’s feature set and you’re reading the history of web infrastructure, one solved problem at a time. SSL termination? Apache had it before cloud load balancers existed to take it off your plate. Caching? mod_cache predates Redis by years. Load balancing? mod_proxy_balancer was doing weighted round-robin and health checks before ELB was a product. Compression, rate limiting, IP-based access control, bot detection via mod_security - Apache had answers to all of it before the industry decided each problem deserved its own dedicated service, its own operations overhead, and its own vendor relationship.

Apache didn’t accumulate features because it was undisciplined. It accumulated features because the web kept throwing problems at it and it kept solving them. The fact that your load balancer now handles SSL termination doesn’t mean Apache was wrong to support it - it means Apache was right early enough that the rest of the industry eventually built dedicated infrastructure around the same idea.

Now look at your AWS bill. CloudFront for CDN. ALB for load balancing and SSL termination. WAF for request filtering. ElastiCache for caching. Cognito for authentication. API Gateway for routing. Each one a line item. Each one a managed service wrapping functionality that Apache has shipped for free since before most of your team was writing code.

Amazon Web Services is, in a very real sense, Apache’s feature set repackaged as paid managed infrastructure. They looked at what the web needed, looked at what Apache had already solved, and built a business around operating those solutions at scale so you didn’t have to. That’s a legitimate value proposition - operations is hard and sometimes paying AWS is absolutely the right answer. But if you’re running a handful of servers and paying for half a dozen AWS services to handle concerns that Apache handles natively, maybe set the Wayback Machine to 2005, spin up Apache, and keep the credit card in your pocket.

Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that Amazon built a cloud business catching up to him.

So Why Did You Choose nginx?

Be honest. The real reason is that you learned it first, or your last job used it, or a blog post from 2012 told you it was the modern choice. Maybe someone at a conference said Apache was legacy and you nodded along because everyone else was nodding. That’s how technology adoption works - narrative momentum, not engineering analysis.

But those nginx blinders have a cost. And the Kubernetes ecosystem just paid it in full.

The Cost of the nginx Blinders

The nginx Ingress Controller became the Kubernetes default early in the ecosystem’s adoption curve and the pattern stuck. Millions of production clusters. The de-facto standard. Fortune 500 companies. The Swiss Army knife of Kubernetes networking - and that flexibility was precisely its undoing.

The “snippets” feature that made it popular - letting users inject raw nginx config via annotations - turned out to be an unsanitizable attack surface baked into the design. CVE-2025-1974 exploited this to achieve unauthenticated RCE via the admission controller, giving attackers access to all secrets across all namespaces. Complete cluster takeover from anything on the pod network. In many common configurations the pod network is accessible to every workload in your cloud VPC. The blast radius was the entire cluster.

The architectural flaw couldn’t be fixed without gutting the feature that made the project worth using. So it was retired instead.

Here is the part nobody is saying out loud: Apache could have been your Kubernetes ingress controller all along.

The Apache Ingress Controller exists. It supports path and host-based routing, TLS termination, WebSocket proxying, header manipulation, rate limiting, mTLS - everything Ingress NGINX offered, built on a foundation with 30 years of security hardening and a governance model that doesn’t depend on one person’s spare time. It doesn’t have an unsanitizable annotation system because Apache’s configuration model was designed with proper boundaries from the beginning. The full Apache module ecosystem - mod_security, mod_authnz_ldap, the filter chain, all of it - available to every ingress request.
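For readers who have only ever written nginx-flavoured manifests: the resource itself is the standard Kubernetes Ingress object either way; only the class changes. A minimal sketch, in which the `apache` ingressClassName, hostnames, and service names are illustrative assumptions, not taken from any real controller's docs:

```yaml
# Hypothetical Ingress routed through an Apache-based controller.
# The ingressClassName, hostname, and backend names are assumptions.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app
spec:
  ingressClassName: apache
  tls:
    - hosts:
        - app.example.com
      secretName: example-app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 8080
```

The point of the sketch is that nothing in the spec is nginx-specific: swapping the controller is a change of `ingressClassName`, not a rewrite of your routing.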

The Kubernetes community never seriously considered it. nginx had the mindshare, nginx got the default recommendation, nginx became the assumed answer before the question was even finished. Apache was dismissed as grandpa’s web server by engineers who had never actually used it for anything hard - and so the ecosystem bet its ingress layer on a project sustained by volunteers and crossed its fingers.

The nginx blinders cost the industry IngressNightmare, 6,500 exposed clusters, and a forced migration that will consume engineering hours across thousands of organizations in 2026. Not because Apache wasn’t available. Because nobody looked.

nginx is survived by its commercial fork nginx Plus, approximately 6,500 vulnerable Kubernetes clusters, and a generation of engineers who will spend Q1 2026 migrating to Gateway API - a migration they could have avoided entirely.

Who’s Keeping The Lights On

Here’s the conversation that should happen in every architecture review but almost never does: who maintains this and what happens when something goes wrong?

For Apache the answer has been the same for over 30 years. The Apache Software Foundation - vendor-neutral, foundation-governed, genuinely open source. Security vulnerabilities found, disclosed responsibly, patched. A stable API that doesn’t break your modules between versions. Predictable release cycles. Institutional stability that has outlasted every company that ever tried to compete with it.

nginx’s history is considerably more complicated. Written by Igor Sysoev while employed at Rambler, ownership murky for years, acquired by F5 in 2019. Now a critical piece of infrastructure owned by a networking hardware vendor whose primary business interests may or may not align with the open source project. nginx Plus - the version with the features that actually compete with Apache on a level playing field - is commercial. OpenResty, the variant most people reach for when they need real programmability, is a separate project with its own maintenance trajectory.

The Ingress NGINX project had millions of users and a maintainership you could count on one hand. That’s not a criticism of the maintainers - it’s an indictment of an ecosystem that adopted a critical infrastructure component without asking who was keeping the lights on.

Three decades of adversarial testing by the entire internet is a security posture no startup’s stack can match. The Apache Software Foundation will still be maintaining Apache httpd when the company that owns your current stack has pivoted twice and been acqui-hired into oblivion.

Long Live Apache

The engineers who dismissed Apache as legacy were looking at a 2003 benchmark and calling it a verdict. They missed the server that anticipated every problem modern infrastructure is still solving, that powered the internet before AWS existed to charge you for the privilege, and that was sitting right there in the Kubernetes ecosystem waiting to be evaluated while the community was busy betting critical infrastructure on a volunteer project with an architectural time bomb in its most popular feature.

Grandpa didn’t just know what he was doing. Grandpa was building the platform you’re still trying to reinvent - badly, in JavaScript, with a vulnerability disclosure coming next Tuesday and a maintainer burnout announcement the Tuesday after that.

The server is fine. It was always fine. Touch grass, update your mental model, and maybe read the Apache docs before your next architecture meeting.

RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his spare time. Missed by the 43% of cloud environments that probably should have asked more questions.

Sources

  • IngressNightmare - CVE details and exposure statistics Wiz Research, March 24, 2025 https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities
  • Ingress NGINX Retirement Announcement Kubernetes SIG Network and Security Response Committee, November 11, 2025 https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/
  • Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory Kubernetes, March 24, 2025 https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/
  • Transitioning Away from Ingress NGINX - Maintainership and architectural analysis Google Open Source Blog, February 2026 https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html
  • F5 Acquisition of nginx F5 Press Release, March 2019 https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops

Disclaimer: This article was written with AI assistance during a long discussion on the features and history of Apache and nginx, drawing on my experience maintaining and using Apache over the last 20+ years. The opinions, technical observations, and arguments are entirely my own. I am in no way affiliated with the ASF, nor do I have any financial interest in promoting Apache. I have been using and benefiting from Apache since 1998 and continue to discover features and capabilities that surprise me even to this day.

Treating GitHub Copilot as a Contributor

Perl Hacks

For some time, we’ve talked about GitHub Copilot as if it were a clever autocomplete engine.

It isn’t.

Or rather, that’s not all it is.

The interesting thing — the thing that genuinely changes how you work — is that you can assign GitHub issues to Copilot.

And it behaves like a contributor.

Over the past day, I’ve been doing exactly that on my new CPAN module, WebServer::DirIndex. I’ve opened issues, assigned them to Copilot, and watched a steady stream of pull requests land. Ten issues closed in about a day, each one implemented via a Copilot-generated PR, reviewed and merged like any other contribution.

That still feels faintly futuristic. But it’s not “vibe coding”. It’s surprisingly structured.

Let me explain how it works.


It Starts With a Proper Issue

This workflow depends on discipline. You don’t type “please refactor this” into a chat window. You create a proper GitHub issue. The sort you would assign to another human maintainer. For example, here are some of the recent issues Copilot handled in WebServer::DirIndex:

  • Add CPAN scaffolding
  • Update the classes to use Feature::Compat::Class
  • Replace DirHandle
  • Add WebServer::DirIndex::File
  • Move render() method
  • Use :reader attribute where useful
  • Remove dependency on Plack

Each one was a focused, bounded piece of work. Each one had clear expectations.

The key is this: Copilot works best when you behave like a maintainer, not a magician.

You describe the change precisely. You state constraints. You mention compatibility requirements. You indicate whether tests need to be updated.

Then you assign the issue to Copilot.

And wait.
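To make that concrete, here is a sketch of what a Copilot-ready issue might look like, using the "Replace DirHandle" task from the list above; the exact wording and constraints here are my illustration, not the actual issue text:

```markdown
## Replace DirHandle with opendir/readdir

The module currently uses DirHandle to list directory contents.
DirHandle is discouraged in modern Perl; replace it with plain
opendir/readdir/closedir.

Constraints:

- Do not change the public API.
- Must remain compatible with Perl 5.40; no experimental features.
- Update the tests in t/ to cover the new code path.
- Keep the standard CPAN layout (lib/, t/) unchanged.
```

Notice that it reads exactly like an issue you would hand to a human contributor: one bounded change, explicit constraints, explicit test expectations.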


The Pull Request Arrives

After a few minutes — sometimes ten, sometimes less — Copilot creates a branch and opens a pull request.

The PR contains:

  • Code changes
  • Updated or new tests
  • A descriptive PR message

And because it’s a real PR, your CI runs automatically. The code is evaluated in the same way as any other contribution.

This is already a major improvement over editor-based prompting. The work is isolated, reviewable, and properly versioned.

But the most interesting part is what happens in the background.


Watching Copilot Think

If you visit the Agents tab in the repository, you can see Copilot reasoning through the issue.

It reads like a junior developer narrating their approach:

  • Interpreting the problem
  • Identifying the relevant files
  • Planning changes
  • Considering test updates
  • Running validation steps

And you can interrupt it.

If it starts drifting toward unnecessary abstraction or broad refactoring, you can comment and steer it:

  • Please don’t change the public API.
  • Avoid experimental Perl features.
  • This must remain compatible with Perl 5.40.

It responds. It adjusts course.

This ability to intervene mid-flight is one of the most useful aspects of the system. You are not passively accepting generated code — you’re supervising it.


Teaching Copilot About Your Project

Out of the box, Copilot doesn’t really know how your repository works. It sees code, but it doesn’t know policy.

That’s where repository-level configuration becomes useful.

1. Custom Repository Instructions

GitHub allows you to provide a .github/copilot-instructions.md file that gives Copilot repository-specific guidance; the feature is covered in GitHub's documentation on repository custom instructions.

When GitHub offers to generate this file for you, say yes.

Then customise it properly.

In a CPAN module, I tend to include:

  • Minimum supported Perl version
  • Whether Feature::Compat::Class is preferred
  • Whether experimental features are forbidden
  • CPAN layout expectations (lib/, t/, etc.)
  • Test conventions (Test::More, no stray diagnostics)
  • A strong preference for not breaking the public API
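Putting those points together, a minimal .github/copilot-instructions.md for a CPAN distribution might read like this; it's a sketch of the kind of content, not the actual file from WebServer::DirIndex:

```markdown
# Repository guidelines for Copilot

- This is a CPAN distribution: code lives in lib/, tests in t/.
- The minimum supported Perl version is 5.40.
- Use Feature::Compat::Class for object-oriented code; do not
  enable experimental features directly.
- Tests use Test::More; keep output clean (no stray diagnostics).
- Never change the public API unless the issue explicitly asks
  for it.
```

Keep it short and declarative: the file is policy, not documentation, and every line should be something you would also say in code review.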

Without this file, Copilot guesses.

With this file, Copilot aligns itself with your house style.

That difference is impressive.

2. Customising the Copilot Development Environment

There’s another piece that many people miss: Copilot can run a dedicated setup workflow, defined in .github/workflows/copilot-setup-steps.yml with a job named copilot-setup-steps.

You can define a workflow that prepares the environment Copilot works in; the details are in GitHub's documentation on customising the Copilot coding agent's development environment.

In my Perl projects, I use this standard setup:

name: Copilot Setup Steps

on:
  workflow_dispatch:
  push:
    paths:
      - .github/workflows/copilot-setup-steps.yml
  pull_request:
    paths:
      - .github/workflows/copilot-setup-steps.yml

jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Perl 5.40
        uses: shogo82148/actions-setup-perl@v1
        with:
          perl-version: '5.40'

      - name: Install dependencies
        run: cpanm --installdeps --with-develop --notest .

(Obviously, that was originally written for me by Copilot!)

This does two important things.

Firstly, it ensures Copilot is working with the correct Perl version.

Secondly, it installs the distribution dependencies, meaning Copilot can reason in a context that actually resembles my real development environment.

Without this workflow, Copilot operates in a kind of generic space.

With it, Copilot behaves like a contributor who has actually checked out your code and run cpanm.

That’s a useful difference.


Reviewing the Work

This is the part where it’s important not to get starry-eyed.

I still review the PR carefully.

I still check:

  • Has it changed behaviour unintentionally?
  • Has it introduced unnecessary abstraction?
  • Are the tests meaningful?
  • Has it expanded scope beyond the issue?

I check out the branch and run the tests. Exactly as I would with a PR from a human co-worker.

You can request changes and reassign the PR to Copilot. It will revise its branch.

The loop is fast. Faster than traditional asynchronous code review.

But the responsibility is unchanged. You are still the maintainer.


Why This Feels Different

What’s happening here isn’t just “AI writing code”. It’s AI integrated into the contribution workflow:

  • Issues
  • Structured reasoning
  • Pull requests
  • CI
  • Review cycles

That architecture matters.

It means you can use Copilot in a controlled, auditable way.

In my experience with WebServer::DirIndex, this model works particularly well for:

  • Mechanical refactors
  • Adding attributes (e.g. :reader where appropriate)
  • Removing dependencies
  • Moving methods cleanly
  • Adding new internal classes

It is less strong when the issue itself is vague or architectural. Copilot cannot infer the intent you didn’t articulate.

But given a clear issue, it’s remarkably capable — even with modern Perl using tools like Feature::Compat::Class.
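For anyone who hasn't seen this style of Perl, here is a minimal sketch of the kind of code in question, using Feature::Compat::Class with :param and :reader field attributes; the Point class is a made-up example, not code from WebServer::DirIndex:

```perl
use v5.36;
use Feature::Compat::Class;

# A small class using the `class` syntax with :reader accessors.
class Point {
    field $x :param :reader;
    field $y :param :reader;

    method as_string { "($x, $y)" }
}

my $p = Point->new( x => 3, y => 4 );
say $p->x;           # prints 3
say $p->as_string;   # prints (3, 4)
```

On Perl 5.40 this compiles down to the core class feature; on older perls Feature::Compat::Class falls back to Object::Pad, which is exactly the kind of compatibility constraint worth spelling out in the issue and the instructions file.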


A Small but Important Point for the Perl Community

I’ve seen people saying that AI tools don’t handle Perl well. That has not been my experience.

With a properly described issue, repository instructions, and a defined development environment, Copilot works competently with:

  • Modern Perl syntax
  • CPAN distribution layouts
  • Test suites
  • Feature::Compat::Class (or whatever OO framework I’m using on a particular project)

The constraint isn’t the language. It’s how clearly you explain the task.


The Real Shift

The most interesting thing here isn’t that Copilot writes Perl. It’s that GitHub allows you to treat AI as a contributor.

  • You file an issue.
  • You assign it.
  • You supervise its reasoning.
  • You review its PR.

It’s not autocomplete. It’s not magic. It’s just another developer on the project. One who works quickly, doesn’t argue, and reads your documentation very carefully.

Have you been using AI tools to write or maintain Perl code? What successes (or failures!) have you had? Are there other tools I should be using?


Links

If you want to have a closer look at the issues and PRs I’m talking about, they’re all in the WebServer::DirIndex repository on GitHub.

The post Treating GitHub Copilot as a Contributor first appeared on Perl Hacks.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.339 on 2026-02-19, with 50 votes
    • Previous CPAN version: 0.338 was 4 months, 16 days before
    • Author: RJBS
  2. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.04 on 2026-02-19, with 56 votes
    • Previous CPAN version: 10.03 was 30 days before
    • Author: UTASHIRO
  3. App::Netdisco - An open source web-based network management tool.
    • Version: 2.097003 on 2026-02-21, with 834 votes
    • Previous CPAN version: 2.097002 was 1 month, 12 days before
    • Author: OLIVER
  4. App::rdapper - a command-line RDAP client.
    • Version: 1.24 on 2026-02-19, with 21 votes
    • Previous CPAN version: 1.23 was 17 days before
    • Author: GBROWN
  5. CPAN::Meta - the distribution metadata for a CPAN dist
    • Version: 2.150013 on 2026-02-20, with 39 votes
    • Previous CPAN version: 2.150012 was 25 days before
    • Author: RJBS
  6. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260220.001 on 2026-02-20, with 25 votes
    • Previous CPAN version: 20260215.001 was 4 days before
    • Author: BRIANDFOY
  7. Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
    • Version: 9.1.0 on 2026-02-17, with 18 votes
    • Previous CPAN version: 9.0.0 was 23 days before
    • Author: CUKEBOT
  8. Getopt::Long::Descriptive - Getopt::Long, but simpler and more powerful
    • Version: 0.117 on 2026-02-19, with 58 votes
    • Previous CPAN version: 0.116 was 1 year, 1 month, 19 days before
    • Author: RJBS
  9. MIME::Lite - low-calorie MIME generator
    • Version: 3.038 on 2026-02-16, with 35 votes
    • Previous CPAN version: 3.037 was 5 days before
    • Author: RJBS
  10. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260220 on 2026-02-20, with 44 votes
    • Previous CPAN version: 5.20260119 was 1 month, 1 day before
    • Author: BINGOS
  11. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.1 on 2026-02-14, with 13 votes
    • Previous CPAN version: v2.0.0
    • Author: SANKO
  12. Net::Server - Extensible Perl internet server
    • Version: 2.018 on 2026-02-18, with 34 votes
    • Previous CPAN version: 2.017 was 8 days before
    • Author: BBB
  13. Resque - Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.
    • Version: 0.44 on 2026-02-21, with 42 votes
    • Previous CPAN version: 0.43
    • Author: DIEGOK
  14. SNMP::Info - OO Interface to Network devices and MIBs through SNMP
    • Version: 3.975000 on 2026-02-20, with 40 votes
    • Previous CPAN version: 3.974000 was 5 months, 8 days before
    • Author: OLIVER
  15. SPVM - The SPVM Language
    • Version: 0.990134 on 2026-02-20, with 36 votes
    • Previous CPAN version: 0.990133
    • Author: KIMOTO
  16. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000162 on 2026-02-20, with 28 votes
    • Previous CPAN version: 1.000161 was 8 months, 9 days before
    • Author: EXODIST
  17. WebService::Fastly - an interface to most facets of the [Fastly API](https://www.fastly.com/documentation/reference/api/).
    • Version: 14.00 on 2026-02-16, with 18 votes
    • Previous CPAN version: 13.01 was 2 months, 6 days before
    • Author: FASTLY

This is the weekly favourites list of CPAN distributions. Votes count: 53

Week's winner: Linux::Event::Fork (+2)

Build date: 2026/02/21 21:48:43 GMT



Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Data::ObjectDriver - Simple, transparent data interface, with caching
    • Version: 0.27 on 2026-02-13, with 16 votes
    • Previous CPAN version: 0.26 was 3 months, 27 days before
    • Author: SIXAPART
  2. DateTime::Format::Natural - Parse informal natural language date/time strings
    • Version: 1.25 on 2026-02-13, with 19 votes
    • Previous CPAN version: 1.24_01 was 1 day before
    • Author: SCHUBIGER
  3. Devel::Size - Perl extension for finding the memory usage of Perl variables
    • Version: 0.86 on 2026-02-10, with 22 votes
    • Previous CPAN version: 0.86 was 1 day before
    • Author: NWCLARK
  4. Marlin - 🐟 pretty fast class builder with most Moo/Moose features 🐟
    • Version: 0.023001 on 2026-02-14, with 12 votes
    • Previous CPAN version: 0.023000 was 7 days before
    • Author: TOBYINK
  5. MIME::Lite - low-calorie MIME generator
    • Version: 3.037 on 2026-02-11, with 35 votes
    • Previous CPAN version: 3.036 was 1 day before
    • Author: RJBS
  6. MIME::Body - Tools to manipulate MIME messages
    • Version: 5.517 on 2026-02-11, with 15 votes
    • Previous CPAN version: 5.516
    • Author: DSKOLL
  7. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.0 on 2026-02-13, with 13 votes
    • Previous CPAN version: 0.052 was 15 years, 10 months before
    • Author: SANKO
  8. Net::Server - Extensible Perl internet server
    • Version: 2.017 on 2026-02-09, with 34 votes
    • Previous CPAN version: 2.016 was 12 days before
    • Author: BBB
  9. Protocol::HTTP2 - HTTP/2 protocol implementation (RFC 7540)
    • Version: 1.12 on 2026-02-14, with 27 votes
    • Previous CPAN version: 1.11 was 1 year, 8 months, 25 days before
    • Author: CRUX
  10. SPVM - The SPVM Language
    • Version: 0.990130 on 2026-02-13, with 36 votes
    • Previous CPAN version: 0.990129 was 1 day before
    • Author: KIMOTO

Join us for TPRC 2026 in Greenville, SC!

Perl Foundation News

We are pleased to announce the dates of our next Perl and Raku Conference, to be held in Greenville, SC on June 26-28, 2026.  The venue is the same as last year, but we are expanding the conference to 3 days of talks/presentations across the weekend.  One or more classes will be scheduled for Monday the 29th as well. The hackathon will be running continuously from June 25 through June 29—so if you can come early or stay late, there will be opportunities for involvement with other members of the community.

Mark your calendars and save the dates!

Our website, https://www.tprc.us/, has more details, including links to reserve your hotel room and to register for the conference at the early-bird price. Watch for more updates as plans are finalized.

Our theme for 2026 is “Perl is my cast iron pan”. Perl is reliable, versatile, durable, and continues to be ever so useful! Just like your favorite cast iron pan! Raku might map to tempered steel: also quite reliable and useful, and with some very attractive updates!

We hope to see you in June!