Task 1: Match String
The Task
You are given an array of strings. Write a script to return all strings that are a substring of another word in the given array in the order they occur.
Example 1:
Input: @words = ("cat", "cats", "dog", "dogcat", "dogcat", "rat", "ratcatdogcat")
Output: ("cat", "dog", "dogcat", "rat")
The Deep Thoughts
Example 1 implies a couple of constraints. A complete match counts as a substring. Words may be repeated in the input. Repeated words are not duplicated in the output. The output order must match the input order. Noted.
For each word, we'll need to take it out of the list, check for substrings, and then put it back for the next round. A relatively simple way of doing that would be to shift it off the front of the list, do what needs to be done, and then return it by pushing it onto the back.
I don't see much of a way of getting around the idea that every word must be compared to every other word, so we're looking at an O(n²) algorithm.
One possibility for reducing the operation count might be to compare a pair of words both ways. If I have $w1 and $w2 in hand, I can check both directions for substring-i-ness. That way, instead of having to loop over the entire NxN matrix, we would only have to loop over the upper or lower triangle of the matrix. It's fewer looping operations, but still the same number of string comparisons. But it would mean more complexity in keeping track of what's been checked, so, meh.
The Code
sub matchString(@words)
{
use List::Util qw/any/;
my %seen;
my @match;
for ( 1 .. @words )
{
my $w = shift @words;
push @match, $w if ( ! $seen{$w}++ && any { index($_, $w) >= 0 } @words );
push @words, $w;
}
return \@match;
}
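A quick usage check against Example 1 (assuming the sub above is in scope):
my @words = ("cat", "cats", "dog", "dogcat", "dogcat", "rat", "ratcatdogcat");
print join(", ", @{ matchString(@words) }), "\n";   # cat, dog, dogcat, rat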
Notes:
- Use a hash (%seen) to eliminate duplicates, and coincidentally optimize a little by not searching multiple times.
- The main loop is once per word, so simply count. Normally, I would prefer something obviously tied to the array (like for (@words)), but since I'm changing the array within the loop, I don't want to have to think too hard about whether the iterator remains valid after removing and adding an element.
- index is a cheaper operation than a regular expression match. Benchmarking bears this out -- it's about three times more efficient in my environment and test data.
- List::Util::any stops as soon as something works, so it's cheaper than grep, which would evaluate the entire list every time.
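If you want to reproduce that index-versus-regex comparison, a minimal sketch with the core Benchmark module would do; the word list and needle here are made up for illustration and are not the data behind the figure above.
use strict;
use warnings;
use Benchmark qw(cmpthese);

my @words  = ("cat", "cats", "dog", "dogcat", "rat", "ratcatdogcat") x 100;
my $needle = "dogcat";

cmpthese(-2, {
    # substring search with index()
    index => sub { my $n = grep { index($_, $needle) >= 0 } @words },
    # equivalent search with a regex match on the quoted needle
    regex => sub { my $n = grep { /\Q$needle\E/ } @words },
});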
Task 2: Binary Prefix
The Task
You are given an array, @nums, where each element is either 0 or 1.
Define xi as the number formed by taking the first i+1 bits of @nums (from $nums[0] to $nums[i]) and interpreting them as a binary number, with $nums[0] being the most significant bit.
For example: If @nums = (1, 0, 1), then:
x0 = 1 (binary 1)
x1 = 2 (binary 10)
x2 = 5 (binary 101)
For each i, check whether xi is divisible by 5.
Write a script to return an array @answer where $answer[i] is true if xi is divisible by 5, otherwise false.
Example 1:
Input: @nums = (0,1,1,0,0,1,0,1,1,1)
Output: (true, false, false, false, false, true, true, false, false, false)
Explanation:
Binary numbers formed (decimal values):
0: 0 (false)
01: 1 (false)
011: 3 (false)
0110: 6 (false)
01100: 12 (false)
011001: 25 ( true)
0110010: 50 ( true)
01100101: 101 (false)
011001011: 203 (false)
0110010111: 407 (false)
The Deep Thoughts
Weird flex, but OK.
This puts me in mind of bit-twiddling with C code, but bit-twiddling in Perl is equally possible. The other alternative is probably to build strings and do binary number conversions by prefixing with '0b'.
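For the record, that string-building alternative might look something like this sketch (written in the same signature style as the solution below, which it is not a replacement for):
# Sketch of the string-building alternative: grow the binary string one
# bit at a time and let oct('0b...') do the conversion.
sub bpreStrings(@nums)
{
    my $bits = '';
    my @isFive;
    for my $bit (@nums)
    {
        $bits .= $bit;
        push @isFive, ( oct("0b$bits") % 5 == 0 );
    }
    return \@isFive;
}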
I kind of want to do a list-based solution, using map to convert each 0 or 1 to a true or false, but I think that would be obfuscating a pretty simple loop.
The Code
sub bpre(@nums)
{
my @isFive;
my $b = 0;
while ( defined(my $bit = shift @nums) )
{
$b = ($b << 1) | $bit;
push @isFive, ( $b % 5 == 0 );
}
return \@isFive;
}
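Feeding Example 1 through it as a quick check:
my $answer = bpre(0,1,1,0,0,1,0,1,1,1);
print join(", ", map { $_ ? 'true' : 'false' } @$answer), "\n";
# true, false, false, false, false, true, true, false, false, false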
Notes:
- Destroy the input by shifting a 1 or 0 off the left until the list is gone.
- Do bit operations to form a new number.
- Build an answer array consisting of boolean values.
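One variation worth noting (not part of the solution above): for very long inputs the accumulated number would eventually overflow a native integer, and since only divisibility by 5 matters, it is enough to carry the remainder forward:
sub bpreMod(@nums)
{
    my $r = 0;
    my @isFive;
    while ( defined(my $bit = shift @nums) )
    {
        # Keep only the running remainder modulo 5, so the accumulator
        # never grows beyond a single digit.
        $r = ( ($r << 1) | $bit ) % 5;
        push @isFive, ( $r == 0 );
    }
    return \@isFive;
}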
Musical Interlude
The solution to task 1 reminds me of Katy Perry's Hot N Cold -- "You're yes, then you're no. You're in, then you're out."
And for Task 2, Mambo Number 5 is oddly appropriate. "Take one step left and one step right, One to the front and one to the side. Clap your hands once and clap your hands twice, And if it looks like this, then you're doin' it right."
perldelta - Fix spelling of modernize, as per cb60726 (-ize also happens to be the more widely used UK spelling, in addition to being the US spelling; -ise is a (UK-specific) variant spelling)
perldelta - Document new/changed diagnostics
perldelta - Copy-editing, update modules and fill in GH links
perldelta - Remove boilerplate
perldelta for f3c8c58
Foswiki 2.1.10 can now be downloaded - landing right before Christmas, a full year since the last version dropped. Please be advised that this release includes several security fixes that require your attention. We would like to express our gratitude to Evgeny Kopytin of Positive Technologies for conducting a thorough audit of Foswiki and providing a comprehensive vulnerability report. Despite adhering closely to our security procedures, we were unable to obtain a response from the CVE Assignment Team regarding the allocation of official CVE-IDs. It is for this reason that the new security alerts covered by the 2.1.10 release had to be documented with a "CVE-2025-Unassigned" tag, since no better option was available.
See the release notes for additional information.

Does the Perl world need another object-oriented programming framework?
To be honest, probably not.
But here’s why you might want to give Marlin a try anyway.
Most of your constructors and accessors will be implemented in XS and be really, really fast.
If you accept a few basic principles like “attributes should usually be read-only”, it can be really, really concise to declare a class and its attributes.
An example
use v5.20.0;
use experimental qw(signatures);
# Import useful constants, types, etc.
use Marlin::Util -all, -lexical;
use Types::Common -all, -lexical;
package Person {
use Marlin
'given_name!' => NonEmptyStr,
'family_name!' => NonEmptyStr,
'name_style' => { enum => [qw/western eastern/], default => 'western' },
'full_name' => { is => lazy, builder => true },
'birth_date?';
sub _build_full_name ( $self ) {
return sprintf( '%s %s', uc($self->family_name), $self->given_name )
if $self->name_style eq 'eastern';
return sprintf( '%s %s', $self->given_name, $self->family_name );
}
}
package Payable {
use Marlin::Role
-requires => [ 'bank_details' ];
sub make_payment ( $self ) {
...;
}
}
package Employee {
use Marlin
-extends => [ 'Person' ],
-with => [ 'Payable' ],
'bank_details!' => HashRef,
'employee_id!' => Int,
'manager?' => { isa => 'Employee' };
}
my $manager = Employee->new(
given_name => 'Simon',
family_name => 'Lee',
name_style => 'eastern',
employee_id => 1,
bank_details => {},
);
my $staff = Employee->new(
given_name => 'Lea',
family_name => 'Simons',
employee_id => 2,
bank_details => {},
manager => $manager,
);
printf(
"%s's manager is %s.\n",
$staff->full_name,
$staff->manager->full_name,
) if $staff->has_manager;
Some things you might notice:
It supports most of the features of Moose… or most of the ones you actually use anyway.
Declaring an attribute is often as simple as listing its name on the use Marlin line. It can be followed by some options, but if you’re happy with Marlin’s defaults (read-only attributes), it doesn’t need to be.
You can use ! to quickly mark an attribute as required instead of the longer { required => true }.
You can use ? to request a predicate method instead of the longer { predicate => true }.
Benchmarks
My initial benchmarking shows that Marlin is fast.
Constructors
Rate Tiny Plain Moo Moose Marlin Core
Tiny 1317/s -- -2% -48% -53% -54% -72%
Plain 1340/s 2% -- -47% -53% -53% -72%
Moo 2527/s 92% 89% -- -11% -12% -47%
Moose 2828/s 115% 111% 12% -- -2% -40%
Marlin 2873/s 118% 114% 14% 2% -- -39%
Core 4727/s 259% 253% 87% 67% 65% --
Only the new Perl core class keyword generates a constructor faster than Marlin’s. And it is significantly faster; there’s no denying that. However, object construction is only part of what you are likely to need.
Accessors
Rate Tiny Moose Plain Core Moo Marlin
Tiny 17345/s -- -1% -3% -7% -36% -45%
Moose 17602/s 1% -- -2% -6% -35% -44%
Plain 17893/s 3% 2% -- -4% -34% -44%
Core 18732/s 8% 6% 5% -- -31% -41%
Moo 27226/s 57% 55% 52% 45% -- -14%
Marlin 31688/s 83% 80% 77% 69% 16% --
By accessors, I’m talking about not just standard getters and setters, but also predicate methods and clearers. Marlin and Moo both use Class::XSAccessor when possible, giving them a significant lead over the others. Marlin uses some sneaky tricks to squeeze out a little bit of extra performance by creating aliases for parent class methods directly in the child class symbol tables, allowing Perl to bypass a lot of the normal method resolution stuff.
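To illustrate the general idea behind that symbol-table aliasing (a plain-Perl sketch of the technique, not Marlin’s actual internals):
use strict;
use warnings;

package Parent;
sub new   { bless {}, shift }
sub greet { "hello from " . ref $_[0] }

package Child;
our @ISA = ('Parent');
{
    # Alias the parent's methods directly into the child's symbol table,
    # so calls on Child objects skip the @ISA walk at call time.
    no strict 'refs';
    *{'Child::new'}   = \&Parent::new;
    *{'Child::greet'} = \&Parent::greet;
}

package main;
print Child->new->greet, "\n";   # hello from Child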
I really expected class to do a lot better than it does. Its readers and writers are basically implemented in pure Perl currently, though I guess there’s scope to improve them in future releases.
Native Traits / Handles Via / Delegations
Rate Tiny Core Plain Moose Moo Marlin
Tiny 675/s -- -56% -57% -59% -61% -61%
Core 1518/s 125% -- -4% -8% -13% -13%
Plain 1581/s 134% 4% -- -4% -9% -10%
Moose 1642/s 143% 8% 4% -- -5% -6%
Moo 1736/s 157% 14% 10% 6% -- -1%
Marlin 1752/s 160% 15% 11% 7% 1% --
If you don’t know what I mean by native traits, it’s the ability to create small methods like this:
sub add_language ( $self, $lang ) {
push $self->languages->@*, $lang;
}
As part of the attribute definition:
use Marlin
languages => {
is => 'ro',
isa => ArrayRef[Str],
default => [],
handles_via => 'Array',
handles => { add_language => 'push', count_languages => 'count' },
};
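A hedged usage sketch, assuming the attribute above is declared inside a hypothetical Person class:
my $person = Person->new( languages => [ 'English' ] );
$person->add_language('Welsh');           # delegates to push
print $person->count_languages, "\n";     # delegates to count; prints 2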
There’s not an awful lot of difference between the performance of most of these, but Marlin slightly wins. Marlin and Moose are also the only frameworks that include this out of the box without needing extension modules.
By the way, that default => [] was not a typo. You can set an empty arrayref or empty hashref as a default, and Marlin will assume you meant something like default => sub { [] }, but it cleverly skips over needing to actually call the coderef (slow), instead creating a reference to a new empty array in XS (fast)!
Combined
Rate Tiny Plain Core Moose Moo Marlin
Tiny 545/s -- -48% -56% -58% -60% -64%
Plain 1051/s 93% -- -16% -19% -22% -31%
Core 1249/s 129% 19% -- -4% -8% -18%
Moose 1304/s 139% 24% 4% -- -4% -14%
Moo 1355/s 148% 29% 8% 4% -- -11%
Marlin 1519/s 179% 45% 22% 17% 12% --
A realistic bit of code that constructs some objects and calls a bunch of accessors and delegations on them. Marlin performs very well.
Lexical accessors and private attributes
Marlin has first class support for lexical methods!
use v5.42.0;
package Widget {
use Marlin
name => { isa => Str },
internal_id => { reader => 'my internal_id', storage => 'PRIVATE' };
...
printf "%d: %s\n", $w->&internal_id, $w->name,
}
# dies because internal_id is lexically scoped
Widget->new->&internal_id;
Support for the ->& operator was added in Perl 5.42. On older Perls (from Perl 5.12 onwards), lexical methods are still supported but you need to use function call syntax (internal_id($w)).
The storage => "PRIVATE" hint tells Marlin to use inside-out storage for that attribute, meaning that trying to access the internal_id by poking into the object’s internals ($obj->{internal_id}) won’t work.
This gives you true private attributes.
On Perl 5.18 and above, you can of course declare lexical methods using the normal my sub foo syntax, so you have private attributes as well as private methods.
Constant attributes
package Person {
use Marlin
name => { isa => Str, required => true },
species_name => { isa => Str, constant => "Homo sapiens" };
}
Constant attributes are declared like regular attributes, but are always read-only and cannot be passed to the constructor.
Like other attributes, they support delegations, provided the delegated method isn’t one which could change the value.
Perl version support
Although some of the lexical features need newer versions of Perl, Marlin runs on Perl versions as old as 5.8.8.
Future directions
Some ideas I’ve had:
- If Moose is loaded, create meta object protocol stuff for Marlin classes and roles, like Moo does.
When trying to match the "randomart" part of ssh-keygen's output using Expect, expect() finds an empty string after the "randomart", but not the expected eof, but I don't understand why.
I'm using this code fragment:
# other values are:
# DB<2> x CMD_SSH_KEYGEN, @params
#0 'ssh-keygen'
#1 '-a'
#2 123
#3 '-b'
#4 4096
#5 '-f'
#6 'keys/host-aa71f91580'
#7 '-t'
#8 'rsa'
#9 '-C'
#10 'testXXX'
#11 '-N'
#12 'Rssg2(pm)j'
if (defined(my $exp = Expect->spawn(CMD_SSH_KEYGEN, @params))) {
my $timeout = 30;
my $CRLF = '\r?\n';
$exp->exp_internal(1); # for debugging
$exp->log_stdout(0);
my ($match_pos, $err, $match, $before, $after) = $exp->expect(
$timeout,
# ... matches for other parts
[qr/^The key's randomart image is:$CRLF/,
sub ($$) {
my ($exp, $tag) = @_;
my $r;
while (($r = $exp->expect(
$timeout,
'-re', qr/^[|+](.+?)[|+]$CRLF/,
'eof')) && $r == 1) {
if (my $m = $exp->match()) {
$m =~ s/$CRLF\$//;
print "match($tag): ", $m, "\n";
}
}
return exp_continue_timeout;
},
'randomart'],
[qr/^.*?$CRLF/,
sub ($$) {
my ($exp, $tag) = @_;
$exp = $exp->match(); # ugly, sorry!
$exp =~ s/$CRLF\$//;
print "match($tag): ", $exp, "\n";
return exp_continue_timeout;
},
'default'],
# ... more matches
);
When debugging I see:
#...
spawn id(7): list of patterns:
#1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'
#2: -ex `eof'
spawn id(7): Does `+----[SHA256]-----+\r\n'
match:
pattern #1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'? YES!!
Before match string: `'
Match string: `+----[SHA256]-----+\r\n'
After match string: `'
Matchlist: (`----[SHA256]-----')
\atch(randomart): "+----[SHA256]-----+\
"
#...
spawn id(7): list of patterns:
#1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'
#2: -ex `eof'
spawn id(7): Does `'
match:
pattern #1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'? No.
pattern #2: -ex `eof'? No.
Continuing expect...
^Z
I pressed ^Z when expect was waiting for timeout. So the obvious problem is the empty string when I expected "eof"; what's wrong?
(The other thing is that "match(randomart):..." got mangled to "\atch(randomart):..." for some reason.)
When I redirect the command output to a file, then +----[SHA256]-----+ is the last line.
So maybe when testing use this output:
Generating public/private rsa key pair.
Your identification has been saved in keys/host-a03a9693b2
Your public key has been saved in keys/host-a03a9693b2.pub
The key fingerprint is:
SHA256:d85NzqB1gvlyxUjP27q4KlzqMRklpb2zGbqwLlbezr4 testXXX
The key's randomart image is:
+---[RSA 4096]----+
| . |
| + |
| o o . |
| o = = |
| S B * B |
| . =.% X o |
| o.o=o= = = .|
| o .+=+ o . . |
| . oo+Eo..o.o. |
+----[SHA256]-----+
Experiment
When I added the empty string to the list of patterns, then the wait for timeout does not happen, but still I don't understand why eof does not match instead:
...
spawn id(7): list of patterns:
#1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'
#2: -ex `'
#3: -ex `eof'
spawn id(7): Does `'
match:
pattern #1: -re `(?^:^[|+](.+?)[|+]\\r?\\n)'? No.
pattern #2: -ex `'? YES!!
Before match string: `'
Match string: `'
After match string: `'
Matchlist: ()
Continuing expect...
...
Software versions used are: perl 5.26.1, Expect.pm 1.35
Our SaaS application is used by thousands of businesses to track, test and optimize their online marketing. And as part of the core development team you'll help us to build great new features, you'll help refactor old legacy code, and everything else in between.
Everything you do will be super important, you'll directly help improve the lives and businesses of our customers, and you'll have the opportunity to participate in the growth of the business as we take things to the next level.
If you're just looking for a job and a paycheck, this is not for you.
We're looking for someone who, once we both determine we're a great fit, is willing and excited about making a commitment to joining our team and riding the wave with us for at least the next few years.
This is a full-time remote position. W2 or contract, whichever you prefer.
I get a "Prototype mismatch" warning in code that either imports 'blessed' from Scalar::Util or defines it depending on the version of Scalar::Util. Is there a way to suppress the warning, or do I need to turn off signatures and add a prototype to my own blessed() sub as shown below? Am I correct that the latter is only a good practice if Scalar::Util::blessed will have a prototype forever? (I don't understand why it has a prototype now.)
use strict; use warnings;
use feature qw( unicode_strings signatures );
no warnings 'experimental::signatures';
use utf8;
use version;
require Scalar::Util;
if ( version->parse( Scalar::Util->VERSION ) >= version->parse( '1.53' ) ) {
STDERR->print( "Using Scalar::Util::blessed()\n" );
Scalar::Util->import( 'blessed' );
}
else {
STDERR->print( "Using our own blessed()\n" );
no feature 'signatures';
sub blessed($) {
# Workaround for Scalar-List-Utils bug #124515 fixed in 1.53 (Perl v5.31.6)
my $class = Scalar::Util::blessed($_[0]);
utf8::decode($class) if defined($class) && ! utf8::is_utf8($class);
return $class;
}
}
sub new ( $class, %arg ) {
return bless( { CODE => '', %arg }, $class );
}
In my previous post, in February, I announced the overhaul of the MailBox software. The MailBox suite of distributions implement automatic email handling processes. I started development back in 1999, so it had aged a bit. And I can now proudly tell you that the work has been completed!
As you may have experienced yourself: software ages. It's not directly that it does not work anymore, however your own opinion about programming, the features of the language and libraries you use, and the source specifications keep on changing. Basic maintenance picks some of the low-hanging fruits as refreshment, but you usually stay away from major rewrites. Well, the marvelous NLnet Foundation helped me to realize just that!
Some of the changes:
- Supported Perl went from 5.8.5 to 5.16 (2015), to get some nice syntax features like `//` and `s///r`
- OODoc improvements make the documentation even more manageable
- Major updates on the documentation text
- New HTML version of the generated docs, purely using client-side rendering
- Code simplifications and less line folding (80 -> 132 chars)
- Some code-style preference changes
- Replaced the error handling with real exceptions
The latter change breaks backwards compatibility: major version 3 releases will be maintenance releases for the old error handling, while major version 4 releases will be the main branch for maintenance and development with the new system.
Thankfully, the software has over 15,000 regression tests. Also, cpantesters is so incredibly useful, with its active disciples Slaven and Andreas! Thanks so much, guys! However, I do expect some fall-out, especially because exceptions are produced at error conditions, and errors rarely occur. Hence, the error code paths are rarely exercised and tend to contain the most issues.
As module author, I am always glad to receive bug-reports; it's much better to have bugs (even tiny typos) fixed, than have them linger around. Help authors by reporting issues!
For exception infra, I picked (my own) Log::Report, which is extremely powerful. I should post a few examples on this medium soon. It integrates nicely with many other exception frameworks, but also opens the path to translation.
Many, many thanks to NLnet and cpantesters.
A working link for Tom Christiansen's slides on "Unicode, The Good, the Bad, and the (mostly) Ugly" is at https://web.archive.org/web/20121224081332/http://98.245.80.27/tcpc/OSCON2011/gbu.html. (We are writing a book on debugging at home, and I needed a usable link to Tom's talk.)
A simple proxy in Perl runs as a CGI on the webserver of my ISP. Its purpose is to forward HTTPS GET and POST requests to an HTTP webserver running on my PC. With that, HTTPS works without needing any TLS certificate or domain on my PC.
It generally works, but the original headers can't be forwarded, because I have not found a method for accessing them. I can only get them via %ENV, but then "Content-Type" becomes "CONTENT_TYPE", and that of course isn't recognized by the PC webserver.
Is there a way to access the original headers? I don't want to retranslate them to the original ones. There should be an easier way, but I found nothing on the web.
Hello! A long time ago a friend suggested that I learn to use POE with Perl. Is the POE framework still being maintained? I would like to build my own IDS as a hobby.
Doomed
It is an unfortunate fact of life reflected in the stages of man, that we start off facing problems looking to others to solve these problems. Later we learn to solve these problems ourselves, we teach others to do the same. After that we delegate problem solving to those we have taught but find that as our own capacity diminishes, those that come after us simply ask an AI to do that which we struggled to learn in the past. A steady spiral ensuring future humanity’s cognitive decline, fuelled by the genius of its ancestors. We had become masters of our destiny only to hand it over to machines, because we hope machines will do it better. Perhaps they will.
In my job, tools that were created to make our job easier demand data from us, enforce protocols and are the exclusive conduit for information. Thus in our so-called “caring profession”, the modern doctor spends as much time staring at a monitor as looking at patients, more hands on keyboard than hands-on examination, relying more on scans than an unreliable clinical acumen. Indeed this future may be safer, and it is foolish to value the old system of compassionate care delivery just because dispassionate algorithms have dispensed with the need for a human touch.
Clouded Judgement
Enter one of my newest colleagues; let’s call him Waq. A gentle giant of a youth, who looked like he could beat you to a pulp with a cheerful handshake. This big-brained surgeon whose head was always in the clouds, had discovered LLMs residing in those clouds and wished to bring them to the reach of the lesser mortals such as myself.
“I know what you need!” he announced. Of course I knew exactly what I needed, and it didn’t involve a smart-alec youth (even if he is 8 feet tall, with arms the size of my thighs) telling me.
“Oh, hello Waq”, I said putting on my well-practised fake sincerity, “I was just hoping you would come along and tell me.”
Waq didn’t need any encouragement, seeming to derive an unending supply of enthusiasm from the ether, “You need A Cloud”.
“Goodness, you’re right!” I said, “A Cloud, you say? Lenticular or Cumulonimbus do you think?”
“You know how you hate AI and think it will take our jobs, end the world and so on?”
“I have seen Terminator, Waq”, I said “Of course I know what’s coming.”, with visions of Waq revealing himself as a cyborg with living biomimetic skin grown over a metallic endoskeleton.
“A Two-Way Interface for a Dynamic Digital Learning Experience and Deterministic Encoding of Expertise”, he declared. “Now this will use experts to encode their knowledge, experience and available evidence in a form that can be used to train other individuals, rather than rely on some dependency-inducing neural network on a server somewhere.”
TWIDDLE-DEE? Twiddle dumb, I thought. But this was going way over my simpleton head, “Ah…and how is this better than, I don’t know, something archaic, say a textbook?”
Waq’s face clouded over, “Well it will have a Cloud, be Digital, Dynamic, and Deterministic.”
“Did you ask ChatGPT to come up with this?”
Waq realised how easily he had fallen into the trap created by an AI that was not above subterfuge in the guise of being helpful.
“It’s ok, Waq.”, I consoled him, “This is exactly why you should watch 80s Sci-Fi on VHS tapes instead of adding to the training dataset of those silicon cybernetic systems planning to take over the world.”
“It’s too late, isn’t it?”
I would have patted him on his shoulder if I could have reached it. But an idea struck me. There is another kind of cloud that could be useful. “The future is not set, Waq”, I said, “You’ll be back.”
Interactive Word Cloud
Examples of Word Cloud Generators abound on-line. While useful for one-off projects, it is handy to have a flexible, easily configurable module in your own language of choice. MetaCPAN has a couple of options. c0bra (Brian Hann) created Image::WordCloud to generate raster Word Cloud images. It is rather clever, depending on SDL to determine the dimensions of a word to create an attractive word cloud. Sarah Roberts created HTML::TagCloud to generate HTML tags.

I am developing yet another option, this time using addressable elements to generate Word Clouds in svg and html. This potentially allows dynamic manipulation and interaction with the cloud. One possible goal is to develop a deterministic tree connecting multiple such word clouds as part of an expert system.
My early effort is CloudElement
Originally published at Perl Weekly 751
Hi there,
Ten days ago I participated in an online event organized by the Toronto Perl Mongers, where we had some really nice presentations about OpenQA and the Open Build Service, both written (partially) in Perl. I hope they are going to organize more such events about other projects written in Perl.
Last week I spent some time tracking down the LinkedIn profile of some of the authors who had entries included in the Perl Weekly and I also managed to include the picture of quite a few people. However I'd like to get your help to further enhance this.
All the authors are described in the authors.json file. I'd love to get your help in tracking down the missing details. For some authors we don't even have the names. Many are missing the image and the other details you might find for others. In addition I'd like to further enhance the listing by adding a link to the github and gitlab profile of each author. Would you like to help? Send a Pull-Request!
If we are already talking about contribution, we are going to have another online Perl Maven event contributing to a CPAN module. Register now!
Enjoy your week!
--
Your editor: Gabor Szabo.
Articles
Teaching Art to Computers the Hard Way
How many of you were told that "You shouldn't do art" by some teacher? Was Mondrian told the same? Here Ruth breaks the spell using Moo, SVG::Simple, and Imager.
The Ghost of Web Frameworks Future
PAGI, Perl Asynchronous Gateway Interface: The Spiritual Successor to PSGI. discuss
Layout strategy for a script with supporting functions
Talking about your in-house solution, getting feedback, getting suggestions how to pick a name for it, where is your public repo?
A Pod plugin for VSCode
The Night Before Deployment: How Melian Saved Christmas (and How It Can Speed Up Your App Too)
December 23rd is the annual Mega Load Test for the Christmas Eve delivery system. Every service (the Toy Inventory API, the Naughty-or-Nice Scoring Engine, the Elf Logistics Portal) suddenly wakes up to millions of requests. And every year, something new falls over. This year, it was the Toy Service.
Thirty Slices, Twenty-Four Days: How Christmas Was Saved By Abandoning Estimation
The Twelve Slices Of Christmas: How Vasco Chained the Chaos
Behind the scenes at Perl School Publishing
Grants
Maintaining Perl (Tony Cook) November 2025
Tony wrote: "In addition to the typical stream of small changes to review, Dave's second AST rebuild of ExtUtils::ParseXS arrived (#23883), and I spent several hours reviewing it."
PEVANS Core Perl 5: Grant Report for November 2025
Paul wrote: "A mix of things this month, though I didn't get much done in the final week because of preparations for my talk at LPW2025. A useful event though because a few ideas came out of discussions that I shall be looking at for core perl soon."
Maintaining Perl 5 Core (Dave Mitchell): November 2025
Dave wrote: "Last month was relatively quiet. I worked on a couple of bugs and did some final updates to my branch which rewrites perlxs.pod - which I intend to merge in the next few days."
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 352
Welcome to a new week with a couple of fun tasks "Match String" and "Binary Prefix". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 351
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Special Average" and "Arithmetic Progression" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
TWC351
The post provides concise, runnable Perl code that solves the stated problems for typical cases.
Special Progression
This is a well-crafted, educational post. Arne successfully solves the challenges with idiomatic Raku, provides clear explanations, and thoughtfully explores the efficiency vs. elegance trade-off by implementing multiple solutions.
Average Progression
The Special Average task uses a standard and efficient approach. The solutions correctly filter out the minimum and maximum values and calculate the average of the remaining numbers. The Arithmetic Progression task employs a highly advanced and unconventional method.
Perl Weekly Challenge 351
This solution takes a mathematical, array-oriented approach using Perl Data Language (PDL), demonstrating sophisticated numerical computing techniques rather than traditional Perl list processing.
A pretty average progression…
The post presents clean, readable, and correct solutions to both programming tasks in Raku, Perl, Python, and Elixir. It adopts a straightforward, practical approach without unnecessary complexity. Packy thoughtfully acknowledges trade-offs in their design choices.
Fun with arrays
This is a well-structured, practical implementation with good documentation and error handling. Peter demonstrates solid Perl programming practices while making reasonable design decisions based on their interpretation of the problem requirements.
The Weekly Challenge #351
This is a professional, well-documented, robust implementation with excellent attention to detail, defensive programming, and clean code organization. Robbie demonstrates advanced Perl expertise with thoughtful design choices.
Special Arithmetic
The blog post provides functionally correct and easy-to-understand solutions for two programming tasks. Roger makes deliberate, practical choices in their implementations, explicitly favoring simplicity over micro-optimizations for problems of "this scale."
Average Progression
This post presents concise, readable solutions to two programming tasks in Perl and Python. The solutions are algorithmically correct, efficient, and practically focused, making them suitable for real-world use.
Weekly collections
NICEPERL's lists
Great CPAN modules released last week.
Events
Perl Maven online: Live Open Source contribution
December 26, 2025
Boston.pm - online
January 13, 2026
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
We’ve just published a new Perl School book: Design Patterns in Modern Perl by Mohammad Sajid Anwar.
It’s been a while since we last released a new title, and in the meantime, the world of eBooks has moved on – Amazon don’t use .mobi any more, tools have changed, and my old “it mostly works if you squint” build pipeline was starting to creak.
On top of that, we had a hard deadline: we wanted the book ready in time for the London Perl Workshop. As the date loomed, last-minute fixes and manual tweaks became more and more terrifying. We really needed a reliable, reproducible way to go from manuscript to “good quality PDF + EPUB” every time.
So over the last couple of weeks, I’ve been rebuilding the Perl School book pipeline from the ground up. This post is the story of that process, the tools I ended up using, and how you can steal it for your own books.
The old world, and why it wasn’t good enough
The original Perl School pipeline dates back to a very different era:
- Amazon wanted .mobi files.
- EPUB support was patchy.
- I was happy to glue things together with shell scripts and hope for the best.
It worked… until it didn’t. Each book had slightly different scripts, slightly different assumptions, and a slightly different set of last-minute manual tweaks. It certainly wasn’t something I’d hand to a new author and say, “trust this”.
Coming back to it for Design Patterns in Modern Perl made that painfully obvious. The book itself is modern and well-structured; the pipeline that produced it shouldn’t feel like a relic.
Choosing tools: Pandoc and wkhtmltopdf (and no LaTeX, thanks)
The new pipeline is built around two main tools:
- Pandoc – the Swiss Army knife of document conversion. It can take Markdown/Markua plus metadata and produce HTML, EPUB, and much, much more.
- wkhtmltopdf – which turns HTML into a print-ready PDF using a headless browser engine.
Why not LaTeX? Because I’m allergic. LaTeX is enormously powerful, but every time I’ve tried to use it seriously, I end up debugging page breaks in a language I don’t enjoy. HTML + CSS I can live with; browsers I can reason about. So the PDF route is:
- Markdown → HTML (via Pandoc) → PDF (via wkhtmltopdf)
And the EPUB route is:
- Markdown → EPUB (via Pandoc) → validated with epubcheck
The front matter (cover page, title page, copyright, etc.) is generated with Template Toolkit from a simple book-metadata.yml file, and then stitched together with the chapters to produce a nice, consistent book.
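In outline, the conversion steps look something like this minimal Perl driver (the file names here are hypothetical, and the real make_book does considerably more, including the templated front matter):
#!/usr/bin/perl
use strict;
use warnings;

my @chapters = glob 'manuscript/*.md';

# Markdown -> HTML -> PDF
system('pandoc', '-o', 'built/book.html', @chapters) == 0
    or die "pandoc (HTML) failed: $?";
system('wkhtmltopdf', 'built/book.html', 'built/book.pdf') == 0
    or die "wkhtmltopdf failed: $?";

# Markdown -> EPUB
system('pandoc', '-o', 'built/book.epub', @chapters) == 0
    or die "pandoc (EPUB) failed: $?";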
That got us a long way… but then a reader found a bug.
The iBooks bug report
Shortly after publication, I got an email from a reader who’d bought the Leanpub EPUB and was reading it in Apple Books (iBooks). Instead of happily flipping through Design Patterns in Modern Perl, they were greeted with a big pink error box.
Apple’s error message boiled down to:
There’s something wrong with the XHTML in this EPUB.
That was slightly worrying. But, hey, every day is a learning opportunity. And, after a bit of digging, this is what I found out.
EPUB 3 files are essentially a ZIP containing:
-
XHTML content files
-
a bit of XML metadata
-
CSS, images, and so on
Apple Books is quite strict about the “X” in XHTML: it expects well-formed XML, not just “kind of valid HTML”. So when working with EPUB, you need to forget all of that nice HTML5 flexibility that you’ve got used to over the last decade or so.
The first job was to see if we could reproduce the error and work out where it was coming from.
Discovering epubcheck
Enter epubcheck.
epubcheck is the reference validator for EPUB files. Point it at an .epub and it will unpack it, parse all the XML/XHTML, check the metadata and manifest, and tell you exactly what’s wrong.
Running it on the book immediately produced this:
Fatal Error while parsing file: The element type br must be terminated by the matching end-tag </br>.
That’s the XML parser’s way of saying:
- In HTML, <br> is fine.
- In XHTML (which is XML), you must use <br /> (self-closing) or <br></br>.
And there were a number of these scattered across a few chapters.
In other words: perfectly reasonable raw HTML in the manuscript had been passed straight through by Pandoc into the EPUB, but that HTML was not strictly valid XHTML, so Apple Books rejected it. I should note at this point that Pandoc’s documentation explicitly says it won’t touch raw HTML fragments it finds in a Markdown file when converting to EPUB. It’s down to the author to ensure they’re using valid XHTML.
A quick (but not scalable) fix
Under time pressure, the quickest way to confirm the diagnosis was:
-
Unzip the generated EPUB.
-
Open the offending XHTML file.
-
Manually turn <br> into <br /> in a couple of places.
Re-zip the EPUB.
-
Run
epubcheckagain. -
Try it in Apple Books.
That worked. The errors vanished, epubcheck was happy, and the reader confirmed that the fixed file opened fine in iBooks.
But clearly:
Open the EPUB in a text editor and fix the XHTML by hand
is not a sustainable publishing strategy.
So the next step was to move from “hacky manual fix” to “the pipeline prevents this from happening again”.
HTML vs XHTML, and why linters matter
The underlying issue is straightforward once you remember it:
-
HTML is very forgiving. Browsers will happily fix up all kinds of broken markup.
-
XHTML is XML, so it’s not forgiving:
-
empty elements must be self-closed (
<br />,<img />,<hr />, etc.), -
tags must be properly nested and balanced,
-
attributes must be quoted.
-
EPUB 3 content files are XHTML. If you feed them sloppy HTML, some readers (like Apple Books) will just refuse to load the chapter.
So I added a manuscript HTML linter to the toolchain, before we ever get to Pandoc or epubcheck.
Roughly, the linter:
-
Reads the manuscript (ignoring fenced code blocks so it doesn’t complain about
<in Perl examples). -
Extracts any raw HTML chunks.
-
Wraps those chunks in a temporary root element.
-
Uses
XML::LibXMLto check they’re well-formed XML. -
Reports any errors with file and line number.
It’s not trying to be a full HTML validator; it’s just checking: “If this HTML ends up in an EPUB, will the XML parser choke?”
That would have caught the <br> problem before the book ever left my machine.
Hardening the pipeline: epubcheck in the loop
The linter catches the obvious issues in the manuscript; epubcheck is still the final authority on the finished EPUB.
So the pipeline now looks like this:
-
Lint the manuscript HTML
Catch broken raw HTML/XHTML before conversion. -
Build PDF + EPUB via
make_book-
Generate front matter from metadata (cover, title pages, copyright).
-
Turn Markdown + front matter into HTML.
-
Use
wkhtmltopdffor a print-ready PDF. -
Use Pandoc for the EPUB.
-
-
Run
epubcheckon the EPUB
Ensure the final file is standards-compliant. -
Only then do we upload it to Leanpub and Amazon, making it available to eager readers.
The nice side-effect of this is that any future changes (new CSS, new template, different metadata) still go through the same gauntlet. If something breaks, the pipeline shouts at me long before a reader has to.
Docker and GitHub Actions: making it reproducible
Having a nice Perl script and a list of tools installed on my laptop is fine for a solo project; it’s not great if:
-
other authors might want to build their own drafts, or
-
I want the build to happen automatically in CI.
So the next step was to package everything into a Docker image and wire it into GitHub Actions.
The Docker image is based on a slim Ubuntu and includes:
-
Perl +
cpanm+ all CPAN modules from the repo’scpanfile -
pandoc -
wkhtmltopdf -
Java +
epubcheck -
The Perl School utility scripts themselves (
make_book,check_ms_html, etc.)
The workflow in a book repo is simple:
-
Mount the book’s Git repo into
/work. -
Run
check_ms_htmlto lint the manuscript. -
Run
make_bookto buildbuilt/*.pdfandbuilt/*.epub. -
Run
epubcheckon the EPUB. -
Upload the
built/artefacts.
GitHub Actions then uses that same image as a container for the job, so every push or pull request can build the book in a clean, consistent environment, without needing each author to install Pandoc, wkhtmltopdf, Java, and a large chunk of CPAN locally.
Why I’m making this public
At this point, the pipeline feels:
-
modern (Pandoc, HTML/CSS layout, EPUB 3),
-
robust (lint +
epubcheck), -
reproducible (Docker + Actions),
-
and not tied to Perl in any deep way.
Yes, Design Patterns in Modern Perl is a Perl book, and the utilities live under the “Perl School” banner, but nothing is stopping you from using the same setup for your own book on whatever topic you care about.
So I’ve made the utilities available in a public repository (the perlschool-util repo on GitHub). There you’ll find:
-
the build scripts,
-
the Dockerfile and helper script,
-
example GitHub Actions configuration,
-
and notes on how to structure a book repo.
If you’ve ever thought:
I’d like to write a small technical book, but I don’t want to fight with LaTeX or invent a build system from scratch…
then you’re very much the person I had in mind.
eBook publishing really is pretty easy once you’ve got a solid pipeline. If these tools help you get your ideas out into the world, that’s a win.
And, of course, if you’d like to write a book for Perl School, I’m still very interested in talking to potential authors – especially if you’re doing interesting modern Perl in the real world.
The post Behind the scenes at Perl School Publishing first appeared on Perl Hacks.
Weekly Challenge 351
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
Task 1: Special Average
Task
You are given an array of integers.
Write a script to return the average excluding the minimum and maximum of the given array.
My solution
This task doesn't require much explanation. I start by calculating the min_value and max_value. I then have a list (array in Perl) called short_list which has the original integers with any min_value or max_value values removed.
If the short_list is empty, I return 0. Otherwise I return the average ( sum ÷ length ) of the short list.
def special_average(ints: list) -> float:
min_value = min(ints)
max_value = max(ints)
short_list = [n for n in ints if n != min_value and n != max_value]
if not short_list:
return 0
return sum(short_list)/len(short_list)
The Perl solution has the same logic, and uses the grep function to remove values.
sub main (@ints) {
my $min_value = min(@ints);
my $max_value = max(@ints);
my @short_array = grep { $_ != $min_value && $_ != $max_value } @ints;
if ($#short_array == -1) {
say 0;
return;   # avoid dividing by zero below
}
say sum(@short_array) / scalar(@short_array);
}
Examples
$ ./ch-1.py 8000 5000 6000 2000 3000 7000
5250.0
$ ./ch-1.py 100000 80000 110000 90000
95000.0
$ ./ch-1.py 2500 2500 2500 2500
0
$ ./ch-1.py 2000
0
$ ./ch-1.py 1000 2000 3000 4000 5000 6000
3500.0
Task 2: Arithmetic Progression
Task
You are given an array of numbers.
Write a script to return true if the given array can be re-arranged to form an arithmetic progression, otherwise return false.
A sequence of numbers is called an arithmetic progression if the difference between any two consecutive elements is the same.
My solution
This task might trip up some less experienced developers. Experienced developers should know that 0.1 + 0.2 is not 0.3. The reason for this is explained in this Python web page.
$ python3
Python 3.13.9 (main, Oct 14 2025, 00:00:00) [GCC 15.2.1 20250808 (Red Hat 15.2.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 0.1 + 0.2
0.30000000000000004
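The same behaviour is easy to confirm from Perl (a quick one-liner check):
$ perl -E 'say 0.1 + 0.2 == 0.3 ? "equal" : "not equal"'
not equal
$ perl -e 'printf "%.17g\n", 0.1 + 0.2'
0.30000000000000004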
Therefore the inputs to this function need to be either integers or of the Decimal type. With that out of the way, these are the steps I take
- Create a sorted_ints list (array in Perl) which has the values sorted numerically (smallest first).
- Create a variable diff that is the difference between the first two values (i.e. sorted_ints[1] - sorted_ints[0]).
- Loop i from 2 to one less than the length of the list. For each iteration, I check that the difference between the number at that position and the previous one is the same as diff. If it isn't, I return False.
- Once the loop is exhausted, I return True.
def arithmetic_progression(ints: list) -> bool:
sorted_ints = sorted(ints)
diff = sorted_ints[1] - sorted_ints[0]
for i in range(2, len(sorted_ints)):
if sorted_ints[i] - sorted_ints[i - 1] != diff:
return False
return True
Perl also uses the same floating point arithmetic, and thus has the same issue. Perl has the Math::BigFloat module to handle this. The Perl solution uses the same logic, and converts the values to BigFloat before doing the computations.
sub main (@ints) {
my @sorted_ints = map { Math::BigFloat->new($_) } sort { $a <=> $b } @ints;
my $diff = $sorted_ints[1] - $sorted_ints[0];
foreach my $i ( 2 .. $#sorted_ints ) {
if ( $sorted_ints[$i] - $sorted_ints[ $i - 1 ] != $diff ) {
say 'false';
return;
}
}
say 'true';
}
Examples
$ ./ch-2.py 1 3 5 7 9
True
$ ./ch-2.py 9 1 7 5 3
True
$ ./ch-2.py 1 2 4 8 16
False
$ ./ch-2.py 5 -1 3 1 -3
True
$ ./ch-2.py 1.5 3 0 4.5 6
True
$ ./ch-2.py 0.1 0.3 0.2 0.4
True
I have a regular expression, written for Florian Ingerl's extension of Java's regex engine,
to parse a LaTeX command:
\\\\DocumentMetadata(?<docMetadata>\\{(?:[^{}]|(?'docMetadata'))*\\})
but the important thing is that it allows nested braces.
The concrete string to be matched is
\DocumentMetadata{pdfversion=1,7,pdfstandard={a-3b,UA-1}}
but sooner or later deeper nesting will be required.
To that end, I use the extension com.florianingerl.util.regex.MatchResult of the builtin java regular expressions.
Now I want to use latexmk (which is written in Perl) for LaTeX, and need to adapt .latexmkrc, which is just Perl code.
So I need the same regular expression, or at least something similar that I can automatically transform.
Up to now each expression worked in both worlds. But this one does not match in Perl.
Maybe there is some extension which does.
I found in Perl recursion but not with named groups.
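(For reference, Perl does support recursing into a named capture with (?&NAME), available since Perl 5.10, so a close counterpart of the pattern above might look like this sketch:)
use strict;
use warnings;

my $str = '\DocumentMetadata{pdfversion=1,7,pdfstandard={a-3b,UA-1}}';

# (?<docMetadata>...) names the group; (?&docMetadata) recurses into it,
# which allows arbitrarily deep brace nesting.
my $re = qr/\\DocumentMetadata(?<docMetadata>\{(?:[^{}]|(?&docMetadata))*\})/;

print "matched: $+{docMetadata}\n" if $str =~ $re;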
-
App::Greple - extensible grep with lexical expression and region handling
- Version: 10.00 on 2025-12-11, with 56 votes
- Previous CPAN version: 9.23 was 7 months, 4 days before
- Author: UTASHIRO
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.096001 on 2025-12-13, with 804 votes
- Previous CPAN version: 2.096000 was 5 days before
- Author: OLIVER
-
Beam::Wire - Lightweight Dependency Injection Container
- Version: 1.027 on 2025-12-06, with 18 votes
- Previous CPAN version: 1.026 was 1 year, 2 months, 23 days before
- Author: PREACTION
-
Bitcoin::Crypto - Bitcoin cryptography in Perl
- Version: 4.003 on 2025-12-11, with 14 votes
- Previous CPAN version: 4.002 was 27 days before
- Author: BRTASTIC
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251207.001 on 2025-12-07, with 25 votes
- Previous CPAN version: 20251130.001 was 6 days before
- Author: BRIANDFOY
-
DateTime::Format::Strptime - Parse and format strp and strf time patterns
- Version: 1.80 on 2025-12-06, with 25 votes
- Previous CPAN version: 1.79 was 4 years, 7 months, 3 days before
- Author: DROLSKY
-
DateTime::TimeZone - Time zone object base class and factory
- Version: 2.66 on 2025-12-11, with 22 votes
- Previous CPAN version: 2.65 was 8 months, 15 days before
- Author: DROLSKY
-
DBIx::Class::DeploymentHandler - Extensible DBIx::Class deployment
- Version: 0.002235 on 2025-12-12, with 21 votes
- Previous CPAN version: 0.002234 was 1 year, 4 months, 26 days before
- Author: WESM
-
Exporter::Tiny - an exporter with the features of Sub::Exporter but only core dependencies
- Version: 1.006003 on 2025-12-07, with 24 votes
- Previous CPAN version: 1.006002 was 2 years, 8 months, 6 days before
- Author: TOBYINK
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.629 on 2025-12-12, with 16 votes
- Previous CPAN version: 0.628 was 5 days before
- Author: ETHER
-
Mail::Box - complete E-mail handling suite
- Version: 4.01 on 2025-12-13, with 16 votes
- Previous CPAN version: 4.00 was 1 day before
- Author: MARKOV
-
Module::Release - Automate software releases
- Version: 2.137 on 2025-12-12, with 12 votes
- Previous CPAN version: 2.136 was 11 months, 8 days before
- Author: BRIANDFOY
-
Number::Phone - base class for Number::Phone::* modules
- Version: 4.0009 on 2025-12-10, with 24 votes
- Previous CPAN version: 4.0008 was 2 months, 27 days before
- Author: DCANTRELL
-
Object::Pad - a simple syntax for lexical field-based objects
- Version: 0.823 on 2025-12-08, with 46 votes
- Previous CPAN version: 0.822 was 7 days before
- Author: PEVANS
-
Release::Checklist - A QA checklist for CPAN releases
- Version: 0.18 on 2025-12-09, with 16 votes
- Previous CPAN version: 0.17 was 2 years, 7 months, 9 days before
- Author: HMBRAND
-
Spreadsheet::Read - Meta-Wrapper for reading spreadsheet data
- Version: 0.94 on 2025-12-09, with 31 votes
- Previous CPAN version: 0.93 was 8 months, 22 days before
- Author: HMBRAND
-
SPVM - The SPVM Language
- Version: 0.990109 on 2025-12-08, with 36 votes
- Previous CPAN version: 0.990108 was 4 days before
- Author: KIMOTO
-
Test::Simple - Basic utilities for writing tests.
- Version: 1.302219 on 2025-12-09, with 199 votes
- Previous CPAN version: 1.302218 was before
- Author: EXODIST
-
WebService::Fastly - an interface to most facets of the [Fastly API](https://www.fastly.com/documentation/reference/api/).
- Version: 13.01 on 2025-12-09, with 18 votes
- Previous CPAN version: 13.00 was 1 month, 8 days before
- Author: FASTLY
I'd like to use constants to build regular expressions. However, in this case I got an unexpected syntax error:
#!/usr/bin/perl
use strict;
use warnings;
use constant CR_SAFE => '[:alnum:]@,._\-!%=';
# quote argument if needed
sub cond_quote($)
{
    my $arg = shift;
    return $arg
        if ($arg =~ /^[${\CR_SAFE}]+$/);
    $arg =~ s/[^${\CR_SAFE}[:space:]]/\\$&/g;
    return '"' . $arg . '"';
}
$ perl -c ./foo.pl
syntax error at ./foo.pl line 14, near "[:"
./foo.pl had compilation errors.
However, if I move [:space:] before the expanded constant ($arg =~ s/[^[:space:]${\CR_SAFE}]/\\$&/g;), then I get no syntax error.
Perl version is 5.26.1 on x86_64.
Am I missing something obvious, or can someone explain?
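A possible explanation and workaround (not part of the original post): during interpolation Perl appears to treat ${\CR_SAFE} immediately followed by [ as the start of an element lookup, so [:space:] gets parsed as a subscript expression, hence the error near "[:". Building the patterns from an ordinary scalar (or a precompiled qr//) sidesteps that parsing entirely:
use strict;
use warnings;
use constant CR_SAFE => '[:alnum:]@,._\-!%=';

# Workaround sketch: interpolate the constant via a plain scalar and
# precompiled patterns, so the regex source never contains ${\CR_SAFE}[ ... ]
my $safe      = CR_SAFE;
my $safe_re   = qr/^[$safe]+$/;
my $unsafe_re = qr/[^$safe[:space:]]/;

sub cond_quote
{
    my $arg = shift;
    return $arg if $arg =~ $safe_re;
    $arg =~ s/$unsafe_re/\\$&/g;
    return '"' . $arg . '"';
}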

Tony writes:
```
In addition to the typical stream of small changes to review, Dave's second AST rebuild of ExtUtils::ParseXS arrived (#23883), and I spent several hours reviewing it.
In response to #23918 I worked on adding numeric comparison APIs, which are complicated by overloading, NaNs, SVs' dual IV/NV implementation, and of course by overloading. This includes some fixes for the existing sv_numeq() API. You can see the current state of this work in #23966.
[Hours]  [Activity]

2025/11/03 Monday
 0.37  #23886 review and approve
 0.22  #23873 review other comments and follow-up
 0.47  #23887 review, research and approve
 1.72  #23890 review, testing
 0.23  #23890 comment
 0.08  #23891 review and approve
 0.18  #23895 review and approve
 0.67  #23896 review and comment
 3.94

2025/11/04 Tuesday
 0.57  coverity scan results, testing, comment on #23871
 1.15  #23885 review and comment
 1.03  #23871 testing per wolfsage’s example, work on a regression test and fix, testing, push to PR 23897
 1.67  #21877 debugging, fix my understanding on PerlIO and the code, testing
 4.42

2025/11/05 Wednesday
 0.70  #23897 fix non-taint perl, testing and update PR
 0.58  #23896 recheck
 1.50  #23885 comment
 0.57  #21877 look into remaining test failure, find the cause and workaround it
 3.35

2025/11/06 Thursday
 0.08  #23902 review and approve
 0.08  #23898 review and approve
 0.55  #23899 review and approve
 0.97  #23901 review and approve
 0.95  #23883 review
 1.40  #23883 review up to Node::include
 4.03

2025/11/10 Monday
 1.60  #23795 review updates, comment
 0.35  #23907 review, research and approve
 1.07  #23908 review, research, comment (fixed while I worked)
 0.63  #23883 continue review, comment
 3.65

2025/11/11 Tuesday
 0.57  #23908 review updates and approve
 0.40  #23911 review, review history of associated ticket and approve
 0.85  #23883 more review
 1.37  #23883 more review
 3.19

2025/11/12 Wednesday
 0.73  #23913 review, research and approve
 0.77  #23914 review, check for SvIsUV() usage on CPAN
 0.83  #23910 testing, get some strange results
 0.82  #23910 debugging, can’t reproduce in new builds
 0.67  #23883 more review
 3.82

2025/11/13 Thursday
 0.73  #23918 review discussion and research
 0.75  #23917 review and approve
 0.23  #23919 review and approve
 1.03  #23883 more review
 1.27  #23883 more review
 4.01

2025/11/17 Monday
 1.13  testing, comments on new XS API list thread
 0.97  #23923 review and approve
 1.25  #23914 testing, comment, review
 0.43  #23914 more review and approve
 0.93  #23888 review, comments, some side discussion of 23921
 4.71

2025/11/18 Tuesday
 0.50  #23888 review updates, testing, approve
 0.27  #23943 review and approve
 0.52  #23883 more review
 1.27  #23883 more review
 2.56

2025/11/19 Wednesday
 0.78  #23922 review and approve
 1.08  #23918 work on new compare APIs
 0.53  #23918 debugging
 1.22  #23918 testing, cleanup
 0.82  #23918 re-work documentation
 4.43

2025/11/20 Thursday
 2.50  #23918 work on sv_numcmp(), research, test code, testing, debugging
 1.07  #23918 work out an issue, more testing, document sv_numcmp variants
 3.57

2025/11/24 Monday
 0.08  #23819 review and approve
 2.77  #23918 NULL tests and fix, test for NV/IV mishandling and fix
 0.82  #23918 open #23956, start on le lt ge gt implementation
 1.20  #23918 finish implementation, test code, testing
 4.87

2025/11/25 Tuesday
 0.67  #23885 review, comment
 1.13  #23885 more review
 1.03  #23918 some polish
 2.83

2025/11/26 Wednesday
 0.07  #23960 review and approve
 2.07  #23885 review, research and comments
 0.48  #23918 more polish, testing
 1.60  #23918 finish polish, push for CI
 4.22

2025/11/27 Thursday
 0.58  #23918 check CI, add perldelta and push
 0.58  check CI results and make PR 23966
 0.48  comment on dist discussion on list
 1.64

2025/11/28 Friday
 0.18  #23918 fix a minor issue
 0.18
Which I calculate is 59.42 hours.
Approximately 32 tickets were reviewed or worked on.
```

Paul writes:
A mix of things this month, though I didn't get much done in the final week because of preparations for my talk at LPW2025. It was a useful event, though, because a few ideas came out of discussions that I shall be looking at for core perl soon.
- 4 = Mentoring preparation for BooK + Eric on PPC 0014
- 4.5 = attributes-v2 branch
- https://github.com/Perl/perl5/pull/23923
- 3 = Experiments with refalias in signatures in XS::Parse::Sublike
- 4 = Support for signature named parameters in meta
- 3 = Experiments with lexical class constructor functions in Object::Pad.
- While this is a CPAN module and not directly core perl, it serves as the experimental base for what gets implemented in future versions of perl, so it is still of interest to core development.
- 1 = Other github code reviews
Total: 19.5 hours
My aim for December is to continue the attributes-v2 branch, and get
into a good position to implement perhaps the :abstract and
:lexical_new attributes on classes.

Dave writes:
Last month was relatively quiet.
I worked on a couple of bugs and did some final updates to my branch which rewrites perlxs.pod - which I intend to merge in the next few days.
Summary:
- 10:33 GH #16197 re eval stack unwinding
- 4:47 GH #18669 dereferencing result of ternary operator skips autovivification
- 2:06 make perl -Dx display lexical variable names
- 10:58 modernise perlxs.pod
Total:
- 28:24 TOTAL (HH::MM)
Originally published at Perl Weekly 750
Hi there,
One of the community's most enjoyable yearly customs, the Perl Advent Calendar 2025, kicks off this week. A new article, tutorial, or in-depth analysis demonstrating the ingenuity and skill that continue to propel Perl forward is released every day.
The calendar has something for every skill level, whether you're interested in cutting-edge Perl techniques, witty one-liners, CPAN gems, or true engineering tales. It serves as a reminder that Perl's ecosystem is still active, creative, and developing, driven by a fervent community that enjoys exchanging knowledge.
If you still want more, be sure to check out The Weekly Challenge Advent Calendar 2025. There you'll find not just Perl, but Raku as well.
Last but not least, I'd like to extend my heartfelt thanks to Gabor Szabo for kindly promoting my book: Design Patterns in Modern Perl - your support means a great deal. And to the Perl community: thank you for embracing my first book with such warmth and encouragement. Your enthusiasm continues to inspire me.
Enjoy the rest of the newsletter, and stay safe and healthy.
--
Your editor: Mohammad Sajid Anwar.
Articles
PAGI: ASGI For Perl, or the Spiritual Successor to Plack
PAGI (Perl Asynchronous Gateway Interface) is a new specification for async Perl web applications, inspired by Python's ASGI. It supports HTTP, WebSockets, and Server-Sent Events natively, and can wrap existing PSGI applications for backward compatibility.
plenv-where
A plenv plugin to show which Perl versions have a particular module.
LPW 2025 - Event Report
Here is my detailed report of LPW 2025 that includes the slides of my presentation.
Living Perl: Building a CNN Image Classifier with AI::MXNet
This article demonstrates Perl's continued relevance in cutting-edge fields by showcasing integration with MXNet, a major deep learning framework. The ability to build convolutional neural networks (CNNs) in Perl for image classification represents significant technical sophistication.
Perl Advent Calendar
The Ghost of Perl Developer Surveys Past, Present, and Future
The article demonstrates sophisticated understanding of developer tooling ecosystems and community trends. The comparison between 2009-2010 surveys and the 2025 results shows deep insight into how Perl development practices have evolved while maintaining continuity.
All I Want for Christmas Is the Right Aspect Ratio
The step-by-step progression from simple Perl script to full Docker deployment serves as an excellent tutorial on modern Perl module distribution. It shows how a well-designed module can serve diverse audiences from command-line power users to web developers to DevOps teams.
Santa's Secret Music Studio
The step-by-step approach from "need to identify devices" to "controlling a synth" serves as an excellent mini-tutorial. The mention of related modules (MIDI::RtController, MIDI::RtController::Filter::Tonal) provides helpful pointers for readers wanting to explore further.
Stopping the Evil Grinch: A Holiday Defense Guide
This article demonstrates enterprise-grade security automation using Perl as a robust orchestration layer. The solution elegantly combines multiple security tools (Lynis for auditing, ClamAV for malware scanning) with professional email reporting.
Santa needs to know about new toys...
This article successfully teaches professional API integration through storytelling, making technical concepts accessible while demonstrating production-ready Perl code patterns. The holiday theme enhances rather than distracts from the educational content.
ToyCo want to push new toy updates
This article beautifully demonstrates transitioning from a polling-based API client to a webhook-based service - a common and important architectural pattern in modern web development. The scenario of "crippling ToyCo's servers" with excessive polling is both realistic and educational.
Abstract storage of Christmas letters
This solution demonstrates sophisticated software design with the strategic use of Storage::Abstract to create a clean abstraction layer between business logic and data storage. The anticipation of changing storage requirements and preemptive abstraction is professional forward-thinking.
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 351
Welcome to a new week with a couple of fun tasks "Special Average" and "Arithmetic Progression". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 350
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Good Substrings" and "Shuffle Pairs" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
TWC350
This implementation demonstrates elegant Perl craftsmanship. The good substrings solution is particularly clever, using a regex lookahead to capture all overlapping 3-character substrings in one pass, then filtering to ensure no repeated characters - a beautifully concise one-liner.
The Good Shuffle
The solutions demonstrate strong understanding of both algorithmic thinking and Raku language features. The shuffle pairs solution is particularly clever in its use of canonical forms and early termination conditions.
Good Substring / Shuffle Pairs
The Perl implementation demonstrates clean, readable code with thoughtful organization. The good substrings solution uses efficient array slicing and clear manual comparison logic that's easily understandable.
Shuffled Strings
This is an exceptionally elegant Perl implementation showcasing expert-level Perl idioms. Both solutions exemplify Perl's philosophy of "making easy things easy and hard things possible" with concise, expressive code that solves the problems elegantly without unnecessary complexity.
only Perl!
This is a comprehensive and impressively diverse implementation across multiple languages and environments. The Raku solutions showcase excellent use of the language's functional features. The PL/Perl implementations are particularly noteworthy for their adaptability to database environments.
Perl Weekly Challenge 350
This solution stands out for its deep mathematical analysis and optimization. The Task 2 solution demonstrates remarkable theoretical insight by using modular arithmetic with modulo 9 to significantly reduce the search space - achieving a 5.2x speedup is an impressive feat of algorithmic optimization.
Shuffling the Good
This solution demonstrates exceptional cross-language programming skills with clean, idiomatic implementations across four different languages (Raku, Perl, Python, Elixir). The consistent algorithmic approach while respecting each language's unique idioms shows deep understanding of multiple programming paradigms.
Good pairs
Both solutions showcase excellent Perl craftsmanship with thoughtful comments, clear variable naming, and robust handling of edge cases. Peter demonstrates both theoretical understanding (mathematical bounds, algorithmic complexity) and practical implementation skills.
The Weekly Challenge #350
This is a masterclass in professional Perl documentation and code structure. The solutions feature comprehensive POD documentation with clear attribution, problem descriptions, notes, and IO specifications - demonstrating exceptional software engineering practices.
A Good Shuffle
This solution demonstrates elegant Perl craftsmanship with a particularly clever approach. Using a regex with a lookahead assertion /(?=(...))/g to capture overlapping substrings is an expert-level Perl idiom that showcases deep understanding of the language's regex capabilities.
Good shuffling
This solution demonstrates excellent cross-language programming skills with clear parallel implementations in both Python and Perl. The Task 1 solution is elegantly simple - the Python version using set(substr) for uniqueness checking and the Perl version using a hash with early returns showcase appropriate idioms for each language while maintaining the same algorithmic approach.
Rakudo
2025.48 Advent is Here
Weekly collections
NICEPERL's lists
Great CPAN modules released last week;
MetaCPAN weekly report.
Events
Paris.pm monthly meeting
December 10, 2025
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
-
App::cpm - a fast CPAN module installer
- Version: 0.998002 on 2025-12-04, with 177 votes
- Previous CPAN version: 0.998001 was 21 days before
- Author: SKAJI
-
App::HTTPThis - Export the current directory over HTTP
- Version: 0.010 on 2025-12-04, with 24 votes
- Previous CPAN version: 0.009 was 2 years, 5 months, 12 days before
- Author: DAVECROSS
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095006 on 2025-11-30, with 800 votes
- Previous CPAN version: 2.095005
- Author: OLIVER
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251130.001 on 2025-11-30, with 25 votes
- Previous CPAN version: 20251123.001 was 7 days before
- Author: BRIANDFOY
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.627 on 2025-12-04, with 15 votes
- Previous CPAN version: 0.626 was 2 days before
- Author: ETHER
-
MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
- Version: 2.034000 on 2025-12-03, with 27 votes
- Previous CPAN version: 2.033000 was 1 year, 8 days before
- Author: MICKEY
-
Minion::Backend::mysql - MySQL backend
- Version: 1.007 on 2025-12-01, with 13 votes
- Previous CPAN version: 1.006 was 1 year, 6 months, 9 days before
- Author: PREACTION
-
Object::Pad - a simple syntax for lexical field-based objects
- Version: 0.822 on 2025-11-30, with 46 votes
- Previous CPAN version: 0.821 was 4 months, 18 days before
- Author: PEVANS
-
Sisimai - Mail Analyzing Interface for bounce mails.
- Version: v5.5.0 on 2025-12-05, with 81 votes
- Previous CPAN version: v5.4.1 was 3 months, 5 days before
- Author: AKXLIX
-
SPVM - The SPVM Language
- Version: 0.990108 on 2025-12-03, with 36 votes
- Previous CPAN version: 0.990107 was 15 days before
- Author: KIMOTO
-
Sys::Virt - libvirt Perl API
- Version: v11.10.0 on 2025-12-01, with 17 votes
- Previous CPAN version: v11.8.0 was 24 days before
- Author: DANBERR
-
Time::Moment - Represents a date and time of day with an offset from UTC
- Version: 0.46 on 2025-12-04, with 76 votes
- Previous CPAN version: 0.44 was 7 years, 6 months, 25 days before
- Author: CHANSEN
This is the weekly favourites list of CPAN distributions. Votes count: 72
Week's winner: JSON::Schema::Validate (+3)
Build date: 2025/12/06 16:48:33 GMT
Clicked for first time:
- Chess::Plisco - Representation of a chess position with move generator, legality checker etc.
- Dev::Util - Utilities useful in the development of perl programs
- Disk::SmartTools - Provide tools to work with disks via S.M.A.R.T.
- Dump::Krumo - Fancy, colorful, human readable dumps of your data
- Melian - Perl client to the Melian cache
Increasing its reputation:
- Algorithm::Diff (+1=24)
- AnyEvent (+1=167)
- App::perlimports (+1=22)
- Beekeeper (+1=3)
- BioPerl (+1=36)
- CGI::Compile (+1=2)
- CGI::Emulate::PSGI (+1=3)
- Crypt::SecretBuffer (+1=3)
- Data::Processor (+1=2)
- DBD::Oracle (+1=32)
- Devel::Examine::Subs (+1=4)
- Devel::NYTProf (+1=196)
- Dpkg (+1=4)
- Email::Filter (+1=2)
- Email::Simple (+1=23)
- GD (+1=32)
- Geo::Parser::Text (+1=3)
- Hash::Merge::Simple (+1=18)
- IO::K8s (+1=5)
- IO::Uring (+1=2)
- JSON::Schema::Modern (+2=8)
- JSON::Schema::Validate (+3=4)
- libapreq2 (+1=5)
- List::Gen (+1=24)
- Log::Any (+1=68)
- LWP::Protocol::https (+1=22)
- Math::BigInt (+1=14)
- MCP (+1=4)
- meta (+1=15)
- MIME::Base64 (+1=25)
- Module::Generic (+1=4)
- Mojolicious (+1=509)
- Mojolicious::Plugin::OpenAPI::Modern (+1=6)
- Moose (+1=334)
- MooX::TypeTiny (+1=11)
- Nice::Try (+1=11)
- OpenAPI::Modern (+2=4)
- OpenTelemetry (+1=5)
- Params::Validate::Strict (+1=2)
- perl (+1=440)
- Perl::Critic (+1=135)
- Perl::Critic::Community (+1=8)
- Perl::Tidy (+1=147)
- Safe (+1=14)
- Scalar::List::Utils (+1=184)
- Scope::Guard (+1=21)
- Sereal::Encoder (+1=25)
- signatures (+1=8)
- Software::License (+1=17)
- Sub::Quote (+1=11)
- Syntax::Keyword::Try (+1=47)
- Template::Nest (+1=2)
- Test2::Harness (+1=20)
- Test::Harness (+1=65)
- Test::Simple (+1=199)
- Text::CSV (+1=82)
- Text::Diff (+1=17)
- Text::PO (+2=2)
- Time::HiRes (+1=65)
- Time::Piece (+1=66)
- Types::JSONSchema (+2=2)
A language awakens the moment its community shares what it has lived and built.
If you were building web applications during the first dot-com boom, chances are you wrote Perl. And if you’re now a CTO, tech lead, or senior architect, you may instinctively steer teams away from it—even if you can’t quite explain why.
This reflexive aversion isn’t just a preference. It’s what I call Dotcom Survivor Syndrome: a long-standing bias formed by the messy, experimental, high-pressure environment of the early web, where Perl was both a lifeline and a liability.
Perl wasn’t the problem. The conditions under which we used it were. And unfortunately, those conditions, combined with a separate, prolonged misstep over versioning, continue to distort Perl’s reputation to this day.
The Glory Days: Perl at the Heart of the Early Web
In the mid- to late-1990s, Perl was the web’s duct tape.
-
It powered CGI scripts on Apache servers.
-
It automated deployments before DevOps had a name.
-
It parsed logs, scraped data, processed form input, and glued together whatever needed glueing.
Perl 5, released in 1994, introduced real structure: references, modules, and the birth of CPAN, which became one of the most effective software ecosystems in the world.
Perl wasn’t just part of the early web—it was instrumental in creating it.
The Dotcom Boom: Shipping Fast and Breaking Everything
To understand the long shadow Perl casts, you have to understand the speed and pressure of the dot-com boom.
We weren’t just building websites.
We were inventing how to build websites.
Best practices? Mostly unwritten.
Frameworks? Few existed.
Code reviews? Uncommon.
Continuous integration? Still a dream.
The pace was frantic. You built something overnight, demoed it in the morning, and deployed it that afternoon. And Perl let you do that.
But that same flexibility—its greatest strength—became its greatest weakness in that environment. With deadlines looming and scalability an afterthought, we ended up with:
-
Thousands of lines of unstructured CGI scripts
-
Minimal documentation
-
Global variables everywhere
-
Inline HTML mixed with business logic
-
Security holes you could drive a truck through
When the crash came, these codebases didn’t age gracefully. The people who inherited them, often the same people who now run engineering orgs, remember Perl not as a powerful tool, but as the source of late-night chaos and technical debt.
Dotcom Survivor Syndrome: Bias with a Backstory
Many senior engineers today carry these memories with them. They associate Perl with:
-
Fragile legacy systems
-
Inconsistent, “write-only” code
-
The bad old days of early web development
And that’s understandable. But it also creates a bias—often unconscious—that prevents Perl from getting a fair hearing in modern development discussions.
Version Number Paralysis: The Perl 6 Effect
If Dotcom Survivor Syndrome created the emotional case against Perl, then Perl 6 created the optical one.
In 2000, Perl 6 was announced as a ground-up redesign of the language. It promised modern syntax, new paradigms, and a bright future. But it didn’t ship—not for a very long time.
In the meantime:
-
Perl 5 continued to evolve quietly, but with the implied expectation that it would eventually be replaced.
-
Years turned into decades, and confusion set in. Was Perl 5 deprecated? Was Perl 6 compatible? What was the future of Perl?
To outsiders—and even many Perl users—it looked like the language was stalled. Perl 5 releases were labelled 5.8, 5.10, 5.12… but never 6. Perl 6 finally emerged in 2015, but as an entirely different language, not a successor.
Eventually, the community admitted what everyone already knew: Perl 6 wasn’t Perl. In 2019, it was renamed Raku.
But the damage was done. For nearly two decades, the version number “6” hung over Perl 5 like a storm cloud – a constant reminder that its future was uncertain, even when that wasn’t true.
This is what I call Version Number Paralysis:
-
A stalled major version that made the language look obsolete.
-
A missed opportunity to signal continued relevance and evolution.
-
A marketing failure that deepened the sense that Perl was a thing of the past.
Even today, many developers believe Perl is “stuck at version 5,” unaware that modern Perl is actively maintained, well-supported, and quite capable.
While Dotcom Survivor Syndrome left many people with an aversion to Perl, Version Number Paralysis gave them an excuse not to look closely at Perl to see if it had changed.
What They Missed While Looking Away
While the world was confused or looking elsewhere, Perl 5 gained:
-
Modern object systems (Moo, Moose)
-
A mature testing culture (Test::More, Test2)
-
Widespread use of best practices (Perl::Critic, perltidy, etc.)
-
Core team stability and annual releases
-
Huge CPAN growth and refinements
But those who weren’t paying attention, especially those still carrying dotcom-era baggage, never saw it. They still think Perl looks like it did in 2002.
Can We Move On?
Dotcom Survivor Syndrome is real. So is Version Number Paralysis. Together, they’ve unfairly buried a language that remains fast, expressive, and battle-tested.
We can’t change the past. But we can:
-
Acknowledge the emotional and historical baggage
-
Celebrate the role Perl played in inventing the modern web
-
Educate developers about what Perl really is today
-
Push back against the assumption that old == obsolete
Conclusion
Perl’s early success was its own undoing. It became the default tool for the first web boom, and in doing so, it took the brunt of that era’s chaos. Then, just as it began to mature, its versioning story confused the industry into thinking it had stalled.
But the truth is that modern Perl is thriving quietly in the margins – maintained by a loyal community, used in production, and capable of great things.
The only thing holding it back is a generation of developers still haunted by memories of CGI scripts, and a version number that suggested a future that never came.
Maybe it’s time we looked again.
The post Dotcom Survivor Syndrome – How Perl’s Early Success Created the Seeds of Its Downfall first appeared on Perl Hacks.
I was writing data-intensive code in Perl, relying heavily on PDL for some statistical calculations (estimation of percentile points in some very BIG vectors, e.g. 100k to 1B elements), when I noticed that PDL was taking an unusually long time to produce results compared to my experience in Python.
This happened irrespective of whether one used the pct or oddpct functions in PDL::Ufunc.
The performance degradation had a very interesting quantitative aspect: if one asked PDL to return a single percentile, it did so very fast; but if one asked for more than one percentile, the time-to-solution increased linearly with the number of percentiles specified.
Looking at the source code of the pct function, it seems that it is implemented by calling the function pctover, which according to the PDL documentation “Broadcasts over its inputs.”
But what exactly is broadcasting? According to PDL::Broadcasting: “[broadcasting] can produce very compact and very fast PDL code by avoiding multiple nested for loops that C and BASIC users may be familiar with. The trouble is that it can take some getting used to, and new users may not appreciate the benefits of broadcasting.” Reading the relevant PDL examples and revisiting the NumPy documentation (which also uses this technique), broadcasting describes how arrays with different shapes are treated during arithmetic operations: subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python.
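As a toy illustration (mine, not the author's) of what that means in PDL, a smaller ndarray is reused across the extra dimensions of a larger one:
use PDL;

# A (3)-shaped ndarray is broadcast across each row of a (3,2)-shaped one.
my $matrix = pdl([ [1, 2, 3], [4, 5, 6] ]);   # dims: 3, 2
my $row    = pdl([ 10, 20, 30 ]);             # dims: 3
print $matrix + $row;                         # $row is added to every row of $matrix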
It seems that when one does something like:
use PDL::Lite;
my $very_big_ndarray = ... ; # code that constructs a HUGE PDL ndarray
my $pct = sequence(100)/100; # the percentiles 0%, 1%, ..., 99%
my $pct_values = pct( $very_big_ndarray, $pct);
broadcasting effectively runs the single-percentile calculation sequentially, once for each requested percentile, and concatenates the results.
The problem with broadcasting for this operation is that the percentile calculation includes a VERY expensive step, namely the sorting of $very_big_ndarray before the (trivial) calculation of the percentile from the sorted values, as detailed in Wikipedia. So when the percentile operation is broadcast by PDL, the sorting is repeated for each percentile value in $pct, leading to a catastrophic loss of performance!
How can we fix this? It turns out to be reasonably trivial: we need to reimplement the percentile function so that it does not broadcast.
One of the simplest quantile functions to implement is the one based on the empirical cumulative distribution function (this corresponds to the Type 3 quantile in the classification by Hyndman and Fan).
This one can be trivially implemented in Perl using PDL as:
sub quantile_type_3 {
    my ( $data, $pct ) = @_;
    my $sorted_data = $data->qsort;            # sort once, however many percentiles are requested
    my $nelem       = $data->nelem;
    my $cum_ranks   = floor( $pct * $nelem );  # ranks into the sorted data, one per requested percentile
    $sorted_data->index($cum_ranks);           # look up all percentile values in a single vectorised step
}
(The other quantiles can be implemented equally trivially using affine operations as explained in R’s documentation of the quantile function).
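As an illustration of that remark (this sketch is mine, not the author's), R's default Type 7 quantile reduces to one sort plus a vectorised affine interpolation, so it too avoids broadcasting the sort:
sub quantile_type_7 {
    my ( $data, $pct ) = @_;
    my $sorted = $data->qsort;              # sort once, reuse for every requested quantile
    my $n      = $data->nelem;
    my $h      = ( $n - 1 ) * $pct;         # fractional ranks into the sorted data
    my $lo     = floor($h);                 # lower neighbouring order statistic (floor() from PDL::Math)
    my $hi     = $lo + ( $lo < $n - 1 );    # upper neighbour, clamped to the last element
    my $w      = $h - $lo;                  # interpolation weights
    return ( 1 - $w ) * $sorted->index($lo) + $w * $sorted->index($hi);
}
Called as quantile_type_7( $very_big_ndarray, sequence(100)/100 ), it should scale like quantile_type_3 above, since the expensive sort still happens only once.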
To see how well this works, I wrote a Perl benchmark script that benchmarks the built-in pct function and the quantile_type_3 function on synthetic data, and then calls the companion R script to profile the 9 quantile functions and the 3 sort functions in R for the same dataset.
I obtained the following performance figures on my old Xeon: the “de-broadcasted” version of the quantile function achieves the same performance as the R implementations, whereas the PDL broadcasting version is 100 times slower.
| Test | Iterations | Elements | Quantiles | Elapsed Time (s) |
|---|---|---|---|---|
| pct | 10 | 1000000 | 100 | 132.430000 |
| quantile_type_3 | 10 | 1000000 | 100 | 1.320000 |
| pct_R_1 | 10 | 1000000 | 100 | 1.290000 |
| pct_R_2 | 10 | 1000000 | 100 | 1.281000 |
| pct_R_3 | 10 | 1000000 | 100 | 1.274000 |
| pct_R_4 | 10 | 1000000 | 100 | 1.283000 |
| pct_R_5 | 10 | 1000000 | 100 | 1.290000 |
| pct_R_6 | 10 | 1000000 | 100 | 1.286000 |
| pct_R_7 | 10 | 1000000 | 100 | 1.233000 |
| pct_R_8 | 10 | 1000000 | 100 | 1.309000 |
| pct_R_9 | 10 | 1000000 | 100 | 1.291000 |
| sort_quick | 10 | 1000000 | 100 | 1.220000 |
| sort_shell | 10 | 1000000 | 100 | 1.758000 |
| sort_radix | 10 | 1000000 | 100 | 0.924000 |
As can be seen from the table, the sorting operations account for the bulk of the execution time of the quantile functions.
Two major take-home points:
1) Don't be afraid to look under the hood/inside the black box when performance is surprisingly disappointing!
2) Be careful with broadcasting operations in PDL, NumPy, or MATLAB.
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095004 on 2025-11-23, with 798 votes
- Previous CPAN version: 2.095003 was 4 days before
- Author: OLIVER
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251123.001 on 2025-11-23, with 25 votes
- Previous CPAN version: 20251116.001 was 7 days before
- Author: BRIANDFOY
-
Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
- Version: 8.1.0 on 2025-11-26, with 16 votes
- Previous CPAN version: 8.0.0 was 1 month, 11 days before
- Author: CUKEBOT
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.625 on 2025-11-28, with 14 votes
- Previous CPAN version: 0.624 was 2 days before
- Author: ETHER
-
Mail::Box - complete E-mail handling suite
- Version: 3.012 on 2025-11-27, with 16 votes
- Previous CPAN version: 3.011 was 7 months, 8 days before
- Author: MARKOV
-
meta - meta-programming API
- Version: 0.015 on 2025-11-28, with 14 votes
- Previous CPAN version: 0.014 was 2 months, 24 days before
- Author: PEVANS
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.008006 on 2025-11-26, with 146 votes
- Previous CPAN version: 2.008005 was 5 days before
- Author: TOBYINK
-
Workflow - Simple, flexible system to implement workflows
- Version: 2.09 on 2025-11-23, with 34 votes
- Previous CPAN version: 2.08 was 10 days before
- Author: JONASBN
In last week’s post I showed how to run a modern Dancer2 app on Google Cloud Run. That’s lovely if your codebase already speaks PSGI and lives in a nice, testable, framework-shaped box.
But that’s not where a lot of Perl lives.
Plenty of useful Perl on the internet is still stuck in old-school CGI – the kind of thing you’d drop into cgi-bin on a shared host in 2003 and then try not to think about too much.
So in this post, I want to show that:
If you can run a Dancer2 app on Cloud Run, you can also run ancient CGI on Cloud Run – without rewriting it.
To keep things on the right side of history, we’ll use nms FormMail rather than Matt Wright’s original script, but the principle is exactly the same.
Prerequisites: Google Cloud and Cloud Run
If you already followed the Dancer2 post and have Cloud Run working, you can skip this section and go straight to “Wrapping nms FormMail in PSGI”.
If not, here’s the minimum you need.
-
Google account and project
-
Go to the Google Cloud Console.
-
Create a new project (e.g. “perl-cgi-cloud-run-demo”).
-
-
Enable billing
-
Cloud Run is pay-as-you-go with a generous free tier, but you must attach a billing account to your project.
-
-
Install the
gcloud CLI
Install the Google Cloud SDK for your platform.
-
Run gcloud init (the commands for this and the following steps are sketched after this list)
and follow the prompts to:
-
log in
-
select your project
-
pick a default region (I’ll assume “europe-west1” below).
-
-
-
Enable required APIs
In your project:
-
Create a Docker repository in Artifact Registry
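The individual commands aren't reproduced in this extract, but a consolidated sketch of the CLI set-up, API enablement and repository creation would look roughly like this (the region and repository name match the assumptions used later in the post):
# one-off CLI set-up: log in, pick the project and a default region
gcloud init

# enable the services Cloud Run and Artifact Registry need
gcloud services enable run.googleapis.com artifactregistry.googleapis.com

# create a Docker repository in Artifact Registry
gcloud artifacts repositories create formmail-repo \
    --repository-format=docker \
    --location=europe-west1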
That’s all the GCP groundwork. Now we can worry about Perl.
The starting point: an old CGI FormMail
Our starting assumption:
-
You already have a CGI script like nms FormMail
-
It’s a single “.pl” file, intended to be dropped into “cgi-bin”
-
It expects to be called via the CGI interface and to send mail by calling /usr/sbin/sendmail
On a traditional host, Apache (or similar) would:
-
parse the HTTP request
-
set CGI environment variables (
REQUEST_METHOD,QUERY_STRING, etc.) -
run
formmail.plas a process -
let it call
/usr/sbin/sendmail
Cloud Run gives us none of that. It gives us:
-
an HTTP endpoint
-
backed by a container
-
listening on a port (
$PORT)
Our job is to recreate just enough of that old environment inside a container.
We’ll do that in two small pieces:
-
A PSGI wrapper that emulates CGI.
-
A sendmail shim so the script can still “talk” sendmail.
Architecture in one paragraph
Inside the container we’ll have:
-
nms FormMail – unchanged CGI script at
/app/formmail.pl -
PSGI wrapper (
app.psgi) – using CGI::Compile and CGI::Emulate::PSGI
Plack/Starlet – a simple HTTP server exposing
app.psgi on $PORT
msmtp-mta – providing
/usr/sbin/sendmail and relaying mail to a real SMTP server
Cloud Run just sees “HTTP service running in a container”. Our CGI script still thinks it’s on an early-2000s shared host.
Step 1 – Wrapping nms FormMail in PSGI
First we write a tiny PSGI wrapper. This is the only new Perl we need:
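The wrapper itself isn't reproduced in this extract; a minimal app.psgi along these lines would do (the /app/formmail.pl path matches the layout described above):
# app.psgi -- minimal sketch of the CGI-to-PSGI wrapper
use strict;
use warnings;

use CGI::Emulate::PSGI;
use CGI::Compile;

# Compile the unchanged CGI script into a coderef, once, at start-up.
my $cgi = CGI::Compile->compile('/app/formmail.pl');

# Fake the CGI environment around it for every incoming PSGI request.
my $app = CGI::Emulate::PSGI->handler($cgi);

$app;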
-
CGI::Compile loads the CGI script and turns its main package into a coderef.
CGI::Emulate::PSGI fakes the CGI environment for each request.
The CGI script doesn’t know or care that it’s no longer being run by Apache.
Later, we’ll run this with:
Step 2 – Adding a sendmail shim
Next problem: Cloud Run doesn’t give you a local mail transfer agent.
There is no real /usr/sbin/sendmail, and you wouldn’t want to run a full MTA in a stateless container anyway.
Instead, we’ll install msmtp-mta, a light-weight SMTP client that includes a sendmail-compatible wrapper. It gives you a /usr/sbin/sendmail binary that forwards mail to a remote SMTP server (Mailgun, SES, your mail provider, etc.).
From the CGI script’s point of view, nothing changes:
We’ll configure msmtp from environment variables at container start-up, so Cloud Run’s --set-env-vars values are actually used.
Step 3 – Dockerfile (+ entrypoint) for Perl, PSGI and sendmail shim
Here’s a complete Dockerfile that pulls this together.
-
We never touch
formmail.pl. It goes into /app and that's it.
msmtp gives us
/usr/sbin/sendmail, so the CGI script stays in its 1990s comfort zone. -
The entrypoint writes
/etc/msmtprc at runtime, so Cloud Run's environment variables are actually used.
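The entrypoint isn't included in this extract either; a minimal sketch, assuming the SMTP details arrive as SMTP_HOST, SMTP_PORT, SMTP_USER, SMTP_PASSWORD and MAIL_FROM environment variables:
#!/bin/sh
# docker-entrypoint.sh -- sketch: write /etc/msmtprc from the environment,
# then hand over to the PSGI server on the port Cloud Run provides.
set -e

cat > /etc/msmtprc <<EOF
defaults
auth            on
tls             on
tls_trust_file  /etc/ssl/certs/ca-certificates.crt
account         default
host            ${SMTP_HOST}
port            ${SMTP_PORT:-587}
user            ${SMTP_USER}
password        ${SMTP_PASSWORD}
from            ${MAIL_FROM}
EOF
chmod 600 /etc/msmtprc

exec plackup -s Starlet --port "${PORT:-8080}" /app/app.psgi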
Step 4 – Building and pushing the image
With the Dockerfile and docker-entrypoint.sh in place, we can build and push the image to Artifact Registry.
I’ll assume:
-
Project ID:
PROJECT_ID -
Region:
europe-west1 -
Repository:
formmail-repo -
Image name:
nms-formmail
First, build the image locally:
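The commands themselves aren't included in this extract; with the names above, the build-and-push step would look roughly like this:
# let Docker authenticate against Artifact Registry (one-off)
gcloud auth configure-docker europe-west1-docker.pkg.dev

# build the image locally, then push it to the repository
docker build -t europe-west1-docker.pkg.dev/PROJECT_ID/formmail-repo/nms-formmail .
docker push europe-west1-docker.pkg.dev/PROJECT_ID/formmail-repo/nms-formmail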
The post Elderly Camels in the Cloud first appeared on Perl Hacks.
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095003 on 2025-11-18, with 799 votes
- Previous CPAN version: 2.095002 was 2 days before
- Author: OLIVER
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.623 on 2025-11-17, with 13 votes
- Previous CPAN version: 0.622 was 8 days before
- Author: ETHER
-
Module::CoreList - what modules shipped with versions of perl
- Version: 5.20251120 on 2025-11-20, with 44 votes
- Previous CPAN version: 5.20251022 was 27 days before
- Author: BINGOS
-
Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service
- Version: 0.992 on 2025-11-22, with 13 votes
- Previous CPAN version: 0.991 was 3 years, 4 months, 5 days before
- Author: BARNEY
-
OpenTelemetry - A Perl implementation of the OpenTelemetry standard
- Version: 0.033 on 2025-11-21, with 30 votes
- Previous CPAN version: 0.032 was 1 day before
- Author: JJATRIA
-
SPVM - The SPVM Language
- Version: 0.990107 on 2025-11-18, with 36 votes
- Previous CPAN version: 0.990106 was 6 days before
- Author: KIMOTO
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.008005 on 2025-11-20, with 145 votes
- Previous CPAN version: 2.008004 was 1 month, 3 days before
- Author: TOBYINK
-
XML::Feed - XML Syndication Feed Support
- Version: v1.0.0 on 2025-11-17, with 19 votes
- Previous CPAN version: 0.65 was 1 year, 4 months, 8 days before
- Author: DAVECROSS

Dave writes:
Last month was mostly spent doing a second big refactor of ExtUtils::ParseXS. My previous refactor converted the parser to assemble each XSUB into an Abstract Syntax Tree (AST) and only then emit the C code for it (previously the parsing and C code emitting were interleaved on the fly). This new work extends that so that the whole XS file is now one big AST, and the C code is only generated once all parsing is complete.
As well as fixing lots of minor parsing bugs along the way, another benefit of this big refactoring is that ExtUtils::ParseXS becomes manageable once again. Rather than one big 1400-line parsing loop, the parsing and code generation are split up into lots of little methods in subclasses which represent the nodes of the AST and which process just one thing.
As an example, the logic which handled (permissible) duplicate XSUB declarations in different C preprocessor branches, such as
#ifdef USE_2ARG
int foo(int i, int j)
#else
int foo(int i)
#endif
used to be spread over many parts of the program; it's now almost all concentrated into the parsing and code-emitting methods of a single Node subclass.
This branch is currently pushed and undergoing review.
My earlier work on rewriting the XS reference manual, perlxs.pod, was made into a PR a month ago, and this month I revised it based on reviewers' feedback.
Summary:
- 11:39 modernise perlxs.pod
- 64:57 refactor ExtUtils::ParseXS: file-scoped AST
Total:
- 76:36 (HH::MM)
Do you thrive in a fast-paced scale-up environment, surrounded by an ambitious and creative team?
We’re on a mission to make payments simple, secure, and accessible for every business. With powerful in-house technology and deep expertise, our modular platform brings online, in-person, and cross-border payments together in one place — giving merchants the flexibility to scale on their own terms. Through a partnership-first approach, we tackle complexity head-on, keep payments running smoothly, and boost success rates. It’s how we level the playing field for businesses of all sizes and ambitions.
Join a leading tech company driving innovation in the payments industry. You’ll work with global leaders like Visa and Mastercard, as well as next-generation “pay later” solutions such as Klarna and Afterpay. Our engineering teams apply Domain-Driven Design (DDD) principles and a microservices architecture to build scalable and maintainable systems.
•Develop and maintain Perl-based applications and systems to handle risk management, monitoring, and onboarding processes
•Collaborate with other developers, and cross-functional teams to define, design, and deliver new features and functionalities
•Assist in the migration of projects from Perl to other languages, such as Java, while ensuring the smooth operation and transition of systems
•Contribute to code reviews and provide valuable insights to uphold coding standards and best practices
•Stay up to date with the latest industry trends and technologies to drive innovation and enhance our products
Company policy is on-site, with 1-2 workdays from home depending on your location.