I am trying to read (and possibly write to) a spreadsheet in Perl. Google told me to try Spreadsheet::Read; cpan told me it didn't exist, as in:
Could not expand [spreadsheet::read]. Check the module name.
I used -x, and it included a suggestion to use Spreadsheet::XLSX, since that was the original type of the spreadsheet I was trying to read. Here are the first few lines of code:
use Spreadsheet::XLSX;
my $parser = Spreadsheet::ParseExcel->new();
my $workbook = $parser->parse('Holiday Gift Fund.XLSx');
if ( !defined $workbook ) {
die $parser->error();
}
When I run it under the Perl debugger, I get the message: File not found at calc.pl line 7.
at calc.pl line 7.
Debugged program terminated. Use q to quit or R to restart.
The .xlsx file does exist. Any suggestions for reading it?
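Two things worth checking before reaching for a different parser, sketched below as a hedged, core-Perl-only diagnostic (no CPAN modules needed): CPAN module names are case-sensitive, so the lookup has to be Spreadsheet::Read rather than spreadsheet::read; and a "File not found" from the parser usually means the script's current working directory isn't what you expect, or the filename's case ('.XLSx') doesn't match the file on disk on a case-sensitive filesystem:

```perl
use strict;
use warnings;
use Cwd qw(getcwd);   # core module

# Report whether a file is visible from the current working directory.
sub check_readable {
    my ($file) = @_;
    return -e $file ? "found '$file'"
                    : "cannot see '$file' from " . getcwd();
}

# The filename from the question; note the unusual '.XLSx' case, which
# matters on case-sensitive filesystems.
print check_readable('Holiday Gift Fund.XLSx'), "\n";
```

If the file checks out, the parse call itself is the next suspect; but ruling out the path first is cheap.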
Fix spelling of 'unrecognised' to en_US version
Seeking community wisdom on why the ApacheBench utility consistently returns a lot of "Non-2xx responses" (502 Bad Gateway) when benchmarking my helloworld web app built with Perl's Net::Async::FastCGI, with Nginx as a reverse proxy. The number of concurrent requests is pretty low at 50, but it still returns lots of "Non-2xx responses". Any insight is greatly appreciated. Please see below for the test code and the ApacheBench command used for the benchmark.
It'd be great if someone could try running the same thing and see if you experience the same issue. All you need is Perl 5.14 or above: install the Net::Async::FastCGI module and the Nginx web server, configure Nginx with the little snippet below to hook it up to the helloworld FastCGI script, then run the script in one terminal and start Nginx in another.
ApacheBench command:
ab -l -v 2 -n 100 -c 50 "http://localhost:9510/helloworld/"
which returns:
...
Concurrency Level:      50
Time taken for tests:   0.015 seconds
Complete requests:      100
Failed requests:        0
Non-2xx responses:      85    # NOTE: all of these are "502 Bad Gateway"
...

Nginx config:

location /helloworld/ {
    proxy_buffering off;
    gzip off;
    fastcgi_pass unix:/testFolder/myPath/myUDS.sock;
    include fastcgi_params;
}

helloworld test script:
use strict ;
use warnings ;
use IO::Async::Loop ;
use Net::Async::FastCGI ;
# This script will respond to HTTP requests with a simple "Hello, World!" message.
#If using TCP port for communication:
#my $PORT = 9890 ;
#If using Unix domain socket for communication:
my $uds = 'myUDS.sock' ;
# Create an event loop
my $loop = IO::Async::Loop->new() ;
# Define the FastCGI request handler subroutine
sub on_request {   # Parms: $fcgi (Net::Async::FastCGI), $req (Net::Async::FastCGI::Request); returns nothing
my ( $fcgi, $req ) = @_ ;
# Prepare the HTTP response
my $response = "Hello, World!\n" ;
my $respLen = length( $response ) ;
# Print HTTP response headers
$req->print_stdout(
"Status: 200 OK" . "\n" .
"Content-type: text/plain" . "\n" .
"Content-length: " . $respLen . "\n" .
"\n" .
$response
) ;
# Finish the request
$req->finish() ;
} # end sub
# Create a new FastCGI server instance
my $fcgi = Net::Async::FastCGI->new(
#handle => \*STDIN , # Read FastCGI requests from STDIN
on_request => \&on_request , # Assign the request handler subroutine
) ;
# Add the FastCGI server instance to the event loop
$loop->add($fcgi) ;
$fcgi->listen(
#service => $PORT , #if using TCP portnum
addr => {# if using Unix domain socket
family => "unix" ,
socktype => "stream" ,
path => "$uds" ,
} ,
#host => '127.0.0.1' ,
on_resolve_error => sub { print "Cannot resolve - $_[-1]\n" } ,
on_listen_error => sub { print "Cannot listen - $_[-1]\n" } ,
) ;
# Remove the socket file on termination
for my $sig ( qw( HUP TERM INT ) ) {
    $SIG{ $sig } = sub {
        unlink $uds ;
        exit ;
    } ;
}
# Run the event loop
$loop->run() ;
As the title says, this is a pure appreciation post for the defer feature.
I just use it like:
use feature 'defer';               # experimental, perl v5.36+
no warnings 'experimental::defer';

chdir $any_path or die $!;
defer { chdir ".." }
I know this is silly, but it actually makes my day easier :)
Revert "handy.h: Add void * casts to memEQ, memNE" This reverts commit 9d3980bc229750e6c07726fe529f02bf4dc6a5a5. Adding casts to macros can create problems, potentially hiding real issues. These casts caused Tony Cook some lost time recently; whatever prompted this commit isn't showing up on my box, so I'll try reverting it to see if something shows up on some other platform.
make pregexec() handle zero-length strings again GH #23903

In embed.fnc, commit v5.43.3-167-g45ea12db26 added SPTR, EPTR parameter modifiers to (amongst other API functions) Perl_pregexec(). These cause assert constraints to be added to the effect that SPTR < EPTR (since the latter is supposed to be a pointer to the byte after the last character in the string). This falls down for an empty string, since in this case pregexec() is called with strbeg == strend. This was causing an assert failure in the test suite for Package-Stash-XS.

The reason it wasn't noticed before is because:
1) pregexec() is a thin wrapper over regexec_flags();
2) the perl core (e.g. pp_match()) calls regexec_flags() rather than pregexec();
3) Package::Stash::XS has XS code which calls pregexec() directly rather than using CALLREGEXEC() (which would call regexec_flags());
4) in embed.fnc, regexec_flags()'s strend parameter is declared as NN rather than EPTR, so it doesn't get the assert added.
So very little code was actually using pregexec().

This commit, for now, changes pregexec()'s strend parameter from EPTR to EPTRQ, which has the net effect of allowing zero-length strings to be passed, and thus fixes the CPAN issue. But longer term, we need to decide: is the general logic for EPTR wrong? Should the assert be SPTR <= EPTR? And should EPTR be applied to regexec_flags()'s strend parameter too?
perlapi: Combine grok_(bin|hex|oct) into single entry
handy.h: Avoid UB in nBIT_MASK() I discovered the hard way that this is undefined behavior when operating on the widest unsigned integer type available on the platform. I couldn't think of a way to write this without a branch that worked both for that condition and for a zero-length mask.
A plenv plugin to show which Perl versions have a particular module.
I use plenv daily to manage the many Perl configurations which I use for different projects. Sometimes I have to install huge collections of Perl modules for some specific use case. And then I forget which Perl installation under plenv it was where I installed them.
So I wrote this plugin to fix that.
Example use cases:
$ plenv where Dist::Zilla
5.24.4
5.28.2
5.34.1-dzil
5.39.2
It can also report the actual path and/or the module version:
$ plenv where --path --module-version Dist::Zilla
/[..]versions/5.24.4/lib/perl5/site_perl/5.24.4/Dist/Zilla.pm 6.031
/[..]versions/5.28.2/lib/perl5/site_perl/5.28.2/Dist/Zilla.pm 6.032
/[..]versions/5.34.1-dzil/lib/perl5/site_perl/5.34.1/Dist/Zilla.pm 6.033
/[..]versions/5.39.2/lib/perl5/site_perl/5.39.2/Dist/Zilla.pm 6.030
Configuration
This plugin also uses a configuration file. plenv where reads its configuration from ${XDG_CONFIG_HOME}/plenv/where or, if the variable XDG_CONFIG_HOME is not set, from ${HOME}/.config/plenv/where. In the config file, we place every option on its own line.
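For instance (hypothetical file contents; I'm assuming the options are spelled exactly as on the command line, one per line, as the description above says), a config that always reports paths and module versions might look like:

```
--path
--module-version
```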
Installation
The installation is manual.
mkdir -p "$(plenv root)/plugins"
git clone https://github.com/mikkoi/plenv-where.git "$(plenv root)/plugins/plenv-where"
Musical Interlude
The movie version of Wicked is in theaters right now, so I am reminded of the song For Good -- but I'm gonna link to the Broadway version, because I'm classy like that. It's relevant to programming in Perl because "I don't know if I've been changed for the better, but I have been changed for good." For part two, Lido Shuffle by Boz Scaggs.
Task 1: Good Substrings
The Task
You are given a string. Write a script to return the number of good substrings of length three in the given string. A string is good if there are no repeated characters.
- Example 1: Input: $str = "abcaefg", Output: 5
- Good substrings of length 3: abc, bca, cae, aef and efg
- Example 2: Input: $str = "xyzzabc", Output: 3
- Example 3: Input: $str = "aababc", Output: 1
- Example 4: Input: $str = "qwerty", Output: 4
- Example 5: Input: $str = "zzzaaa", Output: 0
The Think-y Part
There's probably a regular expression for this, but I'm not going to find it. Do the simplest thing that works: take three characters at a time and see if they're different.
The Code-y Part
sub goodSubstring($str)
{
my $good = 0;
my @s = split(//, $str);
for ( 0 .. $#s - 2 )
{
my ($first, $second, $third) = @s[$_, $_+1, $_+2];
$good++ if ( $first ne $second && $first ne $third && $second ne $third );
}
return $good;
}
Notes:
- Start by turning the string into a list of characters. It could be done with substr, but that would be untidy.
- @s[$_, $_+1, $_+2] -- With a nod to readability, I'll extract three consecutive characters with an array slice. It occurs to me that I'll always have two of the next three characters in hand at the bottom of the loop, so doing a complete slice every time could probably be optimized, but I declare it "good" enough.
- Since there are exactly three characters in play, check for uniqueness in the most obvious way.
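Following up on "there's probably a regular expression for this": one hedged possibility (my sketch, not part of the original solution) uses a zero-width lookahead at every position, so overlapping windows still count, with backreferences to demand three pairwise-distinct characters:

```perl
use strict;
use warnings;

# Count length-3 substrings with no repeated characters, via regex.
# The lookahead matches at each position whose next three characters are
# pairwise distinct; the /g "bump-along" on zero-width matches visits
# every starting position exactly once.
sub good_by_regex {
    my ($str) = @_;
    my $good = 0;
    $good++ while $str =~ /(?=(.)((?!\1).)((?!\1|\2).))/g;
    return $good;
}

print good_by_regex("abcaefg"), "\n";   # 5, matching Example 1
```

Whether this is clearer than the explicit loop is a matter of taste; it does at least avoid the split.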
Task 2: Shuffle Pairs
The Task
If two integers A <= B have the same digits but in different orders, we say that they belong to the same shuffle pair if and only if there is an integer k such that B = A * k. k is called the witness of the pair. For example, 1359 and 9513 belong to the same shuffle pair, because 1359 * 7 = 9513.
Interestingly, some integers belong to several different shuffle pairs. For example, 123876 forms one shuffle pair with 371628, and another with 867132, as 123876 * 3 = 371628, and 123876 * 7 = 867132.
Write a function that for a given $from, $to, and $count returns the number of integers $i in the range $from <= $i <= $to that belong to at least $count different shuffle pairs.
- Example 1:
- Input: $from = 1, $to = 1000, $count = 1
- Output: 0
- There are no shuffle pairs with elements less than 1000.
- Example 2:
- Input: $from = 1500, $to = 2500, $count = 1
- Output: 3
- There are 3 integers between 1500 and 2500 that belong to shuffle pairs.
- 1782, the other element is 7128 (witness 4)
- 2178, the other element is 8712 (witness 4)
- 2475, the other element is 7425 (witness 3)
- Example 3:
- Input: $from = 1_000_000, $to = 1_500_000, $count = 5
- Output: 2
- There are 2 integers in the given range that belong to 5 different shuffle pairs.
- 1428570 pairs with 2857140, 4285710, 5714280, 7142850, and 8571420
- 1429857 pairs with 2859714, 4289571, 5719428, 7149285, and 8579142
- Example 4:
- Input: $from = 13_427_000, $to = 14_100_000, $count = 2
- Output: 11
- 6 integers in the given range belong to 3 different shuffle pairs,
- 5 integers belong to 2 different ones.
- Example 5:
- Input: $from = 1030, $to = 1130, $count = 1
- Output: 2
- There are 2 integers between 1030 and 1130 that belong to at least one shuffle pair:
- 1035, the other element is 3105 (witness k = 3)
- 1089, the other element is 9801 (witness k = 9)
Deliberations
It takes a minute to digest this one.
I first wondered if there's some algebraic number theory trick that would cut the search space way down, but that made my head hurt, so I moved to doing what computers do best: grinding through a lot of possibilities.
A bad first thought was to try all combinations of the digits, but that's going to die an excruciatingly slow death on the crucifix of combinatorics, not to mention that we'd be completely wasting our time on all but a few combinations.
A better thought is to look only at the multiples of a given number. There are at most 8 multiples of a number in play: the 10th would add a digit, and therefore can't possibly be a reordering. Examples 3 and 4 have a lot of numbers to grind through, but how long can it take, really? It's one banana, Michael; how much could it cost, ten dollars?
How will I decide that a number is a re-ordering? I think I'll reduce each number to a canonical form where the digits are sorted, then use string compare to see if a multiple has the same canonical form.
To the bat-editor, Robin!
First, a little function to turn a number into a canonical form with its digits in sorted order. Turn the number into a list of digits, sort, and then join the digits back into a string.
sub canonical($n)
{
join("", sort split(//, $n));
}
Now the main course. I'll want to examine every number in the range $from to $to, inclusive. For each number, I'll want to examine its multiples to see if they have the same digits. I need to count the ones that work so that I can check that there are at least $count of them.
sub shufflePair($from, $to, $count)
{
my $answer = 0;
for my $n ( $from .. $to )
{
my $base = canonical($n);
my $max = (9 x length($n))+0;
my $pair = 0;
for ( 2 .. 9 )
{
my $multiple = $n * $_;
next if $multiple > $max
|| index($base, substr($multiple, 0, 1)) < 0
|| index($base, substr($multiple, -1, 1)) < 0;
if ( canonical($multiple) eq $base )
{
$pair++;
}
}
$answer++ if $pair >= $count;
}
return $answer;
}
Notes:
- my $base = canonical($n) -- hang on to this for comparison.
- my $max = (9 x length($n)) + 0; -- An optimization. The maximum number we need to be concerned with is one that has the same number of digits but is all 9s. For example, if $n is 480, then we are dealing with 3-digit numbers, so the largest possible is 999. That's less than 480*3 = 1440, so we don't have to examine any of the multiples beyond 480*2.
- for ( 2..9 ) -- These are the only multiples of $n that could possibly have the same number of digits.
- next if ... -- Besides the check on $max, we can make cheap checks on a single digit. If the first or last digit of the multiple isn't one of the possible digits, we can avoid the overhead of canonical(), which isn't horrendous, but it does involve allocating lists and a sort.
- canonical($multiple) eq $base -- This string compare is where we decide if we have a shuffle pair.
- $answer++ if $pair >= $count -- We increment the answer if this number has at least $count shuffle pairs.
This solution takes a few seconds to run the examples. My optimizations to bail early in many cases cut the run time approximately in half (from about 8 seconds to about 4.5).
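As a quick, self-contained sanity check of the canonical-form idea (re-implementing canonical() here so the snippet runs on its own), Example 2's first pair reduces to the same sorted digit string:

```perl
use strict;
use warnings;

# Reduce a number to its canonical form: digits sorted ascending.
sub canonical { join "", sort split //, $_[0] }

my ($n, $k) = (1782, 4);
my $m = $n * $k;                                   # 7128
print "$n vs $m: ", canonical($n), " eq ", canonical($m), "\n";
```

Both 1782 and 7128 canonicalize to "1278", so the string compare correctly flags them as a shuffle pair with witness 4.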
Introducing PAGI: Async Web Development for Perl
TL;DR: PAGI (Perl Asynchronous Gateway Interface) is a new specification for async Perl web applications, inspired by Python's ASGI. It supports HTTP, WebSockets, and Server-Sent Events natively, and can wrap existing PSGI applications for backward compatibility.
The Problem
Modern web applications need more than traditional request-response cycles. Real-time features like live notifications, collaborative editing, and streaming data require persistent connections. This means:
- WebSockets for bidirectional communication
- Server-Sent Events for efficient server push
- Streaming responses for large payloads
- Connection lifecycle management for resource pooling
PSGI, Perl's venerable web server interface, assumes a synchronous world. While frameworks like Mojolicious have built async capabilities on top, there's no shared standard that allows different async frameworks and servers to interoperate.
PAGI aims to fill that gap.
What is PAGI?
PAGI defines a standard interface between async-capable Perl web servers and applications. If you're familiar with Python's ecosystem, think of it as Perl's answer to ASGI.
A PAGI application is an async coderef with three parameters:
use Future::AsyncAwait;
use experimental 'signatures';
async sub app ($scope, $receive, $send) {
# $scope - connection metadata (type, headers, path, etc.)
# $receive - async coderef to get events from the client
# $send - async coderef to send events to the client
}
The $scope->{type} tells you what kind of connection you're handling:
| Type | Description |
|---|---|
| http | Standard HTTP request/response |
| websocket | Persistent WebSocket connection |
| sse | Server-Sent Events stream |
| lifespan | Application startup/shutdown lifecycle |
A Simple HTTP Example
Here's "Hello World" in raw PAGI:
use Future::AsyncAwait;
async sub app ($scope, $receive, $send) {
die "Unsupported: $scope->{type}" if $scope->{type} ne 'http';
await $send->({
type => 'http.response.start',
status => 200,
headers => [['content-type', 'text/plain']],
});
await $send->({
type => 'http.response.body',
body => 'Hello from PAGI!',
more => 0,
});
}
Run it:
pagi-server --app app.pl --port 5000
curl http://localhost:5000/
# => Hello from PAGI!
The response is split into http.response.start (headers) and http.response.body (content). This separation enables streaming—send multiple body chunks with more => 1 before the final more => 0.
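To make the chunking concrete without a PAGI server, here is a runnable simulation (my sketch; the mock $send below merely collects events into an array and is not the real PAGI API): one http.response.start event, then body chunks flagged more => 1 until the final chunk with more => 0:

```perl
use strict;
use warnings;

my @sent;
my $send = sub { push @sent, $_[0] };   # mock: record events instead of sending

# Headers first...
$send->({ type => 'http.response.start', status => 200,
          headers => [ [ 'content-type', 'text/plain' ] ] });
# ...then the body in chunks; more => 1 means "more chunks follow".
$send->({ type => 'http.response.body', body => "chunk 1\n", more => 1 });
$send->({ type => 'http.response.body', body => "chunk 2\n", more => 0 });

# Reassemble what a client would see.
my $body = join "", map { $_->{body} // "" } @sent;
print $body;
```

In a real PAGI app each $send->(...) would be awaited, but the event sequence is the same.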
WebSocket Support
WebSockets are first-class citizens in PAGI:
async sub app ($scope, $receive, $send) {
if ($scope->{type} eq 'websocket') {
await $send->({ type => 'websocket.accept' });
while (1) {
my $event = await $receive->();
if ($event->{type} eq 'websocket.receive') {
my $msg = $event->{text} // $event->{bytes};
await $send->({
type => 'websocket.send',
text => "Echo: $msg",
});
}
elsif ($event->{type} eq 'websocket.disconnect') {
last;
}
}
}
else {
die "Unsupported: $scope->{type}";
}
}
The event loop pattern is consistent across all connection types: await events from $receive, send responses via $send.
PSGI Compatibility
One of PAGI's key features is backward compatibility with PSGI. The PAGI::App::WrapPSGI adapter lets you run existing PSGI applications on a PAGI server:
use PAGI::App::WrapPSGI;
# Your existing Catalyst/Dancer/Plack app
my $psgi_app = MyApp->psgi_app;
my $wrapper = PAGI::App::WrapPSGI->new(psgi_app => $psgi_app);
$wrapper->to_app;
The wrapper handles all the translation: building %env from PAGI scope, collecting request bodies, and converting responses back to PAGI events.
This means you can:
- Run legacy applications on a PAGI server
- Add WebSocket endpoints alongside existing routes
- Migrate incrementally from PSGI to PAGI
- Share connection pools between old and new code
PAGI::Simple Micro-Framework
For rapid development, PAGI ships with a micro-framework inspired by Express.js:
use PAGI::Simple;
my $app = PAGI::Simple->new(name => 'My API');
$app->get('/' => sub ($c) {
$c->text('Hello, World!');
});
$app->get('/users/:id' => sub ($c) {
my $id = $c->path_params->{id};
$c->json({ user_id => $id });
});
$app->post('/api/data' => sub ($c) {
my $data = $c->json_body;
$c->json({ received => $data, status => 'ok' });
});
$app->to_app;
WebSockets are equally clean:
$app->websocket('/chat' => sub ($ws) {
$ws->on(message => sub ($data) {
$ws->broadcast("Someone said: $data");
});
});
PAGI::Simple includes:
- Express-style routing with path parameters
- JSON request/response helpers
- Session management
- Middleware support (CORS, logging, rate limiting, etc.)
- Static file serving
- WebSocket rooms and broadcasting
- SSE channels with pub/sub
Current Status
PAGI is currently in early beta. The test suite passes, the examples work, but it hasn't been battle-tested in production.
What exists today:
- Complete PAGI specification
- Reference server implementation (PAGI::Server)
- PAGI::Simple micro-framework
- 13 example applications
- PSGI compatibility layer
- 483 passing tests
What it needs:
- Developers willing to experiment and provide feedback
- Real-world testing
- Framework authors interested in building on PAGI
- Performance profiling and optimization
Getting Started
git clone https://github.com/jjn1056/pagi.git
cd pagi
cpanm --installdeps .
prove -l t/
# Try the examples
pagi-server --app examples/01-hello-http/app.pl --port 5000
pagi-server --app examples/simple-01-hello/app.pl --port 5000
Why This Matters
Perl has excellent async primitives (IO::Async, Future::AsyncAwait), but no shared specification for async web applications. Each framework implements its own approach, which limits interoperability.
PAGI provides that shared foundation. By standardizing on a common interface:
- Servers can focus on performance and protocol handling
- Frameworks can focus on developer experience
- Middleware becomes portable across implementations
- The ecosystem can grow together rather than in isolation
If you're interested in the future of async Perl web development, I'd love your feedback. Check out the repository, try the examples, and let me know what you think.
Repository: github.com/jjn1056/pagi
PAGI is not yet on CPAN. It's experimental software—please don't use it in production unless you really know what you're doing.
A language awakens the moment its community shares what it has lived and built.

I attended the London Perl & Raku Workshop 2025 last Saturday. Please find the detailed event report: https://theweeklychallenge.org/blog/lpw-2025
Originally published at Perl Weekly 749
Hi there!
The big announcement is that Mohammad Sajid Anwar who runs The Weekly Challenge and who is the other editor of the Perl Weekly newsletter, has published his first book called Design Patterns in Modern Perl. You can buy it both on Amazon and on Leanpub. Leanpub gives you the option to change the price so you can also use this opportunity to give a one-time donation to him. As far as I know, Leanpub also gives a much bigger part of the price to the author than Amazon does. You can also ask them to send the book to your Kindle or you can upload it yourself. I already bought it and started to read it. Now you go, buy the book!
In just a few hours we are going to have the online meeting Perl Code-reading and testing. You can still register here.
Perl on WhatsApp: I am part of a lot of WhatsApp groups about Python and Rust and other non-tech stuff. I figured I could create one for Perl as well. If you are interested join here. There are also two groups on Telegram. One is called Perl 5 that has 141 members and the other one is called Perl Maven community that I created, because I did not know about the other one. The latter has 59 members. You are invited to join any or all of these channels.
I started a poll in the Perl Community Facebook group. There are already 63 votes. It would be nice if you answered too.
Enjoy your week!
--
Your editor: Gabor Szabo.
Announcements
Design Patterns in Modern Perl
Manwar, congratulations! Everyone else, go buy the book! (comments)
Articles
ANNOUNCE: Various updated wikis, including Perl.Wiki
Dotcom Survivor Syndrome – How Perl’s Early Success Created the Seeds of Its Downfall
I like the sentiment, but as one of the commenters pointed out there was PHP as well.
GitHub and the Perl License
In a nutshell, if you'd like to use 'the Perl license' you probably should include two separate license files. (comments)
Showcase: Localised JSON Schema validation in Perl + JavaScript (CodePen demo included!)
A small project that might interest anyone in dealing with form validation, localisation, and JSON Schema in their Perl web applications / REST API.
Web
Catalyst::Request body issues with the file position pointer
For those using the Perl Catalyst web framework in ways involving structured request bodies (e.g. API POSTs)...
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 350
Welcome to a new week with a couple of fun tasks "Good Substrings" and "Shuffle Pairs". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 349
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Power String" and "Meeting Point" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
TWC349
Both solutions use a straightforward, single-pass approach that is perfectly suited for the problem. They process the input string only once, making them very efficient (O(n)) and demonstrating a solid grasp of fundamental algorithmic thinking.
Power Pointing
The post is an excellent, practical demonstration of Raku's expressiveness and built-in functionality. It successfully showcases how Raku allows a programmer to transition from a straightforward, imperative approach to a concise, idiomatic, and highly readable functional solution.
More complex than it has to be
The post presents a fascinating and honest case study of over-engineering. Bob deliberately explores a complex, "enterprise-grade" solution to a simple problem, contrasting it with the obvious simple solution.
Meeting Strings
This is an exceptionally well-crafted post. It demonstrates a deep understanding of Raku's idioms and standard library, transforming simple problems into masterclasses in concise, expressive, and functional programming.
moving and grepping
This is an exemplary post that demonstrates exceptional technical breadth, deep practical knowledge, and a clear, effective pedagogical style. It transcends being a mere solution set and serves as a masterclass in polyglot programming and database extensibility.
Perl Weekly Challenge 349
The post demonstrates both deep Perl knowledge and strong pedagogical skills, making complex solutions accessible while showcasing advanced language features.
Power Meets Points
This post demonstrates expert-level Perl programming with deep language knowledge and thoughtful engineering considerations. Matthias combines elegant solutions with practical performance analysis.
The Power of String
This is a high-quality technical post that successfully demonstrates how to solve the same problems in multiple programming languages while maintaining algorithmic consistency.
Powering to the origin
This post demonstrates creative problem-solving with elegant regex decrementing for Task 1 and a clever eval-based dispatch system for Task 2. Peter shows strong analytical thinking by carefully distinguishing between final-position and intermediate-position checks, and makes practical engineering trade-offs between cleverness and performance.
The Weekly Challenge #349
This is a well-structured, professional-grade solution with excellent documentation and robust code organization. Robbie demonstrates strong analytical thinking by carefully addressing potential ambiguities in the problem statement and explicitly warning against common algorithmic pitfalls.
Power Meeting
Roger demonstrates strong analytical skills by questioning the problem statement itself and providing robust solutions for different interpretations, showing both practical implementation skills and deeper algorithmic thinking.
Power Meeting
This is a clean, practical, and well-explained approach to the weekly challenges. Simon demonstrates strong fundamentals with a clear, step-by-step problem-solving methodology.
Weekly collections
NICEPERL's lists
Great CPAN modules released last week.
Events
Perl Maven online: Code-reading and testing
December 1, 2025
Toronto.pm - online - How SUSE is using Perl
December 6, 2025
Paris.pm monthly meeting
December 10, 2025
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Get them from My Wiki Haven and Symboliciq.au. Details:
- Perl.Wiki V 1.35
- Mojolicious Wiki V 1.10
- Debian Wiki V 1.11
- (New) PHP Wiki V 1.01
- Symbolic Language Wiki V 1.18 (at symboliciq.au)
If you were building web applications during the first dot-com boom, chances are you wrote Perl. And if you’re now a CTO, tech lead, or senior architect, you may instinctively steer teams away from it—even if you can’t quite explain why.
This reflexive aversion isn’t just a preference. It’s what I call Dotcom Survivor Syndrome: a long-standing bias formed by the messy, experimental, high-pressure environment of the early web, where Perl was both a lifeline and a liability.
Perl wasn’t the problem. The conditions under which we used it were. And unfortunately, those conditions, combined with a separate, prolonged misstep over versioning, continue to distort Perl’s reputation to this day.
The Glory Days: Perl at the Heart of the Early Web
In the mid- to late-1990s, Perl was the web’s duct tape.
- It powered CGI scripts on Apache servers.
- It automated deployments before DevOps had a name.
- It parsed logs, scraped data, processed form input, and glued together whatever needed glueing.
Perl 5, released in 1994, introduced real structure: references, modules, and the birth of CPAN, which became one of the most effective software ecosystems in the world.
Perl wasn’t just part of the early web—it was instrumental in creating it.
The Dotcom Boom: Shipping Fast and Breaking Everything
To understand the long shadow Perl casts, you have to understand the speed and pressure of the dot-com boom.
We weren’t just building websites.
We were inventing how to build websites.
Best practices? Mostly unwritten.
Frameworks? Few existed.
Code reviews? Uncommon.
Continuous integration? Still a dream.
The pace was frantic. You built something overnight, demoed it in the morning, and deployed it that afternoon. And Perl let you do that.
But that same flexibility—its greatest strength—became its greatest weakness in that environment. With deadlines looming and scalability an afterthought, we ended up with:
- Thousands of lines of unstructured CGI scripts
- Minimal documentation
- Global variables everywhere
- Inline HTML mixed with business logic
- Security holes you could drive a truck through
When the crash came, these codebases didn’t age gracefully. The people who inherited them, often the same people who now run engineering orgs, remember Perl not as a powerful tool, but as the source of late-night chaos and technical debt.
Dotcom Survivor Syndrome: Bias with a Backstory
Many senior engineers today carry these memories with them. They associate Perl with:
- Fragile legacy systems
- Inconsistent, "write-only" code
- The bad old days of early web development
And that’s understandable. But it also creates a bias—often unconscious—that prevents Perl from getting a fair hearing in modern development discussions.
Version Number Paralysis: The Perl 6 Effect
If Dotcom Survivor Syndrome created the emotional case against Perl, then Perl 6 created the optical one.
In 2000, Perl 6 was announced as a ground-up redesign of the language. It promised modern syntax, new paradigms, and a bright future. But it didn’t ship—not for a very long time.
In the meantime:
- Perl 5 continued to evolve quietly, but with the implied expectation that it would eventually be replaced.
- Years turned into decades, and confusion set in. Was Perl 5 deprecated? Was Perl 6 compatible? What was the future of Perl?
To outsiders—and even many Perl users—it looked like the language was stalled. Perl 5 releases were labelled 5.8, 5.10, 5.12… but never 6. Perl 6 finally emerged in 2015, but as an entirely different language, not a successor.
Eventually, the community admitted what everyone already knew: Perl 6 wasn’t Perl. In 2019, it was renamed Raku.
But the damage was done. For nearly two decades, the version number “6” hung over Perl 5 like a storm cloud – a constant reminder that its future was uncertain, even when that wasn’t true.
This is what I call Version Number Paralysis:
- A stalled major version that made the language look obsolete.
- A missed opportunity to signal continued relevance and evolution.
- A marketing failure that deepened the sense that Perl was a thing of the past.
Even today, many developers believe Perl is “stuck at version 5,” unaware that modern Perl is actively maintained, well-supported, and quite capable.
While Dotcom Survivor Syndrome left many people with an aversion to Perl, Version Number Paralysis gave them an excuse not to look closely at Perl to see if it had changed.
What They Missed While Looking Away
While the world was confused or looking elsewhere, Perl 5 gained:
-
Modern object systems (Moo, Moose)
-
A mature testing culture (Test::More, Test2)
-
Widespread use of best practices (Perl::Critic, perltidy, etc.)
-
Core team stability and annual releases
-
Huge CPAN growth and refinements
But those who weren’t paying attention, especially those still carrying dotcom-era baggage, never saw it. They still think Perl looks like it did in 2002.
Can We Move On?
Dotcom Survivor Syndrome is real. So is Version Number Paralysis. Together, they’ve unfairly buried a language that remains fast, expressive, and battle-tested.
We can’t change the past. But we can:
-
Acknowledge the emotional and historical baggage
-
Celebrate the role Perl played in inventing the modern web
-
Educate developers about what Perl really is today
-
Push back against the assumption that old == obsolete
Conclusion
Perl’s early success was its own undoing. It became the default tool for the first web boom, and in doing so, it took the brunt of that era’s chaos. Then, just as it began to mature, its versioning story confused the industry into thinking it had stalled.
But the truth is that modern Perl is thriving quietly in the margins – maintained by a loyal community, used in production, and capable of great things.
The only thing holding it back is a generation of developers still haunted by memories of CGI scripts, and a version number that suggested a future that never came.
Maybe it’s time we looked again.
The post Dotcom Survivor Syndrome – How Perl’s Early Success Created the Seeds of Its Downfall first appeared on Perl Hacks.
If you were building web applications during the first dot-com boom, chances are you wrote Perl. And if you’re now a CTO, tech lead, or senior architect, you may instinctively steer teams away from it—even if you can’t quite explain why.
This reflexive aversion isn’t just a preference. It’s what I call Dotcom Survivor Syndrome: a long-standing bias formed by the messy, experimental, high-pressure environment of the early web, where Perl was both a lifeline and a liability.
Perl wasn’t the problem. The conditions under which we used it were. And unfortunately, those conditions, combined with a separate, prolonged misstep over versioning, continue to distort Perl’s reputation to this day.
The Glory Days: Perl at the Heart of the Early Web
In the mid- to late-1990s, Perl was the web’s duct tape.
It powered CGI scripts on Apache servers.
It automated deployments before DevOps had a name.
It parsed logs, scraped data, processed form input, and glued together whatever needed glueing.
Perl 5, released in 1994, introduced real structure: references, modules, and the birth of CPAN, which became one of the most effective software ecosystems in the world.
Perl wasn’t just part of the early web—it was instrumental in creating it.
The Dotcom Boom: Shipping Fast and Breaking Everything
To understand the long shadow Perl casts, you have to understand the speed and pressure of the dot-com boom.
We weren’t just building websites.
We were inventing how to build websites.
Best practices? Mostly unwritten.
Frameworks? Few existed.
Code reviews? Uncommon.
Continuous integration? Still a dream.
The pace was frantic. You built something overnight, demoed it in the morning, and deployed it that afternoon. And Perl let you do that.
But that same flexibility—its greatest strength—became its greatest weakness in that environment. With deadlines looming and scalability an afterthought, we ended up with:
Thousands of lines of unstructured CGI scripts
Minimal documentation
Global variables everywhere
Inline HTML mixed with business logic
Security holes you could drive a truck through
When the crash came, these codebases didn’t age gracefully. The people who inherited them, often the same people who now run engineering orgs, remember Perl not as a powerful tool, but as the source of late-night chaos and technical debt.
Dotcom Survivor Syndrome: Bias with a Backstory
Many senior engineers today carry these memories with them. They associate Perl with:
Fragile legacy systems
Inconsistent, “write-only” code
The bad old days of early web development
And that’s understandable. But it also creates a bias—often unconscious—that prevents Perl from getting a fair hearing in modern development discussions.
Version Number Paralysis: The Perl 6 Effect
If Dotcom Boom Survivor Syndrome created the emotional case against Perl, then Perl 6 created the optical one.
In 2000, Perl 6 was announced as a ground-up redesign of the language. It promised modern syntax, new paradigms, and a bright future. But it didn’t ship—not for a very long time.
In the meantime:
Perl 5 continued to evolve quietly, but with the implied expectation that it would eventually be replaced.
Years turned into decades, and confusion set in. Was Perl 5 deprecated? Was Perl 6 compatible? What was the future of Perl?
To outsiders—and even many Perl users—it looked like the language was stalled. Perl 5 releases were labelled 5.8, 5.10, 5.12… but never 6. Perl 6 finally emerged in 2015, but as an entirely different language, not a successor.
Eventually, the community admitted what everyone already knew: Perl 6 wasn’t Perl. In 2019, it was renamed Raku.
But the damage was done. For nearly two decades, the version number “6” hung over Perl 5 like a storm cloud – a constant reminder that its future was uncertain, even when that wasn’t true.
This is what I call Version Number Paralysis:
A stalled major version that made the language look obsolete.
A missed opportunity to signal continued relevance and evolution.
A marketing failure that deepened the sense that Perl was a thing of the past.
Even today, many developers believe Perl is “stuck at version 5,” unaware that modern Perl is actively maintained, well-supported, and quite capable.
While Dotcom Survivor Syndrome left many people with an aversion to Perl, Version Number Paralysis gave them an excuse not to look closely at Perl to see if it had changed.
What They Missed While Looking Away
While the world was confused or looking elsewhere, Perl 5 gained:
Modern object systems (Moo, Moose)
A mature testing culture (Test::More, Test2)
Widespread use of best practices (Perl::Critic, perltidy, etc.)
Core team stability and annual releases
Huge CPAN growth and refinements
But those who weren’t paying attention, especially those still carrying dotcom-era baggage, never saw it. They still think Perl looks like it did in 2002.
Can We Move On?
Dotcom Survivor Syndrome is real. So is Version Number Paralysis. Together, they’ve unfairly buried a language that remains fast, expressive, and battle-tested.
We can’t change the past. But we can:
Acknowledge the emotional and historical baggage
Celebrate the role Perl played in inventing the modern web
Educate developers about what Perl really is today
Push back against the assumption that old == obsolete
Conclusion
Perl’s early success was its own undoing. It became the default tool for the first web boom, and in doing so, it took the brunt of that era’s chaos. Then, just as it began to mature, its versioning story confused the industry into thinking it had stalled.
But the truth is that modern Perl is thriving quietly in the margins – maintained by a loyal community, used in production, and capable of great things.
The only thing holding it back is a generation of developers still haunted by memories of CGI scripts, and a version number that suggested a future that never came.
Maybe it’s time we looked again.
The post Dotcom Survivor Syndrome – How Perl’s Early Success Created the Seeds of Its Downfall first appeared on Perl Hacks.
I was writing data-intensive code in Perl, relying heavily on PDL for some statistical calculations (estimation of percentile points in some very BIG vectors, e.g. 100k to 1B elements), when I noticed that PDL was taking an unusually long time to produce results compared to my experience in Python.
This happened irrespective of whether one used the pct or oddpct functions in PDL::Ufunc.
The performance degradation had an interesting quantitative aspect: if one asked PDL to return a single percentile, it did so very fast;
but if one asked for more than one percentile, the time-to-solution increased linearly with the number of percentiles specified.
Looking at the source code of the pct function, it seems that it is implemented by calling the function pctover, which according to the PDL documentation “Broadcasts over its inputs.”
But what exactly is broadcasting? According to PDL::Broadcasting: “[broadcasting] can produce very compact and very fast PDL code by avoiding multiple nested for loops that C and BASIC users may be familiar with. The trouble is that it can take some getting used to, and new users may not appreciate the benefits of broadcasting.” Reading the relevant PDL examples and revisiting the NumPy documentation (which also uses this technique): broadcasting describes how arrays with different shapes are treated during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python.
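The idea is easy to see concretely in NumPy (the source of the quoted definition); here a row vector is combined with a matrix without any explicit loop:

```python
import numpy as np

# A (3, 4) matrix and a length-4 row vector: the vector is
# "broadcast" across the rows, with the looping done in C.
m = np.arange(12).reshape(3, 4)
row = np.array([10, 20, 30, 40])

result = m + row      # row is virtually replicated to shape (3, 4)
print(result.shape)   # (3, 4)
print(result[0])      # [10 21 32 43]
```

PDL's broadcasting follows the same principle: the smaller ndarray is conceptually replicated across the larger one.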
It seems that when one does something like:
use PDL;  # PDL::Lite loads PDL without exporting sequence(), pct(), etc.
my $very_big_ndarray = ... ; # code that constructs a HUGE PDL ndarray
my $pct = sequence(100)/100; # percentiles 0.00, 0.01, ..., 0.99
my $pct_values = pct( $very_big_ndarray, $pct );
the broadcasting effectively executes the single-percentile calculation sequentially, once per requested percentile, and concatenates the results.
The problem with broadcasting for this operation is that the percentile calculation includes a VERY expensive operation, namely the
sorting of the $very_big_ndarray, before the (trivial) calculation of the percentile from the sorted values, as detailed in
Wikipedia. So when the percentile operation is broadcast by PDL, the sorting is repeated for each
percentile value in $pct, leading to a catastrophic loss of performance!
How can we fix this? It turns out to be reasonably trivial: we need to reimplement the percentile function so that it does not broadcast.
One of the simplest quantile functions to implement is the one based on the empirical cumulative distribution function (this corresponds to the Type 3 quantile in the
classification by Hyndman and Fan).
This one can be trivially implemented in Perl using PDL as:
sub quantile_type_3 {
    my ( $data, $pct ) = @_;
    my $sorted_data = $data->qsort;   # the single expensive O(n log n) step
    my $nelem       = $data->nelem;
    # floor() is from PDL::Math; clamp so $pct == 1.0 can't index past the end
    my $cum_ranks   = floor( $pct * $nelem )->clip( 0, $nelem - 1 );
    return $sorted_data->index( $cum_ranks );
}
(The other quantiles can be implemented equally trivially using affine operations as explained in R’s documentation of the quantile function).
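The same “sort once, index many times” idea can be sketched in NumPy for comparison (a type-3-style quantile via the empirical CDF; exact tie-handling conventions differ between implementations):

```python
import numpy as np

def quantile_type3_like(data, pct):
    """Sort once, then read off all requested quantiles by index.

    pct is a sequence of fractions in [0, 1]; this mirrors the PDL
    quantile_type_3 sketch above, not any one library's exact rules.
    """
    sorted_data = np.sort(data)          # the single O(n log n) step
    n = sorted_data.size
    ranks = np.floor(np.asarray(pct) * n).astype(int)
    ranks = np.clip(ranks, 0, n - 1)     # guard the pct == 1.0 edge case
    return sorted_data[ranks]            # cheap fancy indexing, no re-sort

data = np.arange(100, 0, -1)             # 100, 99, ..., 1 (unsorted on purpose)
print(quantile_type3_like(data, [0.0, 0.5, 0.99]))  # [  1  51 100]
```

However many quantiles are requested, the expensive sort happens exactly once, which is precisely what PDL's broadcasting of pct fails to do.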
To see how well this works, I wrote a Perl benchmark script that benchmarks
the builtin function pct, the quantile_type_3 function on synthetic data and then calls the companion
R script to profile the 9 quantile functions and the 3 sort
functions in R for the same dataset.
I obtained the following performance figures on my old Xeon: the “de-broadcasted” version of the quantile function achieves the same performance as the R implementations, whereas the
PDL broadcasting version is 100 times slower.
| Test | Iterations | Elements | Quantiles | Elapsed Time (s) |
|---|---|---|---|---|
| pct | 10 | 1000000 | 100 | 132.430000 |
| quantile_type_3 | 10 | 1000000 | 100 | 1.320000 |
| pct_R_1 | 10 | 1000000 | 100 | 1.290000 |
| pct_R_2 | 10 | 1000000 | 100 | 1.281000 |
| pct_R_3 | 10 | 1000000 | 100 | 1.274000 |
| pct_R_4 | 10 | 1000000 | 100 | 1.283000 |
| pct_R_5 | 10 | 1000000 | 100 | 1.290000 |
| pct_R_6 | 10 | 1000000 | 100 | 1.286000 |
| pct_R_7 | 10 | 1000000 | 100 | 1.233000 |
| pct_R_8 | 10 | 1000000 | 100 | 1.309000 |
| pct_R_9 | 10 | 1000000 | 100 | 1.291000 |
| sort_quick | 10 | 1000000 | 100 | 1.220000 |
| sort_shell | 10 | 1000000 | 100 | 1.758000 |
| sort_radix | 10 | 1000000 | 100 | 0.924000 |
As can be seen from the table, the sorting operations account for the bulk of the execution time of the quantile functions.
Two major take-home points:
1) Don’t be afraid to look under the hood/inside the black box when performance is surprisingly disappointing!
2) Be careful with broadcasting operations in PDL, NumPy, or Matlab.
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095004 on 2025-11-23, with 798 votes
- Previous CPAN version: 2.095003 was 4 days before
- Author: OLIVER
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251123.001 on 2025-11-23, with 25 votes
- Previous CPAN version: 20251116.001 was 7 days before
- Author: BRIANDFOY
-
Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
- Version: 8.1.0 on 2025-11-26, with 16 votes
- Previous CPAN version: 8.0.0 was 1 month, 11 days before
- Author: CUKEBOT
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.625 on 2025-11-28, with 14 votes
- Previous CPAN version: 0.624 was 2 days before
- Author: ETHER
-
Mail::Box - complete E-mail handling suite
- Version: 3.012 on 2025-11-27, with 16 votes
- Previous CPAN version: 3.011 was 7 months, 8 days before
- Author: MARKOV
-
meta - meta-programming API
- Version: 0.015 on 2025-11-28, with 14 votes
- Previous CPAN version: 0.014 was 2 months, 24 days before
- Author: PEVANS
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.008006 on 2025-11-26, with 146 votes
- Previous CPAN version: 2.008005 was 5 days before
- Author: TOBYINK
-
Workflow - Simple, flexible system to implement workflows
- Version: 2.09 on 2025-11-23, with 34 votes
- Previous CPAN version: 2.08 was 10 days before
- Author: JONASBN
OK, so...
For those using the Perl Catalyst web framework in ways involving structured request bodies (e.g. API POSTs)...
$c->req->body is a string, unless Content-Type is application/x-www-form-urlencoded, text/xml, or multipart/form-data (or in fact application/json, which isn't in the docs), in which case it's a File::Temp (an overloaded file handle), and $c->req->body_data gets you the deserialised body.
For various reasons, largely to do with the idiosyncrasies of one particular module, I need to read the raw body data from the $c->req->body file handle to process a Stripe webhook payload. For various other reasons, as part of the API call logging, I need to call $c->req->body_data to get at the deserialised body in another module.
You may imagine my delight when adding the latter caused the former to fail.
An afternoon of bad language and extra debug eventually revealed that $c->req->body_data doesn't clean up after itself, and I have to seek( $c->req->body, 0, 0) before I can read any data from the body file handle.
If this is useful to anyone else, you have my sympathies.
When we publish our Perl module repository on GitHub, we might notice something peculiar in the "About"
section of our repository: GitHub doesn't recognize the Perl 5 license. This can be a bit
confusing, especially when we've explicitly stated the licensing in our LICENSE file.
Without a properly defined license, GitHub ranks the quality of a repository lower. This is also unfortunate because it limits the "searchability" of our repository: GitHub cannot index it by license, and users cannot search by license. This matters today more than ever, as many enterprises rule out open source projects purely on the grounds that their licensing is poorly managed.
The Problem: Two Licenses in One File
The standard Perl 5 license, as used by many modules, is a dual license: Artistic License (2.0) and GNU
General Public License (GPL) version 1 or later. Often, this is included in a single LICENSE file
in the repository root.
GitHub's license detection mechanism, powered by Licensee, is designed to identify a single, clear license. When it encounters a file with two distinct licenses concatenated, it fails to make a definitive identification.
Here's an example of a repository where GitHub doesn't recognize the license. Notice the missing license badge in the "About" section:

Also, the "quick select" banner above the README file does not indicate which license applies.

The Solution: Separate License Files
The simplest and most effective solution is to provide each license in its own dedicated file. This allows Licensee to easily identify and display both licenses. This is perfectly valid because the Perl 5 license explicitly allows for distribution under either the Artistic License or the GPL. Providing both licenses separately simply makes it clearer which licenses apply and how they are presented.
(The other reason for having multiple licenses is the situation where different parts of the repository are under different licenses. But that is not our problem here.)
For example, instead of a single LICENSE file containing both, we would have:
- LICENSE-Artistic-2.0
- LICENSE-GPL-3
Let's look at an example from my own env-assert repository. In this repository, I've separated the licenses into LICENSE-Artistic-2.0 and LICENSE-GPL-3.
And here's how GitHub's "About" section looks for env-assert, clearly recognizing both licenses:

As we can see, GitHub now correctly identifies "Artistic-2.0" and "GPL-3.0" as the licenses for the project.
Same is also visible in the "quick select" bar:

Automating with Software::Policies and Dist::Zilla::Plugin::Software::Policies
Manually creating and maintaining these separate license files for every module can be tedious. Fortunately, there is a way to automate this process if you are using Dist::Zilla for authoring.
Dist::Zilla::Plugin::Software::Policies
If we're using Dist::Zilla for our module authoring,
Dist-Zilla-Plugin-Software-Policies
can automatically check that we have the correct License files. It uses Dist::Zilla's
internal variable licence to determine the correct license files.
The Dist::Zilla plugin uses Software-Policies as a backend to do the heavy lifting.
Software::Policies
Software::Policies is a module that provides a
framework for defining and enforcing software policies, including licensing. It comes with a
pre-defined policy for Perl 5's double license. It can also generate other policy files,
such as CONTRIBUTING.md, CODE_OF_CONDUCT.md, and SECURITY.md.
By using Software::Policies, we can programmatically check for the presence and content of our
license files.
This approach not only solves the GitHub license detection problem but also helps us maintain consistent and correct licensing across all our Perl modules, integrating it directly into our build workflow.
By configuring this plugin in our dist.ini, we can ensure that our distribution always includes
the correct and properly formatted license files, making GitHub (and other license scanners) happy.
Here's a simplified example of how we might configure it in our dist.ini:
[Software::Policies / License]
policy_attribute = perl_5_double_license = true
[Test::Software::Policies]
include_policy = License
This configuration tells Dist::Zilla plugin Test::Software::Policies to apply the Perl
licensing policy, which typically means Artistic License 2.0 and GPL. When we build our
distribution with Dist::Zilla, the plugin will create a test file that checks for the existence
and content of the LICENSE-Artistic-2.0 and LICENSE-GPL-3 files.
During the testing phase, when running dzil test or dzil release, the test files will be run,
and if the license files are missing or incorrect, the tests will fail.
To generate the files, we can run the command dzil policies License or just dzil policies.
This will create the files according to the [Software::Policies / License] section of dist.ini.
We cannot create the files automatically during build because then they will only be included in the release, not in the repository. It is precisely in the repository that we need them for GitHub's sake. So the process to create or update the license files has to have this small manual stage.
I am trying to implement a Mojolicious application that (also) acts as a proxy in front of rclone serve.
To that end, I want Mojolicious to act as a proxy: request content from rclone serve and serve the response itself. A pure redirect won't do, for authentication reasons.
This is the minimal working example I got thus far:
use Mojolicious::Lite -signatures;
use Mojo::Util qw/dumper/;
use Time::HiRes qw/time/;
helper "handle" => sub {
my $self = shift;
my $start = time(); my $start_all = $start;
my $req = $self->req->clone;
$req->url->scheme("http")->host("127.0.0.1")->port("3002");
my $ua = $self->ua;
my $tx = $ua->start(Mojo::Transaction::HTTP->new(req => $req));
$self->res->headers->from_hash($tx->res->headers->to_hash);
my $body = $tx->res->body;
$self->render(data => $body);
};
get '/*reqpath' => sub ($c) {
return $c->handle();
};
app->start;
It works fine, but it is very slow: getting the same content via Mojolicious takes about five times as long as getting it directly.
The culprit seems to be
my $tx = $ua->start(Mojo::Transaction::HTTP->new(req => $req));
What's the reason? What should I do differently?
I have also tried to do that asynchronously, but as I expected it did not speed up the single transaction, it just became more responsive across a bunch of them.
All three of us met.
- The refalias draft PPC on the mailing list is looking good. We encourage Gianni to turn it into a full PPC doc PR.
- We still would like more automation around making real CPAN distributions out of dist/ dirs. Paul will write an email requesting assistance on that subject specifically.
- Briefly discussed the subject of the meta module handling signatures with named parameters. Further discussion will continue on the email thread.
I have filenames matching the pattern [a-zA-Z0-9]{10} with a .txt extension. They might also contain the literal string [super], either before or after the [a-zA-Z0-9]{10} part, optionally separated by '_'. I want to select the filenames that do not contain [super] at all. My MWE is, of course, not working.
mwe:
#!/usr/bin/env perl
use strict; use warnings;
my @filenames = ( "0001_abc_[super][abcdefghij].txt",
"0002_abc_[acegikmoqs]_[super].txt",
"0008_cde_[zyxwvutsrq].txt" );
foreach (@filenames) {
if ($_ =~ /^(?!.*\[super])_?\[[a-zA-Z0-9]]\.txt$/) {print "match 1\n";}
elsif ($_ =~ /\[[a-zA-Z0-9]]_?(?!\[super])\.txt$/) {print "match 2\n";}
}
In last week’s post I showed how to run a modern Dancer2 app on Google Cloud Run. That’s lovely if your codebase already speaks PSGI and lives in a nice, testable, framework-shaped box.
But that’s not where a lot of Perl lives.
Plenty of useful Perl on the internet is still stuck in old-school CGI – the kind of thing you’d drop into cgi-bin on a shared host in 2003 and then try not to think about too much.
So in this post, I want to show that:
If you can run a Dancer2 app on Cloud Run, you can also run ancient CGI on Cloud Run – without rewriting it.
To keep things on the right side of history, we’ll use nms FormMail rather than Matt Wright’s original script, but the principle is exactly the same.
Prerequisites: Google Cloud and Cloud Run
If you already followed the Dancer2 post and have Cloud Run working, you can skip this section and go straight to “Wrapping nms FormMail in PSGI”.
If not, here’s the minimum you need.
-
Google account and project
-
Go to the Google Cloud Console.
-
Create a new project (e.g. “perl-cgi-cloud-run-demo”).
-
Enable billing
-
Cloud Run is pay-as-you-go with a generous free tier, but you must attach a billing account to your project.
-
Install the gcloud CLI
-
Install the Google Cloud SDK for your platform.
-
Run gcloud init and follow the prompts to:
-
log in
-
select your project
-
pick a default region (I’ll assume “europe-west1” below).
-
Enable required APIs
In your project, enable the Cloud Run and Artifact Registry APIs.
-
Create a Docker repository in Artifact Registry
That’s all the GCP groundwork. Now we can worry about Perl.
The starting point: an old CGI FormMail
Our starting assumption:
-
You already have a CGI script like nms FormMail
-
It’s a single “.pl” file, intended to be dropped into “cgi-bin”
-
It expects to be called via the CGI interface and to send mail by piping to /usr/sbin/sendmail
On a traditional host, Apache (or similar) would:
-
parse the HTTP request
-
set CGI environment variables (
REQUEST_METHOD, QUERY_STRING, etc.)
-
run
formmail.pl as a process
-
let it call
/usr/sbin/sendmail
Cloud Run gives us none of that. It gives us:
-
a HTTP endpoint
-
backed by a container
-
listening on a port (
$PORT)
Our job is to recreate just enough of that old environment inside a container.
We’ll do that in two small pieces:
-
A PSGI wrapper that emulates CGI.
-
A sendmail shim so the script can still “talk” sendmail.
Architecture in one paragraph
Inside the container we’ll have:
-
nms FormMail – unchanged CGI script at
/app/formmail.pl -
PSGI wrapper (
app.psgi) – using CGI::Compile and CGI::Emulate::PSGI
Plack/Starlet – a simple HTTP server exposing
app.psgi on $PORT
msmtp-mta – providing
/usr/sbin/sendmail and relaying mail to a real SMTP server
Cloud Run just sees “HTTP service running in a container”. Our CGI script still thinks it’s on an early-2000s shared host.
Step 1 – Wrapping nms FormMail in PSGI
First we write a tiny PSGI wrapper. This is the only new Perl we need:
-
CGI::Compile loads the CGI script and turns its main package into a coderef.
-
CGI::Emulate::PSGI fakes the CGI environment for each request.
-
The CGI script doesn’t know or care that it’s no longer being run by Apache.
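Put together, the wrapper is only a few lines. This is a sketch following the documented CGI::Compile/CGI::Emulate::PSGI pattern, with the script path taken from the layout described above:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

use CGI::Compile;
use CGI::Emulate::PSGI;

# Compile the untouched CGI script into a coderef...
my $cgi = CGI::Compile->compile('/app/formmail.pl');

# ...and wrap it so every PSGI request sees a faked CGI
# environment (%ENV, STDIN/STDOUT, the works). The last
# expression is the PSGI app, as app.psgi expects.
CGI::Emulate::PSGI->handler($cgi);
```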
Later, we’ll run this under Plack with the Starlet server, listening on $PORT.
Step 2 – Adding a sendmail shim
Next problem: Cloud Run doesn’t give you a local mail transfer agent.
There is no real /usr/sbin/sendmail, and you wouldn’t want to run a full MTA in a stateless container anyway.
Instead, we’ll install msmtp-mta, a light-weight SMTP client that includes a sendmail-compatible wrapper. It gives you a /usr/sbin/sendmail binary that forwards mail to a remote SMTP server (Mailgun, SES, your mail provider, etc.).
From the CGI script’s point of view, nothing changes: it still pipes its mail to /usr/sbin/sendmail.
We’ll configure msmtp from environment variables at container start-up, so Cloud Run’s --set-env-vars values are actually used.
Step 3 – Dockerfile (+ entrypoint) for Perl, PSGI and sendmail shim
Here’s a complete Dockerfile that pulls this together.
-
We never touch
formmail.pl. It goes into /app and that’s it.
-
msmtp gives us
/usr/sbin/sendmail, so the CGI script stays in its 1990s comfort zone. -
The entrypoint writes
/etc/msmtprc at runtime, so Cloud Run’s environment variables are actually used.
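A sketch of such a Dockerfile, under the assumptions above (the base image tag, the SMTP_* variable names consumed by the entrypoint, and the exact module list are illustrative):

```dockerfile
# Illustrative base image; any Debian-based Perl image works
FROM perl:5.38-slim

# msmtp-mta provides a sendmail-compatible /usr/sbin/sendmail
RUN apt-get update \
    && apt-get install -y --no-install-recommends msmtp-mta ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Plack/Starlet plus the CGI emulation layers
RUN cpanm --notest Plack Starlet CGI::Compile CGI::Emulate::PSGI

WORKDIR /app
COPY formmail.pl app.psgi docker-entrypoint.sh ./
RUN chmod +x docker-entrypoint.sh

# The entrypoint writes /etc/msmtprc from SMTP_* environment
# variables at start-up, then exec's the command below
ENTRYPOINT ["/app/docker-entrypoint.sh"]
CMD ["sh", "-c", "exec plackup -s Starlet --listen :$PORT app.psgi"]
```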
Step 4 – Building and pushing the image
With the Dockerfile and docker-entrypoint.sh in place, we can build and push the image to Artifact Registry.
I’ll assume:
-
Project ID:
PROJECT_ID -
Region:
europe-west1 -
Repository:
formmail-repo -
Image name:
nms-formmail
First, build the image locally:
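Under the naming above, the build-and-push step typically looks something like this (a sketch; substitute your real project ID):

```shell
# Tag the image with its full Artifact Registry path
docker build -t europe-west1-docker.pkg.dev/PROJECT_ID/formmail-repo/nms-formmail:latest .

# Teach Docker to authenticate against the regional registry
gcloud auth configure-docker europe-west1-docker.pkg.dev

# Push the image so Cloud Run can deploy it
docker push europe-west1-docker.pkg.dev/PROJECT_ID/formmail-repo/nms-formmail:latest
```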
The post Elderly Camels in the Cloud first appeared on Perl Hacks.
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095003 on 2025-11-18, with 799 votes
- Previous CPAN version: 2.095002 was 2 days before
- Author: OLIVER
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.623 on 2025-11-17, with 13 votes
- Previous CPAN version: 0.622 was 8 days before
- Author: ETHER
-
Module::CoreList - what modules shipped with versions of perl
- Version: 5.20251120 on 2025-11-20, with 44 votes
- Previous CPAN version: 5.20251022 was 27 days before
- Author: BINGOS
-
Net::Amazon::S3 - Use the Amazon S3 - Simple Storage Service
- Version: 0.992 on 2025-11-22, with 13 votes
- Previous CPAN version: 0.991 was 3 years, 4 months, 5 days before
- Author: BARNEY
-
OpenTelemetry - A Perl implementation of the OpenTelemetry standard
- Version: 0.033 on 2025-11-21, with 30 votes
- Previous CPAN version: 0.032 was 1 day before
- Author: JJATRIA
-
SPVM - The SPVM Language
- Version: 0.990107 on 2025-11-18, with 36 votes
- Previous CPAN version: 0.990106 was 6 days before
- Author: KIMOTO
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.008005 on 2025-11-20, with 145 votes
- Previous CPAN version: 2.008004 was 1 month, 3 days before
- Author: TOBYINK
-
XML::Feed - XML Syndication Feed Support
- Version: v1.0.0 on 2025-11-17, with 19 votes
- Previous CPAN version: 0.65 was 1 year, 4 months, 8 days before
- Author: DAVECROSS

Dave writes:
Last month was mostly spent doing a second big refactor of ExtUtils::ParseXS. My previous refactor converted the parser to assemble each XSUB into an Abstract Syntax Tree (AST) and only then emit the C code for it (previously the parsing and C code emitting were interleaved on the fly). This new work extends that so that the whole XS file is now one big AST, and the C code is only generated once all parsing is complete.
As well as fixing lots of minor parsing bugs along the way, another benefit of this big refactoring is that ExtUtils::ParseXS becomes manageable once again. Rather than one big 1400-line parsing loop, the parsing and code generating is split up into lots of little methods in subclasses which represent the nodes of the AST and which process just one thing.
As an example, the logic which handled (permissible) duplicate XSUB declarations in different C preprocessor branches, such as
#ifdef USE_2ARG
int foo(int i, int j)
#else
int foo(int i)
#endif
used to be spread over many parts of the program; it's now almost all concentrated into the parsing and code-emitting methods of a single Node subclass.
This branch is currently pushed and undergoing review.
My earlier work on rewriting the XS reference manual, perlxs.pod, was made into a PR a month ago, and this month I revised it based on reviewers' feedback.
Summary:
- 11:39 modernise perlxs.pod
- 64:57 refactor ExtUtils::ParseXS: file-scoped AST
Total: 76:36 (HH:MM)
Do you thrive in a fast-paced scale-up environment, surrounded by an ambitious and creative team?
We’re on a mission to make payments simple, secure, and accessible for every business. With powerful in-house technology and deep expertise, our modular platform brings online, in-person, and cross-border payments together in one place — giving merchants the flexibility to scale on their own terms. Through a partnership-first approach, we tackle complexity head-on, keep payments running smoothly, and boost success rates. It’s how we level the playing field for businesses of all sizes and ambitions.
Join a leading tech company driving innovation in the payments industry. You’ll work with global leaders like Visa and Mastercard, as well as next-generation “pay later” solutions such as Klarna and Afterpay. Our engineering teams apply Domain-Driven Design (DDD) principles and a microservices architecture to build scalable and maintainable systems.
• Develop and maintain Perl-based applications and systems to handle risk management, monitoring, and onboarding processes
• Collaborate with other developers and cross-functional teams to define, design, and deliver new features and functionalities
• Assist in the migration of projects from Perl to other languages, such as Java, while ensuring the smooth operation and transition of systems
• Contribute to code reviews and provide valuable insights to uphold coding standards and best practices
• Stay up to date with the latest industry trends and technologies to drive innovation and enhance our products
Company policy is on-site with 1/2 workday from home depending on your location.
I needed to have some defaults available in my i3 configuration and was using LightDM. I asked in the i3 GitHub discussion pages if people knew why it was failing. It appears Debian stripped some functionality. So how do you solve this?
Answer
You want to have your own session wrapper for lightdm. I stole this recipe from Ubuntu:
#!/bin/sh
for file in "/etc/profile" "$HOME/.profile" "/etc/xprofile" "$HOME/.xprofile"; do
  [ ! -f "$file" ] && continue
  . "$file"
done
exec /etc/X11/Xsession "$@"
I install this as /usr/local/bin/lightdm-session, and then dpkg-divert the Debian version of lightdm.conf:
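The diversion step might look something like this (a sketch, not the exact commands from the original recipe; `session-wrapper` is a real lightdm.conf key, but file locations may differ on your system):

```shell
# Move Debian's config aside and point LightDM at the local wrapper.
sudo dpkg-divert --rename --divert /etc/lightdm/lightdm.conf.debian \
    --add /etc/lightdm/lightdm.conf
printf '[Seat:*]\nsession-wrapper=/usr/local/bin/lightdm-session\n' \
    | sudo tee /etc/lightdm/lightdm.conf
```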
This is the weekly favourites list of CPAN distributions. Votes count: 25
This week there isn't any remarkable distribution
Build date: 2025/11/16 11:03:31 GMT
Clicked for first time:
- App::jsonvalidate - App harness for the jsonvalidate CLI
- Bitcoin::Secp256k1 - Perl interface to libsecp256k1
- minion::task - A task boilerplate for Minion
- Pod::Abstract - Abstract document tree for Perl POD documents
Increasing its reputation:
- BioX::Seq (+1=3)
- Bitcoin::Crypto (+1=8)
- CGI::Tiny (+1=9)
- Dancer2 (+1=139)
- DBD::DuckDB (+1=6)
- Devel::MAT (+1=30)
- File::Slurp (+1=78)
- Git::Repository (+1=27)
- IO::Compress (+1=19)
- mojo::debugbar (+1=2)
- mojo::util::collection (+1=2)
- Mojolicious::Plugin::Debugbar (+1=2)
- Net::OpenSSH (+1=43)
- Perl::Critic (+1=134)
- Perl::Tidy (+1=146)
- Pod::Parser (+1=14)
- Readonly (+1=24)
- Scalar::List::Utils (+1=183)
- Set::Object (+1=13)
- Task::Kensho (+1=121)
- Time::Piece (+1=65)
-
App::cpm - a fast CPAN module installer
- Version: 0.998001 on 2025-11-13, with 176 votes
- Previous CPAN version: 0.998000 was 5 days before
- Author: SKAJI
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.095001 on 2025-11-15, with 794 votes
- Previous CPAN version: 2.095000
- Author: OLIVER
-
App::Rakubrew - Raku environment manager
- Version: 45 on 2025-11-13, with 28 votes
- Previous CPAN version: 44 was 2 days before
- Author: PATRICKB
-
Bitcoin::Crypto - Bitcoin cryptography in Perl
- Version: 4.002 on 2025-11-14, with 14 votes
- Previous CPAN version: 4.001 was 2 days before
- Author: BRTASTIC
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251116.001 on 2025-11-16, with 25 votes
- Previous CPAN version: 20251109.001 was 6 days before
- Author: BRIANDFOY
-
Dist::Zilla - distribution builder; installer not included!
- Version: 6.036 on 2025-11-09, with 189 votes
- Previous CPAN version: 6.035
- Author: RJBS
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.622 on 2025-11-08, with 12 votes
- Previous CPAN version: 0.621 was 9 days before
- Author: ETHER
-
Net::SIP - Framework SIP (Voice Over IP, RFC3261)
- Version: 0.840 on 2025-11-10, with 16 votes
- Previous CPAN version: 0.839 was 2 months, 5 days before
- Author: SULLR
-
SPVM - The SPVM Language
- Version: 0.990106 on 2025-11-11, with 36 votes
- Previous CPAN version: 0.990105 was 26 days before
- Author: KIMOTO
-
Test::Simple - Basic utilities for writing tests.
- Version: 1.302216 on 2025-11-16, with 199 votes
- Previous CPAN version: 1.302215 was 2 days before
- Author: EXODIST
-
Time::Piece - Object Oriented time objects
- Version: 1.41 on 2025-11-12, with 65 votes
- Previous CPAN version: 1.40 was 4 days before
- Author: ESAYM
-
Workflow - Simple, flexible system to implement workflows
- Version: 2.08 on 2025-11-12, with 34 votes
- Previous CPAN version: 2.07 was 4 days before
- Author: JONASBN
The Perl and Raku Foundation has announced a £1,000 sponsorship of the upcoming London Perl and Raku Workshop, reinforcing its ongoing commitment to supporting community-driven technical events. The workshop, one of the longest-running grassroots Perl gatherings in the UK, brings together developers, educators, and open-source enthusiasts for a day of talks, hands-on sessions, and collaborative learning centered on Perl, Raku, and related technologies.
The foundation’s contribution will help cover venue expenses, accessibility measures, and attendee resources. Organizers intend to use the support received from sponsors to keep the event free to attend, maintaining its tradition of lowering barriers for both newcomers and experienced programmers.
This year’s workshop is expected to feature a broad program, including presentations on language internals, modern development practices, and applied use cases within industry and research. Community members from across Europe are anticipated to participate, reflecting the workshop’s reputation as a focal point for Perl activity. The workshop is scheduled for November 29, 2025, in the heart of London at ISH Venues, near Regent's Park. Several speakers are already confirmed for this year's workshop, including Stevan Little and TPRF White Camel Award winners Sawyer X and Mohammad Sajid Anwar. For more information about the event, visit https://www.londonperlworkshop.com/.
By backing the event, The Perl and Raku Foundation continues its broader mission to foster growth, education, and innovation across both language communities. The London Perl Workshop remains one of the foundation’s key community touchpoints, offering a collaborative space for developers to share knowledge and help shape the future of the languages.
Go Ahead ‘make’ My Day (Part III)
This is the last in a three-part series on Scriptlets. You can catch up by reading our introduction and dissection of Scriptlets.
In this final part, we talk about restraint - the discipline that keeps a clever trick from turning into a maintenance hazard.
That uneasy feeling…
So you are starting to write a few scriptlets and it seems pretty cool. But something doesn’t feel quite right…
You’re editing a Makefile and suddenly you feel anxious. Ah, you expected syntax highlighting, linting, proper indentation, and maybe that warm blanket of static analysis. So when we drop a 20-line chunk of Perl or Python into our Makefile, our inner OCD alarms go off. No highlighting. No linting. Just raw text.
The discomfort isn’t a flaw - it’s feedback. It tells you when you’ve added too much salt to the soup.
A scriptlet is not a script!
A scriptlet is a small, focused snippet of code embedded inside a
Makefile that performs one job quickly and
deterministically. The “-let” suffix matters. It’s not a standalone
program. It’s a helper function, a convenience, a single brushstroke
that belongs in the same canvas as the build logic it supports.
If you ever feel the urge to bite your nails, pick at your skin, or start counting the spaces in your indentation - stop. You’ve crossed the line. What you’ve written is no longer a scriptlet; it’s a script. Give it a real file, a shebang, and a test harness. Keep the build clean.
Why we use them
Scriptlets shine where proximity and simplicity matter more than reuse
(not that we can’t throw it in a separate file and include it in our
Makefile).
- Cleanliness: prevents a recipe from looking like a shell script.
- Locality: live where they’re used. No path lookups, no installs.
- Determinism: transform well-defined input into output. Nothing more.
- Portability (of the idea): every CI/CD system that can run `make` can run a one-liner.
A Makefile that can generate its own dependency file, extract version
numbers, or rewrite a cpanfile doesn’t need a constellation of helper
scripts. It just needs a few lines of inline glue.
Why they’re sometimes painful
We lose the comforts that make us feel like professional developers:
- No syntax highlighting.
- No linting or type hints.
- No indentation guides.
- No “Format on Save.”
The trick is to accept that pain as a necessary check on the limits of
the scriptlet. If you’re constantly wishing for linting and editor
help, it’s your subconscious telling you: this doesn’t belong inline
anymore. You’ve outgrown the -let.
When to promote your scriptlet to a script…
Promote a scriptlet to a full-blown script when:
- It exceeds 30-50 lines.
- It gains conditionals or error handling.
- You need to test it independently.
- It uses more than 1 or 2 non-core features.
- It’s used by more than one target or project.
- You’re debugging quoting more than logic.
- You’re spending more time fixing indentation than working on the build.
At that point, you’re writing software, not glue. Give it a name, a
shebang, and a home in your tools/ directory.
When to keep it inside your Makefile…
Keep it inline when:
- It’s short, pure, and single-use.
- It depends primarily on the environment already assumed by your build (Perl, Python, awk, etc.).
- It’s faster to read than to reference.
A good scriptlet reads like a make recipe: do this transformation right here, right now.
define create_cpanfile =
while (<STDIN>) {
    s/[#].*//; s/^\s+|\s+$//g; next if $_ eq q{};
    my ($mod, $v) = split /\s+/, $_, 2;
    print qq{requires "$mod", "$v";\n};
}
endef
export s_create_cpanfile = $(value create_cpanfile)
That’s a perfect scriptlet: small, readable, deterministic, and local.
Rule of Thumb: If it fits on one screen, keep it inline. If it scrolls, promote it.
Tools for the OCD developer
If you must relieve the OCD symptoms without promotion of your scriptlet to a script…
- Add a `lint-scriptlets` target: `perl -c -e '$(s_create_requires)'` checks syntax without running it.
- Some editors (Emacs `mmm-mode`, Vim `polyglot`) can treat marked sections as sub-languages to enable localized language-specific editing features.
- Use `include` to pull a scriptlet into your `Makefile`.
…however try to resist the urge to over-optimize the tooling. Feeling the uneasiness grow helps identify the boundary between scriptlets and scripts.
You’ve been warned!
Because scriptlets are powerful, flexible, and fast, it’s easy to
reach for them too often or make them the focus of your project. They
start as a cure for friction - a way to express a small transformation
inline - but left unchecked, they can sometimes grow arms and
legs. Before long, your Makefile turns into a Frankenstein monster.
The great philosopher Basho (or at least I think it was him) once said:
A single aspirin tablet eases pain. A whole bottle sends you to the hospital.
Thanks for reading.
Learn More
For years, most of my Perl web apps lived happily enough on a VPS. I had full control of the box, I could install whatever I liked, and I knew where everything lived.
In fact, over the last eighteen months or so, I wrote a series of blog posts explaining how I developed a system for deploying Dancer2 apps and, eventually, controlling them using systemd. I’m slightly embarrassed by those posts now.
Because the control that my VPS gave me also came with a price: I also had to worry about OS upgrades, SSL renewals, kernel updates, and the occasional morning waking up to automatic notifications that one of my apps had been offline since midnight.
Back in 2019, I started writing a series of blog posts called Into the Cloud that would follow my progress as I moved all my apps into Docker containers. But real life intruded and I never made much progress on the project.
Recently, I returned to this idea (yes, I’m at least five years late here!). I’ve been working on migrating those old Dancer2 applications from my IONOS VPS to Google Cloud Run. The difference has been amazing. My apps now run in their own containers, scale automatically, and the server infrastructure requires almost no maintenance.
This post walks through how I made the jump – and how you can too – using Perl, Dancer2, Docker, GitHub Actions, and Google Cloud Run.
Why move away from a VPS?
Running everything on a single VPS used to make sense. You could ssh in, restart services, and feel like you were in control. But over time, the drawbacks grow:
- You have to maintain the OS and packages yourself.
- One bad app or memory leak can affect everything else.
- You’re paying for full-time CPU and RAM even when nothing’s happening.
- Scaling means provisioning a new server — not something you do in a coffee break.
Cloud Run, on the other hand, runs each app as a container and only charges you while requests are being served. When no-one’s using your app, it scales to zero and costs nothing.
Even better: no servers to patch, no ports to open, no SSL certificates to renew — Google does all of that for you.
What we’ll build
Here’s the plan. We’ll take a simple Dancer2 app and:
1. Package it as a Docker container.
2. Build that container automatically in GitHub Actions.
3. Deploy it to Google Cloud Run, where it runs securely and scales automatically.
4. Map a custom domain to it and forget about server admin forever.
If you’ve never touched Docker or Cloud Run before, don’t worry – I’ll explain what’s going on as we go.
Why Cloud Run fits Perl surprisingly well
Perl’s ecosystem has always valued stability and control. Containers give you both: you can lock in a Perl version, CPAN modules, and any shared libraries your app needs. The image you build today will still work next year.
Cloud Run runs those containers on demand. It’s effectively a managed starman farm where Google handles the hard parts – scaling, routing, and HTTPS.
You pay for CPU and memory per request, not per server. For small or moderate-traffic Perl apps, it’s often well under £1/month.
Step 1: Dockerising a Dancer2 app
If you’re new to Docker, think of it as a way of bundling your whole environment — Perl, modules, and configuration — into a portable image. It’s like freezing a working copy of your app so it can run identically anywhere.
Here’s a minimal Dockerfile for a Dancer2 app:
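The file itself didn’t survive in this copy; a sketch reconstructed from the description that follows might look like this (the PSGI entry point `bin/app.psgi` and the `cpanm` bootstrap line are assumptions, not the original):

```dockerfile
# Sketch of a minimal Dancer2 Dockerfile; details reconstructed.
FROM perl:5.42
RUN cpanm --notest Carton Starman
WORKDIR /app
COPY . /app
RUN carton install --deployment
EXPOSE 8080
CMD ["carton", "exec", "starman", "--port", "8080", "bin/app.psgi"]
```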
- `FROM perl:5.42` — starts from an official Perl image on Docker Hub.
- `Carton` keeps dependencies consistent between environments.
- The app is copied into `/app`, and `carton install --deployment` installs exactly what’s in your `cpanfile.snapshot`.
- The container exposes port 8080 (Cloud Run’s default).
- The `CMD` runs Starman, serving your Dancer2 app.
To test it locally:
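The commands aren’t shown in this copy; something along these lines should work (the image name `myapp` is a placeholder):

```shell
docker build -t myapp .
docker run --rm -p 8080:8080 myapp
```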
Then visit http://localhost:8080. If you see your Dancer2 homepage, you’ve successfully containerised your app.
Step 2: Building the image in GitHub Actions
Once it works locally, we can automate it. GitHub Actions will build and push our image to Google Artifact Registry whenever we push to main or tag a release.
Here’s a simplified workflow file (.github/workflows/build.yml):
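The workflow file itself is missing here; a minimal sketch, assuming Artifact Registry in `europe-west2` and a service-account key stored as the `GCP_SA_KEY` secret (all names are placeholders), could look like:

```yaml
name: Build
on:
  push:
    branches: [main]
    tags: ['v*']
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - run: gcloud auth configure-docker europe-west2-docker.pkg.dev
      - run: |
          IMAGE=europe-west2-docker.pkg.dev/my-project-id/apps/myapp:${{ github.sha }}
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
```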
Once that’s set up, every push builds a fresh, versioned container image.
Step 3: Deploying to Cloud Run
Now we’re ready to run it in the cloud. We’ll do that using Google’s command line program, gcloud. It’s available from Google’s official downloads or through most Linux package managers — for example:
# Fedora, RedHat or similar
sudo dnf install google-cloud-cli

# or on Debian/Ubuntu:
sudo apt install google-cloud-cli
Once installed, authenticate it with your Google account:
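That authentication step is a couple of commands (the project ID is a placeholder):

```shell
gcloud auth login
gcloud config set project my-project-id   # placeholder project ID
```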
Once that’s done, you can deploy manually from the command line:
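A deploy command along these lines (image path and region are placeholders) matches the description that follows:

```shell
gcloud run deploy myapp \
    --image=europe-west2-docker.pkg.dev/my-project-id/apps/myapp:latest \
    --region=europe-west2 \
    --allow-unauthenticated
```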
This tells Cloud Run to start a new service called myapp, using the image we just built.
After a minute or two, Google will give you a live HTTPS URL, like:
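The URL follows a fixed shape; the hash and region suffix below are invented:

```
https://myapp-ab12cd34ef-ew.a.run.app
```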
Visit it — and if all went well, you’ll see your familiar Dancer2 app, running happily on Cloud Run.
To connect your own domain, run:
gcloud run domain-mappings create \
    --service=myapp \
    --domain=myapp.example.com
Then update your DNS records as instructed. Within an hour or so, Cloud Run will issue a free SSL certificate for you.
Step 4: Automating the deployment
Once the manual deployment works, we can automate it too.
Here’s a second GitHub Actions workflow (deploy.yml) that triggers after a successful build:
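Again the file itself is missing from this copy; a sketch using Google’s published `deploy-cloudrun` action (service name, region, and image path are placeholders) might be:

```yaml
name: Deploy
on:
  workflow_run:
    workflows: [Build]
    types: [completed]
jobs:
  deploy:
    runs-on: ubuntu-latest
    if: ${{ github.event.workflow_run.conclusion == 'success' }}
    steps:
      - uses: google-github-actions/auth@v2
        with:
          credentials_json: ${{ secrets.GCP_SA_KEY }}
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: myapp
          region: europe-west2
          image: europe-west2-docker.pkg.dev/my-project-id/apps/myapp:latest
```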
You can take it further by splitting environments — e.g. main deploys to staging, tagged releases to production — but even this simple setup is a big step forward from ssh and git pull.
Step 5: Environment variables and configuration
Each Cloud Run service can have its own configuration and secrets. You can set these from the console or CLI:
gcloud run services update myapp \
    --set-env-vars="DANCER_ENV=production,DATABASE_URL=postgres://..."
In your Dancer2 app, you can then access them with:
$ENV{DATABASE_URL}
It’s a good idea to keep database credentials and API keys out of your code and inject them at deploy time like this.
Step 6: Monitoring and logs
Cloud Run integrates neatly with Google Cloud’s logging tools.
To see recent logs from your app:
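One way to do that with the generic logging CLI (the service name is a placeholder):

```shell
gcloud logging read \
    'resource.type="cloud_run_revision" AND resource.labels.service_name="myapp"' \
    --limit=50
```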
If you prefer a UI, you can use the Cloud Console’s Log Explorer to filter by service or severity.
Step 7: The payoff
Once you’ve done one migration, the next becomes almost trivial. Each Dancer2 app gets:
- Its own Dockerfile and GitHub workflows.
- Its own Cloud Run service and domain.
- Its own scaling and logging.
And none of them share a single byte of RAM with each other.
Here’s how the experience compares:
| Aspect | Old VPS | Cloud Run |
|---|---|---|
| OS maintenance | Manual upgrades | Managed |
| Scaling | Fixed size | Automatic |
| SSL | Let’s Encrypt renewals | Automatic |
| Deployment | SSH + git pull | Push to GitHub |
| Cost | Fixed monthly | Pay-per-request |
| Downtime risk | One app can crash all | Each isolated |
For small apps with light traffic, Cloud Run often costs pennies per month – less than the price of a coffee for peace of mind.
Lessons learned
After a few migrations, a few patterns emerged:
- Keep apps self-contained. Don’t share config or code across services; treat each app as a unit.
- Use digest-based deploys. Deploy by image digest (`@sha256:...`) rather than tag for true immutability.
- Logs are your friend. Cloud Run’s logs are rich; you rarely need to `ssh` anywhere again.
- Cold starts exist, but aren’t scary. If your app is infrequently used, expect the first request after a while to take a second longer.
- CI/CD is liberating. Once the pipeline’s in place, deployment becomes a non-event.
Costs and practicalities
One of the most pleasant surprises was the cost. My smallest Dancer2 app, which only gets a handful of requests each day, usually costs under £0.50/month on Cloud Run. Heavier ones rarely top a few pounds.
Compare that to the £10–£15/month I was paying for the old VPS — and the VPS didn’t scale, didn’t auto-restart cleanly, and didn’t come with HTTPS certificates for free.
What’s next
This post covers the essentials: containerising a Dancer2 app and deploying it to Cloud Run via GitHub Actions.
In future articles, I’ll look at:
- Connecting to persistent databases.
- Using caching.
- Adding monitoring and dashboards.
- Managing secrets with Google Secret Manager.
Conclusion
After two decades of running Perl web apps on traditional servers, Cloud Run feels like the future has finally caught up with me.
You still get to write your code in Dancer2 – the framework that’s made Perl web development fun for years – but you deploy it in a way that’s modern, repeatable, and blissfully low-maintenance.
No more patching kernels. No more 3 a.m. alerts. Just code, commit, and dance in the clouds.
The post Dancing in the Clouds: Moving Dancer2 Apps from a VPS to Cloud Run first appeared on Perl Hacks.
Go Ahead ‘make’ My Day (Part II)
In our previous blog post “Go Ahead ‘make’ My
Day” we
presented the scriptlet, an advanced make technique for spicing up
your Makefile recipes. In this follow-up, we’ll deconstruct the
scriptlet and detail the ingredients that make up the secret sauce.
Introducing the Scriptlet
Makefile scriptlets are an advanced technique that uses
GNU make’s powerful functions to safely embed a multi-line script
(Perl, in our example) into a single, clean shell command. It turns a
complex block of logic into an easily executable template.
An Example Scriptlet
#-*- mode: makefile; -*-
DARKPAN_TEMPLATE="https://cpan.openbedrock.net/orepan2/authors/D/DU/DUMMY/%s-%s.tar.gz"
define create_requires =
# scriptlet to create cpanfile from an list of required Perl modules
# skip comments
my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};
while (s/^#[^\n]+\n//g){};
# skip blank lines
while (s/\n\n/\n/) {};
for (split/\n/) {
my ($mod, $v) = split /\s+/;
next if !$mod;
my $dist = $mod;
$dist =~s/::/\-/g;
my $url = sprintf $DARKPAN_TEMPLATE, $dist, $v;
print <<"EOF";
requires \"$mod\", \"$v\",
url => \"$url\";
EOF
}
endef
export s_create_requires = $(value create_requires)
cpanfile.darkpan: requires.darkpan
	DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
	DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $< > $@ || rm $@
Dissecting the Scriptlet
1. The Container: Defining the Script (define / endef)
This section creates the multi-line variable that holds your entire Perl program.
define create_requires =
# Perl code here...
endef
- `define ... endef`: This is GNU Make’s mechanism for defining a recursively expanded variable that spans multiple lines. The content is not processed by the shell yet; it’s simply stored by `make`.
- The Advantage: This is the only clean way to write readable, indented code (like your `while` loop and `if` statements) directly inside a `Makefile`.
2. The Bridge: Passing Environment Data (my $ENV{...})
This is a critical step for making your script template portable and configurable.
my $DARKPAN_TEMPLATE=$ENV{DARKPAN_TEMPLATE};
- The Problem: Your Perl script needs dynamic values (like the template URL) that are set by `make`.
- The Solution: Instead of hardcoding the URL, the Perl code is designed to read from the shell environment variable `$ENV{DARKPAN_TEMPLATE}`. This makes the script agnostic to its calling environment, delegating the data management back to the `Makefile`.
3. The Transformer: Shell Preparation (export and $(value))
This is the “magic” that turns the multi-line Make variable into a single, clean shell command.
export s_create_requires = $(value create_requires)
- `$(value create_requires)`: This is a specific Make function that performs a direct, single-pass expansion of the variable’s raw content. Crucially, it hands the entire multi-line block over as one string suitable for export, preserving the special characters and line breaks that the shell will later pass to the interpreter.
- `export s_create_requires = ...`: This exports the multi-line Perl script content as an environment variable (`s_create_requires`) that will be accessible to any shell process running in the recipe’s environment.
4. The Execution: Atomic Execution ($$ and perl -0ne)
The final recipe executes the entire, complex process as a single, atomic operation, which is the goal of robust Makefiles.
cpanfile.darkpan: requires.darkpan
	DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE); \
	DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl -0ne "$$s_create_requires" $< > $@ || rm $@
- `DARKPAN_TEMPLATE=$(DARKPAN_TEMPLATE)`: This creates the local shell variable.
- `DARKPAN_TEMPLATE=$$DARKPAN_TEMPLATE perl ...`: This is the clean execution. The first `DARKPAN_TEMPLATE=` passes the newly created shell variable’s value as an environment variable to the `perl` process. The `$$` ensures the shell, not `make`, expands the variable before the Perl interpreter runs.
- `perl -0ne "..."`: Runs the Perl script:
  - `-n` and `-e`: execute the given script against the input.
  - `-0`: tells Perl to use NUL as the input record separator, so ordinary text input arrives as one single block (effectively slurping the file), which is necessary for the multi-line regex and `split /\n/` logic.
- `|| rm $@`: This is the final mark of quality. It makes the entire command transactional: if the Perl script fails, the half-written target file (`$@`) is deleted, forcing `make` to try again later.
Hey Now! You’re a Rockstar!
(..get your game on!)
Mastering build automation using make will transform you from
being an average DevOps engineer into a rockstar. GNU make is a Swiss
Army knife with more tools than you might think! The knives are sharp
and the tools are highly targeted to handle all the real-world issues
build automation has encountered over the decades. Learning to use
make effectively will put you head and shoulders above the herd (see
what I did there? 😉).
Calling All Pythonistas!
The scriptlet technique creates a powerful, universal pattern for clean, atomic builds:
- It’s Language Agnostic: Pythonistas! Join the fun! The same `define`/`export` technique works perfectly with `python -c`.
- The Win: This ensures that every developer - regardless of their preferred language - can achieve the same clean, atomic build and avoid external script chaos.
Learn more about GNU
make
and move your Makefiles from simple shell commands to precision
instruments of automation.
Thanks for reading.
Learn More
A long time ago I used Shutter and found it as an excellent tool. Now I get all kinds of crashes.
Actually "Now" was a while ago, since then I upgraded Ubuntu and now I get all kinds of other error messages.
However, I wonder.
Why are there so many errors?
Whose fault is it?
- A failure of the Perl community?
- A failure of the Ubuntu or the Debian developers?
- A failure of the whole idea of Open Source?
- Maybe I broke the system?
It starts so badly and then it crashes. I don't want to spend time figuring out what the problem is. I don't even have the energy to open a ticket. I am not even sure where I should open it: on Ubuntu, or on the Shutter project?
Here is the output:
$ shutter
Subroutine Pango::Layout::set_text redefined at /usr/share/perl5/Gtk3.pm line 2299.
require Gtk3.pm called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
Subroutine Pango::Layout::set_markup redefined at /usr/share/perl5/Gtk3.pm line 2305.
require Gtk3.pm called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-GObject-CRITICAL **: g_boxed_type_register_static: assertion 'g_type_from_name (name) == 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-GObject-CRITICAL **: g_boxed_type_register_static: assertion 'g_type_from_name (name) == 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-GObject-CRITICAL **: g_boxed_type_register_static: assertion 'g_type_from_name (name) == 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
GLib-CRITICAL **: g_once_init_leave: assertion 'result != 0' failed at /usr/lib/x86_64-linux-gnu/perl5/5.36/Glib/Object/Introspection.pm line 110.
at /usr/share/perl5/Gtk3.pm line 489.
Gtk3::import("Gtk3", "-init") called at /usr/bin/shutter line 72
Shutter::App::BEGIN() called at /usr/bin/shutter line 72
eval {...} called at /usr/bin/shutter line 72
Variable "$progname_active" will not stay shared at /usr/bin/shutter line 2778.
Variable "$progname" will not stay shared at /usr/bin/shutter line 2779.
Variable "$im_colors_active" will not stay shared at /usr/bin/shutter line 2787.
Variable "$combobox_im_colors" will not stay shared at /usr/bin/shutter line 2788.
Variable "$trans_check" will not stay shared at /usr/bin/shutter line 2798.
... About 700 similar error messages ...
Name "Gtk3::Gdk::SELECTION_CLIPBOARD" used only once: possible typo at /usr/bin/shutter line 291.
WARNING: gnome-web-photo is missing --> screenshots of websites will be disabled!
at /usr/bin/shutter line 9038.
Shutter::App::fct_init_depend() called at /usr/bin/shutter line 195
Useless use of hash element in void context at /usr/share/perl5/Shutter/App/Common.pm line 77.
require Shutter/App/Common.pm called at /usr/bin/shutter line 206
Useless use of hash element in void context at /usr/share/perl5/Shutter/App/Common.pm line 80.
require Shutter/App/Common.pm called at /usr/bin/shutter line 206
Subroutine lookup redefined at /usr/share/perl5/Shutter/Draw/DrawingTool.pm line 28.
require Shutter/Draw/DrawingTool.pm called at /usr/bin/shutter line 228
Variable "$self" will not stay shared at /usr/share/perl5/Shutter/Draw/DrawingTool.pm line 671.
require Shutter/Draw/DrawingTool.pm called at /usr/bin/shutter line 228
Variable "$self" will not stay shared at /usr/share/perl5/Shutter/Screenshot/SelectorAdvanced.pm line 840.
require Shutter/Screenshot/SelectorAdvanced.pm called at /usr/bin/shutter line 233
Failed to register: GDBus.Error:org.freedesktop.DBus.Error.NoReply: Message recipient disconnected from message bus without replying
Notes from the live-coding session (part of the Perl Maven live events).
- SVG, the module for which we wrote tests.
- Devel::Cover, to generate a test coverage report: run `cover -test`.
- `done_testing()` or `done_testing(2)`.
Meeting summary
Quick recap
The meeting began with informal introductions and discussions about Perl programming, including experiences with maintaining Perl codebases and the challenges of the language's syntax. The main technical focus was on testing and code coverage, with detailed demonstrations of using Devel::Cover and various testing modules in Perl, including examples of testing SVG functionality and handling exceptions. The session concluded with discussions about testing practices, code coverage implementation, and the benefits of automated testing, while also touching on practical aspects of Perl's object-oriented programming and error handling features.
SVG Test Coverage Analysis
Gabor demonstrated how to use Devel::Cover to generate test coverage reports for the SVG.pm module. He showed that the main module has 98% coverage, while some submodules have lower coverage. Gabor explained how to interpret the coverage reports, including statement, branch, and condition coverage. He also discussed the importance of identifying and removing unused code that appears uncovered by tests. Gabor then walked through some example tests in the SVG distribution, explaining how they verify different aspects of the SVG module's functionality.
Original announcement
Adding tests to legacy Perl code
During this live coding event we'll take a Perl module from CPAN and add some tests to it.
Further events
Register on the Perl Maven Luma calendar.
Tony writes:
```
[Hours] [Activity]
2025/10/02 Thursday
 1.03 #23782 testing, comments
 0.23 #23794 review change, research and comment
 0.32 #23787 review and approve
 0.27 #23777 review, research and comment
 0.17 #23775 review and comment
0.48 #23608 research and comment
2.50
2025/10/03 Friday
 1.30 #21877 code review, find another possible bug
 0.08 #23787 review updates, has other approval, apply to blead
 0.68 #21877 bug report rcatline - #23798
 0.08 #23794 review updates and approve
 0.08 #16865 follow-up
0.90 #23704 research and comment
3.12
2025/10/06 Monday
 0.27 #23728 review and comment
 0.60 #23752 review, testing and comment
 0.15 #23813 review, but nothing further to say
 0.18 #23809 comment
 0.18 #21877 write more tests
 0.07 #23817 review, got merged as I looked at it
 0.22 #23795 start review
1.95 #23795 more review
3.62
2025/10/07 Tuesday
 0.60 #23774 review
 0.68 #23796 review and approve
 0.37 #23797 review
 0.08 #23797 finish review and approve
 0.25 #23799 review and comment
 0.12 #23800 review and approve
 0.10 #23801 review and comment
 0.17 #23802 comment
 0.08 #23752 review and approve
0.60 #23795 more review
3.05
2025/10/08 Wednesday
 0.55 #23795 more review
 0.72 #23795 more review
 0.12 #23782 marked resolved comments resolved
0.08 #23801 review updates and approve
1.47
2025/10/09 Thursday
 0.45 #23799 comment
 0.08 #23821 review and approve
 0.08 #23824 review and approve
 0.12 #23827 review, research and approve
 0.10 #23820 review, research and comment
 0.30 #23812 review, existing comment matches my opinion
 0.10 #23805 briefly comment
 2.30 #21877 add #23798 tests, testing, more work on re-implementation
 1.67 #21877 work on re-implementation
0.55 #21877 more work
5.75
2025/10/10 Friday
0.23 #23828 review discussion and failing message, comment
0.23
2025/10/13 Monday 0.23 #23829 review discussion and comment 0.22 #23833 comment 0.20 #23834 review and approve with comment 0.42 #23837 review and approve 0.30 #23838 review and comment 0.42 #23840 review and comment 0.08 #23843 review and approve 0.15 #23842 review and comment 0.23 #23836 review test failures and comment 0.17 #23841 review discussion, research and comment 0.52 #23676 search for other possible modules, review and comment 0.13 #23833 review and comment 0.47 #21877 more tests, debugging 0.32 #23802 research, comment
1.18 #21877 debugging
5.04
2025/10/14 Tuesday 1.20 #23844 review, comments 0.15 #23845 review and approve 0.33 #21877 debugging 0.72 #21877 debugging, testing
0.67 #21877 debugging
3.07
2025/10/15 Wednesday
0.23 check coverity scan reports
0.23
2025/10/16 Thursday 0.55 #23847 review, #p5p discussion re 5.42.1, approve 0.53 #23851 review, research and comment, more maint-5.42 discussion 0.98 #23850 review, comments 0.53 #23852 research and comment
0.37 maint votes: vote and look for anything else
2.96
2025/10/20 Monday 0.35 #23840 review updates and approve 0.78 #23838 review updates, review build logs, comments 0.33 #23833 investigate build errors and warnings, restart mingw64 CI 0.08 #23818 review updates and approve 0.28 #23851 research and comments
1.97 #23795 more review
3.79
2025/10/21 Tuesday 0.12 #23838 check updates, restart failed CI job 0.38 #23853 review, research and comment 0.62 #23865 review, coverage testing and approve 1.13 #23858 review, testing, comments 0.08 #23838 check CI results and approve 1.07 #23795 more review and let dave know I’m done for now 1.08 #23852 work on re-working docs, research equivalence of
sigprocmask and pthread_sigmask, comment
4.48
2025/10/22 Wednesday 0.27 #23858 review updates and conditionally approve 0.27 #23868 review and approve 0.47 update t/test_pl.pod with the new PREAMBLE directive PR 23869 1.43 #23782 try to understand the code, minor change and testing
0.42 #23782 more testing, debugging
2.86
2025/10/23 Thursday
2.78 #23871 review, testing, comments
2.78
2025/10/27 Monday 2.80 #23871 review updates, comment, testing
0.43 #23795 comments
3.23
2025/10/28 Tuesday 0.40 #23879 review changes and research, comment on referred ticket 0.10 #23781 comment 0.08 #23809 briefly comment 0.67 #23867 review 0.45 #23867 comments
1.30 #23872 review
3.00
2025/10/29 Wednesday 1.10 #23782 testing and follow-up
0.53 #23781 re-check
1.63
2025/10/30 Thursday 0.35 #23882 review and comment 1.70 #23873 review, testing and approve 0.33 #23614 comment
1.55 #21877 debugging - fix one issue
3.93
Which I calculate is 56.74 hours.
Approximately 59 tickets were reviewed or worked on, and 1 patches were applied. ```
- App::cpm - a fast CPAN module installer
  - Version: 0.998000 on 2025-11-07, with 176 votes
  - Previous CPAN version: 0.997024 was 3 months, 27 days before
  - Author: SKAJI
- App::Netdisco - An open source web-based network management tool.
  - Version: 2.094003 on 2025-11-03, with 777 votes
  - Previous CPAN version: 2.094002 was 5 days before
  - Author: OLIVER
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20251102.001 on 2025-11-02, with 25 votes
  - Previous CPAN version: 20251026.001 was 7 days before
  - Author: BRIANDFOY
- Dist::Zilla - distribution builder; installer not included!
  - Version: 6.034 on 2025-11-07, with 189 votes
  - Previous CPAN version: 6.033 was 6 months, 5 days before
  - Author: RJBS
- PerlPowerTools - BSD utilities written in pure Perl
  - Version: 1.053 on 2025-11-04, with 223 votes
  - Previous CPAN version: 1.052 was 3 months, 17 days before
  - Author: BRIANDFOY
- Sys::Virt - libvirt Perl API
  - Version: v11.8.0 on 2025-11-07, with 17 votes
  - Previous CPAN version: v11.6.0 was 3 months, 3 days before
  - Author: DANBERR
- Test::Fatal - incredibly simple helpers for testing code with exceptions
  - Version: 0.018 on 2025-11-06, with 40 votes
  - Previous CPAN version: 0.017 was 2 years, 10 months, 5 days before
  - Author: RJBS
- Time::Piece - Object Oriented time objects
  - Version: 1.40 on 2025-11-08, with 64 votes
  - Previous CPAN version: 1.39 was 14 days before
  - Author: ESAYM
- Workflow - Simple, flexible system to implement workflows
  - Version: 2.07 on 2025-11-08, with 34 votes
  - Previous CPAN version: 2.06 was 2 months, 26 days before
  - Author: JONASBN

