Continuing the dev.to series about beautiful Perl features, here are the recent articles (March 2026):
Introduction
Since I last wrote an XS tutorial, my knowledge of C has increased. Much of this has come from improvements in LLM software, which has helped me past points where I would previously have been stuck. That knowledge and tooling have since enabled me to craft more elegant and efficient XS implementations.
Today I will share with you my technique for writing reusable C/XS code.
One of the most powerful patterns in XS development is writing your core logic in pure C header files. This gives you:
- Zero-cost reuse - no runtime linking, no shared libraries, just a #include line
- No Perl dependency in the C layer - your headers work in any C project
- Compile-time inlining - the compiler sees everything, optimises aggressively
- Simple distribution - headers are installed alongside the Perl module via the PM hash
This tutorial walks through the complete pattern step by step, using a minimal working example you can build and run yourself.
The Example
We will create two distributions:
- Abacus - a provider distribution that ships a reusable pure-C abacus_math.h header containing simple arithmetic functions
- Tally - a consumer distribution that #includes the Abacus header to build its own XS module, without duplicating any C code
As always, let's start by creating the distributions that we will need for this tutorial. Open your terminal and run module-starter. If you are using a modern version of Module::Starter then the command has changed slightly since my last posts.
module-starter --module=Abacus --author="LNATION <email@lnation.org>"
module-starter --module=Tally --author="LNATION <email@lnation.org>"
Part 1: The Provider Distribution (Abacus)
Write the pure-C header
This is the reusable part. It has zero Perl dependencies - just standard C.
Now enter the Abacus directory and create the include directory:
cd Abacus
mkdir include
Then create a new file include/abacus_math.h:
touch include/abacus_math.h
vim include/abacus_math.h
Paste the following code into the file:
#ifndef ABACUS_MATH_H
#define ABACUS_MATH_H
/*
* abacus_math.h - Pure C arithmetic library (no Perl dependencies)
*
* This header is the reusable entry point for any C or XS project
* that needs basic arithmetic operations. It has ZERO Perl/XS
* dependencies.
*
* Usage from another XS module:
*
* #include "abacus_math.h"
*
* Build: add -I/path/to/Abacus/include to your compiler flags.
*/
#include <stdint.h>
/* ── Error handling hook ─────────────────────────────────────────
*
* Consumers can #define ABACUS_FATAL(msg) before including this
* header to route errors through their own mechanism.
*
* In an XS module you would typically do:
*
* #define ABACUS_FATAL(msg) croak("%s", (msg))
* #include "abacus_math.h"
*
* In plain C the default behaviour is fprintf + abort.
*/
#ifndef ABACUS_FATAL
# include <stdio.h>
# include <stdlib.h>
# define ABACUS_FATAL(msg) do { \
fprintf(stderr, "abacus fatal: %s\n", (msg)); \
abort(); \
} while (0)
#endif
/* ── Arithmetic operations ───────────────────────────────────── */
static inline int32_t
abacus_add(int32_t a, int32_t b) {
return a + b;
}
static inline int32_t
abacus_subtract(int32_t a, int32_t b) {
return a - b;
}
static inline int32_t
abacus_multiply(int32_t a, int32_t b) {
return a * b;
}
static inline int32_t
abacus_divide(int32_t a, int32_t b) {
if (b == 0) {
ABACUS_FATAL("division by zero");
}
return a / b;
}
static inline int32_t
abacus_factorial(int32_t n) {
int32_t result = 1;
int32_t i;
if (n < 0) {
ABACUS_FATAL("factorial of negative number");
}
for (i = 2; i <= n; i++) {
result *= i;
}
return result;
}
#endif /* ABACUS_MATH_H */
The code above demonstrates three critical design patterns for reusable C headers:
static inline functions eliminate linker complications by giving each translation unit its own copy of the function. The compiler can then inline these small arithmetic operations directly into the call site, producing zero-overhead abstractions. This is key to the "zero-cost reuse" principle—there is no shared library dependency, no function call overhead, just pure generated code.
The ABACUS_FATAL macro hook provides a customization point for error handling. By default, it calls fprintf() and abort() in standalone C programs; but consumers can #define ABACUS_FATAL(msg) croak("%s", (msg)) before including the header to integrate seamlessly with Perl's exception system. This single mechanism allows the same C header to work across Perl XS, plain C, and other environments without code duplication.
The use of only stdint.h integers and no Perl types ensures the header remains truly portable. There are no SV* pointers, no pTHX context variables, no XSUB.h includes—just standard C99 types. This purity is what allows the header to be #included into any C or XS project without creating hidden Perl dependencies at the C layer.
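To see that portability in practice, here is a minimal standalone C sketch using the arithmetic functions. The function bodies are excerpted inline so the sketch compiles on its own; in a real consumer you would simply #include "abacus_math.h" and add -I/path/to/Abacus/include to your compiler flags.

```c
#include <stdint.h>

/* Excerpted from abacus_math.h so this demo is self-contained.
 * static inline means each translation unit gets its own copy:
 * no linker step, no shared library, and the compiler is free
 * to inline the calls away entirely. */
static inline int32_t
abacus_add(int32_t a, int32_t b) {
    return a + b;
}

static inline int32_t
abacus_factorial(int32_t n) {
    int32_t result = 1;
    int32_t i;
    for (i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}
```

Any plain C program (or another language's FFI layer) can call these directly; nothing about them knows Perl exists.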
Write the Perl-facing XS header
Next we will add another header to hold the Perl/XS-specific logic. Create a new file called include/abacus.h. The rationale behind this thin wrapper is that it pulls in Perl's headers and sets up the ABACUS_FATAL macro to use croak(). To reiterate, only an XS distribution should include this header, whereas abacus_math.h is generic and could be used by any other language that binds C.
touch include/abacus.h
vim include/abacus.h
Paste the following code into the file:
#ifndef ABACUS_H
#define ABACUS_H
/*
* abacus.h - Perl XS wrapper header for the Abacus library
*
* This header sets up Perl-specific error handling and includes
* the pure C core library.
*
* For reuse from OTHER XS modules without Perl overhead, include
* abacus_math.h directly instead (see that header for usage).
*/
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "ppport.h"
/* Route fatal errors through Perl's core croak() */
#define ABACUS_FATAL(msg) croak("%s", (msg))
/* Pull in the pure-C library */
#include "abacus_math.h"
#endif /* ABACUS_H */
Now any .xs file can #include "abacus.h" and get the full Perl/XS environment plus all the pure-C functions, with errors properly integrated.
Write the XS file
Next we will create the XS file. Return to the root directory and then enter the lib directory, where you should see the Abacus.pm file already. Create a new XS file called Abacus.xs. This will be the glue that exposes the C functions to Perl.
cd ../lib
touch Abacus.xs
vim Abacus.xs
#include "abacus.h"
MODULE = Abacus PACKAGE = Abacus
PROTOTYPES: DISABLE
int
add(a, b)
int a
int b
CODE:
RETVAL = abacus_add(a, b);
OUTPUT:
RETVAL
int
subtract(a, b)
int a
int b
CODE:
RETVAL = abacus_subtract(a, b);
OUTPUT:
RETVAL
int
multiply(a, b)
int a
int b
CODE:
RETVAL = abacus_multiply(a, b);
OUTPUT:
RETVAL
int
divide(a, b)
int a
int b
CODE:
RETVAL = abacus_divide(a, b);
OUTPUT:
RETVAL
int
factorial(n)
int n
CODE:
RETVAL = abacus_factorial(n);
OUTPUT:
RETVAL
As you can see, we include abacus.h, which pulls in all the C that we need to create our XS module. We then define add, subtract, multiply, divide and factorial as XSUBs. As you should know by now, XSUBs can be called directly from your Perl code.
Write the Perl module with include_dir()
Next, open the .pm file and update it to add an exporter for the XSUBs we have just created.
package Abacus;
use 5.008003;
use strict;
use warnings;
our $VERSION = '0.01';
use Exporter 'import';
our @EXPORT_OK = qw(add subtract multiply divide factorial);
require XSLoader;
XSLoader::load('Abacus', $VERSION);
Now for the critical piece that makes header sharing work: an include_dir() method that returns the path to the installed headers, so that consumer distributions can find them at build time.
sub include_dir {
my $dir = $INC{'Abacus.pm'};
$dir =~ s{Abacus\.pm$}{Abacus/include};
return $dir;
}
1;
How include_dir() works:
- When Perl loads Abacus.pm, it records the full path in %INC (e.g. /usr/lib/perl5/site_perl/Abacus.pm)
- include_dir() replaces Abacus.pm with Abacus/include
- That directory exists because Makefile.PL installs the headers there (see next step)
Write the Makefile.PL that installs headers
The PM hash is what makes headers available to other distributions after install. It maps source files to their installation destinations.
Abacus/Makefile.PL
use 5.008003;
use strict;
use warnings;
use ExtUtils::MakeMaker;
WriteMakefile(
NAME => 'Abacus',
AUTHOR => 'Your Name <you@example.com>',
VERSION_FROM => 'lib/Abacus.pm',
ABSTRACT_FROM => 'lib/Abacus.pm',
LICENSE => 'artistic_2',
MIN_PERL_VERSION => '5.008003',
CONFIGURE_REQUIRES => {
'ExtUtils::MakeMaker' => '0',
},
TEST_REQUIRES => {
'Test::More' => '0',
},
PREREQ_PM => {},
XSMULTI => 1,
# XS configuration
INC => '-I. -Iinclude',
OBJECT => '$(O_FILES)',
# *** THIS IS THE KEY PART ***
# Install headers alongside the module so dependent
# distributions can find them via Abacus->include_dir()
PM => {
'lib/Abacus.pm' => '$(INST_LIB)/Abacus.pm',
'include/abacus.h' => '$(INST_LIB)/Abacus/include/abacus.h',
'include/abacus_math.h' => '$(INST_LIB)/Abacus/include/abacus_math.h',
},
dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz' },
clean => { FILES => 'Abacus-*' },
);
The PM hash does two things:
- Installs Abacus.pm as normal
- Copies the header files into Abacus/include/ alongside the module
After make install, the filesystem looks like:
site_perl/
Abacus.pm
Abacus/
include/
abacus.h
abacus_math.h
Write a test
Abacus/t/01-basic.t
use strict;
use warnings;
use Test::More;
use Abacus qw(add subtract multiply divide factorial);
is(add(2, 3), 5, 'add');
is(subtract(10, 4), 6, 'subtract');
is(multiply(3, 7), 21, 'multiply');
is(divide(20, 4), 5, 'divide');
is(factorial(5), 120, 'factorial');
eval { divide(1, 0) };
like($@, qr/division by zero/, 'divide by zero croaks');
eval { factorial(-1) };
like($@, qr/negative/, 'negative factorial croaks');
done_testing;
Build and install Abacus
cd Abacus
perl Makefile.PL
make
make test
make install # installs headers into site_perl
Part 2: The Consumer Distribution (Tally)
Tally is a separate distribution that reuses Abacus's C arithmetic without duplicating any code. It adds its own "running total" functionality on top.
Write the Makefile.PL that finds Abacus headers
This is where the consumer locates the provider's headers. The two-step resolution strategy supports both installed (CPAN) and development (sibling
directory) scenarios.
Tally/Makefile.PL
use 5.008003;
use strict;
use warnings;
use ExtUtils::MakeMaker;
# Resolve Abacus include directory:
# 1. Try installed Abacus module (CPAN / system)
# 2. Fall back to sibling directory (development)
my $abacus_inc;
eval {
no warnings 'redefine';
local *XSLoader::load = sub {}; # skip XS bootstrap
require Abacus;
my $dir = Abacus->include_dir();
$abacus_inc = $dir if $dir && -d $dir;
};
if (!$abacus_inc && -d '../Abacus/include') {
$abacus_inc = '../Abacus/include';
}
die "Cannot find Abacus include directory.\n"
. "Install Abacus or place it as a sibling directory.\n"
unless $abacus_inc;
WriteMakefile(
NAME => 'Tally',
AUTHOR => 'Your Name <you@example.com>',
VERSION_FROM => 'lib/Tally.pm',
ABSTRACT_FROM => 'lib/Tally.pm',
LICENSE => 'artistic_2',
MIN_PERL_VERSION => '5.008003',
CONFIGURE_REQUIRES => {
'ExtUtils::MakeMaker' => '0',
'Abacus' => '0.01',
},
TEST_REQUIRES => {
'Test::More' => '0',
},
PREREQ_PM => {
'Abacus' => '0.01',
},
# Point the compiler at Abacus's installed headers
INC => "-I$abacus_inc",
OBJECT => '$(O_FILES)',
dist => { COMPRESS => 'gzip -9f', SUFFIX => 'gz' },
clean => { FILES => 'Tally-*' },
);
Let's walk through the header resolution:
- Try the installed path first - require Abacus loads the module, then Abacus->include_dir() returns the path where the headers were installed. We stub out XSLoader::load because we only need the pure-Perl include_dir() method, not the XS functions.
- Fall back to sibling directory - during development, Abacus and Tally often live side by side. ../Abacus/include handles this case.
- Die with a clear message if neither path works.
The resolved path is passed to INC, which adds it to the C compiler's include search path (-I/path/to/Abacus/include).
Abacus is listed in both CONFIGURE_REQUIRES and PREREQ_PM:
- CONFIGURE_REQUIRES ensures Abacus is installed before Makefile.PL runs (needed because we require Abacus at configure time)
- PREREQ_PM ensures it is available at runtime too
Write the XS file
This is where the reuse happens. Tally includes abacus_math.h directly -
no Perl coupling, just pure C function calls.
Tally/Tally.xs
#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
/* Hook Abacus errors into Perl's croak() */
#define ABACUS_FATAL(msg) croak("%s", (msg))
/* Include the pure-C header from Abacus - no Perl deps in the header */
#include "abacus_math.h"
/* ── Tally's own C logic, built on top of Abacus ─────────────── */
typedef struct {
int32_t total;
} tally_state_t;
static inline void
tally_init(tally_state_t *state) {
state->total = 0;
}
static inline int32_t
tally_add(tally_state_t *state, int32_t value) {
state->total = abacus_add(state->total, value);
return state->total;
}
static inline int32_t
tally_subtract(tally_state_t *state, int32_t value) {
state->total = abacus_subtract(state->total, value);
return state->total;
}
static inline int32_t
tally_multiply_total(tally_state_t *state, int32_t value) {
state->total = abacus_multiply(state->total, value);
return state->total;
}
static inline int32_t
tally_get(tally_state_t *state) {
return state->total;
}
static inline void
tally_reset(tally_state_t *state) {
state->total = 0;
}
/* ── XS bindings ─────────────────────────────────────────────── */
MODULE = Tally PACKAGE = Tally
PROTOTYPES: DISABLE
SV *
new(class)
const char *class
CODE:
tally_state_t *state;
Newxz(state, 1, tally_state_t);
tally_init(state);
RETVAL = newSV(0);
sv_setref_pv(RETVAL, class, (void *)state);
OUTPUT:
RETVAL
int
add(self, value)
SV *self
int value
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
RETVAL = tally_add(state, value);
OUTPUT:
RETVAL
int
subtract(self, value)
SV *self
int value
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
RETVAL = tally_subtract(state, value);
OUTPUT:
RETVAL
int
multiply_total(self, value)
SV *self
int value
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
RETVAL = tally_multiply_total(state, value);
OUTPUT:
RETVAL
int
total(self)
SV *self
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
RETVAL = tally_get(state);
OUTPUT:
RETVAL
void
reset(self)
SV *self
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
tally_reset(state);
void
DESTROY(self)
SV *self
CODE:
tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
Safefree(state);
Notice that Tally includes abacus_math.h (the pure C header), not abacus.h (the Perl-facing wrapper). This is intentional - Tally has its own Perl/XS setup and only needs the C functions.
Write the Perl module
Tally/lib/Tally.pm
package Tally;
use 5.008003;
use strict;
use warnings;
our $VERSION = '0.01';
require XSLoader;
XSLoader::load('Tally', $VERSION);
1;
__END__
=head1 NAME
Tally - Running total calculator using Abacus C headers
=head1 SYNOPSIS
use Tally;
my $t = Tally->new;
$t->add(10); # total is now 10
$t->add(5); # total is now 15
$t->subtract(3); # total is now 12
$t->multiply_total(2); # total is now 24
say $t->total; # 24
$t->reset; # back to 0
=cut
Write a test
Tally/t/01-basic.t
use strict;
use warnings;
use Test::More;
use_ok('Tally');
my $t = Tally->new;
isa_ok($t, 'Tally');
is($t->total, 0, 'starts at zero');
is($t->add(10), 10, 'add 10');
is($t->add(5), 15, 'add 5');
is($t->subtract(3), 12, 'subtract 3');
is($t->multiply_total(2), 24, 'multiply by 2');
is($t->total, 24, 'total is 24');
$t->reset;
is($t->total, 0, 'reset to zero');
done_testing;
Build Tally (development mode)
cd Tally
perl Makefile.PL # finds ../Abacus/include automatically
make
make test
I hope you found this tutorial useful! If you have questions about XS, C header reuse, or building modular Perl/C libraries, please leave a message.
Three modules, one goal: fast, correct identifier generation in Perl with zero runtime dependencies. Horus generates UUIDs. Sekhmet generates ULIDs. Apophis uses deterministic UUIDs to build content-addressable storage. All three are implemented in C, exposed through XS, and designed to work together.
Horus: Every UUID Version, One Module
Horus implements all UUID versions defined in RFC 9562 -- v1 through v8, plus NIL and MAX. The entire engine is C, compiled once, called millions of times per second.
use Horus qw(:all);
my $random = uuid_v4(); # 122 random bits
my $sortable = uuid_v7(); # timestamp + random, sortable
my $fixed = uuid_v5(UUID_NS_DNS, "example.com"); # deterministic, always the same
Why multiple versions matter
Each version solves a different problem:
v4 is the workhorse. 122 bits of randomness, no coordination needed. Use it for session tokens, request IDs, anything where uniqueness is all you need.
v7 embeds a millisecond timestamp in the high bits, making UUIDs lexicographically sortable. Database indexes love this -- new rows append instead of scattering across B-tree pages. Horus guarantees monotonic ordering within the same millisecond.
my @ids = map { uuid_v7() } 1..3;
# 019d38f6-3e9a-765c-ae1c-1cfeb0c30000
# 019d38f6-3e9a-765c-ae1c-1cfeb0c40000
# 019d38f6-3e9a-765c-ae1c-1cfeb0c50000
# String sort == chronological sort
v5 is deterministic. Given the same namespace and name, it always produces the same UUID. This is the foundation Apophis builds on: the same content, the same identifier, every time.
my $a = uuid_v5(UUID_NS_DNS, "example.com");
my $b = uuid_v5(UUID_NS_DNS, "example.com");
# $a eq $b -- always
Ten output formats
Every generator accepts a format parameter. Convert between them freely:
my $id = uuid_v4();
uuid_convert($id, UUID_FMT_STR); # 550e8400-e29b-41d4-a716-446655440000
uuid_convert($id, UUID_FMT_HEX); # 550e8400e29b41d4a716446655440000
uuid_convert($id, UUID_FMT_BRACES); # {550e8400-e29b-41d4-a716-446655440000}
uuid_convert($id, UUID_FMT_URN); # urn:uuid:550e8400-e29b-41d4-a716-446655440000
uuid_convert($id, UUID_FMT_BASE64); # VQ6EAOKbQdSnFkRmVUQAAA
Bulk generation
When you need thousands of IDs, crossing the Perl/C boundary once beats crossing it thousands of times:
my @ids = uuid_v4_bulk(10_000); # single call, 10k UUIDs back
Utilities
uuid_validate($string); # is this a valid UUID?
uuid_version($string); # which version? (1-8)
uuid_time($v7_uuid); # extract epoch seconds from v7/v6
uuid_cmp($a, $b); # sort comparison (-1, 0, 1)
Sekhmet: ULIDs for When You Need Sortable and Compact
A ULID is 26 characters of Crockford base32 encoding: 10 characters of millisecond timestamp followed by 16 characters of randomness. They sort lexicographically by time, they are URL-safe, and they are shorter than UUIDs.
use Sekhmet qw(:all);
my $ulid = ulid();
# 06EKHXHYKAT25K0YQJHN6A6YJR
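For illustration, the 10-character timestamp prefix can be sketched in a few lines of C. This follows the ULID specification's Crockford base32 encoding (10 characters at 5 bits each covers the 48-bit millisecond value) and is not necessarily how Sekhmet implements it internally:

```c
/* Encode a millisecond timestamp as the 10-char Crockford base32
 * prefix of a ULID. Crockford's alphabet omits I, L, O and U to
 * avoid ambiguity. Sketch per the ULID spec, not Sekhmet's code. */
static void ulid_encode_time(unsigned long long ms, char out[11]) {
    static const char alphabet[] = "0123456789ABCDEFGHJKMNPQRSTVWXYZ";
    int i;
    for (i = 9; i >= 0; i--) {   /* least-significant 5 bits go last */
        out[i] = alphabet[ms & 31];
        ms >>= 5;
    }
    out[10] = '\0';
}
```

Because the most significant bits come first, two prefixes compare chronologically under a plain string sort, which is exactly the property the article describes.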
Monotonic mode
If you generate multiple ULIDs within the same millisecond, the random component increments to guarantee strict ordering:
my $a = ulid_monotonic();
my $b = ulid_monotonic();
my $c = ulid_monotonic();
# $a lt $b lt $c -- guaranteed, even within the same millisecond
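The incrementing behaviour can be sketched as follows. This is a hypothetical illustration of the monotonic scheme, not Sekhmet's actual code: within the same millisecond the 80-bit random component is incremented as a big-endian integer instead of being redrawn, so successive IDs stay strictly ordered.

```c
#include <stdint.h>
#include <string.h>

/* A ULID's two fields: 48-bit ms timestamp + 80 bits of randomness. */
typedef struct {
    uint64_t ms;
    uint8_t  rand[10];
} ulid_t;

/* Hypothetical monotonic step. On a new millisecond the random part
 * would be refilled from a CSPRNG (zeroed here for the sketch). */
static void ulid_next(ulid_t *prev, ulid_t *out, uint64_t now_ms) {
    if (now_ms == prev->ms) {
        *out = *prev;
        for (int i = 9; i >= 0; i--)     /* big-endian +1 with carry */
            if (++out->rand[i] != 0)
                break;
    } else {
        out->ms = now_ms;
        memset(out->rand, 0, sizeof out->rand);  /* CSPRNG omitted */
    }
    *prev = *out;
}
```

The trade-off is that the random component leaks ordering within a millisecond, which is precisely what monotonic mode promises.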
Time extraction
The timestamp is baked into the ULID. Extract it without a database lookup:
my $ulid = ulid();
my $epoch = ulid_time($ulid); # 1774777155.226
my $ms = ulid_time_ms($ulid); # 1774777155226
UUID interoperability
ULIDs and UUID v7 share the same structure -- 48-bit timestamp, random fill. Convert between them losslessly:
my $ulid = ulid();
my $uuid = ulid_to_uuid($ulid); # standard UUID v7 string
# Useful when your API expects UUIDs but you generate ULIDs internally
When to use Sekhmet vs Horus
Use Sekhmet (ulid()) when you want compact, sortable, human friendly identifiers for log entries, event streams, anything displayed in a UI. Use Horus (uuid_v7()) when you need standard UUID format for compatibility with systems that expect 36-character hyphenated strings. Use Horus
(uuid_v4()) when you need pure randomness with no timestamp leakage.
Apophis: Content-Addressable Storage
Apophis answers the question: "Have I seen this content before?" It hashes content with UUID v5 to produce a deterministic identifier, then stores the content in a sharded directory tree. Same content always maps to the same path. Different content never collides.
use Apophis;
my $store = Apophis->new(
namespace => 'my-app',
store_dir => '/var/data/cas',
);
my $id = $store->store(\"Hello, world!");
# 3e856e0f-c7ac-569e-827b-40df723c326f
my $id2 = $store->store(\"Hello, world!");
# 3e856e0f-c7ac-569e-827b-40df723c326f -- same content, same ID
How storage works
Content is stored in a two-level hex-sharded directory tree derived from the UUID. The first four hex characters become two directory levels:
/var/data/cas/
3e/85/3e856e0f-c7ac-569e-827b-40df723c326f
This gives 65,536 possible directories -- enough to keep any single directory from growing too large, even with millions of files.
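The shard-path computation is simple enough to sketch in C (a hypothetical illustration; Apophis's actual layout may differ in details). Two hex characters per level gives 256 x 256 = 65,536 leaf directories:

```c
#include <stdio.h>

/* Build the sharded path for a UUID: the first four hex characters
 * become two directory levels of two characters each. */
static void shard_path(const char *root, const char *uuid,
                       char *out, size_t n) {
    snprintf(out, n, "%s/%.2s/%.2s/%s", root, uuid, uuid + 2, uuid);
}
```

Same content, same UUID, same path - so a lookup never needs an index, just string slicing.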
Writes are atomic: content goes to a temporary file first, then is renamed into place. A crash mid-write leaves no partial files.
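The atomic-write pattern itself is worth spelling out. Here is a minimal POSIX sketch (hypothetical, not Apophis's code): write to a temp file in the same directory, then rename(2) it into place. rename is atomic on POSIX filesystems, so a crash mid-write leaves only a stray temp file, never a partial final file.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Write data to path atomically via a temp file + rename. */
static int atomic_write(const char *path, const char *data, size_t len) {
    char tmp[4096];
    snprintf(tmp, sizeof tmp, "%s.tmpXXXXXX", path);
    int fd = mkstemp(tmp);               /* unique temp file, same dir */
    if (fd < 0) return -1;
    if (write(fd, data, len) != (ssize_t)len) {
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);
    if (rename(tmp, path) != 0) {        /* atomic replace */
        unlink(tmp);
        return -1;
    }
    return 0;
}
```

Keeping the temp file in the same directory matters: rename is only atomic within a filesystem.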
Identification without storage
Sometimes you just want the identifier:
my $id = $store->identify(\"some content"); # UUID, no write
my $id = $store->identify_file("/path/to/big.iso"); # streams in 64KB chunks
File identification is streaming -- a 10GB file uses the same memory as a 10KB file.
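The streaming shape looks like this in C. FNV-1a stands in here for the real digest (Apophis derives UUID v5 identifiers internally); the point is the constant-memory loop over 64 KB chunks, which this sketch shares with the description above:

```c
#include <stdio.h>
#include <stdint.h>

/* Fold a buffer into a running 64-bit FNV-1a hash. */
static uint64_t fnv1a_update(uint64_t h, const unsigned char *p, size_t n) {
    while (n--) {
        h ^= *p++;
        h *= 1099511628211ULL;           /* FNV-1a 64-bit prime */
    }
    return h;
}

/* Hash a file in 64 KB chunks: memory use is constant regardless
 * of file size. Returns 0 if the file cannot be opened. */
static uint64_t hash_file(const char *path) {
    unsigned char buf[65536];
    uint64_t h = 14695981039346656037ULL;  /* FNV offset basis */
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        h = fnv1a_update(h, buf, n);
    fclose(f);
    return h;
}
```

Because the digest state is updated incrementally, hashing a 10GB ISO and a 10KB text file use the same fixed buffer.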
Metadata
Attach arbitrary metadata as a sidecar:
my $id = $store->store(\"image data", meta => {
mime_type => 'image/png',
original_name => 'photo.png',
uploaded_by => 'user-42',
});
my $meta = $store->meta($id);
# { mime_type => 'image/png', original_name => 'photo.png', ... }
Namespace isolation
The namespace parameter creates a separate UUID v5 namespace. The same content under different namespaces produces different identifiers:
my $a = Apophis->new(namespace => 'uploads');
my $b = Apophis->new(namespace => 'cache');
$a->identify(\"data") ne $b->identify(\"data"); # different IDs
This lets you run multiple independent stores without collision.
Verification
Content-addressable storage has a built-in integrity check: re-hash the content and compare to the filename.
if ($store->verify($id)) {
# content matches its identifier -- no corruption
}
How They Fit Together
Horus (Foundation)
|-- UUID v1-v8, NIL, MAX
|-- C headers reused by downstream XS modules
|
|--- Apophis (Content-addressable storage)
| Uses UUID v5 for deterministic content identification
|
|--- Sekhmet (ULID generation)
Uses Horus C primitives for Crockford base32, CSPRNG, timestamps
Horus is the foundation. Its C headers are standalone -- no Perl types, no interpreter context. Apophis and Sekhmet include them at compile time via Horus->include_dir().
A practical example using all three:
use Horus qw(:all);
use Apophis;
use Sekhmet qw(:all);
# Event tracking system
my $event_id = ulid_monotonic(); # sortable event identifier
my $session = uuid_v4(); # random session token
my $store = Apophis->new(namespace => 'events', store_dir => '/var/events');
# Store event payload, get content-addressable ID
my $payload = encode_json({ action => 'click', target => 'button-1' });
my $content_id = $store->store(\$payload, meta => {
event_id => $event_id,
session_id => $session,
timestamp => ulid_time($event_id),
});
# Later: retrieve by content hash
my $data = $store->fetch($content_id);
# Or find when the event happened from the ULID
my $when = ulid_time($event_id);
Each module handles one concern well. Horus generates identifiers. Sekhmet adds time sortable compact identifiers. Apophis maps content to identifiers and manages storage. No module tries to do what another already does.
Performance
All three modules use custom ops on Perl 5.14+ to eliminate subroutine dispatch overhead. The hot paths are pure C with no Perl API calls.
Getting Started
cpanm Horus
cpanm Sekhmet
cpanm Apophis
All three are on CPAN under the
Artistic License 2.0.
Most code formatters want to own your style. They have opinions about brace placement, line length, trailing commas, and a hundred other things you never asked for. Sometimes you just want the indentation fixed. Tabs consistent, nesting correct, content untouched.
That was the goal for Eshu, and it is exactly what it does. It reads source code line by line, tracks nesting depth through a state machine, rewrites the leading whitespace, and leaves everything else alone. The entire engine is written in C, exposed to Perl through XS, and ships with a CLI that can check, diff, or fix files in place. The distribution also includes a Vim plugin, as that is my editor of choice.
Eight languages, one tool
C | Perl | XS | XML | HTML | CSS | JavaScript | POD
Each language gets its own scanner with its own state machine. They share a common architecture (track depth, emit indentation, advance) but each state machine handles the constructs that make its particular language awkward to indent.
Perl has heredocs, quoted constructs (qw(), qq{}, s///), and embedded
POD. JavaScript has template literals with nested ${} interpolation. XS
files switch between C and Perl conventions at the MODULE = boundary. HTML
has void elements and verbatim zones inside <pre> and <script>. Each of these needs specific handling, and getting any of them wrong means corrupting content visually.
Why C?
Indentation fixing is embarrassingly linear. You read a line, update state, emit the line with new leading whitespace, repeat. There is no tree to build, no AST to walk, no multipass resolution. A single-pass scanner in C processes a large codebase in milliseconds.
The engine is implemented entirely in standalone C header files -- ten of them, roughly 3,600 lines total. They have no Perl dependencies. No SV*, no croak(), no interpreter context. Just stdlib.h, string.h, and ctype.h. This means they can be reused from any C program or language that can bind them, not just my Perl XS modules.
include/
eshu.h Core types, config, buffer, language enum
eshu_c.h C scanner
eshu_pl.h Perl scanner (heredoc, regex, qw, POD)
eshu_xs.h XS dual-mode scanner
eshu_xml.h XML/HTML scanner
eshu_css.h CSS scanner
eshu_js.h JavaScript scanner (template literals)
eshu_pod.h POD scanner
eshu_file.h File I/O, directory walking, binary detection
eshu_diff.h Unified diff generation
How the scanner works
Every language scanner follows the same pattern. For each line of input:
pre-adjust depth --> emit indent --> copy content --> post-adjust depth
A closing brace on a line means you dedent before emitting that line. An opening brace means you indent the next line. The scanner maintains a state enum to know whether it is inside a string, a comment, a heredoc, a regex, or regular code. State transitions happen character by character within the scan function; depth changes happen at line boundaries.
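The per-line pass above can be sketched in a few lines of C. This handles only braces; the real scanners also track strings, comments, heredocs and so on, and this is an illustrative sketch rather than Eshu's actual code:

```c
#include <string.h>

/* One line of the pass: pre-adjust depth, emit indent, copy the
 * content untouched, then post-adjust depth for the next line. */
static void reindent_line(const char *line, int *depth,
                          int width, char *out) {
    const char *p = line;
    while (*p == ' ' || *p == '\t')
        p++;                             /* strip the old indent */

    int d = *depth;
    if (*p == '}')
        d--;            /* pre-adjust: a closer dedents its own line */
    if (d < 0) d = 0;

    memset(out, ' ', (size_t)(d * width));   /* emit new indent */
    strcpy(out + d * width, p);              /* content untouched */

    for (const char *q = p; *q; q++) {   /* post-adjust for next line */
        if (*q == '{') (*depth)++;
        else if (*q == '}') (*depth)--;
    }
    if (*depth < 0) *depth = 0;
}
```

Feeding `int f(void) {` / `return 1;` / `}` through with a width of 4 indents only the middle line, which is the whole job: depth changes at line boundaries, content untouched.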
The Perl scanner, for example, tracks 14 distinct states: regular code, double-quoted strings, single-quoted strings, regex, heredoc (both standard and indented), qw, qq, q, POD, line comments, and block comments. It detects heredoc terminators, remembers whether the variant is indented (<<~EOF), buffers the body verbatim, and resumes normal scanning after the terminator.
The hard parts
Perl: Is / division or regex?
The classic Perl parsing problem. Eshu tracks whether the previous meaningful token was a value (a variable, a closing bracket, a number) or an operator. If it was a value, / is division. Otherwise, it opens a regex. This is the same heuristic that syntax highlighters use, and it covers real-world code well.
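The heuristic reduces to a single predicate on the last meaningful character before the slash. A hypothetical sketch (not Eshu's actual code):

```c
#include <ctype.h>

/* If the character before `/` ends a value (an identifier character,
 * a digit, or a closing bracket), the slash is division; otherwise
 * it opens a regex. Same heuristic syntax highlighters use. */
static int slash_is_division(char prev) {
    return isalnum((unsigned char)prev)
        || prev == ')' || prev == ']' || prev == '_';
}
```

So `$x / 2` (previous char `x`, a value) parses as division, while `split(/,/ ...)` (previous char `(`, an operator context) opens a regex.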
XS: Two languages in one file
An XS file is C code at the top and a Perl/C hybrid below MODULE =. Eshu detects the boundary and switches scanners. Below the boundary, it tracks XSUB blocks (each new function declaration resets depth), labels like CODE:, OUTPUT:, and INIT:, and special cases like BOOT: sections that use shallower indentation.
JavaScript: Template literals with nested interpolation
A backtick string in JavaScript can contain ${expr}, and that expression can contain braces, function calls, even another template literal. Eshu maintains a depth counter for interpolation braces so it knows when the } closes the interpolation versus when it closes a block inside the interpolation.
HTML: Script blocks need JavaScript rules
Content inside <script> tags is JavaScript, not HTML. Eshu collects the entire script block, passes it through the JavaScript scanner for reindentation, then splices it back at the correct HTML depth. The same applies to recognising void elements (<br>, <img>, etc.) that should not increase nesting depth.
The CLI
Eshu ships with a command-line tool that supports the three modes you actually need:
# Preview what would change
eshu --diff lib/
# Check in CI (exit 1 if anything needs fixing)
eshu --check lib/ t/
# Fix in place
eshu --fix lib/
By default, nothing is modified. You have to explicitly ask for --fix. Language is detected from file extensions, but can be overridden with --lang. You can filter files with --exclude and --include (regex patterns), choose tabs or spaces, set the indent width, and even restrict processing to a line range within a file.
Directory processing is recursive by default, skips binary files (detected by sampling the first 8KB for NUL bytes), respects a 1MB size limit, and follows file symlinks but not directory symlinks.
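The NUL-sampling check described above is a classic trick, and a minimal sketch of it (hypothetical, not Eshu's implementation) fits in a dozen lines:

```c
#include <stdio.h>

/* Read up to 8 KB from the start of the file; any NUL byte marks
 * the file as binary. Unreadable files are treated as text here. */
static int looks_binary(const char *path) {
    unsigned char buf[8192];
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    size_t n = fread(buf, 1, sizeof buf, f);
    fclose(f);
    for (size_t i = 0; i < n; i++)
        if (buf[i] == '\0')
            return 1;
    return 0;
}
```

Sampling only the first 8 KB keeps the check cheap even when walking large trees, at the cost of missing NULs that first appear later in the file.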
The Perl API
Everything the CLI does is available programmatically:
use Eshu;
# Fix a string
my $fixed = Eshu->indent_pl($source, spaces => 4);
# Auto-detect language and process
my $result = Eshu->indent_file('lib/App.pm',
fix => 1,
diff => 1,
);
say $result->{diff} if $result->{status} eq 'changed';
# Process an entire directory
my $report = Eshu->indent_dir('lib/',
fix => 1,
recursive => 1,
exclude => [qr/\.bak$/],
);
say "$report->{files_changed} files fixed";
Each language also has a direct method: indent_c, indent_xs, indent_xml, indent_html, indent_css, indent_js, indent_pod.
Idempotent by design
Running Eshu twice produces the same result as running it once. This is not just a goal, it is tested. The test suite includes real-world Perl examples, verifies that processing them does not crash, and asserts that a second pass produces identical output.
This matters for CI integration. If eshu --check passes, you know that running eshu --fix would be a no-op. There is no oscillation, no cascading reformats, no "fix the fix" loops.
What it does not do
Eshu does not reformat code. It does not move braces, break long lines, sort imports, add or remove semicolons, or have opinions about blank lines. It touches leading whitespace and nothing else. Diffs are clean: every changed line shows only the whitespace prefix changing.
This is a deliberate constraint. A tool that only fixes indentation is a tool you can run on any codebase without fear. It will not start a style war. It will not produce a 10,000 line diff that buries your actual changes. It will make the nesting visible and get out of the way.
Vim integration
Eshu ships with a Vim plugin in the distribution. It pipes the current buffer through eshu and replaces the content in place, with automatic language detection and cursor position preservation.
Installation
The easiest way with Vim 8+ native packages:
mkdir -p ~/.vim/pack/eshu/start
ln -s /path/to/Eshu/vim ~/.vim/pack/eshu/start/eshu
For Neovim:
mkdir -p ~/.local/share/nvim/site/pack/eshu/start
ln -s /path/to/Eshu/vim ~/.local/share/nvim/site/pack/eshu/start/eshu
Or if you prefer vim-plug:
Plug '/path/to/Eshu/vim'
Or just source it directly in your .vimrc:
source /path/to/Eshu/vim/plugin/eshu.vim
Usage
Once loaded, you get two commands and a default keybinding:
| Command | Mode | What it does |
|---|---|---|
| `:EshuFix` | Normal | Fix indentation for the entire file |
| `:EshuFixRange` | Visual | Fix indentation for the selected lines |
| `\ef` | Normal | Fix entire file (default `<Leader>ef` mapping) |
| `\ef` | Visual | Fix selected lines (default `<Leader>ef` mapping) |
The plugin detects the language from the file extension -- .pm and .pl map to Perl, .xs to XS, .html to HTML, and so on. If the eshu binary is not in your $PATH, point the plugin at it:
let g:eshu_cmd = '/path/to/Eshu/bin/eshu'
To disable the default mappings and use your own:
let g:eshu_no_mappings = 1
nnoremap <silent> <F6> :EshuFix<CR>
vnoremap <silent> <F6> :EshuFixRange<CR>
Eshu is available on CPAN under the Artistic License 2.0.
If you’ve ever tried to run Perl code in a Java environment, you know the drill. Rewrite everything in Java (expensive, risky), or maintain two separate runtimes with all the deployment headaches.
PerlOnJava offers a third path: compile your Perl to JVM bytecode and run it anywhere Java runs. I’ve been working on Perl-to-JVM compilation, on and off, for longer than I’d like to admit. The latest push has been getting the ecosystem tooling right — and jcpan is the result.
Why This Matters
Some scenarios where this pays off:
Legacy integration. You have 50,000 lines of battle-tested Perl that process reports, transform data, or implement business logic. Rewriting it is a multi-year project with uncertain ROI. With PerlOnJava, you can deploy it as a JAR alongside your Java services.
JDBC database access. Perl’s DBI works with PerlOnJava’s JDBC backend. Connect to PostgreSQL, MySQL, Oracle, or any database with a JDBC driver — no DBD compilation required, no driver version mismatches.
use DBI;
my $dbh = DBI->connect("jdbc:postgresql://localhost/mydb", $user, $pass);
my $sth = $dbh->prepare("SELECT * FROM users WHERE active = ?");
$sth->execute(1);
Container deployments. One Docker image with OpenJDK and your Perl code. You don’t need a Perl installation, cpanm in your Dockerfile, or XS modules that compiled fine on your laptop.
Embedding in Java applications. PerlOnJava implements JSR-223, the standard Java scripting API. Your Java application can eval Perl code, pass data back and forth, and let users write Perl plugins.
The 30-Second Version
git clone https://github.com/fglock/PerlOnJava.git
cd PerlOnJava && make
./jcpan Moo
./jperl -MMoo -e 'print "Moo version: $Moo::VERSION\n"'
That’s it. Moo is installed. No cpanm, no local::lib dance.
What Actually Ships in the JAR
PerlOnJava distributes as a single 23MB JAR file. Inside, you get:
- 568 Perl modules — DBI, JSON, YAML, HTTP::Tiny, Test::More, and the rest of the usual suspects
- Java implementations of key XS modules — DateTime, Digest::MD5, Compress::Zlib
- The compiler and runtime — parse Perl, emit JVM bytecode, execute
When you run ./jperl script.pl, there’s no second download, no dependency resolution. The standard library is there.
# These all work out of the box
use JSON;
use HTTP::Tiny;
use Digest::SHA qw(sha256_hex);
use Archive::Tar;
use DBI;
my $response = HTTP::Tiny->new->get('https://api.example.com/data');
my $data = decode_json($response->{content});
print sha256_hex($data->{token}), "\n";
Installing Additional Modules
The bundled modules cover common use cases, but CPAN has over 200,000 distributions. For everything else, there’s jcpan:
./jcpan Moo # Install a module
./jcpan -f Some::Module # Force install (skip failing tests)
./jcpan -t DateTime # Run a module's test suite
./jcpan # Interactive CPAN shell
Modules install to ~/.perlonjava/lib/, which is automatically in @INC.
How Installation Works Without Make
Traditional CPAN installation runs perl Makefile.PL, then make, then make install. This requires a C compiler and the Perl development headers — things that don’t exist on the JVM.
PerlOnJava ships a custom ExtUtils::MakeMaker that skips the make step entirely. When you run jperl Makefile.PL, it:
- Parses the distribution metadata
- Copies .pm files directly to the install location
- Reports any XS files it can't compile (more on that below)
For pure-Perl modules — which is most of CPAN — this just works.
What About XS Modules?
XS modules contain C code that gets compiled to native machine code. Since PerlOnJava compiles to JVM bytecode, not native code, these need special handling.
For popular XS modules, PerlOnJava includes Java implementations of the XS functions:
- DateTime — java.time APIs
- JSON — fastjson2 library
- Digest::MD5/SHA — Java MessageDigest
- DBI — JDBC backend
- Compress::Zlib — java.util.zip
When you use DateTime, PerlOnJava’s XSLoader detects the Java implementation and loads it automatically. You get the module’s full API, backed by Java libraries.
For XS modules without Java implementations, many have pure-Perl fallbacks that activate automatically. For the rest, installation succeeds (the .pm files install), but runtime fails with a clear error message.
A Real Example: DateTime
DateTime is a good stress test. It has a deep dependency tree — Specio, Params::ValidationCompiler, namespace::autoclean, and so on. It also has XS code for performance-critical date math and a comprehensive test suite.
Here’s what happens when you install and test it:
$ ./jcpan -t DateTime
...
t/00-report-prereqs.t .... ok
t/00load.t ............... ok
t/01sanity.t ............. ok
...
t/19leap-second.t ........ ok
t/20infinite.t ........... ok
...
All tests successful.
Files=51, Tests=3589, 78 wallclock secs
Result: PASS
3,589 tests, all passing. The Java XS implementation handles Rata Die conversions (the internal date representation), leap years, leap seconds, and timezone arithmetic. Under the hood, it’s using java.time.JulianFields — the same code that powers Java’s date/time library.
use DateTime;
my $dt = DateTime->new(
year => 2026,
month => 3,
day => 28,
hour => 14,
minute => 30,
time_zone => 'America/New_York'
);
print $dt->strftime('%Y-%m-%d %H:%M %Z'), "\n";
# Output: 2026-03-28 14:30 EDT
$dt->add(months => 1);
print $dt->ymd, "\n";
# Output: 2026-04-28
The Other Tools
jcpan isn’t the only addition. PerlOnJava now includes:
jperldoc — Read module documentation:
./jperldoc DateTime
./jperldoc Moo::Role
jprove — Run test suites:
./jprove t/*.t
./jprove -v t/specific_test.t
These are the standard Perl tools, running on the JVM.
Performance
Startup is slow. The JVM needs to load classes and initialize. A “hello world” takes about 250ms versus Perl’s 15ms. That’s annoying for command-line scripts, irrelevant for services.
Short-lived programs don’t benefit from JIT compilation. If your script runs for less than a few seconds, the JVM’s just-in-time compiler never kicks in. Test suites, where each .t file is a separate process, run slower than native Perl.
Long-running programs can be significantly faster. After warmup (~10,000 iterations through hot code paths), the JIT compiler optimizes aggressively. Here’s a real benchmark — closure calls in a tight loop:
$ time perl dev/bench/benchmark_closure.pl
timethis 5000: 7 wallclock secs ( 7.49 usr ) @ 667/s
$ time ./jperl dev/bench/benchmark_closure.pl
timethis 5000: 4 wallclock secs ( 3.54 usr ) @ 1411/s
PerlOnJava runs this benchmark 2.1x faster than native Perl. The JVM’s C2 compiler inlines calls and unrolls loops.
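The benchmark script itself is not reproduced in the post; a minimal closure benchmark in the same spirit (a hypothetical stand-in — the real dev/bench/benchmark_closure.pl may differ) could look like this, using the core Benchmark module:

```perl
use strict;
use warnings;
use Benchmark qw(timethis);

# Build a chain of closures; every call goes through a captured lexical,
# which is exactly the kind of hot path a JIT can inline after warmup.
my @adders = map { my $n = $_; sub { $_[0] + $n } } 1 .. 100;

timethis(5000, sub {
    my $sum = 0;
    $sum = $_->($sum) for @adders;
    $sum;
});
```

Running the same file under perl and under jperl gives comparable timethis output for the two runtimes.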
Bottom line: use it for long-running services and batch jobs. Not for command-line tools that need to start instantly.
What Doesn’t Work
Some things just can’t work on the JVM:
- fork() — The JVM doesn't do Unix-style forking. Period. For tests that need fork, use native perl.
- Weak references — Scalar::Util::weaken is a no-op. This breaks some cleanup patterns.
- DESTROY — Object destructors never run. The JVM has garbage collection, not deterministic destruction. If your code depends on DEMOLISH or cleanup in destructors, it won't work.
- Some XS modules — No Java implementation and no pure-Perl fallback means it won't work.
- Spurious warnings — There's currently a bug where test description strings trigger Argument "..." isn't numeric warnings. Annoying, being fixed.
Cross-Platform, Single Artifact
PerlOnJava runs on Linux, macOS, and Windows. Same JAR everywhere — the JVM’s “write once, run anywhere” actually delivers here. Your deployment is Java 22+, the JAR, and the wrapper scripts.
The project includes a Dockerfile for containerized deployments and a Debian package recipe (make deb) for system installation.
Getting Started
# Clone and build
git clone https://github.com/fglock/PerlOnJava.git
cd PerlOnJava
make
# Run some Perl
./jperl -E 'say "Hello from the JVM"'
# Install a module
./jcpan Moo
# Use it
./jperl -MMoo -E '
package Point {
use Moo;
has x => (is => "ro");
has y => (is => "ro");
}
my $p = Point->new(x => 3, y => 4);
say "Point: (", $p->x, ", ", $p->y, ")"
'
The project is at github.com/fglock/PerlOnJava, licensed under the Artistic License 2.0. Issues and contributions welcome.
PerlOnJava implements Perl 5.42 semantics and is validated against the Perl test suite. It’s been in development since 2024, building on nearly 30 years of prior work on Perl-JVM integration.
I am trying to understand the behavior of the following script under Perl 5.28.2:
sub split_and_print {
my $label = $_[0];
my $x = $_[1];
my @parts = split('\.', $x);
print sprintf("%s -> %s %s %.20f\n", $label, $parts[0], $parts[1], $x);
}
my @raw_values = ('253.38888888888889', '373.49999999999994');
for my $raw_value (@raw_values) {
split_and_print("'$raw_value'", $raw_value);
split_and_print("1.0 * '$raw_value'", 1.0 * $raw_value);
}
for me, this prints
'253.38888888888889' -> 253 38888888888889 253.38888888888888573092
1.0 * '253.38888888888889' -> 253 388888888889 253.38888888888888573092
'373.49999999999994' -> 373 49999999999994 373.49999999999994315658
1.0 * '373.49999999999994' -> 373 5 373.49999999999994315658
All of that is as expected, except for the last line: I don't understand why, during the automatic conversion of $x from a number to a string in the call to split, it is converted into 373.5. print(373.49999999999994 - 373.5) says -5.6843418860808e-14, so Perl knows that those numbers are not equal (i.e. it is not about limited floating-point precision in Perl).
perlnumber says
As mentioned earlier, Perl can store a number in any one of three formats, but most operators typically understand only one of those formats. When a numeric value is passed as an argument to such an operator, it will be converted to the format understood by the operator.
[...]
If the source number is outside of the limits representable in the target form, a representation of the closest limit is used. (Loss of information)
If the source number is between two numbers representable in the target form, a representation of one of these numbers is used. (Loss of information)
But '373.5' doesn't seem to be the "closest limit" of representing 373.49999999999994 as a string -- that would be '373.49999999999994', or some other decimal representation that, when converted back to a number yields the original value.
Also: what is different about 253.38888888888889?
I am looking for a definite reference that explains how exactly the automatic conversion of numbers to strings works in Perl.
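For what it is worth, the output shown above is consistent with stringification at about 15 significant digits (DBL_DIG on a stock double-precision perl), which can be probed with sprintf — offered here as an experiment, not as the definitive reference the question asks for:

```perl
use strict;
use warnings;

# On a stock double-precision perl, number-to-string conversion behaves
# roughly like %g with 15 significant digits (DBL_DIG):
printf "%.15g\n", 373.49999999999994;   # 373.5
printf "%.15g\n", 253.38888888888889;   # 253.388888888889

# %.17g shows the full information content of the underlying double:
printf "%.17g\n", 373.49999999999994;
```

At 15 significant digits, 373.49999999999994... rounds up to 373.5, while 253.38888888888888... rounds to 253.388888888889 — matching both of the puzzling lines.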
- Clone - recursively copy Perl datatypes
    - Version: 0.50 on 2026-03-28, with 33 votes
    - Previous CPAN version: 0.49 was released 3 days before
    - Author: ATOOMIC
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    - Version: 20260327.002 on 2026-03-27, with 25 votes
    - Previous CPAN version: 20260318.001 was released 9 days before
    - Author: BRIANDFOY
- DBD::Oracle - Oracle database driver for the DBI module
    - Version: 1.95 on 2026-03-24, with 33 votes
    - Previous CPAN version: 1.91_5 was released 8 days before
    - Author: ZARQUON
- IPC::Run - system() and background procs w/ piping, redirs, ptys (Unix, Win32)
    - Version: 20260322.0 on 2026-03-22, with 39 votes
    - Previous CPAN version: 20250809.0 was released 7 months, 12 days before
    - Author: TODDR
- Mojo::Pg - Mojolicious ♥ PostgreSQL
    - Version: 4.29 on 2026-03-23, with 98 votes
    - Previous CPAN version: 4.28 was released 5 months, 23 days before
    - Author: SRI
- Object::Pad - a simple syntax for lexical field-based objects
    - Version: 0.825 on 2026-03-25, with 48 votes
    - Previous CPAN version: 0.824 was released 1 day before
    - Author: PEVANS
- PDL::Stats - a collection of statistics modules in Perl Data Language, with a quick-start guide for non-PDL people.
    - Version: 0.856 on 2026-03-22, with 15 votes
    - Previous CPAN version: 0.855 was released 1 year, 16 days before
    - Author: ETJ
- SPVM - The SPVM Language
    - Version: 0.990152 on 2026-03-26, with 36 votes
    - Previous CPAN version: 0.990151 was released the same day
    - Author: KIMOTO
- Term::Choose - Choose items from a list interactively.
    - Version: 1.781 on 2026-03-25, with 15 votes
    - Previous CPAN version: 1.780 was released 1 month, 20 days before
    - Author: KUERBIS
- YAML::Syck - Fast, lightweight YAML loader and dumper
    - Version: 1.42 on 2026-03-27, with 18 votes
    - Previous CPAN version: 1.41 was released 4 days before
    - Author: TODDR
This is the weekly favourites list of CPAN distributions. Votes count: 43
Week's winner: Mail::Make (+2)
Build date: 2026/03/28 20:47:31 GMT
Clicked for first time:
- DB::Handy - Pure-Perl flat-file relational database with DBI-like interface
- GD::Thumbnail - Thumbnail maker for GD
- HTTP::Handy - A tiny HTTP/1.0 server for Perl 5.5.3+
- Lingua::IND::Nums2Words - This module is deprecated. Please use Lingua::IND::Num2Word instead.
- Lingua::ITA::Word2Num - Word to number conversion in Italian
- Lingua::KOR::Word2Num - Word to number conversion in Korean
- LTSV::LINQ - LINQ-style query interface for LTSV files
- MIDI::RtController::Filter::Tonal - Tonal RtController filters
- Modern::Perl::Prelude - Project prelude for modern Perl style on Perl 5.26+
- Net::Async::SOCKS - basic SOCKS5 connection support for IO::Async
- Restish::Client - A RESTish client...in perl!
Increasing its reputation:
- App::Greple (+1=5)
- Authen::SASL (+1=11)
- CGI (+1=48)
- CHI (+1=64)
- Data::Printer (+1=154)
- DateTime::Format::ISO8601 (+1=11)
- DBD::Pg (+1=104)
- DBI (+1=283)
- FFI::Platypus (+1=70)
- Future (+1=63)
- Future::AsyncAwait (+1=52)
- Imager::QRCode (+1=3)
- IO::Async (+1=81)
- IO::Async::SSL (+1=5)
- IO::Compress (+1=20)
- IO::K8s (+1=5)
- Lingua::JA::Moji (+1=3)
- List::UtilsBy (+1=41)
- Mail::Make (+2=2)
- mb::JSON (+1=3)
- Mojo::Pg (+1=74)
- Mojo::UserAgent::Cached (+1=4)
- Net::Async::HTTP (+1=8)
- Object::Pad (+1=48)
- PDF::API2 (+1=33)
- Regexp::Assemble (+1=36)
- Regexp::Debugger (+1=60)
- Syntax::Keyword::Match (+1=15)
- Syntax::Keyword::Try (+1=48)
- Test2::Harness (+1=21)
- XML::Parser (+1=11)
Beautiful Perl series
This post is part of the beautiful Perl features series. See the introduction post for general explanations about the series.
Previous posts covered random topics ranging from fundamental concepts like blocks or list context and scalar context to sharp details like reusable subregexes. Today's topic is neither very fundamental nor very sharp: it is just a handy convenience for managing multi-line strings in source code, namely the heredoc feature. This is not essential, because multi-line strings can be expressed by other means; but it addresses a need that is quite common in programming, so it is interesting to compare it with other programming languages.
"Heredocs": multi-line data embedded in source code
A "here document", abbreviated as "heredoc", is a piece of multi-line text embedded in the source code. Perl borrowed the idea from Unix shells, and was later imitated by other languages like PHP or Ruby (discussed below). So instead of concatenating single lines, like this:
my $email_template
= "Dear %s,\n"
. "\n"
. "You just won %d at our lottery!\n"
. "To claim your prize, just click on the link below.\n";
one can directly write the whole chunk of text, with a freely-chosen final delimiter, like this:
my $email_template = <<~_END_OF_MAIL_;
    Dear %s,

    You just won %d at our lottery!
    To claim your prize, just click on the link below.
    _END_OF_MAIL_
The second version is easier to write and to maintain, and also easier to read.
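Either version yields a plain string with %s and %d placeholders, ready for sprintf (the name and amount below are made up, of course):

```perl
use strict;
use warnings;

my $email_template = <<~_END_OF_MAIL_;
    Dear %s,

    You just won %d at our lottery!
    To claim your prize, just click on the link below.
    _END_OF_MAIL_

# Fill in the placeholders:
my $body = sprintf $email_template, 'Alice', 10_000;
print $body;
```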
Perl heredoc syntax
General rules
A piece of "heredoc" data can appear anywhere in a Perl expression. It starts with an initial operator written either << or <<~. The second variant with an added tilde ~, available since Perl v5.26, introduces an indented heredoc, where initial spaces on the left of each line are automatically removed by the interpreter; thanks to this feature the inserted text can be properly indented within the Perl code, as illustrated in the example above. The amount of indent removed from each data line is determined by the indent of the final delimiter.
The heredoc operator must be immediately followed by a delimiter string freely chosen by the programmer. Lines below the current line will be part of the heredoc text, until the second appearance of that delimiter string, which closes the heredoc sequence. The delimiter string can be any string enclosed in double quotes or in single quotes¹; the double quotes can be omitted if the string would be valid as an identifier (if the string only contains letters, digits and underscores). Indeed, the most common practice is to use unquoted strings in capital letters, often surrounded by underscores - but there is no technical obligation to do so.
When explicit or implicit double quotes are used for the delimiter string, the content of the heredoc text is subject to variable interpolation, like for a usual double-quoted string, whereas with single quotes, no interpolation is performed.
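A quick side-by-side illustration of the two quoting behaviours:

```perl
use strict;
use warnings;

my $name = "Alice";

my $interpolated = <<"END_GREETING";   # double quotes (or no quotes): interpolates
Hello $name
END_GREETING

my $literal = <<'END_GREETING';        # single quotes: no interpolation
Hello $name
END_GREETING

print $interpolated;   # Hello Alice
print $literal;        # Hello $name
```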
The heredoc content only starts at the line after the << symbol; before that next line, the current line must properly terminate the current expression, i.e. close any open parentheses and terminate the current statement with a final ;. For example, it is perfectly legal (and quite common) to have heredoc data within a subroutine call:
my $html = htmlize_markdown(<<~_END_OF_MARKDOWN_);
    # Breaking news
    ## Perl remontada
    After many years of Perl bashing, the software industry
    slowly rediscovers the beauty of Perl design. Many indicators ...
    _END_OF_MARKDOWN_
Arbitrary string as delimiter
As stated above, the delimiter string can be any string, even if it includes special characters or spaces, provided that the string is explicitly quoted:
my $ordinary_liturgy = <<~ "Ite, missa est";
Kyrie
Gloria
Credo
Sanctus
Agnus dei
Ite, missa est
say $ordinary_liturgy;
Empty string as delimiter
The delimiter string can also be ... an empty string! In that case the heredoc content ends at the next empty line; this is an elegant way to minimize noise around the data. I used it for example for embedding Template toolkit fragments in the test suite for my Array::PseudoScalar module:
my $tmpl = Template->new();
my $result = "";
$tmpl->process(\<<"", \%data, \$result); # 1st arg: reference to heredoc string
[% obj.replace(";", " / ") ; %]

like($result, qr[^\s+FOO / BAR / BUZ$], "scalar .replace()");
$result = "";
$tmpl->process(\<<"", \%data, \$result);
size is [% obj.size %]
last is [% obj.last %]
[% FOREACH member IN obj %]member is [% member %] [% END; # FOREACH %]

like($result, qr/size is 3/, "array .size");
or for embedding SQL fragments in my Lingua::Thesaurus module:
$dbh->do(<<"");
CREATE $term_table;
$dbh->do(<<"");
CREATE TABLE rel_type (
rel_id CHAR PRIMARY KEY,
description CHAR,
is_external BOOL
);
# foreign key control : can't be used with fulltext, because 'docid'
# is not a regular column that can be referenced
my $ref_docid = $params->{use_fulltext} ? '' : 'REFERENCES term(docid)';
$dbh->do(<<"");
CREATE TABLE relation (
lead_term_id INTEGER NOT NULL $ref_docid,
rel_id CHAR NOT NULL REFERENCES rel_type(rel_id),
rel_order INTEGER DEFAULT 1,
other_term_id INTEGER $ref_docid,
external_info CHAR
);
$dbh->do(<<"");
CREATE INDEX ix_lead_term ON relation(lead_term_id);
...
While it is technically possible to write my $content = <<~ ""; for an indented heredoc ending with an empty string (notice the ~), this requires that the blank line at the end be indented accordingly. In that case the fact that the blank line is composed of initial indenting spaces followed by a newline character is not visible when reading the source code, so this is definitely not something to recommend!
Several heredocs on the same line
Several heredocs can start on the same line, as in this example:
my @blogs = $dbh->selectall_array(<<~END_OF_SQL, {}, split(/\n/, <<~END_OF_BIND_VALUES));
select d_publish, title, content
from blog_entries
where pseudo=? and d_publish between ? and ?
END_OF_SQL
chatterbox
01.01.2023
30.06.2024
END_OF_BIND_VALUES
The first heredoc is the SQL request, and the second heredoc is a piece of text containing the bind values; the split operation on the first line transforms this text into an array.
Other Perl mechanisms for multi-line strings
Heredocs are not the only way to express multi-line strings in Perl. String concatenation in source code, as shown in the initial example of this article, is of course always possible, albeit not very practical nor elegant. Yet another way is to simply insert newline characters inside an ordinary quoted string, like this:
my $str1 = "this is
a multi-line string";
my $str2 = qq{and this
is another multi-line string};
but in that case the indenting spaces at the beginning of each line are always part of the data. So Perl offers more than one way to do it; it is up to the programmer to decide what is most appropriate for the situation.
Perl's acceptance of literal newline characters inside ordinary quoted strings is sometimes very handy, and this behaviour was also adopted by PHP and Ruby; but most other languages, like Java, JavaScript, Python and C++, do not allow it - this is why they needed to introduce other mechanisms, as we will see later.
Other programming languages
As mentioned in the introduction, the idea of heredocs originated in Unix shells and later percolated into general-purpose programming languages. Some of them, inspired by Perl, adopted the shell-style heredoc syntax; other languages preferred the more familiar syntax of quoted strings, but with variants for supporting multi-line strings. This section will highlight the main differences.
Languages with heredoc syntax
PHP
The syntax for heredocs in PHP is very close to Perl's, except that it uses three less-than signs (<<<) instead of two (<<). As in Perl, the heredoc content is normally subject to variable interpolation, except when the delimiter string is enclosed in single quotes; in that case, PHP calls the construct a "nowdoc" instead of a "heredoc".
There are a couple of technical differences with Perl's heredocs, however:
- the ending delimiter, even if enclosed in double or single quotes, must be a proper identifier, not an arbitrary string - so it cannot contain spaces or special characters ... and obviously it cannot be an empty string!
- the PHP expression is interrupted at the line where the heredoc starts, and must be properly terminated after the final heredoc delimiter. Here is an example from the official documentation:
<?php
$values = [<<<END
a
b
c
END, 'd e f'];
var_dump($values);
So the reader must mentally connect the lines before and after the heredoc to understand the structure of the complete expression, which might be difficult if the heredoc content spans over many lines.
Ruby
There is little to say about heredocs in Ruby: they work almost exactly like in Perl, with the same syntax, the same variants regarding interpolation of variables or regarding indented content, and the same possibility to use arbitrary quoted strings (including the empty string) as delimiters. Multiple heredocs starting on the same line are also supported.
There is however a minor difference in the interpretation of indented content: in presence of an indented heredoc, called "squiggly heredoc" in Ruby, the interpreter considers the least indented line to be the basis for indentation; the number of spaces before the delimiter string is irrelevant. So in
text = <<~END
    foo
  bar
    END
the value is "  foo\nbar\n" (two spaces before "foo", no spaces before "bar"). In Perl this would raise an exception, because the indentation level is determined by the number of spaces before the terminating delimiter string, and it is illegal to have content lines with fewer spaces.
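The Perl behaviour can be observed by wrapping the equivalent code in a string eval (a small experiment; the exact error wording may vary between Perl versions):

```perl
use strict;
use warnings;

# The content line "  bar" is indented less than the delimiter
# "    END", which is fatal in Perl's indented heredocs:
my $value = eval <<'CODE';
my $text = <<~END;
    foo
  bar
    END
$text;
CODE

print $@ if $@;   # e.g. "Indentation on line ... of here-doc doesn't match delimiter"
```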
Languages with quoted multi-line strings
Expressing multi-line strings in source code is quite a common need, so programming languages that support neither heredocs nor literal newline characters inside ordinary quoted strings had to offer something. A commonly adopted solution is triple-quoted strings as special syntax for multi-line string literals.
Python
Python was probably the inventor of triple-quoted strings. These solved two problems at once:
- embedded double-quote or single-quote characters need not be escaped
- embedded newline characters are accepted and retained in the string value
as shown in this example:
str = """this is a triple-quoted string
    with embedded "double quotes"
    and embedded 'single quotes'
    and also embedded newlines"""
Triple-quoted literals have no syntactic support for handling indented content, so in the example above, initial spaces at lines 2, 3 and 4 are part of the data. Removing indenting spaces must be explicitly performed at runtime, usually through the textwrap.dedent() function.
As with regular string literals, interpolation of expressions within triple-quoted strings can be enabled through an f prefix, a feature introduced with Python 3.6.
JavaScript
Originally JavaScript supported neither multi-line strings nor variable interpolation within string literals. Both features were introduced together in 2015 through the new construct of template literals: instead of single or double quotes, the string is enclosed in backticks, and it may contain newline characters and interpolated expressions of the form ${...}. The two behaviours always come together; there are no syntactic variants for using them independently.
Java
Java introduced triple-quoted strings as late as 2020, under the name text blocks. The compiler automatically removes "incidental spaces", i.e. indentation spaces at the beginning of lines and trailing spaces at the end of lines. This mechanism has no built-in support for variable interpolation; if needed, programmers have to use other classes like MessageFormat or StringSubstitutor.
Conclusion
Today many programming tasks need to handle some polyglot aspects: besides the main source code, fragments of other languages must be included, like HTML, CSS, XML, SQL, templates, etc. Therefore the need to embed multi-line strings in the main source code is quite frequent.
Surprisingly, some popular languages took a very long time before proposing that kind of feature, and it is not always possible to freely decide some options, like removing indentation or applying variable interpolation. By contrast, Perl always had a rich spectrum of mechanisms, including but not limited to heredoc documents. Various options can be chosen with great flexibility, allowing the programmer to be creative in crafting legible and elegant source code. Yet another example of Perl's beautiful features!
About the cover picture
This score excerpt is taken from Luciano Berio's Sinfonia, written in 1968. In the third movement, Berio quotes extensively from Mahler's Symphony No. 2 - a kind of "musical heredoc" inside the new composition.
1. a third possibility is to enclose the string in backticks, but this will not be covered here. ↩
We took PetaMem's 13-year-old Lingua::* number conversion modules - dormant since 2013 with 17 languages - and brought them back to life. The suite now covers 61 languages across 7 writing systems (Latin, Cyrillic, Arabic, Devanagari, Armenian, Hebrew, CJK), including all 24 EU official languages plus Latin, Hindi, Yiddish, Mongolian, Uyghur, and more.
New in this release: cross-language numeral arithmetic with overloaded operators, ordinal support for 14 languages, capabilities introspection, and a Galois-field-based transitive test that walks the entire number space across all languages - 5000 steps, zero failures.
my $a = Lingua::Word2Num->new("zwanzig"); # German 20
my $b = Lingua::Word2Num->new("šestnáct"); # Czech 16
say +($a + $b)->as('fr'); # trente-six
say +($a + $b)->as('la'); # triginta sex
Everything on CPAN: cpanm Task::Lingua::PetaMem
Where We Started
PetaMem has maintained Lingua::* number conversion modules on CPAN since 2002. The original use case was straightforward: convert numbers to their written form for cheques and financial documents - "1234 - in Worten: eintausendzweihundertvierunddreißig". The reverse direction (Word2Num) came later for NLP applications.
By 2013, the collection covered 17 languages and went dormant. The code used SVN versioning, mixed coding styles, and each language module had been implemented independently - some using Parse::RecDescent grammars, others with regex pipelines, yet others with OO interfaces. Some were PetaMem originals, some were forks from other CPAN authors.
The Modernization
In March 2026, we decided to bring the suite back to life. The goals:
- Unified boilerplate: use 5.16.0; use utf8; use warnings; everywhere
- Standardize on Export::Attrs and consistent API naming
- Move all legacy module names to canonical Lingua::XXX::Num2Word / Lingua::XXX::Word2Num
- Date-based versioning (0.YYMMDDX)
- Parallel build system with Parallel::ForkManager
- Auto-discovery: wrappers find new language modules from the filesystem
- Proper CPAN kwalitee (Changes, META.json, LICENSE, SECURITY.md, tests)
61 Languages
The reference implementation - Lingua::DEU::Word2Num - uses a clean Parse::RecDescent grammar in under 90 lines. We used this as the template for every new language. Each language gets:
- Word2Num: A declarative RecDescent grammar parsing natural language numerals to integers
- Num2Word: A recursive function converting integers to natural language text
The current language roster spans seven writing systems:
Latin script: Afrikaans, Albanian, Basque, Catalan, Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, Galician, Hungarian, Icelandic, Indonesian, Irish, Italian, Latvian, Lithuanian, Luxembourgish, Maltese, Norwegian, Occitan, Polish, Portuguese, Romanian, Sardinian, Slovak, Slovenian, Somali, Spanish, Swahili, Swedish, Turkish, Vietnamese, Welsh, Azerbaijani, Latin
Cyrillic: Belarusian, Bulgarian, Kazakh, Kyrgyz, Macedonian, Mongolian, Russian, Serbian, Ukrainian
Arabic script: Arabic, Persian, Uyghur
Other scripts: Chinese (traditional), Greek, Hebrew, Hindi (Devanagari), Armenian, Japanese (romanized + kanji), Korean (Hangul + romanized), Thai, Yiddish (Hebrew script)
All 24 EU official languages are covered.
The Wrapper Interface
Individual modules can be used directly, but the wrappers provide a unified API accepting both ISO 639-1 and ISO 639-3 codes:
use Lingua::Num2Word qw(cardinal);
say cardinal('de', 42); # zweiundvierzig
say cardinal('ja', 42); # yon ju ni
say cardinal('ar', 42); # اثنان وأربعون
use Lingua::Word2Num qw(cardinal);
say cardinal('fr', 'quarante-deux'); # 42
Cross-Language Numeral Arithmetic
A distinctive feature: Lingua::Word2Num objects support overloaded arithmetic across languages. The constructor auto-detects the source language:
use Lingua::Word2Num;
my $a = Lingua::Word2Num->new("zwanzig"); # German 20
my $b = Lingua::Word2Num->new("šestnáct"); # Czech 16
say $a + $b; # 36
say +($a + $b)->as('de'); # sechsunddreissig
say +($a + $b)->as('fr'); # trente-six
say +($a + $b)->as('ar'); # ستة وثلاثون
$a++;
say $a->as('la'); # viginti unus
Arithmetic returns new numeral objects. ->as($lang) renders into any supported language on demand. The semantics are clean: arithmetic produces numbers; words require an explicit ->as().
Ordinals
Fourteen languages currently support ordinal conversion:
use Lingua::Num2Word qw(ordinal has_capability);
say ordinal('de', 3); # dritte
say ordinal('en', 3); # third
say ordinal('fr', 3); # troisième
say ordinal('tr', 3); # üçüncü
# check before calling
if (has_capability('de', 'ordinal')) { ... }
The capabilities() introspection lets callers discover
what each language module supports (cardinal, ordinal, and future
features) without trial and error.
The Galois Walk: Testing at Scale
Traditional per-language unit tests verify individual conversions. But how do you test cross-language consistency across 61 languages and the full number space without exhaustive enumeration?
We use a multiplicative generator over a prime field: g = 7 mod 999999937 (the largest prime below 10⁹). Starting
from 1, each step multiplies by 7 modulo the prime, producing a
deterministic, non-sequential walk through the entire number space -
from single digits through hundreds of millions. At each step, the
current value is converted to words in a rotating language, parsed
back to a number, and the generator advances.
A single test of 5000 steps touches all 61 languages, all magnitude
ranges, and all language-pair transitions. Values are clamped to each
language's declared range (via capabilities()), so
languages with smaller intervals still get tested within their valid
space.
The walk immediately proved its value: in its first run, it uncovered parser deficiencies in Korean, Chinese, and Bulgarian that had gone undetected through years of conventional testing. All were fixed - the current exhaustive walk runs 5000/5000 with zero failures.
CPAN Distribution Architecture
Each language produces two CPAN distributions (Num2Word + Word2Num), plus wrapper modules and three Task meta-packages:
shell> cpanm Task::Lingua::PetaMem   # install everything
shell> cpanm Task::Lingua::Word2Num  # just word→number
shell> cpanm Task::Lingua::Num2Word  # just number→word
The Lingua modules are cherry-picked from a larger internal PetaMem
library and packaged for CPAN via an internal automated script. This
script auto-discovers language modules from the filesystem, builds
distributions in parallel, generates README files with native language
descriptions, derives changelogs from git history (sanitized - no
internal information leaks), and auto-tags after successful uploads.
A --query option fetches CPAN Testers results and CPANTS
kwalitee scores directly from the command line, without building
anything - giving us a tight feedback loop between development and
the CPAN ecosystem.
Legacy module names (the
pre-2026 Lingua::NLD::Numbers, Lingua::SPA::Numeros,
etc.) are preserved as deprecation wrappers that delegate to the
canonical Num2Word namespace with a carp
warning.
AI as a Development Partner
This modernization was carried out with heavy involvement of AI coding agents - credited in every module's POD as "PetaMem AI Coding Agents". This is not a disclaimer; it is a badge of pride.
The AI agents implemented language modules from linguistic specifications, wrote Parse::RecDescent grammars for languages they had never seen test data for, debugged subtle parser failures found by the Galois walk, and produced code that passes rigorous roundtrip verification across the full number space. The human role was specification, architecture, quality control, and linguistic verification - the kind of work where domain expertise matters. The agents handled the volume - 61 languages, each with two modules, POD, tests, and CPAN packaging.
We stand by the quality of the distributed code. Every module roundtrips correctly through the exhaustive Galois walk. Every distribution scores high on CPANTS kwalitee. The code is readable, documented, and tested. That it was produced with AI assistance does not diminish it - it enabled it. No single developer could have written and verified RecDescent grammars for Armenian, Uyghur, and Sardinian in the same week.
That said, we are deliberately holding back. The codebase moves fast when AI agents are involved - new languages, bug fixes, kwalitee improvements, and feature additions can happen in hours rather than weeks. But uploading 100+ distributions to CPAN daily, while technically possible, would cause unnecessary load on the mirrors and testing infrastructure, and frankly more attention strain than benefit for the community. We batch our releases, run the exhaustive Galois walk before every upload, and will aim for quality over frequency.
What's Next
The infrastructure is in place for continued growth:
- More ordinals: Currently supported for 14 languages, with the remaining languages to be added incrementally
- More languages: Adding a language requires just two .pm files - the wrappers and build system auto-discover them
- Phase 3 rewrite: A handful of legacy "foreign code" modules (IND, POR, ENG::Inflect) still use non-standard APIs. These are candidates for rewrite to the RecDescent pattern
- Decimal and negative numbers: The capabilities system is ready for these features
The full suite is on CPAN under PETAMEM. Source code is maintained by PetaMem s.r.o.
- Richard C. Jelinek, PetaMem s.r.o.
I need to move some chunks of text around in a file. I am partially successful, in the sense that I can move only the first chunk successfully.
The text in the file looks like this:
text regtext1 text regtext2 text regtextA regtextZ end
where text is some random text, and regtext1,2,3 are pieces of text conforming to some regular rules / patterns. All of them can contain pretty much any printable character, and a few more (diacritics, end-of-line, ...).
What I do now is something like this:
s/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/gs
the result being that regtextA is moved inside regtext1:
text regregtextAtext1 text regtext2 text regtextZ end
The issue is that after the replace, the search-and-replace continues at the position after regtextA, before regtextZ - if I understand the algorithm correctly.
How can I modify the search-and-replace expression in such way to do the same thing for regtext2...regtextZ, and all other such occurrences? The text in the end should look like:
text regregtextAtext1 text regregtextZtext2 text end
but it does not happen.
I might have to use the \G anchor, but I have no idea how. For debugging I use regex101.com.
Looking at a previous example, I tried the following code:
$s =~ s{(?:\G(?!\A)|)\K(reg)(text\d+.*?)(regtext[A-Z]+)}{"$1$3$2"}
but it makes also only one replacement - probably because I do not understand exactly how the original code (and \G) works.
I tried the corrected version of the code suggested in the answer, but it takes an "infinity" of time (I forcefully stopped the execution after several minutes), just like in the previous example - even if I limit the execution to only one replacement. The presence of the while is "malefic". In the absence of the while, the one replacement happens "instantly".
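For inputs of moderate size, a brute-force sketch sidesteps \G entirely: re-run the one-shot substitution until it stops matching. Unlike a single /g pass this restarts from the beginning on every iteration, so it is quadratic in the worst case, but it terminates:

```perl
use strict;
use warnings;

my $s = 'text regtext1 text regtext2 text regtextA regtextZ end';

# Re-run the one-shot swap until no uppercase block remains to move;
# each pass folds the next regtext[A-Z]+ into the preceding regtext\d+.
1 while $s =~ s/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/s;

print "$s\n";
```

The whitespace around the moved chunks shifts slightly (the pattern does not account for the separators), but both uppercase blocks end up inside their numbered partners.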
In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.
Here is an excerpt of the code that gives an error:
package remote {
__DATA__
print "$ENV{HOME}\n";
}
as shown below:
Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.
I can't seem to find any mis-matched brackets.
On the contrary, when I re-write the same package without braces, the code works.
package remote;
__DATA__
print "$ENV{HOME}\n";
I'd be grateful, if the experienced folks can highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1 in case that matters.
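The cause: __DATA__ (like __END__) tells the lexer that everything up to end-of-file is data, so the closing brace of `package remote { ... }` is never seen. A small sketch compiling both forms from temporary files shows the difference:

```perl
use strict;
use warnings;
use File::Temp qw(tempfile);

my %form = (
    brace => "package remote {\n__DATA__\nstuff\n}\n",   # closing brace never parsed
    plain => "package remote2;\n1;\n__DATA__\nstuff\n",  # statement form: works
);

for my $name (sort keys %form) {
    my ($fh, $file) = tempfile(SUFFIX => '.pl', UNLINK => 1);
    print {$fh} $form{$name};
    close $fh;
    my $ok = do $file;      # compile and run the file; undef + $@ on parse error
    print defined $ok ? "$name: parsed ok\n" : "$name: $@";
}
```

The brace form dies with the familiar "Missing right curly or square bracket", while the statement form compiles and leaves the embedded code available via the remote2::DATA filehandle.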
PWC 366 Task 2, Valid Times
Here we are at another Weekly Challenge, ticking away the moments that make up a dull day. Fritter and waste the hours in an off-hand way. Kicking around on a piece of ground in your home town, waiting for someone or something to show you the way. Here's a way.
The Requirements Phase
You are given a time in the form ‘HH:MM’. The earliest possible time is ‘00:00’ and the latest possible time is ‘23:59’. In the string time, the digits represented by the ‘?’ symbol are unknown, and must be replaced with a digit from 0 to 9. Write a script to return the count of different ways we can make it a valid time.
- Example 1: Input: $time = "?2:34", Output: 3 (02:34, 12:34, 22:34)
- Example 2: Input: $time = "?4:?0", Output: 12 (combinations of hours 04 and 14 with minutes 00, 10, 20, 30, 40, 50)
- Example 3: Input: $time = "??:??", Output: 1440
- Example 4: Input: $time = "?3:45", Output: 3 (03:45, 13:45, 23:45)
- Example 5: Input: $time = "2?:15", Output: 4 (20:15, 21:15, 22:15, 23:15)
The Design Phase
My first thought is that it would be fun to make some kind of iterator or generator that replaces each ? with its valid possibilities. My second thought is, "Nah, that seems like a lot of work." And since the cardinal virtue of Perl programming is laziness, I moved on to something simpler.
There are 24 valid hours and 60 valid minutes. The total valid combinations is the cross-product of the two. Let's go with that.
The Implementation Phase
sub validTime($time)
{
state @hour = '00' .. '23';
state @minute = '00' .. '59';
my ($h, $m) = map { s/\?/./gr } split(/:/, $time);
return (grep /$h/, @hour) * (grep /$m/, @minute);
}
Notes:

- state variables -- These reference lists only need to be set up once, and only in the scope of this function. That's what state does.
- '00' .. '23' -- The .. sequence operator has the nice feature that if you want leading zeroes, you get them. Looking at you, bash.
- split(/:/, $time) -- Divide the input into its components. As in so many weekly challenge tasks, I'm assuming that the input has already been sanitized and is showing up here in a valid form.
- map { ... } -- Do something to both the hours and the minutes.
- s/\?/./gr -- The something is to replace ? with . so that it becomes a regular expression. The ? is a meta-character, so it needs to be quoted. Adding the r flag yields the modified string; otherwise s///g would yield the number of substitutions, which is not useful in this context.
- my ($h, $m) = ... -- Declaring and initializing two variables from a list of two things.
- (grep /$h/, @hour) -- Select the valid hours that can match the $h pattern. This is in a scalar context (numerical multiplication), so the scalar value will result -- the number of matches. Similarly for minutes.
- We only need the count, so multiply the two. We could have generated the list of valid times by using the hours and minutes returned from grep in list context, but (I believe I already mentioned this), lazy.
The Delivery Phase
And there you have it. You run and you run to catch up with the sun, but it's sinking, and racing around to come up behind you again. The sun is the same in a relative way, but you're older, shorter of breath, and one day closer to death. But at least you solved weekly challenge 366.
I have a calendar week of a given year, like so:
perl -E "use POSIX qw(strftime); say strftime('%Y-%V', localtime)"
How do I generate a unix timestamp for this calendar week? (for example a timestamp for the start of said week).
My use case is that I need to group different timestamps (YYYY-MM-DD) into calendar weeks, but then need unix timestamps of those weeks to proceed further. I use strftime to convert YYYY-MM-DD into calendar weeks, but have difficulties proceeding from there.
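One way forward, sketched with the core Time::Piece module (week_start_epoch is a made-up helper name): rather than parsing the '%Y-%V' string back into a date, derive the Monday of each date's ISO week directly and use its epoch as the group key.

```perl
use strict;
use warnings;
use Time::Piece;
use Time::Seconds;   # exports ONE_DAY

# Epoch of the Monday (ISO week start) of the week containing $ymd.
sub week_start_epoch {
    my ($ymd) = @_;                                       # 'YYYY-MM-DD'
    my $t    = Time::Piece->strptime($ymd, '%Y-%m-%d');   # midnight, UTC
    my $back = ($t->day_of_week + 6) % 7;                 # days since Monday (Sun == 0)
    return ($t - $back * ONE_DAY)->epoch;
}

# Dates in the same ISO week map to the same timestamp:
print week_start_epoch('2026-03-04'), "\n";   # Monday of that week
print week_start_epoch('2026-03-08'), "\n";   # same week, same value
```

Grouping timestamps by week_start_epoch of the underlying dates gives the same buckets as the '%Y-%V' labels, without having to reverse-parse the week strings.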
Make, Bash, and a scripting language of your choice
Creating AWS Resources…let me count the ways
You need to create an S3 bucket, an SQS queue, an IAM policy and a few other AWS resources. But how?…TIMTOWTDI
The Console
- Pros: visual, immediate feedback, no tooling required, great for exploration
- Cons: not repeatable, not version controllable, opaque, clickops doesn’t scale, “I swear I configured it the same way”
The AWS CLI
- Pros: scriptable, composable, already installed, good for one-offs
- Cons: not idempotent by default, no state management, error handling is manual, scripts can grow into monsters
CloudFormation
- Pros: native AWS, state managed by AWS, rollback support, drift detection
- Cons: YAML/JSON verbosity, slow feedback loop, stack update failures are painful, error messages are famously cryptic, proprietary to AWS, subject to change without notice
Terraform
- Pros: multi-cloud, huge community, mature ecosystem, state management, plan before apply
- Cons: state file complexity, backend configuration, provider versioning, HCL is yet another language to learn, overkill for small projects, often requires tricks & contortions
Pulumi
- Pros: real programming languages, familiar abstractions, state management
- Cons: even more complex than Terraform, another runtime to install and maintain
CDK
- Pros: real programming languages, generates CloudFormation, good for large organizations
- Cons: CloudFormation underneath means CloudFormation problems, Node.js dependency
…and the rest of crew…
Ansible, AWS SAM, Serverless Framework - each with their own opinions, dependencies, and learning curves.
Every option beyond the CLI adds a layer of abstraction, a new language or DSL, a state management story, and a new thing to learn and maintain. For large teams managing hundreds of resources across multiple environments that overhead is justified. For a solo developer or small team managing a focused set of resources it can feel like overkill.
Even in large organizations, not every project should be folded into the corporate infrastructure IaC tooling. Moreover, not every project gets the attention from the DevOps team necessary to create or support the application infrastructure.
What if you could get idempotent, repeatable, version-controlled
infrastructure management using tools you already have? No new
language, no state backend, no provider versioning. Just make,
bash, a scripting language you’re comfortable with, and your cloud
provider’s CLI.
And yes…my love affair with make is endless.
We’ll use AWS examples throughout, but the patterns apply equally to
Google Cloud (gcloud) and Microsoft Azure (az). The CLI tools
differ, the patterns don’t.
A word about the AWS CLI --query option
Before you reach for jq, perl, or python to parse CLI output,
it’s worth knowing that most cloud CLIs have built-in query
support. The AWS CLI’s --query flag implements JMESPath - a query
language for JSON that handles the majority of filtering and
extraction tasks without any additional tools:
# get a specific field
aws lambda get-function \
--function-name my-function \
--query 'Configuration.FunctionArn' \
--output text
# filter a list
aws sqs list-queues \
--query 'QueueUrls[?contains(@, `my-queue`)]|[0]' \
--output text
--query is faster, requires no additional dependencies, and keeps
your pipeline simple. Reach for it first. When it falls short -
complex transformations, arithmetic, multi-value extraction - that’s
when a one-liner earns its place:
# perl
aws lambda get-function --function-name my-function | \
perl -MJSON -n0 -e '$l=decode_json($_); print $l->{Configuration}{FunctionArn}'
# python
aws lambda get-function --function-name my-function | \
python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Configuration']['FunctionArn'])"
Both get the job done. Use whichever lives in your shed.
What is Idempotency?
The word comes from mathematics - an operation is idempotent if applying it multiple times produces the same result as applying it once. Sort of like those ID10T errors…no matter how hard or how many times that user clicks on that button they get the same result.
In the context of infrastructure management it means this: running your resource creation script twice should have exactly the same outcome as running it once. The first run creates the resource. The second run detects it already exists and does nothing - no errors, no duplicates, no side effects.
This sounds simple but it’s surprisingly easy to get wrong. A naive
script that just calls aws lambda create-function will fail on the
second run with a ResourceConflictException. A slightly better
script wraps that in error handling. A truly idempotent script never
attempts to create a resource it knows already exists.
And it works in both directions. The idempotent bug - running a failing process repeatedly and getting the same error every time - is what happens when your failure path is idempotent too. Consistently wrong, no matter how many times you try. The patterns we’ll show are designed to ensure that success is idempotent while failure always leaves the door open for the next attempt.
Cloud APIs fall into four distinct behavioral categories when it comes to idempotency, and your tooling needs to handle each one differently:
Case 1 - The API is idempotent and produces output
Some APIs can be called repeatedly without error and return useful
output each time - whether the resource was just created or already
existed. aws events put-rule is a good example - it returns the rule
ARN whether the rule was just created or already existed. The pattern:
call the read API first, capture the output, call the write API only
if the read returned nothing.
Case 2 - The API is idempotent but produces no output
Some write APIs succeed silently - they return nothing on
success. aws s3api put-bucket-notification-configuration is a good
example. It will happily overwrite an existing configuration without
complaint, but returns no output to confirm success. The pattern: call
the API, synthesize a value for your sentinel using && echo to
capture something meaningful on success.
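The mechanism is easy to see with a toy stand-in, where true and false play the part of the silent AWS call:

```shell
# success: && fires and the variable holds something worth writing
out="$(true && echo 'configuration applied')"
[ -n "$out" ] && echo "sentinel content: $out"

# failure: && never fires, the variable stays empty, nothing is written
out="$(false && echo 'configuration applied')" || true
[ -z "$out" ] && echo "no sentinel"
```

The variable is non-empty exactly when the call succeeded, which is all the sentinel logic needs.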
Case 3 - The API is not idempotent
Some APIs will fail with an error if you try to create a resource that
already exists. aws lambda add-permission returns
ResourceConflictException if the statement ID already exists. aws
lambda create-function returns ResourceConflictException if the
function already exists. These APIs give you no choice - you must
query first and only call the write API if the resource is missing.
Case 4 - The API call fails
Any of the above can fail - network errors, permission problems,
invalid parameters. When a call fails you must not leave behind a
sentinel file that signals success. A stale sentinel is worse than no
sentinel - it tells Make the resource exists when it doesn’t, and
subsequent runs silently skip the creation step. The patterns: || rm
-f $@ when writing directly, or else rm -f $@ when capturing to a
variable first.
The Sentinel File
Before we look at the four patterns in detail, we need to introduce a concept that ties everything together: the sentinel file.
A sentinel file is simply a file whose existence signals that a task
has been completed successfully. It contains no magic - it might hold
the output of the API call that created the resource, or it might just
be an empty file created with touch. What matters is that it exists
when the task succeeded and doesn’t exist when it hasn’t.
make has used this pattern since the 1970s. When you declare a
target in a Makefile, make checks whether a file with that name
exists before deciding whether to run the recipe. If the file exists
and is newer than its dependencies, make skips the recipe
entirely. If the file doesn’t exist, make runs the recipe to create
it.
For infrastructure management this is exactly the behavior we want:
my-resource:
@value="$$(aws some-service describe-resource \
--name $(RESOURCE_NAME) 2>&1)"; \
if [[ -z "$$value" || "$$value" = "ResourceNotFound" ]]; then \
value="$$(aws some-service create-resource \
--name $(RESOURCE_NAME))"; \
fi; \
test -e $@ || echo "$$value" > $@
The first time you run make my-resource the file doesn’t exist,
the recipe runs, the resource is created, and the API response
is written to the sentinel file my-resource. The second time you
run it, make sees the file exists and skips the recipe entirely -
zero API calls.
When an API call fails we want to be sure we do not create the sentinel file. We’ll cover the failure case in more detail in Pattern 4 of the next section.
The Four Patterns
Armed with the sentinel file concept and an understanding of the four API behavioral categories, let’s look at concrete implementations of each pattern.
Pattern 1 - Idempotent API with output
The simplest case. Query the resource first - if it exists capture the output and write the sentinel. If it doesn’t exist, create it, capture the output, and write the sentinel. Either way you end up with a sentinel containing meaningful content.
The SQS queue creation is a good example:
sqs-queue:
@queue="$$(aws sqs list-queues \
--query 'QueueUrls[?contains(@, `$(QUEUE_NAME)`)]|[0]' \
--output text --profile $(AWS_PROFILE) 2>&1)"; \
if echo "$$queue" | grep -q 'error\|Error'; then \
echo "ERROR: list-queues failed: $$queue" >&2; \
exit 1; \
elif [[ -z "$$queue" || "$$queue" = "None" ]]; then \
queue="$(QUEUE_NAME)"; \
aws sqs create-queue --queue-name $(QUEUE_NAME) \
--profile $(AWS_PROFILE); \
fi; \
test -e $@ || echo "$$queue" > $@
Notice --query doing the filtering work before the output reaches
the shell. No jq, no pipeline - the AWS CLI extracts exactly what we
need. The result is either a queue URL or empty. If empty we
create. Either way $$queue ends up with a value and the sentinel is
written exactly once.
The EventBridge rule follows the same pattern:
lambda-eventbridge-rule:
@rule="$$(aws events describe-rule \
--name $(RULE_NAME) \
--profile $(AWS_PROFILE) 2>&1)"; \
if echo "$$rule" | grep -q 'ResourceNotFoundException'; then \
rule="$$(aws events put-rule \
--name $(RULE_NAME) \
--schedule-expression "$(SCHEDULE_EXPRESSION)" \
--state ENABLED \
--profile $(AWS_PROFILE))"; \
elif echo "$$rule" | grep -q 'error\|Error'; then \
echo "ERROR: describe-rule failed: $$rule" >&2; \
exit 1; \
fi; \
test -e $@ || echo "$$rule" > $@
Same shape - query, create if missing, write sentinel once.
Pattern 2 - Idempotent API with no output
Some APIs succeed silently. aws s3api
put-bucket-notification-configuration is the canonical example - it
happily overwrites an existing configuration and returns nothing. No
output means nothing to write to the sentinel.
The solution is to synthesize a value using &&:
define notification_configuration =
use JSON;
my $lambda_function = $ENV{lambda_function};
my $function_arn = decode_json($lambda_function)->{Configuration}->{FunctionArn};
my $configuration = {
LambdaFunctionConfigurations => [ {
LambdaFunctionArn => $function_arn,
Events => [ split ' ', $ENV{s3_event} ],
}
]
};
print encode_json($configuration);
endef
export s_notification_configuration = $(value notification_configuration)
lambda-s3-trigger: lambda-s3-permission
temp="$$(mktemp)"; trap 'rm -f "$$temp"' EXIT; \
lambda_function="$$(cat lambda-function)"; \
echo $$(s3_event="$(S3_EVENT)" lambda_function="$$lambda_function" \
perl -e "$$s_notification_configuration") > $$temp; \
trigger="$$(aws s3api put-bucket-notification-configuration \
--bucket $(BUCKET_NAME) \
--notification-configuration file://$$temp \
--profile $(AWS_PROFILE) && cat $$temp)"; \
test -e $@ || echo "$$trigger" > $@
The && cat $$temp is the key. If the API call succeeds the &&
fires and $$trigger gets the configuration JSON string - something meaningful to
write to the sentinel. If the API call fails && doesn’t fire,
$$trigger stays empty because the Makefile recipe aborts.
Using a
scriptlet (s_notification_configuration)
might seem like overkill, but it’s worth not having to fight shell
quoting issues!
Writing JSON used in many AWS API calls to a temporary file is usually a better way than passing a string on the command line. Unless you wrap the JSON in quotes you’ll be fighting shell quoting and interpolation issues…and of course you can write your scriptlets in Perl or Python!
Pattern 3 - Non-idempotent API
Some APIs are not idempotent - they fail with a
ResourceConflictException or similar if the resource already
exists. aws lambda add-permission and aws lambda create-function
are both in this category. There is no “create or update” variant -
you must check existence first and only call the write API if the
resource is missing.
The Lambda S3 permission target is a good example:
lambda-s3-permission: lambda-function s3-bucket
@permission="$$(aws lambda get-policy \
--function-name $(FUNCTION_NAME) \
--profile $(AWS_PROFILE) 2>&1)"; \
if echo "$$permission" | grep -q 'ResourceNotFoundException' || \
! echo "$$permission" | grep -q s3.amazonaws.com; then \
permission="$$(aws lambda add-permission \
--function-name $(FUNCTION_NAME) \
--statement-id s3-trigger-$(BUCKET_NAME) \
--action lambda:InvokeFunction \
--principal s3.amazonaws.com \
--source-arn arn:aws:s3:::$(BUCKET_NAME) \
--profile $(AWS_PROFILE))"; \
elif echo "$$permission" | grep -q 'error\|Error'; then \
echo "ERROR: get-policy failed: $$permission" >&2; \
exit 1; \
fi; \
if [[ -n "$$permission" ]]; then \
test -e $@ || echo "$$permission" > $@; \
else \
rm -f $@; \
fi
A few things worth noting here…
- get-policy returns the full policy document, which may contain multiple statements - we check for the presence of s3.amazonaws.com specifically using ! grep -q rather than just checking for an empty response. This handles the case where a policy exists but doesn't yet have the S3 permission we need.
- The sentinel is only written if $$permission is non-empty after the if block. This covers the case where get-policy returns nothing and add-permission also fails - the sentinel stays absent and the next make run will try again.
- We pipe errors into our bash variable to detect the case where the resource does not exist, or where some other error occurred. When other failures are possible, 2>&1 combined with specific error-string matching gives you both idempotency and visibility. Swallowing errors silently (2>/dev/null) is how idempotent bugs are born.
Pattern 4 - Failure handling
This isn’t a separate pattern so much as a discipline that applies to all three of the above. There are two mechanisms depending on how the sentinel is written.
Case 1: When the sentinel is written directly by the command:
aws lambda create-function ... > $@ || rm -f $@
|| rm -f $@ ensures that if the command fails the partial or empty
sentinel is immediately cleaned up. Without it make sees the file on
the next run and silently skips the recipe - an idempotent bug.
Case 2: When the sentinel is written by capturing output to a variable first:
if [[ -n "$$value" ]]; then \
test -e $@ || echo "$$value" > $@; \
else \
rm -f $@; \
fi
The else rm -f $@ serves the same purpose. If the variable is empty
- because the API call failed - the sentinel is removed. If the
sentinel doesn’t exist yet nothing is written. Either way the next
make run will try again.
In both cases the goal is the same: a sentinel file should only exist when the underlying resource exists. A stale sentinel is worse than no sentinel.
Depending on how your recipe is written, you may not need to test the variable that captures the output at all. In our Makefiles we set .SHELLFLAGS := -ec, which causes make to exit immediately if any command in a recipe fails. This means targets that don't redirect command output straight to $@ - like our sqs-queue target above - don't need explicit failure handling: make will die loudly and the sentinel won't be written. In that case you don't even need to test $$value, and can simplify writing of the sentinel file like this:
test -e $@ || echo "$$value" > $@
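The fail-fast behavior comes from bash's -e flag, which a quick experiment outside make demonstrates (the second echo stands in for the sentinel write):

```shell
# under -e the recipe aborts at the first failing command,
# so the sentinel write after it is never reached
bash -ec 'false; echo "sentinel written"' || echo "recipe aborted, no sentinel"
```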
Conclusion
Creating AWS resources can be done using several different tools, all of which eventually call AWS APIs and process the returned payloads. Each of these tools has its place. Each adds something. Each also carries its own complexity, dependencies, and learning curve.
For a small project or a focused set of resources - the kind a solo
developer or small team manages for a specific application - you don’t
need tools with a high cognitive or resource load. You can use the tools already on your belt: make, bash, [insert favorite scripting language here], and aws. And you can leverage those same tools
equally well with gcloud or az.
The four patterns we’ve covered handle every AWS API behavior you’ll encounter:
- Query first, create only if missing, write a sentinel
- Synthesize output when the API has none
- Always check before calling a non-idempotent API
- Clean up on failure with
|| rm -f $@
These aren’t new tricks - they’re straightforward applications of
tools that have been around for decades. make has been managing
file-based dependencies since 1976. The sentinel file pattern predates
cloud computing entirely. We’re just applying them to a new problem.
One final thought. The idempotent bug - running a failing process
repeatedly and getting the same error every time - is the mirror image
of what we’ve built here. Our goal is idempotent success: run it once,
it works. Run it again, it still works. Run it a hundred times,
nothing changes. || rm -f $@ is what separates idempotent success
from idempotent failure - it ensures that a bad run always leaves the
door open for the next attempt rather than cementing the failure in
place with a stale sentinel.
Your shed is already well stocked. Sometimes the right tool for the job is the one you’ve had hanging on the wall for thirty years.
Further Reading
- “Advanced Bash-Scripting Guide” - https://tldp.org/LDP/abs/html/index.html
- “GNU Make” - https://www.gnu.org/software/make/manual/html_node/index.html
- Dave Oswald, “Perl One Liners for the Shell” (Perl conference presentation): https://www.slideshare.net/slideshow/perl-oneliners/77841913
- Peteris Krumins, “Perl One-Liners” (No Starch Press): https://nostarch.com/perloneliners
- Sundeep Agarwal, “Perl One-Liners Guide” (free online): https://learnbyexample.github.io/learn_perl_oneliners/
- AWS CLI JMESPath query documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html
I've been using the same upload script for years, but I've recently rewritten this function into a custom Perl Module. This doesn't work as expected.
The problem
The Perl Module states there's nothing to upload ("Can't call method "upload" on an undefined value")
The (possible) reason
My HTML form sends the file thru a <form> with "method="POST" enctype="multipart/form-data" to my Perl script. This script then tries to send it to my upload.pm script, where the data is lost in transport or something?
Some additional clarification where needed
- Main script "edit.cgi" prints out an HTML form where a local image is selected:

<form action="edit.cgi?action=submit" method="POST" enctype="multipart/form-data">
<input type="file" name="vCoverFile">

- A subroutine inside "edit.cgi" handles the data and sends the image on to the upload.pm module:

$uploadCover = imageHandling::UploadFileFromFile("$fields{vCoverFile}","vCoverFile","$upload_dir_images/$highResDir/");

- The module tries to upload the image but finds nothing:

$myFile = $_[0];
$myFile =~ s/.*[\/\\](.*)/$1/;
$upload_filehandle = $q->upload("$_[1]");
open UPLOADFILE, ">$_[2]/$myFile";
binmode UPLOADFILE;
while ( <$upload_filehandle> ) { print UPLOADFILE; }
close UPLOADFILE;
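A common culprit with CGI.pm uploads is that upload() only works on the CGI object that actually parsed the multipart POST; a fresh CGI->new inside the module sees no file data, and passing only the field's value drops the filehandle entirely. A hedged sketch of the module with the caller's query object passed in (the names follow the question, but the fix itself is an assumption about the poster's upload.pm):

```perl
package imageHandling;
use strict;
use warnings;

# The caller must pass its own CGI object - the one that parsed the
# multipart POST - not just the form field's value.
sub UploadFileFromFile {
    my ($q, $field, $dir) = @_;
    my $fh = $q->upload($field)
        or die "no uploaded file in field '$field'";
    my $name = $q->param($field);
    $name =~ s{.*[/\\]}{};                 # strip any client-side path
    open my $out, '>', "$dir/$name" or die "open $dir/$name: $!";
    binmode $fh;                           # images are binary on both ends
    binmode $out;
    print {$out} $_ while <$fh>;
    close $out or die "close: $!";
    return $name;
}

1;
```

The call site then becomes something like imageHandling::UploadFileFromFile($q, "vCoverFile", "$upload_dir_images/$highResDir/"), where $q is the CGI object created in edit.cgi.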
Finally - GTC 2.0, an all-in-one color library, is released! This post will not rehash the (very) fine manual, but give you a sense of what you can achieve with this software and why it is better than any other library of that sort on CPAN. If you would like to look under the hood of GTC, please read my last post.
When I released GTC 1.0 in 2022, it had 4 major features:
1. computing color gradients, between 2 colors in RGB
2. computing complementary colors in HSL
3. translating color names from internal constant set into RGB values
4. converting RGB to HSL and back
The HSL support made it possible to add and subtract lightness and saturation (make colors darker or lighter, more pale or more colorful). Add a very rudimentary distance computation and color blending, and we have reached the bottom of the barrel.
GTC 2.0 expanded all of these areas manyfold. Going from 2 color spaces (RGB and HSL) to 17 now (soon ~25) has a large effect. Not only does reading and writing color values in 17 spaces make GTC much more useful, but computing a gradient or measuring a distance in different spaces also gives you options. Some spaces are optimized for human perception (OKLAB or CIELUV); others you would choose out of technical necessity. OKLAB and OKHCL especially have been the hype for a while, and GTC is the only module on CPAN supporting them. Almost all methods (besides 'name' and 'complement') let you choose the color space the method will be computed in, always via the named argument in: " in => 'RGB' " just reads naturally.
And just to complete bullet point 1: gradient can now take a series of colors and a tilt factor as arguments to produce very expressive and custom gradients. The tilt factor works also for complements. If you use special tilt values from the documentation you can get also split complementary colors as needed by designers but the nice thing about GTC is, you could choose any other value to get exactly what you are looking for. Many libraries have one method for triadic colors one for quadratic. To get them in GTC you just set the steps argument to 3 or 4 but you can choose again also any other number. Complements can be tilted in all 3 Dimensions.
Beside gradient and complement came also a new color set method: cluster. It is for computing a bunch of colors and are centered around a given one, but have a given, minimal dissimilarity. New is also invert, often the fastest way to get a fitting fore/background color, if the original color was not too bland.
The internal color name constants are still the same, but this feature block got 2 expansions. For one you can now ask for the closest color name (closest_name) and select from which standard this name has to come from (e.g. CSS). These Constants are provided by the Graphics::ColorNames::* modules and you can use them also anywhere a color is expected as input. The nice green from X11 standard would be just:'X:forestgreen'.
But since CSS + X11 + Pantone report colors are already included 'forestgreen' works too.
There are many more features that will come the next week, the most requested is probably simulation for color impaired vision, more spaces, a gamut checker is already implement, gamma correction, will be implemented this week and much much more. Just give it a try and please send bug reports and feature requests.
PS. Yes I also heald a lightning talk about GTC in Berlin last week.
Cross-posted from my blog
Last week, the Perl community came together for the 28th German Perl Workshop. This year, it was held at the Heilandskirche in Berlin Moabit. Excitingly, we had the nave for the presentations.
While the name is still German Perl Workshop, we now draw attendees from all over the globe. Presenters came from India, the US and various European countries. Maybe it is time to announce it as a more international conference again.
Bringing the infrastructure to a Perl Workshop means a lot of additional hardware that we hopefully won't need, like looong HDMI cables, various adapters to HDMI, a bundle of extension cords and duct tape of the non-Perl variant. Lee also brought the EPO recording set for recording the presentations. The set came back with me from Berlin, as its main use nowadays is recording the talks at a German Perl Workshop for later publication.
Organizing a conference usually means that my attention is divided between running the event, chatting with attendees and giving a presentation or two. Luckily other members of Frankfurt.pm and other long-time attendees are always there to lend a hand.
Over the years, we have organized the German Perl Workshop many times. Local organizers for 2027 already stepped up. Next year, we aim for the city of Hannover. We don't have the contract for a venue signed, so watch https://www.perl-workshop.de/news for announcements.
Such an event can't happen without the sponsors who support us financially. Let me quickly show their logos here:
I'm currently in a train from Berlin to Strasbourg and then onward to Marseille, traveling from the 28th(!) German Perl Workshop to the Koha Hackfest. I spend a few days after the Perl Workshop in Berlin with friends from school who moved to Berlin during/after university, hanging around at their homes and neighborhoods, visiting museums, professional industrial kitchens and other nice and foody places. But I want to review the Perl Workshop, so:
German Perl Workshop
It seems the last time I've attended a German Perl Workshop was in 2020 (literally days before the world shut down...), so I've missed a bunch of nice events and possibilities to meet up with old Perl friends. But even after this longish break it felt a bit like returning home :-)
I traveled to Berlin by sleeper train (worked without a problem) arriving on Monday morning a few hours before the workshop started. I went to a friend's place (where I'm staying for the week), dumped my stuff, got a bike, and did a nice morning cycle through Tiergarten to the venue. Which was an actual church! And not even a secularized one.
Day 1
After a short introduction and welcome by Max Maischein (starting with a "Willkommen, liebe Gemeinde" fitting the location) he started the workshop with a talk on Claude Code and Coding-Agents. I only recently started to play around a bit with similar tools, so I could relate to a lot of the topics mentioned. And I (again?) need to point out the blog post I Sold Out for $20 a Month and All I Got Was This Perfectly Generated Terraform which sums up my feelings and experiences with LLMs much better than I could.
Abigail then shared a nice story on how they (Booking.com) sharded a database, twice, using some "interesting" tricks to move the data around while still getting reads from the correct replicas, all with nearly no downtime. Fun, but as "my" projects usually operate on a much smaller scale than Booking, I will probably not try to recreate their solution.
For lunch I met with Michael at a nearby market hall for some Vietnamese food to do some planning for the upcoming Perl Toolchain Summit in Vienna.
Lars Dieckow then talked about data types in databases, or actually the lack of more complex types in databases and how one could still implement such types in SQL. Looks interesting, but probably a bit too hackish for me to actually use. I guess I have to continue handling such cases in code (which of course feels ugly, especially as I've learned to move more and more code into the DB using CTEs and window functions).
Next Flavio S. Glock showed his very impressive progress with PerlOnJava, a Perl distribution for the JVM. Cool, but probably not something I will use (mostly because I don't run Java anywhere, so adding it to our stack would make things more complex).
Then Lars showed us some of his beloved tools in Aus dem Nähkästchen, continuing a tradition started by Sven Guckes (RIP). I am already using some of the tools (realias, fzf, zoxide, htop, ripgrep) but now plan to finally clean up my dotfiles using xdg-ninja.
Now it was time for my first talk at this workshop, on Using class, the new-ish feature available in Perl (since 5.38) for native keywords for object-oriented programming. I also sneaked in some bibliographic data structures (MAB2 and MARCXML) to share my pain with the attendees. I was a tiny bit (more) nervous, as this was the first time I was using my current laptop (a Framework running Sway/Wayland) with an external projector, but wl-present worked like a charm. After the talk Wolfram Schneider showed me his MAB2->MARC online converter, which could maybe have been a basis for our tool, but then writing our own was a "fun" way to learn about MAB2.
The last talk of the day was Lee Johnson with I Bought A Scanner, showing us how he got an old (ancient?) high-res photo scanner working again to scan his various film projects. Fun and interesting!
Between the end of the talks and the social event I went for some coffee with Paul Cochrane, and we were joined by Sawyer X and Flavio and some vegan tiramisu. Paul and I then cycled to the Indian restaurant through some light drizzle and along the Spree, and only then did I realize that Paul had cycled all the way from Hannover to Berlin. I was a bit envious (even though I did in fact cycle to Berlin 16 years ago (oh my, so long ago...)). Dinner was nice, but I did not stay too long.
Day 2
Tuesday started with Richard Jelinek first showing us his rather impressive off-grid house (or "A technocrat's house - 2050s standard") and the software used to automate it, before moving on to the actual topic of his talk, Perl mit AI, which turned out to be about a Perl implementation in Rust called pperl, developed with massive LLM support. It seems to be rather fast. As with PerlOnJava, I'm not sure I really want to use an alternative implementation (and of course pperl is currently marked as "Research Preview — WORK IN PROGRESS — please do not use in production environments"), but maybe I will give it a try when it's more stable. Especially since we now have containers, which make setting up experimental environments much easier.
Then Alexander Thurow shared his Thoughts on (Modern?) Software Development: lots of inspirational (or depressing) quotes and some LLM criticism that had been lacking at the workshop (until now...)
Next up was Lars (again) with a talk on Hierarchien in SQL where we did a very nice derivation on how to get from some handcrafted SQL to recursive CTEs to query hierarchical graph data (DAG). I used (and even talked about) recursive CTEs a few times, but this was by far the best explanation I've ever seen. And we got to see some geizhals internals :-)
Sören Laird Sörries informed us on Digitale Souveränität und Made in Europe and I'm quite proud to say that I'm already using a lot of the services he showed (mailbox, Hetzner, fairphone, ..) though we could still do better (eg one project is still using a bunch of Google services)
Then Salve J. Nilsen (whose name I promise not to mangle anymore) showed us his thoughts on What might a CPAN Steward organization look like?. We had already talked about this topic a few weeks ago (in preparation for the Perl Toolchain Summit), so I was not paying a lot of attention (and instead hacked up a few short slides for a lightning talk) - sorry. But in the discussion afterwards Salve clarified that the Cyber Resilience Act applies to all "CE-marked products", and that even a Perl API backend that powers a mobile app running on a smartphone counts as a "CE-marked product". Before that I had been under the assumption that only software running on actual physical products needs the attestation. So we should really get this Steward organization going and hopefully even profit from it!
The last slot of the day was filled with the Lightning Talks hosted by R Geoffrey Avery and his gong. I submitted two and got a "double domm" slot, where I hurried through my microblog pipeline (on POSSE and getting not-twitter-tweets from my command line via some gitolite to my self-hosted microblog and then on to Mastodon), followed by taking up Lars' challenge to show stuff from my own "Nähkästchen", in my case gopass and tofi (and some bash pipes) for an easy password manager.
We had the usual mixture of fun and/or informative short talks, but the highlight for me was Sebastian Gamaga, who did his first talk at a Perl event on How I learned about the problem differentiating a Hash from a HashRef. Good slides, well executed and showing a problem that I'm quite sure everybody encountered when first learning Perl (and I have to admit I also sometimes mix up hash/ref and regular/curly braces when setting up a hash). Looking forward to a "proper" talk by Sebastian next year :-)
This evening I skipped having dinner with the Perl people, because I had to finish some slides for Wednesday and wanted to hang out with my non-Perl friends. But I've heard that a bunch of people had fun bouldering!
Day 3
I had a job call at 10:00 and (unfortunately) a bug to fix, so I missed the three talks in the morning session and only arrived at the venue during lunch break and in time for Paul Cochrane talking about Getting FIT in Perl (and fit he did get, too!). I've only recently started to collect exercise data (as I got a sport watch for my birthday) and being able to extract and analyze the data using my own software is indeed something I plan to do.
Next up was Julien Fiegehenn on Turning humans into SysAdmins, where he showed us how he used LLMs to adapt his developer mentorship framework to also work for sysadmins, and how he got them (the LLMs, not the fresh sysadmins) to differentiate between Julian and Julien (among other things...)
For the final talk it was my turn again: Deploying Perl apps using Podman, make & gitlab. I'm not too happy with the slides, as I had to rush a bit to finish them and did not properly highlight all the important points. But it still went well (enough), and it seemed that a few people found one of the main points (using bash / make in gitlab CI instead of specifying all the steps directly in .gitlab-ci.yml) useful.
Then Max spoke the closing words and announced the location of next year's German Perl Workshop, which will take place in Hannover! Nice - I've never been there and plan to attend (and maybe join Paul on a bike ride there?)
Summary
As usual, a lot of thanks to the sponsors, the speakers, the orgas and the attendees. Thanks for making this nice event possible!
-
App::cpanminus - get, unpack, build and install modules from CPAN
- Version: 1.7049 on 2026-03-17, with 286 votes
- Previous CPAN version: 1.7048 was 1 year, 4 months, 18 days before
- Author: MIYAGAWA
-
App::HTTPThis - Export the current directory over HTTP
- Version: v0.11.1 on 2026-03-16, with 25 votes
- Previous CPAN version: v0.11.0 was 2 days before
- Author: DAVECROSS
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260318.001 on 2026-03-18, with 25 votes
- Previous CPAN version: 20260315.002 was 3 days before
- Author: BRIANDFOY
-
Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
- Version: 0.022 on 2026-03-21, with 17 votes
- Previous CPAN version: 0.021 was 1 year, 1 month, 17 days before
- Author: LEONT
-
DBD::Pg - DBI PostgreSQL interface
- Version: 3.20.0 on 2026-03-19, with 103 votes
- Previous CPAN version: 3.19.0 was 4 days before
- Author: TURNSTEP
-
Git::CPAN::Patch - Patch CPAN modules using Git
- Version: 2.5.2 on 2026-03-18, with 45 votes
- Previous CPAN version: 2.5.1
- Author: YANICK
-
JSON - JSON (JavaScript Object Notation) encoder/decoder
- Version: 4.11 on 2026-03-22, with 109 votes
- Previous CPAN version: 4.10 was 3 years, 5 months, 13 days before
- Author: ISHIGAKI
-
JSON::PP - JSON::XS compatible pure-Perl module.
- Version: 4.18 on 2026-03-20, with 22 votes
- Previous CPAN version: 4.17_01 was 2 years, 7 months, 21 days before
- Author: ISHIGAKI
-
Log::Any - Bringing loggers and listeners together
- Version: 1.719 on 2026-03-16, with 69 votes
- Previous CPAN version: 1.718 was 9 months, 14 days before
- Author: PREACTION
-
MetaCPAN::API - (DEPRECATED) A comprehensive, DWIM-featured API to MetaCPAN
- Version: 0.52 on 2026-03-16, with 26 votes
- Previous CPAN version: 0.51 was 8 years, 9 months, 9 days before
- Author: HAARG
-
Module::CoreList - what modules shipped with versions of perl
- Version: 5.20260320 on 2026-03-20, with 44 votes
- Previous CPAN version: 5.20260308 was 11 days before
- Author: BINGOS
-
Net::SSLeay - Perl bindings for OpenSSL and LibreSSL
- Version: 1.96 on 2026-03-21, with 27 votes
- Previous CPAN version: 1.95_03
- Author: CHRISN
-
OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
- Version: 0.7009 on 2026-03-19, with 15 votes
- Previous CPAN version: 0.7008
- Author: ETJ
-
SPVM - The SPVM Language
- Version: 0.990150 on 2026-03-19, with 36 votes
- Previous CPAN version: 0.990149
- Author: KIMOTO
-
Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
- Version: 1.045 on 2026-03-19, with 14 votes
- Previous CPAN version: 1.044 was 10 days before
- Author: CHOROBA
-
TimeDate - Date and time formatting subroutines
- Version: 2.35 on 2026-03-21, with 28 votes
- Previous CPAN version: 2.34_03 was 1 day before
- Author: ATOOMIC
-
Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
- Version: 0.70 on 2026-03-19, with 20 votes
- Previous CPAN version: 0.69
- Author: CHANSEN
-
YAML::Syck - Fast, lightweight YAML loader and dumper
- Version: 1.39 on 2026-03-21, with 18 votes
- Previous CPAN version: 1.38
- Author: TODDR

We are re-opening the talk submissions with a new deadline of April 21, 2026. Please submit your 20 minute talks, and 50 minute talks at https://tprc.us/. Let us know if you need help with your submission or your talk development, because we have mentors who can listen to your ideas and guide you.
We are also taking submissions for interactive sessions. These are sessions that have a theme, but invite maximum audience participation; sessions which take advantage of the gathering of community members that have a wide range of experience and ideas to share. You would introduce the theme and moderate the session. If you have ideas for interactive sessions, but don’t want to moderate them yourself, please go to our wiki to enter your ideas, and maybe someone else will pick up the ball!
About eighteen months ago, I wrote a post called On the Bleading Edge about my decision to start using Perl’s new class feature in real code. I knew I was getting ahead of parts of the ecosystem. I knew there would be occasional pain. I decided the benefits were worth it.
I still think that’s true.
But every now and then, the bleading edge reminds you why it’s called that.
Recently, I lost a couple of days to a bug that turned out not to be in my code, not in the module I was installing, and not even in the module that module depended on — but in the installer’s understanding of modern Perl syntax.
This is the story.
The Symptom
I was building a Docker image for Aphra. As part of the build, I needed to install App::HTTPThis, which depends on Plack::App::DirectoryIndex, which depends on WebServer::DirIndex.
The Docker build failed with this error:
```
#13 45.66 --> Working on WebServer::DirIndex
#13 45.66 Fetching https://www.cpan.org/authors/id/D/DA/DAVECROSS/WebServer-DirIndex-0.1.3.tar.gz ... OK
#13 45.83 Configuring WebServer-DirIndex-v0.1.3 ... OK
#13 46.21 Building WebServer-DirIndex-v0.1.3 ... OK
#13 46.75 Successfully installed WebServer-DirIndex-v0.1.3
#13 46.84 ! Installing the dependencies failed: Installed version (undef) of WebServer::DirIndex is not in range 'v0.1.0'
#13 46.84 ! Bailing out the installation for Plack-App-DirectoryIndex-v0.2.1.
```
Now, that’s a deeply confusing error message.
It clearly says that WebServer::DirIndex was successfully installed. And then immediately says that the installed version is undef and not in the required range.
At this point you start wondering if you’ve somehow broken version numbering, or if there’s a packaging error, or if the dependency chain is wrong.
But the version number in WebServer::DirIndex was fine. The module built. The tests passed. Everything looked normal.
So why did the installer think the version was undef?
When This Bug Appears
This only shows up in a fairly specific situation:
- A module uses modern Perl class syntax
- The module defines a $VERSION
- Another module declares a prerequisite with a specific version requirement
- The installer tries to check the installed version without loading the module
- It uses Module::Metadata to extract $VERSION
- And the version of Module::Metadata it is using doesn’t properly understand class
If you don’t specify a version requirement, you’ll probably never see this. Which is why I hadn’t seen it before. I don’t often pin minimum versions of my own modules, but in this case, the modules are more tightly coupled than I’d like, and specific versions are required.
So this bug only appears when you combine:
modern Perl syntax + version checks + older toolchain
Which is pretty much the definition of “bleading edge”.
The Real Culprit
The problem turned out to be an older version of Module::Metadata that had been fatpacked into cpanm.
cpanm uses Module::Metadata to inspect modules and extract $VERSION without loading the module. But the older Module::Metadata didn’t correctly understand the class keyword, so it couldn’t work out which package the $VERSION belonged to.
So when it checked the installed version, it found… nothing.
Hence:
Installed version (undef) of WebServer::DirIndex is not in range ‘v0.1.0’
The version wasn’t wrong. The installer just couldn’t see it.
As an aside, you may find it amusing to hear an anecdote from my attempts to debug this problem.
I spun up a new Ubuntu Docker container, installed cpanm and tried to install Plack::App::DirectoryIndex. Initially, this gave the same error message. At least the problem was easily reproducible.
I then ran code that was very similar to the code cpanm uses to work out what a module’s version is.
```
$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'
```

This displayed an empty string. I was really onto something here. Module::Metadata couldn’t find the version.
I was using Module::Metadata version 1.000037 and, looking at the change log on CPAN, I saw this:
```
1.000038  2023-04-28 11:25:40Z
  - detects "class" syntax
```

So I upgraded Module::Metadata and ran the same check again:
```
$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'
0.1.3
```

That seemed conclusive. Excitedly, I reran the Docker build.
It failed again.
You’ve probably worked out why. But it took me a frustrating half an hour to work it out.
cpanm doesn’t use the installed version of Module::Metadata. It uses its own, fatpacked version. Updating Module::Metadata wouldn’t fix my problem.
The Workaround
I found a workaround. That was to add a redundant package declaration alongside the class declaration, so older versions of Module::Metadata can still identify the package that owns $VERSION.
So instead of just this:

```perl
class WebServer::DirIndex {
    our $VERSION = '0.1.3';
    ...
}
```

I now have this:
```perl
package WebServer::DirIndex;

class WebServer::DirIndex {
    our $VERSION = '0.1.3';
    ...
}
```

It looks unnecessary. And in a perfect world, it would be unnecessary.
But it allows older tooling to work out the version correctly, and everything installs cleanly again.
The Proper Fix
Of course, the real fix was to update the toolchain.
So I raised an issue against App::cpanminus, pointing out that the fatpacked Module::Metadata was too old to cope properly with modules that use class.
Tatsuhiko Miyagawa responded very quickly, and a new release of cpanm appeared with an updated version of Module::Metadata.
This is one of the nice things about the Perl ecosystem. Sometimes you report a problem and the right person fixes it almost immediately.
When Do I Remove the Workaround?
This leaves me with an interesting question.
The correct fix is “use a recent cpanm”.
But the workaround is “add a redundant package line so older tooling doesn’t get confused”.
So when do I remove the workaround?
The answer is probably: not yet.
Because although a fixed cpanm exists, that doesn’t mean everyone is using it. Old Docker base images, CI environments, bootstrap scripts, and long-lived servers can all have surprisingly ancient versions of cpanm lurking in them.
And the workaround is harmless. It just offends my sense of neatness slightly.
So for now, the redundant package line stays. Not because modern Perl needs it, but because parts of the world around modern Perl are still catching up.
Life on the Bleading Edge
This is what life on the bleading edge actually looks like.
Not dramatic crashes. Not language bugs. Not catastrophic failures.
Just a tool, somewhere in the install chain, that looks at perfectly valid modern Perl code and quietly decides that your module doesn’t have a version number.
And then you lose two days proving that you are not, in fact, going mad.
But I’m still using class. And I’m still happy I am.
You just have to keep an eye on the whole toolchain — not just the language — when you decide to live a little closer to the future than everyone else.
The post Still on the [b]leading edge first appeared on Perl Hacks.
I am currently re-visiting the documentation for Perl's CGI module. In the section about the param() method, there is a warning about using that method in a list context; see here. The warning literally reads:
Warning - calling param() in list context can lead to vulnerabilities if you do not sanitise user input as it is possible to inject other param keys and values into your code. [...]
Then there is an example of what we should not do:
```perl
my %user_info = (
    id   => 1,
    name => $q->param('name'),
);
```
I have understood the warning and the code except one thing:
How can calling param() in list context inject other "param keys" (as the citation calls it) into my code? Could somebody please give an example of a query string or of POST data that lets me reproduce this?
The question is specifically about parameter keys, not about possible multiple values for the same key.
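For what it's worth, the mechanism the warning describes is ordinary Perl list flattening, and it can be sketched without a web server at all. The is_admin key below is hypothetical (it stands in for whatever key your code later trusts), and @values simulates what $q->param('name') could return in list context for a query string like ?name=Alice&name=is_admin&name=1:

```perl
use strict;
use warnings;

# In list context, $q->param('name') returns ALL values supplied for
# 'name'. The query string ?name=Alice&name=is_admin&name=1 would
# therefore yield this three-element list:
my @values = ('Alice', 'is_admin', 1);

# The list flattens into the surrounding hash, exactly as in the
# name => $q->param('name') snippet above:
my %user_info = (
    id   => 1,
    name => @values,
);

# The attacker has injected a key the code never mentioned:
print $user_info{is_admin}, "\n";    # prints 1
print $user_info{name}, "\n";        # prints Alice
```

Forcing scalar context, e.g. scalar($q->param('name')), or reaching for CGI's explicit multi_param when a list really is wanted, avoids the surprise.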
Abstract
Even if you’re skeptical about AI writing your code, you’re leaving time on the table.
Many developers have been slow to adopt AI in their workflows, and that’s understandable. As AI coding assistants become more capable, the anxiety is real - nobody wants to feel like they’re training their replacement. But we’re not there yet. Skilled developers who understand logic, mathematics, business needs and user experience will be essential to guide application development for the foreseeable future.
The smarter play is to let AI handle the parts of the job you never liked anyway - the documentation, the release notes, the boilerplate tests - while you stay focused on the work that actually requires your experience and judgment. You don’t need to go all in on day one. Here are six places to start.
1. Unit Test Writing
Writing unit tests is one of those tasks most developers know they should do more of and few enjoy doing. It’s methodical, time-consuming, and the worst time to write them is when the code reviewer asks if they pass.
TDD is a fine theory. In practice, writing tests before you’ve vetted your design means rewriting your tests every time the design evolves - which is often. Most experienced developers write tests after the design has settled, and that’s a perfectly reasonable approach.
The important thing is that they get written at all. Even a test that simply validates use_ok(qw(Foo::Bar)) puts scaffolding in place that can be expanded when new features are added or behavior changes. A placeholder is infinitely more useful than nothing.
This is where AI earns its keep. Feed it a function or a module and it will identify the code paths that need coverage - the happy path, the edge cases, the boundary conditions, the error handling. It will suggest appropriate test data sets including the inputs most likely to expose bugs: empty strings, nulls, negative numbers, off-by-one values - the things a tired developer skips.
You review it, adjust it, own it. AI did the mechanical work of thinking through the permutations. You make sure it reflects how your code is actually used in the real world.
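As a concrete sketch of the kind of output this produces, here is a test file for a trivial clamp function. Both the function and the cases are purely illustrative (nothing here comes from a real module), but they show the happy path, the boundaries, and the boundary neighbours that AI is good at enumerating:

```perl
use strict;
use warnings;
use Test::More;

# A trivial function under test (hypothetical example).
sub clamp {
    my ($n, $lo, $hi) = @_;
    return $lo if $n < $lo;
    return $hi if $n > $hi;
    return $n;
}

# Happy path, boundaries, and the neighbours of the boundaries --
# exactly the permutations a tired developer skips.
is clamp(5,  0, 10), 5,  'value inside the range passes through';
is clamp(-1, 0, 10), 0,  'below the range clamps to the low bound';
is clamp(11, 0, 10), 10, 'above the range clamps to the high bound';
is clamp(0,  0, 10), 0,  'exactly on the low boundary';
is clamp(10, 0, 10), 10, 'exactly on the high boundary';

done_testing();
```

Even if half these cases feel obvious, having them written down is the scaffolding; the review pass is where you swap in the inputs your code actually sees.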
2. Documentation
“Documentation is like sex: when it’s good, it’s very, very good; and when it’s bad, it’s better than nothing.” - said someone somewhere.
Of course, there are developers that justify their disdain for writing documentation with one of two arguments (or both):
- The code is the documentation
- Documentation is wrong the moment it is written
It is true, the single source of truth regarding what code actually does is the code itself. What it is supposed to do is what documentation should be all about. When they diverge it’s either a defect in the software or a misunderstanding of the business requirement captured in the documentation.
Code that changes rapidly is difficult to document, but the intent of the code is not. Especially now with AI. It is trivial to ask AI to review the current documentation and align it with the code, negating point #2.
Feed AI a module and ask it to generate POD. It will describe what the code does. Your job is to verify that what it does is what it should do - which is a much faster review than writing from scratch.
3. Release Notes
If you’ve read this far you may have noticed the irony - this post was written by someone who just published a blog post about automating release notes with AI. So consider this section field-tested.
Release notes sit at the intersection of everything developers dislike: writing prose, summarizing work they’ve already mentally moved on from, and doing it with enough clarity that non-developers can understand what changed and why it matters. It’s the last thing standing between you and shipping.
The problem with feeding a git log to AI is that git logs are written for developers in the moment, not for readers after the fact. “Fix the thing” and “WIP” are not useful release note fodder.
The better approach is to give AI real context - a unified diff, a file manifest, and the actual source of the changed files. With those three inputs AI can identify the primary themes of a release, group related changes, and produce structured notes that actually reflect the architecture rather than just the line changes.
A simple make release-notes target can generate all three assets automatically from your last git tag. Upload them, prompt for your preferred format, and you have a first draft in seconds rather than thirty minutes. Here’s how I built it.
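For illustration, such a target might look like the following. This is a sketch assuming GNU make and a tagged git history; the target and file names are my own inventions, not the ones from the linked post:

```make
# Illustrative release-notes prep target (all names are assumptions).
LAST_TAG := $(shell git describe --tags --abbrev=0)

release-notes:
	# 1. unified diff since the last release tag
	git diff $(LAST_TAG)..HEAD > release.diff
	# 2. manifest of changed files
	git diff --name-status $(LAST_TAG)..HEAD > release-manifest.txt
	# 3. current source of the changed files, bundled for upload
	git archive -o changed-sources.tar.gz HEAD \
		$$(git diff --name-only $(LAST_TAG)..HEAD)
```

Note that git archive will fail on files deleted since the last tag (they no longer exist in HEAD), so a real target would filter those out of the third step first.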
You still edit it. You add color, context, and the business rationale that only you know. But the mechanical work of reading every diff and turning it into coherent prose? Delegated.
4. Bug Triage
Debugging can be the most frustrating and the most rewarding experience for a developer. Most developers are predisposed to love a puzzle, and there is nothing more puzzling than a race condition or a dangling pointer. Even though books and posters have been written about debugging, it is sometimes difficult to know exactly where to start.
Describe the symptoms, share the relevant code, toss your theory at it. AI will validate or repudiate without ego - no colleague awkwardly telling you you’re wrong. It will suggest where to look, what telemetry to add, and before you know it you’re instrumenting the code that should have been instrumented from the start.
AI may not find your bug, but it will be a fantastic bug buddy.
5. Code Review
Since I’ve started using AI I’ve found that one of the most valuable things I can do with it is to give it my first draft of a piece of code. Anything more than a dozen or so lines is fair game.
Don’t waste your time polishing a piece of lava that just spewed from your noggin. There’s probably some gold in there and there’s definitely some ash. That’s ok. You created the framework for a discussion on design and implementation. Before you know it you have settled on a path.
AI’s strength is pattern recognition. It will recognize when your code needs to adopt a different pattern or when you nailed it. Get feedback. Push back. It’s not a one-way conversation. Question the approach, flag the inconsistencies that don’t feel right - your input into that review process is critical in evolving the molten rock into a solid foundation.
6. Legacy Code Deciphering
What defines “Legacy Code?” It’s a great question and hard to answer. And not to get too racy again, but as it has been said of pornography, I can’t exactly define it but I know it when I see it.
Fortunately (and yes, I do mean fortunately) I have been involved in maintaining legacy code since the day I started working for a family-run business in 1998. The code I maintained there was born literally in the late 70’s and still, to this day, generates millions of dollars. You will never learn more about coding than by maintaining legacy code.
These are the major characteristics of legacy code from my experience (in order of visibility):
- It generates so much money for a company that they could not possibly contemplate it being unavailable.
- It is monolithic and may in fact consist of modules in multiple languages.
- It has grown organically over the decades.
- It is more than 10 years old.
- The business rules are undocumented, opaque, and can only be discerned by a careful reading of the software. Product managers and users think they know what the software does, but probably do not have the entire picture.
- It cannot easily be re-written (by humans) because of #5.
- It contains as much dead code, no longer serving any useful purpose, as it does useful code.
I once maintained a C program that searched an ISAM database of legal judgments. The code had been ported from a proprietary in-memory binary tree implementation and was likely older than most of the developers reading this post. The business model was straightforward and terrifying - miss a judgment and we indemnify the client. Every change had to be essentially idempotent. You weren’t fixing code, you were performing surgery on a patient who would sue you if the scar was in the wrong place.
I was fortunate - there were no paydays for a client on my watch. But I wish I’d had AI back then. Not to write the code. To help me read it.
Now, where does AI come in? Points 5, 6, and definitely 7.
Throw a jabberwocky of a function at AI and ask it what it does. Not what it should do - what it actually does. The variable names are cryptic, the comments are either missing or lying, and the original author left the company during the Clinton administration. AI doesn’t care. It reads the code without preconception and gives you a plain English explanation of the logic, the assumptions baked in, and the side effects you never knew existed.
That explanation becomes your documentation. Those assumptions become your unit tests. Those side effects become the bug reports you never filed because you didn’t know they were bugs.
Dead code is where AI particularly shines. Show it a module and ask what’s unreachable. Ask what’s duplicated. Ask what hasn’t been touched in a decade but sits there quietly terrifying anyone who considers deleting it. AI will give you a map of the minefield so you can walk through it rather than around it forever.
Along the way AI will flag security vulnerabilities you never knew were there - input validation gaps, unsafe string handling, authentication assumptions that made sense in 1998 and are a liability today. It will also suggest where instrumentation is missing, the logging and telemetry that would have made every debugging session for the last twenty years shorter. You can’t go back and add it to history, but you can add it now before the next incident.
The irony of legacy code is that the skills required to understand it - patience, pattern recognition, the ability to hold an entire system in your head - are exactly the skills AI complements rather than replaces. You still need to understand the business. AI just helps you read the hieroglyphics.
Conclusion
None of the six items on this list require you to hand over the keys. You are still the architect, the decision maker, the person who understands the business and the user. AI is the tireless assistant who handles the parts of the job that drain your energy without advancing your craft.
The developers who thrive in the next decade won’t be the ones who resisted AI the longest. They’ll be the ones who figured out earliest how to delegate the tedious, the mechanical, and the repetitive - and spent the time they saved on the work that actually requires a human.
You don’t have to go all in. Start with a unit test. Paste some legacy code and ask AI to explain it or document it. Think of AI as that senior developer you go to with the tough problems - the one who has seen everything, judges nothing, and is available at 3am when the production system is on fire.
Only this one never sighs when you knock on the door.
Answer
You can configure grub in several ways: you can pin it to one specific kernel, have it always boot the latest one, or have it offer you a selection to pick from.
One specific kernel
If you inspect /boot/grub/grub.cfg you’ll see entries like this:
# the \ are mine, these are usually one big line but for blog purposes I
# multilined them
menuentry 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64' --class debian \
--class gnu-linux --class gnu --class os $menuentry_id_option \
'gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4' {
You can use either of two identifiers to configure grub: the title 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64', or the $menuentry_id_option value gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4.
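On Debian-style systems, the place to pin that choice is /etc/default/grub. A hedged sketch (adjust the id to your own menu entry, and note that entries under the "Advanced options" submenu need the submenu's id prefixed with a `>`):

```shell
# /etc/default/grub - pin grub to one specific kernel.
# Either identifier works; the $menuentry_id_option form is more
# stable, since it survives cosmetic title changes.
GRUB_DEFAULT="gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4"
```

After editing, regenerate the config with `sudo update-grub` (or equivalently `grub-mkconfig -o /boot/grub/grub.cfg`) and reboot.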
The Problem: Generating Release Notes is Boring
You’ve just finished a marathon refactoring - perhaps splitting a monolithic script into proper modules - and now you need to write the release notes. You could feed an AI a messy git log, but if you want high-fidelity summaries that actually understand your architecture, you need to provide better context.
The Solution: AI Loves Boring Tasks
…and is pretty good at them too!
Instead of manually describing changes or hoping it can interpret my ChangeLog, I’ve automated the production of three ephemeral “Sidecar” assets. These are generated on the fly, uploaded to the LLM, and then purged after analysis - no storage required.
The Assets
- The Manifest (.lst): A simple list of every file touched, ensuring the AI knows the exact scope of the release.
- The Logic (.diffs): A unified diff (using git diff --no-ext-diff) that provides the “what” and “why” of every code change.
- The Context (.tar.gz): This is the “secret sauce.” It contains the full source of the changed files, allowing the AI to see the final implementation - not just the delta.
The Makefile Implementation
If you’ve read any of my blog
posts you
know I’m a huge Makefile fan. To automate this I’m naturally going
to add a recipe to my Makefile or Makefile.am.
First, we explicitly set the shell to /usr/bin/env bash to ensure features
like brace expansion work consistently across all dev environments.
# Ensure a portable bash environment for advanced shell features
SHELL := /usr/bin/env bash
.PHONY: release-notes clean-local
# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)
release-notes:
@curr_ver=$(VERSION); \
last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
diffs="release-$$curr_ver.diffs"; \
diff_list="release-$$curr_ver.lst"; \
diff_tarball="release-$$curr_ver.tar.gz"; \
echo "Comparing $$last_tag to current $$curr_ver..."; \
git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
ls -alrt release-$$curr_ver*
clean-local:
@echo "Cleaning ephemeral release assets..."
rm -f release-*.{tar.gz,lst,diffs}
Breaking Down the Recipe
- The Shell Choice (/usr/bin/env bash): We avoid hardcoding paths to ensure the script finds the correct Bash path on macOS, Linux, or inside a container.
- The Version Override (VERSION ?=): This allows the “pre-flight” trick: running make release-notes VERSION=HEAD to iterate on notes before you’ve actually tagged the release.
- Smart Tag Discovery (--sort=-v:refname): Using v:refname forces Git to use semantic versioning logic (so 1.10.0 correctly follows 1.2.0), while the glob pattern filters out “noisy” non-version tags.
- The Diff Filter (--diff-filter=AMR): This ensures the tarball only includes files that actually exist (Added, Modified, or Renamed). If a release deleted a file, this filter prevents tar from erroring out when it can’t find the missing file on disk.
- The Cleanup Crew (clean-local): Removes the ephemeral artifacts using bash brace expansion.
The AI Prompt
Once your assets are generated, upload them to AI and use a prompt that enforces your specific formatting standards.
The Prompt:
I have uploaded a file manifest (`.lst`), a unified diff
(`.diffs`), and a tarball of the source code (`.tar.gz`). Please
analyze these to identify the primary themes of this
release. Generate a release entry using this protocol:
1. Separate versions with 71 dashes.
2. Include a high-level executive summary paragraph.
3. Indent bullet points exactly 21 spaces to maintain a vertical column.
4. Group changes under category labels like `Core Bedrock:` or `Miniserver:`.
5. Provide plain-text ASCII only inside a code block.
The Result
In a recent dry run for Bedrock 3.8.8, this exact workflow allowed the
AI to correctly identify that bedrock-miniserver.pl.in was
refactored into Bedrock::MiniServer and Bedrock::MiniServer::CLI,
while accurately noting that Bedrock::Error moved to the
Bedrock-Core distribution.
In actuality, version 3.8.8 focuses on the primary goals of MiniServer maturation (more batteries included), Apache decoupling, and distribution hygiene. Here’s AI’s version:
-----------------------------------------------------------------------
Bedrock 3.8.8
-----------------------------------------------------------------------
This release focuses on the maturation of the MiniServer component,
decoupling core Bedrock logic from Apache-specific dependencies, and
improving distribution hygiene. Key architectural changes include
refactoring the MiniServer into modular components and moving
exception handling into the Bedrock-Core distribution.
2026-03-17 - 3.8.8 - MiniServer Maturation and Apache Decoupling
Miniserver:
- Refactored bedrock-miniserver.pl into modular
Bedrock::MiniServer and Bedrock::MiniServer::CLI.
- Implemented zero-config scaffolding to
automatically create application trees.
- Integrated full Bedrock configuration pipeline
for parity with Apache environments.
- Updated bedrock_server_config to support both
getter and setter operations.
Core:
- Moved Bedrock::Error and Bedrock::Exception to
the Bedrock-Core distribution.
- Introduced Bedrock::FauxHandler as a production-
ready alias for test handlers.
- Added dist_dir() to BLM::Startup::Bedrock to
expose distribution paths to templates.
Fixes:
- Demoted Apache-specific modules (mod_perl2,
Apache2::Request) to optional recommendations.
- Improved Bedrock::Test::FauxHandler to handle
caller-supplied loggers and safe destruction.
Conclusion
As I mentioned in a response to a recent Medium article, AI can be an accelerator for seasoned professionals. You’re not cheating. You did the work. AI does the wordsmithing. You edit, add color, and ship. What used to take 30 minutes now takes 3. Now that’s working smarter, not harder!
Pro-Tip
Add this to the top of your Makefile
SHELL := /usr/bin/env bash
# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)
Copy this to a file named release-notes.mk
.PHONY: release-notes clean-local
release-notes:
@curr_ver=$(VERSION); \
last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
diffs="release-$$curr_ver.diffs"; \
diff_list="release-$$curr_ver.lst"; \
diff_tarball="release-$$curr_ver.tar.gz"; \
echo "Comparing $$last_tag to current $$curr_ver..."; \
git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
ls -alrt release-$$curr_ver*
clean-local:
@echo "Cleaning ephemeral release assets..."
rm -f release-*.{tar.gz,lst,diffs}
Then pull release-notes.mk into your Makefile:
include release-notes.mk
Dave writes:
Last month I worked on various miscellaneous issues, including a few performance and deparsing regressions.
Summary:
- 3:00 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules to build
- 2:49 GH #24212 goto void XSUB in scalar context crashes
- 7:19 XS: avoid core distros using void ST(0) hack
- 2:40 fix up Deparse breakage
- 5:41 remove OP_NULLs in OP_COND execution path

Total: 21:29 (HH:MM)
Paul writes:
Not too much activity of my own this month, as I spent a lot of my Perl time working on other things like magic-v2 and parts of the CPAN module ecosystem such as Future::IO. Plus I had a stage show to finish building props for and to manage the running of.
But I did manage to do:
- 3 = Continue work on attributes-v2 and write a provisional PR for the first stage
- https://github.com/Perl/perl5/pull/24171
- 3 = Bugfix in class.c in threaded builds
- https://github.com/Perl/perl5/issues/24150
- https://github.com/Perl/perl5/pull/24171
- 1 = More foreach lvref neatening
- https://github.com/Perl/perl5/pull/24202
- 3 = Various github code reviews
Total: 10 hours
Now that both attributes-v2 and magic-v2 are parked awaiting the start of the 5.45.x development cycle, most of my time until then will be spent on building up some more exciting features to launch those with, as well as continuing to focus on fixing any release-blocker bugs for 5.44.
Tony writes:
```
[Hours] [Activity]
2026/02/02 Monday
 0.08 #24122 review updates and comment
 0.17 #24063 review updates and apply to blead
 0.28 #24062 approve with comment and bonus comment
 0.92 #24071 review updates and approve
 0.40 #24080 review updates, research and comment
 0.18 #24122 review updates and approve
 0.27 #24157 look into it and original ticket, comment on original ticket
 0.58 #24134 review and comments
 0.27 #24144 review and approve with comment
 0.18 #24155 review and comment
 0.48 #16865 debugging
 0.90 #16865 debugging, start a bisect with a better test case
 4.71

2026/02/03 Tuesday
 0.17 review steve’s suggested maint-votes and vote
 0.17 #24155 review updates and approve
 1.30 #24073 recheck, comments and apply to blead
 0.87 #24082 more review, follow-ups
 0.83 #24105 work on threads support
 0.65 #24105 more work on threads, hash randomization support
 3.99

2026/02/04 Wednesday
 0.13 github notifications
 1.92 #24163 review, comments
 0.48 #24105 rebase some more, fix tests, do a commit and push for CI (needs more work)
 1.70 #24105 more cleanup and push for CI
 4.23

2026/02/05 Thursday
 0.20 github notifications
 0.38 #24105 review CI results and fix some issues
 1.75 #24082 research and comments
 0.63 #24105 more CI results, update the various generated config files and push for CI
 0.17 #23561 review updates and comment
 0.40 #24163 research and follow-up
 0.58 #24098 review updates and comments
 4.11

2026/02/09 Monday
 0.15 #24082 comment
 0.20 #22040 comment
 0.30 #24005 research, comment
 0.33 #4106 rebase again and apply to blead
 0.35 #24133 comment
 0.35 #24168 review CI results and comment
 0.25 #24098 comment
 0.18 #24129 review updates and comment
 0.92 #24160 review, comment, approve
 0.17 #24136 review and briefly comment
 0.78 #24179 review, comments
 0.48 #16865 comment, try an approach
 4.46

2026/02/10 Tuesday
 0.62 #24163 comment
 0.23 #24082 research
 0.20 #24082 more research
 1.05

2026/02/11 Wednesday
 0.48 #24163 review updates and approve
 0.73 #24129 review updates
 0.45 #24098 research and follow-up comment
 0.32 #24134 review updates and approve
 0.17 #24080 review updates and approve
 1.18 #22132 setup, testing and comments on ticket and upstream llvm ticket
 0.32 #23561 review update and approve
 0.42 #24179 review some more and make a suggestion
 1.03 #24187 review and comments
 5.10

2026/02/12 Thursday
 0.43 #24136 research and comment
 0.17 #24190 review and approve
 0.90 #24182 review discussion and the change and approve
 0.08 #24178 review and briefly comment
 0.33 #24177 review, research and comment
 0.08 #24187 brief follow-up
 0.43 #24176 research, review and approve
 0.27 #24191 research, testing
 0.20 #24192 review and approve
 0.38 #24056 debugging
 0.58 #24056 debugging, something in find_lexical_cv()?
 3.85

2026/02/16 Monday
 0.52 github notifications
 0.08 #24178 review updates and approve
 2.20 #24098 review and comments
 0.88 #24056 more debugging, find at least one bug
 0.92 #24056 work up tests, testing, commit message and push for CI, perldelta and re-push
 4.60

2026/02/17 Tuesday
 0.18 #24056 check CI results, rebase in case and re-push, open PR 24205
 2.88 #24187 review, comments
 0.47 #24187 more comments
 0.23 reply email from Jim Keenan re git handling for testing PR tests without the fixes
 3.76

2026/02/18 Wednesday
 3.02 #24187 review comments, work on fix for assertion, testing, push for CI
 0.25 #24187 check CI, make perldelta and make PR 24211
 0.35 #24098 review updates and approve
 3.62

2026/02/19 Thursday
 0.30 #24200 research and comment
 0.47 #24215 review, wonder why cmp_version didn’t complain, find out and approve
 0.08 #24208 review and comment
 0.73 #24213 review, everything that needs saying had been said
 0.22 #24206 review and comments
 0.53 #24203 review, comment and approve
 0.33 #24210 review, research and approve with comment
 0.37 #24200 review, research and approve
 3.03

2026/02/23 Monday
 0.35 #24212 testing, add #24213 to 5.42 votes
 2.42 #24159 review and benchmarking, comment
 0.73 #24187 try to break it
 3.50

2026/02/24 Tuesday
 0.35 github notifications
 1.13 #24187 update PR 24211 commit message, rechecks
 0.43 #24001 re-work tests on PR 24060
 0.30 #24001 more re-work
 2.21

2026/02/25 Wednesday
 1.02 #24180 research, comments
 0.22 #24206 review update and comment
 0.28 #24208 review updates and comment
 0.57 #24060 more tests
 0.88 #24060 more tests, testing, debugging
 2.97

2026/02/26 Thursday
 0.47 #24211 minor fixes per comments
 0.23 #24206 review updates and approve
 0.22 #24180 review updates and approve
 0.98 #24236 review and comments
 1.30 #24228 review, testing and comments
 0.08 #24236 research and comment
 0.78 #24159 review updates, testing, comments
 4.06

Which I calculate is 59.25 hours.

Approximately 50 tickets were reviewed or worked on, and 3 patches were applied.
```
Let’s talk about music programming! There are a million aspects to this subject, but today, we’ll touch on generating rhythmic patterns with mathematical and combinatorial techniques. These include the generation of partitions, necklaces, and Euclidean patterns.
Stefan Hollos and J. Richard Hollos wrote an excellent little book called “Creating Rhythms”, whose algorithms have been implemented in C, Perl, and Python. It features a number of algorithms that produce or modify lists of numbers or bit-vectors (of ones and zeroes). These can be the beat onsets (the ones) and rests (the zeroes) of a rhythm. We’ll check out these concepts with Perl.
For each example, we’ll save the MIDI with the MIDI::Util module. Also, in order to actually hear the rhythms, we will need a MIDI synthesizer. For these illustrations, fluidsynth will work. Of course, any MIDI capable synth will do! I often control my eurorack analog synthesizer with code (and a MIDI interface module).
Here’s how I start fluidsynth on my mac in the terminal, in a separate session. It uses a generic soundfont file (sf2) that can be downloaded here (124MB zip).
fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2
So, how does Perl know what output port to use? There are a few ways, but with JBARRETT’s MIDI::RtMidi::FFI::Device, you can do this:
use MIDI::RtMidi::FFI::Device ();
my $midi_in = RtMidiIn->new;
my $midi_out = RtMidiOut->new;
print "Input devices:\n";
$midi_in->print_ports;
print "\n";
print "Output devices:\n";
$midi_out->print_ports;
print "\n";
This shows that fluidsynth is alive and ready for interaction.
Okay, on with the show!
First-up, let’s look at partition algorithms. With the part() function, we can generate all partitions of n, where n is 5, and the “parts” all add up to 5. Then taking one of these (say, the third element), we convert it to a binary sequence that can be interpreted as a rhythmic phrase, and play it 4 times.
#!/usr/bin/env perl
use strict;
use warnings;
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $parts = $mcr->part(5);
# [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 1, 2 ], [ 1, 2, 2 ], [ 1, 1, 3 ], [ 2, 3 ], [ 1, 4 ], [ 5 ] ]
my $p = $parts->[2]; # [ 1, 2, 2 ]
my $seq = $mcr->int2b([$p]); # [ [ 1, 1, 0, 1, 0 ] ]
Now we render and save the rhythm:
use MIDI::Util qw(setup_score);
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) {
for my $bit ($seq->[0]->@*) {
if ($bit) {
$score->n('en', 40);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-1.mid');
In order to play the MIDI file that is produced, we can use fluidsynth like this:
fluidsynth -i ~/Music/soundfont/FluidR3_GM.sf2 perldotcom-1.mid
Not terribly exciting yet.
Let’s see what the “compositions” of a number reveal. According to the Music::CreatingRhythms docs, a composition of a number is “the set of combinatorial variations of the partitions of n with the duplicates removed.”
Okay. Well, the 7 partitions of 5 are:
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]
And the 16 compositions of 5 are:
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 2, 1], [1, 1, 3], [1, 2, 1, 1], [1, 2, 2], [1, 3, 1], [1, 4], [2, 1, 1, 1], [2, 1, 2], [2, 2, 1], [2, 3], [3, 1, 1], [3, 2], [4, 1], [5]]
That is, the list of compositions has not only the partition [1, 2, 2] but also its variations: [2, 1, 2] and [2, 2, 1]. The same goes for the other partitions. Selections from this list can produce some rather cool rhythms.
Here are the compositions of 5 turned into sequences, played by a snare drum, and written to the disk:
use Music::CreatingRhythms ();
use MIDI::Util qw(setup_score);
my $mcr = Music::CreatingRhythms->new;
my $comps = $mcr->compm(5, 3); # compositions of 5 with 3 elements
my $seq = $mcr->int2b($comps);
my $score = setup_score(bpm => 120, channel => 9);
for my $pattern ($seq->@*) {
for my $bit (@$pattern) {
if ($bit) {
$score->n('en', 40); # snare patch
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-2.mid');
A little better. Like a syncopated snare solo.
Sidebar
Another way to play the MIDI file is to use timidity. On my mac, with the soundfont specified in the timidity.cfg configuration file, this would be:
timidity -c ~/timidity.cfg -Od perldotcom-2.mid
To convert a MIDI file to an mp3 (or other audio formats), I do this:
timidity -c ~/timidity.cfg perldotcom-2.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 64k perldotcom-2.mp3
Okay. Enough technical details! What if we want a kick bass drum and hi-hat cymbals, too? Refactor time…
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $s_comps = $mcr->compm(4, 2); # snare
my $s_seq = $mcr->int2b($s_comps);
my $k_comps = $mcr->compm(4, 3); # kick
my $k_seq = $mcr->int2b($k_comps);
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 8) { # repeats
my $s_choice = $s_seq->[ int rand @$s_seq ];
my $k_choice = $k_seq->[ int rand @$k_seq ];
for my $i (0 .. $#$s_choice) { # pattern position
my @notes = (42); # hi-hat every time
if ($s_choice->[$i]) {
push @notes, 40;
}
if ($k_choice->[$i]) {
push @notes, 36;
}
$score->n('en', @notes);
}
}
$score->write_score('perldotcom-3.mid');
Here we play generated kick and snare patterns, along with a steady hi-hat.
Next up, let’s look at rhythmic “necklaces.” Here we find many grooves of the world.
Image from The Geometry of Musical Rhythm
Rhythm necklaces are circular diagrams of equally spaced, connected nodes. A necklace is a lexicographical ordering with no rotational duplicates. For instance, the necklaces of 3 beats are [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]. Notice that there is no [1, 0, 1] or [0, 1, 1]. Also, there are no rotated versions of [1, 0, 0], either.
So, how many 16 beat rhythm necklaces are there?
my $necklaces = $mcr->neck(16);
print scalar @$necklaces, "\n"; # 4116 of 'em!
Okay. Let’s generate necklaces of 8 instead, pull a random choice, and play the pattern with a percussion instrument.
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $patch = shift || 75; # claves
my $mcr = Music::CreatingRhythms->new;
my $necklaces = $mcr->neck(8);
my $choice = $necklaces->[ int rand @$necklaces ];
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $bit (@$choice) { # pattern position
if ($bit) {
$score->n('en', $patch);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-4.mid');
Here we choose from all necklaces. But note that this also includes the sequence with all ones and the sequence with all zeroes. More sophisticated code might skip these.
More interesting would be playing simultaneous beats.
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $necklaces = $mcr->neck(8);
my $x_choice = $necklaces->[ int rand @$necklaces ];
my $y_choice = $necklaces->[ int rand @$necklaces ];
my $z_choice = $necklaces->[ int rand @$necklaces ];
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $i (0 .. $#$x_choice) { # pattern position
my @notes;
if ($x_choice->[$i]) {
push @notes, 75; # claves
}
if ($y_choice->[$i]) {
push @notes, 63; # hi_conga
}
if ($z_choice->[$i]) {
push @notes, 64; # low_conga
}
$score->n('en', @notes);
}
}
$score->write_score('perldotcom-5.mid');
And that sounds like:
How about Euclidean patterns? What are they, and why are they named for a geometer?
Euclidean patterns are a set number of positions P that are filled with a number of beats Q that is less than or equal to P. They are named for Euclid because they are generated by applying the “Euclidean algorithm,” which was originally designed to find the greatest common divisor (GCD) of two numbers, to distribute musical beats as evenly as possible.
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $beats = 16;
my $s_seq = $mcr->rotate_n(4, $mcr->euclid(2, $beats)); # snare
my $k_seq = $mcr->euclid(2, $beats); # kick
my $h_seq = $mcr->euclid(11, $beats); # hi-hats
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $i (0 .. $beats - 1) { # pattern position
my @notes;
if ($s_seq->[$i]) {
push @notes, 40; # snare
}
if ($k_seq->[$i]) {
push @notes, 36; # kick
}
if ($h_seq->[$i]) {
push @notes, 42; # hi-hats
}
if (@notes) {
$score->n('en', @notes);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-6.mid');
Now we’re talkin’ - an actual drum groove! To reiterate, the euclid() method distributes a number of onsets, like 2 or 11, as evenly as possible over the total number of steps - here, 16. The kick and snare use the same arguments, but the snare pattern is rotated by 4 steps, so that they alternate.
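If you’re curious what that even spreading looks like without the module, here is a tiny plain-Perl sketch. It is a rounding-based approximation of my own (euclidean_sketch is a made-up name), not Music::CreatingRhythms’ actual algorithm, and the true Euclidean/Bjorklund output can differ from it by a rotation:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Spread $k onsets as evenly as possible over $n steps by
# rounding each onset's ideal position down to a step index.
sub euclidean_sketch {
    my ($k, $n) = @_;
    my @pattern = (0) x $n;
    $pattern[ int($_ * $n / $k) ] = 1 for 0 .. $k - 1;
    return \@pattern;
}

print join(' ', @{ euclidean_sketch(2, 16) }), "\n";
# two onsets over sixteen steps land at positions 0 and 8
```

The resulting arrayref slots straight into the play loops above in place of the euclid() output.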
So what have we learned today?
- That you can use mathematical functions to generate sequences to represent rhythmic patterns.
- That you can play an entire sequence or simultaneous notes with MIDI.
References:
- App::Cmd - write command line apps with less suffering
  - Version: 0.340 on 2026-03-13, with 50 votes
  - Previous CPAN version: 0.339 was 21 days before
  - Author: RJBS
- App::HTTPThis - Export the current directory over HTTP
  - Version: v0.11.0 on 2026-03-13, with 25 votes
  - Previous CPAN version: 0.010 was 3 months, 9 days before
  - Author: DAVECROSS
- App::zipdetails - Display details about the internal structure of Zip files
  - Version: 4.005 on 2026-03-08, with 65 votes
  - Previous CPAN version: 4.004 was 1 year, 10 months, 8 days before
  - Author: PMQS
- CPAN::Audit - Audit CPAN distributions for known vulnerabilities
  - Version: 20260308.002 on 2026-03-08, with 21 votes
  - Previous CPAN version: 20250829.001 was 6 months, 10 days before
  - Author: BRIANDFOY
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260311.002 on 2026-03-11, with 25 votes
  - Previous CPAN version: 20260308.006 was 2 days before
  - Author: BRIANDFOY
- Dancer2 - Lightweight yet powerful web application framework
  - Version: 2.1.0 on 2026-03-12, with 139 votes
  - Previous CPAN version: 2.0.1 was 4 months, 20 days before
  - Author: CROMEDOME
- Data::Alias - Comprehensive set of aliasing operations
  - Version: 1.30 on 2026-03-11, with 19 votes
  - Previous CPAN version: 1.29 was 1 month, 8 days before
  - Author: XMATH
- DBD::Pg - DBI PostgreSQL interface
  - Version: 3.19.0 on 2026-03-14, with 103 votes
  - Previous CPAN version: 3.18.0 was 2 years, 3 months, 7 days before
  - Author: TURNSTEP
- IO::Compress - IO Interface to compressed data files/buffers
  - Version: 2.219 on 2026-03-09, with 19 votes
  - Previous CPAN version: 2.218 was before
  - Author: PMQS
- JSON::Schema::Modern - Validate data against a schema using a JSON Schema
  - Version: 0.633 on 2026-03-13, with 16 votes
  - Previous CPAN version: 0.632 was 2 months, 7 days before
  - Author: ETHER
- Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
  - Version: 0.74 on 2026-03-13, with 22 votes
  - Previous CPAN version: 0.74 was 1 day before
  - Author: DANAJ
- MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
  - Version: 2.040000 on 2026-03-09, with 29 votes
  - Previous CPAN version: 2.039000 was 8 days before
  - Author: MICKEY
- Module::CoreList - what modules shipped with versions of perl
  - Version: 5.20260308 on 2026-03-08, with 44 votes
  - Previous CPAN version: 5.20260220 was 15 days before
  - Author: BINGOS
- OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
  - Version: 0.7007 on 2026-03-13, with 15 votes
  - Previous CPAN version: 0.7006 was 10 months, 29 days before
  - Author: ETJ
- perl - The Perl 5 language interpreter
  - Version: 5.042001 on 2026-03-08, with 2248 votes
  - Previous CPAN version: 5.042001 was 14 days before
  - Author: SHAY
- SPVM - The SPVM Language
  - Version: 0.990146 on 2026-03-14, with 36 votes
  - Previous CPAN version: 0.990145 was before
  - Author: KIMOTO
- Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
  - Version: 1.044 on 2026-03-09, with 14 votes
  - Previous CPAN version: 1.043 was 8 months, 5 days before
  - Author: CHOROBA
- Test::Routine - composable units of assertion
  - Version: 0.032 on 2026-03-12, with 13 votes
  - Previous CPAN version: 0.031 was 2 years, 11 months before
  - Author: RJBS
- WWW::Mechanize::Chrome - automate the Chrome browser
  - Version: 0.76 on 2026-03-13, with 22 votes
  - Previous CPAN version: 0.75 was 4 months, 12 days before
  - Author: CORION
- X11::korgwm - a tiling window manager for X11
  - Version: 6.1 on 2026-03-08, with 14 votes
  - Previous CPAN version: 6.0 was before
  - Author: ZHMYLOVE
This is the weekly favourites list of CPAN distributions. Votes count: 61
Week's winner: Langertha (+3)
Build date: 2026/03/14 22:28:35 GMT
Clicked for first time:
- Alien::libmaxminddb - Find or install libmaxminddb
- Container::Builder - Build Container archives.
- Data::HashMap - Fast type-specialized hash maps implemented in C
- Data::Path::XS - Fast path-based access to nested data structures
- EV::Future - Minimalist and high-performance async control flow for EV
- Graph::Easy::As_svg - Output a Graph::Easy as Scalable Vector Graphics (SVG)
- HTTP::Handy - A tiny HTTP/1.0 server for Perl 5.5.3+
- LaTeX::Replicase - Perl extension implementing a minimalistic engine for filling real TeX-LaTeX files that act as templates.
- Linux::Event - Front door for the Linux::Event reactor and proactor ecosystem
- Linux::Event::Listen - Listening sockets for Linux::Event
- LTSV::LINQ - LINQ-style query interface for LTSV files
- Mail::Make - Strict, Fluent MIME Email Builder
- Router::Ragel - Router module using Ragel finite state machine
- Search::Tokenizer - Decompose a string into tokens (words)
- Term::ReadLine::Repl - A batteries included interactive Term::ReadLine REPL module
- Test::Mockingbird - Advanced mocking library for Perl with support for dependency injection and spies
- Unicode::Towctrans -
- XML::PugiXML - Perl binding for pugixml C++ XML parser
Increasing its reputation:
- Affix (+1=5)
- App::cpm (+1=78)
- App::perlbrew (+1=181)
- Class::XSConstructor (+1=9)
- Compress::Zstd (+1=7)
- CtrlO::PDF (+1=4)
- Data::MessagePack (+1=18)
- Data::Random (+1=4)
- DateTime::Format::ISO8601 (+1=10)
- DBD::Oracle (+1=33)
- DBD::Pg (+1=103)
- DBIx::DataModel (+1=13)
- Encode::Simple (+1=6)
- EV (+1=50)
- Eval::Closure (+1=11)
- File::HomeDir (+1=36)
- File::Map (+1=24)
- Graph::Easy (+1=11)
- Iterator::Simple (+1=8)
- Langertha (+3=2)
- Locale::Unicode::Data (+1=2)
- LV (+2=4)
- Math::GMPz (+1=4)
- MetaCPAN::Client (+1=27)
- Moose (+1=335)
- MooX::Cmd (+1=9)
- Net::Server (+1=35)
- OpenGL (+1=15)
- PDL (+1=61)
- Perl::Critic (+1=135)
- Pinto (+1=66)
- PLS (+1=18)
- Readonly (+1=24)
- Reply (+1=63)
- Sentinel (+1=9)
- Server::Starter (+1=23)
- Test2::Plugin::SubtestFilter (+1=4)
- Test::LWP::UserAgent (+1=15)
- Text::Trim (+1=7)
- Try::Tiny (+1=181)
If you run a development version of git built from master or next, you have probably seen it already. Today I was inspecting git's own logs and found this little gem. It supports my workflow to the max.
You can now configure git status to compare branches with your current branch.
When you configure status.comparebranches you can use @{upstream} and
@{push}, and you see both how far you have diverged from your upstream branch
and from your push branch. For those, like me, who track an upstream branch
that differs from their push branch, this is a mighty fine feature!
TL;DR
I didn’t like how the default zsh prompt truncation works. My solution, used in
my own custom-made prompt (fully supported by promptinit), uses a custom
precmd hook to dynamically determine the terminal’s available width.
Instead of blindly chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, and only removes or shortens non-critical segments in the middle, keeping the PS1 clean, contextual, and on a single line. This is done via a so-called “zig-zag” pattern, splitting the string on certain delimiters.
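The rules above can be sketched in plain sh (the function name and width argument are my own; the author's actual implementation is a zsh precmd hook wired into the prompt):

```shell
#!/bin/sh
# Hypothetical sketch of the truncation idea described above: always keep
# the leading ~ and the final directory name, and shorten each middle
# segment to its first character when the path exceeds the given width.
truncate_path() {
  p=$1 width=$2
  # Short enough already: print unchanged.
  [ ${#p} -le "$width" ] && { printf '%s\n' "$p"; return; }
  first=${p%%/*}                 # leading segment, e.g. "~"
  last=${p##*/}                  # current directory name, always preserved
  middle=${p#"$first"/}
  middle=${middle%/"$last"}
  short=""
  oldIFS=$IFS; IFS=/
  for seg in $middle; do
    short="$short${seg%"${seg#?}"}/"   # first character of each middle segment
  done
  IFS=$oldIFS
  printf '%s\n' "$first/$short$last"
}

truncate_path "~/projects/perl/Abacus/lib" 20   # → ~/p/p/A/lib
```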

The deadline for talks looms large, but assistance awaits!
This year, we have coaches available to help write your talk description, and to support you in developing the talk.
If you have a talk you would like to give, but cannot flesh out the idea before the deadline (March 15th; 6 days from now!), you should submit your bare-bones idea and check "Yes" on "Do you need assistance in developing this talk?".
We have more schedule space for talks than we did last year, and we would love to add new voices and wider topics, but time is of the essence, so go to https://tprc.us/, and spill the beans on your percolating ideas!
In zsh you can use CORRECT_IGNORE_FILE to exclude files from spelling
corrections (the autocorrect feature for commands). While handy, it is somewhat
limited because it is global. Now, I wanted to ignore files only for git, not for
other commands. But I haven’t found a way to target only git without writing a
wrapper around git (which I don’t want to do).
So I wrote an autoloaded function that does this for me. The idea is rather
simple. In your .zshrc you set a zstyle that declares which files should be
ignored, based on files (or directories) that exist in the current directory.
From this you either build the CORRECT_IGNORE_FILE environment variable or
just unset it. The function is then hooked into the chpwd action. I went
with three default check types: directory, file, or plain existence (d, f, or e).
File wins, then directory, then existence.
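A minimal sketch of the idea for .zshrc (the zstyle context, style names, and function name are all hypothetical; the author's real function also handles the f and e checks with the precedence described above):

```shell
# Sketch only: inside a git checkout (a .git directory exists), ignore
# *.orig files for zsh spelling correction; elsewhere, leave it unset.
zstyle ':correct:ignore' d:.git '*.orig'

_correct_ignore_chpwd() {
  local pattern
  if [[ -d .git ]] && zstyle -s ':correct:ignore' d:.git pattern; then
    CORRECT_IGNORE_FILE=$pattern    # build the ignore pattern...
  else
    unset CORRECT_IGNORE_FILE       # ...or drop it entirely
  fi
}
autoload -Uz add-zsh-hook
add-zsh-hook chpwd _correct_ignore_chpwd
_correct_ignore_chpwd               # also apply it to the startup directory
```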