magic.t: don't run %ENV keys through quotemeta on VMS

A couple of the tests for downgrading %ENV keys with UTF-8 characters in them were failing. It turns out the native facilities for setting and retrieving logical names can handle UTF-8 characters in the key, but not if you muddy the waters by injecting escape characters, which mean nothing here.
S_clear_special_blocks - allow caller to notice that cv has been freed
GH #16868: `use strict;END{{{{}}}}{END}END{e}` triggers an assertion failure:
```
perl: op.c:10342: CV *Perl_newATTRSUB_x(I32, OP *, OP *, OP *, OP *,
_Bool): Assertion `!cv || evanescent || SvREFCNT((SV*)cv) != 0'
failed.
```
In the following block, `S_clear_special_blocks` frees `cv`, but doesn't signal
that it has done so to the caller. The caller continues to act as if `cv` had
not been freed.
```
if (name) {
    if (PL_parser && PL_parser->error_count) {
        clear_special_blocks(name, gv, cv);
    }
```
This commit changes `S_clear_special_blocks` from a void function to
returning a CV*:
```
return SvIS_FREED(cv) ? NULL : cv;
```
The caller now assigns the result of the call back to `cv`.
This causes the test case to croak:
```
Bareword "e" not allowed while "strict subs" in use at -e line 1.
Execution of -e aborted due to compilation errors.
```
S_scan_const: abort compilation after \N{} errors
Upon encountering errors in parsing `\N{}` sequences, the parser used to
try to continue parsing for a bit before exiting. However, these errors
are - under certain circumstances - associated with problems with the
savestack being incorrectly adjusted.
GH #16930 is an example of this where:
* `PL_comppad_name` points to one struct during allocation of pad slots.
* Savestack activity causes `PL_comppad_name` to point somewhere else.
* The peephole optimiser is called, but needs `PL_comppad_name` to point
to the first struct to match up with the pad allocations.
With this commit, errors in parsing `\N{}` sequences are immediately fatal.
regcomp: Capture group names need to be legal Unicode names

Previous commits have explicitly made sure that Perl identifiers are legal Unicode names. This extends that to regular expression group (such as capturing) names.
toke.c: Add parse_ident_msg()

This new function can be used to have parse_ident() return an error message to its caller instead of dying. It turns out that regcomp.c is in want of this functionality.
I've gone through the Custom Data Labels documentation carefully, and reduced down to a simple example:
#!/usr/bin/perl
use strict;
use warnings;
use Excel::Writer::XLSX;
my $workbook = Excel::Writer::XLSX->new( 'chart_custom_labels.xlsx' );
my $worksheet = $workbook->add_worksheet();
# Chart data
my $data = [
    [ 'Cat', 'Dog', 'Pig' ],
    [ 10,    40,    50 ],
];
$worksheet->write( 'A1', $data );
# Custom labels
my $custom_labels = [
    { value => 'Jan' },
    { value => 'Feb' },
    { value => 'Mar' },
];
my $chart = $workbook->add_chart( type => 'column' );
# Configure the series with custom string data labels
$chart->add_series(
    categories  => '=Sheet1!$A$1:$A$3',
    values      => '=Sheet1!$B$1:$B$3',
    data_labels => {
        value  => 1,
        custom => $custom_labels,
    },
);
$workbook->close();
I expected this to apply labels of "Jan", "Feb", and "Mar" to the graph. However, the labels I get are just the values I would have gotten from value => 1 even if I had not included the custom labels line, i.e. 10, 40, 50:

I've also tried removing the value => 1 line but keeping the custom line, and that results in no labels at all. And I've tried a different approach where I keep the value => 1 line but use the delete property of the custom labels to remove some labels. That also did not work, and just kept the values as labels.
Is this functionality broken or am I missing something?
Environment details:
Cygwin
Perl v5.40.3
Excel::Writer::XLSX 1.03
If you don't like autovivification, or simply would like to make sure your code does not accidentally alter a hash, the Hash::Util module is for you.
You can lock_hash a hash and later unlock_hash it if you'd like to make some changes to it.
In this example you can see 3 different actions commented out. Each one would raise an exception if you tried it on a locked hash. After we unlock the hash we can execute those actions again.
I tried this both in perl 5.40 and 5.42.
use strict;
use warnings;
use feature 'say';
use Hash::Util qw(lock_hash unlock_hash);
use Data::Dumper qw(Dumper);
my %person = (
    fname => "Foo",
    lname => "Bar",
);
lock_hash(%person);
print Dumper \%person;
print "$person{fname} $person{lname}\n";
say "fname exists ", exists $person{fname};
say "language exists ", exists $person{language};
# $person{fname} = "Peti"; # Modification of a read-only value attempted
# delete $person{lname}; # Attempt to delete readonly key 'lname' from a restricted hash
# $person{language} = "Perl"; # Attempt to access disallowed key 'language' in a restricted hash
unlock_hash(%person);
$person{fname} = "Peti"; # Modification of a read-only value attempted
delete $person{lname}; # Attempt to delete readonly key 'lname' from a restricted hash
$person{language} = "Perl"; # Attempt to access disallowed key 'language' in a restricted hash
print Dumper \%person;
$VAR1 = {
'lname' => 'Bar',
'fname' => 'Foo'
};
Foo Bar
fname exists 1
language exists
$VAR1 = {
'language' => 'Perl',
'fname' => 'Peti'
};
Perl Developers who want to contribute to Perl open source development can learn how by joining a live online session with the Perl Maven Group.
Next live video session details :
Tuesday, February 10
1:00 PM - 3:00 PM EST
Register for the group via Luma on the link below :
https://luma.com/3vlpqn8g
Previous session recordings are available via Youtube ( Please Like and Subscribe to the Channel !!) :
Open source contribution - Perl - MIME::Lite - GitHub Actions, test coverage and adding a test
https://www.youtube.com/watch?v=XuwHFAyldsA
Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions
A Perl transpiler that converts Jinja2 templates to Template Toolkit 2 (TT2) syntax.
Source: https://github.com/lucianofedericopereira/jinja2tt2
Description
Jinja2 is deeply integrated with Python, making a direct port impractical. However, since TT2 and Jinja2 share similar concepts and syntax patterns, this transpiler performs a mechanical translation between the two template languages.
Why TT2?
TT2 and Jinja2 share:
- Variable interpolation: `{{ var }}` maps to `[% var %]`
- Control structures: `{% if %}` / `{% for %}` map to `[% IF %]` / `[% FOREACH %]`
- Filters: `{{ name|upper }}` maps to `[% name | upper %]`
- Includes, blocks, and inheritance (conceptually similar)
- Expression grammar close enough to map mechanically
Installation
No external dependencies beyond core Perl 5.20+.
git clone https://github.com/lucianofedericopereira/jinja2tt2
cd jinja2tt2
Usage
Command Line
# Transpile a file to stdout
./bin/jinja2tt2 template.j2
# Transpile with output to file
./bin/jinja2tt2 template.j2 -o template.tt
# Transpile in-place (creates .tt file)
./bin/jinja2tt2 -i template.j2
# From stdin
echo '{{ name|upper }}' | ./bin/jinja2tt2
# Debug mode (shows tokens and AST)
./bin/jinja2tt2 --debug template.j2
Programmatic Usage
use Jinja2::TT2;
my $transpiler = Jinja2::TT2->new();
# From string
my $tt2 = $transpiler->transpile('{{ user.name|upper }}');
# Result: [% user.name.upper %]
# From file
my $tt2 = $transpiler->transpile_file('template.j2');
Supported Constructs
Variables
{{ foo }} → [% foo %]
{{ user.name }} → [% user.name %]
{{ items[0] }} → [% items.0 %]
Filters
{{ name|upper }} → [% name.upper %]
{{ name|lower|trim }} → [% name.lower.trim %]
{{ items|join(", ") }} → [% items.join(', ') %]
{{ name|default("Guest") }} → [% (name || 'Guest') %]
Conditionals
{% if user %} → [% IF user %]
{% elif admin %} → [% ELSIF admin %]
{% else %} → [% ELSE %]
{% endif %} → [% END %]
Loops
{% for item in items %} → [% FOREACH item IN items %]
{{ loop.index }} → [% loop.count %]
{{ loop.first }} → [% loop.first %]
{{ loop.last }} → [% loop.last %]
{% endfor %} → [% END %]
Blocks and Macros
{% block content %} → [% BLOCK content %]
{% endblock %} → [% END %]
{% macro btn(text) %} → [% MACRO btn(text) BLOCK %]
{% endmacro %} → [% END %]
Comments
{# This is a comment #} → [%# This is a comment %]
Whitespace Control
{{- name -}} → [%- name -%]
{%- if x -%} → [%- IF x -%]
Other Constructs
- `{% include "file.html" %}` → `[% INCLUDE file.html %]`
- `{% set x = 42 %}` → `[% x = 42 %]`
- Ternary: `{{ x if cond else y }}` → `[% (cond ? x : y) %]`
- Boolean literals: `true`/`false` → `1`/`0`
Filter Mapping
| Jinja2 | TT2 Equivalent |
|---|---|
| upper | .upper |
| lower | .lower |
| trim | .trim |
| first | .first |
| last | .last |
| length | .size |
| join | .join |
| reverse | .reverse |
| sort | .sort |
| escape / e | |
| default | (var \|\| 'fallback') |
| replace | .replace |
Some filters require TT2 plugins (e.g., tojson needs Template::Plugin::JSON).
Loop Variable Mapping
| Jinja2 | TT2 |
|---|---|
| loop.index | loop.count |
| loop.index0 | loop.index |
| loop.first | loop.first |
| loop.last | loop.last |
| loop.length | loop.size |
Limitations
- Template inheritance (`{% extends %}`) requires manual adjustment for TT2's `WRAPPER` pattern
- Autoescape is not directly supported in TT2
- Some filters need custom TT2 plugins or vmethods
- Complex Python expressions may need review
Running Tests
prove -l t/
Architecture
- Tokenizer: Splits Jinja2 source into tokens (text, variables, statements, comments); a rough sketch of this stage follows after this list
- Parser: Builds an Abstract Syntax Tree (AST) from the token stream
- Emitter: Walks the AST and generates equivalent TT2 code
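As a rough illustration of the Tokenizer stage above (an assumed sketch, not the module's actual internals), splitting Jinja2 source on its delimiters could look something like this:
```perl
#!/usr/bin/perl
use strict;
use warnings;

# Split Jinja2 source into text, {{ ... }}, {% ... %} and {# ... #} chunks.
sub tokenize {
    my ($src) = @_;
    my @tokens;
    while ( $src =~ /\G( \{\{.*?\}\} | \{%.*?%\} | \{\#.*?\#\} | [^{]+ | \{ )/gsx ) {
        my $chunk = $1;
        my $type  = $chunk =~ /^\{\{/ ? 'variable'
                  : $chunk =~ /^\{%/  ? 'statement'
                  : $chunk =~ /^\{\#/ ? 'comment'
                  :                     'text';
        push @tokens, [ $type, $chunk ];
    }
    return @tokens;
}

# Prints three tokens: text, a variable tag, more text.
for my $t ( tokenize('Hello {{ name|upper }}, welcome!') ) {
    printf "%-9s %s\n", @$t;
}
```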
Credits
- Luciano Federico Pereira - Author
License
This is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License (LGPL) version 2.1 as published by the Free Software Foundation.
My name is Alex. Over the last few years I've implemented several versions of Raku's documentation format (Synopsis 26 / Raku's Pod) in Perl and JavaScript.
At an early stage, I shared the idea of creating a lightweight version of Raku's Pod with Damian Conway, the original author of the Synopsis 26 documentation specification (S26). He was supportive of the concept and offered several valuable insights that helped shape the vision of what later became Podlite.
Today, Podlite is a small block-based markup language that is easy to read as plain text, simple to parse, and flexible enough to be used everywhere — in code, notes, technical documents, long-form writing, and even full documentation systems.
This article is an introduction for the Perl community — what Podlite is, how it looks, how you can already use it in Perl via a source filter, and what’s coming next.
The Block Structure of Podlite
One of the core ideas behind Podlite is its consistent block-based structure. Every meaningful element of a document — a heading, a paragraph, a list item, a table, a code block, a callout — is represented as a block. This makes documents both readable for humans and predictable for tools.
Podlite supports three interchangeable block styles: delimited, paragraph, and abbreviated.
Abbreviated blocks (=BLOCK)
This is the most compact form.
A block starts with = followed by the block name.
=head1 Installation Guide
=item Perl 5.8 or newer
=para This tool automates the process.
- ends on the next directive or a blank line
- best used for simple one-line blocks
- cannot include configuration options (attributes)
Paragraph blocks (=for BLOCK)
Use this form when you want a multi-line block or need attributes.
=for code :lang<perl>
say "Hello from Podlite!";
- ends when a blank line appears
- can include complex content
- allows attributes such as `:lang`, `:id`, `:caption`, `:nested`, …
Delimited blocks (=begin BLOCK … =end BLOCK)
The most expressive form. Useful for large sections, nested blocks, or structures that require clarity.
=begin nested :notify<important>
Make sure you have administrator privileges.
=end nested
- explicit start and end markers
- perfect for code, lists, tables, notifications, markdown, formulas
- can contain other blocks, including nested ones
These block styles differ in syntax convenience, but all produce the same internal structure.
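For example, the code block from the paragraph-form example above could equally be written in delimited form, and both produce the same block:
=for code :lang<perl>
say "Hello from Podlite!";

=begin code :lang<perl>
say "Hello from Podlite!";
=end code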

Regardless of which syntax you choose:
- all three forms represent the same block type
- attributes apply the same way (`:lang`, `:caption`, `:id`, …)
- tools and renderers treat them uniformly
- nested blocks work identically
- you can freely mix styles inside a document
Example: Comparing POD and Podlite
Let’s see how the same document looks in traditional POD versus Podlite:

Each block has clear boundaries, so you don’t need blank lines between them. This makes your documentation more compact and easier to read. This is one of the reasons Podlite remains compact yet powerful: the syntax stays flexible, while the underlying document model stays clean and consistent.
This Podlite example is rendered as shown in the following screenshot:

Inside the Podlite Specification 1.0
One important point about Podlite is that it is first and foremost a specification. It does not belong to any particular programming language, platform, or tooling ecosystem. The specification defines the document model, syntax rules, and semantics.
From the Podlite 1.0 specification, notable features include:
- headings (`=head1`, `=head2`, …)
- lists and definition lists, including task lists
- tables (simple and advanced)
- CSV-backed tables
- callouts / notifications (`=nested :notify<tip|warning|important|note|caution>`)
- table of contents (`=toc`)
- includes (`=include`)
- embedded data (`=data`)
- pictures (`=picture` and inline `P<>`)
- formulas (`=formula` and inline `F<>`)
- user-defined blocks and markup codes
- Markdown integration
The =markdown block is part of the standard block set defined by the Podlite Specification 1.0.
This means Markdown is not an add-on or optional plugin — it is a fully integrated, first-class component of the language.
Markdown content becomes part of Podlite’s unified document structure, and its headings merge naturally with Podlite headings inside the TOC and document outline.
Below is a screenshot showing how Markdown inside Perl is rendered in the in-development VS Code extension, demonstrating both the block structure and live preview:

Using Podlite in Perl via the source filter
To make Podlite directly usable in Perl code, there is a module on CPAN: Podlite — Use Podlite markup language in Perl programs
A minimal example could look like this:
use Podlite; # enable Podlite blocks inside Perl
=head1 Quick Example
=begin markdown
Podlite can live inside your Perl programs.
=end markdown
print "Podlite active\n";
Roadmap: what’s next for Podlite
Podlite continues to grow, and the Specification 1.0 is only the beginning. Several areas are already in active development, and more will evolve with community feedback.
Some of the things currently planned or in progress:
- CLI tools
- command-line utilities for converting Podlite to HTML, PDF, man pages, etc.
- improve pipelines for building documentation sites from Podlite sources
- VS Code integration
- Ecosystem growth
- develop comprehensive documentation and tutorials
- community-driven block types and conventions
Try Podlite and share feedback
If this resonates with you, I’d be very happy to hear from you:
- ideas for useful block types
- suggestions for tools or integrations
- feedback on the syntax and specification
https://github.com/podlite/podlite-specs/discussions
Even small contributions — a comment, a GitHub star, or trying an early tool — help shape the future of the specification and encourage further development.
Useful links:
- CPAN: https://metacpan.org/pod/Podlite
- GitHub: https://github.com/podlite
- Specification
- Project site: https://podlite.org
- Roadmap: https://podlite.org/#Roadmap
Thanks for reading, Alex
Weekly Challenge 358
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.
Task 1: Max Str Value
Task
You are given an array of alphanumeric strings, @strings.
Write a script to find the max value of the alphanumeric strings in the given array. The value is the numeric representation of the string if it comprises digits only, otherwise the length of the string.
My solution
This task can be achieved in a single line in both Python and Perl, while still maintaining readability. For the Python solution I use a generator expression to convert each string into an integer (if it is all digits) or the length of the string (if it isn't). This is wrapped in the max function to return the maximum (largest) of these values.
def max_string_value(input_strings: list) -> int:
return max(int(s) if s.isdigit() else len(s) for s in input_strings)
Not to be outdone, Perl can achieve similar functionality by using the map function, converting each string into its numeric representation or its length. Perl doesn't have a built-in max function, but it is available from the List::Util module.
sub main (@input_strings) {
say max( map { /^\d+$/ ? $_ : length($_) } @input_strings );
}
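For reference, the snippet above needs a couple of surrounding lines to run as a standalone script; a minimal wrapper (my addition, assuming a perl recent enough for subroutine signatures) could be:
```perl
#!/usr/bin/perl
use v5.36;                 # enables say and subroutine signatures
use List::Util qw(max);    # max, as mentioned above

sub main (@input_strings) {
    say max( map { /^\d+$/ ? $_ : length($_) } @input_strings );
}

main(@ARGV);
```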
Examples
$ ./ch-1.py "123" "45" "6"
123
$ ./ch-1.py "abc" "de" "fghi"
4
$ ./ch-1.py "0012" "99" "a1b2c"
99
$ ./ch-1.py "x" "10" "xyz" "007"
10
$ ./ch-1.py "hello123" "2026" "perl"
2026
Task 2: Encrypted String
Task
You are given a string $str and an integer $int.
Write a script to encrypt the string using the following algorithm: for each character $char in $str, replace $char with the character $int positions after $char in the alphabet, wrapping if needed, and return the encrypted string.
My solution
For this task, I start by reducing $int (called i in Python, as int is a built-in name) modulo 26. If that value is 0, I return the original string, as no encryption is required.
def encrypted_string(input_string: str, i: int) -> str:
i = i % 26
if i == 0:
return input_string
The next step is creating a mapping table. I start with the variable old_letters that has all the lower case letters of the English alphabet. I create a new_letters string by slicing the old_letters string at the appropriate point. I then double the length of each string by adding the upper case equivalent string. Finally, I use dict(zip()) to convert the strings to a dictionary where the key is the original letter and the value is the new letter.
old_letters = string.ascii_lowercase
new_letters = old_letters[i:] + old_letters[:i]
old_letters += old_letters.upper()
new_letters += new_letters.upper()
mapping = dict(zip(old_letters, new_letters))
The final step is to loop through each character and use the mapping dictionary to replace the letter, or use the original character if it is not found (numbers, spaces, punctuation characters, etc).
return "".join(mapping.get(char, char) for char in input_string)
The Perl code follows the same logic. It uses the splice function to create the new_letters list, and both old_letters and new_letters are arrays. The mesh function also comes from the List::Util module. Perl will automatically convert the flat list it returns into key/value pairs in the %mapping hash.
sub main ( $input_string, $i ) {
    $i %= 26;
    if ( $i == 0 ) {
        say $input_string;
        return;
    }
    my @old_letters = my @new_letters = ( "a" .. "z" );
    push @new_letters, splice( @new_letters, 0, $i );
    push @old_letters, map { uc } @old_letters;
    push @new_letters, map { uc } @new_letters;
    my %mapping = mesh \@old_letters, \@new_letters;
    say join "", map { $mapping{$_} // $_ } split //, $input_string;
}
Examples
$ ./ch-2.py abc 1
bcd
$ ./ch-2.py xyz 2
zab
$ ./ch-2.py abc 27
bcd
$ ./ch-2.py hello 5
mjqqt
$ ./ch-2.py perl 26
perl
All three of us discussed:
- We agree with the general idea of an improved PRNG, so we encourage Scott to continue working on the PR to get it into a polished state ready for merge
- Haarg’s “missing import” PR now looks good; Paul has LGTM’ed it
- TLS in core still remains a goal for the next release cycle.
Crypt::OpenSSL3 might now be in a complete enough state to support a minimum viable product "https" client being built on top of it, which could be used by an in-core CPAN client
"Perl is slow."
I've heard this for years, ever since I started. You probably have too. And honestly? For a long time, I didn't have a great rebuttal. Sure, Perl's fast enough for most things; it's well known for text processing, glue code and quick scripts. But when it comes to object-heavy code, the critics have a point.
We will begin by looking at the myth of Perl being slow a little more deeply. Here's a benchmark between Perl and Python using CPU seconds, a fair comparison that measures actual work done:
=== PERL (5 CPU seconds per test) ===
Integer arithmetic 1,072,800/s
Float arithmetic 398,800/s
String concat 970,000/s
Array push/iterate 368,800/s
Hash insert/iterate 84,800/s
Function calls 244,000/s
Regex match 12,921,200/s
=== PYTHON (5 CPU seconds per test) ===
Integer arithmetic 777,200/s
Float arithmetic 512,400/s
String concat 627,200/s
List append/iterate 476,400/s
Dict insert/iterate 140,600/s
Function calls 331,400/s
Regex match 10,543,713/s
The results are more nuanced than the "Perl is slow" narrative suggests:
| Operation | Winner | Margin |
|---|---|---|
| Integer arithmetic | Perl | 1.4x faster |
| Float arithmetic | Python | 1.3x faster |
| String concat | Perl | 1.5x faster |
| Array/List ops | Python | 1.3x faster |
| Hash/Dict ops | Python | 1.7x faster |
| Function calls | Python | 1.4x faster |
| Regex match | Perl | 1.2x faster |
Perl wins at what it's always been good at: integers, strings, and regex. Python wins at floats, data structures, and function calls, areas where I am told Python 3.x has seen heavy optimisation work.
But here's the thing that surprised me: neither language is dramatically faster than the other for basic operations. The differences are measured in fractions, not orders of magnitude. So where does the "Perl is slow" reputation actually come from?
Object-oriented code. Let's run that same fair comparison:
=== Object creation + 2 method calls (5M iterations) ===
Perl bless: 4,155,178/s (1.20 sec)
Python class: 5,781,818/s (0.86 sec)
Okay, this is not so bad. Perl's only 40% behind. But now let's look at what people actually use these days: Moo.
=== Object creation + 2 method calls (5M iterations) ===
Perl bless: 4,176,222/s (1.20 sec)
Moo class: 843,708/s (5.93 sec)
Python class: 5,590,052/s (0.89 sec)
Wait, what? Moo is 6.6x slower than Python. And it's 5x slower than plain bless.
Layer this with actual business logic and that, I'd guess, is where "Perl is slow" actually comes from. It all comes down to layers. Every Moo accessor has been optimised, but if you enable all the features you build a call stack, with each layer adding overhead:
$obj->name
└─> accessor method (generated sub)
└─> type constraint check
└─> coercion check
└─> trigger check
└─> lazy builder check
└─> finally: $self->{name}
Each of those subroutine calls means:
- Push arguments onto the stack (~3-5 ops)
- Create a new scope (localizing variables)
- Execute the check (even if it's just "return true")
- Pop the stack and return (~3-5 ops)
Even a "simple" Moo accessor with just a type constraint involves roughly 30+ additional operations compared to a plain hash access. The type constraint alone might call:
- `has_type_constraint()` - is there a constraint?
- `type_constraint()` - get the constraint object
- `check()` - call the constraint's check method
- The actual validation logic
Multiply that by two accessors per iteration, five million iterations, and suddenly you're spending 5 seconds instead of 1.
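If you want to reproduce this kind of comparison, the core Benchmark module can run each variant for a fixed amount of CPU time. A rough sketch with my own stand-in classes (not the exact classes behind the numbers above, and Moo must be installed from CPAN):
```perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# A plain bless-based class.
package PlainCat {
    sub new  { my ( $class, %args ) = @_; return bless {%args}, $class }
    sub name { $_[0]{name} }
    sub age  { $_[0]{age} }
}

# The Moo equivalent.
package MooCat {
    use Moo;
    has name => ( is => 'ro' );
    has age  => ( is => 'ro' );
}

# A negative count means "run each sub for at least that many CPU seconds".
cmpthese( -5, {
    'bless' => sub { my $c = PlainCat->new( name => 'Tom', age => 3 ); $c->name; $c->age },
    'moo'   => sub { my $c = MooCat->new( name => 'Tom', age => 3 );   $c->name; $c->age },
} );
```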
This is the trade-off Moo makes: flexibility and safety in exchange for speed. And for most applications, it's the right trade-off; even Python does this, with pydantic, which roughly halves the performance of Python objects.
I've spent more time than I'd care to admit thinking about this question. Not in a "let's rewrite everything in Rust" kind of way, but genuinely asking: what would it take to make Perl's object system competitive with languages people actually consider fast?
The answer, it turns out, was inside a CPAN module first released on 'Mon Jul 24 11:23:25 2000'. This was highlighted to me by another developer; I am indeed one of the three people who not only read his blog posts but also often find themselves lost within his interesting coding patterns.
So this is the story of the four modules that changed how I think about Perl performance: Marlin, Meow, Inline and XS::JIT. They're different tools with different philosophies, but together they represent something I never quite expected to see: Perl object access that's actually faster than Python's equivalent. Not "almost as fast." Faster.
The Marlin story: A faster fish in the Moose family
If you've written any serious Perl in the last fifteen years, you've probably used Moose. Or Moo. Or Mouse. The naming convention is... well, it's a thing we do now.
Marlin fits right into that tradition, and the name's not accidental. Marlins are among the fastest fish in the ocean. That's the pitch: everything you love about Moose-style OO, but with speed as a first-class concern.
Toby Inkster released Marlin in late 2025, and it caught my attention; as I said, many of his projects do. I'd previously attempted to write a fast OO system myself (Meow), but was struggling to even compete with Moo despite being entirely XS. Partly ability, partly still learning, mostly not doing the work at the right compile-time stage.
With my interest piqued, I installed Marlin, played with the API, and ran some benchmarks:
Benchmark: 1,000,000 iterations
Rate Meow Moo Marlin Mouse
Meow 606,061/s -- -1% -45% -47%
Moo 609,756/s 1% -- -45% -46%
Marlin 1,098,901/s 81% 80% -- -3%
Mouse 1,136,364/s 87% 86% 3% --
Marlin performed well. Meow at that point was... not impressive. But I liked Marlin's API and, understanding my own implementation's limitations, I was satisfied enough with the speed to build my Claude modules around it, while also expecting it would likely improve in performance.
A few weeks later (a lot happened in between), on a Friday evening I randomly decided to revisit my Meow directory. Could I fix some of the flaws based on my recent learnings? I managed to, and saw a huge improvement in my own benchmarks. So I updated to the latest Marlin for a fair comparison.
I was expecting Meow to be faster now since I'm doing much less in this minimalist approach. But what I actually found surprised me:
Benchmark: 10,000,000 iterations
Rate Moo Mouse Meow Marlin
Moo 868,810/s -- -47% -60% -81%
Mouse 1,626,016/s 87% -- -26% -64%
Meow 2,183,406/s 151% 34% -- -52%
Marlin 4,504,505/s 418% 177% 106% --
Marlin had gotten dramatically faster, over 4x improvement from the version I'd first tested. Toby had clearly been busy. And while Meow had improved too, it was still only half of Marlin's speed.
This was the moment that changed everything. I needed to understand how Marlin achieved this. What was I missing?
Just in time optimisation
As I mentioned, I read other people's code. I read Toby's posts on Marlin and how he'd studied Mouse's optimisation strategy: only validate when you absolutely need to. But when I started tracing through Marlin's actual implementation, something clicked.
The key insight is in Marlin::Attribute::install_accessors. Here's what happens when Marlin sets up a reader:
if ( $type eq 'reader' and !$me->has_simple_reader and $me->xs_reader ) {
    $me->{_implementation}{$me->{$type}} = 'CXSR'; # Class::XSReader
}
elsif ( HAS_CXSA and $me->has_simple_reader ) {
    # Use Class::XSAccessor for simple cases
    Class::XSAccessor->import( class => $me->{package}, ... );
}
Marlin makes a compile-time decision: what kind of accessor does this attribute actually need?
- Simple getter (no default, no lazy, no type check on read)? → Use `Class::XSAccessor`, which is pure XS and blindingly fast
- Getter with lazy default or type coercion? → Use `Class::XSReader`, which handles the complexity in optimised C
- Something exotic (auto_deref, custom behaviour)? → Fall back to generated Perl
This is the magic. Most Moo-style accessors go through a generic code path that handles every possible feature, even features you're not using. Marlin analyses your attribute definition at compile time and generates the minimal accessor that satisfies your requirements.
Consider a read-only attribute with a type but no default:
# Moo accessor path:
$obj->name
→ check if lazy builder needed # nope, but we still check
→ check if default needed # nope, but we still check
→ check if coercion needed # nope, but we still check
→ finally: $self->{name}
# Marlin accessor (Class::XSAccessor):
$obj->name
→ $self->{name} # that's it. One XS call.
The type constraint? Marlin validates it in the constructor, not the getter. Once an object is built, reading an attribute is just a hash lookup: no validation, no subroutine calls, no stack manipulation.
This is why Marlin went from 1.1M ops/sec to 4.5M ops/sec between versions. Toby wasn't just optimising code. He was eliminating entire categories of runtime work by moving decisions to compile time.
A different approach is used for Class::XSConstructor. This reuses a generic XSUB but passes the class data via a custom pointer. The sub is then optimised so it does not need to reach back into perl for stash and HV lookups, etc.
Some of this is JIT compilation, but done at module load time rather than runtime. By the time your code calls ->new or ->name, all the decisions have been made. All that's left is the actual work.
This was my revelation: the path to fast Perl OO isn't avoiding features, it's avoiding runtime feature detection. Know what you need at compile time, generate optimised code for exactly that, and get out of the way.
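To make that concrete, here is a rough pure-Perl sketch of the principle (not Marlin's or Meow's actual code; the spec keys are invented for illustration): decide once, at class-build time, which accessor body an attribute needs, and install exactly that sub.
```perl
use strict;
use warnings;

# Install a reader for $name into $class. The decision about which body to
# use happens here, once, instead of on every call.
sub install_reader {
    my ( $class, $name, %spec ) = @_;
    no strict 'refs';

    if ( !$spec{lazy} && !$spec{default} ) {
        # Simple case: nothing to check at read time, so the accessor is
        # just a hash lookup.
        *{"${class}::${name}"} = sub { $_[0]{$name} };
    }
    else {
        # Complex case: only this path pays for the lazy-default handling.
        *{"${class}::${name}"} = sub {
            my ($self) = @_;
            $self->{$name} = $spec{default}->($self)
                if !exists $self->{$name} && $spec{default};
            return $self->{$name};
        };
    }
}

install_reader( 'My::Point', 'x' );                                   # plain lookup
install_reader( 'My::Point', 'label', default => sub { 'origin' } );  # lazy default

my $p = bless { x => 3 }, 'My::Point';
print $p->x,     "\n";    # 3
print $p->label, "\n";    # origin (built on first read)
```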
Now the question became: could I apply this same principle to Meow? It was already set up to build a simple hash that represented the object, so I had what I needed, but I wanted to do this in a backwards-compatible way.
Enter Inline::C
Armed with the understanding of why Marlin was fast, I had a hypothesis: if I could generate XS accessors at compile time tailored to each attribute's needs, Meow could achieve the same performance.
I needed to generate custom C code and then execute it. For Perl, that problem was solved back in 2000 by Ingy döt Net with Inline::C.
The idea was simple: when Meow sees ro name => Str, it should generate C code for an accessor that:
- Takes the object
- Returns the value at the slot index for `name`
- That's it. No method dispatch, no type checking, no feature checking.
I didn't want to just break everything, so I leaned on the Moose convention and added a make_immutable phase. When this is called, it compiles the C code needed to generate an optimised XS package, and this is fed into Inline::C. The first run would compile; subsequent runs would use the cached .so.
And it worked. I had to change the benchmark to CPU seconds to get a fair result, and I've also included a Cor test here, which does not have type checking like Marlin or Meow do.
Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds...
Cor: 5 wallclock secs ( 5.13 usr + 0.02 sys = 5.15 CPU) @ 2,886,788/s
Marlin: 5 wallclock secs ( 5.01 usr + 0.11 sys = 5.12 CPU) @ 4,523,074/s
Meow: 5 wallclock secs ( 5.16 usr + 0.02 sys = 5.18 CPU) @ 4,558,344/s
As you can see, Meow had caught Marlin. Actually, it was slightly faster, 4.56M vs 4.52M ops/sec, but this is to be expected as Meow does a lot less than Marlin.
But my bottleneck was now Inline::C itself, and, well, nobody wants to write C/XS, let alone concatenate it. The problems:
- Startup overhead: first compilation was slow, several seconds for a complex class
- Dependencies: `Inline::C` pulls in `Parse::RecDescent`, adding complexity to the dependency chain
- Build process: it generates a full `Makefile.PL` and runs the ExtUtils::MakeMaker machinery
- Caching: the caching mechanism is designed for "write once" scripts, not dynamic code generation
For a proof of concept, Inline::C was perfect. But for a production module, I needed something leaner. That's when I started looking at what Inline::C actually does under the hood, and wondering how much of it I could strip away.
Under the hood: XS::JIT as the secret weapon
Inline::C proved the concept worked, but it came with baggage. Every compile spawned a full Makefile.PL build process. Dependencies bloated the install. And the caching system, designed for write-once scripts, wasn't ideal for dynamic code generation.
So I started picking apart what Inline::C actually does:
- Parse C code to find function signatures
- Generate XS wrapper code
- Generate a `Makefile.PL`
- Run `perl Makefile.PL && make`
- Load the resulting `.so`
And yes, this happens even when you use bind Inline C => ... instead of the use form. The bind keyword just defers compilation to runtime rather than compile time. It doesn't change what gets done, only when. You still get the full Parse::RecDescent parsing, the xsubpp processing, the MakeMaker dance. The only difference is whether it happens at use time or when bind is called.
Most of this was unnecessary for my use case. I didn't need function parsing, I already knew what functions I was generating. I didn't need XS wrappers, I was writing XS-native code directly. And I definitely didn't need the Makefile.PL dance.
XS::JIT strips all of that away. It's a single-purpose tool: take C code, compile it, load it, install the functions. No parsing. No xsubpp. No make. Direct compiler invocation.
Here's what the C API looks like:
#include "xs_jit.h"

/* Function mapping - where to install what */
XS_JIT_Func funcs[] = {
    { "Cat::new",  "cat_new",  0, 1 },  /* target, source, varargs, xs_native */
    { "Cat::name", "cat_name", 0, 1 },
    { "Cat::age",  "cat_age",  0, 1 },
};

/* Compile and install in one call */
int ok = xs_jit_compile(aTHX_
    c_code,            /* Your generated C code */
    "Meow::JIT::Cat",  /* Unique name for caching */
    funcs,             /* Function mapping array */
    3,                 /* Number of functions */
    "_CACHED_XS",      /* Cache directory */
    0                  /* Don't force recompile */
);
That's it. One function call. The first time it runs, XS::JIT:
- Generates a boot function that registers all the XS functions
- Compiles directly with the system compiler (`cc -shared -fPIC ...`)
- Loads the `.so` with `DynaLoader`
- Installs each function into its target namespace
Subsequent runs? It hashes the C code, finds the cached .so, and just loads it. The compile step vanishes entirely.
The key insight is the is_xs_native flag. When set, XS::JIT creates a simple alias: no wrapper function, no stack manipulation, no overhead. Your C function is the XS function:
XS_EUPXS(cat_name) {
    dVAR; dXSARGS;
    SV *self = ST(0);
    AV *av = (AV*)SvRV(self);
    SV **slot = av_fetch(av, 0, 0); /* slot 0 = name */
    ST(0) = slot ? *slot : &PL_sv_undef;
    XSRETURN(1);
}
No wrapper. No intermediate calls.
This is exactly what Meow needed. During make_immutable, it:
- Analyses each attribute's requirements (type constraint? coercion? trigger?)
- Generates minimal XS accessor code for each one
- Generates an optimised XS constructor that handles all attributes in one pass
- Hands the code to XS::JIT for compilation
- Gets back installed functions ready to call
The entire JIT compilation happens once per class, at module load time. By the time your code runs, everything is native XS.
Comparing the approaches
Here's what actually happens at runtime for each framework:
Moo accessor call:
$obj->name
→ Perl method dispatch
→ Generated Perl subroutine
→ has_type_constraint() check
→ type_constraint() fetch
→ check() call
→ finally: $self->{name}
Stack frames: 4-6. Operations: ~30.
Marlin accessor call (Class::XSAccessor):
$obj->name
→ Perl method dispatch
→ XS accessor
→ $self->{name}
Stack frames: 1. Operations: ~5.
Note: Toby has some slot magic also
Meow accessor call (XS::JIT):
$obj->name
→ Perl method dispatch
→ XS accessor
→ $self->[SLOT_INDEX]
Stack frames: 1. Operations: ~4 (arrays are slightly faster than hashes).
The benchmark results
With XS::JIT in place, here's where Meow now landed:
Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds... (Marlin and Meow have type constraint checking)
Cor: 5 wallclock secs ( 5.13 usr + 0.02 sys = 5.15 CPU) @ 2886788.16/s (n=14866959)
Marlin: 5 wallclock secs ( 5.01 usr + 0.11 sys = 5.12 CPU) @ 4523074.80/s (n=23158143)
Meow: 5 wallclock secs ( 5.16 usr + -0.01 sys = 5.15 CPU) @ 5196218.06/s (n=26760523)
Benchmark: running Marlin, Meow, Moo, Mouse for at least 5 CPU seconds...
Marlin: 5 wallclock secs ( 5.22 usr + 0.13 sys = 5.35 CPU) @ 4814728.04/s (n=25758795)
Meow: 5 wallclock secs ( 5.23 usr + 0.01 sys = 5.24 CPU) @ 5203329.96/s (n=27265449)
Moo: 4 wallclock secs ( 5.28 usr + 0.00 sys = 5.28 CPU) @ 860649.81/s (n=4544231)
Mouse: 6 wallclock secs ( 5.29 usr + 0.01 sys = 5.30 CPU) @ 1603849.25/s (n=8500401)
Rate Moo Mouse Marlin Meow
Moo 860650/s -- -46% -82% -83%
Mouse 1603849/s 86% -- -67% -69%
Marlin 4814728/s 459% 200% -- -7%
Meow 5203330/s 505% 224% 8% --
I must be honest: around this time I had not yet implemented the full benchmarks against Perl and Python. I didn't fully understand the difference, so I wondered whether I was hitting the limits of my own hardware (it was late, or early in the morning). Anyway, I kept pushing and ran a benchmark where I accessed the slot directly as an array reference. This got me excited:
Meow (direct) 7,172,481/s 778% 347% 50% 14%
I was seeing a huge improvement. I spent some time making an API that was a little nicer by exposing constants as slot indexes:
{
    package Cat;
    use Meow;
    ro name => Str;
    ro age  => Int;
    make_immutable;    # Creates $Cat::NAME, $Cat::AGE
}
# Direct slot access
my $name = $cat->[$Cat::NAME];
I was now on par with Python, but I wanted more. There had to be a way to get that array access without the ugly syntax.
So I dug deeper into Perl's internals and found the missing magic: cv_set_call_checker and custom ops.
The entersub bypass: Custom ops
Here's what normally happens when you call a method in Perl:
name($cat)
→ OP_ENTERSUB (the "call function" op)
→ Push arguments onto stack
→ Look up the CV (code value)
→ Set up new stack frame
→ Execute the XS function
→ Pop stack frame
→ Return
Even for our minimal XS accessor, there's overhead: the entersub op itself, the stack frame setup, the CV lookup. What if we could eliminate all of that?
Perl provides a hook called cv_set_call_checker. It allows you to register a "call checker" function that runs at compile time when the parser sees a call to your subroutine. The checker can inspect the op tree and crucially replace it with something else entirely.
Here's what Meow does:
static void _register_inline_accessor(pTHX_ CV *cv, IV slot_index, int is_ro) {
    SV *ckobj = newSViv(slot_index); /* Store slot index for later */
    cv_set_call_checker_flags(cv, S_ck_meow_get, ckobj, 0);
}
When the checker sees name($cat), it:
- Extracts the `$cat` argument from the op tree
- Frees the entire `entersub` operation
- Creates a new custom op with the slot index baked in
- Returns that instead
The custom op is trivially simple:
static OP *S_pp_meow_get(pTHX) {
    dSP;
    SV *self = TOPs;
    PADOFFSET slot_index = PL_op->op_targ; /* Baked into the op */
    SV **ary = AvARRAY((AV*)SvRV(self));
    SETs(ary[slot_index] ? ary[slot_index] : &PL_sv_undef);
    return NORMAL;
}
That's the entire accessor. No function call. No stack frame. No CV lookup. The slot index is embedded directly in the op structure. The Perl runloop executes this op directly; it's as close to $cat->[$NAME] as you can get while still looking like name($cat).
This is the same technique that builtin::true and builtin::false use in Perl 5.36+. It's also how List::Util::first can be optimised when given a simple block.
The final benchmark
With custom ops in place via import_accessors, here's how the Perl OO frameworks compare:
Benchmark: running Marlin, Meow, Meow (direct), Meow (op), Moo, Mouse for at least 5 CPU seconds...
Marlin: 6 wallclock secs ( 5.09 usr + 0.11 sys = 5.20 CPU) @ 4766685.58/s (n=24786765)
Meow: 5 wallclock secs ( 5.29 usr + 0.01 sys = 5.30 CPU) @ 6289606.79/s (n=33334916)
Meow (direct): 5 wallclock secs ( 5.32 usr + 0.01 sys = 5.33 CPU) @ 7172480.86/s (n=38229323)
Meow (op): 5 wallclock secs ( 5.16 usr + 0.01 sys = 5.17 CPU) @ 7394453.19/s (n=38229323)
Moo: 4 wallclock secs ( 5.44 usr + 0.02 sys = 5.46 CPU) @ 816865.93/s (n=4460088)
Mouse: 4 wallclock secs ( 5.18 usr + 0.01 sys = 5.19 CPU) @ 1605727.55/s (n=8333726)
Rate Moo Mouse Marlin Meow Meow (direct) Meow (op)
Moo 816866/s -- -49% -83% -87% -89% -89%
Mouse 1605728/s 97% -- -66% -74% -78% -78%
Marlin 4766686/s 484% 197% -- -24% -34% -36%
Meow 6289607/s 670% 292% 32% -- -12% -15%
Meow (direct) 7172481/s 778% 347% 50% 14% -- -3%
Meow (op) 7394453/s 805% 361% 55% 18% 3% --
Now let's test that directly against Python:
============================================================
Python Direct Benchmark (slots + property accessors)
============================================================
Python version: 3.9.6 (default, Dec 2 2025, 07:27:58)
[Clang 17.0.0 (clang-1700.6.3.2)]
Iterations: 5,000,000
Runs: 5
------------------------------------------------------------
Run 1: 0.649s (7,704,306/s)
Run 2: 0.647s (7,733,902/s)
Run 3: 0.646s (7,736,307/s)
Run 4: 0.648s (7,720,909/s)
Run 5: 0.649s (7,702,520/s)
------------------------------------------------------------
Median rate: 7,720,909/s
============================================================
============================================================
Perl/Meow Benchmark Comparison
============================================================
Perl version: 5.042000
Iterations: 5000000
Runs: 5
------------------------------------------------------------
Inline Op (one($foo)):
Run 1: 0.638s (7,841,811/s)
Run 2: 0.629s (7,954,031/s)
Run 3: 0.631s (7,929,850/s)
Run 4: 0.631s (7,926,316/s)
Run 5: 0.633s (7,901,675/s)
Median: 7,926,316/s
============================================================
Summary:
------------------------------------------------------------
Inline Op: 7,926,316/s
============================================================
Conclusion: Why JIT might be the right approach
Looking back at this journey, a pattern emerges. The fastest code isn't the cleverest code. It's the code that does the least work at runtime.
Moo is slow because of the abstraction.
Marlin proved that you could have Moo's features without Moo's overhead by making smart choices at compile time. If an accessor doesn't need lazy building, don't generate code that checks for lazy building.
Meow pushed this further: if you're going to generate code at compile time anyway, why not generate exactly the code you need? Not a generic accessor that handles many cases, but a specific accessor for this specific attribute on this specific class.
And XS::JIT made that practical. Without a lightweight JIT compiler, dynamic XS generation would require shipping a C toolchain with every module, or adding multi-megabyte dependencies. XS::JIT strips the problem down to its essence: take C code, compile it, load it.
The result is object access that competes with, and sometimes beats, languages that have had decades of optimisation work. Not because Perl's interpreter got faster, but because we stopped asking it to do unnecessary work.
Is this approach right for every project? No. Most applications don't need 7 million object accesses per second.
But for the times when performance matters (hot loops, high-frequency trading, real-time systems) it's good to know the ceiling isn't as low as we thought. Perl can be fast. We just needed to get out of its way.
The modules discussed in this post:
- Marlin: https://metacpan.org/pod/Marlin
- Meow: https://metacpan.org/pod/Meow
- XS::JIT: https://metacpan.org/pod/XS::JIT
- Inline::C: https://metacpan.org/pod/Inline::C
Originally published at Perl Weekly 757
Hi there!
On Saturday (evening for me, noon-ish in the Americas) we had an excellent meeting, and there are recordings you can watch. In the first hour I showed some PRs I sent to MIME::Lite. You can watch the video here. In the second hour we changed the setup and continued in driver-navigator style pair programming. I was giving the instructions and two other participants made the changes and sent the PRs. Others in the audience made suggestions. So actually this was mob programming. As far as I know, this was the first time they had contributed to open source projects. One of the PRs was accepted while we were still in the meeting. Talk about quick feedback and fast ROI. You can watch the video here. Don't forget to 'like' the videos on YouTube and to follow the channel!
I've scheduled the next such event. Register here! My hope is that many more of you will participate, and then, after getting a taste and having some practice, you'll spend 15-20 minutes a day (2 hours a week) on similar contributions. Having 10-20 or maybe even 100 people doing that consistently will have a huge impact on Perl within a year.
Before that, however, there is the FOSDEM Community dinner on Saturday, if you are in Brussels.
Enjoy your week!
--
Your editor: Gabor Szabo.
Announcements
FOSDEM Community dinner information
On 31st January 2026 19:30,
Announcing the Perl Toolchain Summit 2026!
The 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.
Articles
Otobo supports the German Perl Workshop
Otobo is the Open Source Service Management Platform, a 2019 fork of OTRS.
vitroconnect sponsors the German Perl Workshop
Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions
One hour long video driver-navigator style pair-programming contributing to open source Perl modules.
Open source contribution - Perl - MIME::Lite - GitHub Actions, test coverage and adding a test
One hour long presentation about 3 pull-requests that were sent to MIME::Lite
SBOM::CycloneDX 1.07 is released
A new version of SBOM::CycloneDX with support for the OWASP CycloneDX 1.7 specification (ECMA-424).
🚀 sqltool: A Lightweight Local MySQL/MariaDB Instance Manager (No Containers Needed)
Venus v5 released: Modern OO standard library (and more) for Perl 5
Discuss it on Reddit
Ready, Set, Compile... you slow Camel
An excellent writeup on the process of optimization. Basically saying: don't do what you don't have to. This is specifically about optimizing OOP systems in Perl. Feel free to comment either on the bpo version of the article or here.
Call for proofreaders : blogging on beautiful Perl features
Laurent is looking for help with Python and Java for an article series he is writing. Send him an email!
Discussion
I wrote a Plack handler for HTTP/2, and it's now available on CPAN :)
Features of Plack-Handler-H2:
* Full HTTP/2 spec via nghttp2
* Non-blocking via libevent
* Supports the entire PSGI spec
* Automatically generates self-signed certs if none are provided as args
Geo::Gpx.pm: no 'speed' field (even is GPX 1.0?)
Web
ANNOUNCE: Perl.Wiki V 1.38 & Mojolicious.Wiki V 1.12
I'll Have a Mojolicious::Lite
Gwyn built mojoeye, a tiny Perl app to run system and security checks across their internal Linux hosts.
Perl
Retrospective on the Perl Development Release 5.43.7
Corion mentions a number of places where things can be improved. I am surprised that the whole process is not fully automated yet. I mean some of the brightest people in the Perl community work on the core of perl.
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 358
Welcome to a new week with a couple of fun tasks "Max Str Value" and "Encrypted String". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 357
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Kaprekar Constant" and "Unique Fraction Generator" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
Uniquely Constant
The article skillfully uses Raku's comb, sort, and flip operations for digit manipulation to offer a straightforward and idiomatic solution to the Kaprekar constant problem. It is both instructive and useful for Raku programmers since it carefully addresses edge cases like non-convergence and shows verbose iteration output.
Perl Weekly Challenge: Week 357
The post offers concise and illustrative Perl and Raku solutions to the tasks from Week 357, particularly the Kaprekar Constant implementation with examples that match the problem specification and well-explained iteration logic. For Perl enthusiasts, its clear explanations and references to actual Wikipedia details make the algorithms simple to understand and instructive.
Fractional Fix Points
The Kaprekar Constant and Unique Fraction Generator tasks are explained in a clear and organised manner in this post, which also provides step-by-step iteration breakdowns and solid examples to illustrate the problem. For Perl/Raku learners taking on the Weekly Challenge, its solutions demonstrate careful algorithm design and address important edge cases, making it instructive and useful.
Perl Weekly Challenge 357: arrays everywhere!
Luca provides a thorough and systematic collection of answers to the problems issued in all the languages (Raku, PL/Perl, Python and PostgreSQL) and has demonstrated proficiency in both algorithmic reasoning and the use and applicability of various characteristics of each of these programming languages. The articles describe in detail how to implement algorithms logically. As a result, readers are provided with clean and accurate code as examples of how to successfully implement these algorithms through the use of the listed languages.
Perl Weekly Challenge 357
The blog post provides a comprehensive overview of how to implement the Kaprekar Constant and Unique Fraction Generator tasks in Perl. The examples provided demonstrate the idiomatic (one-line) style of coding that is used to represent both of the tasks. Additionally, the post discusses how to handle exceptions such as non-convergence and uniqueness of fractions, in a sensible manner.
One Constant, and Many Fractions
Matthias's solutions are easy to follow and use a typical hiring-challenge style each week. Each of his solutions adheres to the challenge's requirements. Additionally, all of his implementations demonstrate good Perl programming practices.
I could drink a case of you…
Packy's write-up for week 357 of the Perl Weekly Challenge offers a fresh perspective on the challenge by telling an entertaining story that incorporates the Kaprekar problem into the write-up. The article clearly details how to implement the code and produces good results as well. The final product is easy to understand and provides a fun, educational experience to those tackling the challenge this week.
Converging on fractions
A thorough explanation of the solution (both tasks) is provided in the post. The Perl code included is easy to read and closely adheres to the descriptions of each problem. Furthermore, the code has been written such that it handles 'non-convergence' where applicable, with clear and logical outputs as well as analyses of each step helping the reader to learn about the algorithms and their correctness.
The Weekly Challenge #357
Robbie has provided full Perl implementations of the Kaprekar Constant and Unique Fraction Generator problems, including clear descriptions and links to the source code for both projects. His article is very well organised and user-friendly, allowing readers to quickly familiarise themselves with both tasks and check out Robbie's own code implementations.
Uniquely Kaprekar
The article provides all the vital information you need to comprehend the fundamental algorithms of each challenge, including thorough code sample illustrations, as well as an extensive discussion on iteration behaviour and the reasons you don't want to use floating-point division in programming.
Fractional Constant
This blog article describes how to perform both Weekly Challenge 357 tasks step by step, showing examples of useful and correct code in both the Python and Perl programming languages, as well as considering input validation and control structures for the Kaprekar constant, as well as selecting the correct data structures to store unique fractions and display them in sorted order. By comparing the differences between the two programming languages alongside their implementation details, this blog is a valuable resource to help those programming these challenges as they learn about them.
Kaprekar Steps & Ordered Fractions
The Kaprekar steps and unique ordered fractions problems are two challenging problems; the author has provided a short list of Perl-based, well-considered solutions to handling leading zeroes, digit sorting, finding loops and sequence detection, and performing value-based ordering of fractions with duplicate removal. These solutions outline the steps taken and lessons learned while approaching each problem.
Weekly collections
NICEPERL's lists
Great CPAN modules released last week.
Events
Perl Maven online: Code-reading and Open Source contribution
February 10, 2026
Boston.pm - online
February 10, 2026
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Hi fellow Perlists,
Now that I am retired, I have a bit more time for personal projects. One project dear to my heart would be to demonstrate strong features of Perl for programmers from other backgrounds. So I'm planning a https://dev.to/ series on "beautiful Perl features", comparing various aspects of Perl with similar features in Java, Python or Javascript.
There are many points to discuss, ranging from small details like flexibility of quote delimiters or the mere simplicity of allowing a final comma in a list, to much more fundamental features like lexical scoping and dynamic scoping.
Since I'm not a native English speaker, and since my knowledge of Java and Python is mostly theoretical, I would appreciate it if some of you would volunteer to spend some time proofreading the projected posts. Just send an email to my CPAN account if you feel like participating.
Thanks in advance :-), Laurent Dami
This week’s Perl Weekly Challenge had two very interesting problems that look simple at first, but hide some beautiful logic underneath.
I solved both tasks in Perl, and here’s a walkthrough of my approach and what I learned.
Task 1 — Kaprekar Steps
We are given a 4-digit number and asked to repeatedly perform the following process:
- Arrange digits in descending order → form number A
- Arrange digits in ascending order → form number B
- Compute: A - B
- Repeat with the result
This process is known as Kaprekar’s routine for 4-digit numbers.
A fascinating fact:
Every valid 4-digit number (with at least two different digits) will reach 6174 in at most 7 steps.
6174 is called Kaprekar’s Constant.
My Approach:
The key requirements:
- Handle leading zeros (e.g., 1001 → "1001")
- Keep track of numbers already seen (to avoid infinite loops)
- Count the number of steps until 6174 is reached

I used:
- `sprintf("%04d", $n)` to preserve leading zeros
- A hash `%seen` to detect loops
- Sorting digits to build ascending and descending numbers
Core Logic
my $s = sprintf("%04d", $n);
my @digits = split('', $s);
my @desc = sort { $b cmp $a } @digits;
my @asc = sort { $a cmp $b } @digits;
my $desc_num = join('', @desc);
my $asc_num = join('', @asc);
$n = $desc_num - $asc_num;
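Putting loop detection and step counting around that core, a self-contained sketch of the approach (my own sub name; the full program also prints each iteration) looks like this:
```perl
#!/usr/bin/perl
use strict;
use warnings;

# Count Kaprekar steps for a 4-digit number, using %seen to bail out on
# inputs that never reach 6174 (repdigits such as 1111).
sub kaprekar_steps {
    my ($n) = @_;
    my %seen;
    my $steps = 0;
    while ( $n != 6174 ) {
        return -1 if $seen{$n}++;    # loop detected: no convergence
        my @digits   = split //, sprintf( "%04d", $n );
        my $desc_num = join '', sort { $b cmp $a } @digits;
        my $asc_num  = join '', sort { $a cmp $b } @digits;
        $n = $desc_num - $asc_num;
        $steps++;
    }
    return $steps;
}

print kaprekar_steps(3524), "\n";    # 3, matching the trace shown below
print kaprekar_steps(1111), "\n";    # -1, all digits equal
```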
Showing Each Iteration
I also added a helper function to print each iteration like:
3524 → 5432 - 2345 = 3087
3087 → 8730 - 0378 = 8352
8352 → 8532 - 2358 = 6174
This makes the program educational rather than just returning a number.
Task 2 — Ordered Unique Fractions
Given a number N, generate all fractions:
num / den where 1 ≤ num ≤ N and 1 ≤ den ≤ N
Then:
- Sort them by their numeric value
- Remove duplicate values
- Keep the fraction with the smallest numerator for equal values
My Approach
Step 1: Generate all possible fractions
for my $num (1..$n) {
for my $den (1..$n) {
push @fractions, [$num, $den];
}
}
Step 2: Sort by fraction value
@fractions = sort {
    my $val_a = $a->[0] / $a->[1];
    my $val_b = $b->[0] / $b->[1];
    $val_a <=> $val_b || $a->[0] <=> $b->[0]
} @fractions;
Step 3: Remove duplicates intelligently
Instead of reducing fractions (like 2/4 → 1/2), I tracked numeric values and kept the fraction with the smallest numerator.
# Inside the loop over the sorted @fractions, with $value = $num / $den:
if (!exists $seen_values{$value} || $num < $seen_values{$value}) {
    $seen_values{$value} = $num;     # remember the smallest numerator for this value
    push @unique, [$num, $den];
}
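Putting the three steps together, a minimal runnable sketch of this approach might look like the following (the names are mine; the duplicate check relies on the secondary sort by numerator, exactly as in the snippets above):

```perl
#!/usr/bin/perl
use strict;
use warnings;

my $n = shift // 4;

# Step 1: generate every num/den pair.
my @fractions;
for my $num ( 1 .. $n ) {
    for my $den ( 1 .. $n ) {
        push @fractions, [ $num, $den ];
    }
}

# Step 2: sort by numeric value, then by numerator.
@fractions = sort {
    $a->[0] / $a->[1] <=> $b->[0] / $b->[1]
        || $a->[0] <=> $b->[0]
} @fractions;

# Step 3: keep only the first fraction seen for each numeric value.
my ( %seen_values, @unique );
for my $f (@fractions) {
    my ( $num, $den ) = @$f;
    my $value = $num / $den;
    if ( !exists $seen_values{$value} || $num < $seen_values{$value} ) {
        $seen_values{$value} = $num;
        push @unique, [ $num, $den ];
    }
}

# For N = 4 this prints the list shown below.
print join( ", ", map { "$_->[0]/$_->[1]" } @unique ), "\n";
```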
Example Output (N = 4)
1/4, 1/3, 1/2, 2/3, 3/4, 1/1, 4/3, 3/2, 2/1, 3/1, 4/1
Notice:
- Fractions are ordered by value
- Duplicates like 2/2, 3/3, 4/4 don’t appear
- 2/4 is replaced by 1/2 because it has a smaller numerator
What I Learned This Week
From Kaprekar problem
- Importance of preserving leading zeros
- Detecting infinite loops with a hash
- How a simple number routine hides deep mathematical beauty
From Fraction problem
- Sorting by computed values
- Eliminating duplicates without using GCD
- Writing clean data-structure driven Perl code using array references
Conclusion
Both tasks looked straightforward, but required careful thinking about:
- Edge cases
- Ordering
- Data handling
Perl made these tasks elegant to implement thanks to:
- Powerful sorting
- Flexible data structures
- String formatting utilities
Another fun and educational week with Perl Weekly Challenge!
Happy hacking :)
"Perl is slow."
I've heard this for years, well, since I started. You probably have too. And honestly? For a long time, I didn't have a great rebuttal. Sure, Perl's fast enough for most things; it's well known for text processing, gluing code together and quick scripts. But when it comes to object-heavy code, the critics have a point.
We will begin by looking at the myth of Perl being slow a little more deeply. Here's a benchmark between Perl and Python using CPU seconds, a fair comparison that measures actual work done:
=== PERL (5 CPU seconds per test) ===
Integer arithmetic 1,072,800/s
Float arithmetic 398,800/s
String concat 970,000/s
Array push/iterate 368,800/s
Hash insert/iterate 84,800/s
Function calls 244,000/s
Regex match 12,921,200/s
=== PYTHON (5 CPU seconds per test) ===
Integer arithmetic 777,200/s
Float arithmetic 512,400/s
String concat 627,200/s
List append/iterate 476,400/s
Dict insert/iterate 140,600/s
Function calls 331,400/s
Regex match 10,543,713/s
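For context, rates like these can be produced with Perl's core Benchmark module; a minimal sketch of the CPU-seconds methodology (my own illustration, not the author's harness) looks like this:

```perl
use strict;
use warnings;
use Benchmark qw(countit);

# Run the body for roughly 5 CPU seconds and report iterations per CPU second.
my $t    = countit( 5, sub { my $x = "perl weekly" =~ /week/ } );
my $rate = $t->iters / ( $t->cpu_p || 1 );   # cpu_p = user + system CPU time used
printf "Regex match %12.0f/s\n", $rate;
```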
The results are more nuanced than the "Perl is slow" narrative suggests:
| Operation | Winner | Margin |
|---|---|---|
| Integer arithmetic | Perl | 1.4x faster |
| Float arithmetic | Python | 1.3x faster |
| String concat | Perl | 1.5x faster |
| Array/List ops | Python | 1.3x faster |
| Hash/Dict ops | Python | 1.7x faster |
| Function calls | Python | 1.4x faster |
| Regex match | Perl | 1.2x faster |
Perl wins at what it's always been good at: integers, strings, and regex. Python wins at floats, data structures, and function calls, areas where I am told Python 3.x has seen heavy optimisation work.
But here's the thing that surprised me: neither language is dramatically faster than the other for basic operations. The differences are measured in fractions, not orders of magnitude. So where does the "Perl is slow" reputation actually come from?
Object-oriented code. Let's run that same fair comparison:
=== Object creation + 2 method calls (5M iterations) ===
Perl bless: 4,155,178/s (1.20 sec)
Python class: 5,781,818/s (0.86 sec)
Okay, this is not so bad. Perl's only 40% behind. But now let's look at what people actually use these days: Moo.
=== Object creation + 2 method calls (5M iterations) ===
Perl bless: 4,176,222/s (1.20 sec)
Moo class: 843,708/s (5.93 sec)
Python class: 5,590,052/s (0.89 sec)
Wait, what? Moo is 6.6x slower than Python. And it's 5x slower than plain bless.
This, layered with actual business logic, is I guess where "Perl is slow" actually comes from. It all comes down to layers. Every Moo accessor has been optimised, but once you use all the features you build up a call stack, with each layer adding overhead:
$obj->name
└─> accessor method (generated sub)
└─> type constraint check
└─> coercion check
└─> trigger check
└─> lazy builder check
└─> finally: $self->{name}
Each of those subroutine calls means:
- Push arguments onto the stack (~3-5 ops)
- Create a new scope (localizing variables)
- Execute the check (even if it's just "return true")
- Pop the stack and return (~3-5 ops)
Even a "simple" Moo accessor with just a type constraint involves roughly 30+ additional operations compared to a plain hash access. The type constraint alone might call:
- has_type_constraint() - is there a constraint?
- type_constraint() - get the constraint object
- check() - call the constraint's check method
- The actual validation logic
Multiply that by two accessors per iteration, five million iterations, and suddenly you're spending 5 seconds instead of 1.
This is the trade-off Moo makes: flexibility and safety in exchange for speed. And for most applications it's the right trade-off; Python makes a similar one with pydantic, which roughly halves the performance of Python objects.
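If you want to measure the layering cost on your own machine, a minimal sketch along these lines (my own example, using Types::Standard for the constraint, not the author's benchmark) lets you compare a plain bless accessor against a Moo accessor directly:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Benchmark qw(cmpthese);

# Plain bless: constructor plus a bare hash-lookup accessor.
package PlainCat {
    sub new  { my ( $class, %args ) = @_; bless {%args}, $class }
    sub name { $_[0]{name} }
}

# Moo with a Type::Tiny constraint, the way many real codebases do it.
package MooCat {
    use Moo;
    use Types::Standard qw(Str);
    has name => ( is => 'ro', isa => Str );
}

my $plain = PlainCat->new( name => 'Felix' );
my $moo   = MooCat->new( name => 'Felix' );

# -5 means "run each candidate for about 5 CPU seconds".
cmpthese( -5, {
    plain_bless => sub { my $n = $plain->name },
    moo_typed   => sub { my $n = $moo->name },
} );
```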
I've spent more time than I'd care to admit thinking about this question. Not in a "let's rewrite everything in Rust" kind of way, but genuinely asking: what would it take to make Perl's object system competitive with languages people actually consider fast?
The answer, it turns out, was inside a CPAN module first released on 'Mon Jul 24 11:23:25 2000'. It was highlighted to me by another author's work; I am indeed one of the three people who not only read their blogs but also often find themselves lost within their interesting coding patterns.
So this is the story of the four modules that changed how I think about Perl performance: Marlin, Meow, Inline and XS::JIT. They're different tools with different philosophies, but together they represent something I never quite expected to see: Perl object access that's actually faster than Python's equivalent. Not "almost as fast". Faster.
The Marlin story: A faster fish in the Moose family
If you've written any serious Perl in the last fifteen years, you've probably used Moose. Or Moo. Or Mouse. The naming convention is... well, it's a thing we do now.
Marlin fits right into that tradition, and the name's not accidental. Marlins are among the fastest fish in the ocean. That's the pitch: everything you love about Moose-style OO, but with speed as a first-class concern.
Toby Inkster released Marlin in late 2025, and it caught my attention as I stated before, many of his projects do. I'd previously attempted to write a fast OO system myself (Meow), but was struggling to even compete with Moo despite being entirely XS. Partly ability, partly still learning, mostly not being in the right compile time stage.
With my interest piqued, I installed Marlin, played with the API, and ran some benchmarks:
Benchmark: 1,000,000 iterations
Rate Meow Moo Marlin Mouse
Meow 606,061/s -- -1% -45% -47%
Moo 609,756/s 1% -- -45% -46%
Marlin 1,098,901/s 81% 80% -- -3%
Mouse 1,136,364/s 87% 86% 3% --
Marlin performed well. Meow at that point was... not impressive. But I liked Marlin's API and, understanding my own implementation's limitations, I was satisfied enough with the speed to build my Claude modules around it, while also understanding it would likely improve in performance.
A few weeks later, and a lot happened in between, but on Friday evening I randomly decided to revisit my Meow directory. Could I fix some of the flaws based upon my recent learnings? I managed to, and saw a huge improvement in my own benchmarks. So I updated to the latest Marlin for a fair comparison.
I was expecting Meow to be faster now since I'm doing much less in this minimalist approach. But what I actually found surprised me:
Benchmark: 10,000,000 iterations
Rate Moo Mouse Meow Marlin
Moo 868,810/s -- -47% -60% -81%
Mouse 1,626,016/s 87% -- -26% -64%
Meow 2,183,406/s 151% 34% -- -52%
Marlin 4,504,505/s 418% 177% 106% --
Marlin had gotten dramatically faster, over 4x improvement from the version I'd first tested. Toby had clearly been busy. And while Meow had improved too, it was still only half of Marlin's speed.
This was the moment that changed everything. I needed to understand how Marlin achieved this. What was I missing?
Just in time optimisation
As I mentioned, I read other people's code. I read Toby's posts on Marlin and how he'd studied Mouse's optimisation strategy: only validate when you absolutely need to. But when I started tracing through Marlin's actual implementation, something clicked.
The key insight is in Marlin::Attribute::install_accessors. Here's what happens when Marlin sets up a reader:
if ( $type eq 'reader' and !$me->has_simple_reader and $me->xs_reader ) {
$me->{_implementation}{$me->{$type}} = 'CXSR'; # Class::XSReader
}
elsif ( HAS_CXSA and $me->has_simple_reader ) {
# Use Class::XSAccessor for simple cases
Class::XSAccessor->import( class => $me->{package}, ... );
}
Marlin makes a compile-time decision: what kind of accessor does this attribute actually need?
- Simple getter (no default, no lazy, no type check on read)? → Use Class::XSAccessor, which is pure XS and blindingly fast
- Getter with lazy default or type coercion? → Use Class::XSReader, which handles the complexity in optimised C
- Something exotic (auto_deref, custom behaviour)? → Fall back to generated Perl
This is the magic. Most Moo-style accessors go through a generic code path that handles every possible feature, even features you're not using. Marlin analyses your attribute definition at compile time and generates the minimal accessor that satisfies your requirements.
Consider a read-only attribute with a type but no default:
# Moo accessor path:
$obj->name
→ check if lazy builder needed # nope, but we still check
→ check if default needed # nope, but we still check
→ check if coercion needed # nope, but we still check
→ finally: $self->{name}
# Marlin accessor (Class::XSAccessor):
$obj->name
→ $self->{name} # that's it. One XS call.
The type constraint? Marlin validates it in the constructor, not the getter. Once an object is built, reading an attribute is just a hash lookup: no validation, no subroutine calls, no stack manipulation.
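For the simple case, the Class::XSAccessor route looks roughly like this (a sketch of the general technique, not Marlin's generated code):

```perl
package Cat;
use strict;
use warnings;

# Install a pure-XS constructor and reader; reads become a single XS hash lookup.
use Class::XSAccessor
    constructor => 'new',
    getters     => { name => 'name' };   # method name => hash key

package main;
my $cat = Cat->new( name => 'Felix' );
print $cat->name, "\n";                  # no Perl-level accessor code runs here
```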
This is why Marlin went from 1.1M ops/sec to 4.5M ops/sec between versions. Toby wasn't just optimising code. He was eliminating entire categories of runtime work by moving decisions to compile time.
A different approach is used for Class::XSConstructor. This reuses a generic XSUB but passes the class data via a custom pointer, so the sub is optimised to avoid reaching back into Perl for stash and HV lookups and the like.
It's JIT compilation, but done at module load time rather than runtime. By the time your code calls ->new or ->name, all the decisions have been made. All that's left is the actual work.
This was my revelation: the path to fast Perl OO isn't avoiding features, it's avoiding runtime feature detection. Know what you need at compile time, generate optimised code for exactly that, and get out of the way.
Now the question became: could I apply this same principle to Meow? It was already setup to build a simple hash that represented the object, I had what I needed but I wanted to do this in a backwards compatible way.
Enter Inline::C
Armed with the understanding of why Marlin was fast, I had a hypothesis: if I could generate XS accessors at compile time tailored to each attribute's needs, Meow could achieve the same performance.
I needed to generate custom C code and then execute it; for Perl, the tool for that was written by Ingy döt Net back in 2000: Inline::C.
The idea was simple: when Meow sees ro name => Str, it should generate C code for an accessor that:
- Takes the object
- Returns the value at the slot index for name
- That's it. No method dispatch, no type checking, no feature checking.
I didn't want to just break everything so I leaned into the Moose catalog and added a make_immutable phase. When this is called it would compile the C code needed to generate an optimised XS package and this was fed into Inline::C. The first run would compile; subsequent runs would use the cached .so.
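To make the workflow concrete, here is a tiny self-contained illustration of the Inline::C idea (my own toy example, nothing like Meow's generated code, and assuming a working C compiler): hand Inline some C at compile time and call it like any Perl sub.

```perl
use strict;
use warnings;
use Inline C => <<'END_C';
/* Return the string stored in slot 0 of an array-based object. */
char* slot0(SV* self) {
    AV*  av   = (AV*)SvRV(self);
    SV** slot = av_fetch(av, 0, 0);
    return slot ? SvPV_nolen(*slot) : "";
}
END_C

my $cat = bless [ 'Felix', 7 ], 'Cat';
print slot0($cat), "\n";    # first run compiles and caches the .so, then prints "Felix"
```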
And it worked. I had to change the benchmark to CPU to get a fair result but I've also included a Cor test here which does not have type checking like Marlin or Meow.
Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds...
Cor: 5 wallclock secs ( 5.13 usr + 0.02 sys = 5.15 CPU) @ 2,886,788/s
Marlin: 5 wallclock secs ( 5.01 usr + 0.11 sys = 5.12 CPU) @ 4,523,074/s
Meow: 5 wallclock secs ( 5.16 usr + 0.02 sys = 5.18 CPU) @ 4,558,344/s
As you can see, Meow had caught Marlin. Actually, it was slightly faster, 4.56M vs 4.52M ops/sec, but that is to be expected as Meow does a lot less than Marlin.
But my bottleneck was now in Inline::C, and, well, nobody wants to write C/XS, let alone concatenate it:
- Startup overhead: First compilation was slow, several seconds for a complex class
- Dependencies: Inline::C pulls in Parse::RecDescent, adds complexity to the dependency chain
- Build process: It generates a full Makefile.PL and runs the ExtUtils::MakeMaker machinery
- Caching: The caching mechanism is designed for "write once" scripts, not dynamic code generation
For a proof of concept, Inline::C was perfect. But for a production module, I needed something leaner. That's when I started looking at what Inline::C actually does under the hood, and wondering how much of it I could strip away.
Under the hood: XS::JIT as the secret weapon
Inline::C proved the concept worked, but it came with baggage. Every compile spawned a full Makefile.PL build process. Dependencies bloated the install. And the caching system, designed for write-once scripts, wasn't ideal for dynamic code generation.
So I started picking apart what Inline::C actually does:
- Parse C code to find function signatures
- Generate XS wrapper code
- Generate a Makefile.PL
- Run perl Makefile.PL && make
- Load the resulting .so
And yes, this happens even when you use Inline->bind(C => ...) instead of the use form. The bind call just defers compilation to runtime rather than compile time. It doesn't change what gets done, only when. You still get the full Parse::RecDescent parsing, the xsubpp processing, the MakeMaker dance. The only difference is whether it happens at use time or when bind is called.
Most of this was unnecessary for my use case. I didn't need function parsing, I already knew what functions I was generating. I didn't need XS wrappers, I was writing XS-native code directly. And I definitely didn't need the Makefile.PL dance.
XS::JIT strips all of that away. It's a single-purpose tool: take C code, compile it, load it, install the functions. No parsing. No xsubpp. No make. Direct compiler invocation.
Here's what the C API looks like:
#include "xs_jit.h"
/* Function mapping - where to install what */
XS_JIT_Func funcs[] = {
{ "Cat::new", "cat_new", 0, 1 }, /* target, source, varargs, xs_native */
{ "Cat::name", "cat_name", 0, 1 },
{ "Cat::age", "cat_age", 0, 1 },
};
/* Compile and install in one call */
int ok = xs_jit_compile(aTHX_
c_code, /* Your generated C code */
"Meow::JIT::Cat", /* Unique name for caching */
funcs, /* Function mapping array */
3, /* Number of functions */
"_CACHED_XS", /* Cache directory */
0 /* Don't force recompile */
);
That's it. One function call. The first time it runs, XS::JIT:
- Generates a boot function that registers all the XS functions
- Compiles directly with the system compiler (cc -shared -fPIC ...)
- Loads the .so with DynaLoader
- Installs each function into its target namespace
Subsequent runs? It hashes the C code, finds the cached .so, and just loads it. The compile step vanishes entirely.
The key insight is the is_xs_native flag. When set, XS::JIT creates a simple alias: no wrapper function, no stack manipulation, no overhead. Your C function is the XS function:
XS_EUPXS(cat_name) {
dVAR; dXSARGS;
SV *self = ST(0);
AV *av = (AV*)SvRV(self);
SV **slot = av_fetch(av, 0, 0); /* slot 0 = name */
ST(0) = slot ? *slot : &PL_sv_undef;
XSRETURN(1);
}
No wrapper. No intermediate calls.
This is exactly what Meow needed. During make_immutable, it:
- Analyses each attribute's requirements (type constraint? coercion? trigger?)
- Generates minimal XS accessor code for each one
- Generates an optimised XS constructor that handles all attributes in one pass
- Hands the code to XS::JIT for compilation
- Gets back installed functions ready to call
The entire JIT compilation happens once per class, at module load time. By the time your code runs, everything is native XS.
Comparing the approaches
Here's what actually happens at runtime for each framework:
Moo accessor call:
$obj->name
→ Perl method dispatch
→ Generated Perl subroutine
→ has_type_constraint() check
→ type_constraint() fetch
→ check() call
→ finally: $self->{name}
Stack frames: 4-6. Operations: ~30.
Marlin accessor call (Class::XSAccessor):
$obj->name
→ Perl method dispatch
→ XS accessor
→ $self->{name}
Stack frames: 1. Operations: ~5.
Note: Toby has some slot magic also
Meow accessor call (XS::JIT):
$obj->name
→ Perl method dispatch
→ XS accessor
→ $self->[SLOT_INDEX]
Stack frames: 1. Operations: ~4 (arrays are slightly faster than hashes).
The benchmark results
With XS::JIT in place, here's where Meow now landed:
Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds... Marlin and Meow have type constraint checking
Cor: 5 wallclock secs ( 5.13 usr + 0.02 sys = 5.15 CPU) @ 2886788.16/s (n=14866959)
Marlin: 5 wallclock secs ( 5.01 usr + 0.11 sys = 5.12 CPU) @ 4523074.80/s (n=23158143)
Meow: 5 wallclock secs ( 5.16 usr + -0.01 sys = 5.15 CPU) @ 5196218.06/s (n=26760523)
Benchmark: running Marlin, Meow, Moo, Mouse for at least 5 CPU seconds...
Marlin: 5 wallclock secs ( 5.22 usr + 0.13 sys = 5.35 CPU) @ 4814728.04/s (n=25758795)
Meow: 5 wallclock secs ( 5.23 usr + 0.01 sys = 5.24 CPU) @ 5203329.96/s (n=27265449)
Moo: 4 wallclock secs ( 5.28 usr + 0.00 sys = 5.28 CPU) @ 860649.81/s (n=4544231)
Mouse: 6 wallclock secs ( 5.29 usr + 0.01 sys = 5.30 CPU) @ 1603849.25/s (n=8500401)
Rate Moo Mouse Marlin Meow
Moo 860650/s -- -46% -82% -83%
Mouse 1603849/s 86% -- -67% -69%
Marlin 4814728/s 459% 200% -- -7%
Meow 5203330/s 505% 224% 8% --
I must be honest: around this time I had not implemented the full benchmarks against Perl and Python. I didn't fully understand the difference, so I wondered whether I was hitting limitations of my own hardware (it was late, or early in the morning). Anyway, I kept pushing and ran a benchmark where I accessed the slot directly as an array reference. This got me excited:
Meow (direct) 7,172,481/s 778% 347% 50% 14%
I was seeing a huge improvement. I spent some time making an API that was a little nicer by exposing constants as slot indexes:
{
package Cat;
use Meow;
ro name => Str;
ro age => Int;
make_immutable; # Creates $Cat::NAME, $Cat::AGE
}
# Direct slot access
my $name = $cat->[$Cat::NAME];
I was now on par with Python, but I wanted more. There had to be a way to get that array access without the ugly syntax.
So I dug deeper into Perl's internals and found the missing magic: cv_set_call_checker and custom ops.
The entersub bypass: Custom ops
Here's what normally happens when you call a method in Perl:
name($cat)
→ OP_ENTERSUB (the "call function" op)
→ Push arguments onto stack
→ Look up the CV (code value)
→ Set up new stack frame
→ Execute the XS function
→ Pop stack frame
→ Return
Even for our minimal XS accessor, there's overhead: the entersub op itself, the stack frame setup, the CV lookup. What if we could eliminate all of that?
Perl provides a hook called cv_set_call_checker. It allows you to register a "call checker" function that runs at compile time when the parser sees a call to your subroutine. The checker can inspect the op tree and crucially replace it with something else entirely.
Here's what Meow does:
static void _register_inline_accessor(pTHX_ CV *cv, IV slot_index, int is_ro) {
SV *ckobj = newSViv(slot_index); /* Store slot index for later */
cv_set_call_checker_flags(cv, S_ck_meow_get, ckobj, 0);
}
When the checker sees name($cat), it:
- Extracts the $cat argument from the op tree
- Frees the entire entersub operation
- Creates a new custom op with the slot index baked in
- Returns that instead
The custom op is trivially simple:
static OP *S_pp_meow_get(pTHX) {
dSP;
SV *self = TOPs;
PADOFFSET slot_index = PL_op->op_targ; /* Baked into the op */
SV **ary = AvARRAY((AV*)SvRV(self));
SETs(ary[slot_index] ? ary[slot_index] : &PL_sv_undef);
return NORMAL;
}
That's the entire accessor. No function call. No stack frame. No CV lookup. The slot index is embedded directly in the op structure. The Perl runloop executes this op directly; it's as close to $cat->[$NAME] as you can get while still looking like name($cat).
This is the same technique that builtin::true and builtin::false use in Perl 5.36+. It's also how List::Util::first can be optimised when given a simple block.
The final benchmark
With custom ops in place via import_accessors, here's how the Perl OO frameworks compare:
Benchmark: running Marlin, Meow, Meow (direct), Meow (op), Moo, Mouse for at least 5 CPU seconds...
Marlin: 6 wallclock secs ( 5.09 usr + 0.11 sys = 5.20 CPU) @ 4766685.58/s (n=24786765)
Meow: 5 wallclock secs ( 5.29 usr + 0.01 sys = 5.30 CPU) @ 6289606.79/s (n=33334916)
Meow (direct): 5 wallclock secs ( 5.32 usr + 0.01 sys = 5.33 CPU) @ 7172480.86/s (n=38229323)
Meow (op): 5 wallclock secs ( 5.16 usr + 0.01 sys = 5.17 CPU) @ 7394453.19/s (n=38229323)
Moo: 4 wallclock secs ( 5.44 usr + 0.02 sys = 5.46 CPU) @ 816865.93/s (n=4460088)
Mouse: 4 wallclock secs ( 5.18 usr + 0.01 sys = 5.19 CPU) @ 1605727.55/s (n=8333726)
Rate Moo Mouse Marlin Meow Meow (direct) Meow (op)
Moo 816866/s -- -49% -83% -87% -89% -89%
Mouse 1605728/s 97% -- -66% -74% -78% -78%
Marlin 4766686/s 484% 197% -- -24% -34% -36%
Meow 6289607/s 670% 292% 32% -- -12% -15%
Meow (direct) 7172481/s 778% 347% 50% 14% -- -3%
Meow (op) 7394453/s 805% 361% 55% 18% 3% --
Now let's test that directly against Python:
============================================================
Python Direct Benchmark (slots + property accessors)
============================================================
Python version: 3.9.6 (default, Dec 2 2025, 07:27:58)
[Clang 17.0.0 (clang-1700.6.3.2)]
Iterations: 5,000,000
Runs: 5
------------------------------------------------------------
Run 1: 0.649s (7,704,306/s)
Run 2: 0.647s (7,733,902/s)
Run 3: 0.646s (7,736,307/s)
Run 4: 0.648s (7,720,909/s)
Run 5: 0.649s (7,702,520/s)
------------------------------------------------------------
Median rate: 7,720,909/s
============================================================
============================================================
Perl/Meow Benchmark Comparison
============================================================
Perl version: 5.042000
Iterations: 5000000
Runs: 5
------------------------------------------------------------
Inline Op (one($foo)):
Run 1: 0.638s (7,841,811/s)
Run 2: 0.629s (7,954,031/s)
Run 3: 0.631s (7,929,850/s)
Run 4: 0.631s (7,926,316/s)
Run 5: 0.633s (7,901,675/s)
Median: 7,926,316/s
============================================================
Summary:
------------------------------------------------------------
Inline Op: 7,926,316/s
============================================================
Conclusion: Why JIT might be the right approach
Looking back at this journey, a pattern emerges. The fastest code isn't the cleverest code. It's the code that does the least work at runtime.
Moo is slow because of the abstraction.
Marlin proved that you could have Moo's features without Moo's overhead by making smart choices at compile time. If an accessor doesn't need lazy building, don't generate code that checks for lazy building.
Meow pushed this further: if you're going to generate code at compile time anyway, why not generate exactly the code you need? Not a generic accessor that handles many cases, but a specific accessor for this specific attribute on this specific class.
And XS::JIT made that practical. Without a lightweight JIT compiler, dynamic XS generation would require shipping a C toolchain with every module, or adding multi-megabyte dependencies. XS::JIT strips the problem down to its essence: take C code, compile it, load it.
The result is object access that competes with, and sometimes beats, languages that have had decades of optimisation work. Not because Perl's interpreter got faster, but because we stopped asking it to do unnecessary work.
Is this approach right for every project? No. Most applications don't need 7 million object accesses per second.
But for the times when performance matters (hot loops, high-frequency trading, real-time systems) it's good to know the ceiling isn't as low as we thought. Perl can be fast. We just needed to get out of its way.
The modules discussed in this post:
- Marlin: https://metacpan.org/pod/Marlin
- Meow: https://metacpan.org/pod/Meow
- XS::JIT: https://metacpan.org/pod/XS::JIT
- Inline::C: https://metacpan.org/pod/Inline::C
We are happy to announce that Otobo is also part of our event!
Rother OSS GmbH is the source code owner and maintainer of the service management platform OTOBO.
Together with the community, we continuously develop OTOBO further and make sure the tool remains 100% open source.
We support our customers with partnership-based consulting, training, development, support and managed services.
https://otobo.io/de/unternehmen/karriere/
- App::Greple - extensible grep with lexical expression and region handling
- Version: 10.03 on 2026-01-19, with 56 votes
- Previous CPAN version: 10.02 was 10 days before
- Author: UTASHIRO
- Beam::Wire - Lightweight Dependency Injection Container
- Version: 1.028 on 2026-01-21, with 19 votes
- Previous CPAN version: 1.027 was 1 month, 15 days before
- Author: PREACTION
- CPAN::Meta - the distribution metadata for a CPAN dist
- Version: 2.150011 on 2026-01-22, with 39 votes
- Previous CPAN version: 2.150010 was 9 years, 5 months, 4 days before
- Author: RJBS
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260120.004 on 2026-01-20, with 25 votes
- Previous CPAN version: 20260120.002
- Author: BRIANDFOY
- DateTime::Format::Natural - Parse informal natural language date/time strings
- Version: 1.24 on 2026-01-18, with 19 votes
- Previous CPAN version: 1.23_03 was 5 days before
- Author: SCHUBIGER
- EV - perl interface to libev, a high performance full-featured event loop
- Version: 4.37 on 2026-01-22, with 50 votes
- Previous CPAN version: 4.36 was 4 months, 2 days before
- Author: MLEHMANN
- Git::Repository - Perl interface to Git repositories
- Version: 1.326 on 2026-01-18, with 27 votes
- Previous CPAN version: 1.325 was 4 years, 7 months, 17 days before
- Author: BOOK
- IO::Async - Asynchronous event-driven programming
- Version: 0.805 on 2026-01-19, with 80 votes
- Previous CPAN version: 0.804 was 8 months, 26 days before
- Author: PEVANS
- Mac::PropertyList - work with Mac plists at a low level
- Version: 1.606 on 2026-01-20, with 13 votes
- Previous CPAN version: 1.605 was 5 months, 11 days before
- Author: BRIANDFOY
- Module::CoreList - what modules shipped with versions of perl
- Version: 5.20260119 on 2026-01-19, with 44 votes
- Previous CPAN version: 5.20251220 was 29 days before
- Author: BINGOS
- Net::Server - Extensible Perl internet server
- Version: 2.015 on 2026-01-22, with 33 votes
- Previous CPAN version: 2.014 was 2 years, 10 months, 7 days before
- Author: BBB
- Net::SSH::Perl - Perl client Interface to SSH
- Version: 2.144 on 2026-01-23, with 20 votes
- Previous CPAN version: 2.144 was 8 days before
- Author: BDFOY
- Release::Checklist - A QA checklist for CPAN releases
- Version: 0.19 on 2026-01-25, with 16 votes
- Previous CPAN version: 0.18 was 1 month, 15 days before
- Author: HMBRAND
- Spreadsheet::Read - Meta-Wrapper for reading spreadsheet data
- Version: 0.95 on 2026-01-25, with 31 votes
- Previous CPAN version: 0.94 was 1 month, 15 days before
- Author: HMBRAND
- SPVM - The SPVM Language
- Version: 0.990117 on 2026-01-24, with 36 votes
- Previous CPAN version: 0.990116
- Author: KIMOTO
- utf8::all - turn on Unicode - all of it
- Version: 0.026 on 2026-01-18, with 31 votes
- Previous CPAN version: 0.025 was 1 day before
- Author: HAYOBAAN
This is the weekly favourites list of CPAN distributions. Votes count: 36
Week's winner: Marlin (+3)
Build date: 2026/01/25 12:53:03 GMT
Clicked for first time:
- App::CPANTS::Lint - front-end to Module::CPANTS::Analyse
- DBIx::Class::Async - Asynchronous database operations for DBIx::Class
- Doubly - Thread-safe doubly linked list
- Mooish::Base - importer for Mooish classes
- Pod::Github - Make beautiful Markdown readmes from your POD
- Pod::Markdown::Githubert - convert POD to Github-flavored Markdown
Increasing its reputation:
- Class::Closure (+1=2)
- Class::DBI (+1=11)
- Class::Slot (+1=2)
- Class::XSConstructor (+2=5)
- CPAN::Changes (+1=33)
- Devel::NYTProf (+1=197)
- Faker (+1=13)
- Future::AsyncAwait (+1=51)
- List::AllUtils (+1=32)
- Marlin (+3=7)
- MCP (+1=8)
- Module::Starter (+1=35)
- Mooish::AttributeBuilder (+1=2)
- MooseX::XSConstructor (+1=2)
- PAGI (+2=6)
- Pod::Coverage (+1=15)
- Pod::Markdown (+1=34)
- Pod::Markdown::Github (+1=7)
- SDL3 (+1=3)
- Test::Kwalitee (+1=8)
- Test::Pod (+1=23)
- Test::Pod::Coverage (+1=24)
- Test::Spelling (+1=14)
- utf8::all (+1=31)
- Venus (+1=8)
- YAML::LibYAML (+1=60)
Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions
See OSDC Perl
- 00:00 Working with Peter Nilsson
- 00:01 Find a module to add GitHub Action to. Go to CPAN::Digger recent
- 00:10 Found Tree-STR
- 01:20 Bug in CPAN Digger that shows a GitHub link even if it is broken.
- 01:30 Search for the module name on GitHub.
- 02:25 Verify that the name of the module author is the owner of the GitHub repository.
- 03:25 Edit the Makefile.PL.
- 04:05 Edit the file, fork the repository.
- 05:40 Send the Pull-Request.
- 06:30 Back to CPAN Digger recent to find a module without GitHub Actions.
- 07:20 Add file / Fork repository gives us "unexpected error".
- 07:45 Direct fork works.
- 08:00 Create the .github/workflows/ci.yml file.
- 09:00 Example CI yaml file; copy it and edit it.
- 14:25 Look at a GitLab CI file for a few seconds.
- 14:58 Commit - change the branch and add a description!
- 17:31 Check if the GitHub Action works properly.
- 18:17 There is a warning while the tests are running.
- 21:20 Opening an issue.
- 21:48 Opening the PR (on the wrong repository).
- 22:30 Linking to output of a CI?
- 23:40 Looking at the file to see the source of the warning.
- 25:25 Assigning an issue? In an open source project?
- 27:15 Edit the already created issue.
- 28:30 Use the Preview!
- 29:20 Sending the Pull-Request to the project owner.
- 31:25 Switching to Jonathan
- 33:10 CPAN Digger recent
- 34:00 Net-SSH-Perl of BDFOY - Testing a networking module is hard and Jonathan is using Windows.
- 35:13 Frequency of update of CPAN Digger.
- 36:00 Looking at our notes to find the GitHub account of the module author LNATION.
- 38:10 Look at the modules of LNATION on MetaCPAN
- 38:47 Found JSON::Lines
- 39:42 Install the dependencies, run the tests, generate test coverage.
- 40:32 Cygwin?
- 42:45 Add GitHub Action copying it from the previous PR.
- 43:54 META.yml should not be committed as it is a generated file.
- 48:25 I am looking for sponsors!
- 48:50 Create a branch that reflects what we do.
- 51:38 Commit the changes
- 53:10 Fork the project on GitHub and set up the git remote locally.
- 55:05 git push -u fork add-ci
- 57:44 Sending the Pull-Request.
- 59:10 The 7 dwarfs and Snow White. My hope is to have 100 people sending these PRs.
- 1:01:30 Feedback.
- 1:02:10 Did you think this was useful?
- 1:02:55 Would you be willing to tell people you know that you did this and you will do it again?
- 1:03:17 You can put this on your resume. It means you know how to do it.
- 1:04:16 ... and Zoom suddenly closed the recording...
See OSDC Perl
- 00:00 Introduction and about OSDC Perl
- 01:50 Sponsors of MetaCPAN, looking at some modules on CPAN.
- 03:30 The river status
- 04:10 Picking MIME::Lite and looking at MetaCPAN. Uses RT, has no GitHub Actions.
- 05:55 Look at the clone of the repository, the 2 remotes and the 3 branches.
- 06:40 GitHub Actions Examples
- 08:00 Running the Docker container locally. Install the dependencies.
- 09:10 Run the tests locally.
- 09:20 Add the .gitignore file.
- 10:30 Picking a module from MetaCPAN recent
- 11:10 CPAN Digger recent
- 12:20 Explaining about pair-programming and workshop.
- 13:25 CPAN Digger statistics
- 14:15 Generate test coverage report using Devel::Cover.
- 17:15 The fold function that is not tested and not even used.
- 18:39 Wanted to open an issue about fold, but I probably won't do it on RT.
- 20:00 Updating the OSDC Perl document with the TODO items.
- 21:13 Split the packages into files?
- 22:27 The culture of Open Source contributions.
- 24:20 Why is the BEGIN line red when the content of the block is green?
- 27:40 Switching to the long-header branch.
- 30:40 Finding header_as_string in the documentation.
- 33:54 Let's compare the result to an empty string.
- 36:15 Switching to Test::Longstring to see the difference.
- 37:35 Test::Differences was also suggested.
- 39:40 Push out the branch and send the Pull-request.
- 40:35 Did this really increase the test coverage? Let's see it.
- 43:50 Messing up the explanation about condition coverage.
- 45:35 The repeated use of the magic number 72.
- 47:00 Is the output actually correct? Is it according to the standard?
- 51:45 Discussion about /usr/bin/perl on the first line.
- 52:45 No version is specified.
- 55:15 The sentence should be "conforms to the standard"
Download from the usual place, my Wiki Haven.
Announcing the Perl Toolchain Summit 2026!
The organizers have been working behind the scenes since last September, and today I’m happy to announce that the 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.
This post is brought to you by Simplelists, a group email and mailing list service provider, and a recurring sponsor of the Perl Toolchain Summit.

Started in 2008 as the Perl QA Hackathon in Oslo, the Perl Toolchain Summit is an annual event that brings together the key developers working on the Perl toolchain. Each year (except for 2020-2022), the event moves from country to country all over Europe, organised by local teams of volunteers. The surplus money from previous summits helps fund the next one.
Since 2023, the organizing team is formally split between a “global” team and a “local” team (although this setup has been informally used before).
The global team is made up of veteran PTS organizers, who deal with invitations, finding sponsors, paying bills and communications. They are Laurent Boivin (ELBEHO), Philippe Bruhat (BOOK), Thibault Duponchelle (CONTRA), Tina Müller (TINITA) and Breno de Oliveira (GARU), supported by Les Mongueurs de Perl’s bank account.
The local team members for this year have organized several events in Vienna (including the Perl QA Hackathon 2010!) and deal with finding the venue, the hotel, the catering and welcoming our attendees in Vienna in April. They are Alexander Hartmaier (ABRAXXA), Thomas Klausner (DOMM), Maroš Kollár (MAROS), Michael Kröll and Helmut Wollmersdorfer (WOLLMERS).
The developers who maintain CPAN and associated tools and services are all volunteers, scattered across the globe. This event is the one time in the year when they can get together.
The summit provides dedicated time to work on the critical systems and tools, with all the right people in the same room. The attendees hammer out solutions to thorny problems and discuss new ideas to keep the toolchain moving forward. This year, about 40 people have been invited, with 35 participants expected to join us in Vienna.
If you want to find out more about the work being done at the Toolchain Summit, and hear the teams and people involved, you can listen to several episodes of The Underbar podcast, which were recorded during the 2025 edition in Leipzig, Germany:
Given the important nature of the attendees’ work and their volunteer status, we try to pay for most expenses (travel, lodging, food, etc.) through sponsorship. If you’re interested in helping sponsor the summit, please get in touch with the global team at pts2026@perltoolchainsummit.org.
Simplelists has been sponsoring the Perl Toolchain Summit for several years now. We are very grateful for their continued support.
Simplelists is proud to sponsor the 2026 Perl Toolchain Summit, as Perl forms the core of our technology stack. We are grateful that we can rely on the robust and comprehensive Perl ecosystem, from the core of Perl itself to a whole myriad of CPAN modules. We are glad that the PTS continues its unsung work, ensuring that Simplelists can continue to rely on these many tools.
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260110.003 on 2026-01-11, with 25 votes
- Previous CPAN version: 20260104.001 was 6 days before
- Author: BRIANDFOY
- FFI::Platypus - Write Perl bindings to non-Perl libraries with FFI. No XS required.
- Version: 2.11 on 2026-01-12, with 69 votes
- Previous CPAN version: 2.10 was 1 year, 24 days before
- Author: PLICEASE
- Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
- Version: 1.70 on 2026-01-11, with 18 votes
- Previous CPAN version: 1.69 was before
- Author: DDICK
- Module::Starter - a simple starter kit for any module
- Version: 1.82 on 2026-01-10, with 34 votes
- Previous CPAN version: 1.81 was before
- Author: XSAWYERX
- Net::DNS - Perl Interface to the Domain Name System
- Version: 1.54 on 2026-01-16, with 29 votes
- Previous CPAN version: 1.53 was 4 months, 18 days before
- Author: NLNETLABS
- Net::SSH::Perl - Perl client Interface to SSH
- Version: 2.144 on 2026-01-14, with 20 votes
- Previous CPAN version: 2.143 was 1 year, 10 days before
- Author: BRIANDFOY
- Sidef - The Sidef Programming Language - A modern, high-level programming language
- Version: 26.01 on 2026-01-13, with 121 votes
- Previous CPAN version: 25.12 was 23 days before
- Author: TRIZEN
- Sys::Virt - libvirt Perl API
- Version: v12.0.0 on 2026-01-16, with 17 votes
- Previous CPAN version: v11.10.0 was 1 month, 14 days before
- Author: DANBERR
- utf8::all - turn on Unicode - all of it
- Version: 0.025 on 2026-01-16, with 30 votes
- Previous CPAN version: 0.024 was 8 years, 11 days before
- Author: HAYOBAAN
This is the weekly favourites list of CPAN distributions. Votes count: 89
Week's winner: Marlin (+6)
Build date: 2026/01/18 10:13:31 GMT
Clicked for first time:
- Acme::KMX::Test - Testing package
- App::DAVThis - Export the current directory over WebDAV
- App::FTPThis - Export the current directory over anonymous FTP
- App::HTTPSThis - Export the current directory over HTTPS
- App::sshca - Minimalistic SSH Certificate Authority
- App::Transpierce - backup and modify important files
- CLI::Cmdline - A minimal command-line parser with short and long options in pure Perl
- DBIx::Class::Async - Asynchronous database operations for DBIx::Class
- i18n - Perl Internationalization Pragma
- Mooish::AttributeBuilder - build Mooish attribute definitions with less boilerplate
- MooseX::XSConstructor - glue between Moose and Class::XSConstructor
- Net::Async::Redis - Redis support for IO::Async
- PAGI::Server -
- PostScript::Simple - Produce PostScript files from Perl
- Strada - Call compiled Strada shared libraries from Perl
- Term::ProgressSpinner - Terminal Progress bars!
Increasing its reputation:
- Acme::AsciiEmoji (+1=2)
- App::CGIThis (+1=5)
- App::HTTPThis (+1=25)
- Beam::Wire (+1=19)
- Cache::Cache (+1=4)
- Carton (+1=130)
- Class::DBI (+1=10)
- Command::Runner (+1=4)
- CPAN::Meta (+1=28)
- CryptX (+1=53)
- DBD::CSV (+1=26)
- DBD::DuckDB (+1=8)
- DBD::SQLite (+1=107)
- DBIx::Class (+1=293)
- Devel::NYTProf (+1=196)
- Email::Stuffer (+1=37)
- EV (+1=50)
- ExtUtils::Depends (+1=5)
- FFI::Platypus (+1=70)
- forks (+1=23)
- Future::AsyncAwait (+1=51)
- Future::XS (+1=3)
- GDGraph (+1=8)
- Graphics::Framebuffer (+1=6)
- IO::Async (+1=80)
- IO::K8s (+1=5)
- JQ::Lite (+2=8)
- kura (+1=2)
- List::Gen (+1=25)
- Log::Any (+1=69)
- Log::Handler (+1=10)
- Marlin (+6=6)
- Melian (+2=4)
- MIME::Base64 (+1=25)
- Module::CPANfile (+1=62)
- Module::Runtime (+1=31)
- Module::Starter (+1=34)
- Mojolicious (+1=510)
- Mojolicious::Plugin::HTMX (+1=6)
- Mojolicious::Plugin::OpenAPI (+1=47)
- Moose (+1=334)
- MooX::Singleton (+1=6)
- Net::Prober (+1=4)
- Object::Pad (+1=47)
- PAGI (+2=4)
- PDL::Opt::GLPK (+1=2)
- perl (+1=441)
- Perl::Types (+1=3)
- perlsecret (+1=55)
- Plack (+1=240)
- Prima (+1=46)
- Protocol::Sys::Virt (+1=2)
- Pry (+1=24)
- Statocles (+1=31)
- Storable (+1=57)
- strictures (+1=26)
- Sub::Quote (+1=12)
- Test::Perl::Critic (+1=16)
- Thread::Subs (+1=3)
- Tickit (+2=29)
- Type::Tiny (+3=148)
- Z (+1=3)
I have the following program with JSON:
use strict;
use warnings;
use Data::Dumper qw( );
use JSON qw( );
my $json_text = '[
{
"sent": "2026-01-16T17:00:00Z",
"data": [
{
"headline": "text1",
"displayText": "text2"
},
{
"displayText": "text3"
},
{
"displayText": "text4"
}
]
},
{
"sent": "2026-01-16T17:00:00Z",
"data": [
{
"headline": "text5",
"displayText": "text6"
},
{
"displayText": "text7"
},
{
"displayText": "text8"
},
{
"headline": "text9",
"displayText": "text10"
}
]
}
]';
my $json = JSON->new;
my $data = $json->decode($json_text);
print Data::Dumper->Dump($data);
# This is pseudocode:
foreach ( $data->[] ) {
print "\$_ is $_";
}
I would like to walk through the elements in the JSON and find all sent and all displayText values. But I do not know how to dereference the first element; the first element is an array without any name in this case.
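Since $json->decode returns a plain array reference for this input, one possible way to walk it (a sketch, not necessarily the answer the poster settled on) is:

```perl
# $data is an array reference; each element is a hash with "sent" and "data".
for my $item ( @{ $data } ) {
    print "sent: $item->{sent}\n";
    for my $entry ( @{ $item->{data} } ) {
        print "  headline:    $entry->{headline}\n" if exists $entry->{headline};
        print "  displayText: $entry->{displayText}\n";
    }
}
```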
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with a local development…
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.
Dave writes:
During December, I fixed assorted bugs, and started work on another tranche of ExtUtils::ParseXS fixups, this time focussing on:
adding and rewording warning and error messages, and adding new tests for them;
improving test coverage: all XS keywords have tests now;
reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format; splitting a large test file; modernising tests;
refactoring and improving the length(str) pseudo-parameter implementation.
By the end of this report period, that work was about half finished; it is currently finished and being reviewed.
Summary:
* 10:25 GH #16197 re eval stack unwinding
* 1:39 GH #23903 BBC: bleadperl breaks ETHER/Package-Stash-XS-0.30.tar.gz
* 0:09 GH #23986 Perl_rpp_popfree_to(SV sp**) questionable design
* 3:02 fix Pod::Html stderr noise
* 27:47 improve ExtUtils::ParseXS
* 1:47 modernise perlxs.pod
Total: * 44:49 (HH::MM)
Tony writes:
```
[Hours] [Activity]

2025/12/01 Monday
 0.23 memEQ cast discussion with khw
 0.42 #23965 testing, review and comment
 2.03 #23885 review, testing, comments
 0.08 #23970 review and approve
 0.13 #23971 review and approve
 0.08 #23965 follow-up
 2.97

2025/12/02 Tuesday
 0.73 #23969 research and comment
 0.30 #23974 review and approve
 0.87 #23975 review and comment
 0.38 #23975 review reply and approve
 0.25 #23976 review, research and approve
 0.43 #23977 review, research and approve
 1.20 #23918 try to produce expected bug and succeed
 4.16

2025/12/03 Wednesday
 0.35 #23883 check updates and approve with comment
 0.72 #23979 review, try to trigger the messages and approve
 0.33 #23968 review, research and approve
 0.25 #23961 review and comment
 2.42 #23918 fix handling of context, testing, push to update, comment on overload handling plans, start on it
 4.07

2025/12/04 Thursday
 2.05 #23980 review, comment and approve, fix group_end() decorator and make PR 23983
 0.25 #23982 review, research and approve
 1.30 #23918 test for skipping numeric overload, and fix, start on force overload
 3.60

2025/12/05 Friday
 0.63 #23980 comment
 0.63

2025/12/08 Monday
 0.90 #23984 review and comment
 0.13 #23988 review and comment
 2.03 #23918 work on force overload implmentation
 1.45 #23918 testing, docs
 4.51

2025/12/09 Tuesday
 0.32 github notifications
 1.23 #23918 add more tests
 0.30 #23992 review
 0.47 #23993 research, testing and comment
 0.58 #23993 review and comment
 2.90

2025/12/10 Wednesday
 0.72 #23992 review updates, testing and comment
 1.22 #23782 review (and some #23885 discussion in irc)
 1.35 look into Jim’s freebsd core dump, reproduce and find cause, email him and briefly comment in irc, more 23885 discussion and approve 23885
 3.29

2025/12/11 Thursday
 0.33 #23997 comment
 1.08 #23995 research and comment
 0.47 #23998 review and approve
 1.15 #23918 cleanup
 3.03

2025/11/15 Saturday
 0.20 #23998 review updates and approve
 0.53 #23975 review comment, research and follow-up
 1.25 #24002 review discussion, debugging and comment
 0.28 #23993 comment
 0.67 #23918 commit cleanup
 0.20 #24002 follow-up
 0.65 #23975 research and follow-up
 3.78

2025/12/16 Tuesday
 0.40 #23997 review, comment, approve
 0.37 #23988 review and comment
 0.95 #24001 debugging and comment
 0.27 #24006 review and comment
 0.23 #24004 review and nothing to say
 1.27 #23918 more cleanup, documentation
 3.49

2025/12/17 Wednesday
 0.32 #24008 testing, debugging and comment
 0.08 #24006 review update and approve
 0.60 #23795 quick re-check and approve
 1.02 #23918 more fixes, address each PR comment and push for CI
 0.75 #23956 work on a test and a fix, push for CI
 0.93 #24001 write a test, and a fix, testing
 0.67 #24001 write an inverted test too, commit message and push for CI
 0.17 #23956 perldelta
 0.08 #23956 check CI results, make PR 24010
 0.15 #24001 perldelta and make PR 24011
 4.77

2025/12/18 Thursday
 0.27 #24001 rebase, local testing, push for CI
 1.15 #24012 research
 0.50 #23995 testing and comment
 0.08 #24001 check CI results and apply to blead
 2.00

Which I calculate is 43.2 hours.

Approximately 32 tickets were reviewed or worked on, and 1 patches were applied.
```
Paul writes:
A mix of focus this month. I was hoping to get attributes-v2 towards
something that could be reviewed and merged, but then I bumped into a
bunch of refalias-related issues. Also spent about 5 hours reviewing
Dave's giant xspod rewrite.
- 1 = Rename THING token in grammar to something more meaningful
- https://github.com/Perl/perl5/pull/23982
- 4 = Continue work on attributes-v2
- 1 = BBC Ticket on Feature-Compat-Class
- https://github.com/Perl/perl5/issues/23991
- 2 = Experiment with refalias parameters with defaults in XS-Parse-Sublike
- 1 = Managing the PPC documents and overall process
- 2 = Investigations into the refalias and declared_refs features, to see if we can un-experiment them
- 2 = Add a warning to refalias that breaks closures
- https://github.com/Perl/perl5/pull/24026 (work-in-progress)
- 3 = Restore refaliased variables after foreach loop
- https://github.com/Perl/perl5/issues/24028
- https://github.com/Perl/perl5/pull/24029
- 3 = Clear pad after multivariable foreach
- https://github.com/Perl/perl5/pull/24034 (not yet merged)
- 6 = Github code reviews (mostly on Dave's xspod)
- https://github.com/Perl/perl5/pull/23795
Total: 25 hours
- App::Greple - extensible grep with lexical expression and region handling
- Version: 10.02 on 2026-01-09, with 56 votes
- Previous CPAN version: 10.01 was 9 days before
- Author: UTASHIRO
- App::Netdisco - An open source web-based network management tool.
- Version: 2.097002 on 2026-01-09, with 818 votes
- Previous CPAN version: 2.097001
- Author: OLIVER
- App::Sqitch - Sensible database change management
- Version: v1.6.1 on 2026-01-06, with 3087 votes
- Previous CPAN version: v1.6.0 was 3 months before
- Author: DWHEELER
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260104.001 on 2026-01-04, with 25 votes
- Previous CPAN version: 20251228.001 was 6 days before
- Author: BRIANDFOY
- DateTime::Format::Natural - Parse informal natural language date/time strings
- Version: 1.23 on 2026-01-04, with 19 votes
- Previous CPAN version: 1.23 was 5 days before
- Author: SCHUBIGER
- Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
- Version: 1.69 on 2026-01-10, with 19 votes
- Previous CPAN version: 1.68 was 3 months, 26 days before
- Author: DDICK
- GD - Perl interface to the libgd graphics library
- Version: 2.84 on 2026-01-04, with 32 votes
- Previous CPAN version: 2.83 was 1 year, 6 months, 11 days before
- Author: RURBAN
- IO::Socket::SSL - Nearly transparent SSL encapsulation for IO::Socket::INET.
- Version: 2.098 on 2026-01-06, with 49 votes
- Previous CPAN version: 2.097
- Author: SULLR
- JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.632 on 2026-01-06, with 16 votes
- Previous CPAN version: 0.631 was 12 days before
- Author: ETHER
- MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
- Version: 2.037000 on 2026-01-07, with 27 votes
- Previous CPAN version: 2.036000
- Author: MICKEY
- MIME::Lite - low-calorie MIME generator
- Version: 3.035 on 2026-01-08, with 35 votes
- Previous CPAN version: 3.034 was 2 days before
- Author: RJBS
- Module::Starter - a simple starter kit for any module
- Version: 1.81 on 2026-01-09, with 34 votes
- Previous CPAN version: 1.80
- Author: XSAWYERX
- Perl::Tidy - indent and reformat perl scripts
- Version: 20260109 on 2026-01-08, with 147 votes
- Previous CPAN version: 20250912 was 3 months, 26 days before
- Author: SHANCOCK
- perlsecret - Perl secret operators and constants
- Version: 1.018 on 2026-01-09, with 55 votes
- Previous CPAN version: 1.017 was 4 years, 2 months before
- Author: BOOK
- Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.010001 on 2026-01-06, with 148 votes
- Previous CPAN version: 2.010000 was 7 days before
- Author: TOBYINK
- UV - Perl interface to libuv
- Version: 2.001 on 2026-01-06, with 14 votes
- Previous CPAN version: 2.000 was 4 years, 5 months, 8 days before
- Author: PEVANS
In a script I'm using constants (use constant ...) to allow re-use in actual regular expressions, using the pattern from https://stackoverflow.com/a/69379743/6607497.
However, when using a {...} repeat specifier following such a constant expansion, Perl wants to treat the constant as a hash variable.
The question is how to avoid that.
Code example:
main::(-e:1): 1
DB<1> use constant CHARSET => '[[:graph:]]'
DB<2> x "foo" =~ qr/^[[:graph:]]{3,}$/
0 1
DB<3> x "foo" =~ qr/^${\CHARSET}{3,}$/
Not a HASH reference at (eval 8)[/usr/lib/perl5/5.26.1/perl5db.pl:738] line 2.
DB<4> x "foo" =~ qr/^${\CHARSET}\{3,}$/
empty array
DB<5> x $^V
0 v5.26.1
According to https://stackoverflow.com/a/79845011/6607497 a solution may be to add a space that's being ignored, like this: qr/^${\CHARSET} {3,}$/x; however I don't understand why this works, because outside of a regular expression the space before { is being ignored:
DB<6> x "foo" =~ qr/^${\CHARSET} {3,}$/x
0 1
DB<7> %h = (a => 3)
DB<8> x $h{a}
0 3
DB<9> x $h {a}
0 3
The manual page (perlop(1) on "Quote and Quote-like Operators") isn't very precise on that:
For constructs that do interpolate, variables beginning with "$" or "@" are interpolated. Subscripted variables such as $a[3] or "$href->{key}[0]" are also interpolated, as are array and hash slices. But method calls such as "$obj->meth" are not.
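One way to sidestep the ambiguity (a sketch of common workarounds, not necessarily the accepted answer) is to keep the quantifier from appearing right after the closing brace of the interpolation, for example by wrapping the constant in a non-capturing group or by building the pattern in an ordinary scalar first:

```perl
use strict;
use warnings;
use constant CHARSET => '[[:graph:]]';

# Wrap the interpolated constant so {3,} follows ")" rather than "}".
my $re1 = qr/^(?:${\ CHARSET()}){3,}$/;

# Or interpolate through a plain scalar built outside the regex.
my $pattern = CHARSET . '{3,}';
my $re2     = qr/^$pattern$/;

print "foo" =~ $re1 ? "match\n" : "no match\n";   # match
print "foo" =~ $re2 ? "match\n" : "no match\n";   # match
```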
foobar is a Perl script that prints to both standard output and standard error. In a separate Perl script echo-stderr, I run foobar and capture its standard error using IPC::Open3's open3 function, and simply echo it back.
Here's the code for echo-stderr:
#!/usr/bin/perl -w
use IPC::Open3;
use Symbol 'gensym';
$fh = gensym;
$pid = open3('STDIN', 'STDOUT', $fh, './foobar') or die "$0: failed to run ./foobar\n";
while ( <$fh> ) {
print STDERR $_;
}
close $fh;
waitpid($pid, 0);
The result is that whatever foobar writes to standard error is printed, but nothing that it writes to standard output is.
And there is an error at the end:
<message written to STDERR>
<message written to STDERR>
...
Unable to flush stdout: Bad file descriptor
What is the reason for this error?
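For comparison, here is a hedged sketch of a calling pattern that lets the child share the parent's STDIN and STDOUT (using the "<&" / ">&" forms documented in IPC::Open3) while still capturing standard error; it is an illustration of the usual idiom, not necessarily the explanation of the error above:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

my $err = gensym;
# '<&STDIN' and '>&STDOUT' dup the parent's handles into the child,
# so only standard error comes back through $err.
my $pid = open3('<&STDIN', '>&STDOUT', $err, './foobar');
print STDERR $_ while <$err>;
close $err;
waitpid $pid, 0;
```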
Whenever I’m building a static website, I almost never start by reaching for Apache, nginx, Docker, or anything that feels like “proper infrastructure”. Nine times out of ten I just want a directory served over HTTP so I can click around, test routes, check assets, and see what happens in a real browser.
For that job, I’ve been using App::HTTPThis for years.
It’s a simple local web server you run from the command line. Point it at a directory, and it serves it. That’s it. No vhosts. No config bureaucracy. No “why is this module not enabled”. Just: run a command and you’ve got a website.
Why I’ve used it for years
Static sites are deceptively simple… right up until they aren’t.
- You want to check that relative links behave the way you think they do.
- You want to confirm your CSS and images are loading with the paths you expect.
- You want to reproduce “real HTTP” behaviour (caching headers, MIME types, directory handling) rather than viewing files directly from disk.
Sure, you can open file:///.../index.html in a browser, but that’s not the same thing as serving it over HTTP. And setting up Apache (or friends) feels like bringing a cement mixer to butter some toast.
With http_this, the workflow is basically:
- cd into your site directory
- run a single command
- open a URL
- get on with your life
It’s the “tiny screwdriver” that’s always on my desk.
Why I took it over
A couple of years ago, the original maintainer had (entirely reasonably!) become too busy elsewhere and the distribution wasn’t getting attention. That happens. Open source is like that.
But I was using App::HTTPThis regularly, and I had one small-but-annoying itch: when you visited a directory URL, it would always show a directory listing – even if that directory contained an index.html. So instead of behaving like a typical web server (serve index.html by default), it treated index.html as just another file you had to click.
That’s exactly the sort of thing you notice when you’re using a tool every day, and it was irritating enough that I volunteered to take over maintenance.
(If you want to read more on this story, I wrote a couple of blog posts.)
What I’ve done since taking it over
Most of the changes are about making the “serve a directory” experience smoother, without turning it into a kitchen-sink web server.
1) Serve index pages by default (autoindex)
The first change was to make directory URLs behave like you’d expect: if index.html exists, serve it automatically. If it doesn’t, you still get a directory listing.
2) Prettier index pages
Once autoindex was in place, I then turned my attention to the fallback directory listing page. If there isn’t an index.html, you still need a useful listing — but it doesn’t have to look like it fell out of 1998. So I cleaned up the listing output and made it a bit nicer to read when you do end up browsing raw directories.
3) A config file
Once you’ve used a tool for a while, you start to realise you run it the same way most of the time.
A config file lets you keep your common preferences in one place instead of re-typing options. It keeps the “one command” feel, but gives you repeatability when you want it.
4) --host option
The ability to control the host binding sounds like an edge case until it isn’t.
Sometimes you want:
- only localhost access for safety;
- access from other devices on your network (phone/tablet testing);
- behaviour that matches a particular environment.
A --host option gives you that control without adding complexity to the default case.
The Bonjour feature (and what it’s for)
This is the part I only really appreciated recently: App::HTTPThis can advertise itself on your local network using mDNS / DNS-SD – commonly called Bonjour on Apple platforms, Avahi on Linux, and various other names depending on who you’re talking to.
It’s switched on with the --name option.
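For example, an invocation along the lines of `http_this --name MyService` should be all it takes (check the module's documentation for the exact usage on your platform).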
When you do that, http_this publishes an _http._tcp service on your local network with the instance name you chose (MyService in this case). Any device on the same network that understands mDNS/DNS-SD can then discover it and resolve it to an address and port, without you having to tell anyone, “go to http://192.168.1.23:7007/”.
Confession time: I ignored this feature for ages because I’d mentally filed it under “Apple-only magic” (Bonjour! very shiny! probably proprietary!). It turns out it’s not Apple-only at all; it’s a set of standard networking technologies that are supported on pretty much everything, just under a frankly ridiculous number of different names. So: not Apple magic, just local-network service discovery with a branding problem.
Because I’d never really used it, I finally sat down and tested it properly after someone emailed me about it last week, and it worked nicely, nicely enough that I’ve now added a BONJOUR.md file to the repo with a practical explanation of what’s going on, how to enable it, and a few ways to browse/discover the advertised service.
(If you’re curious, look for _http._tcp and your chosen service name.)
It’s a neat quality-of-life feature if you’re doing cross-device testing or helping someone else on the same network reach what you’re running.
Related tools in the same family
App::HTTPThis is part of a little ecosystem of “run a thing here quickly” command-line apps. If you like the shape of http_this, you might also want to look at these siblings:
- https_this : like http_this, but served over HTTPS (useful when you need to test secure contexts, service workers, APIs that require HTTPS, etc.)
- cgi_this : for quick CGI-style testing without setting up a full web server stack
- dav_this : serves content over WebDAV (handy for testing clients or workflows that expect DAV)
- ftp_this : quick FTP server for those rare-but-real moments when you need one
They all share the same basic philosophy: remove the friction between “I have a directory” and “I want to interact with it like a service”.
Wrapping up
I like tools that do one job, do it well, and get out of the way. App::HTTPThis has been that tool for me for years and it’s been fun (and useful) to nudge it forward as a maintainer.
If you’re doing any kind of static site work — docs sites, little prototypes, generated output, local previews — it’s worth keeping in your toolbox.
And if you’ve got ideas, bug reports, or platform notes (especially around Bonjour/Avahi weirdness), I’m always happy to hear them.
The post App::HTTPThis: the tiny web server I keep reaching for first appeared on Perl Hacks.