magic.t: don't run %ENV keys through quotemeta on VMS

A couple of the tests for downgrading %ENV keys with UTF-8 characters
in them were failing.  It turns out the native facilities for setting
and retrieving logical names can handle UTF-8 characters in the key,
but not if you muddy the waters by injecting escape characters, which
mean nothing here.
S_clear_special_blocks - allow caller to notice that cv has been freed

GH #16868: `use strict;END{{{{}}}}{END}END{e}` triggers an assertion failure:
```
perl: op.c:10342: CV *Perl_newATTRSUB_x(I32, OP *, OP *, OP *, OP *,
_Bool): Assertion `!cv || evanescent || SvREFCNT((SV*)cv) != 0'
failed.
```

In the following block, `S_clear_special_blocks` frees `cv`, but doesn't signal
that it has done so to the caller. The caller continues to act as if `cv` had
not been freed.
```
        if (name) {
            if (PL_parser && PL_parser->error_count) {
                clear_special_blocks(name, gv, cv);
            }
```

This commit changes `S_clear_special_blocks` from a void function to
returning a CV*:
```
    return SvIS_FREED(cv) ? NULL : cv;
```
The caller now assigns the result of the call back to `cv`.

This causes the test case to croak:
```
Bareword "e" not allowed while "strict subs" in use at -e line 1.
Execution of -e aborted due to compilation errors.
```
S_scan_const: abort compilation after \N{} errors

Upon encountering errors while parsing `\N{}` sequences, the parser used
to try to continue parsing for a bit before exiting. However, under
certain circumstances these errors are associated with the savestack
being incorrectly adjusted.

GH #16930 is an example of this where:
* `PL_comppad_name` points to one struct during allocation of pad slots.
* Savestack activity causes `PL_comppad_name` to point somewhere else.
* The peephole optimiser is called, but needs `PL_comppad_name` to point
to the first struct to match up with the pad allocations.

With this commit, errors in parsing `\N{}` sequences are immediately fatal.
regcomp: Capture group names need to be legal Unicode names

Previous commits have explicitly made sure that Perl identifiers are
legal Unicode names.  This extends that to regular expression group
(such as capturing) names.
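As a minimal illustration (not code from the commit): named capture groups carry identifier-like names, and this change validates those names with the same Unicode identifier rules applied to other Perl identifiers.

```perl
use strict;
use warnings;

# Named capture groups use identifier-like names; under this change those
# names must be legal under the same Unicode rules as other identifiers.
if ( "2026-02-10" =~ /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/ ) {
    print "$+{year}-$+{month}-$+{day}\n";   # 2026-02-10
}
```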

toke.c: Add parse_ident_msg()

Perl commits on GitHub

This new function can be used to have parse_ident() return an error
message to its caller instead of dying.  It turns out that regcomp.c
is in want of this functionality.

Then there’s Perl

Perl on Medium

Since my native language isn’t English, the German text follows below.

100 days of Perl …

Perl on Medium

… or maybe some more ;)

I've gone through the Custom Data Labels documentation carefully, and reduced my code down to a simple example:

#!/usr/bin/perl
use strict;
use warnings;
use Excel::Writer::XLSX;

my $workbook  = Excel::Writer::XLSX->new( 'chart_custom_labels.xlsx' );
my $worksheet = $workbook->add_worksheet();

# Chart data
my $data = [
    [ 'Cat', 'Dog', 'Pig' ],
    [ 10, 40, 50 ],
];

$worksheet->write( 'A1', $data );

# Custom labels
my $custom_labels = [
    { value => 'Jan' },
    { value => 'Feb' },
    { value => 'Mar' },
];

my $chart = $workbook->add_chart( type => 'column' );

# Configure the series with custom string data labels
$chart->add_series(
    categories => '=Sheet1!$A$1:$A$3',
    values     => '=Sheet1!$B$1:$B$3',
    data_labels => {
        value => 1, 
        custom => $custom_labels,
    },
);

$workbook->close();

I expected this to apply labels of "Jan", "Feb", and "Mar" to the graph. However, the labels I get are just the values I would have gotten from value => 1 even if I had not included the custom labels line, i.e. 10, 40, 50:


I've also tried removing the value => 1 line but keeping the custom line, and that results in no labels at all. I've also tried a different approach where I keep the value => 1 line but use the delete property of the custom labels to remove some labels. That did not work either, and just kept the values as labels.

Is this functionality broken or am I missing something?

Environment details:

Cygwin

Perl v5.40.3

Excel::Writer::XLSX 1.03

Lock and unlock hash using Hash::Util

Perl Maven

If you don't like autovivification, or simply would like to make sure your code does not accidentally alter a hash, the Hash::Util module is for you.

You can lock_hash and later you can unlock_hash if you'd like to make some changes to it.

In this example you can see 3 different actions commented out. Each one would raise an exception if someone tries to call them on a locked hash. After we unlock the hash we can execute those actions again.

I tried this both in perl 5.40 and 5.42.

examples/locking_hash.pl

use strict;
use warnings;
use feature 'say';

use Hash::Util qw(lock_hash unlock_hash);
use Data::Dumper qw(Dumper);


my %person = (
    fname => "Foo",
    lname => "Bar",
);
lock_hash(%person);

print Dumper \%person;
print "$person{fname} $person{lname}\n";
say "fname exists ", exists $person{fname};
say "language exists ", exists $person{language};

# $person{fname} = "Peti";     # Modification of a read-only value attempted
# delete $person{lname};       # Attempt to delete readonly key 'lname' from a restricted hash
# $person{language} = "Perl";  # Attempt to access disallowed key 'language' in a restricted hash

unlock_hash(%person);

$person{fname} = "Peti";     # now allowed
delete $person{lname};       # now allowed
$person{language} = "Perl";  # now allowed

print Dumper \%person;

$VAR1 = {
          'lname' => 'Bar',
          'fname' => 'Foo'
        };
Foo Bar
fname exists 1
language exists
$VAR1 = {
          'language' => 'Perl',
          'fname' => 'Peti'
        };

Perl Maven Online - next session Feb 10

r/perl

Perl Developers who want to contribute to Perl open source development can learn how by joining a live online session with the Perl Maven Group.

Next live video session details :

Tuesday, February 10

1:00 PM - 3:00 PM EST

Register for the group via Luma on the link below :
https://luma.com/3vlpqn8g

Previous session recordings are available on YouTube (please like and subscribe to the channel!):

Open source contribution - Perl - MIME::Lite - GitHub Actions, test coverage and adding a test

https://www.youtube.com/watch?v=XuwHFAyldsA

Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions

https://www.youtube.com/watch?v=Y1La0sfcvbI


Otobo supports the German Perl Workshop

r/perl

Jinja2TT2: Jinja2 to Template Toolkit Transpiler

dev.to #perl

jinja2tt2 logo: a torii and a camel

A Perl transpiler that converts Jinja2 templates to Template Toolkit 2 (TT2) syntax.

Source: https://github.com/lucianofedericopereira/jinja2tt2

Description

Jinja2 is deeply integrated with Python, making a direct port impractical. However, since TT2 and Jinja2 share similar concepts and syntax patterns, this transpiler performs a mechanical translation between the two template languages.

Why TT2?

TT2 and Jinja2 share:

  • Variable interpolation: {{ var }} maps to [% var %]
  • Control structures: {% if %} / {% for %} map to [% IF %] / [% FOREACH %]
  • Filters: {{ name|upper }} maps to [% name | upper %]
  • Includes, blocks, and inheritance (conceptually similar)
  • Expression grammar close enough to map mechanically
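As a toy sketch (not the jinja2tt2 implementation, which tokenizes and builds an AST), the simplest of these mappings can be demonstrated with plain substitutions:

```perl
use strict;
use warnings;

# Toy illustration only: translate a few of the mappings above with regexes.
sub toy_transpile {
    my ($src) = @_;
    $src =~ s/\{\{\s*(.*?)\s*\}\}/[% $1 %]/g;         # {{ var }}   -> [% var %]
    $src =~ s/\{%\s*if\s+(.*?)\s*%\}/[% IF $1 %]/g;   # {% if x %}  -> [% IF x %]
    $src =~ s/\{%\s*endif\s*%\}/[% END %]/g;          # {% endif %} -> [% END %]
    return $src;
}

print toy_transpile('{% if user %}Hi {{ user.name }}{% endif %}'), "\n";
# [% IF user %]Hi [% user.name %][% END %]
```

The real transpiler parses properly rather than pattern-matching, which is what lets it handle nesting, filters, and whitespace control correctly.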

Installation

No external dependencies beyond core Perl 5.20+.

git clone https://github.com/lucianofedericopereira/jinja2tt2
cd jinja2tt2

Usage

Command Line

# Transpile a file to stdout
./bin/jinja2tt2 template.j2

# Transpile with output to file
./bin/jinja2tt2 template.j2 -o template.tt

# Transpile in-place (creates .tt file)
./bin/jinja2tt2 -i template.j2

# From stdin
echo '{{ name|upper }}' | ./bin/jinja2tt2

# Debug mode (shows tokens and AST)
./bin/jinja2tt2 --debug template.j2

Programmatic Usage

use Jinja2::TT2;

my $transpiler = Jinja2::TT2->new();

# From string
my $tt2 = $transpiler->transpile('{{ user.name|upper }}');
# Result: [% user.name.upper %]

# From file
my $tt2 = $transpiler->transpile_file('template.j2');

Supported Constructs

Variables

{{ foo }}           →  [% foo %]
{{ user.name }}     →  [% user.name %]
{{ items[0] }}      →  [% items.0 %]

Filters

{{ name|upper }}              →  [% name.upper %]
{{ name|lower|trim }}         →  [% name.lower.trim %]
{{ items|join(", ") }}        →  [% items.join(', ') %]
{{ name|default("Guest") }}   →  [% (name || 'Guest') %]

Conditionals

{% if user %}          →  [% IF user %]
{% elif admin %}       →  [% ELSIF admin %]
{% else %}             →  [% ELSE %]
{% endif %}            →  [% END %]

Loops

{% for item in items %}    →  [% FOREACH item IN items %]
{{ loop.index }}           →  [% loop.count %]
{{ loop.first }}           →  [% loop.first %]
{{ loop.last }}            →  [% loop.last %]
{% endfor %}               →  [% END %]

Blocks and Macros

{% block content %}        →  [% BLOCK content %]
{% endblock %}             →  [% END %]

{% macro btn(text) %}      →  [% MACRO btn(text) BLOCK %]
{% endmacro %}             →  [% END %]

Comments

{# This is a comment #}    →  [%# This is a comment %]

Whitespace Control

{{- name -}}               →  [%- name -%]
{%- if x -%}               →  [%- IF x -%]

Other Constructs

  • {% include "file.html" %} → [% INCLUDE file.html %]
  • {% set x = 42 %} → [% x = 42 %]
  • Ternary: {{ x if cond else y }} → [% (cond ? x : y) %]
  • Boolean literals: true/false → 1/0

Filter Mapping

Jinja2       TT2 Equivalent
upper        .upper
lower        .lower
trim         .trim
first        .first
last         .last
length       .size
join         .join
reverse      .reverse
sort         .sort
escape / e   | html filter
default      || operator
replace      .replace

Some filters require TT2 plugins (e.g., tojson needs Template::Plugin::JSON).

Loop Variable Mapping

Jinja2 TT2
loop.index loop.count
loop.index0 loop.index
loop.first loop.first
loop.last loop.last
loop.length loop.size

Limitations

  • Template inheritance ({% extends %}) requires manual adjustment for TT2's WRAPPER pattern
  • Autoescape is not directly supported in TT2
  • Some filters need custom TT2 plugins or vmethods
  • Complex Python expressions may need review

Running Tests

prove -l t/

Architecture

  1. Tokenizer: Splits Jinja2 source into tokens (text, variables, statements, comments)
  2. Parser: Builds an Abstract Syntax Tree (AST) from the token stream
  3. Emitter: Walks the AST and generates equivalent TT2 code

Credits

  • Luciano Federico Pereira - Author

License

This is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License (LGPL) version 2.1 as published by the Free Software Foundation.

My name is Alex. Over the last few years I’ve implemented several versions of Raku’s documentation format (Synopsis 26 / Raku’s Pod) in Perl and JavaScript.

At an early stage, I shared the idea of creating a lightweight version of Raku’s Pod with Damian Conway, the original author of the Synopsis 26 documentation specification (S26). He was supportive of the concept and offered several valuable insights that helped shape the vision of what later became Podlite.

Today, Podlite is a small block-based markup language that is easy to read as plain text, simple to parse, and flexible enough to be used everywhere — in code, notes, technical documents, long-form writing, and even full documentation systems.

This article is an introduction for the Perl community — what Podlite is, how it looks, how you can already use it in Perl via a source filter, and what’s coming next.

The Block Structure of Podlite

One of the core ideas behind Podlite is its consistent block-based structure. Every meaningful element of a document — a heading, a paragraph, a list item, a table, a code block, a callout — is represented as a block. This makes documents both readable for humans and predictable for tools.

Podlite supports three interchangeable block styles: delimited, paragraph, and abbreviated.

Abbreviated blocks (=BLOCK)

This is the most compact form. A block starts with = followed by the block name.

=head1 Installation Guide
=item Perl 5.8 or newer
=para This tool automates the process.
  • ends on the next directive or a blank line
  • best used for simple one-line blocks
  • cannot include configuration options (attributes)

Paragraph blocks (=for BLOCK)

Use this form when you want a multi-line block or need attributes.

=for code :lang<perl>
say "Hello from Podlite!";
  • ends when a blank line appears
  • can include complex content
  • allows attributes such as :lang, :id, :caption, :nested, …

Delimited blocks (=begin BLOCK … =end BLOCK)

The most expressive form. Useful for large sections, nested blocks, or structures that require clarity.

=begin nested :notify<important>
Make sure you have administrator privileges.
=end nested
  • explicit start and end markers
  • perfect for code, lists, tables, notifications, markdown, formulas
  • can contain other blocks, including nested ones

These block styles differ in syntax convenience, but all produce the same internal structure.

[Diagram: the three block styles mapping to the same internal structure]

Regardless of which syntax you choose:

  • all three forms represent the same block type
  • attributes apply the same way (:lang, :caption, :id, …)
  • tools and renderers treat them uniformly
  • nested blocks work identically
  • you can freely mix styles inside a document

Example: Comparing POD and Podlite

Let’s see how the same document looks in traditional POD versus Podlite:

POD vs Podlite

Each block has clear boundaries, so you don’t need blank lines between them. This makes your documentation more compact and easier to read. It is also one of the reasons Podlite remains powerful: the syntax stays flexible, while the underlying document model stays clean and consistent.

This Podlite example is rendered as shown in the following screenshot:

Podlite example

Inside the Podlite Specification 1.0

One important point about Podlite is that it is first and foremost a specification. It does not belong to any particular programming language, platform, or tooling ecosystem. The specification defines the document model, syntax rules, and semantics.

From the Podlite 1.0 specification, notable features include:

  • headings (=head1, =head2, …)
  • lists and definition lists, including task lists
  • tables (simple and advanced)
  • CSV-backed tables
  • callouts / notifications (=nested :notify<tip|warning|important|note|caution>)
  • table of contents (=toc)
  • includes (=include)
  • embedded data (=data)
  • pictures (=picture and inline P<>)
  • formulas (=formula and inline F<>)
  • user defined blocks and markup codes
  • Markdown integration

The =markdown block is part of the standard block set defined by the Podlite Specification 1.0. This means Markdown is not an add-on or optional plugin — it is a fully integrated, first-class component of the language.

Markdown content becomes part of Podlite’s unified document structure, and its headings merge naturally with Podlite headings inside the TOC and document outline.

Below is a screenshot showing how Markdown inside Perl is rendered in the in-development VS Code extension, demonstrating both the block structure and live preview:

Podlite source, including =markdown block

Using Podlite in Perl via the source filter

To make Podlite directly usable in Perl code, there is a module on CPAN: Podlite — Use Podlite markup language in Perl programs

A minimal example could look like this:

use Podlite; # enable Podlite blocks inside Perl

=head1 Quick Example
=begin markdown
Podlite can live inside your Perl programs.
=end markdown
print "Podlite active\n";

Roadmap: what’s next for Podlite

Podlite continues to grow, and the Specification 1.0 is only the beginning. Several areas are already in active development, and more will evolve with community feedback.

Some of the things currently planned or in progress:

  • CLI tools
    • command-line utilities for converting Podlite to HTML, PDF, man pages, etc.
    • improve pipelines for building documentation sites from Podlite sources
  • VS Code integration
  • Ecosystem growth
    • develop comprehensive documentation and tutorials
    • community-driven block types and conventions

Try Podlite and share feedback

If this resonates with you, I’d be very happy to hear from you:

  • ideas for useful block types
  • suggestions for tools or integrations
  • feedback on the syntax and specification

https://github.com/podlite/podlite-specs/discussions

Even small contributions — a comment, a GitHub star, or trying an early tool — help shape the future of the specification and encourage further development.

Useful links:

Thanks for reading, Alex

Ready, Set, Compile... you slow Camel

r/perl

Weekly Challenge: Maximum Encryption

dev.to #perl

Weekly Challenge 358

Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. It's a great way for us all to practice some coding.

Challenge, My solutions

Task 1: Max Str Value

Task

You are given an array of alphanumeric strings, @strings.

Write a script to find the max value of the alphanumeric strings in the given array. The value of a string is its numeric representation if it comprises digits only, otherwise the length of the string.

My solution

This task can be achieved in a single line in both Python and Perl, while still maintaining readability. For the Python solution I use a generator expression to convert each string into an integer (if it is all digits) or the length of the string (if it isn't). This is wrapped in the max function to return the largest of these values.

def max_string_value(input_strings: list) -> int:
    return max(int(s) if s.isdigit() else len(s) for s in input_strings)

Not to be outdone, Perl can achieve similar functionality by using the map function, converting each string into its numeric representation or its length. Perl doesn't have a built-in max function, but it is available from the List::Util package.

use v5.36;
use List::Util qw(max);

sub main (@input_strings) {
    say max( map { /^\d+$/ ? $_ : length($_) } @input_strings );
}

Examples

$ ./ch-1.py "123" "45" "6"
123

$ ./ch-1.py "abc" "de" "fghi"
4

$ ./ch-1.py "0012" "99" "a1b2c"
99

$ ./ch-1.py "x" "10" "xyz" "007"
10

$ ./ch-1.py "hello123" "2026" "perl"
2026

Task 2: Encrypted String

Task

You are given a string $str and an integer $int.

Write a script to encrypt the string using the algorithm - for each character $char in $str, replace $char with the $int th character after $char in the alphabet, wrapping if needed and return the encrypted string.

My solution

For this task, I start by reducing $int (called i in Python, as int is a built-in name) modulo 26 (the remainder). If that value is 0, I return the original string, as no encryption is required.

def encrypted_string(input_string: str, i: int) -> str:
    i = i % 26

    if i == 0:
        return input_string

The next step is creating a mapping table. I start with the variable old_letters that has all the lower case letters of the English alphabet. I create a new_letters string by slicing the old_letters string at the appropriate point. I then double the length of each string by adding the upper case equivalent string. Finally, I use dict(zip()) to convert the strings to a dictionary where the key is the original letter and the value is the new letter.

    old_letters = string.ascii_lowercase
    new_letters = old_letters[i:] + old_letters[:i]
    old_letters += old_letters.upper()
    new_letters += new_letters.upper()
    mapping = dict(zip(old_letters, new_letters))

The final step is to loop through each character and use the mapping dictionary to replace the letter, or use the original character if it is not found (numbers, spaces, punctuation characters, etc).

    return "".join(mapping.get(char, char) for char in input_string)

The Perl code follows the same logic. It uses the splice function to create the @new_letters array, and both old_letters and new_letters are arrays. The mesh function also comes from the List::Util package. Perl automatically converts the flat list returned by mesh into key/value pairs when it is assigned to the %mapping hash.

use v5.36;
use List::Util qw(mesh);

sub main ( $input_string, $i ) {
    $i %= 26;

    if ( $i == 0 ) {
        say $input_string;
        return;
    }

    my @old_letters = my @new_letters = ( "a" .. "z" );
    push @new_letters, splice( @new_letters, 0, $i );

    push @old_letters, map { uc } @old_letters;
    push @new_letters, map { uc } @new_letters;
    my %mapping = mesh \@old_letters, \@new_letters;

    say join "", map { $mapping{$_} // $_ } split //, $input_string;
}

Examples

$ ./ch-2.py abc 1
bcd

$ ./ch-2.py xyz 2
zab

$ ./ch-2.py abc 27
bcd

$ ./ch-2.py hello 5
mjqqt

$ ./ch-2.py perl 26
perl

This week in PSC (213) | 2026-01-26

blogs.perl.org

All three of us discussed:

  • We agree with the general idea of an improved PRNG, so we encourage Scott to continue working on the PR to get it into a polished state ready for merge
  • Haarg’s “missing import” PR now looks good; Paul has LGTM’ed it
  • TLS in core still remains a goal for the next release cycle. Crypt::OpenSSL3 might now be in a complete enough state to support a minimal viable product “https” client to be built on top of it, that could be used by an in-core CPAN client

[P5P posting of this summary]

Ready, Set, Compile... you slow Camel

blogs.perl.org

"Perl is slow."

I've heard this for years, well since I started. You probably have too. And honestly? For a long time, I didn't have a great rebuttal. Sure, Perl's fast enough for most things; it's well known for text processing, gluing code together, and quick scripts. But when it comes to object-heavy code, the critics have a point.

We will begin by looking at the myth of Perl being slow a little more deeply. Here's a benchmark between Perl and Python using CPU seconds, a fair comparison that measures actual work done:

=== PERL (5 CPU seconds per test) ===
Integer arithmetic             1,072,800/s
Float arithmetic                 398,800/s
String concat                    970,000/s
Array push/iterate               368,800/s
Hash insert/iterate               84,800/s
Function calls                   244,000/s
Regex match                   12,921,200/s

=== PYTHON (5 CPU seconds per test) ===
Integer arithmetic               777,200/s
Float arithmetic                 512,400/s
String concat                    627,200/s
List append/iterate              476,400/s
Dict insert/iterate              140,600/s
Function calls                   331,400/s
Regex match                   10,543,713/s

The results are more nuanced than the "Perl is slow" narrative suggests:

Operation            Winner   Margin
Integer arithmetic   Perl     1.4x faster
Float arithmetic     Python   1.3x faster
String concat        Perl     1.5x faster
Array/List ops       Python   1.3x faster
Hash/Dict ops        Python   1.7x faster
Function calls       Python   1.4x faster
Regex match          Perl     1.2x faster

Perl wins at what it's always been good at: integers, strings, and regex. Python wins at floats, data structures, and function calls, areas where I am told Python 3.x has seen heavy optimisation work.

But here's the thing that surprised me: neither language is dramatically faster than the other for basic operations. The differences are measured in fractions, not orders of magnitude. So where does the "Perl is slow" reputation actually come from?

Object-oriented code. Let's run that same fair comparison:

=== Object creation + 2 method calls (5M iterations) ===
Perl bless:    4,155,178/s  (1.20 sec)
Python class:  5,781,818/s  (0.86 sec)

Okay, this is not so bad. Perl's only 40% behind. But now let's look at what people actually use these days: Moo.

=== Object creation + 2 method calls (5M iterations) ===
Perl bless:    4,176,222/s  (1.20 sec)
Moo class:       843,708/s  (5.93 sec)
Python class:  5,590,052/s  (0.89 sec)

Wait, what? Moo is 6.6x slower than Python. And it's 5x slower than plain bless.

This, layered with actual business logic, is I suspect where "Perl is slow" actually comes from. It all comes down to layers. Every Moo accessor has been optimised, but if you enable all the features you build a call stack, with each layer adding overhead:

$obj->name
  └─> accessor method (generated sub)
        └─> type constraint check
              └─> coercion check
                    └─> trigger check
                          └─> lazy builder check
                                └─> finally: $self->{name}

Each of those subroutine calls means:

  • Push arguments onto the stack (~3-5 ops)
  • Create a new scope (localizing variables)
  • Execute the check (even if it's just "return true")
  • Pop the stack and return (~3-5 ops)

Even a "simple" Moo accessor with just a type constraint involves roughly 30+ additional operations compared to a plain hash access. The type constraint alone might call:

  1. has_type_constraint() - is there a constraint?
  2. type_constraint() - get the constraint object
  3. check() - call the constraint's check method
  4. The actual validation logic

Multiply that by two accessors per iteration, five million iterations, and suddenly you're spending 5 seconds instead of 1.
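The layered path above can be caricatured in plain Perl. This is not Moo's real code, just an illustration of the shape of the problem: every feature is checked on every read, even when unused.

```perl
use strict;
use warnings;

# Caricature of a layered accessor: only the last line does real work,
# but every read pays for four extra subroutine calls.
sub layered_name {
    my ($self) = @_;
    _check_type($self) or die "type check failed";
    _maybe_coerce($self);    # no-op here, but still a call
    _maybe_trigger($self);   # no-op here, but still a call
    _maybe_build($self);     # no-op here, but still a call
    return $self->{name};    # the only line doing real work
}
sub _check_type    { ref $_[0] ne '' }
sub _maybe_coerce  { }
sub _maybe_trigger { }
sub _maybe_build   { }

my $self = { name => 'Alice' };
print layered_name($self), "\n";   # Alice
```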

This is the trade-off Moo makes: flexibility and safety over speed. For most applications it's the right trade-off, and even Python makes it: Pydantic roughly halves the performance of Python objects.

I've spent more time than I'd care to admit thinking about this question. Not in a "let's rewrite everything in Rust" kind of way, but genuinely asking: what would it take to make Perl's object system competitive with languages people actually consider fast?

The answer, it turns out, was inside a CPAN module first released on 'Mon Jul 24 11:23:25 2000'. It was highlighted to me by another author's work; I am indeed one of the three people who not only read their blog but also often find themselves lost within their interesting coding patterns.

So this is the story of the four modules that changed how I think about Perl performance: Marlin, Meow, Inline and XS::JIT. They're different tools with different philosophies, but together they represent something I never quite expected to see: Perl object access that's actually faster than Python's equivalent. Not "almost as fast." Faster.

The Marlin story: A faster fish in the Moose family

If you've written any serious Perl in the last fifteen years, you've probably used Moose. Or Moo. Or Mouse. The naming convention is... well, it's a thing we do now.

Marlin fits right into that tradition, and the name's not accidental. Marlins are among the fastest fish in the ocean. That's the pitch: everything you love about Moose-style OO, but with speed as a first-class concern.

Toby Inkster released Marlin in late 2025, and it caught my attention; as I stated before, many of his projects do. I'd previously attempted to write a fast OO system myself (Meow), but was struggling to even compete with Moo despite it being entirely XS. Partly ability, partly still learning, mostly not working at the right compile-time stage.

With my interest piqued, I installed Marlin, played with the API, and ran some benchmarks:

Benchmark: 1,000,000 iterations
            Rate   Meow    Moo Marlin  Mouse
Meow    606,061/s     --    -1%   -45%   -47%
Moo     609,756/s     1%     --   -45%   -46%
Marlin 1,098,901/s    81%    80%     --    -3%
Mouse  1,136,364/s    87%    86%     3%     --

Marlin performed well. Meow at that point was... not impressive. But I liked Marlin's API and, understanding my own implementation's limitations, I was satisfied enough with the speed to build my Claude modules around it, while also understanding it would likely improve in performance.

A few weeks later, and a lot happened in between, but on Friday evening I randomly decided to revisit my Meow directory. Could I fix some of the flaws based upon my recent learnings? I managed to, and saw a huge improvement in my own benchmarks. So I updated to the latest Marlin for a fair comparison.

I was expecting Meow to be faster now since I'm doing much less in this minimalist approach. But what I actually found surprised me:

Benchmark: 10,000,000 iterations
            Rate    Moo  Mouse   Meow Marlin
Moo     868,810/s     --   -47%   -60%   -81%
Mouse  1,626,016/s    87%     --   -26%   -64%
Meow   2,183,406/s   151%    34%     --   -52%
Marlin 4,504,505/s   418%   177%   106%     --

Marlin had gotten dramatically faster, over 4x improvement from the version I'd first tested. Toby had clearly been busy. And while Meow had improved too, it was still only half of Marlin's speed.

This was the moment that changed everything. I needed to understand how Marlin achieved this. What was I missing?

Just in time optimisation

As I mentioned, I read other people's code. I read Toby's posts on Marlin and how he'd studied Mouse's optimisation strategy: only validate when you absolutely need to. But when I started tracing through Marlin's actual implementation, something clicked.

The key insight is in Marlin::Attribute::install_accessors. Here's what happens when Marlin sets up a reader:

if ( $type eq 'reader' and !$me->has_simple_reader and $me->xs_reader ) {
    $me->{_implementation}{$me->{$type}} = 'CXSR';  # Class::XSReader
}
elsif ( HAS_CXSA and $me->has_simple_reader ) {
    # Use Class::XSAccessor for simple cases
    Class::XSAccessor->import( class => $me->{package}, ... );
}

Marlin makes a compile-time decision: what kind of accessor does this attribute actually need?

  • Simple getter (no default, no lazy, no type check on read)? → Use Class::XSAccessor, which is pure XS and blindingly fast
  • Getter with lazy default or type coercion? → Use Class::XSReader, which handles the complexity in optimised C
  • Something exotic (auto_deref, custom behaviour)? → Fall back to generated Perl

This is the magic. Most Moo-style accessors go through a generic code path that handles every possible feature, even features you're not using. Marlin analyses your attribute definition at compile time and generates the minimal accessor that satisfies your requirements.

Consider a read-only attribute with a type but no default:

# Moo accessor path:
$obj->name
   check if lazy builder needed     # nope, but we still check
   check if default needed          # nope, but we still check  
   check if coercion needed         # nope, but we still check
   finally: $self->{name}

# Marlin accessor (Class::XSAccessor):
$obj->name
   $self->{name}                    # that's it. One XS call.

The type constraint? Marlin validates it in the constructor, not the getter. Once an object is built, reading an attribute is just a hash lookup: no validation, no subroutine calls, no stack manipulation.

This is why Marlin went from 1.1M ops/sec to 4.5M ops/sec between versions. Toby wasn't just optimising code. He was eliminating entire categories of runtime work by moving decisions to compile time.

A different approach is used for Class::XSConstructor. This reuses a generic XSUB but passes the class data via a custom pointer. The sub is then optimised to avoid reaching back into perl for stash and HV lookups, etc.

Some of this is JIT compilation, but done at module load time rather than runtime. By the time your code calls ->new or ->name, all the decisions have been made. All that's left is the actual work.

This was my revelation: the path to fast Perl OO isn't avoiding features, it's avoiding runtime feature detection. Know what you need at compile time, generate optimised code for exactly that, and get out of the way.
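A minimal sketch of that principle in pure Perl (not Marlin's actual code, and without the XS layer): decide which accessor body is needed once, at generation time, rather than checking features on every call. All names here are made up for illustration.

```perl
use strict;
use warnings;

# Generate the smallest accessor that satisfies the attribute's spec.
sub make_reader {
    my (%spec) = @_;
    my $name = $spec{name};
    if ( !$spec{lazy} && !$spec{builder} ) {
        # Simple case decided up front: a plain hash lookup, nothing else.
        return sub { $_[0]{$name} };
    }
    # Complex case: the lazy-build body only exists when actually needed.
    my $builder = $spec{builder};
    return sub {
        my ($self) = @_;
        $self->{$name} = $self->$builder unless exists $self->{$name};
        return $self->{$name};
    };
}

package Point { sub new { bless { x => 5 }, shift } }

# Install the generated reader; the simple path has no feature checks at all.
*Point::x = make_reader( name => 'x' );
print Point->new->x, "\n";   # 5
```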

Now the question became: could I apply this same principle to Meow? It was already set up to build a simple hash representing the object, so I had what I needed, but I wanted to do it in a backwards-compatible way.

Enter Inline::C

Armed with the understanding of why Marlin was fast, I had a hypothesis: if I could generate XS accessors at compile time tailored to each attribute's needs, Meow could achieve the same performance.

I needed to generate custom C code and then execute it. For Perl, that problem was solved back in 2000 by Ingy döt Net with Inline::C.

The idea was simple: when Meow sees ro name => Str, it should generate C code for an accessor that:

  1. Takes the object
  2. Returns the value at the slot index for name
  3. That's it. No method dispatch, no type checking, no feature checking.

I didn't want to break everything, so I leaned on the Moose vocabulary and added a make_immutable phase. When called, it compiles the C code needed to generate an optimised XS package and feeds it into Inline::C. The first run compiles; subsequent runs use the cached .so.

And it worked. I had to change the benchmark to CPU seconds to get a fair result. I've also included a Cor test here; note that Cor does not do type checking, unlike Marlin and Meow.

Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds...
       Cor:  5 wallclock secs ( 5.13 usr +  0.02 sys =  5.15 CPU) @ 2,886,788/s
    Marlin:  5 wallclock secs ( 5.01 usr +  0.11 sys =  5.12 CPU) @ 4,523,074/s
      Meow:  5 wallclock secs ( 5.16 usr +  0.02 sys =  5.18 CPU) @ 4,558,344/s

As you can see, Meow had caught Marlin. Actually, it was slightly faster, 4.56M vs 4.52M ops/sec, but that's to be expected: Meow does a lot less than Marlin.

But my bottleneck was now Inline::C itself, and, well, nobody wants to write C/XS by hand, let alone concatenate it from strings. Inline::C brought several problems:

  1. Startup overhead: First compilation was slow, several seconds for a complex class
  2. Dependencies: Inline::C pulls in Parse::RecDescent, adding complexity to the dependency chain
  3. Build process: It generates a full Makefile.PL and runs the ExtUtils::MakeMaker machinery
  4. Caching: The caching mechanism is designed for "write once" scripts, not dynamic code generation

For a proof of concept, Inline::C was perfect. But for a production module, I needed something leaner. That's when I started looking at what Inline::C actually does under the hood, and wondering how much of it I could strip away.

Under the hood: XS::JIT as the secret weapon

Inline::C proved the concept worked, but it came with baggage. Every compile spawned a full Makefile.PL build process. Dependencies bloated the install. And the caching system, designed for write-once scripts, wasn't ideal for dynamic code generation.

So I started picking apart what Inline::C actually does:

  1. Parse C code to find function signatures
  2. Generate XS wrapper code
  3. Generate a Makefile.PL
  4. Run perl Makefile.PL && make
  5. Load the resulting .so

And yes, this happens even when you use bind Inline C => ... instead of the use form. The bind keyword just defers compilation to runtime rather than compile time. It doesn't change what gets done, only when. You still get the full Parse::RecDescent parsing, the xsubpp processing, the MakeMaker dance. The only difference is whether it happens at use time or when bind is called.

Most of this was unnecessary for my use case. I didn't need function parsing, I already knew what functions I was generating. I didn't need XS wrappers, I was writing XS-native code directly. And I definitely didn't need the Makefile.PL dance.

XS::JIT strips all of that away. It's a single-purpose tool: take C code, compile it, load it, install the functions. No parsing. No xsubpp. No make. Direct compiler invocation.

Here's what the C API looks like:

#include "xs_jit.h"

/* Function mapping - where to install what */
XS_JIT_Func funcs[] = {
    { "Cat::new",  "cat_new",  0, 1 },  /* target, source, varargs, xs_native */
    { "Cat::name", "cat_name", 0, 1 },
    { "Cat::age",  "cat_age",  0, 1 },
};

/* Compile and install in one call */
int ok = xs_jit_compile(aTHX_
    c_code,           /* Your generated C code */
    "Meow::JIT::Cat", /* Unique name for caching */
    funcs,            /* Function mapping array */
    3,                /* Number of functions */
    "_CACHED_XS",     /* Cache directory */
    0                 /* Don't force recompile */
);

That's it. One function call. The first time it runs, XS::JIT:

  1. Generates a boot function that registers all the XS functions
  2. Compiles directly with the system compiler (cc -shared -fPIC ...)
  3. Loads the .so with DynaLoader
  4. Installs each function into its target namespace

Subsequent runs? It hashes the C code, finds the cached .so, and just loads it. The compile step vanishes entirely.
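That content-addressed caching idea is simple enough to sketch in a few lines of Perl (an illustration of the approach, not XS::JIT's actual implementation; the helper name is mine):

```perl
use strict;
use warnings;
use Digest::MD5 qw(md5_hex);

# Derive a cache key from the generated C source. The same source
# always hashes to the same key, so a previously compiled .so can be
# found and reloaded without invoking the compiler again.
sub cache_key {
    my ($module_name, $c_code) = @_;
    return $module_name . '-' . md5_hex($c_code);
}

my $key1 = cache_key('Meow::JIT::Cat', 'int x;');
my $key2 = cache_key('Meow::JIT::Cat', 'int x;');   # identical source
my $key3 = cache_key('Meow::JIT::Cat', 'int y;');   # changed source

print "$key1\n";
```

Identical source yields an identical key; any change to the generated C invalidates the cache automatically.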

The key insight is the is_xs_native flag. When set, XS::JIT creates a simple alias: no wrapper function, no stack manipulation, no overhead. Your C function is the XS function:

XS_EUPXS(cat_name) {
    dVAR; dXSARGS;
    SV *self = ST(0);
    AV *av = (AV*)SvRV(self);
    SV **slot = av_fetch(av, 0, 0);  /* slot 0 = name */
    ST(0) = slot ? *slot : &PL_sv_undef;
    XSRETURN(1);
}

No wrapper. No intermediate calls.

This is exactly what Meow needed. During make_immutable, it:

  1. Analyses each attribute's requirements (type constraint? coercion? trigger?)
  2. Generates minimal XS accessor code for each one
  3. Generates an optimised XS constructor that handles all attributes in one pass
  4. Hands the code to XS::JIT for compilation
  5. Gets back installed functions ready to call

The entire JIT compilation happens once per class, at module load time. By the time your code runs, everything is native XS.
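The "generates minimal XS accessor code" step is essentially template-filling. A toy version of such a generator might look like this (the template mirrors the accessor shown earlier; the function itself is my own sketch, not Meow's real generator):

```perl
use strict;
use warnings;

# Turn an attribute spec (class, attribute name, slot index) into the
# C source for a minimal array-backed accessor.
sub generate_accessor_c {
    my ($class, $attr, $slot) = @_;
    (my $cfunc = lc "${class}_${attr}") =~ s/\W+/_/g;   # e.g. "cat_name"
    return <<"C";
XS_EUPXS($cfunc) {
    dVAR; dXSARGS;
    SV *self = ST(0);
    SV **slot = av_fetch((AV*)SvRV(self), $slot, 0);
    ST(0) = slot ? *slot : &PL_sv_undef;
    XSRETURN(1);
}
C
}

my $c = generate_accessor_c('Cat', 'name', 0);
print $c;
```

The generated string is what gets handed to the JIT compiler; each attribute produces one such function with its slot index baked into the source.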

Comparing the approaches

Here's what actually happens at runtime for each framework:

Moo accessor call:

$obj->name
  → Perl method dispatch
    → Generated Perl subroutine
      → has_type_constraint() check
        → type_constraint() fetch
          → check() call
            → finally: $self->{name}

Stack frames: 4-6. Operations: ~30.

Marlin accessor call (Class::XSAccessor):

$obj->name
  → Perl method dispatch
    → XS accessor
      → $self->{name}

Stack frames: 1. Operations: ~5.

Note: Toby also does some slot magic in Marlin.

Meow accessor call (XS::JIT):

$obj->name
  → Perl method dispatch
    → XS accessor
      → $self->[SLOT_INDEX]

Stack frames: 1. Operations: ~4 (arrays are slightly faster than hashes).

The benchmark results

With XS::JIT in place, here's where Meow now landed:

Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds... (Marlin and Meow have type constraint checking)
       Cor:  5 wallclock secs ( 5.13 usr +  0.02 sys =  5.15 CPU) @ 2886788.16/s (n=14866959)
    Marlin:  5 wallclock secs ( 5.01 usr +  0.11 sys =  5.12 CPU) @ 4523074.80/s (n=23158143)
      Meow:  5 wallclock secs ( 5.16 usr + -0.01 sys =  5.15 CPU) @ 5196218.06/s (n=26760523)
Benchmark: running Marlin, Meow, Moo, Mouse for at least 5 CPU seconds...
    Marlin:  5 wallclock secs ( 5.22 usr +  0.13 sys =  5.35 CPU) @ 4814728.04/s (n=25758795)
      Meow:  5 wallclock secs ( 5.23 usr +  0.01 sys =  5.24 CPU) @ 5203329.96/s (n=27265449)
       Moo:  4 wallclock secs ( 5.28 usr +  0.00 sys =  5.28 CPU) @ 860649.81/s (n=4544231)
     Mouse:  6 wallclock secs ( 5.29 usr +  0.01 sys =  5.30 CPU) @ 1603849.25/s (n=8500401)
            Rate    Moo  Mouse Marlin   Meow
Moo     860650/s     --   -46%   -82%   -83%
Mouse  1603849/s    86%     --   -67%   -69%
Marlin 4814728/s   459%   200%     --    -7%
Meow   5203330/s   505%   224%     8%     --

I must be honest: at this point I had not yet implemented the full benchmarks against Perl and Python. I didn't fully understand the difference, so I wondered whether I was hitting the limits of my own hardware (it was late, or early in the morning). Anyway, I kept pushing and ran a benchmark where I accessed the slot directly as an array reference. This got me excited:

Meow (direct) 7,172,481/s     778%    347%     50%     14%

I was seeing a huge improvement. I spent some time making an API that was a little nicer by exposing constants as slot indexes:

{
    package Cat;
    use Meow;
    ro name => Str;
    ro age => Int;
    make_immutable;  # Creates $Cat::NAME, $Cat::AGE
}

# Direct slot access
my $name = $cat->[$Cat::NAME];
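In plain Perl, the same effect can be had with use constant, which is roughly the shape of what make_immutable exposes (a hand-rolled equivalent for illustration; Meow's generated constants are package scalars like $Cat::NAME rather than constant subs):

```perl
use strict;
use warnings;

package Cat;
use constant { NAME => 0, AGE => 1 };   # slot indexes into the array

# Array-backed object: each attribute lives at a fixed slot index.
sub new {
    my ($class, %args) = @_;
    my @self;
    @self[NAME, AGE] = @args{qw(name age)};
    return bless \@self, $class;
}

package main;

my $cat = Cat->new(name => 'Felix', age => 3);

# Direct slot access: no method call at all, just an array index.
print $cat->[Cat::NAME], "\n";   # Felix
print $cat->[Cat::AGE],  "\n";   # 3
```

Because use constant creates inlinable subs, Perl folds Cat::NAME to the literal 0 at compile time, so the access really is a plain array lookup.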

I was now on par with Python, but I wanted more. There had to be a way to get that array access without the ugly syntax.

So I dug deeper into Perl's internals and found the missing magic: cv_set_call_checker and custom ops.

The entersub bypass: Custom ops

Here's what normally happens when you call a method in Perl:

name($cat)
  → OP_ENTERSUB (the "call function" op)
    → Push arguments onto stack
    → Look up the CV (code value)
    → Set up new stack frame
    → Execute the XS function
    → Pop stack frame
    → Return

Even for our minimal XS accessor, there's overhead: the entersub op itself, the stack frame setup, the CV lookup. What if we could eliminate all of that?

Perl provides a hook called cv_set_call_checker. It lets you register a "call checker" function that runs at compile time, when the parser sees a call to your subroutine. The checker can inspect the op tree and, crucially, replace it with something else entirely.

Here's what Meow does:

static void _register_inline_accessor(pTHX_ CV *cv, IV slot_index, int is_ro) {
    SV *ckobj = newSViv(slot_index);  /* Store slot index for later */
    cv_set_call_checker_flags(cv, S_ck_meow_get, ckobj, 0);
}

When the checker sees name($cat), it:

  1. Extracts the $cat argument from the op tree
  2. Frees the entire entersub operation
  3. Creates a new custom op with the slot index baked in
  4. Returns that instead

The custom op is trivially simple:

static OP *S_pp_meow_get(pTHX) {
    dSP;
    SV *self = TOPs;
    PADOFFSET slot_index = PL_op->op_targ;  /* Baked into the op */

    SV **ary = AvARRAY((AV*)SvRV(self));
    SETs(ary[slot_index] ? ary[slot_index] : &PL_sv_undef);

    return NORMAL;
}

That's the entire accessor. No function call. No stack frame. No CV lookup. The slot index is embedded directly in the op structure. The Perl runloop executes this op directly, it's as close to $cat->[$NAME] as you can get while still looking like name($cat).

This is the same technique that builtin::true and builtin::false use in Perl 5.36+. It's also how List::Util::first can be optimised when given a simple block.

The final benchmark

With custom ops in place via import_accessors, here's how the Perl OO frameworks compare:

Benchmark: running Marlin, Meow, Meow (direct), Meow (op), Moo, Mouse for at least 5 CPU seconds...
    Marlin:  6 wallclock secs ( 5.09 usr +  0.11 sys =  5.20 CPU) @ 4766685.58/s (n=24786765)
      Meow:  5 wallclock secs ( 5.29 usr +  0.01 sys =  5.30 CPU) @ 6289606.79/s (n=33334916)
Meow (direct):  5 wallclock secs ( 5.32 usr +  0.01 sys =  5.33 CPU) @ 7172480.86/s (n=38229323)
 Meow (op):  5 wallclock secs ( 5.16 usr +  0.01 sys =  5.17 CPU) @ 7394453.19/s (n=38229323)
       Moo:  4 wallclock secs ( 5.44 usr +  0.02 sys =  5.46 CPU) @ 816865.93/s (n=4460088)
     Mouse:  4 wallclock secs ( 5.18 usr +  0.01 sys =  5.19 CPU) @ 1605727.55/s (n=8333726)
                   Rate      Moo   Mouse  Marlin    Meow Meow (direct) Meow (op)
Moo            816866/s       --    -49%    -83%    -87%          -89%      -89%
Mouse         1605728/s      97%      --    -66%    -74%          -78%      -78%
Marlin        4766686/s     484%    197%      --    -24%          -34%      -36%
Meow          6289607/s     670%    292%     32%      --          -12%      -15%
Meow (direct) 7172481/s     778%    347%     50%     14%            --       -3%
Meow (op)     7394453/s     805%    361%     55%     18%            3%        --

Now let's test that directly against Python:

============================================================
Python Direct Benchmark (slots + property accessors)
============================================================
Python version: 3.9.6 (default, Dec  2 2025, 07:27:58)
[Clang 17.0.0 (clang-1700.6.3.2)]
Iterations: 5,000,000
Runs: 5
------------------------------------------------------------
Run 1: 0.649s (7,704,306/s)
Run 2: 0.647s (7,733,902/s)
Run 3: 0.646s (7,736,307/s)
Run 4: 0.648s (7,720,909/s)
Run 5: 0.649s (7,702,520/s)
------------------------------------------------------------
Median rate: 7,720,909/s
============================================================
============================================================
Perl/Meow Benchmark Comparison
============================================================
Perl version: 5.042000
Iterations: 5000000
Runs: 5
------------------------------------------------------------
Inline Op (one($foo)):
  Run 1: 0.638s (7,841,811/s)
  Run 2: 0.629s (7,954,031/s)
  Run 3: 0.631s (7,929,850/s)
  Run 4: 0.631s (7,926,316/s)
  Run 5: 0.633s (7,901,675/s)
  Median: 7,926,316/s
============================================================
Summary:
------------------------------------------------------------
  Inline Op:    7,926,316/s
============================================================

Conclusion: Why JIT might be the right approach

Looking back at this journey, a pattern emerges. The fastest code isn't the cleverest code. It's the code that does the least work at runtime.

Moo is slow because of its layers of abstraction.

Marlin proved that you could have Moo's features without Moo's overhead by making smart choices at compile time. If an accessor doesn't need lazy building, don't generate code that checks for lazy building.

Meow pushed this further: if you're going to generate code at compile time anyway, why not generate exactly the code you need? Not a generic accessor that handles many cases, but a specific accessor for this specific attribute on this specific class.

And XS::JIT made that practical. Without a lightweight JIT compiler, dynamic XS generation would require shipping a C toolchain with every module, or adding multi-megabyte dependencies. XS::JIT strips the problem down to its essence: take C code, compile it, load it.

The result is object access that competes with, and sometimes beats, languages that have had decades of optimisation work. Not because Perl's interpreter got faster, but because we stopped asking it to do unnecessary work.

Is this approach right for every project? No. Most applications don't need 7 million object accesses per second.

But for the times when performance matters (hot loops, high-frequency trading, real-time systems) it's good to know the ceiling isn't as low as we thought. Perl can be fast. We just needed to get out of its way.


The modules discussed in this post:

Perl 🐪 Weekly #757 - Contribute to CPAN!

dev.to #perl

Originally published at Perl Weekly 757

Hi there!

On Saturday, (evening for me, noon-ish in the Americas) we had an excellent meeting and there are recordings you can watch. In the first hour I showed some PRs I sent to MIME::Lite. You can watch the video here. In the second hour we changed the setup and we continued in driver-navigator style pair programming. I was giving the instructions and two other participants made the changes and sent the PR. Others in the audience made suggestions. So actually this was mob programming. As far as I know, this was the first time they contributed to open source projects. One of the PRs was already accepted while we were still in the meeting. Talk about quick feedback and fast ROI. You can watch the video here. Don't forget to 'like' the videos on YouTube and to follow the channel!

I've scheduled the next such event. Register here! My hope is that many more of you will participate and then, after getting a taste and having some practice, you'll spend 15-20 minutes a day (2 hours a week) on similar contributions. Having 10-20, or maybe even 100, people doing that consistently would have a huge impact on Perl within a year.

Before that, however, there is the FOSDEM Community dinner on Saturday, if you are in Brussels.

Enjoy your week!

--
Your editor: Gabor Szabo.

Announcements

FOSDEM Community dinner information

On 31st January 2026 19:30,

Announcing the Perl Toolchain Summit 2026!

The 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.

Articles

Otobo supports the German Perl Workshop

Otobo is the Open Source Service Management Platform, a 2019 fork of OTRS.

vitroconnect sponsors the German Perl Workshop

vitroconnect

Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions

One hour long video driver-navigator style pair-programming contributing to open source Perl modules.

Open source contribution - Perl - MIME::Lite - GitHub Actions, test coverage and adding a test

One hour long presentation about 3 pull-requests that were sent to MIME::Lite

SBOM::CycloneDX 1.07 is released

A new version of SBOM::CycloneDX with support for the OWASP CycloneDX 1.7 specification (ECMA-424).

🚀 sqltool: A Lightweight Local MySQL/MariaDB Instance Manager (No Containers Needed)

Venus v5 released: Modern OO standard library (and more) for Perl 5

Discuss it on Reddit

Ready, Set, Compile... you slow Camel

An excellent writeup on the process of optimization. Basically saying: don't do what you don't have to. This is specifically about optimizing OOP systems in Perl. Feel free to comment either on the bpo version of the article or here.

Call for proofreaders : blogging on beautiful Perl features

Laurent is looking for help with Python and Java for an article series he is writing. Send him an email!

Discussion

I wrote a Plack handler for HTTP/2, and it's now available on CPAN :)

Features of Plack-Handler-H2: * Full HTTP/2 spec via nghttp2; * Non-blocking via libevent; * Supports the entire PSGI spec; * Automatically generates self-signed certs if none are provided as args;

Geo::Gpx.pm: no 'speed' field (even is GPX 1.0?)

Web

ANNOUNCE: Perl.Wiki V 1.38 & Mojolicious.Wiki V 1.12

I'll Have a Mojolicious::Lite

Gwyn built mojoeye, a tiny Perl app to run system and security checks across their internal Linux hosts.

Perl

Retrospective on the Perl Development Release 5.43.7

Corion mentions a number of places where things can be improved. I am surprised that the whole process is not fully automated yet. I mean some of the brightest people in the Perl community work on the core of perl.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 358

Welcome to a new week with a couple of fun tasks "Max Str Value" and "Encrypted String". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 357

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Kaprekar Constant" and "Unique Fraction Generator" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Uniquely Constant

The article skillfully uses Raku's comb, sort, and flip operations for digit manipulation to offer a straightforward and idiomatic solution to the Kaprekar problem. It is both instructive and useful for Raku programmers since it carefully addresses edge cases like non-convergence and shows verbose iteration output.

Perl Weekly Challenge: Week 357

The post offers concise and illustrative Perl and Raku solutions to the tasks from Week 357, particularly the Kaprekar Constant implementation with examples that match the problem specification and well-explained iteration logic. For Perl enthusiasts, its clear explanations and references to actual Wikipedia details make the algorithms simple to understand and instructive.

Fractional Fix Points

The Kaprekar Constant and Unique Fraction Generator tasks are explained in a clear and organised manner in this post, which also provides step-by-step iteration breakdowns and solid examples to illustrate the problem. For Perl/Raku learners taking on the Weekly Challenge, its solutions demonstrate careful algorithm design and address important edge cases, making it instructive and useful.

Perl Weekly Challenge 357: arrays everywhere!

Luca provides a thorough and systematic collection of answers to the problems issued in all the languages (Raku, PL/Perl, Python and PostgreSQL) and has demonstrated proficiency in both algorithmic reasoning and the use and applicability of various characteristics of each of these programming languages. The articles describe in detail how to implement algorithms logically. As a result, readers are provided with clean and accurate code as examples of how to successfully implement these algorithms through the use of the listed languages.

Perl Weekly Challenge 357

The blog post provides a comprehensive overview of how to implement the Kaprekar Constant and Unique Fraction Generator tasks in Perl. The examples provided demonstrate the idiomatic (one-line) style of coding that is used to represent both of the tasks. Additionally, the post discusses how to handle exceptions such as non-convergence and uniqueness of fractions, in a sensible manner.

One Constant, and Many Fractions

Matthias's solutions are easy to follow and use a typical hiring-challenge style each week. Each of his solutions adheres to the challenge's requirements. Additionally, all of his implementations demonstrate good programming practices for Perl.

I could drink a case of you…

Packy's write-up for week 357 of the Perl Weekly Challenge offers a fresh perspective on the challenge by telling an entertaining story that incorporates the Kaprekar problem into the write-up. The article clearly details how to implement the code and produces good results as well. The final product is easy to understand and provides a fun, educational experience to those tackling the challenge this week.

Converging on fractions

A thorough explanation of the solution (both tasks) is provided in the post. The Perl code included is easy to read and closely adheres to the descriptions of each problem. Furthermore, the code has been written such that it handles 'non-convergence' where applicable, with clear and logical outputs as well as analyses of each step helping the reader to learn about the algorithms and their correctness.

The Weekly Challenge #357

Robbie has provided full Perl implementations of the Kaprekar Constant and Unique Fraction Generator problems, including clear descriptions and links to the source code for both projects. His article is very well organised and user-friendly, allowing readers to quickly familiarise themselves with both tasks and check out Robbie's own code implementations.

Uniquely Kaprekar

The article provides all the vital information you need to comprehend the fundamental algorithms of each challenge, including thorough code sample illustrations, as well as an extensive discussion on iteration behaviour and the reasons you don't want to use floating-point division in programming.

Fractional Constant

This blog article describes how to perform both Weekly Challenge 357 tasks step by step, showing examples of useful and correct code in both the Python and Perl programming languages, as well as considering input validation and control structures for the Kaprekar constant, as well as selecting the correct data structures to store unique fractions and display them in sorted order. By comparing the differences between the two programming languages alongside their implementation details, this blog is a valuable resource to help those programming these challenges as they learn about them.

Kaprekar Steps & Ordered Fractions

The Kaprekar steps and unique ordered fractions problems are two challenging problems; the author has provided a short list of Perl-based, well-considered solutions to handling leading zeroes, digit sorting, finding loops and sequence detection, and performing value-based ordering of fractions with duplicate removal. These solutions outline the steps taken and lessons learned while approaching each problem.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

Events

Perl Maven online: Code-reading and Open Source contribution

February 10, 2026

Boston.pm - online

February 10, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Hi fellow Perlists,

Now that I am retired, I have a bit more time for personal projects. One project dear to my heart would be to demonstrate strong features of Perl for programmers from other backgrounds. So I'm planning a https://dev.to/ series on "beautiful Perl features", comparing various aspects of Perl with similar features in Java, Python or Javascript.

There are many points to discuss, ranging from small details like flexibility of quote delimiters or the mere simplicity of allowing a final comma in a list, to much more fundamental features like lexical scoping and dynamic scoping.

Since I'm not a native English speaker, and since my knowledge of Java and Python is mostly theoretical, I would appreciate it if some of you would volunteer to spend some time proofreading the projected posts. Just send an email to my CPAN account if you feel like participating.

Thanks in advance :-), Laurent Dami


This week’s Perl Weekly Challenge had two very interesting problems that look simple at first, but hide some beautiful logic underneath.

I solved both tasks in Perl, and here’s a walkthrough of my approach and what I learned.

Task 1 — Kaprekar Steps

We are given a 4-digit number and asked to repeatedly perform the following process:

  1. Arrange digits in descending order → form number A
  2. Arrange digits in ascending order → form number B
  3. Compute: A - B
  4. Repeat with the result

This process is known as Kaprekar’s routine for 4-digit numbers.

A fascinating fact:

Every valid 4-digit number (with at least two different digits) will reach 6174 in at most 7 steps.

6174 is called Kaprekar’s Constant.

My Approach:

The key requirements:

  • Handle leading zeros (e.g., 1001 → "1001")
  • Keep track of numbers already seen (to avoid infinite loops)
  • Count the number of steps until 6174 is reached

I used:

  • sprintf("%04d", $n) to preserve leading zeros
  • A hash %seen to detect loops
  • Sorting digits to build ascending and descending numbers

Core Logic

my $s = sprintf("%04d", $n);
my @digits = split('', $s);
my @desc = sort { $b cmp $a } @digits;
my @asc  = sort { $a cmp $b } @digits;

my $desc_num = join('', @desc);
my $asc_num  = join('', @asc);

$n = $desc_num - $asc_num;

Showing Each Iteration
I also added a helper function to print each iteration like:

3524 → 5432 - 2345 = 3087
3087 → 8730 - 0378 = 8352
8352 → 8532 - 2358 = 6174

This makes the program educational rather than just returning a number.
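Putting the pieces together, a complete step-counter looks something like this (my reconstruction from the snippets above; the function name is mine):

```perl
use strict;
use warnings;

# Count iterations of Kaprekar's routine until 6174 is reached.
# Returns -1 if the sequence loops without converging (e.g. repdigits,
# which collapse to 0000 forever).
sub kaprekar_steps {
    my ($n) = @_;
    my %seen;
    my $steps = 0;
    while ($n != 6174) {
        my $s = sprintf("%04d", $n);        # preserve leading zeros
        return -1 if $seen{$s}++;           # loop detected
        my @digits = split //, $s;
        my $desc = join '', sort { $b cmp $a } @digits;
        my $asc  = join '', sort { $a cmp $b } @digits;
        $n = $desc - $asc;                  # numeric context drops leading zeros
        $steps++;
    }
    return $steps;
}

print kaprekar_steps(3524), "\n";   # 3
print kaprekar_steps(1111), "\n";   # -1 (1111 -> 0000 -> 0000 ...)
```

Starting from 3524 this takes exactly the three steps shown in the trace above.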

Task 2 — Ordered Unique Fractions

Given a number N, generate all fractions:
num / den where 1 ≤ num ≤ N and 1 ≤ den ≤ N
Then:

  1. Sort them by their numeric value
  2. Remove duplicate values
  3. Keep the fraction with the smallest numerator for equal values

My Approach
Step 1: Generate all possible fractions

for my $num (1..$n) {
    for my $den (1..$n) {
        push @fractions, [$num, $den];
    }
}

Step 2: Sort by fraction value

@fractions = sort {
    my $val_a = $a->[0] / $a->[1];
    my $val_b = $b->[0] / $b->[1];
    $val_a <=> $val_b || $a->[0] <=> $b->[0]
} @fractions;

Step 3: Remove duplicates intelligently
Instead of reducing fractions (like 2/4 → 1/2), I tracked numeric values and kept the fraction with the smallest numerator.

if (!exists $seen_values{$value} || $num < $seen_values{$value}) {
    $seen_values{$value} = $num;
    push @unique, [$num, $den];
}

Example Output (N = 4)

1/4, 1/3, 1/2, 2/3, 3/4, 1/1, 4/3, 3/2, 2/1, 3/1, 4/1

Notice:

  • Fractions are ordered by value
  • Duplicates like 2/2, 3/3, 4/4 don’t appear
  • 2/4 is replaced by 1/2 because it has a smaller numerator
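For completeness, the three steps combine into one short runnable subroutine (my assembly of the snippets above; for small N the floating-point value is a safe hash key for spotting duplicates like 1/2 vs 2/4):

```perl
use strict;
use warnings;

# Generate, sort, and deduplicate the fractions num/den for 1..N.
sub unique_fractions {
    my ($n) = @_;
    my @fractions;
    for my $num (1 .. $n) {
        for my $den (1 .. $n) {
            push @fractions, [$num, $den];
        }
    }
    @fractions = sort {
        $a->[0] / $a->[1] <=> $b->[0] / $b->[1]   # by numeric value...
            || $a->[0] <=> $b->[0]                # ...then smallest numerator
    } @fractions;
    my (%seen, @unique);
    for my $f (@fractions) {
        next if $seen{ $f->[0] / $f->[1] }++;     # keep first = smallest numerator
        push @unique, "$f->[0]/$f->[1]";
    }
    return @unique;
}

print join(', ', unique_fractions(4)), "\n";
# 1/4, 1/3, 1/2, 2/3, 3/4, 1/1, 4/3, 3/2, 2/1, 3/1, 4/1
```

The secondary sort on the numerator is what guarantees the deduplication pass keeps 1/2 rather than 2/4.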

What I Learned This Week

From Kaprekar problem
  • Importance of preserving leading zeros
  • Detecting infinite loops with a hash
  • How a simple number routine hides deep mathematical beauty
From Fraction problem
  • Sorting by computed values
  • Eliminating duplicates without using GCD
  • Writing clean data-structure driven Perl code using array references

Conclusion

Both tasks looked straightforward, but required careful thinking about:

  • Edge cases
  • Ordering
  • Data handling

Perl made these tasks elegant to implement thanks to:

  • Powerful sorting
  • Flexible data structures
  • String formatting utilities

Another fun and educational week with Perl Weekly Challenge!

Happy hacking :)

Ready, Set, Compile... you slow Camel

dev.to #perl

"Perl is slow."

I've heard this for years, well, since I started. You probably have too. And honestly? For a long time, I didn't have a great rebuttal. Sure, Perl's fast enough for most things; it's well known for text processing, gluing code together, and quick scripts. But when it came to object-heavy code, the critics had a point.

We will begin by looking at the myth of Perl being slow a little more deeply. Here's a benchmark between Perl and Python using CPU seconds, a fair comparison that measures actual work done:

=== PERL (5 CPU seconds per test) ===
Integer arithmetic             1,072,800/s
Float arithmetic                 398,800/s
String concat                    970,000/s
Array push/iterate               368,800/s
Hash insert/iterate               84,800/s
Function calls                   244,000/s
Regex match                   12,921,200/s

=== PYTHON (5 CPU seconds per test) ===
Integer arithmetic               777,200/s
Float arithmetic                 512,400/s
String concat                    627,200/s
List append/iterate              476,400/s
Dict insert/iterate              140,600/s
Function calls                   331,400/s
Regex match                   10,543,713/s

The results are more nuanced than the "Perl is slow" narrative suggests:

Operation            Winner   Margin
Integer arithmetic   Perl     1.4x faster
Float arithmetic     Python   1.3x faster
String concat        Perl     1.5x faster
Array/List ops       Python   1.3x faster
Hash/Dict ops        Python   1.7x faster
Function calls       Python   1.4x faster
Regex match          Perl     1.2x faster

Perl wins at what it's always been good at: integers, strings, and regex. Python wins at floats, data structures, and function calls: areas where, I'm told, Python 3.x has seen heavy optimisation work.

But here's the thing that surprised me: neither language is dramatically faster than the other for basic operations. The differences are measured in fractions, not orders of magnitude. So where does the "Perl is slow" reputation actually come from?

Object-oriented code. Let's run that same fair comparison:

=== Object creation + 2 method calls (5M iterations) ===
Perl bless:    4,155,178/s  (1.20 sec)
Python class:  5,781,818/s  (0.86 sec)

Okay, this is not so bad: Perl is only about 40% behind (Python is roughly 1.4x faster). But now let's look at what people actually use these days: Moo.

=== Object creation + 2 method calls (5M iterations) ===
Perl bless:    4,176,222/s  (1.20 sec)
Moo class:       843,708/s  (5.93 sec)
Python class:  5,590,052/s  (0.89 sec)

Wait, what? Moo is 6.6x slower than Python. And it's 5x slower than plain bless.

Layer this with actual business logic and, I suspect, you have the real source of "Perl is slow". It all comes down to layers. Each Moo accessor has been optimised, but if you use every feature you build up a call stack, with each layer adding overhead:

$obj->name
  └─> accessor method (generated sub)
        └─> type constraint check
              └─> coercion check
                    └─> trigger check
                          └─> lazy builder check
                                └─> finally: $self->{name}

Each of those subroutine calls means:

  • Push arguments onto the stack (~3-5 ops)
  • Create a new scope (localizing variables)
  • Execute the check (even if it's just "return true")
  • Pop the stack and return (~3-5 ops)

Even a "simple" Moo accessor with just a type constraint involves roughly 30+ additional operations compared to a plain hash access. The type constraint alone might call:

  1. has_type_constraint() - is there a constraint?
  2. type_constraint() - get the constraint object
  3. check() - call the constraint's check method
  4. The actual validation logic

Multiply that by two accessors per iteration, five million iterations, and suddenly you're spending 5 seconds instead of 1.

This is the trade-off Moo makes: flexibility and safety in exchange for speed. For most applications it's the right trade-off, and Python has the same pattern: pydantic adds a validation layer and roughly halves the performance of plain Python objects.
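The same layering tax is easy to demonstrate in Python: route attribute access through a validating property and every read and write pays for extra call frames. This is a sketch of the pattern, not pydantic's actual implementation:

```python
# A plain attribute vs. a validating property: the Python analogue of a
# type-constrained Moo accessor.
class PlainCat:
    def __init__(self, name):
        self.name = name  # direct attribute, no checks

class CheckedCat:
    def __init__(self, name):
        self._name = None
        self.name = name  # routed through the validating setter below

    @property
    def name(self):
        return self._name  # every read pays for descriptor dispatch

    @name.setter
    def name(self, value):
        # every write pays for an isinstance() check plus a call frame
        if not isinstance(value, str):
            raise TypeError("name must be a str")
        self._name = value
```

Time a tight loop over both and the checked version loses, for exactly the same reason Moo loses to plain bless.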

I've spent more time than I'd care to admit thinking about this question. Not in a "let's rewrite everything in Rust" kind of way, but genuinely asking: what would it take to make Perl's object system competitive with languages people actually consider fast?

The answer, it turns out, was inside a CPAN module first released on 'Mon Jul 24 11:23:25 2000'. It was highlighted to me by another Perl author; I'm one of the three people who not only read their blog posts but also often find themselves lost in their interesting coding patterns.

So this is the story of the four modules that changed how I think about Perl performance: Marlin, Meow, Inline and XS::JIT. They're different tools with different philosophies, but together they represent something I never quite expected to see: Perl object access that's actually faster than Python's equivalent. Not "almost as fast." Faster.

The Marlin story: A faster fish in the Moose family

If you've written any serious Perl in the last fifteen years, you've probably used Moose. Or Moo. Or Mouse. The naming convention is... well, it's a thing we do now.

Marlin fits right into that tradition, and the name's not accidental. Marlins are among the fastest fish in the ocean. That's the pitch: everything you love about Moose-style OO, but with speed as a first-class concern.

Toby Inkster released Marlin in late 2025, and it caught my attention; as I said, many of his projects do. I'd previously attempted to write a fast OO system myself (Meow), but was struggling to even compete with Moo despite being entirely XS. Partly ability, partly still learning, mostly not hooking in at the right compile-time stage.

With my interest piqued, I installed Marlin, played with the API, and ran some benchmarks:

Benchmark: 1,000,000 iterations
            Rate   Meow    Moo Marlin  Mouse
Meow    606,061/s     --    -1%   -45%   -47%
Moo     609,756/s     1%     --   -45%   -46%
Marlin 1,098,901/s    81%    80%     --    -3%
Mouse  1,136,364/s    87%    86%     3%     --

Marlin performed well. Meow at that point was... not impressive. But I liked Marlin's API and, understanding my own implementation's limitations, I was satisfied enough with the speed to build my Claude modules around it, while also understanding it would likely improve in performance.

A few weeks later (a lot had happened in between), on a Friday evening I randomly decided to revisit my Meow directory. Could I fix some of the flaws based on my recent learnings? I managed to, and saw a huge improvement in my own benchmarks. So I updated to the latest Marlin for a fair comparison.

I was expecting Meow to be faster now since I'm doing much less in this minimalist approach. But what I actually found surprised me:

Benchmark: 10,000,000 iterations
            Rate    Moo  Mouse   Meow Marlin
Moo     868,810/s     --   -47%   -60%   -81%
Mouse  1,626,016/s    87%     --   -26%   -64%
Meow   2,183,406/s   151%    34%     --   -52%
Marlin 4,504,505/s   418%   177%   106%     --

Marlin had gotten dramatically faster, over 4x improvement from the version I'd first tested. Toby had clearly been busy. And while Meow had improved too, it was still only half of Marlin's speed.

This was the moment that changed everything. I needed to understand how Marlin achieved this. What was I missing?

Just in time optimisation

As I mentioned, I read other people's code. I read Toby's posts on Marlin and how he'd studied Mouse's optimisation strategy: only validate when you absolutely need to. But when I started tracing through Marlin's actual implementation, something clicked.

The key insight is in Marlin::Attribute::install_accessors. Here's what happens when Marlin sets up a reader:

if ( $type eq 'reader' and !$me->has_simple_reader and $me->xs_reader ) {
    $me->{_implementation}{$me->{$type}} = 'CXSR';  # Class::XSReader
}
elsif ( HAS_CXSA and $me->has_simple_reader ) {
    # Use Class::XSAccessor for simple cases
    Class::XSAccessor->import( class => $me->{package}, ... );
}

Marlin makes a compile-time decision: what kind of accessor does this attribute actually need?

  • Simple getter (no default, no lazy, no type check on read)? → Use Class::XSAccessor, which is pure XS and blindingly fast
  • Getter with lazy default or type coercion? → Use Class::XSReader, which handles the complexity in optimised C
  • Something exotic (auto_deref, custom behaviour)? → Fall back to generated Perl

This is the magic. Most Moo-style accessors go through a generic code path that handles every possible feature, even features you're not using. Marlin analyses your attribute definition at compile time and generates the minimal accessor that satisfies your requirements.

Consider a read-only attribute with a type but no default:

# Moo accessor path:
$obj->name
   check if lazy builder needed     # nope, but we still check
   check if default needed          # nope, but we still check  
   check if coercion needed         # nope, but we still check
   finally: $self->{name}

# Marlin accessor (Class::XSAccessor):
$obj->name
   $self->{name}                    # that's it. One XS call.

The type constraint? Marlin validates it in the constructor, not the getter. Once an object is built, reading an attribute is just a hash lookup: no validation, no subroutine calls, no stack manipulation.

This is why Marlin went from 1.1M ops/sec to 4.5M ops/sec between versions. Toby wasn't just optimising code. He was eliminating entire categories of runtime work by moving decisions to compile time.

A different approach is used for Class::XSConstructor. It reuses a generic XSUB but passes the class data via a custom pointer; the sub is then optimised so it doesn't need to reach back into Perl for stash lookups, hv lookups and so on.

It's JIT compilation, but done at module load time rather than runtime. By the time your code calls ->new or ->name, all the decisions have been made. All that's left is the actual work.

This was my revelation: the path to fast Perl OO isn't avoiding features, it's avoiding runtime feature detection. Know what you need at compile time, generate optimised code for exactly that, and get out of the way.

Now the question became: could I apply this same principle to Meow? It was already set up to build a simple hash representing the object, so I had what I needed, but I wanted to do this in a backwards-compatible way.

Enter Inline::C

Armed with the understanding of why Marlin was fast, I had a hypothesis: if I could generate XS accessors at compile time tailored to each attribute's needs, Meow could achieve the same performance.

I needed to generate custom C code and then execute it. For Perl, that problem was solved back in 2000 by Ingy döt Net with Inline::C.

The idea was simple: when Meow sees ro name => Str, it should generate C code for an accessor that:

  1. Takes the object
  2. Returns the value at the slot index for name
  3. That's it. No method dispatch, no type checking, no feature checking.

I didn't want to just break everything, so I leaned on the Moose convention and added a make_immutable phase. When called, it compiles the C code needed to generate an optimised XS package and feeds it into Inline::C. The first run would compile; subsequent runs would use the cached .so.

And it worked. I had to switch the benchmark to CPU time to get a fair result. I've also included a Cor test here; note that Cor, unlike Marlin or Meow, does not do type checking.

Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds...
       Cor:  5 wallclock secs ( 5.13 usr +  0.02 sys =  5.15 CPU) @ 2,886,788/s
    Marlin:  5 wallclock secs ( 5.01 usr +  0.11 sys =  5.12 CPU) @ 4,523,074/s
      Meow:  5 wallclock secs ( 5.16 usr +  0.02 sys =  5.18 CPU) @ 4,558,344/s

As you can see, Meow had caught up with Marlin. Actually, it was slightly faster, 4.56M vs 4.52M ops/sec, but that's to be expected: Meow does a lot less than Marlin.

But my bottleneck was now Inline::C, and, well, nobody wants to write C/XS, let alone string-concatenate it. There were other problems too:

  1. Startup overhead: First compilation was slow, several seconds for a complex class
  2. Dependencies: Inline::C pulls in Parse::RecDescent, adds complexity to the dependency chain
  3. Build process: It generates a full Makefile.PL and runs the ExtUtils::MakeMaker machinery
  4. Caching: The caching mechanism is designed for "write once" scripts, not dynamic code generation

For a proof of concept, Inline::C was perfect. But for a production module, I needed something leaner. That's when I started looking at what Inline::C actually does under the hood, and wondering how much of it I could strip away.

Under the hood: XS::JIT as the secret weapon

Inline::C proved the concept worked, but it came with baggage. Every compile spawned a full Makefile.PL build process. Dependencies bloated the install. And the caching system, designed for write-once scripts, wasn't ideal for dynamic code generation.

So I started picking apart what Inline::C actually does:

  1. Parse C code to find function signatures
  2. Generate XS wrapper code
  3. Generate a Makefile.PL
  4. Run perl Makefile.PL && make
  5. Load the resulting .so

And yes, this happens even when you use bind Inline C => ... instead of the use form. The bind keyword just defers compilation to runtime rather than compile time. It doesn't change what gets done, only when. You still get the full Parse::RecDescent parsing, the xsubpp processing, the MakeMaker dance. The only difference is whether it happens at use time or when bind is called.

Most of this was unnecessary for my use case. I didn't need function parsing, I already knew what functions I was generating. I didn't need XS wrappers, I was writing XS-native code directly. And I definitely didn't need the Makefile.PL dance.

XS::JIT strips all of that away. It's a single-purpose tool: take C code, compile it, load it, install the functions. No parsing. No xsubpp. No make. Direct compiler invocation.

Here's what the C API looks like:

#include "xs_jit.h"

/* Function mapping - where to install what */
XS_JIT_Func funcs[] = {
    { "Cat::new",  "cat_new",  0, 1 },  /* target, source, varargs, xs_native */
    { "Cat::name", "cat_name", 0, 1 },
    { "Cat::age",  "cat_age",  0, 1 },
};

/* Compile and install in one call */
int ok = xs_jit_compile(aTHX_
    c_code,           /* Your generated C code */
    "Meow::JIT::Cat", /* Unique name for caching */
    funcs,            /* Function mapping array */
    3,                /* Number of functions */
    "_CACHED_XS",     /* Cache directory */
    0                 /* Don't force recompile */
);

That's it. One function call. The first time it runs, XS::JIT:

  1. Generates a boot function that registers all the XS functions
  2. Compiles directly with the system compiler (cc -shared -fPIC ...)
  3. Loads the .so with DynaLoader
  4. Installs each function into its target namespace

Subsequent runs? It hashes the C code, finds the cached .so, and just loads it. The compile step vanishes entirely.
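The cache lookup amounts to content-addressing the generated source. A minimal Python sketch of the idea (the hash algorithm and filename scheme here are my guesses at the concept, not XS::JIT's actual implementation):

```python
import hashlib
import os

def cached_so_path(c_code, cache_dir, module_name):
    """Content-address the generated C source: same code, same .so path.

    If the file at the returned path exists, the compile step can be
    skipped and the shared object loaded directly.
    """
    digest = hashlib.sha256(c_code.encode("utf-8")).hexdigest()[:16]
    return os.path.join(cache_dir, f"{module_name}-{digest}.so")
```

Unchanged source maps to the same path (a cache hit, no compile); change a single byte and the path changes, forcing a fresh compile.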

The key insight is the is_xs_native flag. When set, XS::JIT creates a simple alias: no wrapper function, no stack manipulation, no overhead. Your C function is the XS function:

XS_EUPXS(cat_name) {
    dVAR; dXSARGS;
    SV *self = ST(0);
    AV *av = (AV*)SvRV(self);
    SV **slot = av_fetch(av, 0, 0);  /* slot 0 = name */
    ST(0) = slot ? *slot : &PL_sv_undef;
    XSRETURN(1);
}

No wrapper. No intermediate calls.

This is exactly what Meow needed. During make_immutable, it:

  1. Analyses each attribute's requirements (type constraint? coercion? trigger?)
  2. Generates minimal XS accessor code for each one
  3. Generates an optimised XS constructor that handles all attributes in one pass
  4. Hands the code to XS::JIT for compilation
  5. Gets back installed functions ready to call

The entire JIT compilation happens once per class, at module load time. By the time your code runs, everything is native XS.

Comparing the approaches

Here's what actually happens at runtime for each framework:

Moo accessor call:

$obj->name
  → Perl method dispatch
    → Generated Perl subroutine
      → has_type_constraint() check
        → type_constraint() fetch
          → check() call
            → finally: $self->{name}

Stack frames: 4-6. Operations: ~30.

Marlin accessor call (Class::XSAccessor):

$obj->name
  → Perl method dispatch
    → XS accessor
      → $self->{name}

Stack frames: 1. Operations: ~5.

Note: Toby also does some slot magic of his own here.

Meow accessor call (XS::JIT):

$obj->name
  → Perl method dispatch
    → XS accessor
      → $self->[SLOT_INDEX]

Stack frames: 1. Operations: ~4 (arrays are slightly faster than hashes).

The benchmark results

With XS::JIT in place, here's where Meow now landed:

Benchmark: running Cor, Marlin, Meow for at least 5 CPU seconds... (Marlin and Meow have type constraint checking; Cor does not)
       Cor:  5 wallclock secs ( 5.13 usr +  0.02 sys =  5.15 CPU) @ 2886788.16/s (n=14866959)
    Marlin:  5 wallclock secs ( 5.01 usr +  0.11 sys =  5.12 CPU) @ 4523074.80/s (n=23158143)
      Meow:  5 wallclock secs ( 5.16 usr + -0.01 sys =  5.15 CPU) @ 5196218.06/s (n=26760523)
Benchmark: running Marlin, Meow, Moo, Mouse for at least 5 CPU seconds...
    Marlin:  5 wallclock secs ( 5.22 usr +  0.13 sys =  5.35 CPU) @ 4814728.04/s (n=25758795)
      Meow:  5 wallclock secs ( 5.23 usr +  0.01 sys =  5.24 CPU) @ 5203329.96/s (n=27265449)
       Moo:  4 wallclock secs ( 5.28 usr +  0.00 sys =  5.28 CPU) @ 860649.81/s (n=4544231)
     Mouse:  6 wallclock secs ( 5.29 usr +  0.01 sys =  5.30 CPU) @ 1603849.25/s (n=8500401)
            Rate    Moo  Mouse Marlin   Meow
Moo     860650/s     --   -46%   -82%   -83%
Mouse  1603849/s    86%     --   -67%   -69%
Marlin 4814728/s   459%   200%     --    -7%
Meow   5203330/s   505%   224%     8%     --

I must be honest: around this time I had not yet implemented the full benchmarks against Perl and Python. I didn't fully understand the difference, and I half-suspected I was hitting the limits of my own hardware (it was late, or rather early in the morning). Anyway, I kept pushing and ran a benchmark where I accessed the slot directly as an array reference. This got me excited:

Meow (direct) 7,172,481/s     778%    347%     50%     14%

I was seeing a huge improvement. I spent some time making an API that was a little nicer by exposing constants as slot indexes:

{
    package Cat;
    use Meow;
    ro name => Str;
    ro age => Int;
    make_immutable;  # Creates $Cat::NAME, $Cat::AGE
}

# Direct slot access
my $name = $cat->[$Cat::NAME];

I was now on par with Python, but I wanted more. There had to be a way to get that array access without the ugly syntax.

So I dug deeper into Perl's internals and found the missing magic: cv_set_call_checker and custom ops.

The entersub bypass: Custom ops

Here's what normally happens when you call a method in Perl:

name($cat)
  → OP_ENTERSUB (the "call function" op)
    → Push arguments onto stack
    → Look up the CV (code value)
    → Set up new stack frame
    → Execute the XS function
    → Pop stack frame
    → Return

Even for our minimal XS accessor, there's overhead: the entersub op itself, the stack frame setup, the CV lookup. What if we could eliminate all of that?

Perl provides a hook called cv_set_call_checker. It allows you to register a "call checker" function that runs at compile time when the parser sees a call to your subroutine. The checker can inspect the op tree and crucially replace it with something else entirely.

Here's what Meow does:

static void _register_inline_accessor(pTHX_ CV *cv, IV slot_index, int is_ro) {
    SV *ckobj = newSViv(slot_index);  /* Store slot index for later */
    cv_set_call_checker_flags(cv, S_ck_meow_get, ckobj, 0);
}

When the checker sees name($cat), it:

  1. Extracts the $cat argument from the op tree
  2. Frees the entire entersub operation
  3. Creates a new custom op with the slot index baked in
  4. Returns that instead

The custom op is trivially simple:

static OP *S_pp_meow_get(pTHX) {
    dSP;
    SV *self = TOPs;
    PADOFFSET slot_index = PL_op->op_targ;  /* Baked into the op */

    SV **ary = AvARRAY((AV*)SvRV(self));
    SETs(ary[slot_index] ? ary[slot_index] : &PL_sv_undef);

    return NORMAL;
}

That's the entire accessor. No function call. No stack frame. No CV lookup. The slot index is embedded directly in the op structure. The Perl runloop executes this op directly; it's as close to $cat->[$NAME] as you can get while still looking like name($cat).

This is the same technique that builtin::true and builtin::false use in Perl 5.36+. It's also how List::Util::first can be optimised when given a simple block.

The final benchmark

With custom ops in place via import_accessors, here's how the Perl OO frameworks compare:

Benchmark: running Marlin, Meow, Meow (direct), Meow (op), Moo, Mouse for at least 5 CPU seconds...
    Marlin:  6 wallclock secs ( 5.09 usr +  0.11 sys =  5.20 CPU) @ 4766685.58/s (n=24786765)
      Meow:  5 wallclock secs ( 5.29 usr +  0.01 sys =  5.30 CPU) @ 6289606.79/s (n=33334916)
Meow (direct):  5 wallclock secs ( 5.32 usr +  0.01 sys =  5.33 CPU) @ 7172480.86/s (n=38229323)
 Meow (op):  5 wallclock secs ( 5.16 usr +  0.01 sys =  5.17 CPU) @ 7394453.19/s (n=38229323)
       Moo:  4 wallclock secs ( 5.44 usr +  0.02 sys =  5.46 CPU) @ 816865.93/s (n=4460088)
     Mouse:  4 wallclock secs ( 5.18 usr +  0.01 sys =  5.19 CPU) @ 1605727.55/s (n=8333726)
                   Rate      Moo   Mouse  Marlin    Meow Meow (direct) Meow (op)
Moo            816866/s       --    -49%    -83%    -87%          -89%      -89%
Mouse         1605728/s      97%      --    -66%    -74%          -78%      -78%
Marlin        4766686/s     484%    197%      --    -24%          -34%      -36%
Meow          6289607/s     670%    292%     32%      --          -12%      -15%
Meow (direct) 7172481/s     778%    347%     50%     14%            --       -3%
Meow (op)     7394453/s     805%    361%     55%     18%            3%        --

Now let's test that directly against Python:

============================================================
Python Direct Benchmark (slots + property accessors)
============================================================
Python version: 3.9.6 (default, Dec  2 2025, 07:27:58)
[Clang 17.0.0 (clang-1700.6.3.2)]
Iterations: 5,000,000
Runs: 5
------------------------------------------------------------
Run 1: 0.649s (7,704,306/s)
Run 2: 0.647s (7,733,902/s)
Run 3: 0.646s (7,736,307/s)
Run 4: 0.648s (7,720,909/s)
Run 5: 0.649s (7,702,520/s)
------------------------------------------------------------
Median rate: 7,720,909/s
============================================================
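The "slots + property accessors" setup on the Python side would look something like this. An illustrative reconstruction mirroring the one($foo) naming from the Perl run, not the exact script:

```python
import time

# Illustrative: a __slots__ class with a read-only property, one
# attribute read per iteration.
class Foo:
    __slots__ = ("_one",)

    def __init__(self, one):
        self._one = one

    @property
    def one(self):
        return self._one

def bench(iterations):
    foo = Foo(42)
    start = time.perf_counter()
    for _ in range(iterations):
        _ = foo.one
    elapsed = time.perf_counter() - start
    return iterations / elapsed  # iterations per second

if __name__ == "__main__":
    print(f"{bench(5_000_000):,.0f}/s")
```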
============================================================
Perl/Meow Benchmark Comparison
============================================================
Perl version: 5.042000
Iterations: 5000000
Runs: 5
------------------------------------------------------------
Inline Op (one($foo)):
  Run 1: 0.638s (7,841,811/s)
  Run 2: 0.629s (7,954,031/s)
  Run 3: 0.631s (7,929,850/s)
  Run 4: 0.631s (7,926,316/s)
  Run 5: 0.633s (7,901,675/s)
  Median: 7,926,316/s
============================================================
Summary:
------------------------------------------------------------
  Inline Op:    7,926,316/s
============================================================

Conclusion: Why JIT might be the right approach

Looking back at this journey, a pattern emerges. The fastest code isn't the cleverest code. It's the code that does the least work at runtime.

Moo is slow because of the abstraction.

Marlin proved that you could have Moo's features without Moo's overhead by making smart choices at compile time. If an accessor doesn't need lazy building, don't generate code that checks for lazy building.

Meow pushed this further: if you're going to generate code at compile time anyway, why not generate exactly the code you need? Not a generic accessor that handles many cases, but a specific accessor for this specific attribute on this specific class.

And XS::JIT made that practical. Without a lightweight JIT compiler, dynamic XS generation would require shipping a C toolchain with every module, or adding multi-megabyte dependencies. XS::JIT strips the problem down to its essence: take C code, compile it, load it.

The result is object access that competes with, and sometimes beats, languages that have had decades of optimisation work. Not because Perl's interpreter got faster, but because we stopped asking it to do unnecessary work.

Is this approach right for every project? No. Most applications don't need 7 million object accesses per second.

But for the times when performance matters (hot loops, high-frequency trading, real-time systems) it's good to know the ceiling isn't as low as we thought. Perl can be fast. We just needed to get out of its way.

The modules discussed in this post: Marlin, Meow, Inline::C and XS::JIT.

Otobo supports the German Perl Workshop

blogs.perl.org

We are happy to announce that Otobo is also part of our event!

Rother OSS GmbH is the source code owner and maintainer of the service management platform OTOBO.

Together with the community, we continuously develop OTOBO and make sure the tool stays 100% open source.
We support our customers with collaborative consulting, training, development, support and managed services.
https://otobo.io/de/unternehmen/karriere/

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.03 on 2026-01-19, with 56 votes
    • Previous CPAN version: 10.02 was 10 days before
    • Author: UTASHIRO
  2. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.028 on 2026-01-21, with 19 votes
    • Previous CPAN version: 1.027 was 1 month, 15 days before
    • Author: PREACTION
  3. CPAN::Meta - the distribution metadata for a CPAN dist
    • Version: 2.150011 on 2026-01-22, with 39 votes
    • Previous CPAN version: 2.150010 was 9 years, 5 months, 4 days before
    • Author: RJBS
  4. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260120.004 on 2026-01-20, with 25 votes
    • Previous CPAN version: 20260120.002
    • Author: BRIANDFOY
  5. DateTime::Format::Natural - Parse informal natural language date/time strings
    • Version: 1.24 on 2026-01-18, with 19 votes
    • Previous CPAN version: 1.23_03 was 5 days before
    • Author: SCHUBIGER
  6. EV - perl interface to libev, a high performance full-featured event loop
    • Version: 4.37 on 2026-01-22, with 50 votes
    • Previous CPAN version: 4.36 was 4 months, 2 days before
    • Author: MLEHMANN
  7. Git::Repository - Perl interface to Git repositories
    • Version: 1.326 on 2026-01-18, with 27 votes
    • Previous CPAN version: 1.325 was 4 years, 7 months, 17 days before
    • Author: BOOK
  8. IO::Async - Asynchronous event-driven programming
    • Version: 0.805 on 2026-01-19, with 80 votes
    • Previous CPAN version: 0.804 was 8 months, 26 days before
    • Author: PEVANS
  9. Mac::PropertyList - work with Mac plists at a low level
    • Version: 1.606 on 2026-01-20, with 13 votes
    • Previous CPAN version: 1.605 was 5 months, 11 days before
    • Author: BRIANDFOY
  10. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260119 on 2026-01-19, with 44 votes
    • Previous CPAN version: 5.20251220 was 29 days before
    • Author: BINGOS
  11. Net::Server - Extensible Perl internet server
    • Version: 2.015 on 2026-01-22, with 33 votes
    • Previous CPAN version: 2.014 was 2 years, 10 months, 7 days before
    • Author: BBB
  12. Net::SSH::Perl - Perl client Interface to SSH
    • Version: 2.144 on 2026-01-23, with 20 votes
    • Previous CPAN version: 2.144 was 8 days before
    • Author: BDFOY
  13. Release::Checklist - A QA checklist for CPAN releases
    • Version: 0.19 on 2026-01-25, with 16 votes
    • Previous CPAN version: 0.18 was 1 month, 15 days before
    • Author: HMBRAND
  14. Spreadsheet::Read - Meta-Wrapper for reading spreadsheet data
    • Version: 0.95 on 2026-01-25, with 31 votes
    • Previous CPAN version: 0.94 was 1 month, 15 days before
    • Author: HMBRAND
  15. SPVM - The SPVM Language
    • Version: 0.990117 on 2026-01-24, with 36 votes
    • Previous CPAN version: 0.990116
    • Author: KIMOTO
  16. utf8::all - turn on Unicode - all of it
    • Version: 0.026 on 2026-01-18, with 31 votes
    • Previous CPAN version: 0.025 was 1 day before
    • Author: HAYOBAAN

(dcxxiii) metacpan weekly report - Marlin

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 36

Week's winner: Marlin (+3)

Build date: 2026/01/25 12:53:03 GMT



See OSDC Perl

  • 00:00 Working with Peter Nilsson

  • 00:01 Find a module to add a GitHub Action to: go to CPAN::Digger recent

  • 00:10 Found Tree-STR

  • 01:20 Bug in CPAN Digger that shows a GitHub link even if it is broken.

  • 01:30 Search for the module name on GitHub.

  • 02:25 Verify that the name of the module author is the owner of the GitHub repository.

  • 03:25 Edit the Makefile.PL.

  • 04:05 Edit the file, fork the repository.

  • 05:40 Send the Pull-Request.

  • 06:30 Back to CPAN Digger recent to find a module without GitHub Actions.

  • 07:20 Add file / Fork repository gives us "unexpected error".

  • 07:45 Direct fork works.

  • 08:00 Create the .github/workflows/ci.yml file.

  • 09:00 Example CI YAML file: copy it and edit it.

  • 14:25 Look at a GitLab CI file for a few seconds.

  • 14:58 Commit - change the branch and add a description!

  • 17:31 Check if the GitHub Action works properly.

  • 18:17 There is a warning while the tests are running.

  • 21:20 Opening an issue.

  • 21:48 Opening the PR (on the wrong repository).

  • 22:30 Linking to output of a CI?

  • 23:40 Looking at the file to see the source of the warning.

  • 25:25 Assigning an issue? In an open source project?

  • 27:15 Edit the already created issue.

  • 28:30 Use the Preview!

  • 29:20 Sending the Pull-Request to the project owner.

  • 31:25 Switching to Jonathan

  • 33:10 CPAN Digger recent

  • 34:00 Net-SSH-Perl of BDFOY - Testing a networking module is hard and Jonathan is using Windows.

  • 35:13 Frequency of update of CPAN Digger.

  • 36:00 Looking at our notes to find the GitHub account of the module author LNATION.

  • 38:10 Look at the modules of LNATION on MetaCPAN

  • 38:47 Found JSON::Lines

  • 39:42 Install the dependencies, run the tests, generate test coverage.

  • 40:32 Cygwin?

  • 42:45 Add Github Action copying it from the previous PR.

  • 43:54 META.yml should not be committed as it is a generated file.

  • 48:25 I am looking for sponsors!

  • 48:50 Create a branch that reflects what we do.

  • 51:38 commit the changes

  • 53:10 Fork the project on GitHub and setup git remote locally.

  • 55:05 git push -u fork add-ci

  • 57:44 Sending the Pull-Request.

  • 59:10 The 7 dwarfs and Snow White. My hope is to have 100 people sending these PRs.

  • 1:01:30 Feedback.

  • 1:02:10 Did you think this was useful?

  • 1:02:55 Would you be willing to tell people you know that you did this and you will do it again?

  • 1:03:17 You can put this on your resume. It means you know how to do it.

  • 1:04:16 ... and Zoom suddenly closed the recording...

See OSDC Perl

  • 00:00 Introduction and about OSDC Perl
  • 01:50 Sponsors of MetaCPAN, looking at some modules on CPAN.
  • 03:30 The river status
  • 04:10 Picking MIME::Lite and looking at MetaCPAN. Uses RT, has no GitHub Actions.
  • 05:55 Look at the clone of the repository, the 2 remotes and the 3 branches.
  • 06:40 GitHub Actions Examples
  • 08:00 Running the Docker container locally. Install the dependencies.
  • 09:10 Run the tests locally.
  • 09:20 Add the .gitignore file.
  • 10:30 Picking a module from MetaCPAN recent
  • 11:10 CPAN Digger recent
  • 12:20 Explaining about pair-programming and workshop.
  • 13:25 CPAN Digger statistics
  • 14:15 Generate test coverage report using Devel::Cover.
  • 17:15 The fold function that is not tested and not even used.
  • 18:39 Wanted to open an issue about fold, but I'll probably not do it on RT.
  • 20:00 Updating the OSDC Perl document with the TODO items.
  • 21:13 Split the packages into files?
  • 22:27 The culture of Open Source contributions.
  • 24:20 Why is the BEGIN line red when the content of the block is green?
  • 27:40 Switching to the long-header branch.
  • 30:40 Finding header_as_string in the documentation.
  • 32:15 Going over the test with the long subject line.
  • 33:54 Let's compare the result to an empty string.
  • 36:15 Switching to Test::Longstring to see the difference.
  • 37:35 Test::Differences was also suggested.
  • 39:40 Push out the branch and send the Pull-request.
  • 40:35 Did this really increase the test coverage? Let's see it.
  • 43:50 Messing up the explanation about condition coverage.
  • 45:35 The repeated use of the magic number 72.
  • 47:00 Is the output actually correct? Is it according to the standard?
  • 51:45 Discussion about /usr/bin/perl on the first line.
  • 52:45 No version is specified.
  • 55:15 The sentence should be "conforms to the standard"

Download from the usual place, my Wiki Haven.

Announcing the Perl Toolchain Summit 2026!

The organizers have been working behind the scenes since last September, and today I’m happy to announce that the 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.

This post is brought to you by Simplelists, a group email and mailing list service provider, and a recurring sponsor of the Perl Toolchain Summit.

Simplelists logo


Started in 2008 as the Perl QA Hackathon in Oslo, the Perl Toolchain Summit is an annual event that brings together the key developers working on the Perl toolchain. Each year (except for 2020-2022), the event moves from country to country all over Europe, organised by local teams of volunteers. The surplus money from previous summits helps fund the next one.

Since 2023, the organizing team has been formally split between a “global” team and a “local” team (although this setup had been used informally before).

The global team is made up of veteran PTS organizers, who deal with invitations, finding sponsors, paying bills and communications. They are Laurent Boivin (ELBEHO), Philippe Bruhat (BOOK), Thibault Duponchelle (CONTRA), Tina Müller (TINITA) and Breno de Oliveira (GARU), supported by Les Mongueurs de Perl’s bank account.

The local team members for this year have organized several events in Vienna (including the Perl QA Hackathon 2010!) and deal with finding the venue, the hotel, the catering and welcoming our attendees in Vienna in April. They are Alexander Hartmaier (ABRAXXA), Thomas Klausner (DOMM), Maroš Kollár (MAROS), Michael Kröll and Helmut Wollmersdorfer (WOLLMERS).

The developers who maintain CPAN and associated tools and services are all volunteers, scattered across the globe. This event is the one time in the year when they can get together.

The summit provides dedicated time to work on the critical systems and tools, with all the right people in the same room. The attendees hammer out solutions to thorny problems and discuss new ideas to keep the toolchain moving forward. This year, about 40 people have been invited, with 35 participants expected to join us in Vienna.

If you want to find out more about the work being done at the Toolchain Summit, and hear from the teams and people involved, you can listen to several episodes of The Underbar podcast, which were recorded during the 2025 edition in Leipzig, Germany.

Given the important nature of the attendees’ work and their volunteer status, we try to pay for most expenses (travel, lodging, food, etc.) through sponsorship. If you’re interested in helping sponsor the summit, please get in touch with the global team at pts2026@perltoolchainsummit.org.

Simplelists has been sponsoring the Perl Toolchain Summit for several years now. We are very grateful for their continued support.

Simplelists is proud to sponsor the 2026 Perl Toolchain Summit, as Perl forms the core of our technology stack. We are grateful that we can rely on the robust and comprehensive Perl ecosystem, from the core of Perl itself to a whole myriad of CPAN modules. We are glad that the PTS continues its unsung work, ensuring that Simplelists can continue to rely on these many tools.

Welcome to Week #357 of The Weekly Challenge.
Thank you Team PWC for your continuous support and encouragement.

Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260110.003 on 2026-01-11, with 25 votes
    • Previous CPAN version: 20260104.001 was 6 days before
    • Author: BRIANDFOY
  2. FFI::Platypus - Write Perl bindings to non-Perl libraries with FFI. No XS required.
    • Version: 2.11 on 2026-01-12, with 69 votes
    • Previous CPAN version: 2.10 was 1 year, 24 days before
    • Author: PLICEASE
  3. Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
    • Version: 1.70 on 2026-01-11, with 18 votes
    • Previous CPAN version: 1.69 was before
    • Author: DDICK
  4. Module::Starter - a simple starter kit for any module
    • Version: 1.82 on 2026-01-10, with 34 votes
    • Previous CPAN version: 1.81 was before
    • Author: XSAWYERX
  5. Net::DNS - Perl Interface to the Domain Name System
    • Version: 1.54 on 2026-01-16, with 29 votes
    • Previous CPAN version: 1.53 was 4 months, 18 days before
    • Author: NLNETLABS
  6. Net::SSH::Perl - Perl client Interface to SSH
    • Version: 2.144 on 2026-01-14, with 20 votes
    • Previous CPAN version: 2.143 was 1 year, 10 days before
    • Author: BRIANDFOY
  7. Sidef - The Sidef Programming Language - A modern, high-level programming language
    • Version: 26.01 on 2026-01-13, with 121 votes
    • Previous CPAN version: 25.12 was 23 days before
    • Author: TRIZEN
  8. Sys::Virt - libvirt Perl API
    • Version: v12.0.0 on 2026-01-16, with 17 votes
    • Previous CPAN version: v11.10.0 was 1 month, 14 days before
    • Author: DANBERR
  9. utf8::all - turn on Unicode - all of it
    • Version: 0.025 on 2026-01-16, with 30 votes
    • Previous CPAN version: 0.024 was 8 years, 11 days before
    • Author: HAYOBAAN

(dcxxii) metacpan weekly report - Marlin

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 89

Week's winner: Marlin (+6)

Build date: 2026/01/18 10:13:31 GMT


Clicked for first time:


Increasing its reputation:


JSON parse array of data

Perl questions on StackOverflow

I have the following program with JSON:

use strict;
use warnings;

use Data::Dumper qw( );
use JSON qw( );

my $json_text = '[
  {
    "sent": "2026-01-16T17:00:00Z",
    "data": [
      {
        "headline": "text1",
        "displayText": "text2"
      },
      {
        "displayText": "text3"
      },
      {
        "displayText": "text4"
      }
    ]
  },
  {
    "sent": "2026-01-16T17:00:00Z",
    "data": [
      {
        "headline": "text5",
        "displayText": "text6"
      },
      {
        "displayText": "text7"
      },
      {
        "displayText": "text8"
      },
      {
        "headline": "text9",
        "displayText": "text10"
      }
    ]
  }
]';

my $json = JSON->new;
my $data = $json->decode($json_text);

print Data::Dumper->Dump($data);

# This is pseudocode:
foreach ( $data->[] ) {
    print "\$_ is $_";
}

I would like to walk through the elements of the JSON and find all sent and all displayText values. But I do not know how to dereference the first element; it is an array without any name in this case.
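For what it's worth, here is a minimal sketch of how that loop could be written, using a trimmed-down copy of the data above. The key point is that decode_json returns a reference to the top-level (nameless) array, so @$data dereferences it for iteration:

```perl
use strict;
use warnings;
use JSON qw( decode_json );

# A trimmed-down copy of the structure above.
my $json_text = '[{"sent":"2026-01-16T17:00:00Z","data":[{"headline":"text1","displayText":"text2"},{"displayText":"text3"}]}]';

# decode_json returns a reference to the top-level (nameless) array.
my $data = decode_json($json_text);

for my $entry (@$data) {                    # @$data dereferences the array ref
    print "sent: $entry->{sent}\n";
    for my $item (@{ $entry->{data} }) {    # 'data' holds an array ref of hash refs
        print "displayText: $item->{displayText}\n";
    }
}
```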

Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with a local development…

Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.


Dave writes:

During December, I fixed assorted bugs, and started work on another tranche of ExtUtils::ParseXS fixups, this time focussing on:

  • adding and rewording warning and error messages, and adding new tests for them;

  • improving test coverage: all XS keywords have tests now;

  • reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format; splitting a large test file; modernising tests;

  • refactoring and improving the length(str) pseudo-parameter implementation.

By the end of this report period, that work was about half finished; it is currently finished and being reviewed.

Summary:

  • 10:25 GH #16197 re eval stack unwinding
  • 1:39 GH #23903 BBC: bleadperl breaks ETHER/Package-Stash-XS-0.30.tar.gz
  • 0:09 GH #23986 Perl_rpp_popfree_to(SV sp**) questionable design
  • 3:02 fix Pod::Html stderr noise
  • 27:47 improve ExtUtils::ParseXS
  • 1:47 modernise perlxs.pod

Total: 44:49 (HH:MM)


Tony writes:

```
[Hours] [Activity]

2025/12/01 Monday
 0.23 memEQ cast discussion with khw
 0.42 #23965 testing, review and comment
 2.03 #23885 review, testing, comments
 0.08 #23970 review and approve
 0.13 #23971 review and approve
 0.08 #23965 follow-up
 2.97

2025/12/02 Tuesday
 0.73 #23969 research and comment
 0.30 #23974 review and approve
 0.87 #23975 review and comment
 0.38 #23975 review reply and approve
 0.25 #23976 review, research and approve
 0.43 #23977 review, research and approve
 1.20 #23918 try to produce expected bug and succeed
 4.16

2025/12/03 Wednesday
 0.35 #23883 check updates and approve with comment
 0.72 #23979 review, try to trigger the messages and approve
 0.33 #23968 review, research and approve
 0.25 #23961 review and comment
 2.42 #23918 fix handling of context, testing, push to update, comment on overload handling plans, start on it
 4.07

2025/12/04 Thursday
 2.05 #23980 review, comment and approve, fix group_end() decorator and make PR 23983
 0.25 #23982 review, research and approve
 1.30 #23918 test for skipping numeric overload, and fix, start on force overload
 3.60

2025/12/05 Friday
 0.63 #23980 comment
 0.63

2025/12/08 Monday
 0.90 #23984 review and comment
 0.13 #23988 review and comment
 2.03 #23918 work on force overload implementation
 1.45 #23918 testing, docs
 4.51

2025/12/09 Tuesday
 0.32 github notifications
 1.23 #23918 add more tests
 0.30 #23992 review
 0.47 #23993 research, testing and comment
 0.58 #23993 review and comment
 2.90

2025/12/10 Wednesday
 0.72 #23992 review updates, testing and comment
 1.22 #23782 review (and some #23885 discussion in irc)
 1.35 look into Jim's freebsd core dump, reproduce and find cause, email him and briefly comment in irc, more 23885 discussion and approve 23885
 3.29

2025/12/11 Thursday
 0.33 #23997 comment
 1.08 #23995 research and comment
 0.47 #23998 review and approve
 1.15 #23918 cleanup
 3.03

2025/11/15 Saturday
 0.20 #23998 review updates and approve
 0.53 #23975 review comment, research and follow-up
 1.25 #24002 review discussion, debugging and comment
 0.28 #23993 comment
 0.67 #23918 commit cleanup
 0.20 #24002 follow-up
 0.65 #23975 research and follow-up
 3.78

2025/12/16 Tuesday
 0.40 #23997 review, comment, approve
 0.37 #23988 review and comment
 0.95 #24001 debugging and comment
 0.27 #24006 review and comment
 0.23 #24004 review and nothing to say
 1.27 #23918 more cleanup, documentation
 3.49

2025/12/17 Wednesday
 0.32 #24008 testing, debugging and comment
 0.08 #24006 review update and approve
 0.60 #23795 quick re-check and approve
 1.02 #23918 more fixes, address each PR comment and push for CI
 0.75 #23956 work on a test and a fix, push for CI
 0.93 #24001 write a test, and a fix, testing
 0.67 #24001 write an inverted test too, commit message and push for CI
 0.17 #23956 perldelta
 0.08 #23956 check CI results, make PR 24010
 0.15 #24001 perldelta and make PR 24011
 4.77

2025/12/18 Thursday
 0.27 #24001 rebase, local testing, push for CI
 1.15 #24012 research
 0.50 #23995 testing and comment
 0.08 #24001 check CI results and apply to blead
 2.00

Which I calculate is 43.2 hours.

Approximately 32 tickets were reviewed or worked on, and 1 patch was applied.
```


Paul writes:

A mix of focus this month. I was hoping to get attributes-v2 towards something that could be reviewed and merged, but then I bumped into a bunch of refalias-related issues. Also spent about 5 hours reviewing Dave's giant xspod rewrite.

  • 1 = Rename THING token in grammar to something more meaningful
    • https://github.com/Perl/perl5/pull/23982
  • 4 = Continue work on attributes-v2
  • 1 = BBC Ticket on Feature-Compat-Class
    • https://github.com/Perl/perl5/issues/23991
  • 2 = Experiment with refalias parameters with defaults in XS-Parse-Sublike
  • 1 = Managing the PPC documents and overall process
  • 2 = Investigations into the refalias and declared_refs features, to see if we can un-experiment them
  • 2 = Add a warning to refalias that breaks closures
    • https://github.com/Perl/perl5/pull/24026 (work-in-progress)
  • 3 = Restore refaliased variables after foreach loop
    • https://github.com/Perl/perl5/issues/24028
    • https://github.com/Perl/perl5/pull/24029
  • 3 = Clear pad after multivariable foreach
    • https://github.com/Perl/perl5/pull/24034 (not yet merged)
  • 6 = Github code reviews (mostly on Dave's xspod)
    • https://github.com/Perl/perl5/pull/23795

Total: 25 hours

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.02 on 2026-01-09, with 56 votes
    • Previous CPAN version: 10.01 was 9 days before
    • Author: UTASHIRO
  2. App::Netdisco - An open source web-based network management tool.
    • Version: 2.097002 on 2026-01-09, with 818 votes
    • Previous CPAN version: 2.097001 
    • Author: OLIVER
  3. App::Sqitch - Sensible database change management
    • Version: v1.6.1 on 2026-01-06, with 3087 votes
    • Previous CPAN version: v1.6.0 was 3 months before
    • Author: DWHEELER
  4. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260104.001 on 2026-01-04, with 25 votes
    • Previous CPAN version: 20251228.001 was 6 days before
    • Author: BRIANDFOY
  5. DateTime::Format::Natural - Parse informal natural language date/time strings
    • Version: 1.23 on 2026-01-04, with 19 votes
    • Previous CPAN version: 1.23 was 5 days before
    • Author: SCHUBIGER
  6. Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
    • Version: 1.69 on 2026-01-10, with 19 votes
    • Previous CPAN version: 1.68 was 3 months, 26 days before
    • Author: DDICK
  7. GD - Perl interface to the libgd graphics library
    • Version: 2.84 on 2026-01-04, with 32 votes
    • Previous CPAN version: 2.83 was 1 year, 6 months, 11 days before
    • Author: RURBAN
  8. IO::Socket::SSL - Nearly transparent SSL encapsulation for IO::Socket::INET.
    • Version: 2.098 on 2026-01-06, with 49 votes
    • Previous CPAN version: 2.097 
    • Author: SULLR
  9. JSON::Schema::Modern - Validate data against a schema using a JSON Schema
    • Version: 0.632 on 2026-01-06, with 16 votes
    • Previous CPAN version: 0.631 was 12 days before
    • Author: ETHER
  10. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.037000 on 2026-01-07, with 27 votes
    • Previous CPAN version: 2.036000 
    • Author: MICKEY
  11. MIME::Lite - low-calorie MIME generator
    • Version: 3.035 on 2026-01-08, with 35 votes
    • Previous CPAN version: 3.034 was 2 days before
    • Author: RJBS
  12. Module::Starter - a simple starter kit for any module
    • Version: 1.81 on 2026-01-09, with 34 votes
    • Previous CPAN version: 1.80 
    • Author: XSAWYERX
  13. Perl::Tidy - indent and reformat perl scripts
    • Version: 20260109 on 2026-01-08, with 147 votes
    • Previous CPAN version: 20250912 was 3 months, 26 days before
    • Author: SHANCOCK
  14. perlsecret - Perl secret operators and constants
    • Version: 1.018 on 2026-01-09, with 55 votes
    • Previous CPAN version: 1.017 was 4 years, 2 months before
    • Author: BOOK
  15. Type::Tiny - tiny, yet Moo(se)-compatible type constraint
    • Version: 2.010001 on 2026-01-06, with 148 votes
    • Previous CPAN version: 2.010000 was 7 days before
    • Author: TOBYINK
  16. UV - Perl interface to libuv
    • Version: 2.001 on 2026-01-06, with 14 votes
    • Previous CPAN version: 2.000 was 4 years, 5 months, 8 days before
    • Author: PEVANS

In a script I'm using constants (use constant ...) to allow re-use in actual regular expressions, using the pattern from https://stackoverflow.com/a/69379743/6607497. However, when using a {...} repeat specifier following such a constant expansion, Perl wants to treat the constant as a hash variable.

The question is how to avoid that.

Code example:

main::(-e:1):   1
  DB<1> use constant CHARSET => '[[:graph:]]'

  DB<2> x "foo" =~ qr/^[[:graph:]]{3,}$/
0  1
  DB<3> x "foo" =~ qr/^${\CHARSET}{3,}$/
Not a HASH reference at (eval 8)[/usr/lib/perl5/5.26.1/perl5db.pl:738] line 2.
  DB<4> x "foo" =~ qr/^${\CHARSET}\{3,}$/
  empty array
  DB<5> x $^V
0  v5.26.1

According to https://stackoverflow.com/a/79845011/6607497 a solution may be to add a space that is then ignored, like this: qr/^${\CHARSET} {3,}$/x; however I don't understand why this works, because outside of a regular expression the space before { is ignored anyway:

  DB<6> x "foo" =~ qr/^${\CHARSET} {3,}$/x
0  1
  DB<7> %h = (a => 3)

  DB<8> x $h{a}
0  3
  DB<9> x $h {a}
0  3

The manual page (perlop(1) on "Quote and Quote-like Operators") isn't very precise on that:

For constructs that do interpolate, variables beginning with "$" or "@" are interpolated. Subscripted variables such as $a[3] or "$href->{key}[0]" are also interpolated, as are array and hash slices. But method calls such as "$obj->meth" are not.
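For comparison, here is a sketch (not the only possible answer) of two ways to sidestep the problem: build the pattern by plain string concatenation, so the {3,} never passes through the interpolation parser at all, or wrap the interpolation in a non-capturing group, where the closing parenthesis stops the subscript detection before the brace:

```perl
use strict;
use warnings;
use constant CHARSET => '[[:graph:]]';

# During interpolation, ${\CHARSET}{3,} is parsed as a hash-element
# lookup (${...}{...}), hence "Not a HASH reference".  Concatenating
# the pieces keeps the quantifier away from the interpolation parser:
my $pattern = '^' . CHARSET . '{3,}$';
my $re      = qr/$pattern/;

# A non-capturing group also works, because the ")" ends the
# interpolated expression before the "{" is seen:
my $re2 = qr/^(?:${\CHARSET}){3,}$/;

print "foo" =~ $re ? "match\n" : "no match\n";   # three graphic chars
print "a b" =~ $re ? "match\n" : "no match\n";   # space is not [[:graph:]]
```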

foobar is a Perl script that prints to both standard output and standard error. In a separate Perl script echo-stderr, I run foobar and capture its standard error using IPC::Open3's open3 function, and simply echo it back.

Here's the code for echo-stderr:

#!/usr/bin/perl -w

use IPC::Open3;
use Symbol 'gensym';

$fh = gensym;
$pid = open3('STDIN', 'STDOUT', $fh, './foobar') or die "$0: failed to run ./foobar\n";

while ( <$fh> ) {
    print STDERR $_;
}

close $fh;
waitpid($pid, 0);

The result is that whatever foobar writes to standard error is printed, but nothing that it writes to standard output is. And there is an error at the end:

<message written to STDERR>
<message written to STDERR>
...
Unable to flush stdout: Bad file descriptor

What is the reason for this error?
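For what it's worth, a sketch of the usual dup-handle fix, with the asker's ./foobar replaced by a stand-in one-liner so the snippet is self-contained. Passing the bare strings 'STDIN' and 'STDOUT' makes open3 re-open the parent's own STDIN and STDOUT as pipes to the child, which is why the parent's stdout can no longer be flushed at exit; the '<&' / '>&' forms dup the existing handles into the child instead. (Note also that open3 dies on failure rather than returning false, so the `or die` in the original never fires.)

```perl
use strict;
use warnings;
use IPC::Open3;
use Symbol 'gensym';

# Stand-in for ./foobar: writes one line to each stream.
my @cmd = ($^X, '-e', 'print STDOUT "to stdout\n"; print STDERR "to stderr\n"');

# '<&STDIN' / '>&STDOUT' dup the parent's existing handles into the
# child, instead of re-opening them as pipes (which is what the bare
# strings 'STDIN' and 'STDOUT' cause open3 to do).
my $err = gensym;
my $pid = open3('<&STDIN', '>&STDOUT', $err, @cmd);

while ( my $line = <$err> ) {    # only the child's stderr comes back here
    print STDERR $line;
}

close $err;
waitpid($pid, 0);
```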

App::HTTPThis: the tiny web server I keep reaching for

Perl Hacks

Whenever I’m building a static website, I almost never start by reaching for Apache, nginx, Docker, or anything that feels like “proper infrastructure”. Nine times out of ten I just want a directory served over HTTP so I can click around, test routes, check assets, and see what happens in a real browser.

For that job, I’ve been using App::HTTPThis for years.

It’s a simple local web server you run from the command line. Point it at a directory, and it serves it. That’s it. No vhosts. No config bureaucracy. No “why is this module not enabled”. Just: run a command and you’ve got a website.

Why I’ve used it for years

Static sites are deceptively simple… right up until they aren’t.

  • You want to check that relative links behave the way you think they do.

  • You want to confirm your CSS and images are loading with the paths you expect.

  • You want to reproduce “real HTTP” behaviour (caching headers, MIME types, directory handling) rather than viewing files directly from disk.

Sure, you can open file:///.../index.html in a browser, but that’s not the same thing as serving it over HTTP. And setting up Apache (or friends) feels like bringing a cement mixer to butter some toast.

With http_this, the workflow is basically:

  • cd into your site directory

  • run a single command

  • open a URL

  • get on with your life

It’s the “tiny screwdriver” that’s always on my desk.
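Concretely, that workflow is a couple of lines (a sketch: the directory and port here are just examples, and the install step assumes the cpanm client):

```shell
cpanm App::HTTPThis        # one-time install from CPAN
cd ~/my-static-site
http_this --port 8080      # serve this directory over HTTP
# ...then open http://localhost:8080/ in a browser
```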

Why I took it over

A couple of years ago, the original maintainer had (entirely reasonably!) become too busy elsewhere and the distribution wasn’t getting attention. That happens. Open source is like that.

But I was using App::HTTPThis regularly, and I had one small-but-annoying itch: when you visited a directory URL, it would always show a directory listing – even if that directory contained an index.html. So instead of behaving like a typical web server (serve index.html by default), it treated index.html as just another file you had to click.

That’s exactly the sort of thing you notice when you’re using a tool every day, and it was irritating enough that I volunteered to take over maintenance.

(If you want to read more on this story, I wrote a couple of blog posts.)

What I’ve done since taking it over

Most of the changes are about making the “serve a directory” experience smoother, without turning it into a kitchen-sink web server.

1) Serve index pages by default (autoindex)

The first change was to make directory URLs behave like you’d expect: if index.html exists, serve it automatically. If it doesn’t, you still get a directory listing.

2) Prettier index pages

Once autoindex was in place, I then turned my attention to the fallback directory listing page. If there isn’t an index.html, you still need a useful listing — but it doesn’t have to look like it fell out of 1998. So I cleaned up the listing output and made it a bit nicer to read when you do end up browsing raw directories.

3) A config file

Once you’ve used a tool for a while, you start to realise you run it the same way most of the time.

A config file lets you keep your common preferences in one place instead of re-typing options. It keeps the “one command” feel, but gives you repeatability when you want it.

4) --host option

The ability to control the host binding sounds like an edge case until it isn’t.

Sometimes you want:

  • only localhost access for safety;

  • access from other devices on your network (phone/tablet testing);

  • behaviour that matches a particular environment.

A --host option gives you that control without adding complexity to the default case.

The Bonjour feature (and what it’s for)

This is the part I only really appreciated recently: App::HTTPThis can advertise itself on your local network using mDNS / DNS-SD – commonly called Bonjour on Apple platforms, Avahi on Linux, and various other names depending on who you’re talking to.

It’s switched on with the --name option.

http_this --name MyService

When you do that, http_this publishes an _http._tcp service on your local network with the instance name you chose (MyService in this case). Any device on the same network that understands mDNS/DNS-SD can then discover it and resolve it to an address and port, without you having to tell anyone, “go to http://192.168.1.23:7007/”.

Confession time: I ignored this feature for ages because I’d mentally filed it under “Apple-only magic” (Bonjour! very shiny! probably proprietary!). It turns out it’s not Apple-only at all; it’s a set of standard networking technologies that are supported on pretty much everything, just under a frankly ridiculous number of different names. So: not Apple magic, just local-network service discovery with a branding problem.

Because I’d never really used it, I finally sat down and tested it properly after someone emailed me about it last week. It worked nicely; nicely enough that I’ve now added a BONJOUR.md file to the repo with a practical explanation of what’s going on, how to enable it, and a few ways to browse/discover the advertised service.

(If you’re curious, look for _http._tcp and your chosen service name.)
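From another machine on the network, the stock discovery tools can browse for the advertisement. These are the standard mDNS/DNS-SD clients, nothing specific to http_this:

```shell
# macOS: dns-sd ships with the OS; browse for HTTP services
dns-sd -B _http._tcp local.

# Linux: avahi-browse comes from the avahi-utils package
avahi-browse --resolve _http._tcp
```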

It’s a neat quality-of-life feature if you’re doing cross-device testing or helping someone else on the same network reach what you’re running.

Related tools in the same family

App::HTTPThis is part of a little ecosystem of “run a thing here quickly” command-line apps. If you like the shape of http_this, you might also want to look at these siblings:

  • https_this : like http_this, but served over HTTPS (useful when you need to test secure contexts, service workers, APIs that require HTTPS, etc.)

  • cgi_this : for quick CGI-style testing without setting up a full web server stack

  • dav_this : serves content over WebDAV (handy for testing clients or workflows that expect DAV)

  • ftp_this : quick FTP server for those rare-but-real moments when you need one

They all share the same basic philosophy: remove the friction between “I have a directory” and “I want to interact with it like a service”.

Wrapping up

I like tools that do one job, do it well, and get out of the way. App::HTTPThis has been that tool for me for years and it’s been fun (and useful) to nudge it forward as a maintainer.

If you’re doing any kind of static site work — docs sites, little prototypes, generated output, local previews — it’s worth keeping in your toolbox.

And if you’ve got ideas, bug reports, or platform notes (especially around Bonjour/Avahi weirdness), I’m always happy to hear them.

The post App::HTTPThis: the tiny web server I keep reaching for first appeared on Perl Hacks.

Horror Movie Month 2024

rjbs forgot what he was saying

Yesterday, I posted about the books I read in 2025, which made me remember that I never posted about the (horror) movies we watched in October 2024. So, I thought I’d get around to that. Of course this will be short and lossy, right? It’s been over a year.

Here’s what we watched for Horror Movie Month in 2024, at least according to my notes!

October 1: Raw (2016)

Girl goes to college, finally lets loose by becoming a cannibal. This movie was French and you’d know it even if you watched it dubbed. It was okay. It was worth my time.

October 2: Tragedy Girls (2017)

Two high school girls who are interested in death try to make more of it happen. It was a horror-comedy, and it was fun. Brianna Hildebrand, who you may remember as Negasonic Teenage Warhead, was in it.

October 4: V/H/S/Beyond (2024)

Honestly, apart from the 2025 one, most of the V/H/S movies are about the same to me: mixed bags, but usually not quite worth the whole watch. This one was that too. It had its moments.

October 5: Humanist Vampire Seeking Consenting Suicidal Person (2023)

Honestly, I’d watch just for “Want to see a French-Canadian horror movie?”

A young woman in a family of vampires really doesn’t want to go hunt for blood, but her parents have reluctantly become insistent. She decides she’ll look for somebody who’d be willing to donate.

It was good, and sort of a horror-comedy. It didn’t feel like every other movie, which was good.

October 6: Onyx the Fortuitous and the Talisman of Souls (2023)

I liked this the least of everybody in my household, I think. It was sometimes pretty funny, but the main character got on my nerves. I got the impression he is a YouTube character with some following, maybe? Like Ernest P. Worrell or other over the top “originally in small doses” characters, he was just too much here.

That said, we still make references to the guy’s catch phrase, so it stuck with us.

October 6: Cuckoo (2024)

This was one of the big hits of “general horror movies of 2024”, so I was glad we got to watch it. I liked it! It wasn’t perfect, but it did well at being one of those “Why can’t everybody else see how messed up this lovely place really is?” movies.

October 7: Let the Wrong One In (2021)

This movie was really stupid and I liked it. First off, there was a character named Deco, which made me think of The Commitments, which won points. Also, Anthony Stewart Head.

Basically it’s sort of a slapstick farcical vampire movie set in Ireland. Honestly, “What if [some successful movie] but the protagonists were idiots?” is a pretty good formula.

October 8: The Witches of Eastwick (1987)

Still a classic.

Sure, it’s kind of a mess here and there, but it’s got a great cast and it just goes for it. I read recently that there was talk about casting other people (other than Jack Nicholson) as Daryl Van Horne, which seems like it could only have been worse. One name mentioned was Bill Murray. What?! This was a nearly perfect vehicle for Jack Nicholson doing comedy, and Cher, Susan Sarandon, and Michelle Pfeiffer were a lot of fun, too.

The cherry scene!

October 9: Courtney Gets Possessed (2023)

I barely remember this one. I think it was funny enough? Demonic hijinks at a bachelorette party.

October 10: There’s Something Wrong with the Children (2023)

Two parents, their two kids, and an adult friend take a camping trip. The kids wander off in the woods and when they come back, they are… off. Things keep getting worse.

This was good. It wasn’t great, but it was good. You want to yell, “Wake up, people, your kids are busted!”

October 12: 6:45 (2021)

It took me a while to remember this one. A couple take an ill-advised holiday to an island town, which leads to a deadly time loop. It was okay, but there are many better movies to watch instead. (Look, maybe it’s better than I remember, but given I barely remember it…)

October 13: Oddity (2024)

I didn’t remember this until reading the synopsis, but it was quite good. So maybe my “it’s bad because I don’t remember it” take above is wrong!

A woman is murdered at her secluded fixer-upper in the countryside. Later, her twin sister shows up and is really weird. What’s going on? You should just watch it, probably. Not a comedy.

October 14: Mr. Crocket (2024)

This is sort of like “what if there was a haunted video tape that showed you a cutesy TV show for kids, but also it was evil?” I wanted to like it, but it was just ugly. It wasn’t fun or funny, just dark. It wasn’t darkly funny, although maybe that was the goal.

October 15: Evil Dead Ⅱ (1987)

I think we watched this because Marty hadn’t seen it. Look, it’s fine. It’s a lot better than the first version. I think it’s just not exactly my bag. (I really like Bruce Campbell, though!)

October 16: Cube (2021)

I really liked Cube! This is not that movie, though, it’s a 2021 remake from Japan. Don’t bother. It is worse in every way. Maybe it’s okay, but it’s not significantly different, so go with the original.

October 18: Zombie Town (2023)

A reclusive movie director releases one more movie, and it turns everybody in town into zombies. Kids fight back.

This kind of movie could’ve been fun, but it wasn’t. It had two of the Kids in the Hall in it! What a waste.

October 19: The Finale (2023)

Oh yeah, this one.

Murders start happening at a summer theater camp. Everybody has a motive. Who did it?

Well, look, I think this was maybe better than the related Stage Fright, but it was bad. It was way too long. It was sometimes nonsensical. I do not recommend it.

October 19: Invitation to Hell (1984)

This gets huge points from me for “picked a weird premise and didn’t back down.” Wes Craven directs. A family moves to a new planned town where the father has taken a great new job. Everybody is obsessed with the local country club and its manager. Like, weirdly obsessed. What the heck is going on in town? Also, Robert Urich and Susan Lucci? Wild.

Not great, but I am glad I watched it.

October 20: Corporate Animals (2019)

A bunch of coworkers on a team-building exercise end up trapped in a cave. Demi Moore?! We had fun. It was stupid in a good way. The company specialized in edible cutlery, which paid off a few ways.

October 20: Stranger in Our House (1978)

Wes Craven again, this time with Linda Blair. It wasn’t great, sadly, and the concept has been done a bunch of times. Orphaned kid moves in with other family, and only one family member realizes that maybe this is a bad idea. It was… fine.

October 24: Little Evil (2017)

Adam Scott becomes the step-dad to the Antichrist and really tries to make things work. This was not amazing, but it was much better than I expected. I don’t mind having watched it, but I wouldn’t watch it again.

Good job casting the really creepy kid, though!

October 25: Deer Camp ‘86 (2022)

A bunch of guys go hunting and get into trouble. I remember nothing.

October 26: The Day of the Beast (1995)

A priest figures out how to predict the exact birth of the Antichrist, and enlists the help of a headbanger and a TV occultist to save the world. Was this a comedy on purpose? I just don’t know. It was weird, and unpredictable, and so I liked it.

October 27: The Strangers (2008)

What a lousy movie to end on. It’s a boring, tedious home invasion movie. I see it was 86 minutes long, but I remember it feeling much longer. Also, I think they remade it into a three part movie? I can’t imagine.

I just didn’t care about anyone or anything in this movie.

the books I read in 2025

rjbs forgot what he was saying

I don’t take the Goodreads “reading challenge” too seriously, but I did hit my target last year, and it felt good. I thought I’d try again this year and I did get it done – only just, though, as I finished my last three books in the last two days of the year. I think I would’ve liked to read a bit more through the year, but sometimes I just wasn’t feeling it. So it goes! I think this is a “structure your time” problem, but also it’s not the most pressing thing on my agenda, you know?

So, here’s what I read, not in order, and some brief notes.

Greg Egan

Last year, I read five Greg Egan books. This year, just two. First, I read The Book of All Skies, which I enjoyed. It’s the story of a group of people investigating the frontiers of their very weirdly-shaped world. As with many Egan books, there’s a lot of very weird math and physics under the hood, but it wasn’t critical to think too hard about them, and I think that made the story more enjoyable for me. In this book, they would’ve gotten in the way. That said, when I finished the book I went and read a bunch of Egan’s notes on the underlying ideas, which were interesting (insofar as I understood them).

Later, I read Schild’s Ladder, which was roughly the opposite. That is, it was one of the most physics-heavy Egan books I’ve read. More than once, I wanted to take photos of the page because it was a wall of thick jargon. I did not enjoy the book. At the beginning, I said, “Oh, this is going to be Egan’s take on Cat’s Cradle!” That would’ve been very interesting, especially because Egan and Vonnegut are so, so different. Or: maybe it was that, but I didn’t care to think about the comparison by the end. It reminded me of Vinge, too, but not in a way that excited me. Anyway, look, I’ve read a lot of Egan, and I will read more. This just didn’t hit home.

Effectiveness

“Effectiveness” is my shelf (or label or tag or whatever they call it now) in Goodreads for books on productivity and management. I have a lot of books in that queue, but I only make slow progress, for many reasons.

My favorite of the ones I read this year, by a long way, was Radical Candor. This is one of those books that I’d read about many times. It sounded not bad, but not amazing. But, of course, I’d only been seeing the shadows on the wall. It was great, and I hope I will go back to it in the future to try to puzzle out more ways to do better at my job. It really resonated with me, and I’ve brought it up over and over when talking to other managers this year.

I also read Laziness Does Not Exist, which I didn’t love. It was okay. I feel the author would probably just give me a knowing “don’t you hear yourself??” look, but I kept wanting to say, “Yes, don’t work yourself sick, but honestly you are going too far.” I think the issue is that an indictment of a society-wide problem requires a massive-scale critique. But “the Laziness Lie has you in its grip!”, over and over, was too much for me. (It was also funny that I finished this book just today, December 31st, and it had text saying “Don’t get worked up trying to meet your Goodreads goals”!)

Finally, as I wanted to get a bit more handle on some of my team’s rituals, I read Liftoff: Start and Sustain Agile Teams. I found it totally unremarkable, so I have no remarks.

Boss Fight Books

Boss Fight Books publishes short books about influential or otherwise important video games. The books are written by people who found those games to be important to them.

The first one I read was Animal Crossing by Kelsey Lewin. I’ve played all the main Animal Crossing games and have enjoyed them all. (Well, no, the iOS one was awful.) This book, at a pleasing 2⁸ pages, talked about the origin of the game, its weird launch history starting with the Nintendo 64DD, how it changed over time, and how the author enjoyed it (or didn’t) over time. I enjoyed the book, and felt like I’d happily read more like it – but it was also clear that a lot of the book was about the author’s life, which wasn’t really what I wanted. So, it wasn’t a bad book, it just wasn’t exactly what I wanted.

I also read the PaRappa the Rapper and ZZT books, which were similarly a mix of “I am very interested!” and “I am not particularly interested”. I knew what I was getting into, though, so I had no complaints for the authors. I just sort of wish there were more books about these games, focused more exclusively on the history and technology behind them.

I was surprised by how few of my peers remembered ZZT. I remember it being both impressive and influential. I was also surprised to learn how programmable its world builder was, and that ZZT (the game)’s author was that Tim Sweeney. (The book’s author was Anna Anthropy, which was one of the reasons I wanted to read this book.)

Finally, I read the book on Spelunky. I almost didn’t, but then I saw that the author was Derek Yu, also the primary creator of Spelunky itself! This book was by far the closest to what I’d want from these books, if I was in charge. I got a copy for my nephews, too, who I introduced to the game a few years ago.

Stephen King

I read three Stephen King books this year, all story collections. I’ve been trying to catch up on reading all his story collections, and I’m very nearly done, now.

First, Four Past Midnight, from 1990. It contains four novellas, all of which I liked okay. I read it in part because I’d been doing some idle research into King’s repeated setting of Castle Rock, and saw that The Sun Dog (a story in this collection) was in some ways tied up with Needful Things.

After that, I read Hearts in Atlantis. This was a frustrating experience, because I kept thinking that maybe I’d read it already, twenty years ago, but I couldn’t be sure. This was extra frustrating because it seemed to me like one of King’s best books. Structurally and textually, it was excellent. I would recommend this to somebody who wasn’t sure they wanted to read Stephen King.

Finally, You Like It Darker. This is a collection of short stories published just last year. It was good! I enjoyed just about all of it, maybe most especially the final three stories. One of these was a sequel to Cujo, which I definitely did not expect to be reading!

Technical Books

This year, I’ve become the full-time lead of Fastmail’s Cyrus team. A big part of my team’s work is maintaining the open source Cyrus IMAP server. It’s written in C. My C is miserable, and was probably at its best in 1992. I need to get better. I read two C books this year: Effective C and Understanding and Using C Pointers. I think both were fine, but it’s hard to say. I’m not writing much C, day to day, so probably some of what I learned has already faded away. Still, I thought they were both clear and explained a bunch of topics that I hadn’t understood or only barely understood. Hard to judge, but definitely not bad. I can imagine going back to them later, when doing real work.

I also read tmux 3, a book about tmux. I like tmux quite a lot, and this isn’t the first little book I’ve read about it. It’s hard for me to say what I thought of it. I think it was a bit of a mish-mash for me. I was coming to it with a pretty long history with tmux, so lots of things were old hat and not worth my time. But as with many complex tools, even among the fundamentals there were lots of things I didn’t know. Here’s my biggest praise for the book: After I read it, I went back to a few sections I’d flagged and worked through my .tmux.conf, making improvements based on what the book taught me.

Slough House

Okay, so my biggest category of books was the Slough House series by Mick Herron. A full third of the books I read this year were these books.

Here are the titles:

  • Dead Lions
  • Real Tigers
  • Standing by the Wall
  • Spook Street
  • Nobody Walks
  • London Rules
  • Joe Country
  • Slough House
  • Bad Actors
  • The Secret Hours
  • Reconstruction
  • Clown Town

Look, they’re all very good. That’s why I read them! The only notable exception, I think, is Reconstruction. It’s fine. It’s just the least Slough House-y book, really tied in only by one character, and structured very differently from the rest. I’d almost recommend skipping it. It was a bit of a bummer that it was the last one I read for months. The last one I read, Clown Town, was only released this year, and I read it roughly immediately. (Thanks, Gloria, for giving me a surprise copy!)

Other Fiction

I read Thorns by Robert Silverberg, which was a 1967 nominee for the Nebula and Hugo. I can’t remember why I read it. I think it got onto my reminders list ages ago, and then it was on deep discount. I would’ve done better to just not read it. In 1967, it may have been interesting, but it didn’t age well.

I read How Long ‘til Black Future Month? by N.K. Jemisin, whose massively successful Broken Earth series I enjoyed a few years ago. This is a short story collection, and I’m a sucker for a good short story collection. And this was good. I’m told that LeVar Burton read two of these stories on his podcast LeVar Burton Reads, and I look forward to listening to them.

A few years ago, I finally read A Fire Upon the Deep, by Vinge. It was excellent, with a sprawling scope, a complex and coherent setting, and a whole mess of interesting ideas that all slotted together. Mark Dominus told me that the sequel, A Deepness in the Sky, was even better, but “very lonesome”. I agree! Vinge’s ability to spin so many plates, each one interesting on its own, and then to land them all into one neat pile was impressive and satisfying.

I read Ship Breaker and its sequel, The Drowned Cities, by Paolo Bacigalupi. They were okay, but I didn’t bother with the third book. Bacigalupi’s sci-fi work for adults is very good, and I’ve re-read a bunch of it. (I don’t think I re-read Pump 6 in its entirety this year, but I re-read a bunch of it.) The Ship Breaker books are young adult fiction, and all I could see on the page was all the depth and nuance missing compared to his other work. It probably would’ve been better when I was twelve. Given that it’s a book for that audience, who am I to complain?

I read Dungeon Crawler Carl because Bryan was reading it and said it sounded fun. It was fun, but I think too long for me. Everything about it was just a bit much. That could’ve been fun for two short books or so, but it was the first book in a seven book series, with books topping six hundred pages. I tapped out, and will probably read a summary some day.

Finally, I read Virtual Unrealities, a sci-fi story collection by the great Alfred Bester. I think I picked it up because I wanted to read Fondly Fahrenheit, which was good. I read it in the first week of January, so it’s been a while and I don’t remember it terribly well. My recollection was that I thought it was okay, but on the whole not anywhere near as good as The Demolished Man or The Stars My Destination. That’s the problem with writing massive, incredible successes, I guess!

Other Nonfiction

The Society of the Spectacle is the longest 150 page book I’ve ever read. According to Goodreads, I spent almost nine years reading it. It’s a lot, but it’s very good, and I think I will re-read parts of it again, probably several times. It’s one of the key texts of Situationism, a movement in post-WWII European socialism. The book is made up of 221 numbered paragraphs, which construct and explain the concept of “the spectacle”, a 20th-century (and, I’d say, 21st-century) conception of the problems of capitalism and, to an extent, imagined solutions. It’s short, but each paragraph deserves a good long think. You can’t just sit down and read the book in an afternoon the way you could a 150 page book about Animal Crossing.

For a long time, I have wanted to read more detailed writing on the Noble Eightfold Path, so I finally did. I read The Noble Eightfold Path: Way to the End of Suffering by Bhikkhu Bodhi. I’m glad I did, but it’s not easy to recommend generally. First, you need to be interested in Buddhism in general. Then, you need to have read enough about it (I think) that you want to read what is almost a technical manual about some of the core tenets. It’s a bit like reading a catechism, in which serious religious, metaphysical, and practical questions are answered in great and careful detail for the dedicated lay reader. I wish it had been a bit more focused on description and less on instruction. That is: I wanted to read analysis of, and the relationships between, the eight practices, rather than a book intended to convince me of their importance. Still, this got close and I’m glad I read it.

What’s next?

I have no idea! Well, not much of an idea. Goodreads reminds me that I’m currently reading books about Eiffel, Rust, and WebAssembly. I received a few books for Christmas, and of course I already have a huge backlog of owned and unread books. There are just a few Egan novels I haven’t read yet. Lots of books remain on my “effectiveness” shelf. We’ll see where the year takes me.

One thing is seeming increasingly likely, though. I’ve read Gene Wolfe’s Book of the New Sun three (I think) times, now. These books get better as you re-read them and try to work out the many mysteries within them. Last time I read them, I thought, “When I read these again, it will be with a notebook for taking notes.” I think this is the year. I might also finally listen to ReReading Wolfe, an epic-length podcast that goes through the books chapter-by-chapter, just for people who are re-reading the books, so spoilers a-plenty. I’ve been thinking about trying to find old hardback copies of the books to mark up, but it seems like most of them are quite expensive!

At any rate, maybe in a year I’ll write another blog post like this one. If I do, I hope it will be able to mention at least 36 books I’ve read in 2026.