RMG improvements arising from 5.43.9 release notes

* Make bold advice to ignore pedantic POD check
* Rework the "Update perldelta" section
* Change perlhist.pod commit message to include version number

Co-authored-by: Philippe Bruhat (BooK) <book@cpan.org>
Co-authored-by: Thibault Duponchelle <thibault.duponchelle@gmail.com>

Manage the health of your CLI tools at scale

dev.to #perl

Your services have dashboards, tracing, and alerting. Your CLI tools print to STDOUT and exit. When something breaks, debugging starts at the API gateway -- everything upstream is a black box. This makes no sense.

If your CLI talks to an API, it's part of the request path. Instrument it like any other participant.

This post describes how we instrumented an internal Perl CLI -- the same mycli tool from our earlier post on fatpacking -- with syslog logging, StatsD metrics, and correlation IDs. The post is strongly biased towards tooling internal to an organisation, which has the luxury of being opinionated: you control the deployment targets, you know where syslog goes, and you can lean on solved infrastructure rather than building your own. The principles generalise to any language and any CLI that talks to an API.

Why observability matters in CLI tools

Web services get dashboards as a matter of course[1]. Error rates, latency percentiles, request counts -- these are table stakes for any production service. CLI tools rarely get the same treatment, even when they're used just as heavily.

Once your CLI emits metrics, you can build per-tool dashboards that show error rates broken down by command, by user cohort, by API version, by CLI version, by deployment target. This is the same dimensional analysis you'd do for a web service, applied to a tool that runs on someone's laptop.

This integrates naturally with operational practices you're probably already using:

  • Continuous deployment. When you ship a new CLI version, the dashboard shows whether error rates changed. If command.device_list.errors spikes after a release, you know immediately -- not when someone files a ticket three days later.
  • Rollback decisions. If error rates climb after a release, the dashboard tells you in minutes -- roll back now, debug later. Without metrics, you're guessing whether the new version is the cause or a coincidence.
  • Canary deployments. Roll the new version to 10% of jumpboxes. Compare http.timing and http.errors between the canary and the stable cohort. The same deployment strategy that works for services works for CLI tools, but only if you have the metrics to compare.
  • Feature flags. If a new feature is gated behind a flag, metrics tell you whether the flagged code path is slower, more error-prone, or unused. Without instrumentation, feature flag decisions are based on "nobody complained".
  • Incident management. During a site event, the CLI dashboard shows whether the tool is contributing to or affected by the problem. A spike in http.status.503 from the CLI tells the incident commander that the API is rejecting requests before users report it. Conversely, if the CLI error rate is flat during an incident, you can rule it out as a contributing factor.
  • Adoption and deprecation. Metrics answer "is anyone still using the v1 endpoint?" and "has the team migrated to the new auth flow?" without surveys or guesswork.

The point is not that CLI tools are special -- it's that they're not. They're participants in the same distributed system as your services, and they deserve the same observability treatment. The investment is small: a correlation ID, a handful of counters, and a logging lifecycle. The return is that your CLI becomes a first-class citizen in your operational tooling rather than a blind spot.

[1] Yours does, right?

The three layers

We instrument at three levels, each serving a different audience and persistence model:

 Layer           Audience              Persistence
 -----           --------              -----------
 Verbose mode    Developer at terminal Ephemeral (STDERR)
 Syslog          Ops / incident review Durable (centralised logs)
 StatsD          Dashboards / alerting Aggregated (time-series)

A developer debugging their own command uses --verbose. An on-call engineer investigating a reported issue searches syslog by invocation ID. A platform team monitors command usage and error rates on dashboards. Same underlying data, different consumers, different retention.

Each layer is controlled independently and opt-in:

# Syslog only
MYCLI_LOG=1 mycli device list

# Verbose only (no syslog, no metrics)
mycli device list --verbose

# Everything
MYCLI_LOG=1 mycli device list --verbose

StatsD metrics are emitted whenever a statsd_host is configured, and are silent no-ops otherwise. Syslog requires MYCLI_LOG=1 -- deliberately opt-in, since CLI tools run on personal machines and writing to syslog on every invocation without consent would be surprising.
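One way to get the "no-ops otherwise" behaviour is a null-object metrics client, so call sites never have to check configuration. A hypothetical sketch -- the package names and for_config constructor are invented for illustration:

```perl
# Null object: every metric call is a silent no-op.
package MyCLI::Metrics::Null;
sub new       { bless {}, shift }
sub increment { }
sub timing    { }
sub gauge     { }

package MyCLI::Metrics;
sub new { my ($class, %args) = @_; bless {%args}, $class }
# increment/timing/gauge here would format "name:value|type" datagrams
# and send them over UDP to the StatsD host -- omitted in this sketch.

sub for_config {
    my ($class, $config) = @_;
    return MyCLI::Metrics::Null->new
        unless $config->{statsd_host};    # nothing configured: no-op object
    return $class->new(host => $config->{statsd_host});
}

package main;

# With no statsd_host configured, call sites stay unconditional:
my $metrics = MyCLI::Metrics->for_config({});
$metrics->increment('command.device_list.calls');    # silently does nothing
```

The same pattern works for the logger: construct one object up front and let its type, not an if-check at every call site, decide whether anything happens.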

The verbose layer itself has depth. --verbose shows the shape of the HTTP conversation -- method, URL, status, timing -- but deliberately omits headers and bodies to keep the output scannable. When that isn't enough, plugging in LWP::ConsoleLogger::Everywhere via perl -M gives a full HTTP trace without the CLI needing to build one. More on this in the debugging spectrum section below.

Invocation ID: the correlation key

Every mycli invocation generates a random 8-character hex ID at startup:

my @chars = ('0' .. '9', 'a' .. 'f');
my $id = join '', map { $chars[ int(rand @chars) ] } 1 .. 8;

This ID appears in three places:

  1. Every syslog message -- prefixed as [f7a3b1c2]
  2. Every HTTP request -- sent as the X-Invocation-Id header
  3. Verbose STDERR output -- printed at startup

The server-side API logs this header alongside its own request ID. To trace a failing command end-to-end:

# Find the CLI side
grep 'f7a3b1c2' /var/log/mycli.log

# Find the server side
grep 'f7a3b1c2' /var/log/api.log

One string, full picture. No timestamps to correlate, no guessing which request came from which terminal.

User-Agent

In addition to the invocation ID, set the User-Agent header to mycli/<version>. This is trivial and gives the server side a way to filter by CLI version without any custom header support -- useful for canary deployment analysis and for spotting users running outdated versions.
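With LWP::UserAgent (which the level-3 tracing below already assumes), both headers can be installed once at client construction, so no request can forget them. A sketch -- the version string is illustrative:

```perl
use LWP::UserAgent;

my $VERSION = '1.4.2';    # illustrative version string

# Generate the 8-character invocation ID once per process.
my @chars = ('0' .. '9', 'a' .. 'f');
my $invocation_id = join '', map { $chars[ int(rand @chars) ] } 1 .. 8;

my $ua = LWP::UserAgent->new(
    agent => "mycli/$VERSION",    # User-Agent: mycli/1.4.2
);

# Every request sent through this agent carries the correlation header.
$ua->default_header('X-Invocation-Id' => $invocation_id);
```

Setting both on the agent rather than per-request means new commands inherit correlation for free, in the same spirit as the execution wrapper described later.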

Two-way correlation

The API returns its own request ID in a response header (X-Request-Id). The CLI logs this too:

[f7a3b1c2] http: 200 OK (142ms, application/json, 8431 bytes) req=a1b2c3d4

This gives you a join key in both directions: from the CLI's invocation ID you can find the server's request ID, and vice versa. When a user reports "mycli gave me an error", the request ID in the error message leads straight to the server-side trace.

What the server needs to do

The correlation only works if the server participates. The requirements are minimal:

  1. Log the X-Invocation-Id header from incoming requests. Most API frameworks can do this with a single middleware or access log configuration change.
  2. Return a request ID in every response (e.g., X-Request-Id). Many frameworks generate this by default.
  3. Propagate both IDs into the server's own tracing and logging. If the API uses structured logging or distributed tracing, attach the invocation ID as a field or span attribute so it appears in the same search results.
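All three requirements fit in one small middleware. A hypothetical sketch assuming the server is a Perl/PSGI application using Plack -- the package name and log-line format are invented for illustration:

```perl
package MyAPI::Middleware::Correlate;
use parent 'Plack::Middleware';
use Plack::Util ();

sub call {
    my ($self, $env) = @_;

    # Generate this request's own ID.
    my @chars = ('0' .. '9', 'a' .. 'f');
    my $request_id    = join '', map { $chars[ int(rand @chars) ] } 1 .. 8;
    my $invocation_id = $env->{HTTP_X_INVOCATION_ID} // '-';

    # Both IDs on one access-log line gives the end-to-end join.
    $env->{'psgi.errors'}->print(
        "req=$request_id inv=$invocation_id "
      . "$env->{REQUEST_METHOD} $env->{PATH_INFO}\n"
    );

    my $res = $self->app->($env);
    return Plack::Util::response_cb($res, sub {
        my ($res) = @_;
        # Requirement 2: hand the request ID back to the client.
        Plack::Util::header_set($res->[1], 'X-Request-Id' => $request_id);
        return $res;
    });
}
```

Equivalent middleware exists for essentially every web framework; the point is only that the server-side cost is a few lines, not a project.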

If the server doesn't log the invocation ID, the CLI-side correlation still works (you can grep your CLI logs by invocation ID), but you lose the end-to-end join. If the server doesn't return a request ID, the CLI can still log its own invocation ID, but the user can't hand a request ID to the API team and say "look this up".

The ideal state is both: the CLI sends its ID, the server sends its ID, and both sides log both. This is a two-line change on the server and it makes every future debugging session faster.

Structured syslog

Every invocation logs a structured lifecycle to syslog:

Startup

[f7a3b1c2] startup: cli: mycli device list --status Active
[f7a3b1c2] startup: perl: 5.36.0 on linux
[f7a3b1c2] startup: env: API_KEY=ab12****, SERVER_URL=https://api.internal
[f7a3b1c2] config: key source: file (~/.config/mycli/api-key)
[f7a3b1c2] config: format: table, fields: all, tty

The API key is masked -- first four characters, then ****. Enough to identify which key is in use without leaking it to logs.
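The masking rule itself can be a tiny helper. A sketch -- the name mask_secret is ours, not a library API, and the four-character prefix is the choice described above:

```perl
# Show the first four characters of a secret, mask the rest.
# Secrets too short to safely reveal a prefix are masked entirely.
sub mask_secret {
    my ($secret) = @_;
    return '****' unless defined $secret && length $secret > 4;
    return substr($secret, 0, 4) . '****';
}

# mask_secret('ab12cd34ef56')  ->  'ab12****'
```

Run every environment variable and config value through this before it reaches a log line, rather than masking ad hoc at each call site.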

HTTP requests

[f7a3b1c2] http: GET https://api.internal/v1/devices
[f7a3b1c2] http: 200 OK (142ms, application/json, 8431 bytes) req=a1b2c3d4

Every request/response pair is logged with method, URL, status, elapsed time, content type, response size, and the server's request ID.

Shutdown

[f7a3b1c2] device_list: done (387ms, 24 results, 2 requests, cache 3/1)

One line summarising the entire command: wall-clock time, result count, number of HTTP requests made, and resolve cache statistics (3 items cached across 1 resource type).

Always format, conditionally emit

A subtle design choice: the logger always formats every message, even when logging is disabled. Only the syslog() call is conditional:

sub _emit {
    my ($self, $priority, $context, $detail) = @_;
    my $msg = sprintf '[%s] %s: %s', $self->{_id}, $context, $detail;
    syslog($priority, '%s', $msg) if $self->{_enabled};
    return $msg;
}

This means formatting bugs surface during normal development, not only when someone enables logging in production. The cost is negligible -- sprintf is fast.

A note on philosophy: when syslog is enabled, all levels are transmitted -- info, debug, error. There is no runtime knob to suppress debug messages. The belief behind this is that logging should always be on in production, not enabled after a problem is suspected. The time you most need debug-level detail is exactly the time you can't reproduce the issue. You can never have too much log detail, with the obvious exception of user or employee personal data, which should never be logged at any level.

What not to log

The API key masking (ab12****) is one example of a broader principle: log enough to identify, not enough to exploit.

  • Credentials and secrets -- mask API keys, tokens, and passwords. Show enough characters to distinguish between keys (we show four), then mask the rest. Apply the same caution to environment variables and URL query parameters that may carry tokens.
  • Request and response bodies -- don't log them. They may contain customer data, PII, or sensitive business logic. Log metadata (status, timing, size) but never content. Body inspection is what LWP::ConsoleLogger is for -- interactive, ephemeral, on-demand.

StatsD metrics

Every command emits a standard set of metrics to StatsD:

Per-command metrics

 Metric                       Type     Description
 ------                       ----     -----------
 mycli.command.<cmd>.calls    counter  Command invocations
 mycli.command.<cmd>.timing   timing   Wall-clock duration (ms)
 mycli.command.<cmd>.results  gauge    Items returned
 mycli.command.<cmd>.errors   counter  Unhandled exceptions

The command name is derived from the class hierarchy: MyCLI::App::Command::device::list becomes device_list.
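The derivation is a short method on the base command -- a sketch of _metric_name (the name the execution wrapper below uses), assuming the MyCLI::App::Command:: prefix is fixed:

```perl
# Derive the metric segment from the command class name:
# MyCLI::App::Command::device::list -> device_list
sub _metric_name {
    my ($self) = @_;
    my $class = ref($self) || $self;          # works on object or class name
    (my $name = $class) =~ s/^MyCLI::App::Command:://;
    $name =~ s/::/_/g;
    return lc $name;
}
```

Because the name comes from a fixed class hierarchy rather than user input, metric cardinality stays bounded -- the point made under naming conventions below.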

Per-HTTP metrics

 Metric                    Type     Description
 ------                    ----     -----------
 mycli.http.calls          counter  Total HTTP requests
 mycli.http.timing         timing   Per-request duration (ms)
 mycli.http.errors         counter  Non-2xx responses
 mycli.http.status.<code>  counter  Per-status-code breakdown

Operational metrics

 Metric                       Type     Description
 ------                       ----     -----------
 mycli.auth.key_source.<src>  counter  Where the API key came from
 mycli.auth.url_source.<src>  counter  Where the server URL came from
 mycli.config.file.found      counter  Config file was loaded
 mycli.config.file.none       counter  No config file found
 mycli.output.format.<name>   counter  Output format selection

What this tells you

The metrics answer questions that logs can't:

  • What commands are people actually using? -- sort command.*.calls by count. If nobody uses crossconnect list, don't spend time improving it.
  • Is the API getting slower? -- http.timing percentiles over time. The CLI is seeing the same latency as your users, including TLS negotiation and DNS.
  • Are auth errors increasing? -- http.status.401 spike means keys are being rotated or revoked.
  • How are people authenticating? -- auth.key_source.env vs auth.key_source.file tells you whether your team has adopted the recommended credential flow.
  • What output formats matter? -- if 90% of usage is output.format.json, your table renderer is mostly aesthetic.

Metric naming conventions

Prefix every metric with the tool name (mycli.*) to avoid collisions in a shared StatsD instance. Use a consistent dot-separated hierarchy (mycli.command.<cmd>.calls) rather than flat names -- this makes metrics discoverable by browsing the tree. Watch cardinality: derive command names from a fixed set (like the class hierarchy) rather than user input, and keep dynamic segments like http.status.<code> to naturally bounded sets.

Verbose mode and the debugging spectrum

The three layers above cover durable observability -- data that outlives the terminal session. But the most common debugging scenario is someone at a keyboard wondering why their command isn't working. For this, the CLI has three levels of HTTP visibility:

Level 1: Silent (default)

No HTTP output. The user sees formatted results only. Syslog and metrics still capture everything in the background.

Level 2: --verbose

--> GET https://api.internal/v1/devices?status=Active
<-- 200 OK (142ms, application/json, 8431 bytes)

Printed to STDERR so it doesn't interfere with STDOUT piping. Shows method, URL, status, timing, and size. This is enough for "is my request hitting the right endpoint?" and "why is this slow?".

The design choice here is restraint. Verbose mode shows the shape of the conversation -- what was asked, what came back, how long it took. It deliberately omits headers and bodies. This keeps the output scannable when a command makes multiple requests.

Level 3: LWP::ConsoleLogger::Everywhere

When --verbose isn't enough -- when you need to see request headers, response headers, and full bodies -- plug in LWP::ConsoleLogger::Everywhere:

# From source
perl -MLWP::ConsoleLogger::Everywhere -Ilib bin/mycli device get 42

# Fatpacked binary (with API key redaction)
LWPCL_REDACT_HEADERS=Authorization \
  PERL5OPT="-MLWP::ConsoleLogger::Everywhere" \
  ./mycli-packed device get 42

This is a full HTTP trace: every header, every byte of the request and response body, formatted and syntax-highlighted. It's invaluable for debugging serialisation issues, unexpected headers, or auth failures.

The reason we don't build this into --verbose is that it's a different tool for a different job. Verbose mode is for operators; full HTTP tracing is for developers debugging the CLI itself. The -M flag means the capability is always available without cluttering the option namespace or adding a dependency that most users will never need.

Error reporting and surfacing correlation IDs

When the API returns an error, the CLI needs to show the user enough information to report the problem without overwhelming them with internals. Our error output includes the server's request ID:

Error: 403 Forbidden
  The API key does not have permission to access this resource.
  Request ID: a1b2c3d4

The request ID is the bridge between the user and the operations team. "It gave me a 403, request ID a1b2c3d4" is a complete bug report. The on-call engineer greps the server logs for a1b2c3d4, finds the full request context (authenticated user, requested resource, policy that denied access), and resolves the issue -- without asking the user to reproduce it, enable verbose mode, or paste terminal output.

The invocation ID doesn't appear in normal error output -- it's an internal correlation key for log analysis, not a user-facing artifact. If syslog is enabled, the invocation ID is already in the logs alongside the request ID, providing the join in both directions.

The execution wrapper

All of this comes together in the base command's execute() method, which wraps every leaf command:

sub execute {
    my ($self, $opt, $args) = @_;
    my $cmd     = $self->_metric_name;
    my $start   = Time::HiRes::time();

    $self->logger->info($cmd, 'start');
    $self->metrics->increment("command.$cmd.calls");

    eval { $self->_execute($opt, $args) };

    my $elapsed_ms   = int((Time::HiRes::time() - $start) * $MS_PER_SEC);
    my $requests     = $self->client->request_count;
    my $result_count = $self->{_result_count};

    $self->metrics->timing("command.$cmd.timing", $elapsed_ms);
    $self->metrics->gauge("command.$cmd.results", $result_count)
        if defined $result_count;

    if (my $err = $@) {
        $self->metrics->increment("command.$cmd.errors");
        $self->logger->error($cmd, $err);
        die $err;
    }

    $self->logger->info($cmd, sprintf 'done (%dms, %s results, %d requests)',
        $elapsed_ms, $result_count // 'n/a', $requests);
}

Leaf commands implement _execute() and don't think about observability at all. They call $self->client->get(...), render results, and return. The wrapper handles timing, logging, metrics, and error reporting. This is the single place where the observability contract is enforced -- no leaf command can accidentally skip it.

Design principles

A few principles that guided these choices:

  1. Zero cost when off. Logging and metrics are lazy-initialised. If you never enable syslog or configure StatsD, the modules aren't even loaded.

  2. Instrument the framework, not the features. Leaf commands don't contain observability code. The base command wrapper and HTTP client handle everything. New commands get full instrumentation for free.

  3. Correlate by default. The invocation ID requires no opt-in. Every request carries it. The server just has to log it.

  4. Separate concerns by audience. Verbose mode is for the person at the terminal. Syslog is for the person investigating after the fact. Metrics are for the person watching trends. Don't conflate them.

  5. Don't build what you can plug in. Full HTTP tracing via LWP::ConsoleLogger is better than anything we'd build ourselves. Keep verbose mode lean and let the specialist tool handle the rest.

Testing observability

Instrumentation code is easy to write and easy to break silently. If nobody notices that the invocation ID stopped appearing in syslog, it might be months before an incident reveals the gap. A few testing strategies:

  • Unit test the logger's formatting. The _emit method returns the formatted message even when syslog is disabled. Assert that the invocation ID, context, and detail appear in the expected format.
  • Unit test metric emissions. Mock the StatsD client and assert that command.<cmd>.calls is incremented, command.<cmd>.timing receives a value, and command.<cmd>.errors fires on exception. These are contract tests -- they verify that the execution wrapper keeps its promises.
  • Assert the invocation ID propagates. Mock the HTTP client and verify that outgoing requests carry the X-Invocation-Id header with the same value the logger is using.
  • Integration test the full lifecycle. Run a command against a mocked API, capture STDERR with --verbose, and assert the --> / <-- lines appear with the expected method, URL, and status.
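The first of these is cheap to write. A sketch using Test::More, with a minimal inline stand-in for the logger -- the real suite would load the actual module instead of the stub package defined here:

```perl
use Test::More tests => 3;

# Minimal stand-in for the logger described above (name assumed);
# _emit formats unconditionally and only transmits when enabled.
package MyCLI::Logger;
sub new { my ($class, %a) = @_; bless { _id => $a{id}, _enabled => $a{enabled} }, $class }
sub _emit {
    my ($self, $priority, $context, $detail) = @_;
    my $msg = sprintf '[%s] %s: %s', $self->{_id}, $context, $detail;
    # syslog($priority, '%s', $msg) if $self->{_enabled};  # omitted in stub
    return $msg;
}

package main;

my $logger = MyCLI::Logger->new(id => 'f7a3b1c2', enabled => 0);
my $msg    = $logger->_emit('info', 'http', 'GET https://api.internal/v1/devices');

like $msg, qr/^\[f7a3b1c2\] /, 'invocation ID prefix present';
like $msg, qr/\bhttp: /,       'context appears';
like $msg, qr/GET https/,      'detail appears';
```

Note that the assertions run with logging disabled -- exactly the "always format, conditionally emit" guarantee the test exists to protect.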

The "always format, conditionally emit" pattern helps here: the logger exercises all formatting code paths in every test run, even when syslog isn't available in the test environment.

Tracing an incident: a walkthrough

Here's how the instrumentation plays out during a real debugging scenario. This walkthrough exercises every layer described above: error output with a request ID, the metrics dashboard, syslog correlation, and two-way ID join.

A user reports: "mycli device list is failing intermittently." They include the error message:

Error: 503 Service Unavailable
  The API is temporarily unable to handle the request.
  Request ID: e4f5a6b7

Step 1: Find the server side. The on-call engineer greps the API logs for e4f5a6b7 and finds the request hit a backend that was in the middle of a deployment. The 503 was a transient error from a rolling restart.

Step 2: Assess the blast radius. But is it just this one user? The engineer checks the CLI dashboard: mycli.http.status.503 shows a spike over the last 20 minutes, coinciding with the deployment window. It's not one user -- it's everyone hitting that backend.

Step 3: Find the CLI side. The server log for e4f5a6b7 also contains the X-Invocation-Id: c8d9e0f1. Grepping the centralised CLI logs for c8d9e0f1 shows the full client-side context: which command was run, which user ran it, what arguments were passed, and that the request took 12 seconds before returning 503 (suggesting the backend was hanging, not failing fast).

Step 4: Verify the fix. After the deployment completes, the 503 counter drops to zero. The engineer confirms on the dashboard that error rates are back to baseline across all commands.

Total debugging time: minutes. Without instrumentation, this would have been a ticket saying "it's broken sometimes" followed by back-and-forth to reproduce, enable verbose mode, and collect output.

Summary

 +-------------------+    X-Invocation-Id    +-------------------+
 | mycli             |-----------------------| API               |
 |                   |    X-Request-Id       |                   |
 | - syslog [id]     |<----------------------| - access log [id] |
 | - StatsD metrics  |                       | - request trace   |
 | - verbose STDERR  |                       |                   |
 +-------------------+                       +-------------------+
         |                                           |
         v                                           v
 +-------------------+                       +-------------------+
 | Centralised logs  |<--- grep by ID ------>| Centralised logs  |
 | Metrics dashboard |                       | APM / tracing     |
 +-------------------+                       +-------------------+

Key takeaways:

  1. Your CLI is part of the distributed system. If it talks to an API, it's a participant in the request path -- treat it like a service, not a script.
  2. A correlation ID is the single most valuable thing you can add. One random string, sent as an HTTP header, ties client logs to server logs. Everything else builds on this.
  3. Separate layers by audience. Verbose mode for the developer at the terminal, structured logs for the on-call engineer after the fact, metrics for dashboards and alerting. Same data, different consumers, different lifetimes.
  4. Instrument the framework, not the features. A single execution wrapper gives every command logging, metrics, and error reporting for free. Leaf commands shouldn't contain observability code.
  5. The server needs to participate. Log the client's invocation ID, return your own request ID. Without this, correlation is one-sided.
  6. Log everything except secrets and personal data. Mask credentials, never log request bodies, and keep logging always on -- the time you need debug detail is the time you can't reproduce the issue.
  7. Start simple, keep the door open. Wrap your logging backend so the rest of the codebase never touches it directly. Start with whatever works for your deployment targets today -- Sys::Syslog, Fluent::Logger, a file. When your infrastructure is ready for OpenTelemetry or wide events (see Appendix A), the swap is localised.

The investment is small: a correlation ID, a handful of counters, and a logging lifecycle. The return is that your CLI becomes a first-class citizen in your operational tooling rather than a blind spot.

Getting started

If you want to add observability to an existing CLI tool, here's a practical order of operations. Each step is independently useful -- you don't need to do all five before any of them pay off.

  1. Generate a random invocation ID at startup. Eight hex characters is enough. Send it as an X-Invocation-Id header on every HTTP request. This single change makes every future debugging session easier.
  2. Set User-Agent to <tool>/<version>. Trivial, and it lets the server side filter by CLI version without any custom header support.
  3. Log three lifecycle events. Startup (command line, environment, config source), each HTTP request/response (method, URL, status, timing), and shutdown (duration, result count). Even logging to STDERR behind a --debug flag is better than nothing.
  4. Emit one counter per command invocation. If you have StatsD or a metrics collector, mycli.command.<cmd>.calls is the single most useful metric -- it tells you what people are actually using. If you don't have a metrics pipeline, a cheap alternative is to emit key=value pairs in your log lines (e.g. command=device_list duration_ms=387 status=ok) -- most log aggregation tools, including Grafana itself, can extract fields from these lines and build charts and dashboards without a separate metrics stack.
  5. Wrap your command entry point. Move timing, logging, and metric emission into a single wrapper around leaf command execution. New commands get instrumentation for free, and no leaf command can accidentally skip it.
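Step 4 needs no client library at all: the StatsD wire format is a single UDP datagram of the form <name>:<value>|<type>. A sketch using only core Perl -- the collector hostname is illustrative:

```perl
use IO::Socket::INET;

# Format a StatsD datagram: "<name>:<value>|<type>".
sub statsd_datagram {
    my ($name, $value, $type) = @_;
    return "$name:$value|$type";
}

# Fire-and-forget over UDP. A missing collector just means the datagram
# is dropped, so no error handling is needed on the hot path.
my $sock = IO::Socket::INET->new(
    PeerAddr => 'statsd.internal',    # assumed collector host
    PeerPort => 8125,
    Proto    => 'udp',
);
$sock->send( statsd_datagram('mycli.command.device_list.calls', 1, 'c') )
    if $sock;
```

This is the whole reason StatsD suits CLI tools: one UDP packet, no connection setup, no blocking, and zero impact on the command when the collector is absent.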

Appendix A: Wide events

Our implementation uses separate syslog lines for each lifecycle phase (startup, HTTP, shutdown) and separate StatsD counters for aggregation. This works, but it means correlating data across multiple log lines at query time -- you need the invocation ID to join them together.

An increasingly popular alternative is the wide event (or what Stripe called a canonical log line in 2019): a single, information-dense structured record emitted once per unit of work, containing every attribute you collected along the way. Instead of five syslog lines and ten StatsD counters, you emit one event with fields like command=device_list duration_ms=387 results=24 http_requests=2 http_status=200 auth_source=file output_format=table cache_hits=3.

The advantages are significant:

  • Faster queries -- all the data is colocated in one record. No joins, no correlation by ID.
  • Ad hoc analysis -- during an incident you can group by any combination of fields without having pre-defined a metric for it.
  • Simpler pipeline -- one event replaces multiple log lines and multiple metric emissions. Less code, fewer failure modes.

We didn't take this approach because our logging infrastructure is syslog-based and doesn't support high-cardinality structured queries. If you have access to a columnar store (Honeycomb, ClickHouse, a data warehouse), wide events are the stronger choice. The execution wrapper already collects all the data in one place -- the change would be emitting it as a single structured record instead of spreading it across syslog and StatsD.
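Concretely, the change in our wrapper would be collapsing its collected fields into one record. A sketch -- the helper name is invented, and the field names follow the example above:

```perl
# Emit the wrapper's collected data as a single wide event line instead of
# several syslog lines and StatsD counters. Keys are sorted so the line is
# stable and diff-friendly.
sub wide_event_line {
    my (%fields) = @_;
    return join ' ', map { "$_=$fields{$_}" } sort keys %fields;
}

my $line = wide_event_line(
    command       => 'device_list',
    duration_ms   => 387,
    results       => 24,
    http_requests => 2,
    http_status   => 200,
    auth_source   => 'file',
    output_format => 'table',
    cache_hits    => 3,
);
# One durable record per invocation; a columnar store can then group
# by any combination of fields at query time.
```

In a real pipeline you would emit JSON rather than key=value pairs, but the shape of the change is the same: one record, all fields colocated.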

For more on wide events, see A Practitioner's Guide to Wide Events and All You Need Is Wide Events, Not Metrics.

Appendix B: Why Sys::Syslog and not a logging framework?

Perl has several mature logging frameworks -- Log::Any, Log::Dispatch, Log::Log4perl -- any of which would be a fine choice here. We went with Sys::Syslog directly. This is an opinionated trade-off worth explaining.

What Sys::Syslog gives you

Syslog is a solved problem on servers and jumpboxes. The local syslog daemon (rsyslog, syslog-ng, journald) handles buffering, rotation, compression, and forwarding to a central log aggregator. The CLI doesn't need to know where the logs go, how to authenticate to a remote endpoint, or what to do when the network is down. It calls syslog(), the daemon takes it from there. This is a clean separation of concerns: the application produces structured messages, the infrastructure handles transmission.

There are no extra dependencies beyond core Perl. No configuration files, no adapter registration, no output plugin selection. The logger module is ~50 lines. For a fatpacked binary where every dependency has a cost, this matters.

What a framework would give you

A framework like Log::Any or Log::Dispatch provides output abstraction: you write $log->info(...) and configure the destination at deployment time -- syslog, a file, STDERR, a network endpoint, or multiple at once. The application code doesn't change when the destination does. This is a genuine advantage when the tool runs in environments with different logging infrastructure, or when libraries you depend on already use Log::Any.

Where the trade-off bites

The opinionated choice of Sys::Syslog works well when every target machine runs a syslog daemon. It falls apart on developer laptops and desktops.

macOS ships with a syslog-compatible interface via Apple System Log, but the log viewer has moved to Console.app and the unified logging system. Messages from syslog() end up in a different place than most macOS users expect, and the retention policy may discard them quickly. On Windows, there is no syslog daemon at all.

You have two choices here:

Accept the gap. Detect the platform at startup and disable syslog on macOS and Windows. The CLI still has --verbose for interactive debugging, and StatsD metrics still flow if a collector is configured. You lose durable logging on developer machines, but you avoid adding complexity to the CLI itself. This is the approach we took -- the primary deployment targets are Linux servers and jumpboxes where syslog is reliable.

Solve logging everywhere. Use a framework like Log::Dispatch with pluggable outputs: syslog on Linux, a file on macOS, a network endpoint everywhere. This means the CLI now owns the full logging pipeline: transport selection, buffering when the destination is unavailable, possibly TLS for log data in transit, possibly client-side authentication to a log aggregator. Each of these is individually tractable, but collectively they add configuration surface, failure modes, and dependencies that the syslog approach avoids entirely.

There is a middle ground: in an organisation with tight control of staff laptops and desktops (as is increasingly common), solving the logging problem in the CLI itself, or running a local logging daemon, is entirely feasible.

Another opinionated choice: Fluent::Logger

If your infrastructure runs Fluentd or Fluent Bit, Fluent::Logger is worth considering as an equally opinionated alternative to Sys::Syslog. It sends structured events directly to a Fluent collector over a local socket or TCP, which then handles routing, buffering, and delivery to whatever backend you use (Elasticsearch, S3, a data warehouse). Like Sys::Syslog, it delegates transport to purpose-built infrastructure. Unlike syslog, the events are natively structured -- key-value pairs rather than format strings -- which makes the path to wide events shorter.
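For comparison with the syslog examples earlier, a minimal Fluent::Logger sketch -- the host, port, and tag are illustrative, and this assumes a Fluentd or Fluent Bit forwarder listening locally:

```perl
use Fluent::Logger;

my $fluent = Fluent::Logger->new(
    host => '127.0.0.1',
    port => 24224,        # Fluentd forward protocol default port
);

# Natively structured: key-value pairs rather than a format string --
# already most of the way to a wide event.
my %event = (
    invocation_id => 'f7a3b1c2',
    command       => 'device_list',
    duration_ms   => 387,
    results       => 24,
);
$fluent->post('mycli.command' => \%event);
```

The tag ('mycli.command') is what Fluentd routes on; the event body stays opaque to the transport, so adding fields later needs no configuration change.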

The advantage of making an opinionated backend choice -- whether that's Sys::Syslog, Fluent::Logger, or something else entirely -- is that it removes abstraction layers that aren't adding value. If you know where your logs go, a framework like Log::Any is indirection without a benefit. You pay for adapter registration, output plugin configuration, and an extra dependency, but you only ever use one backend. An abstraction earns its keep when requirements are genuinely uncertain; when they're known, it's just ceremony.

The elephant in the room: OpenTelemetry

Of course, the industry is converging on OpenTelemetry as the standard answer to all of the above. Perl has solid support via the OpenTelemetry distribution on CPAN. If your organisation already runs an OTel collector, plumbing it into your CLI from the start is the right long-term bet.

Keeping the door open

The important thing is that the rest of the codebase never touches Sys::Syslog directly. Every module calls $self->logger->info(...), ->error(...), or ->debug(...). The actual syslog calls are isolated to two private methods in the logger class: _emit (which formats and transmits) and _open_syslog (which calls openlog). Swapping Sys::Syslog for Log::Dispatch, Fluent::Logger, or an OpenTelemetry log bridge would mean changing those two methods and nothing else.

This is the pragmatic middle path: start with the simplest backend that works for your deployment targets, but wrap it so the choice is easy to revisit. For a server-side CLI deployed to a controlled fleet, Sys::Syslog is a sensible default -- zero-config, zero-dependency, and delegates the hard problems to purpose-built infrastructure. If the tool later needs to run on developer laptops as a primary deployment target, the logging framework swap is a localised change rather than a rewrite.

Discussion

Have you plumbed observability into a CLI tool? I'd love to hear what worked and what didn't -- whether you went with OpenTelemetry traces, wide events from day one, or bolted logging on after the fact. What was the moment that made you invest in CLI instrumentation? Was it an incident that was hard to trace, a question about adoption you couldn't answer, or just good hygiene? And if you haven't done it yet -- what's holding you back?

Loo: Yet Another Way To Introspect Data

dev.to #perl

A side note: I'm currently on the lookout for a new contract or permanent opportunity. If you know of any relevant openings, I'd appreciate hearing from you. I am also a proficient front-end developer - email@lnation.org

If you've spent any time debugging Perl, you've used Data::Dumper. It's one of those modules that ships with every Perl installation, gets loaded into every debugging session, and does its job without complaint. But it also hasn't changed much in a long time. The output is monochrome, the internals are pure Perl, and code references remain opaque sub { "DUMMY" } blobs unless you enable Deparse, and even then the output is not quite what one would expect.

Loo is a new take on the same problem: dump Perl data structures to readable output, but do it in C, with colour, and with a built-in code deparser that walks the op tree directly.

Why Another Dumper?

Three reasons drove the creation of Loo:

Speed. Loo is implemented entirely in XS. The dump logic, string escaping, colour code generation, and op tree walking all happen in C. For large or deeply nested structures, this matters.

Colour out of the box. When your terminal supports it, Loo's output is coloured by default. Strings, numbers, hash keys, braces, blessed class names, regex patterns, and code all get distinct colours that can be customised. There's no separate module to install, no formatter to configure. It auto-detects terminal capability, respects $ENV{NO_COLOR}, and falls back to plain text when appropriate.

Code deparsing without B::Deparse. When you pass a code reference to Loo with deparsing enabled, it walks Perl's internal op tree in C and reconstructs the source. This is not a wrapper around B::Deparse — it's a standalone implementation that lives in the same XS compilation unit as the introspector itself.

Getting Started

Loo provides a functional interface that mirrors Data::Dumper closely enough that switching is straightforward:

use Loo qw(Dump cDump ncDump dDump);

# Colour auto-detected based on terminal
print Dump({ name => 'Perl', version => 5.40 });

# Force colour on (useful when piping to a pager that supports ANSI)
print cDump([1, 2, 3]);

# Force colour off (useful for logging or file output)
print ncDump(\%ENV);

# Dump with code deparsing enabled
print dDump(sub { my ($x) = @_; return $x * 2 });

The OO interface supports method chaining and gives you access to the full set of configuration options:

my $loo = Loo->new([{ key => 'value' }], ['data']);
$loo->Indent(1)->Sortkeys(1)->Theme('monokai');
print $loo->Dump;

Data::Dumper Compatibility

Loo implements the same accessor interface as Data::Dumper: Indent, Terse, Varname, Useqq, Quotekeys, Sortkeys, Maxdepth, Maxrecurse, Purity, Deepcopy, Pair, Freezer, Toaster, Bless, Deparse, Sparseseen, and Pad. If you have existing code that configures a Data::Dumper object, the same method calls work on a Loo object, and they run faster.
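For instance, existing configuration code like the following uses only accessors from that shared list; per the compatibility claim, replacing Data::Dumper->new with Loo->new should leave it working unchanged (the snippet itself is plain core Data::Dumper, so you can run it today):

```perl
use strict;
use warnings;
use Data::Dumper ();

# The accessor style shared by both modules.
my $dumper = Data::Dumper->new([ { name => 'Perl', major => 5 } ]);
$dumper->Indent(1);    # compact, one-space-per-level indentation
$dumper->Sortkeys(1);  # deterministic key order
$dumper->Terse(1);     # no leading '$VAR1 = '
print $dumper->Dump;
```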

Beyond that, Loo adds a few options that Data::Dumper doesn't have:

  • Indentwidth($n) — control the number of characters per indent level (default 2)
  • Usetabs($bool) — indent with tabs instead of spaces
  • Trailingcomma($bool) — add trailing commas after the last element in arrays and hashes

Colour Customisation

Loo ships with four built-in themes: default, light (optimised for light terminal backgrounds), monokai, and none.

For fine-grained control, the Colour method accepts a hash specifying foreground and background colours for 17 distinct syntax elements:

$loo->Colour({
    string_fg  => 'green',
    key_fg     => 'magenta',
    number_fg  => 'cyan',
    brace_fg   => 'bright_black',
    undef_fg   => 'red',
});

The colourable elements cover every visual component of the output: string, number, key, brace, bracket, paren, arrow, comma, undef, blessed, regex, code, variable, quote, keyword, operator, and comment.

Colour codes are pre-computed once at configuration time, so there's no per-character overhead during the dump.

The Deparser

The most unusual feature of Loo is its built-in code deparser. When you enable deparsing, Loo walks the Perl op tree directly in C — the same internal structure that the Perl interpreter executes — and reconstructs Perl source code from it.

my $loo = Loo->new([\&Some::function]);
$loo->Deparse(1);
print $loo->Dump;

This means code references in your data structures are no longer black boxes: you see the code that will actually run. NOTE: it may not be identical to the code as written, since some postfix constructs are lost by the time the compiled op tree is being deparsed.

Getting this to work across Perl versions was one of the harder parts of the project. The op tree structure has changed across Perl releases — OP_PADSV_STORE appeared in 5.38, OP_EMPTYAVHV landed in 5.36 as an enum rather than a #define, and PADNAME typedefs shifted between 5.20 and 5.22. Loo handles all of this with version-conditional compilation, supporting Perl 5.10 through to the latest releases.

Reusable C Headers

Loo's XS code is organised into modular C headers:

  • loo.h — core definitions, themes, colour element names
  • loo_colour.h — ANSI colour code generation
  • loo_escape.h — string escaping
  • loo_dump.h — recursive data structure dumping
  • loo_deparse.h — op tree walking and code reconstruction

These headers are installed alongside the Perl module, and Loo->include_dir returns their path. This means other XS modules can reuse Loo's colour or escaping logic without duplicating the C code.

Auto-Detection Done Right

Loo follows the no-color.org convention and layers several checks to decide whether to emit ANSI codes:

  1. If $Loo::USE_COLOUR is set, that takes precedence
  2. If $ENV{NO_COLOR} is set, colour is disabled
  3. If $ENV{TERM} is "dumb", colour is disabled
  4. If STDOUT is not a terminal, colour is disabled
  5. Otherwise, colour is enabled

This means Dump() does the right thing whether you're debugging interactively, piping to a file, or running in CI. And cDump() / ncDump() are there when you need to override.
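The precedence chain above is straightforward to express in Perl. This is a sketch of the documented behaviour only (Loo's real check lives in XS), with a package variable standing in for $Loo::USE_COLOUR:

```perl
use strict;
use warnings;

our $USE_COLOUR;   # stand-in for $Loo::USE_COLOUR

sub should_colour {
    # 1. Explicit override takes precedence over everything.
    return $USE_COLOUR ? 1 : 0 if defined $USE_COLOUR;
    # 2. no-color.org convention.
    return 0 if defined $ENV{NO_COLOR};
    # 3. Dumb terminals get plain text.
    return 0 if defined $ENV{TERM} && $ENV{TERM} eq 'dumb';
    # 4. Piped or redirected output gets plain text.
    return 0 unless -t STDOUT;
    # 5. Interactive TTY: colour on.
    return 1;
}
```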

Installation

Loo is available on CPAN:

cpanm Loo

It requires Perl 5.008003 or later and a C compiler (which you already have if you've ever installed an XS module).

Closing Thoughts

Data::Dumper is a workhorse that has served the Perl community well for decades. Loo isn't trying to replace it everywhere — but if you spend a lot of time reading dump output, colour and deparsing make a real difference. And if you're dumping large structures in production logging or tooling, the XS implementation gives you that output faster.

Give it a look. Your eyes may thank you.

Perl extensions don't necessarily need a long form function

The previous commit creates a function for any macro flagged 'mp' that
is visible outside the core, so that name collisions for the short form
macro name can easily be solved.  This isn't needed for perl core use,
as we will fix any collisions that come up before shipping the product.
But the same is true of extensions to core.  Change so that we extend
the exception to include those.
embed.pl: Create long name functions for mp-flagged macros

This allows XS code to #undef a short name macro, and still have access
to its functionality via a Perl_foo function.  This would go a long way
to allowing XS code to cope with namespace collisions.

Prior to this commit, macros without a thread context parameter did not
have function fallbacks.

The current catch is that all such macros must be flagged as 'mp'.  It's
trivial to add the 'p' flag to any such macro we want.  Or maybe there
could be a Configure option to add it to all public macros.  But that is
future work.  This lays the foundation for it.

One gotcha it took me a while to realize.  Suppose there's a name space
collision with macro A; so the XS code undefs our 'A' in favor of its
own.  But if another of our macros, say 'B',  calls 'A', it will get the
XS version.  Hopefully there's a parameter mismatch and the code doesn't
compile; and the solution would be to #undef 'B' as well.
embed.pl: Add mnemonics for visibility flags

This creates some more mnemonics so that, for example, 'AC' is repeated
everywhere for that class of visibility

Cast away const in call to strstr()

Perl commits on GitHub
Cast away const in call to strstr()

A future commit will, for the first time, call strstr with a
    const char *
'haystack' parameter. On my Linux box that doesn't compile because, in
spite of the man page, that parameter is declared merely 'char *'.

Looking at string.h, it appears that it can be const under the right
circumstances.

We're avid Perl programmers, but we have been really wanting to get into Haskell or Erlang or something similar, though we don't know where to start. Any languages you guys recommend? If so, send some good tutorials [or give us a rundown yourself :>]

We must add that we're looking for pure languages, so lisps will largely not be an option unless they do end up being pure or close.

Problem with file encryption script

r/perl

I have been writing a Crypt::CBC script, but although the encrypt and decrypt functions are nearly identical, the decrypt one has been throwing up an error that it needs a key. However, if I add a print statement it has the key right. I will add things like a file-eraser function when I am past this stage, but I would like advice from a security person about how safe what I am doing is.

#!/usr/bin/perl
use strict;
use warnings;
use Crypt::CBC;

sub write_file {
    my $fh;    # file handle
    my $data = $_[1];
    open($fh, '>' . $_[0]) or die $!;
    print $fh $data;
}

sub read_file {
    my $fh;    # file handle
    my $collected;
    open($fh, '<' . $_[0]) or die $!;
    while (<$fh>) {
        $collected .= $_;
    }
    close($fh);
    return $collected;
}

sub encrypt {
    my $filename = $_[0];
    my $key      = $_[1];
    my $cypher   = Crypt::CBC->new(
        -pass   => $key,
        -cipher => 'Cipher::AES'
    );
    my $input      = read_file($filename);
    my $cyphertext = $cypher->encrypt($input);
    write_file($filename . ".enc", $cyphertext) or die;
}

sub decrypt {
    my $filename = $_[0];
    my $key      = $_[1];
    print "$filename $key";
    my $cypher = Crypt::CBC->new(
        -pass   => $key,
        -cipher => 'Cipher::AES'
    );
    my $input     = read_file($filename);
    my $plaintext = $cypher.decrypt($input);
    print $plaintext;
}

sub main {
    print "Enter file name ";
    chomp(my $filename = <STDIN>);
    print "Enter key(at least 8 bytes):";
    chomp(my $key = <STDIN>);
    if (@ARGV ne 1) {
        die("incorrect mode");
    }
    if ($ARGV[0] eq "-e") {
        encrypt($filename, $key);
        print "outputted to $(unknown).enc";
    }
    if ($ARGV[0] eq "-d") {
        decrypt("$(unknown).enc", $key);
        print "outputted to $(unknown)";
    }
}
main();
submitted by /u/Flashy-Show5897
[link] [comments]
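For readers puzzling over the error: the likely culprit is the line my $plaintext = $cypher.decrypt($input); in the decrypt sub. In Perl, . is string concatenation, not a method call, so this parses as $cypher concatenated with a call to the script's own decrypt() subroutine, invoked recursively with a single argument. On that inner call $key is undefined, which would explain Crypt::CBC complaining about a missing key even though the outer call printed the key correctly. The fix is the arrow: $cypher->decrypt($input). A minimal, core-Perl demonstration of the parse difference (the Cipher package below is a stand-in, not Crypt::CBC):

```perl
use strict;
use warnings;

package Cipher;
sub new     { bless {}, shift }
sub decrypt { my ($self, $in) = @_; "method: $in" }

package main;

# A file-level sub with the same name, as in the script above.
sub decrypt { "sub: " . (defined $_[0] ? $_[0] : 'undef') }

my $cypher = Cipher->new;

# '.' concatenates: the right-hand side calls main::decrypt, not the method.
my $dot   = $cypher.decrypt('x');   # stringified object . "sub: x"
my $arrow = $cypher->decrypt('x');  # "method: x"

print "$arrow\n";
```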

Graphics::Toolkit::Color 2.0 feature overview

blogs.perl.org

Finally - GTC 2.0, an all-in-one color library, is released! This post will not rehash the (very) fine manual, but give you a sense of what you can achieve with this software and why it is better than any other lib of that sort on CPAN. If you'd like to look under the hood of GTC, please read my last post.

When I released GTC 1.0 in 2022, it had 4 major features:

1. computing color gradients, between 2 colors in RGB
2. computing complementary colors in HSL
3. translating color names from internal constant set into RGB values
4. converting RGB to HSL and back

The HSL support allowed adding and subtracting lightness and saturation (making colors darker or lighter, more pale or more colorful). Add a very rudimentary distance computation and color blending, and we have reached the bottom of the barrel.

GTC 2.0 expanded all of these areas manyfold. Going from 2 color spaces (RGB and HSL) to now 17 (soon ~25) has a large effect. Not only does being able to read and write color values in 17 spaces make GTC much more useful, but computing a gradient and measuring distance in different spaces also gives you options. Some spaces are optimized for human perception (OKLAB or CIELUV), others you would choose out of technical necessity. OKLAB and OKHCL in particular have been the hype for a while, and GTC is the only module on CPAN supporting them. Almost all methods (besides name and complement) let you choose the color space the method will be computed in, and in is always the named argument you do it with: " in => 'RGB' " just reads naturally.

And just to complete bullet point 1: gradient can now take a series of colors and a tilt factor as arguments to produce very expressive, custom gradients. The tilt factor also works for complements. If you use the special tilt values from the documentation you can get split-complementary colors as needed by designers, but the nice thing about GTC is that you could choose any other value to get exactly what you are looking for. Many libraries have one method for triadic colors and one for quadratic; to get them in GTC you just set the steps argument to 3 or 4, but here too you can choose any other number. Complements can be tilted in all 3 dimensions.

Beside gradient and complement there is also a new color-set method: cluster. It computes a bunch of colors that are centered around a given one but keep a given minimal dissimilarity. Also new is invert, often the fastest way to get a fitting fore/background color, if the original color was not too bland.

The internal color name constants are still the same, but this feature block got 2 expansions. For one, you can now ask for the closest color name (closest_name) and select which standard the name has to come from (e.g. CSS). These constants are provided by the Graphics::ColorNames::* modules, and you can also use them anywhere a color is expected as input. The nice green from the X11 standard would be just 'X:forestgreen'.
But since the CSS + X11 + Pantone report colors are already included, 'forestgreen' works too.

There are many more features coming in the next weeks; the most requested is probably simulation of color-impaired vision, plus more spaces. A gamut checker is already implemented, gamma correction will be implemented this week, and much much more. Just give it a try and please send bug reports and feature requests.

PS. Yes, I also held a lightning talk about GTC in Berlin last week.

PPS. 2.02 is out with gamma correction and correct complex color inversions in any space.

Modern software distribution has converged on a simple idea: ship a self-contained artifact. Whether that means a statically linked binary, a container image, or a snap/flatpak, the benefits are the same -- dependency management is solved at build time, platform differences are absorbed, and upgrades and rollbacks reduce to swapping a single file.

Perl's App::FatPacker applies the same principle to Perl scripts. It bundles every pure-Perl dependency into a single executable file. No cpanm, no local::lib, no Makefile on the target -- just copy the file and run it. The technique is well-established -- cpm (the CPAN installer we use in the build) is itself distributed as a fatpacked binary.

The distribution pipeline looks like this:

 Code repo --> CI --> fatpack --> deploy --> laptops / jumpboxes / servers
                       |
                 single file,
                 no dependencies

This post walks through how we fatpacked an internal CLI we'll call mycli, a ~90-module Perl app, into a single file. The approach generalises to any App::Cmd-based tool.

A good practice for internal tools is to provide all three interfaces: a web frontend, an API, and a CLI. The web frontend is the easiest to discover; the API enables automation and integration; the CLI is the fastest path for engineers who live in a terminal. FatPacker makes the CLI trivially deployable.

mycli is a thin client -- it talks to an internal REST API over HTTPS and renders the response locally. There is no local state beyond a config file and environment variables. You could build an equivalent tool against a binary RPC protocol such as gRPC or Thrift -- the fatpacking approach is the same.

 +--------------------+           +-------------------+
 | Workstation        |   HTTPS   | Server            |
 |                    |           |                   |
 |  $ mycli resource  |---------->|  REST API ---+    |
 |    list ...        |<----------|  (JSON)  DB  |    |
 +--------------------+           +-------------------+

Despite being a thin client, mycli is not trivial. It includes:

  • Pluggable output renderers (table, JSON, YAML, CSV, plain text)
  • Colour output with NO_COLOR support
  • Automatic pager integration (less -RFX) and pipe/TTY detection
  • Activity spinner
  • Multi-format ID resolution (numeric, UUID prefix, name lookup)
  • Command aliases (ls/list, get/show)
  • Config file discovery chain (env var, XDG path, dotfile)
  • Timezone-aware timestamp rendering
  • Structured syslog logging with per-invocation correlation IDs
  • StatsD metrics instrumentation
  • HTTP debugging hooks

All of this fatpacks cleanly because each feature is backed by pure-Perl modules.

This makes it an ideal fatpack candidate: the only XS dependency is Net::SSLeay for TLS, which is typically already present on the target system. Everything else is pure Perl.

Why FatPacker over PAR::Packer?

The other well-known option for single-file Perl distribution is PAR::Packer. PAR bundles everything -- including XS modules and even the perl interpreter itself -- into a self-extracting archive. At runtime it unpacks to a temp directory and executes from there.

FatPacker takes a different approach: modules are inlined as strings inside the script and served via a custom @INC hook. There is no extraction step, no temp directory, and no architecture coupling. The trade-off is that FatPacker only handles pure Perl -- XS modules must already be on the target.

For a thin REST client where the only XS dependency is Net::SSLeay, FatPacker wins on simplicity: the output is a plain Perl script, it starts instantly, and it runs on any architecture with a compatible perl. PAR is the better choice when you need to bundle XS-heavy dependencies or ship a binary to machines without Perl at all.

What fatpacking does

FatPacker prepends a BEGIN block to your script containing every dependency as a string literal, keyed by module path. A custom @INC hook serves these strings to require instead of reading from disk. The original script is appended unchanged.

$ wc -l bin/mycli mycli-packed
      13 bin/mycli
   48721 mycli-packed

That ~49k line file runs identically to the original, on any machine with Perl 5.24+.
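The mechanism can be demonstrated in a few lines of core Perl. This is a simplified illustration of a string-backed @INC hook, not FatPacker's actual code (FatPacker also handles its own packed-section format and module-path bookkeeping):

```perl
use strict;
use warnings;

# A "packed" module held entirely in memory, keyed by its require path.
my %fatpacked = (
    'Hello/World.pm' => <<'END_MODULE',
package Hello::World;
sub greet { "hello from a string, not a file" }
1;
END_MODULE
);

# An @INC hook: require calls this sub with the path it is looking for,
# and the sub may return a filehandle that supplies the module source.
unshift @INC, sub {
    my (undef, $path) = @_;
    return unless exists $fatpacked{$path};
    open my $fh, '<', \$fatpacked{$path} or die $!;  # in-memory filehandle
    return $fh;
};

require Hello::World;   # served from the hash, never touches disk
print Hello::World::greet(), "\n";
```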

The problem with naive fatpacking

The standard FatPacker workflow is:

fatpack trace bin/mycli
fatpack packlists-for $(cat fatpacker.trace) > packlists
fatpack tree $(cat packlists)
fatpack file bin/mycli > mycli-packed

This breaks for non-trivial apps because fatpack trace uses compile-time analysis (B::minus_c). It misses anything loaded at runtime via require:

  • App::Cmd discovers commands via Module::Pluggable at runtime
  • Text::ANSITable loads border styles and colour themes dynamically
  • LWP::UserAgent loads protocol handlers on first request
  • YAML::Any probes for available backends at runtime

If the trace misses a module, the packed binary dies with Can't locate Foo/Bar.pm in @INC at the worst possible moment.

The solution: a custom trace helper

Instead of relying on fatpack trace, we wrote a helper script that requires every module the app could ever load, then dumps %INC at exit. This captures the complete runtime dependency tree.

#!/usr/bin/env perl
# bin/trace-helper -- not shipped, build-time only
use strict;
use warnings;
use lib 'lib';

# Modules loaded lazily that fatpack misses
require Data::Unixish::Apply;
require Digest::SHA;
require HTTP::Request;
require LWP::UserAgent;
require String::RewritePrefix;

# Exercise objects to trigger deep runtime loads
{
    require Text::ANSITable;
    my $t = Text::ANSITable->new(use_color => 1, use_utf8 => 1);
    $t->border_style('UTF8::SingleLineBold');
    $t->color_theme('Text::ANSITable::Standard::NoGradation');
    $t->columns(['a']);
    $t->add_row(['1']);
    $t->draw;  # forces all rendering deps to load
}

# Every App::Cmd leaf command
require MyCLI::App;
require MyCLI::App::Command::device::list;
require MyCLI::App::Command::device::get;
# ... all 80+ command modules ...

END {
    open my $fh, '>', 'fatpacker.trace' or die $!;
    for my $inc (sort keys %INC) {
        next unless defined $INC{$inc};
        next if $inc =~ m{\AMyCLI/};  # our own modules come from lib/
        print $fh "$inc\n";
    }
}

Key points:

  • Don't call ->run -- App::Cmd subdispatch will die on duplicate command names across namespaces. Just require every leaf.
  • Exercise both code paths -- Text::ANSITable loads different modules for colour vs plain, UTF-8 vs ASCII. Instantiate both.
  • Exclude your own namespace -- FatPacker embeds modules from fatlib/; your lib/ modules are embedded separately. Including them in the trace causes duplicates.

Forcing pure-Perl backends

FatPacker can only bundle pure Perl. Many popular modules ship dual XS/pure-Perl backends and prefer XS at runtime. If XS is available during the trace, the pure-Perl fallback won't appear in %INC and won't get bundled.

Force pure-Perl mode during the build:

# In the fatpack build script
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

PUREPERL_ONLY=1 is a convention respected by many dual XS/PP distributions at install time, preventing XS compilation entirely. The per-module variables above cover modules that don't check PUREPERL_ONLY.

Combine this with --pp at install time to avoid pulling in XS at all:

cpm install -L local --target-perl 5.24.0 --pp

Pinning the target Perl version

The --target-perl flag to cpm is critical and easy to overlook. Without it, cpm resolves dependency versions against your build machine's Perl. If you're building on 5.38 but deploying to a jumpbox running 5.24, you'll silently install module versions that use postfix dereferencing, subroutine signatures, or other features that don't exist on the target.
The packed binary will fail at runtime with a syntax error -- far from the build where you could catch it.

This tells cpm's resolver to only consider module versions whose metadata declares compatibility with 5.24.0. Combined with perl -c as a post-install sanity check, this catches version mismatches before the slow trace step.

The complete build script

Here is the full pipeline, wrapped in a shell script. It supports incremental builds (reuses local/ and trace cache) and --clean for full rebuilds.

#!/bin/sh
set -e

CLEAN=0
[ "$1" = "--clean" ] && CLEAN=1

# 0. Prerequisites
for cmd in cpm fatpack perl; do
    command -v "$cmd" >/dev/null 2>&1 || {
        echo "Error: '$cmd' is not installed." >&2; exit 1
    }
done

export PERL_USE_UNSAFE_INC=1  # Perl 5.26+ removed . from @INC

# 1. Install deps (pure-perl only)
if [ "$CLEAN" = 1 ] || [ ! -d local/ ]; then
    rm -rf local/
    cpm install -L local --target-perl 5.24.0 --pp
fi

# 2. Set up paths
export PERL5LIB=$PWD/lib:$PWD/local/lib/perl5
export PATH=$PWD/local/bin:$PATH

# 3. Force pure-perl backends
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

# 4. Verify compilation
perl -c bin/mycli || exit 1

# 5. Trace
if [ "$CLEAN" = 1 ] || [ ! -f fatpacker.trace ]; then
    perl -Ilib bin/trace-helper
    echo "Trace: $(wc -l < fatpacker.trace) modules"
fi

# 6. Pack
fatpack packlists-for $(cat fatpacker.trace) > packlists
fatpack tree $(cat packlists)

# Strip arch-specific dirs and non-essential files
rm -rf fatlib/$(perl -MConfig -e 'print $Config{archname}')
find fatlib -name '*.pod' -delete
find fatlib -name '*.pl'  -delete

# Bundle
fatpack file bin/mycli > mycli-packed
chmod +x mycli-packed
echo "Built mycli-packed ($(wc -c < mycli-packed) bytes)"

Step by step: what happens

  1. Prerequisites -- verify cpm, fatpack, and perl are available
  2. Install -- cpm installs all dependencies into local/ as pure Perl, targeting 5.24.0
  3. Paths and env -- set PERL5LIB, PATH, and pure-Perl overrides
  4. Compile check -- perl -c bin/mycli catches syntax errors before the slow trace step
  5. Trace -- the helper script loads everything and writes the module list to fatpacker.trace
  6. Packlists and tree -- fatpack packlists-for maps module names to installed packlist files; fatpack tree copies the .pm files into fatlib/
  7. Clean up -- remove .pod, .pl, and arch-specific directories to reduce size
  8. Bundle -- fatpack file inlines everything from fatlib/ into the script

Makefile integration

For teams that prefer make, add targets that delegate to the shell script:

# In Makefile.PL, inside MY::postamble
.PHONY: pack clean_fatpack

pack:
    ./fatpack

clean :: clean_fatpack

clean_fatpack:
    rm -rf fatlib fatpacker.trace packlists mycli-packed local/

Then building is just:

perl Makefile.PL
make pack

Adding a new dependency

When someone adds use Some::New::Module to the codebase, the fatpacked binary will break with Can't locate Some/New/Module.pm in @INC unless the build picks it up. The workflow is:

  1. Add the module to cpanfile
  2. If the module is loaded at runtime (via require or a plugin mechanism), add a require Some::New::Module line to the trace helper
  3. Rebuild with --clean
./fatpack --clean

The --clean flag is important. Without it, the build reuses the cached local/ directory and fatpacker.trace from the previous run. The new module won't appear in either, and the packed binary will silently ship without it.

A good safeguard is to run perl -c mycli-packed after every build -- this catches missing modules at build time rather than in production.

What about perlstrip?

Perl::Strip can reduce the packed file by ~30% by removing comments, POD, and whitespace from bundled modules. We deliberately left it off. For an internal tool, the size saving (~1.7 MB) is not worth the trade-off: stripped files are harder to debug with stack traces, and perlstrip has a known issue corrupting files that contain use utf8.

Gotchas and tips

XS modules cannot be fatpacked

Modules with C extensions (.so/.xs) cannot be inlined. They must already exist on the target system. If your app has many XS dependencies, consider PAR::Packer instead (see above).

PERL_USE_UNSAFE_INC

Perl 5.26 removed . from @INC. Some older CPAN modules assume it's there during install or test. Set PERL_USE_UNSAFE_INC=1 during the build to avoid spurious failures. This only affects the build environment, not the packed binary.

Pinto / private CPAN

If your organisation runs a private CPAN mirror (Pinto, OrePAN2, etc.), point cpm at it with --resolver:

cpm install -L local --resolver 02packages,$PINTO_REPO --pp

Docker builds

FatPacker and Docker are complementary. Use Docker for the build environment (consistent Perl version, cpm, fatpack installed), and ship either the container image or just the packed file:

COPY mycli-packed /usr/local/bin/mycli
RUN chmod +x /usr/local/bin/mycli

Summary

The core recipe is three pieces:

  1. A trace helper that loads every module your app could use at runtime, capturing the full dependency tree via %INC
  2. Pure-Perl enforcement via environment variables and cpm --pp
  3. The standard fatpack pipeline: packlists, tree, clean up, bundle

The result is a single file you can scp to any box with Perl 5.24+ and run immediately. No CPAN, no Makefile, no containers required.


# Exercise objects to trigger deep runtime loads
{
    require Text::ANSITable;
    my $t = Text::ANSITable->new(use_color => 1, use_utf8 => 1);
    $t->border_style('UTF8::SingleLineBold');
    $t->color_theme('Text::ANSITable::Standard::NoGradation');
    $t->columns(['a']);
    $t->add_row(['1']);
    $t->draw;  # forces all rendering deps to load
}

# Every App::Cmd leaf command
require MyCLI::App;
require MyCLI::App::Command::device::list;
require MyCLI::App::Command::device::get;
# ... all 80+ command modules ...

END {
    open my $fh, '>', 'fatpacker.trace' or die $!;
    for my $inc (sort keys %INC) {
        next unless defined $INC{$inc};
        next if $inc =~ m{\AMyCLI/};  # our own modules come from lib/
        print $fh "$inc\n";
    }
}

Key points:

  • Don't call ->run -- App::Cmd subdispatch will die on duplicate command names across namespaces. Just require every leaf.
  • Exercise both code paths -- Text::ANSITable loads different modules for colour vs plain, UTF-8 vs ASCII. Instantiate both.
  • Exclude your own namespace -- FatPacker embeds modules from fatlib/; your lib/ modules are embedded separately. Including them in the trace causes duplicates.

Forcing pure-Perl backends

FatPacker can only bundle pure Perl. Many popular modules ship dual XS/pure-Perl backends and prefer XS at runtime. If XS is available during the trace, the pure-Perl fallback won't appear in %INC and won't get bundled.

Force pure-Perl mode during the build:

# In the fatpack build script
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

PUREPERL_ONLY=1 is a convention respected by many dual XS/PP distributions at install time, preventing XS compilation entirely. The per-module variables above cover modules that don't check PUREPERL_ONLY.

Combine this with --pp at install time to avoid pulling in XS at all:

cpm install -L local --target-perl 5.24.0 --pp

Pinning the target Perl version

The --target-perl flag to cpm is critical and easy to overlook. Without it, cpm resolves dependency versions against your build machine's Perl. If you're building on 5.38 but deploying to a jumpbox running 5.24, you'll silently install module versions that use postfix dereferencing, subroutine signatures, or other features that don't exist on the target. The packed binary will fail at runtime with a syntax error -- far from the build where you could catch it.

This tells cpm's resolver to only consider module versions whose metadata declares compatibility with 5.24.0. Combined with perl -c as a post-install sanity check, this catches version mismatches before the slow trace step.

The complete build script

Here is the full pipeline, wrapped in a shell script. It supports incremental builds (reuses local/ and trace cache) and --clean for full rebuilds.

#!/bin/sh
set -e

CLEAN=0
[ "$1" = "--clean" ] && CLEAN=1

# 0. Prerequisites
for cmd in cpm fatpack perl; do
    command -v "$cmd" >/dev/null 2>&1 || {
        echo "Error: '$cmd' is not installed." >&2; exit 1
    }
done

export PERL_USE_UNSAFE_INC=1  # Perl 5.26+ removed . from @INC

# 1. Install deps (pure-perl only)
if [ "$CLEAN" = 1 ] || [ ! -d local/ ]; then
    rm -rf local/
    cpm install -L local --target-perl 5.24.0 --pp
fi

# 2. Set up paths
export PERL5LIB=$PWD/lib:$PWD/local/lib/perl5
export PATH=$PWD/local/bin:$PATH

# 3. Force pure-perl backends
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

# 4. Verify compilation
perl -c bin/mycli || exit 1

# 5. Trace
if [ "$CLEAN" = 1 ] || [ ! -f fatpacker.trace ]; then
    perl -Ilib bin/trace-helper
    echo "Trace: $(wc -l < fatpacker.trace) modules"
fi

# 6. Pack
fatpack packlists-for $(cat fatpacker.trace) > packlists
fatpack tree $(cat packlists)

# Strip arch-specific dirs and non-essential files
rm -rf fatlib/$(perl -MConfig -e 'print $Config{archname}')
find fatlib -name '*.pod' -delete
find fatlib -name '*.pl'  -delete

# Bundle
fatpack file bin/mycli > mycli-packed
chmod +x mycli-packed
echo "Built mycli-packed ($(wc -c < mycli-packed) bytes)"

Step by step: what happens

  1. Prerequisites -- verify cpm, fatpack, and perl are available
  2. Install -- cpm installs all dependencies into local/ as pure Perl, targeting 5.24.0
  3. Paths and env -- set PERL5LIB, PATH, and pure-Perl overrides
  4. Compile check -- perl -c bin/mycli catches syntax errors before the slow trace step
  5. Trace -- the helper script loads everything and writes the module list to fatpacker.trace
  6. Packlists and tree -- fatpack packlists-for maps module names to installed packlist files; fatpack tree copies the .pm files into fatlib/
  7. Clean up -- remove .pod, .pl, and arch-specific directories to reduce size
  8. Bundle -- fatpack file inlines everything from fatlib/ into the script

Makefile integration

For teams that prefer make, add targets that delegate to the shell script:

# In Makefile.PL, inside MY::postamble
.PHONY: pack clean_fatpack

pack:
	./fatpack

clean :: clean_fatpack

clean_fatpack:
	rm -rf fatlib fatpacker.trace packlists mycli-packed local/

Then building is just:

$ perl Makefile.PL
$ make pack

Adding a new dependency

When someone adds use Some::New::Module to the codebase, the fatpacked binary will break with Can't locate Some/New/Module.pm in @INC unless the build picks it up. The workflow is:

  1. Add the module to cpanfile
  2. If the module is loaded at runtime (via require or a plugin mechanism), add a require Some::New::Module line to the trace helper
  3. Rebuild with --clean
./fatpack --clean

The --clean flag is important. Without it, the build reuses the cached local/ directory and fatpacker.trace from the previous run. The new module won't appear in either, and the packed binary will silently ship without it.

A good safeguard is to run perl -c mycli-packed after every build -- this catches missing modules at build time rather than in production.

What about perlstrip?

Perl::Strip can reduce the packed file by ~30% by removing comments, POD, and whitespace from bundled modules. We deliberately left it off. For an internal tool, the size saving (~1.7 MB) is not worth the trade-off: stripped files are harder to debug with stack traces, and perlstrip has a known issue corrupting files that contain use utf8.

Gotchas and tips

XS modules cannot be fatpacked

Modules with C extensions (.so/.xs) cannot be inlined. They must already exist on the target system. If your app has many XS dependencies, consider PAR::Packer instead (see above).

PERL_USE_UNSAFE_INC

Perl 5.26 removed . from @INC. Some older CPAN modules assume it's there during install or test. Set PERL_USE_UNSAFE_INC=1 during the build to avoid spurious failures. This only affects the build environment, not the packed binary.

Pinto / private CPAN

If your organisation runs a private CPAN mirror (Pinto, OrePAN2, etc.), point cpm at it with --resolver:

cpm install -L local --resolver 02packages,$PINTO_REPO --pp

Docker builds

FatPacker and Docker are complementary. Use Docker for the build environment (consistent Perl version, cpm, fatpack installed), and ship either the container image or just the packed file:

COPY mycli-packed /usr/local/bin/mycli
RUN chmod +x /usr/local/bin/mycli

Summary

The core recipe is three pieces:

  1. A trace helper that loads every module your app could use at runtime, capturing the full dependency tree via %INC
  2. Pure-Perl enforcement via environment variables and cpm --pp
  3. The standard fatpack pipeline: packlists, tree, clean up, bundle

The result is a single file you can scp to any box with Perl 5.24+ and run immediately. No CPAN, no Makefile, no containers required.

References

Modern software distribution has converged on a simple idea: ship a self-contained artifact. Whether that means a statically linked binary, a container image, or a snap/flatpak, the benefits are the same -- dependency management is solved at build time, platform differences are absorbed, and upgrades and rollbacks reduce to swapping a single file.

Perl's App::FatPacker applies the same principle to Perl scripts. It bundles every pure-Perl dependency into a single executable file. No cpanm, no local::lib, no Makefile on the target -- just copy the file and run it. The technique is well-established -- cpm (the CPAN installer we use in the build) is itself distributed as a fatpacked binary.

The distribution pipeline looks like this:

 Code repo --> CI --> fatpack --> deploy --> laptops / jumpboxes / servers
                      |
                single file,
                no dependencies

This post walks through how we fatpacked an internal CLI we'll call mycli, a ~90-module Perl app, into a single file. The approach generalises to any App::Cmd-based tool.

A good practice for internal tools is to provide all three interfaces: a web frontend, an API, and a CLI. The web frontend is the easiest to discover; the API enables automation and integration; the CLI is the fastest path for engineers who live in a terminal. FatPacker makes the CLI trivially deployable.

mycli is a thin client -- it talks to an internal REST API over HTTPS and renders the response locally. There is no local state beyond a config file and environment variables. You could build an equivalent tool against a binary RPC protocol such as gRPC or Thrift -- the fatpacking approach is the same.

 +--------------------+           +-------------------+
 | Workstation        |   HTTPS   | Server            |
 |                    |           |                   |
 |  $ mycli resource  |---------->|  REST API ---+    |
 |    list ...        |<----------|  (JSON)  DB  |    |
 +--------------------+           +-------------------+

Despite being a thin client, mycli is not trivial. It includes:

  • Pluggable output renderers (table, JSON, YAML, CSV, plain text)
  • Colour output with NO_COLOR support
  • Automatic pager integration (less -RFX) and pipe/TTY detection
  • Activity spinner
  • Multi-format ID resolution (numeric, UUID prefix, name lookup)
  • Command aliases (ls/list, get/show)
  • Config file discovery chain (env var, XDG path, dotfile)
  • Timezone-aware timestamp rendering
  • Structured syslog logging with per-invocation correlation IDs
  • StatsD metrics instrumentation
  • HTTP debugging hooks

All of this fatpacks cleanly because each feature is backed by pure-Perl modules.

This makes it an ideal fatpack candidate: the only XS dependency is Net::SSLeay for TLS, which is typically already present on the target system. Everything else is pure Perl.
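As an aside, the pipe/TTY detection and NO_COLOR support in that list typically reduce to a couple of checks. A minimal illustration (ours, not mycli's actual code):

```perl
#!/usr/bin/env perl
# Sketch of colour gating (illustration only, not mycli's code):
# colour only when STDOUT is a real terminal and the user has not
# set NO_COLOR (per the no-color.org convention, any non-empty
# value disables colour).
use strict;
use warnings;

sub want_color {
    my (%env) = @_;
    return 0 unless -t STDOUT;    # pipes and redirects get plain text
    return 0 if defined $env{NO_COLOR} && length $env{NO_COLOR};
    return 1;
}

print want_color(%ENV) ? "\e[32mok\e[0m\n" : "ok\n";
```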

Why FatPacker over PAR::Packer?

The other well-known option for single-file Perl distribution is PAR::Packer. PAR bundles everything -- including XS modules and even the perl interpreter itself -- into a self-extracting archive. At runtime it unpacks to a temp directory and executes from there.

FatPacker takes a different approach: modules are inlined as strings inside the script and served via a custom @INC hook. There is no extraction step, no temp directory, and no architecture coupling. The trade-off is that FatPacker only handles pure Perl -- XS modules must already be on the target.

For a thin REST client where the only XS dependency is Net::SSLeay, FatPacker wins on simplicity: the output is a plain Perl script, it starts instantly, and it runs on any architecture with a compatible perl. PAR is the better choice when you need to bundle XS-heavy dependencies or ship a binary to machines without Perl at all.

What fatpacking does

FatPacker prepends a BEGIN block to your script containing every dependency as a string literal, keyed by module path. A custom @INC hook serves these strings to require instead of reading from disk. The original script is appended unchanged.

$ wc -l bin/mycli mycli-packed
      13 bin/mycli
   48721 mycli-packed

That ~49k line file runs identically to the original, on any machine with Perl 5.24+.
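The mechanism is easy to see in miniature. This toy (ours, with an invented Hello::World module; FatPacker's real hook is more elaborate) registers an @INC coderef that serves module source from a hash:

```perl
#!/usr/bin/env perl
# Toy version of FatPacker's mechanism: module source lives in a hash,
# and an @INC hook hands require() an in-memory filehandle for it.
# Hello::World is invented for this demo.
use strict;
use warnings;

my %fatpacked = (
    'Hello/World.pm' => <<'PM',
package Hello::World;
sub greet { "hello from a packed module" }
1;
PM
);

unshift @INC, sub {
    my (undef, $file) = @_;               # called as ($coderef, $filename)
    return unless exists $fatpacked{$file};
    open my $fh, '<', \$fatpacked{$file} or die $!;
    return $fh;                           # require() reads source from this
};

require Hello::World;
print Hello::World::greet(), "\n";        # prints "hello from a packed module"
```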

The problem with naive fatpacking

The standard FatPacker workflow is:

$ fatpack trace bin/mycli
$ fatpack packlists-for $(cat fatpacker.trace) > packlists
$ fatpack tree $(cat packlists)
$ fatpack file bin/mycli > mycli-packed

This breaks for non-trivial apps because fatpack trace uses compile-time analysis (B::minus_c). It misses anything loaded at runtime via require:

  • App::Cmd discovers commands via Module::Pluggable at runtime
  • Text::ANSITable loads border styles and colour themes dynamically
  • LWP::UserAgent loads protocol handlers on first request
  • YAML::Any probes for available backends at runtime

If the trace misses a module, the packed binary dies with Can't locate Foo/Bar.pm in @INC at the worst possible moment.
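The gap is easy to demonstrate. In this sketch (ours, using JSON::PP as the lazily loaded dependency), the require sits inside a branch, so a compile-only pass never records it in %INC:

```perl
#!/usr/bin/env perl
# Demo of the tracing gap: a require inside a branch only fires when
# that branch actually runs, so compile-time analysis never sees it.
use strict;
use warnings;

sub render {
    my ($format, $data) = @_;
    if ($format eq 'json') {
        require JSON::PP;                  # runtime load
        return JSON::PP::encode_json($data);
    }
    return join ',', @$data;               # default path loads nothing
}

print render('csv', [1, 2, 3]), "\n";
print exists $INC{'JSON/PP.pm'} ? "traced\n" : "missed by the trace\n";
```

Running only the csv path prints "missed by the trace": JSON::PP was never loaded, so a trace based on this run (or on compilation alone) would omit it from the pack.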

The solution: a custom trace helper

Instead of relying on fatpack trace, we wrote a helper script that requires every module the app could ever load, then dumps %INC at exit. This captures the complete runtime dependency tree.

#!/usr/bin/env perl
# bin/trace-helper -- not shipped, build-time only
use strict;
use warnings;
use lib 'lib';

# Modules loaded lazily that fatpack misses
require Data::Unixish::Apply;
require Digest::SHA;
require HTTP::Request;
require LWP::UserAgent;
require String::RewritePrefix;

# Exercise objects to trigger deep runtime loads
{
    require Text::ANSITable;
    my $t = Text::ANSITable->new(use_color => 1, use_utf8 => 1);
    $t->border_style('UTF8::SingleLineBold');
    $t->color_theme('Text::ANSITable::Standard::NoGradation');
    $t->columns(['a']);
    $t->add_row(['1']);
    $t->draw;  # forces all rendering deps to load
}

# Every App::Cmd leaf command
require MyCLI::App;
require MyCLI::App::Command::device::list;
require MyCLI::App::Command::device::get;
# ... all 80+ command modules ...

END {
    open my $fh, '>', 'fatpacker.trace' or die $!;
    for my $inc (sort keys %INC) {
        next unless defined $INC{$inc};
        next if $inc =~ m{\AMyCLI/};  # our own modules come from lib/
        print $fh "$inc\n";
    }
}

Key points:

  • Don't call ->run -- App::Cmd subdispatch will die on duplicate command names across namespaces. Just require every leaf.
  • Exercise both code paths -- Text::ANSITable loads different modules for colour vs plain, UTF-8 vs ASCII. Instantiate both.
  • Exclude your own namespace -- FatPacker embeds modules from fatlib/; your lib/ modules are embedded separately. Including them in the trace causes duplicates.

Forcing pure-Perl backends

FatPacker can only bundle pure Perl. Many popular modules ship dual XS/pure-Perl backends and prefer XS at runtime. If XS is available during the trace, the pure-Perl fallback won't appear in %INC and won't get bundled.

Force pure-Perl mode during the build:

# In the fatpack build script
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

PUREPERL_ONLY=1 is a convention respected by many dual XS/PP distributions at install time, preventing XS compilation entirely. The per-module variables above cover modules that don't check PUREPERL_ONLY.

Combine this with --pp at install time to avoid pulling in XS at all:

cpm install -L local --target-perl 5.24.0 --pp
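A quick way to verify that a run really stayed pure Perl: XS modules record themselves in @DynaLoader::dl_modules when they bootstrap, so after exercising the code you can see exactly what pulled in compiled bits. A sketch (ours; List::Util stands in for "your application"):

```perl
#!/usr/bin/env perl
# Verification sketch: XSLoader/DynaLoader push each bootstrapped
# module onto @DynaLoader::dl_modules, so after loading code you can
# list everything that brought in XS.
use strict;
use warnings;

require File::Spec;    # pure Perl -- leaves no record
require List::Util;    # core XS module -- will be recorded

my @xs = @DynaLoader::dl_modules;
print @xs ? "XS modules loaded: @xs\n" : "no XS modules loaded\n";
```

Adding a check like this at the end of the trace helper makes an accidental XS load (a missing PP override, say) fail loudly instead of silently thinning the pack.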

Pinning the target Perl version

The --target-perl flag to cpm is critical and easy to overlook. Without it, cpm resolves dependency versions against your build machine's Perl. If you're building on 5.38 but deploying to a jumpbox running 5.24, you'll silently install module versions that use postfix dereferencing, subroutine signatures, or other features that don't exist on the target. The packed binary will fail at runtime with a syntax error -- far from the build where you could catch it.

The flag tells cpm's resolver to consider only module versions whose metadata declares compatibility with 5.24.0. Combined with perl -c as a post-install sanity check, this catches version mismatches before the slow trace step.

The complete build script

Here is the full pipeline, wrapped in a shell script. It supports incremental builds (reuses local/ and trace cache) and --clean for full rebuilds.

#!/bin/sh
set -e

CLEAN=0
[ "$1" = "--clean" ] && CLEAN=1

# 0. Prerequisites
for cmd in cpm fatpack perl; do
    command -v "$cmd" >/dev/null 2>&1 || {
        echo "Error: '$cmd' is not installed." >&2; exit 1
    }
done

export PERL_USE_UNSAFE_INC=1  # Perl 5.26+ removed . from @INC

# 1. Install deps (pure-perl only)
if [ "$CLEAN" = 1 ] || [ ! -d local/ ]; then
    rm -rf local/
    cpm install -L local --target-perl 5.24.0 --pp
fi

# 2. Set up paths
export PERL5LIB=$PWD/lib:$PWD/local/lib/perl5
export PATH=$PWD/local/bin:$PATH

# 3. Force pure-perl backends
export B_HOOKS_ENDOFSCOPE_IMPLEMENTATION=PP
export LIST_MOREUTILS_PP=1
export MOO_XS_DISABLE=1
export PACKAGE_STASH_IMPLEMENTATION=PP
export PARAMS_VALIDATE_IMPLEMENTATION=PP
export PERL_JSON_BACKEND=JSON::PP
export PUREPERL_ONLY=1

# 4. Verify compilation
perl -c bin/mycli || exit 1

# 5. Trace
if [ "$CLEAN" = 1 ] || [ ! -f fatpacker.trace ]; then
    perl -Ilib bin/trace-helper
    echo "Trace: $(wc -l < fatpacker.trace) modules"
fi

# 6. Pack
fatpack packlists-for $(cat fatpacker.trace) > packlists
fatpack tree $(cat packlists)

# Strip arch-specific dirs and non-essential files
rm -rf fatlib/$(perl -MConfig -e 'print $Config{archname}')
find fatlib -name '*.pod' -delete
find fatlib -name '*.pl'  -delete

# Bundle
fatpack file bin/mycli > mycli-packed
chmod +x mycli-packed
echo "Built mycli-packed ($(wc -c < mycli-packed) bytes)"

Step by step: what happens

  1. Prerequisites -- verify cpm, fatpack, and perl are available
  2. Install -- cpm installs all dependencies into local/ as pure Perl, targeting 5.24.0
  3. Paths and env -- set PERL5LIB, PATH, and pure-Perl overrides
  4. Compile check -- perl -c bin/mycli catches syntax errors before the slow trace step
  5. Trace -- the helper script loads everything and writes the module list to fatpacker.trace
  6. Packlists and tree -- fatpack packlists-for maps module names to installed packlist files; fatpack tree copies the .pm files into fatlib/
  7. Clean up -- remove .pod, .pl, and arch-specific directories to reduce size
  8. Bundle -- fatpack file inlines everything from fatlib/ into the script

Makefile integration

For teams that prefer make, add targets that delegate to the shell script:

# In Makefile.PL, inside MY::postamble
.PHONY: pack clean_fatpack

pack:
	./fatpack

clean :: clean_fatpack

clean_fatpack:
	rm -rf fatlib fatpacker.trace packlists mycli-packed local/

Then building is just:

$ perl Makefile.PL
$ make pack

Adding a new dependency

When someone adds use Some::New::Module to the codebase, the fatpacked binary will break with Can't locate Some/New/Module.pm in @INC unless the build picks it up. The workflow is:

  1. Add the module to cpanfile
  2. If the module is loaded at runtime (via require or a plugin mechanism), add a require Some::New::Module line to the trace helper
  3. Rebuild with --clean: ./fatpack --clean

The --clean flag is important. Without it, the build reuses the cached local/ directory and fatpacker.trace from the previous run. The new module won't appear in either, and the packed binary will silently ship without it.

A good safeguard is to run perl -c mycli-packed after every build -- this catches missing modules at build time rather than in production.
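Beyond perl -c, a cheap smoke test is to execute the packed file with the build tree hidden, so any module that failed to pack shows up immediately. A sketch (ours; POSIX shell assumed, and a perl one-liner stands in for mycli-packed):

```perl
#!/usr/bin/env perl
# Post-build smoke test sketch. Running the packed file with PERL5LIB
# cleared means every module must come from the pack itself.
use strict;
use warnings;

sub smoke_test {
    my ($cmd) = @_;
    local $ENV{PERL5LIB} = '';                   # hide lib/ and local/
    local $ENV{PERLLIB}  = '';
    my $out = qx($cmd 2>&1);
    return (0, $out) if $? != 0;                 # non-zero exit status
    return (0, $out) if $out =~ /Can't locate/;  # a module escaped the pack
    return (1, $out);
}

# Real build: smoke_test("$^X mycli-packed --help")
my ($ok, $out) = smoke_test(qq{$^X -e 'print "usage: mycli\\n"'});
print $ok ? "smoke test passed\n" : "smoke test FAILED:\n$out";
```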

What about perlstrip?

Perl::Strip can reduce the packed file by ~30% by removing comments, POD, and whitespace from bundled modules. We deliberately left it off. For an internal tool, the size saving (~1.7 MB) is not worth the trade-off: stripped files are harder to debug with stack traces, and perlstrip has a known issue corrupting files that contain use utf8.

Gotchas and tips

XS modules cannot be fatpacked

Modules with C extensions (.so/.xs) cannot be inlined. They must already exist on the target system. If your app has many XS dependencies, consider PAR::Packer instead (see above).

PERL_USE_UNSAFE_INC

Perl 5.26 removed . from @INC. Some older CPAN modules assume it's there during install or test. Set PERL_USE_UNSAFE_INC=1 during the build to avoid spurious failures. This only affects the build environment, not the packed binary.

Pinto / private CPAN

If your organisation runs a private CPAN mirror (Pinto, OrePAN2, etc.), point cpm at it with --resolver:

cpm install -L local --resolver 02packages,$PINTO_REPO --pp

Docker builds

FatPacker and Docker are complementary. Use Docker for the build environment (consistent Perl version, cpm, fatpack installed), and ship either the container image or just the packed file:

COPY mycli-packed /usr/local/bin/mycli
RUN chmod +x /usr/local/bin/mycli

Summary

The core recipe is three pieces:

  1. A trace helper that loads every module your app could use at runtime, capturing the full dependency tree via %INC
  2. Pure-Perl enforcement via environment variables and cpm --pp
  3. The standard fatpack pipeline: packlists, tree, clean up, bundle

The result is a single file you can scp to any box with Perl 5.24+ and run immediately. No CPAN, no Makefile, no containers required.


Issues with cpan.org email forwarding

r/perl

For approximately the last 2 weeks I haven't received any emails sent to or forwarded from my cpan.org email address, which is configured via my pause account.

Anyone else noticed this, and/or where could I report this?

submitted by /u/jmcnamara13

Monitoring large files for changes using interval hashes

r/perl

I monitor some medium-to-large (multi-Gb) files for changes, and I'd rather not run a full hash on the whole thing. It's time-consuming, and if they're not on a ZFS filesystem, I can't take advantage of the automatic checksumming to warn me about corruption.

I use a script called chunkhash to read blocks at intervals in the file, store their SHA1 hashes and output a final hash generated from the intermediate ones. I'm not looking for crypto-level security; I want speed plus an indication of when something's changed. It took about 90 seconds on old hardware to check 393 Gbytes.

For large files (256 Mb and up):

open the file
read and hash 1 Mb
skip 63 Mb
read and hash 1 Mb
skip 63 Mb
lather, rinse, repeat...

For intermediate files (4-256 Mb), it reads 256k and skips 2Mb. Small files (<4 Mb) are completely hashed.

This idea is certainly not original with me; maybe it'll scratch an itch for someone out there. Example:

me% date; chunkhash */*.tgz; date
Sat Mar 28 04:44:14 EDT 2026
69t3+P4ZfcHUR5QtbS764e+dsf0 archive-iso/part.01.tgz
Rp3kNmgfIGH4whjjZYkcIXGixDM archive-iso/part.02.tgz
9bqyWAteNYuCFF3Vo+SLl+20UMo archive-iso/part.03.tgz
Ph1KMSvK8lj421jFWQcbiOl2gGU archive-iso/part.04.tgz
VFxgE86d4B77wpuX8GL9aWDF6d0 archive-iso/part.05.tgz
t787n6s+0RDOud8xc8K0tA3GcqY archive-iso/part.06.tgz
9N2j8xYncT7xMy8sNqjF5sy3WHw archive-iso/part.07.tgz
...
sBa9CvupF9Qw23nAWHWapCx0Itk var-log/part.01.tgz
J9HbZau8M5ZMvVs1y7jl5ETS0vU var-log/part.02.tgz
bfDv1AjS2TB9AvmooORcJZHTwds var-log/part.03.tgz
k+xj9H8cvNOeQoiJrLsMl9T/gsg var-tmp/part.01.tgz
Sat Mar 28 04:45:46 EDT 2026

You can find the source at https://bezoar.org/src/chunkhash . Comments welcome.
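The scheme described above can be sketched in a few lines of Perl (our reconstruction from the description, not the actual chunkhash source):

```perl
#!/usr/bin/env perl
# Interval-hash sketch: hash a block, skip an interval, repeat, then
# hash the chunk digests together into one final value.
use strict;
use warnings;
use Digest::SHA;

sub chunkhash {
    my ($path, $read_size, $skip_size) = @_;
    open my $fh, '<:raw', $path or die "$path: $!";
    my $final = Digest::SHA->new(1);            # SHA-1, as in the post
    my $pos   = 0;
    while (my $n = read $fh, my $buf, $read_size) {
        $final->add(Digest::SHA->new(1)->add($buf)->digest);
        $pos += $n + $skip_size;
        seek $fh, $pos, 0 or last;              # jump over the interval
    }
    return $final->b64digest;                   # base64, like the output above
}

# Large-file parameters from the post: read 1 Mb, skip 63 Mb.
print chunkhash($0, 1024 * 1024, 63 * 1024 * 1024), "\n";
```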

submitted by /u/vogelke

This week in PSC (218) | 2026-03-16

blogs.perl.org

All three of us attended this long meeting covering quite a bit of ground:

  • CVE-2026-3381 obliges us to cut a 5.42.2 point release with an updated Compress::Raw::Zlib.

  • We accepted Philippe’s and Eric’s offer to handle the last dev releases of the cycle.

  • Olaf Alders requested more explicit EOL notices and has updated perlpolicy.pod and the release manager guide accordingly. We agreed that the release announcement mails for the final dev release and the stable release should also contain a brief note about the perl version which is falling out of support, and filed an issue to make this happen.

  • We sent mail to kick off the voting process for some new core team member candidates.

  • We discussed the state of Devel::PPPort. It has been outdated for some time and needs to be unstuck.

  • We would like to get customize.dat down to the only entry that cannot be removed (for version.pm). We will try to coordinate with maintainers.

  • We noticed that we missed the deprecation of multiple use VERSION declarations in the same scope, which was supposed to be fatalized in 5.44. It is too late now to do that in this dev cycle, so the warning will have to change to 5.46 and the deprecation revisited next cycle.

  • Further on the topic of overlooked deprecations, we considered how to prevent this from continuing to happen. We decided that some kind of documentation of recurring PSC obligations during a cycle is needed, which would also include things like the contentious changes freeze and release blocker triage.

  • There was not much time left for release blocker triage, so we only did a little, which surfaced no candidate blockers so far. (A few already-definite blockers have been spotted and marked outside of triage.)

[P5P posting of this summary]

This week in PSC (219) | 2026-03-30

blogs.perl.org

All three of us attended.

  • We discussed policy questions raised by the recent submission of some LLM-generated PRs. We need to hold a conversation about this among the Core team. No contributions of this kind will be accepted while the discussion is still ongoing.

  • We finally had a good chunk of time to spend on release blocker triage. We made quick progress, working through half our list and closing some issues in the process. We newly marked 4 issues as blockers.

[P5P posting of this summary]

Can the TPF save PerlMonks?

r/perl

The site has been down again since yesterday. Just wondering if the Foundation can issue a grant to migrate it to another web hosting provider?

submitted by /u/pmz

Originally published at Perl Weekly 766

Hi there,

This week's Perl landscape firmly establishes that while the history of Perl is rich and exciting, the language is also a place for experimentation and innovation. A handful of releases went out, and Perl v5.43.9 came with plenty of changes; the major one for me was the enhanced /xx pattern modifier. In between, two important patch releases, Perl v5.42.2-RC1 and Perl v5.40.4, addressed the vulnerability in Compress::Raw::Zlib. Don't you dare call Perl dead.

Ever worked with XS modules? Well, we have three related XS modules that make it look simple and easy. XS enables efficient, high-speed unique identifier creation using Horus, Apophis, and Sekhmet, and as a bonus you get to see how they can be used together. Robert seems to be on a roll with another of his gems, Eshu, a code formatter written entirely in C and exposed to Perl through XS.

Not everything has to be XS, though. Dave showed how you can work with TOON (Token-Oriented Object Notation), a textual format for representing structured data with the same data model as JSON. Using his new module, one can easily work with the TOON data model. If you are an XS fan, feel free to create an XS version of TOON.

Do you use Java? If so, you now have the choice of using Perl's power inside Java. The PerlOnJava project gives us handy tools to get the job done: jperl, jcpan, and jprove.

The week was fun, almost too much to handle in such a short time, but I am not complaining. I am finding it hard to keep up; how about you?

Enjoy the rest of the newsletter.

--
Your editor: Mohammad Sajid Anwar.

Announcements

TPRF Board Announces the 2025 Annual Report

The Board is pleased to share the 2025 Annual Report from The Perl and Raku Foundation.

Articles

Beautiful Perl feature: "heredocs", multi-line strings embedded in source code

This article on the Beautiful Perl Feature - Heredocs and MultiLine Strings provides a nice introductory overview of how to use Perl's heredoc syntax to create readable, maintainable multiline text. Practical examples combined with simple explanations allow experienced programmers and novices alike to take a fresh look at a feature that has been around for many years.

Perl, the Strange Language That Built the Early Web

A glimpse at the history of Perl, the unusual language that made the early web. Through CGI scripting, Perl became the original medium for dynamic, interactive pages and for automating text processing, and the article traces, both technically and culturally, the significant role its practicality and versatility played in creating the first wave of web interactivity and making it an integral part of the early world wide web.

Horus, Apophis, and Sekhmet: A C/XS Identifier Stack for Perl

This post focuses on three related XS modules for efficient, high-speed unique identifier creation (UUID, ULID, and deterministic) and content-addressable storage in Perl. It provides a comprehensive overview of how to use the tools in conjunction with each other to build an efficient and scalable unique ID workflow.

Eshu: Indentation Fixer for Eight Languages, Written in C

This article discusses a portable C-based program that uniformly fixes code indentation across eight different programming languages. Examples show how Eshu can make your code's indentation consistent with very little effort and no extra heavy-duty tools. For developers who want an alternative to traditional language-specific formatters, it presents Eshu as a lightweight formatting solution they may find useful.

Writing a TOON Module for Perl

The article presents TOON (Token-Oriented Object Notation), which aims to be simple for both people and LLMs to construct and understand, using as few punctuation marks as possible while keeping the data structure easily accessible. It also discusses why TOON could be beneficial and presents a Perl implementation module for TOON with an interface familiar to anyone who has used JSON.pm.

CPAN

Graphics::Toolkit::Color 2.0 feature overview

The Graphics::Toolkit::Color 2.0 feature overview post provides an impressive look at the most significant improvements made in developing GTC 2.0. It outlines how GTC has grown beyond basic colour routines into a much richer multi-colour-space library, complete with the ability to create beautiful gradients, measure colours perceptually, and a variety of tools for both designers and developers. Overall, this is a succinct overview that does an excellent job of showing why GTC 2.0 is a unique addition to CPAN.

PerlOnJava Gets a CPAN Client

This is a great update on the addition of native CPAN support to Perl-on-JVM tooling. It shows how to use an established CPAN client to install modules and access the wider CPAN ecosystem in a way that feels as natural as the non-JVM clients. It gives many real-world examples and is an excellent source of information for those who want to connect Perl and Java.

Lingua::* - From 17 to 61 Languages: Resurrecting and Modernizing PetaMem's Number Conversion Suite

The blog entry, "Lingua Revival", is an interesting way to reintroduce Lingua, combining nostalgia with new features for modern-day Perl. The story is easy to follow and focuses on being usable today, which will benefit both long-time users and newcomers to the project.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win a prize of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 367

Welcome to a new week with a couple of fun tasks "Max Odd Binary" and "Conflict Events". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 366

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Count Prefixes" and "Valid Times" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

Count the Times

Raku Musings has a clearly written "Count the Times" post that gives a well-organised overview. It shows how idiomatic features work together effectively in Raku, resulting in a clear and elegant solution. There is an excellent balance between compact code and an informative explanation. The post demonstrates the use of expressive constructs that lend themselves to solving this type of problem using Raku.

Could We Start Again, Please

Bob Lied writes an engaging post about the problem in a clear manner, interspersing logic with humor and making it a pleasure to read! The author reviews alternative methods with their advantages and disadvantages, and demonstrates a practical approach along with good Perl coding skill.

Valid Times

Bob Lied's "Valid Times" post systematically breaks the problem down into distinct steps while paying significant attention to possible edge cases and the practical limitations of validating times. It presents succinct but complete Perl code and explains the sound logic behind the implementation, allowing readers to follow along easily and use it as a reference when addressing the same type of parsing problem.

Perl Weekly Challenge: Week 366

Jaldhar has written an extensive, interesting post detailing how to tackle the Week 366 tasks, separating them into two sections: Problem 1 and Problem 2. This helps readers easily understand the problems themselves, as well as providing a clear path to a solution using Perl. The post also clearly states the logic behind each step, allowing readers to learn from Jaldhar's experience while still being able to move on to solving this week's challenges independently.

Pre-Timed Counters

Jörg Sommrey's post for Week 366 presents an elegant solution that exemplifies clear and concise Perl programming with a command of idiomatic constructs. The reader will appreciate Sommrey's clean, logical approach to the problem space and his use of expressive, efficient code, which reflects both familiarity with and appreciation for the inherent beauty of programming.

what time is it?

Luca Ferrari's post is a further example of his continuing theme of approaching the Perl Weekly Challenge in an analytical and exploratory way, frequently developing solutions in several languages and on several platforms to better understand the challenges. Luca's posts provide very useful instruction and examples, help you learn through experimentation, and encourage you to re-examine the real-world nature of the solution.

Perl Weekly Challenge 366

The write-up gives a very reasoned overview of the problem with easy-to-follow methods of solving it using logical thinking. There is a good amount of coding as well as thorough explanations that create a valid and helpful source for those searching for an understanding of this issue and its methods of resolution using Perl.

Counting Times Without Questions

Matthias Muth's Week 366 article is a clear and precise description of his thought process, presenting solutions concisely while being well organised, making the underlying concept easy to understand. It has the elegant, idiomatic style familiar from Matthias' other contributions to the Perl Weekly Challenge, and a very clean decomposition of the problem that more experienced readers will appreciate.

The Times They Are A-Countin'

The entertaining narrative of Packy Anderson's post combines humor and solid technical approaches to a problem to keep readers interested and provide them with an enjoyable and intuitive experience involved in the solution. His creative approach to framing the challenge and providing clear examples of how to solve it makes for a positive experience for all.

Prefixes and times?

Peter Campbell Smith's Week 366 Write-up provides an unambiguous, pragmatic solution style representing a strong real-world Perl mindset. The emphasis is placed on solving the problem in an accurate and efficient manner through simple implementation methods. The provided solution is straightforward and effective; he understands the relevant tasks thoroughly and prefers to solve issues clearly and without complexity (typical of all Weekly Challenges).

The Weekly Challenge - 366: Count Prefixes

Reinier Maliepaard's submission demonstrates a logical, coherent framework and logical correctness, making it easy for the reader to follow. His structured writing reflects discipline, analytic thought and succinctness, resulting in a combination of clear and robust Perl source code that matches his problem-solving strategy.

The Weekly Challenge - 366: Valid Token Counter

The Week 366 second post by Reinier Maliepaard provides another example of his methodical and rational approach to problem-solving with a detailed logical breakdown along with concise Perl code to solve the problem. The article focuses on providing a clear, correct, and easily read explanation of how to work through validation problems, offering users of all skill levels an accessible, educational account.

The Weekly Challenge #366

Robbie Hatley's Week 366 answers include usable Perl solutions as well as easy-to-follow documentation for each step of reasoning. The result is a practical, understandable solution. The write-ups convey both the "how" and the "why" of developing the final product, and teach the reader to think through each implementation logically.

The Time of the Count is Over

Roger's Week 366 post is an impressive example of multi-language exploration with Ruby, Lua, PostScript and Raku; it illustrates how to solve the problem in Perl and develop cross-language thinking by showing various programming paradigms applied to the same problem, all with clear and entertaining explanatory text.

Happy 7th birthday TWC!

Simon Green's 7th Anniversary Post for The Weekly Challenge is a heartfelt reflection on how far we've come as a community over the past seven years, combining his personal experience with his deep appreciation for all the amazing contributors and readers. It's an uplifting and well-written post that captures the essence of what The Weekly Challenge is about and how it has positively affected our lives.

Rakudo

2026.12 Ich bin ein Berliner

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report.

Events

Perl Maven online: Testing in Perl - part 2

April 2, 2026

Perl Maven online: Testing in Perl - part 3

April 9, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Welcome to the Week #367 of The Weekly Challenge.
Thank you Team PWC for your continuous support and encouragement.

TPRF Board Announces the 2025 Annual Report

Perl Foundation News

The Board is pleased to share the 2025 Annual Report from The Perl and Raku Foundation.

You can download the full report from the Perl and Raku Foundation website.

Strengthening the Foundation

2025 was a year of both challenge and progress. Like many nonprofits, the Foundation faced funding constraints that required careful prioritization of resources. At the same time, increased focus on fundraising and donor engagement helped stabilize support for the work that matters most. A number of processes and tools were overhauled, allowing the Board to manage the funding more effectively, and pay grants more promptly and at lower overhead expense than had been the case previously.

Contributions from sponsors, corporate partners, and individual donors played a critical role in sustaining operations—particularly for core development and infrastructure.

Funding What Matters Most

Financial stewardship remained a top priority throughout the year. The Foundation focused its resources on:

  • Supporting the Perl 5 Core Maintenance Fund
  • Investing in Raku development and ecosystem improvements
  • Maintaining essential infrastructure and services

While some grant activity was reduced during tighter periods, the report describes the Foundation's recovery from those trials and outlines a clear path toward expanding funding as donations grow.

Our total income for the year was $253,744.86, with total expenditures of $233,739.75. 92% of our spending supported grants, events, and infrastructure. Our largest single expenditure remains the Perl Core Maintenance Grants, one of the long-time pillars of the Foundation's programs.

A Community-Funded Future

The Foundation’s work is made possible by the community it serves. Every donation—whether from individuals or organizations—directly supports the developers, tools, and systems that keep Perl and Raku reliable and evolving.

In 2025, we also strengthened our fundraising efforts, building a more sustainable base of recurring and long-term support to ensure continuity in the years ahead.

Looking Ahead

Our focus for the coming year is clear:

  • Grow recurring donations and sponsorships
  • Restore and expand the grants program
  • Continue developing transparent, responsible financial management

We’re grateful to everyone who contributed in 2025. Your support keeps the ecosystem strong.

If you rely on Perl or Raku, we encourage you to take part in sustaining them. Your support is always welcome!

Writing a TOON Module for Perl

Perl Hacks

Every so often, a new data serialisation format appears and people get excited about it. Recently, one of those formats is **TOON** — Token-Oriented Object Notation. As the name suggests, it’s another way of representing the same kinds of data structures that you’d normally store in JSON or YAML: hashes, arrays, strings, numbers, booleans and nulls.

So the obvious Perl question is: *“Ok, where’s the CPAN module?”*

This post explains what TOON is, why some people think it’s useful, and why I decided to write a Perl module for it — with an interface that should feel very familiar to anyone who has used JSON.pm.

I should point out that I knew about [Data::TOON](https://metacpan.org/pod/Data::TOON) but I wanted something with an interface that was more like JSON.pm.

## What TOON Is

TOON stands for **Token-Oriented Object Notation**. It’s a textual format for representing structured data — the same data model as JSON:

* Objects (hashes)
* Arrays
* Strings
* Numbers
* Booleans
* Null

The idea behind TOON is that it is designed to be **easy for both humans and language models to read and write**. It tries to reduce punctuation noise and make the structure of data clearer.

If you think of the landscape like this:

| Format | Human-friendly | Machine-friendly | Very common |
| ------ | -------------- | ---------------- | ----------- |
| JSON | Medium | Very | Yes |
| YAML | High | Medium | Yes |
| TOON | High | High | Not yet |

TOON is trying to sit in the middle: simpler than YAML, more readable than JSON.

Whether it succeeds at that is a matter of taste — but it’s an interesting idea.

## TOON vs JSON vs YAML

It’s probably easiest to understand TOON by comparing it to JSON and YAML. Here’s the same “person” record written in all three formats.

### JSON

{
  "name": "Arthur Dent",
  "age": 42,
  "email": "arthur@example.com",
  "alive": true,
  "address": {
    "street": "High Street",
    "city": "Guildford"
  },
  "phones": [
    "01234 567890",
    "07700 900123"
  ]
}

### YAML

name: Arthur Dent
age: 42
email: arthur@example.com
alive: true
address:
  street: High Street
  city: Guildford
phones:
  - 01234 567890
  - 07700 900123

### TOON

name: Arthur Dent
age: 42
email: arthur@example.com
alive: true
address:
  street: High Street
  city: Guildford
phones[2]: 01234 567890,07700 900123

You can see that TOON sits somewhere between JSON and YAML:

* Less punctuation and quoting than JSON
* More explicit structure than YAML
* Still very easy to parse
* Still clearly structured for machines

That’s the idea, anyway.

## Why People Think TOON Is Useful

The current interest in TOON is largely driven by AI/LLM workflows.

People are using it because:

1. It is easier for humans to read than JSON.
2. It is less ambiguous and complex than YAML.
3. It maps cleanly to the JSON data model.
4. It is relatively easy to parse.
5. It works well in prompts and generated output.

In other words, it’s not trying to replace JSON for APIs, and it’s not trying to replace YAML for configuration files. It’s aiming at the space where humans and machines are collaborating on structured data.

You may or may not buy that argument — but it’s an interesting niche.

## Why I Wrote a Perl Module

I don’t have particularly strong opinions about TOON as a format. It might take off, it might not. We’ve seen plenty of “next big data format” ideas over the years.

But what I *do* have a strong opinion about is this:

> If a data format exists, then Perl should have a CPAN module for it that works the way Perl programmers expect.

Perl already has very good, very consistent interfaces for data serialisation:

* JSON
* YAML
* Storable
* Sereal

They all tend to follow the same pattern, particularly the object-oriented interface:

use JSON;
my $json = JSON->new->pretty->canonical;
my $text = $json->encode($data);
my $data = $json->decode($text);

So I wanted a TOON module that worked the same way.

## Design Goals

When designing the module, I had a few simple goals.

### 1. Familiar OO Interface

The primary interface should be object-oriented and feel like JSON.pm:

use TOON;
my $toon = TOON->new
    ->pretty
    ->canonical
    ->indent(2);
my $text = $toon->encode($data);
my $data = $toon->decode($text);

If you already know JSON, you already know how to use TOON.

There are also convenience functions, but the OO interface is the main one.

### 2. Pure Perl Implementation

Version 0.001 is pure Perl. That means:

* Easy to install
* No compiler required
* Works everywhere Perl works

If TOON becomes popular and performance matters, someone can always write an XS backend later.

### 3. Clean Separation of Components

Internally, the module is split into:

* **Tokenizer** – turns text into tokens
* **Parser** – turns tokens into Perl data structures
* **Emitter** – turns Perl data structures into TOON text
* **Error handling** – reports line/column errors cleanly

This makes it easier to test and maintain.

### 4. Do the Simple Things Well First

Version 0.001 supports:

* Scalars
* Arrayrefs
* Hashrefs
* undef → null
* Pretty printing
* Canonical key ordering

It does **not** (yet) try to serialise blessed objects or do anything clever. That can come later if people actually want it.

## Example Usage (OO Style)

Here’s a simple Perl data structure:

my $data = {
    name   => "Arthur Dent",
    age    => 42,
    drinks => [ "tea", "coffee" ],
    alive  => 1,
};

### Encoding

use TOON;
my $toon = TOON->new->pretty->canonical;
my $text = $toon->encode($data);
print $text;

### Decoding

use TOON;
my $toon = TOON->new;
my $data = $toon->decode($text);
print $data->{name};

### Convenience Functions

use TOON qw(encode_toon decode_toon);
my $text = encode_toon($data);
my $data = decode_toon($text);

But the OO interface is where most of the flexibility lives.

## Command Line Tool

There’s also a command-line tool, toon_pp, similar to json_pp:

cat data.toon | toon_pp

This will pretty-print the TOON data.

## Final Thoughts

I don’t know whether TOON will become widely used. Predicting the success of data formats is a fool’s game. But the cost of supporting it in Perl is low, and the potential usefulness is high enough to make it worth doing.

And fundamentally, this is how CPAN has always worked:

> See a problem. Write a module. Upload it. See if anyone else finds it useful.

So now Perl has a TOON module. And if you already know how to use JSON.pm, you already know how to use it.

That was the goal.

The post Writing a TOON Module for Perl first appeared on Perl Hacks.

Continuing the dev.to series about beautiful Perl features, here are the recent articles (March 2026):

Introduction

Since I last wrote an XS tutorial, my knowledge of C has increased. This has come from improvements in LLM software, which has helped me past points where previously I would have been stuck, and has enabled me to craft more elegant and efficient XS implementations.

Today I will share with you my technique for writing reusable C/XS code.

One of the most powerful patterns in XS development is writing your core logic in pure C header files. This gives you:

  • Zero-cost reuse - no runtime linking, no shared libraries, just a #include line.
  • No Perl dependency in the C layer - your headers work in any C project
  • Compile-time inlining - the compiler sees everything, optimises aggressively
  • Simple distribution - headers are installed alongside the Perl module via the PM hash in Makefile.PL

This tutorial walks through the complete pattern step by step, using a minimal working example you can build and run yourself.

The Example

We will create two distributions:

  1. Abacus - a provider distribution that ships a reusable pure-C abacus_math.h header containing simple arithmetic functions
  2. Tally - a consumer distribution that #includes the Abacus header to build its own XS module, without duplicating any C code

As always, let's start by creating the distributions that we will need for this tutorial. Open your terminal and run module-starter. If you are using a modern version of Module::Starter then the command has changed slightly since my last posts.

  module-starter --module=Abacus --author="LNATION <email@lnation.org>"
  module-starter --module=Tally --author="LNATION <email@lnation.org>"

Part 1: The Provider Distribution (Abacus)

Write the pure-C header

This is the reusable part. It has zero Perl dependencies - just standard C.

Now enter the Abacus directory and create the include directory:

  cd Abacus
  mkdir include

Then create a new file abacus_math.h inside include:

  touch include/abacus_math.h
  vim include/abacus_math.h

Paste the following code into the file:

#ifndef ABACUS_MATH_H
#define ABACUS_MATH_H

/*
 * abacus_math.h - Pure C arithmetic library (no Perl dependencies)
 *
 * This header is the reusable entry point for any C or XS project
 * that needs basic arithmetic operations. It has ZERO Perl/XS
 * dependencies.
 *
 * Usage from another XS module:
 *
 *     #include "abacus_math.h"
 *
 * Build: add -I/path/to/Abacus/include to your compiler flags.
 */

#include <stdint.h>

/* ── Error handling hook ─────────────────────────────────────────
 *
 * Consumers can #define ABACUS_FATAL(msg) before including this
 * header to route errors through their own mechanism.
 *
 * In an XS module you would typically do:
 *
 *     #define ABACUS_FATAL(msg) croak("%s", (msg))
 *     #include "abacus_math.h"
 *
 * In plain C the default behaviour is fprintf + abort.
 */
#ifndef ABACUS_FATAL
#  include <stdio.h>
#  include <stdlib.h>
#  define ABACUS_FATAL(msg) do { \
       fprintf(stderr, "abacus fatal: %s\n", (msg)); \
       abort(); \
   } while (0)
#endif

/* ── Arithmetic operations ───────────────────────────────────── */

static inline int32_t
abacus_add(int32_t a, int32_t b) {
    return a + b;
}

static inline int32_t
abacus_subtract(int32_t a, int32_t b) {
    return a - b;
}

static inline int32_t
abacus_multiply(int32_t a, int32_t b) {
    return a * b;
}

static inline int32_t
abacus_divide(int32_t a, int32_t b) {
    if (b == 0) {
        ABACUS_FATAL("division by zero");
    }
    return a / b;
}

static inline int32_t
abacus_factorial(int32_t n) {
    int32_t result = 1;
    int32_t i;
    if (n < 0) {
        ABACUS_FATAL("factorial of negative number");
    }
    /* Note: int32_t overflows for n > 12; fine for a teaching example. */
    for (i = 2; i <= n; i++) {
        result *= i;
    }
    return result;
}

#endif /* ABACUS_MATH_H */

The code above demonstrates three critical design patterns for reusable C headers:

static inline functions eliminate linker complications by giving each translation unit its own copy of the function. The compiler can then inline these small arithmetic operations directly into the call site, producing zero-overhead abstractions. This is key to the "zero-cost reuse" principle—there is no shared library dependency, no function call overhead, just pure generated code.

The ABACUS_FATAL macro hook provides a customization point for error handling. By default, it calls fprintf() and abort() in standalone C programs, but consumers can #define ABACUS_FATAL(msg) croak("%s", (msg)) before including the header to integrate seamlessly with Perl's exception system. This single mechanism allows the same C header to work across Perl XS, plain C, and other environments without code duplication.

The use of only stdint.h integers and no Perl types ensures the header remains truly portable. There are no SV* pointers, no pTHX context variables, no XSUB.h includes—just standard C99 types. This purity is what allows the header to be #included into any C or XS project without creating hidden Perl dependencies at the C layer.

Write the Perl-facing XS header

Next we will add another header to hold the Perl/XS-specific logic. Inside the include directory create a new file called abacus.h. The rationale behind this thin wrapper is that it pulls in Perl's headers and sets up the ABACUS_FATAL macro to use croak(). To reiterate, only an XS distribution should include this header, whereas abacus_math.h is generic and could be used by other languages that bind to C.

    touch include/abacus.h
    vim include/abacus.h

Paste the following code into the file

#ifndef ABACUS_H
#define ABACUS_H

/*
 * abacus.h - Perl XS wrapper header for the Abacus library
 *
 * This header sets up Perl-specific error handling and includes
 * the pure C core library.
 *
 * For reuse from OTHER XS modules without Perl overhead, include
 * abacus_math.h directly instead (see that header for usage).
 */

#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"
#include "ppport.h"

/* Route fatal errors through Perl's core croak() */
#define ABACUS_FATAL(msg) croak("%s", (msg))

/* Pull in the pure-C library */
#include "abacus_math.h"

#endif /* ABACUS_H */

Now any .xs file can #include "abacus.h" and get the full Perl/XS environment plus all the pure-C functions, with errors properly integrated.

Write the XS file

Next we will create the XS file. Return to the root directory and then enter the lib directory, where you should see the Abacus.pm file already. Create a new XS file called Abacus.xs. This will be the glue that exposes the C functions to Perl.

  cd ../lib
  touch Abacus.xs
  vim Abacus.xs

#include "abacus.h"

MODULE = Abacus  PACKAGE = Abacus

PROTOTYPES: DISABLE

int
add(a, b)
    int a
    int b
  CODE:
    RETVAL = abacus_add(a, b);
  OUTPUT:
    RETVAL

int
subtract(a, b)
    int a
    int b
  CODE:
    RETVAL = abacus_subtract(a, b);
  OUTPUT:
    RETVAL

int
multiply(a, b)
    int a
    int b
  CODE:
    RETVAL = abacus_multiply(a, b);
  OUTPUT:
    RETVAL

int
divide(a, b)
    int a
    int b
  CODE:
    RETVAL = abacus_divide(a, b);
  OUTPUT:
    RETVAL

int
factorial(n)
    int n
  CODE:
    RETVAL = abacus_factorial(n);
  OUTPUT:
    RETVAL

As you can see, we include abacus.h, which pulls in all the C that we need to create our XS module. We then define add, subtract, multiply, divide and factorial as XSUBs. As you should know by now, XSUBs can be called directly from your Perl code.

Write the Perl module with include_dir()

Next open the pm file and update it to add an exporter for the XSUBs we have just created.

package Abacus;

use 5.008003;
use strict;
use warnings;

our $VERSION = '0.01';

use Exporter 'import';
our @EXPORT_OK = qw(add subtract multiply divide factorial);

require XSLoader;
XSLoader::load('Abacus', $VERSION);

Now for the critical piece that makes header sharing work: an include_dir() method which returns the path to the installed headers so that consumer distributions can find them at build time.

sub include_dir {
    my $dir = $INC{'Abacus.pm'};
    $dir =~ s{Abacus\.pm$}{Abacus/include};
    return $dir;
}

1;

How include_dir() works:

  1. When Perl loads Abacus.pm, it records the full path in %INC (e.g. /usr/lib/perl5/site_perl/Abacus.pm)
  2. include_dir() replaces Abacus.pm with Abacus/include
  3. That directory exists because Makefile.PL installs the headers there (see next step)

Write the Makefile.PL that installs headers

The PM hash is what makes headers available to other distributions after install. It maps source files to their installation destinations.

Abacus/Makefile.PL

use 5.008003;
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME             => 'Abacus',
    AUTHOR           => 'Your Name <you@example.com>',
    VERSION_FROM     => 'lib/Abacus.pm',
    ABSTRACT_FROM    => 'lib/Abacus.pm',
    LICENSE          => 'artistic_2',
    MIN_PERL_VERSION => '5.008003',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '0',
    },
    TEST_REQUIRES => {
        'Test::More' => '0',
    },
    PREREQ_PM => {},
    XSMULTI => 1,
    # XS configuration
    INC    => '-I. -Iinclude',
    OBJECT => '$(O_FILES)',

    # *** THIS IS THE KEY PART ***
    # Install headers alongside the module so dependent
    # distributions can find them via Abacus->include_dir()
    PM => {
        'lib/Abacus.pm'           => '$(INST_LIB)/Abacus.pm',
        'include/abacus.h'        => '$(INST_LIB)/Abacus/include/abacus.h',
        'include/abacus_math.h'   => '$(INST_LIB)/Abacus/include/abacus_math.h',
    },

    dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz' },
    clean => { FILES => 'Abacus-*' },
);

The PM hash does two things:

  1. Installs Abacus.pm as normal
  2. Copies the header files into Abacus/include/ alongside the module

After make install, the filesystem looks like:

site_perl/
  Abacus.pm
  Abacus/
    include/
      abacus.h
      abacus_math.h

Write a test

Abacus/t/01-basic.t

use strict;
use warnings;
use Test::More;
use Abacus qw(add subtract multiply divide factorial);

is(add(2, 3),       5,   'add');
is(subtract(10, 4), 6,   'subtract');
is(multiply(3, 7),  21,  'multiply');
is(divide(20, 4),   5,   'divide');
is(factorial(5),     120, 'factorial');

eval { divide(1, 0) };
like($@, qr/division by zero/, 'divide by zero croaks');

eval { factorial(-1) };
like($@, qr/negative/, 'negative factorial croaks');

done_testing;

Build and install Abacus

cd Abacus
perl Makefile.PL
make
make test
make install          # installs headers into site_perl

Part 2: The Consumer Distribution (Tally)

Tally is a separate distribution that reuses Abacus's C arithmetic without duplicating any code. It adds its own "running total" functionality on top.

Write the Makefile.PL that finds Abacus headers

This is where the consumer locates the provider's headers. The two-step resolution strategy supports both installed (CPAN) and development (sibling directory) scenarios.

Tally/Makefile.PL

use 5.008003;
use strict;
use warnings;
use ExtUtils::MakeMaker;

# Resolve Abacus include directory:
#   1. Try installed Abacus module (CPAN / system)
#   2. Fall back to sibling directory (development)
my $abacus_inc;
eval {
    no warnings 'redefine';
    local *XSLoader::load = sub {};  # skip XS bootstrap
    require Abacus;
    my $dir = Abacus->include_dir();
    $abacus_inc = $dir if $dir && -d $dir;
};
if (!$abacus_inc && -d '../Abacus/include') {
    $abacus_inc = '../Abacus/include';
}
die "Cannot find Abacus include directory.\n"
  . "Install Abacus or place it as a sibling directory.\n"
    unless $abacus_inc;

WriteMakefile(
    NAME             => 'Tally',
    AUTHOR           => 'Your Name <you@example.com>',
    VERSION_FROM     => 'lib/Tally.pm',
    ABSTRACT_FROM    => 'lib/Tally.pm',
    LICENSE          => 'artistic_2',
    MIN_PERL_VERSION => '5.008003',
    CONFIGURE_REQUIRES => {
        'ExtUtils::MakeMaker' => '0',
        'Abacus'              => '0.01',
    },
    TEST_REQUIRES => {
        'Test::More' => '0',
    },
    PREREQ_PM => {
        'Abacus' => '0.01',
    },

    # Point the compiler at Abacus's installed headers
    INC    => "-I$abacus_inc",
    OBJECT => '$(O_FILES)',

    dist  => { COMPRESS => 'gzip -9f', SUFFIX => 'gz' },
    clean => { FILES => 'Tally-*' },
);

Let's walk through the header resolution:

  1. Try the installed path first - require Abacus loads the module, then Abacus->include_dir() returns the path where the headers were installed. We stub out XSLoader::load because we only need the pure-Perl include_dir() method, not the XS functions.
  2. Fall back to sibling directory - during development, Abacus and Tally often live side by side. ../Abacus/include handles this case.
  3. Die with a clear message if neither path works.

The resolved path is passed to INC, which adds it to the C compiler's include search path (-I/path/to/Abacus/include).

Abacus is listed in both CONFIGURE_REQUIRES and PREREQ_PM:

  • CONFIGURE_REQUIRES ensures Abacus is installed before Makefile.PL runs (needed because we require Abacus at configure time)
  • PREREQ_PM ensures it is available at runtime too

Write the XS file

This is where the reuse happens. Tally includes abacus_math.h directly -
no Perl coupling, just pure C function calls.

Tally/Tally.xs

#define PERL_NO_GET_CONTEXT
#include "EXTERN.h"
#include "perl.h"
#include "XSUB.h"

/* Hook Abacus errors into Perl's croak() */
#define ABACUS_FATAL(msg) croak("%s", (msg))

/* Include the pure-C header from Abacus - no Perl deps in the header */
#include "abacus_math.h"

/* ── Tally's own C logic, built on top of Abacus ─────────────── */

typedef struct {
    int32_t total;
} tally_state_t;

static inline void
tally_init(tally_state_t *state) {
    state->total = 0;
}

static inline int32_t
tally_add(tally_state_t *state, int32_t value) {
    state->total = abacus_add(state->total, value);
    return state->total;
}

static inline int32_t
tally_subtract(tally_state_t *state, int32_t value) {
    state->total = abacus_subtract(state->total, value);
    return state->total;
}

static inline int32_t
tally_multiply_total(tally_state_t *state, int32_t value) {
    state->total = abacus_multiply(state->total, value);
    return state->total;
}

static inline int32_t
tally_get(tally_state_t *state) {
    return state->total;
}

static inline void
tally_reset(tally_state_t *state) {
    state->total = 0;
}

/* ── XS bindings ─────────────────────────────────────────────── */

MODULE = Tally  PACKAGE = Tally

PROTOTYPES: DISABLE

SV *
new(class)
    const char *class
  CODE:
    tally_state_t *state;
    Newxz(state, 1, tally_state_t);
    tally_init(state);
    RETVAL = newSV(0);
    sv_setref_pv(RETVAL, class, (void *)state);
  OUTPUT:
    RETVAL

int
add(self, value)
    SV *self
    int value
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    RETVAL = tally_add(state, value);
  OUTPUT:
    RETVAL

int
subtract(self, value)
    SV *self
    int value
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    RETVAL = tally_subtract(state, value);
  OUTPUT:
    RETVAL

int
multiply_total(self, value)
    SV *self
    int value
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    RETVAL = tally_multiply_total(state, value);
  OUTPUT:
    RETVAL

int
total(self)
    SV *self
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    RETVAL = tally_get(state);
  OUTPUT:
    RETVAL

void
reset(self)
    SV *self
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    tally_reset(state);

void
DESTROY(self)
    SV *self
  CODE:
    tally_state_t *state = INT2PTR(tally_state_t *, SvIV(SvRV(self)));
    Safefree(state);

Notice that Tally includes abacus_math.h (the pure C header), not abacus.h (the Perl-facing wrapper). This is intentional - Tally has its own Perl/XS setup and only needs the C functions.

Write the Perl module

Tally/lib/Tally.pm

package Tally;

use 5.008003;
use strict;
use warnings;

our $VERSION = '0.01';

require XSLoader;
XSLoader::load('Tally', $VERSION);

1;

__END__

=head1 NAME

Tally - Running total calculator using Abacus C headers

=head1 SYNOPSIS

    use Tally;

    my $t = Tally->new;
    $t->add(10);        # total is now 10
    $t->add(5);         # total is now 15
    $t->subtract(3);    # total is now 12
    $t->multiply_total(2);  # total is now 24
    say $t->total;      # 24
    $t->reset;          # back to 0

=cut

Write a test

Tally/t/01-basic.t

use strict;
use warnings;
use Test::More;

use_ok('Tally');

my $t = Tally->new;
isa_ok($t, 'Tally');

is($t->total, 0, 'starts at zero');

is($t->add(10), 10, 'add 10');
is($t->add(5),  15, 'add 5');
is($t->subtract(3), 12, 'subtract 3');
is($t->multiply_total(2), 24, 'multiply by 2');
is($t->total, 24, 'total is 24');

$t->reset;
is($t->total, 0, 'reset to zero');

done_testing;

Build Tally (development mode)

cd Tally
perl Makefile.PL     # finds ../Abacus/include automatically
make
make test

I hope you found this tutorial useful! If you have questions about XS, C header reuse, or building modular Perl/C libraries, please leave a message.

As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 from Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also contributed in other languages such as Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.

I am trying to understand the behavior of the following script under Perl 5.28.2:

sub split_and_print {
    my $label = $_[0];
    my $x = $_[1];
    my @parts = split('\.', $x);
    print sprintf("%s -> %s %s %.20f\n", $label, $parts[0], $parts[1], $x);
}

my @raw_values = ('253.38888888888889', '373.49999999999994');
for my $raw_value (@raw_values) {
    split_and_print("'$raw_value'", $raw_value);
    split_and_print("1.0 * '$raw_value'", 1.0 * $raw_value);
}

For me, this prints:

'253.38888888888889' -> 253 38888888888889 253.38888888888888573092
1.0 * '253.38888888888889' -> 253 388888888889 253.38888888888888573092
'373.49999999999994' -> 373 49999999999994 373.49999999999994315658
1.0 * '373.49999999999994' -> 373 5 373.49999999999994315658

All of that is as expected, except for the last line: I don't understand why, during the automatic conversion of $x from a number to a string in the call to split, it is converted into 373.5. print(373.49999999999994 - 373.5) says -5.6843418860808e-14, so Perl knows that those numbers are not equal (i.e. it's not simply a matter of limited floating-point precision in Perl).

perlnumber says

As mentioned earlier, Perl can store a number in any one of three formats, but most operators typically understand only one of those formats. When a numeric value is passed as an argument to such an operator, it will be converted to the format understood by the operator.

[...]

  • If the source number is outside of the limits representable in the target form, a representation of the closest limit is used. (Loss of information)

  • If the source number is between two numbers representable in the target form, a representation of one of these numbers is used. (Loss of information)

But '373.5' doesn't seem to be the "closest limit" of representing 373.49999999999994 as a string -- that would be '373.49999999999994', or some other decimal representation that, when converted back to a number yields the original value.

Also: what is different about 253.38888888888889?

I am looking for a definite reference that explains how exactly the automatic conversion of numbers to strings works in Perl.

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Clone - recursively copy Perl datatypes
    • Version: 0.50 on 2026-03-28, with 33 votes
    • Previous CPAN version: 0.49 was released 3 days before
    • Author: ATOOMIC
  2. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260327.002 on 2026-03-27, with 25 votes
    • Previous CPAN version: 20260318.001 was released 9 days before
    • Author: BRIANDFOY
  3. DBD::Oracle - Oracle database driver for the DBI module
    • Version: 1.95 on 2026-03-24, with 33 votes
    • Previous CPAN version: 1.91_5 was released 8 days before
    • Author: ZARQUON
  4. IPC::Run - system() and background procs w/ piping, redirs, ptys (Unix, Win32)
    • Version: 20260322.0 on 2026-03-22, with 39 votes
    • Previous CPAN version: 20250809.0 was released 7 months, 12 days before
    • Author: TODDR
  5. Mojo::Pg - Mojolicious ♥ PostgreSQL
    • Version: 4.29 on 2026-03-23, with 98 votes
    • Previous CPAN version: 4.28 was released 5 months, 23 days before
    • Author: SRI
  6. Object::Pad - a simple syntax for lexical field-based objects
    • Version: 0.825 on 2026-03-25, with 48 votes
    • Previous CPAN version: 0.824 was released 1 day before
    • Author: PEVANS
  7. PDL::Stats - a collection of statistics modules in Perl Data Language, with a quick-start guide for non-PDL people.
    • Version: 0.856 on 2026-03-22, with 15 votes
    • Previous CPAN version: 0.855 was released 1 year, 16 days before
    • Author: ETJ
  8. SPVM - The SPVM Language
    • Version: 0.990152 on 2026-03-26, with 36 votes
    • Previous CPAN version: 0.990151 was released the same day
    • Author: KIMOTO
  9. Term::Choose - Choose items from a list interactively.
    • Version: 1.781 on 2026-03-25, with 15 votes
    • Previous CPAN version: 1.780 was released 1 month, 20 days before
    • Author: KUERBIS
  10. YAML::Syck - Fast, lightweight YAML loader and dumper
    • Version: 1.42 on 2026-03-27, with 18 votes
    • Previous CPAN version: 1.41 was released 4 days before
    • Author: TODDR

(dcxxvii) metacpan weekly report - Mail::Make

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 43

Week's winner: Mail::Make (+2)

Build date: 2026/03/28 20:47:31 GMT



I need to move some chunks of text around in a file. I am partially successful, in the sense that I can move only the first chunk successfully.

The text in the file looks like this:

text regtext1 text regtext2 text regtextA regtextZ end

where text is some random text, and regtext1,2,3 are pieces of text conforming to some regular rules / patterns. All of them can contain pretty much any printable character, and a few more (diacritics, end-of-line, ...).

What I do now is something like this:

s/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/gs

the result being that regtextA is moved inside regtext1:

text regregtextAtext1 text regtext2 text regtextZ end

The issue is that after the replace, the search-and-replace continues at the position after regtextA, before regtextZ - if I understand the algorithm correctly.

How can I modify the search-and-replace expression in such way to do the same thing for regtext2...regtextZ, and all other such occurrences? The text in the end should look like:

text regregtextAtext1 text regregtextZtext2 text end

but it does not happen.

I might have to use the \G anchor, but I have no idea how. For debugging I use regex101.com.

Looking at a previous example, I tried the following code:

$s =~ s{(?:\G(?!\A)|)\K(reg)(text\d+.*?)(regtext[A-Z]+)}{"$1$3$2"}

but it makes also only one replacement - probably because I do not understand exactly how the original code (and \G) works.

I tried the corrected version of the code suggested in the answer, but it takes an "infinity" of time (I forcefully stopped the execution after several minutes), just like in the previous example - even if I limit the execution to only one replacement. The presence of the "while" is "malefic": in its absence, the one replacement happens "instantly".

In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.

Here is an excerpt of the code that gives an error:

package remote {
__DATA__
print "$ENV{HOME}\n";
}

as shown below:

Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.

I can't seem to find any mis-matched brackets.

In contrast, when I rewrite the same package without braces, the code works.

package remote;
__DATA__
print "$ENV{HOME}\n";

I'd be grateful, if the experienced folks can highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1 in case that matters.

I just spent another fun and productive week in Marseille at the Koha Hackfest hosted by BibLibre. We (Mark, tadzik and me) arrived on Sunday (Mark and tadzik by plane from Vienna and Poland; I came by train from Berlin via Strasbourg) and left on Friday.

There were the usual interesting discussions on all things Koha, presentations of new features and of course a lot of socializing. And cheese, so much cheese...

Elasticsearch

On the first day there was a discussion on Elasticsearch and getting rid of Zebra (the old search engine used by Koha). Actually getting rid of Zebra is not an option (for now), because small installations won't want to set up and run Elasticsearch. But Mark proposed using our Marc Normalization Plugin as the basis for a new internal, DB-only search engine (so no need for an external index etc.) and over the course of the week (and with LLM help) implemented a prototype. It would really be amazing if we could get this running!

I worked a bit on improving Elasticsearch indexing:

  • Bulk biblio ES index update after auth change: When merging (or updating) authorities, the Elasticsearch indexing of the linked biblios now will happen in one background job per authority instead of one background job per biblio. So an authority that is used in 100 biblios will now trigger one indexing background job with 100 biblio items instead of 100 background jobs with 1 biblio item each.
  • Zebraqueue should not be added to when only Elasticsearch is used: We added a new syspref "ElasticsearchEnableZebraQueue". If disabled, no data will be written to the zebraqueue table, because usually when using Elasticsearch you don't need to also run Zebra.

I got sign-offs and Pass-QA for both issues during the hackfest, thanks Fridolin, Paul and Baptiste (who owns the coolest tea mug at BibLibre..)

QA

I also did QA on a bunch of other issues: 22639, 35267, 36550, 39158, 40906, 41767, 41967, 42107. Some of them were of interest to me, some I did because other people nicely asked me to :-)

LLM, "AI" and Agentic Coding

This was again a hot topic, with some people using those tools to great effect, some hating them, and some in between. As in my last post on the German Perl Workshop I again want to point out this blog post: I Sold Out for $20 a Month and All I Got Was This Perfectly Generated Terraform, and during the event the post Thoughts on slowing the fuck down dropped (by Mario Zechner, who wrote the coding agent I (currently) use).

Anyway, Koha now has some guidelines on AI and LLM-assisted contributions and on using LLM features inside Koha.

Claude vs domm

While working on unit tests for Bug 40577 I struggled with a test failing only if I ran the whole test script (as opposed to only the one subtest I was working on). It seemed to be a problem with mocked tests, so I asked Joubu (who by chance was standing right next to me). Together we figured out the scoping problem: if you use Test::MockObject/MockModule multiple times on a class from different scopes, the mocked methods/functions might not be removed automatically. You have to call unmock explicitly. After the patch was done, I described the error to Claude and asked for a fix, expecting not to get anything usable. But (to my slight horror) it produced the correct explanation and fix in very short time. On the one hand: amazing; on the other hand: very scary.

Other random stuff:

  • When it rains and a TGV arrives at the station, more people have the idea to take a taxi than taxis are available. So walking the short distance was necessary, but we (Katrin, who I met on the train, and me) still got wet. At least we had cold burgers...
  • Paul showed me a non-Koha tool he has written: mdv - A terminal markdown viewer with vim keybindings. Very nice, I especially like it to view checkouts of gitlab wikis!
  • I was not the only Team Scheisse fan attending! Philip++
  • Philip also pointed out the very detailed and interesting shared notes produced by various attendees during the event.
  • On my third visit to Marseille, I managed to navigate the city center quite well.
  • I finally made it to the Tangerine record store, very nice selection. I still did not let the shop owner talk me into buying a 200€ original UK pressing of Unknown Pleasures by Joy Division.
  • I did not get Moules Frites, but at least some Galette and Cidre.
  • Having been to Senegal in February, I now realized that there are a lot of places selling Yassa and Mafe in Marseille. I guess they were there last year too, I just did not see them, having never eaten Yassa or Mafe before.
  • It can get very windy in Marseille.
  • I should do it like Jake(?) and cycle (at least partly) to the next hackfest.

Thanks

Thanks to BibLibre and Paul Poulain for organizing the event, and to all the attendees for making it such a wonderful 5 days!

Looking forward to meeting you all again at the upcoming KohaCon in Karlsruhe!

Updates

  • 2026-03-03: Added link to shared notes.

I have a calendar week of a given year, like so:

perl -E "use POSIX qw(strftime); say strftime('%Y-%V', localtime)"

How do I generate a unix timestamp for this calendar week? (for example a timestamp for the start of said week).

My use case is that I need to group different timestamps (YYYY-MM-DD) into calendar weeks, but then need unix timestamps of those weeks to proceed further. I use strftime to convert YYYY-MM-DD into calendar weeks, but have difficulties proceeding from there.

Make, Bash, and a scripting language of your choice


Creating AWS Resources…let me count the ways

You need to create an S3 bucket, an SQS queue, an IAM policy and a few other AWS resources. But how?…TIMTOWTDI

The Console

  • Pros: visual, immediate feedback, no tooling required, great for exploration
  • Cons: not repeatable, not version controllable, opaque, clickops doesn’t scale, “I swear I configured it the same way”

The AWS CLI

  • Pros: scriptable, composable, already installed, good for one-offs
  • Cons: not idempotent by default, no state management, error handling is manual, scripts can grow into monsters

CloudFormation

  • Pros: native AWS, state managed by AWS, rollback support, drift detection
  • Cons: YAML/JSON verbosity, slow feedback loop, stack update failures are painful, error messages are famously cryptic, proprietary to AWS, subject to change without notice

Terraform

  • Pros: multi-cloud, huge community, mature ecosystem, state management, plan before apply
  • Cons: state file complexity, backend configuration, provider versioning, HCL is yet another language to learn, overkill for small projects, often requires tricks & contortions

Pulumi

  • Pros: real programming languages, familiar abstractions, state management
  • Cons: even more complex than Terraform, another runtime to install and maintain

CDK

  • Pros: real programming languages, generates CloudFormation, good for large organizations
  • Cons: CloudFormation underneath means CloudFormation problems, Node.js dependency

…and the rest of the crew…

Ansible, AWS SAM, Serverless Framework - each with their own opinions, dependencies, and learning curves.

Every option beyond the CLI adds a layer of abstraction, a new language or DSL, a state management story, and a new thing to learn and maintain. For large teams managing hundreds of resources across multiple environments that overhead is justified. For a solo developer or small team managing a focused set of resources it can feel like overkill.

Even in large organizations, not every project should be folded into the corporate infrastructure IaC tool. Moreover, not every project gets the attention of the DevOps team necessary to create or support the application infrastructure.

What if you could get idempotent, repeatable, version-controlled infrastructure management using tools you already have? No new language, no state backend, no provider versioning. Just make, bash, a scripting language you’re comfortable with, and your cloud provider’s CLI.

And yes…my love affair with make is endless.

We’ll use AWS examples throughout, but the patterns apply equally to Google Cloud (gcloud) and Microsoft Azure (az). The CLI tools differ, the patterns don’t.


A word about the AWS CLI --query option

Before you reach for jq, perl, or python to parse CLI output, it’s worth knowing that most cloud CLIs have built-in query support. The AWS CLI’s --query flag implements JMESPath - a query language for JSON that handles the majority of filtering and extraction tasks without any additional tools:

# get a specific field
aws lambda get-function \
    --function-name my-function \
    --query 'Configuration.FunctionArn' \
    --output text

# filter a list
aws sqs list-queues \
    --query 'QueueUrls[?contains(@, `my-queue`)]|[0]' \
    --output text

--query is faster, requires no additional dependencies, and keeps your pipeline simple. Reach for it first. When it falls short - complex transformations, arithmetic, multi-value extraction - that’s when a one-liner earns its place:

# perl
aws lambda get-function --function-name my-function | \
    perl -MJSON -n0 -e '$l=decode_json($_); print $l->{Configuration}{FunctionArn}'

# python
aws lambda get-function --function-name my-function | \
    python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Configuration']['FunctionArn'])"

Both get the job done. Use whichever lives in your shed.


What is Idempotency?

The word comes from mathematics - an operation is idempotent if applying it multiple times produces the same result as applying it once. Sort of like those ID10T errors…no matter how hard or how many times that user clicks on that button they get the same result.

In the context of infrastructure management it means this: running your resource creation script twice should have exactly the same outcome as running it once. The first run creates the resource. The second run detects it already exists and does nothing - no errors, no duplicates, no side effects.

This sounds simple but it’s surprisingly easy to get wrong. A naive script that just calls aws lambda create-function will fail on the second run with a ResourceConflictException. A slightly better script wraps that in error handling. A truly idempotent script never attempts to create a resource it knows already exists.

And it works in both directions. The idempotent bug - running a failing process repeatedly and getting the same error every time - is what happens when your failure path is idempotent too. Consistently wrong, no matter how many times you try. The patterns we’ll show are designed to ensure that success is idempotent while failure always leaves the door open for the next attempt.
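In code terms, an operation f is idempotent when f(f(x)) = f(x). Two everyday shell commands make the contrast concrete (using a throwaway temp directory):

```shell
d="$(mktemp -d)/demo"

mkdir "$d"        # first run: creates the directory

# plain mkdir is not idempotent - the second run errors out,
# much like calling create-function twice
mkdir "$d" 2>/dev/null || echo "second mkdir failed"

# mkdir -p is idempotent - same end state, no error, much like put-rule
mkdir -p "$d" && echo "mkdir -p succeeded again"
```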

Cloud APIs fall into four distinct behavioral categories when it comes to idempotency, and your tooling needs to handle each one differently:

Case 1 - The API is idempotent and produces output

Some APIs can be called repeatedly without error and return useful output each time - whether the resource was just created or already existed. aws events put-rule is a good example - it returns the rule ARN whether the rule was just created or already existed. The pattern: call the read API first, capture the output, call the write API only if the read returned nothing.

Case 2 - The API is idempotent but produces no output

Some write APIs succeed silently - they return nothing on success. aws s3api put-bucket-notification-configuration is a good example. It will happily overwrite an existing configuration without complaint, but returns no output to confirm success. The pattern: call the API, synthesize a value for your sentinel using && echo to capture something meaningful on success.

Case 3 - The API is not idempotent

Some APIs will fail with an error if you try to create a resource that already exists. aws lambda add-permission returns ResourceConflictException if the statement ID already exists. aws lambda create-function returns ResourceConflictException if the function already exists. These APIs give you no choice - you must query first and only call the write API if the resource is missing.

Case 4 - The API call fails

Any of the above can fail - network errors, permission problems, invalid parameters. When a call fails you must not leave behind a sentinel file that signals success. A stale sentinel is worse than no sentinel - it tells Make the resource exists when it doesn’t, and subsequent runs silently skip the creation step. The patterns: || rm -f $@ when writing directly, or else rm -f $@ when capturing to a variable first.


The Sentinel File

Before we look at the four patterns in detail, we need to introduce a concept that ties everything together: the sentinel file.

A sentinel file is simply a file whose existence signals that a task has been completed successfully. It contains no magic - it might hold the output of the API call that created the resource, or it might just be an empty file created with touch. What matters is that it exists when the task succeeded and doesn’t exist when it hasn’t.

make has used this pattern since the 1970s. When you declare a target in a Makefile, make checks whether a file with that name exists before deciding whether to run the recipe. If the file exists and is newer than its dependencies, make skips the recipe entirely. If the file doesn’t exist, make runs the recipe to create it.

For infrastructure management this is exactly the behavior we want:

my-resource:
    @value="$$(aws some-service describe-resource \
            --name $(RESOURCE_NAME) 2>&1)"; \
    if [[ -z "$$value" || "$$value" = "ResourceNotFound" ]]; then \
        value="$$(aws some-service create-resource \
            --name $(RESOURCE_NAME))"; \
    fi; \
    test -e $@ || echo "$$value" > $@

The first time you run make my-resource the file doesn’t exist, the recipe runs, the resource is created, and the API response is written to the sentinel file my-resource. The second time you run it, make sees the file exists and skips the recipe entirely - zero API calls.

When an API call fails we want to be sure we do not create the sentinel file. We’ll cover the failure case in more detail in Pattern 4 of the next section.


The Four Patterns

Armed with the sentinel file concept and an understanding of the four API behavioral categories, let’s look at concrete implementations of each pattern.


Pattern 1 - Idempotent API with output

The simplest case. Query the resource first - if it exists capture the output and write the sentinel. If it doesn’t exist, create it, capture the output, and write the sentinel. Either way you end up with a sentinel containing meaningful content.

The SQS queue creation is a good example:

sqs-queue:
    @queue="$$(aws sqs list-queues \
        --query 'QueueUrls[?contains(@, `$(QUEUE_NAME)`)]|[0]' \
        --output text --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$queue" | grep -q 'error\|Error'; then \
        echo "ERROR: list-queues failed: $$queue" >&2; \
        exit 1; \
    elif [[ -z "$$queue" || "$$queue" = "None" ]]; then \
        queue="$(QUEUE_NAME)"; \
        aws sqs create-queue --queue-name $(QUEUE_NAME) \
            --profile $(AWS_PROFILE); \
    fi; \
    test -e $@ || echo "$$queue" > $@

Notice --query doing the filtering work before the output reaches the shell. No jq, no pipeline - the AWS CLI extracts exactly what we need. The result is either a queue URL or empty. If empty we create. Either way $$queue ends up with a value and the sentinel is written exactly once.

The EventBridge rule follows the same pattern:

lambda-eventbridge-rule:
    @rule="$$(aws events describe-rule \
            --name $(RULE_NAME) \
            --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$rule" | grep -q 'ResourceNotFoundException'; then \
        rule="$$(aws events put-rule \
            --name $(RULE_NAME) \
            --schedule-expression "$(SCHEDULE_EXPRESSION)" \
            --state ENABLED \
            --profile $(AWS_PROFILE))"; \
    elif echo "$$rule" | grep -q 'error\|Error'; then \
        echo "ERROR: describe-rule failed: $$rule" >&2; \
        exit 1; \
    fi; \
    test -e $@ || echo "$$rule" > $@

Same shape - query, create if missing, write sentinel once.


Pattern 2 - Idempotent API with no output

Some APIs succeed silently. aws s3api put-bucket-notification-configuration is the canonical example - it happily overwrites an existing configuration and returns nothing. No output means nothing to write to the sentinel.

The solution is to synthesize a value using &&:

define notification_configuration =
use JSON;

my $lambda_function = $ENV{lambda_function};
my $function_arn = decode_json($lambda_function)->{Configuration}->{FunctionArn};

my $configuration = {
 LambdaFunctionConfigurations => [ {
   LambdaFunctionArn => $function_arn,
   Events => [ split ' ', $ENV{s3_event} ],
  }
 ]
};

print encode_json($configuration);
endef

export s_notification_configuration = $(value notification_configuration)

lambda-s3-trigger: lambda-s3-permission
        temp="$$(mktemp)"; trap 'rm -f "$$temp"' EXIT; \
        lambda_function="$$(cat lambda-function)"; \
        echo $$(s3_event="$(S3_EVENT)" lambda_function="$$lambda_function" \
          perl -e "$$s_notification_configuration") > $$temp; \
        trigger="$$(aws s3api put-bucket-notification-configuration \
            --bucket $(BUCKET_NAME) \
            --notification-configuration file://$$temp \
            --profile $(AWS_PROFILE) && cat $$temp)"; \
        test -e $@ || echo "$$trigger" > $@

The && cat $$temp is the key. If the API call succeeds the && fires and $$trigger gets the configuration JSON string - something meaningful to write to the sentinel. If the API call fails && doesn’t fire, $$trigger stays empty because the Makefile recipe aborts.

Using a scriptlet (s_notification_configuration) might seem like overkill, but it’s worth not having to fight shell quoting issues!

Writing JSON used in many AWS API calls to a temporary file is usually a better way than passing a string on the command line. Unless you wrap the JSON in quotes you’ll be fighting shell quoting and interpolation issues…and of course you can write your scriptlets in Perl or Python!


Pattern 3 - Non-idempotent API

Some APIs are not idempotent - they fail with a ResourceConflictException or similar if the resource already exists. aws lambda add-permission and aws lambda create-function are both in this category. There is no “create or update” variant - you must check existence first and only call the write API if the resource is missing.

The Lambda S3 permission target is a good example:

lambda-s3-permission: lambda-function s3-bucket
        @permission="$$(aws lambda get-policy \
                --function-name $(FUNCTION_NAME) \
                --profile $(AWS_PROFILE) 2>&1)"; \
        if echo "$$permission" | grep -q 'ResourceNotFoundException' || \
           ! echo "$$permission" | grep -q s3.amazonaws.com; then \
            permission="$$(aws lambda add-permission \
                --function-name $(FUNCTION_NAME) \
                --statement-id s3-trigger-$(BUCKET_NAME) \
                --action lambda:InvokeFunction \
                --principal s3.amazonaws.com \
                --source-arn arn:aws:s3:::$(BUCKET_NAME) \
                --profile $(AWS_PROFILE))"; \
        elif echo "$$permission" | grep -q 'error\|Error'; then \
            echo "ERROR: get-policy failed: $$permission" >&2; \
            exit 1; \
        fi; \
        if [[ -n "$$permission" ]]; then \
            test -e $@ || echo "$$permission" > $@; \
        else \
            rm -f $@; \
        fi

A few things worth noting here…

  • get-policy returns the full policy document which may contain multiple statements - we check for the presence of s3.amazonaws.com specifically using ! grep -q rather than just checking for an empty response. This handles the case where a policy exists but doesn’t yet have the S3 permission we need.
  • The sentinel is only written if $$permission is non-empty after the if block. This covers the case where get-policy returns nothing and add-permission also fails - the sentinel stays absent and the next make run will try again.
  • We capture stderr into our bash variable (the 2>&1) so we can tell "the resource does not exist" apart from other errors. When other failures are possible, 2>&1 combined with specific error-string matching gives you both idempotency and visibility. Swallowing errors silently (2>/dev/null) is how idempotent bugs are born.
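The capture-and-match idea can be sketched on its own. Here fake_api is a hypothetical stand-in for aws lambda get-policy, emitting the kind of error text the real CLI prints to stderr:

```shell
# Hypothetical stand-in for `aws lambda get-policy` on a missing resource:
# prints an error to stderr and exits non-zero, like the real CLI.
fake_api() { echo "An error occurred (ResourceNotFoundException)" >&2; return 254; }

# 2>&1 folds stderr into the captured output; || true keeps a failing
# call from aborting the script before we can inspect the message.
permission=$(fake_api 2>&1) || true
if echo "$permission" | grep -q 'ResourceNotFoundException'; then
    result="resource missing: safe to create"
elif echo "$permission" | grep -qi 'error'; then
    result="unexpected error: abort"
else
    result="resource exists: nothing to do"
fi
echo "$result"
```

The branch order matters: the expected "not found" string is checked before the generic error match, since the CLI's not-found message itself contains the word "error".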

Pattern 4 - Failure handling

This isn’t a separate pattern so much as a discipline that applies to all three of the above. There are two mechanisms depending on how the sentinel is written.

Case 1: When the sentinel is written directly by the command:

aws lambda create-function ... > $@ || rm -f $@

|| rm -f $@ ensures that if the command fails, the partial or empty sentinel is immediately cleaned up. Without it, make sees the file on the next run and silently skips the recipe - an idempotent bug.

Case 2: When the sentinel is written by capturing output to a variable first:

if [[ -n "$$value" ]]; then \
    test -e $@ || echo "$$value" > $@; \
else \
    rm -f $@; \
fi

The else rm -f $@ serves the same purpose. If the variable is empty - because the API call failed - the sentinel is removed. If the sentinel doesn’t exist yet nothing is written. Either way the next make run will try again.

In both cases the goal is the same: a sentinel file should only exist when the underlying resource exists. A stale sentinel is worse than no sentinel.
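Both cases can be exercised in a few lines of plain bash. This is only a sketch: false and echo stand in for the real AWS CLI calls, and the file names are invented.

```shell
tmp=$(mktemp -d)
sentinel="$tmp/resource.created"

# Case 1, failing call: the redirection creates the file before the
# command runs, the command fails, and || rm -f removes the partial sentinel.
false > "$sentinel" || rm -f "$sentinel"
[ ! -e "$sentinel" ] && case1="no stale sentinel"

# Case 2, output captured into a variable first: write the sentinel only
# if the variable is non-empty, otherwise remove any stale one.
value=$(echo '{"status":"ok"}')
if [ -n "$value" ]; then
    [ -e "$sentinel" ] || echo "$value" > "$sentinel"
else
    rm -f "$sentinel"
fi
[ -e "$sentinel" ] && case2="sentinel written"

echo "$case1 / $case2"
rm -rf "$tmp"
```

Either way the invariant holds: after the recipe finishes, the sentinel exists exactly when the "resource" does.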

Depending on how your recipe is written you may not need to test the variable that captures the output at all. In our Makefiles we set .SHELLFLAGS := -ec, which causes make to abort a recipe as soon as any command in it fails. This means targets that don't write to $@ - like our sqs-queue target above - don't need explicit failure handling: make will die loudly and the sentinel won't be written. In that case you don't even need to test $$value and can simplify writing the sentinel file to this:

test -e $@ || echo "$$value" > $@
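The effect of -e can be simulated outside make. With .SHELLFLAGS := -ec, make hands each recipe to the shell as shell -ec '…', and -e aborts at the first failing command, so a sentinel write later in the recipe is never reached. A sketch (the quoted recipe is hypothetical):

```shell
# bash -ec mimics how make invokes a recipe under .SHELLFLAGS := -ec:
# `false` fails, -e aborts, and the echo (our "sentinel write") never runs.
out=$(bash -ec 'false; echo "sentinel written"' 2>/dev/null) || out="recipe aborted"
echo "$out"
```

This assumes SHELL is bash or a POSIX sh; the -e semantics are the same either way.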

Conclusion

Creating AWS resources can be done with several different tools…all of them eventually call the AWS APIs and process the returned payloads. Each of these tools has its place. Each adds something. Each also brings its own complexity, dependencies, and learning curve.

For a small project or a focused set of resources - the kind a solo developer or small team manages for a specific application - you don't need tools with a high cognitive or resource load. You can use the tools you already have on your belt: make, bash, [insert favorite scripting language here], and aws. And you can leverage those same tools equally well with gcloud or az.

The four patterns we’ve covered handle every AWS API behavior you’ll encounter:

  • Query first, create only if missing, write a sentinel
  • Synthesize output when the API has none
  • Always check before calling a non-idempotent API
  • Clean up on failure with || rm -f $@

These aren’t new tricks - they’re straightforward applications of tools that have been around for decades. make has been managing file-based dependencies since 1976. The sentinel file pattern predates cloud computing entirely. We’re just applying them to a new problem.

One final thought. The idempotent bug - running a failing process repeatedly and getting the same error every time - is the mirror image of what we’ve built here. Our goal is idempotent success: run it once, it works. Run it again, it still works. Run it a hundred times, nothing changes. || rm -f $@ is what separates idempotent success from idempotent failure - it ensures that a bad run always leaves the door open for the next attempt rather than cementing the failure in place with a stale sentinel.

Your shed is already well stocked. Sometimes the right tool for the job is the one you’ve had hanging on the wall for thirty years.


Further Reading

  • “Advanced Bash-Scripting Guide” - https://tldp.org/LDP/abs/html/index.html
  • “GNU Make” - https://www.gnu.org/software/make/manual/html_node/index.html
  • Dave Oswald, “Perl One Liners for the Shell” (Perl conference presentation): https://www.slideshare.net/slideshow/perl-oneliners/77841913
  • Peteris Krumins, “Perl One-Liners” (No Starch Press): https://nostarch.com/perloneliners
  • Sundeep Agarwal, “Perl One-Liners Guide” (free online): https://learnbyexample.github.io/learn_perl_oneliners/
  • AWS CLI JMESPath query documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html

building my todo list app

rjbs forgot what he was saying

For years, I’ve wanted a better daily to-do checklist. I had a good idea what I wanted from it, but I knew it was going to be a pain to produce. It didn’t involve any complicated ideas, just a bunch of UI considerations, and that’s not my area of expertise, so I’ve made do with a bunch of options that were worse for me, which led to worse outcomes. I accepted the tradeoffs, but I wasn’t thrilled. Now I’ve finally built exactly the app I wanted, and it went great. I call it, for now, NerfJournal.

Project Seven: NerfJournal

That’s right, this is another “stuff Rik did with Claude” post. This one feels like maybe the project that had the greatest impact on me so far, and that’s in three ways: First, the tool I’ve produced is genuinely useful and I use it daily. Secondly, it made clear the ways in which the realm of coding easily available to me was expanded by agents. Finally, it’s been a great way to not just access but also learn those things, which I’ll write about in a follow-up post.

Anyway, the project is called NerfJournal, because it’s like a bullet journal, but wouldn’t really hurt anybody. Except me, if Hasbro decides to complain about the name.

I try to stick to a routine in setting up my work day. I keep a “work diary”, a bit like the one Mark Dominus once wrote about, and which I got to see in practice when we last worked together. This journal is very simple. There’s a bunch of checkboxes of things I mean to do every day, and then there’s space for notes on what else I actually did. I try to add a new page to this every day, and I’ve got a Monday template and a “rest of the week” template. The Monday template includes stuff I only need to do once a week. Here’s a sample page, not filled in:

Monday agenda in Notion

You’ll see that the 6th item on the morning routine is to post to #cyr-scrum. This is the Slack channel where, every day, the Cyrus team members are each meant to post what we did the previous day and what we’re going to do today. While the Notion page includes “stuff I do every day, but might forget”, the #cyr-scrum post is generally “stuff I won’t do again once it’s done, and might need to carry over until tomorrow”.

That is: if I didn’t fill my water pitcher today, I failed, and tomorrow I’ll get a new instance of that to do. It’s not “still open”, it’s a new one, and it’s interesting (well, to me) whether I kept up a streak. On the other hand, if I post in #cyr-scrum that I’m going to complete ticket CYR-1234, but I don’t do it, I better do it tomorrow. And if I do, there’s no reason to see it again on the next day.

a scrum post

A problem here is that now I have two to-do lists. One is a real todo list that I can tick “done” on, and the other is a post in Slack that I want to refer back to, from time to time, to see whether I’m keeping up with what I said I’d do. GTD rightfully tells us that, generally, “more todo lists is worse than fewer todo lists”, and I wanted fewer. But I didn’t want to make Linear tasks every day for things like “drink water”. And putting my scrum in Notion would be tedious. And CalDAV with VTODO has its own problems.

What I wanted was a single todo list that would be easy to use and visually simple enough to just leave on my desktop for quick reference. I’d been thinking about such a program off and on (mostly off) for a year or so, and after some so-so but encouraging experiences having Claude produce SwiftUI applications for me, I thought I’d give this one a go.

The session took place over two days. After a brief false start using VTODO (well, Apple’s EventKit) as a backend, we pivoted to a custom data model and got something working. We iterated on that, adding features, fixing bugs, and tweaking the design for a good while. When I felt like it, I’d take a break to play Xbox or read a book. When I came back, Claude had not context-switched. Meanwhile, I’d had time for that diffuse cognition mode to help me “think” about next steps.

The biggest shifts were about realizing that the data model was subtly wrong. This wouldn’t have been hard to fix by hand, but it would have been fiddly and boring. Instead, I said, “Here’s the new model, do it.” Claude asked some useful questions, then did it. Meanwhile, I read Wikipedia. (I also spent some time reading the Swift source code.)

As things stand now, the app seems very likely to be useful. There are a bunch of things I still want to add. Some of them, I have a good picture of how to get them. Others, I only know the general idea. In both cases, I feel confident that I can get closer to what I want without too much serious effort. Pruning development dead ends is cheap.

You can read the whole development transcript, but it’s long. Firefox says 400 pages. But it’s there in case you want to look.

Here’s the app, loaded with test data. (There’s a Perl program to spit out predictable test data which can be imported into the app for testing.)

today's todo

Here’s today’s page, and you can see what I’ve done and haven’t. At the bottom, if you squint, you might see that one of my code review tasks says “carried over - 1 day ago”, meaning that I first put it on my list yesterday, but still haven’t done it.

If we go back a while, we can see what a “finished” day looks like:

a completed page

Now I can see all the things I did, when I marked them done, their category, and so on. I’m afraid I don’t have any days logged now that show some other things that could happen: things that didn’t get done would be shown in a “not done” section, showing that they were carried over and (maybe) done four days later. Some items could be shown as abandoned – I decided not to do them or carry them over. This is useful for those “fill the water” tasks. If I didn’t do that task on Monday, then when Tuesday starts, Monday’s todo is automatically abandoned. You can see the distinction in the previous screenshot: tasks that will carry over get round ticky bubbles, but tasks that will get auto-abandoned get square ticky boxes.

This is all pretty good, but wasn’t this supposed to help with Scrum? Well, it does! There’s a menu option to generate a “mrkdwn” (Slack’s bogus Markdown for chat) version of the day’s todo list and put it on the macOS clipboard. Then I paste that into Slack. I can configure the report (or multiple versions of a report) so it doesn’t include personal items, for example. All of that - reporting, categories, and so on - is handled in the bundle manager.

the bundle manager

The bundle manager is named for “bundles”, which are groups of tasks that I can dump onto my list with two clicks. I have one for the start of a sprint, and I have another for standard work days. I imagine that I’ll have other bundles later for things like “prepare to travel” or “conference day”. But when I click “start a new day”, I get a blank page, and I know I better start with my daily bundle.

…and one of the items on my daily bundle is “make the code review tasks”. It’s got a hyperlink (you may have noticed that todo items can have a little link icon). The hyperlink is an iterm: URI that, when clicked, prompts me to run a little Perl program. That program fetches all the GitLab and GitHub code review requests waiting on me, turns them into JSON, and passes the result to another little program that turns them into todos in NerfJournal. So I click the link, click “yes, run this program”, and then a bunch of specific-to-today tasks show up. Then I mark the first task done. I am getting all my code review done daily, just about. It’s a big process improvement.

wasn’t this post about Claude?

Well, sort of. I did all this with Claude. I described what I wanted, and I said I wanted it in SwiftUI, and Claude got to work building. I’d test, find bugs, realize that I had the wrong design, and iterate. I spent a big hunk of two days on this, and it has been a huge win. I could’ve built this on my own, for sure, but it would’ve taken weeks, at least, including “learn SwiftUI from scratch”. Possible, of course, but a much larger investment in a tool that, in the end, I might not have liked!

Is the code bad? I’m not sure. I don’t think so, but I’m not a Swift expert yet. But also: it only runs on my machine. I can see everything it does, and I can see it’s safe. I do not plan to sell it, support it, or run my business on it. Effectively, I summoned into existence a specialized tool that helps me do the job at which I am an expert, saving my expert time for expert problems. I think I will end up doing a lot of this. And hopefully I’ll pick up some new skills, as I go, from paying close attention to the new code I’m reading.

I had Claude produce a complete daily to-do checklist for me – something like a bullet journal. The results were great, and I’ve been using the program for weeks; it’s definitely helping me stay on track and keep an eye on what I’m doing. The problem was that unlike everything else I’d had Claude write, I was not competent to review this work. I felt confident it wasn’t going to cause me big problems, but what if I wanted to debug it myself? I realized there was a plausible solution waiting for me…

I gave Claude a prompt that went roughly like this:

We have built a really useful program here, and I am using it and enjoy it. Next, I would like to be able to work on it directly and to think more deeply about its architecture. The problem is that I don’t really know Swift.

I am an experienced programmer with decades of experience. I have worked primarily in Perl, but am well-versed in general programming concepts from many other different languages. It seems relevant so: I understand event-driven programming, observers, and MVC, in general.

I would like you to prepare a syllabus for me, which will help me learn SwiftUI, using NerfJournal as a working example. What might be useful here is a set of topics, starting with fundamentals and building to more complex ones, related to how the project actually works.

This began a back and forth that didn’t go on all that long. (You can read the transcript.) Claude produced a syllabus. I proposed that we turn the project into a website. We fought with Jekyll for a while. Claude told me that I wouldn’t need some skills I thought I might want. (Later, I did want them.)

Still, in short order, I had: Unit 1: Swift as a Language. It started like this:

Before touching SwiftUI, you need the language it’s built on. Swift is statically typed, compiled, and designed around a distinction — value types vs. reference types — that will shape every decision in the units that follow.

This unit covers the language features you’ll see constantly in NerfJournal’s source: structs, enums, optionals, protocols, extensions, modules, closures, and computed properties. None of this is SwiftUI-specific; it’s just Swift.

The single most important idea in this unit is that structs are value types. Everything else makes more sense once that has settled in.

I felt that the text was good. It wasn’t confusing. It wasn’t unclear. It also didn’t captivate me or lead me to imagine I was reading a lost work of Edward Gibbon. But I didn’t need that, I just needed something to systematically help me learn SwiftUI, with an eye to working on the project I’d summoned into existence. On that front, the text was good.

Eventually, I did end up creating some skills and standing instructions. First, the standing instruction:

When the user asks a question about Swift or SwiftUI during a learning discussion, log it to learning/questions.md under the appropriate unit heading, then commit it. Do this automatically without being prompted.

As I read the content, I’d do all the things I’d normally do when reading a programming book: I’d close my eyes and think hard. I’d fiddle with the source code to see how things changed. I’d go consult the authoritative documentation. But sometimes, I’d also (or instead), ask Claude to elaborate on something.

At some point, the text said that extensions were “module-scoped”. I had no idea what a module was. The text didn’t say. Rather than consult the docs, I just asked Claude: “You refer to module scope. What is a module? Is this going to be explained later? If so, no problem.”

Claude said that no, its plan hadn’t included modules, and really they belonged in unit one. It provided me a clear and useful explanation and then, without prompting, wrote a commit to add the explanation to the Q&A appendix of the book. More questions like this came up, and Claude would populate the Q&A section.

Later, I added a skill, ‘next-chapter’:

Write the next unit of NerfLearning.

First, rebase this branch on main.

Review the changes between the state of this branch before rebasing and after. If changes to the project suggest that learning/SYLLABUS.md should be updated for future chapters, make those changes and commit it.

Then review the file learning/questions.md, which reflects questions from the reader during the last unit. Merge the material from the questions into the unit they came from. Remove the now-merged questions from the questions file. Commit that.

Then write the next unit from the syllabus. When doing so, reflect on the question-and-answers content you just merged into the previous unit. That reflects the kind of thing that the reader felt was missing from the text.

Commit the new unit.

I asked Claude to write Unit 2, and it did so. “It seems like the user wants more implementation details,” it mused, “I should make sure to cover how @ViewBuilder actually works.” Then it spit out another unit. Was the unit actually better because of those instructions? How the heck should I know!? But it remained good.

I’m up to unit six now, where I’m stalled mostly due to other things taking my time. I actually feel like I can read the whole program and pretty much follow along what it’s doing, how the syntax works, how the SwiftUI “magic” is suffused through the system, and how I’d change things in significant ways. I’m no expert. At best, I’m a beginner, but I have been given a huge boost in my learning process.

Of course this sort of process could go haywire. I would not want to learn a foreign language or culture this way and then go on a diplomatic mission. Software learning is much more forgiving, because so much of it can be trivially verified by checking authoritative sources or performing experiments. Also, I’ve got a lot of experience to draw on. But even so, it’s clear that this has been valuable and I’ll do something like this again.

There is sometimes an argument that “why will anybody learn anything anymore if the computer can do the work?” I don’t get this argument. Sure, some people will try to get by on the minimum, but that’s already the case. Now there are some longer levers for just skating by. But the same levers can be used to learn more, to achieve more, and to experiment more. I don’t think any of this is an unvarnished good, but it’s also clearly not just spicy autocorrect.

I’m hoping to get back to SwiftUI in a week or two. I’m piling up a number of little features I’d like to implement, and might try a few by hand.

You can read NerfLearning, up to wherever I’ve gotten to, if you like… but it’s targeting a pretty darn small audience.

Today marks seven years since we began The Weekly Challenge mission. It all started with just a handful of us tackling challenges in Perl. Soon after, Raku joined the gang, and we began to grow steadily thanks to the incredible support from both the Perl and Raku communities.
Thank you Team PWC for your continuous support and encouragement.

I'm currently on a train from Berlin to Strasbourg and then onward to Marseille, traveling from the 28th(!) German Perl Workshop to the Koha Hackfest. I spent a few days after the Perl Workshop in Berlin with friends from school who moved to Berlin during/after university, hanging around at their homes and neighborhoods, visiting museums, professional industrial kitchens and other nice and foody places. But I want to review the Perl Workshop, so:

German Perl Workshop

It seems the last time I attended a German Perl Workshop was in 2020 (literally days before the world shut down...), so I've missed a bunch of nice events and possibilities to meet up with old Perl friends. But even after this longish break it felt a bit like returning home :-)

I traveled to Berlin by sleeper train (worked without a problem), arriving on Monday morning a few hours before the workshop started. I went to a friend's place (where I'm staying for the week), dumped my stuff, got a bike, and did a nice morning cycle through Tiergarten to the venue. Which was an actual church! And not even a secularized one.

Day 1

After a short introduction and welcome by Max Maischein (starting with a "Willkommen, liebe Gemeinde" fitting the location) he started the workshop with a talk on Claude Code and Coding-Agents. I only recently started to play around a bit with similar tools, so I could relate to a lot of the topics mentioned. And I (again?) need to point out the blog post I Sold Out for $20 a Month and All I Got Was This Perfectly Generated Terraform which sums up my feelings and experiences with LLMs much better than I could.

Abigail then shared a nice story on how they (Booking.com) sharded a database, twice, using some "interesting" tricks to move the data around while still serving reads from the correct replicas, all with nearly no downtime. Fun, but as "my" projects usually operate on a much smaller scale than Booking's I will probably not try to recreate their solution.

For lunch I met with Michael at a nearby market hall for some Vietnamese food and some planning for the upcoming Perl Toolchain Summit in Vienna.

Lars Dieckow then talked about data types in databases, or actually the lack of more complex types in databases and how one could still implement such types in SQL. Looks interesting, but probably a bit too hackish for me to actually use. I guess I have to continue handling such cases in code (which of course feels ugly, especially as I've learned to move more and more code into the DB using CTEs and window functions).

Next Flavio S. Glock showed his very impressive progress with PerlOnJava, a Perl distribution for the JVM. Cool, but probably not something I will use (mostly because I don't run Java anywhere, so adding it to our stack would make things more complex).

Then Lars showed us some of his beloved tools in Aus dem Nähkästchen, continuing a tradition started by Sven Guckes (RIP). I am already using some of the tools (realias, fzf, zoxide, htop, ripgrep) but now plan to finally clean up my dotfiles using xdg-ninja.

Now it was time for my first talk at this workshop, on Using class, the new-ish feature (available since Perl 5.38) that provides native keywords for object-oriented programming. I also sneaked in some bibliographic data structures (MAB2 and MARCXML) to share my pain with the attendees. I was a tiny bit (more) nervous, as this was the first time I was using my current laptop (a Framework running Sway/Wayland) with an external projector, but wl-present worked like a charm. After the talk Wolfram Schneider showed me his MAB2->MARC online converter, which could maybe have been a basis for our tool, but then writing our own was a "fun" way to learn about MAB2.

The last talk of the day was Lee Johnson with I Bought A Scanner showing us how he got an old (ancient?) high-res photo scanner working again to scan his various film projects. Fun and interesting!

Between the end of the talks and the social event I went for some coffee with Paul Cochrane, and we were joined by Sawyer X and Flavio and some vegan tiramisu. Paul and I then cycled to the Indian restaurant through some light drizzle and along the Spree, and only then did I realize that Paul had cycled all the way from Hannover to Berlin. I was a bit envious (even though I did in fact cycle to Berlin 16 years ago (oh my, so long ago..)). Dinner was nice, but I did not stay too long.

Day 2

Tuesday started with Richard Jelinek first showing us his rather impressive off-grid house (or "A technocrat's house - 2050s standard") and the software used to automate it before moving on to the actual topic of his talk, Perl mit AI, which turned out to be about a Perl implementation in Rust called pperl, developed with massive LLM support. It seems to be rather fast. As with PerlOnJava, I'm not sure I really want to use an alternative implementation (and of course pperl is currently marked as "Research Preview — WORK IN PROGRESS — please do not use in production environments") but maybe I will give it a try when it's more stable. Especially since we now have containers, which make setting up experimental environments much easier.

Then Alexander Thurow shared his Thoughts on (Modern?) Software Development: lots of inspirational (or depressing) quotes and some LLM criticism that had been lacking at the workshop (until now..)

Next up was Lars (again) with a talk on Hierarchien in SQL where we did a very nice derivation on how to get from some handcrafted SQL to recursive CTEs to query hierarchical graph data (DAG). I used (and even talked about) recursive CTEs a few times, but this was by far the best explanation I've ever seen. And we got to see some geizhals internals :-)

Sören Laird Sörries informed us about Digitale Souveränität und Made in Europe and I'm quite proud to say that I'm already using a lot of the services he showed (mailbox, Hetzner, fairphone, ..) though we could still do better (e.g. one project is still using a bunch of Google services)

Then Salve J. Nilsen (whose name I promise to not mangle anymore) showed us his thoughts on What might a CPAN Steward organization look like?. We already talked about this topic a few weeks ago (in preparation for the Perl Toolchain Summit), so I was not paying a lot of attention (and instead hacked up a few short slides for a lightning talk) - Sorry. But in the discussion afterwards Salve clarified that the Cyber Resilience Act applies to all "CE-marked products" and that even a Perl API backend that powers a mobile app running on a smartphone counts as part of a "CE-marked product". Before that I was under the assumption that only software running on actual physical products needs the attestation. So we should really get this Steward organization going and hopefully even profit from it!

The last slot of the day was filled with the Lightning Talks hosted by R Geoffrey Avery and his gong. I submitted two and got a "double domm" slot, where I hurried through my microblog pipeline (on POSSE and getting not-twitter-tweets from my command line via some gitolite to my self-hosted microblog and then on to Mastodon), followed by taking up Lars' challenge to show stuff from my own "Nähkästchen", in my case gopass and tofi (and some bash pipes) for an easy password manager.

We had the usual mixture of fun and/or informative short talks, but the highlight for me was Sebastian Gamaga, who gave his first talk at a Perl event on How I learned about the problem differentiating a Hash from a HashRef. Good slides, well executed, and showing a problem that I'm quite sure everybody has encountered when first learning Perl (and I have to admit I also sometimes mix up hash/ref and regular/curly braces when setting up a hash). Looking forward to a "proper" talk by Sebastian next year :-)

This evening I skipped having dinner with the Perl people, because I had to finish some slides for Wednesday and wanted to hang out with my non-Perl friends. But I've heard that a bunch of people had fun bouldering!

Day 3

I had a job call at 10:00 and (unfortunately) a bug to fix, so I missed the three talks in the morning session and only arrived at the venue during lunch break and in time for Paul Cochrane talking about Getting FIT in Perl (and fit he did get, too!). I've only recently started to collect exercise data (as I got a sport watch for my birthday) and being able to extract and analyze the data using my own software is indeed something I plan to do.

Next up was Julien Fiegehenn on Turning humans into SysAdmins, where he showed us how he used LLMs to adapt his developer mentorship framework to also work for sysadmins, and how he got them (the LLMs, not the fresh sysadmins) to differentiate between Julian and Julien (among other things..)

For the final talk it was my turn again: Deploying Perl apps using Podman, make & gitlab. I'm not too happy with the slides, as I had to rush a bit to finish them and did not properly highlight all the important points. But it still went well (enough) and it seemed that a few people found one of the main points (using bash / make in gitlab CI instead of specifying all the steps directly in .gitlab-ci.yml) useful.

Then Max spoke the closing words and announced the location of next year's German Perl Workshop, which will take place in Hannover! Nice, I've never been there and plan to attend (and maybe join Paul on a bike ride there?)

Summary

As usual, a lot of thanks to the sponsors, the speakers, the orgas and the attendees. Thanks for making this nice event possible!

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::cpanminus - get, unpack, build and install modules from CPAN
    • Version: 1.7049 on 2026-03-17, with 286 votes
    • Previous CPAN version: 1.7048 was 1 year, 4 months, 18 days before
    • Author: MIYAGAWA
  2. App::HTTPThis - Export the current directory over HTTP
    • Version: v0.11.1 on 2026-03-16, with 25 votes
    • Previous CPAN version: v0.11.0 was 2 days before
    • Author: DAVECROSS
  3. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260318.001 on 2026-03-18, with 25 votes
    • Previous CPAN version: 20260315.002 was 3 days before
    • Author: BRIANDFOY
  4. Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
    • Version: 0.022 on 2026-03-21, with 17 votes
    • Previous CPAN version: 0.021 was 1 year, 1 month, 17 days before
    • Author: LEONT
  5. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.20.0 on 2026-03-19, with 103 votes
    • Previous CPAN version: 3.19.0 was 4 days before
    • Author: TURNSTEP
  6. Git::CPAN::Patch - Patch CPAN modules using Git
    • Version: 2.5.2 on 2026-03-18, with 45 votes
    • Previous CPAN version: 2.5.1
    • Author: YANICK
  7. JSON - JSON (JavaScript Object Notation) encoder/decoder
    • Version: 4.11 on 2026-03-22, with 109 votes
    • Previous CPAN version: 4.10 was 3 years, 5 months, 13 days before
    • Author: ISHIGAKI
  8. JSON::PP - JSON::XS compatible pure-Perl module.
    • Version: 4.18 on 2026-03-20, with 22 votes
    • Previous CPAN version: 4.17_01 was 2 years, 7 months, 21 days before
    • Author: ISHIGAKI
  9. Log::Any - Bringing loggers and listeners together
    • Version: 1.719 on 2026-03-16, with 69 votes
    • Previous CPAN version: 1.718 was 9 months, 14 days before
    • Author: PREACTION
  10. MetaCPAN::API - (DEPRECATED) A comprehensive, DWIM-featured API to MetaCPAN
    • Version: 0.52 on 2026-03-16, with 26 votes
    • Previous CPAN version: 0.51 was 8 years, 9 months, 9 days before
    • Author: HAARG
  11. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260320 on 2026-03-20, with 44 votes
    • Previous CPAN version: 5.20260308 was 11 days before
    • Author: BINGOS
  12. Net::SSLeay - Perl bindings for OpenSSL and LibreSSL
    • Version: 1.96 on 2026-03-21, with 27 votes
    • Previous CPAN version: 1.95_03 
    • Author: CHRISN
  13. OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
    • Version: 0.7009 on 2026-03-19, with 15 votes
    • Previous CPAN version: 0.7008
    • Author: ETJ
  14. SPVM - The SPVM Language
    • Version: 0.990150 on 2026-03-19, with 36 votes
    • Previous CPAN version: 0.990149
    • Author: KIMOTO
  15. Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
    • Version: 1.045 on 2026-03-19, with 14 votes
    • Previous CPAN version: 1.044 was 10 days before
    • Author: CHOROBA
  16. TimeDate - Date and time formatting subroutines
    • Version: 2.35 on 2026-03-21, with 28 votes
    • Previous CPAN version: 2.34_03 was 1 day before
    • Author: ATOOMIC
  17. Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
    • Version: 0.70 on 2026-03-19, with 20 votes
    • Previous CPAN version: 0.69
    • Author: CHANSEN
  18. YAML::Syck - Fast, lightweight YAML loader and dumper
    • Version: 1.39 on 2026-03-21, with 18 votes
    • Previous CPAN version: 1.38
    • Author: TODDR

TPRC Talk Submission Deadline extended

Perl Foundation News

We are re-opening the talk submissions with a new deadline of April 21, 2026. Please submit your 20 minute talks, and 50 minute talks at https://tprc.us/. Let us know if you need help with your submission or your talk development, because we have mentors who can listen to your ideas and guide you.

We are also taking submissions for interactive sessions. These are sessions that have a theme, but invite maximum audience participation; sessions which take advantage of the gathering of community members that have a wide range of experience and ideas to share. You would introduce the theme and moderate the session. If you have ideas for interactive sessions, but don’t want to moderate them yourself, please go to our wiki to enter your ideas, and maybe someone else will pick up the ball!

Still on the [b]leading edge

Perl Hacks

About eighteen months ago, I wrote a post called On the Bleading Edge about my decision to start using Perl’s new class feature in real code. I knew I was getting ahead of parts of the ecosystem. I knew there would be occasional pain. I decided the benefits were worth it.

I still think that’s true.

But every now and then, the bleading edge reminds you why it’s called that.

Recently, I lost a couple of days to a bug that turned out not to be in my code, not in the module I was installing, and not even in the module that module depended on — but in the installer’s understanding of modern Perl syntax.

This is the story.

The Symptom

I was building a Docker image for Aphra. As part of the build, I needed to install App::HTTPThis, which depends on Plack::App::DirectoryIndex, which depends on WebServer::DirIndex.

The Docker build failed with this error:

#13 45.66 --> Working on WebServer::DirIndex
#13 45.66 Fetching https://www.cpan.org/authors/id/D/DA/DAVECROSS/WebServer-DirIndex-0.1.3.tar.gz ... OK
#13 45.83 Configuring WebServer-DirIndex-v0.1.3 ... OK
#13 46.21 Building WebServer-DirIndex-v0.1.3 ... OK
#13 46.75 Successfully installed WebServer-DirIndex-v0.1.3
#13 46.84 ! Installing the dependencies failed: Installed version (undef) of WebServer::DirIndex is not in range 'v0.1.0'
#13 46.84 ! Bailing out the installation for Plack-App-DirectoryIndex-v0.2.1.

Now, that’s a deeply confusing error message.

It clearly says that WebServer::DirIndex was successfully installed. And then immediately says that the installed version is undef and not in the required range.

At this point you start wondering if you’ve somehow broken version numbering, or if there’s a packaging error, or if the dependency chain is wrong.

But the version number in WebServer::DirIndex was fine. The module built. The tests passed. Everything looked normal.

So why did the installer think the version was undef?

When This Bug Appears

This only shows up in a fairly specific situation:

  • A module uses modern Perl class syntax
  • The module defines a $VERSION
  • Another module declares a prerequisite with a specific version requirement
  • The installer tries to check the installed version without loading the module
  • It uses Module::Metadata to extract $VERSION
  • And the version of Module::Metadata it is using doesn’t properly understand class

If you don’t specify a version requirement, you’ll probably never see this. Which is why I hadn’t seen it before. I don’t often pin minimum versions of my own modules, but in this case, the modules are more tightly coupled than I’d like, and specific versions are required.
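To make the trigger concrete: the check only fires when the downstream distribution declares a minimum version for its prerequisite. As an illustration (a sketch, not necessarily the exact declaration in Plack-App-DirectoryIndex), such a pin in a cpanfile looks like:

```perl
# cpanfile of the dependent distribution:
# require at least v0.1.0 of the tightly coupled module.
# cpanm compares the *installed* module's $VERSION against this range,
# which is exactly where the broken Module::Metadata lookup happens.
requires 'WebServer::DirIndex' => 'v0.1.0';
```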

So this bug only appears when you combine:

modern Perl syntax + version checks + older toolchain

Which is pretty much the definition of “bleading edge”.

The Real Culprit

The problem turned out to be an older version of Module::Metadata that had been fatpacked into cpanm.

cpanm uses Module::Metadata to inspect modules and extract $VERSION without loading the module. But the older Module::Metadata didn’t correctly understand the class keyword, so it couldn’t work out which package the $VERSION belonged to.

So when it checked the installed version, it found… nothing.

Hence:

Installed version (undef) of WebServer::DirIndex is not in range 'v0.1.0'

The version wasn’t wrong. The installer just couldn’t see it.

As an aside, you may find it amusing to hear an anecdote from my attempts to debug this problem.

I spun up a new Ubuntu Docker container, installed cpanm and tried to install Plack::App::DirectoryIndex. Initially, this gave the same error message. At least the problem was easily reproducible.

I then ran code that was very similar to the code cpanm uses to work out what a module’s version is.

$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'

This displayed an empty string. I was really onto something here. Module::Metadata couldn’t find the version.

I was using Module::Metadata version 1.000037 and, looking at the change log on CPAN, I saw this:

1.000038  2023-04-28 11:25:40Z
    - detects "class" syntax

I installed 1.000038 and reran my command.

$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'
0.1.3

That seemed conclusive. Excitedly, I reran the Docker build.

It failed again.

You’ve probably worked out why. But it took me a frustrating half an hour to work it out.

cpanm doesn’t use the installed version of Module::Metadata. It uses its own, fatpacked version. Updating Module::Metadata wouldn’t fix my problem.

The Workaround

I found a workaround: add a redundant package declaration alongside the class declaration, so that older versions of Module::Metadata can still identify the package that owns $VERSION.

So instead of just this:

class WebServer::DirIndex {
  our $VERSION = '0.1.3';
  ...
}

I now have this:

package WebServer::DirIndex;

class WebServer::DirIndex {
  our $VERSION = '0.1.3';
  ...
}

It looks unnecessary. And in a perfect world, it would be unnecessary.

But it allows older tooling to work out the version correctly, and everything installs cleanly again.

The Proper Fix

Of course, the real fix was to update the toolchain.

So I raised an issue against App::cpanminus, pointing out that the fatpacked Module::Metadata was too old to cope properly with modules that use class.

Tatsuhiko Miyagawa responded very quickly, and a new release of cpanm appeared with an updated version of Module::Metadata.

This is one of the nice things about the Perl ecosystem. Sometimes you report a problem and the right person fixes it almost immediately.

When Do I Remove the Workaround?

This leaves me with an interesting question.

The correct fix is “use a recent cpanm”.

But the workaround is “add a redundant package line so older tooling doesn’t get confused”.

So when do I remove the workaround?

The answer is probably: not yet.

Because although a fixed cpanm exists, that doesn’t mean everyone is using it. Old Docker base images, CI environments, bootstrap scripts, and long-lived servers can all have surprisingly ancient versions of cpanm lurking in them.

And the workaround is harmless. It just offends my sense of neatness slightly.

So for now, the redundant package line stays. Not because modern Perl needs it, but because parts of the world around modern Perl are still catching up.

Life on the Bleading Edge

This is what life on the bleading edge actually looks like.

Not dramatic crashes. Not language bugs. Not catastrophic failures.

Just a tool, somewhere in the install chain, that looks at perfectly valid modern Perl code and quietly decides that your module doesn’t have a version number.

And then you lose two days proving that you are not, in fact, going mad.

But I’m still using class. And I’m still happy I am.

You just have to keep an eye on the whole toolchain — not just the language — when you decide to live a little closer to the future than everyone else.

The post Still on the [b]leading edge first appeared on Perl Hacks.

Abstract

Even if you’re skeptical about AI writing your code, you’re leaving time on the table.

Many developers have been slow to adopt AI in their workflows, and that's understandable. As AI coding assistants become more capable, the anxiety is real - nobody wants to feel like they're training their replacement. But we're not there yet. Skilled developers who understand logic, mathematics, business needs and user experience will be essential to guide application development for the foreseeable future.

The smarter play is to let AI handle the parts of the job you never liked anyway - the documentation, the release notes, the boilerplate tests - while you stay focused on the work that actually requires your experience and judgment. You don’t need to go all in on day one. Here are six places to start.


1. Unit Test Writing

Writing unit tests is one of those tasks most developers know they should do more of and few enjoy doing. It’s methodical, time-consuming, and the worst time to write them is when the code reviewer asks if they pass.

TDD is a fine theory. In practice, writing tests before you’ve vetted your design means rewriting your tests every time the design evolves - which is often. Most experienced developers write tests after the design has settled, and that’s a perfectly reasonable approach.

The important thing is that they get written at all. Even a test that simply validates use_ok(qw(Foo::Bar)) puts scaffolding in place that can be expanded when new features are added or behavior changes. A placeholder is infinitely more useful than nothing.
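As a sketch of that minimal scaffolding (using the Foo::Bar placeholder name from above), the whole placeholder test is only a few lines:

```perl
# t/00-load.t - placeholder test: fails loudly if the module stops
# compiling, and gives later tests a file to grow into.
use strict;
use warnings;
use Test::More tests => 1;

use_ok('Foo::Bar');
```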

This is where AI earns its keep. Feed it a function or a module and it will identify the code paths that need coverage - the happy path, the edge cases, the boundary conditions, the error handling. It will suggest appropriate test data sets including the inputs most likely to expose bugs: empty strings, nulls, negative numbers, off-by-one values - the things a tired developer skips.

You review it, adjust it, own it. AI did the mechanical work of thinking through the permutations. You make sure it reflects how your code is actually used in the real world.


2. Documentation

“Documentation is like sex: when it’s good, it’s very, very good; and when it’s bad, it’s better than nothing.” - said someone somewhere.

Of course, there are developers who justify their disdain for writing documentation with one of two arguments (or both):

  1. The code is the documentation
  2. Documentation is wrong the moment it is written

It is true that the single source of truth for what code actually does is the code itself. But documentation should be about what the code is supposed to do. When the two diverge, it's either a defect in the software or a misunderstanding of the business requirement captured in the documentation.

Code that changes rapidly is difficult to document, but the intent of the code is not. Especially now with AI. It is trivial to ask AI to review the current documentation and align it with the code, negating point #2.

Feed AI a module and ask it to generate POD. It will describe what the code does. Your job is to verify that what it does is what it should do - which is a much faster review than writing from scratch.


3. Release Notes

If you’ve read this far you may have noticed the irony - this post was written by someone who just published a blog post about automating release notes with AI. So consider this section field-tested.

Release notes sit at the intersection of everything developers dislike: writing prose, summarizing work they’ve already mentally moved on from, and doing it with enough clarity that non-developers can understand what changed and why it matters. It’s the last thing standing between you and shipping.

The problem with feeding a git log to AI is that git logs are written for developers in the moment, not for readers after the fact. “Fix the thing” and “WIP” are not useful release note fodder.

The better approach is to give AI real context - a unified diff, a file manifest, and the actual source of the changed files. With those three inputs AI can identify the primary themes of a release, group related changes, and produce structured notes that actually reflect the architecture rather than just the line changes.

A simple make release-notes target can generate all three assets automatically from your last git tag. Upload them, prompt for your preferred format, and you have a first draft in seconds rather than thirty minutes. Here’s how I built it.
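As a sketch of the generation step (the target name, the release-assets layout, and the demo repository below are all illustrative assumptions, not the author's exact Makefile):

```shell
#!/bin/sh
# Sketch: collect three AI inputs - a unified diff since the last git tag,
# a file manifest, and the source of every changed file.
set -e

# Self-contained demo repo so the sketch runs anywhere; in real use you
# would run only the "asset generation" steps inside your own repository.
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m 'initial'
git tag v1.0
echo 'print("hello")' > app.py
git add app.py
git -c user.email=demo@example.com -c user.name=demo \
    commit -q -m 'add app'

# Asset generation: everything between the last tag and HEAD.
last_tag=$(git describe --tags --abbrev=0)
mkdir -p release-assets/src
git diff "$last_tag"..HEAD             > release-assets/changes.diff
git diff --name-only "$last_tag"..HEAD > release-assets/manifest.txt
while read -r f; do
  [ -f "$f" ] || continue   # skip files deleted in this release
  cp "$f" release-assets/src/
done < release-assets/manifest.txt
ls release-assets
```

Wire those commands into a `release-notes` target and the three files are ready to upload before you have finished rereading the diff yourself.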

You still edit it. You add color, context, and the business rationale that only you know. But the mechanical work of reading every diff and turning it into coherent prose? Delegated.


4. Bug Triage

Debugging can be the most frustrating and the most rewarding experience for a developer. Most developers are predisposed to love a puzzle, and there is nothing more puzzling than a race condition or a dangling pointer. Even though books and posters have been written about debugging, it is sometimes difficult to know exactly where to start.

Describe the symptoms, share the relevant code, toss your theory at it. AI will validate or repudiate without ego - no colleague awkwardly telling you you’re wrong. It will suggest where to look, what telemetry to add, and before you know it you’re instrumenting the code that should have been instrumented from the start.

AI may not find your bug, but it will be a fantastic bug buddy.


5. Code Review

Since I started using AI, I've found that one of the most valuable things I can do with it is give it my first draft of a piece of code. Anything more than a dozen or so lines is fair game.

Don’t waste your time polishing a piece of lava that just spewed from your noggin. There’s probably some gold in there and there’s definitely some ash. That’s ok. You created the framework for a discussion on design and implementation. Before you know it you have settled on a path.

AI’s strength is pattern recognition. It will recognize when your code needs to adopt a different pattern or when you nailed it. Get feedback. Push back. It’s not a one-way conversation. Question the approach, flag the inconsistencies that don’t feel right - your input into that review process is critical in evolving the molten rock into a solid foundation.


6. Legacy Code Deciphering

What defines “Legacy Code?” It’s a great question and hard to answer. And not to get too racy again, but as it has been said of pornography, I can’t exactly define it but I know it when I see it.

Fortunately (and yes, I do mean fortunately) I have been involved in maintaining legacy code since the day I started working for a family-run business in 1998. The code I maintained there was literally born in the late '70s and, to this day, still generates millions of dollars. You will never learn more about coding than by maintaining legacy code.

These are the major characteristics of legacy code from my experience (in order of visibility):

  1. It generates so much money for a company they could not possibly think of it being unavailable.
  2. It is monolithic and may in fact consist of modules in multiple languages.
  3. It has grown organically over the decades.
  4. It is more than 10 years old.
  5. The business rules are undocumented and opaque, and can only be discerned by a careful reading of the software. Product managers and users think they know what the software does, but probably do not have the entire picture.
  6. It cannot easily be re-written (by humans) because of #5.
  7. It contains as much dead code, no longer serving any useful purpose, as it does useful code.

I once maintained a C program that searched an ISAM database of legal judgments. The code had been ported from a proprietary in-memory binary tree implementation and was likely older than most of the developers reading this post. The business model was straightforward and terrifying - miss a judgment and we indemnify the client. Every change had to be essentially idempotent. You weren’t fixing code, you were performing surgery on a patient who would sue you if the scar was in the wrong place.

I was fortunate - there were no paydays for a client on my watch. But I wish I’d had AI back then. Not to write the code. To help me read it.

Now, where does AI come in? Points 5, 6, and definitely 7.

Throw a jabberwocky of a function at AI and ask it what it does. Not what it should do - what it actually does. The variable names are cryptic, the comments are either missing or lying, and the original author left the company during the Clinton administration. AI doesn’t care. It reads the code without preconception and gives you a plain English explanation of the logic, the assumptions baked in, and the side effects you never knew existed.

That explanation becomes your documentation. Those assumptions become your unit tests. Those side effects become the bug reports you never filed because you didn’t know they were bugs.

Dead code is where AI particularly shines. Show it a module and ask what’s unreachable. Ask what’s duplicated. Ask what hasn’t been touched in a decade but sits there quietly terrifying anyone who considers deleting it. AI will give you a map of the minefield so you can walk through it rather than around it forever.

Along the way AI will flag security vulnerabilities you never knew were there - input validation gaps, unsafe string handling, authentication assumptions that made sense in 1998 and are a liability today. It will also suggest where instrumentation is missing, the logging and telemetry that would have made every debugging session for the last twenty years shorter. You can’t go back and add it to history, but you can add it now before the next incident.

The irony of legacy code is that the skills required to understand it - patience, pattern recognition, the ability to hold an entire system in your head - are exactly the skills AI complements rather than replaces. You still need to understand the business. AI just helps you read the hieroglyphics.


Conclusion

None of the six items on this list require you to hand over the keys. You are still the architect, the decision maker, the person who understands the business and the user. AI is the tireless assistant who handles the parts of the job that drain your energy without advancing your craft.

The developers who thrive in the next decade won’t be the ones who resisted AI the longest. They’ll be the ones who figured out earliest how to delegate the tedious, the mechanical, and the repetitive - and spent the time they saved on the work that actually requires a human.

You don’t have to go all in. Start with a unit test. Paste some legacy code and ask AI to explain it or document it. Think of AI as that senior developer you go to with the tough problems - the one who has seen everything, judges nothing, and is available at 3am when the production system is on fire.

Only this one never sighs when you knock on the door.


Is all my 2026 blogging going to be about doing more random stuff with Claude? No, I promise. But I’m still working through my backlog of “and the next thing I tried to put it through its paces”, so here it comes. This time, more drudgery.

Project Four: MIME-Lite to Dist::Zilla

Here’s a quick one before I post beefy Project Five tomorrow.

A couple weeks ago, I got a pull request against MIME-Lite. That’s a pretty popular Perl email library that I maintain, and which Fastmail uses at least a little. The contributor had supplied a patch to use GitHub Actions to run the tests on every PR. Great!

Thing was, I already have a standard way of doing that, and I didn’t really want to pick up another way. The other thing was, my standard way relies on the distribution being built with Dist::Zilla. It’s not hard to convert from not using Dist::Zilla to using it, but it’s not trivial. And I don’t love MIME::Lite, so I wasn’t keen to spend a bunch of time on it.

I fired up Claude, pretty certain that I was going to get a crappy result.

rjbs: Convert this distribution to building with Dist::Zilla, in the fashion of most RJBS-authored distributions, like Email::MIME.
claude: I'll explore both the current MIME-Lite distribution and Email::MIME as a reference for the Dist::Zilla conversion pattern.

In about 20 minutes, it was done. Now, when I say “it”, I mean Claude. Claude wrote a big commit that wasn’t good enough. MIME-Lite had a bunch of little quirks, like optional dependencies and a slightly not-rjbs-standard documentation structure. I spent just a few minutes polishing things up and it was good to go.

This was a good example of needing some expert involvement. Understanding exactly what to do was going to require way too much context that couldn’t be found in the repository or in “understanding” of published copies of things. But some exploration of MIME-Lite and Email-MIME let Claude do a bunch of the work so I didn’t have to grumble my way through it. As I’ve seen a few times, it also did work I would not have bothered with, like converting the changelog to a more consistent format.

One funny nit… the original changelog had:

=item Version 1.101   (1997/03/01)

Baseline code.
Originally created: 11 December 1996.  Ho ho ho.

The new one:

1.101     1997-03-01
        - Baseline code. Originally created: 11 December 1996.

I guess Claude is part of the war on Christmas.

Claude also hallucinated author names weirdly and screwed up the Copyright. You can’t just let it do whatever it wants with no oversight! It was pretty bad, and I can definitely see somebody getting into hot water this way. This was probably the worst Claude experience I had that didn’t end in scrapping things. I think I probably could’ve done this one myself faster and with fewer errors. As the world’s leading Dist::Zilla expert… probably shouldn’t be a surprise.

With the conversion done, I dropped the standard CI workflow in place and… found problems in my conversion. Just what CI is for. Now it’s all sorted and happy and any new PRs will get test runs.

You can read my transcript of this conversation, but it’s a bit odd. I stopped and resumed the session at one point, and Claude waited at a permission prompt for 12 hours while I slept or something.

Originally, this was going to be the last in my series of stuff I did with Claude that I found compelling, but… the news, good or bad, is that I’ll be posting at least one more soon. This one, though, is definitely the piece of work that convinced me that Claude was useful for more than mechanical transformation.

Project Five: ClaudeLog

In some of my previous posts, I posted links to transcripts of my chats with Claude, including its internal monologue, its tools used, and other stuff that you won’t see if you just copy text out of the viewer. Those transcripts were generated by a tool that I built with Claude, which I’m going to talk about, now.

I think that the experience of using Claude in its current form (as opposed to what we saw from coding agents a year ago) is fairly stunning. It sometimes screws up, it sometimes makes errors, it sometimes takes four passes at doing things that any normal person would routinely do in one… but on the other hand, it can generate large, complex, readable, correct pieces of software from a standing start. Saying “it’s real good, folks” is not sufficient. I think that to understand what it can do, you should see for yourself. This is not to say that there are no reasons to hesitate or to worry about the serious impact that LLMs are having, and will continue to have. But one criticism I continue to see is “these things spew out unmaintainable nonsense”, which is not a claim that really stands up to much real use.

Also, one friend recently said to me, “I want to be a programmer, not a copy editor.” I had to explain that while, yes, you do need to read and think about possible errors in agent-generated code, the experience is much more one of design and construction than of proofreading.

Since not everybody is going to say, “Hey, Rik, let’s pair and look at Claude,” and because I am not going to pair with every rando who might ask, I thought it would be good to provide a thorough transcript. I knew that Claude keeps a detailed log of its sessions (even though, amusingly, Claude claimed it knew of no such thing).

I had looked through the JSONL files in which sessions were stored, and the data looked a bit messy, but probably sufficient. Without giving it too much investigation, I opened up Claude Code and said…

I want to provide transcripts of my interactions with Claude to colleagues who are learning to use the system. I understand that my ~/.claude directory contains transcripts of the chat sessions. Please build a TUI application that:

  1. lets me drill down through projects to individual sessions, showing previews of the sessions as I go
  2. lets me select a session to be transcribed for my use
  3. produces a Markdown summary of the session that I can embed in a document

While your general instructions say to prefer Perl, in this case I would like you to use whatever language you believe will achieve the results most successfully with a pleasant terminal interface.

You can, of course, read the transcription of this whole conversation, produced by the program that the conversation eventually led to. There’s a lot of experimentation, a fair bit of stumbling, and overall you can see how I discover what it is I want while I’m building it. This is normal development, but…

In normal development with such a significant “discovery” aspect, it’s very common to spend a lot of time upshifting and downshifting. That is, first I imagine the general structure of what I want. I write a pseudo-code outline of the high-level code. Then I begin converting individual pieces into real code. Almost continuously, I’m shifting between design thinking and implementation. These are (for me, and I suspect for others) distinct ways of thinking, and while it’s not “context switching”, there is, I think, an analogous overhead.

Using Claude, I am really focusing on one of those angles at a time. I started with “here is a very rough outline” and within 20 minutes, I had a working program. I never, ever had to switch into implementation thinking, to get there. Then I had many tight, quick exchanges in the form, “This is wrong, change it” or “I’m ready to describe the next feature”.

At the top of the transcript, you’ll see this line:

Duration: 7h 50m (3h active)

This means that from the first to last event in the transcript, about eight hours passed on the clock, but I was only engaged in the chat for about three. Probably I took a long lunch in there, or maybe worked on something more pressing for a while. Or I just stopped and thought about it, or spent time reading transcripts and thinking about what could be better.

By the end of the day, I had a really useful program. The HTML it was generating was definitely fit for purpose. On the other hand, I made the mistake of looking at the code…

Look, it wasn’t absolutely impenetrable. It just felt very, very amateurish. It got the job done, and I could read it, but there was very little abstraction. There was nearly no separation of concerns. There were no real “layers” to speak of. And when layers did exist, they often duplicated work.

Hardly surprising: if you imagine Claude as often analogous to a junior programmer (not a great analogy, but often useful), and you imagine me as the lousy boss who kept saying, “Implement this feature immediately and I don’t care about code quality”, of course there was a ton of debt. And of course it racked up fast, because I was racking up features fast. The question I needed to answer was: will Claude’s code always be unmaintainable after a short while?

I decided to answer that the next day. Instead of adding more features, I’d just look at fixing the code. It was already clear to me how the code should’ve been structured. I just didn’t tell Claude about it, because I was focused on features. Why didn’t Claude architect it well from the beginning? Because nobody asked it to. This seems sort of stupid, but it’s often how this stuff works.

So, the next day, I started a new session. There were two key parts of this:

I am very happy with the output of this program. I would now like to begin some serious refactoring for maintainability. The first step should be fairly simple. I would like you to split out the code that takes a session JSONL file and emits HTML. Put that in its own module, so that the “build HTML from JSONL” can be called without the session browser being involved.

…and…

Next, I would like to separate out the generation of HTML from the interpretation of the JSONL file. I propose the following:

  • a SessionFile object that represents the log itself, and is created with a static method like session_file = SessionFile.from_path(filepath)
  • a SessionLog object that represents a sequence of SessionTurn
  • a SessionFile has a SessionLog, which is computed lazily when requested
  • a SessionTurn is either a UserTurn, a ClaudeTurn, a SystemTurn, or an IdleTurn
  • non-IdleTurn turns have a sequence of TurnEvents, which are things like the chat, the tool use, the thinking, and so on
  • the HTML generator is passed an already-computed SessionLog, which it then just formats, rather than formatting and event-aggregating all at once

This is a significant change. Make a plan, think hard, and ask me any questions you need to ask up front.

There’s a bunch of other stuff that went on, but these are the big ones. You can read the whole refactoring transcript.

Claude thought for a while, then asked me some (very reasonable) questions. I answered them, and Claude produced a plan. Pretty soon, it all worked. The program still did just what it did before, but now the code made sense, and it was much easier to add some new features, because they could be added as small changes to a few layers, instead of changes to the one big ball of mud.

Part of what made this easy was that up front I said, “save a copy of input and output now, so that while you refactor, you can keep checking you didn’t change the output”. Claude did this, checking the output repeatedly as it worked.

Again, a lot of other stuff went on, but I think this is an accurate summary.

One way of looking at this as a two-stage process is “Rik should’ve made sure Claude did it right to start”, and that’s an okay position, but it brings back some of that upshifting and downshifting that I mentioned.

Another option is, “Rik should’ve given Claude standing instructions to pay more attention to architecture.” Yes, probably that would be useful, but also then I’d probably be spending more time thinking about its architecture, because it would be talking about its architecture.

I think the best view might be: because fixing the design later is at least sometimes not as costly (in time, not to mention morale), the human is given more freedom to choose the approach that they will find more satisfying or pleasant. In the end, we want the code to be correct, reliable, readable, and maintainable. The path we take to get there isn’t necessarily all that relevant. (But please leave a good git history.)

I’ll publish the code for CladeLog at some point.

Answer

You can configure grub in several ways: you can pin it to a specific kernel, configure it to use the latest one, or tell it to let you pick one from a selection.

One specific kernel

If you inspect /boot/grub/grub.cfg you’ll see entries like this:

# the \ are mine, these are usually one big line but for blog purposes I
# multilined them
menuentry 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64' --class debian \
 --class gnu-linux --class gnu --class os $menuentry_id_option \
 'gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4' {

You can use either of two identifiers to configure grub: the menu title 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64', or the $menuentry_id_option value gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4.
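For example, to pin one specific kernel you can set GRUB_DEFAULT in /etc/default/grub and regenerate the config. This is a sketch: the id must match what your own grub.cfg shows, and entries nested in a submenu may need the "submenu-id>entry-id" form.

```shell
# /etc/default/grub -- pin a specific menu entry by its id
# (use the value shown after $menuentry_id_option in grub.cfg)
GRUB_DEFAULT="gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4"

# then regenerate /boot/grub/grub.cfg (on Debian and derivatives):
# sudo update-grub
```

Using the menu title instead of the id also works, but titles change when packages rename entries, so the id is the safer choice.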

The Problem: Generating Release Notes is Boring

You’ve just finished a marathon refactoring, perhaps splitting a monolithic script into proper modules, and now you need to write the release notes. You could feed an AI a messy git log, but if you want high-fidelity summaries that actually understand your architecture, you need to provide better context.

The Solution: AI Loves Boring Tasks

…and is pretty good at them too!

Instead of manually describing changes or hoping it can interpret my ChangeLog, I’ve automated the production of three ephemeral “Sidecar” assets. These are generated on the fly, uploaded to the LLM, and then purged after analysis - no storage required.

The Assets

  • The Manifest (.lst): A simple list of every file touched, ensuring the AI knows the exact scope of the release.
  • The Logic (.diffs): A unified diff (using git diff --no-ext-diff) that provides the “what” and “why” of every code change.
  • The Context (.tar.gz): This is the “secret sauce.” It contains the full source of the changed files, allowing the AI to see the final implementation - not just the delta.

The Makefile Implementation

If you’ve read any of my blog posts, you know I’m a huge Makefile fan. To automate this, I’m naturally going to add a recipe to my Makefile or Makefile.am.

First, we explicitly set the shell to /usr/bin/env bash to ensure features like brace expansion work consistently across all dev environments.

# Ensure a portable bash environment for advanced shell features
SHELL := /usr/bin/env bash

.PHONY: release-notes clean-local

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}

Breaking Down the Recipe

  • The Shell Choice (/usr/bin/env bash): We avoid hardcoding the interpreter path so the script finds the correct Bash on macOS, Linux, or inside a container.
  • The Version Override (VERSION ?=): This allows the “pre-flight” trick: running make release-notes VERSION=HEAD to iterate on notes before you’ve actually tagged the release.
  • Smart Tag Discovery (--sort=-v:refname): Using v:refname forces Git to use semantic versioning logic (so 1.10.0 correctly follows 1.2.0), while the glob pattern filters out “noisy” non-version tags.
  • The Diff Filter (--diff-filter=AMR): This ensures the tarball only includes files that actually exist (Added, Modified, or Renamed). If a release deleted a file, this filter prevents tar from erroring out when it can’t find the missing file on disk.
  • The Cleanup Crew (clean-local): removes the ephemeral artifacts using bash brace expansion.
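Putting the bullets above together, a typical session looks like this (assuming your repo has version tags and a VERSION file as described):

```shell
# pre-flight: iterate on the notes before you've tagged anything
make release-notes VERSION=HEAD

# the real thing, once the release tag exists
make release-notes

# throw the sidecar assets away afterwards
make clean-local
```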

The AI Prompt

Once your assets are generated, upload them to AI and use a prompt that enforces your specific formatting standards.

The Prompt:

I have uploaded a file manifest (`.lst`), a unified diff
(`.diffs`), and a tarball of the source code (`.tar.gz`). Please
analyze these to identify the primary themes of this
release. Generate a release entry using this protocol:

1. Separate versions with 71 dashes.
2. Include a high-level executive summary paragraph.
3. Indent bullet points exactly 21 spaces to maintain a vertical column.
4. Group changes under category labels like `Core Bedrock:` or `Miniserver:`.
5. Provide plain-text ASCII only inside a code block.

The Result

In a recent dry run for Bedrock 3.8.8, this exact workflow allowed the AI to correctly identify that bedrock-miniserver.pl.in was refactored into Bedrock::MiniServer and Bedrock::MiniServer::CLI, while accurately noting that Bedrock::Error moved to the Bedrock-Core distribution.

In actuality, version 3.8.8 focuses on the primary goals of MiniServer maturation (more batteries included), Apache decoupling, and distribution hygiene. Here’s AI’s version:

-----------------------------------------------------------------------
Bedrock 3.8.8
-----------------------------------------------------------------------
This release focuses on the maturation of the MiniServer component, 
decoupling core Bedrock logic from Apache-specific dependencies, and 
improving distribution hygiene. Key architectural changes include 
refactoring the MiniServer into modular components and moving 
exception handling into the Bedrock-Core distribution.

2026-03-17 - 3.8.8 - MiniServer Maturation and Apache Decoupling
                     Miniserver:
                     - Refactored bedrock-miniserver.pl into modular 
                       Bedrock::MiniServer and Bedrock::MiniServer::CLI.
                     - Implemented zero-config scaffolding to 
                       automatically create application trees.
                     - Integrated full Bedrock configuration pipeline 
                       for parity with Apache environments.
                     - Updated bedrock_server_config to support both 
                       getter and setter operations.

                     Core:
                     - Moved Bedrock::Error and Bedrock::Exception to 
                       the Bedrock-Core distribution.
                     - Introduced Bedrock::FauxHandler as a production-
                       ready alias for test handlers.
                     - Added dist_dir() to BLM::Startup::Bedrock to 
                       expose distribution paths to templates.

                     Fixes:
                     - Demoted Apache-specific modules (mod_perl2, 
                       Apache2::Request) to optional recommendations.
                     - Improved Bedrock::Test::FauxHandler to handle 
                       caller-supplied loggers and safe destruction.

Conclusion

As I mentioned in a response to a recent Medium article, AI can be an accelerator for seasoned professionals. You’re not cheating. You did the work. AI does the wordsmithing. You edit, add color, and ship. What used to take 30 minutes now takes 3. Now that’s working smarter, not harder!

Pro-Tip

Add this to the top of your Makefile

SHELL := /usr/bin/env bash

# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)

Copy this to a file named release-notes.mk

.PHONY: release-notes clean-local

release-notes:
    @curr_ver=$(VERSION); \
    last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
    diffs="release-$$curr_ver.diffs"; \
    diff_list="release-$$curr_ver.lst"; \
    diff_tarball="release-$$curr_ver.tar.gz"; \
    echo "Comparing $$last_tag to current $$curr_ver..."; \
    git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
    git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
    tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
    ls -alrt release-$$curr_ver*

clean-local:
    @echo "Cleaning ephemeral release assets..."
    rm -f release-*.{tar.gz,lst,diffs}

Then include release-notes.mk from your Makefile:

include release-notes.mk

Dave writes:

Last month I worked on various miscellaneous issues, including a few performance and deparsing regressions.

Summary:

  • 3:00 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules from building
  • 2:49 GH #24212 goto void XSUB in scalar context crashes
  • 7:19 XS: avoid core distros using void ST(0) hack
  • 2:40 fix up Deparse breakage
  • 5:41 remove OP_NULLs in OP_COND execution path

Total: 21:29 (HH:MM)


Paul writes:

Not too much activity of my own this month, as I spent a lot of my Perl time working on other things like magic-v2 and CPAN ecosystem modules like Future::IO. Plus I had a stage show to finish building props for and manage the running of.

But I did manage to do:

  • 3 = Continue work on attributes-v2 and write a provisional PR for the first stage
    • https://github.com/Perl/perl5/pull/24171
  • 3 = Bugfix in class.c in threaded builds
    • https://github.com/Perl/perl5/issues/24150
    • https://github.com/Perl/perl5/pull/24171
  • 1 = More foreach lvref neatening
    • https://github.com/Perl/perl5/pull/24202
  • 3 = Various github code reviews

Total: 10 hours

Now that both attributes-v2 and magic-v2 are parked awaiting the start of the 5.45.x development cycle, most of my time until then will be spent on building up some more exciting features to launch those with, as well as continuing to focus on fixing any release-blocker bugs for 5.44.


Tony writes:

```
[Hours] [Activity]

2026/02/02 Monday
 0.08 #24122 review updates and comment
 0.17 #24063 review updates and apply to blead
 0.28 #24062 approve with comment and bonus comment
 0.92 #24071 review updates and approve
 0.40 #24080 review updates, research and comment
 0.18 #24122 review updates and approve
 0.27 #24157 look into it and original ticket, comment on original ticket
 0.58 #24134 review and comments
 0.27 #24144 review and approve with comment
 0.18 #24155 review and comment
 0.48 #16865 debugging
 0.90 #16865 debugging, start a bisect with a better test case

 4.71

2026/02/03 Tuesday
 0.17 review steve’s suggested maint-votes and vote
 0.17 #24155 review updates and approve
 1.30 #24073 recheck, comments and apply to blead
 0.87 #24082 more review, follow-ups
 0.83 #24105 work on threads support
 0.65 #24105 more work on threads, hash randomization support

 3.99

2026/02/04 Wednesday
 0.13 github notifications
 1.92 #24163 review, comments
 0.48 #24105 rebase some more, fix tests, do a commit and push for CI (needs more work)
 1.70 #24105 more cleanup and push for CI

 4.23

2026/02/05 Thursday
 0.20 github notifications
 0.38 #24105 review CI results and fix some issues
 1.75 #24082 research and comments
 0.63 #24105 more CI results, update the various generated config files and push for CI
 0.17 #23561 review updates and comment
 0.40 #24163 research and follow-up
 0.58 #24098 review updates and comments

 4.11

2026/02/09 Monday
 0.15 #24082 comment
 0.20 #22040 comment
 0.30 #24005 research, comment
 0.33 #4106 rebase again and apply to blead
 0.35 #24133 comment
 0.35 #24168 review CI results and comment
 0.25 #24098 comment
 0.18 #24129 review updates and comment
 0.92 #24160 review, comment, approve
 0.17 #24136 review and briefly comment
 0.78 #24179 review, comments
 0.48 #16865 comment, try an approach

 4.46

2026/02/10 Tuesday
 0.62 #24163 comment
 0.23 #24082 research
 0.20 #24082 more research

 1.05

2026/02/11 Wednesday
 0.48 #24163 review updates and approve
 0.73 #24129 review updates
 0.45 #24098 research and follow-up comment
 0.32 #24134 review updates and approve
 0.17 #24080 review updates and approve
 1.18 #22132 setup, testing and comments on ticket and upstream llvm ticket
 0.32 #23561 review update and approve
 0.42 #24179 review some more and make a suggestion
 1.03 #24187 review and comments

 5.10

2026/02/12 Thursday
 0.43 #24136 research and comment
 0.17 #24190 review and approve
 0.90 #24182 review discussion and the change and approve
 0.08 #24178 review and briefly comment
 0.33 #24177 review, research and comment
 0.08 #24187 brief follow-up
 0.43 #24176 research, review and approve
 0.27 #24191 research, testing
 0.20 #24192 review and approve
 0.38 #24056 debugging
 0.58 #24056 debugging, something in find_lexical_cv()?

 3.85

2026/02/16 Monday
 0.52 github notifications
 0.08 #24178 review updates and approve
 2.20 #24098 review and comments
 0.88 #24056 more debugging, find at least one bug
 0.92 #24056 work up tests, testing, commit message and push for CI, perldelta and re-push

 4.60

2026/02/17 Tuesday
 0.18 #24056 check CI results, rebase in case and re-push, open PR 24205
 2.88 #24187 review, comments
 0.47 #24187 more comments
 0.23 reply email from Jim Keenan re git handling for testing PR tests without the fixes

 3.76

2026/02/18 Wednesday
 3.02 #24187 review comments, work on fix for assertion, testing, push for CI
 0.25 #24187 check CI, make perldelta and make PR 24211
 0.35 #24098 review updates and approve

 3.62

2026/02/19 Thursday
 0.30 #24200 research and comment
 0.47 #24215 review, wonder why cmp_version didn’t complain, find out and approve
 0.08 #24208 review and comment
 0.73 #24213 review, everything that needs saying had been said
 0.22 #24206 review and comments
 0.53 #24203 review, comment and approve
 0.33 #24210 review, research and approve with comment
 0.37 #24200 review, research and approve

 3.03

2026/02/23 Monday
 0.35 #24212 testing, add #24213 to 5.42 votes
 2.42 #24159 review and benchmarking, comment
 0.73 #24187 try to break it

 3.50

2026/02/24 Tuesday
 0.35 github notifications
 1.13 #24187 update PR 24211 commit message, rechecks
 0.43 #24001 re-work tests on PR 24060
 0.30 #24001 more re-work

 2.21

2026/02/25 Wednesday
 1.02 #24180 research, comments
 0.22 #24206 review update and comment
 0.28 #24208 review updates and comment
 0.57 #24060 more tests
 0.88 #24060 more tests, testing, debugging

 2.97

2026/02/26 Thursday
 0.47 #24211 minor fixes per comments
 0.23 #24206 review updates and approve
 0.22 #24180 review updates and approve
 0.98 #24236 review and comments
 1.30 #24228 review, testing and comments
 0.08 #24236 research and comment
 0.78 #24159 review updates, testing, comments

 4.06

Which I calculate is 59.25 hours.

Approximately 50 tickets were reviewed or worked on, and 3 patches were applied.
```

Here’s another post in which I fiddle around with Claude Code and determine that it is not just spewing out nonsense and bugs, but instead is doing kinda what I would’ve done, at least enough to reduce my total effort. This time, writing tests.

Project Three: JMAP-Tester test coverage

Okay, I progressed from code I’d throw away to code I would keep but not look at. I was progressing up the totem pole of how much cultural value we put on code. What was the next least respected code? Tests, of course.

Now, I actually love tests, and like treating them like first-class code, and building libraries to make testing better. One of those libraries is JMAP::Tester, which we use in tons of our testing. Until pretty recently, it didn’t have all that much testing of its own. That is: JMAP-Tester was used to test things, but was not itself tested. In December, as part of adding some features to JMAP::Tester, I started to expand its test coverage. This was rewarding and useful, but I didn’t get to 100% coverage. I used to strive for 100% (well, 95% coverage) on my code, but these days… well, who has the time?

Turns out, Claude has the time. This one was pretty darn impressive. You can read the whole transcript, but here’s an accurate summary:

rjbs: This project is a CPAN distribution. Produce a coverage report, which you can do with “dzil cover”. Find low-hanging fruit to add test coverage and draft the tests.

(time passes)

claude: I’ve increased test coverage from about 50% to about 95%.

That was it! You can read the pull request.

Well, there were a couple more bits, mostly me saying, “Make it look like how I’d have written it” (literally), and some git faff. I did go through and tweak the code to be more readable and better organized. I could’ve given Claude clearer instructions to avoid most of that, or had standing instructions about it… but really, it was a good way to keep my mind engaged while I reviewed that the code was testing what it should be, anyway.

If “write more coverage tests” was the only thing I could get out of Claude, it’d still be huge, but obviously there’s more. I’ll keep posting…

Let’s talk about music programming! There are a million aspects to this subject, but today, we’ll touch on generating rhythmic patterns with mathematical and combinatorial techniques. These include the generation of partitions, necklaces, and Euclidean patterns.

Stefan Hollos and J. Richard Hollos wrote an excellent little book called “Creating Rhythms”, whose algorithms have been implemented in C, Perl, and Python. It features a number of algorithms that produce or modify lists of numbers or bit-vectors (of ones and zeroes). These can be the beat onsets (the ones) and rests (the zeroes) of a rhythm. We’ll check out these concepts with Perl.

For each example, we’ll save the MIDI with the MIDI::Util module. Also, in order to actually hear the rhythms, we will need a MIDI synthesizer. For these illustrations, fluidsynth will work. Of course, any MIDI capable synth will do! I often control my eurorack analog synthesizer with code (and a MIDI interface module).

Here’s how I start fluidsynth on my mac in the terminal, in a separate session. It uses a generic soundfont file (sf2) that can be downloaded here (124MB zip).

fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2

So, how does Perl know what output port to use? There are a few ways, but with JBARRETT’s MIDI::RtMidi::FFI::Device, you can do this:

use MIDI::RtMidi::FFI::Device ();

my $midi_in = RtMidiIn->new;
my $midi_out = RtMidiOut->new;

print "Input devices:\n";
$midi_in->print_ports;
print "\n";

print "Output devices:\n";
$midi_out->print_ports;
print "\n";

This shows that fluidsynth is alive and ready for interaction.

Okay, on with the show!

First up, let’s look at partition algorithms. With the part() function, we can generate all partitions of n, where n is 5, and the “parts” all add up to 5. Then, taking one of these (say, the third element), we convert it to a binary sequence that can be interpreted as a rhythmic phrase, and play it 4 times.

#!/usr/bin/env perl
use strict;
use warnings;

use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $parts = $mcr->part(5);
# [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 1, 2 ], [ 1, 2, 2 ], [ 1, 1, 3 ], [ 2, 3 ], [ 1, 4 ], [ 5 ] ]

my $p = $parts->[2]; # [ 1, 2, 2 ]

my $seq = $mcr->int2b([$p]); # [ [ 1, 1, 0, 1, 0 ] ]

Now we render and save the rhythm:

use MIDI::Util qw(setup_score);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) {
    for my $bit ($seq->[0]->@*) {
        if ($bit) {
            $score->n('en', 40);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-1.mid');

In order to play the MIDI file that is produced, we can use fluidsynth like this:

fluidsynth -i ~/Music/soundfont/FluidR3_GM.sf2 perldotcom-1.mid

Not terribly exciting yet.

Let’s see what the “compositions” of a number reveal. According to the Music::CreatingRhythms docs, a composition of a number is “the set of combinatorial variations of the partitions of n with the duplicates removed.”

Okay. Well, the 7 partitions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]

And the 16 compositions of 5 are:

[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 2, 1], [1, 1, 3], [1, 2, 1, 1], [1, 2, 2], [1, 3, 1], [1, 4], [2, 1, 1, 1], [2, 1, 2], [2, 2, 1], [2, 3], [3, 1, 1], [3, 2], [4, 1], [5]]

That is, the list of compositions has, not only the partition [1, 2, 2], but also its variations: [2, 1, 2] and [2, 2, 1]. Same with the other partitions. Selections from this list will produce possibly cool rhythms.
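For intuition, compositions are easy to generate recursively. Here is a tiny Python sketch, for illustration only; the Perl module’s compm() does the equivalent (restricted to a given element count) for you:

```python
def compositions(n):
    """All ordered ways to write n as a sum of positive integers."""
    if n == 0:
        return [[]]
    return [[first] + rest
            for first in range(1, n + 1)
            for rest in compositions(n - first)]

print(len(compositions(5)))  # 16 compositions, versus only 7 partitions of 5
```

Every reordering of a partition counts separately, which is why there are 2^(n-1) compositions of n but far fewer partitions.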

Here are the compositions of 5 turned into sequences, played by a snare drum, and written to the disk:

use Music::CreatingRhythms ();
use MIDI::Util qw(setup_score);

my $mcr = Music::CreatingRhythms->new;

my $comps = $mcr->compm(5, 3); # compositions of 5 with 3 elements

my $seq = $mcr->int2b($comps);

my $score = setup_score(bpm => 120, channel => 9);

for my $pattern ($seq->@*) {
    for my $bit (@$pattern) {
        if ($bit) {
            $score->n('en', 40); # snare patch
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-2.mid');

A little better. Like a syncopated snare solo.

Another way to play the MIDI file is to use timidity. On my mac, with the soundfont specified in the timidity.cfg configuration file, this would be:

timidity -c ~/timidity.cfg -Od perldotcom-2.mid

To convert a MIDI file to an mp3 (or other audio formats), I do this:

timidity -c ~/timidity.cfg perldotcom-2.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 64k perldotcom-2.mp3

Okay. Enough technical details! What if we want a kick bass drum and hi-hat cymbals, too? Refactor time…

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $s_comps = $mcr->compm(4, 2); # snare
my $s_seq = $mcr->int2b($s_comps);

my $k_comps = $mcr->compm(4, 3); # kick
my $k_seq = $mcr->int2b($k_comps);

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 8) { # repeats
    my $s_choice = $s_seq->[ int rand @$s_seq ];
    my $k_choice = $k_seq->[ int rand @$k_seq ];

    for my $i (0 .. $#$s_choice) { # pattern position
        my @notes = (42); # hi-hat every time
        if ($s_choice->[$i]) {
            push @notes, 40;
        }
        if ($k_choice->[$i]) {
            push @notes, 36;
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-3.mid');

Here we play generated kick and snare patterns, along with a steady hi-hat.

Next up, let’s look at rhythmic “necklaces.” Here we find many grooves of the world.

World rhythms

Image from The Geometry of Musical Rhythm

Rhythm necklaces are circular diagrams of equally spaced, connected nodes. A necklace is a single representative chosen from all rotations of a sequence, so a list of necklaces contains no rotational duplicates. For instance, the necklaces of 3 beats are [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]. Notice that there is no [1, 0, 1] or [0, 1, 1]: both are rotations of [1, 1, 0]. Likewise, there are no rotated versions of [1, 0, 0].
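To make the definition concrete, here is a brute-force sketch in Python (not the module’s algorithm, which is far more efficient): keep only the bit strings that are the lexicographically largest among their own rotations.

```python
from itertools import product

def necklaces(n):
    """One representative per rotation class of binary strings of length n."""
    result = []
    for bits in product([1, 0], repeat=n):
        rotations = [bits[i:] + bits[:i] for i in range(n)]
        if bits == max(rotations):  # keep only the canonical rotation
            result.append(list(bits))
    return result

print(necklaces(3))  # [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]
```

Run with n = 16 and you get the 4116 necklaces mentioned below.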

So, how many 16 beat rhythm necklaces are there?

my $necklaces = $mcr->neck(16);
print scalar @$necklaces, "\n"; # 4116 of 'em!

Okay. Let’s generate necklaces of 8 instead, pull a random choice, and play the pattern with a percussion instrument.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $patch = shift || 75; # claves

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);
my $choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $bit (@$choice) { # pattern position
        if ($bit) {
            $score->n('en', $patch);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-4.mid');

Here we choose from all necklaces. But note that this also includes the sequence with all ones and the sequence with all zeroes. More sophisticated code might skip these.

More interesting would be playing simultaneous beats.

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $necklaces = $mcr->neck(8);

my $x_choice = $necklaces->[ int rand @$necklaces ];
my $y_choice = $necklaces->[ int rand @$necklaces ];
my $z_choice = $necklaces->[ int rand @$necklaces ];

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $#$x_choice) { # pattern position
        my @notes;
        if ($x_choice->[$i]) {
            push @notes, 75; # claves
        }
        if ($y_choice->[$i]) {
            push @notes, 63; # hi_conga
        }
        if ($z_choice->[$i]) {
            push @notes, 64; # low_conga
        }
        $score->n('en', @notes);
    }
}

$score->write_score('perldotcom-5.mid');


How about Euclidean patterns? What are they, and why are they named for a geometer?

Euclidean patterns fill a set number of positions P with a number of beats Q, where Q is less than or equal to P, spread as evenly as possible. They are named for Euclid because they are generated by applying the “Euclidean algorithm”, originally devised to find the greatest common divisor (GCD) of two numbers, to the distribution of the beats.
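The even-distribution idea can be sketched in a few lines. This is one common arithmetic formulation, shown in Python for illustration; it is not necessarily the exact algorithm Music::CreatingRhythms uses, and other formulations produce rotations of the same pattern:

```python
def euclid(q, p):
    """Distribute q onsets as evenly as possible over p positions."""
    return [1 if (i * q) % p < q else 0 for i in range(p)]

print(euclid(3, 8))  # [1, 0, 0, 1, 0, 0, 1, 0] -- the classic tresillo
```

With q = 2 and p = 16 you get two onsets 8 positions apart, which is exactly why the snare below is rotated relative to the kick: unrotated, they would land on the same beats.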

use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();

my $mcr = Music::CreatingRhythms->new;

my $beats = 16;

my $s_seq = $mcr->rotate_n(4, $mcr->euclid(2, $beats)); # snare
my $k_seq = $mcr->euclid(2, $beats); # kick
my $h_seq = $mcr->euclid(11, $beats); # hi-hats

my $score = setup_score(bpm => 120, channel => 9);

for (1 .. 4) { # repeats
    for my $i (0 .. $beats - 1) { # pattern position
        my @notes;
        if ($s_seq->[$i]) {
            push @notes, 40; # snare
        }
        if ($k_seq->[$i]) {
            push @notes, 36; # kick
        }
        if ($h_seq->[$i]) {
            push @notes, 42; # hi-hats
        }
        if (@notes) {
            $score->n('en', @notes);
        }
        else {
            $score->r('en');
        }
    }
}

$score->write_score('perldotcom-6.mid');

Now we’re talkin’ - an actual drum groove! To reiterate, the euclid() method distributes a number of onsets, like 2 or 11, as evenly as possible over the total number of beats, 16. The kick and snare use the same arguments, but the snare pattern is rotated by 4 beats, so that they alternate.

So what have we learned today?

  1. That you can use mathematical functions to generate sequences to represent rhythmic patterns.

  2. That you can play an entire sequence or simultaneous notes with MIDI.


Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.340 on 2026-03-13, with 50 votes
    • Previous CPAN version: 0.339 was 21 days before
    • Author: RJBS
  2. App::HTTPThis - Export the current directory over HTTP
    • Version: v0.11.0 on 2026-03-13, with 25 votes
    • Previous CPAN version: 0.010 was 3 months, 9 days before
    • Author: DAVECROSS
  3. App::zipdetails - Display details about the internal structure of Zip files
    • Version: 4.005 on 2026-03-08, with 65 votes
    • Previous CPAN version: 4.004 was 1 year, 10 months, 8 days before
    • Author: PMQS
  4. CPAN::Audit - Audit CPAN distributions for known vulnerabilities
    • Version: 20260308.002 on 2026-03-08, with 21 votes
    • Previous CPAN version: 20250829.001 was 6 months, 10 days before
    • Author: BRIANDFOY
  5. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260311.002 on 2026-03-11, with 25 votes
    • Previous CPAN version: 20260308.006 was 2 days before
    • Author: BRIANDFOY
  6. Dancer2 - Lightweight yet powerful web application framework
    • Version: 2.1.0 on 2026-03-12, with 139 votes
    • Previous CPAN version: 2.0.1 was 4 months, 20 days before
    • Author: CROMEDOME
  7. Data::Alias - Comprehensive set of aliasing operations
    • Version: 1.30 on 2026-03-11, with 19 votes
    • Previous CPAN version: 1.29 was 1 month, 8 days before
    • Author: XMATH
  8. DBD::Pg - DBI PostgreSQL interface
    • Version: 3.19.0 on 2026-03-14, with 103 votes
    • Previous CPAN version: 3.18.0 was 2 years, 3 months, 7 days before
    • Author: TURNSTEP
  9. IO::Compress - IO Interface to compressed data files/buffers
    • Version: 2.219 on 2026-03-09, with 19 votes
    • Previous CPAN version: 2.218 was before
    • Author: PMQS
  10. JSON::Schema::Modern - Validate data against a schema using a JSON Schema
    • Version: 0.633 on 2026-03-13, with 16 votes
    • Previous CPAN version: 0.632 was 2 months, 7 days before
    • Author: ETHER
  11. Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
    • Version: 0.74 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.74 was 1 day before
    • Author: DANAJ
  12. MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
    • Version: 2.040000 on 2026-03-09, with 29 votes
    • Previous CPAN version: 2.039000 was 8 days before
    • Author: MICKEY
  13. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260308 on 2026-03-08, with 44 votes
    • Previous CPAN version: 5.20260220 was 15 days before
    • Author: BINGOS
  14. OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
    • Version: 0.7007 on 2026-03-13, with 15 votes
    • Previous CPAN version: 0.7006 was 10 months, 29 days before
    • Author: ETJ
  15. less - The Perl 5 language interpreter
    • Version: 5.042001 on 2026-03-08, with 2248 votes
    • Previous CPAN version: 5.042001 was 14 days before
    • Author: SHAY
  16. SPVM - The SPVM Language
    • Version: 0.990146 on 2026-03-14, with 36 votes
    • Previous CPAN version: 0.990145 was before
    • Author: KIMOTO
  17. Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
    • Version: 1.044 on 2026-03-09, with 14 votes
    • Previous CPAN version: 1.043 was 8 months, 5 days before
    • Author: CHOROBA
  18. Test::Routine - composable units of assertion
    • Version: 0.032 on 2026-03-12, with 13 votes
    • Previous CPAN version: 0.031 was 2 years, 11 months before
    • Author: RJBS
  19. WWW::Mechanize::Chrome - automate the Chrome browser
    • Version: 0.76 on 2026-03-13, with 22 votes
    • Previous CPAN version: 0.75 was 4 months, 12 days before
    • Author: CORION
  20. X11::korgwm - a tiling window manager for X11
    • Version: 6.1 on 2026-03-08, with 14 votes
    • Previous CPAN version: 6.0 was before
    • Author: ZHMYLOVE

(dcxxvi) metacpan weekly report - Langertha

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 61

Week's winner: Langertha (+3)

Build date: 2026/03/14 22:28:35 GMT


Clicked for first time:


Increasing its reputation:

If you’re running a development version of git built from master or next, you’ve probably seen it already. Today I was inspecting the git logs of git and found this little gem. It supports my workflow to the max.

You can now configure git status to compare other branches with your current branch. When you set status.comparebranches to @{upstream} and @{push}, the status output shows both how far you’ve diverged from your upstream branch and from your push branch. For those, like me, who track an upstream branch that differs from their push branch, this is a mighty fine feature!
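I’m going from the description above here, so treat this as a hypothetical sketch: the exact option syntax may differ in the final release, and it requires a development build of git.

```shell
# hypothetical: compare status against both the upstream and push branches
git config status.comparebranches '@{upstream}'
git config --add status.comparebranches '@{push}'

git status   # should now report divergence from both configured branches
```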

TL;DR

I didn’t like how the default zsh prompt truncation works. My solution, used in my own custom-made prompt (fully supported by promptinit), uses a custom precmd hook to dynamically determine the terminal’s available width.

Instead of blindly chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, only removing or shortening non-critical segments in the middle to keep the PS1 clean, contextual, and on a single line. This is done via a so-called “Zig-Zag” pattern and string splitting on certain delimiters.

In zsh you can use CORRECT_IGNORE_FILE to exclude files from spelling correction (or autocorrect for commands). While handy, it is somewhat limited because it is global. I wanted to ignore files only for git and not for other commands, but I haven’t found a way to target only git without making a wrapper around git (which I don’t want to do).

So I wrote an autoloaded function that does this for me. The idea is rather simple. In your .zshrc you set a zstyle that tells which file should be ignored based on files (or directories) that exist in the current directory. Based on this you build the CORRECT_IGNORE_FILE environment variable or you just unset it. This function is then hooked into the chpwd action. I went with three default options, check dir, file, or just exist: d, f, or e. File wins, then directory, then exists.