Shortcuts: s show h hide n next p prev

RIP nginx - Long Live Apache

nginx is dead. Not metaphorically dead. Not “falling out of favor” dead. Actually, officially, put-a-date-on-it dead.

In November 2025 the Kubernetes project announced the retirement of Ingress NGINX — the controller running ingress for a significant fraction of the world’s Kubernetes clusters. Best-effort maintenance until March 2026. After that: no releases, no bugfixes, no security patches. GitHub repositories go read-only. Tombstone in place.

And before the body was even cold, we learned why. IngressNightmare — five CVEs disclosed in March 2025, headlined by CVE-2025-1974, rated 9.8 critical. Unauthenticated remote code execution. Complete cluster takeover. No credentials required. Wiz Research found over 6,500 clusters with the vulnerable admission controller publicly exposed to the internet, including Fortune 500 companies. 43% of cloud environments vulnerable. The root cause wasn’t a bug that could be patched cleanly - it was an architectural flaw baked into the design from the beginning. And the project that ran ingress for millions of production clusters was, in the end, sustained by one or two people working in their spare time.

Meanwhile Apache has been quietly running the internet for 30 years, governed by a foundation, maintained by a community, and looking increasingly like the adult in the room.

Let’s talk about how we got here.

Apache Was THE Web Server

Before we talk about what went wrong, let’s remember what Apache actually was. Not a web server. THE web server. At its peak Apache served over 70% of all websites on the internet. It didn’t win that position by accident - it won it by solving every problem the early web threw at it. Virtual hosting. SSL. Authentication. Dynamic content via CGI and then mod_perl. Rewrite rules. Per-directory configuration. Access control. Compression. Caching. Proxying. One by one, as the web evolved, Apache evolved with it, and the industry built on top of it.

Apache wasn’t just infrastructure. It was the platform on which the commercial internet was built. Every hosting provider ran it. Every enterprise deployed it. Every web developer learned it. It was as foundational as TCP/IP - so foundational that most people stopped thinking about it, the way you stop thinking about running water.

Then nginx showed up with a compelling story at exactly the right moment.

The Narrative That Stuck

The early 2000s brought a new class of problem - massively concurrent web applications, long-polling, tens of thousands of simultaneous connections. The C10K problem was real and Apache’s prefork MPM - one process per connection - genuinely struggled under that specific load profile. nginx’s event-driven architecture handled it elegantly. The benchmarks were dramatic. The config was clean and minimal, a breath of fresh air compared to Apache’s accumulated complexity. nginx felt modern. Apache felt like your dad’s car.

The “Apache is legacy” narrative took hold and never let go - even after the evidence for it evaporated.

Apache gained mpm_event, bringing the same non-blocking I/O and async connection handling that nginx was celebrated for. The performance gap on concurrent connections essentially closed. Then CDNs solved the static file problem at the architectural level - your static files live in S3 now, served from a Cloudflare edge node milliseconds from your user, and your web server never sees them. The two pillars of the nginx argument - concurrency and static file performance - were addressed, one by Apache’s own evolution and one by infrastructure that any serious deployment should be using regardless of web server choice.

But nobody reruns the benchmarks. The “legacy” label outlived the evidence by a decade. A generation of engineers learned nginx first, taught it to the next generation, and the assumption calcified into received wisdom. Blog posts from 2012 are still being cited as architectural guidance in 2025.

What Apache Does That nginx Can’t

Strip away the benchmark mythology and look at what these servers actually do when you need them to do something hard.

Apache’s input filter chain lets you intercept the raw request byte stream mid-flight - before the body is fully received - and do something meaningful with it. I’m currently building a multi-server file upload handler with real-time Redis progress tracking, proper session authentication, and CSRF protection implemented directly in the filter chain. Zero JavaScript upload libraries. Zero npm dependencies. Zero supply chain attack surface. The client sends bytes. Apache intercepts them. Redis tracks them. Done. nginx needs a paid commercial module to get close. Or you write C. Or you route around it to application code and wonder why you needed nginx in the first place.

Apache’s phase handlers let you hook into the exact right moment of the request lifecycle - post-read, header parsing, access control, authentication, response - each phase a precise intervention point. mod_perl embeds a full Perl runtime in the server with persistent state, shared memory, and pre-forked workers inheriting connection pools and compiled code across requests. mod_security gives you WAF capabilities your “modern” stack is paying a vendor for. mod_cache is a complete RFC-compliant caching layer that nginx reserves for paying customers.

And LDAP - one of the oldest enterprise authentication requirements there is. With mod_authnz_ldap it’s a few lines of config:

AuthType Basic
AuthName "Corporate Login"
AuthBasicProvider ldap
AuthLDAPURL ldap://ldap.company.com/dc=company,dc=com
AuthLDAPBindDN "cn=apache,dc=company,dc=com"
AuthLDAPBindPassword secret
Require ldap-group cn=developers,ou=groups,dc=company,dc=com

Connection pooling, SSL/TLS to the directory, group membership checks, credential caching - all native, all in config, no code required. With nginx you’re reaching for a community module with an inconsistent maintenance history, writing Lua, or standing up a separate auth service and proxying to it with auth_request - which is just mod_authnz_ldap reimplemented badly across two processes with an HTTP round trip in the middle.

Apache Includes Everything You’re Now Paying For

Look at Apache’s feature set and you’re reading the history of web infrastructure, one solved problem at a time. SSL termination? Apache had it before cloud load balancers existed to take it off your plate. Caching? mod_cache predates Redis by years. Load balancing? mod_proxy_balancer was doing weighted round-robin and health checks before ELB was a product. Compression, rate limiting, IP-based access control, bot detection via mod_security - Apache had answers to all of it before the industry decided each problem deserved its own dedicated service, its own operations overhead, and its own vendor relationship.

Apache didn’t accumulate features because it was undisciplined. It accumulated features because the web kept throwing problems at it and it kept solving them. The fact that your load balancer now handles SSL termination doesn’t mean Apache was wrong to support it - it means Apache was right early enough that the rest of the industry eventually built dedicated infrastructure around the same idea.

Now look at your AWS bill. CloudFront for CDN. ALB for load balancing and SSL termination. WAF for request filtering. ElastiCache for caching. Cognito for authentication. API Gateway for routing. Each one a line item. Each one a managed service wrapping functionality that Apache has shipped for free since before most of your team was writing code.

Amazon Web Services is, in a very real sense, Apache’s feature set repackaged as paid managed infrastructure. They looked at what the web needed, looked at what Apache had already solved, and built a business around operating those solutions at scale so you didn’t have to. That’s a legitimate value proposition - operations is hard and sometimes paying AWS is absolutely the right answer. But if you’re running a handful of servers and paying for half a dozen AWS services to handle concerns that Apache handles natively, maybe set the Wayback Machine to 2005, spin up Apache, and keep the credit card in your pocket.

Grandpa wasn’t just ahead of his time. Grandpa was so far ahead that Amazon built a cloud business catching up to him.

So Why Did You Choose nginx?

Be honest. The real reason is that you learned it first, or your last job used it, or a blog post from 2012 told you it was the modern choice. Maybe someone at a conference said Apache was legacy and you nodded along because everyone else was nodding. That’s how technology adoption works - narrative momentum, not engineering analysis.

But those nginx blinders have a cost. And the Kubernetes ecosystem just paid it in full.

The Cost of the nginx Blinders

The nginx Ingress Controller became the Kubernetes default early in the ecosystem’s adoption curve and the pattern stuck. Millions of production clusters. The de-facto standard. Fortune 500 companies. The Swiss Army knife of Kubernetes networking - and that flexibility was precisely its undoing.

The “snippets” feature that made it popular - letting users inject raw nginx config via annotations - turned out to be an unsanitizable attack surface baked into the design. CVE-2025-1974 exploited this to achieve unauthenticated RCE via the admission controller, giving attackers access to all secrets across all namespaces. Complete cluster takeover from anything on the pod network. In many common configurations the pod network is accessible to every workload in your cloud VPC. The blast radius was the entire cluster.

The architectural flaw couldn’t be fixed without gutting the feature that made the project worth using. So it was retired instead.

Here is the part nobody is saying out loud: Apache could have been your Kubernetes ingress controller all along.

The Apache Ingress Controller exists. It supports path and host-based routing, TLS termination, WebSocket proxying, header manipulation, rate limiting, mTLS - everything Ingress NGINX offered, built on a foundation with 30 years of security hardening and a governance model that doesn’t depend on one person’s spare time. It doesn’t have an unsanitizable annotation system because Apache’s configuration model was designed with proper boundaries from the beginning. The full Apache module ecosystem - mod_security, mod_authnz_ldap, the filter chain, all of it - available to every ingress request.

The Kubernetes community never seriously considered it. nginx had the mindshare, nginx got the default recommendation, nginx became the assumed answer before the question was even finished. Apache was dismissed as grandpa’s web server by engineers who had never actually used it for anything hard - and so the ecosystem bet its ingress layer on a project sustained by volunteers and crossed its fingers.

The nginx blinders cost the industry IngressNightmare, 6,500 exposed clusters, and a forced migration that will consume engineering hours across thousands of organizations in 2026. Not because Apache wasn’t available. Because nobody looked.

nginx is survived by its commercial fork nginx Plus, approximately 6,500 vulnerable Kubernetes clusters, and a generation of engineers who will spend Q1 2026 migrating to Gateway API - a migration they could have avoided entirely.

Who’s Keeping The Lights On

Here’s the conversation that should happen in every architecture review but almost never does: who maintains this and what happens when something goes wrong?

For Apache the answer has been the same for over 30 years. The Apache Software Foundation - vendor-neutral, foundation-governed, genuinely open source. Security vulnerabilities found, disclosed responsibly, patched. A stable API that doesn’t break your modules between versions. Predictable release cycles. Institutional stability that has outlasted every company that ever tried to compete with it.

nginx’s history is considerably more complicated. Written by Igor Sysoev while employed at Rambler, ownership murky for years, acquired by F5 in 2019. Now a critical piece of infrastructure owned by a networking hardware vendor whose primary business interests may or may not align with the open source project. nginx Plus - the version with the features that actually compete with Apache on a level playing field - is commercial. OpenResty, the variant most people reach for when they need real programmability, is a separate project with its own maintenance trajectory.

The Ingress NGINX project had millions of users and a maintainership you could count on one hand. That’s not a criticism of the maintainers - it’s an indictment of an ecosystem that adopted a critical infrastructure component without asking who was keeping the lights on.

Three decades of adversarial testing by the entire internet is a security posture no startup’s stack can match. The Apache Software Foundation will still be maintaining Apache httpd when the company that owns your current stack has pivoted twice and been acqui-hired into oblivion.

Long Live Apache

The engineers who dismissed Apache as legacy were looking at a 2003 benchmark and calling it a verdict. They missed the server that anticipated every problem modern infrastructure is still solving, that powered the internet before AWS existed to charge you for the privilege, and that was sitting right there in the Kubernetes ecosystem waiting to be evaluated while the community was busy betting critical infrastructure on a volunteer project with an architectural time bomb in its most popular feature.

Grandpa didn’t just know what he was doing. Grandpa was building the platform you’re still trying to reinvent - badly, in JavaScript, with a vulnerability disclosure coming next Tuesday and a maintainer burnout announcement the Tuesday after that.

The server is fine. It was always fine. Touch grass, update your mental model, and maybe read the Apache docs before your next architecture meeting.

RIP nginx Ingress Controller. 2015-2026. Maintained by one guy in his spare time. Missed by the 43% of cloud environments that probably should have asked more questions.

Sources

  • IngressNightmare - CVE details and exposure statistics Wiz Research, March 24, 2025 https://www.wiz.io/blog/ingress-nginx-kubernetes-vulnerabilities
  • Ingress NGINX Retirement Announcement Kubernetes SIG Network and Security Response Committee, November 11, 2025 https://kubernetes.io/blog/2025/11/11/ingress-nginx-retirement/
  • Ingress NGINX CVE-2025-1974 - Official Kubernetes Advisory Kubernetes, March 24, 2025 https://kubernetes.io/blog/2025/03/24/ingress-nginx-cve-2025-1974/
  • Transitioning Away from Ingress NGINX - Maintainership and architectural analysis Google Open Source Blog, February 2026 https://opensource.googleblog.com/2026/02/the-end-of-an-era-transitioning-away-from-ingress-nginx.html
  • F5 Acquisition of nginx F5 Press Release, March 2019 https://www.f5.com/company/news/press-releases/f5-acquires-nginx-to-bridge-netops-and-devops

Disclaimer: This article was written with AI assistance during a long discussion on the features and history of Apache and nginx, drawing on my experience maintaining and using Apache over the last 20+ years. The opinions, technical observations, and arguments are entirely my own. I am in no way affiliated with the ASF, nor do I have any financial interest in promoting Apache. I have been using and benefiting from Apache since 1998 and continue to discover features and capabilities that surprise me even to this day.

Perl 🐪 Weekly #761 - Perl on WhatsApp

dev.to #perl

Originally published at Perl Weekly 761

Hi there!

Do you use WhatsApp? There is now a WhatsApp group for Perl. Join us!

Thanks to Mikko Koivunalho we now have a graph on the MetaCPAN stats page.

Perl-wise it was a rather weak week: we don't have many articles. On the other hand we are back with a new live online event where we are going to work on one or more CPAN modules. I hope this will encourage more of you to start contributing to open source projects in Perl and maybe also to write articles about your journey. Register here! If the scheduled time-slot is not good for you, come to our WhatsApp group and let's discuss it!

Enjoy your week!

--
Your editor: Gabor Szabo.

Articles

ANN: CPAN::MetaCurator V 1.08, Perl.Wiki V 1.40 etc

Treating GitHub Copilot as a Contributor

Dave Cross just posted this article explaining how to use Github co-pilot as a contributor to your project. We will give it a try next meeting, but you can already try it yourself on one of the TODO items in our list.

Web

Perl/Plack Middleware for Emulating An Apache HTTP Server

Keith released a couple of new Plack middleware modules that he uses as a test web server for pages that will ultimately be under Apache httpd.

Websockets in Catalyst

A detailed example with explanation and use-case.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 362

Welcome to a new week with a couple of fun tasks "Echo Chamber" and "Spellbound Sorting". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 361

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Zeckendorf Representation" and "Find Celebrity" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

TWC361

The blog post presents clear and idiomatic Perl solutions for both the Zeckendorf representation and the celebrity problem, showcasing practical logic and efficient algorithmic style. The code is easy to follow and well-structured, making it a great example of solving weekly challenge tasks with solid Perl techniques.

Celebrity Representation

The post showcases a clean and thoughtful Raku solution to computing Zeckendorf representations, demonstrating idiomatic use of sequences and recursion in the language. It's both well-structured and easy to follow, making it a valuable reference for Raku practitioners tackling algorithmic challenges.

numbers

The write-up presents clear and well-structured Raku solutions for both the Zeckendorf sequence and the celebrity problem, with straightforward logic that's easy to follow and learn from. The use of idiomatic Raku constructs and explanatory comments makes the post a solid reference for anyone tackling similar challenges.

Perl Weekly Challenge 361

The post delivers clear and practical Perl implementations for both the Zeckendorf representation and the celebrity detection problems, with complete working scripts and illustrative example outputs. Its well‑organised explanations and real usage examples make it an excellent reference for Perl developers tackling these classic algorithmic tasks.

Was Fibonacci ever a Celebrity?

The post offers solid, well-commented Perl implementations for both TWC361 tasks, clearly expressing the logic behind Zeckendorf decomposition and celebrity detection. The structured approach and readable code make it a valuable example for anyone exploring algorithmic solutions in Perl.

Where Everybody Knows Your Name

The write-up delivers clear and well-structured multi-language solutions for both the Zeckendorf representation and the celebrity detection tasks, with thoughtful explanations of the greedy algorithm and candidate evaluation. The step-by-step approach and readable Perl, Raku, Python, and Elixir code make the post a practical and educational resource for anyone exploring these classic algorithmic problems.

Zeckendorf, the celebrity

The Challenge 361 post clearly states the two tasks - computing the Zeckendorf representation of a number and finding a celebrity in a matrix, along with illustrative examples that make the problem definitions easy to grasp. Its structured presentation of inputs and expected outputs helps readers understand the algorithmic goals before diving into solutions, making it a solid reference for anyone exploring these classic programming challenges.

Zeckendorf Representation

The write-up presents a memory-efficient and well-explained Perl implementation for computing the Zeckendorf representation, cleverly using only two Fibonacci values at a time and clear test examples to illustrate the logic. Its structured presentation and readable code make it a helpful reference for anyone interested in elegant algorithmic Perl solutions.

Find Celebrity

The celebrity finder solution delivers a clear and self-contained Perl implementation that uses readable grep-based checks to identify the celebrity by row and column conditions, backed by several solid test cases illustrating correctness. Its straightforward logic and minimal reliance on external modules make it both accessible and practical for Perl programmers exploring matrix-based algorithms.

The Weekly Challenge #361

The Perl solutions for the challenge combine clear logic with well-commented, idiomatic code that makes both the Zeckendorf representation and celebrity detection easy to follow. The step-by-step explanations and practical test cases offer a solid, educational reference for Perl programmers engaging with classic algorithmic tasks.

Celebrity Zeckendorf

The post offers a clear, language-agnostic walk through both challenge tasks, computing the Zeckendorf representation and finding a celebrity in a matrix, with working code in several languages and readable explanations of the greedy Fibonacci strategy and set-based filtering. Its inclusion of multiple idiomatic implementations makes it a practical and educational read for programmers exploring these classic algorithmic problems.

Representing a celebrity

The post delivers clear, well‑structured Python (with Perl) implementations for both the Zeckendorf representation and celebrity detection tasks, showcasing thoughtful logic and solid error handling. The explanations and example inputs/outputs make the solutions easy to understand and follow, making it a useful resource for anyone practicing these classic algorithmic problems.

Weekly collections

NICEPERL's lists

Great CPAN modules released last week;
MetaCPAN weekly report.

Events

Perl Maven online: Code-reading and Open Source contribution

March 3, 2026

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

Perl Toolchain Summit 2026

April 23-26, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Websockets in Catalyst

blogs.perl.org

Recently I've heard quite a few people comment about how Catalyst can't do websockets. I'm writing this article to dispel that myth, and document how you can retro-fit websockets into an existing Catalyst application in a materially useful way, and without rewriting any of your existing code. I'll also show a case where it actually makes sense to do this rather than using a framework designed for that purpose.

(this article has been updated to use Request->io_fh, which is Catalyst's official escape-hatch for supporting websockets and comet polling. Also the earlier version omitted the 'begin' action)

Backstory

I studied all the different ways to do websockets in Perl for my presentation at TPC 2019. I think the talk turned out rather well, so I encourage people to watch it and look at the GitHub repo for example code, but the TL;DR of it was that websockets in PSGI are a bit of an under-specified hack, and you probably need to rewrite a bunch of your Plack code to accommodate non-blocking needs, and Mojolicious seemed like the obvious answer for implementing websockets. I concluded that anyone who needs to add event-driven features to their existing Plack-based web app should just write a new Mojo app to handle only the websockets, and then use reverse proxies to run the Mojo app under the same hostname as the Plack app. (I should also mention that since that talk, there is now PAGI which addresses the limitations of PSGI and offers an alternative to Mojolicious)

Since then, I have used Mojolicious a few times for event-driven hobby projects, but since I was writing it from scratch I just wrote the whole thing in Mojo. I had not had a request from any of my customers that required a websocket, so I had not actually gotten to test out my advice about making a hybrid app.

Interactive Feature Request

Last year, the opportunity finally came along. One of our customers has a web store which is written in Catalyst, and they wanted to add some features that would let sales representatives interact with customers while they were on the phone with the customer who was actively building a cart. They wanted to be able to quickly identify which cart belonged to the customer on the phone (who might be using an anonymous cart, not logged into an account) and interact with the customer by helping them edit their cart and possibly apply discounts to the cart and maybe send them links to pages on the site. While this could be implemented with polling, any lag between the phone conversation and what they saw in their browser was a potential for confusion, so the polling would need to be rather frequent to give the desired user experience. While we probably have enough capacity to handle some fast polling by a few sales reps, it would just be a messy way to implement it and possibly cause future problems if any of the queries got expensive. Implementing it in an event-driven manner was the clear winner. It was finally time to add websockets!

I started by following my advice from the TPC talk, but quickly ran into a snag. We have two systems of session management, one for public users and one for the admin/sales users. Due to swarms of bots hitting the site, the public user sessions are stored in Redis, while the admin-side sessions are using database tables. All of this has been nicely abstracted behind Catalyst plugins, and we have nice APIs to query users and their permissions. The first thing that a Mojolicious controller would need to do for an incoming connection is authenticate the client. I realized I was going to have to dig down into all the details of my Catalyst sessions and re-implement a bunch of that logic, and that seemed like a lot of effort. It would also mean that any future changes to session management would require updates to both the Catalyst and Mojolicious apps.

I knew I would still need a two-app aproach, because there was lots and lots of blocking code in the workers of the Catalyst app, and you can't have blocking code (of more than a few milliseconds) in an event-driven app. But, what if I just ran the same app twice, with all the websocket actions diverted to the Catalyst app running under Twiggy, and the rest sent to the existing app running under Gazelle? I would also need to build some more professional-looking structure around websocket handling so that it fits with Catalyst's object metaphor, since Catalyst doesn't provide official APIs for websockets. (Catalyst has an official escape-hatch for websockets and comet polling, but no structured modules around that)

Controller Design

Here's what I came up with. (anonymized and simplified a bit) (also the syntax highlighter doesn't recognize POD, so I had to prefix all the POD with '#'. pretend that isn't there)

First, I created a new Controller to hold all the new event-driven code.


package MyApp::Controller::Event;

# =head1 DESCRIPTION
# 
# This controller handles all event-driven (websocket)
# behavior of the admin interface.
# 
# Actions in this controller can only be served via the
# Twiggy event-driven webserver which runs from the
# myapp-twiggy docker container.  Accessing this controller
# from the normal Gazelle server gives an error message.
# 
# The Twiggy server is mounted via Traefik PathPrefix rule
# at /(redacted), but the request paths are not rewritten
# so Catalyst doesn't need to add any special prefixes when
# it generates links.
# 
# This controller uses the "instance per request" design,
# so Moose attributes only apply to the current user, and
# continue to only apply to the current user even after the
# event-driven callbacks have started.
# 
# =cut

use Moose;
use Scalar::Util 'refaddr';
use JSON::MaybeXS;
use namespace::clean;
use v5.36;

BEGIN { extends 'MyApp::Controller'; }

# One instance per request
sub ACCEPT_CONTEXT {
  my ($self, $c)= @_;
  return $self unless ref $c;
  $c->stash->{(__PACKAGE__)} //= do {
    $self= bless { %$self }, ref $self;
    $self->context($c);
    $self;
  };
}

I should note here that this borrows the workings of Catalyst::Component::InstancePerContext, but since that module only saves four lines of code, I just paste it into each controller so that it's clear to everyone what exactly is going on to provide InstancePerContext behavior, and have one fewer CPAN dependency.

Next, I chose a design where the Catalyst context object and controller object are long-lived, with references held globally and cleared by the disconnect event of the websocket.


# =attribute context
# 
# A weak reference to the Catalyst context ($c).
# 
# =attribute fh
# 
# The file handle of the websocket.
# 
# =attribute websocket
# 
# The AnyEvent::Websocket::Connection, if one has been
# created.
# 
# =attribute io_session_name
# 
# A convenient name to identify the websocket session in
# logs.  Currently "$username-$n" where $n counts upward
# on a per-user basis.
# 
# =cut

# This holds the top-level strong references to websocket
# session instances.  It is keyed by refaddr($self) and
# holds values of [ $self, $c ].
# The Controller::Event instance ($self) holds references
# to the websocket, Postgres listeners, and a weak-ref
# back to the Catalyst context ($c).  The context holds a
# strong reference to the stash, which has a strong
# reference to $self.

our %active_contexts;

has context         => ( is => 'rw', weak_ref => 1 );
has fh              => ( is => 'rw' );
has websocket       => ( is => 'rw' );
has io_session_name => ( is => 'ro', lazy_build => 1,
  predicate => 'has_io_session_name' );

sub _build_io_session_name($self) {
  state %next_n_for_user;
  my $uname= $self->context->user->username;
  return $uname . '-' . ++$next_n_for_user{$uname};
}

# =method io_connect
# 
# Called by AnyEvent::Websocket::Server when the websocket
# handshake is complete.  It receives a $promise that is
# either a websocket object or an exception.
# 
# =method io_disconnect
# 
# This is called every time we receive a disconnect event
# from a websocket client.
# 
# =cut

sub io_connect($self, $promise) {
  unless(eval { $self->websocket($promise->recv); 1 }) {
    warn "Rejected connection '$sess_name': $@\n";
    close($self->fh);
    delete $active_contexts{refaddr $self};
    return;
  }
  Scalar::Util::weaken($self);
  $self->websocket->on(each_message => sub($conn, @args) {
    eval { $self->io_message(@args); 1 }
      or warn "Exception for $sess_name: $@";
  });
  $self->websocket->on(finish => sub {
    eval { $self->io_disconnect; 1 }
      or warn "Exception for $sess_name: $@";
  });
}

sub io_disconnect($self) {
  delete $active_contexts{refaddr $self};
}

Event Plumbing

This is perhaps a topic for another article, but I have extensions on the DBIC Postgres connection of my app that enable some event-driven features. I should get that packaged for CPAN some day...


# =attribute cart_listener
# 
# This is an instance of
# L<DBIx::Class::Storage::DBI::PgWithEventListeners::Listener>
# which delivers Postgres events named 'cart_activity' to method
# L</on_cart_activity>.  The object is lazy-built.  Note that
# DBIx::Class::Storage::DBI::PgWithEventListeners keeps track
# of whether objects exist for a Pg channel, so LISTEN
# happens when the first listener is created, and
# UNLISTEN happens after the last listener is garbage
# collected.
# 
# =cut

has cart_listener => ( is => 'rw', lazy_build => 1,
  predicate => 'has_cart_listener',
  clearer => 'clear_cart_listener'
);

sub _build_cart_listener($self) {
  my $db= $self->context->model('DB');

  # Ensure we are dispatching events via event loop.
  # This only works on Twiggy.  The Gazelle-served
  # instance of the app doesn't call this.
  $db->storage->dispatch_via_anyevent;

  # Each instance of Controller::Event has its own listener.
  # As long as one of these objects exists, postgres will
  # be listening to "cart_activity" events.
  return $db->storage->new_listener(
    'cart_activity', $self, 'on_cart_activity'
  );
}

Then some methods that send and receive the events. The events I'm generating from Postgres are fairly benign (just indicating which records have changed), so they can just be forwarded directly out to the websocket clients. The JavaScript client then uses the information about what has changed to decide which normal AJAX requests to execute to refresh the screen. Those AJAX requests go to the normal Gazelle-based web app instance. I'm using this Event controller only for the delivery of change notifications.
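For context, the Postgres side of this can be as simple as a trigger calling the standard pg_notify function. This is only a hypothetical sketch - the table and trigger names are invented here, and the real application's payloads may differ:

```sql
-- Hypothetical: fire a 'cart_activity' notification whenever a cart row
-- changes.  The payload is just the affected record's id, matching the
-- "benign" change-notification events described above.
CREATE OR REPLACE FUNCTION notify_cart_activity() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('cart_activity', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER cart_activity_notify
  AFTER INSERT OR UPDATE ON cart
  FOR EACH ROW EXECUTE FUNCTION notify_cart_activity();
```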


# =method send_event
# 
#   $self->send_event($data);
# 
# Serialize $data into JSON and send to the client over the
# websocket.
# 
# =method on_cart_activity
# 
# This is called by the database listener every time relevant
# cart activity has occurred.  It relays the event to the
# websocket client.
# 
# =cut

sub send_event($self, $data) {
  $self->websocket->send(JSON::MaybeXS->new->encode($data));
}

sub on_cart_activity($self, $channel, $pg_pid, $payload) {
  $self->send_event([ cart_activity => $payload ]);
}

# =method io_message
# 
# This is called every time we receive a packet from the
# websocket client.  Right now the client just requests to
# listen to a feed of events like 'cart_activity'.
# Actions the client takes in response to these events are
# sent as normal HTTP requests to other controllers.
# 
# =cut

sub io_message($self, $msg, @) {
  if ($msg->is_text) {
    my $data= JSON::MaybeXS->new->decode($msg->decoded_body);
    if (($data->{listen} // '') eq 'cart_activity') {
      $self->cart_listener; # lazy-build
    }
  }
}

Actions

I added a 'begin' action to prevent any action of this controller from running under the wrong server.


sub begin : Private ($self, $c) {
  my $env= $c->req->env;
  if (!defined $env->{'psgix.io'}) {
    # Must be using a PSGI server that exposes the file handle to us
    $c->detach(HTTP => 500,
      [ 'psgix.io is not supported by this server' ]);
  }
  if (!$env->{'psgi.nonblocking'}) {
    # Can't do anything useful with a websocket unless the
    # webserver is written with an I/O event loop
    $c->detach(HTTP => 500,
      [ 'Nonblocking communication not supported by this server' ]);
  }
  # Early (obsolete) versions of WebSocket required reading
  # additional body bytes, which is awkward to do in a
  # nonblocking manner.  New versions supply header
  # Sec-Websocket-Key.  If the client supplied this header,
  # we can skip the older body-based protocol.
  # If not, just refuse the connection.
  if (($c->req->headers->header('Upgrade')//'') eq 'websocket'
    && !$c->req->headers->header('Sec-Websocket-Key')
  ) {
    $c->detach(HTTP => 400,
      [ 'Unsupported version of WebSocket' ]);
  }

  $self->next::method($c); # auth check
}

And finally, the Websocket-handling action:


# =action /(redacted)/io
# 
# This is the endpoint for making websocket connections.
# The browser must send the header 'Upgrade: websocket'
# and the user must be logged in and be permitted to use
# event features.  Websocket events are then dispatched
# to the L</io_message> and L</io_disconnect> methods.
# 
# =cut

sub io : Local Args(0) ($self, $c) {
  my $h= $c->req->headers;
  ($h->header('Upgrade')//'') eq 'websocket'
    or $c->detach(HTTP => 400, ['Expected websocket']);
  $c->user && $c->check_user_roles('event_listener')
    or $c->detach(HTTP => 403, ['Cannot monitor events']);

  # lazy-load, so that the normal Gazelle app instance
  # doesn't need to load AnyEvent
  require AnyEvent::WebSocket::Server;

  # trigger building of io_session_name
  my $sess_name= $self->io_session_name;

  # Accessing this file handle tells Catalyst that it is no
  # longer responsible for writing a response.
  my $req_fh= $c->req->io_fh;

  # Optional:
  # Future-proof this code by dup()-ing the PSGI handle to
  # a new FD number and then closing the original.
  # This guarantees that neither Catalyst nor Twiggy can
  # disturb the communication with this client.
  open(my $dup_fh, '>&', $req_fh)
    or die "dup psgix.io: $!";
  close($req_fh);
  $self->fh($dup_fh);

  # save a ref to ourselves to prevent garbage collection.
  # note that ->context is a weak-ref, so need to hold a
  # ref to that too.
  $active_contexts{refaddr $self}= [ $self, $c ];

  my $env= $c->req->env;
  AnyEvent::WebSocket::Server->new
    ->establish_psgi({ %$env, 'psgix.io' => $dup_fh })
    ->cb(sub($promise) { $self->io_connect($promise) });

  # for Catalyst logging only; the 101 response is sent by
  # AnyEvent::WebSocket::Server, and catalyst will not
  # write anything now that we've touched io_fh.
  $c->res->code(101);
  $c->res->body('');
  $c->detach();
}

The key piece of this action is the io_fh attribute of the Catalyst request. Once you have asked for that file handle, Catalyst will no longer write a response to PSGI.

I wasn't previously aware of that detail, so my earlier version of this code was duplicating the file descriptor and closing the file handle to ensure that Catalyst and Twiggy can't possibly break the websocket. I think that's still a good method of future-proofing the code, so I'm leaving it in the example, but you could choose to omit it.

The file handle is then handed off to a new instance of AnyEvent::WebSocket::Server, and that object sets up event callbacks that conduct the remainder of the websocket handshake and then deliver either a websocket object or an exception via $promise. I pass that to my io_connect method, defined in the earlier snippets.

Reverse Proxy

As I mentioned earlier, I have one container running the app under Gazelle (a pre-forking worker pool where each worker handles one request at a time) and another running it under Twiggy (where one process juggles multiple event-driven requests interleaved with each other). The only differences between these containers are the command and the Traefik labels.

Docker myapp-gazelle command:

["plackup","-s","Gazelle","-p","3000","--max-reqs-per-child","10000","myapp.psgi"]

Docker myapp-twiggy command:

["plackup","-s","Twiggy","-p","3000","myapp.psgi"]

I'm a fan of the Traefik reverse proxy, mostly because of how nicely it integrates with Docker and LetsEncrypt. These are the relevant labels from the myapp-twiggy docker container:

  • "traefik.http.services.myapp-twiggy.loadbalancer.server.port=3000"
  • "traefik.http.services.myapp-twiggy.loadbalancer.server.scheme=http"
  • "traefik.http.routers.myapp-twiggy.entryPoints=https"
  • "traefik.http.routers.myapp-twiggy.priority=15"
  • "traefik.http.routers.myapp-twiggy.rule=(Host(redacted) && PathPrefix(/redacted) )"
  • "traefik.http.routers.myapp-twiggy.service=myapp-twiggy"

I have omitted some rules for middlewares and TLS. The main points are that the priority=15 gives this router a higher priority than the router of myapp-gazelle, and Host and PathPrefix rules match only the paths served by my Event controller, leaving all the other requests to fall back to myapp-gazelle.
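For comparison, the myapp-gazelle container's labels would look much the same, with a broader rule and a lower priority. This is only a sketch by analogy - the original shows just the Twiggy labels, so these names and values are assumed:

```yaml
# Hypothetical labels for the fallback (Gazelle) container:
labels:
  - "traefik.http.services.myapp-gazelle.loadbalancer.server.port=3000"
  - "traefik.http.services.myapp-gazelle.loadbalancer.server.scheme=http"
  - "traefik.http.routers.myapp-gazelle.entryPoints=https"
  - "traefik.http.routers.myapp-gazelle.priority=10"
  - "traefik.http.routers.myapp-gazelle.rule=Host(redacted)"
  - "traefik.http.routers.myapp-gazelle.service=myapp-gazelle"
```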

Catalyst Websocket Tradeoffs

In retrospect, using Catalyst for websockets actually worked out even better than I anticipated.

  • I was able to re-use the authentication and sessions, as intended.
  • I was able to re-use the application's DBIC configuration instead of needing to implement the equivalent with Mojo::Pg. (passwords, on-connect settings, logging, trace/debug, etc)
  • Homogeneous logging of HTTP requests/responses/errors
  • No additional reverse-proxy configuration (e.g. getting Mojolicious to trust the same reverse-proxy headers that Plack already trusts)
  • Docker container configuration is nearly identical
  • Avoid introducing a completely different framework into the project, which helps with maintenance.

The only downside is that the session setup code has a brief blocking behavior as it queries the database, during which Twiggy cannot also be delivering websocket events. This could theoretically make a denial-of-service attack easier, but just barely. Any attack distributed enough to dodge the connection-throttling middleware would be a problem regardless of some milliseconds lost to blocking database queries. I could always add a worker pool of Twiggy instances if I needed to.

Extras

It's important to ensure that the references to the controller and Catalyst context go out of scope when websockets disconnect. While initially writing the code above, I used the following "destructor logger" to log every time an object I cared about got destroyed. Just create an instance and then assign it to a random hash element of the object of interest.


package DestructorLogger {
  use v5.36;
  use Log::Any '$log';
  sub new($class, $msg) {
    bless \$msg, $class;
  }
  sub DESTROY($self) {
    $log->info("Destroyed: $$self");
  }
}

...
$c->{destructor_logger}= DestructorLogger->new('context');
$self->{destructor_logger}= DestructorLogger->new('controller');

I should also mention that I removed a lot of the logging from the code in this article, since most of it was rather app-specific, and cluttered the view a bit.

I also have a controller action that serves a static page that can test the websocket server and see the events it is sending:


# =head2 GET /(redacted)
# 
# This is a simple status page to verify that this controller
# is running through the correct webserver and delivering the
# expected events.
# 
# =cut

sub index : Path Args(0) ($self, $c) {
  $c->detach(HTTP => 200, [ <<~HTML ]);
    <!DOCTYPE html>
    <html>
    <head>
      <title>Event Server</title>
      <script src="/(redacted)/jquery-3.4.0.js"></script>
      <script>
      window.liveupdate= {
        init: function(ws_uri) {
          var self= this;
          this.ws_uri= ws_uri;
          \$('.chatline').on('keypress', function(event) {
            self.onkeypress(event.originalEvent)
          });
          // Connect WebSocket and initialize events
          console.log('connecting WebSocket '+this.ws_uri);
          this.ws= new WebSocket(this.ws_uri);
          this.ws.onopen= function(event) {
            console.log("onopen");
          };
          this.ws.onmessage= function(event) {
            self.onmessage(event)
          };
          this.ws.onclose= function(event) {
            console.log("onclose");
          };
        },
        onmessage: function(event) {
          \$('body').append(
            document.createTextNode(event.data+"\\n"))
        },
        onkeypress: function(event) {
          if (event.key == 'Enter')
            this.onsend();
        },
        onsend: function(event) {
          var text= \$('.chatline').val();
          if (text) {
            this.ws.send(text);
            \$('.chatline').val('');
          }
        }
      };
      \$(document).ready(function() {
        var loc= '' + window.location;
        window.liveupdate.init(
          loc.replace(/^http/, 'ws')
             .replace(/\\/?\$/, '/io'))
      });
      </script>
    </head>
    <body>
      Serving @{[ scalar keys %active_contexts ]} connections
      <br><input class="chatline" type="text">
    </body>
    </html>
    HTML
}
cpan/CPAN-Meta-Requirements - Update to version 2.145

2.145     2026-01-31 12:17:04+01:00 Europe/Brussels
    - Correct "normalization of 0" code to work correctly with vstrings
      (thanks, Graham Knop!)
Welcome to the Week #362 of The Weekly Challenge.
Thank you Team PWC for your continuous support and encouragement.
cpan/CPAN-Meta: Update to CPAN version 2.150013

From Changes:

2.150013  2026-02-20 12:44:18-05:00 America/New_York

  [FIXED]
  - Fix an incompatibility with newer CPAN::Meta::Requirements

Committer: Ran:
        perl -Icpan/CPAN-Meta/lib Porting/makemeta -j
        perl -Icpan/CPAN-Meta/lib Porting/makemeta -y

Updated name of CPAN releasor in Maintainers.pl.

Beautiful Perl feature: trailing commas

dev.to #perl

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

The last two posts, about lexical scoping and dynamic scoping, addressed deep subjects and were therefore very long; this time we'll discuss trailing commas, a much lighter subject ... that nevertheless deserves detailed comments!

Trailing commas, basic principle

Programming languages that support trailing commas are able to parse a list like

(1, 2, 3, 5, 8,)

without generating an error. In other words, it is legal to put a comma after the last item in the list; that comma is a no-op, so the list above is equivalent to (1, 2, 3, 5, 8). Of course the trailing comma is optional, not mandatory.

When it appears on a single line, like in that example, the trailing comma seems ridiculous; but the interest is when the list is written on several lines:

my @fruits = (
              "apples",
              "oranges",
              "bananas",
             );

Here the trailing comma facilitates later changes to the code, should these become necessary. In this example if we want to comment out bananas, or to switch the order of the fruits, we can operate on single lines without having to care about which fruit comes last.

This feature is a transposition of a principle already familiar in all languages with blocks, namely the fact that the last statement in a block can indifferently have a semicolon or not:

if ($is_simple) {do_simple_stuff()} # no semicolon
else            {initialize();
                 process();
                 cleanup();         # with semicolon
                }

Other arguments are commonly invoked in favor of trailing commas, like the facts that diffs in version control systems are cleaner, or that it is easier to generate code programmatically. Going into such arguments would take too much space here, but discussions on the matter can easily be found on the Internet.

On the other hand, let us mention that some people object to trailing commas, arguing that enumerations in natural language never end with a comma, but with a full stop or some other punctuation mark that closes the list. That is true, but a sentence in natural language, once emitted, is not rewritten, whereas code is rewritten all the time - and that is precisely where trailing commas earn their keep.

History and comparison with other languages

Long ago, the venerable ANSI C language already accepted trailing commas in array initializers; the same for enums was added in C99. Both features propagated into languages of C heritage, like C++, Java, PHP, etc., and of course Perl. But Perl went further: since the beginning (version 1.0 in 1987), Perl has supported trailing commas in all kinds of lists, including parameters to subroutines, assignments to lists of variables, and more recently subroutine signatures:

my ($first,
    $second,
    @others,
   ) = @ARGV;

draw_diagram(Perl   => 1987,
             Python => 1991,
             Java   => 1995,
             );

sub transfer_content($source, 
                     $destination,
                     %options,
                    ) { ... }

I strongly suspect that this early design decision in Perl had an influence on the later conception of other programming languages, although I couldn't find any evidence to prove it - influences are rarely documented in design documents! Here is a historical picture:

  • Python has had trailing commas since the beginning (1991), but with some peculiarities that will be discussed below;
  • Java has accepted trailing commas since the beginning (1995), but only in array initializers and enums, as in C; a request to extend this to other lists was formulated in 2006 but was ignored;
  • JavaScript has accepted trailing commas in array literals since the beginning (1995); support for them in object literals was added in ES5 (2009), and for function definitions and calls in ES2017. The global picture is well documented in the MDN documentation, and many examples are shown in a LogRocket blog post. However, JavaScript has a strange edge case with array literals, which will be discussed below. Furthermore, beware that JSON - not a programming language but a data interchange format closely related to JavaScript - does not support trailing commas;
  • C++ inherited from C a restricted use of trailing commas; a recent proposal (2025) to extend the support to more general use cases is still pending;
  • Kotlin added general support for trailing commas in version 1.4.0 (2020);
  • PHP extended its support for trailing commas in version 8.0 (2020).

So nowadays there seems to be a clear tendency towards adoption of trailing commas in many major languages, and some languages are still working on extending their support in this area.

Edge cases in other languages

Trailing commas in Perl are true no-ops, in every context. Furthermore, intermediate commas in a list, or several commas at the end, are also allowed, without any semantic consequence; this is not very useful, but has the nice property of requiring absolutely no reasoning from the programmer, for example when large chunks of code are reshuffled in the course of a refactoring operation. By contrast, Python and JavaScript have edge cases, as shown in the rest of this chapter.

Python: the tuple exception

In Python the list expression [1, 2, 3,] is legal, but the expressions [1, 2, , 3,] or [1, 2, 3,,] are not: in other words, intermediate commas or multiple trailing commas are not allowed. This is just a minor syntactic restriction, not very annoying.

A more severe peculiarity is the "tuple exception", namely the fact that these two expressions are semantically different:

(123)    # single value
(123,)   # tuple with one member

Actually, the syntax with the trailing comma is the only way to write a singleton tuple. Another situation, related to the first, comes from the fact that in some contexts tuples are written without parentheses; so these two lines are again semantically different:

x = 123  # assign a scalar value to x
x = 123, # assign a singleton tuple to x

whereas with tuples of more than one element, the trailing comma is a true no-op:

x = 1, 2, 3  # assign a triple to x
x = 1, 2, 3, # same thing

Furthermore, in singleton lists (as opposed to singleton tuples), trailing commas are also a no-op:

y = [1]      # assign a singleton list to y
y = [1,]     # same thing

So when trailing commas appear in Python code, some thought is required to get the proper meaning.
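The tuple exception can be demonstrated directly (a small illustration, not from the original post):

```python
a = (123)    # parentheses only group: a is the integer 123
b = (123,)   # the trailing comma makes b a singleton tuple
assert isinstance(a, int) and a == 123
assert isinstance(b, tuple) and b == (123,) and len(b) == 1

# With two or more elements, the trailing comma is a true no-op:
assert (1, 2, 3) == (1, 2, 3,)

# And in list literals, even the singleton trailing comma is a no-op:
assert [1] == [1,]
```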

JavaScript: the sparse array exception

Now let us consider trailing commas in JavaScript. Within object literals, they are true no-ops:

{x:1, y:2, }   // equivalent to {x:1, y:2}

but like in Python, it is not possible to write intermediate commas or several trailing commas

{x:1, , y:2, } // Uncaught SyntaxError: Unexpected token ','
{x:1, y:2,, }  // idem

The situation with array literals is quite different: there the syntax admits intermediate commas and several trailing commas, but semantically they occupy slots in the array:

[1, , 2, 3,,,] // [ 1, <1 empty item>, 2, 3, <2 empty items> ]

This example is an array of length 6, but with only 3 occupied slots - something that is called a sparse array in JavaScript parlance. Empty slots in the array are not equivalent to undefined; their meaning depends on the operation applied to the array, as explained in this MDN document.

Like in Python, when trailing commas appear in JavaScript code, some thought is required to get the proper meaning.

Wrapping up

Trailing commas are a relatively minor topic, but it is interesting to observe that Perl made wise decisions about this feature from its initial design. Trailing commas are treated consistently in all Perl constructs, without any surprises for the programmer. Compared to C, Perl widened the contexts where trailing commas are admitted, and this was probably an inspiration to many other programming languages. Isn't that beautiful?

About the cover picture

The cover picture shows the initial bars of fugue BWV 885 by Johann Sebastian Bach, in a manuscript which is not from the hand of the composer but was transmitted to us by his son-in-law Johann Christoph Altnickol. The theme of that fugue is very recognizable, as it repeats the same note seven times before moving on to something else - so it reminded me of repeated commas! But this is just a wink; in reality, the illustration is not really relevant to the main theme of this article, because the repetition of notes in the theme is by no means a no-op - on the contrary, it is a strong reinforcement. On a violin a similar effect could be obtained by performing a crescendo on a long note; but on a harpsichord or an organ a note cannot be changed after the initial attack, so repetition is another device to convey the idea of reinforcement.

Websockets in Catalyst

r/perl

Recently I've heard quite a few people comment that Catalyst can't do websockets. I'm writing this article to dispel that myth, and to document how you can retrofit websockets into an existing Catalyst application in a materially useful way, without rewriting any of your existing code. I'll also show a case where it actually makes sense to do this rather than use a framework designed for that purpose.

submitted by /u/nrdvana

Web searches somehow do not find relevant results, and neither do the Perl docs (e.g. there seems to be no 'selection operator'). I have a large text file and want to output the results of a global substitution, but not the text that was not selected for substitution.

I thought I might first select the relevant parts, then run the substitutions, but I don't know how to do the selecting.

As of now I've designed a workaround:

perl -0777 -p -e '$n = s/.*?to(.*?)found/${1}END/sg; while ($n>1) {s/END//s;$n=$n-1};s/END.*//s' filename

So I replace everything I need (keeping what's between 'to' and 'found'), adding some text ('END') not present in the original (and getting the number of replacements in $n), then delete the 'END' markers between results and delete everything from the last 'END' to the end. All the code apart from the main substitution exists to delete what remains after the last match.

I hope there is a better way, e.g. one that is shorter and does not require thinking up a unique temporary string.

It's been a long time since I posted on SO; now there is a 'Question Type' field - I'm asking about best practice, hence my selection.

Edit: simple case of file contents:

1 to 2
3
found 4 to 5 found
6

Produces (correctly):

 2
3
 5 

Add epigraph for 5.42.1-RC1

Perl commits on GitHub
Add epigraph for 5.42.1-RC1

Treating GitHub Copilot as a Contributor

r/perl

Treating GitHub Copilot as a Contributor

Perl Hacks

For some time, we’ve talked about GitHub Copilot as if it were a clever autocomplete engine.

It isn’t.

Or rather, that’s not all it is.

The interesting thing — the thing that genuinely changes how you work — is that you can assign GitHub issues to Copilot.

And it behaves like a contributor.

Over the past day, I’ve been doing exactly that on my new CPAN module, WebServer::DirIndex. I’ve opened issues, assigned them to Copilot, and watched a steady stream of pull requests land. Ten issues closed in about a day, each one implemented via a Copilot-generated PR, reviewed and merged like any other contribution.

That still feels faintly futuristic. But it’s not “vibe coding”. It’s surprisingly structured.

Let me explain how it works.


It Starts With a Proper Issue

This workflow depends on discipline. You don’t type “please refactor this” into a chat window. You create a proper GitHub issue. The sort you would assign to another human maintainer. For example, here are some of the recent issues Copilot handled in WebServer::DirIndex:

  • Add CPAN scaffolding
  • Update the classes to use Feature::Compat::Class
  • Replace DirHandle
  • Add WebServer::DirIndex::File
  • Move render() method
  • Use :reader attribute where useful
  • Remove dependency on Plack

Each one was a focused, bounded piece of work. Each one had clear expectations.

The key is this: Copilot works best when you behave like a maintainer, not a magician.

You describe the change precisely. You state constraints. You mention compatibility requirements. You indicate whether tests need to be updated.

Then you assign the issue to Copilot.

And wait.


The Pull Request Arrives

After a few minutes — sometimes ten, sometimes less — Copilot creates a branch and opens a pull request.

The PR contains:

  • Code changes
  • Updated or new tests
  • A descriptive PR message

And because it’s a real PR, your CI runs automatically. The code is evaluated in the same way as any other contribution.

This is already a major improvement over editor-based prompting. The work is isolated, reviewable, and properly versioned.

But the most interesting part is what happens in the background.


Watching Copilot Think

If you visit the Agents tab in the repository, you can see Copilot reasoning through the issue.

It reads like a junior developer narrating their approach:

  • Interpreting the problem
  • Identifying the relevant files
  • Planning changes
  • Considering test updates
  • Running validation steps

And you can interrupt it.

If it starts drifting toward unnecessary abstraction or broad refactoring, you can comment and steer it:

  • Please don’t change the public API.
  • Avoid experimental Perl features.
  • This must remain compatible with Perl 5.40.

It responds. It adjusts course.

This ability to intervene mid-flight is one of the most useful aspects of the system. You are not passively accepting generated code — you’re supervising it.


Teaching Copilot About Your Project

Out of the box, Copilot doesn’t really know how your repository works. It sees code, but it doesn’t know policy.

That’s where repository-level configuration becomes useful.

1. Custom Repository Instructions

GitHub allows you to provide a .github/copilot-instructions.md file that gives Copilot repository-specific guidance. The documentation for this lives here:

When GitHub offers to generate this file for you, say yes.

Then customise it properly.

In a CPAN module, I tend to include:

  • Minimum supported Perl version
  • Whether Feature::Compat::Class is preferred
  • Whether experimental features are forbidden
  • CPAN layout expectations (lib/, t/, etc.)
  • Test conventions (Test::More, no stray diagnostics)
  • A strong preference for not breaking the public API
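A minimal sketch of such a file, based on the points above (the exact wording is invented here, not taken from the real repository):

```markdown
# Copilot instructions for this repository

- This is a CPAN distribution: code lives in lib/, tests in t/.
- The minimum supported Perl version is 5.40.
- Use Feature::Compat::Class for OO code; avoid experimental features.
- Tests use Test::More; keep them quiet (no stray diagnostics).
- Do not change the public API unless the issue explicitly asks for it.
```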

Without this file, Copilot guesses.

With this file, Copilot aligns itself with your house style.

That difference is impressive.

2. Customising the Copilot Development Environment

There’s another piece that many people miss: Copilot can run a special workflow event called copilot_agent_setup.

You can define a workflow that prepares the environment Copilot works in. GitHub documents this here:

In my Perl projects, I use this standard setup:

name: Copilot Setup Steps

on: copilot_agent_setup

jobs:
  copilot-setup-steps:
    runs-on: ubuntu-latest
    permissions:
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Perl 5.40
        uses: shogo82148/actions-setup-perl@v1
        with:
          perl-version: '5.40'

      - name: Install dependencies
        run: cpanm --installdeps --with-develop --notest .

(Obviously, that was originally written for me by Copilot!)

This does two important things.

Firstly, it ensures Copilot is working with the correct Perl version.

Secondly, it installs the distribution dependencies, meaning Copilot can reason in a context that actually resembles my real development environment.

Without this workflow, Copilot operates in a kind of generic space.

With it, Copilot behaves like a contributor who has actually checked out your code and run cpanm.

That’s a useful difference.


Reviewing the Work

This is the part where it’s important not to get starry-eyed.

I still review the PR carefully.

I still check:

  • Has it changed behaviour unintentionally?
  • Has it introduced unnecessary abstraction?
  • Are the tests meaningful?
  • Has it expanded scope beyond the issue?

I check out the branch and run the tests. Exactly as I would with a PR from a human co-worker.

You can request changes and reassign the PR to Copilot. It will revise its branch.

The loop is fast. Faster than traditional asynchronous code review.

But the responsibility is unchanged. You are still the maintainer.


Why This Feels Different

What’s happening here isn’t just “AI writing code”. It’s AI integrated into the contribution workflow:

  • Issues
  • Structured reasoning
  • Pull requests
  • CI
  • Review cycles

That architecture matters.

It means you can use Copilot in a controlled, auditable way.

In my experience with WebServer::DirIndex, this model works particularly well for:

  • Mechanical refactors
  • Adding attributes (e.g. :reader where appropriate)
  • Removing dependencies
  • Moving methods cleanly
  • Adding new internal classes

It is less strong when the issue itself is vague or architectural. Copilot cannot infer the intent you didn’t articulate.

But given a clear issue, it’s remarkably capable — even with modern Perl using tools like Feature::Compat::Class.


A Small but Important Point for the Perl Community

I’ve seen people saying that AI tools don’t handle Perl well. That has not been my experience.

With a properly described issue, repository instructions, and a defined development environment, Copilot works competently with:

  • Modern Perl syntax
  • CPAN distribution layouts
  • Test suites
  • Feature::Compat::Class (or whatever OO framework I’m using on a particular project)

The constraint isn’t the language. It’s how clearly you explain the task.


The Real Shift

The most interesting thing here isn’t that Copilot writes Perl. It’s that GitHub allows you to treat AI as a contributor.

  • You file an issue.
  • You assign it.
  • You supervise its reasoning.
  • You review its PR.

It’s not autocomplete. It’s not magic. It’s just another developer on the project. One who works quickly, doesn’t argue, and reads your documentation very carefully.

Have you been using AI tools to write or maintain Perl code? What successes (or failures!) have you had? Are there other tools I should be using?


Links

If you want to have a closer look at the issues and PRs I’m talking about, here are some links:

The post Treating GitHub Copilot as a Contributor first appeared on Perl Hacks.


Perl is nice for MCPs

r/perl

One nice feature of Perl: Startup & execution speed. We should look at fast, simple templates. AI::MCP::* or something.

https://medium.com/@kanishks772/python-is-93-slower-the-mcp-benchmark-that-shocked-developers-7e1c5be6604e

submitted by /u/photo-nerd-3141
As you know, The Weekly Challenge primarily focuses on Perl and Raku. During Week #018, we received solutions to The Weekly Challenge - 018 by Orestis Zekai in Python. It was a pleasant surprise to receive solutions in something other than Perl and Raku. Ever since, regular team members have also been contributing in other languages like Ada, APL, Awk, BASIC, Bash, Bc, Befunge-93, Bourne Shell, BQN, Brainfuck, C3, C, CESIL, Chef, COBOL, Coconut, C Shell, C++, Clojure, Crystal, CUDA, D, Dart, Dc, Elixir, Elm, Emacs Lisp, Erlang, Excel VBA, F#, Factor, Fennel, Fish, Forth, Fortran, Gembase, Gleam, GNAT, Go, GP, Groovy, Haskell, Haxe, HTML, Hy, Idris, IO, J, Janet, Java, JavaScript, Julia, K, Kap, Korn Shell, Kotlin, Lisp, Logo, Lua, M4, Maxima, Miranda, Modula 3, MMIX, Mumps, Myrddin, Nelua, Nim, Nix, Node.js, Nuweb, Oberon, Octave, OCaml, Odin, Ook, Pascal, PHP, PicoLisp, Python, PostgreSQL, Postscript, PowerShell, Prolog, R, Racket, Rexx, Ring, Roc, Ruby, Rust, Scala, Scheme, Sed, Smalltalk, SQL, Standard ML, SVG, Swift, Tcl, TypeScript, Typst, Uiua, V, Visual BASIC, WebAssembly, Wolfram, XSLT, YaBasic and Zig.

autodoc: Fix unescaped '#' warning

Perl commits on GitHub
autodoc: Fix unescaped '#' warning

Fixes #24225

(dlxxxviii) 17 great CPAN modules released last week

r/perl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::Cmd - write command line apps with less suffering
    • Version: 0.339 on 2026-02-19, with 50 votes
    • Previous CPAN version: 0.338 was 4 months, 16 days before
    • Author: RJBS
  2. App::Greple - extensible grep with lexical expression and region handling
    • Version: 10.04 on 2026-02-19, with 56 votes
    • Previous CPAN version: 10.03 was 30 days before
    • Author: UTASHIRO
  3. App::Netdisco - An open source web-based network management tool.
    • Version: 2.097003 on 2026-02-21, with 834 votes
    • Previous CPAN version: 2.097002 was 1 month, 12 days before
    • Author: OLIVER
  4. App::rdapper - a command-line RDAP client.
    • Version: 1.24 on 2026-02-19, with 21 votes
    • Previous CPAN version: 1.23 was 17 days before
    • Author: GBROWN
  5. CPAN::Meta - the distribution metadata for a CPAN dist
    • Version: 2.150013 on 2026-02-20, with 39 votes
    • Previous CPAN version: 2.150012 was 25 days before
    • Author: RJBS
  6. CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
    • Version: 20260220.001 on 2026-02-20, with 25 votes
    • Previous CPAN version: 20260215.001 was 4 days before
    • Author: BRIANDFOY
  7. Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
    • Version: 9.1.0 on 2026-02-17, with 18 votes
    • Previous CPAN version: 9.0.0 was 23 days before
    • Author: CUKEBOT
  8. Getopt::Long::Descriptive - Getopt::Long, but simpler and more powerful
    • Version: 0.117 on 2026-02-19, with 58 votes
    • Previous CPAN version: 0.116 was 1 year, 1 month, 19 days before
    • Author: RJBS
  9. MIME::Lite - low-calorie MIME generator
    • Version: 3.038 on 2026-02-16, with 35 votes
    • Previous CPAN version: 3.037 was 5 days before
    • Author: RJBS
  10. Module::CoreList - what modules shipped with versions of perl
    • Version: 5.20260220 on 2026-02-20, with 44 votes
    • Previous CPAN version: 5.20260119 was 1 month, 1 day before
    • Author: BINGOS
  11. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.1 on 2026-02-14, with 13 votes
    • Previous CPAN version: v2.0.0
    • Author: SANKO
  12. Net::Server - Extensible Perl internet server
    • Version: 2.018 on 2026-02-18, with 34 votes
    • Previous CPAN version: 2.017 was 8 days before
    • Author: BBB
  13. Resque - Redis-backed library for creating background jobs, placing them on multiple queues, and processing them later.
    • Version: 0.44 on 2026-02-21, with 42 votes
    • Previous CPAN version: 0.43
    • Author: DIEGOK
  14. SNMP::Info - OO Interface to Network devices and MIBs through SNMP
    • Version: 3.975000 on 2026-02-20, with 40 votes
    • Previous CPAN version: 3.974000 was 5 months, 8 days before
    • Author: OLIVER
  15. SPVM - The SPVM Language
    • Version: 0.990134 on 2026-02-20, with 36 votes
    • Previous CPAN version: 0.990133
    • Author: KIMOTO
  16. Test2::Harness - A new and improved test harness with better Test2 integration.
    • Version: 1.000162 on 2026-02-20, with 28 votes
    • Previous CPAN version: 1.000161 was 8 months, 9 days before
    • Author: EXODIST
  17. WebService::Fastly - an interface to most facets of the [Fastly API](https://www.fastly.com/documentation/reference/api/).
    • Version: 14.00 on 2026-02-16, with 18 votes
    • Previous CPAN version: 13.01 was 2 months, 6 days before
    • Author: FASTLY

This is the weekly favourites list of CPAN distributions. Votes count: 53

Week's winner: Linux::Event::Fork (+2)

Build date: 2026/02/21 21:48:43 GMT


Clicked for first time:


Increasing its reputation:

Aiming for tomorrow for 5.42.1-RC1

Perl commits on GitHub
Aiming for tomorrow for 5.42.1-RC1

100 days of Perl …

Perl on Medium

… or maybe some more ;)

In Perl on Linux, I want to check the File status flags of an OS file descriptor number without knowing which Perl file handles, if any, use that file descriptor (the fd might even have been inherited from a parent process and unknown to Perl).

It seems like POSIX::fcntl($fd, F_GETFL, 0) should work but it doesn't. POSIX::fcntl is different than all other i/o functions in the POSIX module e.g. POSIX::open, POSIX::write, etc. POSIX::fcntl is documented as "identical to Perl's builtin fcntl", i.e. it expects a Perl file handle as an argument, not an OS file descriptor (actually it only accepts a file handle glob).

Is there any way in Perl to access the OS fcntl function using a file descriptor?

#!/usr/bin/env perl
use strict; use warnings; use POSIX qw/:fcntl_h/; 
{ my $bits = POSIX::fcntl(1, F_GETFL, 0);
  warn "GETFL from fd 1 failed:$!" unless defined($bits);
  printf "GETFL from fd 1 = %08X\n", $bits;
}
{ printf "fileno(STDOUT) = %d\n", fileno(STDOUT);
  #my $bits = POSIX::fcntl(STDOUT, F_GETFL, 0);  # syntax error
  my $bits = POSIX::fcntl(*STDOUT, F_GETFL, 0);
  printf "GETFL from Perl fh = %08X\n", $bits;
}

... which gives output:

GETFL from fd 1 failed:Bad file descriptor at /tmp/t2 line 4.
Use of uninitialized value $bits in printf at /tmp/t2 line 5.
GETFL from fd 1 = 00000000
fileno(STDOUT) = 1
GETFL from Perl fh = 00080002
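One workaround (my suggestion, not from the original question) is to dup the numeric descriptor into a real Perl file handle using open's '<&' mode, then call the builtin fcntl on that handle. Dup'ed descriptors share the same open file description, so F_GETFL on the duplicate reports the same status flags as the original fd, and closing the duplicate leaves the original untouched. A sketch (the helper name getfl_for_fd is mine):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use POSIX qw(:fcntl_h);

# Duplicate a raw OS file descriptor into a Perl handle, query its
# file status flags with the builtin fcntl, then close the duplicate.
sub getfl_for_fd {
    my ($fd) = @_;
    open(my $fh, '<&', $fd) or die "cannot dup fd $fd: $!";
    my $flags = fcntl($fh, F_GETFL, 0);
    close($fh);    # only closes the duplicate, not the original fd
    return $flags; # undef on failure; "0 but true" when flags are 0
}

open(my $in, '<', '/dev/null') or die "open /dev/null: $!";
printf "GETFL for fd %d = %08X\n", fileno($in), getfl_for_fd(fileno($in));
```

Note that per-descriptor state such as FD_CLOEXEC (queried with F_GETFD, not F_GETFL) is not shared across a dup, so this trick only helps for the status flags.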

I've written a Perl script where I am able to successfully use the Parallel::ForkManager module. I set up a simple script which has 6 entries in an array and takes 3 at a time: it just prints the number from the array, sleeps 5 seconds, and then takes the next 3 and does the same thing.

However, when I test the same script with a dbh connection to a database added, it stops after the first 3 entries. It just hangs, even if I immediately connect and disconnect the dbh session. Is there something else I need to do with dbh in order to get this to work?

#!/usr/bin/perl

use strict;
use warnings;
use Parallel::ForkManager;
use DBI;

my $uid = "xxxxxx";
my $snowflake = "snowflake";
my $dbh = DBI->connect("dbi:ODBC:DSN=$snowflake;UID=$uid;");
$dbh->disconnect();

my @mids = (6001, 6002, 6003, 6004, 6005, 6006);

my $MAX_PROCESSES = 3;
my $pm = Parallel::ForkManager->new($MAX_PROCESSES);
my $pid;
foreach my $mid (@mids) {
  $pid = $pm->start($mid) and next;
  my $result = &testing($mid);
  print "\nsleeping 5 seconds\n" if ($mid == 6003);
  sleep 5;
  $pm->finish(0, { result => $result });
}
print "\nwaiting on all children\n";
$pm->wait_all_children;


#############
sub testing {

 my $mid = shift;
 print "\nmid is now $mid\n";

return();
}

and here is the output from the script if i comment out the dbh connection


mid is now 6001

mid is now 6002

mid is now 6003

sleeping 5 seconds

mid is now 6004

mid is now 6005

waiting on all children

mid is now 6006

and here is the same output with the dbh connection and disconnect included:


mid is now 6001

mid is now 6002

mid is now 6003

sleeping 5 seconds
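For what it's worth, the usual guidance in the DBI documentation is that database handles do not survive fork(): a handle created in the parent must not be used, or implicitly destroyed, by the children, and each child should open its own connection. A sketch along those lines (untested against Snowflake; the DSN and variables are placeholders from the question):

```perl
use DBI;
use Parallel::ForkManager;

# AutoInactiveDestroy stops a child that exits from tearing down the
# parent's connection when the inherited handle is garbage-collected.
my $dbh = DBI->connect("dbi:ODBC:DSN=$snowflake;UID=$uid;", undef, undef,
                       { AutoInactiveDestroy => 1 });

my $pm = Parallel::ForkManager->new(3);
foreach my $mid (@mids) {
    $pm->start($mid) and next;          # child from here on
    # open a fresh handle in the child instead of reusing $dbh
    my $child_dbh = DBI->connect("dbi:ODBC:DSN=$snowflake;UID=$uid;",
                                 undef, undef, { AutoInactiveDestroy => 1 });
    testing($mid);
    $child_dbh->disconnect;
    $pm->finish(0);
}
$pm->wait_all_children;
```

Even a connection that was disconnected before forking can leave driver state (sockets, handle destructors) that misbehaves in children, which may explain the hang.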

More PCC Summer 2025 Videos Are Out!

blogs.perl.org

All links are available on our Perl Community Subreddit:

  • Brett Estrade Review of John P Linderman's Quick Sort Paper
  • Justin Kelly Perl Supabase
  • State of the Onions
  • Perl Can Dev Ops Better Than You Think
  • Brett Estrade Building Beowulf Clusters with Perl
  • John Napiorkowski Porting ASGI from Python to Perl
  • Kai Baker, et al Shiny CMS
  • Privacy Preserving Applications

4 more have gone out to our exclusive mailing list. And we will have one more round for a total of 16 videos. We will also be releasing videos from the Winter 2025 PCC after our Summer 2026 PCC coming up on July 3rd & 4th in Austin, TX, USA.

Thank you Team PWC for your continuous support and encouragement.

I'm working on a SpamAssassin plugin, which would submit the text of a suspected email to AI (via HTTP POST) and process the response. The response is expected to contain a one-word verdict (one of "HAM", "SPAM", or "UNSURE") and a short reason.

Both fields are returned by the model, according to log. For example:

{ "verdict": "HAM", "reason": "The email appears to be a legitimate response to a GitHub discussion about the performance of llama.cpp with Vulkan, discussing technical details and comparisons with ROCm." }

The plugin correctly informs SpamAssassin too -- I see my AI_HAM among the "tests" of the X-Spam-Status header.

However, I'd also like the "reason" field to be inserted as a separate header of its own. To that effect, I added the following code to my plugin:

    my $r = $ai_json->{reason};
    $pms->set_tag('AI_STATUS', $r);
    warn("Status tag: " . $pms->get_tag('AI_STATUS'));

and this to my plugin's .pre file:

add_header  all AI-Status _AI_STATUS_

That "Status tag" "warning" is logged correctly, but no X-Spam-AI-Status header is ever added to the incoming messages... What am I missing?

Update: Moving the add_header directive from MyPlugin.pre to MyPlugin.cf causes the new header to appear in the output of spamassassin -t -- for some reason!

But it is still missing from the actual headers of the messages that've gone through the milter. The original X-Spam-Status header inserted by SpamAssassin itself is present (and now lists my plugin among the tests applied to the messages), but my own header is still not added...

I would like to use a Perl one-liner to modify numeric values in a text file. My data are stored in a text file:

0, 0, (1.263566e+02, -5.062154e+02)
0, 1, (1.069488e+02, -1.636887e+02)
0, 2, (-2.281294e-01, -7.787449e-01)
0, 3, (5.492424e+00, -4.145492e+01)
0, 4, (-7.961223e-01, 2.740912e+01)

These are complex numbers with their respective i and j coordinates: i, j, (real, imag). I would like to modify the coordinates, to shift them from zero-based to one-based indexing. In other words I would like to add one to each i and each j. I can correctly capture the i and j, but I'm struggling to treat them as numbers not as strings. This is the one-liner I'm using:

perl -p -i.bak -w -e 's/^(\d+), (\d+)/$1+1, $2+1/' complex.txt

How do I tell Perl to treat $1 and $2 as numbers?

My expected output would be:

1, 1, (1.263566e+02, -5.062154e+02)
1, 2, (1.069488e+02, -1.636887e+02)
1, 3, (-2.281294e-01, -7.787449e-01)
1, 4, (5.492424e+00, -4.145492e+01)
1, 5, (-7.961223e-01, 2.740912e+01)

ANN: CPAN::MetaCurator V 1.08, Perl.Wiki V 1.40 etc

blogs.perl.org

Hi All
Big news today.
I've uploaded these files to my Wiki Haven:
a. Perl.Wiki.html V 1.40
b. jsTree V 1.08 (i.e. the matching version of Perl.Wiki).
This version is much nicer than the previous one.
c. MojoLicious.Wiki.html V 1.14

And I've uploaded to CPAN: CPAN::MetaCurator V 1.40.

This version no longer ships 02packages.details.txt, nor a packages table in the SQLite db.

Further, I'm about to start coding CPAN::MetaPackages, which will do nothing but load
02packages.details.txt into its own SQLite db. Then CPAN::MetaCurator will have its own db
and simultaneously link to the packages db.

The point: To make CPAN::MetaCurator's build.dh.sh run much faster, down from
15 hours on my humble Lenovo M70Q 'Tiny' desktop to 1 second, basically.

And the next step is then to ship only differences between successive versions of
02packages.details.txt, so that updating packages.sqlite will be lightning fast.

Perl 🐪 Weekly #760 - Async Perl

dev.to #perl

Originally published at Perl Weekly 760

Hi there,

Perl's asynchronous ecosystem continues to grow, enabling developers to build non-blocking, responsive applications with ease. Modules like IO::Async, async features in Mojolicious, and helpers for asynchronous database operations (such as DBIx::Class::Async) allow event-driven designs, background tasks, and futures/promises, making high-throughput web services, real time APIs, and streaming pipelines straightforward to implement.

Projects like PAGI and PAGI::Server, now with HTTP/2 support, and Thunderhorse showcase how Perl can handle multiple connections efficiently while keeping code clear and maintainable. Together, these tools make it easier to build responsive, scalable, and maintainable applications, all while retaining the expressive, pragmatic style that continues to define Perl.

With each new module and project, Perl's potential in the modern, concurrent world keeps expanding; the best is yet to come.

This issue marks a small milestone: my 200th edition. Enjoy the rest of the newsletter and stay safe.

--
Your editor: Mohammad Sajid Anwar.

Announcements

Join us for TPRC 2026 in Greenville, SC!

It’s great to see the announcement for The Perl and Raku Conference 2026 (TPRC) taking shape, with registration opening and plans underway for a vibrant community gathering in Greenville, SC this June. The post reinforces the value of bringing Perl and Raku developers together for talks, workshops, and networking: a highlight on the open-source calendar that strengthens the ecosystem and connects contributors across projects.

This week in PSC (215) | 2026-02-11

In This week in PSC 215, the Perl Steering Council covered a deep discussion of legal identifier-name restrictions for security (no consensus was reached, and the plan is to broaden the conversation beyond p5p), tackled the challenge of an absent maintainer for the dual‑life Encode module, and decided to hold off on merging larger PRs like magic v2 and attributes v2 due to the upcoming freeze in this release cycle. These updates give a clear snapshot of ongoing governance and core maintenance decisions within the Perl project.

Articles

Poplar Decisions

The post reflects on the challenge of rational decision‑making with a quirky, human‑centred anecdote, weaving in the idea that structured data models, like decision trees, can help bring objectivity to complex choices. The post’s blend of storytelling and commentary on data structures adds a thoughtful and entertaining perspective for programmers thinking about reasoning and modeling in code.

Mojo with WebSocket

A practical real‑world example of using Mojolicious’ built‑in WebSocket support to build an interactive online chat app in Perl, complete with multiple server variants and integration options like Redis or PostgreSQL. The repository showcases how easily Mojolicious can handle real‑time bidirectional communication with WebSockets, making it a solid reference for Perl developers exploring event‑driven web apps.

Coding Agents using Anthropic and z.ai - presentation at the German Perl Workshop 2026

The post previews Max's talk at the German Perl Workshop 2026, exploring how modern AI coding agents from Anthropic and z.ai can assist with Perl development, what differences exist between the models, and tips for getting them to write good code. It’s an engaging look at practical uses of agentic AI in real world programming contexts, a timely topic for anyone curious about AI‑assisted development.

Beautiful Perl feature: 'local', for temporary changes to global variables

This article highlights one of Perl’s unique strengths, the local keyword, showing how it enables temporary, dynamic changes to global variables without permanent side effects. With clear examples manipulating %ENV, special Perl variables and even symbol table entries, it makes a compelling case for using local judiciously to solve real world problems that lexical scoping alone can’t.

CPAN

PAGI::Server, now with HTTP/2!

The announcement of PAGI::Server 0.001017 highlights experimental HTTP/2 support built on nghttp2, bringing both cleartext h2c and TLS‑based HTTP/2 to Perl web services with automatic protocol detection and solid h2spec compliance. The write‑up explains why HTTP/2 matters for backend performance and modern use cases like gRPC and multiplexed APIs, and it also outlines other quality‑of‑life improvements and operational fixes in the release.

Black box test generator version 0.28 released

Version 0.28 of App::Test::Generator, the black‑box test case generator, has just been released with improved schema extraction and test generation accuracy, tightening detection of getter/setter methods and better typing in generated tests. These enhancements make it easier to produce honest, robust fuzz and corpus driven test harnesses from your Perl modules.

Run::WeeklyChallenge

Run::WeeklyChallenge is a small but useful CPAN module that helps you automate running solutions to challenges from The Weekly Challenge site by passing one or more sets of JSON‑formatted inputs to your code. It cleanly wraps your solution logic and input schema validation, making it easier to test and reuse challenge solutions programmatically.

DBIx::Class::Async

DBIx::Class::Async is a modern asynchronous wrapper for DBIx::Class that allows non‑blocking database operations in Perl, keeping the familiar DBIC interface while running queries in the background via futures. The latest update brings several improvements: caching is now disabled by default (TTL 0), automatic detection of non-deterministic SQL functions (like NOW() and RAND()) ensures safe bypass of caching, cache invalidation on update/delete operations is more precise using primary keys, and count() queries are no longer cached to guarantee accurate row counts. These enhancements make asynchronous DBIC usage both safer and more reliable.

Syntax::Highlight::Engine::Kate

Syntax::Highlight::Engine::Kate provides Perl programs with robust syntax highlighting using the same engine as the Kate editor. The latest update fixes Issue #23 in Template.pm: the testDetectSpaces() regex was corrected, ensuring only spaces and tabs are matched and improving number highlighting in Perl and other languages. The test suite expected outputs were also updated to reflect the corrected highlighting behavior.

The Weekly Challenge

The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion from among the month's contributors at the end of each month, thanks to the sponsor Lance Wicks.

The Weekly Challenge - 361

Welcome to a new week with a couple of fun tasks "Zeckendorf Representation" and "Find Celebrity". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.

RECAP - The Weekly Challenge - 360

Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Text Justifier" and "Word Sorter" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.

TWC360

This write-up delivers straightforward, idiomatic Perl solutions for both text justification and word sorting, showing practical use of fc for case-insensitive comparisons and clear subroutine design. The concise code examples make the challenge solutions easy to follow and apply.

Full Circle

In this post, an effective solution to The Weekly Challenge #360 is given, with clear examples showing how Raku can be used for text justification and word sorting. Most of the explanations are fairly short, but they are clearly defined and backed by plenty of examples, and they lean on Raku's idiomatic features to make the output readable and helpful.

Pertaining to a subtlety of sorting

This article offers a thoughtful take on the Word Sorter task from PWC 360, with a clear explanation of the case-insensitive sort and an efficient Perl solution using a Schwartzian transform. The benchmarking insight and attention to Unicode case folding nuance show both practical coding skill and depth of understanding.
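For readers unfamiliar with the technique, a case-insensitive Schwartzian transform typically looks like the following (my illustration, not the linked author's exact code): the fold-cased key is computed once per word rather than once per comparison.

```perl
use strict;
use warnings;
use feature 'fc';    # fold-case, available since Perl 5.16

my @words = qw(banana Apple cherry apple Banana);

# map:  pair each word with its case-folded sort key
# sort: compare the precomputed keys (tie-break on the original word
#       so the result is deterministic)
# map:  unwrap back to the original words
my @sorted = map  { $_->[1] }
             sort { $a->[0] cmp $b->[0] or $a->[1] cmp $b->[1] }
             map  { [ fc($_), $_ ] } @words;

print "@sorted\n";    # Apple apple Banana banana cherry
```

The payoff grows with list size: fc runs N times instead of O(N log N) times inside the comparator.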

Perl Weekly Challenge: Week 360

This write-up clearly presents both Text Justifier and Word Sorter tasks with simple, idiomatic Perl and Raku solutions that showcase practical string manipulation and sorting techniques. The inclusion of multiple examples and cross-language snippets makes the challenge approachable and highlights the expressive power of each language.

Perl Weekly Challenge 360

This post showcases clear, idiomatic Perl solutions for both the Text Justifier and Word Sorter tasks, with one‑liner examples and concise logic that demonstrate practical use of integer arithmetic and case‑preserving sorting. The included sample inputs and outputs make the behavior easy to follow and verify.

This is exactly the sort of justification that I was looking for

This post methodically implements both Text Justifier and Word Sorter solutions for PWC 360 in clear Perl code, showing careful step-by-step padding logic and idiomatic sorting. The explanations of how the examples are handled make the approach easy to follow and instructive for readers.

Justifying TIMTOWTDI

The post offers well‑structured Perl solutions that clearly implement both text justification and alphabetical word sorting with idiomatic constructs and practical tests. The use of case‑preserving sorting and centered padding logic demonstrates good command of Perl’s core features and makes the solutions easy to follow and reuse.

Word Crimes are Justified

An excellent and clear walk-through of the Perl Weekly Challenge tasks, with well-structured multi-language solutions and thoughtful explanations that make the text justification and word sorting problems easy to follow. The blend of Perl, Raku, Python, and Elixir examples shows both depth and versatility of approach.

Padding and sorting

The post presents the Text Justifier and Word Sorter tasks clearly with well-explained inputs and desired outputs, giving readers a solid grounding in the problem definitions. The examples are practical and show the expected string centering and alphabetical ordering behavior in a way that supports straightforward implementation.

The Weekly Challenge - 360: Text Justifier

This post demonstrates a practical and idiomatic Perl solution by leveraging String::Pad for Text Justifier, showcasing how using existing modules can simplify challenge tasks. The concise examples with clear input/output make it easy to grasp the task mechanics and verify correctness.

The Weekly Challenge - 360: Word Sorter

This write-up delivers a succinct and idiomatic Perl solution to the Word Sorter task, using a case-insensitive sort and clean split/grep logic that keeps words unchanged while ordering them alphabetically. The included test cases make the behavior clear and easy to verify.

Justify the Words

This post delivers clear and well-commented solutions to both Text Justifier and Word Sorter tasks from The Weekly Challenge 360, using concise Lua, Raku, Perl and other language examples with practical explanations of key steps like centered padding and case-insensitive sorting. The author’s discussion of different implementation strategies highlights thoughtful coding decisions that make the techniques accessible and educational for readers.

Padding and sorting

This write‑up walks through both the Text Justifier and Word Sorter tasks from The Weekly Challenge 360 with clear Python and Perl solutions, showing well‑structured logic for string padding and case‑insensitive sorting. The practical examples and side‑by‑side language implementations make the techniques easy to understand and apply.

Perl Power: Two Tiny Scripts, Big Learning!

This write‑up distills both Text Justifier and Word Sorter solutions into clean, minimal Perl scripts with clear logic for padding and sorting, and emphasizes solid test‑driven development and edge‑case handling. The examples and explanation of core techniques make it both beginner‑friendly and technically sound.

Rakudo

2026.06 CÔD YN GYMRAEG ("Code in Welsh")

Weekly collections

NICEPERL's lists

Great CPAN modules released last week.

The corner of Gabor

A couple of entries sneaked in by Gabor.

WhatsApp

Do you use WhatsApp? Join the Perl Maven chat group!

Events

Paris.pm monthly meeting

March 11, 2026

German Perl/Raku Workshop 2026 in Berlin

March 16-18, 2026

The Perl and Raku Conference 2026

June 26-29, 2026, Greenville, SC, USA

You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.

Want to see more? See the archives of all the issues.

Not yet subscribed to the newsletter? Join us free of charge!

(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

Beautiful Perl series

This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.

The last post was about BLOCKs and lexical scoping, a feature present in almost all programming languages. By contrast, today's topic is a feature quite specific to Perl: dynamic scoping, introduced through the 'local' keyword. As we shall see, lexical scoping and dynamic scoping address different needs, and Perl lets us choose the scoping strategy most appropriate to each situation.

Using local : an appetizer

Let us start with a very simple example:

say "current time here is ", scalar localtime;

foreach my $zone ("Asia/Tokyo", "Africa/Nairobi", "Europe/Athens") {
  local $ENV{TZ} = $zone;
  say "in $zone, time is ", scalar localtime;
}

say "but here current time is still ", scalar localtime;

Perl's localtime builtin function returns the time "in the current timezone". On Unix-like systems, the notion of "current timezone" can be controlled through the TZ environment variable, so here the program temporarily changes that variable to get the local time in various cities around the world. At the end of the program, we get back to the usual local time. Here is the output:

current time here is Sun Feb  8 22:21:41 2026
in Asia/Tokyo, time is Mon Feb  9 06:21:41 2026
in Africa/Nairobi, time is Mon Feb  9 00:21:41 2026
in Europe/Athens, time is Sun Feb  8 23:21:41 2026
but here current time is still Sun Feb  8 22:21:41 2026

The gist of that example is that:

  1. we want to call some code not written by us (here: the localtime builtin function);
  2. the API of that code does not offer any parameter or option to tune its behaviour according to our needs; instead, it relies on some information in the global state of the running program (here: the local timezone);
  3. apart from getting our specific result, we don't want the global state to be durably altered, because this could lead to unwanted side-effects.

Use cases similar to this one are very common, not only in Perl but in every programming language. There is always some kind of implicit global state that controls how the program behaves internally and how it interacts with its operating system: for example environment variables, command-line arguments, open sockets, signal handlers, etc. Each language has its strategies for dealing with the global state, but often the solutions are multi-faceted and domain-specific. Perl shines because its local mechanism covers a very wide range of situations in a consistent way and is extremely simple to activate.

Before going into the details, let us address the bad reputation of dynamic scoping. This mechanism is often described as something to be avoided absolutely, because it implements a kind of action at a distance, yielding unpredictable program behaviour. That criticism is well founded, and languages that only have dynamic scoping are indeed unsuited for programming in the large. Yet in specific contexts, dynamic scoping is acceptable or even appropriate. The most notable examples are the Unix shells and Microsoft PowerShell, which to this day rely on dynamic scoping, probably because it is easier to implement.

Perl 1 also used dynamic scoping in its initial design, presumably for the same ease of implementation, and also because it was heavily influenced by shell programming and addressed the same needs. When later versions of Perl evolved into a general-purpose language, the additional mechanism of lexical scoping was introduced because it was indispensable for larger architectures. Nevertheless, dynamic scoping was not removed, partly for historical reasons, but also because action at a distance is precisely what you want in some specific situations. The next chapters will show you why.

The mechanics of local

In the last post we saw that a my declaration declares a new variable, possibly shadowing a previous variable of the same name. By contrast, a local declaration is only possible on a global variable that already exists; the effect of the declaration is to temporarily push aside the current value of that global variable, leaving room for a new value. After that operation, any code anywhere in the program that accesses the global variable will see the new value ... until the effect of local is reverted, namely when exiting the scope in which the local declaration started.

Here is a very simple example, again based on the %ENV global hash. This hash is automatically populated with a copy of the shell environment when perl starts up. We'll suppose that the initial value of $ENV{USERNAME}, as transmitted from the shell, is Alex:

sub greet_user ($salute) {
  say "$salute, $ENV{USERNAME}";
}

greet_user("Hello");                 # Hello, Alex

{ local $ENV{USERNAME} = 'Brigitte';
  greet_user("Hi");                  # Hi, Brigitte

  { local $ENV{USERNAME} = 'Charles';
    greet_user("Good morning");      # Good morning, Charles
  }

  greet_user("Still there");         # Still there, Brigitte
}

greet_user("Good bye");              # Good bye, Alex

The same thing would work with a global package variable:

our $username = 'Alex';             # "our" declares a global package variable

sub greet_user ($salute) {
  say "$salute, $username";
}

greet_user("Hello");
{ local $username = 'Brigitte';
  greet_user("Hi");
  ... etc.

but if the variable is not pre-declared, we get an error:

Global symbol "$username" requires explicit package name (did you forget to declare "my $username"?)

or if the variable is pre-declared as lexical through my $username instead of our $username, we get another error:

Can't localize lexical variable $username

So the local mechanism only applies to global variables. There are three categories of such variables:

  1. standard variables of the Perl interpreter;
  2. package members (of modules imported from CPAN, or of your own modules). This includes the whole symbol tables of those packages, i.e. not only the global variables, but also the subroutines or methods;
  3. what the Perl documentation calls elements of composite types, i.e. individual members of arrays or hashes. Such elements can be localized even if the array or hash is itself a lexical variable, because while the entry point to the array or hash may be on the execution stack, its member elements are always stored in global heap memory. This last category of global variables is perhaps more difficult to understand, but we will give examples later to make it clear.

As we can see, the single and simple mechanism of dynamic scoping covers a vast range of applications! Let us explore the various use cases.

Localizing standard Perl variables

The Perl interpreter has a number of builtin special variables, listed in perlvar. Some of them control the internal behaviour of the interpreter; others are interfaces to the operating system (environment variables, signal handlers, etc.). These variables have reasonable default values that satisfy most common needs, but they can be changed whenever needed. If the change is for the whole program, a regular assignment is good enough; but for a temporary change in a specific context, local is the perfect tool for the job.

Internal variables of the interpreter

The examples below are typical of idiomatic Perl programming: altering global variables so that builtin functions behave differently from the default.

# input record separator ($/)
my @lines         = <STDIN>;                     # regular mode, separating by newlines
my @paragraphs    = do {local $/ = ""; <STDIN>}; # paragraph mode, separating by 2 or more newlines
my $whole_content = do {local $/; <STDIN>};      # slurp mode, no separation

# output field and output record separators
sub write_csv_file ($rows, $filename) {
  open my $fh, ">:unix", $filename or die "cannot write into $filename: $!";
  local $, = ",";               # output field separator -- inserted between columns
  local $\ = "\r\n";            # output record separator -- appended after each row
  print $fh @$_ foreach @$rows; # assuming that each member of @$rows is an arrayref
}

# list separator in interpolated strings
my @perl_files   = <*.pl>; # globbing perl files in the current directory
my @python_files = <*.py>; # idem for python files
{ local $" = ", ";         # lists will be comma-separated when interpolated in a string
  say "I found these perl files: @perl_files and these python files: @python_files";
}

Interface to the operating system

We have already seen two examples involving the %ENV hash of environment variables inherited from the operating system. Likewise, it is possible to tweak the @ARGV array before parsing the command-line arguments. Another interesting variable is the %SIG hash of signal handlers, as documented in perlipc:

{ local $SIG{HUP} = "IGNORE"; # don't want to be disturbed for a while
  do_some_tricky_computation();
}
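The @ARGV case mentioned above works the same way; a minimal sketch, with fabricated sample arguments, using the core Getopt::Long module:

```perl
use strict;
use warnings;
use Getopt::Long;

my %opts;
{
    # fabricated sample arguments, just for illustration
    local @ARGV = ('--verbose', '--output=/tmp/report.txt');
    GetOptions(\%opts, 'verbose', 'output=s') or die "bad options";
}
# here the real command-line arguments are back, untouched by GetOptions
```

GetOptions consumes @ARGV as it parses, so localizing the array both supplies the synthetic arguments and shields the real ones from being eaten.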

Predefined handles like STDIN, STDOUT and STDERR can also be localized:

say "printing to regular STDOUT";
{ local *STDOUT;
  open STDOUT, ">", "captured_stdout.txt" or die $!;
  do_some_verbose_computation();
}
say "back to regular STDOUT";

Localizing package members

Package global variables

Global variables are declared with the our keyword. The difference from lexical variables (declared with my) is that global variables are accessible not only from within the package, but also from the outside, when prefixed by the package name. So if package Foo::Bar declares our ($x, @a, %h), these variables are accessible from anywhere in the Perl program as $Foo::Bar::x, @Foo::Bar::a or %Foo::Bar::h.

Many CPAN modules use such global variables to expose their public API. For example, the venerable Data::Dumper chooses among various styles for dumping a data structure, depending on the $Data::Dumper::Indent variable. The default (style 2) is optimized for readability, but sometimes the compact style 0 is more appropriate:

print Dumper($data_tree);
print do {local $Data::Dumper::Indent=0; Dumper($other_tree)};

Carp or URI are other examples of well-known modules where global variables are used as a configuration API.

Modules with this kind of architecture are often pretty old; more recent modules tend to prefer an object-oriented style, where configuration options are passed to the new() method instead of living in global variables. Of course the object-oriented architecture offers better encapsulation, since a large program can work with several instances of the same module, each instance having its own configuration options, without interference between them. This is not to say, however, that object-oriented configuration is always the best solution. When it comes to tracing, debugging or profiling needs, it is often very convenient to tune a global knob and have its effect applied to the whole program: in those situations, what you want is just the opposite of strict encapsulation! Therefore some modules, although written in a modern style, have wisely kept some options as global variables; changing these options has a global effect, but thanks to local this effect can be limited to a specific scope. Examples can be found in Type::Tiny or in List::Util.
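As a small illustration of such a global knob, here is a sketch using Carp (mentioned earlier), whose documented $Carp::Verbose variable upgrades warnings to full backtraces; risky_step() is a made-up subroutine for this example:

```perl
use strict;
use warnings;
use Carp;

# risky_step() is invented for this example; carp() issues a warning
sub risky_step { carp "unexpected input" }

risky_step();                  # short one-line warning
{
    local $Carp::Verbose = 1;  # documented Carp knob: full backtraces
    risky_step();              # this warning includes the whole call stack
}
risky_step();                  # back to the short form
```

The effect is global (every carp/croak in the program is affected), but local confines it to the block.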

Subroutines (aka monkey-patching)

Every package has a symbol table that contains not only its global variables, but also the subroutines (or methods) declared in that package. Since the symbol table is writeable, it is possible to overwrite any subroutine, thereby changing the behaviour of the package - an operation called monkey-patching. Of course monkey-patching can easily create chaos, so it should be used with care - but in some circumstances it is extremely powerful and practical. In particular, testing frameworks often use monkey-patching for mocking interactions with the outside world, so that the internal behaviour of a software component can be tested in isolation.

The following example is not very realistic, but it's the best I could come up with to convey the idea in just a few lines of code. Consider a big application equipped with a logger object. Here we will use a logger from Log::Dispatch:

use Log::Dispatch;
my $log = Log::Dispatch->new(
  outputs   => [['Screen', min_level => 'info', newline => 1]],
);

The logger has methods debug(), info(), error(), etc. for accepting messages at different levels. Here it is configured to only log messages starting from level info; so when the client code calls info(), the message is printed, while calls to debug() are ignored. As a result, when the following routine is called, we normally only see the messages "start working ..." and "end working ...":

sub work ($phase) {
  $log->info("start working on $phase");
  $log->debug("weird condition while doing $phase"); # normally not seen - level below 'info'
  $log->info("end working on $phase\n");
}

Now suppose that we don't want to change the log level for the whole application, but nevertheless we need to see the debug messages at a given point of execution. One (dirty!) way of achieving this is to temporarily treat calls to debug() as if they were calls to info(). So a scenario like

work("initial setup");
{ local *Log::Dispatch::debug = *Log::Dispatch::info; # temporarily alias the 'debug' method to 'info'
  work("core stuff");
}

logs the following sequence:

start working on initial setup
end working on initial setup

start working on core stuff
weird condition while doing core stuff
end working on core stuff

start working on cleanup
end working on cleanup

Monkey-patching techniques are not specific to Perl; they are used in all dynamic languages (Python, JavaScript, etc.), not for regular programming needs, but rather for testing or profiling tasks. However, since other dynamic languages do not have the local mechanism, temporary changes to the symbol table must be programmed by hand, by storing the initial code reference in a temporary variable and restoring it when exiting the monkey-patched scope. This is a bit more work and more error-prone. Often there are library modules to make the job easier, though: see for example pytest's monkeypatch fixture in Python (https://docs.pytest.org/en/7.1.x/how-to/monkeypatch.html).
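To make the contrast concrete, here is the hand-written save/patch/restore dance in Perl itself, using a tiny made-up Greeter package. Note that the restore step is skipped if the patched code dies - exactly the kind of mistake that local makes impossible:

```perl
use strict;
use warnings;
use feature 'say';

# a tiny stand-in package, invented for this example
package Greeter;
sub debug { say "debug: $_[1]" }
sub info  { say "info: $_[1]" }

package main;

my $orig = \&Greeter::debug;            # 1. save the original code ref
{
    no warnings 'redefine';
    *Greeter::debug = \&Greeter::info;  # 2. install the patch
    Greeter->debug("inside the patch"); # actually runs info()
    *Greeter::debug = $orig;            # 3. restore by hand (skipped if we die!)
}
Greeter->debug("after restore");        # original behaviour again
```

With local, steps 1 and 3 happen automatically, and the restore is performed even when an exception propagates out of the block.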

Monkey-patching in a statically-typed language like Java is more acrobatic, as shown in Nicolas Fränkel's blog.

Localizing elements of arrays or hashes

The value of an array at a specific index, or the value of a hash for a specific key, can be localized too. We have already seen some examples with the builtin hashes %ENV or %SIG, but it works as well on user data, even when the data structure is several levels deep. A typical use case for this is when a Web application loads a JSON, YAML or XML config file at startup. The config data becomes a nested tree in memory; if for any reason that config data must be changed at some point, local can override a specific data leaf, or any intermediate subtree, like this:

local $app->config->{logger_file} = '/opt/log/special.log';
local $app->config->{session}     = {storage => '/opt/data/app_session',
                                     expires => 42900,
                                    };

Another example can be seen in my Data::Domain module. The job of that module is to walk through a datatree and check if it meets the conditions expected by a "domain". The inspect() method that does the checking sometimes needs to know at which node it is currently located; so a $context tree is worked upon during the traversal and passed to every method call. With the help of local, temporary changes to the shape of $context are very easy to implement:

  for (my $i = 0; $i < $n_items; $i++) {
    local $context->{path} = [@{$context->{path}}, $i];
    ...

An alternative could have been to just push $i on top of the @{$context->{path}} array, and then pop it back at the end of the block. But since this code may call subclasses written by other people, with little guarantee on how they would behave with respect to $context->{path}, it is safer to localize it and be sure that $context->{path} is in a clean state when starting the next iteration of the loop. Interested readers can skim through the source code to see the full story. A similar technique can also be observed in Catalyst::Dispatcher.

A final example is the well-known DBI module, whose rich API exploits several Perl mechanisms simultaneously. DBI is principally object-oriented, except that the objects are called "handles"; but in addition to methods, handles also have "attributes" accessible as hash members. The DBI documentation explicitly recommends using local for temporary modifications to attribute values, for example the RaiseError attribute. This is interesting because it shows a dichotomy between API styles: if DBI had a purely object-oriented style, with the usual getter and setter methods, it would be impossible to reap the benefits of local - temporary changes to an attribute, followed by a revert to the previous value, would have to be programmed by hand.
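The mechanism can be seen without a database connection; the sketch below uses a made-up Handle class whose attributes live in a blessed hash, mimicking DBI's attribute style:

```perl
use strict;
use warnings;

# a made-up stand-in for a DBI handle: attributes are plain hash elements
package Handle;
sub new { bless { RaiseError => 1, PrintError => 1 }, shift }

package main;
my $dbh = Handle->new;

{
    local $dbh->{RaiseError} = 0;  # element of a composite type held in a
    local $dbh->{PrintError} = 0;  # lexical variable: still localizable
    # ... operations whose failures we want to tolerate quietly ...
}
# outside the block, both attributes are back to their previous values
```

This is the third category of localizable globals described earlier: the handle itself is a lexical, but its hash elements live on the heap and can be localized.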

How other languages handle temporary changes to global state

As argued earlier, the need for temporary changes to global state occurs in every programming language, in particular for testing, tracing or profiling tasks. When dynamic scoping is not present, the most common solution is to write specific code for storing the old state in a temporary variable, implement the change for the duration of a given computation, and then restore the old state. A common best practice for such situations is to use a try ... finally ... construct, where restoration to the old state is implemented in the finally clause: this guarantees that even if exceptions occur, the code exits with a clean state. Most languages do possess such constructs - this is definitely the case for Java, JavaScript and Python.
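In Perl terms, the hand-written pattern that local replaces would look roughly like this (a sketch; the computation inside the eval is a placeholder):

```perl
use strict;
use warnings;

# hand-rolled equivalent of 'local $ENV{TZ}', with a "finally"-style restore
my $saved = $ENV{TZ};                  # 1. save the old state
$ENV{TZ}  = 'Asia/Tokyo';              # 2. change it

my $ok = eval {
    # ... code that relies on the changed state; it may die ...
    1;                                 # placeholder computation
};
my $err = $@;

# 3. restore unconditionally, even after an exception (the "finally" part)
if (defined $saved) { $ENV{TZ} = $saved } else { delete $ENV{TZ} }
die $err unless $ok;                   # re-throw after cleanup
```

Note the extra care needed to distinguish "was undefined" from "had a value" when restoring - local handles even the deletion of a previously absent hash key for free.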

Python context managers

Python has a mechanism more specifically targeted at temporary changes of context: the context manager, triggered through a with statement. A context manager implements the special methods __enter__() and __exit__(), which can be programmed to make changes to the global context. This technique offers more precision than Perl's local construct, since the enter and exit methods are free to perform any kind of computation; however it is less general, because each context manager is specialized for a specific task. Python's contextlib library provides a collection of context managers for common needs.

Wrapping up

Perl's local mechanism is often misunderstood. It is frowned upon because it breaks encapsulation - and this criticism is perfectly legitimate as far as regular programming tasks are concerned; but on the other hand, it is a powerful and clean mechanism for temporary changes to the global execution state. It solves real problems with surprising grace, and it’s one of the features that makes Perl uniquely expressive among modern languages. So do not follow the common advice to avoid local at all costs, but learn to identify the situations where local will be a helpful tool to you!

Beyond the mere mechanism, the question is more at the level of philosophy of programming: to which extent should we enforce strict encapsulation of components? In an ideal world, each component has a well-defined interface, and interactions are only allowed to go through the official interfaces. But in a big assembly of components, we may encounter situations that were not foreseen by the designers of the individual components, and require inserting some additional screws, or drilling some additional holes, so that the global assembly works satisfactorily. This is where Perl's local is a beautiful device.

About the cover picture

The cover picture1 represents a violin with a modified global state: the two middle strings are crossed! This is the most spectacular example of scordatura in Heinrich Biber's Rosary sonatas (sometimes also called "Mystery sonatas"). Each sonata in the collection requires the violin to be tuned in a specific way, different from the standard tuning in fifths, resulting in very different atmospheres. It requires some intellectual gymnastics from the player, because the notes written in the score no longer represent the actual sounds heard, but merely refer to the locations of fingers on a normally tuned violin; in other words, this operation is like monkey-patching a violin!

  1. CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=110521795 ↩

Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. Data::ObjectDriver - Simple, transparent data interface, with caching
    • Version: 0.27 on 2026-02-13, with 16 votes
    • Previous CPAN version: 0.26 was 3 months, 27 days before
    • Author: SIXAPART
  2. DateTime::Format::Natural - Parse informal natural language date/time strings
    • Version: 1.25 on 2026-02-13, with 19 votes
    • Previous CPAN version: 1.24_01 was 1 day before
    • Author: SCHUBIGER
  3. Devel::Size - Perl extension for finding the memory usage of Perl variables
    • Version: 0.86 on 2026-02-10, with 22 votes
    • Previous CPAN version: 0.86 was 1 day before
    • Author: NWCLARK
  4. Marlin - 🐟 pretty fast class builder with most Moo/Moose features 🐟
    • Version: 0.023001 on 2026-02-14, with 12 votes
    • Previous CPAN version: 0.023000 was 7 days before
    • Author: TOBYINK
  5. MIME::Lite - low-calorie MIME generator
    • Version: 3.037 on 2026-02-11, with 35 votes
    • Previous CPAN version: 3.036 was 1 day before
    • Author: RJBS
  6. MIME::Body - Tools to manipulate MIME messages
    • Version: 5.517 on 2026-02-11, with 15 votes
    • Previous CPAN version: 5.516
    • Author: DSKOLL
  7. Net::BitTorrent - Pure Perl BitTorrent Client
    • Version: v2.0.0 on 2026-02-13, with 13 votes
    • Previous CPAN version: 0.052 was 15 years, 10 months before
    • Author: SANKO
  8. Net::Server - Extensible Perl internet server
    • Version: 2.017 on 2026-02-09, with 34 votes
    • Previous CPAN version: 2.016 was 12 days before
    • Author: BBB
  9. Protocol::HTTP2 - HTTP/2 protocol implementation (RFC 7540)
    • Version: 1.12 on 2026-02-14, with 27 votes
    • Previous CPAN version: 1.11 was 1 year, 8 months, 25 days before
    • Author: CRUX
  10. SPVM - The SPVM Language
    • Version: 0.990130 on 2026-02-13, with 36 votes
    • Previous CPAN version: 0.990129 was 1 day before
    • Author: KIMOTO

Join us for TPRC 2026 in Greenville, SC!

Perl Foundation News

We are pleased to announce the dates of our next Perl and Raku Conference, to be held in Greenville, SC on June 26-28, 2026. The venue is the same as last year, but we are expanding the conference to 3 days of talks/presentations across the weekend. One or more classes will be scheduled for Monday the 29th as well. The hackathon will be running continuously from June 25 through June 29, so if you can come early or stay late, there will be opportunities for involvement with other members of the community.

Mark your calendars and save the dates!

Our website, https://www.tprc.us/  has more details including links to reserve your hotel room and a link to register for the conference at the early bird price.  Watch for more updates as more plans are finalized.

Our theme for 2026 is "Perl is my cast iron pan". Perl is reliable, versatile, durable, and continues to be ever so useful - just like your favorite cast iron pan! Raku might map to tempered steel: also quite reliable and useful, and with some very attractive updates!

We hope to see you in June!

My presentation will look at coding agents from Anthropic and z.ai with the following questions:

How (well) can coding agents support Perl code?

What differences are there between the models?

How can I get agents to write good code?

Hope to see you there:

https://act.yapc.eu/gpw2026/

My Blog

blogs.perl.org

This is my first post in this Blog...
I will write about Perl

  • 00:00 Introduction

  • 01:30 OSDC Perl, mention last week

  • 03:00 Nikolai Shaplov NATARAJ, one of our guests author of Lingua-StarDict-Writer on GitLab.

  • 04:30 Nikolai explaining his goals about security memory leak in Net::SSLeay

  • 05:58 What we did earlier. (Low hanging fruits.)

  • 07:00 Let's take a look at the repository of Net::SSLeay

  • 08:00 Trying to understand what happens in the repository.

  • 09:15 A bit of explanation about adopting a module. (co-maintainer, unauthorized uploads)

  • 11:00 PAUSE

  • 15:30 Check the "river" status of the distribution. (reverse dependency)

  • 17:20 You can CC-me in your correspondence.

  • 18:45 Ask people to review your pull-requests.

  • 21:30 Mention the issue with DBIx::Class and how to take over a module.

  • 23:50 A bit about the OSDC Perl page.

  • 24:55 CPAN Dashboard and how to add yourself to it.

  • 27:40 Show the issues I opened asking author if they are interested in setting up GitHub Actions.

  • 29:25 Start working on Dancer-Template-Mason

  • 30:00 clone it

  • 31:15 perl-tester Docker image.

  • 33:30 Installing the dependencies in the Docker container

  • 34:40 Create the GitHub Workflow file. Add to git. Push it out to GitHub.

  • 40:55 First failure in the CI which is unclear.

  • 42:30 Verifying the problem locally.

  • 43:10 Open an issue.

  • 58:25 Can you talk about dzil and Dist::Zilla?

  • 1:02:25 We get back to working in the CI.

  • 1:03:25 Add --notest to make installations run faster.

  • 1:05:30 Add the git configuration to the CI workflow.

  • 1:06:32 Is it safe to use --notest when installing dependencies?

  • 1:11:05 git rebase squashing the commits into one commit

  • 1:13:35 git push --force

  • 1:14:10 Send the pull-request.

Answer

I use xscreensaver, and to forbid user switching in it:

! in .Xresources
xscreensaver.splash: false
! Set to nothing makes user switching not possible
*.newLoginCommand:

Lightdm supports .d directories, by default they aren’t created on Debian but upstream documents them clearly. In other words: /etc/lightdm/lightdm.conf.d/ is read.

Which means you need to drop a file, /etc/lightdm/lightdm.conf.d/10-local-overrides.conf and add the content:

[Seat:*]
allow-user-switching=false
allow-guest=false

To check your configuration:

lightdm --show-config

(dlxxxvi) 10 great CPAN modules released last week

Niceperl
Updates for great CPAN modules released last week. A module is considered great if its favorites count is greater than or equal to 12.

  1. App::rdapper - a command-line RDAP client.
    • Version: 1.23 on 2026-02-02, with 21 votes
    • Previous CPAN version: 1.22 was 3 days before
    • Author: GBROWN
  2. Beam::Wire - Lightweight Dependency Injection Container
    • Version: 1.030 on 2026-02-04, with 19 votes
    • Previous CPAN version: 1.029 was 1 day before
    • Author: PREACTION
  3. BerkeleyDB - Perl extension for Berkeley DB version 2, 3, 4, 5 or 6
    • Version: 0.67 on 2026-02-01, with 14 votes
    • Previous CPAN version: 0.66 was 1 year, 3 months, 18 days before
    • Author: PMQS
  4. Data::Alias - Comprehensive set of aliasing operations
    • Version: 1.29 on 2026-02-02, with 19 votes
    • Previous CPAN version: 1.28 was 3 years, 1 month, 12 days before
    • Author: XMATH
  5. Image::ExifTool - Read and write meta information
    • Version: 13.50 on 2026-02-07, with 44 votes
    • Previous CPAN version: 13.44 was 1 month, 22 days before
    • Author: EXIFTOOL
  6. IO::Compress - IO Interface to compressed data files/buffers
    • Version: 2.217 on 2026-02-01, with 19 votes
    • Previous CPAN version: 2.216 was 1 day before
    • Author: PMQS
  7. Perl::Tidy - indent and reformat perl scripts
    • Version: 20260204 on 2026-02-03, with 147 votes
    • Previous CPAN version: 20260109 was 25 days before
    • Author: SHANCOCK
  8. Sisimai - Mail Analyzing Interface for bounce mails.
    • Version: v5.6.0 on 2026-02-02, with 81 votes
    • Previous CPAN version: v5.5.0 was 1 month, 28 days before
    • Author: AKXLIX
  9. SPVM - The SPVM Language
    • Version: 0.990127 on 2026-02-04, with 36 votes
    • Previous CPAN version: 0.990126
    • Author: KIMOTO
  10. Term::Choose - Choose items from a list interactively.
    • Version: 1.780 on 2026-02-04, with 15 votes
    • Previous CPAN version: 1.779 was 2 days before
    • Author: KUERBIS

(dcxxiv) metacpan weekly report - XS::JIT

Niceperl

This is the weekly favourites list of CPAN distributions. Votes count: 61

Week's winner: XS::JIT (+5)

Build date: 2026/02/07 20:47:56 GMT


Clicked for first time:


Increasing its reputation:


Dave writes:

During January, I finished working on another tranche of ExtUtils::ParseXS fixups, this time focussing on:

  • adding and rewording warning and error messages, and adding new tests for them;

  • improving test coverage: all XS keywords have tests now;

  • reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format; splitting a large test file; modernising tests;

  • refactoring and improving the length(str) pseudo-parameter implementation.

I also started work on my annual "get './TEST -deparse' working again" campaign. This option runs all the test suite files through a round trip in the deparser before running them. Over the course of the year we invariably accumulate new breakage; sometimes this involves fixing Deparse.pm, and sometimes just back-listing the test file as it is now tickling an already known issue in the deparser.

I also worked on a couple of bugs.

Summary:

  • 0:53 GH #13878 COW speedup lost after e8c6a474
  • 4:05 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules to build
  • 12:14 fix up Deparse breakage
  • 26:12 improve Extutils::ParseXS

Total:

  • 43:24 (HH::MM)

Tony writes:

```
[Hours] [Activity]

2026/01/05 Monday
 0.23 #24055 review, research and approve with comment
 0.08 #24054 review and approve
 0.12 #24052 review and comment
 1.13 #24044 review, research and approve with comment
 0.37 #24043 review and approve
 0.78 #23918 rebase, testing, push and mark ready for review
 1.58 #24001 fix related call_sv() issue, testing
 0.65 #24001 testing, debugging

 4.94

2026/01/06 Tuesday
 0.90 #24034 review and comment
 1.12 #24024 research and follow-up
 1.40 #24001 debug crash in init_debugger() and work up a fix, testing
 0.08 #24001 re-check, push for CI

 3.50

2026/01/07 Wednesday
 0.15 #24034 review updates and approve
 0.55 #23641 review and comment
 0.23 #23961 review and approve
 0.28 #23988 review and approve with comment
 0.62 #24001 check CI results and open PR 24060
 0.50 #24059 review and comments
 0.82 #24024 work on a test and the fix, testing
 0.27 #24024 add perldelta, testing, make PR 24061
 1.13 #24040 work on a test and a fix, testing, investigate other possible similar problems

 4.55

2026/01/08 Thursday
 0.35 #24024 minor fixes, comment, get approval, apply to blead
 0.72 #24040 rebase, perldelta, find a simplification, testing and re-push
 0.18 #24053 review and approve
 0.77 #24063 review, research, testing and comment
 1.47 #24040 look at goto too, look for other similar issues and open 24064, fix for goto (PVOP), testing and push for CI
 0.32 #24040 check CI results, make PR 24065
 0.28 #24050 review and comment

 4.09

2026/01/09 Friday
 0.20 #24059 review updates and comment

 0.20

2026/01/12 Monday
 0.32 #24059 review updates and approve
 0.35 #24066 review and approve
 0.22 #24040 rebase and clean up whitespace, apply to blead
 1.00 #23966 rebase, testing (expected issues from the #23885 merge but I guess I got the magic docs right)
 1.05 #24062 review up to ‘ParseXS: tidy up INCLUDE error messages’
 0.25 #24069 review and comment
 0.62 #24071 part review and comment

 3.81

2026/01/13 Tuesday
 1.70 #24070 research and comment
 0.23 #24069 review and comment
 0.42 #24071 more review
 1.02 #24071 more review, comments

 3.37

2026/01/14 Wednesday
 0.23 #23918 minor fix
 0.18 #24069 review updates and approve
 0.32 #24077 review and comments
 0.53 #24073 review, research and comment
 0.87 #24075 review, research and comment
 0.45 #24071 benchmarking and comment
 1.47 #24019 debugging, brief comment on work so far

 4.05

2026/01/15 Thursday
 0.37 #24019 debugging and comment on cause of win32 issues
 1.02 #24077 review, follow-up
 0.08 #24079 review and approve
 0.25 #24076 review and approve
 0.73 #24062 more review up to ‘ParseXS: refactor: don't set $_ in Param::parse()’
 2.03 #24062 more review to ‘ParseXS: refactor: 001-basic.t: add TODO flag’ and comments

 4.48

2026/01/19 Monday
 1.12 maint-votes, vote apply/testing one of the commits
 0.43 github notifications
 0.08 #24079 review updates and comment
 0.70 #24075 research and approve
 0.57 #24063 research, try to break it, comment
 1.43 #24062 more review to ‘ParseXS: add basic tests for PREINIT keyword’

 4.33

2026/01/20 Tuesday
 2.23 #24078 review, testing, comments
 0.85 #24098 review, research and comment
 1.00 #24062 more review up to ‘ParseXS: 001-basic.t: add more ellipsis tests’

 4.08

2026/01/21 Wednesday
 0.82 #23995 research and follow-up
 0.25 #22125 follow-up
 0.23 #24056 research, comment
 0.67 #24103 review, research and approve
 1.45 #24062 more review to end, comment

 3.42

2026/01/22 Thursday
 0.10 #24079 review update and approve
 0.08 #24106 review and approve
 0.10 #24096 review and approve
 0.08 #24094 review and approve
 0.82 #24080 review, research and comments
 0.08 #24081 review and approve
 0.75 #24082 review, testing, comment
 1.15 #23918 rebase #23966, testing and apply to blead, start on string APIs

 3.16

2026/01/27 Tuesday
 0.35 #23956 fix perldelta issues
 1.27 #22125 remove debugging detritus, research and comment
 1.57 #24080 debugging into SvOOK and PVIO
 0.67 #24080 more debugging, comment
 0.25 #24120 review and approve
 1.03 #23984 review, research and approve

 5.14

2026/01/28 Wednesday
 0.37 #24080 follow-up
 0.10 #24128 review and apply to blead
 0.70 #24105 review, look at changes needed
 0.15 #23956 check CI results and apply to blead
 0.10 #22125 check CI results and apply to blead
 0.13 #4106 rebase PR 23262 and testing
 0.53 #24001 rebase PR 24060 and testing
 0.57 #24129 review and comments
 0.28 #24127 review and approve
 0.10 #24124 review and approve
 0.20 #24123 review and approve with comment

 3.23

2026/01/29 Thursday
 0.27 #23262 minor change suggested by xenu, testing, push for CI
 0.22 #24060 comment
 1.63 #24082 review, testing, comments
 0.43 #24130 review, check some side issues, approve
 0.12 #24077 review updates and approve
 0.08 #24121 review and approve
 0.08 #24122 review and comment
 0.30 #24119 review and approve

 3.13

Which I calculate is 59.48 hours.

Approximately 57 tickets were reviewed or worked on, and 6 patches were applied.
```


Paul writes:

This month I managed to finish off a few refalias-related issues, as well as lend some time to help BooK make further progress implementing PPC0014.

  • 1 = Clear pad after multivar foreach
    • https://github.com/Perl/perl5/pull/240
  • 3 = Fix B::Concise output for OP_MULTIPARAM
    • https://github.com/Perl/perl5/pull/24066
  • 6 = Implement multivariable foreach on refalias
    • https://github.com/Perl/perl5/pull/24094
  • 1 = SVf_AMAGIC flag tidying (as yet unmerged)
    • https://github.com/Perl/perl5/pull/24129
  • 2.5 = Mentoring BooK towards implementing PPC0014
  • 2 = Various github code reviews

Total: 15.5 hours

My focus for February will now be to try to get both attributes-v2 and magic-v2 branches in a state where they can be reviewed, and at least the first parts merged in time for 5.43.9, and hence 5.44, giving us a good base to build further feature ideas on top of.

2025 was a tough year for The Perl and Raku Foundation (TPRF). Funds were sorely needed. The community grants program had been paused due to budget constraints and we were in danger of needing to pause the Perl 5 core maintenance grants. Fastmail stepped up with a USD 10,000 donation and helped TPRF to continue to support Perl 5 core maintenance. Ricardo Signes explains why Fastmail helped keep this very important work on track.

Perl has served us quite well since Fastmail’s inception. We’ve built up a large code base that has continued to work, grow, and improve over twenty years. We’ve stuck with Perl because Perl stuck with us: it kept working and growing and improving, and very rarely did those improvements require us to stop the world and adapt to onerous changes. We know that kind of stability is, in part, a function of the developers of Perl, whose time is spent figuring out how to make Perl better without also making it worse. The money we give toward those efforts is well-spent, because it keeps the improvements coming and the language reliable.

— Ricardo Signes, Director & Chief Developer Experience Officer, Fastmail

One of the reasons that you don’t hear about Perl in the headlines is its reliability. Upgrading your Perl from one version to the next? That can be a very boring deployment. Your code worked before and it continues to “just work” after the upgrade. You don’t need to rant about short deprecation cycles, performance degradation, or dependencies which no longer install. The Perl 5 core maintainers take great care to ensure that you don’t have to care very much about upgrading your Perl. Backwards compatibility is top of mind. If your deployment is boring, it’s because a lot of care and attention has been given to this matter by the people who love Perl and love to work on it.

As we moved to secure TPRF’s 2025 budget, we reached out to organizations which rely on Perl. A number of these companies immediately offered to help. Fastmail has already been a supporter of TPRF for quite some time. In addition to this much needed donation, Fastmail has been providing rock solid free email hosting to the foundation for many years.

While Fastmail’s donation has been allocated towards Perl 5 Core maintenance, TPRF is now in the position to re-open the community grants program, funding it with USD 10,000 for 2026. There is also an opportunity to increase the community grants funding if sponsor participation increases. As we begin our 2026 fundraising, we are looking to cast a wider net and bring more sponsor organizations on board to help support healthy Perl and Raku ecosystems.

Maybe your organization will be the one to help us double our community grants budget in 2026. To become a sponsor, contact: olaf@perlfoundation.org

“Perl is my cast-iron pan - reliable, versatile, durable, and continues to be ever so useful.” TPRC 2026 brings together a community that embodies all of these qualities, and we’re looking for sponsors to help make this special gathering possible.

About the Conference

The Perl and Raku Conference 2026 is a community-organized gathering of developers, enthusiasts, and industry professionals. It takes place from June 26-28, 2026, in Greenville, South Carolina. The conference will feature an intimate, single-track format that promises high sponsor visibility. We expect approximately 80 participants, some of whom will stay in town for the shoulder days (June 25-29) and a Monday workshop.

Why Sponsor?

  • Give back to the language and communities which have already given so much to you
  • Connect with the developers and craftspeople who build your tools – the ones that are built to last
  • Help to ensure that The Perl and Raku Foundation can continue to fund Perl 5 core maintenance and Community Grants

Sponsorship Tiers

Platinum Sponsor ($6,000)

  • Only 1 sponsorship is available at this level
  • Premium logo placement on conference website
  • This donation qualifies your organization to be a Bronze Level Sponsor of The Perl and Raku Foundation
  • 5-minute speaking slot during opening ceremony
  • 2 complimentary conference passes
  • Priority choice of rollup banner placement
  • Logo prominently displayed on conference badges
  • First choice of major named sponsorship (Conference Dinner, T-shirts, or Swag Bags)
  • Logo on main stage backdrop and conference banners
  • Social media promotion
  • All benefits of lower tiers

Gold Sponsor ($4,000)

  • Logo on all conference materials
  • One complimentary conference pass
  • Rollup banner on display
  • Choice of named sponsorship (Lunch or Snacks)
  • Logo on backdrop and banners
  • Dedicated social media recognition
  • All benefits of lower tiers

Silver Sponsor ($2,000)

  • Logo on conference website
  • Logo on backdrop and banners
  • Choice of smaller named sponsorship (Beverage Bars)
  • Social media mention
  • All benefits of lower tier

Bronze Sponsor ($1,000)

  • Name/logo on conference website
  • Name/logo on backdrop and banners

All Sponsors Receive

  • Logo/name in Update::Daily conference newsletter sidebar
  • Opportunity to provide materials for conference swag bags
  • Recognition during opening and closing ceremonies
  • Listed on conference website sponsor page
  • Mentioned in conference social media

Named Sponsorship Opportunities

Exclusive naming rights available for:

  • Conference Dinner ($2,000) - Signage on tables and buffet
  • Conference Swag Bags ($1,500) - Logo on bags
  • Conference T-Shirts ($1,500) - Logo on sleeve
  • Lunches ($1,500) - Signage at pickup and on menu tickets
  • Snacks ($1,000) - Signage at snack bar
  • Update::Daily Printing ($200) - Logo on masthead

About The Perl and Raku Foundation

Proceeds beyond conference expenses support The Perl and Raku Foundation, a non-profit organization dedicated to advancing the Perl and Raku programming languages through open source development, education, and community building.

Contact Information

For more information on how to become a sponsor, please contact: olaf@perlfoundation.org

OSDC Perl

  • 00:00 Introduction to OSDC
  • 01:30 Introducing myself: Perl Maven, Perl Weekly
  • 02:10 The earlier issues.
  • 03:10 How to select a project to contribute to?
  • 04:50 Chat on OSDC Zulip
  • 06:45 How to select a Perl project?
  • 09:20 CPAN::Digger
  • 10:10 Modules that don't have a link to their VCS.
  • 13:00 Missing CI - GitHub Actions or GitLab Pipeline and Travis-CI.
  • 14:00 Look at Term-ANSIEncode by Richard Kelsch - How to find the repository of this project?
  • 15:38 Switching to look at Common-CodingTools by mistake.
  • 16:30 How does MetaCPAN know where the repository is?
  • 17:52 Clone the repository.
  • 18:15 Use the szabgab/perl Docker container.
  • 22:10 Run perl Makefile.PL, install dependency, run make and make distdir.
  • 23:40 See the generated META.json file.
  • 24:05 Edit the Makefile.PL
  • 24:55 Explaining my method of cloning the original repository first (calling that remote origin) and forking later (calling that remote fork).
  • 27:00 Really edit Makefile.PL and add the META_MERGE section and verify the generated META.json file.
  • 29:00 Create a branch locally. Commit the change.
  • 30:10 Create a fork on GitHub.
  • 31:45 Add the fork as a remote repository and push the branch to it.
  • 33:20 Linking to the PR on the OSDC Perl report page.
  • 35:00 Planning to add .gitignore and maybe setting up GitHub Action.
  • 36:00 Start from the main branch, create the .gitignore file.
  • 39:00 Run the tests locally. Set up GitHub Actions to run the tests on every push.
  • 44:00 Editing the GHA configuration file.
  • 48:30 Commit, push to the fork, check the results of GitHub Action in my fork on GitHub.
  • 51:45 Look at the version of the perldocker/perl-tester Docker image.
  • 54:40 Update list of Perl versions in the CI. See the results on GitHub.
  • 55:30 Show the version number of perl.
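The META_MERGE change mentioned around 24:05-27:00 usually amounts to adding a resources entry to the WriteMakefile call, so that the generated META.json carries the repository link. A minimal sketch for a Makefile.PL, assuming the Term-ANSIEncode distribution from the video (the repository URL and version number here are illustrative placeholders, not the module's real values):

```perl
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME       => 'Term::ANSIEncode',
    VERSION    => '0.01',    # illustrative
    META_MERGE => {
        'meta-spec' => { version => 2 },
        resources   => {
            # This is what lets MetaCPAN (and CPAN::Digger) find the VCS.
            repository => {
                type => 'git',
                url  => 'https://github.com/EXAMPLE/Term-ANSIEncode.git',
                web  => 'https://github.com/EXAMPLE/Term-ANSIEncode',
            },
        },
    },
);
```

Re-running perl Makefile.PL followed by make distdir regenerates META.json, where the repository resource should then appear.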

Perl-related GitHub organizations

Perl Maven

Most projects are started by a single person in a GitHub repository of that person. Later, for various reasons, people establish GitHub organizations and move the projects there. Sometimes the organization contains a set of sub-projects related to a central project (e.g. a web framework and its extensions or database access). Sometimes it is a collection of projects related to a single topic (e.g. testing or IDE support). Sometimes it is just a random collection of projects where people band together in the hope that no project will be left behind (e.g. the CPAN Authors organization).

Organizations make it easier to have multiple maintainers, thus ensuring the continuity of a project, but it might also mean that none of the members really feels the urge to continue working on something.

In any case, I tried to collect all the Perl-related GitHub organizations.

Hopefully in ABC order...

  • Beyond grep - It is mostly for ack, a better grep developed by Andy Lester. No public members. 4 repositories.

  • Catalyst - Catalyst is a web-framework. Its runtime and various extensions are maintained in this organization. 10 members and 39 repositories.

  • cpan-authors - A place for CPAN authors to collaborate more easily. 4 members and 9 repositories.

  • davorg cpan - Organisation for maintaining Dave Cross's CPAN modules. No public members and 47 repositories.

  • ExifTool - No public members. 3 repositories.

  • Foswiki - The Foswiki and related projects. 13 members and 649 repositories.

  • gitpan - An archive of CPAN modules - 2 members and 5k+ read-only repositories.

  • Kelp framework - a web development framework. 1 member and 12 repositories.

  • MetaCPAN - The source of the MetaCPAN site - 9 members and 56 repositories.

  • Mojolicious - Some Perl and many JavaScript projects. 5 members and 29 repositories.

  • Moose - Moose, MooseX-*, Moo, etc. 11 members and 69 repositories.

  • Netdisco - Netdisco and SNMP-Info projects. 10 members and 14 repositories.

  • PadreIDE - Padre, the Perl IDE. 13 members and 102 repositories.

  • Paracamelus - No public members. 2 repositories.

  • Perl - Perl 5 itself, Docker images, etc. 20 members, 8 repositories.

  • Perl Actions - GitHub Actions to be used in workflows. 5 members and 9 repositories.

  • Perl Advent Calendar - including the source of perl.com. 3 members and 8 repositories.

  • Perl Bitcoin - Perl Bitcoin Toolchain Collective. 3 members and 7 repositories.

  • Perl Toolchain Gang - ExtUtils::MakeMaker, Module::Build, etc. 27 members and 41 repositories.

  • Perl Tools Team - source of planetperl, perl-ads, etc. No public members. 6 repositories.

  • Perl Dancer - Dancer, Dancer2, many plugins. 30 members and 79 repositories.

  • perltidy - Only for Perl::Tidy. No public members. 1 repository.

  • perl5-dbi - DBI, several DBD::* modules, and some related modules. 7 members and 15 repositories.

  • perl.org - also cpan.org and perldoc.perl.org. 3 members and 7 repositories.

  • Perl5 - DBIx-Class and DBIx-Class-Historic. No public members. 2 repositories.

  • perl5-utils - List::MoreUtils, File::ShareDir etc. 2 members and 22 repositories.

  • Perl-Critic - PPI, Perl::Critic and related. 7 members and 5 repositories.

  • perl-ide - Perl Development Environments. 26 members and 13 repositories.

  • perl-pod - Pod::Simple, Test::Pod. 1 member and 4 repositories.

  • PkgConfig - 1 member 1 repository.

  • plack - psgi-specs, Plack and a few middlewares. 5 members and 7 repositories.

  • RexOps - Rex, Rexify and related projects. 1 member and 46 repositories.

  • Sqitch - Sqitch for Sensible database change management and related projects. No public members. 16 repositories.

  • StrawberryPerl - The Perl distribution for MS Windows. 4 members and 10 repositories.

  • Test-More - Test::Builder, Test::Simple, Test2 etc. - 4 members, 27 repositories.

  • The Enlightened Perl Organisation - Task::Kensho and Task::Kensho::*. 1 member and 1 repository.

  • Thunderhorse Framework - a modern web development framework. No public members. 4 repositories.

  • Webmin - A web-based system administration tool for Unix-like servers. 5 repositories.

Companies

These are not Perl-specific GitHub organizations, but some of their repositories are in Perl.

  • cPanel - Open Source Software provided by cPanel. 1 member and 22 repositories.

  • DuckDuckGo - The search engine. 19 members and 122 repositories.

  • Fastmail - Open-source software developed at Fastmail. 4 members and 38 repositories.

  • RotherOSS - Otobo (OTRS fork). No public members. 47 repositories.

Then there’s Perl

Perl on Medium

Since my native language isn’t English, the German text follows below.

Lock and unlock hash using Hash::Util

Perl Maven

If you don't like autovivification, or simply would like to make sure your code does not accidentally alter a hash, the Hash::Util module is for you.

You can lock_hash a hash and later unlock_hash it if you'd like to make some changes.

In this example you can see 3 different actions commented out. Each one raises an exception if attempted while the hash is locked. After we unlock the hash, we can execute those actions without error.

I tried this both in perl 5.40 and 5.42.

examples/locking_hash.pl

use strict;
use warnings;
use feature 'say';

use Hash::Util qw(lock_hash unlock_hash);
use Data::Dumper qw(Dumper);


my %person = (
    fname => "Foo",
    lname => "Bar",
);
lock_hash(%person);

print Dumper \%person;
print "$person{fname} $person{lname}\n";
say "fname exists ", exists $person{fname};
say "language exists ", exists $person{language};

# $person{fname} = "Peti";     # Modification of a read-only value attempted
# delete $person{lname};       # Attempt to delete readonly key 'lname' from a restricted hash
# $person{language} = "Perl";  # Attempt to access disallowed key 'language' in a restricted hash

unlock_hash(%person);

$person{fname} = "Peti";     # succeeds now that the hash is unlocked
delete $person{lname};       # succeeds
$person{language} = "Perl";  # succeeds

print Dumper \%person;

$VAR1 = {
          'lname' => 'Bar',
          'fname' => 'Foo'
        };
Foo Bar
fname exists 1
language exists
$VAR1 = {
          'language' => 'Perl',
          'fname' => 'Peti'
        };
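Hash::Util can also lock just the set of keys while leaving the values writable, which the article above does not cover. A minimal sketch using lock_keys and unlock_keys (both exported by Hash::Util; the %config contents are invented for illustration):

```perl
use strict;
use warnings;
use Hash::Util qw(lock_keys unlock_keys);

my %config = (
    host => "localhost",
    port => 8080,
);
lock_keys(%config);

# Values of existing keys may still change; only the key set is fixed.
$config{port} = 9090;

# Adding a key that is not in the locked set raises an exception.
eval { $config{debug} = 1 };
print "Error: $@" if $@;   # reports a disallowed-key error

unlock_keys(%config);
$config{debug} = 1;        # allowed again after unlocking
```

This is handy for configuration hashes where a typo in a key name should fail loudly instead of silently creating a new entry.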

My name is Alex. Over the last years I’ve implemented several versions of Raku’s documentation format (Synopsis 26 / Raku’s Pod) in Perl and JavaScript.

At an early stage, I shared the idea of creating a lightweight version of Raku’s Pod with Damian Conway, the original author of the Synopsis 26 documentation specification (S26). He was supportive of the concept and offered several valuable insights that helped shape the vision of what later became Podlite.

Today, Podlite is a small block-based markup language that is easy to read as plain text, simple to parse, and flexible enough to be used everywhere — in code, notes, technical documents, long-form writing, and even full documentation systems.

This article is an introduction for the Perl community — what Podlite is, how it looks, how you can already use it in Perl via a source filter, and what’s coming next.

The Block Structure of Podlite

One of the core ideas behind Podlite is its consistent block-based structure. Every meaningful element of a document — a heading, a paragraph, a list item, a table, a code block, a callout — is represented as a block. This makes documents both readable for humans and predictable for tools.

Podlite supports three interchangeable block styles: delimited, paragraph, and abbreviated.

Abbreviated blocks (=BLOCK)

This is the most compact form. A block starts with = followed by the block name.

=head1 Installation Guide
=item Perl 5.8 or newer
=para This tool automates the process.
  • ends on the next directive or a blank line
  • best used for simple one-line blocks
  • cannot include configuration options (attributes)

Paragraph blocks (=for BLOCK)

Use this form when you want a multi-line block or need attributes.

=for code :lang<perl>
say "Hello from Podlite!";
  • ends when a blank line appears
  • can include complex content
  • allows attributes such as :lang, :id, :caption, :nested, …

Delimited blocks (=begin BLOCK … =end BLOCK)

The most expressive form. Useful for large sections, nested blocks, or structures that require clarity.

=begin nested :notify<important>
Make sure you have administrator privileges.
=end nested
  • explicit start and end markers
  • perfect for code, lists, tables, notifications, markdown, formulas
  • can contain other blocks, including nested ones

These block styles differ in syntax convenience, but all produce the same internal structure.

Diagram: the three block styles map to the same internal structure

Regardless of which syntax you choose:

  • all three forms represent the same block type
  • attributes apply the same way (:lang, :caption, :id, …)
  • tools and renderers treat them uniformly
  • nested blocks work identically
  • you can freely mix styles inside a document
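To make the interchangeability concrete, here is the same heading written in each of the three styles; per the rules above, each form should produce an identical block in the internal document model (the heading text is invented for illustration):

```pod
=head1 Installation

=for head1
Installation

=begin head1
Installation
=end head1
```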

Example: Comparing POD and Podlite

Let’s see how the same document looks in traditional POD versus Podlite:

POD vs Podlite

Each block has clear boundaries, so you don’t need blank lines between them. This makes your documentation more compact and easier to read. This is one of the reasons Podlite remains compact yet powerful: the syntax stays flexible, while the underlying document model stays clean and consistent.
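As a rough sketch of that compactness (the section content here is invented): classic POD needs blank lines between directives and =over/=back scaffolding for lists, while Podlite's abbreviated blocks are self-delimiting:

```pod
Classic POD:

=head1 Installation

=over 4

=item * Perl 5.8 or newer

=back

The same content in Podlite:

=head1 Installation
=item Perl 5.8 or newer
```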

This Podlite example renders as shown in the following screenshot:

Podlite example

Inside the Podlite Specification 1.0

One important point about Podlite is that it is first and foremost a specification. It does not belong to any particular programming language, platform, or tooling ecosystem. The specification defines the document model, syntax rules, and semantics.

From the Podlite 1.0 specification, notable features include:

  • headings (=head1, =head2, …)
  • lists and definition lists, including task lists
  • tables (simple and advanced)
  • CSV-backed tables
  • callouts / notifications (=nested :notify<tip|warning|important|note|caution>)
  • table of contents (=toc)
  • includes (=include)
  • embedded data (=data)
  • pictures (=picture and inline P<>)
  • formulas (=formula and inline F<>)
  • user defined blocks and markup codes
  • Markdown integration

The =markdown block is part of the standard block set defined by the Podlite Specification 1.0. This means Markdown is not an add-on or optional plugin — it is a fully integrated, first-class component of the language.

Markdown content becomes part of Podlite’s unified document structure, and its headings merge naturally with Podlite headings inside the TOC and document outline.

Below is a screenshot showing how Markdown inside Perl is rendered in the in-development VS Code extension, demonstrating both the block structure and live preview:

Podlite source, including =markdown block

Using Podlite in Perl via the source filter

To make Podlite directly usable in Perl code, there is a module on CPAN: Podlite — Use Podlite markup language in Perl programs

A minimal example could look like this:

use Podlite; # enable Podlite blocks inside Perl

=head1 Quick Example
=begin markdown
Podlite can live inside your Perl programs.
=end markdown
print "Podlite active\n";

Roadmap: what’s next for Podlite

Podlite continues to grow, and the Specification 1.0 is only the beginning. Several areas are already in active development, and more will evolve with community feedback.

Some of the things currently planned or in progress:

  • CLI tools
    • command-line utilities for converting Podlite to HTML, PDF, man pages, etc.
    • improve pipelines for building documentation sites from Podlite sources
  • VS Code integration
  • Ecosystem growth
    • develop comprehensive documentation and tutorials
    • community-driven block types and conventions

Try Podlite and share feedback

If this resonates with you, I’d be very happy to hear from you:

  • ideas for useful block types
  • suggestions for tools or integrations
  • feedback on the syntax and specification

https://github.com/podlite/podlite-specs/discussions

Even small contributions — a comment, a GitHub star, or trying an early tool — help shape the future of the specification and encourage further development.

Useful links:

Thanks for reading, Alex

See OSDC Perl

  • 00:00 Working with Peter Nilsson

  • 00:01 Find a module to add GitHub Action to. go to CPAN::Digger recent

  • 00:10 Found Tree-STR

  • 01:20 Bug in CPAN Digger that shows a GitHub link even if it is broken.

  • 01:30 Search for the module name on GitHub.

  • 02:25 Verify that the name of the module author is the owner of the GitHub repository.

  • 03:25 Edit the Makefile.PL.

  • 04:05 Edit the file, fork the repository.

  • 05:40 Send the Pull-Request.

  • 06:30 Back to CPAN Digger recent to find a module without GitHub Actions.

  • 07:20 Add file / Fork repository gives us "unexpected error".

  • 07:45 Direct fork works.

  • 08:00 Create the .github/workflows/ci.yml file.

  • 09:00 Example CI YAML file: copy it and edit it.

  • 14:25 Look at a GitLab CI file for a few seconds.

  • 14:58 Commit - change the branch and add a description!

  • 17:31 Check if the GitHub Action works properly.

  • 18:17 There is a warning while the tests are running.

  • 21:20 Opening an issue.

  • 21:48 Opening the PR (on the wrong repository).

  • 22:30 Linking to output of a CI?

  • 23:40 Looking at the file to see the source of the warning.

  • 25:25 Assigning an issue? In an open source project?

  • 27:15 Edit the already created issue.

  • 28:30 Use the Preview!

  • 29:20 Sending the Pull-Request to the project owner.

  • 31:25 Switching to Jonathan

  • 33:10 CPAN Digger recent

  • 34:00 Net-SSH-Perl of BDFOY - Testing a networking module is hard and Jonathan is using Windows.

  • 35:13 Frequency of update of CPAN Digger.

  • 36:00 Looking at our notes to find the GitHub account of the module author LNATION.

  • 38:10 Look at the modules of LNATION on MetaCPAN

  • 38:47 Found JSON::Lines

  • 39:42 Install the dependencies, run the tests, generate test coverage.

  • 40:32 Cygwin?

  • 42:45 Add GitHub Action, copying it from the previous PR.

  • 43:54 META.yml should not be committed as it is a generated file.

  • 48:25 I am looking for sponsors!

  • 48:50 Create a branch that reflects what we do.

  • 51:38 commit the changes

  • 53:10 Fork the project on GitHub and setup git remote locally.

  • 55:05 git push -u fork add-ci

  • 57:44 Sending the Pull-Request.

  • 59:10 The 7 dwarfs and Snow White. My hope is to have 100 people sending these PRs.

  • 1:01:30 Feedback.

  • 1:02:10 Did you think this was useful?

  • 1:02:55 Would you be willing to tell people you know that you did this and you will do it again?

  • 1:03:17 You can put this on your resume. It means you know how to do it.

  • 1:04:16 ... and Zoom suddenly closed the recording...
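The remote setup walked through from 53:10 to 55:05 (clone the upstream first, so it becomes origin; add your fork later as a second remote named fork) can be sketched with throwaway local bare repositories standing in for GitHub. The paths, user identity, and the add-ci branch name are illustrative:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Bare repositories stand in for the upstream project and your GitHub fork.
git init -q --bare upstream.git
git init -q --bare myfork.git

# Clone the upstream first; its remote is called "origin" by default.
git clone -q upstream.git work
cd work
git config user.email 'you@example.com'
git config user.name  'Example User'
git commit -q --allow-empty -m "initial commit"
git push -q origin HEAD

# Work on a branch named after the change, as in the session.
git switch -qc add-ci
git commit -q --allow-empty -m "Add CI configuration"

# Add the fork as a second remote and push the branch there,
# mirroring "git push -u fork add-ci" from the 55:05 mark.
git remote add fork ../myfork.git
git push -qu fork add-ci

git remote -v
```

The advantage of this ordering is that origin keeps pointing at the project you want to track, and your fork is just the place you publish branches for pull requests.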