Beautiful Perl series
This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.
The last post was about BLOCKs and lexical scoping, a feature present in almost all programming languages. By contrast, today's topic is a feature quite specific to Perl: dynamic scoping, introduced through the 'local' keyword. As we shall see, lexical scoping and dynamic scoping address different needs, and Perl allows us to choose the scoping strategy most appropriate to each situation.
Using local: an appetizer
Let us start with a very simple example:
say "current time here is ", scalar localtime;
foreach my $zone ("Asia/Tokyo", "Africa/Nairobi", "Europe/Athens") {
local $ENV{TZ} = $zone;
say "in $zone, time is ", scalar localtime;
}
say "but here current time is still ", scalar localtime;
Perl's localtime builtin function returns the time "in the current timezone". On Unix-like systems, the notion of "current timezone" can be controlled through the TZ environment variable, so here the program temporarily changes that variable to get the local time in various cities around the world. At the end of the program, we get back to the usual local time. Here is the output:
current time here is Sun Feb 8 22:21:41 2026
in Asia/Tokyo, time is Mon Feb 9 06:21:41 2026
in Africa/Nairobi, time is Mon Feb 9 00:21:41 2026
in Europe/Athens, time is Sun Feb 8 23:21:41 2026
but here current time is still Sun Feb 8 22:21:41 2026
The gist of that example is that:
- we want to call some code not written by us (here: the localtime builtin function);
- the API of that code does not offer any parameter or option to tune its behaviour according to our needs; instead, it relies on some information in the global state of the running program (here: the local timezone);
- apart from getting our specific result, we don't want the global state to be durably altered, because this could lead to unwanted side-effects.
Use cases similar to this one are very common, not only in Perl but in every programming language. There is always some kind of implicit global state that controls how the program behaves internally and how it interacts with its operating system: for example environment variables, command-line arguments, open sockets, signal handlers, etc. Each language has its strategies for dealing with the global state, but often the solutions are multi-faceted and domain-specific. Perl shines because its local mechanism covers a very wide range of situations in a consistent way and is extremely simple to activate.
Before going into the details, let us address the bad reputation of dynamic scoping. This mechanism is often described as something to be absolutely avoided because it implements a kind of action at a distance, yielding unpredictable program behaviour. That criticism is largely justified, and languages that only have dynamic scoping are indeed not suited for programming-in-the-large. Yet in specific contexts, dynamic scoping is acceptable or even appropriate. The most notable examples are Unix shells and Microsoft PowerShell, which to this day still rely on dynamic scoping, probably because it is easier to implement.
Perl 1 also used dynamic scoping in its initial design, presumably for the same ease of implementation, and also because it was heavily influenced by shell programming and was addressing the same needs. When later versions of Perl evolved into a general-purpose language, the additional mechanism of lexical scoping was introduced because it was indispensable for larger architectures. Nevertheless, dynamic scoping was not removed, partly for historical reasons, but also because action at a distance is precisely what you want in some specific situations. The next chapters will show you why.
The mechanics of local
In the last post we saw that a my declaration declares a new variable, possibly shadowing a previous variable of the same name. By contrast, a local declaration is only possible on a global variable that already exists; the effect of the declaration is to temporarily push aside the current value of that global variable, leaving room for a new value. After that operation, any code anywhere in the program that accesses the global variable will see the new value ... until the effect of local is reverted, namely when exiting the scope in which the local declaration started.
Here is a very simple example, again based on the %ENV global hash. This hash is automatically populated with a copy of the shell environment when perl starts up. We'll suppose that the initial value of $ENV{USERNAME}, as transmitted from the shell, is Alex:
sub greet_user ($salute) {
say "$salute, $ENV{USERNAME}";
}
greet_user("Hello"); # Hello, Alex
{ local $ENV{USERNAME} = 'Brigitte';
  greet_user("Hi");                 # Hi, Brigitte
  { local $ENV{USERNAME} = 'Charles';
    greet_user("Good morning");     # Good morning, Charles
  }
  greet_user("Still there");        # Still there, Brigitte
}
greet_user("Good bye"); # Good bye, Alex
The same thing would work with a global package variable:
our $username = 'Alex'; # "our" declares a global package variable
sub greet_user ($salute) {
say "$salute, $username";
}
greet_user("Hello");
{ local $username = 'Brigitte';
greet_user("Hi");
... etc.
but if the variable is not pre-declared, we get an error:
Global symbol "$username" requires explicit package name (did you forget to declare "my $username"?)
or if the variable is pre-declared as lexical through my $username instead of our $username, we get another error:
Can't localize lexical variable $username
So the local mechanism only applies to global variables. There are three categories of such variables:
- standard variables of the Perl interpreter;
- package members (of modules imported from CPAN, or of your own modules). This includes the whole symbol tables of those packages, i.e. not only the global variables, but also the subroutines or methods;
- what the Perl documentation calls elements of composite types, i.e. individual members of arrays or hashes. Such elements can be localized even if the array or hash is itself a lexical variable, because while the entry point to the array or hash may be on the execution stack, its member elements are always stored in global heap memory. This last category of global variables is perhaps more difficult to understand, but we will give examples later to make it clear.
As we can see, the single and simple mechanism of dynamic scoping covers a vast range of applications! Let us explore the various use cases.
Localizing standard Perl variables
The Perl interpreter has a number of builtin special variables, listed in perlvar. Some of them control the internal behaviour of the interpreter; others are interfaces to the operating system (environment variables, signal handlers, etc.). These variables have reasonable default values that satisfy most common needs, but they can be changed whenever needed. If the change is for the whole program, a regular assignment is good enough; but for a temporary change in a specific context, local is the perfect tool for the job.
Internal variables of the interpreter
The examples below are typical of idiomatic Perl programming: altering global variables so that builtin functions behave differently from the default.
# input record separator ($/)
my @lines = <STDIN>; # regular mode, separating by newlines
my @paragraphs = do {local $/ = ""; <STDIN>}; # paragraph mode, separating by 2 or more newlines
my $whole_content = do {local $/; <STDIN>}; # slurp mode, no separation
# output field and output record separators
sub write_csv_file ($rows, $filename) {
open my $fh, ">:unix", $filename or die "cannot write into $filename: $!";
local $, = ","; # output field separator -- inserted between columns
local $\ = "\r\n"; # output row separator -- inserted between rows
print $fh @$_ foreach @$rows; # assuming that each member of @$rows is an arrayref
}
# list separator in interpolated strings
my @perl_files = <*.pl>; # globbing perl files in the current directory
my @python_files = <*.py>; # idem for python files
{ local $" = ", "; # lists will be comma-separated when interpolated in a string
say "I found these perl files: @perl_files and these python files: @python_files";
}
Interface to the operating system
We have already seen two examples involving the %ENV hash of environment variables inherited from the operating system. Likewise, it is possible to tweak the @ARGV array before parsing the command-line arguments. Another interesting variable is the %SIG hash of signal handlers, as documented in perlipc:
{ local $SIG{HUP} = "IGNORE"; # don't want to be disturbed for a while
do_some_tricky_computation();
}
Predefined handles like STDIN, STDOUT and STDERR can also be localized:
say "printing to regular STDOUT";
{ local *STDOUT;
open STDOUT, ">", "captured_stdout.txt" or die $!;
do_some_verbose_computation();
}
say "back to regular STDOUT";
Localizing package members
Package global variables
Global variables are declared with the our keyword. The difference with lexical variables (declared with my) is that such global variables are accessible, not only from within the package, but also from the outside, if prefixed by the package name. So if package Foo::Bar declares our ($x, @a, %h), these variables are accessible from anywhere in the Perl program under $Foo::Bar::x, @Foo::Bar::a or %Foo::Bar::h.
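As a small illustration (Foo::Bar being just the example package name from above), a global declared with our in one package can be read from anywhere under its fully qualified name:

package Foo::Bar;
our ($x, @a, %h);     # package globals, also visible from outside this package
$x = 42;

package main;
use feature 'say';
say $Foo::Bar::x;     # 42 -- fully qualified access from another package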
Many CPAN modules use such global variables to expose their public API. For example, the venerable Data::Dumper chooses among various styles for dumping a data structure, depending on the $Data::Dumper::Indent variable. The default (style 2) is optimized for readability, but sometimes the compact style 0 is more appropriate:
print Dumper($data_tree);
print do {local $Data::Dumper::Indent=0; Dumper($other_tree)};
Carp or URI are other examples of well-known modules where global variables are used as a configuration API.
Modules with this kind of architecture are often pretty old; more recent modules tend to prefer an object-oriented style, where the configuration options are given as options to the new() method instead of using global variables. Of course the object-oriented architecture offers better encapsulation, since a large program can work with several instances of the same module, each instance having its own configuration options, without interference between them. This is not to say, however, that object-oriented configuration is always the best solution: when it comes to tracing, debugging or profiling needs, it is often very convenient to be able to tune a global knob and have its effect applied to the whole program: in those situations, what you want is just the opposite of strict encapsulation! Therefore some modules, although written in a modern style, still made the wise choice of leaving some options expressed as global variables; changing these options has a global effect, but thanks to local this effect can be limited to a specific scope. Examples can be found in Type::Tiny or in List::Util.
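For instance, here is a minimal sketch using the well-known $Carp::Verbose configuration variable (the subroutine called inside the block is just a hypothetical placeholder): a tracing knob turned on for one region of code only.

use Carp;

{
  local $Carp::Verbose = 1;        # carp/croak produce full stack traces inside this block only
  some_code_that_might_croak();    # hypothetical subroutine standing in for real work
}
# here $Carp::Verbose is back to its previous value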
Subroutines (aka monkey-patching)
Every package has a symbol table that contains not only its global variables, but also the subroutines (or methods) declared in that package. Since the symbol table is writeable, it is possible to overwrite any subroutine, thereby changing the behaviour of the package - an operation called monkey-patching. Of course monkey-patching could easily create chaos, so it should be used with care - but in some circumstances it is extremely powerful and practical. In particular, testing frameworks often use monkey-patching for mocking interactions with the outside world, so that the internal behaviour of a software component can be tested in isolation.
The following example is not very realistic, but it's the best I could come up with to convey the idea in just a few lines of code. Consider a big application equipped with a logger object. Here we will use a logger from Log::Dispatch:
use Log::Dispatch;
my $log = Log::Dispatch->new(
outputs => [['Screen', min_level => 'info', newline => 1]],
);
The logger has methods debug(), info(), error(), etc. for accepting messages at different levels. Here it is configured to only log messages starting from level info; so when the client code calls info(), the message is printed, while calls to debug() are ignored. As a result, when the following routine is called, we normally only see the messages "start working ..." and "end working ...":
sub work ($phase) {
$log->info("start working on $phase");
$log->debug("weird condition while doing $phase"); # normally not seen - level below 'info'
$log->info("end working on $phase\n");
}
Now suppose that we don't want to change the log level for the whole application, but nevertheless we need to see the debug messages at a given point of execution. One (dirty!) way of achieving this is to temporarily treat calls to debug() as if they were calls to info(). So a scenario like
work("initial setup");
{ local *Log::Dispatch::debug = *Log::Dispatch::info; # temporarily alias the 'debug' method to 'info'
work("core stuff")}
work("cleanup");
logs the following sequence:
start working on initial setup
end working on initial setup
start working on core stuff
weird condition while doing core stuff
end working on core stuff
start working on cleanup
end working on cleanup
Monkey-patching techniques are not specific to Perl; they are used in all dynamic languages (Python, JavaScript, etc.), not for regular programming needs, but rather for testing or profiling tasks. However, since other dynamic languages do not have the local mechanism, temporary changes to the symbol table must be programmed by hand, by storing the initial code reference in a temporary variable, and restoring it when exiting from the monkey-patched scope. This is a bit more work and is more error-prone. Often there are library modules for making the job easier, though: see for example https://docs.pytest.org/en/7.1.x/how-to/monkeypatch.html in Python.
Monkey-patching in a statically-typed language like Java is more acrobatic, as shown in Nicolas Fränkel's blog.
Localizing elements of arrays or hashes
The value of an array at a specific index, or the value of a hash for a specific key, can be localized too. We have already seen some examples with the builtin hashes %ENV or %SIG, but it works as well on user data, even when the data structure is several levels deep. A typical use case for this is when a Web application loads a JSON, YAML or XML config file at startup. The config data becomes a nested tree in memory; if for any reason that config data must be changed at some point, local can override a specific data leaf, or any intermediate subtree, like this:
local $app->config->{logger_file} = '/opt/log/special.log';
local $app->config->{session} = {storage => '/opt/data/app_session',
expires => 42900,
};
Another example can be seen in my Data::Domain module. The job of that module is to walk through a datatree and check if it meets the conditions expected by a "domain". The inspect() method that does the checking sometimes needs to know at which node it is currently located; so a $context tree is worked upon during the traversal and passed to every method call. With the help of local, temporary changes to the shape of $context are very easy to implement:
for (my $i = 0; $i < $n_items; $i++) {
local $context->{path} = [@{$context->{path}}, $i];
...
An alternative could have been to just push $i on top of the @{$context->{path}} array, and then pop it back at the end of the block. But since this code may call subclasses written by other people, with little guarantee on how they would behave with respect to $context->{path}, it is safer to localize it and be sure that $context->{path} is in a clean state when starting the next iteration of the loop. Interested readers can skim through the source code to see the full story. A similar technique can also be observed in Catalyst::Dispatcher.
A final example is the well-known DBI module, whose rich API exploits several Perl mechanisms simultaneously. DBI is principally object-oriented, except that the objects are called "handles"; but in addition to methods, handles also have "attributes" accessible as hash members. The DBI documentation explicitly recommends to use local for temporary modifications to the values of attributes, for example for the RaiseError attribute. This is interesting because it shows a dichotomy between API styles: if DBI had a purely object-oriented style, with usual getter and setter methods, it would be impossible to use the benefits of local - temporary changes to an attribute followed by a revert to the previous value would have to be programmed by hand.
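A minimal sketch of that recommendation (assuming an already connected $dbh; the table name is arbitrary):

{
  local $dbh->{RaiseError} = 0;          # handle attributes are hash elements, so local applies
  local $dbh->{PrintError} = 0;
  $dbh->do("DROP TABLE temp_results");   # a failure here is silently tolerated
}
# outside the block, both attributes revert to their previous values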
How other languages handle temporary changes to global state
As argued earlier, the need for temporary changes to global state occurs in every programming language, in particular for testing, tracing or profiling tasks. When dynamic scoping is not present, the most common solution is to write specific code for storing the old state in a temporary variable, implement the change for the duration of a given computation, and then restore the old state. A common best practice for such situations is to use a try ... finally ... construct, where restoration to the old state is implemented in the finally clause: this guarantees that even if exceptions occur, the code exits with a clean state. Most languages do possess such constructs - this is definitely the case for Java, JavaScript and Python.
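To make the comparison concrete, here is roughly what that manual pattern looks like when transposed back into Perl without local (do_time_sensitive_work() is a hypothetical placeholder):

my $saved = $ENV{TZ};                 # remember the old state
$ENV{TZ}  = "Asia/Tokyo";             # change it
eval { do_time_sensitive_work() };    # eval plays the role of try
my $error = $@;
defined $saved ? ($ENV{TZ} = $saved) : delete $ENV{TZ};   # the "finally" part: restore even on error
die $error if $error;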
Python context managers
Python has a mechanism more specifically targeted at temporary changes of context: the context manager, triggered through a with statement. A context manager implements special methods __enter__() and __exit__() that can be programmed to apply and revert changes to the global context. This technique offers more precision than Perl's local construct, since the enter and exit methods are free to implement any kind of computation; however it is less general, because each context manager is specialized for a specific task. Python's contextlib library provides a collection of context managers for common needs.
Wrapping up
Perl's local mechanism is often misunderstood. It is frowned upon because it breaks encapsulation - and this criticism is perfectly legitimate as far as regular programming tasks are concerned; but on the other hand, it is a powerful and clean mechanism for temporary changes to the global execution state. It solves real problems with surprising grace, and it’s one of the features that makes Perl uniquely expressive among modern languages. So do not follow the common advice to avoid local at all costs, but learn to identify the situations where local will be a helpful tool to you!
Beyond the mere mechanism, the question is more at the level of philosophy of programming: to which extent should we enforce strict encapsulation of components? In an ideal world, each component has a well-defined interface, and interactions are only allowed to go through the official interfaces. But in a big assembly of components, we may encounter situations that were not foreseen by the designers of the individual components, and require inserting some additional screws, or drilling some additional holes, so that the global assembly works satisfactorily. This is where Perl's local is a beautiful device.
About the cover picture
The cover picture1 represents a violin with a modified global state: the two intermediate strings are crossed! This is the most spectacular example of scordatura in Heinrich Biber's Rosary sonatas (sometimes also called "Mystery sonatas"). Each sonata in the collection requires the violin to be tuned in a specific way, different from the standard tuning in fifths, resulting in very different atmospheres. It requires some intellectual gymnastics from the player, because the notes written in the score no longer represent the actual sounds heard, but merely refer to the locations of fingers on a normal violin; in other words, this operation is like monkey-patching a violin!
[MERGE] assorted LOGOP peephole stuff
Assorted commits which fix a performance regression and do refactoring
of the handling of LOGOPs in the peephole optimiser. In particular,
@res = $c ? () : @f
had a very slight regression: it should have got faster recently, but
actually got slightly slower. It's now fast again.
op_dump(): display OTHER on newish LOGOPs
Some recently-added OPs of class LOGOP weren't having their op_other field displayed by op_dump().
rpeep(): remove spurious flag setting on LOGOPs
the main loop in rpeep() does
o->op_opt = 1
on each op just before it processes it, to indicate that the op has been
processed and peephole-optimised.
For some reason, just the OP_AND/OP_OR/etc ops set it again.
This appears superfluous, and changing it to assert(o->op_opt)
didn't trigger any test suite failures, so this commit removes it
altogether.
rpeep(): refactor: simplify deleting bare if blks
There's some code in rpeep() structured a bit like:
case OP_COND_EXPR:
...
if (the true branch is a bare stub op) {
delete the branch;
}
elsif (the true branch is a bare enter/scope and stub branch) {
delete the branch;
}
This commit changes/simplifies that code to:
if ( the true branch is a bare stub op
|| the true branch is a bare enter/scope and stub branch)
{
delete the branch;
}
rpeep(): assert that some LOGOPs are done
In general, rpeep() needs to follow the op_other pointer of LOGOPs to optimise the 'other' branch of logical ops. However, some ops don't need doing, because their op_other doesn't point to general user code, but to a fixed op which will have already been processed. For example, OP_SUBSTCONT->op_other always points to its parent OP_SUBST. Previously these ops were missing from the rpeep() main loop as nothing needed doing for them. This commit adds them, both for completeness, and also adds asserts that their op_other points to something which doesn't need to be processed. This commit serves two purposes:
- it makes it clear that the op hasn't just been forgotten (I had to examine each missing LOGOP and try to work out why it wasn't in rpeep() - future people hopefully won't have to repeat this exercise);
- if any of the asserts fail, it is an indication that my understanding of that op wasn't complete, and that there may in fact be cases where the op_other *should* be followed.
This week, I challenged myself to write small but meaningful Perl programs for Weekly Challenge — focusing on clean logic, edge cases, and test-driven validation.
Sometimes, the simplest problems teach the strongest lessons.
Challenge 1: Text Justifier (Centering Text with Custom Padding)
Problem Statement
You are given a string and a width. Write a script to return a string that centers the text within that width, using asterisks * as padding.
If the string length is greater than or equal to the width, return the string as-is.
Example
Input:
("Hi", 5)
Output:
*Hi**
Implementation Highlights
- Calculated total padding required.
- Split padding evenly left and right.
- Handled odd padding differences correctly.
- Covered edge cases like:
- Empty string
- Width equal to string length
- Width smaller than string length
Key Logic
my $pad_total = $width - $len;
my $left_pad = int($pad_total / 2);
my $right_pad = $pad_total - $left_pad;
This ensures perfect centering even when padding is odd.
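Putting the pieces together, a minimal version of the whole routine might look like this (center_text is just an illustrative name, not necessarily the one used in my submission):

sub center_text {
    my ($str, $width) = @_;
    my $len = length $str;
    return $str if $len >= $width;          # string returned as-is when it already fills the width
    my $pad_total = $width - $len;
    my $left_pad  = int($pad_total / 2);
    my $right_pad = $pad_total - $left_pad;
    return ('*' x $left_pad) . $str . ('*' x $right_pad);
}

print center_text("Hi", 5), "\n";           # *Hi**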
What I Learned
- Importance of integer division in layout formatting
- Handling edge cases makes logic robust
- Writing test cases increases confidence immediately (see the sketch below)
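A few Test::More checks along those lines (a sketch, reusing the center_text() name from above) lock the edge cases in:

use Test::More tests => 4;

is center_text("Hi", 5),     "*Hi**",  "odd padding split correctly";
is center_text("", 4),       "****",   "empty string becomes all padding";
is center_text("Perl", 4),   "Perl",   "width equal to length returns string as-is";
is center_text("Longer", 3), "Longer", "width smaller than length returns string as-is";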
Challenge 2: Word Sorter (Alphabetical Word Sorting)
Problem Statement
Write a script to order words in the given sentence alphabetically but keep the words themselves unchanged.
Example
Input:
"I have 2 apples and 3 bananas!"
Output:
2 3 and apples bananas! have I
Implementation Highlights
- Split words using whitespace regex
- Used Perl’s built-in sort
- Rejoined words with single spaces
- Managed multiple spaces correctly
Core Logic
my @words = split /\s+/, $str;
return join ' ', sort { lc($a) cmp lc($b) } @words;   # case-insensitive comparison, to match the example output
Simple. Elegant. Effective.
What I Learned
- Perl’s sort is powerful and concise
- Regex-based splitting handles messy input cleanly
- Even small scripts benefit from structured testing
I'm working on a little wikilink to Markdown link converter with Perl. This is the conversion code:
foreach my $idx (0 .. $args) {
if (-e -f -r -w $ARGV[$idx]) {
my $file_name = $ARGV[$idx];
open(my $file_reader, "<", $file_name) or die "Failed to open file reader: $!";
my $new_file_content = "";
while (my $line = <$file_reader>) {
if ($line =~ /(\[\[.+?\]\])/) {
my $extracted_text = substr($1, 2, -2);
my $new_line = $line =~ s/\[\[.+?\]\]/\[$extracted_text\]\(\.\/$extracted_text$ext\)/gr;
$new_file_content = $new_file_content . $new_line;
} else {
$new_file_content = $new_file_content . $line;
}
}
close($file_reader) or die "Failed to close file reader: $!";
# This is where it writes to the file
if ($overwrite) {
print("Overwriting $file_name...\n\n");
open(my $file_writer, ">", $file_name) or die "Failed to open file writer: $!";
print($file_writer, $new_file_content);
close($file_writer) or die "Failed to close file writer: $!";
} else {
print("$new_file_content");
}
}
}
The link conversion itself is fine, but when writing to the file, it always ends up empty. I'm not sure as to what could be causing this.
-
Data::ObjectDriver - Simple, transparent data interface, with caching
- Version: 0.27 on 2026-02-13, with 16 votes
- Previous CPAN version: 0.26 was 3 months, 27 days before
- Author: SIXAPART
-
DateTime::Format::Natural - Parse informal natural language date/time strings
- Version: 1.25 on 2026-02-13, with 19 votes
- Previous CPAN version: 1.24_01 was 1 day before
- Author: SCHUBIGER
-
Devel::Size - Perl extension for finding the memory usage of Perl variables
- Version: 0.86 on 2026-02-10, with 22 votes
- Previous CPAN version: 0.86 was 1 day before
- Author: NWCLARK
-
Marlin - 🐟 pretty fast class builder with most Moo/Moose features 🐟
- Version: 0.023001 on 2026-02-14, with 12 votes
- Previous CPAN version: 0.023000 was 7 days before
- Author: TOBYINK
-
MIME::Lite - low-calorie MIME generator
- Version: 3.037 on 2026-02-11, with 35 votes
- Previous CPAN version: 3.036 was 1 day before
- Author: RJBS
-
MIME::Body - Tools to manipulate MIME messages
- Version: 5.517 on 2026-02-11, with 15 votes
- Previous CPAN version: 5.516
- Author: DSKOLL
-
Net::BitTorrent - Pure Perl BitTorrent Client
- Version: v2.0.0 on 2026-02-13, with 13 votes
- Previous CPAN version: 0.052 was 15 years, 10 months before
- Author: SANKO
-
Net::Server - Extensible Perl internet server
- Version: 2.017 on 2026-02-09, with 34 votes
- Previous CPAN version: 2.016 was 12 days before
- Author: BBB
-
Protocol::HTTP2 - HTTP/2 protocol implementation (RFC 7540)
- Version: 1.12 on 2026-02-14, with 27 votes
- Previous CPAN version: 1.11 was 1 year, 8 months, 25 days before
- Author: CRUX
-
SPVM - The SPVM Language
- Version: 0.990130 on 2026-02-13, with 36 votes
- Previous CPAN version: 0.990129 was 1 day before
- Author: KIMOTO
Which is the best way to deploy a Perl Mojolicious web app to a containerized production environment like Docker or Kubernetes:
# one process per pod
/script/my_app daemon
or starting a more powerful setup with multiple workers under Hypnotoad:
# myapp.conf
{
hypnotoad => {
listen => ['https://*:443?cert=/etc/server.crt&key=/etc/server.key'],
workers => 5
}
};
hypnotoad ./script/my_app

We are pleased to announce the dates of our next Perl and Raku Conference, to be held in Greenville, SC on June 26-28, 2026. The venue is the same as last year, but we are expanding the conference to 3 days of talks/presentations across the weekend. One or more classes will be scheduled for Monday the 29th as well. The hackathon will be running continuously from June 25 through June 29—so if you can come early or stay late, there will be opportunities for involvement with other members of the community.
Mark your calendars and save the dates!
Our website, https://www.tprc.us/ has more details including links to reserve your hotel room and a link to register for the conference at the early bird price. Watch for more updates as more plans are finalized.
Our theme for 2026 is “Perl is my cast iron pan”. Perl is reliable, versatile, durable, and continues to be ever so useful! Just like your favorite cast iron pan! Raku might map to tempered steel: also quite reliable and useful, and with some very attractive updates!
We hope to see you in June!
My presentation will look at coding agents from Anthropic and z.ai with the following questions:
How (well) can coding agents support Perl code?
What differences are there between the models?
How can I get agents to write good code?
Hope to see you there:
This is my first post in this Blog...
I will write about Perl
All three of us attended.
We spent a long time debating how to proceed with Karl’s proposal to restrict legal identifier names for security. No consensus emerged, and we resolved to ask the question in a venue with wider reach than p5p.
We are faced with an absent maintainer of a CPAN-upstream dual-life module, namely Encode. We discussed this situation in general terms but did not get beyond the question itself – partly due to time constraints. We agreed that this seems likely to occur more often in the future and that p5p will need an agreed way of dealing with it, but what that should be is too big a topic to be contained within the PSC meeting format.
There are multiple PRs currently in flight, some of them rather large, including Paul’s magic v2 and attributes v2. We decided that we are too close to contentious changes freeze in this cycle to merge those.
We're happy to have Abigail present at the German Perl Workshop 2026!
Sharding a database, twice is aimed at everyone and will be held in English.
There comes a time in the lifetime of a database when it takes too many resources (be it disk space, number of I/O transactions, or something else) to be handled by a single box.
Sharding, where data is distributed over several identically shaped databases, is one technique to solve this.
For a high volume database I used to work with, we hit this limit about a dozen
years ago. Then we hit the limit again two years ago.
In this talk, we will first discuss how we initially switched our systems to make use of a sharded database, without any significant downtime.
PAGI 0.001017 is on CPAN. The headline feature is HTTP/2 support in PAGI::Server, built on the nghttp2 C library and validated against h2spec's conformance suite. HTTP/2 is marked experimental in this release -- the protocol works, the compliance numbers are solid, and we want production feedback before dropping that label.
This post focuses on why h2c (cleartext HTTP/2) between your reverse proxy and backend matters, and how PAGI::Server implements it.
The Quick Version
# Install the HTTP/2 dependency
cpanm Net::HTTP2::nghttp2
# Run with HTTP/2 over TLS
pagi-server --http2 --ssl-cert cert.pem --ssl-key key.pem app.pl
# Run with cleartext HTTP/2 (h2c) -- behind a proxy
pagi-server --http2 app.pl
Your app code doesn't change. HTTP/2 is a transport concern handled entirely by the server. The same async handlers that serve HTTP/1.1 serve HTTP/2 without modification.
"I Have nginx in Front, Why Do I Care?"
If nginx terminates TLS and speaks HTTP/2 to clients, why does the backend need HTTP/2 too?
The answer is h2c -- cleartext HTTP/2 between your proxy and your application server. No TLS overhead, but all of HTTP/2's protocol benefits on the internal hop: stream multiplexing over a single TCP connection, HPACK header compression (especially effective for repetitive internal headers like auth tokens and tracing IDs), and per-stream flow control so a slow response on one stream doesn't block others.
The practical wins: fewer TCP connections between proxy and backend (one multiplexed h2c connection replaces a pool of HTTP/1.1 connections), less file descriptor and kernel memory pressure, and no TIME_WAIT churn from connection recycling.
Where h2c Matters
gRPC requires HTTP/2 -- it doesn't work over HTTP/1.1 at all. If you're building gRPC services, h2c is mandatory.
API gateway fan-out is where multiplexing shines. When your gateway fans out to 10 backend services per request, h2c means 1-2 connections per backend instead of a pool of 50-100.
Service mesh environments (Envoy/Istio sidecars) default to HTTP/2 between services. A backend that speaks h2c natively means one less protocol translation.
A Note on Proxies
Not all proxies handle h2c equally:
- Envoy has the best h2c upstream support with full multiplexing
- Caddy makes it trivial: reverse_proxy h2c://localhost:8080
- nginx supports h2c via grpc_pass for gRPC workloads, but its generic proxy_pass doesn't support proxy_http_version 2.0
For full multiplexing to backends, Envoy or Caddy are better choices than nginx today.
HTTP/2 Over TLS -- No Proxy Required
h2c isn't the only mode. PAGI::Server also does full HTTP/2 over TLS with ALPN negotiation:
pagi-server --http2 --ssl-cert cert.pem --ssl-key key.pem app.pl
This is useful when you don't want the overhead or complexity of a reverse proxy -- internal tools, admin dashboards, development servers, or any app where the traffic doesn't justify a separate proxy layer. Browsers get HTTP/2 directly, with TLS, no nginx required.
What PAGI::Server Does
Dual-Mode Protocol Detection
With TLS, PAGI::Server uses ALPN negotiation during the handshake -- advertising h2 and http/1.1, letting the client choose. The protocol is decided before the first byte of application data.
Without TLS (h2c mode), PAGI::Server inspects the first 24 bytes of each connection for the HTTP/2 client connection preface. If it matches, the connection upgrades to HTTP/2. If not, it falls through to HTTP/1.1. Both protocols coexist on the same port, same worker -- no configuration needed beyond --http2.
Either way, HTTP/1.1 clients are still served normally. The server handles both protocols on the same port.
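For the curious, the 24-byte marker mentioned above is the fixed client connection preface defined by RFC 7540. The detection idea boils down to something like this sketch (an illustration only, not PAGI::Server's actual code):

use constant HTTP2_PREFACE => "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";   # exactly 24 bytes

sub looks_like_h2c {
    my ($first_bytes) = @_;
    return length($first_bytes) >= 24
        && substr($first_bytes, 0, 24) eq HTTP2_PREFACE;
}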
WebSocket over HTTP/2 (RFC 8441)
Most HTTP/2 implementations skip this. PAGI::Server supports the Extended CONNECT protocol from RFC 8441, which tunnels WebSocket connections over HTTP/2 streams. Multiple WebSocket connections multiplex over a single TCP connection instead of requiring one TCP connection each.
Compliance
Built on nghttp2 (the same C library behind curl, Firefox, and Apache's mod_http2). PAGI::Server passes 137 of 146 h2spec conformance tests (93.8%). The 9 remaining failures are in nghttp2 itself and shared with every server that uses it. Load tested with h2load at 60,000 requests across 50 concurrent connections with no data loss or protocol violations.
Full test-by-test results are published: HTTP/2 Compliance Results.
Multi-Worker and Tunable
HTTP/2 works in multi-worker prefork mode. Each worker independently handles HTTP/2 sessions:
pagi-server --http2 --workers 4 app.pl
Protocol settings are exposed for environments that need fine-tuning:
my $server = PAGI::Server->new(
app => $app,
http2 => 1,
h2_max_concurrent_streams => 50, # default: 100
h2_initial_window_size => 131072, # default: 65535
h2_max_frame_size => 32768, # default: 16384
h2_max_header_list_size => 32768, # default: 65536
);
Most deployments won't need to touch these. The defaults follow the RFC recommendations.
Context in the Perl Ecosystem
Perl has had HTTP/2 libraries on CPAN (Protocol::HTTP2, Net::HTTP2), but application servers haven't integrated them with validated compliance testing. PAGI::Server is the first to publish h2spec results and ship h2c with automatic protocol detection alongside HTTP/1.1. If you're currently running Starman, Twiggy, or Hypnotoad, none of them offer HTTP/2.
What Else Is in 0.001017
The rest of the release is operational improvements:
- Worker heartbeat monitoring -- parent process detects workers with blocked event loops and replaces them via SIGKILL + respawn. Default 50s timeout. Only triggers on true event loop starvation; async handlers using await are unaffected.
- Custom access log format -- format strings with atoms like %a (address), %s (status), %D (duration).
- TLS performance fix -- shared SSL context via SSL_reuse_ctx eliminates per-connection CA bundle parsing. 26x throughput improvement at 8+ concurrent TLS connections.
- SSE wire format fix -- now handles CRLF, LF, and bare CR line endings per the SSE specification.
- Multi-worker fixes -- shutdown escalation, parameter pass-through, and various stability improvements.
Getting Started
# Install PAGI
cpanm PAGI
# Install HTTP/2 support (optional)
cpanm Net::HTTP2::nghttp2
# Run your app with HTTP/2
pagi-server --http2 app.pl
Links
- PAGI 0.001017 on CPAN
- HTTP/2 Compliance Results
- PAGI on GitHub
- h2spec -- HTTP/2 conformance testing tool
It's been a while since I commented on a Weekly Challenge solution, but here we are at week 360. Such a useful number. So divisible, so circular. It deserves twenty minutes.
Task 2: Word Sorter
The task
You are given a sentence. Write a script to order words in the given
sentence alphabetically but keep the words themselves unchanged.
# Example 1
# Input: $str = "The quick brown fox"
# Output: "brown fox quick The"

# Example 2
# Input: $str = "Hello World! How are you?"
# Output: "are Hello How World! you?"

# Example 3
# Input: $str = "Hello"
# Output: "Hello"

# Example 4
# Input: $str = "Hello, World! How are you?"
# Output: "are Hello, How World! you?"

# Example 5
# Input: $str = "I have 2 apples and 3 bananas!"
# Output: "2 3 and apples bananas! have I"
The thoughts
This should be quite simple: split the words, sort them, put them back together. The sort should be case-insensitive.
join " ", sort { lc($a) cmp lc($b) } split(" ", $str);
Creeping doubt #1
Is converting to lowercase with lc the right way to do case-insensitive compares? Not really. Perl has the fc -- fold-case -- function to take care of subtleties in Unicode. We won't see those in simple ASCII text, but for the full rabbit hole, start with the documentation of fc.
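A tiny sketch of the difference (assuming a UTF-8 source file):

use v5.16;   # the :5.16 feature bundle enables both say and fc
use utf8;    # the ß literals below are UTF-8 in the source code

say "equal with fc" if fc("STRASSE") eq fc("straße");   # prints "equal with fc"
say "equal with lc" if lc("STRASSE") eq lc("straße");   # prints nothing: lc leaves ß alone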
Creeping doubt #2
Doing the case conversion inside the sort means that we will invoke that every time there's a string comparison, which will be quite redundant. We could (probably?) speed it up by pre-calculating the conversions once.
The solution
sub sorter($str)
{
return join " ",
map { $_->[0] }
sort { $a->[1] cmp $b->[1] }
map { [$_, fc($_)] }
split(" ", $str);
}
This solution uses the Schwartzian transform idiom. Every word gets turned into a pair of [original_word, case_folded_word]. That list of pairs gets sorted, and then we select the original words out of the sorted pairs. This is best read bottom-up.
- split(" ", $str) -- turns the string into an array of words, where words are loosely defined by being white-space separated.
- map { [$_, fc($_)] } -- every word turns into a pair: the original, and its case-folded variation. The result is a list of array references.
- sort { $a->[1] cmp $b->[1] } -- sort by the case-folded versions. The result is still a list of array references.
- map { $_->[0] } -- select the original word from each pair.
- join " " -- return a single string, where the words are separated by one space.
Does it blend?
A quick benchmark shows that it is indeed faster to pre-calculate the case folding. This example used a text string of about 60 words.
Rate oneline pre_lc
oneline 32258/s -- -45%
pre_lc 58140/s 80% --
My intuition says that when the strings are much shorter, the overhead of the transform might not offset the gains in the sort, but as is so often true, my intuition is crap. This is the result for a string of five words:
oneline 470588/s -- -59%
pre_lc 1142857/s 143% --
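For reference, the comparison can be reproduced with a Benchmark sketch along these lines ($str holding the test sentence, and sorter() being the Schwartzian version shown earlier):

use Benchmark qw(cmpthese);

cmpthese(-2, {
    oneline => sub { join " ", sort { lc($a) cmp lc($b) } split(" ", $str) },
    pre_lc  => sub { sorter($str) },
});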
… or maybe some more ;)
- 00:00 Introduction
- 01:30 OSDC Perl, mention last week
- 03:00 Nikolai Shaplov NATARAJ, one of our guests, author of Lingua-StarDict-Writer on GitLab.
- 04:30 Nikolai explaining his goals about the security memory leak in Net::SSLeay
- 05:58 What we did earlier. (Low hanging fruits.)
- 07:00 Let's take a look at the repository of Net::SSLeay
- 08:00 Try to understand what happens in the repository
- 09:15 A bit of explanation about adopting a module. (co-maintainer, unauthorized uploads)
- 11:00 PAUSE
- 15:30 Check the "river" status of the distribution. (reverse dependency)
- 17:20 You can CC me in your correspondence.
- 18:45 Ask people to review your pull requests.
- 21:30 Mention the issue with DBIx::Class and how to take over a module.
- 23:50 A bit about the OSDC Perl page.
- 24:55 CPAN Dashboard and how to add yourself to it.
- 27:40 Show the issues I opened asking authors if they are interested in setting up GitHub Actions.
- 29:25 Start working on Dancer-Template-Mason
- 30:00 Clone it
- 31:15 perl-tester Docker image.
- 33:30 Installing the dependencies in the Docker container
- 34:40 Create the GitHub Workflow file. Add to git. Push it out to GitHub.
- 40:55 First failure in the CI, which is unclear.
- 42:30 Verifying the problem locally.
- 43:10 Open an issue.
- 58:25 Can you talk about dzil and Dist::Zilla?
- 1:02:25 We get back to working in the CI.
- 1:03:25 Add --notest to make installations run faster.
- 1:05:30 Add the git configuration to the CI workflow.
- 1:06:32 Is it safe to use --notest when installing dependencies?
- 1:11:05 git rebase, squashing the commits into one commit
- 1:13:35 git push --force
- 1:14:10 Send the pull request.
I've published version 0.28 of App::Test::Generator, the black-box test case generator. I focused on tightening SchemaExtractor’s handling of accessor methods and making the generated schemas more honest and testable. I fixed cases where getter/setter and combined getset routines were being missed, added targeted tests to lock in correct detection of getset accessors, and clarified output typing so weak scalar inference no longer masquerades as a real type. I added explicit 'isa' coverage, ensuring that object expectations are captured and that generated tests correctly fail when passed the wrong object type.
I would like to use a Perl one-liner to modify numeric values in a text file. My data are stored in a text file:
0, 0, (1.263566e+02, -5.062154e+02)
0, 1, (1.069488e+02, -1.636887e+02)
0, 2, (-2.281294e-01, -7.787449e-01)
0, 3, (5.492424e+00, -4.145492e+01)
0, 4, (-7.961223e-01, 2.740912e+01)
These are complex numbers with their respective i and j coordinates: i, j, (real, imag). I would like to modify the coordinates, to shift them from zero-based to one-based indexing. In other words I would like to add one to each i and each j. I can correctly capture the i and j, but I'm struggling to treat them as numbers not as strings. This is the one-liner I'm using:
perl -p -i.bak -w -e 's/^(\d+), (\d+)/$1+1, $2+1/' complex.txt
How do I tell Perl to treat $1 and $2 as numbers?
My expected output would be:
1, 1, (1.263566e+02, -5.062154e+02)
1, 2, (1.069488e+02, -1.636887e+02)
1, 3, (-2.281294e-01, -7.787449e-01)
1, 4, (5.492424e+00, -4.145492e+01)
1, 5, (-7.961223e-01, 2.740912e+01)
Answer
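A sketch of one common approach: the /e modifier makes Perl evaluate the replacement side as an expression, so $1 and $2 are used as numbers rather than strings:

perl -p -i.bak -w -e 's/^(\d+), (\d+)/($1+1) . ", " . ($2+1)/e' complex.txt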
I use xscreensaver, and to forbid user switching there:
! in .Xresources
xscreensaver.splash: false
! Set to nothing makes user switching not possible
*.newLoginCommand:
Lightdm supports .d directories; by default they aren't created on Debian, but upstream documents them clearly. In other words: /etc/lightdm/lightdm.conf.d/ is read.
Which means you need to drop a file,
/etc/lightdm/lightdm.conf.d/10-local-overrides.conf and add the content:
[Seat:*]
allow-user-switching=false
allow-guest=false
To check your configuration:
lightdm --show-config
In Perl under Unix (HP-UX and Linux) I start instances of a program and redirect their outputs to files. I fork without problem, close the old STDOUT and STDERR, open a file to use as STDOUT, then assign STDOUT to STDERR. In the Perl child instance, if I write:
print "This goes to STDOUT\n";
print STDERR "This goes to STDERR\n";
both lines go in the file redirection target. I use exec to start instances of another program whose output will be redirected to that file.
A test script in Bash periodically produces output to the standard output, sleeps, produces output to the error output, sleeps, and loops infinitely. When I call it using the command line I see all output. If I redirect the output on the command line to different files (one for STDOUT, one for STDERR) the outputs get written to the different files.
When I exec my Perl script into that test script, after redirecting the outputs in Perl, I only see the lines the script produces to STDOUT, not the lines to STDERR. *STDERR = *STDOUT works for Perl but not for any program it becomes after exec. How do I solve this?
Perl code:
#!/bin/perl
$plc = 666;
$pid = fork;
if ($pid == -1) {
die "ERREUR: Le fork a échoué !";
}
if ($pid) {
# We are in the PARENT process. $pid is the PID of the child that was created.
print "Fork réussi: PID de l'enfant: $pid\n";
exit 0;
} else {
print "Gaga ? ($plc)\n";
close *STDOUT; # Close the file descriptor corresponding to standard output.
open *STDOUT, '>', 'stdout.txt' or die "Gaga n'a pu ouvrir le fichier stdout.txt en écriture !"; # Reopen it on the specified file
close *STDERR; # Close the error output
*STDERR = *STDOUT; # And assign it the same file as the standard output.
print "Gaga sur stdout.\n";
print STDERR "Gaga sur stderr.\n";
exec {'zombieWriter'} ('zombieWriter' ,$plc);
}
zombieWriter script:
#!/bin/bash
while [[ 1 ]];
do
date
echo "Je suis un zombie ($1) et c'est ma joie !"
sleep 2
date >&2
echo "Je suis un zombie ($1) qui écrit en erreur..." >&2
sleep 3
done
The output in the stdout.txt file:
Gaga sur stdout.
Gaga sur stderr.
jeu. 05 févr. 2026 12:15:25 CET
Je suis un zombie (666) et c'est ma joie !
jeu. 05 févr. 2026 12:15:30 CET
Je suis un zombie (666) et c'est ma joie !
jeu. 05 févr. 2026 12:15:35 CET
Je suis un zombie (666) et c'est ma joie !
File redirection works in Perl but not for the shell script it execs into. If I do not explicitly close *STDERR before *STDERR = *STDOUT, the lines the zombieWriter script outputs to the error output show on my shell.
How do I get a true redirection before I exec into that script?
I'm trying to write some Perl code wherein a hash reference is defined, along with some functions that can work with that hash reference.
The following code works
package test {
sub run {
my $h1 = {
e => "elder",
h => "hazel",
i => "ivy",
m => "maple",
o => "oregano",
s => "sycamore",
y => "yarrow",
};
test_ref_01 ($h1);
}
sub test_ref_01 {
my ($h1) = $_[0];
print "\n";
print $h1;
print "\n";
print ref ($h1);
print "\n";
print ref (\$h1);
print "\n";
# print $h1{e};
# print "\n";
print $h1->{e};
print "\n";
}
}
but this code doesn't work; it gives an error.
package test {
my $h1 = {
e => "elder",
h => "hazel",
i => "ivy",
m => "maple",
o => "oregano",
s => "sycamore",
y => "yarrow",
};
sub test_ref_01 {
print "\n";
print $h1;
print "\n";
print ref ($h1);
print "\n";
print ref (\$h1);
print "\n";
# print $h1{e};
# print "\n";
print $h1->{e};
print "\n";
}
}
What I'm trying to accomplish is
- define a hash reference ($h1) at the package level
- each of the subroutines defined in the package can work with the same hash reference ($h1).
Sydney Perl continues regular meetings with our next in February
Please join us on Tuesday 24th Feb 2026 at Organic Trader Pty Ltd.
Unit 13/809-821 Botany Road Rosebery
6:30pm to 9pm.
Chances are folks will head to a nearby Pub afterward.
I will talk about my 5 years working at Meta Platforms and 6 months at Amazon Inc., specifically contrasting their engineering cultures, and generally discussing what Google calls an SRE culture, contrasting my experiences at Big Tech with "Middle Tech".
Getting there:
Come in the front door marked
"Konnect" then take the first door on the right, and up the stairs to
Level 1.
Mascot station + 20 minute walk or 358 bus to Gardener's Road if you
don't want to walk so far.
Or Waterloo Metro station + 309 bus to Botany Road after Harcourt Parade.
We have a Signal group chat which we use to co-ordinate travel assistance on the day. For example, if you are lost or need a pick up from the station when it's raining etc. Reach out and someone will add you.
Join the email list!
The email list is very low volume and the place to get these updates (I sometimes forget to post them here).
Be sure to add the "from" email sydney-pm@pm.org to a custom filter and your allow-lists (and similar) to maximize the chances that Google/Microsoft/etc. don't discard these updates. A plug for the Australian-native and Perl-ish Fastmail, which is very popular and plays well.
Have you ever been working on someone else's Perl code or perhaps your own from 25 years ago and wondered what the formatting style should be?
I looked around and did not see anything, and since I have had the idea for a decade, I started trying to piece something together. I decided to use perltidy itself, of course. It's not production ready; heck, it may not even be formatted to perltidy's own perltidyrc! However, it's done enough to share the idea and see if there is any other interest out there. Please fork it and hack away; I have also opened an issue with perltidy to share it:
Perl::Tidy::StyleDetector
https://github.com/tur-tle/perltidy
https://github.com/tur-tle/perltidy/blob/detect-format/STYLE_DETECTOR_README.md
-
App::rdapper - a command-line RDAP client.
- Version: 1.23 on 2026-02-02, with 21 votes
- Previous CPAN version: 1.22 was 3 days before
- Author: GBROWN
-
Beam::Wire - Lightweight Dependency Injection Container
- Version: 1.030 on 2026-02-04, with 19 votes
- Previous CPAN version: 1.029 was 1 day before
- Author: PREACTION
-
BerkeleyDB - Perl extension for Berkeley DB version 2, 3, 4, 5 or 6
- Version: 0.67 on 2026-02-01, with 14 votes
- Previous CPAN version: 0.66 was 1 year, 3 months, 18 days before
- Author: PMQS
-
Data::Alias - Comprehensive set of aliasing operations
- Version: 1.29 on 2026-02-02, with 19 votes
- Previous CPAN version: 1.28 was 3 years, 1 month, 12 days before
- Author: XMATH
-
Image::ExifTool - Read and write meta information
- Version: 13.50 on 2026-02-07, with 44 votes
- Previous CPAN version: 13.44 was 1 month, 22 days before
- Author: EXIFTOOL
-
IO::Compress - IO Interface to compressed data files/buffers
- Version: 2.217 on 2026-02-01, with 19 votes
- Previous CPAN version: 2.216 was 1 day before
- Author: PMQS
-
Perl::Tidy - indent and reformat perl scripts
- Version: 20260204 on 2026-02-03, with 147 votes
- Previous CPAN version: 20260109 was 25 days before
- Author: SHANCOCK
-
Sisimai - Mail Analyzing Interface for bounce mails.
- Version: v5.6.0 on 2026-02-02, with 81 votes
- Previous CPAN version: v5.5.0 was 1 month, 28 days before
- Author: AKXLIX
-
SPVM - The SPVM Language
- Version: 0.990127 on 2026-02-04, with 36 votes
- Previous CPAN version: 0.990126 was before
- Author: KIMOTO
-
Term::Choose - Choose items from a list interactively.
- Version: 1.780 on 2026-02-04, with 15 votes
- Previous CPAN version: 1.779 was 2 days before
- Author: KUERBIS
This is the weekly favourites list of CPAN distributions. Votes count: 61
Week's winner: XS::JIT (+5)
Build date: 2026/02/07 20:47:56 GMT
Clicked for first time:
- Ancient - Post-Apocalyptic Perl
- App::CPANTS::Lint - front-end to Module::CPANTS::Analyse
- Claude::Agent - Perl SDK for the Claude Agent SDK
- Dancer2::Plugin::OpenAPI - create OpenAPI documentation of your application
- Meow - Object Orientation
- Net::Z3950::ZOOM - Perl extension for invoking the ZOOM-C API.
Increasing its reputation:
- AnyEvent (+1=168)
- App::ccdiff (+1=3)
- App::Software::License (+1=3)
- Class::XSAccessor (+1=29)
- Class::XSConstructor (+3=8)
- Const::Fast (+1=38)
- CPAN::Digger (+1=4)
- CPAN::Uploader (+1=25)
- CryptX (+1=53)
- DBIx::Class::Async (+1=2)
- Devel::Cover::Report::Coveralls (+1=19)
- Dist::Zilla (+1=188)
- Excel::ValueReader::XLSX (+1=2)
- Excel::ValueWriter::XLSX (+1=3)
- File::HomeDir (+1=35)
- File::Tail (+1=8)
- Hypersonic (+3=3)
- Image::PHash (+1=3)
- IO::Async (+1=80)
- Marlin (+4=11)
- MetaCPAN::Client (+1=26)
- Mojo::Redis (+1=21)
- Mojolicious (+1=510)
- Moos (+1=6)
- MooseX::XSConstructor (+1=3)
- MooX::Singleton (+1=6)
- MooX::XSConstructor (+1=3)
- Net::Daemon (+1=3)
- Net::Libwebsockets (+1=4)
- Net::Server (+1=34)
- ODF::lpOD (+1=4)
- PAGI (+1=7)
- Parallel::ForkManager (+1=102)
- PathTools (+1=84)
- perl (+1=442)
- Plack::Middleware::ProofOfWork (+2=2)
- Regexp::Grammars (+1=39)
- Reply (+1=62)
- Scalar::List::Utils (+1=184)
- Sub::HandlesVia (+1=10)
- Sub::StrictDecl (+1=3)
- Sys::Statistics::Linux (+1=4)
- XS::JIT (+5=5)
Beautiful Perl series
This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.
BLOCK : sequence of statements
Today's topic is the construct named BLOCK in the perl documentation: a sequence of statements enclosed in curly brackets {}. Both the concept and the syntax are common to many programming languages - they can also be found in languages of C heritage like Java, JavaScript and C++, among others; but Perl is different on various aspects - read on to dig into the details.
BLOCKs in Perl may be used:
1. as part of a compound statement, after an initial control flow construct like if, while, foreach, etc.;
2. as the body of a sub declaration (subroutine or function);
3. at any location where a single statement is expected. Obviously it would also be possible to just insert a plain sequence of statements, but enclosing them in a BLOCK has the advantage of creating a new delimited lexical scope, so that the effect of inner declarations is guaranteed to end when control flow exits the BLOCK. Details about lexical scopes are discussed below;
4. as part of a do expression, so that the whole BLOCK becomes a value that can be inserted within a more complex expression. This may be convenient for clarity of thought in some algorithms, and also for avoiding a subroutine call when efficiency is at stake (a small example follows this list).
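As a quick illustration of usage 4 (a minimal sketch), a do BLOCK evaluates to the value of its last statement, so it can sit inside a larger expression:

my @lines   = ("alpha", "beta", "gamma");
my $summary = do {
    my $count = scalar @lines;        # lexicals declared here do not leak out of the BLOCK
    my $first = $lines[0] // '(none)';
    "$count line(s), starting with '$first'";
};
say $summary;                         # 3 line(s), starting with 'alpha'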
All modern programming languages have constructs equivalent to usages 1 and 2, because these are crucial for structuring algorithms and for handling complexity in programming. Usage 3 is less common, and usage 4 is quite particular to Perl. The next chapters will cover these various aspects in more depth.
Lexical scope
In all usage situations listed above, a Perl BLOCK always opens a new lexical scope, i.e. a portion of code that delimits the effect of inner declarations. Things that can be temporarily declared inside a lexical scope are:
- lexical variables, temporarily binding a variable name to a memory location on the stack - declared through keywords my or state;
- lexical pragmata, temporarily importing semantics into the current BLOCK - introduced through keywords use or no;
- (beginning with Perl 5.18) lexical subroutines, only accessible within the scope - also declared through keywords my or state.
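Here is a small sketch combining the three kinds of lexical declarations listed above (assuming a perl recent enough for say and lexical subroutines):

{
    use integer;                        # lexical pragma: integer arithmetic inside this BLOCK only
    my $ratio = 7 / 2;                  # lexical variable; here $ratio is 3
    my sub twice { return 2 * $_[0] }   # lexical subroutine, invisible outside the BLOCK
    say twice($ratio);                  # 6
}
# outside the BLOCK: the pragma no longer applies, and $ratio and twice() do not exist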
When a BLOCK is used as part of a compound statement (if, foreach, etc.), the initial clause before the BLOCK is already part of the lexical scope, so that variables declared in that clause can then be used within the BLOCK:
foreach my $member (@list) {
work_with($member); # here $member can be used
}
say $member; # ERROR: $member is no longer in scope
This is also true when a compound statement has several clauses and therefore several BLOCKs, like if (...) {...} elsif {...} else {...}. Further examples, together with detailed explanations, can be found in perlsub.
Declarations of variables, pragmata or subroutines have the following facts in common:
- they take effect starting from the statement after the declaration. Technically declarations can occur anywhere in the BLOCK; the most common usage is to put them at the beginning, but there is no obligation to do so;
- they may temporarily shadow the effect of declarations in higher scopes;
- their effect ends when control flow exits the BLOCK, whatever the exit cause may be (normal end of the block, explicit exit through instructions like return, next or goto, or exit because an exception was encountered); the shadowing and scope-exit rules are illustrated in the small sketch below.
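A tiny sketch of the shadowing and scope-exit rules:

my $count = 10;
{
    my $count = 0;    # shadows the outer $count inside this BLOCK only
    $count++;
    say $count;       # 1
}
say $count;           # 10 -- the outer variable was never touched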
Declarations in lexical scopes have effects both at compile-time (the Perl interpreter temporarily alters its parsing rules) and at runtime (the interpreter temporarily allocates or releases resources). For example in the following snippet:
{
my $db_handle = DBI->connect(@db_connection_args);
my @large_array = $db_handle->selectall_array($some_sql, @some_bind_values);
open my $file_handle, ">", $output_file or die "could not open $output_file: $!";
print $file_handle formatted_output($_) foreach @large_array;
}
the interpreter knows at compile-time that variables $db_handle, @large_array and $file_handle are allowed within the BLOCK, but not outside of it, so it can check statically for typos or other misuses of variables names; then at runtime the interpreter will dynamically allocate and release memory and external handles when control flow crosses the BLOCK. This is quite similar to what happens in a statically typed language like Java. By contrast, Python, often grouped in the same family as Perl because it is also a dynamically-typed language, does not have the same treatment of lexical scopes.
Lexical scopes in Python are not like Perl
In Python, there is no generally available construct as versatile as a Perl BLOCK. Subsequences of statements are expressed through indentation, but this is only allowed as part of a function definition or as part of a compound statement. A compound statement must always start with a keyword (if, for, while, etc.) that opens the header clause and is followed by a suite:
if is_success():
summary = summarize_results()
report_to_user(summary)
cleanup_resources()
A 'suite' in Python is not to be confused with a 'block'. The official documentation is very careful about the distinction, but many informal texts in Python literature confuse the two; yet the difference is quite important because:
- a Python block, "a piece of Python program text that is executed as a unit", only occurs within a module, a function body or a class definition (https://docs.python.org/3/reference/executionmodel.html#structure-of-a-program);
- a Python suite, "a group of statements controlled by a clause", occurs whenever a clause in a compound statement expects to be followed by some instructions.
Blocks and suites look similar, because they are both expressed as indented sequences of statements; but the difference is that a 'block' opens a new lexical scope, while a 'suite' does not. In particular, this means that variables declared in a compound statement are still available after the statement has ended, which is quite surprising for programmers coming from a C-like culture (including Perl and Java). So for example
for i in [1, 2, 3]:
pass
print(i)
is valid Python code and prints 3. This example is taken from https://eli.thegreenplace.net/2015/the-scope-of-index-variables-in-pythons-for-loops/ which does a very good job at explaining why Python in this respect works differently from many other languages.
Lexical scope vs dynamic scope
In addition to traditional lexical scoping, Perl also has another construct named dynamic scoping, introduced through the keyword local. Dynamic scoping is a vestige from Perl1, but still very useful in some specific use cases; it will be discussed in a future post in this series. For the moment let us just say that in all common situations, lexical scoping is the most appropriate mechanism for working with variables guaranteed not to interfere with the global state of the program.
Lexical variables
Lexical variables in Perl are introduced with the my keyword. Several variables, possibly of different types, can be declared and initialized in one single statement:
my @table = ([qw/x y z/], [1, 2, 3], [9, 8, 7]);
my ($nb_rows, $nb_cols) = (scalar @table, scalar $table[0]->@*);
my (@db_connection_args, %select_args);
Variable initialization and list destructuring
Lexical variables can only be used starting from the statement after the declaration. Therefore it is illegal in Perl to write something like:
my ($x, $y, $z) = (123, $x + 1, $x + 2);
because at the point where expression $x + 1 is encountered, variable $x is not yet in scope. By contrast, Java would accept int x = 123, y = x + 1, z = x + 2, or JavaScript would accept let x = 123, y = x + 1, z = x + 2, because in those languages variable initializations occur in sequence, while Perl starts by evaluating the complete list on the right-hand side of the assignment, and then distributes the values into variables on the left-hand side.
Python is like Perl: it does not accept x, y = 123, x + 1 because x is not defined in the right-hand side. Both Perl and Python had from the start the notion of "destructuring" a list into several individual variables. Other languages adopted similar features much later:
- JavaScript has had an advanced mechanism of destructuring since ES6 (2015), which can be applied not only to lists, but also to objects (records). For destructuring a list, the variables must be put in square brackets on the left-hand side of the assignment, in order to avoid ambiguity with sequences of ordinary assignments:
let x = 123, y = x + 1, z = x + 2; // sequence of assignments
let [a, b, c] = [123, 124, 125]; // list destructuring
- The Amber project for Java recently introduced several mechanisms for pattern matching, which is quite close to the idea of destructuring. For the moment, however, it can only be used for destructuring records, not yet for lists.
Coming back to Perl, list destructuring has always been part of common idioms, notably for:
- extracting items from command-line arguments
my ($user, $password, @others) = @ARGV;
- extracting items from the argument list to a subroutine
my ($height, $width, $depth) = @_;
- switching variables
($x, $y) = ($y, $x);
Shadowing variables from higher scopes
Like in other languages, a lexical variable in Perl can shadow another variable of the same name at a higher lexical scope. However, the shadowing effect only starts at the statement after the declaration of the variable. As a result, the shadowed value can still be used in the initializing expression:
my $x = 987;
{ my ($x, $y) = (123, $x + 1);
say "inner scope, x is $x and y is $y"; # "inner scope, x is 123 and y is 988"
}
say "outer scope, x is $x"; # "outer scope, x is 987"
Now let us see how other dynamically typed languages handle variable shadowing.
Shadowing variables in Python
Python has no explicit variable declarations; instead, any assignment instruction implicitly declares the target of the assignment to be a lexical variable in the current lexical scope.
def foo():
x = 123 # declares lexical variable x
y = 456 # declares lexical variable y
x = 789 # assigns a new value to existing variable x
Since the intention of declaring is not explicitly stated by the programmer, the interpreter is of little help for detecting errors that would be identified as typos in other languages. In the example above, one could suspect that the intent was to declare a z variable instead of assigning a new value to x.
If an assignment occurs in the middle of a lexical scope, the corresponding variable is nevertheless treated as being lexical from the very beginning of the scope. As a consequence, newcomers to Python can easily be surprised by an UnboundLocalError, which can be shortly demonstrated by this example from the official documentation:
x = 10
def foo():
print(x)
x += 1
foo()
Here the assignment x += 1 implicitly declares x to be a lexical variable for the whole body of the foo() function, even if the assignment comes at the end. In this situation the print() statement raises an exception because at this point lexical variable x is not bound to a value. By contrast, if the assignment is commented out
x = 10
def foo():
print(x)
# x += 1
foo()
the program happily prints 10, because here x is no longer interpreted as a lexical variable, but as the global x.
Python statements global and nonlocal can instruct the parser that some specific variables should not be declared in the current lexical scope, but should instead be taken from the global module scope, or, in case of nested functions or classes, from the next higher scope. So in this respect, Python programming is just the opposite of Perl or Java : instead of explicitly declaring lexical variables, one must explicitly declare the variables that are not lexical. Furthermore, since such declarations apply to the whole current lexical scope, independently of the place where they are inserted, it is an exclusive choice : any use of a named variable must come either from the current lexical scope, or from a higher scope, or from the global scope. Therefore it is not possible, as in Perl, to use the value of a global x in the initialization expression for a local lexical x.
Shadowing variables in JavaScript
The historical construct for declaring lexical variables in JavaScript was through the var keyword, which is still present in the language. The behaviour of var is quite similar to Python lexical variables: variables appear to exist even before they are declared (which is called hoisting in JavaScript); they are scoped by functions or modules, not by blocks, so they still hold values after exiting from the block; and the interpreter does not complain if a variable is declared twice. For all these reasons, var is now deprecated, replaced since ES6 (2015) by keywords const (for variables that do not change after initialization) or let (for mutable variables).
These new constructs indeed brought more safety to the usage of lexical variables in JavaScript: such variables can no longer be used after exiting from the block, and redeclarations raise syntax errors. Yet one ambiguity remains: the shadowing effect of a variable declared with let does not start at the location of the declaration, but at the beginning of the enclosing block. This is no longer called "hoisting", but it still means that, from the beginning of the block, that variable name shadows any variable with the same name in higher scopes. This is called the temporal dead zone in JavaScript literature.
Shadowing is prohibited in Java
Java has no ambiguity with shadowing ... because it has a more radical approach: it raises a compile-time error when a variable is declared in an inner block with a name already in use at a higher scope! The following snippet
public class ScopeDemo {
public static void main(String[] args) {
int x = 987;
{
int x = 123, y = x + 1, z = x + 2;
System.out.println("here x is " + x + " and y is " + y);
}
System.out.println("here x is " + x);
}
}
yields:
ScopeDemo.java:6: error: variable x is already defined in method main(String[])
int x = 123, y = x + 1, z = x + 2;
^
1 error
error: compilation failed
Lexical pragmata
In Perl, lexical scopes are not only used to control the lifetime of lexical variables: they are also used for lexical pragmata that temporarily alter the behaviour of the interpreter, either by adding some semantics (through the keyword use) or by removing some semantics (through the keyword no). Here is an example of one very common idiom:
use strict;
use warnings;
foreach my $user (get_users_from_database()) {
no warnings 'uninitialized';
my $body = "Dear $user->{firstname} $user->{lastname}, bla bla bla";
...
}
At the beginning of the program, the warnings pragma is activated, because this is general good practice, so that the interpreter can detect suspect situations and warn about them. But when working with a $user record from the database, some fields might be undef, which is OK and is no reason to issue a warning - so within that BLOCK the interpreter is instructed not to complain when undefined values are interpolated as empty strings.
In a similar vein, it is sometimes necessary to relax the controls performed by use strict, in particular on the subject of symbolic references. Among other things, this control forbids programmatic insertion of new subroutines into the symbol table of a module - a safe preventive measure; yet that feature is powerful and very useful in some specific situations, so when needed, one can temporarily disable the control:
foreach my $method_name (@list_of_names) {
no strict 'refs';
*{$method_name} = generate_closure_for($method_name);
}
This technique is used quite extensively, for example in the Object-Relational Mapping module DBIx::DataModel, for generating methods that implement navigation from one table to another related table; its source code shows many instances of this pattern.
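For a self-contained illustration of the idea (a minimal sketch - generate_closure_for here is a hypothetical helper, not taken from any of the modules mentioned above), the generator can simply capture the method name in a closure:
use strict;
use warnings;
use feature 'say';

sub generate_closure_for {
    my ($method_name) = @_;
    return sub {
        my ($self) = @_;
        say "method '$method_name' called with invocant $self";
    };
}

# install the generated subs into the current package
foreach my $method_name (qw/from_author to_books/) {
    no strict 'refs';
    *{$method_name} = generate_closure_for($method_name);
}

__PACKAGE__->from_author;   # prints: method 'from_author' called with invocant main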
Some pragmata can also reinforce controls instead of alleviating them. One very good example is the autovivification module, which changes the default behaviour of Perl on implicit creation of intermediate references:
my $tree; # at this point, $tree is undef
$tree->{foo}{bar}[1] = 99; # no error; now $tree is {foo => {bar => [undef, 99]}}
Autovivification can be very handy, but it can be dangerous too. If we want to be on the safe side, we can write
{ no autovivification qw/fetch store/;
my $tree; # at this point, $tree is undef
$tree->{foo}{bar}[1] = 99; # ERROR: Can't vivify reference
}
As with lexical variables, lexical pragmata can be nested, the innermost use or no declaration temporarily shadowing previous declarations for the same pragma.
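Here is a minimal sketch of such nesting with the warnings pragma:
use strict;
use warnings;
use feature 'say';

my $maybe_undef;
{
    no warnings 'uninitialized';        # silence the warning within this BLOCK ...
    say "value: $maybe_undef";          # no warning
    {
        use warnings 'uninitialized';   # ... except within this inner BLOCK
        say "value: $maybe_undef";      # warns: Use of uninitialized value ...
    }
    say "value: $maybe_undef";          # no warning again
}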
Other examples of lexical pragmata include bigint, which transparently transforms all arithmetic operations to work with instances of Math::BigInt; or the incredible Regexp::Grammars module that adds grammatical parsing features to Perl regexes. The perlpragma documentation explains how module authors can implement new lexical pragmata.
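As a minimal sketch of the bigint pragma in action (assuming a reasonably recent Perl, where the pragma is lexically scoped):
use strict;
use warnings;
use feature 'say';

{
    use bigint;     # integer arithmetic in this BLOCK uses Math::BigInt
    say 2 ** 100;   # exact: 1267650600228229401496703205376
}
say 2 ** 100;       # back to native numbers: prints something like 1.26765060022823e+30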
'do': transform a BLOCK into an expression
In all examples seen so far, BLOCKs were treated as statements; but thanks to the do BLOCK construct, it is also possible to insert a BLOCK anywhere in an expression. The value from the last instruction in the BLOCK is then processed by operators in the expression. This is very convenient for performing a small computation in-place, either for clarity or for efficiency reasons.
The first example is a cheap version of XML entity encoding, slightly adapted from my module Excel::ValueWriter::XLSX:
my %ENTITY_TABLE = ( '<' => '&lt;', '>' => '&gt;', '&' => '&amp;' );
my $entity_regex = do {my $chars = join "", keys %ENTITY_TABLE; qr/[$chars]/};
...
$text =~ s/($entity_regex)/$ENTITY_TABLE{$1}/g; # encode entity characters in $text
The second example is from the cousin module Excel::ValueReader::XLSX. Here we are parsing the content of a table in an Excel sheet, and the data is returned either in the form of a list of arrayrefs (plain values), or in the form of a list of hashrefs (column name => value), depending on an option given by the caller:
my $row = $args{want_records} ? do {my %r; @r{@{$args{columns}}} = @$vals; \%r}
: $vals;
If the caller wants records, the do block performs a hash slice assignment into a lexical hash variable to create a new record on the fly.
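To make the mechanism explicit, here is a longhand equivalent of that do BLOCK (a sketch with hypothetical sample data):
my @columns = qw/id name price/;      # in the real code: @{$args{columns}}
my $vals    = [42, 'widget', 9.99];   # in the real code: one row of plain values

my %r;
@r{@columns} = @$vals;    # hash slice assignment pairs column names with values
my $row = \%r;            # {id => 42, name => 'widget', price => 9.99}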
Wrapping up
Thanks to BLOCKs, lexical scoping can be introduced very flexibly almost anywhere in Perl code. The semantics of lexical variables and lexical pragmata cleanly define that the lexical effect starts at the statement after the declaration, and that it ends at exit from the block, without any of the surprises that we have seen in some other languages. The shadowing effect of lexical variables in inner scopes is easy to understand and consistent across all higher scopes, including the enclosing lexical scopes and the global scope.
What a beautiful language design !
The next post will be about dynamic scoping through the local keyword - another, complementary way for temporarily changing the behaviour of the interpreter.
About the cover image
The picture is an excerpt from the initial movement of Verdi's Requiem, at a place where Verdi shadows several characteristics of the movement : for a short while, the orchestra stays still, leaving the choir a cappella, with a different speed, different tonality and different dynamics; then after this parenthesis, all parameters come back to their initial state, come prima as stated in the score.

Dave writes:
During January, I finished working on another tranche of ExtUtils::ParseXS fixups, this time focussing on:
- adding and rewording warning and error messages, and adding new tests for them;
- improving test coverage: all XS keywords have tests now;
- reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format, splitting a large test file, modernising tests;
- refactoring and improving the length(str) pseudo-parameter implementation.
I also started work on my annual "get './TEST -deparse' working again" campaign. This option runs all the test suite files through a round trip in the deparser before running them. Over the course of the year we invariably accumulate new breakage; sometimes this involves fixing Deparse.pm, and sometimes just back-listing the test file as it is now tickling an already known issue in the deparser.
I also worked on a couple of bugs.
Summary:
- 0:53 GH #13878 COW speedup lost after e8c6a474
- 4:05 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules to build
- 12:14 fix up Deparse breakage
- 26:12 improve Extutils::ParseXS
Total:
- 43:24 (HH::MM)

Tony writes:
```
[Hours]  [Activity]
2026/01/05 Monday
 0.23    #24055 review, research and approve with comment
 0.08    #24054 review and approve
 0.12    #24052 review and comment
 1.13    #24044 review, research and approve with comment
 0.37    #24043 review and approve
 0.78    #23918 rebase, testing, push and mark ready for review
 1.58    #24001 fix related call_sv() issue, testing
 0.65    #24001 testing, debugging
 4.94

2026/01/06 Tuesday
 0.90    #24034 review and comment
 1.12    #24024 research and follow-up
 1.40    #24001 debug crash in init_debugger() and work up a fix, testing
 0.08    #24001 re-check, push for CI
 3.50

2026/01/07 Wednesday
 0.15    #24034 review updates and approve
 0.55    #23641 review and comment
 0.23    #23961 review and approve
 0.28    #23988 review and approve with comment
 0.62    #24001 check CI results and open PR 24060
 0.50    #24059 review and comments
 0.82    #24024 work on a test and the fix, testing
 0.27    #24024 add perldelta, testing, make PR 24061
 1.13    #24040 work on a test and a fix, testing, investigate other possible similar problems
 4.55

2026/01/08 Thursday
 0.35    #24024 minor fixes, comment, get approval, apply to blead
 0.72    #24040 rebase, perldelta, find a simplification, testing and re-push
 0.18    #24053 review and approve
 0.77    #24063 review, research, testing and comment
 1.47    #24040 look at goto too, look for other similar issues and open 24064, fix for goto (PVOP), testing and push for CI
 0.32    #24040 check CI results, make PR 24065
 0.28    #24050 review and comment
 4.09

2026/01/09 Friday
 0.20    #24059 review updates and comment
 0.20

2026/01/12 Monday
 0.32    #24059 review updates and approve
 0.35    #24066 review and approve
 0.22    #24040 rebase and clean up whitespace, apply to blead
 1.00    #23966 rebase, testing (expected issues from the #23885 merge but I guess I got the magic docs right)
 1.05    #24062 review up to ‘ParseXS: tidy up INCLUDE error messages’
 0.25    #24069 review and comment
 0.62    #24071 part review and comment
 3.81

2026/01/13 Tuesday
 1.70    #24070 research and comment
 0.23    #24069 review and comment
 0.42    #24071 more review
 1.02    #24071 more review, comments
 3.37

2026/01/14 Wednesday
 0.23    #23918 minor fix
 0.18    #24069 review updates and approve
 0.32    #24077 review and comments
 0.53    #24073 review, research and comment
 0.87    #24075 review, research and comment
 0.45    #24071 benchmarking and comment
 1.47    #24019 debugging, brief comment on work so far
 4.05

2026/01/15 Thursday
 0.37    #24019 debugging and comment on cause of win32 issues
 1.02    #24077 review, follow-up
 0.08    #24079 review and approve
 0.25    #24076 review and approve
 0.73    #24062 more review up to ‘ParseXS: refactor: don't set $_ in Param::parse()’
 2.03    #24062 more review to ‘ParseXS: refactor: 001-basic.t: add TODO flag’ and comments
 4.48

2026/01/19 Monday
 1.12    maint-votes, vote apply/testing one of the commits
 0.43    github notifications
 0.08    #24079 review updates and comment
 0.70    #24075 research and approve
 0.57    #24063 research, try to break it, comment
 1.43    #24062 more review to ‘ParseXS: add basic tests for PREINIT keyword’
 4.33

2026/01/20 Tuesday
 2.23    #24078 review, testing, comments
 0.85    #24098 review, research and comment
 1.00    #24062 more review up to ‘ParseXS: 001-basic.t: add more ellipsis tests’
 4.08

2026/01/21 Wednesday
 0.82    #23995 research and follow-up
 0.25    #22125 follow-up
 0.23    #24056 research, comment
 0.67    #24103 review, research and approve
 1.45    #24062 more review to end, comment
 3.42

2026/01/22 Thursday
 0.10    #24079 review update and approve
 0.08    #24106 review and approve
 0.10    #24096 review and approve
 0.08    #24094 review and approve
 0.82    #24080 review, research and comments
 0.08    #24081 review and approve
 0.75    #24082 review, testing, comment
 1.15    #23918 rebase #23966, testing and apply to blead, start on string APIs
 3.16

2026/01/27 Tuesday
 0.35    #23956 fix perldelta issues
 1.27    #22125 remove debugging detritus, research and comment
 1.57    #24080 debugging into SvOOK and PVIO
 0.67    #24080 more debugging, comment
 0.25    #24120 review and approve
 1.03    #23984 review, research and approve
 5.14

2026/01/28 Wednesday
 0.37    #24080 follow-up
 0.10    #24128 review and apply to blead
 0.70    #24105 review, look at changes needed
 0.15    #23956 check CI results and apply to blead
 0.10    #22125 check CI results and apply to blead
 0.13    #4106 rebase PR 23262 and testing
 0.53    #24001 rebase PR 24060 and testing
 0.57    #24129 review and comments
 0.28    #24127 review and approve
 0.10    #24124 review and approve
 0.20    #24123 review and approve with comment
 3.23

2026/01/29 Thursday
 0.27    #23262 minor change suggested by xenu, testing, push for CI
 0.22    #24060 comment
 1.63    #24082 review, testing, comments
 0.43    #24130 review, check some side issues, approve
 0.12    #24077 review updates and approve
 0.08    #24121 review and approve
 0.08    #24122 review and comment
 0.30    #24119 review and approve
 3.13

Which I calculate is 59.48 hours.

Approximately 57 tickets were reviewed or worked on, and 6 patches were applied.
```

Paul writes:
This month I managed to finish off a few refalias-related issues, as well as lending some time to help BooK make further progress on implementing PPC0014
- 1 = Clear pad after multivar foreach
- https://github.com/Perl/perl5/pull/240
- 3 = Fix B::Concise output for OP_MULTIPARAM
- https://github.com/Perl/perl5/pull/24066
- 6 = Implement multivariable foreach on refalias
- https://github.com/Perl/perl5/pull/24094
- 1 = SVf_AMAGIC flag tidying (as yet unmerged)
- https://github.com/Perl/perl5/pull/24129
- 2.5 = Mentoring BooK towards implementing PPC0014
- 2 = Various github code reviews
Total: 15.5 hours
My focus for February will now be to try to get both attributes-v2
and magic-v2 branches in a state where they can be reviewed, and at
least the first parts merged in time for 5.43.9, and hence 5.44, giving
us a good base to build further feature ideas on top of.
2025 was a tough year for The Perl and Raku Foundation (TPRF). Funds were sorely needed. The community grants program had been paused due to budget constraints and we were in danger of needing to pause the Perl 5 core maintenance grants. Fastmail stepped up with a USD 10,000 donation and helped TPRF to continue to support Perl 5 core maintenance. Ricardo Signes explains why Fastmail helped keep this very important work on track.
Perl has served us quite well since Fastmail’s inception. We’ve built up a large code base that has continued to work, grow, and improve over twenty years. We’ve stuck with Perl because Perl stuck with us: it kept working and growing and improving, and very rarely did those improvements require us to stop the world and adapt to onerous changes. We know that kind of stability is, in part, a function of the developers of Perl, whose time is spent figuring out how to make Perl better without also making it worse. The money we give toward those efforts is well-spent, because it keeps the improvements coming and the language reliable.
— Ricardo Signes, Director & Chief Developer Experience Officer, Fastmail
One of the reasons that you don’t hear about Perl in the headlines is its reliability. Upgrading your Perl from one version to the next? That can be a very boring deployment. Your code worked before and it continues to “just work” after the upgrade. You don’t need to rant about short deprecation cycles, performance degradation or dependencies which no longer install. The Perl 5 core maintainers take great care to ensure that you don’t have to care very much about upgrading your Perl. Backwards compatibility is top of mind. If your deployment is boring, it’s because a lot of care and attention has been given to this matter by the people who love Perl and love to work on it.
As we moved to secure TPRF’s 2025 budget, we reached out to organizations which rely on Perl. A number of these companies immediately offered to help. Fastmail has already been a supporter of TPRF for quite some time. In addition to this much needed donation, Fastmail has been providing rock solid free email hosting to the foundation for many years.
While Fastmail’s donation has been allocated towards Perl 5 Core maintenance, TPRF is now in the position to re-open the community grants program, funding it with USD 10,000 for 2026. There is also an opportunity to increase the community grants funding if sponsor participation increases. As we begin our 2026 fundraising, we are looking to cast a wider net and bring more sponsor organizations on board to help support healthy Perl and Raku ecosystems.
Maybe your organization will be the one to help us double our community grants budget in 2026. To become a sponsor, contact: olaf@perlfoundation.org
“Perl is my cast-iron pan - reliable, versatile, durable, and continues to be ever so useful.” TPRC 2026 brings together a community that embodies all of these qualities, and we’re looking for sponsors to help make this special gathering possible.
About the Conference
The Perl and Raku Conference 2026 is a community-organized gathering of developers, enthusiasts, and industry professionals. It takes place from June 26-28, 2026, in Greenville, South Carolina. The conference will feature an intimate, single-track format that promises high sponsor visibility. We expect approximately 80 participants, with some of them staying in town for the shoulder days (June 25-29) and a Monday workshop.
Why Sponsor?
- Give back to the language and communities which have already given so much to you
- Connect with the developers and craftspeople who build your tools – the ones that are built to last
- Help to ensure that The Perl and Raku Foundation can continue to fund Perl 5 core maintenance and Community Grants
Sponsorship Tiers
Platinum Sponsor ($6,000)
- Only 1 sponsorship is available at this level
- Premium logo placement on conference website
- This donation qualifies your organization to be a Bronze Level Sponsor of The Perl and Raku Foundation
- 5-minute speaking slot during opening ceremony
- 2 complimentary conference passes
- Priority choice of rollup banner placement
- Logo prominently displayed on conference badges
- First choice of major named sponsorship (Conference Dinner, T-shirts, or Swag Bags)
- Logo on main stage backdrop and conference banners
- Social media promotion
- All benefits of lower tiers
Gold Sponsor ($4,000)
- Logo on all conference materials
- One complimentary conference pass
- Rollup banner on display
- Choice of named sponsorship (Lunch or Snacks)
- Logo on backdrop and banners
- Dedicated social media recognition
- All benefits of lower tiers
Silver Sponsor ($2,000)
- Logo on conference website
- Logo on backdrop and banners
- Choice of smaller named sponsorship (Beverage Bars)
- Social media mention
- All benefits of lower tier
Bronze Sponsor ($1,000)
- Name/logo on conference website
- Name/logo on backdrop and banners
All Sponsors Receive
- Logo/name in Update::Daily conference newsletter sidebar
- Opportunity to provide materials for conference swag bags
- Recognition during opening and closing ceremonies
- Listed on conference website sponsor page
- Mentioned in conference social media
Named Sponsorship Opportunities
Exclusive naming rights available for:
- Conference Dinner ($2,000) - Signage on tables and buffet
- Conference Swag Bags ($1,500) - Logo on bags
- Conference T-Shirts ($1,500) - Logo on sleeve
- Lunches ($1,500) - Signage at pickup and on menu tickets
- Snacks ($1,000) - Signage at snack bar
- Update::Daily Printing ($200) - Logo on masthead
About The Perl and Raku Foundation
Proceeds beyond conference expenses support The Perl and Raku Foundation, a non-profit organization dedicated to advancing the Perl and Raku programming languages through open source development, education, and community building.
Contact Information
For more information on how to become a sponsor, please contact: olaf@perlfoundation.org
- 00:00 Introduction to OSDC
- 01:30 Introducing myself Perl Maven, Perl Weekly
- 02:10 The earlier issues.
- 03:10 How to select a project to contribute to?
- 04:50 Chat on OSDC Zulip
- 06:45 How to select a Perl project?
- 09:20 CPAN::Digger
- 10:10 Modules that don't have a link to their VCS.
- 13:00 Missing CI - GitHub Actions or GitLab Pipeline and Travis-CI.
- 14:00 Look at Term-ANSIEncode by Richard Kelsch - How to find the repository of this project?
- 15:38 Switching to look at Common-CodingTools by mistake.
- 16:30 How does MetaCPAN know where the repository is?
- 17:52 Clone the repository.
- 18:15 Use the szabgab/perl Docker container.
- 22:10 Run perl Makefile.PL, install dependency, run make and make distdir.
- 23:40 See the generated META.json file.
- 24:05 Edit the Makefile.PL
- 24:55 Explaining my method of cloning first (calling it origin) and forking later and calling that fork.
- 27:00 Really edit Makefile.PL and add the META_MERGE section and verify the generated META.json file.
- 29:00 Create a branch locally. Commit the change.
- 30:10 Create a fork on GitHub.
- 31:45 Add the fork as a remote repository and push the branch to it.
- 33:20 Linking to the PR on the OSDC Perl report page.
- 35:00 Planning to add .gitignore and maybe setting up GitHub Action.
- 36:00 Start from the main branch, create the .gitignore file.
- 39:00 Run the tests locally. Set up GitHub Actions to run the tests on every push.
- 44:00 Editing the GHA configuration file.
- 48:30 Commit, push to the fork, check the results of GitHub Action in my fork on GitHub.
- 51:45 Look at the version of the perldocker/perl-tester Docker image.
- 54:40 Update list of Perl versions in the CI. See the results on GitHub.
- 55:30 Show the version number of perl.
- App::ccdiff - Colored Character Diff
  - Version: 0.35 on 2026-01-25, with 20 votes
  - Previous CPAN version: 0.34 was 1 year, 23 days before
  - Author: HMBRAND
- App::rdapper - a command-line RDAP client.
  - Version: 1.22 on 2026-01-29, with 21 votes
  - Previous CPAN version: 1.21 was 1 day before
  - Author: GBROWN
- App::SpeedTest - Command line interface to speedtest.net
  - Version: 0.31 on 2026-01-25, with 32 votes
  - Previous CPAN version: 0.30 was 1 year, 18 days before
  - Author: HMBRAND
- CPAN::Meta - the distribution metadata for a CPAN dist
  - Version: 2.150012 on 2026-01-25, with 39 votes
  - Previous CPAN version: 2.150011 was 3 days before
  - Author: RJBS
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260129.001 on 2026-01-29, with 25 votes
  - Previous CPAN version: 20260125.001 was 3 days before
  - Author: BRIANDFOY
- Cucumber::TagExpressions - A library for parsing and evaluating cucumber tag expressions (filters)
  - Version: 9.0.0 on 2026-01-25, with 17 votes
  - Previous CPAN version: 8.1.0 was 1 month, 29 days before
  - Author: CUKEBOT
- Dancer - lightweight yet powerful web application framework
  - Version: 1.3522 on 2026-01-26, with 149 votes
  - Previous CPAN version: 1.3521 was 2 years, 11 months, 18 days before
  - Author: BIGPRESH
- Dist::Zilla - distribution builder; installer not included!
  - Version: 6.037 on 2026-01-25, with 188 votes
  - Previous CPAN version: 6.036 was 2 months, 15 days before
  - Author: RJBS
- Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
  - Version: v30.0.0 on 2026-01-28, with 20 votes
  - Previous CPAN version: v29.0.1 was 3 months, 12 days before
  - Author: CHEVALIER
- IO::Compress - IO Interface to compressed data files/buffers
  - Version: 2.216 on 2026-01-30, with 19 votes
  - Previous CPAN version: 2.215
  - Author: PMQS
- MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
  - Version: 2.038000 on 2026-01-29, with 27 votes
  - Previous CPAN version: 2.037000 was 22 days before
  - Author: MICKEY
- Net::Server - Extensible Perl internet server
  - Version: 2.016 on 2026-01-28, with 34 votes
  - Previous CPAN version: 2.015 was 5 days before
  - Author: BBB
- SPVM - The SPVM Language
  - Version: 0.990124 on 2026-01-31, with 36 votes
  - Previous CPAN version: 0.990123
  - Author: KIMOTO
- UV - Perl interface to libuv
  - Version: 2.002 on 2026-01-28, with 14 votes
  - Previous CPAN version: 2.001 was 22 days before
  - Author: PEVANS
Most projects are started by a single person in a GitHub repository of that person. Later, for various reasons, people establish GitHub organizations and move the projects there. Sometimes the organization contains a set of sub-projects related to a central project (e.g. a web framework and its extensions or database access). Sometimes it is a collection of projects related to a single topic (e.g. testing or IDE support). Sometimes it is just a random collection of projects where people band together in the hope that no project will be left behind (e.g. the CPAN Authors organization).
Organizations make it easier to have multiple maintainers and thus ensure continuity of the projects, but it might also mean that none of the members really feels the urge to keep working on something.
In any case, I tried to collect all the Perl-related GitHub organizations.
Hopefully in ABC order...
- Beyond grep - It is mostly for ack, a better grep developed by Andy Lester. No public members. 4 repositories.
- Catalyst - Catalyst is a web-framework. Its runtime and various extensions are maintained in this organization. 10 members and 39 repositories.
- cpan-authors - A place for CPAN authors to collaborate more easily. 4 members and 9 repositories.
- davorg cpan - Organisation for maintaining Dave Cross's CPAN modules. No public members and 47 repositories.
- ExifTool - No public members. 3 repositories.
- Foswiki - The Foswiki and related projects. 13 members and 649 repositories.
- gitpan - An archive of CPAN modules - 2 members and 5k+ read-only repositories.
- Kelp framework - a web development framework. 1 member and 12 repositories.
- MetaCPAN - The source of the MetaCPAN site - 9 members and 56 repositories.
- Mojolicious - Some Perl and many JavaScript projects. 5 members and 29 repositories.
- Moose - Moose, MooseX-*, Moo, etc. 11 members and 69 repositories.
- Netdisco - Netdisco and SNMP-Info projects. 10 members and 14 repositories.
- PadreIDE - Padre, the Perl IDE. 13 members and 102 repositories.
- Paracamelus - No public members. 2 repositories.
- Perl - Perl 5 itself, Docker images, etc. 20 members, 8 repositories.
- Perl Actions - GitHub Actions to be used in workflows. 5 members and 9 repositories.
- Perl Advent Calendar - including the source of perl.com. 3 members and 8 repositories.
- Perl Bitcoin - Perl Bitcoin Toolchain Collective. 3 members and 7 repositories.
- Perl Toolchain Gang - ExtUtils::MakeMaker, Module::Build, etc. 27 members and 41 repositories.
- Perl Tools Team - source of planetperl, perl-ads, etc. No public members. 6 repositories.
- Perl Dancer - Dancer, Dancer2, many plugins. 30 members and 79 repositories.
- perltidy - Only for Perl::Tidy. No public members. 1 repository.
- perl5-dbi - DBI, several DBD::* modules, and some related modules. 7 members and 15 repositories.
- perl.org - also cpan.org and perldoc.perl.org. 3 members and 7 repositories.
- Perl5 - DBIx-Class and DBIx-Class-Historic. No public members. 2 repositories.
- perl5-utils - List::MoreUtils, File::ShareDir etc. 2 members and 22 repositories.
- Perl-Critic - PPI, Perl::Critic and related. 7 members and 5 repositories.
- perl-ide - Perl Development Environments. 26 members and 13 repositories.
- perl-pod - Pod::Simple, Test::Pod. 1 member and 4 repositories.
- PkgConfig - 1 member and 1 repository.
- plack - psgi-specs, Plack and a few middlewares. 5 members and 7 repositories.
- RexOps - Rex, Rexify and related projects. 1 member and 46 repositories.
- Sqitch - Sqitch for Sensible database change management and related projects. No public members. 16 repositories.
- StrawberryPerl - The Perl distribution for MS Windows. 4 members and 10 repositories.
- Test-More - Test::Builder, Test::Simple, Test2 etc. - 4 members, 27 repositories.
- The Enlightened Perl Organisation - Task::Kensho and Task::Kensho::*. 1 member and 1 repository.
- Thunderhorse Framework - a modern web development framework. No public members. 4 repositories.
- Webmin - Webmin is a web-based system administration tool for Unix-like servers. - 5 repositories.
Companies
These are not Perl-specific GitHub organizations, but some of their repositories are in Perl.
- cPanel - Open Source Software provided by cPanel. 1 member and 22 repositories.
- DuckDuckGo - The search engine. 19 members and 122 repositories.
- Fastmail - Open-source software developed at Fastmail. 4 members and 38 repositories.
- RotherOSS - Otobo (OTRS fork). No public members. 47 repositories.
If you don't like autovivification, or simply would like to make sure your code does not accidentally alter a hash, the Hash::Util module is for you.
You can lock_hash a hash and later unlock_hash it if you'd like to make some changes to it.
In this example you can see 3 different actions commented out. Each one would raise an exception if someone tried to perform it on a locked hash. After we unlock the hash we can execute those actions again.
I tried this in both perl 5.40 and 5.42.
use strict;
use warnings;
use feature 'say';
use Hash::Util qw(lock_hash unlock_hash);
use Data::Dumper qw(Dumper);
my %person = (
fname => "Foo",
lname => "Bar",
);
lock_hash(%person);
print Dumper \%person;
print "$person{fname} $person{lname}\n";
say "fname exists ", exists $person{fname};
say "language exists ", exists $person{language};
# $person{fname} = "Peti"; # Modification of a read-only value attempted
# delete $person{lname}; # Attempt to delete readonly key 'lname' from a restricted hash
# $person{language} = "Perl"; # Attempt to access disallowed key 'language' in a restricted hash
unlock_hash(%person);
$person{fname} = "Peti"; # Modification of a read-only value attempted
delete $person{lname}; # Attempt to delete readonly key 'lname' from a restricted hash
$person{language} = "Perl"; # Attempt to access disallowed key 'language' in a restricted hash
print Dumper \%person;
$VAR1 = {
'lname' => 'Bar',
'fname' => 'Foo'
};
Foo Bar
fname exists 1
language exists
$VAR1 = {
'language' => 'Perl',
'fname' => 'Peti'
};
My name is Alex. Over the last few years I’ve implemented several versions of Raku’s documentation format (Synopsis 26 / Raku’s Pod) in Perl and JavaScript.
At an early stage, I shared the idea of creating a lightweight version of Raku’s Pod with Damian Conway, the original author of the Synopsis 26 documentation specification (S26). He was supportive of the concept and offered several valuable insights that helped shape the vision of what later became Podlite.
Today, Podlite is a small block-based markup language that is easy to read as plain text, simple to parse, and flexible enough to be used everywhere — in code, notes, technical documents, long-form writing, and even full documentation systems.
This article is an introduction for the Perl community — what Podlite is, how it looks, how you can already use it in Perl via a source filter, and what’s coming next.
The Block Structure of Podlite
One of the core ideas behind Podlite is its consistent block-based structure. Every meaningful element of a document — a heading, a paragraph, a list item, a table, a code block, a callout — is represented as a block. This makes documents both readable for humans and predictable for tools.
Podlite supports three interchangeable block styles: delimited, paragraph, and abbreviated.
Abbreviated blocks (=BLOCK)
This is the most compact form.
A block starts with = followed by the block name.
=head1 Installation Guide
=item Perl 5.8 or newer
=para This tool automates the process.
- ends on the next directive or a blank line
- best used for simple one-line blocks
- cannot include configuration options (attributes)
Paragraph blocks (=for BLOCK)
Use this form when you want a multi-line block or need attributes.
=for code :lang<perl>
say "Hello from Podlite!";
- ends when a blank line appears
- can include complex content
- allows attributes such as :lang, :id, :caption, :nested, …
Delimited blocks (=begin BLOCK … =end BLOCK)
The most expressive form. Useful for large sections, nested blocks, or structures that require clarity.
=begin nested :notify<important>
Make sure you have administrator privileges.
=end nested
- explicit start and end markers
- perfect for code, lists, tables, notifications, markdown, formulas
- can contain other blocks, including nested ones
These block styles differ in syntax convenience, but all produce the same internal structure.

Regardless of which syntax you choose:
- all three forms represent the same block type
- attributes apply the same way (:lang, :caption, :id, …)
- tools and renderers treat them uniformly
- nested blocks work identically
- you can freely mix styles inside a document
Example: Comparing POD and Podlite
Let’s see how the same document looks in traditional POD versus Podlite:

Each block has clear boundaries, so you don’t need blank lines between them. This makes your documentation more compact and easier to read. This is one of the reasons Podlite remains compact yet powerful: the syntax stays flexible, while the underlying document model stays clean and consistent.
This Podlite example is rendered as shown on the following screen:

Inside the Podlite Specification 1.0
One important point about Podlite is that it is first and foremost a specification. It does not belong to any particular programming language, platform, or tooling ecosystem. The specification defines the document model, syntax rules, and semantics.
From the Podlite 1.0 specification, notable features include:
- headings (=head1, =head2, …)
- lists and definition lists, including task lists
- tables (simple and advanced)
- CSV-backed tables
- callouts / notifications (=nested :notify<tip|warning|important|note|caution>)
- table of contents (=toc)
- includes (=include)
- embedded data (=data)
- pictures (=picture and inline P<>)
- formulas (=formula and inline F<>)
- user defined blocks and markup codes
- Markdown integration
The =markdown block is part of the standard block set defined by the Podlite Specification 1.0.
This means Markdown is not an add-on or optional plugin — it is a fully integrated, first-class component of the language.
Markdown content becomes part of Podlite’s unified document structure, and its headings merge naturally with Podlite headings inside the TOC and document outline.
Below is a screenshot showing how Markdown inside Perl is rendered in the in-development VS Code extension, demonstrating both the block structure and live preview:

Using Podlite in Perl via the source filter
To make Podlite directly usable in Perl code, there is a module on CPAN: Podlite — Use Podlite markup language in Perl programs
A minimal example could look like this:
use Podlite; # enable Podlite blocks inside Perl
=head1 Quick Example
=begin markdown
Podlite can live inside your Perl programs.
=end markdown
print "Podlite active\n";
Roadmap: what’s next for Podlite
Podlite continues to grow, and the Specification 1.0 is only the beginning. Several areas are already in active development, and more will evolve with community feedback.
Some of the things currently planned or in progress:
- CLI tools
- command-line utilities for converting Podlite to HTML, PDF, man pages, etc.
- improve pipelines for building documentation sites from Podlite sources
- VS Code integration
- Ecosystem growth
- develop comprehensive documentation and tutorials
- community-driven block types and conventions
Try Podlite and share feedback
If this resonates with you, I’d be very happy to hear from you:
- ideas for useful block types
- suggestions for tools or integrations
- feedback on the syntax and specification
https://github.com/podlite/podlite-specs/discussions
Even small contributions — a comment, a GitHub star, or trying an early tool — help shape the future of the specification and encourage further development.
Useful links:
- CPAN: https://metacpan.org/pod/Podlite
- GitHub: https://github.com/podlite
- Specification
- Project site: https://podlite.org
- Roadmap: https://podlite.org/#Roadmap
Thanks for reading, Alex
- App::Greple - extensible grep with lexical expression and region handling
  - Version: 10.03 on 2026-01-19, with 56 votes
  - Previous CPAN version: 10.02 was 10 days before
  - Author: UTASHIRO
- Beam::Wire - Lightweight Dependency Injection Container
  - Version: 1.028 on 2026-01-21, with 19 votes
  - Previous CPAN version: 1.027 was 1 month, 15 days before
  - Author: PREACTION
- CPAN::Meta - the distribution metadata for a CPAN dist
  - Version: 2.150011 on 2026-01-22, with 39 votes
  - Previous CPAN version: 2.150010 was 9 years, 5 months, 4 days before
  - Author: RJBS
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260120.004 on 2026-01-20, with 25 votes
  - Previous CPAN version: 20260120.002
  - Author: BRIANDFOY
- DateTime::Format::Natural - Parse informal natural language date/time strings
  - Version: 1.24 on 2026-01-18, with 19 votes
  - Previous CPAN version: 1.23_03 was 5 days before
  - Author: SCHUBIGER
- EV - perl interface to libev, a high performance full-featured event loop
  - Version: 4.37 on 2026-01-22, with 50 votes
  - Previous CPAN version: 4.36 was 4 months, 2 days before
  - Author: MLEHMANN
- Git::Repository - Perl interface to Git repositories
  - Version: 1.326 on 2026-01-18, with 27 votes
  - Previous CPAN version: 1.325 was 4 years, 7 months, 17 days before
  - Author: BOOK
- IO::Async - Asynchronous event-driven programming
  - Version: 0.805 on 2026-01-19, with 80 votes
  - Previous CPAN version: 0.804 was 8 months, 26 days before
  - Author: PEVANS
- Mac::PropertyList - work with Mac plists at a low level
  - Version: 1.606 on 2026-01-20, with 13 votes
  - Previous CPAN version: 1.605 was 5 months, 11 days before
  - Author: BRIANDFOY
- Module::CoreList - what modules shipped with versions of perl
  - Version: 5.20260119 on 2026-01-19, with 44 votes
  - Previous CPAN version: 5.20251220 was 29 days before
  - Author: BINGOS
- Net::Server - Extensible Perl internet server
  - Version: 2.015 on 2026-01-22, with 33 votes
  - Previous CPAN version: 2.014 was 2 years, 10 months, 7 days before
  - Author: BBB
- Net::SSH::Perl - Perl client Interface to SSH
  - Version: 2.144 on 2026-01-23, with 20 votes
  - Previous CPAN version: 2.144 was 8 days before
  - Author: BDFOY
- Release::Checklist - A QA checklist for CPAN releases
  - Version: 0.19 on 2026-01-25, with 16 votes
  - Previous CPAN version: 0.18 was 1 month, 15 days before
  - Author: HMBRAND
- Spreadsheet::Read - Meta-Wrapper for reading spreadsheet data
  - Version: 0.95 on 2026-01-25, with 31 votes
  - Previous CPAN version: 0.94 was 1 month, 15 days before
  - Author: HMBRAND
- SPVM - The SPVM Language
  - Version: 0.990117 on 2026-01-24, with 36 votes
  - Previous CPAN version: 0.990116
  - Author: KIMOTO
- utf8::all - turn on Unicode - all of it
  - Version: 0.026 on 2026-01-18, with 31 votes
  - Previous CPAN version: 0.025 was 1 day before
  - Author: HAYOBAAN
Open Source contribution - Perl - Tree-STR, JSON-Lines, and Protocol-Sys-Virt - Setup GitHub Actions
See OSDC Perl
- 00:00 Working with Peter Nilsson
- 00:01 Find a module to add GitHub Action to. Go to CPAN::Digger recent
- 00:10 Found Tree-STR
- 01:20 Bug in CPAN Digger that shows a GitHub link even if it is broken.
- 01:30 Search for the module name on GitHub.
- 02:25 Verify that the name of the module author is the owner of the GitHub repository.
- 03:25 Edit the Makefile.PL.
- 04:05 Edit the file, fork the repository.
- 05:40 Send the Pull-Request.
- 06:30 Back to CPAN Digger recent to find a module without GitHub Actions.
- 07:20 Add file / Fork repository gives us "unexpected error".
- 07:45 Direct fork works.
- 08:00 Create the .github/workflows/ci.yml file.
- 09:00 Example CI yaml file: copy it and edit it.
- 14:25 Look at a GitLab CI file for a few seconds.
- 14:58 Commit - change the branch and add a description!
- 17:31 Check if the GitHub Action works properly.
- 18:17 There is a warning while the tests are running.
- 21:20 Opening an issue.
- 21:48 Opening the PR (on the wrong repository).
- 22:30 Linking to output of a CI?
- 23:40 Looking at the file to see the source of the warning.
- 25:25 Assigning an issue? In an open source project?
- 27:15 Edit the already created issue.
- 28:30 Use the Preview!
- 29:20 Sending the Pull-Request to the project owner.
- 31:25 Switching to Jonathan
- 33:10 CPAN Digger recent
- 34:00 Net-SSH-Perl of BDFOY - Testing a networking module is hard and Jonathan is using Windows.
- 35:13 Frequency of update of CPAN Digger.
- 36:00 Looking at our notes to find the GitHub account of the module author LNATION.
- 38:10 Look at the modules of LNATION on MetaCPAN
- 38:47 Found JSON::Lines
- 39:42 Install the dependencies, run the tests, generate test coverage.
- 40:32 Cygwin?
- 42:45 Add GitHub Action, copying it from the previous PR.
- 43:54 META.yml should not be committed as it is a generated file.
- 48:25 I am looking for sponsors!
- 48:50 Create a branch that reflects what we do.
- 51:38 Commit the changes
- 53:10 Fork the project on GitHub and set up the git remote locally.
- 55:05 git push -u fork add-ci
- 57:44 Sending the Pull-Request.
- 59:10 The 7 dwarfs and Snow White. My hope is to have 100 people sending these PRs.
- 1:01:30 Feedback.
- 1:02:10 Did you think this was useful?
- 1:02:55 Would you be willing to tell people you know that you did this and you will do it again?
- 1:03:17 You can put this on your resume. It means you know how to do it.
- 1:04:16 ... and Zoom suddenly closed the recording...
Announcing the Perl Toolchain Summit 2026!
The organizers have been working behind the scenes since last September, and today I’m happy to announce that the 16th Perl Toolchain Summit will be held in Vienna, Austria, from Thursday April 23rd till Sunday April 26th, 2026.
This post is brought to you by Simplelists, a group email and mailing list service provider, and a recurring sponsor of the Perl Toolchain Summit.

Started in 2008 as the Perl QA Hackathon in Oslo, the Perl Toolchain Summit is an annual event that brings together the key developers working on the Perl toolchain. Each year (except for 2020-2022), the event moves from country to country all over Europe, organised by local teams of volunteers. The surplus money from previous summits helps fund the next one.
Since 2023, the organizing team has been formally split between a “global” team and a “local” team (although this setup had been used informally before).
The global team is made up of veteran PTS organizers, who deal with invitations, finding sponsors, paying bills and communications. They are Laurent Boivin (ELBEHO), Philippe Bruhat (BOOK), Thibault Duponchelle (CONTRA), Tina Müller (TINITA) and Breno de Oliveira (GARU), supported by Les Mongueurs de Perl’s bank account.
The local team members for this year have organized several events in Vienna (including the Perl QA Hackathon 2010!) and deal with finding the venue, the hotel, the catering and welcoming our attendees in Vienna in April. They are Alexander Hartmaier (ABRAXXA), Thomas Klausner (DOMM), Maroš Kollár (MAROS), Michael Kröll and Helmut Wollmersdorfer (WOLLMERS).
The developers who maintain CPAN and associated tools and services are all volunteers, scattered across the globe. This event is the one time in the year when they can get together.
The summit provides dedicated time to work on the critical systems and tools, with all the right people in the same room. The attendees hammer out solutions to thorny problems and discuss new ideas to keep the toolchain moving forward. This year, about 40 people have been invited, with 35 participants expected to join us in Vienna.
If you want to find out more about the work being done at the Toolchain Summit, and hear from the teams and people involved, you can listen to several episodes of The Underbar podcast, which were recorded during the 2025 edition in Leipzig, Germany.
Given the important nature of the attendees’ work and their volunteer status, we try to pay for most expenses (travel, lodging, food, etc.) through sponsorship. If you’re interested in helping sponsor the summit, please get in touch with the global team at pts2026@perltoolchainsummit.org.
Simplelists has been sponsoring the Perl Toolchain Summit for several years now. We are very grateful for their continued support.
Simplelists is proud to sponsor the 2026 Perl Toolchain Summit, as Perl forms the core of our technology stack. We are grateful that we can rely on the robust and comprehensive Perl ecosystem, from the core of Perl itself to a whole myriad of CPAN modules. We are glad that the PTS continues its unsung work, ensuring that Simplelists can continue to rely on these many tools.
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with Google Cloud Run.
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with a local development…
