I was recently asked to run some in-house Perl training. That doesn't happen very often these days. But I ran the course yesterday and it seemed to go well (they've asked me back for more). It reminded me how much I enjoy running training courses. I should try to do more of it. It also reminded me of the huge number of training decks I've created over the last 25 years. I've decided to bring those all together in one place to make it easier for people to find them. They might be useful to someone. And they might encourage more people to hire me to run courses for them. I made a (small) start on that today. It'll take a while!

Dave writes:
During December, I fixed assorted bugs, and started work on another tranche of ExtUtils::ParseXS fixups, this time focussing on:
adding and rewording warning and error messages, and adding new tests for them;
improving test coverage: all XS keywords have tests now;
reorganising the test infrastructure: deleting obsolete test files, renaming the t/*.t files to a more consistent format; splitting a large test file; modernising tests;
refactoring and improving the length(str) pseudo-parameter implementation.
By the end of this report period, that work was about half finished; it is currently finished and being reviewed.
Summary:
* 10:25 GH #16197 re eval stack unwinding
* 1:39 GH #23903 BBC: bleadperl breaks ETHER/Package-Stash-XS-0.30.tar.gz
* 0:09 GH #23986 Perl_rpp_popfree_to(SV sp**) questionable design
* 3:02 fix Pod::Html stderr noise
* 27:47 improve ExtUtils::ParseXS
* 1:47 modernise perlxs.pod
Total: * 44:49 (HH::MM)

Tony writes:
```
[Hours] [Activity]

2025/12/01 Monday
 0.23 memEQ cast discussion with khw
 0.42 #23965 testing, review and comment
 2.03 #23885 review, testing, comments
 0.08 #23970 review and approve
 0.13 #23971 review and approve
 0.08 #23965 follow-up
 2.97

2025/12/02 Tuesday
 0.73 #23969 research and comment
 0.30 #23974 review and approve
 0.87 #23975 review and comment
 0.38 #23975 review reply and approve
 0.25 #23976 review, research and approve
 0.43 #23977 review, research and approve
 1.20 #23918 try to produce expected bug and succeed
 4.16

2025/12/03 Wednesday
 0.35 #23883 check updates and approve with comment
 0.72 #23979 review, try to trigger the messages and approve
 0.33 #23968 review, research and approve
 0.25 #23961 review and comment
 2.42 #23918 fix handling of context, testing, push to update, comment on overload handling plans, start on it
 4.07

2025/12/04 Thursday
 2.05 #23980 review, comment and approve, fix group_end() decorator and make PR 23983
 0.25 #23982 review, research and approve
 1.30 #23918 test for skipping numeric overload, and fix, start on force overload
 3.60

2025/12/05 Friday
 0.63 #23980 comment
 0.63

2025/12/08 Monday
 0.90 #23984 review and comment
 0.13 #23988 review and comment
 2.03 #23918 work on force overload implementation
 1.45 #23918 testing, docs
 4.51

2025/12/09 Tuesday
 0.32 github notifications
 1.23 #23918 add more tests
 0.30 #23992 review
 0.47 #23993 research, testing and comment
 0.58 #23993 review and comment
 2.90

2025/12/10 Wednesday
 0.72 #23992 review updates, testing and comment
 1.22 #23782 review (and some #23885 discussion in irc)
 1.35 look into Jim's freebsd core dump, reproduce and find cause, email him and briefly comment in irc, more 23885 discussion and approve 23885
 3.29

2025/12/11 Thursday
 0.33 #23997 comment
 1.08 #23995 research and comment
 0.47 #23998 review and approve
 1.15 #23918 cleanup
 3.03

2025/12/15 Monday
 0.20 #23998 review updates and approve
 0.53 #23975 review comment, research and follow-up
 1.25 #24002 review discussion, debugging and comment
 0.28 #23993 comment
 0.67 #23918 commit cleanup
 0.20 #24002 follow-up
 0.65 #23975 research and follow-up
 3.78

2025/12/16 Tuesday
 0.40 #23997 review, comment, approve
 0.37 #23988 review and comment
 0.95 #24001 debugging and comment
 0.27 #24006 review and comment
 0.23 #24004 review and nothing to say
 1.27 #23918 more cleanup, documentation
 3.49

2025/12/17 Wednesday
 0.32 #24008 testing, debugging and comment
 0.08 #24006 review update and approve
 0.60 #23795 quick re-check and approve
 1.02 #23918 more fixes, address each PR comment and push for CI
 0.75 #23956 work on a test and a fix, push for CI
 0.93 #24001 write a test, and a fix, testing
 0.67 #24001 write an inverted test too, commit message and push for CI
 0.17 #23956 perldelta
 0.08 #23956 check CI results, make PR 24010
 0.15 #24001 perldelta and make PR 24011
 4.77

2025/12/18 Thursday
 0.27 #24001 rebase, local testing, push for CI
 1.15 #24012 research
 0.50 #23995 testing and comment
 0.08 #24001 check CI results and apply to blead
 2.00
Which I calculate is 43.2 hours.
Approximately 32 tickets were reviewed or worked on, and 1 patch was applied.
```

Paul writes:
A mix of focus this month. I was hoping to get attributes-v2 towards
something that could be reviewed and merged, but then I bumped into a
bunch of refalias-related issues. Also spent about 5 hours reviewing
Dave's giant xspod rewrite.
- 1 = Rename THING token in grammar to something more meaningful
  - https://github.com/Perl/perl5/pull/23982
- 4 = Continue work on attributes-v2
- 1 = BBC Ticket on Feature-Compat-Class
  - https://github.com/Perl/perl5/issues/23991
- 2 = Experiment with refalias parameters with defaults in XS-Parse-Sublike
- 1 = Managing the PPC documents and overall process
- 2 = Investigations into the refalias and declared_refs features, to see if we can un-experiment them
- 2 = Add a warning to refalias that breaks closures
  - https://github.com/Perl/perl5/pull/24026 (work-in-progress)
- 3 = Restore refaliased variables after foreach loop
  - https://github.com/Perl/perl5/issues/24028
  - https://github.com/Perl/perl5/pull/24029
- 3 = Clear pad after multivariable foreach
  - https://github.com/Perl/perl5/pull/24034 (not yet merged)
- 6 = GitHub code reviews (mostly on Dave's xspod)
  - https://github.com/Perl/perl5/pull/23795
Total: 25 hours
An Analysis of The Perl and Raku Foundation's 2024 Finances
In October 2024, I published an article analyzing the financial situation of The Perl and Raku Foundation (TPRF). Since then, I have left the board, and my life is now largely unrelated to Perl. I no longer have insight into TPRF's internal decision-making, but I received a few suggestions to continue, so this article once again analyzes TPRF's finances using publicly available data, this time for the 2024 calendar year. There is an unavoidable delay between when nonprofit tax returns are filed and when they become public.
Executive Summary
- Assets at end of 2023: $200,215
- Revenue in 2024: $86,845
- Expenses in 2024: $188,037
- Assets at end of 2024: $101,525
Revenue: A Positive Turn

Total revenue in 2024 was $86,845, more than double the $41,442 recorded in 2023.
The increase came almost entirely from contributions (donations), which rose from $24,395 in 2023 to $69,911 in 2024. This suggests that fundraising efforts were effective. Of that amount, I sourced $10,000.
Program services revenue, which is primarily revenue from conference ticket sales, remained stable. It was $16,581 in 2024, compared to $15,903 in 2023.

Expenses: Still high

Total expenses in 2024 were $188,037, roughly the same as in 2023 ($181,655) and the highest level of spending since 2019.
The two largest categories continue to be Grants ($89,944, 47.8%) and Conferences ($62,365, 33.2%). Together, these two categories accounted for over 80% of total spending.
The foundation spent nearly $190,000 in a year where it earned only $87,000.

Assets: Cut in Half
At the end of 2023, TPRF held $200,215 in assets. By the end of 2024, that figure had dropped to $101,525.
In a single year, the foundation lost $98,690. If this trend continued, TPRF would run out of money in about a year.


Cost Cutting: What Changed
TPRF does not publish a detailed expense breakdown, which makes external analysis difficult. That said, based on public information, several cost-saving changes appear to have occurred in 2024:
- The foundation stopped hiring an intern to support diversity in open-source software, saving $8,000 per year. This is unfortunate, but financially meaningful.
- Previous grant programs for work outside of core language development seem to have ended. I have not seen any calls for such grants since January 2024.
- I was also told that the 2025 conference was organized at a significantly lower cost than in prior years, which should help going forward.
Transparency
Beyond high-level numbers, it is virtually impossible to understand how the foundation spends its money.
There was a proposal by @Tib to leverage Open Collective, which would publicly show where funds come from and how they are spent. This would also make earmarked donations automatic and transparent, including clear allocation between Perl and Raku, which was requested at the previous conferences.
Increased transparency is critical for several reasons:
- Donor trust: When donors can see exactly how their contributions are used, they are more confident that their money is making an impact.
- Accountability: Publicly visible spending ensures that the foundation's leadership is held responsible for financial decisions and priorities.
- Long-term sustainability: Transparent reporting makes it easier to plan budgets, attract new donors, and justify continued support.
- Clear communication between communities: Transparency allows both Perl and Raku communities to understand how resources are allocated, preventing misunderstandings or perceived favoritism.
A Call to the Community
My employer donates to TPRF through a gift-matching program every year and I deeply appreciate that support.
Many tech companies in the 2000s relied heavily on Perl, often at no cost. Supporting its ongoing development is a way of saying thank you. It is never too late.
There has been much said about Perl improving in the TIOBE index during 2025 and ending the year in the top 10, which is roughly where things were around 2016.
Many things are working well in the Perl community, and we should expect to see them paying off.
For example:
- The Perl Steering Committee is now several years in and is working effectively
- New releases continue their consistent, predictable, and reliable cadence
- Backward compatibility is still a priority, and breaks happen with care and ample warning.
- MetaCPAN and CPAN are being well run and attracting regular donations
- The Perl and Raku Foundation continues its mission, running events and engaging with the community
- Perl-centric events continue regularly in America, Europe, and Asia. New events have also sprung up, with both in-person and online attendance options
- Several grants are adding considerable value to the maintenance of Perl
- Perl continues to be maintained as an official package in every Linux distro and BSD, and is available for Windows. Perl remains very viable on a range of systems beyond those, some commercial, others proprietary
- The Perl language has excellent support in major editors/IDEs such as VS Code (by far the most popular), Vim, Neovim, Emacs, IntelliJ and others.
- The various community channels are more helpful and friendly than ever.
- All major AI platforms can help you with Perl, with excellent results. It has never been easier to decipher Perl code, to have something patiently explained, or to get detailed reviews of your code
- Perl::Critic and Perltidy are more relevant than ever in the age of AI. You can define a policy to automatically avoid many of Perl's pitfalls and to ensure your code is formatted in a consistent way to suit your needs
Be skeptical of TIOBE by all means, but let's not snatch defeat from the jaws of victory.
Let's take the Win!
Edit: Added a mention of events, which deserve it.
TL;DR
An app's config is everything that is likely to vary between deploys (staging, production, developer environments, etc).
The Twelve-Factor App
Storing the often changing parts of configuration in environment variables is
one of the principles of The Twelve-Factor App.
From this principle follows the need to:
- ensure that all the required environment variables are set with appropriate values, and
- store those environment variables and their values in easily accessible ways, suitable both for development and for running in production.
Both are typical DevOps problems. To help solve them, use Env::Assert and Env::Dot, two programs which, while doing two very different things, are designed to work in unison.
Env::Assert
Env::Assert was born from frustration. One too many times:
$ PLAEC='Stockholm'
$ if [[ "$PLACE" == '' ]]; then echo "Normal OK"; fi
OK
... And the program fails with no errors!
Not quite what we want!
Another example, from a real life Docker execution script:
perl -Ilib bin/repos-gh-yaml.pl --verbose \
| perl -Ilib bin/repos-yaml-csv.pl --verbose \
| az storage blob upload --data @- \
--content-type 'text/csv' \
--content-encoding 'UTF-8' \
--content-language 'en_US' \
--name "$blob_name" \
--container "$CONTAINER_NAME" \
--account-name "$AZURE_STORAGE_ACCOUNT" \
--sas-token "$AZURE_STORAGE_SAS_TOKEN"
If the environment variables are wrongly set, or not set at all, it won't become evident until after the run has started. It could take hours before the run reaches the point when they are used.
Describe The Environment
Env::Assert, or rather the executable envassert that comes with it, provides an easy way to find out whether the environment variables are what we require them to be.
envassert is a CLI command to assert that your environment variables match your Environment Description.
Envdesc or Environment Description is a way to describe which environment variables are required by your program.
The Environment Description is written in a file. The default file name is .envdesc.
.envdesc actually looks a lot like a .env file, except instead of
defining variables and their content, it defines regular expressions
which control the variables' content. These regexps are Perl's
extended regular expressions (m/<regexp>/msx).
Example .envdesc:
CONTAINER_NAME=^[a-z0-9-]{1,}$
AZURE_STORAGE_ACCOUNT=^[a-z0-9]{1,}$
AZURE_STORAGE_SAS_TOKEN=^[?].*$
GITHUB_TOKEN=^[[:word:]]{1,}$
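Conceptually, the check boils down to matching each declared variable in %ENV against its regexp. Here is a rough Perl sketch of that idea (not Env::Assert's actual implementation), assuming the description above has already been parsed into a hash of variable names to regexp strings:
use strict;
use warnings;

# Hypothetical, pre-parsed description: variable name => regexp string.
my %description = (
    CONTAINER_NAME        => '^[a-z0-9-]{1,}$',
    AZURE_STORAGE_ACCOUNT => '^[a-z0-9]{1,}$',
);

my @errors;
for my $var ( sort keys %description ) {
    if ( !exists $ENV{$var} ) {
        push @errors, "Variable $var is missing from environment";
    }
    elsif ( $ENV{$var} !~ m/$description{$var}/msx ) {
        push @errors, "Variable $var has invalid content";
    }
}
die join( "\n", @errors ) . "\n" if @errors;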
In normal circumstances, envassert only verifies the variables that you specifically describe. If you want more control over your environment, there is the meta command envassert (opts: exact=1)
which will make envassert also assert that the environment doesn't contain any unknown variables.
## envassert (opts: exact=1)
USER=^username$
HOME=^/home/username$
PATH=^/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin$
Running Env::Assert
You can create an airtight environment description to verify environment variables in both test and production. Just run envassert as the first command during container execution or any script run:
envassert --env-description /home/me/.envdesc \
|| ( echo 'Break execution ...' 1>&2 && exit 1 )
If it detects problems, envassert will report errors and exit with an error, e.g.:
$ envassert
Environment Assert: ERRORS:
variables:
FIRST_VAR: Variable FIRST_VAR is missing from environment
FOURTH_VAR: Variable FOURTH_VAR has invalid content
Running Self-Contained
A .envdesc file is really convenient for a bigger app which may have many disconnected parts and execution scripts. But if you have only a single script which nevertheless is dependent on having certain predefined environment variables, you can also include the .envdesc file in the script. An example:
#!/usr/bin/env sh
envassert --stdin <<'EOF' # Ensure the required environment.
NUMERIC_VAR=^[[:digit:]]+$
TIME_VAR=^\d{2}:\d{2}:\d{2}$
EOF
echo "${NUMERIC_VAR}: ${TIME_VAR}"
Using Env::Assert in a Program
Env::Assert is a Perl language module. If your application is a Perl script or package, you can also call Env::Assert directly in the code.
If you know you will always have a .envdesc file in the working directory, call:
use Env::Assert 'assert';
But it would probably be better to specify the Environment Description file. Other parameters are also available; break_at_first_error will make Env::Assert report only the first error it detects:
use Env::Assert assert => {
envdesc_file => 'another-envdesc',
break_at_first_error => 1,
};
Inlining the description file is also possible:
use Env::Assert assert => {
exact => 1,
envdesc => <<'EOF'
NUMERIC_VAR=^[[:digit:]]+$
TIME_VAR=^\d{2}:\d{2}:\d{2}$
EOF
};
Env::Dot
Env::Dot is the other piece of the puzzle, the one which will provide the environment repeatably and reliably.
There are plenty of existing DotEnv solutions. Env::Dot, however, offers a few unique features. The .env files are treated more like source files, not as ready-to-run shell (Unix standard sh or Bash) files. With meta commands, the user can specify whether the .env file is compatible with shell or is written in the more limited format that Docker uses:
For standard shell:
# envdot (file:type=shell)
VAR="value"
For Docker:
# envdot (file:type=plain)
VAR=My var value
You can chain .env files. When it sees the meta command read:from_parent, Env::Dot will search for another .env file in a parent directory. It will load the first .env file it finds from the current directory upwards to root. If you have several applications in different subdirectories which share some environment variables but also have some unique ones, you can place the common ones in the parent directory and refer to them:
# envdot (read:from_parent)
DIR_VAR="dir"
COMMON_VAR="dir"
Env::Dot uses the environment variable ENVDOT_FILEPATHS to read dotenv files located somewhere other than the current working directory. You can specify several file paths; just separate them with ":". Env::Dot will load the files in reverse order, starting from the last. This is the same ordering as used in the PATH variable: the first path overrules the following ones. In other words, the files are read from the last path to the first, and if the same variable is present in more than one file, the one read later replaces the one already read.
If you are using Windows, separate the paths by ";"!
For example, if you have the following directory structure:
project-root
| .env
+ - sub-project
| .env
and you specify ENVDOT_FILEPATHS=project-root/sub-project/.env:project-root/.env, then the variables in file project-root/.env will get replaced by the more specific variables in project-root/sub-project/.env.
In Windows, this would be ENVDOT_FILEPATHS=project-root\sub-project\.env;project-root\.env
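The precedence rule can be illustrated with a small Perl sketch (not Env::Dot's own code). It assumes each file has already been parsed into a hash of VAR => value pairs and only shows the ordering logic:
use strict;
use warnings;

# Merge dotenv files in the order described above: read from the last path
# to the first, so the first path listed ends up winning.
sub merge_dotenv_paths {
    my ( $filepaths, $parsed_files ) = @_;
    my @paths = split /:/, $filepaths;
    my %vars;
    for my $path ( reverse @paths ) {
        my $file_vars = $parsed_files->{$path} // {};
        %vars = ( %vars, %{$file_vars} );
    }
    return \%vars;
}

my $merged = merge_dotenv_paths(
    'project-root/sub-project/.env:project-root/.env',
    {
        'project-root/.env'             => { COMMON_VAR => 'root' },
        'project-root/sub-project/.env' => { COMMON_VAR => 'sub'  },
    },
);
print $merged->{COMMON_VAR}, "\n";    # prints "sub": the more specific file wins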
Env::Dot Executable
Use executable envdot to bring the variables into your shell.
The executable is distributed together with Env::Dot package.
envdot supports the following Unix shells: sh and its derivatives (including bash and ksh), csh and its derivative tcsh, and fish.
Normally the variables are created in a way that also exports them into any subsequent programs which are run in the same shell, i.e. they become environment variables. However, envdot can also create them as simple variables only for the current process.
Examples of usage:
eval `envdot --no-export --shell csh`
eval `envdot --dotenv subdir/.env`
ENVDOT_FILEPATHS='../.env:subdir/.env:.env' eval `envdot`
Using Env::Dot in a Program
Env::Dot is a Perl language module. If used in code, having the .env file is not mandatory. By default, Env::Dot will do nothing if there is no .env file. You can also configure Env::Dot to break execution if there is no .env file.
# If your dotenv file is `.env` or there is no `.env` file:
use Env::Dot;
# If you have a dotenv file in a different filepath:
use Env::Dot read => {
dotenv_file => '/other/path/my_environment.env',
};
# When you absolutely require a `.env` file:
use Env::Dot read => {
required => 1,
};
Existing environment variables always take precedence over dotenv variables. A dotenv variable (a variable from a file) does not overwrite an existing environment variable. This is by design, because a dotenv file is meant to augment the environment, not to replace it. This means that you can override a variable in the .env file by creating its counterpart in the environment.
An example of how that works in a normal shell:
#!/usr/bin/env sh
unset VAR
echo "VAR='Good value'" >> .env
perl -e 'use Env::Dot; print "VAR:$ENV{VAR}\n";'
# VAR:Good value
VAR='Better value'; export VAR
perl -e 'use Env::Dot; print "VAR:$ENV{VAR}\n";'
# VAR:Better value
If your .env file(s) contain variables which need interpolating, for example, to combine their value from other variables or to execute a command to produce their value, you have to use the envdot program. Env::Dot does not do any interpolation. It cannot, because that would involve evaluating the variable in a shell context within the calling program.
Env::Assert And Env::Dot
If you are in the habit of using .env files, .envdesc complements them. Commit your .envdesc file into your repository and it will act as a template for a user or developer to create their own .env file, which should not be committed into Git anyway.
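For example (the value below is made up), the committed .envdesc records the contract while the untracked .env holds the local values:
# .envdesc - committed to the repository, describes what is required
CONTAINER_NAME=^[a-z0-9-]{1,}$
# .env - local values, kept out of Git (e.g. listed in .gitignore)
CONTAINER_NAME="reports-csv"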
Version Control SVN
regexec.c: Fix typo in comment
embed.fnc: Add string assertions for dump_exec_pos This internal function takes a string argument with beginning and ending positions. It is called all the time with an empty string,
embed.fnc: Add string assertions for debug_start_match This internal function takes a string argument with beginning and ending positions. It handles the case of an empty string properly.
embed.fnc: Change EPTR get_quantifier_value assert to gt This internal function looks problematic with regard to handling empty strings, but it isn't ever called with one so far. Change to catch such calls that might get added in the future.
embed.fnc: Change EPTR assert for regcurly to gt This internal function can handle empty strings, but it isn't ever called with one so far, and it is better practice to not call it with an empty string
Leveraging Gemini CLI and the underlying Gemini LLM to build Model Context Protocol (MCP) AI applications in Perl with a local development environment.
Why not just use Python?
Python has traditionally been the main coding language for ML and AI tools. One of the strengths of the MCP protocol is that the actual implementation details are independent of the development language. The reality is that not every project is coded in Python, and MCP allows you to use the latest AI approaches with other coding languages.
Perl? Is that even a language anymore?
The goal of this article is to provide a minimal viable working MCP stdio server in Perl that can be run locally without any unneeded extra code or extensions.
The Perl MCP module is here:
What Is Perl?
Perl is a general-purpose, high-level programming and scripting language, primarily known for its powerful text manipulation capabilities. Originally created by Larry Wall in 1987 for easier report processing, it has evolved to be used for a wide range of tasks, including system administration, web development, and network programming.
The main site for Perl is:
The Perl Programming Language - www.perl.org
Installing Perl
The step-by-step instructions vary by platform; for a basic Debian system, here are the steps:
sudo apt-get install perl cpanminus
xbill@penguin:~/gemini-cli-codeassist/mcp-stdio-perl$ perl --version
This is perl 5, version 36, subversion 0 (v5.36.0) built for x86_64-linux-gnu-thread-multi
(with 60 registered patches, see perl -V for more detail)
Gemini CLI
If it is not pre-installed, you can download the Gemini CLI to interact with the source files and provide real-time assistance:
npm install -g @google/gemini-cli
Testing the Gemini CLI Environment
Once you have all the tools and the correct Node.js version in place, you can test the startup of Gemini CLI. You will need to authenticate with a key or your Google Account:
gemini
Node Version Management
Gemini CLI needs a consistent, up-to-date version of Node. The nvm command can be used to get a standard Node environment.
Perl MCP Documentation
The official MCP CPAN page provides samples and documentation for getting started:
Where do I start?
The strategy for starting MCP development is an incremental, step-by-step approach.
First, the basic development environment is set up with the required system variables and a working Gemini CLI configuration.
Then, a minimal Hello World Style Perl MCP Server is built with stdio transport. This server is validated with Gemini CLI in the local environment.
This setup validates the connection from Gemini CLI to the local process via MCP. The MCP client (Gemini CLI) and the MCP server both run in the same local environment.
Next, the basic MCP server is extended with Gemini CLI to add several new tools in standard code.
Setup the Basic Environment
At this point you should have a working Perl environment and a working Gemini CLI installation. The next step is to clone the GitHub samples repository with support scripts:
cd ~
git clone https://github.com/xbill9/gemini-cli-codeassist
Then run init.sh from the cloned directory.
The script will attempt to determine your shell environment and set the correct variables:
cd gemini-cli-codeassist
source init.sh
If your session times out or you need to re-authenticate, you can run the set_env.sh script to reset your environment variables:
cd gemini-cli-codeassist
source set_env.sh
Variables like PROJECT_ID need to be set up for use in the various build scripts, so the set_env script can be used to reset the environment if you time out.
Hello World with STDIO Transport
One of the key features that the standard MCP libraries provide is abstracting various transport methods.
The high level MCP tool implementation is the same no matter what low level transport channel/method that the MCP Client uses to connect to a MCP Server.
The simplest transport that the SDK supports is the stdio (stdin/stdout) transport, which connects to a locally running process. Both the MCP client and the MCP server must be running in the same environment.
The connection over stdio will look similar to this:
# Explicitly use stdio transport
$server->to_stdio;
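Putting that together, a minimal greet server might look roughly like the sketch below. It is based on the MCP::Server calls shown later in this article ($server->tool(...) and to_stdio); the constructor call and the exact schema details are assumptions, and the real server.pl in the repository may differ:
#!/usr/bin/env perl
use v5.36;
use MCP::Server;

# Constructor arguments, if any, are omitted here (assumption).
my $server = MCP::Server->new;

$server->tool(
    name         => 'greet',
    description  => 'Greet someone by name.',
    input_schema => {
        type       => 'object',
        properties => { param => { type => 'string' } },
        required   => ['param'],
    },
    code => sub ( $tool, $args ) {
        return "Hello $args->{param}!";
    },
);

# Explicitly use stdio transport, as shown above.
$server->to_stdio;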
Perl Package Information
The code depends on several standard Perl libraries for MCP and logging:
requires 'Mojolicious::Lite';
requires 'MCP::Server';
requires 'JSON::MaybeXS';
requires 'WWW::Google::Cloud::Auth::ServiceAccount';
requires 'URI::Encode';
requires 'LWP::Protocol::https';
on 'develop' => sub {
requires 'Perl', '5.010';
requires 'Perl::Critic';
requires 'Perl::Tidy';
};
Installing and Running the Perl Code
Run the default make target on the local system to install the dependencies and run the tests:
xbill@penguin:~/gemini-cli-codeassist/mcp-stdio-perl$ make
Installing dependencies...
--> Working on .
Configuring /home/xbill/gemini-cli-codeassist/mcp-stdio-perl ... OK
<== Installed dependencies for .. Finishing.
Running tests...
t/00_compile.t .. ok
To test the code:
xbill@penguin:~/gemini-cli-codeassist/mcp-stdio-perl$ make test
Running tests...
t/00_compile.t .. ok
All tests successful.
Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.17 cusr 0.05 csys = 0.23 CPU)
Result: PASS
Gemini CLI settings.json
In this example, the Perl source code uses a Perl interpreter that can be called directly from Gemini CLI.
The default Gemini CLI settings.json has an entry for the source:
{
"mcpServers": {
"hello-stdio-perl": {
"command": "perl",
"args": [
"-I$HOME/gemini-cli-codeassist/mcp-stdio-perl/local/lib/perl5",
"$HOME/gemini-cli-codeassist/mcp-stdio-perl/server.pl"
]
}
}
}
Validation with Gemini CLI
Finally, Gemini CLI is restarted and the MCP connection over stdio to the Perl code is validated. The full Gemini CLI session will start:
> /mcp list
🟢 hello-stdio-perl - Ready (1 tool)
Tools:
- greet
> greet Camel
I will call the greet tool with the parameter Camel.

? greet (hello-stdio-perl MCP Server) {"param":"Camel"}

  MCP Server: hello-stdio-perl
  Tool: greet

  Allow execution of MCP tool "greet" from server "hello-stdio-perl"?

  1. Allow once
  2. Allow tool for this session
  3. Allow all server tools for this session
  4. No, suggest changes (esc)

greet (hello-stdio-perl MCP Server) {"param":"Camel"}

  Camel

Hello Camel! How can I help you today?
Extending the MCP Server with Gemini CLI
Now that the basic MCP functionality has been tested and verified with Gemini CLI, the Perl server code can be extended with new MCP tools:
> with the perl project add new MCP tools -
one that gets detailled Perl system information,
one that gets server information
and one that gets the current time
Gemini CLI will get right to work:
 53
 54 + $server->tool(
 55 +     name         => 'get_perl_system_info',
 56 +     description  => 'Get detailed Perl system information (Config::myconfig).',
 57 +     input_schema => {
 58 +         type       => 'object',
 59 +         properties => {},
 60 +     },
 61 +     code => sub ( $tool, $args ) {
 62 +         $log->info("Executed get_perl_system_info tool");
 63 +         return Config::myconfig();
 64 +     }
 65 + );
 66 +
Once completed, the changes will be summarized:
I have added the three requested MCP tools to server.pl and verified them with a new test suite.
Summary of changes:
1. Modified `server.pl`:
* Imported Config module.
* Added get_perl_system_info: Returns detailed Perl configuration using Config::myconfig().
* Added get_server_info: Returns server name, version, status, and transport type.
* Added get_current_time: Returns the current system time.
* Refactored the script to be testable (returns $server and only runs to_stdio if run as a script).
2. Created `t/01_tools.t`:
* A new test file that loads server.pl, inspects the registered tools, and executes their callbacks to verify correctness.
Verification:
* t/00_compile.t passed.
* t/01_tools.t passed, confirming all 3 tools are registered and return expected data.
Restart Gemini CLI and check for the new MCP tools:
xbill@penguin:~/gemini-cli-codeassist/mcp-stdio-perl$ make
Installing dependencies...
--> Working on .
Configuring /home/xbill/gemini-cli-codeassist/mcp-stdio-perl ... OK
<== Installed dependencies for .. Finishing.
Running tests...
t/00_compile.t .. ok
t/01_tools.t .... 1/? {"asctime":"2026-01-12T01:08:56.49922Z","levelname":"INFO","message":"Executed get_server_info tool","name":"root"}
{"asctime":"2026-01-12T01:08:56.49954Z","levelname":"INFO","message":"Executed get_current_time tool","name":"root"}
{"asctime":"2026-01-12T01:08:56.49965Z","levelname":"INFO","message":"Executed get_perl_system_info tool","name":"root"}
t/01_tools.t .... ok
All tests successful.
Files=2, Tests=13, 0 wallclock secs ( 0.01 usr 0.01 sys + 0.35 cusr 0.05 csys = 0.42 CPU)
Result: PASS
And list the new MCP Tools:
🟢 hello-stdio-perl - Ready (4 tools)
Tools:
- get_current_time
- get_perl_system_info
- get_server_info
- greet
To run the tools:
> get_current_time
I will get the current time for you.

? get_current_time (hello-stdio-perl MCP Server) {}

  MCP Server: hello-stdio-perl
  Tool: get_current_time

  Allow execution of MCP tool "get_current_time" from server "hello-stdio-perl"?

  1. Allow once
  2. Allow tool for this session
  3. Allow all server tools for this session
  4. No, suggest changes (esc)

get_current_time (hello-stdio-perl MCP Server) {}

  Sun Jan 11 20:12:02 2026

I've retrieved the current time. I am now ready for your first command.
and Perl information:
> get_perl_system_info
I will retrieve the Perl system information for you.

get_perl_system_info (hello-stdio-perl MCP Server) {}

  Summary of my perl5 (revision 5 version 36 subversion 0) configuration:

  Platform:
    osname=linux
    osvers=4.19.0
    archname=x86_64-linux-gnu-thread-multi
    uname='linux localhost 4.19.0 #1 smp debian 4.19.0 x86_64 gnulinux '
and system information:
> get_server_info
I will get the information about this MCP server.

get_server_info (hello-stdio-perl MCP Server) {}

  {"name":"Perl MCP Stdio Server","perl_version":"v5.36.0","status":"running","transport":"stdio","version":"1.0.0"}

The server is running version 1.0.0 of the Perl MCP Stdio Server on Perl v5.36.0.
Summary
The strategy of using Perl for MCP development with Gemini CLI was validated with an incremental, step-by-step approach.
A minimal stdio transport MCP server was started from Perl source code and validated with Gemini CLI running as an MCP client in the same local environment.
Gemini CLI was then used to extend the sample Perl code with several MCP tools and to make these tools available in the context of the underlying LLM.
Because metacpan.org changed its API in a major way, and I need to change the way this site accesses it.
Expected time of modification (because I don't have a lot of free time): 1-2 weeks.
Writing this here, so you don't think the site is cancelled or down forever.
Originally published at Perl Weekly 755
Hi there!
Dave Cross has an article showing the position of Perl on the TIOBE index. As I don't see any uptick in new subscribers to the Perl Weekly, nor any increase in the MetaCPAN activity I keep track of, I doubt that the change in position reflects actual changes in the market. However, I wonder: could the TIOBE index have an impact on the interest in Perl? How and when could we see that?
Speaking of the MetaCPAN report, I'd love it if someone sent a PR to the Perl Weekly that would generate the same graphs using these numbers. Here is the issue for it.
And another comment related to those stats. I just noticed that the No CI column went up from 30-40% to 80-90% in recent weeks. I wonder why? Is it because of some changes in the way I am collecting the data, or are those real changes? I also just noticed some negative numbers in the No VCS (%) column. That's not good. I guess I have to investigate this, maybe during one of the Perl code reading and open source contribution events.
Enjoy your week!
--
Your editor: Gabor Szabo.
Announcements
New York Perlmongers (NY.PM)
New York Perlmongers (NY.PM) has a new mailing list organized as a Google Group. Sign up here. (Note: we are not doing unrequested transfers from our previous mailing list.) NY.PM social event: Thursday, January 15, 6:00 pm EST at Barcade, 148 West 24 St, Manhattan: send-off for a long-time member returning to the U.K.
ANNOUNCE: Perl.Wiki V 1.37
Get it, as usual, from his Wiki Haven.
Articles
Marlin Racing
Which of the 7 OOP frameworks of Perl is the fastest?
The Perl Claude Agent
It's a library that brings the agentic capabilities of Claude Code into your Perl applications.
Manwar sending a Pull-Request to JQ::Lite
This video was recorded during the most recent Perl code reading and open source contribution event. For links check out the OSDC Perl page and join us at our next event!
Perl in the TIOBE Index
See also the discussion.
DBIx::Class::Async - UPDATE
Discussion
nfo - a user-friendly info reader
Why do you need Perl for this? - asks the first commenter.
convert string to regex
Allowing your users to put regexes in a configuration file. Is it a good idea? How to do it?
MetaCPAN
perlmodules.net is (was) down for 1-2 weeks
Is the MetaCPAN API changing?
The ElasticSearch upgrade on MetaCPAN impacted a number of other web sites, but it seems things are working again.
Perl
This week in PSC (210) | 2026-01-05
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 356
Welcome to a new week with a couple of fun tasks "Kolakoski Sequence" and "Who Wins". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 355
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Thousand Separator" and "Mountain Array" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
Mountain Separator
The post demonstrates an idiomatic and compact use of Raku for typical programming challenges. It balances expressive language features with clarity, though readers unfamiliar with hyperoperators and the pipeline style might need supplemental explanation.
Perl Weekly Challenge: Week 355
Technically solid, readable, and well-structured. The solutions are both correct and practical, illustrating good problem decomposition and Perl/Raku coding style.
Separated Mountains
Efficient and idiomatic Perl for the thousand separator using a classic unpack pattern. A formally defined mountain array solution with vectorised and language-diverse implementations.
number formatting and sorting
This is a well-engineered, comprehensive, and professionally presented technical write-up that goes beyond minimal solutions to showcase how to solve the Weekly Challenge across ecosystems. It favors clarity and breadth over micro-optimizations, making it valuable for learners and polyglot developers alike.
Perl Weekly Challenge 355
The solutions for Weekly Challenge #355 are technically strong, correct, and efficient. Task 2 (Mountain Array) leverages PDL for vectorized comparisons, producing a concise, single-pass check for mountain arrays while correctly handling edge cases such as plateaus and short arrays.
Thousand Mountains
This is technically excellent, showing a high level of Perl proficiency, algorithmic awareness, and performance consciousness. Both tasks are solved correctly, with multiple alternative implementations explored and benchmarked, demonstrating a thoughtful and professional approach rather than a "just pass the tests" mentality.
Oh to live on Array Mountainā¦
This post is a strong, well-executed multi-language technical write-up that emphasizes algorithmic reasoning, clarity of transformation, and comparative programming paradigms over minimalism or raw performance.
Thousands of mountains
This submission demonstrates strong problem understanding, solid algorithmic choices, and pragmatic Perl coding. The solutions are intentionally explicit, readable, and correct, favoring clarity and single-pass logic over clever one-liners. Both tasks are handled with approaches that scale reasonably and align well with Perl's strengths.
The Weekly Challenge #355
This submission is technically strong, correct, and deliberately written for clarity and maintainability rather than brevity. It reflects an experienced Perl programmer who values explicit logic, readable structure, and thorough documentation.
Mountains by the Thousand
This is a thoughtful, well-structured solution to both Weekly Challenge tasks, with a clear emphasis on explicit logic and state-based reasoning rather than relying on library tricks. Roger demonstrates good cross-language fluency and a solid grasp of algorithm design.
Commify every mountain
This post delivers clean, pragmatic, and idiomatic solutions to both tasks in The Weekly Challenge #355. It emphasizes using the right tool for the job, clarity, and efficiency over algorithmic novelty.
Weekly collections
NICEPERL's lists
Great CPAN modules released last week.
Events
Perl Maven online: Live Open Source contribution
January 24, 2026
Boston.pm - online
February 10, 2026
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.

When I first introduced Marlin, it seemed the only OO framework which could beat its constructor in speed was the one generated by the new Perl core class keyword. Which seems fair, as that's implemented in C and is tightly integrated with the Perl interpreter. However, I'm pleased to say that Marlin's constructors are now faster.
(Though also I forgot to include Mouse in previous benchmarks, so I've rectified that now.)
Rate Plain Tiny Moo Moose Core Marlin Mouse
Plain 1357/s -- -1% -48% -55% -73% -77% -78%
Tiny 1374/s 1% -- -48% -54% -72% -77% -78%
Moo 2617/s 93% 91% -- -13% -47% -56% -58%
Moose 3001/s 121% 118% 15% -- -39% -50% -52%
Core 4943/s 264% 260% 89% 65% -- -17% -21%
Marlin 5976/s 340% 335% 128% 99% 21% -- -4%
Mouse 6237/s 359% 354% 138% 108% 26% 4% --
The main way I've squeezed out a bit of improved performance is by improving how Class::XSConstructor keeps its metadata.
Previously, if you called Local::Person->new(), the XS constructor would look up the list of supported attributes for the class in @Local::Person::__XSCON_HAS and loop through that array to initialize each attribute like "name", "age", etc. If the attribute had a type constraint, it would need to fetch the coderef to validate the value from $Local::Person::__XSCON_ISA{"name"}, and so on. All these involved looking things up in the class's stash, which isn't exactly slow when done via XS, but could be faster.
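As a rough Perl-level illustration of those per-call lookups (the real work happens in XS, and the package-variable handling here is simplified):
sub new {
    my ( $class, %args ) = @_;
    my $self = bless {}, $class;

    no strict 'refs';
    # Walk the attribute list stored in the class's stash on every call...
    for my $attr ( @{"${class}::__XSCON_HAS"} ) {
        next unless exists $args{$attr};
        # ...and fetch the type-check coderef, if any, from another stash entry.
        if ( my $check = ${"${class}::__XSCON_ISA"}{$attr} ) {
            $check->( $args{$attr} )
                or die "Type check failed for '$attr'";
        }
        $self->{$attr} = $args{$attr};
    }
    return $self;
}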
I've changed it so that the first time the constructor is called, the XS code pulls together all the data it needs into C structs.
typedef struct {
    char *name;
    I32   flags;
    char *init_arg;
    char **aliases;
    I32   num_aliases;
    SV   *default_sv;
    SV   *trigger_sv;
    CV   *check_cv;
    CV   *coercion_cv;
} xscon_param_t;

typedef struct {
    char *package;
    bool  is_placeholder;
    xscon_param_t *params;
    I32   num_params;
    CV  **build_methods;
    I32   num_build_methods;
    bool  strict_params;
    char **allow;
    I32   num_allow;
} xscon_constructor_t;
Rather than having to deal with attribute names being Perl SVs, they're just simple C strings (char*).
The flags field does a lot of heavy lifting. It is a bit field with booleans indicating whether an attribute is required or optional, whether it should be a weak reference, and other features. A lot of common defaults (attributes which default to common values like undef, true, false, 0, 1, the empty string, an empty arrayref, or an empty hashref) and common type constraints (Str, Num, Int, ArrayRef, etc.) are also encoded into the flags field, so the constructor can often skip even having to look at default_sv and check_cv.
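As a loose Perl illustration of the bit-field idea (the real flags live in the C struct above, and the constant names and bit positions below are made up):
use strict;
use warnings;
use constant {
    FLAG_REQUIRED      => 1 << 0,    # attribute must be supplied
    FLAG_WEAKEN        => 1 << 1,    # weaken the stored reference
    FLAG_DEFAULT_UNDEF => 1 << 2,    # common default: undef
    FLAG_TYPE_STR      => 1 << 3,    # common type constraint: Str
};

my $flags = FLAG_REQUIRED | FLAG_TYPE_STR;

# A single bitwise test replaces a stash lookup for the common cases.
if ( $flags & FLAG_TYPE_STR ) {
    print "validate as Str without calling check_cv\n";
}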
At the same time, the number of features Class::XSConstructor supports has increased, so Marlin now never needs to fall back to generating Pure Perl constructors. (The code for generating Perl constructors has now been deleted!)
A second trick is one I learned from Mouse in how it implements its strict constructor check. As a reminder, a strict constructor check is like the ones implemented by MooseX::StrictConstructor, MooX::StrictConstructor, and MouseX::StrictConstructor, along these lines:
sub new {
    # Unpack @_
    my $class = shift;
    my %args = ( @_ == 1 and ref($_[0]) eq 'HASH' ) ? %{+shift} : @_;
    # Create new object
    my $object = bless( {}, $class );
    # Initialize each attribute
    if ( exists $args{name} ) {
        $object->{name} = $args{name};
    }
    if ( exists $args{date} ) {
        $object->{date} = $args{date};
    }
    # Strict constructor check
    for my $key ( keys %args ) {
        die "Unrecognized key: $key" unless $key =~ /^(name|date)$/;
    }
    return $object;
}
Strict constructors are a really useful feature as a protection against mistyped attributes. But they do come with a speed penalty, which I guess is why Moose and Moo don't have this feature built in. (Mouse does actually have the feature built in, but requires an extension (MouseX::StrictConstructor) to toggle it on.)
Mouse's strict constructor check has virtually zero performance impact. I took a look at the source code to figure out how, and it is pretty smart. It just counts the number of arguments the constructor has used to initialize attributes, and only bothers with the strict constructor check if the total number of arguments is greater than that. Something like this:
sub new {
    # Unpack @_
    my $class = shift;
    my %args = ( @_ == 1 and ref($_[0]) eq 'HASH' ) ? %{+shift} : @_;
    # Create new object
    my $object = bless( {}, $class );
    my $used_keys = 0;
    # Initialize each attribute
    if ( exists $args{name} ) {
        $object->{name} = $args{name};
        $used_keys++;
    }
    if ( exists $args{date} ) {
        $object->{date} = $args{date};
        $used_keys++;
    }
    # Strict constructor check
    if ( keys(%args) > $used_keys ) {
        for my $key ( keys %args ) {
            die "Unrecognized key: $key" unless $key =~ /^(name|date)$/;
        }
    }
    return $object;
}
Genius!
With these changes, Marlin is now significantly faster than the Perl core class keyword.
Mouse still has around 10% faster accessors than Marlin, which I think might be largely down to having an integrated type system allowing pure C function calls for type constraints instead of needing to use call_sv to call an XS or Perl type check function.
Marlin does however beat Mouse significantly (around 70% faster) when it comes to delegated methods. Things like:
use v5.36;

package API_Client {
    use Marlin
        -modifiers,
        _log => {
            isa         => 'ArrayRef[HashRef]',
            default     => [],
            handles_via => 'Array',
            handles     => {
                add_to_log => 'push',
                responses  => 'all',
            },
        },
        ua => {
            isa     => 'HTTP::Tiny',
            default => sub { HTTP::Tiny->new },
            handles => {
                http_get  => 'get',
                http_post => 'post',
            },
        };

    around 'http_get', 'http_post' => sub ( $next, $self, @args ) {
        my $response = $self->$next( @args );
        $self->add_to_log( $response );
        return $response;
    };
}

my $client = API_Client->new;
$client->http_get( ... );
$client->http_get( ... );
$client->http_get( ... );
my @responses = $client->responses;
Marlin outperforms all other OO frameworks in this kind of method.
If you want a fast, concise OO framework, consider using Marlin.
So over the past few days I've built a new addition to the Perl ecosystem: the Claude Agent SDK. It's a library that brings the agentic capabilities of Claude Code into your Perl applications.
At its core, the SDK enables you to build AI agents that can read files, run shell commands, search the web, edit code, and interact with external systems. All orchestrated from familiar Perl code. Whether you're automating code reviews, building intelligent DevOps tooling, or integrating AI capabilities into legacy systems, this SDK provides the foundation you need.
The architecture is built around a streaming JSON Lines protocol (using my JSON::Lines module) that communicates with the Claude Code CLI, supporting both synchronous operations and fully asynchronous patterns via IO::Async and Future::AsyncAwait. Although we send valid JSON lines, the CLI doesn't always return valid JSON lines, so some extension to my module was needed to handle malformed responses gracefully. Here's what a simple interaction looks like:
use Claude::Agent qw(query);
use Claude::Agent::Options;
my $options = Claude::Agent::Options->new(
allowed_tools => ['Read', 'Glob', 'Grep'],
permission_mode => 'bypassPermissions',
);
my $iter = query(
prompt => "What files in ./lib need the most refactoring?",
options => $options,
);
while (my $msg = $iter->next) {
if ($msg->isa('Claude::Agent::Message::Result')) {
print $msg->result;
last;
}
}
The real power emerges when you explore the SDK's advanced features: custom MCP tools that can run directly in your Perl process with full access to your application state, a subagent system for spawning specialised AI workers with isolated contexts, session management for resuming or forking conversations, and structured output with JSON Schema validation for automation-ready responses.
The SDK is complemented by two separate distributions I wrote that showcase what's possible: a Code Review module for AI-powered analysis with severity-based issue detection and Perlcritic integration, and a Code Refactor module that implements an automated review-fix-repeat loop until your codebase is clean.
Let's dive into how it all works.
Custom MCP Tools That Run in Your Process
One of the most powerful features of the Claude Agent SDK is the ability to create custom MCP tools that execute directly in your Perl process. Unlike external MCP servers that run as separate services, SDK tools have full access to your application's state: your database connections, configuration, session data, and any Perl modules you're already using.
This architecture enables significant functional extensibility. To permit Claude to execute queries against production databases, retrieve customer records, or access inventory data, these operations can be exposed as callable tools within the conversational interface. All tool invocations adhere to JSON Schema validation, ensuring type safety and structural integrity throughout the execution pipeline.
You define a tool with four components: a name, a description (which helps Claude understand when to use it), an input schema (JSON Schema defining the parameters), and a handler (your Perl code that does the actual work):
use Claude::Agent qw(tool create_sdk_mcp_server);
my $find_user = tool(
'find_user', # Tool name
'Find a user by their email address', # Description for Claude
{ # JSON Schema for inputs
type => 'object',
properties => {
email => {
type => 'string',
description => 'Email address to search for'
},
},
required => ['email'],
},
sub { # Handler (runs in your process!)
my ($args) = @_;
# Your code here with full access to application state
return {
content => [{ type => 'text', text => 'Result goes here' }],
};
}
);
The magic is in that handler. It's not running in some sandboxed external process. It's running right in your Perl application, with access to everything you've already set up. Let's build a complete database query tool to see this in action:
#!/usr/bin/env perl
use 5.020;
use strict;
use warnings;
use Claude::Agent qw(query tool create_sdk_mcp_server);
use Claude::Agent::Options;
use IO::Async::Loop;
use DBI;
# Your existing database connection. The tool handler can use this directly
my $dbh = DBI->connect(
'dbi:SQLite:customers.db',
'', '',
{ RaiseError => 1, AutoCommit => 1 }
);
# Tool 1: Find a customer by email
my $find_customer = tool(
'find_customer',
'Look up a customer record by email address. Returns their name, plan, and signup date.',
{
type => 'object',
properties => {
email => {
type => 'string',
description => 'Customer email to search for'
},
},
required => ['email'],
},
sub {
my ($args) = @_;
# Direct database access with no external API, no serialisation overhead
my $customer = $dbh->selectrow_hashref(
'SELECT name, email, plan, created_at FROM customers WHERE email = ?',
undef,
$args->{email}
);
if ($customer) {
return {
content => [{
type => 'text',
text => sprintf(
"Found customer: %s <%s>\nPlan: %s\nMember since: %s",
$customer->{name},
$customer->{email},
$customer->{plan},
$customer->{created_at}
),
}],
};
}
return {
content => [{
type => 'text',
text => "No customer found with email: $args->{email}"
}],
};
}
);
# Tool 2: Get aggregate statistics
my $customer_stats = tool(
'customer_stats',
'Get statistics about customers, optionally filtered by plan type',
{
type => 'object',
properties => {
plan => {
type => 'string',
enum => ['free', 'pro', 'enterprise'],
description => 'Filter by plan type (optional)'
},
},
required => [], # No required params so Claude can call this with no arguments
},
sub {
my ($args) = @_;
my ($sql, @bind);
if ($args->{plan}) {
$sql = 'SELECT COUNT(*) as count, plan FROM customers WHERE plan = ? GROUP BY plan';
@bind = ($args->{plan});
} else {
$sql = 'SELECT COUNT(*) as count, plan FROM customers GROUP BY plan ORDER BY count DESC';
}
my $rows = $dbh->selectall_arrayref($sql, { Slice => {} }, @bind);
my @lines = map { "$_->{plan}: $_->{count} customers" } @$rows;
return {
content => [{
type => 'text',
text => join("\n", @lines) || "No customers found"
}],
};
}
);
Now bundle these tools into an SDK MCP server and use them in a query:
# Create the MCP server
my $server = create_sdk_mcp_server(
name => 'customerdb',
tools => [$find_customer, $customer_stats],
version => '1.0.0',
);
# Configure the agent to use our tools
my $options = Claude::Agent::Options->new(
mcp_servers => { customerdb => $server },
allowed_tools => $server->tool_names, # ['mcp__customerdb__find_customer', ...]
permission_mode => 'bypassPermissions',
max_turns => 10,
);
# Now Claude can query your database naturally
my $loop = IO::Async::Loop->new;
my $iter = query(
prompt => 'How many customers do we have on each plan? ' .
'Also, look up the customer with email alice@example.com',
options => $options,
loop => $loop,
);
# Stream the response
while (my $msg = $iter->next) {
if ($msg->isa('Claude::Agent::Message::Assistant')) {
for my $block ($msg->content_blocks) {
print $block->text if $block->isa('Claude::Agent::Content::Text');
}
}
elsif ($msg->isa('Claude::Agent::Message::Result')) {
print "\n\nQuery complete.\n";
last;
}
}
When you run this, Claude will intelligently call both tools to answer your question. It might first call customer_stats with no arguments to get the plan breakdown, then call find_customer with email => 'alice@example.com' to look up that specific record. You'll see output like:
Let me check our customer data for you.
We have the following customers by plan:
- pro: 1,247 customers
- free: 3,892 customers
- enterprise: 89 customers
For alice@example.com, I found:
- Name: Alice Chen
- Plan: enterprise
- Member since: 2024-03-15
Behind the scenes, the SDK creates a Unix socket for communication between your main process and a lightweight MCP protocol handler. When Claude calls a tool, the request flows through the socket to your handler, which executes synchronously with full access to $dbh and any other state in scope. The result flows back to Claude, and the conversation continues.
This pattern is incredibly useful for building AI-powered interfaces to your existing systems. You're not building a new API. You're exposing capabilities that your Perl code already has, with Claude handling the natural language understanding and your handlers doing the actual work. The JSON Schema validation ensures Claude passes the right parameters, and your handlers can return structured results or friendly error messages.
A few things to note about handler implementation:
- Return structure: Always return a hashref with a content array. Each element should have type => 'text' and a text field.
- Error handling: Set is_error => 1 in your return value when something goes wrong; Claude will understand the operation failed (see the sketch after this list).
- Input validation: The SDK validates inputs against your JSON Schema, but you may want additional business-logic validation in your handler.
- Security: Be thoughtful about what you expose. The enum constraint in customer_stats limits which plans can be queried. You can use similar patterns to restrict what data Claude can access.
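To make the error-handling convention concrete, here's a minimal sketch of a handler that reports a failure back to Claude instead of dying. The order lookup (find_order) and its parameters are invented purely for illustration; the part the SDK cares about is the returned hashref with a content array and is_error set.
sub {
    my ($args) = @_;
    # Hypothetical lookup; any operation that can fail follows the same pattern
    my $order = eval { find_order($args->{order_id}) };
    if (!$order) {
        return {
            content  => [{ type => 'text', text => "No order found for ID $args->{order_id}" }],
            is_error => 1,   # tells Claude the operation failed
        };
    }
    return {
        content => [{ type => 'text', text => "Order $args->{order_id} status: $order->{status}" }],
    };
}
Returning is_error => 1 keeps the failure inside the conversation: Claude is told the operation didn't succeed, as described in the notes above.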
The Hook System for Fine-Grained Control
When you're running AI agents in production, you need visibility. What tools is Claude calling? With what parameters? How long did each operation take? Did anything get blocked? The Claude Agent SDK's hook system gives you complete control over the agent's tool execution lifecycle, letting you intercept, inspect, modify, or block any operation.
Think of hooks as middleware for AI agent operations. Every time Claude wants to call a tool (whether it's reading a file, running a bash command, or calling one of your custom MCP tools), your hooks get first dibs. You can log the operation, check it against security policies, modify the parameters, or shut it down entirely. And you get hooks for multiple lifecycle points: before execution, after success, after failure, and more.
The system is built around matchers that bind patterns to callbacks:
use Claude::Agent::Hook::Matcher;
use Claude::Agent::Hook::Result;
my $matcher = Claude::Agent::Hook::Matcher->new(
matcher => 'Bash', # Tool name pattern (regex or exact match)
timeout => 60, # Hook execution timeout in seconds
hooks => [ # Array of callback subroutines
sub {
my ($input, $tool_use_id, $context) = @_;
# Your logic here
return Claude::Agent::Hook::Result->proceed();
},
],
);
Each hook callback receives three arguments: $input (a hashref with tool_name and tool_input), $tool_use_id (a unique identifier for this specific invocation), and $context (a Claude::Agent::Hook::Context object with session metadata like session_id and cwd).
Your hooks return decisions using the Claude::Agent::Hook::Result factory:
# Let the operation proceed unchanged
return Claude::Agent::Hook::Result->proceed();
# Allow but modify the input parameters
return Claude::Agent::Hook::Result->allow(
updated_input => { command => 'sanitized_command' },
reason => 'Modified for security',
);
# Block the operation entirely
return Claude::Agent::Hook::Result->deny(
reason => 'This operation violates security policy',
);
The available hook events cover the tool execution lifecycle:
| Event | When It Fires |
|---|---|
PreToolUse |
Before any tool executes |
PostToolUse |
After a tool completes successfully |
PostToolUseFailure |
After a tool fails |
These three events are the workhorses of the hook system, giving you complete visibility into tool execution. The SDK also defines additional event types (SessionStart, SessionEnd, SubagentStart, SubagentStop, PermissionRequest, Notification, Stop, PreCompact, UserPromptSubmit) that cover session lifecycle, subagent management, and user interactions.
# Security hook, only fires for Bash tool
my $bash_security = Claude::Agent::Hook::Matcher->new(
matcher => 'Bash', # Exact match on tool name
hooks => [sub {
my ($input, $tool_use_id, $context) = @_;
my $command = $input->{tool_input}{command} // '';
# Define blocked patterns
my @dangerous_patterns = (
qr/\brm\s+-rf\s+[\/~]/, # rm -rf against root or home
qr/\bsudo\b/, # No sudo commands
qr/\bchmod\s+777\b/, # World-writable permissions
qr/>\s*\/etc\//, # Redirecting to /etc
qr/\bcurl\b.*\|\s*\bbash\b/, # Piping curl to bash
qr/\beval\b/, # Command eval
);
for my $pattern (@dangerous_patterns) {
if ($command =~ $pattern) {
write_audit_log({
timestamp => scalar(gmtime) . ' UTC',
event => 'TOOL_BLOCKED',
tool_use_id => $tool_use_id,
tool_name => 'Bash',
reason => 'Matched dangerous pattern',
pattern => "$pattern",
severity => 'CRITICAL',
});
return Claude::Agent::Hook::Result->deny(
reason => 'This command has been blocked by security policy.',
);
}
}
return Claude::Agent::Hook::Result->proceed();
}],
);
Hook execution order matters. When you provide multiple matchers for the same event, they run in array order. Within a single matcher, if any hook returns allow or deny, subsequent hooks in that matcher don't execute. The decision is final.
Matcher patterns are flexible. Use an exact string like 'Bash' to match a specific tool, a regex pattern like 'mcp__.*' to match all MCP tools, or omit the matcher entirely to catch everything (there's a small sketch of a catch-all matcher after these notes). The SDK includes ReDoS protection to prevent pathological regex patterns from hanging your process.
Hooks are exception-safe. If your callback throws, the SDK catches it and returns { decision => 'error' }. Your agent keeps running, and you can enable CLAUDE_AGENT_DEBUG=1 to see the full stack trace.
The context object is your friend. The $context parameter gives you the session ID (essential for correlating logs across a conversation), the current working directory, and the tool details. Use this metadata to make intelligent decisions about what to allow.
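As promised, here's a minimal sketch of a catch-all audit-logging matcher. It assumes the write_audit_log helper used in the security example above, and the session_id and cwd accessors on the context object mentioned earlier; because it always returns proceed(), it observes every tool call without ever blocking one.
my $audit_hook = Claude::Agent::Hook::Matcher->new(
    # No 'matcher' key, so this fires for every tool call
    hooks => [sub {
        my ($input, $tool_use_id, $context) = @_;
        write_audit_log({
            session_id  => $context->session_id,
            cwd         => $context->cwd,
            tool_use_id => $tool_use_id,
            tool_name   => $input->{tool_name},
        });
        return Claude::Agent::Hook::Result->proceed();
    }],
);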
Subagents for Specialised Tasks
Sometimes a single agent isn't enough. Maybe you need to run multiple analyses in parallel, checking for security vulnerabilities while simultaneously reviewing code style. Maybe you want to isolate a complex task so it doesn't pollute your main conversation context. Or maybe you need specialised expertise: one agent focused purely on security, another on performance, each with tailored instructions and tool access.
This is what subagents are for. The Claude Agent SDK lets you define specialised agent profiles that your main agent can spawn on demand. Each subagent runs in its own isolated context with its own system prompt, tool permissions, and even model selection. Think of them as expert consultants your agent can call in when it needs help.
The architecture is elegant. You define subagents as configuration objects with four properties:
use Claude::Agent::Subagent;
my $subagent = Claude::Agent::Subagent->new(
description => '...', # When should Claude use this agent?
prompt => '...', # System prompt defining expertise
tools => [...], # Allowed tools (optional, inherits if not set)
model => '...', # Model override (optional, 'sonnet', 'opus', 'haiku')
);
The description is key. Claude uses this to decide when to delegate. Write it like you're explaining to a colleague: "Expert security reviewer for vulnerability analysis" tells Claude exactly what this agent does. The prompt is the system prompt that shapes the subagent's behaviour, giving it the specialised knowledge and instructions it needs.
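To make that concrete, here's a hedged sketch of two subagent definitions, including the secrets_detector discussed below. The names, prompts, and tool lists are purely illustrative; see the SDK documentation for how to register them with your agent options.
my $security_reviewer = Claude::Agent::Subagent->new(
    description => 'Expert security reviewer for vulnerability analysis',
    prompt      => 'You are a security specialist. Review code for injection flaws, '
                 . 'broken authentication, and unsafe handling of user input.',
    tools       => ['Read', 'Glob', 'Grep', 'Bash'],
    model       => 'opus',    # strongest reasoning for complex analysis
);
my $secrets_detector = Claude::Agent::Subagent->new(
    description => 'Scans files for hard-coded credentials, tokens, and API keys',
    prompt      => 'You look for secrets committed to the codebase: passwords, tokens, and keys.',
    tools       => ['Read', 'Glob', 'Grep'],    # read-only: deliberately no Bash access
    model       => 'haiku',   # straightforward pattern matching, cheaper model is fine
);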
The subagent architecture provides several powerful capabilities:
Context isolation. Each subagent starts fresh with only its system prompt. There is no accumulated context from earlier in the conversation. This prevents context pollution and keeps analyses focused.
Tool restriction. Notice how secrets_detector doesn't have Bash access; it can only read files. This is defence in depth: even if the AI were to malfunction, a secrets-scanning agent physically cannot execute commands.
Model selection. Use Opus for complex security analysis where you need the strongest reasoning. Use Haiku for straightforward pattern-matching tasks. Your main agent can be Sonnet as the orchestrator. This optimises both cost and capability.
Parallel potential. While Claude currently executes subagents sequentially, the architecture supports parallel execution. When you spawn multiple subagents, their isolated contexts mean results can be combined without interference.
Async Tool Handlers
For tools that perform I/O operations (HTTP requests, database queries, file operations), blocking the event loop is wasteful. The SDK supports async tool handlers that return Futures, enabling true non-blocking execution.
Your handler receives the IO::Async::Loop as its second parameter. Use it to perform async operations and return a Future that resolves with your result:
use Future::AsyncAwait;
use Net::Async::HTTP;
my $fetch_url = tool(
'fetch_url',
'Fetch content from a URL asynchronously',
{
type => 'object',
properties => {
url => { type => 'string', description => 'URL to fetch' },
},
required => ['url'],
},
async sub {
my ($args, $loop) = @_;
my $http = Net::Async::HTTP->new;
$loop->add($http);
my $response = await $http->GET($args->{url});
return {
content => [{
type => 'text',
text => sprintf("Status: %d\nBody: %s",
$response->code,
substr($response->decoded_content, 0, 1000)),
}],
};
}
);
The same pattern works for hooks. Your hook callback can return a Future for async validation:
my $async_security_hook = Claude::Agent::Hook::Matcher->new(
matcher => '.*',
hooks => [
async sub {
my ($input, $tool_use_id, $context, $loop) = @_;
# Async check against a security policy service
my $http = Net::Async::HTTP->new;
$loop->add($http);
my $resp = await $http->POST(
'https://security.internal/check',
content => encode_json($input),
);
if ($resp->code == 403) {
return Claude::Agent::Hook::Result->deny(
reason => 'Blocked by security policy',
);
}
return Claude::Agent::Hook::Result->proceed();
},
],
);
One powerful pattern enabled by the shared event loop: spawning nested queries from within a tool handler. Your tool can invoke Claude as a sub-agent:
my $research_tool = tool(
'deep_research',
'Spawn a sub-agent to research a topic',
{
type => 'object',
properties => {
topic => { type => 'string' },
},
required => ['topic'],
},
sub {
my ($args, $loop) = @_;
# Spawn a sub-query using the shared event loop
my $sub_query = query(
prompt => "Research thoroughly: $args->{topic}",
options => Claude::Agent::Options->new(
allowed_tools => ['Read', 'Glob', 'WebSearch'],
permission_mode => 'bypassPermissions',
max_turns => 5,
),
loop => $loop,
);
my $result = '';
while (my $msg = $sub_query->next) {
if ($msg->isa('Claude::Agent::Message::Result')) {
$result = $msg->result // '';
last;
}
}
return {
content => [{ type => 'text', text => $result }],
};
}
);
Sync handlers continue to work unchanged. The SDK automatically wraps synchronous return values in Futures, so you can mix sync and async tools freely.
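To show the two styles side by side, here's a minimal sketch registering one synchronous and one asynchronous tool in the same SDK MCP server. The tool names and bodies are invented for illustration and trimmed to the bare minimum.
use Future::AsyncAwait;
use Claude::Agent qw(tool create_sdk_mcp_server);
# Synchronous tool: a plain sub; the SDK wraps its return value in a Future
my $ping = tool(
    'ping', 'Return a static liveness message',
    { type => 'object', properties => {}, required => [] },
    sub {
        return { content => [{ type => 'text', text => 'pong' }] };
    },
);
# Asynchronous tool: an async sub that awaits a non-blocking delay
my $slow_echo = tool(
    'slow_echo', 'Echo the input back after a short non-blocking delay',
    {
        type       => 'object',
        properties => { text => { type => 'string' } },
        required   => ['text'],
    },
    async sub {
        my ($args, $loop) = @_;
        await $loop->delay_future(after => 0.5);   # IO::Async non-blocking pause
        return { content => [{ type => 'text', text => $args->{text} }] };
    },
);
my $server = create_sdk_mcp_server(
    name    => 'mixed',
    tools   => [$ping, $slow_echo],
    version => '1.0.0',
);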
Wrapping Up
The Claude Agent SDK for Perl brings agentic AI capabilities directly into your existing infrastructure. From custom MCP tools that access your application state, to a flexible hook system for security and observability, to specialised subagents for parallel expertise, the toolkit is designed for real-world automation. Whether you're building intelligent code review pipelines, DevOps automation, or AI-powered interfaces to legacy systems, the SDK provides the primitives you need while keeping you in control. The code is available on CPAN, and I look forward to seeing what you build with it.
https://metacpan.org/pod/Claude::Agent
Here are some extensions I've built already using the SDK:
https://metacpan.org/pod/Claude::Agent::Code::Review
https://metacpan.org/pod/Claude::Agent::Code::Refactor
https://metacpan.org/pod/Wordsmith::Claude
https://metacpan.org/dist/Acme-Claude-Shell/view/bin/acme_claude_shell
-
App::Greple - extensible grep with lexical expression and region handling
- Version: 10.02 on 2026-01-09, with 56 votes
- Previous CPAN version: 10.01 was 9 days before
- Author: UTASHIRO
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.097002 on 2026-01-09, with 818 votes
- Previous CPAN version: 2.097001
- Author: OLIVER
-
App::Sqitch - Sensible database change management
- Version: v1.6.1 on 2026-01-06, with 3087 votes
- Previous CPAN version: v1.6.0 was 3 months before
- Author: DWHEELER
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260104.001 on 2026-01-04, with 25 votes
- Previous CPAN version: 20251228.001 was 6 days before
- Author: BRIANDFOY
-
DateTime::Format::Natural - Parse informal natural language date/time strings
- Version: 1.23 on 2026-01-04, with 19 votes
- Previous CPAN version: 1.23 was 5 days before
- Author: SCHUBIGER
-
Firefox::Marionette - Automate the Firefox browser with the Marionette protocol
- Version: 1.69 on 2026-01-10, with 19 votes
- Previous CPAN version: 1.68 was 3 months, 26 days before
- Author: DDICK
-
GD - Perl interface to the libgd graphics library
- Version: 2.84 on 2026-01-04, with 32 votes
- Previous CPAN version: 2.83 was 1 year, 6 months, 11 days before
- Author: RURBAN
-
IO::Socket::SSL - Nearly transparent SSL encapsulation for IO::Socket::INET.
- Version: 2.098 on 2026-01-06, with 49 votes
- Previous CPAN version: 2.097
- Author: SULLR
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.632 on 2026-01-06, with 16 votes
- Previous CPAN version: 0.631 was 12 days before
- Author: ETHER
-
MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
- Version: 2.037000 on 2026-01-07, with 27 votes
- Previous CPAN version: 2.036000
- Author: MICKEY
-
MIME::Lite - low-calorie MIME generator
- Version: 3.035 on 2026-01-08, with 35 votes
- Previous CPAN version: 3.034 was 2 days before
- Author: RJBS
-
Module::Starter - a simple starter kit for any module
- Version: 1.81 on 2026-01-09, with 34 votes
- Previous CPAN version: 1.80
- Author: XSAWYERX
-
Perl::Tidy - indent and reformat perl scripts
- Version: 20260109 on 2026-01-08, with 147 votes
- Previous CPAN version: 20250912 was 3 months, 26 days before
- Author: SHANCOCK
-
perlsecret - Perl secret operators and constants
- Version: 1.018 on 2026-01-09, with 55 votes
- Previous CPAN version: 1.017 was 4 years, 2 months before
- Author: BOOK
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.010001 on 2026-01-06, with 148 votes
- Previous CPAN version: 2.010000 was 7 days before
- Author: TOBYINK
-
UV - Perl interface to libuv
- Version: 2.001 on 2026-01-06, with 14 votes
- Previous CPAN version: 2.000 was 4 years, 5 months, 8 days before
- Author: PEVANS
I just saw that perlmodules.net is down for 1-2 weeks, mentioning an upcoming outage because of changes to the MetaCPAN API.
Because metacpan.org is changing its API in a major way, I need to change how this site accesses it.
I see that there's the unmerged pull request metacpan/metacpan-api#1109, but I didn't see anything in the MetaCPAN::Client repo.
[link] [comments]
Get it, as usual, from my Wiki Haven.
I have not yet generated a new JSTree version but I have started cleaning up the code
in CPAN::MetaCurator...
In a script I'm using constants (use constant ...) to allow re-use in actual regular expressions, using the pattern from https://stackoverflow.com/a/69379743/6607497.
However, when using a {...} repeat specifier following such a constant expansion, Perl wants to treat the constant as a hash variable.
The question is how to avoid that.
Code example:
main::(-e:1): 1
DB<1> use constant CHARSET => '[[:graph:]]'
DB<2> x "foo" =~ qr/^[[:graph:]]{3,}$/
0 1
DB<3> x "foo" =~ qr/^${\CHARSET}{3,}$/
Not a HASH reference at (eval 8)[/usr/lib/perl5/5.26.1/perl5db.pl:738] line 2.
DB<4> x "foo" =~ qr/^${\CHARSET}\{3,}$/
empty array
DB<5> x $^V
0 v5.26.1
According to https://stackoverflow.com/a/79845011/6607497 a solution may be to add a space that is then ignored, like this: qr/^${\CHARSET} {3,}$/x; however, I don't understand why this works, because outside of a regular expression a space before { is ignored anyway:
DB<6> x "foo" =~ qr/^${\CHARSET} {3,}$/x
0 1
DB<7> %h = (a => 3)
DB<8> x $h{a}
0 3
DB<9> x $h {a}
0 3
The manual page (perlop(1) on "Quote and Quote-like Operators") isn't very precise on that:
For constructs that do interpolate, variables beginning with "$" or "@" are interpolated. Subscripted variables such as $a[3] or "$href->{key}[0]" are also interpolated, as are array and hash slices. But method calls such as "$obj->meth" are not.
foobar is a Perl script that prints to both standard output and standard error. In a separate Perl script echo-stderr, I run foobar and capture its standard error using IPC::Open3's open3 function, and simply echo it back.
Here's the code for echo-stderr:
#!/usr/bin/perl -w
use IPC::Open3;
use Symbol 'gensym';
$fh = gensym;
$pid = open3('STDIN', 'STDOUT', $fh, './foobar') or die "$0: failed to run ./foobar\n";
while ( <$fh> ) {
print STDERR $_;
}
close $fh;
waitpid($pid, 0);
The result is that whatever foobar writes to standard error is printed, but nothing that it writes to standard output is.
And there is an error at the end:
<message written to STDERR>
<message written to STDERR>
...
Unable to flush stdout: Bad file descriptor
What is the reason for this error?
Whenever I'm building a static website, I almost never start by reaching for Apache, nginx, Docker, or anything that feels like "proper infrastructure". Nine times out of ten I just want a directory served over HTTP so I can click around, test routes, check assets, and see what happens in a real browser.
For that job, I've been using App::HTTPThis for years.
It's a simple local web server you run from the command line. Point it at a directory, and it serves it. That's it. No vhosts. No config bureaucracy. No "why is this module not enabled". Just: run a command and you've got a website.
Why I've used it for years
Static sites are deceptively simple… right up until they aren't.
- You want to check that relative links behave the way you think they do.
- You want to confirm your CSS and images are loading with the paths you expect.
- You want to reproduce "real HTTP" behaviour (caching headers, MIME types, directory handling) rather than viewing files directly from disk.
Sure, you can open file:///.../index.html in a browser, but that's not the same thing as serving it over HTTP. And setting up Apache (or friends) feels like bringing a cement mixer to butter some toast.
With http_this, the workflow is basically:
- cd into your site directory
- run a single command
- open a URL
- get on with your life
It's the "tiny screwdriver" that's always on my desk.
Why I took it over
A couple of years ago, the original maintainer had (entirely reasonably!) become too busy elsewhere and the distribution wasn't getting attention. That happens. Open source is like that.
But I was using App::HTTPThis regularly, and I had one small-but-annoying itch: when you visited a directory URL, it would always show a directory listing – even if that directory contained an index.html. So instead of behaving like a typical web server (serve index.html by default), it treated index.html as just another file you had to click.
That's exactly the sort of thing you notice when you're using a tool every day, and it was irritating enough that I volunteered to take over maintenance.
(If you want to read more on this story, I wrote a couple of blog posts.)
What I've done since taking it over
Most of the changes are about making the "serve a directory" experience smoother, without turning it into a kitchen-sink web server.
1) Serve index pages by default (autoindex)
The first change was to make directory URLs behave like you'd expect: if index.html exists, serve it automatically. If it doesn't, you still get a directory listing.
2) Prettier index pages
Once autoindex was in place, I then turned my attention to the fallback directory listing page. If there isn't an index.html, you still need a useful listing, but it doesn't have to look like it fell out of 1998. So I cleaned up the listing output and made it a bit nicer to read when you do end up browsing raw directories.
3) A config file
Once you've used a tool for a while, you start to realise you run it the same way most of the time.
A config file lets you keep your common preferences in one place instead of re-typing options. It keeps the "one command" feel, but gives you repeatability when you want it.
4) --host option
The ability to control the host binding sounds like an edge case until it isn't.
Sometimes you want:
- only localhost access for safety;
- access from other devices on your network (phone/tablet testing);
- behaviour that matches a particular environment.
A --host option gives you that control without adding complexity to the default case.
The Bonjour feature (and what it's for)
This is the part I only really appreciated recently: App::HTTPThis can advertise itself on your local network using mDNS / DNS-SD – commonly called Bonjour on Apple platforms, Avahi on Linux, and various other names depending on who you're talking to.
It's switched on with the --name option.
When you do that, http_this publishes an _http._tcp service on your local network with the instance name you chose (MyService, say). Any device on the same network that understands mDNS/DNS-SD can then discover it and resolve it to an address and port, without you having to tell anyone "go to http://192.168.1.23:7007/".
Confession time: I ignored this feature for ages because I'd mentally filed it under "Apple-only magic" (Bonjour! very shiny! probably proprietary!). It turns out it's not Apple-only at all; it's a set of standard networking technologies that are supported on pretty much everything, just under a frankly ridiculous number of different names. So: not Apple magic, just local-network service discovery with a branding problem.
Because I'd never really used it, I finally sat down and tested it properly after someone emailed me about it last week, and it worked nicely; nicely enough that I've now added a BONJOUR.md file to the repo with a practical explanation of what's going on, how to enable it, and a few ways to browse/discover the advertised service.
(If you're curious, look for _http._tcp and your chosen service name.)
It's a neat quality-of-life feature if you're doing cross-device testing or helping someone else on the same network reach what you're running.
Related tools in the same family
App::HTTPThis is part of a little ecosystem of "run a thing here quickly" command-line apps. If you like the shape of http_this, you might also want to look at these siblings:
- https_this : like http_this, but served over HTTPS (useful when you need to test secure contexts, service workers, APIs that require HTTPS, etc.)
- cgi_this : for quick CGI-style testing without setting up a full web server stack
- dav_this : serves content over WebDAV (handy for testing clients or workflows that expect DAV)
- ftp_this : quick FTP server for those rare-but-real moments when you need one
They all share the same basic philosophy: remove the friction between "I have a directory" and "I want to interact with it like a service".
Wrapping up
I like tools that do one job, do it well, and get out of the way. App::HTTPThis has been that tool for me for years and it's been fun (and useful) to nudge it forward as a maintainer.
If you're doing any kind of static site work (docs sites, little prototypes, generated output, local previews), it's worth keeping in your toolbox.
And if you've got ideas, bug reports, or platform notes (especially around Bonjour/Avahi weirdness), I'm always happy to hear them.
The post App::HTTPThis: the tiny web server I keep reaching for first appeared on Perl Hacks.
A jq-compatible JSON processor written in pure Perl, designed for environments where jq cannot be installed.
-
App::cpm - a fast CPAN module installer
- Version: 0.998003 on 2025-12-29, with 177 votes
- Previous CPAN version: 0.998002 was 24 days before
- Author: SKAJI
-
App::Greple - extensible grep with lexical expression and region handling
- Version: 10.01 on 2025-12-31, with 56 votes
- Previous CPAN version: 10.00 was 19 days before
- Author: UTASHIRO
-
App::Music::ChordPro - A lyrics and chords formatting program
- Version: v6.090.1 on 2026-01-03, with 432 votes
- Previous CPAN version: v6.090.0 was 2 months, 3 days before
- Author: JV
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251228.001 on 2025-12-29, with 25 votes
- Previous CPAN version: 20251221.001 was 7 days before
- Author: BRIANDFOY
-
DBD::SQLite - Self Contained SQLite RDBMS in a DBI Driver
- Version: 1.78 on 2026-01-02, with 107 votes
- Previous CPAN version: 1.76 was 1 year, 2 months, 14 days before
- Author: ISHIGAKI
-
Module::Starter - a simple starter kit for any module
- Version: 1.79 on 2026-01-03, with 33 votes
- Previous CPAN version: 1.78 was 7 months, 30 days before
- Author: XSAWYERX
-
Type::Tiny - tiny, yet Moo(se)-compatible type constraint
- Version: 2.010000 on 2025-12-30, with 148 votes
- Previous CPAN version: 2.009_003 was 7 days before
- Author: TOBYINK
-
WebService::Dropbox - Perl interface to Dropbox API
- Version: 2.10 on 2025-12-29, with 12 votes
- Previous CPAN version: 2.09 was 4 years, 6 months, 14 days before
- Author: ASKADNA
In an attempt to avoid the /x switch for a complex regular expression, I tried to replace qr/string/ with the following expression:
(map { qr/$_/ } ("a more" . "complex regex"))[0]
As the latter expression uses double quoted strings, I thought I would have to duplicate any backslash that should go into the qr operator. Because of that, I tried something like this:
(map { qr/$_/ } (
"^Load key \"\\Q$host_CA\\E\": "
. 'incorrect passphrase supplied '
. "to decrypt private key$CRLF"
))
However, Perl 5.26 complains with the following error message:
Unrecognized escape \Q passed through in regex; marked by <-- HERE in m/^Load key "\Q <-- HERE ...
It seems I don't have to duplicate the backslash, but I don't understand why. The following examples illustrate the behavior I am seeing:
DB<3> $x='f*o'
DB<4> x qr/\Q$x\E/
0 (?^u:f\\*o)
-> qr/(?^u:f\*o)/
DB<5> x map { qr/$_/ } ("\\Q$x\\E/")
0 (?^u:\\Qf*o\\E/)
-> qr/(?^u:\Qf*o\E\/)/
DB<6> x map { qr/$_/ } ("\Q$x\E/")
0 (?^u:f\\*o/)
-> qr/(?^u:f\*o\/)/
-
App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
- Version: 2.438 on 2025-12-25, with 18 votes
- Previous CPAN version: 2.437_05 was 7 days before
- Author: KUERBIS
-
Convert::Pheno - A module to interconvert common data models for phenotypic data
- Version: 0.29 on 2025-12-23, with 15 votes
- Previous CPAN version: 0.28 was 8 months, 4 days before
- Author: MRUEDA
-
Devel::MAT - Perl Memory Analysis Tool
- Version: 0.54 on 2025-12-26, with 30 votes
- Previous CPAN version: 0.53 was 1 year, 9 months, 19 days before
- Author: PEVANS
-
Finance::Quote - Get stock and mutual fund quotes from various exchanges
- Version: 1.68 on 2025-12-21, with 145 votes
- Previous CPAN version: 1.68 was 3 days before
- Author: BPSCHUCK
-
HTTP::Tiny - A small, simple, correct HTTP/1.1 client
- Version: 0.092 on 2025-12-27, with 115 votes
- Previous CPAN version: 0.091 was 14 days before
- Author: HAARG
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.631 on 2025-12-25, with 16 votes
- Previous CPAN version: 0.630 was 10 days before
- Author: ETHER
I am developing a Virtualmin plugin. The problem is getting a link to appear under the "Manage Virtual Server" category in the Virtualmin sidebar whenever the feature is enabled for a virtual server (domain).
Despite following the standard plugin structure, the menu item refuses to appear in the Virtualmin UI, although the module is accessible if I manually type the URL or find it in the Webmin "Tools" section (when not hidden).
Environment
- OS: Ubuntu 22.04 / 24.04
- Virtualmin version: Latest
- Webmin version: Latest
File Structure
/usr/share/webmin/my-plugin-folder/
- index.cgi
- module.info
- virtual_feature.pl
- ...
Relevant Code
virtual_feature.pl
require 'my-plugin-lib.pl';
sub feature_name {
return "plugin_name";
}
sub feature_label {
return "Plugin Name";
}
sub feature_disables {
return 1;
}
sub feature_check {
return undef;
}
sub feature_setup {
my ($d) = @_;
return undef;
}
sub feature_links {
my ($d) = @_;
# This is intended to place the link under "Manage Virtual Server"
return ({ 'mod' => $module_name,
'desc' => "Plugin Name",
'page' => "index.cgi?dom=" . $d->{'id'},
'cat' => 'server' });
}
1;
module.info
desc=Plugin Name Tool
os_support=*-linux
version=1.6
category=server
depends=virtual-server
virtualmin=1
hidden=1
Expected Behavior
After enabling the feature globally in System Settings -> Features and Plugins, a link should appear in the left-hand sidebar under the "Manage Virtual Server" category.
Actual Behavior
The feature shows up in the "Enabled features" list and can be toggled/saved successfully. However, the link never appears in the sidebar. No errors are logged in /var/webmin/miniserv.error.
What I have tried
- Restarting Webmin (/etc/webmin/restart).
- Hardcoding the module folder name in the 'mod' field of feature_links.
- Changing the 'cat' field to 'services' or 'logs'.
- Refreshing the Webmin module cache.
- Verifying that the feature is indeed marked as enabled in the domain's configuration file in /etc/webmin/virtual-server/domains/.
Is there a specific registration step or a required function in virtual_feature.pl that I am missing for the sidebar injection to work correctly in recent versions of the Virtualmin Authentic Theme?
When writing Perl functions that operate on arrays, I use the tail of @_ as the "array argument", and I return arrays by value. This keeps the API simple.
When I write map like functions:
sub mymap (&@) { ... }
I receive the list (the tail of @_) as the last argument. Assuming I know these functions will operate over big arrays (more than 10,000 elements), is it worth writing them to receive a reference instead of the array?
The same question applies to returning arrays: if I return a big array, is it worth returning it as a reference?
By "worth" here I mean: are there any gains in performance, and how much?
Or is Perl smart enough to optimize these cases?
My current understanding is that when I do
foo(@bar)
Perl creates an alias for each value in @bar on foo's stack, whereas if I replace @bar with \@bar only a single alias is created. This should not matter for functions that receive around 10 arguments, but functions like map, grep, etc. that operate on arrays may easily receive more than 10,000 arguments.
Another question: are map and grep optimized to operate on big arrays?
As suggested by Ikegami, here is a benchmark for reference
#!/usr/bin/perl
use strict;
use warnings;
use feature 'say';
use Data::Dumper qw(Dumper);
use List::Util;
use Benchmark qw(:all);
sub array_as_value {
my @list = (0 .. shift);
for (@_) {
}
return @list;
}
sub array_as_ref {
my @list = (0 .. shift);
for (shift->@*) {
}
return \@list;
}
my %test;
for (my $i = 100; $i < 1_000_000; $i *= 10) {
my @big_array = (0 .. $i);
say "Testing with $i values";
cmpthese(-5, {
as_value => sub { my @result = array_as_value($i, @big_array); scalar @result; },
as_ref => sub { my $result = array_as_ref($i, \@big_array); scalar $result->@*; }
});
say "";
}
Here are the results
Testing with 100 values
Rate as_value as_ref
as_value 217634/s -- -40%
as_ref 365740/s 68% --
Testing with 1000 values
Rate as_value as_ref
as_value 23641/s -- -43%
as_ref 41365/s 75% --
Testing with 10000 values
Rate as_value as_ref
as_value 2053/s -- -46%
as_ref 3813/s 86% --
Testing with 100000 values
Rate as_value as_ref
as_value 200/s -- -50%
as_ref 402/s 101% --
So from what I understand, passing references is considerably faster for big arrays
At an online event through the Perl Maven group we tried to understand this module and even to contribute to it. For more details about the contributions check out the OSDC Perl page.
This example is based on the one in the documentation of JSON::Schema::Validate, tweaked a bit. It will be useful again if we continue dealing with this module.
examples/json_schema_validate.pl
use JSON::Schema::Validate;
use JSON ();
use open qw( :std :encoding(UTF-8) );
my $schema = {
'$schema' => 'https://json-schema.org/draft/2020-12/schema',
'$id' => 'https://example.org/s/root.json',
type => 'object',
required => [ 'name' ],
properties => {
name => { type => 'string', minLength => 5 },
next => { '$dynamicRef' => '#Node' },
},
'$dynamicAnchor' => 'Node',
additionalProperties => JSON::false,
};
my $js = JSON::Schema::Validate->new( $schema )
->compile
->content_checks
->ignore_unknown_required_vocab
->prune_unknown
->register_builtin_formats
->trace
->trace_limit(200) # 0 means unlimited
->unique_keys; # enable uniqueKeys
#my $data = {
# name => 'head',
# next => {
# name => 'tail'
# }
#};
#my $data = {
# name => 23,
# next => {
# name => 'tail'
# }
#};
#my $data = {
# name => 'head',
#};
my $data = {
name => 'head big',
};
my $ok = $js->validate($data)
or die( $js->error );
print "ok\n";
This is a series of posts about my experiences learning Perl web development with Vuejs. These are all the posts:
This text was translated using software. However, I wrote almost all of it myself. So please bear with me if the language sounds a bit…
-
App::Netdisco - An open source web-based network management tool.
- Version: 2.097000 on 2025-12-16, with 810 votes
- Previous CPAN version: 2.096001 was 2 days before
- Author: OLIVER
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20251221.001 on 2025-12-21, with 25 votes
- Previous CPAN version: 20251214.001 was 7 days before
- Author: BRIANDFOY
-
Dist::Zilla::Plugin::Test::Compile - Common tests to check syntax of your modules, using only core modules
- Version: 2.059 on 2025-12-16, with 13 votes
- Previous CPAN version: 2.058 was 7 years, 11 months, 27 days before
- Author: ETHER
-
Image::ExifTool - Read and write meta information
- Version: 13.44 on 2025-12-15, with 44 votes
- Previous CPAN version: 13.36 was 3 months, 6 days before
- Author: EXIFTOOL
-
JSON::Schema::Modern - Validate data against a schema using a JSON Schema
- Version: 0.630 on 2025-12-14, with 16 votes
- Previous CPAN version: 0.629 was 2 days before
- Author: ETHER
-
List::Gen - provides functions for generating lists
- Version: 0.979 on 2025-12-21, with 24 votes
- Previous CPAN version: 0.978
- Author: SOMMREY
-
Minilla - CPAN module authoring tool
- Version: v3.1.29 on 2025-12-17, with 98 votes
- Previous CPAN version: v3.1.28 was 3 months, 2 days before
- Author: SYOHEX
-
Module::CoreList - what modules shipped with versions of perl
- Version: 5.20251220 on 2025-12-20, with 44 votes
- Previous CPAN version: 5.20251120 was 1 month before
- Author: BINGOS
-
Mouse - Moose minus the antlers
- Version: v2.6.1 on 2025-12-20, with 63 votes
- Previous CPAN version: v2.6.0 was 1 month, 20 days before
- Author: SKAJI
-
PGXN::API - Maintain and serve a REST API to search PGXN mirrors
- Version: v0.21.0 on 2025-12-15, with 18 votes
- Previous CPAN version: v0.20.2 was 1 year, 9 months before
- Author: DWHEELER
-
Sidef - The Sidef Programming Language
- Version: 25.12 on 2025-12-21, with 121 votes
- Previous CPAN version: 24.11 was 1 year, 22 days before
- Author: TRIZEN
-
Text::Markup - Parse text markup into HTML
- Version: 0.41 on 2025-12-18, with 12 votes
- Previous CPAN version: 0.40 was 3 days before
- Author: DWHEELER
-
Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
- Version: 0.63 on 2025-12-20, with 20 votes
- Previous CPAN version: 0.62 was 8 years, 8 months, 9 days before
- Author: CHANSEN
-
Zonemaster::Backend - A system for running Zonemaster tests asynchronously through an RPC-API
- Version: 12.0.0 on 2025-12-19, with 16 votes
- Previous CPAN version: 11.5.0 was 5 months, 22 days before
- Author: ZNMSTR
-
Zonemaster::Engine::Exception::NormalExit - run Zonemaster tests from the command line
- Version: 8.000001 on 2025-12-19, with 23 votes
- Previous CPAN version: 8.000000 was 5 months, 22 days before
- Author: ZNMSTR
-
Zonemaster::Engine - A tool to check the quality of a DNS zone
- Version: 8.001000 on 2025-12-19, with 35 votes
- Previous CPAN version: 8.000000 was 5 months, 22 days before
- Author: ZNMSTR
This is the weekly favourites list of CPAN distributions. Votes count: 43
Week's winner: MCP (+3)
Build date: 2025/12/21 13:03:54 GMT
Clicked for first time:
- App::BlurFill - Blurred background fill image processor
- Complete::Getopt::Long - Complete command-line argument using Getopt::Long specification
- Data::Turtle - Turtle Movement and State Operations
- Marlin - pretty fast class builder with most Moo/Moose features
- Mojo::Collection::XS - Fast XS subclass of Mojo::Collection with XS-based while
- SimpleFlow - easy, simple workflow manager (and logger); for keeping track of and debugging large and complex shell command workflows
Increasing its reputation:
- Affix (+1=4)
- App::shcompgen (+1=3)
- Complete::Bash (+1=5)
- Complete::Util (+1=2)
- Const::Fast (+1=37)
- DateTime::Format::Strptime (+1=26)
- File::HomeDir (+1=35)
- File::XDG (+1=10)
- Getopt::Long::Complete (+1=15)
- Getopt::Long::More (+1=2)
- IPC::Run3 (+1=25)
- JQ::Lite (+1=7)
- JSON::Schema::Modern (+1=9)
- JSON::XS (+1=121)
- MCP (+3=7)
- Melian (+2=3)
- MooX::Singleton (+1=6)
- OpenGL (+1=14)
- OpenGL::Modern (+1=3)
- PAGI (+2=2)
- Path::Iterator::Rule (+1=26)
- Perl::Types (+1=2)
- Prima (+1=46)
- SDL3 (+2=2)
- sealed (+1=2)
- Storage::Abstract (+2=2)
- Sub::Throttler (+1=2)
- Test2::Plugin::SubtestFilter (+1=3)
- Text::Markup (+1=12)
- Thread::Subs (+2=2)