Finally, GTC 2.0, an all-in-one color library, is released! This post will not rehash the (very) fine manual, but give you a sense of what you can achieve with this software and why it is better than any other library of that sort on CPAN. If you would like to look under the hood of GTC, please read my last post.
When I released GTC 1.0 in 2022, it had 4 major features:
1. computing color gradients between two colors in RGB
2. computing complementary colors in HSL
3. translating color names from an internal constant set into RGB values
4. converting RGB to HSL and back
The HSL support allowed adding and subtracting lightness and saturation (making colors darker or lighter, more pale or more colorful). Add to that a very rudimentary distance computation and color blending, and we have reached the bottom of the barrel.
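For readers who do not think in HSL: lightening a color means raising the L channel after a round trip from RGB. A minimal sketch of that math using Python's standard colorsys module (this only illustrates the conversion - it is not GTC's API, and the function name is invented):

```python
import colorsys

def lighten(rgb, amount):
    """Lighten an (r, g, b) tuple with 0-255 channels by adding to HSL lightness."""
    r, g, b = (c / 255 for c in rgb)
    # colorsys orders the channels H, L, S (not H, S, L)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    l = min(1.0, max(0.0, l + amount))      # clamp lightness into [0, 1]
    r, g, b = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r, g, b))

print(lighten((34, 139, 34), 0.2))  # X11 forestgreen, lightened by 0.2
```

A negative amount darkens; the same clamp-and-convert shape works for saturation via the S channel.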
GTC 2.0 expanded manyfold in all areas. Going from 2 color spaces (RGB and HSL) to now 17 (soon ~25) has a large effect. Not only does reading and writing color values in 17 spaces make GTC much more useful; computing a gradient or measuring a distance in different spaces gives you options. Some spaces are optimized for human perception (OKLAB or CIELUV), others you would choose out of technical necessity. OKLAB and OKLCH in particular have been the hype for a while, and GTC is the only module on CPAN supporting them. Almost all methods (besides name and complement) let you choose the color space the method will be computed in, always via the named argument in: in => 'RGB' just reads naturally.
And just to complete bullet point 1: gradient can now take a series of colors and a tilt factor as arguments to produce very expressive, custom gradients. The tilt factor also works for complements. With the special tilt values listed in the documentation you get the split-complementary colors designers need, but the nice thing about GTC is that you can choose any other value to get exactly what you are looking for. Many libraries have one method for triadic colors and another for tetradic; in GTC you just set the steps argument to 3 or 4 - or, again, any other number. Complements can be tilted in all three dimensions.
Besides gradient and complement there is also a new color-set method: cluster. It computes a bunch of colors that are centered around a given one but keep a given minimal dissimilarity. Also new is invert, often the fastest way to get a fitting fore-/background color, if the original color was not too bland.
The internal color name constants are still the same, but this feature block got two expansions. For one, you can now ask for the closest color name (closest_name) and select which standard the name has to come from (e.g. CSS). These constants are provided by the Graphics::ColorNames::* modules, and you can use them anywhere a color is expected as input. The nice green from the X11 standard would be just 'X:forestgreen'.
But since the CSS, X11, and Pantone report colors are already included, 'forestgreen' works too.
Many more features are coming in the next weeks. The most requested is probably simulation of color-impaired vision; more spaces are planned; a gamut checker is already implemented; gamma correction will follow this week; and much, much more. Just give it a try, and please send bug reports and feature requests.
PS. Yes, I also held a lightning talk about GTC in Berlin last week.
I need to move some chunks of text around in a file. I am partially successful, in the sense that I can move only the first chunk successfully.
The text in the file looks like this:
text regtext1 text regtext2 text regtextA regtextZ end
where text is some random text, and regtext1,2,3 are pieces of text conforming to some regular rules / patterns. All of them can contain pretty much any printable character, and a few more (diacritics, end-of-line, ...).
What I do now is something like this:
/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/gs
the result being that regtextA is moved inside regtext1:
text regregtextAtext1 text regtext2 text regtextZ end
The issue is that after the replace, the search-and-replace continues at the position after regtextA, before regtextZ - if I understand the algorithm correctly.
How can I modify the search-and-replace expression in such a way that it does the same thing for regtext2...regtextZ, and all other such occurrences? The text in the end should look like:
text regregtextAtext1 text regregtextZtext2 text end
but it does not happen.
I might have to use the \G anchor, but I have no idea how. For debugging I use regex101.com.
Looking at a previous example, I tried the following code:
$s =~ s{(?:\G(?!\A)|)\K(reg)(text\d+.*?)(regtext[A-Z]+)}{"$1$3$2"}
but it also makes only one replacement - probably because I do not understand exactly how the original code (and \G) works.
I tried the corrected version of the code suggested in the answer, but it takes an "infinity" of time (actually, I forcefully stopped the execution after several minutes), just like in the previous example - even if I limit the execution to only one replacement. The presence of the "while" is "malefic". In the absence of the while, the one replacement happens "instantly".
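For what it's worth, the desired pairing can be reproduced without \G at all: apply one substitution at a time and restart the scan from the beginning of the string, so the regtextN that the previous pass skipped over is found on the next one. A sketch in Python (note that the moved chunk carries its surrounding spacing with it, so whitespace may shift slightly):

```python
import re

# Same pattern as in the question: a "reg" marker, a numbered chunk
# (non-greedy), then the next lettered chunk.
PATTERN = re.compile(r'(reg)(text\d+.*?)(regtext[A-Z]+)', re.S)

def move_chunks(s):
    # One replacement per pass, restarting from position 0 each time,
    # until the string stops changing.
    while True:
        new = PATTERN.sub(r'\1\3\2', s, count=1)
        if new == s:
            return s
        s = new

print(move_chunks("text regtext1 text regtext2 text regtextA regtextZ end"))
```

The same restart-the-scan idea works in Perl as `1 while $s =~ s/(reg)(text\d+.*?)(regtext[A-Z]+)/$1$3$2/s;` - the loop terminates once no match remains, because each pass moves one lettered chunk and the swapped text can no longer match.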
Reschedule 'use VERSION' switch fatalisation to 5.46 We did say we'd do this for 5.44 but we forgot to make the change until now, and it's a bit late in the cycle. We'll reschedule it for 5.46.
perlguts: Refer queries directly to P5P list Currently, readers of this file who encounter problems have to scroll down over 5000 lines to find the "author" to whom questions should be directed. For nearly 30 years that "author" has been P5P, so let's tell the readers that directly.
Make, Bash, and a scripting language of your choice
Creating AWS Resources…let me count the ways
You need to create an S3 bucket, an SQS queue, an IAM policy and a few other AWS resources. But how?…TIMTOWTDI
The Console
- Pros: visual, immediate feedback, no tooling required, great for exploration
- Cons: not repeatable, not version controllable, opaque, clickops doesn’t scale, “I swear I configured it the same way”
The AWS CLI
- Pros: scriptable, composable, already installed, good for one-offs
- Cons: not idempotent by default, no state management, error handling is manual, scripts can grow into monsters
CloudFormation
- Pros: native AWS, state managed by AWS, rollback support, drift detection
- Cons: YAML/JSON verbosity, slow feedback loop, stack update failures are painful, error messages are famously cryptic, proprietary to AWS, subject to change without notice
Terraform
- Pros: multi-cloud, huge community, mature ecosystem, state management, plan before apply
- Cons: state file complexity, backend configuration, provider versioning, HCL is yet another language to learn, overkill for small projects, often requires tricks & contortions
Pulumi
- Pros: real programming languages, familiar abstractions, state management
- Cons: even more complex than Terraform, another runtime to install and maintain
CDK
- Pros: real programming languages, generates CloudFormation, good for large organizations
- Cons: CloudFormation underneath means CloudFormation problems, Node.js dependency
…and the rest of the crew…
Ansible, AWS SAM, Serverless Framework - each with their own opinions, dependencies, and learning curves.
Every option beyond the CLI adds a layer of abstraction, a new language or DSL, a state management story, and a new thing to learn and maintain. For large teams managing hundreds of resources across multiple environments that overhead is justified. For a solo developer or small team managing a focused set of resources it can feel like overkill.
Even in large organizations, not every project should be folded into the corporate IaC tooling. Moreover, not every project gets the attention from the DevOps team necessary to create or support its application infrastructure.
What if you could get idempotent, repeatable, version-controlled
infrastructure management using tools you already have? No new
language, no state backend, no provider versioning. Just make,
bash, a scripting language you’re comfortable with, and your cloud
provider’s CLI.
And yes…my love affair with make is endless.
We’ll use AWS examples throughout, but the patterns apply equally to
Google Cloud (gcloud) and Microsoft Azure (az). The CLI tools
differ, the patterns don’t.
A word about the AWS CLI --query option
Before you reach for jq, perl, or python to parse CLI output,
it’s worth knowing that most cloud CLIs have built-in query
support. The AWS CLI’s --query flag implements JMESPath - a query
language for JSON that handles the majority of filtering and
extraction tasks without any additional tools:
# get a specific field
aws lambda get-function \
    --function-name my-function \
    --query 'Configuration.FunctionArn' \
    --output text

# filter a list
aws sqs list-queues \
    --query 'QueueUrls[?contains(@, `my-queue`)]|[0]' \
    --output text
--query is faster, requires no additional dependencies, and keeps
your pipeline simple. Reach for it first. When it falls short -
complex transformations, arithmetic, multi-value extraction - that’s
when a one-liner earns its place:
# perl
aws lambda get-function --function-name my-function | \
    perl -MJSON -n0 -e '$l=decode_json($_); print $l->{Configuration}{FunctionArn}'

# python
aws lambda get-function --function-name my-function | \
    python3 -c "import json,sys; d=json.load(sys.stdin); print(d['Configuration']['FunctionArn'])"
Both get the job done. Use whichever lives in your shed.
What is Idempotency?
The word comes from mathematics - an operation is idempotent if applying it multiple times produces the same result as applying it once. Sort of like those ID10T errors…no matter how hard or how many times that user clicks on that button they get the same result.
In the context of infrastructure management it means this: running your resource creation script twice should have exactly the same outcome as running it once. The first run creates the resource. The second run detects it already exists and does nothing - no errors, no duplicates, no side effects.
This sounds simple but it’s surprisingly easy to get wrong. A naive
script that just calls aws lambda create-function will fail on the
second run with a ResourceConflictException. A slightly better
script wraps that in error handling. A truly idempotent script never
attempts to create a resource it knows already exists.
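The difference is easy to sketch outside AWS entirely. Here a plain dict stands in for the cloud API, and all the names (create_function, ensure_function, ResourceConflictError) are invented for illustration:

```python
class ResourceConflictError(Exception):
    """Stand-in for AWS's ResourceConflictException."""

FAKE_CLOUD = {}  # name -> resource, standing in for the real service

def create_function(name):
    # Naive create: fails on the second call, like `aws lambda create-function`.
    if name in FAKE_CLOUD:
        raise ResourceConflictError(name)
    FAKE_CLOUD[name] = {"name": name, "arn": f"arn:fake:{name}"}
    return FAKE_CLOUD[name]

def ensure_function(name):
    # Truly idempotent wrapper: query first, create only if missing.
    # Running it N times has exactly the effect of running it once.
    existing = FAKE_CLOUD.get(name)
    if existing is not None:
        return existing
    return create_function(name)

ensure_function("my-function")
ensure_function("my-function")  # second run: no error, no duplicate
```

Wrapping create_function in a bare try/except would hide the conflict but still waste a write call; querying first is both cheaper and honest about state.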
And it works in both directions. The idempotent bug - running a failing process repeatedly and getting the same error every time - is what happens when your failure path is idempotent too. Consistently wrong, no matter how many times you try. The patterns we’ll show are designed to ensure that success is idempotent while failure always leaves the door open for the next attempt.
Cloud APIs fall into four distinct behavioral categories when it comes to idempotency, and your tooling needs to handle each one differently:
Case 1 - The API is idempotent and produces output
Some APIs can be called repeatedly without error and return useful
output each time. aws events put-rule is a good example - it returns
the rule ARN whether the rule was just created or already
existed. The pattern: call the read API first, capture the output,
call the write API only if the read returned nothing.
Case 2 - The API is idempotent but produces no output
Some write APIs succeed silently - they return nothing on
success. aws s3api put-bucket-notification-configuration is a good
example. It will happily overwrite an existing configuration without
complaint, but returns no output to confirm success. The pattern: call
the API, synthesize a value for your sentinel using && echo to
capture something meaningful on success.
Case 3 - The API is not idempotent
Some APIs will fail with an error if you try to create a resource that
already exists. aws lambda add-permission returns
ResourceConflictException if the statement ID already exists. aws
lambda create-function returns ResourceConflictException if the
function already exists. These APIs give you no choice - you must
query first and only call the write API if the resource is missing.
Case 4 - The API call fails
Any of the above can fail - network errors, permission problems,
invalid parameters. When a call fails you must not leave behind a
sentinel file that signals success. A stale sentinel is worse than no
sentinel - it tells Make the resource exists when it doesn’t, and
subsequent runs silently skip the creation step. The patterns: || rm
-f $@ when writing directly, or else rm -f $@ when capturing to a
variable first.
The Sentinel File
Before we look at the four patterns in detail, we need to introduce a concept that ties everything together: the sentinel file.
A sentinel file is simply a file whose existence signals that a task
has been completed successfully. It contains no magic - it might hold
the output of the API call that created the resource, or it might just
be an empty file created with touch. What matters is that it exists
when the task succeeded and doesn’t exist when it hasn’t.
make has used this pattern since the 1970s. When you declare a
target in a Makefile, make checks whether a file with that name
exists before deciding whether to run the recipe. If the file exists
and is newer than its dependencies, make skips the recipe
entirely. If the file doesn’t exist, make runs the recipe to create
it.
For infrastructure management this is exactly the behavior we want:
my-resource:
    @value="$$(aws some-service describe-resource \
        --name $(RESOURCE_NAME) 2>&1)"; \
    if [[ -z "$$value" || "$$value" = "ResourceNotFound" ]]; then \
        value="$$(aws some-service create-resource \
            --name $(RESOURCE_NAME))"; \
    fi; \
    test -e $@ || echo "$$value" > $@
The first time you run make my-resource the file doesn’t exist,
the recipe runs, the resource is created, and the API response
is written to the sentinel file my-resource. The second time you
run it, make sees the file exists and skips the recipe entirely -
zero API calls.
This brings us to the || rm -f $@ discipline. If the API call fails
for any reason, the sentinel file is immediately removed. Without this
a failed create leaves an empty or partial sentinel file. Make sees
the file exists on the next run, skips the recipe, and the resource is
never created. An idempotent bug - consistently broken, silently,
forever.
One more pattern worth noting - test -e $@ || echo "$$value" >
$@. This writes the sentinel only if it doesn’t already
exist. Combined with the initial query this means we never rewrite a
sentinel unnecessarily, avoiding redundant API calls on every make
invocation. The sentinel is written exactly once - on the first
successful run - and never touched again.
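The same discipline can be sketched outside make. The helper below is an invented illustration, not part of any tool: it skips the work when the sentinel exists, writes the sentinel on success, and removes it on failure so the next run tries again.

```python
from pathlib import Path

def run_once(sentinel, create):
    """Run `create` only if `sentinel` is absent.

    On success, write the sentinel exactly once; on failure, remove any
    partial sentinel - the file-based equivalent of `|| rm -f $@`.
    """
    path = Path(sentinel)
    if path.exists():
        return path.read_text()       # zero API calls, like make skipping a target
    try:
        output = create()
        path.write_text(output)       # sentinel written on the first success only
        return output
    except Exception:
        path.unlink(missing_ok=True)  # never leave a stale sentinel behind
        raise
```

The invariant is the one the article states: the sentinel exists if and only if the resource does, so a failed run leaves the door open for the next attempt.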
The Four Patterns
Armed with the sentinel file concept and an understanding of the four API behavioral categories, let’s look at concrete implementations of each pattern.
Pattern 1 - Idempotent API with output
The simplest case. Query the resource first - if it exists capture the output and write the sentinel. If it doesn’t exist, create it, capture the output, and write the sentinel. Either way you end up with a sentinel containing meaningful content.
The SQS queue creation is a good example:
sqs-queue:
    @queue="$$(aws sqs list-queues \
        --query 'QueueUrls[?contains(@, `$(QUEUE_NAME)`)]|[0]' \
        --output text --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$queue" | grep -q 'error\|Error'; then \
        echo "ERROR: list-queues failed: $$queue" >&2; \
        exit 1; \
    elif [[ -z "$$queue" || "$$queue" = "None" ]]; then \
        queue="$(QUEUE_NAME)"; \
        aws sqs create-queue --queue-name $(QUEUE_NAME) \
            --profile $(AWS_PROFILE); \
    fi; \
    test -e $@ || echo "$$queue" > $@
Notice --query doing the filtering work before the output reaches
the shell. No jq, no pipeline - the AWS CLI extracts exactly what we
need. The result is either a queue URL or empty. If empty we
create. Either way $$queue ends up with a value and the sentinel is
written exactly once.
The EventBridge rule follows the same pattern:
lambda-eventbridge-rule:
    @rule="$$(aws events describe-rule \
        --name $(RULE_NAME) \
        --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$rule" | grep -q 'ResourceNotFoundException'; then \
        rule="$$(aws events put-rule \
            --name $(RULE_NAME) \
            --schedule-expression "$(SCHEDULE_EXPRESSION)" \
            --state ENABLED \
            --profile $(AWS_PROFILE))"; \
    elif echo "$$rule" | grep -q 'error\|Error'; then \
        echo "ERROR: describe-rule failed: $$rule" >&2; \
        exit 1; \
    fi; \
    test -e $@ || echo "$$rule" > $@
Same shape - query, create if missing, write sentinel once.
Pattern 2 - Idempotent API with no output
Some APIs succeed silently. aws s3api
put-bucket-notification-configuration is the canonical example - it
happily overwrites an existing configuration and returns nothing. No
output means nothing to write to the sentinel.
The solution is to synthesize a value using &&:
lambda-s3-trigger: lambda-s3-permission
    @function_arn=$$(cat lambda-function | perl -MJSON -n0 -e \
        '$$l=decode_json($$_); print $$l->{Configuration}->{FunctionArn}'); \
    config="{LambdaFunctionConfigurations => \
        [{LambdaFunctionArn => q{$$function_arn}, Events => [qw($(S3_EVENT))]}]}"; \
    config="$$(perl -MJSON -e "printf q{\"%s\"}, encode_json($$config)")"; \
    trigger="$$(aws s3api put-bucket-notification-configuration \
        --bucket $(BUCKET_NAME) \
        --notification-configuration $$config \
        --profile $(AWS_PROFILE) && echo $$config)"; \
    if [[ -n "$$trigger" ]]; then \
        test -e $@ || echo "$$trigger" > $@; \
    else \
        rm -f $@; \
    fi
The && echo $$config is the key. If the API call succeeds the &&
fires and $$trigger gets the config value - something meaningful to
write to the sentinel. If the API call fails && doesn’t fire,
$$trigger stays empty, and the else branch cleans up with rm -f
$@.
This is also where a useful trick emerges for generating shell-safe JSON from a scripting language. The AWS CLI needs the JSON wrapped in double quotes as a single shell argument. Rather than fighting with shell escaping at the point of use, we bake the quotes into the generated value at the point of creation:
# perl
config="$(perl -MJSON -e "printf q{\"%s\"}, encode_json({...})")"
# python
config="$(python3 -c "import json; print('\"' + json.dumps({...}) + '\"')")"
Pattern 3 - Non-idempotent API
Some APIs are not idempotent - they fail with a
ResourceConflictException or similar if the resource already
exists. aws lambda add-permission and aws lambda create-function
are both in this category. There is no “create or update” variant -
you must check existence first and only call the write API if the
resource is missing.
The Lambda S3 permission target is a good example:
lambda-s3-permission: lambda-function s3-bucket
    @permission="$$(aws lambda get-policy \
        --function-name $(FUNCTION_NAME) \
        --profile $(AWS_PROFILE) 2>&1)"; \
    if echo "$$permission" | grep -q 'ResourceNotFoundException' || \
        ! echo "$$permission" | grep -q s3.amazonaws.com; then \
        permission="$$(aws lambda add-permission \
            --function-name $(FUNCTION_NAME) \
            --statement-id s3-trigger-$(BUCKET_NAME) \
            --action lambda:InvokeFunction \
            --principal s3.amazonaws.com \
            --source-arn arn:aws:s3:::$(BUCKET_NAME) \
            --profile $(AWS_PROFILE))"; \
    elif echo "$$permission" | grep -q 'error\|Error'; then \
        echo "ERROR: get-policy failed: $$permission" >&2; \
        exit 1; \
    fi; \
    if [[ -n "$$permission" ]]; then \
        test -e $@ || echo "$$permission" > $@; \
    else \
        rm -f $@; \
    fi
A few things worth noting here…
- get-policy returns the full policy document, which may contain multiple statements - so we check for the presence of s3.amazonaws.com specifically, using ! grep -q, rather than just checking for an empty response. This handles the case where a policy exists but doesn't yet have the S3 permission we need.
- The sentinel is only written if $$permission is non-empty after the if block. This covers the case where get-policy returns nothing and add-permission also fails - the sentinel stays absent and the next make run will try again.
- We capture stderr into our bash variable (2>&1) so we can tell "the resource does not exist" apart from other errors. Combining 2>&1 with specific error-string matching gives you both idempotency and visibility. Swallowing errors silently (2>/dev/null) is how idempotent bugs are born.
Pattern 4 - Failure handling
This isn’t a separate pattern so much as a discipline that applies to all three of the above. There are two mechanisms depending on how the sentinel is written.
When the sentinel is written directly by the command:
aws lambda create-function ... > $@ || rm -f $@
|| rm -f $@ ensures that if the command fails the partial or empty
sentinel is immediately cleaned up. Without it Make sees the file on
the next run and silently skips the recipe - an idempotent bug.
When the sentinel is written by capturing output to a variable first:
if [[ -n "$$value" ]]; then \
    test -e $@ || echo "$$value" > $@; \
else \
    rm -f $@; \
fi
The else rm -f $@ serves the same purpose. If the variable is empty
- because the API call failed - the sentinel is removed. If the
sentinel doesn’t exist yet nothing is written. Either way the next
make run will try again.
In both cases the goal is the same: a sentinel file should only exist when the underlying resource exists. A stale sentinel is worse than no sentinel.
Note also that our Makefiles set .SHELLFLAGS := -ec which causes
make to exit immediately if any command in a recipe fails. This
means commands that don’t write to $@ - like aws sqs create-queue
- don’t need explicit failure handling. make will die loudly and the
sentinel won’t be written.
Conclusion
Creating AWS resources can be done with several different tools - all of them eventually call the AWS APIs and process the returned payloads. Each of these tools has its place. Each adds something. Each also carries its own complexity, dependencies, and learning curve.
For a small project or a focused set of resources - the kind a solo
developer or small team manages for a specific application - you don’t
need tools with a high cognitive or resource load. You can use the
tools already on your belt: make, bash, [insert favorite scripting
language here], and aws. And you can leverage those same tools
equally well with gcloud or az.
The four patterns we’ve covered handle every AWS API behavior you’ll encounter:
- Query first, create only if missing, write a sentinel
- Synthesize output when the API has none
- Always check before calling a non-idempotent API
- Clean up on failure with || rm -f $@
These aren’t new tricks - they’re straightforward applications of
tools that have been around for decades. make has been managing
file-based dependencies since 1976. The sentinel file pattern predates
cloud computing entirely. We’re just applying them to a new problem.
One final thought. The idempotent bug - running a failing process
repeatedly and getting the same error every time - is the mirror image
of what we’ve built here. Our goal is idempotent success: run it once,
it works. Run it again, it still works. Run it a hundred times,
nothing changes. || rm -f $@ is what separates idempotent success
from idempotent failure - it ensures that a bad run always leaves the
door open for the next attempt rather than cementing the failure in
place with a stale sentinel.
Your shed is already well stocked. Sometimes the right tool for the job is the one you’ve had hanging on the wall for thirty years.
Further Reading
- “Advanced Bash-Scripting Guide” - https://tldp.org/LDP/abs/html/index.html
- “GNU Make” - https://www.gnu.org/software/make/manual/html_node/index.html
- Dave Oswald, “Perl One Liners for the Shell” (Perl conference presentation): https://www.slideshare.net/slideshow/perl-oneliners/77841913
- Peteris Krumins, “Perl One-Liners” (No Starch Press): https://nostarch.com/perloneliners
- Sundeep Agarwal, “Perl One-Liners Guide” (free online): https://learnbyexample.github.io/learn_perl_oneliners/
- AWS CLI JMESPath query documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-filter.html
Weekly Challenge 366
It was seven years ago that Mohammad sent out the first challenge to Team PWC (as it was then known). Thank you very much for all your work over the seven years.
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.
Task 1: Count Prefixes
Task
You are given an array of words and a string (containing only lowercase English letters).
Write a script to return the number of words in the given array that are a prefix of the given string.
My solution
This is a one-liner in Python and should be pretty self-explanatory. Given a list called array and a string called prefix, it counts the number of words in the list that match the start of the given string.
def count_prefixes(array: list, prefix: str) -> int:
    return sum(1 for word in array if prefix[:len(word)] == word)
The Perl solution uses the grep function to perform the counting. In a scalar context, grep returns the number of matching items.
sub main (@array) {
    my $prefix = pop(@array);
    my $count  = grep { substr( $prefix, 0, length($_) ) eq $_ } @array;
    say $count;
}
Examples
$ ./ch-1.py a ap app apple banana apple
4
$ ./ch-1.py cat dog fish bird
0
$ ./ch-1.py hello he hello heaven he hello
4
$ ./ch-1.py "" code coding cod coding
3
$ ./ch-1.py p pr pro prog progr progra program program
7
Task 2: Valid Times
Task
You are given a time in the form HH:MM. The earliest possible time is 00:00 and the latest possible time is 23:59. In the string time, the digits represented by the ? symbol are unknown, and must be replaced with a digit from 0 to 9.
Write a script to return the count of different ways we can make it a valid time.
My solution
This is an interesting challenge, as the solution is not straightforward. There are a few approaches that can be taken. One option is to enumerate all 1440 minutes in a day and count how many match the given pattern.
The approach I took was to calculate the number of possible hours and the number of possible minutes, and multiply the two figures to get the result.
I start by using a regular expression to check if the time is valid. As the question mark ? is within square brackets [ ] this is taken as a literal character.
import re

def valid_times(input_string: str) -> int:
    if not re.search(r'^([0-1?][0-9?]|2[0-3?]):[0-5?][0-9?]$', input_string):
        raise ValueError("Input is not in the expected format (HH:MM)")
The next task is calculating the number of valid hours.
- If the hours is ??, then there are 24 valid hours.
- If the first character is a question mark, there are 3 valid hours if the second digit is less than four (e.g. 02 12 22), or 2 if it is 4 or greater (e.g. 04 14).
- If the second character is a question mark, there are 4 valid hours if the first digit is 2, or 10 valid hours otherwise.
- If the hours have no question marks, there is only one valid hour.
# Compute the hours
if input_string[:2] == "??":
    hours = 24
elif input_string[:1] == "?":
    hours = 3 if int(input_string[1:2]) < 4 else 2
elif input_string[1:2] == "?":
    hours = 4 if input_string[:1] == "2" else 10
else:
    hours = 1
Thankfully calculating the number of valid minutes is a little easier.
- If the minutes is ??, then there are sixty valid minutes.
- If the first character is a question mark, then there are six valid minutes (e.g. 06 16 26 36 46 56).
- If the second character is a question mark, there are ten valid minutes (e.g. 50 51 ... 58 59).
- If the minutes have no question marks, there is only one valid minute.
if input_string[3:] == "??":
    minutes = 60
elif input_string[3:4] == "?":
    minutes = 6
elif input_string[4:] == "?":
    minutes = 10
else:
    minutes = 1

return hours * minutes
The Perl solution follows the same logic as the Python solution.
Examples
$ ./ch-2.py ?2:34
3
$ ./ch-2.py ?4:?0
12
$ ./ch-2.py ??:??
1440
$ ./ch-2.py ?3:45
3
$ ./ch-2.py 2?:15
4
Originally published at Perl Weekly 765
Hi there!
I am sending this edition rather late, as I got into a frenzy of online courses that require a lot of preparation, and only now have I had time to work on the Perl Weekly. Sorry for that. In addition, this edition has a lot of excellent articles. What happened? Last time I hardly found any articles and now there are plenty. I am not complaining at all, I was just really surprised. Keep up the blogging so we can share more content!
We have 3 grant reports, 2 reports from GPW, several articles about the use of AI for Perl, and much more. I think one of the keys is that several people have started to write series of articles. So they have a theme and explore it from various aspects.
I realized it too late: as I have been stuck in Hungary for more than a month already, I could have visited the German Perl Workshop in Berlin. I thought about it too late. Anyway, at least there are the reports.
Personally I love testing. It is coding with very fast feedback that helps me stay sane. More or less :-)
Last week I taught a course on Testing in Python, but I thought one about Perl should also be offered. So a few days from now I am going to start teaching a multi-part course about Testing in Perl, on Zoom.
Course attendance is free of charge.
The presentations will be recorded and will be uploaded to the Code Maven Academy where they will be available to paying subscribers.
I hope I'll see many of you and your co-workers at the course. Register here!
Enjoy your week
--
Your editor: Gabor Szabo.
Articles
Perl, the Strange Language That Built the Early Web
A bit of nostalgia and a lot of good insights.
TPRC Talk Submission Deadline extended
The new deadline is April 21, 2026. Go and submit your talk proposal!
Still on the [b]leading edge
The story of a crazy bug. Somewhere. Not in my code. discuss
ANNOUNCE: Perl.Wiki V 1.42 & 2 CPAN::Meta* modules
Beautiful Perl feature: reusable subregexes
Stop Writing Release Notes: Accelerate with AI
Help testing DBD::Oracle
Discussion
Getting a 500 error on my website when running CGI script
Or, how to go from Perl v5.005 to Perl v5.32.1 in one step.
PetaPerl - reimplementation of perl
I have thought several times about trying to reimplement Perl in Rust, and every time I quickly convinced myself not to do it. First of all because it is way beyond my expertise, but also: what would be the value of it? As I understand it, there was a presentation about this at the German Perl Workshop covering the motivation as well. Very interesting. You can read the documentation and see the slides. I am rather excited!
Ambiguous use of ${x} resolved to $x
Code with winter clothes...
Perl and AI
Six Ways to Use AI Without Giving Up the Keys
The titles: 1. Unit Test Writing; 2. Documentation; 3. Release Notes; 4. Bug Triage; 5. Code Review; 6. Legacy Code Deciphering
experiments with claude, part ⅳ: dzilification of MIME-Lite
experiments with claude, part ⅴ: ClaudeLog
experiments with claude, part ⅲ: JMAP-Tester coverage
Grants
Maintaining Perl 5 Core (Dave Mitchell): February 2026
PEVANS Core Perl 5: Grant Report for February 2026
Maintaining Perl (Tony Cook) February 2026
Perl
This week in PSC (218) | 2026-03-16
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Lance Wicks.
The Weekly Challenge - 366
Welcome to a new week with a couple of fun tasks "Count Prefixes" and "Valid Times". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 365
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Alphabet Index Digit Sum" and "Valid Token Counter" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
A Token Alphabet
An informative and thoughtful article that illustrates Raku's fantastic facilities for creating grammars and using tokens to model your own custom alphabet in a pleasing and expressive manner. It strikes a good balance between theory and practice, making uncommon parsing concepts readable while showcasing Raku's idiomatic implementation.
PWC365, Task 2 Valid Token Counter
This solution is implemented in a clean and organised manner. It shows excellent use of list processing in Raku, together with control flow that solves the problem effectively. The concise, logical reasoning in the code makes clear that the author understands the problem, and the result is expressed idiomatically.
Perl Weekly Challenge: Week 365
A well-written and entertaining article that shows Perl and Raku solutions in parallel, demonstrating the author's understanding of the idioms and strengths of both languages. It provides clear logic as well as practical examples, and is helpful in showing the differences and similarities between the two languages while remaining concise and easy to read.
Sum Tokens and Count Digits
This is an intelligently written article that succinctly outlines how to utilise an effective problem-solving methodology without sacrificing either code readability or idiomatic use of language. In addition, the article does a wonderful job of providing clarity as well as technical depth in order to enhance both continuity in reasoning and elegance/instructional value of the solution.
The Weekly Challenge 365
This well-written article provides structure to help readers understand how each Weekly Challenge solution was developed. It combines clear explanations with practical code examples, looking at both how to approach a problem and how to solve it. The author demonstrates an understanding of the problem and of the specific requirements a solution must satisfy to be considered valid, while also giving the reader a fun place to explore various styles of programming in Perl and other languages.
regexps to rule them all!
An organised, well-articulated post that illustrates your consistent, orderly method for completing each week's Challenge with great success in diverse languages, demonstrating both your problem-solving capabilities and your versatility. The explanations are descriptive and practical, and therefore applicable across all the languages. By providing side-by-side examples of the implementations in different programming languages, you have created meaningful comparisons that illustrate each language's distinctive characteristics.
Perl Weekly Challenge 365
A polished write-up, presented in an interesting way, that makes it clear and fun to follow the solving of both parts of the Weekly Challenge, with well-structured solutions and Perl/Raku examples. The examples are easy to read, written clearly and concisely, and demonstrate logic that can be followed by readers of varying abilities.
Are Post Alphabits a Token Breakfast Cereal?
The post is full of energy and fun. It presents a practical, hands-on approach to completing the Weekly Challenge, with appropriate justification and effective use of Perl programming constructs. The solutions demonstrate an excellent understanding of programming basics (particularly lists and strings), and their implementation is both approachable and educational for the reader.
Splitting and Summing and Checking and Counting
A concise README that is thoughtfully organised, with clear explanations and idiomatic code, that makes it easy to replicate your approach. You have demonstrated excellent problem solving and a high level of attention to clarity in your write-up; you have also successfully managed to balance the level of detail and technical depth for other people to follow.
I'll be the smartest bird the world has ever seen!
This is a creative solution that is fun and playful, using a literary reference to solve a technical problem with clarity of thought and personality. The implementation is brief and idiomatic, drawing on the strengths of Perl, and the storytelling makes the solution clear and memorable.
Lots of counting
This is a good example of a solid engineering solution. It shows a structured and clear thinking process, as well as good use of Perl's basic features to accomplish the task at hand. Your implementation is both concise and expressive, demonstrating a mastery of decomposing problems into their components and of clean, idiomatic coding.
The Weekly Challenge - 365: Alphabet Index Digit Sum
This document has been created in a deliberate and orderly way which shows a good understanding of the problem at hand as well as the logic behind arriving at the answer; it also includes attention to detail when implementing the solution. The solution is practically designed as well as creatively developed and uses Perl features thoughtfully to create an efficient and effective answer.
The Weekly Challenge - 365: Valid Token Counter
It is a clear and well thought-out solution that uses a sound problem-solving method, clear reasoning, and clean, idiomatic Perl code. The approach is easy to implement and efficient, and it demonstrates the author's understanding of the problem and attention to edge cases in the implementation.
The Weekly Challenge #365
The post gives a comprehensive introduction to how to use Perl, as well as examples of its many capabilities. Each task has been addressed thoroughly by providing clear explanations and well‑structured code, illustrating the effective and creative use of Perl idiomatic patterns. All of these characteristics make this post an excellent resource for both learning Perl and using Perl as a reference.
Alphabet Digit Counter Token
This post presents a clear, thorough examination of the problem and provides an explanation of the solution to the problem through logical analysis. Roger has created a detailed description of the proposed solution, which includes smaller, clearer explanations and code so that all readers, whether looking for Perl or token-based parsing methods, can easily understand how to implement these methods in their own code.
Counting the index
A concise write-up that clearly illustrates the two parts of the Weekly Challenge: counting an index by transforming alphabet positions into repeated digit sums, and validating tokens via concise logical expressions. It uses both Python and Perl, with a clear explanation of the solutions and examples of practical problem solving and proper implementation.
Weekly collections
NICEPERL's lists
Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.
Event reports
28th German Perl Workshop (2026, Berlin)
It sounds like the German Perl Workshop has become a replacement for the mostly defunct YAPC::EU.
German Perl Workshop 2026 in Berlin
The usual very detailed review by domm.
Events
Perl Maven online: Testing in Perl - part 1
March 26, 2026
Perl Toolchain Summit 2026
April 23-26, 2026
The Perl and Raku Conference 2026
June 26-29, 2026, Greenville, SC, USA
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
update Module CoreList see: bce42ab11583f8e120361de2fd7b341cd0c9fc3e
bisect-runner.pl: Add an example demonstration program that uses a locally installed module.
perlsyn: remove reference to "do SUB" syntax This (long deprecated) syntax was removed in v5.20 (commit 8c74b4142557).
Cross-posted from my blog
Last week, the Perl community came together for the 28th German Perl Workshop. This year, it was held at the Heilandskirche in Berlin Moabit. Excitingly, we had the nave for the presentations.
While the name is still German Perl Workshop, we now draw attendees from all over the globe. Presenters came from India, the US and various European countries. Maybe it is time to announce it as a more international conference again.
Bringing the infrastructure to a Perl Workshop means a lot of additional hardware that we hopefully won't need, like looong HDMI cables, various adapters to HDMI, a bundle of extension cords and duct tape of the non-Perl variant. Lee also brought the EPO recording set for recording the presentations. The set came back with me from Berlin, as its main use nowadays is recording the talks at a German Perl Workshop for later publication.
Organizing a conference usually means that my attention is divided between running the event, chatting with attendees and giving a presentation or two. Luckily other members of Frankfurt.pm and other long-time attendees are always there to lend a hand.
Over the years, we have organized the German Perl Workshop many times. Local organizers for 2027 have already stepped up: next year, we aim for the city of Hannover. We don't have a contract for a venue signed yet, so watch https://www.perl-workshop.de/news for announcements.
Such an event can't happen without the sponsors who support us financially. Let me quickly show their logos here:
I'm currently on a train from Berlin to Strasbourg and then onward to Marseille, traveling from the 28th(!) German Perl Workshop to the Koha Hackfest. I spent a few days after the Perl Workshop in Berlin with friends from school who moved there during/after university, hanging around at their homes and neighborhoods, visiting museums, professional industrial kitchens and other nice and foody places. But I want to review the Perl Workshop, so:
German Perl Workshop
It seems the last time I attended a German Perl Workshop was in 2020 (literally days before the world shut down...), so I've missed a bunch of nice events and chances to meet up with old Perl friends. But even after this longish break it felt a bit like returning home :-)
I traveled to Berlin by sleeper train (which worked without a problem), arriving on Monday morning a few hours before the workshop started. I went to a friend's place (where I'm staying for the week), dumped my stuff, got a bike, and did a nice morning cycle through Tiergarten to the venue. Which was an actual church! And not even a secularized one.
Day 1
After a short introduction and welcome by Max Maischein (starting with a "Willkommen, liebe Gemeinde" fitting the location), he started the workshop with a talk on Claude Code and coding agents. I only recently started to play around a bit with similar tools, so I could relate to a lot of the topics mentioned. And I (again?) need to point out the blog post I Sold Out for $20 a Month and All I Got Was This Perfectly Generated Terraform, which sums up my feelings and experiences with LLMs much better than I could.
Abigail then shared a nice story about how they (Booking.com) sharded a database, twice, using some "interesting" tricks to move the data around while still getting reads from the correct replicas, all with nearly no downtime. Fun, but as "my" projects usually operate on a much smaller scale than Booking, I will probably not try to recreate their solution.
For lunch I met with Michael at a nearby market hall for some Vietnamese food and some planning for the upcoming Perl Toolchain Summit in Vienna.
Lars Dieckow then talked about data types in databases, or rather the lack of more complex types in databases, and how one could still implement such types in SQL. It looks interesting, but is probably a bit too hackish for me to actually use. I guess I'll have to continue handling such cases in code (which of course feels ugly, especially as I've learned to move more and more code into the DB using CTEs and window functions).
Next Flavio S. Glock showed his very impressive progress with PerlOnJava, a Perl distribution for the JVM. Cool, but probably not something I will use (mostly because I don't run Java anywhere, so adding it to our stack would make things more complex).
Then Lars showed us some of his beloved tools in Aus dem Nähkästchen, continuing a tradition started by Sven Guckes (RIP). I am already using some of the tools (realias, fzf, zoxide, htop, ripgrep) but now plan to finally clean up my dotfiles using xdg-ninja.
Now it was time for my first talk at this workshop, on Using class, the new-ish feature available in Perl (since 5.38) providing native keywords for object-oriented programming. I also sneaked in some bibliographic data structures (MAB2 and MARCXML) to share my pain with the attendees. I was a tiny bit (more) nervous, as this was the first time I was using my current laptop (a Framework running Sway/Wayland) with an external projector, but wl-present worked like a charm. After the talk Wolfram Schneider showed me his MAB2->MARC online converter, which might have been a basis for our tool, though writing our own was a "fun" way to learn about MAB2.
The last talk of the day was Lee Johnson with I Bought A Scanner showing us how he got an old (ancient?) high-res foto scanner working again to scan his various film projects. Fun and interesting!
Between the end of the talks and the social event I went for some coffee with Paul Cochrane, and we were joined by Sawyer X and Flavio, and some vegan tiramisu. Paul and I then cycled to the Indian restaurant through some light drizzle and along the Spree, and only then did I realize that Paul had cycled all the way from Hannover to Berlin. I was a bit envious (even though I did in fact cycle to Berlin 16 years ago (oh my, so long ago...)). Dinner was nice, but I did not stay too long.
Day 2
Tuesday started with Richard Jelinek first showing us his rather impressive off-grid house (or "A technocrat's house - 2050s standard") and the software used to automate it, before moving on to the actual topic of his talk, Perl mit AI, which turned out to be about a Perl implementation in Rust called pperl, developed with massive LLM support. It seems to be rather fast. As with PerlOnJava, I'm not sure I really want to use an alternative implementation (and of course pperl is currently marked as "Research Preview — WORK IN PROGRESS — please do not use in production environments"), but maybe I will give it a try when it's more stable. Especially since we now have containers, which make setting up experimental environments much easier.
Then Alexander Thurow shared his Thoughts on (Modern?) Software Development: lots of inspirational (or depressing) quotes and some LLM criticism that had been lacking at the workshop (until now...)
Next up was Lars (again) with a talk on Hierarchien in SQL where we did a very nice derivation on how to get from some handcrafted SQL to recursive CTEs to query hierarchical graph data (DAG). I used (and even talked about) recursive CTEs a few times, but this was by far the best explanation I've ever seen. And we got to see some geizhals internals :-)
Sören Laird Sörries informed us on Digitale Souveränität und Made in Europe and I'm quite proud to say that I'm already using a lot of the services he showed (mailbox, Hetzner, fairphone, ..) though we could still do better (eg one project is still using a bunch of Google services)
Then Salve J. Nilsen (whose name I promise not to mangle anymore) showed us his thoughts on What might a CPAN Steward organization look like?. We already talked about this topic a few weeks ago (in preparation for the Perl Toolchain Summit), so I was not paying a lot of attention (and instead hacked up a few short slides for a lightning talk) - sorry. But in the discussion afterwards Salve clarified that the Cyber Resilience Act applies to all "CE-marked products", and that even a Perl API backend that powers a mobile app running on a smartphone counts as part of a "CE-marked product". Before that, I was under the assumption that only software running on actual physical products needs the attestation. So we should really get this Steward organization going and hopefully even profit from it!
The last slot of the day was filled with the Lightning Talks hosted by R Geoffrey Avery and his gong. I submitted two and got a "double domm" slot, where I hurried through my microblog pipeline (on POSSE and getting not-Twitter tweets from my command line via some gitolite to my self-hosted microblog and then on to Mastodon), followed by taking up Lars' challenge to show stuff from my own "Nähkästchen", in my case gopass and tofi (and some bash pipes) for an easy password manager.
We had the usual mixture of fun and/or informative short talks, but the highlight for me was Sebastian Gamaga, who gave his first talk at a Perl event on How I learned about the problem differentiating a Hash from a HashRef. Good slides, well executed, and showing a problem that I'm quite sure everybody encountered when first learning Perl (and I have to admit I also sometimes mix up hash/ref and regular/curly braces when setting up a hash). Looking forward to a "proper" talk by Sebastian next year :-)
This evening I skipped having dinner with the Perl people, because I had to finish some slides for Wednesday and wanted to hang out with my non-Perl friends. But I've heard that a bunch of people had fun bouldering!
Day 3
I had a job call at 10:00 and (unfortunately) a bug to fix, so I missed the three talks in the morning session and only arrived at the venue during lunch break and in time for Paul Cochrane talking about Getting FIT in Perl (and fit he did get, too!). I've only recently started to collect exercise data (as I got a sport watch for my birthday) and being able to extract and analyze the data using my own software is indeed something I plan to do.
Next up was Julien Fiegehenn on Turning humans into SysAdmins, where he showed us how he used LLMs to adapt his developer mentorship framework to also work for sysadmins, and how he got them (the LLMs, not the fresh sysadmins) to differentiate between Julian and Julien (among other things...)
For the final talk it was my turn again: Deploying Perl apps using Podman, make & gitlab. I'm not too happy with the slides, as I had to rush a bit to finish them and did not properly highlight all the important points. But it still went well (enough), and it seemed that a few people found one of the main points (using bash / make in gitlab CI instead of specifying all the steps directly in .gitlab-ci.yml) useful.
Then Max spoke the closing words and announced the location of next year's German Perl Workshop, which will take place in Hannover! Nice; I've never been there and plan to attend (and maybe join Paul on a bike ride there?)
Summary
As usual, a lot of thanks to the sponsors, the speakers, the orgas and the attendees. Thanks for making this nice event possible!
-
App::cpanminus - get, unpack, build and install modules from CPAN
- Version: 1.7049 on 2026-03-17, with 286 votes
- Previous CPAN version: 1.7048 was 1 year, 4 months, 18 days before
- Author: MIYAGAWA
-
App::HTTPThis - Export the current directory over HTTP
- Version: v0.11.1 on 2026-03-16, with 25 votes
- Previous CPAN version: v0.11.0 was 2 days before
- Author: DAVECROSS
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260318.001 on 2026-03-18, with 25 votes
- Previous CPAN version: 20260315.002 was 3 days before
- Author: BRIANDFOY
-
Crypt::Passphrase - A module for managing passwords in a cryptographically agile manner
- Version: 0.022 on 2026-03-21, with 17 votes
- Previous CPAN version: 0.021 was 1 year, 1 month, 17 days before
- Author: LEONT
-
DBD::Pg - DBI PostgreSQL interface
- Version: 3.20.0 on 2026-03-19, with 103 votes
- Previous CPAN version: 3.19.0 was 4 days before
- Author: TURNSTEP
-
Git::CPAN::Patch - Patch CPAN modules using Git
- Version: 2.5.2 on 2026-03-18, with 45 votes
- Previous CPAN version: 2.5.1
- Author: YANICK
-
JSON - JSON (JavaScript Object Notation) encoder/decoder
- Version: 4.11 on 2026-03-22, with 109 votes
- Previous CPAN version: 4.10 was 3 years, 5 months, 13 days before
- Author: ISHIGAKI
-
JSON::PP - JSON::XS compatible pure-Perl module.
- Version: 4.18 on 2026-03-20, with 22 votes
- Previous CPAN version: 4.17_01 was 2 years, 7 months, 21 days before
- Author: ISHIGAKI
-
Log::Any - Bringing loggers and listeners together
- Version: 1.719 on 2026-03-16, with 69 votes
- Previous CPAN version: 1.718 was 9 months, 14 days before
- Author: PREACTION
-
MetaCPAN::API - (DEPRECATED) A comprehensive, DWIM-featured API to MetaCPAN
- Version: 0.52 on 2026-03-16, with 26 votes
- Previous CPAN version: 0.51 was 8 years, 9 months, 9 days before
- Author: HAARG
-
Module::CoreList - what modules shipped with versions of perl
- Version: 5.20260320 on 2026-03-20, with 44 votes
- Previous CPAN version: 5.20260308 was 11 days before
- Author: BINGOS
-
Net::SSLeay - Perl bindings for OpenSSL and LibreSSL
- Version: 1.96 on 2026-03-21, with 27 votes
- Previous CPAN version: 1.95_03
- Author: CHRISN
-
OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
- Version: 0.7009 on 2026-03-19, with 15 votes
- Previous CPAN version: 0.7008
- Author: ETJ
-
SPVM - The SPVM Language
- Version: 0.990150 on 2026-03-19, with 36 votes
- Previous CPAN version: 0.990149
- Author: KIMOTO
-
Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
- Version: 1.045 on 2026-03-19, with 14 votes
- Previous CPAN version: 1.044 was 10 days before
- Author: CHOROBA
-
TimeDate - Date and time formatting subroutines
- Version: 2.35 on 2026-03-21, with 28 votes
- Previous CPAN version: 2.34_03 was 1 day before
- Author: ATOOMIC
-
Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
- Version: 0.70 on 2026-03-19, with 20 votes
- Previous CPAN version: 0.69
- Author: CHANSEN
-
YAML::Syck - Fast, lightweight YAML loader and dumper
- Version: 1.39 on 2026-03-21, with 18 votes
- Previous CPAN version: 1.38
- Author: TODDR

We are re-opening the talk submissions with a new deadline of April 21, 2026. Please submit your 20 minute talks, and 50 minute talks at https://tprc.us/. Let us know if you need help with your submission or your talk development, because we have mentors who can listen to your ideas and guide you.
We are also taking submissions for interactive sessions. These are sessions that have a theme, but invite maximum audience participation; sessions which take advantage of the gathering of community members that have a wide range of experience and ideas to share. You would introduce the theme and moderate the session. If you have ideas for interactive sessions, but don’t want to moderate them yourself, please go to our wiki to enter your ideas, and maybe someone else will pick up the ball!
I am just curious if anyone can suggest anything else I might try to resolve an issue.
Since the scheduled server maintenance on the 19th, none of my CGI scripts work. I have confirmed the coding is correct (even though I have not made any changes in 5+ years). If I copy my site files to a XAMPP server they run fine. But tech support is unable to find anything wrong and just throws it back at me because it is third-party software.
I have asked them to confirm that my owner permissions are valid and that the perl library is intact, but I have not heard back yet. When I attempt to run any of my CGI scripts, the server generates a 500 error. I have checked everything I can think of on my end. I have 755 permissions set. My files have all been uploaded in ASCII FTP mode. All of my HTML pages load. I have confirmed all of my shebang lines are correct (even though I have not edited them recently).
I am really just wondering if there is anything else I can do to attempt to resolve the issue?
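One further step worth trying (a generic sketch, not something from the original question; the filename test.cgi is hypothetical) is to rule out the scripts themselves with a minimal CGI test. If even this script returns a 500, the fault lies in the server's Perl or CGI configuration rather than in your code:

```perl
#!/usr/bin/perl
# test.cgi - minimal diagnostic CGI script (hypothetical filename).
# Upload it with the same 755 permissions and path as your real scripts.
use strict;
use warnings;

# The header must come first: a missing or malformed header is itself
# a classic cause of "500 Internal Server Error".
print "Content-type: text/plain\n\n";
print "Perl $] is alive\n";
```

If this minimal script works but your real scripts do not, the server's error log is the next place to look, since a 500 response hides the underlying Perl error from the browser.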
About eighteen months ago, I wrote a post called On the Bleading Edge about my decision to start using Perl’s new class feature in real code. I knew I was getting ahead of parts of the ecosystem. I knew there would be occasional pain. I decided the benefits were worth it.
I still think that’s true.
But every now and then, the bleading edge reminds you why it’s called that.
Recently, I lost a couple of days to a bug that turned out not to be in my code, not in the module I was installing, and not even in the module that module depended on — but in the installer’s understanding of modern Perl syntax.
This is the story.
The Symptom
I was building a Docker image for Aphra. As part of the build, I needed to install App::HTTPThis, which depends on Plack::App::DirectoryIndex, which depends on WebServer::DirIndex.
The Docker build failed with this error:
#13 45.66 --> Working on WebServer::DirIndex
#13 45.66 Fetching https://www.cpan.org/authors/id/D/DA/DAVECROSS/WebServer-DirIndex-0.1.3.tar.gz ... OK
#13 45.83 Configuring WebServer-DirIndex-v0.1.3 ... OK
#13 46.21 Building WebServer-DirIndex-v0.1.3 ... OK
#13 46.75 Successfully installed WebServer-DirIndex-v0.1.3
#13 46.84 ! Installing the dependencies failed: Installed version (undef) of WebServer::DirIndex is not in range 'v0.1.0'
#13 46.84 ! Bailing out the installation for Plack-App-DirectoryIndex-v0.2.1.
Now, that’s a deeply confusing error message.
It clearly says that WebServer::DirIndex was successfully installed. And then immediately says that the installed version is undef and not in the required range.
At this point you start wondering if you’ve somehow broken version numbering, or if there’s a packaging error, or if the dependency chain is wrong.
But the version number in WebServer::DirIndex was fine. The module built. The tests passed. Everything looked normal.
So why did the installer think the version was undef?
When This Bug Appears
This only shows up in a fairly specific situation:
- A module uses the modern Perl class syntax
- The module defines a $VERSION
- Another module declares a prerequisite with a specific version requirement
- The installer tries to check the installed version without loading the module
- It uses Module::Metadata to extract $VERSION
- And the version of Module::Metadata it is using doesn't properly understand class
If you don’t specify a version requirement, you’ll probably never see this. Which is why I hadn’t seen it before. I don’t often pin minimum versions of my own modules, but in this case, the modules are more tightly coupled than I’d like, and specific versions are required.
So this bug only appears when you combine:
modern Perl syntax + version checks + older toolchain
Which is pretty much the definition of “bleading edge”.
The Real Culprit
The problem turned out to be an older version of Module::Metadata that had been fatpacked into cpanm.
cpanm uses Module::Metadata to inspect modules and extract $VERSION without loading the module. But the older Module::Metadata didn’t correctly understand the class keyword, so it couldn’t work out which package the $VERSION belonged to.
So when it checked the installed version, it found… nothing.
Hence:
Installed version (undef) of WebServer::DirIndex is not in range ‘v0.1.0’
The version wasn’t wrong. The installer just couldn’t see it.
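You can check whether the Module::Metadata on your own machine understands class with a small experiment (a sketch; My::Demo is a made-up module name). On Module::Metadata 1.000038 or later this should report the version; on older releases it reports undef:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp qw(tempfile);
use Module::Metadata;

# Write a minimal module (My::Demo is a made-up name) that declares
# its $VERSION inside a class block.
my ($fh, $file) = tempfile(SUFFIX => '.pm');
print $fh <<'END';
use v5.38;
use experimental 'class';
class My::Demo {
    our $VERSION = '0.1.3';
}
END
close $fh;

# Module::Metadata only *parses* the file, so this works even on a
# perl too old to run the class feature itself.
my $version = Module::Metadata->new_from_file($file)->version('My::Demo');
print "Module::Metadata $Module::Metadata::VERSION sees: ",
      defined $version ? $version : 'undef', "\n";
```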
As an aside, you may find it amusing to hear an anecdote from my attempts to debug this problem.
I spun up a new Ubuntu Docker container, installed cpanm and tried to install Plack::App::DirectoryIndex. Initially, this gave the same error message. At least the problem was easily reproducible.
I then ran code that was very similar to the code cpanm uses to work out what a module’s version is.
$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'

This displayed an empty string. I was really onto something here. Module::Metadata couldn't find the version.
I was using Module::Metadata version 1.000037 and, looking at the change log on CPAN, I saw this:
1.000038  2023-04-28 11:25:40Z
  - detects "class" syntax

So I upgraded the installed Module::Metadata and reran the one-liner:
$ perl -MModule::Metadata -E'say Module::Metadata->new_from_module("WebServer::DirIndex")->version'
0.1.3

That seemed conclusive. Excitedly, I reran the Docker build.
It failed again.
You’ve probably worked out why. But it took me a frustrating half an hour to work it out.
cpanm doesn’t use the installed version of Module::Metadata. It uses its own, fatpacked version. Updating Module::Metadata wouldn’t fix my problem.
The Workaround
I found a workaround. That was to add a redundant package declaration alongside the class declaration, so older versions of Module::Metadata can still identify the package that owns $VERSION.
So instead of just this:
class WebServer::DirIndex {
    our $VERSION = '0.1.3';
    ...
}

I now have this:
package WebServer::DirIndex;
class WebServer::DirIndex {
    our $VERSION = '0.1.3';
    ...
}

It looks unnecessary. And in a perfect world, it would be unnecessary.
But it allows older tooling to work out the version correctly, and everything installs cleanly again.
The Proper Fix
Of course, the real fix was to update the toolchain.
So I raised an issue against App::cpanminus, pointing out that the fatpacked Module::Metadata was too old to cope properly with modules that use class.
Tatsuhiko Miyagawa responded very quickly, and a new release of cpanm appeared with an updated version of Module::Metadata.
This is one of the nice things about the Perl ecosystem. Sometimes you report a problem and the right person fixes it almost immediately.
When Do I Remove the Workaround?
This leaves me with an interesting question.
The correct fix is “use a recent cpanm”.
But the workaround is “add a redundant package line so older tooling doesn’t get confused”.
So when do I remove the workaround?
The answer is probably: not yet.
Because although a fixed cpanm exists, that doesn’t mean everyone is using it. Old Docker base images, CI environments, bootstrap scripts, and long-lived servers can all have surprisingly ancient versions of cpanm lurking in them.
And the workaround is harmless. It just offends my sense of neatness slightly.
So for now, the redundant package line stays. Not because modern Perl needs it, but because parts of the world around modern Perl are still catching up.
Life on the Bleading Edge
This is what life on the bleading edge actually looks like.
Not dramatic crashes. Not language bugs. Not catastrophic failures.
Just a tool, somewhere in the install chain, that looks at perfectly valid modern Perl code and quietly decides that your module doesn’t have a version number.
And then you lose two days proving that you are not, in fact, going mad.
But I’m still using class. And I’m still happy I am.
You just have to keep an eye on the whole toolchain — not just the language — when you decide to live a little closer to the future than everyone else.
The post Still on the [b]leading edge first appeared on Perl Hacks.
I am currently re-visiting the documentation for Perl's CGI module. In the section about the param() method, there is a warning about using that method in a list context; see here. The warning literally reads:
Warning - calling param() in list context can lead to vulnerabilities if you do not sanitise user input as it is possible to inject other param keys and values into your code. [...]
Then there is an example of what we should not do:
my %user_info = (
id => 1,
name => $q->param('name'),
);
I have understood the warning and the code except one thing:
How can calling param() in list context inject other "param keys" (as the citation calls it) into my code? Could somebody please give an example of a query string or of POST data that lets me reproduce this?
The question is specifically about parameter keys, not about possible multiple values for the same key.
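For what it's worth, the flattening mechanism itself can be sketched without CGI installed: in list context param() returns every value supplied for that key, and the returned list flattens into the surrounding hash constructor. For a hypothetical query string ?name=foo&name=is_admin&name=1, the behaviour would look like this (the values are hard-coded here as a stand-in for what $q->param('name') would return):

```perl
use strict;
use warnings;

# Stand-in for $q->param('name') in LIST context, given the
# hypothetical query string ?name=foo&name=is_admin&name=1
my @name_values = ( 'foo', 'is_admin', 1 );

my %user_info = (
    id   => 1,
    name => @name_values,    # the list flattens into the hash!
);

# %user_info is now ( id => 1, name => 'foo', is_admin => 1 ):
# a key named 'is_admin' has been injected from user input.
print "is_admin injected\n" if exists $user_info{is_admin};
```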
Abstract
Even if you’re skeptical about AI writing your code, you’re leaving time on the table.
Many developers have been slow to adopt AI in their workflows, and that’s understandable. As AI coding assistants become more capable, the anxiety is real - nobody wants to feel like they’re training their replacement. But we’re not there yet. Skilled developers who understand logic, mathematics, business needs and user experience will be essential to guide application development for the foreseeable future.
The smarter play is to let AI handle the parts of the job you never liked anyway - the documentation, the release notes, the boilerplate tests - while you stay focused on the work that actually requires your experience and judgment. You don’t need to go all in on day one. Here are six places to start.
1. Unit Test Writing
Writing unit tests is one of those tasks most developers know they should do more of and few enjoy doing. It’s methodical, time-consuming, and the worst time to write them is when the code reviewer asks if they pass.
TDD is a fine theory. In practice, writing tests before you’ve vetted your design means rewriting your tests every time the design evolves - which is often. Most experienced developers write tests after the design has settled, and that’s a perfectly reasonable approach.
The important thing is that they get written at all. Even a test that
simply validates use_ok(qw(Foo::Bar)) puts scaffolding in place that
can be expanded when new features are added or behavior changes. A
placeholder is infinitely more useful than nothing.
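Such a placeholder is only a handful of lines; a minimal sketch using Test::More (Foo::Bar is a stand-in for whatever module you are testing):

```perl
use strict;
use warnings;
use Test::More;

# Compile-time sanity check only - expand with real cases
# as features are added or behavior changes.
use_ok('Foo::Bar');

done_testing();
```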
This is where AI earns its keep. Feed it a function or a module and it will identify the code paths that need coverage - the happy path, the edge cases, the boundary conditions, the error handling. It will suggest appropriate test data sets including the inputs most likely to expose bugs: empty strings, nulls, negative numbers, off-by-one values - the things a tired developer skips.
You review it, adjust it, own it. AI did the mechanical work of thinking through the permutations. You make sure it reflects how your code is actually used in the real world.
2. Documentation
“Documentation is like sex: when it’s good, it’s very, very good; and when it’s bad, it’s better than nothing.” - said someone somewhere.
Of course, there are developers who justify their disdain for writing documentation with one of two arguments (or both):
- The code is the documentation
- Documentation is wrong the moment it is written
It is true that the single source of truth about what code actually does is the code itself. What the code is supposed to do is what documentation should be all about. When the two diverge, it’s either a defect in the software or a misunderstanding of the business requirement captured in the documentation.
Code that changes rapidly is difficult to document, but the intent of the code is not. Especially now with AI. It is trivial to ask AI to review the current documentation and align it with the code, negating point #2.
Feed AI a module and ask it to generate POD. It will describe what the code does. Your job is to verify that what it does is what it should do - which is a much faster review than writing from scratch.
3. Release Notes
If you’ve read this far you may have noticed the irony - this post was written by someone who just published a blog post about automating release notes with AI. So consider this section field-tested.
Release notes sit at the intersection of everything developers dislike: writing prose, summarizing work they’ve already mentally moved on from, and doing it with enough clarity that non-developers can understand what changed and why it matters. It’s the last thing standing between you and shipping.
The problem with feeding a git log to AI is that git logs are written for developers in the moment, not for readers after the fact. “Fix the thing” and “WIP” are not useful release note fodder.
The better approach is to give AI real context - a unified diff, a file manifest, and the actual source of the changed files. With those three inputs AI can identify the primary themes of a release, group related changes, and produce structured notes that actually reflect the architecture rather than just the line changes.
A simple make release-notes target can generate all three assets
automatically from your last git tag. Upload them, prompt for your
preferred format, and you have a first draft in seconds rather than
thirty minutes. Here’s how I built
it.
You still edit it. You add color, context, and the business rationale that only you know. But the mechanical work of reading every diff and turning it into coherent prose? Delegated.
4. Bug Triage
Debugging can be the most frustrating and the most rewarding experience for a developer. Most developers are predisposed to love a puzzle, and there is nothing more puzzling than a race condition or a dangling pointer. Even though books and posters have been written about debugging, it is sometimes difficult to know exactly where to start.
Describe the symptoms, share the relevant code, toss your theory at it. AI will validate or repudiate without ego - no colleague awkwardly telling you you’re wrong. It will suggest where to look, what telemetry to add, and before you know it you’re instrumenting the code that should have been instrumented from the start.
AI may not find your bug, but it will be a fantastic bug buddy.
5. Code Review
Since I’ve started using AI I’ve found that one of the most valuable things I can do with it is to give it my first draft of a piece of code. Anything more than a dozen or so lines is fair game.
Don’t waste your time polishing a piece of lava that just spewed from your noggin. There’s probably some gold in there and there’s definitely some ash. That’s ok. You created the framework for a discussion on design and implementation. Before you know it you have settled on a path.
AI’s strength is pattern recognition. It will recognize when your code needs to adopt a different pattern or when you nailed it. Get feedback. Push back. It’s not a one-way conversation. Question the approach, flag the inconsistencies that don’t feel right - your input into that review process is critical in evolving the molten rock into a solid foundation.
6. Legacy Code Deciphering
What defines “legacy code”? It’s a great question and a hard one to answer. And not to get too racy again, but as has been said of pornography: I can’t exactly define it, but I know it when I see it.
Fortunately (and yes, I do mean fortunately) I have been involved in maintaining legacy code since the day I started working for a family-run business in 1998. The code I maintained there was literally born in the late ’70s and still, to this day, generates millions of dollars. You will never learn more about coding than by maintaining legacy code.
These are the major characteristics of legacy code from my experience (in order of visibility):
- It generates so much money for a company they could not possibly think of it being unavailable.
- It is monolithic and may in fact consist of modules in multiple languages.
- It is grown organically over the decades.
- It is more than 10 years old.
- The business rules are not documented, opaque and can only be discerned by a careful reading of the software. Product managers and users think they know what the software does, but probably do not have the entire picture.
- It cannot easily be re-written (by humans) because of #5.
- It contains as much dead code - code that no longer serves any useful purpose - as it does useful code.
I once maintained a C program that searched an ISAM database of legal judgments. The code had been ported from a proprietary in-memory binary tree implementation and was likely older than most of the developers reading this post. The business model was straightforward and terrifying - miss a judgment and we indemnify the client. Every change had to be essentially idempotent. You weren’t fixing code, you were performing surgery on a patient who would sue you if the scar was in the wrong place.
I was fortunate - there were no paydays for a client on my watch. But I wish I’d had AI back then. Not to write the code. To help me read it.
Now, where does AI come in? Points 5, 6, and definitely 7.
Throw a jabberwocky of a function at AI and ask it what it does. Not what it should do - what it actually does. The variable names are cryptic, the comments are either missing or lying, and the original author left the company during the Clinton administration. AI doesn’t care. It reads the code without preconception and gives you a plain English explanation of the logic, the assumptions baked in, and the side effects you never knew existed.
That explanation becomes your documentation. Those assumptions become your unit tests. Those side effects become the bug reports you never filed because you didn’t know they were bugs.
Dead code is where AI particularly shines. Show it a module and ask what’s unreachable. Ask what’s duplicated. Ask what hasn’t been touched in a decade but sits there quietly terrifying anyone who considers deleting it. AI will give you a map of the minefield so you can walk through it rather than around it forever.
Along the way AI will flag security vulnerabilities you never knew were there - input validation gaps, unsafe string handling, authentication assumptions that made sense in 1998 and are a liability today. It will also suggest where instrumentation is missing, the logging and telemetry that would have made every debugging session for the last twenty years shorter. You can’t go back and add it to history, but you can add it now before the next incident.
The irony of legacy code is that the skills required to understand it - patience, pattern recognition, the ability to hold an entire system in your head - are exactly the skills AI complements rather than replaces. You still need to understand the business. AI just helps you read the hieroglyphics.
Conclusion
None of the six items on this list require you to hand over the keys. You are still the architect, the decision maker, the person who understands the business and the user. AI is the tireless assistant who handles the parts of the job that drain your energy without advancing your craft.
The developers who thrive in the next decade won’t be the ones who resisted AI the longest. They’ll be the ones who figured out earliest how to delegate the tedious, the mechanical, and the repetitive - and spent the time they saved on the work that actually requires a human.
You don’t have to go all in. Start with a unit test. Paste some legacy code and ask AI to explain it or document it. Think of AI as that senior developer you go to with the tough problems - the one who has seen everything, judges nothing, and is available at 3am when the production system is on fire.
Only this one never sighs when you knock on the door.
Available now from my Wiki Haven: Perl.Wiki.html V 1.42 & the JSTree version.
Also, I've uploaded 2 modules to CPAN:
1: CPAN::MetaCurator V 1.13
2: CPAN::MetaPackager V 1.00
Q & A:
1: What is the relationship between these 2 modules?
CPAN::MetaPackager's scripts/build.db.sh inputs
a recent version of the Perl file 02packages.details.txt,
and outputs an SQLite file called cpan.metapackager.sqlite (15Mb).
The latter ships with the module.
Future versions of this module will use the differences between the db
and newer versions of 02packages.details.txt to do the usual thing of
add/change/delete entries in cpan.metapackager.sqlite.
2: CPAN::MetaCurator's scripts/build.db.sh inputs
a JSON export from Perl.Wiki.html called tiddlers.json,
and outputs an SQLite file called cpan.metacurator.sqlite (15Mb).
The latter ships with the module.
Then scripts/export.tree.sh outputs a file called cpan.metacurator.tree.html.
This latter file is a JSTree version of Perl.Wiki.html, as mentioned above.
Note: if you set the env var INCLUDE_PACKAGES to 1 before running export.tree.sh,
the code will read the cpan.metapackager.sqlite table 'packages', which changes
the output tree a bit, since the code then knows the names of modules released
to CPAN.
Weekly Challenge 365
Each week Mohammad S. Anwar sends out The Weekly Challenge, a chance for all of us to come up with solutions to two weekly tasks. My solutions are written in Python first, and then converted to Perl. Unless otherwise stated, Copilot (and other AI tools) have NOT been used to generate the solution. It's a great way for us all to practice some coding.
Task 1: Alphabet Index Digit Sum
Task
You are given a string $str consisting of lowercase English letters, and an integer $k.
Write a script to convert a lowercase string into numbers using alphabet positions (a=1 … z=26), concatenate them to form an integer, then compute the sum of its digits repeatedly $k times, returning the final value.
My solution
This is a task of two parts. The first is to take the letters from input_string (as str is a reserved word in Python) and build a number. For this I use string.ascii_lowercase.index(letter)+1 for each letter and append the result to the digits variable. The +1 is because alphabet positions start at 1, while Python indices start at 0.
import string

def aid_sum(input_string: str, k: int) -> int:
digits = ''
for letter in input_string:
try:
digits += str(string.ascii_lowercase.index(letter)+1)
except ValueError:
raise ValueError(
f"The character '{letter}' does not appear to be a lower case letter"
)
The second part is to compute the sum of all the digits a specified number of times. For this I use a loop; it's a little clunky, as Python treats strings and integers differently. If I'm down to a single digit, I exit the loop early, as further repetitions won't change the result.
for _ in range(k):
digits = str(sum(int(i) for i in digits))
if len(digits) == 1:
break
return int(digits)
As Perl doesn't care about strings vs integers (with a few exceptions), the code is more straightforward. The index function is used to find the position of the letter in the alphabet.
use List::Util qw(sum);    # provides sum()

sub main ( $input_string, $k ) {
my $alphabet = join( "", "a" .. "z" );
my $digits = '';
foreach my $letter ( split //, $input_string ) {
my $idx = index( $alphabet, $letter );
if ( $idx == -1 ) {
die
"The character '$letter' does not appear to be a lower case letter\n";
}
$digits .= $idx + 1;
}
foreach ( 1 .. $k ) {
$digits = sum( split //, $digits );
}
say $digits;
}
Examples
$ ./ch-1.py abc 1
6
$ ./ch-1.py az 2
9
$ ./ch-1.py cat 1
6
$ ./ch-1.py dog 2
8
$ ./ch-1.py perl 3
6
Task 2: Valid Token Counter
Task
You are given a sentence.
Write a script to split the given sentence into space-separated tokens and count how many are valid words. A token is valid if it contains no digits, has at most one hyphen surrounded by lowercase letters, and at most one punctuation mark (!, ., ,) appearing only at the end.
My solution
This is a challenge where a regular expression can be used to solve the problem. In both the Python and Perl solutions, the regular expression used is ^[a-z]+(\-[a-z]+)?[!,\.]?$.
Breaking each part down:
- ^ indicates the start of the string
- [a-z]+ means one or more lower case letters
- (\-[a-z]+)? means optionally (the question mark) a hyphen and one or more lowercase letters
- [!,\.]? means optionally an exclamation mark, comma or full stop
- $ means the end of the string
This is a one-liner in Python:
def valid_token_counter(input_string: str) -> int:
return sum(
1 for word in input_string.split()
if re.search(r'^[a-z]+(\-[a-z]+)?[!,\.]?$', word)
)
The Perl solution is also one line (plus an extra line to display the answer). The grep function returns the number of matches in scalar context.
sub main ($input_string) {
my $count = grep { /^[a-z]+(\-[a-z]+)?[!,\.]?$/ } split /\s+/,
$input_string;
say $count;
}
Examples
$ ./ch-2.py "cat and dog"
3
$ ./ch-2.py "a-b c! d,e"
2
$ ./ch-2.py "hello-world! this is fun"
4
$ ./ch-2.py "ab- cd-ef gh- ij!"
2
$ ./ch-2.py "wow! a-b-c nice."
2
Answer
You can configure grub in several ways: you can pin a specific kernel, tell grub to use the latest one, or have it pick one from a selection.
One specific kernel
If you inspect /boot/grub/grub.cfg you’ll see entries like this:
# the \ are mine, these are usually one big line but for blog purposes I
# multilined them
menuentry 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64' --class debian \
--class gnu-linux --class gnu --class os $menuentry_id_option \
'gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4' {
You can use either of two identifiers to configure grub: the title 'Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64', or the $menuentry_id_option value gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4.
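For example, to pin grub to that exact kernel you could set GRUB_DEFAULT in /etc/default/grub (a sketch for Debian-style systems; the id value must come from your own grub.cfg):

```shell
# /etc/default/grub
# Either the menu title works ...
#GRUB_DEFAULT="Debian GNU/Linux GNU/Linux, with Linux 6.12.8-amd64"
# ... or, more robustly, the $menuentry_id_option value:
GRUB_DEFAULT="gnulinux-6.12.8-amd64-advanced-5522bbcf-dc03-4d36-a3fe-2902be938ed4"
```

Then regenerate the config with update-grub (or grub-mkconfig -o /boot/grub/grub.cfg). Note that if the entry sits inside a submenu such as "Advanced options", you have to prefix the submenu's own title or id plus a > separator.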
All three of us attended this long meeting, covering quite a bit of ground:
CVE-2026-3381 obliges us to cut a 5.42.2 point release with an updated Compress::Raw::Zlib.
We accepted Philippe’s and Eric’s offer to handle the last dev releases of the cycle.
Olaf Alders requested more explicit EOL notices and has updated perlpolicy.pod and the release manager guide accordingly. We agreed that the release announcement mails for the final dev release and the stable release should also contain a brief note about the perl version which is falling out of support, and filed an issue to make this happen.
We sent mail to kick off the voting process for some new core team member candidates.
We discussed the state of Devel::PPPort. It has been outdated for some time and needs to be unstuck.
We would like to get customize.dat down to the only entry that cannot be removed (for version.pm). We will try to coordinate with maintainers.
We noticed that we missed the deprecation of multiple use VERSION declarations in the same scope, which was supposed to be fatalized in 5.44. It is too late now to do that in this dev cycle, so the warning will have to change to 5.46 and the deprecation revisited next cycle.
Further on the topic of overlooked deprecations, we considered how to prevent this from continuing to happen. We decided that some kind of documentation of recurring PSC obligations during a cycle is needed, which would also include things like the contentious changes freeze and release blocker triage.
There was not much time left for release blocker triage, so we only did a little, which surfaced no candidate blockers so far. (A few already-definite blockers have been spotted and marked outside of triage.)
Beautiful Perl series
This post is part of the beautiful Perl features series.
See the introduction post for general explanations about the series.
Perl is famous for its regular expressions (in short: regexes): this technology had been known for a long time, but Perl was probably the first general-purpose programming language to integrate them into the core. Perl also augmented the domain-specific sublanguage of regular expressions with a large collection of extended patterns; some of these were later adopted by many other languages or products under the name "Perl-compatible regular expressions".
The whole territory of regular expressions is a vast topic; today we will merely focus on one very specific mechanism, namely the ability to define reusable subregexes within one regex. This powerful feature is an extended pattern not yet adopted by other programming languages, except those that rely on the PCRE library, a C library meant to be used outside of Perl but with a regex dialect very close to Perl's. PHP and R are examples of such languages.
A glimpse at Perl extended patterns
Among the extended patterns of Perl regular expressions are:
- recursive subpatterns. The matching process can recurse, so it becomes possible to match nested structures, like parentheses nested at several levels. You may have read previously in several places (even in Perl's own FAQ documentation!) that regular expressions cannot parse HTML or XML ... but with recursive patterns this is no longer true!
- conditional expressions, where the result of a subpattern can determine where to branch for the rest of the match.
These mechanisms are extremely powerful, but quite hard to master; therefore they are seldom written directly by Perl programmers. The syntax is a bit awkward, due to the fact that when extended expressions were introduced, the syntax for new additional constructs had to be carefully chosen so as to avoid any conflict with existing constructs. Fortunately, some CPAN modules like Regexp::Common help to generate such regular expressions. Probably the most advanced of those is Damian Conway's Regexp::Grammars, an impressive tour de force able to compile recursive-descent grammars into Perl regular expressions! But grammars can also be written without any helper module: an example of a hand-written grammar can be seen in the perldata documentation, describing how Perl identifiers are parsed.
The DEFINE keyword
For this article we will narrow down to a specific construct at the intersection between recursive subpatterns and conditional expressions, namely the DEFINE keyword for defining named subpatterns. Just as you would split a complex algorithm into subroutines, here you can split a complex regular expression into subpatterns! The syntax is (?(DEFINE)(?<name>pattern)...) . An insertion of a named subpattern is written as (?&name) and can appear before the definition. Indeed, good practice as recommended by perlre is to start the regex with the main pattern, including references to subpatterns, and put the DEFINE part with definitions of subpatterns at the end.
The following example, borrowed from perlretut, illustrates the use of named subpatterns for parsing floating point numbers:
/^ (?&osg)\ * ( (?&int)(?&dec)? | (?&dec) )
(?: [eE](?&osg)(?&int) )?
$
(?(DEFINE)
(?<osg>[-+]?) # optional sign
(?<int>\d++) # integer
(?<dec>\.(?&int)) # decimal fraction
)/x
The DEFINE part doesn't consume any input, its sole role is to define the named subpatterns osg, int and dec. Those subpatterns are referenced from the main pattern at the top of the regex. Subpatterns improve readability and avoid duplication.
Example: detecting cross-site scripting attacks
Let's put DEFINE to work on a practical problem: preventing cross-site scripting attacks (abbreviated 'XSS') against web sites.
XSS attacks try to inject executable code in the inputs to the web site. The web server might then store such inputs, without noticing that these are not regular user data; later, when displaying a new web page that integrates that data, the malicious code becomes part of the generated page and is executed by the browser. The OWASP cheat sheet lists various techniques for performing such attacks.
Looking at the list, one can observe three main patterns for injecting executable javascript in an HTML page:
- within a <script> tag;
- within event-handling attributes to HTML nodes or SVG nodes, e.g. onclick=..., onblur=..., etc.;
- within hyperlinks to javascript: URLs.
Attacks through the third pattern are the most pernicious because of a surprising aspect of the URL specification: it admits ASCII control characters or whitespace intermixed with the protocol part of the URL! As a result, a URL with embedded tabs, newlines, null or space characters, like ja\tvas\ncript\x00:alert('XSS'), is valid according to Web standards.
Many sources about XSS prevention take the position that input filtering is too hard, because of the large number of possible combinations, and therefore any approach based on regular expressions is doomed to be incomplete. Instead, they recommend approaches based on output filtering, where any user data injected into a Web page goes through an encoding process that makes sure that the characters cannot become executable code. The weak point of such approaches is that malicious code can nevertheless be stored on the server side, which is not very satisfactory intellectually, even if that code is made innocuous.
With the help of DEFINE, we can adopt another approach: perform sophisticated input filtering that will catch most malicious attacks. Here is a regular expression that successfully detects all XSS attacks listed in the OWASP cheat sheet:
my $prevent_XSS = qr/
( # capturing group
<script # embedded <script ...> tag
| # .. or ..
\b on\w{4,} \s* = # event handler: onclick=, onblur=, etc.
| # .. or ..
\b # inline 'javascript:' URL, possibly mixed with ASCII control chars
j (?&url_admitted_chars)
a (?&url_admitted_chars)
v (?&url_admitted_chars)
a (?&url_admitted_chars)
s (?&url_admitted_chars)
c (?&url_admitted_chars)
r (?&url_admitted_chars)
i (?&url_admitted_chars)
p (?&url_admitted_chars)
t (?&url_admitted_chars) :
) # end of capturing group
(?(DEFINE) # define the reusable subregex
(?<url_admitted_chars> [\x00-\x20]* ) # 0 or more ASCII control characters or space
)
/xi;
The url_admitted_chars subpattern matches any sequence of ASCII control characters or space (characters between hexadecimal positions 00 and 20 in the ASCII table); that subpattern is inserted after every single character of the javascript: word, so it will detect all possible combinations of embedded tabs, newlines, null characters or other exotic sequences.
All that remains to be done is to apply the $prevent_XSS regex to all inputs; depending on your Web architecture, this can be implemented easily at the intermediate layers of Catalyst or Mojolicious, or also at the level of Plack middleware.
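As a rough sketch of that last step (assuming a %params hash of already-decoded request inputs; the field names and error handling are illustrative):

```perl
# Reject the request if any decoded input matches the XSS pattern;
# $prevent_XSS is the regex defined above, $1 its capturing group.
for my $field ( keys %params ) {
    if ( $params{$field} =~ $prevent_XSS ) {
        die "Rejected field '$field': suspected XSS payload '$1'\n";
    }
}
```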
Needless to say, this approach is not a substitute, but rather a complement to common output encoding techniques to enforce even better protection against XSS attacks.
Conclusion
Even if many other programming languages have now included regular expression features, Perl remains the king in that domain, with extended patterns that open a whole new world of possibilities. With recursive patterns and with the DEFINE feature, Perl regexes can implement recursive-descent grammars, and the Regexp::Grammars module is here to help in using such functionalities. At a more modest level, the DEFINE mechanism helps to reuse subpatterns in hand-crafted regexes. What a beautiful feature!
About the cover picture
The image is an excerpt from Bach's fugue BWV 878 in the second book of the Well-Tempered Clavier. In these bars, the main theme is reused in diminution, where the note durations are halved with respect to the original presentation. A nice musical example of a subpattern!
The Problem: Generating Release Notes is Boring
You’ve just finished a marathon refactoring - perhaps splitting a monolithic script into proper modules - and now you need to write the release notes. You could feed an AI a messy git log, but if you want high-fidelity summaries that actually understand your architecture, you need to provide better context.
The Solution: AI Loves Boring Tasks
…and is pretty good at them too!
Instead of manually describing changes or hoping it can interpret my ChangeLog, I’ve automated the production of three ephemeral “Sidecar” assets. These are generated on the fly, uploaded to the LLM, and then purged after analysis - no storage required.
The Assets
- The Manifest (.lst): A simple list of every file touched, ensuring the AI knows the exact scope of the release.
- The Logic (.diffs): A unified diff (using git diff --no-ext-diff) that provides the “what” and “why” of every code change.
- The Context (.tar.gz): This is the “secret sauce.” It contains the full source of the changed files, allowing the AI to see the final implementation - not just the delta.
The Makefile Implementation
If you’ve read any of my blog
posts you
know I’m a huge Makefile fan. To automate this I’m naturally going
to add a recipe to my Makefile or Makefile.am.
First, we explicitly set the shell to /usr/bin/env bash to ensure features
like brace expansion work consistently across all dev environments.
# Ensure a portable bash environment for advanced shell features
SHELL := /usr/bin/env bash
.PHONY: release-notes clean-local
# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)
release-notes:
@curr_ver=$(VERSION); \
last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
diffs="release-$$curr_ver.diffs"; \
diff_list="release-$$curr_ver.lst"; \
diff_tarball="release-$$curr_ver.tar.gz"; \
echo "Comparing $$last_tag to current $$curr_ver..."; \
git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
ls -alrt release-$$curr_ver*
clean-local:
@echo "Cleaning ephemeral release assets..."
rm -f release-*.{tar.gz,lst,diffs}
Breaking Down the Recipe
- The Shell Choice (/usr/bin/env bash): We avoid hardcoding paths to ensure the script finds the correct Bash on macOS, Linux, or inside a container.
- The Version Override (VERSION ?=): This allows the “pre-flight” trick: running make release-notes VERSION=HEAD to iterate on notes before you’ve actually tagged the release.
- Smart Tag Discovery (--sort=-v:refname): Using v:refname forces Git to use semantic versioning logic (so 1.10.0 correctly follows 1.2.0), while the glob pattern filters out “noisy” non-version tags.
- The Diff Filter (--diff-filter=AMR): This ensures the tarball only includes files that actually exist (Added, Modified, or Renamed). If a release deleted a file, this filter prevents tar from erroring out when it can’t find the missing file on disk.
- The Cleanup Crew (clean-local): Removes the ephemeral artifacts using bash brace expansion.
The AI Prompt
Once your assets are generated, upload them to AI and use a prompt that enforces your specific formatting standards.
The Prompt:
I have uploaded a file manifest (`.lst`), a unified diff
(`.diffs`), and a tarball of the source code (`.tar.gz`). Please
analyze these to identify the primary themes of this
release. Generate a release entry using this protocol:
1. Separate versions with 71 dashes.
2. Include a high-level executive summary paragraph.
3. Indent bullet points exactly 21 spaces to maintain a vertical column.
4. Group changes under category labels like `Core Bedrock:` or `Miniserver:`.
5. Provide plain-text ASCII only inside a code block.
The Result
In a recent dry run for Bedrock 3.8.8, this exact workflow allowed the
AI to correctly identify that bedrock-miniserver.pl.in was
refactored into Bedrock::MiniServer and Bedrock::MiniServer::CLI,
while accurately noting that Bedrock::Error moved to the
Bedrock-Core distribution.
In actuality, version 3.8.8 focuses on the primary goals of MiniServer maturation (more batteries included), Apache decoupling, and distribution hygiene. Here’s AI’s version:
-----------------------------------------------------------------------
Bedrock 3.8.8
-----------------------------------------------------------------------
This release focuses on the maturation of the MiniServer component,
decoupling core Bedrock logic from Apache-specific dependencies, and
improving distribution hygiene. Key architectural changes include
refactoring the MiniServer into modular components and moving
exception handling into the Bedrock-Core distribution.
2026-03-17 - 3.8.8 - MiniServer Maturation and Apache Decoupling

Miniserver:
                     - Refactored bedrock-miniserver.pl into modular
                       Bedrock::MiniServer and Bedrock::MiniServer::CLI.
                     - Implemented zero-config scaffolding to
                       automatically create application trees.
                     - Integrated full Bedrock configuration pipeline
                       for parity with Apache environments.
                     - Updated bedrock_server_config to support both
                       getter and setter operations.

Core:
                     - Moved Bedrock::Error and Bedrock::Exception to
                       the Bedrock-Core distribution.
                     - Introduced Bedrock::FauxHandler as a production-
                       ready alias for test handlers.
                     - Added dist_dir() to BLM::Startup::Bedrock to
                       expose distribution paths to templates.

Fixes:
                     - Demoted Apache-specific modules (mod_perl2,
                       Apache2::Request) to optional recommendations.
                     - Improved Bedrock::Test::FauxHandler to handle
                       caller-supplied loggers and safe destruction.
Conclusion
As I mentioned in a response to a recent Medium article, AI can be an accelerator for seasoned professionals. You’re not cheating. You did the work. AI does the wordsmithing. You edit, add color, and ship. What used to take 30 minutes now takes 3. Now that’s working smarter, not harder!
Pro-Tip
Add this to the top of your Makefile:
SHELL := /usr/bin/env bash
# Default to the version file, but allow command-line overrides
VERSION ?= $(shell cat VERSION)
Copy this to a file named release-notes.mk:
.PHONY: release-notes clean-local

release-notes:
	@curr_ver=$(VERSION); \
	last_tag=$$(git tag -l '[0-9]*.[0-9]*.[0-9]*' --sort=-v:refname | head -n 1); \
	diffs="release-$$curr_ver.diffs"; \
	diff_list="release-$$curr_ver.lst"; \
	diff_tarball="release-$$curr_ver.tar.gz"; \
	echo "Comparing $$last_tag to current $$curr_ver..."; \
	git diff --no-ext-diff "$$last_tag" "$$curr_ver" > "$$diffs"; \
	git diff --name-only --diff-filter=AMR "$$last_tag" "$$curr_ver" > "$$diff_list"; \
	tar -cf - -T "$$diff_list" --transform "s|^|release-$$curr_ver/|" | gzip > "$$diff_tarball"; \
	ls -alrt release-$$curr_ver*

clean-local:
	@echo "Cleaning ephemeral release assets..."
	rm -f release-*.{tar.gz,lst,diffs}
Then include release-notes.mk from your Makefile:
include release-notes.mk
A small group of volunteers continues to maintain the DBD::Oracle driver without any sponsorship or funding.
They have curated yet another dev release, which has hit CPAN in the form of v1.91_5 - probably the last dev release before a new version.
The changes have grown quite large, so it will probably be released as 1.95 or something similar to give it some distance from the last release. For that same reason, I am hoping that anyone using DBD::Oracle in their stack will help us test it out.
Given the nature of Oracle, it is very challenging to test exhaustively, despite a quite respectable set of unit tests. There are several currently supported OSes and architectures, plus supported client and server versions, on top of Perl versions - and then all the historical OSes and architectures (and client, server, and Perl versions) still running in production. Support for all of these is, as I mentioned, entirely up to volunteers and folks sending in fixes or enhancements.
Because of this, my suggestion is to always set up CI (or similar) with your specific versions and test every DBD::Oracle release before deploying it - rather than YOLO-ing it with cpanm.
So, I invite everyone who can to run the dev release through its paces, and even try it in production where possible. Please send feedback via GitHub.

Dave writes:
Last month I worked on various miscellaneous issues, including a few performance and deparsing regressions.
Summary:
* 3:00 GH #24110 ExtUtils::ParseXS after 5.51 prevents some XS modules from building
* 2:49 GH #24212 goto void XSUB in scalar context crashes
* 7:19 XS: avoid core distros using void ST(0) hack
* 2:40 fix up Deparse breakage
* 5:41 remove OP_NULLs in OP_COND execution path
Total: 21:29 (HH:MM)

Paul writes:
Not too much activity of my own this month, as I spent a lot of my Perl time working on other things like magic-v2, or on the CPAN module ecosystem, such as Future::IO. Plus I had a stage show to finish building props for, and to manage the running of.
But I did manage to do:
- 3 = Continue work on attributes-v2 and write a provisional PR for the first stage
- https://github.com/Perl/perl5/pull/24171
- 3 = Bugfix in class.c in threaded builds
- https://github.com/Perl/perl5/issues/24150
- https://github.com/Perl/perl5/pull/24171
- 1 = More foreach lvref neatening
- https://github.com/Perl/perl5/pull/24202
- 3 = Various github code reviews
Total: 10 hours
Now that both attributes-v2 and magic-v2 are parked awaiting the start of the 5.45.x development cycle, most of my time until then will be spent on building up some more exciting features to launch those with, as well as continuing to focus on fixing any release-blocker bugs for 5.44.

Tony writes:
```
[Hours] [Activity]

2026/02/02 Monday
 0.08 #24122 review updates and comment
 0.17 #24063 review updates and apply to blead
 0.28 #24062 approve with comment and bonus comment
 0.92 #24071 review updates and approve
 0.40 #24080 review updates, research and comment
 0.18 #24122 review updates and approve
 0.27 #24157 look into it and original ticket, comment on original ticket
 0.58 #24134 review and comments
 0.27 #24144 review and approve with comment
 0.18 #24155 review and comment
 0.48 #16865 debugging
 0.90 #16865 debugging, start a bisect with a better test case
 4.71

2026/02/03 Tuesday
 0.17 review steve’s suggested maint-votes and vote
 0.17 #24155 review updates and approve
 1.30 #24073 recheck, comments and apply to blead
 0.87 #24082 more review, follow-ups
 0.83 #24105 work on threads support
 0.65 #24105 more work on threads, hash randomization support
 3.99

2026/02/04 Wednesday
 0.13 github notifications
 1.92 #24163 review, comments
 0.48 #24105 rebase some more, fix tests, do a commit and push for CI (needs more work)
 1.70 #24105 more cleanup and push for CI
 4.23

2026/02/05 Thursday
 0.20 github notifications
 0.38 #24105 review CI results and fix some issues
 1.75 #24082 research and comments
 0.63 #24105 more CI results, update the various generated config files and push for CI
 0.17 #23561 review updates and comment
 0.40 #24163 research and follow-up
 0.58 #24098 review updates and comments
 4.11

2026/02/09 Monday
 0.15 #24082 comment
 0.20 #22040 comment
 0.30 #24005 research, comment
 0.33 #4106 rebase again and apply to blead
 0.35 #24133 comment
 0.35 #24168 review CI results and comment
 0.25 #24098 comment
 0.18 #24129 review updates and comment
 0.92 #24160 review, comment, approve
 0.17 #24136 review and briefly comment
 0.78 #24179 review, comments
 0.48 #16865 comment, try an approach
 4.46

2026/02/10 Tuesday
 0.62 #24163 comment
 0.23 #24082 research
 0.20 #24082 more research
 1.05

2026/02/11 Wednesday
 0.48 #24163 review updates and approve
 0.73 #24129 review updates
 0.45 #24098 research and follow-up comment
 0.32 #24134 review updates and approve
 0.17 #24080 review updates and approve
 1.18 #22132 setup, testing and comments on ticket and upstream llvm ticket
 0.32 #23561 review update and approve
 0.42 #24179 review some more and make a suggestion
 1.03 #24187 review and comments
 5.10

2026/02/12 Thursday
 0.43 #24136 research and comment
 0.17 #24190 review and approve
 0.90 #24182 review discussion and the change and approve
 0.08 #24178 review and briefly comment
 0.33 #24177 review, research and comment
 0.08 #24187 brief follow-up
 0.43 #24176 research, review and approve
 0.27 #24191 research, testing
 0.20 #24192 review and approve
 0.38 #24056 debugging
 0.58 #24056 debugging, something in find_lexical_cv()?
 3.85

2026/02/16 Monday
 0.52 github notifications
 0.08 #24178 review updates and approve
 2.20 #24098 review and comments
 0.88 #24056 more debugging, find at least one bug
 0.92 #24056 work up tests, testing, commit message and push for CI, perldelta and re-push
 4.60

2026/02/17 Tuesday
 0.18 #24056 check CI results, rebase in case and re-push, open PR 24205
 2.88 #24187 review, comments
 0.47 #24187 more comments
 0.23 reply email from Jim Keenan re git handling for testing PR tests without the fixes
 3.76

2026/02/18 Wednesday
 3.02 #24187 review comments, work on fix for assertion, testing, push for CI
 0.25 #24187 check CI, make perldelta and make PR 24211
 0.35 #24098 review updates and approve
 3.62

2026/02/19 Thursday
 0.30 #24200 research and comment
 0.47 #24215 review, wonder why cmp_version didn’t complain, find out and approve
 0.08 #24208 review and comment
 0.73 #24213 review, everything that needs saying had been said
 0.22 #24206 review and comments
 0.53 #24203 review, comment and approve
 0.33 #24210 review, research and approve with comment
 0.37 #24200 review, research and approve
 3.03

2026/02/23 Monday
 0.35 #24212 testing add #24213 to 5.42 votes
 2.42 #24159 review and benchmarking, comment
 0.73 #24187 try to break it
 3.50

2026/02/24 Tuesday
 0.35 github notifications
 1.13 #24187 update PR 24211 commit message, rechecks
 0.43 #24001 re-work tests on PR 24060
 0.30 #24001 more re-work
 2.21

2026/02/25 Wednesday
 1.02 #24180 research, comments
 0.22 #24206 review update and comment
 0.28 #24208 review updates and comment
 0.57 #24060 more tests
 0.88 #24060 more tests, testing, debugging
 2.97

2026/02/26 Thursday
 0.47 #24211 minor fixes per comments
 0.23 #24206 review updates and approve
 0.22 #24180 review updates and approve
 0.98 #24236 review and comments
 1.30 #24228 review, testing and comments
 0.08 #24236 research and comment
 0.78 #24159 review updates, testing, comments
 4.06

Which I calculate is 59.25 hours.

Approximately 50 tickets were reviewed or worked on, and 3 patches were applied.
```
I am sure there is some "Perl magic" that makes my code much shorter.
my %m = ("a" => 1, "b" => 12, "c" => "33");
my $str = "";
for (sort keys %m) # sort, because hash key order is otherwise random
{
    $str .= $_ . "=" . $m{$_} . ", ";
}
$str = substr($str, 0, -2); # remove last ", "
print $str; # OUTPUT: a=1, b=12, c=33
Is there some sort of lambda style in Perl to make simple tasks less "clumsy"?
e.g. $m.keys ().foreach (k,v => $k "=" . $v . ", ").join ()
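For what it's worth, a sketch of the usual idiomatic answer (not from the original post) uses map to build the "key=value" pieces and join to place the separator only between elements, so there is no trailing ", " to trim:

```perl
use strict;
use warnings;

my %m = ("a" => 1, "b" => 12, "c" => "33");

# map transforms each key into a "key=value" string; join inserts ", "
# only between elements. sort makes the output order deterministic,
# since Perl hash key order is randomized.
my $str = join ", ", map { "$_=$m{$_}" } sort keys %m;

print $str; # OUTPUT: a=1, b=12, c=33
```

The map block plays the role of the lambda in the hypothetical `.foreach(...)` syntax above.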
Originally published at Perl Weekly 764
Hi there,
The Perl community continues to move forward with exciting updates and useful new tools. Recently, a new release of Dancer has been announced. In his blog post, Jason A. Crome shared the release of Dancer 2.1.0, bringing improvements and fixes to the popular web framework. Dancer has long been appreciated for making web development in Perl simple and expressive, and this new version continues that tradition. It is always encouraging to see mature Perl frameworks still actively maintained and evolving with the needs of developers.
Another interesting project worth exploring is Prima, introduced by Reinier Maliepaard. Prima is a powerful GUI toolkit for Perl, allowing developers to build graphical desktop applications. Many Perl developers are familiar with web or command-line tools, but Prima reminds us that Perl can also be used effectively for desktop interfaces. The project demonstrates how flexible the language can be when building different kinds of applications.
The Perl Steering Council also published a new UPDATE: PSC (217) | 2026-03-09. These regular updates give a useful overview of what is happening around the Perl core and governance. They help the community stay informed about ongoing discussions, development priorities, and future plans. Transparency like this is very valuable for an open source language, as it helps everyone understand how decisions are made and where the project is heading.
Finally, it is always nice to see new modules appearing in the CPAN ecosystem. Recently I released a small module called DBIx::Class::MockData, which is designed to help generate mock data when working with DBIx::Class in tests. Creating realistic data for database tests can sometimes take extra effort, so tools that simplify this process can be quite helpful. As always, CPAN continues to grow thanks to contributions from many developers in the Perl community.
Enjoy the rest of the newsletter. Stay safe and healthy.
--
Your editor: Mohammad Sajid Anwar.
Announcements
Dancer 2.1.0 Released
In this short announcement, Jason A. Crome shares the release of Dancer 2.1.0, a new version of the popular Perl web framework Dancer. The post is brief and to the point, informing the community that the new version is now available on CPAN and ready for use. It highlights the continued maintenance and progress of the framework, which has long been valued for making web development in Perl simple and enjoyable.
Articles
This week in PSC (217) | 2026-03-09
The Perl Steering Council shares a short summary of their latest meeting and the topics currently on their radar. The meeting itself was brief, but it still covered a few important administrative and planning items related to the Perl core project. One of the main points discussed was the ongoing outreach to potential new members of the Perl core team. The council mentioned that they have contacted several people and are waiting for responses before holding a vote. Expanding or refreshing the group of contributors is an important step in keeping the Perl core development active and sustainable.
Mastering Perl Prima: A Step-by-Step Guide for Beginners
The article explains that Prima provides a rich set of widgets and tools for creating graphical interfaces such as windows, buttons, and other interactive elements. With relatively small pieces of code, developers can create a working GUI application and run it through Prima's event loop. This makes it possible to build desktop programs in Perl without relying only on command-line interfaces or web frameworks.
Beautiful Perl feature : two-sided constructs, in list or in scalar context
In this article, Laurent Dami explores an interesting Perl concept: two-sided constructs that behave differently depending on list or scalar context. The post explains how certain Perl expressions can adapt their behavior based on what the surrounding code expects, which is one of the language's distinctive and powerful features.
CPAN
Mail::Make
Mail::Make is a modern Perl module for building and sending MIME email messages with a clean, fluent API. It allows developers to construct messages step-by-step (adding headers, text, HTML, attachments, etc.) while automatically generating the correct MIME structure for the email.
DBIx::Class::MockData
The CPAN distribution DBIx-Class-MockData introduces a convenient way to generate mock data for testing applications built with DBIx::Class. It helps developers quickly populate schemas with realistic test records, making it easier to write and maintain database tests. Tools like this are particularly useful in projects using DBIx::Class, which maps relational database tables to Perl objects and is widely used in Perl web applications.
The Weekly Challenge
The Weekly Challenge by Mohammad Sajid Anwar will help you step out of your comfort-zone. You can even win prize money of $50 by participating in the weekly challenge. We pick one champion at the end of the month from among all of the contributors during the month, thanks to the sponsor Marc Perry.
The Weekly Challenge - 365
Welcome to a new week with a couple of fun tasks "Alphabet Index Digit Sum" and "Valid Token Counter". If you are new to the weekly challenge then why not join us and have fun every week. For more information, please read the FAQ.
RECAP - The Weekly Challenge - 364
Enjoy a quick recap of last week's contributions by Team PWC dealing with the "Decrypt String" and "Goal Parser" tasks in Perl and Raku. You will find plenty of solutions to keep you busy.
String Goal
The post shows effective use of features like gather/take and thoughtful string tokenization. It combines readable code with solid explanation, making it useful and inspiring for anyone exploring Raku for text parsing tasks.
Perl Weekly Challenge: Week 364
The post provides a clear and well-structured walkthrough of Perl Weekly Challenge #364, presenting the problem statements alongside thoughtful explanations of the approach and implementation. The solutions are concise, readable, and demonstrate practical Perl/Raku techniques, making the article both informative and enjoyable for developers following the challenge.
Alternate Codes
This post presents solutions to Perl Weekly Challenge 364, with a strong focus on clear reasoning and elegant Perl implementations. The article walks through the logic behind each task and explains the approach in a concise but technical way, making it easy for readers to follow the thought process. It is a well-written challenge write-up that nicely demonstrates practical problem solving and expressive Perl code.
substituting strings!
The article offers a practical and technically rich walkthrough of the challenge tasks. The explanations are concise but clear, and the multiple implementations make the post especially interesting for readers who enjoy comparing solutions across languages and environments.
Perl Weekly Challenge 364
In this blog post, W. Luis Mochán shares his solutions to Perl Weekly Challenge 364, presenting concise and well-thought-out Perl implementations for both tasks. The article focuses on clear logic and often explores compact solutions, sometimes even demonstrating elegant one-liners and efficient use of Perl features.
Decrypted "715#15#15#112#": goooal!
The solutions demonstrate a thoughtful and elegant approach to Perl Weekly Challenge #364, combining clear reasoning with expressive Perl idioms. The code is concise yet readable, showing creative problem-solving and effective use of Perl's strengths to produce clean and well-structured implementations.
Andrés Cantor Goes West
The write-up balances technical detail with an informal and engaging style, making the reasoning behind the solutions easy to follow. It is an enjoyable and well-explained challenge post that highlights practical problem solving and thoughtful coding.
Weird encodings
This post shares Peter's solutions to Perl Weekly Challenge 364, presenting clear and well-structured Perl implementations for both tasks. It explains the reasoning behind the approach and walks the reader through the logic step by step, making the solutions easy to follow. Overall, it is a solid and educational write-up that demonstrates practical Perl problem-solving and clean coding style.
The Weekly Challenge - 364: Decrypt String
This post presents a clear and well-structured solution to one of the Perl Weekly Challenge tasks. Reinier explains the approach step by step and supports it with concise Perl code, making the logic easy to follow for readers interested in algorithmic problem solving. It is a solid technical walkthrough that demonstrates practical Perl usage while keeping the explanation accessible and educational.
The Weekly Challenge - 364: Goal Parser
This post presents a thoughtful solution to the second task of Perl Weekly Challenge 364, with a clear explanation of the algorithm and the reasoning behind it. Reinier walks through the logic step by step and supports it with concise Perl code, making the approach easy to understand. It is a well-written technical note that demonstrates practical problem solving and highlights Perl's strengths for implementing compact and readable solutions.
The Weekly Challenge #364
In this post, Robbie shares his Perl solutions for Perl Weekly Challenge 364, continuing his detailed and methodical style of writing about the weekly tasks. His solutions are well structured and focus on correctness and clarity, with carefully organised code and explanations that help readers understand the reasoning behind each step.
Decrypted Goals
In this post, Roger presents his solutions to Perl Weekly Challenge 364, focusing on the task involving "decrypted goals". The write-up explains the reasoning behind the algorithm and walks through a clear Perl implementation that solves the problem efficiently. It is a concise and technically solid article that demonstrates careful analysis and practical Perl problem-solving.
It's all about the translation
In this blog post, Simon shares his solutions to another Perl Weekly Challenge, following his usual workflow of first solving the tasks in Python and then translating the logic into Perl. This approach provides an interesting comparison between the two languages and highlights how similar algorithms can be implemented in different ways.
Rakudo
2026.10 Climbing CragCLI
Weekly collections
NICEPERL's lists
Great CPAN modules released last week;
MetaCPAN weekly report;
StackOverflow Perl report.
Events
German Perl/Raku Workshop 2026 in Berlin
March 16-18, 2026
Perl Toolchain Summit 2026
April 23-26, 2026
The Perl and Raku Conference 2026
June 26-29, 2026, Greenville, SC, USA
You joined the Perl Weekly to get weekly e-mails about the Perl programming language and related topics.
Want to see more? See the archives of all the issues.
Not yet subscribed to the newsletter? Join us free of charge!
(C) Copyright Gabor Szabo
The articles are copyright the respective authors.
Let’s talk about music programming! There are a million aspects to this subject, but today, we’ll touch on generating rhythmic patterns with mathematical and combinatorial techniques. These include the generation of partitions, necklaces, and Euclidean patterns.
Stefan and J. Richard Hollos wrote an excellent little book called “Creating Rhythms”, whose algorithms have been implemented in C, Perl, and Python. It features a number of algorithms that produce or modify lists of numbers or bit-vectors (of ones and zeroes). These can be the beat onsets (the ones) and rests (the zeroes) of a rhythm. We’ll check out these concepts with Perl.
For each example, we’ll save the MIDI file with the MIDI::Util module. Also, in order to actually hear the rhythms, we will need a MIDI synthesizer. For these illustrations, fluidsynth will work. Of course, any MIDI-capable synth will do! I often control my eurorack analog synthesizer with code (and a MIDI interface module).
Here’s how I start fluidsynth on my mac in the terminal, in a separate session. It uses a generic soundfont file (sf2) that can be downloaded here (124MB zip).
fluidsynth -a coreaudio -m coremidi -g 2.0 ~/Music/soundfont/FluidR3_GM.sf2
So, how does Perl know what output port to use? There are a few ways, but with JBARRETT’s MIDI::RtMidi::FFI::Device, you can do this:
use MIDI::RtMidi::FFI::Device ();
my $midi_in = RtMidiIn->new;
my $midi_out = RtMidiOut->new;
print "Input devices:\n";
$midi_in->print_ports;
print "\n";
print "Output devices:\n";
$midi_out->print_ports;
print "\n";
This shows that fluidsynth is alive and ready for interaction.
Okay, on with the show!
First-up, let’s look at partition algorithms. With the part() function, we can generate all partitions of n, where n is 5, and the “parts” all add up to 5. Then taking one of these (say, the third element), we convert it to a binary sequence that can be interpreted as a rhythmic phrase, and play it 4 times.
#!/usr/bin/env perl
use strict;
use warnings;
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $parts = $mcr->part(5);
# [ [ 1, 1, 1, 1, 1 ], [ 1, 1, 1, 2 ], [ 1, 2, 2 ], [ 1, 1, 3 ], [ 2, 3 ], [ 1, 4 ], [ 5 ] ]
my $p = $parts->[2]; # [ 1, 2, 2 ]
my $seq = $mcr->int2b([$p]); # [ [ 1, 1, 0, 1, 0 ] ]
Now we render and save the rhythm:
use MIDI::Util qw(setup_score);
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) {
for my $bit ($seq->[0]->@*) {
if ($bit) {
$score->n('en', 40);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-1.mid');
In order to play the MIDI file that is produced, we can use fluidsynth like this:
fluidsynth -i ~/Music/soundfont/FluidR3_GM.sf2 perldotcom-1.mid
Not terribly exciting yet.
Let’s see what the “compositions” of a number reveal. According to the Music::CreatingRhythms docs, a composition of a number is “the set of combinatorial variations of the partitions of n with the duplicates removed.”
Okay. Well, the 7 partitions of 5 are:
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 3], [1, 2, 2], [1, 4], [2, 3], [5]]
And the 16 compositions of 5 are:
[[1, 1, 1, 1, 1], [1, 1, 1, 2], [1, 1, 2, 1], [1, 1, 3], [1, 2, 1, 1], [1, 2, 2], [1, 3, 1], [1, 4], [2, 1, 1, 1], [2, 1, 2], [2, 2, 1], [2, 3], [3, 1, 1], [3, 2], [4, 1], [5]]
That is, the list of compositions contains not only the partition [1, 2, 2], but also its variations: [2, 1, 2] and [2, 2, 1]. The same goes for the other partitions. Selections from this list can produce some cool rhythms.
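To make the combinatorics concrete, here is a small dependency-free sketch (an illustration of the idea, not the module's compm() code) that generates all compositions of n recursively, by choosing each possible first part and composing the remainder:

```perl
use strict;
use warnings;

# All compositions of $n: pick the first part (1 .. $n), then recurse
# on what is left over. There are always 2**($n - 1) of them.
sub compositions {
    my ($n) = @_;
    return ([]) if $n == 0; # one empty composition of zero
    my @result;
    for my $first (1 .. $n) {
        push @result, [ $first, @$_ ] for compositions($n - $first);
    }
    return @result;
}

my @comps = compositions(5);
print scalar @comps, "\n"; # 16, matching the list above
```

The recursion also makes it obvious why there are more compositions than partitions: order matters, so [1, 2, 2], [2, 1, 2], and [2, 2, 1] are all generated separately.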
Here are the 3-element compositions of 5 turned into sequences, played by a snare drum, and written to disk:
use Music::CreatingRhythms ();
use MIDI::Util qw(setup_score);
my $mcr = Music::CreatingRhythms->new;
my $comps = $mcr->compm(5, 3); # compositions of 5 with 3 elements
my $seq = $mcr->int2b($comps);
my $score = setup_score(bpm => 120, channel => 9);
for my $pattern ($seq->@*) {
for my $bit (@$pattern) {
if ($bit) {
$score->n('en', 40); # snare patch
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-2.mid');
A little better. Like a syncopated snare solo.
Sidebar
Another way to play the MIDI file is to use timidity. On my mac, with the soundfont specified in the timidity.cfg configuration file, this would be:
timidity -c ~/timidity.cfg -Od perldotcom-2.mid
To convert a MIDI file to an mp3 (or other audio formats), I do this:
timidity -c ~/timidity.cfg perldotcom-2.mid -Ow -o - | ffmpeg -i - -acodec libmp3lame -ab 64k perldotcom-2.mp3
Okay. Enough technical details! What if we want a kick bass drum and hi-hat cymbals, too? Refactor time…
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $s_comps = $mcr->compm(4, 2); # snare
my $s_seq = $mcr->int2b($s_comps);
my $k_comps = $mcr->compm(4, 3); # kick
my $k_seq = $mcr->int2b($k_comps);
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 8) { # repeats
my $s_choice = $s_seq->[ int rand @$s_seq ];
my $k_choice = $k_seq->[ int rand @$k_seq ];
for my $i (0 .. $#$s_choice) { # pattern position
my @notes = (42); # hi-hat every time
if ($s_choice->[$i]) {
push @notes, 40;
}
if ($k_choice->[$i]) {
push @notes, 36;
}
$score->n('en', @notes);
}
}
$score->write_score('perldotcom-3.mid');
Here we play generated kick and snare patterns, along with a steady hi-hat.
Next up, let’s look at rhythmic “necklaces.” Here we find many grooves of the world.

Image from The Geometry of Musical Rhythm
Rhythm necklaces are circular diagrams of equally spaced, connected nodes. A necklace is a canonical representative of all rotations of a sequence - a lexicographical ordering with no rotational duplicates. For instance, the necklaces of 3 beats are [[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]]. Notice that there is no [1, 0, 1] or [0, 1, 1]. Also, there are no rotated versions of [1, 0, 0], either.
So, how many 16 beat rhythm necklaces are there?
my $necklaces = $mcr->neck(16);
print scalar @$necklaces, "\n"; # 4116 of 'em!
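As a sanity check, we can count necklaces ourselves with a brute-force sketch (not the module's neck() implementation): generate every bit string of length n, canonicalize it to one fixed rotation, and count the distinct results:

```perl
use strict;
use warnings;

# Brute-force necklace count: map each bit string of length $n to its
# lexicographically largest rotation, then count distinct canonical forms.
sub count_necklaces {
    my ($n) = @_;
    my %seen;
    for my $i (0 .. 2**$n - 1) {
        my $bits = sprintf "%0${n}b", $i;
        my ($canon) = sort { $b cmp $a }
            map { substr($bits, $_) . substr($bits, 0, $_) } 0 .. $n - 1;
        $seen{$canon}++;
    }
    return scalar keys %seen;
}

print count_necklaces(3), "\n"; # 4, matching the list above
print count_necklaces(8), "\n"; # 36
```

Running count_necklaces(16) the same way takes a moment longer but agrees with the 4116 reported by neck(16).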
Okay. Let’s generate necklaces of 8 instead, pull a random choice, and play the pattern with a percussion instrument.
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $patch = shift || 75; # claves
my $mcr = Music::CreatingRhythms->new;
my $necklaces = $mcr->neck(8);
my $choice = $necklaces->[ int rand @$necklaces ];
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $bit (@$choice) { # pattern position
if ($bit) {
$score->n('en', $patch);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-4.mid');
Here we choose from all necklaces. But note that this also includes the sequence with all ones and the sequence with all zeroes. More sophisticated code might skip these.
More interesting would be playing simultaneous beats.
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $necklaces = $mcr->neck(8);
my $x_choice = $necklaces->[ int rand @$necklaces ];
my $y_choice = $necklaces->[ int rand @$necklaces ];
my $z_choice = $necklaces->[ int rand @$necklaces ];
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $i (0 .. $#$x_choice) { # pattern position
my @notes;
if ($x_choice->[$i]) {
push @notes, 75; # claves
}
if ($y_choice->[$i]) {
push @notes, 63; # hi_conga
}
if ($z_choice->[$i]) {
push @notes, 64; # low_conga
}
$score->n('en', @notes);
}
}
$score->write_score('perldotcom-5.mid');
How about Euclidean patterns? What are they, and why are they named for a geometer?
Euclidean patterns are a set number of positions P that are filled with a number of beats Q that is less than or equal to P. They are named for Euclid because they are generated by applying the “Euclidean algorithm,” which was originally designed to find the greatest common divisor (GCD) of two numbers, to distribute musical beats as evenly as possible.
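Before reaching for the module, here is a tiny dependency-free sketch of the even-distribution idea. This uses a simple rounding formulation rather than the Euclidean/Bjorklund algorithm itself, but for common cases it produces the same evenly spread onsets up to rotation (it is not the Music::CreatingRhythms implementation):

```perl
use strict;
use warnings;

# Spread $onsets hits as evenly as possible over $steps positions by
# flooring i * steps / onsets to pick each onset's slot.
sub euclid_sketch {
    my ($onsets, $steps) = @_;
    my @pattern = (0) x $steps;
    $pattern[ int($_ * $steps / $onsets) ] = 1 for 0 .. $onsets - 1;
    return \@pattern;
}

print join('', @{ euclid_sketch(3, 8) }), "\n"; # 10100100 - a rotation of the "tresillo"
```

The onsets land with gaps of 2, 3, 3 - the same gap pattern (rotated) as the classic E(3,8) tresillo rhythm.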
use MIDI::Util qw(setup_score);
use Music::CreatingRhythms ();
my $mcr = Music::CreatingRhythms->new;
my $beats = 16;
my $s_seq = $mcr->rotate_n(4, $mcr->euclid(2, $beats)); # snare
my $k_seq = $mcr->euclid(2, $beats); # kick
my $h_seq = $mcr->euclid(11, $beats); # hi-hats
my $score = setup_score(bpm => 120, channel => 9);
for (1 .. 4) { # repeats
for my $i (0 .. $beats - 1) { # pattern position
my @notes;
if ($s_seq->[$i]) {
push @notes, 40; # snare
}
if ($k_seq->[$i]) {
push @notes, 36; # kick
}
if ($h_seq->[$i]) {
push @notes, 42; # hi-hats
}
if (@notes) {
$score->n('en', @notes);
}
else {
$score->r('en');
}
}
}
$score->write_score('perldotcom-6.mid');
Now we’re talkin’ - an actual drum groove! To reiterate, the euclid() method distributes a number of onsets, like 2 or 11, as evenly as possible over the total number of steps, 16. The kick and snare use the same arguments, but the snare pattern is rotated by 4 steps, so that they alternate.
So what have we learned today?
- That you can use mathematical functions to generate sequences to represent rhythmic patterns.
- That you can play an entire sequence or simultaneous notes with MIDI.
- App::Cmd - write command line apps with less suffering
  - Version: 0.340 on 2026-03-13, with 50 votes
  - Previous CPAN version: 0.339 was 21 days before
  - Author: RJBS
- App::HTTPThis - Export the current directory over HTTP
  - Version: v0.11.0 on 2026-03-13, with 25 votes
  - Previous CPAN version: 0.010 was 3 months, 9 days before
  - Author: DAVECROSS
- App::zipdetails - Display details about the internal structure of Zip files
  - Version: 4.005 on 2026-03-08, with 65 votes
  - Previous CPAN version: 4.004 was 1 year, 10 months, 8 days before
  - Author: PMQS
- CPAN::Audit - Audit CPAN distributions for known vulnerabilities
  - Version: 20260308.002 on 2026-03-08, with 21 votes
  - Previous CPAN version: 20250829.001 was 6 months, 10 days before
  - Author: BRIANDFOY
- CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
  - Version: 20260311.002 on 2026-03-11, with 25 votes
  - Previous CPAN version: 20260308.006 was 2 days before
  - Author: BRIANDFOY
- Dancer2 - Lightweight yet powerful web application framework
  - Version: 2.1.0 on 2026-03-12, with 139 votes
  - Previous CPAN version: 2.0.1 was 4 months, 20 days before
  - Author: CROMEDOME
- Data::Alias - Comprehensive set of aliasing operations
  - Version: 1.30 on 2026-03-11, with 19 votes
  - Previous CPAN version: 1.29 was 1 month, 8 days before
  - Author: XMATH
- DBD::Pg - DBI PostgreSQL interface
  - Version: 3.19.0 on 2026-03-14, with 103 votes
  - Previous CPAN version: 3.18.0 was 2 years, 3 months, 7 days before
  - Author: TURNSTEP
- IO::Compress - IO Interface to compressed data files/buffers
  - Version: 2.219 on 2026-03-09, with 19 votes
  - Previous CPAN version: 2.218 was before
  - Author: PMQS
- JSON::Schema::Modern - Validate data against a schema using a JSON Schema
  - Version: 0.633 on 2026-03-13, with 16 votes
  - Previous CPAN version: 0.632 was 2 months, 7 days before
  - Author: ETHER
- Math::Prime::Util - Utilities related to prime numbers, including fast sieves and factoring
  - Version: 0.74 on 2026-03-13, with 22 votes
  - Previous CPAN version: 0.74 was 1 day before
  - Author: DANAJ
- MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
  - Version: 2.040000 on 2026-03-09, with 29 votes
  - Previous CPAN version: 2.039000 was 8 days before
  - Author: MICKEY
- Module::CoreList - what modules shipped with versions of perl
  - Version: 5.20260308 on 2026-03-08, with 44 votes
  - Previous CPAN version: 5.20260220 was 15 days before
  - Author: BINGOS
- OpenGL - Perl bindings to the OpenGL API, GLU, and GLUT/FreeGLUT
  - Version: 0.7007 on 2026-03-13, with 15 votes
  - Previous CPAN version: 0.7006 was 10 months, 29 days before
  - Author: ETJ
- less - The Perl 5 language interpreter
  - Version: 5.042001 on 2026-03-08, with 2248 votes
  - Previous CPAN version: 5.042001 was 14 days before
  - Author: SHAY
- SPVM - The SPVM Language
  - Version: 0.990146 on 2026-03-14, with 36 votes
  - Previous CPAN version: 0.990145 was before
  - Author: KIMOTO
- Syntax::Construct - Explicitly state which non-feature constructs are used in the code.
  - Version: 1.044 on 2026-03-09, with 14 votes
  - Previous CPAN version: 1.043 was 8 months, 5 days before
  - Author: CHOROBA
- Test::Routine - composable units of assertion
  - Version: 0.032 on 2026-03-12, with 13 votes
  - Previous CPAN version: 0.031 was 2 years, 11 months before
  - Author: RJBS
- WWW::Mechanize::Chrome - automate the Chrome browser
  - Version: 0.76 on 2026-03-13, with 22 votes
  - Previous CPAN version: 0.75 was 4 months, 12 days before
  - Author: CORION
- X11::korgwm - a tiling window manager for X11
  - Version: 6.1 on 2026-03-08, with 14 votes
  - Previous CPAN version: 6.0 was before
  - Author: ZHMYLOVE
This is the weekly favourites list of CPAN distributions. Votes count: 61
Week's winner: Langertha (+3)
Build date: 2026/03/14 22:28:35 GMT
Clicked for first time:
- Alien::libmaxminddb - Find or install libmaxminddb
- Container::Builder - Build Container archives.
- Data::HashMap - Fast type-specialized hash maps implemented in C
- Data::Path::XS - Fast path-based access to nested data structures
- EV::Future - Minimalist and high-performance async control flow for EV
- Graph::Easy::As_svg - Output a Graph::Easy as Scalable Vector Graphics (SVG)
- HTTP::Handy - A tiny HTTP/1.0 server for Perl 5.5.3+
- LaTeX::Replicase - Perl extension implementing a minimalistic engine for filling real TeX-LaTeX files that act as templates.
- Linux::Event - Front door for the Linux::Event reactor and proactor ecosystem
- Linux::Event::Listen - Listening sockets for Linux::Event
- LTSV::LINQ - LINQ-style query interface for LTSV files
- Mail::Make - Strict, Fluent MIME Email Builder
- Router::Ragel - Router module using Ragel finite state machine
- Search::Tokenizer - Decompose a string into tokens (words)
- Term::ReadLine::Repl - A batteries included interactive Term::ReadLine REPL module
- Test::Mockingbird - Advanced mocking library for Perl with support for dependency injection and spies
- Unicode::Towctrans -
- XML::PugiXML - Perl binding for pugixml C++ XML parser
Increasing its reputation:
- Affix (+1=5)
- App::cpm (+1=78)
- App::perlbrew (+1=181)
- Class::XSConstructor (+1=9)
- Compress::Zstd (+1=7)
- CtrlO::PDF (+1=4)
- Data::MessagePack (+1=18)
- Data::Random (+1=4)
- DateTime::Format::ISO8601 (+1=10)
- DBD::Oracle (+1=33)
- DBD::Pg (+1=103)
- DBIx::DataModel (+1=13)
- Encode::Simple (+1=6)
- EV (+1=50)
- Eval::Closure (+1=11)
- File::HomeDir (+1=36)
- File::Map (+1=24)
- Graph::Easy (+1=11)
- Iterator::Simple (+1=8)
- Langertha (+3=2)
- Locale::Unicode::Data (+1=2)
- LV (+2=4)
- Math::GMPz (+1=4)
- MetaCPAN::Client (+1=27)
- Moose (+1=335)
- MooX::Cmd (+1=9)
- Net::Server (+1=35)
- OpenGL (+1=15)
- PDL (+1=61)
- Perl::Critic (+1=135)
- Pinto (+1=66)
- PLS (+1=18)
- Readonly (+1=24)
- Reply (+1=63)
- Sentinel (+1=9)
- Server::Starter (+1=23)
- Test2::Plugin::SubtestFilter (+1=4)
- Test::LWP::UserAgent (+1=15)
- Text::Trim (+1=7)
- Try::Tiny (+1=181)
For those running a development version of git built from master or next, you have probably seen this already. Today I was inspecting git's own commit log and found this little gem. It supports my workflow to the max.
You can now configure git status to compare other branches with your current branch in its output. When you set status.comparebranches, you can use @{upstream} and @{push}, and you will see both how far you have diverged from your upstream branch and from your push branch. For those, like me, who track an upstream branch that differs from their push branch, this is a mighty fine feature!
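If the option works the way the description suggests, enabling it might look like this. This is a sketch only: the option name status.comparebranches comes from the text above, but the value syntax shown here is an assumption; check git-config(1) in your development build for the authoritative spelling.

```shell
# Sketch only: "status.comparebranches" is the option described above;
# the value syntax here is an assumption, not verified documentation.
git config --global status.comparebranches '@{upstream} @{push}'

# With the option set, `git status` would additionally report how far
# HEAD has diverged from the upstream branch and from the push branch.
git status
```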
I am trying to understand the behavior of the following script under Perl 5.28.2:
sub split_and_print {
    my $label = $_[0];
    my $x     = $_[1];
    my @parts = split('\.', $x);
    print sprintf("%s -> %s %s %.20f\n", $label, $parts[0], $parts[1], $x);
}

my @raw_values = ('253.38888888888889', '373.49999999999994');
for my $raw_value (@raw_values) {
    split_and_print("'$raw_value'", $raw_value);
    split_and_print("1.0 * '$raw_value'", 1.0 * $raw_value);
}
For me, this prints:
'253.38888888888889' -> 253 38888888888889 253.38888888888888573092
1.0 * '253.38888888888889' -> 253 388888888889 253.38888888888888573092
'373.49999999999994' -> 373 49999999999994 373.49999999999994315658
1.0 * '373.49999999999994' -> 373 5 373.49999999999994315658
All of that is as expected, except for the last line: I don't understand why, during the automatic conversion of $x from a number to a string in the call to split, it is converted to 373.5. print(373.49999999999994 - 373.5) prints -5.6843418860808e-14, so Perl knows that those numbers are not equal (i.e. it's not simply the limited precision of floating-point numbers in Perl).
perlnumber says
As mentioned earlier, Perl can store a number in any one of three formats, but most operators typically understand only one of those formats. When a numeric value is passed as an argument to such an operator, it will be converted to the format understood by the operator.
[...]
If the source number is outside of the limits representable in the target form, a representation of the closest limit is used. (Loss of information)
If the source number is between two numbers representable in the target form, a representation of one of these numbers is used. (Loss of information)
But '373.5' doesn't seem to be the "closest limit" of representing 373.49999999999994 as a string -- that would be '373.49999999999994', or some other decimal representation that, when converted back to a number yields the original value.
Also: what is different about 253.38888888888889?
I am looking for a definite reference that explains how exactly the automatic conversion of numbers to strings works in Perl.
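For comparison outside Perl (not a definitive answer, just a hint at where the rounding could come from): formatting with %g at 15 significant digits, a common default for stringifying doubles, reproduces exactly the outputs shown above. The shell's printf demonstrates it:

```shell
# Use the C locale so the decimal separator is a dot.
export LC_ALL=C

# At 15 significant digits, the nearest double to 373.49999999999994
# rounds to a short form (%g strips the trailing zeros):
printf '%.15g\n' 373.49999999999994   # prints 373.5

# ...while 253.38888888888889 still needs all 15 digits, matching the
# "388888888889" in the second output line:
printf '%.15g\n' 253.38888888888889   # prints 253.388888888889

# 17 significant digits round-trip a double exactly:
printf '%.17g\n' 373.49999999999994   # prints 373.49999999999994
```

Whether Perl's numeric stringification actually uses this precision on the questioner's build is exactly what a definitive reference would have to confirm.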
TL;DR
I didn’t like how the default zsh prompt truncation works. My solution, used in my own custom-made prompt (fully supported by promptinit), uses a custom precmd hook to dynamically determine the terminal’s available width.
Instead of blind chopping, my custom logic ensures human-readable truncation by following simple rules: it always preserves the home directory (~) and the current directory name, and only removes or shortens non-critical segments in the middle to keep the PS1 clean, contextual, and strictly single-line. This is done via a so-called “zig-zag” pattern, i.e. splitting the string on certain delimiters.
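As a rough illustration of the middle-eliding rule, here is a hypothetical, simplified stand-in for the actual hook (the real version reads the width from $COLUMNS inside a precmd function and handles several elided segments):

```shell
# Hypothetical sketch: shorten a path so it fits within max columns,
# always keeping the leading "~" and the current directory name and
# eliding the segments in between.
shorten_path() {
  local path=$1 max=$2
  # Already fits: leave it alone.
  if [ "${#path}" -le "$max" ]; then
    printf '%s\n' "$path"
    return
  fi
  local first=${path%%/*}   # leading segment, e.g. "~"
  local last=${path##*/}    # current directory name
  printf '%s/…/%s\n' "$first" "$last"
}

shorten_path '~/projects/client/frontend/src' 20   # prints ~/…/src
shorten_path '~/src' 20                            # prints ~/src
```

In zsh itself, such a function would be registered with add-zsh-hook precmd so the prompt is recomputed before each display.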

The deadline for talks looms large, but assistance awaits!
This year, we have coaches available to help write your talk description, and to support you in developing the talk.
If you have a talk you would like to give, but cannot flesh out the idea before the deadline (March 15th; 6 days from now!), you should submit your bare-bones idea and check "Yes" on "Do you need assistance in developing this talk?".
We have more schedule space for talks than we did last year, and we would love to add new voices and wider topics, but time is of the essence, so go to https://tprc.us/ , and spill the beans on your percolating ideas!
In my Perl code, I'm writing a package within which I define a __DATA__ section that embeds some Perl code.
Here is an excerpt of the code that gives an error:
package remote {
    __DATA__
    print "$ENV{HOME}\n";
}
It fails as shown below:
Missing right curly or square bracket at ....
The lexer counted more opening curly or square brackets than closing ones.
As a general rule, you'll find it's missing near the place you were last editing.
I can't seem to find any mismatched brackets.
By contrast, when I rewrite the same package without braces, the code works:
package remote;
__DATA__
print "$ENV{HOME}\n";
I'd be grateful if the experienced folks could highlight the gap in my understanding. FWIW, I'm using Perl 5.36.1, in case that matters.
-
Clone - recursively copy Perl datatypes
- Version: 0.48 on 2026-03-02, with 33 votes
- Previous CPAN version: 0.48_07 was 6 days before
- Author: ATOOMIC
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260301.001 on 2026-03-01, with 25 votes
- Previous CPAN version: 20260228.001
- Author: BRIANDFOY
-
Date::Manip - Date manipulation routines
- Version: 6.99 on 2026-03-02, with 20 votes
- Previous CPAN version: 6.98 was 9 months before
- Author: SBECK
-
DateTime::TimeZone - Time zone object base class and factory
- Version: 2.67 on 2026-03-05, with 22 votes
- Previous CPAN version: 2.66 was 2 months, 25 days before
- Author: DROLSKY
-
Devel::Cover - Code coverage metrics for Perl
- Version: 1.52 on 2026-03-07, with 104 votes
- Previous CPAN version: 1.51 was 7 months, 11 days before
- Author: PJCJ
-
ExtUtils::MakeMaker - Create a module Makefile
- Version: 7.78 on 2026-03-03, with 64 votes
- Previous CPAN version: 7.77_03 was 1 day before
- Author: BINGOS
-
Mail::DMARC - Perl implementation of DMARC
- Version: 1.20260306 on 2026-03-06, with 37 votes
- Previous CPAN version: 1.20260301 was 5 days before
- Author: MSIMERSON
-
Module::Build::Tiny - A tiny replacement for Module::Build
- Version: 0.053 on 2026-03-03, with 16 votes
- Previous CPAN version: 0.052 was 9 months, 22 days before
- Author: LEONT
-
Number::Phone - base class for Number::Phone::* modules
- Version: 4.0010 on 2026-03-06, with 24 votes
- Previous CPAN version: 4.0009 was 2 months, 27 days before
- Author: DCANTRELL
-
PDL - Perl Data Language
- Version: 2.103 on 2026-03-03, with 101 votes
- Previous CPAN version: 2.102
- Author: ETJ
-
SPVM - The SPVM Language
- Version: 0.990141 on 2026-03-06, with 36 votes
- Previous CPAN version: 0.990140
- Author: KIMOTO
-
SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
- Version: 2.89 on 2026-03-05, with 18 votes
- Previous CPAN version: 2.88 was 9 days before
- Author: MANWAR
-
Sys::Virt - libvirt Perl API
- Version: v12.1.0 on 2026-03-03, with 17 votes
- Previous CPAN version: v12.0.0 was 1 month, 18 days before
- Author: DANBERR
-
Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
- Version: 0.68 on 2026-03-02, with 20 votes
- Previous CPAN version: 0.67
- Author: CHANSEN
-
X11::korgwm - a tiling window manager for X11
- Version: 6.0 on 2026-03-07, with 14 votes
- Previous CPAN version: 5.0 was 1 year, 1 month, 15 days before
- Author: ZHMYLOVE
-
Zonemaster::Engine - A tool to check the quality of a DNS zone
- Version: 8.001001 on 2026-03-04, with 35 votes
- Previous CPAN version: 8.001000 was 2 months, 16 days before
- Author: ZNMSTR
In zsh you can use CORRECT_IGNORE_FILE to exclude files from spelling
corrections (or autocorrect for commands). While handy, it is somewhat limited
in that it is global. Now, I wanted to ignore files only for git and not for
other commands, but I haven’t found a way to target git alone without making a
wrapper around git (which I don’t want to do).
So I wrote an autoloaded function that does this for me. The idea is rather
simple. In your .zshrc you set a zstyle that says which files should be
ignored, based on files (or directories) that exist in the current directory.
From this the function builds the CORRECT_IGNORE_FILE variable, or just unsets
it. The function is then hooked into the chpwd action. I went with three
default check types: directory, file, or mere existence (d, f, or e). File
wins, then directory, then existence.
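The dispatch logic can be sketched roughly like this (plain sh for illustration; the zstyle lookup is replaced by hard-coded example markers, and all marker names and patterns here are hypothetical):

```shell
# Decide the CORRECT_IGNORE_FILE value from markers in the given
# directory. Precedence per the post: file beats directory beats
# mere existence (f > d > e).
update_correct_ignore() {
  local dir=$1
  unset CORRECT_IGNORE_FILE
  if [ -f "$dir/.git/config" ]; then   # f: a specific file exists
    CORRECT_IGNORE_FILE='.git*'
  elif [ -d "$dir/.git" ]; then        # d: a directory exists
    CORRECT_IGNORE_FILE='.git*'
  elif [ -e "$dir/.hg" ]; then         # e: anything by that name exists
    CORRECT_IGNORE_FILE='.hg*'
  fi
}
```

In the real zsh setup this function would be registered via add-zsh-hook chpwd, with the marker-to-pattern table read from zstyle instead of being hard-coded.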
-
Amon2 - lightweight web application framework
- Version: 6.18 on 2026-02-28, with 27 votes
- Previous CPAN version: 6.17 was 1 day before
- Author: TOKUHIROM
-
App::DBBrowser - Browse SQLite/MySQL/PostgreSQL databases and their tables interactively.
- Version: 2.439 on 2026-02-23, with 18 votes
- Previous CPAN version: 2.438 was 1 month, 29 days before
- Author: KUERBIS
-
Beam::Wire - Lightweight Dependency Injection Container
- Version: 1.031 on 2026-02-25, with 19 votes
- Previous CPAN version: 1.030 was 20 days before
- Author: PREACTION
-
CPAN::Uploader - upload things to the CPAN
- Version: 0.103019 on 2026-02-23, with 25 votes
- Previous CPAN version: 0.103018 was 3 years, 1 month, 9 days before
- Author: RJBS
-
CPANSA::DB - the CPAN Security Advisory data as a Perl data structure, mostly for CPAN::Audit
- Version: 20260228.001 on 2026-02-28, with 25 votes
- Previous CPAN version: 20260225.001 was 2 days before
- Author: BRIANDFOY
-
DBD::mysql - A MySQL driver for the Perl5 Database Interface (DBI)
- Version: 4.055 on 2026-02-23, with 67 votes
- Previous CPAN version: 5.013 was 6 months, 19 days before
- Author: DVEEDEN
-
Google::Ads::GoogleAds::Client - Google Ads API Client Library for Perl
- Version: v31.0.0 on 2026-02-25, with 20 votes
- Previous CPAN version: v30.0.0 was 27 days before
- Author: DORASUN
-
LWP::Protocol::https - Provide https support for LWP::UserAgent
- Version: 6.15 on 2026-02-23, with 22 votes
- Previous CPAN version: 6.14 was 1 year, 11 months, 12 days before
- Author: OALDERS
-
Mail::DMARC - Perl implementation of DMARC
- Version: 1.20260226 on 2026-02-27, with 36 votes
- Previous CPAN version: 1.20250805 was 6 months, 21 days before
- Author: MSIMERSON
-
MetaCPAN::Client - A comprehensive, DWIM-featured client to the MetaCPAN API
- Version: 2.039000 on 2026-02-28, with 27 votes
- Previous CPAN version: 2.038000 was 29 days before
- Author: MICKEY
-
SPVM - The SPVM Language
- Version: 0.990138 on 2026-02-28, with 36 votes
- Previous CPAN version: 0.990137 was before
- Author: KIMOTO
-
SVG - Perl extension for generating Scalable Vector Graphics (SVG) documents.
- Version: 2.88 on 2026-02-23, with 18 votes
- Previous CPAN version: 2.87 was 3 years, 9 months, 3 days before
- Author: MANWAR
-
Test2::Harness - A new and improved test harness with better Test2 integration.
- Version: 1.000163 on 2026-02-24, with 28 votes
- Previous CPAN version: 1.000162 was 3 days before
- Author: EXODIST
-
Tickit - Terminal Interface Construction KIT
- Version: 0.75 on 2026-02-27, with 29 votes
- Previous CPAN version: 0.74 was 2 years, 5 months, 22 days before
- Author: PEVANS
-
TimeDate - Date and time formatting subroutines
- Version: 2.34 on 2026-02-28, with 28 votes
- Previous CPAN version: 2.34_01
- Author: ATOOMIC
-
Unicode::UTF8 - Encoding and decoding of UTF-8 encoding form
- Version: 0.66 on 2026-02-25, with 20 votes
- Previous CPAN version: 0.65 was 1 day before
- Author: CHANSEN


